Dataset schema: id (int64, 39 to 79M), url (string, length 31 to 227), text (string, length 6 to 334k), source (string, length 1 to 150, nullable), categories (list, 1 to 6 items), token_count (int64, 3 to 71.8k), subcategories (list, 0 to 30 items).
988,000
https://en.wikipedia.org/wiki/Hell-fire%20trigger
A hell-fire trigger is a device that allows a semi-automatic firearm to fire at an increased rate. The hell-fire clamps to the trigger guard behind the trigger and presses a "finger" against the back of the trigger to increase the force that returns the trigger to its forward position, effectively decreasing the time required for the trigger to reset, allowing for a faster follow-up shot. Internally, the firearm is not altered. As in all semi-automatic firearms, only one round is fired with every stroke of the trigger. This makes the "hell-fire trigger" avoid classification as a machine gun within the definitions used by United States federal law, as stated in an ATF private-letter ruling from 1990. However, as with all private-letter rulings, this determination on the U.S. legality of hell-fire triggers is limited to the facts regarding the specific device being examined. The 1990 opinion may be modified or revoked at any subsequent time by the Bureau of Alcohol, Tobacco, Firearms and Explosives. Furthermore, agency opinion is not always considered legally binding. During the Waco siege, David Koresh, leader of the Branch Davidians, reportedly told authorities that he utilized semi-automatic guns with the part installed. Another well-known case of its reported use is the 101 California Street shooting. See also Bump stock Binary trigger Forced reset trigger Trigger crank References Firearm components
Hell-fire trigger
[ "Technology" ]
283
[ "Firearm components", "Components" ]
988,114
https://en.wikipedia.org/wiki/Data%20logger
A data logger (also datalogger or data recorder) is an electronic device that records data over time or about location either with a built-in instrument or sensor or via external instruments and sensors. Increasingly, but not entirely, they are based on a digital processor (or computer), and called digital data loggers (DDL). They generally are small, battery-powered, portable, and equipped with a microprocessor, internal memory for data storage, and sensors. Some data loggers interface with a personal computer and use software to activate the data logger and view and analyze the collected data, while others have a local interface device (keypad, LCD) and can be used as a stand-alone device. Data loggers vary from general-purpose devices for various measurement applications to very specific devices for measuring in one environment or application type only. While it is common for general-purpose types to be programmable, many remain static machines with only a limited number or no changeable parameters. Electronic data loggers have replaced chart recorders in many applications. One primary benefit of using data loggers is their ability to automatically collect data on a 24-hour basis. Upon activation, data loggers are typically deployed and left unattended to measure and record information for the duration of the monitoring period. This allows for a comprehensive, accurate picture of the environmental conditions being monitored, such as air temperature and relative humidity. The cost of data loggers has been declining over the years as technology improves and costs are reduced. Simple single-channel data loggers can cost as little as $25, while more complicated loggers may cost hundreds or thousands of dollars. Data formats Standardization of protocols and data formats has been a problem but is now growing in the industry and XML, JSON, and YAML are increasingly being adopted for data exchange. The development of the Semantic Web and the Internet of Things is likely to accelerate this present trend. Instrumentation protocols Several protocols have been standardized including a smart protocol, SDI-12, that allows some instrumentation to be connected to a variety of data loggers. The use of this standard has not gained much acceptance outside the environmental industry. SDI-12 also supports multi-drop instruments. Some data logging companies support the MODBUS standard. This has been used traditionally in the industrial control area, and many industrial instruments support this communication standard. Another multi-drop protocol that is now starting to become more widely used is based upon CAN-Bus (ISO 11898). Some data loggers use a flexible scripting environment to adapt to various non-standard protocols. Data logging versus data acquisition The terms data logging and data acquisition are often used interchangeably. However, in a historical context, they are quite different. A data logger is a data acquisition system, but a data acquisition system is not necessarily a data logger. Data loggers typically have slower sample rates. A maximum sample rate of 1 Hz may be considered to be very fast for a data logger, yet very slow for a typical data acquisition system. Data loggers are implicitly stand-alone devices, while typical data acquisition systems must remain tethered to a computer to acquire data. This stand-alone aspect of data loggers implies onboard memory that is used to store acquired data. 
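To make the stand-alone behaviour described above concrete, the following minimal sketch (Python; the read_sensor() function, the 1 Hz rate and the JSON-lines file are illustrative assumptions, not details from the article) shows a logger that timestamps each reading and appends it to onboard nonvolatile storage:

```python
# Minimal sketch of a stand-alone data logger loop: sample at a slow, fixed
# rate, timestamp each reading from the real-time clock, and append records
# to nonvolatile storage as JSON lines. read_sensor() is a hypothetical
# placeholder for the real instrument or sensor interface.
import json
import random
import time
from datetime import datetime, timezone


def read_sensor() -> float:
    """Hypothetical stand-in for a real sensor driver (e.g. an SDI-12 or MODBUS instrument)."""
    return 20.0 + random.random()  # fake temperature reading


def log_forever(path: str = "datalog.jsonl", interval_s: float = 1.0) -> None:
    """Sample once per interval, timestamp each reading, and append it to storage."""
    with open(path, "a", buffering=1) as f:  # line-buffered so every record is flushed promptly
        while True:
            record = {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "value": read_sensor(),
            }
            f.write(json.dumps(record) + "\n")
            time.sleep(interval_s)


if __name__ == "__main__":
    log_forever()
```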
Sometimes this memory is very large to accommodate many days, or even months, of unattended recording. This memory may be battery-backed static random access memory, flash memory, or EEPROM. Earlier data loggers used magnetic tape, punched paper tape, or directly viewable records such as "strip chart recorders". Given the extended recording times of data loggers, they typically feature a mechanism to record the date and time in a timestamp to ensure that each recorded data value is associated with a date and time of acquisition to produce a sequence of events. As such, data loggers typically employ built-in real-time clocks whose published drift can be an important consideration when choosing between data loggers. Data loggers range from simple single-channel input to complex multi-channel instruments. Typically, the simpler the device the less programming flexibility. Some more sophisticated instruments allow for cross-channel computations and alarms based on predetermined conditions. The newest data loggers can serve web pages, allowing numerous people to monitor a system remotely. The unattended and remote nature of many data logger applications implies the need for some applications to operate from a DC power source, such as a battery. Solar power may be used to supplement these power sources. These constraints have generally led to ensuring that the devices they market are extremely power efficient relative to computers. In many cases, they are required to operate in harsh environmental conditions where computers will not function reliably. This unattended nature also dictates that data loggers must be extremely reliable. Since they may operate for long periods nonstop with little or no human supervision and may be installed in harsh or remote locations, it is imperative that so long as they have power, they will not fail to log data for any reason. Manufacturers go to great lengths to ensure that the devices can be depended on in these applications. As such data loggers are almost completely immune to the problems that might affect a general-purpose computer in the same application, such as program crashes and the instability of some operating systems. Applications Applications of data logging include: Unattended weather station recording (such as wind speed / direction, temperature, relative humidity, solar radiation). Unattended hydrographic recording (such as water level, water depth, water flow, water pH, water conductivity). Unattended soil moisture level recording. Unattended gas pressure recording. Offshore buoys for recording a variety of environmental conditions. Road traffic counting. Measure temperatures (humidity, etc.) of perishables during shipments: Cold chain. Measure variations in light intensity. Measuring temperature of pharmaceutical products, medicines and vaccines during storage Measuring temperature and humidity of perishable products during transportation to ensure cold chain is maintained Process monitoring for maintenance and troubleshooting applications. Process monitoring to verify warranty conditions Wildlife research with pop-up archival tags Measure vibration and handling shock (drop height) environment of distribution packaging. Tank level monitoring. Deformation monitoring of any object with geodetic or geotechnical sensors controlled by an automatic deformation monitoring system. Environmental monitoring. Vehicle testing (including crash testing) Motor racing Monitoring of relay status in railway signaling. 
For science education enabling 'measurement', 'scientific investigation' and an appreciation of 'change' Record trend data at regular intervals in veterinary vital signs monitoring. Load profile recording for energy consumption management. Temperature, humidity and power use for heating and air conditioning efficiency studies. Water level monitoring for groundwater studies. Digital electronic bus sniffer for debug and validation Examples Black-box (stimulus/response) loggers: A flight data recorder (FDR) is a piece of recording equipment used to collect specific aircraft performance data. The term may also be used, albeit less accurately, to describe the cockpit voice recorder (CVR), another type of data recording device found on board aircraft. An event data recorder (EDR) is a device installed by the manufacturer in some automobiles which collects and stores various data during the time-frame immediately before and after a crash. A voyage data recorder (VDR) is a data recording system designed to collect data from various sensors on board a ship. A train event recorder is a device that records data about the operation of train controls and performance in response to those controls and other train control systems. An accident data recorder (ADR) is a device fitted to most kinds of land vehicles that is triggered by accidents or incidents and records the relevant data. In automobiles, all diagnostic trouble codes (DTCs) are logged in engine control units (ECUs) so that at the time of service of a vehicle, a service engineer will read all the DTCs using Tech-2 or similar tools connected to the on-board diagnostics port, and will know what problems occurred in the vehicle. Sometimes a small OBD data logger is plugged into the same port to continuously record vehicle data. In embedded systems and digital electronics design, specialized high-speed digital data loggers help overcome the limitations of more traditional instruments such as the oscilloscope and the logic analyzer. The main advantage of a data logger is its ability to record very long traces, which proves very useful when trying to correct functional bugs that happen once in a while. In the racing industry, data loggers are used to record data such as braking points, lap/sector timing, and track maps, as well as any on-board vehicle sensors. Health data loggers: The growing, preparation, storage and transportation of food. Data loggers used here are generally small devices dedicated to data storage. A Holter monitor is a portable device for continuously monitoring various electrical activity of the cardiovascular system for at least 24 hours. Electronic health record loggers. Other general data acquisition loggers: A (scientific) experimental testing data acquisition tool. Ultra Wideband Data Recorder, high-speed data recording up to 2 Giga Samples per second. Future directions Data loggers are changing more rapidly now than ever before. The original model of a stand-alone data logger has changed to that of a device that collects data but also has access to wireless communications for alerting on events, automatic reporting of data, and remote control. Data loggers are beginning to serve web pages for current readings, e-mail their alarms, and FTP their daily results into databases or direct to the users. Very recently, there has been a trend to move away from proprietary products with commercial software to open-source software and hardware devices.
The Raspberry Pi single-board computer is, among others, a popular platform for such devices: it can host real-time or preemptive-kernel Linux operating systems, its digital interfaces such as I2C, SPI, and UART allow a digital sensor to be connected directly to the computer, and it supports a virtually unlimited number of configurations for showing measurements in real time over the internet, processing data, and plotting charts and diagrams. See also Black box Bus analyzer Computer data logging: logging APIs, server logs & syslog, web logging & web counters Continuous emissions monitoring system Runtime intelligence Sequence of events recorder SensorML Shock and vibration data logger Temperature data logger References Recording devices Onboard computers Measuring instruments
Data logger
[ "Technology", "Engineering" ]
2,121
[ "Recording devices", "Measuring instruments" ]
988,191
https://en.wikipedia.org/wiki/Polygon%20mesh
In 3D computer graphics and solid modeling, a polygon mesh is a collection of vertices, edges and faces that defines the shape of a polyhedral object's surface. It simplifies rendering, as in a wire-frame model. The faces usually consist of triangles (triangle mesh), quadrilaterals (quads), or other simple convex polygons (n-gons). A polygonal mesh may also be more generally composed of concave polygons, or even polygons with holes. The study of polygon meshes is a large sub-field of computer graphics (specifically 3D computer graphics) and geometric modeling. Different representations of polygon meshes are used for different applications and goals. The variety of operations performed on meshes may include: Boolean logic (Constructive solid geometry), smoothing, simplification, and many others. Algorithms also exist for ray tracing, collision detection, and rigid-body dynamics with polygon meshes. If the mesh's edges are rendered instead of the faces, then the model becomes a wireframe model. Several methods exist for mesh generation, including the marching cubes algorithm. Volumetric meshes are distinct from polygon meshes in that they explicitly represent both the surface and interior region of a structure, while polygon meshes only explicitly represent the surface (the volume is implicit). Elements Objects created with polygon meshes must store different types of elements. These include vertices, edges, faces, polygons and surfaces. In many applications, only vertices, edges and either faces or polygons are stored. A renderer may support only 3-sided faces, so polygons must be constructed of many of these. However, many renderers either support quads and higher-sided polygons, or are able to convert polygons to triangles on the fly, making it unnecessary to store a mesh in a triangulated form. Representations Polygon meshes may be represented in a variety of ways, using different methods to store the vertex, edge and face data. These include vertex-vertex meshes, face-vertex meshes, winged-edge meshes and render dynamic meshes, described below. Each of these representations has particular advantages and drawbacks, further discussed in Smith (2006). The choice of the data structure is governed by the application, the performance required, size of the data, and the operations to be performed. For example, it is easier to deal with triangles than general polygons, especially in computational geometry. For certain operations it is necessary to have fast access to topological information such as edges or neighboring faces; this requires more complex structures such as the winged-edge representation. For hardware rendering, compact, simple structures are needed; thus the corner-table (triangle fan) is commonly incorporated into low-level rendering APIs such as DirectX and OpenGL. Vertex-vertex meshes Vertex-vertex meshes represent an object as a set of vertices connected to other vertices. This is the simplest representation, but not widely used since the face and edge information is implicit. Thus, it is necessary to traverse the data in order to generate a list of faces for rendering. In addition, operations on edges and faces are not easily accomplished. However, VV meshes benefit from small storage space and efficient morphing of shape. The above figure shows a four-sided box as represented by a VV mesh. Each vertex indexes its neighboring vertices. The last two vertices, 8 and 9 at the top and bottom center of the "box-cylinder", have four connected vertices rather than five. A general system must be able to handle an arbitrary number of vertices connected to any given vertex.
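As an illustration of the vertex-vertex idea just described, the sketch below (Python; a tetrahedron is used as the example object, an assumption chosen for brevity rather than the article's box-cylinder figure) stores only vertex positions and neighbouring-vertex indices, and shows that edges, and likewise faces, must be recovered by traversal:

```python
# Vertex-vertex (VV) representation sketch: each vertex stores its position and
# the indices of the vertices it is connected to. Nothing else is stored.
positions = [
    (0.0, 0.0, 0.0),  # v0
    (1.0, 0.0, 0.0),  # v1
    (0.0, 1.0, 0.0),  # v2
    (0.0, 0.0, 1.0),  # v3
]

# In a tetrahedron every vertex is connected to every other vertex.
neighbours = {
    0: [1, 2, 3],
    1: [0, 2, 3],
    2: [0, 1, 3],
    3: [0, 1, 2],
}

# Edges are implicit: they have to be recovered by walking the adjacency lists.
edges = {tuple(sorted((v, n))) for v, ns in neighbours.items() for n in ns}
print(sorted(edges))  # the 6 edges of the tetrahedron

# Faces are implicit too; producing a face list for rendering would need a
# further traversal (e.g. finding length-3 cycles), which is why VV meshes are
# rarely used when rendering or face queries matter.
```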
For a complete description of VV meshes see Smith (2006). Face-vertex meshes Face-vertex meshes represent an object as a set of faces and a set of vertices. This is the most widely used mesh representation, being the input typically accepted by modern graphics hardware. Face-vertex meshes improve on VV-mesh for modeling in that they allow explicit lookup of the vertices of a face, and the faces surrounding a vertex. The above figure shows the "box-cylinder" example as an FV mesh. Vertex v5 is highlighted to show the faces that surround it. Notice that, in this example, every face is required to have exactly 3 vertices. However, this does not mean every vertex has the same number of surrounding faces. For rendering, the face list is usually transmitted to the GPU as a set of indices to vertices, and the vertices are sent as position/color/normal structures (in the figure, only position is given). This has the benefit that changes in shape, but not geometry, can be dynamically updated by simply resending the vertex data without updating the face connectivity. Modeling requires easy traversal of all structures. With face-vertex meshes it is easy to find the vertices of a face. Also, the vertex list contains a list of faces connected to each vertex. Unlike VV meshes, both faces and vertices are explicit, so locating neighboring faces and vertices is constant time. However, the edges are implicit, so a search is still needed to find all the faces surrounding a given face. Other dynamic operations, such as splitting or merging a face, are also difficult with face-vertex meshes. Winged-edge meshes Introduced by Baumgart in 1975, winged-edge meshes explicitly represent the vertices, faces, and edges of a mesh. This representation is widely used in modeling programs to provide the greatest flexibility in dynamically changing the mesh geometry, because split and merge operations can be done quickly. Their primary drawback is large storage requirements and increased complexity due to maintaining many indices. A good discussion of implementation issues of Winged-edge meshes may be found in the book Graphics Gems II. Winged-edge meshes address the issue of traversing from edge to edge, and providing an ordered set of faces around an edge. For any given edge, the number of outgoing edges may be arbitrary. To simplify this, winged-edge meshes provide only four, the nearest clockwise and counter-clockwise edges at each end. The other edges may be traversed incrementally. The information for each edge therefore resembles a butterfly, hence "winged-edge" meshes. The above figure shows the "box-cylinder" as a winged-edge mesh. The total data for an edge consists of 2 vertices (endpoints), 2 faces (on each side), and 4 edges (winged-edge). Rendering of winged-edge meshes for graphics hardware requires generating a Face index list. This is usually done only when the geometry changes. Winged-edge meshes are ideally suited for dynamic geometry, such as subdivision surfaces and interactive modeling, since changes to the mesh can occur locally. Traversal across the mesh, as might be needed for collision detection, can be accomplished efficiently. See Baumgart (1975) for more details. Render dynamic meshes Winged-edge meshes are not the only representation which allows for dynamic changes to geometry. 
A new representation which combines winged-edge meshes and face-vertex meshes is the render dynamic mesh, which explicitly stores both the vertices of a face and the faces of a vertex (like FV meshes), and the faces and vertices of an edge (like winged-edge meshes). Render dynamic meshes require slightly less storage space than standard winged-edge meshes, and can be directly rendered by graphics hardware since the face list contains an index of vertices. In addition, traversal from vertex to face is explicit (constant time), as is from face to vertex. RD meshes do not require the four outgoing edges since these can be found by traversing from edge to face, then face to neighboring edge. RD meshes benefit from the features of winged-edge meshes by allowing for geometry to be dynamically updated. See Tobler & Maierhofer (WSCG 2006) for more details. Summary of mesh representation In the above table, explicit indicates that the operation can be performed in constant time, as the data is directly stored; list compare indicates that a list comparison between two lists must be performed to accomplish the operation; and pair search indicates a search must be done on two indices. The notation avg(V,V) means the average number of vertices connected to a given vertex; avg(E,V) means the average number of edges connected to a given vertex, and avg(F,V) is the average number of faces connected to a given vertex. The notation "V → f1, f2, f3, ... → v1, v2, v3, ..." describes that a traversal across multiple elements is required to perform the operation. For example, to get "all vertices around a given vertex V" using the face-vertex mesh, it is necessary to first find the faces around the given vertex V using the vertex list. Then, from those faces, use the face list to find the vertices around them. Winged-edge meshes explicitly store nearly all information, and other operations always traverse to the edge first to get additional info. Vertex-vertex meshes are the only representation that explicitly stores the neighboring vertices of a given vertex. As the mesh representations become more complex (from left to right in the summary), the amount of information explicitly stored increases. This gives more direct, constant time, access to traversal and topology of various elements but at the cost of increased overhead and space in maintaining indices properly. Figure 7 shows the connectivity information for each of the four techniques described in this article. Other representations also exist, such as half-edge and corner tables. These are all variants of how vertices, faces and edges index one another. As a general rule, face-vertex meshes are used whenever an object must be rendered on graphics hardware that does not change geometry (connectivity), but may deform or morph shape (vertex positions) such as real-time rendering of static or morphing objects. Winged-edge or render dynamic meshes are used when the geometry changes, such as in interactive modeling packages or for computing subdivision surfaces. Vertex-vertex meshes are ideal for efficient, complex changes in geometry or topology so long as hardware rendering is not of concern. Other representations File formats There exist many different file formats for storing polygon mesh data. Each format is most effective when used for the purpose intended by its creator. Popular formats include .fbx, .dae, .obj, and .stl.
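A small sketch of how a face-vertex mesh maps onto one of the file formats just mentioned, the Wavefront .obj format (Python; the tetrahedron data and the output filename are illustrative assumptions):

```python
# Face-vertex (FV) representation: a list of vertex positions plus a list of
# faces, each face being a triple of vertex indices. Written out as a minimal
# Wavefront .obj file ("v x y z" lines, then "f i j k" lines, 1-based indices).
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0),
]
faces = [  # 0-based vertex indices, one triangle per face
    (0, 2, 1),
    (0, 1, 3),
    (0, 3, 2),
    (1, 2, 3),
]


def write_obj(path, vertices, faces):
    """Write a face-vertex mesh as a minimal Wavefront .obj file."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in faces:
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")  # .obj indices start at 1


write_obj("tetrahedron.obj", vertices, faces)
```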
A table of some more of these formats is presented below: See also Boundary representation Euler operator Hypergraph Manifold (a mesh can be manifold or non-manifold) Mesh subdivision (a technique for adding detail to a polygon mesh) Polygon modeling Polygonizer Simplex T-spline Triangulation (geometry) Wire-frame model References External links OpenMesh open source half-edge mesh representation. Polygon Mesh Processing Library 3D computer graphics Virtual reality Computer graphics data structures Mesh generation Geometry processing
Polygon mesh
[ "Physics" ]
2,251
[ "Tessellation", "Mesh generation", "Symmetry" ]
988,208
https://en.wikipedia.org/wiki/Eskimo%20kiss
An Eskimo kiss, nose kiss, or nose rub is a gesture of affection where one rubs the tip of one's nose against another person's face. In Inuit culture, the gesture is known as a kunik, and consists of pressing or rubbing the tip of one's nose against another's cheek. In non-Inuit English-speaking culture, two people Eskimo kiss by rubbing the tips of their noses together. Nose-to-cheek kisses are found in other cultures as well. History When early Western explorers of the Arctic first witnessed Inuit nose rubbing as a greeting behavior, they dubbed it Eskimo kissing. The practice was also prevalent in nearby non-Inuit cultures. In Inuit culture Among the Inuit, kunik is a form of expressing affection, usually between family members and loved ones or to young children, that involves pressing the nose and upper lip against the skin (commonly of the cheeks or forehead) and breathing in, causing the loved one's skin or hair to be suctioned against the nose and upper lip. A common misconception is that the practice arose so that Inuit could kiss without their mouths freezing together. Rather, it is a non-erotic but intimate greeting used by people who, when they meet outside, often have little except their nose and eyes exposed. The greeting was described in reports of Kerlungner and Wearner, part of a group of Alaskan Native people touring the United States with entrepreneur Miner W. Bruce in the 1890s: "Mr. Bruce yesterday instructed Kerlungner and Wearner that in this country they should not rub noses, and to close the lesson the two young women kissed each other in the new style for a beginning, both seeming to fear that they looked silly as they did it." In other cultures Other peoples use similar greeting practices, notably the Māori of New Zealand and Hawaiians, who practice the hongi and honi greetings, respectively. Mongolian nomads of the Gobi Desert have a similar practice, as do certain Southeast Asian cultures, such as the Bengalis, Khmer people, Lao people, Thai people, Vietnamese people, Timor, Savu people, Sumba people and Iban people. Nose kissing is also employed as a traditional greeting by Arab tribesmen when greeting members of the same tribe. See also Cheek kissing Notes References Kissing Inuit culture Gestures
Eskimo kiss
[ "Biology" ]
480
[ "Behavior", "Gestures", "Human behavior" ]
158,854
https://en.wikipedia.org/wiki/Dir%20%28command%29
In computing, dir (directory) is a command in various computer operating systems used for computer file and directory listing. It is one of the basic commands to help navigate the file system. The command is usually implemented as an internal command in the command-line interpreter (shell). On some systems, a more graphical representation of the directory structure can be displayed using the tree command. Implementations The command is available in the command-line interface (CLI) of the operating systems Digital Research CP/M, MP/M, Intel ISIS-II, iRMX 86, Cromemco CDOS, MetaComCo TRIPOS, DOS, IBM/Toshiba 4690 OS, IBM OS/2, Microsoft Windows, Singularity, Datalight ROM-DOS, ReactOS, GNU, AROS and in the DCL command-line interface used on DEC VMS, RT-11 and RSX-11. It is also supplied with OS/8 as a CUSP (Commonly-Used System Program). The dir command is supported by Tim Paterson's SCP 86-DOS. On MS-DOS, the command is available in versions 1 and later. It is also available in the open source MS-DOS emulator DOSBox. MS-DOS prompts "Abort, Retry, Fail?" after being commanded to list a directory with no diskette in the drive. The numerical computing environments MATLAB and GNU Octave include a dir function with similar functionality. Examples DOS, Windows, ReactOS List all files and directories in the current working directory. List any text files and batch files (filename extension ".txt" or ".bat"). Recursively list all files and directories in the specified directory and any subdirectories, in wide format, pausing after each screen of output. The directory name is enclosed in double-quotes to prevent it from being interpreted as two separate command-line options because it contains a whitespace character. List any NTFS junction points: Unices dir is not a Unix command; Unix has the analogous ls command instead. The GNU operating system, however, has a dir command that "is equivalent to ls -C -b; that is, by default files are listed in columns, sorted vertically, and special characters are represented by backslash escape sequences". Actually, for compatibility reasons, ls produces device-dependent output. The dir instruction, unlike ls -Cb, produces device-independent output. See also Directory (OpenVMS command) List of DOS commands ls (corresponding command for *nix systems) References Further reading External links dir | Microsoft Docs Open source DIR implementation that comes with MS-DOS v2.0 Dir command syntax and examples CP/M commands Internal DOS commands Microcomputer software Microsoft free software MSX-DOS commands OS/2 commands ReactOS commands Windows commands Windows administration
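The example command lines themselves did not survive extraction in the Examples section above; the sketch below runs plausible reconstructions of them from Python on a Windows host. The switch spellings (/s, /w, /p, /aL) and the sample path are assumptions based on common DOS/Windows usage, not text taken from the article:

```python
# Hypothetical reconstruction of the dir invocations described in the Examples
# section, executed from Python on Windows. Assumed switches: /s = recurse into
# subdirectories, /w = wide format, /p = pause per screen, /aL = reparse points
# (NTFS junctions). The quoted path is only a sample with a space in its name.
import subprocess

examples = [
    'dir',                               # all files and directories in the current directory
    'dir *.txt *.bat',                   # only text files and batch files
    'dir "C:\\Program Files" /s /w /p',  # recursive, wide format, paged output (interactive)
    'dir /aL',                           # NTFS junction (reparse) points
]

for command in examples:
    # dir is a cmd.exe built-in rather than a stand-alone executable, so a shell is required.
    subprocess.run(command, shell=True, check=False)
```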
Dir (command)
[ "Technology" ]
595
[ "Windows commands", "Computing commands", "CP/M commands", "OS/2 commands", "ReactOS commands", "MSX-DOS commands" ]
158,859
https://en.wikipedia.org/wiki/Plug%20and%20play
In computing, a plug and play (PnP) device or computer bus is one with a specification that facilitates the recognition of a hardware component in a system without the need for physical device configuration or user intervention in resolving resource conflicts. The term "plug and play" has since been expanded to a wide variety of applications to which the same lack of user setup applies. Expansion devices are controlled and exchange data with the host system through defined memory or I/O space port addresses, direct memory access channels, interrupt request lines and other mechanisms, which must be uniquely associated with a particular device to operate. Some computers provided unique combinations of these resources to each slot of a motherboard or backplane. Other designs provided all resources to all slots, and each peripheral device had its own address decoding for the registers or memory blocks it needed to communicate with the host system. Since fixed assignments made expansion of a system difficult, devices used several manual methods for assigning addresses and other resources, such as hard-wired jumpers, pins that could be connected with wire or removable straps, or switches that could be set for particular addresses. As microprocessors made mass-market computers affordable, software configuration of I/O devices was advantageous to allow installation by non-specialist users. Early systems for software configuration of devices included the MSX standard, NuBus, Amiga Autoconfig, and IBM Microchannel. Initially all expansion cards for the IBM PC required physical selection of I/O configuration on the board with jumper straps or DIP switches, but increasingly ISA bus devices were arranged for software configuration. By 1995, Microsoft Windows included a comprehensive method of enumerating hardware at boot time and allocating resources, which was called the "Plug and Play" standard. Plug and play devices can have resources allocated at boot-time only, or may be hotplug systems such as USB and IEEE 1394 (FireWire). History of device configuration Some early microcomputer peripheral devices required the end user physically to cut some wires and solder together others in order to make configuration changes; such changes were intended to be largely permanent for the life of the hardware. As computers became more accessible to the general public, the need developed for more frequent changes to be made by computer users unskilled with using soldering irons. Rather than cutting and soldering connections, configuration was accomplished by jumpers or DIP switches. Later on this configuration process was automated: Plug and Play. MSX The MSX system, released in 1983, was designed to be plug and play from the ground up, and achieved this by a system of slots and subslots, where each had its own virtual address space, thus eliminating device addressing conflicts in its very source. No jumpers or any manual configuration was required, and the independent address space for each slot allowed very cheap and commonplace chips to be used, alongside cheap glue logic. On the software side, the drivers and extensions were supplied in the card's own ROM, thus requiring no disks or any kind of user intervention to configure the software. The ROM extensions abstracted any hardware differences and offered standard APIs as specified by ASCII Corporation. 
NuBus In 1984, the NuBus architecture was developed by the Massachusetts Institute of Technology (MIT) as a platform agnostic peripheral interface that fully automated device configuration. The specification was sufficiently intelligent that it could work with both big endian and little endian computer platforms that had previously been mutually incompatible. However, this agnostic approach increased interfacing complexity and required support chips on every device which in the 1980s was expensive to do, and apart from its use in Apple Macintoshes and NeXT machines, the technology was not widely adopted. Amiga Autoconfig and Zorro bus In 1984, Commodore developed the Autoconfig protocol and the Zorro expansion bus for its Amiga line of expandable computers. The first public appearance was in the CES computer show at Las Vegas in 1985, with the so-called "Lorraine" prototype. Like NuBus, Zorro devices had absolutely no jumpers or DIP switches. Configuration information was stored on a read-only device on each peripheral, and at boot time the host system allocated the requested resources to the installed card. The Zorro architecture did not spread to general computing use outside of the Amiga product line, but was eventually upgraded as Zorro II and Zorro III for the later iteration of Amiga computers. Micro-Channel Architecture In 1987, IBM released an update to the IBM PC known as the Personal System/2 line of computers using the Micro Channel Architecture. The PS/2 was capable of totally automatic self-configuration. Every piece of expansion hardware was issued with a floppy disk containing a special file used to auto-configure the hardware to work with the computer. The user would install the device, turn on the computer, load the configuration information from the disk, and the hardware automatically assigned interrupts, DMA, and other needed settings. However, the disks posed a problem if they were damaged or lost, as the only options at the time to obtain replacements were via postal mail or IBM's dial-up BBS service. Without the disks, any new hardware would be completely useless and the computer would occasionally not boot at all until the unconfigured device was removed. Micro Channel did not gain widespread support, because IBM wanted to exclude clone manufacturers from this next-generation computing platform. Anyone developing for MCA had to sign non-disclosure agreements and pay royalties to IBM for each device sold, putting a price premium on MCA devices. End-users and clone manufacturers revolted against IBM and developed their own open standards bus, known as EISA. Consequently, MCA usage languished except in IBM's mainframes. ISA and PCI self-configuration In time, many Industry Standard Architecture (ISA) cards incorporated, through proprietary and varied techniques, hardware to self-configure or to provide for software configuration; often, the card came with a configuration program on disk that could automatically set the software-configurable (but not itself self-configuring) hardware. Some cards had both jumpers and software-configuration, with some settings controlled by each; this compromise reduced the number of jumpers that had to be set, while avoiding great expense for certain settings, e.g. nonvolatile registers for a base address setting. The problems of required jumpers continued on, but slowly diminished as more and more devices, both ISA and other types, included extra self-configuration hardware. 
However, these efforts still did not solve the problem of making sure the end-user has the appropriate software driver for the hardware. ISA PnP or (legacy) Plug & Play ISA was a plug-and-play system that used a combination of modifications to hardware, the system BIOS, and operating system software to automatically manage resource allocations. It was superseded by the PCI bus during the mid-1990s. PCI plug and play (autoconfiguration) was based on the PCI BIOS Specification during the 1990s; that specification was superseded by ACPI in the 2000s. Legacy Plug and Play In 1995, Microsoft released Windows 95, which tried to automate device detection and configuration as much as possible, but could still fall back to manual settings if necessary. During the initial install process of Windows 95, it would attempt to automatically detect all devices installed in the system. Since full auto-detection of everything was a new process without full industry support, the detection process constantly wrote its progress to a tracking log file. In the event that device probing would fail and the system would freeze, the end-user could reboot the computer, restart the detection process, and the installer would use the tracking log to skip past the point that caused the previous freeze. At the time, there could be a mix of devices in a system, some capable of automatic configuration, and some still using fully manual settings via jumpers and DIP switches. The old world of DOS still lurked underneath Windows 95, and systems could be configured to load devices in three different ways: through Windows 95 Device Manager drivers only; using DOS drivers loaded in the CONFIG.SYS and AUTOEXEC.BAT configuration files; or using a combination of DOS drivers and Windows 95 Device Manager drivers. Microsoft could not assert full control over all device settings, so configuration files could include a mix of driver entries inserted by the Windows 95 automatic configuration process, and could also include driver entries inserted or modified manually by the computer users themselves. The Windows 95 Device Manager also could offer users a choice of several semi-automatic configurations to try to free up resources for devices that still needed manual configuration. Also, although some later ISA devices were capable of automatic configuration, it was common for PC ISA expansion cards to limit themselves to a very small number of choices for interrupt request lines. For example, a network interface might limit itself to only interrupts 3, 7, and 10, while a sound card might limit itself to interrupts 5, 7, and 12. This results in few configuration choices if some of those interrupts are already used by some other device. The hardware of PC computers additionally limited device expansion options because interrupts could not be shared, and some multifunction expansion cards would use multiple interrupts for different card functions, such as a dual-port serial card requiring a separate interrupt for each serial port. Because of this complex operating environment, the autodetection process sometimes produced incorrect results, especially in systems with large numbers of expansion devices. This led to device conflicts within Windows 95, resulting in devices which were supposed to be fully self-configuring failing to work. The unreliability of the device installation process led to Plug and Play being sometimes referred to as Plug and Pray.
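The interrupt-conflict arithmetic described above can be sketched directly from the figures given in the text (Python; the set of already-occupied interrupts is an invented example, not taken from the article):

```python
# Sketch of the resource-conflict problem: two ISA cards with the interrupt
# choices quoted above (network: 3, 7, 10; sound: 5, 7, 12) on a machine where
# some IRQs are already claimed. Interrupts cannot be shared, so a workable
# configuration must give each card a distinct, free IRQ.
from itertools import product

network_choices = {3, 7, 10}
sound_choices = {5, 7, 12}
already_used = {3, 5}   # hypothetical: e.g. claimed by two serial ports

valid = [
    (net, snd)
    for net, snd in product(network_choices - already_used,
                            sound_choices - already_used)
    if net != snd       # no sharing of a single interrupt line
]
print(valid)            # only a handful of workable combinations remain
```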
Until approximately 2000, PC computers could still be purchased with a mix of ISA and PCI slots, so it was still possible that manual ISA device configuration might be necessary. But with successive releases of new operating systems like Windows 2000 and Windows XP, Microsoft had sufficient clout to say that drivers would no longer be provided for older devices that did not support auto-detection. In some cases, the user was forced to purchase new expansion devices or a whole new system to support the next operating system release. Current plug and play interfaces Several completely automated computer interfaces are currently used, each of which requires no device configuration or other action on the part of the computer user, apart from software installation, for the self-configuring devices. These interfaces include: IEEE 1394 (FireWire) PCI, Mini PCI PCI Express, Mini PCI Express, Thunderbolt PCMCIA, PC Card, ExpressCard SATA, Serial Attached SCSI USB DVI, HDMI For most of these interfaces, very little technical information is available to the end user about the performance of the interface. Although both FireWire and USB have bandwidth that must be shared by all devices, most modern operating systems are unable to monitor and report the amount of bandwidth being used or available, or to identify which devices are currently using the interface. See also Convention over configuration (the PnP's principle) Autoconfig (Amiga) Hot swapping PCI configuration space References External links Plug and play in Windows 2000 on ZDNet https://community.rapid7.com/docs/DOC-2150 Computer peripherals Motherboard
Plug and play
[ "Technology" ]
2,327
[ "Computer peripherals", "Components" ]
158,951
https://en.wikipedia.org/wiki/New%20Urbanism
New Urbanism is an urban design movement that promotes environmentally friendly habits by creating walkable neighbourhoods containing a wide range of housing and job types. It arose in the United States in the early 1980s, and has gradually influenced many aspects of real estate development, urban planning, and municipal land-use strategies. New Urbanism attempts to address the ills associated with urban sprawl and post-WW II suburban development. New Urbanism is strongly influenced by urban design practices that were prominent until the rise of the automobile prior to World War II; it encompasses basic principles such as traditional neighborhood development (TND) and transit-oriented development (TOD). These concrete principles emerge from two organizing concepts or goals: building a sense of community and the development of ecological practices. New Urbanists support regional planning for open space; context-appropriate architecture and planning; adequate provision of infrastructure such as sporting facilities, libraries and community centres; and the balanced development of jobs and housing. They believe their strategies can reduce traffic congestion by encouraging the population to ride bikes, walk, or take the train. They also hope to increase the supply of affordable housing and rein in suburban sprawl. The Charter of the New Urbanism also covers issues such as historic preservation, safe streets, green building, and the redevelopment of brownfield land. The ten Principles of Intelligent Urbanism also phrase guidelines for New Urbanist approaches. Architecturally, New Urbanist developments are often accompanied by New Classical, postmodern, or vernacular styles, although that is not always the case. The movement's principles are reflected in the field of Complementary architecture. Background New Urbanism began to solidify in the 1970s and 80s with the urban visions and theoretical models for the reconstruction of the "European" city proposed by architect Léon Krier, and the pattern language theories of Christopher Alexander. The term "new urbanism" itself started being used in this context in the mid-1980s, but it wasn't until the early 1990s that it was commonly written as a proper noun capitalized. In 1991, the Local Government Commission, a private nonprofit group in Sacramento, California, invited architects Peter Calthorpe, Michael Corbett, Andrés Duany, Elizabeth Moule, Elizabeth Plater-Zyberk, Stefanos Polyzoides, and Daniel Solomon to develop a set of community principles for land use planning. Named the Ahwahnee Principles (after Yosemite National Park's Ahwahnee Hotel), the commission presented the principles to about one hundred government officials in the fall of 1991, at its first Yosemite Conference for Local Elected Officials. In 2009, co-founders Elizabeth Moule, Hank Dittmar, and Stefanos Polyzoides authored the Canons of Sustainable Architecture and Urbanism to clarify and detail the relationship between New Urbanism and sustainability. The Canons are "a set of operating principles for human settlement that reestablish the relationship between the art of building, the making of community, and the conservation of our natural world". They promote the use of passive heating and cooling solutions, the use of locally obtained materials, and in general, a "culture of permanence".
Defining elements Andrés Duany and Elizabeth Plater-Zyberk, two of the founders of the Congress for the New Urbanism, observed mixed-use streetscapes with corner shops, front porches, and a diversity of well-crafted housing while living in one of the Victorian neighborhoods of New Haven, Connecticut. They and their colleagues observed patterns including the following: The neighborhood has a discernible center. This is often a square or a green and sometimes a busy or memorable street corner. A transit stop would be located at this center. Most of the dwellings are within a five-minute walk of the center, an average of roughly . There are a variety of dwelling types (usually houses, rowhouses, and apartments) so that younger and older people, singles and families, the poor and the wealthy may find places to live. At the edge of the neighborhood, there are shops and offices of sufficiently varied types to supply the weekly needs of a household. A small ancillary building or garage apartment is permitted within the backyard of each house. It may be used as a rental unit or place to work (for example, an office or craft workshop). An elementary school is close enough so that most children can walk from their home. There are small playgrounds accessible to every dwelling, not more than a tenth of a mile away. Streets within the neighborhood form a connected network, which disperses traffic by providing a variety of pedestrian and vehicular routes to any destination. The streets are relatively narrow and shaded by rows of trees. This slows traffic, creating an environment suitable for pedestrians and bicycles. Buildings in the neighborhood center are placed close to the street, creating a well-defined outdoor room. Parking lots and garage doors rarely front the street. Parking is relegated to the rear of buildings, usually accessed by alleys. Certain prominent sites at the termination of street vistas or in the neighborhood center are reserved for civic buildings. These provide sites for community meetings, education, and religious or cultural activities. Terminology Several terms are viewed either as synonymous, included in, or overlapping with the New Urbanism. The terms Neotraditional Development or Traditional Neighborhood Development are often associated with the New Urbanism. These terms generally refer to complete New Towns or new neighborhoods, often built in traditional architectural styles, as opposed to smaller infill and redevelopment projects. The term Traditional Urbanism has also been used to describe the New Urbanism by those who object to the "new" moniker. The term "Walkable Urbanism" was proposed as an alternative term by developer and professor Christopher Leinberger. Many debate whether Smart Growth and the New Urbanism are the same or whether substantive differences exist between the two; overlap exists in membership and content between the two movements. Placemaking is another term that is often used to signify New Urbanist efforts or those of like-minded groups. The term Transit-Oriented Development is sometimes cited as being coined by prominent New Urbanist Peter Calthorpe and is heavily promoted by New Urbanists.
The term sustainable development is sometimes associated with the New Urbanism, as there has been an increasing focus on the environmental benefits of New Urbanism associated with the rise of the term sustainability in the 2000s; however, this has caused some confusion, as the term is also used by the United Nations and Agenda 21 to include human development issues (e.g., developing country) that exceed the scope of land development intended to be addressed by the New Urbanism or Sustainable Urbanism. The term "livability" or "livable communities" was popular under the Obama administration, though it dates back at least to the mid-1990s when the term was used by the Local Government Commission. Planning magazine discussed the proliferation of "urbanisms" in an article in 2011 titled "A Short Guide to 60 of the Newest Urbanisms". Several New Urbanists have popularized terminology under the umbrella of the New Urbanism including Sustainable Urbanism and Tactical Urbanism (of which Guerrilla Urbanism can be viewed as a subset). The term Tactical Urbanism was coined by Frenchman Michel de Certeau in 1968 and revived in 2011 by New Urbanist Mike Lydon and the co-authors of the Tactical Urbanism Guide. In 2011 Andres Duany authored a book that used the term Agrarian Urbanism to describe an agriculturally-focused subset of New Urbanist town design. In 2013 a group of New Urbanists led by CNU co-founder Andres Duany began a research project under the banner of Lean Urbanism which purported to provide a bridge between Tactical Urbanism and the New Urbanism. Other terms have surfaced in reaction to the New Urbanism intended to provide a contrast, alternative to, or a refinement of the New Urbanism. Some of these terms include Everyday Urbanism by Harvard Professor Margaret Crawford, John Chase, and John Kaliski, Ecological Urbanism, and True Urbanism by architect Bernard Zyscovich. Landscape urbanism was popularized by Charles Waldheim, who explicitly defined it in opposition to the New Urbanism in his lectures at Harvard University. Landscape Urbanism and its Discontents, edited by Andres Duany and Emily Talen, specifically addressed the tension between these two views of urbanism.
Canada hosts two full CNU Chapters, one in Ontario (CNU Ontario), and one in British Columbia (Cascadia) which also includes a portion of the north-west US states. While the CNU has international participation in Canada, sister organizations have been formed in other areas of the world including the Council for European Urbanism (CEU), the Movement for Israeli Urbanism (MIU) and the Australian Council for the New Urbanism. By 2002 chapters of Students for the New Urbanism began appearing at universities including the Savannah College of Art and Design, University of Georgia, University of Notre Dame, and the University of Miami. In 2003, a group of younger professionals and students met at the 11th Congress in Washington, D.C., and began developing a "Manifesto of the Next Generation of New Urbanists". The Next Generation of New Urbanists held their first major session the following year at the 12th meeting of the CNU in Chicago in 2004. The group has continued meeting annually with a focus on young professionals, students, new member issues, and ensuring the flow of fresh ideas and diverse viewpoints within the New Urbanism and the CNU. Spinoff projects of the Next Generation of the New Urbanists include the Living Urbanism publication first published in 2008 and the first Tactical Urbanism Guide. The CNU has spawned publications and research groups. Publications include the New Urban News and the New Town Paper. Research groups have formed independent nonprofits to research individual topics such as the Form-Based Codes Institute, The National Charrette Institute and the Center for Applied Transect Studies. In the United Kingdom, New Urbanist and European urbanism principles are practised and taught by The Prince's Foundation for the Built Environment. They have also been broadly supported in the final report of the Building Better Building Beautiful Commission, Living with Beauty, and by organisations such as Create Streets. Around the world, other organisations promote New Urbanism as part of their remit, such as INTBAU, A Vision of Europe, Council for European Urbanism, and others. The CNU and other national organizations have also formed partnerships with like-minded groups. Organizations under the banner of Smart Growth also often work with the Congress for the New Urbanism. In addition the CNU has formed partnerships on specific projects such as working with the United States Green Building Council and the Natural Resources Defense Council to develop the LEED for Neighborhood Development standards, and with the Institute of Transportation Engineers to develop a Context Sensitive Solutions (CSS) Design manual. Founded in 1984, the Seaside Institute is a nonprofit promoting the New Urbanist movement, based in Seaside, Florida. The organization's primary goal is to inspire livable communities that are centered around sustainability, connectivity, and adaptability. Since 1993, the Seaside Institute has awarded the Seaside Prize to professionals who have made a significant impact on how communities can be built and rebuilt to reflect New Urbanist principles. Emerging New Urbanist (ENU) empowers, includes, fosters, and advances the goals of the Charter of the New Urbanism. Criticism New Urbanism has drawn both praise and criticism from all parts of the political spectrum.
It has been criticized both for being a social engineering scheme and for failing to address social equity, and both for restricting private enterprise and for being a deregulatory force in support of private sector developers. Journalist Alex Marshall has decried New Urbanism as essentially a marketing scheme that repackages conventional suburban sprawl behind a façade of nostalgic imagery and empty, aspirational slogans. In a 1996 article in Metropolis magazine, Marshall denounced New Urbanism as "a grand fraud". The attack continued in numerous articles, including an opinion column in The Washington Post in September of the same year, and in Marshall's first book, How Cities Work: Suburbs, Sprawl, and the Roads Not Taken. Critics have asserted that the effectiveness claimed for the New Urbanist solution of mixed income developments lacks statistical evidence. Independent studies have supported the idea of addressing poverty through mixed-income developments, but the argument that New Urbanism produces such diversity has been challenged by findings from one community in Canada. Some parties have criticized the New Urbanism for being too accommodating of motor vehicles and not going far enough to promote cleaner modes of travelling such as walking, cycling, and public transport. The Charter of the New Urbanism states that "communities should be designed for the pedestrian and transit as well as the car". Some critics suggest that communities should exclude the car altogether in favor of car-free developments. Steve Melia proposes the idea of "filtered permeability" (see Permeability (spatial and transport planning)), which increases the connectivity of the pedestrian and cycling network, resulting in a time and convenience advantage over drivers, while still limiting the connectivity of the vehicular network and thus maintaining the safety benefits of cul-de-sacs and horseshoe loops in resistance to property crime. In response to critiques of a lack of evidence for the New Urbanism's claimed environmental benefits, a rating system for neighborhood environmental design, LEED-ND, was developed by the U.S. Green Building Council, Natural Resources Defense Council, and the Congress for the New Urbanism (CNU), to quantify the sustainability of New Urbanist neighborhood design. New Urbanist and CNU board member Doug Farr has taken a step further and coined Sustainable Urbanism, which combines New Urbanism and LEED-ND to create walkable, transit-served urbanism with high performance buildings and infrastructure. Criticizing the lack of evidence for low greenhouse gas emissions results, Susan Subak has pointed out that while New Urbanism emphasizes walkability and building variety, it is the scale of dwellings, especially the absence of large houses, that may determine successful, low carbon outcomes at the community level. New Urbanism has been criticized for being a form of centrally planned, large-scale development, "instead of allowing the initiative for construction to be taken by the final users themselves". It has been criticized for asserting universal principles of design instead of attending to local conditions. Examples United States New Urbanism is having a growing influence on how and where metropolitan regions choose to grow. At least fourteen large-scale planning initiatives are based on the principles of linking transportation and land-use policies, and using the neighborhood as the fundamental building block of a region.
Miami, Florida has adopted the most ambitious New Urbanist-based zoning code reform yet undertaken by a major U.S. city. More than six hundred new towns, villages, and neighborhoods, following New Urbanist principles, have been planned or are currently under construction in the U.S. Hundreds of new, small-scale, urban and suburban infill projects are under way to reestablish walkable streets and blocks. In Maryland and several other states, New Urbanist principles are an integral part of smart growth legislation. In the mid-1990s, the U.S. Department of Housing and Urban Development (HUD) adopted the principles of the New Urbanism in its multibillion-dollar program to rebuild public housing projects nationwide. New Urbanists have planned and developed hundreds of projects in infill locations. Most were driven by the private sector, but many, including HUD projects, used public money. Prospect New Town Founded in the mid-1990s, Prospect New Town is Colorado's first full-scale New Urbanist community. Developer Kiki Wallace worked with the firm of Duany Plater Zyberk & Company to develop the neighborhood that was formerly his family's tree farm. Currently in its final phase of development, the neighborhood is intended to have a population of approximately 2,000 people in 585 units on 340 lots. The development includes a town center interwoven into the center of the residential area, with businesses ranging from restaurants to professional offices. The streets are oriented to maximize the view of the mountains, and the traditional town center is no more than five minutes on foot from any place in the neighborhood. University Place in Memphis In 2010, University Place in Memphis, Tennessee became only the second U.S. Green Building Council (USGBC) LEED certified neighborhood. LEED ND (neighborhood development) standards integrate principles of smart growth, urbanism, and green building and were developed through a collaboration between USGBC, Congress for the New Urbanism, and the Natural Resources Defense Council. University Place, developed by McCormack Baron Salazar, is a 405-unit, , mixed-income, mixed-use, multigenerational, HOPE VI grant community that revitalized the severely distressed Lamar Terrace public housing site. The Cotton District The Cotton District in Starkville, Mississippi was the first New Urbanist development, begun in 1968 long before the New Urbanism movement was organized. The District borders Mississippi State University, and consists mostly of residential rental units for college students along with restaurants, bars and retail. The Cotton District got its name because it is built in the vicinity of an old cotton mill. Seaside Seaside, Florida, the first fully New Urbanist town, began development in 1981 on of Florida Panhandle coastline. It was featured on the cover of the Atlantic Monthly in 1988, when only a few streets were completed, and it has become internationally famous for its architecture, as well as the quality of its streets and public spaces. Seaside is now a tourist destination, and it appeared in the film The Truman Show (1998). Lots sold for US$15,000 in the early 1980s. Slightly over a decade later, in the mid-1990s, the price had escalated to about US$200,000. Today, most lots sell for more than $1 million, and some houses top $5 million. Mueller Community The Mueller Community is located on the site of the former Robert Mueller Municipal Airport in Austin, Texas, which closed in 1999. 
Per the developer, the value of the Mueller development upon completion will be $1.3 billion, and will comprise of non-residential development, of retail space, 4,600 homes, and of open space. An estimated 10,000 permanent jobs within the development will have been created by the time it is complete. In 2012, the Mueller Community had more electric cars per capita than any other neighborhood in the United States – a fact partially attributable to an incentive program. Stapleton The site of the former Stapleton International Airport in Denver and Aurora, Colorado, closed in 1995, is now being redeveloped by Forest City Enterprises. Stapleton is expected to be home to at least 30,000 residents, six schools, and of retail. Construction began in 2001. Northfield Stapleton, one of the development's major retail centers, recently opened. San Antonio In 1997, San Antonio, Texas, as part of a new master plan, created new regulations called the Unified Development Code (UDC), largely influenced by New Urbanism. One feature of the UDC is six unique land development patterns that can be applied to certain districts: Conservation Development; Commercial Center Development; Office or Institutional Campus Development; Commercial Retrofit Development; Tradition Neighborhood Development; and Transit Oriented Development. Each district has specific standards and design regulations. The six development patterns were created to reflect existing development patterns. Mountain House Mountain House, one of the latest New Urbanist projects in the United States, is a new town located near Tracy, California. Construction started in 2001. Mountain House will consist of 12 villages, each with its own elementary school, park, and commercial area. In addition, a future train station, transit center, and bus system are planned for Mountain House. Mesa del Sol Mesa del Sol, New Mexicoβ€”the largest New Urbanist project in the United Statesβ€”was designed by architect Peter Calthorpe, and is being developed by Forest City Enterprises. Mesa del Sol may take five decades to reach full build-out, at which time it should have: 38,000 residential units, housing a population of 100,000; a industrial office park; four town centers; an urban center; and a downtown that would provide a twin city within Albuquerque. I'On Located in Mount Pleasant, South Carolina, I'On is a traditional neighborhood development, mixed with a new urbanism styled architecture, reflecting on the building designs of the nearby downtown areas of Charleston, South Carolina. Founded on April 30, 1995, I'On was designed by the town planning firms of Dover, Kohl & Partners and Duany Plater-Zyberk & Company, and currently holds over 750 single family homes. Features of the community include extensive sidewalks, shared public greens and parks, trails, and a grid of narrow, traffic calming streets. Most homes are required to have a front porch of not less than in depth. Floor heights of , raised foundations, and smaller lot sizes give the community a dense, vertical feel. Haile Plantation Haile Plantation, Florida, is a 2,600-household, development of regional impact southwest of the city of Gainesville, within Alachua County. Haile Village Center is a traditional neighborhood center within the development. It was originally started in 1978 and completed in 2007. 
In addition to the 2,600 homes the neighborhood consists of two merchant centers (one a New England narrow street village and the other a chain grocery strip mall), as well as two public elementary schools and an 18-hole golf course. Celebration, Florida In June 1996, the Walt Disney Company unveiled its town of Celebration, near Orlando, Florida. Celebration opened its downtown in October 1996, relying heavily on the experiences of Seaside, whose downtown was nearly complete. Disney shuns the label New Urbanism, calling Celebration simply a "town". Celebration's Downtown has become one of the area's most popular tourist destinations making the community a showcase for New Urbanism as a prime example of the creation of a "sense of place". Jersey City The construction of the Hudson Bergen Light Rail in Hudson County, New Jersey has spurred transit-oriented development. In Jersey City, at least three projects are planned to transform brownfield sites, two of which have required remediation of toxic waste by previous owners: Bayfront, once site of a Honeywell plant is a site on the Hackensack River, near the planned West Campus of New Jersey City University. Canal Crossing, named for the former Morris Canal, was once partially owned by PPG Industries, and is a site west of Liberty State Park. Liberty Harbor is on the north side of the Morris Canal. Old York Village, Chesterfield Township, New Jersey The sparsely developed agricultural Township of Chesterfield in New Jersey covers approximately and has made farmland preservation a priority since the 1970s. Chesterfield has permanently preserved more than of farmland through state and county programs and a township-wide transfer of development credits program that directs future growth to a designated "receiving area" known as Old York Village. Old York Village is a neo-traditional, new urbanism town on incorporating a variety of housing types, neighborhood commercial facilities, a new elementary school, civic uses, and active and passive open space areas with preserved agricultural land surrounding the planned village. Construction began in the early 2000s and a significant percentage of the community is now complete. Old York Village was the winner of the American Planning Association National Outstanding Planning Award in 2004. Civita Civita is a sustainable, transit-oriented master-planned village under development in the Mission Valley area of San Diego, California, United States. Located on a former quarry site, the urban-style village is organized around a community park that cascades down the terraced property. Civita development plans call for of parks and open space, 4,780 residences (including approximately 478 affordable units), an approximately retail center, and for an office/business campus. Sudberry Properties, the developer of Civita, incorporated numerous green building practices in the Civita design. In 2009, Civita achieved a Stage 1 Gold rating for the U.S. Green Building Council's 2009 LEED-ND (Neighborhood Development) pilot and received the California Governor's Environmental and Economic Leadership Award. In 2010, Civita was designated as a California Catalyst Community by the California Department of Housing and Community Development to support innovation and test sustainable strategies that reflect the interdependence of environmental, economic, and community health. 
Del Mar Station Del Mar Station, which won a Congress for the New Urbanism Charter Award in 2003, is a transit-oriented development surrounding a prominent Metro Rail stop on the Gold Line, which connects Los Angeles and Pasadena. Located at the southern edge of downtown Pasadena, it serves as a gateway to the city with 347 apartments, out of which 15% are affordable units. Approximately of retail is linked with a network of public plazas, paseos, and private courtyards. The , US$77 million project sits above a 1,200-car multi-level subterranean parking garage, with 600 spaces dedicated to transit. A light rail right-of-way, detailed as a public street, bisects the site. It was designed by Moule & Polyzoides. Norfolk, VA, East Beach East Beach in Norfolk, VA, was designed and built in the style of traditional Atlantic coastal villages. The Master Plan for East Beach was developed in the style of "New Urbanism" by world-renowned TND master planners Duany Plater-Zyberk. Newly constructed homes reflect traditional classic detail and proportion of Tidewater Virginia homes, and are built with materials that will withstand the test of time and forces of Mother Nature and the Chesapeake Bay. Other countries New Urbanism is closely related to the Urban village movement in Europe. They both occurred at similar times and share many of the same principles although urban villages have an emphasis on traditional city planning. In Europe many brown-field sites have been redeveloped since the 1980s following the models of the traditional city neighbourhoods rather than Modernist models. One well-publicized example is Poundbury in England, a suburban extension to the town of Dorchester, which was built on land owned by the Duchy of Cornwall under the overview of Prince Charles. The original masterplan was designed by Leon Krier. A report carried out after the first phase of construction found a high degree of satisfaction by residents, although the aspirations to reduce car dependency had not been successful. Rising house prices and a perceived premium have made the open market housing unaffordable for many local people. The Council for European Urbanism (CEU), formed in 2003, shares many of the same aims as the U.S.'s New Urbanists. CEU's Charter is a development of the Congress for the New Urbanism Charter revised and reorganised to relate better to European conditions. An Australian organisation, the Australian Council for New Urbanism, has since 2001 run conferences and events to promote New Urbanism in that country. A New Zealand Urban Design Protocol was created by the Ministry for the Environment in 2005. There are many developments around the world that follow New Urbanist principles to a greater or lesser extent: Europe Le Plessis-Robinson, a 21st-century example of neo-traditionalism, in the south-west of Paris. This city is in the process of transforming itself, destroying old modern blocklike buildings and replacing them with traditional buildings and houses in one of the biggest worldwide projects with Val d'Europe. In 2008 the city was nominated best architectural project of the European Union. Poundbury, in Dorset, England, is a neotraditionalist urban extension focussed on high quality urban realm and the expression of traditional modes of urban or village life. Tornagrain, between Inverness and Nairn, Scotland. The design is based on the architectural and planning traditions of the Highlands and the rest of Scotland. Val d'Europe, east of Paris, France. 
Developed by Disneyland Resort Paris, this town is a kind of European counterpart to Walt Disney World Celebration City. Jakriborg, in Southern Sweden, is a recent example of the New Urbanist movement. Brandevoort, in Helmond, in the Netherlands, is a new example of the New Urbanist movement. Sankt Eriksområdet quarter in Stockholm, Sweden, built in the 1990s. Other developments can be found at Heulebrug, part of Knokke-Heist, in Belgium, and Fonti di Matilde in San Bartolomeo (outside of Reggio Emilia), Italy. Kartanonkoski, in Vantaa, Finland, is the only example of neotraditional architecture in Finland implemented on a larger scale. The area has around 4000 inhabitants and its architecture has been mainly influenced by Nordic Classicism. Vauban and its surrounding city Freiburg serve as centers for innovation integrating solar roofs, carbon neutral buildings, Passivhaus, and point-access block single-exit apartment blocks into the fabric of New Urbanist architecture and neighborhoods. Americas Mahogany Bay Village, Belize, is a New Urbanist community on Ambergris Caye, Belize. Orchid Bay, Belize, is one of the largest New Urbanist projects in Central America and the Caribbean. Las Catalinas, Costa Rica, is a coastal town in the Guanacaste Province of Northwest Costa Rica. Envisioned as a compact, walkable beach town, Las Catalinas was founded in 2006 by Charles Brewer and incorporates many of the principles of New Urbanism. McKenzie Towne is a New Urbanist development which was begun in 1995 by Carma Developers LP in Calgary. Cornell, within the city of Markham, Ontario, was designed with walkable neighborhoods, density to support public transit, a variety of housing types and retail. New Amherst is a new urbanist development in the town of Cobourg, Ontario. UniverCity, beside the Simon Fraser University campus on Burnaby Mountain in Burnaby, British Columbia, is a sustainable community that is designed to be walkable, dense, and well connected to public transit networks. Mount Pleasant Village in the city of Brampton, Ontario was designed as a mixed-use neighbourhood surrounding a train station and with a central square. Asia The structure plan for Thimphu, Bhutan, follows Principles of Intelligent Urbanism, which share underlying axioms with the New Urbanism. Africa There are several such developments in South Africa. The most notable is Melrose Arch in Johannesburg. Triple Point is a comparable mixed-use development in East London, in Eastern Cape province. The development, announced in 2007, comprises 30 hectares. It is made up of three apartment complexes together with over 30 residential sites as well as 20,000 sq m of residential and office space. The development is valued at over R2 billion ($250 million). There have been cases where market forces of urban decay are confused with new urbanism in African cities. This has led to a form of suburban mixed-use development that does not promote walkability. Australia Most new developments on the edges of Australia's major cities are master planned, often guided expressly by the principles of New Urbanism. The relationship between housing, activity centres, the transport network and key social infrastructure (sporting facilities, libraries, community centres etc.) is defined at the structure planning stage. Jindee, Western Australia, a new coastal development north of Perth which has been designed using Smart Code. Tullimbar Village, New South Wales, is a new development which follows the principles of New Urbanism. 
Another important factor or principle of New Urbanism that guides Australia's major cities is how good their foot circulation seems to be which is guided by the wayfinding systems that are implemented. Kenneth B. Hall Jr. and Gerald A. Porterfield said in their book, "Community by Design," the way to gain good circulation is to take some thoughtful consideration to things like wayfinding, sight lines, transition, visual clues, and reference points. Circulation design should work to create an interesting and informative system that utilizes subtle elements as well as technical ones. City of Port Philip, Australia, is a good example of wayfinding where they have come up with a comprehensive pedestrian signage system, specifically for their local areas of St Kilda, South Melbourne and Port Melbourne. The city's wayfinding system consists of 26 individually designed panels that are placed on some major streets such as St Kilda and St Kilda East, linking St Kilda Junction and Balaclava Station to the foreshore via Fitzroy, Carlisle and Acland Streets. City of Port Philip also created directional signage systems that makes use of the already existing street furniture such as trash cans to help provide for 130 directional indicators across Port Melbourne. 20-minute neighbourhoods Melbourne followed up a 2014 plan by launching 20-minute neighbourhoods in January 2018, aiming to provide for most daily needs within a 20-minute walk from home, together with safe cycling and public transport options. Another definition has used the time taken to cycle, or take a bus. In Melbourne the concept was initiated in the suburbs of Croydon South, Strathmore, and Sunshine West. The concept has since expanded to other cities, such as Singapore and Hamilton in New Zealand. Critics have pointed out that Melbourne's plan excludes jobs and that a previous target for public transport use has been shelved. The concept has been equated with localism. Dubai launched the 20-minute city project in 2022, where residents are able to access daily needs & destinations within 20 minutes by foot or bicycle. The plan involves placing 55% of the residents within 800 meters of mass transit stations, allowing them to reach 80% of their daily needs and destinations. See also List of examples of New Urbanism Urban planners, architects and New Urbanists Ivan Chtcheglov Walter F. 
Chatham Larry Beasley Christopher Charles Benninger Peter Calthorpe Andrés Duany Hans Kollhoff Leon Krier Gabriele Tagliaventi James Howard Kunstler Elizabeth Plater-Zyberk Sim Van der Ryn Pier Carlo Bontempi Ali Kemal Arkun Matthew Sergio Digoy Locations Atlantic Station, Atlanta Birkdale Village, North Carolina Carlton Landing, Oklahoma Daybreak, South Jordan, Utah DeLand, FL Greenbelt, Maryland Issaquah Highlands, Issaquah, Washington Kentlands, Gaithersburg, Maryland National Harbor New Town, Missouri Orenco Station, Oregon (New Urbanist transit-oriented development) Beacon Cove Coed Darcy Poundbury Prospect New Town, Colorado Seabrook, Washington Verrado, Buckeye, Arizona Uptown, Dallas, Texas (New Urbanist area rated most pedestrian-friendly in Texas) Old York Village, Chesterfield Township, New Jersey Topics Car-free movement Carsharing Circles of Sustainability Community building Crime prevention through environmental design European Urban Renaissance EcoMobility Garden City Movement Gentrification International Network for Traditional Building, Architecture & Urbanism Land recycling Land value tax Missing Middle Housing MIU (Movement for Israeli Urbanism) Mixed-use development Mobility transition Naked streets/Shared space New Classical Architecture New pedestrianism Principles of Intelligent Urbanism Pedestrian-oriented development Pedestrian Village Preservation development Traditional Neighborhood Development Urban decay Urbanism Urban green space Urban renaissance Urban resilience Urban sprawl Urban vitality Walking audit World Urbanism Day YIMBY References External links Congress for the New Urbanism Australian Council for New Urbanism Council for European Urbanism Sustainable design Sustainable transport Environmentalism Human ecology Theories of aesthetics Urban studies and planning terminology
New Urbanism
[ "Physics", "Environmental_science" ]
7,460
[ "Physical systems", "Transport", "Sustainable transport", "Human ecology", "Environmental social science" ]
159,023
https://en.wikipedia.org/wiki/Tree%20decomposition
In graph theory, a tree decomposition is a mapping of a graph into a tree that can be used to define the treewidth of the graph and speed up solving certain computational problems on the graph. Tree decompositions are also called junction trees, clique trees, or join trees. They play an important role in problems like probabilistic inference, constraint satisfaction, query optimization, and matrix decomposition. The concept of tree decomposition was originally introduced by . Later it was rediscovered by and has since been studied by many other authors. Definition Intuitively, a tree decomposition represents the vertices of a given graph as subtrees of a tree, in such a way that vertices in are adjacent only when the corresponding subtrees intersect. Thus, forms a subgraph of the intersection graph of the subtrees. The full intersection graph is a chordal graph. Each subtree associates a graph vertex with a set of tree nodes. To define this formally, we represent each tree node as the set of vertices associated with it. Thus, given a graph , a tree decomposition is a pair , where is a family of subsets (sometimes called bags) of , and is a tree whose nodes are the subsets , satisfying the following properties: The union of all sets equals . That is, each graph vertex is associated with at least one tree node. For every edge in the graph, there is a subset that contains both and . That is, vertices are adjacent in the graph only when the corresponding subtrees have a node in common. If and both contain a vertex , then all nodes of the tree in the (unique) path between and contain as well. That is, the nodes associated with vertex form a connected subset of . This is also known as coherence, or the running intersection property. It can be stated equivalently that if , and are nodes, and is on the path from to , then . The tree decomposition of a graph is far from unique; for example, a trivial tree decomposition contains all vertices of the graph in its single root node. A tree decomposition in which the underlying tree is a path graph is called a path decomposition, and the width parameter derived from these special types of tree decompositions is known as pathwidth. A tree decomposition of treewidth is smooth, if for all , and for all . Treewidth The width of a tree decomposition is the size of its largest set minus one. The treewidth of a graph is the minimum width among all possible tree decompositions of . In this definition, the size of the largest set is diminished by one in order to make the treewidth of a tree equal to one. Treewidth may also be defined from other structures than tree decompositions, including chordal graphs, brambles, and havens. It is NP-complete to determine whether a given graph has treewidth at most a given variable . However, when is any fixed constant, the graphs with treewidth can be recognized, and a width tree decomposition constructed for them, in linear time. The time dependence of this algorithm on is an exponential function of . Dynamic programming At the beginning of the 1970s, it was observed that a large class of combinatorial optimization problems defined on graphs could be efficiently solved by non-serial dynamic programming as long as the graph had a bounded dimension, a parameter related to treewidth. 
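As a concrete illustration of the definition and width computation above, here is a minimal Python sketch (not part of the original article). The data layout is an assumed convention chosen for the example, not a standard API: the graph is an adjacency dict mapping each vertex to its set of neighbours, `bags` maps each tree node to a frozenset of graph vertices, and `tree` is the adjacency dict of the decomposition tree. The function checks the three defining properties and reports the width as the largest bag size minus one.

```python
# A minimal sketch, assuming the graph/bag/tree conventions described above.

def is_tree_decomposition(graph, bags, tree):
    vertices = set(graph)
    # Property 1: every graph vertex appears in at least one bag.
    if set().union(*bags.values()) != vertices:
        return False
    # Property 2: every edge of the graph is contained in some bag.
    for u in graph:
        for v in graph[u]:
            if not any(u in bag and v in bag for bag in bags.values()):
                return False
    # Property 3 (coherence): the tree nodes whose bags contain a given
    # vertex must induce a connected subtree.
    for v in vertices:
        nodes = {i for i, bag in bags.items() if v in bag}
        if not _is_connected(nodes, tree):
            return False
    return True

def _is_connected(nodes, tree):
    """Check that `nodes` induces a connected subgraph of the decomposition tree."""
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        i = stack.pop()
        for j in tree[i]:
            if j in nodes and j not in seen:
                seen.add(j)
                stack.append(j)
    return seen == nodes

def width(bags):
    # Width = size of the largest bag minus one, so a tree has treewidth 1.
    return max(len(bag) for bag in bags.values()) - 1
```

For example, the path graph a-b-c with bags {1: {a, b}, 2: {b, c}} and tree edge 1-2 passes all three checks and has width 1.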
Later, several authors independently observed, at the end of the 1980s, that many algorithmic problems that are NP-complete for arbitrary graphs may be solved efficiently by dynamic programming for graphs of bounded treewidth, using the tree-decompositions of these graphs. As an example, consider the problem of finding the maximum independent set in a graph of treewidth . To solve this problem, first choose one of the nodes of the tree decomposition to be the root, arbitrarily. For a node of the tree decomposition, let be the union of the sets descending from . For an independent set let denote the size of the largest independent subset of such that Similarly, for an adjacent pair of nodes and , with farther from the root of the tree than , and an independent set let denote the size of the largest independent subset of such that We may calculate these and values by a bottom-up traversal of the tree: where the sum in the calculation of is over the children of node . At each node or edge, there are at most sets for which we need to calculate these values, so if is a constant then the whole calculation takes constant time per edge or node. The size of the maximum independent set is the largest value stored at the root node, and the maximum independent set itself can be found (as is standard in dynamic programming algorithms) by backtracking through these stored values starting from this largest value. Thus, in graphs of bounded treewidth, the maximum independent set problem may be solved in linear time. Similar algorithms apply to many other graph problems. This dynamic programming approach is used in machine learning via the junction tree algorithm for belief propagation in graphs of bounded treewidth. It also plays a key role in algorithms for computing the treewidth and constructing tree decompositions: typically, such algorithms have a first step that approximates the treewidth, constructing a tree decomposition with this approximate width, and then a second step that performs dynamic programming in the approximate tree decomposition to compute the exact value of the treewidth. See also Brambles and havensTwo kinds of structures that can be used as an alternative to tree decomposition in defining the treewidth of a graph. Branch-decompositionA closely related structure whose width is within a constant factor of treewidth. Decomposition MethodTree Decomposition is used in Decomposition Method for solving constraint satisfaction problem. Notes References . . . . . . . . . Trees (graph theory) Graph minor theory Graph theory objects
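The bottom-up dynamic program described in the "Dynamic programming" section above can be written compactly once a rooted tree decomposition is given explicitly. The following Python sketch is an illustration under assumed conventions (the same adjacency-dict graph, frozenset bags, and tree adjacency dict as in the earlier sketch); function and variable names such as `max_independent_set_size` are hypothetical. For each node it brute-forces the at most 2^(w+1) independent subsets of the bag, so the running time is linear in the number of tree nodes for fixed width w.

```python
# A minimal sketch of the tree-decomposition DP for maximum independent set,
# under the illustrative data conventions stated above.

from itertools import combinations

def independent_subsets(bag, adj):
    """Yield every subset of `bag` that is an independent set in the graph."""
    bag = list(bag)
    for r in range(len(bag) + 1):
        for combo in combinations(bag, r):
            s = frozenset(combo)
            if all(v not in adj[u] for u in s for v in s if u != v):
                yield s

def max_independent_set_size(adj, bags, tree, root):
    # A[i][S]: size of the largest independent subset of the vertices seen in
    # the subtree rooted at node i whose intersection with bag i is exactly S.
    A = {}

    def visit(i, parent):
        children = [j for j in tree[i] if j != parent]
        for j in children:
            visit(j, i)
        A[i] = {}
        for S in independent_subsets(bags[i], adj):
            total = len(S)
            for j in children:
                # The child's choice S' must agree with S on the vertices the
                # two bags share; shared vertices are subtracted so they are
                # not counted twice. S & bags[j] itself is always a candidate,
                # so the max below is never over an empty set.
                total += max(val - len(Sp & S)
                             for Sp, val in A[j].items()
                             if Sp & bags[i] == S & bags[j])
            A[i][S] = total

    visit(root, None)
    return max(A[root].values())
```

On the path a-b-c-d with bags {1: {a, b}, 2: {b, c}, 3: {c, d}} and tree edges 1-2 and 2-3, rooting at node 1 returns 2, matching the maximum independent sets such as {a, c} or {b, d}.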
Tree decomposition
[ "Mathematics" ]
1,202
[ "Graph minor theory", "Mathematical relations", "Graph theory", "Graph theory objects" ]
159,032
https://en.wikipedia.org/wiki/Lead%20programmer
In software development, a lead programmer is responsible for providing technical guidance and mentorship to a team of software developers. Alternative titles include development lead, technical lead, lead programmer, or lead application developer. When primarily contributing a low-level enterprise software design with focus on the structure of the app, e.g. design patterns, the role would be a software architect (as distinct to the high-level less technical role of solutions architect.) Responsibilities A lead programmer has responsibilities which may vary from company to company, but in general is responsible for overseeing the work, in a technical sense, of a team of software developers working on a project, ensuring work meets the technical requirements, such as coding conventions, set by the software architect responsible for the underlying architecture. A lead programmer's duties are often "hands on", meaning they typically write software code on a daily basis, assisting their team to meet deadlines and improve the quality of the codebase. They act as a mentor for new or lower-level software developers or programmers, as well as for all the members on the development team, primarily through processes such as pair programming, conducting of code reviews, promoting good development principles, such as test-driven development, and taking the lead in correcting code defects. Although the responsibilities are primarily technical, lead programmer also generally serve as an interface between the programmers and management, have ownership of their team's development plans and have supervisorial responsibilities in delegating work. They ensure that sections of software projects come in on time and under budget, and assisting technically with hiring and reviewing performance of staff. Lead programmers also serve as technical advisers to management and provide programming perspective on requirements. Typically a lead programmer will oversee a development team of between two and ten programmers. A lead programmer typically reports to a principal who manages a number of teams. Technical direction may be provided by a software architect. Where teams follow the waterfall, extreme programming, or kanban approaches, the lead programmer is referred to as an engineering manager, or a software development manager, and collaborates directly with a peer, the product owner, who gathers the customer requirements that the end product must meet. In a true Agile approach, the lead programmer collaborates with a separate position of scrum master, who acts as an intermediary seeking a compromise between business demand (product owner) and team capacity and skillset, e.g. which story tickets from the product backlog will be passed into the next Agile sprint. References Lead programmer Software project management
Lead programmer
[ "Technology" ]
507
[ "People in information technology", "Information technology" ]
159,035
https://en.wikipedia.org/wiki/Bytownite
Bytownite is a calcium-rich member of the plagioclase solid solution series of feldspar minerals with composition between anorthite and labradorite. It is usually defined as having between 70 and 90%An (formula: ). Like others of the series, bytownite forms grey to white triclinic crystals commonly exhibiting the typical plagioclase twinning and associated fine striations. The specific gravity of bytownite varies between 2.74 and 2.75. The refractive index ranges are nα=1.563 – 1.572, nβ=1.568 – 1.578, and nγ=1.573 – 1.583. Precise determination of these two properties with chemical, X-ray diffraction, or petrographic analysis is required for identification. Occurrence Bytownite is a rock-forming mineral occurring in mafic igneous rocks such as gabbros and anorthosites. It also occurs as phenocrysts in mafic volcanic rocks. It is rare in metamorphic rocks. It is typically associated with pyroxenes and olivine. The mineral was first described in 1836 and named for an occurrence at Bytown (now Ottawa), Canada. Other noted occurrences in Canada include the Shawmere anorthosite in Foleyet Township, Ontario, and on Yamaska Mountain, near Abbotsford, Quebec. It occurs on Rùm island, Scotland and Eycott Hill, near Keswick, Cumberland, England. It is reported from Naaraodal, Norway and in the Bushveld complex of South Africa. It is also found in Isa Valley, Western Australia. In the US it is found in the Stillwater igneous complex of Montana; from near Lakeview, Lake County, Oregon. It occurs in the Lucky Cuss mine, Tombstone, Arizona; and from the Grants district, McKinley County, New Mexico. In the eastern US it occurs at Cornwall, Lebanon County, Pennsylvania and Phoenixville, Chester County, Pennsylvania. References Hurlbut, Cornelius S.; Klein, Cornelis, 1985, Manual of Mineralogy, 20th ed., Wiley, External links Tectosilicates Calcium minerals Sodium minerals Feldspar Triclinic minerals Gemstones Minerals in space group 2 fr:Bytownite
Bytownite
[ "Physics" ]
481
[ "Materials", "Gemstones", "Matter" ]
159,046
https://en.wikipedia.org/wiki/Geothermal%20desalination
Geothermal desalination refers to the process of using geothermal energy to power the process of converting salt water to fresh water. The process is considered economically efficient, and while overall environmental impact is uncertain, it has potential to be more environmentally friendly compared to conventional desalination options. Geothermal desalination plants have already been successful in various regions, and there is potential for further development to allow the process to be used in an increased number of water scarce regions. Process explanation Desalination is the process of removing minerals from seawater to convert it into fresh water. Desalination is divided into two categories in terms of processes: processes driven by thermal energy and processes driven by mechanical energy. Geothermal desalination uses geothermal energy as the thermal energy source to drive the desalination process. There are two types of geothermal desalination: direct and indirect. Direct geothermal desalination heats seawater to boiling in an evaporator, then transferring to a condenser. In contrast, indirect geothermal desalination converts geothermal energy into electricity which is then used for membrane desalination. If the geothermal energy is used indirectly, it can be used to generate power for the water desalination process, as well as excess electricity that can be used for consumers. Similarly, if the geothermal energy is used directly, the excess geothermal energy can be used to drive heating and cooling processes. Applications Current One use of geothermal desalination is in producing fresh water for agriculture. One example of agricultural applications of geothermal energy is the Balcova-Naridere Geothermal Field (BNGF) in Turkey. However, arsenic and boron, two potentially toxic elements, have been found in the geothermal water used to generate electricity. Since the construction of the geothermal desalination plant in this region, these toxic elements have contaminated freshwater wells, rendering this water unusable for agriculture. Due to the increase in contamination in the surrounding environment, this project is not considered a success. Another use of geothermal desalination is the production of drinking water, as shown by the Milos Island Project in Greece, which relied entirely on geothermal energy to produce desalinated water. This plant was constructed because geothermal energy is readily available in this region, as Milos Island is located in a volcanic region, which makes using geothermal energy a viable way to power the desalination of salt water. The Milos Island plant utilizes a combination of direct and indirect desalination. Unlike the BNGF project, this is considered a success as it produced drinkable water without polluting the environment at a low cost using only geothermal energy. Future potential Research indicates geothermal desalination can be implemented in some regions with water scarcity, as it is a relatively low cost solution to increasing available fresh water. In particular, two regions that have ample geothermal resources and are experiencing water scarcity are California and Saudi Arabia. Because these regions already have existing desalination plants, implementation of geothermal desalination plants would be relatively easy. Furthermore, as the technology for producing geothermal energy improves, geothermal desalination will become possible in more regions. 
Technologies that are currently being developed will allow the geothermal water used to produce energy to be the water that becomes desalinated. This will allow regions that are not close to an ocean to perform geothermal desalination, which will widely expand the potential for regions to perform geothermal desalination. Environmental impacts Much of the environmental impact in the geothermal desalination process stems from the use of geothermal energy, not from the desalination process itself. Geothermal desalination has both environmental benefits and drawbacks. One benefit is that geothermal energy is a renewable resource and emits fewer greenhouse gasses than non-renewable energy sources. Another benefit to the environment is that geothermal energy has a smaller land footprint compared to wind or solar energy. More specifically, the land usage required for geothermal desalination site has been estimated to be 1.2 to 2.7 square terameters are required for each megawatt of energy produced. One environmental drawback is due to geothermal desalination being an energy intensive process; the energy consumption ranges from about 4 to 27 kWh per square meter of the desalination plant. Moreover, some researchers are concerned that due to lack of regulation on carbon dioxide () emissions from geothermal plants, particularly in the United States, there are significant detrimental emissions from these plants that are not being measured. Geothermal power has been found to leak toxic elements such as mercury, boron, and arsenic into the environment, meaning geothermal desalination plants are a potential health hazard for their surrounding environment. Ultimately though, the long term environmental consequences of geothermal power desalination plants are still not clear. Economic factors Geothermal energy is not dependent on day or night cycles and weather conditions, meaning it has a high-capacity factor, which is a measure of how often a plant is running at maximum power. This provides a stable and reliable energy supply. This also means that geothermal desalination plants can operate in any weather condition at any time of day. In terms of capacity, the United States, Indonesia, Philippines, Turkey, New Zealand, and Mexico accounted for 75% of the global geothermal energy capacity. It would be the most economically feasible to perform geothermal desalination in these countries due to their geothermal energy capacity. For membrane desalination specifically, using geothermal energy reduces cost compared to using other energy sources. This is because geothermal power is traditionally produced at a competitive cost compared to other energy sources including fossil fuels; a 2011 study estimates the cost to be $0.10/kWh. Specifically, the US Department of Energy has estimated that geothermal desalination can produce desalinated water at a cost of $1.50 per cubic meter of desalinated water. History The exact origins of geothermal desalination are unclear; however some early work is credited to Leon Awerbuch, a scientist working in Research & Development at the Bechtel Group at the time, who proposed the process of using geothermal energy for water desalination in 1972. In 1994, a prototype that used geothermal energy to power desalination was built by Caldor-Marseille. This prototype was able to produce a few cubic meters of desalinated water per day. In 1995, a geothermal desalination prototype plant was built in Tunisia, which is one of the earliest documented cases of a geothermal desalination plant. 
Its capacity was three cubic meters of water per day, which could meet the needs of the surrounding communities. The cost of water was estimated to be $1.20 per cubic meter. See also Geothermal power Desalination References Geothermal energy Water desalination
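As a rough back-of-the-envelope check of how the cost figures quoted in the "Economic factors" section relate to each other, the short sketch below multiplies a specific-energy figure by the cited geothermal electricity price. One interpretation is assumed rather than taken from the article: the 4 to 27 kWh range is treated as energy per cubic metre of product water, which is the basis on which desalination energy use is usually quoted.

```python
# Back-of-the-envelope sketch; the per-cubic-metre interpretation of the
# 4-27 kWh figure is an assumption, not a statement from the article.

electricity_price = 0.10                      # US$ per kWh, geothermal estimate cited above
specific_energy_low, specific_energy_high = 4.0, 27.0   # kWh per m^3 (assumed basis)

for e in (specific_energy_low, specific_energy_high):
    energy_cost = e * electricity_price       # US$ per m^3 of desalinated water
    print(f"{e:5.1f} kWh/m^3 -> energy cost ~ ${energy_cost:.2f}/m^3")

# Under this assumption the energy component alone is roughly $0.40-$2.70 per
# cubic metre; the US DOE figure of $1.50/m^3 quoted above would have to cover
# this plus capital and operating costs, so the numbers are broadly consistent
# only at the low end of the energy range.
```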
Geothermal desalination
[ "Chemistry" ]
1,397
[ "Water treatment", "Water technology", "Water desalination" ]
159,069
https://en.wikipedia.org/wiki/Polaroid%20%28polarizer%29
Polaroid is a type of synthetic plastic sheet which is used as a polarizer or polarizing filter. A trademark of the Polaroid Corporation, the term has since entered common use. Patent The original material, patented in 1929 and further developed in 1932 by Edwin H. Land, consists of many microscopic crystals of iodoquinine sulphate (herapathite) embedded in a transparent nitrocellulose polymer film. The needle-like crystals are aligned during the manufacture of the film by stretching or by applying electric or magnetic fields. With the crystals aligned, the sheet is dichroic: it tends to absorb light which is polarized parallel to the direction of crystal alignment but to transmit light which is polarized perpendicular to it. The resultant electric field of an electromagnetic wave (such as light) determines its polarization. If the wave interacts with a line of crystals as in a sheet of polaroid, any varying electric field in the direction parallel to the line of the crystals will cause a current to flow along this line. The electrons moving in this current will collide with other particles and re-emit the light backwards and forwards. This will cancel the incident wave causing little or no transmission through the sheet. The component of the electric field perpendicular to the line of crystals, however, can cause only small movements in the electrons as they cannot move very much from side to side. This means there will be little change in the perpendicular component of the field leading to transmission of the part of the light wave polarized perpendicular to the crystals only, hence allowing the material to be used as a light polarizer. This material, known as J-sheet, was later replaced by the improved H-sheet Polaroid, invented in 1938 by Land. H-sheet is a polyvinyl alcohol (PVA) polymer impregnated with iodine. During manufacture, the PVA polymer chains are stretched such that they form an array of aligned, linear molecules in the material. The iodine dopant attaches to the PVA molecules and makes them conducting along the length of the chains. Light polarized parallel to the chains is absorbed, and light polarized perpendicular to the chains is transmitted. Another type of Polaroid is the K-sheet polarizer, which consists of aligned polyvinylene chains in a PVA polymer created by dehydrating PVA. This polarizer material is particularly resistant to humidity and heat. Applications Polarizing sheets are used in liquid-crystal displays, optical microscopes and sunglasses. Since Polaroid sheet is dichroic, it will absorb impinging light of one plane of polarization, so sunglasses will reduce the partially polarized light reflected from level surfaces such as windows and sheets of water, for example. They are also used to examine for chain orientation in transparent plastic products made from polystyrene or polycarbonate. The intensity of light passing through a Polaroid polarizer is described by Malus' law. References Edwin H. Land (1951). "Some aspects on the development of sheet polarizers". Journal of the Optical Society of America 41(12): 957–963. Halliday, Resnick, Walker. Fundamentals of Physics, 7th edition, John Wiley & Sons William Shurcliff (1962). Polarized Light: Production and Use, Harvard University Press. External links "One-Way Glass Stops Glare" Popular Mechanics, April 1936 pp. 481-483 Products introduced in 1929 Optical materials Polarization (waves) Brand name materials ca:Polaroid fr:Polaroid it:Polaroid pl:Filtr polaryzacyjny sv:Polaroid
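The article closes by noting that the transmitted intensity is described by Malus' law, I = I0 cos^2(theta), where theta is the angle between the incident light's polarization direction and the sheet's transmission axis. The sketch below is a minimal illustration assuming an ideal, lossless polarizer; real H-sheet also absorbs part of the transmitted component, so measured values are lower.

```python
# A small sketch of Malus' law for an ideal polarizing sheet such as Polaroid.
# An ideal (lossless) polarizer is assumed for the illustration.

import math

def transmitted_intensity(i0, theta_deg):
    """Intensity passed by an ideal polarizer for linearly polarized input."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

for theta in (0, 30, 45, 60, 90):
    print(f"{theta:3d} deg -> {transmitted_intensity(1.0, theta):.3f}")

# 0 deg passes everything and 90 deg (crossed with the polarization) passes
# nothing; for unpolarized input, averaging cos^2 over all angles gives a
# transmitted fraction of one half.
```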
Polaroid (polarizer)
[ "Physics" ]
750
[ "Astrophysics", "Optical materials", "Materials", "Polarization (waves)", "Matter" ]
159,081
https://en.wikipedia.org/wiki/Shock%20%28mechanics%29
In mechanics and physics, shock is a sudden acceleration caused, for example, by impact, drop, kick, earthquake, or explosion. Shock is a transient physical excitation. Shock describes matter subject to extreme rates of force with respect to time. Shock is a vector that has units of an acceleration (rate of change of velocity). The unit g (or g) represents multiples of the standard acceleration of gravity and is conventionally used. A shock pulse can be characterised by its peak acceleration, the duration, and the shape of the shock pulse (half sine, triangular, trapezoidal, etc.). The shock response spectrum is a method for further evaluating a mechanical shock. Shock measurement Shock measurement is of interest in several fields such as Propagation of heel shock through a runner's body Measure the magnitude of a shock need to cause damage to an item: fragility. Measure shock attenuation through athletic flooring Measuring the effectiveness of a shock absorber Measuring the shock absorbing ability of package cushioning Measure the ability of an athletic helmet to protect people Measure the effectiveness of shock mounts Determining the ability of structures to resist seismic shock: earthquakes, etc. Determining whether personal protective fabric attenuates or amplifies shocks Verifying that a Naval ship and its equipment can survive explosive shocks Shocks are usually measured by accelerometers but other transducers and high speed imaging are also used. A wide variety of laboratory instrumentation is available; stand-alone shock data loggers are also used. Field shocks are highly variable and often have very uneven shapes. Even laboratory controlled shocks often have uneven shapes and include short duration spikes; Noise can be reduced by appropriate digital or analog filtering. Governing test methods and specifications provide detail about the conduct of shock tests. Proper placement of measuring instruments is critical. Fragile items and packaged goods respond with variation to uniform laboratory shocks; Replicate testing is often called for. For example, MIL-STD-810G Method 516.6 indicates: ''at least three times in both directions along each of three orthogonal axes". Shock testing Shock testing typically falls into two categories, classical shock testing and pyroshock or ballistic shock testing. Classical shock testing consists of the following shock impulses: half sine, haversine, sawtooth wave, and trapezoid. Pyroshock and ballistic shock tests are specialized and are not considered classical shocks. Classical shocks can be performed on Electro Dynamic (ED) Shakers, Free Fall Drop Tower or Pneumatic Shock Machines. A classical shock impulse is created when the shock machine table changes direction abruptly. This abrupt change in direction causes a rapid velocity change which creates the shock impulse. Testing the effects of shock are sometimes conducted on end-use applications: for example, automobile crash tests. Use of proper test methods and Verification and validation protocols are important for all phases of testing and evaluation. Effects of shock Mechanical shock has the potential for damaging an item (e.g., an entire light bulb) or an element of the item (e.g. a filament in an Incandescent light bulb): A brittle or fragile item can fracture. For example, two crystal wine glasses may shatter when impacted against each other. A shear pin in an engine is designed to fracture with a specific magnitude of shock. 
Note that a soft ductile material may sometimes exhibit brittle failure during shock due to time-temperature superposition. A malleable item can be bent by a shock. For example, a copper pitcher may bend when dropped on the floor. Some items may appear to be not damaged by a single shock but will experience fatigue failure with numerous repeated low-level shocks. A shock may result in only minor damage which may not be critical for use. However, cumulative minor damage from several shocks will eventually result in the item being unusable. A shock may not produce immediate apparent damage but might cause the service life of the product to be shortened: the reliability is reduced. A shock may cause an item to become out of adjustment. For example, when a precision scientific instrument is subjected to a moderate shock, good metrology practice may be to have it recalibrated before further use. Some materials such as primary high explosives may detonate with mechanical shock or impact. When glass bottles of liquid are dropped or subjected to shock, the water hammer effect may cause hydrodynamic glass breakage. Considerations When laboratory testing, field experience, or engineering judgement indicates that an item could be damaged by mechanical shock, several courses of action might be considered: Reduce and control the input shock at the source. Modify the item to improve its toughness or support it to better handle shocks. Use shock absorbers, shock mounts, or cushions to control the shock transmitted to the item. Cushioning reduces the peak acceleration by extending the duration of the shock. Plan for failures: accept certain losses. Have redundant systems available, etc. See also Section 516.6, Shock Notes Further reading DeSilva, C. W., "Vibration and Shock Handbook", CRC, 2005, Harris, C. M., and Peirsol, A. G. "Shock and Vibration Handbook", 2001, McGraw Hill, ISO 18431:2007 - Mechanical vibration and shock ASTM D6537, Standard Practice for Instrumented Package Shock Testing for Determination of Package Performance. MIL-STD-810G, Environmental Test Methods and Engineering Guidelines, 2000, sect 516.6 Brogliato, B., "Nonsmooth Mechanics. Models, Dynamics and Control", Springer London, 2nd Edition, 1999. External links Response to mechanical shock, Department of Energy, Shock Response Spectrum, a primer, A Study in the Application of SRS, Mechanics Packaging Fracture mechanics Acceleration
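As described earlier in this article, a classical half-sine shock pulse is characterized by its peak acceleration and its duration. The sketch below generates such a pulse and integrates it to obtain the associated velocity change; the 50 g, 11 ms values and the function names are illustrative only, not taken from any particular test standard cited above.

```python
# A minimal sketch of a classical half-sine shock pulse and its velocity change.
# Numerical values are illustrative.

import math

G = 9.80665  # m/s^2, standard acceleration of gravity

def half_sine_pulse(peak_g, duration_s, n=1000):
    """Return (times, accelerations in m/s^2) for a half-sine shock pulse."""
    times = [duration_s * i / (n - 1) for i in range(n)]
    accel = [peak_g * G * math.sin(math.pi * t / duration_s) for t in times]
    return times, accel

def velocity_change(times, accel):
    """Trapezoidal integration of acceleration over time."""
    return sum((accel[i] + accel[i + 1]) / 2 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

# Example: a 50 g peak, 11 ms half-sine pulse.
t, a = half_sine_pulse(peak_g=50, duration_s=0.011)
print(f"velocity change ~ {velocity_change(t, a):.2f} m/s")
# Analytically, delta-v = 2 * a_peak * duration / pi for a half-sine pulse,
# about 3.4 m/s here, which the numerical integration reproduces.
```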
Shock (mechanics)
[ "Physics", "Materials_science", "Mathematics", "Engineering" ]
1,175
[ "Structural engineering", "Physical quantities", "Fracture mechanics", "Acceleration", "Quantity", "Materials science", "Materials degradation", "Mechanics", "Mechanical engineering", "Wikipedia categories named after physical quantities" ]
159,151
https://en.wikipedia.org/wiki/Chemical%20composition
A chemical composition specifies the identity, arrangement, and ratio of the chemical elements making up a compound by way of chemical and atomic bonds. Chemical formulas can be used to describe the relative amounts of elements present in a compound. For example, the chemical formula for water is H2O: this means that each molecule of water is constituted by 2 atoms of hydrogen (H) and 1 atom of oxygen (O). The chemical composition of water may be interpreted as a 2:1 ratio of hydrogen atoms to oxygen atoms. Different types of chemical formulas are used to convey composition information, such as an empirical or molecular formula. Nomenclature can be used to express not only the elements present in a compound but their arrangement within the molecules of the compound. In this way, compounds will have unique names which can describe their elemental composition. Composite mixture The chemical composition of a mixture can be defined as the distribution of the individual substances that constitute the mixture, called "components". In other words, it is equivalent to quantifying the concentration of each component. Because there are different ways to define the concentration of a component, there are also different ways to define the composition of a mixture. It may be expressed as molar fraction, volume fraction, mass fraction, molality, molarity or normality or mixing ratio. Chemical composition of a mixture can be represented graphically in plots like ternary plot and quaternary plot. References Chemical properties Analytical chemistry
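The 2:1 hydrogen-to-oxygen atom ratio in water corresponds to very different mass fractions, because the two elements have different atomic masses. The short sketch below makes that conversion explicit using approximate standard atomic weights; the dictionary-based formula representation is simply an assumed convention for the example.

```python
# A small sketch converting the atom ratio of a formula (here H2O) into
# elemental mass fractions, using approximate standard atomic weights.

ATOMIC_MASS = {"H": 1.008, "O": 15.999}  # g/mol, approximate values

def mass_fractions(formula_counts):
    """formula_counts: element symbol -> number of atoms per formula unit."""
    total = sum(ATOMIC_MASS[el] * n for el, n in formula_counts.items())
    return {el: ATOMIC_MASS[el] * n / total for el, n in formula_counts.items()}

water = {"H": 2, "O": 1}
for el, frac in mass_fractions(water).items():
    print(f"{el}: {frac:.3f}")

# The 2:1 atom ratio works out to roughly 11% hydrogen and 89% oxygen by mass.
```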
Chemical composition
[ "Chemistry" ]
293
[ "nan" ]
159,178
https://en.wikipedia.org/wiki/Stinking%20badges
"Stinkin' badges" is a paraphrase of a line of dialogue from the 1948 film The Treasure of the Sierra Madre. That line was in turn derived from dialogue in the 1927 novel of the same name, which was the basis for the film. In 2005, the full quote from the film was chosen as #36 on the American Film Institute list, AFI's 100 Years...100 Movie Quotes. The shorter, better-known version of the quote was first heard in the 1967 episode of the TV series The Monkees "It's a Nice Place to Visit". It was also included in the 1974 Mel Brooks film Blazing Saddles, and has since been included in many other films and television shows. History The original version of the line appeared in B. Traven's novel The Treasure of the Sierra Madre (1927): The line was popularized by John Huston's 1948 film adaptation of the novel, which was altered from its content in the novel to meet the Motion Picture Production Code regulations severely limiting profanity in film. In one scene, a Mexican bandit leader named "Gold Hat" (portrayed by Alfonso Bedoya) tries to convince Fred C. Dobbs (Humphrey Bogart) that he and his company are Federales: Appearances in media Comics In one issue of the Teenage Mutant Ninja Turtles Archie comics, the Malignoid drones Scul and Bean meet with the nihilistic industrian Null to discuss the contract between him and the Malignoid queen Maligna. When Null insists on consolidating the contract through his lawyers, either Scul or Bean yells out: "Lawyers?! We don't need no stinkin' lawyers!!" In the Teenage Mutant Ninja Turtles series from Image Comics, Donatello paraphrases a variation of that sentence ("Plans?! I don't need no stinking plans!") whilst using his cyborg systems to restore a stripped-down aircar. Games In the game Leisure Suit Larry 6: Shape Up or Slip Out! (1993), the main protagonist has the line of dialogue, "Badges? Ve don' need no steenkin' badges!" to Cavaricchi, the aerobics instructor. Literature The Luis Valdez play I Don't Have to Show You No Stinkin' Badges (1987) draws its title from this quote, and makes a specific reference to Sierra Madre. In Eldest (2005), the second novel in Christopher Paolini's The Inheritance Cycle series, a cobbler named Loring eschews the use of barges as a means of human transportation, saying, "Barges? We don't want no stinking barges." In William S. Burroughs' report on the 1968 Democratic Convention for Esquire magazine, Burroughs has a cop demand to see the permit of the candidate's entourage. The response is: "Permits? We don't have any permits. We don't have to show you any stinking permits. You are talking suh to the future President of America." References External links English phrases Quotations from film Badges 1940s neologisms
Stinking badges
[ "Mathematics" ]
645
[ "Symbols", "Badges" ]
159,225
https://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac%20statistics
Fermi–Dirac statistics is a type of quantum statistics that applies to the physics of a system consisting of many non-interacting, identical particles that obey the Pauli exclusion principle. A result is the Fermi–Dirac distribution of particles over energy states. It is named after Enrico Fermi and Paul Dirac, each of whom derived the distribution independently in 1926. Fermi–Dirac statistics is a part of the field of statistical mechanics and uses the principles of quantum mechanics. Fermi–Dirac statistics applies to identical and indistinguishable particles with half-integer spin (1/2, 3/2, etc.), called fermions, in thermodynamic equilibrium. For the case of negligible interaction between particles, the system can be described in terms of single-particle energy states. A result is the Fermi–Dirac distribution of particles over these states where no two particles can occupy the same state, which has a considerable effect on the properties of the system. Fermi–Dirac statistics is most commonly applied to electrons, a type of fermion with spin 1/2. A counterpart to Fermi–Dirac statistics is Bose–Einstein statistics, which applies to identical and indistinguishable particles with integer spin (0, 1, 2, etc.) called bosons. In classical physics, Maxwell–Boltzmann statistics is used to describe particles that are identical and treated as distinguishable. For both Bose–Einstein and Maxwell–Boltzmann statistics, more than one particle can occupy the same state, unlike Fermi–Dirac statistics. History Before the introduction of Fermi–Dirac statistics in 1926, understanding some aspects of electron behavior was difficult due to seemingly contradictory phenomena. For example, the electronic heat capacity of a metal at room temperature seemed to come from 100 times fewer electrons than were in the electric current. It was also difficult to understand why the emission currents generated by applying high electric fields to metals at room temperature were almost independent of temperature. The difficulty encountered by the Drude model, the electronic theory of metals at that time, was due to considering that electrons were (according to classical statistics theory) all equivalent. In other words, it was believed that each electron contributed to the specific heat an amount on the order of the Boltzmann constant kB. This problem remained unsolved until the development of Fermi–Dirac statistics. Fermi–Dirac statistics was first published in 1926 by Enrico Fermi and Paul Dirac. According to Max Born, Pascual Jordan developed in 1925 the same statistics, which he called Pauli statistics, but it was not published in a timely manner. According to Dirac, it was first studied by Fermi, and Dirac called it "Fermi statistics" and the corresponding particles "fermions". Fermi–Dirac statistics was applied in 1926 by Ralph Fowler to describe the collapse of a star to a white dwarf. In 1927 Arnold Sommerfeld applied it to electrons in metals and developed the free electron model, and in 1928 Fowler and Lothar Nordheim applied it to field electron emission from metals. Fermi–Dirac statistics continue to be an important part of physics. Fermi–Dirac distribution For a system of identical fermions in thermodynamic equilibrium, the average number of fermions in a single-particle state $i$ is given by the Fermi–Dirac (F–D) distribution:

$\bar{n}_i = \frac{1}{e^{(\varepsilon_i - \mu)/k_\mathrm{B}T} + 1}$

where $k_\mathrm{B}$ is the Boltzmann constant, $T$ is the absolute temperature, $\varepsilon_i$ is the energy of the single-particle state $i$, and $\mu$ is the total chemical potential. 
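A short numerical sketch of the distribution just defined follows. It evaluates the occupation for a given energy, chemical potential, and temperature, and shows the characteristic step at the chemical potential that sharpens as the temperature is lowered. The chemical-potential value of 5 eV is illustrative only (roughly the scale of a metallic Fermi level), and the function name is hypothetical.

```python
# A minimal sketch evaluating the Fermi-Dirac occupation written above,
# n(eps) = 1 / (exp((eps - mu) / (kB * T)) + 1). Energies in eV, temperature
# in kelvin; numerical values are illustrative only.

import math

K_B = 8.617333e-5  # Boltzmann constant in eV/K

def fermi_dirac(eps, mu, temperature):
    """Average occupation of a single-particle state of energy eps."""
    return 1.0 / (math.exp((eps - mu) / (K_B * temperature)) + 1.0)

mu = 5.0  # illustrative chemical potential, eV
for eps in (4.80, 4.95, 5.00, 5.05, 5.20):
    print(f"eps = {eps:.2f} eV -> n = {fermi_dirac(eps, mu, 300.0):.3f}")

# At eps = mu the occupation is exactly 1/2; well below mu it approaches 1,
# and well above mu it falls off like the Maxwell-Boltzmann factor
# exp(-(eps - mu) / (kB * T)), the classical limit discussed below.
```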
The Fermi–Dirac distribution is normalized by the condition $\sum_i \bar{n}_i = N$, which can be used to express the chemical potential as a function $\mu = \mu(T, N)$; $\mu$ can assume either a positive or negative value. At zero absolute temperature, $\mu$ is equal to the Fermi energy plus the potential energy per fermion, provided it is in a neighbourhood of positive spectral density. In the case of a spectral gap, such as for electrons in a semiconductor, $\mu$, the point of symmetry, is typically called the Fermi level or—for electrons—the electrochemical potential, and will be located in the middle of the gap. The Fermi–Dirac distribution is only valid if the number of fermions in the system is large enough so that adding one more fermion to the system has negligible effect on $\mu$. Since the Fermi–Dirac distribution was derived using the Pauli exclusion principle, which allows at most one fermion to occupy each possible state, a result is that $0 < \bar{n}_i < 1$. The variance of the number of particles in state $i$ can be calculated from the above expression for $\bar{n}_i$: $V(n_i) = k_\mathrm{B}T\,\frac{\partial \bar{n}_i}{\partial \mu} = \bar{n}_i\,(1 - \bar{n}_i)$. Distribution of particles over energy From the Fermi–Dirac distribution of particles over states, one can find the distribution of particles over energy. The average number of fermions with energy $\varepsilon_i$ can be found by multiplying the Fermi–Dirac distribution $\bar{n}_i$ by the degeneracy $g_i$ (i.e. the number of states with energy $\varepsilon_i$): $\bar{n}(\varepsilon_i) = g_i\,\bar{n}_i$. When $g_i \geq 2$, it is possible that $\bar{n}(\varepsilon_i) > 1$, since there is more than one state that can be occupied by fermions with the same energy $\varepsilon_i$. When a quasi-continuum of energies $\varepsilon$ has an associated density of states $g(\varepsilon)$ (i.e. the number of states per unit energy range per unit volume), the average number of fermions per unit energy range per unit volume is $\bar{\mathcal{N}}(\varepsilon) = g(\varepsilon)\,F(\varepsilon)$, where $F(\varepsilon)$ is called the Fermi function and is the same function that is used for the Fermi–Dirac distribution $\bar{n}_i$: $F(\varepsilon) = \frac{1}{e^{(\varepsilon - \mu)/k_\mathrm{B}T} + 1}$, so that $\bar{\mathcal{N}}(\varepsilon) = \frac{g(\varepsilon)}{e^{(\varepsilon - \mu)/k_\mathrm{B}T} + 1}$. Quantum and classical regimes The Fermi–Dirac distribution approaches the Maxwell–Boltzmann distribution in the limit of high temperature and low particle density, without the need for any ad hoc assumptions: In the limit of low particle density, $\bar{n}_i = \frac{1}{e^{(\varepsilon_i - \mu)/k_\mathrm{B}T} + 1} \ll 1$, therefore $e^{(\varepsilon_i - \mu)/k_\mathrm{B}T} + 1 \gg 1$, or equivalently $e^{(\varepsilon_i - \mu)/k_\mathrm{B}T} \gg 1$. In that case, $\bar{n}_i \approx e^{-(\varepsilon_i - \mu)/k_\mathrm{B}T}$, which is the result from Maxwell–Boltzmann statistics. In the limit of high temperature, the particles are distributed over a large range of energy values, therefore the occupancy of each state (especially the high-energy ones with $\varepsilon_i - \mu \gg k_\mathrm{B}T$) is again very small, $\bar{n}_i \ll 1$. This again reduces to Maxwell–Boltzmann statistics. The classical regime, where Maxwell–Boltzmann statistics can be used as an approximation to Fermi–Dirac statistics, is found by considering the situation that is far from the limit imposed by the Heisenberg uncertainty principle for a particle's position and momentum. For example, in semiconductor physics, when the density of states of the conduction band is much higher than the doping concentration, the energy gap between the conduction band and the Fermi level can be calculated using Maxwell–Boltzmann statistics. Otherwise, if the doping concentration is not negligible compared to the density of states of the conduction band, the Fermi–Dirac distribution should be used instead for an accurate calculation. It can then be shown that the classical situation prevails when the concentration of particles corresponds to an average interparticle separation $\bar{R}$ that is much greater than the average de Broglie wavelength $\bar{\lambda}$ of the particles: $\bar{R} \gg \bar{\lambda} \approx \frac{h}{\sqrt{3 m k_\mathrm{B} T}}$, where $h$ is the Planck constant, and $m$ is the mass of a particle. For the case of conduction electrons in a typical metal at $T$ = 300 K (i.e. approximately room temperature), the system is far from the classical regime because $\bar{R} \ll \bar{\lambda}$. This is due to the small mass of the electron and the high concentration (i.e. 
small ) of conduction electrons in the metal. Thus Fermi–Dirac statistics is needed for conduction electrons in a typical metal. Another example of a system that is not in the classical regime is the system that consists of the electrons of a star that has collapsed to a white dwarf. Although the temperature of white dwarf is high (typically = on its surface), its high electron concentration and the small mass of each electron precludes using a classical approximation, and again Fermi–Dirac statistics is required. Derivations Grand canonical ensemble The Fermi–Dirac distribution, which applies only to a quantum system of non-interacting fermions, is easily derived from the grand canonical ensemble. In this ensemble, the system is able to exchange energy and exchange particles with a reservoir (temperature T and chemical potential ΞΌ fixed by the reservoir). Due to the non-interacting quality, each available single-particle level (with energy level Ο΅) forms a separate thermodynamic system in contact with the reservoir. In other words, each single-particle level is a separate, tiny grand canonical ensemble. By the Pauli exclusion principle, there are only two possible microstates for the single-particle level: no particle (energy E = 0), or one particle (energy E = Ξ΅). The resulting partition function for that single-particle level therefore has just two terms: and the average particle number for that single-particle level substate is given by This result applies for each single-particle level, and thus gives the Fermi–Dirac distribution for the entire state of the system. The variance in particle number (due to thermal fluctuations) may also be derived (the particle number has a simple Bernoulli distribution): This quantity is important in transport phenomena such as the Mott relations for electrical conductivity and thermoelectric coefficient for an electron gas, where the ability of an energy level to contribute to transport phenomena is proportional to . Canonical ensemble It is also possible to derive Fermi–Dirac statistics in the canonical ensemble. Consider a many-particle system composed of N identical fermions that have negligible mutual interaction and are in thermal equilibrium. Since there is negligible interaction between the fermions, the energy of a state of the many-particle system can be expressed as a sum of single-particle energies: where is called the occupancy number and is the number of particles in the single-particle state with energy . The summation is over all possible single-particle states . The probability that the many-particle system is in the state is given by the normalized canonical distribution: where , is called the Boltzmann factor, and the summation is over all possible states of the many-particle system. The average value for an occupancy number is Note that the state of the many-particle system can be specified by the particle occupancy of the single-particle states, i.e. by specifying so that and the equation for becomes where the summation is over all combinations of values of which obey the Pauli exclusion principle, and = 0 or for each . Furthermore, each combination of values of satisfies the constraint that the total number of particles is : Rearranging the summations, where the upper index on the summation sign indicates that the sum is not over and is subject to the constraint that the total number of particles associated with the summation is . 
Note that still depends on through the constraint, since in one case and is evaluated with while in the other case and is evaluated with To simplify the notation and to clearly indicate that still depends on through define so that the previous expression for can be rewritten and evaluated in terms of the : The following approximation will be used to find an expression to substitute for : where If the number of particles is large enough so that the change in the chemical potential is very small when a particle is added to the system, then Applying the exponential function to both sides, substituting for and rearranging, Substituting the above into the equation for and using a previous definition of to substitute for , results in the Fermi–Dirac distribution: Like the Maxwell–Boltzmann distribution and the Bose–Einstein distribution, the Fermi–Dirac distribution can also be derived by the Darwin–Fowler method of mean values. Microcanonical ensemble A result can be achieved by directly analyzing the multiplicities of the system and using Lagrange multipliers. Suppose we have a number of energy levels, labeled by index i, each level having energy Ξ΅i and containing a total of ni particles. Suppose each level contains gi distinct sublevels, all of which have the same energy, and which are distinguishable. For example, two particles may have different momenta (i.e. their momenta may be along different directions), in which case they are distinguishable from each other, yet they can still have the same energy. The value of gi associated with level i is called the "degeneracy" of that energy level. The Pauli exclusion principle states that only one fermion can occupy any such sublevel. The number of ways of distributing ni indistinguishable particles among the gi sublevels of an energy level, with a maximum of one particle per sublevel, is given by the binomial coefficient, using its combinatorial interpretation: For example, distributing two particles in three sublevels will give population numbers of 110, 101, or 011 for a total of three ways which equalsΒ 3!/(2!1!). The number of ways that a set of occupation numbers ni can be realized is the product of the ways that each individual energy level can be populated: Following the same procedure used in deriving the Maxwell–Boltzmann statistics, we wish to find the set of ni for which W is maximized, subject to the constraint that there be a fixed number of particles and a fixed energy. We constrain our solution using Lagrange multipliers forming the function: Using Stirling's approximation for the factorials, taking the derivative with respect to ni, setting the result to zero, and solving for ni yields the Fermi–Dirac population numbers: By a process similar to that outlined in the Maxwell–Boltzmann statistics article, it can be shown thermodynamically that and , so that finally, the probability that a state will be occupied is See also Grand canonical ensemble Pauli exclusion principle Complete Fermi-Dirac integral Fermi level Fermi gas Maxwell–Boltzmann statistics Bose–Einstein statistics Parastatistics Logistic function Sigmoid function Notes References Further reading Statistical mechanics
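The "Quantum and classical regimes" discussion above lends itself to a short back-of-the-envelope check. The Python sketch below compares the average thermal de Broglie wavelength with the mean interparticle separation for conduction electrons at 300 K; the electron density used (8.5e28 per cubic metre, roughly that of copper) is an assumed, illustrative figure rather than a value stated in the article, and the factor of 10 in the final test is an arbitrary stand-in for "much greater than".

```python
import math

H = 6.62607015e-34       # Planck constant, J*s
K_B = 1.380649e-23       # Boltzmann constant, J/K
M_E = 9.1093837015e-31   # electron mass, kg

def de_broglie_thermal(mass_kg: float, temperature_k: float) -> float:
    """Average thermal de Broglie wavelength, lambda ~ h / sqrt(3 m k_B T)."""
    return H / math.sqrt(3.0 * mass_kg * K_B * temperature_k)

def mean_separation(number_density_per_m3: float) -> float:
    """Average interparticle separation, R ~ n**(-1/3)."""
    return number_density_per_m3 ** (-1.0 / 3.0)

if __name__ == "__main__":
    t = 300.0             # room temperature, K
    n_electrons = 8.5e28  # assumed conduction-electron density (copper-like), m^-3
    lam = de_broglie_thermal(M_E, t)
    r = mean_separation(n_electrons)
    print(f"de Broglie wavelength ~ {lam * 1e9:.2f} nm")
    print(f"mean separation       ~ {r * 1e9:.2f} nm")
    print("classical regime?     ", r > 10 * lam)
```

With these numbers the separation comes out more than an order of magnitude smaller than the wavelength, consistent with the article's conclusion that Fermi–Dirac statistics, not the classical approximation, is required for conduction electrons in a metal.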
Fermi–Dirac statistics
[ "Physics" ]
2,853
[ "Statistical mechanics" ]
159,266
https://en.wikipedia.org/wiki/Gene%20expression
Gene expression is the process by which information from a gene is used in the synthesis of a functional gene product, either a protein or a non-coding RNA, which ultimately affects a phenotype. These products are often proteins, but in non-protein-coding genes such as transfer RNA (tRNA) and small nuclear RNA (snRNA), the product is a functional non-coding RNA. The process of gene expression is used by all known life—eukaryotes (including multicellular organisms), prokaryotes (bacteria and archaea), and viruses—to generate the macromolecular machinery for life. In genetics, gene expression is the most fundamental level at which the genotype gives rise to the phenotype, i.e. an observable trait. The genetic information stored in DNA represents the genotype, whereas the phenotype results from the "interpretation" of that information. Such phenotypes are often displayed by the synthesis of proteins that control the organism's structure and development, or that act as enzymes catalyzing specific metabolic pathways. All steps in the gene expression process may be modulated (regulated), including the transcription, RNA splicing, translation, and post-translational modification of a protein. Regulation of gene expression gives control over the timing, location, and amount of a given gene product (protein or ncRNA) present in a cell and can have a profound effect on cellular structure and function. Regulation of gene expression is the basis for cellular differentiation, development, morphogenesis and the versatility and adaptability of any organism. Gene regulation may therefore serve as a substrate for evolutionary change. Mechanism Transcription The production of an RNA copy from a DNA strand is called transcription, and is performed by RNA polymerases, which add one ribonucleotide at a time to a growing RNA strand according to the complementarity of the nucleotide bases. This RNA is complementary to the template 3′ → 5′ DNA strand, except that thymines (T) are replaced with uracils (U) in the RNA (apart from occasional transcription errors). In bacteria, transcription is carried out by a single type of RNA polymerase, which needs to bind a DNA sequence called a Pribnow box with the help of the sigma factor protein (σ factor) to start transcription. In eukaryotes, transcription is performed in the nucleus by three types of RNA polymerases, each of which needs a special DNA sequence called the promoter and a set of DNA-binding proteins—transcription factors—to initiate the process (see regulation of transcription below). RNA polymerase I is responsible for transcription of ribosomal RNA (rRNA) genes. RNA polymerase II (Pol II) transcribes all protein-coding genes but also some non-coding RNAs (e.g., snRNAs, snoRNAs or long non-coding RNAs). RNA polymerase III transcribes 5S rRNA, transfer RNA (tRNA) genes, and some small non-coding RNAs (e.g., 7SK). Transcription ends when the polymerase encounters a sequence called the terminator. mRNA processing While transcription of prokaryotic protein-coding genes creates messenger RNA (mRNA) that is ready for translation into protein, transcription of eukaryotic genes leaves a primary transcript of RNA (pre-RNA), which first has to undergo a series of modifications to become a mature RNA. Types and steps involved in the maturation processes vary between coding and non-coding pre-RNAs; i.e. even though pre-RNA molecules for both mRNA and tRNA undergo splicing, the steps and machinery involved are different. 
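As an illustrative aside on the transcription step just described, the toy Python sketch below builds the RNA complementary to a DNA template strand, substituting uracil where thymine would pair in DNA. The template sequence is invented, and real transcription initiation, elongation and termination are of course far more involved than this character-level mapping.

```python
# Complement of each template-strand base in the growing RNA (note U instead of T).
DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3to5: str) -> str:
    """Build the RNA (5'->3') complementary to a DNA template strand given 3'->5'."""
    return "".join(DNA_TO_RNA[base] for base in template_3to5.upper())

if __name__ == "__main__":
    template = "TACGGCATTAC"     # made-up template strand, written 3'->5'
    print(transcribe(template))  # -> AUGCCGUAAUG
```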
The processing of non-coding RNA is described below (non-coding RNA maturation). The processing of pre-mRNA includes 5′ capping, which is a set of enzymatic reactions that add 7-methylguanosine (m7G) to the 5′ end of pre-mRNA and thus protect the RNA from degradation by exonucleases. The m7G cap is then bound by the cap-binding complex heterodimer (CBP20/CBP80), which aids in mRNA export to the cytoplasm and also protects the RNA from decapping. Another modification is 3′ cleavage and polyadenylation. These occur if a polyadenylation signal sequence (5′-AAUAAA-3′) is present in the pre-mRNA, usually between the protein-coding sequence and the terminator. The pre-mRNA is first cleaved and then a series of ~200 adenines (A) are added to form the poly(A) tail, which protects the RNA from degradation. The poly(A) tail is bound by multiple poly(A)-binding proteins (PABPs) necessary for mRNA export and translation re-initiation. In the inverse process of deadenylation, poly(A) tails are shortened by the CCR4-Not 3′-5′ exonuclease, which often leads to full transcript decay. A very important modification of eukaryotic pre-mRNA is RNA splicing. The majority of eukaryotic pre-mRNAs consist of alternating segments called exons and introns. During the process of splicing, an RNA–protein catalytic complex known as the spliceosome catalyzes two transesterification reactions, which remove an intron and release it in the form of a lariat structure, and then splice neighbouring exons together. In certain cases, some introns or exons can be either removed or retained in the mature mRNA. This so-called alternative splicing creates a series of different transcripts originating from a single gene. Because these transcripts can be potentially translated into different proteins, splicing extends the complexity of eukaryotic gene expression and the size of a species' proteome. Extensive RNA processing may be an evolutionary advantage made possible by the nucleus of eukaryotes. In prokaryotes, transcription and translation happen together, whilst in eukaryotes, the nuclear membrane separates the two processes, giving time for RNA processing to occur. Non-coding RNA maturation In most organisms non-coding genes (ncRNA) are transcribed as precursors that undergo further processing. In the case of ribosomal RNAs (rRNA), they are often transcribed as a pre-rRNA that contains one or more rRNAs. The pre-rRNA is cleaved and modified (2′-O-methylation and pseudouridine formation) at specific sites by approximately 150 different small nucleolus-restricted RNA species, called snoRNAs. SnoRNAs associate with proteins, forming snoRNPs. While the snoRNA part base-pairs with the target RNA and thus positions the modification at a precise site, the protein part performs the catalytic reaction. In eukaryotes, in particular, a snoRNP called RNase MRP cleaves the 45S pre-rRNA into the 28S, 5.8S, and 18S rRNAs. The rRNA and RNA processing factors form large aggregates called the nucleolus. In the case of transfer RNA (tRNA), for example, the 5′ sequence is removed by RNase P, whereas the 3′ end is removed by the tRNase Z enzyme and the non-templated 3′ CCA tail is added by a nucleotidyl transferase. In the case of micro RNA (miRNA), miRNAs are first transcribed as primary transcripts or pri-miRNA with a cap and poly-A tail and processed to short, 70-nucleotide stem-loop structures known as pre-miRNA in the cell nucleus by the enzymes Drosha and Pasha. 
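The outcome of the splicing step described above can be caricatured in a few lines of code. The Python sketch below simply joins exons once intron coordinates are already known; locating real splice sites is the job of the spliceosome, and the sequences and coordinates here are made up (though the toy intron does follow the common convention of beginning with GU and ending with AG).

```python
def splice(pre_mrna: str, introns: list[tuple[int, int]]) -> str:
    """Join exons by removing the given intron intervals (0-based, end-exclusive)."""
    mature = []
    position = 0
    for start, end in sorted(introns):
        mature.append(pre_mrna[position:start])  # keep the exon before this intron
        position = end                           # skip over the intron itself
    mature.append(pre_mrna[position:])           # keep the final exon
    return "".join(mature)

if __name__ == "__main__":
    pre = "AUGGCU" + "GUAAGUAG" + "CCUAA"    # made-up exon / intron / exon layout
    print(splice(pre, introns=[(6, 14)]))    # -> "AUGGCUCCUAA"
```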
After being exported from the nucleus, the pre-miRNA is then processed to mature miRNAs in the cytoplasm by interaction with the endonuclease Dicer, which also initiates the formation of the RNA-induced silencing complex (RISC), composed of the Argonaute protein. Even snRNAs and snoRNAs themselves undergo a series of modifications before they become part of a functional RNP complex. This is done either in the nucleoplasm or in specialized compartments called Cajal bodies. Their bases are methylated or pseudouridylated by a group of small Cajal body-specific RNAs (scaRNAs), which are structurally similar to snoRNAs. RNA export In eukaryotes most mature RNA must be exported to the cytoplasm from the nucleus. While some RNAs function in the nucleus, many RNAs are transported through the nuclear pores and into the cytosol. Export of RNAs requires association with specific proteins known as exportins. Specific exportin molecules are responsible for the export of a given RNA type. mRNA transport also requires the correct association with the Exon Junction Complex (EJC), which ensures that correct processing of the mRNA is completed before export. In some cases RNAs are additionally transported to a specific part of the cytoplasm, such as a synapse; they are then towed by motor proteins that bind through linker proteins to specific sequences (called "zipcodes") on the RNA. Translation For some non-coding RNA, the mature RNA is the final gene product. In the case of messenger RNA (mRNA) the RNA is an information carrier coding for the synthesis of one or more proteins. mRNA carrying a single protein sequence (common in eukaryotes) is monocistronic whilst mRNA carrying multiple protein sequences (common in prokaryotes) is known as polycistronic. Every mRNA consists of three parts: a 5′ untranslated region (5′UTR), a protein-coding region or open reading frame (ORF), and a 3′ untranslated region (3′UTR). The coding region carries the information for protein synthesis encoded by the genetic code as a series of triplets. Each triplet of nucleotides of the coding region is called a codon and corresponds to a binding site complementary to an anticodon triplet in transfer RNA. Transfer RNAs with the same anticodon sequence always carry an identical type of amino acid. Amino acids are then chained together by the ribosome according to the order of triplets in the coding region. The ribosome helps transfer RNA bind to messenger RNA, takes the amino acid from each transfer RNA, and links the amino acids into a protein chain that, at this stage, still lacks structure. Each mRNA molecule is translated into many protein molecules, on average ~2800 in mammals. In prokaryotes translation generally occurs at the point of transcription (co-transcriptionally), often using a messenger RNA that is still in the process of being created. In eukaryotes translation can occur in a variety of regions of the cell depending on where the protein being synthesized is destined to function. Major locations are the cytoplasm for soluble cytoplasmic proteins and the membrane of the endoplasmic reticulum for proteins that are destined for export from the cell or for insertion into a cell membrane. Proteins that are supposed to be produced at the endoplasmic reticulum are recognised part-way through the translation process. This is governed by the signal recognition particle—a protein that binds to the ribosome and directs it to the endoplasmic reticulum when it finds a signal peptide on the growing (nascent) amino acid chain. 
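To make the codon-by-codon logic of translation concrete, here is a minimal Python sketch that uses a small excerpt of the standard genetic code (the full code has 64 codons); the mRNA sequence is invented, and ribosome mechanics, tRNA charging and initiation factors are all ignored.

```python
# A small excerpt of the standard genetic code (the full table has 64 codons).
CODON_TABLE = {
    "AUG": "M",  # start codon, methionine
    "UUU": "F", "UUC": "F",
    "GGC": "G", "GGA": "G",
    "UGG": "W",
    "AAA": "K", "AAG": "K",
    "UAA": "*", "UAG": "*", "UGA": "*",  # stop codons
}

def translate(orf: str) -> str:
    """Translate an open reading frame, codon by codon, until a stop codon."""
    protein = []
    for i in range(0, len(orf) - 2, 3):
        amino_acid = CODON_TABLE.get(orf[i:i + 3], "X")  # "X" marks codons not in the excerpt
        if amino_acid == "*":
            break
        protein.append(amino_acid)
    return "".join(protein)

if __name__ == "__main__":
    print(translate("AUGUUUGGCAAGUGGUAA"))  # -> "MFGKW"
```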
Folding Each protein exists as an unfolded polypeptide or random coil when translated from a sequence of mRNA into a linear chain of amino acids. This polypeptide lacks any developed three-dimensional structure (the left hand side of the neighboring figure). The polypeptide then folds into its characteristic and functional three-dimensional structure from a random coil. Amino acids interact with each other to produce a well-defined three-dimensional structure, the folded protein (the right hand side of the figure) known as the native state. The resulting three-dimensional structure is determined by the amino acid sequence (Anfinsen's dogma). The correct three-dimensional structure is essential to function, although some parts of functional proteins may remain unfolded. Failure to fold into the intended shape usually produces inactive proteins with different properties including toxic prions. Several neurodegenerative and other diseases are believed to result from the accumulation of misfolded proteins. Many allergies are caused by the folding of the proteins, for the immune system does not produce antibodies for certain protein structures. Enzymes called chaperones assist the newly formed protein to attain (fold into) the 3-dimensional structure it needs to function. Similarly, RNA chaperones help RNAs attain their functional shapes. Assisting protein folding is one of the main roles of the endoplasmic reticulum in eukaryotes. Translocation Secretory proteins of eukaryotes or prokaryotes must be translocated to enter the secretory pathway. Newly synthesized proteins are directed to the eukaryotic Sec61 or prokaryotic SecYEG translocation channel by signal peptides. The efficiency of protein secretion in eukaryotes is very dependent on the signal peptide which has been used. Protein transport Many proteins are destined for other parts of the cell than the cytosol and a wide range of signalling sequences or (signal peptides) are used to direct proteins to where they are supposed to be. In prokaryotes this is normally a simple process due to limited compartmentalisation of the cell. However, in eukaryotes there is a great variety of different targeting processes to ensure the protein arrives at the correct organelle. Not all proteins remain within the cell and many are exported, for example, digestive enzymes, hormones and extracellular matrix proteins. In eukaryotes the export pathway is well developed and the main mechanism for the export of these proteins is translocation to the endoplasmic reticulum, followed by transport via the Golgi apparatus. Regulation of gene expression Regulation of gene expression is the control of the amount and timing of appearance of the functional product of a gene. Control of expression is vital to allow a cell to produce the gene products it needs when it needs them; in turn, this gives cells the flexibility to adapt to a variable environment, external signals, damage to the cell, and other stimuli. More generally, gene regulation gives the cell control over all structure and function, and is the basis for cellular differentiation, morphogenesis and the versatility and adaptability of any organism. Numerous terms are used to describe types of genes depending on how they are regulated; these include: A constitutive gene is a gene that is transcribed continually as opposed to a facultative gene, which is only transcribed when needed. 
A housekeeping gene is a gene that is required to maintain basic cellular function and so is typically expressed in all cell types of an organism. Examples include actin, GAPDH and ubiquitin. Some housekeeping genes are transcribed at a relatively constant rate and these genes can be used as a reference point in experiments to measure the expression rates of other genes. A facultative gene is a gene only transcribed when needed, as opposed to a constitutive gene. An inducible gene is a gene whose expression is either responsive to environmental change or dependent on the position in the cell cycle. Any step of gene expression may be modulated, from the DNA-RNA transcription step to post-translational modification of a protein. The stability of the final gene product, whether it is RNA or protein, also contributes to the expression level of the gene—an unstable product results in a low expression level. In general gene expression is regulated through changes in the number and type of interactions between molecules that collectively influence transcription of DNA and translation of RNA. Some simple examples of where gene expression is important are: Control of insulin expression so it gives a signal for blood glucose regulation. X chromosome inactivation in female mammals to prevent an "overdose" of the genes it contains. Cyclin expression levels control progression through the eukaryotic cell cycle. Transcriptional regulation Regulation of transcription can be broken down into three main routes of influence: genetic (direct interaction of a control factor with the gene), modulation (interaction of a control factor with the transcription machinery) and epigenetic (non-sequence changes in DNA structure that influence transcription). Direct interaction with DNA is the simplest and the most direct method by which a protein changes transcription levels. Genes often have several protein binding sites around the coding region with the specific function of regulating transcription. There are many classes of regulatory DNA binding sites known as enhancers, insulators and silencers. The mechanisms for regulating transcription are varied, from blocking key binding sites on the DNA for RNA polymerase to acting as an activator and promoting transcription by assisting RNA polymerase binding. The activity of transcription factors is further modulated by intracellular signals causing protein post-translational modification including phosphorylation, acetylation, or glycosylation. These changes influence a transcription factor's ability to bind, directly or indirectly, to promoter DNA, to recruit RNA polymerase, or to favor elongation of a newly synthesized RNA molecule. The nuclear membrane in eukaryotes allows further regulation of transcription factors by the duration of their presence in the nucleus, which is regulated by reversible changes in their structure and by binding of other proteins. Environmental stimuli or endocrine signals may cause modification of regulatory proteins eliciting cascades of intracellular signals, which result in regulation of gene expression. It has become apparent that there is a significant influence of non-DNA-sequence-specific effects on transcription. These effects are referred to as epigenetic and involve the higher order structure of DNA, non-sequence-specific DNA binding proteins and chemical modification of DNA. In general epigenetic effects alter the accessibility of DNA to proteins and so modulate transcription. 
In eukaryotes the structure of chromatin, controlled by the histone code, regulates access to DNA with significant impacts on the expression of genes in euchromatin and heterochromatin areas. Enhancers, transcription factors, mediator complex and DNA loops in mammalian transcription Gene expression in mammals is regulated by many cis-regulatory elements, including core promoters and promoter-proximal elements that are located near the transcription start sites of genes, upstream on the DNA (towards the 5′ region of the sense strand). Other important cis-regulatory modules are localized in DNA regions that are distant from the transcription start sites. These include enhancers, silencers, insulators and tethering elements. Enhancers and their associated transcription factors have a leading role in the regulation of gene expression. Enhancers are genome regions that regulate genes. Enhancers control cell-type-specific gene expression programs, most often by looping over long distances to come into physical proximity with the promoters of their target genes. Multiple enhancers, each often tens or hundreds of thousands of nucleotides distant from their target genes, loop to their target gene promoters and coordinate with each other to control gene expression. The illustration shows an enhancer looping around to come into proximity with the promoter of a target gene. The loop is stabilized by a dimer of a connector protein (e.g. a dimer of CTCF or YY1). One member of the dimer is anchored to its binding motif on the enhancer and the other member is anchored to its binding motif on the promoter (represented by the red zigzags in the illustration). Several cell function-specific transcription factors (among the approximately 1,600 transcription factors in a human cell) generally bind to specific motifs on an enhancer. A small combination of these enhancer-bound transcription factors, when brought close to a promoter by a DNA loop, governs the transcription level of the target gene. Mediator (a complex usually consisting of about 26 proteins in an interacting structure) communicates regulatory signals from enhancer DNA-bound transcription factors directly to the RNA polymerase II (pol II) enzyme bound to the promoter. Enhancers, when active, are generally transcribed from both strands of DNA with RNA polymerases acting in two different directions, producing two eRNAs as illustrated in the figure. An inactive enhancer may be bound by an inactive transcription factor. Phosphorylation of the transcription factor may activate it, and that activated transcription factor may then activate the enhancer to which it is bound (see the small red star representing phosphorylation of a transcription factor bound to an enhancer in the illustration). An activated enhancer begins transcription of its RNA before activating transcription of messenger RNA from its target gene. DNA methylation and demethylation in transcriptional regulation DNA methylation is a widespread mechanism for epigenetic influence on gene expression; it is seen in bacteria and eukaryotes and has roles in heritable transcription silencing and transcription regulation. Methylation most often occurs on a cytosine (see Figure). Methylation of cytosine primarily occurs in dinucleotide sequences where a cytosine is followed by a guanine, a CpG site. The number of CpG sites in the human genome is about 28 million. Depending on the type of cell, about 70% of the CpG sites have a methylated cytosine. 
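As a small computational aside on the CpG sites discussed above, the following Python sketch locates CpG dinucleotides in a DNA string and computes the observed/expected CpG ratio sometimes used as a rough indicator of a CpG island. The promoter fragment is made up, and the ratio here is only indicative, since practical CpG-island definitions also consider GC content and sequence length.

```python
def cpg_sites(sequence: str) -> list[int]:
    """Return the 0-based positions of every CpG dinucleotide in a DNA sequence."""
    sequence = sequence.upper()
    return [i for i in range(len(sequence) - 1) if sequence[i:i + 2] == "CG"]

def observed_to_expected_cpg(sequence: str) -> float:
    """Observed/expected CpG ratio, a common rough indicator of a CpG island."""
    sequence = sequence.upper()
    n = len(sequence)
    c, g = sequence.count("C"), sequence.count("G")
    expected = (c * g) / n if n else 0.0
    return len(cpg_sites(sequence)) / expected if expected else 0.0

if __name__ == "__main__":
    promoter = "TTCGCGGGCGCTACGCGACGTTCGGA"  # made-up promoter fragment
    print("CpG positions:", cpg_sites(promoter))
    print("obs/exp ratio:", round(observed_to_expected_cpg(promoter), 2))
```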
Methylation of cytosine in DNA has a major role in regulating gene expression. Methylation of CpGs in a promoter region of a gene usually represses gene transcription while methylation of CpGs in the body of a gene increases expression. TET enzymes play a central role in demethylation of methylated cytosines. Demethylation of CpGs in a gene promoter by TET enzyme activity increases transcription of the gene. Transcriptional regulation in learning and memory In a rat, contextual fear conditioning (CFC) is a painful learning experience. Just one episode of CFC can result in a life-long fearful memory. After an episode of CFC, cytosine methylation is altered in the promoter regions of about 9.17% of all genes in the hippocampus neuron DNA of a rat. The hippocampus is where new memories are initially stored. After CFC about 500 genes have increased transcription (often due to demethylation of CpG sites in a promoter region) and about 1,000 genes have decreased transcription (often due to newly formed 5-methylcytosine at CpG sites in a promoter region). The pattern of induced and repressed genes within neurons appears to provide a molecular basis for forming the first transient memory of this training event in the hippocampus of the rat brain. Some specific mechanisms guiding new DNA methylations and new DNA demethylations in the hippocampus during memory establishment have been established (see for summary). One mechanism includes guiding the short isoform of the TET1 DNA demethylation enzyme, TET1s, to about 600 locations on the genome. The guidance is performed by association of TET1s with EGR1 protein, a transcription factor important in memory formation. Bringing TET1s to these locations initiates DNA demethylation at those sites, up-regulating associated genes. A second mechanism involves DNMT3A2, a splice-isoform of DNA methyltransferase DNMT3A, which adds methyl groups to cytosines in DNA. This isoform is induced by synaptic activity, and its location of action appears to be determined by histone post-translational modifications (a histone code). The resulting new messenger RNAs are then transported by messenger RNP particles (neuronal granules) to synapses of the neurons, where they can be translated into proteins affecting the activities of synapses. In particular, the brain-derived neurotrophic factor gene (BDNF) is known as a "learning gene". After CFC there was upregulation of BDNF gene expression, related to decreased CpG methylation of certain internal promoters of the gene, and this was correlated with learning. Transcriptional regulation in cancer The majority of gene promoters contain a CpG island with numerous CpG sites. When many of a gene's promoter CpG sites are methylated the gene becomes silenced. Colorectal cancers typically have 3 to 6 driver mutations and 33 to 66 hitchhiker or passenger mutations. However, transcriptional silencing may be of more importance than mutation in causing progression to cancer. For example, in colorectal cancers about 600 to 800 genes are transcriptionally silenced by CpG island methylation (see regulation of transcription in cancer). Transcriptional repression in cancer can also occur by other epigenetic mechanisms, such as altered expression of microRNAs. In breast cancer, transcriptional repression of BRCA1 may occur more frequently by over-transcribed microRNA-182 than by hypermethylation of the BRCA1 promoter (see Low expression of BRCA1 in breast and ovarian cancers). 
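The link drawn above between promoter CpG methylation and transcriptional silencing is, in practice, assessed from sequencing data. The Python sketch below computes a simple per-promoter methylation fraction from invented per-site read counts of the kind a bisulfite-sequencing experiment might yield; it illustrates only the arithmetic, not any specific published analysis pipeline, and the read counts and the informal "silenced-like" versus "active-like" labels are assumptions for the example.

```python
def promoter_methylation_fraction(calls: list[tuple[int, int]]) -> float:
    """Fraction of methylated CpG calls across a promoter.

    calls: per-CpG-site tuples of (methylated reads, total reads).
    """
    methylated = sum(m for m, _ in calls)
    total = sum(t for _, t in calls)
    return methylated / total if total else 0.0

if __name__ == "__main__":
    # Invented read counts for five CpG sites in two hypothetical promoters.
    silenced_like = [(18, 20), (19, 20), (20, 20), (17, 20), (18, 20)]
    active_like = [(1, 20), (0, 20), (2, 20), (1, 20), (0, 20)]
    print(round(promoter_methylation_fraction(silenced_like), 2))  # -> 0.92
    print(round(promoter_methylation_fraction(active_like), 2))    # -> 0.04
```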
Post-transcriptional regulation In eukaryotes, where export of RNA is required before translation is possible, nuclear export is thought to provide additional control over gene expression. All transport in and out of the nucleus is via the nuclear pore and transport is controlled by a wide range of importin and exportin proteins. Expression of a gene coding for a protein is only possible if the messenger RNA carrying the code survives long enough to be translated. In a typical cell, an RNA molecule is only stable if specifically protected from degradation. RNA degradation has particular importance in regulation of expression in eukaryotic cells where mRNA has to travel significant distances before being translated. In eukaryotes, RNA is stabilised by certain post-transcriptional modifications, particularly the 5′ cap and poly-adenylated tail. Intentional degradation of mRNA is used not just as a defence mechanism from foreign RNA (normally from viruses) but also as a route of mRNA destabilisation. If an mRNA molecule has a complementary sequence to a small interfering RNA then it is targeted for destruction via the RNA interference pathway. Three prime untranslated regions and microRNAs Three prime untranslated regions (3′UTRs) of messenger RNAs (mRNAs) often contain regulatory sequences that post-transcriptionally influence gene expression. Such 3′-UTRs often contain binding sites both for microRNAs (miRNAs) and for regulatory proteins. By binding to specific sites within the 3′-UTR, miRNAs can decrease gene expression of various mRNAs by either inhibiting translation or directly causing degradation of the transcript. The 3′-UTR also may have silencer regions that bind repressor proteins that inhibit the expression of an mRNA. The 3′-UTR often contains microRNA response elements (MREs). MREs are sequences to which miRNAs bind. These are prevalent motifs within 3′-UTRs. Among all regulatory motifs within the 3′-UTRs (e.g. including silencer regions), MREs make up about half of the motifs. As of 2014, the miRBase web site, an archive of miRNA sequences and annotations, listed 28,645 entries in 233 biological species. Of these, 1,881 miRNAs were in annotated human miRNA loci. miRNAs were predicted to have an average of about four hundred target mRNAs (affecting expression of several hundred genes). Friedman et al. estimate that >45,000 miRNA target sites within human mRNA 3′UTRs are conserved above background levels, and >60% of human protein-coding genes have been under selective pressure to maintain pairing to miRNAs. Direct experiments show that a single miRNA can reduce the stability of hundreds of unique mRNAs. Other experiments show that a single miRNA may repress the production of hundreds of proteins, but that this repression often is relatively mild (less than 2-fold). The effects of miRNA dysregulation of gene expression seem to be important in cancer. For instance, in gastrointestinal cancers, nine miRNAs have been identified as epigenetically altered and effective in down-regulating DNA repair enzymes. The effects of miRNA dysregulation of gene expression also seem to be important in neuropsychiatric disorders, such as schizophrenia, bipolar disorder, major depression, Parkinson's disease, Alzheimer's disease and autism spectrum disorders. Translational regulation Direct regulation of translation is less prevalent than control of transcription or mRNA stability but is occasionally used. 
Inhibition of protein translation is a major target for toxins and antibiotics, so they can kill a cell by overriding its normal gene expression control. Protein synthesis inhibitors include the antibiotic neomycin and the toxin ricin. Post-translational modifications Post-translational modifications (PTMs) are covalent modifications to proteins. Like RNA splicing, they help to significantly diversify the proteome. These modifications are usually catalyzed by enzymes. Additionally, processes like covalent additions to amino acid side chain residues can often be reversed by other enzymes. However, some, like the proteolytic cleavage of the protein backbone, are irreversible. PTMs play many important roles in the cell. For example, phosphorylation is primarily involved in activating and deactivating proteins and in signaling pathways. PTMs are involved in transcriptional regulation: an important function of acetylation and methylation is histone tail modification, which alters how accessible DNA is for transcription. They can also be seen in the immune system, where glycosylation plays a key role. One type of PTM can initiate another type of PTM, as can be seen in how ubiquitination tags proteins for degradation through proteolysis. Proteolysis, other than being involved in breaking down proteins, is also important in activating and deactivating them, and in regulating biological processes such as DNA transcription and cell death. Measurement Measuring gene expression is an important part of many life sciences, as the ability to quantify the level at which a particular gene is expressed within a cell, tissue or organism can provide a lot of valuable information. For example, measuring gene expression can: Identify viral infection of a cell (viral protein expression). Determine an individual's susceptibility to cancer (oncogene expression). Find if a bacterium is resistant to penicillin (beta-lactamase expression). Gene expression profiling evaluates a panel of genes to help understand the fundamental mechanism of a cell. This is increasingly used in cancer therapy to target specific chemotherapy. (See RNA-Seq and DNA_microarray for details.) Similarly, the analysis of the location of protein expression is a powerful tool, and this can be done on an organismal or cellular scale. Investigation of localization is particularly important for the study of development in multicellular organisms and as an indicator of protein function in single cells. Ideally, measurement of expression is done by detecting the final gene product (for many genes, this is the protein); however, it is often easier to detect one of the precursors, typically mRNA and to infer gene-expression levels from these measurements. mRNA quantification Levels of mRNA can be quantitatively measured by northern blotting, which provides size and sequence information about the mRNA molecules. A sample of RNA is separated on an agarose gel and hybridized to a radioactively labeled RNA probe that is complementary to the target sequence. The radiolabeled RNA is then detected by an autoradiograph. Because the use of radioactive reagents makes the procedure time-consuming and potentially dangerous, alternative labeling and detection methods, such as digoxigenin and biotin chemistries, have been developed. Perceived disadvantages of Northern blotting are that large quantities of RNA are required and that quantification may not be completely accurate, as it involves measuring band strength in an image of a gel. 
On the other hand, the additional mRNA size information from the Northern blot allows the discrimination of alternately spliced transcripts. Another approach for measuring mRNA abundance is RT-qPCR. In this technique, reverse transcription is followed by quantitative PCR. Reverse transcription first generates a DNA template from the mRNA; this single-stranded template is called cDNA. The cDNA template is then amplified in the quantitative step, during which the fluorescence emitted by labeled hybridization probes or intercalating dyes changes as the DNA amplification process progresses. With a carefully constructed standard curve, qPCR can produce an absolute measurement of the number of copies of original mRNA, typically in units of copies per nanolitre of homogenized tissue or copies per cell. qPCR is very sensitive (detection of a single mRNA molecule is theoretically possible), but can be expensive depending on the type of reporter used; fluorescently labeled oligonucleotide probes are more expensive than non-specific intercalating fluorescent dyes. For expression profiling, or high-throughput analysis of many genes within a sample, quantitative PCR may be performed for hundreds of genes simultaneously in the case of low-density arrays. A second approach is the hybridization microarray. A single array or "chip" may contain probes to determine transcript levels for every known gene in the genome of one or more organisms. Alternatively, "tag based" technologies like Serial analysis of gene expression (SAGE) and RNA-Seq, which can provide a relative measure of the cellular concentration of different mRNAs, can be used. An advantage of tag-based methods is the "open architecture", allowing for the exact measurement of any transcript, with a known or unknown sequence. Next-generation sequencing (NGS) such as RNA-Seq is another approach, producing vast quantities of sequence data that can be matched to a reference genome. Although NGS is comparatively time-consuming, expensive, and resource-intensive, it can identify single-nucleotide polymorphisms, splice-variants, and novel genes, and can also be used to profile expression in organisms for which little or no sequence information is available. RNA profiles in Wikipedia Profiles like these are found for almost all proteins listed in Wikipedia. They are generated by organizations such as the Genomics Institute of the Novartis Research Foundation and the European Bioinformatics Institute. Additional information can be found by searching their databases (for an example of the GLUT4 transporter pictured here, see citation). These profiles indicate the level of DNA expression (and hence RNA produced) of a certain protein in a certain tissue, and are color-coded accordingly in the images located in the Protein Box on the right side of each Wikipedia page. Protein quantification For genes encoding proteins, the expression level can be directly assessed by a number of methods with some clear analogies to the techniques for mRNA quantification. One of the most commonly used methods is to perform a Western blot against the protein of interest. This gives information on the size of the protein in addition to its identity. A sample (often cellular lysate) is separated on a polyacrylamide gel, transferred to a membrane and then probed with an antibody to the protein of interest. The antibody can either be conjugated to a fluorophore or to horseradish peroxidase for imaging and/or quantification. 
The gel-based nature of this assay makes quantification less accurate, but it has the advantage of being able to identify later modifications to the protein, for example proteolysis or ubiquitination, from changes in size. mRNA-protein correlation While transcription directly reflects gene expression, the copy number of mRNA molecules does not directly correlate with the number of protein molecules translated from mRNA. Quantification of both protein and mRNA permits a correlation of the two levels. Regulation on each step of gene expression can impact the correlation, as shown for regulation of translation or protein stability. Post-translational factors, such as protein transport in highly polar cells, can influence the measured mRNA-protein correlation as well. Localization Analysis of expression is not limited to quantification; localization can also be determined. mRNA can be detected with a suitably labelled complementary mRNA strand and protein can be detected via labelled antibodies. The probed sample is then observed by microscopy to identify where the mRNA or protein is. By replacing the gene with a new version fused to a green fluorescent protein marker or similar, expression may be directly quantified in live cells. This is done by imaging using a fluorescence microscope. It is very difficult to clone a GFP-fused protein into its native location in the genome without affecting expression levels, so this method often cannot be used to measure endogenous gene expression. It is, however, widely used to measure the expression of a gene artificially introduced into the cell, for example via an expression vector. By fusing a target protein to a fluorescent reporter, the protein's behavior, including its cellular localization and expression level, can be significantly changed. The enzyme-linked immunosorbent assay works by using antibodies immobilised on a microtiter plate to capture proteins of interest from samples added to the well. Using a detection antibody conjugated to an enzyme or fluorophore the quantity of bound protein can be accurately measured by fluorometric or colourimetric detection. The detection process is very similar to that of a Western blot, but by avoiding the gel steps more accurate quantification can be achieved. Expression system An expression system is a system specifically designed for the production of a gene product of choice. This is normally a protein although may also be RNA, such as tRNA or a ribozyme. An expression system consists of a gene, normally encoded by DNA, and the molecular machinery required to transcribe the DNA into mRNA and translate the mRNA into protein using the reagents provided. In the broadest sense this includes every living cell but the term is more normally used to refer to expression as a laboratory tool. An expression system is therefore often artificial in some manner. Expression systems are, however, a fundamentally natural process. Viruses are an excellent example where they replicate by using the host cell as an expression system for the viral proteins and genome. Inducible expression Doxycycline is also used in "Tet-on" and "Tet-off" tetracycline controlled transcriptional activation to regulate transgene expression in organisms and cell cultures. In nature In addition to these biological tools, certain naturally observed configurations of DNA (genes, promoters, enhancers, repressors) and the associated machinery itself are referred to as an expression system. 
This term is normally used in the case where a gene or set of genes is switched on under well defined conditions, for example, the simple repressor switch expression system in Lambda phage and the lac operator system in bacteria. Several natural expression systems are directly used or modified and used for artificial expression systems such as the Tet-on and Tet-off expression system. Gene networks Genes have sometimes been regarded as nodes in a network, with inputs being proteins such as transcription factors, and outputs being the level of gene expression. The node itself performs a function, and the operation of these functions have been interpreted as performing a kind of information processing within cells and determines cellular behavior. Gene networks can also be constructed without formulating an explicit causal model. This is often the case when assembling networks from large expression data sets. Covariation and correlation of expression is computed across a large sample of cases and measurements (often transcriptome or proteome data). The source of variation can be either experimental or natural (observational). There are several ways to construct gene expression networks, but one common approach is to compute a matrix of all pair-wise correlations of expression across conditions, time points, or individuals and convert the matrix (after thresholding at some cut-off value) into a graphical representation in which nodes represent genes, transcripts, or proteins and edges connecting these nodes represent the strength of association (see GeneNetwork GeneNetwork 2). Techniques and tools The following experimental techniques are used to measure gene expression and are listed in roughly chronological order, starting with the older, more established technologies. They are divided into two groups based on their degree of multiplexity. Low-to-mid-plex techniques: Reporter gene Northern blot Western blot Fluorescent in situ hybridization Reverse transcription PCR Higher-plex techniques: SAGE DNA microarray Tiling array RNA-Seq Gene expression databases Gene expression omnibus (GEO) at NCBI Expression Atlas at the EBI Bgee Bgee at the SIB Swiss Institute of Bioinformatics Mouse Gene Expression Database at the Jackson Laboratory CollecTF: a database of experimentally validated transcription factor-binding sites in Bacteria. COLOMBOS: collection of bacterial expression compendia. Many Microbe Microarrays Database: microbial Affymetrix data See also References External links Plant Transcription Factor Database and Plant Transcriptional Regulation Data and Analysis Platform Molecular biology
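The correlation-based construction of gene networks described in the "Gene networks" section above can be sketched in a few lines. The example below (Python, using NumPy) computes all pairwise Pearson correlations across samples and keeps gene pairs above an arbitrary threshold as edges; the expression matrix is randomly generated stand-in data, and the 0.8 cut-off is an assumption for illustration rather than a recommended value.

```python
import numpy as np

def coexpression_edges(expression, gene_names, threshold=0.8):
    """Pairs of genes whose expression profiles correlate above the threshold.

    expression: array of shape (genes, samples); threshold is an arbitrary cut-off.
    """
    corr = np.corrcoef(expression)  # all pairwise Pearson correlations between rows
    edges = []
    for i in range(len(gene_names)):
        for j in range(i + 1, len(gene_names)):
            if abs(corr[i, j]) >= threshold:
                edges.append((gene_names[i], gene_names[j], round(float(corr[i, j]), 2)))
    return edges

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=20)                  # shared signal across 20 samples
    data = np.vstack([
        base + rng.normal(scale=0.1, size=20),  # geneA: follows the shared signal
        base + rng.normal(scale=0.1, size=20),  # geneB: follows the shared signal
        rng.normal(size=20),                    # geneC: unrelated
    ])
    print(coexpression_edges(data, ["geneA", "geneB", "geneC"]))
```

With this stand-in data only the geneA–geneB pair survives the cut-off, which is the kind of edge a co-expression network would record.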
Gene expression
[ "Chemistry", "Biology" ]
8,503
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
159,271
https://en.wikipedia.org/wiki/ElcomSoft
ElcomSoft is a privately owned software company headquartered in Moscow, Russia. Since its establishment in 1990, the company has been working on computer security programs, with the main focus on password and system recovery software. DMCA case On July 16, 2001, Dmitry Sklyarov, a Russian citizen employed by ElcomSoft who was at the time visiting the United States for DEF CON, was arrested and charged for violating the United States DMCA law by writing ElcomSoft's Advanced eBook Processor software. He was later released on bail and allowed to return to Russia, and the charges against him were dropped. The charges against ElcomSoft were not, and a court case ensued, attracting much public attention and protest. On December 17, 2002, ElcomSoft was found not guilty of all four charges under the DMCA. Thunder Tables Thunder Tables is the company's own technology developed to ensure guaranteed recovery of Microsoft Word and Microsoft Excel documents protected with 40-bit encryption. The technology first appeared in 2007 and employs the time–memory tradeoff method to build pre-computed hash tables, which open the corresponding files in a matter of seconds instead of days. These tables take around four gigabytes. So far, the technology is used in two password recovery programs: Advanced Office Password Breaker and Advanced PDF Password Recovery. Cracking Wi-Fi passwords with GPUs In 2009 ElcomSoft released a tool that takes WPA/WPA2 Hash Codes and uses brute-force methods to guess the password associated with a wireless network. The advantages of using such methods over the traditional ones, such as rainbow tables, are numerous. Vulnerability in Canon authentication software On November 30, 2010, Elcomsoft announced that the encryption system used by Canon cameras to ensure that pictures and Exif metadata have not been altered was flawed and cannot be fixed. On that same day, Dmitry Sklyarov gave a presentation at the Confidence 2.0 conference in Prague demonstrating the flaws. Among others, he showed an image of an astronaut planting a flag of the Soviet Union on the moon; all the images pass Canon's authenticity verification. Nude celebrity photo leak In 2014 an attacker used the Elcomsoft Phone Password Breaker to determine celebrity Jennifer Lawrence's password and obtain nude photos. Wired said about Apple's cloud services, "...cloud services might be about as secure as leaving your front door key under the mat." References Software companies established in 1990 Computer law Cryptography law Software companies of Russia Computer security software companies Companies based in Moscow Russian companies established in 1990 Cryptographic attacks Password cracking software
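For illustration of the general time–memory tradeoff idea behind precomputed tables (and emphatically not of ElcomSoft's actual Thunder Tables format, the 40-bit document ciphers involved, or any of the company's products), the toy Python sketch below precomputes a hash-to-password dictionary for a tiny keyspace once, so that later lookups are instant instead of requiring a fresh brute-force search. The hash function, alphabet and key length are arbitrary stand-ins, and a full lookup dictionary is used rather than the chained structures real rainbow tables rely on.

```python
import hashlib
from itertools import product

ALPHABET = "abc"   # toy keyspace: 4-character strings over "abc"
KEY_LENGTH = 4

def digest(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()  # stand-in hash, not a real scheme

def build_table() -> dict[str, str]:
    """Precompute hash -> password once (the 'memory' half of the tradeoff)."""
    table = {}
    for chars in product(ALPHABET, repeat=KEY_LENGTH):
        password = "".join(chars)
        table[digest(password)] = password
    return table

if __name__ == "__main__":
    table = build_table()        # 3**4 = 81 entries for this toy keyspace
    target = digest("cabb")      # pretend this hash was recovered from a file
    print(table.get(target))     # instant lookup instead of a fresh brute force
```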
ElcomSoft
[ "Technology" ]
534
[ "Cryptographic attacks", "Computer law", "Computing and society", "Computer security exploits" ]
159,280
https://en.wikipedia.org/wiki/Change%20ringing
Change ringing is the art of ringing a set of tuned bells in a tightly controlled manner to produce precise variations in their successive striking sequences, known as "changes". This can be by method ringing in which the ringers commit to memory the rules for generating each change, or by call changes, where the ringers are instructed how to generate each change by instructions from a conductor. This creates a form of bell music which cannot be discerned as a conventional melody, but is a series of mathematical sequences. It can also be automated by machinery. Change ringing originated following the invention of English full-circle tower bell ringing in the early 17th century, when bell ringers found that swinging a bell through a much larger arc than that required for swing-chiming gave control over the time between successive strikes of the clapper. Ordinarily a bell will swing through a small arc only at a set speed governed by its size and shape in the nature of a simple pendulum, but by swinging through a larger arc approaching a full circle, control of the strike interval can be exercised by the ringer. This culminated in the technique of full circle ringing, which enabled ringers to independently change the speeds of their individual bells accurately to combine in ringing different mathematical permutations, known as "changes". Speed control of a tower bell is exerted by the ringer only when each bell is mouth upwards and moving slowly near the balance point; this constraint and the intricate rope manipulation involved normally requires that each bell have its own ringer. The considerable weights of full-circle tower bells also means they cannot be easily stopped or started and the practical change of interval between successive strikes is limited. This places limitations on the rules for generating easily-rung changes; each bell must strike once in each change, but its position of striking in successive changes can only change by one place. Change ringing is practised worldwide, but it is by far most common on church bells in English churches, where it first developed. Change ringing is also performed on handbells, where conventionally each ringer holds two bells, and chimed on carillons and chimes of bells, though these are more commonly used to play conventional melodies. Technique and physics Today, some towers have as many as sixteen bells that can be rung together, though six or eight bells are more common. The highest pitch bell is known as the treble, and the lowest is the tenor. For convenience, the bells are referred to by number, with the treble being number 1 and the other bells numbered by their pitchβ€”2, 3, 4, etc.β€”sequentially down the scale. (This system often seems counter-intuitive to musicians, who are used to a numbering that ascends with pitch.) The bells are usually tuned to a diatonic major scale, with the tenor bell being the tonic (or key) note of the scale. Some towers contain additional bells so that different subsets of the full number can be rung, still to a diatonic scale. For instance, many 12-bell towers have a flat sixth, which if rung instead of the normal number 6 bell allows 2 to 9 to be rung as light diatonic octave; other variations are also possible. The bells in a tower reside in the bell chamber or belfry usually with louvred windows to enable the sound to escape. The bells are mounted within a bellframe of steel or wood. 
Each bell is suspended from a headstock fitted on trunnions (plain or non-friction bearings) mounted to the belfry framework so that the bell assembly can rotate. When stationary in the down position, the centre of mass of the bell and clapper is appreciably below the centreline of the trunnion supports, giving a pendulous effect to the assembly, and this dynamic is controlled by the ringer's rope. The headstock is fitted with a wooden stay, which, in conjunction with a slider, limits maximum rotational movement to a little less than 370 degrees. To the headstock a large wooden wheel is fitted and to which a rope is attached. The rope wraps and unwraps on the rim of the wheel as the bell rotates backwards and forwards. This is full circle ringing and quite different from fixed or limited motion bells, which chime. Within the bell the clapper is constrained to swing in the direction that the bell swings. The clapper is a rigid steel or wrought iron bar with a large ball to strike the bell. The thickest part of the mouth of bell is called the soundbow and it is against this that the ball strikes. Beyond the ball is a flight, which controls the speed of the clapper. In very small bells this can be nearly as long as the rest of the clapper. Below the bell chamber there may be one or more sound chambers, (one of which is likely to house the clock mechanism if the church has one) and through which the rope passes before it drops into the ringing chamber or room. Typically, the rope's length is such that it falls close to or on to the floor of the ringing chamber. About from the floor, the rope has a woollen grip called the sally (usually around long) while the lower end of the rope is doubled over to form an easily held tail-end. Unattended bells are normally left hanging in the normal ("down") position, but prior to being rung, the bells are rung up. In the down position, the bells are safe if a person touches them or pulls a rope. A bell that is up is dangerous to be near, and only expert ringers should ever contemplate entering a bell chamber or touching a rope when the bells are up. To raise a bell, the ringer pulls on the rope and starts the bell swinging. Each time the bell swings the ringer adds a little more energy to the system, similar to pushing a child's swing. Eventually there is enough energy for the bell to swing right up and be left over-centre just beyond the balance point with the stay resting against the slider to hold the bell in position, ready to be rung. Bellringers typically stand in a circle around the ringing chamber, each managing one rope. Bells and their attendant ropes are so mounted that the ropes are pulled in a circular sequence, usually clockwise, starting with the lightest (treble) bell and descending to the heaviest (tenor). To ring the bell, the ringer first pulls the sally towards the floor, upsetting the bell's balance and swinging it on its bearings. As the bell swings downwards the rope unwinds from the wheel and the ringer adds enough pull to counteract friction and air resistance. The bell winds the rope back onto the other side of the wheel as it rises and the ringer can slow (or check) the rise of the bell if required. The rope is attached to one side of the wheel so that a different amount of rope is wound on and off as it swings to and fro. The first stroke is the handstroke with a small amount of rope on the wheel. 
The ringer pulls on the sally and when the bell swings up it draws up more rope onto the wheel and the sally rises to, or beyond, the ceiling. The ringer keeps hold of the tail-end of the rope to control the bell. After a controlled pause with the bell, on or close to its balancing point, the ringer rings the backstroke by pulling the tail-end, causing the bell to swing back towards its starting position. As the sally rises, the ringer catches it to pause the bell at its balance position. In English-style ringing the bell is rung up such that the clapper is resting on the lower edge of the bell when the bell is on the stay. During each swing, the clapper travels faster than the bell, eventually striking the soundbow and making the bell sound. The bell speaks roughly when horizontal as it rises, thus projecting the sound outwards. The clapper rebounds very slightly, allowing the bell to ring. At the balance point, the clapper passes over the top and rests against the soundbow. In change ringing where the order the bells are struck in is constantly altered, it is necessary to time the swing so that this strike occurs with precise positioning within the overall pattern. Precision of striking is important at all times. To ring quickly, the bell must not complete the full 360 degrees before swinging back in the opposite direction; while ringing slowly, the ringer waits with the bell held at the balance, before allowing it to swing back. To achieve this, the ringer must work with the bell's momentum, applying just the right amount of effort during the pull that the bell swings as far as required and no further. This allows two adjacent bells to reverse positions, the quicker bell passing the slower bell to establish a new pattern. Although ringing up certainly involves some physical exertion, actual ringing should rely more on practised skill than mere brute force. Even the smallest bell in a tower is much heavier than the person ringing it. The heaviest bell hung for full-circle ringing is in Liverpool Cathedral and weighs . Despite this colossal weight, it can be safely rung by one (experienced) ringer. (Whilst heavier bells exist – for example Big Ben – they are generally only chimed, either by swinging the bell slightly or having the bell hung dead and using a mechanical hammer.) Changes The simplest way to sound a ring of bells is by ringing rounds. This is a repeated sequence of bells descending from the highest to lowest note, which is from the lightest to the heaviest bell. This was the original sequence used before change ringing was developed, and change ringing always starts and ends with this sequence. Two forms of ringing changes have developed; Call changes: where the conductor of the ringing commands each change. Method ringing: where after a word of command to start, the changes are rung from memory by the ringers. Call change ringing Most ringers begin their ringing career with call change ringing; they can thus concentrate on learning the physical skills needed to handle their bells without needing to worry about "methods". There are also many towers where experienced ringers practise call change ringing as an art in its own right (and even exclusively), particularly in the English county of Devon. The technique was probably developed in the early 17th century in the early days of change ringing. 
Call change ringing requires one ringer to give commands to change the order of the bells, as distinct from method ringing, where the ringers memorise the course of their bells as part of a continuous pattern. Call change instructions In call change ringing each different sequence of the bells, known as a "row", is specifically called out by one ringer, the "conductor", who instructs the other ringers how to change their bells' places from row to row. This command is known as a "call". The change is made at the next "handstroke" (when the sally on the bell rope is pulled) after the call. In calling, the conductor usually has a strategy or plan to achieve the desired progression of rows, rather than remembering each call, and an example of such a plan is the example on eight bells. Conductors can space out the calls at will, but each row is normally struck at least twice because of the difficulty of calling continuous changes. Calls are usually of the form "X to (or after) Y" or "X and Y", in which X and Y refer to two of the bells by their physical numbers in the tower (not by their positions in the row). All calls cause two adjacent bells to swap. The first form is used for calling up and calling down, and the second form simply swaps the two bells mentioned. As an example of calling up and down, consider a sequence of rows and the calls a conductor would use to call them. The ways of calling differ as follows: in calling up, the first-called bell moves up one place to strike after the second-called bell; in calling down, the first-called bell moves down one place to strike after the second-called bell; in swapping, the two named bells simply exchange positions. In all cases, the ringer of the bell immediately above (behind) the swapping pair must also be alert, as that bell follows a new bell after the swap. Rarer forms of change calling may name just one of the moving bells, call the moving bell by position rather than number, or call out the full change. The example on eight bells shows call changes being called using the "down" system. The sequence of calls gives three well-known musical rows: Whittingtons, Queens, and Tittums. Whittingtons - bells 1 and 2 stay in place, while the other bells ascend the odds and descend the evens. Queens - descending odd bells then descending evens. Tittums - interspersed light and heavy bells, giving a "tee-tum, tee-tum...." effect. Method ringing Method ringing is the continuously changing form of change ringing, and gets its name from the use of a particular method to generate the changes. After starting in repetitive rounds, at a given command, the ringers vary the bells' order to produce a series of distinct sequences known as rows or changes. In this way permutation of the bells' striking order proceeds. For example, 123456 can become 214365 in the next sequence. The method is committed to memory by each ringer, so that only a few commands are given by the ringer in charge (the conductor). Learning the method does not consist of memorising the individual sequences, but of using a variety of techniques such as: memorising the path of the bell, not the numbers of the bells it strikes after (this can be by visualising a tracking line in a method diagram or by breaking the line into small "work" units which are joined together), and looking for visual signposts, such as when the ringer's bell crosses with another particular bell. There are thousands of different methods, of which two methods on six bells are explained in detail below. 
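The call mechanics can be made concrete with a small sketch. The following Python fragment is illustrative only and is not drawn from this article's sources: the function names are hypothetical, and it assumes the "calling up" and "X and Y" swap conventions as described above, ignoring calls to or from the lead.

```python
# Illustrative sketch only: applying call changes to a row.
# Bells are identified by their tower numbers; a row is a list ordered by striking position.

def call_up(row, x, y):
    """'X to Y' when calling up: bell x moves one place later, to strike after bell y.
    Assumed valid only when bell y currently strikes immediately after bell x."""
    i = row.index(x)
    if i + 1 >= len(row) or row[i + 1] != y:
        raise ValueError(f"bell {y} does not strike immediately after bell {x}")
    row = row[:]                              # work on a copy
    row[i], row[i + 1] = row[i + 1], row[i]   # the two bells change places
    return row

def swap(row, x, y):
    """'X and Y': the two named (adjacent) bells simply exchange places."""
    i, j = row.index(x), row.index(y)
    row = row[:]
    row[i], row[j] = row[j], row[i]
    return row

rounds = [1, 2, 3, 4, 5, 6]
print(call_up(rounds, 2, 3))   # [1, 3, 2, 4, 5, 6]
print(swap(rounds, 5, 6))      # [1, 2, 3, 4, 6, 5]
```

In both cases only one adjacent pair exchanges places, which matches the one-place-per-change constraint noted earlier.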
Plain hunt In method ringing, plain hunt is the simplest form of generating changing permutations in a continuous fashion, and is a fundamental building-block of many change ringing methods. The accompanying diagram shows plain hunt on six bells. The courses of only two bells are shown for clarity. Each row in the diagram shows the order of striking after each change. Plain hunt consists of a plain undeviating course of a bell between the first and last places in the striking order, moving one place in the sequence at each change, but with two strikes in the first and last position to enable a turn-around as the internal bells change over. Thus each bell moves one position at each succeeding change, unless it reaches the first or last position, where it remains for two changes and then proceeds to the other end of the sequence. All of the bells are doing this at every change, without any words of command. This simple rule can be extended to any number of bells; however, the sequence repeats after a number of changes equal to twice the number of bells hunting. Plain Bob To enable a greater number of changes to be rung without repetition, more advanced methods were developed, many based upon the plain hunt. "Plain Bob" is one of the oldest and simplest of these, and is shown as an example above. A "plain course" of plain bob minor is shown in diagrammatic form, which has the following characteristics: all the bells plain hunt until the treble bell is first; then, depending on where they are in the pattern, the other bells perform "dodges" in the 3-4 position, perform dodges in the 5-6 position, or, if they are just above the treble, sit for two blows and then lead again. The red bell track shows the order of "works", which are deviations from the plain hunt: 3/4 down dodge, 5/6 down dodge, 5/6 up dodge, 3/4 up dodge, make 2nds place, and then it repeats. Each bell starts at a different place in this cyclical order. A dodge means just that: two bells dodge round each other, thus changing their relationship to the treble, and giving rise to different changes. The plain bob pattern can be extended beyond the constraints of the plain course of 60 changes, to the full 720 unique changes possible (this is 6 factorial on 6 bells, which is 1 × 2 × 3 × 4 × 5 × 6 = 720 changes). To do this, at set points in the sequences one of the ringers, called the "conductor", calls out commands such as "bob" or "single", which introduce further variations. The conductor follows a "composition", which they have to commit to memory. This enables the ringers to produce large numbers of unique changes without memorising huge quantities of data and without any written prompts. Ringers can also ring different methods, with different "works" on different numbers of bells – so there is a huge variety of ways of ringing changes in method ringing. Peals and quarter peals For some people, the ultimate goal of this system is to ring all the permutations, to ring a tower's bells in every possible order without repeating – what is called an extent (or sometimes, formerly, a full peal). The feasibility of this depends on how many bells are involved: if a tower has n bells, they have n! (read "n factorial") possible permutations, a number that becomes quite large as n grows. For example, while six bells have 720 permutations, eight bells have 40,320; furthermore, 10! = 3,628,800, and 12! = 479,001,600. 
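As a minimal, non-authoritative illustration of the plain-hunt rule described above (bells at the ends stay put for one extra blow while every other bell moves one place), the rows can be generated by alternately swapping the pairs of positions (1,2)(3,4)... and (2,3)(4,5)...; the function name below is hypothetical.

```python
# Illustrative sketch only: generating plain hunt on n bells, starting from rounds.

def plain_hunt(n):
    """Yield successive rows of plain hunt on n bells; the sequence returns
    to rounds after 2n changes, as noted above."""
    row = list(range(1, n + 1))              # rounds, e.g. [1, 2, 3, 4, 5, 6]
    for change in range(2 * n):
        yield row[:]
        start = 0 if change % 2 == 0 else 1  # alternate which pairs swap
        for i in range(start, n - 1, 2):
            row[i], row[i + 1] = row[i + 1], row[i]
    yield row[:]                             # back to rounds

for r in plain_hunt(6):
    print("".join(map(str, r)))              # 123456, 214365, 241635, ...
```

Run on six bells this produces the familiar 12-change cycle ending back in rounds, with each bell tracing the same zig-zag path offset in time.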
Estimating two seconds for each change (a reasonable pace), one finds that while an extent on six bells can be accomplished in half an hour, an extent on eight bells should take nearly twenty-two and a half hours. (When in 1963 ringers in Loughborough became the only band in history to achieve this feat on tower bells, it took them just under 18 hours.) An extent on 12 bells would take over thirty years. Since extents are obviously not always practicable, ringers more often undertake shorter performances. Such ringing starts and ends with rounds, having meanwhile visited only a subset of the available permutations; but truth is still considered essential – no row can ever be repeated; to do so would make the ringing false. A peal is an extended performance; it must comprise at least 5000 changes (but 5040 on 7 bells). A performance of 1250 changes likewise makes a quarter peal (quarter for short); a peal or a quarter tends to last about three hours or 45 minutes, respectively. Changes on handbells Change ringing can also be performed on handbells, and is quite popular in its own right. Many record-length peals, including the longest peal ever rung, are by handbell ringers. Normally each ringer has a bell in each hand, and the ringers sit or stand in a circle (like tower ringers). The tower bell terms of handstroke and backstroke are retained, referring to an upwards and downwards ring of the bell respectively; and as in towers, the ringing proceeds in alternate rows of handstroke and backstroke. Occasionally, a technique called lapping, or "cross and stretch", is used. Ringers stand or sit in a straight line at a single convenient table on which the bells are placed. They pick up a bell each time they ring it, and then put it down. As the bell sequence changes, however, the ringers physically swap the bells accordingly, so the bells move up and down the table and each row is rung in strict sequence from right to left. Ringers in cross and stretch thus do not have responsibility for their own personal bell, but handle each as it comes. Some handbell change ringers practice a hybrid of these two methods, known as body ringing: ringers standing in a line each hold one bell, exchanging places in the line so that the changes sound correctly when the bells are rung in sequence from right to left. History and modern culture Change ringing as we know it today emerged in England in the 17th century. To that era we can trace the origins of the earliest ringing societies, such as the Lincoln Cathedral Guild, which claims to date to 1612, or the Antient Society of Ringers of St Stephen in Bristol, which was founded in 1620 and lasted as a ringing society until the late 19th century. The recreation began to flourish in earnest in the Restoration era; an important milestone in the development of method ringing as a careful science was the 1668 publication by Richard Duckworth and Fabian Stedman of their book Tintinnalogia, which promised in its subtitle to lay down "plain and easie Rules for Ringing all sorts of Plain Changes". Stedman followed this in 1677 with another famous early guide, Campanalogia. Throughout the years since, the group theoretical underpinnings of change ringing have been pursued by mathematicians. "Changes" can be viewed as permutations; sets of permutations constitute mathematical groups, which can be depicted via so-called Cayley graphs and in turn mapped onto polyhedra. 
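Since rows are permutations, both the "truth" requirement and the extent arithmetic above can be checked mechanically. The following minimal Python sketch is illustrative only; it assumes the two-seconds-per-change pace quoted above, and the function name is hypothetical.

```python
# Illustrative sketch only: truth checking and extent-length arithmetic.
import math

def is_true(rows):
    """A touch is 'true' if no row (permutation) is ever repeated."""
    seen = set()
    for row in rows:
        key = tuple(row)
        if key in seen:
            return False
        seen.add(key)
    return True

print(is_true([(2, 1, 4, 3), (2, 4, 1, 3), (2, 1, 4, 3)]))  # False: a row is repeated

for bells in (6, 8, 12):
    changes = math.factorial(bells)          # an extent rings every permutation once
    seconds = 2 * changes                    # ~2 s per change
    print(f"{bells} bells: {changes:,} changes, about {seconds / 3600:,.1f} hours "
          f"({seconds / 31_557_600:.1f} years)")
```

The printed figures match the estimates in the text: roughly half an hour on six bells, about 22.4 hours on eight, and around thirty years on twelve.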
Bells have been installed in towers around the world and many rings in the British Isles have been augmented to ten, twelve, fourteen, or even sixteen bells. Today change ringing is, particularly in England, a popular and commonplace sound, often issuing from a church tower before or after a service or wedding. While on these everyday occasions the ringers must usually content themselves with shorter "touches", each lasting a few minutes, for special occasions they often attempt a quarter-peal or peal, lasting approximately 45 minutes or three hours respectively. If a peal attempt succeeds, towers sometimes mark the occasion with a peal board mounted on the wall of the ringing chamber; at St Peter Mancroft in Norwich there is one documenting what is generally considered to have been the first true peal: 5040 changes of Plain Bob Triples (a method still popular today), rung 2 May 1715. There is some evidence there may have been an earlier peal (also Plain Bob Triples), rung 7 January 1690 at St Sepulchre-without-Newgate in the City of London by the Ancient Society of College Youths. Today over 4000 peals are rung each year. Organisation and extent The Central Council of Church Bell Ringers, founded in 1891, is dedicated to representing change ringers around the world. Most regional and local ringing guilds are affiliated with the council. Its journal, The Ringing World, has been published weekly since 1911; in addition to news and features relating to bellringing and the bellringing community, it publishes records of achievements such as peals and quarter-peals. Ringers generally adhere to the Council's rules and definitions governing change ringing. The Central Council, by means of its peal records, also keeps track of record-length peals, both on tower bells and handbells. (The record for tower bells remains the 1963 Loughborough extent of Plain Bob Major [40,320 changes]; for handbells it was set in 2007 in Willingham, Cambridgeshire, with 72,000 changes of 100 different Treble Dodging Minor methods, taking just over 24 hours to ring.) More importantly, perhaps, along with keeping track of the first peal ever rung in a method, the Central Council controls the naming of new methods: it generally allows the first band to ring a method to name it. Much ringing is carried out by bands of ringers meeting at their local tower to ring its bells. For the sake of variety, though, many ringers like to take occasional trips to make a "tower grab", ringing the bells of a less familiar tower. The setting, the church architecture, the chance to ring more bells than usual, the bells' unique tone, their ease or difficulty of ringing, and sometimes even the unusual means of accessing the ringing chamber can all be part of the attraction. The traditional means of finding bell towers, and still the most popular way today, is the book (and now internet database) Dove's Guide for Church Bell Ringers. There are 7,141 English-style rings in ringable condition. The Netherlands, Belgium, Pakistan, India, and Spain have one each. The Windward Isles and the Isle of Man have 2 each. Canada and New Zealand have 8 each. The Channel Isles have 11. Africa as a continent has 13. Scotland has 23, Ireland 38, the USA 48, Australia 61 and Wales 227. The remaining 6,695 (94%) are in England (including three mobile rings). World-wide there are 985 unringable rings, 930 in England, 55 in Wales and 12 elsewhere. Number of bells Methods of change ringing are named for the number of working bells, or the bells that switch order within the change. 
It takes a pair to switch, and commonly the largest bell (the tenor) does not change place. For example, there may be six bells, only five of which work, allowing for only two pairs. A method of ringing for these bells would be called doubles. Doubles is the most common group of methods rung in the United Kingdom, since the majority of parish churches with bell towers in the UK are fitted with only six bells. "Plain Bob Doubles" is a method rung on five bells, whereas "Plain Bob Triples" is the same method rung on seven working bells. There are two separate ways to refer to the number of bells. One way is used for even numbers, the other for an odd number. The name for 9 bells, "Caters", is pronounced "kate-ers" and comes from the French "quatres". The name for 11 bells, "Cinques", also comes from the French and is pronounced "sinks", cf. the Cinque Ports. The names refer to the number of bells which change places in each row. With three bells only one pair can change, and so it is singles. With seven bells there are clearly three pairs, with the one bell left over not moving in that row. Named changes Mathematical abstraction though each row may be, some rows do have a musical or melodic meaning to the listener. Over the years, a number of these have acquired names – they are named changes. Both the conductors directing call-change ringing and the composers coming up with plans for a bout of method ringing sometimes like to work their favourite named changes in. The table below lists some popular named changes on eight bells; many of these names are also applicable by extension on more or fewer bells.
{| class="wikitable"
! style="text-align:center;"| Change
! style="text-align:center;"| Name
|-
| 12345678
| Rounds
|-
| 87654321
| Back rounds or Reverse Rounds
|-
| 13572468
| Queens (an apocryphal story says it appealed to Elizabeth I)
|-
| 15263748
| Tittums (so named because of the ti-tum ti-tum sound it makes)
|}
Such names are often humorous; for example, the sequence 14235 on five bells is called weasels because it is the tune of the refrain to the children's song Pop Goes the Weasel. This is particularly effective at the end of ringing down. The bells are in order, and any bell that is not chimed leaves a pause, so the sequence becomes 1..4..23.5, where a dot indicates a pause. Called changes are listed in the MAW Call Change Collection. Striking Although neither call change nor method ringing produces conventional tunes, it is still the aim of the ringers to produce a pleasant sound. One of the most important aspects of this is good striking: not only should the bells never clash by sounding at the same moment, but they should also sound to a perfect rhythm, tapping out a steady beat. It is the custom to leave a pause of one beat after every alternate row, i.e., after the ringing of each "backstroke" row. This is called 'open handstroke' ringing (or open handstroke leading). In Devon, Cornwall and parts of Yorkshire, this custom is not followed when call-change ringing; instead the bells strike steadily without the pause. This latter custom is known as the closed-hand or cartwheel arrangement. However, for method ringing the universal practice is to ring with open handstrokes, even in the South West of England. Striking competitions are held where various bands of ringers attempt to ring with their best striking. They are judged on their number of faults (striking errors); the band with the fewest faults wins. 
These competitions are organized on regional and national levels, being particularly popular among the call-change ringers of Devon, where it is customary to include the quality of the rise and lower of the bells as part of the judging criteria. Competitions for method ringers usually start "off the stay" – i.e., the bells are rung up before the competition begins. At the annual National 12 Bell Striking Contest the bands ring methods, producing a different change approximately every 2.5 seconds, with a gap of about 0.21 seconds between bells. To an expert ringer's ear, at this level of competition a variation of a tenth of this would be discernible as a striking fault. Sport In 2016 readers of The Ringing World magazine wrote to insist that bell ringing was "an art and a sport", as demonstrated by regular "striking competitions". It was suggested that classification of change ringing as a sport by Sport England could save it from becoming obsolete. But the Central Council of Church Bell Ringers opposed the move, suggesting that it would jeopardise its relationship with church bodies, since bell ringing should be seen as part of Christian worship, not exercise. The council's president, Chris Mew, said: "Where is the glamour of the sports field and where are the David Beckhams of the belfry?" Virtual The COVID-19 pandemic made it impossible for bell ringers to assemble in belfries. Searching for alternative methods, in March 2020 two ringers from the USA developed software called Ringing Room that mimics the operation of ropes and bells, and permits people to ring together online, in a type of networked music performance. Various other online platforms for virtual change ringing have also been created, but Ringing Room is the most popular, with over 10,000 people joining in the first year. In one Shropshire church, bells can be tied up, with their sounds simulated by sensors, so ringers can practise in silence using Bluetooth headsets. In literature and television The mystery novel The Nine Tailors by Dorothy L. Sayers (1934) contains a great deal of information on change ringing. Her fictional detective, Lord Peter Wimsey, demonstrates his skill at ringing, and the solution to the central puzzle of the book rests in part upon his knowledge of the patterns of change ringing. Connie Willis, who frequently references Sayers in To Say Nothing of the Dog (1997), features bell ringers in her earlier novel Doomsday Book (1992); a group of American women led by a Mrs. Taylor frequently appears practising for or ringing both handbells and changes. The British television series Midsomer Murders aired an episode in the fifth season on a series of murders within a bell-ringing team, in "Ring Out Your Dead". In the science-fiction novel Anathem by Neal Stephenson (2008), changes are rung in a cloistered monastery for mathematicians to signal different ceremonies. English bell-ringing terms Back – at or near the last place in a change. Back bells – the heavier bells (which therefore tend to limit the speed). Backstroke (or Backstroke home) – the part of a bell's cycle started by pulling on the tail end (rope end) in the tower, or with the bells raised in hand; also: the position at which the back bells come into rounds order at backstroke. Baldrick – the leather-lined metal strap from which the clappers used to be hung. 
Band – a group of ringers for a given set of bells (or for a special purpose, e.g., a "peal band"). Bearings – the load-bearing assembly on which the headstock (and so the whole bell) turns about its gudgeon pins. Modern bells are hung on ball bearings, but traditionally plain bearings were used. Bob – the commonest type of call in most methods, or a class of plain method (in which either dodging takes place or some bells are not just hunting or place making); it can also mean (usually called the "Bob place") the appropriate point in the method (e.g. a lead end) to modify the sequence of changes. Bob caller – someone who calls a touch, but does not check the ringing as a conductor would. Bristol start – starting to raise in peal by adding an extra bell each time. Bump the stay – allow the bell to swing over the balance, out of control, so the stay pushes the slider to its limit, stopping the bell. Cambridge – a right-place surprise method, one of the standard eight, that is often the first to be learned. Canons – loops cast onto older bells' crowns. Cinques – (pronounced "sinks") methods for working eleven bells (possibly with a twelfth covering), the name deriving from the practice of swapping five pairs of bells. Clapper – the metal (usually cast iron) rod/hammer hung from a pivot below the crown of the bell, that strikes the soundbow of the bell when the bell stops moving. Clocking – causing a bell to sound while down by pulling a hammer against it (as a clock would) or by pulling the clapper against the side of the bell. Closed leads (also called cartwheeling) – handstroke changes follow backstroke changes with no handstroke gap (unlike open leads). Come round – return to rounds to end a touch (e.g. "come round at handstroke"), or produce rounds prematurely. Cover – a bell (e.g. tenor) ringing at the end of every row, while the other bells ring a method. Delight – a treble bob method in which an internal place is made at some, but not all, of the points where the treble passes from one dodge to another ("cross sections"). Dodge – changing direction for one stroke in bell ringing (although strictly a dodge is taking a retrograde step in the middle of a portion of hunting). Dodging practice is an exercise where two bells exchange places on every stroke, sometimes taught to help learners move from call changes to plain hunt. Double method – a method where the structure is the same if reversed. Doubles – a method with five working bells, possibly with a sixth covering. Down – EITHER: when the bells are hanging with the mouth in the lowermost position, OR: moving towards the front (as in "hunting down"). Extent – a touch where all possible changes are rung exactly once each; the number of such different rows is N factorial, where N is the number of bells. Firing – from rounds, all the bells are rung at once for a few strokes before returning to rounds; done on special occasions such as weddings or New Year. Fire out – to ring haphazardly, either because ringers accidentally try to ring at once, or deliberately for wedding ringing. Front – at or near the start of a row. Front bells – the smaller bells which are rung first in rounds. Garter hole – the hole in the wheel through which the rope passes. Handstroke – the stroke when the sally is gripped. Hunt – move one place at a time up or down (see plain hunt, treble bob hunt, etc.). Lead end – the change on which the treble is leading (ringing first) at its backstroke. 
Little Bob – a method in which the treble plain hunts between lead and a place short of the last place. Line – the sequence of places a bell rings in a method, or the diagram describing the method (the convention being that the treble line is shown in red while the others are blue). Method – an agreed/named sequence of changes that forms a round block; see plain course. Muffling – for commemorative services such as funerals, memorial services and Remembrance Sunday, the bells are rung half-muffled, with a leather pad on one side of the clapper; very rarely they are rung fully muffled, with pads on both sides. Sally – the woollen bulge woven into the rope. It is both an indicator and a help with gripping. Slider – a device which allows the bell to go over the balance at each end of its swing, but not to over-rotate. Stay – a device that is attached to the headstock and works in conjunction with the slider. Tenor – the lowest-pitched bell in the tower. Treble – the highest-pitched bell in the tower. Up – EITHER: when the bells are raised to the mouth-uppermost position, OR: moving towards the back (as in "hunting up"). See also The Australian and New Zealand Association of Bellringers Braid theory Change ringing software Grandsire John Taylor & Co Margery Sampson Steinhaus–Johnson–Trotter algorithm Veronese bellringing art Whitechapel Bell Foundry References External links The Craft of Bellringing, a DVD documentary about change ringing Discover Bell Ringing – introduction for non-ringers Call changes explained ringing.info, a wide-ranging and well-organized compendium of ringing links The Central Council of Church Bell Ringers Framework for Method Ringing published by the Council in 2019, superseding its former decisions Ringing World, the Council's weekly journal Dove's Guide, a directory of towers worldwide with bells hung for change ringing Some recordings of change ringing Change Ringing Wiki - Info for ringers - Video of plain hunt ringing Animation of English Full-circle ringing Change-ringing resources, an online compendium of almost everything you need to know Campanology Culture of England English music Permutations
Change ringing
[ "Mathematics" ]
7,871
[ "Functions and mappings", "Permutations", "Mathematical objects", "Combinatorics", "Mathematical relations" ]
159,284
https://en.wikipedia.org/wiki/Novartis
Novartis AG is a Swiss multinational pharmaceutical corporation based in Basel, Switzerland. Consistently ranked in the global top five, Novartis is one of the largest pharmaceutical companies in the world and was the fourth largest by revenue in 2022. Novartis manufactures the drugs clozapine (Clozaril), diclofenac (Voltaren; sold to GlaxoSmithKline in a 2015 deal), carbamazepine (Tegretol), valsartan (Diovan), imatinib mesylate (Gleevec/Glivec), cyclosporine (Neoral/Sandimmune), letrozole (Femara), methylphenidate (Ritalin; production ceased 2020), terbinafine (Lamisil), deferasirox (Exjade), and others. Novartis was formed in 1996 by the merger of Ciba-Geigy and Sandoz. At the time, it was considered the largest corporate merger in history. The pharmaceutical and agrochemical divisions of both companies formed Novartis as an independent entity. The name Novartis was based on the Latin phrase "novae artes" (new skills). After the merger, other Ciba-Geigy and Sandoz businesses were sold, or, like Ciba Specialty Chemicals, spun off as independent companies. The Sandoz brand disappeared for three years, but was revived in 2003 when Novartis consolidated its generic drugs businesses into a single subsidiary and named it Sandoz. Novartis divested its agrochemical and genetically modified crops business in 2000 with the spinout of Syngenta in partnership with AstraZeneca, which also divested its agrochemical business. The new company also made a series of acquisitions in order to strengthen its core businesses. Novartis is a full member of the European Federation of Pharmaceutical Industries and Associations (EFPIA), the Biotechnology Innovation Organization (BIO), the International Federation of Pharmaceutical Manufacturers and Associations (IFPMA), and the Pharmaceutical Research and Manufacturers of America (PhRMA). Novartis is the third most valuable pharmaceutical company in Europe, after Novo Nordisk and Roche. History Novartis was created in March 1996 from the merger of Ciba-Geigy and Sandoz Laboratories, both Swiss companies, and began operations on 20 December of that year. Ciba-Geigy Ciba-Geigy was formed in 1970 by the merger of J. R. Geigy Ltd (founded in Basel in 1857) and CIBA (founded in Basel in 1859). Ciba began in 1859, when Alexander Clavel (1805–1873) took up the production of fuchsine in his factory for silk-dyeing works in Basel. By 1873, he sold his dye factory to the company Bindschedler and Busch. In 1884, Bindschedler and Busch was transformed into a joint-stock company known in English as the "Company for Chemical Industry Basel". The acronym CIBA was adopted as the company's name in 1945. The foundation for Geigy was established in 1857, when Johann Rudolf Geigy-Merian (1830–1917) and Johann Muller-Pack acquired a site in Basel, where they built a dyewood mill and a dye extraction plant. Two years later, they began the production of synthetic fuchsine. In 1901, they formed the public limited company Geigy, and the name of the company was changed to J. R. Geigy Ltd in 1914. CIBA and Geigy merged in 1970 to form Ciba-Geigy Ltd. Mid-1990s controversy In the mid-1990s, state and federal health and environmental agencies identified an increased incidence of childhood cancers in Toms River, New Jersey, over the 1970–1995 period. 
Multiple investigations by state and federal environmental and health agencies indicated that the likely source of the increased cancer risk was contamination from the Toms River Chemical Plant (then operated by Ciba-Geigy), which had been in operation since 1952, and from the Reich Farm/Union Carbide site. The area was designated a United States Environmental Protection Agency Superfund site in 1983 after an underground plume of toxic chemicals was identified. The following year, a discharge pipe was shut down after a sinkhole at the corner of Bay Avenue and Vaughn Avenue revealed that it had been leaking. The plant ceased operation in 1996. A follow-up study covering the 1996–2000 period indicated that while there were more cancer cases than expected, rates had fallen significantly and the difference was not statistically significant compared to normal statewide cancer rates. Since 1996, the Toms River water system has been subject to the most stringent water testing in New Jersey and is considered safe for consumption. Dan Fagin's Toms River: A Story of Science and Salvation, the 2014 Pulitzer Prize-winning book, examined the issue of industrial pollution at the site in detail. Sandoz Sandoz is the generic drugs division of Novartis. Before the 1996 merger with Ciba-Geigy to form Novartis, Sandoz Pharmaceuticals (Sandoz AG) was a pharmaceutical company headquartered in Basel, Switzerland (as was Ciba-Geigy), and was best known for developing drugs such as Sandimmune for organ transplantation, the antipsychotic Clozaril, Mellaril Tablets and Serentil Tablets for treating psychiatric disorders, and Cafergot Tablets and Torecan Suppositories for treating migraine headaches. The Chemiefirma Kern und Sandoz ("Kern and Sandoz Chemistry Firm") was founded in 1886 by Alfred Kern (1850–1893) and Edouard Sandoz (1853–1928). The first dyes manufactured by them were alizarin blue and auramine. After Kern's death, the partnership became the corporation Chemische Fabrik vormals Sandoz in 1895. The company began producing the fever-reducing drug antipyrin in the same year. In 1899, the company began producing the sugar substitute saccharin. Further pharmaceutical research began in 1917 under Arthur Stoll (1887–1971), who founded Sandoz's pharmaceutical department. In 1918, Arthur Stoll isolated ergotamine from ergot; the substance was eventually used to treat migraine and headaches and was introduced under the trade name Gynergen in 1921. Between the World Wars, Gynergen (1921) and Calcium-Sandoz (1929) were brought to market. Sandoz also produced chemicals for textiles, paper, and leather, beginning in 1929. In 1939, the company began producing agricultural chemicals. The psychedelic effects of lysergic acid diethylamide (LSD) were discovered at the Sandoz laboratories in 1943 by Arthur Stoll and Albert Hofmann. Sandoz began clinical trials and marketed the substance, from 1947 through the mid-1960s, under the name Delysid as a psychiatric drug, thought useful for treating a wide variety of mental ailments, ranging from alcoholism to sexual deviancy. Sandoz suggested in its marketing literature that psychiatrists take LSD themselves, to gain a better subjective understanding of the schizophrenic experience, and many psychiatrists, as well as other scientific researchers, did exactly that. The Sandoz product received mass publicity as early as 1954, in a Time magazine feature. Research on LSD peaked in the 1950s and early 1960s. 
The CIA purchased quantities of LSD from Sandoz for use in its illegal human experimentation program known as MKUltra. Sandoz withdrew the drug from the market in 1965. The drug became a cultural novelty of the 1960s after psychologist Timothy Leary at Harvard University began to promote its use for recreational and spiritual experiences among the general public. Sandoz opened its first foreign offices in 1964. In 1967, Sandoz merged with Wander AG (known for Ovomaltine and Isostar). Sandoz acquired the companies Delmark, Wasabröd (a Swedish manufacturer of crisp bread), and Gerber Products Company (a baby food company). On 1 November 1986, a fire broke out in a production plant storage room, which led to the Sandoz chemical spill and a large amount of pesticide being released into the upper Rhine river. The pollution killed many fish and other aquatic life. In 1995, Sandoz spun off its specialty chemicals business to form Clariant. In 1997, Clariant merged with the specialty chemicals business that was spun off from Hoechst AG in Germany. Merger In 1996, Ciba-Geigy merged with Sandoz, with the pharmaceutical and agrochemical divisions of both staying together to form Novartis. Other Ciba-Geigy and Sandoz businesses were spun off as independent companies, notably Ciba Specialty Chemicals. Sandoz's Master Builders Technologies, a producer of chemicals for the construction industry, was sold off to SKW Trostberg A.G., a subsidiary of the German energy company VIAG, while its North American corn herbicide business became part of the German chemical maker BASF. Post-merger In 1998, the company entered into a biotechnology licensing agreement with the University of California at Berkeley Department of Plant and Microbial Biology. Critics of the agreement expressed concern over prospects that the agreement would diminish academic objectivity, or lead to the commercialization of genetically modified plants. The agreement expired in 2003. 2000–2010 In 2000, Novartis and AstraZeneca combined their agrobusiness divisions to create a new company, Syngenta. In 2003, Novartis organized all its generics businesses into one division, and merged some of its subsidiaries into one company, reusing the predecessor brand name of Sandoz. In 2005, Novartis expanded its subsidiary Sandoz significantly through the US$8.29 billion acquisition of Hexal, one of Germany's leading generic drug companies, and Eon Labs, a fast-growing United States generic pharmaceutical company. In 2006, Novartis acquired the California-based Chiron Corporation. Chiron had been divided into three units: Chiron Vaccines, Chiron Blood Testing, and Chiron BioPharmaceuticals. The biopharmaceutical unit was integrated into Novartis Pharmaceuticals, while the vaccines and blood testing units were made into a new Novartis Vaccines and Diagnostics division. Also in 2006, Sandoz became the first company to have a biosimilar drug approved in Europe with its recombinant human growth hormone drug. In 2007, Novartis sold the Gerber Products Company to Nestlé as part of its continuing effort to shed old Sandoz and Ciba-Geigy businesses and focus on healthcare. In 2009, Novartis reached an agreement to acquire an 85 percent stake in the Chinese vaccines company Zhejiang Tianyuan Bio-Pharmaceutical Co., Ltd. as part of a strategic initiative to build a vaccines industry leader in that country and expand the group's limited presence in this fast-growing market segment. 
The proposed acquisition required government and regulatory approvals in China. In 2010, Novartis offered to pay US$39.3 billion to fully acquire Alcon, the world's largest eye-care company, including a majority stake held by Nestlé. Novartis had bought 25 percent of Alcon in 2008. Novartis created a new division and called it Alcon, under which it placed its CIBA VISION subsidiary and Novartis Ophthalmics; the new division became the second-largest division of Novartis. The total cost for Alcon amounted to $60 billion. 2011–present In 2011, Novartis acquired the medical laboratory diagnostics company Genoptix to "serve as a strong foundation for our (Novartis') individualized treatment programs". In 2012, the company cut approximately 2,000 positions in the United States, primarily in sales, in response to anticipated revenue downturns from the hypertension drug Diovan, which was losing patent protection, and the realization that the anticipated successor to Diovan, Rasilez, was failing in clinical trials. The 2012 personnel reductions followed ~2,000 cut positions in Switzerland and the United States in 2011, ~1,400 cut positions in the United States in 2010, and a reduction of "thousands" and several site closures in previous years. Also in 2012, Novartis became the biggest manufacturer of generic skin care medicine, after agreeing to buy Fougera Pharmaceuticals for $1.525 billion in cash. In 2013, the Indian Supreme Court issued a decision rejecting Novartis' patent application in India on the final form of Gleevec, Novartis's cancer drug; the case caused great controversy. In 2013, Novartis was sued again by the US government, this time for allegedly bribing doctors for a decade so that their patients would be steered towards the company's drugs. In January 2014, Novartis announced plans to cut 500 jobs from its pharmaceuticals division. In February 2014, Novartis announced that it had acquired CoStim Pharmaceuticals. In May 2014, Novartis purchased the rights to market Ophthotech's Fovista (an anti-PDGF aptamer, also being investigated for use in combination with anti-VEGF treatments) outside the U.S. for up to $1 billion. Novartis acquired exclusive rights to market the eye drug outside of the United States, while Ophthotech retained U.S. marketing rights. The company agreed to pay Ophthotech $200 million upfront, and $130 million in milestone payments relating to Phase III trials. Ophthotech was also eligible to receive up to $300 million dependent upon future marketing approval milestones outside of America and up to $400 million relating to sales milestones. In September 2014, Ophthotech received its first $50 million Phase III trial milestone payment from Novartis. In April 2014, Novartis announced that it would acquire GlaxoSmithKline's cancer drug business for $16 billion as well as selling its vaccines business to GlaxoSmithKline for $7.1 billion. In August 2014 Genetic Engineering & Biotechnology News reported that Novartis had acquired a 15 percent stake in Gamida Cell for $35 million, with the option to purchase the whole company for approximately $165 million. In October 2014, Novartis announced its intention to sell its influenza vaccine business (inclusive of its development pipeline), subject to regulatory approval, to CSL for $275 million. In March 2015, the company announced that Array BioPharma had completed its acquisition of two Phase III cancer-drug candidates: the MEK inhibitor binimetinib (MEK 162) and the BRAF inhibitor encorafenib (LGX818), for $85 million. 
In addition, the company sold its RNAi portfolio to Arrowhead Research for $10 million and $25 million in stock. In June, the company announced it would acquire Spinifex Pharmaceuticals for more than $200 million. In August, the company acquired the remaining rights to the CD20 monoclonal antibody ofatumumab from GlaxoSmithKline for up to $1 billion. In October the company acquired Admune Therapeutics for an undisclosed sum, as well as licensing PBF-509, an adenosine A2A receptor antagonist which is in Phase I clinical trials for non-small cell lung cancer, from Palobiofarma. In November 2016, the company announced it would acquire Selexys Pharmaceuticals for $665 million. In December, the company acquired Encore Vision, gaining that company's principal compound, EV06, a first-in-class topical therapy for presbyopia. In December Novartis also acquired Ziarco Group Limited, bolstering its presence in eczema treatments. In late October 2017, Reuters announced that Novartis would acquire Advanced Accelerator Applications for $3.9 billion, paying $41 per ordinary share and $82 per American depositary share, representing a 47 percent premium. In March 2018, GlaxoSmithKline announced that it had reached an agreement with Novartis to acquire Novartis' 36.5 percent stake in their Consumer Healthcare Joint Venture for $13 billion (£9.2 billion). In April of the same year, the business utilised some of the proceeds from the aforementioned GlaxoSmithKline deal to acquire AveXis for $218 per share, or $8.7 billion in total, gaining the lead compound AVXS-101 used to treat spinal muscular atrophy. In August 2018, Novartis signed a deal with Laekna, a Shanghai-based pharmaceutical company, for two of its clinical-stage cancer drugs. Novartis gave Laekna the exclusive international rights to the drugs, which are oral pan-Akt kinase inhibitors, namely afuresertib (ASB138) and uprosertib (UPB795). In mid-October, the company announced it would acquire Endocyte Inc for $2.1 billion ($24 per share), merging it with a newly created subsidiary. Endocyte bolstered Novartis' offering in its radiopharmaceuticals business, with Endocyte's first-in-class candidate 177Lu-PSMA-617 being targeted against metastatic castration-resistant prostate cancer. In late December the company announced it would acquire the France-based contract manufacturer CellforCure from LFB, boosting its capacity to produce cell and gene therapies. On 9 April 2019, Novartis announced that it had completed the spin-off of Alcon as a separate commercial entity. Alcon was listed on the SIX exchange in Switzerland and the NYSE exchange in the U.S. Novartis announced during late 2019 a five-year artificial intelligence "alliance" with Microsoft. The companies aim to create applications for "Microsoft's AI capabilities", in turn improving Novartis's drug development processes. Microsoft seeks to "test AI products it is already working on in 'real-life' situations". The deal will pursue solutions for "organizing and using" data generated from Novartis' laboratory experiments, clinical trials, and manufacturing plants. It will also look at improving the manufacturing of chimeric antigen receptor T cells (CAR T cells). Finally, the deal "will also apply AI to generative chemistry to enhance drug design". In November 2019, Sandoz announced it would acquire the Japanese business of Aspen Global Inc for €300 million (around $330 million), boosting the business's presence in Asia. 
In late November 2019, the business announced it would acquire The Medicines Company, at $85 per share, in order to gain, amongst other assets, the cholesterol-lowering therapy inclisiran. In April 2020, the company announced it would acquire Amblyotech. In September 2020, Novartis was fined €385 million by the French competition authority over accusations of abusive practices to preserve sales of Lucentis over a cheaper drug. Also in September, BioNTech leased a large production facility from Novartis to meet advance demand for its coronavirus vaccine in Europe and to sell it to China. In July 2020, Novartis agreed to pay $678 million to settle allegations that the company violated the False Claims Act and Anti-Kickback Statute by paying physicians to induce them to prescribe certain of the company's drugs. Novartis allegedly spent hundreds of millions of dollars on fraudulent speaker programs that served as a means to bribe doctors with cash payments and other extravagant rewards. Many of these speaking programs were allegedly nothing more than social gatherings at expensive restaurants, with limited or no discussion about the Novartis drugs. In October Novartis announced it would acquire Vedere Bio for $280 million, boosting the business's cell and gene therapy offerings. In October 2020, as part of a joint venture to develop therapeutic drugs to combat COVID-19, Novartis bought 6% of all shares outstanding in the Swiss DARPin research company Molecular Partners AG at CHF 23 per share. In December 2020, Novartis announced it would acquire Cadent Therapeutics for up to $770 million, gaining full rights to CAD-9303 (an NMDAr positive allosteric modulator), MIJ-821 (an NMDAr negative allosteric modulator) and CAD-1883, a clinical-stage SK channel positive allosteric modulator. In September 2021, the company announced it would acquire the gene-therapy business Arctos Medical, broadening its optogenetics range. In December, Novartis announced it would purchase Gyroscope Therapeutics from the health care investment company Syncona Ltd for up to $1.5 billion. In February 2022, the New York City-based biotechnology company Cambrian Biopharma announced it had licensed rights to mTOR inhibitor programs from Novartis. As part of the deal, Cambrian was setting up a subsidiary called Tornado Therapeutics. In August 2022, the company announced its plan to spin off its Sandoz generic drugs unit to form a publicly traded business as part of a restructuring. With the unit having generated US$9.69 billion in 2021, the spin-off would create the biggest generic drugs company in Europe by sales. In June 2023, Novartis announced it would acquire Chinook Therapeutics and its drug pipeline for up to $3.5 billion. In July 2023, Novartis acquired DTx Pharma, a developer of technology for delivering RNA-based therapies, for $500 million upfront and an additional $500 million subject to reaching certain targets. Also in June, Novartis announced it would sell Xiidra to Bausch & Lomb for $1.75 billion, with an additional $750 million linked to future sales of Xiidra, as well as two pipeline assets. In September 2023, Novartis announced that the spin-off had been approved by its shareholders and that it would be completed by the next month, resulting in Novartis shareholders receiving one Sandoz share for every five Novartis shares. Sandoz was to be listed on the SIX Swiss Exchange with a market capitalization of between $18 billion and $25 billion. 
On 4 October 2023, Novartis completed the spin-off of Sandoz as a stand-alone company. In November 2023, Legend Biotech and Novartis signed an out-licensing deal to develop and manufacture Legend's chimeric antigen receptor (CAR-T) therapies that target delta-like ligand protein 3 (DLL3), including the large cell neuroendocrine carcinoma candidate LB2102, for $100 million upfront; Legend Biotech will be eligible to receive up to $1.01 billion in clinical, regulatory, and commercial milestone payments, plus tiered royalties. In December 2023, Novartis sold its 15 ophthalmology drugs to JB Chemicals for ₹1,089 crore ($116 million). In 2023, the World Intellectual Property Organization (WIPO)'s Madrid Yearly Review ranked Novartis 4th in the world by number of trademark applications filed under the Madrid System, with 110 applications submitted during 2023. In February 2024, Novartis announced it would acquire the German biotech firm MorphoSys AG for €2.7 billion. Germany's antitrust regulator, the Federal Cartel Office, approved the takeover in March 2024. In May 2024, Novartis announced it would acquire Mariana Oncology for $1 billion upfront and up to $750 million more if certain milestones were met. In July 2024, Novartis entered into a strategic collaboration with Dren Bio to develop therapeutic bispecific antibodies for cancer, with the deal worth up to $3 billion. In November 2024, Novartis and Ratio Therapeutics entered into a worldwide licence and collaboration agreement worth $745 million to advance a somatostatin receptor 2 (SSTR2)-targeting radiotherapeutic candidate for cancer. Acquisition history Novartis Novartis Ciba-Geigy J. R. Geigy Ltd CIBA Sandoz Kern and Sandoz Chemistry Firm Wander AG Lek d.d. (Slovenia) Aspen Global Inc (Japanese business) Hexal Eon Labs Chiron Corporation Matrix Pharmaceuticals Inc PowderJect PathoGenesis Cetus Corporation Cetus Oncology Biocine Company Chiron Diagnostics Chiron Intraoptics Chiron Technologies Adatomed GmbH Zhejiang Tianyuan Bio-Pharmaceutical Co., Ltd Alcon Texas Pharmacal Company Genoptix Fougera Pharmaceuticals CoStim Pharmaceuticals GlaxoSmithKline (Cancer drug division) Spinifex Pharmaceuticals Admune Therapeutics Selexys Pharmaceuticals Ziarco Group Limited Advanced Accelerator Applications AveXis Endocyte CellforCure The Medicines Company Amblyotech Vedere Bio Cadent Therapeutics Luc Therapeutics Ataxion Therapeutics Arctos Medical Gyroscope Therapeutics Chinook Therapeutics DTx Pharma MorphoSys Mariana Oncology Corporate structure Novartis AG is a publicly traded Swiss holding company that operates through the Novartis Group and owns, directly or indirectly, all companies worldwide that operate as subsidiaries of the Novartis Group. Novartis's businesses are divided into two operating divisions: Innovative Medicines and Sandoz (generics). The eye-care division Alcon was spun off into an independent company in April 2019. In August 2022, Novartis announced plans to spin off Sandoz as part of restructuring. The spin-off was completed in October 2023. The Innovative Medicines business is made up of two commercial units: Innovative Medicines International and Innovative Medicines US. The two business units combine the pharmaceutical and oncology divisions and commercially focus on the global and US markets respectively. 
Novartis operates directly through subsidiaries, each of which falls under one of the divisions and is categorized by Novartis as fulfilling one or more of the following functions: Holding/Finance, Sales, Production, and Research. Novartis AG also held 33.3 percent of the shares of Roche until 2022; however, it did not exercise control over Roche. Novartis also has two significant license agreements with Genentech, a Roche subsidiary. One agreement is for Lucentis; the other is for Xolair. In 2014, Novartis established a center in Hyderabad, India, in order to offshore several of its R&D, clinical development, medical writing and administrative functions. The center supports the drug major's operations in the pharmaceuticals (Novartis), eye care (Alcon), and generic drugs (Sandoz) segments. Place in its market segments Novartis is the world's largest company in the life sciences and agribusiness markets. It was also the second-largest pharmaceutical company by market cap in 2019. Alcon: At the time Novartis bought Alcon, it had annual sales of $6.5 billion and a net income of $2 billion. In April 2019, Novartis completed the spin-off of Alcon as a separate commercial entity. Sandoz: Sandoz has been recognized as the world's second-largest generic drug company. Sandoz' biosimilars lead its field, having obtained the first biosimilar approvals in the EU. In 2018, Sandoz reported US$9.9 billion in net sales. In August 2022, Novartis announced plans to spin off Sandoz by the second half of 2023. Vaccines and Diagnostics Division: In 2013, Novartis announced it was considering selling the vaccines and diagnostics division off. This sale was completed in late 2015, and the division was integrated into CSL's BioCSL operation, with the combined entity trading as Seqirus. In 2018, Novartis sold its stake in its consumer healthcare joint venture to GlaxoSmithKline for US$13.0 billion. Consumer: Novartis is not a leader in the over-the-counter or animal health segments; its leading OTC brands are Excedrin and Theraflu, but sales have been slowed by problems at its key US manufacturing plant. In 2018, Novartis ranked second on the Access to Medicine Index, which "ranks companies on how readily they make their products available to the world's poor." Finance For the fiscal year 2022, Novartis reported earnings of US$6.955 billion, a decrease of 71 percent from the previous fiscal year, on annual revenue of US$50.545 billion. Novartis shares traded at over $80.56 per share, and its market capitalization was valued at $198.34 billion as of 31 January 2023. Research The company's global research operations, called the Novartis Institutes for BioMedical Research (NIBR), have their global headquarters in Cambridge, Massachusetts, United States. Two research institutes reside within NIBR that focus on diseases in the developing world: the Novartis Institute for Tropical Diseases, which works on tuberculosis, dengue, and malaria, and the Novartis Vaccines Institute for Global Health, which works on Salmonella Typhi (typhoid fever) and Shigella. Novartis is also involved in publicly funded collaborative research projects with other industrial and academic partners. One example in the area of non-clinical safety assessment is the InnoMed PredTox project. The company is expanding its activities in joint research projects within the framework of the Innovative Medicines Initiative of EFPIA and the European Commission. 
Novartis is working with Science 37 in order to allow video based telemedicine visits instead of physical traveling to clinics for patients. It is planning for ten clinical trials over three years using mobile technology to help free patients from burdensome hospital trips. Products Pharmaceuticals (66 in total as of 28 April 2023) Consumer health Benefiber Bialcol Alcohol Buckley's cold and cough formula Bufferin ChestEze Comtrex cold and cough Denavir/Vectavir Desenex Doan's pain relief Ex-Lax Excedrin Fenistil Gas-X Habitrol Keri skin care Lamisil foot care Lipactin herpes symptomatic treatment Maalox Nicotinell No-doz Quinvaxem (Pentavalent vaccine) Otrivine Prevacid 24HR Savlon Tavist Theraflu Vagistat Tixylix Voltaren In January 2009, the United States Department of Health and Human Services awarded Novartis a $486Β million contract for construction of the first US plant to produce cell-based influenza vaccine, to be located in Holly Springs, North Carolina. The stated goal of this program is the capability of producing 150,000,000 doses of pandemic vaccine within six months of declaring a flu pandemic. In April 2014, Novartis divested its consumer health section with $3.5Β billion worth of assets into a new joint venture with GlaxoSmithKline, named GSK Consumer Healthcare, of which Novartis will hold a 36.5% stake. In March 2018, GSK announced that it has reached an agreement with Novartis to acquire Novartis' 36.5% stake in their Consumer Healthcare Joint Venture for $13Β billion (Β£9.2Β billion). Animal health Pet care Interceptor (Milbemycin oxime), oral worm control product Sentinel Flavor Tabs (Milbemycin oxime, Lufenuron), oral flea control product Deramaxx (Deracoxib), oral treatment for pain and inflammation from osteoarthritis in dogs Capstar (Nitenpyram), oral tablet for flea control Milbemax (Milbemycin oxime, Praziquantel), oral worm treatment Program (Lufenuron), oral tablet for flea control Livestock Acatalk Duostar (Fluazuron, Ivermectin), tick control for cattle CLiK (Dicyclanil), blowfly control for sheep Denagard (Tiamulin), antibiotic for the treatment of swine dysentery associated with Brachyspira (formerly Serpulina or Treponema) Fasinex (Triclabendazole), oral drench for cattle that is used for the treatment and control of all three stages of liver fluke ViraShield, For use in healthy cattle, including pregnant cows and heifers, as an aid in the prevention of disease caused by infectious bovine rhinotracheitis (IBR), bovine virus diarrhoea (BVD Type 1 and BVD Type 2), parainfluenza Type 3 (PI3), and bovine respiratory syncytial (BRSV) viruses Bioprotection (insect and rodent control) Actara (Thiamethoxam) Atrazine (Atrazine) Larvadex (Cyromazine) Neporex (Cyromazine) Oxyfly (Lambda-cyhalothrin) Virusnip (Potassium monopersulfate) Controversies and criticism Challenge to India's patent laws Novartis fought a seven-year, controversial battle to patent Gleevec in India, and took the case all the way to the Indian Supreme Court, where the patent application was finally rejected. The patent application at the center of the case was filed by Novartis in India in 1998, after India had agreed to enter the World Trade Organization and to abide by worldwide intellectual property standards under the TRIPS agreement. As part of this agreement, India made changes to its patent law; the biggest of which was that prior to these changes, patents on products were not allowed, afterwards they were, albeit with restrictions. 
These changes came into effect in 2005, so Novartis' patent application waited in a "mailbox" with others until then, under procedures that India instituted to manage the transition. India also passed certain amendments to its patent law in 2005, just before the laws came into effect, which played a key role in the rejection of the patent application. The patent application claimed the final form of Gleevec (the beta crystalline form of imatinib mesylate). In 1993 before India allowed patents on products, Novartis had patented imatinib, with salts vaguely specified, in many countries but could not patent it in India. The key differences between the two patent applications were that the 1998 patent application specified the counterion (Gleevec is a specific saltβ€”imatinib mesylate) while the 1993 patent application did not claim any specific salts nor did it mention mesylate, and the 1998 patent application specified the solid form of Gleevecβ€”the way the individual molecules are packed together into a solid when the drug itself is manufactured (this is separate from processes by which the drug itself is formulated into pills or capsules)β€”while the 1993 patent application did not. The solid form of imatinib mesylate in Gleevec is beta crystalline. As provided under the TRIPS agreement, Novartis applied for Exclusive Marketing Rights (EMR) for Gleevec from the Indian Patent Office and the EMR was granted in November 2003. Novartis made use of the EMR to obtain orders against some generic manufacturers who had already launched Gleevec in India. Novartis set the price of Gleevec at US$2666 per patient per month; generic companies were selling their versions at US$177 to 266 per patient per month. Novartis also initiated a program to assist patients who could not afford its version of the drug, concurrent with its product launch. When examination of Novartis' patent application began in 2005, it came under immediate attack from oppositions initiated by generic companies that were already selling Gleevec in India and by advocacy groups. The application was rejected by the patent office and by an appeal board. The key basis for the rejection was the part of Indian patent law that was created by amendment in 2005, describing the patentability of new uses for known drugs and modifications of known drugs. That section, Paragraph 3d, specified that such inventions are patentable only if "they differ significantly in properties with regard to efficacy." At one point, Novartis went to court to try to invalidate Paragraph 3d; it argued that the provision was unconstitutionally vague and that it violated TRIPS. Novartis lost that case and did not appeal. Novartis did appeal the rejection by the patent office to India's Supreme Court, which took the case. The Supreme Court case hinged on the interpretation of Paragraph 3d. The Supreme Court decided that the substance that Novartis sought to patent was indeed a modification of a known drug (the raw form of imatinib, which was publicly disclosed in the 1993 patent application and in scientific articles), that Novartis did not present evidence of a difference in therapeutic efficacy between the final form of Gleevec and the raw form of imatinib, and that therefore the patent application was properly rejected by the patent office and lower courts. 
Although the court ruled narrowly, and took care to note that the subject application was filed during a time of transition in Indian patent law, the decision generated widespread global news coverage and reignited debates on balancing public good with monopolistic pricing, innovation with affordability etc. Had Novartis won and had its patent issued, it could not have prevented generics companies in India from selling generic Gleevec, but it could have obliged them to pay a reasonable royalty under a grandfather clause included in India's patent law. In reaction to the decision, Ranjit Shahani, vice-chairman and managing director of Novartis India Ltd was quoted as saying "This ruling is a setback for patients that will hinder medical progress for diseases without effective treatment options." He also said that companies like Novartis would invest less money in research in India as a result of the ruling. Novartis also emphasised that it continues to be committed to good access to its drugs; according to Novartis, by 2013, "95% of patients in Indiaβ€”roughly 16,000 peopleβ€”receive Glivec free of charge... and it has provided more than $1.7Β billion worth of Glivec to Indian patients in its support program since it was started...." Sexual discrimination On 17 May 2010, a jury in the United States District Court for the Southern District of New York awarded $3,367,250 in compensatory damages against Novartis, finding that the company had committed sexual discrimination against twelve female sales representatives and entry-level managers since 2002, in matters of pay, promotion, and treatment after learning that the employees were pregnant. Two months later the company settled with the remaining plaintiffs for $152.5Β million plus attorney fees. Marketing violations In September 2008, the US Food and Drug Administration (FDA) sent a notice to Novartis Pharmaceuticals regarding its advertising of Focalin XR, an ADHD drug, in which the company overstated its efficacy while marketing to the public and medical professionals. In 2005, federal prosecutors opened an investigation into Novartis' marketing of several drugs: Trileptal, an antiseizure drug; three drugs for heart conditionsβ€”Diovan (the company's top-selling product), Exforge, and Tekturna; Sandostatin, a drug to treat a growth hormone disorder; and Zelnorm, a drug for irritable bowel syndrome. In September 2010, Novartis agreed to pay US$422.5Β million in criminal and civil claims and to enter into a corporate integrity agreement with the US Office of the Inspector General. According to The New York Times, "Federal prosecutors accused Novartis of paying illegal kickbacks to health care professionals through speaker programs, advisory boards, entertainment, travel and meals. But aside from pleading guilty to one misdemeanor charge of mislabeling in an agreement that Novartis announced in February, the company denied wrongdoing." In the same New York Times article, Frank Lichtenberg, a Columbia professor who receives pharmaceutical financing for research on innovation in the industry, said off-label prescribing was encouraged by the American Medical Association and paid for by insurers, but off-label marketing was clearly illegal. "So it's not surprising that they would settle because they don't have a legal leg to stand on." In April 2013, federal prosecutors filed two lawsuits against Novartis under the False Claims Act for off-label marketing and kickbacks; in both suits, prosecutors are seeking treble damages. 
The first suit "accused Novartis of inducing pharmacies to switch thousands of kidney transplant patients to its immunosuppressant drug Myfortic in exchange for kickbacks disguised as rebates and discounts". In the second, the Justice Department joined a qui tam, or whistleblower, lawsuit brought by a former sales rep over off-label marketing of three drugs: Lotrel and Valturna (both hypertension drugs), and the diabetes drug, Starlix. Twenty-seven states, the District of Columbia and Chicago and New York also joined. Avastin Outside the US, Novartis markets the drug ranibizumab (trade name Lucentis), which is a monoclonal antibody fragment derived from the same parent mouse antibody as bevacizumab (Avastin). Both Avastin and Lucentis were created by Genentech which is owned by Roche; Roche markets Avastin worldwide, and also markets Lucentis in the US. Lucentis has been approved worldwide as a treatment for wet macular degeneration and other retinal disorders; Avastin is used to treat certain cancers. Because the price of Lucentis is much higher than Avastin, many ophthalmologists began having compounding pharmacies formulate Avastin for administration to the eye and began treating their patients with Avastin. In 2011, four trusts of the National Health Service in the UK issued policies approving use and payment for administering Avastin for macular degeneration, in order to save money, even though Avastin had not been approved for that indication. In April 2012, after failing to persuade the trusts that it was uncertain whether Avastin was as safe and effective as Lucentis, and in order to retain the market for Lucentis, Novartis announced it would sue the trusts. However, in July Novartis offered significant discounts (kept confidential) to the trusts, and the trusts agreed to change their policy, and in November, Novartis dropped the litigation. Valsartan In the summer of 2013, two Japanese universities retracted several publications of clinical trials that purported to show that Valsartan (branded as Diovan) had cardiovascular benefits, when it was found that statistical analysis had been manipulated, and that a Novartis employee had participated in the statistical analysis but had not disclosed his relationship with Novartis but only his affiliation with Osaka City University, where he was a lecturer. As a result, several Japanese hospitals stopped using the drug, and media outlets ran reports on the scandal in Japan. In January 2014 Japan's Health Ministry filed a criminal complaint with the Tokyo public prosecutor's office against Novartis and an unspecified number of employees, for allegedly misleading consumers through advertisements that used the research to support the benefits of Diovan. On 1 July 2014 the prosecutor's office announced it was formally charging the company and one of its employees. Corruption In January 2018, Novartis began being investigated by US and Greek authorities for allegedly bribing Greek public officials in the 2006–2015 period, in a scheme which included two former prime ministers, several former health ministers, many high ranking party members of the Nea Dimokratia and PASOK ruling parties, as well as bankers. The manager of Novartis' Greek branch was prohibited from leaving the country. The minister's deputy described the allegations as "the biggest scandal since the creation of the Greek state", which caused "annual state expenditure on medicine to explode". 
Most of the ministers involved in the scandal have denied the allegations and sought to paint the case as "political targeting" and "fabrication" by the Syriza opposition party. However, the Greek Judicial Council ruled that the scandal was real. Besides bribery involving artificial increases in the prices of several medicines, the case also involves money laundering, with suspicions that "illegal funds of more than four billion euros ($4.2 billion)" were involved. In June 2020, Novartis reached settlements with the US Department of Justice (DOJ) and the US Securities and Exchange Commission (SEC) resolving all Foreign Corrupt Practices Act (FCPA) investigations into historical conduct by the company and its subsidiaries. As part of the resolutions, Novartis and some of its current and former subsidiaries would pay US$233.9 million to the DOJ and US$112.8 million to the SEC. Michael Cohen Novartis paid $1.2 million to Essential Consultants, an entity owned by Michael Cohen, following the 2017 inauguration of Donald Trump. Cohen was paid monthly, with each payment just under $100,000. Novartis claims it paid Cohen to help it understand and influence the new administration's approach to drug pricing and regulation. In July 2018, the US Senate committee report "White House Access for Sale" revealed that Novartis AG's relationship with Cohen was "longer and more detailed" than previously acknowledged. Novartis initially stated that the relationship ceased a month after entering into the US$1.2 million contract with Cohen's consulting firm, since the consultants were not able to provide the information the pharmaceutical company needed. Later, however, it became clear that then-CEO Joseph Jimenez and Cohen communicated via email multiple times during 2017, including about ideas to lower drug prices to be discussed with the president. According to the report, several of the ideas appeared later in Trump's drug pricing plan, released in early 2018, in which pharmaceutical companies were protected from reduced revenues. AveXis data integrity Having already received approval for Zolgensma in May 2019, on 28 June AveXis (a Novartis company) voluntarily disclosed to the FDA that some data previously submitted to the agency as part of the Biologics License Application (BLA) package was inaccurate. Specifically, the data manipulation related to an in vivo murine potency assay used in the early development of the product, but the issue the FDA and the wider community have taken is that AveXis was aware of the data manipulation as early as 14 March 2019, almost two months before the BLA was approved. To compound the problem, in early August it emerged that a senior manager had sold almost $1 million worth of stock immediately before the FDA probe became public on 6 August, but after the company had informed the FDA of the problem. As of September 2019, the FDA was still preparing its response to the scandal. Philanthropy Fight against leprosy Novartis has been committed for decades to eliminating leprosy and has provided free multidrug therapy to all endemic countries since 2000. 
See also List of pharmaceutical companies Pharmaceutical industry in Switzerland References Further reading External links Biotechnology companies established in 1996 Biotechnology companies of Switzerland Companies listed on the SIX Swiss Exchange Companies listed on the New York Stock Exchange Companies in the Swiss Market Index Eyewear companies of Switzerland Life sciences industry Multinational companies headquartered in Switzerland Pharmaceutical companies established in 1996 Pharmaceutical companies of Switzerland Swiss brands Swiss companies established in 1996 Vaccine producers Veterinary medicine companies Companies in the Dow Jones Global Titans 50 Companies in the S&P Europe 350 Dividend Aristocrats
Novartis
[ "Biology" ]
9,851
[ "Life sciences industry" ]
159,285
https://en.wikipedia.org/wiki/Abel%20Prize
The Abel Prize ( ; ) is awarded annually by the King of Norway to one or more outstanding mathematicians. It is named after the Norwegian mathematician Niels Henrik Abel (1802–1829) and directly modeled after the Nobel Prizes; as such, it is widely considered the Nobel Prize of mathematics. It comes with a monetary award of 7.5 million Norwegian kroner (NOK; increased from 6 million NOK in 2019). The Abel Prize's history dates back to 1899, when its establishment was proposed by the Norwegian mathematician Sophus Lie when he learned that Alfred Nobel's plans for annual prizes would not include a prize in mathematics. In 1902, King Oscar II of Sweden and Norway indicated his willingness to finance the creation of a mathematics prize to complement the Nobel Prizes, but the establishment of the prize was prevented by the dissolution of the union between Norway and Sweden in 1905. It took almost a century before the prize was finally established by the Government of Norway in 2001, and it was specifically intended "to give the mathematicians their own equivalent of a Nobel Prize." The laureates are selected by the Abel Committee, the members of whom are appointed by the Norwegian Academy of Science and Letters. The award ceremony takes place in the aula of the University of Oslo, where the Nobel Peace Prize was awarded between 1947 and 1989. The Abel Prize board has also established an Abel symposium, administered by the Norwegian Mathematical Society, which takes place twice a year. History The prize was first proposed in 1899, to be part of the celebration of the 100th anniversary of Niels Henrik Abel's birth in 1802. The Norwegian mathematician Sophus Lie proposed establishing an Abel Prize when he learned that Alfred Nobel's plans for annual prizes would not include a prize in mathematics. King Oscar II was willing to finance a mathematics prize in 1902, and the mathematicians Ludwig Sylow and Carl StΓΈrmer drew up statutes and rules for the proposed prize. However, Lie's influence decreased after his death, and the dissolution of the union between Sweden and Norway in 1905 ended the first attempt to create an Abel Prize. After interest in the concept of the prize had risen in 2001, a working group was formed to develop a proposal, which was presented to the Prime Minister of Norway in May. In August 2001, the Norwegian government announced that the prize would be awarded beginning in 2002, the two-hundredth anniversary of Abel's birth. Atle Selberg received an honorary Abel Prize in 2002, but the first actual Abel Prize was awarded in 2003. A book series presenting Abel Prize laureates and their research was commenced in 2010. The first three volumes cover the years 2003–2007, 2008–2012, and 2013–2017 respectively. In 2019, Karen Uhlenbeck became the first woman to win the Abel Prize, with the award committee citing "the fundamental impact of her work on analysis, geometry and mathematical physics. The Bernt Michael Holmboe Memorial Prize was created in 2005. Named after Abel's teacher, it promotes excellence in teaching. Selection criteria and funding Anyone may submit a nomination for the Abel Prize, although self-nominations are not permitted. The nominee must be alive. If the awardee dies after being declared the winner, the prize will be awarded posthumously. The Norwegian Academy of Science and Letters declares the winner of the Abel Prize each March after recommendation by the Abel Committee, which consists of five leading mathematicians. 
Both Norwegians and non-Norwegians may serve on the Committee. They are elected by the Norwegian Academy of Science and Letters and nominated by the International Mathematical Union and the European Mathematical Society. Funding The Norwegian Government gave the prize an initial funding of NOK 200 million (about €21.7 million) in 2001. Previously, the funding came from the Abel foundation, but today the prize is financed directly through the national budget. The funding is controlled by the Board, which consists of members elected by the Norwegian Academy of Science and Letters. The current board consists of Ingrid K. Glad (chair), Aslak Bakke Buan, Helge K. Dahle, Kristin Vinje, Cordian Riener and Gunn Elisabeth Birkelund. Laureates See also Fields Medal List of prizes known as the Nobel or the highest honors of a field List of mathematics awards References External links Official website of the Abel Symposium What is the Abel Prize? Millennium Mathematics Project & Isaac Newton Institute 2001 establishments in Norway Academic awards Awards established in 2001 International awards Mathematics awards Niels Henrik Abel Norwegian awards
Abel Prize
[ "Technology" ]
906
[ "Science and technology awards", "International science and technology awards", "Mathematics awards" ]
159,292
https://en.wikipedia.org/wiki/Potassium%20chloride
Potassium chloride (KCl, or potassium salt) is a metal halide salt composed of potassium and chlorine. It is odorless and has a white or colorless vitreous crystal appearance. The solid dissolves readily in water, and its solutions have a salt-like taste. Potassium chloride can be obtained from ancient dried lake deposits. KCl is used as a fertilizer, in medicine, in scientific applications, in domestic water softeners (as a substitute for sodium chloride salt), and in food processing, where it may be known as E number additive E508. It occurs naturally as the mineral sylvite, which is named after the salt's historical designations sal degistivum Sylvii and sal febrifugum Sylvii, and in combination with sodium chloride as sylvinite. Uses Fertilizer The majority of the potassium chloride produced is used for making fertilizer, called potash, since the growth of many plants is limited by potassium availability. The term "potash" refers to various mined and manufactured salts that contain potassium in water-soluble form. Potassium chloride sold as fertilizer is known as "muriate of potash", the common name for potassium chloride (KCl) used in agriculture. The vast majority of potash fertilizer worldwide is sold as muriate of potash. The dominance of muriate of potash in the fertilizer market is due to its high potassium content (approximately 60% K2O equivalent) and relative affordability compared to other potassium sources like sulfate of potash (potassium sulfate). Potassium is one of the three primary macronutrients essential for plant growth, alongside nitrogen and phosphorus. Potassium plays a vital role in various plant physiological processes, including enzyme activation, photosynthesis, protein synthesis, and water regulation. For watering plants, a moderate concentration of potassium chloride (KCl) is used to avoid potential toxicity: 6 mM (millimolar) is generally effective and safe for most plants; that is approximately 0.45 g per liter of water. Medical use Potassium is vital in the human body, and potassium chloride by mouth is the standard means to treat low blood potassium, although it can also be given intravenously. It is on the World Health Organization's List of Essential Medicines. It is also an ingredient in oral rehydration solution (ORS), used in oral rehydration therapy (ORT), to reduce hypokalemia caused by diarrhoea. This is another medicine on the WHO's List of Essential Medicines. Potassium chloride contains about 52% elemental potassium by mass. Overdose causes hyperkalemia, which can disrupt cell signaling to the extent that the heart will stop, reversibly in the case of some open heart surgeries. Culinary use Potassium chloride can be used as a salt substitute for food, but due to its weak, bitter, unsalty flavor, it is often mixed with ordinary table salt (sodium chloride) to improve the taste, forming low-sodium salt. The addition of 1 ppm of thaumatin considerably reduces this bitterness. Complaints of bitterness or a chemical or metallic taste are also reported with potassium chloride used in food. Execution In the United States, potassium chloride is used as the final drug in the three-injection sequence of lethal injection as a form of capital punishment. It induces cardiac arrest, ultimately killing the inmate. Industrial As a chemical feedstock, the salt is used for the manufacture of potassium hydroxide and potassium metal. 
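The watering-concentration figure quoted in the fertilizer section above can be checked with a short calculation (an illustrative derivation, assuming the standard molar mass of KCl of roughly 74.55 g/mol):

$$6\ \text{mmol/L} \times 74.55\ \text{g/mol} \approx 0.45\ \text{g/L}$$

so a 6 mM solution corresponds to a little under half a gram of potassium chloride per liter of water.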
It is also used in medicine, lethal injections, scientific applications, food processing, soaps, and as a sodium-free substitute for table salt for people concerned about the health effects of sodium. It is used as a supplement in animal feed to boost the potassium level in the feed. As an added benefit, it is known to increase milk production. It is sometimes used in solution as a completion fluid in petroleum and natural gas operations, as well as being an alternative to sodium chloride in household water softener units. Glass manufacturers use granular potash as a flux, lowering the temperature at which a mixture melts. Because potash imparts excellent clarity to glass, it is commonly used in eyeglasses, glassware, televisions, and computer monitors. Because natural potassium contains a tiny amount of the isotope potassium-40, potassium chloride is used as a beta radiation source to calibrate radiation monitoring equipment. It also emits a relatively low level of 511 keV gamma rays from positron annihilation, which can be used to calibrate medical scanners. Potassium chloride is used in some de-icing products designed to be safer for pets and plants, though these are inferior in melting quality to calcium chloride. It is also used in various brands of bottled water. Potassium chloride was once used as a fire-extinguishing agent in portable and wheeled fire extinguishers. Known as Super-K dry chemical, it was more effective than sodium bicarbonate-based dry chemicals and was compatible with protein foam. This agent fell out of favor with the introduction of potassium bicarbonate (Purple-K) dry chemical in the late 1960s, which was much less corrosive, as well as more effective. It is rated for B and C fires. Along with sodium chloride and lithium chloride, potassium chloride is used as a flux for the gas welding of aluminium. Potassium chloride is also an optical crystal with a wide transmission range from 210 nm to 20 ΞΌm. While cheap, KCl crystals are hygroscopic. This limits their application to protected environments or short-term uses such as prototyping. Exposed to free air, KCl optics will "rot". Whereas KCl components were formerly used for infrared optics, they have been entirely replaced by much tougher crystals such as zinc selenide. Potassium chloride is used as a scotophor with designation P10 in dark-trace CRTs, e.g. in the Skiatron. Toxicity The typical amounts of potassium chloride found in the diet appear to be generally safe. In larger quantities, however, potassium chloride is toxic. The median lethal dose (LD50) of orally ingested potassium chloride is approximately 2.5 g/kg of body mass. In comparison, the LD50 of sodium chloride (table salt) is 3.75 g/kg. Intravenously, the LD50 of potassium chloride is far smaller, at about 57.2 mg/kg to 66.7 mg/kg; this is found by dividing the lethal concentration of positive potassium ions (about 30 to 35 mg/kg) by the proportion by mass of potassium ions in potassium chloride (about 0.52445 mg K+/mg KCl). Chemical properties Solubility KCl is soluble in a variety of polar solvents. Solutions of KCl are common standards, for example for calibration of the electrical conductivity of (ionic) solutions, since KCl solutions are stable, allowing for reproducible measurements. In aqueous solution, it is essentially fully ionized into solvated K+ and Cl− ions. 
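As an arithmetic check of the intravenous LD50 figures quoted in the toxicity section above (an illustrative calculation, not an additional data point), dividing the lethal potassium-ion dose by the mass fraction of potassium in KCl gives:

$$\frac{30\ \text{mg/kg}}{0.52445} \approx 57.2\ \text{mg/kg}, \qquad \frac{35\ \text{mg/kg}}{0.52445} \approx 66.7\ \text{mg/kg}$$

where 0.52445 is approximately the ratio of the atomic mass of potassium (about 39.10 g/mol) to the molar mass of KCl (about 74.55 g/mol).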
Redox and the conversion to potassium metal Although potassium is more electropositive than sodium, KCl can be reduced to the metal by reaction with metallic sodium at 850 Β°C because the more volatile potassium can be removed by distillation (see Le Chatelier's principle): KCl(l) + Na(l) ⇌ NaCl(l) + K(g). This is the main method for producing metallic potassium. Electrolysis (used for sodium) fails because of the high solubility of potassium in molten KCl. Other potassium chloride stoichiometries Potassium chlorides with formulas other than KCl have been predicted to become stable under pressures of 20 GPa or more. Among these, two phases of KCl3 were synthesized and characterized. At 20–40 GPa, a trigonal structure containing K+ and Cl3βˆ’ is obtained; above 40 GPa this gives way to a phase isostructural with the intermetallic compound Cr3Si. Physical properties Under ambient conditions, the crystal structure of potassium chloride is like that of NaCl. It adopts a face-centered cubic structure known as the B1 phase with a lattice constant of roughly 6.3 Γ…. Crystals cleave easily in three directions. Other polymorphic and hydrated phases are adopted at high pressures. Some other properties are: transmission range 210 nm to 20 ΞΌm; transmittivity 92% at 450 nm, rising linearly to 94% at 16 ΞΌm; refractive index 1.456 at 10 ΞΌm; reflection loss 6.8% at 10 ΞΌm (two surfaces); dn/dT (refractive index gradient) = βˆ’33.2Γ—10βˆ’6/Β°C; dL/dT (thermal expansion coefficient) = 40Γ—10βˆ’6/Β°C; thermal conductivity 0.036 W/(cmΒ·K); damage threshold (Newman and Novak) 4 GW/cm2 or 2 J/cm2 at 0.5 or 1 ns pulse lengths, and 4.2 J/cm2 at 1.7 ns pulse lengths (Kovalev and Faizullov). As with other compounds containing potassium, KCl in powdered form gives a lilac flame. Production Potassium chloride is extracted from the minerals sylvite, carnallite, and potash. It is also extracted from salt water and can be manufactured by crystallization from solution, flotation, or electrostatic separation from suitable minerals. It is a by-product of the production of nitric acid from potassium nitrate and hydrochloric acid. Most potassium chloride is produced as agricultural and industrial-grade potash in Saskatchewan, Canada, Russia, and Belarus. Saskatchewan alone accounted for over 25% of the world's potash production in 2017. Laboratory methods Potassium chloride is inexpensively available and is rarely prepared intentionally in the laboratory. It can be generated by treating potassium hydroxide (or other potassium bases) with hydrochloric acid: KOH + HCl → KCl + H2O. This conversion is an acid-base neutralization reaction, and the resulting salt can then be purified by recrystallization. Another method would be to allow potassium to burn in the presence of chlorine gas, in a very exothermic reaction: 2 K + Cl2 → 2 KCl. References Further reading External links Alkali metal chlorides Chlorides Dietary minerals Edible salt Inorganic fertilizers Lethal injection components Metal halides Potash Potassium compounds World Health Organization essential medicines E-number additives Rock salt crystal structure
Potassium chloride
[ "Chemistry" ]
2,185
[ "Chlorides", "Inorganic compounds", "Salts", "Potash", "Metal halides", "Edible salt" ]
159,421
https://en.wikipedia.org/wiki/Urination
Urination is the release of urine from the bladder to the outside of the body. Urine is released through the urethra and exits the penis or vulva through the urinary meatus in placental mammals, but is released through the cloaca in other vertebrates. It is the urinary system's form of excretion. It is also known medically as micturition, voiding, uresis, or, rarely, emiction, and known colloquially by various names including peeing, weeing, pissing, and euphemistically number one. The process of urination is under voluntary control in healthy humans and other animals, but may occur as a reflex in infants, some elderly individuals, and those with neurological injury. It is normal for adult humans to urinate up to seven times during the day. In some animals, in addition to expelling waste material, urination can mark territory or express submissiveness. Physiologically, urination involves coordination between the central, autonomic, and somatic nervous systems. Brain centres that regulate urination include the pontine micturition center, periaqueductal gray, and the cerebral cortex. Anatomy and physiology Anatomy of the bladder and outlet The main organs involved in urination are the urinary bladder and the urethra. The smooth muscle of the bladder, known as the detrusor, is innervated by sympathetic nervous system fibers from the lumbar spinal cord and parasympathetic fibers from the sacral spinal cord. Fibers in the pelvic nerves constitute the main afferent limb of the voiding reflex; the parasympathetic fibers to the bladder that constitute the excitatory efferent limb also travel in these nerves. Part of the urethra is surrounded by the male or female external urethral sphincter, which is innervated by the somatic pudendal nerve originating in the cord, in an area termed Onuf's nucleus. Smooth muscle bundles pass on either side of the urethra, and these fibers are sometimes called the internal urethral sphincter, although they do not encircle the urethra. Further along the urethra is a sphincter of skeletal muscle, the sphincter of the membranous urethra (external urethral sphincter). The bladder's epithelium is termed transitional epithelium which contains a superficial layer of dome-like cells and multiple layers of stratified cuboidal cells underneath when evacuated. When the bladder is fully distended the superficial cells become squamous (flat) and the stratification of the cuboidal cells is reduced in order to provide lateral stretching. Physiology The physiology of micturition and the physiologic basis of its disorders are subjects about which there is much confusion, especially at the supraspinal level. Micturition is fundamentally a spinobulbospinal reflex facilitated and inhibited by higher brain centers such as the pontine micturition center and, like defecation, subject to voluntary facilitation and inhibition. In healthy individuals, the lower urinary tract has two discrete phases of activity: the storage (or guarding) phase, when urine is stored in the bladder; and the voiding phase, when urine is released through the urethra. The state of the reflex system is dependent on both a conscious signal from the brain and the firing rate of sensory fibers from the bladder and urethra. At low bladder volumes, afferent firing is low, resulting in excitation of the outlet (the sphincter and urethra), and relaxation of the bladder. At high bladder volumes, afferent firing increases, causing a conscious sensation of urinary urge. 
An individual who is ready to urinate consciously initiates voiding, causing the bladder to contract and the outlet to relax. Voiding continues until the bladder empties completely, at which point the bladder relaxes and the outlet contracts to re-initiate storage. The muscles controlling micturition are governed by the autonomic and somatic nervous systems. During the storage phase, the internal urethral sphincter remains tense and the detrusor muscle is kept relaxed by sympathetic stimulation. During micturition, parasympathetic stimulation causes the detrusor muscle to contract and the internal urethral sphincter to relax. The external urethral sphincter (sphincter urethrae) is under somatic control and is consciously relaxed during micturition. In infants, voiding occurs involuntarily (as a reflex). The ability to voluntarily inhibit micturition develops by the age of two to three years, as control at higher levels of the central nervous system develops. In the adult, a reflex contraction is normally initiated once the bladder fills to a sufficient volume. Storage phase During storage, bladder pressure stays low, because of the bladder's highly compliant nature. A plot of bladder (intravesical) pressure against the volume of fluid in the bladder (a plot called a cystometrogram) will show a very slight rise as the bladder is filled. This phenomenon is a manifestation of the law of Laplace, which states that the pressure in a spherical viscus is equal to twice the wall tension divided by the radius. In the case of the bladder, the tension increases as the organ fills, but so does the radius. Therefore, the pressure increase is slight until the organ is relatively full. The bladder's smooth muscle has some inherent contractile activity; however, when its nerve supply is intact, stretch receptors in the bladder wall initiate a reflex contraction that has a lower threshold than the inherent contractile response of the muscle. Action potentials carried by sensory neurons from stretch receptors in the urinary bladder wall travel to the sacral segments of the spinal cord through the pelvic nerves. Since bladder wall stretch is low during the storage phase, these afferent neurons fire at low frequencies. Low-frequency afferent signals cause relaxation of the bladder by inhibiting sacral parasympathetic preganglionic neurons and exciting lumbar sympathetic preganglionic neurons. Conversely, afferent input causes contraction of the sphincter through excitation of Onuf's nucleus, and contraction of the bladder neck and urethra through excitation of the sympathetic preganglionic neurons. Diuresis (production of urine by the kidney) occurs constantly, and as the bladder becomes full, afferent firing increases, yet the micturition reflex can be voluntarily inhibited until it is appropriate to begin voiding. Voiding phase Voiding begins when a voluntary signal is sent from the brain to begin urination, and continues until the bladder is empty. Bladder afferent signals ascend the spinal cord to the periaqueductal gray, where they project both to the pontine micturition center and to the cerebrum. At a certain level of afferent activity, the conscious urge to void, or urinary urgency, becomes difficult to ignore. Once the voluntary signal to begin voiding has been issued, neurons in the pontine micturition center fire maximally, causing excitation of sacral preganglionic neurons. The firing of these neurons causes the wall of the bladder to contract; as a result, a sudden, sharp rise in intravesical pressure occurs. 
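The law of Laplace invoked in the storage-phase discussion above can be written compactly (a standard textbook relation, stated here for an idealized thin-walled sphere):

$$P = \frac{2T}{r}$$

where P is the pressure inside the bladder, T the wall tension, and r the radius. Because filling raises T and r together, the ratio, and hence the intravesical pressure, rises only slightly until the bladder is relatively full.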
The pontine micturition center also causes inhibition of Onuf's nucleus, resulting in relaxation of the external urinary sphincter. When the external urinary sphincter is relaxed urine is released from the urinary bladder when the pressure there is great enough to force urine to flow out of the urethra. The micturition reflex normally produces a series of contractions of the urinary bladder. The flow of urine through the urethra has an overall excitatory role in micturition, which helps sustain voiding until the bladder is empty. Many men, and some women, may sometimes briefly shiver after or during urination. After urination, the female urethra empties partially by gravity, with assistance from muscles. Urine remaining in the male urethra is expelled by several contractions of the bulbospongiosus muscle, and, by some men, manual squeezing along the length of the penis to expel the rest of the urine. For land mammals over 1 kilogram, the duration of urination does not vary with body mass, being dispersed around an average of 21 seconds (standard deviation 13 seconds), despite a 4 order of magnitude (1000Γ—) difference in bladder volume. This is due to increased urethra length of large animals, which amplifies gravitational force (hence flow rate), and increased urethra width, which increases flow rate. For smaller mammals a different phenomenon occurs, where urine is discharged as droplets, and urination in smaller mammals, such as mice and rats, can occur in less than a second. The posited benefits of faster voiding are decreased risk of predation (while voiding) and decreased risk of urinary tract infection. Voluntary control The mechanism by which voluntary urination is initiated remains unsettled. One possibility is that the voluntary relaxation of the muscles of the pelvic floor causes a sufficient downward tug on the detrusor muscle to initiate its contraction. Another possibility is the excitation or disinhibition of neurons in the pontine micturition center, which causes concurrent contraction of the bladder and relaxation of the sphincter. There is an inhibitory area for micturition in the midbrain. After transection of the brain stem just above the pons, the threshold is lowered and less bladder filling is required to trigger it, whereas after transection at the top of the midbrain, the threshold for the reflex is essentially normal. There is another facilitatory area in the posterior hypothalamus. In humans with lesions in the superior frontal gyrus, the desire to urinate is reduced and there is also difficulty in stopping micturition once it has commenced. However, stimulation experiments in animals indicate that other cortical areas also affect the process. The bladder can be made to contract by voluntary facilitation of the spinal voiding reflex when it contains only a few milliliters of urine. Voluntary contraction of the abdominal muscles aids the expulsion of urine by increasing the pressure applied to the urinary bladder wall, but voiding can be initiated without straining even when the bladder is nearly empty. Voiding can also be consciously interrupted once it has begun, through a contraction of the perineal muscles. The external sphincter can be contracted voluntarily, which will prevent urine from passing down the urethra. Experience of urination The need to urinate is experienced as an uncomfortable, full feeling. It is highly correlated with the fullness of the bladder. 
In many males the feeling of the need to urinate can be sensed at the base of the penis as well as the bladder, even though the neural activity associated with a full bladder comes from the bladder itself, and can be felt there as well. In females the need to urinate is felt in the lower abdomen region when the bladder is full. When the bladder becomes too full, the sphincter muscles will involuntarily relax, allowing urine to pass from the bladder. Release of urine is experienced as a lessening of the discomfort. Disorders Clinical conditions Many clinical conditions can cause disturbances to normal urination, including: Urinary incontinence, the inability to hold urine Stress incontinence, incontinence as a result of external mechanical disturbances Urge incontinence, incontinence that occurs as a result of the uncontrollable urge to urinate Mixed incontinence, a combination of the two types of incontinence Urinary retention, the inability to initiate urination Overactive bladder, a strong urge to urinate, usually accompanied by detrusor overactivity Interstitial cystitis, a condition characterized by urinary frequency, urgency, and pain Prostatitis, an inflammation of the prostate gland that can cause urinary frequency, urgency, and pain Benign prostatic hyperplasia, an enlargement of the prostate that can cause urinary frequency, urgency, retention, and the dribbling of urine Urinary tract infection, which can cause urinary frequency and dysuria Polyuria, abnormally large production of urine, associated with, in particular, diabetes mellitus (types 1 and 2), and diabetes insipidus Oliguria, low urine output, usually due to a problem with the upper urinary tract Anuria refers to absent or almost absent urine output. Micturition syncope, a vasovagal response which may cause fainting. Paruresis, the inability to urinate in the presence of others, such as in a public toilet. Bladder sphincter dyssynergia, a discoordination between the bladder and external urethral sphincter as a result of brain or spinal cord injury A drug that increases urination is called a diuretic, whereas antidiuretics decrease the production of urine by the kidneys. Experimentally induced disorders There are three major types of bladder dysfunction due to neural lesions: (1) the type due to interruption of the afferent nerves from the bladder; (2) the type due to interruption of both afferent and efferent nerves; and (3) the type due to interruption of facilitatory and inhibitory pathways descending from the brain. In all three types the bladder contracts, but the contractions are generally not sufficient to empty the viscus completely, and residual urine is left in the bladder. Paruresis, also known as shy bladder syndrome, is an example of a bladder interruption from the brain that often causes total interruption until the person has left a public area. These people (males) may have difficulty urinating in the presence of others and will consequently avoid using urinals without dividers or those directly adjacent to another person. Alternatively, they may opt for the privacy of a stall or simply avoid public toilets altogether. Deafferentation When the sacral dorsal roots are cut in experimental animals or interrupted by diseases of the dorsal roots such as tabes dorsalis in humans, all reflex contractions of the bladder are abolished. The bladder becomes distended, thin-walled, and hypotonic, but there are some contractions because of the intrinsic response of the smooth muscle to stretch. 
Denervation When the afferent and efferent nerves are both destroyed, as they may be by tumors of the cauda equina or filum terminale, the bladder is flaccid and distended for a while. Gradually, however, the muscle of the "decentralized bladder" becomes active, with many contraction waves that expel dribbles of urine out of the urethra. The bladder becomes shrunken and the bladder wall hypertrophied. The reason for the difference between the small, hypertrophic bladder seen in this condition and the distended, hypotonic bladder seen when only the afferent nerves are interrupted is not known. The hyperactive state in the former condition suggests the development of denervation hypersensitization even though the neurons interrupted are preganglionic rather than postganglionic. Spinal cord injury During spinal shock, the bladder is flaccid and unresponsive. It becomes overfilled, and urine dribbles through the sphincters (overflow incontinence). After spinal shock has passed, a spinally mediated voiding reflex ensues, although there is no voluntary control and no inhibition or facilitation from higher centers. Some paraplegic patients train themselves to initiate voiding by pinching or stroking their thighs, provoking a mild mass reflex. In some instances, the voiding reflex becomes hyperactive. Bladder capacity is reduced and the wall becomes hypertrophied. This type of bladder is sometimes called the spastic neurogenic bladder. The reflex hyperactivity is made worse, and may be caused, by infection in the bladder wall. Techniques Young children A common technique used in many developing nations involves holding the child by the backs of the thighs, above the ground, facing outward, in order to urinate. Fetal urination The fetus urinates hourly and produces most of the amniotic fluid in the second and third trimester of pregnancy. The amniotic fluid is then recycled by fetal swallowing. Urination after injury Occasionally, if a male's penis is damaged or removed, or a female's genitals/urinary tract is damaged, other urination techniques must be used. Most often in such cases, doctors will reposition the urethra to a location where urination can still be accomplished, usually in a position that would promote urination only while seated/squatting, though a permanent urinary catheter may be used in rare cases. Alternative urination tools Sometimes urination is done in a container such as a bottle, urinal, bedpan, or chamber pot (also known as a gazunder). A container or wearable urine collection device may be used so that the urine can be examined for medical reasons or for a drug test, for a bedridden patient, when no toilet is available, or there is no other possibility to dispose of the urine immediately. An alternative solution (for traveling, stakeouts, etc.) is a special disposable bag containing absorbent material that solidifies the urine within seconds, making it convenient and safe to store and dispose of later. It is possible for both sexes to urinate into bottles in case of emergencies. The technique can help children to urinate discreetly inside cars and in other places without being seen by others. A female urination device can assist women and girls in urinating while standing or into a bottle. In microgravity, excrement tends to float freely, so astronauts use a specially designed space toilet, which uses suction to collect and recycle urine; the space toilet also has a receptacle for defecation. 
Social and cultural aspects Art A puer mingens is a figure in a work of art depicted as a prepubescent boy in the act of urinating, either actual or simulated. The puer mingens could represent anything from whimsy and boyish innocence to erotic symbols of virility and masculine bravado. Toilet training Babies have little socialized control over urination within traditions or families that do not practice elimination communication and instead use diapers. Toilet training is the process of learning to restrict urination to socially approved times and situations. Consequently, young children sometimes develop nocturnal enuresis. Facilities It is socially more accepted and more environmentally hygienic for those who are able, especially when indoors and in outdoor urban or suburban areas, to urinate in a toilet. Public toilets may have urinals, usually for males, although female urinals exist, designed to be used in various ways. Urination without facilities Acceptability of outdoor urination in a public place other than at a public urinal varies with the situation and with customs. Potential disadvantages include a dislike of the smell of urine, and exposure of genitals. It can be avoided or mitigated by going to a quiet place and/or facing a tree or wall if urinating standing up, or while squatting, hiding the back behind walls, bushes, or a tree. Portable toilets (port-a-potties) are frequently placed in outdoor situations where no immediate facility is available. These need to be serviced (cleaned out) on a regular basis. Urination in a heavily wooded area is generally harmless, actually saves water, and may be condoned for males (and less commonly, females) in certain situations as long as common sense is used. Examples (depending on circumstances) include activities such as camping, hiking, delivery driving, cross country running, rural fishing, amateur baseball, golf, etc. The more developed and crowded a place is, the more public urination tends to be objectionable. In the countryside, it is more acceptable than in a street in a town, where it may be a common transgression. Often this is done after the consumption of alcoholic beverages, which causes production of additional urine as well as a reduction of inhibitions. One proposed way to inhibit public urination due to drunkenness is the Urilift, which is disguised as a normal manhole by day but raises out of the ground at night to provide a public restroom for bar-goers. In many places, public urination is punishable by fines, though attitudes vary widely by country. In general, females are less likely to urinate in public than males. Women and girls, unlike men and boys, are restricted in where they can urinate conveniently and discreetly. The 5th-century BC historian Herodotus, writing on the culture of the ancient Persians and highlighting the differences with those of the Greeks, noted that to urinate in the presence of others was prohibited among Persians. There was a popular belief in the UK, that it was legal for a man to urinate in public so long as it occurred on the rear wheel of his vehicle and he had his right hand on the vehicle, but this is not true. Public urination still remains more accepted by males in the UK, although British cultural tradition itself seems to find such practices objectionable. 
In Islamic toilet etiquette, it is haram to urinate while facing the Qibla, or to turn one's back to it when urinating or relieving bowels, but modesty requirements for females make it impossible for girls to relieve themselves without facilities. When toilets are unavailable, females can relieve themselves in Laos, Russia and Mongolia in emergency, but it remains less accepted for females in India even when circumstances make this a highly desirable option. Women generally need to urinate more frequently than men, but as opposed to the common misconception, it is not due to having smaller bladders. Resisting the urge to urinate because of lack of facilities can promote urinary tract infections which can lead to more serious infections and, in rare situations, can cause renal damage in women. Female urination devices are available to help women to urinate discreetly, as well to help them urinate while standing. Sitting, standing, or squatting Techniques and body postures while urinating vary across cultures. Different anatomical conditions in men and women may presume different postures, yet these are largely shaped by cultural norms, types of clothing, and the sanitary facilities available. While sitting toilets are the most common form in Western countries, squat toilets are common in Asia, Africa, and the Arab world. Urinals for men are widespread worldwide, although women's urinals are available in some countries, recently becoming more common in Western countries. With the spread of pants among women, a standing posture became impractical, but in some regions where women wear traditional skirts or robes, an upright posture is common. Males Cultures around the world differ regarding socially accepted voiding positions and preferences: in the Middle-East and Asia, the squatting position was more prevalent, while in the Western world the standing and sitting positions were more common. For practising Muslim men, the genital modesty of squatting is also associated with proper cleanliness requirements or awrah. In Western culture, the standing position is regarded as the more efficient option among healthy males. In restrooms without urinals, and sometimes at home, men may be urged to use the sitting position as to diminish spattering of urine. Elderly males with prostate gland enlargement may benefit from sitting down to urinate, with the seated voiding position found superior as compared with standing in elderly males with benign prostate hyperplasia. Females In Western culture, females usually sit or squat for urination, depending on what type of toilet they use; a squat toilet is used for urination in a squatting position. Women averting contact with a toilet seat may employ a partial squatting position (or "hovering"), similar to using a female urinal. However, this may not completely void the bladder. Females may also urinate while standing, and while clothed. It is common for women in various regions of Africa to use this position when they urinate, as do women in Laos. Herodotus described a similar custom in ancient Egypt. An alternative method for women voiding while standing is to use a female urination device to assist. Talking about urination In many societies and in many social classes, even mentioning the need to urinate is seen as a social transgression, despite it being a universal need. Many adults avoid stating that they need to urinate. Many expressions exist, some euphemistic and some vulgar. 
For example, centuries ago the standard English word (both noun and verb, for the product and the activity) was "piss", but subsequently "pee", formerly associated with children, has become more common in general public speech. Since elimination of bodily wastes is, of necessity, a subject talked about with toddlers during toilet training, other expressions considered suitable for use by and with children exist, and some continue to be used by adults, e.g. "weeing", "doing/having a wee-wee", "to tinkle", "go potty", "go pee pee". Other expressions include "squirting" and "taking a leak", and, predominantly by younger persons for outdoor female urination, "popping a squat", referring to the position many women adopt in such circumstances. National varieties of English show creativity. American English uses "to whiz". Australian English has coined "I am off to take a Chinese singing lesson", derived from the tinkling sound of urination against the China porcelain of a toilet bowl. British English uses "going to see my aunt", "going to see a man about a dog", "to piddle", "to splash (one's) boots", as well as "to have a slash", which originates from the Scottish term for a large splash of liquid. One of the most common, albeit old-fashioned, euphemisms in British English is "to spend a penny", a reference to coin-operated pay toilets, which used (pre-decimalisation) to charge that sum. Use in language References to urination are commonly used in slang. Usage in English includes: Piss (someone) off (to anger someone; alternatively, to leave somewhere in a hurry) Piss off! (to express contempt; see above) Pissing down (to refer to heavy rain) Pissing contest (an unproductive ego-driven battle) Pisshead (vulgar way to refer to someone who drinks too much alcohol) Piss ant (a worthless person; in non-slang usage the term refers to several species of ant whose colonies have a urine-like odor) Pissing up a flagpole (to partake in a futile activity) Pissing into the wind (to act in ways that cause self-harm) Piss away (to squander or use wastefully) Taking the piss (to take liberties, be unreasonable, or to mock another person) Full of piss and vinegar (energetic or ambitious late adolescent or young adult male) Piss up (British expression for drinking to get drunk) Pissed (drunk in British English or angry in American English) Urination and sexual activity Urolagnia, a paraphilia, is an inclination to obtain sexual enjoyment by looking at or thinking of urine or urination. Urine may be consumed, or the person may bathe in it; this is known colloquially as a golden shower. Drinking urine is known as urophagia, though uraphagia refers to the consumption of urine regardless of whether the context is sexual. Involuntary urination during sexual intercourse is common, but rarely acknowledged. In one survey, 24% of women reported involuntary urination during sexual intercourse; in 66% of patients urination occurred on penetration, while in 33% urine leakage was restricted to orgasm. Female kob may exhibit urolagnia during sex; one female will urinate while the other sticks her nose in the stream. Some mammals urinate on themselves in order to attract mates during the rut or urinate on other individuals before mating with them. A male Patagonian mara, a type of rodent, will stand on his hind legs and urinate on a female's rump, to which the female may respond by spraying a jet of urine backwards into the face of the male. 
The male's urination is meant to repel other males from his partner while the female's urination is a rejection of any approaching male when she is not receptive. Both anal digging and urination are more frequent during the breeding season and are more commonly done by males. A male porcupine urinates on a female porcupine prior to mating, spraying the urine at high velocity. Electric shock injuries and deaths In 2008 in London, a person died when they were urinating alongside a railway track at a train station and they received an electric shock. The person received the electric shock when their stream of urine connected with the electric current from the live third rail. In 2010 in Washington state, a person who died was found to have burn injuries related to an electric shock. It is thought that the person had urinated into a roadside ditch and that a live wire lying in the ditch delivered an electric shock, the current traveling through the stream of urine and into their body. In 2014 in Spain, a person died while urinating on a lamp post when he received an electric shock, which may have traveled through the stream of urine and into his body. Other species While the primary purpose of urination is the same across the animal kingdom, urination often serves a social purpose beyond the expulsion of waste material. In dogs and other animals, urination can mark territory or express submissiveness. In small rodents such as rats and mice, it marks familiar paths. The urine of animals of differing physiology or sex sometimes has different characteristics. For example, the urine of birds and reptiles is whitish, consisting of a pastelike suspension of uric acid crystals, and discharged with the feces of the animal via the cloaca, whereas mammals' urine is a yellowish colour, with mostly urea instead of uric acid, and is discharged via the urethra, separately from the feces. Some animals' urine, for example that of carnivores, possesses a strong odour, especially when it is used to mark territory. Felids and canids scent-mark their territories using urine. Wolves mark their territories by urinating in a raised-leg posture and release preputial gland secretions in their urine. Male dogs mark their territories with urine more frequently than females. Young cattle can be toilet-trained to urinate in a "latrine" where their urine can be collected for wastewater treatment, which could be used to reduce greenhouse gas emissions from the animals' urine in countries such as the Netherlands, the United States, and New Zealand. See also Defecation Human positions Post-void dribbling Post micturition convulsion syndrome Sanitation References Further reading External links , describes the neurophysiology of urination "Urination" at HowStuffWorks.com Articles containing video clips Excretion Human positions Medical signs Partial squatting position Urine Urology
Urination
[ "Biology" ]
6,558
[ "Behavior", "Human positions", "Urine", "Excretion", "Animal waste products", "Human behavior" ]
159,441
https://en.wikipedia.org/wiki/Type%20metal
In printing, type metal refers to the metal alloys used in traditional typefounding and hot metal typesetting. Historically, type metal was an alloy of lead, tin and antimony in different proportions depending on the application, be it individual character mechanical casting for hand setting, mechanical line casting or individual character mechanical typesetting and stereo plate casting. The proportions used are in the range: lead 50–86%, antimony 11–30% and tin 3–20%. Antimony and tin are added to lead for durability while reducing the difference between the coefficients of expansion of the matrix and the alloy. Apart from durability, the general requirements for type-metal are that it should produce a true and sharp cast, and retain correct dimensions and form after cooling down. It should also be easy to cast at a reasonably low melting temperature, iron should not dissolve in the molten metal, and mould and nozzles should stay clean and easy to maintain. Today, Monotype machines can utilize a wide range of different alloys. Mechanical linecasting equipment uses alloys that are close to eutectic. History Although the knowledge of casting soft metals in moulds was well established before Johannes Gutenberg's time, his discovery of an alloy that was hard, durable, and would take a clear impression from the mould represents a fundamental aspect of his solution to the problem of printing with movable type. This alloy did not shrink as much as lead alone when cooled. Gutenberg's other contributions were the creation of inks that would adhere to metal type and a method of softening handmade printing paper so that it would take the impression well. Required characteristics Cheap, plentifully available as galena and easily workable, lead has many of the ideal characteristics, but on its own it lacks the necessary hardness and does not make castings with sharp details because molten lead shrinks and sags when it cools to a solid. After much experimentation it was found that adding pewterer's tin, obtained from cassiterite, improved the ability of the cast type to withstand the wear and tear of the printing process, making it tougher but not more brittle. Despite patiently trying different proportions of both metals, solving the second part of the type metal problem proved very difficult without the addition of yet a third metal, antimony. Alchemists had shown that when stibnite, an antimony sulfide ore, was heated with scrap iron, metallic antimony was produced. The typefounder would typically introduce powdered stibnite and horseshoe nails into his crucible to melt lead, tin and antimony into type metal. Both the iron and the sulfides would be rejected in the process. The addition of antimony conferred the much needed improvements in the properties of hardness, wear resistance and especially, the sharpness of reproduction of the type design, given that it has the curious property of diminishing the shrinkage of the alloy upon solidification. Composition of type metal Type metal is an alloy of lead, tin and antimony in different proportions depending on the application, be it individual character mechanical casting for hand setting, mechanical line casting or individual character mechanical typesetting and stereo plate casting. The proportions used are in the range: lead 50–86%, antimony 11–30% and tin 3–20%. The basic characteristics of these metals are as follows: Lead Type metal is an alloy of lead (Pb).
Pure lead is a relatively cheap metal, is soft and thus easy to work, and it is easy to cast since it melts at about 327 °C. However, it shrinks when it solidifies, making letters that are not sharp enough for printing. In addition, pure lead letters will quickly deform during use, a direct result of the easy workability of lead. Lead is exceptionally soft, malleable, and ductile but with little tensile strength. Lead oxide is a poison that primarily damages brain function. Metallic lead is more stable and less toxic than its oxidized form. Metallic lead cannot be absorbed through contact with skin, so it may be handled, carefully, with far less risk than lead oxide. Tin Tin (Sn) promotes the fluidity of the molten alloy and makes the type tough, giving the alloy resistance to wear. It is harder, stiffer and tougher than lead. Antimony Antimony (Sb) is a metalloid element, which melts at about 630 °C. Antimony has a crystalline appearance while being both brittle and fusible. When alloyed with lead to produce type metal, antimony gives it the hardness it needs to resist deformation during printing, and gives it sharper castings from the mould to produce clear, easily read printed text on the page. Typical type metal proportions The actual compositions differed over time; different machines were adjusted to different alloys depending on the intended uses of the type. Printers sometimes had their own preferences about the quality of particular alloys. The Lanston Monotype Corporation in the United Kingdom had a whole range of alloys listed in their manuals. Alloys for mechanical composition Most mechanical typesetting is divided basically into two different competing technologies: line casting (Linotype and Intertype) and single character casting (Monotype). The manuals for the Monotype composition caster (1952 and later editions) mention at least five different alloys to be used for casting, depending on the purpose of the type and the work to be done with it. Although in general Monotype cast type characters can be visually identified as having a square nick (as opposed to the round nicks used on foundry type), there is no easy way to identify the alloy aside from an expensive chemical assay in a laboratory. Apart from this, the two Monotype companies in the United States and the UK also made moulds with 'round' nicks. Typefounders and printers could and did order specially designed moulds to their own specifications: height, size, kind of nick, even the number of nicks could be changed. Type produced with these special moulds can only be identified if the foundry or printer is known. In Switzerland, the company "Metallum Pratteln AG" in Basel had yet another list of type-metal alloys. If needed, any alloy according to customer specifications could be produced. Dross Regeneration-metal was melted into the crucible to replace tin and antimony lost through the dross. Every time type metal is remelted, tin and antimony oxidise. These oxides form on the surface of the crucible and must be removed. After stirring the molten metal, a grey powder forms on the surface, the dross, needing to be skimmed. Dross contains recoverable amounts of tin and antimony. Dross must be processed at specialized companies, in order to extract the pure metals in conditions that would prevent environmental pollution and remain economically feasible. Behaviour of bipolar alloys Pure metal melts and solidifies in a simple manner at a specific temperature. This is not the case with alloys.
There we find a range of temperatures with all kinds of different events. The melting temperature of all mixtures is considerably lower than that of the pure components. Antimony/Lead mixture examples The addition of a small amount of antimony (5% to 6%) to lead will significantly alter the alloy's behaviour compared to pure lead: although the melting point of pure antimony is 630 °C, this mixture will be completely molten and a homogeneous fluid even at temperatures as low as 371 °C. As this mixture cools, the alloy will remain liquid even past about 327 °C, the melting point of pure lead. Once the temperature reaches 291 °C, lead crystals will start to form, increasing the cohesion of the liquid alloy. At 252 °C, the mixture will start to fully solidify, during which the temperature will remain constant. Only when the mixture has fully solidified will the temperature start to decrease again. Using a 10% antimony, 90% lead mixture delays lead crystal formation until approximately 260 °C. Using a 12% antimony, 88% lead mixture prevents crystal formation entirely, becoming a eutectic. This alloy has a clear melting point, at 252 °C. Increasing the antimony content beyond 12% will lead to predominantly antimony crystallization. Tri-polar mixtures Adding tin to this bipolar system complicates the behaviour even further. Some tin enters into the eutectic. A mixture of 4% tin, 12% antimony, and 84% lead solidifies at 240 °C. Depending on the metals in excess, compared with the eutectic, crystals are formed, depleting the liquid, until the eutectic 4/12 mixture is formed once more. The 12/20 alloy contains many mixed crystals of tin and antimony; these crystals provide the hardness of the alloy and its resistance against wear. Raising the content of antimony cannot be done without adding some tin too, because the fluidity of the mixture will otherwise diminish dramatically when the temperature drops somewhere in the channels of the machine. Nozzles can be blocked by antimony crystals. Metals used on typecasting machines Eutectic alloys are used on Linotype machines and Ludlow casters to prevent blockage of the mould and to ensure continuous trouble-free casting. Alloys used on Monotype machines tend to contain higher contents of tin, to obtain tougher characters. All characters should be able to resist the pressure during printing. This meant an extra investment, but Monotype was an expensive system all the way. Present usage of type metal The fierce competition between the different mechanical typecasting systems like Linotype and Monotype has given rise to some lasting fairy tales about type metal. Linotype users looked down on Monotype and vice versa. Monotype machines, however, can utilize a wide range of different alloys; maintaining a constant and high production meant a strict standardization of the type metal in the company, so as to reduce by all means any interruption of the production. Repeated assays were done at regular intervals to monitor the alloy used, since every time the metal is recycled, roughly half a per cent of tin content is lost through oxidation. These oxides are removed with the dross while cleaning the surface of the molten metal. Nowadays this "battle" has lost its importance, at least for Monotype. The quality of the produced type is far more important. Alloys with a high content of antimony, and subsequently a high content of tin, can be cast at a higher temperature, and at a lower speed and with more cooling, on a Monotype composition or supercaster.
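As a rough illustration of the tin loss just described, the sketch below assumes the "half a per cent" figure means 0.5 percentage points of tin per remelt, and it assumes a starting composition and a minimum acceptable tin content purely for demonstration; neither value comes from any foundry specification.

# Illustrative model of tin depletion over repeated remelting, assuming a loss of
# 0.5 percentage points of tin each time the metal is recycled (an interpretation
# of the "half a per cent" figure above). Starting composition and the minimum
# acceptable tin content are assumed values.

TIN_LOSS_PER_REMELT = 0.5   # percentage points lost to oxidation per melt cycle
START_TIN = 8.0             # assumed starting tin content, percent
MIN_TIN = 5.0               # assumed lowest acceptable tin content, percent

tin = START_TIN
remelts = 0
while tin - TIN_LOSS_PER_REMELT >= MIN_TIN:
    tin -= TIN_LOSS_PER_REMELT
    remelts += 1

print(f"After {remelts} remelts the tin content is down to {tin:.1f}%;")
print("regeneration metal would then be added to restore the alloy.")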
Although care was taken to avoid mixing different types of type metal in shops with different type casting systems, in actual practice this often occurred. Since a Monotype composition caster can cope with a variety of different metal alloys, occasional mixing of Linotype alloy with discarded typefounders alloy has proven its usefulness. Mechanical linecasting equipment use alloys that are close to eutectic. Contamination of type metals Copper Copper has been used for hardening type metal; this metal easily forms mixed crystals with tin when the alloy cools down. These crystals will grow just below the exit opening of the nozzle in Monotype machines, resulting in a total blockage after some time. These nozzles are very difficult to clean, because the hard crystals will resist drilling. Zinc Brass spaces contain zinc, which is extremely counterproductive in type metal. Even a tiny amount β€”Β less than 1%Β β€” will form a dusty surface on the molten metal surface that is difficult to remove. Characters cast from contaminated type metal such as this are of inferior quality, the solution being to discard and replace with fresh alloy. Brass and zinc should therefore be removed before remelting. The same applies to aluminium, although this metal will float on top of the melt, and will be easily discovered and removed, before it is dissolved into the lead. Magnesium Magnesium plates are very dangerous in molten lead, because this metal can easily burn and will ignite in the molten lead. Iron Iron is hardly dissolved into type metal, although the molten metal is always in contact with the cast iron surface of the melting pot. Historic references to type metals Joseph Moxon, in his Mechanick Exercises, mentions a mix of equal amounts of "antimony" and iron nails. The "antimony" here was in fact stibnite, antimony-sulfide (Sb2S3). The iron was burned away in this process, reducing the antimony and at the same time removing the unwanted sulfur. In this way ferro-sulfide was formed, that would evaporate with all the fumes. The mixture of stibnite and nails was heated red hot in an open-air furnace, until all is molten and finished. The resulting metal can contain up to 9% of iron. Further purification can be done by mixing the hot melt with kitchen-salt, NaCl. After this red hot lead from another melting pot is added and stirred thoroughly. Some tin was added to the alloy for casting small characters and narrow spaces, to better fill narrow areas of the mould. The good properties of tin were well known. The use of tin was sometime minimized to save expenses. Much of this toxic work was done by child labour, a labor force that includes children. As a supposed antidote to the inhaled toxic metal fumes, the workers were given a mixture of red wine and salad oil: References Alloys Printing
Type metal
[ "Chemistry" ]
2,741
[ "Alloys", "Chemical mixtures" ]
159,472
https://en.wikipedia.org/wiki/Flight
Flight or flying is the motion of an object through an atmosphere, or through the vacuum of outer space, without contacting any planetary surface. This can be achieved by generating aerodynamic lift associated with gliding or propulsive thrust, aerostatically using buoyancy, or by ballistic movement. Many things can fly, from animal aviators such as birds, bats and insects, to natural gliders/parachuters such as patagial animals, anemochorous seeds and ballistospores, to human inventions like aircraft (airplanes, helicopters, airships, balloons, etc.) and rockets which may propel spacecraft and spaceplanes. The engineering aspects of flight are the purview of aerospace engineering which is subdivided into aeronautics, the study of vehicles that travel through the atmosphere, and astronautics, the study of vehicles that travel through space, and ballistics, the study of the flight of projectiles. Types of flight Buoyant flight Humans have managed to construct lighter-than-air vehicles that raise off the ground and fly, due to their buoyancy in the air. An aerostat is a system that remains aloft primarily through the use of buoyancy to give an aircraft the same overall density as air. Aerostats include free balloons, airships, and moored balloons. An aerostat's main structural component is its envelope, a lightweight skin that encloses a volume of lifting gas to provide buoyancy, to which other components are attached. Aerostats are so named because they use "aerostatic" lift, a buoyant force that does not require lateral movement through the surrounding air mass to effect a lifting force. By contrast, aerodynes primarily use aerodynamic lift, which requires the lateral movement of at least some part of the aircraft through the surrounding air mass. Aerodynamic flight Unpowered flight versus powered flight Some things that fly do not generate propulsive thrust through the air, for example, the flying squirrel. This is termed gliding. Some other things can exploit rising air to climb such as raptors (when gliding) and man-made sailplane gliders. This is termed soaring. However most other birds and all powered aircraft need a source of propulsion to climb. This is termed powered flight. Animal flight The only groups of living things that use powered flight are birds, insects, and bats, while many groups have evolved gliding. The extinct pterosaurs, an order of reptiles contemporaneous with the dinosaurs, were also very successful flying animals, and there were apparently some flying dinosaurs (see Flying and gliding animals#Non-avian dinosaurs). Each of these groups' wings evolved independently, with insects the first animal group to evolve flight. The wings of the flying vertebrate groups are all based on the forelimbs, but differ significantly in structure; insect wings are hypothesized to be highly modified versions of structures that form gills in most other groups of arthropods. Bats are the only mammals capable of sustaining level flight (see bat flight). However, there are several gliding mammals which are able to glide from tree to tree using fleshy membranes between their limbs; some can travel hundreds of meters in this way with very little loss in height. Flying frogs use greatly enlarged webbed feet for a similar purpose, and there are flying lizards which fold out their mobile ribs into a pair of flat gliding surfaces. "Flying" snakes also use mobile ribs to flatten their body into an aerodynamic shape, with a back and forth motion much the same as they use on the ground. 
Flying fish can glide using enlarged wing-like fins, and have been observed soaring for hundreds of meters. It is thought that this ability was favoured by natural selection because it was an effective means of escape from underwater predators. The longest recorded flight of a flying fish was 45 seconds. Most birds fly (see bird flight), with some exceptions. The largest birds, the ostrich and the emu, are earthbound flightless birds, as were the now-extinct dodos and the Phorusrhacids, which were the dominant predators of South America in the Cenozoic era. The non-flying penguins have wings adapted for use under water and use the same wing movements for swimming that most other birds use for flight. Most small flightless birds are native to small islands, and lead a lifestyle where flight would offer little advantage. Among living animals that fly, the wandering albatross has the greatest wingspan, and the great bustard is the heaviest. Most species of insects can fly as adults. Insect flight makes use of either of two basic aerodynamic models: creating a leading edge vortex, found in most insects, and using clap and fling, found in very small insects such as thrips. Many species of spiders, spider mites and lepidoptera use a technique called ballooning to ride air currents such as thermals, by exposing their gossamer threads, which get lifted by wind and atmospheric electric fields. Mechanical Mechanical flight is the use of a machine to fly. These machines include aircraft such as airplanes, gliders, helicopters, autogyros, airships, balloons and ornithopters, as well as spacecraft. Gliders are capable of unpowered flight. Another form of mechanical flight is para-sailing, where a parachute-like object is pulled by a boat. In an airplane, lift is created by the wings; the shape of an airplane's wings is designed specially for the type of flight desired. There are different types of wings: tapered, semi-tapered, sweptback, rectangular and elliptical. An aircraft wing is sometimes called an airfoil, which is a device that creates lift when air flows across it. Supersonic Supersonic flight is flight faster than the speed of sound. Supersonic flight is associated with the formation of shock waves that form a sonic boom that can be heard from the ground, and is frequently startling. The creation of this shockwave requires a significant amount of energy; because of this, supersonic flight is generally less efficient than subsonic flight (typically conducted at about 85% of the speed of sound). Hypersonic Hypersonic flight is very high speed flight where the heat generated by the compression of the air due to the motion through the air causes chemical changes to the air. Hypersonic flight is achieved primarily by reentering spacecraft such as the Space Shuttle and Soyuz. Ballistic Atmospheric Some things generate little or no lift and move only or mostly under the action of momentum, gravity, air drag and in some cases thrust. This is termed ballistic flight. Examples include balls, arrows, bullets, fireworks, etc. Spaceflight Essentially an extreme form of ballistic flight, spaceflight is the use of space technology to achieve the flight of spacecraft into and through outer space. Examples include ballistic missiles, orbital spaceflight, etc. Spaceflight is used in space exploration, and also in commercial activities like space tourism and satellite telecommunications.
Additional non-commercial uses of spaceflight include space observatories, reconnaissance satellites and other Earth observation satellites. A spaceflight typically begins with a rocket launch, which provides the initial thrust to overcome the force of gravity and propels the spacecraft from the surface of the Earth. Once in space, the motion of a spacecraft, both when unpropelled and when under propulsion, is covered by the area of study called astrodynamics. Some spacecraft remain in space indefinitely, some disintegrate during atmospheric reentry, and others reach a planetary or lunar surface for landing or impact. Solid-state propulsion In 2018, researchers at Massachusetts Institute of Technology (MIT) managed to fly an aeroplane with no moving parts, powered by an "ionic wind", also known as electroaerodynamic thrust. History Many human cultures have built devices that fly, from the earliest projectiles such as stones and spears, the boomerang in Australia, the hot air Kongming lantern, and kites. Aviation George Cayley studied flight scientifically in the first half of the 19th century, and in the second half of the 19th century Otto Lilienthal made over 200 gliding flights and was also one of the first to understand flight scientifically. His work was replicated and extended by the Wright brothers who made gliding flights and finally the first controlled and extended, manned powered flights. Spaceflight Spaceflight, particularly human spaceflight, became a reality in the 20th century following theoretical and practical breakthroughs by Konstantin Tsiolkovsky and Robert H. Goddard. The first orbital spaceflight was in 1957, and Yuri Gagarin was carried aboard the first crewed orbital spaceflight in 1961. Physics There are different approaches to flight. If an object has a lower density than air, then it is buoyant and is able to float in the air without expending energy. A heavier than air craft, known as an aerodyne, includes flighted animals and insects, fixed-wing aircraft and rotorcraft. Because the craft is heavier than air, it must generate lift to overcome its weight. The wind resistance caused by the craft moving through the air is called drag and is overcome by propulsive thrust except in the case of gliding. Some vehicles also use thrust in the place of lift; for example rockets and Harrier jump jets. Forces Forces relevant to flight are Propulsive thrust (except in gliders) Lift, created by the reaction to an airflow Drag, created by aerodynamic friction Weight, created by gravity Buoyancy, for lighter than air flight These forces must be balanced for stable flight to occur. Thrust A fixed-wing aircraft generates forward thrust when air is pushed in the direction opposite to flight. This can be done in several ways including by the spinning blades of a propeller, or a rotating fan pushing air out from the back of a jet engine, or by ejecting hot gases from a rocket engine. The forward thrust is proportional to the mass of the airstream multiplied by the difference in velocity of the airstream. Reverse thrust can be generated to aid braking after landing by reversing the pitch of variable-pitch propeller blades, or using a thrust reverser on a jet engine. Rotary wing aircraft and thrust vectoring V/STOL aircraft use engine thrust to support the weight of the aircraft, and vector this thrust fore and aft to control forward speed.
Lift In the context of an air flow relative to a flying body, the lift force is the component of the aerodynamic force that is perpendicular to the flow direction. Aerodynamic lift results when the wing causes the surrounding air to be deflected - the air then causes a force on the wing in the opposite direction, in accordance with Newton's third law of motion. Lift is commonly associated with the wing of an aircraft, although lift is also generated by rotors on rotorcraft (which are effectively rotating wings, performing the same function without requiring that the aircraft move forward through the air). While common meanings of the word "lift" suggest that lift opposes gravity, aerodynamic lift can be in any direction. When an aircraft is cruising for example, lift does oppose gravity, but lift occurs at an angle when climbing, descending or banking. On high-speed cars, the lift force is directed downwards (called "down-force") to keep the car stable on the road. Drag For a solid object moving through a fluid, the drag is the component of the net aerodynamic or hydrodynamic force acting opposite to the direction of the movement. Therefore, drag opposes the motion of the object, and in a powered vehicle it must be overcome by thrust. The process which creates lift also causes some drag. Lift-to-drag ratio Aerodynamic lift is created by the motion of an aerodynamic object (wing) through the air, which due to its shape and angle deflects the air. For sustained straight and level flight, lift must be equal and opposite to weight. In general, long narrow wings are able deflect a large amount of air at a slow speed, whereas smaller wings need a higher forward speed to deflect an equivalent amount of air and thus generate an equivalent amount of lift. Large cargo aircraft tend to use longer wings with higher angles of attack, whereas supersonic aircraft tend to have short wings and rely heavily on high forward speed to generate lift. However, this lift (deflection) process inevitably causes a retarding force called drag. Because lift and drag are both aerodynamic forces, the ratio of lift to drag is an indication of the aerodynamic efficiency of the airplane. The lift to drag ratio is the L/D ratio, pronounced "L over D ratio." An airplane has a high L/D ratio if it produces a large amount of lift or a small amount of drag. The lift/drag ratio is determined by dividing the lift coefficient by the drag coefficient, CL/CD. The lift coefficient Cl is equal to the lift L divided by the (density r times half the velocity V squared times the wing area A). [Cl = L / (A * .5 * r * V^2)] The lift coefficient is also affected by the compressibility of the air, which is much greater at higher speeds, so velocity V is not a linear function. Compressibility is also affected by the shape of the aircraft surfaces. The drag coefficient Cd is equal to the drag D divided by the (density r times half the velocity V squared times the reference area A). [Cd = D / (A * .5 * r * V^2)] Lift-to-drag ratios for practical aircraft vary from about 4:1 for vehicles and birds with relatively short wings, up to 60:1 or more for vehicles with very long wings, such as gliders. A greater angle of attack relative to the forward movement also increases the extent of deflection, and thus generates extra lift. However a greater angle of attack also generates extra drag. Lift/drag ratio also determines the glide ratio and gliding range. 
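A small numerical sketch of these coefficient definitions follows; the density, speed, wing area, and force values are illustrative assumptions, not measurements of any particular aircraft.

# Illustrative calculation of lift/drag coefficients and the L/D ratio,
# using the definitions Cl = L / (0.5 * r * V^2 * A) and Cd = D / (0.5 * r * V^2 * A).
# All input values below are assumed for demonstration only.

def dynamic_pressure_times_area(air_density, airspeed, wing_area):
    """Return 0.5 * r * V^2 * A, the common denominator of Cl and Cd."""
    return 0.5 * air_density * airspeed ** 2 * wing_area

def lift_coefficient(lift, air_density, airspeed, wing_area):
    return lift / dynamic_pressure_times_area(air_density, airspeed, wing_area)

def drag_coefficient(drag, air_density, airspeed, wing_area):
    return drag / dynamic_pressure_times_area(air_density, airspeed, wing_area)

r = 1.2       # air density, kg/m^3 (the sea-level value quoted later in the article)
V = 60.0      # airspeed, m/s (assumed)
A = 16.0      # wing area, m^2 (assumed)
L = 9000.0    # lift, newtons (assumed; equals weight in level flight)
D = 600.0     # drag, newtons (assumed)

Cl = lift_coefficient(L, r, V, A)
Cd = drag_coefficient(D, r, V, A)
print(f"Cl = {Cl:.3f}, Cd = {Cd:.3f}, L/D = {Cl / Cd:.1f}")
# The resulting L/D of 15 falls within the 4:1 to 60:1 range mentioned above.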
Since the glide ratio is based only on the relationship of the aerodynamic forces acting on the aircraft, aircraft weight will not affect it. The only effect weight has is to vary the time that the aircraft will glide for – a heavier aircraft gliding at a higher airspeed will arrive at the same touchdown point in a shorter time. Buoyancy Air pressure acting up against an object in air is greater than the pressure above pushing down. The buoyancy, in both air and water, is equal to the weight of fluid displaced – Archimedes' principle holds for air just as it does for water. A cubic meter of air at ordinary atmospheric pressure and room temperature has a mass of about 1.2 kilograms, so its weight is about 12 newtons. Therefore, any 1-cubic-meter object in air is buoyed up with a force of 12 newtons. If the mass of the 1-cubic-meter object is greater than 1.2 kilograms (so that its weight is greater than 12 newtons), it falls to the ground when released. If an object of this size has a mass less than 1.2 kilograms, it rises in the air. Any object that has a mass that is less than the mass of an equal volume of air will rise in air – in other words, any object less dense than air will rise. Thrust to weight ratio Thrust-to-weight ratio is, as its name suggests, the ratio of instantaneous thrust to weight (where weight means weight at the Earth's standard acceleration, about 9.8 m/s²). It is a dimensionless parameter characteristic of rockets and other jet engines and of vehicles propelled by such engines (typically space launch vehicles and jet aircraft). If the thrust-to-weight ratio is greater than the local gravity strength (expressed in gs), then flight can occur without any forward motion or any aerodynamic lift being required. If the thrust-to-weight ratio times the lift-to-drag ratio is greater than local gravity then takeoff using aerodynamic lift is possible. Flight dynamics Flight dynamics is the science of air and space vehicle orientation and control in three dimensions. The three critical flight dynamics parameters are the angles of rotation in three dimensions about the vehicle's center of mass, known as pitch, roll and yaw (see Tait-Bryan rotations for an explanation). The control of these dimensions can involve a horizontal stabilizer (i.e. "a tail"), ailerons and other movable aerodynamic devices which control angular stability, i.e. flight attitude (which in turn affects altitude and heading). Wings are often angled slightly upwards – they have a "positive dihedral angle", which gives inherent roll stabilization. Energy efficiency Creating thrust so as to be able to gain height, and pushing through the air to overcome the drag associated with lift, all takes energy. Different objects and creatures capable of flight vary in the efficiency of their muscles, motors and how well this translates into forward thrust. Propulsive efficiency determines how much useful thrust vehicles obtain from a unit of fuel. Range The range that powered flying vehicles can achieve is ultimately limited by their drag, as well as how much energy they can store on board and how efficiently they can turn that energy into propulsion. For powered aircraft the useful energy is determined by their fuel fraction – what percentage of the takeoff weight is fuel – as well as the specific energy of the fuel used. Power-to-weight ratio All animals and devices capable of sustained flight need relatively high power-to-weight ratios to be able to generate enough lift and/or thrust to achieve take off.
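The buoyancy and thrust-to-weight relationships above can be checked with a short sketch; apart from the 1.2 kg/m³ air density and the roughly 9.8 m/s² standard gravity, the masses, thrust, and lift-to-drag figures are assumptions made for illustration only.

# Illustrative checks of the buoyancy and thrust-to-weight rules described above.
# Values other than the air density and standard gravity are assumed.

AIR_DENSITY = 1.2   # kg per cubic meter, as quoted above
G = 9.8             # Earth's standard acceleration, m/s^2 (approximately)

def buoyant_force(volume_m3):
    """Weight of displaced air (Archimedes' principle applied to air)."""
    return AIR_DENSITY * volume_m3 * G

def rises_in_air(mass_kg, volume_m3):
    """An object rises when it weighs less than the air it displaces."""
    return mass_kg * G < buoyant_force(volume_m3)

def can_take_off(thrust_n, mass_kg, lift_to_drag):
    """Takeoff using aerodynamic lift is possible when (T/W) * (L/D) > 1."""
    thrust_to_weight = thrust_n / (mass_kg * G)
    return thrust_to_weight * lift_to_drag > 1.0

# A 1 m^3 object of 1.0 kg rises; one of 1.5 kg does not.
print(rises_in_air(1.0, 1.0), rises_in_air(1.5, 1.0))

# An assumed 1000 kg aircraft with 2000 N of thrust and L/D = 10:
# T/W is about 0.2, and 0.2 * 10 > 1, so takeoff using lift is possible.
print(can_take_off(2000.0, 1000.0, 10.0))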
Takeoff and landing Vehicles that can fly can have different ways to takeoff and land. Conventional aircraft accelerate along the ground until sufficient lift is generated for takeoff, and reverse the process for landing. Some aircraft can take off at low speed; this is called a short takeoff. Some aircraft such as helicopters and Harrier jump jets can take off and land vertically. Rockets also usually take off and land vertically, but some designs can land horizontally. Guidance, navigation and control Navigation Navigation is the systems necessary to calculate current position (e.g. compass, GPS, LORAN, star tracker, inertial measurement unit, and altimeter). In aircraft, successful air navigation involves piloting an aircraft from place to place without getting lost, breaking the laws applying to aircraft, or endangering the safety of those on board or on the ground. The techniques used for navigation in the air will depend on whether the aircraft is flying under the visual flight rules (VFR) or the instrument flight rules (IFR). In the latter case, the pilot will navigate exclusively using instruments and radio navigation aids such as beacons, or as directed under radar control by air traffic control. In the VFR case, a pilot will largely navigate using dead reckoning combined with visual observations (known as pilotage), with reference to appropriate maps. This may be supplemented using radio navigation aids. Guidance A guidance system is a device or group of devices used in the navigation of a ship, aircraft, missile, rocket, satellite, or other moving object. Typically, guidance is responsible for the calculation of the vector (i.e., direction, velocity) toward an objective. Control A conventional fixed-wing aircraft flight control system consists of flight control surfaces, the respective cockpit controls, connecting linkages, and the necessary operating mechanisms to control an aircraft's direction in flight. Aircraft engine controls are also considered as flight controls as they change speed. Traffic In the case of aircraft, air traffic is controlled by air traffic control systems. Collision avoidance is the process of controlling spacecraft to try to prevent collisions. Flight safety Air safety is a term encompassing the theory, investigation and categorization of flight failures, and the prevention of such failures through regulation, education and training. It can also be applied in the context of campaigns that inform the public as to the safety of air travel. See also Aerodynamics Levitation Transvection (flying) Backward flying References Notes Bibliography Coulson-Thomas, Colin. The Oxford Illustrated Dictionary. Oxford, UK: Oxford University Press, 1976, First edition 1975, . French, A. P. Newtonian Mechanics (The M.I.T. Introductory Physics Series) (1st ed.). New York: W. W. Norton & Company Inc., 1970. Honicke, K., R. Lindner, P. Anders, M. Krahl, H. Hadrich and K. Rohricht. Beschreibung der Konstruktion der Triebwerksanlagen. Berlin: Interflug, 1968. Sutton, George P. Oscar Biblarz. Rocket Propulsion Elements. New York: Wiley-Interscience, 2000 (7th edition). . Walker, Peter. Chambers Dictionary of Science and Technology. Edinburgh: Chambers Harrap Publishers Ltd., 2000, First edition 1998. . External links History and photographs of early aeroplanes etc. 'Birds in Flight and Aeroplanes' by Evolutionary Biologist and trained Engineer John Maynard-Smith Freeview video provided by the Vega Science Trust. Aerodynamics Sky
Flight
[ "Physics", "Chemistry", "Engineering" ]
4,272
[ "Physical phenomena", "Aerodynamics", "Flight", "Motion (physics)", "Aerospace engineering", "Fluid dynamics" ]
159,501
https://en.wikipedia.org/wiki/Alpha%E2%80%93beta%20pruning
Alpha–beta pruning is a search algorithm that seeks to decrease the number of nodes that are evaluated by the minimax algorithm in its search tree. It is an adversarial search algorithm used commonly for machine playing of two-player combinatorial games (Tic-tac-toe, Chess, Connect 4, etc.). It stops evaluating a move when at least one possibility has been found that proves the move to be worse than a previously examined move. Such moves need not be evaluated further. When applied to a standard minimax tree, it returns the same move as minimax would, but prunes away branches that cannot possibly influence the final decision. History John McCarthy during the Dartmouth Workshop met Alex Bernstein of IBM, who was writing a chess program. McCarthy invented alpha–beta search and recommended it to him, but Bernstein was "unconvinced". Allen Newell and Herbert A. Simon who used what John McCarthy calls an "approximation" in 1958 wrote that alpha–beta "appears to have been reinvented a number of times". Arthur Samuel had an early version for a checkers simulation. Richards, Timothy Hart, Michael Levin and/or Daniel Edwards also invented alpha–beta independently in the United States. McCarthy proposed similar ideas during the Dartmouth workshop in 1956 and suggested it to a group of his students including Alan Kotok at MIT in 1961. Alexander Brudno independently conceived the alpha–beta algorithm, publishing his results in 1963. Donald Knuth and Ronald W. Moore refined the algorithm in 1975. Judea Pearl proved its optimality in terms of the expected running time for trees with randomly assigned leaf values in two papers. The optimality of the randomized version of alpha–beta was shown by Michael Saks and Avi Wigderson in 1986. Core idea A game tree can represent many two-player zero-sum games, such as chess, checkers, and reversi. Each node in the tree represents a possible situation in the game. Each terminal node (outcome) of a branch is assigned a numeric score that determines the value of the outcome to the player with the next move. The algorithm maintains two values, alpha and beta, which respectively represent the minimum score that the maximizing player is assured of and the maximum score that the minimizing player is assured of. Initially, alpha is negative infinity and beta is positive infinity, i.e. both players start with their worst possible score. Whenever the maximum score that the minimizing player (i.e. the "beta" player) is assured of becomes less than the minimum score that the maximizing player (i.e., the "alpha" player) is assured of (i.e. beta < alpha), the maximizing player need not consider further descendants of this node, as they will never be reached in the actual play. To illustrate this with a real-life example, suppose somebody is playing chess, and it is their turn. Move "A" will improve the player's position. The player continues to look for moves to make sure a better one hasn't been missed. Move "B" is also a good move, but the player then realizes that it will allow the opponent to force checkmate in two moves. Thus, other outcomes from playing move B no longer need to be considered since the opponent can force a win. The maximum score that the opponent could force after move "B" is negative infinity: a loss for the player. This is less than the minimum position that was previously found; move "A" does not result in a forced loss in two moves. 
Improvements over naive minimax The benefit of alpha–beta pruning lies in the fact that branches of the search tree can be eliminated. This way, the search time can be limited to the 'more promising' subtree, and a deeper search can be performed in the same time. Like its predecessor, it belongs to the branch and bound class of algorithms. The optimization reduces the effective depth to slightly more than half that of simple minimax if the nodes are evaluated in an optimal or near optimal order (best choice for side on move ordered first at each node). With an (average or constant) branching factor of b, and a search depth of d plies, the maximum number of leaf node positions evaluated (when the move ordering is pessimal) is O(b^d) – the same as a simple minimax search. If the move ordering for the search is optimal (meaning the best moves are always searched first), the number of leaf node positions evaluated is about O(b×1×b×1×...×b) for odd depth and O(b×1×b×1×...×1) for even depth, or O(b^(d/2)) = O(√(b^d)). In the latter case, where the ply of a search is even, the effective branching factor is reduced to its square root, or, equivalently, the search can go twice as deep with the same amount of computation. The explanation of b×1×b×1×... is that all the first player's moves must be studied to find the best one, but for each, only the second player's best move is needed to refute all but the first (and best) first player move – alpha–beta ensures no other second player moves need be considered. When nodes are considered in a random order (i.e., the algorithm randomizes), asymptotically, the expected number of nodes evaluated in uniform trees with binary leaf-values is . For the same trees, when the values are assigned to the leaf values independently of each other and say zero and one are both equally probable, the expected number of nodes evaluated is , which is much smaller than the work done by the randomized algorithm, mentioned above, and is again optimal for such random trees. When the leaf values are chosen independently of each other but from the interval uniformly at random, the expected number of nodes evaluated increases to in the limit, which is again optimal for this kind of random tree. Note that the actual work for "small" values of is better approximated using . A chess program that searches four plies with an average of 36 branches per node evaluates more than one million terminal nodes. An optimal alpha-beta prune would eliminate all but about 2,000 terminal nodes, a reduction of 99.8%. Normally during alpha–beta, the subtrees are temporarily dominated by either a first player advantage (when many first player moves are good, and at each search depth the first move checked by the first player is adequate, but all second player responses are required to try to find a refutation), or vice versa. This advantage can switch sides many times during the search if the move ordering is incorrect, each time leading to inefficiency. As the number of positions searched decreases exponentially each move nearer the current position, it is worth spending considerable effort on sorting early moves. An improved sort at any depth will exponentially reduce the total number of positions searched, but sorting all positions at depths near the root node is relatively cheap as there are so few of them. In practice, the move ordering is often determined by the results of earlier, smaller searches, such as through iterative deepening.
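A quick arithmetic check of the chess figures above: the closed-form count for perfect ordering, b^ceil(d/2) + b^floor(d/2) − 1, is the standard result from the Knuth–Moore analysis mentioned earlier, and the printed totals are a sketch rather than measurements of a real engine.

import math

# Leaf-node counts for a search of depth d with branching factor b:
# worst-case ordering examines b**d leaves, while perfect ordering examines
# b**ceil(d/2) + b**floor(d/2) - 1 leaves (Knuth & Moore).

def worst_case_leaves(b, d):
    return b ** d

def best_case_leaves(b, d):
    return b ** math.ceil(d / 2) + b ** math.floor(d / 2) - 1

b, d = 36, 4  # the chess example above: 36 branches per node, 4 plies
worst = worst_case_leaves(b, d)   # 1,679,616 -- "more than one million"
best = best_case_leaves(b, d)     # 2,591     -- on the order of the ~2,000 quoted above
print(worst, best, f"reduction: {100 * (1 - best / worst):.1f}%")  # about 99.8%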
Additionally, this algorithm can be trivially modified to return an entire principal variation in addition to the score. Some more aggressive algorithms such as MTD(f) do not easily permit such a modification. Pseudocode The pseudo-code for depth-limited minimax with alpha–beta pruning is given below. Implementations of alpha–beta pruning can often be delineated by whether they are "fail-soft," or "fail-hard". With fail-soft alpha–beta, the alphabeta function may return values (v) that exceed (v < α or v > β) the α and β bounds set by its function call arguments. In comparison, fail-hard alpha–beta limits its function return value into the inclusive range of α and β. The main difference between fail-soft and fail-hard implementations is whether α and β are updated before or after the cutoff check. If they are updated before the check, then they can exceed initial bounds and the algorithm is fail-soft. The following pseudo-code illustrates the fail-hard variation. function alphabeta(node, depth, α, β, maximizingPlayer) is if depth == 0 or node is terminal then return the heuristic value of node if maximizingPlayer then value := −∞ for each child of node do value := max(value, alphabeta(child, depth − 1, α, β, FALSE)) if value > β then break (* β cutoff *) α := max(α, value) return value else value := +∞ for each child of node do value := min(value, alphabeta(child, depth − 1, α, β, TRUE)) if value < α then break (* α cutoff *) β := min(β, value) return value (* Initial call *) alphabeta(origin, depth, −∞, +∞, TRUE) The following pseudocode illustrates fail-soft alpha-beta. function alphabeta(node, depth, α, β, maximizingPlayer) is if depth == 0 or node is terminal then return the heuristic value of node if maximizingPlayer then value := −∞ for each child of node do value := max(value, alphabeta(child, depth − 1, α, β, FALSE)) α := max(α, value) if value ≥ β then break (* β cutoff *) return value else value := +∞ for each child of node do value := min(value, alphabeta(child, depth − 1, α, β, TRUE)) β := min(β, value) if value ≤ α then break (* α cutoff *) return value (* Initial call *) alphabeta(origin, depth, −∞, +∞, TRUE) Heuristic improvements Further improvement can be achieved without sacrificing accuracy by using ordering heuristics to search earlier parts of the tree that are likely to force alpha–beta cutoffs. For example, in chess, moves that capture pieces may be examined before moves that do not, and moves that have scored highly in earlier passes through the game-tree analysis may be evaluated before others. Another common, and very cheap, heuristic is the killer heuristic, where the last move that caused a beta-cutoff at the same tree level in the tree search is always examined first. This idea can also be generalized into a set of refutation tables. Alpha–beta search can be made even faster by considering only a narrow search window (generally determined by guesswork based on experience). This is known as an aspiration window. In the extreme case, the search is performed with alpha and beta equal; a technique known as zero-window search, null-window search, or scout search. This is particularly useful for win/loss searches near the end of a game where the extra depth gained from the narrow window and a simple win/loss evaluation function may lead to a conclusive result.
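As a concrete illustration of the fail-soft pseudocode above, here is a minimal runnable sketch in Python; the nested-list game-tree representation and the example tree are assumptions made purely for demonstration and are not part of the original algorithm description.

import math

# Fail-soft alpha-beta over a toy game tree. An internal node is a list/tuple of
# children; a leaf is a number giving its heuristic value for the maximizing player.

def alphabeta(node, depth, alpha, beta, maximizing):
    if not isinstance(node, (list, tuple)):
        return node  # leaf: its numeric heuristic value
    if depth == 0:
        raise ValueError("depth limit hit before a leaf; a heuristic evaluator would be needed here")
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if value >= beta:
                break  # beta cutoff: the minimizing player will avoid this node
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if value <= alpha:
                break  # alpha cutoff: the maximizing player will avoid this node
        return value

# An assumed tree of depth 2: the maximizer picks a branch, then the minimizer picks
# a leaf. The second and third branches are cut off after their first (refuting) leaf.
tree = [[5, 6], [4, 9], [3, 8]]
print(alphabeta(tree, 2, -math.inf, math.inf, True))  # prints 5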
If an aspiration search fails, it is straightforward to detect whether it failed high (high edge of window was too low) or low (lower edge of window was too high). This gives information about what window values might be useful in a re-search of the position. Over time, other improvements have been suggested, and indeed the Falphabeta (fail-soft alpha–beta) idea of John Fishburn is nearly universal and is already incorporated above in a slightly modified form. Fishburn also suggested a combination of the killer heuristic and zero-window search under the name Lalphabeta ("last move with minimal window alpha–beta search"). Other algorithms Since the minimax algorithm and its variants are inherently depth-first, a strategy such as iterative deepening is usually used in conjunction with alpha–beta so that a reasonably good move can be returned even if the algorithm is interrupted before it has finished execution. Another advantage of using iterative deepening is that searches at shallower depths give move-ordering hints, as well as shallow alpha and beta estimates, that both can help produce cutoffs for higher depth searches much earlier than would otherwise be possible. Algorithms like SSS*, on the other hand, use the best-first strategy. This can potentially make them more time-efficient, but typically at a heavy cost in space-efficiency. See also Minimax Expectiminimax Negamax Pruning (algorithm) Branch and bound Combinatorial optimization Principal variation search Transposition table References Bibliography Game artificial intelligence Graph algorithms Optimization algorithms and methods Search algorithms Articles with example pseudocode
Alpha–beta pruning
[ "Mathematics" ]
2,654
[ "Game artificial intelligence", "Recreational mathematics", "Combinatorics", "Game theory", "Combinatorial game theory" ]
159,505
https://en.wikipedia.org/wiki/Additive%20color
Additive color or additive mixing is a property of a color model that predicts the appearance of colors made by coincident component lights, i.e. the perceived color can be predicted by summing the numeric representations of the component colors. Modern formulations of Grassmann's laws describe the additivity in the color perception of light mixtures in terms of algebraic equations. Additive color predicts perception and not any sort of change in the photons of light themselves. These predictions are only applicable in the limited scope of color matching experiments where viewers match small patches of uniform color isolated against a gray or black background. Additive color models are applied in the design and testing of electronic displays that are used to render realistic images containing diverse sets of color using phosphors that emit light of a limited set of primary colors. Examination with a sufficiently powerful magnifying lens will reveal that each pixel in CRT, LCD, and most other types of color video displays is composed of red, green, and blue light-emitting phosphors which appear as a variety of single colors when viewed from a normal distance. Additive color, alone, does not predict the appearance of mixtures of printed color inks, dye layers in color photographs on film, or paint mixtures. Instead, subtractive color is used to model the appearance of pigments or dyes, such as those in paints and inks. The combination of two of the common three additive primary colors in equal proportions produces an additive secondary colorβ€”cyan, magenta or yellow. Additive color is also used to predict colors from overlapping projected colored lights often used in theatrical lighting for plays, concerts, circus shows, and night clubs. The full gamut of color available in any additive color system is defined by all the possible combinations of all the possible luminosities of each primary color in that system. In chromaticity space, a gamut is a plane convex polygon with corners at the primaries. For three primaries, it is a triangle. History Systems of additive color are motivated by the Young–Helmholtz theory of trichromatic color vision, which was articulated around 1850 by Hermann von Helmholtz, based on earlier work by Thomas Young. For his experimental work on the subject, James Clerk Maxwell is sometimes credited as being the father of additive color. He had the photographer Thomas Sutton photograph a tartan ribbon on black-and-white film three times, first with a red, then green, then blue color filter over the lens. The three black-and-white images were developed and then projected onto a screen with three different projectors, each equipped with the corresponding red, green, or blue color filter used to take its image. When brought into alignment, the three images (a black-and-red image, a black-and-green image and a black-and-blue image) formed a full-color image, thus demonstrating the principles of additive color. See also Color mixing Color space Color theory Color motion picture film Kinemacolor Prizma Color RGB color model Subtractive color Technicolor William Friese-Greene References External links RGB and CMYK Colour systems. http://www.edinphoto.org.uk/1_P/1_photographers_maxwell.htm - Photos and stories from the James Clerk Maxwell Foundation. Stanford University CS 178 interactive Flash demo comparing additive and subtractive color mixing. Color space Color
Additive color
[ "Mathematics" ]
705
[ "Color space", "Space (mathematics)", "Metric spaces" ]
159,506
https://en.wikipedia.org/wiki/RGB%20color%20spaces
RGB color spaces are a category of additive colorimetric color spaces that specify part of their absolute color space definition using the RGB color model. RGB color spaces are commonly found describing the mapping of the RGB color model to human-perceivable color, but some RGB color spaces use imaginary (non-real-world) primaries and thus cannot be displayed directly. Like any color space, while the specifications in this category use the RGB color model to describe their space, it is not mandatory to use that model to signal pixel color values. Broadcast TV color spaces like NTSC, PAL, Rec. 709, and Rec. 2020 additionally describe a translation from RGB to YCbCr, and that is how they are usually signalled for transmission, but an image can be stored as either RGB or YCbCr. This shows that the singular term "RGB color space" can be misleading, since a chosen color space or signalled color can be described by any appropriate color model. However, the singular is seen in specifications where storage signalled as RGB is the intended use. Definition The normal human eye contains three types of color-sensitive cone cells. Each cell is responsive to light of either long, medium, or short wavelengths, which we generally categorize as red, green, and blue. Taken together, the responses of these cone cells are called the tristimulus values, and the combination of their responses is processed into the psychological effect of color vision. RGB color space definitions employ primaries (and often a white point) based on the RGB color model to map to real-world color. Applying Grassmann's law of light additivity, the range of colors that can be produced is the region enclosed within the triangle on the chromaticity diagram defined using the primaries as vertices. The primary colors are usually mapped to xyY chromaticity coordinates, though the u',v' coordinates from the UCS chromaticity diagram may be used. Both xyY and u',v' are derived from the CIE 1931 color space, a device-independent space also known as XYZ, which covers the full gamut of human-perceptible colors visible to the CIE 2° standard observer. Applications RGB color spaces are well suited to describing the electronic display of color, such as computer monitors and color television. These devices often reproduce colors using an array of red, green, and blue phosphors excited by the electron beam of a cathode-ray tube (CRT), or an array of red, green, and blue LCD elements lit by a backlight, and are therefore naturally described by an additive color model with RGB primaries. Early examples of RGB color spaces came with the adoption of the NTSC color television standard in 1953 across North America, followed by PAL and SECAM covering the rest of the world. These early RGB spaces were defined in part by the phosphors used in CRTs at the time, and by the gamma of the electron beam. While these color spaces reproduced the intended colors using additive red, green, and blue primaries, the broadcast signal itself was encoded from RGB components to a composite signal such as YIQ, and decoded back by the receiver into RGB signals for display. HDTV uses the BT.709 color space, later repurposed for computer monitors as sRGB. Both use the same color primaries and white point, but different transfer functions, as HDTV is intended for a dark living room while sRGB is intended for a brighter office environment. The gamut of these spaces is limited, covering only 35.9% of the CIE 1931 gamut.
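The statement that reproducible colors are those enclosed by the triangle spanned by the primaries can be made concrete with a small sketch (illustrative only; the chromaticity coordinates below are the commonly published Rec. 709/sRGB primaries and D65 white point, and the point-in-triangle test is a generic geometric routine, not part of any standard):

def sign(p, a, b):
    # Signed area of the triangle (a, b, p); its sign tells which side of edge a-b the point p lies on.
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def in_gamut(xy, primaries):
    # A chromaticity is reproducible only if it lies inside the triangle spanned by the primaries.
    r, g, b = primaries
    d1, d2, d3 = sign(xy, r, g), sign(xy, g, b), sign(xy, b, r)
    has_neg = (d1 < 0) or (d2 < 0) or (d3 < 0)
    has_pos = (d1 > 0) or (d2 > 0) or (d3 > 0)
    return not (has_neg and has_pos)

# Rec. 709 / sRGB primaries in CIE xy chromaticity coordinates.
REC709 = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]

print(in_gamut((0.3127, 0.3290), REC709))  # D65 white point -> True
print(in_gamut((0.170, 0.797), REC709))    # Rec. 2020 green primary -> False (outside Rec. 709)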
While the limited gamut allows the use of a modest bit depth without causing color banding, and therefore reduces transmission bandwidth, it also prevents the encoding of deeply saturated colors that might be available in an alternative color space. Some RGB color spaces, such as Adobe RGB and ProPhoto, intended for the creation rather than the transmission of images, are designed with expanded gamuts to address this issue; however, this does not mean the larger space has 'more colors'. The numerical quantity of colors is related to bit depth and not to the size or shape of the gamut. A large space with a low bit depth can be detrimental to gamut density and result in high errors. More recent color spaces such as Rec. 2020 for UHD-TVs define an extremely large gamut covering 63.3% of the CIE 1931 space. This gamut is not fully realisable with current LCD technology, and alternative architectures such as quantum dot or OLED-based devices are in development. Color space specifications employing the RGB color model The CIE 1931 color space standard defines both the CIE RGB space, which is a color space with monochromatic primaries, and the CIE XYZ color space, which is functionally similar to a linear RGB color space; however, its primaries are not physically realizable and thus are not described as red, green, and blue. M.A.C. is not to be confused with MacOS; here, M.A.C. refers to Multiplexed Analogue Components. See also CIELUV color space Web colors RGB color model RGBA color model References External links RGB Color Chart. Color space
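Because CIE XYZ behaves like a linear RGB-style space, converting a linear-light RGB triple into XYZ is a single matrix multiplication. The sketch below uses the commonly published sRGB/D65 coefficients, rounded to four decimals (gamma-encoded sRGB values would first have to be linearized):

# Convert a linear-light sRGB triple (components in [0, 1]) to CIE XYZ.
SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def linear_srgb_to_xyz(rgb):
    return tuple(sum(coef * c for coef, c in zip(row, rgb)) for row in SRGB_TO_XYZ)

print(linear_srgb_to_xyz((1.0, 1.0, 1.0)))  # approximately the D65 white point (0.9505, 1.0000, 1.0890)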
RGB color spaces
[ "Mathematics" ]
1,092
[ "Color space", "Space (mathematics)", "Metric spaces" ]
159,513
https://en.wikipedia.org/wiki/Evaluation%20function
An evaluation function, also known as a heuristic evaluation function or static evaluation function, is a function used by game-playing computer programs to estimate the value or goodness of a position (usually at a leaf or terminal node) in a game tree. Most of the time, the value is either a real number or a quantized integer, often in nths of the value of a playing piece such as a stone in go or a pawn in chess, where n may be tenths, hundredths or other convenient fraction, but sometimes, the value is an array of three values in the unit interval, representing the win, draw, and loss percentages of the position. There do not exist analytical or theoretical models for evaluation functions for unsolved games, nor are such functions entirely ad-hoc. The composition of evaluation functions is determined empirically by inserting a candidate function into an automaton and evaluating its subsequent performance. A significant body of evidence now exists for several games like chess, shogi and go as to the general composition of evaluation functions for them. Games in which game playing computer programs employ evaluation functions include chess, go, shogi (Japanese chess), othello, hex, backgammon, and checkers. In addition, with the advent of programs such as MuZero, computer programs also use evaluation functions to play video games, such as those from the Atari 2600. Some games like tic-tac-toe are strongly solved, and do not require search or evaluation because a discrete solution tree is available. Relation to search A tree of such evaluations is usually part of a search algorithm, such as Monte Carlo tree search or a minimax algorithm like alpha–beta search. The value is presumed to represent the relative probability of winning if the game tree were expanded from that node to the end of the game. The function looks only at the current position (i.e. what spaces the pieces are on and their relationship to each other) and does not take into account the history of the position or explore possible moves forward of the node (therefore static). This implies that for dynamic positions where tactical threats exist, the evaluation function will not be an accurate assessment of the position. These positions are termed non-quiescent; they require at least a limited kind of search extension called quiescence search to resolve threats before evaluation. Some values returned by evaluation functions are absolute rather than heuristic, if a win, loss or draw occurs at the node. There is an intricate relationship between search and knowledge in the evaluation function. Deeper search favors less near-term tactical factors and more subtle long-horizon positional motifs in the evaluation. There is also a trade-off between efficacy of encoded knowledge and computational complexity: computing detailed knowledge may take so much time that performance decreases, so approximations to exact knowledge are often better. Because the evaluation function depends on the nominal depth of search as well as the extensions and reductions employed in the search, there is no generic or stand-alone formulation for an evaluation function. An evaluation function which works well in one application will usually need to be substantially re-tuned or re-trained to work effectively in another application. In chess In computer chess, the output of an evaluation function is typically an integer, and the units of the evaluation function are typically referred to as pawns. 
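The way a static evaluation plugs into the leaves of a tree search, as described above, can be sketched as follows (a generic illustration rather than any particular engine: the position methods and the evaluate function are hypothetical placeholders, evaluate is assumed to score from the point of view of the side to move, and real engines add alpha-beta pruning, quiescence search, move ordering and transposition tables):

def negamax(position, depth):
    # Fall back to the static (heuristic) evaluation at the leaves of the search tree.
    if depth == 0 or position.is_terminal():
        return evaluate(position)
    best = float("-inf")
    for move in position.legal_moves():
        position.make(move)
        score = -negamax(position, depth - 1)   # the opponent's best reply, negated
        position.undo(move)
        best = max(best, score)
    return best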
The term 'pawn' refers to the value when the player has one more pawn than the opponent in a position, as explained in Chess piece relative value. The integer 1 usually represents some fraction of a pawn; the most commonly used unit in computer chess is the centipawn, a hundredth of a pawn. Larger evaluations indicate a material imbalance or positional advantage, or that a win of material is imminent. Very large evaluations may indicate that checkmate is imminent. An evaluation function also implicitly encodes the value of the right to move, which can vary from a small fraction of a pawn to a win or loss. Handcrafted evaluation functions Historically in computer chess, the terms of an evaluation function have been constructed (i.e. handcrafted) by the engine developer, as opposed to discovered through the training of neural networks. The general approach for constructing a handcrafted evaluation function is a linear combination of various weighted terms determined to influence the value of a position. However, not all terms in a handcrafted evaluation function are linear; some, such as king safety and pawn structure, are nonlinear. Each term may be considered to be composed of first-order factors (those that depend only on the space and any piece on it), second-order factors (the space in relation to other spaces), and nth-order factors (dependencies on the history of the position). A handcrafted evaluation function typically consists of a material balance term that usually dominates the evaluation. The conventional values used for material are Queen = 9, Rook = 5, Knight or Bishop = 3, and Pawn = 1; the king is assigned an arbitrarily large value, usually larger than the total value of all the other pieces. In addition, it typically has a set of positional terms usually totaling no more than the value of a pawn, though in some positions the positional terms can become much larger, such as when checkmate is imminent. Handcrafted evaluation functions typically contain dozens to hundreds of individual terms. In practice, effective handcrafted evaluation functions are not created by expanding the list of evaluated parameters, but by careful tuning or training of the weights of a modest set of parameters, such as those described above, relative to each other. Toward this end, positions from various databases are employed, such as master games, engine games, Lichess games, or even self-play, as in reinforcement learning. Example An example handcrafted evaluation function for chess might look like the following: c1 * material + c2 * mobility + c3 * king safety + c4 * center control + c5 * pawn structure + c6 * king tropism + ... Each of the terms is a weight multiplied by a difference factor: the value of white's material or positional terms minus black's. The material term is obtained by assigning a value in pawn-units to each of the pieces. Mobility is the number of legal moves available to a player, or alternatively the sum of the number of spaces attacked or defended by each piece, including spaces occupied by friendly or opposing pieces. Effective mobility, or the number of "safe" spaces a piece may move to, may also be taken into account. King safety is a set of bonuses and penalties assessed for the location of the king and the configuration of pawns and pieces adjacent to or in front of the king, and opposing pieces bearing on spaces around the king. Center control is derived from how many pawns and pieces occupy or bear on the four center spaces and sometimes the 12 spaces of the extended center.
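A toy version of such a weighted-sum evaluation, reduced to the material and mobility difference terms just described, might look like the following sketch (the piece values and weights are illustrative, not taken from any engine); the remaining terms, pawn structure and king tropism, are described next:

PIECE_VALUES = {"P": 100, "N": 300, "B": 300, "R": 500, "Q": 900}   # centipawns

def evaluate(white_pieces, black_pieces, white_mobility, black_mobility):
    # Each term is a white-minus-black difference; c1 = 1 and c2 = 10 are illustrative weights.
    material = sum(PIECE_VALUES[p] for p in white_pieces) - sum(PIECE_VALUES[p] for p in black_pieces)
    mobility = white_mobility - black_mobility
    return material + 10 * mobility

# White is up a knight for two pawns (+100) and has four extra legal moves (+40): 140 centipawns.
print(evaluate(["Q", "R", "R", "N", "N", "B"] + ["P"] * 6,
               ["Q", "R", "R", "N", "B"] + ["P"] * 8,
               34, 30))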
Pawn structure is a set of penalties and bonuses for various strengths and weaknesses in pawn structure, such as penalties for doubled and isolated pawns. King tropism is a bonus for closeness (or penalty for distance) of certain pieces, especially queens and knights, to the opposing king. Neural networks While neural networks have been used in the evaluation functions of chess engines since the late 1980s, they did not become popular in computer chess until the late 2010s, as the hardware needed to train neural networks was not strong enough at the time, and fast training algorithms and network topology and architectures have not been developed yet. Initially, neural network based evaluation functions generally consisted of one neural network for the entire evaluation function, with input features selected from the board and whose output is an integer, normalized to the centipawn scale so that a value of 100 is roughly equivalent to a material advantage of a pawn. The parameters in neural networks are typically trained using reinforcement learning or supervised learning. More recently, evaluation functions in computer chess have started to use multiple neural networks, with each neural network trained for a specific part of the evaluation, such as pawn structure or endgames. This allows for hybrid approaches where an evaluation function consists of both neural networks and handcrafted terms. Deep neural networks have been used, albeit infrequently, in computer chess after Matthew Lai's Giraffe in 2015 and Deepmind's AlphaZero in 2017 demonstrated the feasibility of deep neural networks in evaluation functions. The distributed computing project Leela Chess Zero was started shortly after to attempt to replicate the results of Deepmind's AlphaZero paper. Apart from the size of the networks, the neural networks used in AlphaZero and Leela Chess Zero also differ from those used in traditional chess engines as they have two outputs, one for evaluation (the value head) and one for move ordering (the policy head), rather than only one output for evaluation. In addition, while it is possible to set the output of the value head of Leela's neural network to a real number to approximate the centipawn scale used in traditional chess engines, by default the output is the win-draw-loss percentages, a vector of three values each from the unit interval. Since deep neural networks are very large, engines using deep neural networks in their evaluation function usually require a graphics processing unit in order to efficiently calculate the evaluation function. Piece-square tables An important technique in evaluation since at least the early 1990s is the use of piece-square tables (also called piece-value tables) for evaluation. Each table is a set of 64 values corresponding to the squares of the chessboard. The most basic implementation of piece-square table consists of separate tables for each type of piece per player, which in chess results in 12 piece-square tables in total. More complex variants of piece-square tables are used in computer chess, one of the most prominent being the king-piece-square table, used in Stockfish, Komodo Dragon, Ethereal, and many other engines, where each table considers the position of every type of piece in relation to the player's king, rather than the position of the every type of piece alone. The values in the tables are bonuses/penalties for the location of each piece on each space, and encode a composite of many subtle factors difficult to quantify analytically. 
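A piece-square-table lookup of the kind described above can be sketched in a few lines (illustrative only: the tables here are filled from a crude centre-of-the-board bonus purely so the example runs, whereas real engines tune every entry, and the board representation is a hypothetical square-to-piece mapping):

def centre_bonus(square):
    file, rank = square % 8, square // 8
    return 8 - 2 * (abs(file - 3.5) + abs(rank - 3.5))   # peaks on the four centre squares

# One 64-entry table of centipawn bonuses per piece type (a1 = 0, h8 = 63).
PST = {piece: [round(weight * centre_bonus(sq)) for sq in range(64)]
       for piece, weight in {"P": 1, "N": 4, "B": 3, "R": 1, "Q": 2, "K": -2}.items()}

def pst_score(position):
    # position maps a square index to a (colour, piece) pair.
    score = 0
    for square, (colour, piece) in position.items():
        sq = square if colour == "white" else square ^ 56   # flip ranks to reuse the same table for black
        score += PST[piece][sq] if colour == "white" else -PST[piece][sq]
    return score

# White knight centralized on e4 (+24) versus a black knight still on b8 (-16 for Black) -> 40.
print(pst_score({28: ("white", "N"), 57: ("black", "N")}))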
In handcrafted evaluation functions, there are sometimes two sets of tables: one for the opening/middlegame, and one for the endgame; positions of the middle game are interpolated between the two. Originally developed in computer shogi in 2018 by Yu Nasu, the most common evaluation function used in computer chess today is the efficiently updatable neural network, or NNUE for short, a sparse and shallow neural network that has only piece-square tables as the inputs into the neural network. In fact, the most basic NNUE architecture is simply the 12 piece-square tables described above, a neural network with only one layer and no activation functions. An efficiently updatable neural network architecture, using king-piece-square tables as its inputs, was first ported to chess in a Stockfish derivative called Stockfish NNUE, publicly released on May 30, 2020, and was adopted by many other engines before eventually being incorporated into the official Stockfish engine on August 6, 2020. Endgame tablebases Chess engines frequently use endgame tablebases in their evaluation function, as it allows the engine to play perfectly in the endgame. In Go Historically, evaluation functions in Computer Go took into account both territory controlled, influence of stones, number of prisoners and life and death of groups on the board. However, modern go playing computer programs largely use deep neural networks in their evaluation functions, such as AlphaGo, Leela Zero, Fine Art, and KataGo, and output a win/draw/loss percentage rather than a value in number of stones. References Slate, D and Atkin, L., 1983, "Chess 4.5, the Northwestern University Chess Program" in Chess Skill in Man and Machine 2nd Ed., pp.Β 93–100. Springer-Verlag, New York, NY. Ebeling, Carl, 1987, All the Right Moves: A VLSI Architecture for Chess (ACM Distinguished Dissertation), pp.Β 56–86. MIT Press, Cambridge, MA External links Keys to Evaluating Positions GameDev.net - Chess Programming Part VI: Evaluation Functions Computer chess Game artificial intelligence Heuristics
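The most basic form described above, a single linear layer over one-hot (colour, piece type, square) features with no activation function, can be sketched as follows (an illustration of the "efficiently updatable" idea only; the weights would be learned from data, and actual NNUE networks in engines such as Stockfish use king-relative inputs, an accumulator and additional layers):

N_FEATURES = 2 * 6 * 64                  # colour x piece type x square
weights = [0.0] * N_FEATURES             # learned from training data in a real engine

def feature_index(colour, piece, square):
    return (colour * 6 + piece) * 64 + square

def full_evaluation(active_features):
    # Equivalent to summing piece-square-table entries for every piece on the board.
    return sum(weights[i] for i in active_features)

def update_after_quiet_move(score, colour, piece, from_sq, to_sq):
    # A quiet move toggles only two features, so the running total is adjusted
    # with two look-ups instead of recomputing the whole sum.
    score -= weights[feature_index(colour, piece, from_sq)]
    score += weights[feature_index(colour, piece, to_sq)]
    return score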
Evaluation function
[ "Mathematics" ]
2,530
[ "Game theory", "Game artificial intelligence" ]
159,587
https://en.wikipedia.org/wiki/Diffeology
In mathematics, a diffeology on a set generalizes the concept of smooth charts in a differentiable manifold, by declaring what constitutes the "smooth parametrizations" into the set. The concept was first introduced by Jean-Marie Souriau in the 1980s under the name Espace diffΓ©rentiel and later developed by his students Paul Donato and Patrick Iglesias. A related idea was introduced by Kuo-TsaΓ― Chen (ι™³εœ‹ζ‰, Chen Guocai) in the 1970s, using convex sets instead of open sets for the domains of the plots. Intuitive definition Recall that a topological manifold is a topological space which is locally homeomorphic to . Differentiable manifolds (also called smooth manifolds) generalize the notion of smoothness on in the following sense: a differentiable manifold is a topological manifold with a differentiable atlas, i.e. a collection of maps from open subsets of to the manifold which are used to "pull back" the differential structure from to the manifold. A diffeological space consists of a set together with a collection of maps (called a diffeology) satisfying suitable axioms, which are used to characterize smoothness of the space in a way similar to charts of an atlas. A smooth manifold can be equivalently defined as a diffeological space which is locally diffeomorphic to . But there are many diffeological spaces which do not carry any local model, nor a sufficiently interesting underlying topological space. Diffeology is therefore suitable to treat examples of objects more general than manifolds. Formal definition A diffeology on a set consists of a collection of maps, called plots or parametrizations, from open subsets of (for all ) to such that the following axioms hold: Covering axiom: every constant map is a plot. Locality axiom: for a given map , if every point in has a neighborhood such that is a plot, then itself is a plot. Smooth compatibility axiom: if is a plot, and is a smooth function from an open subset of some into the domain of , then the composite is a plot. Note that the domains of different plots can be subsets of for different values of ; in particular, any diffeology contains the elements of its underlying set as the plots with . A set together with a diffeology is called a diffeological space. More abstractly, a diffeological space is a concrete sheaf on the site of open subsets of , for all , and open covers. Morphisms A map between diffeological spaces is called smooth if and only if its composite with any plot of the first space is a plot of the second space. It is called a diffeomorphism if it is smooth, bijective, and its inverse is also smooth. By construction, given a diffeological space , its plots defined on are precisely all the smooth maps from to . Diffeological spaces form a category where the morphisms are smooth maps. The category of diffeological spaces is closed under many categorical operations: for instance, it is Cartesian closed, complete and cocomplete, and more generally it is a quasitopos. D-topology Any diffeological space is automatically a topological space with the so-called D-topology: the final topology such that all plots are continuous (with respect to the euclidean topology on ). In other words, a subset is open if and only if is open for any plot on . Actually, the D-topology is completely determined by smooth curves, i.e. a subset is open if and only if is open for any smooth map . 
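For reference, the three axioms above can be restated compactly in standard notation (a sketch using the usual conventions, with X the underlying set and plots defined on open subsets of Euclidean spaces); the discussion of the D-topology continues below:

\begin{itemize}
  \item[(D1)] Every constant map $p \colon U \to X$, with $U \subseteq \mathbb{R}^{n}$ open, is a plot.
  \item[(D2)] If $p \colon U \to X$ is such that every $u \in U$ has an open neighbourhood $V \subseteq U$ with $p|_{V}$ a plot, then $p$ is a plot.
  \item[(D3)] If $p \colon U \to X$ is a plot and $F \colon V \to U$ is a smooth map from an open subset $V \subseteq \mathbb{R}^{m}$, then $p \circ F$ is a plot.
\end{itemize}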
The D-topology is automatically locally path-connected and a differentiable map between diffeological spaces is automatically continuous between their D-topologies. Additional structures A Cartan-De Rham calculus can be developed in the framework of diffeologies, as well as a suitable adaptation of the notions of fiber bundles, homotopy, etc. However, there is not a canonical definition of tangent spaces and tangent bundles for diffeological spaces. Examples Trivial examples Any set can be endowed with the coarse (or trivial, or indiscrete) diffeology, i.e. the largest possible diffeology (any map is a plot). The corresponding D-topology is the trivial topology. Any set can be endowed with the discrete (or fine) diffeology, i.e. the smallest possible diffeology (the only plots are the locally constant maps). The corresponding D-topology is the discrete topology. Any topological space can be endowed with the continuous diffeology, whose plots are all continuous maps. Manifolds Any differentiable manifold can be assigned the diffeology consisting of all smooth maps from all open subsets of Euclidean spaces into it. This diffeology will contain not only the charts of , but also all smooth curves into , all constant maps (with domains open subsets of Euclidean spaces), etc. The D-topology recovers the original manifold topology. With this diffeology, a map between two smooth manifolds is smooth in the usual sense if and only if it is smooth in the diffeological sense. Accordingly, smooth manifolds with smooth maps form a full subcategory of the category of diffeological spaces. This procedure similarly assigns diffeologies to other spaces that possess a smooth structure that is determined by a local model. More precisely, each of the examples below form a full subcategory of diffeological spaces. Orbifolds, which are modeled on quotient spaces , for is a finite linear subgroup, and smooth maps between them. Manifolds with boundary or corners, which are modeled on orthants, and smooth maps between them. Banach manifolds, which are modeled on Banach spaces, and smooth maps between them. FrΓ©chet manifolds, which are modeled on FrΓ©chet spaces, and smooth maps between them. Constructions from other diffeological spaces If a set is given two different diffeologies, their intersection is a diffeology on , called the intersection diffeology, which is finer than both starting diffeologies. The D-topology of the intersection diffeology is the intersection of the D-topologies of the initial diffeologies. If is a subset of the diffeological space , then the subspace diffeology on is the diffeology consisting of the plots of whose images are subsets of . The D-topology of is equal to the subspace topology of the D-topology of if is open, but may be finer in general. If and are diffeological spaces, then the product diffeology on the Cartesian product is the diffeology generated by all products of plots of and of . The D-topology of is the coarsest delta-generated topology containing the product topology of the D-topologies of and ; it is equal to the product topology when or is locally compact, but may be finer in general. If is a diffeological space and is an equivalence relation on , then the quotient diffeology on the quotient set /~ is the diffeology generated by all compositions of plots of with the projection from to . The D-topology on is the quotient topology of the D-topology of (note that this topology may be trivial without the diffeology being trivial). 
The pushforward diffeology of a diffeological space by a function is the diffeology on generated by the compositions , for a plot of . In other words, the pushforward diffeology is the smallest diffeology on making differentiable. The quotient diffeology boils down to the pushforward diffeology by the projection . The pullback diffeology of a diffeological space by a function is the diffeology on whose plots are maps such that the composition is a plot of . In other words, the pullback diffeology is the smallest diffeology on making differentiable. The functional diffeology between two diffeological spaces is the diffeology on the set of differentiable maps, whose plots are the maps such that is smooth (with respect to the product diffeology of ). When and are manifolds, the D-topology of is the smallest locally path-connected topology containing the weak topology. Wire/spaghetti diffeology The wire diffeology (or spaghetti diffeology) on is the diffeology whose plots factor locally through . More precisely, a map is a plot if and only if for every there is an open neighbourhood of such that for two plots and . This diffeology does not coincide with the standard diffeology on : for instance, the identity is not a plot in the wire diffeology. This example can be enlarged to diffeologies whose plots factor locally through . More generally, one can consider the rank--restricted diffeology on a smooth manifold : a map is a plot if and only if the rank of its differential is less or equal than . For one recovers the wire diffeology. Other examples Quotients gives an easy way to construct non-manifold diffeologies. For example, the set of real numbers is a smooth manifold. The quotient , for some irrational , called irrational torus, is a diffeological space diffeomorphic to the quotient of the regular 2-torus by a line of slope . It has a non-trivial diffeology, but its D-topology is the trivial topology. Combining the subspace diffeology and the functional diffeology, one can define diffeologies on the space of sections of a fibre bundle, or the space of bisections of a Lie groupoid, etc. Subductions and inductions Analogously to the notions of submersions and immersions between manifolds, there are two special classes of morphisms between diffeological spaces. A subduction is a surjective function between diffeological spaces such that the diffeology of is the pushforward of the diffeology of . Similarly, an induction is an injective function between diffeological spaces such that the diffeology of is the pullback of the diffeology of . Note that subductions and inductions are automatically smooth. It is instructive to consider the case where and are smooth manifolds. Every surjective submersion is a subduction. A subduction need not be a surjective submersion. One example is given by . An injective immersion need not be an induction. One example is the parametrization of the "figure-eight," given by . An induction need not be an injective immersion. One example is the "semi-cubic," given by . In the category of diffeological spaces, subductions are precisely the strong epimorphisms, and inductions are precisely the strong monomorphisms. A map that is both a subduction and induction is a diffeomorphism. References External links Patrick Iglesias-Zemmour: Diffeology (book), Mathematical Surveys and Monographs, vol. 185, American Mathematical Society, Providence, RI USA [2013]. 
Patrick Iglesias-Zemmour: Diffeology (many documents) diffeology.net Global hub on diffeology and related topics Differential geometry Functions and mappings Chen, Guocai Smooth manifolds
Diffeology
[ "Mathematics" ]
2,355
[ "Mathematical analysis", "Functions and mappings", "Mathematical relations", "Mathematical objects" ]
159,594
https://en.wikipedia.org/wiki/Methaqualone
Methaqualone is a hypnotic sedative. It was sold under the brand names Quaalude ( ) and Sopor among others, which contained 300Β mg of methaqualone, and sold as a combination drug under the brand name Mandrax, which contained 250Β mg methaqualone and 25Β mg diphenhydramine within the same tablet, mostly in Europe. Commercial production of methaqualone was halted in the mid-1980s due to widespread abuse and addictiveness. It is a member of the quinazolinone class. Medical use The sedative–hypnotic activity of methaqualone was recognized in 1955. Its use peaked in the early 1970s for the treatment of insomnia, and as a sedative and muscle relaxant. Methaqualone was not recommended for use while pregnant and is in pregnancy category D. Similar to other GABAergic agents, methaqualone will produce tolerance and physical dependence with extended periods of use. Overdose An overdose of methaqualone can lead to coma and death. Additional effects are delirium, convulsions, hypertonia, hyperreflexia, vomiting, kidney failure, and death through cardiac or respiratory arrest. Methaqualone overdose resembles barbiturate poisoning, but with increased motor difficulties and a lower incidence of cardiac or respiratory depression. The standard single tablet adult dose of Quaalude brand of methaqualone was 300Β mg when made by Lemmon. A dose of 8000Β mg is lethal and a dose as little as 2000Β mg could induce a coma if taken with an alcoholic beverage. Pharmacology Pharmacodynamics Methaqualone primarily acts as a sedative, relieving anxiety and promoting sleep. Methaqualone binds to GABA-A receptors, and it shows negligible affinity for a wide array of other potential targets, including other receptors and neurotransmitter transporters. Methaqualone is a positive allosteric modulator at many subtypes of GABA-A receptor, similar to classical benzodiazepines such as diazepam. GABA-A receptors are inhibitory, so methaqualone tends to inhibit action potentials, similar to GABA itself or other GABA-A agonists. Unlike most benzodiazepines, methaqualone acts as a negative allosteric modulator at a few GABA-A receptor subtypes, which tends to cause an excitatory response in neurons expressing those receptors. Because methaqualone can be either excitatory or inhibitory depending on the subunit composition of the GABA-A receptor, it can be characterized as a mixed GABA-A receptor modulator. The methaqualone binding site is distinct from the benzodiazepine, barbiturate, and neurosteroid binding sites on the GABA-A receptor complex, and it may partially overlap with the etomidate binding site. Pharmacokinetics Methaqualone peaks in the bloodstream within several hours, with a half-life of 20–60 hours. History Methaqualone was first synthesized in India in 1951 by Indra Kishore Kacker and Syed Husain Zaheer, who were conducting research on finding new antimalarial medications. In 1962, methaqualone was patented in the United States by Wallace and Tiernan. By 1965, it was the most commonly prescribed sedative in Britain, where it has been sold legally under the names Malsed, Malsedin, and Renoval. In 1965, a methaqualone/antihistamine combination was sold as the sedative drug Mandrax in Europe, by Roussel Laboratories (now part of Sanofi S.A.). In 1972, it was the sixth-bestselling sedative in the US, where it was legal under the brand name Quaalude. Quaalude in the United States was originally manufactured in 1965 by the pharmaceutical firm William H. Rorer, Inc., based in Fort Washington, Pennsylvania. 
The drug name "Quaalude" is a portmanteau, combining the words "quiet interlude" and shared a stylistic reference to another drug marketed by the firm, Maalox. In 1978, Rorer sold the rights to manufacture Quaalude to the Lemmon Company of Sellersville, Pennsylvania. At that time, Rorer chairman John Eckman commented on Quaalude's bad reputation stemming from illegal manufacture and use of methaqualone, and illegal sale and use of legally prescribed Quaalude: "Quaalude accounted for less than 2% of our sales, but created 98% of our headaches." Both companies still regarded Quaalude as an excellent sleeping drug. Lemmon, well aware of Quaalude's public image problems, used advertisements in medical journals to urge physicians "not to permit the abuses of illegal users to deprive a legitimate patient of the drug". Lemmon also marketed a small quantity under another name, Mequin, so doctors could prescribe the drug without the negative connotations. The rights to Quaalude were held by the JB Roerig & Company division of Pfizer, before the drug was discontinued in the United States in 1985, mainly due to its psychological addictiveness, widespread abuse, and illegal recreational use. A 2024 Hungarian investigative documentary reported on large-scale production and sales of the drug by the Hungarian People's Republic to the United States in the 1970s and 1980s. It asserts that a Hungarian state-owned company utilized connections to Colombian drug cartels to facilitate the sale of extraordinary amounts to the United States. Society and culture Methaqualone became increasingly popular as a recreational drug and club drug in the late 1960s and 1970s, known variously as "ludes" or "disco biscuits" due to its widespread use during the popularity of disco in the 1970s, or "sopers" (also "soaps") in the United States and Canada, and "mandrakes" and "mandies" in the United Kingdom, Australia and New Zealand. The substance was sold both as a free base and as a salt (hydrochloride). Brand names It was sold under the brand name Quaalude (sometimes stylized "Quāālude" in the United States and Canada), and Mandrax in the UK, South Africa, and Australia. Regulation Methaqualone was initially placed in Schedule I as defined by the UN Convention on Psychotropic Substances, but was moved to Schedule II in 1979. In Canada, methaqualone is listed in Schedule III of the Controlled Drugs and Substances Act and requires a prescription, but it is no longer manufactured. Methaqualone is banned in India. In the United States it was withdrawn from the market in 1983 and made a Schedule I drug in 1984. Recreational Methaqualone became increasingly popular as a recreational drug in the late 1960s and 1970s, known variously as "ludes" or "sopers" and "soaps" (sopor is a Latin word for sleep) in the United States and "mandrakes" and "mandies" in the UK, Australia and New Zealand. The drug was more tightly regulated in Britain under the Misuse of Drugs Act 1971 and in the U.S. from 1973. It was withdrawn from many developed markets in the early 1980s. In the United States it was withdrawn in 1983 and made a Schedule I drug in 1984. It has a DEA ACSCN of 2565 and in 2022 the aggregate annual manufacturing quota for the United States was 60 grams. Mention of its possible use in some types of cancer and AIDS treatments has periodically appeared in the literature since the late 1980s. Research does not appear to have reached an advanced stage. 
The DEA has also added the methaqualone analogue mecloqualone (also a result of some incomplete clandestine syntheses) to Schedule I as ACSCN 2572, with a manufacturing quota of 30 g. Gene Haislip, the former head of the Chemical Control Division of the Drug Enforcement Administration (DEA), told the PBS documentary program Frontline, "We beat 'em." By working with governments and manufacturers around the world, the DEA was able to halt production and, Haislip said, "eliminated the problem". Methaqualone was manufactured in the United States under the name Quaalude by the pharmaceutical firms Rorer and Lemmon with the numbers 714 stamped on the tablet, so people often referred to Quaalude as 714's, "Lemmons", or "Lemmon 7's". Methaqualone was also manufactured in the US under the trade names Sopor and Parest. After the legal manufacture of the drug ended in the United States in 1982, underground laboratories in Mexico continued the illegal manufacture of methaqualone throughout the 1980s, continuing the use of the "714" stamp, until their popularity waned in the early 1990s. Drugs purported to be methaqualone are in a significant majority of cases found to be inert, or contain diphenhydramine or benzodiazepines. Illicit methaqualone is one of the most commonly used recreational drugs in South Africa. Manufactured clandestinely, often in India, it comes in tablet form, but is smoked with marijuana. This method of ingestion is known as "white pipe". It is popular elsewhere in Africa and in India. Chemical weapon – Project Coast Illegal efforts to weaponize methaqualone have occurred. During the 1980s, the apartheid regime in South Africa ordered the covert manufacture of a large amount of methaqualone at the front company Delta G Scientific Company, as part of a secret chemical weapons program known as Project Coast. Methaqualone was given the codename MosRefCat (Mossgas Refinery Catalyst). Details of this activity came to light during the 1998 hearings of the post-apartheid Truth and Reconciliation Commission. Sexual assault Actor Bill Cosby admitted in a 2015 civil deposition to giving methaqualone to women before allegedly sexually assaulting them. Film director Roman Polanski was convicted in 1977 of sexually assaulting a 13-year-old girl after giving her alcohol and methaqualone. Popular culture Quaaludes are mentioned in the 1983 film Scarface, when Al Pacino's character Tony Montana says, "Another quaalude... she'll love me again." Quaaludes are also referenced extensively in the 2013 film The Wolf of Wall Street. Parody glam rocker "Quay Lewd", one of the costumed performance personae used by Tubes singer Fee Waybill, was named after the drug. Many songs also refer to quaaludes, including the following: David Bowie's "Time" ("Time, in quaaludes and red wine") and "Rebel Rebel" ("You got your cue line/And a handful of 'ludes"); "Cosmic Doo Doo" by the American country music singer-songwriter Blaze Foley ("Got some quaaludes in their purse"); "That Smell" by Lynyrd Skynyrd ("Can't speak a word when you're full of 'ludes"); "Flakes" by Frank Zappa ("(Wanna buy some mandies, Bob?)"); "Straight Edge" by Minor Threat ("Laugh at the thought of eating ludes"); and "Kind of Girl" by French Montana ("That high got me feelin' like the Quaaludes from Wolf of Wall Street"). Season 18 of Law & Order: Special Victims Unit addresses Quaalude administration as a date rape drug in episode 9, "Decline and Fall", which aired January 18, 2017. 
In True Detective season 1, Rust Cohle's use of Quaaludes is briefly mentioned in several episodes. It is also used by Patrick Melrose in Edward St Aubyn's 1992 novel Bad News. Further reading References External links Erowid Vault – Methaqualone (Quaaludes) GABAA receptor positive allosteric modulators Quinazolinones Sedatives Hypnotics Amidines Withdrawn drugs South Africa and weapons of mass destruction 2-Tolyl compounds
Methaqualone
[ "Chemistry", "Biology" ]
2,570
[ "Hypnotics", "Behavior", "Amidines", "Functional groups", "Drug safety", "Sleep", "Bases (chemistry)", "Withdrawn drugs" ]
159,668
https://en.wikipedia.org/wiki/Western%20astrology
Western astrology is the system of astrology most popular in Western countries. It is historically based on Ptolemy's Tetrabiblos (2nd century CE), which in turn was a continuation of Hellenistic and ultimately Babylonian traditions. Western astrology is largely horoscopic, that is, it is a form of divination based on the construction of a horoscope for an exact moment, such as a person's birth as well as location (since time zones may or may not affect a person's birth chart), in which various cosmic bodies are said to have an influence. Astrology in western popular culture is often reduced to sun sign astrology, which considers only the individual's date of birth (i.e. the "position of the Sun" at that date). Astrology is a pseudoscience and has consistently failed experimental and theoretical verification. Astrology was widely considered a respectable academic and scientific field before the Enlightenment, but modern research has found no consistent empirical basis to it. Core principles A central principle of astrology is integration within the cosmos. The individual, Earth, and its environment are viewed as a single organism, all parts of which are correlated with each other. Cycles of change that are observed in the heavens are therefore reflective (not causative) of similar cycles of change observed on earth and within the individual. This relationship is expressed in the Hermetic maxim "as above, so below; as below, so above", which postulates symmetry between the individual as a microcosm and the celestial environment as a macrocosm. As opposed to Sidereal astrology, Western astrology evaluates a person's birth based on the alignments of the stars and planets from the perspective on earth instead of in space. At the heart of astrology is the metaphysical principle that mathematical relationships express qualities or 'tones' of energy which manifest in numbers, visual angles, shapes and sounds – all connected within a pattern of proportion. An early example is Ptolemy, who wrote influential texts on all these topics. Al-Kindi, in the 9th century, developed Ptolemy's ideas in De Aspectibus which explores many points of relevance to astrology and the use of planetary aspects. The zodiac The zodiac is the belt or band of constellations through which the Sun, Moon, and planets move on their journey across the sky. Astrologers noted these constellations and so attached a particular significance to them. Over time they developed the system of twelve signs of the zodiac, based on twelve of the constellations through which the sun passes throughout the year, those constellations that are "Enlightened by the mind". Most western astrologers use the tropical zodiac beginning with the sign of Aries at the Northern Hemisphere vernal equinox always on or around March 21 of each year. The Western Zodiac is drawn based on the Earth's relationship to fixed, designated positions in the sky, and the Earth's seasons. The Sidereal Zodiac is drawn based on the Earth's position in relation to the constellations, and follows their movements in the sky. Due to a phenomenon called precession of the equinoxes (where the Earth's axis slowly rotates like a spinning top in a 25,700-year cycle), there is a slow shift in the correspondence between Earth's seasons (and calendar) and the constellations of the zodiac. 
Thus, the tropical zodiac corresponds with the position of the earth in relation to fixed positions in the sky (Western Astrology), while the sidereal zodiac is drawn based on the position in relation to the constellations (sidereal zodiac). The twelve signs In modern Western astrology the signs of the zodiac are believed to represent twelve basic personality types or characteristic modes of expression. The twelve signs are divided into four elements fire, earth, air and water. Fire and air signs are considered masculine, while water and earth signs are considered feminine. The twelve signs are also divided into three qualities, also called modalities, Cardinal, fixed and mutable. Note: these are only approximations and the exact date on which the sign of the sun changes varies from year to year. Zodiac sign for an individual depends on the placement of planets and the ascendant in that sign. If a person has nothing placed in a particular sign, that sign will play no active role in their personality. On the other hand, a person with, for example, both the sun and moon in Cancer, will strongly display the characteristics of that sign in their make up. Sun-sign astrology Newspapers often print astrology columns which purport to provide guidance on what might occur in a day in relation to the sign of the zodiac that included the sun when the person was born. Astrologers refer to this as the "sun sign", but it is often commonly called the "star sign". These predictions are vague or general; so much so that even practicing astrologers consider them of little to no value on their own. Experiments have shown that when people are shown a newspaper horoscope for their own sign along with a newspaper horoscope for a different sign, they judge them to be equally accurate on average. Other tests have been performed on complete, personalized horoscopes cast by professional astrologers, and have shown no correlation between the horoscope results and the person it was cast for. The planets In modern Western astrology the planets represent basic drives or impulses in the human psyche. These planets differ from the definition of a planet in astronomy in that the Sun, Moon, and recently, Pluto are all considered to be planets for the purposes of astrology. Each planet is also said to be the ruler of one or two zodiac signs. The three outer planets (Uranus, Neptune, Pluto) have each been assigned rulership of a zodiac sign by astrologers. Traditionally rulership of the signs was, according to Ptolemy, based on seasonal derivations and astronomical measurement, whereby the luminaries being the brightest planets were given rulership of the brightest months of the year and Saturn the coldest furthest classical planet was given to the coldest months of the year, with the other planets ruling the remaining signs as per astronomical measurement. It is noteworthy that the modern rulerships do not follow the same logic. Classical planets The astrological 'planets' are the seven heavenly bodies known to the ancients. The Sun and Moon, also known as 'the lights', are included as they were thought to act like the astronomical planets. Astrologers call inner planets Mercury, Venus and Mars, the 'personal planets', as they represent the most immediate drives. The 'lights' symbolise respectively the existential and sensitive fundamentals of the individuality. 
The following table summarizes the rulership by the seven classically known planets of each of the twelve astrological signs, together with their effects on world events, people and the earth itself as understood in the Middle Ages. Modern modifications to the Ptolemaic system Additional planets These are the planets discovered in modern times, which have since been assigned meanings by Western astrologers. Sidereal and tropical astrology There are two camps of thought among western astrologers about the "starting point", 0 degrees Aries, in the zodiac. Sidereal astrology uses a fixed starting point in the background of stars, while tropical astrology, used by the majority of Western astrologers, chooses as a starting point the position of the Sun against the background of stars at the Northern hemisphere vernal equinox (i.e. when the Sun position against the heavens crosses over from the southern hemisphere to the northern hemisphere) each year. The consequence of the Tropical approach is that when we say the Sun or a planet is in a certain zodiac sign, observation of it in the sky will show that it does not lie within that constellation at all. As the Earth spins on its axis, it "wobbles" like a top, causing the vernal equinox to move gradually backwards against the star background, (a phenomenon known as the Precession of the equinoxes) at a rate of about 30 degrees (one Zodiacal sign length) every 2,160 years. Thus the two zodiacs would be aligned only once every 26,000 years. They were aligned about 2,000 years ago when the zodiac was originally established. This phenomenon gives us the conceptual basis for the Age of Aquarius, whose "dawning" coincides with the movement of the vernal equinox across the cusp from Pisces to Aquarius in the star background. The moon's nodes Also important in astrology are the moon's nodes. The nodes are where the moon's path crosses the ecliptic. The North, or Ascending Node marks the place where the moon crosses from South to North (or ascends), while the South, or Descending Node marks where the moon crosses from North to South (or descends). While Lunar nodes are not considered by Western astrologers to be as important a factor as each of the planets, they are thought to mark sensitive areas that are worth taking into account. – North or ascending Node. Also the ruler of Pathways and Choices. – South or descending Node. Also the ruler of Karma and the Past. Essential dignity In astrology, "essential dignity" is the strength of a planet or point's zodiac position, judged only by its position by sign and degree, what the pre-eminent 17th-century astrologer William Lilly called "the strength, fortitude or debility of the Planets [or] significators." In other words, essential dignity seeks to view the strengths of a planet or point as though it were isolated from other factors in the sky of the natal chart. Traditionally, there are five dignities: domicile and detriment, exaltation and fall, triplicity, terms, and face. However, the later two have diminished in usage. A planet's domicile is the zodiac sign over which it has rulership. The horoscope Western astrology is based mainly upon the construction of a horoscope, which is a map or chart of the heavens at a particular moment. The moment chosen is the beginning of the existence of the subject of the horoscope, as it is believed that the subject will carry with it the pattern of the heavens from that moment throughout its life. 
The most common form of horoscope is the natal chart based on the moment of a person's birth; though in theory a horoscope can be drawn up for the beginning of anything, from a business enterprise to the foundation of a nation state. Interpretation In Western horoscopic astrology the interpretation of a horoscope is governed by: The position of the planets in the astrological signs of the zodiac, The position of the planets in the houses of the horoscope, The position of the primary angles of the horoscope, namely the horizon line (called the ascendant/descendant axis), and the prime vertical line (called the zenith/midheaven and nadir/imum coeli axis), The angles formed by the planets relative to each other and the primary angles, called aspects The position of deduced astronomical entities, such as the Lunar nodes. Some astrologers also use the position of various mathematical points, such as the Arabic parts. Various techniques are used, with different degrees of complexity, to provide what astrologers claim are forecasts or predictive statements about the future, as well as to analyse past and current events. These include transits, progressions, and primary directions. Different branches of astrology, such as horary and electional astrology, have their own specific sets of techniques. The primary angles There are four primary angles in the horoscope (though the cusps of the houses are often included as important angles by some astrologers). Asc - The ascendant or rising sign is the eastern point where the ecliptic and horizon intersect. During the course of a day, because of the Earth's rotation, the entire circle of the ecliptic will pass through the ascendant and will be advanced by about 1Β°. This provides us with the term rising sign, which is the sign of the zodiac that was rising in the east at the exact time that the horoscope or natal chart is calculated. In creating a horoscope the ascendant is traditionally placed as the left-hand side point of the chart. In most house systems the ascendant lies on the cusp of the 1st house of the horoscope. The ascendant is generally considered the most important and personalized angle in the horoscope by the vast majority of astrologers. It signifies a person's awakening consciousness, in the same way that the Sun's appearance on the eastern horizon signifies the dawn of a new day. Due to the fact that the ascendant is specific to a particular time and place, it signifies the individual environment and conditioning that a person receives during their upbringing, and also the circumstances of their childhood. For this reason, the ascendant is also concerned with how a person has learned to present themself to the world, especially in public and in impersonal situations. The opposite point to the ascendant in the west is the descendant, which denotes how a person reacts in their relationships with others. It also show the kind of person we are likely to be attracted to, and our ability to form romantic attachments. In most house systems the descendant lies on the cusp of the 7th house of the horoscope. Mc - The midheaven or medium coeli is the point on the ecliptic that is furthest above the plane of the horizon. For events occurring where the planes of the ecliptic and the horizon coincide, the limiting position for these points is located 90Β° from the ascendant. For astrologers, the midheaven traditionally indicates a person's career, status, aim in life, aspirations, public reputation, and life goal. 
In quadrant house systems the midheaven lies on the cusp of the 10th house of the horoscope. The opposite point to the midheaven is known as the imum coeli. For astrologers the nadir or IC traditionally indicates the circumstances at the beginning and end of a person's life, their parents and the parental home, and their own domestic life. In quadrant house systems it lies on the cusp of the 4th house of the horoscope. The houses The horoscope is divided by astrologers into 12 portions called the houses. The houses of the horoscope are interpreted as being 12 different spheres of life or activity. There are various ways of calculating the houses in the horoscope or birth chart. However, there is little dispute about the meanings of the 12 houses. Many modern astrologers assume that the houses relate to their corresponding signs, i.e. that the first house has a natural affinity with the first sign, Aries, and so on. Aspects The aspects are the angles the planets make to each other in the horoscope, and also to the ascendant, midheaven, descendant and nadir. The aspects are measured by the angular distance along the ecliptic in degrees and minutes of celestial longitude between two points, as viewed from the earth. They indicate focal points in the horoscope where the energies involved are given extra emphasis. The more exact the angle, the more powerful the aspect, although an allowance of a few degrees on either side of the exact aspect, called an orb, is allowed for interpretation. The following are the aspects in order of importance: - Conjunction 0° (orb ±10°). The conjunction is a major point in the chart, giving strong emphasis to the planets involved. The planets involved act together in response to outside stimuli and act on each other. - Opposition 180° (orb ±10°). The opposition is indicative of tension, conflict and confrontation, due to the polarity between the two elements involved. Stress arises when one is used over the other, causing an imbalance; but the opposition can work well if the two parts of the aspect are made to complement each other in a synthesis. - Trine 120° (orb ±7.5°). The trine indicates harmony, and ease of expression, with the two elements reinforcing each other. The trine is a source of artistic and creative talent, but can be a 'line of least resistance' to a person of weak character. - Square 90° (orb ±7.5°). The square indicates frustration, inhibitions, disruption and inner conflict, but can become a source of energy and activation to a person determined to overcome limitations. - Sextile 60° (orb ±5°). The sextile is similar to the trine, but of less significance. It indicates ease of communication between the two elements involved, with compatibility and harmony between them. - Quincunx 150° (orb ±2.5°). The quincunx indicates difficulty and stress, due to incompatible elements being forced together. It can mean an area of self-neglect in a person's life (especially health), or obligations being forced on a person. The quincunx can vary from minor to quite major in impact. - Semisextile 30° (orb ±1.25°). Slight in effect. Indicates an area of life where a conscious effort to be positive will have to be made. - Semisquare 45° (orb ±2.5°). Indicates somewhat difficult circumstances. Similar in effect to semisextile. - Sesquiquadrate 135° (orb ±2.5°). Indicates somewhat stressful conditions. Similar to semisextile. Q - Quintile 72° (orb ±1.25°). Slight in effect. Indicates talent and vaguely fortunate circumstances. bQ - Biquintile 144° (orb ±1.25°).
Slight in effect. Indicates talent and vaguely fortunate circumstances. - Retrograde: A planet is retrograde when it appears to move backwards across the sky when seen from the earth. Although it is not an aspect, some astrologers believe that it should be included for consideration in the chart. Planets which are retrograde in the natal chart are considered by them to be potential weak points. Astrology and science The majority of professional astrologers rely on performing astrology-based personality tests and making relevant predictions about the remunerator's future. Those who continue to have faith in astrology have been characterised as doing so "in spite of the fact that there is no verified scientific basis for their beliefs, and indeed that there is strong evidence to the contrary". Astrology has not demonstrated its effectiveness in controlled studies and has no scientific validity, and as such, is regarded as pseudoscience. There is no proposed mechanism of action by which the positions and motions of stars and planets could affect people and events on Earth that does not contradict well understood, basic aspects of biology and physics. Where astrology has made falsifiable predictions, it has been falsified. The most famous test was headed by Shawn Carlson and included a committee of scientists and a committee of astrologers. It led to the conclusion that natal astrology performed no better than chance. See also Zodiac Astrological symbols Astrological signs List of asteroids in astrology Chinese zodiac Circle of stars Cusp (astrology) Elements of the zodiac Natal astrology Synoptical astrology Tarotscope Notes References Bibliography External links The Astrotest - An account of a test of the predictive power of astrology, with references to other experiments. Astrology History of astrology Astrology by tradition Paganism in Europe Superstitions Pseudoscience
Western astrology
[ "Astronomy" ]
4,082
[ "History of astrology", "Astrology", "History of astronomy" ]
159,669
https://en.wikipedia.org/wiki/Chinese%20astrology
Chinese astrology is based on traditional Chinese astronomy and the Chinese calendar. Chinese astrology flourished during the Han dynasty (2nd century BC to 2nd century AD). Chinese astrology has a close relation with Chinese philosophy (theory of the three harmonies: heaven, earth, and human), and uses the principles of yin and yang, wuxing (five phases), the ten Heavenly Stems, the twelve Earthly Branches, the lunisolar calendar (moon calendar and sun calendar), and the time calculation after year, month, day, and shichen (, double hour). These concepts are not readily found or familiar in Western astrology or culture. History and background Chinese astrology was elaborated during the Zhou dynasty (1046–256 BC) and flourished during the Han dynasty (2nd century BC to 2nd century AD). During the Han period, the familiar elements of traditional Chinese cultureβ€”the yin-yang philosophy, the theory and technology of the five elements (Wuxing), the concepts of heaven and earth, and Taoist, Buddhist and Confucian moralityβ€”were brought together to formalize the philosophical principles of Chinese medicine and divination, astrology and alchemy. The five classical planets are associated with the wuxing: Venusβ€”Metal (White Tiger) Jupiterβ€”Wood (Azure Dragon) Mercuryβ€”Water (Black Tortoise) Marsβ€”Fire (Vermilion Bird) (may be associated with the phoenix which was also an imperial symbol along with the Dragon) Saturnβ€”Earth (Yellow Dragon) According to Chinese astrology, a person's fate can be determined by the position of the major planets at the person's birth along with the positions of the Sun, Moon, comets, the person's time of birth, and zodiac sign. The system of the twelve-year cycle of animal signs was built from observations of the orbit of Jupiter (the Year Star; ). Following the orbit of Jupiter around the Sun, Chinese astronomers divided the celestial circle into 12 sections, and rounded it to 12 years (from 11.86). Jupiter is associated with the constellation Sheti (- BoΓΆtes) and is sometimes called Sheti. A system of computing one's predestined fate is based on birthday, birth season, and birth hour, known as zi wei dou shu (), or Purple Star Astrology, is still used regularly in modern-day Chinese astrology to divine one's fortune. The 28 Chinese constellations, Xiu (), are quite different from Western constellations. For example, the Big Bear (Ursa Major) is known as Dou (); the belt of Orion is known as Shen (), or the "Happiness, Fortune, Longevity" trio of demigods. The seven northern constellations are referred to as Xuan Wu (). Xuan Wu is also known as the spirit of the northern sky or the spirit of water in Taoist belief. In addition to astrological readings of the heavenly bodies, the stars in the sky form the basis of many fairy tales. For example, the Summer Triangle is the trio of the cowherd (Altair), the weaving maiden fairy (Vega), and the "tai bai" fairy (Deneb). The two forbidden lovers were separated by the silvery river (the Milky Way). Each year on the seventh day of the seventh month in the Chinese calendar, the birds form a bridge across the Milky Way. The cowherd carries their two sons (the two stars on each side of Altair) across the bridge to reunite with their fairy mother. The tai bai fairy acts as the chaperone of these two immortal lovers. Chinese zodiac Chinese astrology has a close relation with Chinese philosophy. The core values and concepts of Chinese philosophy originate from Taoism. 
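The article contains no code; the Python sketch below is only a rough illustration of how the twelve-year animal cycle and the ten Heavenly Stems described above combine into the sixty-year stem-branch cycle covered in the next section. It treats 1924 as a Jia-Zi (Wood Rat) year and indexes whole Gregorian years, so it ignores the fact that the traditional year begins at the lunar new year (or at lichun in some schools); dates in January or February may therefore fall under the previous sign. The function name and the pinyin spellings of the stems and branches are supplied here for illustration and are not taken from the article.

```python
ANIMALS = ["Rat", "Ox", "Tiger", "Rabbit", "Dragon", "Snake",
           "Horse", "Goat", "Monkey", "Rooster", "Dog", "Pig"]
STEMS = ["Jia", "Yi", "Bing", "Ding", "Wu", "Ji", "Geng", "Xin", "Ren", "Gui"]
BRANCHES = ["Zi", "Chou", "Yin", "Mao", "Chen", "Si",
            "Wu", "Wei", "Shen", "You", "Xu", "Hai"]
STEM_ELEMENT = ["Wood", "Wood", "Fire", "Fire", "Earth",
                "Earth", "Metal", "Metal", "Water", "Water"]

def year_sign(year):
    # Walk forward from 1924 (Jia-Zi, a Wood Rat year) through the
    # 10-stem and 12-branch cycles; their least common multiple is
    # what produces the 60-year sexagenary cycle.
    offset = (year - 1924) % 60
    stem, branch = offset % 10, offset % 12
    return f"{STEMS[stem]}-{BRANCHES[branch]}: {STEM_ELEMENT[stem]} {ANIMALS[branch]}"

print(year_sign(1924))  # Jia-Zi: Wood Rat
print(year_sign(2024))  # Jia-Chen: Wood Dragon
```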
Table of the sixty-year calendar The following table shows the 60-year cycle matched up to the Western calendar for the years 1924–2043 (see the sexagenary cycle article for the years 1924–1983). This applies only to the Chinese lunar calendar. The sexagenary cycle begins at lichun. Each of the Chinese lunar years is associated with a combination of the ten Heavenly Stems () and the twelve Earthly Branches (), which make up the 60 Stem-Branches () in a sexagenary cycle. Wuxing Although it is usually translated as 'element', the Chinese word xing literally means something like 'changing states of being', 'permutations' or 'metamorphoses of being'. In fact, Sinologists cannot agree on one single translation. The Chinese notion of 'element' is therefore quite different from the Western one. In Western, Indian Vedic, and Japanese Go dai traditions, the elements were seen as the basic building blocks of matter, static or stationary. The Chinese 'elements', by contrast, were seen as ever changing; a more literal translation of wuxing is simply 'the five changes', and in traditional Chinese medicine they are commonly referred to as phases. Things seen as associated with each xing are listed below. Wood () The East () Springtime () Azure Dragon () The Planet Jupiter () The Color Green () Liver () and Gall bladder () Fire () The South () Summer () Vermilion Bird/Vermilion Phoenix () The Planet Mars () The Color Red () Circulatory system, Heart () and Small intestine () Earth () Center () Change of seasons (the last month of the season) The Yellow Dragon () The Planet Saturn () The Color Yellow () Digestive system, Spleen () and Stomach () Metal () The West () Autumn () White Tiger () The Planet Venus () The Color White () Respiratory system, Lung () and Large intestine () Water () The North () Winter () Black Tortoise () The Planet Mercury () The Color Black/Blue () Skeleton (), Urinary bladder and Kidney () Wuxing generating cycle ( sheng) (Inter-promoting, begetting, engendering, mothering or enhancing cycle) Generating: Wood fuels Fire to burn; Fire creates Earth (ash); Earth produces minerals, Metal; Metal creates Water from condensation; Water nourishes Wood to grow. Wuxing regulating cycle ( kè) (Destructing, overcoming, inter-restraining or weakening cycle) The regulating cycle is important to create restraints in the whole system. For example, if Fire were allowed to burn out of control, it would be devastating and destructive, as we see in nature in the form of bush fires or internally as high fevers. Fire makes Metal flexible; Metal adds the minerals to Wood for there to be strong upward growth; Wood draws water from the Earth to create stability for building; Earth gives Water direction, like the banks of a river; Water controls Fire by cooling its heat. See also Chinese calendar correspondence table Chinese spiritual world concepts Chinese fortune telling Chinese zodiac Da Liu Ren Dunhuang Star Chart Feng shui Four Pillars of Destiny Qimen Dunjia Symbolic stars Synoptical astrology Tai Sui Tai Yi Shen Shu Traditional Chinese star names References Further reading Taoist divination Astrology Astrology by tradition Divination
Chinese astrology
[ "Astronomy" ]
1,491
[ "Astrology", "History of astronomy" ]
159,695
https://en.wikipedia.org/wiki/Polynomial-time%20reduction
In computational complexity theory, a polynomial-time reduction is a method for solving one problem using another. One shows that if a hypothetical subroutine solving the second problem exists, then the first problem can be solved by transforming or reducing it to inputs for the second problem and calling the subroutine one or more times. If both the time required to transform the first problem into the second and the number of times the subroutine is called are polynomial, then the first problem is polynomial-time reducible to the second. A polynomial-time reduction proves that the first problem is no more difficult than the second one, because whenever an efficient algorithm exists for the second problem, one exists for the first problem as well. By contraposition, if no efficient algorithm exists for the first problem, none exists for the second either. Polynomial-time reductions are frequently used in complexity theory for defining both complexity classes and complete problems for those classes. Types of reductions The three most common types of polynomial-time reduction, from the most to the least restrictive, are polynomial-time many-one reductions, truth-table reductions, and Turing reductions. The most frequently used of these are the many-one reductions, and in some cases the phrase "polynomial-time reduction" may be used to mean a polynomial-time many-one reduction. The most general reductions are the Turing reductions, the most restrictive are the many-one reductions, and truth-table reductions occupy the space in between. Many-one reductions A polynomial-time many-one reduction from a problem A to a problem B (both of which are usually required to be decision problems) is a polynomial-time algorithm for transforming inputs to problem A into inputs to problem B, such that the transformed problem has the same output as the original problem. An instance x of problem A can be solved by applying this transformation to produce an instance y of problem B, giving y as the input to an algorithm for problem B, and returning its output. Polynomial-time many-one reductions may also be known as polynomial transformations or Karp reductions, named after Richard Karp. A reduction of this type is denoted by A ≤_m^P B or A ≤_p B. Truth-table reductions A polynomial-time truth-table reduction from a problem A to a problem B (both decision problems) is a polynomial-time algorithm for transforming inputs to problem A into a fixed number of inputs to problem B, such that the output for the original problem can be expressed as a function of the outputs for B. The function that maps outputs for B into the output for A must be the same for all inputs, so that it can be expressed by a truth table. A reduction of this type may be denoted by the expression A ≤_tt^P B. Turing reductions A polynomial-time Turing reduction from a problem A to a problem B is an algorithm that solves problem A using a polynomial number of calls to a subroutine for problem B, and polynomial time outside of those subroutine calls. Polynomial-time Turing reductions are also known as Cook reductions, named after Stephen Cook. A reduction of this type may be denoted by the expression A ≤_T^P B. Many-one reductions can be regarded as restricted variants of Turing reductions where the number of calls made to the subroutine for problem B is exactly one and the value returned by the reduction is the same value as the one returned by the subroutine.
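The article itself gives no worked example; as a minimal Python sketch of a polynomial-time many-one (Karp) reduction, the code below uses the textbook reduction from the independent set problem to the vertex cover problem: a graph G with n vertices has an independent set of size k exactly when it has a vertex cover of size n - k, so an instance (G, k) of the first problem maps to the instance (G, n - k) of the second. The function names and the toy graph are illustrative, and the brute-force vertex cover checker is only a stand-in for the hypothetical subroutine for problem B; it is not itself polynomial time.

```python
from itertools import combinations

def independent_set_to_vertex_cover(vertices, edges, k):
    # Karp-style transformation: (G, k) for INDEPENDENT SET becomes
    # (G, n - k) for VERTEX COVER.  It runs in polynomial time, and it is
    # correct because S is an independent set of G exactly when the
    # remaining vertices form a vertex cover.
    return vertices, edges, len(vertices) - k

def has_vertex_cover(vertices, edges, size):
    # Stand-in subroutine for the target problem B.  Brute force is used
    # only so the example runs end to end; any correct vertex cover
    # solver could be substituted here.
    size = max(0, min(size, len(vertices)))
    for cover in combinations(vertices, size):
        chosen = set(cover)
        if all(u in chosen or v in chosen for u, v in edges):
            return True
    return False

def has_independent_set(vertices, edges, k):
    # A many-one reduction calls the subroutine for B exactly once and
    # returns its answer unchanged.
    return has_vertex_cover(*independent_set_to_vertex_cover(vertices, edges, k))

# 4-cycle a-b-c-d-a: the largest independent set ({a, c} or {b, d}) has size 2.
v = ["a", "b", "c", "d"]
e = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(has_independent_set(v, e, 2))  # True
print(has_independent_set(v, e, 3))  # False
```

Because the transformation is applied once and the subroutine's answer is passed through unchanged, the sketch also illustrates why many-one reductions are the most restrictive of the three types described above.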
Completeness A complete problem for a given complexity class C and reduction ≤ is a problem P that belongs to C, such that every problem A in C has a reduction A ≤ P. For instance, a problem is NP-complete if it belongs to NP and all problems in NP have polynomial-time many-one reductions to it. A problem that belongs to NP can be proven to be NP-complete by finding a single polynomial-time many-one reduction to it from a known NP-complete problem. Polynomial-time many-one reductions have been used to define complete problems for other complexity classes, including the PSPACE-complete languages and EXPTIME-complete languages. Every decision problem in P (the class of polynomial-time decision problems) may be reduced to every other nontrivial decision problem (where nontrivial means that not every input has the same output), by a polynomial-time many-one reduction. To transform an instance of problem A to B, solve A in polynomial time, and then use the solution to choose one of two instances of problem B with different answers. Therefore, for complexity classes within P such as L, NL, NC, and P itself, polynomial-time reductions cannot be used to define complete languages: if they were used in this way, every nontrivial problem in P would be complete. Instead, weaker reductions such as log-space reductions or NC reductions are used for defining classes of complete problems for these classes, such as the P-complete problems. Defining complexity classes The definitions of the complexity classes NP, PSPACE, and EXPTIME do not involve reductions: reductions come into their study only in the definition of complete languages for these classes. However, in some cases a complexity class may be defined by reductions. If C is any decision problem, then one can define a complexity class consisting of the languages A for which A ≤_m^P C. In this case, C will automatically be complete for that class, but the class may have other complete problems as well. An example of this is the complexity class ∃ℝ defined from the existential theory of the reals, a computational problem that is known to be NP-hard and in PSPACE, but is not known to be complete for NP, PSPACE, or any language in the polynomial hierarchy. ∃ℝ is the set of problems having a polynomial-time many-one reduction to the existential theory of the reals; it has several other complete problems such as determining the rectilinear crossing number of an undirected graph. Each problem in ∃ℝ inherits the property of belonging to PSPACE, and each ∃ℝ-complete problem is NP-hard. Similarly, the complexity class GI consists of the problems that can be reduced to the graph isomorphism problem. Since graph isomorphism is known to belong both to NP and co-AM, the same is true for every problem in this class. A problem is GI-complete if it is complete for this class; the graph isomorphism problem itself is GI-complete, as are several other related problems. See also Karp's 21 NP-complete problems External links MIT OpenCourseWare: 16. Complexity: P, NP, NP-completeness, Reductions References Reduction (complexity)
Polynomial-time reduction
[ "Mathematics" ]
1,340
[ "Reduction (complexity)", "Functions and mappings", "Mathematical relations", "Mathematical objects" ]
159,730
https://en.wikipedia.org/wiki/Mathematical%20practice
Mathematical practice comprises the working practices of professional mathematicians: selecting theorems to prove, using informal notations to persuade themselves and others that various steps in the final proof are convincing, and seeking peer review and publication, as opposed to the end result of proven and published theorems. Philip Kitcher has proposed a more formal definition of a mathematical practice, as a quintuple. His intention was primarily to document mathematical practice through its historical changes. Historical tradition The evolution of mathematical practice was slow, and some contributors to modern mathematics did not follow even the practice of their time. For example, Pierre de Fermat was infamous for withholding his proofs, but nonetheless had a vast reputation for correct assertions of results. One motivation to study mathematical practice is that, despite much work in the 20th century, some still feel that the foundations of mathematics remain unclear and ambiguous. One proposed remedy is to shift focus to some degree onto 'what is meant by a proof', and other such questions of method. If mathematics has been informally used throughout history, in numerous cultures and continents, then it could be argued that "mathematical practice" is the practice, or use, of mathematics in everyday life. One definition of mathematical practice, as described above, is the "working practices of professional mathematicians". However, another definition, more in keeping with the predominant usage of mathematics, is that mathematical practice is the everyday practice, or use, of math. Whether one is estimating the total cost of their groceries, calculating miles per gallon, or figuring out how many minutes on the treadmill that chocolate Γ©clair will require, math in everyday life relies on practicality (i.e., does it answer the question?) rather than formal proof. Teaching practice Mathematical teaching usually requires the use of several important teaching pedagogies or components. Most GCSE, A-Level and undergraduate mathematics require the following components: Textbooks or lecture notes which display the mathematical material to be covered/taught within the context of the teaching of mathematics. This requires that the mathematical content being taught at the (say) undergraduate level is of a well documented and widely accepted nature that has been unanimously verified as being correct and meaningful within a mathematical context. Workbooks. Usually, in order to ensure that students have an opportunity to learn and test the material that they have learnt, workbooks or question papers enable mathematical understanding to be tested. It is not unknown for exam papers to draw upon questions from such test papers, or to require prerequisite knowledge of such test papers for mathematical progression. Exam papers and standardised (and preferably apolitical) testing methods. Often, within countries such as the US, the UK (and, in all likelihood, China) there are standardised qualifications, examinations and workbooks that form the concrete teaching materials needed for secondary-school and pre-university courses (for example, within the UK, all students are required to sit or take Scottish Highers/Advanced Highers, A-levels or their equivalent in order to ensure that a certain minimal level of mathematical competence in a wide variety of topics has been obtained). 
Note, however, that at the undergraduate, post-graduate and doctoral levels within these countries, there need not be any standardised process via which mathematicians of differing ability levels can be tested or examined. Other common test formats within the UK and beyond include the BMO (the British Mathematical Olympiad, a competition paper requiring full written solutions that is used to help select the candidates who will represent the UK at the International Mathematical Olympiad). See also Common Core State Standards Initiative: Mathematical practice Foundations of mathematics Informal mathematics Philosophy of mathematics Notes Further reading Philosophy of mathematics
Mathematical practice
[ "Mathematics" ]
744
[ "nan" ]
159,731
https://en.wikipedia.org/wiki/Quasi-empiricism%20in%20mathematics
Quasi-empiricism in mathematics is the attempt in the philosophy of mathematics to direct philosophers' attention to mathematical practice, in particular, relations with physics, social sciences, and computational mathematics, rather than solely to issues in the foundations of mathematics. Of concern to this discussion are several topics: the relationship of empiricism (see Penelope Maddy) with mathematics, issues related to realism, the importance of culture, necessity of application, etc. Primary arguments A primary argument with respect to quasi-empiricism is that whilst mathematics and physics are frequently considered to be closely linked fields of study, this may reflect human cognitive bias. It is claimed that, despite rigorous application of appropriate empirical methods or mathematical practice in either field, this would nonetheless be insufficient to disprove alternate approaches. Eugene Wigner (1960) noted that this culture need not be restricted to mathematics, physics, or even humans. He stated further that "The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. We should be grateful for it and hope that it will remain valid in future research and that it will extend, for better or for worse, to our pleasure, even though perhaps also to our bafflement, to wide branches of learning." Wigner used several examples to demonstrate why 'bafflement' is an appropriate description, such as showing how mathematics adds to situational knowledge in ways that are either not possible otherwise or are so outside normal thought to be of little notice. The predictive ability, in the sense of describing potential phenomena prior to observation of such, which can be supported by a mathematical system would be another example. Following up on Wigner, Richard Hamming (1980) wrote about applications of mathematics as a central theme to this topic and suggested that successful use can sometimes trump proof, in the following sense: where a theorem has evident veracity through applicability, later evidence that shows the theorem's proof to be problematic would result more in trying to firm up the theorem rather than in trying to redo the applications or to deny results obtained to date. Hamming had four explanations for the 'effectiveness' that we see with mathematics and definitely saw this topic as worthy of discussion and study. "We see what we look for." Why 'quasi' is apropos in reference to this discussion. "We select the kind of mathematics to use." Our use and modification of mathematics are essentially situational and goal-driven. "Science in fact answers comparatively few problems." What still needs to be looked at is a larger set. "The evolution of man provided the model." There may be limits attributable to the human element. For Willard Van Orman Quine (1960), existence is only existence in a structure. This position is relevant to quasi-empiricism because Quine believes that the same evidence that supports theorizing about the structure of the world is the same as the evidence supporting theorizing about mathematical structures. Hilary Putnam (1975) stated that mathematics had accepted informal proofs and proof by authority, and had made and corrected errors all through its history. Also, he stated that Euclid's system of proving geometry theorems was unique to the classical Greeks and did not evolve similarly in other mathematical cultures in China, India, and Arabia. 
This and other evidence led many mathematicians to reject the label of Platonists, along with Plato's ontology which, along with the methods and epistemology of Aristotle, had served as a foundation ontology for the Western world since its beginnings. A truly international culture of mathematics would, Putnam and others (1983) argued, necessarily be at least 'quasi'-empirical (embracing 'the scientific method' for consensus if not experiment). Imre Lakatos (1976), who did his original work on this topic for his dissertation (1961, Cambridge), argued for 'research programs' as a means to support a basis for mathematics and considered thought experiments as appropriate to mathematical discovery. Lakatos may have been the first to use 'quasi-empiricism' in the context of this subject. Operational aspects Several recent works pertain to this topic. Gregory Chaitin's and Stephen Wolfram's work, though their positions may be considered controversial, apply. Chaitin (1997/2003) suggests an underlying randomness to mathematics and Wolfram (A New Kind of Science, 2002) argues that undecidability may have practical relevance, that is, be more than an abstraction. Another relevant addition would be the discussions concerning interactive computation, especially those related to the meaning and use of Turing's model (Church-Turing thesis, Turing machines, etc.). These works are heavily computational and raise another set of issues. To quote Chaitin (1997/2003): The collection of "Undecidables" in Wolfram (A New Kind of Science, 2002) is another example. Wegner's 2006 paper "Principles of Problem Solving" suggests that interactive computation can help mathematics form a more appropriate framework (empirical) than can be founded with rationalism alone. Related to this argument is that the function (even recursively related ad infinitum) is too simple a construct to handle the reality of entities that resolve (via computation or some type of analog) n-dimensional (general sense of the word) systems. See also Entscheidungsproblem Charles Sanders Peirce Karl Popper Postmodern mathematics Thomas Tymoczko Unreasonable ineffectiveness of mathematics References Philosophy of mathematics Theoretical computer science Empiricism
Quasi-empiricism in mathematics
[ "Mathematics" ]
1,156
[ "Theoretical computer science", "Applied mathematics", "nan" ]
159,735
https://en.wikipedia.org/wiki/Quasi-empirical%20method
Quasi-empirical methods are scientific methods used to gain knowledge in situations where empirical evidence cannot be gathered through experimentation, or experience cannot falsify the ideas involved. Quasi-empirical methods aim to be as closely analogous to empirical methods as possible. Empirical research relies on, and its empirical methods involve experimentation and disclosure of apparatus for reproducibility, by which scientific findings are validated by other scientists. Empirical methods are studied extensively in the philosophy of science, but they cannot be used directly in fields whose hypotheses cannot be falsified by real experiment (for example, mathematics, philosophy, theology, and ideology). Because of such limits, the scientific method must rely not only on empirical methods but sometimes also on quasi-empirical ones. The prefix quasi- came to denote methods that are "almost" or "socially approximate" an ideal of truly empirical methods. Quasi-empirical method usually refers to a means of choosing problems to focus on (or ignore), selecting prior work on which to build an argument or proof, notations for informal claims, peer review and acceptance, and incentives to discover, ignore, or correct errors. To disprove a theory logically, it is unnecessary to find all counterexamples to a theory; all that is required is one counterexample. The converse does not prove a theory; Bayesian inference simply makes a theory more likely, by weight of evidence. Since it is not possible to find all counter-examples to a theory, it is also possible to argue that no science is strictly empirical, but this is not the usual meaning of "quasi-empirical". Examples Albert Einstein's discovery of the general relativity theory relied upon thought experiments and mathematics. Empirical methods only became relevant when confirmation was sought. Furthermore, some empirical confirmation was found only some time after the general acceptance of the theory. Thought experiments are almost standard procedure in philosophy, where a conjecture is tested out in the imagination for possible effects on experience; when these are thought to be implausible, unlikely to occur, or not actually occurring, then the conjecture may be either rejected or amended. Logical positivism was a perhaps extreme version of this practice, though this claim is open to debate. Quasi-empiricism in mathematics is an important topic in post-20th-century philosophy of mathematics, especially as reflected in the actual mathematical practice of working mathematicians. References Scientific method Philosophy of science Philosophy of mathematics Thought experiments
Quasi-empirical method
[ "Mathematics" ]
495
[ "nan" ]
159,750
https://en.wikipedia.org/wiki/Pectin
Pectin ( : "congealed" and "curdled") is a heteropolysaccharide, a structural polymer contained in the primary lamella, in the middle lamella, and in the cell walls of terrestrial plants. The principal chemical component of pectin is galacturonic acid (a sugar acid derived from galactose) which was isolated and described by Henri Braconnot in 1825. Commercially produced pectin is a white-to-light-brown powder, produced from citrus fruits for use as an edible gelling agent, especially in jams and jellies, dessert fillings, medications, and sweets; as a food stabiliser in fruit juices and milk drinks, and as a source of dietary fiber. Biology Pectin is composed of complex polysaccharides that are present in the primary cell walls of a plant, and are abundant in the green parts of terrestrial plants. Pectin is the principal component of the middle lamella, where it binds cells. Pectin is deposited by exocytosis into the cell wall via vesicles produced in the Golgi apparatus. The amount, structure and chemical composition of pectin is different among plants, within a plant over time, and in various parts of a plant. Pectin is an important cell wall polysaccharide that allows primary cell wall extension and plant growth. During fruit ripening, pectin is broken down by the enzymes pectinase and pectinesterase, in which process the fruit becomes softer as the middle lamellae break down and cells become separated from each other. A similar process of cell separation caused by the breakdown of pectin occurs in the abscission zone of the petioles of deciduous plants at leaf fall. Pectin is a natural part of the human diet, but does not contribute significantly to nutrition. The daily intake of pectin from fruits and vegetables can be estimated to be around 5Β g if approximately 500Β g of fruits and vegetables are consumed per day. In human digestion, pectin binds to cholesterol in the gastrointestinal tract and slows glucose absorption by trapping carbohydrates. Pectin is thus a soluble dietary fiber. In non-obese diabetic (NOD) mice pectin has been shown to increase the incidence of autoimmune type 1 diabetes. A study found that after consumption of fruit the concentration of methanol in the human body increased by as much as an order of magnitude due to the degradation of natural pectin (which is esterified with methanol) in the colon. Pectin has been observed to have some function in repairing the DNA of some types of plant seeds, usually desert plants. Pectinaceous surface pellicles, which are rich in pectin, create a mucilage layer that holds in dew that helps the cell repair its DNA. Consumption of pectin has been shown to slightly (3–7%) reduce blood LDL cholesterol levels. The effect depends upon the source of pectin; apple and citrus pectins were more effective than orange pulp fibre pectin. The mechanism appears to be an increase of viscosity in the intestinal tract, leading to a reduced absorption of cholesterol from bile or food. In the large intestine and colon, microorganisms degrade pectin and liberate short-chain fatty acids that have a positive prebiotic effect. Chemistry Pectins, also known as pectic polysaccharides, are rich in galacturonic acid. Several distinct polysaccharides have been identified and characterised within the pectic group. Homogalacturonans are linear chains of Ξ±-(1–4)-linked D-galacturonic acid. 
Substituted galacturonans are characterised by the presence of saccharide appendant residues (such as D-xylose or D-apiose in the respective cases of xylogalacturonan and apiogalacturonan) branching from a backbone of D-galacturonic acid residues. Rhamnogalacturonan I pectins (RG-I) contain a backbone of the repeating disaccharide: 4)-α-D-galacturonic acid-(1,2)-α-L-rhamnose-(1. From many of the rhamnose residues, sidechains of various neutral sugars branch off. The neutral sugars are mainly D-galactose, L-arabinose and D-xylose, with the types and proportions of neutral sugars varying with the origin of pectin. Another structural type of pectin is rhamnogalacturonan II (RG-II), which is a less frequent, complex, highly branched polysaccharide. Rhamnogalacturonan II is classified by some authors within the group of substituted galacturonans since the rhamnogalacturonan II backbone is made exclusively of D-galacturonic acid units. The molecular weight of isolated pectin varies greatly with the source and the method of isolation. Values have been reported as low as 28 kDa for apple pomace up to 753 kDa for sweet potato peels. In nature, around 80 percent of the carboxyl groups of galacturonic acid are esterified with methanol. This proportion is decreased to a varying degree during pectin extraction. Pectins are classified as high- versus low-methoxy pectins (HM-pectins versus LM-pectins for short), with more or less than half of all the galacturonic acid esterified. The ratio of esterified to non-esterified galacturonic acid determines the behaviour of pectin in food applications – HM-pectins can form a gel under acidic conditions in the presence of high sugar concentrations, while LM-pectins form gels by interaction with divalent cations, particularly Ca2+, according to the idealized 'egg box' model, in which ionic bridges are formed between calcium ions and the ionised carboxyl groups of the galacturonic acid. In high-methoxy pectins at a soluble solids content above 60% and a pH value between 2.8 and 3.6, hydrogen bonds and hydrophobic interactions bind the individual pectin chains together. These bonds form as water is bound by sugar, forcing pectin strands to stick together. These form a three-dimensional molecular net that creates the macromolecular gel. The gelling mechanism is called a low-water-activity gel or sugar-acid-pectin gel. While low-methoxy pectins need calcium to form a gel, they can do so at lower soluble solids and higher pH than high-methoxy pectins. Normally low-methoxy pectins form gels over a pH range from 2.6 to 7.0 and with a soluble solids content between 10 and 70%. The non-esterified galacturonic acid units can be either free acids (carboxyl groups) or salts with sodium, potassium, or calcium. The salts of partially esterified pectins are called pectinates; if the degree of esterification is below 5 percent, the salts are called pectates and the insoluble acid form pectic acid. Some plants, such as sugar beet, potatoes and pears, contain pectins with acetylated galacturonic acid in addition to methyl esters. Acetylation prevents gel formation but increases the stabilising and emulsifying effects of pectin. Amidated pectin is a modified form of pectin. Here, some of the galacturonic acid is converted with ammonia to carboxylic acid amide. These pectins are more tolerant of the varying calcium concentrations that occur in use. Thiolated pectin exhibits substantially improved gelling properties since this thiomer is able to crosslink via disulfide bond formation.
These high gelling properties are advantageous for various pharmaceutical applications and applications in food industry. To prepare a pectin-gel, the ingredients are heated, dissolving the pectin. Upon cooling below gelling temperature, a gel starts to form. If gel formation is too strong, syneresis or a granular texture are the result, while weak gelling leads to excessively soft gels. Amidated pectins behave like low-ester pectins but need less calcium and are more tolerant of excess calcium. Also, gels from amidated pectin are thermoreversible; they can be heated and after cooling solidify again, whereas conventional pectin-gels will afterwards remain liquid. High-ester pectins set at higher temperatures than low-ester pectins. However, gelling reactions with calcium increase as the degree of esterification falls. Similarly, lower pH-values or higher soluble solids (normally sugars) increase gelling speeds. Suitable pectins can therefore be selected for jams and jellies, or for higher-sugar confectionery jellies. Sources and production Pears, apples, guavas, quince, plums, gooseberries, and oranges and other citrus fruits contain large amounts of pectin, while soft fruits, like cherries, grapes, and strawberries, contain small amounts of pectin. Typical levels of pectin in fresh fruits and vegetables are: Apples, 1–1.5% Apricots, 1% Cherries, 0.4% Oranges, 0.5–3.5% Carrots 1.4% Citrus peels, 30% Rose hips, 15% The main raw materials for pectin production are dried citrus peels or apple pomace, both by-products of juice production. Pomace from sugar beets is also used to a small extent. From these materials, pectin is extracted by adding hot dilute acid at pH values from 1.5 to 3.5. During several hours of extraction, the protopectin loses some of its branching and chain length and goes into solution. After filtering, the extract is concentrated in a vacuum and the pectin is then precipitated by adding ethanol or isopropanol. An old technique of precipitating pectin with aluminium salts is no longer used (apart from alcohols and polyvalent cations, pectin also precipitates with proteins and detergents). Alcohol-precipitated pectin is then separated, washed, and dried. Treating the initial pectin with dilute acid leads to low-esterified pectins. When this process includes ammonium hydroxide (NH3(aq)), amidated pectins are obtained. After drying and milling, pectin is usually standardised with sugar, and sometimes calcium salts or organic acids, to optimise performance in a particular application. Uses The main use for pectin is as a gelling agent, thickening agent and stabiliser in food. In some countries, pectin is also available as a solution or an extract, or as a blended powder, for home jam making. The classical application is giving the jelly-like consistency to jams or marmalades, which would otherwise be sweet juices. Pectin also reduces syneresis in jams and marmalades and increases the gel strength of low-calorie jams. For household use, pectin is an ingredient in gelling sugar (also known as "jam sugar") where it is diluted to the right concentration with sugar and some citric acid to adjust pH. For various food applications, different kinds of pectins can be distinguished by their properties, such as acidity, degree of esterification, relative number of methoxyl groups in the molecules, etc. 
For instance, the term "high methoxyl" refers to pectins that have a large proportion of the carboxyl groups in the pectin molecule that are esterified with methanol, compared to low methoxyl pectins: high methoxyl pectins are defined as those with a degree of esterification equal to or above 50, are typically used in traditional jam and jelly making; such pectins require high sugar concentrations and acidic conditions to form gels, and provide a smooth texture and suitable to be used in bakery fillings and confectionery applications; low methoxyl pectins have a degree of esterification of less than 50, can be either amidated or non-amidated: the percentage level of substitution of the amide group, defined as the degree of amidation, defines the efficacy of a pectin; low methoxyl pectins can provide a range of textures and rheological properties, depending on the calcium concentration and the calcium reactivity of the pectin chosenβ€”amidated low methoxyl pectins are generally thermoreversible, meaning they can form gels that can melt and reform, whereas non-amidated low methoxyl pectins can form thermostable gels that withstand high temperatures; these properties make low methoxyl pectins suitable for low sugar and sugar-free applications, dairy products, and stabilizing acidic protein drinks. For conventional jams and marmalades that contain above 60% sugar and soluble fruit solids, high-ester (high methoxyl) pectins are used. With low-ester (low methoxyl) pectins and amidated pectins, less sugar is needed, so that diet products can be made. Water extract of aiyu seeds is traditionally used in Taiwan to make aiyu jelly, where the extract gels without heating due to low-ester pectins from the seeds and the bivalent cations from the water. Pectin is used in confectionery jellies to give a good gel structure, a clean bite and to confer a good flavour release. Pectin can also be used to stabilise acidic protein drinks, such as drinking yogurt, to improve the mouth-feel and the pulp stability in juice based drinks and as a fat substitute in baked goods. Typical levels of pectin used as a food additive are between 0.5 and 1.0% – this is about the same amount of pectin as in fresh fruit. In medicine, pectin increases viscosity and volume of stool so that it is used against constipation and diarrhea. Until 2002, it was one of the main ingredients used in Kaopectate – a medication to combat diarrhea – along with kaolinite. It has been used in gentle heavy metal removal from biological systems. Pectin is also used in throat lozenges as a demulcent. In cosmetic products, pectin acts as a stabiliser. Pectin is also used in wound healing preparations and speciality medical adhesives, such as colostomy devices. Sriamornsak revealed that pectin could be used in various oral drug delivery platforms, e.g., controlled release systems, gastro-retentive systems, colon-specific delivery systems and mucoadhesive delivery systems, according to its intoxicity and low cost. It was found that pectin from different sources provides different gelling abilities, due to variations in molecular size and chemical composition. Like other natural polymers, a major problem with pectin is inconsistency in reproducibility between samples, which may result in poor reproducibility in drug delivery characteristics. In ruminant nutrition, depending on the extent of lignification of the cell wall, pectin is up to 90% digestible by bacterial enzymes. 
Ruminant nutritionists recommend that the digestibility and energy concentration in forages be improved by increasing pectin concentration in the forage. In cigars, pectin is considered an excellent substitute for vegetable glue and many cigar smokers and collectors use pectin for repairing damaged tobacco leaves on their cigars. Yablokov et al., writing in Chernobyl: Consequences of the Catastrophe for People and the Environment, quote research conducted by the Ukrainian Center of Radiation Medicine and the Belarusian Institute of Radiation Medicine and Endocrinology, concluded, regarding pectin's radioprotective effects, that "adding pectin preparations to the food of inhabitants of the Chernobyl-contaminated regions promotes an effective excretion of incorporated radionuclides" such as cesium-137. The authors reported on the positive results of using pectin food additive preparations in a number of clinical studies conducted on children in severely polluted areas, with up to 50% improvement over control groups. During the Second World War, Allied pilots were provided with maps printed on silk, for navigation in escape and evasion efforts. The printing process at first proved nearly impossible because the several layers of ink immediately ran, blurring outlines and rendering place names illegible until the inventor of the maps, Clayton Hutton, mixed a little pectin with the ink and at once the pectin coagulated the ink and prevented it from running, allowing small topographic features to be clearly visible. Legal status At the Joint FAO/WHO Expert Committee Report on Food Additives and in the European Union, no numerical acceptable daily intake (ADI) has been set, as pectin is considered safe. The European Union (EU) has not set a daily intake limit for two types of pectin, known as E440(i) and Amidated Pectin E440(ii). The EU has established purity standards for these additives in the EU Commission Regulation (EU)/231/2012. Pectin can be used as needed in most food categories, a concept referred to as "quantum satis". The European Food Safety Authority (EFSA) conducted a re-evaluation of Pectin E440(i) and Amidated Pectin E440(ii) in 2017. The EFSA concluded that the use of these food additives poses no safety concern for the general population. Furthermore, the agency stated that it is not necessary to establish a numerical value for the Acceptable Daily Intake (ADI). In the United States, pectin is generally recognised as safe for human consumption. In the International Numbering System (INS), pectin has the number 440. In Europe, pectins are differentiated into the E numbers E440(i) for non-amidated pectins and E440(ii) for amidated pectins. There are specifications in all national and international legislation defining its quality and regulating its use. History Pectin was first isolated and described in 1825 by Henri Braconnot, though the action of pectin to make jams and marmalades was known long before. To obtain well-set jams from fruits that had little or only poor quality pectin, pectin-rich fruits or their extracts were mixed into the recipe. During the Industrial Revolution, the makers of fruit preserves turned to producers of apple juice to obtain dried apple pomace that was cooked to extract pectin. Later, in the 1920s and 1930s, factories were built that commercially extracted pectin from dried apple pomace, and later citrus peel, in regions that produced apple juice in both the US and Europe. 
Pectin was first sold as a liquid extract, but is now most often used as dried powder, which is easier than a liquid to store and handle. See also Fruit snacks References External links Codex General Standard for Food Additives (GSFA) Online Database; A list of permitted uses of pectin, further link to the JECFA (...) specification of pectin. European parliament and council directive No 95/2/EC of 20 February 1995 on food additives other than colours and sweeteners; EU-Directive that lists the foods, pectin may be used in. Note: The link points to a "consleg"-version of the directive, that may not include the very latest changes. The Directive will be replaced by a new Regulation for food additives in the next few years. Certo Health: Information on reported health benefits of apple pectin, (UK). Polysaccharides Food additives Food science Edible thickening agents Demulcents Food stabilizers E-number additives
Pectin
[ "Chemistry" ]
4,326
[ "Carbohydrates", "Polysaccharides" ]
159,851
https://en.wikipedia.org/wiki/Sclerophyll
Sclerophyll is a type of vegetation that is adapted to long periods of dryness and heat. The plants feature hard leaves, short internodes (the distance between leaves along the stem) and leaf orientation which is parallel or oblique to direct sunlight. The word comes from the Greek sklΔ“ros (hard) and phyllon (leaf). The term was coined by A.F.W. Schimper in 1898 (translated in 1903), originally as a synonym of xeromorph, but the two words were later differentiated. Sclerophyllous plants occur in many parts of the world, but are most typical of areas with low rainfall or seasonal droughts, such as Australia, Africa, and western North and South America. They are prominent throughout Australia, parts of Argentina, the Cerrado biogeographic region of Bolivia, Paraguay and Brazil, and in the Mediterranean biomes that cover the Mediterranean Basin, California, Chile, and the Cape Province of South Africa. In the Mediterranean basin, holm oak, cork oak and olives are typical hardwood trees. In addition, there are several species of pine under the trees in the vegetation zone. The shrub layer contains numerous herbs such as rosemary, thyme and lavender. In relation to the potential natural vegetation, around 2% of the Earth's land surface is covered by sclerophyll woodlands, and a total of 10% of all plant species on Earth live there. Description Sclerophyll woody plants are characterized by their relatively small, stiff, leathery and long-lasting leaves. The sclerophyll vegetation is the result of an adaptation of the flora to the summer dry period of a Mediterranean-type climate. Plant species with this type of adaptation tend to be evergreen with great longevity, slow growth and with no loss of leaves during the unfavorable season. As a result, the thickets that make up these ecosystems are of the persistent evergreen type, in addition to the predominance of plants, even herbaceous ones, with "hard" leaves, which are covered by a thick leathery layer called the cuticle, that prevents water loss during the dry season. The aerial and underground structures of these plants are modified to make up for water shortages that may affect their survival. The name sclerophyll derives from the highly developed sclerenchyma from the plant, which is responsible for the hardness or stiffness of the leaves. This structure of the leaves inhibits transpiration and thus prevents major water losses during the dry season. Most of the plant species in the sclerophyll zone are not only insensitive to summer drought, they have also used various strategies to adapt to frequent wildfires, heavy rainfall and nutrient deficiencies. Ecology The type of sclerophyllic trees in the Palearctic flora region include the holm oak (Quercus ilex), myrtle (Myrtus communis), strawberry tree (Arbutus unedo), wild olive (Olea europaea), laurel (Laurus nobilis), mock privet (Phillyrea latifolia), the Italian buckthorn (Rhamnus alaternus), etc. In central and southern California, the coastal hills are covered in sclerophyll vegetation known as chaparral. The flora of this ecoregion also includes tree species Scrub oak (Quercus dumosa), California buckeye (Aesculus californica), San Gabriel Mountain liveforever (Dudlea densiflora), Catalina mahogany (Cercocarpus traskiae), and the threatened jewelflower (Streptanthus albidus ssp. Peramoenus). In South Africa, in the Cape region, there are Mediterranean open forests known as fynbos. 
The abundance of endemics is so extraordinary (68% of the 8600 vascular plant species in the area) that the South African sclerophyll area, the cape flora, forms the smallest of the six flora kingdoms on earth. Plants include Elegia, Thamnochortus, and Willdenowia and proteas such as king protea (Protea cynaroides) and blushing bride (Serruria florida). In most of Australia, sclerophyll vegetation such as eucalyptus trees, melaleucas, banksias, callistemons and grevilleas dominate the mallee and woodland areas of its cities, including those lacking a Mediterranean climate, such as Sydney, Melbourne, Hobart and Brisbane. In Chile, south of the desert areas, there is evergreen bushland called matorral. Typical species include Litre (Lithraea venenosa), Quillay or Soapbark Tree (Quillaja saponaria), and bromeliads of genus Puya. Climate The sclerophyll regions are located in the outer subtropics bordering the temperate zone (also known as the warm-temperate zone). Accordingly, the annual average temperatures are relatively high at ; An average of over is reached for at least four months, eight to twelve months it is over and no month is below on average. Frost and snow occur only occasionally and the growing season lasts longer than 150 days and is in the winter half-year. The lower limit of the moderate annual precipitation is (semi-arid climate) and the upper limit . Generally, the summers are dry and hot with a dry season of a maximum of seven months, but at least two to three months. The winters are rainy and cool. However, not all regions with sclerophyll vegetation feature the classic Mediterranean climate; parts of eastern Italy, eastern Australia and eastern South Africa, which feature sclerophyll woodlands, tend to have uniform rainfall or even a more summer-dominant rainfall, whereby falling under the humid subtropical climate zone (Cfa/Cwa). Furthermore, other areas with sclerophyll flora would grade to the oceanic climate (Cfb); particularly the eastern parts of the Eastern Cape province in South Africa, and Tasmania, Victoria and southern New South Wales in Australia. Soils Sclerophyll plants are also found in areas with nutrient-poor and acidic soils, and soils with heavy concentrations of aluminum and other metals. Sclerophyll leaves transpire less and have a lower uptake than malacophyllous or laurophyllous leaves. These lower transpiration rates may reduce the uptake of toxic ions and better provide for C-carboxylation under nutrient-poor conditions, particularly low availability of mineral nitrogen and phosphate. Sclerophyllous plants are found in tropical heath forests, which grown on nutrient-poor sandy soils in humid regions in the Rio Orinoco and the Rio Negro basins of northern South America on quartz sand, in the kerangas forests of Borneo and on the Malay Peninsula, in coastal sandy areas along the Gulf of Guinea in Gabon, Cameroon, and CΓ΄te d'Ivoire, and in eastern Australia. Since water drains rapidly through these soils, sclerophylly also protects plants against drought stress during dry periods. Sclerophylly's advantages in nutrient-poor conditions may be another factor in the prevalence of sclerophyllous plants in nutrient-poor areas in drier-climate regions, like much of Australia and the Cerrado of Brazil. Distribution The zone of the sclerophyll vegetation lies in the border area between the subtropics and the temperate zone, approximately between the 30th and 40th degree of latitude (in the northern hemisphere also up to the 45th degree of latitude). 
Their presence is limited to the coastal western sides of the continents, but nonetheless can typical in any regions of a continent with scarce annual precipitation or frequent seasonal droughts and poor soils that are heavily leached. The sclerophyll zone often merges into temperate deciduous forests towards the poles, on the coasts also into temperate rainforests and towards the equator in hot semi-deserts or deserts. The Mediterranean areas, which have a very high biodiversity, are under great pressure from the population. This is especially true for the Mediterranean region since ancient times. Through overexploitation (logging, grazing, agricultural use) and frequent fires caused by people, the original forest vegetation is converted. In extreme cases, the hard-leaf vegetation disappears completely and is replaced by open rock heaths. Some sclerophyll areas are closer to the equator than the Mediterranean zoneβ€”for example, the interior of Madagascar, the dry half of New Caledonia, the lower edge areas of the Madrean pine-oak woodlands of the Mexican highlands between 800 and 1800/2000 m or around 2000 m high plateaus of the Asir Mountains on the western edge of the Arabian Peninsula. Land use While the winter rain areas of America, South Africa and Australia, with an unusually large variety of food crops, were ideal gathering areas for hunter gatherers until European colonization, agriculture and cattle breeding spread in the Mediterranean area since the Neolithic, which permanently changed the face of the landscape. In the sclerophyll regions near the coast, permanent crops such as olive and wine cultivation established themselves; However, the landscape forms that characterize the degenerate shrubbery and shrub heaths Macchie and Garigue are predominantly a result of grazing (especially with goats). In the course of the last millennia, the original vegetation in almost all areas of this vegetation zone has been greatly changed by the influence of humans. Where the plants have not been replaced by vineyards and olive groves, the maquis was the predominant form of vegetation on the Mediterranean. The maquis has been degraded in many places to the low shrub heather, the garigue. Many plant species that are rich in aromatic oils belong to both vegetation societies. The diversity of the original sclerophyll vegetation in the world is high to extremely high (3000–5000 species per ha). Australian bush Most areas of the Australian continent able to support woody plants are occupied by sclerophyll communities as forests, savannas, or heathlands. Common plants include the Proteaceae (grevilleas, banksias and hakeas), tea-trees, acacias, boronias, and eucalypts. The most common sclerophyll communities in Australia are savannas dominated by grasses with an overstorey of eucalypts and acacias. Acacia (particularly mulga) shrublands also cover extensive areas. All the dominant overstorey acacia species and a majority of the understorey acacias have a scleromorphic adaptation in which the leaves have been reduced to phyllodes consisting entirely of the petiole. Many plants of the sclerophyllous woodlands and shrublands also produce leaves unpalatable to herbivores by the inclusion of toxic and indigestible compounds which assure survival of these long-lived leaves. This trait is particularly noticeable in the eucalypt and Melaleuca species which possess oil glands within their leaves that produce a pungent volatile oil that makes them unpalatable to most browsers. 
These traits make the majority of woody plants in these woodlands largely unpalatable to domestic livestock. It is therefore important from a grazing perspective that these woodlands support a more or less continuous layer of herbaceous ground cover dominated by grasses. Sclerophyll forests cover a much smaller area of the continent, being restricted to relatively high rainfall locations. They have a eucalyptus overstory (10 to 30 metres) with the understory also being hard-leaved. Dry sclerophyll forests are the most common forest type on the continent, and although it may seem barren, dry sclerophyll forest is highly diverse. For example, a study of sclerophyll vegetation in Seal Creek, Victoria, found 138 species. Even less extensive are wet sclerophyll forests. They have a taller eucalyptus overstory than dry sclerophyll forests, or more (typically mountain ash, alpine ash, rose gum, karri, messmate stringybark, or manna gum), and a soft-leaved, fairly dense understory (tree ferns are common). They require ample rainfall, at least 1000 mm (40 inches). Evolution Sclerophyllous plants are all part of a specific environment and are anything but newcomers. By the time of European settlement, sclerophyll forest accounted for the vast bulk of the forested areas. Most of the wooded parts of present-day Australia have become sclerophyll-dominated as a result of the extreme age of the continent combined with Aboriginal fire use. Deep weathering of the crust over many millions of years leached chemicals out of the rock, leaving Australian soils deficient in nutrients, particularly phosphorus. Such nutrient-deficient soils support non-sclerophyllous plant communities elsewhere in the world and did so over most of Australia prior to European arrival. However, such deficient soils cannot support the nutrient losses associated with frequent fires, and their plant communities are rapidly replaced with sclerophyllous species under traditional Aboriginal burning regimens. With the cessation of traditional burning, non-sclerophyllous species have re-colonized sclerophyll habitat in many parts of Australia. The presence of toxic compounds, combined with a high carbon:nitrogen ratio, makes the leaves and branches of scleromorphic species long-lived in the litter and can lead to a large build-up of litter in woodlands. The toxic compounds of many species, notably Eucalyptus species, are volatile and flammable, and the presence of large amounts of flammable litter, coupled with an herbaceous understorey, encourages fire. All the Australian sclerophyllous communities are liable to be burnt with varying frequencies, and many of the woody plants of these woodlands have developed adaptations to survive and minimise the effects of fire. Sclerophyllous plants generally resist dry conditions well, making them successful in areas of seasonally variable rainfall. In Australia, however, they evolved in response to the low level of phosphorus in the soil; indeed, many native Australian plants cannot tolerate higher levels of phosphorus and will die if fertilised incorrectly. The leaves are hard due to lignin, which prevents wilting and allows plants to grow even when there is not enough phosphorus for substantial new cell growth. 
Regions These are the biomes or ecoregions in the world that feature an abundance of, or are known for having, sclerophyll vegetation: Cumberland Plain Woodland Sydney Sandstone Ridgetop Woodland Eastern Suburbs Banksia Scrub Tasmanian dry sclerophyll forests Aegean and Western Turkey sclerophyllous and mixed forests California chaparral and woodlands California coastal sage and chaparral Chilean Matorral Mallee Woodlands and Shrublands Italian sclerophyllous and semi-deciduous forests Eastern Mediterranean conifer–sclerophyllous–broadleaf forests Southwest Iberian Mediterranean sclerophyllous and mixed forests Tyrrhenian–Adriatic sclerophyllous and mixed forests Canary Islands dry woodlands and forests Mediterranean acacia–argania dry woodlands Mediterranean dry woodlands and steppe Southeastern Iberian shrubs and woodlands Cyprus Mediterranean forests Crete Mediterranean forests Cape Floristic Region Southern Anatolian montane conifer and deciduous forests Albany thickets Northwest Iberian montane forests See also Mediterranean forests, woodlands, and scrub Chaparral Fynbos Maquis shrubland Garrigue Kwongan Matorral Barren vegetation References Mediterranean forests, woodlands, and scrub California chaparral and woodlands Flora of the Chilean Matorral Mallee Woodlands and Shrublands Ecology Sclerophyll forests
Sclerophyll
[ "Biology" ]
3,191
[ "Ecology" ]
159,856
https://en.wikipedia.org/wiki/Adhocracy
Adhocracy is a flexible, adaptable, and informal form of organization that is defined by a lack of formal structure and that employs specialized multidisciplinary teams grouped by function. It operates in a fashion opposite to bureaucracy. Warren Bennis coined the term in his 1968 book The Temporary Society. Alvin Toffler popularized the term in 1970 with his book Future Shock, and it has since become common in the management theory of organizations (particularly online organizations). The concept has been further developed by academics such as Henry Mintzberg. Adhocracy is a system of adaptive, creative, and flexible integrative behavior based on non-permanence and spontaneity. These characteristics are believed to allow adhocracies to respond faster than traditional bureaucratic organizations while being more open to new ideas. Overview Robert H. Waterman, Jr. defines adhocracy as "any form of organization that cuts across normal bureaucratic lines to capture opportunities, solve problems, and get results". For Henry Mintzberg, an adhocracy is a complex and dynamic organizational form. It is different from bureaucracy; like Toffler, Mintzberg considers bureaucracy a thing of the past, and adhocracy one of the future. When done well, adhocracy can be very good at problem solving and innovation, and it can thrive in diverse environments. It requires sophisticated and often automated technical systems to develop and thrive. Academics have described Wikipedia as an adhocracy. Characteristics Some characteristics of Mintzberg's definition include: highly organic structure little formalization of behavior job specialization not necessarily based on formal training a tendency to group the specialists in functional units for housekeeping purposes but to deploy them in small, market-based project teams to do their work a reliance on liaison devices to encourage mutual adjustment within and between these teams low or no standardization of procedures roles not clearly defined selective decentralization work organization rests on specialized teams power-shifts to specialized teams horizontal job specialization high cost of communication culture based on non-bureaucratic work All members of an organization have the authority, within their areas of specialization and in coordination with other members, to make decisions and to take actions affecting the future of the organization. There is an absence of hierarchy. According to Robert H. Waterman, Jr., "Teams should be big enough to represent all parts of the bureaucracy that will be affected by their work, yet small enough to get the job done efficiently." Types administrative – "feature an autonomous operating core; usually in an institutionalized bureaucracy like a government department or standing agency" operational – solves problems on behalf of its clients Alvin Toffler claimed in his book Future Shock that adhocracies will become more common and are likely to replace bureaucracy. He also wrote that they will most often come in the form of a temporary structure, formed to resolve a given problem and dissolved afterwards. An example is the cross-department task force. Issues Downsides of adhocracies can include "half-baked actions", personnel problems stemming from the organization's temporary nature, extremism in suggested or undertaken actions, and threats to democracy and legality arising from adhocracy's often low-key profile. To address those problems, researchers on adhocracy suggest a model merging adhocracy and bureaucracy, the bureau-adhocracy. 
Etymology The word is a portmanteau of the Latin ad hoc, meaning "for the purpose", and the suffix -cracy, from the ancient Greek kratein (κρατεῖν), meaning "to govern", and is thus a heteroclite. Use in fiction The term is also used to describe the form of government used in the science fiction novels Voyage from Yesteryear by James P. Hogan and Down and Out in the Magic Kingdom by Cory Doctorow. In the radio play Das Unternehmen der Wega (The Mission of the Vega) by Friedrich Dürrenmatt, the human inhabitants of Venus, all banished there from various regions of Earth for civil and political offenses, form and live under a peaceful adhocracy, to the frustration of delegates from an Earth faction who hope to gain their cooperation in a war brewing on Earth. In the Metrozone series of novels by Simon Morden, the novel The Curve of the Earth features "ad-hoc" meetings conducted virtually, through which all decisions governing the Freezone collective are taken. The ad-hocs are administered by an artificial intelligence and polled from suitably qualified individuals who are judged by the AI to have sufficient experience. Failure to arrive at a decision results in the polling of a new ad-hoc, whose members are not told of previous ad-hocs before hearing the decision which must be made. The asura in the fictional world of Tyria within the Guild Wars universe practice this form of government, although the term is only used in out-of-game lore writings. See also Anarchy Affinity group Bureaucracy (considered the opposite of adhocracy) Crowdsourcing Commons-based peer production Free association Here Comes Everybody Holacracy Libertarianism Self-management Social peer-to-peer processes Socialism Sociocracy Spontaneous order The Tyranny of Structurelessness Union of egoists Workplace democracy References Sources Adhocracy by Robert H. Waterman, Jr. () Future Shock by Alvin Toffler () Forms of government Organization design Libertarian theory 1970 introductions Types of organization
Adhocracy
[ "Engineering" ]
1,113
[ "Design", "Organization design" ]
159,901
https://en.wikipedia.org/wiki/Jacques%20Herbrand
Jacques Herbrand (12 February 1908 – 27 July 1931) was a French mathematician. Although he died at age 23, he was already considered one of "the greatest mathematicians of the younger generation" by his professors Helmut Hasse and Richard Courant. He worked in mathematical logic and class field theory. He introduced recursive functions. Herbrand's theorem refers to either of two completely different theorems. One is a result from his doctoral thesis in proof theory, and the other is one half of the Herbrand–Ribet theorem. The Herbrand quotient is a type of Euler characteristic, used in homological algebra. He contributed to Hilbert's program in the foundations of mathematics by providing a constructive consistency proof for a weak system of arithmetic. The proof uses the above-mentioned proof-theoretic Herbrand's theorem. Biography Herbrand finished his doctorate at the École Normale Supérieure in Paris under Ernest Vessiot in 1929. He joined the army in October 1929, however, and so did not defend his thesis at the Sorbonne until the following year. He was awarded a Rockefeller fellowship that enabled him to study in Germany in 1930–1931, first with John von Neumann in Berlin, then during June with Emil Artin in Hamburg, and finally with Emmy Noether in Göttingen. In Berlin, Herbrand followed a course on Hilbert's proof theory given by von Neumann. During the course, von Neumann explained Gödel's first incompleteness theorem and found, independently of Gödel, the second incompleteness theorem, which he also presented in the lectures. A letter of Herbrand's of 5 December 1930 to his friend Claude Chevalley contains a description of von Neumann's idea. An earlier letter to Vessiot, of 28 November, explained Gödel's first incompleteness theorem in the form of failure of omega-consistency. Herbrand's last paper was titled "Sur la non-contradiction de l'arithmétique" (On the consistency of arithmetic). It contains a consistency proof for a restricted system of arithmetic, similar to a result of Johann von Neumann's. Herbrand had studied Gödel's incompleteness article at Easter 1931 through the page proofs Paul Bernays had lent him. In the last section of his paper, Herbrand makes a comparison of his restricted result to that of Gödel's. The paper was received by the editors the very same day Herbrand lost his life, 27 July, and was published posthumously. Death In July 1931, Herbrand was mountain-climbing in the French Alps with two friends when he fell to his death in the granite mountains of the Massif des Écrins. Quotation "Jacques Herbrand would have hated Bourbaki," said French mathematician Claude Chevalley, quoted in Michèle Chouchan, "Nicolas Bourbaki: Faits et légendes", Éditions du choix, 1995. Bibliography Primary literature: 1967. Jean van Heijenoort (ed.), From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931. Cambridge, Massachusetts: Harvard Univ. Press. 1930. "Investigations in proof theory," 525–81. 1931. "On the consistency of arithmetic," 618–28. 1968. Jean van Heijenoort (ed.), Jacques Herbrand, Écrits logiques. Paris: Presses Universitaires de France. 1971. Warren David Goldfarb (transl., ed.), Logical Writings of Jacques Herbrand. Cambridge, Massachusetts: Harvard University Press. 
See also Herbrand interpretation Herbrand structure Herbrand Award – by the Conference on Automated Deduction, for automated deduction Prix Jacques Herbrand – by the French Academy of Sciences, for mathematics and physics Herbrandization – a validity-preserving normal form of a formula, dual to Skolemization Herbrand's theorem on ramification groups Rollo Davidson (1944–1970) – another mathematician who died in a mountain-climbing accident (Gödel–Herbrand) computability thesis – before Church and Turing, Herbrand and Kurt Gödel in 1933 created a formal definition of a class called general recursive functions. References External links 1908 births 1931 deaths French logicians Logicians École Normale Supérieure alumni University of Paris alumni Proof theorists 20th-century French mathematicians French male non-fiction writers Mountaineering deaths Sport deaths in France Scientists from Paris 20th-century French philosophers 20th-century French male writers
Jacques Herbrand
[ "Mathematics" ]
914
[ "Proof theorists", "Proof theory" ]
159,908
https://en.wikipedia.org/wiki/Mascot
A mascot is any human, animal, or object thought to bring luck, or anything used to represent a group with a common public identity, such as a school, sports team, society, military unit, or brand name. Mascots are also used as fictional, representative spokespeople for consumer products. In sports, mascots are also used for merchandising. Team mascots are often related to their respective team nicknames. This is especially true when the team's nickname is something that is a living animal and/or can be made to have humanlike characteristics. For more abstract nicknames, the team may opt to have an unrelated character serve as the mascot. For example, the athletic teams of the University of Alabama are nicknamed the Crimson Tide, while their mascot is an elephant named Big Al. Team mascots may take the form of a logo, person, live animal, inanimate object, or a costumed character, and often appear at team matches and other related events. Since the mid-20th century, costumed characters have provided teams with an opportunity to choose a fantasy creature as their mascot, as is the case with the Philadelphia Phillies' mascot: Phillie Phanatic, the Philadelphia Flyers' mascot: Gritty, the Seattle Kraken mascot: Buoy, and the Washington Commanders' mascot: Major Tuddy. Costumed mascots are commonplace, and are regularly used as goodwill ambassadors in the community for their team, company, or organization. History It was sports organizations that initially first thought of using animals as a form of mascot to bring entertainment and excitement for their spectators. Before mascots were fictional icons or people in suits, animals were mostly used in order to bring a somewhat different feel to the game and to strike fear upon the rivalry teams. As time went on, mascots evolved from predatory animals, to two-dimensional fantasy mascots, to finally what we know today, three-dimensional mascots. Stylistic changes in American puppetry in the mid-20th century, including the work of Jim Henson and Sid and Marty Krofft, soon were adapted to sports mascots. It allowed people to not only have visual enjoyment but also interact physically with the mascots. Marketers quickly realized the great potential in three-dimensional mascots and took on board the costumed puppet idea. This change encouraged other companies to start creating their own mascots, resulting in mascots being a necessity amongst not only the sporting industry but for other organisations. Etymology The word 'mascot' originates from the French term 'mascotte' which means lucky charm. This was used to describe anything that brought luck to a household. The word was first recorded in 1867 and popularised by a French composer Edmond Audran who wrote the opera La mascotte, performed in December 1880. The word entered the English language in 1881 with the meaning of a specific living entity associated with a human organization as a symbol or live logo. However, before this, the terms were familiar to the people of France as a slang word used by gamblers. The term is a derivative of the word 'masco' meaning sorceress or witch. Before the 19th century, the word 'mascot' was associated with inanimate objects that would be commonly seen such as a lock of hair or a figurehead on a sailing ship. From then to the twentieth century, the term has been used in reference to any good luck animals, objects etc., and more recently including human caricatures and fictional creatures created as logos for sports teams. 
Choices and identities Often, the choice of the mascot reflects the desired quality; a typical example of this is the "fighting spirit," in which a competitive nature is personified by warriors or predatory animals. Mascots may also symbolize a local or regional trait, such as the Nebraska Cornhuskers' mascot, Herbie Husker: a stylized version of a farmer, owing to the agricultural traditions of the area in which the university is located. Similarly, Pittsburg State University uses Gus the Gorilla as its mascot, "gorilla" being an old colloquial term for coal miners in the Southeast Kansas area in which the university was established. In the United States, controversy surrounds some mascot choices, especially those using human likenesses. Mascots based on Native American tribes are particularly contentious, as many argue that they constitute offensive exploitations of an oppressed culture. However, several Indian tribes have come out in support of keeping the names. For example, the Utah Utes and the Central Michigan Chippewas are sanctioned by local tribes, and the Florida State Seminoles are supported by the Seminole Tribe of Florida in their use of Osceola and Renegade as symbols. FSU chooses not to refer to them as mascots because of the offensive connotation. This has not, however, prevented fans from engaging in "Redface"β€”dressing up in stereotypical, Plains Indian outfits during games, or creating offensive banners saying "Scalp 'em" as was seen at the 2014 Rose Bowl. Some sports teams have "unofficial" mascots: individual supporters or fans that have become identified with the team. The New York Yankees have such an individual in fan Freddy Sez. Former Toronto Blue Jays mascot BJ Birdie was a costumed character created by a Blue Jays fan, ultimately hired by the team to perform at their home games. USC Trojans mascot is Tommy Trojan who rides on his horse (and the official mascot of the school) Traveler. Sports mascots Many sports teams in the United States have official mascots, sometimes enacted by costumed humans or even live animals. One of the earliest was a taxidermy mount for the Chicago Cubs, in 1908, and later a live animal used in 1916 by the same team. They abandoned the concept shortly thereafter and remained without an official "cub" until 2014, when they introduced a version that was a person wearing a costume. In the United Kingdom, some teams have young fans become "mascots". These representatives sometimes have medical issues, and the appearance is a wish grant, the winner of a contest, or under other circumstances. Mascots also include older people such as Mr England, who are invited by national sports associations to be mascots for the representative teams. One of the earliest was Ken Baily, whose John Bull-inspired appearance was a regular at England matches from 1963 to 1990. Controversies On October 28, 1989, University of Miami mascot Sebastian the Ibis was tackled by a group of police officers for attempting to put out Chief Osceola's flaming spear prior to Miami's game against long-standing rival Florida State at Doak Campbell Stadium in Tallahassee. Sebastian was wearing a fireman’s helmet and yellow raincoat and holding a fire extinguisher. When a police officer attempted to grab the fire extinguisher, the officer was sprayed in the chest. Sebastian was handcuffed by four officers but ultimately released. 
University of Miami quarterback Gino Torretta told ESPN, "Even if we weren't bad boys, it added to the mystique that, 'Man, look, even their mascot's getting arrested.'" As of 2024, five high schools in the United States use midgets for their mascots. Advocates working with Little People of America have been campaigning to change it because of its common usage as a pejorative slur against disabled people. Corporate mascots Mascots or advertising characters are very common in the corporate world. Recognizable mascots include Chester Cheetah, Keebler Elf, the Fruit of the Loom Guys, Mickey Mouse, Pizza Pizza Guy for Little Caesars, Rocky the Elf, Pepsiman and the NBC Peacock. These characters are typically known without even having to refer to the company or brand. This is an example of corporate branding, and soft selling a company. Mascots are able to act as brand ambassadors where advertising is not allowed. For example, many corporate mascots can attend non-profit events, or sports and promote their brand while entertaining the crowd. Some mascots are simply cartoons or virtual mascots, others are characters in commercials, and others are actually created as costumes and will appear in person in front of the public at tradeshows or events. School mascots American high schools, colleges, and even middle and elementary schools typically have mascots. Many college and university mascots started out as live animals, such as bulldogs and bears that attended sporting events. Today, mascots are usually represented by animated characters, campus sculptures, and costumed students who attend sporting events, alumni gatherings, and other campus events. International mascots – Olympics and World Expositions The mascots that are used for the Summer and Winter Olympic games are fictional characters, typically a human figure or an animal native to the country to which is holding that year's Olympic Games. The mascots are used to entice an audience and bring joy and excitement to the Olympics festivities. Likewise, many World expositions since 1984 have had mascots representing their host city in some way, starting with the 1984 Louisiana World Exposition's mascot Seymore D. Fair. Since 1968, nearly all of the cities that have hosted the Summer or Winter Olympic Games have designed and promoted a mascot that relates to the culture of the host country the overall "brand" of that year's Games. Recent Winter/Summer Olympic games mascots include Miga, Quatchi, Mukmuk (Vancouver, 2010), Wenlock and Mandeville (London, 2012), Bely Mishka, Snow Leopard, Zaika (Sochi, 2014) and Vinicius and Tom (Rio, 2016) have all gone on to become iconic symbols in their respective countries. Since 2010, it has been common for the Olympic and Paralympic games to each have their own mascots, which are presented together. For example, the 2020 Summer Olympics in Tokyo is represented by Miraitowa, while the 2020 Summer Paralympics are represented by Someity, and the two often appear together in promotional materials. Government mascots Yuru-chara In Japan, many municipalities have mascots, which are known as Yuru-chara (Japanese: ゆるキャラ Hepburn: yuru kyara). Yuru-chara is also used to refer to mascots created by businesses to promote their products. NASA mascot Camilla Corona SDO is the mission mascot for NASA's Solar Dynamics Observatory (SDO) and assists the mission with Education and Public Outreach (EPO). Military mascots Mascots are also popular in military units. 
For example, the United States Marine Corps uses the English Bulldog as its mascot, while the United States Army uses the mule, the United States Navy uses the goat, and the United States Air Force uses the Gyrfalcon. The goat in the Royal Welsh is officially not a mascot but a ranking soldier. Lance Corporal William Windsor retired on 20 May 2009, and his replacement "William Windsor II" was captured and formally recruited on June 15 that same year. Several regiments of the British Army have a live animal mascot which appear on parades. The Parachute Regiment and the Argyll and Sutherland Highlanders have a Shetland pony as their mascot, a ram for The Mercian Regiment; an Irish Wolfhound for the Irish Guards and the Royal Irish Regiment; a drum horse for the Queen's Royal Hussars and the Royal Scots Dragoon Guards; an antelope for the Royal Regiment of Fusiliers; and a goat for the Royal Welsh. Other British military mascots include a Staffordshire Bull Terrier and a pair of ferrets. The Norwegian Royal Guard adopted a king penguin named Nils Olav as its mascot on the occasion of a visit to Edinburgh by its regimental band. The (very large) penguin remains resident at Edinburgh Zoo and has been formally promoted by one rank on the occasion of each subsequent visit to Britain by the band or other detachments of the Guard. Regimental Sergeant Major Olav was awarded the Norwegian Army's Long Service and Good Conduct medal at a ceremony in 2005. Smokey Bear The U.S. Forest Service uses mascot Smokey Bear to raise awareness and educate the public about the dangers of unplanned human-caused wildfires. In music Some bands, particularly in the heavy metal genre, use band mascots to promote their music. The mascots are usually found on album covers or merchandise such as band T-shirts, but can also make appearances in live shows or music videos. One example of a band mascot is Eddie of the English heavy metal band Iron Maiden. Eddie is a zombie-like creature which is personified in different forms on all of the band's albums, most of its singles and some of its promotional merchandise. Eddie is also known to make live appearances, especially during the song "Iron Maiden". Another notable example of a mascot in music is Skeleton Sam of The Grateful Dead. South Korean hip hop band B.A.P uses rabbits named Matoki as their mascot, each bunny a different color representing each member. Although rabbits have an innocent image, BAP gives off a tough image. Hip hop artist Kanye West used to use a teddy bear named Dropout Bear as his mascot; Dropout Bear has appeared on the cover of West's first three studio albums, and served as the main character of West's music video, "Good Morning". The question of whether a "hype-man" can legitimately be considered a hip-hop organization's mascot is currently an active subject of debate within academic Hip-Hop circles. However, local polling in relevant regions suggests acceptance of the "hype-man" as a legitimate organizational mascot. In television Some television series have mascots, like the Cleatus the Robot animated cartoon figure on the U.S. sports television show Fox NFL Sunday. Another example of a cartoon mascot on television is the Sir Seven knight character on Wisconsin's WSAW-TV. 
See also List of mascots (college, computing, commercial, sports, public-service, television and movie, computer and video games, political parties) Amulet Car mascot Fursuit Lucky charm Mascot Hall of Fame National emblem, National personification, National animals Player escort, children accompanying football players also sometimes called mascots Talisman Totem Costume References External links Mascot Database – the searchable team name database List of Free and Open Source software mascots Benefits of Brand Mascots in Business Benefits of Mascot Designs in Brand Recognition Unique University Mascots and The History Behind Them at The History Tavern Brand management Branding terminology Sports occupations and roles Uniforms
Mascot
[ "Mathematics" ]
2,956
[ "Symbols", "Mascots" ]
159,963
https://en.wikipedia.org/wiki/MPEG-1%20Audio%20Layer%20II
MPEG-1 Audio Layer II or MPEG-2 Audio Layer II (MP2, sometimes incorrectly called Musicam or MUSICAM) is a lossy audio compression format defined by ISO/IEC 11172-3 alongside MPEG-1 Audio Layer I and MPEG-1 Audio Layer III (MP3). While MP3 is much more popular for PC and Internet applications, MP2 remains a dominant standard for audio broadcasting. History of development from MP2 to MP3 MUSICAM MPEG-1 Audio Layer 2 encoding was derived from the MUSICAM (Masking pattern adapted Universal Subband Integrated Coding And Multiplexing) audio codec, developed by Centre commun d'études de télévision et télécommunications (CCETT), Philips, and the Institut für Rundfunktechnik (IRT) in 1989 as part of the EUREKA 147 pan-European inter-governmental research and development initiative for the development of a system for the broadcasting of audio and data to fixed, portable or mobile receivers (established in 1987). It began as the Digital Audio Broadcast (DAB) project managed by Egon Meier-Engelen of the Deutsche Forschungs- und Versuchsanstalt für Luft- und Raumfahrt (later on called Deutsches Zentrum für Luft- und Raumfahrt, German Aerospace Center) in Germany. The European Community financed this project, commonly known as EU-147, from 1987 to 1994 as a part of the EUREKA research program. The Eureka 147 System comprised three main elements: MUSICAM Audio Coding (Masking pattern Universal Sub-band Integrated Coding And Multiplexing), Transmission Coding & Multiplexing and COFDM Modulation. MUSICAM was one of the few codecs able to achieve high audio quality at bit rates in the range of 64 to 192 kbit/s per monophonic channel. It has been designed to meet the technical requirements of most applications (in the field of broadcasting, telecommunication and recording on digital storage media): low delay, low complexity, error robustness, short access units, etc. As a predecessor of the MP3 format and technology, the perceptual codec MUSICAM is based on integer arithmetics 32 subbands transform, driven by a psychoacoustic model. It was primarily designed for Digital Audio Broadcasting and digital TV, and disclosed by CCETT (France) and IRT (Germany) in Atlanta during an IEEE-ICASSP conference. This codec incorporated into a broadcasting system using COFDM modulation was demonstrated on air and on the field together with Radio Canada and CRC Canada during the NAB show (Las Vegas) in 1991. The implementation of the audio part of this broadcasting system was based on a two chips encoder (one for the subband transform, one for the psychoacoustic model designed by the team of G. Stoll (IRT Germany), later known as Psychoacoustic model I in the ISO MPEG audio standard) and a real time decoder using one Motorola 56001 DSP chip running an integer arithmetics software designed by Y.F. Dehery's team (CCETT, France). The simplicity of the corresponding decoder together with the high audio quality of this codec using for the first time a 48 kHz sampling frequency, a 20 bits/sample input format (the highest available sampling standard in 1991, compatible with the AES/EBU professional digital input studio standard) were the main reasons to later adopt the characteristics of MUSICAM as the basic features for an advanced digital music compression codec such as MP3. The audio coding algorithm used by the Eureka 147 Digital Audio Broadcasting (DAB) system has been subject to the standardization process within the ISO/Moving Pictures Expert Group (MPEG) in 1989–94. 
MUSICAM audio coding was used as a basis for some coding schemes of MPEG-1 and MPEG-2 Audio. Most key features of MPEG-1 Audio were directly inherited from MUSICAM, including the filter bank, time-domain processing, audio frame sizes, etc. However, improvements were made, and the actual MUSICAM algorithm was not used in the final MPEG-1 Audio Layer II standard. Since the finalisation of MPEG-1 Audio and MPEG-2 Audio (in 1992 and 1994), the original MUSICAM algorithm is not used anymore. The name MUSICAM is often mistakenly used when MPEG-1 Audio Layer II is meant. This can lead to some confusion, because the name MUSICAM is trademarked by different companies in different regions of the world. (Musicam is the name used for MP2 in some specifications for Astra Digital Radio as well as in the BBC's DAB documents.) The Eureka Project 147 resulted in the publication of European Standard, ETS 300 401 in 1995, for DAB which now has worldwide acceptance. The DAB standard uses the MPEG-1 Audio Layer II (ISO/IEC 11172-3) for 48Β kHz sampling frequency and the MPEG-2 Audio Layer II (ISO/IEC 13818-3) for 24Β kHz sampling frequency. MPEG Audio In the late 1980s, ISO's Moving Picture Experts Group (MPEG) started an effort to standardize digital audio and video encoding, expected to have a wide range of applications in digital radio and TV broadcasting (later DAB, DMB, DVB), and use on CD-ROM (later Video CD). The MUSICAM audio coding was one of 14 proposals for MPEG-1 Audio standard that were submitted to ISO in 1989. The MPEG-1 Audio standard was based on the existing MUSICAM and ASPEC audio formats. The MPEG-1 Audio standard included the three audio "layers" (encoding techniques) now known as Layer I (MP1), Layer II (MP2) and Layer III (MP3). All algorithms for MPEG-1 Audio Layer I, II and III were approved in 1991 as the committee draft of ISO-11172 and finalized in 1992 as part of MPEG-1, the first standard suite by MPEG, which resulted in the international standard ISO/IEC 11172-3 (a.k.a. MPEG-1 Audio or MPEG-1 Part 3), published in 1993. Further work on MPEG audio was finalized in 1994 as part of the second suite of MPEG standards, MPEG-2, more formally known as international standard ISO/IEC 13818-3 (a.k.a. MPEG-2 Part 3 or backward compatible MPEG-2 Audio or MPEG-2 Audio BC), originally published in 1995. MPEG-2 Part 3 (ISO/IEC 13818-3) defined additional bit rates and sample rates for MPEG-1 Audio Layer I, II and III. The new sampling rates are exactly half that of those originally defined for MPEG-1 Audio. MPEG-2 Part 3 also enhanced MPEG-1's audio by allowing the coding of audio programs with more than two channels, up to 5.1 multichannel. The Layer III (MP3) component uses a lossy compression algorithm that was designed to greatly reduce the amount of data required to represent an audio recording and sound like a decent reproduction of the original uncompressed audio for most listeners. Emmy Award in Engineering CCETT (France), IRT (Germany) and Philips (The Netherlands) won an Emmy Award in Engineering 2000 for development of a digital audio two-channel compression system known as Musicam or MPEG Audio Layer II. 
Technical specifications MPEG-1 Audio Layer II is defined in ISO/IEC 11172-3 (MPEG-1 Part 3) Sampling rates: 32, 44.1 and 48Β kHz Bit rates: 32, 48, 56, 64, 80, 96, 112, 128, 160, 192, 224, 256, 320 and 384Β kbit/s An extension has been provided in MPEG-2 Audio Layer II and is defined in ISO/IEC 13818-3 (MPEG-2 Part 3) Additional sampling rates: 16, 22.05 and 24Β kHz Additional bit rates: 8, 16, 24, 40 and 144Β kbit/s Multichannel support - up to 5 full range audio channels and an LFE-channel (Low Frequency Enhancement channel) The format is based on successive digital frames of 1152 sampling intervals with four possible formats: mono format stereo format intensity encoded joint stereo format (stereo irrelevance) dual channel (uncorrelated) format Variable bit rate MPEG audio may have variable bit rate (VBR), but it is not widely supported. Layer II can use a method called bit rate switching. Each frame may be created with a different bit rate. According to ISO/IEC 11172-3:1993, Section 2.4.2.3: To provide the smallest possible delay and complexity, the (MPEG audio) decoder is not required to support a continuously variable bit rate when in layer I or II. How the MP2 format works MP2 is a sub-band audio encoder, which means that compression takes place in the time domain with a low-delay filter bank producing 32 frequency domain components. By comparison, MP3 is a transform audio encoder with hybrid filter bank, which means that compression takes place in the frequency domain after a hybrid (double) transformation from the time domain. MPEG Audio Layer II is the core algorithm of the MP3 standards. All psychoacoustical characteristics and frame format structures of the MP3 format are derived from the basic MP2 algorithm and format. The MP2 encoder may exploit inter channel redundancies using optional "joint stereo" intensity encoding. Like MP3, MP2 is a perceptual coding format, which means that it removes information that the human auditory system will not be able to easily perceive. To choose which information to remove, the audio signal is analyzed according to a psychoacoustic model, which takes into account the parameters of the human auditory system. Research into psychoacoustics has shown that if there is a strong signal on a certain frequency, then weaker signals at frequencies close to the strong signal's frequency cannot be perceived by the human auditory system. This is called frequency masking. Perceptual audio codecs take advantage of this frequency masking by ignoring information at frequencies that are deemed to be imperceptible, thus allowing more data to be allocated to the reproduction of perceptible frequencies. MP2 splits the input audio signal into 32 sub-bands, and if the audio in a sub-band is deemed to be imperceptible then that sub-band is not transmitted. MP3, on the other hand, transforms the input audio signal to the frequency domain in 576 frequency components. Therefore, MP3 has a higher frequency resolution than MP2, which allows the psychoacoustic model to be applied more selectively than for MP2. So MP3 has greater scope to reduce the bit rate. The use of an additional entropy coding tool, and higher frequency accuracy (due to the larger number of frequency sub-bands used by MP3) explains why MP3 does not need as high a bit rate as MP2 to get an acceptable audio quality. Conversely, MP2 shows a better behavior than MP3 in the time domain, due to its lower frequency resolution. 
This implies less codec time delayΒ β€” which can make editing audio simplerΒ β€” as well as "ruggedness" and resistance to errors which may occur during the digital recording process, or during transmission errors. The MP2 sub-band filter bank also provides an inherent "transient concealment" feature, due to the specific temporal masking effect of its mother filter. This unique characteristic of the MPEG-1 Audio family implies a very good sound quality on audio signals with rapid energy changes, such as percussive sounds. Because both the MP2 and MP3 formats use the same basic sub-band filter bank, both benefit from this characteristic. Applications of MP2 Part of the DAB digital radio and DVB digital television standards. MPEG-1 Audio Layer II is commonly used within the broadcast industry for distributing live audio over satellite, ISDN and IP Network connections as well as for storage of audio in digital playout systems. An example is NPR's PRSS Content Depot programming distribution system. The Content Depot distributes MPEG-1 L2 audio in a Broadcast Wave File wrapper. MPEG2 with RIFF headers (used in .wav) is specified in the RIFF/WAV standards. As a result, Windows Media Player will directly play Content Depot files, however, less intelligent .wav players often do not. As the encoding and decoding process would have been a significant drain on CPU resources in the first generations of broadcast playout systems, professional broadcast playout systems typically implement the codec in hardware, such as by delegating the task of encoding and decoding to a compatible soundcard rather than the system CPU. MPEG-1 Audio Layer II is the audio format used in Digital Audio Broadcast (DAB), a digital radio standard for broadcasting digital audio radio services in many countries around the world. The BBC Research & Development department states that at least 192Β kbit/s is necessary for a high fidelity stereo broadcast: All DVD-Video players in PAL countries contain stereo MP2 decoders, making MP2 a possible competitor to Dolby Digital in these markets. DVD-Video players in NTSC countries are not required to decode MP2 audio, although most do. While some DVD recorders store audio in MP2 and many consumer-authored DVDs use the format, commercial DVDs with MP2 soundtracks are rare. MPEG-1 Audio Layer II is the standard audio format used in the Video CD and Super Video CD formats (VCD and SVCD also support variable bit rate and MPEG Multichannel as added by MPEG-2). MPEG-1 Audio Layer II is the standard audio format used in the MHP standard for set-top boxes. MPEG-1 Audio Layer II is the audio format used in HDV camcorders. MP2 files are compatible with some Portable audio players. Naming and extensions The term MP2 and filename extension .mp2 usually refer MPEG-1 Audio Layer II data, but can also refer to MPEG-2 Audio Layer II, a mostly backward compatible extension which adds support for multichannel audio, variable bit rate encoding, and additional sampling rates, defined in ISO/IEC 13818-3. The abbreviation MP2 is also sometimes erroneously applied to MPEG-2 video or MPEG-2 AAC audio. See also MPEG-1 MPEG-1 Audio Layer I MPEG-1 Audio Layer III MPEG-2 MP4 (container format) Elementary stream Musepack originally MP2-based, with numerous improvements Notes References Genesis of the MP3 Audio Coding Standard by Hans Georg Musmann in IEEE Transactions on Consumer Electronics, Vol. 52, Nr. 
3, pp. 1043–1049, August 2006 MUSICAM Source Coding by Yves-François Dehery, AES 10th International Conference: Kensington, London, England (7–9 Sept 1991), pp. 71–79. External links The history of MP3 from Fraunhofer IIS MPEG Audio Resources and Software TooLAME – An MP2 encoder TwoLAME – A fork of the tooLAME code – The document defining MIME type for MPEG-1 Audio Layer II An MPEG Audio Layer II decoder in 4k – Source code for a small open source decoder. Official MPEG web site Patent Status of MPEG-1, H.261 and MPEG-2 – Some information about patents Audio codecs MP3 MPEG
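To make the frame structure given under Technical specifications above concrete, here is a small Python sketch (an illustration only, not an implementation of the standard). It derives the size and duration of a Layer II frame from the bit rate and sampling rate via the 1152-samples-per-frame relationship, and it adds a toy uniform 32-band energy split with a crude level threshold to suggest how imperceptible sub-bands can be discarded; the real MP2 polyphase filter bank and psychoacoustic model are far more elaborate, and the function names and the 60 dB threshold are invented for the example.

```python
import numpy as np

SAMPLES_PER_FRAME = 1152  # Layer II frame length in PCM samples per channel

def frame_size_bytes(bitrate_bps: int, sample_rate_hz: int) -> int:
    """Layer II frame size: 1152 samples/frame gives 144 * bitrate / sample_rate
    bytes per frame (ignoring the optional padding byte)."""
    return (SAMPLES_PER_FRAME // 8) * bitrate_bps // sample_rate_hz

def frame_duration_ms(sample_rate_hz: int) -> float:
    """Duration of one frame in milliseconds."""
    return 1000.0 * SAMPLES_PER_FRAME / sample_rate_hz

def toy_band_energies(signal: np.ndarray, n_bands: int = 32) -> np.ndarray:
    """Toy stand-in for the 32-band analysis: uniform FFT bands, not the real
    polyphase filter bank. Returns per-band energy for a mono signal."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return np.array([band.sum() for band in np.array_split(spectrum, n_bands)])

def toy_band_selection(energies: np.ndarray, threshold_db: float = 60.0) -> np.ndarray:
    """Crude illustration of masking: keep only bands within threshold_db of the
    loudest band (the real psychoacoustic model is frequency-dependent)."""
    ref = energies.max()
    return 10.0 * np.log10(energies / ref + 1e-20) > -threshold_db

if __name__ == "__main__":
    print(frame_size_bytes(192_000, 48_000))  # 576 bytes at 192 kbit/s, 48 kHz
    print(frame_duration_ms(48_000))          # 24.0 ms per frame
```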
MPEG-1 Audio Layer II
[ "Technology" ]
3,228
[ "Multimedia", "MPEG" ]
159,974
https://en.wikipedia.org/wiki/Lagrange%20multiplier
In mathematical optimization, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equation constraints (i.e., subject to the condition that one or more equations have to be satisfied exactly by the chosen values of the variables). It is named after the mathematician Joseph-Louis Lagrange. Summary and rationale The basic idea is to convert a constrained problem into a form such that the derivative test of an unconstrained problem can still be applied. The relationship between the gradient of the function and gradients of the constraints rather naturally leads to a reformulation of the original problem, known as the Lagrangian function or Lagrangian. In the general case, the Lagrangian is defined as for functions ; the notation denotes an inner product. The value is called the Lagrange multiplier. In simple cases, where the inner product is defined as the dot product, the Lagrangian is The method can be summarized as follows: in order to find the maximum or minimum of a function subject to the equality constraint , find the stationary points of considered as a function of and the Lagrange multiplier . This means that all partial derivatives should be zero, including the partial derivative with respect to . or equivalently The solution corresponding to the original constrained optimization is always a saddle point of the Lagrangian function, which can be identified among the stationary points from the definiteness of the bordered Hessian matrix. The great advantage of this method is that it allows the optimization to be solved without explicit parameterization in terms of the constraints. As a result, the method of Lagrange multipliers is widely used to solve challenging constrained optimization problems. Further, the method of Lagrange multipliers is generalized by the Karush–Kuhn–Tucker conditions, which can also take into account inequality constraints of the form for a given constant . Statement The following is known as the Lagrange multiplier theorem. Let be the objective function and let be the constraints function, both belonging to (that is, having continuous first derivatives). Let be an optimal solution to the following optimization problem such that, for the matrix of partial derivatives , : Then there exists a unique Lagrange multiplier such that (Note that this is a somewhat conventional thing where is clearly treated as a column vector to ensure that the dimensions match. But, we might as well make it just a row vector without taking the transpose.) The Lagrange multiplier theorem states that at any local maximum (or minimum) of the function evaluated under the equality constraints, if constraint qualification applies (explained below), then the gradient of the function (at that point) can be expressed as a linear combination of the gradients of the constraints (at that point), with the Lagrange multipliers acting as coefficients. This is equivalent to saying that any direction perpendicular to all gradients of the constraints is also perpendicular to the gradient of the function. Or still, saying that the directional derivative of the function is in every feasible direction. Single constraint For the case of only one constraint and only two choice variables (as exemplified in Figure 1), consider the optimization problem (Sometimes an additive constant is shown separately rather than being included in , in which case the constraint is written as in Figure 1.) 
We assume that both and have continuous first partial derivatives. We introduce a new variable () called a Lagrange multiplier (or Lagrange undetermined multiplier) and study the Lagrange function (or Lagrangian or Lagrangian expression) defined by where the term may be either added or subtracted. If is a maximum of for the original constrained problem and then there exists such that () is a stationary point for the Lagrange function (stationary points are those points where the first partial derivatives of are zero). The assumption is called constraint qualification. However, not all stationary points yield a solution of the original problem, as the method of Lagrange multipliers yields only a necessary condition for optimality in constrained problems. Sufficient conditions for a minimum or maximum also exist, but if a particular candidate solution satisfies the sufficient conditions, it is only guaranteed that that solution is the best one locally – that is, it is better than any permissible nearby points. The global optimum can be found by comparing the values of the original objective function at the points satisfying the necessary and locally sufficient conditions. The method of Lagrange multipliers relies on the intuition that at a maximum, cannot be increasing in the direction of any such neighboring point that also has . If it were, we could walk along to get higher, meaning that the starting point wasn't actually the maximum. Viewed in this way, it is an exact analogue to testing if the derivative of an unconstrained function is , that is, we are verifying that the directional derivative is 0 in any relevant (viable) direction. We can visualize contours of given by for various values of , and the contour of given by . Suppose we walk along the contour line with We are interested in finding points where almost does not change as we walk, since these points might be maxima. There are two ways this could happen: We could touch a contour line of , since by definition does not change as we walk along its contour lines. This would mean that the tangents to the contour lines of and are parallel here. We have reached a "level" part of , meaning that does not change in any direction. To check the first possibility (we touch a contour line of ), notice that since the gradient of a function is perpendicular to the contour lines, the tangents to the contour lines of and are parallel if and only if the gradients of and are parallel. Thus we want points where and for some where are the respective gradients. The constant is required because although the two gradient vectors are parallel, the magnitudes of the gradient vectors are generally not equal. This constant is called the Lagrange multiplier. (In some conventions is preceded by a minus sign). Notice that this method also solves the second possibility, that is level: if is level, then its gradient is zero, and setting is a solution regardless of . To incorporate these conditions into one equation, we introduce an auxiliary function and solve Note that this amounts to solving three equations in three unknowns. This is the method of Lagrange multipliers. Note that implies as the partial derivative of with respect to is To summarize The method generalizes readily to functions on variables which amounts to solving equations in unknowns. The constrained extrema of are critical points of the Lagrangian , but they are not necessarily local extrema of (see below). 
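The displayed formulas in this section appear to have been stripped during extraction; as a reading aid, the standard single-constraint system that the text describes can be written as follows (this is the textbook form, and the sign of the multiplier term is a convention, as noted above):

```latex
\mathcal{L}(x, y, \lambda) = f(x, y) - \lambda\, g(x, y), \qquad
\nabla_{x,y,\lambda}\,\mathcal{L} = 0
\;\Longleftrightarrow\;
\begin{cases}
\nabla f(x, y) = \lambda\, \nabla g(x, y), \\
g(x, y) = 0 .
\end{cases}
```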
One may reformulate the Lagrangian as a Hamiltonian, in which case the solutions are local minima for the Hamiltonian. This is done in optimal control theory, in the form of Pontryagin's minimum principle. The fact that solutions of the method of Lagrange multipliers are not necessarily extrema of the Lagrangian, also poses difficulties for numerical optimization. This can be addressed by minimizing the magnitude of the gradient of the Lagrangian, as these minima are the same as the zeros of the magnitude, as illustrated in Example 5: Numerical optimization. Multiple constraints The method of Lagrange multipliers can be extended to solve problems with multiple constraints using a similar argument. Consider a paraboloid subject to two line constraints that intersect at a single point. As the only feasible solution, this point is obviously a constrained extremum. However, the level set of is clearly not parallel to either constraint at the intersection point (see Figure 3); instead, it is a linear combination of the two constraints' gradients. In the case of multiple constraints, that will be what we seek in general: The method of Lagrange seeks points not at which the gradient of is a multiple of any single constraint's gradient necessarily, but in which it is a linear combination of all the constraints' gradients. Concretely, suppose we have constraints and are walking along the set of points satisfying Every point on the contour of a given constraint function has a space of allowable directions: the space of vectors perpendicular to The set of directions that are allowed by all constraints is thus the space of directions perpendicular to all of the constraints' gradients. Denote this space of allowable moves by and denote the span of the constraints' gradients by Then the space of vectors perpendicular to every element of We are still interested in finding points where does not change as we walk, since these points might be (constrained) extrema. We therefore seek such that any allowable direction of movement away from is perpendicular to (otherwise we could increase by moving along that allowable direction). In other words, Thus there are scalars such that These scalars are the Lagrange multipliers. We now have of them, one for every constraint. As before, we introduce an auxiliary function and solve which amounts to solving equations in unknowns. The constraint qualification assumption when there are multiple constraints is that the constraint gradients at the relevant point are linearly independent. Modern formulation via differentiable manifolds The problem of finding the local maxima and minima subject to constraints can be generalized to finding local maxima and minima on a differentiable manifold In what follows, it is not necessary that be a Euclidean space, or even a Riemannian manifold. All appearances of the gradient (which depends on a choice of Riemannian metric) can be replaced with the exterior derivative Single constraint Let be a smooth manifold of dimension Suppose that we wish to find the stationary points of a smooth function when restricted to the submanifold defined by where is a smooth function for which is a regular value. Let and be the exterior derivatives of and . Stationarity for the restriction at means Equivalently, the kernel contains In other words, and are proportional 1-forms. For this it is necessary and sufficient that the following system of equations holds: where denotes the exterior product. 
The stationary points are the solutions of the above system of equations plus the constraint Note that the equations are not independent, since the left-hand side of the equation belongs to the subvariety of consisting of decomposable elements. In this formulation, it is not necessary to explicitly find the Lagrange multiplier, a number such that Multiple constraints Let and be as in the above section regarding the case of a single constraint. Rather than the function described there, now consider a smooth function with component functions for which is a regular value. Let be the submanifold of defined by is a stationary point of if and only if contains For convenience let and where denotes the tangent map or Jacobian ( can be canonically identified with ). The subspace has dimension smaller than that of , namely and belongs to if and only if belongs to the image of Computationally speaking, the condition is that belongs to the row space of the matrix of or equivalently the column space of the matrix of (the transpose). If denotes the exterior product of the columns of the matrix of the stationary condition for at becomes Once again, in this formulation it is not necessary to explicitly find the Lagrange multipliers, the numbers such that Interpretation of the Lagrange multipliers In this section, we modify the constraint equations from the form to the form where the are real constants that are considered to be additional arguments of the Lagrangian expression . Often the Lagrange multipliers have an interpretation as some quantity of interest. For example, by parametrising the constraint's contour line, that is, if the Lagrangian expression is then So, is the rate of change of the quantity being optimized as a function of the constraint parameter. As examples, in Lagrangian mechanics the equations of motion are derived by finding stationary points of the action, the time integral of the difference between kinetic and potential energy. Thus, the force on a particle due to a scalar potential, , can be interpreted as a Lagrange multiplier determining the change in action (transfer of potential to kinetic energy) following a variation in the particle's constrained trajectory. In control theory this is formulated instead as costate equations. Moreover, by the envelope theorem the optimal value of a Lagrange multiplier has an interpretation as the marginal effect of the corresponding constraint constant upon the optimal attainable value of the original objective function: If we denote values at the optimum with a star (), then it can be shown that For example, in economics the optimal profit to a player is calculated subject to a constrained space of actions, where a Lagrange multiplier is the change in the optimal value of the objective function (profit) due to the relaxation of a given constraint (e.g. through a change in income); in such a context is the marginal cost of the constraint, and is referred to as the shadow price. Sufficient conditions Sufficient conditions for a constrained local maximum or minimum can be stated in terms of a sequence of principal minors (determinants of upper-left-justified sub-matrices) of the bordered Hessian matrix of second derivatives of the Lagrangian expression. 
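For reference before the worked examples, the multiple-constraint system described earlier (whose displayed equations also appear to have been stripped) takes the standard form below, with one multiplier per constraint; again, the sign convention on the multiplier terms varies between texts:

```latex
\mathcal{L}(x, \lambda_1, \ldots, \lambda_M)
  = f(x) - \sum_{k=1}^{M} \lambda_k\, g_k(x), \qquad
\nabla f(x) = \sum_{k=1}^{M} \lambda_k\, \nabla g_k(x), \qquad
g_k(x) = 0 \quad (k = 1, \ldots, M).
```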
Examples Example 1 Suppose we wish to maximize subject to the constraint The feasible set is the unit circle, and the level sets of are diagonal lines (with slope βˆ’1), so we can see graphically that the maximum occurs at and that the minimum occurs at For the method of Lagrange multipliers, the constraint is hence the Lagrangian function, is a function that is equivalent to when is set to . Now we can calculate the gradient: and therefore: Notice that the last equation is the original constraint. The first two equations yield By substituting into the last equation we have: so which implies that the stationary points of are Evaluating the objective function at these points yields Thus the constrained maximum is and the constrained minimum is . Example 2 Now we modify the objective function of ExampleΒ 1 so that we minimize instead of again along the circle Now the level sets of are still lines of slope βˆ’1, and the points on the circle tangent to these level sets are again and These tangency points are maxima of On the other hand, the minima occur on the level set for (since by its construction cannot take negative values), at and where the level curves of are not tangent to the constraint. The condition that correctly identifies all four points as extrema; the minima are characterized in by and the maxima by Example 3 This example deals with more strenuous calculations, but it is still a single constraint problem. Suppose one wants to find the maximum values of with the condition that the - and -coordinates lie on the circle around the origin with radius That is, subject to the constraint As there is just a single constraint, there is a single multiplier, say The constraint is identically zero on the circle of radius Any multiple of may be added to leaving unchanged in the region of interest (on the circle where our original constraint is satisfied). Applying the ordinary Lagrange multiplier method yields from which the gradient can be calculated: And therefore: (iii) is just the original constraint. (i) implies or If then by (iii) and consequently from (ii). If substituting this into (ii) yields Substituting this into (iii) and solving for gives Thus there are six critical points of Evaluating the objective at these points, one finds that Therefore, the objective function attains the global maximum (subject to the constraints) at and the global minimum at The point is a local minimum of and is a local maximum of as may be determined by consideration of the Hessian matrix of Note that while is a critical point of it is not a local extremum of We have Given any neighbourhood of one can choose a small positive and a small of either sign to get values both greater and less than This can also be seen from the Hessian matrix of evaluated at this point (or indeed at any of the critical points) which is an indefinite matrix. Each of the critical points of is a saddle point of Example 4 – Entropy Suppose we wish to find the discrete probability distribution on the points with maximal information entropy. 
This is the same as saying that we wish to find the least structured probability distribution on the points In other words, we wish to maximize the Shannon entropy equation: For this to be a probability distribution the sum of the probabilities at each point must equal 1, so our constraint is: We use Lagrange multipliers to find the point of maximum entropy, across all discrete probability distributions on We require that: which gives a system of equations, such that: Carrying out the differentiation of these equations, we get This shows that all are equal (because they depend on only). By using the constraint we find Hence, the uniform distribution is the distribution with the greatest entropy, among distributions on points. Example 5 – Numerical optimization The critical points of Lagrangians occur at saddle points, rather than at local maxima (or minima). Unfortunately, many numerical optimization techniques, such as hill climbing, gradient descent, some of the quasi-Newton methods, among others, are designed to find local maxima (or minima) and not saddle points. For this reason, one must either modify the formulation to ensure that it's a minimization problem (for example, by extremizing the square of the gradient of the Lagrangian as below), or else use an optimization technique that finds stationary points (such as Newton's method without an extremum seeking line search) and not necessarily extrema. As a simple example, consider the problem of finding the value of that minimizes constrained such that (This problem is somewhat untypical because there are only two values that satisfy this constraint, but it is useful for illustration purposes because the corresponding unconstrained function can be visualized in three dimensions.) Using Lagrange multipliers, this problem can be converted into an unconstrained optimization problem: The two critical points occur at saddle points where and . In order to solve this problem with a numerical optimization technique, we must first transform this problem such that the critical points occur at local minima. This is done by computing the magnitude of the gradient of the unconstrained optimization problem. First, we compute the partial derivative of the unconstrained problem with respect to each variable: If the target function is not easily differentiable, the differential with respect to each variable can be approximated as where is a small value. Next, we compute the magnitude of the gradient, which is the square root of the sum of the squares of the partial derivatives: (Since magnitude is always non-negative, optimizing over the squared-magnitude is equivalent to optimizing over the magnitude. Thus, the "square root" may be omitted from these equations with no expected difference in the results of optimization.) The critical points of occur at and , just as in Unlike the critical points in however, the critical points in occur at local minima, so numerical optimization techniques can be used to find them. Applications Control theory In optimal control theory, the Lagrange multipliers are interpreted as costate variables, and Lagrange multipliers are reformulated as the minimization of the Hamiltonian, in Pontryagin's minimum principle. Nonlinear programming The Lagrange multiplier method has several generalizations. In nonlinear programming there are several multiplier rules, e.g. the CarathΓ©odory–John Multiplier Rule and the Convex Multiplier Rule, for inequality constraints. 
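Example 5 above turns the saddle-point search into minimization of the squared gradient magnitude of the Lagrangian, which ordinary descent methods can handle. A rough numerical sketch of that idea follows; because the example's own formulas are not reproduced here, the concrete objective f(x) = x² and constraint x² = 1 are an illustrative stand-in, and SciPy is assumed to be available.

```python
# Rough sketch (assumed stand-in problem, not the article's own code) of the idea in
# Example 5: minimize the squared gradient magnitude of the Lagrangian
# L(x, lam) = x**2 + lam*(x**2 - 1).
import numpy as np
from scipy.optimize import minimize

def grad_sq(v):
    x, lam = v
    dL_dx = 2.0 * x + lam * 2.0 * x      # d/dx   [x**2 + lam*(x**2 - 1)]
    dL_dlam = x**2 - 1.0                 # d/dlam [x**2 + lam*(x**2 - 1)]
    return dL_dx**2 + dL_dlam**2         # minima of this surrogate sit at the saddle points

for start in ([0.5, -0.5], [-0.5, -0.5]):
    res = minimize(grad_sq, start, method='BFGS')
    print(np.round(res.x, 3))            # should land near ( 1, -1) and (-1, -1)
```

The two minima of the squared-gradient surrogate are the constrained optima x = ±1 with multiplier −1, so a plain descent method can locate them even though they are saddle points of the Lagrangian itself.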
Economics In many models in mathematical economics such as general equilibrium models, consumer behavior is implemented as utility maximization and firm behavior as profit maximization, both entities being subject to constraints such as budget constraints and production constraints. The usual way to determine an optimal solution is achieved by maximizing some function, where the constraints are enforced using Lagrangian multipliers. Power systems Methods based on Lagrange multipliers have applications in power systems, e.g. in distributed-energy-resources (DER) placement and load shedding. Safe Reinforcement Learning The method of Lagrange multipliers applies to constrained Markov decision processes. It naturally produces gradient-based primal-dual algorithms in safe reinforcement learning. Normalized solutions Considering the PDE problems with constraints, i.e., the study of the properties of the normalized solutions, Lagrange multipliers play an important role. See also Adjustment of observations Duality Gittins index Karush–Kuhn–Tucker conditions: generalization of the method of Lagrange multipliers Lagrange multipliers on Banach spaces: another generalization of the method of Lagrange multipliers Lagrange multiplier test in maximum likelihood estimation Lagrangian relaxation References Further reading External links Exposition β€” plus a brief discussion of Lagrange multipliers in the calculus of variations as used in physics. Additional text and interactive applets β€” Provides compelling insight in 2Β dimensions that at a minimizing point, the direction of steepest descent must be perpendicular to the tangent of the constraint curve at that point. β€” Course slides accompanying text on nonlinear optimization β€” Geometric idea behind Lagrange multipliers Multivariable calculus Mathematical optimization Mathematical and quantitative methods (economics)
Lagrange multiplier
[ "Mathematics" ]
4,431
[ "Mathematical optimization", "Multivariable calculus", "Mathematical analysis", "Calculus" ]
159,990
https://en.wikipedia.org/wiki/Manitoulin%20Island
Manitoulin Island ( ) is an island in Lake Huron, located within the borders of the Canadian province of Ontario, in the bioregion known as Laurentia. With an area of , it is the largest lake island in the world, large enough that it has over 100 lakes itself. In addition to the historic Anishinaabe and European settlement of the island, archaeological discoveries at Sheguiandah have demonstrated Paleo-Indian and Archaic cultures dating from 10,000 BC to 2,000 BC. The current name of the island is the English version, via French, of the Ottawa or Ojibwe name Manidoowaaling (α’ͺᓂᑝᐙᓕᓐᒃ), which means "cave of the spirit". It was named for an underwater cave where a powerful spirit is said to live. By the 19th century, the Odawa "l" was pronounced as "n". The same word with a newer pronunciation is used for the town Manitowaning (19th-century Odawa "Manidoowaaning"), which is located on Manitoulin Island near the underwater cave where legend has it that the spirit dwells. The modern Odawa name for Manitoulin Island is Mnidoo Mnis, meaning "Spirit Island". Manitoulin Island contains a number of lakes of its own. In order of size, its three most prominent lakes are Lake Manitou, Lake Kagawong and Lake Mindemoya. These three lakes in turn have islands within them, the largest of these being Lake Mindemoya's Treasure Island, located in the centre of Mindemoya. The island is the site of the administrative office of the band government of the Sheshegwaning First Nation. Geography and geology The island has an area of , making it the largest freshwater island in the world, the 174th largest island in the world and Canada's 31st largest island. The island separates the larger part of Lake Huron to its south and west from Georgian Bay to its east and the North Channel to the north. Manitoulin Island itself has 108 freshwater lakes, some of which have their own islands; in turn several of these "islands within islands" have their own ponds. Lake Manitou, at , is the largest lake in a freshwater island in the world, and Treasure Island in Lake Mindemoya is the largest island in a lake on an island in a lake in the world. Motors are prohibited on boats on Nameless Lake. The island also has four major rivers: the Kagawong, Manitou River, Blue Jay Creek in Michael's Bay and Mindemoya rivers, which provide spawning grounds for salmon and trout. The Manitoulin Streams Improvement Association was formed in 2000 and incorporated in 2007. The organization rehabilitates streams, rivers and creeks on Manitoulin Island to improve water quality and the fisheries resource. The Manitoulin Streams Improvement Association has conducted enhancement strategies for the Manitou River and Blue Jay Creek. The association has rehabilitated 17 major sites on the Manitou River and three major sites on Blue Jay Creek; it has completed work on Bass Lake Creek and Norton's Creek. The organization plans to start work on the Mindemoya River in 2010. Although culturally and politically considered part of Northern Ontario, the island is physiographically part of Southern Ontario, an "eastward extension of the Interior Plains, a region characterized by low relief and sedimentary underpinnings". The island consists mainly of dolomite as it is a continuation of the Bruce Peninsula and Niagara Escarpment. This geological rock formation runs south into Niagara Falls and continues into New York. The "Cup and Saucer Trail", which climbs the escarpment, provides a lookout over the island. 
Climate Manitoulin Island experiences a humid continental climate (Dfb) with moderation from Lake Huron. The island experiences warm to hot summers and cool to cold winters. Manitoulin Island has a comparable climate to that of Hokkaido, Japan (hemiboreal climate), despite being on the same latitude as Lugano, Switzerland, which has a temperate climate. The island is characterized by long stretches of marked seasonal differences. Culture The island has two incorporated towns (Northeastern Manitoulin and the Islands and Gore Bay), eight townships (Assiginack, Billings, Burpee and Mills, Central Manitoulin, Dawson, Gordon/Barrie Island, Robinson and Tehkummah) and six Anishinaabe reserves (M'Chigeeng, Sheguiandah, Sheshegwaning, Aundeck Omni Kaning, Wiikwemkoong and Zhiibaahaasing). During the summer, the population (12,600 permanent residents) on the island grows by more than a quarter due to tourists coming for boating and other activities in scenic surroundings. The island, along with several smaller neighbouring islands, constitutes the Manitoulin District census division of Ontario. Manitoulin Island's soil is relatively alkaline, which precludes the growth of common Northern Ontario flora such as blueberries, but allows for the island's trademark hawberries. These berries are so distinctive that people born on the island are referred to as "Haweaters". Each year on the August long weekend, the island hosts the Haweater Festival. The festival attracts numerous tourists; it features parades, firework shows, craft shows, and rural competitions such as horse pulls. Transportation Year-round motor-vehicle access to the island is available via the one-lane Little Current Swing Bridge, which crosses the North Channel at Little Current. From late May to early October, a daily passenger-vehicle ferry, the (Ojibwe for "Big Canoe"), travels between Tobermory on the tip of the Bruce Peninsula and South Baymouth. Winter ice prevents ferry service during that season. There are two airports on the island. Gore Bay-Manitoulin Airport, and Manitoulin East Municipal Airport, which opened in 1988. Both allow small planes access to the island and Border Patrol clearance. Demographics , the population was 13,255. Ethnic groups 59% White (European-Canadian) 40.6% Aboriginal (First Nations) 0.4% Black (African-Canadian) Religious groups 42.3% Protestant 37.3% Roman Catholic 2.7% other Christian 17.7% other/none The most common first languages on Manitoulin Island in 2016 were English (80.8%), Ojibwe (11.2%), French (2.8%), German (0.8%), and Odawa (0.8%). History In 1952 archaeologist Thomas E. Lee discovered Sheguiandah on the island, a prehistoric site. During excavation, he found artefacts of the Paleo-Indian and Archaic periods, dating at least to 10,000 BC and possibly to 30,000 years ago. Additional studies were undertaken by a team he led from the National Museum of Canada in succeeding years. Popular interest in the finds was so high that it contributed to Ontario's passing legislation in 1953 to protect its archaeological sites. A team performed excavations again in the early 1990s, applying new methods of analysis from botany and other scientific disciplines. They concluded the site was at least 9500 years old, making it one of the most significant in Ontario. Manitoulin means spirit island in Anishinaabemowin (Ojibwe language). The island is considered sacred by the Native Anishinaabe people, who identify as the "People of the Three Fires." 
This loose confederation is made up of the Ojibwe, Odawa and Potawatomi tribes. The North Channel was part of the route used by the French colonial voyageurs and coureurs des bois to reach Lake Superior. The first known European to settle on the island was Father Joseph Poncet, a French Jesuit, who set up a mission near Wiikwemkoong in 1648. The Jesuits called the island "Isle de Ste-Marie". In addition, the Five Nations of the Iroquois began raiding the island and area to try to control the fur trade with the French. As part of what was called the Beaver Wars, the Iroquois drove the Anishinaabe people from the island by 1650. According to Anishinaabe oral tradition, to purify the island from disease, the people burned their settlements as they left. The island was mostly uninhabited for nearly 150 years. Native people (Odawa, Ojibwe, Potawatomi) began to return to the island following the War of 1812 between Britain and the United States. They ceded the island to the British Crown in 1836; the government set aside the land as a refuge for Natives. In 1838 Jean-Baptiste Proulx re-established a Roman Catholic mission. The Jesuits took over the mission in 1845. In 1862, the government opened up the island to settlement by non-Native people by the Manitoulin Island treaty. As the Wikwemikong chief did not accept this treaty, his people's reserve was held back from being offered for development. That reserve remains unceded. On August 7, 1975, the Wikwemikong Unceded Indian Reserve reasserted their claim to sovereignty over the islands off the east end of Manitoulin Island, declaring, "Wikwemikong Band has jurisdiction over its reservation lands and surrounding waters." The province erected an Ontario Historical Plaque on the grounds of the Assiginack Museum to commemorate the Manitoulin Treaties' role in Ontario's history. Notable residents Carl Beam, Canadian artist of Native ancestry Kevin Closs, independent rock recording artist raised in Manitowaning Ethel Rogers Mulvany, Canadian social worker and teacher Daphne Odjig, artist, born and raised on the Wiikwemkoong Unceded Reserve Isabel Paterson, Canadian-American writer born on Manitoulin Island Autumn Peltier, global Indigenous rights and water activist, water protector, top finalist for 2022 International Children's Peace Prize Crystal Shawanda, country music artist from Wiikwemkoong Lucky Thompson, American jazz saxophone player References External links Manitoulin tourism information Manitoulin, an essay about Ojibway Indians and Lumbermen by Harold Nelson Burden (1895) Landforms of Manitoulin District Lake islands of Ontario Niagara Escarpment Dark-sky preserves in Canada Islands of Lake Huron in Ontario Sacred islands
Manitoulin Island
[ "Astronomy" ]
2,146
[ "Dark-sky preserves in Canada", "Dark-sky preserves" ]
160,076
https://en.wikipedia.org/wiki/Combinatorial%20species
In combinatorial mathematics, the theory of combinatorial species is an abstract, systematic method for deriving the generating functions of discrete structures, which allows one to not merely count these structures but give bijective proofs involving them. Examples of combinatorial species are (finite) graphs, permutations, trees, and so on; each of these has an associated generating function which counts how many structures there are of a certain size. One goal of species theory is to be able to analyse complicated structures by describing them in terms of transformations and combinations of simpler structures. These operations correspond to equivalent manipulations of generating functions, so producing such functions for complicated structures is much easier than with other methods. The theory was introduced, carefully elaborated and applied by Canadian researchers around AndrΓ© Joyal. The power of the theory comes from its level of abstraction. The "description format" of a structure (such as adjacency list versus adjacency matrix for graphs) is irrelevant, because species are purely algebraic. Category theory provides a useful language for the concepts that arise here, but it is not necessary to understand categories before being able to work with species. The category of species is equivalent to the category of symmetric sequences in finite sets. Definition of species Any species consists of individual combinatorial structures built on the elements of some finite set: for example, a combinatorial graph is a structure of edges among a given set of vertices, and the species of graphs includes all graphs on all finite sets. Furthermore, a member of a species can have its underying set relabeled by the elements of any other equinumerous set, for example relabeling the vertices of a graph gives "the same graph structure" on the new vertices, i.e. an isomorphic graph. This leads to the formal definition of a combinatorial species. Let be the category of finite sets, with the morphisms of the category being the bijections between these sets. A species is a functor For each finite set A in , the finite set F[A] is called the set of F-structures on A, or the set of structures of species F on A. Further, by the definition of a functor, if Ο† is a bijection between sets A and B, then F[Ο†] is a bijection between the sets of F-structures F[A] and F[B], called transport of F-structures along Ο†. For example, the "species of permutations" maps each finite set A to the set S[A] of all permutations of A (all ways of ordering A into a list), and each bijection f from A to another set B naturally induces a bijection (a relabeling) taking each permutation of A to a corresponding permutation of B, namely a bijection . Similarly, the "species of partitions" can be defined by assigning to each finite set the set of all its partitions, and the "power set species" assigns to each finite set its power set. The adjacent diagram shows a structure (represented by a red dot) built on a set of five distinct elements (represented by blue dots); a corresponding structure could be built out of any five elements. Two finite sets are in bijection whenever they have the same cardinality (number of elements); thus by definition the corresponding species sets are also in bijection, and the (finite) cardinality of depends only on the cardinality of A. In particular, the exponential generating series F(x) of a species F can be defined: where is the cardinality of for any set A having n elements; e.g., . 
Some examples: writing , The species of sets (traditionally called E, from the French "ensemble", meaning "set") is the functor which maps A to {A}. Then , so . The species S of permutations, described above, has . . The species T2 of ordered pairs (2-tuples) is the functor taking a set A to A2. Then and . Calculus of species Arithmetic on generating functions corresponds to certain "natural" operations on species. The basic operations are addition, multiplication, composition, and differentiation; it is also necessary to define equality on species. Category theory already has a way of describing when two functors are equivalent: a natural isomorphism. In this context, it just means that for each A there is a bijection between F-structures on A and G-structures on A, which is "well-behaved" in its interaction with transport. Species with the same generating function might not be isomorphic, but isomorphic species do always have the same generating function. Addition Addition of species is defined by the disjoint union of sets, and corresponds to a choice between structures. For species F and G, define (F + G)[A] to be the disjoint union (also written "+") of F[A] and G[A]. It follows that (F + G)(x) = F(x) + G(x). As a demonstration, take E+ to be the species of non-empty sets, whose generating function is E+(x) = e^x − 1, and 1 the species of the empty set, whose generating function is 1(x) = 1. It follows that the sum of the two species E = 1 + E+: in words, "a set is either empty or non-empty". Equations like this can be read as referring to a single structure, as well as to the entire collection of structures. Multiplication Multiplying species is slightly more complicated. It is possible to just take the Cartesian product of sets as the definition, but the combinatorial interpretation of this is not quite right. (See below for the use of this kind of product.) Rather than putting together two unrelated structures on the same set, the multiplication operator uses the idea of splitting the set into two components, constructing an F-structure on one and a G-structure on the other. This is a disjoint union over all possible binary partitions of A. It is straightforward to show that multiplication is associative and commutative (up to isomorphism), and distributive over addition. As for the generating series, (F · G)(x) = F(x)G(x). The diagram below shows one possible (F · G)-structure on a set with five elements. The F-structure (red) picks up three elements of the base set, and the G-structure (light blue) takes the rest. Other structures will have F and G splitting the set in a different way. The set (F · G)[A], where A is the base set, is the disjoint union of all such structures. The addition and multiplication of species are the most comprehensive expression of the sum and product rules of counting. Composition Composition, also called substitution, is more complicated again. The basic idea is to replace components of F with G-structures, forming (F ∘ G). As with multiplication, this is done by splitting the input set A; the disjoint subsets are given to G to make G-structures, and the set of subsets is given to F, to make the F-structure linking the G-structures. It is required for G to map the empty set to itself, in order for composition to work. The formal definition is: Here, P is the species of partitions, so P[A] is the set of all partitions of A.
This definition says that an element of (F ∘ G)[A] is made up of an F-structure on some partition of A, and a G-structure on each component of the partition. The generating series is . One such structure is shown below. Three G-structures (light blue) divide up the five-element base set between them; then, an F-structure (red) is built to connect the G-structures. These last two operations may be illustrated by the example of trees. First, define X to be the species "singleton" whose generating series is X(x) = x. Then the species Ar of rooted trees (from the French "arborescence") is defined recursively by Ar = X · E(Ar). This equation says that a tree consists of a single root and a set of (sub-)trees. The recursion does not need an explicit base case: it only generates trees in the context of being applied to some finite set. One way to think about this is that the Ar functor is being applied repeatedly to a "supply" of elements from the set – each time, one element is taken by X, and the others distributed by E among the Ar subtrees, until there are no more elements to give to E. This shows that algebraic descriptions of species are quite different from type specifications in programming languages like Haskell. Likewise, the species P can be characterised as P = E(E+): "a partition is a pairwise disjoint set of nonempty sets (using up all the elements of the input set)". The exponential generating series for P is , which is the series for the Bell numbers. Differentiation Differentiation of species intuitively corresponds to building "structures with a hole", as shown in the illustration below. Formally, where is some distinguished new element not present in . To differentiate the associated exponential series, the sequence of coefficients needs to be shifted one place to the "left" (losing the first term). This suggests a definition for species: F'[A] = F[A + {*}], where {*} is a singleton set and "+" is disjoint union. The more advanced parts of the theory of species use differentiation extensively, to construct and solve differential equations on species and series. The idea of adding (or removing) a single part of a structure is a powerful one: it can be used to establish relationships between seemingly unconnected species. For example, consider a structure of the species L of linear orders – lists of elements of the ground set. Removing an element of a list splits it into two parts (possibly empty); in symbols, this is L' = L·L. The exponential generating function of L is L(x) = 1/(1 − x), and indeed: The generalized differentiation formulas are to be found in earlier research by N. G. de Bruijn, published in 1964. The species C of cyclic permutations takes a set A to the set of all cycles on A. Removing a single element from a cycle reduces it to a list: C' = L. We can integrate the generating function of L to produce that for C. A nice example of integration of a species is the completion of a line (coordinatized by a field) with the infinite point and obtaining a projective line. Further operations There are a variety of other manipulations which may be performed on species. These are necessary to express more complicated structures, such as directed graphs or bigraphs. Pointing selects a single element in a structure. Given a species F, the corresponding pointed species F• is defined by F•[A] = A × F[A]. Thus each F•-structure is an F-structure with one element distinguished.
Pointing is related to differentiation by the relation F• = X·F', so F•(x) = x F'(x). The species of pointed sets, E•, is particularly important as a building block for many of the more complex constructions. The Cartesian product of two species is a species which can build two structures on the same set at the same time. It is different from the ordinary multiplication operator in that all elements of the base set are shared between the two structures. An (F × G)-structure can be seen as a superposition of an F-structure and a G-structure. Bigraphs could be described as the superposition of a graph and a set of trees: each node of the bigraph is part of a graph, and at the same time part of some tree that describes how nodes are nested. The generating function (F × G)(x) is the Hadamard or coefficient-wise product of F(x) and G(x). The species E• × E• can be seen as making two independent selections from the base set. The two points might coincide, unlike in X·X·E, where they are forced to be different. As functors, species F and G may be combined by functorial composition: (the box symbol is used, because the circle is already in use for substitution). This constructs an F-structure on the set of all G-structures on the set A. For example, if F is the functor taking a set to its power set, a structure of the composed species is some subset of the G-structures on A. If we now take G to be E• × E• from above, we obtain the species of directed graphs, with self-loops permitted. (A directed graph is a set of edges, and edges are pairs of nodes: so a graph is a subset of the set of pairs of elements of the node set A.) Other families of graphs, as well as many other structures, can be defined in this way. Software Operations with species are supported by SageMath and, using a special package, also by Haskell. Variants A species in k sorts is a functor . Here, the structures produced can have elements drawn from distinct sources. A functor to , the category of R-weighted sets for R a ring of power series, is a weighted species. If "finite sets with bijections" is replaced with "finite vector spaces with linear transformations", then one gets the notion of polynomial functor (after imposing some finiteness condition). See also Container (type theory) Notes References de Bruijn, N. G. (1964). Pólya's theory of counting. In E. F. Beckenbach (Ed.), Applied combinatorial mathematics (pp. 144–184) Labelle, Jacques. Quelques espèces sur les ensembles de petite cardinalité, Ann. Sc. Math. Québec 9.1 (1985): 31–58. Yves Chiricota, Classification des espèces moléculaires de degré 6 et 7, Ann. Sci. Math. Québec 17 (1993), no. 1, 11–37. François Bergeron, Gilbert Labelle, Pierre Leroux, Théorie des espèces et combinatoire des structures arborescentes, LaCIM, Montréal (1994). English version: Combinatorial Species and Tree-like Structures, Cambridge University Press (1998). Kerber, Adalbert (1999), Applied finite group actions, Algorithms and Combinatorics, 19 (2nd ed.), Berlin, New York: Springer-Verlag, MR 1716962, OCLC 247593131 External links Enumerative combinatorics Algebraic combinatorics
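The correspondence between species operations and generating-series arithmetic described in the Calculus of species section above can be exercised directly with truncated exponential generating series. The sketch below is an illustration, not code from the article; the representation and helper names are ad hoc, and plain Python with exact rationals is assumed. It recovers the Bell numbers from the identity P = E(E+), i.e. exp(e^x − 1).

```python
# Illustrative sketch: truncated exponential generating series for species operations,
# recovering the Bell numbers from P = E(E+).  Representation and names are ad hoc.
from fractions import Fraction
from math import factorial

N = 8  # truncation order

def series_mul(f, g):
    # product of species  <->  product of generating series (convolution)
    h = [Fraction(0)] * N
    for i in range(N):
        for j in range(N - i):
            h[i + j] += f[i] * g[j]
    return h

def series_compose(f, g):
    # composition of species  <->  substitution f(g(x)); needs g[0] == 0,
    # i.e. G puts no structures on the empty set.
    assert g[0] == 0
    out = [Fraction(0)] * N
    power = [Fraction(1)] + [Fraction(0)] * (N - 1)   # g(x)**0
    for k in range(N):
        for i in range(N):
            out[i] += f[k] * power[i]
        power = series_mul(power, g)                  # g(x)**(k+1)
    return out

E = [Fraction(1, factorial(n)) for n in range(N)]     # species of sets: e**x
E_plus = [Fraction(0)] + E[1:]                        # non-empty sets: e**x - 1

P = series_compose(E, E_plus)                         # partitions: exp(e**x - 1)
print([int(P[n] * factorial(n)) for n in range(N)])   # [1, 1, 2, 5, 15, 52, 203, 877]
```

Since addition of species corresponds to a coefficient-wise sum and the product to the convolution computed by series_mul, the same representation can also be used to check identities such as L' = L·L for linear orders.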
Combinatorial species
[ "Mathematics" ]
3,164
[ "Fields of abstract algebra", "Algebraic combinatorics", "Enumerative combinatorics", "Combinatorics" ]
160,125
https://en.wikipedia.org/wiki/Mitochondrial%20disease
Mitochondrial disease is a group of disorders caused by mitochondrial dysfunction. Mitochondria are the organelles that generate energy for the cell and are found in every cell of the human body except red blood cells. They convert the energy of food molecules into the ATP that powers most cell functions. Mitochondrial diseases take on unique characteristics both because of the way the diseases are often inherited and because mitochondria are so critical to cell function. A subclass of these diseases that have neuromuscular symptoms are known as mitochondrial myopathies. Types Mitochondrial disease can manifest in many different ways whether in children or adults. Examples of mitochondrial diseases include: Mitochondrial myopathy Maternally inherited diabetes mellitus and deafness (MIDD) While diabetes mellitus and deafness can be found together for other reasons, at an early age this combination can be due to mitochondrial disease, as may occur in Kearns–Sayre syndrome and Pearson syndrome Leber's hereditary optic neuropathy (LHON) LHON is an eye disorder characterized by progressive loss of central vision due to degeneration of the optic nerves and retina (apparently affecting between 1 in 30,000 and 1 in 50,000 people); visual loss typically begins in young adulthood Leigh syndrome, subacute necrotizing encephalomyelopathy after normal development the disease usually begins late in the first year of life, although onset may occur in adulthood a rapid decline in function occurs and is marked by seizures, altered states of consciousness, dementia, ventilatory failure Neuropathy, ataxia, retinitis pigmentosa, and ptosis (NARP) progressive symptoms as described in the acronym dementia Myoneurogenic gastrointestinal encephalopathy (MNGIE) gastrointestinal pseudo-obstruction neuropathy MERRF syndrome progressive myoclonic epilepsy "Ragged Red Fibers" are clumps of diseased mitochondria that accumulate in the subsarcolemmal region of the muscle fiber and appear when muscle is stained with modified GΓΆmΓΆri trichrome stain short stature hearing loss lactic acidosis exercise intolerance MELAS syndrome, mitochondrial encephalopathy, lactic acidosis, and stroke-like episodes Mitochondrial DNA depletion syndrome Conditions such as Friedreich's ataxia can affect the mitochondria but are not associated with mitochondrial proteins. Presentation Associated conditions Acquired conditions in which mitochondrial dysfunction has been involved include: ALS Alzheimer's disease, Bipolar disorder, schizophrenia, aging and senescence, anxiety disorders Cancer Cardiovascular disease Diabetes Huntington's disease Long Covid ME/CFS Parkinson's disease Sarcopenia The body, and each mutation, is modulated by other genome variants; the mutation that in one individual may cause liver disease might in another person cause a brain disorder. The severity of the specific defect may also be great or small. Some defects include exercise intolerance. Defects often affect the operation of the mitochondria and multiple tissues more severely, leading to multi-system diseases. It has also been reported that drug tolerant cancer cells have an increased number and size of mitochondria, which suggested an increase in mitochondrial biogenesis. A recent study in Nature Nanotechnology has reported that cancer cells can hijack the mitochondria from immune cells via physical tunneling nanotubes. 
As a rule, mitochondrial diseases are worse when the defective mitochondria are present in the muscles, cerebrum, or nerves, because these cells use more energy than most other cells in the body. Although mitochondrial diseases vary greatly in presentation from person to person, several major clinical categories of these conditions have been defined, based on the most common phenotypic features, symptoms, and signs associated with the particular mutations that tend to cause them. An outstanding question and area of research is whether ATP depletion or reactive oxygen species are in fact responsible for the observed phenotypic consequences. Cerebellar atrophy or hypoplasia has sometimes been reported to be associated. Causes Mitochondrial disorders may be caused by mutations (acquired or inherited), in mitochondrial DNA (mtDNA), or in nuclear genes that code for mitochondrial components. They may also be the result of acquired mitochondrial dysfunction due to adverse effects of drugs, infections, or other environmental causes. Nuclear DNA has two copies per cell (except for sperm and egg cells), one copy being inherited from the father and the other from the mother. Mitochondrial DNA, however, is inherited from the mother only (with some exceptions) and each mitochondrion typically contains between 2 and 10 mtDNA copies. During cell division the mitochondria segregate randomly between the two new cells. Those mitochondria make more copies, normally reaching 500 mitochondria per cell. As mtDNA is copied when mitochondria proliferate, they can accumulate random mutations, a phenomenon called heteroplasmy. If only a few of the mtDNA copies inherited from the mother are defective, mitochondrial division may cause most of the defective copies to end up in just one of the new mitochondria (for more detailed inheritance patterns, see human mitochondrial genetics). Mitochondrial disease may become clinically apparent once the number of affected mitochondria reaches a certain level; this phenomenon is called "threshold expression". Mitochondria possess many of the same DNA repair pathways as nuclei doβ€”but not all of them; therefore, mutations occur more frequently in mitochondrial DNA than in nuclear DNA (see Mutation rate). This means that mitochondrial DNA disorders may occur spontaneously and relatively often. Defects in enzymes that control mitochondrial DNA replication (all of which are encoded for by genes in the nuclear DNA) may also cause mitochondrial DNA mutations. Most mitochondrial function and biogenesis is controlled by nuclear DNA. Human mitochondrial DNA encodes 13 proteins of the respiratory chain, while most of the estimated 1,500 proteins and components targeted to mitochondria are nuclear-encoded. Defects in nuclear-encoded mitochondrial genes are associated with hundreds of clinical disease phenotypes including anemia, dementia, hypertension, lymphoma, retinopathy, seizures, and neurodevelopmental disorders. A study by Yale University researchers (published in the February 12, 2004, issue of the New England Journal of Medicine) explored the role of mitochondria in insulin resistance among the offspring of patients with type 2 diabetes. Other studies have shown that the mechanism may involve the interruption of the mitochondrial signaling process in body cells (intramyocellular lipids). A study conducted at the Pennington Biomedical Research Center in Baton Rouge, Louisiana showed that this, in turn, partially disables the genes that produce mitochondria. 
Mechanisms The effective overall energy unit for the available body energy is referred to as the daily glycogen generation capacity, and is used to compare the mitochondrial output of affected or chronically glycogen-depleted individuals to healthy individuals. The glycogen generation capacity is entirely dependent on, and determined by, the operating levels of the mitochondria in all of the cells of the human body; however, the relation between the energy generated by the mitochondria and the glycogen capacity is very loose and is mediated by many biochemical pathways. The energy output of full healthy mitochondrial function can be predicted exactly by a complicated theoretical argument, but this argument is not straightforward, as most energy is consumed by the brain and is not easily measurable. Diagnosis Mitochondrial diseases are usually detected by analysing muscle samples, where the presence of these organelles is higher. The most common tests for the detection of these diseases are: Southern blot to detect large deletions or duplications Polymerase chain reaction and specific mutation testing Sequencing Treatments Although research is ongoing, treatment options are currently limited; vitamins are frequently prescribed, though the evidence for their effectiveness is limited. Pyruvate has been proposed in 2007 as a treatment option. N-acetyl cysteine reverses many models of mitochondrial dysfunction. Mood disorders In the case of mood disorders, specifically bipolar disorder, it is hypothesized that N-acetyl-cysteine (NAC), acetyl-L-carnitine (ALCAR), S-adenosylmethionine (SAMe), coenzyme Q10 (CoQ10), alpha-lipoic acid (ALA), creatine monohydrate (CM), and melatonin could be potential treatment options. Gene therapy prior to conception Mitochondrial replacement therapy (MRT), where the nuclear DNA is transferred to another healthy egg cell leaving the defective mitochondrial DNA behind, is an IVF treatment procedure. Using a similar pronuclear transfer technique, researchers at Newcastle University led by Douglass Turnbull successfully transplanted healthy DNA in human eggs from women with mitochondrial disease into the eggs of women donors who were unaffected. In such cases, ethical questions have been raised regarding biological motherhood, since the child receives genes and gene regulatory molecules from two different women. Using genetic engineering in attempts to produce babies free of mitochondrial disease is controversial in some circles and raises important ethical issues. A male baby was born in Mexico in 2016 from a mother with Leigh syndrome using MRT. In September 2012 a public consultation was launched in the UK to explore the ethical issues involved. Human genetic engineering was used on a small scale to allow infertile women with genetic defects in their mitochondria to have children. In June 2013, the United Kingdom government agreed to develop legislation that would legalize the 'three-person IVF' procedure as a treatment to fix or eliminate mitochondrial diseases that are passed on from mother to child. The procedure could be offered from 29 October 2015 once regulations had been established. Embryonic mitochondrial transplant and protofection have been proposed as a possible treatment for inherited mitochondrial disease, and allotopic expression of mitochondrial proteins as a radical treatment for mtDNA mutation load. 
In June 2018 Australian Senate's Senate Community Affairs References Committee recommended a move towards legalising Mitochondrial replacement therapy (MRT). Research and clinical applications of MRT were overseen by laws made by federal and state governments. State laws were, for the most part, consistent with federal law. In all states, legislation prohibited the use of MRT techniques in the clinic, and except for Western Australia, research on a limited range of MRT was permissible up to day 14 of embryo development, subject to a license being granted. In 2010, the Hon. Mark Butler MP, then Federal Minister for Mental Health and Ageing, had appointed an independent committee to review the two relevant acts: the Prohibition of Human Cloning for Reproduction Act 2002 and the Research Involving Human Embryos Act 2002. The committee's report, released in July 2011, recommended the existing legislation remain unchanged Currently, human clinical trials are underway at GenSight Biologics (ClinicalTrials.gov # NCT02064569) and the University of Miami (ClinicalTrials.gov # NCT02161380) to examine the safety and efficacy of mitochondrial gene therapy in Leber's hereditary optic neuropathy. Epidemiology About 1 in 4,000 children in the United States will develop mitochondrial disease by the age of 10 years. Up to 4,000 children per year in the US are born with a type of mitochondrial disease. Because mitochondrial disorders contain many variations and subsets, some particular mitochondrial disorders are very rare. The average number of births per year among women at risk for transmitting mtDNA disease is estimated to approximately 150 in the United Kingdom and 800 in the United States. History The first pathogenic mutation in mitochondrial DNA was identified in 1988; from that time to 2016, around 275 other disease-causing mutations were identified. Notable cases Notable people with mitochondrial disease include: Mattie Stepanek, a poet, peace advocate, and motivational speaker who had dysautonomic mitochondrial myopathy, and who died at age 13. Rocco Baldelli, a coach and former center fielder in Major League Baseball who had to retire from active play at age 29 due to mitochondrial channelopathy. Charlie Gard, a British boy who had mitochondrial DNA depletion syndrome; decisions about his care were taken to various law courts. Charles Darwin, a nineteenth century naturalist who suffered from a disabling illness, is speculated to have MELAS syndrome. References External links International Mito Patients (IMP) Molecular biology Mitochondrial genetics
Mitochondrial disease
[ "Chemistry", "Biology" ]
2,548
[ "Biochemistry", "Molecular biology" ]
160,220
https://en.wikipedia.org/wiki/Ratite
Ratites () are a polyphyletic group consisting of all birds within the infraclass Palaeognathae that lack keels and cannot fly. They are mostly large, long-necked, and long-legged, the exception being the kiwi, which is also the only nocturnal extant ratite. The understanding of relationships within the paleognath clade has been in flux. Previously, all the flightless members had been assigned to the order Struthioniformes, which is more recently regarded as containing only the ostrich. The modern bird superorder Palaeognathae consists of ratites and the flighted Neotropic tinamous (compare to Neognathae). Unlike other flightless birds, the ratites have no keel on their sternumβ€”hence the name, from the Latin ('raft', a vessel which has no keelβ€”in contradistinction to extant flighted birds with a keel). Without this to anchor their wing muscles, they could not have flown even if they had developed suitable wings. Ratites are a polyphyletic group; tinamous fall within them, and are the sister group of the extinct moa. This implies that flightlessness is a trait that evolved independently multiple times in different ratite lineages. Most parts of the former supercontinent Gondwana have ratites, or did have until the fairly recent past. So did Europe in the Paleocene and Eocene, from where the first flightless paleognaths are known. Ostriches were present in Asia as recently as the Holocene, although the genus is thought to have originated in Africa. However, the ostrich order may have evolved in Eurasia. A recent study posits a Laurasian origin for the clade. Geranoidids, which may have been ratites, existed in North America. Species Living forms The African ostrich is the largest living ratite. A large member of this species can be nearly tall, weigh as much as , and can outrun a horse. Of the living species, the Australian emu is next in height, reaching up to tall and about . Like the ostrich, it is a fast-running, powerful bird of the open plains and woodlands. Also native to Australia and the islands to the north are the three species of cassowary. Shorter than an emu, but heavier and solidly built, cassowaries prefer thickly vegetated tropical forest. They can be dangerous when surprised or cornered because of their razor-sharp talons. In New Guinea, cassowary eggs are brought back to villages and the chicks raised for eating as a much-prized delicacy, despite (or perhaps because of) the risk they pose to life and limb. They reach up to tall and weigh as much as South America has two species of rhea, large fast-running birds of the Pampas. The larger American rhea grows to about tall and usually weighs . The smallest ratites are the five species of kiwi from New Zealand. Kiwi are chicken-sized, shy, and nocturnal. They nest in deep burrows and use a highly developed sense of smell to find small insects and grubs in the soil. Kiwi are notable for laying eggs that are very large in relation to their body size. A kiwi egg may equal 15 to 20 percent of the body mass of a female kiwi. The smallest species of kiwi is the little spotted kiwi, at and . Holocene extinct forms At least nine species of moa lived in New Zealand before the arrival of humans, ranging from turkey-sized to the giant moa Dinornis robustus with a height of and weighing about . They became extinct by A.D. 1400 due to hunting by Māori settlers, who arrived around A.D. 1280. Aepyornis maximus, the "elephant bird" of Madagascar, was the heaviest bird ever known. Although shorter than the tallest moa, a large A. 
maximus could weigh over and stand up to tall. Accompanying it were three other species of Aepyornis as well as three species of the smaller genus Mullerornis. All these species went into decline following the arrival of humans on Madagascar around 2,000 years ago, and were gone by the 17th or 18th century if not earlier. Classification There are two taxonomic approaches to ratite classification: one combines the groups as families in the order Struthioniformes, while the other supposes that the lineages evolved mostly independently and thus elevates the families to order rank (Rheiformes, Casuariformes etc.). Evolution The longstanding story of ratite evolution was that they share a common flightless ancestor that lived in Gondwana, whose descendants were isolated from each other by continental drift, which carried them to their present locations. Supporting this idea, some studies based on morphology, immunology and DNA sequencing reported that ratites are monophyletic. Cracraft's 1974 biogeographic vicariance hypothesis suggested that ancestral flightless paleognaths, the ancestors of ratites, were present and widespread in Gondwana during the Late Cretaceous. As the supercontinent fragmented due to plate tectonics, they were carried by plate movements to their current positions and evolved into the species present today. The earliest known ratite fossils date to the Paleocene epoch about 56 million years ago (e.g., Diogenornis, a possible early relative of the rhea). However, more primitive paleognaths are known from several million years earlier, and the classification and membership of the Ratitae itself is uncertain. Some of the earliest ratites occur in Europe. Recent analyses of genetic variation between the ratites do not support this simple picture. The ratites may have diverged from one another too recently to share a common Gondwanan ancestor. Also, the Middle Eocene ratites such as Palaeotis and Remiornis from Central Europe may imply that the "out-of-Gondwana" hypothesis is oversimplified. Molecular phylogenies of the ratites have generally placed ostriches in the basal position and among extant ratites, placed rheas in the second most basal position, with Australo-Pacific ratites splitting up last; they have also shown that both the latter groups are monophyletic. Early mitochondrial genetic studies that failed to make ostriches basal were apparently compromised by the combination of rapid early radiation of the group and long terminal branches. A morphological analysis that created a basal New Zealand clade has not been corroborated by molecular studies. A 2008 study of nuclear genes shows ostriches branching first, followed by rheas and tinamous, then kiwi splitting from emus and cassowaries. In more recent studies, moas and tinamous were shown to be sister groups, and elephant birds were shown to be most closely related to the New Zealand kiwi. Additional support for the latter relationship was obtained from morphological analysis. The finding that tinamous nest within this group, originally based on twenty nuclear genes and corroborated by a study using forty novel nuclear loci makes 'ratites' polyphyletic rather than monophyletic, if we exclude the tinamous. Since tinamous are weak fliers, this raises interesting questions about the evolution of flightlessness in this group. The branching of the tinamous within the ratite radiation suggests flightlessness evolved independently among ratites at least three times. 
More recent evidence suggests this happened at least six times, or once in each major ratite lineage. Re-evolution of flight in the tinamous would be an alternative explanation, but such a development is without precedent in avian history, while loss of flight is commonplace. By 2014, a mitochondrial DNA phylogeny including fossil members placed ostriches on the basal branch, followed by rheas, then a clade consisting of moas and tinamous, followed by the final two branches: a clade of emus plus cassowaries and one of elephant birds plus kiwis. Vicariant speciation based on the plate tectonic split-up of Gondwana followed by continental drift would predict that the deepest phylogenetic split would be between African and all other ratites, followed by a split between South American and Australo-Pacific ratites, roughly as observed. However, the elephant bird–kiwi relation appears to require dispersal across oceans by flight, as apparently does the colonization of New Zealand by the moa and possibly the back-dispersal of tinamous to South America, if the latter occurred. The phylogeny as a whole suggests not only multiple independent origins of flightlessness, but also of gigantism (at least five times). Gigantism in birds tends to be insular; however, a ten-million-year-long window of opportunity for evolution of avian gigantism on continents may have existed following the extinction of the non-avian dinosaurs, in which ratites were able to fill vacant herbivorous niches before mammals attained large size. Some authorities, though, have been skeptical of the new findings and conclusions. Kiwi and tinamous are the only palaeognath lineages not to evolve gigantism, perhaps because of competitive exclusion by giant ratites already present on New Zealand and South America when they arrived or arose. The fact that New Zealand has been the only land mass to recently support two major lineages of flightless ratites may reflect the near total absence of native mammals, which allowed kiwi to occupy a mammal-like nocturnal niche. However, various other landmasses such as South America and Europe have supported multiple lineages of flightless ratites that evolved independently, undermining this competitive exclusion hypothesis. Most recently, studies on genetic and morphological divergence and fossil distribution show that paleognaths as a whole probably had an origin in the northern hemisphere. Early Cenozoic northern hemisphere paleognaths such as Lithornis, Pseudocrypturus, Paracathartes and Palaeotis appear to be the most basal members of the clade. The various ratite lineages were probably descended from flying ancestors that independently colonised South America and Africa from the north, probably initially in South America. From South America, they could have traveled overland to Australia via Antarctica, (by the same route marsupials are thought to have used to reach Australia) and then reached New Zealand and Madagascar via "sweepstakes" dispersals (rare low probability dispersal methods, such as long distance rafting) across the oceans. Gigantism would have evolved subsequent to trans-oceanic dispersals. Loss of flight Loss of flight allows birds to eliminate the costs of maintaining various flight-enabling adaptations like high pectoral muscle mass, hollow bones and a light build, et cetera. The basal metabolic rate of flighted species is much higher than that of flightless terrestrial birds. 
But energetic efficiency can only help explain the loss of flight when the benefits of flying are not critical to survival. Research on flightless rails indicates the flightless condition evolved in the absence of predators. This shows flight to be generally necessary for survival and dispersal in birds. In apparent contradiction to this, many landmasses occupied by ratites are also inhabited by predatory mammals. However, the K–Pg extinction event created a window of time with large predators absent that may have allowed the ancestors of extant flightless ratites to evolve flightlessness. They subsequently underwent selection for large size. One hypothesis suggests that as predation pressure decreases on islands with low raptor species richness and no mammalian predators, the need for large, powerful flight muscles that make for a quick escape decreases. Moreover, raptor species tend to become generalist predators on islands with low species richness, as opposed to specializing in the predation of birds. An increase in leg size compensates for a reduction in wing length in insular birds that have not lost flight by providing a longer lever to increase force generated during the thrust that initiates takeoff. Description Ratites in general have many physical characteristics in common, although many are not shared by the family Tinamidae, or tinamous. First, the breast muscles are underdeveloped. They do not have keeled sterna. Their wishbones (furculae) are almost absent. They have simplified wing skeletons and musculature. Their legs are stronger and do not have air chambers, except the femurs. Their tail and flight feathers have retrogressed or have become decorative plumes. They have no feather vanes, which means they do not need to oil their feathers, hence they have no preen glands. They have no separation of pterylae (feathered areas) and apteria (non-feathered areas), and finally, they have palaeognathous palates. Ostriches have the greatest dimorphism, rheas show some dichromatism during the breeding season. Emus, cassowaries, and kiwis show some dimorphism, predominantly in size. While the ratites share a lot of similarities, they also have major differences. Ostriches have only two toes, with one being much larger than the other. Cassowaries have developed long inner toenails, used defensively. Ostriches and rheas have prominent wings; although they do not use them to fly, they do use them in courtship and predator distraction. Without exception, ratite chicks are capable of swimming and even diving. On an allometric basis, paleognaths have generally smaller brains than neognaths. Kiwis are exceptions to this trend, and possess proportionally larger brains comparable to those of parrots and songbirds, though evidence for similar advanced cognitive skills is currently lacking. Gallery of living species Behavior and ecology Feeding and diet Ratite chicks tend to be more omnivorous or insectivorous; similarities in adults end with feeding, as they all vary in diet and length of digestive tract, which is indicative of diet. Ostriches, with the longest tracts at , are primarily herbivorous. Rheas' tracts are next longest at , and they also have caeca. They are also mainly herbivores, concentrating on broad-leafed plants. However, they will eat insects if the opportunity arises. Emus have tracts of length, and have a more omnivorous diet, including insects and other small animals. Cassowaries have next to the shortest tracts at . 
Finally, kiwi have the shortest tracts and eat earthworms, insects, and other similar creatures. Moas and elephant birds were the largest native herbivores in their faunas, far larger than contemporary herbivorous mammals in the latter's case. Some extinct ratites might have had odder lifestyles, such as the narrow-billed Diogenornis and Palaeotis, compared to the shorebird-like lithornithids, and could imply similar animalivorous diets. Reproduction Ratites are different from the flying birds in that they needed to adapt or evolve certain features to protect their young. First and foremost is the thickness of the shells of their eggs. Their young are hatched more developed than most and they can run or walk soon thereafter. Also, most ratites have communal nests, where they share the incubating duties with others. Ostriches, and great spotted kiwis, are the only ratites where the female incubates; they share the duties, with the males incubating at night. Cassowaries and emu are polyandrous, with males incubating eggs and rearing chicks with no obvious contribution from females. Ostriches and rheas are polygynous with each male courting several females. Male rheas are responsible for building nests and incubating while ostrich males incubate only at night. Kiwis stand out as the exception with extended monogamous reproductive strategies where either the male alone or both sexes incubate a single egg. Unlike most birds, male ratites have a phallus that is inserted into the female's cloaca during copulation. Ratites and humans Ratites and humans have had a long relationship starting with the use of the egg for water containers, jewelry, or other art medium. Male ostrich feathers were popular for hats during the 18th century, which led to hunting and sharp declines in populations. Ostrich farming grew out of this need, and humans harvested feathers, hides, eggs, and meat from the ostrich. Emu farming also became popular for similar reasons and for their emu oil. Rhea feathers are popular for dusters, and eggs and meat are used for chicken and pet feed in South America. Ratite hides are popular for leather products like shoes. United States regulation The USDA's Food Safety and Inspection Service (FSIS) began a voluntary, fee-for-service ratite inspection program in 1995 to help the fledgling industry improve the marketability of the meat. A provision in the FY2001 USDA appropriations act (P.L. 106–387) amended the Poultry Products Inspection Act to make federal inspection of ratite meat mandatory as of April 2001 (21 U.S.C. 451 et seq.). See also List of Struthioniformes by population References External links Websites With Information On Ratites Flightless birds Extant Thanetian first appearances Taxa named by William Plane Pycraft Polyphyletic groups
Ratite
[ "Biology" ]
3,596
[ "Phylogenetics", "Polyphyletic groups" ]
160,223
https://en.wikipedia.org/wiki/Media%20player%20software
Media player software is a type of application software for playing multimedia computer files like audio and video files. Media players commonly display standard media control icons known from physical devices such as tape recorders and CD players: play (▢), pause (⏸), fast-forward (⏩), rewind (βͺ), and stop (⏹) buttons. In addition, they generally have progress bars (or "playback bars"), which are sliders to locate the current position in the duration of the media file. Mainstream operating systems have at least one default media player. For example, Windows comes with Windows Media Player, Microsoft Movies & TV and Groove Music, while macOS comes with QuickTime Player and Music. Linux distributions come with different media players, such as SMPlayer, Amarok, Audacious, Banshee, MPlayer, mpv, Rhythmbox, Totem, VLC media player, and xine. Android comes with YouTube Music for audio and Google Photos for video, and smartphone vendors such as Samsung may bundle custom software. Functionality focus The basic feature set of media players includes a seek bar to facilitate searching long timelines of files, a timer with the current and total playback time, playback controls (play, pause, previous, next, stop), playlists, a "repeat" mode, and a "shuffle" (or "random") mode. Different media players have different goals and feature sets. Video players are a group of media players whose features are geared more towards playing digital video. For example, Windows DVD Player plays DVD-Video discs exclusively. Media Player Classic can play individual audio and video files, but many of its features, such as color correction, picture sharpening, zooming, hotkey sets, DVB support, and subtitle support, are only useful for video material such as films and cartoons. Audio players, on the other hand, specialize in digital audio. For example, AIMP exclusively plays audio formats. MediaMonkey can play both audio and video formats, but many of its features, including the media library, lyric discovery, music visualization, online radio, audiobook indexing, and tag editing, are geared toward consumption of audio material; watching video files with it can be cumbersome. General-purpose media players also exist. For example, Windows Media Player has dedicated features for both audio and video material, although it cannot match the feature set of Media Player Classic and MediaMonkey combined. By default, videos are played with the full frame visible while filling either the width or the height of the viewport, so that they appear as large as possible without cropping. Options to change the video's scaling and aspect ratio may include filling the viewport through either stretching or cropping, and a "100% view" in which each pixel of the video covers exactly one pixel on the screen. Zooming into the field of view during playback may be implemented through a slider or with pinch zoom on touch screens, and moving the field of view may be implemented by dragging inside the viewport or by moving a rectangle inside a miniature view of the entire field of view that denotes the magnified area. Media player software may have the ability to adjust appearance and acoustics during playback using effects such as mirroring, rotating, cropping, cloning, adjusting colours, deinterlacing, and equalizing and visualizing audio. Easter eggs may be featured, such as a puzzle game in VLC Media Player. 
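The default "fit" behaviour, the cropped or stretched "fill" options, and the "100% view" described above all reduce to a simple scale-factor calculation. The following is a minimal sketch in Python; the function and parameter names are illustrative assumptions and are not taken from any particular player.

```python
# Minimal sketch of common video scaling modes (illustrative only).
def scaled_size(video_w, video_h, view_w, view_h, mode="fit"):
    """Return the displayed (width, height) of a video for a given mode."""
    if mode == "100%":
        # One video pixel maps to exactly one screen pixel.
        return video_w, video_h
    scale_w = view_w / video_w
    scale_h = view_h / video_h
    # "fit" keeps the whole frame visible (letterboxing/pillarboxing);
    # "fill" covers the whole viewport by cropping the overflow.
    scale = min(scale_w, scale_h) if mode == "fit" else max(scale_w, scale_h)
    return round(video_w * scale), round(video_h * scale)

if __name__ == "__main__":
    # A 1920x1080 video shown in a 1280x1024 window:
    print(scaled_size(1920, 1080, 1280, 1024, "fit"))   # (1280, 720), letterboxed
    print(scaled_size(1920, 1080, 1280, 1024, "fill"))  # (1820, 1024), cropped at the sides
```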
Still snapshots may be extracted directly from a video frame or captured through a screenshot, the former of which is preferred since it preserves the video's original dimensions (height and width). Video players may show a tooltip bubble previewing footage at the position hovered over with the mouse cursor. A preview tooltip for the seek bar has been implemented on a few smartphones through a stylus or a self-capacitive touch screen able to detect a floating finger. These include the Samsung Galaxy S4, S5 (finger), Note 2, Note 4 (stylus), and Note 3 (both). Streaming media players may indicate buffered segments of the media in the seek bar. 3D video players 3D video players are used to play 2D video in 3D format. A high-quality three-dimensional video presentation requires that each frame of a motion picture be embedded with information on the depth of objects present in the scene. This process involves shooting the video with special equipment from two distinct perspectives, or modeling and rendering each frame as a collection of objects composed of 3D vertices and textures, much like in any modern video game, to achieve special effects. Tedious and costly, this method is only used in a small fraction of movies produced worldwide, while most movies remain in the form of traditional 2D images. It is, however, possible to give an otherwise two-dimensional picture the appearance of depth. Using a technique known as anaglyph processing, a "flat" picture can be transformed so as to give an illusion of depth when viewed through anaglyph glasses (usually red-cyan). An image viewed through anaglyph glasses appears to have both protruding and deeply embedded objects in it, at the expense of somewhat distorted colors. The method itself is old, dating back to the mid-19th century, but it is only with recent advances in computer technology that it has become possible to apply this kind of transformation to a series of frames in a motion picture reasonably fast or even in real time, i.e. as the video is being played back. Several implementations exist in the form of 3D video players that render conventional 2D video in anaglyph 3D, as well as in the form of 3D video converters that transform video into stereoscopic anaglyph and transcode it for playback with regular software or hardware video players. Examples Well-known examples of media player software include Windows Media Player, VLC media player, iTunes, Winamp, Media Player Classic, MediaMonkey, foobar2000, AIMP, MusicBee and JRiver Media Center. Most of these also include music library managers. Although media players are often multimedia, they can be primarily designed for a specific medium. For example, Media Player Classic and VLC media player are video-focused while Winamp and iTunes are music-focused, despite all of them supporting both types of media. Home theater PC A home theater PC or media center computer is a convergence device that combines some or all the capabilities of a personal computer with a software application that supports video, photo, audio playback, and sometimes video recording functionality. Although computers with some of these capabilities were available from the late 1980s, the "Home Theater PC" term first appeared in the mainstream press in 1996. Since 2007, other types of consumer electronics, including gaming systems and dedicated media devices, have crossed over to manage video and music content. The term "media center" also refers to specialized computer programs designed to run on standard personal computers. 
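To make the anaglyph idea described under 3D video players above concrete, the following is a minimal sketch assuming the left- and right-eye frames are already available as RGB NumPy arrays; it only composes a red-cyan anaglyph from a stereo pair, while estimating depth from a single 2D frame, as real 2D-to-3D converters must do, is a much harder problem and is not attempted here.

```python
# Minimal red-cyan anaglyph composition from a stereo pair (illustrative only).
import numpy as np

def red_cyan_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Compose an anaglyph from two (H, W, 3) RGB frames.

    The red channel is taken from the left-eye frame and the green and
    blue channels from the right-eye frame, so red-cyan glasses deliver
    a different image to each eye.
    """
    if left.shape != right.shape:
        raise ValueError("left and right frames must have the same shape")
    out = right.copy()          # keep green and blue from the right eye
    out[..., 0] = left[..., 0]  # replace the red channel with the left eye's
    return out
```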
See also Comparison of video player software Comparison of audio player software References Multimedia
Media player software
[ "Technology" ]
1,430
[ "Multimedia" ]
160,224
https://en.wikipedia.org/wiki/Interface%20description%20language
An interface description language or interface definition language (IDL) is a generic term for a language that lets a program or object written in one language communicate with another program written in an unknown language. IDLs are usually used to describe data types and interfaces in a language-independent way, for example, between those written in C++ and those written in Java. IDLs are commonly used in remote procedure call software. In these cases the machines at either end of the link may be using different operating systems and computer languages. IDLs offer a bridge between the two different systems. Software systems based on IDLs include Sun's ONC RPC, The Open Group's Distributed Computing Environment, IBM's System Object Model, the Object Management Group's CORBA (which implements OMG IDL, an IDL based on DCE/RPC) and Data Distribution Service, Mozilla's XPCOM, Microsoft's Microsoft RPC (which evolved into COM and DCOM), Facebook's Thrift and WSDL for Web services. Examples AIDL: Java-based, for Android; supports local and remote procedure calls, can be accessed from native applications by calling through Java Native Interface (JNI) Apache Thrift: from Apache, originally developed by Facebook Avro IDL: for the Apache Avro system ASN.1 Cap'n Proto: created by its former maintainer, avoids some of the perceived shortcomings of Protocol Buffers. Concise Data Definition Language (CDDL, RFC 8610): A Notation for CBOR and JSON data structures CortoScript: Describe data and/or interfaces for systems that require Semantic interoperability Etch: Cisco's Etch Cross-platform Service Description Language Extensible Data Notation (EDN): Clojure data format, similar to JSON FlatBuffers: Serialization format from Google supporting zero-copy deserialization Franca IDL: the open-source Franca interface definition language FIDL: Interface description language for the Fuchsia Operating System designed for writing app components in C, C++, Dart, Go and Rust. IDL specification language: the original Interface Description Language IPL: Imandra Protocol Language JSON Web-Service Protocol (JSON-WSP) Lightweight Imaging Device Interface Language Microsoft Interface Definition Language (MIDL): the Microsoft extension of OMG IDL to add support for Component Object Model (COM) and Distributed Component Object Model (DCOM) OMG IDL: standardized by Object Management Group, used in CORBA (for DCE/RPC services) and DDS (for data modeling), also selected by the W3C for exposing the DOM of XML, HTML, and CSS documents OpenAPI Specification: a standard for Web APIs, used by Swagger and other technologies. Open Service Interface Definitions Protocol Buffers: Google's IDL RESTful Service Description Language (RSDL) Smithy: An AWS-invented protocol-agnostic interface definition language. 
Specification Language for Internet Communications Engine (Ice: Slice) Universal Network Objects: OpenOffice.org's component model Web Application Description Language (WADL) Web IDL by WHATWG: can be used to describe interfaces that are intended to be implemented in web browsers Web Services Description Language (WSDL) XCB: X protocol description language for X Window System Cross Platform Interface Description Language (XPIDL): Mozilla's way to specify XPCOM interfaces See also Component-based software engineering Interface-based programming Java Interface Definition Language List of computing and IT abbreviations Universal Interface Language User interface markup language References External links Documenting Software Architecture: Documenting Interfaces (PDF) OMG Specification of OMG IDL OMG Tutorial on OMG IDL Data modeling languages Remote procedure call Specification languages Domain-specific programming languages
Interface description language
[ "Engineering" ]
793
[ "Software engineering", "Specification languages" ]
160,277
https://en.wikipedia.org/wiki/Carbon%20fibers
Carbon fibers or carbon fibres (alternatively CF, graphite fiber or graphite fibre) are fibers about in diameter and composed mostly of carbon atoms. Carbon fibers have several advantages: high stiffness, high tensile strength, high strength to weight ratio, high chemical resistance, high-temperature tolerance, and low thermal expansion. These properties have made carbon fiber very popular in aerospace, civil engineering, military, motorsports, and other competition sports. However, they are relatively expensive compared to similar fibers, such as glass fiber, basalt fibers, or plastic fibers. To produce a carbon fiber, the carbon atoms are bonded together in crystals that are more or less aligned parallel to the fiber's long axis as the crystal alignment gives the fiber a high strength-to-volume ratio (in other words, it is strong for its size). Several thousand carbon fibers are bundled together to form a tow, which may be used by itself or woven into a fabric. Carbon fibers are usually combined with other materials to form a composite. For example, when permeated with a plastic resin and baked, it forms carbon-fiber-reinforced polymer (often referred to as carbon fiber), which has a very high strength-to-weight ratio and is extremely rigid although somewhat brittle. Carbon fibers are also composited with other materials, such as graphite, to form reinforced carbon-carbon composites, which have a very high heat tolerance. Carbon fiber-reinforced materials are used to make aircraft and spacecraft parts, racing car bodies, golf club shafts, bicycle frames, fishing rods, automobile springs, sailboat masts, and many other components where light weight and high strength are needed. History In 1860, Joseph Swan produced carbon fibers for the first time, for use in light bulbs. In 1879, Thomas Edison baked cotton threads or bamboo slivers at high temperatures carbonizing them into an all-carbon fiber filament used in one of the first incandescent light bulbs to be heated by electricity. In 1880, Lewis Latimer developed a reliable carbon wire filament for the incandescent light bulb, heated by electricity. In 1958, Roger Bacon created high-performance carbon fibers at the Union Carbide Parma Technical Center located outside of Cleveland, Ohio. Those fibers were manufactured by heating strands of rayon until they carbonized. This process proved to be inefficient, as the resulting fibers contained only about 20% carbon. In the early 1960s, a process was developed by Dr. Akio Shindo at Agency of Industrial Science and Technology of Japan, using polyacrylonitrile (PAN) as a raw material. This had produced a carbon fiber that contained about 55% carbon. In 1960 Richard Millington of H.I. Thompson Fiberglas Co. developed a process (US Patent No. 3,294,489) for producing a high carbon content (99%) fiber using rayon as a precursor. These carbon fibers had sufficient strength (modulus of elasticity and tensile strength) to be used as a reinforcement for composites having high strength to weight properties and for high temperature resistant applications. The high potential strength of carbon fiber was realized in 1963 in a process developed by W. Watt, L. N. Phillips, and W. Johnson at the Royal Aircraft Establishment at Farnborough, Hampshire. The process was patented by the UK Ministry of Defence, then licensed by the British National Research Development Corporation to three companies: Rolls-Royce, who were already making carbon fiber; Morganite; and Courtaulds. 
Within a few years, after successful use in 1968 of a Hyfil carbon-fiber fan assembly in the Rolls-Royce Conway jet engines of the Vickers VC10, Rolls-Royce took advantage of the new material's properties to break into the American market with its RB-211 aero-engine with carbon-fiber compressor blades. However, the blades proved vulnerable to damage from bird impact. This problem and others caused Rolls-Royce such setbacks that the company was nationalized in 1971. The carbon-fiber production plant was sold off to form Bristol Composite Materials Engineering Ltd (often referred to as Bristol Composites). In the late 1960s, the Japanese took the lead in manufacturing PAN-based carbon fibers. A 1970 joint technology agreement allowed Union Carbide to manufacture the product of Japan's Toray Industries. Morganite decided that carbon-fiber production was peripheral to its core business, leaving Courtaulds as the only big UK manufacturer. Courtaulds's water-based inorganic process made the product susceptible to impurities that did not affect the organic process used by other carbon-fiber manufacturers, leading Courtaulds to cease carbon-fiber production in 1991. During the 1960s, experimental work to find alternative raw materials led to the introduction of carbon fibers made from a petroleum pitch derived from oil processing. These fibers contained about 85% carbon and had excellent flexural strength. Also during this period, the Japanese Government heavily supported carbon fiber development at home, and several Japanese companies such as Toray, Nippon Carbon, Toho Rayon and Mitsubishi started their own development and production. Since the late 1970s, further types of carbon fiber yarn have entered the global market, offering higher tensile strength and higher elastic modulus. For example, T400 from Toray offered a tensile strength of 4,000 MPa and M40 a modulus of 400 GPa. Intermediate carbon fibers, such as IM 600 from Toho Rayon with tensile strengths of up to 6,000 MPa, were also developed. Carbon fibers from Toray, Celanese and Akzo found their way into aerospace applications, moving from secondary to primary parts, first in military and later in civil aircraft such as McDonnell Douglas, Boeing, Airbus, and United Aircraft Corporation planes. In 1988, Dr. Jacob Lahijani invented a balanced pitch carbon fiber with ultra-high Young's modulus (greater than 100 Mpsi) and high tensile strength (greater than 500 kpsi), used extensively in automotive and aerospace applications. In March 2006, the patent was assigned to the University of Tennessee Research Foundation. Structure and properties Carbon fiber is frequently supplied in the form of a continuous tow wound onto a reel. The tow is a bundle of thousands of continuous individual carbon filaments held together and protected by an organic coating, or size, such as polyethylene oxide (PEO) or polyvinyl alcohol (PVA). The tow can be conveniently unwound from the reel for use. Each carbon filament in the tow is a continuous cylinder with a diameter of 5–10 micrometers and consists almost exclusively of carbon. The earliest generation (e.g. T300, HTA and AS4) had diameters of 16–22 micrometers. Later fibers (e.g. IM6 or IM600) have diameters that are approximately 5 micrometers. The atomic structure of carbon fiber is similar to that of graphite, consisting of sheets of carbon atoms arranged in a regular hexagonal pattern (graphene sheets), the difference being in the way these sheets interlock. 
Graphite is a crystalline material in which the sheets are stacked parallel to one another in regular fashion. The intermolecular forces between the sheets are relatively weak Van der Waals forces, giving graphite its soft and brittle characteristics. Depending upon the precursor to make the fiber, carbon fiber may be turbostratic or graphitic, or have a hybrid structure with both graphitic and turbostratic parts present. In turbostratic carbon fiber the sheets of carbon atoms are haphazardly folded, or crumpled, together. Carbon fibers derived from polyacrylonitrile (PAN) are turbostratic, whereas carbon fibers derived from mesophase pitch are graphitic after heat treatment at temperatures exceeding 2200Β Β°C. Turbostratic carbon fibers tend to have high ultimate tensile strength, whereas heat-treated mesophase-pitch-derived carbon fibers have high Young's modulus (i.e., high stiffness or resistance to extension under load) and high thermal conductivity. Applications Carbon fiber can have higher cost than other materials which has been one of the limiting factors of adoption. In a comparison between steel and carbon fiber materials for automotive materials, carbon fiber may be 10-12x more expensive. However, this cost premium has come down over the past decade from estimates of 35x more expensive than steel in the early 2000s. Composite materials Carbon fiber is most notably used to reinforce composite materials, particularly the class of materials known as carbon fiber or graphite reinforced polymers. Non-polymer materials can also be used as the matrix for carbon fibers. Due to the formation of metal carbides and corrosion considerations, carbon has seen limited success in metal matrix composite applications. Reinforced carbon-carbon (RCC) consists of carbon fiber-reinforced graphite, and is used structurally in high-temperature applications. The fiber also finds use in filtration of high-temperature gases, as an electrode with high surface area and impeccable corrosion resistance, and as an anti-static component. Molding a thin layer of carbon fibers significantly improves fire resistance of polymers or thermoset composites because a dense, compact layer of carbon fibers efficiently reflects heat. The increasing use of carbon fiber composites is displacing aluminum from aerospace applications in favor of other metals because of galvanic corrosion issues. Note, however, that carbon fiber does not eliminate the risk of galvanic corrosion. In contact with metal, it forms "a perfect galvanic corrosion cell ..., and the metal will be subjected to galvanic corrosion attack" unless a sealant is applied between the metal and the carbon fiber. Carbon fiber can be used as an additive to asphalt to make electrically conductive asphalt concrete. Using this composite material in the transportation infrastructure, especially for airport pavement, decreases some winter maintenance problems that lead to flight cancellation or delay due to the presence of ice and snow. Passing current through the composite material 3D network of carbon fibers dissipates thermal energy that increases the surface temperature of the asphalt, which is able to melt ice and snow above it. Textiles Precursors for carbon fibers are polyacrylonitrile (PAN), rayon and pitch. Carbon fiber filament yarns are used in several processing techniques: the direct uses are for prepregging, filament winding, pultrusion, weaving, braiding, etc. 
Carbon fiber yarn is rated by the linear density (weight per unit length; i.e., 1Β g/1000Β m = 1Β tex) or by number of filaments per yarn count, in thousands. For example, 200Β tex for 3,000Β filaments of carbon fiber is three times as strong as 1,000Β carbon filament yarn, but is also three times as heavy. This thread can then be used to weave a carbon fiber filament fabric or cloth. The appearance of this fabric generally depends on the linear density of the yarn and the weave chosen. Some commonly used types of weave are twill, satin and plain. Carbon filament yarns can also be knitted or braided. Microelectrodes Carbon fibers are used for fabrication of carbon-fiber microelectrodes. In this application typically a single carbon fiber with diameter of 5–7 ΞΌm is sealed in a glass capillary. At the tip the capillary is either sealed with epoxy and polished to make a carbon-fiber disk microelectrode, or the fiber is cut to a length of 75–150 ΞΌm to make a carbon-fiber cylinder electrode. Carbon-fiber microelectrodes are used either in amperometry or fast-scan cyclic voltammetry for detection of biochemical signaling. Flexible heating Despite being known for their electrical conductivity, carbon fibers can carry only very low currents on their own. When woven into larger fabrics, they can be used to reliably provide (infrared) heating in applications requiring flexible electrical heating elements and can easily sustain temperatures past 100Β Β°C. Many examples of this type of application can be seen in DIY heated articles of clothing and blankets. Due to its chemical inertness, it can be used relatively safely amongst most fabrics and materials; however, shorts caused by the material folding back on itself will lead to increased heat production and can lead to a fire. Synthesis Each carbon filament is produced from a polymer such as polyacrylonitrile (PAN), rayon, or petroleum pitch. All these polymers are known as a precursor. For synthetic polymers such as PAN or rayon, the precursor is first spun into filament yarns, using chemical and mechanical processes to initially align the polymer molecules in a way to enhance the final physical properties of the completed carbon fiber. Precursor compositions and mechanical processes used during spinning filament yarns may vary among manufacturers. After drawing or spinning, the polymer filament yarns are then heated to drive off non-carbon atoms (carbonization), producing the final carbon fiber. The carbon fibers filament yarns may be further treated to improve handling qualities, then wound on to bobbins. A common method of manufacture involves heating the spun PAN filaments to approximately 300Β Β°C in air, which breaks many of the hydrogen bonds and oxidizes the material. During this process, fibers tend to shrink. The resulting chemical composition and mechanical properties of the fiber are dependent on the time and temperature of the process, as well as on the tension applied to the fiber during oxidation. The oxidized PAN is then placed into a furnace having an inert atmosphere of a gas such as argon, and heated to approximately 2000Β Β°C, which induces graphitization of the material, changing the molecular bond structure. When heated in the correct conditions, these chains bond side-to-side (ladder polymers), forming narrow graphene sheets which eventually merge to form a single, columnar filament. The result is usually 93–95% carbon. Lower-quality fiber can be manufactured using pitch or rayon as the precursor instead of PAN. 
The fiber can be further enhanced, into high-modulus or high-strength carbon, by heat treatment processes. Carbon heated in the range of 1500–2000 Β°C (carbonization) exhibits the highest tensile strength (5,650 MPa, or 820,000 psi), while carbon fiber heated from 2500 to 3000 Β°C (graphitization) exhibits a higher modulus of elasticity (531 GPa, or 77,000,000 psi). See also Basalt fiber Carbon fiber reinforced polymer Carbon fiber reinforced ceramic material Carbon nanotube ESD materials Graphene References External links Making Carbon Fiber How carbon fiber is made British inventions Allotropes of carbon Synthetic fibers Woven fabrics Nonwoven fabrics Net fabrics
Carbon fibers
[ "Chemistry" ]
3,025
[ "Synthetic materials", "Allotropes of carbon", "Synthetic fibers", "Allotropes" ]
160,332
https://en.wikipedia.org/wiki/Ephemeris
In astronomy and celestial navigation, an ephemeris (plural: ephemerides) is a book with tables that gives the trajectory of naturally occurring astronomical objects and artificial satellites in the sky, i.e., the position (and possibly velocity) over time. Historically, positions were given as printed tables of values at regular intervals of date and time. The calculation of these tables was one of the first applications of mechanical computers. Modern ephemerides are often provided in electronic form. However, printed ephemerides are still produced, as they are useful when computational devices are not available. The astronomical position calculated from an ephemeris is often given in the spherical polar coordinate system of right ascension and declination, together with the distance from the origin if applicable. Some of the astronomical phenomena of interest to astronomers are eclipses, apparent retrograde motion/planetary stations, planetary ingresses, sidereal time, positions for the mean and true nodes of the moon, the phases of the Moon, and the positions of minor celestial bodies such as Chiron. Ephemerides are used in celestial navigation and astronomy. They are also used by astrologers. GPS signals include ephemeris data used to calculate the position of satellites in orbit. History 1st millennium BC – Ephemerides in Babylonian astronomy. 2nd century AD – the Almagest and the Handy Tables of Ptolemy 8th century AD – the astronomical tables of IbrāhΔ«m al-FazārΔ« 9th century AD – the astronomical tables of MuαΈ₯ammad ibn MΕ«sā al-KhwārizmΔ« 11th century AD – the astronomical tables of Ibn Yunus 12th century AD – the Tables of Toledo – based largely on Arabic sources of Islamic astronomy – were edited by Gerard of Cremona to form the standard European ephemeris until the Alfonsine Tables. 13th century AD – the ZΔ«j-i ΔͺlkhānΔ« (Ilkhanic Tables) were compiled at the Maragheh observatory in Persia. 13th century AD – the Alfonsine Tables were compiled in Spain to correct anomalies in the Tables of Toledo, remaining the standard European ephemeris until the Prutenic Tables almost 300 years later. 13th century AD – the Dresden Codex, an extant Mayan ephemeris 1408 – Chinese ephemeris table (copy in Pepysian Library, Cambridge, UK (refer book '1434'); Chinese tables believed known to Regiomontanus). 1474 – Regiomontanus publishes his day-to-day Ephemerides in NΓΌrnberg, Germany. 1496 – the Almanach Perpetuum of AbraΓ£o ben Samuel Zacuto (one of the first books published with movable type and a printing press in Portugal) 1504 – While shipwrecked on the island of Jamaica, Christopher Columbus successfully predicted a lunar eclipse for the natives, using the ephemeris of the German astronomer Regiomontanus. 1531 – Work of Johannes StΓΆffler is published posthumously at TΓΌbingen, extending the ephemeris of Regiomontanus through 1551. 1551 – the Prutenic Tables of Erasmus Reinhold were published, based on Copernicus's theories. 1554 – Johannes Stadius published Ephemerides novae et auctae, the first major ephemeris computed according to Copernicus' heliocentric model, using parameters derived from the Prutenic Tables. Although the Copernican model provided an elegant solution to the problem of computing apparent planetary positions (it avoided the need for the equant and better explained the apparent retrograde motion of planets), it still relied on the use of epicycles, leading to some inaccuracies – for example, periodic errors in the position of Mercury of up to ten degrees. One of the users of Stadius's tables was Tycho Brahe. 
1627 – the Rudolphine Tables of Johannes Kepler based on elliptical planetary motion became the new standard. 1679 – La Connaissance des Temps ou calendrier et Γ©phΓ©mΓ©rides du lever & coucher du Soleil, de la Lune & des autres planΓ¨tes, first published yearly by Jean Picard and still extant. 1975 – Owen Gingerich, using modern planetary theory and digital computers, calculates the actual positions of the planets in the 16th century and graphs the errors in the planetary positions predicted by the ephemerides of StΓΆffler, Stadius and others. According to Gingerich, the error patterns "are as distinctive as fingerprints and reflect the characteristics of the underlying tables. That is, the error patterns for StΓΆffler are different from those of Stadius, but the error patterns of Stadius closely resemble those of Maestlin, Magini, Origanus, and others who followed the Copernican parameters." Modern ephemeris For scientific uses, a modern planetary ephemeris comprises software that generates positions of planets and often of their satellites, asteroids, or comets, at virtually any time desired by the user. After introduction of electronic computers in the 1950s it became feasible to use numerical integration to compute ephemerides. The Jet Propulsion Laboratory Development Ephemeris is a prime example. Conventional so-called analytical ephemerides that utilize series expansions for the coordinates have also been developed, but of much increased size and accuracy as compared to the past, by making use of computers to manage the tens of thousands of terms. Ephemeride Lunaire Parisienne and VSOP are examples. Typically, such ephemerides cover several centuries, past and future; the future ones can be covered because the field of celestial mechanics has developed several accurate theories. Nevertheless, there are secular phenomena which cannot adequately be considered by ephemerides. The greatest uncertainties in the positions of planets are caused by the perturbations of numerous asteroids, most of whose masses and orbits are poorly known, rendering their effect uncertain. Reflecting the continuing influx of new data and observations, NASA's Jet Propulsion Laboratory (JPL) has revised its published ephemerides nearly every year since 1981. Solar System ephemerides are essential for the navigation of spacecraft and for all kinds of space observations of the planets, their natural satellites, stars, and galaxies. Scientific ephemerides for sky observers mostly contain the positions of celestial bodies in right ascension and declination, because these coordinates are the most frequently used on star maps and telescopes. The equinox of the coordinate system must be given. It is, in nearly all cases, either the actual equinox (the equinox valid for that moment, often referred to as "of date" or "current"), or that of one of the "standard" equinoxes, typically J2000.0, B1950.0, or J1900. Star maps almost always use one of the standard equinoxes. Scientific ephemerides often contain further useful data about the moon, planet, asteroid, or comet beyond the pure coordinates in the sky, such as elongation to the Sun, brightness, distance, velocity, apparent diameter in the sky, phase angle, times of rise, transit, and set, etc. Ephemerides of the planet Saturn also sometimes contain the apparent inclination of its ring. Celestial navigation serves as a backup to satellite navigation. 
Software is widely available to assist with this form of navigation; some of this software has a self-contained ephemeris. When software is used that does not contain an ephemeris, or if no software is used, position data for celestial objects may be obtained from the modern Nautical Almanac or Air Almanac. An ephemeris is usually only correct for a particular location on the Earth. In many cases, the differences are too small to matter. However, for nearby asteroids or the Moon, they can be quite important. Other modern ephemerides recently created are the EPM (Ephemerides of Planets and the Moon), from the Russian Institute for Applied Astronomy of the Russian Academy of Sciences, and the INPOP by the French IMCCE. See also Almanac American Ephemeris and Nautical Almanac The Astronomical Almanac (new name) Ephemera Ephemeris time Epoch (astronomy) Epoch (reference date) Fundamental ephemeris January 0 or March 0 Keplerian elements Nautical almanac Osculating orbit Ptolemy's table of chords Two-line elements William of Saint-Cloud Notes References External links The JPL HORIZONS online ephemeris Introduction to the JPL ephemerides (archived 26 February 2005) Astrology Astronomical tables Astrometry Astronomy books Calendars Celestial navigation
Ephemeris
[ "Physics", "Astronomy" ]
1,778
[ "Calendars", "Physical quantities", "Time", "History of astronomy", "Astrology", "Astronomy books", "Works about astronomy", "Astrometry", "Celestial navigation", "Spacetime", "Astronomical tables", "Astronomical sub-disciplines" ]
160,338
https://en.wikipedia.org/wiki/House%20%28astrology%29
Most horoscopic traditions of astrology systems divide the horoscope into a number (usually twelve) of houses whose positions depend on time and location rather than on date. In Hindu astrological tradition these are known as Bhāvas. The houses of the horoscope represent different fields of experience wherein the energies of the signs and planets operateβ€”described in terms of physical surroundings as well as personal life experiences. Description In astrology, houses are a fundamental component of the birth chart that represent different areas of life. There are 12 houses, each associated with a specific zodiac sign and planetary ruler. The first house represents the self, while the second house relates to personal possessions and finances. The third house pertains to communication and siblings, while the fourth house represents home and family. The fifth house is associated with creativity and romance, while the sixth house relates to work and health. The seventh house represents partnerships and marriage, while the eighth house pertains to shared resources and transformation. The ninth house is associated with higher education and travel, while the tenth house represents career and public image. The eleventh house pertains to social networks and friendships, while the twelfth house represents spirituality and subconscious patterns. Understanding the houses in astrology can provide insight into various aspects of a person's life and personality. Every house system is dependent on the rotational movement of Earth on its axis, but there is a wide range of approaches to calculating house divisions and different opinions among astrologers over which house system is most accurate. To calculate the houses, it is necessary to know the exact time, date, and location. In natal astrology, some astrologers will use a birth time set for noon or sunrise if the actual time of birth is unknown. An accurate interpretation of such a chart, however, cannot be expected. The houses are divisions of the ecliptic plane (a great circle containing the Sun's orbit, as seen from the earth), at the time and place of the horoscope in question. They are numbered counter-clockwise from the cusp of the first house. Commonly, houses one through six are below the horizon and houses seven through twelve are above the horizon, but some systems may not respect entirely that division (in particular when the Ascendant does not coincide with the first house's cusp). The several methods of calculating house divisions stem from disagreement over what they mean mathematically (regarding space and time). All house systems in Western astrology use twelve houses projected on the ecliptic. The differences arise from which fundamental plane is the object of the initial division and whether the divisions represent units of time, or degrees of distance. If space is the basis for house division, the chosen plane is divided into equal arcs of 30Β° each. A difference will be made as to whether these divisions are made directly on the ecliptic, or on the celestial equator or some other great circle, before being projected on the ecliptic. If time is the basis for house division, a difference must be made for whether the houses are based on invariant equal hours (each house represents 2 hours of the sun's apparent movement each day) or temporal hours (daytime and night-time divided into six equal parts, but here the temporal hours will vary according to season and latitude.) 
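To make the space-based case concrete, the simplest divisions reduce to plain arithmetic on ecliptic longitude. The sketch below is a minimal illustration in Python of the equal house and whole sign approaches discussed later in this article; it assumes the ascendant's ecliptic longitude (0–360Β°, measured from 0Β° Aries) has already been calculated, and the names used are illustrative rather than taken from any astrological software.

```python
# Minimal sketch: equal house and whole sign cusps from an ascendant longitude.
SIGNS = ["Aries", "Taurus", "Gemini", "Cancer", "Leo", "Virgo",
         "Libra", "Scorpio", "Sagittarius", "Capricorn", "Aquarius", "Pisces"]

def equal_house_cusps(ascendant: float) -> list:
    """Equal house: the ascendant degree is the 1st cusp; each later cusp
    lies exactly 30 degrees further along the ecliptic."""
    return [(ascendant + 30 * i) % 360 for i in range(12)]

def whole_sign_cusps(ascendant: float) -> list:
    """Whole sign: the 1st house is the entire sign containing the ascendant,
    so every cusp falls on a sign boundary (a multiple of 30 degrees)."""
    start = 30 * int(ascendant // 30)
    return [(start + 30 * i) % 360 for i in range(12)]

if __name__ == "__main__":
    asc = 105.5  # an arbitrary example: 15.5 degrees of Cancer
    for house, cusp in enumerate(equal_house_cusps(asc), start=1):
        print(f"Equal house {house:2d} cusp: {cusp:6.1f} ({SIGNS[int(cusp // 30)]})")
```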
Regardless of these different methods, all house divisions in Western astrology share certain things in common: the twelve house cusps are always projected on the ecliptic; they will all place the cusp of the first house near the eastern horizon and every house cusp is 180Β° of longitude apart from the sixth following house (1st opposes 7th; 2nd opposes 8th and so on). The twelve houses The next table represents the basic outline of the houses as they are still understood today and includes the traditional Latin names. The houses are numbered from the east downward under the horizon, each representing a specific area of life. Many modern astrologers assume that the houses relate to their corresponding signs, i.e. that the first house has a natural affinity with the first sign, Aries, and so on. Another common idea is to look at houses in pairs, e.g. 1&7, 2&8, etc., and notice their complementary attributes. House modalities and triplicities Similarly to how signs are classified according to astrological modality (Cardinal, Fixed and Mutable), houses are classified as Angular, Succedent and Cadent. Angular houses are points of initiation and represent action; they relate to cardinal signs (Aries, Cancer, Libra and Capricorn). Succedent houses are points of purpose and represent stabilization; they relate to fixed signs (Taurus, Leo, Scorpio and Aquarius). Cadent houses are points of transition and represent change and adaptation; they relate to mutable signs (Gemini, Virgo, Sagittarius and Pisces). Following the classification of signs by the four classical elements (Fire, Earth, Air and Water), houses can also be grouped together in triplicities, related to a level of experience. In old astrological writings (e.g. William Lilly), house could also be used as a synonym for domicile or rulership, as in the sentence "The Moon has its house in Cancer" meaning that Cancer is ruled by the Moon. It may be helpful to think of a ruling planet, in this case the Moon, as the "owner of the 4th House", and the sign, e.g. Cancer, as the CEO or landlord who runs the house. In an individual horoscope, whatever planet occupies any given house can be thought of as the house's tenant. (See Rulership section below.) The four bhavas of Hindu astrology In Indian astrology, the twelve houses are called Bhava and their meanings are very similar to the triplicities in Western astrology. The houses are divided into four 'bhavas' which point to 'mood' or what the house stands for. These four bhavas are Dharma (duty), Artha (resources), Kama (pleasure) and Moksha (liberation). These bhavas are called 'purusharthas or 'aims in life.' The ancient mystics of India realized that the austere path of the yogi was not for everyone. They found that each human existence has four worthwhile goals in life: Dharma – 1st, 5th and 9th Bhavas – The need to find a path and purpose. Artha – 2nd, 6th and 10th Bhavas – The need to acquire the necessary resources and abilities to provide for to fulfill a path and purpose. Kama – 3rd, 7th and 11th Bhavas – The need for pleasure and enjoyment. Moksha – 4th, 8th and 12th Bhavas – The need to find liberation and enlightenment from the world. These 4 aims of life are repeated in above sequence 3 times through the 12 bhavas: The first round, bhavas 1 through 4, show the process within the Individual. The second round, bhavas 5 through 8, show the alchemy in relating to other people. The third round, bhavas 9 through 12, show the Universalization of the self. 
Systems of house division There are many systems of house division. In most, the ecliptic is divided into houses and the ascendant (eastern horizon) marks the cusp, or beginning, of the first house, and the descendant (western horizon) marks the cusp of the seventh house. Many systems, called quadrant house systems, also use the midheaven (medium coeli) as the cusp of the tenth house. Goals for a house system include ease of computation; agreement with the "quadrant" concept (ascendant on the first house cusp and midheaven on the tenth); defined and meaningful behaviour in the polar regions; acceptable handling of heavenly bodies of high latitude (a distinct problem from high-latitude locations on the Earth's surface); and symbolic value. It is impossible for any system to satisfy all the criteria completely, so each one represents a different compromise. The extremely popular Placidus and Koch systems, in particular, can generate undefined results in the polar circles. Research and debate on the merits of different house systems is ongoing. Early forms of house division The Babylonians may have been the first to set out the concept of house division. Specifically, they timed the birth according to three systems of time division: (a) a three-part division of the night into watches, (b) a four-part division of the nychthemeron with respect to sunrise and sunset, and (c) a twelve-part division of the day-time into hours. Babylonian astronomers studied the rising times of the signs and calculated tables of ascensions for their latitude, but it would take better time measurements by the Egyptians and the introduction of the concept of ascendant, around the 2nd century BC, to give astrological houses their first recognisable structure and meaning, from the perspective of Classical Western astrology. Whole sign In the whole sign house system, sometimes referred to as the 'Sign-House system', the houses are 30Β° each. The ascendant designates the rising sign, and the first house begins at zero degrees of the zodiac sign in which the ascendant falls, regardless of how early or late in that sign the ascendant is. The next sign after the ascending sign then becomes the 2nd house, the sign after that the 3rd house, and so on. In other words, each house is wholly filled by one sign. This was the main system used in the Hellenistic tradition of astrology, and is also used in Indian astrology, as well as in some early traditions of Medieval astrology. It is thought to be the oldest system of house division. The Whole Sign system may have been developed in the Hellenistic tradition of astrology sometime around the 1st or 2nd century BCE, and from there it may have passed to the Indian and early Medieval traditions of astrology; though the line of thought which states that it was transmitted to India from Western locales is hotly contested. At some point in the Medieval period, probably around the 10th century, whole sign houses fell into disuse in the western tradition, and by the 20th century the system was completely unknown in the western astrological community, although was continually used in India all the way into the present time. Beginning in the 1980s and 1990s the system was rediscovered and reintroduced into western astrology. 
The distinction between equal houses and whole sign houses lies in the fact that in whole sign houses the cusp of the 1st house is the beginning of the sign that contains the ascendant, while in equal houses the degree of the ascendant is itself the cusp of the 1st house. Debate surrounding whole sign houses There is debate surrounding the claims that the whole sign house system was the original form of house division and that it was the dominant form of house division among ancient astrologers. One argument against whole sign houses is that the system is never explicitly mentioned in the text of any ancient astrologer when explaining how to divide up the houses. A counterpoint is that it is implied, and it would be the only house system that makes sense in ancient charts where only an ascendant degree is presented. However, if one knows the longitude of the location of the astrologer, one would only need the ascendant degree to determine the quadrant houses. Another argument against whole sign houses is that it breaks with principles of primary motion, since planets can go backwards through the houses (e.g., a planet can go from the 8th house into the 9th house given the right conditions). Additionally, there is concern that the whole sign system demotes the value of angularity. The modern revival of whole sign houses is essentially an American-driven movement that is argued to have decontextualized Hellenistic astrological texts from those that preceded and followed them. In Europe, most astrologers previously associated with traditional astrology never really took up whole sign houses. Martin Gansten argues that in Valens, houses were often provisionally approximated by sign position alone, but calculation of places by degree was consistently upheld in principle as more accurate and useful. Equal house In the equal house system the ecliptic is also divided into twelve divisions of 30 degrees, although the houses are measured out in 30-degree increments starting from the degree of the ascendant. It begins with the ascendant, which acts as the 'cusp' or starting point of the 1st house, then the second house begins exactly 30 degrees later in zodiacal order, then the third house begins exactly 30 degrees later in zodiacal order from the 2nd house, and so on. Proponents of the equal house system claim that it is more accurate and less distorting in higher latitudes (especially above 60 degrees) than the Placidean and other quadrant house systems. Space-based house systems In this type of system, the definition of houses involves the division of the sphere into twelve equal lunes perpendicular to a fundamental plane (the Morinus and Regiomontanus systems being two notable exceptions). M-House (Equal Mc) This system is constructed in a similar manner to the Equal house, but houses are measured out in 30-degree increments starting from the longitude of the midheaven (Mc), which acts as the 'cusp' or starting point of the 10th house. The ascendant does not coincide with the cusp of the 1st house. Porphyry Each quadrant of the ecliptic is divided into three equal parts between the four angles. This is the oldest system of quadrant-style house division. Although it is attributed to Porphyry of Tyros, this system was first described by the 2nd-century astrologer Vettius Valens, in the 3rd book of his astrological compendium known as The Anthology. Carter's Poly Equatorial This house system was described by the English astrologer Charles E. O. Carter (1887–1968) in his Essays on the Foundations of Astrology. 
The house division starts at the right ascension of the ascendant and to it is added 30ΒΊ of right ascension for each successive cusp. Those cusps are then restated in terms of celestial longitude by projecting them along great circles containing the North and South celestial poles. The 1st house cusp coincides with the ascendant's longitude, but the 10th house cusp is not identical with the Midheaven. Meridian Also known as the Axial system, or Equatorial system, it divides the celestial equator in twelve 30Β° sectors (starting at the local meridian) and projects them on to the ecliptic along the great circles containing the North and South celestial poles. The intersections of the ecliptic with those great circles provide the house cusps. The 10th house cusp thus equals the Midheaven, but the East Point (also known as Equatorial Ascendant) is now the first house's cusp. Each house is exactly 2 sidereal hours long. This system was proposed by the Australian astrologer David Cope in the beginning of the 20th century and has become the most popular system with the Uranian school of astrology. The Ascendant (intersection between the ecliptic and the horizon) preserves its importance in chart interpretation through sign and aspects, but not as a house determinant, which is why this house system can be used in any latitude. Morinus French mathematician Jean Baptiste Morinus Regiomontanus The celestial equator is divided into twelve, and these divisions are projected on to the ecliptic along great circles that take in the north and south points on the horizon. Named after the German astronomer and astrologer Johann MΓΌller of KΓΆnigsberg. The Regiomontanus system was later largely replaced by the Placidus system. Campanus The prime vertical (the great circle taking in the zenith and east point on the horizon) is divided into twelve, and these divisions are projected on to the ecliptic along great circles that take in the north and south points on the horizon. It is attributed to Campanus of Novara but the method is known to have been used before his time. Sinusoidal Sinusoidal systems of house division are similar to Porphyry houses except that instead of each quadrant being divided into three equal sized houses, the middle house in each quadrant is compressed or expanded based on whether the quadrant covers less than or greater than 90 degrees. In other words, houses are smooth around the zodiac with the difference or ratio in quadrant sizes being spread in a continuous sinusoidal manner from expanded to compressed houses. Sinusoidal houses were invented and first published by Walter Pullen in his astrology program Astrolog in 1994. Krusinski/Pisa/Goelzer A recently published (1988) house system, discovered by Georg Goelzer, based on a great circle passing through the ascendant and zenith. This circle is divided into 12 equal parts (1st cusp is ascendant, 10th cusp is zenith), then the resulting points are projected to the ecliptic through meridian circles. The house tables for this system were published in 1995 in Poland. This house system is also known under the name Amphora in the Czech Republic, after it was proposed there by Milan PΓ­Ε‘a after the study of Manilius's "Astronomica" under this name ("Konstelace č. 22" in: "AMPHORA - novΓ½ systΓ©m astrologickΓ½ch domΕ―" (1997) and in the booklet "Amphora - algoritmy novΓ©ho systΓ©mu domΕ―" (1998)). Time-based house systems Alchabitius The predecessor system to the Placidus, which largely replaced the Porphyry. 
The difference with Placidus is that the time that it takes the ascendant to reach the meridian is divided equally into three parts. The Alchabitius house system was very popular in Europe before the introduction of the Regiomontanus system. Placidus This is the most commonly used house system in modern Western astrology. The paths drawn for each degree of the ecliptic to move from the Imum coeli to the horizon, and from the horizon to the midheaven, are trisected to determine the cusps of houses 2, 3, 11, and 12. The cusps of houses 8, 9, 5 and 6 are opposite these. The Placidus system is sometimes not defined beyond polar circles (latitudes greater than 66Β°N or 66Β°S), because certain degrees are circumpolar (never touch the horizon), and planets falling in them cannot be assigned to houses without extending the system. This result is a weakness of the Placidean system according to its critics, who often cite the exceptional house proportions in the higher latitudes. Named for 17th-century astrologer Placidus de Titis, it is thought the Placidus system was first mentioned about 13th century in Arab literature, but the first confirmed publication was in 1602 by Giovanni Antonio Magini (1555–1617) in his book "Tabulae Primi Mobilis, quas Directionem Vulgo Dicunt". The first documented usage is from Czech, 1627. Placidus remains the most popular system among English-speaking astrologers. Koch A rather more complicated version of the Placidus system, built on equal increments of Right Ascension for each quadrant. The Koch system was developed by the German astrologer Walter Koch (1895–1970) and is defined only for latitudes between 66Β°N and 66Β°S. This system is popular among research astrologers in the U.S. and among German speakers, but in Central Europe lost some popularity to the KrusiΕ„ski house system. Topocentric This is a recent system, invented in Argentina, that its creators claim has been determined empirically, i.e. by observing events in people's lives and assessing the geometry of a house system that would fit. The house cusps are always within a degree of those given in the Placidus system. The topocentric system can also be described as an approximation algorithm for the Placidus system. Topocentric houses are also called Polich-Page, after the names of the house system creators, Wendel Polich and A. Page Nelson. Chart gallery The following charts display different house systems for the same time and location. To better compare systems subject to distortion, a high latitude city was chosen (Stockholm, Sweden) and the time corresponds to a long ascension sign (Cancer). For clarity purposes, all the usual aspect lines, degrees and glyphs were removed. The MC in non-quadrant house systems In the whole sign and equal house systems the Medium Coeli (Midheaven), the highest point in the chart, does not act as the cusp or starting point of the 10th house. Instead the MC moves around the top half of the chart, and can land anywhere in the 7th, 8th, 9th, 10th, 11th, 12th, depending on the latitude. The MC retains its commonly agreed significations, but it doesn't act as the starting point of the 10th house, whereas the Equal house system adds extra definition and meaning to the MC including any cusps involved, any interpretations applied to the MC itself concur with other house systems. This is also the more common criticism of the whole sign and equal house method as it concerns the location of the Medium Coeli (Midheaven), the highest point in the chart. 
In the equal house system, the ascendant/descendant and midheaven/IC axes can deviate from being perpendicular to each other (by at most about Β±5Β° at the equator, about Β±15Β° at the latitude of Alexandria, and up to Β±90Β° at the polar circles). As a result, equal houses counted from the ascendant cannot in general place the Midheaven on the tenth house cusp, where many feel it would be symbolically desirable. Since this point is associated with ambition, career, and public image, the argument is that the Midheaven must therefore be the cusp of the tenth house, which carries similar significations. It has also been linked by extension with Capricorn (the tenth sign of the zodiac). The equal house system always takes the MC to be first and foremost the most important indicator of career, whereas the 10th house cusp, while taken into account, is interpreted simply as a weaker, secondary MC. In the whole sign and equal house systems the Midheaven is not tied to a fixed house cusp; rather, its placement depends on the specific location of the Ascendant, so the Midheaven can be found anywhere between the 8th and 11th houses. Rulership In Hellenistic, Vedic, Medieval and Renaissance astrology each house is ruled by the planet that rules the sign on its cusp. For example, if a person has the sign Aries on the cusp of their 7th house, the planet Mars is said to "rule" the 7th house. This means that when a planet is allotted a house, the planet's attributes will have some bearing on the topics related to that house within the life of the individual whose chart is being analyzed. This planet is considered very important for events specifically pertaining to that house's topics; in fact, its placement in the chart will have at least as much influence on the chart as the planets placed within the house. In traditional Western and Hindu astrology, each sign is ruled by one of the 7 visible planets (note that in astrology, the Sun and Moon are considered planets, a word which literally means wanderers, i.e. wandering stars, as opposed to the fixed stars of the constellations). In addition, some modern astrologers who follow the X=Y=Z or Planet=Sign=House doctrine, which was first taught by Alan Leo in the early part of the 20th century, believe that certain houses are also ruled by, or have an affinity with, the planet which rules the corresponding zodiacal sign. For instance, Mars is ruler of the 1st house because it rules Aries, the first sign; Mercury rules (or has an affinity with) the 3rd house because it rules Gemini, the 3rd sign; etc. This concept is sometimes referred to as "natural rulership", as opposed to the former, which is known as "accidental rulership". Notes References
Astrology Technical factors of Hindu astrology Technical factors of Western astrology
House (astrology)
[ "Astronomy" ]
5,490
[ "Astrology", "History of astronomy" ]
160,402
https://en.wikipedia.org/wiki/Shamrock
A shamrock is a type of clover, used as a symbol of Ireland. Saint Patrick, one of Ireland's patron saints, is said to have used it as a metaphor for the Christian Holy Trinity. The name shamrock comes from the Irish seamrΓ³g, which is the diminutive of the Irish word seamair (clover) and simply means "young clover". Most often, shamrock refers to either the species Trifolium dubium (lesser or yellow clover) or Trifolium repens (white clover). However, other three-leaved plants, such as Medicago lupulina, Trifolium pratense, and Oxalis acetosella, are sometimes called shamrocks. The shamrock was traditionally used for its medicinal properties, and was a popular motif in Victorian times. Botanical species There is still not a consensus over the precise botanical species of clover that is the "true" shamrock. John Gerard in his herbal of 1597 defined the shamrock as Trifolium pratense or Trifolium pratense flore albo, meaning red or white clover. He described the plant in English as "Three leaved grasse" or "Medow Trefoile", "which are called in Irish Shamrockes". The Irish botanist Caleb Threlkeld, writing in 1726 in his work entitled Synopsis Stirpium Hibernicarum, or A Treatise on Native Irish Plants, followed Gerard in identifying the shamrock as Trifolium pratense, calling it White Field Clover. The botanist Carl Linnaeus in his 1737 work Flora Lapponica identifies the shamrock as Trifolium pratense, mentioning it by name as Chambroch, with the curious remark (in translation): 'The Irish call it shamrock, which is purple field clover, and which they eat to make them speedy and of nimble strength'. Linnaeus based his information that the Irish ate shamrock on the comments of English Elizabethan authors such as Edmund Spenser, who remarked that the shamrock used to be eaten by the Irish, especially in times of hardship and famine. It has since been argued, however, that the Elizabethans were confused by the similarity between the Irish (Gaelic) name for young clover, seamair Γ³g, and the name for wood sorrel, seamsΓ³g. The situation regarding the identity of the shamrock was further confused by a London botanist, James Ebenezer Bicheno, who proclaimed in a dissertation in 1830 that the real shamrock was Oxalis acetosella, a species of wood sorrel. Bicheno falsely claimed that clover was not a native Irish plant and had only been introduced into Ireland in the middle of the 17th century, and based his argument on the same comments by Elizabethan authors that shamrock had been eaten. Bicheno argued that this fitted the wood sorrel better than clover, as wood sorrel was often eaten as a green and used to flavour food. Bicheno's argument has not been generally accepted, however, as the weight of evidence favours a species of clover. A more scientific approach was taken by the English botanists James Britten and Robert Holland, who stated in their Dictionary of English Plant Names, published in 1878, that their investigations had revealed that Trifolium dubium was the species sold most frequently in Covent Garden as shamrock on St. Patrick's Day, and that it was worn in at least 13 counties in Ireland. Finally, detailed investigations to settle the matter were carried out in two separate botanical surveys in Ireland, one in 1893 and the other in 1988. The 1893 survey was carried out by Nathaniel Colgan, an amateur naturalist working as a clerk in Dublin, while the 1988 survey was carried out by E. Charles Nelson, Director of the Irish National Botanic Gardens.
Both surveys involved asking people from all across Ireland to send in examples of shamrock, which were then planted and allowed to flower, so that their botanical species could be identified. The results of both surveys were very similar, showing that the conception of the shamrock in Ireland had changed little in almost a hundred years. The results show that there is no one "true" species of shamrock, but that Trifolium dubium (lesser clover) is considered to be the shamrock by roughly half of Irish people, and Trifolium repens (white clover) by another third, with the remaining sixth split between Trifolium pratense (red clover), Medicago lupulina (black medick), Oxalis acetosella (wood sorrel), and various other species of Trifolium and Oxalis. None of the species in the survey are unique to Ireland, and all are common European species, so there is no botanical basis for the belief that the shamrock is a unique species of plant that only grows in Ireland. Early references The word shamrock derives from seamair Γ³g, or young clover, and references to seamair, or clover, appear in early Irish literature, generally as a description of a flowering, clover-covered plain. For example, in the series of medieval metrical poems about various Irish places called the Metrical Dindshenchas, a poem about Tailtiu or Teltown in County Meath describes it as a plain blossoming with flowering clover. Similarly, another story tells of how St. Brigid decided to stay in County Kildare when she saw the delightful plain covered in clover blossom (scoth-shemrach). However, the literature in Irish makes no distinction between clover and shamrock, and it is only in English that shamrock emerges as a distinct word. The first mention of shamrock in the English language occurs in 1571 in the work of the English Elizabethan scholar Edmund Campion. In his work Boke of the Histories of Irelande, Campion describes the habits of the "wild Irish" and states that the Irish ate shamrock: "Shamrotes, watercresses, rootes, and other herbes they feed upon". The statement that the Irish ate shamrock was widely repeated in later works and seems to be a confusion with the Irish word for wood sorrel, seamsΓ³g (Oxalis). There is no evidence from any Irish source that the Irish ate clover, but there is evidence that the Irish ate wood sorrel. For example, in the medieval Irish work Buile Shuibhne (The Frenzy of Sweeney), the king Sweeney, who has gone mad and is living in the woods as a hermit, lists wood sorrel among the plants he feeds upon. The English Elizabethan poet Edmund Spenser, writing soon after in 1596, described his observations of war-torn Munster after the Desmond Rebellion in his work A View of the Present State of Ireland. Here shamrock is described as a food eaten as a last resort by starving people desperate for any nourishment during a post-war famine: Anatomies of death, they spake like ghosts, crying out of theire graves; they did eat of the carrions .... and if they found a plott of water cresses or shamrockes theyr they flocked as to a feast for the time, yett not able long to contynewe therewithall. The idea that the Irish ate shamrock is repeated in the writing of Fynes Moryson, one-time secretary to the Lord Deputy of Ireland. In his 1617 work An itinerary thorow Twelve Dominions, Moryson describes the "wild Irish", and in this case their supposed habit of eating shamrock is a result of their marginal hand-to-mouth existence as bandits.
Moryson claims that the Irish "willingly eat the herbe Schamrock being of a sharpe taste which as they run and are chased to and fro they snatch like beasts out of the ditches." The reference to a sharp taste is suggestive of the bitter taste of wood sorrel. What is clear is that by the end of the sixteenth century the shamrock had become known to English writers as a plant particularly associated with the Irish, but only with a confused notion that the shamrock was a plant eaten by them. To a herbalist like Gerard it is clear that the shamrock is clover, but other English writers do not appear to know the botanical identity of the shamrock. This is not surprising, as they probably received their information at second or third hand. It is notable that there is no mention anywhere in these writings of St. Patrick or the legend of his using the shamrock to explain the Holy Trinity. However, there are two possible references to the custom of "drowning the shamrock" in "usquebagh" or whiskey. In 1607, the playwright Edward Sharpham in his play The Fleire included a reference to "Maister Oscabath the Irishman ... and Maister Shamrough his lackey". Later, a 1630 work entitled Sir Gregory Nonsence by the poet John Taylor contains the lines: "Whilste all the Hibernian Kernes in multitudes, /Did feast with shamerags steeved in Usquebagh." Link to St. Patrick Traditionally, shamrock is said to have been used by Saint Patrick to illustrate the Christian doctrine of the Holy Trinity when Christianising Ireland in the 5th century. The first evidence of a link between St Patrick and the shamrock appears in 1675 on the St Patrick's Coppers or Halpennies. These appear to show a figure of St Patrick preaching to a crowd while holding a shamrock, presumably to explain the doctrine of the Holy Trinity. When Saint Patrick arrived in Ireland in 431, he used the shamrock to teach pagans the Holy Trinity. In pagan Ireland, three was a significant number and the Irish had many triple deities, which could have aided St Patrick in his evangelisation efforts. Patricia Monaghan states that "There is no evidence that the clover or wood sorrel (both of which are called shamrocks) were sacred to the Celts". However, Jack Santino speculates that "The shamrock was probably associated with the earth and assumed by the druids to be symbolic of the regenerative powers of nature ... Nevertheless, the shamrock, whatever its history as a folk symbol, today has its meaning in a Christian context. Pictures of Saint Patrick depict him driving the snakes out of Ireland with a cross in one hand and a sprig of shamrocks in the other." Roger Homan writes, "We can perhaps see St Patrick drawing upon the visual concept of the triskele when he uses the shamrock to explain the Trinity". Why the Celts to whom St Patrick was preaching would have needed an explanation of the concept of a triple deity is not clear, since at least two separate triple goddesses are known to have been worshipped in pagan Ireland - Γ‰riu, FΓ³dla and Banba; and Badb Catha, Macha and The MorrΓ­gan. The first written mention of the link does not appear until 1681, in the account of Thomas Dineley, an English traveller to Ireland. 
Dineley writes:The 17th day of March yeerly is St Patricks, an immoveable feast, when ye Irish of all stations and condicions were crosses in their hatts, some of pinns, some of green ribbon, and the vulgar superstitiously wear shamroges, 3 leav'd grass, which they likewise eat (they say) to cause a sweet breath.There is nothing in Dineley's account of the legend of St. Patrick using the shamrock to teach the mystery of the Holy Trinity, and this story does not appear in writing anywhere until a 1726 work by the botanist Caleb Threlkeld. Threlkeld identifies the shamrock as White Field Clover (Trifolium pratense album ) and comments rather acerbically on St. Patrick's Day customs including the wearing of shamrocks:This plant is worn by the people in their hats upon the 17. Day of March yearly, (which is called St. Patrick's Day.) It being a current tradition, that by this Three Leafed Grass, he emblematically set forth to them the Mystery of the Holy Trinity. However that be, when they wet their Seamar-oge, they often commit excess in liquor, which is not a right keeping of a day to the Lord; error generally leading to debauchery.The Rev Threlkeld's remarks on liquor undoubtedly refer to the custom of toasting St. Patrick's memory with "St. Patrick's Pot", or "drowning the shamrock" as it is otherwise known. After mass on St. Patrick's Day the traditional custom of the menfolk was to lift the usual fasting restrictions of Lent and repair to the nearest tavern to mark the occasion with as many St. Patrick's Pots as they deemed necessary. The drowning of the shamrock was accompanied by a certain amount of ritual as one account explains: "The drowning of the shamrock" by no means implies it was necessary to get drunk in doing so. At the end of the day the shamrock which has been worn in the coat or the hat is removed and put into the final glass of grog or tumbler of punch; and when the health has been drunk or the toast honoured, the shamrock should be picked out from the bottom of the glass and thrown over the left shoulder.The shamrock is still chiefly associated with Saint Patrick's Day, which has become the Irish national holiday, and is observed with parades and celebrations worldwide. The custom of wearing shamrock on the day is still observed and depictions of shamrocks are habitually seen during the celebrations. Symbol of Ireland As St. Patrick is Ireland's patron saint, the shamrock has been used as a symbol of Ireland since the 18th century. The shamrock first began to evolve from a symbol purely associated with St. Patrick to an Irish national symbol when it was taken up as an emblem by rival militias during the turbulent politics of the late eighteenth century. On one side were the Volunteers (also known as the Irish Volunteers), who were local militias in late 18th century Ireland, raised to defend Ireland from the threat of French and Spanish invasion when regular British soldiers were withdrawn from Ireland to fight during the American Revolutionary War. On the other side were revolutionary nationalist groups, such as the United Irishmen. Among the Volunteers, examples of the use of the shamrock include its appearance on the guidon of the Royal Glin Hussars formed in July 1779 by the Knight of Glin, and its appearance on the flags of the Limerick Volunteers, the Castle Ray Fencibles and the Braid Volunteers. 
The United Irishmen adopted green as their revolutionary colour and wore green uniforms or ribbons in their hats, and the green concerned was often associated with the shamrock. The song The Wearing of the Green commemorated their exploits, and various versions exist which mention the shamrock. The green flag was used as their standard and was often depicted accompanied by shamrocks, and in 1799 a revolutionary journal entitled The Shamroc briefly appeared in which the aims of the rebellion were supported. Since the Acts of Union between Britain and Ireland in 1800, the shamrock has been incorporated into the Royal Coat of Arms of the United Kingdom, depicted growing from a single stem alongside the rose of England and the thistle of Scotland to symbolise the unity of the three kingdoms. Since then, the shamrock has regularly appeared alongside the rose, thistle and (sometimes) leek for Wales on British coins such as the two shilling and crown, and on stamps. The rose, thistle and shamrock motif also appears regularly on British public buildings such as Buckingham Palace. Throughout the nineteenth century the popularity of the shamrock as a symbol of Ireland grew, and it was depicted in many illustrations on items such as book covers and St. Patrick's Day postcards. It was also mentioned in many songs and ballads of the time. For example, a popular ballad called The Shamrock Shore lamented the state of Ireland in the nineteenth century. Another typical example of such a ballad appears in the works of Thomas Moore, whose Oh the Shamrock embodies the Victorian spirit of sentimentality. It was immensely popular and contributed to raising the profile of the shamrock as an image of Ireland: Oh The Shamrock - Through Erin's Isle, To sport awhile, As Love and Valor wander'd With Wit, the sprite, Whose quiver bright A thousand arrows squander'd. Where'er they pass, A triple grass Shoots up, with dew-drops streaming, As softly green As emeralds seen Through purest crystal gleaming. Oh the Shamrock, the green immortal Shamrock! Chosen leaf Of Bard and Chief, Old Erin's native Shamrock! Throughout the nineteenth and twentieth centuries, the shamrock continued to appear in a variety of settings. For example, the shamrock appeared on many buildings in Ireland as a decorative motif, such as on the facade of the Kildare Street Club building in Dublin, St. Patrick's Cathedral in Armagh, and the Harp and Lion Bar in Listowel, County Kerry. It also appears on street furniture, such as old lamp standards like those in Mountjoy Square in Dublin, and on monuments like the Parnell Monument and the O'Connell Monument, both in O'Connell Street, Dublin. Shamrocks also appeared on decorative items such as glass, china, jewellery, poplin and Irish lace. Belleek Pottery in County Fermanagh, for example, regularly features shamrock motifs. The shamrock is used in the emblems of many state organisations, both in the Republic of Ireland and Northern Ireland. Some of these are all-Ireland bodies (such as Tourism Ireland), as well as organisations specific to the Republic of Ireland (such as IDA Ireland) and Northern Ireland (such as the Police Service of Northern Ireland). The Irish postal service, An Post, regularly features the shamrock on its series of stamps. The national airline, Aer Lingus, uses the emblem in its logos, and its air traffic control call sign is "SHAMROCK". The shamrock has been registered as a trademark by the Government of Ireland.
In the early 1980s, Ireland defended its right to use the shamrock as its national symbol in a German trademark case, which included high-level representation from Taoiseach Charles Haughey. Having originally lost, Ireland won on appeal to the German Supreme Court in 1985. Since 1969, a bowl of shamrocks in a special Waterford Crystal bowl featuring a shamrock design is flown from Ireland to Washington, D.C., and presented to the President of the United States every St. Patrick's Day. Shamrock is also used in emblems of UK organisations with an association with Ireland, such as the Irish Guards. Soldiers of the Royal Irish Regiment of the British Army use the shamrock as their emblem, and wear a sprig of shamrock on Saint Patrick's Day. Shamrock are exported to wherever the regiment is stationed throughout the world. Queen Victoria decreed over a hundred years ago that soldiers from Ireland should wear a sprig of shamrock in recognition of fellow Irish soldiers who had fought bravely in the Boer War, a tradition continued by British army soldiers from both the north and the south of Ireland following partition in 1921. The coat of arms on the flag of the Royal Ulster Constabulary George Cross Foundation was cradled in a wreath of shamrock. The shamrock also appears in the emblems of a wide range of voluntary and non-state organisations in Ireland, such as the Irish Farmers Association, the Boy Scouts of Ireland association, Scouting Ireland Irish Girl Guides, and the Irish Kidney Donors Association. In addition many sporting organisations representing Ireland use the shamrock in their logos and emblems. Examples include the Irish Football Association (Northern Ireland), Irish Rugby Football Union, Swim Ireland, Cricket Ireland, and the Olympic Council of Ireland. A sprig of shamrock represents the Lough Derg Yacht Club Tipperary, (est. 1835). The shamrock is the official emblem of Irish football club Shamrock Rovers. Use outside Ireland Shamrock commonly appear as part of the emblem of many organisations in countries overseas with communities of Irish descent. Outside Ireland, various organisations, businesses and places also use the symbol to advertise a connection with the island. These uses include: The shamrock features in the emblem of the Ancient Order of Hibernians, the largest and oldest Irish Catholic organisation. Founded in New York City in 1836 by Irish immigrants, it claims a membership of 80,000 in the United States, Canada and Ireland. The Emerald Society, an organisation of American police officers or fire fighters of Irish heritage, includes a shamrock on its badge. Emerald Societies are found in most major US cities such as New York City, Milwaukee, Jersey City, Washington, Boston, Chicago, San Francisco, Los Angeles and Saint Paul, Minnesota. The shamrock is featured in the "compartment" of the Royal Arms of Canada, as part of a wreath of shamrocks, roses, thistles, and lilies (representing the Irish, English, Scottish, and French settlers of Canada). The flag of the city of Montreal, Quebec, Canada has a shamrock in the lower right quadrant. The shamrock represents the Irish population, one of the four major ethnic groups that made up the population of the city in the 19th century when the arms were designed, the other three being the French (represented by a fleur-de-lis in the upper-left), the English (represented by a rose in the upper-right), and the Scots (represented by a thistle in the lower-left). 
The shamrock is featured on the passport stamp of Montserrat, many of whose citizens are of Irish descent. The shamrock signified the Second Corps of the Army of the Potomac in the American Civil War, which contained the Irish Brigade. It can still be seen on the regimental coat of arms of "The Fighting Sixty-Ninth" The Erin Go Bragh flag, used originally by the Saint Patrick's Battalion of the Mexican Army, uses an angelic ClΓ‘irseach, a medieval Irish harp, cradled in a wreath of clover. The crest of Glasgow Celtic Football Club originally included a shamrock which was changed in 1938 to a four leaved clover for reasons that remain unclear. The club was founded in 1888 in Glasgow among the poor Irish immigrants of the city. London Irish rugby football club has a shamrock on its crest. The club was founded in 1898 for the young Irishmen of London. The Shamrocks Motorcycle Club is a US-based traditional motorcycle club (composed of law enforcement personnel) which uses the shamrock as its name and symbol. The basketball team, Boston Celtics, in the USA incorporate the shamrock in their logo. Former NBA player Shaquille O'Neal nicknamed himself the "Big Shamrock" after joining the team. In Australia, the Melbourne Celtic Club features a shamrock on its emblem. The club was founded in 1887 for the Irish and other Celtic groups in the city. During the Russian Civil War a British officer Col. P.J. Woods, of Belfast, established a Karelian Regiment which had a shamrock on an orange field as its regimental badge. A shamrock (Trifylli) is the official emblem of Greek multi-sport club Panathinaikos A.O., Greek football club Acharnaikos F.C. and Cypriot sports club AC Omonia. A red shamrock is also the emblem of Platanias F.C., a Cretan football team of Chania. The Danish football club Viborg FF uses a shamrock in its badge and it has become a symbol of the town of Viborg. The German football club SpVgg Greuther FΓΌrth also has a shamrock in its badge as it is a symbol of the city of FΓΌrth. According to the Anti-Defamation League, the Aryan Brotherhood symbol combines a shamrock with a swastika. See also Guernsey lily Ragwort (Isle of Man) St. Patrick's blue Trefoil References Bibliography External links The truth behind the shamrock on the BBC News website, dated 17 March 2004. Retrieved 20 July 2008. Landscaping: Shamrocks and 4-Leaf Clovers on the About.com website. Retrieved 20 July 2008. Decodeunicode.org/en/u+2618 Shamrock as a symbol in Unicode Christian symbols Trinitarianism Irish-American culture Culture of Ireland Irish folklore National symbols of Ireland National symbols of Northern Ireland Plant common names
Shamrock
[ "Biology" ]
4,910
[ "Plant common names", "Common names of organisms", "Plants" ]
160,435
https://en.wikipedia.org/wiki/Global%20air-traffic%20management
Global air-traffic management (GATM) is a concept for satellite-based communication, navigation and surveillance and air traffic management. The Federal Aviation Administration and the International Civil Aviation Organization, a specialized agency of the United Nations, established GATM standards to keep air travel safe and effective in increasingly crowded worldwide airspace. Efforts are being made worldwide to test and implement new technologies that will allow GATM to efficiently support air traffic control. Airservices Australia's ADS-B initiative is one of the major implementation programs in this field. This initiative will facilitate the certification of this new technology, allowing further implementation. The two core satellite constellations are the Global Positioning System (GPS) of the United States and the Global Navigation Satellite System (GLONASS) of Russia. The third constellation will be the European Union's Galileo system when it becomes fully operational. These systems provide independent capabilities and can be used in combination with future core constellations and augmentation systems. Signals from the core satellites are received by ground reference stations and any errors in the signals are identified. Each station in the network relays the data to area-wide master stations, where correction information for specific geographical areas is computed. The correction message is prepared and uplinked to a geostationary communication satellite (GEO) via a ground uplink station. This message is broadcast to receivers on board aircraft flying within the broadcast coverage area of the system. The system is known in the US as WAAS (Wide Area Augmentation System), in Europe as EGNOS (European Geostationary Navigation Overlay System), in Japan as MSAS (MTSAT Satellite Based Augmentation System) and in India as GAGAN (GPS-aided geo-augmented navigation). The system employs various techniques to correct for equatorial anomalies. The advantage of the system is that it is global in scope and has the potential to support all phases of flight, providing seamless global navigation guidance. This could eliminate the need for a variety of ground and airborne systems that were designed to meet specific requirements for certain phases of flight. Standards and recommended practices for air traffic management based on a global navigation satellite system are developed by ICAO (International Civil Aviation Organization). Thus the system has to meet ICAO standards to become operational. Air traffic control International air transport Satellite navigation systems
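As a rough illustration of the correction flow described above, the following Python sketch applies hypothetical per-satellite range corrections, of the kind an augmentation system broadcasts, to raw pseudoranges before a position fix would be computed. It is only a sketch: real WAAS/EGNOS messages have a defined format that separates fast, long-term and ionospheric corrections, none of which is reproduced here, and all satellite IDs and numbers below are invented.

```python
# Illustration only: applying broadcast per-satellite range corrections to raw
# pseudoranges. The satellite IDs and all values are hypothetical.

raw_pseudoranges_m = {        # metres, as measured by the aircraft receiver
    "GPS-05": 21_456_789.4,
    "GPS-12": 23_012_345.9,
    "GPS-19": 20_987_654.2,
}

broadcast_corrections_m = {   # metres, combined correction per satellite
    "GPS-05": -2.3,
    "GPS-12": +1.1,
    "GPS-19": -0.7,
}

def apply_corrections(pseudoranges, corrections):
    """Add the broadcast correction to each measured pseudorange.
    Satellites without a valid correction are dropped, since the augmentation
    message also tells the receiver which satellites may safely be used."""
    return {
        sat: rho + corrections[sat]
        for sat, rho in pseudoranges.items()
        if sat in corrections
    }

corrected = apply_corrections(raw_pseudoranges_m, broadcast_corrections_m)
for sat, rho in corrected.items():
    print(f"{sat}: corrected pseudorange {rho:,.1f} m")
```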
Global air-traffic management
[ "Technology" ]
466
[ "Satellite navigation systems", "Information systems", "Wireless locating", "Computer systems" ]
160,478
https://en.wikipedia.org/wiki/Loopback
Loopback (also written loop-back) is the routing of electronic signals or digital data streams back to their source without intentional processing or modification. It is primarily a means of testing the communications infrastructure. Loopback can take the form of communication channels with only one communication endpoint. Any message transmitted by such a channel is immediately and only received by that same channel. In telecommunications, loopback devices perform transmission tests of access lines from the serving switching center, which usually does not require the assistance of personnel at the served terminal. Loop around is a method of testing between stations that are not necessarily adjacent, wherein two lines are used, with the test being done at one station and the two lines are interconnected at the distant station. A patch cable may also function as loopback, when applied manually or automatically, remotely or locally, facilitating a loop-back test. Where a system (such as a modem) involves round-trip analog-to-digital processing, a distinction is made between analog loopback, where the analog signal is looped back directly, and digital loopback, where the signal is processed in the digital domain before being re-converted to an analog signal and returned to the source. Telecommunications In telecommunications, loopback, or a loop, is a hardware or software method which feeds a received signal or data back to the sender. It is used as an aid in debugging physical connection problems. As a test, many data communication devices can be configured to send specific patterns (such as all ones) on an interface and can detect the reception of this signal on the same port. This is called a loopback test and can be performed within a modem or transceiver by connecting its output to its own input. A circuit between two points in different locations may be tested by applying a test signal on the circuit in one location, and having the network device at the other location send a signal back through the circuit. If this device receives its own signal back, this proves that the circuit is functioning. A hardware loop is a simple device that physically connects the receiver channel to the transmitter channel. In the case of a network termination connector such as X.21, this is typically done by simply connecting the pins together in the connector. Media such as optical fiber or coaxial cable, which have separate transmit and receive connectors, can simply be looped together with a single strand of the appropriate medium. A modem can be configured to loop incoming signals from either the remote modem or the local terminal. This is referred to as loopback or software loop. Serial interfaces A serial communications transceiver can use loopback for testing its functionality. For example, a device's transmit pin connected to its receive pin will result in the device receiving exactly what it transmits. Moving this looping connection to the remote end of a cable adds the cable to this test. Moving it to the far end of a modem link extends the test further. This is a common troubleshooting technique and is often combined with a specialized test device that sends specific patterns and counts any errors that come back (see Bit Error Rate Test). Some devices include built-in loopback capability. A simple serial interface loopback test, called paperclip test, is sometimes used to identify serial ports of a computer and verify operation. 
It utilizes a terminal emulator application to send characters, with flow control set to off, to the serial port and receive the same back. For this purpose, a paperclip is used to short pin 2 to pin 3 (the receive and transmit pins) on a standard RS-232 interface using D-subminiature DE-9 or DB-25 connectors. Virtual loopback interface Implementations of the Internet protocol suite include a virtual network interface through which network applications can communicate when executing on the same machine. It is implemented entirely within the operating system's networking software and passes no packets to any network interface controller. Any traffic that a computer program sends to a loopback IP address is simply and immediately passed back up the network software stack as if it had been received from another device. Unix-like systems usually name this loopback interface lo or lo0. Various Internet Engineering Task Force (IETF) standards reserve the IPv4 address block 127.0.0.0/8 (in CIDR notation) and the IPv6 address ::1 for this purpose. The most common IPv4 address used is 127.0.0.1. Commonly these loopback addresses are mapped to the hostnames localhost or loopback. MPLS One notable exception to the use of the 127.0.0.0/8 addresses is their use in Multiprotocol Label Switching (MPLS) traceroute error detection, in which their property of not being routable provides a convenient means to avoid delivery of faulty packets to end users. Martian packets Any IP datagram with a source or destination address set to a loopback address must not appear outside of a computing system, or be routed by any routing device. Packets received on an interface with a loopback destination address must be dropped. Such packets are sometimes referred to as Martian packets. As with other bogus packets, they may be malicious and any problems they might cause can be avoided by applying bogon filtering. Management interface Some computer network equipment uses the term "loopback" for a virtual interface used for management purposes. Unlike a proper loopback interface, this type of loopback device is not used to talk with itself. Such an interface is assigned an address that can be accessed from management equipment over a network but is not assigned to any of the physical interfaces on the device. Such a loopback device is also used for management datagrams, such as alarms, originating from the equipment. The property that makes this virtual interface special is that applications that use it will send or receive traffic using the address assigned to the virtual interface as opposed to the address on the physical interface through which the traffic passes. Loopback interfaces of this sort are often used in the operation of routing protocols, because they have the useful property that, unlike real physical interfaces, they will not go down when a physical port fails. Other applications The audio systems Open Sound System (OSS), Advanced Linux Sound Architecture (ALSA) and PulseAudio have loopback modules for recording the audio output of applications for testing purposes. Unlike physical loopbacks, this does not involve double analog/digital conversion and no disruption is caused by hardware malfunctions. See also Feedback Loop device Virtual network interface References External links National Instruments: Serial loopback testing Communication circuits Internet architecture
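As a minimal demonstration of the virtual loopback interface described above, the following Python sketch (the port number is arbitrary and chosen only for the example) starts a small echo server bound to 127.0.0.1 and checks that a client on the same machine receives its test pattern back; no packet ever reaches a physical network interface.

```python
# Loopback self-test over the virtual loopback interface: everything below
# stays inside the host's network stack and never touches real hardware.
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # arbitrary unprivileged port for the example
ready = threading.Event()

def echo_server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                        # tell the client it is safe to connect
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))  # echo the received payload straight back

threading.Thread(target=echo_server, daemon=True).start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"loopback test pattern")
    reply = cli.recv(1024)

print("loopback OK" if reply == b"loopback test pattern" else "loopback FAILED")
```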
Loopback
[ "Technology", "Engineering" ]
1,319
[ "Telecommunications engineering", "Internet architecture", "IT infrastructure", "Communication circuits" ]
160,501
https://en.wikipedia.org/wiki/Ultra%20high%20frequency
Ultra high frequency (UHF) is the ITU designation for radio frequencies in the range between 300Β megahertz (MHz) and 3Β gigahertz (GHz), also known as the decimetre band as the wavelengths range from one meter to one tenth of a meter (one decimeter). Radio waves with frequencies above the UHF band fall into the super-high frequency (SHF) or microwave frequency range. Lower frequency signals fall into the VHF (very high frequency) or lower bands. UHF radio waves propagate mainly by line of sight; they are blocked by hills and large buildings although the transmission through building walls is strong enough for indoor reception. They are used for television broadcasting, cell phones, satellite communication including GPS, personal radio services including Wi-Fi and Bluetooth, walkie-talkies, cordless phones, satellite phones, and numerous other applications. The IEEE defines the UHF radar band as frequencies between 300Β MHz and 1Β GHz. Two other IEEE radar bands overlap the ITU UHF band: the L band between 1 and 2Β GHz and the S band between 2 and 4Β GHz. Propagation characteristics Radio waves in the UHF band travel almost entirely by line-of-sight propagation (LOS) and ground reflection; unlike in the HF band there is little to no reflection from the ionosphere (skywave propagation), or ground wave. UHF radio waves are blocked by hills and cannot travel beyond the horizon, but can penetrate foliage and buildings for indoor reception. Since the wavelengths of UHF waves are comparable to the size of buildings, trees, vehicles and other common objects, reflection and diffraction from these objects can cause fading due to multipath propagation, especially in built-up urban areas. Atmospheric moisture reduces, or attenuates, the strength of UHF signals over long distances, and the attenuation increases with frequency. UHF TV signals are generally more degraded by moisture than lower bands, such as VHF TV signals. As the visual horizon sets the maximum range of UHF transmission to between 30 and 40Β miles (48 to 64Β km) or less, depending on local terrain, the same frequency channels can be reused by other users in neighboring geographic areas (frequency reuse). Radio repeaters are used to retransmit UHF signals when a distance greater than the line of sight is required. Occasionally when conditions are right, UHF radio waves can travel long distances by tropospheric ducting as the atmosphere warms and cools throughout the day. Antennas The length of an antenna is related to the length of the radio waves used. Due to the short wavelengths, UHF antennas are conveniently stubby and short; at UHF frequencies a quarter-wave monopole, the most common omnidirectional antenna is between 2.5 and 25Β cm long. UHF wavelengths are short enough that efficient transmitting antennas are small enough to mount on handheld and mobile devices, so these frequencies are used for two-way land mobile radio systems, such as walkie-talkies, two-way radios in vehicles, and for portable wireless devices; cordless phones and cell phones. Omnidirectional UHF antennas used on mobile devices are usually short whips, sleeve dipoles, rubber ducky antennas or the planar inverted F antenna (PIFA) used in cellphones. Higher gain omnidirectional UHF antennas can be made of collinear arrays of dipoles and are used for mobile base stations and cellular base station antennas. The short wavelengths also allow high gain antennas to be conveniently small. 
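The quarter-wave monopole lengths quoted above follow directly from the relation between wavelength and frequency (wavelength = c / f, with the monopole a quarter of that); the short Python sketch below is included only as a worked check of those figures:

```python
# Worked check of the quarter-wave monopole lengths mentioned above.
C = 299_792_458.0   # speed of light in metres per second

def quarter_wave_cm(freq_hz):
    """Length of a quarter-wave monopole, in centimetres."""
    return (C / freq_hz) / 4.0 * 100.0

for f_mhz in (300, 900, 2450, 3000):
    print(f"{f_mhz:>5} MHz -> {quarter_wave_cm(f_mhz * 1e6):5.1f} cm")
# 300 MHz gives about 25 cm and 3000 MHz about 2.5 cm, matching the text.
```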
High gain antennas for point-to-point communication links and UHF television reception are usually Yagi, log periodic, corner reflectors, or reflective array antennas. At the top end of the band, slot antennas and parabolic dishes become practical. For satellite communication, helical and turnstile antennas are used since satellites typically employ circular polarization which is not sensitive to the relative orientation of the transmitting and receiving antennas. For television broadcasting specialized vertical radiators that are mostly modifications of the slot antenna or reflective array antenna are used: the slotted cylinder, zig-zag, and panel antennas. Applications UHF television broadcasting channels are used for digital television, although much of the former bandwidth has been reallocated to land mobile radio system, trunked radio and mobile telephone use. Since at UHF frequencies transmitting antennas are small enough to install on portable devices, the UHF spectrum is used worldwide for land mobile radio systems, two-way radios used for voice communication for commercial, industrial, public safety, and military purposes. Examples of personal radio services are GMRS, PMR446, and UHF CB. The most rapidly-expanding use of the band is Wi-Fi (wireless LAN) networks in homes, offices, and public places. Wi-Fi IEEE 802.11 low band operates between 2412 and 2484Β MHz. A second widespread use is for cellphones, allowing handheld mobile phones be connected to the public switched telephone network and the Internet. Current 3G and 4G cellular networks use UHF, the frequencies varying among different carriers and countries. Satellite phones also use this frequency in the L band and S band. Examples of UHF frequency allocations Australia 406–406.1Β MHz: Mobile satellite service 450.4875–451.5125Β MHz:Fixed point-to-point link 457.50625–459.9875Β MHz: Land mobile service 476–477Β MHz: UHF citizens band (Land mobile service) 503–694Β MHz: UHF channels for television broadcasting Canada 430–450Β MHz: Amateur radio (70Β cm band) 470–806Β MHz: Terrestrial television (with select channels in the 600 & 700Β MHz bands left vacant) 1452–1492Β MHz: Digital Audio Broadcasting (L band) Many other frequency assignments for Canada and Mexico are similar to their US counterparts France 380-400Β MHz: Terrestrial Trunked Radio for Police 430-440Β MHz: Amateur radio (70Β cm band) 470-694Β MHz: Terrestrial television New Zealand 406.1–420Β MHz: Land mobile service 430–440Β MHz: Amateur radio (70Β cm band) and amateur radio satellite 476–477Β MHz: PRS Personal Radio Service (Land mobile service) 485–502Β MHz: Analog and P25 Emergency services use 510–622Β MHz: Terrestrial television 960–1215Β MHz: Aeronautical radionavigation 1240–1300Β MHz: Amateur radio (23Β cm band) United Kingdom 380–399.9Β MHz: Terrestrial Trunked Radio (TETRA) service for emergency use 430–440Β MHz: Amateur radio (70Β cm band) 446.0–446.2Β MHz : European unlicensed PMR service => PMR446 457–464Β MHz: Scanning telemetry and telecontrol, assigned mostly to the water, gas, and electricity industries 606–614Β MHz: Radio microphones and radio-astronomy 470–862Β MHz: Previously used for analogue TV channels 21–69 (until 2012). Currently channels 21 to 37 and 39 to 48 are used for Freeview digital TV. Channels 55 to 56 were previously used by temporary muxes COM7 and COM8, channel 38 was used for radio astronomy but has been cleared to allow PMSE users access on a licensed, shared basis. 694–790Β MHz: i.e. 
Channels 49 to 60 have been cleared, to allow these channels to be allocated for 5G cellular communication. 791–862Β MHz, i.e. channels 61 to 69 inclusive were previously used for licensed and shared wireless microphones (channel 69 only), has since been allocated to 4G cellular communications. 863–865Β MHz: Used for licence-exempt wireless systems. 863–870Β MHz: Short range devices, LPWAN IoT devices such as NarrowBand-IoT. 870–960Β MHz: Cellular communications (GSM900 - Vodafone and O2 only) including GSM-R and future TETRA 1240–1325Β MHz: Amateur radio (23Β cm band) 1710–1880Β MHz: 2G Cellular communications (GSM1800) 1880–1900Β MHz: DECT cordless telephone 1900–1980Β MHz: 3G cellular communications (mobile phone uplink) 2110–2170Β MHz: 3G cellular communications (base station downlink) 2310–2450Β MHz: Amateur radio (13Β cm band) United States UHF channels are used for digital television broadcasting on both over the air channels and cable television channels. Since 1962, UHF channel tuners (at the time, channels 14 to 83) have been required in television receivers by the All-Channel Receiver Act. However, because of their more limited range, and because few sets could receive them until older sets were replaced, UHF channels were less desirable to broadcasters than VHF channels (and licenses sold for lower prices). A complete list of US Television Frequency allocations can be found at Pan-American television frequencies. There is a considerable amount of lawful unlicensed activity (cordless phones, wireless networking) clustered around 900Β MHz and 2.4Β GHz, regulated under Title 47 CFR Part 15. These ISM bandsβ€”frequencies with a higher unlicensed power permitted for use originally by Industrial, Scientific, Medical apparatusβ€”are now some of the most crowded in the spectrum because they are open to everyone. The 2.45Β GHz frequency is the standard for use by microwave ovens, adjacent to the frequencies allocated for Bluetooth network devices. The spectrum from 806Β MHz to 890Β MHz (UHF channels 70 to 83) was taken away from TV broadcast services in 1983, primarily for analog mobile telephony. In 2009, as part of the transition from analog to digital over-the-air broadcast of television, the spectrum from 698Β MHz to 806Β MHz (UHF channels 52 to 69) was removed from TV broadcasting, making it available for other uses. Channel 55, for instance, was sold to Qualcomm for their MediaFLO service, which was later sold to AT&T, and discontinued in 2011. Some US broadcasters had been offered incentives to vacate this channel early, permitting its immediate mobile use. The FCC's scheduled auction for this newly available spectrum was completed in March 2008. 225–420Β MHz: Government use, including meteorology, military aviation, and federal two-way use 420–450Β MHz: Government radiolocation, amateur radio satellite and amateur radio (70Β cm band), MedRadio 450–470Β MHz: UHF business band, General Mobile Radio Service, and Family Radio Service 2-way "walkie-talkies", public safety 470–512Β MHz: Low-band TV channels 14 to 20 (shared with public safety land mobile 2-way radio in 12 major metropolitan areas scheduled to relocate to 700Β MHz band by 2023) 512–608Β MHz: Medium-band TV channels 21 to 36 608–614Β MHz: Channel 37 used for radio astronomy and wireless medical telemetry 614–698Β MHz: Mobile broadband shared with TV channels 38 to 51 auctioned in April 2017. TV stations were relocated by 2020. 
617–652Β MHz: Mobile broadband service downlink 652–663Β MHz: Wireless microphones (higher priority) and unlicensed devices (lower priority) 663–698Β MHz: Mobile broadband service uplink 698–806Β MHz: Was auctioned in March 2008; bidders got full use after the transition to digital TV was completed on June 12, 2009 (formerly high-band UHF TV channels 52 to 69) and recently modified in 2021 for Next Generation 5G UHF transmission bandwidth for 'over the air' channels 2 thru 69 (virtual 1 thru 36). 806–816Β MHz: Public safety and commercial 2-way (formerly TV channels 70 to 72) 817–824Β MHz: ESMR band for wideband mobile services (mobile phone) (formerly public safety and commercial 2-way) 824–849Β MHz: Cellular A & B franchises, terminal (mobile phone) (formerly TV channels 73 to 77) 849–851Β MHz: Commercial aviation air-ground systems (Gogo) 851–861Β MHz: Public safety and commercial 2-way (formerly TV channels 77 to 80) 862–869Β MHz: ESMR band for wideband mobile services (base station) (formerly public safety and commercial 2-way) 869–894Β MHz: Cellular A & B franchises, base station (formerly TV channels 80 to 83) 894–896Β MHz: Commercial aviation air-ground systems (Gogo) 896–901Β MHz: Commercial 2-way radio 901–902Β MHz: Narrowband PCS: commercial narrowband mobile services 902–928Β MHz: ISM band, amateur radio (33Β cm band), cordless phones and stereo, radio-frequency identification, datalinks 928–929Β MHz: SCADA, alarm monitoring, meter reading systems and other narrowband services for a company's internal use 929–930Β MHz: Pagers 930–931Β MHz: Narrowband PCS: commercial narrowband mobile services 931–932Β MHz: Pagers 932–935Β MHz: Fixed microwave services: distribution of video, audio and other data 935–940Β MHz: Commercial 2-way radio 940–941Β MHz: Narrowband PCS: commercial narrowband mobile services 941–960Β MHz: Mixed studio-transmitter fixed links, SCADA, other. 960–1215Β MHz: Aeronautical radionavigation 1240–1300Β MHz: Amateur radio (23Β cm band) 1300–1350Β MHz: Long range radar systems 1350–1390Β MHz: Military air traffic control and mobile telemetry systems at test ranges 1390–1395Β MHz: Proposed wireless medical telemetry service. TerreStar failed to provide service by the required deadline. 1395–1400Β MHz: Wireless medical telemetry service 1400–1427Β MHz: Earth exploration, radio astronomy, and space research 1427–1432Β MHz: Wireless medical telemetry service 1432–1435Β MHz: Proposed wireless medical telemetry service. TerreStar failed to provide service by the required deadline. 1435–1525Β MHz: Military use mostly for aeronautical mobile telemetry (therefore not available for Digital Audio Broadcasting, unlike Canada/Europe) 1525–1559Β MHz: Skyterra downlink (Ligado is seeking FCC permission for terrestrial use) 1526–1536Β MHz: proposed Ligado downlink 1536–1559Β MHz: proposed guard band 1559–1610Β MHz: Radio Navigation Satellite Services (RNSS) Upper L-band 1563–1587Β MHz: GPS L1 band 1593–1610Β MHz: GLONASS G1 band 1559–1591Β MHz: Galileo E1 band (overlapping with GPS L1) 1610–1660.5Β MHz: Mobile Satellite Service 1610–1618: Globalstar uplink 1618–1626.5Β MHz: Iridium uplink and downlink 1626.5–1660.5Β MHz: Skyterra uplink (Ligado is seeking FCC permission for terrestrial use) 1627.5–1637.5Β MHz: proposed Ligado uplink 1 1646.5–1656.5Β MHz: proposed Ligado uplink 2 1660.5–1668.4Β MHz: Radio astronomy observations. Transmitting is not permitted. 1668.4–1670Β MHz: Radio astronomy observations. Weather balloons may utilize the spectrum after an advance notice. 
1670–1675Β MHz: Geostationary Operational Environmental Satellite transmissions to three earth stations in Wallops Island, Virginia; Greenbelt, Maryland and Fairbanks, Alaska. Nationwide broadband service license in this range is held by a subsidiary of Crown Castle International Corp. who is trying to provide service in cooperation with Ligado Networks. 1675–1695Β MHz: Meteorological federal users 1695–1780Β MHz: AWS mobile phone uplink (UL) operating band 1695–1755Β MHz: AWS-3 blocks A1 and B1 1710–1755Β MHz: AWS-1 blocks A, B, C, D, E, F 1755–1780Β MHz: AWS-3 blocks G, H, I, J (various federal agencies transitioning by 2025) 1780–1850Β MHz: exclusive federal use (Air Force satellite communications, Army's cellular-like communication system, other agencies) 1850–1920Β MHz: PCS mobile phoneβ€”order is A, D, B, E, F, C, G, H blocks. A, B, C = 15Β MHz; D, E, F, G, H = 5Β MHz 1920–1930Β MHz: DECT cordless telephone 1930–2000Β MHz: PCS base stationsβ€”order is A, D, B, E, F, C, G, H blocks. A, B, C = 15Β MHz; D, E, F, G, H = 5Β MHz 2000–2020Β MHz: lower AWS-4 downlink (mobile broadband) 2020–2110Β MHz: Cable Antenna Relay service, Local Television Transmission service, TV Broadcast Auxiliary service, Earth Exploration Satellite service 2110–2200Β MHz: AWS mobile broadband downlink 2110–2155Β MHz: AWS-1 blocks A, B, C, D, E, F 2155–2180Β MHz: AWS-3 blocks G, H, I, J 2180–2200Β MHz: upper AWS-4 2200–2290Β MHz: NASA satellite tracking, telemetry and control (space-to-Earth, space-to-space) 2290–2300Β MHz: NASA Deep Space Network 2300–2305Β MHz: Amateur radio (13Β cm band, lower segment) 2305–2315Β MHz: WCS mobile broadband service uplink blocks A and B 2315–2320Β MHz: WCS block C (AT&T is pursuing smart grid deployment) 2320–2345Β MHz: Satellite radio (Sirius XM) 2345–2350Β MHz: WCS block D (AT&T is pursuing smart grid deployment) 2350–2360Β MHz: WCS mobile broadband service downlink blocks A and B 2360–2390Β MHz: Aircraft landing and safety systems 2390–2395Β MHz: Aircraft landing and safety systems (secondary deployment in a dozen of airports), amateur radio otherwise 2395–2400Β MHz: Amateur radio (13Β cm band, upper segment) 2400–2483.5Β MHz: ISM, IEEE 802.11, 802.11b, 802.11g, 802.11n wireless LAN, IEEE 802.15.4-2006, Bluetooth, radio-controlled aircraft (strictly for spread spectrum use), microwave ovens, Zigbee 2483.5–2495Β MHz: Globalstar downlink and Terrestrial Low Power Service suitable for TD-LTE small cells 2495–2690Β MHz: Educational Broadcast and Broadband Radio Services 2690–2700Β MHz: Receive-only range for radio astronomy and space research See also Digital Audio Broadcasting and its regional implementations Digital terrestrial television The Thing (listening device) References External links U.S. cable television channel frequencies Tomislav Stimac, "Definition of frequency bands (VLF, ELF... etc.)". IK1QFK Home Page (vlf.it). Radio spectrum Television technology Wireless
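The United States television channel ranges listed above follow a uniform 6 MHz raster starting at 470 MHz for channel 14; the following Python sketch (an illustration derived from those listed ranges, not an official allocation table) maps a UHF television channel number to its band edges:

```python
# Sketch of the 6 MHz UHF TV channel raster implied by the US allocations above.
def uhf_tv_channel_band_mhz(channel):
    """Return (lower, upper) band edges in MHz for a US UHF TV channel (14 and up)."""
    if channel < 14:
        raise ValueError("UHF television channels start at channel 14")
    lower = 470 + 6 * (channel - 14)
    return lower, lower + 6

for ch in (14, 20, 36, 37, 51):
    lo, hi = uhf_tv_channel_band_mhz(ch)
    print(f"channel {ch}: {lo}-{hi} MHz")
# e.g. channel 37 -> 608-614 MHz, the band reserved for radio astronomy above
```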
Ultra high frequency
[ "Physics", "Technology", "Engineering" ]
3,961
[ "Information and communications technology", "Telecommunications engineering", "Radio spectrum", "Television technology", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Wireless" ]
160,505
https://en.wikipedia.org/wiki/Very%20low%20frequency
Very low frequency or VLF is the ITU designation for radio frequencies (RF) in the range of 3–30 kHz, corresponding to wavelengths from 100 to 10 km, respectively. The band is also known as the myriameter band or myriameter wave as the wavelengths range from one to ten myriameters (an obsolete metric unit equal to 10 kilometers). Due to its limited bandwidth, audio (voice) transmission is highly impractical in this band, and therefore only low-data-rate coded signals are used. The VLF band is used for a few radio navigation services, government time radio stations (broadcasting time signals to set radio clocks) and secure military communication. Since VLF waves can penetrate at least 40 meters (131 ft) into saltwater, they are used for military communication with submarines. Propagation characteristics Because of their long wavelengths, VLF radio waves can diffract around large obstacles and so are not blocked by mountain ranges, and they can propagate as ground waves following the curvature of the Earth and so are not limited by the horizon. Ground waves are absorbed by the resistance of the Earth and are less important beyond several hundred to a thousand kilometres, and the main mode of long-distance propagation is an Earth–ionosphere waveguide mechanism. The Earth is surrounded by a conductive layer of electrons and ions in the upper atmosphere at the bottom of the ionosphere, called the D layer, at 60–90 km (37–56 miles) altitude, which reflects VLF radio waves. The conductive ionosphere and the conductive Earth form a horizontal "duct" a few VLF wavelengths high, which acts as a waveguide confining the waves so they don't escape into space. The waves travel in a zig-zag path around the Earth, reflected alternately by the Earth and the ionosphere, in transverse magnetic (TM) mode. VLF waves have very low path attenuation, 2–3 dB per 1,000 km, with little of the "fading" experienced at higher frequencies. This is because VLF waves are reflected from the bottom of the ionosphere, while higher frequency shortwave signals are returned to Earth from higher layers in the ionosphere, the F1 and F2 layers, by a refraction process, and spend most of their journey in the ionosphere, so they are much more affected by ionization gradients and turbulence. Therefore, VLF transmissions are very stable and reliable, and are used for long-distance communication. Propagation distances of 5,000–20,000 km have been realized. However, atmospheric noise ("sferics") is high in the band, including such phenomena as "whistlers", caused by lightning. VLF waves can penetrate seawater to a depth of at least 10–40 meters (30–130 feet), depending on the frequency employed and the salinity of the water, so they are used to communicate with submarines. VLF waves at certain frequencies have been found to cause electron precipitation. VLF waves used to communicate with submarines have created an artificial bubble around the Earth that can protect it from solar flares and coronal mass ejections; this occurred through interaction with high-energy radiation particles. Antennas A major practical drawback of the VLF band is that, because of the length of the waves, full-size resonant antennas (half-wave dipole or quarter-wave monopole antennas) cannot practically be built because of their physical height. Vertical antennas must be used because VLF waves propagate in vertical polarization, but a quarter-wave vertical antenna at 30 kHz (10 km wavelength) would be 2.5 kilometers (about 1.6 miles) high.
So practical transmitting antennas are electrically short, a small fraction of the length at which they would be self-resonant. Due to their low radiation resistance (often less than one ohm) they are inefficient, radiating only 10% to 50% of the transmitter power at most, with the rest of the power dissipated in the antenna/ground system resistances. Very high power transmitters (~1 megawatt) are required for long-distance communication, so the efficiency of the antenna is an important factor. VLF transmitting antennas High power VLF transmitting stations use capacitively top-loaded monopole antennas. These are very large wire antennas, up to several kilometers long. They consist of a series of steel radio masts, linked at the top with a network of cables, often shaped like an umbrella or clotheslines. Either the towers themselves or vertical wires serve as monopole radiators, and the horizontal cables form a capacitive top-load to increase the current in the vertical wires, increasing the radiated power and efficiency of the antenna. High-power stations use variations on the umbrella antenna such as the "delta" and "trideco" antennas, or multiwire flattop (triatic) antennas. For low-power transmitters, inverted-L and T antennas are used. Due to the low radiation resistance, to minimize power dissipated in the ground these antennas require extremely low resistance ground (Earthing) systems, consisting of radial networks of buried copper wires under the antenna. To minimize dielectric losses in the soil, the ground conductors are buried shallowly, only a few inches in the ground, and the ground surface near the antenna is sometimes protected by copper ground screens. Counterpoise systems have also been used, consisting of radial networks of copper cables supported several feet above the ground under the antenna. A large loading coil is required at the antenna feed point to cancel the capacitive reactance of the antenna to make it resonant. At VLF the design of this coil is challenging; it must have low resistance at the operating RF frequency, high Q (quality factor), must handle very high currents, and must withstand the extremely high voltage on the antenna. These are usually huge air core coils 2–4 meters high wound on a nonconductive frame, with RF resistance reduced by using thick litz wire several centimeters in diameter, consisting of thousands of insulated strands of fine wire braided together. The high capacitance and inductance and low resistance of the antenna-loading coil combination makes it act electrically like a high-Q tuned circuit. VLF antennas have very narrow bandwidth and to change the transmitting frequency requires a variable inductor (variometer) to tune the antenna. The large VLF antennas used for high-power transmitters usually have bandwidths of only 50–100 hertz. The high Q results in very high voltages (up to 250 kV) on the antenna and very good insulation is required. Large VLF antennas usually operate in 'voltage limited' mode: the maximum power of the transmitter is limited by the voltage the antenna can accept without air breakdown, corona, and arcing from the antenna. Dynamic antenna tuning The bandwidth of large capacitively loaded VLF antennas is so narrow (50–100 Hz) that even the small frequency shifts of FSK and MSK modulation may exceed it, throwing the antenna out of resonance, causing the antenna to reflect some power back down the feedline. 
The traditional solution is to use a "bandwidth resistor" in the antenna which reduces the Q, increasing the bandwidth; however this also reduces the power output. A recent alternative used in some military VLF transmitters is a circuit which dynamically shifts the antenna's resonant frequency between the two output frequencies with the modulation. This is accomplished with a saturable reactor in series with the antenna loading coil. This is a ferromagnetic core inductor with a second control winding through which a DC current flows, which controls the inductance by magnetizing the core, changing its permeability. The keying datastream is applied to the control winding. So when the frequency of the transmitter is shifted between the '1' and '0' frequencies, the saturable reactor changes the inductance in the antenna resonant circuit to shift the antenna resonant frequency to follow the transmitter's frequency. VLF receiving antennas The requirements for receiving antennas are less stringent, because of the high level of natural atmospheric noise in the band. At VLF frequencies atmospheric radio noise is far above the receiver noise introduced by the receiver circuit and determines the receiver signal-to-noise ratio. So small inefficient receiving antennas can be used, and the low voltage signal from the antenna can simply be amplified by the receiver without introducing significant noise. Ferrite loop antennas are usually used for reception. Modulation Because of the small bandwidth of the band, and the extremely narrow bandwidth of the antennas used, it is impractical to transmit audio signals (AM or FM radiotelephony). A typical AM radio signal with a bandwidth of 10 kHz would occupy one third of the VLF band. More significantly, it would be difficult to transmit any distance because it would require an antenna with 100 times the bandwidth of current VLF antennas, which due to the Chu-Harrington limit would be enormous in size. Therefore, only text data can be transmitted, at low bit rates. In military networks frequency-shift keying (FSK) modulation is used to transmit radioteletype data using 5-bit ITA2 or 8-bit ASCII character codes. A small frequency shift of 30–50 hertz is used due to the small bandwidth of the antenna. In high power VLF transmitters, to increase the allowable data rate, a special form of FSK called minimum-shift keying (MSK) is used. This is required due to the high Q of the antenna. The huge capacitively-loaded antenna and loading coil form a high-Q tuned circuit, which stores oscillating electrical energy. The Q of large VLF antennas is typically over 200; this means the antenna stores far more energy (200 times as much) than is supplied or radiated in any single cycle of the transmitter current. The energy is stored alternately as electrostatic energy in the topload and ground system, and magnetic energy in the vertical wires and loading coil. VLF antennas typically operate "voltage-limited", with the voltage on the antenna close to the limit that the insulation will stand, so they will not tolerate any abrupt change in the voltage or current from the transmitter without arcing or other insulation problems. As described below, MSK is able to modulate the transmitted wave at higher data rates without causing voltage spikes on the antenna. The three types of modulation that have been used in VLF transmitters are: Continuous Wave (CW), Interrupted Continuous Wave (ICW), or On-Off Keying: Morse code radiotelegraphy transmission with unmodulated carrier. 
The carrier is turned on and off, with carrier on representing the Morse code "dots" and "dashes" and carrier off representing spaces. The simplest and earliest form of radio data transmission, this was used from the beginning of the 20th century to the 1960s in commercial and military VLF stations. Because of the high Q of the antenna the carrier cannot be switched abruptly on and off but requires a long time constant, many cycles, to build up the oscillating energy in the antenna when the carrier turns on, and many cycles to dissipate the stored energy when the carrier turns off. This limits the data rate that can be transmitted to 15–20 words/minute. CW is now only used in small hand-keyed transmitters, and for testing large transmitters. Frequency-shift keying (FSK) FSK is the second oldest and second simplest form of digital radio data modulation, after CW. For FSK, the carrier is shifted between two frequencies, one representing the binary digit '1' and the other representing binary '0'. For example, a frequency of 9070 Hz might be used to indicate a '1' and the frequency 9020 Hz, 50 Hz lower, to indicate a '0'. The two frequencies are generated by a continuously-running frequency synthesizer. The transmitter is periodically switched between these frequencies to represent 8-bit ASCII codes for the characters of the message. A problem at VLF is that when the frequency is switched the two sine waves usually have different phases, which creates a sudden phase-shift transient which can cause arcing on the antenna. To avoid arcing, FSK can only be used at slow rates of 50–75 bit/s. Minimum-shift keying (MSK) A continuous phase version of FSK designed specifically for small bandwidths, this was adopted by naval VLF stations in the 1970s to increase the data rate and is now the standard mode used in military VLF transmitters. If the two frequencies representing '1' and '0' are 50 Hz apart, the standard frequency shift used in military VLF stations, their phases coincide every 20 ms. In MSK the frequency of the transmitter is switched only when the two sine waves have the same phase, at the point both sine waves cross zero in the same direction. This creates a smooth continuous transition between the waves, avoiding transients which can cause stress and arcing on the antenna. MSK can be used at data rates up to 300 bit/s, or about 35 ASCII characters (8 bits each) per second, approximately 450 words per minute. A brief numerical sketch of this phase-continuous switching is given below. Applications Early wireless telegraphy Historically, this band was used for long distance transoceanic radio communication during the wireless telegraphy era between about 1905 and 1925. Nations built networks of high-power LF and VLF radiotelegraphy stations that transmitted text information by Morse code, to communicate with other countries, their colonies, and naval fleets. Early attempts were made to use radiotelephone using amplitude modulation and single-sideband modulation within the band starting from 20 kHz, but the result was unsatisfactory because the available bandwidth was insufficient to contain the sidebands. In the 1920s the discovery of the skywave (skip) radio propagation method allowed lower power transmitters operating at high frequency to communicate at similar distances by reflecting their radio waves off a layer of ionized atoms in the ionosphere, and long-distance radio communication stations switched to the shortwave frequencies. 
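As a rough illustration of the phase-continuous keying idea described above, the following sketch (a simplification under stated assumptions, not any actual transmitter implementation) generates a keyed waveform in Python with NumPy. The 9070 Hz / 9020 Hz tone pair and the 20 ms keying interval come from the FSK and MSK paragraphs above; the sample rate, bit pattern, and variable names are illustrative assumptions.

```python
# Sketch of phase-continuous frequency keying, in the spirit of the MSK
# scheme described above. Simplified: real MSK transmitters and their
# antenna coupling are far more involved.
import numpy as np

fs = 200_000                 # sample rate in Hz (illustrative assumption)
f1, f0 = 9070.0, 9020.0      # tones for bits '1' and '0' (from the FSK example)
bit_period = 0.020           # 20 ms: with a 50 Hz shift the tones realign in phase
bits = [1, 0, 0, 1, 1, 0]    # arbitrary example data

samples_per_bit = int(fs * bit_period)
freq = np.concatenate(
    [np.full(samples_per_bit, f1 if b else f0) for b in bits]
)

# Integrating the instantaneous frequency gives a phase that is continuous
# by construction, so switching tones at the 20 ms boundaries (where the two
# free-running tones would agree in phase anyway) produces no phase jump.
phase = 2.0 * np.pi * np.cumsum(freq) / fs
waveform = np.sin(phase)

# The sample-to-sample step never exceeds the slope of a ~9 kHz sine wave,
# i.e. there is no discontinuity at the keying instants.
print("max sample-to-sample step:", np.max(np.abs(np.diff(waveform))))
```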
The Grimeton VLF transmitter at Grimeton near Varberg in Sweden, one of the few remaining transmitters from that era that has been preserved as a historical monument, can be visited by the public at certain times, such as on Alexanderson Day. Navigation beacons and time signals Due to its long propagation distances and stable phase characteristics, during the 20thΒ century the VLF band was used for long range hyperbolic radio navigation systems which allowed ships and aircraft to determine their geographical position by comparing the phase of radio waves received from fixed VLF navigation beacon transmitters. The worldwide Omega system used frequencies from 10 to 14Β kHz, as did Russia's Alpha. VLF was also used for standard time and frequency broadcasts. In the US, the time signal station WWVL began transmitting a 500Β W signal on 20Β kHz in AugustΒ 1963. It used frequency-shift keying (FSK) to send data, shifting between 20Β kHz and 26Β kHz. The WWVL service was discontinued in JulyΒ 1972. Geophysical and atmospheric measurement Naturally occurring signals in the VLF band are used by geophysicists for long range lightning location and for research into atmospheric phenomena such as the aurora. Measurements of whistlers are employed to infer the physical properties of the magnetosphere. Geophysicists use VLF-electromagnetic receivers to measure conductivity in the near surface of the Earth. VLF signals can be measured as a geophysical electromagnetic survey that relies on transmitted currents inducing secondary responses in conductive geologic units. A VLF anomaly represents a change in the attitude of the electromagnetic vector overlying conductive materials in the subsurface. Mine communication systems VLF can also penetrate soil and rock for some distance, so these frequencies are also used for through-the-earth mine communications systems. Military communications Powerful VLF transmitters are used by the military to communicate with their forces worldwide. The advantage of VLF frequencies is their long range, high reliability, and the prediction that in a nuclear war VLF communications will be less disrupted by nuclear explosions than higher frequencies. Since it can penetrate seawater VLF is used by the military to communicate with submarines near the surface, while ELF frequencies are used for deeply submerged subs. Examples of naval VLF transmitters are Britain's Skelton Transmitting Station in Skelton, Cumbria Germany's DHO38 in Rhauderfehn, which transmits on 23.4Β kHz with a power of 800Β kW U.S. Jim Creek Naval Radio Station in Oso, Washington state, which transmits on 24.8Β kHz with a power of 1.2Β MW U.S. Cutler Naval Radio Station at Cutler, Maine which transmits on 24Β kHz with 1.8Β MW. Since 2004 the US Navy has stopped using ELF transmissions, with the statement that improvements in VLF communication has made them unnecessary, so it may have developed technology to allow submarines to receive VLF transmissions while at operating depth. High power land-based and aircraft transmitters in countries that operate submarines send signals that can be received thousands of miles away. Transmitter sites typically cover great areas (many acres or square kilometers), with transmitted power anywhere from 20Β kW to 2,000Β kW. Submarines receive signals from land based and aircraft transmitters using some form of towed antenna that floats just under the surface of the water – for example a Buoyant Cable Array Antenna (BCAA). 
Modern receivers use sophisticated digital signal processing techniques to remove the effects of atmospheric noise (largely caused by lightning strikes around the world) and adjacent channel signals, extending the useful reception range. Strategic nuclear bombers of the United States Air Force receive VLF signals as part of hardened nuclear resilient operations. Two alternative character sets may be used: 5-bit ITA2 or 8-bit ASCII. Because these are military transmissions they are almost always encrypted for security reasons. Although it is relatively easy to receive the transmissions and convert them into a string of characters, enemies cannot decode the encrypted messages; military communications usually use unbreakable one-time pad ciphers since the amount of text is so small. Amateur use The frequency range below 8.3 kHz is not allocated by the International Telecommunication Union and in some nations may be used license-free. Radio amateurs in some countries have been granted permission (or have assumed permission) to operate at frequencies below 8.3 kHz. Operations tend to congregate around the frequencies 8.27 kHz, 6.47 kHz, 5.17 kHz, and 2.97 kHz. Transmissions typically last from one hour up to several days and both receiver and transmitter must have their frequency locked to a stable reference such as a GPS disciplined oscillator or a rubidium standard in order to support such long duration coherent detection and decoding. Amateur equipment Radiated power from amateur stations is very small, ranging from 1 μW to 100 μW for fixed base station antennas, and up to 10 mW from kite or balloon antennas. Despite the low power, stable propagation with low attenuation in the earth-ionosphere cavity enables very narrow bandwidths to be used to reach distances up to several thousand kilometers. The modes used are QRSS, MFSK, and coherent BPSK. The transmitter generally consists of an audio amplifier of a few hundred watts, an impedance matching transformer, a loading coil and a large wire antenna. Receivers employ an electric field probe or magnetic loop antenna, a sensitive audio preamplifier, isolating transformers, and a PC sound card to digitise the signal. Extensive digital signal processing is required to retrieve the weak signals from beneath interference from power line harmonics and VLF radio atmospherics. Useful received electric and magnetic field strengths are extremely small, with signaling rates typically between 1 and 100 bits per hour. PC based reception VLF signals are often monitored by radio amateurs using simple homemade VLF radio receivers based on personal computers (PCs). An aerial in the form of a coil of insulated wire is connected to the input of the soundcard of the PC (via a jack plug) and placed a few meters away from it. Fast Fourier transform (FFT) software in combination with a sound card allows reception of all frequencies below the Nyquist frequency simultaneously in the form of spectrograms. Because CRT monitors are strong sources of noise in the VLF range, it is recommended to record the spectrograms with any PC CRT monitors turned off. These spectrograms show many signals, which may include VLF transmitters and the horizontal electron beam deflection of TV sets. The strength of the signal received can vary with a sudden ionospheric disturbance. These cause the ionization level to increase in the ionosphere producing a rapid change to the amplitude and phase of the received VLF signal. 
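The FFT-based sound-card reception described above can be sketched as follows; this is a minimal illustration assuming Python with NumPy/SciPy and a WAV file captured from the sound card. The file name, FFT length, and the "top five carriers" report are illustrative assumptions, not anything prescribed here.

```python
# Minimal sketch of PC sound-card VLF reception as described above: the
# sound card digitizes the antenna signal and an FFT turns it into a
# spectrogram. File name and parameters are illustrative assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, samples = wavfile.read("vlf_capture.wav")   # e.g. a 96 kHz mono recording
if samples.ndim > 1:                            # keep one channel if stereo
    samples = samples[:, 0]
samples = samples.astype(np.float64)

# Long FFT segments give the narrow frequency bins needed to separate VLF
# carriers from power-line harmonics; everything below fs/2 (Nyquist) is seen.
freqs, times, power = spectrogram(samples, fs=fs, nperseg=65536,
                                  noverlap=32768, scaling="density")

# Print the strongest bins below 30 kHz as a crude list of carriers.
mask = freqs < 30_000
mean_power = power[mask].mean(axis=1)
top = np.argsort(mean_power)[-5:][::-1]
for i in top:
    print(f"{freqs[mask][i] / 1000:7.2f} kHz  relative power {mean_power[i]:.3g}")
```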
List of VLF transmissions For a more detailed list, see List of VLF-transmitters See also Communication with submarines OMEGA Navigation System, 1971–1997 Radio atmospheric References Further reading External links Longwave club of America Radio waves below 22 kHz VLF Discussion Group Tomislav Stimac, "Definition of frequency bands (VLF, ELF... etc.)". PC-based VLF-reception Gallery of VLF-signals NASA live streaming ELF -> VLF Receiver NOTE: As of 05/03/2014, the "Listen live" links are down, but the site has some previously recorded examples to listen to. World Wide Lightning Location Network Stanford University VLF group University of Louisville VLF Monitor Larry's Very Low Frequency site Mark's Live Online VLF Receiver, UK IW0BZD VLF TUBE receiver Internet based VLF listening guide with server list List of VLF-transmitters Radio spectrum Radio electronics
Very low frequency
[ "Physics", "Engineering" ]
4,457
[ "Radio electronics", "Radio spectrum", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
160,506
https://en.wikipedia.org/wiki/Hardware%20random%20number%20generator
In computing, a hardware random number generator (HRNG), true random number generator (TRNG), non-deterministic random bit generator (NRBG), or physical random number generator is a device that generates random numbers from a physical process capable of producing entropy (in other words, the device always has access to a physical entropy source), unlike the pseudorandom number generator (PRNG, a.k.a. "deterministic random bit generator", DRBG) that utilizes a deterministic algorithm and non-physical nondeterministic random bit generators that do not include hardware dedicated to generation of entropy. Many natural phenomena generate low-level, statistically random "noise" signals, including thermal and shot noise, jitter and metastability of electronic circuits, Brownian motion, and atmospheric noise. Researchers also used the photoelectric effect, involving a beam splitter, other quantum phenomena, and even the nuclear decay (due to practical considerations the latter, as well as the atmospheric noise, is not viable). While "classical" (non-quantum) phenomena are not truly random, an unpredictable physical system is usually acceptable as a source of randomness, so the qualifiers "true" and "physical" are used interchangeably. A hardware random number generator is expected to output near-perfect random numbers ("full entropy"). A physical process usually does not have this property, and a practical TRNG typically includes a few blocks: a noise source that implements the physical process producing the entropy. Usually this process is analog, so a digitizer is used to convert the output of the analog source into a binary representation; a conditioner (randomness extractor) that improves the quality of the random bits; health tests. TRNGs are mostly used in cryptographical algorithms that get completely broken if the random numbers have low entropy, so the testing functionality is usually included. Hardware random number generators generally produce only a limited number of random bits per second. In order to increase the available output data rate, they are often used to generate the "seed" for a faster PRNG. DRBG also helps with the noise source "anonymization" (whitening out the noise source identifying characteristics) and entropy extraction. With a proper DRBG algorithm selected (cryptographically secure pseudorandom number generator, CSPRNG), the combination can satisfy the requirements of Federal Information Processing Standards and Common Criteria standards. Uses Hardware random number generators can be used in any application that needs randomness. However, in many scientific applications additional cost and complexity of a TRNG (when compared with pseudo random number generators) provide no meaningful benefits. TRNGs have additional drawbacks for data science and statistical applications: impossibility to re-run a series of numbers unless they are stored, reliance on an analog physical entity can obscure the failure of the source. The TRNGs therefore are primarily used in the applications where their unpredictability and the impossibility to re-run the sequence of numbers are crucial to the success of the implementation: in cryptography and gambling machines. Cryptography The major use for hardware random number generators is in the field of data encryption, for example to create random cryptographic keys and nonces needed to encrypt and sign data. 
In addition to randomness, there are at least two additional requirements imposed by the cryptographic applications: forward secrecy guarantees that the knowledge of the past output and internal state of the device should not enable the attacker to predict future data; backward secrecy protects the "opposite direction": knowledge of the output and internal state in the future should not divulge the preceding data. A typical way to fulfill these requirements is to use a TRNG to seed a cryptographically secure pseudorandom number generator. History Physical devices were used to generate random numbers for thousands of years, primarily for gambling. Dice in particular have been known for more than 5000 years (found on locations in modern Iraq and Iran), and flipping a coin (thus producing a random bit) dates at least to the times of ancient Rome. The first documented use of a physical random number generator for scientific purposes was by Francis Galton (1890). He devised a way to sample a probability distribution using a common gambling dice. In addition to the top digit, Galton also looked at the face of a dice closest to him, thus creating 6*4 = 24 outcomes (about 4.6 bits of randomness). Kendall and Babington-Smith (1938) used a fast-rotating 10-sector disk that was illuminated by periodic bursts of light. The sampling was done by a human who wrote the number under the light beam onto a pad. The device was utilized to produce a 100,000-digit random number table (at the time such tables were used for statistical experiments, like PRNG nowadays). On 29 April 1947, the RAND Corporation began generating random digits with an "electronic roulette wheel", consisting of a random frequency pulse source of about 100,000 pulses per second gated once per second with a constant frequency pulse and fed into a five-bit binary counter. Douglas Aircraft built the equipment, implementing Cecil Hasting's suggestion (RAND P-113) for a noise source (most likely the well known behavior of the 6D4 miniature gas thyratron tube, when placed in a magnetic field). Twenty of the 32 possible counter values were mapped onto the 10 decimal digits and the other 12 counter values were discarded. The results of a long run from the RAND machine, filtered and tested, were converted into a table, which originally existed only as a deck of punched cards, but was later published in 1955 as a book, 50 rows of 50 digits on each page (A Million Random Digits with 100,000 Normal Deviates). The RAND table was a significant breakthrough in delivering random numbers because such a large and carefully prepared table had never before been available. It has been a useful source for simulations, modeling, and for deriving the arbitrary constants in cryptographic algorithms to demonstrate that the constants had not been selected maliciously ("nothing up my sleeve numbers"). Since the early 1950s, research into TRNGs has been highly active, with thousands of research works published and about 2000 patents granted by 2017. Physical phenomena with random properties Multiple different TRNG designs were proposed over time with a large variety of noise sources and digitization techniques ("harvesting"). However, practical considerations (size, power, cost, performance, robustness) dictate the following desirable traits: use of a commonly available inexpensive silicon process; exclusive use of digital design techniques. This allows an easier system-on-chip integration and enables the use of FPGAs; compact and low-power design. 
This discourages use of analog components (e.g., amplifiers); mathematical justification of the entropy collection mechanisms. Stipčević & Koç in 2014 classified the physical phenomena used to implement TRNG into four groups: electrical noise; free-running oscillators; chaos; quantum effects. Electrical noise-based RNG Noise-based RNGs generally follow the same outline: the source of a noise generator is fed into a comparator. If the voltage is above threshold, the comparator output is 1, otherwise 0. The random bit value is latched using a flip-flop. Sources of noise vary and include: Johnson–Nyquist noise ("thermal noise"); Zener noise; avalanche breakdown. The drawbacks of using noise sources for an RNG design are: noise levels are hard to control, they vary with environmental changes and device-to-device; calibration processes needed to ensure a guaranteed amount of entropy are time-consuming; noise levels are typically low, thus the design requires power-hungry amplifiers. The sensitivity of amplifier inputs enables manipulation by an attacker; circuitry located nearby generates a lot of non-random noise thus lowering the entropy; a proof of randomness is near-impossible as multiple interacting physical processes are involved. Chaos-based RNG The idea of chaos-based noise stems from the use of a complex system that is hard to characterize by observing its behavior over time. For example, lasers can be put into (undesirable in other applications) chaos mode with chaotically fluctuating power, with power detected using a photodiode and sampled by a comparator. The design can be quite small, as all photonics elements can be integrated on-chip. Stipčević & Koç characterize this technique as "most objectionable", mostly due to the fact that chaotic behavior is usually controlled by a differential equation and no new randomness is introduced, thus there is a possibility of the chaos-based TRNG producing a limited subset of possible output strings. Free-running oscillators-based RNG The TRNGs based on a free-running oscillator (FRO) typically utilize one or more ring oscillators (ROs), outputs of which are sampled using yet another clock. Since inverters forming the RO can be thought of as amplifiers with a very large gain, an FRO output exhibits very fast oscillations in phase and frequency domains. The FRO-based TRNGs are very popular due to their use of the standard digital logic despite issues with randomness proofs and chip-to-chip variability. Quantum-based RNG Quantum random number generation technology is well established with 8 commercial quantum random number generator (QRNG) products offered before 2017. Herrero-Collantes & Garcia-Escartin list the following stochastic processes as "quantum": nuclear decay historically was the earliest quantum method used since the 1960s owing its popularity to the availability of Geiger counters and calibrated radiation sources. The entropy harvesting was done using an event counter that was periodically sampled or a time counter that was sampled at the time of the event. Similar designs were utilized in the 1950s to generate random noise in analog computers. 
The major drawbacks were radiation safety concerns, low bit rates, and non-uniform distribution; shot noise, a quantum mechanical noise source found in electronic circuits, while technically a quantum effect, is hard to isolate from the thermal noise, so, with few exceptions, noise sources utilizing it are only partially quantum and are usually classified as "classical"; quantum optics: branching path generator using a beamsplitter so that a photon from a single-photon source randomly takes one of the two paths and sensed by one of the two single-photon detectors thus generating a random bit; time of arrival generators and photon counting generators use a weak photon source, with the entropy harvested similarly to the case of radioactive decay; attenuated pulse generators are a generalization (simplifying the equipment) of the above methods that allows more than one photon in the system at a time; vacuum fluctuations generators use a laser homodyne detection to probe the changes in the vacuum state; laser phase noise generators use the phase noise on the output of a single spatial mode laser that is converted to amplitude using an unbalanced Mach-Zehnder interferometer. The noise is sampled by a photodetector; amplified spontaneous emission generators use spontaneous light emission present in the optical amplifiers as a source of noise; Raman scattering generators extract entropy from the interaction of photons with the solid-state materials; optical parametric oscillator generators use the spontaneous parametric down-conversion leading to binary phase state selection in a degenerate optical parametric oscillator; To reduce costs and increase robustness of quantum random number generators, online services have been implemented. A plurality of quantum random number generators designs are inherently untestable and thus can be manipulated by adversaries. Mannalath et al. call these designs "trusted" in a sense that they can only operate in a fully controlled, trusted environment. Performance test The failure of a TRNG can be quite complex and subtle, necessitating validation of not just the results (the output bit stream), but of the unpredictability of the entropy source. Hardware random number generators should be constantly monitored for proper operation to protect against the entropy source degradation due to natural causes and deliberate attacks. FIPS Pub 140-2 and NIST Special Publication 800-90B define tests which can be used for this. The minimal set of real-time tests mandated by the certification bodies is not large; for example, NIST in SP 800-90B requires just two continuous health tests: repetition count test checks that the sequences of identical digits are not too long, for a (typical) case of a TRNG that digitizes one bit at a time, this means not having long strings of either 0s or 1s; adaptive proportion test verifies that any random digit does not occur too frequently in the data stream (low bias). For bit-oriented entropy sources that means that the count of 1s and 0s in the bit stream is approximately the same. Attacks Just as with other components of a cryptography system, a cryptographic random number generator should be designed to resist certain attacks. Defending against these attacks is difficult without a hardware entropy source. The physical processes in HRNG introduce new attack surfaces. For example, a free-running oscillator-based TRNG can be attacked using a frequency injection. 
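As a rough illustration of the two continuous health tests mentioned above, the following is a minimal Python sketch for a bit-oriented entropy source. The window size and cutoff values are placeholder assumptions: the real cutoffs in SP 800-90B are computed from the claimed min-entropy of the source and a target false-alarm probability.

```python
# Minimal sketch of the two continuous health tests described above
# (NIST SP 800-90B style) for a bit-oriented entropy source. The cutoff
# values below are illustrative assumptions, not certified parameters.
import random

REPETITION_CUTOFF = 34     # assumed: maximum allowed run of identical bits
WINDOW_SIZE = 512          # assumed: adaptive proportion test window length
PROPORTION_CUTOFF = 410    # assumed: max count of a window's first bit value

def repetition_count_test(bits, cutoff=REPETITION_CUTOFF):
    """Fail (return False) if any run of identical bits reaches the cutoff."""
    run_value, run_length = None, 0
    for b in bits:
        run_length = run_length + 1 if b == run_value else 1
        run_value = b
        if run_length >= cutoff:
            return False
    return True

def adaptive_proportion_test(bits, window=WINDOW_SIZE, cutoff=PROPORTION_CUTOFF):
    """Fail if the first bit of any window occurs too often within that window."""
    bits = list(bits)
    for start in range(0, len(bits) - window + 1, window):
        chunk = bits[start:start + window]
        if chunk.count(chunk[0]) >= cutoff:
            return False
    return True

# Example: a heavily biased stream (95% ones) should trip these tests.
biased = [1 if random.random() < 0.95 else 0 for _ in range(4096)]
print(repetition_count_test(biased), adaptive_proportion_test(biased))
```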
Estimating entropy There are mathematical techniques for estimating the entropy of a sequence of symbols. None are so reliable that their estimates can be fully relied upon; there are always assumptions which may be very difficult to confirm. These are useful for determining if there is enough entropy in a seed pool, for example, but they cannot, in general, distinguish between a true random source and a pseudorandom generator. This problem is avoided by the conservative use of hardware entropy sources. See also AN/CYZ-9 Bell test experiments /dev/random ERNIE Lavarand (a hardware random number generator based on movement of the floating material in lava lamps) List of random number generators Lottery machine RDRAND Trusted Platform Module References Sources General references External links ProtegoST SG100, ProtegoST, "Hardware Random Number Generator based on quantum physics random number source from a zener diode". Cryptography Random number generation Computer peripherals
Hardware random number generator
[ "Mathematics", "Technology", "Engineering" ]
2,947
[ "Cybersecurity engineering", "Cryptography", "Applied mathematics", "Computer peripherals", "Components" ]
160,518
https://en.wikipedia.org/wiki/Low%20frequency
Low frequency (LF) is the ITU designation for radio frequencies (RF) in the range of 30–300 kHz. Since its wavelengths range from 10 to 1 km, respectively, it is also known as the kilometre band or kilometre waves. LF radio waves exhibit low signal attenuation, making them suitable for long-distance communications. In Europe and areas of Northern Africa and Asia, part of the LF spectrum is used for AM broadcasting as the "longwave" band. In the western hemisphere, its main use is for aircraft beacons, navigation (LORAN, mostly defunct), information, and weather systems. A number of time signal broadcasts also use this band. The main mode of transmission used in this band is ground waves, in which LF radio waves travel just above the Earth's surface, following the terrain. LF ground waves can travel over hills, and can travel far beyond the horizon, up to several hundred kilometers from the transmitter. Propagation Because of their long wavelength, low frequency radio waves can diffract over obstacles like mountain ranges and travel beyond the horizon, following the contour of the Earth. This mode of propagation, called ground wave, is the main mode in the LF band. Ground waves must be vertically polarized (the electric field is vertical while the magnetic field is horizontal), so vertical monopole antennas are used for transmitting. The transmission distance is limited by the absorption of ground waves in the Earth. The attenuation of signal strength with distance is lower than at higher frequencies. Low frequency ground waves can be received up to several hundred kilometres from the transmitting antenna. Low frequency waves can also occasionally travel long distances by reflecting from the ionosphere (the actual mechanism is one of refraction), although this method, called skywave or "skip" propagation, is not as common as at higher frequencies. Reflection occurs at the ionospheric E layer or F layers. Skywave signals can be detected at distances well beyond the range of ground waves. Uses Radio broadcasting AM broadcasting is authorized in the longwave band on frequencies between 148.5 and 283.5 kHz in Europe and parts of Asia. Standard time signals In Europe and Japan, many low-cost consumer devices have since the late 1980s contained radio clocks with an LF receiver for these signals. Since these frequencies propagate by ground wave only, the precision of time signals is not affected by varying propagation paths between the transmitter, the ionosphere, and the receiver. In the United States, such devices became feasible for the mass market only after the output power of WWVB was increased in 1997 and 1999. JJY broadcasts on the exact same frequency, and has a similar timecode. Military Radio signals below 50 kHz are capable of penetrating to significant ocean depths; the longer the wavelength, the deeper they go. The British, German, Indian, Russian, Swedish, United States, and possibly other navies communicate with submarines on these frequencies. In addition, Royal Navy nuclear submarines carrying ballistic missiles are allegedly under standing orders to monitor the BBC Radio 4 transmission on 198 kHz in waters near the UK. It is rumoured that they are to construe a sudden halt in transmission, particularly of the morning news programme Today, as an indicator that the UK is under attack, whereafter their sealed orders take effect. 
The United States has four LF stations maintaining contact with its submarine force: Aguada, Puerto Rico, Keflavik, Iceland, Awase, Okinawa, and Sigonella, Italy, using AN/FRT-95 solid state transmitters. In the U.S., the Ground Wave Emergency Network or GWEN operated between 150 and 175 kHz, until replaced by satellite communications systems in 1999. GWEN was a land based military radio communications system which could survive and continue to operate even in the case of a nuclear attack. Experimental and amateur The 2007 World Radiocommunication Conference (WRC-07) made a worldwide amateur radio allocation in this band. An international 2.1 kHz allocation, the 2200-meter band (135.7–137.9 kHz), is available to amateur radio operators in several countries in Europe, New Zealand, Canada, US, and French overseas dependencies. The world record distance for a two-way contact is over 10,000 km from near Vladivostok to New Zealand. As well as conventional Morse code many operators use very slow computer-controlled Morse code (so-called "QRSS") or specialized digital communications modes. The UK allocated a 2.8 kHz sliver of spectrum from 71.6 kHz to 74.4 kHz beginning in April 1996 to UK amateurs who applied for a Notice of Variation to use the band on a noninterference basis with a maximum output power of 1 watt ERP. This was withdrawn on 30 June 2003 after a number of extensions in favor of the cross-European standard 136 kHz band. Very slow Morse code from G3AQC in the UK was received across the Atlantic Ocean by W1TAG in the US on 21–22 November 2001 on 72.401 kHz. In the United States, there is an exemption within FCC Part 15 regulations permitting unlicensed transmissions in the frequency range of 160–190 kHz. Longwave radio hobbyists refer to this as the 'LowFER' band, and experimenters and their transmitters are called 'LowFERs'. This frequency range between 160 kHz and 190 kHz is also referred to as the 1750-meter band. Requirements include: The total input power to the final radio frequency stage (exclusive of filament or heater power) shall not exceed one watt. The total length of the transmission line, antenna, and ground lead (if used) shall not exceed 15 meters. All emissions below 160 kHz or above 190 kHz shall be attenuated at least 20 dB below the level of the unmodulated carrier. As an alternative to these requirements, a field strength of 2400/F(kHz) microvolts/meter (measured at a distance of 300 meters) may be used (as described in 47CFR15.209). In all cases, operation may not cause harmful interference to licensed services. Many experimenters in this band are amateur radio operators. Meteorological information broadcasts A regular service transmitting RTTY marine meteorological information in SYNOP code on LF is the German Meteorological Service (Deutscher Wetterdienst or DWD). The DWD operates station DDH47 on 147.3 kHz using standard ITA-2 alphabet with a transmission speed of 50 baud and FSK modulation with 85 Hz shift. Radio navigation signals In parts of the world where there is no longwave broadcasting service, non-directional beacons used for aeronavigation operate on 190–300 kHz (and beyond into the MW band). In Europe, Asia and Africa, the NDB allocation starts on 283.5 kHz. The LORAN-C radio navigation system operated on 100 kHz. In the past, the Decca Navigator System operated between 70 kHz and 129 kHz. The last Decca chains were closed down in 2000. Differential GPS telemetry transmitters operate between 283.5 and 325 kHz. 
The commercial "Datatrak" radio navigation system operates on a number of frequencies, varying by country, between 120–148Β kHz. Other applications Some radio frequency identification (RFID) tags utilize LF. These tags are commonly known as LFIDs or LowFIDs (low frequency identification). The LF RFID tags are near-field devices, interacting with the inductive near field, rather than with radiated waves (radio waves) that are the only part of the electromagnetic field that persists into the far field. As such, they are technically not radio devices nor radio antennas, even though they do operate at radio frequencies, and are called "antennas" in the RFID trade, but not in radio engineering. It is more proper, and technically more informative to think of them as secondary coils of very loosely coupled transformers. Antennas Since the ground waves used in this band require vertical polarization, vertical antennas are used for transmission. Mast radiators are most common, either insulated from the ground and fed at the bottom, or occasionally fed through guy-wires. T-antennas and inverted L-antennas are used when antenna height is an issue. LF transmitting antennas for high power transmitters require large amounts of space, and have been the cause of controversy in Europe and the United States, due to concerns about possible health hazards associated with human exposure to radio waves. Longwave receiving antennas Antenna requirements for LF reception are much more modest than for transmission. Although non-resonant long wire antennas are sometimes used, ferrite loop antennas are far more popular because of their small size. Amateur radio operators have achieved good LF reception using active antennas: A short whip with a built-in pre-amplifier. Antenna heights Due to the long wavelengths in the band, nearly all LF antennas are electrically short, shorter than one quarter of the radiated wavelength, so their low radiation resistance makes them inefficient, requiring very low resistance grounds and conductors to avoid dissipating transmitter power. These electrically short antennas need loading coils at the base of the antenna to bring them into resonance. Many antenna types, such as the umbrella antenna and L- and T-antenna, use capacitive top-loading (a "top hat"), in the form of a network of horizontal wires attached to the top of the vertical radiator. The capacitance improves the efficiency of the antenna by increasing the current, without increasing its height. The height of antennas differ by usage. For some non-directional beacons (NDBs) the height can be as low as 10Β meters, while for more powerful navigation transmitters such as DECCA, masts with a height around 100Β meters are used. T-antennas have a height between 50–200Β meters, while mast aerials are usually taller than 150Β meters. The height of mast antennas for LORAN-C is around 190Β meters for transmitters with radiated power below 500Β kW, and around 400Β meters for transmitters greater than The main type of LORAN-C antenna is insulated from ground. LF (longwave) broadcasting stations use mast antennas with heights of more than 150Β meters or T-aerials. The mast antennas can be ground-fed insulated masts or upper-fed grounded masts. It is also possible to use cage antennas on grounded masts. Directional array antennas For broadcasting stations, directional antennas are often required. They consist of multiple masts, which often have the same height. 
Some longwave antennas consist of multiple mast antennas arranged in a circle with or without a mast antenna in the center. Such antennas focus the transmitted power toward ground and give a large zone of fade-free reception. This type of antenna is rarely used, because they are very expensive and require much space and because fading occurs on longwave much more rarely than in the medium wave range. One antenna of this kind was used by transmitter Orlunda in Sweden. Footnotes See also 2200-meter band Ground Wave Emergency Network (GWEN) Longwave LowFER Passive RFID Time signal WGU-20 References Further reading External links Radio spectrum
Low frequency
[ "Physics" ]
2,284
[ "Radio spectrum", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
160,519
https://en.wikipedia.org/wiki/Medium%20frequency
Medium frequency (MF) is the ITU designation for radio frequencies (RF) in the range of 300Β kilohertz (kHz) to 3Β megahertz (MHz). Part of this band is the medium waveΒ (MW) AM broadcast band. The MF band is also known as the hectometer band as the wavelengths range from ten to one hectometers (1000 to 100Β m). Frequencies immediately below MF are denoted as low frequency (LF), while the first band of higher frequencies is known as high frequency (HF). MF is mostly used for AM radio broadcasting, navigational radio beacons, maritime ship-to-shore communication, and transoceanic air traffic control. Propagation Radio waves at MF wavelengths propagate via ground waves and reflection from the ionosphere (called skywaves). Ground waves follow the curvature of Earth. At these wavelengths, they can bend (diffract) over hills, and travel beyond the visual horizon, although they may be blocked by mountain ranges. Typical MF radio stations can cover a radius of several hundred kilometres/miles from the transmitter, with longer distances over water and damp earth. MF broadcasting stations use ground waves to cover their listening areas. MF waves can also travel longer distances via skywave propagation, in which radio waves radiated at an angle into the sky are refracted back to Earth by layers of charged particles (ions) in the ionosphere, the E and F layers. However, at certain times the D layer (at a lower altitude than the refractive E and F layers) can be electronically noisy and absorb MF radio waves, interfering with skywave propagation. This happens when the ionosphere is heavily ionised, such as during the day, in summer and especially at times of high solar activity. At night, especially in winter months and at times of low solar activity, the ionospheric D layer can virtually disappear. When this happens, MF radio waves can easily be received hundreds or even thousands of miles away as the signal will be refracted by the remaining F layer. This can be very useful for long-distance communication, but can also interfere with local stations. Because of the limited number of available channels in the MW broadcast band, the same frequencies are re-allocated to different broadcasting stations several hundred miles apart. On nights of good skywave propagation, the signals of distant stations may reflect off the ionosphere and interfere with the signals of local stations on the same frequency. The North American Regional Broadcasting Agreement (NARBA) sets aside certain channels for nighttime use over extended service areas via skywave by a few specially licensed AM broadcasting stations. These channels are called clear channels, and the stations, called clear-channel stations, are required to broadcast at higher powers of 10 to 50Β kW. Uses and applications A major use of these frequencies is AM broadcasting; AM radio stations are allocated frequencies in the medium wave broadcast band from 526.5Β kHz to 1606.5Β kHz in Europe; in North America this extends from 525Β kHz to 1705Β kHz Some countries also allow broadcasting in the 120-meter band from 2300 to 2495Β kHz; these frequencies are mostly used in tropical areas. Although these are medium frequencies, 120 meters is generally treated as one of the shortwave bands. There are a number of coast guard and other ship-to-shore frequencies in use between 1600 and 2850Β kHz. These include, as examples, the French MRCC on 1696Β kHz and 2677Β kHz, Stornoway Coastguard on 1743Β kHz, the US Coastguard on 2670Β kHz and Madeira on 2843Β kHz. 
RN Northwood in England broadcasts Weather Fax data on 2618.5Β kHz. Non-directional navigational radio beacons (NDBs) for maritime and aircraft navigation occupy a band from 190 to 435Β kHz, which overlaps from the LF into the bottom part of the MF band. 2182Β kHz is the international calling and distress frequency for SSB maritime voice communication (radiotelephony). It is analogous to Channel 16 on the marine VHF band. 500Β kHz was for many years the maritime distress and emergency frequency, and there are more NDBs between 510 and 530Β kHz. Navtex, which is part of the current Global Maritime Distress Safety System occupies 518Β kHz and 490Β kHz for important digital text broadcasts. Lastly, there are aeronautical and other mobile SSB bands from 2850Β kHz to 3500Β kHz, crossing the boundary from the MF band into the HF radio band. An amateur radio band known as 160 meters or 'top-band' is between 1800 and 2000Β kHz (allocation depends on country and starts at 1810Β kHz outside the Americas). Amateur operators transmit CW morse code, digital signals and SSB and AM voice signals on this band. Following World Radiocommunication Conference 2012 (WRC-2012), the amateur service received a new allocation between 472 and 479Β kHz for narrow band modes and secondary service, after extensive propagation and compatibility studies made by the ARRL 600 meters Experiment Group and their partners around the world. In recent years, some limited amateur radio operation has also been allowed in the region of 500Β kHz in the US, UK, Germany and Sweden. Many home-portable or cordless telephones, especially those that were designed in the 1980s, transmit low power FM audio signals between the table-top base unit and the handset on frequencies in the range 1600 to 1800Β kHz. Antennas Transmitting antennas commonly used on this band include monopole mast radiators, top-loaded wire monopole antennas such as the inverted-L and T antennas, and wire dipole antennas. Ground wave propagation, the most widely used type at these frequencies, requires vertically polarized antennas like monopoles. The most common transmitting antennas, monopoles of one-quarter to five-eighths wavelength, are physically large at these frequencies, requiring a tall radio mast. Usually the metal mast itself is energized and used as the antenna, and is mounted on a large porcelain insulator to isolate it from the ground; this is called a mast radiator. The monopole antenna, particularly if electrically short requires a good, low resistance Earth ground connection for efficiency since the ground resistance is in series with the antenna and consumes transmitter power. Commercial radio stations use a ground system consisting of many copper cables, buried shallowly in the earth, radiating from the base of the antenna to a distance of about a quarter wavelength. In areas of rocky or sandy soil where the ground conductivity is poor, above-ground counterpoises are sometimes used. Lower power transmitters often use electrically short quarter wave monopoles such as inverted-L or T antennas, which are brought into resonance with a loading coil at their base. Receiving antennas do not have to be as efficient as transmitting antennas since in this band the signal-to-noise ratio is determined by atmospheric noise. The noise floor in the receiver is far below the noise in the signal, so antennas small in comparison to the wavelength, which are inefficient and produce low signal strength, can be used. 
The weak signal from the antenna can be amplified in the receiver without introducing significant noise. The most common receiving antenna is the ferrite loopstick antenna (also known as a ferrite rod aerial), made from a ferrite rod with a coil of fine wire wound around it. This antenna is small enough that it is usually enclosed inside the radio case. In addition to their use in AM radios, ferrite antennas are also used in portable radio direction finder (RDF) receivers. The ferrite rod antenna has a dipole reception pattern with sharp nulls along the axis of the rod, so that reception is at its best when the rod is at right angles to the transmitter, but fades to nothing when the rod points exactly at the transmitter. Other types of loop antennas and random wire antennas are also used. See also Electromagnetic spectrum Global Maritime Distress Safety System Maritime broadcast communications net Navtex Types of radio emissions References Federal Standard 1037C Further reading Charles Allen Wright and Albert Frederick Puchstein, "Telephone communication, with particular application to medium-frequency alternating currents and electro-motive forces". New York [etc.] McGraw-Hill Book Company, inc., 1st ed., 1925. LCCN 25008275 External links Tomislav Stimac, "Definition of frequency bands (VLF, ELF... etc.)". IK1QFK Home Page (vlf.it). Radio spectrum
Medium frequency
[ "Physics" ]
1,753
[ "Radio spectrum", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
160,556
https://en.wikipedia.org/wiki/Ball%20%28mathematics%29
In mathematics, a ball is the solid figure bounded by a sphere; it is also called a solid sphere. It may be a closed ball (including the boundary points that constitute the sphere) or an open ball (excluding them). These concepts are defined not only in three-dimensional Euclidean space but also for lower and higher dimensions, and for metric spaces in general. A ball in n dimensions is called a hyperball or n-ball and is bounded by a hypersphere or (n − 1)-sphere. Thus, for example, a ball in the Euclidean plane is the same thing as a disk, the area bounded by a circle. In Euclidean 3-space, a ball is taken to be the volume bounded by a 2-dimensional sphere. In a one-dimensional space, a ball is a line segment. In other contexts, such as in Euclidean geometry and informal use, sphere is sometimes used to mean ball. In the field of topology the closed n-dimensional ball is often denoted as B^n or D^n, while the open n-dimensional ball is int B^n or int D^n. In Euclidean space In Euclidean n-space, an (open) n-ball of radius r and center x is the set of all points of distance less than r from x. A closed n-ball of radius r is the set of all points of distance less than or equal to r away from x. In Euclidean n-space, every ball is bounded by a hypersphere. The ball is a bounded interval when n = 1, is a disk bounded by a circle when n = 2, and is bounded by a sphere when n = 3. Volume The n-dimensional volume of a Euclidean ball of radius R in n-dimensional Euclidean space is: V_n(R) = (π^(n/2) / Γ(n/2 + 1)) R^n, where Γ is Leonhard Euler's gamma function (which can be thought of as an extension of the factorial function to fractional arguments). Using explicit formulas for particular values of the gamma function at the integers and half integers gives formulas for the volume of a Euclidean ball that do not require an evaluation of the gamma function. These are: V_{2k}(R) = (π^k / k!) R^(2k) for even dimensions n = 2k, and V_{2k+1}(R) = (2^(k+1) π^k / (2k + 1)!!) R^(2k+1) for odd dimensions n = 2k + 1. In the formula for odd-dimensional volumes, the double factorial is defined for odd integers 2k + 1 as (2k + 1)!! = 1 · 3 · 5 ⋯ (2k − 1) · (2k + 1). In general metric spaces Let (M, d) be a metric space, namely a set M with a metric (distance function) d, and let r be a positive real number. The open (metric) ball of radius r centered at a point p in M, usually denoted by B_r(p) or B(p; r), is defined the same way as a Euclidean ball, as the set of points in M of distance less than r away from p, B_r(p) = {x ∈ M : d(x, p) < r}. The closed (metric) ball, sometimes denoted B_r[p] or B[p; r], is likewise defined as the set of points of distance less than or equal to r away from p, B_r[p] = {x ∈ M : d(x, p) ≤ r}. In particular, a ball (open or closed) always includes p itself, since the definition requires r > 0. A unit ball (open or closed) is a ball of radius 1. A ball in a general metric space need not be round. For example, a ball in real coordinate space under the Chebyshev distance is a hypercube, and a ball under the taxicab distance is a cross-polytope. A closed ball also need not be compact. For example, a closed ball in any infinite-dimensional normed vector space is never compact. However, a ball in a vector space will always be convex as a consequence of the triangle inequality. A subset of a metric space is bounded if it is contained in some ball. A set is totally bounded if, given any positive radius, it is covered by finitely many balls of that radius. The open balls of a metric space can serve as a base, giving this space a topology, the open sets of which are all possible unions of open balls. This topology on a metric space is called the topology induced by the metric d. Let cl B_r(p) denote the closure of the open ball B_r(p) in this topology. 
While it is always the case that cl B_r(p) ⊆ B_r[p], it is not always the case that cl B_r(p) = B_r[p]. For example, in a metric space M with the discrete metric, one has cl B_1(p) = {p} but B_1[p] = M for any p ∈ M. In normed vector spaces Any normed vector space V with norm ||·|| is also a metric space with the metric d(x, y) = ||x − y||. In such spaces, an arbitrary ball B_r(x0) of points around a point x0 with a distance of less than r may be viewed as a scaled (by r) and translated (by x0) copy of a unit ball B_1(0). Such "centered" balls with x0 = 0 are denoted B(r). The Euclidean balls discussed earlier are an example of balls in a normed vector space. p-norm In a Cartesian space R^n with the p-norm L_p, that is, one chooses some p ≥ 1 and defines ||x||_p = (|x_1|^p + |x_2|^p + ⋯ + |x_n|^p)^(1/p). Then an open ball around the origin with radius r is given by the set B(r) = {x ∈ R^n : ||x||_p < r}. For n = 2, in a 2-dimensional plane R^2, "balls" according to the L_1-norm (often called the taxicab or Manhattan metric) are bounded by squares with their diagonals parallel to the coordinate axes; those according to the L_∞-norm, also called the Chebyshev metric, have squares with their sides parallel to the coordinate axes as their boundaries. The L_2-norm, known as the Euclidean metric, generates the well known disks within circles, and for other values of p, the corresponding balls are areas bounded by Lamé curves (hypoellipses or hyperellipses). For n = 3, the L_1-balls are within octahedra with axes-aligned body diagonals, the L_∞-balls are within cubes with axes-aligned edges, and the boundaries of balls for L_p with p > 2 are superellipsoids. p = 2 generates the interior of the usual spheres. Often one can also consider the case of p = ∞, in which case we define ||x||_∞ = max{|x_1|, |x_2|, …, |x_n|}. General convex norm More generally, given any centrally symmetric, bounded, open, and convex subset X of R^n, one can define a norm on R^n where the balls are all translated and uniformly scaled copies of X. Note this theorem does not hold if "open" subset is replaced by "closed" subset, because the origin point qualifies but does not define a norm on R^n. In topological spaces One may talk about balls in any topological space X, not necessarily induced by a metric. An (open or closed) n-dimensional topological ball of X is any subset of X which is homeomorphic to an (open or closed) Euclidean n-ball. Topological n-balls are important in combinatorial topology, as the building blocks of cell complexes. Any open topological n-ball is homeomorphic to the Cartesian space R^n and to the open unit n-cube (hypercube) (0, 1)^n. Any closed topological n-ball is homeomorphic to the closed n-cube [0, 1]^n. An n-ball is homeomorphic to an m-ball if and only if n = m. The homeomorphisms between an open n-ball B and R^n can be classified in two classes, that can be identified with the two possible topological orientations of B. A topological n-ball need not be smooth; if it is smooth, it need not be diffeomorphic to a Euclidean n-ball. Regions A number of special regions can be defined for a ball: cap, bounded by one plane; sector, bounded by a conical boundary with apex at the center of the sphere; segment, bounded by a pair of parallel planes; shell, bounded by two concentric spheres of differing radii; wedge, bounded by two planes passing through a sphere center and the surface of the sphere See also Ball – ordinary meaning Disk (mathematics) Formal ball, an extension to negative radii Neighbourhood (mathematics) Sphere, a similar geometric shape 3-sphere n-sphere, or hypersphere Alexander horned sphere Manifold Volume of an n-ball Octahedron – a 3-ball in the L_1 metric. References Balls Metric geometry Spheres Topology
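As a small numerical check of the volume formula and the p-norm balls discussed in this article, the following Python sketch (an illustration only, with arbitrary sample points) evaluates V_n(R) with the gamma function and tests ball membership under the taxicab, Euclidean, and Chebyshev norms.

```python
# Numerical sketch of the n-ball volume formula and p-norm balls described
# above. Uses the Python standard library and NumPy; sample values are
# arbitrary illustrations.
import math
import numpy as np

def ball_volume(n, radius=1.0):
    """V_n(R) = pi^(n/2) / Gamma(n/2 + 1) * R^n."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * radius ** n

# n = 2 gives pi*R^2 (a disk), n = 3 gives (4/3)*pi*R^3 (an ordinary ball).
print(ball_volume(2), math.pi)          # both ~3.14159
print(ball_volume(3), 4 / 3 * math.pi)  # both ~4.18879

def in_open_ball(x, radius=1.0, p=2):
    """Open-ball membership test ||x||_p < radius (p may be 1, 2, np.inf, ...)."""
    return np.linalg.norm(np.asarray(x, dtype=float), ord=p) < radius

point = [0.8, 0.8]
# The point lies inside the Chebyshev (L-infinity) unit ball, a square with
# axis-parallel sides, but outside the Euclidean unit disk and outside the
# taxicab (L1) unit ball, a square rotated 45 degrees.
print(in_open_ball(point, p=np.inf), in_open_ball(point, p=2), in_open_ball(point, p=1))
```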
Ball (mathematics)
[ "Physics", "Mathematics" ]
1,463
[ "Spacetime", "Topology", "Space", "Geometry" ]
160,573
https://en.wikipedia.org/wiki/Axle
An axle or axletree is a central shaft for a rotating wheel or gear. On wheeled vehicles, the axle may be fixed to the wheels, rotating with them, or fixed to the vehicle, with the wheels rotating around the axle. In the former case, bearings or bushings are provided at the mounting points where the axle is supported. In the latter case, a bearing or bushing sits inside a central hole in the wheel to allow the wheel or gear to rotate around the axle. Sometimes, especially on bicycles, the latter type of axle is referred to as a spindle. Terminology On cars and trucks, several senses of the word axle occur in casual usage, referring to the shaft itself, its housing, or simply any transverse pair of wheels. Strictly speaking, a shaft that rotates with the wheel, being either bolted or splined in fixed relation to it, is called an axle or axle shaft. However, in looser usage, an entire assembly including the surrounding axle housing (typically a casting) is also called an axle. An even broader (somewhat figurative) sense of the word refers to every pair of parallel wheels on opposite sides of a vehicle, regardless of their mechanical connection to each other and to the vehicle frame or body. Thus, transverse pairs of wheels in an independent suspension may be called an axle in some contexts. This very loose definition of "axle" is often used in assessing toll roads or vehicle taxes, and is taken as a rough proxy for the overall weight-bearing capacity of a vehicle, and its potential for causing wear or damage to roadway surfaces. Vehicle axles Axles are an integral component of most practical wheeled vehicles. In a solid, "live-axle" suspension system, the rotating inner axle cores (or half-shafts) serve to transmit driving torque to the wheels at each end, while the rigid outer tube maintains the position of the wheels at fixed angles relative to the axle, and controls the angle of the axle and wheels assembly to the vehicle body. The solid axles (housings) in this system must also bear the weight of the vehicle plus any cargo. A non-driving axle, such as the front beam axle in heavy-duty trucks and some two-wheel drive light trucks and vans, will have no shaft, and serves only as a suspension and steering component. Conversely, many front-wheel drive cars have a one-piece rear beam axle. In other types of suspension systems, the axles serve only to transmit driving torque to the wheels: the position and angle of the wheel hubs is made independent from the axles by the function of the suspension system. This is typical of the independent suspensions found on most newer cars, and even SUVs, and on the front of many light trucks. An exception to this rule is the independent (rear) swing axle suspension, wherein the half-axles are also load-bearing suspension arms. Independent drive-trains still need differentials (or diffs), but without fixed axle-housing tubes attached. The diff may be attached to the vehicle frame or body, and/or be integrated with the transmission (or gearbox) in a combined transaxle unit. The axle (half-)shafts then transmit driving torque to the wheels, usually via constant-velocity joints. Like a full floating axle system, the drive shafts in a front-wheel-drive independent suspension system do not support any vehicle weight. Structural features and design A straight axle is a single rigid shaft connecting a wheel on the left side of the vehicle to a wheel on the right side. The axis of rotation fixed by the axle is common to both wheels. 
Such a design can keep the wheel positions steady under heavy stress, and can therefore support heavy loads. Straight axles are used on trains (that is, locomotives and railway wagons), for the rear axles of commercial trucks, and on heavy-duty off-road vehicles. The axle can optionally be protected and further reinforced by enclosing the length of the axle in a housing. In split-axle designs, the wheel on each side is attached to a separate shaft. Modern passenger cars have split-drive axles. In some designs, this allows independent suspension of the left and right wheels, and therefore a smoother ride. Even when the suspension is not independent, split axles permit the use of a differential, allowing the left and right drive wheels to be driven at different speeds as the automobile turns, improving traction and extending tire life. A tandem axle is a group of two or more axles situated close together. Truck designs use such a configuration to provide a greater weight capacity than a single axle. Semi-trailers usually have a tandem axle at the rear. Axles are typically made from SAE grade 41xx steel or SAE grade 10xx steel. SAE grade 41xx steel is commonly known as "chrome-molybdenum steel" (or "chrome-moly") while SAE grade 10xx steel is known as "carbon steel". The primary differences between the two are that chrome-moly steel is significantly more resistant to bending or breaking, and is very difficult to weld with tools normally found outside a professional welding shop. Drive axle An axle that is driven by the engine or prime mover is called a drive axle. Modern front-wheel drive cars typically combine the transmission (gearbox and differential) and front axle into a single unit called a transaxle. The drive axle is a split axle with a differential and universal joints between the two half axles. Each half axle connects to the wheel by use of a constant velocity (CV) joint which allows the wheel assembly to move freely vertically as well as to pivot when making turns. In rear-wheel drive cars and trucks, the engine turns a driveshaft (also called a propeller shaft or tailshaft) which transmits the rotational force to a drive axle at the rear of the vehicle. The drive axle may be a live axle, but modern rear-wheel drive automobiles generally use a split axle with a differential. In this case, one half-axle or half-shaft connects the differential with the left rear wheel, a second half-shaft does the same with the right rear wheel; thus the two half-axles and the differential constitute the rear axle. The front drive axle is providing the force to drive the truck. In fact, only one wheel of that axle is actually moving the truck and trailer down the road. Some simple vehicle designs, such as leisure go-karts, may have a single driven wheel where the drive axle is a split axle with only one of the two shafts driven by the engine, or else have both wheels connected to one shaft without a differential (kart racing). However, other go-karts have two rear drive wheels too. Lift axle Some dump trucks and trailers may be configured with a lift axle (also known as an airlift axle or drop axle), which may be mechanically raised or lowered. The axle is lowered to increase the weight capacity, or to distribute the weight of the cargo over more wheels, for example, to cross a weight-restricted bridge. When not needed, the axle is lifted off the ground to save wear on the tires and axle, and to increase traction in the remaining wheels, and to decrease fuel consumption. 
Lifting an axle also alleviates lateral scrubbing of the additional axle in very tight turns, allowing the vehicle to turn more readily. In some situations, the removal of pressure from the additional axle is necessary for the vehicle to complete a turn at all. Several manufacturers offer computer-controlled airlifts so that the dead axles are automatically lowered when the main axle reaches its weight limit. The dead axles can still be lifted by the press of a button if needed, for better maneuverability. Lift axles were in use in the early 1940s. Initially, the axle was lifted by a mechanical device. Soon hydraulics replaced the mechanical lift system. One of the early manufacturers was Zetterbergs, located in Östervåla, Sweden. Their brand was Zeta-lyften. The liftable tandem drive axle was invented in 1957 by the Finnish truck manufacturer Vanajan Autotehdas, a company sharing history with Sisu Auto. Full-floating vs semi-floating A full-floating axle carries the vehicle's weight on the axle casing, not the half-shafts; they serve only to transmit torque from the differential to the wheels. They "float" inside an assembly that carries the vehicle's weight. Thus the only stress it must endure is torque (not lateral bending force). Full-floating axle shafts are retained by a flange bolted to the hub, while the hub and bearings are retained on the spindle by a large nut. In contrast, a semi-floating design carries the weight of the vehicle on the axle shaft itself; there is a single bearing at the end of the axle housing that carries the load from the axle and that the axle rotates through. To be "semi-floating" the axle shafts must be able to "float" in the housing, bearings and seals, and not subject to axial "thrust" and/or bearing preload. Needle bearings and separate lip seals are used in semi-floating axles with axle retained in the housing at their inner ends typically with circlips which are ¾-round hardened washers that slide into grooves machined at the inner end of the shafts and retained in/by recesses in the differential carrier side gears which are themselves retained by the differential pinion gear (or "spider gear") shaft. A true semi-floating axle assembly places no side loads on the axle housing tubes or axle shafts. Axles that are pressed into ball or tapered roller bearings, which are in turn retained in the axle housings with flanges, bolts, and nuts do not "float" and place axial loads on the bearings, housings, and only a short section of the shaft itself, that also carries all radial loads. The full-floating design is typically used in most ¾- and 1-ton light trucks, medium-duty trucks, and heavy-duty trucks. The overall assembly can carry more weight than a semi-floating or non-floating axle assembly because the hubs have two bearings riding on a fixed spindle. A full-floating axle can be identified by a protruding hub to which the axle shaft flange is bolted. The semi-floating axle setup is commonly used on half-ton and lighter 4×4 trucks in the rear. This setup allows the axle shaft to be the means of propulsion, and also support the weight of the vehicle. The main difference between the full- and semi-floating axle setups is the number of bearings. The semi-floating axle features only one bearing, while the full-floating assembly has bearings on both the inside and outside of the wheel hub. The other difference is axle removal. 
To remove the semi-floating axle, the wheel must be removed first; if such an axle breaks, the wheel is most likely to come off the vehicle. The semi-floating design is found under most ½-ton and lighter trucks, as well as in SUVs and rear-wheel-drive passenger cars, usually being smaller or less expensive models. A benefit of a full-floating axle is that even if an axle shaft (used to transmit torque or power) breaks, the wheel will not come off, preventing serious accidents. See also Solid axle Gölsdorf axle Klien-Lindner axle List of auto parts Luttermöller axle Portal axle Powertrain Transaxle Wagon wheel (transportation) Wheel and axle Wheelset (rail transport) References External links Truck Axle Design Automotive suspension technologies Rail technologies Vehicle parts
Axle
[ "Technology" ]
2,388
[ "Vehicle parts", "Components" ]
160,663
https://en.wikipedia.org/wiki/Cantilever
A cantilever is a rigid structural element that extends horizontally and is unsupported at one end. Typically it extends from a flat vertical surface such as a wall, to which it must be firmly attached. Like other structural elements, a cantilever can be formed as a beam, plate, truss, or slab. When subjected to a structural load at its far, unsupported end, the cantilever carries the load to the support where it applies a shear stress and a bending moment. Cantilever construction allows overhanging structures without additional support. In bridges, towers, and buildings Cantilevers are widely found in construction, notably in cantilever bridges and balconies (see corbel). In cantilever bridges, the cantilevers are usually built as pairs, with each cantilever used to support one end of a central section. The Forth Bridge in Scotland is an example of a cantilever truss bridge. A cantilever in a traditionally timber framed building is called a jetty or forebay. In the southern United States, a historic barn type is the cantilever barn of log construction. Temporary cantilevers are often used in construction. The partially constructed structure creates a cantilever, but the completed structure does not act as a cantilever. This is very helpful when temporary supports, or falsework, cannot be used to support the structure while it is being built (e.g., over a busy roadway or river, or in a deep valley). Therefore, some truss arch bridges (see Navajo Bridge) are built from each side as cantilevers until the spans reach each other and are then jacked apart to stress them in compression before finally joining. Nearly all cable-stayed bridges are built using cantilevers as this is one of their chief advantages. Many box girder bridges are built segmentally, or in short pieces. This type of construction lends itself well to balanced cantilever construction where the bridge is built in both directions from a single support. These structures rely heavily on torque and rotational equilibrium for their stability. In an architectural application, Frank Lloyd Wright's Fallingwater used cantilevers to project large balconies. The East Stand at Elland Road Stadium in Leeds was, when completed, the largest cantilever stand in the world holding 17,000 spectators. The roof built over the stands at Old Trafford uses a cantilever so that no supports will block views of the field. The old (now demolished) Miami Stadium had a similar roof over the spectator area. The largest cantilevered roof in Europe is located at St James' Park in Newcastle-Upon-Tyne, the home stadium of Newcastle United F.C. Less obvious examples of cantilevers are free-standing (vertical) radio towers without guy-wires, and chimneys, which resist being blown over by the wind through cantilever action at their base. Aircraft The cantilever is commonly used in the wings of fixed-wing aircraft. Early aircraft had light structures which were braced with wires and struts. However, these introduced aerodynamic drag which limited performance. While it is heavier, the cantilever avoids this issue and allows the plane to fly faster. Hugo Junkers pioneered the cantilever wing in 1915. Only a dozen years after the Wright Brothers' initial flights, Junkers endeavored to eliminate virtually all major external bracing members in order to decrease airframe drag in flight. The result of this endeavor was the Junkers J 1 pioneering all-metal monoplane of late 1915, designed from the start with all-metal cantilever wing panels. 
About a year after the initial success of the Junkers J 1, Reinhold Platz of Fokker also achieved success with a cantilever-winged sesquiplane built instead with wooden materials, the Fokker V.1. In the cantilever wing, one or more strong beams, called spars, run along the span of the wing. The end fixed rigidly to the central fuselage is known as the root and the far end as the tip. In flight, the wings generate lift and the spars carry this load through to the fuselage. To resist horizontal shear stress from either drag or engine thrust, the wing must also form a stiff cantilever in the horizontal plane. A single-spar design will usually be fitted with a second smaller drag-spar nearer the trailing edge, braced to the main spar via additional internal members or a stressed skin. The wing must also resist twisting forces, achieved by cross-bracing or otherwise stiffening the main structure. Cantilever wings require much stronger and heavier spars than would otherwise be needed in a wire-braced design. However, as the speed of the aircraft increases, the drag of the bracing increases sharply, while the wing structure must be strengthened, typically by increasing the strength of the spars and the thickness of the skinning. At speeds of around the drag of the bracing becomes excessive and the wing strong enough to be made a cantilever without excess weight penalty. Increases in engine power through the late 1920s and early 1930s raised speeds through this zone and by the late 1930s cantilever wings had almost wholly superseded braced ones. Other changes such as enclosed cockpits, retractable undercarriage, landing flaps and stressed-skin construction furthered the design revolution, with the pivotal moment widely acknowledged to be the MacRobertson England-Australia air race of 1934, which was won by a de Havilland DH.88 Comet. Currently, cantilever wings are almost universal with bracing only being used for some slower aircraft where a lighter weight is prioritized over speed, such as in the ultralight class. Cantilever in microelectromechanical systems Cantilevered beams are the most ubiquitous structures in the field of microelectromechanical systems (MEMS). An early example of a MEMS cantilever is the Resonistor, an electromechanical monolithic resonator. MEMS cantilevers are commonly fabricated from silicon (Si), silicon nitride (Si3N4), or polymers. The fabrication process typically involves undercutting the cantilever structure to release it, often with an anisotropic wet or dry etching technique. Without cantilever transducers, atomic force microscopy would not be possible. A large number of research groups are attempting to develop cantilever arrays as biosensors for medical diagnostic applications. MEMS cantilevers are also finding application as radio frequency filters and resonators. The MEMS cantilevers are commonly made as unimorphs or bimorphs. Two equations are key to understanding the behavior of MEMS cantilevers. The first is Stoney's formula, which relates cantilever end deflection Ξ΄ to applied stress Οƒ: where is Poisson's ratio, is Young's modulus, is the beam length and is the cantilever thickness. Very sensitive optical and capacitive methods have been developed to measure changes in the static deflection of cantilever beams used in dc-coupled sensors. The second is the formula relating the cantilever spring constant to the cantilever dimensions and material constants: where is force and is the cantilever width. 
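For orientation, the two relations referred to above are commonly quoted as δ = 3σ(1 − ν)L²/(E t²) for the end deflection produced by a differential surface stress σ (expressed in N/m), and k = E w t³/(4 L³) for the spring constant of an end-loaded rectangular cantilever. The sketch below assumes these forms; the material constants and dimensions are purely illustrative, not values taken from this article.

/* Illustrative numbers for a rectangular silicon-like microcantilever.
 * All constants below are assumptions used only to show the arithmetic. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double E  = 170e9;    /* Young's modulus, Pa (assumed) */
    double nu = 0.28;     /* Poisson's ratio (assumed) */
    double L  = 200e-6;   /* beam length, m */
    double w  = 40e-6;    /* beam width, m */
    double t  = 1e-6;     /* beam thickness, m */

    /* Spring constant of an end-loaded rectangular cantilever: k = E*w*t^3 / (4*L^3) */
    double k = E * w * pow(t, 3) / (4.0 * pow(L, 3));

    /* Stoney-type end deflection for a differential surface stress sigma (N/m):
     * delta = 3*sigma*(1 - nu)*L^2 / (E*t^2)  -- commonly quoted form (assumed) */
    double sigma = 0.005; /* N/m, assumed */
    double delta = 3.0 * sigma * (1.0 - nu) * L * L / (E * t * t);

    printf("spring constant k = %.3f N/m\n", k);
    printf("end deflection    = %.2f nm for sigma = %.3f N/m\n", delta * 1e9, sigma);
    return 0;
}

With these assumed values the spring constant comes out at roughly 0.2 N/m and the deflection at a few nanometres, which is the order of magnitude usually associated with static-mode microcantilever sensing.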
The spring constant is related to the cantilever resonance frequency by the usual harmonic oscillator formula . A change in the force applied to a cantilever can shift the resonance frequency. The frequency shift can be measured with exquisite accuracy using heterodyne techniques and is the basis of ac-coupled cantilever sensors. The principal advantage of MEMS cantilevers is their cheapness and ease of fabrication in large arrays. The challenge for their practical application lies in the square and cubic dependences of cantilever performance specifications on dimensions. These superlinear dependences mean that cantilevers are quite sensitive to variation in process parameters, particularly the thickness as this is generally difficult to accurately measure. However, it has been shown that microcantilever thicknesses can be precisely measured and that this variation can be quantified. Controlling residual stress can also be difficult. Chemical sensor applications A chemical sensor can be obtained by coating a recognition receptor layer over the upper side of a microcantilever beam. A typical application is the immunosensor based on an antibody layer that interacts selectively with a particular immunogen and reports about its content in a specimen. In the static mode of operation, the sensor response is represented by the beam bending with respect to a reference microcantilever. Alternatively, microcantilever sensors can be operated in the dynamic mode. In this case, the beam vibrates at its resonance frequency and a variation in this parameter indicates the concentration of the analyte. Recently, microcantilevers have been fabricated that are porous, allowing for a much larger surface area for analyte to bind to, increasing sensitivity by raising the ratio of the analyte mass to the device mass. Surface stress on microcantilever, due to receptor-target binding, which produces cantilever deflection can be analyzed using optical methods like laser interferometry. Zhao et al., also showed that by changing the attachment protocol of the receptor on the microcantilever surface, the sensitivity can be further improved when the surface stress generated on the microcantilever is taken as the sensor signal. See also Applied mechanics Cantilever bicycle brakes Cantilever bicycle frame Cantilever chair Cantilever method Cantilevered stairs Corbel arch Euler–Bernoulli beam theory Grand Canyon Skywalk Knudsen force in the context of microcantilevers Orthodontics Statics References Sources Inglis, Simon: Football Grounds of Britain. CollinsWillow, 1996. page 206. External links Architectural elements Structural system Bridge components
Cantilever
[ "Technology", "Engineering" ]
2,024
[ "Structural engineering", "Building engineering", "Structural system", "Architectural elements", "Bridge components", "Components", "Architecture" ]
160,673
https://en.wikipedia.org/wiki/BogoMips
BogoMips (from "bogus" and MIPS) is a crude measurement of CPU speed made by the Linux kernel when it boots to calibrate an internal busy-loop. An often-quoted definition of the term is "the number of million times per second a processor can do absolutely nothing". BogoMips is a value that can be used to verify whether the processor in question is in the proper range of similar processors, i.e. BogoMips represents a processor's clock frequency as well as the potentially present CPU cache. It is not usable for performance comparisons among different CPUs. History In 1993, Lars Wirzenius posted a Usenet message explaining the reasons for its introduction in the Linux kernel on comp.os.linux: [...] MIPS is short for Millions of Instructions Per Second. It is a measure for the computation speed of a processor. Like most such measures, it is more often abused than used properly (it is very difficult to justly compare MIPS for different kinds of computers). BogoMips are Linus's own invention. The linux kernel version 0.99.11 (dated 11 July 1993) needed a timing loop (the time is too short and/or needs to be too exact for a non-busy-loop method of waiting), which must be calibrated to the processor speed of the machine. Hence, the kernel measures at boot time how fast a certain kind of busy loop runs on a computer. "Bogo" comes from "bogus", i.e, something which is a fake. Hence, the BogoMips value gives some indication of the processor speed, but it is way too unscientific to be called anything but BogoMips. The reasons (there are two) it is printed during boot-up is that a) it is slightly useful for debugging and for checking that the computer[’]s caches and turbo button work, and b) Linus loves to chuckle when he sees confused people on the news. [...] Proper BogoMips ratings As a very approximate guide, the BogoMips can be pre-calculated by the following table. The given rating is typical for that CPU with the then current and applicable Linux version. The index is the ratio of "BogoMips per clock speed" for any CPU to the same for an Intel 386DX CPU, for comparison purposes. With the 2.2.14 Linux kernel, a caching setting of the CPU state was moved from behind to before the BogoMips calculation. Although the BogoMips algorithm itself wasn't changed, from that kernel onward the BogoMips rating for then current Pentium CPUs was twice that of the rating before the change. The changed BogoMips outcome had no effect on real processor performance. In a shell, BogoMips can be easily obtained by searching the cpuinfo file: $ grep -i bogomips /proc/cpuinfo Computation of BogoMips With kernel 2.6.x, BogoMips are implemented in the /usr/src/linux/init/calibrate.c kernel source file. It computes the Linux kernel timing parameter loops_per_jiffy (see jiffy) value. The explanation from source code: <nowiki> /* * A simple loop like * while ( jiffies < start_jiffies+1) * start = read_current_timer(); * will not do. As we don't really know whether jiffy switch * happened first or timer_value was read first. And some asynchronous * event can happen between these two events introducing errors in lpj. * * So, we do * 1. pre_start <- When we are sure that jiffy switch hasn't happened * 2. check jiffy switch * 3. start <- timer value before or after jiffy switch * 4. post_start <- When we are sure that jiffy switch has happened * * Note, we don't know anything about order of 2 and 3. 
* Now, by looking at post_start and pre_start difference, we can * check whether any asynchronous event happened or not */ </nowiki> loops_per_jiffy is used to implement udelay (delay in microseconds) and ndelay (delay in nanoseconds) functions. These functions are needed by some drivers to wait for hardware. Note that a busy waiting technique is used, so the kernel is effectively blocked when executing ndelay/udelay functions. For i386 architecture delay_loop is implemented in /usr/src/linux/arch/i386/lib/delay.c as: /* simple loop based delay: */ static void delay_loop(unsigned long loops) { int d0; __asm__ __volatile__( "\tjmp 1f\n" ".align 16\n" "1:\tjmp 2f\n" ".align 16\n" "2:\tdecl %0\n\tjns 2b" :"=&a" (d0) :"0" (loops)); } equivalent to the following assembler code ; input: eax = d0 ; output: eax = 0 jmp start .align 16 start: jmp body .align 16 body: decl eax jns body which can be rewritten to C-pseudocode static void delay_loop(long loops) { long d0 = loops; do { --d0; } while (d0 >= 0); } Full and complete information and details about BogoMips, and hundreds of reference entries can be found in the (outdated) BogoMips mini-Howto. Timer-based delays In 2012, ARM contributed a new udelay implementation allowing the system timer built into many ARMv7 CPUs to be used instead of a busy-wait loop. This implementation was released in Version 3.6 of the Linux kernel. Timer-based delays are more robust on systems that use frequency scaling to dynamically adjust the processor's speed at runtime, as loops_per_jiffies values may not necessarily scale linearly. Also, since the timer frequency is known in advance, no calibration is needed at boot time. One side effect of this change is that the BogoMIPS value will reflect the timer frequency, not the CPU's core frequency. Typically the timer frequency is much lower than the processor's maximum frequency, and some users may be surprised to see an unusually low BogoMIPS value when comparing against systems that use traditional busy-wait loops. See also Turbo button Instructions per second References External links BogoMips Mini-Howto, V38 Sources of classical standalone benchmark Linux kernel Benchmarks (computing)
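As a footnote to the calibration described above: the BogoMips figure printed at boot is conventionally derived from the calibrated loops_per_jiffy value, with each pass through the two-instruction delay loop counted as two "bogo-instructions". The sketch below assumes that relationship (BogoMips = loops_per_jiffy × HZ / 500000); the lpj and HZ values shown are illustrative, not taken from any particular machine.

/* Sketch of how a reported BogoMips figure relates to loops_per_jiffy.
 * The relationship and the sample values are assumptions for illustration. */
#include <stdio.h>

int main(void)
{
    unsigned long loops_per_jiffy = 4980736; /* example value of the kind reported at boot (lpj=...) */
    unsigned int hz = 250;                   /* assumed kernel tick rate (CONFIG_HZ) */

    /* Each loop iteration counts as two "bogo-instructions",
     * hence the divisor of 500000 rather than 1000000. */
    double bogomips = (double)loops_per_jiffy * hz / 500000.0;
    printf("lpj=%lu, HZ=%u -> %.2f BogoMIPS\n", loops_per_jiffy, hz, bogomips);
    return 0;
}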
BogoMips
[ "Technology" ]
1,445
[ "Benchmarks (computing)", "Computing comparisons", "Computer performance" ]
160,688
https://en.wikipedia.org/wiki/Premium%20Bonds
Premium Bonds is a lottery bond scheme organised by the United Kingdom government since 1956. At present it is managed by the government's National Savings and Investments agency. The principle behind Premium Bonds is that rather than the stake being gambled, as in a usual lottery, it is the interest on the bonds that is distributed by a lottery. The bonds are entered in a monthly prize draw and the government promises to buy them back, on request, for their original price. The government pays interest into the bond fund (4.15% per annum in December 2024 but decreasing to 4% in January 2025) from which a monthly lottery distributes tax-free prizes to bondholders whose numbers are selected randomly. The machine that generates the numbers is called ERNIE, an acronym for "Electronic Random Number Indicator Equipment". Prizes range from £25 to £1,000,000 and (since December 2024) the odds of a £1 bond winning a prize in a given month are 22,000 to 1. Investors can buy bonds at any time but they must be held for a whole calendar month before they qualify for a prize. As an example, a bond purchased mid-May must then be held throughout June before being eligible for the draw in July (and onwards). Bonds purchased by reinvestment of prizes are immediately eligible for the following month's draw. Numbers are entered in the draw each month, with an equal chance of winning, until the bond is cashed. As of 2015, each person may own bonds up to £50,000. Since 1 February 2019, the minimum purchase amount for Premium Bonds has been £25. There are over 128.7 billion eligible Premium Bonds, each having a value of £1. When introduced to the wider public in 1957, the only other similar game available in the UK was the football pools, with the National Lottery not coming into existence until 1994. Although many avenues of lotteries and other forms of gambling are now available to British adults, Premium Bonds are held by more than 24 million people, equivalent to more than 1 in 3 of the UK population. History The term "premium bond" has been used in the English language since at least the late 18th century, to mean a bond that earns no interest but is eligible for entry into a lottery. The modern iteration of Premium Bonds was introduced by Harold Macmillan, as Chancellor of the Exchequer, in his Budget of 17 April 1956, to control inflation and encourage people to save. On 1 November 1956, in front of the Royal Exchange in the City of London, the Lord Mayor of London, Alderman Sir Cuthbert Ackroyd, bought the first bond from the Postmaster General, Dr Charles Hill, for £1. Councillor William Crook, the mayor of Lytham St Anne's, bought the second. The Premium Bonds office was in St Annes-on-Sea, Lancashire, until it moved to Blackpool in 1978. Winning Winners of the jackpot are told on the first working day of the month, although the actual date of the draw varies. The online prize finder is updated by the third or fourth working day of the month. Winners of the top £1m prize are told in person of their win by "Agent Million", an NS&I employee, usually on the day before the first working day of the month. However, in-person visits were suspended, starting in May 2020, during the COVID-19 pandemic in the United Kingdom. Bond holders can check whether they have won any prizes on the National Savings & Investment Premium Bond Prize Checker website, or the smartphone app, which provides lists of winning bond numbers for the past six months. 
Older winning numbers (more than 18 months old) can also be checked in the London Gazette Premium Bonds Unclaimed Prizes Supplement. Odds of winning In December 2008, NS&I reduced the interest rate (and therefore the odds of winning) due to the drop in the Bank of England base rate during the Great Recession, leading to criticism from members of Parliament, financial experts and holders of bonds; many claimed Premium Bonds were now "worthless", and somebody with £30,000 invested and "average luck" would win only 10 prizes a year compared to 15 the previous year. Investors with smaller, although significant, amounts would possibly win nothing. From 1 January 2009 the odds of winning a prize for each £1 of bond was 36,000 to 1. In October 2009, the odds returned to 24,000 to 1 with the prize fund interest rate increase. The odds reached 26,000 to 1 by October 2013 and then reverted to 24,500 to 1 in November 2017. The odds of winning are 1/22,000, resulting in the expected number of prizes for the maximum £50,000 worth of bonds being 27 per year. Prize fund distribution The prize fund is equal to one month's interest on all bonds eligible for the draw. The annual interest is set by NS&I and was 1.40%, reducing to 1.00%. This was increased to 2.2%, then increased again to 3% and is now at 4% from January 2025. The following table lists the distribution of prizes on offer in the January 2025 draw. Economic analysis While the mean return is 4% as of January 2025, the median return is lower. For an investor with the maximum £50,000 invested, the median return is 3.45% (£1,725). For investors with lower amounts invested, the median return is lower. The typical investor with £1,250 or less invested will receive nothing in a year. Premium Bonds are tax free, so are more attractive to higher rate taxpayers. ERNIE ERNIE - an acronym for "Electronic Random Number Indicator Equipment" - is the name for a series of hardware random number generators developed for this application. There have been five models of ERNIE to date. All of them have generated true random numbers derived from random statistical fluctuations in a variety of physical processes. The first ERNIE was built at the Post Office Research Station by a team led by Sidney Broadhurst. The designers were Tommy Flowers and Harry Fensom and it derives from Colossus, one of the world's first digital computers. It was introduced in 1957, with the first draw on 1 June, and generated bond numbers from the signal noise created by neon gas discharge tubes. ERNIE 1 is in the collections of the Science Museum in London and was on display between 2008 and 2015. ERNIE 2 replaced the first ERNIE in 1972. ERNIE 3 in 1988 was the size of a personal computer; at the end of its life it took five and a half hours to complete its monthly draw. In August 2004, ERNIE 4 was brought into service in anticipation of an increase in prizes each month from September 2004. Developed by LogicaCMG, it was 500 times faster than the original and generated a million numbers an hour; these were checked against a list of valid bonds. By comparison, the original ERNIE generated 2,000 numbers an hour and was the size of a van. ERNIE 4 used thermal noise in transistors as its source of randomness to generate true random numbers. ERNIE's output was independently tested each month by the Government Actuary's Department, the draw being valid only if it was certified to be statistically consistent with randomness. 
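As a rough arithmetic check of the odds quoted earlier in this article (not an official NS&I calculation), the expected number of prizes per year and the chance of a small holding winning nothing can be estimated as follows; the holding sizes are simply the figures discussed above.

/* Rough check of the quoted Premium Bonds odds: one prize per 22,000
 * bond-months per 1-pound bond. Holdings are the figures from the article. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double odds = 22000.0;          /* one prize per 22,000 bond-months */

    /* Expected prizes per year on the maximum 50,000-pound holding */
    double max_holding = 50000.0;
    printf("expected prizes/year on 50,000: %.1f\n", max_holding * 12.0 / odds);

    /* Chance that a 1,250-pound holding wins nothing in a full year */
    double small_holding = 1250.0;
    double p_nothing = pow(1.0 - 1.0 / odds, small_holding * 12.0);
    printf("P(no prize in a year, 1,250 holding): %.2f\n", p_nothing);
    return 0;
}

This reproduces the quoted figure of about 27 prizes per year for a £50,000 holding and shows that a £1,250 holding has a slightly better-than-even chance of winning nothing in twelve months, consistent with the median-return discussion above.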
At the end of its life it was moved to Bletchley Park's National Museum of Computing. ERNIE 5, the latest model, was brought into service in March 2019, and is a quantum random number generator built by ID Quantique. It uses quantum technology to produce random numbers through light, replacing the former 'thermal noise' method. Running at speeds 21,000 times faster than the first ERNIE, it can produce 3 million winners in just 12 minutes each month. In popular culture ERNIE, anthropomorphised in early advertising, receives Valentine cards, Christmas cards and letters from the public. It is the subject of the song "E.R.N.I.E." by Madness, from the 1980 album Absolutely. It is also referenced by Jethro Tull in their album Thick as a Brick. In other countries Premium Bonds under various names exist or have existed in various countries. Similar programmes to UK Premium Bonds include: In the Republic of Ireland, Prize Bonds also originated in early 1957. In Sweden, "Premieobligationer" usually run for five years and are traded on Nasdaq OMX Stockholm. The unit (one Bond) is generally 1000 SEK or 5000 SEK. Holders of 10 or 50 consecutive bonds starting at 1 + N * 10 or 50 are guaranteed one win per year. Outstanding bonds were around 28.9 billion SEK. In Denmark, "Premieobligationer" usually ran for five or 10 years with a fixed prize list printed on the physical bonds. They were physical bearer bonds and most series were extended one or more times by another 5 or 10 years. The last series have now ended and must be redeemed for their principal cash within 10 years of the final ending dates. The bonds were generally identified by their colour, for instance the blue premium bonds were issued in 1948, and were redeemed in 1998 (10 years + 4 10-year extension). The first 200 DKK of each prize was tax free, the rest taxed at only 15% (compared to 30% or more for ordinary income). In New Zealand, "Bonus Bonds" were established by the NZ Government in 1970 and sold to ANZ Bank in 1990. In August 2020 it was announced that the scheme would close due to low interest rates reducing the prize pool. At the time of the announcement there were 1.2m bondholders with NZD $3.2 billion invested. Unrelated concepts In 2023, American economist Paul Krugman used the name "premium bonds" for an unrelated type of bond that he proposed to avoid a default due to the United States debt ceiling. Academic studies In 2008, two financial economists, Lobe and Hoelzl, analysed the main driving factors for the immense marketing success of Premium Bonds. One in three Britons invest in Premium Bonds. The thrill of gambling is significantly boosted by enhancing the skewness of the prize distribution. However, using data collected over the past fifty years, they found that the bond bears relatively low risk compared to many other investments. Aaron Brown discusses in a 2006 book Premium Bonds in comparison with equity-linked, commodity-linked and other "added risk" bonds. His conclusion is that it makes little difference, either to a retail investor or from a theoretical finance perspective, whether the added risk comes from a random number generator or from fluctuations in financial markets. See also Prize-linked savings accounts are savings accounts which use a similar system to grant interest References External links National Savings & Investments website Are Premium Bonds worth it? 
– BBC News, 2006 Q&A: Premium Bonds – The Guardian, 2006 Companies based in Blackpool Companies based in Glasgow Borough of Fylde Government bonds issued by the United Kingdom 1956 introductions Personal finance Public finance of the United Kingdom Lotteries in the United Kingdom History of computing in the United Kingdom Tax-advantaged savings plans in the United Kingdom
Premium Bonds
[ "Technology" ]
2,277
[ "History of computing", "History of computing in the United Kingdom" ]
160,697
https://en.wikipedia.org/wiki/Bristol%20Taurus
The Taurus is a British 14-cylinder two-row radial aircraft engine, produced by the Bristol Engine Company starting in 1936. The Taurus was developed by adding cylinders to the existing single-row Aquila design and transforming it into a twin-row radial engine, creating a powerplant that produced just over with very low weight. Design and development Bristol had originally intended to use the Aquila and Perseus as two of its major product lines in the 1930s, but the rapid increase in size and speed of aircraft in the 1930s demanded much larger engines. The mechanicals from both of these designs were then put into two-row configurations to develop much larger engines, the Aquila becoming the Taurus, and the Perseus becoming the Hercules. The Taurus used sleeve valves, resulting in an uncluttered exterior and little mechanical noise. It offered high power with a relatively low weight, starting from in the earliest versions. It was also compact, with a diameter of which made it attractive for fighters. Unfortunately, the engine was also described as "notoriously troublesome", with protracted development and a slow growth in rated power. After several years of development, power had only increased from to . As the most important applications of this engine was in aircraft that flew at low altitude, development efforts focused on low-altitude performance. The first Taurus engines were delivered just before World War II, and was used primarily in the Fairey Albacore and Bristol's Beaufort. In April 1940, a suggestion was made to replace the Taurus engines of the latter with the Pratt & Whitney R-1830 Twin Wasp, which had a slightly larger diameter, but this change was postponed to the autumn of 1941 while attempts were made to cure the Taurus's reliability problems, and later had to be temporarily reversed because of shortages of Twin Wasps. The Twin Wasp was, however, strongly preferred, especially for overseas postings, because of its better reliability. The reliability problems were mostly cured in later models of the Taurus engine by a change in the cylinder manufacturing process, although the engine reputation never recovered, and in the Albacore the Taurus engine was used until the end of that aircraft's production in 1943. There were no other operational applications of the Taurus engine, because its initial reliability problems discouraged development of Taurus-powered aircraft, and because later-war combat aircraft demanded more powerful engines. Production ended in favour of the Hercules engine. Variants Taurus II (1940) – maximum power with boost at 3,225 rpm for take off or one minute using 87 octane fuel. Medium supercharged. Taurus III – maximum continuous power, medium supercharged, compression ratio 7.2:1. Taurus VI – maximum continuous power, medium supercharged, compression ratio 7.2:1. Taurus XII (1940) – maximum continuous power, medium supercharged, compression ratio 7.2:1. Taurus XVI (1940) – maximum continuous power, medium supercharged, compression ratio 7.2:1. Taurus XX – trials engine only, one built. Applications Note: Bristol Type 148 Bristol Beaufort Fairey Albacore Fairey Battle testbed only Gloster F.9/37 Specifications (Taurus II) See also References Notes Bibliography Gunston, Bill (2006). World Encyclopedia of Aero Engines: From the Pioneers to the Present Day. 5th edition, Stroud, UK: Sutton. . White, Graham (1995). 
Allied Aircraft Piston Engines of World War II: History and Development of Frontline Aircraft Piston Engines Produced by Great Britain and the United States During World War II. Warrendale, Pennsylvania: SAE International. Aircraft air-cooled radial piston engines Taurus Sleeve valve engines 1930s aircraft piston engines
Bristol Taurus
[ "Technology" ]
760
[ "Sleeve valve engines", "Engines" ]
160,706
https://en.wikipedia.org/wiki/Bristol%20Hercules
The Bristol Hercules is a 14-cylinder two-row radial aircraft engine designed by Sir Roy Fedden and produced by the Bristol Engine Company starting in 1939. It was the most numerous of their single sleeve valve (Burt-McCollum, or Argyll, type) designs, powering many aircraft in the mid-World War II timeframe. The Hercules powered a number of aircraft types, including Bristol's own Beaufighter heavy fighter design, although it was more commonly used on bombers. The Hercules also saw use in civilian designs, culminating in the 735 and 737 engines for such as the Handley Page Hastings C1 and C3 and Bristol Freighter. The design was also licensed for production in France by SNECMA. Design and development Shortly after the end of World War I, the Shell company, Asiatic Petroleum, commissioned Harry Ricardo to investigate problems of fuel and engines. His book was published in 1923 as β€œThe Internal Combustion Engine”. Ricardo postulated that the days of the poppet valve were numbered and that a sleeve valve alternative should be pursued. The rationale behind the single sleeve valve design was two-fold: to provide optimum intake and exhaust gas flow in a two-row radial engine, improving its volumetric efficiency and to allow higher compression ratios, thus improving its thermal efficiency. The arrangement of the cylinders in two-row radials made it very difficult to utilise four valves per cylinder, consequently all non-sleeve valve two- and four-row radials were limited to the less efficient two-valve configuration. Also, as combustion chambers of sleeve-valve engines are uncluttered by valves, especially hot exhaust valves, so being comparatively smooth they allow engines to work with lower octane number fuels using the same compression ratio. Conversely, the same octane number fuel may be utilised while employing a higher compression ratio, or supercharger pressure, thus attaining either higher economy or power output. The downside was the difficulty in maintaining sufficient cylinder and sleeve lubrication. Manufacturing was also a major problem. Sleeve valve engines, even the mono valve Fedden had elected to use, were extremely difficult to make. Fedden had experimented with sleeve valves in an inverted V-12 as early as 1927 but did not pursue that engine any further. Reverting to nine cylinder engines, Bristol had developed a sleeve valve engine that would actually work by 1934, introducing their first sleeve-valve designs in the class Perseus and the class Aquila that they intended to supply throughout the 1930s. Aircraft development in the era was so rapid that both engines quickly ended up at the low-power end of the military market and, in order to deliver larger engines, Bristol developed 14-cylinder versions of both. The Perseus evolved into the Hercules, and the Aquila into the Taurus. These smooth-running engines were largely hand-built, which was incompatible with the needs of wartime production. At that time, the tolerances were simply not sufficiently accurate to ensure the mass production of reliable engines. Fedden drove his teams mercilessly, at both Bristol and its suppliers, and thousands of combinations of alloys and methods were tried before a process was discovered which used centrifugal casting to make the sleeves perfectly round. This final success arrived just before the start of the Second World War. In 1937 Bristol acquired a Northrop Model 8A-1, the export version of the A-17 attack bomber, and modified it as a testbed for the first Hercules engines. 
In 1939 Bristol developed a modular engine installation for the Hercules, a so-called "power-egg", allowing the complete engine and cowling to be fitted to any suitable aircraft. A total of over 57,400 Hercules engines were built. Variants Hercules I (1936) – , single-speed supercharger, run on 87 octane fuel. Hercules II (1938) – , single-speed supercharger, run on 87 octane fuel. Hercules III (1939) – , two-speed supercharger, run on either 87 or 100 octane fuel. Hercules IV (1939) – , single-speed supercharger, run on 87 octane fuel. Hercules V (1939) – , civil prototype derived from the Hercules IV but not developed. Hercules VI (1941) – , two-speed supercharger, run on either 87 or 100 octane fuel. Hercules VII production cancelled. Hercules VIII – , very high-altitude version of the Hercules II, single-speed supercharger with an auxiliary high-altitude single-speed 'S' supercharger. Hercules X (1941) – , derived from the Hercules III. Hercules XI (1941) – , derived from the Hercules III, run on 100 octane fuel. Hercules XII – derived from the Hercules IV. Hercules XIV (1942) – , developed for the civil market and used by BOAC, run on 100 octane fuel. Hercules XVMT – , very high-altitude development of the Hercules II, single-speed supercharger with an auxiliary high-altitude turbo-supercharger. Hercules XVI (1942) – , two-speed supercharger, run on either 87 or 100 octane fuel. Hercules XVII (1943) – , two-speed supercharger locked in 'M' gear. Hercules XVIII – low-level development of the Hercules VI with cropped supercharger impellers. Hercules XIX (1943) – , a development of the Hercules XVII, the two-speed supercharger had cropped impellers locked in 'M' gear. Hercules XX – similar to the Hercules XIX. Hercules 36 – a development engine derived from the Hercules VI and Hercules XVI, run on 100 octane fuel. The Hercules 38 was a further development of the Hercules 36. Hercules 100 (1944) – , the first in a new sub-series of Hercules engines designed primarily for the impending post-war civil market. The entire series was split, some versions had standard epicyclic reduction gearing and parallel versions had a new torquemeter-type reduction gearing. Hercules 101 – , developed from the Hercules 100. The Hercules 103 was the torquemeter version. The Hercules 110 was a further development of the Hercules 101. Hercules 105 – , developed from the Hercules 101 with modified supercharger gears. Hercules 106 – , developed from the Hercules 101. The Hercules 107 was the torquemeter version. Hercules 120 – , high-altitude development of the Hercules 101. The Hercules 121 was the torquemeter version. The Hercules 200 was further modified version of the Hercules 120. Hercules 130 – , development of the Hercules 100. The Hercules 134 was a development with modified mounting ring and exhaust pipes for a rear manifold. Hercules 216 – , development of the Hercules 106 with the Hercules 230 power section and single speed supercharger. Applications: Hercules 230 – , development of the Hercules 130 with the re-designed power section and modified mounting ring and exhaust pipes for a rear manifold. The Hercules 270 was a development. The Hercules 231 and Hercules 271 were the torquemeter versions. Hercules 232 – modified development of the Hercules 230 for improved performance. The Hercules 233 was the torquemeter version. Hercules 234 – modified development of the Hercules 232. The Hercules 235 was the torquemeter version. 
The Hercules 238 was a military version of the Hercules 734 which itself was based on the Hercules 234. Hercules 260 – modified development of the Hercules 230 to suit reversible propellers. The Hercules 261 was the torquemeter version. Hercules 264 – , a development of the Hercules 260. The Hercules 265 was the torquemeter version. Hercules 268 – a further development of the Hercules 260. The Hercules 269 was the torquemeter version. Hercules 630 – , a civil development of the Hercules 100. The Hercules 631 was the torquemeter version. Hercules 632 – , a civil-series engine developed from the Hercules 630. The Hercules 633 was the torquemeter version. The Hercules 638 and Hercules 672, along with their torquemeter versions the Hercules 639 and Hercules 673 were developments of the Hercules 632. Hercules 634 – , a civil-series engine developed from the Hercules 630 with modified mounting ring and exhaust pipes for a rear manifold. The Hercules 635 was the torquemeter version. Hercules 636 – a civil-series engine developed from the Hercules 630 with modified mounting ring and exhaust pipes for a rear manifold. The Hercules 637 was the torquemeter version. The Hercules 637-2 and Hercules 637-3 were further torquemeter developments. Hercules 730 – , a civil-series engine developed from the Hercules 230 and 630 with improved power section, the Hercules 731 was the torquemeter version. Hercules 732 – a civil-series engine developed from the Hercules 730 with modified mounting ring and exhaust pipes for a rear manifold. The Hercules 733 was the torquemeter version. Hercules 734 – , a civil-series engine developed from the Hercules 730. The Hercules 735 was the torquemeter version. The Hercules 238 was a military version of the civil Hercules 734. Hercules 736 – , a civil-series engine developed from the Hercules 730. The Hercules 737 was the torquemeter version. Hercules 738 – a civil-series engine developed from the Hercules 730. Hercules 739 – the torquemeter version of the Hercules 738. Hercules 750 – a civil-series engine developed from the Hercules 730 to suit braking propellers. The Hercules 751 was the torquemeter version. Hercules 758 – , a civil-series development of the Hercules 750, the Hercules 759 was the torquemeter version. The Hercules 790 and its torquemeter version the Hercules 790 were further developed from the Hercules 758. Hercules 760 – a civil-series engine developed from the Hercules 730. Hercules 762 – , a civil-series high-altitude development of the Hercules 730 with modified supercharger. The Hercules 763 was the torquemeter version. Hercules 772 – , a civil-series development of the Hercules 762. The Hercules 773 was the torquemeter version. Applications Bristol Hercules applications: Armstrong Whitworth Albemarle Avro Lancaster Mk II Avro Tudor VII Avro York Mk II BrΓ©guet 890H Mercure prototype Bristol Beaufighter Bristol Freighter CASA C-207 Azor Fokker T.IX Folland Fo.108 GAL Universal Freighter Handley Page Halifax Handley Page Halton Handley Page Hastings Handley Page Hermes Nord Noratlas Nord Noroit Northrop Gamma 2L Saro A.36 Lerwick Short S.26 Short Seaford Short Solent Short Stirling SNCASE SE-1010 Vickers Valetta Vickers Varsity Vickers VC.1 Viking Vickers Wellesley Vickers Wellington Engines on display RAF Snaith Museum https://www.facebook.com/RAFSnaith/ A Bristol Hercules is on public display at the City of Norwich Aviation Museum in Horsham St Faith, Norfolk. 
Two incomplete and badly corroded examples from aircraft lost in the vicinity of Texel and recovered from the sea are on display at the Museum Kaapskil in Oudeschild, Texel, NL. Specifications (Hercules II) See also References Notes Bibliography External links Running a Hercules for the first time in 30 years Image of the gear system for the sleeve drive "Safety through engine development testing" a 1948 advert for the Hercules in Flight magazine "600 Hours between overhaul" a 1948 Flight advertisement for the Hercules Hercules Aircraft air-cooled radial piston engines Sleeve valve engines 1930s aircraft piston engines
Bristol Hercules
[ "Technology" ]
2,338
[ "Sleeve valve engines", "Engines" ]
160,734
https://en.wikipedia.org/wiki/Michael%20Beaumont%2C%2022nd%20Seigneur%20of%20Sark
Seigneur John Michael Beaumont (20 December 1927 – 3 July 2016) was the twenty-second Seigneur of Sark in the Channel Islands. He worked as a civil engineer before succeeding his paternal grandmother, Sibyl Hathaway, the 21st Dame of Sark, in 1974. During his rule, Beaumont saw the loss of many feudal rights enjoyed by the seigneurs, and he was consequently often described as the "last feudal baron". Family Beaumont was the son of the Royal Air Force officer and film producer Francis William Beaumont and his first wife, Enid Ripley. His paternal grandmother, Sibyl Hathaway, ascended as the Dame of Sark six months before his birth. Francis and Enid divorced in 1937 as a result of his adultery with an actress, Mary Lawson, whom he subsequently married. Beaumont's father and stepmother were killed on 4 May 1941, during the Liverpool Blitz, which left the 14-year-old Beaumont as heir to his grandmother. Beaumont worked as a structural design engineer for the British Aircraft Corporation in Bristol before moving to Shoreham-by-Sea, where he worked on Beagle Aircraft. In 1956, Beaumont married Diana La Trobe-Bateman, and the couple had two sons, Christopher and Anthony. Seigneurship In 1974, Beaumont's grandmother died and he succeeded her as Seigneur of Sark. The new seigneur swore fealty to Queen Elizabeth in 1978, when she and the Duke of Edinburgh visited the island for the first time since his accession. In 1990, a French nuclear physicist named AndrΓ© Gardes came to Sark to depose Beaumont and establish himself as seigneur, but this one-man "invasion" attempt failed. During her 2001 visit, the Queen made the seigneur an Officer of the Order of the British Empire. In 2008, Sark experienced a major change in the system of government. Beaumont remained the overlord of the island, but lost some of his feudal privileges. He did retain the privilege of being the only person on the island with the right to keep pigeons and an unspayed dog. The first democratic elections on the island took place in December 2008. Beaumont appreciated the fact that it allowed his island to stay independent from Guernsey. Final years Due to their poor health, the aging seigneur and his wife moved out of La Seigneurie, the traditional residence of the ruler of the island, to a more manageable cottage on their estate. In 2009, they agreed to allow David Synnott and his wife to live in the Seigneurie for ten yearsβ€”until the end of October 2019. The rent is paid through renovations, and Synnott said that the seigneur was "effectively making a large and generous donation to his successor who will benefit from the work". Beaumont's heir, his son, Major Christopher Beaumont, lived and worked with his family in Britain and served as an officer in the Royal Engineers. In 2008, he told the Chief Pleas that he intended to move back to Sark upon inheriting the fief. In 2011, the seigneur declared that he would never consider selling his fief. Beaumont died on 3 July 2016 and was succeeded by his eldest son, Christopher. Beaumont's widow, Diana, died on 1 December 2016. She was eighty years old when she died, which was less than five months after Beaumont's death. References 1927 births 2016 deaths Officers of the Order of the British Empire Seigneurs of Sark Structural engineers English civil engineers
Michael Beaumont, 22nd Seigneur of Sark
[ "Engineering" ]
724
[ "Structural engineering", "Structural engineers" ]
160,767
https://en.wikipedia.org/wiki/Bristol%20Centaurus
The Centaurus was the final development of the Bristol Engine Company's series of sleeve valve radial aircraft engines. The Centaurus is an 18-cylinder, two-row design that eventually delivered over . The engine was introduced into service late in the Second World War and was one of the most powerful aircraft piston engines to see service. Design and development Like other Bristol sleeve valve engines, the Centaurus was based on the design knowledge acquired from an earlier design, in this case the Bristol Perseus cylinder. The Centaurus used 18 Perseus cylinders. The same cylinder was in use in the contemporary 14-cylinder Hercules, which was being brought into production when the design of the Centaurus started. The Centaurus had a cylinder swept volume of , nearly as much as the American Wright R-3350 Duplex-Cyclone large radial, making the Centaurus one of the largest aircraft piston engines to enter production, while that of the Hercules was . The nearly 40 per cent higher capacity was achieved by increasing the stroke from and by changing to two rows of nine cylinders instead of two rows of seven. The diameter of the Centaurus was only just over 6 per cent greater than the Hercules in spite of its much greater swept volume. The cylinder heads had an indentation like an inverted top hat, which was finned, but it was difficult to get air down into this hollow to adequately cool the head. During development, Bristol contacted ICI Metals Division, Birmingham, to enquire whether a copper-chromium alloy with higher thermal conductivity would have sufficient high temperature strength to be used for this purpose. With the same cylinder volume and using the new material, the horsepower per cylinder was raised from to . Bristol maintained the Centaurus from type-testing in 1938, but production did not start until 1942, owing to the need to get the Hercules into production and improve the reliability of the entire engine line. Nor was there any real need for the larger engine at this early point in the war, when most military aircraft designs had a requirement for engines of about . The Hercules power of about was better suited to the existing airframes. The Centaurus did not enter service until near the end of the war, first appearing on the Vickers Warwick. Other wartime, or postwar, uses included the Bristol Brigand and Buckmaster, Hawker Tempest and Sea Fury and the Blackburn Firebrand and Beverley. The engine also entered service after the war in a civilian airliner, the Airspeed Ambassador and was also used in the Bristol Brabazon I Mark 1 prototype aircraft until the Brabazon trans-Atlantic airliner programme was cancelled. The eight Centaurus engines were to be replaced with eight Bristol Proteus gas turbines on the Mark II giving a faster cruising speed at higher altitude. By the end of the war in Europe, around 2,500 examples of the Centaurus had been produced by Bristol. The 373 was the most powerful version of the Centaurus and was intended for the Blackburn Beverley transport aircraft. Using direct fuel injection, it achieved a remarkable , but was never fitted. A projected enlarged capacity version of the Centaurus was designed by Sir Roy Fedden; cylinders were produced for this engine, but it was never built. 
Known as the Bristol Orion, a name used previously for a variant of the Jupiter engine and later re-used for a turboprop, this development was also a two-row, 18 cylinder sleeve valve engine, with the displacement increased to [], nearly as large as the American Pratt & Whitney R-4360 Wasp Major four-row, 28-cylinder radial, the largest displacement aviation radial engine ever placed in quantity production. Variants Centaurus I – , two-speed full/medium supercharger and left-hand tractor drive. Run on 100 octane fuel. Centaurus IV – , two-speed medium/full supercharger and rigid mounting. Centaurus V – , two-speed full/medium supercharger with cropped impellers. The Centaurus VI was similar to the Centaurus V with master connecting rods in cylinder numbers 7 and 8. The Centaurus VIII was similar to the Centaurus VI with methanol/water fittings. Centaurus VII – , two-speed medium/full supercharger and rigid mounting. Centaurus IX – , and Centaurus XI were similar to the Centaurus VII. The Centaurus X was similar to the Centaurus IX with methanol/water fittings. Centaurus XII – , was a development of the Centaurus IV with twin-turbine entry supercharger, redesigned propeller reduction gear and Hobson-RAE injector and vertically mounted starter motor. The Centaurus XV was a development of the Centaurus VII with flexible mounting. Centaurus XVIII – , was similar to the Centaurus XV. Centaurus XX – , a dual-installation engine for the Bristol Brabazon, similar to the Centaurus 57. Centaurus 57 – , a development of the Centaurus XII with modified supercharger and injector. The Centaurus 58 was a modified Centaurus 57, and the Centaurus 59 was a modified Centaurus 58 with a flexible mounting. Centaurus 70 – , a modified Centaurus 57 with single-speed medium supercharger. The Centaurus 71 was a lightened Centaurus 70 with torquemeter-type reduction gear and accessory drive. Centaurus 100 – , a modified Centaurus 57 with two-speed full/medium supercharger and methanol/water injector. The Centaurus 130 was a civil model, modified from the Centaurus 100 with single-speed medium supercharger. Centaurus 160 – , two-speed full/medium supercharger, a front cover suitable for braking propeller, front ignition, accessory drive, improved sleeve timing and dynamic suspension mounting. The Centaurus 161 was a Centaurus 160 with torquemeter-type reduction gear. The Centaurus 165 was a Centaurus 161 with improved power section and methanol/water fittings. Centaurus 170 – , a development of the Centaurus 160 with single-speed medium supercharger. The Centaurus 171 was a Centaurus 170 with torquemeter-type reduction gear. The Centaurus 173 was a Centaurus 171 with methanol/water injection and accessory drive. The Centaurus 175 was a Centaurus 173 with modified valve port timings and reduced boost. Centaurus 373 – , a modified Centaurus 173. Centaurus 568 – , a civil engine with two-speed full/medium supercharger modified from the Centaurus 58. Centaurus 630 – , civil engine with single-speed medium supercharger, a front cover suitable for braking propeller, front ignition, accessory drive, improved sleeve timing and dynamic suspension mounting. The Centaurus 631 was a Centaurus 630 with torquemeter-type reduction gear. Centaurus 660 – , civil engine with two-speed full/medium supercharger, a front cover suitable for braking propeller, front ignition, accessory drive, improved sleeve timing and dynamic suspension mounting. The Centaurus 661 was a Centaurus 660 with torquemeter-type reduction gear. 
The Centaurus 662 was a Centaurus 660 with methanol/water injection for improved takeoff power; the Centaurus 663 was a Centaurus 662 with torquemeter-type reduction gear. Applications Survivors The Royal Navy Historic Flight operated a Hawker Sea Fury powered by a Bristol Centaurus engine until it was destroyed in an accident on 28 April 2021 whilst attempting a forced landing following a failure and seizure of its Bristol Centaurus XVIII engine (accident report: https://assets.publishing.service.gov.uk/media/628cd96cd3bf7f1f47c65ebc/Hawker_Sea_Fury_T_Mk_20_G-RNHF_07-22.pdf). Engines on display Preserved Bristol Centaurus engines are on public display at the following museums: Aerospace Bristol Aerospace Museum of California Fleet Air Arm Museum Imperial War Museum Duxford London Science Museum Midland Air Museum Shuttleworth Collection, Old Warden Dumfries and Galloway Aviation Museum San Diego Air & Space Museum Specifications (Centaurus VII) See also References Notes Bibliography Bridgman, L. (ed.) (1998) Jane's Fighting Aircraft of World War II. Crescent. Gunston, Bill. Development of Piston Aero Engines. Cambridge, UK: Patrick Stephens, 2006. Gunston, Bill. World Encyclopedia of Aero Engines: From the Pioneers to the Present Day. 5th edition, Stroud, UK: Sutton, 2006. White, Graham. Allied Aircraft Piston Engines of World War II: History and Development of Frontline Aircraft Piston Engines Produced by Great Britain and the United States During World War II. Warrendale, Pennsylvania: SAE International, 1995. External links Period advertisement for the Bristol Centaurus - Flight, May 1949 Video of a cutaway engine in motion illustrating its operation Aircraft air-cooled radial piston engines Centaurus Sleeve valve engines 1930s aircraft piston engines
Bristol Centaurus
[ "Technology" ]
1,828
[ "Sleeve valve engines", "Engines" ]
160,773
https://en.wikipedia.org/wiki/Argonne%20National%20Laboratory
Argonne National Laboratory is a federally funded research and development center in Lemont, Illinois, United States. Founded in 1946, the laboratory is owned by the United States Department of Energy and administered by UChicago Argonne LLC of the University of Chicago. The facility is the largest national laboratory in the Midwest. Argonne had its beginnings in the Metallurgical Laboratory of the University of Chicago, formed in part to carry out Enrico Fermi's work on nuclear reactors for the Manhattan Project during World War II. After the war, it was designated as the first national laboratory in the United States on July 1, 1946. In its first decades, the laboratory was a hub for peaceful use of nuclear physics; nearly all operating commercial nuclear power plants around the world have roots in Argonne research. More than 1,000 scientists conduct research at the laboratory, in the fields of energy storage and renewable energy; fundamental research in physics, chemistry, and materials science; environmental sustainability; supercomputing; and national security. Argonne formerly ran a smaller facility called Argonne National Laboratory-West (or simply Argonne-West) in Idaho next to the Idaho National Engineering and Environmental Laboratory. In 2005, the two Idaho-based laboratories merged to become the Idaho National Laboratory. Argonne is a part of the expanding Illinois Technology and Research Corridor. Fermilab, which is another USDoE National Laboratory, is located approximately 20 miles away. Overview Argonne has five areas of focus, as stated by the laboratory in 2022, including scientific discovery in physical and life sciences; energy and climate research; global security advances to protect society; operating research facilities that support thousands of scientists and engineers from around the world; and developing the scientific and technological workforce. History Origins Argonne began in 1942 as the Metallurgical Laboratory, part of the Manhattan Project at the University of Chicago. The Met Lab built Chicago Pile-1, the world's first nuclear reactor, under the stands of the University of Chicago sports stadium. In 1943, CP-1 was reconstructed as CP-2, in the Argonne Forest, a forest preserve location outside Chicago. The laboratory facilities built here became known as Site A. On July 1, 1946, Site A of the "Metallurgical Laboratory" was formally re-chartered as Argonne National Laboratory for "cooperative research in nucleonics." At the request of the U.S. Atomic Energy Commission, it began developing nuclear reactors for the nation's peaceful nuclear energy program. In the late 1940s and early 1950s, the laboratory moved west to a larger location in unincorporated DuPage County and established a remote location in Idaho, called "Argonne-West," to conduct further nuclear research. Early research The lab's early efforts focused on developing designs and materials for producing electricity from nuclear reactions. The laboratory designed and built Chicago Pile 3 (1944), the world's first heavy-water moderated reactor, and the Experimental Breeder Reactor I (Chicago Pile 4) in Idaho, which lit a string of four light bulbs with the world's first nuclear-generated electricity in 1951. The BWR power station reactor, now the second most popular design worldwide, came from the BORAX experiments. 
The knowledge gained from the Argonne experiments was the foundation for the designs of most of the commercial reactors used throughout the world for electric power generation, and inform the current evolving designs of liquid-metal reactors for future power stations. Meanwhile, the laboratory was also helping to design the reactor for the world's first nuclear-powered submarine, the U.S.S. Nautilus, which steamed for more than 513,550 nautical miles (951,090Β km) and provided a basis for the United States' nuclear navy. Not all nuclear technology went into developing reactors, however. While designing a scanner for reactor fuel elements in 1957, Argonne physicist William Nelson Beck put his own arm inside the scanner and obtained one of the first ultrasound images of the human body. Remote manipulators designed to handle radioactive materials laid the groundwork for more complex machines used to clean up contaminated areas, sealed laboratories or caves. In addition to nuclear work, the laboratory performed basic research in physics and chemistry. In 1955, Argonne chemists co-discovered the elements einsteinium and fermium, elements 99 and 100 in the periodic table. 1960–1995 In 1962, Argonne chemists produced the first compound of the inert noble gas xenon, opening up a new field of chemical bonding research. In 1963, they discovered the hydrated electron. Argonne was chosen as the site of the 12.5 GeV Zero Gradient Synchrotron, a proton accelerator that opened in 1963. A bubble chamber allowed scientists to track the motions of subatomic particles as they zipped through the chamber; they later observed the neutrino in a hydrogen bubble chamber for the first time. In 1964, the "Janus" reactor opened to study the effects of neutron radiation on biological life, providing research for guidelines on safe exposure levels for workers at power plants, laboratories and hospitals. Scientists at Argonne pioneered a technique to analyze the Moon's surface using alpha radiation, which launched aboard the Surveyor 5 in 1967 and later analyzed lunar samples from the Apollo 11 mission. In 1978, the Argonne Tandem Linac Accelerator System (ATLAS) opened as the world's first superconducting accelerator for projectiles heavier than the electron. Nuclear engineering experiments during this time included the Experimental Boiling Water Reactor, the forerunner of many modern nuclear plants, and Experimental Breeder Reactor II (EBR-II), which was sodium-cooled, and included a fuel recycling facility. EBR-II was later modified to test other reactor designs, including a fast-neutron reactor and, in 1982, the Integral Fast Reactor conceptβ€”a revolutionary design that reprocessed its own fuel, reduced its atomic waste and withstood safety tests of the same failures that triggered the Chernobyl and Three Mile Island disasters. In 1994, however, the U.S. Congress terminated funding for the bulk of Argonne's nuclear programs. Argonne moved to specialize in other areas, while capitalizing on its experience in physics, chemical sciences and metallurgy. In 1987, the laboratory was the first to successfully demonstrate a pioneering technique called plasma wakefield acceleration, which accelerates particles in much shorter distances than conventional accelerators. It also cultivated a strong battery research program. 
Following a major push by then-director Alan Schriesheim, the laboratory was chosen as the site of the Advanced Photon Source, a major X-ray facility which was completed in 1995 and produced the brightest X-rays in the world at the time of its construction. Since 1995 The laboratory continued to develop as a center for energy research, as well as a site for scientific facilities too large to be hosted at universities. In the early 2000s, the Argonne Leadership Computing Facility was founded and hosted multiple supercomputers, several of which ranked among the top 10 most powerful in the world at the time of their construction. The laboratory also built the Center for Nanoscale Materials for conducting materials research at the atomic level; and greatly expanded its battery research and quantum technology programs. Chicago Tribune reported in March 2019 that the laboratory was constructing the world's most powerful supercomputer. Costing $500 million, it will have the processing power of 1 quintillion FLOPS. Applications will include the analysis of stars and improvements in the power grid. Initiatives Hard X-ray Sciences: Argonne is home to one of the world's largest high-energy light sources: the Advanced Photon Source (APS). Each year, scientists make thousands of discoveries while using the APS to characterize both organic and inorganic materials and even processes, such as how vehicle fuel injectors spray gasoline in engines. Leadership Computing: Argonne maintains one of the fastest computers for open science and has developed system software for these massive machines. Argonne works to drive the evolution of leadership computing from petascale to exascale, develop new codes and computing environments, and expand computational efforts to help solve scientific challenges. For example, in October 2009, the laboratory announced that it would be embarking on a joint project to explore cloud computing for scientific purposes. In the 1970s Argonne translated the Numerische Mathematik numerical linear algebra programs from ALGOL to Fortran and this library was expanded into LINPACK and EISPACK, by Cleve Moler, et al. Materials for Energy: Argonne scientists work to predict, understand, and control where and how to place individual atoms and molecules to achieve desired material properties. Among other innovations, Argonne scientists helped develop an ice slurry to cool the organs of heart attack victims, described what makes diamonds slippery at the nanoscale level, and discovered a superinsulating material that resists the flow of electric current more completely than any other previous material. Electrical Energy Storage: Argonne develops batteries for electric transportation technology and grid storage for intermittent energy sources like wind or solar, as well as the manufacturing processes needed for these materials-intensive systems. The laboratory has been working on advanced battery materials research and development for over 50 years. In the past 10 years, the laboratory has focused on lithium-ion batteries, and in September 2009, it announced an initiative to explore and improve their capabilities. Argonne also maintains an independent battery-testing facility, which tests sample batteries from both government and private industry to see how well they perform over time and under heat and cold stresses. 
Alternative Energy and Efficiency: Argonne develops both chemical and biological fuels tailored for current engines as well as improved combustion schemes for future engine technologies. The laboratory has also recommended best practices for conserving fuel; for example, a study that recommended installing auxiliary cab heaters for trucks in lieu of idling the engine. Meanwhile, the solar energy research program focuses on solar-fuel and solar-electric devices and systems that are scalable and economically competitive with fossil energy sources. Argonne scientists also explore best practices for a smart grid, both by modeling power flow between utilities and homes and by researching the technology for interfaces. Nuclear Energy: Argonne generates advanced reactor and fuel cycle technologies that enable the safe, sustainable generation of nuclear power. Argonne scientists develop and validate computational models and reactor simulations of future generation nuclear reactors. Another project studies how to reprocess spent nuclear fuel, so that waste is reduced up to 90%. Biological and Environmental Systems: Understanding the local effect of climate change requires integration of the interactions between the environment and human activities. Argonne scientists study these relationships from molecule to organism to ecosystem. Programs include bioremediation using trees to pull pollutants out of groundwater; biochips to detect cancers earlier; a project to target cancerous cells using nanoparticles; soil metagenomics; and a user facility for the Atmospheric Radiation Measurementclimate change research project. National Security: Argonne develops security technologies that will prevent and mitigate events with potential for mass disruption or destruction. These include sensors that can detect chemical, biological, nuclear and explosive materials; portable Terahertz radiation ("T-ray") machines that detect dangerous materials more easily than X-rays at airports; and tracking and modeling the possible paths of chemicals released into a subway. User facilities Argonne builds and maintains scientific facilities that would be too expensive for a single company or university to construct and operate. These facilities are used by scientists from Argonne, private industry, academia, other national laboratories and international scientific organizations. Advanced Photon Source (APS): a national synchrotron X-ray research facility which produces the brightest X-ray beams in the Western Hemisphere. Center for Nanoscale Materials (CNM): a user facility located on the APS which provides infrastructure and instruments to study nanotechnology and nanomaterials. The CNM is one of five U.S. Department of Energy Office of Science Nanoscale Science Research Centers. Argonne Tandem Linac Accelerator System (ATLAS): ATLAS is the world's first superconducting particle accelerator for heavy ions at energies in the vicinity of the Coulomb barrier. This is the energy domain suited to study the properties of the nucleus, the core of matter and the fuel of stars. Argonne Leadership Computing Facility (ALCF): a DOE Office of Science User Facility that provides supercomputing resources to the research community to enable breakthroughs in science and engineering. Centers The Advanced Materials for Energy-Water Systems (AMEWS) Center is an Energy Frontier Research Center sponsored by the U.S. Department of Energy. 
Led by Argonne National Laboratory and including the University of Chicago and Northwestern University as partners, AMEWS works to solve the challenges that exist at the interface of water and the materials that make up the systems that handle, process and treat water. Electron Microscopy Center (EMC): one of three DOE-supported scientific user facilities for electron beam microcharacterization. The EMC conducts in situ studies of transformations and defect processes, ion beam modification and irradiation effects, superconductors, ferroelectrics and interfaces. Its intermediate voltage electron microscope, which is coupled with an accelerator, represents the only such system in the United States. Biology Center (SBC): The SBC is a user facility located off the Advanced Photon Source X-ray facility, which specializes in macromolecular crystallography. Users have access to an insertion-device, a bending-magnet, and a biochemistry laboratory. SBC beamlines are often used to map out the crystal structures of proteins; in the past, users have imaged proteins from anthrax, meningitis-causing bacteria, salmonella, and other pathogenic bacteria. The Network Enabled Optimization System (NEOS) Server is the first network-enabled problem-solving environment for a wide class of applications in business, science, and engineering. Included are state-of-the-art solvers in integer programming, nonlinear optimization, linear programming, stochastic programming, and complementarity problems. Most NEOS solvers accept input in the AMPL modeling language. The Joint Center for Energy Storage Research (JCESR) is a consortium of several national laboratories, academic institutions, and industrial partners based at Argonne National Laboratory. The mission of JCESR is to design and build transformative materials enabling next-generation batteries that satisfy all the performance metrics for a given application. The Midwest Integrated Center for Computational Materials (MICCoM) is headquartered at the laboratory. MICCoM develops and disseminates interoperable open-source software, data, and validation procedures to simulate and predict properties of functional materials for energy conversion processes. The ReCell Center is a national collaboration of industry, academia and national laboratories, led by Argonne National Laboratory, working to advance recycling technologies along the entire battery life cycle. The center aims to grow a sustainable advanced battery recycling industry by developing economic and environmentally sound recycling processes that can be adopted by industry for lithium-ion and future battery chemistries. Educational and community outreach Argonne welcomes all members of the public age 16 or older to take guided tours of the scientific and engineering facilities and grounds. For children under 16, Argonne offers hands-on learning activities suitable for K–12 field trips. The laboratory also hosts educational science and engineering outreach for schools in the surrounding area. Argonne scientists and engineers take part in the training of nearly 1,000 college graduate students and post-doctoral researchers every year as part of their research and development activities. Directors Over the course of its history, 13 individuals have served as Argonne Director: 1946–1956 Walter Zinn 1957–1961 Norman Hilberry 1961–1967 Albert V. Crewe 1967–1973 Robert B. Duffield 1973–1979 Robert G. Sachs 1979–1984 Walter E. Massey 1984–1996 Alan Schriesheim 1996–1998 Dean E. 
Eastman 1999–2000 Yoon Il Chang 2000–2005 Hermann A. Grunder 2005–2008 Robert Rosner 2009–2014 Eric Isaacs 2014–2016 Peter Littlewood 2017–present Paul Kearns In media Significant portions of the 1996 chase film Chain Reaction were shot in the Zero Gradient Synchrotron ring room and the former Continuous Wave Deuterium Demonstrator laboratory. Notable staff Alexei Alexeyevich Abrikosov Khalil Amine Paul Benioff Charles H. Bennett Sandra Biedron Margaret K. Butler David Callaway Yanglai Cho George Crabtree Seth Darling Harold B. Evans Paul Fenter Enrico Fermi Stuart Freedman Ian Foster Wallace Givens Raymond Goertz Maury C. Goodman Kawtar Hafidi Cynthia Hall Katrin Heitmann Caroline Herzenberg Paul Kearns Harold Lichtenberger Maria Goeppert Mayer William McCune JoΓ«l Mesot Carlo Montemagno JosΓ© Enrique Moyal Gilbert Jerome Perlow Lloyd Quarterman Aneesur Rahman John P. Schiffer Luise Meyer-SchΓΌtzmeister Rolf Siemssen Dorothy Martin Simon Lynda Soderholm Marius Stan Rick Stevens Valerie Taylor Marion C. Thurnauer Carlos E.M. Wagner Kameshwar C. Wali Larry Wos Cosmas Zachos Daniel Zajfman Nestor J. Zaluzec See also Advanced Research Projects Agency-Energy Automated theorem proving Canadian Penning Trap Spectrometer Center for the Advancement of Science in Space—operates the US National Laboratory on the ISS. Gammasphere Nanofluid Track Imaging Cherenkov Experiment Notes References Argonne National Laboratory, 1946–96. Jack M. Holl, Richard G. Hewlett, Ruth R. Harris. University of Illinois Press, 1997. Nuclear physics: an introduction. S.B. Patel. New Age International Ltd., 1991. Summary of Nuclear Chemistry Work at Argonne, Martin H. Studier, Argonne National Laboratory Report, Declassified June 13, 1949. External links Argonne National Laboratoryβ€”Official Argonne website Argonne National Laboratory Presentationsβ€”Finding aid for Argonne National Laboratory presentations Argonne Newsβ€”News releases, media center Argonne Softwareβ€”Open source and commercially available software in or near the "shrink-wrap" phase Photo repositoryβ€”Photography for public use Historical Argonne National Laboratory reports digitized by the TRAIL project, hosted at University of North Texas Libraries and HathiTrust United States Department of Energy national laboratories Federally Funded Research and Development Centers Buildings and structures in DuPage County, Illinois Leadership in Energy and Environmental Design certified buildings Lemont, Illinois Nuclear research institutes Supercomputer sites University of Chicago Tourist attractions in DuPage County, Illinois Argonne National Laboratory people 1946 establishments in Illinois
Argonne National Laboratory
[ "Engineering" ]
3,868
[ "Nuclear research institutes", "Leadership in Energy and Environmental Design certified buildings", "Nuclear organizations", "Building engineering" ]
160,815
https://en.wikipedia.org/wiki/CAS%20Registry%20Number
A CAS Registry Number (also referred to as CAS RN or informally CAS Number) is a unique identification number, assigned by the Chemical Abstracts Service (CAS) in the US to every chemical substance described in the open scientific literature, in order to index the substance in the CAS Registry. This registry includes all substances described since 1957, plus some substances from as far back as the early 1800s; it is a chemical database that includes organic and inorganic compounds, minerals, isotopes, alloys, mixtures, and nonstructurable materials (UVCBs, substances of unknown or variable composition, complex reaction products, or biological origin). CAS RNs are generally serial numbers (with a check digit), so they do not contain any information about the structures themselves the way SMILES and InChI strings do. The CAS Registry is an authoritative collection of disclosed chemical substance information. It identifies more than 204 million unique organic and inorganic substances and 69 million protein and DNA sequences, plus additional information about each substance. It is updated with around 15,000 new substances daily. A collection of almost 500 thousand CAS Registry Numbers is made available under a CC BY-NC license at CAS Common Chemistry. History and use Historically, chemicals have been identified by a wide variety of synonyms. One of the biggest challenges in the early development of substance indexing, a task undertaken by the Chemical Abstracts Service, was identifying whether a substance in the literature was new or had been previously discovered. Well-known chemicals may additionally be known via multiple generic, historical, commercial, and/or (black)-market names, and even systematic nomenclature based on structure alone was not universally useful. An algorithm was developed to translate the structural formula of a chemical into a computer-searchable table, which provided a basis for the service that listed each chemical with its CAS Registry Number, the CAS Chemical Registry System, which became operational in 1965. CAS Registry Numbers (CAS RN) are simple and regular, convenient for database searches. They offer a reliable, common and international link to every specific substance across the various nomenclatures and disciplines used by branches of science, industry, and regulatory bodies. Almost all molecule databases today allow searching by CAS Registry Number, and it is used as a global standard. Format A CAS Registry Number has no inherent meaning, but is assigned in sequential, increasing order when the substance is identified by CAS scientists for inclusion in the CAS Registry database. A CAS RN is separated by hyphens into three parts, the first consisting of two to seven digits, the second consisting of two digits, and the third consisting of a single digit serving as a check digit. This format gives CAS a maximum capacity of 1,000,000,000 unique numbers. The check digit is found by taking the last digit times 1, the preceding digit times 2, the preceding digit times 3 etc., adding all these up and computing the sum modulo 10. For example, the CAS number of water is 7732-18-5: the checksum 5 is calculated as (8Γ—1 + 1Γ—2 + 2Γ—3 + 3Γ—4 + 7Γ—5 + 7Γ—6) = 105; 105 mod 10 = 5. 
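The check-digit rule above is easy to mechanize. Below is a minimal Python sketch (an illustrative example only; the function names are invented here and are not part of any official CAS software) that checks the hyphenated three-part format and recomputes the checksum exactly as described, using water (7732-18-5) as the worked case.

import re

def cas_check_digit(cas_rn: str) -> int:
    # Strip hyphens, drop the stated check digit, then weight the remaining
    # digits 1, 2, 3, ... from right to left and take the sum modulo 10.
    body = cas_rn.replace("-", "")[:-1]
    weighted_sum = sum(weight * int(digit)
                       for weight, digit in enumerate(reversed(body), start=1))
    return weighted_sum % 10

def is_valid_cas_rn(cas_rn: str) -> bool:
    # Format: two to seven digits, hyphen, two digits, hyphen, one check digit.
    if not re.fullmatch(r"\d{2,7}-\d{2}-\d", cas_rn):
        return False
    return cas_check_digit(cas_rn) == int(cas_rn[-1])

# Water, 7732-18-5: 8*1 + 1*2 + 2*3 + 3*4 + 7*5 + 7*6 = 105, and 105 mod 10 = 5.
assert cas_check_digit("7732-18-5") == 5
assert is_valid_cas_rn("7732-18-5")
assert not is_valid_cas_rn("7732-18-4")  # a wrong check digit is rejected

Because the check digit is determined by the other digits, the capacity quoted above follows from the free positions alone: roughly ten million values for the first part times one hundred for the second, or about one billion distinct numbers.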
Granularity Stereoisomers and racemic mixtures are assigned discrete CAS Registry Numbers: -epinephrine has 51-43-4, -epinephrine has 150-05-0, and racemic -epinephrine has 329-65-7 Different phases do not receive different CAS RNs (liquid water and ice both have 7732-18-5), but different crystal structures do (carbon in general is 7440-44-0, graphite is 7782-42-5 and diamond is 7782-40-3) Commonly encountered mixtures of known or unknown composition may receive a CAS RN; examples are Leishman stain (12627-53-1) and mustard oil (8007-40-7). Some chemical elements are discerned by their oxidation state, e.g. the element chromium has 7440-47-3, the trivalent Cr(III) has 16065-83-1 and the hexavalent Cr(VI) species have 18540-29-9. Occasionally whole classes of molecules receive a single CAS RN: the class of enzymes known as alcohol dehydrogenases has 9031-72-5. Search engines CHEMINDEX Search via Canadian Centre for Occupational Health and Safety ChemIDplus Advanced via United States National Library of Medicine Common Chemistry via Australian Inventory of Chemical Substances European chemical Substances Information System via the website of Royal Society of Chemistry HSNO Chemical Classification Information Database via Environmental Risk Management Authority Search Tool of Australian Inventory of Chemical Substances USEPA CompTox Chemicals Dashboard See also Academic publishing Beilstein Registry Number Chemical file format Dictionary of chemical formulas EC# (EINECS and ELINCS, European Community) EC number (Enzyme Commission) International Union of Pure and Applied Chemistry List of CAS numbers by chemical compound MDL number PubChem Registration authority UN number References External links CAS registry description, by Chemical Abstracts Service To find the CAS number of a compound given its name, formula or structure, the following free resources can be used: CAS Common Chemistry NIST Chemistry WebBook NCI/CADD Chemical Identifier Resolver ChemSub Online (Multilingual chemical names) NIOSH Pocket Guide to Chemical Hazards, index of CAS numbers Chemical numbering schemes American Chemical Society Unique identifiers
CAS Registry Number
[ "Chemistry", "Mathematics" ]
1,119
[ "Mathematical objects", "American Chemical Society", "Chemical numbering schemes", "Numbers" ]
160,832
https://en.wikipedia.org/wiki/Tunnel
A tunnel is an underground or undersea passageway. It is dug through surrounding soil, earth or rock, or laid under water, and is usually completely enclosed except for the two portals common at each end, though there may be access and ventilation openings at various points along the length. A pipeline differs significantly from a tunnel, though some recent tunnels have used immersed tube construction techniques rather than traditional tunnel boring methods. A tunnel may be for foot or vehicular road traffic, for rail traffic, or for a canal. The central portions of a rapid transit network are usually in the tunnel. Some tunnels are used as sewers or aqueducts to supply water for consumption or for hydroelectric stations. Utility tunnels are used for routing steam, chilled water, electrical power or telecommunication cables, as well as connecting buildings for convenient passage of people and equipment. Secret tunnels are built for military purposes, or by civilians for smuggling of weapons, contraband, or people. Special tunnels, such as wildlife crossings, are built to allow wildlife to cross human-made barriers safely. Tunnels can be connected together in tunnel networks. A tunnel is relatively long and narrow; the length is often much greater than twice the diameter, although similar shorter excavations can be constructed, such as cross passages between tunnels. The definition of what constitutes a tunnel can vary widely from source to source. For example, in the United Kingdom, a road tunnel is defined as "a subsurface highway structure enclosed for a length of or more." In the United States, the NFPA definition of a tunnel is "An underground structure with a design length greater than and a diameter greater than ." Etymology The word "tunnel" comes from the Middle English tonnelle, meaning "a net", derived from Old French tonnel, a diminutive of tonne ("cask"). The modern meaning, referring to an underground passageway, evolved in the 16th century as a metaphor for a narrow, confined space like the inside of a cask. History In Babylon, about 2200 B.C., it is believed that the first artificial tunnel was constructed. To join the temple of Belos with the palace, this was built with the aid of the cut and cover technique. In the Mahabharata, the Pandavas built a secret tunnel within their new home, called "Lakshagriha" (House of Lac), which was constructed by Purochana under the orders of Duryodhana by the intention of burning them alive inside, allowing them to escape when the palace was set on fire; this act of foresight by the Pandavas saved their lives Some of the earliest tunnels used by humans were paleoburrows excavated by prehistoric mammals. Much of the early technology of tunnelling evolved from mining and military engineering. The etymology of the terms "mining" (for mineral extraction or for siege attacks), "military engineering", and "civil engineering" reveals these deep historic connections. Antiquity and early middle ages Predecessors of modern tunnels were adits that transported water for irrigation, drinking, or sewerage. The first qanats are known from before 2000 BC. The earliest tunnel known to have been excavated from both ends is the Siloam Tunnel, built in Jerusalem by the kings of Judah around the 8th century BC. Another tunnel excavated from both ends, maybe the second known, is the Tunnel of Eupalinos, which is a tunnel aqueduct long running through Mount Kastro in Samos, Greece. It was built in the 6th century BC to serve as an aqueduct. 
In Pakistan, the mughal era tunnel has been restored in the Lahore. In Ethiopia, the Siqurto foot tunnel, hand-hewn in the Middle Ages, crosses a mountain ridge. In the Gaza Strip, the network of tunnels was used by Jewish strategists as rock-cut shelters, in first links to Judean resistance against Roman rule in the Bar Kokhba revolt during the 2nd century AD. Geotechnical investigation and design A major tunnel project must start with a comprehensive investigation of ground conditions by collecting samples from boreholes and by other geophysical techniques. An informed choice can then be made of machinery and methods for excavation and ground support, which will reduce the risk of encountering unforeseen ground conditions. In planning the route, the horizontal and vertical alignments can be selected to make use of the best ground and water conditions. It is common practice to locate a tunnel deeper than otherwise would be required, in order to excavate through solid rock or other material that is easier to support during construction. Conventional desk and preliminary site studies may yield insufficient information to assess such factors as the blocky nature of rocks, the exact location of fault zones, or the stand-up times of softer ground. This may be a particular concern in large-diameter tunnels. To give more information, a pilot tunnel (or "drift tunnel") may be driven ahead of the main excavation. This smaller tunnel is less likely to collapse catastrophically should unexpected conditions be met, and it can be incorporated into the final tunnel or used as a backup or emergency escape passage. Alternatively, horizontal boreholes may sometimes be drilled ahead of the advancing tunnel face. Other key geotechnical factors: Stand-up time is the amount of time a newly excavated cavity can support itself without any added structures. Knowing this parameter allows the engineers to determine how far an excavation can proceed before support is needed, which in turn affects the speed, efficiency, and cost of construction. Generally, certain configurations of rock and clay will have the greatest stand-up time, while sand and fine soils will have a much lower stand-up time. Groundwater control is very important in tunnel construction. Water leaking into a tunnel or vertical shaft will greatly decrease stand-up time, causing the excavation to become unstable and risking collapse. The most common way to control groundwater is to install dewatering pipes into the ground and to simply pump the water out. A very effective but expensive technology is ground freezing, using pipes which are inserted into the ground surrounding the excavation, which are then cooled with special refrigerant fluids. This freezes the ground around each pipe until the whole space is surrounded with frozen soil, keeping water out until a permanent structure can be built. Tunnel cross-sectional shape is also very important in determining stand-up time. If a tunnel excavation is wider than it is high, it will have a harder time supporting itself, decreasing its stand-up time. A square or rectangular excavation is more difficult to make self-supporting, because of a concentration of stress at the corners. Choice of tunnels versus bridges For water crossings, a tunnel is generally more costly to construct than a bridge. However, both navigational and traffic considerations may limit the use of high bridges or drawbridges intersecting with shipping channels, necessitating a tunnel. 
Bridges usually require a larger footprint on each shore than tunnels. In areas with expensive real estate, such as Manhattan and urban Hong Kong, this is a strong factor in favor of a tunnel. Boston's Big Dig project replaced elevated roadways with a tunnel system to increase traffic capacity, hide traffic, reclaim land, redecorate, and reunite the city with the waterfront. The 1934 Queensway Tunnel under the River Mersey at Liverpool was chosen over a massively high bridge partly for defence reasons; it was feared that aircraft could destroy a bridge in times of war, not merely impairing road traffic but blocking the river to navigation. Maintenance costs of a massive bridge to allow the world's largest ships to navigate under were considered higher than for a tunnel. Similar conclusions were reached for the 1971 Kingsway Tunnel under the Mersey. In Hampton Roads, Virginia, tunnels were chosen over bridges for strategic considerations; in the event of damage, bridges might prevent US Navy vessels from leaving Naval Station Norfolk. Water-crossing tunnels built instead of bridges include the Seikan Tunnel in Japan; the Holland Tunnel and Lincoln Tunnel between New Jersey and Manhattan in New York City; the Queens-Midtown Tunnel between Manhattan and the borough of Queens on Long Island; the Detroit-Windsor Tunnel between Michigan and Ontario; and the Elizabeth River tunnels between Norfolk and Portsmouth, Virginia; the 1934 River Mersey road Queensway Tunnel; the Western Scheldt Tunnel, Zeeland, Netherlands; and the North Shore Connector tunnel in Pittsburgh, Pennsylvania. The Sydney Harbour Tunnel was constructed to provide a second harbour crossing and to alleviate traffic congestion on the Sydney Harbour Bridge, without spoiling the iconic view. Other reasons for choosing a tunnel instead of a bridge include avoiding difficulties with tides, weather, and shipping during construction (as in the Channel Tunnel), aesthetic reasons (preserving the above-ground view, landscape, and scenery), and also for weight capacity reasons (it may be more feasible to build a tunnel than a sufficiently strong bridge). Some water crossings are a mixture of bridges and tunnels, such as the Denmark to Sweden link and the Chesapeake Bay Bridge-Tunnel in Virginia. There are particular hazards with tunnels, especially from vehicle fires when combustion gases can asphyxiate users, as happened at the Gotthard Road Tunnel in Switzerland in 2001. One of the worst railway disasters ever, the Balvano train disaster, was caused by a train stalling in the Armi tunnel in Italy in 1944, killing 426 passengers. Designers try to reduce these risks by installing emergency ventilation systems or isolated emergency escape tunnels parallel to the main passage. Project planning and cost estimates Government funds are often required for the creation of tunnels. When a tunnel is being planned or constructed, economics and politics play a large factor in the decision making process. Civil engineers usually use project management techniques for developing a major structure. Understanding the amount of time the project requires, and the amount of labor and materials needed is a crucial part of project planning. The project duration must be identified using a work breakdown structure and critical path method. Also, the land needed for excavation and construction staging, and the proper machinery must be selected. 
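As a concrete illustration of the critical path method mentioned above, the short Python sketch below performs a forward pass over a small task network to find the earliest possible project finish, which equals the length of the critical path. The task names and durations are hypothetical and are not taken from any real tunnel project.

def earliest_finish_times(tasks):
    # tasks maps a task name to (duration, list of predecessor task names).
    # Assumes the dependency graph is acyclic.
    finish = {}

    def finish_time(name):
        if name not in finish:
            duration, predecessors = tasks[name]
            start = max((finish_time(p) for p in predecessors), default=0)
            finish[name] = start + duration
        return finish[name]

    for name in tasks:
        finish_time(name)
    return finish

# Hypothetical, highly simplified work breakdown (durations in months).
tasks = {
    "site investigation": (6, []),
    "design": (9, ["site investigation"]),
    "access shafts": (8, ["design"]),
    "TBM procurement": (14, ["design"]),
    "boring": (24, ["access shafts", "TBM procurement"]),
    "fit-out": (10, ["boring"]),
}
finish_by_task = earliest_finish_times(tasks)
print(max(finish_by_task.values()))  # 63 months: site investigation, design, TBM procurement, boring, fit-out

A full schedule would also compute latest start times and float for each task, but even this forward pass shows which activities govern the overall duration.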
Large infrastructure projects require millions or even billions of dollars, involving long-term financing, usually through issuance of bonds. The costs and benefits for an infrastructure such as a tunnel must be identified. Political disputes can occur, as in 2005 when the US House of Representatives approved a $100 million federal grant to build a tunnel under New York Harbor. However, the Port Authority of New York and New Jersey was not aware of this bill and had not asked for a grant for such a project. Increased taxes to finance a large project may cause opposition. Construction Tunnels are dug in types of materials varying from soft clay to hard rock. The method of tunnel construction depends on such factors as the ground conditions, the groundwater conditions, the length and diameter of the tunnel drive, the depth of the tunnel, the logistics of supporting the tunnel excavation, the final use and the shape of the tunnel and appropriate risk management. There are three basic types of tunnel construction in common use. Cut-and-cover tunnels are constructed in a shallow trench and then covered over. Bored tunnels are constructed in situ, without removing the ground above. Finally, a tube can be sunk into a body of water, which is called an immersed tunnel. Cut-and-cover Cut-and-cover is a simple method of construction for shallow tunnels where a trench is excavated and roofed over with an overhead support system strong enough to carry the load of what is to be built above the tunnel. There are two basic forms of cut-and-cover tunnelling: Bottom-up method: A trench is excavated, with ground support as necessary, and the tunnel is constructed in it. The tunnel may be of in situ concrete, precast concrete, precast arches, or corrugated steel arches; in early days brickwork was used. The trench is then carefully back-filled and the surface is reinstated. Top-down method: Side support walls and capping beams are constructed from ground level by such methods as slurry walling or contiguous bored piling. Only a shallow excavation is needed to construct the tunnel roof using precast beams or in situ concrete sitting on the walls. The surface is then reinstated except for access openings. This allows early reinstatement of roadways, services, and other surface features. Excavation then takes place under the permanent tunnel roof, and the base slab is constructed. Shallow tunnels are often of the cut-and-cover type (if under water, of the immersed-tube type), while deep tunnels are excavated, often using a tunnelling shield. For intermediate levels, both methods are possible. Large cut-and-cover boxes are often used for underground metro stations, such as Canary Wharf tube station in London. This construction form generally has two levels, which allows economical arrangements for ticket hall, station platforms, passenger access and emergency egress, ventilation and smoke control, staff rooms, and equipment rooms. The interior of Canary Wharf station has been likened to an underground cathedral, owing to the sheer size of the excavation. This contrasts with many traditional stations on London Underground, where bored tunnels were used for stations and passenger access. Nevertheless, the original parts of the London Underground network, the Metropolitan and District Railways, were constructed using cut-and-cover. These lines pre-dated electric traction and the proximity to the surface was useful to ventilate the inevitable smoke and steam. 
A major disadvantage of cut-and-cover is the widespread disruption generated at the surface level during construction. This, and the availability of electric traction, brought about London Underground's switch to bored tunnels at a deeper level towards the end of the 19th century. Prior to the replacement of manual excavation by the use of boring machines, Victorian tunnel excavators developed a specialized method called clay-kicking for digging tunnels in clay-based soils. The clay-kicker lies on a plank at a 45-degree angle away from the working face and rather than a mattock with his hands, inserts with his feet a tool with a cup-like rounded end, then turns the tool with his hands to extract a section of soil, which is then placed on the waste extract. Clay-kicking is a specialized method developed in the United Kingdom of digging tunnels in strong clay-based soil structures. This method of cut and cover construction required relatively little disturbance of property during the renewal of the United Kingdom's then ancient sewerage systems. It was also used during the First World War by Royal Engineer tunnelling companies placing mines beneath German lines, because it was almost silent and so not susceptible to listening methods of detection. Boring machines Tunnel boring machines (TBMs) and associated back-up systems are used to highly automate the entire tunnelling process, reducing tunnelling costs. In certain predominantly urban applications, tunnel boring is viewed as a quick and cost-effective alternative to laying surface rails and roads. Expensive compulsory purchase of buildings and land, with potentially lengthy planning inquiries, is eliminated. Disadvantages of TBMs arise from their usually large size – the difficulty of transporting the large TBM to the site of tunnel construction, or (alternatively) the high cost of assembling the TBM on-site, often within the confines of the tunnel being constructed. There are a variety of TBM designs that can operate in a variety of conditions, from hard rock to soft water-bearing ground. Some TBMs, the bentonite slurry and earth-pressure balance types, have pressurized compartments at the front end, allowing them to be used in difficult conditions below the water table. This pressurizes the ground ahead of the TBM cutter head to balance the water pressure. The operators work in normal air pressure behind the pressurized compartment, but may occasionally have to enter that compartment to renew or repair the cutters. This requires special precautions, such as local ground treatment or halting the TBM at a position free from water. Despite these difficulties, TBMs are now preferred over the older method of tunnelling in compressed air, with an airlock/decompression chamber some way back from the TBM, which required operators to work in high pressure and go through decompression procedures at the end of their shifts, much like deep-sea divers. In February 2010, Aker Wirth delivered a TBM to Switzerland, for the expansion of the Linth–Limmern Power Stations located south of Linthal in the canton of Glarus. The borehole has a diameter of . The four TBMs used for excavating the Gotthard Base Tunnel, in Switzerland, had a diameter of about . A larger TBM was built to bore the Green Heart Tunnel (Dutch: Tunnel Groene Hart) as part of the HSL-Zuid in the Netherlands, with a diameter of . This in turn was superseded by the Madrid M30 ringroad, Spain, and the Chong Ming tunnels in Shanghai, China. 
All of these machines were built at least partly by Herrenknecht. , the world's largest TBM was "Big Bertha", a diameter machine built by Hitachi Zosen Corporation, which dug the Alaskan Way Viaduct replacement tunnel in Seattle, Washington (US). Shafts A temporary access shaft is sometimes necessary during the excavation of a tunnel. They are usually circular and go straight down until they reach the level at which the tunnel is going to be built. A shaft normally has concrete walls and is usually built to be permanent. Once the access shafts are complete, TBMs are lowered to the bottom and excavation can start. Shafts are the main entrance in and out of the tunnel until the project is completed. If a tunnel is going to be long, multiple shafts at various locations may be bored so that entrance to the tunnel is closer to the unexcavated area. Once construction is complete, construction access shafts are often used as ventilation shafts, and may also be used as emergency exits. Sprayed concrete techniques The new Austrian tunnelling method (NATM)β€”also referred to as the Sequential Excavation Method (SEM)β€”was developed in the 1960s. The main idea of this method is to use the geological stress of the surrounding rock mass to stabilize the tunnel, by allowing a measured relaxation and stress reassignment into the surrounding rock to prevent full loads becoming imposed on the supports. Based on geotechnical measurements, an optimal cross section is computed. The excavation is protected by a layer of sprayed concrete, commonly referred to as shotcrete. Other support measures can include steel arches, rock bolts, and mesh. Technological developments in sprayed concrete technology have resulted in steel and polypropylene fibers being added to the concrete mix to improve lining strength. This creates a natural load-bearing ring, which minimizes the rock's deformation. By special monitoring the NATM method is flexible, even at surprising changes of the geomechanical rock consistency during the tunneling work. The measured rock properties lead to appropriate tools for tunnel strengthening. Pipe jacking In pipe jacking, hydraulic jacks are used to push specially made pipes through the ground behind a TBM or shield. This method is commonly used to create tunnels under existing structures, such as roads or railways. Tunnels constructed by pipe jacking are normally small diameter bores with a maximum size of around . Box jacking Box jacking is similar to pipe jacking, but instead of jacking tubes, a box-shaped tunnel is used. Jacked boxes can be a much larger span than a pipe jack, with the span of some box jacks in excess of . A cutting head is normally used at the front of the box being jacked, and spoil removal is normally by excavator from within the box. Recent developments of the Jacked Arch and Jacked deck have enabled longer and larger structures to be installed to close accuracy. Underwater tunnels There are also several approaches to underwater tunnels, the two most common being bored tunnels or immersed tubes, examples are BjΓΈrvika Tunnel and Marmaray. Submerged floating tunnels are a novel approach under consideration; however, no such tunnels have been constructed to date. Temporary way During construction of a tunnel it is often convenient to install a temporary railway, particularly to remove excavated spoil, often narrow gauge so that it can be double track to allow the operation of empty and loaded trains at the same time. 
The temporary way is replaced by the permanent way at completion, thus explaining the term "Perway". Enlargement The vehicles or traffic using a tunnel can outgrow it, requiring replacement or enlargement: The original single line Gib Tunnel near Mittagong was replaced with a double-track tunnel, with the original tunnel used for growing mushrooms. The 1832 double-track -long tunnel from Edge Hill to Lime Street in Liverpool was near totally removed, apart from a section at Edge Hill and a section nearer to Lime Street, as four tracks were required. The tunnel was dug out into a very deep four-track cutting, with short tunnels in places along the cutting. Train services were not interrupted as the work progressed. There are other occurrences of tunnels being replaced by open cuts, for example, the Auburn Tunnel. The Farnworth Tunnel in England was enlarged using a tunnel boring machine (TBM) in 2015. The Rhyndaston Tunnel was enlarged using a borrowed TBM so as to be able to take ISO containers. Tunnels can also be enlarged by lowering the floor. Open building pit An open building pit consists of a horizontal and a vertical boundary that keeps groundwater and soil out of the pit. There are several potential alternatives and combinations for (horizontal and vertical) building pit boundaries. The most important difference with cut-and-cover is that the open building pit is muted after tunnel construction; no roof is placed. Other construction methods Drilling and blasting Hydraulic splitter Slurry-shield machine Wall-cover construction method. Variant tunnel types Double-deck and multipurpose tunnels Some tunnels are double-deck, for example, the two major segments of the San Francisco–Oakland Bay Bridge (completed in 1936) are linked by a double-deck tunnel section through Yerba Buena Island, the largest-diameter bored tunnel in the world. At construction this was a combination bidirectional rail and truck pathway on the lower deck with automobiles above, now converted to one-way road vehicle traffic on each deck. In Turkey, the Eurasia Tunnel under the Bosphorus, opened in 2016, has at its core a two-deck road tunnel with two lanes on each deck. Additionally, in 2015 the Turkish government announced that it will build three-level tunnel, also under the Bosporus. The tunnel is intended to carry both the Istanbul metro and a two-level highway, over a length of . The French in west Paris consists of two bored tunnel tubes, the eastern one of which has two levels for light motorized vehicles, over a length of . Although each level offers a physical height of , only traffic up to tall is allowed in this tunnel tube, and motorcyclists are directed to the other tube. Each level was built with a three-lane roadway, but only two lanes per level are used – the third serves as a hard shoulder within the tunnel. The A86 Duplex is Europe's longest double-deck tunnel. In Shanghai, China, a two-tube double-deck tunnel was built starting in 2002. In each tube of the both decks are for motor vehicles. In each direction, only cars and taxis travel on the high two-lane upper deck, and heavier vehicles, like trucks and buses, as well as cars, may use the high single-lane lower level. In the Netherlands, a two-storey, eight-lane, cut-and-cover road tunnel under the city of Maastricht was opened in 2016. Each level accommodates a full height, two by two-lane highway. 
The two lower tubes of the tunnel carry the A2 motorway, which originates in Amsterdam, through the city; and the two upper tubes take the N2 regional highway for local traffic. The Alaskan Way Viaduct replacement tunnel, is a $3.3Β billion , double-decker bored highway tunnel under Downtown Seattle. Construction began in July 2013 using "Bertha", at the time the world's largest earth pressure balance tunnel boring machine, with a cutterhead diameter. After several delays, tunnel boring was completed in April 2017, and the tunnel opened to traffic on 4 February 2019. New York City's 63rd Street Tunnel under the East River, between the boroughs of Manhattan and Queens, was intended to carry subway trains on the upper level and Long Island Rail Road commuter trains on the lower level. Construction started in 1969, and the two sides of the tunnel were bored through in 1972. The upper level, used by the IND 63rd Street Line () of the New York City Subway, was not opened for passenger service until 1989. The lower level, intended for commuter rail, saw passenger service after completion of the East Side Access project, in late 2022. In the UK, the 1934 Queensway Tunnel under the River Mersey between Liverpool and Birkenhead was originally to have road vehicles running on the upper deck and trams on the lower. During construction the tram usage was cancelled. The lower section is only used for cables, pipes and emergency accident refuge enclosures. Hong Kong's Lion Rock Tunnel, built in the mid 1960s, connecting New Kowloon and Sha Tin, carries a motorway but also serves as an aqueduct, featuring a gallery containing five water mains lines with diameters between below the road section of the tunnel. Wuhan's Yangtze River Highway and Railway Tunnel is a two-tube double-deck tunnel under the Yangtze River completed in 2018. Each tube carries three lanes of local traffic on the top deck with one track Wuhan Metro Line 7 on the lower deck. Mount Baker Tunnel has three levels. The bottom level is to be used by Sound Transit light rail. The middle level is used by car traffic, and the top layer is for bicycle and pedestrian access. Some tunnels have more than one purpose. The SMART Tunnel in Malaysia is the first multipurpose "Stormwater Management And Road Tunnel" in the world, created to convey both traffic and occasional flood waters in Kuala Lumpur. When necessary, floodwater is first diverted into a separate bypass tunnel located underneath the double-deck roadway tunnel. In this scenario, traffic continues normally. Only during heavy, prolonged rains when the threat of extreme flooding is high, the upper tunnel tube is closed off to vehicles and automated flood control gates are opened so that water can be diverted through both tunnels. Covered passageways Over-bridges can sometimes be built by covering a road or river or railway with brick or steel arches, and then levelling the surface with earth. In railway parlance, a surface-level track which has been built or covered over is normally called a "covered way". Snow sheds are a kind of artificial tunnel built to protect a railway from avalanches of snow. Similarly the Stanwell Park, New South Wales "steel tunnel", on the Illawarra railway line, protects the line from rockfalls. Underpass An underpass is a road or railway or other passageway passing under another road or railway, under an overpass. This is not strictly a tunnel. 
Utility Tunnel A Utility Tunnel is built for the purpose of carrying one or more utilities in the same space, for this reason they are also referred to as Multi-Utility Tunnels or MUTs. Through co-location of different utilities in one tunnel, organizations are able to reduce the financial and environmental costs of building and maintaining utilities. These tunnels can be used for many types of utilities, routing steam, chilled water, electrical power or telecommunication cables, as well as connecting buildings for convenient passage of people and equipment. Safety and security Owing to the enclosed space of a tunnel, fires can have very serious effects on users. The main dangers are gas and smoke production, with even low concentrations of carbon monoxide being highly toxic. Fires killed 11 people in the Gotthard tunnel fire of 2001 for example, all of the victims succumbing to smoke and gas inhalation. Over 400 passengers died in the Balvano train disaster in Italy in 1944, when the locomotive halted in a long tunnel. Carbon monoxide poisoning was the main cause of death. In the Caldecott Tunnel fire of 1982, the majority of fatalities were caused by toxic smoke, rather than by the initial crash. Likewise 84 people were killed in the Paris Métro train fire of 1903. Motor vehicle tunnels usually require ventilation shafts and powered fans to remove toxic exhaust gases during routine operation. Rail tunnels usually require fewer air changes per hour, but still may require forced-air ventilation. Both types of tunnels often have provisions to increase ventilation under emergency conditions, such as a fire. Although there is a risk of increasing the rate of combustion through increased airflow, the primary focus is on providing breathable air to persons trapped in the tunnel, as well as firefighters. The aerodynamic pressure wave produced by high speed trains entering a tunnel reflects at its open ends and changes sign (a compression wavefront changes to a rarefaction wavefront and vice versa). When two wavefronts of the same sign meet the train, significant and rapid air pressure changes may cause ear discomfort for passengers and crew. When a high-speed train exits a tunnel, a loud "tunnel boom" may occur, which can disturb residents near the mouth of the tunnel, and it is exacerbated in mountain valleys where the sound can echo. When there is a parallel, separate tunnel available, airtight but unlocked emergency doors are usually provided which allow trapped personnel to escape from a smoke-filled tunnel to the parallel tube. Larger, heavily used tunnels, such as the Big Dig tunnel in Boston, Massachusetts, may have a dedicated 24-hour staffed operations center which monitors and reports on traffic conditions, and responds to emergencies. Video surveillance equipment is often used, and real-time pictures of traffic conditions for some highways may be viewable by the general public via the Internet. A database of seismic damage to underground structures using 217 case histories shows the following general observations can be made regarding the seismic performance of underground structures: Underground structures suffer appreciably less damage than surface structures. Reported damage decreases with increasing overburden depth. Deep tunnels seem to be safer and less vulnerable to earthquake shaking than are shallow tunnels. Underground facilities constructed in soils can be expected to suffer more damage compared to openings constructed in competent rock. 
Lined and grouted tunnels are safer than unlined tunnels in rock. Shaking damage can be reduced by stabilizing the ground around the tunnel and by improving the contact between the lining and the surrounding ground through grouting. Tunnels are more stable under a symmetric load, which improves ground-lining interaction. Improving the tunnel lining by placing thicker and stiffer sections without stabilizing surrounding poor ground may result in excess seismic forces in the lining. Backfilling with non-cyclically mobile material and rock-stabilizing measures may improve the safety and stability of shallow tunnels. Damage may be related to peak ground acceleration and velocity based on the magnitude and epicentral distance of the affected earthquake. Duration of strong-motion shaking during earthquakes is of utmost importance because it may cause fatigue failure and therefore, large deformations. High frequency motions may explain the local spalling of rock or concrete along planes of weakness. These frequencies, which rapidly attenuate with distance, may be expected mainly at small distances from the causative fault. Ground motion may be amplified upon incidence with a tunnel if wavelengths are between one and four times the tunnel diameter. Damage at and near tunnel portals may be significant due to slope instability. Earthquakes are one of nature's most formidable threats. A magnitude 6.7 earthquake shook the San Fernando valley in Los Angeles in 1994. The earthquake caused extensive damage to various structures, including buildings, freeway overpasses and road systems throughout the area. The National Center for Environmental Information estimates total damages to be 40 billion dollars. According to an article issued by Steve Hymon of TheSource – Transportation News and Views, there was no serious damage sustained by the LA subway system. Metro, the owner of the LA subway system, issued a statement through their engineering staff about the design and consideration that goes into a tunnel system. Engineers and architects perform extensive analysis as to how hard they expect earthquakes to hit that area. All of this goes into the overall design and flexibility of the tunnel. This same trend of limited subway damage following an earthquake can be seen in many other places. In 1985 a magnitude 8.1 earthquake shook Mexico City; there was no damage to the subway system, and in fact the subway systems served as a lifeline for emergency personnel and evacuations. A magnitude 7.2 earthquake ripped through Kobe, Japan in 1995, leaving no damage to the tunnels themselves. Entry portals sustained minor damages; however, these damages were attributed to inadequate earthquake design that originated from the original construction date of 1965. In 2010 a magnitude 8.8 earthquake, massive by any scale, afflicted Chile. Entrance stations to subway systems suffered minor damages, and the subway system was down for the rest of the day. By the next afternoon, the subway system was operational again. Examples In history The history of ancient tunnels and tunneling in the world is reviewed in various sources which include many examples of these structures that were built for different purposes. Some well known ancient and modern tunnels are briefly introduced below: The qanat or kareez of Persia are water management systems used to provide a reliable supply of water to human settlements or for irrigation in hot, arid and semi-arid climates. 
The deepest known qanat is in the Iranian city of Gonabad, which after 2700 years, still provides drinking and agricultural water to nearly 40,000 people. Its main well depth is more than , and its length is . The Siloam Tunnel was built before 701Β BC for a reliable supply of water, to withstand siege attacks. The Eupalinian aqueduct on the island of Samos (North Aegean, Greece) was built in 520Β BC by the ancient Greek engineer Eupalinos of Megara under a contract with the local community. Eupalinos organised the work so that the tunnel was begun from both sides of Mount Kastro. The two teams advanced simultaneously and met in the middle with excellent accuracy, something that was extremely difficult in that time. The aqueduct was of utmost defensive importance, since it ran underground, and it was not easily found by an enemy who could otherwise cut off the water supply to Pythagoreion, the ancient capital of Samos. The tunnel's existence was recorded by Herodotus (as was the mole and harbour, and the third wonder of the island, the great temple to Hera, thought by many to be the largest in the Greek world). The precise location of the tunnel was only re-established in the 19th century by German archaeologists. The tunnel proper is and visitors can still enter it. One of the first known drainage and sewage networks in form of tunnels was constructed at Persepolis in Iran at the same time as the construction of its foundation in 518Β BC. In most places the network was dug in the sound rock of the mountain and then covered by large pieces of rock and stone followed by earth and piles of rubble to level the ground. During investigations and surveys, long sections of similar rock tunnels extending beneath the palace area were traced by Herzfeld and later by Schmidt and their archaeological teams. The Via Flaminia, an important Roman road, penetrated the Furlo pass in the Apennines through a tunnel which emperor Vespasian had ordered built in 76–77Β AD. A modern road, the SS 3 Flaminia, still uses this tunnel, which had a precursor dating back to the 3rd century BC, remnants of this earlier tunnel (one of the first road tunnels) are also still visible. The world's oldest tunnel traversing under a water body is claimed to be the Terelek kaya tΓΌneli under KΔ±zΔ±l River, a little south of the towns of Boyabat and Durağan in Turkey, just downstream from where Kizil River joins its tributary GΓΆkΔ±rmak. The tunnel is presently under a narrow part of a lake formed by a dam some kilometres further downstream. Estimated to have been built more than 2000 years ago, possibly by the same civilization that also built the royal tombs in a rock face nearby, it is assumed to have had a defensive purpose. Sapperton Canal Tunnel on the Thames and Severn Canal in England, dug through hills, which opened in 1789, was long and allowed boat transport of coal and other goods. Above it the Sapperton Long Tunnel was constructed which carries the "Golden Valley" railway line between Swindon and Gloucester. The 1791 Dudley canal tunnel is on the Dudley Canal, in Dudley, England. The tunnel is long. Closed in 1962 the tunnel was reopened in 1973. The series of tunnels was extended in 1984 and 1989. Fritchley Tunnel, constructed in 1793 in Derbyshire by the Butterley Company to transport limestone to its ironworks factory. The Butterley company engineered and built its own railway. A victim of the depression the company closed after 219 years in 2009. 
The tunnel is the world's oldest railway tunnel traversed by rail wagons. Gravity and horse haulage were utilised. The railway was converted to steam locomotion in 1813 using a Steam Horse locomotive engineered and built by the Butterley company, however it reverted to horses. Steam trains used the tunnel continuously from the 1840s when the railway was converted to a narrow gauge. The line closed in 1933. In the Second World War, the tunnel was used as an air raid shelter. Sealed up in 1977 it was rediscovered in 2013 and inspected. The tunnel was resealed to preserve the construction as it was designated an ancient monument. The 1794 Butterley canal tunnel is in length on the Cromford Canal in Ripley, Derbyshire, England. The tunnel was built simultaneously with the 1793 Fritchley railway tunnel. The tunnel partially collapsed in 1900 splitting the Cromford Canal, and has not been used since. The Friends of Cromford Canal, a group of volunteers, are working at fully restoring the Cromford Canal and the Butterley Tunnel. The 1796 Stoddart Tunnel in Chapel-en-le-Frith in Derbyshire is reputed to be the oldest rail tunnel in the world. The rail wagons were originally horse-drawn. Derby Tunnels in Salem, Massachusetts, were built in 1801 to smuggle imports affected by President Thomas Jefferson's new customs duties. Jefferson had ordered local militias to help the Custom House in each port collect these dues, but the smugglers, led by Elias Derby, hired the Salem militia to dig the tunnels and hide the spoil. A tunnel was created for the first true steam locomotive, from Penydarren to Abercynon. The Penydarren locomotive was built by Richard Trevithick. The locomotive made the historic journey from Penydarren to Abercynon in 1804. Part of this tunnel can still be seen at Pentrebach, Merthyr Tydfil, Wales. This is arguably the oldest railway tunnel in the world, dedicated only to self-propelled steam engines on rails. The Montgomery Bell Tunnel in Tennessee, a water diversion tunnel to power a water wheel, was built by slave labour in 1819, being the first full-scale tunnel in North America. Bourne's Tunnel, Rainhill, near Liverpool, England. It is long. Built in the late 1820s, the exact date is unknown, however probably built in 1828 or 1829. This is the first tunnel in the world constructed under a railway line. The construction of the Liverpool to Manchester Railway ran over a horse-drawn tramway that ran from the Sutton collieries to the Liverpool-Warrington turnpike road. A tunnel was bored under the railway for the tramway. As the railway was being constructed the tunnel was made operational, opening prior to the Liverpool tunnels on the Liverpool to Manchester line. The tunnel was made redundant in 1844 when the tramway was dismantled. Crown Street station, Liverpool, England, 1829. Built by George Stephenson, a single track railway tunnel was bored from Edge Hill to Crown Street to serve the world's first intercity passenger railway terminus station. The station was abandoned in 1836 being too far from Liverpool city centre, with the area converted for freight use. Closed down in 1972, the tunnel is disused. However it is the oldest passenger rail tunnel running under streets in the world. The 1829 Wapping Tunnel in Liverpool, England, at long on a twin track railway, was the first rail tunnel bored under a metropolis. The tunnel's path is from Edge Hill in the east of the city to Wapping Dock in the south end Liverpool docks. 
The tunnel was used only for freight terminating at the Park Lane goods terminal. Currently disused since 1972, the tunnel was to be a part of the Merseyrail metro network, with work started and abandoned because of costs. The tunnel is in excellent condition and is still being considered for reuse by Merseyrail, maybe with an underground station cut into the tunnel for Liverpool university. The river portal is opposite the new King's Dock Liverpool Arena being an ideal location for a serving station. If reused the tunnel will be the oldest used underground rail tunnel in the world and oldest section of any underground metro system. 1832, Lime Street railway station tunnel, Liverpool. A two track rail tunnel, long was bored under the metropolis from Edge Hill in the east of the city to Lime Street in Liverpool's city centre. The tunnel was in use from 1832 being used to transport building materials to the new Lime St station while under construction. The station and tunnel was opened to passengers in 1836. In the 1880s the tunnel was converted to a deep cutting, open to the atmosphere, being four tracks wide. This is the only occurrence of a major tunnel being removed. Two short sections of the original tunnel still exist at Edge Hill station and further towards Lime Street, giving the two tunnels the distinction of being the oldest rail tunnels in the world still in use, and the oldest in use under streets. Over time a section of the deep cutting has been converted back into tunnel due to sections having buildings built over. Box Tunnel in England, which opened in 1841, was the longest railway tunnel in the world at the time of construction. It was dug by hand, and has a length of . The 1842 Prince of Wales Tunnel, in Shildon near Darlington, England, is the oldest sizeable tunnel in the world still in use under a settlement. The Victoria Tunnel Newcastle opened in 1842, is a subterranean wagonway with a maximum depth of that drops from entrance to exit. The tunnel runs under Newcastle upon Tyne, England, and originally exited at the River Tyne. It remains largely intact. Originally designed to carry coal from Spital Tongues to the river, in WW2 part of the tunnel was used as a shelter. Under the management of a charitable foundation called the Ouseburn Trust it is currently used for heritage tours. The Thames Tunnel, built by Marc Isambard Brunel and his son Isambard Kingdom Brunel opened in 1843, was the first tunnel (after Terelek) traversing under a water body, and the first to be built using a tunnelling shield. Originally used as a foot-tunnel, the tunnel was converted to a railway tunnel in 1869 and was a part of the East London Line of the London Underground until 2007. It was the oldest section of the network, although not the oldest purpose built rail section. From 2010 the tunnel became a part of the London Overground network. The Victoria Tunnel/Waterloo Tunnel in Liverpool, England, was bored under a metropolis opening in 1848. The tunnel was initially used only for rail freight serving the Waterloo Freight terminal, and later freight and passengers serving the Liverpool ship liner terminal. The tunnel's path is from Edge Hill in the east of the city to the north end Liverpool docks at Waterloo Dock. The tunnel is split into two tunnels with a short open air cutting linking the two. The cutting is where the cable hauled trains from Edge Hill were hitched and unhitched. The two tunnels are effectively one on the same centre line and are regarded as one. 
However, as initially the long Victoria section was originally cable hauled and the shorter Waterloo section was locomotive hauled, two separate names were given, the short section was named the Waterloo Tunnel. In 1895 the two tunnels were converted to locomotive haulage. Used until 1972, the tunnel is still in excellent condition. A short section of the Victoria tunnel at Edge Hill is still used for shunting trains. The tunnel is being considered for reuse by the Merseyrail network. Stations cut into the tunnel are being considered and also reuse by a monorail system from the proposed Liverpool Waters redevelopment of Liverpool's Central Docks has been proposed. The summit tunnel of the Semmering railway, the first Alpine tunnel, was opened in 1848 and was long. It connected rail traffic between Vienna, the capital of Austro-Hungarian Empire, and Trieste, its port. The Giovi Rail Tunnel through the Appennini Mounts opened in 1854, linking the capital city of the Kingdom of Sardinia, Turin, to its port, Genoa. The tunnel was long. The oldest underground sections of the London Underground were built using the cut-and-cover method in the 1860s, and opened in January 1863. What are now the Metropolitan, Hammersmith & City and Circle lines were the first to prove the success of a metro or subway system. On 18 June 1868, the Central Pacific Railroad's Summit Tunnel (Tunnel #6) at Donner Pass in the California Sierra Nevada mountains was opened, permitting the establishment of the commercial mass transportation of passengers and freight over the Sierras for the first time. It remained in daily use until 1993, when the Southern Pacific Railroad closed it and transferred all rail traffic through the long Tunnel #41 (a.k.a. "The Big Hole") built a mile to the south in 1925. In 1870, after fourteen years of works, the FrΓ©jus Rail Tunnel was completed between France and Italy, being the second-oldest Alpine tunnel, long. At that time it was the longest in the world. The third Alpine tunnel, the Gotthard Rail Tunnel, between northern and southern Switzerland, opened in 1882 and was the longest rail tunnel in the world, measuring . The 1882 Col de Tende Road Tunnel, at long, was one of the first long road tunnels under a pass, running between France and Italy. The Mersey Railway tunnel opened in 1886, running from Liverpool to Birkenhead under the River Mersey. The Mersey Railway was the world's first deep-level underground railway. By 1892 the extensions on land from Birkenhead Park station to Liverpool Central Low level station gave a tunnel in length. The under river section is in length, and was the longest underwater tunnel in world in January 1886. The rail Severn Tunnel was opened in late 1886, at long, although only of the tunnel is actually under the River Severn. The tunnel replaced the Mersey Railway tunnel's longest under water record, which was held for less than a year. James Greathead, in constructing the City & South London Railway tunnel beneath the Thames, opened in 1890, brought together three key elements of tunnel construction under water: shield method of excavation; permanent cast iron tunnel lining; construction in a compressed air environment to inhibit water flowing through soft ground material into the tunnel heading. Built in sections between 1890 and 1939, the section of London Underground's Northern line from Morden to East Finchley via Bank was the longest railway tunnel in the world at in length. St. 
Clair Tunnel, also opened later in 1890, linked the elements of the Greathead tunnels on a larger scale. In 1906 the fourth Alpine tunnel opened, the Simplon Tunnel, between Switzerland and Italy. It is long, and was the longest tunnel in the world until 1982. It was also the deepest tunnel in the world, with a maximum rock overlay of approximately . The 1927 Holland Tunnel was the first underwater tunnel designed for automobiles. The construction required a novel ventilation system. In 1945 the Delaware Aqueduct tunnel was completed, supplying water to New York City. At it is the longest tunnel in the world. In 1988 the long Seikan Tunnel in Japan was completed under the Tsugaru Strait, linking the islands of Honshu and Hokkaido. It was the longest railway tunnel in the world at that time. Ryfast is the longest undersea road tunnel. It is in length. The tunnel opened for use in 2020. Longest The Thirlmere Aqueduct in North West England, United Kingdom is sometimes considered the longest tunnel, of any type, in the world at , though the aqueduct's tunnel section is not continuous. The Dahuofang Water Tunnel in China, opened in 2009, is the third longest water tunnel in the world at length. The Gotthard Base Tunnel in Switzerland, opened in 2016, is the longest and deepest railway tunnel in the world at length and maximum depth below the Gotthard Massif. It provides a flat transit route between the North and South of Europe under the Swiss Alps, at a maximum elevation of . The Seikan Tunnel in Japan connects the main island of Honshu with the northern island of Hokkaido by rail. It is long, of which are crossing the Tsugaru Strait undersea. The Channel Tunnel crosses the English Channel between France and the United Kingdom. It has a total length of , of which are the world's longest undersea tunnel section. The LΓΆtschberg Base Tunnel in Switzerland was the longest land rail tunnel, with a length of , from its inauguration in 2007 until the completion of the Gotthard Base Tunnel in 2016. The LΓ¦rdal Tunnel in Norway from LΓ¦rdal to Aurland is the world's longest road tunnel, intended for cars and similar vehicles, at . The Zhongnanshan Tunnel in People's Republic of China opened in January 2007 is the world's second longest highway tunnel and the longest mountain road tunnel in Asia, at . The longest canal tunnel is the Rove Tunnel in France, over long. Notable The Moffat Tunnel, opened in 1928, passes under the Continental Divide of the Americas in Colorado. The tunnel is long and at an elevation of is the highest active railroad tunnel in the U.S. (The inactive Tennessee Pass Line and the historic Alpine Tunnel are higher.) Williamson's tunnels in Liverpool, from 1804 and completed around 1840 by a wealthy eccentric, are probably the largest underground folly in the world. The tunnels were built with no functional purpose. The Chicago freight tunnel network is the largest urban street tunnel network, comprising of tunnels beneath the majority of downtown Chicago streets. It operated between 1906 and 1956 as a freight network, connecting building basements and railway stations. Following a 1992 flood the network was sealed, although some parts still carry utility and communications infrastructure. The Pennsylvania Turnpike opened in 1940 with seven tunnels, most of which were bored as part of the stillborn South Pennsylvania Railroad and giving the highway the nickname "Tunnel Highway". 
Four of the tunnels (Allegheny Mountain, Tuscarora Mountain, Kittatinny Mountain, and Blue Mountain) remain in active use, while the other three (Laurel Hill, Rays Hill, and Sideling Hill) were bypassed in the 1960s; the latter two tunnels are on a bypassed section of the Turnpike now commonly known as the Abandoned Pennsylvania Turnpike. The Fredhälls road tunnel was opened in 1966, in Stockholm, Sweden, and the New Elbe road tunnel opened in 1975 in Hamburg, Germany. Both tunnels handle around 150,000 vehicles a day, making them two of the most trafficked tunnels in the world. The Honningsvåg Tunnel ( long) opened in 1999 on European route E69 in Norway as the world's northernmost road tunnel, except for mines (which exist on Svalbard). The Central Artery road tunnel in Boston, Massachusetts, is a part of the larger Big Dig completed around 2007, and carries approximately 200,000 vehicles/day under the city along Interstate 93, US Route 1, and Massachusetts Route 3, which share a concurrency through the tunnels. The Big Dig replaced Boston's old badly deteriorated I-93 elevated highway. The Stormwater Management And Road Tunnel or SMART Tunnel, is a combined storm drainage and road structure opened in 2007 in Kuala Lumpur, Malaysia. The tunnel is the longest stormwater drainage tunnel in South East Asia and second longest in Asia. The facility can be operated as a simultaneous traffic and stormwater passage, or dedicated exclusively to stormwater when necessary. The Eiksund Tunnel on national road Rv 653 in Norway is the world's deepest subsea road tunnel, measuring long, with deepest point at below the sea level, opened in February 2008. Gerrards Cross railway tunnel, in England, opened in 2010, is notable in that it converted an existing railway cutting into a tunnel to create ground to build a supermarket over the tunnel. The railway in the cutting had first opened around 1906, so some 104 years passed between the opening of the cutting and its conversion into a railway tunnel. The tunnel was built using the cover method with craned-in prefabricated forms in order to keep the busy railway operating. A branch of the Tesco supermarket chain occupies the newly created ground above the railway tunnel, with an adjacent existing railway station at the end of the tunnel. During construction, a portion of the tunnel collapsed when soil cover was added. The prefabricated forms were covered with a layer of reinforced concrete after the collapse. The Fenghuoshan tunnel, completed in 2005 on the Qinghai-Tibet railway, is the world's highest railway tunnel, about above sea level and long. The La Linea Tunnel in Colombia, 2016, is the longest mountain tunnel in South America. It crosses beneath a mountain at above sea level with six traffic lanes, and it has a parallel emergency tunnel. The tunnel is subject to serious groundwater pressure. The tunnel will link Bogotá and its urban area with the coffee-growing region, and with the main port on the Colombian Pacific coast. The Chicago Deep Tunnel Project is a network of of drainage tunnels designed to reduce flooding in the Chicago area. Started in the mid-1970s, the project is due to be completed in 2029. New York City Water Tunnel No. 3, started in 1970, has an expected completion beyond 2026, and will measure more than . Mining The use of tunnels for mining is called drift mining. Drift mining can help find coal, gold, iron, and other minerals, just like normal mining. Sub-surface mining consists of digging tunnels or shafts into the earth to reach buried ore deposits. 
Military use Some tunnels are not for transport at all but rather are fortifications, for example Mittelwerk and Cheyenne Mountain Complex. Excavation techniques, as well as the construction of underground bunkers and other habitable areas, are often associated with military use during armed conflict, or civilian responses to threat of attack. Another use for tunnels was for the storage of chemical weapons. Secret tunnels Secret tunnels have given entrance to or escape from an area, such as the Cu Chi Tunnels or the smuggling tunnels in the Gaza Strip which connect it to Egypt. Although the Underground Railroad network used to transport escaped slaves was "underground" mostly in the sense of secrecy, hidden tunnels were occasionally used. Secret tunnels were also used during the Cold War, under the Berlin Wall and elsewhere, to smuggle refugees, and for espionage. Smugglers use secret tunnels to transport or store contraband, such as illegal drugs and weapons. Elaborately engineered tunnels built to smuggle drugs across the Mexico-US border were estimated to require up to 9 months to complete, and an expenditure of up to $1 million. Some of these tunnels were equipped with lighting, ventilation, telephones, drainage pumps, hydraulic elevators, and in at least one instance, an electrified rail transport system. Secret tunnels have also been used by thieves to break into bank vaults and retail stores after hours. Several tunnels have been discovered by the Border Security Forces across the Line of Control along the India-Pakistan border, mainly to allow terrorists access to the Indian territory of Jammu and Kashmir. The actual usage of erdstall tunnels is unknown but theories connect it to a rebirth ritual. Natural tunnels Lava tubes are emptied lava conduits, formed during volcanic eruptions by flowing and cooling lava. Natural Tunnel State Park (Virginia, US) features a natural tunnel, really a limestone cave, that has been used as a railroad tunnel since 1890. Punarjani Guha in Kerala, India. Hindus believe that crawling through the tunnel (which they believe was created by a Hindu god) from one end to the other will wash away all of one's sins and thus allow one to attain rebirth. Only men are permitted to crawl through the tunnel. Torghatten, a Norwegian island with a hat-shaped silhouette, has a natural tunnel in the middle of the hat, letting light come through. The long, high, and wide tunnel is said to be the hole made by an arrow of the angry troll Hestmannen, the hill being the hat of the troll-king of Sømna trying to save the beautiful Lekamøya. The tunnel is thought actually to be the work of ice. The sun shines through the tunnel during two periods of a few minutes each year. Major accidents Clayton Tunnel rail crash (1861) – confusion about block signals leading to collision, 23 killed. Welwyn Tunnel rail crash (1866) – train failed in tunnel, guard did not protect train. Paris Métro train fire (1903) – train fire in Couronnes underground station, 84 killed by smoke and gases. Church Hill Tunnel collapse (1925) – tunnel collapse on a work train during renovation, killing four men and trapping a steam locomotive and ten flat cars. Balvano train disaster (1944) – asphyxiation of about 500 "unofficial" passengers on freight train. Caldecott Tunnel fire (1982) – major motor vehicle tunnel crash and fire. Channel Tunnel fire (1996) – Train carrying Heavy Goods Vehicles (HGVs) caught fire. 
Princess Diana's death (1997) – Car crash in Pont de l'Alma tunnel, Paris, which killed Princess Diana. Mont Blanc Tunnel fire (1999) – Transport truck caught on fire and combusted inside tunnel. Big Dig Ceiling collapse (2006) – Concrete ceiling panel falls in Fort Point tunnel, Boston, which causes the Big Dig project to be closed for a year. See also Euphrates Tunnel Cattle creep Counter-beam lighting Culvert Hobby tunneling Megaproject Rapid transit Sequential Excavation Method Structure gauge – measure of maximum physical clearance in a tunnel Tree tunnel – tunnel-like effect from tree canopies above a road Tunnel tree – tunnel bored through the trunk of a tree Tunnels in popular culture Underground living References Bibliography Railway Tunnels in Queensland by Brian Webber, 1997, . Sullivan, Walter. Progress In Technology Revives Interest In Great Tunnels, New York Times, 24 June 1986. Retrieved 15 August 2010. External links ITA-AITES International Tunnelling Association Tunnels & Tunnelling International magazine Crossings Civil engineering Transport buildings and structures Earthworks (engineering)
Tunnel
[ "Engineering" ]
12,057
[ "Construction", "Civil engineering" ]
160,847
https://en.wikipedia.org/wiki/Glycogen%20storage%20disease%20type%20V
Glycogen storage disease type V (GSD5, GSD-V), also known as McArdle's disease, is a metabolic disorder, one of the metabolic myopathies, more specifically a muscle glycogen storage disease, caused by a deficiency of myophosphorylase. Its incidence is reported as one in 100,000, roughly the same as glycogen storage disease type I. The disease was first reported in 1951 by British physician Brian McArdle of Guy's Hospital, London. Signs and symptoms Onset of symptoms and diagnostic delay In the classic phenotype, the onset of this disease is usually noticed in childhood, but often not diagnosed until the third or fourth decade of life, frequently due to misdiagnosis and dismissal of symptoms. The median age of symptom onset is 3 years, with the median diagnostic delay being 29 years. Misdiagnosis is overwhelmingly common, with approximately 90% of patients being misdiagnosed, and approximately 62% receiving multiple misdiagnoses before a correct diagnosis. The prolonged diagnostic delay, misdiagnosis or multiple misdiagnoses, or being given inappropriate exercise advice (such as ignore pain or avoid exercise) severely impacts quality of life (QoL), physically and mentally. Ultra-rare phenotypes Late adult-onset, limb–girdle phenotype There is an ultra-rare adult-onset, limb–girdle phenotype that presents very late in life (70+ years of age) due to a recessive homozygous PYGM mutation (p.Β Lys42Profs*48) resulting in severe upper and lower limb atrophy, with the possibility of ptosis (drooping eyelids) and camptocormia (stooped posture). As of 2017, there have been two reported cases of this specific homozygous mutation and phenotype. In 1980, a woman also had a limb–girdle phenotype with onset at age 60, histochemical staining showed myophosphorylase deficiency; however the genetic mutation was unknown. Fatal infantile-onset phenotype There is an ultra-rare, fatal infantile-onset phenotype that results in profound muscle weakness ("floppy baby") and respiratory failure within weeks of birth (perinatal asphyxia). Post mortem biopsy showed deficiency of myophosphorylase and abnormal glycogen accumulation in skeletal muscle tissue. This phenotype may also include premature birth and joint contractures. Two reported cases, in 1978 and 1989. Mild phenotype There is an ultra-rare mild phenotype caused by recessive heterozygous alleles in the PYGM gene, where one allele is a common exon mutation and the other allele is an ultra-rare intronic mutation. It can also be caused by recessive homozygous intronic mutations. These intronic mutations result in a milder phenotype compared to the classic phenotype of McArdle disease. There is residual myophosphorylase activity, between 1-2% residual activity compared to unaffected individuals. This results in greater exercise capacity compared to classic phenotype McArdle individuals, particularly for sustained aerobic activity, but the capacity was still below that of unaffected individuals. In this mild phenotype, since their early teens, they did experience cramping and premature muscle fatigue during sudden vigorous exercise and prolonged isometric exercise; however, due to their less diminished capacity for aerobic activity, they were able to keep up with their peers in sports and everyday activities. As of 2009, there have been 3 reported cases of non-related individuals, a reported Druze family of consanguineous (related) individuals and 9 reported cases in two Finnish families. 
Common signs and symptoms The most prominent symptom is that of exercise intolerance which includes: premature muscle fatigue (particularly for anaerobic activity and high-intensity aerobic activity, which may be described as inability to keep up with peers or reduced stamina); exercise-induced painful cramps; inappropriate rapid heart rate response to exercise; exaggerated cardiorespiratory response to exercise (heavy or rapid breathing with an inappropriately rapid heart rate); second wind phenomenon (muscle fatigue and heart rate improve for aerobic activity after approximately 6–10 minutes). Heart rate during exercise is a key indicator as, unlike the symptoms of muscle fatigue and cramping, it is a medical sign (meaning that it is observable and measurable by a third party rather than felt subjectively by the patient). In regularly active individuals with McArdle disease, they may not feel the usual symptoms of muscle fatigue and cramping until they increase their speed to very brisk walking, jogging or cycling; however, they will still show an inappropriate rapid heart rate response to exercise, with a declining heart rate once second wind has been achieved. "In McArdle's, our heart rate tends to increase in what is called an 'inappropriate' response. That is, after the start of exercise it increases much more quickly than would be expected in someone unaffected by McArdle's." Other symptoms and comorbidities Myoglobinuria (reddish-brown urine) may be seen due to the breakdown of skeletal muscle known as rhabdomyolysis (a condition in which muscle cells break down, sending their contents into the bloodstream). In 2020, in the largest study to date of 269 GSD-V patients, 39.4% reported no previous episodes of myoglobinuria and 6.8% had normal CK (including those with fixed muscle weakness); so an absence of myoglobinuria and normal CK should not rule out the possibility of the disease. Between 33% and 51.4% develop fixed muscle weakness, typically of the trunk and upper body, with the onset of muscle weakness usually occurring later in life (40+ years of age). Younger people may display unusual symptoms, such as difficulty in chewing, swallowing or utilizing normal oral motor functions. Idiopathic leg pains were common in children, usually occurring at night, often presumed to be "growing pains" and not investigated further. A number of comorbidities were found in GSD-V individuals at a higher rate than in the general population, including (but not limited to): hypertension (17%), endocrine diseases (15.7%), musculoskeletal/rheumatic disease (12.9%), hyperuricemia/gout (11.6%), gastrointestinal diseases (11.2%), neurological disease (10%), respiratory disease (9.5%), and coronary artery disease (8.3%). They may have a pseudoathletic appearance of muscle hypertrophy (24%), particularly of the legs, and may have lower bone mineral content and density in the legs. Besides exercise-induced premature muscle fatigue, GSD-V individuals may also have comorbidities of mental fatigue, general fatigue, reduced motivation, sleep disturbances, anxiety and depression. As skeletal muscle relies predominantly on glycogenolysis for the first few minutes as it transitions from rest to activity, as well as throughout high-intensity aerobic activity and all anaerobic activity, individuals with GSD-V experience during exercise: sinus tachycardia, tachypnea, muscle fatigue and pain, during the aforementioned activities and time frames. 
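The declining heart rate at a steady workload is the kind of objective sign that can be picked out of a simple exercise recording, and the "second wind" it marks is described further in the next paragraph. The following sketch is illustrative only and is not a clinical tool from the McArdle disease literature; the 6–10 minute window echoes the timing quoted above, while the 10 beats-per-minute drop threshold and the function name are assumptions made for the example.

```python
# Illustrative only: flag a possible "second wind" pattern in heart-rate data
# recorded at a constant, light-to-moderate workload (e.g. steady walking).
# The 6-10 minute window reflects the timing described above; the required
# drop of at least 10 beats per minute is an assumed, non-clinical threshold.

def shows_second_wind(samples, window_s=(360, 600), min_drop_bpm=10):
    """samples: list of (seconds_since_start, heart_rate_bpm) tuples."""
    early = [hr for t, hr in samples if t < window_s[0]]
    late = [hr for t, hr in samples if window_s[0] <= t <= window_s[1]]
    if not early or not late:
        return False  # not enough data to compare the two phases
    return max(early) - min(late) >= min_drop_bpm

# Hypothetical recording: heart rate climbs steeply at the start of exercise,
# then settles once fatty acid oxidation and blood glucose uptake catch up.
recording = [(60, 120), (180, 146), (300, 152), (420, 143), (540, 131)]
print(shows_second_wind(recording))  # True for this made-up series
```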
They may exhibit a "second wind" phenomenon, which is characterized by the individual's better tolerance for aerobic exercise such as walking and cycling after approximately 10 minutes. This is attributed to the combination of increased blood flow and the ability of the body to find alternative sources of energy, like fatty acids, proteins, and increased blood glucose uptake. AMP is primarily produced from the myokinase (adenylate kinase) reaction, which runs when the ATP reservoir is low. The myokinase reaction is one of three reactions in the phosphagen system (ATP-PCr), with the myokinase reaction occurring after phosphocreatine (creatine phosphate) has been depleted. In McArdle disease individuals, their muscle cells produce far more AMP than non-affected individuals as the reduced glycolytic flux from impaired glycogenolysis results in a chronically low ATP reservoir during exercise. The muscle cells need ATP (adenosine triphosphate) as it provides energy for muscle contraction by actively transporting calcium ions into the sarcoplasmic reticulum before muscle contraction, and it is used during muscle contraction for the release of myosin heads in the sliding filament model during the cross-bridge cycle. Along with the myokinase reaction, AMP is also produced by the purine nucleotide cycle, which also runs when the ATP reservoir in muscle cells is low, and is a part of protein metabolism. In the purine nucleotide cycle, three nucleotides: AMP (adenosine monophosphate), IMP (inosine monophosphate), and S-AMP (adenylosuccinate) are converted in a circular fashion; the byproducts are fumarate (which goes on to produce ATP via oxidative phosphorylation), ammonia (from the conversion of AMP into IMP), and uric acid (from excess AMP). GSD-V patients may experience myogenic hyperuricemia (exercise-induced accelerated breakdown of purine nucleotides in skeletal muscle). To avoid health complications, GSD-V patients need to get their ATP primarily from free fatty acids (lipid metabolism) rather than protein metabolism. Over-reliance on protein metabolism can be best avoided by not depleting their ATP reservoir, such as by not pushing through the pain and by not going too fast, too soon. "Be wary of pushing on when you feel pain start. This pain is a result of damaging muscles, and repeated damage will cause problems in the long term. But also this is counterproductive–it will stop you from getting into second wind. By pressing on despite the pain, you start your protein metabolism which then effectively blocks your glucose and fat metabolism. If you ever get into this situation, you need to stop completely for 30 minutes or more and then start the whole process again."Patients may present at emergency rooms with a transient contracture of the muscles and often severe pain (e.g. "clawed hand"). These require urgent assessment for rhabdomyolysis as in about 30% of cases this leads to acute kidney injury, which left untreated can be life-threatening. In a small number of cases compartment syndrome has developed, requiring prompt surgical referral. Genetics McArdle disease (GSD-V) is inherited in an autosomal recessive manner. If both parents are carriers (not having the disease, but each parent having one copy of the mutated allele), then each child of the couple will have a 25% chance of being affected (having McArdle disease), a 50% chance of being a carrier, and a 25% chance of being unaffected (neither a carrier nor diseased). 
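The 25% / 50% / 25% split quoted above is simply a Punnett-square enumeration of the two parental alleles. As an illustration only (this sketch is not drawn from the cited literature, and the allele symbols are hypothetical placeholders for a working and a mutated PYGM copy), the figures can be reproduced in a few lines of Python:

```python
# Illustrative sketch: enumerating offspring genotypes for two carrier
# (heterozygous) parents of an autosomal recessive condition such as
# McArdle disease. "P" = working PYGM allele, "p" = mutated allele.
from collections import Counter
from itertools import product

def offspring_probabilities(parent1=("P", "p"), parent2=("P", "p")):
    """Return genotype probabilities from a simple Punnett-square enumeration."""
    counts = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))
    total = sum(counts.values())
    return {genotype: count / total for genotype, count in counts.items()}

if __name__ == "__main__":
    # Expected output for carrier x carrier:
    #   PP (unaffected, non-carrier) = 0.25
    #   Pp (unaffected carrier)      = 0.50
    #   pp (affected)                = 0.25
    for genotype, p in sorted(offspring_probabilities().items()):
        print(f"{genotype}: {p:.2f}")
```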
Two autosomal recessive forms of this disease occur, childhood-onset and adult-onset. The gene for myophosphorylase, PYGM (the muscle-type of the glycogen phosphorylase gene), is located on chromosome 11q13. According to the most recent publications, 95 different mutations have been reported. The forms of the mutations may vary between ethnic groups. For example, the R50X (Arg50Stop) mutation (previously referred to as R49X) is most common in North America and western Europe, and the Y84X mutation is most common among central Europeans. The exact method of protein disruption has been elucidated in certain mutations. For example, R138W is known to disrupt the pyridoxal phosphate binding site. In 2006, another mutation (c.13_14delCT) was discovered which may contribute to increased symptoms in addition to the common Arg50Stop mutation. Myophosphorylase Structure The myophosphorylase structure consists of 842 amino acids. The molecular weight of the unprocessed precursor is 97 kDa. The three-dimensional structure has been determined for this protein. The interactions of several amino acids in myophosphorylase's structure are known. Ser-14 is modified by phosphorylase kinase during activation of the enzyme. Lys-680 is involved in binding the pyridoxal phosphate, which is the active form of vitamin B6, a cofactor required by myophosphorylase. By similarity, other sites have been estimated: Tyr-76 binds AMP, Cys-109 and Cys-143 are involved in subunit association, and Tyr-156 may be involved in allosteric control. Function Myophosphorylase is the form of the glycogen phosphorylase found in muscle that catalyses the following reaction: ((1→4)-alpha-D-glucosyl)(n) + phosphate = ((1→4)-alpha-D-glucosyl)(n-1) + alpha-D-glucose 1-phosphate During exercise, a deficiency of this enzyme ultimately leads to rapid depletion of phosphocreatine, a decrease in available ATP, and an exaggerated rise of ADP and AMP. McArdle disease individuals also have increased maximum fat oxidation compared to unaffected individuals. During exercise, in affected individuals, there is no significant rise in lactic acid production compared to resting levels (it may even fall below resting levels), and plasma pH levels rise (become more alkaline) rather than fall (become more acidic). Pathophysiology Myophosphorylase is involved in the breakdown of glycogen to glucose-1-phosphate for use in muscle. The enzyme removes 1,4 glycosyl residues from outer branches of glycogen and adds inorganic phosphate to form glucose-1-phosphate. Ordinarily, the removal of 1,4 glycosyl residues by myophosphorylase leads to the formation of glucose-1-phosphate during glycogen breakdown and the polar, phosphorylated glucose cannot leave the cell membrane and so is marked for intracellular catabolism. In McArdle's disease, deficiency of myophosphorylase leads to accumulation of intramuscular glycogen and a lack of glucose-1-phosphate for cellular fuel. Myophosphorylase comes in two forms: form 'a' is phosphorylated by phosphorylase kinase, form 'b' is not phosphorylated. Form 'a' is de-phosphorylated into form 'b' by the enzyme phosphoprotein phosphatase, which is activated by elevated insulin. Both forms have two conformational states: active (R or relaxed) and inactive (T or tense). When either form 'a' or 'b' is in the active state, the enzyme converts glycogen into glucose-1-phosphate. 
Myophosphorylase-b is allosterically activated by elevated AMP within the cell, and allosterically inactivated by elevated ATP and/or glucose-6-phosphate. Myophosphorylase-a is active, unless allosterically inactivated by elevated glucose within the cell. In this way, myophosphorylase-a is the more active of the two forms as it will continue to convert glycogen into glucose-1-phosphate even with high levels of glucose-6-phosphate and ATP. Diagnosis There are some laboratory tests that may aid in diagnosis of GSD-V. A muscle biopsy will note the absence of myophosphorylase in muscle fibers. In some cases, abnormal accumulation of glycogen stained by periodic acid-Schiff can be seen with microscopy. Genetic sequencing of the PYGM gene (which codes for the muscle isoform of glycogen phosphorylase) may be done to determine the presence of gene mutations, determining if McArdle's is present. This type of testing is considerably less invasive than a muscle biopsy. The physician can also perform an ischemic forearm exercise test as described below (see History). Some findings suggest a nonischemic test could be performed with similar results. The nonischemic version of this test would involve not cutting off the blood flow to the exercising arm. Findings consistent with McArdle's disease would include a failure of lactate to rise in venous blood and exaggerated ammonia levels. These findings would indicate a severe muscle glycolytic block. Serum lactate may fail to rise in part because of increased uptake via the monocarboxylate transporter (MCT1), which is upregulated in skeletal muscle in McArdle disease. Lactate may be used as a fuel source once converted to pyruvate. Ammonia levels may rise given ammonia is a by-product of AMP deaminase which follows after the production of AMP by adenylate kinase, an alternative pathway for ATP production. In this pathway, adenylate kinase combines two ADP molecules to make ATP and AMP; AMP is then deaminated, producing inosine monophosphate (IMP) and ammonia (NH3) as part of the purine nucleotide cycle. Physicians may also check resting levels of creatine kinase, which are moderately increased in 90% of patients. In some, the level is increased many-fold - a person without GSD-V will have a CK between 60 and 400 IU/L, while a person with the syndrome may have a level of 5,000 IU/L at rest, and may increase to 35,000 IU/L or more with muscle exertion. This can help distinguish McArdle's syndrome from carnitine palmitoyltransferase II deficiency (CPT-II), a lipid-based metabolic disorder which prevents fatty acids from being transported into mitochondria for use as an energy source. Serum electrolytes and endocrine studies (such as thyroid function, parathyroid function and growth hormone levels) will also be completed. Urine studies are required only if rhabdomyolysis is suspected. Urine volume, urine sediment and myoglobin levels would be ascertained. If rhabdomyolysis is suspected, serum myoglobin, creatine kinase, lactate dehydrogenase, electrolytes and renal function will be checked. Physicians may also conduct an exercise stress test to test for an inappropriately rapid heart rate (sinus tachycardia) in response to exercise. Due to the rare nature of the disease, the inappropriately rapid heart rate in response to exercise may be misdiagnosed as inappropriate sinus tachycardia (which is a diagnosis of exclusion). 
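The forearm-test reasoning above – venous lactate that stays flat while ammonia rises in an exaggerated way, against a background of elevated resting creatine kinase – can be summarised as a simple decision rule. The sketch below is illustrative only: the fold-change cut-offs and function names are assumptions rather than validated clinical criteria, and the 60–400 IU/L creatine kinase range is the reference figure quoted in this section.

```python
# Illustrative decision rule, not a diagnostic tool: summarise a forearm
# exercise test the way the text describes it. Cut-offs are assumptions.

CK_REFERENCE_IU_L = (60, 400)  # resting range quoted above for unaffected people

def forearm_test_pattern(lactate_rest, lactate_peak, ammonia_rest, ammonia_peak):
    """Return True if the pattern suggests a muscle glycolytic block:
    lactate barely rises (< 1.5x resting) while ammonia rises sharply (> 3x)."""
    flat_lactate = lactate_peak < 1.5 * lactate_rest
    exaggerated_ammonia = ammonia_peak > 3.0 * ammonia_rest
    return flat_lactate and exaggerated_ammonia

def resting_ck_flag(ck_iu_l):
    """Classify a resting creatine kinase value against the quoted range."""
    low, high = CK_REFERENCE_IU_L
    return "elevated" if ck_iu_l > high else ("within reference" if ck_iu_l >= low else "low")

# Hypothetical values for illustration only.
print(forearm_test_pattern(lactate_rest=1.0, lactate_peak=1.1,
                           ammonia_rest=30, ammonia_peak=150))  # True
print(resting_ck_flag(5000))  # "elevated", matching the example figure quoted above
```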
The 12 Minute Walk Test (12MWT) can be used to determine "second wind," which requires a treadmill (no incline), heart rate monitor, stop watch, pain scale, and that the patient has rested for 30 minutes prior to the test to ensure that "second wind" has stopped (that is, that increased ATP production primarily from free fatty acids has returned to resting levels). Electromyography (EMG) may show normal or myopathic results (short duration, polyphasic, small amplitude MUAPs). Before exercise, a minority of GSD-V patients show myopathic results (5/25 patients); whereas after 5 minutes of high-intensity isometric exercise, the majority showed myopathic results (22/25 patients). The myopathic results were a decrease in CMAP amplitude, which was evident immediately after exercise and, after a plateau phase of a few minutes, reached its maximum after 30 minutes. Differential diagnosis Dynamic symptoms of exercise intolerance (e.g. muscle fatigue and cramping) with or without fixed proximal muscle weakness: Another glycogen storage disease that affects muscle (muscle GSD); Metabolic myopathy other than glycogen storage disease; Endocrine myopathy that affects carbohydrate metabolism secondary to the primary disease; Inadequate blood flow (ischemia), particularly of the calves Intermittent claudication; Popliteal artery entrapment syndrome; Chronic venous insufficiency. Poor diet or malabsorption disease resulting in malnutrition of micronutrients essential for muscle glycogen metabolism; Other rare myopathies, such as Brody disease, Rippling muscle disease, Erythrocyte lactate transporter defect, a small number of muscular dystrophies, Tubular aggregate myopathy (TAM), etc. Exercise-induced muscle fatigue without cramping: Myasthenia gravis; Lambert–Eaton myasthenic syndrome; Congenital myasthenic syndromes. Fixed symptom of muscle weakness, predominantly of the proximal muscles: Limb-girdle muscular dystrophy; Inflammatory myopathy. Allelic to McArdle disease (GSD-V) is a recently discovered disease that has a pathogenic autosomal dominant mutation in exon 16 of the PYGM gene c.1915G>C (p.Asp639His). Discovered in 2020, it affected 13 members of a family over four generations and has yet to be assigned a GSD number. Unlike McArdle disease (GSD-V), this disease does not have an overall deficiency of myophosphorylase, only a deficiency of functioning myophosphorylase-a with plenty of functioning myophosphorylase-b (similar to GSD-IXd). Myophosphorylase-b can be allosterically activated to break down glycogen (glycogenolysis) by high levels of AMP, and as the AMP-dependent activity was preserved, the individuals of this family had normal muscle glycogen concentrations as well as lacked exercise intolerance (which are prominent distinguishing features from McArdle disease). The only symptom was adult-onset (40+ years of age) fixed muscle weakness, initially of the proximal muscles of the legs, followed by proximal arms, then distal leg muscles. Muscle biopsy also showed accumulation of the intermediate filament desmin in the myofibres. Treatment Supervised exercise programs have been shown in small studies to improve exercise capacity by several measures: lowering heart rate, lowering serum creatine kinase (CK), increasing the exercise intensity threshold before symptoms of muscle fatigue and cramping are experienced, and the skeletal muscles becoming aerobically conditioned. Oral sucrose treatment (for example a sports drink with 75 grams of sucrose in 660 ml.) 
taken 30 minutes prior to exercise has been shown to help improve exercise tolerance, including a lower heart rate and lower perceived level of exertion compared with placebo. This is because the ingestion of a high-carbohydrate meal or drink causes transient hyperglycaemia, with the exercising muscle cells utilizing the high glucose in the blood for the glycolytic pathway. However, the ingestion of a high-carbohydrate meal or drink is problematic as a frequent form of treatment since it will increase the release of insulin, which inhibits the release of fatty acids and subsequently will delay the ability to get into second wind. The frequent ingestion of sucrose (e.g. sugary drinks), in order to avoid premature muscle fatigue and cramping, is also problematic in that it can lead to obesity as insulin will also stimulate triglyceride synthesis (develop body fat), and obesity-related ill health (e.g. type II diabetes and heart disease). A low dosage treatment with creatine showed a significant improvement of muscle problems compared to placebo in a small clinical study, while other studies have shown minimal subjective benefit. High dosage treatment of creatine has been shown to worsen symptoms of myalgia (muscle pain). A ketogenic diet has been shown to be beneficial for McArdle disease (GSD-V), as ketones readily convert to acetyl CoA for oxidative phosphorylation, whereas free fatty acids take a few minutes to convert into acetyl CoA. Ketones are a part of fat metabolism; the ketones can act as the main fuel before fatty acid catabolism takes over (second wind), during which the ketones would act as a supplementary fuel alongside the fatty acids to produce adenosine triphosphate (ATP) by oxidative phosphorylation. History The deficiency was the first metabolic myopathy to be recognized, when the physician Brian McArdle described the first case in a 30-year-old man who always experienced pain and weakness after exercise. McArdle noticed this patient's cramps were electrically silent and his venous lactate levels failed to increase upon ischemic exercise. (The ischemic exercise consists of the patient squeezing a hand dynamometer at maximal strength for a specific period of time, usually a minute, with a blood pressure cuff, which is placed on the upper arm and set at 250 mmHg, blocking blood flow to the exercising arm.) Notably, this is the same phenomenon that occurs when muscle is poisoned in vitro by iodoacetate, which inhibits the breakdown of glycogen into glucose and prevents the formation of lactate, as well as produces an electrically silent muscle contracture. Knowing what occurs to muscle poisoned by iodoacetate helped McArdle speculate that a glycogenolytic block might be occurring when he first described the disease. McArdle accurately concluded that the patient had a disorder of glycogen breakdown that specifically affected skeletal muscle. The associated enzyme deficiency was discovered in 1959 by W. F. H. M. Mommaerts et al. In animals Naturally-occurring myophosphorylase deficiency (GSD-V; McArdle disease) has been found in Charolais cattle and Merino sheep. The cattle were asymptomatic at rest, but when forced to exercise, would become noticeably fatigued and recumbent (having to lie down) for approximately 10 minutes before being able to resume exercise (the second wind phenomenon). Artificially-induced myophosphorylase deficiency was created in mice, by altering their embryonic DNA, for use in laboratory experiments. 
See also Glycogen storage disease Hitting the wall (muscle fatigue due to glycogen depletion) Inborn errors of carbohydrate metabolism Purine nucleotide cycleΒ§Glycogenoses (GSDs) Second wind (increased ATP production primarily by fatty acids after glycogen depletion) References External links Euromac, an EU-funded consortium of medical and research institutes across Europe which is building a patient registry and raising standards of care for people with McArdle Disease. International Association for Muscle Glycogen Storage Disease (IamGSD). Walking With McArdle's - IamGSD videos EUROMAC Introduction - Video about McArdle disease and the EUROMAC Registry of McArdle disease and other rare glycogenoses patients Autosomal recessive disorders Inborn errors of carbohydrate metabolism Muscular disorders
Glycogen storage disease type V
[ "Chemistry" ]
5,789
[ "Inborn errors of carbohydrate metabolism", "Carbohydrate metabolism" ]
160,851
https://en.wikipedia.org/wiki/Glycogen%20storage%20disease
A glycogen storage disease (GSD, also glycogenosis and dextrinosis) is a metabolic disorder caused by a deficiency of an enzyme or transport protein affecting glycogen synthesis, glycogen breakdown, or glucose breakdown, typically in muscles and/or liver cells. GSD has two classes of cause: genetic and environmental. Genetic GSD is caused by any inborn error of carbohydrate metabolism (genetically defective enzymes or transport proteins) involved in these processes. In livestock, environmental GSD is caused by intoxication with the alkaloid castanospermine. However, not every inborn error of carbohydrate metabolism has been assigned a GSD number, even if it is known to affect the muscles or liver. For example, phosphoglycerate kinase deficiency (gene PGK1) has a myopathic form. Also, Fanconi-Bickel syndrome (gene SLC2A2) and Danon disease (gene LAMP2) were declassed as GSDs due to being defects of transport proteins rather than enzymes; however, GSD-1 subtypes b, c, and d are due to defects of transport proteins (genes SLC37A4, SLC17A3) yet are still considered GSDs. Phosphoglucomutase deficiency (gene PGM1) was declassed as a GSD due to it also affecting the formation of N-glycans; however, as it affects both glycogenolysis and glycosylation, it has been suggested that it should be re-designated as GSD-XIV. (See inborn errors of carbohydrate metabolism for a full list of inherited diseases that affect glycogen synthesis, glycogen breakdown, or glucose breakdown.)

Types
Some GSDs have different forms, e.g. infantile, juvenile, adult (late-onset). Some GSDs have different subtypes, e.g. GSD1a / GSD1b, GSD9A1 / GSD9A2 / GSD9B / GSD9C / GSD9D. GSD type 0: Although glycogen synthase deficiency does not result in storage of extra glycogen in the liver, it is classified with the GSDs as type 0 because it is another defect of glycogen storage and can cause similar problems. GSD type VIII (GSD 8): In the past, liver phosphorylase-b kinase deficiency was considered a distinct condition; however, it has been classified with GSD type VI and GSD IXa1, and it has been described as having X-linked recessive inheritance. GSD IX has become the dominant classification for this disease, grouped with the other isoenzymes of phosphorylase-b kinase deficiency. GSD type XI (GSD 11): Fanconi-Bickel syndrome (GLUT2 deficiency), hepatorenal glycogenosis with renal Fanconi syndrome, no longer considered a glycogen storage disease, but a defect of glucose transport. The designation of GSD type XI (GSD 11) has been repurposed for muscle lactate dehydrogenase deficiency (LDHA). GSD type XIV (GSD 14): No longer classed as a GSD, but as a congenital disorder of glycosylation type 1T (CDG1T), affects the phosphoglucomutase enzyme (gene PGM1). Phosphoglucomutase 1 deficiency is both a glycogenosis and a congenital disorder of glycosylation. Individuals with the disease have both a glycolytic block, as muscle glycogen cannot be broken down, and abnormal serum transferrin (loss of complete N-glycans). As it affects glycogenolysis, it has been suggested that it should be re-designated as GSD-XIV. Lafora disease is considered a complex neurodegenerative disease and also a glycogen metabolism disorder. Polyglucosan storage myopathies are associated with defective glycogen metabolism. Myophosphorylase-a activity impaired (not McArdle disease; same gene but different symptoms): autosomal dominant mutation on the PYGM gene. AMP-independent myophosphorylase activity is impaired, whereas the AMP-dependent activity is preserved. No exercise intolerance. 
Adult-onset muscle weakness. Accumulation of the intermediate filament desmin in the myofibers of the patients. Myophosphorylase comes in two forms: form 'a' is phosphorylated by phosphorylase kinase, form 'b' is not phosphorylated. Both forms have two conformational states: active (R or relaxed) and inactive (T or tense). When either form 'a' or 'b' are in the active state, then the enzyme converts glycogen into glucose-1-phosphate. Myophosphorylase-b is allosterically activated by AMP being in larger concentration than ATP and/or glucose-6-phosphate. (See Glycogen phosphorylaseΒ§Regulation). Unknown glycogenosis related to dystrophy gene deletion: patient has a previously undescribed myopathy associated with both Becker muscular dystrophy and a glycogen storage disorder of unknown aetiology. Diagnosis Methods to diagnose glycogen storage diseases include history and physical examination for associated symptoms, blood tests for associated metabolic disturbances, and genetic testing for suspected mutations. It may also include a non-ischemic forearm test, exercise stress test, or 12-minute walk test (12MWT). Advancements in genetic testing are slowly diminishing the need for biopsy; however, in the event of a VUS and inconclusive exercise tests, a biopsy would then be necessary to confirm diagnosis. Differential diagnosis Muscle Glycogen storage diseases that involve skeletal muscle typically have exercise-induced (dynamic) symptoms, such as premature muscle fatigue, rather than fixed weakness (static) symptoms. Differential diagnoses for glycogen storage diseases that involve fixed muscle weakness, particularly of the proximal muscles, would be an inflammatory myopathy or a limb-girdle muscular dystrophy. For those with exercise intolerance and/or proximal muscle weakness, the endocrinopathies should be considered. The timing of the symptoms of exercise intolerance, such as muscle fatigue and cramping, is important in order to help distinguish it from other metabolic myopathies such as fatty acid metabolism disorders. Problems originating within the circulatory system, rather than the muscle itself, can produce exercise-induced muscle fatigue, pain and cramping that alleviates with rest, resulting from inadequate blood flow (ischemia) to the muscles. Ischemia that often produces symptoms in the leg muscles includes intermittent claudication, popliteal artery entrapment syndrome, and chronic venous insufficiency. Diseases disrupting the neuromuscular junction can cause abnormal muscle fatigue, such as myasthenia gravis, an autoimmune disease. Similar, are Lambert–Eaton myasthenic syndrome (autoimmune) and the congenital myasthenic syndromes (genetic). Diseases can disrupt glycogen metabolism secondary to the primary disease. Abnormal thyroid functionβ€”hypo- and hyperthyroidismβ€”can manifest as myopathy with symptoms of exercise-induced muscle fatigue, cramping, muscle pain and may include proximal weakness or muscle hypertrophy (particularly of the calves). Hypothyroidism up-regulates glycogen synthesis and down-regulates glycogenolysis and glycolysis; conversely, hyperthyroidism does the reverse, up-regulating glycogenolysis and glycolysis while down-regulating glycogen synthesis. Prolonged hypo- and hyperthyroid myopathy leads to atrophy of type II (fast-twitch/glycolytic) muscle fibres, and a predominance of type I (slow-twitch/oxidative) muscle fibres. Muscle biopsy shows abnormal muscle glycogen: high accumulation in hypothyroidism and low accumulation in hyperthyroidism. 
Hypothyroid myopathy includes Kocher-Debre-Semelaigne syndrome (childhood-onset), Hoffman syndrome (adult-onset), myasthenic syndrome, and atrophic form. In patients with increased growth hormone, muscle biopsy includes, among other features, excess glycogen deposition. EPG5-related Vici syndrome is a multisystem disorder, a congenital disorder of autophagy, with muscle biopsy showing excess glycogen accumulation, among other myopathic features. It is interesting to note, in comparison to hypothyroid myopathy, that McArdle disease (GSD-V), which is by far the most commonly diagnosed of the muscle GSDs and therefore the most studied, has as its second highest comorbidity endocrine disease (chiefly hypothyroidism) and that some patients with McArdle disease also have hypertrophy of the calf muscles. Late-onset Pompe disease (GSD-II) also has calf hypertrophy and hypothyroidism as comorbidities. Poor diet and malabsorption diseases (such as celiac disease) may lead to malnutrition of essential vitamins necessary for glycogen metabolism within the muscle cells. Malnutrition typically presents with systemic symptoms, but in rare instances can be limited to myopathy. Vitamin D deficiency myopathy (also known as osteomalic myopathy due to the interplay between vitamin D and calcium) results in muscle weakness, predominantly of the proximal muscles; with muscle biopsy showing abnormal glycogen accumulation, atrophy of type II (fast-twitch/glycolytic) muscle fibres, and diminished calcium uptake by the sarcoplasmic reticulum (needed for muscle contraction). Although Vitamin D deficiency myopathy typically includes muscle atrophy, rarely calf muscle hypertrophy has been reported. Exercise-induced, electrically silent, muscle cramping and stiffness (transient muscle contractures or "pseudomyotonia") are seen not only in GSD types V, VII, IXd, X, XI, XII, and XIII, but also in Brody disease, Rippling muscle disease types 1 and 2, and CAV3-related hyperCKemia (Elevated serum creatine phosphokinase). Unlike the other myopathies, in Brody disease the muscle cramping is painless. Like GSD types II, III, and V, a pseudoathletic appearance of muscle hypertrophy is also seen in some with Brody disease and Rippling muscle disease. Erythrocyte lactate transporter defect (formerly Lactate transporter defect, myopathy due to) also includes exercise-induced, electrically silent, painful muscle cramping and transient contractures; as well as exercise-induced muscle fatigue. EMG and muscle biopsy is normal however, as the defect is not in the muscle but in the red blood cells that should clear lactate buildup from exercising muscles. Although most muscular dystrophies have fixed muscle weakness rather than exercise-induced muscle fatigue and/or cramping, there are a few exceptions. Limb–girdle muscular dystrophy autosomal recessive 23 (LGMD R23) has calf hypertrophy and exercise-induced cramping. Myofibrillar myopathy 10 (MFM10) has exercise-induced muscle fatigue, cramping and stiffness, with hypertrophic neck and shoulder girdle muscles. LGMD R28 has calf hypertrophy and exercise-induced muscle fatigue and pain. LGMD R8 has calf pseudohypertrophy and exercise-induced weakness (fatigue) and pain. LGMD R15 (a.k.a MDDGC3) has muscle hypertrophy, proximal muscle weakness, and muscle fatigue. 
DMD-related myopathies of Duchenne and Becker muscular dystrophy are known for fixed muscle weakness and pseudohypertrophic calf muscles, but they also have secondary muscular mitochondrial impairment causing low ATP production, as well as a decrease in type II (fast-twitch/glycolytic) muscle fibres, producing a predominance of type I (slow-twitch/oxidative) muscle fibres. DMD-related childhood-onset milder phenotypes present with exercise-induced muscle cramping, stiffness, pain, fatigue, and elevated CK. Becker muscular dystrophy has adult-onset exercise-induced muscle cramping, pain, and elevated CK. Tubular aggregate myopathy (TAM) types 1 and 2 have exercise-induced muscle pain, fatigue, and stiffness, with proximal muscle weakness and calf muscle pseudohypertrophy. TAM1 has cramping at rest, while TAM2 has cramping during exercise. Stormorken syndrome includes the symptoms of TAM, but is a more severe presentation including short stature and other abnormalities. Satoyoshi syndrome has exercise-induced painful muscle cramps, muscle hypertrophy, and short stature. Dimethylglycine dehydrogenase deficiency has muscle fatigue, elevated CK, and fishy body odour. Myopathy with myalgia, increased serum creatine kinase, with or without episodic rhabdomyolysis (MMCKR) has exercise-induced muscle cramps, pain, and fatigue, with some exhibiting proximal muscle weakness.

Liver
Glycogenosis-like phenotype of congenital hyperinsulinism due to HNF4A mutation or MODY1 (maturity-onset diabetes of the young, type 1). This phenotype of MODY1 has macrosomia and infantile-onset hyperinsulinemic hypoglycemia, physiological 3-OH butyrate, increased triglyceride serum levels, increased level of glycogen in liver and erythrocytes, increased liver transaminases, transient hepatomegaly, and renal Fanconi syndrome; patients later develop liver cirrhosis, decreased succinate-dependent respiration (mitochondrial dysfunction), rickets, nephrocalcinosis, chronic kidney disease, and diabetes.

Treatment
Treatment is dependent on the type of glycogen storage disease. Von Gierke disease (GSD-I) is typically treated with frequent small meals of carbohydrates and cornstarch, called modified cornstarch therapy, to prevent low blood sugar, while other treatments may include allopurinol and human granulocyte colony stimulating factor. Cori/Forbes disease (GSD-III) treatment may use modified cornstarch therapy and a high-protein diet with a preference for complex carbohydrates. However, unlike GSD-I, gluconeogenesis is functional, so simple sugars (sucrose, fructose, and lactose) are not prohibited. A ketogenic diet has been reported to be beneficial for McArdle disease (GSD-V), as ketones readily convert to acetyl CoA for oxidative phosphorylation, whereas free fatty acids take a few minutes to convert into acetyl CoA. For phosphoglucomutase deficiency (formerly GSD-XIV), D-galactose supplements and exercise training have shown favourable improvement of signs and symptoms. In terms of exercise training, some patients with phosphoglucomutase deficiency also experience "second wind." For McArdle disease (GSD-V), regular aerobic exercise utilizing "second wind" to enable the muscles to become aerobically conditioned, as well as anaerobic exercise (strength training) that follows the activity adaptations so as not to cause muscle injury, helps to improve exercise intolerance symptoms and maintain overall health. 
Studies have shown that regular low-moderate aerobic exercise increases peak power output, increases peak oxygen uptake (VΜ‡O2peak), lowers heart rate, and lowers serum CK in individuals with McArdle disease. Regardless of whether the patient experiences symptoms of muscle pain, muscle fatigue, or cramping, the achievement of second wind is demonstrable by the sign of an elevated heart rate dropping while the same speed is maintained on the treadmill. Inactive patients experienced second wind, demonstrated through relief of typical symptoms and the sign of an increased heart rate dropping, while performing low-moderate aerobic exercise (walking or brisk walking). Conversely, patients who were regularly active did not experience the typical symptoms during low-moderate aerobic exercise (walking or brisk walking), but still demonstrated second wind by the sign of an increased heart rate dropping. For the regularly active patients, it took more strenuous exercise (very brisk walking/jogging or bicycling) for them to experience both the typical symptoms and relief thereof, along with the sign of an increased heart rate dropping, demonstrating second wind. In young children (<10 years old) with McArdle disease (GSD-V), it may be more difficult to detect the second wind phenomenon. They may show a normal heart rate, with normal or above normal peak cardio-respiratory capacity (VΜ‡O2max). That said, patients with McArdle disease typically experience symptoms of exercise intolerance before the age of 10 years, with a median symptomatic age of 3 years. Tarui disease (GSD-VII) patients do not experience the "second wind" phenomenon; instead they are said to be "out of wind." However, they can achieve sub-maximal benefit from lipid metabolism of free fatty acids during aerobic activity following a warm-up.

Epidemiology
Overall, according to a study in British Columbia, approximately 2.3 children per 100,000 births (1 in 43,000) have some form of glycogen storage disease. In the United States, they are estimated to occur in 1 per 20,000–25,000 births. The Dutch incidence rate is estimated to be 1 per 40,000 births, while a Mexican study showed an incidence of 6.78 per 1,000 male newborns. Within the category of muscle glycogenoses (muscle GSDs), McArdle disease (GSD-V) is by far the most commonly diagnosed.

See also Metabolic myopathies Inborn errors of carbohydrate metabolism

References

External links AGSD. - Association for Glycogen Storage Disease. A US-based non-profit, parent and patient oriented support group dedicated to promoting the best interest of all the different types of glycogen storage disease. AGSD-UK - Association for Glycogen Storage Disease (UK). A UK-based charity which helps individuals and families affected by Glycogen Storage Disease by putting people in contact, providing information and support, publishing a magazine and holding conferences, workshops, courses and family events. IamGSD - International Association for Muscle Glycogen Storage Disease. A non-profit, patient-led international group encouraging efforts by research and medical professionals, national support groups and individual patients worldwide. IPA - International Pompe Association. (Pompe Disease is also known as GSD-II). A non-profit federation of Pompe disease patient groups worldwide. It seeks to coordinate activities and share experience and knowledge between different groups. EUROMAC - EUROMAC is a European registry of patients affected by McArdle Disease and other rare neuromuscular glycogenoses. 
CoRDS - Coordination of Rare Diseases at Sanford (CoRDS) is a centralized international patient registry for all rare diseases. They work with patient advocacy groups, including IamGSD, individuals and researchers. CORD - Canadian Organization for Rare Disorders (CORD) is a Canadian national network for organizations representing all those with rare disorders. CORD provides a strong common voice to advocate for health policy and a healthcare system that works for those with rare disorders. NORD - National Organization for Rare Disorders (NORD) is an American national non-profit patient advocacy organization that is dedicated to individuals with rare diseases and the organizations that serve them. EURODIS - Rare Diseases Europe (EURODIS) is a unique, non-profit alliance of over 700 rare disease patient organizations across Europe that work together to improve the lives of the 30 million people living with a rare disease in Europe. Inborn errors of carbohydrate metabolism Hepatology Rare diseases Diseases of liver Muscular disorders Metabolic disorders
Glycogen storage disease
[ "Chemistry" ]
4,394
[ "Inborn errors of carbohydrate metabolism", "Metabolic disorders", "Carbohydrate metabolism", "Metabolism" ]
160,853
https://en.wikipedia.org/wiki/Felt
Felt is a textile that is produced by matting, condensing, and pressing fibers together. Felt can be made of natural fibers such as wool or animal fur, or from synthetic fibers such as petroleum-based acrylic or acrylonitrile or wood pulp–based rayon. Blended fibers are also common. Natural fiber felt has special properties that allow it to be used for a wide variety of purposes. It is "fire-retardant and self-extinguishing; it dampens vibration and absorbs sound; and it can hold large amounts of fluid without feeling wet..." History Felt from wool is one of the oldest known textiles. Many cultures have legends about the origins of felt-making. Sumerian legend claims that the secret of feltmaking was discovered by Urnamman of Lagash. The story of Saint Clement and Saint Christopher relates that the men packed their sandals with wool to prevent blisters while fleeing from persecution. At the end of their journey the movement and sweat had turned the wool into felt socks. Most likely felt's origins can be found in central Asia, where there is evidence of feltmaking in Siberia (Altai mountains) in Northern Mongolia and more recently evidence dating back to the first century CE in Mongolia. Siberian tombs (7th to 2nd century BCE) show the broad uses of felt in that culture, including clothing, jewelry, wall hangings, and elaborate horse blankets. Employing careful color use, stitching, and other techniques, these feltmakers were able to use felt as an illustrative and decorative medium on which they could depict abstract designs and realistic scenes with great skill. Over time these makers became known for the beautiful abstract patterns they used that were derived from plant, animal, and other symbolic designs. From Siberia and Mongolia feltmaking spread across the areas held by the Turkic-Mongolian tribes. Sheep and camel herds were central to the wealth and lifestyle of these tribes, both of which animals were critical to producing the fibers needed for felting. For nomads traveling frequently and living on fairly treeless plains felt provided housing (yurts, tents etc.), insulation, floor coverings, and inside walling, as well as many household necessities from bedding and coverings to clothing. In the case of nomadic peoples, an area where feltmaking was particularly visible was in trappings for their animals and for travel. Felt was often featured in the blankets that went under saddles. Dyes provided rich coloring, and colored slices of pre-felts (semi-felted sheets that could be cut in decorative ways) along with dyed yarns and threads were combined to create beautiful designs on the wool backgrounds. Felt was even used to create totems and amulets with protective functions. In traditional societies the patterns embedded in the felt were also imbued with significant religious and symbolic meaning. Feltmaking is still practised by nomadic peoples (such as Mongols and Turkic people) in Central Asia, where rugs, tents and clothing are regularly made. Some of these are traditional items, such as the classic yurt, or ger, while others are designed for the tourist market, such as decorated slippers. In the Western world, felt is widely used as a medium for expression in both textile art and contemporary art and design, where it has significance as an ecologically responsible textile and building material. In addition to Central Asian traditions of felting, Scandinavian countries have also supported feltmaking, particularly for clothing. 
Manufacturing methods Wet felting In the wet felting process, hot water is applied to layers of animal hairs, while repeated agitation and compression causes the fibers to hook together or weave together into a single piece of fabric. Wrapping the properly arranged fiber in a sturdy, textured material, such as a bamboo mat or burlap, will speed up the felting process. The felted material may be finished by fulling. Only certain types of fiber can be wet felted successfully. Most types of fleece, such as those taken from the alpaca or the Merino sheep, can be put through the wet felting process. One may also use mohair (goat), angora (rabbit), or hair from rodents such as beavers and muskrats. These types of fiber are covered in tiny scales, similar to the scales found on a strand of human hair. Heat, motion, and moisture of the fleece causes the scales to open, while agitating them causes them to latch onto each other, creating felt. There is an alternative theory that the fibers wind around each other during felting. Plant fibers and synthetic fibers will not wet felt. In order to make multi-colored designs, felters conduct a two-step process in which they create pre-felts of specialized colorsβ€”these semi-completed sheets of colored felt can then be cut with a sharp implement (knife or scissors) and the distinctive colors placed next to each other as in making a mosaic. The felting process is then resumed and the edges of the fabric attach to each other as the felting process is completed. Shyrdak carpets (Turkmenistan) use a form of this method wherein two pieces of contrasting color are cut out with the same pattern, the cut-outs are then switched, fitting one into the other, which makes a sharply defined and colorful patterned piece. In order to strengthen the joints of a mosaic style felt, feltmakers often add a backing layer of fleece that is felted along with the other components. Feltmakers can differ in their orientation to this added layerβ€”where some will lay it on top of the design before felting and others will place the design on top of the strengthening layer. The process of felting was adapted to the lifestyles of the different cultures in which it flourished. In Central Asia, it is common to conduct the rolling/friction process with the aid of a horse, donkey, or camel, which will pull the rolled felt until the process is complete. Alternately, a group of people in a line might roll the felt along, kicking it regularly with their feet. Further fulling can include throwing or slamming and working the edges with careful rolling. In Turkey, some baths had areas dedicated to feltmaking, making use of the steam and hot water that were already present for bathing. Development of felting as a profession As felting grew in importance to a society, so, too, did the knowledge about techniques and approaches. Amateur or community felting obviously continued in many communities at the same time that felting specialists and felting centers began to develop. However, the importance of felting to community life can be seen in the fact that, in many Central Asian communities, felt production is directed by a leader who oversees the process as a ritual that includes prayersβ€”words and actions to bring good luck to the process. Successfully completing the creation of felt (certainly large felt pieces) is reason for celebration, feasting, and the sharing of traditional stories. 
In Turkey, craft guilds called "ahi" came into being, and these groups were responsible for registering members and protecting the knowledge of felting. In Istanbul at one time, there were 1,000 felters working in 400 workshops registered in this ahi. Needle felting Needle felting is a method of creating felt that uses specially designed needles instead of water. Felting needles have angled notches along the shaft that catch fibers and tangle them together to produce felt. These notches are sometimes erroneously called "barbs", but barbs are protrusions (like barbed wire) and would be too difficult to thrust into the wool and nearly impossible to pull out. Felting needles are thin and sharp, with shafts of a variety of different gauges and shapes. Needle felting is used in industrial felt making as well as for individual art and craft applications. Felting needles are sometimes fitted in holders that allow the use of 2 or more needles at one time to sculpt wool objects and shapes. Individual needles are often used for detail while multiple needles that are paired together are used for larger areas or to form the base of the project. At any point in time a variety of fibers and fiber colors may be added, using needles to incorporate them into the project. Needle felting can be used to create both 2 dimensional and 3 dimensional artwork, including soft sculpture, dolls, figurines, jewelry, and 2 dimensional wool paintings. Needle felting is popular with artists and craftspeople worldwide. One example is Ikuyo Fujita(藀田育代 Fujita Ikuyo), a Japanese artist who works primarily in needle felt painting and mogol (pipe cleaner) art. Recently, needle-felting machines have become popular for art or craft felters. Similar to a sewing machine, these tools have several needles that punch fibers together. These machines can be used to create felted products more efficiently. The embellishment machine allows the user to create unique combinations of fibers and designs. Carroting Invented in the mid 17th century and used until the mid-20th centuries, a process called "carroting" was used in the manufacture of good quality felt for making men's hats. Beaver, rabbit or hare skins were treated with a dilute solution of the mercury compound mercuric nitrate. The skins were dried in an oven where the thin fur at the sides turned orange, the color of carrots. Pelts were stretched over a bar in a cutting machine, and the skin was sliced off in thin shreds, with the fleece coming away entirely. The fur was blown onto a cone-shaped colander and then treated with hot water to consolidate it. The cone then peeled off and passed through wet rollers to cause the fur to felt. These 'hoods' were then dyed and blocked to make hats. The toxic solutions from the carrot and the vapours it produced resulted in widespread cases of mercury poisoning among hatters. This may be the origin of the phrase "mad as a hatter" which was used to humorous effect by Lewis Carroll in the chapter "A Mad Tea Party" of the novel Alice in Wonderland. Uses Felt is used in a wide range of industries and manufacturing processes, from the automotive industry and casinos to musical instruments and home construction, as well as in gun wadding, either inside cartridges or pushed down the barrel of a muzzleloader. Felt had many uses in ancient times and continues to be widely used today. 
Industrial uses Felt is frequently used in industry as a sound or vibration damper, as a non-woven fabric for air filtration, and in machinery for cushioning and padding moving parts. Home Decor Felt can be used in home furnishings like table runners, placemats, coasters, and even as backing for area rugs. It can add a touch of warmth and texture to a space. Clothing During the 18th and 19th centuries gentlemen's headwear made from beaver felt were popular. In the early part of the 20th century, cloth felt hats, such as fedoras, trilbies and homburgs, were worn by many men in the western world. Felt is often used in footwear as boot liners, with the Russian valenki being an example. Musical instruments Many musical instruments use felt. It is often used as a damper. On drum cymbal stands, it protects the cymbal from cracking and ensures a clean sound. It is used to wrap bass drum strikers and timpani mallets. Felt is used extensively in pianos; for example, piano hammers are made of wool felt around a wooden core. The density and springiness of the felt is a major part of what creates a piano's tone. As the felt becomes grooved and "packed" with use and age, the tone suffers. Felt is placed under the piano keys on accordions to control touch and key noise; it is also used on the pallets to silence notes not sounded by preventing air flow. Felt is used with other instruments, particularly stringed instruments, as a damper to reduce volume or eliminate unwanted sounds. Arts and crafts Felt is used for framing paintings. It is laid between the slip mount and picture as a protective measure to avoid damage from rubbing to the edge of the painting. This is commonly found as a preventive measure on paintings which have already been restored or professionally framed. It is widely used to protect paintings executed on various surfaces including canvas, wood panel and copper plate. A felt-covered board can be used in storytelling to small children. Small felt cutouts or figures of animals, people, or other objects will adhere to a felt board, and in the process of telling the story, the storyteller also acts it out on the board with the animals or people. Puppets can also be made with felt. The best known example of felt puppets are Jim Henson's Muppets. Felt pressed dolls, such as Lenci dolls, were very popular in the nineteenth century and just after World War I. As part of the overall renewal of interest in textile and fiber arts, beginning in the 1970s and continuing through today, felt has experienced a strong revival in interest, including its historical roots. Polly Stirling, a fiber artist from New South Wales, Australia, is commonly associated with the development of nuno felting, a key technique for contemporary art felting. German artist Joseph Beuys prominently integrates felt within his works. English artist Jenny Cowern shifted from traditional drawing and painting media into using felt as her primary media. Modern day felters with access to a broad range of sheep and other animal fibers have exploited knowledge of these different breeds to produce special effects in their felt. Fleece locks are classified by the Bradford or Micron count, both which designate the fineness to coarseness of the material. Fine wools range from 64 to 80 (Bradford); medium 40–60 (Bradford); and coarse 36–60 (Bradford). Merino, the finest and most delicate sheep fleece, will be employed for clothing that goes next to the body. 
Claudy Jongstra raises traditional and rare breeds of sheep with much hardier coats (Drenthe, Heath, Gotland, Schoonbeek, and Wensleydale) on her property in Friesland, and these are used in her interior design projects. Exploitation of these characteristics of the fleece, in tandem with the use of other techniques such as stitching and the incorporation of other fibers, provides felters with a broad range of possibilities.

See also Bowler hat Fuzzy felt Roofing felt Valenki

References

General bibliography E. J. W. Barber. Prehistoric Textiles: The Development of Cloth in the Neolithic and Bronze Ages, with Special Reference to the Aegean. Princeton: Princeton University Press, 1991. Lise Bender JΓΈrgensen. North European Textiles Until AD 1000. Aarhus: Aarhus University Press, 1992.

External links Nonwoven fabrics Building materials Animal hair products Fur trade Maritime culture
Felt
[ "Physics", "Engineering" ]
3,037
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
160,986
https://en.wikipedia.org/wiki/Order%20statistic
In statistics, the kth order statistic of a statistical sample is equal to its kth-smallest value. Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference. Important special cases of the order statistics are the minimum and maximum value of a sample, and (with some qualifications discussed below) the sample median and other sample quantiles. When using probability theory to analyze order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution. Notation and examples For example, suppose that four numbers are observed or recorded, resulting in a sample of size 4. If the sample values are 6, 9, 3, 7, the order statistics would be denoted where the subscript enclosed in parentheses indicates the th order statistic of the sample. The first order statistic (or smallest order statistic) is always the minimum of the sample, that is, where, following a common convention, we use upper-case letters to refer to random variables, and lower-case letters (as above) to refer to their actual observed values. Similarly, for a sample of size , the th order statistic (or largest order statistic) is the maximum, that is, The sample range is the difference between the maximum and minimum. It is a function of the order statistics: A similar important statistic in exploratory data analysis that is simply related to the order statistics is the sample interquartile range. The sample median may or may not be an order statistic, since there is a single middle value only when the number of observations is odd. More precisely, if for some integer , then the sample median is and so is an order statistic. On the other hand, when is even, and there are two middle values, and , and the sample median is some function of the two (usually the average) and hence not an order statistic. Similar remarks apply to all sample quantiles. Probabilistic analysis Given any random variables X1, X2, ..., Xn, the order statistics X(1), X(2), ..., X(n) are also random variables, defined by sorting the values (realizations) of X1, ..., Xn in increasing order. When the random variables X1, X2, ..., Xn form a sample they are independent and identically distributed. This is the case treated below. In general, the random variables X1, ..., Xn can arise by sampling from more than one population. Then they are independent, but not necessarily identically distributed, and their joint probability distribution is given by the Bapat–Beg theorem. From now on, we will assume that the random variables under consideration are continuous and, where convenient, we will also assume that they have a probability density function (PDF), that is, they are absolutely continuous. The peculiarities of the analysis of distributions assigning mass to points (in particular, discrete distributions) are discussed at the end. Cumulative distribution function of order statistics For a random sample as above, with cumulative distribution , the order statistics for that sample have cumulative distributions as follows (where r specifies which order statistic): the corresponding probability density function may be derived from this result, and is found to be Moreover, there are two special cases, which have CDFs that are easy to compute. Which can be derived by careful consideration of probabilities. 
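The displayed equations for this section appear to be missing from this copy of the text. As a reconstruction of the standard results the prose describes (given here in LaTeX; this is the textbook statement rather than a quotation of the article's own markup), for an i.i.d. sample X1, ..., Xn with cumulative distribution function F and density f:

    % Reconstruction of the standard formulas the surrounding prose describes;
    % the article's own rendered equations are not present in this extract.
    F_{X_{(r)}}(x) \;=\; \Pr\!\left(X_{(r)} \le x\right)
        \;=\; \sum_{j=r}^{n} \binom{n}{j}\,[F(x)]^{\,j}\,[1-F(x)]^{\,n-j},
    \qquad
    f_{X_{(r)}}(x) \;=\; \frac{n!}{(r-1)!\,(n-r)!}\,[F(x)]^{\,r-1}\,[1-F(x)]^{\,n-r}\,f(x).
    % The two easy special cases are the maximum and the minimum:
    F_{X_{(n)}}(x) = [F(x)]^{n},
    \qquad
    F_{X_{(1)}}(x) = 1 - [1-F(x)]^{n}.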
Probability distributions of order statistics Order statistics sampled from a uniform distribution In this section we show that the order statistics of the uniform distribution on the unit interval have marginal distributions belonging to the beta distribution family. We also give a simple method to derive the joint distribution of any number of order statistics, and finally translate these results to arbitrary continuous distributions using the cdf. We assume throughout this section that is a random sample drawn from a continuous distribution with cdf . Denoting we obtain the corresponding random sample from the standard uniform distribution. Note that the order statistics also satisfy . The probability density function of the order statistic is equal to that is, the kth order statistic of the uniform distribution is a beta-distributed random variable. The proof of these statements is as follows. For to be between u and uΒ +Β du, it is necessary that exactly kΒ βˆ’Β 1 elements of the sample are smaller than u, and that at least one is between u and uΒ +Β du. The probability that more than one is in this latter interval is already , so we have to calculate the probability that exactly kΒ βˆ’Β 1, 1 and nΒ βˆ’Β k observations fall in the intervals , and respectively. This equals (refer to multinomial distribution for details) and the result follows. The mean of this distribution is k / (n + 1). The joint distribution of the order statistics of the uniform distribution Similarly, for iΒ <Β j, the joint probability density function of the two order statistics U(i)Β <Β U(j) can be shown to be which is (up to terms of higher order than ) the probability that iΒ βˆ’Β 1, 1, jΒ βˆ’Β 1Β βˆ’Β i, 1 and nΒ βˆ’Β j sample elements fall in the intervals , , , , respectively. One reasons in an entirely analogous way to derive the higher-order joint distributions. Perhaps surprisingly, the joint density of the n order statistics turns out to be constant: One way to understand this is that the unordered sample does have constant density equal to 1, and that there are n! different permutations of the sample corresponding to the same sequence of order statistics. This is related to the fact that 1/n! is the volume of the region . It is also related with another particularity of order statistics of uniform random variables: It follows from the BRS-inequality that the maximum expected number of uniform U(0,1] random variables one can choose from a sample of size n with a sum up not exceeding is bounded above by , which is thus invariant on the set of all with constant product . Using the above formulas, one can derive the distribution of the range of the order statistics, that is the distribution of , i.e. maximum minus the minimum. More generally, for , also has a beta distribution: From these formulas we can derive the covariance between two order statistics:The formula follows from noting that and comparing that with where , which is the actual distribution of the difference. Order statistics sampled from an exponential distribution For a random sample of size n from an exponential distribution with parameter Ξ», the order statistics X(i) for i = 1,2,3, ..., n each have distribution where the Zj are iid standard exponential random variables (i.e. with rate parameter 1). This result was first published by AlfrΓ©d RΓ©nyi. Order statistics sampled from an Erlang distribution The Laplace transform of order statistics may be sampled from an Erlang distribution via a path counting method . 
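The Beta-distribution result for uniform order statistics stated earlier in this section is easy to check numerically. The sketch below is illustrative and not part of the article; the choices of n, k, and the number of replications are arbitrary, and only NumPy is assumed.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k, reps = 10, 3, 200_000        # sample size, which order statistic, replications

    samples = rng.uniform(size=(reps, n))
    u_k = np.sort(samples, axis=1)[:, k - 1]     # kth order statistic of each sample

    # Theory: U_(k) ~ Beta(k, n + 1 - k)
    a, b = k, n + 1 - k
    mean_theory = a / (a + b)                          # = k / (n + 1)
    var_theory = a * b / ((a + b) ** 2 * (a + b + 1))

    print("mean:", u_k.mean(), "vs", mean_theory)
    print("var: ", u_k.var(), "vs", var_theory)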
The joint distribution of the order statistics of an absolutely continuous distribution If FX is absolutely continuous, it has a density such that , and we can use the substitutions and to derive the following probability density functions for the order statistics of a sample of size n drawn from the distribution of X: where where Application: confidence intervals for quantiles An interesting question is how well the order statistics perform as estimators of the quantiles of the underlying distribution. A small-sample-size example The simplest case to consider is how well the sample median estimates the population median. As an example, consider a random sample of size 6. In that case, the sample median is usually defined as the midpoint of the interval delimited by the 3rd and 4th order statistics. However, we know from the preceding discussion that the probability that this interval actually contains the population median is Although the sample median is probably among the best distribution-independent point estimates of the population median, what this example illustrates is that it is not a particularly good one in absolute terms. In this particular case, a better confidence interval for the median is the one delimited by the 2nd and 5th order statistics, which contains the population median with probability With such a small sample size, if one wants at least 95% confidence, one is reduced to saying that the median is between the minimum and the maximum of the 6 observations with probability 31/32 or approximately 97%. Size 6 is, in fact, the smallest sample size such that the interval determined by the minimum and the maximum is at least a 95% confidence interval for the population median. Large sample sizes For the uniform distribution, as n tends to infinity, the pth sample quantile is asymptotically normally distributed, since it is approximated by For a general distribution F with a continuous non-zero density at FΒ βˆ’1(p), a similar asymptotic normality applies: where f is the density function, and FΒ βˆ’1 is the quantile function associated with F. One of the first people to mention and prove this result was Frederick Mosteller in his seminal paper in 1946. Further research led in the 1960s to the Bahadur representation which provides information about the errorbounds. The convergence to normal distribution also holds in a stronger sense, such as convergence in relative entropy or KL divergence. An interesting observation can be made in the case where the distribution is symmetric, and the population median equals the population mean. In this case, the sample mean, by the central limit theorem, is also asymptotically normally distributed, but with variance Οƒ2/n instead. This asymptotic analysis suggests that the mean outperforms the median in cases of low kurtosis, and vice versa. For example, the median achieves better confidence intervals for the Laplace distribution, while the mean performs better for X that are normally distributed. Proof It can be shown that where with Zi being independent identically distributed exponential random variables with rate 1. Since X/n and Y/n are asymptotically normally distributed by the CLT, our results follow by application of the delta method. Mutual Information of Order Statistics The mutual information and f-divergence between order statistics have also been considered. For example, if the parent distribution is continuous, then for all In other words, mutual information is independent of the parent distribution. 
For discrete random variables, the equality need not to hold and we only have The mutual information between uniform order statistics is given by where where is the -th harmonic number. Application: Non-parametric density estimation Moments of the distribution for the first order statistic can be used to develop a non-parametric density estimator. Suppose, we want to estimate the density at the point . Consider the random variables , which are i.i.d with distribution function . In particular, . The expected value of the first order statistic given a sample of total observations yields, where is the quantile function associated with the distribution , and . This equation in combination with a jackknifing technique becomes the basis for the following density estimation algorithm, Input: A sample of observations. points of density evaluation. Tuning parameter (usually 1/3). Output: estimated density at the points of evaluation. 1: Set 2: Set 3: Create an matrix which holds subsets with observations each. 4: Create a vector to hold the density evaluations. 5: for do 6: for do 7: Find the nearest distance to the current point within the th subset 8: end for 9: Compute the subset average of distances to 10: Compute the density estimate at 11: end for 12: return In contrast to the bandwidth/length based tuning parameters for histogram and kernel based approaches, the tuning parameter for the order statistic based density estimator is the size of sample subsets. Such an estimator is more robust than histogram and kernel based approaches, for example densities like the Cauchy distribution (which lack finite moments) can be inferred without the need for specialized modifications such as IQR based bandwidths. This is because the first moment of the order statistic always exists if the expected value of the underlying distribution does, but the converse is not necessarily true. Dealing with discrete variables Suppose are i.i.d. random variables from a discrete distribution with cumulative distribution function and probability mass function . To find the probabilities of the order statistics, three values are first needed, namely The cumulative distribution function of the order statistic can be computed by noting that Similarly, is given by Note that the probability mass function of is just the difference of these values, that is to say Computing order statistics The problem of computing the kth smallest (or largest) element of a list is called the selection problem and is solved by a selection algorithm. Although this problem is difficult for very large lists, sophisticated selection algorithms have been created that can solve this problem in time proportional to the number of elements in the list, even if the list is totally unordered. If the data is stored in certain specialized data structures, this time can be brought down to O(log n). In many applications all order statistics are required, in which case a sorting algorithm can be used and the time taken is O(n log n). Applications Order statistics have a lot of applications in areas as reliability theory, financial mathematics, survival analysis, epidemiology, sports, quality control, actuarial risk, etc. There is an extensive literature devoted to studies on applications of order statistics in these fields. For example, a recent application in actuarial risk can be found in, where some weighted premium principles in terms of record claims and kth record claims are provided. 
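The numeric coverage probabilities discussed in the small-sample confidence-interval example above follow from a simple binomial argument: the interval between the ith and jth order statistics contains the population median exactly when the number of observations falling below the median (a Binomial(n, 1/2) count for a continuous distribution) is at least i and at most j - 1. The following sketch is illustrative only and not from the article; for n = 6 it gives 5/16 for the interval between the 3rd and 4th order statistics, 25/32 for the 2nd and 5th, and the 31/32 quoted above for the minimum and maximum.

    from math import comb

    def median_coverage(n: int, i: int, j: int) -> float:
        """P(X_(i) < population median < X_(j)) for an i.i.d. continuous sample of size n.

        The interval covers the median exactly when the Binomial(n, 1/2) count of
        observations below the median is at least i and at most j - 1.
        """
        return sum(comb(n, k) for k in range(i, j)) / 2 ** n

    print(median_coverage(6, 3, 4))  # (X_(3), X_(4)): 0.3125  = 5/16
    print(median_coverage(6, 2, 5))  # (X_(2), X_(5)): 0.78125 = 25/32
    print(median_coverage(6, 1, 6))  # (X_(1), X_(6)): 0.96875 = 31/32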
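The selection problem mentioned in the paragraph on computing order statistics can be illustrated with a quickselect-style routine. This is an illustrative sketch rather than anything from the article; it returns the kth smallest element in expected linear time without fully sorting the list, and the final call uses the four-value sample from the notation section above.

    import random

    def kth_smallest(values, k):
        """Return the kth smallest element (k = 1 gives the minimum) by quickselect.

        Expected O(n) time; the input is never fully sorted.
        """
        assert 1 <= k <= len(values)
        values = list(values)
        while True:
            pivot = random.choice(values)
            below = [v for v in values if v < pivot]
            equal = [v for v in values if v == pivot]
            if k <= len(below):
                values = below
            elif k <= len(below) + len(equal):
                return pivot
            else:
                k -= len(below) + len(equal)
                values = [v for v in values if v > pivot]

    print(kth_smallest([6, 9, 3, 7], 2))  # prints 6, the 2nd order statistic of that sample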
See also Rankit Box plot BRS-inequality Concomitant (statistics) Fisher–Tippett distribution Bapat–Beg theorem for the order statistics of independent but not necessarily identically distributed random variables Bernstein polynomial L-estimator – linear combinations of order statistics Rank-size distribution Selection algorithm Examples of order statistics Sample maximum and minimum Quantile Percentile Decile Quartile Median Mean Sample mean and covariance References External links Retrieved Feb 02, 2005 Retrieved Feb 02, 2005 C++ source Dynamic Order Statistics Nonparametric statistics Summary statistics Permutations
Order statistic
[ "Mathematics" ]
2,917
[ "Functions and mappings", "Permutations", "Mathematical objects", "Combinatorics", "Mathematical relations" ]
160,987
https://en.wikipedia.org/wiki/CJK%20characters
In internationalization, CJK characters is a collective term for graphemes used in the Chinese, Japanese, and Korean writing systems, which each include Chinese characters. It can also go by CJKV to include Chα»― NΓ΄m, the Chinese-origin logographic script formerly used for the Vietnamese language, or CJKVZ to also include Sawndip, used to write the Zhuang languages. Character repertoire Standard Mandarin Chinese and Standard Cantonese are written almost exclusively in Chinese characters. Over 3,000 characters are required for general literacy, with up to 40,000 characters for reasonably complete coverage. Japanese uses fewer charactersβ€”general literacy in Japanese can be expected with 2,136 characters. The use of Chinese characters in Korea is increasingly rare, although idiosyncratic use of Chinese characters in proper names requires knowledge (and therefore availability) of many more characters. Even today, however, South Korean students are taught 1,800 characters. Other scripts used for these languages, such as bopomofo and the Latin-based pinyin for Chinese, hiragana and katakana for Japanese, and hangul for Korean, are not strictly "CJK characters", although CJK character sets almost invariably include them as necessary for full coverage of the target languages. The sinologist Carl Leban (1971) produced an early survey of CJK encoding systems. Until the early 20th century, Classical Chinese was the written language of government and scholarship in Vietnam. Popular literature in Vietnamese was written in the script, consisting of Chinese characters with many characters created locally. Since the 1920s, the script since then used for recording literature has been the Latin-based Vietnamese alphabet. Encoding The number of characters required for complete coverage of all these languages' needs cannot fit in the 256-character code space of 8-bit character encodings, requiring at least a 16-bit fixed width encoding or multi-byte variable-length encodings. The 16-bit fixed width encodings, such as those from Unicode up to and including version 2.0, are now deprecated due to the requirement to encode more characters than a 16-bit encoding can accommodateβ€”Unicode 5.0 has some 70,000 Han charactersβ€”and the requirement by the Chinese government that software in China support the GB 18030 character set. Although CJK encodings have common character sets, the encodings often used to represent them have been developed separately by different East Asian governments and software companies, and are mutually incompatible. Unicode has attempted, with some controversy, to unify the character sets in a process known as Han unification. CJK character encodings should consist minimally of Han characters plus language-specific phonetic scripts such as pinyin, bopomofo, hiragana, katakana and hangul. CJK character encodings include: Big5 (the most prevalent encoding before Unicode was implemented) CCCII CNS 11643 (official standard of Republic of China) EUC-JP EUC-KR GB 2312 (subset and predecessor of GB 18030) GB 18030 (mandated standard in the People's Republic of China) Giga Character Set (GCS) ISO 2022-JP ISO-2022-KR KS X 1001 KPS 9566 Shift-JIS TRON Unicode The CJK character sets take up the bulk of the assigned Unicode code space. There is much controversy among Japanese experts of Chinese characters about the desirability and technical merit of the Han unification process used to map multiple Chinese and Japanese character sets into a single set of unified characters. 
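As an illustration of the point above that these repertoires cannot fit in a single-byte encoding, the short sketch below (not part of the original article) encodes one Han character under several of the encodings listed and prints the byte length of each result. It assumes a standard CPython installation, whose built-in codecs include these East Asian encodings, and it simply reports when a codec cannot represent the character.

    # Illustrative only: byte lengths of one Han character under several encodings.
    # Assumes the standard CPython codec set, which includes these East Asian codecs.
    char = "\u6f22"  # 漒, the character "Han"
    for enc in ["utf-8", "utf-16-le", "gb18030", "big5", "shift_jis", "euc-jp", "euc-kr"]:
        try:
            data = char.encode(enc)
            print(enc, len(data), "bytes:", data.hex())
        except UnicodeEncodeError:
            print(enc, "cannot represent this character")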
All three languages can be written both left-to-right and top-to-bottom (right-to-left and top-to-bottom in ancient documents), but are usually considered left-to-right scripts when discussing encoding issues.

Legal status
Libraries cooperated on encoding standards for JACKPHY characters in the early 1980s. According to Ken Lunde, the abbreviation "CJK" was a registered trademark of Research Libraries Group (which merged with OCLC in 2006). The trademark, owned by OCLC between 1987 and 2009, has now expired.

See also Chinese character description languages Chinese character encoding Chinese input methods for computers CJK Compatibility Ideographs Chinese character strokes CJK Unified Ideographs Complex Text Layout languages (CTL) Input method editor Japanese language and computers Korean language and computers List of CJK fonts Sinoxenic Variable-width encoding Vietnamese language and computers

References

Works cited

Sources DeFrancis, John. The Chinese Language: Fact and Fantasy. Honolulu: University of Hawaii Press, 1990. Hannas, William C. Asia's Orthographic Dilemma. Honolulu: University of Hawaii Press, 1997 (paperback and hardcover). Lemberg, Werner: The CJK package for LaTeX2Ξ΅β€”Multilingual support beyond babel. TUGboat, Volume 18 (1997), No. 3β€”Proceedings of the 1997 Annual Meeting. Leban, Carl. Automated Orthographic Systems for East Asian Languages (Chinese, Japanese, Korean), State-of-the-art Report, Prepared for the Board of Directors, Association for Asian Studies, 1971. Lunde, Ken. CJKV Information Processing. Sebastopol, Calif.: O'Reilly & Associates, 1998.

External links CJKV: A Brief Introduction Lemberg CJK article from above, TUGboat18-3 On "CJK Unified Ideograph", from Wenlin.com FGA: Unicode CJKV character set rationalization

Encodings of Asian languages Languages of East Asia Natural language and computing Chinese-language computing Japanese-language computing Korean-language computing Writing systems using Chinese characters
CJK characters
[ "Technology" ]
1,151
[ "Natural language and computing" ]
160,990
https://en.wikipedia.org/wiki/Infinitesimal
In mathematics, an infinitesimal number is a non-zero quantity that is closer to 0 than any non-zero real number is. The word infinitesimal comes from a 17th-century Modern Latin coinage infinitesimus, which originally referred to the "infinity-eth" item in a sequence. Infinitesimals do not exist in the standard real number system, but they do exist in other number systems, such as the surreal number system and the hyperreal number system, which can be thought of as the real numbers augmented with both infinitesimal and infinite quantities; the augmentations are the reciprocals of one another. Infinitesimal numbers were introduced in the development of calculus, in which the derivative was first conceived as a ratio of two infinitesimal quantities. This definition was not rigorously formalized. As calculus developed further, infinitesimals were replaced by limits, which can be calculated using the standard real numbers. In the 3rd century BC Archimedes used what eventually came to be known as the method of indivisibles in his work The Method of Mechanical Theorems to find areas of regions and volumes of solids. In his formal published treatises, Archimedes solved the same problem using the method of exhaustion. Infinitesimals regained popularity in the 20th century with Abraham Robinson's development of nonstandard analysis and the hyperreal numbers, which, after centuries of controversy, showed that a formal treatment of infinitesimal calculus was possible. Following this, mathematicians developed surreal numbers, a related formalization of infinite and infinitesimal numbers that include both hyperreal cardinal and ordinal numbers, which is the largest ordered field. Vladimir Arnold wrote in 1990: The crucial insight for making infinitesimals feasible mathematical entities was that they could still retain certain properties such as angle or slope, even if these entities were infinitely small. Infinitesimals are a basic ingredient in calculus as developed by Leibniz, including the law of continuity and the transcendental law of homogeneity. In common speech, an infinitesimal object is an object that is smaller than any feasible measurement, but not zero in sizeβ€”or, so small that it cannot be distinguished from zero by any available means. Hence, when used as an adjective in mathematics, infinitesimal means infinitely small, smaller than any standard real number. Infinitesimals are often compared to other infinitesimals of similar size, as in examining the derivative of a function. An infinite number of infinitesimals are summed to calculate an integral. The modern concept of infinitesimals was introduced around 1670 by either Nicolaus Mercator or Gottfried Wilhelm Leibniz. The 15th century saw the work of Nicholas of Cusa, further developed in the 17th century by Johannes Kepler, in particular, the calculation of the area of a circle by representing the latter as an infinite-sided polygon. Simon Stevin's work on the decimal representation of all numbers in the 16th century prepared the ground for the real continuum. Bonaventura Cavalieri's method of indivisibles led to an extension of the results of the classical authors. The method of indivisibles related to geometrical figures as being composed of entities of codimension 1. John Wallis's infinitesimals differed from indivisibles in that he would decompose geometrical figures into infinitely thin building blocks of the same dimension as the figure, preparing the ground for general methods of the integral calculus. 
He exploited an infinitesimal denoted 1/∞ in area calculations. The use of infinitesimals by Leibniz relied upon heuristic principles, such as the law of continuity: what succeeds for the finite numbers succeeds also for the infinite numbers and vice versa; and the transcendental law of homogeneity that specifies procedures for replacing expressions involving unassignable quantities, by expressions involving only assignable ones. The 18th century saw routine use of infinitesimals by mathematicians such as Leonhard Euler and Joseph-Louis Lagrange. Augustin-Louis Cauchy exploited infinitesimals both in defining continuity in his Cours d'Analyse, and in defining an early form of a Dirac delta function. As Cantor and Dedekind were developing more abstract versions of Stevin's continuum, Paul du Bois-Reymond wrote a series of papers on infinitesimal-enriched continua based on growth rates of functions. Du Bois-Reymond's work inspired both Γ‰mile Borel and Thoralf Skolem. Borel explicitly linked du Bois-Reymond's work to Cauchy's work on rates of growth of infinitesimals. Skolem developed the first non-standard models of arithmetic in 1934. A mathematical implementation of both the law of continuity and infinitesimals was achieved by Abraham Robinson in 1961, who developed nonstandard analysis based on earlier work by Edwin Hewitt in 1948 and Jerzy ŁoΕ› in 1955. The hyperreals implement an infinitesimal-enriched continuum and the transfer principle implements Leibniz's law of continuity. The standard part function implements Fermat's adequality. History of the infinitesimal The notion of infinitely small quantities was discussed by the Eleatic School. The Greek mathematician Archimedes (c.Β 287Β BC – c.Β 212Β BC), in The Method of Mechanical Theorems, was the first to propose a logically rigorous definition of infinitesimals. His Archimedean property defines a number x as infinite if it satisfies the conditions ..., and infinitesimal if and a similar set of conditions holds for x and the reciprocals of the positive integers. A number system is said to be Archimedean if it contains no infinite or infinitesimal members. The English mathematician John Wallis introduced the expression 1/∞ in his 1655 book Treatise on the Conic Sections. The symbol, which denotes the reciprocal, or inverse, of ∞, is the symbolic representation of the mathematical concept of an infinitesimal. In his Treatise on the Conic Sections, Wallis also discusses the concept of a relationship between the symbolic representation of infinitesimal 1/∞ that he introduced and the concept of infinity for which he introduced the symbol ∞. The concept suggests a thought experiment of adding an infinite number of parallelograms of infinitesimal width to form a finite area. This concept was the predecessor to the modern method of integration used in integral calculus. The conceptual origins of the concept of the infinitesimal 1/∞ can be traced as far back as the Greek philosopher Zeno of Elea, whose Zeno's dichotomy paradox was the first mathematical concept to consider the relationship between a finite interval and an interval approaching that of an infinitesimal-sized interval. Infinitesimals were the subject of political and religious controversies in 17th century Europe, including a ban on infinitesimals issued by clerics in Rome in 1632. Prior to the invention of calculus mathematicians were able to calculate tangent lines using Pierre de Fermat's method of adequality and RenΓ© Descartes' method of normals. 
There is debate among scholars as to whether the method was infinitesimal or algebraic in nature. When Newton and Leibniz invented the calculus, they made use of infinitesimals, Newton's fluxions and Leibniz' differential. The use of infinitesimals was attacked as incorrect by Bishop Berkeley in his work The Analyst. Mathematicians, scientists, and engineers continued to use infinitesimals to produce correct results. In the second half of the nineteenth century, the calculus was reformulated by Augustin-Louis Cauchy, Bernard Bolzano, Karl Weierstrass, Cantor, Dedekind, and others using the (Ξ΅, Ξ΄)-definition of limit and set theory. While the followers of Cantor, Dedekind, and Weierstrass sought to rid analysis of infinitesimals, and their philosophical allies like Bertrand Russell and Rudolf Carnap declared that infinitesimals are pseudoconcepts, Hermann Cohen and his Marburg school of neo-Kantianism sought to develop a working logic of infinitesimals. The mathematical study of systems containing infinitesimals continued through the work of Levi-Civita, Giuseppe Veronese, Paul du Bois-Reymond, and others, throughout the late nineteenth and the twentieth centuries, as documented by Philip Ehrlich (2006). In the 20th century, it was found that infinitesimals could serve as a basis for calculus and analysis (see hyperreal numbers). First-order properties In extending the real numbers to include infinite and infinitesimal quantities, one typically wishes to be as conservative as possible by not changing any of their elementary properties. This guarantees that as many familiar results as possible are still available. Typically, elementary means that there is no quantification over sets, but only over elements. This limitation allows statements of the form "for any number x..." For example, the axiom that states "for any numberΒ x, xΒ +Β 0Β =Β x" would still apply. The same is true for quantification over several numbers, e.g., "for any numbersΒ x and y, xyΒ =Β yx." However, statements of the form "for any setΒ SΒ of numbersΒ ..." may not carry over. Logic with this limitation on quantification is referred to as first-order logic. The resulting extended number system cannot agree with the reals on all properties that can be expressed by quantification over sets, because the goal is to construct a non-Archimedean system, and the Archimedean principle can be expressed by quantification over sets. One can conservatively extend any theory including reals, including set theory, to include infinitesimals, just by adding a countably infinite list of axioms that assert that a number is smaller than 1/2, 1/3, 1/4, and so on. Similarly, the completeness property cannot be expected to carry over, because the reals are the unique complete ordered field up to isomorphism. We can distinguish three levels at which a non-Archimedean number system could have first-order properties compatible with those of the reals: An ordered field obeys all the usual axioms of the real number system that can be stated in first-order logic. For example, the commutativity axiom xΒ +Β yΒ =Β yΒ +Β x holds. A real closed field has all the first-order properties of the real number system, regardless of whether they are usually taken as axiomatic, for statements involving the basic ordered-field relations +,Β Γ—, and ≀. This is a stronger condition than obeying the ordered-field axioms. More specifically, one includes additional first-order properties, such as the existence of a root for every odd-degree polynomial. 
For example, every number must have a cube root. The system could have all the first-order properties of the real number system for statements involving any relations (regardless of whether those relations can be expressed usingΒ +,Β Γ—, and ≀). For example, there would have to be a sine function that is well defined for infinite inputs; the same is true for every real function. Systems in category 1, at the weak end of the spectrum, are relatively easy to construct but do not allow a full treatment of classical analysis using infinitesimals in the spirit of Newton and Leibniz. For example, the transcendental functions are defined in terms of infinite limiting processes, and therefore there is typically no way to define them in first-order logic. Increasing the analytic strength of the system by passing to categoriesΒ 2 andΒ 3, we find that the flavor of the treatment tends to become less constructive, and it becomes more difficult to say anything concrete about the hierarchical structure of infinities and infinitesimals. Number systems that include infinitesimals Formal series Laurent series An example from category 1 above is the field of Laurent series with a finite number of negative-power terms. For example, the Laurent series consisting only of the constant term 1 is identified with the real numberΒ 1, and the series with only the linear termΒ x is thought of as the simplest infinitesimal, from which the other infinitesimals are constructed. Dictionary ordering is used, which is equivalent to considering higher powers ofΒ x as negligible compared to lower powers. David O. Tall refers to this system as the super-reals, not to be confused with the superreal number system of Dales and Woodin. Since a Taylor series evaluated with a Laurent series as its argument is still a Laurent series, the system can be used to do calculus on transcendental functions if they are analytic. These infinitesimals have different first-order properties than the reals because, for example, the basic infinitesimalΒ x does not have a square root. The Levi-Civita field The Levi-Civita field is similar to the Laurent series, but is algebraically closed. For example, the basic infinitesimal x has a square root. This field is rich enough to allow a significant amount of analysis to be done, but its elements can still be represented on a computer in the same sense that real numbers can be represented in floating-point. Transseries The field of transseries is larger than the Levi-Civita field. An example of a transseries is: where for purposes of ordering x is considered infinite. Surreal numbers Conway's surreal numbers fall into category 2, except that the surreal numbers form a proper class and not a set. They are a system designed to be as rich as possible in different sizes of numbers, but not necessarily for convenience in doing analysis, in the sense that every ordered field is a subfield of the surreal numbers. There is a natural extension of the exponential function to the surreal numbers. Hyperreals The most widespread technique for handling infinitesimals is the hyperreals, developed by Abraham Robinson in the 1960s. They fall into category 3 above, having been designed that way so all of classical analysis can be carried over from the reals. This property of being able to carry over all relations in a natural way is known as the transfer principle, proved by Jerzy ŁoΕ› in 1955. 
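As a concrete illustration of the dictionary ordering on the Laurent-series infinitesimals described above, the following sketch (plain Python written for this illustration; the class name, truncation to finitely many terms, and rational coefficients are assumptions made here, not part of the article) compares elements by the coefficient of the lowest power of x, so that x is positive yet smaller than every positive real constant, while 1/x is infinite:

```python
# A minimal sketch (not from the article, not a library API): finitely many terms of a
# formal Laurent series in x with rational coefficients, with x treated as a positive
# infinitesimal.  "Dictionary ordering" compares the coefficient of the lowest power of x
# first, so a positive multiple of x sits below every positive constant.

from fractions import Fraction

class Laurent:
    def __init__(self, coeffs):
        # coeffs: {exponent: coefficient}, finitely many nonzero entries
        self.c = {e: Fraction(v) for e, v in coeffs.items() if v != 0}

    def __sub__(self, other):
        out = dict(self.c)
        for e, v in other.c.items():
            out[e] = out.get(e, 0) - v
        return Laurent(out)

    def is_positive(self):
        if not self.c:
            return False
        lead = min(self.c)              # the lowest power of x dominates the comparison
        return self.c[lead] > 0

    def __lt__(self, other):
        return (other - self).is_positive()

zero, one = Laurent({}), Laurent({0: 1})
x = Laurent({1: 1})                     # the basic infinitesimal

assert zero < x < Laurent({0: Fraction(1, 10**9)})   # x is positive but below 10^-9
assert Laurent({2: 1}) < x                           # x**2 is a still smaller infinitesimal
assert one < Laurent({-1: 1})                        # x**-1 exceeds 1 (it is infinite)
print("dictionary-ordering checks passed")
```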
For example, the transcendental function sin has a natural counterpart *sin that takes a hyperreal input and gives a hyperreal output, and similarly the set of natural numbers has a natural counterpart , which contains both finite and infinite integers. A proposition such as carries over to the hyperreals as . Superreals The superreal number system of Dales and Woodin is a generalization of the hyperreals. It is different from the super-real system defined by David Tall. Dual numbers In linear algebra, the dual numbers extend the reals by adjoining one infinitesimal, the new element Ξ΅ with the property Ξ΅2 = 0 (that is, Ξ΅ is nilpotent). Every dual number has the form z = a + bΞ΅ with a and b being uniquely determined real numbers. One application of dual numbers is automatic differentiation. This application can be generalized to polynomials in n variables, using the Exterior algebra of an n-dimensional vector space. Smooth infinitesimal analysis Synthetic differential geometry or smooth infinitesimal analysis have roots in category theory. This approach departs from the classical logic used in conventional mathematics by denying the general applicability of the law of excluded middle – i.e., not (a β‰  b) does not have to mean a = b. A nilsquare or nilpotent infinitesimal can then be defined. This is a number x where x2 = 0 is true, but x = 0 need not be true at the same time. Since the background logic is intuitionistic logic, it is not immediately clear how to classify this system with regard to classes 1, 2, and 3. Intuitionistic analogues of these classes would have to be developed first. Infinitesimal delta functions Cauchy used an infinitesimal to write down a unit impulse, infinitely tall and narrow Dirac-type delta function satisfying in a number of articles in 1827, see Laugwitz (1989). Cauchy defined an infinitesimal in 1821 (Cours d'Analyse) in terms of a sequence tending to zero. Namely, such a null sequence becomes an infinitesimal in Cauchy's and Lazare Carnot's terminology. Modern set-theoretic approaches allow one to define infinitesimals via the ultrapower construction, where a null sequence becomes an infinitesimal in the sense of an equivalence class modulo a relation defined in terms of a suitable ultrafilter. The article by Yamashita (2007) contains bibliography on modern Dirac delta functions in the context of an infinitesimal-enriched continuum provided by the hyperreals. Logical properties The method of constructing infinitesimals of the kind used in nonstandard analysis depends on the model and which collection of axioms are used. We consider here systems where infinitesimals can be shown to exist. In 1936 Maltsev proved the compactness theorem. This theorem is fundamental for the existence of infinitesimals as it proves that it is possible to formalise them. A consequence of this theorem is that if there is a number system in which it is true that for any positive integer n there is a positive number x such that 0Β <Β xΒ <Β 1/n, then there exists an extension of that number system in which it is true that there exists a positive number x such that for any positive integer n we have 0Β <Β xΒ <Β 1/n. The possibility to switch "for any" and "there exists" is crucial. The first statement is true in the real numbers as given in ZFC set theory : for any positive integer n it is possible to find a real number between 1/n and zero, but this real number depends on n. Here, one chooses n first, then one finds the corresponding x. 
In the second expression, the statement says that there is an x (at least one), chosen first, which is between 0 and 1/n for any n. In this case x is infinitesimal. This is not true in the real numbers (R) given by ZFC. Nonetheless, the theorem proves that there is a model (a number system) in which this is true. The question is: what is this model? What are its properties? Is there only one such model? There are in fact many ways to construct such a one-dimensional linearly ordered set of numbers, but fundamentally, there are two different approaches: Extend the number system so that it contains more numbers than the real numbers. Extend the axioms (or extend the language) so that the distinction between the infinitesimals and non-infinitesimals can be made in the real numbers themselves. In 1960, Abraham Robinson provided an answer following the first approach. The extended set is called the hyperreals and contains numbers less in absolute value than any positive real number. The method may be considered relatively complex but it does prove that infinitesimals exist in the universe of ZFC set theory. The real numbers are called standard numbers and the new non-real hyperreals are called nonstandard. In 1977 Edward Nelson provided an answer following the second approach. The extended axioms are IST, which stands either for Internal set theory or for the initials of the three extra axioms: Idealization, Standardization, Transfer. In this system, we consider that the language is extended in such a way that we can express facts about infinitesimals. The real numbers are either standard or nonstandard. An infinitesimal is a nonstandard real number that is less, in absolute value, than any positive standard real number. In 2006 Karel Hrbacek developed an extension of Nelson's approach in which the real numbers are stratified in (infinitely) many levels; i.e., in the coarsest level, there are no infinitesimals nor unlimited numbers. Infinitesimals are at a finer level and there are also infinitesimals with respect to this new level and so on. Infinitesimals in teaching Calculus textbooks based on infinitesimals include the classic Calculus Made Easy by Silvanus P. Thompson (bearing the motto "What one fool can do another can") and the German text Mathematik fur Mittlere Technische Fachschulen der Maschinenindustrie by R. Neuendorff. Pioneering works based on Abraham Robinson's infinitesimals include texts by Stroyan (dating from 1972) and Howard Jerome Keisler (Elementary Calculus: An Infinitesimal Approach). Students easily relate to the intuitive notion of an infinitesimal difference 1-"0.999...", where "0.999..." differs from its standard meaning as the real number 1, and is reinterpreted as an infinite terminating extended decimal that is strictly less than 1. Another elementary calculus text that uses the theory of infinitesimals as developed by Robinson is Infinitesimal Calculus by Henle and Kleinberg, originally published in 1979. The authors introduce the language of first-order logic, and demonstrate the construction of a first order model of the hyperreal numbers. The text provides an introduction to the basics of integral and differential calculus in one dimension, including sequences and series of functions. In an Appendix, they also treat the extension of their model to the hyperhyperreals, and demonstrate some applications for the extended model. An elementary calculus text based on smooth infinitesimal analysis is Bell, John L. (2008). 
A Primer of Infinitesimal Analysis, 2nd Edition. Cambridge University Press. ISBN 9780521887182. A more recent calculus text utilizing infinitesimals is Dawson, C. Bryan (2022), Calculus Set Free: Infinitesimals to the Rescue, Oxford University Press. ISBN 9780192895608. Functions tending to zero In a related but somewhat different sense, which evolved from the original definition of "infinitesimal" as an infinitely small quantity, the term has also been used to refer to a function tending to zero. More precisely, Loomis and Sternberg's Advanced Calculus defines the function class of infinitesimals, , as a subset of functions between normed vector spaces by , as well as two related classes (see Big-O notation) by , and.The set inclusions generally hold. That the inclusions are proper is demonstrated by the real-valued functions of a real variable , , and : but and .As an application of these definitions, a mapping between normed vector spaces is defined to be differentiable at if there is a [i.e, a bounded linear map ] such that in a neighborhood of . If such a map exists, it is unique; this map is called the differential and is denoted by , coinciding with the traditional notation for the classical (though logically flawed) notion of a differential as an infinitely small "piece" of F. This definition represents a generalization of the usual definition of differentiability for vector-valued functions of (open subsets of) Euclidean spaces. Array of random variables Let be a probability space and let . An array of random variables is called infinitesimal if for every , we have: The notion of infinitesimal array is essential in some central limit theorems and it is easily seen by monotonicity of the expectation operator that any array satisfying Lindeberg's condition is infinitesimal, thus playing an important role in Lindeberg's Central Limit Theorem (a generalization of the central limit theorem). See also Cantor function Differential (infinitesimal) Indeterminate form Infinitesimal calculus Infinitesimal transformation Instant Nonstandard calculus Model theory Notes References B. Crowell, "Calculus" (2003) Dawson, C. Bryan, "Calculus Set Free: Infinitesimals to the Rescue" (2022) Oxford University Press Ehrlich, P. (2006) The rise of non-Archimedean mathematics and the roots of a misconception. I. The emergence of non-Archimedean systems of magnitudes. Arch. Hist. Exact Sci. 60, no. 1, 1–121. Malet, Antoni. "Barrow, Wallis, and the remaking of seventeenth-century indivisibles". Centaurus 39 (1997), no. 1, 67–92. J. Keisler, "Elementary Calculus" (2000) University of Wisconsin K. Stroyan "Foundations of Infinitesimal Calculus" (1993) Stroyan, K. D.; Luxemburg, W. A. J. Introduction to the theory of infinitesimals. Pure and Applied Mathematics, No. 72. Academic Press [Harcourt Brace Jovanovich, Publishers], New York-London, 1976. Robert Goldblatt (1998) "Lectures on the hyperreals" Springer. Cutland et al. "Nonstandard Methods and Applications in Mathematics" (2007) Lecture Notes in Logic 25, Association for Symbolic Logic. "The Strength of Nonstandard Analysis" (2007) Springer. Yamashita, H.: Comment on: "Pointwise analysis of scalar Fields: a nonstandard approach" [J. Math. Phys. 47 (2006), no. 9, 092301; 16 pp.]. J. Math. Phys. 48 (2007), no. 8, 084101, 1 page. Calculus History of calculus Infinity Nonstandard analysis History of mathematics Mathematical logic
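The dual numbers and their use in forward-mode automatic differentiation, mentioned in the article above, can be illustrated with a short sketch. The class below is an assumed minimal implementation written for this purpose, not a reference to any particular library: evaluating a polynomial at a + Ξ΅ with Ρ² = 0 returns both the value and the derivative.

```python
# Dual numbers a + b*eps with eps**2 = 0.  Evaluating a function built from + and * at
# x + eps propagates the derivative automatically: the eps-coefficient of the result is f'(x).

class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b          # real part and infinitesimal coefficient

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__

def f(x):
    return 3 * x * x * x + 2 * x + 5   # f(x) = 3x^3 + 2x + 5, so f'(x) = 9x^2 + 2

y = f(Dual(2.0, 1.0))                  # evaluate at x = 2 + eps
print(y.a, y.b)                        # 33.0 (the value) and 38.0 (the derivative)
```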
Infinitesimal
[ "Mathematics" ]
5,311
[ "Calculus", "Mathematical logic", "Mathematical objects", "Infinity", "Nonstandard analysis", "Mathematics of infinitesimals", "Model theory", "History of calculus" ]
160,993
https://en.wikipedia.org/wiki/Generating%20function
In mathematics, a generating function is a representation of an infinite sequence of numbers as the coefficients of a formal power series. Generating functions are often expressed in closed form (rather than as a series), by some expression involving operations on the formal series. There are various types of generating functions, including ordinary generating functions, exponential generating functions, Lambert series, Bell series, and Dirichlet series. Every sequence in principle has a generating function of each type (except that Lambert and Dirichlet series require indices to start at 1 rather than 0), but the ease with which they can be handled may differ considerably. The particular generating function, if any, that is most useful in a given context will depend upon the nature of the sequence and the details of the problem being addressed. Generating functions are sometimes called generating series, in that a series of terms can be said to be the generator of its sequence of term coefficients. History Generating functions were first introduced by Abraham de Moivre in 1730, in order to solve the general linear recurrence problem. George PΓ³lya writes in Mathematics and plausible reasoning: The name "generating function" is due to Laplace. Yet, without giving it a name, Euler used the device of generating functions long before Laplace [..]. He applied this mathematical tool to several problems in Combinatory Analysis and the Theory of Numbers. Definition Convergence Unlike an ordinary series, the formal power series is not required to converge: in fact, the generating function is not actually regarded as a function, and the "variable" remains an indeterminate. One can generalize to formal power series in more than one indeterminate, to encode information about infinite multi-dimensional arrays of numbers. Thus generating functions are not functions in the formal sense of a mapping from a domain to a codomain. These expressions in terms of the indeterminateΒ  may involve arithmetic operations, differentiation with respect toΒ  and composition with (i.e., substitution into) other generating functions; since these operations are also defined for functions, the result looks like a function ofΒ . Indeed, the closed form expression can often be interpreted as a function that can be evaluated at (sufficiently small) concrete values of , and which has the formal series as its series expansion; this explains the designation "generating functions". However such interpretation is not required to be possible, because formal series are not required to give a convergent series when a nonzero numeric value is substituted forΒ . Limitations Not all expressions that are meaningful as functions ofΒ  are meaningful as expressions designating formal series; for example, negative and fractional powers ofΒ  are examples of functions that do not have a corresponding formal power series. Types Ordinary generating function (OGF) When the term generating function is used without qualification, it is usually taken to mean an ordinary generating function. The ordinary generating function of a sequence is: If is the probability mass function of a discrete random variable, then its ordinary generating function is called a probability-generating function. Exponential generating function (EGF) The exponential generating function of a sequence is Exponential generating functions are generally more convenient than ordinary generating functions for combinatorial enumeration problems that involve labelled objects. 
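For reference, the two definitions just described can be written out explicitly; the notation below is the standard one, supplied here as a reading aid rather than quoted from the text:

```latex
% Ordinary and exponential generating functions of a sequence a_0, a_1, a_2, \dots
G(a_n; x) \;=\; \sum_{n=0}^{\infty} a_n x^{n},
\qquad
\operatorname{EG}(a_n; x) \;=\; \sum_{n=0}^{\infty} \frac{a_n}{n!}\, x^{n}.
```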
Another benefit of exponential generating functions is that they are useful in transferring linear recurrence relations to the realm of differential equations. For example, take the Fibonacci sequence that satisfies the linear recurrence relation . The corresponding exponential generating function has the form and its derivatives can readily be shown to satisfy the differential equation as a direct analogue with the recurrence relation above. In this view, the factorial term is merely a counter-term to normalise the derivative operator acting on . Poisson generating function The Poisson generating function of a sequence is Lambert series The Lambert series of a sequence is Note that in a Lambert series the index starts at 1, not at 0, as the first term would otherwise be undefined. The Lambert series coefficients in the power series expansions for integers are related by the divisor sum The main article provides several more classical, or at least well-known examples related to special arithmetic functions in number theory. As an example of a Lambert series identity not given in the main article, we can show that for we have that where we have the special case identity for the generating function of the divisor function, , given by Bell series The Bell series of a sequence is an expression in terms of both an indeterminate and a prime and is given by: Dirichlet series generating functions (DGFs) Formal Dirichlet series are often classified as generating functions, although they are not strictly formal power series. The Dirichlet series generating function of a sequence is: The Dirichlet series generating function is especially useful when is a multiplicative function, in which case it has an Euler product expression in terms of the function's Bell series: If is a Dirichlet character then its Dirichlet series generating function is called a Dirichlet -series. We also have a relation between the pair of coefficients in the Lambert series expansions above and their DGFs. Namely, we can prove that: if and only if where is the Riemann zeta function. The sequence generated by a Dirichlet series generating function (DGF) corresponding to:has the ordinary generating function: Polynomial sequence generating functions The idea of generating functions can be extended to sequences of other objects. Thus, for example, polynomial sequences of binomial type are generated by: where is a sequence of polynomials and is a function of a certain form. Sheffer sequences are generated in a similar way. See the main article generalized Appell polynomials for more information. Examples of polynomial sequences generated by more complex generating functions include: Appell polynomials Chebyshev polynomials Difference polynomials Generalized Appell polynomials -difference polynomials Other generating functions Other sequences generated by more complex generating functions include: Double exponential generating functions. For example: Aitken's Array: Triangle of Numbers Hadamard products of generating functions and diagonal generating functions, and their corresponding integral transformations Convolution polynomials Knuth's article titled "Convolution Polynomials" defines a generalized class of convolution polynomial sequences by their special generating functions of the form for some analytic function with a power series expansion such that . 
We say that a family of polynomials, , forms a convolution family if and if the following convolution condition holds for all , and for all : We see that for non-identically zero convolution families, this definition is equivalent to requiring that the sequence have an ordinary generating function of the first form given above. A sequence of convolution polynomials defined in the notation above has the following properties: The sequence is of binomial type Special values of the sequence include and , and For arbitrary (fixed) , these polynomials satisfy convolution formulas of the form For a fixed non-zero parameter , we have modified generating functions for these convolution polynomial sequences given by where is implicitly defined by a functional equation of the form . Moreover, we can use matrix methods (as in the reference) to prove that given two convolution polynomial sequences, and , with respective corresponding generating functions, and , then for arbitrary we have the identity Examples of convolution polynomial sequences include the binomial power series, , so-termed tree polynomials, the Bell numbers, , the Laguerre polynomials, and the Stirling convolution polynomials. Ordinary generating functions Examples for simple sequences Polynomials are a special case of ordinary generating functions, corresponding to finite sequences, or equivalently sequences that vanish after a certain point. These are important in that many finite sequences can usefully be interpreted as generating functions, such as the PoincarΓ© polynomial and others. A fundamental generating function is that of the constant sequence , whose ordinary generating function is the geometric series The left-hand side is the Maclaurin series expansion of the right-hand side. Alternatively, the equality can be justified by multiplying the power series on the left by , and checking that the result is the constant power series 1 (in other words, that all coefficients except the one of are equal to 0). Moreover, there can be no other power series with this property. The left-hand side therefore designates the multiplicative inverse of in the ring of power series. Expressions for the ordinary generating function of other sequences are easily derived from this one. For instance, the substitution gives the generating function for the geometric sequence for any constant : (The equality also follows directly from the fact that the left-hand side is the Maclaurin series expansion of the right-hand side.) 
In particular, One can also introduce regular gaps in the sequence by replacing by some power of , so for instance for the sequence (which skips over ) one gets the generating function By squaring the initial generating function, or by finding the derivative of both sides with respect to and making a change of running variable , one sees that the coefficients form the sequence , so one has and the third power has as coefficients the triangular numbers whose term is the binomial coefficient , so that More generally, for any non-negative integer and non-zero real value , it is true that Since one can find the ordinary generating function for the sequence of square numbers by linear combination of binomial-coefficient generating sequences: We may also expand alternately to generate this same sequence of squares as a sum of derivatives of the geometric series in the following form: By induction, we can similarly show for positive integers that where denote the Stirling numbers of the second kind and where the generating function so that we can form the analogous generating functions over the integral th powers generalizing the result in the square case above. In particular, since we can write we can apply a well-known finite sum identity involving the Stirling numbers to obtain that Rational functions The ordinary generating function of a sequence can be expressed as a rational function (the ratio of two finite-degree polynomials) if and only if the sequence is a linear recursive sequence with constant coefficients; this generalizes the examples above. Conversely, every sequence generated by a fraction of polynomials satisfies a linear recurrence with constant coefficients; these coefficients are identical to the coefficients of the fraction denominator polynomial (so they can be directly read off). This observation shows it is easy to solve for generating functions of sequences defined by a linear finite difference equation with constant coefficients, and then hence, for explicit closed-form formulas for the coefficients of these generating functions. The prototypical example here is to derive Binet's formula for the Fibonacci numbers via generating function techniques. We also notice that the class of rational generating functions precisely corresponds to the generating functions that enumerate quasi-polynomial sequences of the form where the reciprocal roots, , are fixed scalars and where is a polynomial in for all . In general, Hadamard products of rational functions produce rational generating functions. Similarly, if is a bivariate rational generating function, then its corresponding diagonal generating function, is algebraic. For example, if we let then this generating function's diagonal coefficient generating function is given by the well-known OGF formula This result is computed in many ways, including Cauchy's integral formula or contour integration, taking complex residues, or by direct manipulations of formal power series in two variables. Operations on generating functions Multiplication yields convolution Multiplication of ordinary generating functions yields a discrete convolution (the Cauchy product) of the sequences. For example, the sequence of cumulative sums (compare to the slightly more general Euler–Maclaurin formula) of a sequence with ordinary generating function has the generating function because is the ordinary generating function for the sequence . 
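A short sketch of this "multiplication equals convolution" fact (the helper function below is written for this illustration and is not part of any library): multiplying a coefficient sequence by the all-ones coefficients of 1/(1 βˆ’ x) produces its running totals.

```python
# Multiplying OGFs convolves their coefficient sequences (the Cauchy product).
# In particular, multiplying by 1/(1 - x), whose coefficients are 1, 1, 1, ...,
# turns a coefficient sequence into its sequence of partial sums, as stated above.

from itertools import accumulate

def cauchy_product(a, b):
    """Coefficients of the product series, truncated to len(a) terms."""
    return [sum(a[k] * b[n - k] for k in range(n + 1)) for n in range(len(a))]

a    = [1, 4, 9, 16, 25, 36]          # e.g. the squares (n+1)**2
ones = [1] * len(a)                   # coefficients of 1/(1 - x)

print(cauchy_product(a, ones))        # [1, 5, 14, 30, 55, 91], the running totals of a
assert cauchy_product(a, ones) == list(accumulate(a))
```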
See also the section on convolutions in the applications section of this article below for further examples of problem solving with convolutions of generating functions and interpretations. Shifting sequence indices For integers , we have the following two analogous identities for the modified generating functions enumerating the shifted sequence variants of and , respectively: Differentiation and integration of generating functions We have the following respective power series expansions for the first derivative of a generating function and its integral: The differentiation–multiplication operation of the second identity can be repeated times to multiply the sequence by , but that requires alternating between differentiation and multiplication. If instead doing differentiations in sequence, the effect is to multiply by the th falling factorial: Using the Stirling numbers of the second kind, that can be turned into another formula for multiplying by as follows (see the main article on generating function transformations): A negative-order reversal of this sequence powers formula corresponding to the operation of repeated integration is defined by the zeta series transformation and its generalizations defined as a derivative-based transformation of generating functions, or alternately termwise by and performing an integral transformation on the sequence generating function. Related operations of performing fractional integration on a sequence generating function are discussed here. Enumerating arithmetic progressions of sequences In this section we give formulas for generating functions enumerating the sequence given an ordinary generating function , where , , and and are integers (see the main article on transformations). For , this is simply the familiar decomposition of a function into even and odd parts (i.e., even and odd powers): More generally, suppose that and that denotes the th primitive root of unity. Then, as an application of the discrete Fourier transform, we have the formula For integers , another useful formula providing somewhat reversed floored arithmetic progressions β€” effectively repeating each coefficient times β€” are generated by the identity -recursive sequences and holonomic generating functions Definitions A formal power series (or function) is said to be holonomic if it satisfies a linear differential equation of the form where the coefficients are in the field of rational functions, . Equivalently, is holonomic if the vector space over spanned by the set of all of its derivatives is finite dimensional. Since we can clear denominators if need be in the previous equation, we may assume that the functions, are polynomials in . Thus we can see an equivalent condition that a generating function is holonomic if its coefficients satisfy a -recurrence of the form for all large enough and where the are fixed finite-degree polynomials in . In other words, the properties that a sequence be -recursive and have a holonomic generating function are equivalent. Holonomic functions are closed under the Hadamard product operation on generating functions. Examples The functions , , , , , the dilogarithm function , the generalized hypergeometric functions and the functions defined by the power series and the non-convergent are all holonomic. Examples of -recursive sequences with holonomic generating functions include and , where sequences such as and are not -recursive due to the nature of singularities in their corresponding generating functions. 
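Returning to the arithmetic-progression extraction described earlier in this section, the root-of-unity filter can be checked numerically. The sketch below uses an arbitrary test polynomial and assumes the standard form of the multisection formula; the names and the evaluation point are made up for this example.

```python
# Series multisection via roots of unity: the terms of G(x) = sum a_n x^n with n ≑ r (mod q)
# are recovered by (1/q) * sum_{m=0}^{q-1} w^(-m*r) * G(w^m * x), where w = exp(2*pi*i/q).

import cmath

a = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]          # arbitrary test coefficients
G = lambda x: sum(c * x**n for n, c in enumerate(a))

def multisection(q, r, x):
    """Numerically evaluate the part of G in which only the terms with n ≑ r (mod q) survive."""
    w = cmath.exp(2j * cmath.pi / q)
    return sum(w ** (-m * r) * G(w ** m * x) for m in range(q)) / q

x = 0.37
direct = sum(c * x**n for n, c in enumerate(a) if n % 3 == 1)
assert abs(multisection(3, 1, x) - direct) < 1e-9
print("roots-of-unity filter reproduces the n ≑ 1 (mod 3) part of the series")
```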
Similarly, functions with infinitely many singularities such as , , and are not holonomic functions. Software for working with -recursive sequences and holonomic generating functions Tools for processing and working with -recursive sequences in Mathematica include the software packages provided for non-commercial use on the RISC Combinatorics Group algorithmic combinatorics software site. Despite being mostly closed-source, particularly powerful tools in this software suite are provided by the Guess package for guessing -recurrences for arbitrary input sequences (useful for experimental mathematics and exploration) and the Sigma package which is able to find P-recurrences for many sums and solve for closed-form solutions to -recurrences involving generalized harmonic numbers. Other packages listed on this particular RISC site are targeted at working with holonomic generating functions specifically. Relation to discrete-time Fourier transform When the series converges absolutely, is the discrete-time Fourier transform of the sequence . Asymptotic growth of a sequence In calculus, often the growth rate of the coefficients of a power series can be used to deduce a radius of convergence for the power series. The reverse can also hold; often the radius of convergence for a generating function can be used to deduce the asymptotic growth of the underlying sequence. For instance, if an ordinary generating function that has a finite radius of convergence of can be written as where each of and is a function that is analytic to a radius of convergence greater than (or is entire), and where then using the gamma function, a binomial coefficient, or a multiset coefficient. Note that limit as goes to infinity of the ratio of to any of these expressions is guaranteed to be 1; not merely that is proportional to them. Often this approach can be iterated to generate several terms in an asymptotic series for . In particular, The asymptotic growth of the coefficients of this generating function can then be sought via the finding of , , , , and to describe the generating function, as above. Similar asymptotic analysis is possible for exponential generating functions; with an exponential generating function, it is that grows according to these asymptotic formulae. Generally, if the generating function of one sequence minus the generating function of a second sequence has a radius of convergence that is larger than the radius of convergence of the individual generating functions then the two sequences have the same asymptotic growth. Asymptotic growth of the sequence of squares As derived above, the ordinary generating function for the sequence of squares is: With , , , , and , we can verify that the squares grow as expected, like the squares: Asymptotic growth of the Catalan numbers The ordinary generating function for the Catalan numbers is With , , , , and , we can conclude that, for the Catalan numbers: Bivariate and multivariate generating functions The generating function in several variables can be generalized to arrays with multiple indices. These non-polynomial double sum examples are called multivariate generating functions, or super generating functions. For two variables, these are often called bivariate generating functions. 
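A quick numeric check of the Catalan asymptotics discussed just above, using the classical estimate that the nth Catalan number behaves like 4^n / (n^{3/2} βˆšΟ€); logarithms are used only to avoid floating-point overflow for large n.

```python
# Ratio of the exact Catalan number to the asymptotic estimate 4**n / (n**1.5 * sqrt(pi));
# the ratio should tend to 1 as n grows.

from math import comb, pi, log, exp

def catalan(n):
    return comb(2 * n, n) // (n + 1)

for n in (10, 100, 1000, 10000):
    log_ratio = log(catalan(n)) - (n * log(4) - 1.5 * log(n) - 0.5 * log(pi))
    print(n, exp(log_ratio))   # roughly 0.898, 0.989, 0.9989, 0.99989, approaching 1
```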
Bivariate case The ordinary generating function of a two-dimensional array (where and are natural numbers) is: For instance, since is the ordinary generating function for binomial coefficients for a fixed , one may ask for a bivariate generating function that generates the binomial coefficients for all and . To do this, consider itself as a sequence in , and find the generating function in that has these sequence values as coefficients. Since the generating function for is: the generating function for the binomial coefficients is: Other examples of such include the following two-variable generating functions for the binomial coefficients, the Stirling numbers, and the Eulerian numbers, where and denote the two variables: Multivariate case Multivariate generating functions arise in practice when calculating the number of contingency tables of non-negative integers with specified row and column totals. Suppose the table has rows and columns; the row sums are and the column sums are . Then, according to I. J. Good, the number of such tables is the coefficient of: in: Representation by continued fractions (Jacobi-type -fractions) Definitions Expansions of (formal) Jacobi-type and Stieltjes-type continued fractions (-fractions and -fractions, respectively) whose th rational convergents represent -order accurate power series are another way to express the typically divergent ordinary generating functions for many special one and two-variate sequences. The particular form of the Jacobi-type continued fractions (-fractions) are expanded as in the following equation and have the next corresponding power series expansions with respect to for some specific, application-dependent component sequences, and , where denotes the formal variable in the second power series expansion given below: The coefficients of , denoted in shorthand by , in the previous equations correspond to matrix solutions of the equations: where , for , if , and where for all integers , we have an addition formula relation given by: Properties of the th convergent functions For (though in practice when ), we can define the rational th convergents to the infinite -fraction, , expanded by: component-wise through the sequences, and , defined recursively by: Moreover, the rationality of the convergent function for all implies additional finite difference equations and congruence properties satisfied by the sequence of , and for if then we have the congruence for non-symbolic, determinate choices of the parameter sequences and when , that is, when these sequences do not implicitly depend on an auxiliary parameter such as , , or as in the examples contained in the table below. Examples The next table provides examples of closed-form formulas for the component sequences found computationally (and subsequently proved correct in the cited references) in several special cases of the prescribed sequences, , generated by the general expansions of the -fractions defined in the first subsection. Here we define and the parameters and to be indeterminates with respect to these expansions, where the prescribed sequences enumerated by the expansions of these -fractions are defined in terms of the -Pochhammer symbol, Pochhammer symbol, and the binomial coefficients. {| class="wikitable" |- ! !! !! !! 
|- | || || || |- | || || || |- | || || || |- | || || || |- | || || || |- | || || || |- | || || || |} The radii of convergence of these series corresponding to the definition of the Jacobi-type -fractions given above are in general different from that of the corresponding power series expansions defining the ordinary generating functions of these sequences. Examples Square numbers Generating functions for the sequence of square numbers are: where is the Riemann zeta function. Applications Generating functions are used to: Find a closed formula for a sequence given in a recurrence relation, for example, Fibonacci numbers. Find recurrence relations for sequencesβ€”the form of a generating function may suggest a recurrence formula. Find relationships between sequencesβ€”if the generating functions of two sequences have a similar form, then the sequences themselves may be related. Explore the asymptotic behaviour of sequences. Prove identities involving sequences. Solve enumeration problems in combinatorics and encoding their solutions. Rook polynomials are an example of an application in combinatorics. Evaluate infinite sums. Various techniques: Evaluating sums and tackling other problems with generating functions Example 1: Formula for sums of harmonic numbers Generating functions give us several methods to manipulate sums and to establish identities between sums. The simplest case occurs when . We then know that for the corresponding ordinary generating functions. For example, we can manipulate where are the harmonic numbers. Let be the ordinary generating function of the harmonic numbers. Then and thus Using convolution with the numerator yields which can also be written as Example 2: Modified binomial coefficient sums and the binomial transform As another example of using generating functions to relate sequences and manipulate sums, for an arbitrary sequence we define the two sequences of sums for all , and seek to express the second sums in terms of the first. We suggest an approach by generating functions. First, we use the binomial transform to write the generating function for the first sum as Since the generating function for the sequence is given by we may write the generating function for the second sum defined above in the form In particular, we may write this modified sum generating function in the form of for , , , and , where . Finally, it follows that we may express the second sums through the first sums in the following form: Example 3: Generating functions for mutually recursive sequences In this example, we reformulate a generating function example given in Section 7.3 of Concrete Mathematics (see also Section 7.1 of the same reference for pretty pictures of generating function series). In particular, suppose that we seek the total number of ways (denoted ) to tile a 3-by- rectangle with unmarked 2-by-1 domino pieces. Let the auxiliary sequence, , be defined as the number of ways to cover a 3-by- rectangle-minus-corner section of the full rectangle. We seek to use these definitions to give a closed form formula for without breaking down this definition further to handle the cases of vertical versus horizontal dominoes. 
Notice that the ordinary generating functions for our two sequences correspond to the series: If we consider the possible configurations that can be given starting from the left edge of the 3-by- rectangle, we are able to express the following mutually dependent, or mutually recursive, recurrence relations for our two sequences when defined as above where , , , and : Since we have that for all integers , the index-shifted generating functions satisfy we can use the initial conditions specified above and the previous two recurrence relations to see that we have the next two equations relating the generating functions for these sequences given by which then implies by solving the system of equations (and this is the particular trick to our method here) that Thus by performing algebraic simplifications to the sequence resulting from the second partial fractions expansions of the generating function in the previous equation, we find that and that for all integers . We also note that the same shifted generating function technique applied to the second-order recurrence for the Fibonacci numbers is the prototypical example of using generating functions to solve recurrence relations in one variable already covered, or at least hinted at, in the subsection on rational functions given above. Convolution (Cauchy products) A discrete convolution of the terms in two formal power series turns a product of generating functions into a generating function enumerating a convolved sum of the original sequence terms (see Cauchy product). Consider and are ordinary generating functions. Consider and are exponential generating functions. Consider the triply convolved sequence resulting from the product of three ordinary generating functions Consider the -fold convolution of a sequence with itself for some positive integer (see the example below for an application) Multiplication of generating functions, or convolution of their underlying sequences, can correspond to a notion of independent events in certain counting and probability scenarios. For example, if we adopt the notational convention that the probability generating function, or pgf, of a random variable is denoted by , then we can show that for any two random variables if and are independent. Similarly, the number of ways to pay cents in coin denominations of values in the set {1,Β 5,Β 10,Β 25,Β 50} (i.e., in pennies, nickels, dimes, quarters, and half dollars, respectively) is generated by the product and moreover, if we allow the cents to be paid in coins of any positive integer denomination, we arrive at the generating for the number of such combinations of change being generated by the partition function generating function expanded by the infinite -Pochhammer symbol product of Example: Generating function for the Catalan numbers An example where convolutions of generating functions are useful allows us to solve for a specific closed-form function representing the ordinary generating function for the Catalan numbers, . In particular, this sequence has the combinatorial interpretation as being the number of ways to insert parentheses into the product so that the order of multiplication is completely specified. For example, which corresponds to the two expressions and . 
It follows that the sequence satisfies a recurrence relation given by and so has a corresponding convolved generating function, , satisfying Since , we then arrive at a formula for this generating function given by Note that the first equation implicitly defining above implies that which then leads to another "simple" (of form) continued fraction expansion of this generating function. Example: Spanning trees of fans and convolutions of convolutions A fan of order is defined to be a graph on the vertices with edges connected according to the following rules: Vertex 0 is connected by a single edge to each of the other vertices, and vertex is connected by a single edge to the next vertex for all . There is one fan of order one, three fans of order two, eight fans of order three, and so on. A spanning tree is a subgraph of a graph which contains all of the original vertices and which contains enough edges to make this subgraph connected, but not so many edges that there is a cycle in the subgraph. We ask how many spanning trees of a fan of order are possible for each . As an observation, we may approach the question by counting the number of ways to join adjacent sets of vertices. For example, when , we have that , which is a sum over the -fold convolutions of the sequence for . More generally, we may write a formula for this sequence as from which we see that the ordinary generating function for this sequence is given by the next sum of convolutions as from which we are able to extract an exact formula for the sequence by taking the partial fraction expansion of the last generating function. Implicit generating functions and the Lagrange inversion formula One often encounters generating functions specified by a functional equation, instead of an explicit specification. For example, the generating function for the number of binary trees on nodes (leaves included) satisfies The Lagrange inversion theorem is a tool used to explicitly evaluate solutions to such equations. Applying the above theorem to our functional equation yields (with ): Via the binomial theorem expansion, for even , the formula returns . This is expected as one can prove that the number of leaves of a binary tree are one more than the number of its internal nodes, so the total sum should always be an odd number. For odd , however, we get The expression becomes much neater if we let be the number of internal nodes: Now the expression just becomes the th Catalan number. Introducing a free parameter (snake oil method) Sometimes the sum is complicated, and it is not always easy to evaluate. The "Free Parameter" method is another method (called "snake oil" by H. Wilf) to evaluate these sums. Both methods discussed so far have as limit in the summation. When n does not appear explicitly in the summation, we may consider as a "free" parameter and treat as a coefficient of , change the order of the summations on and , and try to compute the inner sum. For example, if we want to compute we can treat as a "free" parameter, and set Interchanging summation ("snake oil") gives Now the inner sum is . Thus Then we obtain It is instructive to use the same method again for the sum, but this time take as the free parameter instead of . We thus set Interchanging summation ("snake oil") gives Now the inner sum is . Thus Thus we obtain for as before. 
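As a verification sketch for the Catalan-number material above, the following checks that the convolution recurrence, the binomial closed form, and the closed-form generating function (1 βˆ’ √(1 βˆ’ 4x))/(2x) all describe the same sequence; sympy is used only to expand the closed form.

```python
# The Catalan numbers from three equivalent descriptions: the convolution recurrence
# C_{n+1} = sum_k C_k * C_{n-k}, the binomial closed form C_n = C(2n, n)/(n+1),
# and the coefficients of the generating function (1 - sqrt(1 - 4x)) / (2x).

from math import comb
from sympy import symbols, sqrt, series

C = [1]                                           # convolution recurrence
for n in range(20):
    C.append(sum(C[k] * C[n - k] for k in range(n + 1)))
assert all(C[n] == comb(2 * n, n) // (n + 1) for n in range(21))

x = symbols('x')                                  # closed-form generating function
expansion = series((1 - sqrt(1 - 4 * x)) / (2 * x), x, 0, 12).removeO()
assert all(expansion.coeff(x, n) == C[n] for n in range(10))
print("recurrence, binomial formula and generating function agree")
```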
Generating functions prove congruences We say that two generating functions (power series) are congruent modulo , written if their coefficients are congruent modulo for all , i.e., for all relevant cases of the integers (note that we need not assume that is an integer hereβ€”it may very well be polynomial-valued in some indeterminate , for example). If the "simpler" right-hand-side generating function, , is a rational function of , then the form of this sequence suggests that the sequence is eventually periodic modulo fixed particular cases of integer-valued . For example, we can prove that the Euler numbers, satisfy the following congruence modulo 3: One useful method of obtaining congruences for sequences enumerated by special generating functions modulo any integers (i.e., not only prime powers ) is given in the section on continued fraction representations of (even non-convergent) ordinary generating functions by -fractions above. We cite one particular result related to generating series expanded through a representation by continued fraction from Lando's Lectures on Generating Functions as follows: Generating functions also have other uses in proving congruences for their coefficients. We cite the next two specific examples deriving special case congruences for the Stirling numbers of the first kind and for the partition function which show the versatility of generating functions in tackling problems involving integer sequences. The Stirling numbers modulo small integers The main article on the Stirling numbers generated by the finite products provides an overview of the congruences for these numbers derived strictly from properties of their generating function as in Section 4.6 of Wilf's stock reference Generatingfunctionology. We repeat the basic argument and notice that when reduces modulo 2, these finite product generating functions each satisfy which implies that the parity of these Stirling numbers matches that of the binomial coefficient and consequently shows that is even whenever . Similarly, we can reduce the right-hand-side products defining the Stirling number generating functions modulo 3 to obtain slightly more complicated expressions providing that Congruences for the partition function In this example, we pull in some of the machinery of infinite products whose power series expansions generate the expansions of many special functions and enumerate partition functions. In particular, we recall that the partition function is generated by the reciprocal infinite -Pochhammer symbol product (or -Pochhammer product as the case may be) given by This partition function satisfies many known congruence properties, which notably include the following results though there are still many open questions about the forms of related integer congruences for the function: We show how to use generating functions and manipulations of congruences for formal power series to give a highly elementary proof of the first of these congruences listed above. First, we observe that in the binomial coefficient generating function all of the coefficients are divisible by 5 except for those which correspond to the powers and moreover in those cases the remainder of the coefficient is 1 modulo 5. Thus, or equivalently It follows that Using the infinite product expansions of it can be shown that the coefficient of in is divisible by 5 for all . Finally, since we may equate the coefficients of in the previous equations to prove our desired congruence result, namely that for all . 
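The congruence p(5n + 4) ≑ 0 (mod 5) established above can also be checked directly. The sketch below computes partition numbers by expanding the product ∏_{kβ‰₯1} 1/(1 βˆ’ x^k) one factor at a time, mirroring the generating function used in the proof; the bounds are chosen arbitrarily for the check.

```python
# Coefficient-by-coefficient expansion of prod_{k >= 1} 1/(1 - x^k): the k-th pass allows
# parts of size k to be used any number of times, so p[n] ends up equal to the partition
# function p(n) for all n up to the truncation bound N.

N = 300
p = [1] + [0] * N                 # p[0] = 1 for the empty partition
for k in range(1, N + 1):         # multiply the truncated series by 1/(1 - x^k)
    for n in range(k, N + 1):
        p[n] += p[n - k]

assert all(p[5 * n + 4] % 5 == 0 for n in range(60))
print("p(5n + 4) ≑ 0 (mod 5) checked for n = 0 .. 59")
```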
Transformations of generating functions There are a number of transformations of generating functions that provide other applications (see the main article). A transformation of a sequence's ordinary generating function (OGF) provides a method of converting the generating function for one sequence into a generating function enumerating another. These transformations typically involve integral formulas involving a sequence OGF (see integral transformations) or weighted sums over the higher-order derivatives of these functions (see derivative transformations). Generating function transformations can come into play when we seek to express a generating function for the sums in the form of involving the original sequence generating function. For example, if the sums are then the generating function for the modified sum expressions is given by (see also the binomial transform and the Stirling transform). There are also integral formulas for converting between a sequence's OGF, , and its exponential generating function, or EGF, , and vice versa given by provided that these integrals converge for appropriate values of . Tables of special generating functions An initial listing of special mathematical series is found here. A number of useful and special sequence generating functions are found in Section 5.4 and 7.4 of Concrete Mathematics and in Section 2.5 of Wilf's Generatingfunctionology. Other special generating functions of note include the entries in the next table, which is by no means complete. {| class="wikitable" |- ! Formal power series !! Generating-function formula !! Notes |- | || || is a first-order harmonic number |- | || || is a Bernoulli number |- | || || is a Fibonacci number and |- | || || denotes the rising factorial, or Pochhammer symbol and some integer |- | || |- | || |- | || |- | || || is the polylogarithm function and is a generalized harmonic number for |- | || || is a Stirling number of the second kind and where the individual terms in the expansion satisfy |- | || || |- | || || The two-variable case is given by |- | || || |- | || || |- ||| |} See also Moment-generating function Probability-generating function Generating function transformation Stanley's reciprocity theorem Integer partition Combinatorial principles Cyclic sieving Z-transform Umbral calculus Coins in a fountain Notes References Citations Reprinted in External links "Introduction To Ordinary Generating Functions" by Mike Zabrocki, York University, Mathematics and Statistics Generating Functions, Power Indices and Coin Change at cut-the-knot "Generating Functions" by Ed Pegg Jr., Wolfram Demonstrations Project, 2007. 1730 introductions Abraham de Moivre
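For reference, the binomial-transform identity and the OGF-from-EGF integral alluded to above can be written in their usual textbook forms (the normalizations below are the standard ones and are assumed here rather than quoted):

\sum_{n\ge 0}\left(\sum_{k=0}^{n}\binom{n}{k}f_k\right)z^n \;=\; \frac{1}{1-z}\,F\!\left(\frac{z}{1-z}\right),
\qquad
F(z) \;=\; \int_0^{\infty}\hat F(tz)\,e^{-t}\,dt,

where F(z) = Σ fₙzⁿ is the OGF, F̂(z) = Σ fₙzⁿ/n! is the EGF, and the second identity (the Laplace–Borel transform) holds wherever the integral converges.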
Generating function
[ "Mathematics" ]
7,579
[ "Sequences and series", "Generating functions", "Mathematical structures" ]
161,005
https://en.wikipedia.org/wiki/Head-related%20transfer%20function
A head-related transfer function (HRTF) is a response that characterizes how an ear receives a sound from a point in space. As sound strikes the listener, the size and shape of the head, ears, ear canal, density of the head, size and shape of nasal and oral cavities, all transform the sound and affect how it is perceived, boosting some frequencies and attenuating others. Generally speaking, the HRTF boosts frequencies from 2–5Β kHz with a primary resonance of +17Β dB at 2,700Β Hz. But the response curve is more complex than a single bump, affects a broad frequency spectrum, and varies significantly from person to person. A pair of HRTFs for two ears can be used to synthesize a binaural sound that seems to come from a particular point in space. It is a transfer function, describing how a sound from a specific point will arrive at the ear (generally at the outer end of the auditory canal). Some consumer home entertainment products designed to reproduce surround sound from stereo (two-speaker) headphones use HRTFs. Some forms of HRTF processing have also been included in computer software to simulate surround sound playback from loudspeakers. Sound localization Humans have just two ears, but can locate sounds in three dimensions – in range (distance), in direction above and below (elevation), in front and to the rear, as well as to either side (azimuth). This is possible because the brain, inner ear, and the external ears (pinna) work together to make inferences about location. This ability to localize sound sources may have developed in humans and ancestors as an evolutionary necessity since the eyes can only see a fraction of the world around a viewer, and vision is hampered in darkness, while the ability to localize a sound source works in all directions, to varying accuracy, regardless of the surrounding light. Humans estimate the location of a source by taking cues derived from one ear (monaural cues), and by comparing cues received at both ears (difference cues or binaural cues). Among the difference cues are time differences of arrival and intensity differences. The monaural cues come from the interaction between the sound source and the human anatomy, in which the original source sound is modified before it enters the ear canal for processing by the auditory system. These modifications encode the source location and may be captured via an impulse response which relates the source location and the ear location. This impulse response is termed the head-related impulse response (HRIR). Convolution of an arbitrary source sound with the HRIR converts the sound to that which would have been heard by the listener if it had been played at the source location, with the listener's ear at the receiver location. HRIRs have been used to produce virtual surround sound. The HRTF is the Fourier transform of HRIR. HRTFs for left and right ear (expressed above as HRIRs) describe the filtering of a sound source (x(t)) before it is perceived at the left and right ears as xL(t) and xR(t), respectively. The HRTF can also be described as the modifications to a sound from a direction in free air to the sound as it arrives at the eardrum. These modifications include the shape of the listener's outer ear, the shape of the listener's head and body, the acoustic characteristics of the space in which the sound is played, and so on. All these characteristics will influence how (or whether) a listener can accurately tell what direction a sound is coming from. 
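A minimal Python sketch of the HRIR convolution described above is given here; the HRIRs are synthetic placeholders standing in for measured responses (for example, ones loaded from a SOFA file), so the numbers are illustrative only:

import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono source with left/right HRIRs -> (N, 2) binaural stereo array."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    n = max(len(left), len(right))
    out = np.zeros((n, 2))
    out[:len(left), 0] = left
    out[:len(right), 1] = right
    return out

# Example with synthetic data: a unit impulse filtered by two dummy HRIRs.
mono = np.zeros(1000)
mono[0] = 1.0
hrir_left = np.random.randn(256) * np.exp(-np.arange(256) / 32.0)
hrir_right = np.roll(hrir_left, 20)   # crude stand-in for an interaural time difference
stereo = render_binaural(mono, hrir_left, hrir_right)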
In the AES69-2015 standard, the Audio Engineering Society (AES) has defined the SOFA file format for storing spatially oriented acoustic data like head-related transfer functions (HRTFs). SOFA software libraries and files are collected at the Sofa Conventions website. How HRTF works The associated mechanism varies between individuals, as their head and ear shapes differ. HRTF describes how a given sound wave input (parameterized as frequency and source location) is filtered by the diffraction and reflection properties of the head, pinna, and torso, before the sound reaches the transduction machinery of the eardrum and inner ear (see auditory system). Biologically, the source-location-specific prefiltering effects of these external structures aid in the neural determination of source location, particularly the determination of the source's elevation. Technical derivation Linear systems analysis defines the transfer function as the complex ratio between the output signal spectrum and the input signal spectrum as a function of frequency. Blauert (1974; cited in Blauert, 1981) initially defined the transfer function as the free-field transfer function (FFTF). Other terms include free-field to eardrum transfer function and the pressure transformation from the free-field to the eardrum. Less specific descriptions include the pinna transfer function, the outer ear transfer function, the pinna response, or directional transfer function (DTF). The transfer function H(f) of any linear time-invariant system at frequency f is: H(f) = Output(f) / Input(f) One method used to obtain the HRTF from a given source location is therefore to measure the head-related impulse response (HRIR), h(t), at the ear drum for the impulse Ξ”(t) placed at the source. The HRTF H(f) is the Fourier transform of the HRIR h(t). Even when measured for a "dummy head" of idealized geometry, HRTF are complicated functions of frequency and the three spatial variables. For distances greater than 1 m from the head, however, the HRTF can be said to attenuate inversely with range. It is this far field HRTF, H(f, ΞΈ, Ο†), that has most often been measured. At closer range, the difference in level observed between the ears can grow quite large, even in the low-frequency region within which negligible level differences are observed in the far field. HRTFs are typically measured in an anechoic chamber to minimize the influence of early reflections and reverberation on the measured response. HRTFs are measured at small increments of ΞΈ such as 15Β° or 30Β° in the horizontal plane, with interpolation used to synthesize HRTFs for arbitrary positions of ΞΈ. Even with small increments, however, interpolation can lead to front-back confusion, and optimizing the interpolation procedure is an active area of research. In order to maximize the signal-to-noise ratio (SNR) in a measured HRTF, it is important that the impulse being generated be of high volume. In practice, however, it can be difficult to generate impulses at high volumes and, if generated, they can be damaging to human ears, so it is more common for HRTFs to be directly calculated in the frequency domain using a frequency-swept sine wave or by using maximum length sequences. User fatigue is still a problem, however, highlighting the need for the ability to interpolate based on fewer measurements. 
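The relation H(f) = Output(f)/Input(f) suggests a simple estimation procedure: play a broadband excitation, record the signal at the eardrum, and divide spectra. The Python sketch below illustrates this with simulated signals; the small regularization term eps is an implementation convenience to avoid division by near-zero bins, not part of the definition:

import numpy as np

def estimate_hrtf(x, y, eps=1e-12):
    """Estimate H(f) from excitation x(t) and eardrum recording y(t) by regularized spectral division."""
    n = len(y)                       # long enough to hold the full linear convolution
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    return Y * np.conj(X) / (np.abs(X) ** 2 + eps)

# Simulated measurement: a noise excitation filtered by a known "true" HRIR.
x = np.random.randn(4096)                 # stand-in for a sweep or MLS excitation
true_hrir = np.exp(-np.arange(128) / 16.0)
y = np.convolve(x, true_hrir)             # simulated eardrum signal
H = estimate_hrtf(x, y)
hrir_est = np.fft.irfft(H, n=len(y))[:128]  # inverse transform recovers (approximately) the HRIR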
The head-related transfer function is involved in resolving the cone of confusion, a series of points where interaural time difference (ITD) and interaural level difference (ILD) are identical for sound sources from many locations around the 0 part of the cone. When a sound is received by the ear it can either go straight down the ear canal or it can be reflected off the pinna into the ear canal a fraction of a second later. The sound contains many frequencies, so many copies of this signal travel down the ear canal at different times depending on their frequency (according to reflection, diffraction, and their interaction with high and low frequencies and the size of the structures of the ear). These copies overlap each other, and during this, certain components are enhanced (where the phases of the signals match) while other copies are canceled out (where the phases of the signal do not match). Essentially, the brain is looking for frequency notches in the signal that correspond to particular known directions of sound. If another person's ears were substituted, the individual would not immediately be able to localize sound, as the patterns of enhancement and cancellation would differ from those patterns the person's auditory system is used to. However, after some weeks, the auditory system would adapt to the new head-related transfer function. The inter-subject variability in the spectra of HRTFs has been studied through cluster analyses. Assessing the variation between individual ears, we can limit our perspective to the degrees of freedom of the head and its relation with the spatial domain. Through this, we eliminate the tilt and other coordinate parameters that add complexity. For the purpose of calibration we are only concerned with the direction to our ears, ergo a specific degree of freedom. Some of the ways in which we can deduce an expression to calibrate the HRTF are: Localization of sound in virtual auditory space HRTF phase synthesis HRTF magnitude synthesis Localization of sound in virtual auditory space A basic assumption in the creation of a virtual auditory space is that if the acoustical waveforms present at a listener's eardrums are the same under headphones as in free field, then the listener's experience should also be the same. Typically, sounds generated from headphones are perceived as originating from within the head. In the virtual auditory space, the headphones should be able to "externalize" the sound. Using the HRTF, sounds can be spatially positioned using the technique described below. Let x1(t) represent an electrical signal driving a loudspeaker and y1(t) represent the signal received by a microphone inside the listener's eardrum. Similarly, let x2(t) represent the electrical signal driving a headphone and y2(t) represent the microphone response to that signal. The goal of the virtual auditory space is to choose x2(t) such that y2(t) = y1(t). Applying the Fourier transform to these signals, we come up with the following two equations: Y1 = X1LFM, and Y2 = X2HM, where L is the transfer function of the loudspeaker in the free field, F is the HRTF, M is the microphone transfer function, and H is the headphone-to-eardrum transfer function. Setting Y1 = Y2, and solving for X2 yields X2 = X1LF/H. By observation, the desired transfer function is T = LF/H. 

Therefore, theoretically, if x(t) is passed through this filter and the resulting x(t) is played on the headphones, it should produce the same signal at the eardrum. Since the filter applies only to a single ear, another one must be derived for the other ear. This process is repeated for many places in the virtual environment to create an array of head-related transfer functions for each position to be recreated while ensuring that the sampling conditions are set by the Nyquist criteria. HRTF phase synthesis There is less reliable phase estimation in the very low part of the frequency band, and in the upper frequencies the phase response is affected by the features of the pinna. Earlier studies also show that the HRTF phase response is mostly linear and that listeners are insensitive to the details of the interaural phase spectrum as long as the interaural time delay (ITD) of the combined low-frequency part of the waveform is maintained. This is the modeled phase response of the subject HRTF as a time delay, dependent on the direction and elevation. A scaling factor is a function of the anthropometric features. For example, a training set of N subjects would consider each HRTF phase and describe a single ITD scaling factor as the average delay of the group. This computed scaling factor can estimate the time delay as function of the direction and elevation for any given individual. Converting the time delay to phase response for the left and the right ears is trivial. The HRTF phase can be described by the ITD scaling factor. This is in turn quantified by the anthropometric data of a given individual taken as the source of reference. For a generic case we consider Ξ² as a sparse vector that represents the subject's anthropometric features as a linear superposition of the anthropometric features from the training data (y = Ξ² X), and then apply the same sparse vector directly on the scaling vector H. We can write this task as a minimization problem, for a non-negative shrinking parameter Ξ»: From this, ITD scaling factor value H is estimated as: where The ITD scaling factors for all persons in the dataset are stacked in a vector H ∈ R, so the value H corresponds to the scaling factor of the n-th person. HRTF magnitude synthesis We solve the above minimization problem using least absolute shrinkage and selection operator. We assume that the HRTFs are represented by the same relation as the anthropometric features. Therefore, once we learn the sparse vector Ξ² from the anthropometric features, we directly apply it to the HRTF tensor data and the subject's HRTF values H given by: where The HRTFs for each subject are described by a tensor of size DΒ Γ—Β K, where D is the number of HRTF directions and K is the number of frequency bins. All H corresponds to all the HRTFs of the training set are stacked in a new tensor H ∈ R, so the value H corresponds to the k-th frequency bin for d-th HRTF direction of the n-th person. Also H corresponds to k-th frequency for every d-th HRTF direction of the synthesized HRTF. HRTF from geometry Accumulation of HRTF data has made it possible for a computer program to infer an approximate HRTF from head geometry. Two programs are known to do so, both open-source: Mesh2HRTF, which runs physical simulation on a full 3D-mesh of the head, and EAC, which uses a neural network trained from existing HRTFs and works from photo and other rough measurements. 
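A rough sketch of the sparse-representation idea is shown below, using an off-the-shelf LASSO solver; the array shapes and variable names (anthro_train, hrtf_train, anthro_new) are assumptions for illustration, and the preprocessing of a real measurement pipeline is omitted:

import numpy as np
from sklearn.linear_model import Lasso

def synthesize_hrtf(anthro_train, hrtf_train, anthro_new, lam=0.01):
    """Express a new listener's anthropometry as a sparse combination of training subjects,
    then apply the same weights to the stacked HRTF tensor.
    anthro_train: (N subjects x F features), hrtf_train: (N x D x K), anthro_new: (F,)."""
    model = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
    model.fit(anthro_train.T, anthro_new)        # regress the feature vector over subjects
    beta = model.coef_                           # one (mostly zero) weight per training subject
    return np.tensordot(beta, hrtf_train, axes=1)  # weighted combination -> (D x K) synthesized HRTF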
Recording and playback technology Recordings processed via an HRTF, such as in a computer gaming environment (see A3D, EAX, and OpenAL), which approximates the HRTF of the listener, can be heard through stereo headphones or speakers and interpreted as if they comprise sounds coming from all directions, rather than just two points on either side of the head. The perceived accuracy of the result depends on how closely the HRTF data set matches the characteristics of one's own ears, though a generic HRTF may be preferred to an accurate one measured from one's own ear. Some vendors like Apple and Sony offer a variety of HRTFs to be selected by the user's ear shape. Windows 10 and above come with Microsoft Spatial Sound included, the same spatial audio framework used on Xbox One and Hololens 2. On a Windows PC or an Xbox One, the framework can use several different downstream audio processors, including Windows Sonic for Headphones, Dolby Atmos, and DTS Headphone:X, to apply an HRTF. The framework can render both fixed-position surround sound sources and dynamic "object" sources that can move in space. Apple similarly has Spatial Sound for its devices used with headphones produced by Apple or Beats. For music playback to headphones, Dolby Atmos can be enabled and the HRTF applied. The HRTF (or rather, the object positions) can vary with head tracking to maintain the illusion of direction. Qualcomm Snapdragon has a similar head-tracked spatial audio system, used by some brands of Android phones. YouTube uses head-tracked HRTF with 360-degree and VR videos. Linux is currently unable to directly process any of the proprietary spatial audio (surround plus dynamic objects) formats. SoundScape Renderer offers directional synthesis. PulseAudio and PipeWire each can provide virtual surround (fixed-location channels) using an HRTF. Recent PipeWire versions are also able to provide dynamic spatial rendering using HRTFs, though integration with applications is still in progress. Users can configure their own positional and dynamic sound sources, as well as simulate a surround speaker setup using existing configurations. The cross-platform OpenAL Soft, an implementation of OpenAL, uses HRTFs for improved localization. Windows and Linux spatial audio systems support any model of stereo headphones, while Apple only allows spatial audio to be used with Apple or Beats-branded Bluetooth headsets. See also 3D sound reconstruction A3D Binaural recording Dolby Atmos Dummy head recording Environmental audio extensions Head shadow OpenAL QSound Sensaura Sound localization Sound Retrieval System Soundbar Transfer function References External links Spatial Sound Tutorial CIPIC HRTF Database Listen HRTF Database High-resolution HRTF and 3D ear model database (48 subjects) AIR Database (HRTF database in reverberant environments) Full Sphere HRIR/HRTF Database of the Neumann KU100 MIT Database (one dataset) ARI (Acoustics Research Institute) Database (90+ datasets) Acoustics Electrical circuits Signal processing Control theory
Head-related transfer function
[ "Physics", "Mathematics", "Technology", "Engineering" ]
3,562
[ "Telecommunications engineering", "Computer engineering", "Signal processing", "Applied mathematics", "Control theory", "Classical mechanics", "Acoustics", "Electronic engineering", "Electrical circuits", "Electrical engineering", "Dynamical systems" ]
161,006
https://en.wikipedia.org/wiki/Unary%20operation
In mathematics, a unary operation is an operation with only one operand, i.e. a single input. This is in contrast to binary operations, which use two operands. An example is any function , where is a set; the function is a unary operation on . Common notations are prefix notation (e.g. ¬, −), postfix notation (e.g. factorial ), functional notation (e.g. or ), and superscripts (e.g. transpose ). Other notations exist as well, for example, in the case of the square root, a horizontal bar extending the square root sign over the argument can indicate the extent of the argument. Examples Absolute value Obtaining the absolute value of a number is a unary operation. This function is defined as where is the absolute value of . Negation Negation is used to find the negative value of a single number. Here are some examples: Factorial For any positive integer n, the product of the positive integers less than or equal to n is a unary operation called factorial. In the context of complex numbers, the gamma function is a unary operation that extends the factorial. Trigonometry In trigonometry, the trigonometric functions, such as , , and , can be seen as unary operations. This is because it is possible to provide only one term as input for these functions and retrieve a result. By contrast, binary operations, such as addition, require two different terms to compute a result. Examples from programming languages Many programming languages provide unary operators; common examples for several languages are listed below. JavaScript In JavaScript, these operators are unary: Increment: ++x, x++ Decrement: --x, x-- Positive: +x Negative: -x Ones' complement: ~x Logical negation: !x C family of languages In the C family of languages, the following operators are unary: Increment: ++x, x++ Decrement: --x, x-- Address: &x Indirection: *x Positive: +x Negative: -x Ones' complement: ~x Logical negation: !x Sizeof: sizeof x, sizeof(type-name) Cast: (type-name) cast-expression Unix shell (Bash) In the Unix shell (Bash/Bourne Shell), e.g., the following operators are unary: Pre and Post-Increment: ++$x, $x++ Pre and Post-Decrement: --$x, $x-- Positive: +$x Negative: -$x Logical negation: !$x Simple expansion: $x Complex expansion: ${#x} PowerShell In PowerShell, the following operators are unary: Increment: ++$x, $x++ Decrement: --$x, $x-- Positive: +$x Negative: -$x Logical negation: !$x Invoke in current scope: .$x Invoke in new scope: &$x Cast: [type-name] cast-expression Cast: +$x Array: ,$array See also Unary function Binary operation Iterated binary operation Binary function Ternary operation Arity Operation (mathematics) Operator (programming) References External links Elementary algebra Operators (programming)
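For comparison, Python offers an analogous set of unary operators; the short snippet below simply demonstrates them on an integer operand:

x = 5
print(-x)        # arithmetic negation             -> -5
print(+x)        # unary plus                      -> 5
print(~x)        # bitwise (ones') complement      -> -6
print(not x)     # logical negation                -> False
print(abs(-x))   # absolute value as a unary function -> 5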
Unary operation
[ "Mathematics" ]
712
[ "Functions and mappings", "Unary operations", "Mathematical objects", "Elementary algebra", "Elementary mathematics", "Mathematical relations", "Algebra" ]
161,016
https://en.wikipedia.org/wiki/R.%20T.%20Crowley
Robert T. Crowley (born March 2, 1948) is a pioneer in the development and practice of Electronic Data Interchange (EDI), an early component of electronic commerce. Crowley participated in the development of the early forms of EDI, working with Edward A. Guilbert, the creator of the technology, from 1977 onwards, and assisted in the development of UN/EDIFACT, the international EDI standard developed through the United Nations. Active in many EDI projects around the world, he served as Chair of ANSI ASC X12, the US national standards body for EDI, from 1993 to 1995. He is the founder of the EDI standards committee for the ocean transport industry (OCEAN), as well as the US Customs Electronic Systems Advisory Committee (CESAC), advising the US Customs Service (USCS) on matters of electronic commerce. Robert was also a founding member of TOPAS (Terminal Operator and Port Authority Subcommittee) that initiated EDI use between ship lines and terminal operators/ports. He also served as Chair of the X12 Security Task Group for a number of years, and was one of the authors of the X12 technical report on the use of Extensible Markup Language (XML) for conducting EDI. He is now vice chair of ISO Technical Committee 154 US Technical Advisory Group (ISO TC154 US TAG), and Editor of ISO 8601, Representation of Dates and Times. References Robert T. Crowley; Senior Vice President, Research Triangle Commerce, Inc. at investing.businessweek.com 1948 births Living people
R. T. Crowley
[ "Technology" ]
318
[ "Computing stubs", "Computer specialist stubs" ]
161,019
https://en.wikipedia.org/wiki/Negation
In logic, negation, also called the logical not or logical complement, is an operation that takes a proposition to another proposition "not ", written , , or . It is interpreted intuitively as being true when is false, and false when is true. For example, if is "Spot runs", then "not " is "Spot does not run". An operand of a negation is called a negand or negatum. Negation is a unary logical connective. It may furthermore be applied not only to propositions, but also to notions, truth values, or semantic values more generally. In classical logic, negation is normally identified with the truth function that takes truth to falsity (and vice versa). In intuitionistic logic, according to the Brouwer–Heyting–Kolmogorov interpretation, the negation of a proposition is the proposition whose proofs are the refutations of . Definition Classical negation is an operation on one logical value, typically the value of a proposition, that produces a value of true when its operand is false, and a value of false when its operand is true. Thus if statement is true, then (pronounced "not P") would then be false; and conversely, if is true, then would be false. The truth table of is as follows: {| class="wikitable" style="text-align:center; background-color: #ddffdd;" |- bgcolor="#ddeeff" | || |- | || |- | || |} Negation can be defined in terms of other logical operations. For example, can be defined as (where is logical consequence and is absolute falsehood). Conversely, one can define as for any proposition (where is logical conjunction). The idea here is that any contradiction is false, and while these ideas work in both classical and intuitionistic logic, they do not work in paraconsistent logic, where contradictions are not necessarily false. As a further example, negation can be defined in terms of NAND and can also be defined in terms of NOR. Algebraically, classical negation corresponds to complementation in a Boolean algebra, and intuitionistic negation to pseudocomplementation in a Heyting algebra. These algebras provide a semantics for classical and intuitionistic logic. Notation The negation of a proposition is notated in different ways, in various contexts of discussion and fields of application. The following table documents some of these variants: The notation is Polish notation. In set theory, is also used to indicate 'not in the set of': is the set of all members of that are not members of . Regardless how it is notated or symbolized, the negation can be read as "it is not the case that ", "not that ", or usually more simply as "not ". Precedence As a way of reducing the number of necessary parentheses, one may introduce precedence rules: Β¬ has higher precedence than ∧, ∧ higher than ∨, and ∨ higher than β†’. So for example, is short for Here is a table that shows a commonly used precedence of logical operators. Properties Double negation Within a system of classical logic, double negation, that is, the negation of the negation of a proposition , is logically equivalent to . Expressed in symbolic terms, . In intuitionistic logic, a proposition implies its double negation, but not conversely. This marks one important difference between classical and intuitionistic negation. Algebraically, classical negation is called an involution of period two. However, in intuitionistic logic, the weaker equivalence does hold. This is because in intuitionistic logic, is just a shorthand for , and we also have . Composing that last implication with triple negation implies that . 
As a result, in the propositional case, a sentence is classically provable if its double negation is intuitionistically provable. This result is known as Glivenko's theorem. Distributivity De Morgan's laws provide a way of distributing negation over disjunction and conjunction: ,Β  and . Linearity Let denote the logical xor operation. In Boolean algebra, a linear function is one such that: If there exists , , for all . Another way to express this is that each variable always makes a difference in the truth-value of the operation, or it never makes a difference. Negation is a linear logical operator. Self dual In Boolean algebra, a self dual function is a function such that: for all . Negation is a self dual logical operator. Negations of quantifiers In first-order logic, there are two quantifiers, one is the universal quantifier (means "for all") and the other is the existential quantifier (means "there exists"). The negation of one quantifier is the other quantifier ( and ). For example, with the predicate P as "x is mortal" and the domain of x as the collection of all humans, means "a person x in all humans is mortal" or "all humans are mortal". The negation of it is , meaning "there exists a person x in all humans who is not mortal", or "there exists someone who lives forever". Rules of inference There are a number of equivalent ways to formulate rules for negation. One usual way to formulate classical negation in a natural deduction setting is to take as primitive rules of inference negation introduction (from a derivation of to both and , infer ; this rule also being called reductio ad absurdum), negation elimination (from and infer ; this rule also being called ex falso quodlibet), and double negation elimination (from infer ). One obtains the rules for intuitionistic negation the same way but by excluding double negation elimination. Negation introduction states that if an absurdity can be drawn as conclusion from then must not be the case (i.e. is false (classically) or refutable (intuitionistically) or etc.). Negation elimination states that anything follows from an absurdity. Sometimes negation elimination is formulated using a primitive absurdity sign . In this case the rule says that from and follows an absurdity. Together with double negation elimination one may infer our originally formulated rule, namely that anything follows from an absurdity. Typically the intuitionistic negation of is defined as . Then negation introduction and elimination are just special cases of implication introduction (conditional proof) and elimination (modus ponens). In this case one must also add as a primitive rule ex falso quodlibet. Programming language and ordinary language As in mathematics, negation is used in computer science to construct logical statements. if (!(r == t)) { /*...statements executed when r does NOT equal t...*/ } The exclamation mark "!" signifies logical NOT in B, C, and languages with a C-inspired syntax such as C++, Java, JavaScript, Perl, and PHP. "NOT" is the operator used in ALGOL 60, BASIC, and languages with an ALGOL- or BASIC-inspired syntax such as Pascal, Ada, Eiffel and Seed7. Some languages (C++, Perl, etc.) provide more than one operator for negation. A few languages like PL/I and Ratfor use Β¬ for negation. Most modern languages allow the above statement to be shortened from if (!(r == t)) to if (r != t), which allows sometimes, when the compiler/interpreter is not able to optimize it, faster programs. In computer science there is also bitwise negation. 
This takes the value given and switches all the binary 1s to 0s and 0s to 1s. This is often used to create ones' complement (or "~" in C or C++) and two's complement (just simplified to "-" or the negative sign, as this is equivalent to taking the arithmetic negation of the number). To get the absolute (positive equivalent) value of a given integer the following would work as the "-" changes it from negative to positive (it is negative because "x < 0" yields true) unsigned int abs(int x) { if (x < 0) return -x; else return x; } To demonstrate logical negation: unsigned int abs(int x) { if (!(x < 0)) return x; else return -x; } Inverting the condition and reversing the outcomes produces code that is logically equivalent to the original code, i.e. will have identical results for any input (depending on the compiler used, the actual instructions performed by the computer may differ). In C (and some other languages descended from C), double negation (!!x) is used as an idiom to convert x to a canonical Boolean, ie. an integer with a value of either 0 or 1 and no other. Although any integer other than 0 is logically true in C and 1 is not special in this regard, it is sometimes important to ensure that a canonical value is used, for example for printing or if the number is subsequently used for arithmetic operations. The convention of using ! to signify negation occasionally surfaces in ordinary written speech, as computer-related slang for not. For example, the phrase !voting means "not voting". Another example is the phrase !clue which is used as a synonym for "no-clue" or "clueless". Kripke semantics In Kripke semantics where the semantic values of formulae are sets of possible worlds, negation can be taken to mean set-theoretic complementation (see also possible world semantics for more). See also Affirmation and negation (grammatical polarity) Ampheck Apophasis Binary opposition Bitwise NOT Contraposition Cyclic negation Negation as failure NOT gate Plato's beard Square of opposition References Further reading Gabbay, Dov, and Wansing, Heinrich, eds., 1999. What is Negation?, Kluwer. Horn, L., 2001. A Natural History of Negation, University of Chicago Press. G. H. von Wright, 1953–59, "On the Logic of Negation", Commentationes Physico-Mathematicae 22. Wansing, Heinrich, 2001, "Negation", in Goble, Lou, ed., The Blackwell Guide to Philosophical Logic, Blackwell. External links NOT, on MathWorld Tables of Truth of composite clauses Semantics Logical connectives Unary operations Articles with example C++ code Formal semantics (natural language)
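The classical identities discussed above (double negation and De Morgan's laws) can be checked exhaustively over the two truth values; a short Python sketch of such a check:

from itertools import product

for p, q in product([False, True], repeat=2):
    assert (not (not p)) == p                        # double negation elimination
    assert (not (p and q)) == ((not p) or (not q))   # De Morgan's first law
    assert (not (p or q)) == ((not p) and (not q))   # De Morgan's second law
print("all classical identities hold")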
Negation
[ "Mathematics" ]
2,260
[ "Mathematical objects", "Functions and mappings", "Mathematical relations", "Unary operations" ]
161,176
https://en.wikipedia.org/wiki/Webcomic
Webcomics (also known as online comics or Internet comics) are comics published on the internet, such as on a website or a mobile app. While many webcomics are published exclusively online, others are also published in magazines, newspapers, or comic books. Webcomics can be compared to self-published print comics in that anyone with an Internet connection can publish their own webcomic. Readership levels vary widely; many are read only by the creator's immediate friends and family, while some of the most widely read have audiences of well over one million readers. Webcomics range from traditional comic strips and graphic novels to avant garde comics, and cover many genres, styles, and subjects. They sometimes take on the role of a comic blog. The term web cartoonist is sometimes used to refer to someone who creates webcomics. Medium There are several differences between webcomics and print comics. With webcomics the restrictions of traditional books, newspapers or magazines can be lifted, allowing artists and writers to take advantage of the web's unique capabilities. Styles The creative freedom webcomics provide allows artists to work in nontraditional styles. Clip art or photo comics (also known as fumetti) are two types of webcomics that do not use traditional artwork. A Softer World, for example, is made by overlaying photographs with strips of typewriter-style text. As in the constrained comics tradition, a few webcomics, such as Dinosaur Comics by Ryan North, are created with most strips having art copied exactly from one (or a handful of) template comics and only the text changing. Pixel art, such as that created by Richard Stevens of Diesel Sweeties, is similar to that of sprite comics but instead uses low-resolution images created by the artist themself. However, it is also common for some artists to use traditional styles, similar to those typically published in newspapers or comic books. Content Webcomics that are independently published are not subject to the content restrictions of book publishers or newspaper syndicates, enjoying an artistic freedom similar to underground and alternative comics. Some webcomics stretch the boundaries of taste, taking advantage of the fact that Internet censorship is virtually nonexistent in countries like the United States. The content of webcomics can still cause problems, such as Leisure Town artist Tristan Farnon's legal trouble after creating a profane Dilbert parody, or the Catholic League's protest of artist Eric Millikin's "blasphemous treatment of Jesus." Format Webcomic artists use many formats throughout the world. Comic strips, generally consisting of three or four panels, have been a common format for many artists. Other webcomic artists use the format of traditional printed comic books and graphic novels, sometimes with the plan of later publishing books. Scott McCloud, an early advocate of webcomics since 1998, pioneered the idea of the "infinite canvas" where, rather than being confined to normal print dimensions, artists are free to spread out in any direction indefinitely with their comics. Such a format proved highly successful in South-Korean webcomics when JunKoo Kim implemented an infinite scrolling mechanism in the platform Webtoon in 2004. In 2009, French web cartoonist Balak described Turbomedia, a format for webcomics where a reader only views one panel at a time, in which the reader decides their own reading rhythm by going forward one panel at a time. 
Some web cartoonists, such as political cartoonist Mark Fiore or Charley Parker with Argon Zark!, incorporate animations or interactive elements into their webcomics. History The first comics to be shared through the Internet were Eric Millikin's Witches and Stitches, which he started uploading on CompuServe in 1985. Services such as CompuServe and Usenet were used before the World Wide Web started to rise in popularity in 1993. Early webcomics were often derivatives from strips in college newspapers, but when the Web became widely popular in the mid-1990s, more people started creating comics exclusively for this medium. By 2000, various webcomic creators were financially successful and webcomics became more artistically recognized. Unique genres and styles became popular during this period. The 2010s also saw the rise of webtoons in South Korea, where the form has become very prominent. This decade had also seen an increasingly larger number of successful webcomics being adapted into animated series in China and Japan. Webcomics collectives In March 1995, artist Bebe Williams launched one of the first webcomics collectives, Art Comics Daily. Newspaper comic strip syndicates also launched websites in the mid-1990s. Other webcomics collectives followed, with many launching in the next decade. In March 2000, Chris Crosby, Crosby's mother Teri, and other artists founded Keenspot. In July 2000, Austin Osueke launched eigoMANGA, publishing original online manga, referred to as "webmanga". In 2001, the subscription webcomics site Cool Beans World was launched. Contributors included UK-based comic book creators Pat Mills, Simon Bisley, John Bolton, and Kevin O'Neill, and the author Clive Barker. Serialised content included Scarlet Traces and Marshal Law. In March 2001, Shannon Denton and Patrick Coyle launched Komikwerks.com serving free strips from comics and animation professionals. The site launched with 9 titles including Steve Conley's Astounding Space Thrills, Jason Kruse's The World of Quest, and Bernie Wrightson's The Nightmare Expeditions. On March 2, 2002, Joey Manley founded Modern Tales, offering subscription-based webcomics. The Modern Tales spin-off serializer followed in October 2002, then came girlamatic and Graphic Smash in March and September 2003 respectively. By 2005, webcomics hosting had become a business in its own right, with sites such as Webcomics Nation. Traditional comic book publishers, such as Marvel Comics and Slave Labour Graphics, did not begin making serious digital efforts until 2006 and 2007. DC Comics launched its web comic imprint, Zuda Comics in October 2007. The site featured user submitted comics in a competition for a professional contract to produce web comics. In July 2010, it was announced that DC was closing down Zuda. Business Some creators of webcomics are able to do so professionally through various revenue channels. Webcomic artists may sell merchandise based on their work, such as T-shirts and toys, or they may sell print versions or compilations of their webcomic. Webcomic creators can also sell online advertisements on their websites. In the second half of the 2000s, webcomics became less financially sustainable due to the rise of social media and consumers' disinterest in certain kinds of merchandise. Crowdfunding through Kickstarter and Patreon have also become sources of income for web cartoonists. Webcomics have been used by some cartoonists as a path towards syndication in newspapers. 
Since the mid-1990s, Scott McCloud advocated for micropayments systems as a source of income for web cartoonists, but micropayment systems have not been popular with artists or readers. Awards Many webcomics artists have received honors for their work. In 2006, Gene Luen Yang's graphic novel American Born Chinese, originally published as a webcomic on Modern Tales, was the first graphic novel to be nominated for a National Book Award. Don Hertzfeldt's animated film based on his webcomics, Everything Will Be OK, won the 2007 Sundance Film Festival Jury Award in Short Filmmaking, a prize rarely bestowed on an animated film. Many traditionally print-comics focused organizations have added award categories for comics published on the web. The Eagle Awards established a Favorite Web-based Comic category in 2000, and the Ignatz Awards followed the next year by introducing an Outstanding Online Comic category in 2001. After having nominated webcomics in several of their traditional print-comics categories, the Eisner Awards began awarding comics in the Best Digital Comic category in 2005. In 2006 the Harvey Awards established a Best Online Comics Work category, and in 2007 the Shuster Awards began an Outstanding Canadian Web Comic Creator Award. In 2012 the National Cartoonists Society gave their first Reuben Award for "On-line comic strips." Other awards focus exclusively on webcomics. The Web Cartoonists' Choice Awards consist of a number of awards that were handed out annually from 2001 to 2008. The Dutch Clickburg Webcomic Awards (also known as the Clickies) has been handed out four times between 2005 and 2010. The awards require the recipient to be active in the Benelux countries, with the exception of one international award. Webcomics in print Though webcomics are typically published primarily on the World Wide Web, often webcomic creators decide to also print self-published books of their work. In some cases, web cartoonists may get publishing deals in which comic books are created of their work. Sometimes, these books are published by mainstream comics publishers who are traditionally aimed at the direct market of comic books stores. Some web cartoonists may pursue print syndication in established newspapers or magazines. The traditional audience base for webcomics and print comics are vastly different, and webcomic readers do not necessarily go to bookstores. For some web cartoonists, a print release may be considered the "goal" of a webcomic series, while for others, comic books are "just another way to get the content out." Webcomics have been seen by some artists as a potential new path towards syndication in newspapers. According to Jeph Jacques (Questionable Content), "there's no real money" in syndication for webcomic artists. Some artists are not able to syndicate their work in newspapers because their comics are targeted to a specific niche audience and would not be popular with a broader readership. Non-anglophone webcomics Many webcomics are published primarily in English, this being a major language in Australia, Canada, India, the United States, and the United Kingdom. Cultures surrounding non-anglophone webcomics have thrived in countries such as China, France, India, Japan, and South Korea. Webcomics have been a popular medium in India since the early 2000s. Indian webcomics are successful as they reach a large audience for free and they are frequently used by the country's younger generation to spread social awareness on topics such as politics and feminism. 
These webcomics achieve a large amount of exposure by being spread through social media. In China, Chinese webcomics have become a popular way to criticize the communist government and politicians in the country. Many webcomics by popular artists get shared around the country thanks to social networks such as Sina Weibo and WeChat. Many titles will often be censored or taken down by the government. See also Digital comic Digital illustration List of webcomic creators List of webcomics Web fiction Webtoon References Further reading External links The Rise of Web Comics Video produced by Off Book Comics formats New media New media art Multimedia Digital art Internet art Internet-based works Internet culture
Webcomic
[ "Technology" ]
2,276
[ "Multimedia", "New media", "Internet-based works" ]
161,212
https://en.wikipedia.org/wiki/Mkdir
The mkdir (make directory) command in the Unix, DOS, DR FlexOS, IBM OS/2, Microsoft Windows, and ReactOS operating systems is used to make a new directory. It is also available in the EFI shell and in the PHP scripting language. In DOS, OS/2, Windows and ReactOS, the command is often abbreviated to md. The command is analogous to the Stratus OpenVOS create_dir command. MetaComCo TRIPOS and AmigaDOS provide a similar MakeDir command to create new directories. The numerical computing environments MATLAB and GNU Octave include an mkdir function with similar functionality. History In early versions of Unix (4.1BSD and early versions of System V), this command had to be setuid root as the kernel did not have an mkdir syscall. Instead, it made the directory with mknod and linked in the . and .. directory entries manually. The command is available in MS-DOS versions 2 and later. Digital Research DR DOS 6.0 and Datalight ROM-DOS also include an implementation of the mkdir and md commands. The version of mkdir bundled in GNU coreutils was written by David MacKenzie. It is also available in the open source MS-DOS emulator DOSBox and in KolibriOS. Usage Normal usage is as straightforward as follows: mkdir name_of_directory where name_of_directory is the name of the directory one wants to create. When typed as above (i.e. normal usage), the new directory would be created within the current directory. On Unix and Windows (with Command extensions enabled, the default), multiple directories can be specified, and mkdir will try to create all of them. Options On Unix-like operating systems, mkdir takes options. The options are: -p (--parents): parents or path, will also create all directories leading up to the given directory that do not exist already. For example, mkdir -p a/b will create directory a if it doesn't exist, then will create directory b inside directory a. If the given directory already exists, ignore the error. -m (--mode): mode, specify the octal permissions of directories created by mkdir. -p is most often used when using mkdir to build up complex directory hierarchies, in case a necessary directory is missing or already there. -m is commonly used to lock down temporary directories used by shell scripts. Examples An example of -p in action is: mkdir -p /tmp/a/b/c If /tmp/a exists but /tmp/a/b does not, mkdir will create /tmp/a/b before creating /tmp/a/b/c. And an even more powerful command, creating a full tree at once (this, however, is a shell extension, not something mkdir does itself): mkdir -p tmpdir/{trunk/sources/{includes,docs},branches,tags} If one is using variables with mkdir in a bash script, the POSIX special built-in command eval serves this purpose: DOMAIN_NAME=includes,docs; eval "mkdir -p tmpdir/{trunk/sources/{${DOMAIN_NAME}},branches,tags}" This will create the directories tmpdir/branches, tmpdir/tags, tmpdir/trunk/sources/includes, and tmpdir/trunk/sources/docs, with the intermediate directories tmpdir, tmpdir/trunk, and tmpdir/trunk/sources created along the way. See also Filesystem Hierarchy Standard GNU Core Utilities Find – The find command coupled with mkdir can be used to only recreate a directory structure (without files). List of Unix commands List of DOS commands References Further reading External links Microsoft TechNet Mkdir article Unix SUS2008 utilities Plan 9 commands Inferno (operating system) commands Internal DOS commands MSX-DOS commands OS/2 commands ReactOS commands Windows commands Windows administration IBM i Qshell commands
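Like PHP, MATLAB and GNU Octave, other scripting languages expose the same functionality through a library call; for example, the Python standard-library call below is roughly equivalent in spirit to mkdir -p with a mode (an analogue given for illustration, not part of the mkdir utility itself):

import os

# Roughly equivalent in spirit to: mkdir -m 700 -p /tmp/a/b/c
# (exist_ok=True mirrors -p's behaviour of ignoring already-existing directories;
# the effective permissions are still subject to the process umask).
os.makedirs("/tmp/a/b/c", mode=0o700, exist_ok=True)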
Mkdir
[ "Technology" ]
842
[ "IBM i Qshell commands", "Windows commands", "Computing commands", "OS/2 commands", "ReactOS commands", "Plan 9 commands", "Inferno (operating system) commands", "MSX-DOS commands" ]
161,241
https://en.wikipedia.org/wiki/Altitude%20%28triangle%29
In geometry, an altitude of a triangle is a line segment through a given vertex (called apex) and perpendicular to a line containing the side or edge opposite the apex. This (finite) edge and (infinite) line extension are called, respectively, the base and extended base of the altitude. The point at the intersection of the extended base and the altitude is called the foot of the altitude. The length of the altitude, often simply called "the altitude" or "height", symbol , is the distance between the foot and the apex. The process of drawing the altitude from a vertex to the foot is known as dropping the altitude at that vertex. It is a special case of orthogonal projection. Altitudes can be used in the computation of the area of a triangle: one-half of the product of an altitude's length and its base's length (symbol ) equals the triangle's area: /2. Thus, the longest altitude is perpendicular to the shortest side of the triangle. The altitudes are also related to the sides of the triangle through the trigonometric functions. In an isosceles triangle (a triangle with two congruent sides), the altitude having the incongruent side as its base will have the midpoint of that side as its foot. Also the altitude having the incongruent side as its base will be the angle bisector of the vertex angle. In a right triangle, the altitude drawn to the hypotenuse divides the hypotenuse into two segments of lengths and . If we denote the length of the altitude by , we then have the relation Β  (Geometric mean theorem; see Special Cases, inverse Pythagorean theorem) For acute triangles, the feet of the altitudes all fall on the triangle's sides (not extended). In an obtuse triangle (one with an obtuse angle), the foot of the altitude to the obtuse-angled vertex falls in the interior of the opposite side, but the feet of the altitudes to the acute-angled vertices fall on the opposite extended side, exterior to the triangle. This is illustrated in the adjacent diagram: in this obtuse triangle, an altitude dropped perpendicularly from the top vertex, which has an acute angle, intersects the extended horizontal side outside the triangle. Theorems Orthocenter Altitude in terms of the sides For any triangle with sides and semiperimeter the altitude from side (the base) is given by This follows from combining Heron's formula for the area of a triangle in terms of the sides with the area formula where the base is taken as side and the height is the altitude from the vertex (opposite side ). By exchanging with or , this equation can also used to find the altitudes and , respectively. Inradius theorems Consider an arbitrary triangle with sides and with corresponding altitudes . The altitudes and the incircle radius are related by Circumradius theorem Denoting the altitude from one side of a triangle as , the other two sides as and , and the triangle's circumradius (radius of the triangle's circumscribed circle) as , the altitude is given by Interior point If are the perpendicular distances from any point to the sides, and are the altitudes to the respective sides, then Area theorem Denoting the altitudes of any triangle from sides respectively as , and denoting the semi-sum of the reciprocals of the altitudes as we have General point on an altitude If is any point on an altitude of any triangle , then Triangle inequality Since the area of the triangle is , the triangle inequality implies . 
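As a worked example of the altitude-from-the-sides relation, using its standard explicit form obtained by combining Heron's formula with the area formula (stated here as an assumption of the usual normalization): for a triangle with sides a = 14, b = 13, c = 15,

s = \frac{13 + 14 + 15}{2} = 21,
\qquad
h_a = \frac{2}{a}\sqrt{s(s-a)(s-b)(s-c)} = \frac{2}{14}\sqrt{21 \cdot 7 \cdot 8 \cdot 6} = \frac{2 \cdot 84}{14} = 12,

so the altitude to the side of length 14 has length 12, consistent with the triangle's area of 84.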
Special cases Equilateral triangle From any point within an equilateral triangle, the sum of the perpendiculars to the three sides is equal to the altitude of the triangle. This is Viviani's theorem. Right triangle In a right triangle with legs and and hypotenuse , each of the legs is also an altitude: and . The third altitude can be found by the relation This is also known as the inverse Pythagorean theorem. Note in particular: See also Median (geometry) Notes References External links Straight lines defined for a triangle de:HΓΆhe (Geometrie) he:Χ’Χ•Χ‘Χ” (Χ’ΧΧ•ΧžΧ˜Χ¨Χ™Χ”)
Altitude (triangle)
[ "Mathematics" ]
876
[ "Line (geometry)", "Straight lines defined for a triangle" ]
161,244
https://en.wikipedia.org/wiki/Orthocenter
The orthocenter of a triangle, usually denoted by , is the point where the three (possibly extended) altitudes intersect. The orthocenter lies inside the triangle if and only if the triangle is acute. For a right triangle, the orthocenter coincides with the vertex at the right angle. Formulation Let denote the vertices and also the angles of the triangle, and let be the side lengths. The orthocenter has trilinear coordinates and barycentric coordinates Since barycentric coordinates are all positive for a point in a triangle's interior but at least one is negative for a point in the exterior, and two of the barycentric coordinates are zero for a vertex point, the barycentric coordinates given for the orthocenter show that the orthocenter is in an acute triangle's interior, on the right-angled vertex of a right triangle, and exterior to an obtuse triangle. In the complex plane, let the points represent the numbers and assume that the circumcenter of triangle is located at the origin of the plane. Then, the complex number is represented by the point , namely the altitude of triangle . From this, the following characterizations of the orthocenter by means of free vectors can be established straightforwardly: The first of the previous vector identities is also known as the problem of Sylvester, proposed by James Joseph Sylvester. Properties Let denote the feet of the altitudes from respectively. Then: The product of the lengths of the segments that the orthocenter divides an altitude into is the same for all three altitudes: The circle centered at having radius the square root of this constant is the triangle's polar circle. The sum of the ratios on the three altitudes of the distance of the orthocenter from the base to the length of the altitude is 1: (This property and the next one are applications of a more general property of any interior point and the three cevians through it.) The sum of the ratios on the three altitudes of the distance of the orthocenter from the vertex to the length of the altitude is 2: The isogonal conjugate of the orthocenter is the circumcenter of the triangle. The isotomic conjugate of the orthocenter is the symmedian point of the anticomplementary triangle. Four points in the plane, such that one of them is the orthocenter of the triangle formed by the other three, is called an orthocentric system or orthocentric quadrangle. Orthocentric system Relation with circles and conics Denote the circumradius of the triangle by . Then In addition, denoting as the radius of the triangle's incircle, as the radii of its excircles, and again as the radius of its circumcircle, the following relations hold regarding the distances of the orthocenter from the vertices: If any altitude, for example, , is extended to intersect the circumcircle at , so that is a chord of the circumcircle, then the foot bisects segment : The directrices of all parabolas that are externally tangent to one side of a triangle and tangent to the extensions of the other sides pass through the orthocenter. A circumconic passing through the orthocenter of a triangle is a rectangular hyperbola. Relation to other centers, the nine-point circle The orthocenter , the centroid , the circumcenter , and the center of the nine-point circle all lie on a single line, known as the Euler line. 
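Because the orthocenter is the common point of the altitudes, it can be computed numerically by intersecting two of them; the minimal Python sketch below does this and reproduces the right-triangle case noted above, where the orthocenter is the vertex at the right angle:

import numpy as np

def orthocenter(A, B, C):
    """Intersect the altitudes from A and B: (H - A).(B - C) = 0 and (H - B).(A - C) = 0."""
    A, B, C = map(np.asarray, (A, B, C))
    M = np.array([B - C, A - C], dtype=float)
    rhs = np.array([np.dot(B - C, A), np.dot(A - C, B)], dtype=float)
    return np.linalg.solve(M, rhs)

print(orthocenter((0, 0), (4, 0), (0, 3)))   # right angle at A -> [0. 0.]
print(orthocenter((0, 0), (6, 0), (2, 4)))   # -> [2. 2.]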
The center of the nine-point circle lies at the midpoint of the Euler line, between the orthocenter and the circumcenter, and the distance between the centroid and the circumcenter is half of that between the centroid and the orthocenter: The orthocenter is closer to the incenter than it is to the centroid, and the orthocenter is farther than the incenter is from the centroid: In terms of the sides , , , inradius and circumradius , Orthic triangle If the triangle is oblique (does not contain a right-angle), the pedal triangle of the orthocenter of the original triangle is called the orthic triangle or altitude triangle. That is, the feet of the altitudes of an oblique triangle form the orthic triangle, . Also, the incenter (the center of the inscribed circle) of the orthic triangle is the orthocenter of the original triangle . Trilinear coordinates for the vertices of the orthic triangle are given by The extended sides of the orthic triangle meet the opposite extended sides of its reference triangle at three collinear points. In any acute triangle, the inscribed triangle with the smallest perimeter is the orthic triangle. This is the solution to Fagnano's problem, posed in 1775. The sides of the orthic triangle are parallel to the tangents to the circumcircle at the original triangle's vertices. The orthic triangle of an acute triangle gives a triangular light route. The tangent lines of the nine-point circle at the midpoints of the sides of are parallel to the sides of the orthic triangle, forming a triangle similar to the orthic triangle. The orthic triangle is closely related to the tangential triangle, constructed as follows: let be the line tangent to the circumcircle of triangle at vertex , and define analogously. Let The tangential triangle is , whose sides are the tangents to triangle 's circumcircle at its vertices; it is homothetic to the orthic triangle. The circumcenter of the tangential triangle, and the center of similitude of the orthic and tangential triangles, are on the Euler line. Trilinear coordinates for the vertices of the tangential triangle are given by The reference triangle and its orthic triangle are orthologic triangles. For more information on the orthic triangle, see here. History The theorem that the three altitudes of a triangle concur (at the orthocenter) is not directly stated in surviving Greek mathematical texts, but is used in the Book of Lemmas (proposition 5), attributed to Archimedes (3rd century BC), citing the "commentary to the treatise about right-angled triangles", a work which does not survive. It was also mentioned by Pappus (Mathematical Collection, VII, 62; 340). The theorem was stated and proved explicitly by al-Nasawi in his (11th century) commentary on the Book of Lemmas, and attributed to al-Quhi (). This proof in Arabic was translated as part of the (early 17th century) Latin editions of the Book of Lemmas, but was not widely known in Europe, and the theorem was therefore proven several more times in the 17th–19th century. Samuel Marolois proved it in his Geometrie (1619), and Isaac Newton proved it in an unfinished treatise Geometry of Curved Lines Later William Chapple proved it in 1749. A particularly elegant proof is due to FranΓ§ois-Joseph Servois (1804) and independently Carl Friedrich Gauss (1810): Draw a line parallel to each side of the triangle through the opposite point, and form a new triangle from the intersections of these three lines. 
Then the original triangle is the medial triangle of the new triangle, and the altitudes of the original triangle are the perpendicular bisectors of the new triangle, and therefore concur (at the circumcenter of the new triangle). See also Triangle center References External links Orthocenter of a triangle With interactive animation Animated demonstration of orthocenter construction Compass and straightedge. Fagnano's Problem by Jay Warendorff, Wolfram Demonstrations Project. Triangle centers
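A small self-contained check of the Euler line relations described above (an illustrative sketch, not from the article; the example coordinates and use of NumPy are assumptions).

```python
import numpy as np

A, B, C = np.array([0.0, 0.0]), np.array([6.0, 0.0]), np.array([1.0, 4.0])

# Circumcenter O from the equidistance conditions |OA| = |OB| = |OC|.
M = 2 * np.array([B - A, C - A])
rhs = np.array([B @ B - A @ A, C @ C - A @ A])
O = np.linalg.solve(M, rhs)

G = (A + B + C) / 3          # centroid
H = A + B + C - 2 * O        # orthocenter (vector characterization)
N = (O + H) / 2              # nine-point centre: midpoint of O and H

# Euler line checks: O, G, N, H are collinear and GH = 2 * GO.
cross = (H - O)[0] * (G - O)[1] - (H - O)[1] * (G - O)[0]
assert abs(cross) < 1e-9
assert np.isclose(np.linalg.norm(H - G), 2 * np.linalg.norm(O - G))
print("O =", O, "G =", G, "N =", N, "H =", H)
```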
Orthocenter
[ "Physics", "Mathematics" ]
1,654
[ "Point (geometry)", "Triangle centers", "Points defined for a triangle", "Geometric centers", "Symmetry" ]
161,253
https://en.wikipedia.org/wiki/Quantum%20fluctuation
In quantum physics, a quantum fluctuation (also known as a vacuum state fluctuation or vacuum fluctuation) is the temporary random change in the amount of energy in a point in space, as prescribed by Werner Heisenberg's uncertainty principle. They are minute random fluctuations in the values of the fields which represent elementary particles, such as electric and magnetic fields which represent the electromagnetic force carried by photons, W and Z fields which carry the weak force, and gluon fields which carry the strong force. The uncertainty principle states that the uncertainties in energy and time can be related by Ξ”E Ξ”t β‰₯ ħ/2, where ħ β‰ˆ 1.05 Γ— 10^-34 JΒ·s is the reduced Planck constant. This means that pairs of virtual particles with energy Ξ”E and lifetime shorter than Ξ”t are continually created and annihilated in empty space. Although the particles are not directly detectable, the cumulative effects of these particles are measurable. For example, without quantum fluctuations, the "bare" mass and charge of elementary particles would be infinite; from renormalization theory the shielding effect of the cloud of virtual particles is responsible for the finite mass and charge of elementary particles. Another consequence is the Casimir effect. One of the first observations which was evidence for vacuum fluctuations was the Lamb shift in hydrogen. In July 2020, scientists reported that quantum vacuum fluctuations can influence the motion of macroscopic, human-scale objects by measuring correlations below the standard quantum limit between the position/momentum uncertainty of the mirrors of LIGO and the photon number/phase uncertainty of light that they reflect. Field fluctuations In quantum field theory, fields undergo quantum fluctuations. A reasonably clear distinction can be made between quantum fluctuations and thermal fluctuations of a quantum field (at least for a free field; for interacting fields, renormalization substantially complicates matters). An illustration of this distinction can be seen by considering quantum and classical Klein–Gordon fields: For the quantized Klein–Gordon field in the vacuum state, the probability density that we would observe a configuration Ο† at a time t can be calculated in terms of its Fourier transform. In contrast, for the classical Klein–Gordon field at non-zero temperature, the Gibbs probability density of observing a configuration Ο† at a time t can be written down analogously. These probability distributions illustrate that every possible configuration of the field is possible, with the amplitude of quantum fluctuations controlled by the Planck constant, just as the amplitude of thermal fluctuations is controlled by kBT, where kB is the Boltzmann constant. Note that the following three points are closely related: the Planck constant has units of action (joule-seconds) instead of units of energy (joules), the quantum kernel differs from the classical thermal kernel (the quantum kernel is nonlocal from a classical heat kernel viewpoint, but it is local in the sense that it does not allow signals to be transmitted), the quantum vacuum state is Lorentz-invariant (although not manifestly in the above), whereas the classical thermal state is not (the classical dynamics is Lorentz-invariant, but the Gibbs probability density is not a Lorentz-invariant initial condition).
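As a rough order-of-magnitude illustration of the energy-time relation quoted above (a sketch, not from the article; the choice of an electron-positron pair and the variable names are assumptions of the example):

```python
# Estimate how long a virtual electron-positron pair can exist, using
# delta_t ~ hbar / (2 * delta_E). Values are illustrative only.
HBAR = 1.054571817e-34          # reduced Planck constant, J*s
EV = 1.602176634e-19            # joules per electronvolt
ELECTRON_MASS_EV = 0.511e6      # electron rest energy, eV

delta_E = 2 * ELECTRON_MASS_EV * EV   # energy borrowed for the pair, J
delta_t = HBAR / (2 * delta_E)        # upper bound on the pair's lifetime, s

print(f"delta_E = {delta_E:.3e} J, delta_t ~ {delta_t:.1e} s")
# delta_t ~ 3e-22 s
```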
A classical continuous random field can be constructed that has the same probability density as the quantum vacuum state, so that the principal difference from quantum field theory is the measurement theory (measurement in quantum theory is different from measurement for a classical continuous random field, in that classical measurements are always mutually compatible – in quantum-mechanical terms they always commute). See also Cosmic microwave background False vacuum Hawking radiation Quantum annealing Quantum foam Stochastic interpretation Vacuum energy Vacuum polarization Virtual black hole Zitterbewegung References Quantum mechanics Inflation (cosmology) Articles containing video clips Energy (physics)
Quantum fluctuation
[ "Physics", "Mathematics" ]
722
[ "Physical quantities", "Quantity", "Theoretical physics", "Quantum mechanics", "Energy (physics)", "Wikipedia categories named after physical quantities" ]
161,267
https://en.wikipedia.org/wiki/Sitting
Sitting is a basic action and resting position in which the body weight is supported primarily by the bony ischial tuberosities with the buttocks in contact with the ground or a horizontal surface such as a chair seat, instead of by the lower limbs as in standing, squatting or kneeling. When sitting, the torso is more or less upright, although sometimes it can lean against other objects for a more relaxed posture. Sitting for much of the day may pose significant health risks, with one study suggesting people who sit regularly for prolonged periods may have higher mortality rates than those who do not. The average person sits down for 4.7 hours per day, according to a global review representing 47% of the global adult population. The form of kneeling where the buttocks sit back on the heels, for example as in the Seiza and Vajrasana postures, is also often interpreted as sitting. Prevalence The British Chiropractic Association said in 2006 that 32% of the British population spent more than ten hours per day sitting down. Positions On the floor The most common ways of sitting on the floor involve bending the knees. One can also sit with the legs unbent, using something solid as support for the back or leaning on one's arms. Sitting with bent legs can be done with the legs mostly parallel or by crossing them over each other. A common cross-legged position is with the lower part of both legs folded towards the body, crossing each other at the ankle or calf, with both ankles on the floor, sometimes with the feet tucked under the knees or thighs. The position is known in several European languages as tailor's posture, from the traditional working posture of tailors . It is also named after various plains-dwelling nomads: in American English Indian style, in many European languages "Turkish style", and in Japanese . In yoga it is known as sukhasana, meaning "easy pose." On a raised seat Various raised surfaces at the appropriate height can be used as seats for humans, whether they are made for the purpose, such as chairs, stools and benches, or not. While the buttocks are nearly always rested on the raised surface, there are many differences in how one can hold one's legs and back. There are two major styles of sitting on a raised surface. The first has one or two of the legs in front of the sitting person; in the second, sitting astride something, the legs incline outwards on either side of the body. The feet can rest on the floor or on a footrest, which can keep them vertical, horizontal, or at an angle in between. They can also dangle if the seat is sufficiently high. Legs can be kept right to the front of the body, spread apart, or one crossed over the other. The upper body can be held upright, recline to either side or backward, or one can lean forward. Yoga, traditions and spirituality There are many seated positions in various traditions and rituals. Four examples are: ζ­£εΊ§ (zhengzuo) is a Chinese word which describes the traditional formal way of sitting in Ancient China. A related position is θ·ͺεΊ§, which differs in the tops of the feet being raised off the ground. Vajrasana (Diamond Pose) is a yoga posture (asana) similar to seiza. The lotus position involves resting each foot on the opposite thigh so that the soles of the feet face upwards. The Burmese position, named so because of its use in Buddhist sculptures in Burma, places both feet in front of the pelvis with knees bent and touching the floor to the sides. 
The heels are pointing toward pelvis or upward, and toes are pointed so that the tops of the feet lie on the ground. This looks similar to the cross-legged position, but the feet are not placed underneath the thigh of the next leg, therefore the legs do not cross. Instead, one foot is placed in front of the other. In various mythologies and folk magic, sitting is a magical act that connects the person who sits with other persons, states or places. Kneeling chairs The kneeling chair (often just referred to as "ergonomic chair") was designed to motivate better posture than the conventional chair. To sit in a kneeling chair, one rests one's buttocks on the upper sloping pad and rests the front of the lower legs atop the lower pad, i.e., the human position as both sitting and kneeling at the same time. Health risks In 1700, De Morbis Artificum Diatriba listed sitting in odd postures as a cause of diseases in "chair-workers". Current studies indicate there is a significantly higher mortality rate among people who regularly sit for prolonged periods, and the risk is not negated by regular exercise, though it is lowered. The causes of mortality and morbidity include heart disease, obesity, type 2 diabetes and cancer, specifically, breast, endometrial, colorectal, lung, and epithelial ovarian cancer. The link between heart disease and diabetes mortality and sitting is well-established, but the risk of cancer mortality is unclear. Sedentary time is also associated with an increased risk of depression in children and adolescents. A correlation between occupational sitting specifically and higher body mass index has been demonstrated, but causality has not yet been established. There are several hypotheses explaining why sitting is a health risk. These include changes in cardiac output, vitamin D, inflammation, sex hormone activity, lipoprotein lipase activity, and GLUT4 activity due to long periods of muscular unloading, among others. Sitting may occupy up to half of an adult's workday in developed countries. Workplace programs to reduce sitting vary in method. They include sit-stand desks, counseling, workplace policy changes, walking or standing meetings, treadmill desks, breaks, therapy ball chairs, and stepping devices. Results of these programs are mixed, but there is moderate evidence to show that changes to chairs (adjusting the biomechanics of the chair or using different types of chairs) can effectively reduce musculoskeletal symptoms in workers who sit for most of their day. Public health programs typically focus on increasing physical activity rather than reducing sitting time. One major target for these public health programs is sitting in the workplace. For example, WHO Europe recommended in September 2015 the provision of adjustable desks in the workplace. In general, there is conflicting evidence regarding the precise risks of sitting for long periods. A 2018 Cochrane review found low-quality evidence that providing employees with a standing desk option may reduce the length of time some people sit at work in the first year. This reduction in sitting may decrease with time, and there is no evidence that standing desks are effective in the long term. In addition, a 2018 British Journal of Medicine systematic review concluded that interventions aimed at reducing sitting outside of work were only modestly effective. It is not clear how standing desks compare to other work-place interventions to reduce the length of time employees are sitting during the work day. 
Relationship between posture and health conditions Though most studies even until early 21st century relate human body postures to various musculoskeletal conditions, recent researches show no potential causal relationship between postures and these conditions like back pain; other causes like sleep deprivation, stress and long-term physical inactivity or prolonged static unnatural postural stress could be significant confounders for various health conditions. However some research show that prolonged slouched position may be a cause for minor breathing disorders. Though still a large proportion of the clinical practitioners attribute absence of a neutral spine posture as one of the main causes of conditions like back pain and neck pain, the relationship is not thoroughly established. It is also thought that much of so-called "poor posture" is actually just postural stress and being stuck with bad ergonomics that could be causing the pain, and not really a postural problem. iHunch is an example of postural stress which could cause upper back pain and neck pain, which is prevalent in younger generations and people whose occupation involves prolonged usage of computers. The concept of "good posture" has led to a common misconception that sitting in one good sitting position will allay the negative effects of sitting. Sedentary behaviour Sedentary behaviour is any waking behaviour, whether in sitting or reclining posture, by an energy expenditure less than or equal to 1.5 metabolic equivalents of task (METs). MET, beside the watt and kilojoules, is the unit for expressing the energy cost of physical activities. One MET is defined as resting metabolic rate – as energy used with a person at rest, sitting quietly in a chair or as the amount of oxygen (O2) consumed with that person. MET for an adult weighing 70Β kg equals 3.5 ml O2 per kg body weight per min. Sedentary behaviour should be distinguished from being inactive – performing insufficient amounts of MVPA (moderate to vigorous physical activity). The World Health Organization recommends at least 60 min of daily MVPA for children and adolescents aged 5–17 years, and 150 min of weekly MVPA for adults. Sedentary behaviour can not be equated with screen time, although some researchers found out that a large share of waking time by children and adolescents in a sedentary position is accumulated by media consumption in front of a screen. See also Baddha Koṇāsana Bharadvajasana Coccydynia Right to sit Flandrin pose Siddhasana Sitting disability Sitting on one's haunches Notes References Further reading External links Human positions
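A purely illustrative sketch of the MET arithmetic described above (not from the article; the activity MET values and the 70 kg body mass are assumptions chosen for the example):

```python
# Illustrative only: convert a MET value into an oxygen-uptake estimate and
# classify it against the 1.5 MET sedentary threshold described above.
ML_O2_PER_KG_MIN_PER_MET = 3.5
SEDENTARY_THRESHOLD_MET = 1.5

def oxygen_uptake_ml_per_min(mets: float, body_mass_kg: float) -> float:
    """Approximate oxygen consumption in ml O2 per minute."""
    return mets * ML_O2_PER_KG_MIN_PER_MET * body_mass_kg

def is_sedentary(mets: float) -> bool:
    """Sedentary behaviour: at most 1.5 METs (while sitting or reclining)."""
    return mets <= SEDENTARY_THRESHOLD_MET

for activity, mets in [("sitting quietly", 1.0), ("desk work", 1.5), ("walking", 3.5)]:
    print(activity, round(oxygen_uptake_ml_per_min(mets, 70.0), 1), "ml O2/min,",
          "sedentary" if is_sedentary(mets) else "not sedentary")
```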
Sitting
[ "Biology" ]
1,966
[ "Behavior", "Human positions", "Human behavior" ]
161,278
https://en.wikipedia.org/wiki/Period%205%20element
A period 5 element is one of the chemical elements in the fifth row (or period) of the periodic table of the chemical elements. The periodic table is laid out in rows to illustrate recurring (periodic) trends in the chemical behaviour of the elements as their atomic number increases: a new row is begun when chemical behaviour begins to repeat, meaning that elements with similar behaviour fall into the same vertical columns. The fifth period contains 18 elements, beginning with rubidium and ending with xenon. As a rule, period 5 elements fill their 5s shells first, then their 4d, and 5p shells, in that order; however, there are exceptions, such as rhodium. Physical properties This period contains technetium, one of the two elements until lead that has no stable isotopes (along with promethium), as well as molybdenum and iodine, two of the heaviest elements with a known biological role. Niobium has the largest known magnetic penetration depth of all the elements. Zirconium is one of the main components of zircon crystals, currently the oldest known minerals in the Earth's crust. Many later transition metals, such as rhodium, are very commonly used in jewelry as they are very shiny. This period is known to have a large number of exceptions to the Madelung rule. Elements and their properties {| class="wikitable sortable" ! colspan="3" | Chemical element ! Block ! Electron configuration |- !Β  ! ! ! ! |- bgcolor="" || 37 || Rb || Rubidium || s-block || [Kr] 5s1 |- bgcolor="" || 38 || Sr || Strontium || s-block || [Kr] 5s2 |- bgcolor="" || 39 || Y || Yttrium || d-block || [Kr] 4d1 5s2 |- bgcolor="" || 40 || Zr || Zirconium || d-block || [Kr] 4d2 5s2 |- bgcolor="" || 41 || Nb || Niobium || d-block || [Kr] 4d4 5s1 (*) |- bgcolor="" || 42 || Mo || Molybdenum || d-block || [Kr] 4d5 5s1 (*) |- bgcolor="" || 43 || Tc || Technetium || d-block || [Kr] 4d5 5s2 |- bgcolor="" || 44 || Ru || Ruthenium || d-block || [Kr] 4d7 5s1 (*) |- bgcolor="" || 45 || Rh || Rhodium || d-block|| [Kr] 4d8 5s1 (*) |- bgcolor="" || 46 || Pd || Palladium || d-block || [Kr] 4d10 (*) |- bgcolor="" || 47 || Ag || Silver || d-block || [Kr] 4d10 5s1 (*) |- bgcolor="" || 48 || Cd || Cadmium || d-block || [Kr] 4d10 5s2 |- bgcolor="" || 49 || In || Indium || p-block || [Kr] 4d10 5s2 5p1 |- bgcolor="" || 50 || Sn || Tin || p-block || [Kr] 4d10 5s2 5p2 |- bgcolor="" || 51 || Sb || Antimony || p-block || [Kr] 4d10 5s2 5p3 |- bgcolor="" || 52 || Te || Tellurium || p-block || [Kr] 4d10 5s2 5p4 |- bgcolor="" || 53 || I || Iodine || p-block || [Kr] 4d10 5s2 5p5 |- bgcolor="" || 54 || Xe || Xenon || p-block || [Kr] 4d10 5s2 5p6 |} (*) Exception to the Madelung rule s-block elements Rubidium Rubidium is the first element placed in period 5. It is an alkali metal, the most reactive group in the periodic table, having properties and similarities with both other alkali metals and other period 5 elements. For example, rubidium has 5 electron shells, a property found in all other period 5 elements, whereas its electron configuration's ending is similar to all other alkali metals: s1. Rubidium also follows the trend of increasing reactivity as the atomic number increases in the alkali metals, for it is more reactive than potassium, but less so than caesium. In addition, both potassium and rubidium yield almost the same hue when ignited, so researchers must use different methods to differentiate between these two 1st group elements. 
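The (*) exceptions in the table can be illustrated with a short sketch (not from the article) that fills subshells in the standard Madelung order and compares the prediction with the configurations listed above; the helper names and the small dictionary of actual valence occupancies, copied from the table, are specific to this example.

```python
# Sketch: predict electron configurations by the Madelung (aufbau) rule and
# flag the period-5 d-block exceptions relative to the table above.
SUBSHELL_ORDER = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p",
                  "5s", "4d", "5p", "6s", "4f", "5d", "6p"]
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}

def aufbau_configuration(z):
    """Return {subshell: electrons} predicted by the Madelung rule."""
    config, remaining = {}, z
    for sub in SUBSHELL_ORDER:
        if remaining == 0:
            break
        n = min(CAPACITY[sub[-1]], remaining)
        config[sub] = n
        remaining -= n
    return config

# Actual valence occupancies (5s, 4d) for the exceptions marked (*) above.
ACTUAL = {"Nb": (41, {"5s": 1, "4d": 4}), "Mo": (42, {"5s": 1, "4d": 5}),
          "Ru": (44, {"5s": 1, "4d": 7}), "Rh": (45, {"5s": 1, "4d": 8}),
          "Pd": (46, {"5s": 0, "4d": 10}), "Ag": (47, {"5s": 1, "4d": 10})}

for symbol, (z, actual) in ACTUAL.items():
    predicted = aufbau_configuration(z)
    pred_valence = {"5s": predicted.get("5s", 0), "4d": predicted.get("4d", 0)}
    print(symbol, "Madelung:", pred_valence, "actual:", actual,
          "-> exception" if pred_valence != actual else "-> regular")
```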
Rubidium is very susceptible to oxidation in air, similar to most of the other alkali metals, so it readily transforms into rubidium oxide, a yellow solid with the chemical formula Rb2O. Strontium Strontium is the second element placed in the 5th period. It is an alkaline earth metal, a relatively reactive group, although not nearly as reactive as the alkali metals. Like rubidium, it has 5 electron shells or energy levels, and in accordance with the Madelung rule it has two electrons in its 5s subshell. Strontium is a soft metal and is extremely reactive upon contact with water. If it comes in contact with water, it will combine with the atoms of both oxygen and hydrogen to form strontium hydroxide and pure hydrogen gas which quickly diffuses in the air. In addition, strontium, like rubidium, oxidizes in air and turns a yellow color. When ignited, it will burn with a strong red flame. d-block elements Yttrium Yttrium is a chemical element with symbol Y and atomic number 39. It is a silvery-metallic transition metal chemically similar to the lanthanides and it has often been classified as a "rare earth element". Yttrium is almost always found combined with the lanthanides in rare earth minerals and is never found in nature as a free element. Its only stable isotope, 89Y, is also its only naturally occurring isotope. In 1787, Carl Axel Arrhenius found a new mineral near Ytterby in Sweden and named it ytterbite, after the village. Johan Gadolin discovered yttrium's oxide in Arrhenius' sample in 1789, and Anders Gustaf Ekeberg named the new oxide yttria. Elemental yttrium was first isolated in 1828 by Friedrich WΓΆhler. The most important use of yttrium is in making phosphors, such as the red ones used in television set cathode-ray tube (CRT) displays and in LEDs. Other uses include the production of electrodes, electrolytes, electronic filters, lasers and superconductors; various medical applications; and as traces in various materials to enhance their properties. Yttrium has no known biological role, and exposure to yttrium compounds can cause lung disease in humans. Zirconium Zirconium is a chemical element with the symbol Zr and atomic number 40. The name of zirconium is taken from the mineral zircon. Its atomic mass is 91.224. It is a lustrous, gray-white, strong transition metal that resembles titanium. Zirconium is mainly used as a refractory and opacifier, although minor amounts are used as alloying agent for its strong resistance to corrosion. Zirconium is obtained mainly from the mineral zircon, which is the most important form of zirconium in use. Zirconium forms a variety of inorganic and organometallic compounds such as zirconium dioxide and zirconocene dichloride, respectively. Five isotopes occur naturally, three of which are stable. Zirconium compounds have no biological role. Niobium Niobium, or columbium, is a chemical element with the symbol Nb and atomic number 41. It is a soft, grey, ductile transition metal, which is often found in the pyrochlore mineral, the main commercial source for niobium, and columbite. The name comes from Greek mythology: Niobe, daughter of Tantalus. Niobium has physical and chemical properties similar to those of the element tantalum, and the two are therefore difficult to distinguish. The English chemist Charles Hatchett reported a new element similar to tantalum in 1801, and named it columbium. In 1809, the English chemist William Hyde Wollaston wrongly concluded that tantalum and columbium were identical. 
The German chemist Heinrich Rose determined in 1846 that tantalum ores contain a second element, which he named niobium. In 1864 and 1865, a series of scientific findings clarified that niobium and columbium were the same element (as distinguished from tantalum), and for a century both names were used interchangeably. The name of the element was officially adopted as niobium in 1949. It was not until the early 20th century that niobium was first used commercially. Brazil is the leading producer of niobium and ferroniobium, an alloy of niobium and iron. Niobium is used mostly in alloys, the largest part in special steel such as that used in gas pipelines. Although alloys contain only a maximum of 0.1%, that small percentage of niobium improves the strength of the steel. The temperature stability of niobium-containing superalloys is important for its use in jet and rocket engines. Niobium is used in various superconducting materials. These superconducting alloys, also containing titanium and tin, are widely used in the superconducting magnets of MRI scanners. Other applications of niobium include its use in welding, nuclear industries, electronics, optics, numismatics and jewelry. In the last two applications, niobium's low toxicity and ability to be colored by anodization are particular advantages. Molybdenum Molybdenum is a Group 6 chemical element with the symbol Mo and atomic number 42. The name is from Neo-Latin Molybdaenum, from Ancient Greek , meaning lead, itself proposed as a loanword from Anatolian Luvian and Lydian languages, since its ores were confused with lead ores. The free element, which is a silvery metal, has the sixth-highest melting point of any element. It readily forms hard, stable carbides, and for this reason it is often used in high-strength steel alloys. Molybdenum does not occur as a free metal on Earth, but rather in various oxidation states in minerals. Industrially, molybdenum compounds are used in high-pressure and high-temperature applications, as pigments and catalysts. Molybdenum minerals have long been known, but the element was "discovered" (in the sense of differentiating it as a new entity from the mineral salts of other metals) in 1778 by Carl Wilhelm Scheele. The metal was first isolated in 1781 by Peter Jacob Hjelm. Most molybdenum compounds have low solubility in water, but the molybdate ion MoO42βˆ’ is soluble and forms when molybdenum-containing minerals are in contact with oxygen and water. Technetium Technetium is the chemical element with atomic number 43 and symbol Tc. It is the lowest atomic number element without any stable isotopes; every form of it is radioactive. Nearly all technetium is produced synthetically and only minute amounts are found in nature. Naturally occurring technetium occurs as a spontaneous fission product in uranium ore or by neutron capture in molybdenum ores. The chemical properties of this silvery gray, crystalline transition metal are intermediate between rhenium and manganese. Many of technetium's properties were predicted by Dmitri Mendeleev before the element was discovered. Mendeleev noted a gap in his periodic table and gave the undiscovered element the provisional name ekamanganese (Em). In 1937 technetium (specifically the technetium-97 isotope) became the first predominantly artificial element to be produced, hence its name (from the Greek , meaning "artificial"). Its short-lived gamma ray-emitting nuclear isomerβ€”technetium-99mβ€”is used in nuclear medicine for a wide variety of diagnostic tests. 
Technetium-99 is used as a gamma ray-free source of beta particles. Long-lived technetium isotopes produced commercially are by-products of fission of uranium-235 in nuclear reactors and are extracted from nuclear fuel rods. Because no isotope of technetium has a half-life longer than 4.2Β million years (technetium-98), its detection in red giants in 1952, which are billions of years old, helped bolster the theory that stars can produce heavier elements. Ruthenium Ruthenium is a chemical element with symbol Ru and atomic number 44. It is a rare transition metal belonging to the platinum group of the periodic table. Like the other metals of the platinum group, ruthenium is inert to most chemicals. The Russian scientist Karl Ernst Claus discovered the element in 1844 and named it after Ruthenia, the Latin word for Rus'. Ruthenium usually occurs as a minor component of platinum ores and its annual production is only about 12 tonnes worldwide. Most ruthenium is used for wear-resistant electrical contacts and the production of thick-film resistors. A minor application of ruthenium is its use in some platinum alloys. Rhodium Rhodium is a chemical element that is a rare, silvery-white, hard, and chemically inert transition metal and a member of the platinum group. It has the chemical symbol Rh and atomic number 45. It is composed of only one isotope, 103Rh. Naturally occurring rhodium is found as the free metal, alloyed with similar metals, and never as a chemical compound. It is one of the rarest precious metals and one of the most costly (gold has since taken over the top spot of cost per ounce). Rhodium is a so-called noble metal, resistant to corrosion, found in platinum or nickel ores together with the other members of the platinum group metals. It was discovered in 1803 by William Hyde Wollaston in one such ore, and named for the rose color of one of its chlorine compounds, produced after it reacted with the powerful acid mixture aqua regia. The element's major use (about 80% of world rhodium production) is as one of the catalysts in the three-way catalytic converters of automobiles. Because rhodium metal is inert against corrosion and most aggressive chemicals, and because of its rarity, rhodium is usually alloyed with platinum or palladium and applied in high-temperature and corrosion-resistive coatings. White gold is often plated with a thin rhodium layer to improve its optical impression while sterling silver is often rhodium plated for tarnish resistance. Rhodium detectors are used in nuclear reactors to measure the neutron flux level. Palladium Palladium is a chemical element with the chemical symbol Pd and an atomic number of 46. It is a rare and lustrous silvery-white metal discovered in 1803 by William Hyde Wollaston. He named it after the asteroid Pallas, which was itself named after the epithet of the Greek goddess Athena, acquired by her when she slew Pallas. Palladium, platinum, rhodium, ruthenium, iridium and osmium form a group of elements referred to as the platinum group metals (PGMs). These have similar chemical properties, but palladium has the lowest melting point and is the least dense of them. The unique properties of palladium and other platinum group metals account for their widespread use. A quarter of all goods manufactured today either contain PGMs or have a significant part in their manufacturing process played by PGMs. 
Over half of the supply of palladium and its congener platinum goes into catalytic converters, which convert up to 90% of harmful gases from auto exhaust (hydrocarbons, carbon monoxide, and nitrogen dioxide) into less-harmful substances (nitrogen, carbon dioxide and water vapor). Palladium is also used in electronics, dentistry, medicine, hydrogen purification, chemical applications, and groundwater treatment. Palladium plays a key role in the technology used for fuel cells, which combine hydrogen and oxygen to produce electricity, heat, and water. Ore deposits of palladium and other PGMs are rare, and the most extensive deposits have been found in the norite belt of the Bushveld Igneous Complex covering the Transvaal Basin in South Africa, the Stillwater Complex in Montana, United States, the Thunder Bay District of Ontario, Canada, and the Norilsk Complex in Russia. Recycling is also a source of palladium, mostly from scrapped catalytic converters. The numerous applications and limited supply sources of palladium result in the metal attracting considerable investment interest. Silver Silver is a metallic chemical element with the chemical symbol Ag (, from the Indo-European root *arg- for "grey" or "shining") and atomic number 47. A soft, white, lustrous transition metal, it has the highest electrical conductivity of any element and the highest thermal conductivity of any metal. The metal occurs naturally in its pure, free form (native silver), as an alloy with gold and other metals, and in minerals such as argentite and chlorargyrite. Most silver is produced as a byproduct of copper, gold, lead, and zinc refining. Silver has long been valued as a precious metal, and it is used to make ornaments, jewelry, high-value tableware, utensils (hence the term silverware), and currency coins. Today, silver metal is also used in electrical contacts and conductors, in mirrors and in catalysis of chemical reactions. Its compounds are used in photographic film, and dilute silver nitrate solutions and other silver compounds are used as disinfectants and microbiocides. While many medical antimicrobial uses of silver have been supplanted by antibiotics, further research into clinical potential continues. Cadmium Cadmium is a chemical element with the symbol Cd and atomic number 48. This soft, bluish-white metal is chemically similar to the two other stable metals in group 12, zinc and mercury. Like zinc, it prefers oxidation state +2 in most of its compounds and like mercury it shows a low melting point compared to transition metals. Cadmium and its congeners are not always considered transition metals, in that they do not have partly filled d or f electron shells in the elemental or common oxidation states. The average concentration of cadmium in the Earth's crust is between 0.1 and 0.5 parts per million (ppm). It was discovered in 1817 simultaneously by Stromeyer and Hermann, both in Germany, as an impurity in zinc carbonate. Cadmium occurs as a minor component in most zinc ores and therefore is a byproduct of zinc production. It was used for a long time as a pigment and for corrosion resistant plating on steel while cadmium compounds were used to stabilize plastic. With the exception of its use in nickel–cadmium batteries and cadmium telluride solar panels, the use of cadmium is generally decreasing. These declines have been due to competing technologies, cadmium's toxicity in certain forms and concentration and resulting regulations. 
p-block elements Indium Indium is a chemical element with the symbol In and atomic number 49. This rare, very soft, malleable and easily fusible other metal is chemically similar to gallium and thallium, and shows the intermediate properties between these two. Indium was discovered in 1863 and named for the indigo blue line in its spectrum that was the first indication of its existence in zinc ores, as a new and unknown element. The metal was first isolated in the following year. Zinc ores continue to be the primary source of indium, where it is found in compound form. Very rarely the element can be found as grains of native (free) metal, but these are not of commercial importance. Indium's current primary application is to form transparent electrodes from indium tin oxide in liquid crystal displays and touchscreens, and this use largely determines its global mining production. It is widely used in thin-films to form lubricated layers (during World War II it was widely used to coat bearings in high-performance aircraft). It is also used for making particularly low melting point alloys, and is a component in some lead-free solders. Indium is not known to be used by any organism. In a similar way to aluminium salts, indium(III) ions can be toxic to the kidney when given by injection, but oral indium compounds do not have the chronic toxicity of salts of heavy metals, probably due to poor absorption in basic conditions. Radioactive indium-111 (in very small amounts on a chemical basis) is used in nuclear medicine tests, as a radiotracer to follow the movement of labeled proteins and white blood cells in the body. Tin Tin is a chemical element with the symbol Sn (for ) and atomic number 50. It is a main-group metal in group 14 of the periodic table. Tin shows chemical similarity to both neighboring group 14 elements, germanium and lead and has two possible oxidation states, +2 and the slightly more stable +4. Tin is the 49th most abundant element and has, with 10 stable isotopes, the largest number of stable isotopes in the periodic table. Tin is obtained chiefly from the mineral cassiterite, where it occurs as tin dioxide, SnO2. This silvery, malleable post-transition metal is not easily oxidized in air and is used to coat other metals to prevent corrosion. The first alloy, used in large scale since 3000 BC, was bronze, an alloy of tin and copper. After 600 BC pure metallic tin was produced. Pewter, which is an alloy of 85–90% tin with the remainder commonly consisting of copper, antimony and lead, was used for tableware from the Bronze Age until the 20th century. In modern times tin is used in many alloys, most notably tin/lead soft solders, typically containing 60% or more of tin. Another large application for tin is corrosion-resistant tin plating of steel. Because of its low toxicity, tin-plated metal is also used for food packaging, giving the name to tin cans, which are made mostly of steel. Antimony Antimony () is a toxic chemical element with the symbol Sb and an atomic number of 51. A lustrous grey metalloid, it is found in nature mainly as the sulfide mineral stibnite (Sb2S3). Antimony compounds have been known since ancient times and were used for cosmetics, metallic antimony was also known but mostly identified as lead. For some time China has been the largest producer of antimony and its compounds, with most production coming from the Xikuangshan Mine in Hunan. 
Antimony compounds are prominent additives for chlorine and bromine containing fire retardants found in many commercial and domestic products. The largest application for metallic antimony is as alloying material for lead and tin. It improves the properties of the alloys which are used as in solders, bullets and ball bearings. An emerging application is the use of antimony in microelectronics. Tellurium Tellurium is a chemical element that has the symbol Te and atomic number 52. A brittle, mildly toxic, rare, silver-white metalloid which looks similar to tin, tellurium is chemically related to selenium and sulfur. It is occasionally found in native form, as elemental crystals. Tellurium is far more common in the universe than on Earth. Its extreme rarity in the Earth's crust, comparable to that of platinum, is partly due to its high atomic number, but also due to its formation of a volatile hydride which caused the element to be lost to space as a gas during the hot nebular formation of the planet. Tellurium was discovered in Transylvania (today part of Romania) in 1782 by Franz-Joseph MΓΌller von Reichenstein in a mineral containing tellurium and gold. Martin Heinrich Klaproth named the new element in 1798 after the Latin word for "earth", tellus. Gold telluride minerals (responsible for the name of Telluride, Colorado) are the most notable natural gold compounds. However, they are not a commercially significant source of tellurium itself, which is normally extracted as by-product of copper and lead production. Tellurium is commercially primarily used in alloys, foremost in steel and copper to improve machinability. Applications in solar panels and as a semiconductor material also consume a considerable fraction of tellurium production. Iodine Iodine is a chemical element with the symbol I and atomic number 53. The name is from Greek ioeidΔ“s, meaning violet or purple, due to the color of elemental iodine vapor. Iodine and its compounds are primarily used in nutrition, and industrially in the production of acetic acid and certain polymers. Iodine's relatively high atomic number, low toxicity, and ease of attachment to organic compounds have made it a part of many X-ray contrast materials in modern medicine. Iodine has only one stable isotope. A number of iodine radioisotopes are also used in medical applications. Iodine is found on Earth mainly as the highly water-soluble iodide Iβˆ’, which concentrates it in oceans and brine pools. Like the other halogens, free iodine occurs mainly as a diatomic molecule I2, and then only momentarily after being oxidized from iodide by an oxidant like free oxygen. In the universe and on Earth, iodine's high atomic number makes it a relatively rare element. However, its presence in ocean water has given it a role in biology (see below). Xenon Xenon is a chemical element with the symbol Xe and atomic number 54. A colorless, heavy, odorless noble gas, xenon occurs in the Earth's atmosphere in trace amounts. Although generally unreactive, xenon can undergo a few chemical reactions such as the formation of xenon hexafluoroplatinate, the first noble gas compound to be synthesized. Naturally occurring xenon consists of nine stable isotopes. There are also over 40 unstable isotopes that undergo radioactive decay. The isotope ratios of xenon are an important tool for studying the early history of the Solar System. 
Radioactive xenon-135 is produced from iodine-135 as a result of nuclear fission, and it acts as the most significant neutron absorber in nuclear reactors. Xenon is used in flash lamps and arc lamps, and as a general anesthetic. The first excimer laser design used a xenon dimer molecule (Xe2) as its lasing medium, and the earliest laser designs used xenon flash lamps as pumps. Xenon is also being used to search for hypothetical weakly interacting massive particles and as the propellant for ion thrusters in spacecraft. Biological role Rubidium, strontium, yttrium, zirconium, and niobium have no biological role. Yttrium can cause lung disease in humans. Molybdenum-containing enzymes are used as catalysts by some bacteria to break the chemical bond in atmospheric molecular nitrogen, allowing biological nitrogen fixation. At least 50 molybdenum-containing enzymes are now known in bacteria and animals, though only the bacterial and cyanobacterial enzymes are involved in nitrogen fixation. Owing to the diverse functions of the remainder of the enzymes, molybdenum is a required element for life in higher organisms (eukaryotes), though not in all bacteria. Technetium, ruthenium, rhodium, palladium, and silver have no biological role. Although cadmium has no known biological role in higher organisms, a cadmium-dependent carbonic anhydrase has been found in marine diatoms. Rats fed a tin-free diet exhibited improper growth, but the evidence for essentiality is otherwise limited. Indium has no biological role and can be toxic as well as antimony. Tellurium has no biological role, although fungi can incorporate it in place of sulfur and selenium into amino acids such as tellurocysteine and telluromethionine. In humans, tellurium is partly metabolized into dimethyl telluride, (CH3)2Te, a gas with a garlic-like odor which is exhaled in the breath of victims of tellurium toxicity or exposure. Iodine is the heaviest essential element utilized widely by life in biological functions (only tungsten, employed in enzymes by a few species of bacteria, is heavier). Iodine's rarity in many soils, due to initial low abundance as a crust-element, and also leaching of soluble iodide by rainwater, has led to many deficiency problems in land animals and inland human populations. Iodine deficiency affects about two billion people and is the leading preventable cause of intellectual disabilities. Iodine is required by higher animals, which use it to synthesize thyroid hormones, which contain the element. Because of this function, radioisotopes of iodine are concentrated in the thyroid gland along with nonradioactive iodine. The radioisotope iodine-131, which has a high fission product yield, concentrates in the thyroid, and is one of the most carcinogenic of nuclear fission products. Xenon has no biological role, and is used as a general anaesthetic. References Periods (periodic table) Period 5
Period 5 element
[ "Chemistry" ]
6,414
[ "Periodic table", "Periods (periodic table)" ]
161,291
https://en.wikipedia.org/wiki/Noble%20metal
A noble metal is ordinarily regarded as a metallic element that is generally resistant to corrosion and is usually found in nature in its raw form. Gold, platinum, and the other platinum group metals (ruthenium, rhodium, palladium, osmium, iridium) are most often so classified. Silver, copper, and mercury are sometimes included as noble metals, but each of these usually occurs in nature combined with sulfur. In more specialized fields of study and applications the number of elements counted as noble metals can be smaller or larger. It is sometimes used for the three metals copper, silver, and gold which have filled d-bands, while it is often used mainly for silver and gold when discussing surface-enhanced Raman spectroscopy involving metal nanoparticles. It is sometimes applied more broadly to any metallic or semimetallic element that does not react with a weak acid and give off hydrogen gas in the process. This broader set includes copper, mercury, technetium, rhenium, arsenic, antimony, bismuth, polonium, gold, the six platinum group metals, and silver. Many of the noble metals are used in alloys for jewelry or coinage. In dentistry, silver is not always considered a noble metal because it is subject to corrosion when present in the mouth. All the metals are important heterogeneous catalysts. Meaning and history While lists of noble metals can differ, they tend to cluster around gold and the six platinum group metals: ruthenium, rhodium, palladium, osmium, iridium, and platinum. In addition to this term's function as a compound noun, there are circumstances where noble is used as an adjective for the noun metal. A galvanic series is a hierarchy of metals (or other electrically conductive materials, including composites and semimetals) that runs from noble to active, and allows one to predict how materials will interact in the environment used to generate the series. In this sense of the word, graphite is more noble than silver and the relative nobility of many materials is highly dependent upon context, as for aluminium and stainless steel in conditions of varying pH. The term noble metal can be traced back to at least the late 14th century and has slightly different meanings in different fields of study and application. Prior to Mendeleev's publication in 1869 of the first (eventually) widely accepted periodic table, Odling published a table in 1864, in which the "noble metals" rhodium, ruthenium, palladium; and platinum, iridium, and osmium were grouped together, and adjacent to silver and gold. Properties Geochemical The noble metals are siderophiles (iron-lovers). They tend to sink into the Earth's core because they dissolve readily in iron either as solid solutions or in the molten state. Most siderophile elements have practically no affinity whatsoever for oxygen: indeed, oxides of gold are thermodynamically unstable with respect to the elements. Copper, silver, gold, and the six platinum group metals are the only native metals that occur naturally in relatively large amounts. Corrosion resistance Noble metals tend to be resistant to oxidation and other forms of corrosion, and this corrosion resistance is often considered to be a defining characteristic. Some exceptions are described below. Copper is dissolved by nitric acid and aqueous potassium cyanide. Ruthenium can be dissolved in aqua regia, a highly concentrated mixture of hydrochloric acid and nitric acid, only when in the presence of oxygen, while rhodium must be in a fine pulverized form. 
Palladium and silver are soluble in nitric acid, while silver's solubility in aqua regia is limited by the formation of silver chloride precipitate. Rhenium reacts with oxidizing acids and hydrogen peroxide, and is said to be tarnished by moist air. Osmium and iridium are chemically inert in ambient conditions. Platinum and gold can be dissolved in aqua regia. Mercury reacts with oxidising acids. In 2010, US researchers discovered that an organic "aqua regia" in the form of a mixture of thionyl chloride SOCl2 and the organic solvent pyridine C5H5N achieved "high dissolution rates of noble metals under mild conditions, with the added benefit of being tunable to a specific metal", for example, gold but not palladium or platinum. Gold can, however, also be dissolved in selenic acid (H2SeO4). Anion (-ide) formation The noble metals gold and platinum also have a comparatively high electronegativity for metallic elements, allowing them to exist as single-metal anions. For example: Cs + Au -> CsAu (caesium auride, a yellow crystalline salt containing the Au- ion). Platinum exhibits similar behaviour, forming BaPt, BaPt2 and Cs2Pt (barium and caesium platinides, which are reddish salts). Electronic The expression noble metal is sometimes confined to copper, silver, and gold since their full d-subshells can contribute to their noble character. There are also known to be significant contributions from how readily there is overlap of the d-electron states with the orbitals of other elements, particularly for gold. Relativistic contributions are also important, playing a role in the catalytic properties of gold. The elements to the left of gold and silver have incompletely filled d-bands, which is believed to play a role in their catalytic properties. A common explanation is the d-band filling model of Hammer and Jens NΓΈrskov, where the total d-bands are considered, not just the unoccupied states. The low-energy plasmon properties are also of some importance, particularly those of silver and gold nanoparticles for surface-enhanced Raman spectroscopy, localized surface plasmons and other plasmonic properties. Electrochemical Standard reduction potentials in aqueous solution are also a useful way of predicting the non-aqueous chemistry of the metals involved. Thus, metals with highly negative potentials, such as sodium or potassium, will ignite in air, forming the respective oxides. These fires cannot be extinguished with water, which also reacts with the metals involved to give hydrogen, which is itself explosive. Noble metals, in contrast, are disinclined to react with oxygen and, for that reason (as well as their scarcity), have been valued for millennia and used in jewellery and coins. The adjacent table lists standard reduction potentials in volts, electronegativity (revised Pauling), and electron affinity values (kJ/mol) for some metals and metalloids. The simplified entries in the reaction column can be read in detail from the Pourbaix diagrams of the considered element in water. Noble metals have large positive potentials; elements not in this table have a negative standard potential or are not metals. Electronegativity is included since it is reckoned to be "a major driver of metal nobleness and reactivity". The black tarnish commonly seen on silver arises from its sensitivity to sulphur-containing gases such as hydrogen sulfide: 2 Ag + H2S + Β½ O2 β†’ Ag2S + H2O.
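A small sketch (not from the article) of how standard reduction potentials separate noble from active metals. The potentials are common textbook values for the couples noted in the comments, and the two reference lines are the hydrogen couple at 0 V and the oxygen-reduction couple near +0.40 V discussed in the galvanic-corrosion passage that follows; the variable names are assumptions of the example.

```python
# Illustrative sketch: rank metals by standard reduction potential (V vs SHE,
# common textbook values) and flag them against two reference couples:
# H+/H2 at 0.00 V and O2/OH- (neutral water) at about +0.40 V.
STANDARD_POTENTIALS_V = {
    "Au": 1.50,   # Au3+ + 3e- -> Au
    "Pt": 1.18,   # Pt2+ + 2e- -> Pt
    "Pd": 0.95,   # Pd2+ + 2e- -> Pd
    "Ag": 0.80,   # Ag+  +  e- -> Ag
    "Cu": 0.34,   # Cu2+ + 2e- -> Cu
    "Fe": -0.44,  # Fe2+ + 2e- -> Fe
    "Na": -2.71,  # Na+  +  e- -> Na
}

H2_COUPLE_V = 0.00
O2_COUPLE_V = 0.40

for metal, e0 in sorted(STANDARD_POTENTIALS_V.items(), key=lambda kv: -kv[1]):
    liberates_h2 = e0 < H2_COUPLE_V       # would displace hydrogen from dilute acid
    anodic_in_water = e0 < O2_COUPLE_V    # would act as the anode against the O2 couple
    print(f"{metal}: E0 = {e0:+.2f} V | liberates H2 from acid: {liberates_h2} "
          f"| anodic vs O2/OH-: {anodic_in_water}")
```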
Rayner-Canham contends that "silver is so much more chemically-reactive and has such a different chemistry, that it should not be considered as a 'noble metal'." In dentistry, silver is not regarded as a noble metal due to its tendency to corrode in the oral environment. The relevance of the entry for water is addressed by Li et al. in the context of galvanic corrosion. Such a process will only occur when: "(1) two metals which have different electrochemical potentials are...connected, (2) an aqueous phase with electrolyte exists, and (3) one of the two metals has...potential lower than the potential of the reaction (O2 + 4e- + 2H2O = 4OH-) which is 0.4 V...The...metal with...a potential less than 0.4 V acts as an anode...loses electrons...and dissolves in the aqueous medium. The noble metal (with higher electrochemical potential) acts as a cathode and, under many conditions, the reaction on this electrode is generally O2 + 4e- + 2H2O = 4OH-." The superheavy elements from hassium (element 108) to livermorium (116) inclusive are expected to be "partially very noble metals"; chemical investigations of hassium have established that it behaves like its lighter congener osmium, and preliminary investigations of nihonium and flerovium have suggested but not definitively established noble behavior. Copernicium's behaviour seems to partly resemble both its lighter congener mercury and the noble gas radon. Oxides As long ago as 1890, Hiorns observed as follows: "Noble Metals. Gold, Platinum, Silver, and a few rare metals. The members of this class have little or no tendency to unite with oxygen in the free state, and when placed in water at a red heat do not alter its composition. The oxides are readily decomposed by heat in consequence of the feeble affinity between the metal and oxygen." Smith, writing in 1946, continued the theme: "There is no sharp dividing line [between 'noble metals' and 'base metals'] but perhaps the best definition of a noble metal is a metal whose oxide is easily decomposed at a temperature below a red heat." "It follows from this that noble metals...have little attraction for oxygen and are consequently not oxidised or discoloured at moderate temperatures." Such nobility is mainly associated with the relatively high electronegativity values of the noble metals, resulting in only weakly polar covalent bonding with oxygen. The table lists the melting points of the oxides of the noble metals, and for some of those of the non-noble metals, for the elements in their most stable oxidation states. Catalytic properties All the noble metals can act as catalysts. For example, platinum is used in catalytic converters, devices which convert toxic gases produced in car engines, such as the oxides of nitrogen, into non-polluting substances. Gold has many industrial applications; it is used as a catalyst in hydrogenation and the water gas shift reaction. See also Galvanic series Minor metals Hallmark Precious metal Notes References Further reading Balshaw L 2020, "Noble metals dissolved without aqua regia", Chemistry World, 1 September Beamish FE 2012, The analytical chemistry of the noble metals, Elsevier Science, Burlington Brasser R, Mojzsis SJ 2017, "A colossal impact enriched Mars' mantle with noble metals", Geophys. Res. Lett., vol. 44, pp. 5978–5985, Brooks RR (ed.)
1992, Noble metals and biological systems: Their role in medicine, mineral exploration, and the environment, CRC Press, Boca Raton Brubaker PE, Moran JP, Bridbord K, Hueter FG 1975, "Noble metals: a toxicological appraisal of potential new environmental contaminants", Environmental Health Perspectives, vol. 10, pp.Β 39–56, Du R et al. 2019, "Emerging noble metal aerogels: State of the art and a look forward", Matter, vol. 1, pp.Β 39–56 HΓ€mΓ€lΓ€inen J, Ritala M, LeskelΓ€ M 2013, "Atomic layer deposition of noble metals and their oxides", Chemistry of Materials, vol. 26, no. 1, pp.Β 786–801, Kepp K 2020, "Chemical causes of metal nobleness", ChemPhysChem, vol. 21 no. 5. pp.Β 360βˆ’369, Lal H, Bhagat SN 1985, "Gradation of the metallic character of noble metals on the basis of thermoelectric properties", Indian Journal of Pure and Applied Physics, vol. 23, no. 11, pp.Β 551–554 Lyon SB 2010, "3.21 - Corrosion of noble metals", in B Cottis et al. (eds.), Shreir's Corrosion, Elsevier, pp.Β 2205–2223, Medici S, Peana MF, Zoroddu MA 2018, "Noble metals in pharmaceuticals: Applications and limitations", in M Rai M, Ingle, S Medici (eds.), Biomedical applications of metals, Springer, Pan S et al. 2019, "Noble-noble strong union: Gold at its best to make a bond with a noble gas atom", ChemistryOpen, vol. 8, p.Β 173, Russel A 1931, "Simple deposition of reactive metals on noble metals", Nature, vol. 127, pp.Β 273–274, St. John J et al. 1984, Noble metals, Time-Life Books, Alexandria, VA Wang H 2017, "Chapter 9 - Noble Metals", in LY Jiang, N Li (eds.), Membrane-based separations in metallurgy, Elsevier, pp.Β 249–272, External links Noble metal – chemistry EncyclopΓ¦dia Britannica, online edition Chemical nomenclature Metallurgy
Noble metal
[ "Chemistry", "Materials_science", "Engineering" ]
2,777
[ "Metallurgy", "Materials science", "nan" ]
161,293
https://en.wikipedia.org/wiki/Amphoterism
In chemistry, an amphoteric compound is a molecule or ion that can react both as an acid and as a base. What exactly this can mean depends on which definitions of acids and bases are being used. Etymology and terminology Amphoteric is derived from the Greek word amphoteroi, meaning "both". Related words in acid-base chemistry are amphichromatic and amphichroic, both describing substances such as acid-base indicators which give one colour on reaction with an acid and another colour on reaction with a base. Amphiprotism Amphiprotism is exhibited by compounds with both Brønsted acidic and basic properties. A prime example is H2O. Amphiprotic molecules can either donate or accept a proton (H+). Amino acids (and proteins) are amphiprotic molecules because of their amine (-NH2) and carboxylic acid (-COOH) groups. Ampholytes Ampholytes are zwitterions: molecules or ions that contain both acidic and basic functional groups. Amino acids have both a basic amine group (-NH2) and an acidic carboxylic acid group (-COOH). Often such species exist as several structures in chemical equilibrium: H2N-RCH-CO2H + H2O <=> H2N-RCH-COO- + H3O+ <=> H3N+-RCH-COOH + OH- <=> H3N+-RCH-COO- + H2O In approximately neutral aqueous solution (pH ≅ 7), the basic amino group is mostly protonated and the carboxylic acid is mostly deprotonated, so that the predominant species is the zwitterion H3N+-RCH-COO-. The pH at which the average charge is zero is known as the molecule's isoelectric point. Ampholytes are used to establish a stable pH gradient for use in isoelectric focusing. Metal oxides which react with both acids and bases to produce salts and water are known as amphoteric oxides. Many metals (such as zinc, tin, lead, aluminium, and beryllium) form amphoteric oxides or hydroxides. Aluminium oxide (Al2O3) is an example of an amphoteric oxide. Amphoterism depends on the oxidation state of the oxide. Amphoteric oxides include lead(II) oxide and zinc oxide, among many others. Amphiprotic molecules According to the Brønsted–Lowry theory of acids and bases, acids are proton donors and bases are proton acceptors. An amphiprotic molecule (or ion) can either donate or accept a proton, thus acting either as an acid or a base. Water, amino acids, the hydrogencarbonate ion (or bicarbonate ion) HCO3-, the dihydrogen phosphate ion H2PO4-, and the hydrogensulfate ion (or bisulfate ion) HSO4- are common examples of amphiprotic species. Since they can donate a proton, all amphiprotic substances contain a hydrogen atom. Also, since they can act like an acid or a base, they are amphoteric. Examples The water molecule is amphoteric in aqueous solution. It can either gain a proton to form a hydronium ion H3O+, or else lose a proton to form a hydroxide ion OH-. Another possibility is the molecular autoionization reaction between two water molecules, in which one water molecule acts as an acid and another as a base. H2O + H2O <=> H3O+ + OH- The bicarbonate ion, HCO3-, is amphoteric as it can act as either an acid or a base: As an acid, losing a proton: HCO3- + OH- <=> CO3^2- + H2O As a base, accepting a proton: HCO3- + H+ <=> H2CO3 Note: in dilute aqueous solution the formation of the hydronium ion, H3O+, is effectively complete, so that hydration of the proton can be ignored in relation to the equilibria. Other inorganic examples include the anions of polyprotic acids such as sulfuric acid, phosphoric acid and hydrogen sulfide that have lost one or more protons. In organic chemistry and biochemistry, important examples include amino acids and derivatives of citric acid. Although an amphiprotic species must be amphoteric, the converse is not true.
For example, a metal oxide such as zinc oxide, ZnO, contains no hydrogen and so cannot donate a proton. Nevertheless, it can act as an acid by reacting with the hydroxide ion, a base: ZnO + 2 OH- + H2O -> [Zn(OH)4]^2- Zinc oxide can also act as a base: ZnO + 2 H+ -> Zn^2+ + H2O Oxides Zinc oxide (ZnO) reacts both with acids and with bases: ZnO + \overset{acid}{H2SO4} -> ZnSO4 + H2O ZnO + \overset{base}{2 NaOH} + H2O -> Na2[Zn(OH)4] This reactivity can be used to separate different cations, for instance zinc(II), which dissolves in base, from manganese(II), which does not dissolve in base. Lead oxide (PbO): PbO + \overset{acid}{2 HCl} -> PbCl2 + H2O PbO + \overset{base}{2 NaOH} + H2O -> Na2[Pb(OH)4] Lead oxide (PbO2): PbO2 + \overset{acid}{4 HCl} -> PbCl4 + 2H2O PbO2 + \overset{base}{2 NaOH} + 2H2O -> Na2[Pb(OH)6] Aluminium oxide (Al2O3): Al2O3 + \overset{acid}{6 HCl} -> 2 AlCl3 + 3 H2O Al2O3 + \overset{base}{2 NaOH} + 3 H2O -> 2 Na[Al(OH)4] (hydrated sodium aluminate) Stannous oxide (SnO): SnO + \overset{acid}{2 HCl} <=> SnCl2 + H2O SnO + \overset{base}{4 NaOH} + H2O <=> Na4[Sn(OH)6] Stannic oxide (SnO2): SnO2 + \overset{acid}{4 HCl} <=> SnCl4 + 2H2O SnO2 + \overset{base}{4 NaOH} + 2H2O <=> Na4[Sn(OH)8] Vanadium dioxide (VO2): VO2 + \overset{acid}{2 HCl} -> VOCl2 + H2O 4 VO2 + \overset{base}{2 NaOH} -> Na2V4O9 + H2O Some other elements which form amphoteric oxides are gallium, indium, scandium, titanium, zirconium, chromium, iron, cobalt, copper, silver, gold, germanium, antimony, bismuth, beryllium, and tellurium. Hydroxides Aluminium hydroxide is also amphoteric: Al(OH)3 + \overset{acid}{3 HCl} -> AlCl3 + 3 H2O Al(OH)3 + \overset{base}{NaOH} -> Na[Al(OH)4] Beryllium hydroxide: Be(OH)2 + \overset{acid}{2 HCl} -> BeCl2 + 2 H2O Be(OH)2 + \overset{base}{2 NaOH} -> Na2[Be(OH)4] Chromium hydroxide: Cr(OH)3 + \overset{acid}{3 HCl} -> CrCl3 + 3H2O Cr(OH)3 + \overset{base}{NaOH} -> Na[Cr(OH)4] See also Ate complex Isoelectric point Zwitterion References Acid–base chemistry Chemical properties General chemistry
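As a worked illustration of the isoelectric point mentioned earlier in this article, the following short LaTeX sketch estimates the pI of a simple amino acid as the mean of its two pKa values. The glycine figures quoted are commonly cited values supplied here only as an illustrative assumption and should be checked against a reference table.

% Estimate of the isoelectric point (pI) of a simple amino acid with one
% acidic and one basic group; the glycine pKa values are commonly quoted
% figures used here only as an illustrative assumption.
\[
  \mathrm{pI} \approx \tfrac{1}{2}\left(\mathrm{p}K_{a1} + \mathrm{p}K_{a2}\right)
\]
\[
  \text{Glycine:}\quad \mathrm{p}K_{a1} \approx 2.34,\quad \mathrm{p}K_{a2} \approx 9.60,\quad
  \mathrm{pI} \approx \tfrac{1}{2}(2.34 + 9.60) \approx 5.97
\]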
Amphoterism
[ "Chemistry" ]
1,714
[ "Acid–base chemistry", "Amphoteric compounds", "Acids", "Equilibrium chemistry", "nan", "Bases (chemistry)" ]
161,296
https://en.wikipedia.org/wiki/Colony%20%28biology%29
In biology, a colony is composed of two or more conspecific individuals living in close association with, or connected to, one another. This association is usually for mutual benefit such as stronger defense or the ability to attack bigger prey. Colonies can form in various shapes and ways depending on the organism involved. For instance, the bacterial colony is a cluster of identical cells (clones). These colonies often form and grow on the surface of (or within) a solid medium, usually derived from a single parent cell. Colonies, in the context of development, may be composed of two or more unitary (or solitary) organisms or be modular organisms. Unitary organisms have determinate development (set life stages) from zygote to adult form and individuals or groups of individuals (colonies) are visually distinct. Modular organisms have indeterminate growth forms (life stages not set) through repeated iteration of genetically identical modules (or individuals), and it can be difficult to distinguish between the colony as a whole and the modules within. In the latter case, modules may have specific functions within the colony. In contrast, solitary organisms do not associate with colonies; they are ones in which all individuals live independently and have all of the functions needed to survive and reproduce. Some organisms are primarily independent and form facultative colonies in reply to environmental conditions while others must live in a colony to survive (obligate). For example, some carpenter bees will form colonies when a dominant hierarchy is formed between two or more nest foundresses (facultative colony), while corals are animals that are physically connected by living tissue (the coenosarc) that contains a shared gastrovascular cavity. Colony types Social colonies Unicellular and multicellular unitary organisms may aggregate to form colonies. For example, Protists such as slime molds are many unicellular organisms that aggregate to form colonies when food resources are hard to come by, as together they are more reactive to chemical cues released by preferred prey. Eusocial insects like ants and honey bees are multicellular animals that live in colonies with a highly organized social structure. Colonies of some social insects may be deemed superorganisms. Animals, such as humans and rodents, form breeding or nesting colonies, potentially for more successful mating and to better protect offspring. The Bracken Cave is the summer home to a colony of around 20 million Mexican free-tailed bats, making it the largest known concentration of mammals. Modular organisms Modular organisms are those in which a genet (or genetic individual formed from a sexually-produced zygote) asexually reproduces to form genetically identical clones called ramets. A clonal colony is when the ramets of a genet live in close proximity or are physically connected. Ramets may have all of the functions needed to survive on their own or be interdependent on other ramets. For example, some sea anemones go through the process of pedal laceration in which a genetically identical individual is asexually produced from tissue broken off from the anemone's pedal disc. In plants, clonal colonies are created through the propagation of genetically identical individuals by stolons or rhizomes. Colonial organisms are clonal colonies composed of many physically connected, interdependent individuals. 
The subunits of colonial organisms can be unicellular, as in the alga Volvox (a coenobium), or multicellular, as in the phylum Bryozoa. Colonial organisms may have been the first step toward multicellular organisms. Individuals within a multicellular colonial organism may be called ramets, modules, or zooids. Structural and functional variation (polymorphism), when present, designates ramet responsibilities such as feeding, reproduction, and defense. To that end, being physically connected allows the colonial organism to distribute nutrients and energy obtained by feeding zooids throughout the colony. The hydrozoan Portuguese man o' war is a classic example of a colonial organism, one of many in the taxonomic class. Microbial colonies A microbial colony is defined as a visible cluster of microorganisms growing on the surface of or within a solid medium, presumably cultured from a single cell. Because the colony is clonal, with all organisms in it descending from a single ancestor (assuming no contamination), they are genetically identical, except for any mutations (which occur at low frequencies). Obtaining such genetically identical organisms (or pure strains) can be useful; this is done by spreading organisms on a culture plate and starting a new stock from a single resulting colony. A biofilm is a colony of microorganisms often comprising several species, with properties and capabilities greater than the aggregate of capabilities of the individual organisms. Colony ontogeny for eusocial insects Colony ontogeny refers to the developmental process and progression of a colony. It describes the various stages and changes that occur within a colony from its initial formation to its mature state. The exact duration and dynamics of colony ontogeny can vary greatly depending on the species and environmental conditions. Factors such as resource availability, competition, and environmental cues can influence the progression and outcome of colony development. During colony ontogeny for eusocial insects such as ants and bees, a colony goes through several distinct phases, each characterised by specific behavioural patterns, division of labor, and structural modifications. While the exact details can vary depending on the species, the general progression typically involves a number of well-defined stages, detailed below. Founding stage In this initial stage, a single female individual or small group of female individuals, often called the foundress(es), queen(s) (and kings for termites) or primary reproductive(s), establish a new colony. The foundresses build a basic nest structure and begin to lay eggs. The foundresses can also perform non-reproductive tasks at this early stage, such as nursing these first eggs and leaving the nest to gather resources. Worker emergence This is also known as the ergonomic stage. As the eggs laid by the foundresses develop, they give rise to the first generation of workers. These workers can assume various tasks, such as foraging, brood care, and nest maintenance. Initially, the worker population is relatively small, and their tasks are not as specialised. As the colony grows, more workers emerge, and the division of labor becomes more pronounced. Some individuals may specialise in tasks like foraging, defense, or tending to the brood, while others may take on general tasks within the nest. These specialised tasks can change throughout the life of a worker. 
Reproductive phase At a certain point in the colony ontogeny, usually after a period of growth and maturation, the colony produces reproductives, including new virgin queens (princesses) and males. These individuals have the potential to leave the nest and start new colonies, ensuring the transmission of the gene pool of its natal colony. Colony death Over time, colonies may go through a senescence phase where the reproductive output declines, and the colony's overall vitality diminishes. Eventually, the colony may die off or be replaced by a new generation of reproductives. After the death of the queen in a monogyne colony, possible fates other than colony death include serial polygyny (when a virgin queen of the colony replaces the dead queen as the primary reproductive) or colony inheritance (when a worker takes over as primary reproductive). Life history Individuals in social colonies and modular organisms receive benefit to such a lifestyle. For example, it may be easier to seek out food, defend a nesting site, or increase competitive ability against other species. Modular organisms' ability to reproduce asexually in addition to sexually allows them unique benefits that social colonies do not have. The energy required for sexual reproduction varies based on the frequency and length of reproductive activity, number and size of offspring, and parental care. While solitary individuals bear all of those energy costs, individuals in some social colonies share a portion of those costs. Modular organisms save energy by using asexual reproduction during their life. Energy reserved in this way allows them to put more energy towards colony growth, regenerating lost modules (due to predation or other cause of death), or response to environmental conditions. See also Ant colony Beehive (beekeeping) Bird colony Clonal colony Coenocyte Colonisation (biology) Coral reef Eusociality Superorganism Swarm Birth colony Austroplatypus incompertus References Community ecology Microbiology terms Habitat Environmental terminology
Colony (biology)
[ "Biology" ]
1,730
[ "Microbiology terms" ]
161,306
https://en.wikipedia.org/wiki/Sellmeier%20equation
The Sellmeier equation is an empirical relationship between refractive index and wavelength for a particular transparent medium. The equation is used to determine the dispersion of light in the medium. It was first proposed in 1872 by Wolfgang Sellmeier and was a development of the work of Augustin Cauchy on Cauchy's equation for modelling dispersion. The equation In its original and the most general form, the Sellmeier equation is given as n^2(λ) = 1 + Σi [Bi λ^2 / (λ^2 − Ci)], where n is the refractive index, λ is the wavelength, and Bi and Ci are experimentally determined Sellmeier coefficients. These coefficients are usually quoted for λ in micrometres. Note that this λ is the vacuum wavelength, not that in the material itself, which is λ/n. A different form of the equation is sometimes used for certain types of materials, e.g. crystals. Each term of the sum represents an absorption resonance of strength Bi at a wavelength √Ci. For example, the coefficients for BK7 below correspond to two absorption resonances in the ultraviolet, and one in the mid-infrared region. Analytically, this process is based on approximating the underlying optical resonances as Dirac delta functions, followed by the application of the Kramers–Kronig relations. This results in real and imaginary parts of the refractive index which are physically sensible. However, close to each absorption peak, the equation gives non-physical values of n^2 = ±∞, and in these wavelength regions a more precise model of dispersion such as Helmholtz's must be used. If all terms are specified for a material, at long wavelengths far from the absorption peaks the value of n tends to √εr, where εr is the relative permittivity of the medium. For characterization of glasses the equation consisting of three terms is commonly used: n^2(λ) = 1 + B1 λ^2/(λ^2 − C1) + B2 λ^2/(λ^2 − C2) + B3 λ^2/(λ^2 − C3) As an example, the coefficients for a common borosilicate crown glass known as BK7 are shown below: For common optical glasses, the refractive index calculated with the three-term Sellmeier equation deviates from the actual refractive index by less than 5×10−6 over the wavelengths' range of 365 nm to 2.3 μm, which is of the order of the homogeneity of a glass sample. Additional terms are sometimes added to make the calculation even more precise. Sometimes the Sellmeier equation is used in two-term form: n^2(λ) = A + B1 λ^2/(λ^2 − C1) + B2 λ^2/(λ^2 − C2) Here the coefficient A is an approximation of the short-wavelength (e.g., ultraviolet) absorption contributions to the refractive index at longer wavelengths. Other variants of the Sellmeier equation exist that can account for a material's refractive index change due to temperature, pressure, and other parameters. Derivation Analytically, the Sellmeier equation models the refractive index as due to a series of optical resonances within the bulk material. Its derivation from the Kramers–Kronig relations requires a few assumptions about the material, from which any deviations will affect the model's accuracy: There exists a number of resonances, and the final refractive index can be calculated from the sum over the contributions from all resonances. All optical resonances are at wavelengths far away from the wavelengths of interest, where the model is applied. At these resonant frequencies, the imaginary component of the susceptibility (χ) can be modeled as a delta function.
From the last point, the complex refractive index (and the electric susceptibility) becomes: The real part of the refractive index comes from applying the Kramers–Kronig relations to the imaginary part: Plugging in the first equation above for the imaginary component: The order of summation and integration can be swapped. When evaluated, this gives the following, where θ is the Heaviside function: Since the domain is assumed to be far from any resonances (assumption 2 above), θ evaluates to 1 and a familiar form of the Sellmeier equation is obtained: By rearranging terms, the constants Bi and Ci can be substituted into the equation above to give the Sellmeier equation. Coefficients See also Cauchy's equation References External links RefractiveIndex.INFO Refractive index database featuring Sellmeier coefficients for many hundreds of materials. A browser-based calculator giving refractive index from Sellmeier coefficients. Annalen der Physik - free Access, digitized by the French national library Sellmeier coefficients for 356 glasses from Ohara, Hoya, and Schott Eponymous equations of physics Optics
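To make the three-term form above concrete, here is a minimal Python sketch that evaluates the Sellmeier equation at a given vacuum wavelength. Since the article's coefficient table is not reproduced above, the BK7 values below are commonly quoted Schott N-BK7 coefficients (with the wavelength in micrometres), supplied as an assumption for illustration; check them against a manufacturer datasheet before relying on them.

import math

# Minimal sketch of the three-term Sellmeier equation:
#   n^2(lambda) = 1 + sum_i B_i * lambda^2 / (lambda^2 - C_i)
# The coefficients below are commonly quoted values for Schott N-BK7
# (lambda in micrometres); treat them as illustrative assumptions.
BK7_B = (1.03961212, 0.231792344, 1.01046945)
BK7_C = (0.00600069867, 0.0200179144, 103.560653)  # micrometre^2

def sellmeier_index(wavelength_um, B=BK7_B, C=BK7_C):
    """Refractive index from the three-term Sellmeier equation."""
    lam2 = wavelength_um ** 2
    n_squared = 1.0 + sum(b * lam2 / (lam2 - c) for b, c in zip(B, C))
    return math.sqrt(n_squared)

# Example: near the sodium D line (about 0.5893 micrometres) this evaluates
# to roughly 1.517 for BK7, consistent with tabulated values.
print(round(sellmeier_index(0.5893), 4))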
Sellmeier equation
[ "Physics", "Chemistry" ]
916
[ "Applied and interdisciplinary physics", "Equations of physics", "Optics", "Eponymous equations of physics", " molecular", "Atomic", " and optical physics" ]
161,339
https://en.wikipedia.org/wiki/List%20of%20geometers
A geometer is a mathematician whose area of study is geometry. Some notable geometers and their main fields of work, chronologically listed, are: 1000 BCE to 1 BCE Baudhayana (fl. c. 800 BC) – Euclidean geometry Manava (c. 750 BC–690 BC) – Euclidean geometry Thales of Miletus (c. 624 BC – c. 546 BC) – Euclidean geometry Pythagoras (c. 570 BC – c. 495 BC) – Euclidean geometry, Pythagorean theorem Zeno of Elea (c. 490 BC – c. 430 BC) – Euclidean geometry Hippocrates of Chios (born c. 470 – 410 BC) – first systematically organized Stoicheia – Elements (geometry textbook) Mozi (c. 468 BC – c. 391 BC) Plato (427–347 BC) Theaetetus (c. 417 BC – 369 BC) Autolycus of Pitane (360–c. 290 BC) – astronomy, spherical geometry Euclid (fl. 300 BC) – Elements, Euclidean geometry (sometimes called the "father of geometry") Apollonius of Perga (c. 262 BC – c. 190 BC) – Euclidean geometry, conic sections Archimedes (c. 287 BC – c. 212 BC) – Euclidean geometry Eratosthenes (c. 276 BC – c. 195/194 BC) – Euclidean geometry Katyayana (c. 3rd century BC) – Euclidean geometry 1–1300 AD Hero of Alexandria (c. AD 10–70) – Euclidean geometry Pappus of Alexandria (c. AD 290–c. 350) – Euclidean geometry, projective geometry Hypatia of Alexandria (c. AD 370–c. 415) – Euclidean geometry Brahmagupta (597–668) – Euclidean geometry, cyclic quadrilaterals Vergilius of Salzburg (c.700–784) – Irish bishop of Aghaboe, Ossory and later Salzburg, Austria; antipodes, and astronomy Al-Abbās ibn Said al-Jawharī (c. 800–c. 860) Thabit ibn Qurra (826–901) – analytic geometry, non-Euclidean geometry, conic sections Abu'l-Wáfa (940–998) – spherical geometry, spherical triangles Ibn al-Haytham (965–c. 1040) Omar Khayyam (1048–1131) – algebraic geometry, conic sections Ibn Maḍāʾ (1116–1196) 1301–1800 AD Piero della Francesca (1415–1492) Leonardo da Vinci (1452–1519) – Euclidean geometry Jyesthadeva (c. 1500 – c.
1610) – Euclidean geometry, cyclic quadrilaterals Marin Getaldić (1568–1626) Jacques-François Le Poivre (1652–1710) – projective geometry Johannes Kepler (1571–1630) – (used geometric ideas in astronomical work) Edmund Gunter (1581–1626) Girard Desargues (1591–1661) – projective geometry; Desargues' theorem René Descartes (1596–1650) – invented the methodology of analytic geometry, also called Cartesian geometry after him Pierre de Fermat (1607–1665) – analytic geometry Blaise Pascal (1623–1662) – projective geometry Christiaan Huygens (1629–1695) – evolute Giordano Vitale (1633–1711) Philippe de La Hire (1640–1718) – projective geometry Isaac Newton (1642–1727) – 3rd-degree algebraic curve Giovanni Ceva (1647–1734) – Euclidean geometry Johann Jacob Heber (1666–1727) – surveyor and geometer Giovanni Gerolamo Saccheri (1667–1733) – non-Euclidean geometry Leonhard Euler (1707–1783) Tobias Mayer (1723–1762) Johann Heinrich Lambert (1728–1777) – non-Euclidean geometry Gaspard Monge (1746–1818) – descriptive geometry John Playfair (1748–1819) – Euclidean geometry Lazare Nicolas Marguerite Carnot (1753–1823) – projective geometry Joseph Diaz Gergonne (1771–1859) – projective geometry; Gergonne point Carl Friedrich Gauss (1777–1855) – Theorema Egregium Louis Poinsot (1777–1859) Siméon Denis Poisson (1781–1840) Jean-Victor Poncelet (1788–1867) – projective geometry Augustin-Louis Cauchy (1789–1857) August Ferdinand Möbius (1790–1868) – Euclidean geometry Nikolai Ivanovich Lobachevsky (1792–1856) – hyperbolic geometry, a non-Euclidean geometry Michel Chasles (1793–1880) – projective geometry Germinal Dandelin (1794–1847) – Dandelin spheres in conic sections Jakob Steiner (1796–1863) – champion of synthetic geometry methodology, projective geometry, Euclidean geometry 1801–1900 AD Karl Wilhelm Feuerbach (1800–1834) – Euclidean geometry Julius Plücker (1801–1868) János Bolyai (1802–1860) – hyperbolic geometry, a non-Euclidean geometry Christian Heinrich von Nagel (1803–1882) – Euclidean geometry Johann Benedict Listing (1808–1882) – topology Hermann Günther Grassmann (1809–1877) – exterior algebra Ludwig Otto Hesse (1811–1874) – algebraic invariants and geometry Ludwig Schläfli (1814–1895) – Regular 4-polytope Pierre Ossian Bonnet (1819–1892) – differential geometry Arthur Cayley (1821–1895) Joseph Bertrand (1822–1900) Delfino Codazzi (1824–1873) – differential geometry Bernhard Riemann (1826–1866) – elliptic geometry (a non-Euclidean geometry) and Riemannian geometry Julius Wilhelm Richard Dedekind (1831–1916) Ludwig Burmester (1840–1927) – theory of linkages Edmund Hess (1843–1903) Albert Victor Bäcklund (1845–1922) Max Noether (1844–1921) – algebraic geometry Henri Brocard (1845–1922) – Brocard points William Kingdon Clifford (1845–1879) – geometric algebra Pieter Hendrik Schoute (1846–1923) Felix Klein (1849–1925) Sofia Vasilyevna Kovalevskaya (1850–1891) Evgraf Fedorov (1853–1919) Henri Poincaré (1854–1912) Luigi Bianchi (1856–1928) – differential geometry Alicia Boole Stott (1860–1940) Hermann Minkowski (1864–1909) – non-Euclidean geometry Henry Frederick Baker (1866–1956) – algebraic geometry Élie Cartan (1869–1951) Dmitri Egorov (1869–1931) – differential geometry Veniamin Kagan (1869–1953) Raoul Bricard (1870–1944) – descriptive geometry Ernst Steinitz (1871–1928) – Steinitz's theorem Marcel Grossmann (1878–1936) Oswald Veblen (1880–1960) – projective geometry, differential geometry Nathan Altshiller Court (1881–1968) – author of College Geometry Emmy Noether (1882–1935) –
algebraic topology Harry Clinton Gossard (1884–1954) Arthur Rosenthal (1887–1959) Helmut Hasse (1898–1979) – algebraic geometry 1901–present William Vallance Douglas Hodge (1903–1975) Patrick du Val (1903–1987) Beniamino Segre (1903–1977) – combinatorial geometry J. C. P. Miller (1906–1981) André Weil (1906–1998) – Algebraic geometry H. S. M. Coxeter (1907–2003) – theory of polytopes, non-Euclidean geometry, projective geometry J. A. Todd (1908–1994) Daniel Pedoe (1910–1998) Shiing-Shen Chern (1911–2004) – differential geometry Ernst Witt (1911–1991) Rafael Artzy (1912–2006) Aleksandr Danilovich Aleksandrov (1912–1999) László Fejes Tóth (1915–2005) Edwin Evariste Moise (1918–1998) Aleksei Pogorelov (1919–2002) – differential geometry Magnus Wenninger (1919–2017) – polyhedron models Jean-Louis Koszul (1921–2018) Isaak Yaglom (1921–1988) Eugenio Calabi (1923–2023) Benoit Mandelbrot (1924–2010) – fractal geometry Katsumi Nomizu (1924–2008) – affine differential geometry Michael S. Longuet-Higgins (1925–2016) John Leech (1926–1992) Alexander Grothendieck (1928–2014) – algebraic geometry Branko Grünbaum (1929–2018) – discrete geometry Michael Atiyah (1929–2019) Lev Semenovich Pontryagin (1908–1988) Geoffrey Colin Shephard (1927–2016) Norman W. Johnson (1930–2017) John Milnor (1931–) Roger Penrose (1931–) Yuri Manin (1937–2023) – algebraic geometry and diophantine geometry Vladimir Arnold (1937–2010) – algebraic geometry Ernest Vinberg (1937–2020) J. H. Conway (1937–2020) – sphere packing, recreational geometry Robin Hartshorne (1938–) – geometry, algebraic geometry Phillip Griffiths (1938–) – algebraic geometry, differential geometry Enrico Bombieri (1940–) – algebraic geometry Robert Williams (1942–) Peter McMullen (1942–) Richard S. Hamilton (1943–2024) – differential geometry, Ricci flow, Poincaré conjecture Mikhail Gromov (1943–) Rudy Rucker (1946–) William Thurston (1946–2012) Shing-Tung Yau (1949–) Michael Freedman (1951–) Egon Schulte (1955–) – polytopes George W. Hart (1955–) – sculptor Károly Bezdek (1955–) – discrete geometry, sphere packing, Euclidean geometry, non-Euclidean geometry Simon Donaldson (1957–) Kenji Fukaya (1959–) – symplectic geometry Yong-Geun Oh (1961–) Toshiyuki Kobayashi (1962–) Hiraku Nakajima (1962–) – representation theory and geometry Hwang Jun-Muk (1963–) – algebraic geometry, differential geometry Grigori Perelman (1966–) – Poincaré conjecture Maryam Mirzakhani (1977–2017) Denis Auroux (1977–) Geometers in art See also Mathematics and architecture References Geometers
List of geometers
[ "Mathematics" ]
2,218
[ "Geometers", "Geometry" ]
161,344
https://en.wikipedia.org/wiki/ASC%20X12
The Accredited Standards Committee X12 (also known as ASC X12) is a standards organization. Chartered by the American National Standards Institute (ANSI) in 1979, it develops and maintains the X12 Electronic data interchange (EDI) and Context Inspired Component Architecture (CICA) standards along with XML schemas which drive business processes globally. The membership of ASC X12 includes technologists and business process experts, encompassing health care, insurance, transportation, finance, government, supply chain and other industries. ASC X12 has sponsored more than 300 X12 EDI transaction sets and a growing collection of X12 XML schemas for health care, insurance, government, transportation, finance, and many other industries. ASC X12's membership includes 3,000 standards experts representing over 600 companies from multiple business domains. Organization ASC X12 is led by two groups. The ASC X12 Board of Directors (Board) and the ASC X12 Steering Committee (Steering) collaborate to ensure the best interests of ASC X12 are served. Each group has specific responsibilities and the groups cooperatively handle items or issues that span the responsibilities of both groups. Subcommittees ASC X12 is organized into subcommittees that develop and maintain standards for a particular set of business functions. X12C Communications & Controls Responsible for EDI control structures, security, and architecture as well as the X12 XML Reference Model. X12F Finance X12F is responsible for the development and maintenance of components of the ASC X12 Standards related to the financial services industry's business activities. ASC X12F also develops and maintains interpretations, technical reports and guidelines related to its areas of responsibility. X12I Transportation X12I is responsible for the development and maintenance of components of the ASC X12 Standards related to the transportation industry's business activities, including air, marine, rail, and motor freight transportation and Customs, logistics and multi-modal activities. ASC X12I also develops and maintains interpretations, technical reports and guidelines related to its areas of responsibility. X12J Technical Assessment Maintains the directory, dictionary and design rules for all the X12 standards. Also manages the process for requests for standards changes. X12M Supply Chain X12M is responsible for the development and maintenance of components of the ASC X12 Standards related to the supply chain industry's business activities, excluding transportation and finance, from sourcing to delivery. Supply Chain activities include Distribution Management, Inventory Management, Marketing Data Management, Materials Management, Procurement Management, Product Management, Production Planning Management, Sales, X12N Insurance X12N is responsible for the development and maintenance of components of the ASC X12 Standards related to the insurance industry's business activities, including those related to property insurance, casualty insurance, health care insurance, life insurance, annuity insurance, reinsurance, and pensions. Health insurance activities include those undertaken by commercial and government health care organizations Caucuses There are informal industry groups created to identify issues and activities in specific areas. In 2014, there were four caucuses. Clearinghouse Created in October 2012 to support and improve EDI peer-to-peer connectivity, with a focus on health information exchanges. 
Connectivity Created in January 2010 to support and improve EDI peer-to-peer connectivity, with a focus on value-added networks and clearinghouses. Dental Focuses on X12 healthcare standards in relation to dentistry and the dental industry. Provider Focuses on interests and concerns of healthcare providers regarding X12 healthcare standards. See also Electronic Data Interchange EDIFACT X12 EDIFACT Mapping List of X12 EDI transaction sets References External links Accredited Standards Committee X12 Data interchange standards Standards for electronic health records American National Standards Institute Electronic data interchange
ASC X12
[ "Technology" ]
760
[ "Computer standards", "Data interchange standards" ]
161,388
https://en.wikipedia.org/wiki/Indirect%20self-reference
Indirect self-reference describes an object referring to itself indirectly. For example, define the function f such that f(x) = x(x). Any function passed as an argument to f is invoked with itself as an argument, and thus, in any use of that argument, it is indirectly referring to itself. This example is similar to the Scheme expression "((lambda(x)(x x)) (lambda(x)(x x)))", which is expanded to itself by beta reduction, and so its evaluation loops indefinitely despite the lack of explicit looping constructs. An equivalent example can be formulated in lambda calculus. Indirect self-reference is special in that its self-referential quality is not explicit, as it is in the sentence "this sentence is false." The phrase "this sentence" refers directly to the sentence as a whole. An indirectly self-referential sentence would replace the phrase "this sentence" with an expression that effectively still referred to the sentence, but did not use the pronoun "this." An example will help to explain this. Suppose we define the quine of a phrase to be the quotation of the phrase followed by the phrase itself. So, the quine of: is a sentence fragment would be: "is a sentence fragment" is a sentence fragment which, incidentally, is a true statement. Now consider the sentence: "when quined, makes quite a statement" when quined, makes quite a statement The quotation here, plus the phrase "when quined," indirectly refers to the entire sentence. The importance of this fact is that the remainder of the sentence, the phrase "makes quite a statement," can now make a statement about the sentence as a whole. If we had used a pronoun for this, we could have written something like "this sentence makes quite a statement." It seems silly to go through this trouble when pronouns will suffice (and when they make more sense to the casual reader), but in systems of mathematical logic, there is generally no analog of the pronoun. It is somewhat surprising, in fact, that self-reference can be achieved at all in these systems. Upon closer inspection, it can be seen that in fact, the Scheme example above uses a quine, and f is actually the quine function itself. Indirect self-reference was studied in great depth by W. V. Quine (after whom the operation above is named), and occupies a central place in the proof of Gödel's incompleteness theorem. Among the paradoxical statements developed by Quine is the following: "yields a false statement when preceded by its quotation" yields a false statement when preceded by its quotation See also Diagonal lemma Fixed point (mathematics) Fixed-point combinator Gödel, Escher, Bach Indirection Quine's paradox Self-hosting (compilers) Self-interpreter References Self-reference Theoretical computer science
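The "quine of a phrase" operation described above is easy to mimic in code. The following Python sketch is illustrative only: the function name is invented for this example (the article's own code example is in Scheme), and the point is simply that the output sentence ends up referring to itself without using a pronoun.

# Illustrative sketch of the "quine of a phrase" operation described above:
# the quotation of a phrase, followed by the phrase itself.
# The function name is invented for this example.
def quine_of(phrase):
    """Return the quotation of the phrase followed by the phrase itself."""
    return f'"{phrase}" {phrase}'

print(quine_of("is a sentence fragment"))
# -> "is a sentence fragment" is a sentence fragment

print(quine_of("when quined, makes quite a statement"))
# -> "when quined, makes quite a statement" when quined, makes quite a statement
# The second output indirectly refers to itself as a whole, with no pronoun.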
Indirect self-reference
[ "Mathematics" ]
605
[ "Theoretical computer science", "Applied mathematics" ]
15,988,191
https://en.wikipedia.org/wiki/Phototrophic%20biofilm
Phototrophic biofilms are microbial communities generally comprising both phototrophic microorganisms, which use light as their energy source, and chemoheterotrophs. Thick laminated multilayered phototrophic biofilms are usually referred to as microbial mats or phototrophic mats (see also biofilm). These organisms, which can be prokaryotic or eukaryotic organisms like bacteria, cyanobacteria, fungi, and microalgae, make up diverse microbial communities that are affixed in a mucous matrix, or film. These biofilms occur on contact surfaces in a range of terrestrial and aquatic environments. The formation of biofilms is a complex process and is dependent upon the availability of light as well as the relationships between the microorganisms. Biofilms serve a variety of roles in aquatic, terrestrial, and extreme environments; these roles include functions which are both beneficial and detrimental to the environment. In addition to these natural roles, phototrophic biofilms have also been adapted for applications such as crop production and protection, bioremediation, and wastewater treatment. Biofilm formation Biofilm formation is a complicated process which occurs in four general steps: attachment of cells, formation of the colony, maturation, and cell dispersal. These films can grow in sizes ranging from microns to centimeters in thickness. Most are green and/or brown, but can be more colorful. Biofilm development is dependent on the generation of extracellular polymeric substances (EPS) by microorganisms. The EPS, which is akin to a gel, is a matrix which provides structure for the biofilm and is essential for growth and functionality. It consists of organic compounds such as polysaccharides, proteins, and glycolipids and may also include inorganic substances like silt and silica. EPS join cells together in the biofilm and transmits light to organisms in the lower zone. Additionally, EPS serves as an adhesive for surface attachment and facilitates digestion of nutrients by extracellular enzymes. Microbial functions and interactions are also important for maintaining the well-being of the community. In general, phototrophic organisms in the biofilm provide a foundation for the growth of the community as a whole by mediating biofilm processes and conversions. The chemoheterotrophs use the photosynthetic waste products from the phototrophs as their carbon and nitrogen sources, and in turn perform nutrient regeneration for the community. Various groups of organisms are located in distinct layers based on availability of light, the presence of oxygen, and redox gradients produced by the species. Light exposure early in biofilm development has an immense impact on growth and microbial diversity; greater light availability promotes more growth. Phototrophs such as cyanobacteria and green algae occupy the exposed layer of the biofilm while lower layers consist of anaerobic phototrophs and heterotrophs like bacteria, protozoa, and fungi. Eukaryotic algae and cyanobacteria in the outer portion use light energy to reduce carbon dioxide, providing organic substrates and oxygen. This photosynthetic activity fuels processes and conversions in the total biofilm community, including the heterotrophic fraction. It also produces an oxygen gradient in the mat which inhibits most anaerobic phototrophs and chemotrophs from growing in the upper regions. 
Communication between the microorganisms is facilitated by quorum sensing or signal transduction pathways, which are accomplished through the secretion of molecules which diffuse through the biofilm. The identity of these substances varies depending on the type of microorganism from which it was secreted. While some of the organisms contributing to the formation of the biofilms can be identified, exact composition of the biofilms is difficult to determine because many of the organisms cannot be grown using pure culture methods. Though pure culture methods cannot be used to identify unculturable microorganisms and do not support the study of the complex interactions between photoautotrophs and heterotrophs, the use of metagenomics, proteomics, and transcriptomics has helped characterize these unculturable organisms and has provided some insight into molecular mechanisms, microbial organization, and interactions in biofilms. Ecology Phototrophic biofilms can be found on terrestrial and aquatic surfaces and can withstand environmental fluctuations and extreme environments. In aquatic systems, biofilms are prevalent on surfaces of rocks and plants, and in terrestrial environments they can be located in the soil, on rocks, and on buildings. Phototrophic biofilms and microbial mats have been described in extreme environments like thermal springs, hyper saline ponds, desert soil crusts, and in lake ice covers in Antarctica. The 3.4-billion-year fossil record of benthic phototrophic communities, such as microbial mats and stromatolites, indicates that these associations represent the Earth's oldest known ecosystems. It is thought that these early ecosystems played a key role in the build-up of oxygen in the Earth's atmosphere. A diverse array of roles is played by these microorganisms across the range of environments in which they can be found. In aquatic environments, these microbes are primary producers, a critical part of the food chain. They perform a key function in exchanging a substantial amount of nutrients and gases between the atmospheric and oceanic reservoirs. Biofilms in terrestrial systems can contribute to improving soil, reducing erosion, promoting growth of vegetation, and revitalizing desert-like land, but they can also accelerate the degradation of solid structures like buildings and monuments. Applications There is a growing interest in the application of phototrophic biofilms, for instance in wastewater treatment in constructed wetlands, bioremediation, agriculture, and biohydrogen production. A few are outlined below. Agriculture Agrochemicals such as pesticides, fertilizers, and food hormones are widely used to produce greater quality and quantity of food as well as provide crop protection. However, biofertilizers have been developed as a more environmentally cognizant method of assisting in plant development and protection by promoting the growth of microorganisms such as cyanobacteria. Cyanobacteria can augment plant growth by colonizing on plant roots to supply carbon and nitrogen, which they can provide to plants through the natural metabolic processes of carbon dioxide and nitrogen fixation. They can also produce substances which induce plant defense against harmful fungi, bacteria, and viruses. Other organisms can also produce secondary metabolites such as phytohormones which increase plants' resistance to pests and disease. 
Promoting growth of phototrophic biofilms in agricultural settings improves the quality of the soil and water retention, reduces salinity, and protects against erosion. Bioremediation Organisms in mats such as cyanobacteria, sulfate reducers, and aerobic heterotrophs can aid in bioremediation of water systems through biodegradation of oils. This is achieved by freeing oxygen, organic compounds, and nitrogen from hydrocarbon pollutants. Biofilm growth can also degrade other pollutants by oxidizing oils, pesticides, and herbicides and reducing heavy metals like copper, lead, and zinc. Aerobic processes to degrade pollutants can be achieved during the day and anaerobic processes are performed at night by biofilms. Additionally, because biofilm response to pollutants during initial exposure suggested acute toxicity, biofilms can be used as sensors for pollution. Wastewater treatment Biofilms are used in wastewater treatment facilities and constructed wetlands for processes such as cleaning pesticide and fertilizer-laden water because it is simple to form flocs, or aggregates, using biofilms as compared to other floc materials. There are also many other benefits to using phototrophic biofilms in treating wastewater, particularly in nutrient removal. The organisms can sequester nutrients from the wastewater and use these along with carbon dioxide to build biomass. The biomass can capture nitrogen, which can be extracted and used in fertilizer production. Due to their quick growth, phototrophic biofilms have greater nutrient uptake than other methods of nutrient removal utilizing algal biomass, and they are easier to harvest because they naturally grow on wastewater pond surfaces. Phototrophic activity of these films can precipitate dissolved phosphates due to an increase in pH; these phosphates are then removed by assimilation. Increase in pH of the wastewater also minimizes the presence of coliform bacteria. Heavy metal detoxification in wastewater treatment can also be achieved with these microbes primarily through passive mechanisms such as ion exchange, chelation, adsorption, and diffusion, which constitute biosorption. The active mode is known as bioaccumulation. Biosorption-mediated metal detoxification is influenced by factors including light intensity, pH, density of the biofilm, and organism tolerance of heavy metals. Though biosorption is an efficient process and inexpensive, methods to retrieve heavy metals from the biomass after biosorption still need further development. Using phototrophic biofilms for wastewater treatment is more energy efficient and economical and has the capability of producing byproducts which can be further processed into biofuels. Specifically cyanobacteria are capable of producing biohydrogen, which is an alternative to fossil fuels and may become a viable source of renewable energy. References Bacteria Cyanobacteria Environmental microbiology
Phototrophic biofilm
[ "Biology", "Environmental_science" ]
1,948
[ "Algae", "Prokaryotes", "Bacteria", "Environmental microbiology", "Cyanobacteria", "Microorganisms" ]
15,988,726
https://en.wikipedia.org/wiki/Washington%20Summit%20Publishers
Washington Summit Publishers (WSP) is a white nationalist publisher based in Augusta, Georgia, which produces and sells books on race and intelligence and related topics. The company is run by white supremacist Richard B. Spencer, who also ran the defunct white supremacist National Policy Institute. History Before Spencer, the company was run by Louis Andrews. He was also director of the National Policy Institute and managing editor of The Occidental Quarterly, both heavily funded by William Regnery II. In 2013, the company was listed as being headquartered in Whitefish, Montana. As of 2019, the company had moved to Augusta, Georgia. Authors Authors published by WSP include J. Philippe Rushton, Kevin B. MacDonald, Richard Lynn, Tatu Vanhanen, and Michael H. Hart. Journal WSP published Radix Journal through its imprint Radix. Contributors have included Kerry Bolton, Peter Brimelow, Samuel T. Francis, Kevin B. MacDonald, William Regnery II, Alex Kurtagić, and Jared Taylor. The last article on RadixJournal.com was published in April 2021 and its last podcast episode was released in September of the same year; the website was taken offline in June 2023. Spencer started publishing a Substack under the name Radix Journal in April 2022, it later was rebranded as ALEXANDRIA. Subjects This company has published content supportive of white nationalism and white supremacy. "Human biodiversity" (HBD), an alt-right euphemism for scientific racism, was one of the main publishing subjects of Washington Summit Publishers. The Southern Poverty Law Center (SPLC) said in 2006 that the company had reprinted racist tracts along with books promoting antisemitism and eugenics. In 2015, the SPLC listed Washington Summit Publishers as a white nationalist hate group. References Alt-right organizations American companies established in 2006 Antisemitism in Georgia (U.S. state) History of racism in Georgia (U.S. state) Book publishing companies of the United States Companies based in Augusta, Georgia Pseudoscience literature Publishing companies established in 2006 Race and intelligence controversy Scientific racism White nationalism in the United States Neo-fascist organizations in the United States
Washington Summit Publishers
[ "Biology" ]
456
[ "Biology theories", "Obsolete biology theories", "Scientific racism" ]