Dataset schema:
id: int64 (39 to 79M)
url: string (length 31 to 227)
text: string (length 6 to 334k)
source: string (length 1 to 150)
categories: list (length 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (length 0 to 30)
7,102,642
https://en.wikipedia.org/wiki/Leak-down%20tester
A leak-down tester is a measuring instrument used to determine the condition of internal combustion engines by introducing compressed air into the cylinder and measuring the rate at which it leaks out. Compression testing is a crude form of leak-down testing which also includes effects due to compression ratio, valve timing, cranking speed, and other factors. Compression tests should normally be done with all spark plugs removed to maximize cranking speed. Cranking compression is a dynamic test of the actual low-speed pumping action, where peak cylinder pressure is measured and stored. Leak-down testing is a static test. Leak-down tests cylinder leakage paths. Leak-down primarily tests pistons and rings, seated valve sealing, and the head gasket. Leak-down will not show valve timing and movement problems, or piston movement related sealing problems. Any test should include both compression and leak-down. Testing is done on an engine which is not running, and normally with the tested cylinder at top dead center on compression, although testing can be done at other points in the compression and power stroke. Pressure is fed into a cylinder via the spark plug hole and the flow, which represents any leakage from the cylinder, is measured. Leak-down tests tend to rotate the engine, and often require some method of holding the crankshaft in the proper position for each tested cylinder. This can be as simple as a breaker bar on a crankshaft bolt in an automatic transmission vehicle, or leaving a manual transmission vehicle in a high gear with the parking brake locked. Leakage is given in wholly arbitrary percentages, but these “percentages” do not relate to any actual quantity or real dimension. The meaning of the readings is only relative to other tests done with the same tester design. Leak-down readings of up to 20% are usually acceptable. Leakages over 20% generally indicate internal repairs are required. Racing engines would be in the 1-10% range for top performance, although this number can vary. Ideally, a baseline number should be taken on a fresh engine and recorded. The same leakage tester, or the same leakage tester design, can then be used to determine wear. In the United States, FAA specifications define, for engines up to a given displacement, the required orifice diameter, orifice length, and 60-degree approach angle, as well as the standard input pressure and the minimum acceptable cylinder pressure. While the leak-down tester pressurizes the cylinder, the mechanic can listen to various parts to determine where any leak may originate. For example, a leaking exhaust valve will make a hissing noise in the exhaust pipe, while a leaking head gasket may cause bubbling in the cooling system.

How it works
A leak-down tester is essentially a miniature flow meter similar in concept to an air flow bench. The measuring element is the restriction orifice, and the leakage in the engine is compared to the flow of this orifice. There will be a pressure drop across the orifice and another across any points of leakage in the engine. Since the meter and engine are connected in series, the flow is the same across both. (For example: if the meter were unconnected so that all the air escapes, the reading would be 0, or 100% leakage. Conversely, if there is no leakage there will be no pressure drop across either the orifice or the leak, giving a reading of 100, or 0% leakage.) Gauge meter faces can be numbered 0-100 or 100-0, indicating either 0% at full pressure or 100% at full pressure. 
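The reading logic above reduces to simple arithmetic on the two gauge pressures. The following Python sketch is purely illustrative: the pressure values are made-up numbers, not measurements from any particular engine or tester, and real instruments also depend on orifice size and gauge calibration.

```python
# Illustrative only: indicated leakage from a dual-gauge leak-down tester,
# where the regulator (input) gauge is set to full scale and the second
# gauge shows the pressure the cylinder actually holds.
def leakage_percent(input_psi: float, cylinder_psi: float) -> float:
    """Indicated leakage as a percentage of the regulated input pressure."""
    return (input_psi - cylinder_psi) / input_psi * 100.0

# Hypothetical readings with a 100 psi input:
print(leakage_percent(100.0, 85.0))  # 15.0 -> within the "up to 20%" range usually considered acceptable
print(leakage_percent(100.0, 70.0))  # 30.0 -> over 20%, generally indicating internal repairs
```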
There is no standard regarding the size of the restriction orifice for non-aviation use, and that is what leads to differences in readings between leak-down testers generally available from different manufacturers. Most often quoted, though, is a restriction with a .040 in hole drilled in it. Some poorly designed units do not include a restriction orifice at all, relying on the internal restriction of the regulator, and give much less accurate results. In addition, large engines and small engines will be measured in the same way (compared to the same orifice), but a small leak in a large engine would be a large leak in a small engine. A locomotive engine which gives a leak-down of 10% on a leak-down tester is virtually perfectly sealed, while the same tester giving a 10% reading on a model airplane engine indicates a catastrophic leak. With a non-turbulent .040" orifice, and with a cylinder leakage effective orifice size of .040", leakage would be 50% at any pressure. At higher leakages the orifice can become turbulent, and this makes flow non-linear. Also, leakage paths in cylinders can be turbulent at fairly low flow rates. This makes leakage non-linear with test pressure. Further complicating things, nonstandard restriction orifice sizes will cause different indicated leakage percentages with the same cylinder leakage. Leak-down testers are most accurate at low leakage levels, and the exact leakage reading is just a relative indication that can vary significantly between instruments.

Some manufacturers use only a single gauge. In these instruments, the orifice inlet pressure is maintained automatically by the pressure regulator. A single gauge works well as long as leakage flow is much less than regulator flow. Any error in the input pressure will produce a corresponding error in the reading. As a single gauge instrument approaches 100% leakage, the leakage scale error reaches its maximum. This may or may not induce significant error, depending on regulator flow and orifice flow. At low and modest leakage percentages, there is little or no difference between single and dual gauges. In instruments with two gauges the operator manually resets the pressure to 100 after connection to the engine, guaranteeing consistent input pressure and greater accuracy. Most instruments use 100 psi as the input pressure simply because ordinary 100 psi gauges can be used, with full pressure corresponding to 100%, but there is no necessity for that particular pressure beyond convenience. Lower pressures will function just as well for measurement purposes, although the sound of leaks will not be quite as loud. Besides leakage noise, the indicated percentage of leakage will sometimes vary with regulator pressure and orifice size. With 100 psi and a .030" orifice, a given cylinder might show 20% leakage. At 50 psi, the same cylinder might show 30% leakage or 15% leakage with the same orifice. This happens because leakage flow is almost always very turbulent. Because of turbulence and other factors, such as seating pressures, test pressure changes almost always change the effective orifice formed by cylinder leakage paths. Metering orifice size has a direct effect on leakage percentage.

Generally, a typical automotive engine pressurized to more than 30-40 psi must be locked or it will rotate under test pressure. The exact test pressure tolerated before rotation is highly dependent on connecting rod angle, bore, compression of other cylinders, and friction. There is less tendency to rotate when the piston is at top dead center, especially with small bore engines. 
Maximum tendency to rotate occurs at about half stroke, when the rod is at right angles to the crankshaft's throw. Due to the simple construction, many mechanics build their own testers. Homemade instruments can function as well as commercial testers, providing they employ proper orifice sizes, good pressure gauges, and good regulators. References External links Vacuum Leak Tester Engine tuning instruments
Leak-down tester
[ "Technology", "Engineering" ]
1,513
[ "Engine tuning instruments", "Mechanical engineering", "Measuring instruments" ]
7,102,909
https://en.wikipedia.org/wiki/Moving-cluster%20method
In astrometry, the moving-cluster method and the closely related convergent point method are means, primarily of historical interest, for determining the distance to star clusters. They were used on several nearby clusters in the first half of the 1900s to determine distance. The moving-cluster method is now largely superseded by other, usually more accurate distance measures.

Introduction
The moving-cluster method relies on observing the proper motions and Doppler shift of each member of a group of stars known to form a cluster. The idea is that since all the stars share a common space velocity, they will appear to move towards a point of common convergence ("vanishing point") on the sky. This is essentially a perspective effect. Using the moving-cluster method, the distance d to a given star cluster (in parsecs) can be determined using the following equation:

d = v tan(θ) / μ

where "θ" is the angle between the star and the cluster's apparent convergence point, "μ" is the proper motion of the cluster (in arcsec/year), and "v" is the star's radial velocity (in AU/year).

Usage
The method has only ever been used for a small number of clusters. This is because for the method to work, the cluster must be quite close to Earth (within a few hundred parsecs), and also be fairly tightly bound so it can be made out on the sky. Also, the method is quite difficult to work with compared with more straightforward methods like trigonometric parallax. Finally, the uncertainties in the final distance values are in general fairly large compared with those obtained with precision measurements like those from Hipparcos. Of the clusters it has been used with, certainly the most famous are the Hyades and the Pleiades. The moving-cluster method was in fact the only way astronomers had to measure the distance to these clusters with any precision for some time in the early 20th century. Because of the problems outlined above, this method has not been used practically for stars for several decades in astronomical research. However, recently it has been used to estimate the distance between the brown dwarf 2M1207 and its observed exoplanet 2M1207b. In December 2005, American astronomer Eric Mamajek reported a distance (53 ± 6 parsecs) to 2M1207b using the moving-cluster method. See also Astrometry Parallax Parallax in astronomy Stellar parallax Cepheid Stars RR Lyrae Variable Stars References Astrometry Galactic astronomy
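Below is a minimal numerical sketch of the distance formula above in Python. The input values are purely illustrative and do not correspond to any real cluster or star.

```python
import math

def moving_cluster_distance(theta_deg: float, mu_arcsec_yr: float, v_r_au_yr: float) -> float:
    """Distance in parsecs from convergent-point geometry: d = v * tan(theta) / mu,
    with the radial velocity v in AU/year and the proper motion mu in arcsec/year."""
    return v_r_au_yr * math.tan(math.radians(theta_deg)) / mu_arcsec_yr

# Made-up example values: theta = 30 degrees, mu = 0.1 arcsec/yr, v = 8 AU/yr
print(moving_cluster_distance(30.0, 0.1, 8.0))  # ~46 parsecs for these illustrative inputs
```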
Moving-cluster method
[ "Astronomy" ]
515
[ "Galactic astronomy", "Astrometry", "Astronomical sub-disciplines" ]
7,103,365
https://en.wikipedia.org/wiki/Component%20television
Component television is a form factor in which a television set is sold as a system of separate components, similar to audio components. For example, a component television system is a monitor, tuner and speakers sold separately and which can be integrated into a single system. The component television form factor began in 1980 (but became notable in 1982) with Sony's ProFeel television line and became a design fad with many manufacturers until the late 1980s. References Video hardware
Component television
[ "Engineering" ]
95
[ "Electronic engineering", "Video hardware" ]
7,103,913
https://en.wikipedia.org/wiki/SN%202003fg
SN 2003fg, nicknamed the Champagne Supernova, was an unusual Type Ia supernova. It was discovered in 2003, with the Canada-France-Hawaii Telescope and the Keck Telescope, both on Mauna Kea in Hawaii, and announced by researchers at the University of Toronto. The supernova occurred in a galaxy some 4 billion light-years from Earth. It was nicknamed after the 1995 song "Champagne Supernova" by English rock band Oasis. It was unusual because of the mass of its progenitor. According to the current understanding, white dwarf stars explode as Type Ia supernovas when their mass approaches 1.4 solar masses, termed the Chandrasekhar limit. The mass added to the star is believed to be donated by a companion star, either from the companion's stellar wind or the overflow of its Roche lobe as it evolves. However, the progenitor of SN 2003fg reached two solar masses before exploding. The primary mechanism invoked to explain how a white dwarf can exceed the Chandrasekhar mass is unusually rapid rotation; the added support effectively increases the critical mass. An alternative explanation is that the explosion resulted from the merger of two white dwarfs. The evidence indicating a higher than normal mass comes from the light curve and spectra of the supernova—while it was particularly overluminous, the kinetic energies measured from the spectra appeared smaller than usual. One proposed explanation is that more of the total kinetic energy budget was expended climbing out of the deeper than usual potential well. This is important because the brightness of type Ia supernovae was thought to be essentially uniform, making them useful "standard candles" in measuring distances in the universe. Such an aberrant type Ia supernova could throw distances and other scientific work into doubt; however, the light curve characteristics of SN 2003fg were such that it would never have been mistaken for an ordinary high-redshift Type Ia supernova. References External links Light curves and spectrum on the Open Supernova Catalog 'Champagne supernova' challenges understanding of how supernovae work - University of Toronto Cosmos Magazine - "Rebellious supernova confronts dark energy" 'Champagne Supernova' breaks astronomical rules - CBC Astronomy: Champagne supernova - Nature (subscription site) Supernovae - NASA GSFC Supernovae Astronomical objects discovered in 2003 Boötes
SN 2003fg
[ "Chemistry", "Astronomy" ]
476
[ "Supernovae", "Astronomical events", "Boötes", "Constellations", "Explosions" ]
7,104,097
https://en.wikipedia.org/wiki/Extended%20Validation%20Certificate
An Extended Validation (EV) Certificate is a certificate conforming to X.509 that proves the legal entity of the owner and is signed by a certificate authority key that can issue EV certificates. EV certificates can be used in the same manner as any other X.509 certificates, including securing web communications with HTTPS and signing software and documents. Unlike domain-validated certificates and organization-validation certificates, EV certificates can be issued only by a subset of certificate authorities (CAs) and require verification of the requesting entity's legal identity before certificate issuance. As of February 2021, all major web browsers (Google Chrome, Mozilla Firefox, Microsoft Edge and Apple Safari) have menus which show the EV status of the certificate and the verified legal identity of EV certificates. Mobile browsers typically display EV certificates the same way they do Domain Validation (DV) and Organization Validation (OV) certificates. Of the ten most popular websites online, none use EV certificates, and the trend is away from their usage. For software, the verified legal identity is displayed to the user by the operating system (e.g., Microsoft Windows) before proceeding with the installation. Extended Validation certificates use the same X.509 file format and typically the same encryption as organization-validated certificates and domain-validated certificates, so they are compatible with most server and user agent software. The criteria for issuing EV certificates are defined by the Guidelines for Extended Validation established by the CA/Browser Forum. To issue an extended validation certificate, a CA requires verification of the requesting entity's identity and its operational status with its control over domain name and hosting server.

History

Introduction by CA/Browser Forum
In 2005 Melih Abdulhayoglu, CEO of the Comodo Group (currently known as Xcitium), convened the first meeting of the organization that became the CA/Browser Forum, hoping to improve standards for issuing SSL/TLS certificates. On June 12, 2007, the CA/Browser Forum officially ratified the first version of the Extended Validation (EV) SSL Guidelines, which took effect immediately. The formal approval successfully brought to a close more than two years of effort and provided the infrastructure for trusted website identity on the Internet. Then, in April 2008, the forum announced version 1.1 of the guidelines, building on the practical experience of its member CAs and relying-party application software suppliers gained in the months since the first version was approved for use.

Creation of special UI indicators in browsers
Most major browsers created special user interface indicators for pages loaded via HTTPS secured by an EV certificate soon after the creation of the standard. This includes Google Chrome 1.0, Internet Explorer 7.0, Firefox 3, Safari 3.2, and Opera 9.5. Furthermore, some mobile browsers, including Safari for iOS, Windows Phone, Firefox for Android, and Chrome for Android and iOS, added such UI indicators. Usually, browsers with EV support display the validated identity—usually a combination of organization name and jurisdiction—contained in the EV certificate's 'subject' field. In most implementations, the enhanced display includes: The name of the company or entity that owns the certificate; A lock symbol, also in the address bar, that varies in color depending on the security status of the website. 
By clicking on the lock symbol, the user can obtain more information about the certificate, including the name of the certificate authority that issued the EV certificate.

Removal of special UI indicators
In May 2018, Google announced plans to redesign the user interface of Google Chrome to remove emphasis for EV certificates. Chrome 77, released in 2019, removed the EV certificate indication from the omnibox, but EV certificate status can be viewed by clicking on the lock icon and then checking the legal entity name listed as "issued to" under "certificate". Firefox 70 removed the distinction in the omnibox or URL bar (EV and DV certificates are displayed similarly with just a lock icon), but the details about certificate EV status are accessible in the more detailed view that opens after clicking on the lock icon. Apple Safari on iOS 12 and macOS Mojave (released in September 2018) removed the visual distinction of EV status.

Issuing criteria
Only CAs who pass an independent qualified audit review may offer EV, and all CAs globally must follow the same detailed issuance requirements which aim to: Establish the legal identity as well as the operational and physical presence of the website owner; Establish that the applicant is the domain name owner or has exclusive control over the domain name; Confirm the identity and authority of the individuals acting for the website owner, and that documents pertaining to legal obligations are signed by an authorized officer; Limit the duration of certificate validity to ensure the certificate information is up to date. The CA/B Forum is also limiting the maximum re-use of domain validation data and organization data to a maximum of 397 days (must not exceed 398 days) from March 2020 onward. With the exception of Extended Validation Certificates for .onion domains, it is otherwise not possible to get a wildcard Extended Validation Certificate – instead, all fully qualified domain names must be included in the certificate and inspected by the certificate authority.

Extended Validation certificate identification
EV certificates are standard X.509 digital certificates. The primary way to identify an EV certificate is by referencing the Certificate Policies (CP) extension field. Each EV certificate's CP object identifier (OID) field identifies an EV certificate. The CA/Browser Forum's EV OID is 2.23.140.1.1. Other EV OIDs may be documented in the issuer's Certification Practice Statement. As with root certificate authorities in general, browsers may not recognize all issuers. EV HTTPS certificates contain a subject with X.509 OIDs for jurisdictionOfIncorporationCountryName (OID: 1.3.6.1.4.1.311.60.2.1.3), jurisdictionOfIncorporationStateOrProvinceName (OID: 1.3.6.1.4.1.311.60.2.1.2) (optional), jurisdictionLocalityName (OID: 1.3.6.1.4.1.311.60.2.1.1) (optional), businessCategory (OID: 2.5.4.15) and serialNumber (OID: 2.5.4.5), with the serialNumber pointing to the ID at the relevant secretary of state (US) or government business registrar (outside US).

Online Certificate Status Protocol
The criteria for issuing Extended Validation certificates do not require issuing certificate authorities to immediately support Online Certificate Status Protocol for revocation checking. However, the requirement for a timely response to revocation checks by the browser has prompted most certificate authorities that had not previously done so to implement OCSP support. Section 26-A of the issuing criteria requires CAs to support OCSP checking for all certificates issued after Dec. 31, 2010. 
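As an informal illustration of the OID-based identification described under "Extended Validation certificate identification" above, the following Python sketch uses the third-party cryptography package to look for the CA/Browser Forum EV policy OID in a certificate's Certificate Policies extension. The file name is hypothetical, and a complete check would also have to account for the issuer-specific EV OIDs mentioned in the text.

```python
# Sketch only: detect the CA/Browser Forum EV policy OID (2.23.140.1.1)
# in a PEM-encoded certificate's Certificate Policies extension.
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

EV_POLICY_OID = "2.23.140.1.1"  # CA/Browser Forum EV OID cited in the article

def has_cabf_ev_policy(pem_bytes: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    try:
        policies = cert.extensions.get_extension_for_oid(
            ExtensionOID.CERTIFICATE_POLICIES
        ).value
    except x509.ExtensionNotFound:
        return False
    # Issuer-specific EV OIDs (documented in the CA's Certification Practice
    # Statement) would need to be added here for a complete check.
    return any(p.policy_identifier.dotted_string == EV_POLICY_OID for p in policies)

# Hypothetical usage:
# with open("example_server_cert.pem", "rb") as f:
#     print(has_cabf_ev_policy(f.read()))
```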
Criticism Colliding entity names The legal entity names are not unique, therefore an attacker who wants to impersonate an entity might incorporate a different business with the same name (but, e.g., in a different state or country) and obtain a valid certificate for it, but then use the certificate to impersonate the original site. In one demonstration, a researcher incorporated a business called "Stripe, Inc." in Kentucky and showed that browsers display it similarly to how they display certificate of payment processor "Stripe, Inc." incorporated in Delaware. Researcher claimed the demonstration setup took about an hour of his time, US$100 in legal costs and US$77 for the certificate. Also, he noted that "with enough mouse clicks, [user] may be able to [view] the city and state [where entity is incorporated], but neither of these are helpful to a typical user, and they will likely just blindly trust the [EV certificate] indicator". Availability to small businesses Since EV certificates are being promoted and reported as a mark of a trustworthy website, some small business owners have voiced concerns that EV certificates give undue advantage to large businesses. The published drafts of the EV Guidelines excluded unincorporated business entities, and early media reports focused on that issue. Version 1.0 of the EV Guidelines was revised to embrace unincorporated associations as long as they were registered with a recognized agency, greatly expanding the number of organizations that qualified for an Extended Validation Certificate. Effectiveness against phishing attacks with IE7 security UI In 2006, researchers at Stanford University and Microsoft Research conducted a usability study of the EV display in Internet Explorer 7. Their paper concluded that "participants who received no training in browser security features did not notice the extended validation indicator and did not outperform the control group", whereas "participants who were asked to read the Internet Explorer help file were more likely to classify both real and fake sites as legitimate". Domain-validated certificates were created by CAs in the first place While proponents of EV certificates claim they help against phishing attacks, security expert Peter Gutmann states the new class of certificates restore a CA's profits which were eroded due to the race to the bottom that occurred among issuers in the industry. According to Peter Gutmann, EV certificates are not effective against phishing because EV certificates are "not fixing any problem that the phishers are exploiting". He suggests that the big commercial CAs have introduced EV certificates to return the old high prices. See also Qualified website authentication certificate HTTP Strict Transport Security References External links CA/Browser Forum Web site Firefox green padlock for EV certificates Key management E-commerce Public key infrastructure Transport Layer Security 2007 introductions
Extended Validation Certificate
[ "Technology" ]
1,981
[ "Information technology", "E-commerce" ]
7,104,225
https://en.wikipedia.org/wiki/List%20of%20Bluetooth%20profiles
In order to use Bluetooth, a device must be compatible with the subset of Bluetooth profiles (often called services or functions) necessary to use the desired services. A Bluetooth profile is a specification regarding an aspect of Bluetooth-based wireless communication between devices. It resides on top of the Bluetooth Core Specification and (optionally) additional protocols. While the profile may use certain features of the core specification, specific versions of profiles are rarely tied to specific versions of the core specification, making them independent of each other. For example, there are Hands-Free Profile (HFP) 1.5 implementations using both Bluetooth 2.0 and Bluetooth 1.2 core specifications. The way a device uses Bluetooth depends on its profile capabilities. The profiles provide standards that manufacturers follow to allow devices to use Bluetooth in the intended manner. For the Bluetooth Low Energy stack, a special set of profiles applies as of Bluetooth 4.0. A host operating system can expose a basic set of profiles (namely OBEX, HID and Audio Sink) and manufacturers can add additional profiles to their drivers and stack to enhance what their Bluetooth devices can do. Devices such as mobile phones can expose additional profiles by installing appropriate apps. At a minimum, each profile specification contains information on the following topics: dependencies on other formats; suggested user interface formats; and the specific parts of the Bluetooth protocol stack used by the profile. To perform its task, each profile uses particular options and parameters at each layer of the stack. This may include an outline of the required service record, if appropriate. This article summarizes the current definitions of profiles defined and adopted by the Bluetooth SIG and possible applications of each profile.

Advanced Audio Distribution Profile (A2DP)
This profile defines how multimedia audio can be streamed from one device to another over a Bluetooth connection (it is also called Bluetooth Audio Streaming). For example, music can be streamed from a mobile phone to a wireless headset, hearing aid/cochlear implant streamer, or car audio; alternately, from a laptop/desktop to a wireless headset; also, voice can be streamed from a microphone device to a recorder on a PC. The Audio/Video Remote Control Profile (AVRCP) is often used in conjunction with A2DP for remote control on devices such as headphones, car audio systems, or stand-alone speaker units. These systems often also implement Headset (HSP) or Hands-Free (HFP) profiles for telephone calls, which may be used separately. Each A2DP service, of possibly many, is designed to uni-directionally transfer an audio stream in up to 2-channel stereo, either to or from the Bluetooth host. This profile relies on AVDTP and GAVDP. It includes mandatory support for the low-complexity SBC codec (not to be confused with Bluetooth's voice-signal codecs such as CVSD), optionally supports MPEG-1 Part 3/MPEG-2 Part 3 (MP2 and MP3), MPEG-2 Part 7/MPEG-4 Part 3 (AAC and HE-AAC), and ATRAC, and is extensible to support manufacturer-defined codecs, such as aptX. For an extended list of codecs, see . Although A2DP was designed for one-way audio transfer, CSR developed a way to transfer a mono stream back (enabling the use of headsets with microphones) and incorporated it into the FastStream and aptX Low Latency codecs; the patent has since expired. Some Bluetooth stacks enforce the SCMS-T digital rights management (DRM) scheme. 
In these cases, it is impossible to connect certain A2DP headphones for high quality audio, while some vendors disable the A2DP functionality altogether to avoid devices rejecting A2DP sink. Attribute Profile (ATT) The ATT is a wire application protocol for the Bluetooth Low Energy specification. It is closely related to Generic Attribute Profile (GATT). Audio/Video Remote Control Profile (AVRCP) This profile is designed to provide a standard interface to control TVs, Hi-Fi equipment, etc. to allow a single remote control (or other device) to control all of the A/V equipment to which a user has access. It may be used in concert with A2DP or VDP. It is commonly used in car navigation systems to control streaming Bluetooth audio. It also has the possibility for vendor-dependent extensions. AVRCP has several versions with significantly increasing functionality: 1.0 — Basic remote control commands (play/pause/stop, etc.) 1.3 — all of 1.0 plus metadata and media-player state support The status of the music source (playing, stopped, etc.) Metadata information on the track itself (artist, track name, etc.). 1.4 — all of 1.3 plus media browsing capabilities for multiple media players Browsing and manipulation of multiple players Browsing of media metadata per media player, including a "Now Playing" list Basic search capabilities Support for Absolute volume 1.5 — all of 1.4 plus specification corrections and clarifications to absolute volume control, browsing and other features 1.6 — all of 1.5 plus browsing data and track information Number of items that are in a folder without downloading the list Support for transmitting cover arts through the BIP over OBEX protocol. 1.6.1 and 1.6.2 correct minor errors in tables. Basic Imaging Profile (BIP) This profile is designed for sending images between devices and includes the ability to resize, and convert images to make them suitable for the receiving device. It may be broken down into smaller pieces: Image Push Allows the sending of images from a device the user controls. Image Pull Allows the browsing and retrieval of images from a remote device. Advanced Image Printing print images with advanced options using the DPOF format developed by Canon, Kodak, Fujifilm, and Matsushita Automatic Archive Allows the automatic backup of all the new images from a target device. For example, a laptop could download all of the new pictures from a camera whenever it is within range. Remote Camera Allows the initiator to remotely use a digital camera. For example, a user could place a camera on a tripod for a group photo, use their phone handset to check that everyone is in frame, and activate the shutter with the user in the photo. Remote Display Allows the initiator to push images to be displayed on another device. For example, a user could give a presentation by sending the slides to a video projector. Basic Printing Profile (BPP) This allows devices to send text, e-mails, vCards, or other items to printers based on print jobs. It differs from HCRP in that it needs no printer-specific drivers. This makes it more suitable for embedded devices such as mobile phones and digital cameras which cannot easily be updated with drivers dependent upon printer vendors. Common ISDN Access Profile (CIP) This provides unrestricted access to the services, data and signalling that ISDN offers. Cordless Telephony Profile (CTP) This is designed for cordless phones to work using Bluetooth. 
It is hoped that mobile phones could use a Bluetooth CTP gateway connected to a landline when within the home, and the mobile phone network when out of range. It is central to the Bluetooth SIG's "3-in-1 phone" use case. Device ID Profile (DIP) This profile allows a device to be identified above and beyond the limitations of the Device Class already available in Bluetooth. It enables identification of the manufacturer, product id, product version, and the version of the Device ID specification being met. It is useful in allowing a PC to identify a connecting device and download appropriate drivers. It enables similar applications to those the Plug-and-play specification allows. This is important in order to make best use of the features on the device identified. A few examples illustrating possible uses of this information are listed below: In PC-to-PC usage models (such as conference table and file transfer), a PC may use this information to supplement information from other Bluetooth specifications to identify the right device to communicate with. A cellular phone may use this information to identify associated accessories or download Java apps from another device that advertises its availability. In PC to peripheral usage models (such as dial up networking using a cellular phone), the PC may need to download device drivers or other software for that peripheral from a web site. To do this the driver must know the proper identity of the peripheral. Devices are expected to provide some basic functionality using only the Bluetooth profile implementation, and that additional software loaded using the Device ID information should only be necessary for extended or proprietary features. Likewise, devices which access a profile in another device are expected to be able provide the basic services of the profile regardless of the presence or absence of Device ID information. Dial-up Networking Profile (DUN) This profile provides a standard to access the Internet and other dial-up services over Bluetooth. The most common scenario is accessing the Internet from a laptop by dialing up on a mobile phone, wirelessly. It is based on Serial Port Profile (SPP), and provides for relatively easy conversion of existing products, through the many features that it has in common with the existing wired serial protocols for the same task. These include the AT command set specified in European Telecommunications Standards Institute (ETSI) 07.07, and Point-to-Point Protocol (PPP). DUN distinguishes the initiator (DUN Terminal) of the connection and the provider (DUN Gateway) of the connection. The gateway provides a modem interface and establishes the connection to a PPP gateway. The terminal implements the usage of the modem and PPP protocol to establish the network connection. In standard phones, the gateway PPP functionality is usually implemented by the access point of the Telco provider. In "always on" smartphones, the PPP gateway is often provided by the phone and the terminal shares the connection. Fax Profile (FAX) This profile is intended to provide a well-defined interface between a mobile phone or fixed-line phone and a PC with Fax software installed. Support must be provided for ITU T.31 and / or ITU T.32 AT command sets as defined by ITU-T. Data and voice calls are not covered by this profile. Generic Audio/Video Distribution Profile (GAVDP) GAVDP provides the basis for A2DP and VDP, the basis of the systems designed for distributing video and audio streams using Bluetooth technology. 
The GAVDP defines two roles, that of an Initiator and an Acceptor: Initiator (INT) – This is the device that initiates a signaling procedure. Acceptor (ACP) – This is the device that shall respond to an incoming request from the INT Note: the roles are not fixed to the devices. The roles are determined when you initiate a signaling procedure, and they are released when the procedure ends. The roles can be switched between two devices when a new procedure is initiated.The Baseband, LMP, L2CAP, and SDP are Bluetooth protocols defined in the Bluetooth Core specifications. AVDTP consists of a signaling entity for negotiation of streaming parameters and a transport entity that handles the streaming. Generic Access Profile (GAP) Provides the basis for all other profiles. GAP defines how two Bluetooth units discover and establish a connection with each other. Generic Attribute Profile (GATT) Provides profile discovery and description services for Bluetooth Low Energy protocol. It defines how ATT attributes are grouped together into sets to form services. Generic Object Exchange Profile (GOEP) Provides a basis for other data profiles. Based on OBEX and sometimes referred to as such. Hard Copy Cable Replacement Profile (HCRP) This provides a simple wireless alternative to a cable connection between a device and a printer. Unfortunately it does not set a standard regarding the actual communications to the printer, so drivers are required specific to the printer model or range. This makes this profile less useful for embedded devices such as digital cameras and palmtops, as updating drivers can be problematic. Health Device Profile (HDP) Health Thermometer profile (HTP) and Heart Rate Profile (HRP) fall under this category as well. Profile designed to facilitate transmission and reception of Medical Device data. The APIs of this layer interact with the lower level Multi-Channel Adaptation Protocol (MCAP layer), but also perform SDP behavior to connect to remote HDP devices. Also makes use of the Device ID Profile (DIP). Hands-Free Profile (HFP) This profile is used to allow car hands-free kits to communicate with mobile phones in the car. It commonly uses Synchronous Connection Oriented link (SCO) to carry a monaural audio channel with continuously variable slope delta modulation or pulse-code modulation, and with logarithmic a-law or μ-law quantization. Version 1.6 adds optional support for wide band speech with the mSBC codec, a 16 kHz monaural configuration of the SBC codec mandated by the A2DP profile. Version 1.7 adds indicator support to report such things as headset battery level. In 2002 Audi, with the Audi A8, was the first motor vehicle manufacturer to install Bluetooth technology in a car, enabling the passenger to use a wireless in-car phone. The following year DaimlerChrysler and Acura introduced Bluetooth technology integration with the audio system as a standard feature in the third-generation Acura TL in a system dubbed HandsFree Link (HFL). Later, BMW added it as an option on its 1 Series, 3 Series, 5 Series, 7 Series and X5 vehicles. Since then, other manufacturers have followed suit, with many vehicles, including the Toyota Prius (since 2004), 2007 Toyota Camry, 2006 Infiniti G35, and the Lexus LS 430 (since 2004). Several Nissan models (Versa, X-Trail) include a built-in Bluetooth for the Technology option. Volvo started introducing support in some vehicles in 2007, and as of 2009 all Bluetooth-enabled vehicles support HFP. 
Many car audio consumer electronics manufacturers like Kenwood, JVC, Sony, Pioneer and Alpine build car audio receivers that house Bluetooth modules, all supporting various HFP versions. Bluetooth car kits allow users with Bluetooth-equipped cell phones to make use of some of the phone's features, such as making calls, while the phone itself can be left in the user's pocket or hand bag. Companies like Visteon Corp., Peiker acustic, RAYTEL, Parrot SA, Novero, Dension, S1NN and Motorola manufacture Bluetooth hands-free car kits for well-known brand car manufacturers. Most Bluetooth headsets implement both Hands-Free Profile and Headset Profile, because of the extra features in HFP for use with a mobile phone, such as last number redial, call waiting and voice dialing. The mobile phone side of an HFP link is the Audio Gateway or HFP Server. The automobile side of an HFP link is the Car Kit or HFP Client.

Human Interface Device Profile (HID)
Provides support for HID devices such as mice, joysticks, keyboards, and simple buttons and indicators on other types of devices. It is designed to provide a low latency link, with low power requirements. PlayStation 3 controllers and Wii remotes also use Bluetooth HID. Bluetooth HID is a lightweight wrapper of the human interface device protocol defined for USB. The use of the HID protocol simplifies host implementation (when supported by host operating systems) by re-use of some of the existing support for USB HID in order to also support Bluetooth HID. Keyboards and keypads must be secure. For other HID devices, security is optional.

HID over GATT Profile - Bluetooth Low Energy (HOGP)
A profile that defines how a device with Bluetooth Low Energy wireless communications can support HID devices over the Bluetooth Low Energy protocol stack using the Generic Attribute Profile.

Headset Profile (HSP)
This is the most commonly used profile, providing support for the popular Bluetooth headsets to be used with mobile phones and gaming consoles. It relies on SCO audio encoded in 64 kbit/s CVSD or PCM and a subset of AT commands from GSM 07.07 for minimal controls including the ability to ring, answer a call, hang up and adjust the volume.

iPod Accessory Protocol (iAP)
iAP and the later iAPv2 protocol are proprietary protocols developed by Apple Inc. for communication with 3rd party accessories for iPhones, iPods and iPads. Most Bluetooth drivers and stacks for Windows do not support the iAP profile, since using such protocols requires an MFi license from Apple, and thus the device is displayed as "Bluetooth Peripheral Device" or "Not Supported Bluetooth Function" in Device Manager.

Intercom Profile (ICP)
This is often referred to as the walkie-talkie profile. It is another TCS-based profile, relying on SCO to carry the audio. It is proposed to allow voice calls between two Bluetooth capable handsets, over Bluetooth. The ICP standard was withdrawn on 10 June 2010.

LAN Access Profile (LAP)
The LAN Access Profile makes it possible for a Bluetooth device to access a LAN, WAN or the Internet via another device that has a physical connection to the network. It uses PPP over RFCOMM to establish connections. LAP also allows the device to join an ad-hoc Bluetooth network. The LAN Access Profile has been replaced by the PAN profile in the Bluetooth specification.

Mesh Profile (MESH)
The Mesh Profile Specification allows for many-to-many communication over Bluetooth radio. It supports data encryption and message authentication and is meant for building efficient smart lighting systems and IoT networks. 
The application layer for Bluetooth Mesh has been defined in a separate Mesh Model Specification. As of release 1.0, lighting, sensors, time, scenes and generic devices have been defined. Additionally, application-specific properties have been defined in the Mesh Device Properties Specification, which contains the definitions for all mesh-specific GATT characteristics and their descriptors.

Message Access Profile (MAP)
The Message Access Profile (MAP) specification allows exchange of messages between devices. It is mostly used for automotive hands-free use. The MAP profile can also be used for other uses that require the exchange of messages between two devices. The automotive hands-free use case is where an on-board terminal device (typically an electronic device such as a car kit installed in the car) can talk via messaging capability to another communication device (typically a mobile phone). For example, Bluetooth MAP is used by HP to send and receive text (SMS) messages between a Palm/HP smartphone and an HP TouchPad tablet. Bluetooth MAP is used by Ford in select SYNC Generation 1-equipped 2011 and 2012 vehicles and also by BMW with many of their iDrive systems. The Lexus LX and GS 2013 models both also support MAP, as does the Honda CRV 2012, Acura 2013 and ILX 2013. Apple introduced Bluetooth MAP in iOS 6 for the iPhone and iPad. Android support was introduced in version 4.4 (KitKat).

OBject EXchange (OBEX)

Object Push Profile (OPP)
A basic profile for sending "objects" such as pictures, virtual business cards, or appointment details. It is called push because the transfers are always instigated by the sender (client), not the receiver (server). OPP uses the APIs of the OBEX profile, and the OBEX operations which are used in OPP are connect, disconnect, put, get and abort. By using these APIs, the OPP layer resides over OBEX and hence follows the specifications of the Bluetooth stack.

Personal Area Networking Profile (PAN)
This profile is intended to allow the use of the Bluetooth Network Encapsulation Protocol on Layer 3 protocols for transport over a Bluetooth link.

Phone Book Access Profile (PBAP, PBA)
Phone Book Access (PBA) or Phone Book Access Profile (PBAP) is a profile that allows exchange of Phone Book Objects between devices. It is likely to be used between a car kit and a mobile phone to: allow the car kit to display the name of the incoming caller; allow the car kit to download the phone book so the user can initiate a call from the car display. The profile consists of two roles: PSE - Phone Book Server Equipment, for the side delivering phonebook data, like a mobile phone; PCE - Phone Book Client Equipment, for the device receiving this data, like a personal navigation device (PND).

Proximity Profile (PXP)
The Proximity Profile (PXP) enables proximity monitoring between two devices. This feature is especially useful for unlocking devices such as a PC when a connected Bluetooth smartphone is nearby.

Serial Port Profile (SPP)
This profile is based on ETSI 07.10 and the RFCOMM protocol. It emulates a serial cable to provide a simple substitute for existing RS-232, including the familiar control signals. It is the basis for DUN, FAX, HSP and AVRCP. The SPP maximum payload capacity is 128 bytes. The Serial Port Profile defines how to set up virtual serial ports and connect two Bluetooth enabled devices.

Service Discovery Application Profile (SDAP)
SDAP describes how an application should use SDP to discover services on a remote device. 
SDAP requires that any application be able to find out what services are available on any Bluetooth enabled device it connects to.

SIM Access Profile (SAP, SIM, rSAP)
This profile allows devices such as car phones with built-in GSM transceivers to connect to a SIM card in a Bluetooth enabled phone, so that the car phone itself does not require a separate SIM card and the car's external antenna can be used. This profile is sometimes referred to as rSAP (remote-SIM-Access-Profile), though that name does not appear in the profile specification published by the Bluetooth SIG. Many manufacturers of GSM based mobile phones offer support for SAP/rSAP. It is supported by the Android, Maemo, and MeeGo phone OSs. Neither Apple's iOS nor Microsoft's Windows Phone support rSAP; both use PBAP for Bluetooth cellphone-automobile integration.

Discontinued systems

Synchronization Profile (SYNCH)
This profile allows synchronization of Personal Information Manager (PIM) items. As this profile originated as part of the infrared specifications but has been adopted by the Bluetooth SIG to form part of the main Bluetooth specification, it is also commonly referred to as IrMC Synchronization.

Synchronisation Mark-up Language Profile (SyncML)
For Bluetooth, synchronization is one of the most important areas. The Bluetooth specifications up to and including 1.1 have a Synchronization Profile that is based on IrMC. Later, many of the companies in the Bluetooth SIG already had proprietary synchronization solutions and they did not want to also implement IrMC-based synchronization, hence SyncML emerged. SyncML is an open industry initiative for a common data synchronization protocol. The SyncML protocol has been developed by some of the leading companies in their sectors, Lotus, Motorola, Ericsson, Matsushita Communication Industrial Co., Nokia, IBM, Palm Inc., Psion and Starfish Software, together with over 600 SyncML supporter companies. SyncML is a synchronization protocol that can be used by devices to communicate the changes that have taken place in the data that is stored within them. However, SyncML is capable of delivering more than just basic synchronization; it is extensible, providing powerful commands to allow searching and execution.

Video Distribution Profile (VDP)
This profile allows the transport of a video stream. It could be used for streaming a recorded video from a PC media center to a portable player, or a live video from a digital video camera to a TV. Support for the H.263 baseline is mandatory. The MPEG-4 Visual Simple Profile, and H.263 profiles 3 and 8, are optionally supported and covered in the specification.

Wireless Application Protocol Bearer (WAPB)
This is a profile for carrying Wireless Application Protocol (WAP) over Point-to-Point Protocol over Bluetooth.

Future profiles
These profiles are not yet finalised, but are currently proposed within the Bluetooth SIG: Unrestricted digital information (UDI); Extended service discovery profile (ESDP); Video conferencing profile (VCP), which is to be compatible with 3G-324M and support videoconferencing over a 3G high-speed connection; Tempow Audio Profile (TAP), a new audio profile presented at Bluetooth World 2017 in Santa Clara that enables new audio functions, upgrading the current A2DP profile. Compatibility of products with profiles can be verified on the Bluetooth Qualification Program website. 
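As a rough, non-authoritative illustration of how an application might combine SDP-based service discovery (SDAP) with the Serial Port Profile described above, here is a minimal Python sketch assuming the third-party PyBluez library; the device address and the payload are purely hypothetical.

```python
import bluetooth  # PyBluez (assumed installed)

SPP_UUID = "1101"                    # short UUID of the Serial Port Profile
target_addr = "00:11:22:33:44:55"    # hypothetical remote device address

# Use SDP to ask the remote device which RFCOMM channel its SPP service uses.
services = bluetooth.find_service(uuid=SPP_UUID, address=target_addr)
if services:
    channel = services[0]["port"]
    sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
    sock.connect((target_addr, channel))  # SPP emulates a serial cable over RFCOMM
    sock.send(b"hello over SPP\n")        # illustrative payload (SPP frames are limited to 128 bytes)
    sock.close()
else:
    print("No SPP service advertised by", target_addr)
```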
References External links Specification: Adopted Documents, the official Bluetooth SIG member website Bluetooth
List of Bluetooth profiles
[ "Technology" ]
5,216
[ "Wireless networking", "Bluetooth" ]
7,104,660
https://en.wikipedia.org/wiki/Metal-induced%20crystallization
When combined with certain metallic species, amorphous films can crystallize in a process known as metal-induced crystallization (MIC). The effect was discovered in 1969, when amorphous germanium (a-Ge) films crystallized at surprisingly low temperatures when in contact with Al, Ag, Cu, or Sn. The effect was also verified in amorphous silicon (a-Si) films, as well as in amorphous carbon and various metal-oxide films. Over time, MIC has evolved from simple temperature-driven annealing approaches to others involving, for example, laser or microwave radiation. A very common variant of the MIC procedure is metal-induced lateral crystallization (MILC). In this case, the metal is deposited (on top of or underneath) selected areas of the desired amorphous film. Upon annealing, crystallization starts from the portion of the amorphous film that is in contact with the metal species, and the MIC proceeds laterally. Many studies have been carried out to investigate the MIC phenomenon, using a wide variety of sample production methods and characterization tools. According to these studies, the MIC process is highly sensitive to the type and amount of the metallic species, to the sample history (production method, geometry and annealing details), and to the methodology used to determine crystallization. Moreover, the MIC process goes well beyond the mere diffusion of species (as it is usually discussed in studies involving layered sample structures) and involves many complex atomic and thermodynamic processes at the microscopic level. References Semiconductor device fabrication Inorganic chemistry Chemical processes Crystallography
Metal-induced crystallization
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
334
[ "Microtechnology", "Materials science", "Chemical processes", "Semiconductor device fabrication", "Crystallography", "Condensed matter physics", "nan", "Chemical process engineering" ]
7,105,177
https://en.wikipedia.org/wiki/Tropomodulin
Tropomodulin (TMOD) is a protein which binds and caps the minus end of actin (the "pointed" end), regulating the length of actin filaments in muscle and non-muscle cells. The protein functions by physically blocking the spontaneous dissociation of ADP-bound actin monomers from the minus end of the actin fibre. This, along with plus-end capping proteins such as CapZ, stabilises the structure of the actin filament. End capping is particularly important when long-lived actin filaments are necessary, for example in myofibrils. Inhibition of tropomodulin capping activity leads to a dramatic increase in thin filament length from the pointed end. Actin filaments have two differing ends: the fast-growing barbed end and the slow-growing pointed end. Since TMOD binds to the pointed end of actin, it is essential in cell morphology, cell movement, and muscle contraction. TMOD was first identified in erythrocytes and is a globular protein of 359 amino acids. When tropomyosin is not present, tropomodulin also partially inhibits elongation and depolymerization at the pointed filament ends. The N-terminal portion of tropomodulin is rod shaped. This portion binds to the N-terminal part of the two tropomyosins that lie on opposite sides of the actin filament in muscle and non-muscle cells. TMOD achieves high-affinity capping through a combination of individually low-affinity interactions, which allows it to control subunit exchange at the pointed end of the actin filament. In epithelial cells, tropomodulin sustains F-actin at the lateral cell membranes and the adherens junction. Tropomodulin binds exclusively to the pointed filament ends and not to actin monomers or alongside actin filaments. Tropomodulin is a 40-kDa tropomyosin-binding protein that was originally isolated from the red blood cell membrane skeleton. Tropomodulin is considered homologous to leiomodin because both proteins play a role in muscle sarcomere thin filament formation and maintenance. A structurally similar ortholog of TMOD is UNC-94, a protein that caps the minus end of the actin filament and, like TMOD, depends on the presence of tropomyosin in order to function properly.

Genes
The TMOD genes are important for cell morphology, cell movement, and muscle contraction. There are four identified tropomodulin genes in humans: TMOD1, TMOD2, TMOD3, and TMOD4, which are also recognized as isoforms. There are also known orthologs of these isoforms in mice. Known tropomodulin homologs have been identified in flies (Drosophila), worms (C. elegans), rats, chicks, and mice. The TMOD genes are expressed at different levels in human tissue: expression is highest in heart and skeletal muscle, followed by brain, lung, and pancreas, and then placenta, liver, and kidney. Using PCR, the TMOD gene was isolated and found to contain a total of 9 exons, suggesting the use of alternative promoters for tissue-specific expression and regulation. TMOD1, TMOD3, and TMOD4 are the only isoforms found in muscle. TMOD2 is the only identified isoform found in the brain but not in muscle. The two isoforms associated with neurons are TMOD1 and TMOD2. The functions of each isoform can vary depending on the location of the tropomyosin and actin filaments. 
Since the TMOD isoforms influence the stability of the cytoskeleton and regulate actin, they are essential for embryonic development.

TMOD1
Tropomodulin 1 (TMOD1) is found in various tissues, most notably erythrocytes, the heart, and slow skeletal muscle. The structure of this protein varies slightly from the others: it has an N-terminal half and a C-terminal half, where the N-terminal half is mostly extended, unstructured, and flexible and the C-terminal half is compactly folded. Inhibiting TMOD1 capping with an antibody against the C-terminal half, or decreasing TMOD1 expression, allows the thin filaments to elongate from their pointed ends, decreasing the ability of the heart to contract. In neurons, TMOD1 is essential in synaptogenesis. TMOD1 is also important for spine morphogenesis and synapse formation, where it stabilizes F-actin. In epithelial cells, such as ocular lens fiber cells, TMOD1 is important in maintaining the stability of tropomyosin and F-actin so that the cells stay tightly packed and the tissue maintains its mechanical integrity.

TMOD2
Tropomodulin 2 (TMOD2) is an isoform that is found mainly in the brain. Like the other tropomodulins, TMOD2 binds to the pointed end of actin and to tropomyosin, and in doing so regulates actin nucleation and polymerization. In neurons, TMOD2 is essential in dendrite formation, where it regulates the branching of dendrites. In the mouse ortholog, lack of the TMOD2 gene results in hyperactivity and impaired learning and memory.

TMOD3
Tropomodulin 3 (TMOD3) is essential for the membranous skeleton and embryonic development. TMOD3 is a widely expressed tropomodulin in non-erythroid cells, in which it regulates actin processes such as lamellipodial protrusion and cell motility. Lamellipodia, dense actin-filament protrusions, are usually found in neurons and epithelial cells, where TMOD3 is mostly found; changes in its regulation or a reduction in its levels can drastically change the function of neurons or epithelial cells. TMOD3 is also found in polarized epithelial cell plasma membranes and the sarcoplasmic reticulum membranes of skeletal muscle. TMOD3 is the only one of the four known isoforms found in the human platelet proteome. TMOD3 functions in actin membranous skeleton structures by capping the F-actin in stress fibers. If TMOD3 is not present, erythroblast maturation in definitive erythropoiesis is impaired. TMOD3 actin binding is regulated via phosphatidylinositol 3-kinase (PI3K)–Akt signaling in adipocytes, where TMOD3 regulates cortical actin assembly together with tropomyosin. This regulation of TMOD3 is essential for insulin-mediated trafficking of the glucose transporter Glut4 to the plasma membrane. In intestinal epithelial cells, a reduction in TMOD3 disrupts the binding of tropomyosin and F-actin and causes the cell height to collapse, which can change the overall functionality of the intestinal cells.

TMOD4
Tropomodulin 4 (TMOD4) is essential for muscle, where it regulates thin filament length and can influence the switch between myogenesis and adipogenesis. TMOD4 function has at least one point in common with that of the protein LMOD3 during skeletal myofibrillogenesis. References External links
Tropomodulin
[ "Chemistry" ]
1,760
[ "Biochemistry stubs", "Protein stubs" ]
7,105,280
https://en.wikipedia.org/wiki/Handbra
A handbra (also hand bra or hand-bra) is the practice of covering female nipples and areolae with hands or arms. It is often done in compliance with censors' guidelines, public authorities and community standards when female breasts are required to be covered in film or other media. If the arms are used instead of the hands, the expression is arm bra. The use of long hair for this purpose is called a hair bra. A handbra may also be used by women to maintain their modesty when they find themselves with their breasts uncovered in front of others. Social conventions requiring females to cover all or part of their breasts in public have been widespread throughout history and across cultures. Contemporary Western cultures usually regard the exposure of the nipples and areolae as immodest, and sometimes prosecute it as indecent exposure. Covering them, as with pasties, is often sufficient to avoid legal sanction. In art Employment of the handbra technique and its variations has a long history in art. Judean pillar figures show a nude goddess, supporting or cupping her prominent breasts with her hands. In print media Similar community standards apply in other media, with female models being required to at least cover their breasts in some way. The handbra technique became less common and an unnecessary pose in early 20th century European and American pinup postcard media as toplessness and nudity became more common. In America, after bare breasts became repressed in mainstream media circa 1930, the handbra became an increasingly durable pose, especially as more widespread American pinup literature emerged in the 1950s. Once bare breasts became common in pinup literature, after the early 1960s, the handbra pose became less necessary. As with pinup magazines of the 1950s, the handbra pose was a mainstay of late 20th century mainstream media, especially lad mags, such as FHM, Maxim, and Zoo Weekly, that prominently featured photos of scantily clad actresses and models who wished to avoid topless and nude glamour photography. Examples include Brigitte Bardot (1955, 1971), Elizabeth Taylor in a Playboy magazine pictorial from the set of Cleopatra, Peggy Moffitt modeling Rudi Gernreich's topless maillot and how Life magazine handled the story (1964), and the emergence of handbras in publications such as the Sports Illustrated Swimsuit Issue by model Elle MacPherson (1989). Toward the end of the 20th century, the handbra appeared on numerous celebrity magazine covers. The August 1991 cover of Vanity Fair magazine, known as the More Demi Moore cover, contained a controversial handbra nude photograph of the then seven-months pregnant Demi Moore taken by Annie Leibovitz. Two years later Janet Jackson appeared on the September 1993 cover of Rolling Stone with her nipples covered by a pair of male hands. The magazine later named it their "Most Popular Cover Ever". In July 1994, Ronald Reagan's daughter Patti Davis appeared on the cover of Playboy with another model covering her breasts. Photographer Raphael Mazzucco created an eight-woman handbra on the cover of the 2006 Sports Illustrated Swimsuit Issue and a photo of Marisa Miller covering her breasts with her arms and her vulva with an iPod in the 2007 Swimsuit Issue. The handbra was the subject of a pointed parody advertisement for Holding Your Own Boobs magazine performed by Sarah Michelle Gellar and Will Ferrell on the May 15, 1999 episode of Saturday Night Live.
In cinema At the start of the 20th century, the use of the handbra was not very common in European or American cinema, where toplessness and discreet full nudity of the female form was accepted. In the 1930s, the Hays Code brought an end to nudity in all its forms, including toplessness, in Hollywood films. To remain within the censors' guidelines or community standards of decency and modesty, breasts of actresses in an otherwise topless scene were required to be covered, especially the nipples and areolae, with their hands (using a handbra gesture), arms, towel, pasties, some other object, or the angle of the body in relation to the camera. Social upheaval in the 1960s resulted first in toplessness then full nudity in film being accepted (albeit subject to movie ratings in many countries), after which the use of the handbra decreased. It has, however, not disappeared, remaining a concession to modesty in "PG" pictures. On the Internet In 2014 Playboy Enterprises made its Playboy.com website content "safe for work" by covering nipples with handbras and armbras. Other uses A brassiere called the "handbra" has a pair of hands parodying the technique. Lady Gaga wore one in the music video for her 2013 single "Applause". See also Glamour photography Toplessness References External links Human positions 1990s neologisms Breast
Handbra
[ "Biology" ]
996
[ "Behavior", "Human positions", "Human behavior" ]
7,106,579
https://en.wikipedia.org/wiki/Business%20support%20system
Business support systems (BSS) are the components that a telecommunications service provider (or telco) uses to run its business operations towards customers. Together with operations support systems (OSS), they are used to support various end-to-end telecommunication services (e.g., telephone services). BSS and OSS have their own data and service responsibilities. The two systems together are abbreviated in various ways, such as OSS/BSS, BSS/OSS, B/OSS, BSSOSS, OSSBSS or BOSS. Some commentators and analysts take a network-up approach to these systems (hence OSS/BSS) and others take a business-down approach (hence BSS/OSS). The initialism BSS is also used in a singular form to refer to all the business support systems, viewed as a whole system. Role BSS deals with the taking of orders, payment issues, revenues, etc. It supports four processes: product management, order management, revenue management and customer management. Product management Product management supports product development and the sales and management of products, offers and bundles to businesses and mass-market customers. Product management regularly includes offering cross-product discounts, appropriate pricing and managing how products relate to one another. Customer management Service providers require a single view of the customer and regularly need to support complex hierarchies across customer-facing applications (customer relationship management). Customer management also covers requirements for partner management and 24x7 web-based customer self-service. Customer management can also be thought of as a full-fledged customer relationship management system implemented to help customer care agents handle customers in a better and more informed manner. Revenue management Revenue management focuses on billing, charging and settlement. It includes billing for consumer, enterprise and wholesale services, including interconnect and roaming. This includes billing mediation systems, bill generation and bill presentment. Revenue management may also include fraud management and revenue assurance. Order management Order management encompasses four areas: Order decomposition details the rules for decomposing a sales order into multiple work orders or service orders. For example, a triple-play telco sales order with three services - landline, Internet and wireless - can be broken down into three sub-orders, one for each line of business. Each of the sub-orders will be fulfilled separately in its own provisioning system. However, there may be dependencies between the sub-orders; e.g., an Internet sub-order can be fulfilled only when the landline has been successfully installed, provisioned and activated at the customer premises. Order orchestration is the application telcos use to manage, process and handle their customer orders across multiple fulfillment and order-capture systems. It aggregates data from assorted order capture and order fulfillment systems and delivers a comprehensive platform for customer order management. It has been widely adopted in recent times because it improves the accuracy and efficiency of order information and lowers order fulfillment costs, reducing manual processing and speeding up output. Its exception-driven response handling and proactive monitoring enable it to centralize order data accurately and with ease. Order fallout, also known as order failure, refers to the condition in which an order fails during processing.
Order fallout occurs in several scenarios: a downstream system failure, that is, an internal error unrelated to the order data; or the receipt of incorrect or missing data, which causes the order to fail. Other order fallout conditions include database failures and errors pertaining to network connectivity. Fallout can also arise during order validation or recognition, when the system marks a corrupted order received from an external system as failed. A further condition is run-time failure, in which an order is prevented from being processed because of an undetermined dependency. Order fallout management helps resolve order failures completely through detection, notification and recovery processes, allowing orders to be processed reliably and accurately. Order status management Order management as the beginning of assurance is normally associated with OSS, although BSS is often the business driver for fulfillment management and order provisioning. See also Business Process Framework (eTOM) Operations, administration and management (OAM) References External links What is BSS? Not your parents' BSS/OSS: A digital stack for operators in the internet economy Business software Telecommunications systems
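As an illustration of the order decomposition and dependency handling described above, the following Python sketch splits a triple-play sales order into per-service sub-orders and only releases a sub-order to provisioning once the sub-orders it depends on have completed. All class, function and service names are hypothetical, invented for the example rather than taken from any real BSS product.

from dataclasses import dataclass, field

@dataclass
class SubOrder:
    service: str                      # e.g. "landline", "internet", "wireless"
    depends_on: list = field(default_factory=list)
    status: str = "pending"

def decompose(services):
    # One sub-order per line of business; Internet waits for the landline,
    # mirroring the dependency mentioned in the text.
    subs = {s: SubOrder(service=s) for s in services}
    if "internet" in subs and "landline" in subs:
        subs["internet"].depends_on.append("landline")
    return subs

def ready_to_provision(sub, subs):
    # A sub-order may be sent to its provisioning system only when every
    # sub-order it depends on has completed.
    return all(subs[d].status == "completed" for d in sub.depends_on)

subs = decompose(["landline", "internet", "wireless"])
subs["landline"].status = "completed"
print([s.service for s in subs.values()
       if s.status == "pending" and ready_to_provision(s, subs)])
# -> ['internet', 'wireless']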
Business support system
[ "Technology" ]
894
[ "Telecommunications systems" ]
7,107,008
https://en.wikipedia.org/wiki/Critical%20relative%20humidity
The critical relative humidity (CRH) of a salt is defined as the relative humidity of the surrounding atmosphere (at a certain temperature) at which the material begins to absorb moisture from the atmosphere and below which it will not absorb atmospheric moisture. When the humidity of the atmosphere is equal to (or greater than) the critical relative humidity of a sample of salt, the sample will take up water until all of the salt is dissolved to yield a saturated solution. All water-soluble salts and mixtures have characteristic critical humidities; CRH is a unique material property. The critical relative humidity of most salts decreases with increasing temperature. For instance, the critical relative humidity of ammonium nitrate decreases by 22% as the temperature rises from 0 °C to 40 °C (32 °F to 104 °F). The critical relative humidity of several fertilizer salts is given in table 1: Table 1: Critical relative humidities of pure salts at 30°C. Mixtures of salts usually have lower critical humidities than either of the constituents. Fertilizers that contain urea as an ingredient usually exhibit a much lower critical relative humidity than fertilizers without urea. Table 2 shows CRH data for two-component mixtures: Table 2: Critical relative humidities of mixtures of salts at 30°C (values are percent relative humidity). As shown, the effect of salt mixing is most dramatic in the case of ammonium nitrate with urea. This mixture has an extremely low critical relative humidity and can therefore only be used in liquid fertilisers (so-called UAN solutions). See also Deliquescent Hygroscopy Humidity References Chemical properties Agricultural chemicals Atmospheric thermodynamics Humidity
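The deliquescence rule described above (a salt starts taking up water once the ambient relative humidity reaches its CRH at that temperature) amounts to a single comparison; the Python sketch below expresses it with placeholder CRH values, since the article's table data is not reproduced here and the numbers used are illustrative assumptions only.

ILLUSTRATIVE_CRH_AT_30C = {   # percent relative humidity; assumed example values
    "salt_A": 60.0,
    "salt_B": 75.0,
}

def will_absorb_moisture(salt, ambient_rh_percent, crh_table=ILLUSTRATIVE_CRH_AT_30C):
    # True when the surrounding air is at or above the salt's CRH,
    # i.e. the sample will start dissolving into a saturated solution.
    return ambient_rh_percent >= crh_table[salt]

print(will_absorb_moisture("salt_A", 65.0))  # True: 65% RH >= 60% CRH
print(will_absorb_moisture("salt_B", 65.0))  # False: 65% RH < 75% CRH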
Critical relative humidity
[ "Physics", "Chemistry", "Mathematics" ]
346
[ "Physical phenomena", "Physical quantities", "Quantity", "Physical properties" ]
7,107,162
https://en.wikipedia.org/wiki/Calcium-binding%20protein
Calcium-binding proteins are proteins that participate in calcium cell signaling pathways by binding to Ca2+, the calcium ion that plays an important role in many cellular processes. Calcium-binding proteins have specific domains that bind to calcium and are known to be heterogeneous. One of the functions of calcium-binding proteins is to regulate the amount of free (unbound) Ca2+ in the cytosol of the cell. The cellular regulation of calcium is known as calcium homeostasis. Types Many different calcium-binding proteins exist, with different cellular and tissue distribution and involvement in specific functions. Calcium-binding proteins also serve an important physiological role for cells. The most ubiquitous Ca2+-sensing protein, found in all eukaryotic organisms including yeasts, is calmodulin. Intracellular storage and release of Ca2+ from the sarcoplasmic reticulum is associated with the high-capacity, low-affinity calcium-binding protein calsequestrin. Calretinin is another calcium-binding protein, weighing 29 kD. It is involved in cell signaling and has been shown to exist in neurons. It is also found in large quantities in malignant mesothelial cells, allowing these to be easily differentiated from carcinomas; this differentiation is also applied in the diagnosis of ovarian stromal tumors. Another member of the EF-hand superfamily is the S100B protein, which regulates p53. p53 is a tumor suppressor protein and in this case acts as a transcriptional activator or repressor of numerous genes. S100B proteins are overexpressed and therefore abundant in cancerous tumor cells, making them useful for classifying tumors; this abundance also explains why the protein can readily interact with p53 during transcriptional regulation. Calcium-binding proteins can be either intracellular or extracellular. Those that are intracellular can contain or lack a structural EF-hand domain. Extracellular calcium-binding proteins are classified into six groups. Since Ca2+ is an important second messenger, it can act as an activator or inhibitor in gene transcription. Calcium-binding proteins that belong to the EF-hand superfamily, such as calmodulin and calcineurin, have been linked to transcription regulation. When levels of Ca2+ increase in the cell, these members of the EF-hand superfamily regulate transcription indirectly by phosphorylating/dephosphorylating transcription factors. Secretory calcium-binding phosphoprotein The secretory calcium-binding phosphoprotein (SCPP) gene family consists of an ancient group of genes emerging around the same time as bony fish. SCPP genes are roughly divided into acidic and P/Q-rich types: the former mostly participate in bone and dentin formation, while the latter usually participate in enamel/enameloid formation. In mammals, P/Q-rich SCPP is also found in saliva and milk and includes unorthodox members such as MUC7 (a mucin) and casein. SCPP genes are recognized by exon structure rather than protein sequence. Functions With their role in signal transduction, calcium-binding proteins contribute to all aspects of the cell's functioning, from homeostasis to learning and memory. For example, the neuron-specific calexcitin has been found to have an excitatory effect on neurons, and interacts with proteins that control the firing state of neurons, such as the voltage-dependent potassium channel.
Compartmentalization of calcium binding proteins such as calretinin and calbindin-28 kDa has been noted within cells, suggesting that these proteins perform distinct functions in localized calcium signaling. It also indicates that in addition to freely diffusing through the cytoplasm to attain a homogeneous distribution, calcium binding proteins can bind to cellular structures through interactions that are likely important for their functions. See also Calbindin Calmodulin Calsequestrin Troponin References External links Proteins by function Calcium signaling
Calcium-binding protein
[ "Chemistry" ]
836
[ "Calcium signaling", "Signal transduction" ]
7,107,398
https://en.wikipedia.org/wiki/Anton%20Vamplew
Anton Vamplew (born 6 February 1966, in Rainham, Kent) is an English amateur astronomer, author, lecturer and media presenter of the subject. Biography He joined Mid-Kent Astronomical Society in 1979, later becoming chairman and editor of the society's quarterly journal, Pegasus. His first radio appearance was on BBC Radio Kent in 1986 during the approach of Halley's Comet. On the same station, he soon created his astro-persona "Captain Cosmos", presenting a monthly live phone-in. This was followed by an astronomy series entitled the Essential Guide to the Night Sky, which ran for eight months. From March 1993 to July 2002 he toured schools, colleges and museums, giving talks about space and astronomy to children and adults in the Astrodome inflatable mobile planetarium. He was also involved in writing the shows. In November 1996 he was asked to write and present a Universe Special for the BBC World Service youth programme, Megamix. He then became a regular presenter of the programme (along with Nikita Gulhane and Brenda Emmanus) until it ended in March 1999. During that time he produced a series of features entitled the Captain Cosmos Guide to the End of the World. Also on the World Service, he wrote and presented a 10-part astronomy series, Captain Cosmos' Galactic Guide, which was broadcast twice in 1998, and he was the resident astronomy expert from March 1999 to March 2000 on the youth programme, The Edge. On 5 November 1997, he made the first of 30 regular appearances on the BBC Children's programme, Blue Peter. For a time he became known as the Blue Peter Astronomer and undertook two overseas assignments to the telescopes of La Palma and Hawaii. In 1999 Vamplew joined the Royal Observatory Greenwich, working as a presenter in their original Caird Planetarium. Once the redevelopment of the Observatory site began in late 2004, he joined the team that would equip the new, yet-to-be-named Peter Harrison Planetarium. He left his newer role of Planetarium Producer and the Observatory itself in May 2007. He wrote the Beginners' Guide in the BBC Sky at Night magazine from February 2006 until May 2013. Selected television appearances GMTV (TV) (2006) (ITV morning news programme) Mind Games (TV) (2004) (BBC4 brainteaser quiz panel show) Richard & Judy (TV) (2004) (Channel 4 chat show) Ready, Steady, Cook (TV) (2002) (BBC2 cookery show) This Morning (TV) (2005) (ITV daytime magazine show) BBC World (TV) (2006) (BBC international news channel) Who Knows? (TV) (2003) (Disney Channel magazine show) Space Detectives (TV) (2000) (BBC children's 13-part programme - scientific advisor and episode writer) Bibliography Simple Stargazing (Collins, 2005), Anton Vamplew's Stargazing Secrets (Collins, 2007), Secretos Para Observar Los Astros (Naturart, S.a.), The Practical Astronomer (Dorling Kindersley, 2010), Stargazing for Beginners (Dorling Kindersley, 2010), Praktische Astronomie: Das Handbuch zur Himmelsbeobachtung (Dorling Kindersley Verlag, 2011), New Simple Stargazing (iTunes, 2012), Anton's Night Sky Guide 2013 (Amazon, 2012), ASIN B008PR9XMG The Practical Astronomer, 2nd Edition (Dorling Kindersley, 2017), References External links Anton's "Starry Skies" site 1966 births Living people 21st-century British astronomers English writers English television presenters Amateur astronomers People from Rainham, Kent
Anton Vamplew
[ "Astronomy" ]
761
[ "Astronomers", "Amateur astronomers" ]
7,107,907
https://en.wikipedia.org/wiki/Power%20shovel
A power shovel, also known as a motor shovel, stripping shovel, front shovel, mining shovel or rope shovel, is a bucket-equipped machine usually powered by steam, diesel fuel, gasoline or electricity and used for digging and loading earth or fragmented rock and for mineral extraction. Power shovels are a type of rope/cable excavator, where the digging arm is controlled and powered by winches and steel ropes, rather than hydraulics like in the modern hydraulic excavators. Basic parts of a power shovel include the track system, cabin, cables, rack, stick, boom foot-pin, saddle block, boom, boom point sheaves and bucket. The size of bucket varies from 0.73 to 53 cubic meters. Design Power shovels normally consist of a revolving deck with a power plant, drive and control mechanisms, usually a counterweight, and a front attachment, such as a crane ("boom") which supports a handle ("dipper" or "dipper stick") with a digger ("bucket") at the end. The term "dipper" is also sometimes used to refer to the handle and digger combined. The machinery is mounted on a base platform with tracks or wheels. Modern bucket capacities range from 8m3 to nearly 80m3. Use Power shovels are used principally for excavation and removal of overburden in open-cut mining operations; they may also be used for the loading of minerals, such as coal. They are the classic equivalent of excavators, and operate in a similar fashion. Other uses of the power shovel include: Close range work. Digging very hard materials. Removing large boulders. Excavating material and loading trucks. Various other types of jobs such as digging in gravel banks, in clay pits, cuts in support of road work, road-side berms, etc. Operation The shovel operates using several main motions including: Hoisting - Pulling the bucket up through the bank of material being dug. Crowding - Moving the dipper handle in or out in order to control the depth of cut or to position for dumping. Swinging - Rotating the shovel between the dig site and dumping location. Propelling - Moving the shovel unit to different locations or dig positions. A shovel's work cycle, or digging cycle, consists of four phases: 1 Digging 2 Swinging 3 Dumping 4 Returning The digging phase consists of crowding the dipper into the bank, hoisting the dipper to fill it, then retracting the full dipper from the bank. The swinging phase occurs once the dipper is clear of the bank both vertically and horizontally. The operator controls the dipper through a planned swing path and dump height until it is suitably positioned over the haul unit (e.g. truck). Dumping involves opening the dipper door to dump the load, while maintaining the correct dump height. Returning is when the dipper swings back to the bank, and involves lowering the dipper into the track position to close the dipper door. Giant stripping shovels In the 1950s with the demand for coal at a peak high and more coal companies turning to the cheaper method of strip mining, excavator manufacturers started offering a new super class of power shovels, commonly called giant stripping shovels. Most were built between the 1950s and the 1970s. The world's first giant stripping shovel for the coal fields was the Marion 5760. Unofficially known to its crew and eastern Ohio residents alike as The Mountaineer, it was erected in 1955/56 near Cadiz, Ohio off of Interstate I-70. Larger models followed the successful 5760, culminating in the mid 60s with the gigantic 12,700 ton Marion 6360, nicknamed The Captain. 
One stripping shovel, the Bucyrus-Erie 1850-B, known as "Big Brutus", has been preserved as a national landmark and a museum with tours and camping. Another stripping shovel, the Bucyrus-Erie 3850-B, known as "Big Hog", was eventually cut down in 1985 and buried on the Peabody Sinclair Surface Mining site near the Paradise Mining Plant where it had operated. It remains there on non-public, government-owned land. Notable examples Ranked by bucket capacity. See also P&H Mining Bucyrus International Dragline Excavator Marion Power Shovel Steam shovel Hulett Further reading Extreme Mining Machines - Stripping shovels and walking draglines, by Keith Haddock, pub. by MBI, References Stripping shovels Engineering vehicles Maintenance of way equipment
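As a rough illustration of how the four-phase work cycle described above translates into output, the sketch below estimates hourly production from per-phase times, bucket capacity and a fill factor; every number in it is an invented example, not a figure from the article.

def hourly_output_m3(phase_times_s, bucket_capacity_m3, fill_factor=0.9,
                     work_minutes_per_hour=50):
    # One cycle = digging + swinging + dumping + returning.
    cycle_s = sum(phase_times_s.values())
    cycles_per_hour = (work_minutes_per_hour * 60) / cycle_s
    return cycles_per_hour * bucket_capacity_m3 * fill_factor

example_cycle = {"digging": 12, "swinging": 8, "dumping": 4, "returning": 8}  # seconds
print(round(hourly_output_m3(example_cycle, bucket_capacity_m3=20)))  # about 1688 m3/h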
Power shovel
[ "Engineering" ]
914
[ "Engineering vehicles", "Stripping shovels", "Mining equipment" ]
7,108,375
https://en.wikipedia.org/wiki/Content%20adaptation
Content adaptation is the action of transforming content to adapt to device capabilities. Content adaptation is usually related to mobile devices, which require special handling because of their limited computational power, small screen size, and constrained keyboard functionality. Content adaptation can roughly be divided into two fields: Media content adaptation that adapts media files. Browsing content adaptation that adapts a website to mobile devices. Browsing content adaptation Advances in the capabilities of small, mobile devices such as mobile phones (cell phones) and Personal Digital Assistants have led to an explosion in the number of types of device that can now access the Web. Some commentators refer to the Web that can be accessed from mobile devices as the Mobile Web. The sheer number and variety of Web-enabled devices poses significant challenges for authors of websites who want to support access from mobile devices. The W3C Device Independence Working Group described many of the issues in its report Authoring Challenges for Device Independence. Content adaptation is one approach to a solution. Rather than requiring authors to create pages explicitly for each type of device that might request them, content adaptation transforms an author's materials automatically. For example, content might be converted from a device-independent markup language, such as XDIME, an implementation of the W3C's DIAL specification, into a form suitable for the device, such as XHTML Basic, C-HTML, or WML. Similarly, a suitable device-specific CSS style sheet or a set of in-line styles might be generated from abstract style definitions. Likewise, a device-specific layout might be generated from abstract layout definitions. Once created, the device-specific materials form the response returned to the device from which the request was made. Another approach is responsive web design (RWD), based on CSS. Content adaptation requires a processor that performs the selection, modification, and generation of materials to form the device-specific result. IBM's Websphere Everyplace Mobile Portal (WEMP), BEA Systems' WebLogic Mobility Server, Morfeo's MyMobileWeb, and Apache Cocoon are examples of such processors. WURFL and WALL are popular open-source tools for content adaptation. WURFL is an XML-based Device Description Repository with APIs to access the data in Java and PHP (and other popular programming languages). WALL (Wireless Abstraction Library) lets a developer author mobile pages which look like plain HTML, but converts them to WML, C-HTML, or XHTML Mobile Profile, depending on the capabilities of the device from which the HTTP request originates. GreasySpoon lets the developer build plugins for content editing in JavaScript, Ruby, and more, similar to the Firefox extension Greasemonkey. Alembik (Media Transcoding Server) is a Java (J2EE) application providing transcoding services for a variety of clients and for different media types (image, audio, video, etc.). It is fully compliant with OMA's Standard Transcoder Interface specification and is distributed under the LGPL open source license. In 2007, the first large scale carrier-grade deployments of content transformation, on existing mass-market handsets, with no software download required, were deployed by Vodafone in the UK and globally for Yahoo! oneSearch, using the Novarra Vision solution.
Novarra's content adaptation solution had been used in enterprise intranet deployments as early as 2003 (at that time, the platform was named “Engines for Wireless Data”). InfoGin is another content-adaptation company, with customers including Vodafone, Orange, Telefónica and PCCW; its offerings include its patented "Web to Mobile adaptation" technology, the Mobile Matrix Transcoder, multimedia and document transcoders, and video adaptation support. Launched in 2007, Bytemobile's Web Fidelity Service was another carrier-grade, commercial infrastructure solution, which provided wireless content adaptation to mobile subscribers on their existing mass-market handsets, with no client download required. See also Progressive enhancement, layering technologies such that more features are added for successively more powerful clients. Adaptation (computer science) jQuery Mobile or Zepto Responsive architecture is an analogous concept, applied to actual building architecture. References External links Authoring Challenges for Device Independence (W3C Working Group Note) Web development
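A minimal sketch of the server-side idea described in this article is shown below: look up the requesting device's capabilities and return markup suited to it. The capability table and function names are invented for the illustration and are not the real WURFL, WALL or Novarra APIs.

DEVICE_CAPS = {  # hypothetical device description repository
    "LegacyPhone/1.0":  {"markup": "WML"},
    "FeaturePhone/2.0": {"markup": "XHTML-MP"},
    "default":          {"markup": "XHTML"},
}

def adapt(user_agent, title, body_text):
    # Pick the markup variant the requesting device can handle.
    caps = DEVICE_CAPS.get(user_agent, DEVICE_CAPS["default"])
    if caps["markup"] == "WML":
        return '<wml><card title="%s"><p>%s</p></card></wml>' % (title, body_text)
    return "<html><head><title>%s</title></head><body><p>%s</p></body></html>" % (title, body_text)

print(adapt("LegacyPhone/1.0", "News", "Hello"))  # WML variant for the old handset
print(adapt("Browser/5.0", "News", "Hello"))      # default XHTML variant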
Content adaptation
[ "Engineering" ]
899
[ "Software engineering", "Web development" ]
7,108,409
https://en.wikipedia.org/wiki/Trojan%20%28celestial%20body%29
In astronomy, a trojan is a small celestial body (mostly asteroids) that shares the orbit of a larger body, remaining in a stable orbit approximately 60° ahead of or behind the main body near one of its Lagrangian points, L4 and L5. Trojans can share the orbits of planets or of large moons. Trojans are one type of co-orbital object. In this arrangement, a star and a planet orbit about their common barycenter, which is close to the center of the star because it is usually much more massive than the orbiting planet. In turn, a much smaller mass than both the star and the planet, located at one of the Lagrangian points of the star–planet system, is subject to a combined gravitational force that acts through this barycenter. Hence the smallest object orbits around the barycenter with the same orbital period as the planet, and the arrangement can remain stable over time. In the Solar System, most known trojans share the orbit of Jupiter. They are divided into the Greek camp at L4 (ahead of Jupiter) and the Trojan camp at L5 (trailing Jupiter). More than a million Jupiter trojans larger than one kilometer are thought to exist, of which more than 7,000 are currently catalogued. In other planetary orbits only nine Mars trojans, 31 Neptune trojans, two Uranus trojans, and two Earth trojans have been found to date. A temporary Venus trojan is also known. Numerical orbital dynamics stability simulations indicate that Saturn probably does not have any primordial trojans. The same arrangement can appear when the primary object is a planet and the secondary is one of its moons, whereby much smaller trojan moons can share its orbit. All known trojan moons are part of the Saturn system. Telesto and Calypso are trojans of Tethys, and Helene and Polydeuces of Dione. Trojan minor planets In 1772, the Italian–French mathematician and astronomer Joseph-Louis Lagrange obtained two constant-pattern solutions (collinear and equilateral) of the general three-body problem. In the restricted three-body problem, with one mass negligible (which Lagrange did not consider), the five possible positions of that mass are now termed Lagrange points. The term "trojan" originally referred to the "trojan asteroids" (Jovian trojans) that orbit close to the Lagrangian points of Jupiter. These have long been named for figures from the Trojan War of Greek mythology. By convention, the asteroids orbiting near the L4 point of Jupiter are named for the characters from the Greek side of the war, whereas those orbiting near the L5 point of Jupiter are from the Trojan side. There are two exceptions, named before the convention was adopted: 624 Hektor in the L4 group, and 617 Patroclus in the L5 group. Astronomers estimate that the Jovian trojans are about as numerous as the asteroids of the asteroid belt. Later on, objects were found orbiting near the Lagrangian points of Neptune, Mars, Earth, Uranus, and Venus. Minor planets at the Lagrangian points of planets other than Jupiter may be called Lagrangian minor planets. Four Martian trojans are known: 5261 Eureka, , , and – the only trojan body in the leading "cloud" at L4. There also seem to be , , and , but these have not yet been accepted by the Minor Planet Center. There are 28 known Neptunian trojans, but the large Neptunian trojans are expected to outnumber the large Jovian trojans by an order of magnitude. was confirmed to be the first known Earth trojan in 2011. It is located at the L4 Lagrangian point, which lies ahead of the Earth. was found to be another Earth trojan in 2021. It is also at L4.
was identified as the first Uranus trojan in 2013. It is located at the Lagrangian point. A second one, , was announced in 2017. is a temporary Venusian trojan, the first one to be identified. The large asteroids Ceres and Vesta have temporary trojans. Saturn has one known trojan at the L4 Lagrangian point, 2019 UO14. Trojans by planet Stability Whether or not a system of star, planet, and trojan is stable depends on how large the perturbations are to which it is subject. If, for example, the planet has the mass of Earth, and there is also a Jupiter-mass object orbiting that star, the trojan's orbit would be much less stable than if the second planet had the mass of Pluto. As a rule of thumb, the system is likely to be long-lived if m1 > 100m2 > 10,000m3 (in which m1, m2, and m3 are the masses of the star, planet, and trojan). More formally, in a three-body system with circular orbits, the stability condition is 27(m1m2 + m2m3 + m3m1) < (m1 + m2 + m3)². So if the trojan is a mote of dust (m3 → 0), this imposes a lower bound on m1/m2 of ≈ 24.9599. And if the star were hyper-massive, m1 → +∞, then under Newtonian gravity, the system is stable whatever the planet and trojan masses. And if m1/m2 = m2/m3, then both ratios must exceed 13 + √168 ≈ 25.9615. However, this all assumes a three-body system; once other bodies are introduced, even if distant and small, stability of the system requires even larger ratios. See also Earth trojan Jupiter trojan Lissajous orbit List of objects at Lagrange points Tadpole orbit Trojan wave packet References Solar System
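The circular-orbit stability condition quoted above can be checked numerically; the short Python sketch below evaluates 27(m1m2 + m2m3 + m3m1) < (m1 + m2 + m3)² for a few illustrative mass combinations (the masses, in solar units, are rough round numbers used only as examples).

def trojan_stable(m1, m2, m3):
    # Stability test for a star (m1), planet (m2) and trojan (m3) on circular orbits.
    return 27 * (m1 * m2 + m2 * m3 + m3 * m1) < (m1 + m2 + m3) ** 2

SUN, JUPITER_APPROX, DUST = 1.0, 1.0 / 1047, 1e-15   # solar masses (rounded)
print(trojan_stable(SUN, JUPITER_APPROX, DUST))      # True: m1/m2 is well above ~24.96
print(trojan_stable(SUN, 1.0 / 20, DUST))            # False: m1/m2 = 20 is below the bound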
Trojan (celestial body)
[ "Astronomy" ]
1,182
[ "Outer space", "Solar System" ]
9,225,925
https://en.wikipedia.org/wiki/Air%20draft
Air draft (or air draught) is the distance from the surface of the water to the highest point on a vessel. This is similar to the deep draft of a vessel which is measured from the surface of the water to the deepest part of the hull below the surface. However, air draft is expressed as a height (positive upward), while deep draft is expressed as a depth (positive downward). Clearance below The vessel's clearance is the distance in excess of the air draft which allows a vessel to pass safely under a bridge or obstacle such as power lines, etc. A bridge's "clearance below" is most often noted on charts as measured from the surface of the water to the underside of the bridge at the chart datum Mean High Water (MHW), a less restrictive clearance than Mean Higher High Water (MHHW). In 2014, the United States Coast Guard reported that 1.2% of the collisions that it investigated in the recent past were caused by vessels attempting to pass under structures with insufficient clearance resulting in bridge strikes. Examples The Bridge of the Americas in Panama limits which ships can traverse the Panama Canal due to its height at above the water. The world's largest cruise ships, , and the will fit within the canal's new widened locks, but they are too tall to pass under the bridge, even at low tide (the two first ships are , but do have lowerable funnels, enabling them to pass the Great Belt Bridge in Denmark). New vessels are rarely built not clearing , a height which accommodates all but the largest cruise and container ships. The Suez Canal Bridge has a clearance over the canal. The Bayonne Bridge, an arch bridge connecting New Jersey with New York City, undertook a $1.7 billion modification to raise its roadbed to . See also Structural clearance Structure gauge Tower Bridge Cargo ship size categories Chart datum Bridge strike References Ship measurements Vertical extent
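The clearance arithmetic described above amounts to a simple subtraction; the sketch below computes the margin between a charted "clearance below" and a vessel's air draft, with an optional allowance for water standing below the charted datum. The figures in the example calls are illustrative only.

def clearance_margin_m(charted_clearance_m, air_draft_m, water_below_datum_m=0.0):
    # Positive result: the vessel passes with that much room to spare.
    # Negative result: the air draft exceeds the available clearance.
    return (charted_clearance_m + water_below_datum_m) - air_draft_m

print(round(clearance_margin_m(61.3, 57.9), 1))        # 3.4 m to spare
print(round(clearance_margin_m(61.3, 62.5, 0.5), 1))   # -0.7 m: too tall even with the water 0.5 m below datum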
Air draft
[ "Physics", "Mathematics" ]
391
[ "Vertical extent", "Physical quantities", "Quantity", "Size", "Wikipedia categories named after physical quantities" ]
9,226,144
https://en.wikipedia.org/wiki/The%20Reckoning%20of%20Time
The Reckoning of Time (De temporum ratione, CPL 2320) is an Anglo-Saxon era treatise written in Medieval Latin by the Northumbrian monk Bede in 725. Background In mid-7th-century Anglo-Saxon England, there was a desire to see the Easter season less closely tied to the Jewish Passover calendar as well as a desire to have Easter observed on a Sunday. Continuing a tradition of Christian scholarship exploring the correct date of Easter, a generation later, Bede sought to explain the ecclesiastical reasoning behind the Synod of Whitby's decision in 664 to favor Roman custom over Irish custom. Bede's resulting treatise provides justification for a precise calculation for Easter. It also explains why time, and the various units of time, are sacred. Structure The treatise includes an introduction to the traditional ancient and medieval view of the cosmos, including an explanation of how the Earth influenced the changing length of daylight, of how the seasonal motion of the Sun and Moon influenced the changing appearance of the new moon at evening twilight, and a quantitative relation between the changes of the tides at a given place and the daily motion of the Moon. The Reckoning of Time describes the principal ancient calendars, including those of the Hebrews, the Egyptians, the Romans, the Greeks, and the English. The focus of the work was the calculation of the date of Easter, for which Bede described the method developed by Dionysius Exiguus. It also gave instructions for calculating the date of the Easter full moon, for calculating the motion of the Sun and Moon through the zodiac, and for many other calculations related to the calendar. Bede based his reasoning for the dates on the Hebrew Bible. The functions of the universe and its purpose are generally referred back to a scriptural foundation. According to the introduction by Faith Wallis in the 1999 English translated edition of The Reckoning of Time, Bede aimed to write a Christian work that integrated the astronomical understanding of computing with a theological context of history. Bede also regarded the book as a sequel to his works The Nature of Things and On Time. Sections The work is divided into six sections: Technical preparation (Chapters 1–4) This section familiarizes the reader with terminology regarding measurements. In chapter 3 Bede defines a day as being 12 hours long. An hour consists of increments of , and , each of which is a small increment of time within the hour. The smallest increment of time is the atom. The Julian calendar (Chapters 5–41) Here, Bede gives an exhaustive overview of the date of the Earth's creation, the months, the weeks and the Moon. He argues that the first day did not, as it was generally believed, take place at the time of an equinox. According to the religious accounts of God's creation of the universe, light was created on the first day. It was not until the fourth day, however, that God created the stars, and therefore there was no measurement of hours before then. Much of this section is devoted to the Moon. Bede goes into extensive detail about measuring the Moon's cycles and the Moon's relationship to the Earth and Sun. Bede discusses the Moon's relationship to the tide and calculating . Anomalies of lunar reckoning (Chapters 42–43) These two chapters pick up where the previous section left off, examining the irregularities of the Moon that create a leap year, as well as why, according to Bede, the Moon appears older than it actually is.
The Paschal table (Chapters 44–65) This section explores different year cycles that include varying numbers of months and days, determining the year cycle of Christ's incarnation, Easter, and other moon cycles. The Major Chronicle (Chapter 66) Bede gives an exhaustive description of the Six Ages of the World. The "Major Chronicle" is the starting point for several later chronicles, such as the Chronicon universale usque ad annum 741 and the Chronicon Moissiacense. Bede details the First Age, from Adam to Noah, as being 1,656 years long according to the Hebrew Bible or 2,242 years according to the Septuagint. The Second Age, from Noah to Abraham, is 292 years or 272 years long based on Bede's evaluation of the Hebrew Bible and Septuagint respectively. The Third Age is said to be 942 years long according to both the Hebrew Bible and Septuagint spanning from Abraham to David. The Fourth age is from David until the Babylonian exile. This is 473 years according to the Hebrew Bible or 485 according to the Septuagint. The Fifth age is from the Babylonian exile to the advent of Christ. The Sixth age is the current age lasting from the advent of Christ until the end of days. Prophecy (Chapters 67–71) Finally, Bede goes on to discuss the end of the Sixth Age, the Second Coming of Christ, the Antichrist, and Judgement Day, and the Seventh and Eighth ages of the world to come. See also Easter controversy Ēostre Germanic calendar Notes References Jones, Charles W., ed. De temporum ratione, in Bedae opera de temporibus, Cambridge, Massachusetts: The Mediaeval Academy of America, 1943. Jones, Charles W., ed. De temporum ratione, in Bedae opera didascalia 2, Corpus Christianorum Series Latina, 123B, Turnhout: Brepols, 1997. Wallis, Faith, trans. Bede: The Reckoning of Time, Liverpool: Liverpool Univ. Pr., 1999/2004. . External links De Temporum Ratione in Latin from Patrologia Latina. Time in religion Date of Easter Calendars Works by Bede 8th-century books in Latin 725 8th century in England
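For readers who want to see the kind of arithmetic the Paschal table encodes, the sketch below uses Meeus' compact modern formulation of the Julian-calendar Easter computus, the same 19-year lunar cycle reckoning that Bede's tables tabulate. It is offered as an illustration of the calculation's flavour, not as a reproduction of Bede's own table-based method.

def julian_easter(year):
    # Meeus' Julian Easter algorithm; result is a (month, day) pair in the Julian calendar.
    a = year % 4
    b = year % 7
    c = year % 19                     # position within the 19-year lunar cycle
    d = (19 * c + 15) % 30            # locates the Paschal full moon
    e = (2 * a + 4 * b - d + 34) % 7  # days to the following Sunday
    month = (d + e + 114) // 31       # 3 = March, 4 = April
    day = (d + e + 114) % 31 + 1
    return month, day

print(julian_easter(2024))  # (4, 22): 22 April in the Julian calendar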
The Reckoning of Time
[ "Physics" ]
1,185
[ "Calendars", "Physical quantities", "Time", "Time in religion", "Spacetime" ]
9,226,345
https://en.wikipedia.org/wiki/Hat%20notation
A "hat" (circumflex (ˆ)), placed over a symbol, is a mathematical notation with various uses. Estimated value In statistics, a circumflex (ˆ), called a "hat", is used to denote an estimator or an estimated value. For example, in the context of errors and residuals, the "hat" over the letter ε (giving ε̂) indicates an observable estimate (the residuals) of an unobservable quantity called ε (the statistical errors). Another example of the hat operator denoting an estimator occurs in simple linear regression. Assuming a model of y = β₀ + β₁x + ε, with observations of independent variable data x and dependent variable data y, the estimated model is of the form ŷ = β̂₀ + β̂₁x, where the sum of squared residuals is commonly minimized via least squares by finding optimal values of β̂₀ and β̂₁ for the observed data. Hat matrix In statistics, the hat matrix H projects the observed values y of the response variable to the predicted values ŷ: ŷ = Hy. Cross product In screw theory, one use of the hat operator is to represent the cross product operation. Since the cross product is a linear transformation, it can be represented as a matrix. The hat operator takes a vector and transforms it into its equivalent matrix. For example, in three dimensions, the hat of a = (a₁, a₂, a₃) is the skew-symmetric matrix â with rows (0, −a₃, a₂), (a₃, 0, −a₁), (−a₂, a₁, 0), so that âb = a × b for any vector b. Unit vector In mathematics, a unit vector in a normed vector space is a vector (often a spatial vector) of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or "hat", as in v̂ (pronounced "v-hat"). This is especially common in a physics context. Fourier transform The Fourier transform of a function f is traditionally denoted by f̂. Operator In quantum mechanics, operators are denoted with hat notation. For instance, see the time-independent Schrödinger equation, where the Hamiltonian operator is denoted Ĥ. See also References Mathematical notation
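Two of the uses above lend themselves to a quick numerical check; the NumPy sketch below builds the regression hat matrix H = X(XᵀX)⁻¹Xᵀ and applies it to observed data, and defines the skew-symmetric "hat" matrix whose product with a vector reproduces the cross product. The data values are arbitrary examples.

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.8])
X = np.column_stack([np.ones_like(x), x])     # design matrix for y = b0 + b1*x
H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix
y_hat = H @ y                                 # fitted (predicted) values
print(np.round(y_hat, 3))

def hat(a):
    # Skew-symmetric matrix such that hat(a) @ b == np.cross(a, b).
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
print(np.allclose(hat(a) @ b, np.cross(a, b)))  # True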
Hat notation
[ "Mathematics" ]
370
[ "Algebra", "Algebra stubs" ]
9,226,504
https://en.wikipedia.org/wiki/54P/de%20Vico%E2%80%93Swift%E2%80%93NEAT
54P/de Vico–Swift–NEAT is a periodic comet in the Solar System first discovered by Father Francesco de Vico (Rome, Italy) on August 23, 1844. It has become a lost comet several times since its discovery. The comet makes many close approaches to Jupiter. The comet was last observed on 20 December 2009 by Ageo Observatory. First discovery (1844) Independent discoveries were made by Melhop (Hamburg, Germany) on September 6 and by Hamilton Lanphere Smith (Cleveland, Ohio, USA) on September 10. Paul Laugier and Felix Victor Mauvais calculated an orbit on September 9, 1844, and noted that a similarity existed with comets seen in previous years; by including comet Blanpain of 1819 in their calculations, they came up with an orbital period of between 4.6 and 4.9 years. Hervé Faye (Paris, France) computed the first elliptical orbit on September 16, 1844, with an orbital period of 5.46 years. The comet was considered lost as subsequent predicted returns after 1844 were never observed. Second discovery (1894) Edward D. Swift (Echo Mountain, California, USA) rediscovered the comet on November 21, 1894. Adolf Berberich suggested the comet might be the same as de Vico's comet on the basis of the comet's location and direction of motion. After 1894, the comet was considered lost again after the 1901 and 1907 returns remained unseen. Third discovery (1965) In 1963, Brian G. Marsden used a computer to link the 1844 and 1894 sightings and calculated a favourable return in 1965. The comet was subsequently recovered by Arnold Klemola (Yale-Columbia Southern Observatory, Argentina) on June 30, 1965, at magnitude 17. In 1968 the comet passed close to Jupiter, which increased the perihelion distance and orbital period; the magnitude dropped, the comet was not observed at subsequent predicted returns, and in 1995 it was again considered lost. Fourth discovery (2002) The Near-Earth Asteroid Tracking (NEAT) program rediscovered the comet on October 11, 2002. The LINEAR program (New Mexico) found several prediscovery images from October 4. It was confirmed as a return of comet 54P/de Vico-Swift by Kenji Muraoka (Kochi, Japan). 2009 apparition On August 17, 2009, comet 54P/de Vico–Swift–NEAT was recovered, while 2.3 AU from the Sun. References External links Orbital simulation from JPL (Java) / Ephemeris 54P at Gary W. Kronk's Cometography 54P at Kazuo Kinoshita's Comets 54P at Seiichi Yoshida's Comet Catalog Periodic comets 0054 054P 054P 18440823 Recovered astronomical objects
54P/de Vico–Swift–NEAT
[ "Astronomy" ]
569
[ "Recovered astronomical objects", "Astronomical objects" ]
9,227,425
https://en.wikipedia.org/wiki/Stockholm%20Environment%20Institute%20US%20Center
The Stockholm Environment Institute (SEI) is an international research organization focusing on the issue of sustainable development. SEI has its headquarters in Stockholm with a network structure of permanent and associated staff worldwide and with centres in the US, York (UK), Oxford (UK), Tallinn (Estonia), and Bangkok (Thailand). SEI's US center is a research affiliate of Tufts University in Massachusetts and also has offices in Davis, California, and Seattle, Washington. It conducts a diverse programme focusing on the social, technological and institutional requirements for a transition to sustainability. Its funders include the United Nations, the World Bank, and numerous foundations and national governments such as the United States, Sweden, Denmark, Germany, the Netherlands and the UK. In addition to providing policy-relevant analyses, the Center works to build capacity in developing countries for integrated sustainability planning through training and collaboration on projects. Its decision support tools are widely used: LEAP for energy planning and climate change mitigation, WEAP for water resources planning and PoleStar for evaluating sustainable development strategies. Its activities are organized into three programs: The Climate and Energy Program conducts energy system analyses, examines environmental consequences of energy use such as global warming, and develops policies for a transition to efficient and renewable energy technology. The Water Resources Program brings an integrated perspective to freshwater assessment, one that seeks sustainable water solutions by balancing the needs for basic water services, development and the environment. The Sustainable Development Studies Program takes a holistic perspective in assessing sustainability at global, regional, and national levels. External links SEI-US web site Main SEI web site LEAP web site WEAP web site Tufts University Research institutes in the United States International research institutes Environmental research institutes
Stockholm Environment Institute US Center
[ "Environmental_science" ]
345
[ "Environmental research institutes", "Environmental research" ]
9,227,595
https://en.wikipedia.org/wiki/Santalum%20album
Santalum album is a small tropical tree, and the traditional source of sandalwood oil. It is native to Indonesia (Java and the Lesser Sunda Islands), the Philippines, and Western Australia. It is commonly known as the true sandalwood, white sandalwood, or Indian sandalwood. It was one of the plants exploited by Austronesian arboriculture and it was introduced by Austronesian sailors to East Asia, Mainland Southeast Asia and South Asia during the ancient spice trade, becoming naturalized in South India by at least 1300 BCE. It was greatly valued for its fragrance, and is considered sacred in some religions like Hinduism. The high value of the species has caused over-exploitation, to the point where the wild population is vulnerable to extinction. Indian sandalwood still commands high prices for its essential oil owing to its high alpha santalol content, but the lack of sizable trees has essentially eliminated its former use for fine woodworking. The plant is long-lived, but harvest is only viable after many years. Description Santalum album is an evergreen tree that grows between . The tree is variable in habit, usually upright to sprawling, and may intertwine with other species. The plant parasitises the roots of other tree species, with a haustorium adaptation on its own roots, but without major detriment to its hosts. An individual will form a non-obligate relationship with a number of other plants. Up to 300 species (including its own) can host the tree's development - supplying macronutrients phosphorus, nitrogen and potassium, and shade - especially during early phases of development. It may propagate itself through wood suckering during its early development, establishing small stands. The reddish or brown bark can be almost black and is smooth in young trees, becoming cracked with a red reveal. The heartwood is pale green to white as the common name indicates. The leaves are thin, opposite and ovate to lanceolate in shape. The glabrous surface is shiny and bright green, with a glaucous pale reverse. Fruit is produced after three years, viable seeds after five. These seeds are distributed by birds. Taxonomy Nomenclature The nomenclature for other "sandalwoods" and the taxonomy of the genus are derived from this species' historical and widespread use. Etymologically it is derived from Sanskrit chandanam, meaning "wood for burning incense", and related to candrah, meaning "shining, glowing". Santalum album is included in the family Santalaceae, and is commonly known as white or East Indian sandalwood. The name Santalum ovatum, used by Robert Brown in Prodromus Florae Novae Hollandiae (1810), was described as a synonym of this species by Alex George in 1984. The epithet album refers to the "white" of the heartwood. The species was the first to be known as sandalwood. Other species in the genus Santalum, such as the Australian S. spicatum, are also referred to as true sandalwoods, to distinguish them from trees with similar-smelling wood or oil. Phytochemistry Sandalwood oil consists of about 80% α-santalol and β-santalol, predominantly the former, which are sesquiterpenes. Attempts to synthesise these date to 1947, by Givaudan in Switzerland. The resulting isobornyl cyclohexanol can be distinguished from santalol, but is much cheaper. Since then other synthetic sandalwood oils have been used in laundry detergents and textiles. Three of the terpene synthase genes producing components employed in host defense are present in S. album.
Distribution and history Sandalwood is originally native to dry areas in Indonesia (Java and the Lesser Sunda Islands), the Philippines, and Western Australia, where it is found with close congeners. It was introduced very early () into Dravidian regions of South Asia via the Austronesian maritime spice trade, along with other Austronesian domesticates like areca nut and coconuts. It first appears in archaeological records in South Asia in the southern Deccan by 1300 BCE. It became naturalized in these regions where dry sandy soils are common. Sandalwood is now cultivated in India, Sri Lanka, Indonesia, Malaysia, the Philippines and Northern Australia. Habitat and growth S. album occurs from coastal dry forests up to elevation. It normally grows in sandy or well drained stony red soils, but a wide range of soil types are inhabited. This habitat has a temperature range from and annual rainfall between and . S. album can grow up to vertically. It should be planted in good sunlight and does not require a lot of water. The tree starts to flower after seven years. When the tree is still young, the flowers are white; with age they turn red or orange. The trunk of the tree starts to develop its fragrance after about 10 years of growth, but is not ready to harvest until after 20. The tree rarely lives more than 100 years. Conservation S. album is recognized as a "vulnerable" species by the International Union for the Conservation of Nature (IUCN). It is threatened by over-exploitation and by degradation of its habitat through altered land use; fire (to which this species is extremely sensitive), spike disease, agriculture, and land-clearing are the factors of most concern. To preserve this vulnerable resource from over-exploitation, legislation protects the species, and cultivation is researched and developed. Until 2002, individuals in India were not allowed to grow sandalwood. Due to its scarcity, sandalwood is not allowed to be cut or harvested by individuals. The State grants specific permission to officials who then can cut down the tree and sell its wood. The Indian government has placed a ban on the export of the timber. Uses and production S. album has been the primary source of sandalwood and the derived oil. These often hold an important place in the societies of their naturalised distribution range. The central part of the tree, the heartwood, is the only part of the tree that is used for its fragrance. It is yellow-brown in color, hard with an oily texture, and due to its durability is a preferred material for carving. The outer part of the tree, the sapwood, is unscented. The sapwood is white or yellow in color and is used to make turnery items. The high value of sandalwood has led to attempts at cultivation; this has increased the distribution range of the plant. It was valued in construction, since it was considered rotproof. The first extraction of its essential oil occurred in Mysore, India in 1917. For many years, the oils were extracted in the perfumeries at Grasse, France. Production is now controlled by the Indian state, and demand exceeds supply. The ISO Standard for the accepted characteristics of this essential oil is ISO 3518:2002. HPTLC, GC, and GC-MS based methods are used for qualitative and quantitative analyses of the volatile essential oil constituents.
True sandalwood has a high santalol content, at about 90%, compared with around 39% for the other main source of the oil, the cheaper Santalum spicatum (Australian sandalwood). India used to dominate production of sandalwood oil worldwide, but the industry has been in decline in the 21st century. Another source is Santalum austrocaledonicum from New Caledonia. Sandalwood is used in the production of the perfume Samsara by Guerlain (1989). The long maturation period and difficulty in cultivation have restricted extensive planting. Harvest of the tree involves several curing and processing stages, also adding to the commercial value. The wood and oil are in high demand and are an important trade item in three main regions: India The use of S. album in India is noted in literature for over two thousand years. It has use as wood and oil in religious practices, and was burned in cremation. In modern times only a small fragment is added to the pyre for symbolic purposes. It also features as a construction material in temples and elsewhere. The Indian government has banned the export of the species to reduce the threat by over-harvesting. In the southern Indian states of Karnataka, Andhra Pradesh, and Tamil Nadu all trees of greater than a specified girth were the property of the state until 2001/2. Cutting of trees, even on private property, was regulated by the Forest Department. After that, they were allowed to be sold to private growers, but the product can only be sold to the state forest department. Annual production fell from a high of 4,000 tonnes in the early 1970s to fewer than 300 tonnes in 2011. The decline is blamed on government policy and over-exploitation, and moves have been made to encourage planters to grow the trees again. Australia The native species, Santalum spicatum, is more common and extensively grown in Western Australia, but there are two commercial Indian sandalwood plantations in full operation based in Kununurra, in the far north of Western Australia: Quintis (formerly Tropical Forestry Services), which in 2017 controlled around 80 per cent of the world's supply of Indian sandalwood, and Santanol. Comoros True sandalwood is grated against a stone, coral, or ceramic surface to make a sun-protective medicinal paste called msindzano, worn on the faces of women and girls in Comoros. Sri Lanka Harvesting of sandalwood preferably takes trees that are advanced in age; saleable wood can, however, come from trees as young as seven years. The entire plant is removed rather than cut to the base, as in coppiced species. The extensive removal of S. album over the past century led to increased vulnerability to extinction. Small plantations of true sandalwood also exist in China, Indonesia, Malaysia, the Philippines and the Pacific Islands. China Sandalwood has long been used in China for the construction of statues and temples, and was burned in censers during religious rites. Egypt Sandalwood was used for embalming mummies, and later was burned as part of Muslim funerals. See also Sandal spike phytoplasma - disease of S. album Domesticated plants and animals of Austronesia References Bibliography album Flora of India (region) Flora of Bangladesh Essential oils Vulnerable plants Plants described in 1753 Taxa named by Carl Linnaeus Austronesian agriculture
Santalum album
[ "Chemistry" ]
2,148
[ "Essential oils", "Natural products" ]
9,227,625
https://en.wikipedia.org/wiki/Ed%20Pegg%20Jr.
Edward Taylor Pegg Jr. (born December 7, 1963) is an expert on mathematical puzzles and is a self-described recreational mathematician. He wrote an online puzzle column called Ed Pegg Jr.'s Math Games for the Mathematical Association of America during the years 2003–2007. His puzzles have also been used by Will Shortz on the puzzle segment of NPR's Weekend Edition Sunday. He was a fan of Martin Gardner and regularly participated in Gathering 4 Gardner conferences. In 2009, he teamed up with Tom M. Rodgers and Alan Schoen to edit two Gardner tribute books. Pegg received a master's degree in mathematics from the University of Colorado at Colorado Springs, writing his thesis on the subject of fair dice. In 2000, he left NORAD to join Wolfram Research, where he collaborated on A New Kind of Science (NKS). In 2004, he started assisting Eric W. Weisstein at Wolfram MathWorld. He has made contributions to several hundred MathWorld articles. He was one of the chief consultants for Numb3rs. References External links MathPuzzle Ed Pegg Jr.'s Math Games Demonstrations by Ed Pegg Jr. The Math Behind Numb3rs CBS puzzle Ed Pegg Jr.'s entry in the Numericana Hall of Fame 1963 births 20th-century American mathematicians 21st-century American mathematicians Cellular automatists Living people Puzzle designers Recreational mathematicians Mathematics popularizers Pegg Edward
Ed Pegg Jr.
[ "Mathematics" ]
291
[ "Recreational mathematics", "Recreational mathematicians" ]
9,228,002
https://en.wikipedia.org/wiki/Self-protein
Self-protein refers to all proteins endogenously produced by DNA-level transcription and translation within an organism of interest. This does not include proteins synthesized due to viral infection, but may include those synthesized by commensal bacteria within the intestines. Proteins that are not created within the body of the organism of interest, but nevertheless enter through the bloodstream, a breach in the skin, or a mucous membrane, may be designated as “non-self” and subsequently targeted and attacked by the immune system. Tolerance to self-protein is crucial for overall wellbeing; when the body erroneously identifies self-proteins as “non-self”, the subsequent immune response against endogenous proteins may lead to the development of an autoimmune disease. Examples Many different self-proteins can be targeted in autoimmune disease; any list of the proteins targeted by particular autoimmune diseases is necessarily not exhaustive. Identification by the immune system Autoimmune responses and diseases are primarily instigated by T lymphocytes that are incorrectly screened for reactivity to self-protein during cell development. During T-cell development, early T-cell progenitors first move via chemokine gradients from the bone marrow into the thymus, where T-cell receptors are randomly rearranged at the gene level to allow for T-cell receptor generation. These T-cells have the potential to bind to anything, including self-proteins. The immune system must differentiate the T-cells that have receptors capable of binding to self versus non-self proteins; T-cells that can bind to self-proteins must be destroyed to prevent development of an autoimmune disorder. In a process known as “Central Tolerance”, T-cells are exposed to cortical epithelial cells that express a variety of different major histocompatibility complexes (MHC) of both class 1 and class 2, which are able to bind the T-cell receptors of CD8+ cytotoxic T-cells and CD4+ helper T-cells, respectively. The T-cells that display affinity for these MHC are positively selected to continue to the second stage of development, while those that cannot bind to MHC undergo apoptosis. In the second stage, immature T-cells are exposed to a variety of macrophages, dendritic cells, and medullary epithelial cells that express self-protein on MHC class 1 and class 2. These epithelial cells also express the transcription factor labelled autoimmune regulator (AIRE) – this crucial transcription factor allows the medullary epithelial cells of the thymus to express proteins that would normally be present in peripheral tissue rather than in an epithelial cell, such as insulin-like peptides, myelin-like peptides, and more. As these epithelial cells now present a large variety of self-proteins that could be encountered across the body, the immature T-cells are tested for affinity to self-protein and self-MHC. If any T-cell has strong affinity for self-protein and self-MHC, the cell undergoes apoptosis to prevent autoimmunity. T-cells that display low/medium affinity are allowed to leave the thymus and circulate throughout the body to react to novel non-self antigens. In this manner, the body attempts to systematically destroy T-cells that could lead to autoimmunity. References Immunology
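The two-stage screening described above has a simple decision structure, which the following deliberately toy Python sketch mirrors; the affinity scores, thresholds, and function names are invented for illustration and do not correspond to any quantitative immunological model.

```python
import random

def central_tolerance(tcells, mhc_affinity_min=0.2, self_affinity_max=0.7):
    """Toy two-stage filter mirroring the selection logic described above."""
    survivors = []
    for cell in tcells:
        # Stage 1 (positive selection): cells that cannot bind MHC at all
        # undergo apoptosis.
        if cell["mhc_affinity"] < mhc_affinity_min:
            continue
        # Stage 2 (negative selection): strong binding to presented
        # self-protein also leads to apoptosis; low/medium affinity survives.
        if cell["self_affinity"] > self_affinity_max:
            continue
        survivors.append(cell)
    return survivors

random.seed(0)
repertoire = [{"mhc_affinity": random.random(), "self_affinity": random.random()}
              for _ in range(10_000)]
print(len(central_tolerance(repertoire)), "of 10000 toy T-cells leave the thymus")
```

In this toy model, stage one removes cells that fail positive selection and stage two removes cells that bind self-protein too strongly, in the same order as the process described in the article.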
Self-protein
[ "Biology" ]
716
[ "Immunology" ]
9,228,246
https://en.wikipedia.org/wiki/Quadratic%20growth
In mathematics, a function or sequence is said to exhibit quadratic growth when its values are proportional to the square of the function argument or sequence position. "Quadratic growth" often means more generally "quadratic growth in the limit", as the argument or sequence position goes to infinity – in big Theta notation, f(x) = Θ(x²). This can be defined either continuously (for a real-valued function of a real variable) or discretely (for a sequence of real numbers, i.e., a real-valued function of an integer or natural number variable). Examples Examples of quadratic growth include: Any quadratic polynomial. Certain integer sequences such as the triangular numbers. The nth triangular number has value n(n + 1)/2, approximately n²/2. For a real function of a real variable, quadratic growth is equivalent to the second derivative being constant (i.e., the third derivative being zero), and thus functions with quadratic growth are exactly the quadratic polynomials, as these are the kernel of the third derivative operator d³/dx³. Similarly, for a sequence (a real function of an integer or natural number variable), quadratic growth is equivalent to the second finite difference being constant (the third finite difference being zero), and thus a sequence with quadratic growth is also a quadratic polynomial. Indeed, an integer-valued sequence with quadratic growth is an integer linear combination of the zeroth, first, and second binomial coefficients C(n, 0), C(n, 1), and C(n, 2). The coefficients can be determined by taking the Taylor polynomial (if continuous) or Newton polynomial (if discrete). Algorithmic examples include: The amount of time taken in the worst case by certain algorithms, such as insertion sort, as a function of the input length. The numbers of live cells in space-filling cellular automaton patterns such as the breeder, as a function of the number of time steps for which the pattern is simulated. Metcalfe's law stating that the value of a communications network grows quadratically as a function of its number of users. See also Exponential growth References Asymptotic analysis
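The finite-difference characterization above can be checked directly. The following short Python sketch (the helper name is arbitrary) computes the second finite differences of two quadratically growing integer sequences and confirms they are constant.

```python
def second_differences(seq):
    """Return the second finite differences of a numeric sequence."""
    first = [b - a for a, b in zip(seq, seq[1:])]
    return [b - a for a, b in zip(first, first[1:])]

# Triangular numbers T(n) = n(n+1)/2 grow quadratically.
triangular = [n * (n + 1) // 2 for n in range(10)]
print(second_differences(triangular))            # [1, 1, 1, ...] constant, so Theta(n^2)

# Worst-case insertion sort performs 0 + 1 + ... + (n-1) = n(n-1)/2 comparisons.
worst_case_comparisons = [n * (n - 1) // 2 for n in range(10)]
print(second_differences(worst_case_comparisons))  # also constant
```

A constant, nonzero second difference is the discrete analogue of a constant second derivative, which is why such sequences are exactly the values of quadratic polynomials.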
Quadratic growth
[ "Mathematics" ]
413
[ "Mathematical analysis", "Asymptotic analysis", "Mathematical analysis stubs" ]
9,228,804
https://en.wikipedia.org/wiki/Wi-Fi%20Protected%20Setup
Wi-Fi Protected Setup (WPS), originally Wi-Fi Simple Config, is a network security standard to create a secure wireless home network. Created by Cisco and introduced in 2006, the purpose of the protocol is to allow home users who know little of wireless security and may be intimidated by the available security options to set up Wi-Fi Protected Access, as well as making it easy to add new devices to an existing network without entering long passphrases. It is used by devices made by HP, Brother and Canon for their printers. WPS is a wireless method that is used to connect certain Wi-Fi devices such as printers and security cameras to the Wi-Fi network without using any password. In addition, there is another way to connect called WPS Pin that is used by some devices to connect to the wireless network. Wi-Fi Protected Setup allows the owner of Wi-Fi privileges to block other users from using their household Wi-Fi. The owner can also allow people to use Wi-Fi. This can be changed by pressing the WPS button on the home router. A major security flaw was revealed in December 2011 that affects wireless routers with the WPS PIN feature, which most recent models have enabled by default. The flaw allows a remote attacker to recover the WPS PIN in a few hours with a brute-force attack and, with the WPS PIN, the network's WPA/WPA2 pre-shared key (PSK). Users have been urged to turn off the WPS PIN feature, although this may not be possible on some router models. Modes The standard emphasizes usability and security, and allows four modes in a home network for adding a new device to the network: PIN method In which a PIN has to be read from either a sticker or display on the new wireless device. This PIN must then be entered at the "representant" of the network, usually the network's access point. Alternately, a PIN provided by the access point may be entered into the new device. This method is the mandatory baseline mode and everything must support it. The Wi-Fi Direct specification supersedes this requirement by stating that all devices with a keypad or display must support the PIN method. Push button method In which the user has to push a button, either an actual or virtual one, on both the access point and the new wireless client device. On most devices, this discovery mode turns itself off as soon as a connection is established or after a delay (typically 2 minutes or less), whichever comes first, thereby minimizing its vulnerability. Support of this mode is mandatory for access points and optional for connecting devices. The Wi-Fi Direct specification supersedes this requirement by stating that all devices must support the push button method. Near-field communication method In which the user has to bring the new client close to the access point to allow a near-field communication between the devices. NFC Forum–compliant RFID tags can also be used. Support of this mode is optional. USB method In which the user uses a USB flash drive to transfer data between the new client device and the network's access point. Support of this mode is optional, but deprecated. The last two modes are usually referred to as out-of-band methods as there is a transfer of information by a channel other than the Wi-Fi channel itself. Only the first two modes are currently covered by the WPS certification. The USB method has been deprecated and is not part of the Alliance's certification testing. 
Some wireless access points have a dual-function WPS button, and holding this button down for a shorter or longer time may have other functions, such as factory-reset or toggling WiFi. Some manufacturers, such as Netgear, use a different logo and/or name for Wi-Fi Protected Setup; the Wi-Fi Alliance recommends the use of the Wi-Fi Protected Setup Identifier Mark on the hardware button for this function. Technical architecture The WPS protocol defines three types of devices in a network: Registrar A device with the authority to issue and revoke access to a network; it may be integrated into a wireless access point (AP), or provided as a separate device. Enrollee A client device seeking to join a wireless network. AP An access point functioning as a proxy between a registrar and an enrollee. The WPS standard defines three basic scenarios that involve components listed above: AP with integrated registrar capabilities configures an enrollee station (STA) In this case, the session will run on the wireless medium as a series of EAP request/response messages, ending with the AP disassociating from the STA and waiting for the STA to reconnect with its new configuration (handed to it by the AP just before). Registrar STA configures the AP as an enrollee This case is subdivided in two aspects: first, the session could occur on either a wired or wireless medium, and second, the AP could already be configured by the time the registrar found it. In the case of a wired connection between the devices, the protocol runs over Universal Plug and Play (UPnP), and both devices will have to support UPnP for that purpose. When running over UPnP, a shortened version of the protocol is run (only two messages) as no authentication is required other than that of the joined wired medium. In the case of a wireless medium, the session of the protocol is very similar to the internal registrar scenario, but with opposite roles. As to the configuration state of the AP, the registrar is expected to ask the user whether to reconfigure the AP or keep its current settings, and can decide to reconfigure it even if the AP describes itself as configured. Multiple registrars should have the ability to connect to the AP. UPnP is intended to apply only to a wired medium, while actually it applies to any interface to which an IP connection can be set up. Thus, having manually set up a wireless connection, the UPnP can be used over it in the same manner as with the wired connection. Registrar STA configures enrollee STA In this case the AP stands in the middle and acts as an authenticator, meaning it only proxies the relevant messages from side to side. Protocol The WPS protocol consists of a series of EAP message exchanges that are triggered by a user action, relying on an exchange of descriptive information that should precede that user's action. The descriptive information is transferred through a new Information Element (IE) that is added to the beacon, probe response, and optionally to the probe request and association request/response messages. Other than purely informative type–length–values, those IEs will also hold the possible and the currently deployed configuration methods of the device. After this communication of the device capabilities from both ends, the user initiates the actual protocol session. The session consists of eight messages that are followed, in the case of a successful session, by a message to indicate that the protocol is completed. 
The exact stream of messages may change when configuring different kinds of devices (AP or STA), or when using different physical media (wired or wireless). Band or radio selection Some devices with dual-band wireless network connectivity do not allow the user to select the 2.4 GHz or 5 GHz band (or even a particular radio or SSID) when using Wi-Fi Protected Setup, unless the wireless access point has a separate WPS button for each band or radio. However, a number of later wireless routers with multiple frequency bands and/or radios allow a WPS session to be established for a specific band and/or radio, which is useful for clients that cannot explicitly select the SSID or band (e.g., 2.4/5 GHz) themselves: for example, pushing the 5 GHz WPS button on the wireless router (where supported) forces such a client device to connect via WPS on the 5 GHz band only. Security Online brute-force attack In December 2011, researcher Stefan Viehböck reported a design and implementation flaw that makes brute-force attacks against PIN-based WPS feasible on WPS-enabled Wi-Fi networks. A successful attack on WPS allows unauthorized parties to gain access to the network, and the only effective workaround is to disable WPS. The vulnerability centers around the acknowledgement messages sent between the registrar and enrollee when attempting to validate a PIN, which is an eight-digit number used to add new WPA enrollees to the network. Since the last digit is a checksum of the previous digits, there are seven unknown digits in each PIN, yielding 10^7 = 10,000,000 possible combinations. When an enrollee attempts to gain access using a PIN, the registrar reports the validity of the first and second halves of the PIN separately. Since the first half of the PIN consists of four digits (10,000 possibilities) and the second half has only three active digits (1,000 possibilities), at most 11,000 guesses are needed before the PIN is recovered. This is a reduction by three orders of magnitude from the number of PINs that would otherwise have to be tested. As a result, an attack can be completed in under four hours. The ease or difficulty of exploiting this flaw is implementation-dependent, as Wi-Fi router manufacturers could defend against such attacks by slowing or disabling the WPS feature after several failed PIN validation attempts. A young developer from a small town in eastern New Mexico created a tool that exploits this vulnerability to prove that the attack is feasible. The tool was then purchased by Tactical Network Solutions in Maryland, which stated that it had known about the vulnerability since early 2011 and had been using it. In some devices, disabling WPS in the user interface does not result in the feature actually being disabled, and the device remains vulnerable to this attack. Firmware updates have been released for some of these devices allowing WPS to be disabled completely. Vendors could also patch the vulnerability by adding a lock-down period if the Wi-Fi access point detects a brute-force attack in progress, which disables the PIN method for long enough to make the attack impractical. Offline brute-force attack In the summer of 2014, Dominique Bongard discovered what he called the Pixie Dust attack.
This attack works only on the default WPS implementation of several wireless chip makers, including Ralink, MediaTek, Realtek and Broadcom. The attack focuses on a lack of randomization when generating the E-S1 and E-S2 "secret" nonces. Knowing these two nonces, the PIN can be recovered within a couple of minutes. A tool called pixiewps has been developed and a new version of Reaver has been developed to automate the process. Since both the client and access point (enrollee and registrar, respectively) need to prove they know the PIN to make sure the client is not connecting to a rogue AP, the attacker already has two hashes that contain each half of the PIN, and all they need is to brute-force the actual PIN. The access point sends two hashes, E-Hash1 and E-Hash2, to the client, proving that it also knows the PIN. E-Hash1 and E-Hash2 are hashes of (E-S1 | PSK1 | PKe | PKr) and (E-S2 | PSK2 | PKe | PKr), respectively. The hashing function is HMAC-SHA-256 and uses the "authkey" that is the key used to hash the data. Physical security issues All WPS methods are vulnerable to usage by an unauthorized user if the wireless access point is not kept in a secure area. Many wireless access points have security information (if it is factory-secured) and the WPS PIN printed on them; this PIN is also often found in the configuration menus of the wireless access point. If this PIN cannot be changed or disabled, the only remedy is to get a firmware update to enable the PIN to be changed, or to replace the wireless access point. It is possible to extract a wireless passphrase with the following methods using no special tools: A wireless passphrase can be extracted using WPS under Windows Vista and newer versions of Windows, under administrative privileges by connecting with this method then bringing up the properties for this wireless network and clicking on "show characters". Within most Linux desktop and Unix distributions (like Ubuntu), all network connections and their details are visible to a regular user, including the password obtained through WPS. Furthermore, root (aka admin) can always access all network details through terminal, i.e. even if there is no window manager active for regular users. A simple exploit in the Intel PROset wireless client utility can reveal the wireless passphrase when WPS is used, after a simple move of the dialog box which asks if you want to reconfigure this access point. References External links Wi-Fi Protected Setup Knowledge Center at the Wi-Fi Alliance UPnP device architecture US-CERT VU#723755 Broken cryptography algorithms Cryptographic protocols Wi-Fi
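The arithmetic behind the online brute-force attack described in the Security section can be illustrated with a short Python sketch. It uses the widely documented WPS check-digit rule, under which the eighth PIN digit is a checksum over the first seven; the function names are arbitrary, and the exact rule should be confirmed against the WPS specification if used for anything beyond illustration.

```python
def wps_checksum(seven_digits: int) -> int:
    """Check digit for a 7-digit WPS PIN body (widely documented WPS rule)."""
    accum, pin = 0, seven_digits
    while pin:
        accum += 3 * (pin % 10)
        pin //= 10
        accum += pin % 10
        pin //= 10
    return (10 - accum % 10) % 10

def full_pin(seven_digits: int) -> str:
    """Append the check digit to form the full eight-digit PIN."""
    return f"{seven_digits:07d}{wps_checksum(seven_digits)}"

# Only 10^7 PINs are valid at all, because the eighth digit is determined
# by the first seven.
print(full_pin(1234567))   # "12345670"

# Because the registrar confirms each half of the PIN separately, an online
# attacker needs at most 10**4 guesses for the first half and 10**3 for the
# second half (whose last digit is the checksum), i.e. 11,000 attempts.
print(10**4 + 10**3)       # 11000
```

The split into 10,000 first-half candidates plus 1,000 second-half candidates is what reduces the online search from ten million PINs to at most 11,000 authentication attempts, which is also why a lock-down period after a few failed attempts, as mentioned above, is enough to make the attack impractical.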
Wi-Fi Protected Setup
[ "Technology" ]
2,776
[ "Wireless networking", "Wi-Fi" ]
9,229,683
https://en.wikipedia.org/wiki/DEPTHX
The Deep Phreatic Thermal Explorer (DEPTHX) is an autonomous underwater vehicle designed and built by Stone Aerospace, an aerospace engineering firm based in Austin, Texas. It was designed to autonomously explore and map underwater sinkholes in northern Mexico, as well as collect water and wall core samples. This could be achieved via an autonomous form of navigation known as A-Navigation. The DEPTHX vehicle was the first of three vehicles to be built by Stone Aerospace which were funded by NASA with the goal of developing technology that can explore the oceans of Jupiter's moon Europa to look for extraterrestrial life. DEPTHX was a collaborative project for which Stone Aerospace was the principal investigator. Co-investigators included Carnegie Mellon University, which was responsible for the navigation and guidance software, the Southwest Research Institute, which built the vehicle's science payload, and research scientists from the University of Texas at Austin, the Colorado School of Mines, and NASA Ames Research Center. History In 1999, Bill Stone had been involved in an underwater surveying project in Wakulla Springs, Florida. For that project, Stone had devised a digital wall mapper that was propelled by a diver propulsion vehicle and steered by divers which was designed to create a 3-D map of Wakulla Springs using an array of sonars, as well as a suite of other sophisticated sensors. The success of this project, the Wakulla Springs 2 Project, attracted the interest of planetary scientist Dan Durda from the Southwest Research Institute, who wished to create a similar piece of technology to explore the oceans of Europa, but one that could drive itself autonomously. Stone accepted the challenge, and several collaborative proposals were submitted to NASA. It wasn't until 2003 that NASA would finally fund DEPTHX as a three-year, $5 million project. The vehicle underwent several different design concepts over the next couple of years as engineers at Stone Aerospace explored various options. Initial designs focused on a less ellipsoidal design, however these designs were abandoned due to concerns that such a shape would be difficult to maneuver out of the potentially tight spots it might encounter during the exploration of unknown territory. It was also during this time that the DEPTHX team did a field campaign at Cenote Zacatón using a drop sonde to acquire some initial data for the software team, which itself contributed to the overall design changes. The final design was decided upon in 2006, at which point construction of the vehicle began. The completed vehicle was about in diameter and weighed about . It had redundant navigation systems including 54 sonars, an inertial measurement unit, doppler velocity logger, as well as depth gauges and accelerometers. Propulsion systems were also redundant, having six thrusters and two equivalent battery stacks. It was outfitted with a variable buoyancy system, and finally with the science payload that included the ability to take in water and solid core samples for later analysis, as well as an onboard microscope to analyze water samples in real time. Accomplishments During the DEPTHX 2007 deployment, the vehicle was able to create 3-D maps of four cenotes in Sistema Zacatón in Tamaulipas, Mexico. This was the first autonomous system to explore and map a cavern. The mapping of Cenote Zacatón was particularly notable because its depth was previously unknown, as human divers had not been successful in attempts to reach the bottom. 
DEPTHX created the first map of the bottom of Zacatón, which has a depth of over . DEPTHX was the first robotic system of any kind to implement three-dimensional simultaneous localization and mapping (SLAM). It was also the first such system to make its own decisions on where and how to collect samples. From these samples, at least three new divisions of bacteria were discovered. The success of the DEPTHX mission led to the funding of the follow-on project, ENDURANCE. The ENDURANCE vehicle reused the frame and a number of systems from the DEPTHX vehicle, but was considerably reconfigured for the needs of the Antarctic environment. Fieldwork timeline January 25–27, 2007 - The DEPTHX team begins surveying the world's deepest water-filled sinkhole, Cenote Zacatón. February 4–10, 2007 - Field operations at La Pilita, one of the sinkholes in Sistema Zacatón. DEPTHX runs long autonomous missions to map the area and collect scientific data. March 2007 - Field work continues in La Pilita. May 2007 - DEPTHX is lowered into Zacatón and maps its bottom for the first time. Three-dimensional SLAM technology is demonstrated, microbiological samples are collected, and autonomous operation is demonstrated. See also References External links TED Talk - Bill Stone: Exploring deep caves (and someday the moon) Bill Stone discusses various topics, but notably discoveries of DEPTHX DEPTHX: Zacaton - Mission 1 begins Daily field notes from the DEPTHX team DEPTHX at Robotics Center at Carnegie Mellon University Responsible for software that controls navigation, mapping and autonomous decision making Autonomous underwater vehicles NASA vehicles Caving techniques Astrobiology
DEPTHX
[ "Astronomy", "Biology" ]
1,023
[ "Origin of life", "Speculative evolution", "Astrobiology", "Biological hypotheses", "Astronomical sub-disciplines" ]
9,230,934
https://en.wikipedia.org/wiki/A%20Dream%20Within%20a%20Dream
"A Dream Within a Dream" is a poem written by American poet Edgar Allan Poe, first published in 1849. The poem has 24 lines, divided into two stanzas. Analysis The poem dramatizes the confusion felt by the narrator as he watches the important things in life slip away. Realizing he cannot hold on to even one grain of sand, he is led to his final question whether all things are just a dream. It has been suggested that the "golden sand" referenced in the 15th line signifies that which is to be found in an hourglass, consequently time itself. Another interpretation holds that the expression evokes an image derived from the 1848 finding of gold in California. The latter interpretation seems unlikely, however, given the presence of the four, almost identical, lines describing the sand in another poem "To ——," which is regarded as a blueprint for "A Dream Within a Dream" and preceding its publication by two decades. Publication history The poem was first published in the March 31, 1849, edition of the Boston-based story paper The Flag of Our Union. The same publication had only two weeks before first published Poe's short story "Hop-Frog." The next month, owner Frederick Gleason announced it could no longer pay for whatever articles or poems it published. Adaptations Picnic at Hanging Rock, a story about a group of girls disappearing while on a field trip to a rock formation in the early 20th century, begins with a voice over that states "What we see and what we seem is but a dream. A dream within a dream". The Alan Parsons Project's album Tales of Mystery and Imagination (Edgar Allan Poe) opens with an instrumental homage to the poem also titled "A Dream Within a Dream". Its 1987 re-release included a narration of the original poem by Orson Welles. The Propaganda album A Secret Wish, released in 1985, opens with the track "Dream Within A Dream". The poem is recited in spoken-word form by vocalist Susanne Freytag. Biological Radio, the 1997 Dreadzone album, features the track "Dream Within A Dream" which quotes lines from the poem. The Yardbirds' recorded a musical adaptation for their 2003 album Birdland, adding a new verse of their own. Elysian Fields recorded a musical adaptation of the song. Sopor Aeternus and The Ensemble of Shadows adapted the poem for their album Poetica - All Beauty Sleeps. Korean boy group NCT utilized the poem as a concept base, mentioning "Dream Within a Dream" several times throughout their discography. Examples include "Dream in a Dream" by TEN (NCT 2018 Empathy, 2018) and "INTERLUDE: Regular-Irregular" by NCT 127 (Regular-Irregular, 2018), with additional references in media. Polish singers Sanah and Grzegorz Turnau recorded a song, "Sen we śnie", which uses Poe's poem translated to Polish by poet and translator Włodzimierz Lewik as its lyrics. References External links A Dream Within A Dream , from about.com. Video of A Dream Within a Dream Poetry by Edgar Allan Poe 1849 poems Works originally published in The Flag of Our Union Dream
A Dream Within a Dream
[ "Biology" ]
648
[ "Dream", "Behavior", "Sleep" ]
9,231,406
https://en.wikipedia.org/wiki/Educational%20Broadband%20Service
The Educational Broadband Service (EBS) was formerly known as the Instructional Television Fixed Service (ITFS). ITFS was a band of twenty (20) microwave TV channels available to be licensed by the U.S. Federal Communications Commission (FCC) to local credit granting educational institutions. It was designed to serve as a means for educational institutions to deliver live or pre-recorded Instructional television to multiple sites within school districts and to higher education branch campuses. In recognition of the variety and quantity of video materials required to support instruction at numerous grade levels and in a range of subjects, licensees were typically granted a group of four channels. Its low capital and operating costs as compared to broadcast television, technical quality that compared favorably with broadcast television, and its multi-channel per licensees feature made ITFS an extremely cost effective vehicle for the delivery of Educational television materials. The FCC changed the name of this service to the Educational Broadband Service (EBS) and changed the allocation so each licensee would not have four 6 MHz wide channels but instead would have one 6 MHz channel and one 15 MHz wide "channel" (three contiguous 5 MHz channels). There are currently several hundred EBS systems in operation delivering schedules of live and pre-recorded instruction. History Initial FCC authorization The FCC initially authorized ITFS, in 1963, to operate using a one-way, analog, line-of-sight technology. Typical installations included up to four transmitters multiplexed through a single broadcast antenna with directional receive antennas at each receive site. Receive site installations included equipment to down convert the microwave channels for viewing on standard television receivers. In typical installations, the down converted ITFS signals were distributed to classrooms over multi-channel closed-circuit television systems. FCC allows leasing In the late 1970s the FCC recognized that many ITFS licensees lacked the technical expertise and/or the financial means to make more effective use of ITFS. Subsequently, the FCC authorized ITFS licensees to lease a portion of their spectrum, designated as “Excess Capacity," for commercial use, meaning ITFS licensees were required to retain forty hours per week per channel for daytime instruction with the excess nighttime hours available for commercial use in exchange for technical and financial support for their instructional service. So, primarily in large markets, subscription premium television such as HBO, Showtime, The Movie Channel and others could be transmitted over these same microwave stations beginning at 4 PM when school was out, and continuing throughout the evening, sometimes until the wee hours. In those days, pay-TV did not broadcast during the day, so there was no interference between one type of television programming and the next. Only after ITFS had migrated to other formats did the daytime hours of the service become available to subscription television providers which filled the hours with programming such as the five-hour-long children's show Pinwheel airing weekday mornings on Nickelodeon or long blocks of international cartoons for which the rights thereto remained in the pennies per subscriber throughout the run of the technology. 
Subsequent development Using ITFS excess capacity and up to thirteen channels in the companion commercial service, the Multichannel Multipoint Distribution Service (MMDS), a number of telecommunications companies built wireless cable systems. The number of available channels, however, proved to be insufficient to compete effectively with the expanding channel capacity of cable TV. ITFS and MMDS licensees then sought FCC authorization to employ digital compression technology, which would substantively increase the number of program streams that could be carried on the channels of the combined ITFS and MMDS spectrum. Two-way operation added In 1998, the FCC approved the use of digital compression in ITFS. At the time digital compression technology was expected to expand the number of program steams by a ratio of 4 to 1 or more. The FCC also authorized both cellular and two-way operations in the ITFS/MMDS services and the potential for ITFS to be used for the distribution of data, as well as video. In the same rule, the FCC reduced the capacity that educational licensees were required to retain for instruction from forty hours per week per channel to 5% of channel capacity. In permitting two-way operations the FCC created the first potential for a substantial use of instructional materials that rely on interaction between the instructional program and learners. The expanded programming capacity provided by digital video compression encouraged a number of commercial entities to create wireless entertainment video systems. These systems found, however, that the additional programming capability was not sufficient to overcome the line-of-sight handicap and the associated higher cost for customer installations. It was clear that while video distribution was a viable educational service for ITFS, commercial video services could not be widely successful in the ITFS/Multichannel Multipoint Distribution Service (MMDS) spectrum. Telecommunications interest in ITFS spectrum In 1999, telecommunication interests associated with the cell phone industry sought to obtain FCC approval for the transfer of portions of the ITFS spectrum from educational use to support a proposed 3G (Third Generation) cell phone technology. In 2001, the FCC ruled to preserve the ITFS spectrum for education and further modified the rules to authorize the use of the spectrum in mobile operations and voice communications. These changes in rule and the rising demand for broadband communications led to several commercial tests of combined ITFS/MMDS digital systems designed for two-way data distribution. It was believed that these wireless systems could provide a high-speed data connection that would compete effectively with DSL and cable modem services in providing access to the Internet. Such systems would also have the capacity to distribute video and voice in the form of data. These tests were, subsequently, halted as it became apparent that the existing technology and cost structures could not sustain commercial operations. During the same period a new technology, Non-line-of-sight (NLOS), was in development and testing by a number of technology companies. NLOS showed promise of overcoming the obstacles of line-of-sight and high customer installation costs that had handicapped ITFS/MMDS operations. That improvement, however, was not judged to be sufficient to ensure that a combined ITFS/MMDS digital service could satisfy the needs of education, as well as providing technology sufficiently robust to be commercially viable. 
FCC approves wireless networking uses In 2003 the National ITFS Association, the CatholicTV Network, and the Wireless Communications Association filed a joint proposal with the FCC to reformat the ITFS/MMDS spectrum and to provide rules, which would support widespread development of a wireless broadband service in the ITFS/MMDS spectrum. Some school boards provide their students with internet access via this spectrum. FCC publishes major revisions of the BRS/EBS band In July 2019 the FCC published "Transforming the 2.5 GHz Band" substantially changing the BRS and EBS band licenses and use. Principal aspects addressed were a removal of the educational requirements for use and ownership of EBS licenses, new lease terms, changes in license coverage, new white space licenses, a future spectrum auction, and a priority window for Native American tribes to apply for new licenses that cover Indian lands. WISPs using EBS Cellular phone pioneer Craig McCaw's Clearwire Wireless Internet Service Provider (WISP) leased EBS from the non-profit Broadcast license holder in many US cities. WCO Spectrum, in 2023, has been approaching schools offering to purchase 2.5GHz spectrum in the EBS to create maximum value for license holders. See also Agency for Instructional Technology Cable in the Classroom Educational television Instructional television National Association of Educational Broadcasters Non-commercial educational References External links National Educational Broadband Services (EBS) Organization (formerly the National ITFS Association) FCC 2019 BRS/EBS "Transforming the 2.5 GHz Band" spectrum revisions (major) Microwave bands Educational television Network access
Educational Broadband Service
[ "Engineering" ]
1,568
[ "Electronic engineering", "Network access" ]
9,231,515
https://en.wikipedia.org/wiki/Self-flagellation
Self-flagellation is the disciplinary and devotional practice of flogging oneself with whips or other instruments that inflict pain. In Christianity, self-flagellation is practiced in the context of the doctrine of the mortification of the flesh and is seen as a spiritual discipline. It is often used as a form of penance and is intended to allow the flagellant to share in the sufferings of Jesus, bringing his or her focus to God. The main religions that practice self-flagellation include some branches of Christianity and Islam. The ritual has also been practiced among members of several Egyptian and Greco-Roman cults. Christianity Historically, Christians have engaged in various forms of mortification of the flesh, ranging from self-denial, wearing hairshirts and chains, fasting, and self-flagellation (often using a type of whip called a discipline). Some Christians use excerpts from the Bible to justify this ritual. For example, some interpreters claim that Paul the Apostle's statement, "I chastise my body" (1 Corinthians 9:27), refers to self-inflicted bodily scourging. Prominent Christians who have practiced self-flagellation include Martin Luther, the Protestant Reformer, and Congregationalist writer Sarah Osborn, who practiced self-flagellation in order "to remind her of her continued sin, depravity, and vileness in the eyes of God". It became "quite common" for members of the Tractarian movement within the Anglican Communion to practice self-flagellation using a discipline. In the 11th century, Peter Damian, a Benedictine monk in the Roman Catholic tradition, taught that spirituality should manifest itself in physical discipline; he admonished those who sought to follow Christ to practice self-flagellation for the duration of the time it takes one to recite forty Psalms, increasing the number of flagellations on holy days of the Christian calendar. For Damian, only those who shared in the sufferings of Christ could be saved. Throughout Christian history, the mortification of the flesh, wherein one denies oneself physical pleasures, has been commonly followed by members of the clergy, especially in Christian monasteries and convents. Self-flagellation was imposed as a form of punishment as a means of penance for disobedient clergy and laity. In the 13th century, a group of Roman Catholics, known as the Flagellants, took this practice to extremes. During the Black Death, it was thought of as a way to combat the plague by cleansing one's sins. The Flagellants were condemned by the Catholic Church as a cult in 1349 by Pope Clement VI. Self-flagellation rituals were also practiced in 16th-century Japan. Japanese of the time who were converted to Christianity by Jesuit missionaries were reported to have had sympathy for the Passion of the Christ, and they readily practiced self-flagellation to show their devotion. The earliest records of self-flagellation practiced by Japanese converts appeared in the year 1555 in the regions of Bungo and Hirado in Kyushu. These Japanese Christians wore crowns of thorns and bore crosses on their backs during the procession, which led to the place they had designated as the Mount of the Cross. Christians give various reasons for choosing to self-flagellate. One of the main reasons is to emulate the suffering of Christ during his Passion. As Jesus was whipped before his crucifixion, many see whipping themselves as a way to be closer to Jesus and as a reminder of that whipping. 
Many early Christians believed that in order to be closer to God, one would need to literally suffer through the pain of Christ. Some of them interpret Paul the Apostle as alluding to inflicting bodily harm in order to feel closer to God in his letters to the Romans and to the Colossians. Self-flagellation was also seen as a form of purification, purifying the soul as repentance for any worldly indulgences. Self-flagellation is also used as a punishment on earth in order to avoid punishment in the next life. Self-flagellation was also seen as a way to control the body in order to focus only on God. By whipping oneself, one would find distraction from the pleasures of the world and be able to fully focus on worshiping God. Self-flagellation is also done to thank God for responding to a prayer or to drive evil spirits from the body (cf. Exorcism in Christianity). The popularity of self-flagellation has abated, with some pious Christians choosing to practice the mortification of the flesh with acts like fasting or abstaining from a pleasure (cf. Lenten sacrifice). There is a debate within the Christian tradition about whether or not self-flagellation is of spiritual benefit, with various religious leaders and Christians condemning the practice and others, such as Pope John Paul II, having practiced self-flagellation. People who self-flagellate believe that they need to spiritually share in the suffering of Jesus, and continue this practice, both publicly and privately. Judaism Some Jewish men practice a symbolic form of self-flagellation on the day before Yom Kippur as an enactment; it is strictly prohibited in Judaism to cause self-harm. Biblical passages such as "it shall be a holy convocation unto you; and ye shall afflict your souls" (Leviticus 23:27) were used to justify these actions. It was a common practice in the Middle Ages for men to whip themselves on the back 39 times. However, since biblical times Judaism has largely considered Yom Kippur as a day of spiritual atonement achieved through fasting, introspection, and other interpretations of the commandment "afflict your souls" that do not involve bodily self-harm. Islam Much of the Twelver Shia community tries to emulate Imam Husain through self-flagellation in the same way that Christians try to emulate Jesus Christ. This is exhibited through the public performance of matam. The Shia counterpart to a Christian flagellant is a matamdar. This ritual of matam is meant to reaffirm one's faith and relationships by creating a deep bond among the participants through their shared religious devotion. Despite the violent nature of this ritual, the love and vulnerability associated with it makes it an affirmational ritual performance. Many Shia communities worldwide march in massive parades every year on the Day of Ashura, during the mourning of Muharram, to commemorate the Battle of Karbala and the martyrdom of Imam Hussein. During these parades, devotees hit themselves on the chest or slash themselves with blades on chains called zanjerzani. Though it is uncommon, some Shia communities hit themselves on the back with chains and sharp objects such as knives. This happens in many countries including India, Pakistan, Iraq, Afghanistan, Iran, Saudi Arabia, Lebanon, United States, and Australia. Self-flagellation is just as controversial in Islam as it is in Christianity. 
In 2008, a prominent court case involving a resident of the UK town of Eccles, who was accused of encouraging his children to self-flagellate, provoked widespread condemnation of the practice. Shias responded by affirming that children should not be encouraged to self-harm, but defending the importance of the ritual when performed by consenting adults. However, some Shia leaders fear that the practice gives their religion a bad reputation, and recommend donating blood instead. See also Autosadism Human sacrifice Religious abuse Self-mutilation Self-defeating personality disorder References Corporal punishments Religious practices
Self-flagellation
[ "Biology" ]
1,566
[ "Behavior", "Religious practices", "Human behavior" ]
9,232,272
https://en.wikipedia.org/wiki/Geometrical%20properties%20of%20polynomial%20roots
In mathematics, a univariate polynomial of degree with real or complex coefficients has complex roots, if counted with their multiplicities. They form a multiset of points in the complex plane. This article concerns the geometry of these points, that is the information about their localization in the complex plane that can be deduced from the degree and the coefficients of the polynomial. Some of these geometrical properties are related to a single polynomial, such as upper bounds on the absolute values of the roots, which define a disk containing all roots, or lower bounds on the distance between two roots. Such bounds are widely used for root-finding algorithms for polynomials, either for tuning them, or for computing their computational complexity. Some other properties are probabilistic, such as the expected number of real roots of a random polynomial of degree with real coefficients, which is less than for sufficiently large. In this article, a polynomial that is considered is always denoted where are real or complex numbers and ; thus is the degree of the polynomial. Continuous dependence on coefficients The roots of a polynomial of degree depend continuously on the coefficients. For simple roots, this results immediately from the implicit function theorem. This is true also for multiple roots, but some care is needed for the proof. A small change of coefficients may induce a dramatic change of the roots, including the change of a real root into a complex root with a rather large imaginary part (see Wilkinson's polynomial). A consequence is that, for classical numeric root-finding algorithms, the problem of approximating the roots given the coefficients can be ill-conditioned for many inputs. Conjugation The complex conjugate root theorem states that if the coefficients of a polynomial are real, then the non-real roots appear in pairs of the form . It follows that the roots of a polynomial with real coefficients are mirror-symmetric with respect to the real axis. This can be extended to algebraic conjugation: the roots of a polynomial with rational coefficients are conjugate (that is, invariant) under the action of the Galois group of the polynomial. However, this symmetry can rarely be interpreted geometrically. Bounds on all roots Upper bounds on the absolute values of polynomial roots are widely used for root-finding algorithms, either for limiting the regions where roots should be searched, or for the computation of the computational complexity of these algorithms. Many such bounds have been given, and the sharper one depends generally on the specific sequence of coefficient that are considered. Most bounds are greater or equal to one, and are thus not sharp for a polynomial which have only roots of absolute values lower than one. However, such polynomials are very rare, as shown below. Any upper bound on the absolute values of roots provides a corresponding lower bound. In fact, if and is an upper bound of the absolute values of the roots of then is a lower bound of the absolute values of the roots of since the roots of either polynomial are the multiplicative inverses of the roots of the other. Therefore, in the remainder of the article lower bounds will not be given explicitly. Lagrange's and Cauchy's bounds Lagrange and Cauchy were the first to provide upper bounds on all complex roots. Lagrange's bound is and Cauchy's bound is Lagrange's bound is sharper (smaller) than Cauchy's bound only when 1 is larger than the sum of all but the largest. 
This is relatively rare in practice, and explains why Cauchy's bound is more widely used than Lagrange's. Both bounds result from the Gershgorin circle theorem applied to the companion matrix of the polynomial and its transpose. They can also be proved by elementary methods. If is a root of the polynomial, and one has Dividing by one gets which is Lagrange's bound when there is at least one root of absolute value larger than 1. Otherwise, 1 is a bound on the roots, and is not larger than Lagrange's bound. Similarly, for Cauchy's bound, one has, if , Thus Solving in , one gets Cauchy's bound if there is a root of absolute value larger than 1. Otherwise the bound is also correct, as Cauchy's bound is larger than 1. These bounds are not invariant by scaling. That is, the roots of the polynomial are the quotient by of the root of , and the bounds given for the roots of are not the quotient by of the bounds of . Thus, one may get sharper bounds by minimizing over possible scalings. This gives and for Lagrange's and Cauchy's bounds respectively. Another bound, originally given by Lagrange, but attributed to Zassenhaus by Donald Knuth, is This bound is invariant by scaling. Let be the largest for . Thus one has for If is a root of , one has and thus, after dividing by As we want to prove , we may suppose that (otherwise there is nothing to prove). Thus which gives the result, since Lagrange improved this latter bound into the sum of the two largest values (possibly equal) in the sequence Lagrange also provided the bound where denotes the th nonzero coefficient when the terms of the polynomials are sorted by increasing degrees. Using Hölder's inequality Hölder's inequality allows the extension of Lagrange's and Cauchy's bounds to every -norm. The -norm of a sequence is for any real number , and If with , and , an upper bound on the absolute values of the roots of is For and , one gets respectively Cauchy's and Lagrange's bounds. For , one has the bound This is not only a bound of the absolute values of the roots, but also a bound of the product of their absolute values larger than 1; see , below. Let be a root of the polynomial Setting we have to prove that every root of satisfies If the inequality is true; so, one may suppose for the remainder of the proof. Writing the equation as Hölder's inequality implies If , this is Thus In the case , the summation formula for a geometric progression, gives Thus which simplifies to Thus, in all cases which finishes the proof. Other bounds Many other upper bounds for the magnitudes of all roots have been given. Fujiwara's bound slightly improves the bound given above by dividing the last argument of the maximum by two. Kojima's bound is where denotes the th nonzero coefficient when the terms of the polynomials are sorted by increasing degrees. If all coefficients are nonzero, Fujiwara's bound is sharper, since each element in Fujiwara's bound is the geometric mean of first elements in Kojima's bound. Sun and Hsieh obtained another improvement on Cauchy's bound. Assume the polynomial is monic with general term . Sun and Hsieh showed that upper bounds and could be obtained from the following equations. is the positive root of the cubic equation They also noted that . Landau's inequality The previous bounds are upper bounds for each root separately. Landau's inequality provides an upper bound for the absolute values of the product of the roots that have an absolute value greater than one. 
This inequality, discovered in 1905 by Edmund Landau, has been forgotten and rediscovered at least three times during the 20th century. This bound of the product of roots is not much greater than the best preceding bounds of each root separately. Let be the roots of the polynomial . If is the Mahler measure of , then Surprisingly, this bound of the product of the absolute values larger than 1 of the roots is not much larger than the best bounds of one root that have been given above for a single root. This bound is even exactly equal to one of the bounds that are obtained using Hölder's inequality. This bound is also useful to bound the coefficients of a divisor of a polynomial with integer coefficients: if is a divisor of , then and, by Vieta's formulas, for , where is a binomial coefficient. Thus and Discs containing some roots From Rouché theorem Rouché's theorem allows defining discs centered at zero and containing a given number of roots. More precisely, if there is a positive real number and an integer such that then there are exactly roots, counted with multiplicity, of absolute value less than . If then By Rouché's theorem, this implies directly that and have the same number of roots of absolute values less than , counted with multiplicities. As this number is , the result is proved. The above result may be applied if the polynomial takes a negative value for some positive real value of . In the remaining of the section, suppose that . If it is not the case, zero is a root, and the localization of the other roots may be studied by dividing the polynomial by a power of the indeterminate, getting a polynomial with a nonzero constant term. For and , Descartes' rule of signs shows that the polynomial has exactly one positive real root. If and are these roots, the above result shows that all the roots satisfy As these inequalities apply also to and these bounds are optimal for polynomials with a given sequence of the absolute values of their coefficients. They are thus sharper than all bounds given in the preceding sections. For , Descartes' rule of signs implies that either has two positive real roots that are not multiple, or is nonnegative for every positive value of . So, the above result may be applied only in the first case. If are these two roots, the above result implies that for roots of , and that for the other roots. Instead of explicitly computing and it is generally sufficient to compute a value such that (necessarily ). These have the property of separating roots in terms of their absolute values: if, for , both and exist, there are exactly roots such that For computing one can use the fact that is a convex function (its second derivative is positive). Thus exists if and only if is negative at its unique minimum. For computing this minimum, one can use any optimization method, or, alternatively, Newton's method for computing the unique positive zero of the derivative of (it converges rapidly, as the derivative is a monotonic function). One can increase the number of existing 's by applying the root squaring operation of the Dandelin–Graeffe iteration. If the roots have distinct absolute values, one can eventually completely separate the roots in terms of their absolute values, that is, compute positive numbers such there is exactly one root with an absolute value in the open interval for . 
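The standard textbook forms of three bounds of this kind can be checked numerically: Cauchy's bound 1 + max_i |a_i/a_n|, Lagrange's bound max(1, Σ_i |a_i/a_n|), and Landau's inequality, which bounds the Mahler measure |a_n| · Π max(1, |root|) by the Euclidean norm of the coefficient vector. The following Python sketch (numpy assumed available) does so for a small example; the formulas here are the standard statements rather than quotations of the displayed formulas above.

```python
import numpy as np

# Coefficients a_0, a_1, ..., a_n (a_n != 0) of p(x) = a_0 + a_1 x + ... + a_n x^n.
a = np.array([-6.0, 11.0, -6.0, 1.0])        # p(x) = (x - 1)(x - 2)(x - 3)
n = len(a) - 1

ratios = np.abs(a[:-1] / a[-1])
cauchy_bound = 1 + ratios.max()               # 1 + max |a_i / a_n|
lagrange_bound = max(1.0, ratios.sum())       # max(1, sum |a_i / a_n|)

roots = np.roots(a[::-1])                     # np.roots expects highest degree first
largest_root = np.abs(roots).max()

mahler = np.abs(a[-1]) * np.prod(np.maximum(1.0, np.abs(roots)))
two_norm = np.linalg.norm(a)                  # Landau: Mahler measure <= ||a||_2

print(f"max |root|     = {largest_root:.4f}")   # 3.0
print(f"Cauchy bound   = {cauchy_bound:.4f}")   # 12.0
print(f"Lagrange bound = {lagrange_bound:.4f}") # 23.0
print(f"Mahler measure = {mahler:.4f} <= ||a||_2 = {two_norm:.4f}")
```

For this example the largest root has absolute value 3, comfortably below both bounds, and the Mahler measure 6 is below the coefficient 2-norm of roughly 13.93, as Landau's inequality requires.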
From Gershgorin circle theorem The Gershgorin circle theorem applies the companion matrix of the polynomial on a basis related to Lagrange interpolation to define discs centered at the interpolation points, each containing a root of the polynomial; see for details. If the interpolation points are close to the roots of the roots of the polynomial, the radii of the discs are small, and this is a key ingredient of Durand–Kerner method for computing polynomial roots. Bounds of real roots For polynomials with real coefficients, it is often useful to bound only the real roots. It suffices to bound the positive roots, as the negative roots of are the positive roots of . Clearly, every bound of all roots applies also for real roots. But in some contexts, tighter bounds of real roots are useful. For example, the efficiency of the method of continued fractions for real-root isolation strongly depends on tightness of a bound of positive roots. This has led to establishing new bounds that are tighter than the general bounds of all roots. These bounds are generally expressed not only in terms of the absolute values of the coefficients, but also in terms of their signs. Other bounds apply only to polynomials whose all roots are reals (see below). Bounds of positive real roots To give a bound of the positive roots, one can assume without loss of generality, as changing the signs of all coefficients does not change the roots. Every upper bound of the positive roots of is also a bound of the real zeros of . In fact, if is such a bound, for all , one has . Applied to Cauchy's bound, this gives the upper bound for the real roots of a polynomial with real coefficients. If this bound is not greater than , this means that all nonzero coefficients have the same sign, and that there is no positive root. Similarly, another upper bound of the positive roots is If all nonzero coefficients have the same sign, there is no positive root, and the maximum must be zero. Other bounds have been recently developed, mainly for the method of continued fractions for real-root isolation. Polynomials whose roots are all real If all roots of a polynomial are real, Laguerre proved the following lower and upper bounds of the roots, by using what is now called Samuelson's inequality. Let be a polynomial with all real roots. Then its roots are located in the interval with endpoints For example, the roots of the polynomial satisfy Root separation The root separation of a polynomial is the minimal distance between two roots, that is the minimum of the absolute values of the difference of two roots: The root separation is a fundamental parameter of the computational complexity of root-finding algorithms for polynomials. In fact, the root separation determines the precision of number representation that is needed for being certain of distinguishing distinct roots. Also, for real-root isolation, it allows bounding the number of interval divisions that are needed for isolating all roots. For polynomials with real or complex coefficients, it is not possible to express a lower bound of the root separation in terms of the degree and the absolute values of the coefficients only, because a small change on a single coefficient transforms a polynomial with multiple roots into a square-free polynomial with a small root separation, and essentially the same absolute values of the coefficient. However, involving the discriminant of the polynomial allows a lower bound. 
For square-free polynomials with integer coefficients, the discriminant is an integer, and has thus an absolute value that is not smaller than . This allows lower bounds for root separation that are independent of the discriminant. Mignotte's separation bound is where is the discriminant, and For a square-free polynomial with integer coefficients, this implies where is the bit size of , that is the sum of the bitsize of its coefficients. Gauss–Lucas theorem The Gauss–Lucas theorem states that the convex hull of the roots of a polynomial contains the roots of the derivative of the polynomial. A sometimes useful corollary is that, if all roots of a polynomial have positive real part, then so do the roots of all derivatives of the polynomial. A related result is Bernstein's inequality. It states that for a polynomial P of degree n with derivative P′ we have Statistical distribution of the roots If the coefficients of a random polynomial are independently and identically distributed with a mean of zero, most complex roots are on the unit circle or close to it. In particular, the real roots are mostly located near , and, moreover, their expected number is, for a large degree, less than the natural logarithm of the degree. If the coefficients are Gaussian distributed with a mean of zero and variance of σ then the mean density of real roots is given by the Kac formula where When the coefficients are Gaussian distributed with a non-zero mean and variance of σ, a similar but more complex formula is known. Real roots For large , the mean density of real roots near is asymptotically if and It follows that the expected number of real roots is, using big O notation, where is a constant approximately equal to . In other words, the expected number of real roots of a random polynomial of high degree is lower than the natural logarithm of the degree. Kac, Erdős and others have shown that these results are insensitive to the distribution of the coefficients, if they are independent and have the same distribution with mean zero. However, if the variance of the th coefficient is equal to the expected number of real roots is Geometry of multiple roots A polynomial can be written in the form of with distinct roots and corresponding multiplicities . A root is a simple root if or a multiple root if . Simple roots are Lipschitz continuous with respect to coefficients but multiple roots are not. In other words, simple roots have bounded sensitivities but multiple roots are infinitely sensitive if the coefficients are perturbed arbitrarily. As a result, most root-finding algorithms suffer substantial loss of accuracy on multiple roots in numerical computation. In 1972, William Kahan proved that there is an inherent stability of multiple roots. Kahan discovered that polynomials with a particular set of multiplicities form what he called a pejorative manifold and proved that a multiple root is Lipschitz continuous if the perturbation maintains its multiplicity. This geometric property of multiple roots is crucial in numerical computation of multiple roots. See also Quadratic function#Upper bound on the magnitude of the roots Cohn's theorem relating the roots of a self-inversive polynomial with the roots of the reciprocal polynomial of its derivative. Notes References External links The beauty of the roots, a visualization of the distribution of all roots of all polynomials with degree and integer coefficients in some range. Polynomials
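The statistical behaviour described in the section above can be checked empirically. The sketch below is an illustration added here (degree, sample size and the tolerance used to decide whether a computed root is real are arbitrary choices); it draws random polynomials with independent standard Gaussian coefficients, counts their real roots numerically, and compares the average with the leading term (2/π)·ln(degree) of the Kac asymptotic, omitting the constant term:

    import numpy as np

    def avg_real_roots(degree, trials=2000, tol=1e-9, seed=0):
        """Average number of real roots of random degree-`degree` polynomials
        with independent standard Gaussian coefficients (crude tolerance test)."""
        rng = np.random.default_rng(seed)
        total = 0
        for _ in range(trials):
            coeffs = rng.standard_normal(degree + 1)   # highest degree first
            roots = np.roots(coeffs)
            total += np.sum(np.abs(roots.imag) < tol)
        return total / trials

    n = 50
    print(avg_real_roots(n))        # empirical average, roughly 3 for n = 50
    print(2 / np.pi * np.log(n))    # leading term of the asymptotic, about 2.5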
Geometrical properties of polynomial roots
[ "Mathematics" ]
3,580
[ "Polynomials", "Algebra" ]
9,232,391
https://en.wikipedia.org/wiki/Risk-based%20testing
Risk-based testing (RBT) is a type of software testing that functions as an organizational principle used to prioritize the tests of features and functions in software, based on the risk of failure, as a function of their importance and the likelihood or impact of failure. In theory, there are an infinite number of possible tests. Risk-based testing uses risk (re-)assessments to steer all phases of the test process, i.e., test planning, test design, test implementation, test execution and test evaluation. This includes, for instance, ranking of tests and subtests for functionality; test techniques such as boundary-value analysis, all-pairs testing and state transition tables aim to find the areas most likely to be defective. Types of risk assessment Light-weight risk assessment Lightweight risk-based testing methods mainly concentrate on two important factors: likelihood and impact. Likelihood means how likely it is for a risk to happen, while impact measures how serious the consequences could be if the risk actually occurs. Instead of using complicated math, these techniques rely on simple judgments and scales. For instance, a team might rate the chance of risk as high, medium, or low and its impact as severe, moderate, or minor. These ratings help prioritize where testing efforts should be focused. Heavy-weight risk assessment Heavy-weight risk-based testing is a method used to test software by focusing on the areas where problems are most likely to happen. The testing team looks for the most important parts of the software that might fail and concentrates on testing those parts more thoroughly. There are four main types of heavy-weight risk-based testing methods: Cost of Exposure: This looks at how much money a problem in the software might cost. It figures this out by thinking about how likely a problem is and how much it might cost. Failure Mode and Effect Analysis (FMEA): This technique finds out what parts of the software might fail, why they might fail, and what might happen if they do. It helps find the important areas that need attention. Quality Function Deployment (QFD): This method helps connect what the users need with what the software does. It looks at risks that might come from not understanding what the users really want. Fault Tree Analysis (FTA): This technique is used to figure out why something went wrong by looking at different reasons in a step-by-step way. Types of risk Risk can be identified as the probability that an undetected software bug may have a negative impact on the user of a system. The methods assess risks along a variety of dimensions: Business or operational High use of a subsystem, function or feature Criticality of a subsystem, function or feature, including the cost of failure Technical Geographic distribution of development team Complexity of a subsystem or function External Sponsor or executive preference Regulatory requirements E-business failure-mode related Static content defects Web page integration defects Functional behavior-related failure Service (Availability and Performance) related failure Usability and Accessibility-related failure Security vulnerability Large scale integration failure Some considerations about prioritizing risks are discussed by Venkat Ramakrishnan in a blog. References Software testing
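As a minimal illustration of the lightweight likelihood-and-impact scoring described above (a sketch added to this text, not taken from the article; the scales, scores and feature names are invented for the example), test areas can be ranked by a simple risk score so the riskiest are tested first:

    # Ordinal scales for the two lightweight factors.
    LIKELIHOOD = {"low": 1, "medium": 2, "high": 3}
    IMPACT = {"minor": 1, "moderate": 2, "severe": 3}

    def risk_score(likelihood, impact):
        """Simple risk priority: product of the two ordinal ratings."""
        return LIKELIHOOD[likelihood] * IMPACT[impact]

    # Hypothetical features, rated by the team.
    features = [
        ("payment processing", "high", "severe"),
        ("report export", "medium", "moderate"),
        ("UI theme switcher", "low", "minor"),
    ]

    # Test the riskiest features first.
    for name, lik, imp in sorted(features, key=lambda f: -risk_score(f[1], f[2])):
        print(f"{name}: risk {risk_score(lik, imp)}")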
Risk-based testing
[ "Engineering" ]
646
[ "Software engineering", "Software testing", "Software engineering stubs" ]
9,233,284
https://en.wikipedia.org/wiki/Reef%20Ball%20Foundation
Reef Ball Foundation, Inc. is a 501(c)(3) non-profit organization that functions as an international environmental non-governmental organization. The foundation uses reef ball artificial reef technology, combined with coral propagation, transplant technology, public education, and community training to build, restore and protect coral reefs. The foundation has established "reef ball reefs" in 59 countries. Over 550,000 reef balls have been deployed in more than 4,000 projects. History Reef Ball Development Group was founded in 1993 by Todd Barber, with the goal of helping to preserve and protect coral reefs for the benefit of future generations. Barber witnessed his favorite coral reef on Grand Cayman destroyed by Hurricane Gilbert, and wanted to do something to help increase the resiliency of eroding coral reefs. Barber and his father patented the idea of building reef substrate modules with a central inflatable bladder, so that the modules would be buoyant, making them easy to deploy by hand or with a small boat, rather than requiring heavy machinery. Over the next few years, with the help of research colleagues at the University of Georgia, Nationwide Artificial Reef Coordinators and the Florida Institute of Technology (FIT), Barber, his colleagues, and business partners worked to perfect the design. In 1997, Kathy Kirbo established The Reef Ball Foundation, Inc. as a non-profit organization, with the original founders being Todd Barber as chairman and charter member; Kathy Kirbo as founding executive director, board secretary, and charter member; Larry Beggs as vice president and charter member; Eric Krasle as treasurer and charter member; and Jay Jorgensen as a charter member. Reef balls can be found in almost every coastal state in the United States, and on every continent including Antarctica. The foundation has expanded the scope of its projects to include coral rescue, propagation and transplant operations, beach restorations, mangrove restorations and nursery development. Reef Ball also participates in education and outreach regarding environmental stewardship and coral reefs. In 2001, Reef Ball Foundation took control of the Reef Ball Development Group, and operates all aspects of the business as a non-profit organization. By 2007, the foundation had deployed 550,000 reef balls worldwide. In 2019, Reef Ball Foundation deployed 1,400 reef balls off the shores of Progreso, Yucatán in Mexico. Artificial reefs were also built in Quintana Roo, Baja California, Colima, Veracruz, and Campeche. Almost 25,000 reef balls have been established in the surrounding seas of Mexico. Technology and research The Reef Ball Foundation manufactures reef balls for open ocean deployment in sizes from in diameter and in weight. Reef balls are hollow, and typically have several convex-concave holes of varying sizes to most closely approximate natural coral reef conditions by creating whirlpools. Reef balls are made from pH-balanced microsilica concrete, and are treated to create a rough surface texture, in order to promote settling by marine organisms such as corals, algae, coralline algae and sponges. Over the last decade, research has been conducted with respect to the ability of artificial reefs to produce or attract biomass, the effectiveness of reef balls in replicating natural habitat, and mitigating disasters. The use of reef balls as breakwaters and for beach stabilization has been extensively studied.
Projects The foundation undertakes an array of projects including artificial reef deployment, estuary restoration, mangrove plantings, oyster reef creation, coral propagation, natural disaster recovery, erosion control, and education. Notable projects include: In Antigua, 4,700 modules were deployed around the island. In Malaysia, 5,000 reef balls were deployed around protected sea turtle nesting islands to deter netting, successfully increasing nesting numbers. In Campeche, Mexico, over 4,000 reef balls were deployed by local fishing communities to enhance fishery resources. In Tampa Bay, USA, reef balls were installed beneath docks, in front of sea walls, and as a submerged breakwater to create oyster reefs. In Phuket, Thailand, reef balls were planted with corals after the Boxing Day Tsunami to help restore tourism. In Indonesia, locals and P.T. Newmont used reef balls to mitigate damage from mining operations and restore thousands of coral heads. In Australia, reef balls have been used to enhance fisheries in New South Wales. Designed artificial reefs The trend in artificial reef development has been toward the construction of designed artificial reefs, built from materials specifically designed to function as reefs. Designed systems (such as reef balls) can be modified to achieve a variety of goals. These include coral reef rehabilitation, fishery enhancement, snorkeling and diving trails, beach erosion protection, surfing enhancement, fish spawning sites, planters for mangrove replanting, enhancement of lobster fisheries, creation of oyster reefs, estuary rehabilitation, and even exotic uses such as deep water Oculina coral replanting. Designed systems can overcome many of the problems associated with "materials of opportunity" such as stability in storms, durability, biological fit, lack of potential pollution problems, availability, and reduction in long-term artificial reef costs. Designed reefs have been developed specifically for coral reef rehabilitation, and can therefore be used in a more specific niche than materials of opportunity. Some examples of specialized adaptations which "designed reefs" can use include: specialized surface textures, coral planting attachment points, specialized pH-neutral surfaces (such as neutralized concrete, ceramics, or mineral accretion surfaces), fissures to create currents for corals, and avoidance of materials such as iron (which may cause algae to overgrow coral). Other types of designed systems can create aquaculture opportunities for lobsters, create oyster beds, or be used for a large variety of other specialized needs. See also Underwater sculptures Project AWARE Reef Check References External links Reef Ball Foundation Environmental organizations based in Georgia (U.S. state) Environmental engineering Ecology organizations 501(c)(3) organizations Coral reefs Fisheries organizations
Reef Ball Foundation
[ "Chemistry", "Engineering", "Biology" ]
1,195
[ "Coral reefs", "Chemical engineering", "Biogeomorphology", "Civil engineering", "Environmental engineering" ]
9,233,359
https://en.wikipedia.org/wiki/Vibrational%20circular%20dichroism
Vibrational circular dichroism (VCD) is a spectroscopic technique which detects differences in attenuation of left and right circularly polarized light passing through a sample. It is the extension of circular dichroism spectroscopy into the infrared and near infrared ranges. Because VCD is sensitive to the mutual orientation of distinct groups in a molecule, it provides three-dimensional structural information. Thus, it is a powerful technique as VCD spectra of enantiomers can be simulated using ab initio calculations, thereby allowing the identification of absolute configurations of small molecules in solution from VCD spectra. Among such quantum computations of VCD spectra resulting from the chiral properties of small organic molecules are those based on density functional theory (DFT) and gauge-including atomic orbitals (GIAO). A simple example of the experimental results obtained by VCD is the spectral data obtained within the carbon-hydrogen (C-H) stretching region of 21 amino acids in heavy water solutions. Measurements of vibrational optical activity (VOA) have thus numerous applications, not only for small molecules, but also for large and complex biopolymers such as muscle proteins (myosin, for example) and DNA. Vibrational modes Theory While the fundamental quantity associated with the infrared absorption is the dipole strength, the differential absorption is also proportional to the rotational strength, a quantity which depends on both the electric and magnetic dipole transition moments. Sensitivity to the handedness of a molecule toward circularly polarized light results from the form of the rotational strength. A rigorous theoretical development of VCD was developed concurrently by the late Professor P.J. Stephens, FRS, at the University of Southern California, and the group of Professor A.D. Buckingham, FRS, at Cambridge University in the UK, and first implemented analytically in the Cambridge Analytical Derivative Package (CADPAC) by R.D. Amos. Previous developments by D.P. Craig and T. Thirunamachandran at the Australian National University and Larry A. Nafie and Teresa B. Freedman at Syracuse University, though theoretically correct, were not able to be straightforwardly implemented, which prevented their use. Only with the development of the Stephens formalism as implemented in CADPAC did a fast, efficient and theoretically rigorous calculation of the VCD spectra of chiral molecules become feasible. This also stimulated the commercialization of VCD instruments by Biotools, Bruker, Jasco and Thermo-Nicolet (now Thermo-Fisher). Peptides and proteins Extensive VCD studies have been reported for both polypeptides and several proteins in solution; several recent reviews were also compiled. An extensive but not comprehensive VCD publications list is also provided in the "References" section. The published reports over the last 22 years have established VCD as a powerful technique with improved results over those previously obtained by visible/UV circular dichroism (CD) or optical rotatory dispersion (ORD) for proteins and nucleic acids. The effects due to solvent on stabilizing the structures (conformers and zwitterionic species) of amino acids and peptides and the corresponding effects seen in the vibrational circular dichroism (VCD) and Raman optical activity spectra (ROA) have been recently documented by a combined theoretical and experimental work on L-alanine and N-acetyl L-alanine N'-methylamide.
Similar effects have also been seen in the nuclear magnetic resonance (NMR) spectra by the Weise and Weisshaar NMR groups at the University of Wisconsin–Madison. Nucleic acids VCD spectra of nucleotides, synthetic polynucleotides and several nucleic acids, including DNA, have been reported and assigned in terms of the type and number of helices present in A-, B-, and Z-DNA. Instrumentation VCD can be regarded as a relatively recent technique. Although Vibrational Optical Activity and in particular Vibrational Circular Dichroism, has been known for a long time, the first VCD instrument was developed in 1973 and commercial instruments were available only since 1997. For biopolymers such as proteins and nucleic acids, the difference in absorbance between the levo- and dextro- configurations is five orders of magnitude smaller than the corresponding (unpolarized) absorbance. Therefore, VCD of biopolymers requires the use of very sensitive, specially built instrumentation as well as time-averaging over relatively long intervals of time even with such sensitive VCD spectrometers. Most CD instruments produce left- and right- circularly polarized light which is then either sine-wave or square-wave modulated, with subsequent phase-sensitive detection and lock-in amplification of the detected signal. In the case of FT-VCD, a photo-elastic modulator (PEM) is employed in conjunction with an FTIR interferometer set-up. An example is that of a Bomem model MB-100 FTIR interferometer equipped with additional polarizing optics/ accessories needed for recording VCD spectra. A parallel beam emerges through a side port of the interferometer which passes first through a wire grid linear polarizer and then through an octagonal-shaped ZnSe crystal PEM which modulates the polarized beam at a fixed, lower frequency such as 37.5 kHz. A mechanically stressed crystal such as ZnSe exhibits birefringence when stressed by an adjacent piezoelectric transducer. The linear polarizer is positioned close to, and at 45 degrees, with respect to the ZnSe crystal axis. The polarized radiation focused onto the detector is doubly modulated, both by the PEM and by the interferometer setup. A very low noise detector, such as MCT (HgCdTe), is also selected for the VCD signal phase-sensitive detection. The first dedicated VCD spectrometer brought to market was the ChiralIR from Bomem/BioTools, Inc. in 1997. Today, Thermo-Electron, Bruker, Jasco and BioTools offer either VCD accessories or stand-alone instrumentation. To prevent detector saturation an appropriate, long wave pass filter is placed before the very low noise MCT detector, which allows only radiation below 1750 cm−1 to reach the MCT detector; the latter however measures radiation only down to 750 cm−1. FT-VCD spectra accumulation of the selected sample solution is then carried out, digitized and stored by an in-line computer. Published reviews that compare various VCD methods are also available. Magnetic VCD VCD spectra have also been reported in the presence of an applied external magnetic field. This method can enhance the VCD spectral resolution for small molecules. Raman optical activity (ROA) ROA is a technique complementary to VCD especially useful in the 50–1600 cm−1 spectral region; it is considered as the technique of choice for determining optical activity for photon energies less than 600 cm−1. 
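As a small numerical illustration of the kind of quantity a VCD spectrometer measures (a sketch added to this text, not taken from the article; the absorbance values are invented), the differential absorbance ΔA = A_L − A_R for a vibrational band is typically several orders of magnitude smaller than the ordinary absorbance, and is often expressed through the dimensionless dissymmetry (g) factor:

    def vcd_quantities(a_left, a_right):
        """Return the differential absorbance dA = A_L - A_R and the
        dissymmetry factor g = dA / A, with A the mean of the two absorbances."""
        delta_a = a_left - a_right
        mean_a = 0.5 * (a_left + a_right)
        return delta_a, delta_a / mean_a

    # Invented example values for one vibrational band of a chiral sample.
    delta_a, g = vcd_quantities(a_left=0.500023, a_right=0.499977)
    print(f"dA = {delta_a:.2e}, g = {g:.1e}")   # dA about 4.6e-05, g about 9.2e-05

A g factor of this order illustrates why VCD instruments need modulation, phase-sensitive detection and long time-averaging, as described in the Instrumentation section.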
See also Amino acid Birefringence Circular dichroism Density functional theory DNA DNA structure Hyper–Rayleigh scattering optical activity IR spectroscopy Magnetic circular dichroism Molecular models of DNA Nucleic acid Optical rotatory dispersion Photoelastic modulator Polarization Protein Protein structure Quantum chemistry Raman optical activity (ROA) References Polarization (waves) Physical chemistry Proteins Peptides Nucleic acids Infrared spectroscopy Biochemistry Biophysics DNA Molecular biology Molecular geometry Quantum chemistry
Vibrational circular dichroism
[ "Physics", "Chemistry", "Biology" ]
1,546
[ "Biomolecules by chemical classification", "Quantum mechanics", "Theoretical chemistry", "Proteins", "Spectroscopy", "Physical chemistry", "Biophysics", " molecular", "Peptides", " and optical physics", "Spectrum (physical sciences)", "Molecular geometry", "Atomic", "Nucleic acids", "App...
9,233,368
https://en.wikipedia.org/wiki/Dye%20tracing
Dye tracing is a method of tracking and tracing various flows using dye as a flow tracer when added to a liquid. Dye tracing may be used to analyse the flow of the liquid or the transport of objects within the liquid. Dye tracing may be either qualitative, showing the presence of a particular flow, or quantitative, when the amount of the traced dye is measured by special instruments. Fluorescent dyes Fluorescent dyes are often used in situations where there is insufficient lighting (e.g., sewers or cave waters), and where precise quantitative data are required (measured by a fluorometer). In 1871, fluorescein was among the first fluorescent dyes to be developed. Its disodium salt (under the trademark "uranine") was developed several years later and still remains among the best tracer dyes. Other popular tracer dyes are rhodamine, pyranine and sulforhodamine B. Quantitative tracing Carbon sampling was the first method of technology-assisted dye tracing that was based on the absorption of dye in charcoal. Charcoal packets may be placed along the expected route of the flow; later the collected dye may be chemically extracted and its amount subjectively evaluated. Filter fluorometers were the first devices that could detect dye concentrations beyond human eye sensitivity. Spectrofluorometers, developed in the mid-1980s, made it possible to perform advanced analysis of fluorescence. Filter fluorometers and spectrofluorometers identify the intensity of fluorescence that is present in a liquid sample. Different dyes and chemicals produce a distinctive wavelength that is determined during analysis. Tracing methods Each sampling area is analysed by a quantitative instrument to test the background fluorescence. Each different type of dye has significant performance factors that distinguish it in different environments. These performance factors include: Resistance to absorption Surface water loss Limitations of use in acidic waters Depending on the environment, water flows possess certain factors that can affect how a dye performs. Natural fluorescence in a water flow can interfere with certain dyes. The presence of organic material, other chemicals, and sunlight can affect the intensity of dyes. Applications Water tracing Typical applications of water flow tracing include: Plumbing/piping tracing Leak detection Checking for illegal tapping Pollution studies Natural waterflow analysis (rivers, lakes, ocean currents, cave waterflows, karst studies, groundwater filtration, etc.) Sewer and stormwater drainage analysis Medicine and biology Dye tracing may be used for the analysis of blood circulation within various parts of the human or animal body. For example, fluorescent angiography, a technique of analysis of circulation in the retina, is used for diagnosing various eye diseases. With modern fluorometers, capable of tracking single fluorescent molecules, it is possible to track migrations of single cells tagged by a fluorescent molecule (see fluorescein in biological research). For example, fluorescence-activated cell sorting in flow cytometry makes it possible to sort out the cells with attached fluorescent molecules from a flow. See also Fluorescence microscope FLEX mission Green fluorescent protein Float tracking Streakline References Dyes Data collection Hydrology
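One common quantitative use of dye tracing in hydrology is slug-injection dilution gauging, where stream discharge is estimated as Q = M / ∫C dt from the fluorometer record downstream of a known dye injection. This is added here only as an illustrative sketch: the formula is standard in tracer hydrology but is not discussed explicitly in this article, and all numbers below are invented.

    import numpy as np

    def discharge_from_slug(mass_g, times_s, conc_g_per_m3):
        """Slug-injection dilution gauging: Q = M / integral of C(t) dt.
        mass_g: injected dye mass; conc_g_per_m3: concentration above
        background measured at the downstream station at times_s (seconds)."""
        return mass_g / np.trapz(conc_g_per_m3, times_s)   # result in m^3/s

    # Invented fluorometer record: a dye cloud passing over about ten minutes.
    t = np.linspace(0, 600, 61)                       # seconds
    c = 0.05 * np.exp(-((t - 240) / 60.0) ** 2)       # g/m^3 above background
    print(discharge_from_slug(mass_g=100.0, times_s=t, conc_g_per_m3=c))
    # about 19 m^3/s for these invented numbers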
Dye tracing
[ "Chemistry", "Technology", "Engineering", "Environmental_science" ]
644
[ "Data collection", "Hydrology", "Data", "Environmental engineering" ]
9,234,084
https://en.wikipedia.org/wiki/Fluorometer
A fluorometer, fluorimeter or fluormeter is a device used to measure parameters of visible spectrum fluorescence: its intensity and wavelength distribution of emission spectrum after excitation by a certain spectrum of light. These parameters are used to identify the presence and the amount of specific molecules in a medium. Modern fluorometers are capable of detecting fluorescent molecule concentrations as low as 1 part per trillion. Fluorescence analysis can be orders of magnitude more sensitive than other techniques. Applications include chemistry/biochemistry, medicine, environmental monitoring. For instance, they are used to measure chlorophyll fluorescence to investigate plant physiology. Components and design Typically fluorometers utilize a double beam. These two beams work in tandem to decrease the noise created from radiant power fluctuations. The upper beam is passed through a filter or monochromator and passes through the sample. The lower beam is passed through an attenuator and adjusted to try to match the fluorescent power given off from the sample. Light from the fluorescence of the sample and the lower, attenuated beam are detected by separate transducers and converted to an electrical signal that is interpreted by a computer system. Within the machine the transducer that detects fluorescence created from the upper beam is located a distance away from the sample and at a 90-degree angle from the incident, upper beam. The machine is constructed like this to decrease the stray light from the upper beam that may strike the detector. The optimal angle is 90 degrees. There are two different approaches to handling the selection of incident light that give rise to different types of fluorometers. If filters are used to select wavelengths of light, the machine is called a fluorometer. While a spectrofluorometer will typically use two monochromators, some spectrofluorometers may use one filter and one monochromator. In this case, the broad band filter acts to reduce stray light, including from unwanted diffraction orders of the diffraction grating in the monochromator. Light sources for fluorometers are often dependent on the type of sample being tested. Among the most common light sources for fluorometers is the low-pressure mercury lamp. This provides many excitation wavelengths, making it the most versatile. However, this lamp is not a continuous source of radiation. The xenon arc lamp is used when a continuous source of radiation is needed. Both of these sources provide a suitable spectrum of ultraviolet light that induces chemiluminescence. These are just two of the many possible light sources. Glass and silica cuvettes are often the vessels in which the sample is placed. Care must be taken to not leave fingerprints or any other sort of mark on the outside of the cuvette, because this can produce unwanted fluorescence. "Spectro grade" solvents such as methanol are sometimes used to clean the vessel surfaces to minimize these problems.
Fluorescence assays are required by milk producers in the UK to prove successful pasteurization has occurred, so all UK dairies contain fluorimetry equipment. Protein aggregation and TSE detection Thioflavins are dyes used for histology staining and biophysical studies of protein aggregation. For example, thioflavin T is used in the RT-QuIC technique to detect transmissible spongiform encephalopathy-causing misfolded prions. Oceanography Fluorometers are widely used in oceanography to measure chlorophyll concentrations based on chlorophyll fluorescence by phytoplankton cell pigments. Chlorophyll fluorescence is a widely-used proxy for the quantity (biomass) of microscopic algae in the water. In the lab after water sampling, researchers extract the pigments out of a filter that has phytoplankton cells on it, then measure the fluorescence of the extract in a benchtop fluorometer in a dark room. To directly measure chlorophyll fluorescence "in situ" (in the water), researchers use instruments designed to measure fluorescence optically (for example, sondes with extra electronic optical sensors attached). The optical sensors emit blue light to excite phytoplankton pigments and make them fluoresce or emit red light. The sensor measures this induced fluorescence by measuring the red light as a voltage, and the instrument saves it to a data file. The voltage signal of the sensor gets converted to a concentration with a calibration curve in the lab, using either red-colored dyes like Rhodamine, standards like Fluorescein, or live phytoplankton cultures. Ocean chlorophyll fluorescence is measured on research vessels, small boats, buoys, docks, and piers all over the world. Fluorometry measurements are used to map chlorophyll concentrations in support of ocean color remote sensing. Special fluorometers for ocean waters can measure properties beyond the total amount of fluorescence, such as the quantum yield of photochemistry, the timing of the fluorescence, and the fluorescence of cells when subjected to increasing amounts of light. Aquaculture operations such as fish farms use fluorometers to measure food availability for filter feeding animals like mussels and to detect the onset of Harmful Algal Blooms (HABs) and/or "red tides" (not necessarily the same thing). Molecular biology Fluorometers can be used to determine the nucleic acid concentration in a sample. Fluorometer types There are two basic types of fluorometers: the filter fluorometer and the spectrofluorometer. The difference between them is the way they select the wavelengths of incident light; filter fluorometers use filters while spectrofluorometers use grating monochromators. Filter fluorometers are often purchased or built at a lower cost but are less sensitive and have less resolution than spectrofluorometers. Filter fluorometers are also capable of operation only at the wavelengths of the available filters, whereas monochromators are generally freely tunable over a relatively wide range. The potential disadvantage of monochromators arises from that same property, because the monochromator is capable of miscalibration or misadjustment, whereas the wavelengths of filters are fixed when manufactured. Filter fluorometer Spectrofluorometer Integrated fluorometer See also Fluorescence spectroscopy, for a fuller discussion of instrumentation Chlorophyll fluorescence, to investigate plant ecophysiology. Integrated fluorometer to measure gas exchange and chlorophyll fluorescence of leaves.
Radiometer, to measure various electromagnetic radiation Spectrometer, to analyze the spectrum of electromagnetic radiation Scatterometer, to measure scattered radiation Microfluorimetry, to measure fluorescence on a microscopic level Interference filter, thin film filters that work by optical interference, showing how they can be tuned in some cases References Laboratory equipment Electromagnetic radiation meters Spectrometers
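As a sketch of the calibration step mentioned in the Oceanography section above (added here for illustration; the voltages and standard concentrations are invented, and a simple linear fit is only one possible choice), a calibration curve fitted to dye or culture standards converts raw sensor voltages to chlorophyll concentrations:

    import numpy as np

    # Invented calibration standards: sensor voltage vs. known concentration (ug/L).
    volts_std = np.array([0.05, 0.52, 1.01, 1.98])
    conc_std = np.array([0.0, 5.0, 10.0, 20.0])

    # Least-squares straight line: concentration = slope * voltage + offset.
    slope, offset = np.polyfit(volts_std, conc_std, 1)

    def to_concentration(voltage):
        """Convert a raw fluorometer voltage to chlorophyll concentration (ug/L)."""
        return slope * voltage + offset

    print(to_concentration(0.75))   # about 7 ug/L for these invented standards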
Fluorometer
[ "Physics", "Chemistry", "Technology", "Engineering" ]
1,573
[ "Spectrum (physical sciences)", "Electromagnetic radiation meters", "Electromagnetic spectrum", "Measuring instruments", "Spectrometers", "Spectroscopy" ]
9,234,323
https://en.wikipedia.org/wiki/Urban%20ecosystem
In ecology, urban ecosystems are considered an ecosystem functional group within the intensive land-use biome. They are structurally complex ecosystems with highly heterogeneous and dynamic spatial structure that is created and maintained by humans. They include cities, smaller settlements and industrial areas that are made up of diverse patch types (e.g. buildings, paved surfaces, transport infrastructure, parks and gardens, refuse areas). Urban ecosystems rely on large subsidies of imported water, nutrients, food and other resources. Compared to other natural and artificial ecosystems, human population density is high, and their interaction with the different patch types produces emergent properties and complex feedbacks among ecosystem components. In socioecology, urban areas are considered part of a broader social-ecological system in which urban landscapes and urban human communities interact with other landscape elements. Urbanization has large impacts on human and environmental health, and the study of urban ecosystems has led to proposals for sustainable urban designs and approaches to development of city fringe areas that can help reduce negative impact on surrounding environments and promote human well-being. Urban ecosystem research Urban ecology is a relatively new field. Because of this, the research that has been done in this field has yet to become extensive. While there is still plenty of time for growth in the research of this field, there are some key issues and biases within the current research that still need to be addressed. The article "A Review of Urban Ecosystem Services: Six Key Challenges for Future Research" addresses the issue of geographical bias. According to this article, there is a significant geographical bias, "towards the northern hemisphere". The article states that case study research is done primarily in the United States and China. It goes on to explain how future research would benefit from a more geographically diverse array of case studies. "A Quantitative Review of Urban Ecosystem Service Assessments: Concepts, Models, and Implementation" is an article that gives a comprehensive examination of 217 papers written on Urban Ecosystems to answer the questions of where studies are being done, which types of studies are being done, and to what extent stakeholders influence these studies. According to this article, "The results indicate that most UES studies have been undertaken in Europe, North America, and China, at city scale. Assessment methods involve bio-physical models, Geographical Information Systems, and valuation, but few study findings have been implemented as land use policy." "Urban vacancy and land use legacies: A frontier for urban ecological research, design, and planning" is another scholarly article that gives an insight into the future of urban ecological research. It details an important opportunity for the future of urban ecological researchers that only a few researchers have inquired into so far, the utilization of vacant land for the creation of urban ecosystems. Difficulties and Opportunities Difficulties Urban ecosystems are complex and dynamic systems that encompass a wide range of living and nonliving components. These components include humans, plants, animals, buildings, transportation systems, and water and energy infrastructure. As the world becomes increasingly urbanized, understanding urban ecosystems and how they function is becoming increasingly important.
Population growth Cities are home to more than half of the world's population, and the number of people living in urban areas is expected to continue to grow in the coming decades. This rapid urbanization can have both positive and negative impacts. On the one hand, cities can provide economic opportunities, access to healthcare and education, and a high quality of life for residents. On the other, increased urbanization exacerbates the struggles of pollution, loss of green spaces, loss of biodiversity, and more. Pollution In many cities, air pollution levels are well above safe limits, and this can have serious implications for human health. Pollution from vehicles, factories, and power plants can cause respiratory problems, heart disease, and even cancer. In addition to its impact on human health, air pollution can also damage buildings, corrode infrastructure, and harm plant and animal life. Dissolution of green spaces as a public resource As cities grow, natural areas such as forests, wetlands, and grasslands are often replaced by buildings, roads, and other forms of development. Lack of urban green spaces contributes to a reduction in air/water quality, mental and physical health of residents, energy efficiency, and biodiversity. Habitat fragmentation and loss of species diversity Related to the dissolution of green space, habitat fragmentation refers to the way in which green spaces get divided by urban development, making it impossible for some species to migrate between them. This movement between patches counteracts genetic drift and is essential to maintaining the genetic diversity needed for species survival. Species diversity is also impacted by the introduction of non-native and invasive species from travel and shipping processes. Research has found that heavily urbanized areas have a higher richness of invasive species when compared to rural communities. While not all non-native or invasive species are inherently detrimental to a city, invasives can out-compete essential native species, cause biotic homogenization, and introduce new vectors for new diseases. Urban heat islands Urban Heat Island (UHI) refers to the variation in average temperature that occurs within an urban area due to current methods of development. Patterns in UHIs cause disproportionate impacts of climate change, often creating extra burdens for the already vulnerable. Extreme heat events, which occur more frequently in UHIs, can and do result in deaths, cardiopulmonary diseases, reduced capacity for outdoor labor, mental health concerns, and kidney disease. The demographics most vulnerable to the negative impacts of UHIs are senior citizens and those without resources to cool off, such as air conditioners. Disease Current methods of urban development increase the risk of disease proliferation within cities as compared to rural environments. Urban traits that contribute to higher risk are poor housing conditions, contaminated water supplies, frequent travel in and out, survival success of rats, and intense population density that causes rapid spread and rapid evolution of the disease. Opportunities Green and blue infrastructure Green and blue infrastructure refers to methods of development that work to integrate natural systems and human-made structures. Green Infrastructure includes land conservation, such as nature preserves, and increased vegetation cover, such as vertical gardens. Blue infrastructure would include stormwater management efforts such as bioswales.
The process of LEED certification can be used to establish green infrastructure practices in individual buildings. Buildings with LEED certification status report 30% less energy used and economic and mental benefits from natural lighting. Public cities and walkable design Beginning in earnest during the 1960s, city planning in terms of transit centered around individual car use. Today, cars are still the most dominant form of transportation in urban areas. One effective solution is an improvement to public transportation. Expanding bus or train routes and switching to clean energy use address the issues of air quality, noise pollution, and socioeconomic equity. Another opportunity to reduce carbon emissions and increase population health would be the implementation of the walkable city model in urban planning. A walkable city is strategically planned to reduce distance traveled in order to access resources needed such as food and jobs. Strategic increases in green spaces Renewable energy Citizen participation in planning Improving research Bibliography Manfredi Nicoletti, L'Ecosistema Urbano (The Urban Ecosystem), Dedalo Bari 1978 Maes, Mikaël J. A., et al. (2019). Mapping Synergies and Trade-Offs between Urban Ecosystems and the Sustainable Development Goals. Environmental Science & Policy, 93, 181-188. Neuenkamp, Lena, et al. (2021). Special Issue: Urban Ecosystems: Potentials, Challenges, and Solutions. Basic & Applied Ecology, 56, 281-288. Nilon, C. H., Aronson, M. F., Cilliers, S. S., Dobbs, C., Frazee, L. J., Goddard, M. A., & Yocom, K. P. (2017). Planning for the future of urban biodiversity: A global review of city-scale initiatives. BioScience, 67(4), 332-342. Colombo, Enea, et al. "Smartification from Pilot Projects to New Trends in Urban Ecosystems." 2022 IEEE International Smart Cities Conference (ISC2), Smart Cities Conference (ISC2), 2022 IEEE International, September 2022, pp. 1–7. EBSCOhost. Kourdounouli, Christina, and Anna Maria Jönsson. "Urban Ecosystem Conditions and Ecosystem Services – a Comparison between Large Urban Zones and City Cores in the EU." Journal of Environmental Planning and Management BECC: Biodiversity and Ecosystem Services in a Changing Climate, vol. 63, no. 5, January 2020, pp. 798–817. EBSCOhost See also Ecosystem Human ecosystem Media ecosystem Urban planning Urban ecology References Ecosystems Systems ecology
Urban ecosystem
[ "Biology", "Environmental_science" ]
1,779
[ "Environmental social science", "Symbiosis", "Ecosystems", "Systems ecology" ]
9,234,631
https://en.wikipedia.org/wiki/Motorola%20Minitor
The Motorola Minitor is a portable, analog, receive only, voice pager typically carried by civil defense organizations such as fire, rescue, and EMS personnel (both volunteer and career) to alert of emergencies. The Minitor, slightly smaller than a pack of cigarettes, is carried on a person and usually left in selective call mode. When the unit is activated, the pager sounds a tone alert, followed by an announcement from a dispatcher alerting the user of a situation. After activation, the pager remains in monitor mode much like a scanner, and monitors transmissions on that channel until the unit is reset back into selective call mode either manually, or automatically after a set period of time, depending on programming. Purpose and History In the times before modern radio communications, it was difficult for emergency services such as volunteer fire departments to alert their members to an emergency, since the members were not based at the station. The earliest methods of sounding an alarm would typically be by ringing a bell either at the fire station or the local church. As electricity became available, most fire departments used fire sirens or whistles to summon volunteers (many fire departments still use outdoor sirens and horns along with pagers to alert volunteers). Other methods included specialized phones placed inside the volunteer firefighter's home or business or by base radios or scanners. "Plectron" radio receivers were very popular, but were limited to 120VAC or 12VDC operation, limiting their use to a house/building or mounted in a vehicle. There was a great need and desire for a portable radio small enough to be worn by a person and only activated when needed. Thus, Motorola answered this call in the 1970s and released the very first Minitor pager. There are six versions of Minitor pagers. The first was the original Minitor, followed by the Minitor II(1992), Minitor III(1999), Minitor IV, and the Minitor V released in late 2005. The Minitor VI was released in early 2014. The Minitor III, IV, and V used the same basic design, while the original Minitor and Minitor II use their own rectangular proprietary case design. Similar voice pagers released by Motorola were the Keynote and Director pagers. They were essentially stripped down versions of the Minitor and never gained widespread use, though the Keynotes were much more common in Europe because they could decode 5/6 tone alert patterns in addition to the more popular two tone sequential used in the United States. Although the Minitor is primarily used by emergency personnel, other agencies such as utilities and private contractors also use the pager. Unlike conventional alphanumeric pagers and cell phones, Minitors are operated on an RF network that is generally restricted to a particular agency in a given geographical area. The Minitor is the most common voice pager used by emergency services in the United States. However, digital 2-way pagers that can display alpha-numeric characters can overcome some of the limitations of voice only pagers, are now starting to replace the Minitor pagers in certain applications. Activation Minitor pagers, depending on the model and application, can operate in the VHF Low Band, VHF High Band, and UHF frequency ranges. They are alerted by using two-tone sequential Selective calling, generally following the Motorola Quick Call II standard. 
In other words, the pager will activate when a particular series of audible tones are sent over the frequency (commonly referred to as a "page") the pager is set to. For example, if a Minitor is programmed on VHF frequency channel 155.295 MHz and set to alert for 879 Hz & 358.6 Hz, it will disregard any other tone sequences transmitted on that frequency, only alerting when the proper sequence has been received. The pager may be reset back into its selective call mode by pressing the reset button, or it can be programmed to reset back into selective call mode automatically after a predetermined amount of time, to conserve battery power. Older Minitor pagers (both the Minitor I and Minitor II series) have tone reeds or filters that are tuned to a specific audible tone frequency, and must physically be replaced if alert tones are changed. For two-tone sequential paging, there are two reeds, the first tone passes through the first reed, and the second tone passes through the second reed, thereby activating the pager. Beginning with the Minitor III series, these physical reeds or filters are no longer necessary, as the pagers now feature all solid-state electronics, and various tone sequences can be programmed via computer software. Newer Minitor pagers can scan two channels by selecting that function via a rotary knob on the pager; in this mode when using a Minitor III or IV the user will hear all traffic, even without the correct tones being sent. If the activation tones are transmitted in this monitor mode, the pager alerts as normal. Minitor Vs have the option to remain in monitor mode or in selective call mode when scanning two channels. Minitor IIIs and IVs only have the option to remain in monitor mode when scanning two channels. The range of the Minitor's operating distance depends on the strength ("wattage") of the paging transmitter. A repeater is often used to improve paging coverage, as it can be located for better range than the dispatch center where the page originates from. Weather conditions, low battery, and even atmospheric conditions can affect the Minitor's ability to receive transmissions. In fact, a remote transmitter hundreds, even thousands of miles away belonging to a separate agency, can activate a Minitor (and also block it) unknowingly if the atmospheric conditions let the signal propagate that far. This is commonly known as radio skip. The Minitor is a receive-only unit, not a transceiver, and thus cannot transmit. Features Note - most of the features below refer to the Minitor pagers III and up; the original Minitor and Minitor II pagers may not have some of the listed features. Newer generation Minitor pagers can simultaneously scan up to two channels and have multiple activation tones. This can be very helpful if a user belongs to several emergency services, or the emergency service has different alarms for different emergencies. Alert tones - The default, and most common, alert is the continuous beeping (which sounds like "beep-beep-beep-beep", etc.). Other alarms can include steady high-pitched tones, and the newest Minitor V's can even have musical tones for general non-emergency announcements. VIBRA-Page - For silent alarm activation, most Minitor pagers can also vibrate without sounding an alarm tone. This is particularly useful in churches, schools, meetings, etc. where a loud noise would be disruptive. This feature is known as "VIBRA-Page".
Voice Record - Many Minitor pagers can also record up to 8 minutes (depending on the model and options) of voice/transmission after the pager activates. Controls - Physical controls (specifically on the Minitor III) include an "A, B, C, D" function knob, a power/volume knob, reset button, voice playback button, external speaker jack, and an amber and red LED. Depending on the model, the selection on the function knobs may do different things. Control examples - For example, function A may be selective call mode, while function B is the vibrate function. Function C monitors channel 2. D is the mode that is similar to a scanner. When the pager is turned on, eight short beeps are heard along with flashing of both LEDs. Holding down the reset button in selective call mode will monitor the channel for any transmission on that channel at that time, or pure static as the squelch is bypassed. Field Programmable - Some models have field programmable options such as Non-Priority Scan, Alert Duration, Priority Alert, On/Off Duty, Reset Options, and Push-To-Listen. Many Minitor pagers can be hooked up to a computer with a special cable and options changed. Durability - Unlike older models, the Minitor V is "rainproof" as it meets "Military Standard 810, Procedure 1 for driving rain". Belt Clip - A spring-loaded clip is attached to the back of each Minitor to allow the user to clip the pager onto a pocket or belt. Also, carrying cases and covers are made to protect the pager. Charging - Minitor pagers come standard with a charging stand and two rechargeable batteries. Amplified base unit - An optional "Charger/Amplifier" base can be bought. Bigger than the standard charging stand, the "Charger/Amplifier" base not only charges the pager, but has an external antenna for increased reception, an amplified audio out jack to drive a stand-alone speaker, and some models even incorporate a relay to activate external devices along with the pager. Some uses for this relay include: Turning on lights in a building such as a fire station, activating an external audio/visual alarm, etc. Accessories - Official Motorola accessories for the Minitor pagers include (including some listed above): Desktop Battery Charger, Desktop Battery Charger/Amplifier with Antenna and Relay, Vehicular Charger-Amp with Relay, Earpieces, Extra Loud Lapel Speaker, and Nylon Carrying Case. Disadvantages The audible alarm on the Minitor lasts only for the duration of the second activation tone. If there is bad reception, the pager may only sound a quick beep, and the user may not be alerted properly. This can be changed by editing the codeplug's "Alert Duration" from STD to Fixed; the user can then set the alert duration longer than the second tone. The user must be cautious, however, as setting the alert tone duration too high may cover some voice information. Also, some units may have the volume knob set to control the sound output of the audible alert as well. The user may have the volume turned down to an undetectable level either by accident or by carelessness, thus missing the page. A factory option for "Fixed Alert" (the only option on the earlier Minitor I), however, lets the alert tone override the volume and sound at maximum volume regardless of the volume knob's position. It is possible to program the pager to always vibrate when an alert is received, giving the possibility of either a silent (vibrating) alert or audible and vibrating alerts (Minitor I and II do not have vibrating capabilities as standard).
The vibrating motor in the newer (IV and V) Minitor pagers is quite strong in order to be felt in varying conditions, such as when performing heavy work. It is not uncommon for the vibrating motor in a pager, placed in a charger overnight and left in vibrate mode, to "walk" the pager and charger off of a table or nightstand. Minitor pagers are powered by a battery, which will eventually run down if not charged (a flashing red LED and audible alarm is used as a warning of low battery power). As the Minitor is portable, its electronics are not as sensitive as those of set-top or base radios and are usually less able to pick up weak or distant signals. See also Selective calling Radio receiver Plectron Dispatching References External links Motorola MINITOR Information on Batlabs Firefighting equipment Pagers
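To make the two-tone sequential activation described in the Activation section more concrete, here is an illustrative sketch added to this text (it is not taken from the article and does not describe Motorola's implementation; the block length, thresholds and the use of the Goertzel algorithm are example choices, while the 879 Hz and 358.6 Hz tones are the example frequencies mentioned above). It detects whether a given tone is present in a block of audio samples and then checks that tone A is followed by tone B:

    import math

    def goertzel_power(samples, sample_rate, freq):
        """Relative power of one frequency in a block of samples (Goertzel).
        Normalised so a unit-amplitude tone at `freq` gives a value near 1.0."""
        w = 2.0 * math.pi * freq / sample_rate
        coeff = 2.0 * math.cos(w)
        s_prev = s_prev2 = 0.0
        for x in samples:
            s = x + coeff * s_prev - s_prev2
            s_prev2, s_prev = s_prev, s
        power = s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2
        return power / (len(samples) / 2.0) ** 2

    def tone_present(samples, sample_rate, freq, threshold=0.5):
        return goertzel_power(samples, sample_rate, freq) > threshold

    def two_tone_page(blocks, sample_rate, tone_a, tone_b):
        """Very simplified decoder: True if some block contains tone A and a
        later block contains tone B (real pagers also check tone durations
        and gaps, which is omitted here)."""
        saw_a = False
        for block in blocks:
            if not saw_a and tone_present(block, sample_rate, tone_a):
                saw_a = True
            elif saw_a and tone_present(block, sample_rate, tone_b):
                return True
        return False

    # Synthetic test: 100 ms blocks at 8 kHz, tone A (879 Hz) then tone B (358.6 Hz).
    rate, n = 8000, 800
    make = lambda f: [math.sin(2 * math.pi * f * i / rate) for i in range(n)]
    print(two_tone_page([make(879.0), make(358.6)], rate, 879.0, 358.6))  # True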
Motorola Minitor
[ "Technology" ]
2,359
[ "Pagers", "Radio paging" ]
9,234,672
https://en.wikipedia.org/wiki/Home%20automation%20for%20the%20elderly%20and%20disabled
Home automation for the elderly and disabled focuses on making it possible for older adults and people with disabilities to remain at home, safe and comfortable. Home automation is becoming a viable option for older adults and people with disabilities who would prefer to stay in the comfort of their homes rather than move to a healthcare facility. This field uses much of the same technology and equipment as home automation for security, entertainment, and energy conservation but tailors it towards old people and people with disabilities. Concept There are two basic forms of home automation systems for the elderly: embedded health systems and private health networks. Embedded health systems integrate sensors and microprocessors in appliances, furniture, and clothing which collect data that is analyzed and can be used to diagnose diseases and recognize risk patterns. Private health networks implement wireless technology to connect portable devices and store data in a household health database. Due to the need for more healthcare options for the aging population "there is a significant interest from industry and policy makers in developing these technologies". Home automation is implemented in homes of older adults and people with disabilities in order to maintain their independence and safety, also saving the costs and anxiety of moving to a health care facility. For those with disabilities smart homes give them opportunity for independence, providing emergency assistance systems, security features, fall prevention, automated timers, and alerts, also allowing monitoring from family members via an internet connection. Telehealth implementation Background Telehealth is the use of electronic technology services to provide patient care and improve the healthcare delivery system. The term is often confused with telemedicine, which specifically involves remote clinical services of healthcare delivery. Telehealth is the delivery of remote clinical and non-clinical services of healthcare delivery. Telehealth promotes the diagnosis, treatment, education, and self-management away from health care providers and into people's homes. Reasons for implementation The goal of telehealth is to complement the traditional healthcare setting. There is an increased demand on the healthcare system from a growing elderly population and shortage of healthcare providers. Many elderly and disabled patients are faced with limited access to health care and providers. Telehealth may bridge the gap between patient demand and healthcare accessibility. Telehealth may also decrease healthcare costs and mitigate transportation concerns. For the elderly and disabled populations, telehealth would allow them to stay in the comfort and convenience of their homes. Elderly population Geriatrics is the role of healthcare in providing care to the elderly population. The elderly population involves many health complications. According to the National Institute of Health, "the main threats are non-communicable diseases, including heart, stroke, cancer, diabetes, hypertension, and dementia". Telehealth may help provide management and monitoring of chronic disease in patient homes. One telemonitoring device measures vital signs: blood pressure, pulse, oxygen saturation, and weight. Another telemonitoring device is video-conferencing, which can provide patient-provider consultation and electronic delivery of medication instructions and general health information. 
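As a minimal sketch of what vital-sign telemonitoring of the kind described above might do with a day's readings (added here for illustration only; the thresholds and readings are invented example values, not clinical guidance), out-of-range measurements can be flagged for a remote care provider:

    # Invented, illustrative acceptable ranges for a particular patient.
    RANGES = {
        "systolic_bp": (90, 140),   # mmHg
        "pulse": (50, 100),         # beats per minute
        "spo2": (92, 100),          # percent oxygen saturation
        "weight": (70.0, 75.0),     # kg; sudden gain can indicate fluid retention
    }

    def flag_readings(readings):
        """Return the measurements that fall outside the configured ranges."""
        alerts = []
        for name, value in readings.items():
            low, high = RANGES[name]
            if not (low <= value <= high):
                alerts.append((name, value))
        return alerts

    today = {"systolic_bp": 151, "pulse": 72, "spo2": 95, "weight": 73.2}
    for name, value in flag_readings(today):
        print(f"ALERT: {name} = {value} is outside the expected range")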
Some studies have analyzed the effectiveness of telehealth in the elderly population. Some have found positive telehealth effects, including reduced symptoms and improved self-efficacy among elderly patients with chronic conditions. Other studies have found the opposite effect, where telehealth home care produced greater mortality than the traditional hospital setting. Still other studies have found inconclusive results. Disabled population Persons with severe functional disabilities are statistically the highest users of all health care services and represent a large portion of health care costs and designated services. The disabled population requires many self-monitoring and self-management strategies that place a heavy burden on healthcare providers. Telecommunications technologies may provide a way to meet the healthcare demands of the disabled population. According to the National Institutes of Health, "the largest proportion of health care... result from individuals with severe functional disabilities, such as stroke and traumatic brain injury". Patients with functional disabilities may benefit from telehealth care. According to the World Health Organization, functional limitation refers to physical or mental conditions which impair, interfere with, or impede one or more of the individual's major life activities and instrumental activities of daily living. Patients with spina bifida, musculoskeletal disorders, mental illness, or neurological disorders may also benefit from telehealth care services. Telehealth technologies include vital sign telemonitoring devices, exercise routines, problem-solving assessments, and therapeutic self-care management tasks. Telehealth care, however, is not a complete form of care on its own but rather a supplement to care delivered in person. Ethical concerns and legalities Concerns about telehealth implementation include the limited body of research confirming conclusive benefits of telehealth in comparison to the traditional healthcare setting. Currently there is no definitive conclusion that telehealth is the superior mode of healthcare delivery. There are also ethical issues about patient autonomy and patient acceptance of telehealth as a mode of healthcare. Lack of face-to-face patient-provider care, in contrast to direct care in a traditional healthcare setting, is an ethical concern for many. In 2015 the Texas Medical Board ruled that state physicians had to physically meet patients before remotely treating ailments or prescribing medication. The telemedicine company Teladoc sued over the rule in Teladoc v. Texas Medical Board, arguing the rule violated antitrust laws by inflating prices and limiting the supply of health care providers in the state. The rule, due to take effect on June 3, 2015, was then stalled. Teladoc voluntarily dropped the lawsuit in 2017 after Texas passed a new law, which Teladoc Health had lobbied heavily for, allowing remote treatment without a prior in-person interaction. On September 15, 2017, the Texas Medical Board amended its regulations to allow state-licensed healthcare providers to care for patients without required face-to-face interaction, potentially affecting up to 28 million patients in Texas. Systems Home automation for healthcare can range from very simple alerts to elaborate computer-controlled network interfaces. 
Some of the monitoring or safety devices that can be installed in a home include lighting and motion sensors, environmental controls, video cameras, automated timers, emergency assistance systems, and alerts. Security To maintain the security of the home, many home automation systems integrate features such as remote keyless entry, which allows seniors to see who is at the door and then open it remotely. Home networks can also be programmed to automatically lock doors and shut blinds in order to maintain privacy. Emergency assistance systems and tools Emergency assistance for older adults and people with disabilities can be classified into three categories: First, Second, and Third Generation emergency assistance systems or tools. First generation These simple systems and tools include personal alarm systems and emergency response telephones that do not have to be integrated into a smart home system. A typical system consists of a small wireless pendant transceiver worn around the neck or wrist, together with a central unit plugged into a telephone jack that has a loudspeaker and microphone. When the pendant is activated, a 24-hour control center is contacted. Generally the control center speaks to the user, determines what help is required and, for example, dispatches emergency services. The control center also holds information about the user, such as medical symptoms and medication allergies. The unit has a built-in rechargeable battery backup and the ability to notify the control center if the battery is running low or if the system loses power. Modern systems use active wireless pendants that are polled frequently to report battery and signal-strength status; older-style pendants could have a failed battery, rendering them useless when needed in an emergency. Second generation These systems and tools generate alarms and alerts automatically if significant changes are observed in the user's vital signs. These systems are usually fully integrated into a home network and allow health professionals to monitor patients at home. One such system consists of an antenna that a patient holds over their implanted cardiac device to transmit data for downloading over the telephone line and viewing by the patient's physician. The collected data can be accessed by the patient or family members. Another example of this type of system is a Smart Shirt that measures heart rate, electrocardiogram results, respiration, temperature and other vital functions and alerts the patient or physician if there is a problem. Third generation These types of systems would help older adults and people with disabilities deal with loneliness and depression by connecting them with other elderly or disabled individuals through the Internet, reducing their sense of isolation. Reminder systems Home automation systems may include automatic reminder systems for the elderly. Such systems are connected to the Internet and make announcements over an intercom. They can prompt users about doctor's appointments and taking medicine, as well as everyday activities such as turning off the stove, closing the blinds, and locking doors. Users choose which activities they want to be reminded of. The system can be set up to automatically perform tasks based on user activity, such as turning on the lights or adjusting room temperature when the user enters specified areas. 
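As a rough illustration of this kind of activity-triggered rule, the sketch below maps an occupancy event to light and temperature actions; the room names, set-points, and the two device-control functions are hypothetical placeholders rather than the API of any particular smart home product.

# Minimal sketch of an occupancy-triggered automation rule (hypothetical rooms and set-points).
ROOM_RULES = {
    "living_room": {"lights_on": True, "temperature_c": 22},
    "bedroom": {"lights_on": True, "temperature_c": 20},
}

def set_lights(room, on):
    print(f"{room}: lights {'on' if on else 'off'}")  # stand-in for a real device call

def set_temperature(room, celsius):
    print(f"{room}: thermostat set to {celsius} C")   # stand-in for a real device call

def on_motion_detected(room):
    """Apply the configured actions when motion is sensed in a room."""
    rule = ROOM_RULES.get(room)
    if rule is None:
        return
    set_lights(room, rule["lights_on"])
    set_temperature(room, rule["temperature_c"])

# Example: the user walks into the living room.
on_motion_detected("living_room")

The same pattern extends naturally to the reminder behaviour described next: an event (a clock time or a sensed activity) is matched against a small table of user-chosen rules, and the corresponding announcement or device action is carried out.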
Other systems can remind users at home or away from home to take their medicine, and in what dose, using an alarm wristwatch with text messaging and a medical alert. Reminder systems can also prompt users about everyday tasks such as eating lunch or walking the dog. Some communities offer free telephone reassurance services to residents, which include both safety-check calls and reminders. These services have been credited with saving the lives of many elderly residents who choose to remain at home. Medication dispensing and spoon-feeding Smart homes can implement medication dispensing devices in order to ensure that necessary medications are taken at appropriate times. Automated pill dispensers release only the pills that are to be taken at a given time; locked versions are available for Alzheimer's patients. For diabetic patients, a talking glucose monitor allows the patient to check their blood sugar level and take the appropriate injection. Digital thermometers are able to recognize a fever and alert physicians. Blood pressure and pulse monitors dispense medication for hypertension when needed. There are also spoon-feeding robots. Home robotics Domestic robots, connected to the domotic network, are included to perform or help in household chores such as cooking and cleaning. Dedicated robots can administer medications and alert a remote caregiver if the patient is about to miss his or her medicine dose (oral or non-oral medications). Challenges The recent advances made in tailoring home automation toward the elderly have also drawn criticism. It has been stated that "Smart home technology will be helpful only if it is tailored to meet the individual needs of each patient". This currently creates a problem because many of the interfaces designed for home automation "are not designed to take functional limitations, associated with age, into consideration". Another problem is making the systems user-friendly for the elderly, who often have difficulty operating electronic devices. The cost of the systems has also presented a challenge, as the U.S. government currently provides no assistance to seniors who choose to install these systems (in some countries such as Spain the Dependency Law includes this assistance). The biggest concern expressed by potential users of smart home technology is "fear of lack of human responders or the possible replacement of human caregivers by technology", but home automation should be seen as something that augments, but does not replace, human care. See also Assisted living Disability robot and domestic robot Elderly care Floor plans and house navigation system Gerontechnology Healthcare robot Home robot Indoor positioning system Roujin Z, a film that uses assistive domotics as a central plot device. Transgenerational design References Further reading Slatalla, Michele. "Is 'Smart House' Still an Oxymoron?" New York Times. 31 July 2008. Assistive technology Home automation Patient safety Housing for the elderly Gerontology
Home automation for the elderly and disabled
[ "Technology", "Biology" ]
2,446
[ "Home automation", "Gerontology" ]
9,235,009
https://en.wikipedia.org/wiki/Records%20of%20Early%20English%20Drama
The Records of Early English Drama (REED) is a performance history research project, based at the University of Toronto, Ontario, Canada. It was founded in 1976 by a group of international scholars interested in understanding “the native tradition of English playmaking that apparently flourished in late medieval provincial towns” and formed the context for the development of the English Renaissance theatre, including the work of Shakespeare and his contemporaries. REED's primary focus is to locate, transcribe, edit, and publish historical documents from England, Wales, and Scotland containing evidence of drama, secular music, and other communal entertainment and mimetic ceremony from the late Middle Ages until 1642, when the Puritans closed the London public theatres. From its inception in 1976 to 2016, REED published twenty-seven print collections of records edited by over thirty international scholars. REED is also engaged in creating a collection of free digital resources for research and education including Patrons and Performances (2003) and Early Modern London Theatres (2011). In March 2017, REED moved to digital publication of records with the launch of REED Online, a publication site where records will be freely available. History During a 1970-71 research trip in York, England, to study manuscripts related to the York cycle of biblical plays (also known as the York Mystery Plays), Alexandra F. Johnston, an early drama scholar from the University of Toronto, came across a manuscript transcription of a 1433 indenture agreement between the leaders of the medieval Mercers' Guild and their pageant masters. The document contained details of a medieval pageant wagon and sophisticated staging unknown to researchers of the time. Johnston also met Margaret Dorrell, an Australian graduate student at the University of Leeds, who was working on a similar project related to the York records; the two women decided to collaborate. Within the next two years, Johnston and Dorrell met other scholars of medieval and Renaissance drama working independently on manuscripts from other English cities (David Galloway of the University of New Brunswick on Norwich, Reginald Ingram of the University of British Columbia on Coventry, and Lawrence Clopper of Indiana University Bloomington on Chester). The idea of a scholarly publishing project to find, transcribe, and edit documentary evidence of performance arose from these meetings and was met with interest by the individual researchers and their academic communities. In January 1974, Johnston circulated a position paper on the project. Discussions and planning followed and, in February 1975, the inaugural REED meeting was held at Victoria University in the University of Toronto. In 1975–76, Johnston received a Canada Council personal grant for the publication of the York records as a pilot project, and in late 1976, REED was officially launched with a Canada Council ten-year Major Editorial Grant for the proposed series of collections, establishing REED as a long-term research and publishing project. Because three of the four initial collections were edited by Canadian researchers, Toronto, Canada, became the home of the project. In 1979, REED published its first two collections of records: York, edited by Alexandra F. Johnston and Margaret Rogerson (née Dorrell), and Chester, edited by Lawrence D. Clopper. 
Since then the project has expanded its scope from major cities and towns to all the counties of England, Wales, and Scotland, based on historic pre-1642 county borders. After its inception in 1976, REED produced the bi-annual REED Newsletter which, in 1997, became the refereed scholarly journal Early Theatre. REED has had close ties to the English Department, the Centre for Medieval Studies (CMS), the Centre for Reformation and Renaissance Studies (CRRS), and the Graduate Centre for Study of Drama. From 1976 to 2009 the project was based at Victoria University in the University of Toronto. In 2009 the offices of the project moved to the English Department. REED retains active relationships with the English Department, the CMS, and the University of Toronto Libraries. REED's internal governance is provided by an executive board of senior scholars in early drama and related fields, with digital advisors and collections editors drawn from Canada, the United States, Australia, New Zealand, and the United Kingdom. REED has collaborated with the Poculi Ludique Societas (PLS) to mount four productions of full cycles of medieval biblical dramas: the York Plays (also called the York Mystery Plays) in 1977 and 1998, and the Chester Plays (also called the Chester Mystery Plays) in 1983 and 2010, with participation from international amateur theatre groups. In November 2002, REED, in partnership with the Art Gallery of Ontario, hosted the Picturing Shakespeare symposium, an exhibition of and an accompanying public symposium regarding the Sanders portrait, an Elizabethan painting reputed to be the only one of Shakespeare made during his lifetime. In addition to revealing evidence of vernacular entertainment activities, the research work for the collections produces a body of knowledge regarding professional travelling entertainers, their patronage, and their performance venues. This cumulative information was first launched for public use through the Patrons and Performances website in 2003. In 2011, REED collaborated with the Department of Digital Humanities, King's College London, and the Department of English at the University of Southampton to create Early Modern London Theatres (EMLoT), a research database and educational resource, with learning modules. EMLoT gathers documents related to professional theatres north and south of the Thames up to 1642 and bibliographic information about their subsequent transcriptions, documenting how scholars “got [their] information about the early theatres, from whom and when.” In 2016, to mark the 400th anniversary of Shakespeare's death, REED collaborated with the BBC and The British Library to produce an ongoing public website titled Shakespeare on Tour. Many REED editors contributed stories and images from their research in the Elizabethan period to help raise “the curtain on performances of The Bard’s plays countrywide from the 16th Century to the present day.” Throughout its existence, REED maintained its primary focus and published about six collections each decade. In 2015, REED published its last print collection (Civic London to 1558, edited by Anne Lancashire), and in March 2017, the first digital collection (Staffordshire, edited by Alan B. Somerset) was made freely available on its publication website, REED Online. All subsequent collections will be added to this database and website. 
REED has received substantial funding from private individuals and foundations (including the Jackman Foundation), the Canada Council, and the Social Sciences and Humanities Research Council in Canada; the National Endowment for the Humanities and the Andrew W. Mellon Foundation in the U.S.; as well as the Arts and Humanities Research Council and The British Academy in the U.K. Notes References External links Records of Early English Drama—Official website Medieval drama English drama Folk plays 16th-century theatre 17th-century theatre History of theatre Digital humanities Text Encoding Initiative Renaissance and early modern research centres
Records of Early English Drama
[ "Technology" ]
1,370
[ "Digital humanities", "Computing and society" ]
9,235,239
https://en.wikipedia.org/wiki/GRO%20J1655%E2%88%9240
GRO J1655−40 is a binary star consisting of an evolved F-type primary star and a massive, unseen companion, which orbit each other once every 2.6 days in the constellation of Scorpius. Gas from the surface of the visible star is accreted onto the dark companion, which appears to be a stellar black hole with several times the mass of the Sun. The optical companion of this low-mass X-ray binary is a subgiant F star. Along with GRS 1915+105, GRO J1655−40 is one of at least two galactic "microquasars" that may provide a link between the supermassive black holes generally believed to power extragalactic quasars and more local accreting black hole systems. In particular, both display the radio jets characteristic of many active galactic nuclei. The distance from the Solar System is probably about 11,000 light years, or approximately half-way from the Sun to the Galactic Center, but a closer distance of ~2800 ly is not ruled out. GRO J1655−40 and its companion are moving through the Milky Way at around 112 km/s (250,000 miles per hour), in a galactic orbit that depends on the system's exact distance but is mostly interior to the "Solar circle", d~8,500 pc, and within 150 pc (~500 ly) of the galactic plane. For comparison, the Sun and other nearby stars have typical speeds on the order of 20 km/s relative to the average velocity of stars moving with the galactic disk's rotation in the solar neighborhood. This unusually high speed supports the idea that the black hole formed from the collapse of the core of a massive star. As the core collapsed, its outer layers exploded as a supernova. Such explosions often seem to leave the remnant system moving through the galaxy with unusually high speed. The outburst source was found to exhibit quasi-periodic oscillations (QPOs) whose frequency increases monotonically during the rising phase of the outburst and decreases monotonically during the declining phase. This can be modeled by assuming an oscillating shock wave that moves steadily closer to the black hole as the Keplerian accretion rate rises during the rising phase, and moves away from the black hole as viscosity is withdrawn during the declining phase. The shock appears to be propagating at a speed of a few meters per second. See also List of nearest known black holes NGC 6242 References External links SIMBAD, V* V1033 Sco -- High Mass X-ray Binary, "GRO J1655-40" X-ray binaries Stellar black holes Scorpius Microquasars Scorpii, V1033 F-type subgiants
GRO J1655−40
[ "Physics", "Astronomy" ]
578
[ "Black holes", "Stellar black holes", "Unsolved problems in physics", "Constellations", "Scorpius" ]
9,236,652
https://en.wikipedia.org/wiki/N%C3%A9ron%E2%80%93Tate%20height
In number theory, the Néron–Tate height (or canonical height) is a quadratic form on the Mordell–Weil group of rational points of an abelian variety defined over a global field. It is named after André Néron and John Tate. Definition and properties Néron defined the Néron–Tate height as a sum of local heights. Although the global Néron–Tate height is quadratic, the constituent local heights are not quite quadratic. Tate (unpublished) defined it globally by observing that the logarithmic height $h_L$ associated to a symmetric invertible sheaf $L$ on an abelian variety $A$ is "almost quadratic," and used this to show that the limit $\hat{h}_L(P) = \lim_{N\to\infty} h_L(NP)/N^2$ exists, defines a quadratic form on the Mordell–Weil group of rational points, and satisfies $\hat{h}_L(P) = h_L(P) + O(1)$, where the implied constant is independent of $P$. If $L$ is anti-symmetric, that is $[-1]^*L \cong L^{-1}$, then the analogous limit $\hat{h}_L(P) = \lim_{N\to\infty} h_L(NP)/N$ converges and satisfies $\hat{h}_L(P) = h_L(P) + O(1)$, but in this case $\hat{h}_L$ is a linear function on the Mordell–Weil group. For general invertible sheaves, one writes $L^{\otimes 2} = (L \otimes [-1]^*L) \otimes (L \otimes ([-1]^*L)^{-1})$ as a product of a symmetric sheaf and an anti-symmetric sheaf, and then $\hat{h}_L(P) = \tfrac{1}{2}\hat{h}_{L \otimes [-1]^*L}(P) + \tfrac{1}{2}\hat{h}_{L \otimes ([-1]^*L)^{-1}}(P)$ is the unique quadratic function satisfying $\hat{h}_L(P) = h_L(P) + O(1)$. The Néron–Tate height depends on the choice of an invertible sheaf $L$ on the abelian variety, although the associated bilinear form depends only on the image of $L$ in the Néron–Severi group of $A$. If the abelian variety $A$ is defined over a number field K and the invertible sheaf is symmetric and ample, then the Néron–Tate height is positive definite in the sense that it vanishes only on torsion elements of the Mordell–Weil group $A(K)$. More generally, $\hat{h}_L$ induces a positive definite quadratic form on the real vector space $A(K) \otimes \mathbb{R}$. On an elliptic curve, the Néron–Severi group is of rank one and has a unique ample generator, so this generator is often used to define the Néron–Tate height, which is denoted $\hat{h}$ without reference to a particular line bundle. (However, the height that naturally appears in the statement of the Birch and Swinnerton-Dyer conjecture is twice this height.) On abelian varieties of higher dimension, there need not be a particular choice of smallest ample line bundle to be used in defining the Néron–Tate height, and the height used in the statement of the Birch–Swinnerton-Dyer conjecture is the Néron–Tate height associated to the Poincaré line bundle on $A \times \hat{A}$, the product of $A$ with its dual. The elliptic and abelian regulators The bilinear form associated to the canonical height $\hat{h}$ on an elliptic curve E is $\langle P, Q \rangle = \tfrac{1}{2}\big(\hat{h}(P+Q) - \hat{h}(P) - \hat{h}(Q)\big)$. The elliptic regulator of E/K is $\operatorname{Reg}(E/K) = \det\big(\langle P_i, P_j \rangle\big)_{1 \le i,j \le r}$, where P1,...,Pr is a basis for the Mordell–Weil group E(K) modulo torsion (cf. Gram determinant). The elliptic regulator does not depend on the choice of basis. More generally, let A/K be an abelian variety, let B ≅ Pic0(A) be the dual abelian variety to A, and let P be the Poincaré line bundle on A × B. Then the abelian regulator of A/K is defined by choosing a basis Q1,...,Qr for the Mordell–Weil group A(K) modulo torsion and a basis η1,...,ηr for the Mordell–Weil group B(K) modulo torsion and setting $\operatorname{Reg}(A/K) = \det\big(\langle Q_i, \eta_j \rangle_{P}\big)_{1 \le i,j \le r}$, where $\langle \cdot , \cdot \rangle_{P}$ is the height pairing attached to the Poincaré line bundle. (The definitions of elliptic and abelian regulator are not entirely consistent, since if A is an elliptic curve, then the latter is $2^r$ times the former.) The elliptic and abelian regulators appear in the Birch–Swinnerton-Dyer conjecture. Lower bounds for the Néron–Tate height There are two fundamental conjectures that give lower bounds for the Néron–Tate height. In the first, the field K is fixed and the elliptic curve E/K and point P ∈ E(K) vary, while in the second, the elliptic Lehmer conjecture, the curve E/K is fixed while the field of definition of the point P varies. 
(Lang)      for all and all nontorsion (Lehmer)     for all nontorsion In both conjectures, the constants are positive and depend only on the indicated quantities. (A stronger form of Lang's conjecture asserts that depends only on the degree .) It is known that the abc conjecture implies Lang's conjecture, and that the analogue of Lang's conjecture over one dimensional characteristic 0 function fields is unconditionally true. The best general result on Lehmer's conjecture is the weaker estimate due to Masser. When the elliptic curve has complex multiplication, this has been improved to by Laurent. There are analogous conjectures for abelian varieties, with the nontorsion condition replaced by the condition that the multiples of form a Zariski dense subset of , and the lower bound in Lang's conjecture replaced by , where is the Faltings height of . Generalizations A polarized algebraic dynamical system is a triple consisting of a (smooth projective) algebraic variety , an endomorphism , and a line bundle with the property that for some integer . The associated canonical height is given by the Tate limit where is the n-fold iteration of . For example, any morphism of degree yields a canonical height associated to the line bundle relation . If is defined over a number field and is ample, then the canonical height is non-negative, and ( is preperiodic if its forward orbit contains only finitely many distinct points.) References General references for the theory of canonical heights J. H. Silverman, The Arithmetic of Elliptic Curves, External links Number theory Algebraic geometry Abc conjecture
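As a point of reference for the lower-bound conjectures and the dynamical canonical height discussed above, the following LaTeX sketch records commonly cited formulations; the notation (a positive constant $c$, $D = [K(P):K]$, and $\varphi^{(n)}$ for the $n$-fold iterate) is a standard convention assumed for the sketch, not a quotation from the sources cited above.

% Elliptic Lehmer conjecture (a commonly cited form), for nontorsion P with D = [K(P):K]:
\hat{h}(P) \;\ge\; \frac{c(E/K)}{D}
% Canonical height of a polarized dynamical system (X, \varphi, L) with \varphi^{*}L \cong L^{\otimes d}, d \ge 2:
\hat{h}_{\varphi, L}(P) \;=\; \lim_{n \to \infty} \frac{h_L\!\left(\varphi^{(n)}(P)\right)}{d^{\,n}}
% Over a number field with L ample, \hat{h}_{\varphi, L} \ge 0, and it vanishes exactly at preperiodic points.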
Néron–Tate height
[ "Mathematics" ]
1,173
[ "Discrete mathematics", "Fields of abstract algebra", "Algebraic geometry", "Abc conjecture", "Number theory" ]
9,237,787
https://en.wikipedia.org/wiki/The%20Californian%20Ideology
"The Californian Ideology" is a 1995 essay by English media theorists Richard Barbrook and Andy Cameron of the University of Westminster. Barbrook calls it a "critique of dotcom neoliberalism". In the essay, Barbrook and Cameron argue that the rise of networking technologies in Silicon Valley in the 1990s was linked to American neoliberalism and a paradoxical hybridization of beliefs from the political left and right in the form of hopeful technological determinism. The essay was published in Mute magazine in 1995 and later appeared on the nettime Internet mailing list. A revised version was published in Science as Culture in 1996. The essay has since been further revised and translated. Andrew Leonard of Salon called the essay "one of the most penetrating critiques of neo-conservative digital hypesterism yet published". In contrast, Wired magazine publisher Louis Rossetto wrote that the essay showed "profound ignorance of economics". Critique During the 1990s, members of the entrepreneurial class in the information technology industry in Silicon Valley vocally promoted an ideology that combined the ideas of Marshall McLuhan with elements of radical individualism, libertarianism, and neoliberal economics, using publications like Wired magazine to promulgate their ideas. This ideology mixed New Left and New Right beliefs based on their shared interest in anti-statism, the counterculture of the 1960s, and techno-utopianism. Proponents believed that in a post-industrial, post-capitalist, knowledge-based economy, the exploitation of information and knowledge would drive growth and wealth creation while diminishing the older power structures of the state in favor of connected individuals in virtual communities. Critics contend that the Californian Ideology strengthens corporations' power over the individual, increases social stratification, and is distinctly Americentric. Barbrook argues that members of the digerati who adhere to the Californian Ideology embrace a form of reactionary modernism. According to him, "American neo-liberalism seems to have successfully achieved the contradictory aims of reactionary modernism: economic progress and social immobility. Because the long-term goal of liberating everyone will never be reached, the short-term rule of the digerati can last forever." Influences Sociologist Thomas Streeter of the University of Vermont has said that the Californian Ideology appeared as part of a pattern of Romantic individualism with Stewart Brand as a key influence. Adam Curtis connects the Californian Ideology's origins to Ayn Rand's philosophy of Objectivism. Reception While generally agreeing with Barbrook and Cameron's central thesis, David Hudson of Rewired took issue with their portrayal of Wired magazine's position as representative of every viewpoint in the industry. "What Barbrook is saying between the lines is that the people with their hands on the reins of power in all of the wired world...are guided by an utterly skewed philosophical construct." Hudson maintained that there were a multitude of different ideologies at work. Andrew Leonard of Salon called the essay "a lucid lambasting of right-wing libertarian digerati domination of the Internet" and "one of the most penetrating critiques of neo-conservative digital hypesterism yet published". Leonard also noted what he called former Wired editor and publisher Louis Rossetto's "vitriolic" response. Rossetto's rebuttal, also published in Mute, criticized the essay as showing "profound ignorance of economics". 
He also criticized the essay's suggestion that "a uniquely European (but not even vaguely defined) mixed economy solution" would be better for the internet, arguing that Europe's technological development was hampered by "huge plutocratic organizations like Siemens and Philips [that conspire] with bungling bureaucracies to hoover up taxes collected by local and Euro-wide state institutions and shovel them into mammoth technology projects which have proven to be, almost without exception, disasters" and by "High European taxes which have restricted spending on technology and hence retarded its development". Gary Kamiya, also of Salon, found the essay's main points valid, but, like Rossetto, attacked Barbrook's and Cameron's "ludicrous academic-Marxist claim that high-tech libertarianism somehow represents a recrudescence of racism." Architecture historian Kazys Varnelis of Columbia University found that in spite of the privatization the Californian Ideology advocates, Silicon Valley's and California's economic growth was "made possible only due to exploitation of the immigrant poor and defense funding...government subsidies for corporations and exploitation of non-citizen poor: a model for future administrations." In the 2011 documentary All Watched Over by Machines of Loving Grace, Curtis concludes that the Californian Ideology failed to live up to its claims: In 2015, Wired wrote, "Denounced as the work of 'looney lefties' by Silicon Valley's boosters when it first appeared, The Californian Ideology has since been vindicated by the corporate take-over of the Net and the exposure of the NSA's mass surveillance programmes." In 2022, Hasmet M. Uluorta and Lawrence Quill wrote, "The recent tech-lash, concerns over the gig-economy, and the dubious imperatives of datamining, require us to reconsider the prospects for open societies that rely upon platforms as we enter the next phase of the Californian Ideology." See also Paulina Borsook, Cyberselfish (2000) Carmen Hermosillo Corporatocracy Cyber-utopianism Dark Enlightenment Dot-com company Intellectual property Libertarian transhumanism Surveillance capitalism Technocracy Technocapitalism Technolibertarianism The Venus Project TESCREAL Notes References Barbrook, Richard. Andy Cameron. (1996) [1995] "The Californian Ideology". Science as Culture 6.1 (1996): 44–72. Barbrook, Richard. Andy Cameron (1995) Basic Banalities. Barbrook, Richard. (2000) [1999]. "Cyber-Communism: How The Americans Are Superseding Capitalism In Cyberspace". Science as Culture. 9 (1), 5-40. . Borsook, Paulina. (2000). Cyberselfish: A Critical Romp Through the Terribly Libertarian Culture of High Tech. PublicAffairs. . Curtis, Adam (2011). "Love and Power". All Watched Over by Machines of Loving Grace. BBC. Hudson, David. (June 24, 1996). "The Other Californians". Rewired: Journal of a Strained Net. Kamiya, Gary. (January 20, 1997). "Smashing the state: The strange rise of libertarianism". Salon.com. Leonard, Andrew. (September 10, 1999). "The Cybercommunist Manifesto". Salon.com. May, Christopher. (2002). The Information Society: A Sceptical View. Wiley-Blackwell. . Ouellet, Maxime. (2010). "Cybernetic capitalism and the global information society: From the global panopticon to a 'brand' new world". In Jacqueline Best and Matthew Paterson, Cultural Political Economy. 10. Taylor & Francis. . Rossetto, Louis. (1996). "19th Century Nostrums are not Solutions to 21st Century Problems". Mute. 1 (4). Streeter, Thomas. (1999). 
'That Deep Romantic Chasm': Libertarianism, Neoliberalism, and the Computer Culture. In Andrew Calabrese and Jean-Claude Burgelman, eds., Communication, Citizenship, and Social Policy: Re-Thinking the Limits of the Welfare State. Rowman & Littlefield, 49–64. Turner, Fred. (2006). From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism. University Of Chicago Press. . Varnelis, Kazys. (2009). "Complexity and Contradiction in Infrastructure ". Ph.D. Lecture Series. Columbia Graduate School of Architecture, Planning, and Preservation. Further reading Barbrook, Richard. (2007). Imaginary Futures: From Thinking Machines to the Global Village. Pluto. . Dyson, Esther. George Gilder, George Keyworth, Alvin Toffler. (1994). "Cyberspace and the American Dream: A Magna Carta for the Knowledge Age". Future Insight. Progress & Freedom Foundation. Flew, Terry. (2002). "The 'New Empirics' in Internet Studies and Comparative Internet Policy". In Fibreculture Conference, 5–8 December, 5–8 December. Melbourne. Gere, Charlie. (2002). Digital Culture. Reaktion Books. . Halberstadt, Mitchell. (January 20, 1997). "Beyond California". Rewired: Journal of a Strained Net. Hudson, David. (1997). Rewired. Macmillan Technical Pub. . Lovink, Geert. (2009) [2002]. Dynamics of Critical Internet Culture (1994-2001). Amsterdam: Institute of Network Cultures. . Pearce, Celia. (1996). The California Ideology: An Insider's View. Mute. 1 (4). External links The Californian Ideology at the Hypermedia Research Centre The Californian Ideology revised SaC version Ideologies California culture Computing culture Technological utopianism Controversies within libertarianism Criticisms of economics 1995 essays Transhumanism
The Californian Ideology
[ "Technology", "Engineering", "Biology" ]
1,933
[ "Computing culture", "Genetic engineering", "Transhumanism", "Computing and society", "Ethics of science and technology" ]
9,237,823
https://en.wikipedia.org/wiki/Verizon%20Business
Verizon Business (formerly known as Verizon Enterprise Solutions) is a division of Verizon Communications based in Basking Ridge, New Jersey, that provides services and products for Verizon's business and government clients. It was formed as Verizon Business in January 2006 and relaunched as Verizon Enterprise Solutions on January 1, 2012. Verizon reorganized into three units in January 2019, which included Verizon Business Group. Overview According to analysts, Verizon is strengthening its position as a global leader in enterprise managed services. David Molony, Principal Analyst for Enterprise Services at Omdia, highlighted that Verizon leads worldwide based on the number of contracts and their overall value. As Verizon adapts its training approach to cater to the evolving needs of future leaders, it is also equipping its executives to act as catalysts for change. A recent RootMetrics report declared Verizon the "undisputed leader" in network reliability and coverage. The analysis showed Verizon outperformed competitors across both urban and rural regions. Additionally, Verizon Business has unveiled an advanced smartphone management solution aimed at simplifying processes and addressing challenges faced by business owners and IT teams. Verizon Business was created following Verizon's acquisition of MCI Communications in January 2006. The division became Verizon Enterprise Solutions on January 1, 2012 and is based in Basking Ridge, New Jersey. Verizon Enterprise Solutions is the division of Verizon Communications that manages Verizon's business and government clients. The division's network and services were available in more than 150 countries and it had employees in 75 countries in 2013. Verizon Business operated 200 data centers in 22 countries, providing cloud, hosting and Internet colocation services to customers in 2013. It also had partial ownership in 80 submarine cable networks worldwide, including the SEA-ME-WE 4, Trans-Pacific Express, and the Europe India Gateway systems in 2011. John Stratton led the division from January 2012 until April 2014 when Chris Formant was named president of the unit. Verizon reorganized into three distinct units starting in January 2019 Verizon Consumer Group, Verizon Business Group, and Verizon Media Group. When this was announced, it was also announced that Tami Erwin, then executive vice president of wireless operations, would lead Verizon Business. Erwin was replaced in July 2022 by Sowmyanarayan Sampath as CEO of Verizon Business. In March 2023, Kyle Malady replaced Sampath as CEO. Products and services Verizon Business provides products both wireless and wireline for enterprise, small business and government. Verizon Business Complete delivers a comprehensive, end-to-end solution, offering exceptional scalability and flexibility through Verizon’s high-speed, dependable 5G network. As IT departments take on the challenge of driving digital transformation—ranging from integrating GenAI to bolstering cybersecurity—Verizon Business Complete allows these teams to delegate smartphone management and focus on more strategic, high-value initiatives,” said Iris Meijer, Chief Product & Marketing Officer at Verizon Business. Networks The company provides Private IP services and networks, as well as managed WAN and LAN services, among other networking services. Verizon also operates a global IP network that reaches 150 countries. In January 2012, Verizon began its Private IP Wireless (LTE) service, which combines 4G LTE with Verizon’s MPLS IP VPN. 
Cloud computing and data centers Verizon Business offers cloud and data center services through its 11 cloud-enabled data centers. Six of these are in the United States, including NAP of the Americas, its flagship Internet exchange point and colocation center. Verizon also has approximately 50 regional data centers and has network access points in the United States, Europe and Latin America. Verizon offers colocation and managed services through these data centers. In August 2011, Verizon purchased CloudSwitch. CloudSwitch's software allowed Verizon to offer clients the ability to use their existing applications with cloud services. Verizon had a fabric-based cloud infrastructure called Verizon Cloud, which was in beta testing in 2013. Verizon Cloud has two components: Verizon Cloud Compute and Verizon Cloud Storage. Seven data centers support Verizon Cloud. As of April 2014, the company's Secure Cloud Interconnect (SCI) service allows enterprise customers to connect their Private IP networks to Verizon's cloud services and to other cloud platforms, including Equinix and Microsoft. In December 2016 Verizon agreed to sell its US data centers business to Equinix Inc for $3.6 billion in cash. The deal includes 24 facilities across 15 metropolitan markets. Connected devices Verizon offers machine-to-machine (M2M) solutions for clients. Verizon established "Innovation Centers" in both Boston and San Francisco to help clients with M2M development. Examples of Verizon Business M2M offerings include digital signage, smart cities, smart meters, fleet management, and asset tracking. Verizon acquired Hughes Telematics in June 2012, expanding the division's M2M capabilities, particularly in telematics, which deals with vehicle telecommunications and technology. After the acquisition, in March 2013, Verizon Enterprise Solutions began offering Networkfleet solutions, a service which tracks and analyzes data about commercial vehicle fleets to help customers optimize routes and manage their fleet vehicles and employees. Security Verizon provides security management services for its cloud and mobility products. These include threat management tools and protection services, monitoring, analytics, incident response, and forensics investigations. It also offers identity and access management in both the United States and Europe. In November 2013, Verizon Enterprise Solutions introduced Managed Certificate Services, which provide a cloud-based means for businesses to secure connections and data between various types of machines and devices. Other products Additional items offered by Verizon Business include wired and wireless voice, FiOS, and data and Internet services. Mobility products offered include mobile workforce manager, mobile application management, and mobility pro services. The Verizon Data Breach Investigations Report (DBIR) offers a comprehensive look at recent trends and activity in security and data breaches. See also Sprint Corporation Comcast AT&T Corporation References Verizon Cloud platforms
Verizon Business
[ "Technology" ]
1,306
[ "Cloud platforms", "Computing platforms" ]
9,238,407
https://en.wikipedia.org/wiki/Renewable%20portfolio%20standards%20in%20the%20United%20States
A Renewable Portfolio Standard (RPS) is a regulation that requires the increased production of energy from renewable energy sources, such as wind, solar, biomass, and geothermal. Such standards have been adopted in 38 of the 50 U.S. states and the District of Columbia. A United States federal RPS is referred to as a Renewable Electricity Standard (RES). Several states have clean energy standards, which also allow for resources that do not produce emissions, such as large hydropower and nuclear power. The RPS mechanism generally places an obligation on electricity supply companies to produce a specified fraction of their electricity from renewable energy sources. Certified renewable energy generators earn certificates for every unit of electricity they produce and can sell these along with their electricity to supply companies. Supply companies then pass the certificates to some form of regulatory body to demonstrate their compliance with their regulatory obligations. Because it is a market mandate, the RPS relies almost entirely on the private market for its implementation. Unlike feed-in tariffs, which guarantee purchase of all renewable energy regardless of cost, RPS programs tend to allow more price competition between different types of renewable energy, although competition can be limited by eligibility rules and multipliers within RPS programs. Those supporting the adoption of RPS mechanisms claim that market implementation will result in competition, efficiency, and innovation that will deliver renewable energy at the lowest possible cost, allowing renewable energy to compete with cheaper fossil fuel energy sources. Program diversity Of all the state-based RPS programs in place today, no two are the same. Each has been designed taking into account state-specific policy objectives (e.g. economic growth, diversity of energy supply, environmental concerns), local resource endowment, political considerations, and the capacity to expand renewable energy production. At the most basic level, this gives rise to differing RPS targets and years (e.g. Arizona's 15% by 2025 and Colorado's 30% by 2020). Other factors in program design include resource eligibility, in-state requirements, new build requirements, technology favoritism, lobbying by industry associations and non-profit groups, cost caps, program coverage (IOUs versus Cooperatives and Municipal utilities), cost recovery by utilities, penalties for non-compliance, rules regarding REC creation and trading, and additional non-binding goals. Since RPS programs create a mandate to purchase renewable energy, they create a lucrative captive market of buyers for renewable energy producers who are eligible in a particular state's RPS program to issue RECs. A state may choose to promote new investment in renewable energy generation capacity by excluding existing renewable facilities, such as hydroelectric plants or geothermal plants, from qualifying under its RPS program. Many states that have mandatory Renewable Portfolio Standards also have additional voluntary targets, either for the total proportion of renewable energy or for a particular technology type. In many states, municipalities and cooperatives are exempt from the RPS target, have a lower target, or are required to develop their own targets. Furthermore, in some states such as Minnesota, individual utilities (e.g., Xcel Energy) are singled out for special treatment. Renewable energy certificates (RECs) States with RPS programs have associated renewable energy certificate trading programs. 
RECs provide a mechanism by which to track the amount of renewable power being sold and to financially reward eligible power producers. For each unit of power that an eligible producer generates, a certificate or credit is issued. These can then be sold either in conjunction with the underlying power or separately to energy supply companies. A market exists for RECs because energy supply companies are required to redeem certificates equal to their obligation under the RPS program. State-specific programs or various applications (e.g., WREGIS, M-RETS, NEPOOL GIS) are used to track REC issuance and ownership. These credits can in some programs be 'banked' (for use in future years) or borrowed (to meet current year commitments). There is a great deal of variety among the states in the handling and functioning of RECs, and this will be a major issue in integrating state and federal programs. Multipliers RPS multipliers adjust the amount of renewable energy credits (RECs) awarded (up or down) for each MWh of electricity produced, based on its source. The definition of what is considered "renewable energy" varies (for example, whether nuclear power qualifies), and whether an RPS program should account for the environmental damage of a renewable energy source (for example, hydroelectric dams, bird strikes from wind turbines, earthquakes induced by geothermal plants, or water use by solar thermal plants) also affects RPS program design and implementation. A state can also use a multiplier as a form of protectionism, favoring local renewable energy generators over out-of-state generators. Since RECs are regulated at a state level, their ability to be traded over state lines varies. Solar renewable energy certificates (SRECs) Over 16 of the approximately 30 states with RPS programs have also established a set-aside for solar energy. This results in the creation and trading of RECs specific to solar, known as solar renewable energy certificates (SRECs). With a separate market for SRECs, states are able to ensure that a portion of their renewable energy comes from solar. As a result, states with solar carve-outs, such as New Jersey, have had more success in promoting solar energy through the RPS than states, such as Texas, with a generic REC market or REC multiplier. Tiers and set-asides Energy supply companies need to show that they have acquired a particular percentage of their power sales from the designated technology type. Multiple technology types are bundled together in tiers or classes with similar effect. Not all states have set-asides or tiers (some preferring to promote particular technologies through credit multipliers), and each state that groups technologies together in a tier does so differently. Eligible technologies Every state defines renewable technologies differently. Many states exclude existing renewable facilities from benefiting from an RPS program. A state's definition of eligible technologies is also driven by the objectives of the program. Programs designed to promote diversity in generation types may include or promote technologies different from programs designed to achieve environmental goals. In a 2011 report published by the Union of Concerned Scientists, Doug Koplow said: Nuclear power should not be eligible for inclusion in a renewable portfolio standard. Nuclear power is an established, mature technology with a long history of government support. 
Furthermore, nuclear plants are unique in their potential to cause catastrophic damage (due to accidents, sabotage, or terrorism); to produce very long-lived radioactive wastes; and to exacerbate nuclear proliferation. Penalties To motivate compliance, states with enforceable standards impose penalties on utilities that fail to reach the specified targets. States may set fixed penalty values or leave penalty amounts to the regulator's discretion when suppliers fail to meet a renewable target. Where specific technologies are promoted through either tiers or set-aside provisions, the penalties for missing these targets are typically separate and higher. Some states have higher penalties for repeat violations and others escalate penalties on a yearly basis according to price indices. Cost caps All states either place caps on the cost of the program or include some form of 'escape clause' whereby the regulatory authority can suspend the program or exempt utilities from meeting its requirements. The need for such measures arises from the difficulty of estimating in advance the actual cost of the RPS program. The realized cost to the utility and the ratepayer is not known until the supply and cost base of renewable power, along with actual demand, is established. However, likely costs can be estimated, and some states appear to have set cost caps low enough that complete RPS requirements could not be fulfilled without a significant decrease in renewables costs. Cost recovery With few exceptions, utilities are allowed to recover the additional cost of procuring renewable power. The method by which this can be achieved varies by state. Some states opt for a ratepayer surcharge while others require utilities to include costs in rate base. In some instances, utilities are even able to recover the cost of penalties associated with non-compliance. Policy by jurisdiction The federal government has discussed enacting a nationwide RPS in the future. Such a policy would establish a single common goal for every state in the country, in contrast to the state-by-state patchwork shown in the table below. If the federal government does pass a national Renewable Portfolio Standard, it could be problematic for some states. Because each state has a unique environmental landscape, states differ in their capacity to produce renewable energy, and states with fewer renewable resources could be penalized for that limitation. RPS mechanisms have tended to be most successful in stimulating new renewable energy capacity in the United States where they have been used in combination with the federal Production Tax Credit (PTC). In periods when the PTC has been withdrawn, the RPS alone has often proven insufficient to incentivise large volumes of new capacity. Federal The Public Utility Regulatory Policies Act is a law passed in 1978 by the United States Congress as part of the National Energy Act that is meant to promote greater use of renewable energy. In 2009, the US Congress considered federal-level RPS requirements. The American Clean Energy and Security Act, reported out of committee in July by the Senate Committee on Energy & Natural Resources, includes a Renewable Electricity Standard that calls for 3% of U.S. electrical generation to come from non-hydro renewables by 2011–2013. However, the proposed Support Renewable Energy Act died in the 111th Congress. 
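To tie together the REC, multiplier, and penalty mechanics described in the sections above, the following is a minimal accounting sketch; the generator mix, multiplier values, obligation percentage, and penalty price are hypothetical illustrations rather than any state's actual rules.

# Minimal sketch of RPS compliance accounting with RECs, multipliers, and a penalty.
# All numbers are hypothetical illustrations, not any state's actual rules.
MULTIPLIERS = {"wind": 1.0, "solar": 2.0, "biomass": 1.0}   # RECs credited per MWh

def recs_earned(generation_mwh):
    """Credit RECs for each technology, applying its multiplier."""
    return sum(mwh * MULTIPLIERS.get(tech, 1.0) for tech, mwh in generation_mwh.items())

def compliance(retail_sales_mwh, generation_mwh, obligation=0.20, penalty_per_rec=50.0):
    """Return (RECs required, RECs held, shortfall penalty in dollars)."""
    required = obligation * retail_sales_mwh
    held = recs_earned(generation_mwh)
    shortfall = max(0.0, required - held)
    return required, held, shortfall * penalty_per_rec

# Example: a utility selling 1,000,000 MWh under a 20% obligation.
# Wind earns 150,000 RECs and solar earns 40,000 (20,000 MWh counted double),
# leaving a 10,000-REC shortfall on which a penalty is owed.
print(compliance(1_000_000, {"wind": 150_000, "solar": 20_000}))

Real programs layer further detail on top of this arithmetic, such as banking and borrowing of credits, separate solar or tiered targets, and cost caps, but the basic compliance test is the comparison of credits held against the obligation on retail sales.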
In 2007, the Edison Electric Institute, a trade association for America's investor-owned utilities, reiterated its continuing opposition to a nationwide RPS; among the reasons given were that it conflicts with and preempts existing RES programs passed in many states, that it does not adequately consider the uneven distribution of renewable resources across the country, and that it creates inequities among utility customers by specifically exempting all rural electric cooperatives and government-owned utilities from the RES mandate. The American Legislative Exchange Council (ALEC) drafted the model bill Electricity Freedom Act, which ALEC-affiliated representatives have attempted to roll out in various states and which "would end requirements for states to derive a specific percentage of their electricity needs from renewable energy sources." After being unable to stop the approval of this model legislation, the American Wind Energy Association and the Solar Energy Industries Association allowed their ALEC memberships to lapse after one year. Different state RPS programs issue a different number of Renewable Energy Credits depending on the generation technology; for example, solar generation counts for twice as much as other renewable sources in Michigan and Virginia. State General citation: States with active RPSes that have not yet been met are in bold. Georgia, Indiana, Kansas, North Dakota, Oklahoma, South Dakota, and Utah have set voluntary standards; these are not shown to result in additional renewable generation installations and are not listed in this table. California The California Renewables Portfolio Standard was created in 2002 under Senate Bill 1078 and further accelerated in 2006 under Senate Bill 107. The bills stipulate that California electricity corporations must expand their renewable portfolio by 1% each year until reaching 20% in 2010. On November 17, 2008, Governor Arnold Schwarzenegger signed executive order S-14-08, which mandated an RPS of 33% by 2020 in addition to the 20% by 2010 requirement. The target has been extended to 50% by 2030. In September 2018, Governor Jerry Brown signed legislation increasing the state's requirement to 100% clean energy by 2045 and increasing the interim target to 60% by 2030. Colorado The Colorado Renewable Portfolio Standard was updated from 20% to 30% in the 2010 Legislative Session as House Bill 1001. This increase was anticipated to raise solar industry jobs from an estimated 2,500 in 2009 to 33,500 by 2020. The updated RPS was also anticipated to create an additional $4.3B (U.S.) in state revenue within those industries. Michigan On October 6, 2008, Public Act 295 was signed into law in the State of Michigan. This Act, known as the Clean, Renewable and Efficient Energy Act, established a Renewable Energy Standard for the State of Michigan. The Renewable Energy Standard requires Michigan electric providers to achieve a retail supply portfolio that includes at least 10% renewable energy by 2015. A ballot proposal to raise the standard to 25% renewable energy by 2025 as a constitutional amendment was put to the voters in the November 2012 General Election as Proposal 3, A Proposal To Amend the State Constitution to Establish a Standard for Renewable Energy. The ballot proposal was defeated, with over 60% voting against it. According to the State of Michigan, as of March 4, 2013 "progress toward the first compliance year in 2012 and the 10 percent renewable energy standard in 2015 is going smoothly. 
Michigan’s electric providers are on track to meet the 10 percent renewable energy requirement. The renewable energy standard is resulting in the development of new renewable capacity and can be credited with the development of over 1,000 MW of new renewable energy projects becoming commercially operational since the Act became law. The weighted average price of renewable energy contracts is $82.54 per MWh which is less than forecasted in REPs." Nevada In 1997 Nevada passed a Renewable Portfolio Standard as part of its 1997 Electric Restructuring Legislation (AB 366). It required electric providers in the state to acquire actual renewable electric generation or purchase renewable energy credits so that renewables made up 1 percent of each utility's total consumption. However, on June 8, 2001, Nevada Governor Kenny Guinn signed SB 372, at the time the country's most aggressive renewable portfolio standard. The law requires that 15 percent of all electricity generated in Nevada be derived from new renewables by the year 2013. The 2001 revision requires that at least 5 percent of the renewable energy requirement be met with solar energy, which receives a credit multiplier. In June 2005, the Nevada legislature extended the deadline and raised the requirements of the RPS to 20 percent of sales by 2015. This was further raised in 2009 to 25 percent by 2025. Ohio In an April 2008 unanimous vote, the Ohio legislature passed a bill requiring 25 percent of Ohio's energy to be generated from alternative and renewable sources, of which half, or 12.5 percent, must derive from renewable sources. In July 2019, Ohio passed House Bill 6 in order to subsidize two failing nuclear power plants and to eliminate the state's RPS, which ends at 8.5% in 2026. Oregon For Oregon's three largest utilities (Portland General Electric (PGE), PacifiCorp and the Eugene Water and Electric Board), the standard starts at 5% in 2011, increases to 15% in 2015, 20% in 2020, and 25% in 2025. Other electric utilities in the state, depending on size, have standards of 5% or 10% in 2025. In 2016, the target was raised to 50%, with two of these companies required to supply 50% of Oregon's power from renewable sources by 2040. In 2021, the target was raised to 100% by 2040. Texas The Texas Renewable Portfolio Standard was originally created by Senate Bill 7 in 1999. The Texas RPS mandated that utility companies jointly create 2,000 MW of new renewables by 2009, apportioned based on their market share. In 2005, Senate Bill 20 increased the state's RPS requirement to 5,880 MW by 2015, of which 500 MW must come from non-wind resources. The bill set a goal of 10,000 MW of renewable energy capacity for 2025. The state's installed capacity reached the 10,000 MW target in early 2010, 15 years ahead of schedule. See also List of U.S. states by electricity production from renewable sources Net metering Power purchase agreement Project finance Renewable electricity Renewables Obligation References External links Compare State Renewable Portfolio Standard Programs at RPS EDGE Databases What are Renewable Electricity Standards? Center for Climate and Energy Solutions RPS Map States with RPS Regulations Solar Energy Industries Association American Wind Energy Association Edison Electric Institute Renewables Portfolio Standard California's Renewable Energy Law Lives! The New York Renewable Portfolio Standard Renewable energy policy Renewable electricity Energy policy Renewable energy law
Renewable portfolio standards in the United States
[ "Environmental_science" ]
3,284
[ "Environmental social science", "Energy policy" ]
9,238,494
https://en.wikipedia.org/wiki/Biosocial%20theory
Biosocial theory is a theory in behavioral and social science that describes personality disorders and mental illnesses and disabilities as biologically-determined personality traits reacting to environmental stimuli. Biosocial theory also explains the shift from evolution to culture when it comes to gender and mate selection. Biosocial theory in motivational psychology identifies the differences between males and females concerning physical strength and reproductive capacity, and how these differences interact with expectations from society about social roles. This interaction produces the differences we see in gender. Description M. M. Linehan wrote in her 1993 paper, Cognitive–Behavioral Treatment of Borderline Personality Disorder, that "the biosocial theory suggests that BPD is a disorder of self-regulation, and particularly of emotional regulation, which results from biological irregularities combined with certain dysfunctional environments, as well as from their interaction and transaction over time" The biological part of the model involves the idea that emotional sensitivity is inborn. As we have different sensitivities in our pain tolerance, in our skin, or in our digestion, we also have different sensitivities to our emotional reactions. This is part of our genetic makeup, but this alone does not cause difficulties or pathologies. It is the transaction between the biological and the social part, especially with invalidating environments, that brings problems. An invalidating environment is one in which the individuals do not fit, so it invalidates their emotions and experiences. It does not need to be an abusive environment; invalidation can occur in subtle ways. Emotional sensitivity plus invalidating environments cause pervasive emotion dysregulation which is the font of many psychopathologies. According to a 1999 article published by McLean Hospital, See also Biocultural anthropology Biosocial criminology Sociobiology References External links An exposition on the drawbacks of viewing human creativity through the lens of the Biosocial Theory by Steven Mizrachs. Classical Conditioning, Arousal, and Crime: a Biosocial Perspective (1997) by A. Raine. Psychological theories Sociobiology
Biosocial theory
[ "Biology" ]
410
[ "Behavioural sciences", "Behavior", "Sociobiology" ]
9,238,840
https://en.wikipedia.org/wiki/Speiss
Speisses are alloys of heavy metals like iron, cobalt, nickel and copper with arsenic, antimony and, occasionally, tin. The latter elements lower the melting point to around 1000 °C. Speisses commonly occur in lead smelting operations and copper smelting operations. Speisses are only partially miscible with mattes, and if there is enough arsenic or antimony in the copper feed to a matte smelting furnace, a separate speiss melt can form. Speisses show high affinities for platinum group metals and gold. The mass concentration of platinum group metals in the speiss phase is about 1000 times that of the concentration in the matte phase, while the ratio for gold is about 100 times. Speisses are also immiscible in liquid lead and flow out of lead blast furnaces as a separate phase. See also Agglomerate (Steel industry) References Metallurgical processes
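The strong partitioning of precious metals into the speiss described above can be written as an approximate distribution ratio between the coexisting speiss and matte melts. The notation below is chosen only for illustration; the numerical values simply restate the mass-concentration ratios quoted in the text.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Approximate distribution ratios between coexisting speiss and matte phases,
% restating the mass-concentration ratios quoted above (notation illustrative).
\[
D_{\mathrm{PGM}} = \frac{c^{\mathrm{speiss}}_{\mathrm{PGM}}}{c^{\mathrm{matte}}_{\mathrm{PGM}}} \approx 1000,
\qquad
D_{\mathrm{Au}} = \frac{c^{\mathrm{speiss}}_{\mathrm{Au}}}{c^{\mathrm{matte}}_{\mathrm{Au}}} \approx 100
\]
\end{document}
```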
Speiss
[ "Chemistry", "Materials_science" ]
196
[ "Metallurgical processes", "Metallurgy" ]
9,239,056
https://en.wikipedia.org/wiki/Matte%20%28metallurgy%29
Matte is a term used in the field of pyrometallurgy given to the molten metal sulfide phases typically formed during smelting of copper, nickel, and other base metals. Typically, a matte is the phase in which the principal metal being extracted is recovered prior to a final reduction process (usually converting) to produce blister copper. The matte may also collect some valuable minor constituents such as noble metals, minor base metals, selenium or tellurium. Mattes may also be used to collect impurities from a metal phase, such as in the case of antimony smelting. Molten mattes are insoluble in both slag and metal phases. This insolubility, combined with differences in specific gravities between mattes, slags, and metals, allows for separation of the molten phases. References Metallurgy
Matte (metallurgy)
[ "Chemistry", "Materials_science", "Engineering" ]
182
[ "Metallurgy", "Materials science stubs", "Materials science", "nan" ]
9,239,161
https://en.wikipedia.org/wiki/Rainbow%20Herbicides
The Rainbow Herbicides are a group of tactical-use chemical weapons used by the United States military in Southeast Asia during the Vietnam War. Success with Project AGILE field tests in 1961 with herbicides in South Vietnam was inspired by the British use of herbicides and defoliants during the Malayan Emergency in the 1950s, which led to the formal herbicidal program Trail Dust (see Operation Ranch Hand). Herbicidal warfare is the use of substances primarily designed to destroy the plant-based ecosystem of an agricultural food production area and/or to destroy dense foliage which provides the enemy with natural tactical cover. Background The United States discovered 2,4-dichlorophenoxyacetic acid (2,4-D) during World War II. It was recognized as toxic and was combined with large amounts of water or oil to function as a weed-killer. Army experiments with the chemical eventually led to the discovery that 2,4-D combined with 2,4,5-trichlorophenoxyacetic acid (2,4,5-T) yielded a more potent herbicide. Some batches of 2,4,5-T manufactured for Rainbow Herbicide use were later found to have been contaminated with synthesis-byproduct dioxins including 2,3,7,8-tetrachlorodibenzodioxin (TCDD). Work by researcher Alvin Lee Young identifies examples of Agent Pink and Agent Green containing as much as double the TCDD concentrations observed in Agent Purple or Agent Orange. Types This is a list of the different types of agents used, their active ingredients, and the years they were being used during the Vietnam War as follows: Use In Vietnam, the early large-scale defoliation missions (1962–1964) used of Agent Green, of Agent Pink, and of Agent Purple. These were dwarfed by the of Agent Orange (both versions) used from 1965 to 1970. Agent White started to replace Orange in 1966; of White were used. The only agent used on a large scale in an anti-crop role was Agent Blue, with used. The bombardment occurred most heavily in the area of the Ho Chi Minh Trail. The rainbow herbicides damaged the ecosystems and cultivated lands of Vietnam, and led to buildup of dioxins in the regional food chain. About 4.8 million people were affected. The environmental destruction caused by this defoliation has been described by Swedish Prime Minister Olof Palme, lawyers, historians and other academics as an ecocide. In addition to testing and using the herbicides in Vietnam, Laos, and Cambodia, the US military also tested the "Rainbow Herbicides" and many other chemical defoliants and herbicides in the United States, Canada, Puerto Rico, Korea, India, and Thailand from the mid-1940s to the late 1960s. Herbicide persistence studies of Agent Orange and Agent White were conducted in the Philippines. The Philippine herbicide test program was conducted in cooperation with the University of the Philippines College of Forestry, and was also described in a 1969 issue of The Philippine Collegian, the college's newspaper. Super or enhanced Agent Orange was tested by representatives from Fort Detrick and Dow Chemical in Texas, Puerto Rico, Hawaii, and later in Malaysia, in a cooperative project with the International Rubber Research Institute. Picloram in Agent White and Super-Orange was contaminated by hexachlorobenzene (HCB). The Canadian government also tested these herbicides and used them to clear vegetation for artillery training. A 2003 study in Nature found that the military underreported its use of rainbow herbicides by . 
Long-term effects Vietnam remains heavily contaminated by dioxin-like compounds, which are classified as persistent organic pollutants. These compounds remain in the water table and have built up in the tissues of local fauna. However, the contamination has begun to diminish, and the forest canopy has regrown somewhat since the Vietnam War. Dioxins and the other dioxin-like compounds in the rainbow herbicides are endocrine disruptors; they may have effects on the children of people who were exposed, and evidence suggests that they continue to have long-term health consequences many years after exposure. Because they mimic, or interfere with, hormonal function, adverse effects can include problems with reproduction, growth and development, immune function, and metabolic function. As an example, dioxins and dioxin-like compounds influence the hormone dehydroepiandrosterone (DHEA), which has a role in the determination of male or female sex characteristics. There have been thousands of documented instances of health problems and birth defects associated with rainbow herbicide exposure in Vietnam, where tested levels remain high in the soil, water, and atmosphere decades after initial exposure. Soldiers exposed to Rainbow Herbicides in Southeast Asia reported long-term health effects, which led to several lawsuits against the U.S. government and the manufacturers of the chemicals. See also List of Rainbow Codes Operation Ranch Hand U.S. Army Biological Warfare Laboratories Dow Chemical Company References Further reading A more abbreviated version: "Update #1 to INFORMATION PAPER Agent Orange/Agent Purple and Canadian Forces Base Gagetown" Department of Defense, Veterans and Emergency Management; Maine Veterans' Services. February 9, 2006. Herbicidal Warfare Vietnam 1961-1971. oneofmanyfeathers.com, 2/1/2014. External links Defoliants Military equipment of the Vietnam War Environmental racism
Rainbow Herbicides
[ "Chemistry" ]
1,124
[ "Defoliants", "Chemical weapons" ]
9,239,718
https://en.wikipedia.org/wiki/2000-watt%20society
The 2000-watt society concept, introduced in 1998 by the Swiss Federal Institute of Technology in Zurich (ETH Zurich), aims to reduce the average primary energy use of First World citizens to no more than 2,000 watts (equivalent to 2 kilowatt-hours per hour or 48 kilowatt-hours per day) by 2050, without compromising their standard of living. In a 2008 referendum, more than three-quarters of Zurich's residents endorsed a proposal to lower the city's energy consumption to 2,000 watts per capita and cut greenhouse gas emissions to one ton per capita annually by 2050, with a clear exclusion of nuclear energy. This occasion marked the first democratic legitimization of the concept. In 2009, energy consumption averaged 6,000 watts in Western Europe, 12,000 watts in the United States, 1,500 watts in China, and 300 watts in Bangladesh. At that time, Switzerland's average energy consumption stood at approximately 5,000 watts, having last been a 2,000-watt society in the 1960s. The 2000-watt society initiative is supported by the Swiss Federal Office of Energy (SFOE), the Association of Swiss Architects and Engineers, and other bodies. Current energy use Breakdown of average energy consumption of 5.1 kW by a Swiss person as of July 2008: 1500 watts for living and office space (this includes heat and hot water) 1100 watts for food and consumer discretionary (including transportation of these to the point of sale) 600 watts for electricity 500 watts for automobile travel 250 watts for air travel 150 watts for public transportation 900 watts for public infrastructure Implications Researchers in Switzerland believe that this vision is achievable, despite a projected 65% increase in economic growth by 2050, by using new low-carbon technologies and techniques. It is predicted that a 2000-watt society will require a complete reinvestment in the country's capital assets, refurbishment of the nation's building stock to meet low-energy building standards, significant improvements in the efficiency of road transport, aviation and energy-intensive material use, the possible introduction of high-speed maglev trains, the use of renewable energy sources, district heating, microgeneration and related technologies, as well as a refocusing of research into new priority areas. As a result of the intensified research and development effort required, it is hoped that Switzerland will become a leader in the technologies involved. Indeed, the idea has a great deal of government backing, due to fears about climate change. Progress towards a 2000-watt society The 2,000-watt society principle is gaining momentum in Switzerland. A 2016 article revealed that 2% of Swiss residents adhere to the 2,000-watt energy limit, with average per capita energy consumption exceeding 5,000 watts. More than 100 municipalities have integrated this objective into their by-laws or energy strategies. Nine complexes in seven cities and towns—Zurich, Basel, Bern, Lucerne, Lenzburg, Kriens, and Prilly/Renens—have been awarded the "2,000-watt area" certificate. From 2000 to 2020, despite a global increase in energy consumption and greenhouse gas emissions, Switzerland saw notable reductions. The Swiss Federal Office of Energy (FOE) highlights a decrease in per capita energy use from 6,000 to just under 4,000 watts and a nearly 50% cut in greenhouse gas emissions. 
However, to meet the 2,000-watt society goals by 2050 to 2100, the FOE acknowledges the necessity for more decisive measures, noting the progress is on the right path but could be accelerated. Certification The Swiss Federal Office of Energy stipulates that the 2000-watt sites label is awarded to residential developments demonstrating sustainable practices in construction, operation, renovation, and mobility. This certification integrates the Energy City label with the Swiss Engineers and Architects Association's standards. Developers are encouraged to apply at the project's inception, with certification granted upon verification of compliance with set objectives. The label's validity continues until more than 50% of the project undergoes repurposing, ensuring adherence to established criteria. The assessment encompasses management, communication, construction practices, and approaches to supply, disposal, and mobility. City of Zurich The 2016 Zurich 2000-Watt Society roadmap documents a reduction in per capita energy consumption to 4,200 watts and CO2 emissions to 4.7 tonnes, compared to 1990 levels. Without additional measures, projections indicate that by 2050, consumption would only decrease to 3,500 watts and CO2 emissions to 3.5 tonnes per person, falling short of the goals of 2,500 watts and 1 ton of CO2 emissions, respectively. To address this, the roadmap outlines specific strategies for energy supply and buildings, including the installation of more efficient appliances (227 watts), energy efficiency measures for redevelopments (170 watts), new building standards (57 watts), the replacement of fossil and nuclear energy with renewables (505 watts), and the modernization of heating systems (28 watts). In the area of mobility, it suggests efforts to reduce energy consumption for aviation (209 watts) and private transport (50 watts) to achieve the 2050 targets. Basel pilot region Launched in 2001 and located in the metropolitan area of Basel, 'Pilot Region Basel' aims to develop and commercialize some of the technologies involved. The pilot is a partnership between industry, universities, research institutes and the authorities, coordinated by Novatlantis. Participation is not restricted to locally based organizations. The city of Zürich joined the project in 2005 and the canton of Geneva declared its interest in 2008. Within the pilot region, the projects in progress include demonstration buildings constructed to MINERGIE or Passivhaus standards, electricity generation from renewable energy sources, and vehicles using natural gas, hydrogen and biogas. The aim is to put research into practice, seek continuous improvements, and to communicate progress to all interested parties, including the public. Fribourg smart living building The Smart Living Lab based in Fribourg is a joint research center of EPFL, the School of Engineering and Architecture of Fribourg and the University of Fribourg. Together, they designed the smart living building, which will be both a sustainable structure and an evolving building and whose construction starts in 2022. It will house the activities of some 130 researchers, offering laboratories, offices, conference rooms and some experimental dwellings. In this multiple-use context, the building will become an experimental field of studies in itself and aims to find solutions to energy consumption and the greenhouse gas emissions that it generates. 
This construction is the group's first case study, and research projects have been established to help it meet the lab's ambitious goals: limiting its consumption and emissions to the values set for 2050 by the 2000-watt society vision, while considering the whole life cycle of its components. See also Avoiding Dangerous Climate Change Carbon footprint Climate Change Act 2008 Energy conservation Energy policy Low-carbon economy Making Sweden an Oil-Free Society, an official report One Watt Initiative Paris Agreement Peak oil Sustainable development World energy resources and consumption Notes and references External links Novatlantis: Smart 2000-Watt-Sites The Zürich partner region (City of Zürich) The realities of implementing the 2,000 Watt society Energy from the perspective of sustainable development: the 2000 Watt society Steps towards a sustainable development. A white book for R & D of energy-efficient technologies; Jochem et al. ETH Zürich, 2004 All info about the 2000 Watt Society (in French) The 2000 watt area Energy conservation Energy policy Environmental design Low-carbon economy Economy of Switzerland Energy in Switzerland Transport in Switzerland 1998 introductions
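The "watts" in the 2000-watt target refer to average continuous power, i.e. annual primary energy use divided by the hours in a year (2,000 W corresponds to roughly 48 kWh per day, or about 17,500 kWh per year). The short sketch below makes that conversion explicit; the per-capita levels used are figures quoted in the article above, while the function names and rounding are purely illustrative.

```python
# Convert between average continuous power (watts per person) and annual
# energy use (kWh per person per year), the bookkeeping behind the
# 2000-watt society target. The levels below are figures quoted in the article.

HOURS_PER_YEAR = 365 * 24  # 8,760 hours

def annual_kwh(avg_watts):
    """Annual energy use in kWh implied by an average continuous power in watts."""
    return avg_watts * HOURS_PER_YEAR / 1000

def average_watts(kwh_per_year):
    """Average continuous power in watts for a given annual energy use in kWh."""
    return kwh_per_year * 1000 / HOURS_PER_YEAR

for label, watts in [("2000-watt target", 2000), ("Zurich, 2016", 4200),
                     ("Switzerland, 2008", 5100), ("Western Europe, 2009", 6000)]:
    print(f"{label}: {watts} W -> {annual_kwh(watts):,.0f} kWh/yr "
          f"({annual_kwh(watts) / 365:.0f} kWh/day)")
# e.g. the 2,000 W target corresponds to 17,520 kWh/yr, about 48 kWh/day
```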
2000-watt society
[ "Engineering", "Environmental_science" ]
1,549
[ "Environmental design", "Design", "Environmental social science", "Energy policy" ]
9,239,738
https://en.wikipedia.org/wiki/Carboxylation
Carboxylation is a chemical reaction in which a carboxylic acid is produced by treating a substrate with carbon dioxide. The opposite reaction is decarboxylation. In chemistry, the term carbonation is sometimes used synonymously with carboxylation, especially when applied to the reaction of carbanionic reagents with CO2. More generally, carbonation usually describes the production of carbonates. Organic chemistry Carboxylation is a standard conversion in organic chemistry. Specifically, carbonation (i.e. carboxylation) of Grignard reagents and organolithium compounds is a classic way to convert organic halides into carboxylic acids. Sodium salicylate, precursor to aspirin, is commercially prepared by treating sodium phenolate (the sodium salt of phenol) with carbon dioxide at high pressure (100 atm) and high temperature (390 K) – a method known as the Kolbe-Schmitt reaction. Acidification of the resulting salicylate salt gives salicylic acid. (Schematic equations for both of these classical routes are sketched at the end of this article.) Many detailed procedures are described in the journal Organic Syntheses. Carboxylation catalysts include N-Heterocyclic carbenes and catalysts based on silver. Carboxylation in biochemistry Carbon-based life originates from carboxylation that couples atmospheric carbon dioxide to a sugar. The process is usually catalyzed by the enzyme RuBisCO (ribulose-1,5-bisphosphate carboxylase/oxygenase), which is possibly the single most abundant protein on Earth. Many carboxylases, including Acetyl-CoA carboxylase, Methylcrotonyl-CoA carboxylase, Propionyl-CoA carboxylase, and Pyruvate carboxylase, require biotin as a cofactor. These enzymes are involved in various biogenic pathways. In the EC scheme, such carboxylases are classed under EC 6.3.4, "Other Carbon—Nitrogen Ligases". Another example is the posttranslational modification of glutamate residues to γ-carboxyglutamate in proteins. It occurs primarily in proteins involved in the blood clotting cascade, specifically factors II, VII, IX, and X, protein C, and protein S, and also in some bone proteins. This modification is required for these proteins to function. Carboxylation occurs in the liver and is performed by γ-glutamyl carboxylase (GGCX). GGCX requires vitamin K as a cofactor and performs the reaction in a processive manner. γ-carboxyglutamate binds calcium, which is essential for its activity. For example, in prothrombin, calcium binding allows the protein to associate with the plasma membrane in platelets, bringing it into close proximity with the proteins that cleave prothrombin to active thrombin after injury. See also Decarboxylation Carboxy-lyases References Organic reactions Post-translational modification
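The two classical laboratory routes mentioned above can be summarized schematically: carbonation of a Grignard reagent followed by aqueous acid work-up, and the Kolbe-Schmitt carboxylation of sodium phenolate. The equations below are simplified sketches under the conditions quoted in the text, not fully balanced mechanisms.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Schematic equations for the two classical carboxylations described above
% (simplified; work-up and exact conditions abbreviated).
\[
\mathrm{R\text{-}MgBr} + \mathrm{CO_2} \longrightarrow \mathrm{R\text{-}CO_2MgBr}
\xrightarrow{\ \mathrm{H_3O^+}\ } \mathrm{R\text{-}CO_2H}
\]
\[
\mathrm{C_6H_5ONa} + \mathrm{CO_2}
\xrightarrow{\ \sim 100\ \mathrm{atm},\ 390\ \mathrm{K}\ }
\text{sodium salicylate}
\xrightarrow{\ \mathrm{H^+}\ } \text{salicylic acid}
\]
\end{document}
```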
Carboxylation
[ "Chemistry" ]
650
[ "Organic reactions", "Post-translational modification", "Gene expression", "Biochemical reactions" ]
9,239,903
https://en.wikipedia.org/wiki/Retractable%20roof
A retractable roof is a roof system designed to roll back the roof of a structure so that the interior of the facility is open to the outdoors. Retractable roofs are sometimes referred to as operable roofs or retractable skylights. The term operable skylight, while quite similar, refers to a skylight that opens on a hinge, rather than on a track. Retractable roofs are used in residences, restaurants and bars, swim centres, arenas and stadiums, and other facilities wishing to provide protection from the elements, as well as the option of having an open roof during favourable weather. History The United States Patent and Trademark Office (USPTO) records show that David S. Miller, founder of Rollamatic Retractable Roofs, filed in August 1963 for "a movable and remotely controllable roof section for houses and other types of buildings". Shapes and sizes While any shape is possible, common shapes are flat, ridge, hip-ridge, barrel and dome. A residence might incorporate one or more 3' by 5' retractables; a bar or restaurant a retractable roof measuring 20' by 30'; and a meeting hall a 50' by 100' bi-parting-over-stationary. Sports venues Stadium retractable roofs are generally used in locales where inclement weather, extreme heat, or extreme cold are prevalent during the respective sports seasons, in order to allow for playing of traditionally outdoor sports in more favorable conditions, as well as the comfort of spectators watching games played in such weather. Unlike their predecessors, the domes built primarily during the 1960s, 1970s, and early 1980s, retractable roofs also allow for playing of the same traditionally outdoor sports in outdoor conditions when the weather is more favorable. Another purpose of retractable roofs is to allow for growth of natural grass playing fields in environments where extreme hot and/or cold temperatures would otherwise make installation and maintenance of such a field cost prohibitive. Installations throughout the world employ a variety of different configurations and styles. The first retractable-roof stadiums The first retractable roof sports venue was the now-demolished Civic Arena in Pittsburgh, Pennsylvania, United States. Constructed in 1961 for the Pittsburgh Civic Light Opera, the arena was home to minor league basketball, college basketball, and minor league ice hockey teams before becoming the home of the Pittsburgh Penguins of the National Hockey League (NHL) and Pittsburgh Pipers of the American Basketball Association (ABA) in 1967, as well as hosting over a dozen regular season National Basketball Association (NBA) games in the 1960s and 1970s. The arena's dome-shaped roof covered and was made up of eight equal segments constructed from close to 3,000 tons of steel, in which six segments could retract underneath the remaining two, supported by a long exterior cantilevered arm. Olympic Stadium in Montreal, Quebec was slated to be the first outdoor retractable roof stadium at its debut for the 1976 Summer Olympics. However, plagued by construction problems, the roof was not installed until 1987, and was not retractable until 1988. Even then, movement of the roof was impossible in high wind conditions, and technical problems plagued the facility. A permanent, fixed roof was installed in 1998. The Centre Court at the National Tennis Centre, now called the Rod Laver Arena, in Melbourne, Australia opened in January 1988. It was the first retractable roof system installed in a Grand Slam tennis venue. 
The roof enables matches to continue during rain, extreme heat, and in the presence of smoke from bushfires in surrounding regions. The Rogers Centre (formerly known as SkyDome) in Toronto, Ontario had a fully functional retractable roof at its debut in 1989. Types of stadium retractable roofs Architecturally speaking, retractable roofs vary greatly from stadium to stadium in shape, material and movement. For example, American Family Field has a fan style roof, while Toyota Stadium in Japan has an accordion-like roof. Most retractable roofs are made of metal, while some, such as the roof of State Farm Stadium, are made of water-resistant fabric. Although each retractable roof differs in these aspects, the roof of T-Mobile Park is unique in that it is the only one in North America that does not form a climate-controlled enclosure when in the extended position; rather, it acts as an "umbrella" to cover the playing field and spectator areas during inclement weather, with no side walls enclosing the stadium. Gameplay with retractable roofs In North American major sports leagues, specific rules exist governing the movement of retractable roofs before and during gameplay. These rules vary between the NFL and MLB, as well as from stadium to stadium. In general, if a game begins with the roof open and weather conditions become less favorable, the home team may, with the approval of the field officials and visiting team, request the roof be closed. (Such a scenario is generally rare, due to the accuracy of modern weather forecasting and a general err on the side of caution that keeps a roof closed if there is any significant threat of precipitation.) Depending on the stadium, weather or gameplay conditions, and the judgment of the officials, play may or may not continue until the roof is fully closed. If the game begins with the roof closed, it may be opened under some circumstances depending on the venue. If it is closed after the game begins, typically it must remain closed for the duration of the game. Alternatives to retractable roofs Some modern athletic facilities are using less-complex roof systems commonly referred to as open roofs. These are constructed with similar materials as retractable roofs, such as polycarbonate or tempered glass roofs. Hinged at the structure's gutters, open roofs fully close and open by the mechanics of a rack and pinion system or a push/pull drive system. Open roofs are typically seen at smaller athletic venues such as country clubs and universities, and also in the construction of commercial greenhouses and garden centres for climate control purposes. See also References External links CBC archives The architect explains the roof system for the Rogers Centre (then called SkyDome) in Toronto. CBC Archives A clip from 1975 where the stadium architect talks about his design for the Montreal Olympic Stadium. CBC Archives A look back on the history of the Montreal Olympic Stadium (1999). Guidelines for movement of a retractable roof (Major League Baseball) Roofs Architectural elements
Retractable roof
[ "Technology", "Engineering" ]
1,309
[ "Structural engineering", "Building engineering", "Architecture", "Structural system", "Architectural elements", "Roofs", "Components" ]
154,076
https://en.wikipedia.org/wiki/Epiphany%20%28feeling%29
An epiphany (from the ancient Greek ἐπιφάνεια, epiphaneia, "manifestation, striking appearance") is an experience of a sudden and striking realization. Generally the term is used to describe a scientific breakthrough or a religious or philosophical discovery, but it can apply in any situation in which an enlightening realization allows a problem or situation to be understood from a new and deeper perspective. Epiphanies are studied by psychologists and other scholars, particularly those attempting to study the process of innovation. Epiphanies are relatively rare occurrences and generally follow a process of significant thought about a problem. Often they are triggered by a new and key piece of information, but importantly, a depth of prior knowledge is required to allow the leap of understanding. Famous epiphanies include Archimedes's discovery of a method to determine the volume of an irregular object ("Eureka!") and Isaac Newton's realization that a falling apple and the orbiting moon are both pulled by the same force. History The word epiphany originally referred to insight through the divine. Today, this concept is more often used without such connotations, but a popular implication remains that the epiphany is supernatural, as the discovery seems to come suddenly from the outside. The word's secular usage may owe much of its popularity to Irish novelist James Joyce. The Joycean epiphany has been defined as "a sudden spiritual manifestation, whether from some object, scene, event, or memorable phase of the mind – the manifestation being out of proportion to the significance or strictly logical relevance of whatever produces it". The author used epiphany as a literary device within each entry of his short story collection Dubliners (1914); his protagonists came to sudden recognitions that changed their view of themselves and/or their social conditions. Joyce had first expounded on epiphany's meaning in the fragment Stephen Hero (published posthumously in 1944). For the philosopher Emmanuel Lévinas, epiphany or a manifestation of the divine is seen in another's face (see face-to-face). In traditional and pre-modern cultures, initiation rites and mystery religions have served as vehicles of epiphany, as well as the arts. The Greek dramatists and poets would, in the ideal, induct the audience into states of catharsis or kenosis, respectively. In modern times an epiphany lies behind the title of William Burroughs' Naked Lunch, a drug-influenced state, as Burroughs explained, "a frozen moment when everyone sees what is at the end of the fork." Both the Dadaist Marcel Duchamp and the Pop Artist Andy Warhol would invert expectations by presenting commonplace objects or graphics as works of fine art (for example a urinal as a fountain), simply by presenting them in a way no one had thought to do before; the result was intended to induce an epiphany of "what art is" or is not. Process Epiphanies can come in many different forms, and are often generated by a complex combination of experience, memory, knowledge, predisposition and context. A contemporary example of an epiphany in education might involve the process by which a student arrives at some form of new insight or clarifying thought. Despite this popular image, epiphany is the result of significant work on the part of the discoverer, and is only the satisfying result of a long process. 
The feeling of epiphany is surprising because one cannot predict when one's labor will bear fruit and because the subconscious can play a significant part in delivering the solution; it is fulfilling because it is a reward for a long period of effort. Myth A common myth holds that most, if not all, innovations occur through epiphanies. Not all innovations occur through epiphanies; Scott Berkun notes that "the most useful way to think of an epiphany is as an occasional bonus of working on tough problems." Most innovations occur without epiphany, and epiphanies often contribute little towards finding the next one. Crucially, epiphany cannot be predicted or controlled. Although epiphanies are only a rare occurrence, crowning a process of significant labor, there is a common myth that epiphanies of sudden comprehension are commonly responsible for leaps in technology and the sciences. Famous epiphanies include Archimedes' realization of how to estimate the volume of a given mass, which inspired him to shout "Eureka!" ("I have found it!"). The biographies of many mathematicians and scientists include an epiphanic episode early in the career, the ramifications of which were worked out in detail over the following years. For example, allegedly Albert Einstein was struck as a young child by being given a compass, and realizing some unseen force in space was making it move. Another, perhaps better, example from Einstein's life occurred in 1905 after he had spent an evening unsuccessfully trying to reconcile Newtonian physics and Maxwell's equations. While taking a streetcar home, he looked behind him at the receding clocktower in Bern and realized that if the car sped up (close to the speed of light) he would see the clock slow down; with this thought, he later remarked, "a storm broke loose in my mind," which would allow him to understand special relativity. Einstein had a second epiphany two years later in 1907 which he called "the happiest thought of my life" when he imagined an elevator falling, and realized that a passenger would not be able to tell the difference between the weightlessness of falling and the weightlessness of space – a thought which allowed him to generalize his theory of relativity to include gravity as a curvature in spacetime. A similar flash of holistic understanding in a prepared mind was said to give Charles Darwin his "hunch" (about natural selection), and Darwin later said he always remembered the spot in the road where his carriage was when the epiphany struck. Another famous epiphany myth is associated with Isaac Newton's apple story, and yet another with Nikola Tesla's discovery of a workable alternating current induction motor. Though such epiphanies might have occurred, they were almost certainly the result of long and intensive periods of study those individuals had undertaken, rather than an out-of-the-blue flash of inspiration about an issue they had not thought about previously. Another myth is that epiphany is simply another word for (usually spiritual) vision. In fact, realism and psychology treat epiphany as a mode distinct from vision, even though both vision and epiphany are often triggered by (sometimes seemingly) irrelevant incidents or objects. In religion In Christianity, the Epiphany refers to a realization that Christ is the Son of God. Western churches generally celebrate the Visit of the Magi as the revelation of the Incarnation of the infant Christ, and commemorate the Feast of the Epiphany on January 6. 
Traditionally, Eastern churches, following the Julian rather than the Gregorian calendar, have celebrated Epiphany (or Theophany) in conjunction with Christ's baptism by John the Baptist, observing it on January 19; however, other Eastern churches have adopted the Western calendar and celebrate it on January 6. Some Protestant churches often celebrate Epiphany as a season, extending from the last day of Christmas until either Ash Wednesday or the Feast of the Presentation on February 2. In more general terms, the phrase "religious epiphany" is used when a person realizes their faith, or when they are convinced an event or happening was really caused by a deity or being of their faith. In Hinduism, for example, epiphany might refer to Arjuna's realization that Krishna (incarnation of God serving as his charioteer in the "Bhagavad Gita") indeed represents the Universe. The Hindu term for epiphany would be bodhodaya, from Sanskrit bodha "wisdom" and udaya "rising". Or in Buddhism, the term might refer to the Buddha obtaining enlightenment under the bodhi tree, finally realizing the nature of the universe, and thus attaining Nirvana. The Zen term kensho also describes this moment, referring to the feeling attendant on realizing the answer to a koan. See also Anagnorisis Apophenia Eureka effect Hierophany Kenshō Lateral thinking Peripeteia Revelation Satori Samadhi Theophany References Positive mental attitude Innovation Religious practices Ancient Greek philosophical concepts
Epiphany (feeling)
[ "Biology" ]
1,791
[ "Behavior", "Religious practices", "Human behavior" ]
154,147
https://en.wikipedia.org/wiki/Limerence
Limerence is a state of mind resulting from romantic feelings for another person. It typically involves intrusive and melancholic thoughts, or tragic concerns for the object of one's affection, along with a desire for the reciprocation of one's feelings and to form a relationship with the object of love. Psychologist Dorothy Tennov coined the term "limerence" as an alteration of "amorance" without other etymologies to describe a concept that had grown out of her work in the 1960s, when she interviewed over 500 people on the topic of love. In her book Love and Limerence, she writes that "to be in a state of limerence is to feel what is usually termed 'being in love. She coined the term to disambiguate the state from other less-overwhelming emotions, and to avoid the implication that those who do not experience it are not capable of experiencing love. According to Tennov and others, limerence can be considered romantic love, passionate love, infatuation or lovesickness. Later in Tennov's life, she compared limerence to love madness. Limerence is also sometimes compared and contrasted with a crush, with limerence being much more intense. Love and Limerence has been called the seminal work on romantic love. Anthropologist and author Helen Fisher wrote that data collection on romantic attraction started with Tennov collecting survey results, diaries, and other personal accounts. Fisher, who knew Tennov and corresponded with her, has commented that Tennov's concept had a sad component to it. Overview Dorothy Tennov's concept represents a scientific attempt at studying the nature of romantic love. She identified a suite of psychological traits associated with being in love, which she called limerence. Other authors have also considered limerence to be an emotional and motivational state for focusing attention on a preferred mating partner or an attachment process. Joe Beam calls limerence the feeling of being madly in love. Nicky Hayes describes it as "a kind of infatuated, all-absorbing passion", the type of love Dante felt towards Beatrice or that of Romeo and Juliet. It is this unfulfilled, intense longing for the other person which defines limerence, where the individual becomes "more or less obsessed by that person and spends much of their time fantasising about them". Hayes suggests that "it is the unobtainable nature of the goal which makes the feeling so powerful", and occasional, intermittent reinforcement may be required to support the underlying feelings. Severus Snape's love for Lily Evans, the mother of Harry Potter, is a modern fictional representation of limerence. Another famous historical example was the tumultuous affair between Lord Byron and Lady Caroline Lamb. A central feature of limerence for Tennov was the fact that her participants really saw the object of their affection's personal flaws, but simply overlooked them or found them attractive. Tennov calls this "crystallization", after a description by the French writer Stendhal. This "crystallized" version of a love object, with accentuated features, is what Tennov calls a "limerent object", or "LO". Limerence has psychological properties akin to passionate love, but in Tennov's conception, limerence begins outside of a relationship and before the person experiencing it knows for certain whether it's reciprocated. Tennov observes that limerence is therefore frequently unrequited and argues that some type of situational uncertainty is required for the intense mental preoccupation to occur. 
Uncertainty could be, for example, barriers to the fulfillment of a relationship such as physical or emotional distance from the LO, or uncertainty about how the LO reciprocates the feeling. Some people may also fear intimacy so that they distance themselves and avoid a real connection. Not everyone experiences limerence. Tennov estimates that 50% of women and 35% of men experience limerence based on answers to certain survey questions she administered. Limerence can be difficult to understand for those who have never experienced it, and it is thus often derided and dismissed as undesirable, some kind of pathology, ridiculous fantasy or a construct of romantic fiction. According to Tennov, limerence is not a mental illness, although it can be "highly disruptive and extremely painful", "irrational, silly, embarrassing, and abnormal" or sometimes "the greatest happiness" depending on who is asked. Components The original components of limerence, from Love and Limerence, were: intrusive thinking about the object of your passionate desire (the limerent object or "LO"), who is a possible sexual partner acute longing for reciprocation dependency of mood on LO's actions or, more accurately, your interpretation of LO's actions with respect to the probability of reciprocation inability to react limerently to more than one person at a time (exceptions occur only when limerence is at low ebb—early on or in the last fading) some fleeting and transient relief from unrequited limerent passion through vivid imagination of action by LO that means reciprocation fear of rejection and sometimes incapacitating but always unsettling shyness in LO's presence, especially in the beginning and whenever uncertainty strikes intensification through adversity (at least, up to a point) acute sensitivity to any act or thought or condition that can be interpreted favorably, and an extraordinary ability to devise or invent "reasonable" explanations for why the neutrality that the disinterested observer might see is in fact a sign of hidden passion in the LO an aching of the "heart" (a region in the center front of the chest) when uncertainty is strong buoyancy (a feeling of walking on air) when reciprocation seems evident a general intensity of feeling that leaves other concerns in the background a remarkable ability to emphasize what is truly admirable in LO and to avoid dwelling on the negative, even to respond with a compassion for the negative and render it, emotionally if not perceptually, into another positive attribute. Relation to other concepts Love Dorothy Tennov gives several reasons for inventing a term for the state denoted by limerence (usually termed "being in love"). One principal reason is to resolve ambiguities with the word "love" being used both to refer to an act which is chosen, as well as to a state which is endured: Many writers on love have complained about semantic difficulties. The dictionary lists two dozen different meanings of the word "love". And how does one distinguish between love and affection, liking, fondness, caring, concern, infatuation, attraction, or desire? [...] Acknowledgment of a distinction between love as a verb, as an action taken by the individual, and love as a state is awkward. Never having fallen in love is not at all a matter of not loving, if loving is defined as caring. 
Furthermore, this state of "being in love" included feelings that do not properly fit with love defined as concern. (The type of love that focuses on caring for others is called compassionate love or agape.) The other principal reason given is that she encountered people who do not experience the state. The first such person Tennov discovered was a long-time friend, Helen Payne, whose unfamiliarity with the state of limerence emerged during a conversation on an airplane flight together. Tennov writes that "describing the intricacies of romantic attachments" to Helen was "like trying to describe the color red to one blind from birth". Tennov labels such people "nonlimerents" (a person not currently experiencing limerence), but cautions that it seemed to her that there is no nonlimerent personality and that potentially anyone could experience the state of limerence. Tennov says: I adopted the view that never being in this state was neither more nor less pathological than experiencing it. I wanted to be able to speak about this reliably identifiable condition without giving love's advocates the feeling something precious was being destroyed. Even more important, if using the term "love" denoted the presence of the state, there was the danger that absence of the state would receive negative connotations. Tennov addresses the issue of whether limerence is love in several other passages. In one passage she clearly says that limerence is love, at least in certain cases: In fully developed limerence, you feel additionally what is, in other contexts as well, called love—an extreme degree of feeling that you want LO to be safe, cared for, happy, and all those other positive and noble feelings that you might feel for your children, your parents, and your dearest friends. That's probably why limerence is called love in all languages. [...] Surely limerence is love at its highest and most glorious peak. However, Tennov then switches in tone and tells a fairly negative story of the pain felt by a woman reminiscing over the time she wasted pining for a man she now feels nothing towards, something which occupied her in a time when her father was still alive and her children "were adorable babies who needed their mother's attention." Tennov says this is why we distinguish limerence (this "love") from other loves. In another passage, Tennov says that while affection and fondness do not demand anything in return, the return of feelings desired in the limerent state means that "Other aspects of your life, including love, are sacrificed in behalf of the all-consuming need." and that "While limerence has been called love, it is not love." Romantic love Dorothy Tennov sometimes considers limerence to be synonymous with the term "romantic love". This term has a complicated history with an evolving definition, but the sense in which Tennov uses it originates from a literary tradition of stories depicting tragic or unfulfilled love. Some examples of romantic love stories in this vein are Layla and Majnun, Tristan and Iseult, Dante and Beatrice (from La Vita Nuova), Romeo and Juliet and The Sorrows of Young Werther. In this sense, romantic love is idealized, unrealistic and irrational, the kind of love often found in a fairy-tale depicting a tragedy. This can be contrasted with rational, practical and pragmatic love, or the kind of love found in steady, long-term relationships. The literary genre of romantic love dates back to troubadour poetry from the Middle Ages (or earlier) and the doctrine of courtly love. 
Tennov credits the cleric Andreas Capellanus as describing the state of limerence "very accurately" in The Art of Courtly Love, a book of statutes for the "proper" conduct of lovers. The work includes rules such as "A true lover is constantly and without intermission possessed by the thoughts of his beloved." and "The easy attainment of love makes it of little value; difficulty of attainment makes it prized." This work helped spread the cultural doctrine of romantic love throughout Europe. Because of the literary and cultural origins of the term, the romantic love phenomenon is sometimes held to be socially constructed (especially by critics, according to Tennov); however, Tennov argues that limerence has a biological basis and evolutionary purpose. Romantic love is also often used as a synonym for passionate love, also called "being in love", and also often associated with limerence. Academic literature has never universally adopted a single term for this. Helen Fisher has commented that she prefers the term "romantic love" because she thinks it has meaning in society. Passionate and companionate love Limerence has been compared to passionate love, with Elaine Hatfield considering them synonymous or commenting in 2016 that they are "much the same". Passionate love is described as: A state of intense longing for union with an other. Reciprocated love (union with the other) is associated with fulfillment and ecstasy. Unrequited love (separation) with emptiness; with anxiety, or despair. A state of profound physiological arousal. Many other academics also consider these synonymous, such as Bianca Acevedo & Arthur Aron:Passionate love, "a state of intense longing for union with another" (Hatfield & Rapson, 1993, p. 5), also referred to as "being in love" (Meyers & Berscheid, 1997), "infatuation" (Fisher, 1998), and "limerence" (Tennov, 1979), includes an obsessive element, characterized by intrusive thinking, uncertainty, and mood swings.Passionate love is linked to passion, as in intense emotion, for example, joy and fulfillment, but also anguish and agony. Hatfield notes that the original meaning of passion "was agony—as in Christ's passion." Passionate love is contrasted with companionate love, which is "the affection we feel for those with whom our lives are deeply entwined". Companionate love is felt less intensely and often follows after passionate love in a relationship. In Love and Limerence, Dorothy Tennov also lists passionate love among several synonyms for limerence, and refers to one of Hatfield's early writings on the subject. However, Tennov says that one of the guiding points of her study was to focus on the aspects of love that produced distress. She has also said that one of the problems she encountered in her studies is that her interview subjects would use terms like "passionate love", "romantic love" and "being in love" to refer to mental states other than what she refers to as limerence. For example, some of her nonlimerent interviewees would use the word "obsession", yet not report the intrusive thoughts necessary to limerence, only that "thoughts of the person are frequent and pleasurable". Infatuation Various authors have considered "infatuation" to be a synonym for both passionate love and limerence. Dorothy Tennov has stated that she did not use the word "infatuation" because while there is overlap, the word evokes different connotations. 
In one type of distinction, people use "infatuation" to express disapproval or to refer to unsatisfactory relationships, and "love" to refer to satisfactory ones. In Love and Limerence, Tennov considers "infatuation" to be a pejorative, for example often being used as a label for teenage limerent fantasizing and obsession with a celebrity. In the triangular theory of love, by Robert Sternberg, "infatuation" refers to romantic passion without intimacy (or closeness) and without commitment. Sternberg has stated that infatuation in his theory is essentially the same as limerence. Another related concept (which also has qualities reminiscent of limerence) is "fatuous love", which is romantic passion with a commitment made in the absence of intimacy. This can be, for example, lovers in the throes of new passion who commit to marry without really knowing each other well enough to know if they are suitable partners. In this situation, their passion usually wanes over time, turning into a commitment alone (called "empty love") and they become unhappy. Independent emotion systems Helen Fisher's popular theory of independent emotion systems posits that there are three primary biological systems involved with human reproduction, mating and parenting: lust (the sex drive, or sexual desire), attraction (passionate love, infatuation or limerence) and attachment (companionate love). These three systems regularly work in concert together but serve different purposes and can also work independently. Independent emotions theory has been critiqued as being oversimplified, but the general idea of separate systems remains useful. When limerence is a component in an affair, for example, Fisher's theory can be used to help explain this. Fisher's theory is that while a person can feel deep attachment for a long-term spouse, they can also feel limerence for somebody else, just as how one can also feel sexual attraction towards other people. Joe Beam comments that if somebody in a committed relationship ends up in limerence like this, it will pull them out of their relationship. Fisher's theory has also been used to explain why some people can feel "platonic" limerence without sexual desire, because sexual desire is separate from romantic love. Lisa Diamond argues this is possible even in contradiction to one's sexual orientation, because the brain systems evolved by repurposing the systems for mother-infant bonding (a process called exaptation). According to this theory, it would not have been adaptive for a parent to only be able to bond with an opposite sex child, so the systems must have evolved independent of sexual orientation. People most often fall in love because of sexual desire, but Diamond suggests time spent together and physical touch can serve as a substitute. In Dorothy Tennov's conception, sexual attraction was an essential component of limerence (as a generalization); however she did note that occasionally people described to her what seemed to fit the pattern of limerence, but without sexual attraction. Additionally, for those who do have a sexual interest, their desire for emotional union and commitment is a far greater concern to them. Attachment theory Attachment theory refers to John Bowlby's concept of an "attachment system", a system evolved to keep infants in proximity of their caregiver (or "attachment figure"). 
A person uses their attachment figure as a "secure base" to feel safe exploring the environment, seeks proximity with the attachment figure when threatened, and suffers distress when separated. A prominent theory suggests this system is reused for adult pair bonds, as an exaptation or co-option, whereby a given trait takes on a new purpose. Attachment style refers to differences in attachment-related thoughts and behaviors, especially relating to the concept of security vs. insecurity. This can be split into components of anxiety (worrying the partner is available, attentive and responsive) and avoidance (preference not to rely on others or open up emotionally). In Helen Fisher's taxonomy, limerence and attachment are considered different systems with different purposes. In the past, other authors have also suggested that limerence could be related to the anxious attachment style. However, in their original 1987 paper conceptualizing romantic love as an attachment process (and relating limerence to attachment style), Cindy Hazan and Phillip Shaver also caution that they are not implying that the early phase of romance is equivalent to being attached. A 1990 study found that the 15% of participants who self-reported an anxious attachment style scored highly on limerence measures (especially obsessive preoccupation and emotional dependence scales), but found considerable overlap of distributions between all three attachment styles and limerence. Studies and a meta-analysis by Bianca Acevedo & Arthur Aron found that while romantic obsession is associated with relationship satisfaction in short-term relationships, it is associated with slightly decreased satisfaction over the long-term and they speculate this could be related to insecure attachment. Love styles Love styles are a concept invented by the sociologist John Alan Lee which can be understood as different ways to love, or different kinds of love stories. Limerence is sometimes considered similar or related to the love style mania (or manic love), named after the Greek theia mania (the madness from the gods). Lee developed his concept of manic love in relation to some of the same sources as Tennov, such as Andreas Capellanus and courtly love. A manic lover is obsessively preoccupied with the beloved. When asked to recall their childhood, a typical manic lover recalls it as unhappy, and they are usually lonely, dissatisfied adults. They are anxious to fall in love; however, they are unsure of which physical type they prefer. Because they are unsure of who to fall in love with, they often fall in love with somebody quite inappropriate (even somebody they initially dislike) and project onto them qualities they want but do not actually have. According to Lee, "Mania can become almost an addiction nearly impossible for the addict to end on his own initiative." Mania is often the first love style of a young person, but others may not experience it until middle age—for example, after a marriage has lost its interest. According to Lee, a cycle of manic loves is often caused by a desperate need to be in love, the cause of which the manic lover must locate and remedy to break free. While Lee describes the manic lover as jealous, Tennov states that people can be limerent and not be jealous. Rather, according to Tennov, what a limerent person desires is exclusivity and this is often mistaken for jealousy. Among the other love styles, mania can be closely compared to eros (or erotic love). 
Both are often considered "romantic love", and mania and eros taken together correspond to passionate love. Like the manic lover, the erotic lover is also intensely preoccupied with their beloved, but the thoughts are optimistic while a manic lover is insecure. Unlike a manic lover, the erotic lover is aware of which physical type they consider ideal. As such, eros begins with a powerful initial attraction, referred to by Stendhal as "a sudden sensation of recognition and hope". The erotic lover also recalls their childhood as happy and eros has been associated with secure attachment, while mania has been associated with attachment anxiety. According to Lee, the love style ludus (noncommittal love as a game, avoidance and juggling multiple partners) and mania possess a "fatal attraction" for one another. It's surprisingly common, but not a good match for happy, mutual love. Erotomania Limerence is sometimes compared to erotomania; however, erotomania is defined as a delusional disorder where the sufferer has a delusional belief that the object of their affection is madly in love with them when they are not. A person suffering from erotomania might interpret subtle, irrelevant details (such as their love object wearing a particular accessory) as coded declarations of love, and the sufferer will invent ways to interpret outright rejections as unserious so they can continue believing the object is secretly in love with them. According to Dorothy Tennov, a person experiencing limerence might misinterpret signals and falsely believe that their LO reciprocates the feeling when they do not, but they are receptive to negative cues, especially when receiving a clear rejection. Love addiction Because limerence is compared to addiction, it is sometimes compared to or contrasted with what is called "love addiction", although according to modern research all romantic love may work like an addiction at the level of the brain. There's an academic discussion over whether all love is an addiction, or whether "love addiction" only refers to brain processes which could be considered abnormal. The term has had an amorphous definition over the years and it does not yet denote a psychiatric condition, but recently one definition has been developed that "Individuals addicted to love tend to experience negative moods and affects when away from their partners and have the strong urge and craving to see their partner as a way of coping with stressful situations." This definition is given in terms of a relationship, although limerence is usually unrequited. Evolutionary purpose In a 1998 essay, as well as in Love and Limerence, Dorothy Tennov has speculated that limerence has an evolutionary purpose. For what ultimate cause might the state of limerence be a proximate cause? In other words, why were people who became limerent successful, maybe more successful than others, in passing their genes on to succeeding generations back a few hundred thousand or million years ago when heads grew larger and fathers who left mother and child to fend for themselves were less "reproductively successful"—in the long run, that is (Morgan 1993). Did limerence evolve to cement a relationship long enough to get the offspring up and running? [...] The most consistent result of limerence is mating, not merely sexual interaction but also commitment, the establishment of a shared domicile in the form of a cozy nest built for the enjoyment of ecstasy, for reproduction, and for the rearing of children. 
Helen Fisher's components of romantic attraction are largely derived from Tennov's components of limerence, and in a similar vein as Tennov, Fisher has theorized that this 'attraction' system evolved to facilitate mammalian mate choice. Tennov has suggested that if the neurophysiological "machinery" for limerence is not a universal among all humans, then having both phenotypes (limerent and nonlimerent) in the population might be beneficial and an evolutionarily stable strategy. Characteristics Addiction Limerence has been called an addiction. The early stage of romantic love is comparable to a behavioral addiction (i.e. addiction to a non-substance) but the "substance" involved is the loved person. A team led by Helen Fisher used fMRI to find that people who had "just fallen madly in love" showed activation in an area of the brain called the ventral tegmental area, which projects dopamine to other brain areas, while looking at a photograph of their beloved. This as well as activity in other key areas supports the theory that people in love experience what is called incentive salience in response to the loved person, which could be a result of oxytocin activity in motivation pathways in the brain. Incentive salience is the property by which cues in the environment stand out to a person and become attention-grabbing and attractive, like a "motivational magnet" which pulls a person towards a particular reward. The phenomenon Tennov describes as a loved one taking on a "special meaning" to the person in love is believed to be related to this heightened salience in response to the loved one. In addiction research, a distinction is also drawn between "wanting" a reward (i.e. incentive salience, tied to mesocorticolimbic dopamine) and "liking" a reward (i.e. pleasure, tied to hedonic hotspots), aspects which are dissociable. People can be addicted to drugs and compulsively seek them out, even when taking the drug no longer results in a high or the addiction is detrimental to one's life. They can also "want" (i.e. feel compelled towards, in the sense of incentive salience) something which they do not cognitively wish for. In a similar way, people who are in love may "want" a loved person even when interactions with them are not pleasurable. For example, they may want to contact an ex-partner after a rejection, even when the experience will only be painful. It is also possible for a person to be "in love" with somebody they do not like, or who treats them poorly. Fisher's team proposes that romantic love is a "positive addiction" (i.e. not harmful) when requited and a "negative addiction" when unrequited or inappropriate. In brain scans of long-term romantic love (involving subjects who professed to be "madly" in love, but were together with their partner 10 years or more), attraction similar to early-stage romantic love was associated with dopamine reward center activity ("wanting"), but long-term attachment was associated with the globus palludus, a site for opiate receptors identified as a hedonic hotspot ("liking"). Long-term romantic lovers also showed lower levels of obsession compared to those in the early stage. Lovesickness Limerence is usually unrequited, and a horrible experience for the limerent person. Limerence is debilitating for some people. Lovesickness is a state of mind characterized by addictive cravings, frustration, depression, melancholy and intrusive thinking. In Dorothy Tennov's survey group, 42% reported being "severely depressed about a love affair". 
Other effects are distraction and self-isolation. Fisher's fMRI scans of rejected lovers showed activation in brain areas associated with physical pain, craving and assessing one's gains and losses. Tennov describes being under the spell herself, saying "Before it happened, I couldn't have imagined it[.] Now, I wouldn't want to have it happen again." Some people even described to her incidents of self-injury, but Tennov maintains that limerence on its own is normal and tragedies involve additional factors. Lovesickness has been pathologized in previous centuries, but is not currently in the ICD-10, ICPC or DSM-5. Author and psychologist Frank Tallis has made the argument that all love—even normal love—is largely indistinguishable from mental illness. In his view, the ethical dilemma behind the notion that love could be a psychopathology can be resolved by suggesting that there is no difference between "normal" and "abnormal" when it comes to love. There is also an ethical debate over the implications of using modern drugs for this type of thing. Bioethicist Brian Earp and colleagues have argued that the voluntary use of anti-love biotechnology (for example, a drug made to cause the person who uses it to fall out of love) could be ethical. However, there is currently no drug which is a realistic candidate. Although limerence was not intended to denote an abnormal state and lovesickness is no longer recognized as a medical condition, symptoms still bear a resemblance to many entries in the DSM. For example, when people fall in love, there are four core symptoms: preoccupation, episodes of melancholy, episodes of rapture and instability of mood. These correspond with conventional diagnoses of obsessionality (or OCD), depression, mania (or hypomania) and manic depression. Other examples are physical symptoms similar to panic attacks (pounding heart, trembling, shortness of breath and lightheadedness), excessive worry about the future which resembles generalized anxiety disorder, appetite disturbance and sensitivity about one's appearance which resembles anorexia nervosa, and the feeling that life has become a dream which resembles derealization and depersonalization. There's a scholarly debate about the involuntary nature of romantic love. The notion that falling in love is an involuntary process is different from the issue of whether one's behavior can be considered autonomous while in love. Tallis argues that love evolved to override rationality so that one finds a lover and reproduces regardless of the personal costs of bearing and raising a child. He uses the example of Charles Darwin who, never being romantic, is said to have sat and made a list of reasons to marry or not to marry. Being accustomed to total freedom and worrying about such things as financial austerities that would limit his expenditure on books, Darwin found his reasons not to marry greatly outweighed his reasons to marry. However, shortly thereafter Darwin unexpectedly fell in love, suddenly becoming preoccupied with cozy images of married life and thus quickly converting from bachelor to husband. Tallis writes:At first sight, it seems extraordinary that evolutionary forces might conspire to shape something that looks like a mental illness to ensure reproductive success. Yet, there are many reasons why love should have evolved to share with madness several features—the most notable of which is the loss of reason. 
Like the ancient humoral model of love sickness, evolutionary principles seem to have necessitated a blurring of the distinction between normal and abnormal states. Evolution expects us to love madly, lest we fail to love at all. According to Tennov, "Love has been called a madness and an affliction at least since the time of the ancient Greeks and probably earlier than that." Historical accounts of lovesickness attribute it, for example, to being struck by an arrow shot by Eros, to a sickness entering through the eyes (similar to the evil eye), to an excess of black bile, or to spells, potions and other magic. Attempts to treat lovesickness have been made throughout history using a variety of plants, natural products, charms and rituals. The first known treatise on lovesickness is Remedia Amoris, by the poet Ovid. Crystallization Crystallization, for Tennov, is the "remarkable ability to emphasize what is truly admirable in LO and to avoid dwelling on the negative, even to respond with a compassion for the negative and render it, emotionally if not perceptually, into another positive attribute." Tennov borrows the term from the French writer Stendhal from his 1821 treatise on love, De l'Amour, in which he describes an analogy where a tree branch is tossed into a salt mine. After remaining there for several months, the tree branch (or twig) becomes covered in salt crystals which transform it "into an object of shimmering beauty". In the same way, unattractive characteristics of an LO are given little to no attention so that the LO is seen in the most favorable light. One of Tennov's interviewees, Lenore, says:Yes I knew he gambled, I knew he sometimes drank too much, and I knew he didn't read a book from one year to the next. I knew and I didn't know. I knew it but I didn't incorporate it into the overall image. I dwelt on his wavy hair, the way he looked at me, the thought of his driving to work in the morning, his charm (that I believed must surely affect everyone he met), the flowers he sent, the considerations he had shown to my sister's children at the picnic last summer, the feeling I had when we were in close physical contact, the way he mixed a martini, his laugh, the hair on the back of his hand. Okay! I know it's crazy, that my list of 'positives' sounds silly, but those are the things I think of, remember, and, yes, want back again!This kind of "misperception" or "love is blind" bias is more often referred to as "idealization", which modern research considers to be a form of positive illusions. For example, a 1996 study found that "Individuals were happier in their relationships when they idealized their partners and their partners idealized them." However, Tennov argues against the term "idealization", because she says that it implies that the image seen by the person experiencing romantic passion "is molded to fit a preformed, externally derived, or emotionally needed conception". In crystallization, she says, "the actual and existing features of LO merely undergo enhancement." A limerent person may overlook red flags or incompatibilities. Tennov notes that the bias can be an impediment to a limerent person wishing to recover from the condition, as another of her interviewees says:I decided to make a list in block letters of everything about Elsie that I found unpleasant or annoying. It was a very long list. On the other side of the paper, I listed her good points. It was a short list. But it didn't help at all. 
The good points seemed so much more important, and the bad things, well, in Elsie they weren't so bad, or they were things I felt I could help her with. Intrusive thinking and fantasy Intrusive thinking is an oft-reported feature of romantic love. Tennov wrote that "Limerence is first and foremost a condition of cognitive obsession." One study found that on average people in love spent 65% of their waking hours thinking about the beloved. Arthur Aron says "It is obsessive-compulsive when you're feeling it. It's the center of your life." At the height of obsessive fantasy, people experiencing limerence may spend 85 to nearly 100% of their days and nights doting on the LO, lose ability to focus on other tasks and become easily distracted. A limerent person can spend time fantasizing about future events even if they never come true, as the anticipation on its own yields dopamine. According to Tennov, limerent fantasy is unsatisfactory unless rooted in reality, because the fantasizer may want the fantasy to seem realistic enough to be somewhat possible. The fantasies can nevertheless be wildly unrealistic, for example, one person related to her an elaborate rescue fantasy in which he saves an LO's 5-year-old cousin from a group of motorcycles only to be bitten by a snake and die in his LO's lap. This fantasizing along with the replaying of actual memories forms a bridge between one's ordinary life and the eventual hoped for moment of consummation. Tennov says that limerent fantasy is "inescapable", something that just "happens" as opposed to something one "does". One theory of obsessive thinking draws from the parallel with drug addiction: as the early stage of romantic love is compared to addiction to a person, and drug addicts also exhibit obsessive thinking about drug use. Tennov has written that limerent fantasy based in reality "can be conceived as intricate strategy planning". In the late 1990s, it had also been speculated that being in love may lower serotonin levels in the brain, which could cause the intrusive thoughts. The serotonin hypothesis is based in part on a comparison to obsessive–compulsive disorder, but the experimental evidence is ambiguous. The experiments have tested blood levels of serotonin, with the first experiment finding lowered serotonin levels, but the second experiment finding that men and women were affected differently. This second experiment found that obsessive thinking was actually associated with increased serotonin levels in women. For some people who have a fear of intimacy or a history of trauma, limerent fantasy might be an escape or a means of having what feels like a relationship but without the threat of real intimacy. Fear of rejection Tennov's conception of fear of rejection was characterized by nervous feelings and shyness around the limerent object, "worried that your own actions may bring about disaster". Awkwardness, stammering, confusion and shyness predominate at the behavioral level. She quotes the poet Sappho who writes "Sweat runs down in rivers, a tremor seizes [...] Lost in the love-trance." One of Tennov's interviewees, a 28-year-old truck driver, said "It was like what you might call stage fright, like going up in front of an audience. [...] I was awkward as hell." Fisher et al. has suggested that fear in the presence of the beloved is caused by elevated levels of dopamine. 
Many of the people Tennov interviewed described being normally confident, but suddenly shy when the limerent object is around, or being only in this state of fear with certain limerent objects but not others. Tennov wondered if fear of rejection even serves an evolutionary purpose, by drawing out the courtship process to ensure a greater chance of finding a compatible partner. Uncertainty and hope According to Tennov, the goal of limerence is "oneness" with the LO, i.e. mutual reciprocation or return of feelings. Limerence subsides in a relationship when the limerent person receives adequate reciprocation from the LO. However, mutual reciprocation is a matter of perception on the part of the limerent person, therefore she says the goal of limerence is "removing uncertainty" about whether or not the LO reciprocates. Some authors have conceptualized limerence as an attachment process. In the early stages of romantic love, individuals may start out hypervigilant (hyperaware and sensitive to cues) due to uncertainty and novelty, but become synchronized over time as a relationship progresses. During the early period of limerence (which may begin as a crush or with a physical attraction), Tennov estimates the limerent person may spend up to 30% of their waking hours thinking about the LO, feel a sense of freedom, elation and buoyancy, and enjoy the preoccupation. Then, when elements of doubt and uncertainty are added to the situation, the time spent preoccupied can soar to even 100% of waking hours, provided there is always some hope the LO might return the feelings. At 100%, this might be joy or despair, depending on whether the limerent person perceives the LO as returning the feelings or rejecting them. One of Tennov's interviewees says "When I felt [Barry] loved me, I was intensely in love and deliriously happy; when he seemed rejecting, I was still intensely in love, only miserable beyond words." Much of the time preoccupied is spent replaying events, searching for their meaning to determine this. These thoughts are felt to be involuntary by the individual, occurring intrusively, even to the point of distraction. Uncertainty can also be introduced by the presence of barriers to a relationship, or what Tennov calls "intensification through adversity". She writes: The recognition that some uncertainty must exist has been commented on and complained about by virtually everyone who has undertaken a serious study of the phenomenon of romantic love. Psychologists Ellen Bersheid and Elaine Walster discussed this common observation made, they note, by Socrates, Ovid, the Kama Sutra, and "Dear Abby," that the presentation of a hard-to-get as opposed to an immediately yielding exterior is a help in eliciting passion. The presence of barriers was crucial to the mutual limerence of Romeo and Juliet, hence this is often called "the Romeo and Juliet effect." Helen Fisher calls this "frustration attraction", and suggests that attraction increases because dopamine levels increase in the brain when an expected reward is delayed. Another theory promoted by Fisher is that separation evokes panic and stress, or activation of the hypothalamic–pituitary–adrenal axis. Uncertainty can also come in the form of intermittent reinforcements, which prolong the duration of limerence, keeping the brain "hooked" in. 
Robert Sternberg has written that passionate or infatuated love essentially thrives under these conditions:The available evidence suggests that such love may survive only under conditions of intermittent reinforcement, when uncertainty reduction plays a key role in one's feelings for another [...]. Tennov's (1979) analysis suggests that limerence can survive only under conditions in which full development and consummation of love is withheld and in which titillation of one kind or another continues over time. Once the relationship is allowed to develop or once the relationship becomes an utter impossibility, extinction seems to take place.Hence, Judson Brewer characterizes the uncertainty of receiving an occasional message from an LO as "gasoline poured on the fire". "Limerence can live a long life sustained by crumbs," says Tennov, who compares this to the uncertainty of gambling: "Both gamblers and limerents find reason to hope in wild dreams." Tennov writes that "It is limerence, not love, that increases when lovers are able to meet only infrequently or when there is anger between them." However, uncertainty can just be a matter of perception on the part of the limerent person, rather than it being based on actual obstacles or events. One married couple Tennov interviewed was mutually limerent in high school, but each was too shy to make the first move so that each was unaware of the other's attraction. They then met again in college 5 years later and married, but only found out about their mutual limerence in high school through a chance conversation several years after that. Tennov notes that there were no obstacles to their relationship, but suggests their inaccurate perceptions that each was not interested probably increased their limerence in high school. According to Tennov, because limerence only occurs when there is at least some hope of reciprocation, one can attempt to extinguish limerence by removing any hope that the LO will reciprocate. For example, an individual who is the object of unwanted attraction should give the clearest possible rejection to the limerent person, and not say something such as "I like you as a friend, but..." which is too ambiguous. Physiology The physiological effects of limerence can include trembling, pallor, flushing, weakness, sweating, butterflies in the stomach and a pounding heart. Tennov wrote that the sensation of limerence is associated primarily with the heart, even speculating that intrusive thinking results in mutual feedback where thinking of the limerent object causes an increase in heart rate, which in turn changes thought patterns. She says: When I asked interviewees in the throes of the limerent condition to tell where they felt the sensation of limerence, they pointed unerringly to the midpoint in their chest. So consistently did this occur that it would seem to be another indication that the state described is indeed limerence, not affection (described by some as located "all over", or even in "the arms" when held out in a gesture of embrace) or in sexual feelings (located, appropriately enough, in the genitals). Readiness Some people may have a heightened susceptibility to limerence, a state Tennov calls "readiness", "longing for limerence" or being "in love with love". This may occur due to biological factors such as adolescence, but also psychological factors like loneliness and discontent. Sometimes readiness can be so intense that a person falls in love with somebody who only has a minimal appeal. 
Shaver and Hazan observed that those suffering from loneliness are more susceptible to limerence, arguing that "if people have a large number of unmet social needs, and are not aware of this, then a sign that someone else might be interested is easily built up in that person's imagination into far more than the friendly social contact that it might have been. By dwelling on the memory of that social contact, the lonely person comes to magnify it into a deep emotional experience, which may be quite different from the reality of the event." Duration Tennov estimates, based on both questionnaire and interview data, that limerence most commonly lasts between 18 months and three years with an average of two years, but may be as short as mere days or as long as a lifetime. One woman wrote to Tennov about her mother's limerence which lasted 65 years. Tennov calls it the worst case when the limerent person cannot get away, because the LO is a coworker or lives nearby. Limerence can last indefinitely sometimes when it is unrequited, especially when reciprocation is uncertain. This could be such as when receiving mixed signals from an LO, or because of the intermittent reinforcement of an LO ignoring the limerent person for awhile and then suddenly calling. Tennov's estimate of 18 months to 3 years is sometimes used as the normal duration of romantic love. The other common estimate, 12–18 months, comes from Donatella Marazziti's experiment comparing the serotonin levels of people in love with OCD patients. In this experiment, subjects who had fallen in love within the past 6 months (who were in a relationship) were measured to have serotonin levels which were different from controls, levels which returned to normal after 12–18 months. According to Tennov, ideally limerence will be replaced by another type of love. In this way, feelings may evolve over the duration of a relationship: "Those whose limerence was replaced by affectional bonding with the same partner might say, 'We were very much in love when we married; today we love each other very much. The more stable type of love which is usually the characteristic of long-term relationships is commonly called companionate love, storge or attachment. Love regulation Love regulation is "the use of behavioral or cognitive strategies to change the intensity of current feelings of romantic love." For example, looking at pictures of the beloved has been shown to increase feelings of infatuation (i.e. passionate love) and attachment (i.e. companionate love). Sandra Langeslag notes that it's a common misconception that love feelings are uncontrollable, or even should not be controlled; however studies using EEG and psychometrics have shown that love regulation is possible and may be useful. In some cases, love feelings may be stronger than desired such as after a breakup, or love feelings may be weaker than desired such as when they decline throughout a long-term relationship. In a technique called cognitive reappraisal, one focuses on positive or negative aspects of the beloved, the relationship, or imagined future scenarios: In positive reappraisal, one focuses on positive qualities of the beloved ("he's kind", "she's spontaneous"), the relationship ("we have so much fun together") or imagined future scenarios ("we'll live happily ever after"). Positive reappraisal increases attachment and can increase relationship satisfaction. 
In negative reappraisal, one focuses on negative qualities of the beloved ("he's lazy", "she's always late"), the relationship ("we fight a lot") or imagined future scenarios ("he'll cheat on me"). Negative reappraisal decreases feelings of infatuation and attachment, but decreases mood in the short term. Langeslag has recommended distraction as an antidote to the short-term decrease in mood. Preliminary results from a 2024 study of online limerence communities conducted by Langeslag found that negative reappraisal decreased limerence for the study participants. A therapist named Brandy Wyant has also had her limerent clients list reasons their LO is not perfect, or reasons they and their LO are not compatible. Love regulation doesn't switch feelings on or off immediately, so Langeslag recommends, for example, writing a list of things once a day to feel a lasting change. Based on the addiction theory of romantic love, Helen Fisher and colleagues recommend that rejected lovers remove all reminders of their beloved, such as letters or photos, and avoid contact with the rejecting partner. Reminders can cause cravings which prolong recovery. Fisher et al. also suggests that positive contact with friends could reduce cravings. Rejected lovers should stay busy to distract themselves, and engage in self-expanding activities. Controversy In 2008, Albert Wakin, a professor who knew Tennov at the University of Bridgeport but did not assist in her research, and Duyen Vo, a graduate student, suggested that limerence is similar to obsessive–compulsive disorder (OCD) and substance use disorder (SUD). They presented work to an American Association of Behavioral and Social Sciences conference, but suggested that much more research is needed before it could be proposed to the APA that limerence be included in the DSM. They began conducting an unpublished study and reported to USA Today that about 25% or 30% of their participants had experienced a limerent relationship as they defined it. While limerence and romantic love in general have been compared to OCD since 1998 according to a hypothesis invented by other authors, experimental evidence for a connection with serotonin is ambiguous. This hypothesis was based on a superficial comparison between features of preoccupation shared between the two conditions, for example focusing on trivial details or worrying about the future. Helen Fisher has commented on Wakin & Vo in 2008, stating that limerence is romantic love and that "They are associating the negative aspects of it with the term, and that can be a disorder." Fisher is one of the original authors to compare limerence to OCD, and has proposed that romantic love is a "natural addiction" which can be either positive or negative depending on the situation. Fisher stated again in 2024 that she does not think there is any difference between limerence and romantic love. In 2017, Wakin has stated that he feels that brain scans of limerence would help establish it as "something unlike everything that has been diagnosed already", but brain scans have actually been described since as far back as 2002. In Fisher et al.'s original brain scan experiments, all participants spent more than 85% of their waking hours thinking about their loved one. Wakin also claims that a person experiencing limerence can never be satiated, even if their feelings are reciprocated. 
Tennov found many cases of nonlimerent people who described their limerent partners being "stricken with a kind of insatiability" in this way, and that "no degree of attentiveness was ever sufficient". However, according to Tennov's theory, the intensity of limerence diminishes when the limerent person perceives sustained reciprocation, so it is prolonged inside of a relationship when the LO behaves in a nonlimerent manner. Other authors who are in the mainstream have speculated that unwanted obsession inside a relationship could be related to self-esteem and an insecure attachment style. In the 1999 preface to her revised edition of Love and Limerence, Dorothy Tennov describes limerence as an aspect of basic human nature and remarks that "Reaction to limerence theory depends partly on acquaintance with the evidence for it and partly on personal experience. People who have not experienced limerence are baffled by descriptions of it and are often resistant to the evidence that it exists. To such outside observers, limerence seems pathological." Tennov states that limerence is normal and says that even those of her interviewees who experienced limerence of a distressing variety were "fully functioning, rational, emotionally stable, normal, nonneurotic, nonpathological members of society" and "could be characterized as responsible and quite sane". She suggests that limerence is too often interpreted as "mental illness" in psychiatry. Tragedies such as violence, she says, involve limerence when it is "augmented and distorted" by other conditions, which she contrasts with "pure limerence". In a 2005 Q&A, Tennov is asked if limerence can ever lead to a situation such as depicted in the movie Fatal Attraction, but Tennov replies that the movie character seemed to her to be a caricature. Most romantic stalkers are an ex-partner, erotomanic, have a personality disorder, are intellectually limited or socially incompetent. One writer who investigated the phenomenon of limerence videos on TikTok in 2024 has written that it seemed to her that the many videos created by the relationship coaches there were actually about social media stalking rather than having anything at all to do with limerence. See also References Bibliography 1970s neologisms Concepts in aesthetics Emotions Interpersonal relationships Love Personal life Philosophy of love Psychology Sexology Citation overkill
Limerence
[ "Biology" ]
11,305
[ "Behavior", "Psychology", "Sexology", "Behavioural sciences", "Interpersonal relationships", "Human behavior" ]
154,156
https://en.wikipedia.org/wiki/Cloak
A cloak is a type of loose garment worn over clothing, mostly but not always as outerwear for outdoor wear, serving the same purpose as an overcoat, protecting the wearer from the weather. It may form part of a uniform. People in many different societies may wear cloaks. Over time cloak designs have changed to match fashion and available textiles. Cloaks generally fasten at the neck or over the shoulder, and vary in length from the hip all the way down to the ankle – mid-calf being the normal length. They may have an attached hood and may cover and fasten down the front, in which case they have holes or slits for the hands to pass through. However, cloaks are almost always sleeveless. Christian clerics may wear a cappa or a cope – forms of cloak – as liturgical vestments or as part of a religious habit. Etymology The word cloak comes from Old North French cloque (Old French cloche, cloke) meaning "bell", from Medieval Latin clocca "travelers' cape," literally "a bell," so called from the garment's bell-like shape. Thus the word is related to the word clock. History Ancient Greeks and Romans were known to wear cloaks. Greek men and women wore the himation, from the Archaic through the Hellenistic periods ( 750–30 BC). Romans would later wear the Greek-styled cloak, the pallium. The pallium was quadrangular, shaped like a square, and sat on the shoulders, not unlike the himation. Romans of the Republic would wear the toga as a formal display of their citizenship. It was denied to foreigners and was worn by magistrates on all occasions as a badge of office. The toga allegedly originated with Numa Pompilius ( BC), the second semi-legendary king of Rome. Eminent personages in Kievan Rus' adopted the Byzantine chlamys in the form of a fur-lined (). Powerful noblemen and elite warriors of the Aztec Empire would wear a tilmàtli; a Mesoamerican cloak/cape used as a symbol of their upper status. Cloth and clothing was of utmost importance for the Aztecs. The more elaborate and colorful tilmàtlis were strictly reserved for élite high priests, emperors; and the Eagle warriors as well as the Jaguar knights. Opera cloak In full evening dress in the Western countries, ladies and gentlemen frequently use the cloak as a fashion statement, or to protect the fine fabrics of evening wear from the elements, especially where a coat would crush or hide the garment. Opera cloaks are made of quality materials such as wool or cashmere, velvet and satin. Ladies may wear a long (over the shoulders or to ankles) cloak usually called a cape, or a full-length cloak. Gentlemen wear an ankle-length or full-length cloak. Formal cloaks often have expensive, colored linings and trimmings such as silk, satin, velvet and fur. The term was the title of a 1942 operatic comedy. In literature and the arts According to the King James Version of the Bible, Matthew recorded Jesus of Nazareth saying in Matthew 5:40: "And if any man will sue thee at the law, and take away thy coat, let him have thy cloke also." The King James Version of the Bible has the words recorded a little differently in Luke 6:29: "...and him that taketh away thy cloke, forbid not to take thy coat also." Cloaks are a staple garment in the fantasy genre due to the popularity of medieval settings. They are also usually associated with witches, wizards, and vampires; the best-known stage version of Dracula, which first made actor Bela Lugosi prominent, featured him wearing it so that his exit through a trap door concealed on the stage could seem sudden. 
When Lugosi reprised his role as Dracula for the 1931 Universal Studios motion picture version of the play, he retained the cloak as part of his outfit, which made such a strong impression that cloaks came to be equated with Count Dracula in nearly all non-historical media depictions of him. Fantasy cloaks are often magical. For example, they may grant the person wearing it invisibility as in the Harry Potter series by J. K. Rowling. A similar sort of garment is worn by the members of the Fellowship of the Ring in The Lord of the Rings by J. R. R. Tolkien, although instead of granting complete invisibility, the Elf-made cloaks simply appear to shift between any natural color (e.g. green, gray, brown) to help the wearer to blend in with his or her surroundings. In the Marvel comic book stories and in the Marvel Cinematic Universe, the sorcerer Doctor Strange is associated with a magical Cloak of Levitation, which not only enables its wearer to levitate, but has other mystical abilities as well. Doctor Strange also uses it as a weapon. Alternatively, cloaks in fantasy may nullify magical projectiles, as the "cloak of magic resistance" in NetHack. Metaphor Figuratively, a cloak may be anything that disguises or conceals something. In many science fiction franchises, such as Star Trek, there are cloaking devices, which provide a way to avoid detection by making objects appear invisible. A real device, albeit of limited capability, was demonstrated in 2006. Because they keep a person hidden and conceal a weapon, the phrase cloak and dagger has come to refer to espionage and secretive crimes: it suggests murder from hidden sources. "Cloak and dagger" stories are thus mystery, detective, and crime stories of this. The vigilante duo of Marvel comics Cloak and Dagger is a reference to this. See also Kinsale cloak Mantle (clothing) Poncho Robe Serape Shawl Shroud Stole (shawl) Spanish cloak Veil Witzchoura Wrap (clothing) References Sources Oxford English Dictionary Ashelford, Jane: The Art of Dress: Clothing and Society 1500–1914, Abrams, 1996. Baumgarten, Linda: What Clothes Reveal: The Language of Clothing in Colonial and Federal America, Yale University Press, 2016. Payne, Blanche: History of Costume from the Stone Age to the Twentysecond Century, Harper & Row, 2965. No ISBN for this edition; ASIN B0006BMNFS Picken, Mary Brooks: The Fashion Dictionary, Funk and Bagnalls, 1957. (1973 edition ) Medieval European costume 18th-century fashion 19th-century fashion 20th-century fashion Formal wear History of clothing Costume design
Cloak
[ "Engineering" ]
1,343
[ "Costume design", "Design" ]
154,163
https://en.wikipedia.org/wiki/Curry%27s%20paradox
Curry's paradox is a paradox in which an arbitrary claim F is proved from the mere existence of a sentence C that says of itself "If C, then F". The paradox requires only a few apparently innocuous logical deduction rules. Since F is arbitrary, any logic having these rules allows one to prove everything. The paradox may be expressed in natural language and in various logics, including certain forms of set theory, lambda calculus, and combinatory logic. The paradox is named after the logician Haskell Curry, who wrote about it in 1942. It has also been called Löb's paradox after Martin Hugo Löb, due to its relationship to Löb's theorem. In natural language Claims of the form "if A, then B" are called conditional claims. Curry's paradox uses a particular kind of self-referential conditional sentence, as demonstrated in this example: "If this sentence is true, then Germany borders China." Even though Germany does not border China, the example sentence certainly is a natural-language sentence, and so the truth of that sentence can be analyzed. The paradox follows from this analysis. The analysis consists of two steps. First, common natural-language proof techniques can be used to prove that the example sentence is true [steps 1–4 below]. Second, the truth of the sentence can be used to prove that Germany borders China [steps 5–6]:
1. The sentence reads "If this sentence is true, then Germany borders China". [repeat the definition, to get step numbering compatible with the formal proof]
2. If the sentence is true, then it is true. [obvious, i.e., a tautology]
3. If the sentence is true, then: if the sentence is true, then Germany borders China. [replace "it is true" by the sentence's definition]
4. If the sentence is true, then Germany borders China. [contract the repeated condition]
5. But 4. is what the sentence says, so the sentence is indeed true.
6. The sentence is true [by 5.], and [by 4.] if it is true, then Germany borders China. So, Germany borders China. [modus ponens]
Because Germany does not border China, this suggests that there has been an error in one of the proof steps. The claim "Germany borders China" could be replaced by any other claim, and the sentence would still be provable. Thus every sentence appears to be provable. Because the proof uses only well-accepted methods of deduction, and because none of these methods appears to be incorrect, this situation is paradoxical. Informal proof The standard method for proving conditional sentences (sentences of the form "if A, then B") is called "conditional proof". In this method, in order to prove "if A, then B", first A is assumed, and then with that assumption B is shown to be true. To produce Curry's paradox, as described in the two steps above, apply this method to the sentence "if this sentence is true, then Germany borders China". Here A, "this sentence is true", refers to the overall sentence, while B is "Germany borders China". So, assuming A is the same as assuming "If A, then B". Therefore, in assuming A, we have assumed both A and "If A, then B". Therefore, B is true, by modus ponens, and we have proven "If this sentence is true, then 'Germany borders China' is true" in the usual way, by assuming the hypothesis and deriving the conclusion. Now, because we have proved "If this sentence is true, then 'Germany borders China' is true", we can again apply modus ponens, because we know that the claim "this sentence is true" is correct. In this way, we can deduce that Germany borders China. In formal logics Sentential logic The example in the previous section used unformalized, natural-language reasoning.
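Before turning to the formal treatment, it may help to see the informal argument of the previous section written out in symbols. The following LaTeX sketch is only an illustrative rendering, not a quotation from any source; it reuses the article's own letters (C for "this sentence is true", F for "Germany borders China") and makes the three ingredients — conditional proof, contraction of a repeated assumption, and modus ponens — explicit:

```latex
\begin{align*}
1.&\ C \leftrightarrow (C \to F) && \text{definition of the self-referential sentence } C\\
2.&\ \quad C                     && \text{assumption, opening a conditional proof}\\
3.&\ \quad C \to F               && \text{from 1 and 2, unfolding the definition}\\
4.&\ \quad F                     && \text{modus ponens on 2 and 3 (the assumption } C \text{ is used twice: contraction)}\\
5.&\ C \to F                     && \text{conditional proof, discharging assumption 2}\\
6.&\ C                           && \text{from 1 and 5, folding the definition back up}\\
7.&\ F                           && \text{modus ponens on 5 and 6}
\end{align*}
```

Dropping or restricting any one of these rules — as contraction-free substructural logics do — blocks the derivation.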
Curry's paradox also occurs in some varieties of formal logic. In this context, it shows that if we assume there is a formal sentence (X → Y), where X itself is equivalent to (X → Y), then we can prove Y with a formal proof. One example of such a formal proof is as follows. For an explanation of the logic notation used in this section, refer to the list of logic symbols.
1. X := (X → Y) [define X as a sentence equivalent to the implication (X → Y)]
2. X → X [law of identity]
3. X → (X → Y) [from 2, replacing the consequent X by its definition]
4. X → Y [from 3, by contraction of the repeated antecedent]
5. X [from 4, since (X → Y) is, by definition, X]
6. Y [from 4 and 5, by modus ponens]
An alternative proof is via Peirce's law. If X = X → Y, then (X → Y) → X. This together with Peirce's law ((X → Y) → X) → X and modus ponens implies X and subsequently Y (as in the above proof). The above derivation shows that, if Y is an unprovable statement in a formal system, then there is no statement X in that system such that X is equivalent to the implication (X → Y). In other words, step 1 of the previous proof fails. By contrast, the previous section shows that in natural (unformalized) language, for every natural language statement Y there is a natural language statement Z such that Z is equivalent to (Z → Y) in natural language. Namely, Z is "If this sentence is true then Y". Naive set theory Even if the underlying mathematical logic does not admit any self-referential sentences, certain forms of naive set theory are still vulnerable to Curry's paradox. In set theories that allow unrestricted comprehension, we can prove any logical statement Y by examining the set X = {x | x ∈ x → Y}. One then shows easily that the statement X ∈ X is equivalent to (X ∈ X) → Y. From this, Y may be deduced, similarly to the proofs shown above. (Here "X ∈ X" stands for "this sentence".) Therefore, in a consistent set theory, the set X does not exist for false Y. This can be seen as a variant on Russell's paradox, but is not identical. Some proposals for set theory have attempted to deal with Russell's paradox not by restricting the rule of comprehension, but by restricting the rules of logic so that it tolerates the contradictory nature of the set of all sets that are not members of themselves. The existence of proofs like the one above shows that such a task is not so simple, because at least one of the deduction rules used in the proof above must be omitted or restricted. Lambda calculus with restricted minimal logic Curry's paradox may be expressed in untyped lambda calculus, enriched by implicational propositional calculus. To cope with the lambda calculus's syntactic restrictions, implication is written as a prefix function of two parameters, so that a lambda term applied to A and B stands for the usual infix notation A → B. An arbitrary formula Z can be proved by defining a lambda function N := λp. (p → Z), and X := (Y N), where Y here denotes Curry's fixed-point combinator (not the formula Y of the earlier sections). Then X = (N X) = (X → Z) by definition of N and the fixed-point combinator, hence the above sentential logic proof can be duplicated in the calculus: first X → X, then X → (X → Z), then X → Z by contraction, then X, and finally Z. In simply typed lambda calculus, fixed-point combinators cannot be typed and hence are not admitted. Combinatory logic Curry's paradox may also be expressed in combinatory logic, which has equivalent expressive power to lambda calculus. Any lambda expression may be translated into combinatory logic, so a translation of the implementation of Curry's paradox in lambda calculus would suffice; the resulting combinatory term plays the same role as X above, and the derivation of Z goes through unchanged. Discussion Curry's paradox can be formulated in any language supporting basic logic operations that also allows a self-recursive function to be constructed as an expression. Two mechanisms that support the construction of the paradox are self-reference (the ability to refer to "this sentence" from within a sentence) and unrestricted comprehension in naive set theory.
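The role of the self-recursive expression can be illustrated in a modern typed language. The following Haskell sketch is only an illustration (the names Curry, selfApply and anything are invented for this example): declaring a recursive type satisfying Curry a ≅ (Curry a → a) — the type-level analogue of a sentence X equivalent to (X → Y) — is enough to give every type an inhabitant, although evaluating that inhabitant loops forever, which is exactly why simply typed lambda calculus refuses to type fixed-point combinators:

```haskell
-- A type isomorphic to functions from itself to 'a':
--   Curry a  ≅  (Curry a -> a)
-- Under propositions-as-types this mirrors a sentence X with X = (X -> Y).
newtype Curry a = Curry { runCurry :: Curry a -> a }

-- Self-application: from a value of type 'Curry a' we extract an 'a'.
selfApply :: Curry a -> a
selfApply c = runCurry c c

-- An inhabitant of every type 'a' -- the "proof" of an arbitrary claim.
-- It type-checks, but forcing it recurses forever.
anything :: a
anything = selfApply (Curry selfApply)

main :: IO ()
main = putStrLn "The program type-checks; evaluating 'anything' would never terminate."
```

Under this reading, admitting the unrestricted recursive type plays the same role as admitting unrestricted comprehension in naive set theory: it buys expressiveness at the cost of letting every proposition be "proved".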
Natural languages nearly always contain many features that could be used to construct the paradox, as do many other languages. Usually, the addition of metaprogramming capabilities to a language will add the features needed. Mathematical logic generally does not allow explicit reference to its own sentences; however, the heart of Gödel's incompleteness theorems is the observation that a different form of self-reference can be added—see Gödel number. The rules used in the construction of the proof are the rule of assumption for conditional proof, the rule of contraction, and modus ponens. These are included in most common logical systems, such as first-order logic. Consequences for some formal logic In the 1930s, Curry's paradox and the related Kleene–Rosser paradox, from which Curry's paradox was developed, played a major role in showing that various formal logic systems allowing self-recursive expressions are inconsistent. The axiom of unrestricted comprehension is not supported by modern set theory, and Curry's paradox is thus avoided. See also Girard's paradox List of paradoxes Richard's paradox Zermelo–Fraenkel set theory Fixed-point combinator References External links Penguins Rule the Universe: A Proof that Penguins Rule the Universe, a brief and entertaining discussion of Curry's paradox. Mathematical paradoxes Mathematical logic Paradoxes of naive set theory Self-referential paradoxes
Curry's paradox
[ "Mathematics" ]
1,870
[ "Mathematical logic", "Basic concepts in infinite set theory", "Basic concepts in set theory", "Mathematical paradoxes", "Paradoxes of naive set theory", "Mathematical problems" ]
154,171
https://en.wikipedia.org/wiki/List%20of%20heritage%20railways
This list of heritage railways includes heritage railways sorted by country, state, or region. A heritage railway is a preserved or tourist railroad which is run as a tourist attraction, is usually but not always run by volunteers, and often seeks to re-create railway scenes of the past. Europe Austria Ampflwanger Bahn (Timelkam — Ampflwang) Bockerlbahn Bürmoos (narrow gauge, original tracks removed) Bregenzerwaldbahn (narrow gauge, remaining section Bezau — Schwarzenberg) Erzbergbahn (section Vordernberg — Eisenerz) Feistritztalbahn (narrow gauge, Weiz — Birkfeld) Gurktalbahn (narrow gauge, remaining section Treibach-Althofen — Pöckstein-Zwischenwässern) Höllental Railway (Lower Austria) (narrow gauge, Payerbach-Reichenau — Hirschwang an der Rax) Landesbahn Feldbach — Bad Gleichenberg (some regular service remains) Lavamünder Bahn (Lavamünd — St. Paul im Lavanttal, track removed) Lokalbahn Ebelsberg — St. Florian (narrow gauge, some sections removed) Lokalbahn Korneuburg — Hohenau (some sections) Lokalbahn Retz — Drosendorf Lokalbahn Weizelsdorf — Ferlach Internationale Rheinregulierungsbahn (narrow gauge, remaining section: Austrian side of mouth of river Rhine into lake Constance — Lustenau depot — Kadelberg quarry via Swiss side of Rhine) Rosen Valley Railway (section Weizelsdorf — Rosenbach) Stainzer Flascherlzug (narrow gauge, Stainz — Preding) Steyr Valley Railway (narrow gauge, remaining section Steyr — Grünburg) Taurachbahn (narrow gauge, section Tamsweg — Mauterndorf im Lungau of the Murtalbahn) Thörlerbahn (narrow gauge, originally Kapfenberg — Turnau, track removed) Wachauer Bahn (section Krems — Emmersdorf an der Donau of the Donauuferbahn) Waldviertler Schmalspurbahnen (narrow gauge, Gmünd — Groß Gerungs and Gmünd — Litschau / — Heidenreichstein) Ybbs Valley Railway (narrow gauge, remaining section Göstling — Lunz am See — Kienberg-Gaming) Belgium Chemin de Fer à vapeur des Trois Vallées Chemin de Fer du Bocq Dendermonde-Puurs Steam Railway Stoomcentrum Maldegem ASVi museum Vennbahn Closed in 2001 Bosnia and Herzegovina Sarajevo-Višegrad Railway (section from Višegrad to Vardište) Czech Republic Lužná u Rakovníka - Kolešovice Railway Tovačovka (Kroměříž) Kojetín - Tovačov) Zubrnická museální železnice ((Ústí nad Labem Střekov) - Velké Březno - Zubrnice) Břeclav - Lednice Hvozdnický Expres (Opava Východ - Svobodné Heřmanice) Bruntál - Malá Morávka Sklářská lokálka Šenovka (Česká Kamenice - Kamenický Šenov) Kozí dráha (Děčín - Telnice) Denmark Source: H Heritage rail operator | N Narrow gauge railway | S Standard gauge railway DJK: Dansk Jernbane-Klub. 
Several heritage railways and operators are members of DJK Finland Jokioinen Museum Railway Kovjoki Museum Railway Porvoo Museum Railway France Chemin de Fer de la Baie de Somme Chemin de fer Touristique d'Anse Froissy Dompierre Light Railway Tarn Light Railway Germany Greece Diakofto–Kalavryta Railway Pelion railway Treno sto Rouf Railway Carriage Theater Hungary Children's Railway, Gyermekvasút Italy Bernina Railway, in the Rhaetian Railway between Italy and Switzerland; inscribed in the World Heritage List of UNESCO Valmorea railway Ceva–Ormea railway Sassari–Tempio-Palau railway Asciano–Monte Antico railway Novara–Varallo railway Latvia Gulbene-Alūksne railway Ventspils narrow-gauge railway Luxembourg Train 1900 Netherlands Corus Stoom IJmuiden Efteling Steam Train Company Museum Buurtspoorweg Steamtrain Hoorn Medemblik Stichting Stadskanaal Rail Stichting voorheen RTM Stoom Stichting Nederland Stoomtrein Goes - Borsele Stoomtrein Valkenburgse Meer Veluwse Stoomtrein Maatschappij Zuid-Limburgse Stoomtrein Maatschappij Norway Old Voss Line Krøderen Line Nesttun–Os Line Norwegian Railway Museum in Hamar Rjukan Line Setesdal Line Thamshavn Line Urskog–Høland Line Valdres Line Poland Bieszczady Forest Railway Narrow Gauge Railway Museum in Sochaczew Narrow Gauge Railway Museum in Wenecja Seaside Narrow Gauge Railway Wigry Forest Railway Portugal Barca d'Alva–La Fuente de San Esteban railway Corgo line Linha do Douro Sabor line Tâmega line Linha do Tua National Railway Museum (Portugal) Narrow-gauge railways in Portugal Monte Railway (Funchal, Madeira) Republic of Ireland Romania Mocăniţa from Vasser Valley, Maramureş (CFF Vişeu de Sus) Sibiu to Agnita narrow-gauge line in Hârtibaciu Valley San Marino Ferrovia Rimini–San Marino (In 2012, 800 meters of the track was reconstructed and opened to service at the San Marino terminal station with the original train as a tourist attraction.) 
Serbia Šargan Eight Slovakia Čierny Hron Railway The Historical Logging Switchback Railway in Vychylovka, Kysuce near Nová Bystrica (Historická lesná úvraťová železnica) Spain Basque Railway Museum (steam railway tours) Gijón Railway Museum Philip II Train, service between Madrid and El Escorial Railway Museum in Vilanova (close to Barcelona) Strawberry train, seasonal service between Madrid and Aranjuez Tramvia Blau, Barcelona Tren dels Llacs, seasonal service between Lleida and La Pobla de Segur Sweden Anten-Gräfsnäs Järnväg – narrow gauge, near Gothenburg Association of Narrow Gauge Railways Växjö-Västervik – narrow gauge (includes a section of mixed gauge track into Västervik) Böda Skogsjärnväg – narrow gauge, Öland Dal-Västra Värmlands Järnväg – standard gauge, Värmland Djurgården Line (tramway) – Stockholm Engelsberg-Norbergs Railway – standard gauge, Västmanland Gotlands Hesselby Jernväg – narrow gauge, Gotland Jädraås-Tallås Järnväg – narrow gauge, Gästrikland Ohsabanan – narrow gauge, Jönköping Risten–Lakvik Museum Railway – narrow gauge, Östergötland Skara – Lundsbrunns Järnvägar – narrow gauge, Västra Götaland County Skånska Järnvägar – standard gauge, Skåne Smalspårsjärnvägen Hultsfred-Västervik – narrow gauge, Småland Upsala-Lenna Jernväg – narrow gauge, Upsala County Östra Södermanlands Järnväg – narrow gauge, Södermanland Switzerland Blonay-Chamby Museum Railway Brienz Rothorn Bahn Dampfbahn-Verein Zürcher Oberland Etzwilen–Singen railway Furka Cogwheel Steam Railway Furka Oberalp Railway Pilatus Railway Rigi Railways Schynige Platte Railway Zürcher Museums-Bahn La Traction Sursee–Triengen Railway Schinznacher Baumschulbahn United Kingdom and Crown dependencies England Scotland Wales Northern Ireland Isle of Man Channel Islands Alderney Railway Pallot Heritage Steam Museum North America Canada United States Abilene and Smoky Valley Railroad Adirondack Railroad Agrirama Logging Train Arcade and Attica Railroad Azalea Sprinter Big South Fork Scenic Railway Black Hills Central Railroad Black River and Western Railroad Boone and Scenic Valley Railroad Branson Scenic Railway Bluegrass Railroad and Museum Blue Ridge Scenic Railway Belvidere and Delaware Railroad (AKA Delaware River Railroad) California Western Railroad (AKA, The Skunk Train) Cass Scenic Railroad State Park Conway Scenic Railroad Cumbres and Toltec Scenic Railroad Cuyahoga Valley Scenic Railroad Chehalis–Centralia Railroad Chelatchie Prairie Railroad Cripple Creek and Victor Narrow Gauge Railroad Durango and Silverton Narrow Gauge Railroad Durbin and Greenbrier Valley Railroad Dollywood Express Eureka Springs and North Arkansas Railway East Broad Top Railroad and Coal Company Everett Railroad Gold Coast Railroad Museum Georgia Coastal Railway Georgia State Railroad Museum Georgetown Loop Railroad Grand Canyon Railway Great Smoky Mountains Railroad Grapevine Vintage Railroad Heber Valley Railroad Hesston Steam Museum Hocking Valley Scenic Railway Huckleberry Railroad Illinois Railway Museum Indiana Transportation Museum Kirby Family Farm Train Kentucky Railway Museum Kettle Moraine Scenic Railroad Lumberjack Steam Train Little River Railroad (Michigan) Mount Rainier Railroad and Logging Museum Mount Washington Cog Railway Mid-Continent Railway Museum Monticello Railway Museum Midwest Central Railroad My Old Kentucky Dinner Train Nevada Northern Railway Museum New Hope Railroad North Shore Scenic Railroad Nickel Plate Express Niles Canyon Railway Oregon Coast Scenic Railroad Oil Creek and Titusville Railroad Reading 
Blue Mountain and Northern Railroad Rio Grande Scenic Railroad Roaring Camp Railtown 1897 State Historic Park SAM Shortline Excursion Train Silverwood Theme Park Steamtown National Historic Site Serengeti Express Southeastern Railway Museum Stone Mountain Scenic Railroad Strasburg Rail Road Sumpter Valley Railway South Central Florida Express, Inc. (AKA, Sugar Express) Tallulah Falls Railroad Museum Tennessee Valley Railroad Museum Tweetsie Railroad Texas State Railroad Three Rivers Rambler TECO Line Streetcar Tavares, Eustis & Gulf Railroad Virginia and Truckee Railroad Valley Railroad Company (AKA, The Essex Steam Train) Western Maryland Scenic Railroad White Pass and Yukon Route Wilmington and Western Railroad Wiscasset, Waterville and Farmington Railway Walt Disney World Railroad Wanamaker, Kempton and Southern Railroad Wildlife Express Train Whippany Railway Museum White Mountain Central Railroad Whitewater Valley Railroad Yosemite Mountain Sugar Pine Railroad Mexico Chihuahua al Pacífico (Copper Canyon) Ferrocarril Interoceanico Tequila Express Barbados St. Nicholas Abbey Heritage Railway St. Kitts St. Kitts Scenic Railway (Over historic tracks) South America Argentina Capilla del Señor Historic Train, in Buenos Aires Province Old Patagonian Express, Patagonia Train at the End of the World in Tierra del Fuego, Tierra del Fuego Tren a las Nubes, Salta Tren Histórico de Bariloche, Patagonia (British-built 1912, 4-6-0 steam locomotive to Perito Moreno glacier) Villa Elisa Historic Train in Entre Ríos Province Brazil Estrada de Ferro Central do Brasil Rede Mineira de Viação Corcovado Rack Railway Estrada de Ferro Oeste de Minas Estrada de Ferro Perus Pirapora Serra Verde Express Train of Pantanal Trem da Serra da Mantiqueira Trem das Águas Viação Férrea Campinas Jaguariúna Chile Colchaguac Wine Train (a Bayer Peacock 2-6-0) Tren de la Araucanía Temuco to Victoria (1953 Baldwin 4-8-2) Ecuador Tren Crucero Ecuador Colombia Tren Turistico De La Sabana, Bogota Asia Mainland China Jiayang Coal Railway Mengzi–Baoxiu Railway (heritage train operation on an otherwise disused section west of Jianshui) Tieling-Faku Railway Kunming-Hekou Railway Huanan Forest Railway Chaoyanggou-Qi Railway Taiwan Alishan Forest Railway Hong Kong Hong Kong Tramways India Calcutta Tramways Darjeeling Himalayan Railway Kalka Shimla Railway Matheran Hill Railway Nilgiri Mountain Railway Palace on Wheels Indonesia Ambarawa Railway Museum Cepu Forest Railway Mak Itam Steam Locomotive Sepur Kluthuk Jaladara Israel The Oak Railway (רכבת האלונים) in kibbutz Ein Shemer Japan Narita Yume Bokujo narrow gauge railway Sagano Scenic Railway Shuzenji Romney Railway Pakistan Khyber Railway Pakistan Railways Heritage Museum Africa South Africa Note that most of the heritage railway operators in South Africa have their own depots where locomotives and coaches are kept and serviced, but run on state-owned railways. Atlantic Rail – Now defunct. Formally ran day trips from Cape Town to Simonstown using steam locomotives and heritage coaching stock Friends of the Rail – day trips from Hermanstad (Pretoria) using steam locomotives and heritage coaching stock Outeniqua Choo Tjoe – A heritage railway that has not operated since August 2006. Patons Country Narrow Gauge Railway – a two-foot narrow-gauge heritage railway in KwaZulu-Natal, South Africa, from Ixopo to Umzimkhulu Reefsteamers – day trips from Johannesburg to Magaliesburg. 
Rovos Rail – up-market railtours The Sandstone Heritage Trust – private railway operating 2-foot gauge steam locomotives Umgeni Steam Railway – Kloof to Inchanga, near Durban Tunisia Lézard rouge Australia Australia New Zealand See also Heritage tourism List of Conservation topics List of tourist attractions worldwide List of United States railroads Mountain railway References External links Heritage railways in Spain International working steam locomotives National Preservation forum Indian Train Times Rail transport-related lists Railroad attractions Tourism-related lists
List of heritage railways
[ "Engineering" ]
2,868
[ "Lists of heritage railways", "Heritage railways", "Engineering preservation societies" ]
154,176
https://en.wikipedia.org/wiki/Heritage%20railway
A heritage railway or heritage railroad (U.S. usage) is a railway operated as living history to re-create or preserve railway scenes of the past. Heritage railways are often old railway lines preserved in a state depicting a period (or periods) in the history of rail transport. Definition The British Office of Rail and Road defines heritage railways as follows:...'lines of local interest', museum railways or tourist railways that have retained or assumed the character and appearance and operating practices of railways of former times. Several lines that operate in isolation provide genuine transport facilities, providing community links. Most lines constitute tourist or educational attractions in their own right. Much of the rolling stock and other equipment used on these systems is original and is of historic value in its own right. Many systems aim to replicate both the look and operating practices of historic former railway companies. Infrastructure Heritage railway lines have historic rail infrastructure which has been superseded (or made obsolete) in modern rail systems. Historical installations, such as hand-operated points, water cranes, and rails fastened with hand-hammered rail spikes, are characteristic features of heritage lines. Unlike tourist railways, which primarily carry tourists and have modern installations and vehicles, heritage-line infrastructure creates views and soundscapes of the past in operation. Operation Due to a lack of modern technology or the desire for historical accuracy, railway operations can be handled with traditional practices such as the use of tokens. Heritage infrastructure and operations often require the assignment of roles, based on historical occupations, to the railway staff. Some or all staff and volunteers, including station masters and signalmen, sometimes wearing period-appropriate attire, can be seen on some heritage railways. Most heritage railways use heritage rolling stock, although modern rail vehicles can be used to showcase railway scenes with historical-line infrastructure. Cost While some heritage railways are profitable tourist attractions, many are not-for-profit entities; some of the latter depend on enthusiastic volunteers for upkeep and operations to supplement revenue from traffic and visitors. Still other heritage railways offer a viable public-transit option, and can maintain operations with revenue from regular riders or government subsidies. Development Children's railways Children's railways are extracurricular educational institutions where children and teenagers learn about railway work; they are often functional, passenger-carrying narrow-gauge rail lines. The railways developed in the USSR during the Soviet era. Many were called "Pioneer railways", after the youth organisation of that name. The first children's railway opened in Moscow in 1932 and, at the breakup of the USSR, 52 children's railways existed in the country. Although the fall of communist governments has led to the closure of some, preserved children's railways are still functioning in post-Soviet states and Eastern European countries. Many children's railways were built on parkland in urban areas. Unlike the industrial areas typically served by narrow-gauge railways, parks were free of redevelopment. Child volunteers and socialist fiscal policy enabled the existence of many of these railways. 
Children's railways which still carry traffic have often retained their original infrastructure and rolling stock, including vintage steam locomotives; some have acquired heritage vehicles from other railways. Examples of children's railways with steam locomotives include the Dresden Park Railway in Germany; the Gyermekvasút in Budapest; the Park Railway Maltanka in Poznań; the Košice Children's Railway in Slovakia, and the gauge steam railway on the grounds of St Nicholas' School in Merstham, Surrey, which the children help operate with assistance from the East Surrey 16mm Group and other volunteers. Mountain railways Creating passages for trains up steep hills and through mountain regions offers many obstacles which call for technical solutions. Steep grade railway technologies and extensive tunneling may be employed. The use of narrow gauge allows tighter curves in the track, and offers a smaller structure gauge and tunnel size. At high altitudes, construction and logistical difficulties, limited urban development and demand for transport and special rolling-stock requirements have left many mountain railways unmodernized. The engineering feats of past railway builders and views of pristine mountain scenes have made many railways in mountainous areas profitable tourist attractions. Pit railways Pit railways have been in operation in underground mines all over the world. Small rail vehicles transport ore, waste rock, and workers through narrow tunnels. Sometimes trains were the sole mode of transport in the passages between the work sites and the mine entrance. The railway's loading gauge often dictated the cross-section of passages to be dug. At many mining sites, pit railways have been abandoned due to mine closure or adoption of new transportation equipment. Some show mines have a vintage pit railway and offer mantrip rides into the mine. Underground railways The Metro 1 (officially the Millennium Underground Railway or M1), built from 1894 to 1896, is the oldest line of the Budapest Metro system and the second-oldest underground railway in the world. The M1 underwent major reconstruction during the 1980s and 1990s, and Line 1 now serves eight original stations whose original appearance has been preserved. In 2002, the line was listed as a UNESCO World Heritage Site. In the Deák Ferenc Square concourse's Millennium Underground Museum, many other artifacts of the metro's early history may be seen. Heritage tramways By country The first heritage railway to be rescued and run entirely by volunteers was the Talyllyn Railway in Wales. This narrow-gauge line, taken over by a group of enthusiasts in 1950, was the beginning of the preservation movement worldwide. Argentina La Trochita (officially Viejo Expreso Patagónico, the Old Patagonian Express) was declared a National Historic Monument by the Government of Argentina in 1999. Trains on the Patagonian narrow-gauge railway use steam locomotives. The railway runs through the foothills of the Andes between Esquel and El Maitén in Chubut Province and Ingeniero Jacobacci in Río Negro Province. In southern Argentina, the Train of the End of the World to the Tierra del Fuego National Park is considered the world's southernmost functioning railway. Heritage railway operations started in 1994, after restoration of the old (narrow-gauge) steam railway. In Salta Province in northeastern Argentina, the Tren a las Nubes (Train to the Clouds) runs along of track in what is one of the highest railways in the world. 
The line has 29 bridges, 21 tunnels, 13 viaducts, two spirals and two zigzags, and its highest point is above sea level. In the Misiones Province, more precisely in the Iguazú National Park, is the Ecological Train of the Forest. It runs at speeds below 20 km per hour to avoid disturbing wildlife, and its trains are propelled by liquefied petroleum gas (LPG), a non-polluting fuel. The Villa Elisa Historic Train (operated by Ferroclub Central Entrerriano) runs steam trains between the cities of Villa Elisa and Caseros in Entre Ríos Province, covering in 120 minutes. Australia The world's second preserved railway, and the first outside the United Kingdom, was Australia's Puffing Billy Railway. This railway operates on of track, with much of its original rolling stock built as early as 1898. Just over half of Australia's heritage lines are operated by narrow gauge tank engines, much like the narrow gauge lines of the United Kingdom. Austria The Höllental Railway is a , narrow-gauge (Bosnian gauge) railway, operating in Lower Austria. It runs on summer weekends, connecting Reichenau an der Rax to the nearby Höllental. Belgium Flanders, Belgium's northern Dutch-speaking region, has the Dendermonde–Puurs Steam Railway; whereas Wallonia, with its strong history of 19th century heavy industries, has the Chemin de fer à vapeur des Trois Vallées, and the PFT operates the Chemin de Fer du Bocq. Canada Railways Tramways Heritage streetcar lines: Downtown Historic Railway, in Vancouver, B.C. Replaced temporarily by the Olympic Line during the 2010 Vancouver Olympics, abandoned in 2012. Nelson Electric Tramway, in Nelson, B.C.: two streetcars – Car 400 (formerly BCER, owned by the Royal BC Museum, operational since 1999) and Car 23 (operational since 1992) operate on a 1.2 km route from City wharf to Lakeside Park. High Level Bridge Streetcar, in Edmonton, Alberta. Whitehorse trolley, in Whitehorse, Yukon. Closed in 2019, re-opened in 2024. Museums with operational heritage streetcar lines: Halton County Radial Railway, in Rockwood, Ontario Canadian Railway Museum, in Delson/Saint-Constant, Quebec Heritage Park Historical Village, in Calgary, Alberta Fort Edmonton Park in Edmonton, Alberta, operated by the Edmonton Radial Railway society along with the High Level Bridge Streetcar. Finland On the Finnish state-owned rail network, the section between Olli and Porvoo is a dedicated museum line. In southern Finland, it is the only line regularly carrying passenger traffic that retains many structural details abandoned by the rest of the network. Wooden sleepers, gravel ballast and low rail weight with no overhead catenary make it uniquely historical. Along the line, the Hinthaara railway station and the Porvoo railway station area are included in the National Board of Antiquities' inventory of cultural environments of national significance in Finland. Also on the list is scenery in the Porvoonjoki Valley, through which the line passes. The Jokioinen Museum Railway is a stretch of preserved narrow-gauge railway between Humppila and Jokioinen. Nykarleby Järnväg is a stretch of rebuilt narrow-gauge railway on the bank of the old Kovjoki–Nykarleby line. Germany The is a spur line of the Prussian Eastern Railway, located in the Märkische Schweiz Nature Park in Brandenburg. It was originally constructed in 1897 as a narrow-gauge railway, with a gauge of , connecting Buckow to the Müncheberg (Mark) station. This line was electrified and changed to standard gauge in 1930. 
It has operated as a heritage railway since 2002. India The Mountain railways of India are the railway lines that were built in the mountainous regions of India. The term mainly includes the narrow-gauge and metre-gauge railways in these regions but may also include some broad-gauge railways. Of the Mountain railways of India, the Darjeeling Himalayan, Nilgiri Mountain and Kalka–Shimla Railways have been collectively designated as a UNESCO World Heritage Site. To meet World Heritage criteria, the sites must retain some of their traditional infrastructure and culture. The Nilgiri Mountain Railway is also the only rack and pinion railway in India. The Matheran Hill Railway, along with the Kangra Valley Railway are preserved narrow gauge railways under consideration for UNESCO status. Some scenic routes have been preserved as heritage railways. Here normal services have stopped, only tourist heritage trains are operated. Examples of these are the Patalpani–Kalakund Heritage Train and the Rajasthan Valley Queen Heritage train which runs from Marwar Junction to Khamlighat. Indonesia In Indonesia there are several historic train lines and steam trains that are still operated today, including Ambarawa Railway Museum, Sawahlunto Railway Museum, Cepu Forest Railway, Jaladara excursion train in Surakarta, and several narrow gauge lines in the Sugar Factory area. Italy In Italy the heritage railway institute is recognized and protected by law no. 128 of 9 August 2017, which has as its objective the protection and valorisation of disused, suspended or abolished railway lines, of particular cultural, landscape and tourist value, including both railway routes and stations and the related works of art and appurtenances, on which, upon proposal of the regions to which they belong, tourism-type traffic management is applied (art. 2, paragraph 1). At the same time, the law identified a first list of 18 tourist railways, considered to be of particular value (art. 2, paragraph 2). The list is periodically updated by decree of the Ministry of Infrastructure and Transport, in agreement with the Ministry of Economy and Finance and the Ministry of Culture, also taking into account the reports in the State-Regions Conference, a list which in 2022 reached 26 railway lines. According to article 1, law 128/2017 has as its purpose: "the protection and valorisation of railway sections of particular cultural, landscape and tourist value, which include railway routes, stations and related works of art and appurtenances, and of the historic and tourist rolling stock authorized to travel along them, as well as the regulation of the use of ferrocycles". Below is the list of railway lines recognized as tourist railways by Italian legislation. a) pursuant to art. 2 paragraph 2 law 128/2017: Sulmona-Castel di Sangro section of the Castel di Sangro-Carpinone section of the Sulmona-Isernia railway Ceva–Ormea railway Sassari–Tempio-Palau railway Castelvetrano-Porto Palo section of the Agrigento Bassa-Porto Empedocle section of the Castelvetrano-Porto Empedocle railway Asciano–Monte Antico railway b) pursuant to the Ministerial Decree of 30 March 2022: Alba-Nizza Monferrato section of the Novara–Varallo railway Fabriano-Pergola section of the Malnate Olona-Swiss border section of the Valmorea railway. The Bernina railway line is a single-track railway line forming part of the Rhaetian Railway (RhB). It links the spa resort of St. 
Moritz, in the canton of Graubünden, Switzerland, with the town of Tirano, in the Province of Sondrio, Italy, via the Bernina Pass. Reaching a height of above sea level, it is the third highest railway crossing in Europe. It also ranks as the highest adhesion railway of the continent, and, with inclines of up to 7%, as one of the steepest adhesion railways in the world. The elevation difference on the section between the Bernina Pass and Tirano is , allowing passengers to view glaciers along the line. On 7 July 2008, the Bernina line and the Albula railway line, which also forms part of the RhB, were recorded in the list of UNESCO World Heritage Sites, under the name Rhaetian Railway in the Albula / Bernina Landscapes. The whole site is a cross-border joint Swiss-Italian heritage area. Trains operating on the Bernina line include the Bernina Express. In July 2023, Ferrovie dello Stato established a new company, the "FS Treni Turistici Italiani" (English: FS Italian Tourist Trains), with the mission "to propose an offer of railway services expressly designed and calibrated for quality, sustainable tourism and attentive to rediscovering the riches of the Italian territory. Tourism that can experience the train journey as an integral moment of the holiday, an element of quality in the overall tourist experience". There are three service areas proposed: Luxury trains, which include the circulation of the "Orient Express - La Dolce Vita" from 2024, and the Venice Simplon Orient Express, already operating on European routes; Express and historic trains, with the express trains of the 1980s and 1990s which are being redeveloped and modernized in the railway workshops of Rimini, while the historic trains are used for journeys that include stops with guided tours and tastings; Regional trains, also with trips that include experiential tourist stops, which pass through places rich in history, with villages and areas of landscape, naturalistic, food and wine and agri-food interest. New Zealand Rail transport played a major role in the history of New Zealand and several rail enthusiast societies and heritage railways have been formed to preserve New Zealand's rich rail history. Slovakia The Čierny Hron Railway is a narrow-gauge railway in central Slovakia, established in the first decade of the 20th century and operating primarily as a freight railway for the local logging industry. From the late 1920s to the early 1960s, it also offered passenger transport between the villages of Hronec and Čierny Balog. The railway became Czechoslovakia's most extensive forest railway network. After its closure in 1982, it received heritage status and was restored during the following decade. Since 1992, it has been one of Slovakia's official heritage railways and is a key regional tourist attraction. The Historical Logging Switchback Railway in Vychylovka is a heritage railway in north-central Slovakia, originally built to serve the logging industry in the Orava and Kysuce regions. Despite the closure and disassembly of most of its original network during the early 1970s, its surviving lines and branches have been (or are being) restored. The railway is owned and operated by the Museum of Kysuce, with a line open to tourists for sightseeing. Switzerland Switzerland has a very dense rail network, both standard and narrow gauge. 
The overwhelming majority of railways, built between the mid-19th and early 20th century, are still in regular operation today and electrified, a major exception being the Furka Steam Railway, the longest unelectrified line in the country and one of the highest rail crossings in Europe. Many railway companies, especially mountain railways, provide services with well-preserved historic trains for tourists, for instance the Rigi Railways, the oldest rack railway in Europe, and the Pilatus Railway, the steepest in the world. Two railways, the Albula Railway and the Bernina Railway, have been designated as a World Heritage Site, although they are essentially operated with modern rolling stock. Due to the availability of hydroelectric resources in the Alps, the Swiss network was electrified earlier than in the rest of Europe. Some of the most emblematic pre-World War II electric locomotives and trains are the Crocodile, notably used on the Gotthard Railway, and the Red Arrow. Both are occasionally operated by SBB Historic. Switzerland also has a large number of funiculars, several still working with the original carriages, such as the Giessbachbahn. United Kingdom In Britain, heritage railways are often railway lines which were run as commercial railways but were no longer needed (or closed down) and were taken over or re-opened by volunteers or non-profit organisations. The large number of heritage railways in the UK is due in part to the closure of many minor lines during the 1960s Beeching cuts, and these lines were relatively easy to revive. There are between 100 and 150 heritage railways in the United Kingdom. A typical British heritage railway will use steam locomotives and original rolling stock to create a period atmosphere, although some are concentrating on diesel and electric traction to re-create the post-steam era. Many run seasonally on partial routes, unconnected to a larger network (or railway), and charge high fares in comparison with transit services; as a result, they focus on the tourist and leisure markets. During the 1990s and 2000s, however, some heritage railways aimed to provide local transportation and extend their running seasons to carry commercial passenger traffic. The first standard-gauge line to be preserved (not a victim of Beeching) was the Middleton Railway; the second, and the first to carry passengers, was the Bluebell Railway. Not-for-profit heritage railways differ in their quantity of service and some lines see traffic only on summer weekends. The more successful, such as the Severn Valley Railway and the North Yorkshire Moors Railway, may have up to five or six steam locomotives and operate a four-train service daily; smaller railways may run daily throughout the summer with only one steam locomotive. The Great Central Railway, the only preserved British main line with a double track, can operate over 50 trains on a busy timetable day. After the privatisation of main-line railways, the line between not-for-profit heritage railways and for-profit branch lines may be blurred. The Romney, Hythe and Dymchurch Railway is an example of a commercial line run as a heritage operation and to provide local transportation, and the Severn Valley Railway has operated a few goods trains commercially. A number of heritage railway lines are regularly used by commercial freight operators. 
Since the Bluebell Railway reopened to traffic in 1960, the definition of private standard gauge railways in the United Kingdom as preserved railways has evolved as the number of projects and their length, operating days and function have changed. The situation is further muddied by large variations in ownership-company structure, rolling stock and other assets. Unlike community railways, tourist railways in the UK are vertically integrated (although those operating mainly as charities separate their charitable and non-charitable activities for accounting purposes). United States Railroads Heritage railways are known in the United States as tourist, historic, or scenic railroads. Most are remnants of original railroads, and some are reconstructed after having been scrapped. Some heritage railways preserve entire railroads in their original state using original structures, track, and motive power. Examples of heritage railroads in the US by preservation type: Original East Broad Top Railroad and Coal Company (Pennsylvania) Nevada Northern Railway (Nevada) California Western Railroad (California) Stewartstown Railroad (Pennsylvania) Arcade and Attica Railroad (New York) Remnant Durango and Silverton Railroad (Colorado) Cumbres and Toltec Scenic Railroad (Colorado and New Mexico) Hocking Valley Scenic Railway (Ohio) Tennessee Valley Railroad Museum (Tennessee) Strasburg Rail Road (Pennsylvania) Fox River Trolley Museum (Illinois) Reconstructed Sumpter Valley Railway (Oregon) Tweetsie Railroad (North Carolina) Virginia and Truckee Railroad (Nevada) Wiscasset, Waterville and Farmington Railway (Maine) Fort Collins Municipal Railway (Colorado) National Park Related Lines Steamtown National Historic Site (Pennsylvania) Grand Canyon Railway (Arizona) Cuyahoga Valley Scenic Railroad (Ohio) Golden Spike National Historical Park (Utah) Other operations, such as the Valley Railroad or Hocking Valley Scenic Railway operate on historic track and utilize historic equipment, but are not reflective of the operations carried out by the original railroad they operate on. Hence, they do not fit into the Heritage Railway category, but rather Tourist Railway/Amusement. Tramways Heritage streetcar lines are operating in over 20 U.S. cities, and are in planning or construction stages in others. Several new heritage streetcar lines have been opened since the 1970s; some are stand-alone lines while others make use of a section of a modern light rail system. Heritage streetcar systems operating in Little Rock, Arkansas; Memphis, Tennessee; Dallas, Texas; New Orleans, Louisiana; Boston, Massachusetts (MBTA Mattapan Trolley) Philadelphia, Pennsylvania (SEPTA route 15); and Tampa, Florida, are among the larger examples. A heritage line operates in Charlotte, North Carolina, and will become a part of the city's new transit system. Another such line, called The Silver Line, operates in San Diego. The San Francisco Municipal Railway, or Muni, runs exclusively historic trolleys on its heavily used F Market & Wharves line. The line serves Market Street and the tourist areas along the Embarcadero, including Fisherman's Wharf. Boston's Massachusetts Bay Transportation Authority runs exclusively PCC streetcars on its Mattapan Line, part of that authority's Red Line. The historic rolling stock is retained because doing so cost less than would a full rebuild of the line to accommodate either a heavy rail line (like the rest of the Red Line or the Blue or Orange Lines) or a modern light rail line (like the Green Line). 
It is also unique in that it is used almost exclusively by commuters and is not particularly popular with tourists (and thus may not really be a true heritage system, despite the historic rolling stock). Dallas has the McKinney Avenue Transit Authority. Denver has the Platte Valley Trolley, a heritage line recalling the open-sided streetcars of the early 20th century. Old Pueblo Trolley is a volunteer-run heritage line in Tucson, Arizona; its popularity inspired, in large part, a modern streetcar system for Tucson currently in the final planning stages, which would incorporate the heritage line. The VTA in San Jose, California, also maintains a heritage trolley fleet, for occasional use on the downtown portion of a new light rail system opened in 1988. Other cities with heritage streetcar lines include Galveston, Texas; Kenosha, Wisconsin; and San Pedro, California (home of the port of Los Angeles). The National Park Service operates a system in Lowell, Massachusetts. Most heritage streetcar lines use overhead trolley wires to power the cars, as was the case with the vast majority of original streetcar lines. However, on the Galveston Island Trolley heritage line, which opened in 1988, using modern-day replicas of vintage trolleys, the cars were powered by an on-board diesel engine, as local authorities were concerned that overhead wires would be too susceptible to damage from hurricanes. In spite of that precaution, damage in 2008 from Hurricane Ike was heavy enough to put the line out of service indefinitely, and as of 2021 it has yet to reopen, but three streetcars are being repaired and reopening is planned. Another heritage line lacking trolley wires was Savannah's River Street Streetcar line, which opened in February 2009 and operated until around 2015. It was the first line to use a diesel/electric streetcar whose built-in electricity generator is powered by biodiesel. In El Reno, Oklahoma, the Heritage Express Trolley connects Heritage Park with downtown, using a single streetcar that has been equipped with a propane-powered on-board generator. The car formerly operated on SEPTA's Norristown High Speed Line, where third-rail current collection is used. The El Reno line is single-track and long. In Portland, Oregon, replica-vintage cars provided a heritage streetcar service, named Portland Vintage Trolley, along a section of the light rail line that the city opened in 1986, from 1991 to 2014. Elsewhere in Portland, the Willamette Shore Trolley is a seasonal, volunteer-operated excursion service on a former freight railroad line, to Lake Oswego, Oregon. This operation uses a diesel-powered generator on a trailer towed or pushed by the streetcar, as the line lacks trolley wires. Similarly, the Astoria Riverfront Trolley in Astoria, Oregon, is a seasonal heritage-trolley service along a section of former freight railroad and using a diesel-powered generator on a trailer to provide electricity to the streetcar. Other seasonal or weekends-only heritage streetcar lines operate in Yakima, Washington (Yakima Electric Railway Museum); Fort Collins, Colorado; and Fort Smith, Arkansas. The Fort Collins and Fort Smith lines are both operated by an original (as opposed to replica) Birney-type streetcar, and in both cases the individual car in use is listed on the National Register of Historic Places. 
In Philadelphia, the Penn's Landing Trolley operated seasonal and weekend service as a volunteer operation with former P&W equipment between September 1982 and December 17, 1995, on the Philadelphia Belt Line track on Columbus Boulevard in the historic Penn's Landing district. Over 50 years later, the revival of extended streetcar operations in New Orleans is credited by many to the worldwide fame gained by its streetcars built by the Perley A. Thomas Car Works in 1922–23. These cars were operating on the system's Desire route made famous by Tennessee Williams' A Streetcar Named Desire. Some Perley Thomas cars were maintained in continuous service on the St. Charles Avenue Streetcar line until Hurricane Katrina caused major damage to the right-of-way in 2005. The historic streetcars suffered only minor damage and several were transferred to serve on the, then recently rebuilt, Canal Street line while the St. Charles line was being repaired. By June 22, 2008, service was restored to the entire length of the St. Charles Streetcar line. The New Orleans' St. Charles streetcar line is a National Historic Landmark. Pre-Katrina, New Orleans had plans to reconstruct the Desire line along its original route down St. Claude Avenue. Instead, the Loyola-UPT line was extended by building a spur down North Rampart Street to Elysian Fields Avenue. In San Francisco, parts of the cable car and Muni streetcar system (specifically the above-mentioned F Market & Wharves line) are heritage lines, although they are also functioning parts of the city's transit system. The cable cars are a National Historic Landmark and are rare examples of vehicles with this distinction. Located east of San Francisco is one of several museums in the U.S. that restore and operate vintage streetcars and interurbans, the Western Railway Museum. In popular culture The preservation of the Talyllyn Railway was the inspiration for the 1953 Ealing Studios comedy The Titfield Thunderbolt. The film is centred on the preservation of a fictional Somerset branch line from Titfield to Mallingford. Filmed on the Camerton branch in the summer of 1952, the branch was lifted after production had finished. Many preserved railways also served as a filming location for several production companies; for example, the Keighley and Worth Valley Railway served as a filming location for the 1970 adaptation of The Railway Children. Series three of Survivors uses heritage railways to help reestablish transportation, communication and trade in post-apocalyptic England. See also Heritage streetcar List of heritage railways Restored train Gandy dancer Wilbert Awdry References External links UK Heritage Railways International Working Steam Scenic railways in France mainline and tourist routes UK Heritage Railway Photographs Hungarian Interactive Railway Museum, Budapest Henry Williams Limited Engineering preservation societies
Heritage railway
[ "Engineering" ]
6,028
[ "Engineering societies", "Heritage railways", "Engineering preservation societies" ]
154,231
https://en.wikipedia.org/wiki/List%20of%20heritage%20railways%20in%20Northern%20Ireland
There are currently two operating heritage railways in Northern Ireland. Operating heritage railways Downpatrick and County Down Railway is located on part of the former Belfast & County Down Railway and is Ireland's only heritage railway using the standard Irish track gauge of . It has approximately of track, with branches to Inch Abbey, Magnus Barefoot's grave and Downpatrick station from a triangular junction. It operates with preserved steam and diesel locomotives and vintage wooden carriages. Giant's Causeway and Bushmills Railway is on the north coast in County Antrim. Narrow gauge steam-powered services run from the Giant's Causeway to Bushmills. Laid on part of the course of the original Giant's Causeway Tramway, which was electric-powered with its own hydroelectric plant (the first such system in the world). Former heritage railways Foyle Valley Railway is the name of two previous narrow gauge railways in Derry which used rolling stock from the former County Donegal Railways Joint Committee. The first (1975–1978) consisted of 300 metres of track at Victoria Road station. The second (1993–c.2000) stretched approximately south-west from the museum building on Foyle Road. The current owners of the museum have tentative plans to reopen the railway. Shane's Castle Railway was a tourist railway in the grounds of the castle which used preserved narrow gauge steam locomotives. It was long and operated between 1971 and 1995. Kings Road Railway was operated by William McCormick in the 1960s and 70s in his back garden in Knock, Belfast using a former industrial narrow gauge steam locomotive. See also List of British heritage and private railways Conservation in the United Kingdom Index of conservation articles List of heritage railways List of heritage railways in the Republic of Ireland List of narrow-gauge railways in Ireland Railway Preservation Society of Ireland Ulster Folk and Transport Museum References External links UK Directory of Heritage Railways UK Heritage Railways Lists of heritage railways Heritage railways in Northern Ireland Heritage railways in Northern Ireland Heritage railways in Northern Ireland Heritage railways in Northern Ireland Cultural heritage of Ireland
List of heritage railways in Northern Ireland
[ "Engineering" ]
402
[ "Lists of heritage railways", "Engineering preservation societies" ]
154,233
https://en.wikipedia.org/wiki/List%20of%20heritage%20railways%20in%20the%20Republic%20of%20Ireland
There are a small number of heritage railways in the Republic of Ireland, reflecting Ireland's long history of rail transport. Some former operations have closed, and aspirant operations may have museums and even rolling stock, but no operating track. There are also working groups, which may run heritage rolling stock on main lines. Heritage railways Operating Some of the main preserved or restored railways include: Fintown Railway, based in Fintown, County Donegal, which runs along the length of Lough Finn to Glenties Line for about a mile Listowel and Ballybunion Railway, a section of the Lartigue Monorail system, has been restored for visitors in Listowel, County Kerry Stradbally Woodland Railway, County Laois Waterford Suir Valley Railway, County Waterford, running a narrow gauge railway for from Kilmeaden Station along the former mainline route from Waterford to Mallow. It operates alongside the Waterford Greenway and is Ireland's longest heritage line. Under development / suspended Connemara Railway, based at Maam Cross, County Galway, not open to the public, sometime operating a temporary narrow gauge railway, later fitting standard gauge track (operation not apparent from last ref. check) West Clare Railway – installation intact but closed 1 June 2022 until further notice Defunct Clonmacnoise and West Offaly Railway in County Offaly, run by Bord na Móna, near Shannonbridge, closed 2008 Tralee and Dingle Light Railway, between Tralee and Dingle, which operated 1993–2013 Railway museums Castlerea Railway Museum, County Roscommon Cavan and Leitrim Railway, County Leitrim The Donegal Railway Heritage Centre in County Donegal, commemorating the operations of the County Donegal Railways Joint Committee which once had two narrow gauge railway systems Preservation groups Preservation groups in the Republic of Ireland include: The Irish Steam Preservation Society, based in Stradbally, County Laois, which operates the Stradbally Woodland Railway with vintage steam and diesel locomotives. The Irish Traction Group, based in Carrick-on-Suir, County Tipperary, which has a diesel locomotive collection at the site by the Limerick–Waterford railway route. ITG also run regular railtours around the country, sometimes with older (but still in regular use) Iarnród Éireann locomotives and rolling stock. The Railway Preservation Society of Ireland, an all-island body, with bases in County Antrim and Dublin, and a museum in the former; holds a full operating licence and operates heritage-themed excursions on the main lines in both jurisdictions. See also Conservation in the Republic of Ireland Index of conservation articles List of heritage railways List of heritage railways in Northern Ireland List of narrow-gauge railways in Ireland List of steam locomotives in Ireland References Lists of heritage railways Heritage railways in the Republic of Ireland Heritage railways in the Republic of Ireland Heritage railways in the Republic of Ireland Cultural heritage of Ireland
List of heritage railways in the Republic of Ireland
[ "Engineering" ]
587
[ "Lists of heritage railways", "Engineering preservation societies" ]
154,242
https://en.wikipedia.org/wiki/PH%20meter
A pH meter is a scientific instrument that measures the hydrogen-ion activity in water-based solutions, indicating their acidity or alkalinity expressed as pH. The pH meter measures the difference in electrical potential between a pH electrode and a reference electrode, and so the pH meter is sometimes referred to as a "potentiometric pH meter". The difference in electrical potential relates to the acidity or pH of the solution. Testing of pH via pH meters (pH-metry) is used in many applications ranging from laboratory experimentation to quality control. Applications The rate and outcome of chemical reactions taking place in water often depend on the acidity of the water, and it is therefore useful to know the acidity of the water, typically measured by means of a pH meter. Knowledge of pH is useful or critical in many situations, including chemical laboratory analyses. pH meters are used for soil measurements in agriculture, water quality for municipal water supplies, swimming pools, environmental remediation; brewing of wine or beer; manufacturing, healthcare and clinical applications such as blood chemistry; and many other applications. Advances in the instrumentation and in detection have expanded the number of applications in which pH measurements can be conducted. The devices have been miniaturized, enabling direct measurement of pH inside of living cells. In addition to measuring the pH of liquids, specially designed electrodes are available to measure the pH of semi-solid substances, such as foods. These have tips suitable for piercing semi-solids, have electrode materials compatible with ingredients in food, and are resistant to clogging. Design and use Principle of operation Potentiometric pH meters measure the voltage between two electrodes and display the result converted into the corresponding pH value. They comprise a simple electronic amplifier and a pair of electrodes, or alternatively a combination electrode, and some form of display calibrated in pH units. The meter usually has a glass electrode and a reference electrode, or a combination electrode. The electrodes, or probes, are inserted into the solution to be tested. pH meters may also be based on the antimony electrode (typically used for rough conditions) or the quinhydrone electrode. In order to accurately measure the potential difference between the two sides of the glass membrane, reference electrodes, typically silver chloride or calomel electrodes, are required on each side of the membrane. Their purpose is to measure changes in the potential on their respective side. One is built into the glass electrode. The other, which makes contact with the test solution through a porous plug, may be a separate reference electrode or may be built into a combination electrode. The resulting voltage is the potential difference between the two sides of the glass membrane, possibly offset by some difference between the two reference electrodes, which can be compensated for. The article on the glass electrode has a good description and figure. The design of the electrodes is the key part: These are rod-like structures usually made of glass, with a bulb containing the sensor at the bottom. The glass electrode for measuring the pH has a glass bulb specifically designed to be selective to hydrogen-ion concentration. On immersion in the solution to be tested, hydrogen ions in the test solution exchange for other positively charged ions on the glass bulb, creating an electrochemical potential across the bulb. 
The electronic amplifier detects the difference in electrical potential between the two electrodes generated in the measurement and converts the potential difference to pH units. The magnitude of the electrochemical potential across the glass bulb is linearly related to the pH according to the Nernst equation. The reference electrode is insensitive to the pH of the solution, being composed of a metallic conductor, which connects to the display. This conductor is immersed in an electrolyte solution, typically potassium chloride, which comes into contact with the test solution through a porous ceramic membrane. The display consists of a voltmeter, which displays voltage in units of pH. On immersion of the glass electrode and the reference electrode in the test solution, an electrical circuit is completed, in which there is a potential difference created and detected by the voltmeter. The circuit can be thought of as going from the conductive element of the reference electrode to the surrounding potassium-chloride solution, through the ceramic membrane to the test solution, the hydrogen-ion-selective glass of the glass electrode, to the solution inside the glass electrode, to the silver of the glass electrode, and finally the voltmeter of the display device. The voltage varies from test solution to test solution depending on the potential difference created by the difference in hydrogen-ion concentrations on each side of the glass membrane between the test solution and the solution inside the glass electrode. All other potential differences in the circuit do not vary with pH and are corrected for by means of the calibration. For simplicity, many pH meters use a combination probe, constructed with the glass electrode and the reference electrode contained within a single probe. A detailed description of combination electrodes is given in the article on glass electrodes. The pH meter is calibrated with solutions of known pH, typically before each use, to ensure accuracy of measurement. To measure the pH of a solution, the electrodes are used as probes, which are dipped into the test solutions and held there sufficiently long for the hydrogen ions in the test solution to equilibrate with the ions on the surface of the bulb on the glass electrode. This equilibration provides a stable pH measurement. pH electrode and reference electrode design Details of the fabrication and resulting microstructure of the glass membrane of the pH electrode are maintained as trade secrets by the manufacturers. However, certain aspects of design are published. Glass is a solid electrolyte, for which alkali-metal ions can carry current. The pH-sensitive glass membrane is generally spherical to simplify the manufacture of a uniform membrane. These membranes are up to 0.4 millimeters in thickness, thicker than original designs, so as to render the probes durable. The glass has silicate chemical functionality on its surface, which provides binding sites for alkali-metal ions and hydrogen ions from the solutions. This provides an ion-exchange capacity in the range of 10−6 to 10−8 mol/cm2. Selectivity for hydrogen ions (H+) arises from a balance of ionic charge, volume requirements versus other ions, and the coordination number of other ions. Electrode manufacturers have developed compositions that suitably balance these factors, most notably lithium glass. The silver chloride electrode is most commonly used as a reference electrode in pH meters, although some designs use the saturated calomel electrode. 
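As a concrete illustration of the Nernst relation and the two-point buffer calibration described in this article, the following minimal sketch (not taken from any manufacturer's documentation) converts a measured electrode voltage to pH. It assumes an approximately ideal Nernstian response of about -59 mV per pH unit at 25 °C, and the buffer voltages used are hypothetical.

# Minimal sketch: voltage-to-pH conversion with a two-point buffer calibration.
# Assumes a roughly Nernstian electrode response (about -59 mV per pH unit at
# 25 C); the readings below are illustrative, not real instrument data.

def calibrate_two_point(v1, ph1, v2, ph2):
    # Fit the straight line V = slope * pH + offset through the two buffer points.
    slope = (v2 - v1) / (ph2 - ph1)
    offset = v1 - slope * ph1
    return slope, offset

def voltage_to_ph(v, slope, offset):
    # Invert the calibration line to recover pH from a measured voltage.
    return (v - offset) / slope

# Hypothetical readings: +0.177 V in pH 4.00 buffer, -0.178 V in pH 10.00 buffer.
slope, offset = calibrate_two_point(0.177, 4.00, -0.178, 10.00)
print(round(voltage_to_ph(0.000, slope, offset), 2))  # about 7.0 for a 0 V reading

The fitted slope (about -0.059 V per pH unit in this example) is the quantity the meter's calibration controls effectively adjust; a slope markedly smaller than the Nernstian value usually points to an ageing or fouled electrode.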
The silver chloride electrode is simple to manufacture and provides high reproducibility. The reference electrode usually consists of a platinum wire that has contact with a silver/silver chloride mixture, which is immersed in a potassium chloride solution. There is a ceramic plug, which serves as a contact to the test solution, providing low resistance while preventing mixing of the two solutions. With these electrode designs, the voltmeter detects potential differences of ±1400 millivolts. The electrodes are further designed to rapidly equilibrate with test solutions to facilitate ease of use. The equilibration times are typically less than one second, although equilibration times increase as the electrodes age. Maintenance Because of the sensitivity of the electrodes to contaminants, cleanliness of the probes is essential for accuracy and precision. Probes are generally kept moist when not in use with a medium appropriate for the particular probe, which is typically an aqueous solution available from probe manufacturers. Probe manufacturers provide instructions for cleaning and maintaining their probe designs. For illustration, one maker of laboratory-grade pH probes gives cleaning instructions for specific contaminants: general cleaning (15-minute soak in a solution of bleach and detergent), salt (hydrochloric acid solution followed by sodium hydroxide and water), grease (detergent or methanol), clogged reference junction (KCl solution), protein deposits (pepsin and HCl, 1% solution), and air bubbles. Calibration and operation The German Institute for Standardization publishes a standard for pH measurement using pH meters, DIN 19263. Very precise measurements necessitate that the pH meter is calibrated before each measurement. More typically, calibration is performed once per day of operation. Calibration is needed because the glass electrode does not give reproducible electrostatic potentials over longer periods of time. Consistent with principles of good laboratory practice, calibration is performed with at least two standard buffer solutions that span the range of pH values to be measured. For general purposes, buffers at pH 4.00 and pH 10.00 are suitable. The pH meter has one calibration control to set the meter reading equal to the value of the first standard buffer and a second control to adjust the meter reading to the value of the second buffer. A third control allows the temperature to be set. Standard buffer sachets, available from a variety of suppliers, usually document the temperature dependence of the buffer control. More precise measurements sometimes require calibration at three different pH values. Some pH meters provide built-in temperature-coefficient correction, with temperature thermocouples in the electrode probes. The calibration process correlates the voltage produced by the probe (approximately 0.06 volts per pH unit) with the pH scale. Good laboratory practice dictates that, after each measurement, the probes are rinsed with distilled water or deionized water to remove any traces of the solution being measured, blotted with a scientific wipe to absorb any remaining water, which could dilute the sample and thus alter the reading, and then immersed in a storage solution suitable for the particular probe type. Types of pH meters In general there are three major categories of pH meters. Benchtop pH meters are often used in laboratories to measure samples which are brought to the pH meter for analysis. 
Portable, or field pH meters, are handheld pH meters that are used to take the pH of a sample in a field or production site. In-line or in situ pH meters, also called pH analyzers, are used to measure pH continuously in a process, and can stand-alone, or be connected to a higher level information system for process control. pH meters range from simple and inexpensive pen-like devices to complex and expensive laboratory instruments with computer interfaces and several inputs for indicator and temperature measurements to be entered to adjust for the variation in pH caused by temperature. The output can be digital or analog, and the devices can be battery-powered or rely on line power. Some versions use telemetry to connect the electrodes to the voltmeter display device. Specialty meters and probes are available for use in special applications, such as harsh environments and biological microenvironments. There are also holographic pH sensors, which allow pH measurement colorimetrically, making use of the variety of pH indicators that are available. Additionally, there are commercially available pH meters based on solid state electrodes, rather than conventional glass electrodes. History The concept of pH was defined in 1909 by S. P. L. Sørensen, and electrodes were used for pH measurement in the 1920s. In October 1934, Arnold Orville Beckman registered the first patent for a complete chemical instrument for the measurement of pH, U.S. Patent No. 2,058,761, for his "acidimeter", later renamed the pH meter. Beckman developed the prototype as an assistant professor of chemistry at the California Institute of Technology, when asked to devise a quick and accurate method for measuring the acidity of lemon juice for the California Fruit Growers Exchange (Sunkist). On April 8, 1935, Beckman's renamed National Technical Laboratories focused on the manufacture of scientific instruments, with the Arthur H. Thomas Company as a distributor for its pH meter. In its first full year of sales, 1936, the company sold 444 pH meters for $60,000 in sales. In years to come, the company sold millions of the units. In 2004 the Beckman pH meter was designated an ACS National Historic Chemical Landmark in recognition of its significance as the first commercially successful electronic pH meter. The Radiometer Corporation of Denmark was founded in 1935, and began marketing a pH meter for medical use around 1936, but "the development of automatic pH-meters for industrial purposes was neglected. Instead American instrument makers successfully developed industrial pH-meters with a wide variety of applications, such as in breweries, paper works, alum works, and water treatment systems." In the 1940s the electrodes for pH meters were often difficult to make, or unreliable due to brittle glass. Dr. Werner Ingold began to industrialize the production of single-rod measuring cells, a combination of measurement and reference electrode in one construction unit, which led to broader acceptance in a wide range of industries including pharmaceutical production. Beckman marketed a portable "Pocket pH Meter" as early as 1956, but it did not have a digital read-out. In the 1970s Jenco Electronics of Taiwan designed and manufactured the first portable digital pH meter. This meter was sold under the label of the Cole-Parmer Corporation. Building a pH meter Specialized manufacturing is required for the electrodes, and details of their design and construction are typically trade secrets. 
However, with purchase of suitable electrodes, a standard multimeter can be used to complete the construction of the pH meter. However, commercial suppliers offer voltmeter displays that simplify use, including calibration and temperature compensation. See also Antimony electrode Ion-selective electrodes ISFET pH electrode Potentiometry Quinhydrone electrode Saturated calomel electrode Silver chloride electrode Standard hydrogen electrode References External links Introduction to pH measurement – Overview of pH and pH measurement at the Omega Engineering website Development of the Beckman pH Meter – National Historic Chemical Landmark of the American Chemical Society pH Measurement Handbook - A publication of the Thermo-Scientific Co. Acid–base chemistry Electrochemistry Measuring instruments Scientific instruments
PH meter
[ "Chemistry", "Technology", "Engineering" ]
2,872
[ "Acid–base chemistry", "Measuring instruments", "Scientific instruments", "Equilibrium chemistry", "Electrochemistry", "nan" ]
154,291
https://en.wikipedia.org/wiki/Ham%20%28chimpanzee%29
Ham (July 1957 – January 19, 1983), a chimpanzee also known as Ham the Chimp and Ham the Astrochimp, was the first non-human great ape launched into space. On January 31, 1961, Ham flew a suborbital flight on the Mercury-Redstone 2 mission, part of the U.S. space program's Project Mercury. Ham was known as "No 65" before he safely returned to Earth, when he was named after an acronym for the laboratory that prepared him for his historic mission—the Holloman Aerospace Medical Center, located at Holloman Air Force Base in New Mexico, southwest of Alamogordo. His name was also in honor of the commander of Holloman Aeromedical Laboratory, Lieutenant Colonel Hamilton "Ham" Blackshear. Early life Ham was born in July 1957 in French Cameroon, captured by animal trappers and sent to the Rare Bird Farm in Miami, Florida. He was purchased by the United States Air Force and brought to Holloman Air Force Base in July 1959. Ham was sold to the United States Air Force for $457. There were originally 40 chimpanzee flight candidates at Holloman. After evaluation, the number of candidates was reduced to 18, then to six, including Ham. Officially, Ham was known as No. 65 before his flight, and only renamed "Ham" upon his successful return to Earth. This was reportedly because officials did not want the bad press that would come from the death of a "named" chimpanzee if the mission were a failure. Among his handlers, No. 65 had been known as "Chop Chop Chang". Training and mission Beginning in July 1959, the two-year-old chimpanzee was trained under the direction of neuroscientist Joseph V. Brady at Holloman Air Force Base Aero-Medical Field Laboratory to do simple, timed tasks in response to electric lights and sounds. During his pre-flight training, Ham was taught to push a lever within five seconds of seeing a flashing blue light; failure to do so resulted in an application of a light electric shock to the soles of his feet, while a correct response earned him a banana pellet. Ham was trained for 219 hours during a 15-month period. While Ham was the first great ape, he was not the first animal to go to space, as there were many other types of animals that left Earth's atmosphere before him. However, none of these other animals could provide the significant insight that Ham could provide. One of the reasons that a chimpanzee was chosen for this mission was because of their many similarities to humans. Some of their similarities include: similar organ placement inside the body and having a response time to a stimulus that was very similar to that of humans (just a couple of deciseconds slower). Through the observations of Ham scientists would gain a better understanding of the possibility of sending humans into space. On January 31, 1961, Ham was secured in a Project Mercury mission designated MR-2 and launched from Cape Canaveral, Florida, on a suborbital flight. Based on dental eruption, Ham was 44 months old at the time of the flight. A number of physiological sensors were used to monitor the vital signs (electrocardiogram, respiration, and body temperature) of Ham. A commercial rectal thermistor probe was used instead of the probe used on the human Mercury astronauts. The probe was inserted 8 inches deep into Ham's rectum. The physiological sensors were placed on Ham about 10 hours before liftoff. Ham's ability to complete tasks during the flight were assessed by the psychomotor apparatus. 
The apparatus gave Ham a visual cue in the form of colored lights and required a response from two levers; if he succeeded in his task, a drink and a food pellet would be awarded; failure would be punished by a shock to the soles of his feet. Due to a valve malfunction, the Redstone rocket delivered thrust higher than intended. The anomaly triggered the emergency escape rocket and subjected Ham to 17 g of acceleration. The jettison of the spent escape rocket also caused the retro rocket pack to be prematurely jettisoned. The lack of the retro rocket caused the capsule to reenter the atmosphere with excessive speed. Ham was subjected to 14.7 g during reentry. Ham's capsule splashed down in the Atlantic Ocean and was recovered by the USS Donner later that day. The capsule was damaged during splashdown and settled deeper in the water than designed. The post-flight examination found a small abrasion on the bridge of Ham's nose; he was also dehydrated and had lost 5.37% of his body weight; he was otherwise in good physical condition. His flight was 16 minutes and 39 seconds long. He would become agitated when the press approached him and would panic when his handler tried to place him in a capsule for photos. Ham's lever-pushing performance in space was only a fraction of a second slower than on Earth, demonstrating that tasks could be performed in space. Of the two shocks Ham received in flight, the one shortly after the launch was due to an error in the testing apparatus; the other was due to his lack of response after experiencing 14 g deceleration during reentry. The results from his test flight led directly to Alan Shepard's May 5, 1961, suborbital flight aboard Freedom 7. Later life Ham retired from the National Aeronautics and Space Administration (NASA) in 1963. On April 5, 1963, Ham was transferred to the National Zoo in Washington, D.C., where he lived for 17 years before joining a small group of chimps at North Carolina Zoo on September 25, 1980. Ham suffered from chronic heart and liver disease. On January 19, 1983, at age 25, Ham died. After his death, Ham's body was given to the Armed Forces Institute of Pathology for necropsy. Following the necropsy, the plan was to have him taxidermied and placed on display at the Smithsonian Institution, following Soviet precedent with pioneering space dogs Belka and Strelka. However, this plan was abandoned after a negative public reaction. Ham's skeleton is held in the collection of the National Museum of Health and Medicine, Silver Spring, Maryland, and the rest of Ham's remains were buried at the International Space Hall of Fame in Alamogordo, New Mexico. Colonel John Stapp gave the eulogy at the memorial service. Ham's backup, Minnie, was the only female chimpanzee trained for the Mercury program. After her role in the Mercury program ended, Minnie became part of an Air Force chimpanzee breeding program, producing nine offspring and helping to raise the offspring of several other members of the chimpanzee colony. She was the last surviving astro-chimpanzee and died at age 41 on March 14, 1998. Cultural references Ray Allen & The Embers released the song "Ham the Space Monkey" in 1961. Tom Wolfe's 1979 book The Right Stuff depicts Ham's spaceflight, as do its 1983 film and 2020 TV adaptations. The 2001 film Race to Space is a fictionalized version of Ham's story; the chimpanzee in the film is named "Mac". 
In 2007, a French documentary made in association with Animal Planet, Ham—Astrochimp #65, tells the story of Ham as witnessed by Jeff, who took care of Ham until his departure from the Air Force base after the success of the mission. It is also known as Ham: A Chimp into Space / Ham, un chimpanzé dans l'espace. The 2008 3D animated film Space Chimps follows anthropomorphic chimpanzees and their adventures in space. The primary protagonist is named Ham III, depicted as the grandson of Ham. In 2008, Bark Hide and Horn, a folk-rock band from Portland, Oregon, released a song titled "Ham the Astrochimp", detailing the journey of Ham from his perspective. See also Animals in space Monkeys and apes in space Albert II, a rhesus monkey, became the first mammal in space on June 14, 1949 Laika, a Soviet space dog, was the first animal to orbit Earth, November 3, 1957 Yuri Gagarin, the first human in space, orbited April 12, 1961 Enos, the second of the two chimpanzees launched into space, and the only one to orbit Earth, November 29, 1961 Félicette, the only cat in space, October 18, 1963 One Small Step: The Story of the Space Chimps, 2008 documentary Spaceflight List of individual apes References Further reading Brief biography of Ham, aimed at children ages 9–12. A novel about Ham and his trainer. Book covering the life and flight of Ham, plus other space animals. External links Pictures from the NASA Life Sciences Data Archive Who2 profile: Ham the Chimp Animal Astronauts Chimp Ham: "Trailblazer In Space" 1961 Detroit News In Praise of Ham the Astrochimp in LIFE 1957 animal births 1983 animal deaths 1961 in spaceflight Animals in space Individual chimpanzees NASA Project Mercury Non-human primate astronauts of the American space program
Ham (chimpanzee)
[ "Chemistry", "Biology" ]
1,896
[ "Animal testing", "Space-flown life", "Animals in space" ]
154,456
https://en.wikipedia.org/wiki/Fathom
A fathom is a unit of length in the imperial and the U.S. customary systems equal to 6 feet (1.8288 metres), used especially for measuring the depth of water. The fathom is neither an international standard (SI) unit, nor an internationally accepted non-SI unit. Historically it was the maritime measure of depth in the English-speaking world but, apart from within the US, charts now use metres. There are two yards (6 feet) in an imperial fathom. Originally the span of a man's outstretched arms, the size of a fathom has varied slightly depending on whether it was defined as a thousandth of an (Admiralty) nautical mile or as a multiple of the imperial yard. Formerly, the term was used for any of several units of length varying around this value. Etymology The term derives (via Middle English fathme) from the Old English fæðm, which is cognate with the Danish word favn (via the Vikings) and means "embracing arms" or "pair of outstretched arms". It may also be cognate with the Old High German word "fadum", which has the same meaning and also means "yarn (originally stretching between the outstretched fingertips)". Forms Ancient fathoms The Ancient Greek measure known as the orguia (orgyiá, "outstretched") is usually translated as "fathom". By the Byzantine period, this unit came in two forms: a "simple orguia" (haplē orguiá) roughly equivalent to the old Greek fathom (6 Byzantine feet) and an "imperial" (basilikē) or "geometric orguia" (geōmetrikē orguiá) that was one-eighth longer (6 feet and a span). International fathom One international fathom is equal to exactly 1.8288 metres (the official international definition of the fathom). British fathom The British Admiralty defined a fathom to be a thousandth of an imperial nautical mile (which was 6080 ft), or 6.08 feet. In practice the "warship fathom" of exactly 6 feet was used in Britain and the United States. No conflict between the definitions existed in practice, since depths on imperial nautical charts were indicated in feet for shallow water and in fathoms for greater depths. Until the 19th century in England, the length of the fathom was more variable, differing between merchant vessels and fishing vessels. Other definitions Other definitions of fathom include: 1.828804 m (an obsolete measurement of the fathom based on the US survey foot, retained only for historical and legacy applications); 2 yards exactly; and 18 hands. One metre is about 0.5468 fathoms. In the international yard and pound agreement of 1959, the United States, Australia, Canada, New Zealand, South Africa, and the United Kingdom defined the length of the international yard to be exactly 0.9144 metre. In 1959 the United States kept the US survey foot as the basis of its definition of the fathom. In October 2019, the U.S. National Geodetic Survey and the National Institute of Standards and Technology announced their joint intent to retire the U.S. survey foot, with effect from the end of 2022. The fathom in U.S. customary units is thereafter defined based on the international 1959 foot, giving the length of the fathom as exactly 1.8288 metres in the United States as well. Derived units At one time, a quarter meant one-quarter of a fathom. A cable length, based on the length of a ship's cable, has been variously reckoned as equal to 100 or 120 fathoms. Use of the fathom Water depth Most modern nautical charts indicate depth in metres. However, the U.S. Hydrographic Office uses feet and fathoms. A nautical chart will always explicitly indicate the units of depth used. 
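Since the international fathom is exactly 1.8288 metres (6 feet, or 2 yards), converting between charted depths in fathoms, feet, and metres is straightforward arithmetic. The short Python sketch below illustrates the conversions using the exact definition quoted above; the constant and function names are illustrative rather than drawn from any standard library.

FEET_PER_FATHOM = 6.0           # 1 fathom = 2 yards = 6 feet
METRES_PER_FATHOM = 1.8288      # exact international definition (6 ft x 0.3048 m)

def fathoms_to_metres(fathoms: float) -> float:
    """Convert a depth in fathoms to metres using the international fathom."""
    return fathoms * METRES_PER_FATHOM

def metres_to_fathoms(metres: float) -> float:
    """Convert a depth in metres to fathoms using the international fathom."""
    return metres / METRES_PER_FATHOM

def fathoms_to_feet(fathoms: float) -> float:
    """Convert a depth in fathoms to feet."""
    return fathoms * FEET_PER_FATHOM

if __name__ == "__main__":
    print(fathoms_to_metres(5))     # 9.144 m ("full fathom five")
    print(fathoms_to_feet(20))      # 120.0 ft
    print(metres_to_fathoms(100))   # about 54.68 fathoms (cf. 1 m is about 0.5468 fathoms, as noted above)

For example, a charted depth of 20 fathoms corresponds to 120 feet, or about 36.6 metres.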
To measure the depth of shallow waters, boatmen used a sounding line containing fathom points, some marked and others in between, called deeps, unmarked but estimated by the user. Water near the coast and not too deep to be fathomed by a hand sounding line was referred to as in soundings or on soundings. The area offshore beyond the 100 fathom line, too deep to be fathomed by a hand sounding line, was referred to as out of soundings or off soundings. A deep-sea lead, the heaviest of sounding leads, was used in water exceeding 100 fathoms in depth. This technique has been superseded by sonic depth finders for measuring mechanically the depth of water beneath a ship, one version of which is the Fathometer (trademark). The record made by such a device is a fathogram. A fathom line or fathom curve, a usually sinuous line on a nautical chart, joins all points having the same depth of water, thereby indicating the contour of the ocean floor. Some extensive flat areas of the sea bottom with constant depth are known by their fathom number, like the Broad Fourteens or the Long Forties, both in the North Sea. Line length The components of a commercial fisherman's setline were measured in fathoms. The rope called a groundline, used to form the main line of a setline, was usually provided in bundles of 300 fathoms. A single skein of this rope was referred to as a line. Especially in Pacific coast fisheries the setline was composed of units called skates, each consisting of several hundred fathoms of groundline, with gangions and hooks attached. A tuck seine or tuck net about long, and very deep in the middle, was used to take fish from a larger seine. A line attached to a whaling harpoon was about . A forerunner — a piece of cloth tied on a ship's log line some fathoms from the outboard end — marked the limit of drift line. A kite was a drag, towed under water at any depth up to about , which upon striking bottom, was upset and rose to the surface. A shot, one of the forged lengths of chain joined by shackles to form an anchor cable, was usually . A shackle, a length of cable or chain equal to . In 1949, the British navy redefined the shackle to be . The Finnish fathom (syli) is occasionally used: nautical mile or cable length. Burial A burial at sea (where the body is weighted to force it to the bottom) requires a minimum of six fathoms of water. This is the origin of the phrase "to deep six" as meaning to discard, or dispose of. The phrase is echoed in Shakespeare's The Tempest, where Ariel tells Ferdinand, "Full fathom five thy father lies". On land Until early in the 20th century, it was the unit used to measure the depth of mines (mineral extraction) in the United Kingdom. Miners also use it as a unit of area equal to 6 feet square (3.34 m2) in the plane of a vein. In Britain, it can mean the quantity of wood in a pile of any length measuring square in cross section. In Central Europe, the klafter was the corresponding unit of comparable length, as was the toise in France. In Hungary the square fathom ("négyszögöl") is still in use as an unofficial measure of land area, primarily for small lots suitable for construction. See also Ancient Greek units of measurement Anthropic units Bathymetry English units Hvat Imperial units International System of Units United States customary units Sounding line Toise Klafter References Citations Bibliography . External links An explanation of the fathom marks used at sea (retrieved Sept 2005). 
Hungarian web page that refers to the length of a "bécsi öl" Human-based units of measurement Nautical terminology Units of length Customary units of measurement in the United States
Fathom
[ "Mathematics" ]
1,626
[ "Quantity", "Units of measurement", "Units of length" ]
154,473
https://en.wikipedia.org/wiki/Fermi%20level
The Fermi level of a solid-state body is the thermodynamic work required to add one electron to the body. It is a thermodynamic quantity usually denoted by μ or EF for brevity. The Fermi level does not include the work required to remove the electron from wherever it came from. A precise understanding of the Fermi level—how it relates to electronic band structure in determining electronic properties; how it relates to the voltage and flow of charge in an electronic circuit—is essential to an understanding of solid-state physics. In band structure theory, used in solid-state physics to analyze the energy levels in a solid, the Fermi level can be considered to be a hypothetical energy level of an electron, such that at thermodynamic equilibrium this energy level would have a 50% probability of being occupied at any given time. The position of the Fermi level in relation to the band energy levels is a crucial factor in determining electrical properties. The Fermi level does not necessarily correspond to an actual energy level (in an insulator the Fermi level lies in the band gap), nor does it require the existence of a band structure. Nonetheless, the Fermi level is a precisely defined thermodynamic quantity, and differences in Fermi level can be measured simply with a voltmeter. Voltage measurement Sometimes it is said that electric currents are driven by differences in electrostatic potential (Galvani potential), but this is not exactly true. As a counterexample, multi-material devices such as p–n junctions contain internal electrostatic potential differences at equilibrium, yet without any accompanying net current; if a voltmeter is attached to the junction, one simply measures zero volts. Clearly, the electrostatic potential is not the only factor influencing the flow of charge in a material—Pauli repulsion, carrier concentration gradients, electromagnetic induction, and thermal effects also play an important role. In fact, the quantity called voltage as measured in an electronic circuit has a simple relationship to the chemical potential for electrons (Fermi level). When the leads of a voltmeter are attached to two points in a circuit, the displayed voltage is a measure of the total work transferred when a unit charge is allowed to move from one point to the other. If a simple wire is connected between two points of differing voltage (forming a short circuit), current will flow from positive to negative voltage, converting the available work into heat. The Fermi level of a body expresses the work required to add an electron to it, or equally the work obtained by removing an electron. Therefore, VA − VB, the observed difference in voltage between two points, A and B, in an electronic circuit is exactly related to the corresponding chemical potential difference, μA − μB, in Fermi level by the formula VA − VB = −(μA − μB)/e, where −e is the electron charge. From the above discussion it can be seen that electrons will move from a body of high μ (low voltage) to low μ (high voltage) if a simple path is provided. This flow of electrons will cause the lower μ to increase (due to charging or other repulsion effects) and likewise cause the higher μ to decrease. Eventually, μ will settle down to the same value in both bodies. This leads to an important fact regarding the equilibrium (off) state of an electronic circuit: at thermodynamic equilibrium, the Fermi level takes the same value everywhere in the circuit's electrically connected parts. This also means that the voltage (measured with a voltmeter) between any two points will be zero, at equilibrium. 
Note that thermodynamic equilibrium here requires that the circuit be internally connected and not contain any batteries or other power sources, nor any variations in temperature. Band structure of solids In the band theory of solids, electrons occupy a series of bands composed of single-particle energy eigenstates each labelled by ϵ. Although this single-particle picture is an approximation, it greatly simplifies the understanding of electronic behaviour and it generally provides correct results when applied correctly. The Fermi–Dirac distribution, f(ϵ), gives the probability that (at thermodynamic equilibrium) a state having energy ϵ is occupied by an electron: f(ϵ) = 1 / (e^((ϵ − μ)/(kBT)) + 1). Here, T is the absolute temperature and kB is the Boltzmann constant. If there is a state at the Fermi level (ϵ = μ), then this state will have a 50% chance of being occupied. The closer f is to 1, the higher the chance this state is occupied. The closer f is to 0, the higher the chance this state is empty. The location of μ within a material's band structure is important in determining the electrical behaviour of the material. In an insulator, μ lies within a large band gap, far away from any states that are able to carry current. In a metal, semimetal or degenerate semiconductor, μ lies within a delocalized band. A large number of states near μ are thermally active and readily carry current. In an intrinsic or lightly doped semiconductor, μ is close enough to a band edge that there is a dilute population of thermally excited carriers residing near that band edge. In semiconductors and semimetals the position of μ relative to the band structure can usually be controlled to a significant degree by doping or gating. These controls do not change μ, which is fixed by the electrodes, but rather they cause the entire band structure to shift up and down (sometimes also changing the band structure's shape). For further information about the Fermi levels of semiconductors, see (for example) Sze. Local conduction band referencing, internal chemical potential and the parameter ζ If the symbol ℰ is used to denote an electron energy level measured relative to the energy of the edge of its enclosing band, ϵC, then in general we have ℰ = ϵ − ϵC. We can define a parameter ζ that references the Fermi level with respect to the band edge: ζ = μ − ϵC. It follows that the Fermi–Dirac distribution function can then be written in terms of band-referenced quantities as f(ℰ) = 1 / (e^((ℰ − ζ)/(kBT)) + 1). The band theory of metals was initially developed by Sommerfeld, from 1927 onwards, who paid great attention to the underlying thermodynamics and statistical mechanics. Confusingly, in some contexts the band-referenced quantity ζ may be called the Fermi level, chemical potential, or electrochemical potential, leading to ambiguity with the globally-referenced Fermi level. In this article, the terms conduction-band referenced Fermi level or internal chemical potential are used to refer to ζ. ζ is directly related to the number of active charge carriers as well as their typical kinetic energy, and hence it is directly involved in determining the local properties of the material (such as electrical conductivity). For this reason it is common to focus on the value of ζ when concentrating on the properties of electrons in a single, homogeneous conductive material. By analogy to the energy states of a free electron, the ℰ of a state is the kinetic energy of that state and ϵC is its potential energy. With this in mind, the parameter, ζ, could also be labelled the Fermi kinetic energy. 
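To make the occupation statistics above concrete, the following Python sketch evaluates the Fermi–Dirac distribution f(ϵ) = 1/(e^((ϵ − μ)/(kBT)) + 1) for a few energies around the Fermi level. The numerical inputs (room temperature, energies in electronvolts) are illustrative choices, not values taken from the text.

import math

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def fermi_dirac(energy_ev: float, mu_ev: float, temperature_k: float) -> float:
    """Occupation probability of a state at energy_ev given Fermi level mu_ev (both in eV)."""
    x = (energy_ev - mu_ev) / (K_B * temperature_k)
    return 1.0 / (math.exp(x) + 1.0)

if __name__ == "__main__":
    mu, T = 0.0, 300.0  # measure energies relative to the Fermi level; room temperature
    for e in (-0.2, -0.05, 0.0, 0.05, 0.2):  # energies in eV
        print(f"E - mu = {e:+.2f} eV  ->  f = {fermi_dirac(e, mu, T):.4f}")
    # At E = mu the occupation is exactly 0.5, as stated above; states well below the
    # Fermi level are nearly full, and states well above it are nearly empty.

Replacing ϵ and μ with the band-referenced quantities ℰ and ζ leaves the function unchanged, since only the difference ϵ − μ = ℰ − ζ enters the distribution.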
Unlike μ, the parameter, ζ, is not a constant at equilibrium, but rather varies from location to location in a material due to variations in ϵC, which is determined by factors such as material quality and impurities/dopants. Near the surface of a semiconductor or semimetal, ζ can be strongly controlled by externally applied electric fields, as is done in a field effect transistor. In a multi-band material, ζ may even take on multiple values in a single location. For example, in a piece of aluminum there are two conduction bands crossing the Fermi level (even more bands in other materials); each band has a different edge energy, ϵC, and a different ζ. The value of ζ at zero temperature is widely known as the Fermi energy, sometimes written ζ0. Confusingly (again), the name Fermi energy sometimes is used to refer to ζ at non-zero temperature. Temperature out of equilibrium The Fermi level, μ, and temperature, T, are well defined constants for a solid-state device in thermodynamic equilibrium situation, such as when it is sitting on the shelf doing nothing. When the device is brought out of equilibrium and put into use, then strictly speaking the Fermi level and temperature are no longer well defined. Fortunately, it is often possible to define a quasi-Fermi level and quasi-temperature for a given location, that accurately describe the occupation of states in terms of a thermal distribution. The device is said to be in quasi-equilibrium when and where such a description is possible. The quasi-equilibrium approach allows one to build a simple picture of some non-equilibrium effects as the electrical conductivity of a piece of metal (as resulting from a gradient of μ) or its thermal conductivity (as resulting from a gradient in T). The quasi-μ and quasi-T can vary (or not exist at all) in any non-equilibrium situation, such as: If the system contains a chemical imbalance (as in a battery). If the system is exposed to changing electromagnetic fields (as in capacitors, inductors, and transformers). Under illumination from a light-source with a different temperature, such as the sun (as in solar cells), When the temperature is not constant within the device (as in thermocouples), When the device has been altered, but has not had enough time to re-equilibrate (as in piezoelectric or pyroelectric substances). In some situations, such as immediately after a material experiences a high-energy laser pulse, the electron distribution cannot be described by any thermal distribution. One cannot define the quasi-Fermi level or quasi-temperature in this case; the electrons are simply said to be non-thermalized. In less dramatic situations, such as in a solar cell under constant illumination, a quasi-equilibrium description may be possible but requiring the assignment of distinct values of μ and T to different bands (conduction band vs. valence band). Even then, the values of μ and T may jump discontinuously across a material interface (e.g., p–n junction) when a current is being driven, and be ill-defined at the interface itself. Technicalities Nomenclature The term Fermi level is mainly used in discussing the solid state physics of electrons in semiconductors, and a precise usage of this term is necessary to describe band diagrams in devices comprising different materials with different levels of doping. In these contexts, however, one may also see Fermi level used imprecisely to refer to the band-referenced Fermi level, μ − ϵC, called ζ above. 
It is common to see scientists and engineers refer to "controlling", "pinning", or "tuning" the Fermi level inside a conductor, when they are in fact describing changes in ϵC due to doping or the field effect. In fact, thermodynamic equilibrium guarantees that the Fermi level in a conductor is always fixed to be exactly equal to the Fermi level of the electrodes; only the band structure (not the Fermi level) can be changed by doping or the field effect (see also band diagram). A similar ambiguity exists between the terms, chemical potential and electrochemical potential. It is also important to note that Fermi level is not necessarily the same thing as Fermi energy. In the wider context of quantum mechanics, the term Fermi energy usually refers to the maximum kinetic energy of a fermion in an idealized non-interacting, disorder free, zero temperature Fermi gas. This concept is very theoretical (there is no such thing as a non-interacting Fermi gas, and zero temperature is impossible to achieve). However, it finds some use in approximately describing white dwarfs, neutron stars, atomic nuclei, and electrons in a metal. On the other hand, in the fields of semiconductor physics and engineering, Fermi energy often is used to refer to the Fermi level described in this article. Fermi level referencing and the location of zero Fermi level Much like the choice of origin in a coordinate system, the zero point of energy can be defined arbitrarily. Observable phenomena only depend on energy differences. When comparing distinct bodies, however, it is important that they all be consistent in their choice of the location of zero energy, or else nonsensical results will be obtained. It can therefore be helpful to explicitly name a common point to ensure that different components are in agreement. On the other hand, if a reference point is inherently ambiguous (such as "the vacuum", see below) it will instead cause more problems. A practical and well-justified choice of common point is a bulky, physical conductor, such as the electrical ground or earth. Such a conductor can be considered to be in a good thermodynamic equilibrium and so its μ is well defined. It provides a reservoir of charge, so that large numbers of electrons may be added or removed without incurring charging effects. It also has the advantage of being accessible, so that the Fermi level of any other object can be measured simply with a voltmeter. Why it is not advisable to use "the energy in vacuum" as a reference zero In principle, one might consider using the state of a stationary electron in the vacuum as a reference point for energies. This approach is not advisable unless one is careful to define exactly where the vacuum is. The problem is that not all points in the vacuum are equivalent. At thermodynamic equilibrium, it is typical for electrical potential differences of order 1 V to exist in the vacuum (Volta potentials). The source of this vacuum potential variation is the variation in work function between the different conducting materials exposed to vacuum. Just outside a conductor, the electrostatic potential depends sensitively on the material, as well as which surface is selected (its crystal orientation, contamination, and other details). The parameter that gives the best approximation to universality is the Earth-referenced Fermi level suggested above. This also has the advantage that it can be measured with a voltmeter. 
Discrete charging effects in small systems In cases where the "charging effects" due to a single electron are non-negligible, the above definitions should be clarified. For example, consider a capacitor made of two identical parallel plates. If the capacitor is uncharged, the Fermi level is the same on both sides, so one might think that it should take no energy to move an electron from one plate to the other. But when the electron has been moved, the capacitor has become (slightly) charged, so this does take a slight amount of energy. In a normal capacitor, this is negligible, but in a nano-scale capacitor it can be more important. In this case one must be precise about the thermodynamic definition of the chemical potential as well as the state of the device: is it electrically isolated, or is it connected to an electrode? When the body is able to exchange electrons and energy with an electrode (reservoir), it is described by the grand canonical ensemble. The value of the chemical potential can be said to be fixed by the electrode, and the number of electrons on the body may fluctuate. In this case, the chemical potential of a body is the infinitesimal amount of work needed to increase the average number of electrons by an infinitesimal amount (even though the number of electrons at any time is an integer, the average number varies continuously): μ = ∂F/∂⟨N⟩, where F(⟨N⟩, T) is the free energy function of the grand canonical ensemble. If the number of electrons in the body is fixed (but the body is still thermally connected to a heat bath), then it is in the canonical ensemble. We can define a "chemical potential" in this case literally as the work required to add one electron to a body that already has exactly N electrons, μ′(N, T) = F(N + 1, T) − F(N, T), where F(N, T) is the free energy function of the canonical ensemble; alternatively, μ′(N, T) = F(N, T) − F(N − 1, T). These chemical potentials are not equivalent, μ ≠ μ′, except in the thermodynamic limit. The distinction is important in small systems such as those showing Coulomb blockade. The parameter μ (i.e., the chemical potential defined in the case where the number of electrons is allowed to fluctuate) remains exactly related to the voltmeter voltage, even in small systems. To be precise, then, the Fermi level is defined not by a deterministic charging event by one electron charge, but rather by a statistical charging event by an infinitesimal fraction of an electron. Notes References Electronic band structures Fermi–Dirac statistics
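To get a feel for when these discrete charging effects matter, the sketch below compares the single-electron charging energy of a capacitor, e²/2C (a standard textbook expression, used here as an assumption rather than a formula quoted from the text above), with the thermal energy kBT. The capacitance values are illustrative only.

ELEMENTARY_CHARGE = 1.602176634e-19  # coulombs
K_B_JOULES = 1.380649e-23            # J/K

def charging_energy_ev(capacitance_farads: float) -> float:
    """Energy in eV to move one electron onto an initially uncharged capacitor (e^2/2C, textbook expression)."""
    energy_joules = ELEMENTARY_CHARGE ** 2 / (2.0 * capacitance_farads)
    return energy_joules / ELEMENTARY_CHARGE

def thermal_energy_ev(temperature_k: float) -> float:
    """Thermal energy k_B * T expressed in eV."""
    return K_B_JOULES * temperature_k / ELEMENTARY_CHARGE

if __name__ == "__main__":
    for c in (1e-12, 1e-15, 1e-18):  # 1 pF, 1 fF, 1 aF
        print(f"C = {c:.0e} F  ->  e^2/2C = {charging_energy_ev(c):.2e} eV")
    print(f"k_B*T at 300 K = {thermal_energy_ev(300.0):.3f} eV")
    # For a picofarad capacitor the charging energy (~1e-7 eV) is utterly negligible,
    # but for an attofarad-scale junction it approaches 0.1 eV, comparable to k_B*T at
    # room temperature; that is the regime where the distinction between the two
    # chemical potentials (and effects such as Coulomb blockade) becomes observable.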
Fermi level
[ "Physics", "Chemistry", "Materials_science" ]
3,406
[ "Electron", "Electronic band structures", "Condensed matter physics" ]
154,487
https://en.wikipedia.org/wiki/Parkway
A parkway is a landscaped thoroughfare. The term is particularly used for a roadway in a park or connecting to a park from which trucks and other heavy vehicles are excluded. Over the years, many different types of roads have been labeled parkways. The term may be used to describe city streets as narrow as two lanes with a landscaped median, wide landscaped setbacks, or both. The term has also been applied to scenic highways and to limited-access roads more generally. Many parkways originally intended for scenic, recreational driving have evolved into major urban and commuter routes. United States Scenic roads The first parkways in the United States were developed during the late 19th century by landscape architects Frederick Law Olmsted and Calvert Vaux as roads that separated pedestrians, bicyclists, equestrians, and horse carriages, such as Eastern Parkway, which is credited as the world's first parkway, and Ocean Parkway in the New York City borough of Brooklyn. The term "parkway" to define this type of road was coined by Calvert Vaux and Frederick Law Olmsted in their proposal to link city and suburban parks with "pleasure roads".In Buffalo, New York, Olmsted and Vaux used parkways with landscaped medians and setbacks to create the first interconnected park and parkway system in the United States. Bidwell Parkway and Chapin Parkway are 200 foot wide city streets with only one lane for cars in each direction and broad landscaped medians that provide a pleasant, shaded route to the park and serve as mini-parks within the neighborhood. The Rhode Island Metropolitan Park Commission developed several parkways in the Providence area. Other parkways, such as Park Presidio Boulevard in San Francisco, California, were designed to serve larger volumes of traffic. During the early 20th century, the meaning of the word was expanded to include limited-access highways designed for recreational driving of automobiles, with landscaping. These parkways originally provided scenic routes without very slow or commercial vehicles, at grade intersections, or pedestrian traffic. Examples are the Merritt Parkway in Connecticut and the Vanderbilt Motor Parkway in New York. But their success led to more development, expanding a city's boundaries, eventually limiting the parkway's recreational driving use. The Arroyo Seco Parkway between Downtown Los Angeles and Pasadena, California, is an example of lost pastoral aesthetics. It and others have become major commuting routes, while retaining the name "parkway". Early high speed roads In New York City, construction on the Long Island Motor Parkway (Vanderbilt Parkway) began in 1906 and planning for the Bronx River Parkway in 1907. In the 1920s, the New York City Metropolitan Area's parkway system grew under the direction of Robert Moses, the president of the New York State Council of Parks and Long Island State Park Commission, who used parkways to provide access to newly created state parks, especially for city dwellers. As Commissioner of New York City Parks under Mayor LaGuardia, he extended the parkways to the heart of the city, creating and linking its parks to the greater metropolitan systems. Most of the New York metropolitan parkways were designed by Gilmore Clark. The famed "Gateway to New England" Merritt Parkway in Connecticut was designed in the 1930s as a pleasurable alternative for affluent locals to the congested Boston Post Road, running through forest with each bridge designed uniquely to enhance the scenery. 
Another example is the Sprain Brook Parkway from lower-Westchester to connect to the Taconic State Parkway to Chatham, New York. Landscape architect George Kessler designed extensive parkway systems for Kansas City, Missouri; Memphis, Tennessee; Indianapolis; and other cities at the beginning of the 20th century. New Deal roads In the 1930s, as part of the New Deal, the U.S. federal government constructed National Parkways designed for recreational driving and to commemorate historic trails and routes. These divided four-lane parkways have lower speed limits and are maintained by the National Park Service. An example is the Civilian Conservation Corps (CCC) built Blue Ridge Parkway in the Appalachian Mountains of North Carolina and Virginia. Others are Skyline Drive in Virginia; the Natchez Trace Parkway in Mississippi, Alabama, and Tennessee; and the Colonial Parkway in eastern Virginia's Historic Triangle area. The George Washington Memorial Parkway and the Clara Barton Parkway, running along the Potomac River near Washington, D.C., and Alexandria, Virginia, were also constructed during this era. Post-war parkways In Kentucky the term "parkway" designates a freeway in the Kentucky Parkway system, with nine built in the 1960s and 1970s. They were toll roads until the construction bonds were repaid; the last of these roads to charge tolls became freeways in 2006. The Arroyo Seco Parkway from Pasadena to Los Angeles, built in 1940, was the first segment of the vast Southern California freeway system. It became part of State Route 110 and was renamed the Pasadena Freeway. A 2010 restoration of the freeway brought the Arroyo Seco Parkway designation back. In the New York metropolitan area, contemporary parkways are predominantly limited-access highways or freeways restricted to non-commercial traffic, excluding trucks and tractor-trailers. Some have low overpasses that also exclude buses. The Vanderbilt Parkway, an exception in western Suffolk County, is a surviving remnant of the Long Island Motor Parkway that became a surface street, no longer with controlled-access or non-commercial vehicle restrictions. The Palisades Interstate Parkway is a post-war parkway that starts at the George Washington Bridge, heads north through New Jersey, continuing through Rockland and Orange counties in New York. The Palisades Parkway was built to allow for a direct route from New York City to Harriman State Park. In New Jersey, the Garden State Parkway, connecting the northern part of the state with the Jersey Shore, is restricted to buses and non-commercial traffic north of the Route 18 interchange, but trucks are permitted south of this point. It is one of the busiest toll roads in the country. In the Pittsburgh region, two of the major Interstates are referred to informally as parkways. The Parkway East (I-376, formally the Penn-Lincoln Parkway) connects Downtown Pittsburgh to Monroeville, Pennsylvania. The Parkway West (I-376) runs through the Fort Pitt Tunnel and links Downtown to Pittsburgh International Airport, southbound I-79, Imperial, Pennsylvania, and westbound US 22/US 30. The Parkway North (I-279) connects Downtown to Franklin Park, Pennsylvania and northbound I-79. In the suburbs of Philadelphia, U.S. Route 202 follows an at-grade parkway alignment known as the "U.S. Route 202 Parkway" between Montgomeryville and Doylestown. The parkway varies from two to four lanes in width, has shoulders, a walking path called the US 202 Parkway Trail on the side, and a speed limit. 
The parkway opened in 2012 as a bypass of a section of US 202 between the two towns; it had originally been proposed as a four-lane freeway before funding for the road was cut. In Minneapolis, the Grand Rounds Scenic Byway system has of streets designated as parkways. These are not freeways; they have a slow speed limit, pedestrian crossings, and stop signs. In Cincinnati, parkways are major roads which trucks are prohibited from using. Some Cincinnati parkways, such as Columbia Parkway, are high-speed, limited-access roads, while others, such as Central Parkway, are multi-lane urban roads without controlled access. Columbia Parkway carries US-50 traffic from downtown towards east-side suburbs of Mariemont, Anderson, and Milford, and is a limited access road from downtown to the Village of Mariemont. In Boston, parkways are generally four to six lanes wide but are not usually controlled-access. They are highly trafficked in most cases, transporting people between neighborhoods quicker than a typical city street. Many of them serve as principal arterials and some (like Storrow Drive, Memorial Drive, the Alewife Brook Parkway and the VFW Parkway) have evolved into regional commuter routes. Canada "Parkway" is used in the names of many Canadian roads, including major routes through national parks, scenic drives, major urban thoroughfares, and even regular freeways that carry commercial traffic. Parkways in the National Capital Region are administered by the National Capital Region (Canada). However, some of them are named "drive" or "driveway". The term in Canada is also applied to multi-use paths and greenways used by walkers and cyclists. Airport Parkway (Ottawa) Aviation Parkway (Ottawa) Broad Street in Saint John, New Brunswick Colonel By Drive in Ottawa, Ontario Conestoga Parkway in Kitchener, Ontario Don Valley Parkway in Toronto, Ontario Emil Kolb Parkway in Bolton, Ontario Erin Mills Parkway in Mississauga, Ontario Forest Hills Parkway in Halifax, Nova Scotia Hanlon Expressway in Guelph, Ontario Icefields Parkway in Alberta Island Park Drive in Ottawa, Ontario Lauzon Parkway in Windsor, Ontario Lincoln M. Alexander Parkway in Hamilton, Ontario Niagara Parkway in Southern Ontario Ojibway Parkway in Windsor, Ontario Queen Elizabeth Driveway in Ottawa, Ontario Red Hill Valley Parkway in Hamilton, Ontario The Parkway in St. John's, Newfoundland and Labrador Thousand Islands Parkway in Eastern Ontario United Kingdom In the United Kingdom, the term "parkway" more commonly refers to park and ride railway stations, where this is often indicated as part of the name, as with Bristol Parkway, the first such station, opened in 1972. Luton Airport Parkway is somewhat analogous - an interconnect railway station with an airport via a public transport shuttle (initially buses, now the Luton DART light railway). Parkways fitting the definition applied in this article also exist, as listed in this section. Peterborough The city of Peterborough has roads branded as "parkways" which provide routes for much through traffic and local traffic. The majority are dual carriageways, with many of their junctions numbered. Five main parkways form an orbital outer ring road. Three parkways serve settlements. Plymouth In the City of Plymouth, the A38 is called "The Parkway" and bisects a rural belt of the local authority area, which coincides with the geographical centre; it has two junctions to enter the downtown part of the city. 
Australia Australian Capital Territory The Australian Capital Territory uses the term "parkway" to refer to roadways of a standard approximately equivalent to what would be designated as an "expressway", "freeway", or "motorway" in other areas. Parkways generally have multiple lanes in each direction of travel, no intersections (crossroads are accessed by interchanges), high speed limits, and are of dual carriageway design (or have high crash barriers on the median). Victoria Victoria uses the term "parkway" to sometimes refer to smaller local access roads that travel through parkland. Unlike other uses of the term, these parkways are not high-speed routes but may still have some degree of limited access. Other countries Singapore uses the term "parkway" as an alternative to "expressway". As such, parkways are also dual carriageways with high speed limits and interchanges. The East Coast Parkway is currently the only expressway in Singapore that uses this terminology. In Russia, long, broad (multi-lane) and beautified thoroughfares are referred to as prospekts. See also Central reservation Green belt Linear park Park Road verge References External links "Why do we drive on the parkway and park on the driveway?" The Straight Dope Landscape Types of roads Environmental design Limited-access roads Urban studies and planning terminology
Parkway
[ "Engineering" ]
2,341
[ "Environmental design", "Design" ]
154,502
https://en.wikipedia.org/wiki/Type%202%20diabetes
Type 2 diabetes (T2D), formerly known as adult-onset diabetes, is a form of diabetes mellitus that is characterized by high blood sugar, insulin resistance, and relative lack of insulin. Common symptoms include increased thirst, frequent urination, fatigue and unexplained weight loss. Other symptoms include increased hunger, having a sensation of pins and needles, and sores (wounds) that heal slowly. Symptoms often develop slowly. Long-term complications from high blood sugar include heart disease, stroke, diabetic retinopathy, which can result in blindness, kidney failure, and poor blood flow in the lower-limbs, which may lead to amputations. The sudden onset of hyperosmolar hyperglycemic state may occur; however, ketoacidosis is uncommon. Type 2 diabetes primarily occurs as a result of obesity and lack of exercise. Some people are genetically more at risk than others. Type 2 diabetes makes up about 90% of cases of diabetes, with the other 10% due primarily to type 1 diabetes and gestational diabetes. In type 1 diabetes, there is a lower total level of insulin to control blood glucose, due to an autoimmune-induced loss of insulin-producing beta cells in the pancreas. Diagnosis of diabetes is by blood tests such as fasting plasma glucose, oral glucose tolerance test, or glycated hemoglobin (A1c). Type 2 diabetes is largely preventable by staying at a normal weight, exercising regularly, and eating a healthy diet (high in fruits and vegetables and low in sugar and saturated fat). Treatment involves exercise and dietary changes. If blood sugar levels are not adequately lowered, the medication metformin is typically recommended. Many people may eventually also require insulin injections. In those on insulin, routinely checking blood sugar levels (such as through a continuous glucose monitor) is advised; however, this may not be needed in those who are not on insulin therapy. Bariatric surgery often improves diabetes in those who are obese. Rates of type 2 diabetes have increased markedly since 1960 in parallel with obesity. As of 2015, there were approximately 392 million people diagnosed with the disease compared to around 30 million in 1985. Typically, it begins in middle or older age, although rates of type 2 diabetes are increasing in young people. Type 2 diabetes is associated with a ten-year-shorter life expectancy. Diabetes was one of the first diseases ever described, dating back to an Egyptian manuscript from  BCE. Type 1 and type 2 diabetes were identified as separate conditions in 400–500 CE with type 1 associated with youth and type 2 with being overweight. The importance of insulin in the disease was determined in the 1920s. Signs and symptoms The classic symptoms of diabetes are frequent urination (polyuria), increased thirst (polydipsia), increased hunger (polyphagia), and weight loss. Other symptoms that are commonly present at diagnosis include a history of blurred vision, itchiness, peripheral neuropathy, recurrent vaginal infections, and fatigue. Other symptoms may include loss of taste. Many people, however, have no symptoms during the first few years and are diagnosed on routine testing. A small number of people with type 2 diabetes can develop a hyperosmolar hyperglycemic state (a condition of very high blood sugar associated with a decreased level of consciousness and low blood pressure). Complications Type 2 diabetes is typically a chronic disease associated with a ten-year-shorter life expectancy. 
This is partly due to a number of complications with which it is associated, including: two to four times the risk of cardiovascular disease, including ischemic heart disease and stroke; a 20-fold increase in lower limb amputations, and increased rates of hospitalizations. In the developed world, and increasingly elsewhere, type 2 diabetes is the largest cause of nontraumatic blindness and kidney failure. It has also been associated with an increased risk of cognitive dysfunction and dementia through disease processes such as Alzheimer's disease and vascular dementia. Other complications include hyperpigmentation of skin (acanthosis nigricans), sexual dysfunction, diabetic ketoacidosis, and frequent infections. There is also an association between type 2 diabetes and mild hearing loss. Causes The development of type 2 diabetes is caused by a combination of lifestyle and genetic factors. While some of these factors are under personal control, such as diet and obesity, other factors are not, such as increasing age, female sex, and genetics. Generous consumption of alcohol is also a risk factor. Obesity is more common in women than men in many parts of Africa. The nutritional status of a mother during fetal development may also play a role. Lifestyle Lifestyle factors are important to the development of type 2 diabetes, including obesity and being overweight (defined by a body mass index of greater than 25), lack of physical activity, poor diet, psychological stress, and urbanization. Excess body fat is associated with 30% of cases in those of Chinese and Japanese descent, 60–80% of cases in those of European and African descent, and 100% of cases in Pima Indians and Pacific Islanders. Among those who are not obese, a high waist–hip ratio is often present. Smoking appears to increase the risk of type 2 diabetes. Lack of sleep has also been linked to type 2 diabetes. Laboratory studies have linked short-term sleep deprivations to changes in glucose metabolism, nervous system activity, or hormonal factors that may lead to diabetes. Dietary factors also influence the risk of developing type 2 diabetes. Consumption of sugar-sweetened drinks in excess is associated with an increased risk. The type of fats in the diet are important, with saturated fat and trans fatty acids increasing the risk, and polyunsaturated and monounsaturated fat decreasing the risk. Eating a lot of white rice appears to play a role in increasing risk. A lack of exercise is believed to cause 7% of cases. Sedentary lifestyle is another risk factor. Persistent organic pollutants may also play a role. Genetics Most cases of diabetes involve many genes, with each being a small contributor to an increased probability of becoming a type 2 diabetic. The proportion of diabetes that is inherited is estimated at 72%. More than 36 genes and 80 single nucleotide polymorphisms (SNPs) had been found that contribute to the risk of type 2 diabetes. All of these genes together still only account for 10% of the total heritable component of the disease. The TCF7L2 allele, for example, increases the risk of developing diabetes by 1.5 times and is the greatest risk of the common genetic variants. Most of the genes linked to diabetes are involved in pancreatic beta cell functions. There are a number of rare cases of diabetes that arise due to an abnormality in a single gene (known as monogenic forms of diabetes or "other specific types of diabetes"). 
These include maturity onset diabetes of the young (MODY), Donohue syndrome, and Rabson–Mendenhall syndrome, among others. Maturity onset diabetes of the young constitute 1–5% of all cases of diabetes in young people. Epigenetic regulation may have a role in type 2 diabetes. Medical conditions There are a number of medications and other health problems that can predispose to diabetes. Some of the medications include: glucocorticoids, thiazides, beta blockers, atypical antipsychotics, and statins. Those who have previously had gestational diabetes are at a higher risk of developing type 2 diabetes. Other health problems that are associated include: acromegaly, Cushing's syndrome, hyperthyroidism, pheochromocytoma, and certain cancers such as glucagonomas. Individuals with cancer may be at a higher risk of mortality if they also have diabetes. Testosterone deficiency is also associated with type 2 diabetes. Eating disorders may also interact with type 2 diabetes, with bulimia nervosa increasing the risk and anorexia nervosa decreasing it. Pathophysiology Type 2 diabetes is due to insufficient insulin production from beta cells in the setting of insulin resistance. Insulin resistance, which is the inability of cells to respond adequately to normal levels of insulin, occurs primarily within the muscles, liver, and fat tissue. In the liver, insulin normally suppresses glucose release. However, in the setting of insulin resistance, the liver inappropriately releases glucose into the blood. The proportion of insulin resistance versus beta cell dysfunction differs among individuals, with some having primarily insulin resistance and only a minor defect in insulin secretion and others with slight insulin resistance and primarily a lack of insulin secretion. Other potentially important mechanisms associated with type 2 diabetes and insulin resistance include: increased breakdown of lipids within fat cells, resistance to and lack of incretin, high glucagon levels in the blood, increased retention of salt and water by the kidneys, and inappropriate regulation of metabolism by the central nervous system. However, not all people with insulin resistance develop diabetes since an impairment of insulin secretion by pancreatic beta cells is also required. In the early stages of insulin resistance, the mass of beta cells expands, increasing the output of insulin to compensate for the insulin insensitivity, so that the disposition index remains constant. But when type 2 diabetes has become manifest, the person will have lost about half of their beta cells. The causes of the aging-related insulin resistance seen in obesity and in type 2 diabetes are uncertain. Effects of intracellular lipid metabolism and ATP production in liver and muscle cells may contribute to insulin resistance. Diagnosis The World Health Organization definition of diabetes (both type 1 and type 2) is for a single raised glucose reading with symptoms, otherwise raised values on two occasions, of either: fasting plasma glucose ≥ 7.0 mmol/L (126 mg/dL) or glucose tolerance test with two hours after the oral dose a plasma glucose ≥ 11.1 mmol/L (200 mg/dL) A random blood sugar of greater than 11.1 mmol/L (200 mg/dL) in association with typical symptoms or a glycated hemoglobin (HbA1c) of ≥ 48 mmol/mol (≥ 6.5 DCCT %) is another method of diagnosing diabetes. 
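For readers who want the numeric cut-offs in one place, the Python sketch below simply encodes the thresholds quoted above. The function name and structure are illustrative only; this is not a diagnostic tool, and, as noted in the following paragraph, positive tests are normally repeated and interpreted clinically.

def meets_diabetes_threshold(fasting_mmol_l=None, ogtt_2h_mmol_l=None,
                             hba1c_mmol_mol=None, random_mmol_l=None,
                             has_symptoms=False):
    """Check test results against the cut-offs quoted above (teaching sketch only, not medical advice)."""
    if fasting_mmol_l is not None and fasting_mmol_l >= 7.0:      # >= 126 mg/dL fasting plasma glucose
        return True
    if ogtt_2h_mmol_l is not None and ogtt_2h_mmol_l >= 11.1:     # >= 200 mg/dL two hours after the oral dose
        return True
    if hba1c_mmol_mol is not None and hba1c_mmol_mol >= 48:       # >= 6.5 DCCT %
        return True
    if has_symptoms and random_mmol_l is not None and random_mmol_l >= 11.1:
        return True
    return False

if __name__ == "__main__":
    print(meets_diabetes_threshold(fasting_mmol_l=6.2))                     # False
    print(meets_diabetes_threshold(ogtt_2h_mmol_l=12.3))                    # True
    print(meets_diabetes_threshold(random_mmol_l=13.0, has_symptoms=True))  # True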
In 2009, an International Expert Committee that included representatives of the American Diabetes Association (ADA), the International Diabetes Federation (IDF), and the European Association for the Study of Diabetes (EASD) recommended that a HbA1c threshold of ≥ 48 mmol/mol (≥ 6.5 DCCT %) should be used to diagnose diabetes. This recommendation was adopted by the American Diabetes Association in 2010. Positive tests should be repeated unless the person presents with typical symptoms and blood sugar >11.1 mmol/L (>200 mg/dL). Threshold for diagnosis of diabetes is based on the relationship between results of glucose tolerance tests, fasting glucose or HbA1c and complications such as retinal problems. A fasting or random blood sugar is preferred over the glucose tolerance test, as they are more convenient for people. HbA1c has the advantages that fasting is not required and results are more stable but has the disadvantage that the test is more costly than measurement of blood glucose. It is estimated that 20% of people with diabetes in the United States do not realize that they have the disease. Type 2 diabetes is characterized by high blood glucose in the context of insulin resistance and relative insulin deficiency. This is in contrast to type 1 diabetes in which there is an absolute insulin deficiency due to destruction of islet cells in the pancreas and gestational diabetes that is a new onset of high blood sugars associated with pregnancy. Type 1 and type 2 diabetes can typically be distinguished based on the presenting circumstances. If the diagnosis is in doubt antibody testing may be useful to confirm type 1 diabetes and C-peptide levels may be useful to confirm type 2 diabetes, with C-peptide levels normal or high in type 2 diabetes, but low in type 1 diabetes. Screening Universal screening for diabetes in people without risk factors or symptoms is not recommended. The United States Preventive Services Task Force (USPSTF) recommended in 2021 screening for type 2 diabetes in adults aged 35 to 70 years old who are overweight (i.e. BMI over 25) or have obesity. For people of Asian descent, screening is recommended if they have a BMI over 23. Screening at an earlier age may be considered in people with a family history of diabetes; some ethnic groups, including Hispanics, African Americans, and Native Americans; a history of gestational diabetes; polycystic ovary syndrome. Screening can be repeated every 3 years. The American Diabetes Association (ADA) recommended in 2024 screening in all adults from the age of 35 years. ADA also recommends screening in adults of all ages with a BMI over 25 (or over 23 in Asian Americans) with another risk factor: first-degree relative with diabetes, ethnicity at high risk for diabetes, blood pressure ≥130/80 mmHg or on therapy for hypertension, history of cardiovascular disease, physical inactivity, polycystic ovary syndrome or severe obesity. ADA recommends repeat screening every 3 years at minimum. ADA recommends yearly tests in people with prediabetes. People with previous gestational diabetes or pancreatitis are also recommended screening. There is no evidence that screening changes the risk of death and any benefit of screening on adverse effects, incidence of type 2 diabetes, HbA1c or socioeconomic effects are not clear. In the UK, NICE guidelines suggest taking action to prevent diabetes for people with a body mass index (BMI) of 30 or more. 
For people of Black African, African-Caribbean, South Asian and Chinese descent, prevention is recommended to start at a BMI of 27.5. A study based on a large sample of people in England suggests even lower BMIs for certain ethnic groups as the starting point for prevention, for example 24 in South Asian and 21 in Bangladeshi populations. Prevention Onset of type 2 diabetes can be delayed or prevented through proper nutrition and regular exercise. Intensive lifestyle measures may reduce the risk by over half. The benefit of exercise occurs regardless of the person's initial weight or subsequent weight loss. High levels of physical activity reduce the risk of diabetes by about 28%. Evidence for the benefit of dietary changes alone, however, is limited, with some evidence for a diet high in green leafy vegetables and some for limiting the intake of sugary drinks. There is an association between higher intake of sugar-sweetened fruit juice and diabetes, but no evidence of an association with 100% fruit juice. A 2019 review found evidence of benefit from dietary fiber. A 2017 review found that, long term, lifestyle changes decreased the risk by 28%, while medication does not reduce risk after withdrawal. While low vitamin D levels are associated with an increased risk of diabetes, correcting the levels by supplementing vitamin D3 does not reduce that risk. In those with prediabetes, diet in combination with physical activity delays or reduces the risk of type 2 diabetes, according to a 2017 Cochrane review. In those with prediabetes, metformin may delay or reduce the risk of developing type 2 diabetes compared to diet and exercise or a placebo intervention, but not compared to intensive diet and exercise; there was not enough data on outcomes such as mortality, diabetic complications, and health-related quality of life, according to a 2019 Cochrane review. In those with prediabetes, alpha-glucosidase inhibitors such as acarbose may delay or reduce the risk of type 2 diabetes when compared to placebo; however, there was no conclusive evidence that acarbose improved cardiovascular mortality or cardiovascular events, according to a 2018 Cochrane review. In those with prediabetes, pioglitazone may delay or reduce the risk of developing type 2 diabetes compared to placebo or no intervention, but no difference was seen compared to metformin, and data were missing on mortality, complications, and quality of life, according to a 2020 Cochrane review. In those with prediabetes, there was insufficient data to draw any conclusions on whether SGLT2 inhibitors may delay or reduce the risk of developing type 2 diabetes, according to a 2016 Cochrane review. Management Management of type 2 diabetes focuses on lifestyle interventions, lowering other cardiovascular risk factors, and maintaining blood glucose levels in the normal range. Self-monitoring of blood glucose for people with newly diagnosed type 2 diabetes may be used in combination with education, although the benefit of self-monitoring in those not using multi-dose insulin is questionable. In those who do not want to measure blood levels, measuring urine levels may be done. Managing other cardiovascular risk factors, such as hypertension, high cholesterol, and microalbuminuria, improves a person's life expectancy. Decreasing the systolic blood pressure to less than 140 mmHg is associated with a lower risk of death and better outcomes. 
Intensive blood pressure management (less than 130/80 mmHg) as opposed to standard blood pressure management (less than 140–160 mmHg systolic to 85–100 mmHg diastolic) results in a slight decrease in stroke risk but no effect on overall risk of death. Intensive blood sugar lowering (HbA1c < 6%) as opposed to standard blood sugar lowering (HbA1c of 7–7.9%) does not appear to change mortality. The goal of treatment is typically an HbA1c of 7 to 8% or a fasting glucose of less than 7.2 mmol/L (130 mg/dL); however these goals may be changed after professional clinical consultation, taking into account particular risks of hypoglycemia and life expectancy. Hypoglycemia is associated with adverse outcomes in older people with type 2 diabetes. Despite guidelines recommending that intensive blood sugar control be based on balancing immediate harms with long-term benefits, many people – for example people with a life expectancy of less than nine years who will not benefit, are over-treated. It is recommended that all people with type 2 diabetes get regular eye examinations. There is moderate evidence suggesting that treating gum disease by scaling and root planing results in an improvement in blood sugar levels for people with diabetes. Lifestyle Exercise A proper diet and regular exercise are foundations of diabetic care, with one review indicating that a greater amount of exercise improved outcomes. Regular exercise may improve blood sugar control, decrease body fat content, and decrease blood lipid levels. Diet Calorie restriction to promote weight loss is generally recommended. Around 80 percent of obese people with type 2 diabetes achieve complete remission with no need for medication if they sustain a weight loss of at least , but most patients are not able to achieve or sustain significant weight loss. Even modest weight loss can produce significant improvements in glycemic control and reduce the need for medication. Several diets may be effective such as the DASH diet, Mediterranean diet, low-fat diet, or monitored carbohydrate diets such as a low carbohydrate diet. Other recommendations include emphasizing intake of fruits, vegetables, reduced saturated fat and low-fat dairy products, and with a macronutrient intake tailored to the individual, to distribute calories and carbohydrates throughout the day. A 2021 review showed that consumption of tree nuts (walnuts, almonds, and hazelnuts) reduced fasting blood glucose in diabetic people. , there is insufficient data to recommend nonnutritive sweeteners, which may help reduce caloric intake. An elevated intake of microbiota-accessible carbohydrates can help reducing the effects of T2D. Viscous fiber supplements may be useful in those with diabetes. Culturally appropriate education may help people with type 2 diabetes control their blood sugar levels for up to 24 months. There is not enough evidence to determine if lifestyle interventions affect mortality in those who already have type 2 diabetes. Stress management Although psychological stress is recognized as a risk factor for type 2 diabetes, the effect of stress management interventions on disease progression are not established. A Cochrane review is under way to assess the effects of mindfulness‐based interventions for adults with type 2 diabetes. Medications Blood sugar control There are several classes of diabetes medications available. 
Metformin is generally recommended as a first-line treatment as there is some evidence that it decreases mortality; however, this conclusion is questioned. Metformin should not be used in those with severe kidney or liver problems. The American Diabetes Association and European Association for the Study of Diabetes recommend using a GLP-1 receptor agonist or SGLT2 inhibitor as the first-line treatment in patients who have or are at high risk for atherosclerotic cardiovascular disease, heart failure, or chronic kidney disease. The higher cost of these drugs compared to metformin has limited their use. Other classes of medications include: sulfonylureas, thiazolidinediones, dipeptidyl peptidase-4 inhibitors, SGLT2 inhibitors, and GLP-1 receptor agonists. A 2018 review found that SGLT2 inhibitors and GLP-1 agonists, but not DPP-4 inhibitors, were associated with lower mortality than placebo or no treatment. Rosiglitazone, a thiazolidinedione, has not been found to improve long-term outcomes even though it improves blood sugar levels. Additionally, it is associated with increased rates of heart disease and death. Injections of insulin may either be added to oral medication or used alone. Most people do not initially need insulin. When it is used, a long-acting formulation is typically added at night, with oral medications being continued. Doses are then increased until blood sugar levels are well controlled. When nightly insulin is insufficient, twice-daily insulin may achieve better control. The long-acting insulins glargine and detemir are equally safe and effective, and do not appear much better than NPH insulin, but as they are significantly more expensive, they are not cost effective as of 2010. In those who are pregnant, insulin is generally the treatment of choice. Blood pressure lowering Many international guidelines recommend blood pressure treatment targets that are lower than 140/90 mmHg for people with diabetes. However, there is only limited evidence regarding what the lower targets should be. A 2016 systematic review found potential harm to treating to targets lower than 140 mmHg, and a subsequent review in 2019 found no evidence of additional benefit from blood pressure lowering to between 130 and 140 mmHg, although there was an increased risk of adverse events. In people with diabetes and hypertension and either albuminuria or chronic kidney disease, an inhibitor of the renin-angiotensin system (such as an ACE inhibitor or angiotensin receptor blocker) is recommended to reduce the risk of progression of kidney disease and to prevent cardiovascular events. There is some evidence that angiotensin converting enzyme inhibitors (ACEIs) are superior to other inhibitors of the renin-angiotensin system, such as angiotensin receptor blockers (ARBs) or aliskiren, in preventing cardiovascular disease, although a 2016 review found similar effects of ACEIs and ARBs on major cardiovascular and renal outcomes. There is no evidence that combining ACEIs and ARBs provides additional benefits. Other The use of statins in diabetes to prevent cardiovascular disease should be considered after evaluating the person's total risk for cardiovascular disease. The use of aspirin (acetylsalicylic acid) to prevent cardiovascular disease in diabetes is controversial. Aspirin is recommended in people with previous cardiovascular disease; however, routine use of aspirin has not been found to improve outcomes in uncomplicated diabetes. 
Aspirin as primary prevention may have greater risk than benefit, but could be considered in people aged 50 to 70 with another significant cardiovascular risk factor and a low risk of bleeding, after discussion of the possible risks and benefits as part of shared decision-making. Vitamin D supplementation to people with type 2 diabetes may improve markers of insulin resistance and HbA1c. Sharing their electronic health records with people who have type 2 diabetes helps them to reduce their blood sugar levels. It is a way of helping people understand their own health condition and involving them actively in its management. Surgery Weight loss surgery in those who are obese is an effective measure to treat diabetes. Many are able to maintain normal blood sugar levels with little or no medication following surgery, and long-term mortality is decreased. There is, however, a short-term mortality risk of less than 1% from the surgery. The body mass index cutoffs for when surgery is appropriate are not yet clear. It is recommended that this option be considered in those who are unable to get both their weight and blood sugar under control. Epidemiology The International Diabetes Federation estimates nearly 537 million people lived with diabetes worldwide in 2021, 90–95% of whom have type 2 diabetes. Diabetes is common both in the developed and the developing world. Some ethnic groups such as South Asians, Pacific Islanders, Latinos, and Native Americans are at particularly high risk of developing type 2 diabetes. Type 2 diabetes in normal weight individuals represents 60 to 80 percent of all cases in some Asian countries. The mechanism causing diabetes in non-obese individuals is poorly understood. The number of people with diabetes in 1985 was estimated at 30 million, increasing to 135 million in 1995 and 217 million in 2005. This increase is believed to be primarily due to the global population aging, a decrease in exercise, and increasing rates of obesity. Traditionally considered a disease of adults, type 2 diabetes is increasingly diagnosed in children in parallel with rising obesity rates. The five countries with the greatest number of people with diabetes as of 2000 were India (31.7 million), China (20.8 million), the United States (17.7 million), Indonesia (8.4 million), and Japan (6.8 million). It is recognized as a global epidemic by the World Health Organization. History Diabetes is one of the first diseases described, with an Egyptian manuscript from BCE mentioning "too great emptying of the urine." The first described cases are believed to be of type 1 diabetes. Indian physicians around the same time identified the disease and classified it as madhumeha or honey urine, noting that the urine would attract ants. The term "diabetes" or "to pass through" was first used in 230 BCE by the Greek Apollonius Memphites. The disease was rare during the time of the Roman Empire, with Galen commenting that he had only seen two cases during his career. Type 1 and type 2 diabetes were identified as separate conditions for the first time by the Indian physicians Sushruta and Charaka in 400–500 CE, with type 1 associated with youth and type 2 with being overweight. Effective treatment was not developed until the early part of the 20th century, when the Canadians Frederick Banting and Charles Best discovered insulin in 1921 and 1922. This was followed by the development of the longer-acting NPH insulin in the 1940s. In 1916, Elliot Joslin proposed that in people with diabetes, periods of fasting are helpful. 
Subsequent research has supported this, and weight loss is a first-line treatment in type 2 diabetes. Research In 2020, the Diabetes Severity Score (DISSCO) was developed, a tool that may identify better than HbA1c whether a person's condition is declining. It uses a computer algorithm to analyse data from anonymised electronic patient records and produces a score based on 34 indicators. Stem cells In April 2024, scientists reported the first case of reversion of type 2 diabetes by use of stem cells in a 59-year-old man treated in 2021 who has since remained insulin-free. Replication in more patients and evidence over longer periods would be needed before considering this treatment a possible cure. References External links IDF Diabetes Atlas 2021 National Institute of Diabetes and Digestive and Kidney Diseases Centers for Disease Control and Prevention ADA's Standards of Medical Care in Diabetes 2024 Aging-associated diseases Types of diabetes Medical conditions related to obesity
Type 2 diabetes
[ "Biology" ]
5,780
[ "Senescence", "Aging-associated diseases" ]
154,505
https://en.wikipedia.org/wiki/Digital%20signal%20processor
A digital signal processor (DSP) is a specialized microprocessor chip, with its architecture optimized for the operational needs of digital signal processing. DSPs are fabricated on metal–oxide–semiconductor (MOS) integrated circuit chips. They are widely used in audio signal processing, telecommunications, digital image processing, radar, sonar and speech recognition systems, and in common consumer electronic devices such as mobile phones, disk drives and high-definition television (HDTV) products. The goal of a DSP is usually to measure, filter or compress continuous real-world analog signals. Most general-purpose microprocessors can also execute digital signal processing algorithms successfully, but may not be able to keep up with such processing continuously in real-time. Also, dedicated DSPs usually have better power efficiency, thus they are more suitable in portable devices such as mobile phones because of power consumption constraints. DSPs often use special memory architectures that are able to fetch multiple data or instructions at the same time. Overview Digital signal processing (DSP) algorithms typically require a large number of mathematical operations to be performed quickly and repeatedly on a series of data samples. Signals (perhaps from audio or video sensors) are constantly converted from analog to digital, manipulated digitally, and then converted back to analog form. Many DSP applications have constraints on latency; that is, for the system to work, the DSP operation must be completed within some fixed time, and deferred (or batch) processing is not viable. Most general-purpose microprocessors and operating systems can execute DSP algorithms successfully, but are not suitable for use in portable devices such as mobile phones and PDAs because of power efficiency constraints. A specialized DSP, however, will tend to provide a lower-cost solution, with better performance, lower latency, and no requirements for specialised cooling or large batteries. Such performance improvements have led to the introduction of digital signal processing in commercial communications satellites where hundreds or even thousands of analog filters, switches, frequency converters and so on are required to receive and process the uplinked signals and ready them for downlinking, and can be replaced with specialised DSPs with significant benefits to the satellites' weight, power consumption, complexity/cost of construction, reliability and flexibility of operation. For example, the SES-12 and SES-14 satellites from operator SES launched in 2018, were both built by Airbus Defence and Space with 25% of capacity using DSP. The architecture of a DSP is optimized specifically for digital signal processing. Most also support some of the features of an applications processor or microcontroller, since signal processing is rarely the only task of a system. Some useful features for optimizing DSP algorithms are outlined below. Architecture Software architecture By the standards of general-purpose processors, DSP instruction sets are often highly irregular; while traditional instruction sets are made up of more general instructions that allow them to perform a wider variety of operations, instruction sets optimized for digital signal processing contain instructions for common mathematical operations that occur frequently in DSP calculations. 
Both traditional and DSP-optimized instruction sets are able to compute any arbitrary operation, but an operation that might require multiple ARM or x86 instructions to compute might require only one instruction in a DSP-optimized instruction set. One implication for software architecture is that hand-optimized assembly-code routines (assembly programs) are commonly packaged into libraries for re-use, instead of relying on advanced compiler technologies to handle essential algorithms. Even with modern compiler optimizations, hand-optimized assembly code is more efficient, and many common algorithms involved in DSP calculations are hand-written in order to take full advantage of the architectural optimizations. 
Instruction sets
- Multiply–accumulate (MAC, including fused multiply–add, FMA) operations, used extensively in all kinds of matrix operations: convolution for filtering, dot products, and polynomial evaluation. Fundamental DSP algorithms such as FIR filters and the Fast Fourier transform (FFT) depend heavily on multiply–accumulate performance.
- Related instructions: SIMD, VLIW
- Specialized instructions for modulo addressing in ring buffers and bit-reversed addressing mode for FFT cross-referencing
- DSPs sometimes use time-stationary encoding to simplify hardware and increase coding efficiency.
- Multiple arithmetic units may require memory architectures to support several accesses per instruction cycle – typically supporting reading 2 data values from 2 separate data buses and the next instruction (from the instruction cache, or a 3rd program memory) simultaneously.
- Special loop controls, such as architectural support for executing a few instruction words in a very tight loop without overhead for instruction fetches or exit testing—such as zero-overhead looping and hardware loop buffers.
Data instructions
- Saturation arithmetic, in which operations that produce overflows accumulate at the maximum (or minimum) values that the register can hold rather than wrapping around (maximum+1 doesn't overflow to minimum as in many general-purpose CPUs; instead it stays at maximum). Sometimes various sticky-bits operation modes are available.
- Fixed-point arithmetic, often used to speed up arithmetic processing.
- Single-cycle operations to increase the benefits of pipelining.
Program flow
- Floating-point unit integrated directly into the datapath
- Pipelined architecture
- Highly parallel multiplier–accumulators (MAC units)
- Hardware-controlled looping, to reduce or eliminate the overhead required for looping operations
Hardware architecture
Memory architecture
DSPs are usually optimized for streaming data and use special memory architectures that are able to fetch multiple data or instructions at the same time, such as the Harvard architecture or Modified von Neumann architecture, which use separate program and data memories (sometimes even concurrent access on multiple data buses). DSPs can sometimes rely on supporting code to know about cache hierarchies and the associated delays; this is a tradeoff that allows for better performance. In addition, extensive use of DMA is employed. 
Addressing and virtual memory
DSPs frequently use multi-tasking operating systems, but have no support for virtual memory or memory protection. Operating systems that use virtual memory require more time for context switching among processes, which increases latency. 
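To make the multiply–accumulate and saturation behaviour described in the instruction-set and data-instruction notes above concrete, here is a small hedged sketch in Python of a FIR-style MAC loop with 16-bit saturating accumulation; it models the arithmetic only and is not the instruction set of any particular DSP.

```python
# Minimal model of a multiply-accumulate (MAC) loop with saturation,
# as used in FIR filtering. This is an illustrative sketch, not the
# behaviour of any specific DSP instruction set.

INT16_MAX, INT16_MIN = 32767, -32768

def saturate16(value: int) -> int:
    """Clamp to the 16-bit range instead of wrapping around (saturation arithmetic)."""
    return max(INT16_MIN, min(INT16_MAX, value))

def fir_mac(samples: list[int], coeffs: list[int]) -> int:
    """Dot product of the newest samples with the filter coefficients,
    accumulating with saturation after each step."""
    acc = 0
    for x, c in zip(samples, coeffs):
        acc = saturate16(acc + x * c)   # one MAC per tap
    return acc

# Example: a 4-tap moving-sum filter, then a case that saturates.
print(fir_mac([100, 200, 300, 400], [1, 1, 1, 1]))   # 1000
print(fir_mac([30000, 30000], [1, 1]))               # clamps at 32767
```

Real DSPs usually accumulate in a wider register and saturate on store; doing it per step here simply keeps the example short.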
- Hardware modulo addressing, allowing circular buffers to be implemented without having to test for wrapping
- Bit-reversed addressing, a special addressing mode useful for calculating FFTs
- Exclusion of a memory management unit
- Address generation unit
History
Development
In 1976, Richard Wiggins proposed the Speak & Spell concept to Paul Breedlove, Larry Brantingham, and Gene Frantz at Texas Instruments' Dallas research facility. Two years later in 1978, they produced the first Speak & Spell, with the technological centerpiece being the TMS5100, the industry's first digital signal processor. It also set other milestones, being the first chip to use linear predictive coding to perform speech synthesis. The chip was made possible with a 7 μm PMOS fabrication process. In 1978, American Microsystems (AMI) released the S2811. The AMI S2811 "signal processing peripheral", like many later DSPs, has a hardware multiplier that enables it to do a multiply–accumulate operation in a single instruction. The S2811 was the first integrated circuit chip specifically designed as a DSP, and fabricated using vertical metal oxide semiconductor (VMOS, V-groove MOS), a technology that had previously not been mass-produced. It was designed as a microprocessor peripheral, for the Motorola 6800, and it had to be initialized by the host. The S2811 was not successful in the market. In 1979, Intel released the 2920 as an "analog signal processor". It had an on-chip ADC/DAC with an internal signal processor, but it didn't have a hardware multiplier and was not successful in the market. In 1980, the first stand-alone, complete DSPs – Nippon Electric Corporation's NEC μPD7720 based on the modified Harvard architecture and AT&T's DSP1 – were presented at the International Solid-State Circuits Conference '80. Both processors were inspired by the research in public switched telephone network (PSTN) telecommunications. The μPD7720, introduced for voiceband applications, was one of the most commercially successful early DSPs. The Altamira DX-1 was another early DSP, utilizing quad integer pipelines with delayed branches and branch prediction. Another DSP produced by Texas Instruments (TI), the TMS32010 presented in 1983, proved to be an even bigger success. It was based on the Harvard architecture, and so had separate instruction and data memory. It already had a special instruction set, with instructions like load-and-accumulate or multiply-and-accumulate. It could work on 16-bit numbers and needed 390 ns for a multiply–add operation. TI is now the market leader in general-purpose DSPs. About five years later, the second generation of DSPs began to spread. They had 3 memories for storing two operands simultaneously and included hardware to accelerate tight loops; they also had an addressing unit capable of loop-addressing. Some of them operated on 24-bit variables and a typical model only required about 21 ns for a MAC. Members of this generation were, for example, the AT&T DSP16A and the Motorola 56000. The main improvement in the third generation was the appearance of application-specific units and instructions in the data path, or sometimes as coprocessors. These units allowed direct hardware acceleration of very specific but complex mathematical problems, like the Fourier transform or matrix operations. Some chips, like the Motorola MC68356, even included more than one processor core to work in parallel. Other DSPs from 1995 were the TI TMS320C541 and the TMS320C80. 
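Returning briefly to the hardware modulo addressing and bit-reversed addressing listed before the history above: both can be emulated in software. The sketch below is a hedged Python illustration of the two ideas — a ring-buffer index that wraps via a modulo operation, and the bit-reversal permutation used to reorder data for a radix-2 FFT; it mimics what dedicated address-generation hardware does without per-access overhead and is not any vendor's API.

```python
# Software illustration of two DSP addressing modes described above.
# Real DSPs compute these indices in dedicated address-generation hardware;
# this sketch only shows the index arithmetic.

def ring_buffer_indices(start: int, count: int, size: int):
    """Modulo (circular) addressing: walk `count` slots of a buffer of
    length `size`, wrapping around without an explicit end-of-buffer test."""
    return [(start + i) % size for i in range(count)]

def bit_reverse(index: int, bits: int) -> int:
    """Bit-reversed addressing: reverse the low `bits` bits of an index,
    the access pattern used to reorder data for a radix-2 FFT."""
    result = 0
    for _ in range(bits):
        result = (result << 1) | (index & 1)
        index >>= 1
    return result

print(ring_buffer_indices(start=6, count=5, size=8))   # [6, 7, 0, 1, 2]
print([bit_reverse(i, 3) for i in range(8)])           # [0, 4, 2, 6, 1, 5, 3, 7]
```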
The fourth generation is best characterized by the changes in the instruction set and the instruction encoding/decoding. SIMD extensions were added, and VLIW and the superscalar architecture appeared. As always, the clock speeds have increased; a 3 ns MAC now became possible. Modern DSPs Modern signal processors yield greater performance; this is due in part to both technological and architectural advancements like lower design rules, fast-access two-level cache, (E)DMA circuitry, and a wider bus system. Not all DSPs provide the same speed and many kinds of signal processors exist, each one of them being better suited for a specific task, ranging in price from about US$1.50 to US$300. Texas Instruments produces the C6000 series DSPs, which have clock speeds of 1.2 GHz and implement separate instruction and data caches. They also have an 8 MiB 2nd level cache and 64 EDMA channels. The top models are capable of as many as 8000 MIPS (millions of instructions per second), use VLIW (very long instruction word), perform eight operations per clock cycle and are compatible with a broad range of external peripherals and various buses (PCI/serial/etc.). TMS320C6474 chips each have three such DSPs, and the newest generation C6000 chips support floating point as well as fixed point processing. Freescale produces a multi-core DSP family, the MSC81xx. The MSC81xx is based on StarCore Architecture processors and the latest MSC8144 DSP combines four programmable SC3400 StarCore DSP cores. Each SC3400 StarCore DSP core has a clock speed of 1 GHz. XMOS produces a multi-core multi-threaded line of processors well suited to DSP operations. They come in various speeds ranging from 400 to 1600 MIPS. The processors have a multi-threaded architecture that allows up to 8 real-time threads per core, meaning that a 4-core device would support up to 32 real-time threads. Threads communicate between each other with buffered channels that are capable of up to 80 Mbit/s. The devices are easily programmable in C and aim at bridging the gap between conventional micro-controllers and FPGAs. CEVA, Inc. produces and licenses three distinct families of DSPs. Perhaps the best known and most widely deployed is the CEVA-TeakLite DSP family, a classic memory-based architecture, with 16-bit or 32-bit word-widths and single or dual MACs. The CEVA-X DSP family offers a combination of VLIW and SIMD architectures, with different members of the family offering dual or quad 16-bit MACs. The CEVA-XC DSP family targets Software-defined Radio (SDR) modem designs and leverages a unique combination of VLIW and Vector architectures with 32 16-bit MACs. Analog Devices produces the SHARC-based DSPs, which range in performance from 66 MHz/198 MFLOPS (million floating-point operations per second) to 400 MHz/2400 MFLOPS. Some models support multiple multipliers and ALUs, SIMD instructions and audio processing-specific components and peripherals. The Blackfin family of embedded digital signal processors combines the features of a DSP with those of a general use processor. As a result, these processors can run simple operating systems like μCLinux, velocity and Nucleus RTOS while operating on real-time data. The SHARC-based ADSP-210xx provides both delayed branches and non-delayed branches. NXP Semiconductors produces DSPs based on TriMedia VLIW technology, optimized for audio and video processing. In some products the DSP core is hidden as a fixed-function block into a SoC, but NXP also provides a range of flexible single core media processors. 
The TriMedia media processors support both fixed-point arithmetic as well as floating-point arithmetic, and have specific instructions to deal with complex filters and entropy coding. CSR produces the Quatro family of SoCs that contain one or more custom Imaging DSPs optimized for processing document image data for scanner and copier applications. Microchip Technology produces the PIC24 based dsPIC line of DSPs. Introduced in 2004, the dsPIC is designed for applications needing a true DSP as well as a true microcontroller, such as motor control and in power supplies. The dsPIC runs at up to 40MIPS, and has support for 16 bit fixed point MAC, bit reverse and modulo addressing, as well as DMA. Most DSPs use fixed-point arithmetic, because in real world signal processing the additional range provided by floating point is not needed, and there is a large speed benefit and cost benefit due to reduced hardware complexity. Floating point DSPs may be invaluable in applications where a wide dynamic range is required. Product developers might also use floating point DSPs to reduce the cost and complexity of software development in exchange for more expensive hardware, since it is generally easier to implement algorithms in floating point. Generally, DSPs are dedicated integrated circuits; however DSP functionality can also be produced by using field-programmable gate array chips (FPGAs). Embedded general-purpose RISC processors are becoming increasingly DSP like in functionality. For example, the OMAP3 processors include an ARM Cortex-A8 and C6000 DSP. In Communications a new breed of DSPs offering the fusion of both DSP functions and H/W acceleration function is making its way into the mainstream. Such Modem processors include ASOCS ModemX and CEVA's XC4000. In May 2018, Huarui-2 designed by Nanjing Research Institute of Electronics Technology of China Electronics Technology Group passed acceptance. With a processing speed of 0.4 TFLOPS, the chip can achieve better performance than current mainstream DSP chips. The design team has begun to create Huarui-3, which has a processing speed in TFLOPS level and a support for artificial intelligence. See also Digital signal controller Graphics processing unit System on a chip Hardware acceleration Vision processing unit MDSP – a multiprocessor DSP OpenCL Sound card References External links DSP Online Book Pocket Guide to Processors for DSP - Berkeley Design Technology, INC Digital signal processing Computer engineering Integrated circuits Coprocessors Hardware acceleration
Digital signal processor
[ "Technology", "Engineering" ]
3,335
[ "Hardware acceleration", "Computer engineering", "Computer systems", "Electrical engineering", "Integrated circuits" ]
154,576
https://en.wikipedia.org/wiki/Cloud%20cover
Cloud cover (also known as cloudiness, cloudage, or cloud amount) refers to the fraction of the sky obscured by clouds on average when observed from a particular location. Okta is the usual unit for measurement of the cloud cover. Cloud cover is correlated with sunshine duration: the least cloudy locales are the sunniest ones, while the cloudiest areas are the least sunny places, because clouds can block sunlight, especially at sunrise and sunset when sunlight is already limited. The global cloud cover averages around 67-68%, though it ranges from 56% to 73% depending on the minimum optical depth considered (lower when optical depth is large, and higher when it is low, such that subvisible cirrus clouds are counted). Average cloud cover is around 72% over the oceans, with low seasonal variation, and about 55% above land, with significant seasonal variation. Role in the climate system Clouds play multiple critical roles in the climate system and diurnal cycle. In particular, being bright objects in the visible part of the solar spectrum, they efficiently reflect light to space and thus contribute to the cooling of the planet, as well as trapping remaining heat at night. Cloud cover thus plays an important role in the energetic balance of the atmosphere, and a variation of it is both a factor in and a consequence of the climate change expected by recent studies. Variability Cloud cover values only vary by 3% from year-to-year averages, whereas the local, day-to-day variability in cloud amounts typically rises to 30% over the globe. Land is generally covered by 10-15% less cloud than the oceans, because the seas are covered with water, allowing for more evaporation. Lastly, there is a latitudinal variation in the cloud cover. Areas with cloud cover around 10-15% below the global mean can be found around 20°N and 20°S, due to an absence of equatorial effects and strong winds reducing cloud formation. On the other hand, the storm regions of the Southern Hemisphere midlatitudes were found to have 15–25% more cloudiness than the global mean at 60°S. On average, about 67% of the entire Earth is cloud-covered at any moment. On a continental scale, long-term satellite records of cloudiness show that, on a year-mean basis, Europe, North America, South America and Asia are dominated by cloudy skies due to the westerlies, monsoon or other effects. On the other hand, Africa, the Middle East and Australia are dominated by clear skies due to their continentality and aridity. On a regional scale, some exceptionally humid areas of Earth experience cloudy conditions virtually all the time, such as South America's Amazon Rainforest, while some highly arid areas experience clear-sky conditions virtually all the time, such as Africa's Sahara Desert. Altitude of typical cloud cover Although clouds can exist within a wide range of altitudes, typical cloud cover has a base at approximately 4,000m and extends up to an altitude of about 5,000m. Cloud height can vary depending on latitude, with cloud cover in polar latitudes being slightly lower, while in tropical regions it may extend up to 8,000m. The type of cloud is also a factor, with low cumulus clouds sitting at 300–1,500m while high cirrus clouds sit at 5,500-6,500m. References McIntosh, D. H. (1972) Meteorological Glossary, Her Majesty's Stationery Office, Met. O. 842, A.P. 897, 319 p. External links NSDL.arm.gov, Glossary of Atmospheric Terms, From the National Science Digital Library's Atmospheric Visualization Collection. 
Earthobservatory.nasa.gov, Monthly maps of global cloud cover from NASA's Earth Observatory International Satellite Cloud Climatology Project (ISCCP), NASA's data products on their satellite observations NASA composite satellite image. Clouds Articles containing video clips Atmospheric dynamics fr:Nuage#Nébulosité et opacité
Cloud cover
[ "Chemistry" ]
830
[ "Atmospheric dynamics", "Fluid dynamics" ]
154,581
https://en.wikipedia.org/wiki/Shortwave%20radiation%20%28optics%29
Shortwave radiation (SW) is thermal radiation in the optical spectrum, including visible (VIS), near-ultraviolet (UV), and near-infrared (NIR) spectra. There is no standard cut-off for the near-infrared range; therefore, the shortwave radiation range is also variously defined. It may be broadly defined to include all radiation with a wavelength between 0.1μm and 5.0μm or narrowly defined so as to include only radiation between 0.2μm and 3.0μm. There is little radiation flux (in terms of W/m2) to the Earth's surface below 0.2μm or above 3.0μm, although photon flux remains significant as far as 6.0μm, compared to shorter wavelength fluxes. UV-C radiation spans from 0.1μm to 0.28μm, UV-B from 0.28μm to 0.315μm, UV-A from 0.315μm to 0.4μm, the visible spectrum from 0.4μm to 0.7μm, and NIR arguably from 0.7μm to 5.0μm, beyond which the infrared is thermal. Shortwave radiation is distinguished from longwave radiation. Downward shortwave radiation is related to solar irradiance and is sensitive to solar zenith angle and cloud cover. See also Outgoing longwave radiation Notes External links National Science Digital Library - Shortwave radiation Measuring Solar Radiation: The Solar Infrared Radiation Station (SIRS). A lesson plan that deals with shortwave radiation from the SIRS instrument. References Zhang, Y., W. B. Rossow, A. A. Lacis, V. Oinas and M. I. Mischenko (2004). "Calculation of radiative fluxes from the surface to the top of atmosphere based on ISCCP and other global data sets: Refinements of the radiative transfer model and the input data." Journal of Geophysical Research-Atmospheres 109(D19105). L. Chen, G. Yan, T. Wang, H. Ren, J. Calbó, J. Zhao, R. McKenzie (2012), Estimation of surface shortwave radiation components under all sky conditions: Modeling and sensitivity analysis, Remote Sensing of Environment, 123: 457–469. Waves
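The band boundaries listed above can be turned into a tiny lookup. The following sketch classifies a wavelength (in μm) using exactly those cut-offs; the function name and the choice to treat bands as half-open intervals are illustrative assumptions rather than part of any standard.

```python
# Classify a wavelength (in micrometres) into the shortwave bands quoted above.
# Boundary handling (half-open intervals) and the function name are assumptions
# made for this sketch; the band edges themselves are the ones given in the text.

BANDS = [
    (0.10, 0.28,  "UV-C"),
    (0.28, 0.315, "UV-B"),
    (0.315, 0.40, "UV-A"),
    (0.40, 0.70,  "visible"),
    (0.70, 5.00,  "NIR"),
]

def shortwave_band(wavelength_um: float) -> str:
    for low, high, name in BANDS:
        if low <= wavelength_um < high:
            return name
    return "outside the shortwave range"

print(shortwave_band(0.55))   # visible
print(shortwave_band(1.6))    # NIR
print(shortwave_band(10.0))   # outside the shortwave range
```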
Shortwave radiation (optics)
[ "Physics" ]
489
[ "Waves", "Physical phenomena", "Motion (physics)" ]
154,584
https://en.wikipedia.org/wiki/Hilbert%27s%20problems
Hilbert's problems are 23 problems in mathematics published by German mathematician David Hilbert in 1900. They were all unsolved at the time, and several proved to be very influential for 20th-century mathematics. Hilbert presented ten of the problems (1, 2, 6, 7, 8, 13, 16, 19, 21, and 22) at the Paris conference of the International Congress of Mathematicians, speaking on August 8 at the Sorbonne. The complete list of 23 problems was published later, in English translation in 1902 by Mary Frances Winston Newson in the Bulletin of the American Mathematical Society. Earlier publications (in the original German) appeared in Archiv der Mathematik und Physik. List of Hilbert's Problems The following are the headers for Hilbert's 23 problems as they appeared in the 1902 translation in the Bulletin of the American Mathematical Society. 1. Cantor's problem of the cardinal number of the continuum. 2. The compatibility of the arithmetical axioms. 3. The equality of the volumes of two tetrahedra of equal bases and equal altitudes. 4. Problem of the straight line as the shortest distance between two points. 5. Lie's concept of a continuous group of transformations without the assumption of the differentiability of the functions defining the group. 6. Mathematical treatment of the axioms of physics. 7. Irrationality and transcendence of certain numbers. 8. Problems of prime numbers (The "Riemann Hypothesis"). 9. Proof of the most general law of reciprocity in any number field. 10. Determination of the solvability of a Diophantine equation. 11. Quadratic forms with any algebraic numerical coefficients 12. Extensions of Kronecker's theorem on Abelian fields to any algebraic realm of rationality 13. Impossibility of the solution of the general equation of 7th degree by means of functions of only two arguments. 14. Proof of the finiteness of certain complete systems of functions. 15. Rigorous foundation of Schubert's enumerative calculus. 16. Problem of the topology of algebraic curves and surfaces. 17. Expression of definite forms by squares. 18. Building up of space from congruent polyhedra. 19. Are the solutions of regular problems in the calculus of variations always necessarily analytic? 20. The general problem of boundary values (Boundary value problems in PD) 21. Proof of the existence of linear differential equations having a prescribed monodromy group. 22. Uniformization of analytic relations by means of automorphic functions. 23. Further development of the methods of the calculus of variations. Nature and influence of the problems Hilbert's problems ranged greatly in topic and precision. Some of them, like the 3rd problem, which was the first to be solved, or the 8th problem (the Riemann hypothesis), which still remains unresolved, were presented precisely enough to enable a clear affirmative or negative answer. For other problems, such as the 5th, experts have traditionally agreed on a single interpretation, and a solution to the accepted interpretation has been given, but closely related unsolved problems exist. Some of Hilbert's statements were not precise enough to specify a particular problem, but were suggestive enough that certain problems of contemporary nature seem to apply; for example, most modern number theorists would probably see the 9th problem as referring to the conjectural Langlands correspondence on representations of the absolute Galois group of a number field. 
Still other problems, such as the 11th and the 16th, concern what are now flourishing mathematical subdisciplines, like the theories of quadratic forms and real algebraic curves. There are two problems that are not only unresolved but may in fact be unresolvable by modern standards. The 6th problem concerns the axiomatization of physics, a goal that 20th-century developments seem to render both more remote and less important than in Hilbert's time. Also, the 4th problem concerns the foundations of geometry, in a manner that is now generally judged to be too vague to enable a definitive answer. The 23rd problem was purposefully set as a general indication by Hilbert to highlight the calculus of variations as an underappreciated and understudied field. In the lecture introducing these problems, Hilbert made the following introductory remark to the 23rd problem: The other 21 problems have all received significant attention, and late into the 20th century work on these problems was still considered to be of the greatest importance. Paul Cohen received the Fields Medal in 1966 for his work on the first problem, and the negative solution of the tenth problem in 1970 by Yuri Matiyasevich (completing work by Julia Robinson, Hilary Putnam, and Martin Davis) generated similar acclaim. Aspects of these problems are still of great interest today. Knowability Following Gottlob Frege and Bertrand Russell, Hilbert sought to define mathematics logically using the method of formal systems, i.e., finitistic proofs from an agreed-upon set of axioms. One of the main goals of Hilbert's program was a finitistic proof of the consistency of the axioms of arithmetic: that is his second problem. However, Gödel's second incompleteness theorem gives a precise sense in which such a finitistic proof of the consistency of arithmetic is provably impossible. Hilbert lived for 12 years after Kurt Gödel published his theorem, but does not seem to have written any formal response to Gödel's work. Hilbert's tenth problem does not ask whether there exists an algorithm for deciding the solvability of Diophantine equations, but rather asks for the construction of such an algorithm: "to devise a process according to which it can be determined in a finite number of operations whether the equation is solvable in rational integers". That this problem was solved by showing that there cannot be any such algorithm contradicted Hilbert's philosophy of mathematics. In discussing his opinion that every mathematical problem should have a solution, Hilbert allows for the possibility that the solution could be a proof that the original problem is impossible. He stated that the point is to know one way or the other what the solution is, and he believed that we always can know this, that in mathematics there is not any "ignorabimus" (statement whose truth can never be known). It seems unclear whether he would have regarded the solution of the tenth problem as an instance of ignorabimus. On the other hand, the status of the first and second problems is even more complicated: there is no clear mathematical consensus as to whether the results of Gödel (in the case of the second problem), or Gödel and Cohen (in the case of the first problem) give definitive negative solutions or not, since these solutions apply to a certain formalization of the problems, which is not necessarily the only possible one. The 24th problem Hilbert originally included 24 problems on his list, but decided against including one of them in the published list. 
The "24th problem" (in proof theory, on a criterion for simplicity and general methods) was rediscovered in Hilbert's original manuscript notes by German historian Rüdiger Thiele in 2000. Follow-ups Since 1900, mathematicians and mathematical organizations have announced problem lists but, with few exceptions, these have not had nearly as much influence nor generated as much work as Hilbert's problems. One exception consists of three conjectures made by André Weil in the late 1940s (the Weil conjectures). In the fields of algebraic geometry, number theory and the links between the two, the Weil conjectures were very important. The first of these was proved by Bernard Dwork; a completely different proof of the first two, via ℓ-adic cohomology, was given by Alexander Grothendieck. The last and deepest of the Weil conjectures (an analogue of the Riemann hypothesis) was proved by Pierre Deligne. Both Grothendieck and Deligne were awarded the Fields medal. However, the Weil conjectures were, in their scope, more like a single Hilbert problem, and Weil never intended them as a programme for all mathematics. This is somewhat ironic, since arguably Weil was the mathematician of the 1940s and 1950s who best played the Hilbert role, being conversant with nearly all areas of (theoretical) mathematics and having figured importantly in the development of many of them. Paul Erdős posed hundreds, if not thousands, of mathematical problems, many of them profound. Erdős often offered monetary rewards; the size of the reward depended on the perceived difficulty of the problem. The end of the millennium, which was also the centennial of Hilbert's announcement of his problems, provided a natural occasion to propose "a new set of Hilbert problems". Several mathematicians accepted the challenge, notably Fields Medalist Steve Smale, who responded to a request by Vladimir Arnold to propose a list of 18 problems (Smale's problems). At least in the mainstream media, the de facto 21st century analogue of Hilbert's problems is the list of seven Millennium Prize Problems chosen during 2000 by the Clay Mathematics Institute. Unlike the Hilbert problems, where the primary award was the admiration of Hilbert in particular and mathematicians in general, each prize problem includes a million-dollar bounty. As with the Hilbert problems, one of the prize problems (the Poincaré conjecture) was solved relatively soon after the problems were announced. The Riemann hypothesis is noteworthy for its appearance on the list of Hilbert problems, Smale's list, the list of Millennium Prize Problems, and even the Weil conjectures, in its geometric guise. Although it has been attacked by major mathematicians of our day, many experts believe that it will still be part of unsolved problems lists for many centuries. Hilbert himself declared: "If I were to awaken after having slept for a thousand years, my first question would be: Has the Riemann hypothesis been proved?" In 2008, DARPA announced its own list of 23 problems that it hoped could lead to major mathematical breakthroughs, "thereby strengthening the scientific and technological capabilities of the DoD". The DARPA list also includes a few problems from Hilbert's list, e.g. the Riemann hypothesis. Summary Of the cleanly formulated Hilbert problems, numbers 3, 7, 10, 14, 17, 18, 19, and 20 have resolutions that are accepted by consensus of the mathematical community. 
Problems 1, 2, 5, 6, 9, 11, 12, 15, 21, and 22 have solutions that have partial acceptance, but there exists some controversy as to whether they resolve the problems. That leaves 8 (the Riemann hypothesis), 13 and 16 unresolved, and 4 and 23 as too vague to ever be described as solved. The withdrawn 24 would also be in this class. Table of problems Hilbert's 23 problems are (for details on the solutions and references, see the articles that are linked to in the first column): See also Landau's problems Millennium Prize Problems Smale's problems Taniyama's problems Thurston's 24 questions Notes References Further reading A wealth of information relevant to Hilbert's "program" and Gödel's impact on the Second Question, the impact of Arend Heyting's and Brouwer's Intuitionism on Hilbert's philosophy. A collection of survey essays by experts devoted to each of the 23 problems emphasizing current developments. An account at the undergraduate level by the mathematician who completed the solution of the problem. External links Unsolved problems in mathematics
Hilbert's problems
[ "Mathematics" ]
2,346
[ "Hilbert's problems", "Unsolved problems in mathematics", "Mathematical problems" ]
154,616
https://en.wikipedia.org/wiki/Negative%20number
In mathematics, a negative number is the opposite of a positive real number. Equivalently, a negative number is a real number that is less than zero. Negative numbers are often used to represent the magnitude of a loss or deficiency. A debt that is owed may be thought of as a negative asset. If a quantity, such as the charge on an electron, may have either of two opposite senses, then one may choose to distinguish between those senses—perhaps arbitrarily—as positive and negative. Negative numbers are used to describe values on a scale that goes below zero, such as the Celsius and Fahrenheit scales for temperature. The laws of arithmetic for negative numbers ensure that the common-sense idea of an opposite is reflected in arithmetic. For example, −(−3) = 3 because the opposite of an opposite is the original value. Negative numbers are usually written with a minus sign in front. For example, −3 represents a negative quantity with a magnitude of three, and is pronounced "minus three" or "negative three". Conversely, a number that is greater than zero is called positive; zero is usually (but not always) thought of as neither positive nor negative. The positivity of a number may be emphasized by placing a plus sign before it, e.g. +3. In general, the negativity or positivity of a number is referred to as its sign. Every real number other than zero is either positive or negative. The non-negative whole numbers are referred to as natural numbers (i.e., 0, 1, 2, 3...), while the positive and negative whole numbers (together with zero) are referred to as integers. (Some definitions of the natural numbers exclude zero.) In bookkeeping, amounts owed are often represented by red numbers, or a number in parentheses, as an alternative notation to represent negative numbers. Negative numbers were used in the Nine Chapters on the Mathematical Art, which in its present form dates from the period of the Chinese Han dynasty (202 BC – AD 220), but may well contain much older material. Liu Hui (c. 3rd century) established rules for adding and subtracting negative numbers. By the 7th century, Indian mathematicians such as Brahmagupta were describing the use of negative numbers. Islamic mathematicians further developed the rules of subtracting and multiplying negative numbers and solved problems with negative coefficients. Prior to the concept of negative numbers, mathematicians such as Diophantus considered negative solutions to problems "false" and equations requiring negative solutions were described as absurd. Western mathematicians like Leibniz held that negative numbers were invalid, but still used them in calculations. Introduction The number line The relationship between negative numbers, positive numbers, and zero is often expressed in the form of a number line: Numbers appearing farther to the right on this line are greater, while numbers appearing farther to the left are lesser. Thus zero appears in the middle, with the positive numbers to the right and the negative numbers to the left. Note that a negative number with greater magnitude is considered less. For example, even though (positive) is greater than (positive) , written negative is considered to be less than negative : Signed numbers In the context of negative numbers, a number that is greater than zero is referred to as positive. Thus every real number other than zero is either positive or negative, while zero itself is not considered to have a sign. Positive numbers are sometimes written with a plus sign in front, e.g. 
denotes a positive three. Because zero is neither positive nor negative, the term nonnegative is sometimes used to refer to a number that is either positive or zero, while nonpositive is used to refer to a number that is either negative or zero. Zero is a neutral number. As the result of subtraction Negative numbers can be thought of as resulting from the subtraction of a larger number from a smaller. For example, negative three is the result of subtracting three from zero: In general, the subtraction of a larger number from a smaller yields a negative result, with the magnitude of the result being the difference between the two numbers. For example, since . Everyday uses of negative numbers Sport Goal difference in association football and hockey; points difference in rugby football; net run rate in cricket; golf scores relative to par. Plus-minus differential in ice hockey: the difference in total goals scored for the team (+) and against the team (−) when a particular player is on the ice is the player's +/− rating. Players can have a negative (+/−) rating. Run differential in baseball: the run differential is negative if the team allows more runs than they scored. Clubs may be deducted points for breaches of the laws, and thus have a negative points total until they have earned at least that many points that season. Lap (or sector) times in Formula 1 may be given as the difference compared to a previous lap (or sector) (such as the previous record, or the lap just completed by a driver in front), and will be positive if slower and negative if faster. In some athletics events, such as sprint races, the hurdles, the triple jump and the long jump, the wind assistance is measured and recorded, and is positive for a tailwind and negative for a headwind. Science Temperatures which are colder than 0 °C or 0 °F. Latitudes south of the equator and longitudes west of the prime meridian. Topographical features of the earth's surface are given a height above sea level, which can be negative (e.g. the surface elevation of the Dead Sea or Death Valley, or the elevation of the Thames Tideway Tunnel). Electrical circuits. When a battery is connected in reverse polarity, the voltage applied is said to be the opposite of its rated voltage. For example, a 6-volt battery connected in reverse applies a voltage of −6 volts. Ions have a positive or negative electrical charge. Impedance of an AM broadcast tower used in multi-tower directional antenna arrays, which can be positive or negative. Finance Financial statements can include negative balances, indicated either by a minus sign or by enclosing the balance in parentheses. Examples include bank account overdrafts and business losses (negative earnings). The annual percentage growth in a country's GDP might be negative, which is one indicator of being in a recession. Occasionally, a rate of inflation may be negative (deflation), indicating a fall in average prices. The daily change in a share price or stock market index, such as the FTSE 100 or the Dow Jones. A negative number in financing is synonymous with "debt" and "deficit" which are also known as "being in the red". Interest rates can be negative, when the lender is charged to deposit their money. Other The numbering of stories in a building below the ground floor. 
When playing an audio file on a portable media player, such as an iPod, the screen display may show the time remaining as a negative number, which increases up to zero time remaining at the same rate as the time already played increases from zero. Television game shows: Participants on QI often finish with a negative points score. Teams on University Challenge have a negative score if their first answers are incorrect and interrupt the question. Jeopardy! has a negative money score – contestants play for an amount of money and any incorrect answer that costs them more than what they have now can result in a negative score. In The Price Is Rights pricing game Buy or Sell, if an amount of money is lost that is more than the amount currently in the bank, it incurs a negative score. The change in support for a political party between elections, known as swing. A politician's approval rating. In video games, a negative number indicates loss of life, damage, a score penalty, or consumption of a resource, depending on the genre of the simulation. Employees with flexible working hours may have a negative balance on their timesheet if they have worked fewer total hours than contracted to that point. Employees may be able to take more than their annual holiday allowance in a year, and carry forward a negative balance to the next year. Transposing notes on an electronic keyboard are shown on the display with positive numbers for increases and negative numbers for decreases, e.g. "−1" for one semitone down. Arithmetic involving negative numbers The minus sign "−" signifies the operator for both the binary (two-operand) operation of subtraction (as in ) and the unary (one-operand) operation of negation (as in , or twice in ). A special case of unary negation occurs when it operates on a positive number, in which case the result is a negative number (as in ). The ambiguity of the "−" symbol does not generally lead to ambiguity in arithmetical expressions, because the order of operations makes only one interpretation or the other possible for each "−". However, it can lead to confusion and be difficult for a person to understand an expression when operator symbols appear adjacent to one another. A solution can be to parenthesize the unary "−" along with its operand. For example, the expression may be clearer if written (even though they mean exactly the same thing formally). The subtraction expression is a different expression that doesn't represent the same operations, but it evaluates to the same result. Sometimes in elementary schools a number may be prefixed by a superscript minus sign or plus sign to explicitly distinguish negative and positive numbers as in Addition Addition of two negative numbers is very similar to addition of two positive numbers. For example, The idea is that two debts can be combined into a single debt of greater magnitude. When adding together a mixture of positive and negative numbers, one can think of the negative numbers as positive quantities being subtracted. For example: In the first example, a credit of is combined with a debt of , which yields a total credit of . If the negative number has greater magnitude, then the result is negative: Here the credit is less than the debt, so the net result is a debt. Subtraction As discussed above, it is possible for the subtraction of two non-negative numbers to yield a negative answer: In general, subtraction of a positive number yields the same result as the addition of a negative number of equal magnitude. 
Thus and On the other hand, subtracting a negative number yields the same result as the addition a positive number of equal magnitude. (The idea is that losing a debt is the same thing as gaining a credit.) Thus and Multiplication When multiplying numbers, the magnitude of the product is always just the product of the two magnitudes. The sign of the product is determined by the following rules: The product of one positive number and one negative number is negative. The product of two negative numbers is positive. Thus and The reason behind the first example is simple: adding three 's together yields : The reasoning behind the second example is more complicated. The idea again is that losing a debt is the same thing as gaining a credit. In this case, losing two debts of three each is the same as gaining a credit of six: The convention that a product of two negative numbers is positive is also necessary for multiplication to follow the distributive law. In this case, we know that Since , the product must equal . These rules lead to another (equivalent) rule—the sign of any product a × b depends on the sign of a as follows: if a is positive, then the sign of a × b is the same as the sign of b, and if a is negative, then the sign of a × b is the opposite of the sign of b. The justification for why the product of two negative numbers is a positive number can be observed in the analysis of complex numbers. Division The sign rules for division are the same as for multiplication. For example, and If dividend and divisor have the same sign, the result is positive, if they have different signs the result is negative. Negation The negative version of a positive number is referred to as its negation. For example, is the negation of the positive number . The sum of a number and its negation is equal to zero: That is, the negation of a positive number is the additive inverse of the number. Using algebra, we may write this principle as an algebraic identity: This identity holds for any positive number . It can be made to hold for all real numbers by extending the definition of negation to include zero and negative numbers. Specifically: The negation of 0 is 0, and The negation of a negative number is the corresponding positive number. For example, the negation of is . In general, The absolute value of a number is the non-negative number with the same magnitude. For example, the absolute value of and the absolute value of are both equal to , and the absolute value of is . Formal construction of negative integers In a similar manner to rational numbers, we can extend the natural numbers N to the integers Z by defining integers as an ordered pair of natural numbers (a, b). We can extend addition and multiplication to these pairs with the following rules: We define an equivalence relation ~ upon these pairs with the following rule: This equivalence relation is compatible with the addition and multiplication defined above, and we may define Z to be the quotient set N²/~, i.e. we identify two pairs (a, b) and (c, d) if they are equivalent in the above sense. Note that Z, equipped with these operations of addition and multiplication, is a ring, and is in fact, the prototypical example of a ring. We can also define a total order on Z''' by writing This will lead to an additive zero of the form (a, a), an additive inverse of (a, b) of the form (b, a), a multiplicative unit of the form (a + 1, a), and a definition of subtraction This construction is a special case of the Grothendieck construction. 
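The construction just described — integers as equivalence classes of pairs of natural numbers, with (a, b) standing for a − b — can be played out directly in code using the standard rules (a, b) + (c, d) = (a + c, b + d), (a, b) · (c, d) = (ac + bd, ad + bc), and (a, b) ~ (c, d) exactly when a + d = b + c. The sketch below is an illustrative Python model of that construction (the function names are my own); note how the multiplication rule makes the product of two "negative" pairs come out "positive".

```python
# Illustrative model of the construction above: an integer is an equivalence
# class of pairs (a, b) of natural numbers, thought of as a - b.
# Names and representation choices here are for demonstration only.

def equivalent(p, q):
    """(a, b) ~ (c, d) exactly when a + d == b + c."""
    (a, b), (c, d) = p, q
    return a + d == b + c

def add(p, q):
    (a, b), (c, d) = p, q
    return (a + c, b + d)

def mul(p, q):
    (a, b), (c, d) = p, q
    return (a * c + b * d, a * d + b * c)

def negate(p):
    a, b = p
    return (b, a)

minus_three = (0, 3)   # represents -3
minus_two   = (0, 2)   # represents -2

# (-3) * (-2) should be equivalent to +6, i.e. to the pair (6, 0).
print(equivalent(mul(minus_three, minus_two), (6, 0)))            # True
# (-3) plus its negation should be equivalent to zero, i.e. to (0, 0).
print(equivalent(add(minus_three, negate(minus_three)), (0, 0)))  # True
```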
Uniqueness The additive inverse of a number is unique, as is shown by the following proof. As mentioned above, an additive inverse of a number is defined as a value which when added to the number yields zero. Let x be a number and let y be its additive inverse. Suppose y′ is another additive inverse of x. By definition, And so, x + y′ = x + y. Using the law of cancellation for addition, it is seen that y′ = y. Thus y is equal to any other additive inverse of x. That is, y is the unique additive inverse of x. History For a long time, understanding of negative numbers was delayed by the impossibility of having a negative-number amount of a physical object, for example "minus-three apples", and negative solutions to problems were considered "false". In Hellenistic Egypt, the Greek mathematician Diophantus in the 3rd century AD referred to an equation that was equivalent to (which has a negative solution) in Arithmetica, saying that the equation was absurd. For this reason Greek geometers were able to solve geometrically all forms of the quadratic equation which give positive roots, while they could take no account of others. Negative numbers appear for the first time in history in the Nine Chapters on the Mathematical Art (九章算術, Jiǔ zhāng suàn-shù), which in its present form dates from the Han period, but may well contain much older material. The mathematician Liu Hui (c. 3rd century) established rules for the addition and subtraction of negative numbers. The historian Jean-Claude Martzloff theorized that the importance of duality in Chinese natural philosophy made it easier for the Chinese to accept the idea of negative numbers. The Chinese were able to solve simultaneous equations involving negative numbers. The Nine Chapters used red counting rods to denote positive coefficients and black rods for negative. This system is the exact opposite of contemporary printing of positive and negative numbers in the fields of banking, accounting, and commerce, wherein red numbers denote negative values and black numbers signify positive values. Liu Hui writes: The ancient Indian Bakhshali Manuscript carried out calculations with negative numbers, using "+" as a negative sign. The date of the manuscript is uncertain. LV Gurjar dates it no later than the 4th century, Hoernle dates it between the third and fourth centuries, Ayyangar and Pingree dates it to the 8th or 9th centuries, and George Gheverghese Joseph dates it to about AD 400 and no later than the early 7th century, During the 7th century AD, negative numbers were used in India to represent debts. The Indian mathematician Brahmagupta, in Brahma-Sphuta-Siddhanta (written c. AD 630), discussed the use of negative numbers to produce a general form quadratic formula similar to the one in use today. In the 9th century, Islamic mathematicians were familiar with negative numbers from the works of Indian mathematicians, but the recognition and use of negative numbers during this period remained timid. Al-Khwarizmi in his Al-jabr wa'l-muqabala (from which the word "algebra" derives) did not use negative numbers or negative coefficients. But within fifty years, Abu Kamil illustrated the rules of signs for expanding the multiplication , and al-Karaji wrote in his al-Fakhrī that "negative quantities must be counted as terms". In the 10th century, Abū al-Wafā' al-Būzjānī considered debts as negative numbers in A Book on What Is Necessary from the Science of Arithmetic for Scribes and Businessmen. 
By the 12th century, al-Karaji's successors were to state the general rules of signs and use them to solve polynomial divisions. As al-Samaw'al writes: the product of a negative number—al-nāqiṣ (loss)—by a positive number—al-zāʾid (gain)—is negative, and by a negative number is positive. If we subtract a negative number from a higher negative number, the remainder is their negative difference. The difference remains positive if we subtract a negative number from a lower negative number. If we subtract a negative number from a positive number, the remainder is their positive sum. If we subtract a positive number from an empty power (martaba khāliyya), the remainder is the same negative, and if we subtract a negative number from an empty power, the remainder is the same positive number. In the 12th century in India, Bhāskara II gave negative roots for quadratic equations but rejected them because they were inappropriate in the context of the problem. He stated that a negative value is "in this case not to be taken, for it is inadequate; people do not approve of negative roots." Fibonacci allowed negative solutions in financial problems where they could be interpreted as debits (chapter 13 of Liber Abaci, 1202) and later as losses (in Flos, 1225). In the 15th century, Nicolas Chuquet, a Frenchman, used negative numbers as exponents but referred to them as "absurd numbers". Michael Stifel dealt with negative numbers in his 1544 AD Arithmetica Integra, where he also called them numeri absurdi (absurd numbers). In 1545, Gerolamo Cardano, in his Ars Magna, provided the first satisfactory treatment of negative numbers in Europe. He did not allow negative numbers in his consideration of cubic equations, so he had to treat, for example, x³ + ax = b separately from x³ = ax + b (with a and b positive in both cases). In all, Cardano was driven to the study of thirteen types of cubic equations, each with all negative terms moved to the other side of the = sign to make them positive. (Cardano also dealt with complex numbers, but understandably liked them even less.) See also Signed zero Additive inverse History of zero Integers Positive and negative parts Rational numbers Real numbers Sign function Sign (mathematics) Signed number representations References Citations Bibliography Bourbaki, Nicolas (1998). Elements of the History of Mathematics. Berlin, Heidelberg, and New York: Springer-Verlag. Struik, Dirk J. (1987). A Concise History of Mathematics. New York: Dover Publications. External links Maseres' biographical information BBC Radio 4 series In Our Time, on "Negative Numbers", 9 March 2006 Endless Examples & Exercises: Operations with Signed Integers Math Forum: Ask Dr. Math FAQ: Negative Times a Negative Chinese mathematical discoveries Elementary arithmetic Numbers
Negative number
[ "Mathematics" ]
4,256
[ "Elementary arithmetic", "Mathematical objects", "Elementary mathematics", "Arithmetic", "Numbers" ]
154,664
https://en.wikipedia.org/wiki/Turbulence
In fluid dynamics, turbulence or turbulent flow is fluid motion characterized by chaotic changes in pressure and flow velocity. It is in contrast to laminar flow, which occurs when a fluid flows in parallel layers with no disruption between those layers. Turbulence is commonly observed in everyday phenomena such as surf, fast flowing rivers, billowing storm clouds, or smoke from a chimney, and most fluid flows occurring in nature or created in engineering applications are turbulent. Turbulence is caused by excessive kinetic energy in parts of a fluid flow, which overcomes the damping effect of the fluid's viscosity. For this reason, turbulence is commonly realized in low viscosity fluids. In general terms, in turbulent flow, unsteady vortices appear of many sizes which interact with each other, consequently drag due to friction effects increases. The onset of turbulence can be predicted by the dimensionless Reynolds number, the ratio of kinetic energy to viscous damping in a fluid flow. However, turbulence has long resisted detailed physical analysis, and the interactions within turbulence create a very complex phenomenon. Physicist Richard Feynman described turbulence as the most important unsolved problem in classical physics. The turbulence intensity affects many fields, for examples fish ecology, air pollution, precipitation, and climate change. Examples of turbulence Smoke rising from a cigarette. For the first few centimeters, the smoke is laminar. The smoke plume becomes turbulent as its Reynolds number increases with increases in flow velocity and characteristic length scale. Flow over a golf ball. (This can be best understood by considering the golf ball to be stationary, with air flowing over it.) If the golf ball were smooth, the boundary layer flow over the front of the sphere would be laminar at typical conditions. However, the boundary layer would separate early, as the pressure gradient switched from favorable (pressure decreasing in the flow direction) to unfavorable (pressure increasing in the flow direction), creating a large region of low pressure behind the ball that creates high form drag. To prevent this, the surface is dimpled to perturb the boundary layer and promote turbulence. This results in higher skin friction, but it moves the point of boundary layer separation further along, resulting in lower drag. Clear-air turbulence experienced during airplane flight, as well as poor astronomical seeing (the blurring of images seen through the atmosphere). Most of the terrestrial atmospheric circulation. The oceanic and atmospheric mixed layers and intense oceanic currents. The flow conditions in many industrial equipment (such as pipes, ducts, precipitators, gas scrubbers, dynamic scraped surface heat exchangers, etc.) and machines (for instance, internal combustion engines and gas turbines). The external flow over all kinds of vehicles such as cars, airplanes, ships, and submarines. The motions of matter in stellar atmospheres. A jet exhausting from a nozzle into a quiescent fluid. As the flow emerges into this external fluid, shear layers originating at the lips of the nozzle are created. These layers separate the fast moving jet from the external fluid, and at a certain critical Reynolds number they become unstable and break down to turbulence. Biologically generated turbulence resulting from swimming animals affects ocean mixing. Snow fences work by inducing turbulence in the wind, forcing it to drop much of its snow load near the fence. 
Bridge supports (piers) in water. When river flow is slow, water flows smoothly around the support legs. When the flow is faster, a higher Reynolds number is associated with the flow. The flow may start off laminar but is quickly separated from the leg and becomes turbulent. In many geophysical flows (rivers, atmospheric boundary layer), the flow turbulence is dominated by the coherent structures and turbulent events. A turbulent event is a series of turbulent fluctuations that contain more energy than the average flow turbulence. The turbulent events are associated with coherent flow structures such as eddies and turbulent bursting, and they play a critical role in terms of sediment scour, accretion and transport in rivers as well as contaminant mixing and dispersion in rivers and estuaries, and in the atmosphere. In the medical field of cardiology, a stethoscope is used to detect heart sounds and bruits, which are due to turbulent blood flow. In normal individuals, heart sounds are a product of turbulent flow as heart valves close. However, in some conditions turbulent flow can be audible due to other reasons, some of them pathological. For example, in advanced atherosclerosis, bruits (and therefore turbulent flow) can be heard in some vessels that have been narrowed by the disease process. Recently, turbulence in porous media became a highly debated subject. Strategies used by animals for olfactory navigation, and their success, are heavily influenced by turbulence affecting the odor plume. Features Turbulence is characterized by the following features: Irregularity Turbulent flows are always highly irregular. For this reason, turbulence problems are normally treated statistically rather than deterministically. Turbulent flow is chaotic. However, not all chaotic flows are turbulent. Diffusivity The readily available supply of energy in turbulent flows tends to accelerate the homogenization (mixing) of fluid mixtures. The characteristic which is responsible for the enhanced mixing and increased rates of mass, momentum and energy transports in a flow is called "diffusivity". Turbulent diffusion is usually described by a turbulent diffusion coefficient. This turbulent diffusion coefficient is defined in a phenomenological sense, by analogy with the molecular diffusivities, but it does not have a true physical meaning, being dependent on the flow conditions, and not a property of the fluid itself. In addition, the turbulent diffusivity concept assumes a constitutive relation between a turbulent flux and the gradient of a mean variable similar to the relation between flux and gradient that exists for molecular transport. In the best case, this assumption is only an approximation. Nevertheless, the turbulent diffusivity is the simplest approach for quantitative analysis of turbulent flows, and many models have been postulated to calculate it. For instance, in large bodies of water like oceans this coefficient can be found using Richardson's four-third power law and is governed by the random walk principle. In rivers and large ocean currents, the diffusion coefficient is given by variations of Elder's formula. Rotationality Turbulent flows have non-zero vorticity and are characterized by a strong three-dimensional vortex generation mechanism known as vortex stretching. In fluid dynamics, they are essentially vortices subjected to stretching associated with a corresponding increase of the component of vorticity in the stretching direction—due to the conservation of angular momentum. 
On the other hand, vortex stretching is the core mechanism on which the turbulence energy cascade relies to establish and maintain identifiable structure function. In general, the stretching mechanism implies thinning of the vortices in the direction perpendicular to the stretching direction due to volume conservation of fluid elements. As a result, the radial length scale of the vortices decreases and the larger flow structures break down into smaller structures. The process continues until the small scale structures are small enough that their kinetic energy can be transformed by the fluid's molecular viscosity into heat. Turbulent flow is always rotational and three dimensional. For example, atmospheric cyclones are rotational but their substantially two-dimensional shapes do not allow vortex generation and so are not turbulent. On the other hand, oceanic flows are dispersive but essentially non rotational and therefore are not turbulent. Dissipation To sustain turbulent flow, a persistent source of energy supply is required because turbulence dissipates rapidly as the kinetic energy is converted into internal energy by viscous shear stress. Turbulence causes the formation of eddies of many different length scales. Most of the kinetic energy of the turbulent motion is contained in the large-scale structures. The energy "cascades" from these large-scale structures to smaller scale structures by an inertial and essentially inviscid mechanism. This process continues, creating smaller and smaller structures which produces a hierarchy of eddies. Eventually this process creates structures that are small enough that molecular diffusion becomes important and viscous dissipation of energy finally takes place. The scale at which this happens is the Kolmogorov length scale. Via this energy cascade, turbulent flow can be realized as a superposition of a spectrum of flow velocity fluctuations and eddies upon a mean flow. The eddies are loosely defined as coherent patterns of flow velocity, vorticity and pressure. Turbulent flows may be viewed as made of an entire hierarchy of eddies over a wide range of length scales and the hierarchy can be described by the energy spectrum that measures the energy in flow velocity fluctuations for each length scale (wavenumber). The scales in the energy cascade are generally uncontrollable and highly non-symmetric. Nevertheless, based on these length scales these eddies can be divided into three categories. Integral time scale The integral time scale for a Lagrangian flow can be defined as: where u′ is the velocity fluctuation, and is the time lag between measurements. Integral length scales Large eddies obtain energy from the mean flow and also from each other. Thus, these are the energy production eddies which contain most of the energy. They have the large flow velocity fluctuation and are low in frequency. Integral scales are highly anisotropic and are defined in terms of the normalized two-point flow velocity correlations. The maximum length of these scales is constrained by the characteristic length of the apparatus. For example, the largest integral length scale of pipe flow is equal to the pipe diameter. In the case of atmospheric turbulence, this length can reach up to the order of several hundreds kilometers.: The integral length scale can be defined as where r is the distance between two measurement locations, and u′ is the velocity fluctuation in that same direction. 
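In the standard notation of the turbulence literature (assuming u′ for the velocity fluctuation, τ for the time lag and r for the spatial separation, as in the surrounding text), the integral scales described above take the form

\[
T = \frac{1}{\langle u'^{2}\rangle}\int_{0}^{\infty}\langle u'(t)\,u'(t+\tau)\rangle\,d\tau,
\qquad
L = \frac{1}{\langle u'^{2}\rangle}\int_{0}^{\infty}\langle u'(x)\,u'(x+r)\rangle\,dr,
\]

where T is the Lagrangian integral time scale, L the integral length scale, and the angle brackets denote an averaging operation. These normalized autocorrelation forms are the usual convention rather than formulas quoted from this article.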
Kolmogorov length scales Smallest scales in the spectrum that form the viscous sub-layer range. In this range, the energy input from nonlinear interactions and the energy drain from viscous dissipation are in exact balance. The small scales have high frequency, causing turbulence to be locally isotropic and homogeneous. Taylor microscales The intermediate scales between the largest and the smallest scales which make the inertial subrange. Taylor microscales are not dissipative scales, but pass down the energy from the largest to the smallest without dissipation. Some literatures do not consider Taylor microscales as a characteristic length scale and consider the energy cascade to contain only the largest and smallest scales; while the latter accommodate both the inertial subrange and the viscous sublayer. Nevertheless, Taylor microscales are often used in describing the term "turbulence" more conveniently as these Taylor microscales play a dominant role in energy and momentum transfer in the wavenumber space. Although it is possible to find some particular solutions of the Navier–Stokes equations governing fluid motion, all such solutions are unstable to finite perturbations at large Reynolds numbers. Sensitive dependence on the initial and boundary conditions makes fluid flow irregular both in time and in space so that a statistical description is needed. The Russian mathematician Andrey Kolmogorov proposed the first statistical theory of turbulence, based on the aforementioned notion of the energy cascade (an idea originally introduced by Richardson) and the concept of self-similarity. As a result, the Kolmogorov microscales were named after him. It is now known that the self-similarity is broken so the statistical description is presently modified. A complete description of turbulence is one of the unsolved problems in physics. According to an apocryphal story, Werner Heisenberg was asked what he would ask God, given the opportunity. His reply was: "When I meet God, I am going to ask him two questions: Why relativity? And why turbulence? I really believe he will have an answer for the first." A similar witticism has been attributed to Horace Lamb in a speech to the British Association for the Advancement of Science: "I am an old man now, and when I die and go to heaven there are two matters on which I hope for enlightenment. One is quantum electrodynamics, and the other is the turbulent motion of fluids. And about the former I am rather more optimistic." Onset of turbulence The onset of turbulence can be, to some extent, predicted by the Reynolds number, which is the ratio of inertial forces to viscous forces within a fluid which is subject to relative internal movement due to different fluid velocities, in what is known as a boundary layer in the case of a bounding surface such as the interior of a pipe. A similar effect is created by the introduction of a stream of higher velocity fluid, such as the hot gases from a flame in air. This relative movement generates fluid friction, which is a factor in developing turbulent flow. Counteracting this effect is the viscosity of the fluid, which as it increases, progressively inhibits turbulence, as more kinetic energy is absorbed by a more viscous fluid. The Reynolds number quantifies the relative importance of these two types of forces for given flow conditions, and is a guide to when turbulent flow will occur in a particular situation. 
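As a rough illustration of how the Reynolds number is applied in practice, the following Python sketch uses the standard definition Re = ρvL/μ (the individual quantities are listed in the next paragraph) together with the pipe-flow thresholds quoted below; the function names and numerical inputs are invented example values, not data from this article.

```python
# Illustrative sketch: Reynolds number for pipe flow and a rough regime estimate.
def reynolds_number(density, velocity, length, dynamic_viscosity):
    """Re = rho * v * L / mu, with all quantities in SI units."""
    return density * velocity * length / dynamic_viscosity

def pipe_flow_regime(re):
    # Thresholds quoted for Poiseuille (pipe) flow: ~2040 for sustained
    # turbulence, ~4000 before the flow is fully turbulent.
    if re < 2040:
        return "laminar"
    elif re < 4000:
        return "transitional (intermittent turbulence)"
    return "turbulent"

# Assumed example values: water at room temperature in a 25 mm pipe at 1 m/s.
re = reynolds_number(density=998.0, velocity=1.0, length=0.025, dynamic_viscosity=1.0e-3)
print(round(re), pipe_flow_regime(re))   # ~24950 -> turbulent
```

The same function can be reused across geometries by changing the characteristic length, which is how the Reynolds number serves as a guide to whether a given flow will be laminar or turbulent.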
This ability to predict the onset of turbulent flow is an important design tool for equipment such as piping systems or aircraft wings, but the Reynolds number is also used in scaling of fluid dynamics problems, and is used to determine dynamic similitude between two different cases of fluid flow, such as between a model aircraft, and its full size version. Such scaling is not always linear and the application of Reynolds numbers to both situations allows scaling factors to be developed. A flow situation in which the kinetic energy is significantly absorbed due to the action of fluid molecular viscosity gives rise to a laminar flow regime. For this the dimensionless quantity the Reynolds number () is used as a guide. With respect to laminar and turbulent flow regimes: laminar flow occurs at low Reynolds numbers, where viscous forces are dominant, and is characterized by smooth, constant fluid motion; turbulent flow occurs at high Reynolds numbers and is dominated by inertial forces, which tend to produce chaotic eddies, vortices and other flow instabilities. The Reynolds number is defined as where: is the density of the fluid (SI units: kg/m3) is a characteristic velocity of the fluid with respect to the object (m/s) is a characteristic linear dimension (m) is the dynamic viscosity of the fluid (Pa·s or N·s/m2 or kg/(m·s)). While there is no theorem directly relating the non-dimensional Reynolds number to turbulence, flows at Reynolds numbers larger than 5000 are typically (but not necessarily) turbulent, while those at low Reynolds numbers usually remain laminar. In Poiseuille flow, for example, turbulence can first be sustained if the Reynolds number is larger than a critical value of about 2040; moreover, the turbulence is generally interspersed with laminar flow until a larger Reynolds number of about 4000. The transition occurs if the size of the object is gradually increased, or the viscosity of the fluid is decreased, or if the density of the fluid is increased. Heat and momentum transfer When flow is turbulent, particles exhibit additional transverse motion which enhances the rate of energy and momentum exchange between them thus increasing the heat transfer and the friction coefficient. Assume for a two-dimensional turbulent flow that one was able to locate a specific point in the fluid and measure the actual flow velocity of every particle that passed through that point at any given time. Then one would find the actual flow velocity fluctuating about a mean value: and similarly for temperature () and pressure (), where the primed quantities denote fluctuations superposed to the mean. This decomposition of a flow variable into a mean value and a turbulent fluctuation was originally proposed by Osborne Reynolds in 1895, and is considered to be the beginning of the systematic mathematical analysis of turbulent flow, as a sub-field of fluid dynamics. While the mean values are taken as predictable variables determined by dynamics laws, the turbulent fluctuations are regarded as stochastic variables. The heat flux and momentum transfer (represented by the shear stress ) in the direction normal to the flow for a given time are where is the heat capacity at constant pressure, is the density of the fluid, is the coefficient of turbulent viscosity and is the turbulent thermal conductivity. Kolmogorov's theory of 1941 Richardson's notion of turbulence was that a turbulent flow is composed by "eddies" of different sizes. 
The sizes define a characteristic length scale for the eddies, which are also characterized by flow velocity scales and time scales (turnover time) dependent on the length scale. The large eddies are unstable and eventually break up originating smaller eddies, and the kinetic energy of the initial large eddy is divided into the smaller eddies that stemmed from it. These smaller eddies undergo the same process, giving rise to even smaller eddies which inherit the energy of their predecessor eddy, and so on. In this way, the energy is passed down from the large scales of the motion to smaller scales until reaching a sufficiently small length scale such that the viscosity of the fluid can effectively dissipate the kinetic energy into internal energy. In his original theory of 1941, Kolmogorov postulated that for very high Reynolds numbers, the small-scale turbulent motions are statistically isotropic (i.e. no preferential spatial direction could be discerned). In general, the large scales of a flow are not isotropic, since they are determined by the particular geometrical features of the boundaries (the size characterizing the large scales will be denoted as ). Kolmogorov's idea was that in the Richardson's energy cascade this geometrical and directional information is lost, while the scale is reduced, so that the statistics of the small scales has a universal character: they are the same for all turbulent flows when the Reynolds number is sufficiently high. Thus, Kolmogorov introduced a second hypothesis: for very high Reynolds numbers the statistics of small scales are universally and uniquely determined by the kinematic viscosity and the rate of energy dissipation . With only these two parameters, the unique length that can be formed by dimensional analysis is This is today known as the Kolmogorov length scale (see Kolmogorov microscales). A turbulent flow is characterized by a hierarchy of scales through which the energy cascade takes place. Dissipation of kinetic energy takes place at scales of the order of Kolmogorov length , while the input of energy into the cascade comes from the decay of the large scales, of order . These two scales at the extremes of the cascade can differ by several orders of magnitude at high Reynolds numbers. In between there is a range of scales (each one with its own characteristic length ) that has formed at the expense of the energy of the large ones. These scales are very large compared with the Kolmogorov length, but still very small compared with the large scale of the flow (i.e. ). Since eddies in this range are much larger than the dissipative eddies that exist at Kolmogorov scales, kinetic energy is essentially not dissipated in this range, and it is merely transferred to smaller scales until viscous effects become important as the order of the Kolmogorov scale is approached. Within this range inertial effects are still much larger than viscous effects, and it is possible to assume that viscosity does not play a role in their internal dynamics (for this reason this range is called "inertial range"). Hence, a third hypothesis of Kolmogorov was that at very high Reynolds number the statistics of scales in the range are universally and uniquely determined by the scale and the rate of energy dissipation . The way in which the kinetic energy is distributed over the multiplicity of scales is a fundamental characterization of a turbulent flow. 
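In the usual notation (assuming ν for the kinematic viscosity and ε for the rate of energy dissipation, as in the hypothesis just stated), the standard Kolmogorov scales are

\[
\eta = \left(\frac{\nu^{3}}{\varepsilon}\right)^{1/4},
\qquad
\tau_{\eta} = \left(\frac{\nu}{\varepsilon}\right)^{1/2},
\qquad
u_{\eta} = (\nu\,\varepsilon)^{1/4},
\]

of which the length η is the unique length scale that dimensional analysis can form from ν and ε alone, with τ_η and u_η the corresponding time and velocity scales.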
For homogeneous turbulence (i.e., statistically invariant under translations of the reference frame) this is usually done by means of the energy spectrum function , where is the modulus of the wavevector corresponding to some harmonics in a Fourier representation of the flow velocity field : where is the Fourier transform of the flow velocity field. Thus, represents the contribution to the kinetic energy from all the Fourier modes with , and therefore, where is the mean turbulent kinetic energy of the flow. The wavenumber corresponding to length scale is . Therefore, by dimensional analysis, the only possible form for the energy spectrum function according with the third Kolmogorov's hypothesis is where would be a universal constant. This is one of the most famous results of Kolmogorov 1941 theory, describing transport of energy through scale space without any loss or gain. The Kolmogorov five-thirds law was first observed in a tidal channel, and considerable experimental evidence has since accumulated that supports it. Outside of the inertial area, one can find the formula below : In spite of this success, Kolmogorov theory is at present under revision. This theory implicitly assumes that the turbulence is statistically self-similar at different scales. This essentially means that the statistics are scale-invariant and non-intermittent in the inertial range. A usual way of studying turbulent flow velocity fields is by means of flow velocity increments: that is, the difference in flow velocity between points separated by a vector (since the turbulence is assumed isotropic, the flow velocity increment depends only on the modulus of ). Flow velocity increments are useful because they emphasize the effects of scales of the order of the separation when statistics are computed. The statistical scale-invariance without intermittency implies that the scaling of flow velocity increments should occur with a unique scaling exponent , so that when is scaled by a factor , should have the same statistical distribution as with independent of the scale . From this fact, and other results of Kolmogorov 1941 theory, it follows that the statistical moments of the flow velocity increments (known as structure functions in turbulence) should scale as where the brackets denote the statistical average, and the would be universal constants. There is considerable evidence that turbulent flows deviate from this behavior. The scaling exponents deviate from the value predicted by the theory, becoming a non-linear function of the order of the structure function. The universality of the constants have also been questioned. For low orders the discrepancy with the Kolmogorov value is very small, which explain the success of Kolmogorov theory in regards to low order statistical moments. In particular, it can be shown that when the energy spectrum follows a power law with , the second order structure function has also a power law, with the form Since the experimental values obtained for the second order structure function only deviate slightly from the value predicted by Kolmogorov theory, the value for is very near to (differences are about 2%). Thus the "Kolmogorov − spectrum" is generally observed in turbulence. However, for high order structure functions, the difference with the Kolmogorov scaling is significant, and the breakdown of the statistical self-similarity is clear. 
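For reference, the standard Kolmogorov 1941 forms of the results discussed above (written here with C for the Kolmogorov constant, ε for the dissipation rate, k for the wavenumber, r for the separation and δu(r) for the flow velocity increment; this notation is assumed rather than taken from the article) are

\[
E(k) = C\,\varepsilon^{2/3}\,k^{-5/3},
\qquad
S_{n}(r) = \langle\left(\delta u(r)\right)^{n}\rangle = C_{n}\,(\varepsilon r)^{n/3},
\]

which correspond to the linear scaling exponents ζ_n = n/3; it is the measured deviation of the high-order exponents from this linear law that signals intermittency, as discussed in the surrounding paragraphs.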
This behavior, and the lack of universality of the constants, are related with the phenomenon of intermittency in turbulence and can be related to the non-trivial scaling behavior of the dissipation rate averaged over scale . This is an important area of research in this field, and a major goal of the modern theory of turbulence is to understand what is universal in the inertial range, and how to deduce intermittency properties from the Navier-Stokes equations, i.e. from first principles. See also Astronomical seeing Atmospheric dispersion modeling Chaos theory Clear-air turbulence Different types of boundary conditions in fluid dynamics Eddy covariance Fluid dynamics Darcy–Weisbach equation Eddy Navier–Stokes equations Large eddy simulation Hagen–Poiseuille equation Kelvin–Helmholtz instability Lagrangian coherent structure Turbulence kinetic energy Mesocyclones Navier–Stokes existence and smoothness Swing bowling Taylor microscale Turbulence modeling Velocimetry Vertical draft Vortex Vortex generator Wake turbulence Wave turbulence Wingtip vortices Wind tunnel Notes References Further reading Original scientific research papers and classic monographs Translated into English: Translated into English: External links Center for Turbulence Research, Scientific papers and books on turbulence Center for Turbulence Research, Stanford University Scientific American article Air Turbulence Forecast international CFD database iCFDdatabase Fluid Mechanics website with movies, Q&A, etc Johns Hopkins public database with direct numerical simulation data TurBase public database with experimental data from European High Performance Infrastructures in Turbulence (EuHIT) Concepts in physics Aerodynamics Chaos theory Transport phenomena Fluid dynamics Flow regimes
Turbulence
[ "Physics", "Chemistry", "Engineering" ]
5,097
[ "Transport phenomena", "Physical phenomena", "Turbulence", "Chemical engineering", "Aerodynamics", "Flow regimes", "nan", "Aerospace engineering", "Piping", "Fluid dynamics" ]
154,665
https://en.wikipedia.org/wiki/Vortex
In fluid dynamics, a vortex (: vortices or vortexes) is a region in a fluid in which the flow revolves around an axis line, which may be straight or curved. Vortices form in stirred fluids, and may be observed in smoke rings, whirlpools in the wake of a boat, and the winds surrounding a tropical cyclone, tornado or dust devil. Vortices are a major component of turbulent flow. The distribution of velocity, vorticity (the curl of the flow velocity), as well as the concept of circulation are used to characterise vortices. In most vortices, the fluid flow velocity is greatest next to its axis and decreases in inverse proportion to the distance from the axis. In the absence of external forces, viscous friction within the fluid tends to organise the flow into a collection of irrotational vortices, possibly superimposed to larger-scale flows, including larger-scale vortices. Once formed, vortices can move, stretch, twist, and interact in complex ways. A moving vortex carries some angular and linear momentum, energy, and mass, with it. Overview In the dynamics of fluid, a vortex is fluid that revolves around the line of flow. This flow of fluid might be curved or straight. Vortices form from stirred fluids: they might be observed in smoke rings, whirlpools, in the wake of a boat or the winds around a tornado or dust devil. Vortices are an important part of turbulent flow. Vortices can otherwise be known as a circular motion of a liquid. In the cases of the absence of forces, the liquid settles. This makes the water stay still instead of moving. When they are created, vortices can move, stretch, twist and interact in complicated ways. When a vortex is moving, sometimes, it can affect an angular position. For an example, if a water bucket is rotated or spun constantly, it will rotate around an invisible line called the axis line. The rotation moves around in circles. In this example the rotation of the bucket creates extra force. The reason that the vortices can change shape is the fact that they have open particle paths. This can create a moving vortex. Examples of this fact are the shapes of tornadoes and drain whirlpools. When two or more vortices are close together they can merge to make a vortex. Vortices also hold energy in its rotation of the fluid. If the energy is never removed, it would consist of circular motion forever. Properties Vorticity A key concept in the dynamics of vortices is the vorticity, a vector that describes the local rotary motion at a point in the fluid, as would be perceived by an observer that moves along with it. Conceptually, the vorticity could be observed by placing a tiny rough ball at the point in question, free to move with the fluid, and observing how it rotates about its center. The direction of the vorticity vector is defined to be the direction of the axis of rotation of this imaginary ball (according to the right-hand rule) while its length is twice the ball's angular velocity. Mathematically, the vorticity is defined as the curl (or rotational) of the velocity field of the fluid, usually denoted by and expressed by the vector analysis formula , where is the nabla operator and is the local flow velocity. The local rotation measured by the vorticity must not be confused with the angular velocity vector of that portion of the fluid with respect to the external environment or to any fixed axis. In a vortex, in particular, may be opposite to the mean angular velocity vector of the fluid relative to the vortex's axis. 
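In the usual vector notation (assuming u for the flow velocity field), the vector-analysis definition of vorticity described above reads

\[
\boldsymbol{\omega} = \nabla \times \mathbf{u},
\]

i.e. the curl of the velocity field: its direction gives the local axis of rotation by the right-hand rule, and its magnitude is twice the local angular velocity of a small fluid element.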
Vortex types In theory, the speed of the particles (and, therefore, the vorticity) in a vortex may vary with the distance from the axis in many ways. There are two important special cases, however: If the fluid rotates like a rigid body – that is, if the angular rotational velocity is uniform, so that increases proportionally to the distance from the axis – a tiny ball carried by the flow would also rotate about its center as if it were part of that rigid body. In such a flow, the vorticity is the same everywhere: its direction is parallel to the rotation axis, and its magnitude is equal to twice the uniform angular velocity of the fluid around the center of rotation. If the particle speed is inversely proportional to the distance from the axis, then the imaginary test ball would not rotate over itself; it would maintain the same orientation while moving in a circle around the vortex axis. In this case the vorticity is zero at any point not on that axis, and the flow is said to be irrotational. Irrotational vortices In the absence of external forces, a vortex usually evolves fairly quickly toward the irrotational flow pattern, where the flow velocity is inversely proportional to the distance . Irrotational vortices are also called free vortices. For an irrotational vortex, the circulation is zero along any closed contour that does not enclose the vortex axis; and has a fixed value, , for any contour that does enclose the axis once. The tangential component of the particle velocity is then . The angular momentum per unit mass relative to the vortex axis is therefore constant, . The ideal irrotational vortex flow in free space is not physically realizable, since it would imply that the particle speed (and hence the force needed to keep particles in their circular paths) would grow without bound as one approaches the vortex axis. Indeed, in real vortices there is always a core region surrounding the axis where the particle velocity stops increasing and then decreases to zero as goes to zero. Within that region, the flow is no longer irrotational: the vorticity becomes non-zero, with direction roughly parallel to the vortex axis. The Rankine vortex is a model that assumes a rigid-body rotational flow where is less than a fixed distance 0, and irrotational flow outside that core regions. In a viscous fluid, irrotational flow contains viscous dissipation everywhere, yet there are no net viscous forces, only viscous stresses. Due to the dissipation, this means that sustaining an irrotational viscous vortex requires continuous input of work at the core (for example, by steadily turning a cylinder at the core). In free space there is no energy input at the core, and thus the compact vorticity held in the core will naturally diffuse outwards, converting the core to a gradually-slowing and gradually-growing rigid-body flow, surrounded by the original irrotational flow. Such a decaying irrotational vortex has an exact solution of the viscous Navier–Stokes equations, known as a Lamb–Oseen vortex. Rotational vortices A rotational vortex – a vortex that rotates in the same way as a rigid body – cannot exist indefinitely in that state except through the application of some extra force, that is not generated by the fluid motion itself. It has non-zero vorticity everywhere outside the core. Rotational vortices are also called rigid-body vortices or forced vortices. 
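The two velocity profiles just described are combined in the Rankine vortex model mentioned above. The short Python sketch below illustrates the resulting tangential velocity, rigid-body rotation inside a core of radius r0 and free-vortex (1/r) decay outside it; the circulation and core radius are arbitrary example numbers, and the function is an illustration of the model rather than code from any reference implementation.

```python
# Illustrative sketch: tangential velocity of a Rankine vortex.
import math

GAMMA = 1.0   # circulation (arbitrary example value)
R0 = 0.1      # core radius in metres (arbitrary example value)

def tangential_velocity(r):
    """Rigid-body rotation for r <= R0, irrotational (free) vortex for r > R0."""
    if r <= R0:
        return GAMMA * r / (2.0 * math.pi * R0**2)   # u_theta grows linearly with r
    return GAMMA / (2.0 * math.pi * r)               # u_theta falls off as 1/r

for r in (0.02, 0.05, 0.1, 0.2, 0.5):
    print(f"r = {r:4.2f} m  u_theta = {tangential_velocity(r):.3f} m/s")
```

The two branches match at r = R0, so the velocity is continuous; the vorticity is uniform inside the core and zero outside, which is the behaviour the surrounding text ascribes to real vortices with a finite core.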
For example, if a water bucket is spun at constant angular speed about its vertical axis, the water will eventually rotate in rigid-body fashion. The particles will then move along circles, with velocity equal to . In that case, the free surface of the water will assume a parabolic shape. In this situation, the rigid rotating enclosure provides an extra force, namely an extra pressure gradient in the water, directed inwards, that prevents transition of the rigid-body flow to the irrotational state. Vortex formation on boundaries Vortex structures are defined by their vorticity, the local rotation rate of fluid particles. They can be formed via the phenomenon known as boundary layer separation which can occur when a fluid moves over a surface and experiences a rapid acceleration from the fluid velocity to zero due to the no-slip condition. This rapid negative acceleration creates a boundary layer which causes a local rotation of fluid at the wall (i.e. vorticity) which is referred to as the wall shear rate. The thickness of this boundary layer is proportional to (where v is the free stream fluid velocity and t is time). If the diameter or thickness of the vessel or fluid is less than the boundary layer thickness then the boundary layer will not separate and vortices will not form. However, when the boundary layer does grow beyond this critical boundary layer thickness then separation will occur which will generate vortices. This boundary layer separation can also occur in the presence of combatting pressure gradients (i.e. a pressure that develops downstream). This is present in curved surfaces and general geometry changes like a convex surface. A unique example of severe geometric changes is at the trailing edge of a bluff body where the fluid flow deceleration, and therefore boundary layer and vortex formation, is located. Another form of vortex formation on a boundary is when fluid flows perpendicularly into a wall and creates a splash effect. The velocity streamlines are immediately deflected and decelerated so that the boundary layer separates and forms a toroidal vortex ring. Vortex geometry In a stationary vortex, the typical streamline (a line that is everywhere tangent to the flow velocity vector) is a closed loop surrounding the axis; and each vortex line (a line that is everywhere tangent to the vorticity vector) is roughly parallel to the axis. A surface that is everywhere tangent to both flow velocity and vorticity is called a vortex tube. In general, vortex tubes are nested around the axis of rotation. The axis itself is one of the vortex lines, a limiting case of a vortex tube with zero diameter. According to Helmholtz's theorems, a vortex line cannot start or end in the fluid – except momentarily, in non-steady flow, while the vortex is forming or dissipating. In general, vortex lines (in particular, the axis line) are either closed loops or end at the boundary of the fluid. A whirlpool is an example of the latter, namely a vortex in a body of water whose axis ends at the free surface. A vortex tube whose vortex lines are all closed will be a closed torus-like surface. A newly created vortex will promptly extend and bend so as to eliminate any open-ended vortex lines. For example, when an airplane engine is started, a vortex usually forms ahead of each propeller, or the turbofan of each jet engine. One end of the vortex line is attached to the engine, while the other end usually stretches out and bends until it reaches the ground. 
When vortices are made visible by smoke or ink trails, they may seem to have spiral pathlines or streamlines. However, this appearance is often an illusion and the fluid particles are moving in closed paths. The spiral streaks that are taken to be streamlines are in fact clouds of the marker fluid that originally spanned several vortex tubes and were stretched into spiral shapes by the non-uniform flow velocity distribution. Pressure in a vortex The fluid motion in a vortex creates a dynamic pressure (in addition to any hydrostatic pressure) that is lowest in the core region, closest to the axis, and increases as one moves away from it, in accordance with Bernoulli's principle. One can say that it is the gradient of this pressure that forces the fluid to follow a curved path around the axis. In a rigid-body vortex flow of a fluid with constant density, the dynamic pressure is proportional to the square of the distance from the axis. In a constant gravity field, the free surface of the liquid, if present, is a concave paraboloid. In an irrotational vortex flow with constant fluid density and cylindrical symmetry, the dynamic pressure varies as , where is the limiting pressure infinitely far from the axis. This formula provides another constraint for the extent of the core, since the pressure cannot be negative. The free surface (if present) dips sharply near the axis line, with depth inversely proportional to . The shape formed by the free surface is called a hyperboloid, or "Gabriel's Horn" (by Evangelista Torricelli). The core of a vortex in air is sometimes visible because water vapor condenses as the low pressure of the core causes adiabatic cooling; the funnel of a tornado is an example. When a vortex line ends at a boundary surface, the reduced pressure may also draw matter from that surface into the core. For example, a dust devil is a column of dust picked up by the core of an air vortex attached to the ground. A vortex that ends at the free surface of a body of water (like the whirlpool that often forms over a bathtub drain) may draw a column of air down the core. The forward vortex extending from a jet engine of a parked airplane can suck water and small stones into the core and then into the engine. Evolution Vortices need not be steady-state features; they can move and change shape. In a moving vortex, the particle paths are not closed, but are open, loopy curves like helices and cycloids. A vortex flow might also be combined with a radial or axial flow pattern. In that case the streamlines and pathlines are not closed curves but spirals or helices, respectively. This is the case in tornadoes and in drain whirlpools. A vortex with helical streamlines is said to be solenoidal. As long as the effects of viscosity and diffusion are negligible, the fluid in a moving vortex is carried along with it. In particular, the fluid in the core (and matter trapped by it) tends to remain in the core as the vortex moves about. This is a consequence of Helmholtz's second theorem. Thus vortices (unlike surface waves and pressure waves) can transport mass, energy and momentum over considerable distances compared to their size, with surprisingly little dispersion. This effect is demonstrated by smoke rings and exploited in vortex ring toys and guns. Two or more vortices that are approximately parallel and circulating in the same direction will attract and eventually merge to form a single vortex, whose circulation will equal the sum of the circulations of the constituent vortices. 
For example, an airplane wing that is developing lift will create a sheet of small vortices at its trailing edge. These small vortices merge to form a single wingtip vortex, less than one wing chord downstream of that edge. This phenomenon also occurs with other active airfoils, such as propeller blades. On the other hand, two parallel vortices with opposite circulations (such as the two wingtip vortices of an airplane) tend to remain separate. Vortices contain substantial energy in the circular motion of the fluid. In an ideal fluid this energy can never be dissipated and the vortex would persist forever. However, real fluids exhibit viscosity and this dissipates energy very slowly from the core of the vortex. It is only through dissipation of a vortex due to viscosity that a vortex line can end in the fluid, rather than at the boundary of the fluid. Further examples In the hydrodynamic interpretation of the behaviour of electromagnetic fields, the acceleration of electric fluid in a particular direction creates a positive vortex of magnetic fluid. This in turn creates around itself a corresponding negative vortex of electric fluid. Exact solutions to classical nonlinear magnetic equations include the Landau–Lifshitz equation, the continuum Heisenberg model, the Ishimori equation, and the nonlinear Schrödinger equation. Vortex rings are torus-shaped vortices where the axis of rotation is a continuous closed curve. Smoke rings and bubble rings are two well-known examples. The lifting force of aircraft wings, propeller blades, sails, and other airfoils can be explained by the creation of a vortex superimposed on the flow of air past the wing. Aerodynamic drag can be explained in large part by the formation of vortices in the surrounding fluid that carry away energy from the moving body. Large whirlpools can be produced by ocean tides in certain straits or bays. Examples are Charybdis of classical mythology in the Straits of Messina, Italy; the Naruto whirlpools of Nankaido, Japan; and the Maelstrom at Lofoten, Norway. Vortices in the Earth's atmosphere are important phenomena for meteorology. They include mesocyclones on the scale of a few miles, tornadoes, waterspouts, and hurricanes. These vortices are often driven by temperature and humidity variations with altitude. The sense of rotation of hurricanes is influenced by the Earth's rotation. Another example is the Polar vortex, a persistent, large-scale cyclone centered near the Earth's poles, in the middle and upper troposphere and the stratosphere. Vortices are prominent features of the atmospheres of other planets. They include the permanent Great Red Spot on Jupiter, the intermittent Great Dark Spot on Neptune, the polar vortices of Venus, the Martian dust devils and the North Polar Hexagon of Saturn. Sunspots are dark regions on the Sun's visible surface (photosphere) marked by a lower temperature than its surroundings, and intense magnetic activity. The accretion disks of black holes and other massive gravitational sources. Taylor–Couette flow occurs in a fluid between two nested cylinders, one rotating, the other fixed. See also References Notes Other External links Optical Vortices Video of two water vortex rings colliding (MPEG) Chapter 3 Rotational Flows: Circulation and Turbulence Vortical Flow Research Lab (MIT) – Study of flows found in nature and part of the Department of Ocean Engineering. Rotation Aerodynamics Fluid dynamics
Vortex
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
3,689
[ "Physical phenomena", "Vortices", "Dynamical systems", "Chemical engineering", "Classical mechanics", "Rotation", "Aerodynamics", "Motion (physics)", "Aerospace engineering", "Piping", "Fluid dynamics" ]
154,675
https://en.wikipedia.org/wiki/Allied%20military%20phonetic%20spelling%20alphabets
The Allied military phonetic spelling alphabets prescribed the words that are used to represent each letter of the alphabet, when spelling other words out loud, letter-by-letter, and how the spelling words should be pronounced for use by the Allies of World War II. They are not a "phonetic alphabet" in the sense in which that term is used in phonetics, i.e. they are not a system for transcribing speech sounds. The Allied militaries – primarily the US and the UK – had their own radiotelephone spelling alphabets which had origins back to World War I and had evolved separately in the different services in the two countries. For communication between the different countries and different services specific alphabets were mandated. The last WWII spelling alphabet continued to be used through the Korean War, being replaced in 1956 as a result of both countries adopting the ICAO/ITU Radiotelephony Spelling Alphabet, with the NATO members calling their usage the "NATO Phonetic Alphabet". During WWII, the Allies had defined terminology to describe the scope of communications procedures among different services and nations. A summary of the terms used was published in a post-WWII NATO memo: combined—between services of one nation and those of another nation, but not necessarily within or between the services of the individual nations joint—between (but not necessarily within) two or more services of one nation intra—within a service (but not between services) of one nation Thus, the Combined Communications Board (CCB), created in 1941, derived a spelling alphabet that was mandated for use when any US military branch was communicating with any British military branch; when operating without any British forces, the Joint Army/Navy spelling alphabet was mandated for use whenever the US Army and US Navy were communicating in joint operations; if the US Army was operating on its own, it would use its own spelling alphabet, in which some of the letters were identical to the other spelling alphabets and some completely different. WWII CCB (ICAO) and NATO alphabets The US and UK began to coordinate calling alphabets by the military during World War II and by 1943 they had settled on a streamline communications that became known as the CCB. Both nations had previous independently developed alphabet naming system dating back to World War I. Subsequently, this second world war era letter naming became accepted as standard by the ICAO in 1947. After the creation of NATO in 1949, modifications began to take place. An alternative name for the ICAO spelling alphabet, "NATO phonetic alphabet", exists because it appears in Allied Tactical Publication ATP-1, Volume II: Allied Maritime Signal and Maneuvering Book used by all navies of NATO, which adopted a modified form of the International Code of Signals. Because the latter allows messages to be spelled via flags or Morse code, it naturally named the code words used to spell out messages by voice its "phonetic alphabet". The name NATO phonetic alphabet became widespread because the signals used to facilitate the naval communications and tactics of NATO have become global. However, ATP-1 is marked NATO Confidential (or the lower NATO Restricted) so it is not available publicly. Nevertheless, a NATO unclassified version of the document is provided to foreign, even hostile, militaries, even though they are not allowed to make it available publicly. The spelling alphabet is now also defined in other unclassified international military documents. 
The NATO alphabet appeared in some United States Air Force Europe publications during the Cold War. A particular example was the Ramstein Air Base Telephone Directory, published between 1969 and 1973 (currently out of print). The US and NATO versions had differences, and the translation was provided as a convenience. Differences included Alfa, Bravo and Able, Baker for the first two letters. The NATO phonetic spelling alphabet was first adopted on January 1, 1956, while the ICAO radiotelephony spelling alphabet was still undergoing final changes. United Kingdom military spelling alphabets British Army radiotelephony spelling alphabet Royal Navy radiotelephony spelling alphabet RAF radiotelephony spelling alphabet The RAF radiotelephony spelling alphabet, sometimes referred to as the "RAF Phonetic Alphabet", was used by the British Royal Air Force (RAF) to aid communication after the take-up of radio, especially to spell out aircraft identification letters, e.g. "H for Harry", "G for George", etc. Several alphabets were used, before being superseded by the adoption of the NATO/ICAO radiotelephony alphabet. History During World War I battle lines were often static and forces were commonly linked by wired telephone networks. Signals were weak on long wire runs and field telephone systems often used a single wire with earth return, which made them subject to inadvertent and deliberate interference. Spelling alphabets were introduced for wire telephony as well as on the newer radio voice equipment. The British Army and the Royal Navy had developed their own quite separate spelling alphabets. The Navy system was a full alphabet, starting: Apples, Butter, Charlie, Duff, Edward, but the RAF alphabet was based on that of the "signalese" of the army signallers. This was not a full alphabet, but differentiated only the letters most frequently misunderstood: Ack (originally "Ak"), Beer (or Bar), C, D, E, F, G, H, I, J, K, L, eMma, N, O, Pip, Q, R, eSses, Toc, U, Vic, W, X, Y, Z. By 1921, the RAF "Telephony Spelling Alphabet" had been adopted by all three armed services, and was then made mandatory for UK civil aviation, as announced in Notice to Airmen Number 107. In 1956, the NATO phonetic alphabet was adopted due to the RAF's wide commitments with NATO and worldwide sharing of civil aviation facilities. The choice of Nuts following Monkey is probably from "monkey nuts" (peanuts); likewise Orange and Pip can be similarly paired, as in "orange pip". "Vic" subsequently entered the English language as the standard "Vee"-shaped flight pattern of three aircraft. United States military spelling alphabets US Army radiotelephony spelling alphabet 'Interrogatory' was used in place of 'Inter' in joint Army/Navy Operations. US Navy radiotelephony spelling alphabet The US Navy's first phonetic spelling alphabet was not used for radio, but was instead used on the deck of ships "in calling out flags to be hoisted in a signal". There were two alternative alphabets used, which were almost completely different from each other, with only the code word "Xray" in common. The US Navy's first radiotelephony phonetic spelling alphabet was published in 1913, in the Naval Radio Service's Handbook of Regulations developed by Captain William H. G. Bullard. The Handbook's procedures were described in the November 1917 edition of Popular Science Monthly. 
Joint Army/Navy radiotelephony spelling alphabet The Joint Army/Navy (JAN) spelling alphabet was developed by the Joint Board on November 13, 1940, and it took effect on March 1, 1941. It was reformulated by the CCB following the entrance of the US into World War II by the CCB "Methods and Procedures" committee, and was used by all branches of the United States Armed Forces until the promulgation of its replacement, the ICAO spelling alphabet (Alfa, Bravo, etc.), in 1956. Before the JAN phonetic alphabet, each branch of the armed forces had used its own radio alphabet, leading to difficulties in interbranch communication. The US Army used this alphabet in modified form, along with the British Army and Canadian Army from 1943 onward, with "Sugar" replacing "Sail". The JAN spelling alphabet was used to name Atlantic basin storms during hurricane season from 1947 to 1952, before being replaced with a new system of using female names. Vestiges of the JAN spelling system remain in use in the US Navy, in the form of Material Conditions of Readiness, used in damage control. Dog, William, X-Ray, Yoke, and Zebra all reference designations of fittings, hatches, or doors. The response "Roger" for "· – ·" or "R", to mean "received", also derives from this alphabet. The names Able to Fox were also widely used in the early days of hexadecimal digital encoding of text, for speaking the hexadecimal digits A to F (equivalent to decimal 10 to 15), although the written form was simply the capital letters A to F. See also Allied Communication Procedures International Code of Signals Spelling alphabet APCO radiotelephony spelling alphabet Cockney alphabet German phonetic alphabet Greek spelling alphabet ICAO radiotelephony spelling alphabet Toc H—example of signalese carry-over References External links Signal Flags and the Phonetic Alphabet—NavSource Naval History Visual Signaling, Signal Corps, United States Army, 1910—a book at the Internet Archive 1941 in military history History of the United States Army Military communications of the United Kingdom Military communications Phonetic Alphabet, RAF Spelling alphabets Telecommunications-related introductions in 1941 United States Navy Allies of World War II de:Buchstabiertafel#Joint Army/Navy Phonetic Alphabet (Able, Baker, …)
Allied military phonetic spelling alphabets
[ "Engineering" ]
1,885
[ "Military communications", "Telecommunications engineering" ]
154,711
https://en.wikipedia.org/wiki/Aerospace
Aerospace is a term used to refer collectively to the atmosphere and outer space. Aerospace activity is very diverse, with a multitude of commercial, industrial, and military applications. Aerospace engineering consists of aeronautics and astronautics. Aerospace organizations research, design, manufacture, operate, maintain, and repair both aircraft and spacecraft. The boundary between the air and space is commonly placed at 100 km (62 mi) above the ground, on the physical reasoning that above this altitude the air density is too low for a lifting body to generate meaningful lift without exceeding orbital velocity. Overview In most industrial countries, the aerospace industry is a co-operation of the public and private sectors. For example, several states have a civilian space program funded by the government, such as the National Aeronautics and Space Administration in the United States, the European Space Agency in Europe, the Canadian Space Agency in Canada, the Indian Space Research Organisation in India, the Japan Aerospace Exploration Agency in Japan, the Roscosmos State Corporation for Space Activities in Russia, the China National Space Administration in China, SUPARCO in Pakistan, the Iranian Space Agency in Iran, and the Korea Aerospace Research Institute in South Korea. Along with these public space programs, many companies produce technical tools and components such as spacecraft and satellites. Some well-known companies involved in space programs include Boeing, Cobham, Airbus, SpaceX, Lockheed Martin, RTX Corporation, MDA and Northrop Grumman. These companies are also involved in other areas of aerospace, such as the construction of aircraft. History Modern aerospace began with the engineer George Cayley in 1799. Cayley proposed an aircraft with a "fixed wing and a horizontal and vertical tail," defining characteristics of the modern aeroplane. The Aeronautical Society of Great Britain was founded in 1866, followed in the 20th century by the American Rocket Society and the Institute of the Aeronautical Sciences, all of which made aeronautics a more serious scientific discipline. Airmen like Otto Lilienthal, who introduced cambered airfoils in 1891, used gliders to analyze aerodynamic forces. The Wright brothers were interested in Lilienthal's work and read several of his publications. They also found inspiration in Octave Chanute, an airman and the author of Progress in Flying Machines (1894). It was the preliminary work of Cayley, Lilienthal, Chanute, and other early aerospace engineers that brought about the first powered sustained flight at Kitty Hawk, North Carolina, on December 17, 1903, by the Wright brothers. War and science fiction inspired scientists and engineers like Konstantin Tsiolkovsky and Wernher von Braun to achieve flight beyond the atmosphere. During World War II, von Braun led the development of the V-2 rocket. The launch of Sputnik 1 in October 1957 started the Space Age, and on July 20, 1969, Apollo 11 achieved the first crewed Moon landing. In April 1981, the Space Shuttle Columbia made its first launch, beginning regular crewed access to orbital space. A sustained human presence in orbital space started with Mir in 1986 and is continued by the International Space Station. Space commercialization and space tourism are more recent features of aerospace. Manufacturing Aerospace manufacturing is a high-technology industry that produces "aircraft, guided missiles, space vehicles, aircraft engines, propulsion units, and related parts". Most of the industry is geared toward governmental work. 
For each original equipment manufacturer (OEM), the US government has assigned a Commercial and Government Entity (CAGE) code. These codes help to identify each manufacturer, repair facilities, and other critical aftermarket vendors in the aerospace industry. In the United States, the Department of Defense and the National Aeronautics and Space Administration (NASA) are the two largest consumers of aerospace technology and products. Others include the very large airline industry. The aerospace industry employed 472,000 wage and salary workers in 2006. Most of those jobs were in Washington state and in California, with Missouri, New York and Texas also being important. The leading aerospace manufacturers in the U.S. are Boeing, United Technologies Corporation, SpaceX, Northrop Grumman and Lockheed Martin. As talented American employees age and retire, these manufacturers face an expanding labor shortfall. In order to supply the industrial sector with fresh workers, apprenticeship programs like the Aerospace Joint Apprenticeship Council (AJAC) collaborate with community colleges and aerospace firms in Washington state. Important locations of the civilian aerospace industry worldwide include Washington state (Boeing), California (Boeing, Lockheed Martin, etc.) and Montreal, Quebec, Canada (Bombardier, Pratt & Whitney Canada) in North America; Toulouse, France (Airbus SE) and Hamburg, Germany (Airbus SE) in Europe; as well as São José dos Campos, Brazil (Embraer), Querétaro, Mexico (Bombardier Aerospace, General Electric Aviation) and Mexicali, Mexico (United Technologies Corporation, Gulfstream Aerospace) in Latin America. In the European Union, aerospace companies such as Airbus SE, Safran, Thales, Dassault Aviation, Leonardo and Saab AB account for a large share of the global aerospace industry and research effort, with the European Space Agency as one of the largest consumers of aerospace technology and products. In India, Bangalore is a major center of the aerospace industry, where Hindustan Aeronautics Limited, the National Aerospace Laboratories and the Indian Space Research Organisation are headquartered. The Indian Space Research Organisation (ISRO) launched India's first Moon orbiter, Chandrayaan-1, in October 2008. In Russia, large aerospace companies like Oboronprom and the United Aircraft Building Corporation (encompassing Mikoyan, Sukhoi, Ilyushin, Tupolev, Yakovlev, and Irkut which includes Beriev) are among the major global players in this industry. The historic Soviet Union was also the home of a major aerospace industry. The United Kingdom formerly attempted to maintain its own large aerospace industry, making its own airliners and warplanes, but it has largely turned its lot over to cooperative efforts with continental companies, and it has turned into a large import customer, too, from countries such as the United States. However, the United Kingdom has a very active aerospace sector, with major companies such as BAE Systems, supplying fully assembled aircraft, aircraft components, sub-assemblies and sub-systems to other manufacturers, both in Europe and all over the world. Canada has formerly manufactured some of its own designs for jet warplanes, etc. (e.g. the CF-100 fighter), but for some decades, it has relied on imports from the United States and Europe to fill these needs. However Canada still manufactures some military aircraft although they are generally not combat capable. 
Another notable example was the late 1950s development of the Avro Canada CF-105 Arrow, a supersonic fighter-interceptor whose 1959 cancellation was considered highly controversial. France has continued to make its own warplanes for its air force and navy, and Sweden continues to make its own warplanes for the Swedish Air Force—especially in support of its position as a neutral country. (See Saab AB.) Other European countries either team up in making fighters (such as the Panavia Tornado and the Eurofighter Typhoon), or else to import them from the United States. Pakistan has a developing aerospace engineering industry. The National Engineering and Scientific Commission, Khan Research Laboratories and Pakistan Aeronautical Complex are among the premier organizations involved in research and development in this sector. Pakistan has the capability of designing and manufacturing guided rockets, missiles and space vehicles. The city of Kamra is home to the Pakistan Aeronautical Complex which contains several factories. This facility is responsible for manufacturing the MFI-17, MFI-395, K-8 and JF-17 Thunder aircraft. Pakistan also has the capability to design and manufacture both armed and unarmed unmanned aerial vehicles. In the People's Republic of China, Beijing, Xi'an, Chengdu, Shanghai, Shenyang and Nanchang are major research and manufacture centers of the aerospace industry. China has developed an extensive capability to design, test and produce military aircraft, missiles and space vehicles. Despite the cancellation in 1983 of the experimental Shanghai Y-10, China is still developing its civil aerospace industry. The aircraft parts industry was born out of the sale of second-hand or used aircraft parts from the aerospace manufacture sector. Within the United States there is a specific process that parts brokers or resellers must follow. This includes leveraging a certified repair station to overhaul and "tag" a part. This certification guarantees that a part was repaired or overhauled to meet OEM specifications. Once a part is overhauled its value is determined from the supply and demand of the aerospace market. When an airline has an aircraft on the ground, the part that the airline requires to get the plane back into service becomes invaluable. This can drive the market for specific parts. There are several online marketplaces that assist with the commodity selling of aircraft parts. In the aerospace and defense industry, much consolidation occurred at the end of the 20th century and in the early 21st century. Between 1988 and 2011, more than 6,068 mergers and acquisitions with a total known value of US$678 billion were announced worldwide. 
The largest transactions have been: The acquisition of Rockwell Collins by United Technologies Corporation for US$30.0 billion in 2018 The acquisition of Goodrich Corporation by United Technologies Corporation for US$16.2 billion in 2011 The merger of Allied Signal with Honeywell in a stock swap valued at US$15.6 billion in 1999 The merger of Boeing with McDonnell valued at US$13.4 billion in 1996 The acquisition of Marconi Electronic Systems, a subsidiary of GEC, by British Aerospace for US$12.9 billion in 1999 (now called: BAE Systems) The acquisition of Hughes Aircraft by Raytheon for US$9.5 billion in 1997 Technology Multiple technologies and innovations are used in aerospace, many of them pioneered around World War II: patented by Short Brothers, folding wings optimise aircraft carrier storage from a simple fold to the entire rotating wing of the V-22, and the wingtip fold of the Boeing 777X for airport compatibility. To improve low-speed performance, a de Havilland DH4 was modified by Handley Page to a monoplane with high-lift devices: full-span leading-edge slats and trailing-edge flaps; in 1924, Fowler flaps that extend backward and downward were invented in the US, and used on the Lockheed Model 10 Electra while in 1943 forward-hinged leading-edge Krueger flaps were invented in Germany and later used on the Boeing 707. The 1927 large Propeller Research Tunnel at NACA Langley confirmed that the landing gear was a major source of drag, in 1930 the Boeing Monomail featured a retractable gear. The flush rivet displaced the domed rivet in the 1930s and pneumatic rivet guns work in combination with a heavy reaction bucking bar; not depending on plastic deformation, specialist rivets were developed to improve fatigue life as shear fasteners like the Hi-Lok, threaded pins tightened until a collar breaks off with enough torque. First flown in 1935, the Queen Bee was a radio-controlled target drone derived from the Tiger Moth for Flak training; the Ryan Firebee was a jet-powered target drone developed into long-range reconnaissance UAVs: the Ryan Model 147 Fire Fly and Lightning Bug; the Israeli IAI Scout and Tadiran Mastiff launched a line of battlefield UAVs including the IAI Searcher; developed from the General Atomics Gnat long-endurance UAV for the CIA, the MQ-1 Predator led to the armed MQ-9 Reaper. At the end of World War I, piston engine power could be boosted by compressing intake air with a compressor, also compensating for decreasing air density with altitude, improved with 1930s turbochargers for the Boeing B-17 and the first pressurized airliners. The 1937 Hindenburg disaster ended the era of passenger airships but the US Navy used airships for anti-submarine warfare and airborne early warning into the 1960s, while small airships continue to be used for aerial advertising, sightseeing flights, surveillance and research, and the Airlander 10 or the Lockheed Martin LMH-1 continue to be developed. As US airlines were interested in high-altitude flying in the mid-1930s, the Lockheed XC-35 with a pressurized cabin was tested in 1937 and the Boeing 307 Stratoliner would be the first pressurized airliner to enter commercial service. In 1933, Plexiglas, a transparent Acrylic plastic, was introduced in Germany and shortly before World War II, was first used for aircraft windshields as it is lighter than glass, and the bubble canopy improved fighter pilots visibility. 
In January 1930, Royal Air Force pilot and engineer Frank Whittle filed a patent for a gas turbine aircraft engine with an inlet, compressor, combustor, turbine and nozzle, while an independent turbojet was developed by researcher Hans von Ohain in Germany; both engines ran within weeks in early 1937 and the Heinkel HeS 3-propelled Heinkel He 178 experimental aircraft made its first flight on Aug 27, 1939 while the Whittle W.1-powered Gloster E.28/39 prototype flew on May 15, 1941. In 1935, Britain demonstrated aircraft radio detection and ranging and in 1940 the RAF introduced the first VHF airborne radars on Bristol Blenheims, then higher-resolution microwave-frequency radar with a cavity magnetron on Bristol Beaufighters in 1941, and in 1959 the radar-homing Hughes AIM-4 Falcon became the first US guided missile on the Convair F-106. In the early 1940s, British Hurricane and Spitfire pilots wore g-suits to prevent G-LOC due to blood pooling in the lower body in high g situations; Mayo Clinic researchers developed air-filled bladders to replace water-filled bladders and in 1943 the US military began using pressure suits from the David Clark Company. The modern ejection seat was developed during World War II, a seat on rails ejected by rockets before deploying a parachute, which could have been enhanced by the USAF in the late 1960s as a turbojet-powered autogyro with 50 nm of range, the Kaman KSA-100 SAVER. In 1942, numerical control machining was conceived by machinist John T. Parsons to cut complex structures from solid blocks of alloy, rather than assembling them, improving quality, reducing weight, and saving time and cost to produce bulkheads or wing skins. In World War II, the German V-2 combined gyroscopes, an accelerometer and a primitive computer for real-time inertial navigation allowing dead reckoning without reference to landmarks or guide stars, leading to packaged IMUs for spacecraft and aircraft. The UK Miles M.52 supersonic aircraft was to have an afterburner, augmenting a turbojet thrust by burning additional fuel in the nozzle, but was cancelled in 1946. In 1935, German aerodynamicist Adolf Busemann proposed using swept wings to reduce high-speed drag and the Messerschmitt P.1101 fighter prototype was 80% complete by the end of World War II; the later US North American F-86 and Boeing B-47 flew in 1947, as the Soviet MiG-15, and the British de Havilland Comet in 1949. In 1951, the Avro Jetliner featured an ice protection system from Goodyear through electro-thermal resistances in the wing and tail leading edges; jet aircraft use hot engine bleed air and lighter aircraft use pneumatic deicing boots or weep anti-icing fluid on propellers, wing and tail leading edges. In 1954, Bell Labs developed the first transistorized airborne digital computer, Tradic for the US Boeing B-52 and in the 1960s Raytheon built the MIT-developed Apollo Guidance Computer; the MIL-STD-1553 avionics digital bus was defined in 1973 then first used in the General Dynamics F-16, while the civil ARINC 429 was first used in the Boeing 757/B767 and Airbus A310 in the early 1980s. After World War II, the initial promoter of Photovoltaic power for spacecraft, Hans K. Ziegler, was brought to the US under Operation Paperclip along Wernher von Braun and Vanguard 1 was its first application in 1958, later enhanced in space-deployable structures like the International Space Station solar arrays of . To board an airliner, jet bridges are more accessible, comfortable and efficient than climbing the stairs. 
In the 1950s, to improve thrust and fuel efficiency, the jet engine airflow was divided into a core stream and a bypass stream with a lower velocity for better propulsive efficiency: the first was the Rolls-Royce Conway with a 0.3 BPR on the Boeing 707 in 1960, followed by the Pratt & Whitney JT3D with a 1.5 BPR and, derived from the J79, the General Electric CJ805 powered the Convair 990 with a 28% lower cruise fuel burn; bypass ratio improved to the 9.3 BPR Rolls-Royce Trent XWB, the 10:1 BPR GE9X and the Pratt & Whitney GTF with high-pressure ratio cores. Functional safety Functional safety relates to a part of the general safety of a system or a piece of equipment. It implies that the system or equipment can be operated properly and without causing any danger, risk, damage or injury. Functional safety is crucial in the aerospace industry, which allows no compromises or negligence. In this respect, supervisory bodies, such as the European Aviation Safety Agency (EASA), regulate the aerospace market with strict certification standards. This is meant to reach and ensure the highest possible level of safety. The standards AS 9100 in America, EN 9100 on the European market or JISQ 9100 in Asia particularly address the aerospace and aviation industry. These are standards applying to the functional safety of aerospace vehicles. Some companies are therefore specialized in the certification, inspection, verification and testing of the vehicles and spare parts to ensure and attest compliance with the appropriate regulations. Spinoffs Spinoffs refer to any technology that is a direct result of coding or products created by NASA and redesigned for an alternate purpose. These technological advancements are one of the primary results of the aerospace industry, with $5.2 billion worth of revenue generated by spinoff technology, including computers and cellular devices. These spinoffs have applications in a variety of different fields including medicine, transportation, energy, consumer goods, public safety and more. NASA publishes an annual report called "Spinoffs", regarding many of the specific products and benefits to the aforementioned areas in an effort to highlight some of the ways funding is put to use. For example, in the most recent edition of this publication, "Spinoffs 2015", endoscopes are featured as one of the medical derivations of aerospace achievement. This device enables more precise and subsequently cost-effective neurosurgery by reducing complications through a minimally invasive procedure that abbreviates hospitalization. "These NASA technologies are not only giving companies and entrepreneurs a competitive edge in their own industries, but are also helping to shape budding industries, such as commercial lunar landers," said Daniel Lockney. See also Aerodynamics Aeronautics Aerospace engineering Aircraft Astronautics NewSpace Space agencies (List of) Space exploration Spacecraft Wiktionary: Aviation, aerospace, and aeronautical terms References Further reading Blockley, Richard, and Wei Shyy. Encyclopedia of aerospace engineering (American Institute of Aeronautics and Astronautics, Inc., 2010). Brunton, Steven L., et al. "Data-driven aerospace engineering: reframing the industry with machine learning." AIAA Journal.. 59.8 (2021): 2820-2847. online Davis, Jeffrey R., Robert Johnson, and Jan Stepanek, eds. Fundamentals of aerospace medicine (Lippincott Williams & Wilkins, 2008) online. Mouritz, Adrian P. Introduction to aerospace materials (Elsevier, 2012) online. 
Petrescu, Relly Victoria, et al. "Modern propulsions for aerospace-a review." Journal of Aircraft and Spacecraft Technology 1.1 (2017). Phero, Graham C., and Kessler Sterne. "The aerospace revolution: development, intellectual property, and value." (2022). online Wills, Jocelyn. Tug of War: Surveillance Capitalism, Military Contracting, and the Rise of the Security State (McGill-Queen's University Press, 2017), scholarly history of MDA in Canada. online book review External links
Aerospace
[ "Physics" ]
4,230
[ "Spacetime", "Space", "Aerospace" ]
154,738
https://en.wikipedia.org/wiki/Hydrogen%20sulfide
Hydrogen sulfide is a chemical compound with the formula H2S. It is a colorless chalcogen-hydride gas, and is poisonous, corrosive, and flammable; even trace amounts in the ambient atmosphere have a characteristic foul odor of rotten eggs. Swedish chemist Carl Wilhelm Scheele is credited with having discovered the chemical composition of purified hydrogen sulfide in 1777. Hydrogen sulfide is toxic to humans and most other animals by inhibiting cellular respiration in a manner similar to hydrogen cyanide. When it is inhaled or its salts are ingested in high amounts, damage to organs occurs rapidly, with symptoms ranging from breathing difficulties to convulsions and death. Despite this, the human body produces small amounts of this sulfide and its mineral salts, and uses it as a signalling molecule. Hydrogen sulfide is often produced from the microbial breakdown of organic matter in the absence of oxygen, such as in swamps and sewers; this process is commonly known as anaerobic digestion, and is carried out by sulfate-reducing microorganisms. It also occurs in volcanic gases, natural gas deposits, and sometimes in well-drawn water. Properties Hydrogen sulfide is slightly denser than air. A mixture of H2S and air can be explosive. Oxidation In general, hydrogen sulfide acts as a reducing agent, as indicated by its ability to reduce sulfur dioxide in the Claus process. Hydrogen sulfide burns in oxygen with a blue flame to form sulfur dioxide (SO2) and water: 2 H2S + 3 O2 → 2 SO2 + 2 H2O. If an excess of oxygen is present, sulfur trioxide (SO3) is formed, which quickly hydrates to sulfuric acid: SO3 + H2O → H2SO4. Acid-base properties It is slightly soluble in water and acts as a weak acid (pKa = 6.9 in 0.01–0.1 mol/litre solutions at 18 °C), giving the hydrosulfide ion HS−. Hydrogen sulfide and its solutions are colorless. When exposed to air, it slowly oxidizes to form elemental sulfur, which is not soluble in water. The sulfide anion S2− is not formed in aqueous solution. Extreme temperatures and pressures At pressures above 90 GPa (gigapascals), hydrogen sulfide becomes a metallic conductor of electricity. When cooled below a critical temperature, this high-pressure phase exhibits superconductivity. The critical temperature increases with pressure, ranging from 23 K at 100 GPa to 150 K at 200 GPa. If hydrogen sulfide is pressurized at higher temperatures, then cooled, the critical temperature reaches 203 K (−70 °C), the highest accepted superconducting critical temperature as of 2015. By substituting a small part of the sulfur with phosphorus and using even higher pressures, it has been predicted that it may be possible to raise the critical temperature even further and achieve room-temperature superconductivity. Hydrogen sulfide decomposes without a catalyst at atmospheric pressure at around 1200 °C into hydrogen and sulfur. Tarnishing Hydrogen sulfide reacts with metal ions to form metal sulfides, which are insoluble, often dark-colored solids. Lead(II) acetate paper is used to detect hydrogen sulfide because it readily converts to lead(II) sulfide, which is black. Treating metal sulfides with strong acid or electrolysis often liberates hydrogen sulfide. Hydrogen sulfide is also responsible for the tarnishing of various metals including copper and silver; the chemical responsible for the black toning found on silver coins is silver sulfide (Ag2S), which is produced when the silver on the surface of the coin reacts with atmospheric hydrogen sulfide. 
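As a worked illustration of the tarnishing chemistry described above (an editorial addition, not part of the original article text), the overall reaction commonly given in textbooks for silver tarnishing in air containing traces of hydrogen sulfide is:

4 Ag + 2 H2S + O2 → 2 Ag2S + 2 H2O

Oxygen appears in this equation because silver on its own is only very slowly attacked by hydrogen sulfide; the sulfide and an oxidant are generally described as acting together, which is consistent with tarnish forming gradually in ordinary air. 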
Coins that have been subject to toning by hydrogen sulfide and other sulfur-containing compounds may have the toning add to the numismatic value of a coin based on aesthetics, as the toning may produce thin-film interference, resulting in the coin taking on an attractive coloration. Coins can also be intentionally treated with hydrogen sulfide to induce toning, though artificial toning can be distinguished from natural toning, and is generally criticised among collectors. Production Hydrogen sulfide is most commonly obtained by its separation from sour gas, which is natural gas with a high content of . It can also be produced by treating hydrogen with molten elemental sulfur at about 450 °C. Hydrocarbons can serve as a source of hydrogen in this process. The very favorable thermodynamics for the hydrogenation of sulfur implies that the dehydrogenation (or cracking) of hydrogen sulfide would require very high temperatures. A standard lab preparation is to treat ferrous sulfide with a strong acid in a Kipp generator: For use in qualitative inorganic analysis, thioacetamide is used to generate : Many metal and nonmetal sulfides, e.g. aluminium sulfide, phosphorus pentasulfide, silicon disulfide liberate hydrogen sulfide upon exposure to water: This gas is also produced by heating sulfur with solid organic compounds and by reducing sulfurated organic compounds with hydrogen. It can also be produced by mixing ammonium thiocyanate to concentrated sulphuric acid and adding water to it. Biosynthesis Hydrogen sulfide can be generated in cells via enzymatic or non-enzymatic pathways. Three enzymes catalyze formation of : cystathionine γ-lyase (CSE), cystathionine β-synthetase (CBS), and 3-mercaptopyruvate sulfurtransferase (3-MST). CBS and CSE are the main proponents of biogenesis, which follows the trans-sulfuration pathway. These enzymes have been identified in a breadth of biological cells and tissues, and their activity is induced by a number of disease states. These enzymes are characterized by the transfer of a sulfur atom from methionine to serine to form a cysteine molecule. 3-MST also contributes to hydrogen sulfide production by way of the cysteine catabolic pathway. Dietary amino acids, such as methionine and cysteine serve as the primary substrates for the transulfuration pathways and in the production of hydrogen sulfide. Hydrogen sulfide can also be derived from proteins such as ferredoxins and Rieske proteins. Sulfate-reducing (resp. sulfur-reducing) bacteria generate usable energy under low-oxygen conditions by using sulfates (resp. elemental sulfur) to oxidize organic compounds or hydrogen; this produces hydrogen sulfide as a waste product. Water heaters can aid the conversion of sulfate in water to hydrogen sulfide gas. This is due to providing a warm environment sustainable for sulfur bacteria and maintaining the reaction which interacts between sulfate in the water and the water heater anode, which is usually made from magnesium metal. Signalling role in the body acts as a gaseous signaling molecule with implications for health and in diseases. Hydrogen sulfide is involved in vasodilation in animals, as well as in increasing seed germination and stress responses in plants. Hydrogen sulfide signaling is moderated by reactive oxygen species (ROS) and reactive nitrogen species (RNS). 
has been shown to interact with the NO pathway resulting in several different cellular effects, including the inhibition of cGMP phosphodiesterases, as well as the formation of another signal called nitrosothiol. Hydrogen sulfide is also known to increase the levels of glutathione, which acts to reduce or disrupt ROS levels in cells. The field of biology has advanced from environmental toxicology to investigate the roles of endogenously produced in physiological conditions and in various pathophysiological states. has been implicated in cancer, in Down syndrome and in vascular disease. At lower concentrations, it stimulates mitochondrial function via multiple mechanisms including direct electron donation. However, at higher concentrations, it inhibits Complex IV of the mitochondrial electron transport chain, which effectively reduces ATP generation and biochemical activity within cells. Uses Production of sulfur Hydrogen sulfide is mainly consumed as a precursor to elemental sulfur. This conversion, called the Claus process, involves partial oxidation to sulfur dioxide. The latter reacts with hydrogen sulfide to give elemental sulfur. The conversion is catalyzed by alumina. Production of thioorganic compounds Many fundamental organosulfur compounds are produced using hydrogen sulfide. These include methanethiol, ethanethiol, and thioglycolic acid. Hydrosulfides can be used in the production of thiophenol. Production of metal sulfides Upon combining with alkali metal bases, hydrogen sulfide converts to alkali hydrosulfides such as sodium hydrosulfide and sodium sulfide: Sodium sulfides are used in the paper making industry. Specifically, salts of break bonds between lignin and cellulose components of pulp in the Kraft process. As indicated above, many metal ions react with hydrogen sulfide to give the corresponding metal sulfides. Oxidic ores are sometimes treated with hydrogen sulfide to give the corresponding metal sulfides which are more readily purified by flotation. Metal parts are sometimes passivated with hydrogen sulfide. Catalysts used in hydrodesulfurization are routinely activated with hydrogen sulfide. Hydrogen sulfide was a reagent in the qualitative inorganic analysis of metal ions. In these analyses, heavy metal (and nonmetal) ions (e.g., Pb(II), Cu(II), Hg(II), As(III)) are precipitated from solution upon exposure to . The components of the resulting solid are then identified by their reactivity. Miscellaneous applications Hydrogen sulfide is used to separate deuterium oxide, or heavy water, from normal water via the Girdler sulfide process. A suspended animation-like state has been induced in rodents with the use of hydrogen sulfide, resulting in hypothermia with a concomitant reduction in metabolic rate. Oxygen demand was also reduced, thereby protecting against hypoxia. In addition, hydrogen sulfide has been shown to reduce inflammation in various situations. Occurrence Volcanoes and some hot springs (as well as cold springs) emit some . Hydrogen sulfide can be present naturally in well water, often as a result of the action of sulfate-reducing bacteria. Hydrogen sulfide is produced by the human body in small quantities through bacterial breakdown of proteins containing sulfur in the intestinal tract; it therefore contributes to the characteristic odor of flatulence. It is also produced in the mouth (halitosis). A portion of global emissions are due to human activity. 
By far the largest industrial source of is petroleum refineries: The hydrodesulfurization process liberates sulfur from petroleum by the action of hydrogen. The resulting is converted to elemental sulfur by partial combustion via the Claus process, which is a major source of elemental sulfur. Other anthropogenic sources of hydrogen sulfide include coke ovens, paper mills (using the Kraft process), tanneries and sewerage. arises from virtually anywhere where elemental sulfur comes in contact with organic material, especially at high temperatures. Depending on environmental conditions, it is responsible for deterioration of material through the action of some sulfur oxidizing microorganisms. It is called biogenic sulfide corrosion. In 2011 it was reported that increased concentrations of were observed in the Bakken formation crude, possibly due to oil field practices, and presented challenges such as "health and environmental risks, corrosion of wellbore, added expense with regard to materials handling and pipeline equipment, and additional refinement requirements". Besides living near gas and oil drilling operations, ordinary citizens can be exposed to hydrogen sulfide by being near waste water treatment facilities, landfills and farms with manure storage. Exposure occurs through breathing contaminated air or drinking contaminated water. In municipal waste landfill sites, the burial of organic material rapidly leads to the production of anaerobic digestion within the waste mass and, with the humid atmosphere and relatively high temperature that accompanies biodegradation, biogas is produced as soon as the air within the waste mass has been reduced. If there is a source of sulfate bearing material, such as plasterboard or natural gypsum (calcium sulfate dihydrate), under anaerobic conditions sulfate reducing bacteria converts this to hydrogen sulfide. These bacteria cannot survive in air but the moist, warm, anaerobic conditions of buried waste that contains a high source of carbon – in inert landfills, paper and glue used in the fabrication of products such as plasterboard can provide a rich source of carbon – is an excellent environment for the formation of hydrogen sulfide. In industrial anaerobic digestion processes, such as waste water treatment or the digestion of organic waste from agriculture, hydrogen sulfide can be formed from the reduction of sulfate and the degradation of amino acids and proteins within organic compounds. Sulfates are relatively non-inhibitory to methane forming bacteria but can be reduced to by sulfate reducing bacteria, of which there are several genera. Removal from water A number of processes have been designed to remove hydrogen sulfide from drinking water. Continuous chlorination For levels up to 75 mg/L chlorine is used in the purification process as an oxidizing chemical to react with hydrogen sulfide. This reaction yields insoluble solid sulfur. Usually the chlorine used is in the form of sodium hypochlorite. Aeration For concentrations of hydrogen sulfide less than 2 mg/L aeration is an ideal treatment process. Oxygen is added to water and a reaction between oxygen and hydrogen sulfide react to produce odorless sulfate. Nitrate addition Calcium nitrate can be used to prevent hydrogen sulfide formation in wastewater streams. Removal from fuel gases Hydrogen sulfide is commonly found in raw natural gas and biogas. It is typically removed by amine gas treating technologies. 
In such processes, the hydrogen sulfide is first converted to an ammonium salt, whereas the natural gas is unaffected. The bisulfide anion is subsequently regenerated by heating of the amine sulfide solution. Hydrogen sulfide generated in this process is typically converted to elemental sulfur using the Claus Process. Safety The underground mine gas term for foul-smelling hydrogen sulfide-rich gas mixtures is stinkdamp. Hydrogen sulfide is a highly toxic and flammable gas (flammable range: 4.3–46%). It can poison several systems in the body, although the nervous system is most affected. The toxicity of is comparable with that of carbon monoxide. It binds with iron in the mitochondrial cytochrome enzymes, thus preventing cellular respiration. Its toxic properties were described in detail in 1843 by Justus von Liebig. Even before hydrogen sulfide was discovered, Italian physician Bernardino Ramazzini hypothesized in his 1713 book De Morbis Artificum Diatriba that occupational diseases of sewer-workers and blackening of coins in their clothes may be caused by an unknown invisible volatile acid (moreover, in late 18th century toxic gas emanation from Paris sewers became a problem for the citizens and authorities). Although very pungent at first (it smells like rotten eggs), it quickly deadens the sense of smell, creating temporary anosmia, so victims may be unaware of its presence until it is too late. Safe handling procedures are provided by its safety data sheet (SDS). Low-level exposure Since hydrogen sulfide occurs naturally in the body, the environment, and the gut, enzymes exist to metabolize it. At some threshold level, believed to average around 300–350 ppm, the oxidative enzymes become overwhelmed. Many personal safety gas detectors, such as those used by utility, sewage and petrochemical workers, are set to alarm at as low as 5 to 10 ppm and to go into high alarm at 15 ppm. Metabolism causes oxidation to sulfate, which is harmless. Hence, low levels of hydrogen sulfide may be tolerated indefinitely. Exposure to lower concentrations can result in eye irritation, a sore throat and cough, nausea, shortness of breath, and fluid in the lungs. These effects are believed to be due to hydrogen sulfide combining with alkali present in moist surface tissues to form sodium sulfide, a caustic. These symptoms usually subside in a few weeks. Long-term, low-level exposure may result in fatigue, loss of appetite, headaches, irritability, poor memory, and dizziness. Chronic exposure to low level (around 2 ppm) has been implicated in increased miscarriage and reproductive health issues among Russian and Finnish wood pulp workers, but the reports have not (as of 1995) been replicated. High-level exposure Short-term, high-level exposure can induce immediate collapse, with loss of breathing and a high probability of death. If death does not occur, high exposure to hydrogen sulfide can lead to cortical pseudolaminar necrosis, degeneration of the basal ganglia and cerebral edema. Although respiratory paralysis may be immediate, it can also be delayed up to 72 hours. Inhalation of resulted in about 7 workplace deaths per year in the U.S. (2011–2017 data), second only to carbon monoxide (17 deaths per year) for workplace chemical inhalation deaths. Exposure thresholds Exposure limits stipulated by the United States government: 10 ppm REL-Ceiling (NIOSH): recommended permissible exposure ceiling (the recommended level that must not be exceeded, except once for 10 min. 
in an 8-hour shift, if no other measurable exposure occurs) 20 ppm PEL-Ceiling (OSHA): permissible exposure ceiling (the level that must not be exceeded, except once for 10 min. in an 8-hour shift, if no other measurable exposure occurs) 50 ppm PEL-Peak (OSHA): peak permissible exposure (the level that must never be exceeded) 100 ppm IDLH (NIOSH): immediately dangerous to life and health (the level that interferes with the ability to escape) 0.00047 ppm or 0.47 ppb is the odor threshold, the point at which 50% of a human panel can detect the presence of an odor without being able to identify it. 10–20 ppm is the borderline concentration for eye irritation. 50–100 ppm leads to eye damage. At 100–150 ppm the olfactory nerve is paralyzed after a few inhalations, and the sense of smell disappears, often together with awareness of danger. 320–530 ppm leads to pulmonary edema with the possibility of death. 530–1000 ppm causes strong stimulation of the central nervous system and rapid breathing, leading to loss of breathing. 800 ppm is the lethal concentration for 50% of humans for 5 minutes' exposure (LC50). Concentrations over 1000 ppm cause immediate collapse with loss of breathing, even after inhalation of a single breath. Treatment Treatment involves immediate inhalation of amyl nitrite, injections of sodium nitrite, or administration of 4-dimethylaminophenol in combination with inhalation of pure oxygen, administration of bronchodilators to overcome eventual bronchospasm, and in some cases hyperbaric oxygen therapy (HBOT). HBOT has clinical and anecdotal support. Incidents Hydrogen sulfide was used by the British Army as a chemical weapon during World War I. It was not considered to be an ideal war gas, partially due to its flammability and because the distinctive smell could be detected from even a small leak, alerting the enemy to the presence of the gas. It was nevertheless used on two occasions in 1916 when other gases were in short supply. On September 2, 2005, a leak in the propeller room of a Royal Caribbean Cruise Liner docked in Los Angeles resulted in the deaths of 3 crewmen due to a sewage line leak. As a result, all such compartments are now required to have a ventilation system. A dump of toxic waste containing hydrogen sulfide is believed to have caused 17 deaths and thousands of illnesses in Abidjan, on the West African coast, in the 2006 Côte d'Ivoire toxic waste dump. In September 2008, three workers were killed and two suffered serious injury, including long term brain damage, at a mushroom growing company in Langley, British Columbia. A valve to a pipe that carried chicken manure, straw and gypsum to the compost fuel for the mushroom growing operation became clogged, and as workers unclogged the valve in a confined space without proper ventilation the hydrogen sulfide that had built up due to anaerobic decomposition of the material was released, poisoning the workers in the surrounding area. An investigator said there could have been more fatalities if the pipe had been fully cleared and/or if the wind had changed directions. In 2014, levels of hydrogen sulfide as high as 83 ppm were detected at a recently built mall in Thailand called Siam Square One at the Siam Square area. Shop tenants at the mall reported health complications such as sinus inflammation, breathing difficulties and eye irritation. After investigation it was determined that the large amount of gas originated from imperfect treatment and disposal of waste water in the building. 
In 2014, hydrogen sulfide gas killed workers at the Promenade shopping center in North Scottsdale, Arizona, USA after climbing into 15 ft deep chamber without wearing personal protective gear. "Arriving crews recorded high levels of hydrogen cyanide and hydrogen sulfide coming out of the sewer." In November 2014, a substantial amount of hydrogen sulfide gas shrouded the central, eastern and southeastern parts of Moscow. Residents living in the area were urged to stay indoors by the emergencies ministry. Although the exact source of the gas was not known, blame had been placed on a Moscow oil refinery. In June 2016, a mother and her daughter were found dead in their still-running 2006 Porsche Cayenne SUV against a guardrail on Florida's Turnpike, initially thought to be victims of carbon monoxide poisoning. Their deaths remained unexplained as the medical examiner waited for results of toxicology tests on the victims, until urine tests revealed that hydrogen sulfide was the cause of death. A report from the Orange-Osceola Medical Examiner's Office indicated that toxic fumes came from the Porsche's starter battery, located under the front passenger seat. In January 2017, three utility workers in Key Largo, Florida, died one by one within seconds of descending into a narrow space beneath a manhole cover to check a section of paved street. In an attempt to save the men, a firefighter who entered the hole without his air tank (because he could not fit through the hole with it) collapsed within seconds and had to be rescued by a colleague. The firefighter was airlifted to Jackson Memorial Hospital and later recovered. A Monroe County Sheriff officer initially determined that the space contained hydrogen sulfide and methane gas produced by decomposing vegetation. On May 24, 2018, two workers were killed, another seriously injured, and 14 others hospitalized by hydrogen sulfide inhalation at a Norske Skog paper mill in Albury, New South Wales. An investigation by SafeWork NSW found that the gas was released from a tank used to hold process water. The workers were exposed at the end of a 3-day maintenance period. Hydrogen sulfide had built up in an upstream tank, which had been left stagnant and untreated with biocide during the maintenance period. These conditions allowed sulfate-reducing bacteria to grow in the upstream tank, as the water contained small quantities of wood pulp and fiber. The high rate of pumping from this tank into the tank involved in the incident caused hydrogen sulfide gas to escape from various openings around its top when pumping was resumed at the end of the maintenance period. The area above it was sufficiently enclosed for the gas to pool there, despite not being identified as a confined space by Norske Skog. One of the workers who was killed was exposed while investigating an apparent fluid leak in the tank, while the other who was killed and the worker who was badly injured were attempting to rescue the first after he collapsed on top of it. In a resulting criminal case, Norske Skog was accused of failing to ensure the health and safety of its workforce at the plant to a reasonably practicable extent. It pleaded guilty, and was fined AU$1,012,500 and ordered to fund the production of an anonymized educational video about the incident. In October 2019, an Odessa, Texas employee of Aghorn Operating Inc. and his wife were killed due to a water pump failure. Produced water with a high concentration of hydrogen sulfide was released by the pump. 
The worker died while responding to an automated phone call he had received alerting him to a mechanical failure in the pump, while his wife died after driving to the facility to check on him. A CSB investigation cited lax safety practices at the facility, such as an informal lockout-tagout procedure and a nonfunctioning hydrogen sulfide alert system. Suicides The gas, produced by mixing certain household ingredients, was used in a suicide wave in 2008 in Japan. The wave prompted staff at Tokyo's suicide prevention center to set up a special hotline during "Golden Week", as they received an increase in calls from people wanting to kill themselves during the annual May holiday. As of 2010, this phenomenon has occurred in a number of US cities, prompting warnings to those arriving at the site of the suicide. These first responders, such as emergency services workers or family members are at risk of death or injury from inhaling the gas, or by fire. Local governments have also initiated campaigns to prevent such suicides. In 2020, ingestion was used as a suicide method by Japanese pro wrestler Hana Kimura. In 2024, Lucy-Bleu Knight, stepdaughter of famed musician Slash, also used ingestion to commit suicide. Hydrogen sulfide in the natural environment Microbial: The sulfur cycle Hydrogen sulfide is a central participant in the sulfur cycle, the biogeochemical cycle of sulfur on Earth. In the absence of oxygen, sulfur-reducing and sulfate-reducing bacteria derive energy from oxidizing hydrogen or organic molecules by reducing elemental sulfur or sulfate to hydrogen sulfide. Other bacteria liberate hydrogen sulfide from sulfur-containing amino acids; this gives rise to the odor of rotten eggs and contributes to the odor of flatulence. As organic matter decays under low-oxygen (or hypoxic) conditions (such as in swamps, eutrophic lakes or dead zones of oceans), sulfate-reducing bacteria will use the sulfates present in the water to oxidize the organic matter, producing hydrogen sulfide as waste. Some of the hydrogen sulfide will react with metal ions in the water to produce metal sulfides, which are not water-soluble. These metal sulfides, such as ferrous sulfide FeS, are often black or brown, leading to the dark color of sludge. Several groups of bacteria can use hydrogen sulfide as fuel, oxidizing it to elemental sulfur or to sulfate by using dissolved oxygen, metal oxides (e.g., iron oxyhydroxides and manganese oxides), or nitrate as electron acceptors. The purple sulfur bacteria and the green sulfur bacteria use hydrogen sulfide as an electron donor in photosynthesis, thereby producing elemental sulfur. This mode of photosynthesis is older than the mode of cyanobacteria, algae, and plants, which uses water as electron donor and liberates oxygen. The biochemistry of hydrogen sulfide is a key part of the chemistry of the iron-sulfur world. In this model of the origin of life on Earth, geologically produced hydrogen sulfide is postulated as an electron donor driving the reduction of carbon dioxide. Animals Hydrogen sulfide is lethal to most animals, but a few highly specialized species (extremophiles) do thrive in habitats that are rich in this compound. In the deep sea, hydrothermal vents and cold seeps with high levels of hydrogen sulfide are home to a number of extremely specialized lifeforms, ranging from bacteria to fish. Because of the absence of sunlight at these depths, these ecosystems rely on chemosynthesis rather than photosynthesis. 
Freshwater springs rich in hydrogen sulfide are mainly home to invertebrates, but also include a small number of fish: Cyprinodon bobmilleri (a pupfish from Mexico), Limia sulphurophila (a poeciliid from the Dominican Republic), Gambusia eurystoma (a poeciliid from Mexico), and a few Poecilia (poeciliids from Mexico). Invertebrates and microorganisms in some cave systems, such as Movile Cave, are adapted to high levels of hydrogen sulfide. Interstellar and planetary occurrence Hydrogen sulfide has often been detected in the interstellar medium. It also occurs in the clouds of planets in our solar system. Mass extinctions Hydrogen sulfide has been implicated in several mass extinctions that have occurred in the Earth's past. In particular, a buildup of hydrogen sulfide in the atmosphere may have caused, or at least contributed to, the Permian-Triassic extinction event 252 million years ago. Organic residues from these extinction boundaries indicate that the oceans were anoxic (oxygen-depleted) and had species of shallow plankton that metabolized . The formation of may have been initiated by massive volcanic eruptions, which emitted carbon dioxide and methane into the atmosphere, which warmed the oceans, lowering their capacity to absorb oxygen that would otherwise oxidize . The increased levels of hydrogen sulfide could have killed oxygen-generating plants as well as depleted the ozone layer, causing further stress. Small blooms have been detected in modern times in the Dead Sea and in the Atlantic Ocean off the coast of Namibia. See also Hydrogen sulfide chemosynthesis Marsh gas References Additional resources External links International Chemical Safety Card 0165 Concise International Chemical Assessment Document 53 National Pollutant Inventory - Hydrogen sulfide fact sheet NIOSH Pocket Guide to Chemical Hazards NACE (National Association of Corrosion Epal) Acids Foul-smelling chemicals Hydrogen compounds Triatomic molecules Industrial gases Airborne pollutants Sulfides Flatulence Gaseous signaling molecules Blood agents
Hydrogen sulfide
[ "Physics", "Chemistry" ]
6,213
[ "Acids", "Chemical weapons", "Molecules", "Signal transduction", "Gaseous signaling molecules", "Triatomic molecules", "Industrial gases", "Blood agents", "Chemical process engineering", "Matter" ]
154,750
https://en.wikipedia.org/wiki/Zirconium%20dioxide
Zirconium dioxide (), sometimes known as zirconia (not to be confused with zirconium silicate or zircon), is a white crystalline oxide of zirconium. Its most naturally occurring form, with a monoclinic crystalline structure, is the mineral baddeleyite. A dopant stabilized cubic structured zirconia, cubic zirconia, is synthesized in various colours for use as a gemstone and a diamond simulant. Production, chemical properties, occurrence Zirconia is produced by calcining zirconium compounds, exploiting its high thermostability. Structure Three phases are known: monoclinic below 1170 °C, tetragonal between 1170 °C and 2370 °C, and cubic above 2370 °C. The trend is for higher symmetry at higher temperatures, as is usually the case. A small percentage of the oxides of calcium or yttrium stabilize in the cubic phase. The very rare mineral tazheranite, , is cubic. Unlike , which features six-coordinated titanium in all phases, monoclinic zirconia consists of seven-coordinated zirconium centres. This difference is attributed to the larger size of the zirconium atom relative to the titanium atom. Chemical reactions Zirconia is chemically unreactive. It is slowly attacked by concentrated hydrofluoric acid and sulfuric acid. When heated with carbon, it converts to zirconium carbide. When heated with carbon in the presence of chlorine, it converts to zirconium(IV) chloride. This conversion is the basis for the purification of zirconium metal and is analogous to the Kroll process. Engineering properties Zirconium dioxide is one of the most studied ceramic materials. adopts a monoclinic crystal structure at room temperature and transitions to tetragonal and cubic at higher temperatures. The change of volume caused by the structure transitions from tetragonal to monoclinic to cubic induces large stresses, causing it to crack upon cooling from high temperatures. When the zirconia is blended with some other oxides, the tetragonal and/or cubic phases are stabilized. Effective dopants include magnesium oxide (MgO), yttrium oxide (, yttria), calcium oxide (), and cerium(III) oxide (). Zirconia is often more useful in its phase 'stabilized' state. Upon heating, zirconia undergoes disruptive phase changes. By adding small percentages of yttria, these phase changes are eliminated, and the resulting material has superior thermal, mechanical, and electrical properties. In some cases, the tetragonal phase can be metastable. If sufficient quantities of the metastable tetragonal phase is present, then an applied stress, magnified by the stress concentration at a crack tip, can cause the tetragonal phase to convert to monoclinic, with the associated volume expansion. This phase transformation can then put the crack into compression, retarding its growth, and enhancing the fracture toughness. This mechanism, known as transformation toughening, significantly extends the reliability and lifetime of products made with stabilized zirconia. The band gap is dependent on the phase (cubic, tetragonal, monoclinic, or amorphous) and preparation methods, with typical estimates from 5–7 eV. A special case of zirconia is that of tetragonal zirconia polycrystal, or TZP, which is indicative of polycrystalline zirconia composed of only the metastable tetragonal phase. Uses The main use of zirconia is in the production of hard ceramics, such as in dentistry, with other uses including as a protective coating on particles of titanium dioxide pigments, as a refractory material, in insulation, abrasives, and enamels. 
Stabilized zirconia is used in oxygen sensors and fuel cell membranes because it has the ability to allow oxygen ions to move freely through the crystal structure at high temperatures. This high ionic conductivity (and a low electronic conductivity) makes it one of the most useful electroceramics. Zirconium dioxide is also used as the solid electrolyte in electrochromic devices. Zirconia is a precursor to the electroceramic lead zirconate titanate (PZT), which is a high-κ dielectric, which is found in myriad components. Niche uses The very low thermal conductivity of cubic phase of zirconia also has led to its use as a thermal barrier coating, or TBC, in jet and diesel engines to allow operation at higher temperatures. Thermodynamically, the higher the operation temperature of an engine, the greater the possible efficiency. Another low-thermal-conductivity use is as a ceramic fiber insulation for crystal growth furnaces, fuel-cell stacks, and infrared heating systems. This material is also used in dentistry in the manufacture of subframes for the construction of dental restorations such as crowns and bridges, which are then veneered with a conventional feldspathic porcelain for aesthetic reasons, or of strong, extremely durable dental prostheses constructed entirely from monolithic zirconia, with limited but constantly improving aesthetics. Zirconia stabilized with yttria (yttrium oxide), known as yttria-stabilized zirconia, can be used as a strong base material in some full ceramic crown restorations. Transformation-toughened zirconia is used to make ceramic knives. Because of the hardness, ceramic-edged cutlery stays sharp longer than steel edged products. Due to its infusibility and brilliant luminosity when incandescent, it was used as an ingredient of sticks for limelight. Zirconia has been proposed to electrolyze carbon monoxide and oxygen from the atmosphere of Mars to provide both fuel and oxidizer that could be used as a store of chemical energy for use with surface transportation on Mars. Carbon monoxide/oxygen engines have been suggested for early surface transportation use, as both carbon monoxide and oxygen can be straightforwardly produced by zirconia electrolysis without requiring use of any of the Martian water resources to obtain hydrogen, which would be needed for the production of methane or any hydrogen-based fuels. Zirconia can be used as photocatalyst since its high band gap (~ 5 eV) allows the generation of high-energy electrons and holes. Some studies demonstrated the activity of doped zirconia (in order to increase visible light absorption) in degrading organic compounds and reducing Cr(VI) from wastewaters. Zirconia is also a potential high-κ dielectric material with potential applications as an insulator in transistors. Zirconia is also employed in the deposition of optical coatings; it is a high-index material usable from the near-UV to the mid-IR, due to its low absorption in this spectral region. In such applications, it is typically deposited by PVD. In jewelry making, some watch cases are advertised as being "black zirconium oxide". In 2015 Omega released a fully watch named "The Dark Side of The Moon" with ceramic case, bezel, pushers, and clasp, advertising it as four times harder than stainless steel and therefore much more resistant to scratches during everyday use. In gas tungsten arc welding, tungsten electrodes containing 1% zirconium oxide (a.k.a. zirconia) instead of 2% thorium have good arc starting and current capacity, and are not radioactive. 
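To make the photocatalysis remark above concrete, a photon able to bridge a band gap E must have a wavelength given by the standard relation (this worked conversion is an editorial illustration, not part of the original article):

\lambda = \frac{hc}{E} \approx \frac{1240\ \mathrm{eV\,nm}}{E}

For the roughly 5 eV gap quoted for zirconia this gives \lambda \approx 1240/5 \approx 248 nm, well into the ultraviolet, whereas visible light spans roughly 400–700 nm (about 1.8–3.1 eV). This is why, as noted above, zirconia must be doped before it can absorb visible light for photocatalytic applications. 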
Diamond simulant Single crystals of the cubic phase of zirconia are commonly used as diamond simulant in jewellery. Like diamond, cubic zirconia has a cubic crystal structure and a high index of refraction. Visually discerning a good quality cubic zirconia gem from a diamond is difficult, and most jewellers will have a thermal conductivity tester to identify cubic zirconia by its low thermal conductivity (diamond is a very good thermal conductor). This state of zirconia is commonly called cubic zirconia, CZ, or zircon by jewellers, but the last name is not chemically accurate. Zircon is actually the mineral name for naturally occurring zirconium(IV) silicate (). See also Quenching Sintering S-type star, emitting spectral lines of zirconium monoxide Yttria-stabilized zirconia References Further reading External links NIOSH Pocket Guide to Chemical Hazards Biomaterials Ceramic materials High-κ dielectrics Refractory materials Zirconium dioxide
Zirconium dioxide
[ "Physics", "Engineering", "Biology" ]
1,829
[ "Biomaterials", "Refractory materials", "Materials", "Ceramic materials", "Ceramic engineering", "Matter", "Medical technology" ]
154,838
https://en.wikipedia.org/wiki/Podkamennaya%20Tunguska
The Podkamennaya Tunguska (Russian: Подкаменная Тунгуска, literally "Tunguska under the stones"; Ket: Ӄо’ль), also known as Middle Tunguska or Stony Tunguska, is a river in Krasnoyarsk Krai, Russia. History In 1908, an asteroid or comet exploded near the river in what later became known as the Tunguska event. Hydrology The river is fed mainly by snowmelt (60%); rain and groundwater account for 16% and 24% of its recharge, respectively. The spring flood lasts from the beginning of May to the end of June, and in the lower reaches until the beginning of July. From July to October the river is at its summer low, interrupted by rises in level of up to 5.5 m during freshets, of which there can be one to four per year. The mean annual discharge at the mouth is 1,587.18 m3/s; during summer floods it can reach 35,000 m3/s. Ice phenomena begin in mid-October; the autumn ice run, lasting 7–16 days, is accompanied by the formation of ice jams. The river is ice-covered from the end of October to the middle of May. The spring ice run lasts 5–7 days in the upper reaches and up to 10 days in the lower reaches; it is violent, and ice jams can raise the water level by up to 29.7 m. Winter flow is reduced because the river basin lies in the permafrost zone; discharge falls to as little as 3–15 m3/s, and total winter runoff amounts to 11% of the annual total. In popular culture The river is the setting of Call of the Dead, a map in the Call of Duty: Black Ops Escalation DLC. See also List of rivers of Russia References External links Rivers of Krasnoyarsk Krai Tunguska event
Podkamennaya Tunguska
[ "Physics" ]
369
[ "Unsolved problems in physics", "Tunguska event" ]
154,851
https://en.wikipedia.org/wiki/Reentrancy%20%28computing%29
Reentrancy is a programming concept where a function or subroutine can be interrupted and then resumed before it finishes executing. This means that the function can be called again before it completes its previous execution. Reentrant code is designed to be safe and predictable when multiple instances of the same function are called simultaneously or in quick succession. A computer program or subroutine is called reentrant if multiple invocations can safely run concurrently on multiple processors, or if on a single-processor system its execution can be interrupted and a new execution of it can be safely started (it can be "re-entered"). The interruption could be caused by an internal action such as a jump or call, or by an external action such as an interrupt or signal, unlike recursion where new invocations can only be caused by an internal call. This definition originates from multiprogramming environments, where multiple processes may be active concurrently and where the flow of control could be interrupted by an interrupt and transferred to an interrupt service routine (ISR) or "handler" subroutine. Any subroutine used by the handler that could potentially have been executing when the interrupt was triggered should be reentrant. Similarly, code shared by two processors accessing shared data should be reentrant. Often, subroutines accessible via the operating system kernel are not reentrant. Hence, interrupt service routines are limited in the actions they can perform; for instance, they are usually restricted from accessing the file system and sometimes even from allocating memory. Reentrancy is neither necessary nor sufficient for thread-safety in multi-threaded environments. In other words, a reentrant subroutine can be thread-safe, but is not guaranteed to be. Conversely, thread-safe code need not be reentrant (see below for examples). Other terms used for reentrant programs include "sharable code". Reentrant subroutines are sometimes marked in reference material as being "signal safe". Reentrant programs are often "pure procedures". Background Reentrancy is not the same thing as idempotence, in which the function may be called more than once yet generate exactly the same output as if it had only been called once. Generally speaking, a function produces output data based on some input data (though both are optional, in general). Shared data could be accessed by any function at any time. If data can be changed by any function (and none keep track of those changes), there is no guarantee to those sharing a datum that its value is the same as at any earlier time. Data has a characteristic called scope, which describes where in a program the data may be used. Data scope is either global (outside the scope of any function and with an indefinite extent) or local (created each time a function is called and destroyed upon exit). Local data is not shared by any routines, re-entering or not; therefore, it does not affect re-entrance. Global data is defined outside functions and can be accessed by more than one function, either in the form of global variables (data shared between all functions), or as static variables (data shared by all invocations of the same function). In object-oriented programming, global data is defined in the scope of a class and can be private, making it accessible only to functions of that class. There is also the concept of instance variables, where a class variable is bound to a class instance. 
For these reasons, in object-oriented programming, this distinction is usually reserved for the data accessible outside of the class (public), and for the data independent of class instances (static). Reentrancy is distinct from, but closely related to, thread-safety. A function can be thread-safe and still not reentrant. For example, a function could be wrapped all around with a mutex (which avoids problems in multithreading environments), but, if that function were used in an interrupt service routine, it could hang, waiting for the first execution to release the mutex. The key for avoiding confusion is that reentrant refers to only one thread executing. It is a concept from the time when no multitasking operating systems existed. Rules for reentrancy Reentrant code may not hold any static or global non-constant data without synchronization. Reentrant functions can work with global data. For example, a reentrant interrupt service routine could grab a piece of hardware status to work with (e.g., serial port read buffer) which is not only global, but volatile. Still, typical use of static variables and global data is not advised, in the sense that, except in sections of code that are synchronized, only atomic read-modify-write instructions should be used in these variables (it should not be possible for an interrupt or signal to arrive during the execution of such an instruction). Note that in C, even a read or write is not guaranteed to be atomic; it may be split into several reads or writes. The C standard and SUSv3 provide sig_atomic_t for this purpose, although with guarantees only for simple reads and writes, not for incrementing or decrementing. More complex atomic operations are available in C11, which provides stdatomic.h. Reentrant code may not modify itself without synchronization. The operating system might allow a process to modify its code. There are various reasons for this (e.g., blitting graphics quickly) but this generally requires synchronization to avoid problems with reentrancy. It may, however, modify itself if it resides in its own unique memory. That is, if each new invocation uses a different physical machine code location where a copy of the original code is made, it will not affect other invocations even if it modifies itself during execution of that particular invocation (thread). Reentrant code may not call non-reentrant computer programs or routines without synchronization. Multiple levels of user, object, or process priority or multiprocessing usually complicate the control of reentrant code. It is important to keep track of any access or side effects that are done inside a routine designed to be reentrant. Reentrancy of a subroutine that operates on operating-system resources or non-local data depends on the atomicity of the respective operations. For example, if the subroutine modifies a 64-bit global variable on a 32-bit machine, the operation may be split into two 32-bit operations, and thus, if the subroutine is interrupted while executing, and called again from the interrupt handler, the global variable may be in a state where only 32 bits have been updated. The programming language might provide atomicity guarantees for interruption caused by an internal action such as a jump or call. 
Then the function f in an expression like (global:=1) + (f()), where the order of evaluation of the subexpressions might be arbitrary in a programming language, would see the global variable either set to 1 or to its previous value, but not in an intermediate state where only part has been updated. (The latter can happen in C, because the expression has no sequence point.) The operating system might provide atomicity guarantees for signals, such as a system call interrupted by a signal not having a partial effect. The processor hardware might provide atomicity guarantees for interrupts, such as interrupted processor instructions not having partial effects. Examples To illustrate reentrancy, this article uses as an example a C utility function, swap(), that takes two pointers and transposes their values, and an interrupt-handling routine that also calls the swap function. Neither reentrant nor thread-safe This is an example swap function that fails to be reentrant or thread-safe. Since the tmp variable is globally shared, without synchronization, among any concurrent instances of the function, one instance may interfere with the data relied upon by another. As such, it should not have been used in the interrupt service routine isr():

int tmp;

void swap(int* x, int* y)
{
    tmp = *x;
    *x = *y;
    /* Hardware interrupt might invoke isr() here. */
    *y = tmp;
}

void isr()
{
    int x = 1, y = 2;
    swap(&x, &y);
}

Thread-safe but not reentrant The swap function in the preceding example can be made thread-safe by making tmp thread-local. It still fails to be reentrant, and this will continue to cause problems if isr() is called in the same context as a thread already executing swap():

_Thread_local int tmp;

void swap(int* x, int* y)
{
    tmp = *x;
    *x = *y;
    /* Hardware interrupt might invoke isr() here. */
    *y = tmp;
}

void isr()
{
    int x = 1, y = 2;
    swap(&x, &y);
}

Reentrant and thread-safe An implementation of swap() that allocates tmp on the stack instead of globally and that is called only with unshared variables as parameters is both thread-safe and reentrant. It is thread-safe because the stack is local to a thread, and a function acting only on local data will always produce the expected result. There is no access to shared data, and therefore no data race.

void swap(int* x, int* y)
{
    int tmp;
    tmp = *x;
    *x = *y;
    *y = tmp;
    /* Hardware interrupt might invoke isr() here. */
}

void isr()
{
    int x = 1, y = 2;
    swap(&x, &y);
}

Reentrant interrupt handler A reentrant interrupt handler is an interrupt handler that re-enables interrupts early in the interrupt handler. This may reduce interrupt latency. In general, while programming interrupt service routines, it is recommended to re-enable interrupts as soon as possible in the interrupt handler. This practice helps to avoid losing interrupts. Further examples In the following code, neither the f nor the g function is reentrant.

int v = 1;

int f()
{
    v += 2;
    return v;
}

int g()
{
    return f() + 2;
}

In the above, f depends on a non-constant global variable v; thus, if f is interrupted during execution by an ISR which modifies v, then reentry into f will return the wrong value of v. The value of v and, therefore, the return value of f, cannot be predicted with confidence: they will vary depending on whether an interrupt modified v during f's execution. Hence, f is not reentrant. Neither is g, because it calls f, which is not reentrant. 
These slightly altered versions are reentrant:

int f(int i)
{
    return i + 2;
}

int g(int i)
{
    return f(i) + 2;
}

In the following, function() is thread-safe, but not (necessarily) reentrant:

void function()
{
    mutex_lock();
    // ...
    // function body
    // ...
    mutex_unlock();
}

In the above, function() can be called by different threads without any problem. But, if the function is used in a reentrant interrupt handler and a second interrupt arises inside the function, the second routine will hang forever. As interrupt servicing can disable other interrupts, the whole system could suffer. Notes See also Referential transparency References Works cited Further reading Concurrency (computer science) Recursion Subroutines Articles with example C code
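As a further illustration of the sig_atomic_t guidance in the rules above, the following minimal sketch shows the common pattern of a signal handler that performs only a single write to a volatile sig_atomic_t flag, leaving all other work to the main loop. This is a sketch, not code from the article; names such as handle_sigint and stop_requested are illustrative, and sigaction() is generally preferred over signal() in production code.

#include <signal.h>
#include <stdio.h>

/* Simple reads and writes of a volatile sig_atomic_t are async-signal-safe. */
static volatile sig_atomic_t stop_requested = 0;

static void handle_sigint(int signum)
{
    (void)signum;
    stop_requested = 1;   /* a single plain write: safe inside the handler */
}

int main(void)
{
    signal(SIGINT, handle_sigint);
    while (!stop_requested) {
        /* main work loop; the flag is polled between units of work */
    }
    printf("Interrupted, shutting down cleanly.\n");
    return 0;
}

Because the handler neither calls non-reentrant library functions nor touches shared state beyond the flag, it avoids the hazards described for the swap() examples above.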
Reentrancy (computing)
[ "Mathematics" ]
2,442
[ "Mathematical logic", "Recursion" ]
154,877
https://en.wikipedia.org/wiki/Weather%20balloon
A weather balloon, also known as a sounding balloon, is a balloon (specifically a type of high-altitude balloon) that carries instruments to the stratosphere to send back information on atmospheric pressure, temperature, humidity and wind speed by means of a small, expendable measuring device called a radiosonde. To obtain wind data, they can be tracked by radar, radio direction finding, or navigation systems (such as the satellite-based Global Positioning System, GPS). Balloons meant to stay at a constant altitude for long periods of time are known as transosondes. Weather balloons that do not carry an instrument pack are used to determine upper-level winds and the height of cloud layers. For such balloons, a theodolite or total station is used to track the balloon's azimuth and elevation, which are then converted to estimated wind speed and direction and/or cloud height, as applicable. Weather balloons are launched around the world for observations used to diagnose current conditions as well as by human forecasters and computer models for weather forecasting. Between 900 and 1,300 locations around the globe do routine releases, two or four times daily. History One of the first people to use weather balloons was the French meteorologist Léon Teisserenc de Bort. Starting in 1896 he launched hundreds of weather balloons from his observatory in Trappes, France. These experiments led to his discovery of the tropopause and stratosphere. Transosondes, weather balloons with instrumentation meant to stay at a constant altitude for long periods of time to help diagnose radioactive debris from atomic fallout, were experimented with in 1958. The drone technology boom has led to the development of weather drones since the late 1990s. These may begin to replace balloons as a more specific means for carrying radiosondes. Materials and equipment The balloon itself produces the lift, and is usually made of a highly flexible latex material, though chloroprene may also be used. The unit that performs the actual measurements and radio transmissions hangs at the lower end of the string, and is called a radiosonde. Specialized radiosondes are used for measuring particular parameters, such as determining the ozone concentration. The balloon is usually filled with hydrogen, though helium – a more expensive, but viable option nonetheless – is also frequently used. The ascent rate can be controlled by the amount of gas with which the balloon is filled, usually at around . Weather balloons may reach altitudes of or more, limited by diminishing pressures causing the balloon to expand to such a degree (typically by a 100:1 factor) that it disintegrates. In this instance the instrument package is usually lost, although a parachute may be employed to help in allowing retrieval of the instrument. Above that altitude sounding rockets are used to carry instruments aloft, and for even higher altitudes satellites are used. Launch time, location, and uses Weather balloons are launched around the world for observations used to diagnose current conditions as well as by human forecasters and computer models for weather forecasting. Between 900 and 1,300 locations around the globe do routine releases, two or four times daily, usually at 0000 UTC and 1200 UTC. Some facilities will also do occasional supplementary special releases when meteorologists determine there is a need for additional data between the 12-hour routine launches in which time much can change in the atmosphere. 
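As a quick illustration of the roughly 100:1 expansion factor mentioned above, and assuming the balloon stays approximately spherical (an assumption for this sketch), the diameter grows by the cube root of the volume ratio:

\[ \frac{d_\text{burst}}{d_\text{launch}} = \left(\frac{V_\text{burst}}{V_\text{launch}}\right)^{1/3} \approx 100^{1/3} \approx 4.6 \]

so the balloon reaches burst altitude at roughly four to five times its launch diameter.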
Military and civilian government meteorological agencies such as the National Weather Service in the US typically launch balloons, and by international agreements, almost all the data are shared with all nations. Specialized uses also exist, such as for aviation interests, pollution monitoring, photography or videography, and research. Examples include pilot balloons (Pibal). Field research programs often use mobile launchers from land vehicles as well as ships and aircraft (usually dropsondes in this case). In recent years, weather balloons have also been used for scattering human ashes at high altitudes. A weather balloon was also used to create the fictional entity 'Rover' during the production of the 1960s TV series The Prisoner in Portmeirion, Gwynedd, North Wales, UK in September 1966. This was retained in further scenes shot at MGM Borehamwood UK during 1966–67. Environmental issues While weather forecasting is increasingly reliant on satellites and radar technology, it still heavily involves the use of weather balloons. These devices, launched from thousands of stations worldwide, ascend into the atmosphere to collect meteorological data. The United States, for example, releases approximately 76,600 balloons annually, while Canada launches 22,000. Weather balloons, after reaching an altitude of approximately 35 kilometers, burst, releasing their instruments and the latex material they are made of. While the instruments are often recovered, the latex remains in the environment, posing a significant threat to marine ecosystems. Studies have shown that a substantial portion of weather balloons eventually end up in the ocean. For instance, one Australian researcher collected over 2,460 pieces of weather balloon debris from the Great Barrier Reef, estimating that up to 300 balloons per week may be released into the marine environment. This environmental impact underscores the need for sustainable alternatives in weather data collection. Scientists and environmentalists have raised concerns about weather balloons' environmental impact. The latex material, which can persist in the ocean for extended periods, can harm marine life, including sea turtles, birds, and fish. Efforts to minimize the environmental impact of weather balloons include developing biodegradable materials and improved recovery methods. However, the continued reliance on weather balloons for meteorological data makes it challenging to balance the need for accurate weather forecasts with environmental sustainability. See also Atmospheric sounding Ceiling balloon High-altitude balloon SCR-658 radar Skyhook balloon Timeline of hydrogen technologies High-altitude platform UFOs References External links Atmospheric Soundings for Canada and the United States – University of Wyoming Balloon Lift With Lighter Than Air Gases – University of Hawaii Examples of Launches of Instrumented Balloons in Storms – NSSL Federal Meteorological Handbook No. 
3 – Rawinsonde and Pibal Observations Kites and Balloons – NOAA Photo Library NASA Balloon Program Office – Wallops Flight Facility, Virginia National Science Digital Library: Weather Balloons – Lesson plan for middle school Pilot Balloon Observation Theodolites – Martin Brenner, CSULB StratoCat – Historical recompilation project on the use of stratospheric balloons in the scientific research, the military field and the aerospace activity WMO spreadsheet of all Upper Air stations around the world (revised location September 2008) Earth observation balloons Atmospheric sounding Meteorological instrumentation and equipment Scientific observation French inventions
Weather balloon
[ "Technology", "Engineering" ]
1,329
[ "Meteorological instrumentation and equipment", "Measuring instruments" ]
154,881
https://en.wikipedia.org/wiki/Heat%20index
The heat index (HI) is an index that combines air temperature and relative humidity, in shaded areas, to posit a human-perceived equivalent temperature, as how hot it would feel if the humidity were some other value in the shade. For example, when the temperature is with 70% relative humidity, the heat index is (see table below). The heat index is meant to describe experienced temperatures in the shade, but it does not take into account heating from direct sunlight, physical activity or cooling from wind. The human body normally cools itself by evaporation of sweat. High relative humidity reduces evaporation and cooling, increasing discomfort and potential heat stress. Different individuals perceive heat differently due to body shape, metabolism, level of hydration, pregnancy, or other physical conditions. Measurement of perceived temperature has been based on reports of how hot subjects feel under controlled conditions of temperature and humidity. Besides the heat index, other measures of apparent temperature include the Canadian humidex, the wet-bulb globe temperature, "relative outdoor temperature", and the proprietary "RealFeel". History The heat index was developed in 1979 by Robert G. Steadman. Like the wind chill index, the heat index contains assumptions about the human body mass and height, clothing, amount of physical activity, individual heat tolerance, sunlight and ultraviolet radiation exposure, and the wind speed. Significant deviations from these will result in heat index values which do not accurately reflect the perceived temperature. In Canada, the similar humidex (a Canadian innovation introduced in 1965) is used in place of the heat index. While both the humidex and the heat index are calculated using dew point, the humidex uses a dew point of as a base, whereas the heat index uses a dew point base of . Further, the heat index uses heat balance equations which account for many variables other than vapor pressure, which is used exclusively in the humidex calculation. A joint committee formed by the United States and Canada to resolve differences has since been disbanded. Definition The heat index of a given combination of (dry-bulb) temperature and humidity is defined as the dry-bulb temperature which would feel the same if the water vapor pressure were 1.6 kPa. Quoting Steadman, "Thus, for instance, an apparent temperature of refers to the same level of sultriness, and the same clothing requirements, as a dry-bulb temperature of with a vapor pressure of 1.6 kPa." This vapor pressure corresponds for example to an air temperature of and relative humidity of 40% in the sea-level psychrometric chart, and in Steadman's table at 40% RH the apparent temperature is equal to the true temperature between . At standard atmospheric pressure (101.325 kPa), this baseline also corresponds to a dew point of and a mixing ratio of 0.01 (10 g of water vapor per kilogram of dry air). A given value of relative humidity causes larger increases in the heat index at higher temperatures. For example, at approximately , the heat index will agree with the actual temperature if the relative humidity is 45%, but at , any relative-humidity reading above 18% will make the heat index higher than . It has been suggested that the equation described is valid only if the temperature is or more. 
The relative humidity threshold, below which a heat index calculation will return a number equal to or lower than the air temperature (a lower heat index is generally considered invalid), varies with temperature and is not linear. The threshold is commonly set at an arbitrary 40%. The heat index and its counterpart the humidex both take into account only two variables, shade temperature and atmospheric moisture (humidity), thus providing only a limited estimate of thermal comfort. Additional factors such as wind, sunshine and individual clothing choices also affect perceived temperature; these factors are parameterized as constants in the heat index formula. Wind, for example, is assumed to be . Wind passing over wet or sweaty skin causes evaporation and a wind chill effect that the heat index does not measure. The other major factor is sunshine; standing in direct sunlight can add up to to the apparent heat compared to shade. There have been attempts to create a universal apparent temperature, such as the wet-bulb globe temperature, "relative outdoor temperature", "feels like", or the proprietary "RealFeel". Meteorological considerations Outdoors in open conditions, as the relative humidity increases, first haze and ultimately a thicker cloud cover develops, reducing the amount of direct sunlight reaching the surface. Thus, there is an inverse relationship between maximum potential temperature and maximum potential relative humidity. Because of this factor, it was once believed that the highest heat index reading actually attainable anywhere on Earth was approximately . However, in Dhahran, Saudi Arabia on July 8, 2003, the dew point was while the temperature was , resulting in a heat index of . On August 28, 2024, a weather station in southern Iran recorded a heat index of , which will be a new record if confirmed. The human body requires evaporative cooling to prevent overheating. Wet-bulb temperature and Wet Bulb Globe Temperature are used to determine the ability of a body to eliminate excess heat. A sustained wet-bulb temperature of about can be fatal to healthy people; at this temperature our bodies switch from shedding heat to the environment, to gaining heat from it. Thus a wet bulb temperature of is the threshold beyond which the body is no longer able to adequately cool itself. Table of values The table below is from the U.S. National Oceanic and Atmospheric Administration. The columns begin at , but there is also a heat index effect at and similar temperatures when there is high humidity. For example, if the air temperature is and the relative humidity is 65%, the heat index is Effects of the heat index (shade values) Exposure to full sunshine can increase heat index values by up to 8 °C (14 °F). Formula There are many formulas devised to approximate the original tables by Steadman. Anderson et al. (2013), NWS (2011), Jonson and Long (2004), and Schoen (2005) have lesser residuals in this order. The former two are a set of polynomials, but the third one is by a single formula with exponential functions. The formula below approximates the heat index in degrees Fahrenheit, to within ±. It is the result of a multivariate fit (temperature equal to or greater than and relative humidity equal to or greater than 40%) to a model of the human body. This equation reproduces the above NOAA National Weather Service table (except the values at & 45%/70% relative humidity vary unrounded by less than ±1, respectively). 
where HI = heat index (in degrees Fahrenheit) T = ambient dry-bulb temperature (in degrees Fahrenheit) R = relative humidity (percentage value between 0 and 100) The following coefficients can be used to determine the heat index when the temperature is given in degrees Celsius, where HI = heat index (in degrees Celsius) T = ambient dry-bulb temperature (in degrees Celsius) R = relative humidity (percentage value between 0 and 100) An alternative set of constants for this equation that is within ± of the NWS master table for all humidities from 0 to 80% and all temperatures between and all heat indices below is: A further alternate is this: where For example, using this last formula, with temperature and relative humidity (RH) of 85%, the result would be: . Limitations The heat index does not work well with extreme conditions, like supersaturation of air, when the air is more than 100% saturated with water. David Romps, a physicist and climate scientist at the University of California, Berkeley and his graduate student Yi-Chuan Lu, found that the heat index was underestimating the severity of intense heat waves, such as the 1995 Chicago heat wave. Other issues with the heat index include the unavailability of precise humidity data in many geographical regions, the assumption that the person is healthy, and the assumption that the person has easy access to water and shade. See also Apparent temperature Humidex Wet-bulb temperature Wind chill References External links Description of wind chill & apparent temperature Formulae in metric units Heat Index Calculator Calculates both °F and °C Current map of global heat index values Atmospheric thermodynamics Meteorological indices Meteorological quantities
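The regression form described in the Formula section can be sketched in code as follows. This is only an illustrative implementation: the coefficients are the widely published NWS (Rothfusz) values for the Fahrenheit regression, assumed here because the article's own coefficient lists are not reproduced in this extract, and the low-humidity and low-temperature adjustments the NWS applies are omitted.

#include <stdio.h>

/* Rothfusz-style regression: T in degrees Fahrenheit, R = relative humidity (0-100).
   Intended for roughly T >= 80 F and R >= 40%, as described in the article. */
static double heat_index_f(double T, double R)
{
    return -42.379
         + 2.04901523  * T
         + 10.14333127 * R
         - 0.22475541  * T * R
         - 6.83783e-3  * T * T
         - 5.481717e-2 * R * R
         + 1.22874e-3  * T * T * R
         + 8.5282e-4   * T * R * R
         - 1.99e-6     * T * T * R * R;
}

int main(void)
{
    /* Example: 90 F at 70% relative humidity gives roughly 106 F. */
    printf("heat index: %.1f F\n", heat_index_f(90.0, 70.0));
    return 0;
}

The result for 90 °F at 70% relative humidity is close to the 105 °F value in the NOAA table, which is the level of agreement (about ±1 °F) the article describes for this class of fit.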
Heat index
[ "Physics", "Mathematics" ]
1,724
[ "Quantity", "Physical quantities", "Meteorological quantities" ]
154,910
https://en.wikipedia.org/wiki/Wind%20chill
Wind chill (popularly wind chill factor) is the sensation of cold produced by the wind for a given ambient air temperature on exposed skin as the air motion accelerates the rate of heat transfer from the body to the surrounding atmosphere. Its values are always lower than the air temperature in the range where the formula is valid. When the apparent temperature is higher than the air temperature, the heat index is used instead. Explanation A surface loses heat through conduction, evaporation, convection, and radiation. The rate of convection depends on both the difference in temperature between the surface and the fluid surrounding it and the velocity of that fluid with respect to the surface. As convection from a warm surface heats the air around it, an insulating boundary layer of warm air forms against the surface. Moving air disrupts this boundary layer, or epiclimate, carrying the warm air away, thereby allowing cooler air to replace the warm air against the surface and increasing the temperature difference in the boundary layer. The faster the wind speed, the more readily the surface cools. Contrary to popular belief, wind chill does not refer to how cold things get, and they will only get as cold as the air temperature. This means radiators and pipes cannot freeze when wind chill is below freezing and the air temperature is above freezing. Alternative approaches Many formulas exist for wind chill because, unlike temperature, wind chill has no universally agreed-upon standard definition or measurement. All the formulas attempt to qualitatively predict the effect of wind on the temperature humans perceive. Weather services in different countries use standards unique to their country or region; for example, the U.S. and Canadian weather services use a model accepted by the National Weather Service. That model has evolved over time. The first wind chill formulas and tables were developed by Paul Allman Siple and Charles F. Passel working in the Antarctic before the Second World War, and were made available by the National Weather Service by the 1970s. They were based on the cooling rate of a small plastic bottle as its contents turned to ice while suspended in the wind on the expedition hut roof, at the same level as the anemometer. The so-called Windchill Index provided a pretty good indication of the severity of the weather. In the 1960s, wind chill began to be reported as a wind chill equivalent temperature (WCET), which is theoretically less useful. The author of this change is unknown, but it was not Siple or Passel as is generally believed. At first, it was defined as the temperature at which the windchill index would be the same in the complete absence of wind. This led to equivalent temperatures that exaggerated the severity of the weather. Charles Eagan realized that people are rarely still and that even when it is calm, there is some air movement. He redefined the absence of wind to be an air speed of , which was about as low a wind speed as a cup anemometer could measure. This led to more realistic (warmer-sounding) values of equivalent temperature. Original model Equivalent temperature was not universally used in North America until the 21st century. Until the 1970s, the coldest parts of Canada reported the original Wind Chill Index, a three- or four-digit number with units of kilocalories/hour per square metre. Each individual calibrated the scale of numbers personally, through experience. 
The chart also provided general guidance to comfort and hazard through threshold values of the index, such as 1400, which was the threshold for frostbite. The original formula for the index was: where: WCI = wind chill index, kg⋅cal/m2/h v = wind velocity, m/s Ta = air temperature, °C North American and United Kingdom wind chill index In November 2001, Canada, the United States, and the United Kingdom implemented a new wind chill index developed by scientists and medical experts on the Joint Action Group for Temperature Indices (JAG/TI). It is determined by iterating a model of skin temperature under various wind speeds and temperatures using standard engineering correlations of wind speed and heat transfer rate. Heat transfer was calculated for a bare face in wind, facing the wind, while walking into it at . The model corrects the officially measured wind speed to the wind speed at face height, assuming the person is in an open field. The results of this model may be approximated, to within one degree, from the following formulas. The standard wind chill formula for Environment Canada is: where Twc is the wind chill index, based on the Celsius temperature scale; Ta is the air temperature in degrees Celsius; and v is the wind speed at standard anemometer height, in kilometres per hour. When the temperature is and the wind speed is , the wind chill index is −24. If the temperature remains at −20 °C and the wind speed increases to , the wind chill index falls to −33. The equivalent formula in US customary units is: where Twc is the wind chill index, based on the Fahrenheit scale; Ta is the air temperature in degrees Fahrenheit; and v is the wind speed in miles per hour. Windchill temperature is defined only for temperatures at or below and wind speeds above . As the air temperature falls, the chilling effect of any wind that is present increases. For example, a wind will lower the apparent temperature by a wider margin at an air temperature of than a wind of the same speed would if the air temperature were . The 2001 WCET is a steady-state calculation (except for the time-to-frostbite estimates). There are significant time-dependent aspects to wind chill because cooling is most rapid at the start of any exposure, when the skin is still warm. Australian apparent temperature The apparent temperature (AT), invented in the late 1970s, was designed to measure thermal sensation in indoor conditions. It was extended in the early 1980s to include the effect of sun and wind. The AT index used here is based on a mathematical model of an adult, walking outdoors, in the shade (Steadman 1994). The AT is defined as the temperature, at the reference humidity level, producing the same amount of discomfort as that experienced under the current ambient temperature and humidity. The formula is: where: is dry-bulb temperature (°C) is water vapour pressure (hPa) is wind speed (m/s) at an elevation of 10 m (33ft) The vapour pressure can be calculated from the temperature and relative humidity using the equation: where: is dry-bulb temperature (°C) is relative humidity (%) represents the exponential function The Australian formula includes the important factor of humidity and is somewhat more involved than the simpler North American model. The North American formula was designed to be applied at low temperatures (as low as ) when humidity levels are also low. The hot-weather version of the AT (1984) is used by the National Weather Service in the United States. 
In the United States, this simple version of the AT is known as the heat index. References External links National Center for Atmospheric Research Table of wind chill temperatures in Celsius and Fahrenheit Current map of global wind chill values Wind chill calculator at the US National Weather Service Atmospheric thermodynamics Meteorological indices Meteorological quantities Units of meteorology measurement
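A minimal code sketch of the 2001 North American wind chill calculation discussed above. The coefficients are the commonly published Environment Canada / NWS values and are supplied here as an assumption, since the formulas themselves are not reproduced in this extract; the index is only defined for temperatures at or below about 10 °C and wind speeds above about 4.8 km/h.

#include <math.h>
#include <stdio.h>

/* 2001 JAG/TI wind chill: Ta in degrees Celsius, v = wind speed in km/h
   at standard anemometer height. Valid roughly for Ta <= 10 C and v > 4.8 km/h. */
static double wind_chill_c(double Ta, double v)
{
    double vp = pow(v, 0.16);
    return 13.12 + 0.6215 * Ta - 11.37 * vp + 0.3965 * Ta * vp;
}

int main(void)
{
    /* Illustrative check against the -33 figure quoted in the article for -20 C;
       the corresponding wind speed is elided there and assumed here to be 30 km/h. */
    printf("wind chill: %.0f\n", wind_chill_c(-20.0, 30.0));
    return 0;
}

With these coefficients, −20 °C and a 30 km/h wind give about −33, consistent with the example in the text.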
Wind chill
[ "Physics", "Mathematics" ]
1,485
[ "Units of meteorology measurement", "Physical quantities", "Quantity", "Meteorological quantities", "Units of measurement" ]
154,963
https://en.wikipedia.org/wiki/Dicyclic%20group
In group theory, a dicyclic group (notation Dicn or Q4n) is a particular kind of non-abelian group of order 4n (n > 1). It is an extension of the cyclic group of order 2 by a cyclic group of order 2n, giving the name di-cyclic. In the notation of exact sequences of groups, this extension can be expressed as: More generally, given any finite abelian group with an order-2 element, one can define a dicyclic group. Definition For each integer n > 1, the dicyclic group Dicn can be defined as the subgroup of the unit quaternions generated by More abstractly, one can define the dicyclic group Dicn as the group with the following presentation Some things to note which follow from this definition: if , then Thus, every element of Dicn can be uniquely written as a^m x^l, where 0 ≤ m < 2n and l = 0 or 1. The multiplication rules are given by It follows that Dicn has order 4n. When n = 2, the dicyclic group is isomorphic to the quaternion group Q. More generally, when n is a power of 2, the dicyclic group is isomorphic to the generalized quaternion group. Properties For each n > 1, the dicyclic group Dicn is a non-abelian group of order 4n. (For the degenerate case n = 1, the group Dic1 is the cyclic group C4, which is not considered dicyclic.) Let A = ⟨a⟩ be the subgroup of Dicn generated by a. Then A is a cyclic group of order 2n, so [Dicn:A] = 2. As a subgroup of index 2 it is automatically a normal subgroup. The quotient group Dicn/A is a cyclic group of order 2. Dicn is solvable; note that A is normal, and being abelian, is itself solvable. Binary dihedral group The dicyclic group is a binary polyhedral group — it is one of the classes of subgroups of the Pin group Pin−(2), which is a subgroup of the Spin group Spin(3) — and in this context is known as the binary dihedral group. The connection with the binary cyclic group C2n, the cyclic group Cn, and the dihedral group Dihn of order 2n is illustrated in the diagram at right, and parallels the corresponding diagram for the Pin group. Coxeter writes the binary dihedral group as ⟨2,2,n⟩ and binary cyclic group with angle-brackets, ⟨n⟩. There is a superficial resemblance between the dicyclic groups and dihedral groups; both are a sort of "mirroring" of an underlying cyclic group. But the presentation of a dihedral group would have x^2 = 1, instead of x^2 = a^n; and this yields a different structure. In particular, Dicn is not a semidirect product of A and ⟨x⟩, since A ∩ ⟨x⟩ is not trivial. The dicyclic group has a unique involution (i.e. an element of order 2), namely x^2 = a^n. Note that this element lies in the center of Dicn. Indeed, the center consists solely of the identity element and x^2. If we add the relation x^2 = 1 to the presentation of Dicn one obtains a presentation of the dihedral group Dihn, so the quotient group Dicn/⟨x^2⟩ is isomorphic to Dihn. There is a natural 2-to-1 homomorphism from the group of unit quaternions to the 3-dimensional rotation group described at quaternions and spatial rotations. Since the dicyclic group can be embedded inside the unit quaternions one can ask what the image of it is under this homomorphism. The answer is just the dihedral symmetry group Dihn. For this reason the dicyclic group is also known as the binary dihedral group. Note that the dicyclic group does not contain any subgroup isomorphic to Dihn. The analogous pre-image construction, using Pin+(2) instead of Pin−(2), yields another dihedral group, Dih2n, rather than a dicyclic group. 
Generalizations Let A be an abelian group, having a specific element y in A with order 2. A group G is called a generalized dicyclic group, written as Dic(A, y), if it is generated by A and an additional element x, and in addition we have that [G:A] = 2, x^2 = y, and for all a in A, x^−1 a x = a^−1. Since for a cyclic group of even order, there is always a unique element of order 2, we can see that dicyclic groups are just a specific type of generalized dicyclic group. The dicyclic group is the case of the family of binary triangle groups defined by the presentation: Taking the quotient by the additional relation produces an ordinary triangle group, which in this case is the dihedral quotient . See also binary polyhedral group binary cyclic group, ⟨n⟩, order 2n binary tetrahedral group, 2T = ⟨2,3,3⟩, order 24 binary octahedral group, 2O = ⟨2,3,4⟩, order 48 binary icosahedral group, 2I = ⟨2,3,5⟩, order 120 References External links Dicyclic groups on GroupNames Finite groups Quaternions
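For reference, the presentation referred to (but not reproduced) in the Definition section is conventionally written as follows; this is the standard form found in most treatments and is supplied here as an assumption rather than quoted from the article:

\[ \mathrm{Dic}_n = \langle\, a, x \mid a^{2n} = 1,\ x^2 = a^n,\ x^{-1} a x = a^{-1} \,\rangle . \]

From the last relation, x a^m = a^{-m} x for every m, which is why every element can be written as a^m x^l with 0 ≤ m < 2n and l ∈ {0, 1}, giving the order 4n stated above.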
Dicyclic group
[ "Mathematics" ]
1,192
[ "Mathematical structures", "Algebraic structures", "Finite groups" ]
154,985
https://en.wikipedia.org/wiki/System%20administrator
An IT administrator, system administrator, sysadmin, or admin is a person who is responsible for the upkeep, configuration, and reliable operation of computer systems, especially multi-user computers, such as servers. The system administrator seeks to ensure that the uptime, performance, resources, and security of the computers they manage meet the needs of the users, without exceeding a set budget when doing so. To meet these needs, a system administrator may acquire, install, or upgrade computer components and software; provide routine automation; maintain security policies; troubleshoot; train or supervise staff; or offer technical support for projects. Related fields Many organizations staff other positions related to system administration. In a larger company, these may all be separate positions within a computer support or Information Services (IS) department. In a smaller group they may be shared by a few sysadmins, or even a single person. A database administrator (DBA) maintains a database system, and is responsible for the integrity of the data and the efficiency and performance of the system. A network administrator maintains network infrastructure such as switches and routers, and diagnoses problems with these or with the behavior of network-attached computers. A security administrator is a specialist in computer and network security, including the administration of security devices such as firewalls, as well as consulting on general security measures. A web administrator maintains web server services (such as Apache or IIS) that allow for internal or external access to web sites. Tasks include managing multiple sites, administering security, and configuring necessary components and software. Responsibilities may also include software change management. A computer operator performs routine maintenance and upkeep, such as changing backup tapes or replacing failed drives in a redundant array of independent disks (RAID). Such tasks usually require physical presence in the room with the computer, and while less skilled than sysadmin tasks, may require a similar level of trust, since the operator has access to possibly sensitive data. A site reliability engineer (SRE) takes a software engineering or programmatic approach to managing systems. Training Most employers require a bachelor's degree in a related field, such as computer science, information technology, electronics engineering, or computer engineering. Some schools also offer undergraduate degrees and graduate programs in system administration. In addition, because of the practical nature of system administration and the easy availability of open-source server software, many system administrators enter the field self-taught. Generally, a prospective employee will be required to have experience with the computer systems they are expected to manage. In most cases, candidates are expected to possess industry certifications such as the Microsoft MCSA, MCSE, MCITP, Red Hat RHCE, Novell CNA, CNE, Cisco CCNA or CompTIA's A+ or Network+, Sun Certified SCNA, Linux Professional Institute, Linux Foundation Certified Engineer or Linux Foundation Certified System Administrator, among others. Sometimes, almost exclusively in smaller sites, the role of system administrator may be given to a skilled user in addition to, or in place of, their other duties. Skills The subject matter of system administration includes computer systems and the ways people use them in an organization. 
This entails a knowledge of operating systems and applications, as well as hardware and software troubleshooting, but also knowledge of the purposes for which people in the organization use the computers. Perhaps the most important skill for a system administrator is problem solving—frequently under various sorts of constraints and stress. The sysadmin is on call when a computer system goes down or malfunctions, and must be able to quickly and correctly diagnose what is wrong and how best to fix it. They may also need to have teamwork and communication skills; as well as being able to install and configure hardware and software. Sysadmins must understand the behavior of software in order to deploy it and to troubleshoot problems, and generally know several programming languages used for scripting or automation of routine tasks. A typical sysadmin's role is not to design or write new application software but when they are responsible for automating system or application configuration with various configuration management tools, the lines somewhat blur. Depending on the sysadmin's role and skillset they may be expected to understand equivalent key/core concepts a software engineer understands. That said, system administrators are not software engineers or developers, in the job title sense. Particularly when dealing with Internet-facing or business-critical systems, a sysadmin must have a strong grasp of computer security. This includes not merely deploying software patches, but also preventing break-ins and other security problems with preventive measures. In some organizations, computer security administration is a separate role responsible for overall security and the upkeep of firewalls and intrusion detection systems, but all sysadmins are generally responsible for the security of computer systems. Duties A system administrator's responsibilities might include: Analyzing system logs and identifying potential issues with computer systems. Applying operating system updates, patches, and configuration changes. Installing and configuring new hardware and software. Adding, removing, or updating user account information, resetting passwords, etc. Answering technical queries and assisting users. Responsibility for security. Responsibility for documenting the configuration of the system. Troubleshooting any reported problems. System performance tuning. Ensuring that the network infrastructure is up and running. Configuring, adding, and deleting file systems. Ensuring parity between dev, test and production environments. Training users Plan and manage the machine room environment In larger organizations, some of the tasks above may be divided among different system administrators or members of different organizational groups. For example, a dedicated individual(s) may apply all system upgrades, a Quality Assurance (QA) team may perform testing and validation, and one or more technical writers may be responsible for all technical documentation written for a company. System administrators, in larger organizations, tend not to be systems architects, systems engineers, or systems designers. In smaller organizations, the system administrator might also act as technical support, database administrator, network administrator, storage (SAN) administrator or application analyst. 
See also Application service management Bastard Operator From Hell (BOFH) DevOps Forum administrator Information technology operations League of Professional System Administrators LISA (organization) Orchestration (computing) Professional certification (computer technology) Superuser Sysop System Administrator Appreciation Day References Further reading Essential Linux Administration: A Comprehensive Guide for Beginners, by Chuck Easttom (Cengage Press, 2011) Essential System Administration (O'Reilly), 3rd Edition, 2001, by Æleen Frisch The Practice of System and Network Administration (Addison-Wesley), 2nd Edition 5 Jul. 2007, by Thomas A. Limoncelli, Christine Hogan and Strata R. Chalup The Practice of System and Network Administration Volume 1: DevOps and other Best Practices for Enterprise IT (Addison-Wesley), 3rd Edition. 4 Nov. 2016, by Thomas A. Limoncelli, Christine Hogan, Strata R. Chalup The Practice of Cloud System Administration: Designing and Operating Large Distributed Systems, Volume 2 (Addison-Wesley), 2 Sep. 2014, by Thomas A. Limoncelli, Christine Hogan, Strata R. Chalup Principles of Network and System Administration (J. Wiley & Sons), 2000, 2003 (2nd ed.), by Mark Burgess Time Management for System Administrators (O'Reilly), 2005, by Thomas A. Limoncelli UNIX and Linux System Administration Handbook (Prentice Hall), 5th edition, 8 Aug. 2017, by Trent R. Hein, Ben Whaley, Dan Mackin, Sandeep Negi "The blue collar workers of the 21st century", Minnesota Public Radio, 27 January 2004 External links Communication Workers of America Information systems Computer occupations Computer systems
System administrator
[ "Technology", "Engineering" ]
1,594
[ "Computer engineering", "Computer occupations", "Computer systems", "System administration", "Information systems", "Computer science", "Information technology", "Computers" ]
155,019
https://en.wikipedia.org/wiki/Hanging
Hanging is killing a person by suspending them from the neck with a noose or ligature. Hanging has been a standard method of capital punishment since the Middle Ages, and has been the primary execution method in numerous countries and regions. The first known account of execution by hanging is in Homer's Odyssey. Hanging is also a method of suicide. Methods of judicial hanging There are numerous methods of hanging in execution that instigate death either by cervical fracture or by strangulation. Short drop The short drop is a method of hanging in which the condemned prisoner stands on a raised support, such as a stool, ladder, cart, horse, or other vehicle, with the noose around the neck. The support is then moved away, leaving the person dangling from the rope. Suspended by the neck, the weight of the body tightens the noose around the neck, effecting strangulation and death. Loss of consciousness is typically rapid and death ensues in a few minutes. Before 1850, the short drop was the standard method of hanging, and it is still common in suicides and extrajudicial hangings (such as lynchings and summary executions) which do not benefit from the specialised equipment and drop-length calculation tables used in the newer methods. Pole method A short-drop variant is the Austro-Hungarian "pole" method, called (literally: strangling gallows), in which the following steps take place: The condemned is made to stand before a specialized vertical pole or pillar, approximately in height. A rope is attached around the condemned's feet and routed through a pulley at the base of the pole. The condemned is hoisted to the top of the pole by means of a sling running across the chest and under the armpits. A narrow-diameter noose is looped around the prisoner's neck, then secured to a hook mounted at the top of the pole. The chest sling is released, and the prisoner is rapidly jerked downward by the assistant executioners via the foot rope. The executioner stands on a stepped platform approximately high beside the condemned. The executioner would place the heel of his hand beneath the prisoner's jaw to increase the force on the neck vertebrae at the end of the drop, then manually dislocate the condemned's neck by forcing the head to one side while the neck vertebrae were under traction. This method was later also adopted by the successor states, most notably by Czechoslovakia, where the "pole" method was used as the single type of execution from 1918 until 1954, when the prison hosting Czechoslovakia's executions, Pankrác Prison, constructed an indoor gallows that exclusively accommodated short-drop hangings to replace the pole method. Nazi war criminal Karl Hermann Frank, executed in 1946 in Prague, was among approximately 1,000 condemned people executed by the pole hanging method in Czechoslovakia. Standard drop The standard drop involves a drop of between and came into use from 1866, when the scientific details were published by Irish doctor Samuel Haughton. Its use rapidly spread to English-speaking countries and those with judicial systems of English origin. It was considered a humane improvement on the short drop because it was intended to be enough to break the person's neck, causing immediate unconsciousness and rapid brain death. This method was used to execute condemned Nazis under United States jurisdiction after the Nuremberg Trials, including Joachim von Ribbentrop and Ernst Kaltenbrunner. 
In the execution of Ribbentrop, historian Giles MacDonogh records that: "The hangman botched the execution and the rope throttled the former foreign minister for 20 minutes before he expired." A Life magazine report on the execution merely says: "The trap fell open and with a sound midway between a rumble and a crash, Ribbentrop disappeared. The rope quivered for a time, then stood tautly straight." Long drop The long-drop process, also known as the measured drop, was introduced to Britain in 1872 by William Marwood as a scientific advance on the standard drop, and further refined by his successor James Berry. Instead of everyone falling the same standard distance, the person's height and weight were used to determine how much slack would be provided in the rope so that the distance dropped would be enough to ensure that the neck was broken, but not so much that the person was decapitated. Careful placement of the eye or knot of the noose (so that the head was jerked back as the rope tightened) contributed to breaking the neck. Prior to 1892, the drop was in the range of , depending on the weight of the body, and was calculated to deliver an energy of , which fractured the neck at either the 2nd and 3rd or 4th and 5th cervical vertebrae. This force resulted in some decapitations, such as the infamous case of Black Jack Ketchum in New Mexico Territory in 1901, owing to a significant weight gain while in custody not having been factored into the drop calculations. Between 1892 and 1913, the length of the drop was shortened to avoid decapitation. After 1913, other factors were also taken into account, and the energy delivered was reduced to about . The decapitation of Eva Dugan during a botched hanging in 1930 led the state of Arizona to switch to the gas chamber as its primary execution method, on the grounds that it was believed more humane. One of the more recent decapitations as a result of the long drop occurred when Barzan Ibrahim al-Tikriti was hanged in Iraq in 2007. Accidental decapitation also occurred during the 1962 hanging of Arthur Lucas, one of the last two people put to death in Canada. Nazis executed under British jurisdiction, including Josef Kramer, Fritz Klein, Irma Grese and Elisabeth Volkenrath, were hanged by Albert Pierrepoint using the variable-drop method devised by Marwood. The record speed for a British long-drop hanging was seven seconds from the executioner entering the cell to the drop. Speed was considered to be important in the British system as it reduced the condemned's mental distress. Long-drop hanging is still practised as the method of execution in a few countries, including Japan and Singapore. As suicide Hanging is a common suicide method. The materials necessary for suicide by hanging are readily available to the average person, compared with firearms or poisons. Full suspension is not required, and for this reason, hanging is especially commonplace among suicidal prisoners . A type of hanging comparable to full suspension hanging may be obtained by self-strangulation using a ligature around the neck and the partial weight of the body (partial suspension) to tighten the ligature. When a suicidal hanging involves partial suspension the deceased is found to have both feet touching the ground, e.g., they are kneeling, crouching or standing. 
Partial suspension or partial weight-bearing on the ligature is sometimes used, particularly in prisons, mental hospitals or other institutions, where full suspension support is difficult to devise, because high ligature points (e.g., hooks or pipes) have been removed. In Canada, hanging is the most common method of suicide, and in the U.S., hanging is the second most common method, after self-inflicted gunshot wounds. In the United Kingdom, where firearms are less easily available, in 2001 hanging was the most common method among men and the second most commonplace among women (after poisoning). Those who survive a suicide-via-hanging attempt, whether due to breakage of the cord or ligature point, or being discovered and cut down, face a range of serious injuries, including cerebral anoxia (which can lead to permanent brain damage), laryngeal fracture, cervical spine fracture (which may cause paralysis), tracheal fracture, pharyngeal laceration, and carotid artery injury. As human sacrifice There are some suggestions that the Vikings practised hanging as human sacrifices to Odin, to honour Odin's own sacrifice of hanging himself from Yggdrasil. In Northern Europe, it is widely speculated that the Iron Age bog bodies, many of which show signs of having been hanged, were examples of human sacrifice to the gods. Medical effects A hanging may induce one or more of the following medical conditions, some leading to death: Closure of carotid arteries causing cerebral hypoxia Closure of the jugular veins Breaking of the neck (cervical fracture) causing traumatic spinal cord injury or even unintended decapitation Closure of the airway The cause of death in hanging depends on the conditions related to the event. When the body is released from a relatively high position, the major cause of death is severe trauma to the upper cervical spine. The injuries produced are highly variable. One study showed that only a small minority of a series of judicial hangings produced fractures to the cervical spine (6 out of 34 cases studied), with half of these fractures (3 out of 34) being the classic "hangman's fracture" (bilateral fractures of the pars interarticularis of the C2 vertebra). According to Historical and biomechanical aspects of hangman's fracture, the phrase in the usual execution order, "hanged by the neck until dead", was necessary. The side, or subaural knot, has been shown to produce other, more complex injuries, with one thoroughly studied case producing only ligamentous injuries to the cervical spine and bilateral vertebral artery disruptions, but no major vertebral fractures or crush injuries to the spinal cord. In the absence of fracture and dislocation, occlusion of blood vessels becomes the major cause of death, rather than asphyxiation. Obstruction of venous drainage of the brain via occlusion of the internal jugular veins leads to cerebral oedema and then cerebral ischemia. The face will typically become engorged and cyanotic (turned blue through lack of oxygen). Compromise of the cerebral blood flow may occur by obstruction of the carotid arteries, even though their obstruction requires far more force than the obstruction of jugular veins, since they are seated deeper and they contain blood in much higher pressure compared to the jugular veins. Notable practices across the globe Hanging has been a method of capital punishment in many countries, and is still used by many countries to this day. 
Long-drop hanging is mainly used in former British colonies, while short-drop and suspension hanging are common elsewhere, in countries including Iran and Afghanistan. Afghanistan Hanging is the most commonly used form of capital punishment in Afghanistan. Australia Capital punishment was a part of the legal system of Australia from the establishment of New South Wales as a British penal colony until 1985, by which time all Australian states and territories had abolished the death penalty. In practice, the last execution in Australia was the hanging of Ronald Ryan on 3 February 1967, in Victoria. During the 19th century, crimes that could carry a death sentence included burglary, sheep theft, forgery, sexual assault, murder and manslaughter, and roughly eighty people were hanged every year throughout the Australian colonies for these crimes. Bahamas The Bahamas employs hanging to execute the condemned, but no executions have been conducted in the country since 2000. As of 2023, the remaining death row inmates have had their sentences commuted. Bangladesh Hanging has been the only method of execution in Bangladesh since its independence. Brazil Death by hanging was the customary method of capital punishment in Brazil throughout its history. Some important national heroes like Tiradentes (1792) were killed by hanging. The last man executed in Brazil was the slave Francisco, in 1876. The death penalty was abolished for all crimes, except for those committed under extraordinary circumstances such as war or military law, in 1890. Bulgaria Bulgaria's national hero, Vasil Levski, was executed by hanging by the Ottoman court in Sofia in 1873. Every year since Bulgaria's liberation, thousands come with flowers on the date of his death, 19 February, to his monument where the gallows stood. The last execution was in 1989, and the death penalty was abolished for all crimes in 1998. Canada Historically, hanging was the only method of execution used in Canada and was in use as a possible punishment for all murders until 1961, when murders were reclassified into capital and non-capital offences. In 1976 the death penalty was restricted to certain offences under the National Defence Act, and it was completely abolished in 1998. The last hangings in Canada took place on 11 December 1962. Egypt In 1955, Egypt hanged three Israelis on charges of spying. In 1982, Egypt hanged three civilians convicted of the assassination of Anwar Sadat. In 2004, Egypt hanged five militants on charges of trying to kill the Prime Minister. To this day, hanging remains the standard method of capital punishment in Egypt, which executes more people each year than any other African country. Germany In the territories occupied by Nazi Germany from 1939 to 1945, strangulation hanging was a preferred means of public execution, although more criminal executions were performed by guillotine than by hanging. Those most commonly sentenced were partisans and black marketeers, whose bodies were usually left hanging for long periods. There are also numerous reports of concentration camp inmates being hanged. Hanging continued in post-war Germany in the British and US occupation zones under their jurisdiction, and for Nazi war criminals, until well after West Germany itself had abolished the death penalty under the German constitution adopted in 1949. West Berlin was not subject to the Grundgesetz (Basic Law) and abolished the death penalty in 1951. 
The German Democratic Republic abolished the death penalty in 1987. The last execution ordered by a West German court was carried out by guillotine in Moabit prison in 1949. The last hanging in Germany was that of several war criminals in Landsberg am Lech on 7 June 1951. The last known execution in East Germany was in 1981, by a pistol shot to the neck. Hong Kong Hong Kong, now a special administrative region of China, has no capital punishment. When Hong Kong was still a part of the British Empire, it used hanging as the method of execution. The last person executed, in 1966, was a Chinese Vietnamese man who had attacked a security guard and another person. Hungary Imre Nagy, the prime minister of Hungary during the 1956 Revolution, was secretly tried, executed by hanging, and buried unceremoniously by the new Soviet-backed Hungarian government in 1958. Nagy was later publicly exonerated by Hungary. Capital punishment was abolished for all crimes in 1990. India Hanging as a judicial punishment was introduced by the British. All executions in India since independence have been carried out by hanging, although the law provides for military executions to be carried out by firing squad. In 1949, Nathuram Godse, who had been sentenced to death for the assassination of Mahatma Gandhi, was the first person to be executed by hanging in independent India. The Supreme Court of India has suggested that capital punishment should be given only in the "rarest of rare cases". Since 2001, eight people have been executed in India. Dhananjoy Chatterjee, the 1991 rapist and murderer, was executed on 14 August 2004 in Alipore Jail, Kolkata. Ajmal Kasab, the lone surviving gunman of the 2008 Mumbai attacks, was executed on 21 November 2012 in Yerwada Central Jail, Pune. The Supreme Court of India had previously rejected his appeal, and his mercy plea was then rejected by the President of India; he was hanged one week later. Afzal Guru, a terrorist found guilty of conspiracy in the December 2001 attack on the Indian Parliament, was executed by hanging in Tihar Jail, Delhi on 9 February 2013. Yakub Memon was convicted over his involvement in the 1993 Bombay bombings by the Special Terrorist and Disruptive Activities court on 27 July 2007. His appeals and petitions for clemency were all rejected, and he was finally executed by hanging on 30 July 2015 in Nagpur jail. In March 2020, four prisoners convicted of rape and murder were executed by hanging in Tihar Jail. Iran Death by hanging is the primary means of capital punishment in Iran, which carries out one of the highest numbers of annual executions in the world. The method used is the short drop, which does not break the neck of the condemned, but rather causes a slower death due to strangulation. It is a lawful punishment for murder, rape, and drug trafficking; in murder cases the condemned can be spared if diyya is paid to the victim's family, thus attaining their forgiveness (see Sharia). If the presiding judge deems the case to be "causing public outrage", he can order the hanging to take place in public at the spot where the crime was committed, typically from a mobile telescoping crane which hoists the condemned high into the air. On 19 July 2005, two boys, Mahmoud Asgari and Ayaz Marhoni, aged 15 and 17 respectively, who had been convicted of the rape of a 13-year-old boy, were hanged at Edalat (Justice) Square in Mashhad, on charges of homosexuality and rape. 
On 15 August 2004, a 16-year-old girl, Atefeh Sahaaleh (also called Atefeh Rajabi), was executed for having committed "acts incompatible with chastity". At dawn on 27 July 2008, the Iranian government executed 29 people at Evin Prison in Tehran. On 2 December 2008, an unnamed man was hanged for murder at Kazeroun Prison, just moments after he was pardoned by the murder victim's family. He was quickly cut down and rushed to a hospital, where he was successfully revived. The conviction and hanging of Reyhaneh Jabbari caused international uproar as she was sentenced to death in 2009 and hanged on 25 October 2014 for murdering a former intelligence officer; according to Jabbari's testimony she stabbed him during an attempted rape and then another person killed him. Iraq Hanging was used under the regime of Saddam Hussein, but was suspended along with capital punishment on 10 June 2003, when a coalition led by the United States invaded and overthrew the previous regime. The death penalty was reinstated on 8 August 2004. In September 2005, three murderers were the first people to be executed since the restoration. Then on 9 March 2006, an official of Iraq's Supreme Judicial Council confirmed that Iraqi authorities had executed the first insurgents by hanging. Saddam Hussein was sentenced to death by hanging for crimes against humanity on 5 November 2006, and was executed on 30 December 2006 at approximately 6:00 a.m. local time. During the drop, there was an audible crack indicating that his neck was broken, a successful example of a long-drop hanging. Barzan Ibrahim, the head of the Mukhabarat, Saddam's security agency, and Awad Hamed al-Bandar, former chief judge, were executed on 15 January 2007, also by the long-drop method, but Barzan was decapitated by the rope at the end of his fall. Former vice-president Taha Yassin Ramadan had been sentenced to life in prison on 5 November 2006, but the sentence was changed to death by hanging on 12 February 2007. He was the fourth and final man to be executed for the 1982 crimes against humanity on 20 March 2007. The execution went smoothly. At the Anfal genocide trial, Saddam's cousin Ali Hassan al-Majid (alias Chemical Ali), former defence minister Sultan Hashim Ahmed al-Tay, and former deputy Hussein Rashid Mohammed were sentenced to hang for their role in the Al-Anfal Campaign against the Kurds on 24 June 2007. Al-Majid was sentenced to death three more times: once for the 1991 suppression of a Shi'a uprising along with Abdul-Ghani Abdul Ghafur on 2 December 2008; once for the 1999 crackdown in the assassination of Grand Ayatollah Mohammad al-Sadr on 2 March 2009; and once on 17 January 2010 for the gassing of the Kurds in 1988; he was hanged on 25 January. On 26 October 2010, Saddam's top minister Tariq Aziz was sentenced to hang for persecuting the members of rival Shi'a political parties. His sentence was commuted to indefinite imprisonment after Iraqi president Jalal Talabani did not sign his execution order and he died in prison in 2015. On 14 July 2011, US forces transferred condemned prisoners Sultan Hashim Ahmed al-Tay and two of Saddam's half-brothers, Sabawi Ibrahim al-Tikriti and Watban Ibrahim al-Tikriti, to Iraqi authorities for execution. The Iraqi High Tribunal had sentenced Saddam's half-brothers to death on 11 March 2009 for their roles in the executions of 42 traders who were accused of manipulating food prices. None of the three men were executed. 
It is alleged that Iraq's government keeps the execution rate secret, and hundreds may be carried out every year. In 2007, Amnesty International stated that 900 people were at "imminent risk" of execution in Iraq. Israel Israel has provisions in its criminal law to use the death penalty for extraordinary crimes. It has been used only twice, and only one of those executions was by hanging. Nazi leader Adolf Eichmann was captured abroad, taken to Israel, and executed by hanging on 31 May 1962. Japan All executions in Japan are carried out by hanging. On 23 December 1948, Hideki Tojo, Kenji Doihara, Akira Mutō, Iwane Matsui, Seishirō Itagaki, Kōki Hirota, and Heitaro Kimura were hanged at Sugamo Prison in Ikebukuro by the U.S. occupation authorities in Allied-occupied Japan for war crimes, crimes against humanity, and crimes against peace during the Asian-Pacific theatre of World War II. On 27 February 2004, the mastermind of the sarin gas attack on the Tokyo subway, Shoko Asahara, was found guilty and sentenced to death by hanging. On 25 December 2006, serial killer Hiroaki Hidaka and three others were hanged in Japan. Long-drop hanging is the method of carrying out judicial capital punishment on civilians in Japan, as in the cases of Norio Nagayama, Mamoru Takuma, and Tsutomu Miyazaki. In 2018, Shoko Asahara and several of his cult members were finally hanged for the 1995 sarin gas attack. Jordan Death by hanging is the traditional method of capital punishment in Jordan. On 14 August 1993, Jordan hanged two Jordanians convicted of spying for Israel. Sajida al-Rishawi, the "fourth bomber" of the 2005 Amman bombings, was executed by hanging alongside Ziad al-Karbouly on 4 February 2015 in retribution for the immolation of Jordanian pilot Muath Al-Kasasbeh. Kuwait Kuwait has always used hanging for execution. During the Gulf War, occupying Iraqi officials carried out executions in Kuwait for a variety of reasons. After the war, Kuwait hanged Iraqi collaborators. Sometimes the executions are held in public. The most recent executions were in 2022. Lebanon Lebanon hanged two men in 1998 for murdering a man and his sister. Capital punishment has since been suspended altogether in Lebanon, as a result of staunch opposition by activists and some political factions. Liberia On 16 February 1979, seven men convicted of the ritual killing of the popular Kru traditional singer Moses Tweh were publicly hanged at dawn in Harper. Malaysia Hanging is the traditional method of capital punishment in Malaysia and has been used to execute people convicted of murder, drug trafficking and waging war against the government. The Barlow and Chambers execution was carried out under new, tighter drug laws. Pakistan In Pakistan, hanging is the most common form of execution. Portugal The last person executed by hanging in Portugal was Francisco Matos Lobos, on 16 April 1842. Before then, hanging had been a common method of execution. Russia Hanging was commonly practised in the Russian Empire during the rule of the Romanov Dynasty as an alternative to impalement, which was used in the 15th and 16th centuries. Hanging was abolished by Alexander II in 1868, after the abolition of serfdom, but had been restored by the time of his death, and his assassins were hanged. While those sentenced to death for murder were usually pardoned and their sentences commuted to life imprisonment, those guilty of high treason were usually executed. This also applied to the Grand Duchy of Finland and the Kingdom of Poland under the Russian crown. 
Taavetti Lukkarinen became the last Finn to be executed this way. He was hanged for espionage and high treason in 1916. The hanging was usually performed by short drop in public. The gallows were usually either a stout nearby tree branch, as in the case of Lukkarinen, or a makeshift gallows constructed for the purpose. After the October Revolution in 1917, capital punishment was, on paper, abolished, but continued to be used unabated against people perceived to be enemies of the regime. Under the Bolsheviks, most executions were performed by shooting, either by firing squad or by a single firearm. In 1943, hanging was restored primarily for German servicemen and native collaborators for atrocities committed against Soviet POWs and civilians. The last to be hanged were Andrey Vlasov and his companions in 1946. Singapore In Singapore, long-drop hanging is currently used as a mandatory punishment for crimes such as drug trafficking, murder and some types of kidnapping. It was introduced by the British, when they occupied Singapore and neighbouring Malaysia. It has also been used for punishing those convicted of unauthorised discharging of firearms. Sri Lanka Hanging was abolished in Sri Lanka in 1956, but in 1959 it was brought back and later halted in 1978. In 1975, the day before the execution of Maru Sira, he had been overdosed by the prison guards to prevent him from escaping. On the day of his execution he was unconscious, so when he was brought to the gallows, he was slumped over on the trapdoor with a noose around his neck, and when the executioner pulled the lever, his execution was botched and he strangled. Syria Syria has publicly hanged people, such as two individuals in 1952, Israeli spy Eli Cohen in 1965, and a number of Jews accused of spying for Israel in 1969. According to a 19th-century report, members of the Alawite sect centred on Lattakia in Syria had a particular aversion towards being hanged, and the family of the condemned was willing to pay "considerable sums" to ensure its relations were impaled, instead of being hanged. As far as Burckhardt could make out, this attitude was based upon the Alawites' idea that the soul ought to leave the body through the mouth, rather than leave it in any other fashion. The Islamic State also used hanging post-mortem, after they executed alleged spies for the western-backed coalition in Deir ez-Zor by cutting their throats in a slaughterhouse, during the Islamic holiday of Eid al-Adha in 2016. They also used shooting, beheading, fire and other methods to execute people during their rule. United Kingdom As a form of judicial execution in England, hanging is thought to date from the Anglo-Saxon period. Records of the names of British hangmen begin with Thomas de Warblynton in the 1360s; complete records extend from the 16th century to the last hangmen, Robert Leslie Stewart and Harry Allen, who conducted the last British executions in 1964. Until 1868 hangings were performed in public. In London, the traditional site was at Tyburn, a settlement west of the City on the main road to Oxford, which was used on eight hanging days a year, though before 1865, executions had been transferred to the street outside Newgate Prison, Old Bailey, now the site of the Central Criminal Court. Three British subjects were hanged after World War II after having been convicted of having helped Nazi Germany in its war against Britain. John Amery, the son of prominent British politician Leo Amery, became an expatriate in the 1930s, moving to France. 
He became involved in pre-war fascist politics, remained in what became Vichy France following France's defeat by Germany in 1940 and eventually went to Germany and later the German puppet state in Italy headed by Benito Mussolini. Captured by Italian partisans at the end of the war and handed over to British authorities, Amery was accused of having made propaganda broadcasts for the Nazis and of having attempted to recruit British prisoners of war for a Waffen SS regiment later known as the British Free Corps. Amery pleaded guilty to treason charges on 28 November 1945 and was hanged at Wandsworth Prison on 19 December 1945. William Joyce, an American-born Irishman who had lived in Britain and possessed a British passport, had been involved in pre-war fascist politics in the UK, fled to Nazi Germany just before the war began to avoid arrest by British authorities and became a naturalised German citizen. He made propaganda broadcasts for the Nazis, becoming infamous under the nickname Lord Haw Haw. Captured by British forces in May 1945, he was tried for treason later that year. Although Joyce's defence argued that he was by birth American and thus not subject to being tried for treason, the prosecution successfully argued that Joyce's pre-war British passport meant that he was a subject of the British Crown and he was convicted. After his appeals failed, he was hanged at Wandsworth Prison on 3 January 1946. Theodore Schurch, a British soldier captured by the Nazis who then began working for the Italian and German intelligence services by acting as a spy and informer who would be placed among other British prisoners, was arrested in Rome in March 1945 and tried under the Treachery Act 1940. After his conviction, he was hanged at HM Prison Pentonville on 4 January 1946. The Homicide Act 1957 created the new offence of capital murder, punishable by death, with all other murders being punishable by life imprisonment. In 1965, Parliament passed the Murder (Abolition of Death Penalty) Act, temporarily abolishing capital punishment for murder for five years. The Act was renewed in 1969, making the abolition permanent. With the passage of the Crime and Disorder Act 1998 and the Human Rights Act 1998, the death penalty was officially abolished for all crimes in both civilian and military cases. Following its complete abolition, the gallows were removed from Wandsworth Prison, where they remained in full working order until that year. The last woman to be hanged was Ruth Ellis on 13 July 1955, by Albert Pierrepoint who was a prominent hangman in the 20th century in England. The last hangings in Britain took place in 1964, when Peter Anthony Allen was executed at Walton Prison in Liverpool. Gwynne Owen Evans was executed by Harry Allen at Strangeways Prison in Manchester. Both were executed for the murder of John Alan West. Hanging was also the method used in many colonies and overseas territories. Silken rope In the UK, some felons are traditionally said to have been executed by hanging with a silken rope: Hereditary peers who committed capital offences, as anticipated by the fictional Duke of Denver, brother of Lord Peter Wimsey. The Duke was accused of murder in the novel Clouds of Witness, and this execution would have been his fate, after conviction by his peers in a trial in the House of Lords. It has been claimed that the execution of Earl Ferrers in 1760 – the only time a peer was hanged after trial by the House of Lords – was carried out with the normal hempen rope instead of a silk one. 
The writ of execution did not specify that a silk rope be used, and The Newgate Calendar makes no mention of the use of such an item – an unusual omission given its highly sensationalist nature. Those who hold the Freedom of the City of London are the other group traditionally said to have been entitled to the silken rope. United States Hanging was one means by which Puritans of the Massachusetts Bay Colony enforced religious and intellectual conformity on the whole community. The best known hanging carried out by the Puritans was that of Mary Dyer, one of the four executed Quakers known as the Boston martyrs. Capital punishment in the U.S. varies from state to state; it is outlawed in some states but used in most others. However, the death penalty under federal law is applicable in every state. Hanging is no longer used as a method of execution. When Black pastor Denmark Vesey of the Emanuel African Methodist Episcopal Church was suspected of plotting to launch a slave rebellion in Charleston, South Carolina in 1822, 35 people, including Vesey, were judged guilty by a city-appointed court and were subsequently hanged, and the church was burned down. The Dakota War of 1862, also known as the Dakota uprising, in which Sioux facing starvation and displacement attacked white settlers, led to the largest mass execution in United States history: 38 Sioux men were sentenced to death and hanged in Mankato, Minnesota in December 1862. Originally, 303 had been sentenced to hang, but the convictions were reviewed by President Abraham Lincoln and the sentences of all but 38 were commuted. In 2019, an historic apology was issued to the Dakota people for the mass hanging and the "trauma inflicted on Native people at the hands of state government." A total of 40 suspected Unionists were hanged in Gainesville, Texas in October 1862. On 7 July 1865, four people involved in the assassination of President Abraham Lincoln—Mary Surratt, Lewis Powell, David Herold, and George Atzerodt—were hanged at Fort McNair in Washington, D.C. While relatively uncommon, hanging in chains has also been practiced (mainly during the colonial era), the first recorded instance being that of a slave after the New York Slave Revolt of 1712. The last hanging in chains was in 1913, of John Marshall in West Virginia for murder. The last public hanging in the United States (not including lynching, one of the last of which was that of Michael Donald in 1981) took place on 14 August 1936, in Owensboro, Kentucky. Rainey Bethea was executed for the rape and murder of 70-year-old Lischa Edwards. The execution was presided over by the first female sheriff in Kentucky, Florence Shoemaker Thompson. In California, Clinton Duffy, who served as warden of San Quentin State Prison between 1940 and 1952, presided over ninety executions. He began to oppose the death penalty, and after his retirement wrote a memoir entitled Eighty-Eight Men and Two Women in support of the movement to abolish the death penalty. The book documents several hangings gone wrong and describes how they led his predecessor, Warden James B. Holohan, to persuade the California Legislature to replace hanging with the gas chamber in 1937. Various methods of capital punishment have been replaced by lethal injection in most states and by the federal government. Many states that offered hanging as an option have since eliminated the method. Condemned murderer Victor Feguer became the last inmate to be executed by hanging in the state of Iowa, on 15 March 1963. 
Hanging was the preferred method of execution for capital murder cases in Iowa until 1965, when the death penalty was abolished and replaced with life imprisonment without parole. Barton Kay Kirkham was the last person to be hanged in Utah, preferring it over execution by firing squad. Laws in Delaware were changed in 1986 to specify lethal injection, except for those convicted before 1986 (who were still allowed to choose hanging). If a choice was not made, or the convict refused to choose injection, then hanging would become the default method. This was the case in the 1996 execution of Billy Bailey, the most recent hanging in American history; since then, no Delaware prisoner has fit the category, and the state's gallows were later dismantled. Upright jerker The upright jerker is a method of hanging that originated in the United States in the late 19th century. The person to be hanged is jerked into the air by weights and pulleys. It proved to be ineffective at breaking the neck of the condemned, and death by asphyxiation often occurred. In the United States, use of the method ceased in the late 1930s. However, Iran continues to intermittently employ a variant of this method, using a crane rather than a specially designed mechanism of pulleys. The method has received heavy criticism from human rights organizations and the European Union. Inverted hanging, the "Jewish" punishment A completely different principle of hanging is to hang the convicted person from their legs, rather than from their neck, either as a form of torture or as an execution method. In late medieval Germany, this came to be primarily associated with Jews accused of being thieves. The jurist Ulrich Tengler described the procedure in a legal handbook of 1509. It has been shown, however, that this type of inverted hanging between two dogs was not originally a punishment specifically for Jews. Esther Cohen writes that in Spain in 1449, during a mob attack against the Marranos (Jews nominally converted to Christianity), the Jews resisted but lost, and several of them were hanged up by the feet. The first attested German case of a Jew being hanged by the feet is from 1296, in present-day Soultzmatt. Some other historical examples of this type of hanging within the German context are one Jew in Hennegau in 1326, two Jews hanged in Frankfurt in 1444, one in Halle in 1462, one in Dortmund in 1486, one in Hanau in 1499, one in Breslau in 1505, one in Württemberg in 1553, one in Bergen in 1588, one in Öttingen in 1611, one in Frankfurt in 1615 and again in 1661, and one condemned to this punishment in Prussia in 1637. The details of the cases vary widely: In the 1444 Frankfurt cases and the 1499 Hanau case, the dogs were already dead before being hung up, and in the later 1615 and 1661 cases in Frankfurt, the Jews (and dogs) were merely kept in this torture for half an hour before being garroted from below. In the 1588 Bergen case, all three victims were left hanging until they were dead, dying between 6 and 8 days after being hanged. In the 1486 Dortmund case, the dogs bit the Jew to death while he was hanging. In the 1611 Öttingen case, the Jew "Jacob the Tall" had planned to use gunpowder to blow up the building he had burgled. He was strung up between two dogs, a large fire was made close to him, and he expired after half an hour under this torture. 
In the 1553 Württemberg case, the Jew chose to convert to Christianity after hanging like this for 24 hours; he was then given the mercy to be hanged in the ordinary manner, from the neck, and without the dogs beside him. In the 1462 Halle case, the Jew Abraham also converted after 24 hours hanging upside down, and a priest went up on a ladder and baptised him. For two more days, Abraham was left hanging, while the priest argued with the city council that a true Christian should not be punished in this way. On the third day, Abraham was granted a reprieve, and was taken down, but died 20 days later in the local hospital having meanwhile suffered in extreme pain. In the 1637 case, where the Jew had murdered a Christian jeweller, the appeal to the empress was successful, and out of mercy, the Jew was condemned to be merely pinched with glowing pincers, have hot lead dripped into his wounds, and then be broken alive on the wheel. Some of the reported cases may be myths, or wandering stories. The 1326 Hennegau case, for example, deviates from the others in that the Jew was not a thief, but was suspected (even though he was a convert to Christianity) of having struck an al fresco painting of Virgin Mary, so that blood had begun to seep down the wall from the painting. Even under all degrees of judicial torture, the Jew denied performing this sacrilegious act, and was therefore exonerated. Then a brawny smith demanded from him a trial by combat, because, supposedly, in a dream the Virgin herself had besought the smith to do so. The court accepted the smith's challenge, he easily won the combat against the Jew, who was duly hanged up by the feet between two dogs. To add to the injury, one let him be slowly roasted as well as hanged. This is a very similar story to one told in France, in which a young Jew threw a lance at the head of a statue of the Virgin, so that blood spurted out of it. There was inadequate evidence for a normal trial, but a frail old man asked for trial by combat, and bested the young Jew. The Jew confessed his crime, and was hanged by his feet between two mastiffs. The features of the earliest attested case, that of a Jewish thief hanged by the feet in Soultzmatt in 1296 are also rather divergent from the rest. The Jew managed somehow, after he had been left to die, to twitch his body in such a manner that he could hoist himself up on the gallows and free himself. At that time, his feet were so damaged that he was unable to escape, and when he was discovered 8 days after he had been hanged, he was strangled to death by the townspeople. As late as in 1699 Celle, the courts were sufficiently horrified at how the Jewish leader of a robber gang (condemned to be hanged in the normal manner) declared blasphemies against Christianity, that they made a ruling on the post mortem treatment of Jonas Meyer. After 3 days, his corpse was cut down, his tongue cut out, and his body was hanged up again, but this time from its feet. Punishment for traitors Guido Kisch writes that the first instance he knows where a person in Germany was hanged up by his feet between two dogs until he died occurred about 1048, some 250 years earlier than the first attested Jewish case. This was a knight called Arnold, who had murdered his lord; the story is contained in Adam of Bremen's History of the Archbishops of Hamburg-Bremen. 
Another non-Jew who suffered this punishment was Richard, Count of Acerra, who in 1196 was among those executed by Henry VI during the suppression of the rebelling Sicilians. A couple of centuries earlier, in France in 991, a viscount named Walter, nominally owing his allegiance to the French king Hugh Capet, chose, at the instigation of his wife, to join the rebellion under Odo I, Count of Blois. When Odo had to abandon Melun after all, Walter was duly hanged before the gates, while his wife, the fomentor of the treason, was hanged by her feet, to much merriment and jeering from Hugh's soldiers as her clothes fell down to reveal her naked body; it is not wholly clear whether she died in that manner. Elizabethan maritime law During Queen Elizabeth I's reign, maritime law provided for the hanging of those who stole a ship from the Royal Navy. Hanging by the ribs In 1713, Juraj Jánošík, a semi-legendary Slovak outlaw and folk hero, was sentenced to be hanged from his left rib and was left to die slowly. The German physician Gottlob Schober (1670–1739), who worked in Russia from 1712, noted that a person could hang from the ribs for about three days before expiring, the primary suffering being extreme thirst. He thought this degree of insensitivity was something peculiar to the Russian mentality. The Dutch in Suriname were also in the habit of hanging a slave from the ribs, a custom among the African peoples from whom the slaves were originally purchased. John Gabriel Stedman, who stayed in South America from 1772 to 1777, described the method as it was told to him by a witness. William Blake was specially commissioned to make illustrations for Stedman's narrative. Grammar The standard past tense and past participle form of the verb "hang", in the sense of this article, is "hanged", although some dictionaries give "hung" as an alternative. See also
Capital punishment
Death erection
Dule tree
Erotic asphyxiation
Executioner
Gallows
Garrote
Hand of Glory
Hanging judge
Hanging tree (United States)
Hangman (game)
Hangman's knot
Jack Ketch
List of people who died by hanging
List of suicides
Lynching
Lynching in the United States
Suicide by hanging
Rope
Hanging
[ "Biology" ]
9,147
[ "Behavior", "Human positions", "Human behavior" ]
155,131
https://en.wikipedia.org/wiki/Bruxism
Bruxism is excessive teeth grinding or jaw clenching. It is an oral parafunctional activity; i.e., it is unrelated to normal function such as eating or talking. Bruxism is a common behavior; the global prevalence of bruxism (both sleep and awake) is 22.22%. Several symptoms are commonly associated with bruxism, including aching jaw muscles, headaches, hypersensitive teeth, tooth wear, and damage to dental restorations (e.g. crowns and fillings). Symptoms may be minimal, without patient awareness of the condition. If nothing is done, after a while many teeth start wearing down until the whole tooth is gone. There are two main types of bruxism: one occurs during sleep (nocturnal bruxism) and one during wakefulness (awake bruxism). Dental damage may be similar in both types, but the symptoms of sleep bruxism tend to be worse on waking and improve during the course of the day, and the symptoms of awake bruxism may not be present at all on waking, and then worsen over the day. The causes of bruxism are not completely understood, but probably involve multiple factors. Awake bruxism is more common in women, whereas men and women are affected in equal proportions by sleep bruxism. Awake bruxism is thought to have different causes from sleep bruxism. Several treatments are in use, although there is little evidence of robust efficacy for any particular treatment. Epidemiology There is a wide variation in reported epidemiologic data for bruxism, and this is largely due to differences in the definition, diagnosis and research methodologies of these studies. E.g. several studies use self-reported bruxism as a measure of bruxism, and since many people with bruxism are not aware of their habit, self-reported tooth grinding and clenching habits may be a poor measure of the true prevalence. The ICSD-R states that 85–90% of the general population grind their teeth to a degree at some point during their life, although only 5% will develop a clinical condition. Some studies have reported that awake bruxism affects females more commonly than males, while in sleep bruxism, males and females are affected equally. Children are reported to brux as commonly as adults. It is possible for sleep bruxism to occur as early as the first year of life, after the first teeth (deciduous incisors) erupt into the mouth, and the overall prevalence in children is about 14–20%. The ICSD-R states that sleep bruxism may occur in over 50% of normal infants. Often sleep bruxism develops during adolescence, and the prevalence in 18- to 29-year-olds is about 13%. The overall prevalence in adults is reported to be 8%, and people over the age of 60 are less likely to be affected, with the prevalence dropping to about 3% in this group. According to a meta-analysis conducted in 2024, the global prevalence of bruxism (both sleep and awake) is 22.22%. The global prevalence of sleep bruxism is 21%, while the prevalence of awake bruxism is 23%. The occurrence of sleep bruxism, based on polysomnography, was estimated at 43%. The highest prevalence of sleep bruxism was observed in North America at 31%, followed by South America at 23%, Europe at 21%, and Asia at 19%. The prevalence of awake bruxism was highest in South America at 30%, followed by Asia at 25% and Europe at 18%. The review also concluded that overall, bruxism affects males and females equally, and affects elderly people less commonly. 
Signs and symptoms Most people who brux are unaware of the problem, either because there are no symptoms, or because the symptoms are not understood to be associated with a clenching and grinding problem. The symptoms of sleep bruxism are usually most intense immediately after waking, and then slowly abate, and the symptoms of a grinding habit which occurs mainly while awake tend to worsen through the day, and may not be present on waking. Bruxism may cause a variety of signs and symptoms, including: A grinding or tapping noise during sleep, sometimes detected by a partner or a parent. This noise can be surprisingly loud and unpleasant, and can wake a sleeping partner. Noises are rarely associated with awake bruxism. Other parafunctional activity which may occur together with bruxism: cheek biting (which may manifest as morsicatio buccarum or linea alba), or lip biting. A burning sensation on the tongue (see: glossodynia), possibly related to a coexistent "tongue thrusting" parafunctional activity. Indentations of the teeth in the tongue ("crenated tongue" or "scalloped tongue"). Hypertrophy of the muscles of mastication (increase in the size of the muscles that move the jaw), particularly the masseter muscle. Tenderness, pain or fatigue of the muscles of mastication, which may get worse during chewing or other jaw movement. Trismus (restricted mouth opening). Pain or tenderness of the temporomandibular joints, which may manifest as preauricular pain (in front of the ear), or pain referred to the ear (otalgia). Clicking of the temporomandibular joints. Headaches, particularly pain in the temples, caused by muscle pain associated with the temporalis muscle. Excessive tooth wear, particularly attrition, which flattens the occlusal (biting) surface, but also possibly other types of tooth wear such as abfraction, where notches form around the neck of the teeth at the gumline. Tooth fractures, and repeated failure of dental restorations (fillings, crowns, etc.). Hypersensitive teeth, (e.g. dental pain when drinking a cold liquid) caused by wearing away of the thickness of insulating layers of dentin and enamel around the dental pulp. Inflammation of the periodontal ligament of teeth, which may make them sore to bite on, and possibly also a degree of loosening of the teeth. Bruxism is usually detected because of the effects of the process (most commonly tooth wear and pain), rather than the process itself. The large forces that can be generated during bruxism can have detrimental effects on the components of masticatory system, namely the teeth, the periodontium and the articulation of the mandible with the skull (the temporomandibular joints). The muscles of mastication that act to move the jaw can also be affected since they are being utilized over and above of normal function. Pain Most people with bruxism will experience no pain. The presence or degree of pain does not necessarily correlate with the severity of grinding or clenching. The pain in the muscles of mastication caused by bruxism can be likened to muscle pain after exercise. The pain may be felt over the angle of the jaw (masseter) or in the temple (temporalis), and may be described as a headache or an aching jaw. Most (but not all) bruxism includes clenching force provided by masseter and temporalis muscle groups; but some bruxers clench and grind front teeth only, which involves minimal action of the masseter and temporalis muscles. 
The temporomandibular joints themselves may also become painful, which is usually felt just in front of the ear, or inside the ear itself. Clicking of the jaw joint may also develop. The forces exerted on the teeth are more than the periodontal ligament is biologically designed to handle, and so inflammation may result. A tooth may become sore to bite on, and further, tooth wear may reduce the insulating width of enamel and dentin that protects the pulp of the tooth and result in hypersensitivity, e.g. to cold stimuli. The relationship of bruxism with temporomandibular joint dysfunction (TMD, or temporomandibular pain dysfunction syndrome) is debated. Many suggest that sleep bruxism can be a causative or contributory factor to pain symptoms in TMD. Indeed, the symptoms of TMD overlap with those of bruxism. Others suggest that there is no strong association between TMD and bruxism. A systematic review investigating the possible relationship concluded that when self-reported bruxism is used to diagnose bruxism, there is a positive association with TMD pain, and when stricter diagnostic criteria for bruxism are used, the association with TMD symptoms is much lower. In severe, chronic cases, bruxism can lead to myofascial pain and arthritis of the temporomandibular joints. Tooth wear Many publications list tooth wear as a consequence of bruxism, but some report a lack of a positive relationship between tooth wear and bruxism. Tooth wear caused by tooth-to-tooth contact is termed attrition. This is the most usual type of tooth wear that occurs in bruxism, and affects the occlusal surface (the biting surface) of the teeth. The exact location and pattern of attrition depends on how the bruxism occurs, e.g., when the canines and incisors of the opposing arches are moved against each other laterally, by the action of the medial pterygoid muscles, this can lead to the wearing down of the incisal edges of the teeth. To grind the front teeth, most people need to posture their mandible forwards, unless there is an existing edge to edge, class III incisal relationship. People with bruxism may also grind their posterior teeth (back teeth), which wears down the cusps of the occlusal surface. Once tooth wear progresses through the enamel layer, the exposed dentin layer is softer and more vulnerable to wear and tooth decay. If enough of the tooth is worn away or decayed, the tooth will effectively be weakened, and may fracture under the increased forces that occur in bruxism. Abfraction is another type of tooth wear that is postulated to occur with bruxism, although some still argue whether this type of tooth wear is a reality. Abfraction cavities are said to occur usually on the facial aspect of teeth, in the cervical region as V-shaped defects caused by flexing of the tooth under occlusal forces. It is argued that similar lesions can be caused by long-term forceful toothbrushing. However, the fact that the cavities are V-shaped does not suggest that the damage is caused by toothbrush abrasion, and that some abfraction cavities occur below the level of the gumline, i.e., in an area shielded from toothbrush abrasion, supports the validity of this mechanism of tooth wear. In addition to attrition, erosion is said to synergistically contribute to tooth wear in some bruxists, according to some sources. Tooth mobility The view that occlusal trauma (as may occur during bruxism) is a causative factor in gingivitis and periodontitis is not widely accepted. 
It is thought that the periodontal ligament may respond to increased occlusal (biting) forces by resorbing some of the bone of the alveolar crest, which may result in increased tooth mobility, however these changes are reversible if the occlusal force is reduced. Tooth movement that occurs during occlusal loading is sometimes termed fremitus. It is generally accepted that increased occlusal forces are able to increase the rate of progression of pre-existing periodontal disease (gum disease), however the main stay treatment is plaque control rather than elaborate occlusal adjustments. It is also generally accepted that periodontal disease is a far more common cause of tooth mobility and pathological tooth migration than any influence of bruxism, although bruxism may much less commonly be involved in both. Causes The muscles of mastication (the temporalis muscle, masseter muscle, medial pterygoid muscle and lateral pterygoid muscle) are paired on either side and work together to move the mandible, which hinges and slides around its dual articulation with the skull at the temporomandibular joints. Some of the muscles work to elevate the mandible (close the mouth), and others also are involved in lateral (side to side), protrusive or retractive movements. Mastication (chewing) is a complex neuromuscular activity that can be controlled either by subconscious processes or by conscious processes. In individuals without bruxism or other parafunctional activities, during wakefulness the jaw is generally at rest and the teeth are not in contact, except while speaking, swallowing or chewing. It is estimated that the teeth are in contact for less than 20 minutes per day, mostly during chewing and swallowing. Normally during sleep, the voluntary muscles are inactive due to physiologic motor paralysis, and the jaw is usually open. Ankyloglossia is suspected as a cause of bruxism. Some bruxism activity is rhythmic with bite force pulses of tenths of a second (like chewing), and some have longer bite force pulses of 1 to 30 seconds (clenching). Some individuals clench without significant lateral movements. Bruxism can also be regarded as a disorder of repetitive, unconscious contraction of muscles. This typically involves the masseter muscle and the anterior portion of the temporalis (the large outer muscles that clench), and the lateral pterygoids, relatively small bilateral muscles that act together to perform sideways grinding. Multiple causes The cause of bruxism is largely unknown, but it is generally accepted to have multiple possible causes. Bruxism is a parafunctional activity, but it is debated whether this represents a subconscious habit or is entirely involuntary. The relative importance of the various identified possible causative factors is also debated. Awake bruxism is thought to be usually semivoluntary, and often associated with stress caused by family responsibilities or work pressures. Some suggest that in children, bruxism may occasionally represent a response to earache or teething. Awake bruxism usually involves clenching (sometimes the term "awake clenching" is used instead of awake bruxism), but also possibly grinding, and is often associated with other semivoluntary oral habits such as cheek biting, nail biting, chewing on a pen or pencil absent mindedly, or tongue thrusting (where the tongue is pushed against the front teeth forcefully). 
There is evidence that sleep bruxism is caused by mechanisms related to the central nervous system, involving sleep arousal and neurotransmitter abnormalities. Underlying these factors may be psychosocial factors including daytime stress which is disrupting peaceful sleep. Sleep bruxism is mainly characterized by "rhythmic masticatory muscle activity" (RMMA) at a frequency of about once per second, and also with occasional tooth grinding. It has been shown that the majority (86%) of sleep bruxism episodes occur during periods of sleep arousal. One study reported that sleep arousals which were experimentally induced with sensory stimulation in sleeping bruxists triggered episodes of sleep bruxism. Sleep arousals are a sudden change in the depth of the sleep stage, and may also be accompanied by increased heart rate, respiratory changes and muscular activity, such as leg movements. Initial reports have suggested that episodes of sleep bruxism may be accompanied by gastroesophageal reflux, decreased esophageal pH (acidity), swallowing, and decreased salivary flow. Another report suggested a link between episodes of sleep bruxism and a supine sleeping position (lying face up). Disturbance of the dopaminergic system in the central nervous system has also been suggested to be involved in the etiology of bruxism. Evidence for this comes from observations of the modifying effect of medications which alter dopamine release on bruxing activity, such as levodopa, amphetamines or nicotine. Nicotine stimulates release of dopamine, which is postulated to explain why bruxism is twice as common in smokers compared to non-smokers. Historical focus Historically, many believed that problems with the bite were the sole cause for bruxism. It was often claimed that a person would grind at the interfering area in a subconscious, instinctive attempt to wear this down and "self equiliberate" their occlusion. However, occlusal interferences are extremely common and usually do not cause any problems. It is unclear whether people with bruxism tend to notice problems with the bite because of their clenching and grinding habit, or whether these act as a causative factor in the development of the condition. In sleep bruxism especially, there is no evidence that removal of occlusal interferences has any impact on the condition. People with no teeth at all who wear dentures can still have bruxism, although dentures also often change the original bite. Most modern sources state that there is no relationship, or at most a minimal relationship, between bruxism and occlusal factors. The findings of one study, which used self-reported tooth grinding rather than clinical examination to detect bruxism, suggested that there may be more of a relationship between occlusal factors and bruxism in children. However, the role of occlusal factors in bruxism cannot be completely discounted due to insufficient evidence and problems with the design of studies. A minority of researchers continue to claim that various adjustments to the mechanics of the bite are capable of curing bruxism (see Occlusal adjustment/reorganization). Psychosocial factors Many studies have reported significant psychosocial risk factors for bruxism, particularly a stressful lifestyle, and this evidence is growing, but still not conclusive. Some consider emotional stress and anxiety to be the main triggering factors. It has been reported that persons with bruxism respond differently to depression, hostility and stress compared to people without bruxism. 
Stress has a stronger relationship to awake bruxism, but the role of stress in sleep bruxism is less clear, with some stating that there is no evidence for a relationship with sleep bruxism. However, children with sleep bruxism have been shown to have greater levels of anxiety than other children. People aged 50 with bruxism are more likely to be single and have a high level of education. Work-related stress and irregular work shifts may also be involved. Personality traits are also commonly discussed in publications concerning the causes of bruxism, e.g. aggressive, competitive or hyperactive personality types. Some suggest that suppressed anger or frustration can contribute to bruxism. Stressful periods such as examinations, family bereavement, marriage, divorce, or relocation have been suggested to intensify bruxism. Awake bruxism often occurs during periods of concentration such as while working at a computer, driving or reading. Animal studies have also suggested a link between bruxism and psychosocial factors. Rosales et al. electrically shocked lab rats, and then observed high levels of bruxism-like muscular activity in rats that were allowed to watch this treatment compared to rats that did not see it. They proposed that the rats who witnessed the electrical shocking of other rats were under emotional stress which may have caused the bruxism-like behavior. Genetic factors Some research suggests that there may be a degree of inherited susceptibility to develop sleep bruxism. 21–50% of people with sleep bruxism have a direct family member who had sleep bruxism during childhood, suggesting that there are genetic factors involved, although no genetic markers have yet been identified. Offspring of people who have sleep bruxism are more likely to also have sleep bruxism than children of people who do not have bruxism, or people with awake bruxism rather than sleep bruxism. Medications Certain stimulant drugs, including both prescribed and recreational drugs, are thought by some to cause the development of bruxism. However, others argue that there is insufficient evidence to draw such a conclusion. Examples may include dopamine agonists, dopamine antagonists, tricyclic antidepressants, selective serotonin reuptake inhibitors, alcohol, cocaine, and amphetamines (including those taken for medical reasons). In some reported cases where bruxism is thought to have been initiated by selective serotonin reuptake inhibitors, decreasing the dose resolved the side effect. Other sources state that reports of selective serotonin reuptake inhibitors causing bruxism are rare, or only occur with long-term use. Specific examples include levodopa (when used in the long term, as in Parkinson's disease), fluoxetine, metoclopramide, lithium, cocaine, venlafaxine, citalopram, fluvoxamine, methylenedioxyamphetamine (MDA), methylphenidate (used in attention deficit hyperactive disorder), and gamma-hydroxybutyric acid (GHB) and similar gamma-aminobutyric acid-inducing analogues such as phenibut. Bruxism can also be exacerbated by excessive consumption of caffeine, as in coffee, tea or chocolate. Bruxism has also been reported to occur commonly comorbid with drug addiction. Methylenedioxymethamphetamine (MDMA, ecstasy) has been reported to be associated with bruxism, which occurs immediately after taking the drug and for several days afterwards. Tooth wear in people who take ecstasy is also frequently much more severe than in people with bruxism not associated with ecstasy. 
Occlusal factors Occlusion is defined most simply as "contacts between teeth", and is the meeting of teeth during biting and chewing. The term does not imply any disease. Malocclusion is a medical term referring to less than ideal positioning of the upper teeth relative to the lower teeth, which can occur either when the upper jaw is ideally proportioned to the lower jaw or where there is a discrepancy between the size of the upper jaw relative to the lower jaw. Malocclusion of some sort is so common that the concept of an "ideal occlusion" is called into question, and it can be considered "normal to be abnormal". An occlusal interference may refer to a problem which interferes with the normal path of the bite, and is usually used to describe a localized problem with the position or shape of a single tooth or group of teeth. A premature contact is one part of the bite meeting sooner than other parts, meaning that the rest of the teeth meet later or are held open, e.g., a new dental restoration on a tooth (e.g., a crown) which has a slightly different shape or position to the original tooth may contact too soon in the bite. A deflective contact/interference is an interference with the bite that changes the normal path of the bite. A common example of a deflective interference is an over-erupted upper wisdom tooth, often because the lower wisdom tooth has been removed or is impacted. In this example, when the jaws are brought together, the lower back teeth contact the prominent upper wisdom tooth before the other teeth, and the lower jaw has to move forward to allow the rest of the teeth to meet. The difference between a premature contact and a deflective interference is that the latter implies a dynamic abnormality in the bite. Possible associations Several associations between bruxism and other conditions, usually neurological or psychiatric disorders, have rarely been reported, with varying degrees of evidence (often in the form of case reports). Examples include:
Acrodynia
Atypical facial pain
Autism
Cerebral palsy
Disturbed sleep patterns and other sleep disorders, such as obstructive sleep apnea, snoring, moderate daytime sleepiness, and insomnia
Down syndrome
Dyskinesias
Epilepsy
Eustachian tube dysfunction
Infarction in the basal ganglia
Intellectual disability, particularly in children
Leigh disease
Meningococcal septicaemia
Multiple system atrophy
Oromandibular dystonia
Parkinson's disease (possibly due to long-term therapy with levodopa causing dopaminergic dysfunction)
Rett syndrome
Torus mandibularis and buccal exostosis
Trauma, e.g. brain injury or coma
Diagnosis Early diagnosis of bruxism is advantageous, but difficult. Early diagnosis can prevent damage that may otherwise be incurred and the detrimental effect on quality of life. A diagnosis of bruxism is usually made clinically, and is mainly based on the person's history (e.g. reports of grinding noises) and the presence of typical signs and symptoms, including tooth mobility, tooth wear, masseteric hypertrophy, indentations on the tongue, hypersensitive teeth (which may be misdiagnosed as reversible pulpitis), pain in the muscles of mastication, and clicking or locking of the temporomandibular joints. Questionnaires can be used to screen for bruxism in both the clinical and research settings. For tooth grinders who live in a household with other people, diagnosis of grinding is straightforward: housemates or family members can alert a bruxer to recurrent grinding. 
Grinders who live alone can likewise resort to a sound-activated tape recorder. To confirm the condition of clenching, on the other hand, bruxers may rely on such devices as the Bruxchecker, Bruxcore, or a beeswax-bearing biteplate. The Individual (personal) Tooth-Wear Index was developed to objectively quantify the degree of tooth wear in an individual, without being affected by the number of missing teeth. Bruxism is not the only cause of tooth wear. Another possible cause of tooth wear is acid erosion, which may occur in people who drink a lot of acidic liquids such as concentrated fruit juice, or in people who frequently vomit or regurgitate stomach acid, which itself can occur for various reasons. People also demonstrate a normal level of tooth wear, associated with normal function. The presence of tooth wear only indicates that it had occurred at some point in the past, and does not necessarily indicate that the loss of tooth substance is ongoing. People who clench and perform minimal grinding will also not show much tooth wear. Occlusal splints are usually employed as a treatment for bruxism, but they can also be of diagnostic use, e.g. to observe the presence or absence of wear on the splint after a certain period of wearing it at night. The most usual trigger in sleep bruxism that leads a person to seek medical or dental advice is being informed by a sleeping partner of unpleasant grinding noises during sleep. The diagnosis of sleep bruxism is usually straightforward, and involves the exclusion of dental diseases, temporomandibular disorders, and the rhythmic jaw movements that occur with seizure disorders (e.g. epilepsy). This usually involves a dental examination, and possibly electroencephalography if a seizure disorder is suspected. Polysomnography shows increased masseter and temporalis muscular activity during sleep. Polysomnography may involve electroencephalography, electromyography, electrocardiography, air flow monitoring and audio–video recording. It may be useful to help exclude other sleep disorders; however, due to the expense of the use of a sleep lab, polysomnography is mostly of relevance to research rather than routine clinical diagnosis of bruxism. Tooth wear may be brought to the person's attention during routine dental examination. With awake bruxism, most people will often initially deny clenching and grinding because they are unaware of the habit. Often, the person may re-attend soon after the first visit and report that they have now become aware of such a habit. Several devices have been developed that aim to objectively measure bruxism activity, either in terms of muscular activity or bite forces. They have been criticized for introducing a possible change in the bruxing habit, whether increasing or decreasing it, and are therefore poorly representative to the native bruxing activity. These are mostly of relevance to research, and are rarely used in the routine clinical diagnosis of bruxism. Examples include the "Bruxcore Bruxism-Monitoring Device" (BBMD, "Bruxcore Plate"), the "intra-splint force detector" (ISFD), and electromyographic devices to measure masseter or temporalis muscle activity (e.g. the "BiteStrip", and the "Grindcare"). ICSD-R diagnostic criteria The ICSD-R listed diagnostic criteria for sleep bruxism. The minimal criteria include both of the following: A. symptom of tooth-grinding or tooth-clenching during sleep, and B. 
One or more of the following: Abnormal tooth wear Grinding sounds Discomfort of the jaw muscles With the following criteria supporting the diagnosis: C. polysomnography shows both: Activity of jaw muscles during sleep No associated epileptic activity D. No other medical or mental disorders (e.g., sleep-related epilepsy, which may cause abnormal movement during sleep). E. The presence of other sleep disorders (e.g., obstructive sleep apnea syndrome). Definition examples Bruxism is derived from the Greek word (brykein) "to bite, or to gnash, grind the teeth". People with bruxism are called bruxists or bruxers, and the verb itself is "to brux". There is no widely accepted definition of bruxism. Examples of definitions include: Classification by temporal pattern Bruxism can be subdivided into two types based upon when the parafunctional activity occurs – during sleep ("sleep bruxism"), or while awake ("awake bruxism"). This is the most widely used classification, since sleep bruxism generally has different causes to awake bruxism, although the effects of the condition on the teeth may be the same. The treatment is also often dependent upon whether the bruxism happens during sleep or while awake, e.g., an occlusal splint worn during sleep in a person who only bruxes when awake will probably have no benefit. Some have even suggested that sleep bruxism is an entirely different disorder and is not associated with awake bruxism. Awake bruxism is sometimes abbreviated to AB, and is also termed "diurnal bruxism", DB, or "daytime bruxing". Sleep bruxism is sometimes abbreviated to SB, and is also termed "sleep-related bruxism", "nocturnal bruxism", or "nocturnal tooth grinding". According to the International Classification of Sleep Disorders revised edition (ICSD-R), the term "sleep bruxism" is the most appropriate since this type occurs during sleep specifically rather than being associated with a particular time of day, i.e., if a person with sleep bruxism were to sleep during the day and stay awake at night then the condition would not occur during the night but during the day. The ICSD-R defined sleep bruxism as "a stereotyped movement disorder characterized by grinding or clenching of the teeth during sleep", classifying it as a parasomnia. The second edition (ICSD-2), however, reclassified bruxism as a "sleep-related movement disorder" rather than a parasomnia. Classification by cause Alternatively, bruxism can be divided into primary bruxism (also termed "idiopathic bruxism"), where the disorder is not related to any other medical condition, or secondary bruxism, where the disorder is associated with other medical conditions. Secondary bruxism includes iatrogenic causes, such as the side effects of prescribed medications. Another source divides the causes of bruxism into three groups, namely central or pathophysiological factors, psychosocial factors and peripheral factors. The World Health Organization's International Classification of Diseases 10th revision does not have an entry called bruxism, instead listing "tooth grinding" under somatoform disorders. To describe bruxism as a purely somatoform disorder does not reflect the mainstream, modern view of this condition (see causes). 
Classification by severity The ICSD-R described three different severities of sleep bruxism, defining mild as occurring less than nightly, with no damage to teeth or psychosocial impairment; moderate as occurring nightly, with mild impairment of psychosocial functioning; and severe as occurring nightly, and with damage to the teeth, temporomandibular disorders and other physical injuries, and severe psychosocial impairment. Classification by duration The ICSD-R also described three different types of sleep bruxism according to the duration the condition is present, namely acute, which lasts for less than one week; subacute, which lasts for more than a week and less than one month; and chronic which lasts for over a month. Management Treatment for bruxism revolves around repairing the damage to teeth that has already occurred, and also often, via one or more of several available methods, attempting to prevent further damage and manage symptoms, but there is no widely accepted, best treatment. Since bruxism is not life-threatening, and there is little evidence of the efficacy of any treatment, it has been recommended that only conservative treatment which is reversible and that carries low risk of morbidity should be used. The main treatments that have been described in awake and sleep bruxism are described below. Psychosocial interventions Given the strong association between awake bruxism and psychosocial factors (the relationship between sleep bruxism and psychosocial factors being unclear), the role of psychosocial interventions could be argued to be central to the management. The most simple form of treatment is therefore reassurance that the condition does not represent a serious disease, which may act to alleviate contributing stress. Sleep hygiene education should be provided by the clinician, as well as a clear and short explanation of bruxism (definition, causes and treatment options). Relaxation and tension-reduction have not been found to reduce bruxism symptoms, but have given patients a sense of well-being. One study has reported less grinding and reduction of EMG activity after hypnotherapy. Other interventions include relaxation techniques, stress management, behavioural modification, habit reversal and hypnosis (self hypnosis or with a hypnotherapist). Cognitive behavioral therapy has been recommended by some for treatment of bruxism. In many cases awake bruxism can be reduced by using reminder techniques. Combined with a protocol sheet this can also help to evaluate in which situations bruxism is most prevalent. Medication Many different medications have been used to treat bruxism, including benzodiazepines, anticonvulsants, beta blockers, dopamine agents, antidepressants, muscle relaxants, and others. However, there is little, if any, evidence for their respective and comparative efficacies with each other and when compared to a placebo. A multiyear systematic review to investigate the evidence for drug treatments in sleep bruxism published in 2014 (Pharmacotherapy for Sleep Bruxism. Macedo, et al.) found "insufficient evidence on the effectiveness of pharmacotherapy for the treatment of sleep bruxism." 
Specific drugs that have been studied in sleep bruxism are clonazepam, levodopa, amitriptyline, bromocriptine, pergolide, clonidine, propranolol, and l-tryptophan, with some showing no effect and others appearing to have promising initial results; however, it has been suggested that further safety testing is required before any evidence-based clinical recommendations can be made. When bruxism is related to the use of selective serotonin reuptake inhibitors in depression, adding buspirone has been reported to resolve the side effect. Tricyclic antidepressants have also been suggested to be preferable to selective serotonin reuptake inhibitors in people with bruxism, and may help with the pain. Prevention of dental damage Bruxism can cause significant tooth wear if it is severe, and sometimes dental restorations (crowns, fillings etc.) are damaged or lost, sometimes repeatedly. Most dentists therefore prefer to keep dental treatment in people with bruxism very simple and only carry it out when essential, since any dental work is likely to fail in the long term. Dental implants, dental ceramics such as Emax crowns, and complex bridgework, for example, are relatively contraindicated in bruxists. In the case of crowns, the strength of the restoration becomes more important, sometimes at the cost of aesthetic considerations. For example, a full-coverage gold crown, which has a degree of flexibility and also involves less removal (and therefore less weakening) of the underlying natural tooth, may be more appropriate than other types of crown which are primarily designed for esthetics rather than durability. Porcelain veneers on the incisors are particularly vulnerable to damage, and sometimes a crown can be perforated by occlusal wear. Occlusal splints (also termed dental guards) are commonly prescribed, mainly by dentists and dental specialists, as a treatment for bruxism. Proponents of their use claim many benefits; however, when the evidence is critically examined in systematic reviews of the topic, it is reported that there is insufficient evidence to show that occlusal splints are effective for sleep bruxism or for bruxism overall. Furthermore, occlusal splints are probably ineffective for awake bruxism, since they tend to be worn only during sleep. However, occlusal splints may be of some benefit in reducing the tooth wear that may accompany bruxism, but by mechanically protecting the teeth rather than reducing the bruxing activity itself. In a minority of cases, sleep bruxism may be made worse by an occlusal splint. Some patients will periodically return with splints with holes worn through them, either because the bruxism is aggravated by, or unaffected by, the presence of the splint. When tooth-to-tooth contact is possible through the holes in a splint, it offers no protection against tooth wear and needs to be replaced. Occlusal splints are divided into partial or full-coverage splints according to whether they fit over some or all of the teeth. They are typically made of plastic (e.g. acrylic) and can be hard or soft. A lower appliance can be worn alone, or in combination with an upper appliance. Usually lower splints are better tolerated in people with a sensitive gag reflex. Another problem with wearing a splint can be stimulation of salivary flow, and for this reason some advise starting to wear the splint about 30 minutes before going to bed so that this does not lead to difficulty falling asleep. As an added measure for hypersensitive teeth in bruxism, desensitizing toothpastes (e.g. 
containing strontium chloride) can be applied initially inside the splint so the material is in contact with the teeth all night. This can be continued until there is only a normal level of sensitivity from the teeth, although it should be remembered that sensitivity to thermal stimuli is also a symptom of pulpitis, and may indicate the presence of tooth decay rather than merely hypersensitive teeth. Splints may also reduce muscle strain by allowing the upper and lower jaw to move easily with respect to each other. Treatment goals include: constraining the bruxing pattern to avoid damage to the temporomandibular joints; stabilizing the occlusion by minimizing gradual changes to the positions of the teeth; preventing tooth damage; and revealing the extent and patterns of bruxism through examination of the markings on the splint's surface. A dental guard is typically worn during every night's sleep on a long-term basis. However, a meta-analysis of occlusal splints (dental guards) used for this purpose concluded "There is not enough evidence to state that the occlusal splint is effective for treating sleep bruxism." A repositioning splint is designed to change the patient's occlusion, or bite. The efficacy of such devices is debated. Some writers propose that irreversible complications can result from the long-term use of mouthguards and repositioning splints. Randomized controlled trials with these types of devices generally show no benefit over other therapies. Another partial splint is the nociceptive trigeminal inhibition tension suppression system (NTI-TSS) dental guard, which snaps onto the front teeth only. It is theorized to prevent tissue damage primarily by redirecting the bite force from attempts to close the jaw normally into a forward twisting of the lower front teeth. The intent is for the brain to interpret the nerve sensations as undesirable, automatically and subconsciously reducing clenching force. However, there may be potential for the NTI-TSS device to act as a Dahl appliance, holding the posterior teeth out of occlusion and leading to their over-eruption, deranging the occlusion (i.e. it may cause the teeth to move position). This is far more likely if the appliance is worn for excessive periods of time, which is why NTI-type appliances are designed for night-time use only, and ongoing follow-ups are recommended. A mandibular advancement device (normally used for treatment of obstructive sleep apnea) may reduce sleep bruxism, although its use may be associated with discomfort. Botulinum toxin Botulinum neurotoxin (BoNT) is used as a treatment for bruxism. A 2020 overview of systematic reviews found that botulinum toxin type A (BTX-A) showed a significant reduction in pain and in sleep bruxism frequency when compared to placebo or conventional treatment (behavioral therapy, occlusal splints, and drugs), after 6 and 12 months. Botulinum toxin causes muscle paralysis/atrophy by inhibition of acetylcholine release at neuromuscular junctions. BoNT injections are used in bruxism on the theory that a dilute solution of the toxin will partially paralyze the muscles and lessen their ability to forcefully clench and grind the jaw, while aiming to retain enough muscular function to enable normal activities such as talking and eating. This treatment typically involves five or six injections into the masseter and temporalis muscles, and less often into the lateral pterygoids (given the possible risk of decreasing the ability to swallow), taking a few minutes per side. 
The effects may be noticeable by the next day, and they may last for about three months. Occasionally, adverse effects may occur, such as bruising, but this is quite rare. The dose of toxin used depends upon the person, and a higher dose may be needed in people with stronger muscles of mastication. With the temporary and partial muscle paralysis, atrophy of disuse may occur, meaning that the future required dose may be smaller or the length of time the effects last may be increased. Biofeedback Biofeedback is a process or device that allows an individual to become aware of, and to alter, physiological activity with the aim of improving health. Although biofeedback has not been tested for awake bruxism, there is recent evidence for the efficacy of biofeedback in the management of nocturnal bruxism in small control groups. Electromyographic monitoring devices of the associated muscle groups, tied with automatic alerting during periods of clenching and grinding, have been prescribed for awake bruxism. Dental appliances with capsules that break and release a taste stimulus when enough force is applied have also been described in sleep bruxism, which would wake the person from sleep in an attempt to prevent bruxism episodes. "Large scale, double-blind experiments confirming the effectiveness of this approach have yet to be carried out." Occlusal adjustment/reorganization As an alternative to simply reactively repairing the damage to teeth and conforming to the existing occlusal scheme, occasionally some dentists will attempt to reorganize the occlusion in the belief that this may redistribute the forces and reduce the amount of damage inflicted on the dentition. Sometimes termed "occlusal rehabilitation" or "occlusal equilibration", this can be a complex procedure, and there is much disagreement between proponents of these techniques on most of the aspects involved, including the indications and the goals. It may involve orthodontics, restorative dentistry or even orthognathic surgery. Some have criticized these occlusal reorganizations as having no evidence base, and as irreversibly damaging the dentition on top of the damage already caused by bruxism. History Two thousand years ago, Shuowen Jiezi by Xu Shen documented the definition of the Chinese character "齘" (bruxism) as "the clenching of teeth" (齒相切也). In 610, Zhubing yuanhou lun by Chao Yuanfang documented the definition of bruxism (齘齒) as "the clenching of teeth during sleep" and explained that it was caused by Qi deficiency and blood stasis. In 978, Taiping Shenghuifang by Wang Huaiyin gave a similar explanation and three prescriptions for treatment. "La bruxomanie" (a French term translating to "bruxomania") was suggested by Marie Pietkiewicz in 1907. In 1931, Frohman first coined the term bruxism. Occasionally, recent medical publications will use the word bruxomania alongside bruxism, to denote specifically bruxism that occurs while awake; however, this term can be considered historical and the modern equivalent would be awake bruxism or diurnal bruxism. It has been shown that the type of research into bruxism has changed over time. Overall, between 1966 and 2007, most of the research published was focused on occlusal adjustments and oral splints. Behavioral approaches in research declined from over 60% of publications in the period 1966–86 to about 10% in the period 1997–2007. In the 1960s, a periodontist named Sigurd Peder Ramfjord championed the theory that occlusal factors were responsible for bruxism. 
Generations of dentists were educated by this ideology in the prominent textbook on occlusion of the time, however therapy centered around removal of occlusal interference remained unsatisfactory. The belief among dentists that occlusion and bruxism are strongly related is still widespread, however the majority of researchers now disfavor malocclusion as the main etiologic factor in favor of a more multifactorial, biopsychosocial model of bruxism. Society and culture Clenching the teeth is generally displayed by humans and other animals as a display of anger, hostility or frustration. It is thought that in humans, clenching the teeth may be an evolutionary instinct to display teeth as weapons, thereby threatening a rival or a predator. The phrase "to grit one's teeth" is the grinding or clenching of the teeth in anger, or to accept a difficult or unpleasant situation and deal with it in a determined way. In the Bible there are several references to "gnashing of teeth" in both the Old Testament, and the New Testament, where the phrase "weeping and gnashing of teeth" appears no less than 7 times in Matthew alone. A Chinese proverb has linked bruxism with psychosocial factors. "If a boy clenches, he hates his family for not being prosperous; if a girl clenches, she hates her mother for not being dead."(男孩咬牙,恨家不起;女孩咬牙,恨妈不死。) In David Lynch's 1977 film Eraserhead, Henry Spencer's partner ("Mary X") is shown tossing and turning in her sleep, and snapping her jaws together violently and noisily, depicting sleep bruxism. In Stephen King's 1988 novel The Tommyknockers, the sister of central character Bobbi Anderson also had bruxism. In the 2000 film Requiem for a Dream, the character of Sara Goldfarb (Ellen Burstyn) begins taking an amphetamine-based diet pill and develops bruxism. In the 2005 film Beowulf & Grendel, a modern reworking of the Anglo-Saxon poem Beowulf, Selma the witch tells Beowulf that the troll's name Grendel means "grinder of teeth", stating that "he has bad dreams", a possible allusion to Grendel traumatically witnessing the death of his father as a child, at the hands of King Hrothgar. The Geats (the warriors who hunt the troll) alternatively translate the name as "grinder of men's bones" to demonize their prey. In George R. R. Martin's A Song of Ice and Fire series, King Stannis Baratheon grinds his teeth regularly, so loudly it can be heard "half a castle away". In rave culture, recreational use of ecstasy is often reported to cause bruxism. Among people who have taken ecstasy, while dancing it is common to use pacifiers, lollipops or chewing gum in an attempt to reduce the damage to the teeth and to prevent jaw pain. Bruxism is thought to be one of the contributing factors in "meth mouth", a condition potentially associated with long term methamphetamine use. References External links Pathology of temporomandibular joints, muscles of mastication and associated structures Sleep disorders
Bruxism
[ "Biology" ]
10,432
[ "Behavior", "Sleep", "Sleep disorders" ]
155,192
https://en.wikipedia.org/wiki/Chimera%20%28genetics%29
A genetic chimerism or chimera ( or ) is a single organism composed of cells with more than one distinct genotype. Animal chimeras can be produced by the fusion of two (or more) embryos. In plants and some animal chimeras, mosaicism involves distinct types of tissue that originated from the same zygote but differ due to mutation during ordinary cell division. Normally, genetic chimerism is not visible on casual inspection; however, it has been detected in the course of proving parentage. More practically, in agronomy Chimera indicates a plant or portion of a plant whose tissues are made up of two or more types of cells with different genetic makeup; it can derive from a bud mutation or, more rarely, at the grafting point, from the concrescence of cells of the two bionts; in this case it is commonly referred to as a "graft hybrid", although it is not a hybrid in the genetic sense of "hybrid". In contrast, an individual where each cell contains genetic material from two organisms of different breeds, varieties, species or genera is called a hybrid. Another way that chimerism can occur in animals is by organ transplantation, giving one individual tissues that developed from a different genome. For example, transplantation of bone marrow often determines the recipient's ensuing blood type. Classifications Natural chimerism Some level of chimerism occurs naturally in the wild in many animal species, and in some cases may be a required (obligate) part of their life cycle. Symbiotic chimerism in anglerfish Chimerism occurs naturally in adult Ceratioid anglerfish and is in fact a natural and essential part of their life cycle. Once the male achieves adulthood, it begins its search for a female. Using strong olfactory (or smell) receptors, the male searches until it locates a female anglerfish. The male, less than an inch in length, bites into her skin and releases an enzyme that digests the skin of both his mouth and her body, fusing the pair down to the blood-vessel level. While this attachment has become necessary for the male's survival, it will eventually consume him, as both anglerfish fuse into a single hermaphroditic individual. Sometimes in this process, more than one male will attach to a single female as a symbiote. In this case, they will all be consumed into the body of the larger female angler. Once fused to a female, the males will reach sexual maturity, developing large testicles as their other organs atrophy. This process allows for sperm to be in constant supply when the female produces an egg, so that the chimeric fish is able to have a greater number of offspring. Sponges Chimerism has been found in some species of marine sponges. Four distinct genotypes have been found in a single individual, and there is potential for even greater genetic heterogeneity. Each genotype functions independently in terms of reproduction, but the different intra-organism genotypes behave as a single large individual in terms of ecological responses like growth. In obligates It has been shown that male yellow crazy ants are obligate chimeras, the first known such case. In this species, the queens have arisen from fertilized eggs with a genotype of RR (Reproductive × Reproductive), the sterile female workers show a RW arrangement (Reproductive × Worker), and the males instead of being haploid, as is usually the case for ants, also display a RW genotype, but for them the egg R and the sperm W do not fuse so they develop as a chimera with some cells carrying an R and others carrying a W genome. 
Artificial chimerism Artificial chimerism refers to examples of chimerism that are accidentally produced by humans, either for research or commercial purposes. Tetragametic chimerism Tetragametic chimerism is a form of congenital chimerism. This condition occurs through fertilizing two separate ova by two sperm, followed by aggregation of the two at the blastocyst or zygote stages. This results in the development of an organism with intermingled cell lines. Put another way, the chimera is formed from the merging of two nonidentical twins. As such, they can be male, female, or intersex. The tetragametic state has important implications for organ or stem cell transplantation. Chimeras typically have immunologic tolerance to both cell lines. Microchimerism Microchimerism is the presence of a small number of cells that are genetically distinct from those of the host individual. Most people are born with a few cells genetically identical to their mothers' and the proportion of these cells goes down in healthy individuals as they get older. People who retain higher numbers of cells genetically identical to their mother's have been observed to have higher rates of some autoimmune diseases, presumably because the immune system is responsible for destroying these cells and a common immune defect prevents it from doing so and also causes autoimmune problems. The higher rates of autoimmune diseases due to the presence of maternally-derived cells is why in a 2010 study of a 40-year-old man with scleroderma-like disease (an autoimmune rheumatic disease), the female cells detected in his blood stream via FISH (fluorescence in situ hybridization) were thought to be maternally-derived. However, his form of microchimerism was found to be due to a vanished twin, and it is unknown whether microchimerism from a vanished twin might predispose individuals to autoimmune diseases as well. Mothers often also have a few cells genetically identical to those of their children, and some people also have some cells genetically identical to those of their siblings (maternal siblings only, since these cells are passed to them because their mother retained them). Germline chimerism Germline chimerism occurs when the germ cells (for example, sperm and egg cells) of an organism are not genetically identical to its own. It has been recently discovered that marmosets can carry the reproductive cells of their (fraternal) twin siblings due to placental fusion during development. (Marmosets almost always give birth to fraternal twins.) Types Animals As the organism develops, it can come to possess organs that have different sets of chromosomes. For example, the chimera may have a liver composed of cells with one set of chromosomes and have a kidney composed of cells with a second set of chromosomes. This has occurred in humans, and at one time was thought to be extremely rare although more recent evidence suggests that this is not the case. This is particularly true for the marmoset. Recent research shows most marmosets are chimeras, sharing DNA with their fraternal twins. 95% of marmoset fraternal twins trade blood through chorionic fusions, making them hematopoietic chimeras. In the budgerigar, due to the many existing plumage colour variations, tetragametic chimeras can be very conspicuous, as the resulting bird will have an obvious split between two colour types often divided bilaterally down the centre. These individuals are known as half-sider budgerigars. 
An animal chimera is a single organism that is composed of two or more different populations of genetically distinct cells that originated from different zygotes involved in sexual reproduction. If the different cells have emerged from the same zygote, the organism is called a mosaic. Innate chimeras are formed from at least four parent cells (two fertilised eggs or early embryos fused together). Each population of cells keeps its own character and the resulting organism is a mixture of tissues. Cases of human chimeras have been documented. Chimerism in humans Some consider mosaicism to be a form of chimerism, while others consider them to be distinct. Mosaicism involves a mutation of the genetic material in a cell, giving rise to a subset of cells that are different from the rest. Natural chimerism is the fusion of more than one fertilized zygote in the early stages of prenatal development. It is much rarer than mosaicism. In artificial chimerism, an individual has one cell lineage that was inherited genetically at the time of the formation of the human embryo and the other that was introduced through a procedure, including organ transplantation or blood transfusion. Specific types of transplants that could induce this condition include bone marrow transplants and organ transplants, as the recipient's body essentially works to permanently incorporate the new blood stem cells into it. Boklage argues that many human 'mosaic' cell lines will be "found to be chimeric if properly tested". In contrast, a human where each cell contains genetic material from two organisms of different breeds, varieties, species or genera is called a human–animal hybrid. While German dermatologist Alfred Blaschko described Blaschko's lines in 1901, the genetic science took until the 1930s to approach a vocabulary for the phenomenon. The term genetic chimera has been used at least since the 1944 article of Belgovskii. This condition is either innate or it is synthetic, acquired for example through the infusion of allogeneic blood cells during transplantation or transfusion. In nonidentical twins, innate chimerism occurs by means of blood vessel anastomoses. The likelihood of offspring being a chimera is increased if it is created via in vitro fertilisation. Chimeras can often breed, but the fertility and type of offspring depend on which cell line gave rise to the ovaries or testes; varying degrees of intersex differences may result if one set of cells is genetically female and another genetically male. On January 22, 2019, the National Society of Genetic Counselors released an article Chimerism Explained: How One Person Can Unknowingly Have Two Sets of DNA, where they state, "where a twin pregnancy evolves into one child, is currently believed to be one of the rarer forms. However, we know that 20 to 30% of singleton pregnancies were originally a twin or a multiple pregnancy". Most human chimeras will go through life without realizing they are chimeras. The difference in phenotypes may be subtle (e.g., having a hitchhiker's thumb and a straight thumb, eyes of slightly different colors, differential hair growth on opposite sides of the body, etc.) or completely undetectable. Chimeras may also show, under a certain spectrum of UV light, distinctive marks on the back resembling that of arrow points pointing downward from the shoulders down to the lower back; this is one expression of pigment unevenness called Blaschko's lines. 
Another case was that of Karen Keegan, who was also suspected (initially) of not being her children's biological mother, after DNA tests on her adult sons, carried out for a kidney transplant she needed, seemed to show she was not their mother. Plants Structure The distinction between sectorial, mericlinal and periclinal plant chimeras is widely used. Periclinal chimeras involve a genetic difference that persists in the descendant cells of a particular meristem layer. This type of chimera is more stable than mericlinal or sectorial mutations that affect only later generations of cells. Graft chimeras These are produced by grafting genetically different parents, different cultivars or different species (which may belong to different genera). The tissues may be partially fused together following grafting to form a single growing organism that preserves both types of tissue in a single shoot. Just as the constituent species are likely to differ in a wide range of features, so the behavior of their periclinal chimeras is likely to be highly variable. The first such known chimera was probably the Bizzarria, which is a fusion of the Florentine citron and the sour orange. Well-known examples of a graft-chimera are Laburnocytisus 'Adamii', caused by a fusion of a Laburnum and a broom, and "Family" trees, where multiple varieties of apple or pear are grafted onto the same tree. Many fruit trees are cultivated by grafting the body of a sapling onto a rootstock. Chromosomal chimeras These are chimeras in which the layers differ in their chromosome constitution. Occasionally, chimeras arise from loss or gain of individual chromosomes or chromosome fragments owing to misdivision. More commonly, cytochimeras have a simple multiple of the normal chromosome complement in the changed layer. There are various effects on cell size and growth characteristics. Nuclear gene-differential chimeras These chimeras arise by spontaneous or induced mutation of a nuclear gene to a dominant or recessive allele. As a rule, one character is affected at a time in the leaf, flower, fruit, or other parts. Plastid gene-differential chimeras These chimeras arise by spontaneous or induced mutation of a plastid gene, followed by the sorting-out of two kinds of plastid during vegetative growth. Alternatively, after selfing or crossing, plastids may sort out from a mixed egg or a mixed zygote respectively. This type of chimera is recognized at the time of origin by the sorting-out pattern in the leaves. After sorting-out is complete, periclinal chimeras are distinguished from similar-looking nuclear gene-differential chimeras by their non-Mendelian inheritance. The majority of variegated-leaf chimeras are of this kind. All plastid gene-differential and some nuclear gene-differential chimeras affect the color of the plastids within the leaves, and these are grouped together as chlorophyll chimeras, or preferably as variegated-leaf chimeras. For most variegation, the mutation involved is the loss of the chloroplasts in the mutated tissue, so that part of the plant tissue has no green pigment and no photosynthetic ability. This mutated tissue is unable to survive on its own, but it is kept alive by its partnership with normal photosynthetic tissue. Sometimes chimeras are also found with layers differing in respect of both their nuclear and their plastid genes. Origins There are multiple reasons that may explain the occurrence of plant chimeras during the plant recovery stage: The process of shoot organogenesis starts from a multicellular origin. 
The endogenous tolerance leads to the ineffectiveness of the weak selective agents. A self-protection mechanism (cross protection): transformed cells serve as guards to protect the untransformed ones. The observable characteristic of transgenic cells may be a transient expression of the marker gene, or it may be due to the presence of Agrobacterium cells. Detection Untransformed cells should be easy to detect and remove to avoid chimeras, because it is important to maintain the stability of the transgenic plants across different generations. Reporter genes such as GUS and Green Fluorescent Protein (GFP) are used in combination with plant selection markers (herbicide or antibiotic resistance, etc.). However, GUS expression depends on the plant developmental stage, and GFP may be influenced by the autofluorescence of green tissue. Quantitative PCR could be an alternative method for chimera detection. Viruses In 2012, the first example of a naturally occurring RNA-DNA hybrid virus was unexpectedly discovered during a metagenomic study of the acidic extreme environment of Boiling Springs Lake in Lassen Volcanic National Park, California. The virus was named BSL-RDHV (Boiling Springs Lake RNA DNA Hybrid Virus). Its genome is related to a DNA circovirus, which usually infects birds and pigs, and an RNA tombusvirus, which infects plants. The study surprised scientists, because DNA and RNA viruses differ greatly and the way the chimera came together was not understood. Other viral chimeras have also been found, and the group is known as the CHIV viruses ("chimeric viruses"). Research The first known primate chimeras are the rhesus monkey twins, Roku and Hex, each having six genomes. They were created by mixing cells from totipotent four-cell morulas; although the cells never fused, they worked together to form organs. It was discovered that one of these primates, Roku, was a sexual chimera, as four percent of Roku's blood cells contained two X chromosomes. A major milestone in chimera experimentation occurred in 1984 when a chimeric sheep–goat was produced by combining embryos from a goat and a sheep, and survived to adulthood. To research the developmental biology of the bird embryo, researchers produced artificial quail-chick chimeras in 1987. By using transplantation and ablation at the chick embryo stage, the neural tube and the neural crest cells of the chick were ablated and replaced with the same parts from a quail. Once hatched, the quail feathers were visibly apparent around the wing area, whereas the rest of the chick's body was made of its own chicken cells. In August 2003, researchers at the Shanghai Second Medical University in China reported that they had successfully fused human skin cells and rabbit ova to create the first human chimeric embryos. The embryos were allowed to develop for several days in a laboratory setting, and were then destroyed to harvest the resulting stem cells. In 2007, scientists at the University of Nevada School of Medicine created a sheep whose blood contained 15% human cells and 85% sheep cells. In 2023, a study reported the first chimeric monkey produced using embryonic stem cell lines. It was the only live birth from 12 pregnancies resulting from 40 implanted embryos of the crab-eating macaque; an average of 67%, and a maximum of 92%, of the cells across the 26 tested tissues were descendants of the donor stem cells, compared with 0.1–4.5% in previous experiments on chimeric monkeys. 
Work with mice Chimeric mice are important animals in biological research, as they allow for the investigation of a variety of biological questions in an animal that has two distinct genetic pools within it. These include insights into problems such as the tissue specific requirements of a gene, cell lineage, and cell potential. The general methods for creating chimeric mice can be summarized either by injection or aggregation of embryonic cells from different origins. The first chimeric mouse was made by Beatrice Mintz in the 1960s through the aggregation of eight-cell-stage embryos. Injection on the other hand was pioneered by Richard Gardner and Ralph Brinster who injected cells into blastocysts to create chimeric mice with germ lines fully derived from injected embryonic stem cells (ES cells). Chimeras can be derived from mouse embryos that have not yet implanted in the uterus as well as from implanted embryos. ES cells from the inner cell mass of an implanted blastocyst can contribute to all cell lineages of a mouse including the germ line. ES cells are a useful tool in chimeras because genes can be mutated in them through the use of homologous recombination, thus allowing gene targeting. Since this discovery occurred in 1988, ES cells have become a key tool in the generation of specific chimeric mice. Underlying biology The ability to make mouse chimeras comes from an understanding of early mouse development. Between the stages of fertilization of the egg and the implantation of a blastocyst into the uterus, different parts of the mouse embryo retain the ability to give rise to a variety of cell lineages. Once the embryo has reached the blastocyst stage, it is composed of several parts, mainly the trophectoderm, the inner cell mass, and the primitive endoderm. Each of these parts of the blastocyst gives rise to different parts of the embryo; the inner cell mass gives rise to the embryo proper, while the trophectoderm and primitive endoderm give rise to extra embryonic structures that support growth of the embryo. Two- to eight-cell-stage embryos are competent for making chimeras, since at these stages of development, the cells in the embryos are not yet committed to give rise to any particular cell lineage, and could give rise to the inner cell mass or the trophectoderm. In the case where two diploid eight-cell-stage embryos are used to make a chimera, chimerism can be later found in the epiblast, primitive endoderm, and trophectoderm of the mouse blastocyst. It is possible to dissect the embryo at other stages so as to accordingly give rise to one lineage of cells from an embryo selectively and not the other. For example, subsets of blastomeres can be used to give rise to chimera with specified cell lineage from one embryo. The Inner Cell Mass of a diploid blastocyst, for example, can be used to make a chimera with another blastocyst of eight-cell diploid embryo; the cells taken from the inner cell mass will give rise to the primitive endoderm and to the epiblast in the chimera mouse. From this knowledge, ES cell contributions to chimeras have been developed. ES cells can be used in combination with eight-cell-and two-cell-stage embryos to make chimeras and exclusively give rise to the embryo proper. Embryos that are to be used in chimeras can be further genetically altered to specifically contribute to only one part of chimera. An example is the chimera built off of ES cells and tetraploid embryos, which are artificially made by electrofusion of two two-cell diploid embryos. 
The tetraploid embryo will exclusively give rise to the trophectoderm and primitive endoderm in the chimera. Methods of production There are a variety of combinations that can give rise to a successful chimeric mouse, and an appropriate cell and embryo combination can be picked according to the goal of the experiment; common pairings include, but are not limited to, diploid embryo and ES cells, diploid embryo and diploid embryo, ES cells and tetraploid embryo, diploid embryo and tetraploid embryo, and ES cells and ES cells. The combination of embryonic stem cells and a diploid embryo is a common technique used for making chimeric mice, since gene targeting can be done in the embryonic stem cell. These kinds of chimeras can be made through either aggregation of stem cells and the diploid embryo or injection of the stem cells into the diploid embryo. If embryonic stem cells are to be used for gene targeting to make a chimera, the following procedure is common: a construct for homologous recombination of the targeted gene is introduced into cultured mouse embryonic stem cells from the donor mouse by way of electroporation; cells positive for the recombination event carry antibiotic resistance, provided by the insertion cassette used in the gene targeting, and can therefore be positively selected for. ES cells with the correctly targeted gene are then injected into a diploid host mouse blastocyst. These injected blastocysts are then implanted into a pseudopregnant female surrogate mouse, which will bring the embryos to term and give birth to a mouse whose germline is derived from the donor mouse's ES cells. The same result can be achieved through aggregation of ES cells and diploid embryos: diploid embryos are cultured in aggregation plates, in wells where single embryos can fit; ES cells are added to these wells; and the aggregates are cultured until a single embryo has formed and progressed to the blastocyst stage, at which point it can be transferred to the surrogate mouse. Ethics and legislation The US and Western Europe have strict codes of ethics and regulations in place that expressly forbid certain subsets of experimentation using human cells, though there is a vast difference in the regulatory framework. Through the creation of human chimeras comes the question: where does society now draw the line of humanity? This question poses serious legal and moral issues, along with creating controversy. Chimpanzees, for example, are not offered any legal standing, and are put down if they pose a threat to humans. If a chimpanzee is genetically altered to be more similar to a human, it may blur the ethical line between animal and human. Legal debate would be the next step in the process to determine whether certain chimeras should be granted legal rights. Along with issues regarding the rights of chimeras, individuals have expressed concern about whether or not creating human chimeras diminishes the "dignity" of being human. See also 46,XX/46,XY Genetic chimerism in fiction Retron Vanishing twin X-inactivation (lyonization) Polycephaly References Further reading Appel, Jacob M. "The Monster's Law", Genewatch, Volume 19, Number 2, March–April 2007. Nelson, J. Lee (Scientific American, February 2008). Your Cells Are My Cells. Weiss, Rick (August 14, 2003). Cloning yields human-rabbit hybrid embryo. The Washington Post. Weiss, Rick (February 13, 2005). U.S. Denies Patent for a too-human hybrid. The Washington Post. 
External links "Chimerism Explained" Chimerism and cellular mosaicism, Genetic Home Reference, U.S. National Library of Medicine, National Institute of Health. Chimera: Apical Origin, Ontogeny and Consideration in Propagation Plant Chimeras in Tissue Culture Ainsworth, Claire (November 15, 2003). "The Stranger Within". New Scientist . (Reprinted here ) Embryogenesis of chimeras, twins and anterior midline asymmetries Natural human chimeras: A review Reproduction Intersex healthcare Genetic anomalies Twin
Chimera (genetics)
[ "Biology" ]
5,441
[ "Chimerism", "Biological interactions", "Behavior", "Reproduction" ]
155,214
https://en.wikipedia.org/wiki/Pudendal%20nerve
The pudendal nerve is the main nerve of the perineum. It is a mixed (motor and sensory) nerve and also conveys sympathetic autonomic fibers. It carries sensation from the external genitalia of both sexes and the skin around the anus and perineum, as well as the motor supply to various pelvic muscles, including the male or female external urethral sphincter and the external anal sphincter. If damaged, most commonly by childbirth, loss of sensation or fecal incontinence may result. The nerve may be temporarily anesthetized, called pudendal anesthesia or pudendal block. The pudendal canal that carries the pudendal nerve is also known by the eponymous term "Alcock's canal", after Benjamin Alcock, an Irish anatomist who documented the canal in 1836. Structure Origin The pudendal nerve is paired, meaning there are two nerves, one on the left and one on the right side of the body. Each is formed as three roots immediately converge above the upper border of the sacrotuberous ligament and the coccygeus muscle. The three roots become two cords when the middle and lower root join to form the lower cord, and these in turn unite to form the pudendal nerve proper just proximal to the sacrospinous ligament. The three roots are derived from the ventral rami of the 2nd, 3rd, and 4th sacral spinal nerves, with the primary contribution coming from the 4th. Course and relations The pudendal nerve passes between the piriformis muscle and coccygeus (ischiococcygeus) muscles and leaves the pelvis through the lower part of the greater sciatic foramen. It crosses over the lateral part of the sacrospinous ligament and reenters the pelvis through the lesser sciatic foramen. After reentering the pelvis, it accompanies the internal pudendal artery and internal pudendal vein upwards and forwards along the lateral wall of the ischiorectal fossa, being contained in a sheath of the obturator fascia termed the pudendal canal, along with the internal pudendal blood vessels. Branches Inside the pudendal canal, the nerve divides into branches, first giving off the inferior rectal nerve, then the perineal nerve, before continuing as the dorsal nerve of the penis (in males) or the dorsal nerve of the clitoris (in females). Nucleus The nerve is a major branch of the sacral plexus, with fibers originating in Onuf's nucleus in the sacral region of the spinal cord. Variation The pudendal nerve may vary in its origins. For example, the pudendal nerve may actually originate in the sciatic nerve. Consequently, damage to the sciatic nerve can affect the pudendal nerve as well. Sometimes dorsal rami of the first sacral nerve contribute fibers to the pudendal nerve, and even more rarely . Function The pudendal nerve has both motor (control of muscles) and sensory functions. It also carries sympathetic autonomic fibers (but not parasympathetic fibers). Sensory The pudendal nerve supplies sensation to the penis in males, and to the clitoris in females, which travels through the branches of both the dorsal nerve of the penis and the dorsal nerve of the clitoris. The posterior scrotum in males and the labia majora in females are also supplied, via the posterior scrotal nerves (males) or posterior labial nerves (females). The pudendal nerve is one of several nerves supplying sensation to these areas. Branches also supply sensation to the anal canal. By providing sensation to the penis and the clitoris, the pudendal nerve is responsible for the afferent component of penile erection and clitoral erection. 
Motor Branches innervate muscles of the perineum and the pelvic floor: namely, the bulbospongiosus and ischiocavernosus muscles, the levator ani muscle (including the iliococcygeus, pubococcygeus, puborectalis and either pubovaginalis in females or puboprostaticus in males), the external anal sphincter (via the inferior anal branch), and the male or female external urethral sphincter. As it innervates the external urethral sphincter, the pudendal nerve is responsible for the tone of that sphincter, mediated via acetylcholine release. This means that during periods of increased acetylcholine release the skeletal muscle in the external urethral sphincter contracts, causing urinary retention, whereas in periods of decreased acetylcholine release it relaxes, allowing voiding of the bladder to occur. (Unlike the internal sphincter muscle, the external sphincter is made of skeletal muscle, and is therefore under voluntary control of the somatic nervous system.) The pudendal nerve is also responsible for ejaculation. Clinical significance The pudendal nerve may be tested by elicitation of the anocutaneous reflex ("anal wink"). Anesthesia A pudendal nerve block, also known as a saddle nerve block, is a local anesthesia technique used in obstetric procedures to anesthetize the perineum during labor. In this procedure, an anesthetic agent such as lidocaine is injected through the inner wall of the vagina around the pudendal nerve. Abnormal loss of sensation in the same region as a medical symptom is also sometimes termed saddle anesthesia. Damage The pudendal nerve can be compressed or stretched, resulting in temporary or permanent neuropathy. Injury to the pudendal nerve manifests more as sensory problems (pain or alteration/loss of sensation) than as loss of muscle control. Irreversible nerve injury may occur when nerves are stretched by 12% or more of their normal length. If the pelvic floor is over-stretched, acutely (e.g. prolonged or difficult childbirth) or chronically (e.g. chronic straining during defecation caused by constipation), the pudendal nerve is vulnerable to stretch-induced neuropathy. After repeated traction of the pudendal nerve, it starts to be replaced by fibrous tissue, with subsequent loss of function. Pudendal nerve entrapment, also known as Alcock canal syndrome, is neuropathic pain in the distribution of the pudendal nerve caused by entrapment of the nerve. The condition is estimated to have a prevalence of 1 in 100,000, and is sometimes associated with professional cycling. Systemic diseases such as diabetes and multiple sclerosis can damage the pudendal nerve via demyelination or other mechanisms. A pelvic tumor (most notably a large sacrococcygeal teratoma), or surgery to remove the tumor, can also cause permanent damage. Unilateral pudendal nerve neuropathy causes fecal incontinence in some individuals but not others, because crossover innervation of the external anal sphincter occurs in some people: there is significant overlap of the innervation of the external anal sphincter from the pudendal nerves of both sides, which allows partial re-innervation from the opposite side after nerve injury. Imaging The pudendal nerve is difficult to visualize on routine CT or MR imaging; however, under CT guidance, a needle may be placed adjacent to the pudendal neurovascular bundle. The ischial spine, an easily identifiable structure on CT, is used as the level of injection. 
A spinal needle is advanced via the gluteal muscles and advanced within several millimeters of the ischial spine. Contrast (X-ray dye) is then injected, highlighting the nerve in the canal and allowing for confirmation of correct needle placement. The nerve may then be injected with cortisone and local anesthetic to confirm and also treat chronic pain of the external genitalia (known as vulvodynia in females), pelvic and anorectal pain. Nerve latency testing The time taken for a muscle supplied by the pudendal nerve to contract in response to an electrical stimulus applied to the sensory and motor fibers can be quantified. Increased conduction time (terminal motor latency) signifies damage to the nerve. 2 stimulating electrodes and 2 measuring electrodes are mounted on the examiner's gloved finger ("St Mark's electrode"). History The term pudendal comes from Latin , meaning external genitals, derived from , meaning "parts to be ashamed of". The pudendal canal is also known by the eponymous term "Alcock's canal", after Benjamin Alcock, an Irish anatomist who documented the canal in 1836. Alcock documented the existence of the canal and pudendal nerve in a contribution about iliac arteries in Robert Bentley Todd's "The Cyclopaedia of Anatomy and Physiology". Additional images See also Neurogenic bladder Pudendal neuralgia Sacral plexus Inferior rectal nerve Perineal nerve Dorsal nerve of the penis Dorsal nerve of the clitoris Pudendal canal References External links - "Inferior view of female perineum, branches of the internal pudendal artery." Diagnosis and treatment at www.nervemed.com www.pudendal.com Pudendal nerve entrapment at chronicprostatitis.com CT sequence showing a pudendal nerve block. Nerves of the lower limb and lower torso Sexual anatomy
Pudendal nerve
[ "Biology" ]
2,011
[ "Sexual anatomy", "Sex" ]
155,319
https://en.wikipedia.org/wiki/Data%20acquisition
Data acquisition is the process of sampling signals that measure real-world physical conditions and converting the resulting samples into digital numeric values that can be manipulated by a computer. Data acquisition systems, abbreviated by the acronyms DAS, DAQ, or DAU, typically convert analog waveforms into digital values for processing. The components of data acquisition systems include: Sensors, to convert physical parameters to electrical signals. Signal conditioning circuitry, to convert sensor signals into a form that can be converted to digital values. Analog-to-digital converters, to convert conditioned sensor signals to digital values. Data acquisition applications are usually controlled by software programs developed using various general purpose programming languages such as Assembly, BASIC, C, C++, C#, Fortran, Java, LabVIEW, Lisp, Pascal, etc. Stand-alone data acquisition systems are often called data loggers. There are also open-source software packages providing all the necessary tools to acquire data from different, typically specific, hardware equipment. These tools come from the scientific community where complex experiment requires fast, flexible, and adaptable software. Those packages are usually custom-fit but more general DAQ packages like the Maximum Integrated Data Acquisition System can be easily tailored and are used in several physics experiments. History In 1963, IBM produced computers that specialized in data acquisition. These include the IBM 7700 Data Acquisition System, and its successor, the IBM 1800 Data Acquisition and Control System. These expensive specialized systems were surpassed in 1974 by general-purpose S-100 computers and data acquisition cards produced by Tecmar/Scientific Solutions Inc. In 1981 IBM introduced the IBM Personal Computer and Scientific Solutions introduced the first PC data acquisition products. Methodology Sources and systems Data acquisition begins with the physical phenomenon or physical property to be measured. Examples of this include temperature, vibration, light intensity, gas pressure, fluid flow, and force. Regardless of the type of physical property to be measured, the physical state that is to be measured must first be transformed into a unified form that can be sampled by a data acquisition system. The task of performing such transformations falls on devices called sensors. A data acquisition system is a collection of software and hardware that allows one to measure or control the physical characteristics of something in the real world. A complete data acquisition system consists of DAQ hardware, sensors and actuators, signal conditioning hardware, and a computer running DAQ software. If timing is necessary (such as for event mode DAQ systems), a separate compensated distributed timing system is required. A sensor, which is a type of transducer, is a device that converts a physical property into a corresponding electrical signal (e.g., strain gauge, thermistor). An acquisition system to measure different properties depends on the sensors that are suited to detect those properties. Signal conditioning may be necessary if the signal from the transducer is not suitable for the DAQ hardware being used. The signal may need to be filtered, shaped, or amplified in most cases. Various other examples of signal conditioning might be bridge completion, providing current or voltage excitation to the sensor, isolation, and linearization. 
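As a concrete, if simplified, picture of the chain just described — sensor, signal conditioning, analog-to-digital conversion — the following Python sketch runs one sample through each stage in software. The sensor model, the gain, and the 12-bit converter are assumptions made only for this example, not a description of any particular DAQ product.

```python
import random

ADC_BITS = 12                  # assumed 12-bit converter
ADC_FULL_SCALE_VOLTS = 5.0     # assumed 0-5 V input range

def read_sensor_volts(temperature_c: float) -> float:
    """Model a hypothetical temperature sensor: 10 mV per degree C plus noise."""
    return 0.010 * temperature_c + random.gauss(0, 0.002)

def condition(signal_volts: float, gain: float = 10.0) -> float:
    """Signal conditioning: amplify the small sensor voltage into the ADC's range."""
    return max(0.0, min(ADC_FULL_SCALE_VOLTS, signal_volts * gain))

def adc_convert(volts: float) -> int:
    """Quantize a conditioned voltage to an integer ADC code."""
    codes = 2 ** ADC_BITS - 1
    return round(volts / ADC_FULL_SCALE_VOLTS * codes)

def counts_to_engineering_units(code: int, gain: float = 10.0) -> float:
    """Invert the chain to recover temperature in degrees C from the raw code."""
    volts = code / (2 ** ADC_BITS - 1) * ADC_FULL_SCALE_VOLTS
    return volts / gain / 0.010

# One sample through the whole chain.
raw = read_sensor_volts(temperature_c=25.0)
code = adc_convert(condition(raw))
print(code, counts_to_engineering_units(code))
```

A real system would replace read_sensor_volts with hardware I/O through a device driver, but the scaling and quantization arithmetic is essentially the same.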
For transmission purposes, single ended analog signals, which are more susceptible to noise can be converted to differential signals. Once digitized, the signal can be encoded to reduce and correct transmission errors. DAQ hardware DAQ hardware is what usually interfaces between the signal and a PC. It could be in the form of modules that can be connected to the computer's ports (parallel, serial, USB, etc.) or cards connected to slots (S-100 bus, AppleBus, ISA, MCA, PCI, PCI-E, etc.) in a PC motherboard or in a modular crate (CAMAC, NIM, VME). Sometimes adapters are needed, in which case an external breakout box can be used. DAQ cards often contain multiple components (multiplexer, ADC, DAC, TTL-IO, high-speed timers, RAM). These are accessible via a bus by a microcontroller, which can run small programs. A controller is more flexible than a hard-wired logic, yet cheaper than a CPU so it is permissible to block it with simple polling loops. For example: Waiting for a trigger, starting the ADC, looking up the time, waiting for the ADC to finish, move value to RAM, switch multiplexer, get TTL input, let DAC proceed with voltage ramp. Today, signals from some sensors and Data Acquisition Systems can be streamed via Bluetooth. DAQ device drivers DAQ device drivers are needed for the DAQ hardware to work with a PC. The device driver performs low-level register writes and reads on the hardware while exposing API for developing user applications in a variety of programs. Input devices 3D scanner Analog-to-digital converter Time-to-digital converter Hardware Computer Automated Measurement and Control (CAMAC) Industrial Ethernet Industrial USB LAN eXtensions for Instrumentation Network interface controller PCI eXtensions for Instrumentation VMEbus VXI DAQ software Specialized DAQ software may be delivered with the DAQ hardware. Software tools used for building large-scale data acquisition systems include EPICS. Other programming environments that are used to build DAQ applications include ladder logic, Visual C++, Visual Basic, LabVIEW, and MATLAB. See also Black box Data collection (synonym) Data logger Data storage device Data science Sensor Signal processing Transducer References Further reading Tomaž Kos, Tomaž Kosar, and Marjan Mernik. Development of data acquisition systems by using a domain-specific modeling language. Computers in Industry, 63(3):181–192, 2012. Data Signal processing
Data acquisition
[ "Technology", "Engineering" ]
1,163
[ "Telecommunications engineering", "Computer engineering", "Signal processing", "Information technology", "Data" ]
155,350
https://en.wikipedia.org/wiki/Munsell%20color%20system
In colorimetry, the Munsell color system is a color space that specifies colors based on three properties of color: hue (basic color), value (lightness), and chroma (color intensity). It was created by Albert H. Munsell in the first decade of the 20th century and adopted by the United States Department of Agriculture (USDA) as the official color system for soil research in the 1930s. Several earlier color order systems had placed colors into a three-dimensional color solid of one form or another, but Munsell was the first to separate hue, value, and chroma into perceptually uniform and independent dimensions, and he was the first to illustrate the colors systematically in three-dimensional space. Munsell's system, particularly the later renotations, is based on rigorous measurements of human subjects' visual responses to color, putting it on a firm experimental scientific basis. Because of this basis in human visual perception, Munsell's system has outlasted its contemporary color models, and though it has been superseded for some uses by models such as CIELAB (L*a*b*) and CIECAM02, it is still in wide use today. Explanation The system consists of three independent properties of color which can be represented cylindrically in three dimensions as an irregular color solid: hue, measured by degrees around horizontal circles chroma, measured radially outward from the neutral (gray) vertical axis value, measured vertically on the core cylinder from 0 (black) to 10 (white) Munsell determined the spacing of colors along these dimensions by taking measurements of human visual responses. In each dimension, Munsell colors are as close to perceptually uniform as he could make them, which makes the resulting shape quite irregular. As Munsell explains: Hue Each horizontal circle Munsell divided into five principal hues: Red, Yellow, Green, Blue, and Purple, along with 5 intermediate hues (e.g., YR) halfway between adjacent principal hues. Each of these 10 steps, with the named hue given number 5, is then broken into 10 sub-steps, so that 100 hues are given integer values. In practice, color charts conventionally specify 40 hues, in increments of 2.5, progressing as for example 10R to 2.5YR. Two colors of equal value and chroma, on opposite sides of a hue circle, are complementary colors, and mix additively to the neutral gray of the same value. The diagram below shows 40 evenly spaced Munsell hues, with complements vertically aligned. Value Value, or lightness, varies vertically along the color solid, from black (value 0) at the bottom, to white (value 10) at the top. Neutral grays lie along the vertical axis between black and white. Several color solids before Munsell's plotted luminosity from black on the bottom to white on the top, with a gray gradient between them, but these systems neglected to keep perceptual lightness constant across horizontal slices. Instead, they plotted fully saturated yellow (light), and fully saturated blue and purple (dark) along the equator. Chroma Chroma, measured radially from the center of each slice, represents the “purity” of a color (related to saturation), with lower chroma being less pure (more washed out, as in pastels). Note that there is no intrinsic upper limit to chroma. Different areas of the color space have different maximal chroma coordinates. For instance light yellow colors have considerably more potential chroma than light purples, due to the nature of the eye and the physics of color stimuli. 
This led to a wide range of possible chroma levels—up to the high 30s for some hue–value combinations (though it is difficult or impossible to make physical objects in colors of such high chromas, and they cannot be reproduced on current computer displays). Vivid solid colors are in the range of approximately 8. Specifying a color A color is fully specified by listing the three numbers for hue, value, and chroma in that order. For instance, a purple of medium lightness and fairly saturated would be 5P 5/10 with 5P meaning the color in the middle of the purple hue band, 5/ meaning medium value (lightness), and a chroma of 10 (see swatch). An achromatic color is specified by the syntax . For example, a medium grey is specified by "N 5/". In computer processing, the Munsell colors are converted to a set of "HVC" numbers. The V and C are the same as the normal chroma and value. The H (hue) number is converted by mapping the hue rings into numbers between 0 and 100, where both 0 and 100 correspond to 10RP. As the Munsell books, including the 1943 renotation, only contains colors for some points in the Munsell space, it is non-trivial to specify an arbitrary color in Munsell space. Interpolation must be used to assign meanings to non-book colors such as "2.8Y 6.95/2.3", followed by an inversion of the fitted Munsell-to-xyY transform. The ASTM has defined a method in 2008, but Centore 2012 is known to work better. History and influence The idea of using a three-dimensional color solid to represent all colors was developed during the 18th and 19th centuries. Several different shapes for such a solid were proposed, including: a double triangular pyramid by Tobias Mayer in 1758, a single triangular pyramid by Johann Heinrich Lambert in 1772, a sphere by Philipp Otto Runge in 1810, a hemisphere by Michel Eugène Chevreul in 1839, a cone by Hermann von Helmholtz in 1860, a tilted cube by William Benson in 1868, and a slanted double cone by August Kirschmann in 1895. These systems became progressively more sophisticated, with Kirschmann’s even recognizing the difference in value between bright colors of different hues. But all of them remained either purely theoretical or encountered practical problems in accommodating all colors. Furthermore, none was based on any rigorous scientific measurement of human vision; before Munsell, the relationship between hue, value, and chroma was not understood. Albert Munsell, an artist and professor of art at the Massachusetts Normal Art School (now Massachusetts College of Art and Design, or MassArt), wanted to create a "rational way to describe color" that would use decimal notation instead of color names (which he felt were "foolish" and "misleading"), which he could use to teach his students about color. He first started work on the system in 1898 and published it in full form in A Color Notation in 1905. The original embodiment of the system (the 1905 Atlas) had some deficiencies as a physical representation of the theoretical system. These were improved significantly in the 1929 Munsell Book of Color and through an extensive series of experiments carried out by the Optical Society of America in the 1940s resulting in the notations (sample definitions) for the modern Munsell Book of Color. 
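As a small worked example of the notation and hue numbering described above, the following Python sketch parses designations such as "5P 5/10" or "N 5/" and maps the hue band to the 0–100 hue number (with 10RP at 100, equivalently 0). The function names and regular expressions are purely illustrative.

```python
import re

# Hue families in Munsell order; each spans 10 units of the 0-100 hue circle.
FAMILIES = ["R", "YR", "Y", "GY", "G", "BG", "B", "PB", "P", "RP"]

def parse_munsell(notation: str):
    """Parse 'H V/C' (e.g. '5P 5/10') or an achromatic 'N V/' into (H, V, C).

    H is the 0-100 hue number (None for neutrals), V is value, C is chroma.
    """
    notation = notation.strip()
    neutral = re.fullmatch(r"N\s*([\d.]+)\s*/?", notation)
    if neutral:
        return (None, float(neutral.group(1)), 0.0)
    m = re.fullmatch(r"([\d.]+)\s*([A-Z]{1,2})\s+([\d.]+)\s*/\s*([\d.]+)", notation)
    if not m:
        raise ValueError(f"not a Munsell designation: {notation!r}")
    step, family, value, chroma = m.groups()
    hue = FAMILIES.index(family) * 10 + float(step)   # 10RP maps to 100 (same hue as 0)
    return (hue % 100 or 100.0, float(value), float(chroma))

print(parse_munsell("5P 5/10"))   # (85.0, 5.0, 10.0) -- the mid-purple example above
print(parse_munsell("N 5/"))      # (None, 5.0, 0.0)  -- medium grey
print(parse_munsell("10RP 6/8"))  # (100.0, 6.0, 8.0) -- 0 and 100 are the same hue
```

Converting an arbitrary Munsell triple to CIE coordinates is a separate, harder problem, since the renotation data are tabulated only at discrete points; that is why interpolation methods such as the ASTM procedure and Centore's algorithm mentioned above exist.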
Though several replacements for the Munsell system have been invented, building on Munsell's foundational ideas—including the Optical Society of America's Uniform Color Scales, and the International Commission on Illumination’s CIELAB (L*a*b*) and CIECAM02 color models—the Munsell system is still widely used, by, among others, ANSI to define skin color and hair color for forensic pathology, the USGS for matching soil color, in prosthodontics during the selection of tooth color for dental restorations, and breweries for matching beer color. The original Munsell color chart remains useful for comparing computer models of human color vision. See also Coloroid HSL and HSV Natural Color System Notes References Bibliography One of the first books about the Munsell color system, explaining the intuition behind its three dimensions, and suggesting possible uses of the system in picking color combinations. An edited version can be found at http://www.applepainter.com/. A description of color systems leading up to Munsell's, and a biographical explanation of Munsell's changing ideas about color and development of his color solid, leading up to the publication of A Color Notation in 1905. An introductory explanation of the development and influence of the Munsell system. A concise introduction to the Munsell color system, on a web page which also discusses several other color systems, putting the Munsell system in its historical context. Munsell's original description of his system. A Color Notation was published before he had established the irregular shape of a perceptual color solid, so it describes colors positioned in a sphere. Munsell's description of his color system, from a lecture to the American Psychological Association. External links General information Munsell.com, the homepage of Munsell Color, a subdivision of X-Rite, current owners of the Munsell Color Company. Munsell page at the X-Rite website. ApplePainter.com, a site explaining the Munsell color chart, including an edited version of Cleland's book, A practical description of the Munsell color system. An explanation of the Munsell system at Adobe.com. Retrieved 13 August 2003 A brief explanation at the site of the Japanese company Dainichiseika Color & Chemicals, including a nice diagram of the Munsell color solid. Data and conversion Munsell Color Science Laboratory at the Rochester Institute of Technology, an academic laboratory dedicated to color science, endowed by the Munsell Foundation. Munsell renotation data in plain text format (from the 1940s Optical Society of America renotations). CIE xyY original and sRGB and CIELAB conversions provided. The Munsell and Kubelka-Munk Toolbox by Paul Centore, with a radial interpolation algorithm described in Centore 2012. munsellinterpol, R language package for interpolating between Munsell renotation samples; spline interpolation. Munsell Color Palette, an online Munsell color palette and Munsell-to-sRGB converter; crude linear interpolation in sRGB space. A flash-based Munsell Palette color-picker from web-design firm Triplecode (based on a version originally created at the MIT Media Lab). LOGitEASY Munsell Color Calculator, which converts Munsell colors to a specialized soil-color notation (registration required) Other tools ToyPalette from Loo & Cox, a web application for generating color palettes from images. Munsell color analysis of digital image. Color space 1905 introductions
Munsell color system
[ "Mathematics" ]
2,265
[ "Color space", "Space (mathematics)", "Metric spaces" ]
155,407
https://en.wikipedia.org/wiki/Kleene%27s%20recursion%20theorem
In computability theory, Kleene's recursion theorems are a pair of fundamental results about the application of computable functions to their own descriptions. The theorems were first proved by Stephen Kleene in 1938 and appear in his 1952 book Introduction to Metamathematics. A related theorem, which constructs fixed points of a computable function, is known as Rogers's theorem and is due to Hartley Rogers, Jr. The recursion theorems can be applied to construct fixed points of certain operations on computable functions, to generate quines, and to construct functions defined via recursive definitions. Notation The statement of the theorems refers to an admissible numbering φ of the partial recursive functions, such that the function corresponding to index e is φ_e. If F and G are partial functions on the natural numbers, the notation F ≃ G indicates that, for each n, either F(n) and G(n) are both defined and are equal, or else F(n) and G(n) are both undefined. Rogers's fixed-point theorem Given a function F, a fixed point of F is an index e such that φ_e ≃ φ_{F(e)}. Note that the comparison of in- and outputs here is not in terms of numerical values, but in terms of their associated functions. Rogers describes the following result as "a simpler version" of Kleene's (second) recursion theorem: if F is a total computable function, then F has a fixed point in the above sense. This essentially means that if we apply an effective transformation to programs (say, replace instructions such as successor, jump, remove lines), there will always be a program whose behaviour is not altered by the transformation. This theorem can therefore be interpreted in the following manner: "given any effective procedure to transform programs, there is always a program that, when modified by the procedure, does exactly what it did before", or: "it's impossible to write a program that changes the extensional behaviour of all programs". Proof of the fixed-point theorem The proof uses a particular total computable function h, defined as follows. Given a natural number x, the function h outputs the index of the partial computable function that performs the following computation: Given an input y, first attempt to compute φ_x(x). If that computation returns an output e, then compute φ_e(y) and return its value, if any. Thus, for all indices x of partial computable functions, if φ_x(x) is defined, then φ_{h(x)} ≃ φ_{φ_x(x)}. If φ_x(x) is not defined, then φ_{h(x)} is a function that is nowhere defined. The function h can be constructed from the two-place partial computable function g(x, y) described above and the s-m-n theorem: for each x, h(x) is the index of a program which computes the function y ↦ g(x, y). To complete the proof, let F be any total computable function, and construct h as above. Let e be an index of the composition F ∘ h, which is a total computable function. Then φ_{h(e)} ≃ φ_{φ_e(e)} by the definition of h. But, because e is an index of F ∘ h, φ_e(e) = F(h(e)), and thus φ_{φ_e(e)} ≃ φ_{F(h(e))}. By the transitivity of ≃, this means φ_{h(e)} ≃ φ_{F(h(e))}. Hence φ_n ≃ φ_{F(n)} for n = h(e). This proof is a construction of a partial recursive function which implements the Y combinator. Fixed-point-free functions A function F such that φ_e ≄ φ_{F(e)} for all e is called fixed-point free. The fixed-point theorem shows that no total computable function is fixed-point free, but there are many non-computable fixed-point-free functions. Arslanov's completeness criterion states that the only recursively enumerable Turing degree that computes a fixed-point-free function is 0′, the degree of the halting problem. Kleene's second recursion theorem The second recursion theorem is a generalization of Rogers's theorem with a second input in the function.
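For orientation, the two results discussed in this article can be stated compactly in the notation introduced above. The LaTeX block below is only a restatement of the theorems, not an additional claim.

```latex
% Rogers's fixed-point theorem: every total computable transformation of
% program indices leaves some program's behaviour unchanged.
F \text{ total computable} \;\Longrightarrow\; \exists e \;\; \varphi_e \simeq \varphi_{F(e)}

% Kleene's second recursion theorem: a program may refer to its own index.
Q(x,y) \text{ partial computable} \;\Longrightarrow\; \exists p \;\forall y \;\; \varphi_p(y) \simeq Q(p,y)
```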
One informal interpretation of the second recursion theorem is that it is possible to construct self-referential programs; see "Application to quines" below. The second recursion theorem. For any partial recursive function Q(x, y) there is an index p such that φ_p(y) ≃ Q(p, y) for all y. The theorem can be proved from Rogers's theorem by letting F be a function such that φ_{F(p)}(y) = Q(p, y) (a construction described by the S-m-n theorem). One can then verify that a fixed point of this F is an index p as required. The theorem is constructive in the sense that a fixed computable function maps an index for Q into the index p. Comparison to Rogers's theorem Kleene's second recursion theorem and Rogers's theorem can both be proved, rather simply, from each other. However, a direct proof of Kleene's theorem does not make use of a universal program, which means that the theorem holds for certain subrecursive programming systems that do not have a universal program. Application to quines A classic example using the second recursion theorem is the function Q(x, y) = x. The corresponding index p in this case yields a computable function that outputs its own index when applied to any value. When expressed as computer programs, such indices are known as quines. The following example in Lisp illustrates how the p in the corollary can be effectively produced from the function Q. The function s11 in the code is the function of that name produced by the S-m-n theorem. Q can be changed to any two-argument function. (setq Q '(lambda (x y) x)) (setq s11 '(lambda (f x) (list 'lambda '(y) (list f x 'y)))) (setq n (list 'lambda '(x y) (list Q (list s11 'x 'x) 'y))) (setq p (eval (list s11 n n))) The results of the following expressions should be the same. p(nil) (eval (list p nil)) Q(p, nil) (eval (list Q p nil)) Application to elimination of recursion Suppose that g and h are total computable functions that are used in a recursive definition for a function f: f(0, y) ≃ g(y) and f(x+1, y) ≃ h(f(x, y), x, y). The second recursion theorem can be used to show that such equations define a computable function, where the notion of computability does not have to allow, prima facie, for recursive definitions (for example, it may be defined by μ-recursion, or by Turing machines). This recursive definition can be converted into a computable function φ_F(e, x, y) that assumes e is an index to itself, to simulate recursion: φ_F(e, 0, y) ≃ g(y) and φ_F(e, x+1, y) ≃ h(φ_e(x, y), x, y). The recursion theorem establishes the existence of a computable function φ_f such that φ_f(x, y) ≃ φ_F(f, x, y). Thus f satisfies the given recursive definition. Reflexive programming Reflexive, or reflective, programming refers to the usage of self-reference in programs. Jones presents a view of the second recursion theorem based on a reflexive language. It is shown that the reflexive language defined is not stronger than a language without reflection (because an interpreter for the reflexive language can be implemented without using reflection); then, it is shown that the recursion theorem is almost trivial in the reflexive language. The first recursion theorem While the second recursion theorem is about fixed points of computable functions, the first recursion theorem is related to fixed points determined by enumeration operators, which are a computable analogue of inductive definitions. An enumeration operator is a set of pairs (A,n) where A is a (code for a) finite set of numbers and n is a single natural number. Often, n will be viewed as a code for an ordered pair of natural numbers, particularly when functions are defined via enumeration operators. Enumeration operators are of central importance in the study of enumeration reducibility.
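The Lisp construction above can be mirrored in other languages. Below is a minimal Python counterpart for the special case Q(x, y) = x, i.e. a quine; it does not reproduce the general s-m-n construction, only the standard trick of substituting a template string into itself.

```python
# The second recursion theorem guarantees programs that can refer to their own
# text. The two lines below, taken on their own as a program, print exactly
# their own source code (a quine) -- the Q(x, y) = x case discussed above.
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))
```

The Lisp version above stays closer to the proof, since it really goes through an s-1-1 function; the Python version simply hard-codes the diagonalization.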
Each enumeration operator Φ determines a function from sets of naturals to sets of naturals given by A recursive operator is an enumeration operator that, when given the graph of a partial recursive function, always returns the graph of a partial recursive function. A fixed point of an enumeration operator Φ is a set F such that Φ(F) = F. The first enumeration theorem shows that fixed points can be effectively obtained if the enumeration operator itself is computable. First recursion theorem. The following statements hold. For any computable enumeration operator Φ there is a recursively enumerable set F such that Φ(F) = F and F is the smallest set with this property. For any recursive operator Ψ there is a partial computable function φ such that Ψ(φ) = φ and φ is the smallest partial computable function with this property. The first recursion theorem is also called Fixed point theorem (of recursion theory). There is also a definition which can be applied to recursive functionals as follows: Let be a recursive functional. Then has a least fixed point which is computable i.e. 1) 2) such that it holds that 3) is computable Example Like the second recursion theorem, the first recursion theorem can be used to obtain functions satisfying systems of recursion equations. To apply the first recursion theorem, the recursion equations must first be recast as a recursive operator. Consider the recursion equations for the factorial function f:The corresponding recursive operator Φ will have information that tells how to get to the next value of f from the previous value. However, the recursive operator will actually define the graph of f. First, Φ will contain the pair . This indicates that f(0) is unequivocally 1, and thus the pair (0,1) is in the graph of f. Next, for each n and m, Φ will contain the pair . This indicates that, if f(n) is m, then is , so that the pair is in the graph of f. Unlike the base case , the recursive operator requires some information about f(n) before it defines a value of . The first recursion theorem (in particular, part 1) states that there is a set F such that . The set F will consist entirely of ordered pairs of natural numbers, and will be the graph of the factorial function f, as desired. The restriction to recursion equations that can be recast as recursive operators ensures that the recursion equations actually define a least fixed point. For example, consider the set of recursion equations:There is no function g satisfying these equations, because they imply g(2) = 1 and also imply g(2) = 0. Thus there is no fixed point g satisfying these recursion equations. It is possible to make an enumeration operator corresponding to these equations, but it will not be a recursive operator. Proof sketch for the first recursion theorem The proof of part 1 of the first recursion theorem is obtained by iterating the enumeration operator Φ beginning with the empty set. First, a sequence Fk is constructed, for . Let F0 be the empty set. Proceeding inductively, for each k, let Fk + 1 be . Finally, F is taken to be . The remainder of the proof consists of a verification that F is recursively enumerable and is the least fixed point of Φ. The sequence Fk used in this proof corresponds to the Kleene chain in the proof of the Kleene fixed-point theorem. The second part of the first recursion theorem follows from the first part. The assumption that Φ is a recursive operator is used to show that the fixed point of Φ is the graph of a partial function. 
The key point is that if the fixed point F is not the graph of a function, then there is some k such that Fk is not the graph of a function. Comparison to the second recursion theorem Compared to the second recursion theorem, the first recursion theorem produces a stronger conclusion but only when narrower hypotheses are satisfied. Rogers uses the term weak recursion theorem for the first recursion theorem and strong recursion theorem for the second recursion theorem. One difference between the first and second recursion theorems is that the fixed points obtained by the first recursion theorem are guaranteed to be least fixed points, while those obtained from the second recursion theorem may not be least fixed points. A second difference is that the first recursion theorem only applies to systems of equations that can be recast as recursive operators. This restriction is similar to the restriction to continuous operators in the Kleene fixed-point theorem of order theory. The second recursion theorem can be applied to any total recursive function. Generalized theorem In the context of his theory of numberings, Ershov showed that Kleene's recursion theorem holds for any precomplete numbering. A Gödel numbering is a precomplete numbering on the set of computable functions so the generalized theorem yields the Kleene recursion theorem as a special case. Given a precomplete numbering , then for any partial computable function with two parameters there exists a total computable function with one parameter such that See also Denotational semantics, where another least fixed point theorem is used for the same purpose as the first recursion theorem. Fixed-point combinators, which are used in lambda calculus for the same purpose as the first recursion theorem. Diagonal lemma a closely related result in mathematical logic. References Footnotes Further reading External links . Computability theory Theorems in the foundations of mathematics
Kleene's recursion theorem
[ "Mathematics" ]
2,786
[ "Foundations of mathematics", "Mathematical logic", "Mathematical problems", "Mathematical theorems", "Computability theory", "Theorems in the foundations of mathematics" ]
155,414
https://en.wikipedia.org/wiki/Computability%20theory
Computability theory, also known as recursion theory, is a branch of mathematical logic, computer science, and the theory of computation that originated in the 1930s with the study of computable functions and Turing degrees. The field has since expanded to include the study of generalized computability and definability. In these areas, computability theory overlaps with proof theory and effective descriptive set theory. Basic questions addressed by computability theory include: What does it mean for a function on the natural numbers to be computable? How can noncomputable functions be classified into a hierarchy based on their level of noncomputability? Although there is considerable overlap in terms of knowledge and methods, mathematical computability theorists study the theory of relative computability, reducibility notions, and degree structures; those in the computer science field focus on the theory of subrecursive hierarchies, formal methods, and formal languages. The study of which mathematical constructions can be effectively performed is sometimes called recursive mathematics. Introduction Computability theory originated in the 1930s, with the work of Kurt Gödel, Alonzo Church, Rózsa Péter, Alan Turing, Stephen Kleene, and Emil Post. The fundamental results the researchers obtained established Turing computability as the correct formalization of the informal idea of effective calculation. In 1952, these results led Kleene to coin the two names "Church's thesis" and "Turing's thesis". Nowadays these are often considered as a single hypothesis, the Church–Turing thesis, which states that any function that is computable by an algorithm is a computable function. Although initially skeptical, by 1946 Gödel argued in favor of this thesis: With a definition of effective calculation came the first proofs that there are problems in mathematics that cannot be effectively decided. In 1936, Church and Turing were inspired by techniques used by Gödel to prove his incompleteness theorems - in 1931, Gödel independently demonstrated that the is not effectively decidable. This result showed that there is no algorithmic procedure that can correctly decide whether arbitrary mathematical propositions are true or false. Many problems in mathematics have been shown to be undecidable after these initial examples were established. In 1947, Markov and Post published independent papers showing that the word problem for semigroups cannot be effectively decided. Extending this result, Pyotr Novikov and William Boone showed independently in the 1950s that the word problem for groups is not effectively solvable: there is no effective procedure that, given a word in a finitely presented group, will decide whether the element represented by the word is the identity element of the group. In 1970, Yuri Matiyasevich proved (using results of Julia Robinson) Matiyasevich's theorem, which implies that Hilbert's tenth problem has no effective solution; this problem asked whether there is an effective procedure to decide whether a Diophantine equation over the integers has a solution in the integers. Turing computability The main form of computability studied in the field was introduced by Turing in 1936. A set of natural numbers is said to be a computable set (also called a decidable, recursive, or Turing computable set) if there is a Turing machine that, given a number n, halts with output 1 if n is in the set and halts with output 0 if n is not in the set. 
A function f from natural numbers to natural numbers is a (Turing) computable, or recursive function if there is a Turing machine that, on input n, halts and returns output f(n). The use of Turing machines here is not necessary; there are many other models of computation that have the same computing power as Turing machines; for example the μ-recursive functions obtained from primitive recursion and the μ operator. The terminology for computable functions and sets is not completely standardized. The definition in terms of μ-recursive functions as well as a different definition of functions by Gödel led to the traditional name recursive for sets and functions computable by a Turing machine. The word decidable stems from the German word which was used in the original papers of Turing and others. In contemporary use, the term "computable function" has various definitions: according to Nigel J. Cutland, it is a partial recursive function (which can be undefined for some inputs), while according to Robert I. Soare it is a total recursive (equivalently, general recursive) function. This article follows the second of these conventions. In 1996, Soare gave additional comments about the terminology. Not every set of natural numbers is computable. The halting problem, which is the set of (descriptions of) Turing machines that halt on input 0, is a well-known example of a noncomputable set. The existence of many noncomputable sets follows from the facts that there are only countably many Turing machines, and thus only countably many computable sets, but according to the Cantor's theorem, there are uncountably many sets of natural numbers. Although the halting problem is not computable, it is possible to simulate program execution and produce an infinite list of the programs that do halt. Thus the halting problem is an example of a computably enumerable (c.e.) set, which is a set that can be enumerated by a Turing machine (other terms for computably enumerable include recursively enumerable and semidecidable). Equivalently, a set is c.e. if and only if it is the range of some computable function. The c.e. sets, although not decidable in general, have been studied in detail in computability theory. Areas of research Beginning with the theory of computable sets and functions described above, the field of computability theory has grown to include the study of many closely related topics. These are not independent areas of research: each of these areas draws ideas and results from the others, and most computability theorists are familiar with the majority of them. Relative computability and the Turing degrees Computability theory in mathematical logic has traditionally focused on relative computability, a generalization of Turing computability defined using oracle Turing machines, introduced by Turing in 1939. An oracle Turing machine is a hypothetical device which, in addition to performing the actions of a regular Turing machine, is able to ask questions of an oracle, which is a particular set of natural numbers. The oracle machine may only ask questions of the form "Is n in the oracle set?". Each question will be immediately answered correctly, even if the oracle set is not computable. Thus an oracle machine with a noncomputable oracle will be able to compute sets that a Turing machine without an oracle cannot. 
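Before turning to relative computability, the remark above that one can "simulate program execution and produce an infinite list of the programs that do halt" can be illustrated by dovetailing: run every candidate for a bounded number of steps and keep raising the bound. The sketch below does this in Python, using generators as stand-ins for Turing machines (each yield counts as one step) and an assumed finite list in place of an enumeration of all programs; it illustrates the enumeration idea only.

```python
from itertools import count, islice

# Toy "programs": generators in which each yield is one computation step.
def halts_quickly():
    yield; yield          # halts after 2 steps

def loops_forever():
    while True:
        yield             # never halts

def halts_slowly():
    for _ in range(50):
        yield             # halts after 50 steps

# Stand-in for an effective enumeration of all programs.
PROGRAMS = [halts_quickly, loops_forever, halts_slowly]

def halts_within(program, steps):
    """Simulate `program` for at most `steps` steps; report whether it halted."""
    g = program()
    for _ in range(steps):
        try:
            next(g)
        except StopIteration:
            return True
    return False

def enumerate_halters():
    """Dovetail over (program, step bound) pairs, yielding each halting program once."""
    seen = set()
    for bound in count(1):
        for i, prog in enumerate(PROGRAMS):
            if i not in seen and halts_within(prog, bound):
                seen.add(i)
                yield prog.__name__

# Finds the two halting programs; it never has to decide anything about loops_forever.
print(list(islice(enumerate_halters(), 2)))
```

The enumerator lists exactly the halting programs, but it can never certify that a program does not halt — which is the sense in which the halting problem is computably enumerable without being computable.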
Informally, a set of natural numbers A is Turing reducible to a set B if there is an oracle machine that correctly tells whether numbers are in A when run with B as the oracle set (in this case, the set A is also said to be (relatively) computable from B and recursive in B). If a set A is Turing reducible to a set B and B is Turing reducible to A then the sets are said to have the same Turing degree (also called degree of unsolvability). The Turing degree of a set gives a precise measure of how uncomputable the set is. The natural examples of sets that are not computable, including many different sets that encode variants of the halting problem, have two properties in common: They are computably enumerable, and Each can be translated into any other via a many-one reduction. That is, given such sets A and B, there is a total computable function f such that A = {x : f(x) ∈ B}. These sets are said to be many-one equivalent (or m-equivalent). Many-one reductions are "stronger" than Turing reductions: if a set A is many-one reducible to a set B, then A is Turing reducible to B, but the converse does not always hold. Although the natural examples of noncomputable sets are all many-one equivalent, it is possible to construct computably enumerable sets A and B such that A is Turing reducible to B but not many-one reducible to B. It can be shown that every computably enumerable set is many-one reducible to the halting problem, and thus the halting problem is the most complicated computably enumerable set with respect to many-one reducibility and with respect to Turing reducibility. In 1944, Post asked whether every computably enumerable set is either computable or Turing equivalent to the halting problem, that is, whether there is no computably enumerable set with a Turing degree intermediate between those two. As intermediate results, Post defined natural types of computably enumerable sets like the simple, hypersimple and hyperhypersimple sets. Post showed that these sets are strictly between the computable sets and the halting problem with respect to many-one reducibility. Post also showed that some of them are strictly intermediate under other reducibility notions stronger than Turing reducibility. But Post left open the main problem of the existence of computably enumerable sets of intermediate Turing degree; this problem became known as Post's problem. After ten years, Kleene and Post showed in 1954 that there are intermediate Turing degrees between those of the computable sets and the halting problem, but they failed to show that any of these degrees contains a computably enumerable set. Very soon after this, Friedberg and Muchnik independently solved Post's problem by establishing the existence of computably enumerable sets of intermediate degree. This groundbreaking result opened a wide study of the Turing degrees of the computably enumerable sets which turned out to possess a very complicated and non-trivial structure. There are uncountably many sets that are not computably enumerable, and the investigation of the Turing degrees of all sets is as central in computability theory as the investigation of the computably enumerable Turing degrees. 
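The many-one reductions mentioned above between variants of the halting problem can be written out explicitly. The sketch below reduces A = {(p, x) : program p halts on input x} to B = {q : program q halts on the empty input} via a total computable function f that only rewrites program text; the assumption that p exposes a main entry point is an artifact of this example, and no halting question is ever answered by f itself.

```python
def f(p_source: str, x: int) -> str:
    """Total computable map with: (p, x) is in A  iff  f(p, x) is in B.

    The returned program ignores its (empty) input and simulates p on the
    fixed argument x, so it halts exactly when p halts on x.
    """
    return (
        f"x = {x!r}\n"
        + p_source
        + "\nmain(x)   # assumed entry point of p; halts iff p halts on x\n"
    )

# Example: a program p whose main(x) loops unless x is even.
p = "def main(x):\n    while x % 2:\n        pass\n"
print(f(p, 4))   # text of a program that halts on empty input
print(f(p, 3))   # text of a program that would run forever (we only print it)
```

Any decision procedure for B would therefore decide A as well, which is the sense in which these sets are many-one equivalent to the halting problem.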
Many degrees with special properties were constructed: hyperimmune-free degrees where every function computable relative to that degree is majorized by a (unrelativized) computable function; high degrees relative to which one can compute a function f which dominates every computable function g in the sense that there is a constant c depending on g such that g(x) < f(x) for all x > c; random degrees containing algorithmically random sets; 1-generic degrees of 1-generic sets; and the degrees below the halting problem of limit-computable sets. The study of arbitrary (not necessarily computably enumerable) Turing degrees involves the study of the Turing jump. Given a set A, the Turing jump of A is a set of natural numbers encoding a solution to the halting problem for oracle Turing machines running with oracle A. The Turing jump of any set is always of higher Turing degree than the original set, and a theorem of Friedburg shows that any set that computes the Halting problem can be obtained as the Turing jump of another set. Post's theorem establishes a close relationship between the Turing jump operation and the arithmetical hierarchy, which is a classification of certain subsets of the natural numbers based on their definability in arithmetic. Much recent research on Turing degrees has focused on the overall structure of the set of Turing degrees and the set of Turing degrees containing computably enumerable sets. A deep theorem of Shore and Slaman states that the function mapping a degree x to the degree of its Turing jump is definable in the partial order of the Turing degrees. A survey by Ambos-Spies and Fejer gives an overview of this research and its historical progression. Other reducibilities An ongoing area of research in computability theory studies reducibility relations other than Turing reducibility. Post introduced several strong reducibilities, so named because they imply truth-table reducibility. A Turing machine implementing a strong reducibility will compute a total function regardless of which oracle it is presented with. Weak reducibilities are those where a reduction process may not terminate for all oracles; Turing reducibility is one example. The strong reducibilities include: One-one reducibility: A is one-one reducible (or 1-reducible) to B if there is a total computable injective function f such that each n is in A if and only if f(n) is in B. Many-one reducibility: This is essentially one-one reducibility without the constraint that f be injective. A is many-one reducible (or m-reducible) to B if there is a total computable function f such that each n is in A if and only if f(n) is in B. Truth-table reducibility: A is truth-table reducible to B if A is Turing reducible to B via an oracle Turing machine that computes a total function regardless of the oracle it is given. Because of compactness of Cantor space, this is equivalent to saying that the reduction presents a single list of questions (depending only on the input) to the oracle simultaneously, and then having seen their answers is able to produce an output without asking additional questions regardless of the oracle's answer to the initial queries. Many variants of truth-table reducibility have also been studied. Further reducibilities (positive, disjunctive, conjunctive, linear and their weak and bounded versions) are discussed in the article Reduction (computability theory). 
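The distinction drawn above between truth-table reducibility and Turing reducibility can be illustrated with a short sketch: a truth-table reduction must commit, from the input alone, to a finite list of oracle queries and a Boolean combination of the answers. The sets A and B below are toy choices invented for this example.

```python
from typing import Callable, List

def tt_reduction(x: int):
    """Produce the query list and truth table *before* seeing any oracle answers.

    Here A = { x : 2x in B or 2x+1 in B }, so membership in A is a fixed
    Boolean combination (OR) of two B-queries computed from x alone.
    """
    queries: List[int] = [2 * x, 2 * x + 1]
    table: Callable[[List[bool]], bool] = lambda answers: answers[0] or answers[1]
    return queries, table

def decide_A(x: int, oracle_for_B: Callable[[int], bool]) -> bool:
    queries, table = tt_reduction(x)
    return table([oracle_for_B(q) for q in queries])

# Toy oracle: B is the set of multiples of 3 (any set, computable or not, would do).
print(decide_A(1, lambda n: n % 3 == 0))   # queries 2 and 3 -> True
print(decide_A(2, lambda n: n % 3 == 0))   # queries 4 and 5 -> False
```

A Turing reduction, by contrast, may choose each query depending on the oracle's previous answers and need not terminate for every oracle.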
The major research on strong reducibilities has been to compare their theories, both for the class of all computably enumerable sets as well as for the class of all subsets of the natural numbers. Furthermore, the relations between the reducibilities has been studied. For example, it is known that every Turing degree is either a truth-table degree or is the union of infinitely many truth-table degrees. Reducibilities weaker than Turing reducibility (that is, reducibilities that are implied by Turing reducibility) have also been studied. The most well known are arithmetical reducibility and hyperarithmetical reducibility. These reducibilities are closely connected to definability over the standard model of arithmetic. Rice's theorem and the arithmetical hierarchy Rice showed that for every nontrivial class C (which contains some but not all c.e. sets) the index set E = {e: the eth c.e. set We is in C} has the property that either the halting problem or its complement is many-one reducible to E, that is, can be mapped using a many-one reduction to E (see Rice's theorem for more detail). But, many of these index sets are even more complicated than the halting problem. These type of sets can be classified using the arithmetical hierarchy. For example, the index set FIN of the class of all finite sets is on the level Σ2, the index set REC of the class of all recursive sets is on the level Σ3, the index set COFIN of all cofinite sets is also on the level Σ3 and the index set COMP of the class of all Turing-complete sets Σ4. These hierarchy levels are defined inductively, Σn+1 contains just all sets which are computably enumerable relative to Σn; Σ1 contains the computably enumerable sets. The index sets given here are even complete for their levels, that is, all the sets in these levels can be many-one reduced to the given index sets. Reverse mathematics The program of reverse mathematics asks which set-existence axioms are necessary to prove particular theorems of mathematics in subsystems of second-order arithmetic. This study was initiated by Harvey Friedman and was studied in detail by Stephen Simpson and others; in 1999, Simpson gave a detailed discussion of the program. The set-existence axioms in question correspond informally to axioms saying that the powerset of the natural numbers is closed under various reducibility notions. The weakest such axiom studied in reverse mathematics is recursive comprehension, which states that the powerset of the naturals is closed under Turing reducibility. Numberings A numbering is an enumeration of functions; it has two parameters, e and x and outputs the value of the e-th function in the numbering on the input x. Numberings can be partial-computable although some of its members are total computable functions. Admissible numberings are those into which all others can be translated. A Friedberg numbering (named after its discoverer) is a one-one numbering of all partial-computable functions; it is necessarily not an admissible numbering. Later research dealt also with numberings of other classes like classes of computably enumerable sets. Goncharov discovered for example a class of computably enumerable sets for which the numberings fall into exactly two classes with respect to computable isomorphisms. The priority method Post's problem was solved with a method called the priority method; a proof using this method is called a priority argument. This method is primarily used to construct computably enumerable sets with particular properties. 
To use this method, the desired properties of the set to be constructed are broken up into an infinite list of goals, known as requirements, so that satisfying all the requirements will cause the set constructed to have the desired properties. Each requirement is assigned to a natural number representing the priority of the requirement; so 0 is assigned to the most important priority, 1 to the second most important, and so on. The set is then constructed in stages, each stage attempting to satisfy one of more of the requirements by either adding numbers to the set or banning numbers from the set so that the final set will satisfy the requirement. It may happen that satisfying one requirement will cause another to become unsatisfied; the priority order is used to decide what to do in such an event. Priority arguments have been employed to solve many problems in computability theory, and have been classified into a hierarchy based on their complexity. Because complex priority arguments can be technical and difficult to follow, it has traditionally been considered desirable to prove results without priority arguments, or to see if results proved with priority arguments can also be proved without them. For example, Kummer published a paper on a proof for the existence of Friedberg numberings without using the priority method. The lattice of computably enumerable sets When Post defined the notion of a simple set as a c.e. set with an infinite complement not containing any infinite c.e. set, he started to study the structure of the computably enumerable sets under inclusion. This lattice became a well-studied structure. Computable sets can be defined in this structure by the basic result that a set is computable if and only if the set and its complement are both computably enumerable. Infinite c.e. sets have always infinite computable subsets; but on the other hand, simple sets exist but do not always have a coinfinite computable superset. Post introduced already hypersimple and hyperhypersimple sets; later maximal sets were constructed which are c.e. sets such that every c.e. superset is either a finite variant of the given maximal set or is co-finite. Post's original motivation in the study of this lattice was to find a structural notion such that every set which satisfies this property is neither in the Turing degree of the computable sets nor in the Turing degree of the halting problem. Post did not find such a property and the solution to his problem applied priority methods instead; in 1991, Harrington and Soare found eventually such a property. Automorphism problems Another important question is the existence of automorphisms in computability-theoretic structures. One of these structures is that one of computably enumerable sets under inclusion modulo finite difference; in this structure, A is below B if and only if the set difference B − A is finite. Maximal sets (as defined in the previous paragraph) have the property that they cannot be automorphic to non-maximal sets, that is, if there is an automorphism of the computably enumerable sets under the structure just mentioned, then every maximal set is mapped to another maximal set. In 1974, Soare showed that also the converse holds, that is, every two maximal sets are automorphic. So the maximal sets form an orbit, that is, every automorphism preserves maximality and any two maximal sets are transformed into each other by some automorphism. 
Harrington gave a further example of an automorphic property: that of the creative sets, the sets which are many-one equivalent to the halting problem. Besides the lattice of computably enumerable sets, automorphisms are also studied for the structure of the Turing degrees of all sets as well as for the structure of the Turing degrees of c.e. sets. In both cases, Cooper claims to have constructed nontrivial automorphisms which map some degrees to other degrees; this construction has, however, not been verified and some colleagues believe that the construction contains errors and that the question of whether there is a nontrivial automorphism of the Turing degrees is still one of the main unsolved questions in this area. Kolmogorov complexity The field of Kolmogorov complexity and algorithmic randomness was developed during the 1960s and 1970s by Chaitin, Kolmogorov, Levin, Martin-Löf and Solomonoff (the names are given here in alphabetical order; much of the research was independent, and the unity of the concept of randomness was not understood at the time). The main idea is to consider a universal Turing machine U and to measure the complexity of a number (or string) x as the length of the shortest input p such that U(p) outputs x. This approach revolutionized earlier ways to determine when an infinite sequence (equivalently, characteristic function of a subset of the natural numbers) is random or not by invoking a notion of randomness for finite objects. Kolmogorov complexity became not only a subject of independent study but is also applied to other subjects as a tool for obtaining proofs. There are still many open problems in this area. Frequency computation This branch of computability theory analyzed the following question: For fixed m and n with 0 < m < n, for which functions A is it possible to compute for any different n inputs x1, x2, ..., xn a tuple of n numbers y1, y2, ..., yn such that at least m of the equations A(xk) = yk are true. Such sets are known as (m, n)-recursive sets. The first major result in this branch of computability theory is Trakhtenbrot's result that a set is computable if it is (m, n)-recursive for some m, n with 2m > n. On the other hand, Jockusch's semirecursive sets (which were already known informally before Jockusch introduced them 1968) are examples of a set which is (m, n)-recursive if and only if 2m < n + 1. There are uncountably many of these sets and also some computably enumerable but noncomputable sets of this type. Later, Degtev established a hierarchy of computably enumerable sets that are (1, n + 1)-recursive but not (1, n)-recursive. After a long phase of research by Russian scientists, this subject became repopularized in the west by Beigel's thesis on bounded queries, which linked frequency computation to the above-mentioned bounded reducibilities and other related notions. One of the major results was Kummer's Cardinality Theory which states that a set A is computable if and only if there is an n such that some algorithm enumerates for each tuple of n different numbers up to n many possible choices of the cardinality of this set of n numbers intersected with A; these choices must contain the true cardinality but leave out at least one false one. Inductive inference This is the computability-theoretic branch of learning theory. It is based on E. Mark Gold's model of learning in the limit from 1967 and has developed since then more and more models of learning. 
The general scenario is the following: Given a class S of computable functions, is there a learner (that is, computable functional) which outputs for any input of the form (f(0), f(1), ..., f(n)) a hypothesis. A learner M learns a function f if almost all hypotheses are the same index e of f with respect to a previously agreed on acceptable numbering of all computable functions; M learns S if M learns every f in S. Basic results are that all computably enumerable classes of functions are learnable while the class REC of all computable functions is not learnable. Many related models have been considered and also the learning of classes of computably enumerable sets from positive data is a topic studied from Gold's pioneering paper in 1967 onwards. Generalizations of Turing computability Computability theory includes the study of generalized notions of this field such as arithmetic reducibility, hyperarithmetical reducibility and α-recursion theory, as described by Sacks in 1990. These generalized notions include reducibilities that cannot be executed by Turing machines but are nevertheless natural generalizations of Turing reducibility. These studies include approaches to investigate the analytical hierarchy which differs from the arithmetical hierarchy by permitting quantification over sets of natural numbers in addition to quantification over individual numbers. These areas are linked to the theories of well-orderings and trees; for example the set of all indices of computable (nonbinary) trees without infinite branches is complete for level of the analytical hierarchy. Both Turing reducibility and hyperarithmetical reducibility are important in the field of effective descriptive set theory. The even more general notion of degrees of constructibility is studied in set theory. Continuous computability theory Computability theory for digital computation is well developed. Computability theory is less well developed for analog computation that occurs in analog computers, analog signal processing, analog electronics, artificial neural networks and continuous-time control theory, modelled by differential equations and continuous dynamical systems. For example, models of computation such as the Blum–Shub–Smale machine model have formalized computation on the reals. Relationships between definability, proof and computability There are close relationships between the Turing degree of a set of natural numbers and the difficulty (in terms of the arithmetical hierarchy) of defining that set using a first-order formula. One such relationship is made precise by Post's theorem. A weaker relationship was demonstrated by Kurt Gödel in the proofs of his completeness theorem and incompleteness theorems. Gödel's proofs show that the set of logical consequences of an effective first-order theory is a computably enumerable set, and that if the theory is strong enough this set will be uncomputable. Similarly, Tarski's indefinability theorem can be interpreted both in terms of definability and in terms of computability. Computability theory is also linked to second-order arithmetic, a formal theory of natural numbers and sets of natural numbers. The fact that certain sets are computable or relatively computable often implies that these sets can be defined in weak subsystems of second-order arithmetic. The program of reverse mathematics uses these subsystems to measure the non-computability inherent in well known mathematical theorems. 
In 1999, Simpson discussed many aspects of second-order arithmetic and reverse mathematics. The field of proof theory includes the study of second-order arithmetic and Peano arithmetic, as well as formal theories of the natural numbers weaker than Peano arithmetic. One method of classifying the strength of these weak systems is by characterizing which computable functions the system can prove to be total. For example, in primitive recursive arithmetic any computable function that is provably total is actually primitive recursive, while Peano arithmetic proves that functions like the Ackermann function, which are not primitive recursive, are total. Not every total computable function is provably total in Peano arithmetic, however; an example of such a function is provided by Goodstein's theorem. Name The field of mathematical logic dealing with computability and its generalizations has been called "recursion theory" since its early days. Robert I. Soare, a prominent researcher in the field, has proposed that the field should be called "computability theory" instead. He argues that Turing's terminology using the word "computable" is more natural and more widely understood than the terminology using the word "recursive" introduced by Kleene. Many contemporary researchers have begun to use this alternate terminology. These researchers also use terminology such as partial computable function and computably enumerable (c.e.) set instead of partial recursive function and recursively enumerable (r.e.) set. Not all researchers have been convinced, however, as explained by Fortnow and Simpson. Some commentators argue that both the names recursion theory and computability theory fail to convey the fact that most of the objects studied in computability theory are not computable. In 1967, Rogers has suggested that a key property of computability theory is that its results and structures should be invariant under computable bijections on the natural numbers (this suggestion draws on the ideas of the Erlangen program in geometry). The idea is that a computable bijection merely renames numbers in a set, rather than indicating any structure in the set, much as a rotation of the Euclidean plane does not change any geometric aspect of lines drawn on it. Since any two infinite computable sets are linked by a computable bijection, this proposal identifies all the infinite computable sets (the finite computable sets are viewed as trivial). According to Rogers, the sets of interest in computability theory are the noncomputable sets, partitioned into equivalence classes by computable bijections of the natural numbers. Professional organizations The main professional organization for computability theory is the Association for Symbolic Logic, which holds several research conferences each year. The interdisciplinary research Association Computability in Europe (CiE) also organizes a series of annual conferences. See also Recursion (computer science) Computability logic Transcomputational problem Notes References Further reading Undergraduate level texts Advanced texts Survey papers and collections Research papers and collections Reprinted in . External links Association for Symbolic Logic homepage Computability in Europe homepage Webpage on Recursion Theory Course at Graduate Level with approximately 100 pages of lecture notes German language lecture notes on inductive inference C
Computability theory
[ "Mathematics" ]
6,605
[ "Computability theory", "Mathematical logic" ]
155,430
https://en.wikipedia.org/wiki/Kleene%20algebra
In mathematics, a Kleene algebra ( ; named after Stephen Cole Kleene) is an idempotent (and thus partially ordered) semiring endowed with a closure operator. It generalizes the operations known from regular expressions. Definition Various inequivalent definitions of Kleene algebras and related structures have been given in the literature. Here we will give the definition that seems to be the most common nowadays. A Kleene algebra is a set A together with two binary operations + : A × A → A and · : A × A → A and one function * : A → A, written as a + b, ab and a* respectively, so that the following axioms are satisfied. Associativity of + and ·: a + (b + c) = (a + b) + c and a(bc) = (ab)c for all a, b, c in A. Commutativity of +: a + b = b + a for all a, b in A Distributivity: a(b + c) = (ab) + (ac) and (b + c)a = (ba) + (ca) for all a, b, c in A Identity elements for + and ·: There exists an element 0 in A such that for all a in A: a + 0 = 0 + a = a. There exists an element 1 in A such that for all a in A: a1 = 1a = a. Annihilation by 0: a0 = 0a = 0 for all a in A. The above axioms define a semiring. We further require: + is idempotent: a + a = a for all a in A. It is now possible to define a partial order ≤ on A by setting a ≤ b if and only if a + b = b (or equivalently: a ≤ b if and only if there exists an x in A such that a + x = b; with any definition, a ≤ b ≤ a implies a = b). With this order we can formulate the last four axioms about the operation *: 1 + a(a*) ≤ a* for all a in A. 1 + (a*)a ≤ a* for all a in A. if a and x are in A such that ax ≤ x, then a*x ≤ x if a and x are in A such that xa ≤ x, then x(a*) ≤ x Intuitively, one should think of a + b as the "union" or the "least upper bound" of a and b and of ab as some multiplication which is monotonic, in the sense that a ≤ b implies ax ≤ bx. The idea behind the star operator is a* = 1 + a + aa + aaa + ... From the standpoint of programming language theory, one may also interpret + as "choice", · as "sequencing" and * as "iteration". Examples Let Σ be a finite set (an "alphabet") and let A be the set of all regular expressions over Σ. We consider two such regular expressions equal if they describe the same language. Then A forms a Kleene algebra. In fact, this is a free Kleene algebra in the sense that any equation among regular expressions follows from the Kleene algebra axioms and is therefore valid in every Kleene algebra. Again let Σ be an alphabet. Let A be the set of all regular languages over Σ (or the set of all context-free languages over Σ; or the set of all recursive languages over Σ; or the set of all languages over Σ). Then the union (written as +) and the concatenation (written as ·) of two elements of A again belong to A, and so does the Kleene star operation applied to any element of A. We obtain a Kleene algebra A with 0 being the empty set and 1 being the set that only contains the empty string. Let M be a monoid with identity element e and let A be the set of all subsets of M. For two such subsets S and T, let S + T be the union of S and T and set ST = {st : s in S and t in T}. S* is defined as the submonoid of M generated by S, which can be described as {e} ∪ S ∪ SS ∪ SSS ∪ ... Then A forms a Kleene algebra with 0 being the empty set and 1 being {e}. An analogous construction can be performed for any small category. The linear subspaces of a unital algebra over a field form a Kleene algebra. Given linear subspaces V and W, define V + W to be the sum of the two subspaces, and 0 to be the trivial subspace {0}. 
Define VW, the linear span of the products vw of vectors v from V and w from W. Define 1, the span of the unit of the algebra. The closure of V is the direct sum of all powers of V. Suppose M is a set and A is the set of all binary relations on M. Taking + to be the union, · to be the composition and * to be the reflexive transitive closure, we obtain a Kleene algebra. Every Boolean algebra with operations ∨ and ∧ turns into a Kleene algebra if we use ∨ for +, ∧ for · and set a* = 1 for all a. A quite different Kleene algebra can be used to implement the Floyd–Warshall algorithm, computing the shortest path's length for every two vertices of a weighted directed graph, by Kleene's algorithm, computing a regular expression for every two states of a deterministic finite automaton. Using the extended real number line, take a + b to be the minimum of a and b and ab to be the ordinary sum of a and b (with the sum of +∞ and −∞ being defined as +∞). a* is defined to be the real number zero for nonnegative a and −∞ for negative a. This is a Kleene algebra with zero element +∞ and one element the real number zero. A weighted directed graph can then be considered as a deterministic finite automaton, with each transition labelled by its weight. For any two graph nodes (automaton states), the regular expressions computed by Kleene's algorithm evaluate, in this particular Kleene algebra, to the shortest path length between the nodes. Properties Zero is the smallest element: 0 ≤ a for all a in A. The sum a + b is the least upper bound of a and b: we have a ≤ a + b and b ≤ a + b and if x is an element of A with a ≤ x and b ≤ x, then a + b ≤ x. Similarly, a1 + ... + an is the least upper bound of the elements a1, ..., an. Multiplication and addition are monotonic: if a ≤ b, then a + x ≤ b + x, ax ≤ bx, and xa ≤ xb for all x in A. Regarding the star operation, we have 0* = 1 and 1* = 1, a ≤ b implies a* ≤ b* (monotonicity), aⁿ ≤ a* for every natural number n, where aⁿ is defined as the n-fold product of a, (a*)(a*) = a*, (a*)* = a*, 1 + a(a*) = a* = 1 + (a*)a, ax = xb implies (a*)x = x(b*), ((ab)*)a = a((ba)*), (a+b)* = a*(b(a*))*, and pq = 1 = qp implies q(a*)p = (qap)*. If A is a Kleene algebra and n is a natural number, then one can consider the set Mn(A) consisting of all n-by-n matrices with entries in A. Using the ordinary notions of matrix addition and multiplication, one can define a unique *-operation so that Mn(A) becomes a Kleene algebra. History Kleene introduced regular expressions and gave some of their algebraic laws. Although he didn't define Kleene algebras, he asked for a decision procedure for equivalence of regular expressions. Redko proved that no finite set of equational axioms can characterize the algebra of regular languages. Salomaa gave complete axiomatizations of this algebra, which however depended on problematic inference rules. The problem of providing a complete set of axioms, which would allow derivation of all equations among regular expressions, was intensively studied by John Horton Conway under the name of regular algebras; however, the bulk of his treatment was infinitary. In 1981, Kozen gave a complete infinitary equational deductive system for the algebra of regular languages. 
In 1994, he gave the above finite axiom system, which uses unconditional and conditional equalities (considering a ≤ b as an abbreviation for a + b = b), and is equationally complete for the algebra of regular languages, that is, two regular expressions a and b denote the same language only if a = b follows from the above axioms. Generalization (or relation to other structures) Kleene algebras are a particular case of closed semirings, also called quasi-regular semirings or Lehmann semirings, which are semirings in which every element has at least one quasi-inverse satisfying the equation: a* = aa* + 1 = a*a + 1. This quasi-inverse is not necessarily unique. In a Kleene algebra, a* is the least solution to the fixpoint equations: X = aX + 1 and X = Xa + 1. Closed semirings and Kleene algebras appear in algebraic path problems, a generalization of the shortest path problem. See also Action algebra Algebraic structure Kleene star Regular expression Star semiring Valuation algebra References Further reading The introduction of this book reviews advances in the field of Kleene algebra made in the last 20 years, which are not discussed in the article above. Algebraic structures Algebraic logic Formal languages Many-valued logic
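The min-plus example and the matrix construction Mn(A) described above are what connect Kleene algebras to the algebraic path problems mentioned in the last section. The following Python sketch is a minimal illustration of that connection, not code from any of the referenced works: it assumes non-negative edge weights (so the scalar star is always 0), represents the algebra's zero element by float("inf"), and computes the entries of W* by the Floyd–Warshall-style elimination of intermediate vertices.

```python
INF = float("inf")  # the Kleene algebra's zero element in the min-plus reading

def matrix_star(w):
    """Entries of W* over the min-plus algebra = all-pairs shortest path lengths.

    w[i][j] is the direct edge weight from vertex i to vertex j (INF if there
    is no edge). Non-negative weights are assumed, so a* = 0 for every scalar
    and the simple recurrence below suffices.
    """
    n = len(w)
    # Start from 1 + W: the multiplicative identity has 0 on the diagonal
    # (the empty path) and INF elsewhere; "+" of the algebra is min().
    d = [[min(0.0 if i == j else INF, w[i][j]) for j in range(n)] for i in range(n)]
    # Allow intermediate vertices one at a time (Kleene / Floyd-Warshall step);
    # the algebra's "·" is ordinary addition of path lengths.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

# A small directed graph as a weight matrix; entry [0][2] of the result is 5,
# the length of the path 0 -> 1 -> 2, which beats the direct edge of weight 8.
W = [
    [INF, 3, 8],
    [INF, INF, 2],
    [1, INF, INF],
]
print(matrix_star(W))
```

The design point is that the triple loop is just Kleene's algorithm with "+" read as min and "·" read as addition of weights.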
Kleene algebra
[ "Mathematics" ]
2,152
[ "Mathematical structures", "Mathematical logic", "Formal languages", "Mathematical objects", "Fields of abstract algebra", "Algebraic logic", "Algebraic structures" ]
155,443
https://en.wikipedia.org/wiki/Corrosion
Corrosion is a natural process that converts a refined metal into a more chemically stable oxide. It is the gradual deterioration of materials (usually a metal) by chemical or electrochemical reaction with their environment. Corrosion engineering is the field dedicated to controlling and preventing corrosion. In the most common use of the word, this means electrochemical oxidation of metal in reaction with an oxidant such as oxygen, hydrogen, or hydroxide. Rusting, the formation of red-orange iron oxides, is a well-known example of electrochemical corrosion. This type of corrosion typically produces oxides or salts of the original metal and results in a distinctive coloration. Corrosion can also occur in materials other than metals, such as ceramics or polymers, although in this context, the term "degradation" is more common. Corrosion degrades the useful properties of materials and structures including mechanical strength, appearance, and permeability to liquids and gases. Corrosive is distinguished from caustic: the former implies mechanical degradation, the latter chemical. Many structural alloys corrode merely from exposure to moisture in air, but the process can be strongly affected by exposure to certain substances. Corrosion can be concentrated locally to form a pit or crack, or it can extend across a wide area, more or less uniformly corroding the surface. Because corrosion is a diffusion-controlled process, it occurs on exposed surfaces. As a result, methods to reduce the activity of the exposed surface, such as passivation and chromate conversion, can increase a material's corrosion resistance. However, some corrosion mechanisms are less visible and less predictable. The chemistry of corrosion is complex; it can be considered an electrochemical phenomenon. During corrosion at a particular spot on the surface of an object made of iron, oxidation takes place and that spot behaves as an anode. The electrons released at this anodic spot move through the metal to another spot on the object, and reduce oxygen at that spot in presence of H+ (which is believed to be available from carbonic acid () formed due to dissolution of carbon dioxide from air into water in moist air condition of atmosphere. Hydrogen ion in water may also be available due to dissolution of other acidic oxides from the atmosphere). This spot behaves as a cathode. Galvanic corrosion Galvanic corrosion occurs when two different metals have physical or electrical contact with each other and are immersed in a common electrolyte, or when the same metal is exposed to electrolyte with different concentrations. In a galvanic couple, the more active metal (the anode) corrodes at an accelerated rate and the more noble metal (the cathode) corrodes at a slower rate. When immersed separately, each metal corrodes at its own rate. What type of metal(s) to use is readily determined by following the galvanic series. For example, zinc is often used as a sacrificial anode for steel structures. Galvanic corrosion is of major interest to the marine industry and also anywhere water (containing salts) contacts pipes or metal structures. Factors such as relative size of anode, types of metal, and operating conditions (temperature, humidity, salinity, etc.) affect galvanic corrosion. The surface area ratio of the anode and cathode directly affects the corrosion rates of the materials. Galvanic corrosion is often prevented by the use of sacrificial anodes. 
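As a rough, illustrative reading of the galvanic series idea above (and only that), the sketch below pairs two metals and reports which one becomes the anode. It uses standard electrode potentials as a stand-in for a galvanic series measured in the actual service environment, which is what practitioners consult; the numerical values are common textbook figures included purely for demonstration.

```python
# Toy helper for the galvanic-couple behaviour described above: the metal with
# the more negative potential becomes the anode and corrodes preferentially.
# Standard electrode potentials (V vs. SHE) stand in here for a galvanic series
# measured in the service environment (e.g. aerated seawater).
STANDARD_POTENTIALS_V = {
    "magnesium": -2.37,
    "zinc":      -0.76,
    "iron":      -0.44,
    "copper":    +0.34,
}

def galvanic_couple(metal_a, metal_b):
    """Return (anode, cathode) for two coupled metals in a common electrolyte."""
    pa = STANDARD_POTENTIALS_V[metal_a]
    pb = STANDARD_POTENTIALS_V[metal_b]
    return (metal_a, metal_b) if pa < pb else (metal_b, metal_a)

# Example: why zinc works as a sacrificial anode for steel structures.
anode, cathode = galvanic_couple("zinc", "iron")
print(f"{anode} corrodes preferentially, protecting {cathode}")
```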
Galvanic series In any given environment (one standard medium is aerated, room-temperature seawater), one metal will be either more noble or more active than others, based on how strongly its ions are bound to the surface. Two metals in electrical contact share the same electrons, so that the "tug-of-war" at each surface is analogous to competition for free electrons between the two materials. Using the electrolyte as a host for the flow of ions in the same direction, the noble metal will take electrons from the active one. The resulting mass flow or electric current can be measured to establish a hierarchy of materials in the medium of interest. This hierarchy is called a galvanic series and is useful in predicting and understanding corrosion. Corrosion removal Often, it is possible to chemically remove the products of corrosion. For example, phosphoric acid in the form of naval jelly is often applied to ferrous tools or surfaces to remove rust. Corrosion removal should not be confused with electropolishing, which removes some layers of the underlying metal to make a smooth surface. For example, phosphoric acid may also be used to electropolish copper but it does this by removing copper, not the products of copper corrosion. Resistance to corrosion Some metals are more intrinsically resistant to corrosion than others (for some examples, see galvanic series). There are various ways of protecting metals from corrosion (oxidation) including painting, hot-dip galvanization, cathodic protection, and combinations of these. Intrinsic chemistry The materials most resistant to corrosion are those for which corrosion is thermodynamically unfavorable. Any corrosion products of gold or platinum tend to decompose spontaneously into pure metal, which is why these elements can be found in metallic form on Earth and have long been valued. More common "base" metals can only be protected by more temporary means. Some metals have naturally slow reaction kinetics, even though their corrosion is thermodynamically favorable. These include such metals as zinc, magnesium, and cadmium. While corrosion of these metals is continuous and ongoing, it happens at an acceptably slow rate. An extreme example is graphite, which releases large amounts of energy upon oxidation, but has such slow kinetics that it is effectively immune to electrochemical corrosion under normal conditions. Passivation Passivation refers to the spontaneous formation of an ultrathin film of corrosion products, known as a passive film, on the metal's surface that act as a barrier to further oxidation. The chemical composition and microstructure of a passive film are different from the underlying metal. Typical passive film thickness on aluminium, stainless steels, and alloys is within 10 nanometers. The passive film is different from oxide layers that are formed upon heating and are in the micrometer thickness range – the passive film recovers if removed or damaged whereas the oxide layer does not. Passivation in natural environments such as air, water and soil at moderate pH is seen in such materials as aluminium, stainless steel, titanium, and silicon. Passivation is primarily determined by metallurgical and environmental factors. The effect of pH is summarized using Pourbaix diagrams, but many other factors are influential. 
Some conditions that inhibit passivation include high pH for aluminium and zinc, low pH or the presence of chloride ions for stainless steel, high temperature for titanium (in which case the oxide dissolves into the metal, rather than the electrolyte) and fluoride ions for silicon. On the other hand, unusual conditions may result in passivation of materials that are normally unprotected, as the alkaline environment of concrete does for steel rebar. Exposure to a liquid metal such as mercury or hot solder can often circumvent passivation mechanisms. It has been shown using electrochemical scanning tunneling microscopy that during iron passivation, an n-type semiconductor Fe(III) oxide grows at the interface with the metal that leads to the buildup of an electronic barrier opposing electron flow and an electronic depletion region that prevents further oxidation reactions. These results indicate a mechanism of "electronic passivation". The electronic properties of this semiconducting oxide film also provide a mechanistic explanation of corrosion mediated by chloride, which creates surface states at the oxide surface that lead to electronic breakthrough, restoration of anodic currents, and disruption of the electronic passivation mechanism. Corrosion in passivated materials Passivation is extremely useful in mitigating corrosion damage, however even a high-quality alloy will corrode if its ability to form a passivating film is hindered. Proper selection of the right grade of material for the specific environment is important for the long-lasting performance of this group of materials. If breakdown occurs in the passive film due to chemical or mechanical factors, the resulting major modes of corrosion may include pitting corrosion, crevice corrosion, and stress corrosion cracking. Pitting corrosion Certain conditions, such as low concentrations of oxygen or high concentrations of species such as chloride which compete as anions, can interfere with a given alloy's ability to re-form a passivating film. In the worst case, almost all of the surface will remain protected, but tiny local fluctuations will degrade the oxide film in a few critical points. Corrosion at these points will be greatly amplified, and can cause corrosion pits of several types, depending upon conditions. While the corrosion pits only nucleate under fairly extreme circumstances, they can continue to grow even when conditions return to normal, since the interior of a pit is naturally deprived of oxygen and locally the pH decreases to very low values and the corrosion rate increases due to an autocatalytic process. In extreme cases, the sharp tips of extremely long and narrow corrosion pits can cause stress concentration to the point that otherwise tough alloys can shatter; a thin film pierced by an invisibly small hole can hide a thumb sized pit from view. These problems are especially dangerous because they are difficult to detect before a part or structure fails. Pitting remains among the most common and damaging forms of corrosion in passivated alloys, but it can be prevented by control of the alloy's environment. Pitting results when a small hole, or cavity, forms in the metal, usually as a result of de-passivation of a small area. This area becomes anodic, while part of the remaining metal becomes cathodic, producing a localized galvanic reaction. The deterioration of this small area penetrates the metal and can lead to failure. 
This form of corrosion is often difficult to detect due to the fact that it is usually relatively small and may be covered and hidden by corrosion-produced compounds. Weld decay and knifeline attack Stainless steel can pose special corrosion challenges, since its passivating behavior relies on the presence of a major alloying component (chromium, at least 11.5%). Because of the elevated temperatures of welding and heat treatment, chromium carbides can form in the grain boundaries of stainless alloys. This chemical reaction robs the material of chromium in the zone near the grain boundary, making those areas much less resistant to corrosion. This creates a galvanic couple with the well-protected alloy nearby, which leads to "weld decay" (corrosion of the grain boundaries in the heat affected zones) in highly corrosive environments. This process can seriously reduce the mechanical strength of welded joints over time. A stainless steel is said to be "sensitized" if chromium carbides are formed in the microstructure. A typical microstructure of a normalized type 304 stainless steel shows no signs of sensitization, while a heavily sensitized steel shows the presence of grain boundary precipitates. The dark lines in the sensitized microstructure are networks of chromium carbides formed along the grain boundaries. Special alloys, either with low carbon content or with added carbon "getters" such as titanium and niobium (in types 321 and 347, respectively), can prevent this effect, but the latter require special heat treatment after welding to prevent the similar phenomenon of "knifeline attack". As its name implies, corrosion is limited to a very narrow zone adjacent to the weld, often only a few micrometers across, making it even less noticeable. Crevice corrosion Crevice corrosion is a localized form of corrosion occurring in confined spaces (crevices), to which the access of the working fluid from the environment is limited. Formation of a differential aeration cell leads to corrosion inside the crevices. Examples of crevices are gaps and contact areas between parts, under gaskets or seals, inside cracks and seams, spaces filled with deposits, and under sludge piles. Crevice corrosion is influenced by the crevice type (metal-metal, metal-non-metal), crevice geometry (size, surface finish), and metallurgical and environmental factors. The susceptibility to crevice corrosion can be evaluated with ASTM standard procedures. A critical crevice corrosion temperature is commonly used to rank a material's resistance to crevice corrosion. Hydrogen grooving In the chemical industry, hydrogen grooving is the corrosion of piping at grooves created by the interaction of a corrosive agent, corroded pipe constituents, and hydrogen gas bubbles. For example, when sulfuric acid () flows through steel pipes, the iron in the steel reacts with the acid to form a passivation coating of iron sulfate () and hydrogen gas (). The iron sulfate coating will protect the steel from further reaction; however, if hydrogen bubbles contact this coating, it will be removed. Thus, a groove can be formed by a travelling bubble, exposing more steel to the acid, causing a vicious cycle. The grooving is exacerbated by the tendency of subsequent bubbles to follow the same path. High-temperature corrosion High-temperature corrosion is chemical deterioration of a material (typically a metal) as a result of heating. 
This non-galvanic form of corrosion can occur when a metal is subjected to a hot atmosphere containing oxygen, sulfur ("sulfidation"), or other compounds capable of oxidizing (or assisting the oxidation of) the material concerned. For example, materials used in aerospace, power generation, and even in car engines must resist sustained periods at high temperature, during which they may be exposed to an atmosphere containing the potentially highly-corrosive products of combustion. Some products of high-temperature corrosion can potentially be turned to the advantage of the engineer. The formation of oxides on stainless steels, for example, can provide a protective layer preventing further atmospheric attack, allowing for a material to be used for sustained periods at both room and high temperatures in hostile conditions. Such high-temperature corrosion products, in the form of compacted oxide layer glazes, prevent or reduce wear during high-temperature sliding contact of metallic (or metallic and ceramic) surfaces. Thermal oxidation is also commonly used to produce controlled oxide nanostructures, including nanowires and thin films. Microbial corrosion Microbial corrosion, or commonly known as microbiologically influenced corrosion (MIC), is a corrosion caused or promoted by microorganisms, usually chemoautotrophs. It can apply to both metallic and non-metallic materials, in the presence or absence of oxygen. Sulfate-reducing bacteria are active in the absence of oxygen (anaerobic); they produce hydrogen sulfide, causing sulfide stress cracking. In the presence of oxygen (aerobic), some bacteria may directly oxidize iron to iron oxides and hydroxides, other bacteria oxidize sulfur and produce sulfuric acid causing biogenic sulfide corrosion. Concentration cells can form in the deposits of corrosion products, leading to localized corrosion. Accelerated low-water corrosion (ALWC) is a particularly aggressive form of MIC that affects steel piles in seawater near the low water tide mark. It is characterized by an orange sludge, which smells of hydrogen sulfide when treated with acid. Corrosion rates can be very high and design corrosion allowances can soon be exceeded leading to premature failure of the steel pile. Piles that have been coated and have cathodic protection installed at the time of construction are not susceptible to ALWC. For unprotected piles, sacrificial anodes can be installed locally to the affected areas to inhibit the corrosion or a complete retrofitted sacrificial anode system can be installed. Affected areas can also be treated using cathodic protection, using either sacrificial anodes or applying current to an inert anode to produce a calcareous deposit, which will help shield the metal from further attack. Metal dusting Metal dusting is a catastrophic form of corrosion that occurs when susceptible materials are exposed to environments with high carbon activities, such as synthesis gas and other high-CO environments. The corrosion manifests itself as a break-up of bulk metal to metal powder. The suspected mechanism is firstly the deposition of a graphite layer on the surface of the metal, usually from carbon monoxide (CO) in the vapor phase. This graphite layer is then thought to form metastable M3C species (where M is the metal), which migrate away from the metal surface. However, in some regimes, no M3C species is observed indicating a direct transfer of metal atoms into the graphite layer. 
Protection from corrosion Various treatments are used to slow corrosion damage to metallic objects which are exposed to the weather, salt water, acids, or other hostile environments. Some unprotected metallic alloys are extremely vulnerable to corrosion, such as those used in neodymium magnets, which can spall or crumble into powder even in dry, temperature-stable indoor environments unless properly treated. Surface treatments When surface treatments are used to reduce corrosion, great care must be taken to ensure complete coverage, without gaps, cracks, or pinhole defects. Small defects can act as an "Achilles' heel", allowing corrosion to penetrate the interior and causing extensive damage even while the outer protective layer remains apparently intact for a period of time. Applied coatings Plating, painting, and the application of enamel are the most common anti-corrosion treatments. They work by providing a barrier of corrosion-resistant material between the damaging environment and the structural material. Aside from cosmetic and manufacturing issues, there may be tradeoffs in mechanical flexibility versus resistance to abrasion and high temperature. Platings usually fail only in small sections, but if the plating is more noble than the substrate (for example, chromium on steel), a galvanic couple will cause any exposed area to corrode much more rapidly than an unplated surface would. For this reason, it is often wise to plate with active metal such as zinc or cadmium. If the zinc coating is not thick enough the surface soon becomes unsightly with rusting obvious. The design life is directly related to the metal coating thickness. Painting either by roller or brush is more desirable for tight spaces; spray would be better for larger coating areas such as steel decks and waterfront applications. Flexible polyurethane coatings, like Durabak-M26 for example, can provide an anti-corrosive seal with a highly durable slip resistant membrane. Painted coatings are relatively easy to apply and have fast drying times although temperature and humidity may cause dry times to vary. Reactive coatings If the environment is controlled (especially in recirculating systems), corrosion inhibitors can often be added to it. These chemicals form an electrically insulating or chemically impermeable coating on exposed metal surfaces, to suppress electrochemical reactions. Such methods make the system less sensitive to scratches or defects in the coating, since extra inhibitors can be made available wherever metal becomes exposed. Chemicals that inhibit corrosion include some of the salts in hard water (Roman water systems are known for their mineral deposits), chromates, phosphates, polyaniline, other conducting polymers, and a wide range of specially designed chemicals that resemble surfactants (i.e., long-chain organic molecules with ionic end groups). Anodization Aluminium alloys often undergo a surface treatment. Electrochemical conditions in the bath are carefully adjusted so that uniform pores, several nanometers wide, appear in the metal's oxide film. These pores allow the oxide to grow much thicker than passivating conditions would allow. At the end of the treatment, the pores are allowed to seal, forming a harder-than-usual surface layer. If this coating is scratched, normal passivation processes take over to protect the damaged area. 
Anodizing is very resilient to weathering and corrosion, so it is commonly used for building facades and other areas where the surface will come into regular contact with the elements. While being resilient, it must be cleaned frequently. If left without cleaning, panel edge staining will naturally occur. Anodization is the process of converting an anode into cathode by bringing a more active anode in contact with it. Biofilm coatings A new form of protection has been developed by applying certain species of bacterial films to the surface of metals in highly corrosive environments. This process increases the corrosion resistance substantially. Alternatively, antimicrobial-producing biofilms can be used to inhibit mild steel corrosion from sulfate-reducing bacteria. Controlled permeability formwork Controlled permeability formwork (CPF) is a method of preventing the corrosion of reinforcement by naturally enhancing the durability of the cover during concrete placement. CPF has been used in environments to combat the effects of carbonation, chlorides, frost, and abrasion. Cathodic protection Cathodic protection (CP) is a technique to control the corrosion of a metal surface by making it the cathode of an electrochemical cell. Cathodic protection systems are most commonly used to protect steel pipelines and tanks; steel pier piles, ships, and offshore oil platforms. Sacrificial anode protection For effective CP, the potential of the steel surface is polarized (pushed) more negative until the metal surface has a uniform potential. With a uniform potential, the driving force for the corrosion reaction is halted. For galvanic CP systems, the anode material corrodes under the influence of the steel, and eventually it must be replaced. The polarization is caused by the current flow from the anode to the cathode, driven by the difference in electrode potential between the anode and the cathode. The most common sacrificial anode materials are aluminum, zinc, magnesium and related alloys. Aluminum has the highest capacity, and magnesium has the highest driving voltage and is thus used where resistance is higher. Zinc is general purpose and the basis for galvanizing. A number of problems are associated with sacrificial anodes. Among these, from an environmental perspective, is the release of zinc, magnesium, aluminum and heavy metals such as cadmium into the environment including seawater. From a working perspective, sacrificial anodes systems are considered to be less precise than modern cathodic protection systems such as Impressed Current Cathodic Protection (ICCP) systems. Their ability to provide requisite protection has to be checked regularly by means of underwater inspection by divers. Furthermore, as they have a finite lifespan, sacrificial anodes need to be replaced regularly over time. Impressed current cathodic protection For larger structures, galvanic anodes cannot economically deliver enough current to provide complete protection. Impressed current cathodic protection (ICCP) systems use anodes connected to a DC power source (such as a cathodic protection rectifier). Anodes for ICCP systems are tubular and solid rod shapes of various specialized materials. These include high silicon cast iron, graphite, mixed metal oxide or platinum coated titanium or niobium coated rod and wires. Anodic protection Anodic protection impresses anodic current on the structure to be protected (opposite to the cathodic protection). It is appropriate for metals that exhibit passivity (e.g. 
stainless steel) and suitably small passive current over a wide range of potentials. It is used in aggressive environments, such as solutions of sulfuric acid. Anodic protection is an electrochemical method of corrosion protection that keeps the metal in a passive state. Rate of corrosion The formation of an oxide layer is described by the Deal–Grove model, which is used to predict and control oxide layer formation in diverse situations. A simple test for measuring corrosion is the weight loss method. The method involves exposing a clean weighed piece of the metal or alloy to the corrosive environment for a specified time followed by cleaning to remove corrosion products and weighing the piece to determine the loss of weight. The rate of corrosion R is calculated as R = kW / (ρAt), where k is a constant, W is the weight loss of the metal in time t, A is the surface area of the metal exposed, and ρ is the density of the metal (in g/cm3). Other common expressions for the corrosion rate are penetration depth and change of mechanical properties. Economic impact In 2002, the US Federal Highway Administration released a study titled "Corrosion Costs and Preventive Strategies in the United States" on the direct costs associated with metallic corrosion in the US industry. In 1998, the total annual direct cost of corrosion in the US was roughly $276 billion (or 3.2% of the US gross domestic product at the time). Broken down into five specific industries, the economic losses are $22.6 billion in infrastructure, $17.6 billion in production and manufacturing, $29.7 billion in transportation, $20.1 billion in government, and $47.9 billion in utilities. Rust is one of the most common causes of bridge accidents. As rust displaces a much higher volume than the originating mass of iron, its build-up can also cause failure by forcing apart adjacent components. It was the cause of the collapse of the Mianus River Bridge in 1983, when support bearings rusted internally and pushed one corner of the road slab off its support. Three drivers on the roadway at the time died as the slab fell into the river below. The following NTSB investigation showed that a drain in the road had been blocked for road re-surfacing, and had not been unblocked; as a result, runoff water penetrated the support hangers. Rust was also an important factor in the Silver Bridge disaster of 1967 in West Virginia, when a steel suspension bridge collapsed within a minute, killing 46 drivers and passengers who were on the bridge at the time. Similarly, corrosion of concrete-covered steel and iron can cause the concrete to spall, creating severe structural problems. It is one of the most common failure modes of reinforced concrete bridges. Measuring instruments based on the half-cell potential can detect potential corrosion spots before total failure of the concrete structure is reached. Until 20–30 years ago, galvanized steel pipe was used extensively in the potable water systems for single and multi-family residences as well as commercial and public construction. Today, these systems have long since consumed the protective zinc and are corroding internally, resulting in poor water quality and pipe failures. The economic impact on homeowners, condo dwellers, and the public infrastructure is estimated at $22 billion as the insurance industry braces for a wave of claims due to pipe failures. Corrosion in nonmetals Most ceramic materials are almost entirely immune to corrosion. 
The strong chemical bonds that hold them together leave very little free chemical energy in the structure; they can be thought of as already corroded. When corrosion does occur, it is almost always a simple dissolution of the material or chemical reaction, rather than an electrochemical process. A common example of corrosion protection in ceramics is the lime added to soda–lime glass to reduce its solubility in water; though it is not nearly as soluble as pure sodium silicate, normal glass does form sub-microscopic flaws when exposed to moisture. Because glass is brittle, such flaws cause a dramatic reduction in the strength of a glass object during its first few hours at room temperature. Corrosion of polymers Polymer degradation involves several complex and often poorly understood physicochemical processes. These are strikingly different from the other processes discussed here, and so the term "corrosion" is only applied to them in a loose sense of the word. Because of their large molecular weight, very little entropy can be gained by mixing a given mass of polymer with another substance, making them generally quite difficult to dissolve. While dissolution is a problem in some polymer applications, it is relatively simple to design against. A more common and related problem is "swelling", where small molecules infiltrate the structure, reducing strength and stiffness and causing a volume change. Conversely, many polymers (notably flexible vinyl) are intentionally swelled with plasticizers, which can be leached out of the structure, causing brittleness or other undesirable changes. The most common form of degradation, however, is a decrease in polymer chain length. Mechanisms which break polymer chains are familiar to biologists because of their effect on DNA: ionizing radiation (most commonly ultraviolet light), free radicals, and oxidizers such as oxygen, ozone, and chlorine. Ozone cracking is a well-known problem affecting natural rubber, for example. Plastic additives can slow these processes very effectively, and can be as simple as a UV-absorbing pigment (e.g., titanium dioxide or carbon black). Plastic shopping bags often do not include these additives, so that they break down more easily as ultrafine particles of litter. Corrosion of glass Glass is characterized by a high degree of corrosion resistance. Because of its high water resistance, it is often used as primary packaging material in the pharmaceutical industry since most medicines are preserved in aqueous solution. Besides its water resistance, glass is also robust when exposed to certain chemically-aggressive liquids or gases. Glass disease is the corrosion of silicate glasses in aqueous solutions. It is governed by two mechanisms: diffusion-controlled leaching (ion exchange) and hydrolytic dissolution of the glass network. Both mechanisms strongly depend on the pH of the contacting solution: the rate of ion exchange decreases with pH as 10^(−0.5·pH), whereas the rate of hydrolytic dissolution increases with pH as 10^(0.5·pH). Mathematically, corrosion rates of glasses are characterized by normalized corrosion rates of elements NRi (g/(cm2·d)), which are determined as the ratio of the total amount mi of a species released into the water (g) to the water-contacting surface area S (cm2), the time of contact t (days), and the weight fraction content fi of the element in the glass: NRi = mi / (S · t · fi). The overall corrosion rate is a sum of contributions from both mechanisms (leaching + dissolution): NRi = NRxi + NRh. 
Diffusion-controlled leaching (ion exchange) is characteristic of the initial phase of corrosion and involves replacement of alkali ions in the glass by a hydronium (H3O+) ion from the solution. It causes an ion-selective depletion of near-surface layers of glasses and gives an inverse-square-root dependence of the corrosion rate on exposure time. The diffusion-controlled normalized leaching rate of cations from glasses, NRxi (g/(cm2·d)), is given by NRxi = 2ρ·(Di·t/π)^(1/2) / t, where t is time, Di is the i-th cation effective diffusion coefficient (cm2/d), which depends on the pH of the contacting water as Di ∝ 10^(−pH), and ρ is the density of the glass (g/cm3). Glass network dissolution is characteristic of the later phases of corrosion and causes a congruent release of ions into the water solution at a time-independent rate in dilute solutions, NRh (g/(cm2·d)): NRh = ρ·rh, where rh is the stationary hydrolysis (dissolution) rate of the glass (cm/d). In closed systems, the consumption of protons from the aqueous phase increases the pH and causes a fast transition to hydrolysis. However, further saturation of the solution with silica impedes hydrolysis and causes the glass to return to an ion-exchange, i.e. diffusion-controlled, regime of corrosion. In typical natural conditions, normalized corrosion rates of silicate glasses are very low and are of the order of 10^−7 to 10^−5 g/(cm2·d). The very high durability of silicate glasses in water makes them suitable for hazardous and nuclear waste immobilisation. Glass corrosion tests There exist numerous standardized procedures for measuring the corrosion (also called chemical durability) of glasses in neutral, basic, and acidic environments, under simulated environmental conditions, in simulated body fluid, at high temperature and pressure, and under other conditions. The standard procedure ISO 719 describes a test of the extraction of water-soluble basic compounds under neutral conditions: 2 g of glass, particle size 300–500 μm, is kept for 60 min in 50 mL de-ionized water of grade 2 at 98 °C; 25 mL of the obtained solution is titrated against 0.01 mol/L HCl solution. The volume of HCl required for neutralization determines the hydrolytic class of the glass. The standardized test ISO 719 is not suitable for glasses with poor or non-extractable alkaline components which are nevertheless still attacked by water, e.g., quartz glass, B2O3 glass or P2O5 glass. Usual glasses are differentiated into the following classes: Hydrolytic class 1 (Type I): This class, which is also called neutral glass, includes borosilicate glasses (e.g., Duran, Pyrex, Fiolax). Glass of this class contains essential quantities of boron oxides, aluminium oxides and alkaline earth oxides. Through its composition, neutral glass has a high resistance against temperature shocks and the highest hydrolytic resistance. It shows high chemical resistance against acid and neutral solutions; because of its low alkali content, its resistance against alkaline solutions is only moderate. Hydrolytic class 2 (Type II): This class usually contains sodium silicate glasses with a high hydrolytic resistance through surface finishing. Sodium silicate glass is a silicate glass which contains alkali and alkaline earth oxides, primarily sodium oxide and calcium oxide. Hydrolytic class 3 (Type III): Glass of the 3rd hydrolytic class usually contains sodium silicate glasses and has a mean hydrolytic resistance, which is about two times poorer than that of type 1 glasses. Acid class DIN 12116 and alkali class DIN 52322 (ISO 695) are to be distinguished from the hydrolytic class DIN 12111 (ISO 719). 
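To make the two-mechanism rate model above concrete, here is a small Python sketch that evaluates the leaching term NRxi = 2ρ·(Di·t/π)^(1/2)/t and the dissolution term NRh = ρ·rh with their opposite pH dependences. The pre-factors D0 and r0 and the default density are placeholders chosen only so the example runs; they are not measured data for any particular glass.

```python
import math

def glass_corrosion_rate(t_days, pH, rho=2.5, D0=1e-6, r0=1e-10):
    """Toy evaluation of the two-mechanism glass corrosion model above.

    t_days : exposure time in days
    pH     : pH of the contacting solution
    rho    : glass density in g/cm3 (typical value, illustrative only)
    D0, r0 : placeholder pre-factors (cm2/d and cm/d), chosen only for demo

    Returns (NRx, NRh, NR) in g/(cm2*day).
    """
    # Ion exchange: the effective diffusivity is taken to fall with pH as
    # 10**(-pH), so the leaching rate falls as 10**(-0.5*pH) and as 1/sqrt(t).
    D = D0 * 10.0 ** (-pH)
    NRx = 2.0 * rho * math.sqrt(D * t_days / math.pi) / t_days
    # Network dissolution: time-independent, rising with pH as 10**(0.5*pH).
    rh = r0 * 10.0 ** (0.5 * pH)
    NRh = rho * rh
    return NRx, NRh, NRx + NRh

# Compare a mildly acidic and an alkaline solution after 30 days of contact.
for pH in (4.0, 9.0):
    NRx, NRh, NR = glass_corrosion_rate(t_days=30.0, pH=pH)
    print(f"pH {pH}: leaching {NRx:.2e}, dissolution {NRh:.2e}, total {NR:.2e} g/(cm2*d)")
```

With these placeholder numbers the printout reproduces the qualitative picture in the text: ion exchange dominates in the acidic case and network dissolution in the alkaline one.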
See also References Further reading Glass chemistry Metallurgy
Corrosion
[ "Chemistry", "Materials_science", "Engineering" ]
7,008
[ "Glass engineering and science", "Glass chemistry", "Metallurgy", "Materials science", "Corrosion", "Electrochemistry", "nan", "Materials degradation" ]
155,544
https://en.wikipedia.org/wiki/Charles%27s%20law
Charles's law (also known as the law of volumes) is an experimental gas law that describes how gases tend to expand when heated. A modern statement of Charles's law is: When the pressure on a sample of a dry gas is held constant, the Kelvin temperature and the volume will be in direct proportion. This relationship of direct proportion can be written as V ∝ T. So this means V/T = k, or V = kT, where V is the volume of the gas, T is the temperature of the gas (measured in kelvins), and k is a constant for a particular pressure and amount of gas. This law describes how a gas expands as the temperature increases; conversely, a decrease in temperature will lead to a decrease in volume. For comparing the same substance under two different sets of conditions, the law can be written as V1/T1 = V2/T2. The equation shows that, as absolute temperature increases, the volume of the gas also increases in proportion. History The law was named after scientist Jacques Charles, who formulated the original law in his unpublished work from the 1780s. In two of a series of four essays presented between 2 and 30 October 1801, John Dalton demonstrated by experiment that all the gases and vapours that he studied expanded by the same amount between two fixed points of temperature. The French natural philosopher Joseph Louis Gay-Lussac confirmed the discovery in a presentation to the French National Institute on 31 Jan 1802, although he credited the discovery to unpublished work from the 1780s by Jacques Charles. The basic principles had already been described by Guillaume Amontons and Francis Hauksbee a century earlier. Dalton was the first to demonstrate that the law applied generally to all gases, and to the vapours of volatile liquids if the temperature was well above the boiling point. Gay-Lussac concurred. With measurements only at the two thermometric fixed points of water (0 °C and 100 °C), Gay-Lussac was unable to show that the equation relating volume to temperature was a linear function. On mathematical grounds alone, Gay-Lussac's paper does not permit the assignment of any law stating the linear relation. Both Dalton's and Gay-Lussac's main conclusions can be expressed mathematically as V100 = V0(1 + k), where V100 is the volume occupied by a given sample of gas at 100 °C; V0 is the volume occupied by the same sample of gas at 0 °C; and k is a constant which is the same for all gases at constant pressure. This equation does not contain the temperature and so is not what became known as Charles's Law. Gay-Lussac's value for k (1/2.6666) was identical to Dalton's earlier value for vapours and remarkably close to the present-day value of 1/2.7315. Gay-Lussac gave credit for this equation to unpublished statements by his fellow Republican citizen J. Charles in 1787. In the absence of a firm record, the gas law relating volume to temperature cannot be attributed to Charles. Dalton's measurements had much more scope regarding temperature than Gay-Lussac's, not only measuring the volume at the fixed points of water but also at two intermediate points. Unaware of the inaccuracies of mercury thermometers at the time, which were divided into equal portions between the fixed points, Dalton, after concluding in Essay II that in the case of vapours "any elastic fluid expands nearly in a uniform manner into 1370 or 1380 parts by 180 degrees (Fahrenheit) of heat", was unable to confirm it for gases. Relation to absolute zero Charles's law appears to imply that the volume of a gas will descend to zero at a certain temperature (−266.66 °C according to Gay-Lussac's figures, or −273.15 °C according to modern measurements). 
Gay-Lussac was clear in his description that the law was not applicable at low temperatures: "but I may mention that this last conclusion cannot be true except so long as the compressed vapours remain entirely in the elastic state; and this requires that their temperature shall be sufficiently elevated to enable them to resist the pressure which tends to make them assume the liquid state." At absolute zero temperature the gas possesses zero energy and hence the molecular motion ceases. Gay-Lussac had no experience of liquid air (first prepared in 1877), although he appears to have believed (as did Dalton) that the "permanent gases" such as air and hydrogen could be liquefied. Gay-Lussac had also worked with the vapours of volatile liquids in demonstrating Charles's law, and was aware that the law does not apply just above the boiling point of the liquid: "I may, however, remark that when the temperature of the ether is only a little above its boiling point, its condensation is a little more rapid than that of atmospheric air. This fact is related to a phenomenon which is exhibited by a great many bodies when passing from the liquid to the solid state, but which is no longer sensible at temperatures a few degrees above that at which the transition occurs." The first mention of a temperature at which the volume of a gas might descend to zero was by William Thomson (later known as Lord Kelvin) in 1848: "This is what we might anticipate when we reflect that infinite cold must correspond to a finite number of degrees of the air-thermometer below zero; since if we push the strict principle of graduation, stated above, sufficiently far, we should arrive at a point corresponding to the volume of air being reduced to nothing, which would be marked as −273° of the scale (−100/.366, if .366 be the coefficient of expansion); and therefore −273° of the air-thermometer is a point which cannot be reached at any finite temperature, however low." However, the "absolute zero" on the Kelvin temperature scale was originally defined in terms of the second law of thermodynamics, which Thomson himself described in 1852. Thomson did not assume that this was equal to the "zero-volume point" of Charles's law, but merely said that Charles's law provided the minimum temperature which could be attained. The two can be shown to be equivalent by Ludwig Boltzmann's statistical view of entropy (1870). However, Charles also stated: The volume of a fixed mass of dry gas increases or decreases by 1/273 times the volume at 0 °C for every 1 °C rise or fall in temperature. Thus VT = V0 + (1/273)·V0·T = V0(1 + T/273), where VT is the volume of the gas at temperature T (in degrees Celsius) and V0 is the volume at 0 °C. Relation to kinetic theory The kinetic theory of gases relates the macroscopic properties of gases, such as pressure and volume, to the microscopic properties of the molecules which make up the gas, particularly the mass and speed of the molecules. To derive Charles's law from kinetic theory, it is necessary to have a microscopic definition of temperature: this can be conveniently taken as the temperature being proportional to the average kinetic energy of the gas molecules, Ek, so that T ∝ Ek. Under this definition, the demonstration of Charles's law is almost trivial. The kinetic theory equivalent of the ideal gas law relates PV to the average kinetic energy: PV = (2/3)·N·Ek, where N is the number of molecules in the gas sample. See also References Further reading Facsimile at the Bibliothèque nationale de France (pp. 315–22). Facsimile at the Bibliothèque nationale de France (pp. 353–79). 
External links Charles's law simulation from Davidson College, Davidson, North Carolina Charles's law demonstration by Prof. Robert Burk, Carleton University, Ottawa, Canada Charles's law animation from the Leonardo Project (GTEP/CCHS, UK) 1780s introductions 1780s in science Gas laws Scientific laws de:Thermische Zustandsgleichung idealer Gase#Gesetz von Gay-Lussac
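As a small worked use of the relation V1/T1 = V2/T2 given above, here is a Python helper; it is a minimal sketch (the function name and interface are ours), converting Celsius temperatures to kelvins before applying the law, and it assumes constant pressure and a fixed amount of dry gas.

```python
def charles_law_volume(v1, t1_celsius, t2_celsius):
    """Return V2 from Charles's law V1/T1 = V2/T2 (temperatures in kelvins).

    v1         : initial volume (any unit; the result uses the same unit)
    t1_celsius : initial temperature in degrees Celsius
    t2_celsius : final temperature in degrees Celsius
    Constant pressure and a fixed amount of dry gas are assumed.
    """
    t1 = t1_celsius + 273.15  # convert to the absolute (Kelvin) scale
    t2 = t2_celsius + 273.15
    if t1 <= 0 or t2 <= 0:
        raise ValueError("temperatures must be above absolute zero")
    return v1 * t2 / t1

# Example: 2.00 L of gas warmed from 0 degC to 100 degC expands to about 2.73 L.
print(round(charles_law_volume(2.00, 0.0, 100.0), 2))
```

The example reproduces the proportionality discussed in the article: warming a sample from 0 °C to 100 °C scales its volume by 373.15/273.15 ≈ 1.366.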
Charles's law
[ "Chemistry", "Mathematics" ]
1,592
[ "Equations", "Mathematical objects", "Scientific laws", "Gas laws" ]
155,555
https://en.wikipedia.org/wiki/Image%20registration
Image registration is the process of transforming different sets of data into one coordinate system. Data may be multiple photographs, data from different sensors, times, depths, or viewpoints. It is used in computer vision, medical imaging, military automatic target recognition, and compiling and analyzing images and data from satellites. Registration is necessary in order to be able to compare or integrate the data obtained from these different measurements. Algorithm classification Intensity-based vs feature-based Image registration or image alignment algorithms can be classified into intensity-based and feature-based. One of the images is referred to as the moving or source and the others are referred to as the target, fixed or sensed images. Image registration involves spatially transforming the source/moving image(s) to align with the target image. The reference frame in the target image is stationary, while the other datasets are transformed to match to the target. Intensity-based methods compare intensity patterns in images via correlation metrics, while feature-based methods find correspondence between image features such as points, lines, and contours. Intensity-based methods register entire images or sub-images. If sub-images are registered, centers of corresponding sub images are treated as corresponding feature points. Feature-based methods establish a correspondence between a number of especially distinct points in images. Knowing the correspondence between a number of points in images, a geometrical transformation is then determined to map the target image to the reference images, thereby establishing point-by-point correspondence between the reference and target images. Methods combining intensity-based and feature-based information have also been developed. Transformation models Image registration algorithms can also be classified according to the transformation models they use to relate the target image space to the reference image space. The first broad category of transformation models includes linear transformations, which include rotation, scaling, translation, and other affine transforms. Linear transformations are global in nature, thus, they cannot model local geometric differences between images. The second category of transformations allow 'elastic' or 'nonrigid' transformations. These transformations are capable of locally warping the target image to align with the reference image. Nonrigid transformations include radial basis functions (thin-plate or surface splines, multiquadrics, and compactly-supported transformations), physical continuum models (viscous fluids), and large deformation models (diffeomorphisms). Transformations are commonly described by a parametrization, where the model dictates the number of parameters. For instance, the translation of a full image can be described by a single parameter, a translation vector. These models are called parametric models. Non-parametric models on the other hand, do not follow any parameterization, allowing each image element to be displaced arbitrarily. There are a number of programs that implement both estimation and application of a warp-field. It is a part of the SPM and AIR programs. Transformations of coordinates via the law of function composition rather than addition Alternatively, many advanced methods for spatial normalization are building on structure preserving transformations homeomorphisms and diffeomorphisms since they carry smooth submanifolds smoothly during transformation. 
Diffeomorphisms are generated in the modern field of Computational Anatomy based on flows since diffeomorphisms are not additive although they form a group, but a group under the law of function composition. For this reason, flows which generalize the ideas of additive groups allow for generating large deformations that preserve topology, providing 1-1 and onto transformations. Computational methods for generating such transformation are often called LDDMM which provide flows of diffeomorphisms as the main computational tool for connecting coordinate systems corresponding to the geodesic flows of Computational Anatomy. There are a number of programs which generate diffeomorphic transformations of coordinates via diffeomorphic mapping including MRI Studio and MRI Cloud.org Spatial vs frequency domain methods Spatial methods operate in the image domain, matching intensity patterns or features in images. Some of the feature matching algorithms are outgrowths of traditional techniques for performing manual image registration, in which an operator chooses corresponding control points (CP) in images. When the number of control points exceeds the minimum required to define the appropriate transformation model, iterative algorithms like RANSAC can be used to robustly estimate the parameters of a particular transformation type (e.g. affine) for registration of the images. Frequency-domain methods find the transformation parameters for registration of the images while working in the transform domain. Such methods work for simple transformations, such as translation, rotation, and scaling. Applying the phase correlation method to a pair of images produces a third image which contains a single peak. The location of this peak corresponds to the relative translation between the images. Unlike many spatial-domain algorithms, the phase correlation method is resilient to noise, occlusions, and other defects typical of medical or satellite images. Additionally, the phase correlation uses the fast Fourier transform to compute the cross-correlation between the two images, generally resulting in large performance gains. The method can be extended to determine rotation and scaling differences between two images by first converting the images to log-polar coordinates. Due to properties of the Fourier transform, the rotation and scaling parameters can be determined in a manner invariant to translation. Single- vs multi-modality methods Another classification can be made between single-modality and multi-modality methods. Single-modality methods tend to register images in the same modality acquired by the same scanner/sensor type, while multi-modality registration methods tended to register images acquired by different scanner/sensor types. Multi-modality registration methods are often used in medical imaging as images of a subject are frequently obtained from different scanners. Examples include registration of brain CT/MRI images or whole body PET/CT images for tumor localization, registration of contrast-enhanced CT images against non-contrast-enhanced CT images for segmentation of specific parts of the anatomy, and registration of ultrasound and CT images for prostate localization in radiotherapy. Automatic vs interactive methods Registration methods may be classified based on the level of automation they provide. Manual, interactive, semi-automatic, and automatic methods have been developed. Manual methods provide tools to align the images manually. 
Interactive methods reduce user bias by performing certain key operations automatically while still relying on the user to guide the registration. Semi-automatic methods perform more of the registration steps automatically but depend on the user to verify the correctness of a registration. Automatic methods do not allow any user interaction and perform all registration steps automatically. Similarity measures for image registration Image similarities are broadly used in medical imaging. An image similarity measure quantifies the degree of similarity between intensity patterns in two images. The choice of an image similarity measure depends on the modality of the images to be registered. Common examples of image similarity measures include cross-correlation, mutual information, sum of squared intensity differences, and ratio image uniformity. Mutual information and normalized mutual information are the most popular image similarity measures for registration of multimodality images. Cross-correlation, sum of squared intensity differences and ratio image uniformity are commonly used for registration of images in the same modality. Many new features have been derived for cost functions based on matching methods via large deformations have emerged in the field Computational Anatomy including Measure matching which are pointsets or landmarks without correspondence, Curve matching and Surface matching via mathematical currents and varifolds. Uncertainty There is a level of uncertainty associated with registering images that have any spatio-temporal differences. A confident registration with a measure of uncertainty is critical for many change detection applications such as medical diagnostics. In remote sensing applications where a digital image pixel may represent several kilometers of spatial distance (such as NASA's LANDSAT imagery), an uncertain image registration can mean that a solution could be several kilometers from ground truth. Several notable papers have attempted to quantify uncertainty in image registration in order to compare results. However, many approaches to quantifying uncertainty or estimating deformations are computationally intensive or are only applicable to limited sets of spatial transformations. Applications Image registration has applications in remote sensing (cartography updating), and computer vision. Due to the vast range of applications to which image registration can be applied, it is impossible to develop a general method that is optimized for all uses. Medical image registration (for data of the same patient taken at different points in time such as change detection or tumor monitoring) often additionally involves elastic (also known as nonrigid) registration to cope with deformation of the subject (due to breathing, anatomical changes, and so forth). Nonrigid registration of medical images can also be used to register a patient's data to an anatomical atlas, such as the Talairach atlas for neuroimaging. In astrophotography, image alignment and stacking are often used to increase the signal to noise ratio for faint objects. Without stacking it may be used to produce a timelapse of events such as a planet's rotation of a transit across the Sun. Using control points (automatically or manually entered), the computer performs transformations on one image to make major features align with a second or multiple images. This technique may also be used for images of different sizes, to allow images taken through different telescopes or lenses to be combined. 
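The similarity measures listed above reduce to a few lines of array arithmetic. The sketch below computes sum of squared differences, normalized cross-correlation, and a joint-histogram estimate of mutual information; the test images, intensity remapping, and bin count are arbitrary choices made for illustration.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared intensity differences (lower = more similar)."""
    return float(np.sum((a - b) ** 2))

def ncc(a, b):
    """Normalized cross-correlation (1.0 = identical up to gain and offset)."""
    a0 = a - a.mean()
    b0 = b - b.mean()
    return float(np.sum(a0 * b0) / (np.linalg.norm(a0) * np.linalg.norm(b0)))

def mutual_information(a, b, bins=32):
    """Mutual information from the joint intensity histogram (in nats).

    Useful for multi-modality registration, where intensities are related
    but not linearly correlated.
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
same = img.copy()
remapped = np.sqrt(img)        # nonlinear intensity remapping, same structure
noise = rng.random((64, 64))   # unrelated image

for name, other in [("identical", same), ("remapped", remapped), ("noise", noise)]:
    print(name, round(ssd(img, other), 2), round(ncc(img, other), 3),
          round(mutual_information(img, other), 3))
```

The remapped image keeps a high mutual information score while its cross-correlation drops, which is why information-theoretic measures are preferred for multi-modality registration.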
In cryo-TEM, instability causes specimen drift and many fast acquisitions with accurate image registration is required to preserve high resolution and obtain high signal to noise images. For low SNR data, the best image registration is achieved by cross-correlating all permutations of images in an image stack. Image registration is an essential part of panoramic image creation. There are many different techniques that can be implemented in real time and run on embedded devices like cameras and camera-phones. See also Computational Anatomy Correspondence problem Digital image correlation and tracking Georeferencing Image correlation Image rectification Inverse consistency Point set registration Rubbersheeting Spatial normalization Spatial verification References External links Richard Szeliski, Image Alignment and Stitching: A Tutorial. Foundations and Trends in Computer Graphics and Computer Vision, 2:1-104, 2006. B. Fischer, J. Modersitzki: Ill-posed medicine – an introduction to image registration. Inverse Problems, 24:1–19, 2008 Barbara Zitová, Jan Flusser: Image registration methods: a survey. Image Vision Comput. 21(11): 977-1000 (2003). C. Je and H.-M. Park. Optimized Hierarchical Block Matching for Fast and Accurate Image Registration. Signal Processing: Image Communication, Volume 28, Issue 7, pp. 779–791, August, 2013. Registering Multimodal MRI Images using Matlab. elastix : a toolbox for rigid and nonrigid registration of images. niftyreg: a toolbox for doing near real-time robust rigid, affine (using block matching) and non-rigid image registration (using a refactored version of the free form deformation algorithm). Image Registration techniques using MATLAB Image Compare application automatically compares a pair of images and highlights their differences. This application runs in desktop and mobile phone browsers without requiring installation. Computer vision Medical imaging
Image registration
[ "Engineering" ]
2,289
[ "Artificial intelligence engineering", "Packaging machinery", "Computer vision" ]
155,558
https://en.wikipedia.org/wiki/Sandia%20National%20Laboratories
Sandia National Laboratories (SNL), also known as Sandia, is one of three research and development laboratories of the United States Department of Energy's National Nuclear Security Administration (NNSA). Headquartered in Kirtland Air Force Base in Albuquerque, New Mexico, it has a second principal facility next to Lawrence Livermore National Laboratory in Livermore, California, and a test facility in Waimea, Kauai, Hawaii. Sandia is owned by the U.S. federal government but privately managed and operated by National Technology and Engineering Solutions of Sandia, a wholly owned subsidiary of Honeywell International. Established in 1949, SNL is a "multimission laboratory" with the primary goal of advancing U.S. national security by developing various science-based technologies. Its work spans roughly 70 areas of activity, including nuclear deterrence, arms control, nonproliferation, hazardous waste disposal, and climate change. Sandia hosts a wide variety of research initiatives, including computational biology, physics, materials science, alternative energy, psychology, MEMS, and cognitive science. Most notably, it hosted some of the world's earliest and fastest supercomputers, ASCI Red and ASCI Red Storm, and is currently home to the Z Machine, the largest X-ray generator in the world, which is designed to test materials in conditions of extreme temperature and pressure. Sandia conducts research through partnership agreements with academic, governmental, and commercial entities; educational opportunities are available through several programs, including the Securing Top Academic Research & Talent at Historically Black Colleges and Universities (START HBCU) Program and the Sandia University Partnerships Network (a collaboration with Purdue University, University of Texas at Austin, Georgia Institute of Technology, University of Illinois Urbana–Champaign, and University of New Mexico). Lab history Sandia National Laboratories' roots go back to World War II and the Manhattan Project. Prior to the United States formally entering the war, the U.S. Army leased land near an Albuquerque, New Mexico airport known as Oxnard Field to service transient Army and U.S. Navy aircraft. In January 1941 construction began on the Albuquerque Army Air Base, leading to establishment of the Bombardier School-Army Advanced Flying School near the end of the year. Soon thereafter it was renamed Kirtland Field, after early Army military pilot Colonel Roy C. Kirtland, and in mid-1942 the Army acquired Oxnard Field. During the war years facilities were expanded further and Kirtland Field served as a major Army Air Forces training installation. In the many months leading up to successful detonation of the first atomic bomb, the Trinity test, and delivery of the first airborne atomic weapon, Project Alberta, J. Robert Oppenheimer, Director of Los Alamos Laboratory, and his technical advisor, Hartly Rowe, began looking for a new site convenient to Los Alamos for the continuation of weapons development especially its non-nuclear aspects. They felt a separate division would be best to perform these functions. Kirtland had fulfilled Los Alamos' transportation needs for both the Trinity and Alberta projects, thus, Oxnard Field was transferred from the jurisdiction of the Army Air Corps to the U.S. Army Service Forces Chief of Engineer District, and thereafter, assigned to the Manhattan Engineer District. 
In July 1945, the forerunner of Sandia Laboratory, known as "Z" Division, was established at Oxnard Field to handle future weapons development, testing, and bomb assembly for the Manhattan Engineer District. The District-directive calling for establishing a secure area and construction of "Z" Division facilities referred to this as "Sandia Base" , after the nearby Sandia Mountains — apparently the first official recognition of the "Sandia" name. Sandia Laboratory was operated by the University of California until 1949, when President Harry S. Truman asked Western Electric, a subsidiary of American Telephone and Telegraph (AT&T), to assume the operation as an "opportunity to render an exceptional service in the national interest." Sandia Corporation, a wholly owned subsidiary of Western Electric, was formed on October 5, 1949, and, on November 1, 1949, took over management of the Laboratory. The United States Congress designated Sandia Laboratories as a National laboratory in 1979. In October 1993, Sandia National Laboratories (SNL) was managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin. In December 2016, it was announced that National Technology and Engineering Solutions of Sandia, under the direction of Honeywell International, would take over the management of Sandia National Laboratories beginning May 1, 2017; this contract remains in effect as of November 2022, covering government-owned facilities in Albuquerque, New Mexico (SNL/NM); Livermore, California (SNL/CA); Tonopah, Nevada; Shoreview, Minnesota; and Kauai, Hawaii. SNL/NM is the headquarters and the largest laboratory, employing more than 12,000 employees, while SNL/CA is a smaller laboratory, with around 1,700 employees. Tonopah and Kauai are occupied on a "campaign" basis, as test schedules dictate. The lab also managed the DOE/SNL Scaled Wind Farm Technology (SWiFT) Facility in Lubbock, Texas. Sandia led a project that studied how to decontaminate a subway system in the event of a biological weapons attack (such as anthrax). As of September 2017, the process to decontaminate subways in such an event is "virtually ready to implement," said a lead Sandia engineer. Sandia's integration with its local community includes a program through the Department of Energy's Tribal Energy program to deliver alternative renewable power to remote Navajo communities, spearheaded by senior engineer Sandra Begay. Legal issues On February 13, 2007, a New Mexico State Court found Sandia Corporation liable for $4.7 million in damages for the firing of a former network security analyst, Shawn Carpenter, who had reported to his supervisors that hundreds of military installations and defense contractors' networks were compromised and sensitive information was being stolen including hundreds of sensitive Lockheed documents on the Mars Reconnaissance Orbiter project. When his supervisors told him to drop the investigation and do nothing with the information, he went to intelligence officials in the United States Army and later the Federal Bureau of Investigation to address the national security breaches. When Sandia managers discovered his actions months later, they revoked his security clearance and fired him. In 2014, an investigation determined Sandia Corp. used lab operations funds to pay for lobbying related to the renewal of its $2 billion contract to operate the lab. Sandia Corp. and its parent company, Lockheed Martin, agreed to pay a $4.8 million fine. 
Technical areas SNL/NM consists of five technical areas (TA) and several additional test areas. Each TA has its own distinctive operations; however, the operations of some groups at Sandia may span more than one TA, with one part of a team working on a problem from one angle, and another subset of the same team located in a different building or area working with other specialized equipment. A description of each area is given below. TA-I operations are dedicated primarily to three activities: the design, research, and development of weapon systems; limited production of weapon system components; and energy programs. TA-I facilities include the main library and offices, laboratories, and shops used by administrative and technical staff. TA-II is a facility that was established in 1948 for the assembly of chemical high explosive main charges for nuclear weapons and later for production scale assembly of nuclear weapons. Activities in TA-II include the decontamination, decommissioning, and remediation of facilities and landfills used in past research and development activities. Remediation of the Classified Waste Landfill which started in March 1998, neared completion in FY2000. A testing facility, the Explosive Component Facility, integrates many of the previous TA-II test activities as well as some testing activities previously performed in other remote test areas. The Access Delay Technology Test Facility is also located in TA-II. TA-III is adjacent to and south of TA-V [both are approximately seven miles (11 km) south of TA-I]. TA-III facilities include extensive design-test facilities such as rocket sled tracks, centrifuges and a radiant heat facility. Other facilities in TA-III include a paper destructor, the Melting and Solidification Laboratory and the Radioactive and Mixed Waste Management Facility (RMWMF). RMWMF serves as central processing facility for packaging and storage of low-level and mixed waste. The remediation of the Chemical Waste Landfill, which started in September 1998, is an ongoing activity in TA-III. TA-IV, located approximately south of TA-I, consists of several inertial-confinement fusion research and pulsed power research facilities, including the High Energy Radiation Megavolt Electron Source (Hermes-III), the Z Facility, the Short Pulsed High Intensity Nanosecond X-Radiator (SPHINX) Facility, and the Saturn Accelerator. TA-IV also hosts some computer science and cognition research. TA-V contains two research reactor facilities, an intense gamma irradiation facility (using cobalt-60 and caesium-137 sources), and the Hot Cell Facility. SNL/NM also has test areas outside of the five technical areas listed above. These test areas, collectively known as Coyote Test Field, are located southeast of TA-III and/or in the canyons on the west side of the Manzanita Mountains. Facilities in the Coyote Canyon Test Field include the Solar Tower Facility (34.9623 N, 106.5097 W), the Lurance Canyon Burn Site and the Aerial Cable Facility. DOE/SNL Scaled Wind Farm Technology (SWIFT) Facility In collaboration with the Wind Energy Technologies Office (WETO) of U.S. Department of Energy, Texas Tech University, and the Vestas wind turbine corporation, SNL operates the Scaled Wind Farm Technology (SWiFT) Facility in Lubbock, Texas. Open-source software In the 1970s, the Sandia, Los Alamos, Air Force Weapons Laboratory Technical Exchange Committee initiated the development of the SLATEC library of mathematical and statistical routines, written in FORTRAN 77. 
Today, Sandia National Laboratories is home to several open-source software projects: FCLib (Feature Characterization Library) is a library for the identification and manipulation of coherent regions or structures from spatio-temporal data. FCLib focuses on providing data structures that are "feature-aware" and support feature-based analysis. It is written in C and developed under a "BSD-like" license. LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) is a molecular dynamics library that can be used to model parallel atomic/subatomic processes at large scale. It is produced under the GNU General Public License (GPL) and distributed on the Sandia National Laboratories website as well as SourceForge. LibVMI is a library for simplifying the reading and writing of memory in running virtual machines, a technique known as virtual machine introspection. It is licensed under the GNU Lesser General Public License. MapReduce-MPI Library is an implementation of MapReduce for distributed-memory parallel machines, utilizing the Message Passing Interface (MPI) for communication. It is developed under a modified Berkeley Software Distribution license. MultiThreaded Graph Library (MTGL) is a collection of graph-based algorithms designed to take advantage of parallel, shared-memory architectures such as the Cray XMT, Symmetric Multiprocessor (SMP) machines, and multi-core workstations. It is developed under a BSD License. ParaView is a cross-platform application for performing data analysis and visualization. It is a collaborative effort, developed by Sandia National Laboratories, Los Alamos National Laboratories, and the United States Army Research Laboratory, and funded by the Advanced Simulation and Computing Program. It is developed under a BSD license. Pyomo is a python-based optimization Mathematical Programming Language which supports most commercial and open-source solver engines. Soccoro, a collaborative effort with Wake Forest and Vanderbilt Universities, is object-oriented software for performing electronic-structure calculations based on density-functional theory. It utilizes libraries such as MPI, BLAS, and LAPACK and is developed under the GNU General Public License. Titan Informatics Toolkit is a collection of cross-platform libraries for ingesting, analyzing, and displaying scientific and informatics data. It is a collaborative effort with Kitware, Inc., and uses various open-source components such as the Boost Graph Library. It is developed under a New BSD license. Trilinos is an object-oriented library for building scalable scientific and engineering applications, with a focus on linear algebra techniques. Most Trilinos packages are licensed under a Modified BSD License. Xyce is an open source, SPICE-compatible, high-performance analog circuit simulator, capable of solving extremely large circuit problems. Charon is a TCAD simulator which was open-sourced by Sandia in 2020. It is significant as previously there were no major TCAD simulators for large-scale simulations that were open source. In addition, Sandia National Laboratories collaborates with Kitware, Inc. in developing the Visualization Toolkit (VTK), a cross-platform graphics and visualization software suite. This collaboration has focused on enhancing the information visualization capabilities of VTK and has in turn fed back into other projects such as ParaView and Titan. Self-guided bullet On January 30, 2012, Sandia announced that it successfully test-fired a self-guided dart that can hit targets at . 
The dart is long, has its center of gravity at the nose, and is made to be fired from a small-caliber smoothbore gun. It is kept straight in flight by four electromagnetically actuated fins encased in a plastic puller sabot that falls off when the dart leaves the bore. The dart cannot be fired from conventional rifled barrels because the gyroscopic stability provided by rifling grooves for regular bullets would prevent the self-guided bullet from reliably turning towards a target when in flight, so fins are responsible for stabilizing rather than spinning. A laser designator marks a target, which is tracked by the dart's optical sensor and 8-bit CPU. The guided projectile is kept cheap because it does not need an inertial measurement unit, since its small size allows it to make the fast corrections necessary without the aid of an IMU. The natural body frequency of the bullet is about 30 hertz, so corrections can be made 30 times per second in flight. Muzzle velocity with commercial gunpowder is (Mach 2.1), but military customized gunpowder can increase its speed and range. Computer modeling shows that a standard bullet would miss a target at by , while an equivalent guided bullet would hit within . Accuracy increases as distances get longer, since the bullet's motions settle more the longer it is in flight. Supercomputers List of supercomputers that have been operated by or resided at Sandia: Intel Paragon XP/S 140, 1993 to ? ASCI Red, 1997 to 2006 Red Storm, 2005 to 2012 Cielo, 2010 to 2016 Trinity, 2015 to current Astra, 2018 to current, based on ARM processors Attaway, 2019 to current See also Brookhaven National Laboratory Decontamination foam Jess (programming language) Lawrence Livermore National Laboratory National Renewable Energy Laboratory Test Readiness Program Titan Rain VxInsight References Further reading Computerworld article "Reverse Hacker Case Gets Costlier for Sandia Labs" San Jose Mercury News article "Ill Lab Workers Fight For Federal Compensation" Wired Magazine article "Linkin Park's Mysterious Cyberstalker" Slate article "Stalking Linkin Park" FedSmith.com article "Linkin Park, Nuclear Research and Obsession" The Santa Fe New Mexican article "Judge Upholds $4.3 Million Jury Award to Fired Sandia Lab Analyst" TIME article "A Security Analyst Wins Big in Court" The Santa Fe New Mexican article "Jury Awards Fired Sandia Analyst $4.3 Million" HPCwire article "Sandia May Unwittingly Have Sold Supercomputer to China" Federal Computer Weekly article "Intercepts: Chinese Checkers" Congressional Research Service report "China: Suspected Acquisition of U.S. 
Nuclear Weapon Secrets" Sandia National Laboratory Cooperative Monitoring Center article "Engagement with China" BBC News "Security Overhaul at US Nuclear Labs" Fox News "Iowa Republican Demands Tighter Nuclear Lab Security" UPI article "Workers Get Bonus After Being Disciplined" IndustryWeek article "3D Silicon Photonic Lattice" October 6, 2005 The Santa Fe New Mexican article "Sandia Security Managers Recorded Workers' Calls" May 17, 2002 New Mexico Business Weekly article "Sandia National Laboratories Says it's Worthless" External links DOE Laboratory Fact Sheet 1949 establishments in New Mexico Economy of Albuquerque, New Mexico Federally Funded Research and Development Centers Honeywell Livermore, California Lockheed Martin Military research of the United States Nuclear weapons infrastructure of the United States Plasma physics facilities Research institutes in New Mexico Supercomputer sites United States Department of Energy national laboratories Weapons manufacturing companies
Sandia National Laboratories
[ "Physics" ]
3,534
[ "Plasma physics facilities", "Plasma physics" ]
155,562
https://en.wikipedia.org/wiki/Bode%20plot
In electrical engineering and control theory, a Bode plot is a graph of the frequency response of a system. It is usually a combination of a Bode magnitude plot, expressing the magnitude (usually in decibels) of the frequency response, and a Bode phase plot, expressing the phase shift. As originally conceived by Hendrik Wade Bode in the 1930s, the plot is an asymptotic approximation of the frequency response, using straight line segments. Overview Among his several important contributions to circuit theory and control theory, engineer Hendrik Wade Bode, while working at Bell Labs in the 1930s, devised a simple but accurate method for graphing gain and phase-shift plots. These bear his name, Bode gain plot and Bode phase plot. "Bode" is often pronounced as which is a Dutch pronunciation, closer to English . Bode was faced with the problem of designing stable amplifiers with feedback for use in telephone networks. He developed the graphical design technique of the Bode plots to show the gain margin and phase margin required to maintain stability under variations in circuit characteristics caused during manufacture or during operation. The principles developed were applied to design problems of servomechanisms and other feedback control systems. The Bode plot is an example of analysis in the frequency domain. Definition The Bode plot for a linear, time-invariant system with transfer function ( being the complex frequency in the Laplace domain) consists of a magnitude plot and a phase plot. The Bode magnitude plot is the graph of the function of frequency (with being the imaginary unit). The -axis of the magnitude plot is logarithmic and the magnitude is given in decibels, i.e., a value for the magnitude is plotted on the axis at . The Bode phase plot is the graph of the phase, commonly expressed in degrees, of the argument function as a function of . The phase is plotted on the same logarithmic -axis as the magnitude plot, but the value for the phase is plotted on a linear vertical axis. Frequency response This section illustrates that a Bode plot is a visualization of the frequency response of a system. Consider a linear, time-invariant system with transfer function . Assume that the system is subject to a sinusoidal input with frequency , that is applied persistently, i.e. from a time to a time . The response will be of the form i.e., also a sinusoidal signal with amplitude shifted by a phase with respect to the input. It can be shown that the magnitude of the response is and that the phase shift is In summary, subjected to an input with frequency , the system responds at the same frequency with an output that is amplified by a factor and phase-shifted by . These quantities, thus, characterize the frequency response and are shown in the Bode plot. Rules for handmade Bode plot For many practical problems, the detailed Bode plots can be approximated with straight-line segments that are asymptotes of the precise response. The effect of each of the terms of a multiple element transfer function can be approximated by a set of straight lines on a Bode plot. This allows a graphical solution of the overall frequency response function. Before widespread availability of digital computers, graphical methods were extensively used to reduce the need for tedious calculation; a graphical solution could be used to identify feasible ranges of parameters for a new design. 
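With modern tools the same response is easy to compute exactly, which is useful for checking the straight-line approximations developed below. A minimal sketch for an assumed first-order low-pass H(s) = 1/(1 + s/ωc); the corner frequency of 1 krad/s and the frequency grid are arbitrary illustrative choices.

```python
import numpy as np

wc = 1.0e3                                 # assumed corner frequency, rad/s
w = np.logspace(1, 5, 401)                 # 10 rad/s ... 100 krad/s
H = 1.0 / (1.0 + 1j * w / wc)              # H(jw) for H(s) = 1/(1 + s/wc)

mag_db = 20.0 * np.log10(np.abs(H))        # Bode magnitude plot values
phase_deg = np.degrees(np.angle(H))        # Bode phase plot values

# Spot checks against the hand rules discussed in this article:
i = np.argmin(np.abs(w - wc))
print(f"|H| at the corner: {mag_db[i]:.2f} dB (asymptote says 0 dB, exact is about -3 dB)")
print(f"phase at the corner: {phase_deg[i]:.1f} deg (expected -45 deg)")
i1, i2 = np.argmin(np.abs(w - 10 * wc)), np.argmin(np.abs(w - 100 * wc))
print(f"high-frequency slope: {mag_db[i2] - mag_db[i1]:.1f} dB per decade (expected about -20)")
```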
The premise of a Bode plot is that one can consider the log of a function in the form as a sum of the logs of its zeros and poles: This idea is used explicitly in the method for drawing phase diagrams. The method for drawing amplitude plots implicitly uses this idea, but since the log of the amplitude of each pole or zero always starts at zero and only has one asymptote change (the straight lines), the method can be simplified. Straight-line amplitude plot Amplitude decibels is usually done using to define decibels. Given a transfer function in the form where and are constants, , , and is the transfer function: At every value of s where (a zero), increase the slope of the line by per decade. At every value of s where (a pole), decrease the slope of the line by per decade. The initial value of the graph depends on the boundaries. The initial point is found by putting the initial angular frequency into the function and finding The initial slope of the function at the initial value depends on the number and order of zeros and poles that are at values below the initial value, and is found using the first two rules. To handle irreducible 2nd-order polynomials, can, in many cases, be approximated as . Note that zeros and poles happen when is equal to a certain or . This is because the function in question is the magnitude of , and since it is a complex function, . Thus at any place where there is a zero or pole involving the term , the magnitude of that term is . Corrected amplitude plot To correct a straight-line amplitude plot: At every zero, put a point above the line. At every pole, put a point below the line. Draw a smooth curve through those points using the straight lines as asymptotes (lines which the curve approaches). Note that this correction method does not incorporate how to handle complex values of or . In the case of an irreducible polynomial, the best way to correct the plot is to actually calculate the magnitude of the transfer function at the pole or zero corresponding to the irreducible polynomial, and put that dot over or under the line at that pole or zero. Straight-line phase plot Given a transfer function in the same form as above, the idea is to draw separate plots for each pole and zero, then add them up. The actual phase curve is given by To draw the phase plot, for each pole and zero: If is positive, start line (with zero slope) at 0°. If is negative, start line (with zero slope) at −180°. If the sum of the number of unstable zeros and poles is odd, add 180° to that basis. At every (for stable zeros ), increase the slope by degrees per decade, beginning one decade before (e.g., ). At every (for stable poles ), decrease the slope by degrees per decade, beginning one decade before (e.g., ). "Unstable" (right half-plane) poles and zeros () have opposite behavior. Flatten the slope again when the phase has changed by degrees (for a zero) or degrees (for a pole). After plotting one line for each pole or zero, add the lines together to obtain the final phase plot; that is, the final phase plot is the superposition of each earlier phase plot. Example To create a straight-line plot for a first-order (one-pole) low-pass filter, one considers the normalized form of the transfer function in terms of the angular frequency: The Bode plot is shown in Figure 1(b) above, and construction of the straight-line approximation is discussed next. 
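Before the worked example below, the slope-accumulation rules just described can be written as a short routine: each simple zero adds +20 dB per decade from its corner frequency onward and each simple pole subtracts the same. The gain, corner frequencies, and frequency range in this sketch are assumptions made for illustration.

```python
import numpy as np

def straight_line_magnitude(w, gain, zeros, poles):
    """Asymptotic (straight-line) Bode magnitude in dB.

    `zeros` and `poles` are lists of corner frequencies (rad/s) of simple
    real factors (1 + s/w0). Each zero adds +20 dB/decade above its corner
    and each pole adds -20 dB/decade, matching the hand rules in the text.
    """
    mag = np.full_like(w, 20.0 * np.log10(gain), dtype=float)
    for w0 in zeros:
        mag += 20.0 * np.maximum(np.log10(w / w0), 0.0)
    for w0 in poles:
        mag -= 20.0 * np.maximum(np.log10(w / w0), 0.0)
    return mag

w = np.logspace(0, 6, 601)                      # 1 rad/s ... 1 Mrad/s
approx = straight_line_magnitude(w, gain=10.0, zeros=[1.0e4], poles=[1.0e2])
# Exact response of the same assumed transfer function, for comparison:
H = 10.0 * (1 + 1j * w / 1.0e4) / (1 + 1j * w / 1.0e2)
exact = 20.0 * np.log10(np.abs(H))
print("largest asymptote error: %.2f dB" % np.max(np.abs(approx - exact)))  # about 3 dB, at the corners
```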
Magnitude plot The magnitude (in decibels) of the transfer function above (normalized and converted to angular-frequency form), given by the decibel gain expression : Then plotted versus input frequency on a logarithmic scale, can be approximated by two lines, forming the asymptotic (approximate) magnitude Bode plot of the transfer function: The first line for angular frequencies below is a horizontal line at 0 dB, since at low frequencies the term is small and can be neglected, making the decibel gain equation above equal to zero. The second line for angular frequencies above is a line with a slope of −20 dB per decade, since at high frequencies the term dominates, and the decibel gain expression above simplifies to , which is a straight line with a slope of −20 dB per decade. These two lines meet at the corner frequency . From the plot, it can be seen that for frequencies well below the corner frequency, the circuit has an attenuation of 0 dB, corresponding to a unity pass-band gain, i.e. the amplitude of the filter output equals the amplitude of the input. Frequencies above the corner frequency are attenuated the higher the frequency, the higher the attenuation. Phase plot The phase Bode plot is obtained by plotting the phase angle of the transfer function given by versus , where and are the input and cutoff angular frequencies respectively. For input frequencies much lower than corner, the ratio is small, and therefore the phase angle is close to zero. As the ratio increases, the absolute value of the phase increases and becomes −45° when . As the ratio increases for input frequencies much greater than the corner frequency, the phase angle asymptotically approaches −90°. The frequency scale for the phase plot is logarithmic. Normalized plot The horizontal frequency axis, in both the magnitude and phase plots, can be replaced by the normalized (nondimensional) frequency ratio . In such a case the plot is said to be normalized, and units of the frequencies are no longer used, since all input frequencies are now expressed as multiples of the cutoff frequency . An example with zero and pole Figures 2-5 further illustrate construction of Bode plots. This example with both a pole and a zero shows how to use superposition. To begin, the components are presented separately. Figure 2 shows the Bode magnitude plot for a zero and a low-pass pole, and compares the two with the Bode straight line plots. The straight-line plots are horizontal up to the pole (zero) location and then drop (rise) at 20 dB/decade. The second Figure 3 does the same for the phase. The phase plots are horizontal up to a frequency factor of ten below the pole (zero) location and then drop (rise) at 45°/decade until the frequency is ten times higher than the pole (zero) location. The plots then are again horizontal at higher frequencies at a final, total phase change of 90°. Figure 4 and Figure 5 show how superposition (simple addition) of a pole and zero plot is done. The Bode straight line plots again are compared with the exact plots. The zero has been moved to higher frequency than the pole to make a more interesting example. Notice in Figure 4 that the 20 dB/decade drop of the pole is arrested by the 20 dB/decade rise of the zero resulting in a horizontal magnitude plot for frequencies above the zero location. Notice in Figure 5 in the phase plot that the straight-line approximation is pretty approximate in the region where both pole and zero affect the phase. 
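The phase superposition discussed for Figure 5, and continued below, can be checked numerically as well: the straight-line phase of each simple factor ramps at ±45° per decade between one decade below and one decade above its corner, and the individual lines simply add. In this sketch the zero corner is assumed one decade above the pole corner so that the two ramps overlap and cancel, mimicking the flat region described for Figure 5; the specific corner values are not those used in the figures.

```python
import numpy as np

def straight_line_phase(w, corner, sign):
    """Asymptotic phase (degrees) of one simple real factor (1 + s/corner).

    sign = +1 for a zero, -1 for a pole. The line is 0 below corner/10,
    ramps at 45 degrees per decade up to 10*corner, then stays at +/-90 degrees.
    """
    return sign * 45.0 * np.clip(np.log10(w / corner) + 1.0, 0.0, 2.0)

w = np.logspace(0, 6, 601)
wp, wz = 1.0e2, 1.0e3            # assumed pole and zero corners (zero one decade above pole)
approx = straight_line_phase(w, wp, -1) + straight_line_phase(w, wz, +1)
exact = np.degrees(np.angle((1 + 1j * w / wz) / (1 + 1j * w / wp)))

# Between wz/10 and 10*wp (here 100 to 1000 rad/s) both ramps are active and
# their slopes cancel, so the summed straight-line phase is flat at -45 degrees.
print("max |exact - straight-line| phase error: %.1f degrees" % np.max(np.abs(exact - approx)))
```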
Notice also in Figure 5 that the range of frequencies where the phase changes in the straight line plot is limited to frequencies a factor of ten above and below the pole (zero) location. Where the phase of the pole and the zero both are present, the straight-line phase plot is horizontal because the 45°/decade drop of the pole is arrested by the overlapping 45°/decade rise of the zero in the limited range of frequencies where both are active contributors to the phase. Gain margin and phase margin Bode plots are used to assess the stability of negative-feedback amplifiers by finding the gain and phase margins of an amplifier. The notion of gain and phase margin is based upon the gain expression for a negative feedback amplifier given by where AFB is the gain of the amplifier with feedback (the closed-loop gain), β is the feedback factor, and AOL is the gain without feedback (the open-loop gain). The gain AOL is a complex function of frequency, with both magnitude and phase. Examination of this relation shows the possibility of infinite gain (interpreted as instability) if the product βAOL = −1 (that is, the magnitude of βAOL is unity and its phase is −180°, the so-called Barkhausen stability criterion). Bode plots are used to determine just how close an amplifier comes to satisfying this condition. Key to this determination are two frequencies. The first, labeled here as f180, is the frequency where the open-loop gain flips sign. The second, labeled here f0 dB, is the frequency where the magnitude of the product |βAOL| = 1 = 0 dB. That is, frequency f180 is determined by the condition where vertical bars denote the magnitude of a complex number, and frequency f0 dB is determined by the condition One measure of proximity to instability is the gain margin. The Bode phase plot locates the frequency where the phase of βAOL reaches −180°, denoted here as frequency f180. Using this frequency, the Bode magnitude plot finds the magnitude of βAOL. If |βAOL|180 ≥ 1, the amplifier is unstable, as mentioned. If |βAOL|180 < 1, instability does not occur, and the separation in dB of the magnitude of |βAOL|180 from |βAOL| = 1 is called the gain margin. Because a magnitude of 1 is 0 dB, the gain margin is simply one of the equivalent forms: . Another equivalent measure of proximity to instability is the phase margin. The Bode magnitude plot locates the frequency where the magnitude of |βAOL| reaches unity, denoted here as frequency f0 dB. Using this frequency, the Bode phase plot finds the phase of βAOL. If the phase of βAOL(f0 dB) > −180°, the instability condition cannot be met at any frequency (because its magnitude is going to be < 1 when f = f180), and the distance of the phase at f0 dB in degrees above −180° is called the phase margin. If a simple yes or no on the stability issue is all that is needed, the amplifier is stable if f0 dB < f180. This criterion is sufficient to predict stability only for amplifiers satisfying some restrictions on their pole and zero positions (minimum phase systems). Although these restrictions usually are met, if they are not, then another method must be used, such as the Nyquist plot. Optimal gain and phase margins may be computed using Nevanlinna–Pick interpolation theory. Examples using Bode plots Figures 6 and 7 illustrate the gain behavior and terminology. For a three-pole amplifier, Figure 6 compares the Bode plot for the gain without feedback (the open-loop gain) AOL with the gain with feedback AFB (the closed-loop gain). 
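The reading-off procedure for the two margins can also be carried out numerically. The sketch below does so for an assumed three-pole open-loop gain with a frequency-independent feedback factor; the DC gain, pole frequencies, and β are invented for illustration and are not the values behind Figures 6–9.

```python
import numpy as np

# Assumed loop: beta * AOL(f) with three real poles (all values illustrative only).
A0_db, beta_db = 100.0, -58.0                 # AOL(0) = 100 dB, 1/beta = 58 dB
poles_hz = [1.0e3, 1.0e5, 1.0e6]

f = np.logspace(1, 8, 20001)
AOL = 10 ** (A0_db / 20) / np.prod([1 + 1j * f / p for p in poles_hz], axis=0)
loop = 10 ** (beta_db / 20) * AOL             # beta * AOL

mag_db = 20 * np.log10(np.abs(loop))
phase = np.degrees(np.unwrap(np.angle(loop)))

i_0db = np.argmin(np.abs(mag_db))             # frequency where |beta*AOL| = 1 (0 dB)
i_180 = np.argmin(np.abs(phase + 180.0))      # frequency where the phase reaches -180 deg

phase_margin = phase[i_0db] + 180.0
gain_margin = -mag_db[i_180]
print(f"f_0dB ~ {f[i_0db]:.3g} Hz, phase margin ~ {phase_margin:.1f} deg")
print(f"f_180 ~ {f[i_180]:.3g} Hz, gain margin ~ {gain_margin:.1f} dB")
```

Both margins come out positive for this assumed loop, i.e. the amplifier would be stable with room to spare; shrinking the pole spacing pushes both margins toward zero.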
See negative feedback amplifier for more detail. In this example, AOL = 100 dB at low frequencies, and 1 / β = 58 dB. At low frequencies, AFB ≈ 58 dB as well. Because the open-loop gain AOL is plotted and not the product β AOL, the condition AOL = 1 / β decides f0 dB. The feedback gain at low frequencies and for large AOL is AFB ≈ 1 / β (look at the formula for the feedback gain at the beginning of this section for the case of large gain AOL), so an equivalent way to find f0 dB is to look where the feedback gain intersects the open-loop gain. (Frequency f0 dB is needed later to find the phase margin.) Near this crossover of the two gains at f0 dB, the Barkhausen criteria are almost satisfied in this example, and the feedback amplifier exhibits a massive peak in gain (it would be infinity if β AOL = −1). Beyond the unity gain frequency f0 dB, the open-loop gain is sufficiently small that AFB ≈ AOL (examine the formula at the beginning of this section for the case of small AOL). Figure 7 shows the corresponding phase comparison: the phase of the feedback amplifier is nearly zero out to the frequency f180 where the open-loop gain has a phase of −180°. In this vicinity, the phase of the feedback amplifier plunges abruptly downward to become almost the same as the phase of the open-loop amplifier. (Recall, AFB ≈ AOL for small AOL.) Comparing the labeled points in Figure 6 and Figure 7, it is seen that the unity gain frequency f0 dB and the phase-flip frequency f180 are very nearly equal in this amplifier, f180 ≈ f0 dB ≈ 3.332 kHz, which means the gain margin and phase margin are nearly zero. The amplifier is borderline stable. Figures 8 and 9 illustrate the gain margin and phase margin for a different amount of feedback β. The feedback factor is chosen smaller than in Figure 6 or 7, moving the condition | β AOL | = 1 to lower frequency. In this example, 1 / β = 77 dB, and at low frequencies AFB ≈ 77 dB as well. Figure 8 shows the gain plot. From Figure 8, the intersection of 1 / β and AOL occurs at f0 dB = 1 kHz. Notice that the peak in the gain AFB near f0 dB is almost gone. Figure 9 is the phase plot. Using the value of f0 dB = 1 kHz found above from the magnitude plot of Figure 8, the open-loop phase at f0 dB is −135°, which is a phase margin of 45° above −180°. Using Figure 9, for a phase of −180° the value of f180 = 3.332 kHz (the same result as found earlier, of course). The open-loop gain from Figure 8 at f180 is 58 dB, and 1 / β = 77 dB, so the gain margin is 19 dB. Stability is not the sole criterion for amplifier response, and in many applications a more stringent demand than stability is good step response. As a rule of thumb, good step response requires a phase margin of at least 45°, and often a margin of over 70° is advocated, particularly where component variation due to manufacturing tolerances is an issue. See also the discussion of phase margin in the step response article. Bode plotter The Bode plotter is an electronic instrument resembling an oscilloscope, which produces a Bode diagram, or a graph, of a circuit's voltage gain or phase shift plotted against frequency in a feedback control system or a filter. An example of this is shown in Figure 10. It is extremely useful for analyzing and testing filters and the stability of feedback control systems, through the measurement of corner (cutoff) frequencies and gain and phase margins. This is identical to the function performed by a vector network analyzer, but the network analyzer is typically used at much higher frequencies. 
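The two margin figures quoted for Figures 8 and 9 follow from simple subtractions, re-derived below using only the values stated in the text.

```python
# Values read from the Bode plots of Figures 8 and 9 (as quoted in the text).
one_over_beta_db = 77.0          # 1/beta in dB
aol_at_f180_db = 58.0            # |AOL| at f180 = 3.332 kHz
phase_at_f0db_deg = -135.0       # open-loop phase at f0dB = 1 kHz

gain_margin_db = one_over_beta_db - aol_at_f180_db    # how far |beta*AOL| sits below 0 dB at f180
phase_margin_deg = phase_at_f0db_deg - (-180.0)       # distance above -180 deg at f0dB

print(gain_margin_db, "dB gain margin")    # 19.0
print(phase_margin_deg, "deg phase margin")  # 45.0
```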
For education and research purposes, plotting Bode diagrams for given transfer functions facilitates better understanding and getting faster results (see external links). Related plots Two related plots that display the same data in different coordinate systems are the Nyquist plot and the Nichols plot. These are parametric plots, with frequency as the input and magnitude and phase of the frequency response as the output. The Nyquist plot displays these in polar coordinates, with magnitude mapping to radius and phase to argument (angle). The Nichols plot displays these in rectangular coordinates, on the log scale. See also Analog signal processing Phase margin Bode's sensitivity integral Bode's magnitude (gain)–phase relation Dielectric spectroscopy Notes References External links How to draw piecewise asymptotic Bode plots Gnuplot code for generating Bode plot: DIN-A4 printing template (pdf) Plots (graphics) Signal processing Electronic feedback Electronic amplifiers Electronics concepts Electrical parameters Classical control theory Filter frequency response
Bode plot
[ "Technology", "Engineering" ]
4,011
[ "Telecommunications engineering", "Computer engineering", "Signal processing", "Electrical engineering", "Electronic amplifiers", "Amplifiers", "Electrical parameters" ]
155,625
https://en.wikipedia.org/wiki/Penetrance
Penetrance in genetics is the proportion of individuals carrying a particular variant (or allele) of a gene (genotype) that also expresses an associated trait (phenotype). In medical genetics, the penetrance of a disease-causing mutation is the proportion of individuals with the mutation that exhibit clinical symptoms among all individuals with such mutation. For example: If a mutation in the gene responsible for a particular autosomal dominant disorder has 95% penetrance, then 95% of those with the mutation will go on to develop the disease, showing its phenotype, whereas 5% will not.   Penetrance only refers to whether an individual with a specific genotype exhibits any phenotypic signs or symptoms, and is not to be confused with variable expressivity which is to what extent or degree the symptoms for said disease are shown (the expression of the phenotypic trait). Meaning that, even if the same disease-causing mutation affects separate individuals, the expressivity will vary. Degrees of penetrance Complete penetrance If 100% of individuals carrying a particular genotype express the associated trait, the genotype is said to show complete penetrance. Neurofibromatosis type 1 (NF1), is an autosomal dominant condition which shows complete penetrance, consequently everyone who inherits the disease-causing variant of this gene will develop some degree of symptoms for NF1. Reduced penetrance The penetrance is said to be reduced if less than 100% of individuals carrying a particular genotype express associated traits, and is likely to be caused by a combination of genetic, environmental and lifestyle factors. BRCA1 is an example of a genotype with reduced penetrance. By age 70, the mutation is estimated to have a breast cancer penetrance of around 65% in women. Meaning that about 65% of women carrying the gene will develop breast cancer by the time they turn 70. Non-penetrance: Within the category of reduced penetrance, individuals carrying the mutation without displaying any signs or symptoms, are said to have a genotype that is non-penetrant. For the BRCA1 example above, the remaining 35% which never develop breast cancer, are therefore carrying the mutation, but it is non-penetrant. This can lead to healthy, unaffected parents carrying the mutation on to future generations that might be affected. Factors affecting penetrance Many factors such as age, sex, environment, epigenetic modifiers, and modifier genes are linked to penetrance. These factors can help explain why certain individuals with a specific genotype exhibit symptoms or signs of disease, whilst others do not. Age-dependent penetrance If clinical signs associated with a specific genotype appear more frequently with increasing age, the penetrance is said to be age dependent. Some diseases are non-penetrant up until a certain age and then the penetrance starts to increase drastically, whilst others exhibit low penetrance at an early age and continue to increase with time. For this reason, many diseases have a different estimated penetrance dependent on the age. A specific hexanucleotide repeat expansion within the C9orf72 gene said to be a major cause for developing amyotrophic lateral sclerosis (ALS) and frontotemporal dementia (FTD) is an example of a genotype with age dependent penetrance. The genotype is said to be non-penetrant until the age of 35, 50% penetrant by the age of 60, and almost completely penetrant by age 80. 
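Since penetrance is defined as a proportion of carriers, a point estimate and a rough confidence interval can be computed directly from carrier counts. The cohort size and number of affected carriers in this sketch are hypothetical, and a real study would also have to address the ascertainment issues discussed later in this article.

```python
import math

def estimate_penetrance(n_carriers, n_affected, z=1.96):
    """Point estimate and ~95% normal-approximation CI for penetrance.

    n_carriers: individuals carrying the variant
    n_affected: carriers who express the associated phenotype
    """
    p = n_affected / n_carriers
    se = math.sqrt(p * (1 - p) / n_carriers)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical cohort: 240 carriers of a dominant variant, 228 with symptoms.
p, lo, hi = estimate_penetrance(240, 228)
print(f"penetrance ~ {p:.1%} (95% CI {lo:.1%} - {hi:.1%})")  # about 95%, i.e. reduced penetrance
```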
Gender-related penetrance For some mutations, the phenotype is more frequently present in one sex and in rare cases mutations appear completely non-penetrant in a particular gender. This is called gender-related penetrance or sex-dependent penetrance and may be the result of allelic variation, disorders in which the expression of the disease is limited to organs only found in one sex such as testis or ovaries, or sex steroid-responsive genes. Breast cancer caused by the BRCA2 mutation is an example of a disease with gender-related penetrance. The penetrance is determined to be much higher in women than men. By age 70, around 86% of females in contrast to 6% of males with the same mutation is estimated to develop breast cancer. In cases where clinical symptoms or the phenotype related to a genetic mutation are present only in one sex, the disorder is said to be sex-limited. Familial male-limited precocious puberty (FMPP) caused by a mutation in the LHCGR gene, is an example of a genotype only penetrant in males. Meaning that males with this particular genotype exhibit symptoms of the disease whilst the same genotype is nonpenetrant in females. Genetic modifiers Genetic modifiers are genetic variants or mutations able to modify a primary disease-causing variant's phenotypic outcome without being disease causing themselves. For instance, in single gene disorders there is one gene primarily responsible for development of the disease, but modifier genes inherited separately can affect the phenotype. Meaning that the presence of a mutation located on a loci different from the one with the disease-causing mutation, may either hinder manifestation of the phenotype or alter the mutations effects, and thereby influencing the penetrance. Environmental modifiers Exposure to environmental and lifestyle factors such as chemicals, diet, alcohol intake, drugs and stress are some of the factors that might influence disease penetrance. For example, several studies of BRCA1 and BRCA2 mutations, associated with an elevated risk of breast and ovarian cancer in women, have examined associations with environmental and behavioral modifiers such as pregnancies, history of breast feeding, smoking, diet, and so forth. Epigenetic regulation Sometimes, genetic alterations which can cause genetic disease and phenotypic traits, are not from changes related directly to the DNA sequence, but from epigenetic alterations such as DNA methylation or histone modifications. Epigenetic differences may therefore be one of the factors contributing to reduced penetrance. A study done on a pair of genetically identical monozygotic twins, where one twin got diagnosed with leukemia and later on thyroid carcinoma whilst the other had no registered illnesses, showed that the affected twin had increased methylation levels of the BRCA 1 gene. The research concluded that the family had no known DNA-repair syndrome or any other hereditary diseases in the last four generations, and no genetic differences between the studied pair of monozygotic twins were detected in the BRCA1 regulatory region. This indicates that epigenetic changes caused by environmental or behavioral factors had a key role in the cause of promotor hypermethylation of the BRCA1 gene in the affected twin, which caused the cancer. Determining penetrance It can be challenging to estimate the penetrance of a specific genotype due to all the influencing factors. 
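Because factors such as sex and age shift penetrance, estimates are usually reported per stratum rather than as a single pooled figure. A minimal sketch with hypothetical counts (the 86% and 6% figures echo the BRCA2 example above, but the cohort sizes are invented):

```python
# Hypothetical counts of affected individuals among carriers, split by sex.
carriers = {"female": 500, "male": 480}
affected = {"female": 430, "male": 29}

pooled = sum(affected.values()) / sum(carriers.values())
print(f"pooled penetrance: {pooled:.0%}")   # a single figure hides the sex difference

for sex in carriers:
    print(f"{sex} penetrance by age 70: {affected[sex] / carriers[sex]:.0%}")
```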
In addition to the factors mentioned above there are several other considerations that must be taken into account when penetrance is determined: Ascertainment bias Penetrance estimates can be affected by ascertainment bias if the sampling is not systematic. Traditionally a phenotype-driven approach focusing on individuals with a given condition and their family members has been used to determine penetrance. However, it may be difficult to transfer these estimates over to the general population because family members may share other genetic and/or environmental factors that could influence manifestation of said disease, leading to ascertainment bias and an overestimation of the penetrance. Large-scale population-based studies, which use both genetic sequencing and phenotype data from large groups of people, is a different method for determining penetrance. This method offers less upward bias compared to family-based studies and is more accurate the larger the sample population is. These studies may contain a healthy-participant-bias which can lead to lower penetrance estimates. Phenocopies A genotype with complete penetrance will always display the clinical phenotypic traits related to its mutation (taking into consideration the expressivity), but the signs or symptoms displayed by a specific affected individual can often be similar to other unrelated phenotypical traits. Taking into consideration the effect that environmental or behavioral modifiers have, and how they can impact the cause of a mutation or epigenetic alteration, we now have the cause as to how different paths lead to the same phenotypic display. When similar phenotypes can be observed but by different causes, it is called phenocopies. Phenocopies is when environmental and/or behavioral modifiers causes an illness which mimics the phenotype of a genetic inherited disease. Because of phenocopies, determining the degree of penetrance for a genetic disease requires full knowledge of the individuals attending the studies, and the factors that may or may not have caused their illness.       For example, new research on Hypertrophic Cardiomyopathy (HCM) based on a technique called Cardiac Magnetic Resonance (CMR), describes how various genetic illnesses that showcase the same phenotypic traits as HCM, are actually phenocopies. Previously these phenocopies were all diagnosed and treated, thought to arrive from the same cause, but because of new diagnostic methods, they can now be separated and treated more efficiently. Subjects not yet covered Allelic heterogeneity Polygenic inheritance Locus heterogeneity References External links Tutorial about the different aspects of genetic penetrance. Medical genetics Genetic diseases and disorders Genetics
Penetrance
[ "Biology" ]
1,984
[ "Genetics" ]
155,627
https://en.wikipedia.org/wiki/Ibuprofen
Ibuprofen is a nonsteroidal anti-inflammatory drug (NSAID) that is used to relieve pain, fever, and inflammation. This includes painful menstrual periods, migraines, and rheumatoid arthritis. It may also be used to close a patent ductus arteriosus in a premature baby. It can be taken orally (by mouth) or intravenously. It typically begins working within an hour. Common side effects include heartburn, nausea, indigestion, and abdominal pain. As with other NSAIDs, potential side effects include gastrointestinal bleeding. Long-term use has been associated with kidney failure, and rarely liver failure, and it can exacerbate the condition of patients with heart failure. At low doses, it does not appear to increase the risk of heart attack; however, at higher doses it may. Ibuprofen can also worsen asthma. While its safety in early pregnancy is unclear, it appears to be harmful in later pregnancy, so it is not recommended during that period. Like other NSAIDs, it works by inhibiting the production of prostaglandins by decreasing the activity of the enzyme cyclooxygenase (COX). Ibuprofen is a weaker anti-inflammatory agent than other NSAIDs. Ibuprofen was discovered in 1961 by Stewart Adams and John Nicholson while working at Boots UK Limited and initially marketed as Brufen. It is available under a number of brand names including Advil, Motrin, and Nurofen. Ibuprofen was first marketed in 1969 in the United Kingdom and in 1974 in the United States. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2022, it was the 33rd most commonly prescribed medication in the United States, with more than 17million prescriptions. Medical uses Ibuprofen is used primarily to treat fever (including postvaccination fever), mild to moderate pain (including pain relief after surgery), painful menstruation, osteoarthritis, dental pain, headaches, and pain from kidney stones. About 60% of people respond to any NSAID; those who do not respond well to a particular one may respond to another. A Cochrane medical review of 51 trials of NSAIDs for the treatment of lower back pain found that "NSAIDs are effective for short-term symptomatic relief in patients with acute low back pain". It is used for inflammatory diseases such as juvenile idiopathic arthritis and rheumatoid arthritis. It is also used for pericarditis and patent ductus arteriosus. Ibuprofen lysine In some countries, ibuprofen lysine (the lysine salt of ibuprofen, sometimes called "ibuprofen lysinate") is licensed for treatment of the same conditions as ibuprofen; the lysine salt is used because it is more water-soluble. In 2006, ibuprofen lysine was approved in the United States by the Food and Drug Administration (FDA) for closure of patent ductus arteriosus in premature infants weighing between , who are no more than 32 weeks gestational age when usual medical management (such as fluid restriction, diuretics, and respiratory support) is not effective. Adverse effects Adverse effects include nausea, heartburn, indigestion, diarrhea, constipation, gastrointestinal ulceration, headache, dizziness, rash, salt and fluid retention, and high blood pressure. Infrequent adverse effects include esophageal ulceration, heart failure, high blood levels of potassium, kidney impairment, confusion, and bronchospasm. Ibuprofen can exacerbate asthma, sometimes fatally. Allergic reactions, including anaphylaxis, may occur. 
Ibuprofen may be quantified in blood, plasma, or serum to demonstrate the presence of the drug in a person having experienced an anaphylactic reaction, confirm a diagnosis of poisoning in people who are hospitalized, or assist in a medicolegal death investigation. A monograph relating ibuprofen plasma concentration, time since ingestion, and risk of developing renal toxicity in people who have overdosed has been published. In October 2020, the U.S. FDA required the drug label to be updated for all NSAID medications to describe the risk of kidney problems in unborn babies that result in low amniotic fluid. Cardiovascular risk Along with several other NSAIDs, chronic ibuprofen use is correlated with the risk of progression to hypertension in women, though less than for paracetamol (acetaminophen), and myocardial infarction (heart attack), particularly among those chronically using higher doses. On 9 July 2015, the U.S. FDA toughened warnings of increased heart attack and stroke risk associated with ibuprofen and related NSAIDs; the NSAID aspirin is not included in this warning. The European Medicines Agency (EMA) issued similar warnings in 2015. Skin Along with other NSAIDs, ibuprofen has been associated with the onset of bullous pemphigoid or pemphigoid-like blistering. As with other NSAIDs, ibuprofen has been reported to be a photosensitizing agent, but it is considered a weak photosensitizing agent compared to other members of the 2-arylpropionic acid class. Like other NSAIDs, ibuprofen is an extremely rare cause of the autoimmune diseases Stevens–Johnson syndrome (SJS) and toxic epidermal necrolysis. Interactions Alcohol Drinking alcohol when taking ibuprofen may increase the risk of stomach bleeding. Aspirin According to the FDA, "ibuprofen can interfere with the antiplatelet effect of low-dose aspirin, potentially rendering aspirin less effective when used for cardioprotection and stroke prevention". Allowing sufficient time between doses of ibuprofen and immediate-release (IR) aspirin can avoid this problem. The recommended elapsed time between a dose of ibuprofen and a dose of aspirin depends on which is taken first. It would be 30 minutes or more for ibuprofen taken after IR aspirin, and 8 hours or more for ibuprofen taken before IR aspirin. However, this timing cannot be recommended for enteric-coated aspirin. If ibuprofen is taken only occasionally without the recommended timing, though, the reduction of the cardioprotection and stroke prevention of a daily aspirin regimen is minimal. Paracetamol (acetaminophen) Ibuprofen combined with paracetamol is considered generally safe in children for short-term usage. Overdose Ibuprofen overdose has become common since it was licensed for over-the-counter (OTC) use. Many overdose experiences are reported in the medical literature, although the frequency of life-threatening complications from ibuprofen overdose is low. Human responses in cases of overdose range from an absence of symptoms to a fatal outcome despite intensive-care treatment. Most symptoms are an excess of the pharmacological action of ibuprofen and include abdominal pain, nausea, vomiting, drowsiness, dizziness, headache, ear ringing, and nystagmus. Rarely, more severe symptoms such as gastrointestinal bleeding, seizures, metabolic acidosis, hyperkalemia, low blood pressure, slow heart rate, fast heart rate, atrial fibrillation, coma, liver dysfunction, acute kidney failure, cyanosis, respiratory depression, and cardiac arrest have been reported. 
The severity of symptoms varies with the ingested dose and the time elapsed; however, individual sensitivity also plays an important role. Generally, the symptoms observed with an overdose of ibuprofen are similar to those caused by overdoses of other NSAIDs, and the correlation between the severity of symptoms and measured ibuprofen plasma levels is weak. Toxic effects are unlikely at doses below 100 mg/kg but can be severe above 400 mg/kg (around 150 tablets of 200 mg each for an average adult male); however, even large doses do not necessarily indicate that the clinical course will be lethal. A precise lethal dose is difficult to determine, as it may vary with age, weight, and concomitant conditions of the person.
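The tablet-count figure quoted above follows directly from the per-weight thresholds. The short sketch below is a minimal illustration rather than a clinical tool: it converts a number of 200 mg tablets into a mg/kg dose for an assumed body weight (the 75 kg figure and the function names are assumptions introduced here, not taken from the source) and compares the result against the 100 mg/kg and 400 mg/kg thresholds mentioned above.

# Minimal sketch: convert a count of 200 mg ibuprofen tablets into a mg/kg dose
# and compare it with the thresholds quoted in the text (100 mg/kg and 400 mg/kg).
# The 75 kg body weight is an illustrative assumption, not a value from the source.

TABLET_MG = 200          # common OTC tablet strength mentioned in the text
MILD_THRESHOLD = 100     # mg/kg: toxic effects unlikely below this
SEVERE_THRESHOLD = 400   # mg/kg: effects can be severe above this

def dose_per_kg(tablets: int, body_weight_kg: float) -> float:
    """Return the ingested dose in mg per kg of body weight."""
    return tablets * TABLET_MG / body_weight_kg

def classify(dose_mg_per_kg: float) -> str:
    """Rough banding of the ingested dose against the quoted thresholds."""
    if dose_mg_per_kg < MILD_THRESHOLD:
        return "toxic effects unlikely"
    if dose_mg_per_kg <= SEVERE_THRESHOLD:
        return "intermediate dose"
    return "potentially severe"

if __name__ == "__main__":
    weight = 75.0  # assumed average adult male body weight, kg
    for tablets in (10, 40, 150):
        d = dose_per_kg(tablets, weight)
        print(f"{tablets:>3} tablets -> {d:6.1f} mg/kg ({classify(d)})")
    # 150 tablets at 75 kg gives 400 mg/kg, matching the figure in the text.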
Treatment of an ibuprofen overdose is based on how the symptoms present. In cases presenting early, decontamination of the stomach is recommended, using activated charcoal; the charcoal adsorbs the drug before it can enter the bloodstream. Gastric lavage is now rarely used, but can be considered if the amount ingested is potentially life-threatening and the procedure can be performed within 60 minutes of ingestion. Purposeful vomiting is not recommended. Most ibuprofen ingestions produce only mild effects, and the management of overdose is straightforward. Standard measures to maintain normal urine output should be instituted and kidney function monitored. Since ibuprofen has acidic properties and is also excreted in the urine, forced alkaline diuresis is theoretically beneficial. However, because ibuprofen is highly protein-bound in the blood, renal excretion of the unchanged drug is minimal, so forced alkaline diuresis is of limited benefit.

Miscarriage

A Canadian study of pregnant women suggested that those taking any type or amount of NSAIDs (including ibuprofen, diclofenac, and naproxen) were 2.4 times more likely to miscarry than those not taking the medications. However, an Israeli study found no increased risk of miscarriage in the group of mothers using NSAIDs.

Pharmacology

Ibuprofen works by inhibiting cyclooxygenase (COX) enzymes, which convert arachidonic acid to prostaglandin H2 (PGH2). PGH2, in turn, is converted by other enzymes into various prostaglandins (which mediate pain, inflammation, and fever) and thromboxane A2 (which stimulates platelet aggregation and promotes blood clot formation). Like aspirin and indomethacin, ibuprofen is a nonselective COX inhibitor, in that it inhibits both isoforms of cyclooxygenase, COX-1 and COX-2. The analgesic, antipyretic, and anti-inflammatory activity of NSAIDs appears to operate mainly through inhibition of COX-2, which decreases the synthesis of prostaglandins involved in mediating inflammation, pain, fever, and swelling. Antipyretic effects may be due to action on the hypothalamus, resulting in increased peripheral blood flow, vasodilation, and subsequent heat dissipation. Inhibition of COX-1 would instead be responsible for unwanted effects on the gastrointestinal tract. However, the role of the individual COX isoforms in the analgesic, anti-inflammatory, and gastric-damage effects of NSAIDs is uncertain, and different compounds cause different degrees of analgesia and gastric damage.

Ibuprofen is administered as a racemic mixture. The R-enantiomer undergoes extensive interconversion to the S-enantiomer in vivo, and the S-enantiomer is believed to be the more pharmacologically active one. The conversion proceeds through a series of three main enzymes: acyl-CoA synthetase, which converts the R-enantiomer to (−)-R-ibuprofen I-CoA; 2-arylpropionyl-CoA epimerase, which converts (−)-R-ibuprofen I-CoA to (+)-S-ibuprofen I-CoA; and a hydrolase, which converts (+)-S-ibuprofen I-CoA to the S-enantiomer. In addition to this conversion, the body can metabolize ibuprofen to several other compounds, including numerous hydroxyl, carboxyl, and glucuronyl metabolites, virtually all of which have no pharmacological effects.

Unlike most other NSAIDs, ibuprofen also acts as an inhibitor of Rho kinase and may be useful in recovery from spinal cord injury. Another unusual activity is inhibition of the sweet taste receptor.

Pharmacokinetics

After oral administration, peak serum concentration is reached after 1 to 2 hours, and up to 99% of the drug is bound to plasma proteins. The majority of ibuprofen is metabolized and eliminated within 24 hours in the urine; however, about 1% of the unchanged drug is removed through biliary excretion.
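To make the statement about 24-hour elimination concrete, the sketch below applies simple first-order elimination kinetics. The plasma half-life of roughly 2 hours used here is an assumption introduced for illustration (it is not stated in the text above), and the model ignores absorption, active metabolites, and protein binding.

# Minimal first-order elimination sketch. Assumes a plasma half-life of about
# 2 hours, which is an illustrative assumption not taken from the text above.
import math

HALF_LIFE_H = 2.0                       # assumed elimination half-life (hours)
K_ELIM = math.log(2) / HALF_LIFE_H      # first-order elimination rate constant

def fraction_remaining(hours: float) -> float:
    """Fraction of the absorbed dose still in plasma after `hours`, ignoring absorption."""
    return math.exp(-K_ELIM * hours)

if __name__ == "__main__":
    for t in (2, 6, 12, 24):
        print(f"after {t:>2} h: {fraction_remaining(t) * 100:6.3f}% remaining")
    # After 24 h the remaining fraction is about 0.02%, consistent with the
    # statement that most of the drug is eliminated within 24 hours.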
Chemistry

Ibuprofen is practically insoluble in water but very soluble in most organic solvents, such as ethanol (66.18 g/100 mL at 40 °C in 90% ethanol), methanol, acetone, and dichloromethane.

The original synthesis of ibuprofen by the Boots Group started with the compound isobutylbenzene and took six steps. A modern, greener technique with fewer waste byproducts involves only three steps and was developed in the 1980s by the Celanese Chemical Company. This synthesis begins with the acylation of isobutylbenzene using the recyclable Lewis acid catalyst hydrogen fluoride. Catalytic hydrogenation of the resulting isobutylacetophenone, with either Raney nickel or palladium on carbon, leads to the key step: carbonylation of 1-(4-isobutylphenyl)ethanol. This is achieved with a PdCl2(PPh3)2 catalyst at around 50 bar of CO pressure in the presence of HCl (10%). The reaction presumably proceeds through the intermediacy of the styrene derivative (acidic elimination of the alcohol) and the (1-chloroethyl)benzene derivative (Markovnikov addition of HCl to the double bond).

Stereochemistry

Ibuprofen, like other 2-arylpropionate derivatives such as ketoprofen, flurbiprofen, and naproxen, contains a stereocenter in the α-position of the propionate moiety. The product sold in pharmacies is a racemic mixture of the S- and R-isomers. The S (dextrorotatory) isomer is the more biologically active; this isomer has been isolated and used medically (see dexibuprofen for details). The isomerase enzyme alpha-methylacyl-CoA racemase converts (R)-ibuprofen into the (S)-enantiomer. (S)-ibuprofen, the eutomer, carries the desired therapeutic activity, while the less active (R)-enantiomer, the distomer, undergoes a unidirectional chiral inversion to give the active (S)-enantiomer. That is, when ibuprofen is administered as a racemate, the distomer is converted in vivo into the eutomer, while the latter is unaffected.

History

Ibuprofen was derived from propionic acid by the research arm of Boots Group during the 1960s. The name is derived from its three functional groups: isobutyl (ibu), propionic acid (pro), and phenyl (fen). Its discovery was the result of research during the 1950s and 1960s to find a safer alternative to aspirin. The molecule was discovered and synthesized by a team led by Stewart Adams, with a patent application filed in 1961. Adams initially tested the drug as a treatment for his hangover.

The medication was launched as a treatment for rheumatoid arthritis in the United Kingdom in 1969, and in the United States in 1974. Later, in 1983 and 1984, it became the first NSAID (other than aspirin) to be available over-the-counter (OTC) in these two countries. In 1985, Boots' worldwide patent for ibuprofen expired and generic products were launched. Boots was awarded the Queen's Award for Technical Achievement in 1985 for the development of the drug. In November 2013, work on ibuprofen was recognized with Royal Society of Chemistry blue plaques at Boots' Beeston Factory site in Nottingham and at BioCity Nottingham, the site of the original laboratory.

Availability and administration

Ibuprofen was made available by prescription in the United Kingdom in 1969 and in the United States in 1974. Ibuprofen is the international nonproprietary name (INN), British Approved Name (BAN), Australian Approved Name (AAN), and United States Adopted Name (USAN). In the United States, it has been sold under the brand names Motrin and Advil since 1974 and 1984, respectively. Ibuprofen is commonly available in the United States OTC up to the dose limit the FDA set in 1984; higher doses are occasionally used by prescription. In 2009, the first injectable formulation of ibuprofen was approved in the United States under the brand name Caldolor. Ibuprofen can be taken orally (as a tablet, a capsule, or a suspension) or intravenously.

Research

Ibuprofen is sometimes used for the treatment of acne because of its anti-inflammatory properties, and has been sold in Japan in topical form for adult acne. As with other NSAIDs, ibuprofen may be useful in the treatment of severe orthostatic hypotension (low blood pressure when standing up).

NSAIDs are of unclear utility in the prevention and treatment of Alzheimer's disease. Ibuprofen has been associated with a lower risk of Parkinson's disease and may delay or prevent it; aspirin, other NSAIDs, and paracetamol (acetaminophen) had no effect on the risk for Parkinson's. In March 2011, researchers at Harvard Medical School announced in Neurology that ibuprofen had a neuroprotective effect against the risk of developing Parkinson's disease. People regularly consuming ibuprofen were reported to have a 38% lower risk of developing Parkinson's disease, but no such effect was found for other pain relievers, such as aspirin and paracetamol. Use of ibuprofen to lower the risk of Parkinson's disease in the general population would not be problem-free, given the possibility of adverse effects on the urinary and digestive systems.

Some dietary supplements might be dangerous to take along with ibuprofen and other NSAIDs, but more research is needed to be certain. These supplements include those that can prevent platelet aggregation, including ginkgo, garlic, ginger, bilberry, dong quai, feverfew, ginseng, turmeric, meadowsweet (Filipendula ulmaria), and willow (Salix spp.); those that contain coumarin, including chamomile, horse chestnut, fenugreek, and red clover; and those that increase the risk of bleeding, such as tamarind.

Ibuprofen lysine is sold for rapid pain relief; given as the lysine salt, absorption is much quicker (35 minutes for the salt compared with 90 to 120 minutes for ibuprofen). However, a clinical trial with 351 participants in 2020, funded by Sanofi, found no significant difference between ibuprofen and ibuprofen lysine in either onset of action or analgesic efficacy.