Dataset schema:
id: int64 (39 to 79M)
url: string (length 31 to 227)
text: string (length 6 to 334k)
source: string (length 1 to 150)
categories: list (length 1 to 6)
token_count: int64 (3 to 71.8k)
subcategories: list (length 0 to 30)
8,850,053
https://en.wikipedia.org/wiki/Service%20contour
In US broadcasting, service contour (or protected contour) refers to the area in which the Federal Communications Commission (FCC) predicts coverage. The FCC calculates FM and TV contours based on effective radiated power (ERP) in a given direction, the radial height above average terrain (HAAT) in a given direction, the FCC's propagation curves, and the station's class. AM contours are based on the standard ground wave field strength pattern, the frequency, and the ground conductivity in the area. While the FCC makes FM and TV service contour data readily available, AM contour data is not available as a separate data file; it can be obtained through an AM Query, in the 'maps' section of each resulting record, when the 'detailed output' option is used. External links FM and TV Service Contour Data (Official) FM Database protected contour maps (Official) AM Station Query (Official) FM Station Query (Official) TV Station Query (Official) Plot predicted AM/FM coverage patterns (unofficial) Broadcast engineering
Service contour
[ "Engineering" ]
217
[ "Broadcast engineering", "Electronic engineering" ]
8,850,128
https://en.wikipedia.org/wiki/2-Chloropropionic%20acid
2-Chloropropionic acid (2-chloropropanoic acid) is the chemical compound with the formula CH3CHClCO2H. This colorless liquid is the simplest chiral chlorocarboxylic acid, and it is noteworthy for being readily available as a single enantiomer. The conjugate base of 2-chloropropionic acid (CH3CHClCO2−) and its salts and esters are known as 2-chloropropionates or 2-chloropropanoates. Preparation Racemic 2-chloropropionic acid is produced by chlorination of propionyl chloride followed by hydrolysis of the 2-chloropropionyl chloride. Enantiomerically pure (S)-2-chloropropionic acid can be prepared from L-alanine via diazotization in hydrochloric acid. Other α-amino acids undergo this reaction. Reactions Reduction of (S)-2-chloropropionic acid with lithium aluminium hydride affords (S)-2-chloropropanol, the simplest chiral chloro-alcohol. This alcohol undergoes cyclization upon treatment with potassium hydroxide, which causes dehydrohalogenation to give the epoxide, (R)-propylene oxide (methyloxirane). 2-Chloropropionyl chloride reacts with isobutylbenzene to give, after hydrolysis, ibuprofen. Safety In general, α-halocarboxylic acids and their esters are good alkylating agents and should be handled with care. 2-Chloropropionic acid is a neurotoxin. See also 2,2-Dichloropropionic acid References Carboxylic acids Organochlorides
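As a quick check on the composition and the single stereocenter mentioned above, here is a minimal RDKit sketch in Python; the SMILES string and the use of RDKit are illustrative assumptions, not details taken from the article.

```python
# Minimal sketch (assumes RDKit is installed): build 2-chloropropionic acid
# from a SMILES string and confirm it has one stereocenter at C2.
from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

smiles = "CC(Cl)C(=O)O"  # 2-chloropropionic acid, stereochemistry left unspecified
mol = Chem.MolFromSmiles(smiles)

print(rdMolDescriptors.CalcMolFormula(mol))    # C3H5ClO2
print(round(Descriptors.MolWt(mol), 2))        # ~108.52 g/mol
# includeUnassigned=True reports the unlabeled stereocenter as '?'
print(Chem.FindMolChiralCenters(mol, includeUnassigned=True))  # [(1, '?')]
```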
2-Chloropropionic acid
[ "Chemistry" ]
400
[ "Carboxylic acids", "Functional groups" ]
8,850,372
https://en.wikipedia.org/wiki/Counterimmunoelectrophoresis
Counterimmunoelectrophoresis is a laboratory technique used to evaluate the binding of an antibody to its antigen. It is similar to immunodiffusion, but with the addition of an applied electrical field across the diffusion medium, usually an agar or polyacrylamide gel. The effect is rapid migration of the antibody and antigen out of their respective wells towards one another to form a line of precipitation, or a precipitin line, indicating binding. See also Electrophoresis Immunoelectrophoresis References External links https://web.archive.org/web/20070613005107/http://www.lib.mcg.edu/edu/esimmuno/ch4/electro.htm Immunologic tests Blood tests
Counterimmunoelectrophoresis
[ "Chemistry", "Biology" ]
174
[ "Blood tests", "Chemical pathology", "Immunologic tests" ]
8,850,466
https://en.wikipedia.org/wiki/Roof%20%28Chinese%20constellation%29
The Roof mansion is one of the twenty-eight mansions of the Chinese constellations. It is one of the northern mansions of the Black Tortoise. Asterisms References Chinese constellations
Roof (Chinese constellation)
[ "Astronomy" ]
42
[ "Chinese constellations", "Constellations" ]
8,851,139
https://en.wikipedia.org/wiki/Gas%20well%20deliquification
Gas well deliquification, also referred to as "gas well dewatering", is the general term for technologies used to remove water or condensate build-up from producing gas wells. When natural gas flows to the surface in a producing gas well, the gas carries liquids to the surface if the velocity of the gas is high enough. A high gas velocity results in a mist flow pattern in which liquids are finely dispersed in the gas. Consequently, only a small volume of liquid is present in the tubing or production conduit, and the pressure drop caused by gravity acting on the flowing fluids is small. As the gas velocity in the production tubing drops with time, the velocity of the liquids carried by the gas declines even faster. Flow patterns of liquids on the walls of the conduit cause liquid to accumulate in the bottom of the well, which can either slow or stop gas production altogether. Possible solutions to this problem include the installation of a velocity string, a capillary string injecting foamers (often with corrosive effects on surface wellhead seals), or a pump to continuously or intermittently pump the water to the surface to remove the hydrostatic barrier that the water creates. A common practice is to use a device called a plunger to lift the liquids. Improved electrical pumps coming onto the market may enhance the effectiveness of the technology. More recently, downhole compressors are being used to increase the velocity of gas flow, which in turn accelerates liquid unloading. The same concept is also applicable to oil wells when they are at the end stage of production. In this case, the reservoir pressure drops to such a low level that it cannot lift the weight of the oil/water column to the surface. By injecting a gas (such as nitrogen) into the wellbore at a specific point, the density of the fluid column can be reduced to the point that the reservoir pressure is once again able to lift fluids to the surface. References Gas Well Deliquification: Solutions to Gas Well Liquid Loading Problems by James Lea, Henry Nickens and Mike Wells, Gulf Professional Publishing 2003 External links Summary of Journal of Petroleum Technology paper on gas well deliquification Oil & Gas Journal article on low-pressure gas well deliquification Natural gas technology
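As a rough illustration of the "high enough velocity" criterion described above, the sketch below evaluates a commonly cited droplet-model (Turner-type) estimate of the minimum gas velocity needed to keep liquid droplets moving upward; the correlation, its coefficient, and the example property values are assumptions chosen for illustration and are not taken from this article.

```python
# Sketch of a Turner-type critical (liquid-unloading) gas velocity estimate.
# Field units assumed: surface tension in dyn/cm, densities in lbm/ft3,
# result in ft/s. The coefficient 1.92 reflects the commonly quoted ~20%
# upward adjustment of the original droplet-model constant.

def critical_gas_velocity(sigma_dyn_cm, rho_liquid, rho_gas, coeff=1.92):
    """Approximate minimum gas velocity (ft/s) that still carries droplets upward."""
    return coeff * (sigma_dyn_cm * (rho_liquid - rho_gas)) ** 0.25 / rho_gas ** 0.5

# Example: water (surface tension ~60 dyn/cm, ~62.4 lbm/ft3) against a
# moderate-pressure gas of ~1.5 lbm/ft3
v_crit = critical_gas_velocity(60.0, 62.4, 1.5)
print(f"Critical velocity ~ {v_crit:.1f} ft/s")  # on the order of 12 ft/s here
```

When the actual gas velocity in the tubing falls below a threshold of this kind, droplets begin falling back and liquid accumulates at the bottom of the well, which is the loading problem the deliquification methods described above are meant to address.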
Gas well deliquification
[ "Chemistry" ]
465
[ "Natural gas technology" ]
8,851,414
https://en.wikipedia.org/wiki/Low-temperature%20thermal%20desorption
For environmental remediation, low-temperature thermal desorption (LTTD), also known as low-temperature thermal volatilization, thermal stripping, and soil roasting, is an ex-situ remedial technology that uses heat to physically separate petroleum hydrocarbons from excavated soils. Thermal desorbers are designed to heat soils to temperatures sufficient to cause constituents to volatilize and desorb (physically separate) from the soil. Although they are not designed to decompose organic constituents, thermal desorbers can, depending upon the specific organics present and the temperature of the desorber system, cause some organic constituents to completely or partially decompose. The vaporized hydrocarbons are generally treated in a secondary treatment unit (e.g., an afterburner, catalytic oxidation chamber, condenser, or carbon adsorption unit) prior to discharge to the atmosphere. Afterburners and oxidizers destroy the organic constituents. Condensers and carbon adsorption units trap organic compounds for subsequent treatment or disposal. Some preprocessing and postprocessing of soil is necessary when using LTTD. Excavated soils are first screened to remove large (greater than 2 inches in diameter) objects. These may be sized (e.g., crushed or shredded) and then introduced back into the feed material. After leaving the desorber, soils are cooled, re-moistened to control dust, and stabilized (if necessary) to prepare them for disposal or reuse. Treated soil may be redeposited onsite, used as cover in landfills, or incorporated into asphalt. Application LTTD has proven very effective in reducing concentrations of petroleum products including gasoline, jet fuels, kerosene, diesel fuel, heating oils, and lubricating oils. LTTD is applicable to constituents that are volatile at temperatures up to 1,200 °F. Most desorbers operate at temperatures between 300 °F and 1,000 °F. Desorbers constructed of special alloys can operate at temperatures up to 1,200 °F. More volatile products (e.g. gasoline) can be desorbed at the lower operating range, while semivolatile products (e.g. kerosene, diesel fuel) generally need temperatures over 700 °F, and relatively nonvolatile products (e.g., heating oil, lubricating oils) need even higher temperatures. Essentially all soil types are amenable to treatment by LTTD systems. However, different soils may require varying degrees and types of pretreatment. For example, coarse-grained soils (e.g. gravel and cobbles) may require crushing; fine-grained soils that are excessively cohesive (e.g. clay) may require shredding. State and local regulations specify that petroleum-contaminated soils must be pilot tested by processing some soil from the site through the LTTD system (a "test burn"). The results of preliminary testing of soil samples should identify the relevant constituent properties, and examination of the machine's performance records should indicate how effective the system will be in treating the soil. The proven effectiveness of a particular system for a specific site or waste does not ensure that it will be effective at all sites or that the treatment efficiencies achieved will be acceptable at other sites. If a test burn is conducted, it is important to ensure that the soil tested is representative of average conditions and that enough samples are analyzed before and after treatment to confidently determine whether LTTD will be effective. Operation of LTTD units requires various permits and demonstration of compliance with permit requirements.
Monitoring requirements for LTTD systems are by their nature different from monitoring required at an underground storage tank (UST) site. Monitoring of LTTD system waste streams (e.g. concentrations of particulates, volatiles, and carbon monoxide in stack gas) is required by the agency or agencies issuing the permits for operation of the facility. The LTTD facility owner/operator is responsible for complying with limits specified by the permits and for other LTTD system operating parameters (e.g. desorber temperature, soil feed rate, afterburner temperature). The decision as to whether or not LTTD is a practical remedial alternative depends upon site-specific characteristics (e.g. the location and volume of contaminated soils, site layout). Practicability is also determined by regulatory, logistical, and economic considerations. The economics of LTTD as a remedial option are highly site-specific. Economic factors include: site usage, because excavation and onsite soil treatment at a retail site (e.g. a gasoline station or convenience store) will most likely prevent the business from operating for an extended period; the cost of LTTD per unit volume of soil relative to other remedial options; and the location of the nearest applicable LTTD system, because transportation costs are a function of distance. Operation principles Thermal desorption systems fall into two general classes: stationary facilities and mobile units. Contaminated soils are excavated and transported to stationary facilities; mobile units can be operated directly onsite. Desorption units are available in a variety of process configurations including rotary desorbers, asphalt plant aggregate dryers, thermal screws, and conveyor furnaces. The plasticity of the soil is a measure of its ability to deform without shearing and is to some extent a function of water content. Plastic soils tend to stick to screens and other equipment, and agglomerate into large clumps. In addition to slowing down the feed rate, plastic soils are difficult to treat. Heating plastic soils requires higher temperatures because of the low surface area to volume ratio and increased moisture content. Also, because plastic soils tend to be very fine-grained, organic compounds tend to be tightly sorbed. Thermal treatment of highly plastic soils requires pretreatment, such as shredding or blending with more friable soils or other amendments (e.g. gypsum). Material larger than 2 inches in diameter will need to be crushed or removed. Crushed material is recycled back into the feed to be processed. Coarser-grained soils tend to be free-flowing and do not agglomerate into clumps. They typically do not retain excessive moisture; therefore, contaminants are easily desorbed. Finer-grained soils tend to retain soil moisture and agglomerate into clumps. When dry, they may yield large amounts of particulates that may require recycling after being intercepted in the baghouse. The solids processing capacity of a thermal desorption system is inversely proportional to the moisture content of the feed material. The presence of moisture in the excavated soils to be treated in the LTTD unit will determine the residence time required and heating requirements for effective removal of contaminants. In order for desorption of petroleum constituents to occur, most of the soil moisture must be evaporated in the desorber. This process can require significant additional thermal input to the desorber and excessive residence time for the soil in the desorber.
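To make the moisture-content effect concrete, here is a rough back-of-the-envelope sketch of desorber heat duty per ton of wet soil; the specific heats, latent heat, and target temperature are generic textbook values chosen for illustration, not figures from this article.

```python
# Rough desorber heat-duty estimate (Btu per ton of wet soil) versus moisture
# content. Illustrative property values: dry soil specific heat ~0.2 Btu/(lb·°F),
# water ~1.0 Btu/(lb·°F), latent heat of vaporization ~970 Btu/lb.

def desorber_heat_duty(moisture_fraction, t_in_f=60.0, t_out_f=600.0,
                       cp_soil=0.2, cp_water=1.0, h_vap=970.0):
    """Approximate heat input (Btu) to bring 1 ton (2,000 lb) of wet soil to
    t_out_f while boiling off all of its moisture (boiling point taken as 212 °F)."""
    wet_mass = 2000.0
    water = wet_mass * moisture_fraction
    soil = wet_mass - water
    q_soil = soil * cp_soil * (t_out_f - t_in_f)
    q_water = water * (cp_water * (212.0 - t_in_f) + h_vap)
    return q_soil + q_water

for m in (0.05, 0.10, 0.20):
    print(f"{m:.0%} moisture: ~{desorber_heat_duty(m) / 1e6:.2f} MMBtu/ton")
# Doubling the moisture roughly doubles the water-related duty, which is why
# wetter feeds cut throughput and may need dewatering before treatment.
```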
Moisture content also influences plasticity, which affects handling of the soil. Soils with excessive moisture content (> 20%) must be dewatered. Typical dewatering methods include air drying (if storage space is available to spread the soils), mixing with drier soils, or mechanical dewatering. The presence of metals in soil can have two implications: limitations on disposal of the solid wastes generated by desorption, and the need to comply with air pollution control regulations that limit the amount of metals that may be released in stack emissions. At normal LTTD operating temperatures, heavy metals are not likely to be significantly separated from soils. High concentrations of petroleum products in soil can result in high soil heating values. Heat released from soils can result in overheating and damage to the desorber. Soils with heating values greater than 2,000 Btu/lb require blending with cleaner soils to dilute the high concentration of hydrocarbons. High hydrocarbon concentrations in the offgas may exceed the thermal capacity of the afterburner and potentially result in the release of untreated vapors into the atmosphere. Excessive constituent levels in soil could also potentially result in the generation of vapors in the desorber at concentrations exceeding the lower explosive limit (LEL). If the LEL is exceeded, there is a potential for explosion. System design The term "thermal desorber" describes the primary treatment operation that heats petroleum-contaminated materials and desorbs organic materials into a purge gas. Mechanical design features and process operating conditions vary considerably among the various types of LTTD systems. Desorption units are available in four configurations: rotary dryers, asphalt plant aggregate dryers, thermal screws, and conveyor furnaces. Although all LTTD systems use heat to separate (desorb) organic contaminants from the soil matrix, each system has a different configuration with its own set of advantages and disadvantages. The decision to use one system over another depends on the nature of the contaminants as well as machine availability, system performance, and economic considerations. System performance may be evaluated on the basis of pilot tests (e.g., test burns) or examination of historical machine performance records. Pilot tests to develop treatment conditions are generally not necessary for petroleum-contaminated soils. Rotary dryer Rotary dryer systems use a cylindrical metal reactor (drum) that is inclined slightly from the horizontal. A burner located at one end provides heat to raise the temperature of the soil sufficiently to desorb organic contaminants. The flow of soil may be either cocurrent with or countercurrent to the direction of the purge gas flow. As the drum rotates, soil is conveyed through the drum. Lifters raise the soil, carrying it to near the top of the drum before allowing it to fall through the heated purge gas. Mixing in a rotary dryer enhances heat transfer by convection and allows soils to be rapidly heated. Rotary desorber units are manufactured for a wide range of treatment capacities; these units may be either stationary or mobile. The maximum soil temperature that can be obtained in a rotary dryer depends on the composition of the dryer shell. The soil discharge temperature of carbon steel drums is typically 300 °F to 600 °F. Alloy drums are available that can increase the soil discharge temperature to 1,200 °F. Most rotary dryers that are used to treat petroleum-contaminated soil are made of carbon steel.
After the treated soil exits the rotary dryer, it enters a cooling conveyor where water is sprayed on the soil for cooling and dust control. Water addition may be conducted in either a screw conveyor or a pugmill. Besides the direction of purge gas flow relative to soil feed direction, there is one major difference in configuration between countercurrent and cocurrent rotary dryers. The purge gas from a countercurrent rotary dryer is typically only 350 °F to 500 °F and does not require cooling before entering the baghouse, where fine particles are trapped. A disadvantage is that these particles may not have been decontaminated and are typically recycled to the dryer. Countercurrent dryers have several advantages over cocurrent systems. They are more efficient in transferring heat from purge gas to contaminated soil, and the volume and temperature of exit gas are lower, allowing the gas to go directly to a baghouse without needing to be cooled. The cooler exit gas temperature and smaller volume eliminate the need for a cooling unit, which allows downstream processing equipment to be smaller. Countercurrent systems are effective on petroleum products with molecular weights lower than No. 2 fuel oil. In cocurrent systems, the purge gas is 50 °F to 100 °F hotter than the soil discharge temperature. The result is that the purge gas exit temperature may range from 400 °F to 1,000 °F and cannot go directly to the baghouse. Purge gas first enters an afterburner to decontaminate the fine particles, then goes into a cooling unit prior to introduction into the baghouse. Because of the higher temperature and volume of the purge gas, the baghouse and all other downstream processing equipment must be larger than in a countercurrent system. Cocurrent systems do have several advantages over countercurrent systems: The afterburner is located upstream of the baghouse, ensuring that fine particles are decontaminated; and because the heated purge gas is introduced at the same end of the drum as the feed soil, the soil is heated faster, resulting in a longer residence time. Higher temperatures and longer residence time mean that cocurrent systems can be used to treat soils contaminated with heavier petroleum products. Cocurrent systems are effective for light and heavy petroleum products including No. 6 fuel oil, crude oil, motor oil, and lubricating oil. Asphalt plant aggregate dryer Hot-mix asphalt plants use aggregate that has been processed in a dryer before it is mixed with liquid asphalt. The use of petroleum-contaminated soils for aggregate material is widespread. Aggregate dryers may either be stationary or mobile. Soil treatment capacities range from 25 to 150 tons per hour. The soil may be incorporated into the asphalt as a recycling process or the treated soil may be used for other purposes. Asphalt rotary dryers are normally constructed of carbon steel and have a soil discharge temperature of 300 °F to 600 °F. Typically, asphalt plant aggregate dryers are identical to the countercurrent rotary desorbers described above and are effective on the same types of contaminants. The primary difference is that an afterburner is not required for incorporation of clean aggregate into the asphalt mix. In some areas, asphalt plants that use petroleum-contaminated soil for aggregate may be required to be equipped with an afterburner. Thermal screw A thermal screw desorber typically consists of a series of 1 to 4 augers.
The auger system conveys, mixes, and heats contaminated soils to volatilize moisture and organic contaminants into a purge gas stream. Augers can be arranged in series to increase the soil residence time, or they can be configured in parallel to increase throughput capacity. Most thermal screw systems circulate a hot heat-transfer oil through the hollow flights of the auger and return the hot oil through the shaft to the heat transfer fluid heating system. The heated oil is also circulated through the jacketed trough in which each auger rotates. Thermal screws can also be steam-heated. Systems heated with oil can achieve soil temperatures of up to 500 °F, and steam-heated systems can heat soil to approximately 350 °F. Most of the gas generated during heating of the heat-transfer oil does not come into contact with the waste material and can be discharged directly to the atmosphere without emission controls. The remainder of the flue gas maintains the thermal screw purge gas exit temperature above 300 °F. This ensures that volatilized organics and moisture do not condense. In addition, the recycled flue gas has a low oxygen content (less than 2% by volume), which minimizes oxidation of the organics and reduces the explosion hazard. If pretreatment analytical data indicates a high organic content (greater than 4 percent), use of a thermal screw is recommended. After the treated soil exits the thermal screw, water is sprayed on the soil for cooling and dust control. Thermal screws are available with soil treatment capacities ranging from 3 to 15 tons per hour. Since thermal screws are indirectly heated, the volume of purge gas from the primary thermal treatment unit is less than one half of the volume from a directly heated system with an equivalent soil processing capacity. Therefore, offgas treatment systems consist of relatively small unit operations that are well suited to mobile applications. Indirect heating also allows thermal screws to process materials with high organic contents since the recycled flue gas is inert, thereby reducing the explosion hazard. Conveyor furnace A conveyor furnace uses a flexible metal belt to convey soil through the primary heating chamber. A one-inch-deep layer of soil is spread evenly over the belt. As the belt moves through the system, soil agitators lift the belt and turn the soil to enhance heat transfer and volatilization of organics. The conveyor furnace can heat soils to temperatures from 300 °F to 800 °F. At the higher temperature range, the conveyor furnace is more effective in treating some heavier petroleum hydrocarbons than are oil- or steam-heated thermal screws, asphalt plant aggregate dryers, and carbon steel rotary dryers. After the treated soil exits the conveyor furnace, it is sprayed with water for cooling and dust control. As of February 1993, only one conveyor furnace system was in use for the remediation of petroleum-contaminated soil. This system is mobile and can treat 5 to 10 tons of soil per hour. Offgas treatment Offgas treatment systems for LTTD systems are designed to address three types of air pollutants: particulates, organic vapors, and carbon monoxide. Particulates are controlled with both wet (e.g., venturi scrubbers) and dry (e.g., cyclones, baghouses) unit operations. Rotary dryers and asphalt aggregate dryers most commonly use dry gas cleaning unit operations. Cyclones are used to capture large particulates and reduce the particulate load to the baghouse.
Baghouses are used as the final particulate control device. Thermal screw systems typically use a venturi scrubber as the primary particulate control. The control of organic vapors is achieved by either destruction or collection. Afterburners are used downstream of rotary dryers and conveyor furnaces to destroy organic contaminants and oxidize carbon monoxide. Conventional afterburners are designed so that exit gas temperatures reach 1,400 °F to 1,600 °F. Organic destruction efficiency typically ranges from 95% to greater than 99%. Condensers and activated carbon may also be used to treat the offgas from thermal screw systems. Condensers may be either water-cooled or electrically cooled systems to decrease offgas temperatures to 100 °F to 140 °F. The efficiency of condensers for removing organic compounds ranges from 50% to greater than 95%. Noncondensible gases exiting the condenser are normally treated by a vapor-phase activated carbon treatment system. The efficiency of activated carbon adsorption systems for removing organic contaminants ranges from 50% to 99%. Condensate from the condenser is processed through a phase separator where the non-aqueous phase organic component is separated and disposed of or recycled. The remaining water is then processed through activated carbon and used to rehumidify treated soil. Treatment temperature is a key parameter affecting the degree of treatment of organic components. The required treatment temperature depends upon the specific types of petroleum contamination in the soil. The actual temperature achieved by an LTTD system is a function of the moisture content and heat capacity of the soil, soil particle size, and the heat transfer and mixing characteristics of the thermal desorber. Residence time is a key parameter affecting the degree to which decontamination is achievable. Residence time depends upon the design and operation of the system, characteristics of the contaminants and the soil, and the degree of treatment required. References Technology hazards Petroleum technology Oil spill remediation technologies
Low-temperature thermal desorption
[ "Chemistry", "Technology", "Engineering" ]
3,977
[ "Petroleum engineering", "Petroleum technology", "nan" ]
8,851,953
https://en.wikipedia.org/wiki/Polyhaline
Polyhaline is a salinity category term applied to brackish estuaries and other water bodies with a salinity between 18 and 30 parts per thousand. It is the most dense saltwater type that is classified as "brackish." References See also Salinity Aquatic ecology
Polyhaline
[ "Biology" ]
62
[ "Aquatic ecology", "Ecosystems" ]
8,852,523
https://en.wikipedia.org/wiki/List%20of%20virtual%20printer%20software
The following is a list of Wikipedia articles relating to virtual printer software: Free software The following are distributed under free software licences: CC PDF Converter (discontinued) – A Ghostscript-based virtual printer. clawPDF – An open source virtual PDF/OCR/Image Printer with network sharing and ARM64 support. cups-pdf – An open source Ghostscript-based virtual printer that can be shared with Windows users over the LAN. CUPS – An open source printing system. Ghostscript – A command-line library for creation of PostScript and PDF files. RedMon – Redirects a special printer port to the standard input of another program. Freeware The following are proprietary software but free of charge: Virtual PDF printers Virtual PDF printers for Microsoft Windows: Bullzip PDF Printer – there is a free version. CutePDF. DoPDF – this is a simplified version of NovaPDF. PDFCreator – a Ghostscript-based virtual printer for Microsoft Windows, with a user interface for advanced options (security settings, combining multiple documents, etc.). PrimoPDF. Print To PDF – ships with Windows 10 and 11. PDF24 Creator – a free virtual PDF printer for Microsoft Windows, with a user interface and additional tools like merging, splitting, compressing and assembling PDF files. Commercial Adobe Acrobat – Adobe Systems' commercial PDF authoring suite includes Adobe Distiller, a virtual printer for converting documents to PDF files. Adobe Distiller is not included with the free-to-use Adobe Reader product. Virtual printers Virtual printers for Microsoft Windows: Microsoft Office Document Image Writer – Included in Microsoft Office Professional, allowing documents to be saved in TIFF or Microsoft Document Imaging Format. MODI is only supported in 32-bit versions of Windows. Universal Document Converter – Creates PDF, JPEG, TIFF, PNG, GIF, PCX, DCX and BMP files. The free version adds a watermark. Notes 1. This software has a risk of installing potentially unwanted programs. For more information, refer to its main article. Virtual printer software Computer printers
List of virtual printer software
[ "Technology" ]
413
[ "Computing-related lists", "Lists of software" ]
8,852,888
https://en.wikipedia.org/wiki/21-Hydroxylase
Steroid 21-hydroxylase is a protein that in humans is encoded by the CYP21A2 gene. The protein is an enzyme that hydroxylates steroids at the C21 position on the molecule. Naming conventions for enzymes are based on the substrate acted upon and the chemical process performed. Biochemically, this enzyme is involved in the biosynthesis of the adrenal gland hormones aldosterone and cortisol, which are important in blood pressure regulation, sodium homeostasis and blood sugar control. The enzyme converts progesterone and 17α-hydroxyprogesterone into 11-deoxycorticosterone and 11-deoxycortisol, respectively, within metabolic pathways which in humans ultimately lead to the production of aldosterone and cortisol; deficiency in the enzyme may cause congenital adrenal hyperplasia. Steroid 21-hydroxylase is a member of the cytochrome P450 family of monooxygenase enzymes that use an iron-containing heme cofactor to oxidize substrates. In humans, the enzyme is localized in endoplasmic reticulum membranes of cells in the adrenal cortex, and is encoded by a gene located near the CYP21A1P pseudogene, which has a high degree of sequence similarity to it. This similarity makes it difficult to analyze the gene at the molecular level, and sometimes leads to loss-of-function mutations of the gene due to intergenic exchange of DNA. Gene Steroid 21-hydroxylase in humans is encoded by the CYP21A2 gene, which may be accompanied by one or several copies of the nonfunctional pseudogene CYP21A1P; this pseudogene shares 98% of its exonic sequence identity with the functional gene. Pseudogenes are common in genomes, and they originate as artifacts during the duplication process. Though often thought of as "junk DNA", research has shown that retaining these faulty copies can have a beneficial role, often providing regulation of their parent genes. In the mouse genome, Cyp21a2 is a pseudogene and Cyp21a1 is the functional gene. In the chicken and quail, there is only a single Cyp21 gene, whose locus lies between the complement component C4 and TNX genes, along with Cenpa. CYP21A2 in humans is located on chromosome 6, in the major histocompatibility complex III (MHC class III), close to the Complement component 4 genes C4A and C4B, the Tenascin X gene TNXB and STK19. MHC class III is the most gene-dense region of the human genome, containing many genes that have, as of 2023, unknown functions or structures. Inside the MHC class III, CYP21A2 is located within the RCCX cluster (an abbreviation composed of the names of the genes RP (a former name for STK19 serine/threonine kinase 19), C4, CYP21 and TNX), which is the most complex gene cluster in the human genome. The number of RCCX segments varies between one and four in a chromosome, with prevalences in Europeans of approximately 15% for monomodular, 75% for bimodular (STK19-C4A-CYP21A1P-TNXA-STK19B-C4B-CYP21A2-TNXB), and 10% for trimodular structures. The quadrimodular structure of the RCCX unit is very rare. In a monomodular structure, all of the genes are functional (i.e., protein-coding); if the module count is two or more, there is only one copy of each functional gene, the rest being non-coding pseudogenes, with the exception of the C4 gene, which always has active copies. Due to the high degree of homology between the CYP21A2 gene and the CYP21A1P pseudogene and the complexity of the RCCX locus, it is difficult to perform molecular diagnostics for CYP21A2.
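The mapping ambiguity underlying this difficulty (and the sequencing limitations discussed next) can be made concrete with a toy sketch; the sequences below are randomly generated placeholders, not real CYP21A2 or CYP21A1P sequence.

```python
# Toy illustration of why short reads are hard to assign between a gene and a
# ~98%-identical pseudogene. Sequences are random placeholders, NOT real
# CYP21A2 / CYP21A1P sequence.
import random

random.seed(0)
gene = "".join(random.choice("ACGT") for _ in range(3000))

# "Pseudogene": copy of the gene with ~2% random substitutions.
pseudo = list(gene)
for i in random.sample(range(len(pseudo)), k=int(0.02 * len(pseudo))):
    pseudo[i] = random.choice([b for b in "ACGT" if b != pseudo[i]])
pseudo = "".join(pseudo)

def best_mismatches(read, ref):
    """Fewest mismatches of `read` against any equal-length window of `ref`."""
    n = len(read)
    return min(sum(a != b for a, b in zip(read, ref[i:i + n]))
               for i in range(len(ref) - n + 1))

read = gene[1200:1300]  # a 100 bp "short read" taken from the gene
print("mismatches vs gene:      ", best_mismatches(read, gene))    # 0
print("mismatches vs pseudogene:", best_mismatches(read, pseudo))  # typically only a few
# With so few differences, a standard aligner cannot reliably say which copy
# the read came from; longer reads spanning more diagnostic sites are needed.
```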
The pseudogene can have single-nucleotide polymorphisms (SNPs) that are identical or similar to those in the functional gene, making it difficult to distinguish between them. The pseudogene can also recombine with the functional gene, creating hybrid genes that have features of both. This can result in false-positive or false-negative results when testing for SNPs in the CYP21A2. The whole genome sequencing technology relies on breaking the DNA into small fragments, sequencing them, and then assembling them back together based on their overlaps. However, because of the high homology and variability of the CYP21A2 and its pseudogene, the fragments cannot be mapped unambiguously to either copy of the gene. This can lead to errors or gaps in the assembly, or missing some variants that are present in the gene. Polymerase chain reaction (PCR) molecular diagnostics uses selective primers to amplify specific segments of the DNA sequence that are relevant for diagnosing or detecting a certain disease or condition. If the primers are not designed carefully, they may bind to both the CYP21A2 and the CYP21A1P pseudogene, or to different segments of the RCCX cluster, resulting in false-positive or false-negative results. Therefore, PCR for the CYP21A2 requires the use of locus-specific primers that can distinguish between the gene and the pseudogene, and between different RCCX modules. Moreover, PCR may not be able to detect complex variants such as large gene conversions, deletions, or duplications, which are frequent in the case of the CYP21A2. Southern blotting, a method used for detecting and quantifying a specific DNA sequence in DNA samples, also has limitations in analyzing CYP21A2. This method is time-consuming and requires a large amount of good-quality DNA, which makes it less applicable in routine diagnostic settings. This method comes with a radioactive biohazard, which poses safety concerns and makes it labor-intensive. Southern blotting is unable to detect the junction sites of chimeras. The CYP21A2 gene is prone to mismatch and rearrangement, producing different types of complex variations that include copy number variants, large gene conversions, small insertions/deletions, and single-nucleotide (SNP) variants. Southern blotting is not capable of detecting all these types of variants simultaneously. Besides that, Southern blotting requires genetic analysis of the parents, which is not always feasible or practical. Therefore, to analyze the CYP21A2 gene accurately, a more specialized and sensitive method is needed, such as targeted long-read sequencing, which can sequence longer DNA fragments and capture more information about the gene structure and variation. However, this method is not widely available or affordable for clinical use. Protein Steroid 21-hydroxylase is a member of the cytochrome P450 family of monooxygenase enzymes; the protein has 494 amino acid residues with a molecular weight of 55,000. This enzyme is at most 28% homologous to other P-450 enzymes that have been studied. Structurally, the protein contains an evolutionarily conserved core of four α-helix bundles (such conservation indicates the functional importance of this part of the protein's structure). In addition, it has two further α-helices, two sets of β-sheets, and a heme cofactor binding loop. Each subunit in the human enzyme consists of a total of 13 α-helices and 9 β-strands that fold into a triangular prism-like tertiary structure.
The iron(III) heme group that defines the active site resides in the center of each subunit. The human enzyme binds one substrate at a time. In contrast, the well-characterized bovine enzyme can bind two substrates. The human and bovine enzymes share 80% amino acid sequence identity, but are structurally different, particularly in loop regions and also in secondary structure elements. Species Variations of the steroid 21-hydroxylase can be found in all vertebrates. Cyp21 first emerged in chordates, before the divergence between basal chordates and vertebrates. The sea lamprey, an early jawless fish species that originated over 500 million years ago, provides valuable insights into the evolution and emergence of Cyp21. Sea lampreys lack the 11β-hydroxylase enzyme responsible for converting 11-deoxycortisol to cortisol as observed in mammals. Instead, they rely on 11-deoxycortisol, a product of a reaction catalyzed by CYP21, as their primary glucocorticoid hormone with mineralocorticoid properties. This suggests the presence of a complex and highly specific corticosteroid signaling pathway that emerged at least half a billion years ago during early vertebrate evolution. In vertebrates, such as fish, amphibians, reptiles, birds, and mammals, Cyp21 participates in the biosynthesis of glucocorticoids and mineralocorticoids; it is therefore essential for the regulation of the stress response, electrolyte balance and blood pressure, the immune system, and metabolism in vertebrates. Cyp21 is relatively conserved among mammals, and shows some variations in its structure, expression, and regulation. Rhesus macaques and orangutans possess two copies of Cyp21, while chimpanzees have three; among primates, however, a pseudogene (CYP21A1P) is present only in humans. Tissue and subcellular distribution Steroid 21-hydroxylase is localized in microsomes of endoplasmic reticulum membranes within the adrenal cortex. It is one of three microsomal steroidogenic cytochrome P450 enzymes, the others being steroid 17-hydroxylase and aromatase. Unlike other enzymes of the cytochrome P450 superfamily that are expressed in multiple tissues, with most abundant expression in the liver, in adult humans steroid 21-hydroxylase, along with steroid 11β-hydroxylase and aldosterone synthase, is almost exclusively expressed in the adrenal gland. The main subcellular location for the encoded protein in human cells has not been determined and is pending cell analysis. Function The enzyme steroid 21-hydroxylase hydroxylates steroids at the C21 position. Steroids are a group of naturally occurring and synthetically produced organic compounds that all share a four-ring core structure. The enzyme catalyzes the chemical reaction in which the hydroxyl group (-OH) is added at the C21 position of the steroid biomolecule. This location is on a side chain of the D ring. The enzyme is a member of the cytochrome P450 superfamily of monooxygenase enzymes. The cytochrome P450 enzymes catalyze many reactions involved in drug metabolism and synthesis of cholesterol, steroids and other lipids. Steroid 21-hydroxylase is essential for the biosynthesis of cortisol and aldosterone. Mechanism Steroid 21-hydroxylase is a cytochrome P450 enzyme that is notable for its substrate specificity and relatively high catalytic efficiency. Like other cytochrome P450 enzymes, steroid 21-hydroxylase participates in the cytochrome P450 catalytic cycle and engages in one-electron transfer with NADPH-P450 reductase.
Steroid 21-hydroxylase is highly specific for hydroxylation of progesterone and 17-hydroxyprogesterone. This is in marked contrast to the evolutionarily and functionally related P450 enzyme 17-hydroxylase, which has a broad range of substrates. The chemical reaction in which steroid 21-hydroxylase catalyzes the addition of hydroxyl (-OH) to the C21 position of progesterone, 17α-hydroxyprogesterone and 21-desoxycortisone was first described in 1952. Studies of the human enzyme expressed in yeast initially classified 17-hydroxyprogesterone as the preferred substrate for steroid 21-hydroxylase; however, later analysis of the purified human enzyme found a lower Km and greater catalytic efficiency for progesterone over 17-hydroxyprogesterone. The catalytic efficiency of steroid 21-hydroxylase for conversion of progesterone in humans is approximately 1.3 × 10⁷ M⁻¹s⁻¹ at 37 °C. This makes it the most catalytically efficient P450 enzyme of those reported to date, and catalytically more efficient than the closely related bovine steroid 21-hydroxylase enzyme. C-H bond breaking to create a primary carbon radical is thought to be the rate-limiting step in the hydroxylation. Clinical significance Congenital adrenal hyperplasia Genetic variants in the CYP21A2 gene cause a disturbance in the development of the enzyme, leading to congenital adrenal hyperplasia (CAH) due to 21-hydroxylase deficiency. Gene conversion events involving the functional gene and the pseudogene account for many cases of steroid 21-hydroxylase deficiency. CAH is an autosomal recessive disorder. There are multiple forms of CAH, defined as classical and nonclassical forms based on the amount of enzyme function still present in the patient. The classical forms occur in approximately 1 in to 1 in births globally and include both the salt-wasting (excessive excretion of sodium via the urine causing hyponatremia and dehydration) and simple-virilizing forms. Complete loss of enzymatic activity causes the salt-wasting form. Variations in the structure of steroid 21-hydroxylase are related to the clinical severity of congenital adrenal hyperplasia. Cortisol and aldosterone deficits are associated with life-threatening sodium loss, as the steroids play roles in regulating sodium homeostasis. Simple-virilizing CAH patients (~1-2% enzyme function) maintain adequate sodium homeostasis, but exhibit other symptoms shared by the salt-wasting form, including accelerated growth in childhood and ambiguous genitalia in female neonates. The nonclassical form is the mildest condition, retaining about 20% to 50% of enzyme function. This form is associated with mild and clinically silent cortisol impairment, but an excess of androgens post-puberty. Non-classic congenital adrenal hyperplasia Non-classical congenital adrenal hyperplasia caused by 21-hydroxylase deficiency (NCCAH) is a milder and late-onset congenital adrenal hyperplasia. Its prevalence rate in different ethnic groups varies from 1 in to 1 in . Some people affected by the condition have no relevant signs and symptoms, while others experience symptoms of hyperandrogenism. Women with NCCAH usually have normal female genitalia at birth. In later life, the signs and symptoms of the condition may include acne, hirsutism, male-pattern baldness, irregular menstruation, and infertility. Fewer studies have been published about males with NCCAH compared with those about females, because males are generally asymptomatic. Males, however, may present with acne and early balding.
While symptoms are usually diagnosed after puberty, children may present with premature adrenarche. Research on other conditions There is ongoing research on how genetic variants in the CYP21A2 gene may lead to pathogenic conditions. A variant of this gene has been reported to cause autosomal dominant posterior polar cataract, suggesting that steroid 21-hydroxylase may be involved in the extra-adrenal biosynthesis of aldosterone and cortisol in the lens of the eye. History In the 1950s and 1960s, steroidogenic pathways that included cholesterol conversion to progesterone through a complex pathway involving multiple steps were identified; among them was a pathway for cortisol synthesis showing specific enzymatic steps that included hydroxylation reactions at position 21 (21-hydroxylation) mediated by cytochrome P450 enzymes. Cytochrome P450 enzymes were then described, and steroid 21-hydroxylation was associated with cytochrome P450. In the 1980s and 1990s, partial-length bovine Cyp21 cDNA clones were identified as related to human CYP21A2. Researchers discovered mutations in the CYP21A2 gene associated with congenital adrenal hyperplasia (CAH). From the 1990s onward, specific mutations were correlated with different forms/severity levels of CAH. Genotype/phenotype correlations were investigated for improved diagnostic accuracy. See also Steroidogenic enzyme Cytochrome P450 oxidoreductase deficiency References External links GeneReviews/NCBI/NIH/UW entry on 21-Hydroxylase-Deficient Congenital Adrenal Hyperplasia OMIM entry on 21-Hydroxylase-Deficient Congenital Adrenal Hyperplasia Synthesis of Desoxycorticosterone from Progesterone through 21-Hydroxylase (Image) Enzymes EC 1.14.99 21 Metabolism Human proteins Steroid hormone biosynthesis
21-Hydroxylase
[ "Chemistry", "Biology" ]
3,664
[ "Steroid hormone biosynthesis", "Biosynthesis", "Cellular processes", "Biochemistry", "Metabolism" ]
8,852,928
https://en.wikipedia.org/wiki/Steroid%2011%CE%B2-hydroxylase
Steroid 11β-hydroxylase, also known as steroid 11β-monooxygenase, is a steroid hydroxylase found in the zona glomerulosa and zona fasciculata of the adrenal cortex. Officially named cytochrome P450 11B1, mitochondrial, it is a protein that in humans is encoded by the CYP11B1 gene. The enzyme is involved in the biosynthesis of adrenal corticosteroids by catalyzing the addition of hydroxyl groups during oxidation reactions. Gene The CYP11B1 gene encodes 11β-hydroxylase, a member of the cytochrome P450 superfamily of enzymes. The cytochrome P450 proteins are monooxygenases that catalyze many reactions involved in drug metabolism and synthesis of cholesterol, steroids and other lipids. The product of this CYP11B1 gene is the 11β-hydroxylase protein. This protein localizes to the mitochondrial inner membrane and is involved in the conversion of various steroids in the adrenal cortex. Transcript variants encoding different isoforms have been noted for this gene. The CYP11B1 enzyme is reversibly inhibited by etomidate and metyrapone. Function 11β-hydroxylase is a steroidogenic enzyme, i.e., an enzyme involved in the metabolism of steroids. The enzyme is primarily localized in the zona glomerulosa and zona fasciculata of the adrenal cortex. The enzyme functions by introducing a hydroxyl group at carbon position 11β on the steroid nucleus, thereby facilitating the conversion of certain steroids. Humans have two isozymes with 11β-hydroxylase activity: CYP11B1 and CYP11B2. CYP11B1 (11β-hydroxylase) is expressed at high levels and is regulated by ACTH, while CYP11B2 (aldosterone synthase) is usually expressed at low levels and is regulated by angiotensin II. In addition to the 11β-hydroxylase activity, both isozymes have 18-hydroxylase activity. The CYP11B1 isozyme has strong 11β-hydroxylase activity, but its 18-hydroxylase activity is only one-tenth that of CYP11B2. The weak 18-hydroxylase activity of CYP11B1 explains why an adrenal with suppressed CYP11B2 expression continues to synthesize 18-hydroxycorticosterone. Some of the steroid conversions, grouped by the catalytic activity of the CYP11B1 isozyme: strong activity: 11-deoxycortisol to cortisol, 11-deoxycorticosterone to corticosterone; medium activity: progesterone to 11β-hydroxyprogesterone, 17α-hydroxyprogesterone to 21-deoxycortisol, androstenedione to 11β-hydroxyandrostenedione, testosterone to 11β-hydroxytestosterone; weak activity: corticosterone to 18-hydroxycorticosterone, cortisol to 18-hydroxycortisol. Cortisol and corticosterone metabolism 11β-hydroxylase has strong catalytic activity during conversion of 11-deoxycortisol to cortisol and 11-deoxycorticosterone to corticosterone, by catalyzing hydroxylation of the carbon-hydrogen bond at the 11β position. The net change is an extra –OH group at the 11 position (near the center of the molecule, on ring C). Mechanism of action As a mitochondrial P450 system, P450c11 is dependent on two electron transfer proteins, adrenodoxin reductase and adrenodoxin, which transfer two electrons from NADPH to the P450 for each monooxygenase reaction catalyzed by the enzyme. In most respects this process of electron transfer appears similar to that of the P450scc system, which catalyzes cholesterol side-chain cleavage. As with P450scc, the electron transfer process is leaky, leading to superoxide production. The rate of electron leakage during metabolism depends on the functional groups of the steroid substrate. Regulation The expression of the enzyme in adrenocortical cells is regulated by the trophic hormone corticotropin (ACTH).
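For context on the two-electron requirement described in the mechanism section above, the generic cytochrome P450 monooxygenase stoichiometry is shown below; this is a standard textbook relation and is not stated explicitly in the article.

```latex
% Generic P450 monooxygenase stoichiometry (textbook form): one oxygen atom is
% inserted into the substrate R-H, the other is reduced to water using the two
% electrons delivered (here via adrenodoxin reductase/adrenodoxin) from NADPH.
\mathrm{R{-}H + O_2 + NADPH + H^+ \;\longrightarrow\; R{-}OH + H_2O + NADP^+}
```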
Clinical significance Mutations in the gene encoding 11β-hydroxylase are associated with congenital adrenal hyperplasia due to 11β-hydroxylase deficiency. In congenital adrenal hyperplasia due to 21-hydroxylase deficiency, 11β-hydroxylase converts accumulated 17α-hydroxyprogesterone to 21-deoxycortisol. See also Steroidogenic enzyme Additional images References Further reading External links Enzymes Metabolism Steroid hormone biosynthesis
Steroid 11β-hydroxylase
[ "Chemistry", "Biology" ]
1,064
[ "Steroid hormone biosynthesis", "Biosynthesis", "Cellular processes", "Biochemistry", "Metabolism" ]
1,590,747
https://en.wikipedia.org/wiki/Franck%E2%80%93Condon%20principle
The Franck–Condon principle describes the intensities of vibronic transitions, that is, the simultaneous changes in the electronic and vibrational energy levels of a molecule due to the absorption or emission of a photon. It states that when a molecule is undergoing an electronic transition, such as ionization, the nuclear configuration of the molecule experiences no significant change. Overview The Franck–Condon principle has a well-established semiclassical interpretation based on the original contributions of James Franck. Electronic transitions are relatively instantaneous compared with the time scale of nuclear motions; therefore, if the molecule is to move to a new vibrational level during the electronic transition, this new vibrational level must be instantaneously compatible with the nuclear positions and momenta of the vibrational level of the molecule in the originating electronic state. In the semiclassical picture of vibrations (oscillations) of a simple harmonic oscillator, the necessary conditions can occur at the turning points, where the momentum is zero. In the quantum mechanical picture, the vibrational levels and vibrational wavefunctions are those of quantum harmonic oscillators, or of more complex approximations to the potential energy of molecules, such as the Morse potential. Figure 1 illustrates the Franck–Condon principle for vibronic transitions in a molecule with Morse-like potential energy functions in both the ground and excited electronic states. In the low temperature approximation, the molecule starts out in the v = 0 vibrational level of the ground electronic state and, upon absorbing a photon of the necessary energy, makes a transition to the excited electronic state. The electron configuration of the new state may result in a shift of the equilibrium position of the nuclei constituting the molecule. In Figure 3 this shift in nuclear coordinates between the ground and the first excited state is labeled as q01. In the simplest case of a diatomic molecule the nuclear coordinates axis refers to the internuclear separation. The vibronic transition is indicated by a vertical arrow due to the assumption of constant nuclear coordinates during the transition. The probability that the molecule can end up in any particular vibrational level is proportional to the square of the (vertical) overlap of the vibrational wavefunctions of the original and final state (see Quantum mechanical formulation section below). In the electronic excited state, molecules quickly relax to the lowest vibrational level of the lowest electronic excitation state (Kasha's rule), and from there can decay to the electronic ground state via photon emission. The Franck–Condon principle is applied equally to absorption and to fluorescence. The applicability of the Franck–Condon principle in both absorption and fluorescence, along with Kasha's rule, leads to an approximate mirror symmetry shown in Figure 2. The vibrational structure of molecules in a cold, sparse gas is most clearly visible due to the absence of inhomogeneous broadening of the individual transitions. Vibronic transitions are drawn in Figure 2 as narrow, equally spaced Lorentzian line shapes. Equal spacing between vibrational levels is only the case for the parabolic potential of simple harmonic oscillators; in more realistic potentials, such as those shown in Figure 1, energy spacing decreases with increasing vibrational energy. Electronic transitions to and from the lowest vibrational states are often referred to as 0–0 (zero zero) transitions and have the same energy in both absorption and fluorescence.
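For two electronic states modeled as displaced harmonic oscillators of equal vibrational frequency, the Franck–Condon factors from the v = 0 level follow a Poisson distribution in the Huang–Rhys parameter S; this is a standard textbook result rather than something derived in the article, and the sketch below (Python) simply evaluates it for an illustrative value of S.

```python
# Franck-Condon factors |<0|n'>|^2 for two displaced harmonic oscillators with
# the same vibrational frequency: a Poisson distribution exp(-S) * S**n / n!,
# where S is the dimensionless Huang-Rhys parameter set by the displacement of
# the excited-state potential minimum. S = 1.0 here is only an example value.
import math

def franck_condon_factor(n, S):
    """|<v=0 | v'=n>|^2 for equal-frequency, displaced harmonic potentials."""
    return math.exp(-S) * S**n / math.factorial(n)

S = 1.0
factors = [franck_condon_factor(n, S) for n in range(6)]
for n, f in enumerate(factors):
    print(f"0 -> {n}: {f:.3f}")
print("sum over shown levels:", round(sum(factors), 3))  # approaches 1 as n grows
# A larger S (a bigger geometry change upon excitation) shifts intensity toward
# higher vibrational levels, producing a longer vibronic progression.
```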
Development of the principle In a report published in 1926 in Transactions of the Faraday Society, James Franck was concerned with the mechanisms of photon-induced chemical reactions. The presumed mechanism was the excitation of a molecule by a photon, followed by a collision with another molecule during the short period of excitation. The question was whether it was possible for a molecule to break into photoproducts in a single step, the absorption of a photon, and without a collision. In order for a molecule to break apart, it must acquire from the photon a vibrational energy exceeding the dissociation energy, that is, the energy to break a chemical bond. However, as was known at the time, molecules will only absorb energy corresponding to allowed quantum transitions, and there are no vibrational levels above the dissociation energy level of the potential well. High-energy photon absorption leads to a transition to a higher electronic state instead of dissociation. In examining how much vibrational energy a molecule could acquire when it is excited to a higher electronic level, and whether this vibrational energy could be enough to immediately break apart the molecule, he drew three diagrams representing the possible changes in binding energy between the lowest electronic state and higher electronic states. James Franck recognized that changes in vibrational levels could be a consequence of the instantaneous nature of excitation to higher electronic energy levels and a new equilibrium position for the nuclear interaction potential. Edward Condon extended this insight beyond photoreactions in a 1926 Physical Review article titled "A Theory of Intensity Distribution in Band Systems". Here he formulated the semiclassical treatment in a manner quite similar to its modern form. The first joint reference to both Franck and Condon in regard to the new principle appears in the same 1926 issue of Physical Review in an article on the band structure of carbon monoxide by Raymond Birge. Quantum mechanical formulation Consider an electrical dipole transition from the initial vibrational state (υ) of the ground electronic level (ε), |εv⟩, to some vibrational state (υ′) of an excited electronic state (ε′), |ε′v′⟩ (see bra–ket notation). The molecular dipole operator μ is determined by the charge (−e) and locations (r_i) of the electrons as well as the charges (+Z_j e) and locations (R_j) of the nuclei: \mu = \mu_e + \mu_N = -e \sum_i r_i + e \sum_j Z_j R_j. The probability amplitude P for the transition between these two states is given by P = \langle \psi' | \mu | \psi \rangle, where \psi and \psi' are, respectively, the overall wavefunctions of the initial and final state. The overall wavefunctions are the product of the individual vibrational (depending on spatial coordinates of the nuclei) and electronic space and spin wavefunctions: \psi = \psi_e \psi_v \psi_s (and similarly for \psi'). This separation of the electronic and vibrational wavefunctions is an expression of the Born–Oppenheimer approximation and is the fundamental assumption of the Franck–Condon principle. Combining these equations leads to an expression for the probability amplitude in terms of separate electronic space, spin and vibrational contributions: P = \langle \psi_e' \psi_v' \psi_s' | \mu_e + \mu_N | \psi_e \psi_v \psi_s \rangle = \langle \psi_e' \psi_v' | \mu_e | \psi_e \psi_v \rangle \langle \psi_s' | \psi_s \rangle + \langle \psi_e' | \psi_e \rangle \langle \psi_v' | \mu_N | \psi_v \rangle \langle \psi_s' | \psi_s \rangle. The spin-independent part of the initial integral is here approximated as a product of two integrals: \langle \psi_e' \psi_v' | \mu_e | \psi_e \psi_v \rangle \approx \langle \psi_v' | \psi_v \rangle \langle \psi_e' | \mu_e | \psi_e \rangle. This factorization would be exact if the integral over the spatial coordinates of the electrons did not depend on the nuclear coordinates. However, in the Born–Oppenheimer approximation \psi_e and \psi_e' do depend (parametrically) on the nuclear coordinates, so that the integral \langle \psi_e' | \mu_e | \psi_e \rangle (a so-called transition dipole surface) is a function of nuclear coordinates.
Since the dependence is usually rather smooth, it is neglected (i.e., the assumption that the transition dipole surface is independent of nuclear coordinates, called the Condon approximation, is often allowed). The first integral after the plus sign is equal to zero because electronic wavefunctions of different states are orthogonal. Remaining is the product of three integrals. The first integral is the vibrational overlap integral, also called the Franck–Condon factor. The remaining two integrals contributing to the probability amplitude determine the electronic spatial and spin selection rules. The Franck–Condon principle is a statement on allowed vibrational transitions between two different electronic states; other quantum mechanical selection rules may lower the probability of a transition or prohibit it altogether. Rotational selection rules have been neglected in the above derivation. Rotational contributions can be observed in the spectra of gases but are strongly suppressed in liquids and solids. It should be clear that the quantum mechanical formulation of the Franck–Condon principle is the result of a series of approximations, principally the electrical dipole transition assumption and the Born–Oppenheimer approximation. Weaker magnetic dipole and electric quadrupole electronic transitions, along with the incomplete validity of the factorization of the total wavefunction into nuclear, electronic spatial and spin wavefunctions, mean that the selection rules, including the Franck–Condon factor, are not strictly observed. For any given transition, the value of P is determined by all of the selection rules; however, spin selection is the largest contributor, followed by electronic selection rules. The Franck–Condon factor only weakly modulates the intensity of transitions, i.e., it contributes with a factor on the order of 1 to the intensity of bands whose order of magnitude is determined by the other selection rules. Extinction coefficients span several orders of magnitude across the possible combinations of allowed and forbidden spin and orbital selection rules. Franck–Condon metaphors in spectroscopy The Franck–Condon principle, in its canonical form, applies only to changes in the vibrational levels of a molecule in the course of a change in electronic levels by either absorption or emission of a photon. The physical intuition of this principle is anchored by the idea that the nuclear coordinates of the atoms constituting the molecule do not have time to change during the very brief amount of time involved in an electronic transition. However, this physical intuition can be, and indeed is, routinely extended to interactions between light-absorbing or emitting molecules (chromophores) and their environment. Franck–Condon metaphors are appropriate because molecules often interact strongly with surrounding molecules, particularly in liquids and solids, and these interactions modify the nuclear coordinates of the chromophore in ways closely analogous to the molecular vibrations considered by the Franck–Condon principle. Franck–Condon principle for phonons The closest Franck–Condon analogy is due to the interaction of phonons (quanta of lattice vibrations) with the electronic transitions of chromophores embedded as impurities in the lattice.
In this situation, transitions to higher electronic levels can take place when the energy of the photon corresponds to the purely electronic transition energy or to the purely electronic transition energy plus the energy of one or more lattice phonons. In the low-temperature approximation, emission is from the zero-phonon level of the excited state to the zero-phonon level of the ground state or to higher phonon levels of the ground state. Just like in the Franck–Condon principle, the probability of transitions involving phonons is determined by the overlap of the phonon wavefunctions at the initial and final energy levels. For the Franck–Condon principle applied to phonon transitions, the label of the horizontal axis of Figure 1 is replaced in Figure 6 with the configurational coordinate for a normal mode. The lattice mode potential energy in Figure 6 is represented as that of a harmonic oscillator, and the spacing between phonon levels () is determined by lattice parameters. Because the energy of single phonons is generally quite small, zero- or few-phonon transitions can only be observed at temperatures below about 40 kelvins. See Zero-phonon line and phonon sideband for further details and references. Franck–Condon principle in solvation Franck–Condon considerations can also be applied to the electronic transitions of chromophores dissolved in liquids. In this use of the Franck–Condon metaphor, the vibrational levels of the chromophores, as well as interactions of the chromophores with phonons in the liquid, continue to contribute to the structure of the absorption and emission spectra, but these effects are considered separately and independently. Consider chromophores surrounded by solvent molecules. These surrounding molecules may interact with the chromophores, particularly if the solvent molecules are polar. This association between solvent and solute is referred to as solvation and is a stabilizing interaction, that is, the solvent molecules can move and rotate until the energy of the interaction is minimized. The interaction itself involves electrostatic and van der Waals forces and can also include hydrogen bonds. Franck–Condon principles can be applied when the interactions between the chromophore and the surrounding solvent molecules are different in the ground and in the excited electronic state. This change in interaction can originate, for example, due to different dipole moments in these two states. If the chromophore starts in its ground state and is close to equilibrium with the surrounding solvent molecules and then absorbs a photon that takes it to the excited state, its interaction with the solvent will be far from equilibrium in the excited state. This effect is analogous to the original Franck–Condon principle: the electronic transition is very fast compared with the motion of nuclei—the rearrangement of solvent molecules in the case of solvation. We now speak of a vertical transition, but now the horizontal coordinate is solvent-solute interaction space. This coordinate axis is often labeled as "Solvation Coordinate" and represents, somewhat abstractly, all of the relevant dimensions of motion of all of the interacting solvent molecules. In the original Franck–Condon principle, after the electronic transition, the molecules which end up in higher vibrational states immediately begin to relax to the lowest vibrational state. In the case of solvation, the solvent molecules will immediately try to rearrange themselves in order to minimize the interaction energy. 
The rate of solvent relaxation depends on the viscosity of the solvent. Assuming the solvent relaxation time is short compared with the lifetime of the electronic excited state, emission will be from the lowest solvent energy state of the excited electronic state. For small-molecule solvents such as water or methanol at ambient temperature, solvent relaxation time is on the order of some tens of picoseconds whereas chromophore excited state lifetimes range from a few picoseconds to a few nanoseconds. Immediately after the transition to the ground electronic state, the solvent molecules must also rearrange themselves to accommodate the new electronic configuration of the chromophore. Figure 7 illustrates the Franck–Condon principle applied to solvation. When the solution is illuminated by light corresponding to the electronic transition energy, some of the chromophores will move to the excited state. Within this group of chromophores there will be a statistical distribution of solvent-chromophore interaction energies, represented in the figure by a Gaussian distribution function. The solvent-chromophore interaction is drawn as a parabolic potential in both electronic states. Since the electronic transition is essentially instantaneous on the time scale of solvent motion (vertical arrow), the collection of excited state chromophores is immediately far from equilibrium. The rearrangement of the solvent molecules according to the new potential energy curve is represented by the curved arrows in Figure 7. Note that while the electronic transitions are quantized, the chromophore-solvent interaction energy is treated as a classical continuum due to the large number of molecules involved. Although emission is depicted as taking place from the minimum of the excited state chromophore-solvent interaction potential, significant emission can take place before equilibrium is reached when the viscosity of the solvent is high, or the lifetime of the excited state is short. The energy difference between absorbed and emitted photons depicted in Figure 7 is the solvation contribution to the Stokes shift. See also Born–Oppenheimer approximation Molecular electronic transition Ultraviolet-visible spectroscopy Quantum harmonic oscillator Morse potential Vibronic coupling Zero-phonon line and phonon sideband Sudden approximation References Further reading Link Link Link Link Link Link External links Quantum chemistry Spectroscopy Molecular physics
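As a compact numerical illustration of the vibrational overlap factors central to the article above: in the idealized case of two harmonic potentials of equal frequency whose minima are displaced along the nuclear (or configurational) coordinate, the 0 → n Franck–Condon factors reduce to a Poisson distribution governed by a dimensionless Huang–Rhys factor S. The sketch below assumes exactly that idealization (the function name and the value of S are invented for the illustration); the same form governs the relative intensities of the zero-phonon line and phonon sidebands discussed in the phonon section.

```python
import math

def franck_condon_factors(S, n_max):
    """0 -> n Franck-Condon factors for two equal-frequency harmonic
    oscillators whose minima are displaced; S is the Huang-Rhys factor
    (dimensionless). The factors follow a Poisson distribution."""
    return [math.exp(-S) * S**n / math.factorial(n) for n in range(n_max + 1)]

S = 1.5                           # assumed Huang-Rhys factor (illustrative)
factors = franck_condon_factors(S, 10)

for n, f in enumerate(factors):
    print(f"0 -> {n}: FC factor = {f:.4f}")

# Summed over all final levels the factors equal 1 (truncated here at n = 10);
# the most intense line falls near n ~ S, i.e. the "vertical" transition.
print("sum over shown levels:", round(sum(factors), 4))
```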
Franck–Condon principle
[ "Physics", "Chemistry" ]
3,172
[ "Molecular physics", "Spectrum (physical sciences)", "Quantum chemistry", "Instrumental analysis", "Quantum mechanics", "Theoretical chemistry", " molecular", "nan", "Atomic", "Spectroscopy", " and optical physics" ]
1,590,804
https://en.wikipedia.org/wiki/Method%20of%20distinguished%20element
In the mathematical field of enumerative combinatorics, identities are sometimes established by arguments that rely on singling out one "distinguished element" of a set. Definition Let be a family of subsets of the set and let be a distinguished element of set . Then suppose there is a predicate that relates a subset to . Denote to be the set of subsets from for which is true and to be the set of subsets from for which is false, Then and are disjoint sets, so by the method of summation, the cardinalities are additive Thus the distinguished element allows for a decomposition according to a predicate that is a simple form of a divide and conquer algorithm. In combinatorics, this allows for the construction of recurrence relations. Examples are in the next section. Examples The binomial coefficient is the number of size-k subsets of a size-n set. A basic identity—one of whose consequences is that the binomial coefficients are precisely the numbers appearing in Pascal's triangle—states that: Proof: In a size-(n + 1) set, choose one distinguished element. The set of all size-k subsets contains: (1) all size-k subsets that do contain the distinguished element, and (2) all size-k subsets that do not contain the distinguished element. If a size-k subset of a size-(n + 1) set does contain the distinguished element, then its other k − 1 elements are chosen from among the other n elements of our size-(n + 1) set. The number of ways to choose those is therefore . If a size-k subset does not contain the distinguished element, then all of its k members are chosen from among the other n "non-distinguished" elements. The number of ways to choose those is therefore . The number of subsets of any size-n set is 2n. Proof: We use mathematical induction. The basis for induction is the truth of this proposition in case n = 0. The empty set has 0 members and 1 subset, and 20 = 1. The induction hypothesis is the proposition in case n; we use it to prove case n + 1. In a size-(n + 1) set, choose a distinguished element. Each subset either contains the distinguished element or does not. If a subset contains the distinguished element, then its remaining elements are chosen from among the other n elements. By the induction hypothesis, the number of ways to do that is 2n. If a subset does not contain the distinguished element, then it is a subset of the set of all non-distinguished elements. By the induction hypothesis, the number of such subsets is 2n. Finally, the whole list of subsets of our size-(n + 1) set contains 2n + 2n = 2n+1 elements. Let Bn be the nth Bell number, i.e., the number of partitions of a set of n members. Let Cn be the total number of "parts" (or "blocks", as combinatorialists often call them) among all partitions of that set. For example, the partitions of the size-3 set {a, b, c} may be written thus: We see 5 partitions, containing 10 blocks, so B3 = 5 and C3 = 10. An identity states: Proof: In a size-(n + 1) set, choose a distinguished element. In each partition of our size-(n + 1) set, either the distinguished element is a "singleton", i.e., the set containing only the distinguished element is one of the blocks, or the distinguished element belongs to a larger block. If the distinguished element is a singleton, then deletion of the distinguished element leaves a partition of the set containing the n non-distinguished elements. There are Bn ways to do that. 
If the distinguished element belongs to a larger block, then its deletion leaves a block in a partition of the set containing the n non-distinguished elements. There are Cn such blocks. See also Combinatorial principles Combinatorial proof References Combinatorics Mathematical principles
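The identities argued above are easy to confirm by brute force for small n. The displayed formula for the Bell-number identity did not survive extraction, but the proof given corresponds to B_{n+1} = B_n + C_n. The sketch below (the helper functions are invented for this illustration) enumerates subsets and set partitions directly and checks the Pascal-type recurrence, the 2^n subset count, and the Bell-number identity.

```python
from itertools import combinations
from math import comb

def set_partitions(elements):
    """Yield all partitions of a list of elements as lists of blocks."""
    if not elements:
        yield []
        return
    first, rest = elements[0], elements[1:]
    for partition in set_partitions(rest):
        # put `first` into an existing block ...
        for i, block in enumerate(partition):
            yield partition[:i] + [block + [first]] + partition[i + 1:]
        # ... or into a new singleton block
        yield partition + [[first]]

def bell(n):
    """Bell number B_n: number of partitions of an n-element set."""
    return sum(1 for _ in set_partitions(list(range(n))))

def total_blocks(n):
    """C_n: total number of blocks over all partitions of an n-element set."""
    return sum(len(p) for p in set_partitions(list(range(n))))

for n in range(0, 7):
    # Pascal-type recurrence proved above with a distinguished element
    for k in range(0, n + 1):
        assert comb(n + 1, k) == (comb(n, k - 1) if k > 0 else 0) + comb(n, k)
    # number of subsets of a size-n set is 2^n
    assert sum(1 for k in range(n + 1) for _ in combinations(range(n), k)) == 2**n
    # Bell-number identity B_{n+1} = B_n + C_n
    assert bell(n + 1) == bell(n) + total_blocks(n)

print("verified for n = 0..6; e.g. B_3 =", bell(3), ", C_3 =", total_blocks(3),
      ", B_4 =", bell(4))
```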
Method of distinguished element
[ "Mathematics" ]
858
[ "Mathematical principles", "Discrete mathematics", "Combinatorics" ]
1,590,842
https://en.wikipedia.org/wiki/Work%20hardening
Work hardening, also known as strain hardening, is the process by which a material's load-bearing capacity (strength) increases during plastic (permanent) deformation. This characteristic is what sets ductile materials apart from brittle materials. Work hardening may be desirable, undesirable, or inconsequential, depending on the application. This strengthening occurs because of dislocation movements and dislocation generation within the crystal structure of the material. Many non-brittle metals with a reasonably high melting point as well as several polymers can be strengthened in this fashion. Alloys not amenable to heat treatment, including low-carbon steel, are often work-hardened. Some materials cannot be work-hardened at low temperatures, such as indium, however others can be strengthened only via work hardening, such as pure copper and aluminum. Undesirable work hardening An example of undesirable work hardening is during machining when early passes of a cutter inadvertently work-harden the workpiece surface, causing damage to the cutter during the later passes. Certain alloys are more prone to this than others; superalloys such as Inconel require materials science machining strategies that take it into account. For metal objects designed to flex, such as springs, specialized alloys are usually employed in order to avoid work hardening (a result of plastic deformation) and metal fatigue, with specific heat treatments required to obtain the necessary characteristics. Intentional work hardening An example of desirable work hardening is that which occurs in metalworking processes that intentionally induce plastic deformation to exact a shape change. These processes are known as cold working or cold forming processes. They are characterized by shaping the workpiece at a temperature below its recrystallization temperature, usually at ambient temperature. Cold forming techniques are usually classified into four major groups: squeezing, bending, drawing, and shearing. Applications include the heading of bolts and cap screws and the finishing of cold rolled steel. In cold forming, metal is formed at high speed and high pressure using tool steel or carbide dies. The cold working of the metal increases the hardness, yield strength, and tensile strength. Theory Before work hardening, the lattice of the material exhibits a regular, nearly defect-free pattern (almost no dislocations). The defect-free lattice can be created or restored at any time by annealing. As the material is work hardened it becomes increasingly saturated with new dislocations, and more dislocations are prevented from nucleating (a resistance to dislocation-formation develops). This resistance to dislocation-formation manifests itself as a resistance to plastic deformation; hence, the observed strengthening. In metallic crystals, this is a reversible process and is usually carried out on a microscopic scale by defects called dislocations, which are created by fluctuations in local stress fields within the material culminating in a lattice rearrangement as the dislocations propagate through the lattice. At normal temperatures the dislocations are not annihilated by annealing. Instead, the dislocations accumulate, interact with one another, and serve as pinning points or obstacles that significantly impede their motion. This leads to an increase in the yield strength of the material and a subsequent decrease in ductility. 
Such deformation increases the concentration of dislocations which may subsequently form low-angle grain boundaries surrounding sub-grains. Cold working generally results in a higher yield strength as a result of the increased number of dislocations and the Hall–Petch effect of the sub-grains, and a decrease in ductility. The effects of cold working may be reversed by annealing the material at high temperatures where recovery and recrystallization reduce the dislocation density. A material's work hardenability can be predicted by analyzing a stress–strain curve, or studied in context by performing hardness tests before and after a process. Elastic and plastic deformation Work hardening is a consequence of plastic deformation, a permanent change in shape. This is distinct from elastic deformation, which is reversible. Most materials do not exhibit only one or the other, but rather a combination of the two. The following discussion mostly applies to metals, especially steels, which are well studied. Work hardening occurs most notably for ductile materials such as metals. Ductility is the ability of a material to undergo plastic deformations before fracture (for example, bending a steel rod until it finally breaks). The tensile test is widely used to study deformation mechanisms. This is because under compression, most materials will experience trivial (lattice mismatch) and non-trivial (buckling) events before plastic deformation or fracture occur. Hence the intermediate processes that occur to the material under uniaxial compression before the incidence of plastic deformation make the compressive test fraught with difficulties. A material generally deforms elastically under the influence of small forces; the material returns quickly to its original shape when the deforming force is removed. This phenomenon is called elastic deformation. This behavior in materials is described by Hooke's Law. Materials behave elastically until the deforming force increases beyond the elastic limit, which is also known as the yield stress. At that point, the material is permanently deformed and fails to return to its original shape when the force is removed. This phenomenon is called plastic deformation. For example, if one stretches a coil spring up to a certain point, it will return to its original shape, but once it is stretched beyond the elastic limit, it will remain deformed and won't return to its original state. Elastic deformation stretches the bonds between atoms away from their equilibrium radius of separation, without applying enough energy to break the inter-atomic bonds. Plastic deformation, on the other hand, breaks inter-atomic bonds, and therefore involves the rearrangement of atoms in a solid material. Dislocations and lattice strain fields In materials science parlance, dislocations are defined as line defects in a material's crystal structure. The bonds surrounding the dislocation are already elastically strained by the defect compared to the bonds between the constituents of the regular crystal lattice. Therefore, these bonds break at relatively lower stresses, leading to plastic deformation. The strained bonds around a dislocation are characterized by lattice strain fields. For example, there are compressively strained bonds directly next to an edge dislocation and strained in tension bonds beyond the end of an edge dislocation. These form compressive strain fields and tensile strain fields, respectively. Strain fields are analogous to electric fields in certain ways. 
Specifically, the strain fields of dislocations obey similar laws of attraction and repulsion; in order to reduce overall strain, compressive strains are attracted to tensile strains, and vice versa. The visible (macroscopic) results of plastic deformation are the result of microscopic dislocation motion. For example, the stretching of a steel rod in a tensile tester is accommodated through dislocation motion on the atomic scale. Increase of dislocations and work hardening Increase in the number of dislocations is a quantification of work hardening. Plastic deformation occurs as a consequence of work being done on a material; energy is added to the material. In addition, the energy is almost always applied fast enough and in large enough magnitude to not only move existing dislocations, but also to produce a great number of new dislocations by jarring or working the material sufficiently enough. New dislocations are generated in proximity to a Frank–Read source. Yield strength is increased in a cold-worked material. Using lattice strain fields, it can be shown that an environment filled with dislocations will hinder the movement of any one dislocation. Because dislocation motion is hindered, plastic deformation cannot occur at normal stresses. Upon application of stresses just beyond the yield strength of the non-cold-worked material, a cold-worked material will continue to deform using the only mechanism available: elastic deformation, the regular scheme of stretching or compressing of electrical bonds (without dislocation motion) continues to occur, and the modulus of elasticity is unchanged. Eventually the stress is great enough to overcome the strain-field interactions and plastic deformation resumes. However, ductility of a work-hardened material is decreased. Ductility is the extent to which a material can undergo plastic deformation, that is, it is how far a material can be plastically deformed before fracture. A cold-worked material is, in effect, a normal (brittle) material that has already been extended through part of its allowed plastic deformation. If dislocation motion and plastic deformation have been hindered enough by dislocation accumulation, and stretching of electronic bonds and elastic deformation have reached their limit, a third mode of deformation occurs: fracture. Quantification of work hardening The shear strength, , of a dislocation is dependent on the shear modulus, G, the magnitude of the Burgers vector, b, and the dislocation density, : where is the intrinsic strength of the material with low dislocation density and is a correction factor specific to the material. As shown in Figure 1 and the equation above, work hardening has a half root dependency on the number of dislocations. The material exhibits high strength if there are either high levels of dislocations (greater than 1014 dislocations per m2) or no dislocations. A moderate number of dislocations (between 107 and 109 dislocations per m2) typically results in low strength. Example For an extreme example, in a tensile test a bar of steel is strained to just before the length at which it usually fractures. The load is released smoothly and the material relieves some of its strain by decreasing in length. The decrease in length is called the elastic recovery, and the result is a work-hardened steel bar. The fraction of length recovered (length recovered/original length) is equal to the yield-stress divided by the modulus of elasticity. 
(Here we discuss true stress in order to account for the drastic decrease in diameter in this tensile test.) The length recovered after removing a load from a material just before it breaks is equal to the length recovered after removing a load just before it enters plastic deformation. The work-hardened steel bar has a large enough number of dislocations that the strain field interaction prevents all plastic deformation. Subsequent deformation requires a stress that varies linearly with the strain observed, the slope of the graph of stress vs. strain is the modulus of elasticity, as usual. The work-hardened steel bar fractures when the applied stress exceeds the usual fracture stress and the strain exceeds usual fracture strain. This may be considered to be the elastic limit and the yield stress is now equal to the fracture toughness, which is much higher than a non-work-hardened steel yield stress. The amount of plastic deformation possible is zero, which is less than the amount of plastic deformation possible for a non-work-hardened material. Thus, the ductility of the cold-worked bar is reduced. Substantial and prolonged cavitation can also produce strain hardening. Empirical relations There are two common mathematical descriptions of the work hardening phenomenon. Hollomon's equation is a power law relationship between the stress and the amount of plastic strain: where σ is the stress, K is the strength index or strength coefficient, εp is the plastic strain and n is the strain hardening exponent. Ludwik's equation is similar but includes the yield stress: If a material has been subjected to prior deformation (at low temperature) then the yield stress will be increased by a factor depending on the amount of prior plastic strain ε0: The constant K is structure dependent and is influenced by processing while n is a material property normally lying in the range 0.2–0.5. The strain hardening index can be described by: This equation can be evaluated from the slope of a log(σ) – log(ε) plot. Rearranging allows a determination of the rate of strain hardening at a given stress and strain: Work hardening in specific materials Steel Steel is an important engineering material, used in many applications. Steel may be work hardened by deformation at low temperature, called cold working. Typically, an increase in cold work results in a decrease in the strain hardening exponent. Similarly, high strength steels tend to exhibit a lower strain hardening exponent. Copper Copper was the first metal in common use for tools and containers since it is one of the few metals available in non-oxidized form, not requiring the smelting of an ore. Copper is easily softened by heating and then cooling (it does not harden by quenching, e.g., quenching in cool water). In this annealed state it may then be hammered, stretched and otherwise formed, progressing toward the desired final shape but becoming harder and less ductile as work progresses. If work continues beyond a certain hardness the metal will tend to fracture when worked and so it may be re-annealed periodically as shaping continues. Annealing is stopped when the workpiece is near its final desired shape, and so the final product will have a desired strength and hardness. The technique of repoussé exploits these properties of copper, enabling the construction of durable jewelry articles and sculptures (such as the Statue of Liberty). 
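As a numerical sketch of the quantitative relations given in the "Quantification of work hardening" and "Empirical relations" sections above: the first function evaluates the Taylor-type dependence of shear strength on dislocation density (τ = τ0 + αGb√ρ), and the second uses Hollomon's equation and recovers the strain-hardening exponent n as the slope of a log(σ)–log(ε) plot. All numerical values (shear modulus, Burgers vector, K, n, α) are illustrative assumptions rather than data from the article.

```python
import numpy as np

def taylor_shear_strength(tau0, alpha, G, b, rho):
    """Taylor-type work-hardening relation: tau = tau0 + alpha * G * b * sqrt(rho).
    tau0  : intrinsic strength at low dislocation density [Pa]
    alpha : dimensionless correction factor (material specific)
    G     : shear modulus [Pa], b : Burgers vector [m], rho : dislocation density [1/m^2]
    """
    return tau0 + alpha * G * b * np.sqrt(rho)

def hollomon_stress(K, n, eps_p):
    """Hollomon's equation: sigma = K * eps_p**n (true stress vs. plastic strain)."""
    return K * eps_p**n

# Illustrative, roughly steel-like values (not taken from the article)
G, b = 80e9, 2.5e-10                 # shear modulus [Pa], Burgers vector [m]
for rho in (1e10, 1e12, 1e14):       # dislocation densities [1/m^2]
    tau = taylor_shear_strength(tau0=20e6, alpha=0.3, G=G, b=b, rho=rho)
    print(f"rho = {rho:.0e} m^-2 -> tau = {tau / 1e6:.0f} MPa")

# Recover the strain-hardening exponent n from the slope of log(sigma) vs log(eps)
K, n_true = 900e6, 0.25
eps = np.linspace(0.01, 0.30, 50)
sigma = hollomon_stress(K, n_true, eps)
n_fit = np.polyfit(np.log(eps), np.log(sigma), 1)[0]
print(f"fitted strain-hardening exponent n = {n_fit:.3f} (input was {n_true})")
```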
Gold and other precious metals Much gold jewelry is produced by casting, with little or no cold working, which, depending on the alloy grade, may leave the metal relatively soft and bendable. However, a jeweler may intentionally use work hardening to strengthen wearable objects that are exposed to stress, such as rings. Aluminum Items made from aluminum and its alloys must be carefully designed to minimize or evenly distribute flexure, which can lead to work hardening and, in turn, stress cracking, possibly causing catastrophic failure. For this reason, modern aluminum aircraft have an imposed working lifetime (dependent upon the type of loads encountered), after which the aircraft must be retired. References Bibliography Industrial processes Metallurgical processes Metalworking Strengthening mechanisms of materials
Work hardening
[ "Chemistry", "Materials_science", "Engineering" ]
2,913
[ "Strengthening mechanisms of materials", "Metallurgical processes", "Materials science", "Metallurgy" ]
1,590,868
https://en.wikipedia.org/wiki/Uniformly%20connected%20space
In topology and related areas of mathematics, a uniformly connected space or Cantor connected space is a uniform space U such that every uniformly continuous function from U to a discrete uniform space is constant. A uniform space U is called uniformly disconnected if it is not uniformly connected. Properties A compact uniform space is uniformly connected if and only if it is connected Examples every connected space is uniformly connected the rational numbers and the irrational numbers are disconnected but uniformly connected: for every ε > 0, any two points can be joined by a finite chain of points with consecutive distances less than ε, so a uniformly continuous map into a discrete uniform space cannot take different values on them See also connectedness References Cantor, Georg, Über unendliche, lineare Punktmannigfaltigkeiten, Mathematische Annalen 21 (1883), 545–591. Uniform spaces
Uniformly connected space
[ "Mathematics" ]
130
[ "Uniform spaces", "Space (mathematics)", "Topology stubs", "Topological spaces", "Topology" ]
1,590,904
https://en.wikipedia.org/wiki/Precipitation%20hardening
Precipitation hardening, also called age hardening or particle hardening, is a heat treatment technique used to increase the yield strength of malleable materials, including most structural alloys of aluminium, magnesium, nickel, titanium, and some steels, stainless steels, and duplex stainless steel. In superalloys, it is known to cause yield strength anomaly providing excellent high-temperature strength. Precipitation hardening relies on changes in solid solubility with temperature to produce fine particles of an impurity phase, which impede the movement of dislocations, or defects in a crystal's lattice. Since dislocations are often the dominant carriers of plasticity, this serves to harden the material. The impurities play the same role as the particle substances in particle-reinforced composite materials. Just as the formation of ice in air can produce clouds, snow, or hail, depending upon the thermal history of a given portion of the atmosphere, precipitation in solids can produce many different sizes of particles, which have radically different properties. Unlike ordinary tempering, alloys must be kept at elevated temperature for hours to allow precipitation to take place. This time delay is called "aging". Solution treatment and aging is sometimes abbreviated "STA" in specifications and certificates for metals. Two different heat treatments involving precipitates can alter the strength of a material: solution heat treating and precipitation heat treating. Solid solution strengthening involves formation of a single-phase solid solution via quenching. Precipitation heat treating involves the addition of impurity particles to increase a material's strength. Kinetics versus thermodynamics This technique exploits the phenomenon of supersaturation, and involves careful balancing of the driving force for precipitation and the thermal activation energy available for both desirable and undesirable processes. Nucleation occurs at a relatively high temperature (often just below the solubility limit) so that the kinetic barrier of surface energy can be more easily overcome and the maximum number of precipitate particles can form. These particles are then allowed to grow at lower temperature in a process called ageing. This is carried out under conditions of low solubility so that thermodynamics drive a greater total volume of precipitate formation. Diffusion's exponential dependence upon temperature makes precipitation strengthening, like all heat treatments, a fairly delicate process. Too little diffusion (under ageing), and the particles will be too small to impede dislocations effectively; too much (over ageing), and they will be too large and dispersed to interact with the majority of dislocations. Alloy design Precipitation strengthening is possible if the line of solid solubility slopes strongly toward the center of a phase diagram. While a large volume of precipitate particles is desirable, a small enough amount of the alloying element should be added so that it remains easily soluble at some reasonable annealing temperature. Although large volumes are often wanted, they are wanted in small particle sizes as to avoid a decrease in strength as is explained below. Elements used for precipitation strengthening in typical aluminium and titanium alloys make up about 10% of their composition. While binary alloys are more easily understood as an academic exercise, commercial alloys often use three components for precipitation strengthening, in compositions such as Al(Mg, Cu) and Ti(Al, V). 
A large number of other constituents may be unintentional, but benign, or may be added for other purposes such as grain refinement or corrosion resistance. An example is the addition of Sc and Zr to aluminum alloys to form FCC L12 structures that help refine grains and strengthen the material. In some cases, such as many aluminium alloys, an increase in strength is achieved at the expense of corrosion resistance. More recent technology is focused on additive manufacturing due to the higher amount of metastable phases that can be obtained due to the fast cooling, whereas traditional casting is more limited to equilibrium phases. The addition of large amounts of nickel and chromium needed for corrosion resistance in stainless steels means that traditional hardening and tempering methods are not effective. However, precipitates of chromium, copper, or other elements can strengthen the steel by similar amounts in comparison to hardening and tempering. The strength can be tailored by adjusting the annealing process, with lower initial temperatures resulting in higher strengths. The lower initial temperatures increase the driving force of nucleation. More driving force means more nucleation sites, and more sites means more places for dislocations to be disrupted while the finished part is in use. Many alloy systems allow the ageing temperature to be adjusted. For instance, some aluminium alloys used to make rivets for aircraft construction are kept in dry ice from their initial heat treatment until they are installed in the structure. After this type of rivet is deformed into its final shape, ageing occurs at room temperature and increases its strength, locking the structure together. Higher ageing temperatures would risk over-ageing other parts of the structure, and require expensive post-assembly heat treatment because a high ageing temperature promotes the precipitate to grow too readily. Types of hardening There are several ways by which a matrix can be hardened by precipitates, which could also be different for deforming precipitates and non-deforming precipitates. Deforming particles (weak precipitates): Coherency hardening occurs when the interface between the particles and the matrix is coherent, which depends on parameters like particle size and the way that particles are introduced. Coherency is where the lattice of the precipitate and that of the matrix are continuous across the interface. Small particles precipitated from supersaturated solid solution usually have coherent interfaces with the matrix. Coherency hardening originates from the atomic volume difference between precipitate and the matrix, which results in a coherency strain. If the atomic volume of the precipitate is smaller, there will be tension because the lattice atoms are located closer than their normal conditions while when the atomic volume of the precipitate is larger, there will be compression of the lattice atoms, as they are further apart than their normal position. Regardless of whether the lattice is under compression or tension, the associated stress field interacts with dislocations leading to decreased dislocation motion either by repulsion or attraction of the dislocations, leading to an increase in yield strength, similar to the size effect in solid solution strengthening. What differentiates this mechanism from solid solution strengthening is the fact that the precipitate has a definite size, not an atom, and therefore a stronger interaction with dislocations. 
Modulus hardening results from the different shear modulus of the precipitate and the matrix, which leads to an energy change of dislocation line tension when the dislocation line cuts the precipitate. Also, the dislocation line could bend when entering the precipitate, increasing the affected length of the dislocation line. Again, the strengthening arises in a way similar to that of solid solution strengthening, where there is a mismatch in the lattice that interacts with the dislocations, impeding their motion. Of course, the severity of the interaction is different than that of solid solution and coherency strengthening. Chemical strengthening is associated with the surface energy of the newly introduced precipitate-matrix interface when the particle is sheared by dislocations. Because it takes energy to make the surface, some of the stress that is causing dislocation motion is accommodated by the additional surfaces. Like modulus hardening, the analysis of interfacial area can be complicated by dislocation line distortion. Order strengthening occurs when the precipitate is an ordered structure such that bond energy before and after shearing is different. For example, in an ordered cubic crystal with composition AB, the bond energy of A-A and B-B after shearing is higher than that of the A-B bond before. The associated energy increase per unit area is anti-phase boundary energy and accumulates gradually as the dislocation passes through the particle. However, a second dislocation could remove the anti-phase domain left by the first dislocation when traverses the particle. The attraction of the particle and the repulsion of the first dislocation maintains a balanced distance between two dislocations, which makes order strengthening more complicated. Except for when there are very fine particles, this mechanism is generally not as effective as others to strengthen. Another way to consider this mechanism is that when a dislocation shears a particle, the stacking sequence between the new surface made and the matrix is broken, and the bonding is not stable. To get the sequence back into this interface, another dislocation, is needed to shift the stacking. The first and second dislocation are often called a superdislocation. Because superdislocations are required to shear these particles, there is strengthening because of the decreased dislocation motion. Non-deforming particles (strong precipitate): In non-deforming particles, where the spacing is small enough or the precipitate-matrix interface is disordered, dislocation bows instead of shears. The strengthening is related to the effective spacing between particles considering finite particle size, but not particle strength, because once the particle is strong enough for the dislocations to bow rather than cut, further increase of the dislocation penetration resistance won't affect strengthening. The main mechanism therefore is Orowan strengthening, where the strong particles do not allow for dislocations to move past. Therefore bowing must occur and in this bowing can cause dislocation loops to build up, which decreases the space available for additional dislocation to bow between. If the dislocations cannot shear particles and cannot move past them, then dislocation motion is successfully impeded. Theory The primary species of precipitation strengthening are second phase particles. These particles impede the movement of dislocations throughout the lattice. 
You can determine whether or not second phase particles will precipitate into solution from the solidus line on the phase diagram for the particles. Physically, this strengthening effect can be attributed both to size and modulus effects, and to interfacial or surface energy. The presence of second phase particles often causes lattice distortions. These lattice distortions result when the precipitate particles differ in size and crystallographic structure from the host atoms. Smaller precipitate particles in a host lattice leads to a tensile stress, whereas larger precipitate particles leads to a compressive stress. Dislocation defects also create a stress field. Above the dislocation there is a compressive stress and below there is a tensile stress. Consequently, there is a negative interaction energy between a dislocation and a precipitate that each respectively cause a compressive and a tensile stress or vice versa. In other words, the dislocation will be attracted to the precipitate. In addition, there is a positive interaction energy between a dislocation and a precipitate that have the same type of stress field. This means that the dislocation will be repulsed by the precipitate. Precipitate particles also serve by locally changing the stiffness of a material. Dislocations are repulsed by regions of higher stiffness. Conversely, if the precipitate causes the material to be locally more compliant, then the dislocation will be attracted to that region. In addition, there are three types of interphase boundaries (IPBs). The first type is a coherent or ordered IPB, the atoms match up one by one along the boundary. Due to the difference in lattice parameters of the two phases, a coherency strain energy is associated with this type of boundary. The second type is a fully disordered IPB and there are no coherency strains, but the particle tends to be non-deforming to dislocations. The last one is a partially ordered IPB, so coherency strains are partially relieved by the periodic introduction of dislocations along the boundary. In coherent precipitates in a matrix, if the precipitate has a lattice parameter less than that of the matrix, then the atomic match across the IPB leads to an internal stress field that interacts with moving dislocations. There are two deformation paths, one is the coherency hardening, the lattice mismatch is Where is the shear modulus, is the coherent lattice mismatch, is the particle radius, is the particle volume fraction, is the burgers vector, equals the concentration. The other one is modulus hardening. The energy of the dislocation energy is , when it cuts through the precipitate, its energy is , the change in line segment energy is . The maximum dislocation length affected is the particle diameter, the line tension change takes place gradually over a distance equal to . The interaction force between the dislocation and the precipitate is and . Furthermore, a dislocation may cut through a precipitate particle, and introduce more precipitate-matrix interface, which is chemical strengthening. When the dislocation is entering the particle and is within the particle, the upper part of the particle shears b with respect to the lower part accompanies the dislocation entry. A similar process occurs when the dislocation exits the particle. The complete transit is accompanied by creation of matrix-precipitate surface area of approximate magnitude , where r is the radius of the particle and b is the magnitude of the burgers vector. 
The resulting increase in surface energy is , where is the surface energy. The maximum force between the dislocation and particle is , the corresponding flow stress should be . When a particle is sheared by a dislocation, a threshold shear stress is needed to deform the particle. The expression for the required shear stress is as follows: When the precipitate size is small, the required shear stress is proportional to the precipitate size , However, for a fixed particle volume fraction, this stress may decrease at larger values of r owing to an increase in particle spacing. The overall level of the curve is raised by increases in either inherent particle strength or particle volume fraction. The dislocation can also bow around a precipitate particle through so-called Orowan mechanism. Since the particle is non-deforming, the dislocation bows around the particles (), the stress required to effect the bypassing is inversely proportional to the interparticle spacing , that is, , where is the particle radius. Dislocation loops encircle the particles after the bypass operation, a subsequent dislocation would have to be extruded between the loops. Thus, the effective particle spacing for the second dislocation is reduced to with , and the bypassing stress for this dislocation should be , which is greater than for the first one. However, as the radius of particle increases, will increase so as to maintain the same volume fraction of precipitates, will increase and will decrease. As a result, the material will become weaker as the precipitate size increases. For a fixed particle volume fraction, decreases with increasing r as this is accompanied by an increase in particle spacing. On the other hand, increasing increases the level of the stress as a result of a finer particle spacing. The level of is unaffected by particle strength. That is, once a particle is strong enough to resist cutting, any further increase in its resistance to dislocation penetration has no effect on , which depends only on matrix properties and effective particle spacing. If particles of A of volume fraction are dispersed in a matrix, particles are sheared for and are bypassed for , maximum strength is obtained at , where the cutting and bowing stresses are equal. If inherently harder particles of B of the same volume fraction are present, the level of the curve is increased but that of the one is not. Maximum hardening, greater than that for A particles, is found at . Increasing the volume fraction of A raises the level of both and and increases the maximum strength obtained. The latter is found at , which may be either less than or greater than depending on the shape of the curve. Governing equations There are two main types of equations to describe the two mechanisms for precipitation hardening based on weak and strong precipitates. Weak precipitates can be sheared by dislocations while strong precipitates cannot, and therefore the dislocation must bow. First, it is important to consider the difference between these two different mechanisms in terms of the dislocation line tension that they make. The line tension balance equation is: Where is the radius of the dislocation at a certain stress. Strong obstacles have small due to the bowing of the dislocation. Still, decreasing obstacle strength will increase the and must be included in the calculation. L’ is also equal to the effective spacing between obstacles L. 
This leaves an equation for strong obstacles: Considering weak particles, should be nearing due to the dislocation line staying relatively straight through obstacles. Also , L’ will be: which states the weak particle equation: Now, consider the mechanisms for each regime: Dislocation cutting through particles: For most strengthening at the early stage, it increases with , where is a dimensionless mismatch parameter (for example, in coherency hardening, is the fractional change of precipitate and matrix lattice parameter), is the volume fraction of precipitate, is the precipitate radius, and is the magnitude of the Burgers vector. According to this relationship, materials strength increases with increasing mismatch, volume fraction, and particle size, so that dislocation is easier to cut through particles with smaller radius. For different types of hardening through cutting, governing equations are as following. For coherency hardening, , , where is increased shear stress, is the shear modulus of the matrix, and are the lattice parameter of the precipitate or the matrix. For modulus hardening, , , where and are the shear modulus of the precipitate or the matrix. For chemical strengthening, , , where is the particle-matrix interphase surface energy. For order strengthening, (low , early stage precipitation), where the dislocations are widely separated; (high , early stage precipitation), where the dislocations are not widely separated; , where is anti-phase boundary energy. Dislocations bowing around particles: When the precipitate is strong enough to resist dislocation penetration, dislocation bows and the maximum stress is given by the Orowan equation. Dislocation bowing, also called Orowan strengthening, is more likely to occur when the particle density in the material is lower. where is the material strength, is the shear modulus, is the magnitude of the Burgers vector, is the distance between pinning points, and is the second phase particle radius. This governing equation shows that for dislocation bowing the strength is inversely proportional to the second phase particle radius , because when the volume fraction of the precipitate is fixed, the spacing between particles increases concurrently with the particle radius , therefore increases with . These governing equations show that the precipitation hardening mechanism depends on the size of the precipitate particles. At small , cutting will dominate, while at large , bowing will dominate. Looking at the plot of both equations, it is clear that there is a critical radius at which max strengthening occurs. This critical radius is typically 5-30 nm. The Orowan strengthening model above neglects changes to the dislocations due to the bending. If bowing is accounted for, and the instability condition in the Frank-Read mechanism is assumed, the critical stress for dislocations bowing between pinning segments can be described as: where is a function of , is the angle between the dislocation line and the Burgers vector, is the effective particle separation, is the Burgers vector, and is the particle radius. Other Considerations Grain Size Control Precipitates in a polycrystalline material can act as grain refiners if they are nucleated or located near grain boundaries, where they pin the grain boundaries as an alloy is solidifying and do not allow for a coarse microstructure. This is helpful, as finer microstructures often outperform (mechanical properties) coarser ones at room temperatures. 
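A small numerical sketch of the cutting-versus-bowing competition described by the governing equations above: at fixed volume fraction, the stress to shear weak particles grows roughly with the square root of the particle radius, while the Orowan bowing stress for strong particles falls as the inter-particle spacing (and hence the radius) grows, so the operative (lower) stress peaks at a critical radius. The prefactor for the cutting branch and the spacing model L = C·r/√f are assumptions chosen only so that the crossover lands in the 5–30 nm range quoted above; they are not expressions taken from the article.

```python
import numpy as np

G = 70e9        # shear modulus [Pa]            (illustrative, roughly aluminium)
b = 2.86e-10    # Burgers vector magnitude [m]  (illustrative)
f = 0.02        # precipitate volume fraction   (illustrative)
k_cut = 1.7e8   # lumped prefactor for the shearing branch [Pa] (assumed, tuned)
C_L = 2.0       # geometric constant in the spacing model L = C_L * r / sqrt(f) (assumed)

def tau_cutting(r):
    """Weak, shearable particles: stress grows ~ sqrt(f * r / b) at fixed f."""
    return k_cut * np.sqrt(f * r / b)

def tau_orowan(r):
    """Strong particles: Orowan bowing, tau ~ G * b / L with L = C_L * r / sqrt(f)."""
    L = C_L * r / np.sqrt(f)
    return G * b / L

radii = np.linspace(1e-9, 50e-9, 500)                     # radii from 1 to 50 nm
tau = np.minimum(tau_cutting(radii), tau_orowan(radii))   # operative (weaker) mechanism
r_peak = radii[np.argmax(tau)]

print(f"peak strengthening near r = {r_peak * 1e9:.1f} nm, "
      f"delta-tau = {tau.max() / 1e6:.0f} MPa")
# Below r_peak dislocations shear the particles; above it they bow around them.
```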
In recent times, nano-precipitates have been studied under creep conditions. These precipitates can also pin grain boundaries at higher temperatures, essentially acting as "friction". Very fine precipitates can additionally impede grain-boundary sliding under diffusional creep conditions, and if the precipitates are homogeneously dispersed in the matrix, the same precipitates within the grains can interact with dislocations under dislocation creep conditions. Secondary Precipitates Depending on their elemental compositions, precipitates that were not previously present can form under certain aging conditions. These secondary precipitates arise from the removal of solutes from the matrix solid solution. Their control can be exploited to tailor the microstructure and influence properties. Computational discovery of new alloys While significant effort has been made to develop new alloys, experimental results take time and money to obtain. One possible alternative is simulation with density functional theory (DFT), which, in the context of precipitation hardening, can take advantage of the crystalline structure of the precipitates and of the matrix and allows the exploration of many more alternatives than traditional experiments. One strategy for these simulations is to focus on the ordered structures that can be found in many metal alloys, such as the long-period stacking ordered (LPSO) structures that have been observed in numerous systems. An LPSO structure is a layered configuration with a long stacking period along one axis, in which some layers are enriched in the precipitating elements. This allows the symmetry of the supercells to be exploited and suits currently available DFT methods well. In this way, researchers have developed strategies to screen possible strengthening precipitates that allow the weight of some metal alloys to be decreased. For example, Mg alloys have received growing interest as replacements for aluminium and steel in the vehicle industry because magnesium is one of the lightest structural metals. However, Mg alloys show low strength and ductility, which has limited their use. To overcome this, precipitation hardening through the addition of rare-earth elements has been used to improve alloy strength and ductility. Specifically, LPSO structures were found to be responsible for these improvements, producing an Mg alloy that exhibited a high yield strength of 610 MPa at 5% elongation at room temperature. Looking for cheaper alternatives to rare-earth (RE) elements, ternary Mg-Xl-Xs systems were simulated, where Xl and Xs correspond to atoms larger and smaller than Mg, respectively. This study confirmed more than 85 Mg-RE-Xs LPSO structures, demonstrating the ability of DFT to predict known ternary LPSO structures. The 11 non-RE Xl elements were then explored, and 4 of them were found to be thermodynamically stable; one of the resulting systems, Mg-Ca-Zn, is predicted to form an LPSO structure. Following these DFT predictions, other investigators performed experiments on the Mg-Zn-Y-Mn-Ca system and found that a 0.34 at% Ca addition enhanced the mechanical properties of the system through the formation of LPSO structures, achieving "a good balance of the strength and ductility". 
Examples of precipitation hardening materials 2000-series aluminium alloys (important examples: 2024 and 2019, also Y alloy and Hiduminium) 6000-series aluminium alloys (important example: 6061 for bicycle frames and aeronautical structures) 7000-series aluminium alloys (important examples: 7075 and 7475) 17-4 stainless steel (UNS S17400) Maraging steel Inconel 718 Alloy X-750 René 41 Waspaloy Mulberry (uranium alloy) NAK55 Low Carbon Steel See also Alfred Wilm Strength of materials Strengthening mechanisms of materials Metallurgy Superalloy References Further reading ASM metals handbook vol 4 heat treating External links Project aluMatter Metal heat treatments Strengthening mechanisms of materials
Precipitation hardening
[ "Chemistry", "Materials_science", "Engineering" ]
5,088
[ "Strengthening mechanisms of materials", "Metallurgical processes", "Metal heat treatments", "Materials science" ]
1,591,064
https://en.wikipedia.org/wiki/Breathalyzer
A breathalyzer or breathalyser (a portmanteau of breath and analyzer/analyser), also called an alcohol meter, is a device for measuring breath alcohol content (BrAC). It is commonly used by law enforcement officers during traffic stops. The name is a genericized trademark of the Breathalyzer brand of instruments developed by inventor Robert Frank Borkenstein in the 1950s. Origins Research into the possibilities of using breath to test for alcohol in a person's body dates as far back as 1874, when Francis E. Anstie made the observation that small amounts of alcohol were excreted in breath. In 1927, Emil Bogen produced a paper on breath analysis. He collected air in a football bladder and then tested this air for traces of alcohol, discovering that the alcohol content of 2 litres of expired air was a little greater than that of 1 cc of urine. Also in 1927, a Chicago chemist, William Duncan McNally, invented a breathalyzer in which the breath moving through chemicals in water would change color. One suggested use for his invention was for housewives to test whether their husbands had been drinking. In December 1927, in a case in Marlborough, England, Dr. Gorsky, a police surgeon, asked a suspect to inflate a football bladder with his breath. Since the 2 liters of the man's breath contained 1.5 mg of ethanol, Gorsky testified before the court that the defendant was "50% drunk". The use of drunkenness as the standard, as opposed to BAC, perhaps invalidated the analysis, as tolerance to alcohol varies. However, the story illustrates the general principles of breath analysis. In 1931, the first practical roadside breath-testing device, the drunkometer, was developed by Rolla Neil Harger of the Indiana University School of Medicine. The drunkometer collected a motorist's breath sample directly into a balloon inside the machine. The breath sample was then pumped through an acidified potassium permanganate solution. If there was alcohol in the breath sample, the solution changed color. The greater the color change, the more alcohol there was present in the breath. The drunkometer was manufactured and sold by Stephenson Corporation of Red Bank, New Jersey. In 1954, Robert Frank Borkenstein (1912–2002), then a captain with the Indiana State Police and later a professor at Indiana University Bloomington, introduced his trademarked Breathalyzer, which used chemical oxidation and photometry to determine alcohol concentrations. The invention of the Breathalyzer provided law enforcement with a quick and portable test to determine an individual's intoxication level via breath analysis. Subsequent breath analyzers have shifted primarily to infrared spectroscopy. In 1967 in Britain, Bill Ducie and Tom Parry Jones developed and marketed the first electronic breathalyser. They established Lion Laboratories in Cardiff. Ducie was a chartered electrical engineer, and Tom Parry Jones was a lecturer at UWIST. The Road Safety Act 1967 introduced the first legally enforceable maximum blood alcohol level for drivers in the UK, above which it became an offence to be in charge of a motor vehicle, and introduced the roadside breathalyser, which was made available to police forces across the country. In 1979, Lion Laboratories' version of the breathalyser, known as the Alcolyser and incorporating crystal-filled tubes that changed colour above a certain level of alcohol in the breath, was approved for police use. 
Lion Laboratories won the Queen's Award for Technological Achievement for the product in 1980, and it began to be marketed worldwide. The Alcolyser was superseded by the Lion Intoximeter 3000 in 1983, and later by the Lion Alcolmeter and Lion Intoxilyser. These later models used a fuel cell alcohol sensor rather than crystals, providing a more reliable curbside test and removing the need for blood or urine samples to be taken at a police station. In 1991, Lion Laboratories was sold to the American company MPD, Inc. Accuracy Breath analyzers do not directly measure blood alcohol concentration (BAC), which requires the analysis of a blood sample. Instead, they measure the amount of alcohol in one's breath, BrAC, generally reported in milligrams of alcohol per liter of breathed air. The relationship between BrAC and BAC is complex, and is affected by many factors. Calibration Calibration is the process of checking and adjusting the internal settings of a breath analyzer by comparing and adjting its test results to a known alcohol standard. Breath analyzer sensors drift over time and require periodic calibration to ensure accuracy. Many handheld breath analyzers sold to consumers use a silicon oxide sensor (also called a semiconductor sensor) to determine the alcohol concentration. These sensors are prone to contamination and interference from substances other than breath alcohol, and require recalibration or replacement every six months. Higher-end personal breath analyzers and professional-use breath alcohol testers use platinum fuel cell sensors. These too require recalibration but at less frequent intervals than semiconductor devices, usually once a year. There are two ways of calibrating a precision fuel cell breath analyzer, the wet-bath and the dry-gas methods. Each method requires specialized equipment and factory-trained technicians. It is not a procedure that can be conducted by untrained users or without the proper equipment. The dry-gas method utilizes a portable calibration standard which is a precise mixture of ethanol and inert nitrogen available in a pressurized canister. Initial equipment costs are less than alternative methods and the steps required are fewer. The equipment is also portable allowing calibrations to be done when and where required. The wet-bath method utilizes an ethanol/water standard in a precise specialized alcohol concentration, contained and delivered in specialized breath simulator equipment. The wet-bath method has a higher initial cost and is not intended to be portable. The standard must be fresh and replaced regularly. In addition, the assumed water-air partition ratio for aqueous ethanol must be taken into account along with its associated uncertainty. Some semiconductor models are designed specifically to allow the sensor module to be replaced without the need to send the unit to a calibration lab. Non-specific analysis One major problem with older breath analyzers is non-specificity: the machines identify not only the ethyl alcohol (or ethanol) found in alcoholic beverages but also other substances similar in molecular structure or reactivity, "interfering compounds". The oldest breath analyzer models pass breath through a solution of potassium dichromate, which oxidizes ethanol into acetic acid, changing color in the process. A monochromatic light beam is passed through this sample, and a detector records the change in intensity and, hence, the change in color, which is used to calculate the percent alcohol in the breath. 
However, since potassium dichromate is a strong oxidizer, numerous alcohol groups can be oxidized by it, producing false positives. This source of false positives is unlikely as very few other substances found in exhaled air are oxidizable. Infrared-based breath analyzers project an infrared beam of radiation through the captured breath in the sample chamber and detect the absorbance of the compound as a function of the wavelength of the beam, producing an absorbance spectrum that can be used to identify the compound, as the absorbance is due to the harmonic vibration and stretching of specific bonds in the molecule at specific wavelengths (see infrared spectroscopy). The characteristic bond of alcohols in infrared is the O-H bond, which gives a strong absorbance at a short wavelength. The more light is absorbed by compounds containing the alcohol group, the less reaches the detector on the other side—and the higher the reading. Other groups, most notably aromatic rings and carboxylic acids can give similar absorbance readings. Some natural and volatile interfering compounds do exist, however. For example, the National Highway Traffic Safety Administration has found that dieters and diabetics may have acetone levels hundreds or even thousands of times higher than those in others. Acetone is one of the many substances that can be falsely identified as ethyl alcohol by some breath machines. However, fuel cell based systems are non-responsive to substances like acetone. Substances in the environment can also lead to false BAC readings. For example, methyl tert-butyl ether, a common gasoline additive, has been alleged anecdotally to cause false positives in persons exposed to it. Tests have shown this to be true for older machines; however, newer machines detect this interference and compensate for it. Any number of other products found in the environment or workplace can also cause erroneous BAC results. These include compounds found in lacquer, paint remover, celluloid, gasoline, and cleaning fluids, especially ethers, alcohols, and other volatile compounds. Pharmacokinetics Absorption of alcohol continues for anywhere from 20 minutes (on an empty stomach) to two-and-one-half hours (on a full stomach) after the last consumption, generally taking around 40-50 minutes. During the absorptive phase, the concentration of alcohol throughout the body changes unpredictably, as it is affected by gastrointestinal physiology such as irregular contraction patterns. After absorption, the concentrations in the body settle down and follow predictable patterns. During absorption, the BAC in arterial blood will generally be higher than in venous blood, but post-absorption, venous BAC will be higher than arterial BAC. This is especially clear with bolus dosing, chugging a single large drink. With additional doses of alcohol, the definitions of absorption and post-absorption are less clear. However, once absorption of the last drink has finished, the concentrations will follow standard post-absorption curves. It is also not always clear from a BAC graph when the absorption phase finishes - for example, the body can reach a sustained equilibrium BAC where absorption and elimination are proportional. Across all phases, BrAC correlates closely with arterial BAC. Arterial blood distributes oxygen throughout the body. Breath alcohol is a representation of the equilibrium of alcohol concentration as the blood gases (alcohol) pass from the arterial blood into the lungs to be expired in the breath. 
The ratio of ABAC:BrAC is 2294 ± 56 across all phases and 2251 ± 46 [2141-2307] in the post-absorption phase. For example, a breathalyzer measurement of 0.10 mg/L of breath alcohol corresponds to approximately 0.0001×2251 g/L, or 0.2251 g/L of arterial blood alcohol concentration (equivalent to 0.2251 permille or 0.02251% BAC). The ratio of venous blood alcohol content to breath alcohol content may vary significantly, from 1300:1 to 3100:1. Assuming a blood-alcohol concentration of 0.07%, for example, a person could have a partition ratio of 1500:1 and a breath test reading of about 0.10 g/210 L of breath, over the legal limit in some jurisdictions. However, low partition ratios are generally observed during the absorption phase. Post-absorption, the ratio is relatively fixed, 2382 ± 119 [2125–2765], although this ratio was measured in a laboratory environment and variation may be larger in real-world scenarios. Falsely high BrAC (and blood) readings have also been reported in patients with proteinuria and hematuria resulting from kidney damage or failure, because the rate at which such patients metabolize and eliminate alcohol is abnormal relative to the alcohol concentration in their breath. Breathing pattern It is sometimes said that the exhaled air analyzed by the breathalyzer is "alveolar air", coming from the alveoli in close proximity to the blood in pulmonary circulation and containing ethanol in concentrations proportional to that blood approximated by Henry's law. However, the alcohol in the exhaled air comes essentially from the airways of the lung, and not from the alveoli. The alcohol acts similarly to water vapor, so it is instructive to study the humidity of lung air. During breathing, the inspired air picks up water and alcohol from the airways. Almost all uptake occurs in the upper airways; thus, the BrAC is most affected by the alcohol concentration in the bronchial circulation, which supplies blood to these airways. When the air reaches the alveoli, it is already near equilibrium - this is why inhaling dry air does not dry out the lungs significantly. With exhalation, water and alcohol are rapidly lost to the airways, primarily within the fifth to fifteenth generations of branching. Nonetheless, as may be evidenced by seeing one's breath in the cold, some water vapor does not get re-absorbed by the airways and is exhaled, and similarly some alcohol is exhaled during breathing. But the relationship of the alcohol concentration of this air to the concentration of alcohol in the blood is somewhat suspect and can be affected by many variables. As air is exhaled, the alcohol concentration of the exhaled air increases over time, rising significantly in the first few seconds and then slowing down thereafter, but not leveling out until the subject stops exhaling. This is not because there is a "dead space" of non-alcoholic air in the airways - the alcohol concentration is nearly identical in all regions of the lung. Rather, it is because, during exhalation, water and alcohol are being redeposited on the airways, primarily the trachea and generations 6 through 12 of the airways. As more fluid is deposited on the mucous surfaces, the remaining fluid travels further, resulting in more alcohol being recorded by the breathalyzer. 
The recorded alcohol concentrations never reach the alveolar alcohol concentration, even if the subject exhales as deeply as possible. According to Henry's law, alveolar air alcohol concentration would be pulmonary BAC divided by 1756, compared to the BrAC, which is arterial blood concentration divided by 2251. When the subject stops exhaling, the alcohol concentration levels off - this does not indicate that alveolar air has been obtained, as it will level off regardless of the point at which the subject stops exhaling. But it does mean that end-exhaled BrAC is readily obtained. This brings up the question of what is meant by reporting BrAC as a single number; is it the "deep-lung air", the highest possible reading obtainable by the subject's full exhalation? Or is it the zero concentration at the initial part of the curve? Hlastala suggests using the average BrAC during the exhalation, which corresponds to the BrAC measured at about the 5-second mark. The Supreme Court of California determined that the BrAC is defined as the alcohol concentration of the last part of the subject’s expired breath. End-exhaled BrAC varies depending on several factors. Most alcohol breath testers require a minimum exhalation volume (normally between 1.1 and 1.5 L) or a minimum six-second exhalation time before the breath sample is accepted. This raises concerns for subjects with smaller lung volumes - they must exhale a greater fraction of their available lung volume compared with a larger subject. A mathematical model suggests that a 2 L-lung-capacity subject's end-exhaled BrAC may read 35% higher than a 6 L subject's for the same minimum 1.5 L exhalation and alveolar alcohol concentration. For exhalation to the maximum extent, such as under typical laboratory conditions, measured BrAC is unaffected by lung size. The subject's body temperature and breath temperature also influence results, with an increase in temperature corresponding to an increase in measured BrAC. Furthermore, the humidity and temperature of the ambient air can decrease results by as much as 10%. The result of these factors is that the breath test is more forgiving for some subjects than others. Nonetheless, the overall variance due to how much one breathes out is usually low, and some breathalyzers compensate for the volume of air. Jones tested several breathing patterns immediately before and during breathalyzer use and found the following changes (in order of effect): Hyperventilation by rapid inspiration and expiration of room air for 20 seconds before forced expiration - decrease by 10%; Moderate inspiration through mouth and deep expiration - control; Deep expiration without an inspiration - statistically insignificant increase; Inspiration through the nose before a deep expiration - 1.3% increase; Deep inspiration followed by a slow (20 second) expiration - 2.0% increase; Mouth closed for 5 minutes (shallow breathing) before nose-inspiration and a forced expiration - 7.7% increase; Inspiration through the nose followed by breath-holding for 30 seconds before forced expiration - 12.6% increase; A normal inspiration with breath-holding for 30 seconds before a forced expiration - 15.7% increase. Overall, the results show an increase in measured BrAC with increased contact between the lungs and the measured air. Exercising immediately before the test, such as running up and down a flight of stairs, can also reduce measured BrAC by 13% or more, with the combined effect of exercise and hyperventilation reaching 20%. 
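As a small illustration of the blood-breath arithmetic quoted earlier in this Accuracy discussion (a post-absorption arterial-blood-to-breath ratio of roughly 2251:1), a minimal sketch follows. The ratio and the example reading are taken from the figures above; the code is only a unit conversion, not a model of any instrument, and real devices and statutes use jurisdiction-specific ratios.

```java
/**
 * Minimal sketch of the BrAC -> estimated arterial BAC conversion discussed above.
 * Uses the post-absorption ratio of ~2251:1 quoted in the text; statutes elsewhere
 * assume other ratios (e.g. 2000:1, 2100:1, 2300:1).
 */
public class BreathToBloodSketch {
    public static void main(String[] args) {
        double bracMgPerL = 0.10;          // measured breath alcohol, mg of ethanol per litre of breath
        double bloodBreathRatio = 2251.0;  // assumed arterial blood:breath ratio (from the text)

        double bracGPerL = bracMgPerL / 1000.0;                  // convert mg/L to g/L
        double estimatedBacGPerL = bracGPerL * bloodBreathRatio; // g of ethanol per litre of blood
        double estimatedBacPercent = estimatedBacGPerL / 10.0;   // g per 100 mL, i.e. % BAC

        System.out.printf("Estimated arterial BAC: %.4f g/L (%.4f%%)%n",
                estimatedBacGPerL, estimatedBacPercent);
        // With 0.10 mg/L of breath this prints roughly 0.2251 g/L, about 0.0225% BAC,
        // matching the worked figure in the text.
    }
}
```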
Mouth alcohol One of the most common causes of falsely high breath analyzer readings is the existence of mouth alcohol. In analyzing a subject's breath sample, the breath analyzer's internal computer is making the assumption that the alcohol in the breath sample came from the lungs. However, alcohol may have come from the mouth, throat or stomach for a number of reasons. A very tiny amount of alcohol from the mouth, throat or stomach can have a significant impact on the breath-alcohol reading. Recent use of mouthwash or breath fresheners can also skew results upward, as they can contain fairly high levels of alcohol. Listerine mouthwash, for example, contains 26.9% alcohol, and can skew results for between 5 and 10 minutes. A scientist tested the effects of Binaca breath spray on an Intoxilyzer 5000. He performed 23 tests with subjects who sprayed their throats and obtained readings as high as 0.81—far beyond legal levels. The scientist also noted that the effects of the spray did not fall below detectable levels until after 18 minutes. Other than those, the most common source of mouth alcohol is from belching or burping. This causes the liquids and/or gases from the stomach—including any alcohol—to rise up into the soft tissue of the esophagus and oral cavity, where it will stay until it has dissipated. The American Medical Association concludes in its Manual for Chemical Tests for Intoxication (1959): "True reactions with alcohol in expired breath from sources other than the alveolar air (eructation, regurgitation, vomiting) will, of course, vitiate the breath alcohol results." Acid reflux, or gastroesophageal reflux disease, can greatly exacerbate the mouth-alcohol problem. The stomach is normally separated from the throat by a valve, but when this valve becomes incompetent or herniated, there is nothing to stop the liquid contents in the stomach from rising and permeating the esophagus and mouth. The contents—including any alcohol—are then later exhaled into the breathalyzer. One study of 10 individuals suffering from this condition did not find any actual increase in breath ethanol. Mouth alcohol can also be created in other ways. Dentures, some have theorized, will trap alcohol, although experiments have shown no difference if the normal 15 minute observation period is observed. Periodontal disease can also create pockets in the gums which will contain the alcohol for longer periods. Also known to produce false results due to residual alcohol in the mouth is passionate kissing with an intoxicated person. To help guard against mouth-alcohol contamination, certified breath-test operators and police officers are trained to observe a test subject carefully for at least 15–20 minutes before administering the breath test. Some instruments also feature built-in safeguards. The Intoxilyzer 5000 features a "slope" parameter. This parameter detects any decrease in alcohol concentration of 0.006 g per 210 L of breath in 0.6 second, a condition indicative of residual mouth alcohol, and will result in an "invalid sample" warning to the operator, notifying the operator of the presence of the residual mouth alcohol. Other instruments require that the individual be tested twice at least two minutes apart. Mouthwash or other mouth alcohol will have somewhat dissipated after two minutes and cause the second reading to disagree with the first, requiring a retest. Many preliminary breath testers, however, feature no such safeguards. 
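The "slope" safeguard described above flags a rapidly falling alcohol reading as likely mouth alcohol. A hypothetical sketch of such a check follows; the threshold is the 0.006 g/210 L per 0.6 s figure quoted for the Intoxilyzer 5000, but the sampling logic is invented for illustration and does not reproduce any manufacturer's algorithm.

```java
/**
 * Hypothetical illustration of a "slope"-style mouth-alcohol check: if the alcohol
 * concentration falls faster than a threshold during the exhalation, the sample is
 * flagged as invalid. The threshold matches the figure quoted in the text; everything
 * else is an invented example, not a real device algorithm.
 */
public class SlopeCheckSketch {
    // Maximum allowed drop: 0.006 g/210 L of breath over 0.6 s
    private static final double MAX_DROP_PER_WINDOW = 0.006;
    private static final double WINDOW_SECONDS = 0.6;

    /**
     * @param readings alcohol concentration samples (g/210 L) at a fixed interval
     * @param sampleIntervalSeconds time between consecutive samples
     * @return true if any 0.6 s window shows a drop larger than the threshold
     */
    static boolean looksLikeMouthAlcohol(double[] readings, double sampleIntervalSeconds) {
        int windowSamples = (int) Math.round(WINDOW_SECONDS / sampleIntervalSeconds);
        for (int i = 0; i + windowSamples < readings.length; i++) {
            double drop = readings[i] - readings[i + windowSamples];
            if (drop > MAX_DROP_PER_WINDOW) {
                return true; // concentration fell too quickly: flag as an invalid sample
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Invented sample traces at 0.2 s intervals (g/210 L)
        double[] steadyExhalation = {0.061, 0.066, 0.070, 0.073, 0.075, 0.076, 0.077};
        double[] mouthAlcoholSpike = {0.140, 0.120, 0.100, 0.085, 0.075, 0.070, 0.068};

        System.out.println("Steady exhalation flagged: "
                + looksLikeMouthAlcohol(steadyExhalation, 0.2));  // false
        System.out.println("Mouth-alcohol pattern flagged: "
                + looksLikeMouthAlcohol(mouthAlcoholSpike, 0.2)); // true
    }
}
```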
Myths about accuracy There are a number of substances or techniques that can supposedly "fool" a breath analyzer (i.e., generate a lower blood alcohol content). A 2003 episode of the science television show MythBusters tested a number of methods that supposedly allow a person to fool a breath analyzer test. The methods tested included breath mints, onions, denture cream, mouthwash, pennies and batteries; all of these methods proved ineffective. The show noted that using these items to cover the smell of alcohol may fool a person, but, since they will not actually reduce a person's BrAC, there will be no effect on a breath analyzer test regardless of the quantity used. If anything, it appeared that using mouthwash only raised the BrAC. Pennies supposedly produce a chemical reaction, while batteries supposedly create an electrical charge, yet neither of these methods affected the breath analyzer results. The MythBusters episode also pointed out another complication: it would be necessary to insert the item into one's mouth (for example, eat an onion, rinse with mouthwash, conceal a battery), take the breath test, and then possibly remove the item — all of which would have to be accomplished discreetly enough to avoid alerting the police officers administering the test (who would obviously become very suspicious if they noticed that a person was inserting items into their mouth prior to taking a breath test). It would likely be very difficult, especially for someone in an intoxicated state, to be able to accomplish such a feat. In addition, the show noted that breath tests are often verified with blood tests (which measure BAC and are more accurate) and that even if a person somehow managed to fool a breath test, a blood test would certainly confirm a person's guilt. Other substances that might reduce the BrAC reading include a bag of activated charcoal concealed in the mouth (to absorb alcohol vapor), an oxidizing gas (such as N2O, Cl2, O3, etc.) that would fool a fuel cell type detector, or an organic interferent to fool an infrared absorption detector. The infrared absorption detector is more vulnerable to interference than a laboratory instrument measuring a continuous absorption spectrum, since it only makes measurements at particular discrete wavelengths. However, because any interference can only cause higher absorption, not lower, the estimated blood alcohol content would be overestimated. Additionally, Cl2 is toxic and corrosive. A 2007 episode of the Spike network's show Manswers showed some of the more common and not-so-common ways people attempt to beat the breath analyzer, none of which work. Test 1 was to suck on a copper-coated coin such as a penny. Test 2 was to hold a battery on the tongue. Test 3 was to chew gum. None of these tests showed a "pass" reading if the subject had consumed alcohol. Law enforcement In general, two types of breathalyzer are used. Small hand-held breathalyzers are not reliable enough to provide evidence in court but reliable enough to justify an arrest. These devices may be used by officers in the field as a form of "field sobriety test" commonly called "preliminary breath test" or "preliminary alcohol screening", or as evidential devices in point of arrest testing. Larger breathalyzer devices found in police stations can be used to produce court evidence. These desktop analyzers generally use infrared spectrophotometer technology, electrochemical fuel cell technology, or a combination of the two. 
All breath alcohol testers used by law enforcement in the United States of America must be approved by the Department of Transportation's National Highway Traffic Safety Administration. Breath alcohol laws The breath alcohol content reading may be used in prosecutions of the crime of driving under the influence of alcohol (sometimes referred to as driving or operating while intoxicated) in several ways. Historically, states in the US prohibited driving with a high level of BAC and did not have any laws regarding BrAC. A BrAC test result was merely presented as indirect evidence of BAC. Where the defendant had refused to take a subsequent blood test, the only way the state could prove BAC was by presenting scientific evidence of how alcohol in the breath gets there from alcohol in the blood, along with evidence of how to convert from one to the other. DUI defense attorneys frequently contested the scientific reliability of such evidence. Before September 2011, South Dakota relied solely on blood tests to ensure accuracy. States responded in different ways to the inability to rely on breathalyzer evidence. Many states, such as California, modified their statutes so as to make a certain level of alcohol in the breath illegal per se. In other words, the BrAC level itself became the direct predicate evidence for conviction, with no need to estimate BAC. In per se jurisdictions such as the UK, it is automatically illegal to drive a vehicle with a sufficiently high breath alcohol concentration (BrAC). The breath analyzer reading of the operator will be offered as evidence of that crime, and challenges can only be offered on the basis of an inaccurate reading. In other states, such as California and New Jersey, the statute remains tied to BAC, but the BrAC results of certain machines have been judicially deemed presumptively accurate substitutes for blood testing when used as directed. While BrAC tests are not necessary to prove a defendant was under the influence, laws in these states create a rebuttable presumption, which means it is presumed that the driver was intoxicated given a high BrAC reading, but that presumption can be rebutted if a jury finds it unreliable or if other evidence establishes a reasonable doubt as to whether the person actually drove with a breath or blood alcohol level of 0.08% or greater. Another issue is that the BrAC is typically tested several hours after the time of driving. Some jurisdictions, such as the State of Washington, allow the use of breath analyzer test results without regard to how much time passed between operation of the vehicle and the time the test was administered, or as long as the test was administered within a certain number of hours of driving. Other jurisdictions use retrograde extrapolation to estimate the BAC or BrAC at the time of driving. One exception to criminal prosecution is the state of Wisconsin, where a first-time drunk driving offense is normally a civil ordinance violation. Breath levels There is no international consensus on the statutory ratio of blood to breath levels, ranging from 2000:1 (most of Europe) to 2100:1 (US) to 2300:1 (UK). In the US, the ratio of 2100:1 was determined based on studies done in 1930-1950, with a 1952 report of the National Safety Council establishing the 2100:1 figure. The NSC has acknowledged that more recent research shows the actual relationship is most probably higher than 2100:1 and closer to 2300:1, but opines that this difference is of minimal practical significance in law enforcement. 
The use of the lower 2100:1 factor errs on the side of conservatism and can only favor the driver. In early years, the range of the BrAC threshold in the US varied considerably between states. States have since adopted a uniform 0.08% BrAC level, due to federal guidelines. It is said that the federal government ensures adoption of these guidelines by tying traffic safety highway funds to compliance with them on certain issues, just as it ensured that the legal drinking age is 21 across the 50 states. Police in Victoria, Australia, use breathalyzers that give a recognized 20% tolerance on readings. Noel Ashby, former Victoria Police Assistant Commissioner (Traffic & Transport), claims that this tolerance is to allow for different body types. Preliminary breath tests The preliminary breath test or preliminary alcohol screening test uses small hand-held breath analyzers (hand-held breathalyzers). (The terms "preliminary breath test" ("PBT") and "preliminary alcohol screening test" reference the same devices and functions.) They are generally based on electrochemical platinum fuel cell analysis. These units are similar to some evidentiary breathalyzers, but typically are not calibrated frequently enough for evidentiary purposes. The test device typically provides numerical blood alcohol content (BAC) readings, but its primary use is for screening. In some cases, the device even has "pass/fail" indicia. For example, in Canada, PBT devices, called "alcohol screening devices", are set so that from 0 to 49 mg% they show digits, from 50 to 99 mg% they show the word "warn", and at 100 mg% and above they show "fail". These preliminary breath tests are sometimes categorised as part of field sobriety testing, although they are not part of the series of performance tests generally associated with field sobriety tests (FSTs) or standard field sobriety tests (SFSTs). In Canada, a preliminary non-evidentiary screening device can be approved by Parliament as an approved screening device. In order to demand that a person produce a breathalyzer sample, an officer must have "reasonable suspicion" that the person drove with more than 80 mg alcohol per 100 mL of blood. The demand must be within three hours of driving. Any driver who refuses can be charged under s.254 of the Criminal Code. With the legalization of cannabis, updates to the criminal code are proposed that will allow a breathalyzer test to be administered without suspicion of impairment. The US National Highway Traffic Safety Administration maintains a Conforming Products List of breath alcohol devices approved for preliminary screening use. In the United States, the main use of the preliminary breath test (PBT) is to establish probable cause for arrest. All states have implied consent laws, which means that by applying for a driver's license, drivers are agreeing to take an evidentiary chemical test (blood, breath, or urine) after being arrested for a DUI. But in US law, the arrest and subsequent test may be invalidated if it is found that the arrest lacked probable cause. The PBT establishes a baseline alcohol level that the police officer may use to justify the arrest. The result of the PBT is not generally admissible in court, except to establish probable cause, although some states, such as Idaho, permit data or "readings" from hand-held preliminary breath testers or preliminary alcohol screeners to be presented as evidence in court. 
In states such as Florida and Colorado, there are no penalties for refusing a PBT. Police are not obliged to advise the suspect that participation in an FST, PBT, or other pre-arrest procedure is voluntary. In contrast, formal evidentiary tests given under implied consent requirements are considered mandatory. Refusal to take a preliminary breath test in the State of Michigan subjects a non-commercial driver to a "civil infraction" fine, with no violation "points", but is not considered to be a refusal under the general "implied consent" law. In some states, the state may present evidence of refusal to take a field sobriety test in court, although this is of questionable probative value in a drunk driving prosecution. Different requirements apply in many states to drivers under DUI probation, in which case participation in a preliminary breath test may be a condition of probation, and for commercial drivers under "drug screening" requirements. Some US states, notably California, have statutes on the books penalizing preliminary breath test refusal for drivers under 21; however, the constitutionality of those statutes has not been tested. (As a practical matter, most criminal lawyers advise suspects who refuse a preliminary breath test or preliminary alcohol screening not to discuss or attempt to "justify" the refusal with the police.) Evidentiary breath tests In Canada, an evidentiary breath instrument can be designated as an approved instrument. The US National Highway Traffic Safety Administration maintains a Conforming Products List of breath alcohol devices approved for evidentiary use. Infrared instruments are also known as "evidentiary breath testers" and generally produce court-admissible results. Drinking after driving A common defense to an impaired driving charge (in appropriate circumstances) is that the consumption of alcohol occurred subsequent to driving. The typical circumstance where this comes up is when a driver consumes alcohol after a road accident, as an affirmative defense. This closely relates to absorptive stage intoxication (or bolus drinking), except that the consumption of alcohol also occurred after driving. This defense can be overcome by retrograde extrapolation (infra), but it complicates prosecution. While jurisdictions that recognise absorptive stage intoxication as a defense would also accept a defense of consumption after driving, some jurisdictions penalise post-driving drinking. While laws regarding absorption of alcohol consumed before (or while) driving are generally per se, most statutes directed at post-driving consumption allow defences in certain circumstances. In Canada, it is illegal to be over the impaired driving limits within 3 hours of driving (given as 2 hours by the Canadian Department of Justice); however, the new law allows a "drinking after driving" defence in a situation where a driver had no reason to expect a demand by the police for breath testing. South Africa is more straightforward, applying a separate penalty to consumption after an accident before the accident has been reported to the police and, where required, the driver has been medically examined. Retrograde extrapolation The breath analyzer test is usually administered at a police station, commonly an hour or more after the arrest. Although this gives the BrAC at the time of the test, it does not by itself answer the question of what it was at the time of driving. 
The prosecution typically provides an estimated alcohol concentration at the time of driving utilizing retrograde extrapolation, presented by expert opinion. This involves projecting back in time to estimate the BrAC level at the time of driving, by applying the physiological properties of absorption and elimination rates in the human body. Extrapolation is calculated using five factors and a general elimination rate of 0.015/hour. Example: time of breath test, 10:00 pm; result of breath test, 0.080; time of driving, 9:00 pm (stopped by officer); time of last drink, 8:00 pm; last food, 12:00 pm. Using these facts, an expert can say the person's last drink was consumed on an empty stomach, which means absorption of the last drink (at 8:00) was complete within one hour, by 9:00. At the time of the stop, the driver was therefore fully absorbed. The test result of 0.080 was obtained at 10:00, so the one hour of elimination that occurred since the stop is added back in, making 0.080 + 0.015 = 0.095 the approximate breath alcohol concentration at the time of the stop. Consumer use Public breathalyzers are becoming a method for consumers to test themselves at the source of alcohol consumption. These are used in pubs, bars, restaurants, charities, weddings and all types of licensed events. Because breathalyzer tests carry an increased risk of coronavirus transmission, they were temporarily suspended from use in Sweden. Breathalyzer sensors Photovoltaic assay The photovoltaic assay, used only in the dated photoelectric intoximeter, is a form of breath testing rarely encountered today. The process works by using photocells to analyze the color change of a redox (oxidation-reduction) reaction. A breath sample is bubbled through an aqueous solution of sulfuric acid, potassium dichromate, and silver nitrate. The silver nitrate acts as a catalyst, allowing the alcohol to be oxidized at an appreciable rate. The acidic condition required for the reaction is provided by the sulfuric acid. In solution, ethanol reacts with the potassium dichromate, reducing the dichromate ion to the chromium (III) ion. This reduction results in a change of the solution's color from red-orange to green. The reacted solution is compared to a vial of non-reacted solution by a photocell, which creates an electric current proportional to the degree of the color change; this current moves the needle that indicates BAC. Like other methods, breath testing devices using chemical analysis are prone to false readings. Compounds that have compositions similar to ethanol, for example, could also act as reducing agents, creating the color change that falsely indicates an increased BAC. Infrared spectroscopy Infrared breathalyzers allow a high degree of specificity for ethanol. Typically, evidential breath alcohol instruments in police stations will work on the principle of infrared spectroscopy. Fuel cell Fuel cell gas sensors are based on the oxidation of ethanol to acetaldehyde on an electrode. The current produced is proportional to the amount of alcohol present. These sensors are very stable, typically requiring calibration every 6 months, and are the type of sensor usually found in roadside breath testing devices. Semiconductor Semiconductor gas sensors are based on the increase in conductance of a tin oxide layer in the presence of a reducing gas such as vaporized ethanol. They are found in inexpensive breathalyzers, and their stability is not as good as that of fuel cell instruments. 
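Looking back at the retrograde extrapolation example worked through above, a minimal sketch of that arithmetic follows. The 0.015/hour elimination rate and the example figures are taken from the text; real forensic calculations weigh more factors and a range of elimination rates, so this is only an illustration.

```java
/**
 * Sketch of the retrograde-extrapolation arithmetic in the example above: once the
 * subject is assumed to be fully absorbed, the elimination that occurred between the
 * time of driving and the time of the test is added back onto the test result.
 */
public class RetrogradeExtrapolationSketch {
    public static void main(String[] args) {
        double testResult = 0.080;             // breath test result at 10:00 pm
        double hoursBetweenStopAndTest = 1.0;  // stopped at 9:00 pm, tested at 10:00 pm
        double eliminationRatePerHour = 0.015; // general elimination rate quoted in the text

        // Assumes absorption of the last drink was already complete at the time of the stop.
        double estimateAtTimeOfDriving = testResult
                + eliminationRatePerHour * hoursBetweenStopAndTest;

        System.out.printf("Estimated concentration at time of driving: %.3f%n",
                estimateAtTimeOfDriving); // prints 0.095, matching the worked example
    }
}
```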
See also Coronavirus breathalyzer References External links Alcohol law Vehicle safety technologies Brands that became generic Driving under the influence Law enforcement equipment Spectroscopy Harm reduction Drug testing
Breathalyzer
[ "Physics", "Chemistry" ]
7,825
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
1,591,104
https://en.wikipedia.org/wiki/OGLE-TR-10b
OGLE-TR-10b is an extrasolar planet orbiting the star OGLE-TR-10. The planet was first detected by the Optical Gravitational Lensing Experiment (OGLE) survey in 2002. The star, OGLE-TR-10, was seen dimming by a tiny amount every three days. The transit lightcurve resembles that of HD 209458 b, the first transiting extrasolar planet. However, the mass of the object had to be measured by the radial velocity method because other objects like red dwarfs and brown dwarfs can mimic the planetary transit. In late 2004 it was confirmed as the fifth planetary discovery by OGLE. The planet is a typical "hot Jupiter", a planet with a mass half that of Jupiter and orbital distance only 1/24 that of Earth from the Sun. One revolution around the star takes a little over three days to complete. The planet is slightly larger than Jupiter, probably due to the heat from the star. OGLE-TR-10 was identified as a promising candidate by the OGLE team during their 2001 campaign in three fields towards the Galactic Center. The possible planetary nature of its companion was established through spectroscopic follow-up: a tentative radial velocity semi-amplitude (from Keck-I/HIRES) of 100±43 m/s, implying a mass for the putative planet of 0.7 ± 0.3 MJup, was reported and then confirmed in 2004 with the UVES/FLAMES radial velocities. However, the possibility of a blend could not initially be ruled out. A blend scenario was subsequently excluded as an alternative explanation by an analysis combining all available radial velocity measurements with the OGLE light curve. OGLE-TR-10b has a mass of 0.57 ± 0.12 MJup and a radius of 1.24 ± 0.09 RJup. These parameters bear close resemblance to those of the first known transiting extrasolar planet, HD 209458 b. The planets with the longer periods in the hot Jupiter class all have small masses (~0.7 MJup), while all the short-period planets (i.e., very hot Jupiters) have masses roughly twice as large. This trend may be related to the survival of planets in proximity to their parent stars. References External links OGLE transit data Geneva Observatory data Sagittarius (constellation) Transiting exoplanets Hot Jupiters Giant planets Exoplanets discovered in 2002
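As a rough consistency check on the tentative radial-velocity mass quoted above, the following sketch applies the standard relation between semi-amplitude and minimum planet mass for a circular orbit. The roughly solar-mass host star and the exact 3.0-day period used here are assumptions made for illustration, not values taken from the article.

```java
/**
 * Rough check on the radial-velocity mass quoted above, using the standard relation
 * K ~ 28.4329 m/s * (Mp sin i / M_Jup) * (M_star / M_Sun)^(-2/3) * (P / 1 yr)^(-1/3)
 * for a circular orbit with Mp much smaller than M_star. The stellar mass and exact
 * period are illustrative assumptions, not figures from the article.
 */
public class RvMassSketch {
    public static void main(String[] args) {
        double kMetersPerSecond = 100.0; // tentative semi-amplitude quoted in the article
        double periodDays = 3.0;         // "a little over three days"; 3.0 assumed here
        double stellarMassSolar = 1.0;   // assumed roughly solar-mass host (illustrative)

        double periodYears = periodDays / 365.25;
        double mpSinIJupiter = (kMetersPerSecond / 28.4329)
                * Math.pow(stellarMassSolar, 2.0 / 3.0)
                * Math.cbrt(periodYears);

        System.out.printf("Minimum planet mass: %.2f Jupiter masses%n", mpSinIJupiter);
        // Prints roughly 0.7, consistent with the 0.7 +/- 0.3 M_Jup quoted above.
    }
}
```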
OGLE-TR-10b
[ "Astronomy" ]
503
[ "Sagittarius (constellation)", "Constellations" ]
1,591,163
https://en.wikipedia.org/wiki/Atmospheric%20wave
An atmospheric wave is a periodic disturbance in the fields of atmospheric variables (like surface pressure or geopotential height, temperature, or wind velocity) which may either propagate (traveling wave) or be stationary (standing wave). Atmospheric waves range in spatial and temporal scale from large-scale planetary waves (Rossby waves) to minute sound waves. Atmospheric waves with periods which are harmonics of 1 solar day (e.g. 24 hours, 12 hours, 8 hours... etc.) are known as atmospheric tides. Causes and effects The mechanism for the forcing of the wave, for example, the generation of the initial or prolonged disturbance in the atmospheric variables, can vary. Generally, waves are either excited by heating or dynamic effects, for example the obstruction of the flow by mountain ranges like the Rocky Mountains in the U.S. or the Alps in Europe. Heating effects can be small-scale (like the generation of gravity waves by convection) or large-scale (the formation of Rossby waves by the temperature contrasts between continents and oceans in the Northern hemisphere winter). Atmospheric waves transport momentum, which is fed back into the background flow as the wave dissipates. This wave forcing of the flow is particularly important in the stratosphere, where this momentum deposition by planetary-scale Rossby waves gives rise to sudden stratospheric warmings and the deposition by gravity waves gives rise to the quasi-biennial oscillation. In the mathematical description of atmospheric waves, spherical harmonics are used. When considering a section of a wave along a latitude circle, this is equivalent to a sinusoidal shape. Spherical harmonics, representing individual Rossby-Haurwitz planetary wave modes, can have any orientation with respect to the axis of rotation of the planet. Remarkably - while the very existence of these planetary wave modes requires the rotation of the planet around its polar axis - the phase velocity of the individual wave modes does not depend on the relative orientation of the spherically harmonic wave mode with respect to the axis of the planet. This can be shown to be a consequence of the underlying (approximate) spherical symmetry of the planet, even though this symmetry is broken by the planet's rotation. Types of waves Because the propagation of the wave is fundamentally caused by an imbalance of the forces acting on the air (which is often thought of in terms of air parcels when considering wave motion), the types of waves and their propagation characteristics vary latitudinally, principally because the Coriolis effect on horizontal flow is maximal at the poles and zero at the equator. There are four different types of waves: sound waves (usually eliminated from the atmospheric equations of motion due to their high frequency) These are longitudinal or compression waves. The sound wave propagates in the atmosphere though a series of compressions and expansions parallel to the direction of propagation. internal gravity waves (require stable stratification of the atmosphere) inertio-gravity waves (also include a significant Coriolis effect as opposed to "normal" gravity waves) Rossby waves (can be seen in the troughs and ridges of 500 hPa geopotential caused by midlatitude cyclones and anticyclones) At the equator, mixed Rossby-gravity and Kelvin waves can also be observed. See also Atmospheric thermodynamics References Further reading Holton, James R.: An Introduction to Dynamic Meteorology 2004 Wave Waves
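The orientation-independence of planetary wave phase speed mentioned above can be illustrated with the classical result for non-divergent barotropic Rossby–Haurwitz modes: with no background flow, a spherical-harmonic mode of total wavenumber n drifts westward at an angular rate 2Ω/(n(n+1)), regardless of its zonal wavenumber m. This is a standard textbook formula quoted here for illustration, not a result stated in the article itself.

```java
/**
 * Illustration of the classical non-divergent barotropic Rossby-Haurwitz result
 * referenced above: the westward angular phase speed 2*Omega / (n*(n+1)) depends
 * only on the total wavenumber n, not on the zonal wavenumber m (the mode's
 * orientation). Standard textbook formula, used here purely as an illustration.
 */
public class RossbyHaurwitzSketch {
    public static void main(String[] args) {
        double omega = 7.292e-5; // Earth's rotation rate in rad/s

        for (int n = 1; n <= 6; n++) {
            double angularSpeedRadPerSec = 2.0 * omega / (n * (n + 1)); // westward drift
            double degreesPerDay = Math.toDegrees(angularSpeedRadPerSec) * 86400.0;
            System.out.printf("n = %d: westward drift of roughly %.1f degrees of longitude per day%n",
                    n, degreesPerDay);
        }
    }
}
```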
Atmospheric wave
[ "Physics", "Chemistry" ]
696
[ "Physical phenomena", "Atmospheric dynamics", "Waves", "Motion (physics)", "Fluid dynamics" ]
1,591,333
https://en.wikipedia.org/wiki/Java%20Cryptography%20Architecture
In computing, the Java Cryptography Architecture (JCA) is a framework for working with cryptography using the Java programming language. It forms part of the Java security API, and was first introduced in JDK 1.1 in the java.security package. The JCA uses a "provider"-based architecture and contains a set of APIs for various purposes, such as encryption, key generation and management, secure random-number generation, certificate validation, etc. These APIs provide an easy way for developers to integrate security into application code. See also Java Cryptography Extension Bouncy Castle (cryptography) External links Official JCA guides: JavaSE6, JavaSE7, JavaSE8, JavaSE9, JavaSE10, JavaSE11 JDK components Cryptographic software
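A minimal usage sketch of the provider-based APIs mentioned above (message digest, key generation, digital signatures, and secure random numbers) follows. It is a generic example, not code taken from the official guides; the algorithm names used are the standard ones shipped with typical JDK providers.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.security.Signature;
import java.util.Base64;

/**
 * Minimal sketch of the provider-based JCA APIs described above. Each getInstance
 * call asks the installed providers for an implementation of the named algorithm;
 * no provider is selected explicitly here.
 */
public class JcaSketch {
    public static void main(String[] args) throws Exception {
        byte[] message = "hello, JCA".getBytes(StandardCharsets.UTF_8);

        // Message digest
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest(message);

        // Secure random-number generation
        SecureRandom random = new SecureRandom();
        byte[] nonce = new byte[16];
        random.nextBytes(nonce);

        // Key generation and a digital signature
        KeyPairGenerator generator = KeyPairGenerator.getInstance("RSA");
        generator.initialize(2048);
        KeyPair keyPair = generator.generateKeyPair();

        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(message);
        byte[] signature = signer.sign();

        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(message);

        System.out.println("SHA-256: " + Base64.getEncoder().encodeToString(hash));
        System.out.println("Nonce:    " + Base64.getEncoder().encodeToString(nonce));
        System.out.println("Signature valid: " + verifier.verify(signature));
    }
}
```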
Java Cryptography Architecture
[ "Mathematics" ]
159
[ "Cryptographic software", "Mathematical software" ]
1,591,482
https://en.wikipedia.org/wiki/Flag%20officer
A flag officer is a commissioned officer in a nation's armed forces senior enough to be entitled to fly a flag to mark the position from which that officer exercises command. Different countries use the term "flag officer" in different ways: In many countries, a flag officer is a senior officer of the navy, specifically one holding any of the admiral ranks; the term may or may not include the rank of commodore. In some countries, such as the United States, India, and Bangladesh, the designation may apply in all armed forces, not just in the navy. This means generals can also be considered flag officers. In most Arab armies, liwa (Arabic: لواء), which can be translated as "flag officer", is a specific rank, equivalent to a major general. However, "ensign" is debatably a more exact literal translation of the word. In principle, a liwa commands several units called "flags" or "ensigns" (i.e. brigades, also called liwa). Russian navies refer to the approximate equivalent of a British Royal Navy flag officer as a flagman (флагман). Before the formation of the Soviet Navy in 1918, the Imperial Russian Navy also had officers with the function of a flag officer (флаг-офицер), subordinate to a flagman and especially charged with adjutant duties and signals. General usage The generic title of flag officer is used in many modern navies and coast guards to denote those who hold the rank of rear admiral or its equivalent and above, also called "flag ranks". In some navies, this also includes the rank of commodore. Flag officer corresponds to the generic terms general officer, used by land and some air forces to describe all grades of generals, and air officer, used by other air forces to describe all grades of air marshals and air commodores. A flag officer sometimes has a junior officer, called a flag lieutenant or flag adjutant, attached as a personal adjutant or aide-de-camp. Canada In the Canadian Armed Forces, a flag officer (French: officier général, "general officer") is an admiral, vice admiral, rear admiral, or commodore, the naval equivalent of a general officer of the army or air force. It is a somewhat counterintuitive usage of the term, as only flag officers in command of commands or formations actually have their own flags (technically a commodore has only a broad pennant, not a flag), and army and air force generals in command of commands or formations also have their own flags, but are not called flag officers. Base commanders, usually full colonels, have a pennant that flies from the mast or flagpole on the base, when resident, or on vehicles that carry them. A flag officer's rank is denoted by a wide stripe of gold braid on the cuff of the service dress tunic; one to four gold maple leaves over a crossed sword and baton, all beneath a royal crown, on epaulettes and shoulder boards; and two rows of gold oak leaves on the peak of the service cap. From the unification of the Canadian Forces in 1968, a flag officer's dress tunic had a single broad stripe on the sleeve and epaulettes. In May 2010 the naval uniform dark dress tunic was adjusted—exterior epaulettes were removed, reverting to the sleeve-ring and executive-curl rank insignia used by most navies. Commodores' uniforms display a broad stripe, and each succeeding rank receives an additional sleeve ring. There are no epaulettes on the exterior of the tunic, but they are still worn on the uniform shirt underneath. 
India In the Indian Armed Forces, it is applied to brigadiers, major generals, lieutenant generals and generals in the Army; commodores, rear admirals, vice admirals and admirals in the Navy; and air commodores, air vice marshals, air marshals and air chief marshals in the Air Force. Each of these flag officers are designated with a specific flag. India's honorary ranks (five star ranks) are field marshal in the Army, Marshal of the Indian Air Force in the Air Force and admiral of the fleet in the Navy. A similar equivalence is applied to senior police officers of rank Deputy Inspector General (DIG), Inspector General (IG), Additional Director General (ADG) and Director General (DG). United Kingdom In the United Kingdom, the term is only used for the Royal Navy, with there being a more specific distinction being between a "flag officer" and an "officer of flag rank". Formerly, all officers promoted to flag rank were considered to be "flag officers". The term is still widely used to refer to any officer of flag rank. Present usage is that rear admirals and above are officers of flag rank, but only those officers who are authorised to fly a flag are formally called "flag officers" and have different flags for different ranks of admiral. Of the 39 officers of flag rank in the Royal Navy in 2006, very few were "flag officers" with entitlement to fly a flag. For example, a Commander-in-Chief Fleet flies an admiral's flag whether ashore or afloat and is a "flag officer". The chief of staff (support), a rear admiral, is not entitled to fly a flag and is an "officer of flag rank" rather than a "flag officer". List of fleets and major commands of the Royal Navy lists most admirals who were "flag officers". A flag officer's junior officer is often known as "Flags". Flag Officers in the Royal Navy are considered as Rear-Admirals and above. Equivalent ranks in the British Army and Royal Marines are called general officer rather than flag officers, and those in the Royal Air Force (as well as the rank of air commodore) are called air officers, although all are entitled to fly flags of rank. United States Captain was the highest rank in the United States Navy from its beginning in 1775 until 1857, when Congress created the temporary rank of flag officer, which was bestowed on senior Navy captains who were assigned to lead a squadron of vessels in addition to command of their own ship. This temporary usage gave way to the permanent ranks of commodore and rear admiral in 1862. The term "flag officer" is still in use today, explicitly defined as an officer of the U.S. Navy or Coast Guard serving in or having the grade of admiral, vice admiral, rear admiral, or rear admiral (lower half), equivalent to general officers of an army. In the United States Army, Air Force, and Marine Corps, the term "flag officer" generally is applied to all general officers authorized to fly their own command flags—i.e., brigadier general, or pay grade O-7, and above. As a matter of law, Title 10 of the United States Code makes a distinction between general officers and flag officers (general officer for the Army, Marine Corps, and Air Force; flag officer for the Navy and Coast Guard). Non-naval officers usually fly their flags from their headquarters, vessels, or vehicles, typically only for the most senior officer present. In the United States all flag and general officers must be nominated by the President and confirmed by the Senate. Each subsequent promotion requires renomination and re-approval. 
For the Navy, each flag officer assignment is usually limited to a maximum of two years, followed by either reassignment, reassignment and promotion, or retirement. References External links Compared US Armed Forces Flag Officer Personal Rank Flags Officer Military ranks Military terminology
Flag officer
[ "Mathematics" ]
1,545
[ "Symbols", "Flags" ]
1,591,617
https://en.wikipedia.org/wiki/Stressor
A stressor is a chemical or biological agent, environmental condition, external stimulus or an event seen as causing stress to an organism. Psychologically speaking, a stressor can be events or environments that individuals might consider demanding, challenging, and/or threatening individual safety. Events or objects that may trigger a stress response may include: environmental stressors (hypo or hyper-thermic temperatures, elevated sound levels, over-illumination, overcrowding) daily "stress" events (e.g., traffic, lost keys, money, quality and quantity of physical activity) life changes (e.g., divorce, bereavement) workplace stressors (e.g., high job demand vs. low job control, repeated or sustained exertions, forceful exertions, extreme postures, office clutter) chemical stressors (e.g., tobacco, alcohol, drugs) social stressors (e.g., societal and family demands) Stressors can cause physical, chemical and mental responses internally. Physical stressors produce mechanical stresses on skin, bones, ligaments, tendons, muscles and nerves that cause tissue deformation and (in extreme cases) tissue failure. Chemical stresses also produce biomechanical responses associated with metabolism and tissue repair. Physical stressors may produce pain and impair work performance. Chronic pain and impairment requiring medical attention may result from extreme physical stressors or if there is not sufficient recovery time between successive exposures. Stressors may also affect mental function and performance. Mental and social stressors may affect behavior and how individuals respond to physical and chemical stressors. Social and environmental stressors and the events associated with them can range from minor to traumatic. Traumatic events involve very debilitating stressors, and oftentimes these stressors are uncontrollable. Traumatic events can deplete an individual's coping resources to an extent where the individual may develop acute stress disorder or even post-traumatic stress disorder. People who have been abused, victimized, or terrorized are often more susceptible to stress disorders. Most stressor-stress relationships can be evaluated and determined - either by the individual or by a psychologist. Therapeutic measures are often taken to help replenish and rebuild the individual's coping resources while simultaneously aiding the individual in dealing with current stress. Psychological stressors Stressors occur when an individual is unable to cope with the demands of their environment (such as crippling debt with no clear path to resolving it). Generally, stressors take many forms, such as: traumatic events, life demands, sudden medical emergencies, and daily inconveniences, to name a few. There are also a variety of characteristics that a stressor may possess (different durations, intensity, predictability, and controllability). Measuring psychological stress Due to the wide impact and the far-reaching consequences of psychological stressors (especially their profound effects on mental well-being), it is particularly important to devise tools to measure such stressors. Two common psychological stress tests include the Perceived Stress Scale (PSS) devised by American psychologist Sheldon Cohen, and the Social Readjustment Rating Scale (SRRS) or the Holmes-Rahe Stress Scale. While the PSS is a traditional Likert scale, the SRRS assigns specific predefined numerical values to stressors. 
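As a small illustration of the scoring difference described above, the following sketch sums SRRS-style values for reported life events. The event weights below merely illustrate the Holmes–Rahe approach; consult the published scale for the authoritative list and values.

```java
import java.util.List;
import java.util.Map;

/**
 * Illustration of SRRS-style scoring as described above: each life event carries a
 * predefined point value, and the events reported by an individual are summed into a
 * single score. The values below are illustrative of the Holmes-Rahe style, not an
 * authoritative copy of the published scale.
 */
public class SrrsSketch {
    // Illustrative subset of life-event weights (approximate Holmes-Rahe style values)
    private static final Map<String, Integer> EVENT_VALUES = Map.of(
            "death of spouse", 100,
            "divorce", 73,
            "marriage", 50,
            "change in residence", 20,
            "minor violation of the law", 11
    );

    static int totalScore(List<String> reportedEvents) {
        return reportedEvents.stream()
                .mapToInt(event -> EVENT_VALUES.getOrDefault(event, 0))
                .sum();
    }

    public static void main(String[] args) {
        List<String> lastYear = List.of("marriage", "change in residence");
        System.out.println("SRRS-style score: " + totalScore(lastYear)); // prints 70
    }
}
```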
Biological responses to stressors Traumatic events or any type of shock to the body can cause an acute stress response disorder (ASD). The extent to which one experiences ASD depends on the extent of the shock. If the shock is extreme enough and persists past a particular period of time, ASD can develop into what is commonly known as post-traumatic stress disorder (PTSD). There are two ways that the body responds biologically in order to reduce the amount of stress an individual is experiencing. One thing the body does to combat stressors is to produce stress hormones, which in turn create energy reserves that are available in case a stressful event occurs. The second way our biological components respond is through an individual's cells. Depending on the situation, cells obtain more energy in order to combat the stressor, and any other activity those cells are involved in ceases. One possible mechanism by which stressors influence biological pathways involves stimulation of the hypothalamus, which releases CRF (corticotropin-releasing factor); CRF causes the pituitary gland to release ACTH (adrenocorticotropic hormone), which in turn causes the adrenal cortex to secrete various stress hormones (e.g., cortisol). Stress hormones travel in the blood stream to relevant organs, e.g., glands, heart, intestines, triggering a fight-or-flight response. Alongside this pathway there is an alternate route: after the stressor registers in the hypothalamus, signals travel through the sympathetic nervous system, after which the adrenal medulla secretes epinephrine. Predictability and controllability When individuals are informed about events before they occur, the magnitude of the stressor is less than it is for individuals who were not informed of the stressor. For example, an individual would prefer to know about a deadline ahead of time in order to prepare for it in advance, rather than find out about the deadline the day it is due. Knowing about the deadline ahead of time makes the intensity of the stressor smaller for that individual than for the individual who finds out about the deadline on the day itself. When this was tested, psychologists found that, when given the choice, individuals had a preference for predictable stressors rather than unpredictable stressors. The pathologies caused by the lack of predictability are experienced by some individuals working in fields such as emergency medicine, military defense, and disaster response. Additionally, the degree to which the stressor can be controlled plays a role in how the individual perceives stress. Research has found that if an individual is able to take some control over the stressor, then the level of stress will be decreased. In this research, it was found that individuals become increasingly anxious and distressed if they are unable to control their environment. As an example, imagine an individual in the Middle Ages who detests baths being required to take one. If the individual was forced to take the bath with no control over the temperature of the water (one of the variables), then their anxiety and stress levels would be higher than if the individual was given some control over the environment (such as being able to control the temperature of the water). 
Based on these two principles (predictability and control), two hypotheses attempt to account for these preferences: the preparatory response hypothesis and the safety hypothesis. Preparatory response hypothesis The idea behind this hypothesis is that an organism can better prepare for an event if it is informed beforehand, as this allows it to prepare biologically. By preparing biologically for the event beforehand, the individual is better able to decrease the event's aversiveness. In knowing when a potential stressor will occur (such as an exam), the individual could, in theory, prepare for it in advance, thus decreasing the stress that may result from that event. Safety hypothesis In this hypothesis, there are two time periods: one deemed safe (in which no stressor is present) and one deemed unsafe (in which the stressor is present). This is similar to procrastination and cramming; during the safe intervals (weeks before an exam) the individual is relaxed and not anxious, and during the unsafe intervals (the day or night before the exam) the individual most likely experiences anxiety. See also Disturbance (ecology) References Further reading National Research Council. Work-Related Musculoskeletal Disorders: Report, Workshop Summary, and Workshop Papers. Washington, DC: The National Academies Press, 1999. Physiology Stress (biological and psychological) Anxiety
Stressor
[ "Biology" ]
1,642
[ "Physiology" ]
1,591,646
https://en.wikipedia.org/wiki/Huchra%27s%20lens
Huchra's lens is the lensing galaxy of the Einstein Cross (Quasar 2237+30); it is also called ZW 2237+030 or QSO 2237+0305 G. It exhibits the phenomenon of gravitational lensing that was postulated by Albert Einstein when he realized that gravity would be able to bend light and thus could have lens-like effects. The galaxy is named for astronomer John Huchra, a key member of the team that discovered it. References Unbarred spiral galaxies Gravitational lensing Pegasus (constellation)
Huchra's lens
[ "Astronomy" ]
118
[ "Pegasus (constellation)", "Constellations" ]
1,591,653
https://en.wikipedia.org/wiki/Einstein%20Cross
The Einstein Cross (Q2237+030 or QSO 2237+0305) is a gravitationally lensed quasar that sits directly behind the centre of the galaxy ZW 2237+030, called Huchra's Lens. Four images of the same distant quasar (plus one in the centre, too dim to see) appear in the middle of the foreground galaxy due to strong gravitational lensing. This system was discovered by John Huchra and coworkers in 1985, although at the time they only detected that there was a quasar behind a galaxy based on differing redshifts and did not resolve the four separate images of the quasar. While gravitationally lensed light sources are often shaped into an Einstein ring, here the images form a peculiar cross shape instead, due to the elongated shape of the lensing galaxy and the quasar being off-centre. Other "Einstein crosses" have since been discovered. Details The quasar's redshift indicates that it is located about 8 billion light years from Earth, while the lensing galaxy is at a distance of 400 million light years. The apparent dimensions of the entire foreground galaxy are 0.87 × 0.34 arcminutes, while the apparent dimension of the cross in its centre accounts for only 1.6 × 1.6 arcseconds. The Einstein Cross can be found in the constellation Pegasus. Amateur astronomers are able to see some of the cross using telescopes; however, it requires extremely dark skies and a telescope with a large mirror. The individual images are labelled A through D (i.e. QSO 2237+0305 A); the lensing galaxy is sometimes referred to as QSO 2237+0305 G. Gallery See also Cloverleaf Quasar Einstein ring (Chwolson ring) Gravitational lensing Quasar SN Refsdal Twin Quasar References External links Simbad Information about Einstein's Cross on Skyhound.com Einstein's Cross core Einstein's Cross by Jay Reynolds Freeman Photo of the Einstein Cross at Astronomy Picture of the Day (March 11, 2007) Google Sky Gravitationally lensed quasars Gravitational lensing Pegasus (constellation) 69457 Cross
Einstein Cross
[ "Astronomy" ]
477
[ "Pegasus (constellation)", "Constellations" ]
1,591,728
https://en.wikipedia.org/wiki/Emission%20theory%20%28vision%29
Emission theory or extramission theory (variants: extromission) or extromissionism is the proposal that visual perception is accomplished by eye beams emitted by the eyes. This theory has been replaced by intromission theory (or intromissionism), which is that visual perception comes from something representative of the object (later established to be rays of light reflected from it) entering the eyes. Modern physics has confirmed that light is physically transmitted by photons from a light source, such as the sun, to visible objects, and finishing with the detector, such as a human eye or camera. History In the fifth century BC, Empedocles postulated that everything was composed of four elements; fire, air, earth, and water. He believed that Aphrodite made the human eye out of the four elements and that she lit the fire in the eye which shone out from the eye, making sight possible. If this were true, then one could see during the night just as well as during the day, so Empedocles postulated that there were two different types of emanations that interacted in some way: one that emanated from an object to the eye, and another that emanated from the eye to an object. He compared these outward-flowing emanations to the emission of light from a lantern. Around 400 BC, emission theory was held by Plato. Around 300 BC, Euclid wrote Optics and Catoptrics, in which he studied the properties of sight. Euclid postulated that the visual ray emitted from the eye travelled in straight lines, described the laws of reflection, and mathematically studied the appearance of objects by direct vision and by reflection. Ptolemy (c. 2nd century) wrote Optics, a work marking the culmination of the ancient Greek optics, in which he developed theories of direct vision (optics proper), vision by reflection (catoptics), and, notably, vision by refraction (dioptrics). Galen, also in the 2nd century, likewise endorsed the extramission theory (De Usu Partium Corporis Humani). His theory contained anatomical and physiological details which could not be found in the works of mathematicians and philosophers. Due to this feature and his medical authority, his view held considerable influence in the pre-modern Middle East and Europe, especially among medical doctors in these regions. Evidence for the theory Adherents of emission theory cited at least two lines of evidence for it. The light from the eyes of some animals (such as cats, which modern science has determined have highly reflective eyes) could also be seen in "darkness". Adherents of intromission theory countered by saying that if emission theory were true, then someone with weak eyes should have their vision improved when someone with good eyes looks at the same objects. Some argued that Euclid's version of emission theory was purely metaphorical, highlighting mainly the geometrical relations between eyes and objects. The geometry of classical optics is equivalent no matter which direction light is considered to move because light is modeled by its path, not as a moving object. However, his theory of clarity of vision (the circular appearance of far rectangular objects) makes sense only if the ray emits from eyes. Alternatively, Euclid's can be interpreted as a mathematical model whose only constraint was to save the phenomena, without the need of a strict correspondence between each theoretical entity and a physical counterpart. Measuring the speed of light was one line of evidence that spelled the end of emission theory as anything other than a metaphor. 
Refutation Alhazen was the first person to explain that vision occurs when light reflects from an object into one's eyes. The rise of rationalist physics in the 17th century led to a novel version of the intromissionist theory that proved extremely influential and displaced any legacies of the old emissive theories. In Cartesian physics, light was the sensation of pressure emitted by surrounding objects that sought to move, as transmitted through the rotatory motion of material corpuscles. These views extended to Isaac Newton's corpuscular theory of light, and would be adopted by John Locke and other 18th-century luminaries. Persistence of the theory Winer et al. (2002) have found evidence that as many as 50% of adults believe in emission theory. Rupert Sheldrake claims to have found evidence for emission theory through his experiments on the sense of being stared at. Relationship with echolocation Sometimes, the emission theory is explained by analogy with echolocation and sonar. For example, in explaining Ptolemy's theory, a psychologist stated: "Ptolemy’s ‘extramission’ theory of vision proposed scaling the angular size of objects using light rays that were emitted by the eyes and reflected back by objects. In practice some animals (bats, dolphins, whales, and even some birds and rodents) have evolved what is effectively an ‘extramission’ theory of audition to address this very concern." Note that this account of the Ptolemaic theory ('bouncing back of visual ray') differs from ones found in other sources. References Obsolete theories in physics Visual perception History of optics
Emission theory (vision)
[ "Physics" ]
1,042
[ "Theoretical physics", "Obsolete theories in physics" ]
1,591,812
https://en.wikipedia.org/wiki/Thapsigargin
Thapsigargin is a non-competitive inhibitor of the sarco/endoplasmic reticulum Ca2+ ATPase (SERCA). Structurally, thapsigargin is classified as a guaianolide, and is extracted from a plant, Thapsia garganica. It is a tumor promoter in mammalian cells. Thapsigargin raises cytosolic (intracellular) calcium concentration by blocking the ability of the cell to pump calcium into the sarcoplasmic and endoplasmic reticula. Store-depletion can secondarily activate plasma membrane calcium channels, allowing an influx of calcium into the cytosol. Depletion of ER calcium stores leads to ER stress and activation of the unfolded protein response. Non-resolved ER stress can cumulatively lead to cell death. Prolonged store depletion can protect against ferroptosis via remodeling of ER-synthesized phospholipids. Thapsigargin treatment and the resulting ER calcium depletion inhibit autophagy independent of the UPR. Thapsigargin is useful in experimentation examining the impacts of increasing cytosolic calcium concentrations and ER calcium depletion. A study from the University of Nottingham showed promising results for its use against Covid-19 and other coronaviruses. Biosynthesis The complete biosynthesis of thapsigargin has yet to be elucidated. A proposed biosynthesis starts with farnesyl pyrophosphate. The first step is controlled by the enzyme germacrene B synthase. In the second step, the C(8) position is easily activated for an allylic oxidation due to the position of the double bond. The next step is the addition of the acyloxy moiety by a P450 acetyltransferase, which is a well-known reaction for the synthesis of the diterpene taxol. In the third step, the lactone ring is formed by a cytochrome P450 enzyme using NADP+. With the butyloxy group on the C(8), the formation will only generate the 6,12-lactone ring. The fourth step is an epoxidation that initiates the last step of the base guaianolide formation. In the fifth step, a P450 enzyme closes the 5 + 7 guaianolide structure. The ring closing is important, because it will proceed via 1,10-epoxidation in order to retain the 4,5-double bond needed in thapsigargin. It is not known whether the secondary modifications to the guaianolide occur before or after the formation of thapsigargin, but they will need to be considered when elucidating the true biosynthesis. It should also be noted that several of these enzymes are P450s; therefore, oxygen and NADPH are likely crucial to this biosynthesis, and other cofactors such as Mg2+ and Mn2+ may be needed. Research Since inhibition of SERCA is a mechanism of action that has been used to target solid tumors, thapsigargin has attracted research interest. A prodrug of thapsigargin, mipsagargin, is currently undergoing clinical trials for the treatment of glioblastoma. The biological activity has also attracted research into the laboratory synthesis of thapsigargin. To date, three distinct syntheses have been reported: one by Steven V. Ley, one by Phil Baran, and one by P. Andrew Evans. Preclinical studies demonstrated that other effects of thapsigargin include suppression of nicotinic acetylcholine receptor activity in neurons of the guinea-pig ileum submucous plexus and rat superior cervical ganglion. Laboratory studies at the University of Nottingham, using in vitro cell cultures, indicate possible potential as a broad-spectrum antiviral, with activity against the COVID-19 virus (SARS-CoV-2), a common cold virus, respiratory syncytial virus (RSV), and the influenza A virus.
See also EBC-46 References Further reading Hydrolase inhibitors Sesquiterpene lactones Acetate esters Butyrate esters Azulenofurans Tertiary alcohols Cyclopentenes ATPase inhibitors Plant toxins
Thapsigargin
[ "Chemistry" ]
916
[ "Chemical ecology", "Plant toxins" ]
1,591,825
https://en.wikipedia.org/wiki/Alcor%20%28star%29
Alcor () is a binary star system in the constellation of Ursa Major. It is the fainter companion of Mizar, the two stars forming a naked eye double in the handle of the Big Dipper (or Plough) asterism in Ursa Major. The two lie about 83 light-years away from the Sun, as measured by the Hipparcos astrometry satellite. Nomenclature Alcor has the Flamsteed designation 80 Ursae Majoris. Alcor derives from Arabic , meaning 'faint one'; notable as a faintly perceptible companion of Mizar. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN; which included Alcor for 80 UMa. Mizar and Alcor With normal eyesight Alcor appears at about 12 minutes of arc from the second-magnitude star Mizar. Alcor is of magnitude 3.99 and spectral class A5V. Mizar's and Alcor's proper motions show they move together, along with most of the other stars of the Big Dipper except Dubhe and Alkaid, as members of the Ursa Major Moving Group, a mostly dispersed group of stars sharing a common birth. However, it has yet to be demonstrated conclusively that they are gravitationally bound. Recent studies indicate that Alcor and Mizar are somewhat closer together than previously thought: approximately 74,000 ± 39,000 AU, or 0.5–1.5 light-years. The uncertainty is due to our uncertainty about the exact distances from us. If they are exactly the same distance from us (somewhat unlikely) then the distance between them is only . Alcor B In 2009, Alcor was discovered to have a companion star Alcor B, a magnitude 8.8 red dwarf. Alcor B was discovered independently by two groups. One group led by Eric Mamajek (University of Rochester) and colleagues at Steward Observatory University of Arizona used adaptive optics on the 6.5-meter telescope at MMT Observatory. Another led by Neil Zimmerman, a graduate student at Columbia University and member of Project 1640, an international collaborative team that includes astrophysicists at the American Museum of Natural History, the University of Cambridge's Institute of Astronomy, the California Institute of Technology, and NASA's Jet Propulsion Laboratory, used the 5-meter Hale Telescope at Palomar Observatory. Alcor B is one second of arc away from Alcor A. Its spectral type is M3-4 and it is a main-sequence star, a red dwarf. Alcor A and B are situated 1.2 light-years away from, and are co-moving with, the Mizar quadruple system, making the system the second-closest stellar sextuplet—only Castor is closer. The Mizar–Alcor stellar sextuple system belongs to the Ursa Major Moving Group, a stellar group of stars of similar ages and velocities, and the closest cluster-like object to Earth. Other names In Arabic, Alcor is also known as Al-Sahja (the rhythmical form of the usual al-Suhā) meaning "forgotten", "lost", or "neglected". In traditional Indian astronomy, Alcor was known as Arundhati, wife of one of the Saptarishi. In the Miꞌkmaq myth of the great bear and the seven hunters, Mizar is Chickadee and Alcor is his cooking pot. Military namesakes USS Alcor (AD-34) and USS Alcor (AK-259) are both United States Navy ships. References External links Alcor at Jim Kaler's Stars website A-type main-sequence stars Ursae Majoris, g Big Dipper Ursae Majoris, 80 2 Alcor Ursa Major 5062 065477 116842 BD+55 1603 M-type main-sequence stars Ursa Major moving group
Alcor (star)
[ "Astronomy" ]
847
[ "Ursa Major", "Constellations" ]
1,591,845
https://en.wikipedia.org/wiki/Frei%20Otto
Frei Paul Otto (; 31 May 1925 – 9 March 2015) was a German architect and structural engineer noted for his use of lightweight structures, in particular tensile and membrane structures, including the roof of the Olympic Stadium in Munich for the 1972 Summer Olympics. Otto won the RIBA Royal Gold Medal in 2006 and was awarded the Pritzker Architecture Prize in 2015, shortly before his death. Early life Otto was born in , Germany, and grew up in Berlin. He studied architecture in Berlin before being drafted into the Luftwaffe as a fighter pilot in the last years of World War II. He was interned in a prisoner of war camp near Chartres (France) and with his aviation engineering training and lack of material and an urgent need for housing, began experimenting with tents for shelter. After the war he studied briefly in the US and visited Erich Mendelsohn, Mies van der Rohe, Richard Neutra, and Frank Lloyd Wright. Career He began a private practice in Germany in 1952. He earned a doctorate in tensioned constructions in 1954. His saddle-shaped cable-net music pavilion at the Bundesgartenschau (Federal Garden Exposition) in Kassel 1955 brought him his first significant attention. Otto specialised in lightweight tensile and membrane structures, and pioneered advances in structural mathematics and civil engineering. In 1958, Otto taught at Washington University in St. Louis' Sam Fox School of Design & Visual Arts where he met Buckminster Fuller. Otto founded the Institute for Lightweight Structures at the university of Stuttgart in 1964 and headed the institute until his retirement as university professor. Major works include the West German Pavilion at the Montreal Expo in 1967 and the roof of the 1972 Munich Olympic Arena. He has lectured worldwide and taught at the Architectural Association School of Architecture, where he also designed some of the research facilities buildings of the school's forest campus in Hooke Park. Until his death, Otto remained active as an architect and engineer, and as consultant to his protégé Mahmoud Bodo Rasch for a number of projects in the Middle East. One of his more recent projects was his work with Shigeru Ban on the Japanese Pavilion at Expo 2000 with a roof structure made entirely of paper, and together with SL Rasch GmbH Special and Lightweight Structures he designed a convertible roof for the Venezuelan Pavilion. In an effort to memorialise the September 11 attacks and its victims as early as 2002, Otto envisioned the two footprints of the World Trade Center buildings covered with water and surrounded by trees; his plan includes a world map embedded in the park with countries at war marked with lights and a continuously updated board announcing the number of people killed in war from 11 September 2001, onward. On request of Christoph Ingenhoven, Otto was consultant for special construction for the design of the "Light eyes" for Stuttgart 21. – drop-shaped overlights in the park, that descend onto the tracks to support the ceiling. Otto remarked in 2010 that the construction should be stopped because of the difficult geology. Otto died on 9 March 2015; he was to be publicly announced as the winner of the 2015 Pritzker Prize on 23 March but his death meant the committee announced his award on 10 March. Otto himself had been told earlier that he had won the prize by the executive director of the Pritzker Prize, Martha Thorne. He was reported to have said, "I've never done anything to gain this prize. Prize winning is not the goal of my life. 
I try to help poor people, but what shall I say here — I'm very happy." List of buildings This is a partial list of buildings designed by Otto: 1957 – Tanzbrunnen pavilion Rheinpark Cologne, Germany 1967 – West Germany Pavilion at Expo 67 Montreal, Canada 1972 – Roof for Olympic Stadium, Munich, Germany 1974 – Convention Center in Mecca, Saudi Arabia 1975 – Multihalle, Mannheim, Germany 1977 – Umbrellas for 1977 Pink Floyd tour 1980 – Aviary at Munich Zoo, Germany 1985 – Tuwaiq Palace, Saudi Arabia, with Buro Happold 1987–91 – Housing at the International Building Exhibition Berlin, Germany 2000 – Roof structure of the Japanese Pavilion at Expo 2000, Hanover Germany (provided engineering assistance with Buro Happold and architectural collaboration with Shigeru Ban) Awards (selected) 1974 – Thomas Jefferson Medal in Architecture 1980 – Honorary doctorate of science from the University of Bath 1982 – Großer BDA Preis 1996/97 – Wolf Prize in Architecture 2005 – Royal Gold Medal for architecture by RIBA 2006 – Praemium Imperiale in Architecture 2015 – Pritzker Architecture Prize See also Gridshell References Further reading Conrad Roland: Frei Otto – Spannweiten. Ideen und Versuche zum Leichtbau. Ein Werkstattbericht von Conrad Roland. Ullstein, Berlin, Frankfurt/Main und Wien 1965. Philip Drew: Frei Otto – Form and Structure, 1976, , Philip Drew: Tensile Architecture, 1979, , Muriel Emanuel, Dennis Sharp: "Contemporary Architects", New York: St. Martin's Press. 1980. p. 600. Frei Otto, Bodo Rasch: Finding Form: Towards an Architecture of the Minimal, 1996, Winfried Nerdinger: Frei Otto, Complete Works: Lightweight Construction – Natural Design, 2005, , - published on the occasion of the exhibition Frei Otto Lightweight Construction, Natural Design at the Architekturmuseum der Technischen Universität München in der Pinakothek der Moderne from 26 May to 28 August 2005, and cataloguing over 200 buildings and projects dating from the years 1951-2004 External links Frei Otto's official website Frei Otto: Spanning The Future Documentary film's official Website Japan Pavilion Expo 2000 – About the roof structure SL Rasch GmbH Homepage Last recorded interview with Frei Otto, about his life and receiving the Pritzker Prize Uncube Nr. 33 Frei Otto – by uncube magazine 1925 births 2015 deaths German World War II pilots People from Chemnitz Structural engineers Tensile architecture High-tech architecture Tensile membrane structures Washington University in St. Louis faculty Studienstiftung alumni Officers Crosses of the Order of Merit of the Federal Republic of Germany Recipients of the Order of Merit of Baden-Württemberg Pritzker Architecture Prize winners Recipients of the Royal Gold Medal Wolf Prize in Arts laureates Recipients of the Praemium Imperiale Members of the Academy of Arts, Berlin 20th-century German architects German prisoners of war in World War II held by France
Frei Otto
[ "Technology", "Engineering" ]
1,328
[ "Structural system", "Structural engineering", "Tensile architecture", "Structural engineers" ]
1,591,855
https://en.wikipedia.org/wiki/Ex%20%28text%20editor%29
ex (short for extended) is a line editor for Unix systems originally written by Bill Joy in 1976, beginning with an earlier program written by Charles Haley. Multiple implementations of the program exist; they are standardized by POSIX. History The original Unix editor ed was distributed with the Bell Labs versions of the operating system in the 1970s. George Coulouris of Queen Mary College, London, which had installed Unix in 1973, developed an improved version called em in 1975 that could take advantage of video terminals. While visiting Berkeley, Coulouris presented his program to Bill Joy, who modified it to be less demanding on the processor; Joy's version became ex and was included in the Berkeley Software Distribution. ex was eventually given a full-screen visual interface (adding to its command line oriented operation), thereby becoming the vi text editor. In recent times, ex is implemented as a personality of the vi program; most variants of vi still have an "ex mode", which is invoked using the command ex, or from within vi for one command by typing the : (colon) character. Although there is overlap between ex and vi functionality, some things can only be done with ex commands, so it remains useful when using vi. Relation to vi The core ex commands which relate to search and replace are essential to vi. For instance, the ex command :%s/OLD/NEW/g replaces every instance of OLD with NEW, and works in vi too. The % means every line in the file. The 'g' stands for global and means replace every instance on every line (if it was not specified, then only the first instance on each line would be replaced). Command-line invocation Synopsis ex [-rR] [-s|-v] [-c command] [-t tagstring] [-w size] [file...] Options -r recover specified files after a system crash -R sets readonly -s (XPG4 only) suppresses user-interactive feedback -v invoke visual mode (vi) -c command Execute command on first buffer loaded from file. May be used up to ten times. -t tagstring Edit the file containing the specified tag -w size Set window size - (obsolete) suppresses user-interactive feedback -l Enable Lisp editing mode -x Use encryption when writing files -C encryption option file The name(s) of the file(s) to be edited See also List of Unix commands References External links Standard Unix programs Unix SUS2008 utilities Unix text editors Line editor
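A minimal sketch of how the substitution command described above can be driven non-interactively (added here for illustration, not taken from the article): it assumes a POSIX-compliant ex is available on the system PATH, and the file name and search strings are hypothetical.

    import subprocess

    def ex_substitute(path, old, new):
        # Run ex in non-interactive mode (-s) and feed it commands on stdin:
        #   %s/OLD/NEW/g  -> substitute on every line (%), globally (g)
        #   wq            -> write the file back and quit
        # Note: this simple sketch assumes old/new contain no '/' characters.
        commands = f"%s/{old}/{new}/g\nwq\n"
        subprocess.run(["ex", "-s", path], input=commands, text=True, check=True)

    # Hypothetical usage: replace every "teh" with "the" in notes.txt
    ex_substitute("notes.txt", "teh", "the")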
Ex (text editor)
[ "Technology" ]
518
[ "Computing commands", "Standard Unix programs" ]
1,591,917
https://en.wikipedia.org/wiki/Willemite
Willemite is a zinc silicate mineral (Zn2SiO4) and a minor ore of zinc. It is highly fluorescent (green) under shortwave ultraviolet light. It occurs in a variety of colors in daylight, in fibrous masses and apple-green gemmy masses. Troostite is a variant in which the zinc is partly replaced by manganese; it occurs in solid brown masses. It was discovered in 1829 in the Belgian Vieille-Montagne mine. Armand Lévy was shown samples by a student at the university where he was teaching. Lévy named it after William I of the Netherlands (it is occasionally spelled villemite). The troostite variety is named after Dutch-American mineralogist Gerard Troost. Occurrence Willemite is usually formed as an alteration of previously existing sphalerite ore bodies, and is usually associated with limestone. It is also found in marble and may be the result of a metamorphism of earlier hemimorphite or smithsonite. Crystals have the form of hexagonal prisms terminated by rhombohedral planes: there are distinct cleavages parallel to the prism-faces and to the base. Granular and cleavage masses are of more common occurrence. It occurs in many places, but is best known from Arizona and the zinc, iron, and manganese deposits at Franklin and Sterling Hill Mines in New Jersey. It often occurs with red zincite (zinc oxide) and franklinite (an iron-rich zinc mineral occurring in sharp black isometric octahedral crystals and masses). Franklinite and zincite are not fluorescent. Uses Artificial willemite was used as the basis of first-generation fluorescent tube phosphors. When doped with manganese ions, it fluoresces with a broad white emission band. Some versions had some of the zinc replaced with beryllium. In the 1940s it was largely replaced by second-generation halophosphors based on fluorapatite. These, in turn, have been replaced by the third-generation TriPhosphors. See also List of minerals List of minerals named after people References External links Nesosilicates Zinc minerals Trigonal minerals Minerals in space group 148 Luminescent minerals Minerals described in 1829
Willemite
[ "Chemistry" ]
452
[ "Luminescence", "Luminescent minerals" ]
1,591,932
https://en.wikipedia.org/wiki/XView
XView is a widget toolkit from Sun Microsystems introduced in 1988. It provides an OPEN LOOK user interface for X Window System applications, with an object-oriented application programming interface (API) for the C programming language. Its interface, controls, and layouts are very close to that of the earlier SunView window system, making it easy to convert existing applications from SunView to X. Sun also produced the User Interface Toolkit (UIT), a C++ API to XView. The XView source code has been freely available since the early 1990s, making it the "first open-source professional-quality X Window System toolkit". XView was later abandoned by Sun in favor of Motif (the basis of CDE), and more recently GTK+ (the basis of GNOME). XView was reputedly the first system to use right-button context menus, which are now ubiquitous among computer user interfaces. See also OLIT MoOLIT OpenWindows References Further reading Ian Darwin, et al, X Window System User's Guide, OPEN LOOK Edition (O'Reilly & Associates, unpublished) Volume 3OL Dan Heller, XView Programming Manual (O'Reilly & Associates, 1991) Volume 7 Thomas Van Raalte, ed. XView Reference Manual (O'Reilly & Associates, 1991) Volume 7b Widget toolkits Sun Microsystems software X-based libraries
XView
[ "Technology" ]
291
[ "Computing stubs", "Software stubs" ]
1,591,958
https://en.wikipedia.org/wiki/Electronic%20ticket
An electronic ticket is a method of ticket entry, processing, and marketing for companies in the airline, railway, and other transport and entertainment industries. Airline ticket E-tickets in the airline industry were devised in about 1994, and have now largely replaced the older multi-layered paper ticketing systems. Since 1 June 2008, it has been mandatory for IATA members to use e-ticketing. Where paper tickets are still available, some airlines charge a fee for issuing paper tickets. When a reservation is confirmed, the airline keeps a record of the booking in its computer reservations system. Customers can print out or may be provided with a copy of an e-ticket itinerary receipt, which contains the record locator or reservation number and the e-ticket number. It is possible to print multiple copies of an e-ticket itinerary receipt. Besides providing itinerary details, an e-ticket itinerary receipt also contains: An official ticket number (including the airline's 3-digit ticketing code, a 4-digit form number, a 6-digit serial number, and sometimes a check digit) Carriage terms and conditions (or at least a reference to them) Fare and tax details, including fare calculation details and some additional data such as tour codes. The exact cost might not be stated, but a "fare basis" code will always identify the fare used. A short summary of fare restrictions, usually specifying only whether change or refund are permitted but not the penalties to which they are subject Form of payment Issuing office Baggage allowance Checking in with an e-ticket Passengers with e-tickets are required to check in at the airport for a flight in the usual manner, except that they may be required to present an e-ticket itinerary receipt or personal identification, such as a passport, or credit card. They can also use the record locator, often called a booking reference, a code of six letters and digits. Producing a print-out of an e-ticket itinerary receipt may be required to enter the terminal of some airports or to satisfy immigration regulations in some countries. The introduction of e-tickets has allowed for various enhancements to checking-in processes. Self-service and remote check-in online/mobile/telephone/self-service kiosk check-in (if the airline makes this option available) early check-in printing boarding passes at airport kiosks and at locations other than an airport delivery of boarding pass bar-codes via SMS or email to a mobile device Several websites assist people holding e-tickets to check in online in advance of the twenty-four-hour airline restriction. These sites store a passenger's flight information and then, when the airline opens up for online check-in, the data is transferred to the airline and the boarding pass is emailed back to the customer. With this e-ticket technology, if a passenger receives his boarding pass remotely and is travelling without check-in luggage, he may bypass traditional counter check-in. E-ticket limitations The ticketing systems of most airlines are only able to produce e-tickets for itineraries of no more than 16 segments, including surface segments. This is the same limit that applied to paper tickets. Another critical limitation is that at the time e-tickets were initially designed, most airlines still practiced product bundling. By the time the industry began 100% e-ticket implementation, more and more airlines began to unbundle previously included services (like checked baggage) and add them back in as optional fees (ancillary revenue).
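As an illustrative aside (added here, not part of the article), the ticket-number layout described above (a 3-digit airline ticketing code, a 4-digit form number, a 6-digit serial number, and an optional check digit) can be split into its fields as in the following sketch; the sample value and field names are made up, and the check-digit rule is not specified here.

    def parse_ticket_number(number):
        # Strip common separators, then slice the fields in the order
        # described above: 3 + 4 + 6 digits, plus an optional check digit.
        digits = number.replace("-", "").replace(" ", "")
        return {
            "airline_code": digits[0:3],
            "form_number": digits[3:7],
            "serial_number": digits[7:13],
            "check_digit": digits[13:] or None,  # present on some tickets only
        }

    # Hypothetical example value, not a real ticket number
    print(parse_ticket_number("176-2401234567 3"))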
However, the e-ticket standard did not anticipate and did not include a standardized mechanism for such optional fees. IATA later implemented the Electronic Miscellaneous Document (EMD) standard for such information. This way, airlines could consistently expose and capture such fees at time of booking through travel reservation systems, rather than having to surprise passengers with them at check-in. IATA mandated transition As part of the IATA Simplifying the Business initiative, the association instituted a program to switch the industry to 100% electronic ticketing. The program concluded on June 1, 2008, with the association saying that the resulting industry savings were approximately US$3 billion. In 2004, the IATA Board of Governors set the end of 2007 as the deadline for airlines to make the transition to 100% electronic ticketing for tickets processed through the IATA billing and settlement plan; in June 2007, the deadline was extended to May 31, 2008. As of June 1, 2008, paper tickets can no longer be issued on neutral stock by agencies reporting to their local BSP. Agents reporting to the ARC using company-provided stock or issuing tickets on behalf of an airline (GSAs and ticketing offices) are not subject to that restriction. The industry was unable to comply with the IATA mandate and paper tickets remained in circulation as of February 2009. Train tickets Amtrak started offering electronic tickets on all train routes on 30 July 2012. These tickets can be ordered over the internet and printed (as a PDF file), printed at a Quik-Trak kiosk, or at the ticket counter at the station. Electronic tickets can also be held in a smart phone and shown to the conductor using an app. Mobile tickets are common with operators of US commuter train networks (e.g. MTA LIRR and Metro North), but they are usually only offered on the US version of the App Store and only accept US-issued credit cards, as the app's payment page asks the user for the credit card's ZIP code to complete the purchase. Several European train operators also offer self-printable or downloadable tickets. Often tickets can also be delivered by SMS or MMS. Railway operators in other countries also issue electronic tickets. The national operators of Denmark and the Netherlands have a nationwide system where RFID smartcards are used as train tickets. In the UK, the issuance of printable or mobile tickets is at the discretion of train operators and is often available for advance tickets only (i.e. valid only on a specific train). This is very common in Europe for local urban rail, such as rapid transit/metros. During the 2010s, phone apps became increasingly popular; passengers do not have to visit a machine or a desk to buy a ticket or refill an RFID card, but can buy it on their phone. In India, an SMS sent by the Indian Railways, along with a valid proof of identity, is considered equivalent to a ticket, and an e-ticket PDF can also be downloaded from the IRCTC website or mobile app. Sport, concert, and cinema tickets Many sports venues, concert venues, and cinemas use electronic ticketing for their events. Electronic tickets, or "eTickets" as they are sometimes called, are often delivered as PDFs or another downloadable format that can be received via email or through a mobile app. Electronic tickets allow organizers to avoid the cost of producing and distributing physical tickets by transferring costs to the customer, who must own electronic hardware and purchase internet access in order to receive their ticket.
A printed copy of these tickets or a digital copy on a mobile phone should be presented on coming to the venue. These tickets now normally also have a barcode, which may be scanned on entry into the venue to streamline crowd processing. Electronic tickets have become increasingly prevalent in the entertainment industry over the last decade. In some cases, spectators who want to see a match may not need a printable electronic ticket. If someone with a membership to a football team books a ticket online, the member can just verify his/her reservation with a membership card at the entrance. This is common with teams in the English Premiership League. Implementations In January 2017 it was reported that Germany's Federal Minister of Transport and Digital Infrastructure, Alexander Dobrindt wants to create an electronic ticket to connect public bus and train services as well as parking spaces and potentially car-sharing services across all cities. A nationwide electronic ticket system was introduced in Denmark in 2010, called Rejsekort. See also Digital ticket Mobile ticketing Travel technology Flight Interruption Manifest Ticket system References External links IATA Simplifying the Business ET webpage 'Paperless ticketing' aims to thwart scalping at concerts, sports events Airline tickets Travel technology Transport law
Electronic ticket
[ "Physics" ]
1,667
[ "Physical systems", "Transport", "Transport law" ]
1,592,061
https://en.wikipedia.org/wiki/Isophthalic%20acid
Isophthalic acid is an organic compound with the formula C6H4(CO2H)2. This colorless solid is an isomer of phthalic acid and terephthalic acid. The main industrial uses of purified isophthalic acid (PIA) are for the production of polyethylene terephthalate (PET) resin and for the production of unsaturated polyester resin (UPR) and other types of coating resins. Isophthalic acid is one of three isomers of benzenedicarboxylic acid, the others being phthalic acid and terephthalic acid. Crystalline isophthalic acid is built up from molecules connected by hydrogen bonds, forming infinite chains. Preparation Isophthalic acid is produced on the billion kilogram per year scale by oxidizing meta-xylene using oxygen. The process employs a cobalt-manganese catalyst. The world's largest producer of isophthalic acid is Lotte Chemical Corporation. In the laboratory, chromic acid can be used as the oxidant. It also arises by fusing potassium meta-sulfobenzoate, or meta-bromobenzoate with potassium formate (terephthalic acid is also formed in the last case). The barium salt, as its hexahydrate, is very soluble in water (a distinction between phthalic and terephthalic acids). Uvitic acid, 5-methylisophthalic acid, is obtained by oxidizing mesitylene or by condensing pyroracemic acid with baryta water. Applications Aromatic dicarboxylic acids are used as precursors (in the form of acyl chlorides) to commercially important polymers, e.g. the fire-resistant material Nomex. Mixed with terephthalic acid, isophthalic acid is used in the production of PET resins for drink plastic bottles and food packaging. The high-performance polymer polybenzimidazole is produced from isophthalic acid. Also, the acid is used as an important input to produce insulation materials. References Note: reference 2 refers to the ortho isomer. Accurate cites for the meta isomer not available. External links International Chemical Safety Card 0500 Dicarboxylic acids Monomers Benzoic acids
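As an idealized overall stoichiometry for the air oxidation described above (added for clarity; the actual industrial process proceeds through intermediate oxidation products), the conversion of meta-xylene can be written as:

    \mathrm{C_6H_4(CH_3)_2 + 3\,O_2 \rightarrow C_6H_4(CO_2H)_2 + 2\,H_2O}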
Isophthalic acid
[ "Chemistry", "Materials_science" ]
499
[ "Monomers", "Polymer chemistry" ]
1,592,074
https://en.wikipedia.org/wiki/Oxygen%20concentrator
An oxygen concentrator is a device that concentrates the oxygen from a gas supply (typically ambient air) by selectively removing nitrogen to supply an oxygen-enriched product gas stream. They are used industrially, to provide supplemental oxygen at high altitudes, and as medical devices for oxygen therapy. Oxygen concentrators are used widely for oxygen provision in healthcare applications, especially where liquid or pressurized oxygen is too dangerous or inconvenient, such as in homes or portable clinics, and can also provide an economical source of oxygen in industrial processes, where they are also known as oxygen gas generators or oxygen generation plants. Two methods in common use are pressure swing adsorption and membrane gas separation. Pressure swing adsorption (PSA) oxygen concentrators use a molecular sieve to adsorb gases and operate on the principle of rapid pressure swing adsorption of atmospheric nitrogen onto zeolite minerals at high pressure. This type of adsorption system is therefore functionally a nitrogen scrubber, allowing the other atmospheric gases to pass through, leaving oxygen as the primary gas remaining. PSA technology is a reliable and economical technique for small to mid-scale oxygen generation. Cryogenic separation is more suitable at higher volumes. Gas separation across a membrane is a pressure-driven process, where the driving force is the difference in pressure between inlet of raw material and outlet of product. The membrane used in the process is a generally non-porous layer, so there will not be a severe leakage of gas through the membrane. The performance of the membrane depends on permeability and selectivity. Permeability is affected by the penetrant size. Larger gas molecules have a lower diffusion coefficient. The membrane gas separation equipment typically pumps gas into the membrane module and the targeted gases are separated based on difference in diffusivity and solubility. For example, oxygen will be separated from the ambient air and collected at the upstream side, and nitrogen at the downstream side. As of 2016, membrane technology was reported as capable of producing 10 to 25 tonnes of 25 to 40% oxygen per day. History Home medical oxygen concentrators were invented in the early 1970s, with the manufacturing output of these devices increasing in the late 1970s. Union Carbide Corporation and Bendix Corporation were both early manufacturers. Before that era, home medical oxygen therapy required the use of heavy high-pressure oxygen cylinders or small cryogenic liquid oxygen systems. Both of these delivery systems required frequent home visits by suppliers to replenish oxygen supplies. In the United States, Medicare switched from fee-for-service payment to a flat monthly rate for home oxygen therapy in the mid-1980s, causing the durable medical equipment (DME) industry to rapidly embrace concentrators as a way to control costs. This reimbursement change dramatically decreased the number of primary high pressure and liquid oxygen delivery systems in use in homes in the United States at that time. Oxygen concentrators became the preferred and most common means of delivering home oxygen. The number of manufacturers entering the oxygen concentrator market increased greatly as a result of this change. Union Carbide Corporation invented the molecular sieve in the 1950s, which made these devices possible. It also invented the first cryogenic liquid home medical oxygen systems in the 1960s. 
How oxygen concentrators work Oxygen concentrators using pressure swing adsorption (PSA) technology are used widely for oxygen provision in healthcare applications, especially where liquid or pressurized oxygen is too dangerous or inconvenient, such as in homes or portable clinics. For other purposes, there are also concentrators based on nitrogen separation membrane technology. An oxygen concentrator takes in air and removes nitrogen from it, leaving an oxygen-enriched gas for use by people requiring medical oxygen due to low oxygen levels in their blood. Oxygen concentrators provide an economical source of oxygen in industrial processes, where they are also known as oxygen gas generators or oxygen generation plants. Pressure swing adsorption These oxygen concentrators utilize a molecular sieve to adsorb gases and operate on the principle of rapid pressure swing adsorption of atmospheric nitrogen onto zeolite minerals at high pressure. This type of adsorption system is therefore functionally a nitrogen scrubber, allowing the other atmospheric gases to pass through, leaving oxygen as the primary gas remaining. PSA technology is a reliable and economical technique for small- to mid-scale oxygen generation. Cryogenic separation is more suitable at higher volumes, and external delivery generally more suitable for small volumes. At high pressure, the porous zeolite adsorbs large quantities of nitrogen because of its large surface area and chemical characteristics. The oxygen concentrator compresses air and passes it over zeolite, causing the zeolite to adsorb the nitrogen from the air. It then collects the remaining gas, which is mostly oxygen, and the nitrogen desorbs from the zeolite under the reduced pressure to be vented. An oxygen concentrator has an air compressor, two cylinders filled with zeolite pellets, a pressure-equalizing reservoir, and some valves and tubes. In the first half-cycle, the first cylinder receives air from the compressor, which lasts about 3 seconds. During that time, the pressure in the first cylinder rises from atmospheric to about 2.5 times normal atmospheric pressure (typically 20 psi/138 kPa gauge, or 2.36 atmospheres absolute) and the zeolite becomes saturated with nitrogen. As the first cylinder reaches near pure oxygen (there are small amounts of argon, CO2, water vapour, radon, and other minor atmospheric components) in the first half-cycle, a valve opens and the oxygen-enriched gas flows to the pressure-equalizing reservoir, which connects to the patient's oxygen hose. At the end of the first half of the cycle, there is another valve position change so that the air from the compressor is directed to the second cylinder. The pressure in the first cylinder drops as the enriched oxygen moves into the reservoir, allowing the nitrogen to be desorbed back into gas. Partway through the second half of the cycle, there is another valve position change to vent the gas in the first cylinder back into the ambient atmosphere, keeping the concentration of oxygen in the pressure-equalizing reservoir from falling below about 90%. The pressure in the hose delivering oxygen from the equalizing reservoir is kept steady by a pressure-reducing valve. Older units cycled for a period of about 20 seconds and supplied up to 5 litres per minute of 90+% oxygen. Since about 1999, units capable of supplying up to 10 L/min have been available. Classic oxygen concentrators use two-bed molecular sieves; newer concentrators use multi-bed molecular sieves. 
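As a worked check of the pressure figures quoted above (added for clarity, not from the article), converting the gauge pressure to absolute pressure reproduces the stated value of roughly 2.36 atmospheres:

    p_{\mathrm{abs}} = p_{\mathrm{gauge}} + p_{\mathrm{atm}} \approx 138\,\mathrm{kPa} + 101.3\,\mathrm{kPa} \approx 239\,\mathrm{kPa} \approx 2.36\,\mathrm{atm}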
The advantage of the multi-bed technology is the increased availability and redundancy, as the 10 L/min molecular sieves are staggered and multiplied on several platforms. With this, over 960 L/min can be produced. The ramp-up time — the elapsed time until a multi-bed concentrator is producing oxygen at >90% concentration — is often less than 2 minutes, much faster than simple two-bed concentrators. This is a big advantage in mobile emergencies. The option to fill standard oxygen cylinders (e.g., 50 L at 200 bar = 10,000 L each) with high-pressure boosters, to ensure automatic failover to previously filled reserve cylinders and to ensure the oxygen supply chain, e.g., in case of power failure, is given with those systems. Membrane separation In membrane gas separation, membranes act as a permeable barrier, which different compounds move across at different rates or do not cross at all. Gas mixtures can be effectively separated by synthetic membranes made from polymers such as polyamide or cellulose acetate, or from ceramic materials. While polymeric membranes are economical and technologically useful, they are bound by their performance, known as the Robeson limit (permeability must be sacrificed for selectivity and vice versa). This limit affects polymeric membrane use for CO2 separation from flue gas streams, since mass transport becomes limiting and CO2 separation becomes very expensive due to low permeabilities. Membrane materials have expanded into the realm of silica, zeolites, metal-organic frameworks, and perovskites, due to their strong thermal and chemical resistance as well as high tunability (ability to be modified and functionalized), leading to increased permeability and selectivity. Membranes can be used for separating gas mixtures, where they act as a permeable barrier through which different compounds move across at different rates or don't move at all. The membranes can be nanoporous, polymer, etc., and the gas molecules penetrate according to their size, diffusivity, or solubility. Gas separation across a membrane is a pressure-driven process, where the driving force is the difference in pressure between inlet of raw material and outlet of product. The membrane used in the process is a generally non-porous layer, so there will not be a severe leakage of gas through the membrane. The performance of the membrane depends on permeability and selectivity. Permeability is affected by the penetrant size. Larger gas molecules have a lower diffusion coefficient. The polymer chain flexibility and free volume in the polymer of the membrane material influence the diffusion coefficient, as the space within the permeable membrane must be large enough for the gas molecules to diffuse across. The solubility is expressed as the ratio of the concentration of the gas in the polymer to the pressure of the gas in contact with it. Permeability is the ability of the membrane to allow the permeating gas to diffuse through the material of the membrane as a consequence of the pressure difference over the membrane, and can be measured in terms of the permeate flow rate, membrane thickness and area, and the pressure difference across the membrane. The selectivity of a membrane is a measure of the ratio of permeability of the relevant gases for the membrane. It can be calculated as the ratio of permeability of two gases in binary separation. 
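One compact way to write the two membrane quantities described above (added for clarity; a standard textbook formulation rather than something taken from the article) expresses permeability P in terms of the measured permeate flow Q, membrane thickness l, membrane area A, and the pressure difference across the membrane, and ideal selectivity as the ratio of two permeabilities:

    P = \frac{Q\,l}{A\,\Delta p}, \qquad \alpha_{A/B} = \frac{P_A}{P_B}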
The membrane gas separation equipment typically pumps gas into the membrane module, and the targeted gases are separated based on difference in diffusivity and solubility. For example, oxygen will be separated from the ambient air and collected at the upstream side and nitrogen at the downstream side. As of 2016, membrane technology was reported as capable of producing 10 to 25 tonnes of 25 to 40% oxygen per day. Applications Medical oxygen concentrators are used in hospitals or at home to concentrate oxygen for patients. PSA generators provide a cost-efficient source of oxygen. They are a safer, less expensive, and more convenient alternative to tanks of cryogenic oxygen or pressurised cylinders. They can be used in various industries, including medical, pharmaceutical production, water treatment, and glass manufacture. PSA generators are particularly useful in remote or inaccessible parts of the world or mobile medical facilities (military hospitals, disaster facilities). Portable oxygen concentrators Since the early 2000s, many companies have produced portable oxygen concentrators. Typically, these devices produce the equivalent of one to five liters per minute of continuous oxygen flow and they use some version of pulse flow or "demand flow" to deliver oxygen only when the patient is inhaling. They can also provide pulses of oxygen either to provide higher intermittent flows or to reduce power consumption. Research into oxygen concentration is ongoing, and modern techniques suggest that the amount of adsorbent required by medical oxygen concentrators can be potentially "reduced by a factor of three while offering ~10–20% higher oxygen recovery compared to a typical commercial unit." The FAA has approved the use of portable oxygen concentrators on commercial airlines. However, users of these devices should check in advance as to whether a particular brand or model is permitted on a particular airline. Unlike in commercial airlines, users of aircraft without cabin pressurization need oxygen concentrators that are able to deliver enough flowrate even at high altitudes. Usually, "demand," or pulse-flow, oxygen concentrators are not used by patients while they sleep. There have been problems with the oxygen concentrators not being able to detect when the sleeping patient is inhaling. Some larger portable oxygen concentrators are designed to operate in a continuous-flow mode in addition to pulse-flow mode. Continuous-flow mode is considered safe for night use when coupled with a CPAP machine. Alternate applications Repurposed medical oxygen concentrators or specialized industrial oxygen concentrators can be made to operate small oxyacetylene or other fuel gas cutting, welding, and lampworking torches. Application of a PSA Oxygen Generator in Industries Oxygen is widely needed for the oxidation of different chemicals for industrial purposes. Previously, these industries purchased oxygen cylinders in large numbers to meet their requirements, but it was very expensive, and oxygen cylinders were not always available in the market. Industries that need PSA oxygen generators for production Paper industry Oxygen is needed here for the bleaching of paper pulp with the help of the oxidation process to make the paper white. Moreover, lignin present in the wood is removed by the delignification process, which also needs oxygen. Glass industry Huge furnaces are needed to melt the raw materials that combine to form glass. 
Oxygen flares up the furnace's fire to burn at a higher temperature needed for the production of glass. Chemical industries Oxygen is needed for the oxidation of different chemicals to form the desired chemical substances. Waste chemical products are burnt down and destroyed in the incinerator with the help of oxygen; thus, the continuous supply of a bulk amount of oxygen is essential, which is possible only by a PSA oxygen generator. Safety In both clinical and emergency-care situations, oxygen concentrators have the advantage of not being as dangerous as oxygen cylinders, which can, if ruptured or leaking, greatly increase the combustion rate of fire. As such, oxygen concentrators are particularly advantageous in military or disaster situations, where oxygen tanks may be dangerous or unfeasible. Oxygen concentrators are considered sufficiently foolproof to be supplied to individual patients as a prescription item for use in their homes. Typically they are used as an adjunct to CPAP treatment of severe sleep apnea. There also are other medical uses for oxygen concentrators, including COPD and other respiratory diseases. People who depend upon oxygen concentrators for home care may have life-threatening emergencies if the electricity fails during a natural disaster. Industrial oxygen concentrators Industrial processes may use much higher pressures and flows than medical units. To meet that need, another process, called vacuum swing adsorption (VSA), has been developed by Air Products. This process uses a single low-pressure blower and a valve that reverses the flow through the blower so that the regeneration phase occurs under a vacuum. Generators using this process are being marketed to the aquaculture industry. Industrial oxygen concentrators are often available in a much wider range of capacities than medical concentrators. Industrial oxygen concentrators are sometimes referred to as oxygen generators within the oxygen and ozone industries to distinguish them from medical oxygen concentrators. The distinction is used in an attempt to clarify that industrial oxygen concentrators are not medical devices approved by the Food and Drug Administration (FDA) and they are not suitable for use as bedside medical concentrators. However, applying the oxygen generator nomenclature can lead to confusion. The term oxygen generator is a misnomer in that the oxygen is not generated as it is with a chemical oxygen generator, but rather it is concentrated from the air. Non-medical oxygen concentrators can be used as feed gas to a medical oxygen system, such as the oxygen system in a hospital, though governmental approval is required, such as by the FDA, and additional filtering is generally required. During the COVID-19 pandemic The COVID-19 pandemic increased the demand for oxygen concentrators. During the pandemic open source oxygen concentrators were developed, locally manufactured – with prices below imported products – and used, especially during a COVID-19 pandemic wave in India. See also References Drug delivery devices Medical equipment Oxygen therapy Dosage forms Industrial gases Gas technologies
Oxygen concentrator
[ "Chemistry", "Biology" ]
3,382
[ "Pharmacology", "Drug delivery devices", "Medical equipment", "Industrial gases", "Chemical process engineering", "Medical technology" ]
1,592,325
https://en.wikipedia.org/wiki/Glutamate%20decarboxylase
Glutamate decarboxylase or glutamic acid decarboxylase (GAD) is an enzyme that catalyzes the decarboxylation of glutamate to gamma-aminobutyric acid (GABA) and carbon dioxide (CO2). GAD uses pyridoxal-phosphate (PLP) as a cofactor. The reaction proceeds as follows: L-glutamate → GABA + CO2. In mammals, GAD exists in two isoforms with molecular weights of 67 and 65 kDa (GAD67 and GAD65), which are encoded by two different genes on different chromosomes (GAD1 and GAD2 genes, chromosomes 2 and 10 in humans, respectively). GAD67 and GAD65 are expressed in the brain where GABA is used as a neurotransmitter, and they are also expressed in the insulin-producing β-cells of the pancreas, in varying ratios depending upon the species. Together, these two enzymes maintain the major physiological supply of GABA in mammals, though it may also be synthesized from putrescine in the enteric nervous system, brain, and elsewhere by the actions of diamine oxidase and aldehyde dehydrogenase 1a1. Several truncated transcripts and polypeptides of GAD67 are detectable in the developing brain; however, their function, if any, is unknown. Structure and mechanism Both isoforms of GAD are homodimeric structures, consisting of three primary domains: the PLP, C-terminal and N-terminal domains. The PLP-binding domain of this enzyme adopts a type I PLP-dependent transferase-like fold. The reaction proceeds via the canonical mechanism, involving Schiff base linkage between PLP and Lys405. PLP is held in place through base-stacking with an adjacent histidine residue, and GABA is positioned such that its carboxyl group forms a salt bridge with arginine and a hydrogen bond with glutamine. Dimerization is essential to maintaining function, as the active site is found at this interface, and mutations interfering with optimal association between the two chains have been linked to pathology, such as schizophrenia. Interference with dimerization by GAD inhibitors such as 2-keto-4-pentenoic acid (KPA) and ethyl ketopentenoate (EKP) was also shown to lead to dramatic reductions in GABA production and incidence of seizures. Catalytic activity is mediated by a short flexible loop at the dimer interface (residues 432–442 in GAD67, and 423–433 in GAD65). In GAD67 this loop remains tethered, covering the active site and providing a catalytic environment to sustain GABA production; its mobility in GAD65 promotes a side reaction that results in release of PLP, leading to autoinactivation. The conformation of this loop is intimately linked to the C-terminal domain, which also affects the rate of autoinactivation. Moreover, GABA-bound GAD65 is intrinsically more flexible and exists as an ensemble of states, thus providing more opportunities for autoantigenicity as seen in Type 1 diabetes. GAD derived from Escherichia coli shows additional structural intricacies, including a pH-dependent conformational change. This behavior is defined by the presence of a triple helical bundle formed by the N-termini of the hexameric protein in acidic environments. Regulation of GAD65 and GAD67 Despite an extensive sequence similarity between the two genes, GAD65 and GAD67 fulfill very different roles within the human body. Additionally, research suggests that GAD65 and GAD67 are regulated by distinctly different cellular mechanisms. GAD65 and GAD67 synthesize GABA at different locations in the cell, at different developmental times, and for functionally different purposes. GAD67 is spread evenly throughout the cell while GAD65 is localized to nerve terminals.
GAD67 synthesizes GABA for neuron activity unrelated to neurotransmission, such as synaptogenesis and protection from neural injury. This function requires widespread, ubiquitous presence of GABA. GAD65, however, synthesizes GABA for neurotransmission, and therefore is only necessary at nerve terminals and synapses. In order to aid in neurotransmission, GAD65 forms a complex with heat shock cognate 70 (HSC70), cysteine string protein (CSP) and the vesicular GABA transporter VGAT; together, this complex helps package GABA into vesicles for release during neurotransmission. GAD67 is transcribed during early development, while GAD65 is not transcribed until later in life. This developmental difference in GAD67 and GAD65 reflects the functional properties of each isoform; GAD67 is needed throughout development for normal cellular functioning, while GAD65 is not needed until slightly later in development when synaptic inhibition is more prevalent. GAD67 and GAD65 are also regulated differently post-translationally. Both GAD65 and GAD67 are regulated via phosphorylation of a dynamic catalytic loop, but the regulation of these isoforms differs; GAD65 is activated by phosphorylation while GAD67 is inhibited by phosphorylation. GAD67 is predominantly found activated (~92%), whereas GAD65 is predominantly found inactivated (~72%). GAD67 is phosphorylated at threonine 91 by protein kinase A (PKA), while GAD65 is phosphorylated, and therefore regulated, by protein kinase C (PKC). Both GAD67 and GAD65 are also regulated post-translationally by pyridoxal 5’-phosphate (PLP); GAD is activated when bound to PLP and inactive when not bound to PLP. The majority of GAD67 is bound to PLP at any given time, whereas GAD65 binds PLP when GABA is needed for neurotransmission. This reflects the functional properties of the two isoforms; GAD67 must be active at all times for normal cellular functioning, and is therefore constantly activated by PLP, while GAD65 must only be activated when GABA neurotransmission occurs, and is therefore regulated according to the synaptic environment. Studies with mice also show functional differences between GAD67 and GAD65. GAD67−/− mice are born with cleft palate and die within a day after birth, while GAD65−/− mice survive with a slightly increased tendency toward seizures. Additionally, GAD65+/− mice have symptoms similar to attention deficit hyperactivity disorder (ADHD) in humans. Role in the nervous system Both GAD67 and GAD65 are present in all types of synapses within the human nervous system. This includes dendrodendritic, axosomatic, and axodendritic synapses. Preliminary evidence suggests that GAD65 is dominant in the visual and neuroendocrine systems, which undergo more phasic changes. It is also believed that GAD67 is present at higher amounts in tonically active neurons. Role in pathology Autism Both GAD65 and GAD67 experience significant downregulation in cases of autism. In a comparison of autistic versus control brains, GAD65 and GAD67 showed an average downregulation of 50% in parietal and cerebellar cortices of autistic brains. Cerebellar Purkinje cells also showed a 40% downregulation, suggesting that affected cerebellar nuclei may disrupt output to higher order motor and cognitive areas of the brain. Diabetes Both GAD67 and GAD65 are targets of autoantibodies in people who later develop type 1 diabetes mellitus or latent autoimmune diabetes. Injections with GAD65 in ways that induce immune tolerance have been shown to prevent type 1 diabetes in rodent models. 
In clinical trials, injections with GAD65 have been shown to preserve some insulin production for 30 months in humans with type 1 diabetes. A Cochrane systematic review also examined one study showing improvement of C-peptide levels in cases of latent autoimmune diabetes in adults, 5 years following treatment with GAD65. However, the studies available for inclusion in this review had considerable flaws in quality and design. Stiff person syndrome High titers of autoantibodies to glutamic acid decarboxylase (GAD) are well documented in association with stiff person syndrome (SPS). Glutamic acid decarboxylase is the rate-limiting enzyme in the synthesis of γ-aminobutyric acid (GABA), and impaired function of GABAergic neurons has been implicated in the pathogenesis of SPS. Autoantibodies to GAD might be the causative agent or a disease marker. Schizophrenia and bipolar disorder Substantial dysregulation of GAD mRNA expression, coupled with downregulation of reelin, is observed in schizophrenia and bipolar disorder. The most pronounced downregulation of GAD67 was found in the hippocampal stratum oriens in both disorders, with varying degrees of downregulation in other layers and structures of the hippocampus. GAD67 is a key enzyme involved in the synthesis of the inhibitory neurotransmitter GABA, and people with schizophrenia have been shown to express lower amounts of GAD67 in the dorsolateral prefrontal cortex compared to healthy controls. The mechanism underlying the decreased levels of GAD67 in people with schizophrenia remains unclear. Some have proposed that an immediate early gene, Zif268, which normally binds to the promoter region of GAD67 and increases transcription of GAD67, is lower in schizophrenic patients, thus contributing to decreased levels of GAD67. Since the dorsolateral prefrontal cortex (DLPFC) is involved in working memory, and GAD67 and Zif268 mRNA levels are lower in the DLPFC of schizophrenic patients, this molecular alteration may account, at least in part, for the working memory impairments associated with the disease. Parkinson disease The bilateral delivery of glutamic acid decarboxylase (GAD) by an adeno-associated viral vector into the subthalamic nucleus of patients between 30 and 75 years of age with advanced, progressive, levodopa-responsive Parkinson disease resulted in significant improvement over baseline during the course of a six-month study. Cerebellar disorders Intracerebellar administration of GAD autoantibodies to animals increases the excitability of motoneurons and impairs the production of nitric oxide (NO), a molecule involved in learning. Epitope recognition contributes to cerebellar involvement. Reduced GABA levels increase glutamate levels as a consequence of lower inhibition of subtypes of GABA receptors. Higher glutamate levels activate microglia, and activation of system xc(−) increases extracellular glutamate release. Neuropathic pain Peripheral nerve injury of the sciatic nerve (a neuropathic pain model) induces a transient loss of GAD65 immunoreactive terminals in the spinal cord dorsal horn, suggesting that these alterations may be involved in the development and amelioration of pain behaviour. 
Other anti-GAD-associated neurologic disorders Antibodies directed against glutamic acid decarboxylase (GAD) are increasingly found in patients with other symptoms indicative of central nervous system (CNS) dysfunction, such as ataxia, progressive encephalomyelitis with rigidity and myoclonus (PERM), limbic encephalitis, and epilepsy. The pattern of anti-GAD antibodies in epilepsy differs from that in type 1 diabetes and stiff-person syndrome. Role of glutamate decarboxylase in other organisms Besides the synthesis of GABA, GAD has additional functions and structural variations that are organism-dependent. In Saccharomyces cerevisiae, GAD binds the Ca2+ regulatory protein calmodulin (CaM) and is also involved in responding to oxidative stress. Similarly, GAD in plants binds calmodulin. This interaction occurs at the 30-50 bp CaM-binding domain (CaMBD) in its C terminus and is necessary for proper regulation of GABA production. Unlike in vertebrates and invertebrates, the GABA produced by GAD in plants is used to signal abiotic stress by controlling levels of intracellular Ca2+ via CaM. Binding to CaM opens Ca2+ channels and leads to an increase in Ca2+ concentrations in the cytosol, allowing Ca2+ to act as a secondary messenger and activate downstream pathways. When GAD is not bound to CaM, the CaMBD acts as an autoinhibitory domain, thus deactivating GAD in the absence of stress. Interestingly, in two plant species, rice and apple, Ca2+/CaM-independent GAD isoforms have been discovered. The C-termini of these isoforms contain substitutions at key residues of the CaMBD necessary for interaction with CaM, preventing calmodulin from binding to GAD. Whereas the CaMBD of the rice isoform still functions as an autoinhibitory domain, the C-terminus of the apple isoform does not. Finally, the structure of plant GAD is a hexamer and has pH-dependent activity, with an optimal pH of 5.8 in multiple species, but also significant activity at pH 7.3 in the presence of CaM. It is also believed that control of glutamate decarboxylase could improve post-harvest quality of citrus produce. In Citrus plants, research has shown that glutamate decarboxylase plays a key role in citrate metabolism. When glutamate decarboxylase levels were increased via direct exposure, citrate levels within the plants increased significantly, post-harvest quality was better maintained, and rot rates decreased. Just like GAD in plants, GAD in E. coli has a hexamer structure and is more active under acidic pH; the pH optimum for E. coli GAD is 3.8-4.6. However, unlike plants and yeast, GAD in E. coli does not require calmodulin binding to function. There are also two isoforms of GAD, namely GadA and GadB, encoded by separate genes in E. coli, although both isoforms are biochemically identical. The enzyme plays a major role in conferring acid resistance and allows bacteria to temporarily survive in highly acidic environments (pH < 2.5) like the stomach. This is done by GAD decarboxylating glutamate to GABA, which consumes H+ as a reactant and thereby raises the pH inside the bacteria. GABA can then be exported out of E. coli cells and contribute to increasing the pH of the nearby extracellular environment. References External links Genetics, Expression Profiling Support GABA Deficits in Schizophrenia - Schizophrenia Research Forum, 25 June 2007. EC 4.1.1 Molecular neuroscience Biology of bipolar disorder GABA Glutamate (neurotransmitter)
Glutamate decarboxylase
[ "Chemistry" ]
3,272
[ "Molecular neuroscience", "Molecular biology" ]
1,592,548
https://en.wikipedia.org/wiki/List%20of%20Apple%20typefaces
This is a list of typefaces made by/for Apple Inc. Serif Proportional Apple Garamond (1983), designed to replace Motter Tektura in the Apple logo. Not included on Macs in a user-available form. New York (1984, by Susan Kare), a serif font. Toronto (1984, Susan Kare) Athens (1984, Susan Kare), slab serif. Hoefler Text (1991, Jonathan Hoefler), still included with every Mac. Four-member family with an ornament font. Espy Serif (1993, bitmapped font, dropped with Mac OS 8) Fancy (1993), Apple Newton font based on Times Roman New York (2019), a new design unrelated to the earlier typeface of the same name. Designed to work with San Francisco. Available in four optical sizes: extra large, large, medium, and small. Sans-serif Proportional Chicago (1984, Susan Kare), pre-Mac OS 8 system font, also used by early iPods. Geneva (1984, Susan Kare), sans-serif font inspired by Helvetica. Converted to TrueType format and still installed on Macs. Espy Sans (1993, EWorld, Apple Newton and iPod Mini font, known as System on the Apple Newton platform) System (1993, see Espy Sans) eWorld Tight (1993), EWorld font based on Helvetica Compressed Simple (1993), Apple Newton font, based on Geneva Skia (1993, Matthew Carter), demonstration of QuickDraw GX typography in the style of inscriptions from antiquity. Still installed on Macs. Charcoal (1999, David Berlow), Mac OS 8 system font. Lucida Grande (2000, Charles Bigelow and Kris Holmes), used in OS X. San Francisco (2014), the new system font on Apple Watch and other Apple devices from winter 2015, and since 2017 Apple's corporate font. Myriad (Apple's corporate font (until 2017) and used by the iPod photo), not installed on Macs in a user-accessible format. Designed by Robert Slimbach and Carol Twombly. Monospaced Monaco (1984, Susan Kare), bitmap, later converted to TrueType. Still included with Macs, but the default monospace typeface is now Menlo. Menlo (2009, Jim Lyles), based on the open-source font Bitstream Vera. SF Mono (2017, Apple), mono variant of the San Francisco font introduced in 2015. Script and handwritten Venice (1984, Bill Atkinson), bitmap script inspired by chancery cursive. Never converted to TrueType format. Los Angeles (1984, Susan Kare), bitmap casual script font. Never converted to TrueType format. Apple Casual (1993, used on Apple Newton) Apple Chancery (1993, Kris Holmes), a test-bed for contextual alternates in font programming. Still installed on Macs. Miscellaneous Apple Symbols (2003, Unicode symbol/dingbat font) Cairo (1984, Susan Kare), a dingbat font best known for the dogcow in the 0x7A (lowercase Z) position. LastResort (2001, Michael Everson), Mac OS X fallback font. London (1984, Susan Kare), bitmap blackletter. Never converted to TrueType format. San Francisco (1984, Susan Kare), bitmap font in a 'ransom note' style. Never converted to TrueType format. See also List of macOS fonts Fonts on Macintosh Typography of Apple Inc. References Apple typefaces
List of Apple typefaces
[ "Technology" ]
768
[ "Computing-related lists", "Apple Inc. lists" ]
1,592,648
https://en.wikipedia.org/wiki/Chromatoidal%20bodies
Chromatoidal bodies are aggregations of ribosomes found in cysts of some amoebae, including Entamoeba histolytica and Entamoeba coli. They exist in the cytoplasm and are dark-staining. In the early cystic stages of E. histolytica, chromatoidal bodies arise from aggregation of ribosomes forming polycrystalline masses. As the cyst matures, the masses fragment into separate particles and the chromatoidal body disappears. It is thought that chromatoidal body formation is a manifestation of parasite-host adaptive conditions. Ribonucleoprotein is synthesized under favorable conditions, crystallized in the resistant cyst stage, and dispersed in the newly excysted amoebae when the amoeba is able to establish itself in a new host. References Cell biology
Chromatoidal bodies
[ "Biology" ]
182
[ "Cell biology" ]
1,592,660
https://en.wikipedia.org/wiki/R%20Doradus
R Doradus (HD 29712 or P Doradus) is a red giant variable star in the far-southern constellation Dorado, close to the border with Reticulum. Its distance from Earth is . Having a uniform disk diameter of , it is thought to be the extrasolar star with the largest apparent size as viewed from Earth. Variability The visible magnitude of R Doradus varies between 4.8 and 6.3, which means it is usually visible to the naked eye, but in the infrared it is one of the brightest stars in the sky. With a near-infrared J band magnitude of −2.65, only Betelgeuse and Antares at −2.9 and −2.73 (respectively) are brighter. In the infrared K band, it is sometimes the brightest star in the sky, although usually Betelgeuse is brighter. It is classified as a semiregular variable star of type SRb, indicating giants with slow, poorly defined variations, often alternating between periodic and irregular brightness changes. Some studies show it alternating between periods of about 175 and 332 days, and a period of 117.3 days has also been identified. It has been likened to a Mira variable when its variations are relatively regular, although its amplitude of only 1.5 magnitudes is smaller than that of Mira variables. The star was discovered to be variable in 1874 by Benjamin Gould, and received the variable-star designation R Doradus. Angular diameter The angular diameter of R Doradus is easily measured using interferometry. Its uniform disc diameter, the diameter when interpreted as a disc of uniform brightness, when viewed at is . When viewed at and interpreted as a limb-darkened disc, the diameter is . The angular diameter of R Doradus is larger than that of any other measured star other than the Sun. The angular diameter of the next-largest star, Betelgeuse, is around . Properties The Hipparcos parallax of R Doradus is , corresponding to a distance of . The bolometric luminosity of R Doradus, derived from its bolometric flux at a distance of , is . The measured angular diameter, again assuming a distance of gives a radius of . The angular diameter and bolometric flux of R Doradus yield a cool surface effective temperature of . Comparison of its properties with theoretical evolutionary tracks gives an age of between 6 and 14 billion years. R Doradus has lost part of its mass during its evolution, and currently has a mass of either . Its initial mass would be either . Because of the enlarged surface and low mass, R Doradus has a surface gravity of only 0.026% that of Earth. It is on the asymptotic giant branch, having exhausted helium at its core. The radius of means that the diameter of R Doradus is 415 million km (). If R Doradus were placed at the centre of the Solar System, the perihelion of Mars would lie within the star. R Doradus has a projected equatorial rotation velocity of . It is calculated to take to rotate once on its axis. Using ALMA facilities, researchers at Chalmers University recorded the movement of hot gas bubbles on the surface of the star in July and August 2023. Such bubbles, a signature of the convective activity linked to deep nuclear fusion, last about a month and are more than 75 times the size of the Sun. See also List of stars with resolved images Notes References External links Swinburne Astronomy Online; information about R Doradus Variáveis Binoculares The 3μ spectrum of R Doradus observed with the ISO-SWS Dorado M-type giants Mira variables Doradus, P Doradus, R 029712 1492 CD-62 00175 021479 J04364544-6204379 Emission-line stars
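The conversion behind such figures is straightforward: the physical radius follows directly from the angular diameter and the distance. The sketch below only illustrates that relation; the inputs are round placeholder values chosen for demonstration, not the star's published measurements.

```python
import math

MAS_TO_RAD = math.pi / (180.0 * 3600.0 * 1000.0)  # milliarcseconds -> radians
PC_TO_M = 3.0857e16                                # parsec -> metres
R_SUN_M = 6.957e8                                  # solar radius in metres

def radius_in_solar_radii(angular_diameter_mas, distance_pc):
    """Physical radius implied by an angular diameter (mas) at a distance (pc)."""
    theta = angular_diameter_mas * MAS_TO_RAD      # angular diameter in radians
    return 0.5 * theta * distance_pc * PC_TO_M / R_SUN_M

# Placeholder inputs (50 mas at 50 pc), not the star's actual values:
print(round(radius_in_solar_radii(50.0, 50.0)))    # ~269 solar radii
```

Substituting the measured parallax-based distance and limb-darkened diameter into the same relation gives the radius referred to in the Properties section above.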
R Doradus
[ "Astronomy" ]
787
[ "Dorado", "Constellations" ]
1,592,686
https://en.wikipedia.org/wiki/Glycogen%20phosphorylase
Glycogen phosphorylase is one of the phosphorylase enzymes (EC 2.4.1.1). Glycogen phosphorylase catalyzes the rate-limiting step in glycogenolysis in animals by releasing glucose-1-phosphate from the terminal alpha-1,4-glycosidic bond. Glycogen phosphorylase is also studied as a model protein regulated by both reversible phosphorylation and allosteric effects. Mechanism Glycogen phosphorylase breaks up glycogen into glucose subunits: (α-1,4 glycogen chain)n + Pi ⇌ (α-1,4 glycogen chain)n-1 + α-D-glucose-1-phosphate. Glycogen is left with one fewer glucose molecule, and the free glucose molecule is in the form of glucose-1-phosphate. In order to be used for metabolism, it must be converted to glucose-6-phosphate by the enzyme phosphoglucomutase. Although the reaction is reversible in vitro, within the cell the enzyme only works in the forward direction as shown above because the concentration of inorganic phosphate is much higher than that of glucose-1-phosphate. Glycogen phosphorylase can act only on linear chains of glycogen (α1-4 glycosidic linkage). Its work will immediately come to a halt four residues away from an α1-6 branch (such branches are exceedingly common in glycogen). In these situations, the debranching enzyme is necessary, which will straighten out the chain in that area. In addition, the enzyme transferase shifts a block of three glucosyl residues from the outer branch to the other end, and then an α1-6 glucosidase enzyme is required to break the remaining (single glucose) α1-6 residue that remains in the new linear chain. After all this is done, glycogen phosphorylase can continue. The enzyme is specific to α1-4 chains, as the molecule contains a 30-angstrom-long crevice with the same radius as the helix formed by the glycogen chain; this accommodates 4-5 glucosyl residues, but is too narrow for branches. This crevice connects the glycogen storage site to the active, catalytic site. Glycogen phosphorylase has a pyridoxal phosphate (PLP, derived from vitamin B6) at each catalytic site. Pyridoxal phosphate links with basic residues (in this case Lys680) and covalently forms a Schiff base. Once the Schiff base linkage is formed, holding the PLP molecule in the active site, the phosphate group on the PLP readily donates a proton to an inorganic phosphate molecule, allowing the inorganic phosphate to in turn be deprotonated by the oxygen forming the α-1,4 glycosidic linkage. PLP is readily deprotonated because its negative charge is not only stabilized within the phosphate group, but also in the pyridine ring, thus the conjugate base resulting from the deprotonation of PLP is quite stable. The protonated oxygen now represents a good leaving group, and the glycogen chain is separated from the terminal glucose in an SN1 fashion, resulting in the formation of a glucose molecule with a secondary carbocation at the 1 position. Finally, the deprotonated inorganic phosphate acts as a nucleophile and bonds with the carbocation, resulting in the formation of glucose-1-phosphate and a glycogen chain shortened by one glucose molecule. There is also an alternative proposed mechanism involving a positively charged oxygen in a half-chair conformation. Structure The glycogen phosphorylase monomer is a large protein, composed of 842 amino acids with a mass of 97.434 kDa in muscle cells. While the enzyme can exist as an inactive monomer or tetramer, it is biologically active as a dimer of two identical subunits. 
In mammals, the major isozymes of glycogen phosphorylase are found in muscle, liver, and brain. The brain type is predominant in adult brain and embryonic tissues, whereas the liver and muscle types are predominant in adult liver and skeletal muscle, respectively. The glycogen phosphorylase dimer has many regions of biological significance, including catalytic sites, glycogen binding sites, allosteric sites, and a reversibly phosphorylated serine residue. First, the catalytic sites are relatively buried, 15 Å from the surface of the protein and from the subunit interface. This lack of easy access of the catalytic site to the surface is significant in that it makes the protein activity highly susceptible to regulation, as small allosteric effects could greatly increase the relative access of glycogen to the site. Perhaps the most important regulatory site is Ser14, the site of reversible phosphorylation very close to the subunit interface. The structural change associated with phosphorylation, and with the conversion of phosphorylase b to phosphorylase a, is the arrangement of the originally disordered residues 10 to 22 into α helices. This change increases phosphorylase activity up to 25% even in the absence of AMP, and enhances AMP activation further. The allosteric site of AMP binding on muscle isoforms of glycogen phosphorylase is close to the subunit interface, just like Ser14. Binding of AMP at this site, corresponding to a change from the T state of the enzyme to the R state, results in small changes in tertiary structure at the subunit interface leading to large changes in quaternary structure. AMP binding rotates the tower helices (residues 262-278) of the two subunits 50˚ relative to one another through greater organization and intersubunit interactions. This rotation of the tower helices leads to a rotation of the two subunits by 10˚ relative to one another, and more importantly disorders residues 282-286 (the 280s loop) that block access to the catalytic site in the T state but do not in the R state. The final, perhaps most curious site on the glycogen phosphorylase protein is the so-called glycogen storage site. Residues 397-437 form this structure, which allows the protein to covalently bind to the glycogen chain a full 30 Å from the catalytic site. This site is most likely the site at which the enzyme binds to glycogen granules before initiating cleavage of terminal glucose molecules. In fact, 70% of dimeric phosphorylase in the cell exists bound to glycogen granules rather than free-floating. Clinical significance The inhibition of glycogen phosphorylase has been proposed as one method for treating type 2 diabetes. Since glucose production in the liver has been shown to increase in type 2 diabetes patients, inhibiting the release of glucose from the liver's glycogen supplies appears to be a valid approach. The cloning of the human liver glycogen phosphorylase (HLGP) revealed a new allosteric binding site near the subunit interface that is not present in the rabbit muscle glycogen phosphorylase (RMGP) normally used in studies. This site was not sensitive to the same inhibitors as those at the AMP allosteric site, and most success has been had in synthesizing new inhibitors that mimic the structure of glucose, since glucose-6-phosphate is a known inhibitor of HLGP and stabilizes the less active T-state. These glucose derivatives have had some success in inhibiting HLGP, with predicted Ki values as low as 0.016 mM. 
Mutations in the muscle isoform of glycogen phosphorylase (PYGM) are associated with glycogen storage disease type V (GSD V, McArdle's disease). More than 65 mutations in the PYGM gene that lead to McArdle disease have been identified to date. Symptoms of McArdle disease include muscle weakness, myalgia, and lack of endurance, all stemming from low glucose levels in muscle tissue. Mutations in the liver isoform of glycogen phosphorylase (PYGL) are associated with Hers' disease (glycogen storage disease type VI). Hers' disease is often associated with mild symptoms normally limited to hypoglycemia, and is sometimes difficult to diagnose due to residual enzyme activity. The brain isoform of glycogen phosphorylase (PYGB) has been proposed as a biomarker for gastric cancer. Regulation Glycogen phosphorylase is regulated through allosteric control and through phosphorylation. Phosphorylase a and phosphorylase b each exist in two forms: a T (tense) inactive state and an R (relaxed) state. Phosphorylase b is normally in the T state, inactive due to the physiological presence of ATP and glucose 6-phosphate, and phosphorylase a is normally in the R state (active). An isoenzyme of glycogen phosphorylase in the liver is sensitive to glucose concentration, as the liver acts as a glucose exporter. In essence, liver phosphorylase responds to glucose, which causes a rapid transition from the R to the T form, inactivating it; furthermore, liver phosphorylase is insensitive to AMP. Hormones such as epinephrine, insulin and glucagon regulate glycogen phosphorylase using second messenger amplification systems linked to G proteins. Glucagon acts through a G protein-coupled receptor (GPCR) coupled to Gs, which in turn activates adenylate cyclase to increase intracellular concentrations of cAMP. cAMP binds to and activates protein kinase A (PKA). PKA phosphorylates phosphorylase kinase, which in turn phosphorylates glycogen phosphorylase b at Ser14, converting it into the active glycogen phosphorylase a. In the liver, glucagon also activates another GPCR that triggers a different cascade, resulting in the activation of phospholipase C (PLC). PLC indirectly causes the release of calcium from the hepatocytes' endoplasmic reticulum into the cytosol. The released calcium binds to the calmodulin subunit and activates glycogen phosphorylase kinase. Glycogen phosphorylase kinase activates glycogen phosphorylase in the same manner mentioned previously. Glycogen phosphorylase b is not always inactive in muscle, as it can be activated allosterically by AMP. An increase in AMP concentration, which occurs during strenuous exercise, signals energy demand. AMP activates glycogen phosphorylase b by changing its conformation from a tense to a relaxed form. This relaxed form has similar enzymatic properties to the phosphorylated enzyme. An increase in ATP concentration opposes this activation by displacing AMP from the nucleotide binding site, indicating sufficient energy stores. After a meal, insulin is released, signaling glucose availability in the blood. Insulin indirectly activates protein phosphatase 1 (PP1) and phosphodiesterase via a signal transduction cascade. PP1 dephosphorylates glycogen phosphorylase a, reforming the inactive glycogen phosphorylase b. The phosphodiesterase converts cAMP to AMP. Together, they decrease the concentration of cAMP and inhibit PKA. 
As a result, PKA can no longer initiate the phosphorylation cascade that ends with formation of (active) glycogen phosphorylase a. Overall, insulin signaling decreases glycogenolysis to preserve glycogen stores in the cell and triggers glycogenesis. Historical significance Glycogen phosphorylase was the first allosteric enzyme to be discovered. It was isolated and its activity characterized in detail by Carl F. Cori, Gerhard Schmidt and Gerty T. Cori. Arda Green and Gerty Cori crystallized it for the first time in 1943 and illustrated that glycogen phosphorylase existed in either the a or b form depending on its phosphorylation state, as well as in the R or T states based on the presence of AMP. See also AMP deaminase deficiency (MADD) Glycogenolysis McArdle disease (GSD-V) Metabolic myopathies Purine nucleotide cycle § Pathology References Further reading External links GeneReviews/NCBI/NIH/UW entry on Glycogen Storage Disease Type VI - Hers disease Carbohydrate metabolism EC 2.4.1
Glycogen phosphorylase
[ "Chemistry" ]
2,803
[ "Carbohydrate metabolism", "Carbohydrate chemistry", "Metabolism" ]
1,592,693
https://en.wikipedia.org/wiki/Koschevnikov%20gland
The Koschevnikov gland is a gland of the honeybee located near the sting shaft. The gland produces an alarm pheromone that is released when a bee stings. The pheromone contains more than 40 different compounds, including pentyl acetate, butyl acetate, 1-hexanol, n-butanol, 1-octanol, hexyl acetate, octyl acetate, and 2-nonanol. These components have a low molar mass and evaporate quickly. This collection of compounds is the least specific of all pheromones. The alarm pheromone is released when a honey bee stings another animal, attracting other bees to attack as well. The release of the alarm pheromone may entice more bees to sting at the same location. Smoking the bees can reduce the pheromone's efficacy. References Bees Insect anatomy Arthropod glands
Koschevnikov gland
[ "Chemistry", "Biology" ]
199
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
1,592,806
https://en.wikipedia.org/wiki/Ion%20beam
An ion beam is a beam of ions, a type of charged particle beam. Ion beams have many uses in electronics manufacturing (principally ion implantation) and other industries. There are many ion beam sources, some derived from the mercury vapor thrusters developed by NASA in the 1960s. The most widely used ion beams are of singly charged ions. Units Ion current density is typically measured in mA/cm2, and ion energy in electronvolts (eV). The use of eV is convenient for converting between voltage and energy, especially when dealing with singly charged ion beams. Broad-beam ion sources Most commercial applications use two popular types of ion source, gridded and gridless, which differ in current and power characteristics and the ability to control ion trajectories. In both cases electrons are needed to generate an ion beam. The most common types of electron emitter are hot filament and hollow cathode. Gridded ion source In a gridded ion source, a DC or RF discharge is used to generate ions, which are then accelerated and decimated using grids and apertures. Here, the DC discharge current or the RF discharge power is used to control the beam current. The ion current density that can be accelerated using a gridded ion source is limited by the space charge effect, which is described by Child's law: Jmax = (4ε0/9) √(2e/mi) V^(3/2) / d^2, where V is the voltage between the grids, d is the distance between the grids, and mi is the ion mass (ε0 is the vacuum permittivity and e the elementary charge, for singly charged ions). The grids are spaced as closely as possible to increase the current density, typically . The ions used have a significant impact on the maximum ion beam current, since Jmax ∝ 1/√mi. All else being equal, the maximum ion beam current with krypton is only 69% of the maximum ion current of an argon beam; with xenon the ratio drops to 55%. Gridless ion sources In a gridless ion source, ions are generated by a flow of electrons, without grids. The most common gridless ion source is the end-Hall ion source, with which the discharge current and the gas flow are used to control the beam current. Applications Material modification and analysis Ion beams can be used for material modification (e.g. by sputtering or ion beam etching) and for ion beam analysis. Ion beam etching, or sputtering, is a technique conceptually similar to sandblasting, but using individual atoms in an ion beam to ablate a target. Reactive ion etching is an important extension that uses chemical reactivity to enhance the physical sputtering effect. In a typical use in semiconductor manufacturing, a mask can selectively expose a layer of photoresist on a substrate made of a semiconductor material, such as a silicon dioxide or gallium arsenide wafer. The wafer is developed, and for a positive photoresist, the exposed portions are removed in a chemical process. The result is a pattern left on the surface areas of the wafer that had been masked from exposure. The wafer is then placed in a vacuum chamber, and exposed to the ion beam. The impact of the ions erodes the target, abrading away the areas not covered by the photoresist. Focused ion beam (FIB) instruments have numerous applications for characterization of thin-film devices. Using a focused, high-brightness ion beam in a scanned raster pattern, material is removed (sputtered) in precise rectilinear patterns revealing a two-dimensional, or stratigraphic, profile of a solid material. The most common application is to verify the integrity of the gate oxide layer in a CMOS transistor. A single excavation site exposes a cross section for analysis using a scanning electron microscope. 
Dual excavations on either side of a thin lamella bridge are utilized for preparing transmission electron microscope samples. Another common use of FIB instruments is for design verification and/or failure analysis of semiconductor devices. Design verification combines selective material removal with gas-assisted material deposition of conductive, dielectric, or insulating materials. Engineering prototype devices may be modified using the ion beam in combination with gas-assisted material deposition in order to rewire an integrated circuit's conductive pathways. The techniques are effectively used to verify the correlation between the CAD design and the actual functional prototype circuit, thereby avoiding the creation of a new mask for the purpose of testing design changes. Ion beams are also used for analytical purposes in materials science. For example, sputtering techniques can be used for surface analysis or depth profiling by performing secondary ion mass spectrometry. It is also possible to gain information from the spectroscopy of transmitted or backscattered primary ions, e.g. depth profiles can be obtained from Rutherford backscattering (RBS) spectra. In contrast to secondary ion mass spectrometry, scattering-based techniques like RBS are often less destructive to the sample. Biology In radiobiology, a broad or focused ion beam is used to study mechanisms of inter- and intracellular communication, signal transduction and DNA damage and repair. Medicine Ion beams are also used in particle therapy, most often in the treatment of cancer. Space applications Ion beams produced by ion and plasma thrusters on board a spacecraft can be used to transmit a force to a nearby object (e.g. another spacecraft, an asteroid, etc.) that is irradiated by the beam. This propulsion technique, named Ion Beam Shepherd, has been shown to be effective in the area of active space debris removal as well as asteroid deflection. High-energy ion beams High-energy ion beams produced by particle accelerators are used in atomic physics, nuclear physics and particle physics. As weapon Ion beams can theoretically be used to make a weapon, but this has not been demonstrated. Electron beam weapons were tested by the U.S. Navy in the early 20th century, but the hose instability effect prevents them from being accurate at a distance of over approximately 30 inches. See also Ion source Ion thruster Ion wind References External links Stopping parameters of ion beams in solids calculated by MELF-GOS model ISOLDE – Facility dedicated to the production of a large variety of radioactive ion beams located at CERN Plasma technology and applications Semiconductor device fabrication Semiconductor analysis Thin film deposition Ions Accelerator physics
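As a rough numerical sketch of the Child-law space-charge limit described under gridded ion sources above, the snippet below evaluates the formula for argon, krypton and xenon. The grid voltage and spacing are arbitrary illustrative values, not taken from any particular source design; the point is only the V^(3/2)/d^2 scaling and the 1/√m dependence behind the quoted 69% and 55% ratios.

```python
import math

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg

def child_law_current_density(voltage_v, gap_m, ion_mass_kg, charge_c=E_CHARGE):
    """Space-charge-limited ion current density (A/m^2) between planar grids."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * charge_c / ion_mass_kg) \
        * voltage_v ** 1.5 / gap_m ** 2

# Illustrative operating point: 1 kV across a 0.5 mm grid gap, singly charged ions.
j_argon = child_law_current_density(1000.0, 0.5e-3, 39.948 * AMU)
for name, mass_amu in [("argon", 39.948), ("krypton", 83.798), ("xenon", 131.293)]:
    j = child_law_current_density(1000.0, 0.5e-3, mass_amu * AMU)
    # 1 A/m^2 = 0.1 mA/cm^2
    print(f"{name}: {j * 0.1:6.1f} mA/cm^2  ({j / j_argon:.0%} of argon)")
```

Because the ion mass enters only through the 1/√m factor, the krypton and xenon results come out at 69% and 55% of the argon value regardless of the voltage and gap chosen.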
Ion beam
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
1,262
[ "Matter", "Applied and interdisciplinary physics", "Thin film deposition", "Plasma physics", "Plasma technology and applications", "Microtechnology", "Coatings", "Thin films", "Semiconductor device fabrication", "Experimental physics", "Planes (geometry)", "Accelerator physics", "Solid state...
1,592,850
https://en.wikipedia.org/wiki/Ion%20beam%20deposition
Ion beam deposition (IBD) is a process of applying materials to a target through the application of an ion beam. An ion beam deposition apparatus typically consists of an ion source, ion optics, and the deposition target. Optionally a mass analyzer can be incorporated. In the ion source, source materials in the form of a gas, an evaporated solid, or a solution (liquid) are ionized. For atomic IBD, electron ionization, field ionization (Penning ion source) or cathodic arc sources are employed. Cathodic arc sources are used particularly for carbon ion deposition. Molecular ion beam deposition employs electrospray ionization or MALDI sources. The ions are then accelerated, focused or deflected using high voltages or magnetic fields. Optional deceleration at the substrate can be employed to define the deposition energy. This energy usually ranges from a few eV up to a few keV. At low energy, molecular ion beams are deposited intact (ion soft landing), while at high deposition energy molecular ions fragment and atomic ions can penetrate further into the material, a process known as ion implantation. Ion optics (such as radio frequency quadrupoles) can be mass selective. In IBD they are used to select a single ion species, or a range of species, for deposition in order to avoid contamination. For organic materials in particular, this process is often monitored by a mass spectrometer. The ion beam current, which is a quantitative measure of the amount of deposited material, can be monitored during the deposition process. Switching of the selected mass range can be used to define a stoichiometry. The main disadvantages of ion beam sputtering are its small target area, low deposition rate, and difficulty in depositing large-area films with uniform thickness. Additionally, the equipment is complex and has high operating costs. These limitations make ion beam sputtering less efficient for large-scale applications. See also Cathodic arc deposition Sputter deposition Ion beam assisted deposition Ion beam induced deposition References Thin film deposition
Ion beam deposition
[ "Chemistry", "Materials_science", "Mathematics" ]
416
[ "Materials science stubs", "Thin film deposition", "Coatings", "Thin films", "Nanotechnology stubs", "Nanotechnology", "Planes (geometry)", "Solid state engineering" ]
1,592,887
https://en.wikipedia.org/wiki/Treemapping
In information visualization and computing, treemapping is a method for displaying hierarchical data using nested figures, usually rectangles. Treemaps display hierarchical (tree-structured) data as a set of nested rectangles. Each branch of the tree is given a rectangle, which is then tiled with smaller rectangles representing sub-branches. A leaf node's rectangle has an area proportional to a specified dimension of the data. Often the leaf nodes are colored to show a separate dimension of the data. When the color and size dimensions are correlated in some way with the tree structure, one can often easily see patterns that would be difficult to spot in other ways, such as whether a certain color is particularly relevant. A second advantage of treemaps is that, by construction, they make efficient use of space. As a result, they can legibly display thousands of items on the screen simultaneously. Tiling algorithms To create a treemap, one must define a tiling algorithm, that is, a way to divide a region into sub-regions of specified areas. Ideally, a treemap algorithm would create regions that satisfy the following criteria: A small aspect ratio—ideally close to one. Regions with a small aspect ratio (i.e., fat objects) are easier to perceive. Preserve some sense of the ordering in the input data (ordered). Change to reflect changes in the underlying data (high stability). These properties have an inverse relationship. As the aspect ratio is optimized, the order of placement becomes less predictable. As the order becomes more stable, the aspect ratio is degraded. Rectangular treemaps To date, fifteen primary rectangular treemap algorithms have been developed. Convex treemaps Rectangular treemaps have the disadvantage that their aspect ratio might be arbitrarily high in the worst case. As a simple example, if the tree root has only two children, one with weight and one with weight , then the aspect ratio of the smaller child will be , which can be arbitrarily high. To cope with this problem, several algorithms have been proposed that use regions that are general convex polygons, not necessarily rectangular. Convex treemaps were developed in several steps, each step improving the upper bound on the aspect ratio. The bounds are given as a function of the total number of nodes in the tree and the total depth of the tree. Onak and Sidiropoulos proved an upper bound of . De-Berg and Onak and Sidiropoulos improved the upper bound to , and proved a lower bound of . De-Berg and Speckmann and van-der-Weele improved the upper bound to , matching the theoretical lower bound. (For the special case where the depth is 1, they present an algorithm that uses only four classes of 45-degree-polygons (rectangles, right-angled triangles, right-angled trapezoids and 45-degree pentagons), and guarantees an aspect ratio of at most 34/7.) The latter two algorithms operate in two steps (greatly simplified for clarity): The original tree is converted to a binary tree: each node with more than two children is replaced by a sub-tree in which each node has exactly two children. Each region representing a node (starting from the root) is divided in two, using a line that keeps the angles between edges as large as possible. It is possible to prove that, if all edges of a convex polygon are separated by an angle of at least , then its aspect ratio is . It is possible to ensure that, in a tree of depth , the angle is divided by a factor of at most , hence the aspect ratio guarantee. 
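To make the tiling idea concrete, here is a minimal sketch of the simple slice-and-dice rule (discussed further in the History section): the splitting direction alternates with tree depth, and each child receives a slice whose length is proportional to its weight. This is an illustration only, not an implementation of any specific published algorithm.

```python
def slice_and_dice(node, x, y, w, h, depth=0):
    """Lay out a weighted tree inside the rectangle (x, y, w, h).

    A node is a (weight, children) pair; a leaf has an empty child list.
    The splitting direction alternates with depth (the slice-and-dice rule).
    Returns a list of (leaf, rectangle) pairs.
    """
    weight, children = node
    if not children:
        return [(node, (x, y, w, h))]
    total = sum(child[0] for child in children)
    rects, offset = [], 0.0
    for child in children:
        frac = child[0] / total
        if depth % 2 == 0:   # split along the x axis
            rects += slice_and_dice(child, x + offset * w, y, frac * w, h, depth + 1)
        else:                # split along the y axis
            rects += slice_and_dice(child, x, y + offset * h, w, frac * h, depth + 1)
        offset += frac
    return rects

# A root of weight 10 with three children; the middle child has two sub-children.
tree = (10, [(4, []), (4, [(3, []), (1, [])]), (2, [])])
for leaf, rect in slice_and_dice(tree, 0.0, 0.0, 100.0, 100.0):
    print(leaf[0], rect)
```

Each leaf's area is proportional to its weight. The long, skinny rectangles this rule tends to produce for skewed weights are exactly the drawback that motivated the squarified variants mentioned in the History section.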
Orthoconvex treemaps In convex treemaps, the aspect ratio cannot be constant - it grows with the depth of the tree. To attain a constant aspect-ratio, Orthoconvex treemaps can be used. There, all regions are orthoconvex rectilinear polygons with aspect ratio at most 64; and the leaves are either rectangles with aspect ratio at most 8, or L-shapes or S-shapes with aspect ratio at most 32. For the special case where the depth is 1, they present an algorithm that uses only rectangles and L-shapes, and the aspect ratio is at most ; the internal nodes use only rectangles with aspect ratio at most . Other treemaps Voronoi Treemaps based on Voronoi diagram calculations. The algorithm is iterative and does not give any upper bound on the aspect ratio. Jigsaw Treemaps based on the geometry of space-filling curves. They assume that the weights are integers and that their sum is a square number. The regions of the map are rectilinear polygons and highly non-ortho-convex. Their aspect ratio is guaranteed to be at most 4. GosperMaps based on the geometry of Gosper curves. It is ordered and stable, but has a very high aspect ratio. History Area-based visualizations have existed for decades. For example, mosaic plots (also known as Marimekko diagrams) use rectangular tilings to show joint distributions (i.e., most commonly they are essentially stacked column plots where the columns are of different widths). The main distinguishing feature of a treemap, however, is the recursive construction that allows it to be extended to hierarchical data with any number of levels. This idea was invented by professor Ben Shneiderman at the University of Maryland Human – Computer Interaction Lab in the early 1990s. Shneiderman and his collaborators then deepened the idea by introducing a variety of interactive techniques for filtering and adjusting treemaps. These early treemaps all used the simple "slice-and-dice" tiling algorithm. Despite many desirable properties (it is stable, preserves ordering, and is easy to implement), the slice-and-dice method often produces tilings with many long, skinny rectangles. In 1994 Mountaz Hascoet and Michel Beaudouin-Lafon invented a "squarifying" algorithm, later popularized by Jarke van Wijk, that created tilings whose rectangles were closer to square. In 1999 Martin Wattenberg used a variation of the "squarifying" algorithm that he called "pivot and slice" to create the first Web-based treemap, the SmartMoney Map of the Market, which displayed data on hundreds of companies in the U.S. stock market. Following its launch, treemaps enjoyed a surge of interest, particularly in financial contexts. A third wave of treemap innovation came around 2004, after Marcos Weskamp created the Newsmap, a treemap that displayed news headlines. This example of a non-analytical treemap inspired many imitators, and introduced treemaps to a new, broad audience. In recent years, treemaps have made their way into the mainstream media, including usage by the New York Times. The Treemap Art Project produced 12 framed images for the National Academies (United States), shown at the Every AlgoRiThm has ART in It exhibit in Washington, DC and another set for the collection of Museum of Modern Art in New York. See also Disk space analyzer Data and information visualization Marimekko Chart, a similar concept with one level of explicit hierarchy. 
References External links Treemap Art Project produced exhibit for the National Academies in Washington, DC "Discovering Business Intelligence Using Treemap Visualizations", Ben Shneiderman, April 11, 2006 Comprehensive survey and bibliography of Tree Visualization techniques History of Treemaps by Ben Shneiderman. Hypermedia exploration with interactive dynamic maps Paper by Zizi and Beaudouin-Lafon introducing the squarified treemap layout algorithm (named "improved treemap layout" at the time). Indiana University description Live interactive treemap based on crowd-sourced discounted deals from Flytail Group Treemap sample in English from The Hive Group Several treemap examples made with Macrofocus TreeMap Visualizations using dynamic treemaps and online treemapping software by drasticdata User interface techniques Infographics Statistical charts and diagrams Trees (data structures) Visualization (graphics) Rectangular subdivisions
Treemapping
[ "Physics" ]
1,714
[ "Tessellation", "Rectangular subdivisions", "Symmetry" ]
1,592,956
https://en.wikipedia.org/wiki/Millipede%20memory
Millipede memory is a form of non-volatile computer memory. It promised a data density of more than 1 terabit per square inch (1 gigabit per square millimeter), which is about the limit of the perpendicular recording hard drives. Millipede storage technology was pursued as a potential replacement for magnetic recording in hard drives and a means of reducing the physical size of the technology to that of flash media. IBM demonstrated a prototype millipede storage device at CeBIT 2005, and was trying to make the technology commercially available by the end of 2007. However, because of concurrent advances in competing storage technologies, no commercial product has been made available since then. Technology Basic concept The main memory of modern computers is constructed from one of a number of DRAM-related devices. DRAM basically consists of a series of capacitors, which store data in terms of the presence or absence of electrical charge. Each capacitor and its associated control circuitry, referred to as a cell, holds one bit, and multiple bits can be read or written in large blocks at the same time. DRAM is volatile — data is lost when power is removed. In contrast, hard drives store data on a disk that is covered with a magnetic material; data is represented by this material being locally magnetized. Reading and writing are accomplished by a single head, which waits for the requested memory location to pass under the head while the disk spins. As a result, a hard drive's performance is limited by the mechanical speed of the motor, and it is generally hundreds of thousands of times slower than DRAM. However, since the "cells" in a hard drive are much smaller, the storage density for hard drives is much higher than DRAM. Hard drives are non-volatile — data is retained even after power is removed. Millipede storage attempts to combine features of both. Like a hard drive, millipede both stores data in a medium and accesses the data by moving the medium under the head. Also similar to hard drives, millipede's physical medium stores a bit in a small area, leading to high storage densities. However, millipede uses many nanoscopic heads that can read and write in parallel, thereby increasing the amount of data read at a given time. Mechanically, millipede uses numerous atomic force probes, each of which is responsible for reading and writing a large number of bits associated with it. These bits are stored as a pit, or the absence of one, in the surface of a thermo-active polymer, which is deposited as a thin film on a carrier known as the sled. Any one probe can only read or write a fairly small area of the sled available to it, known as a storage field. Normally the sled is moved so that the selected bits are positioned under the probe using electromechanical actuators. These actuators are similar to those that position the read/write head in a typical hard drive, however, the actual distance moved is tiny in comparison. The sled is moved in a scanning pattern to bring the requested bits under the probe, a process known as x/y scan. The amount of memory serviced by any one field/probe pair is fairly small, but so is its physical size. Thus, many such field/probe pairs are used to make up a memory device, and data reads and writes can be spread across many fields in parallel, increasing the throughput and improving the access times. For instance, a single 32-bit value would normally be written as a set of single bits sent to 32 different fields. 
In the initial experimental devices, the probes were mounted in a 32x32 grid for a total of 1,024 probes. Given that this layout looked like the legs of a millipede (the animal), the name stuck. The design of the cantilever array involves making numerous mechanical cantilevers, on each of which a probe is mounted. All the cantilevers are made entirely out of silicon, using surface micromachining at the wafer surface. Regarding the creation of indentations, or pits: non-crosslinked polymers retain a low glass transition temperature, around 120 °C for PMMA, and if the probe tip is heated to above this temperature it leaves a small indentation. Indentations are made at 3 nm lateral resolution. If the probe is heated immediately next to an indentation, the polymer will re-melt and fill in the indentation, erasing it (see also: thermo-mechanical scanning probe lithography). After writing, the probe tip can be used to read the indentations. If each indentation is treated as one bit, then a storage density of 0.9 Tb/in2 could theoretically be achieved. Reading and writing data Each probe in the cantilever array stores and reads data thermo-mechanically, handling one bit at a time. To accomplish a read, the probe tip is heated to around 300 °C and moved in proximity to the data sled. If the probe is located over a pit, the cantilever will push it into the hole, increasing the surface area in contact with the sled, and in turn increasing the cooling as heat leaks into the sled from the probe. In the case where there is no pit at that location, only the very tip of the probe remains in contact with the sled, and the heat leaks away more slowly. The electrical resistance of the probe is a function of its temperature, and it rises with an increase in temperature. Thus when the probe drops into a pit and cools, this registers as a drop in resistance. A low resistance is translated to a "1" bit, and a high resistance to a "0" bit. While reading an entire storage field, the tip is dragged over the entire surface and the resistance changes are constantly monitored. To write a bit, the tip of the probe is heated to a temperature above the glass transition temperature of the polymer used to manufacture the data sled, which is generally made of acrylic glass. In this case the tip is heated to around 400 °C. To write a "1", the polymer in proximity to the tip is softened, and then the tip is gently touched to it, causing a dent. To erase the bit and return it to the zero state, the tip is instead pulled up from the surface, allowing surface tension to pull the surface flat again. Older experimental systems used a variety of erasure techniques that were generally more time-consuming and less successful. These older systems offered around 100,000 erases, but the available references do not contain enough information to say if this has been improved with the newer techniques. As one might expect, the need to heat the probes requires a fairly large amount of power for general operation. However, the exact amount is dependent on the speed at which data is being accessed; at slower rates the cooling during read is smaller, as is the number of times the probe has to be heated to a higher temperature to write. When operated at data rates of a few megabits per second, Millipede is expected to consume about 100 milliwatts, which is in the range of flash memory technology and considerably below hard drives. 
However, one of the main advantages of the Millipede design is that it is highly parallel, allowing it to run at much higher aggregate speeds, into the GB/s range. At these sorts of speeds one might expect power requirements more closely matching current hard drives, and indeed, data transfer speed is limited to the kilobits-per-second range for an individual probe, which amounts to a few megabits per second for an entire array. Experiments done at IBM's Almaden Research Center showed that individual tips could support data rates as high as 1-2 megabits per second, potentially offering aggregate speeds in the GB/s range. Applications Millipede memory was proposed as a form of non-volatile computer memory that was intended to compete with flash memory in terms of data storage, reading and writing speed, and physical size of the technology. However, other technologies have since surpassed it, and thus it does not appear to be a technology currently being pursued. History First devices The earliest generation millipede devices used probes 10 nanometers in diameter and 70 nanometers in length, producing pits about 40 nm in diameter on fields 92 μm x 92 μm. With the probes arranged in a 32 x 32 grid, the resulting 3 mm x 3 mm chip stores 500 megabits of data, or 62.5 MB, resulting in an areal density (the number of bits per square inch) on the order of 200 Gbit/in². IBM initially demonstrated this device in 2003, planning to introduce it commercially in 2005. By that point hard drives were approaching 150 Gbit/in², and have since surpassed that figure. Proposed commercial product Devices demonstrated at the CeBIT Expo in 2005 improved on the basic design, using a 64 x 64 cantilever chip with a 7 mm x 7 mm data sled, boosting the data storage density to 800 Gbit/in² using smaller pits. It appears the pit size can scale to about 10 nm, resulting in a theoretical areal density just over 1 Tbit/in². IBM planned to introduce devices based on this sort of density in 2007. For comparison, as of late 2011, laptop hard drives were shipping with a density of 636 Gbit/in², and it is expected that heat-assisted magnetic recording and patterned media together could support densities of 10 Tbit/in². Flash reached almost 250 Gbit/in² in early 2010. Current development As of 2015, because of concurrent advances in competing storage technologies, no commercial product had been made available. See also Nanoelectromechanical systems Nanotechnology Nanolithography Thermal scanning probe lithography Punched card References External links IBM storage devices Non-volatile memory Nanotechnology Scanning probe microscopy
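The areal-density figures quoted above follow from simple geometry: for bits laid out on a square grid, density is the inverse square of the bit pitch. The sketch below illustrates this; the pitches used are hypothetical round numbers (the article quotes pit diameters, not centre-to-centre pitches), so the outputs are only indicative of the orders of magnitude discussed.

```python
def areal_density_gbit_per_sq_inch(bit_pitch_nm):
    """Gbit per square inch for bits on a square grid with the given pitch (nm)."""
    bits_per_m2 = 1.0 / (bit_pitch_nm * 1e-9) ** 2
    m2_per_sq_inch = 0.0254 ** 2
    return bits_per_m2 * m2_per_sq_inch / 1e9

# Hypothetical pitches, chosen only to bracket the densities mentioned in the text.
for pitch_nm in (57, 40, 25):
    print(f"{pitch_nm} nm pitch -> {areal_density_gbit_per_sq_inch(pitch_nm):7.0f} Gbit/in^2")
```

A pitch of roughly 57 nm comes out near the 200 Gbit/in² of the first devices, while a pitch around 25 nm corresponds to the ~1 Tbit/in² regime mentioned above.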
Millipede memory
[ "Chemistry", "Materials_science", "Engineering" ]
2,035
[ "Nanotechnology", "Materials science", "Scanning probe microscopy", "Microscopy" ]
1,593,142
https://en.wikipedia.org/wiki/List%20of%20Usenet%20newsreaders
Usenet is a worldwide, distributed discussion system that uses the Network News Transfer Protocol (NNTP). Programs called newsreaders are used to read and post messages (called articles or posts, and collectively termed news) to one or more newsgroups. Users must have access to a news server to use a newsreader. This is a list of such newsreaders. Types of clients Text newsreader – designed primarily for reading/posting text posts; unable to download binary attachments Traditional newsreader – a newsreader with text support that can also handle binary attachments, though sometimes less efficiently than more specialized clients Binary grabber/plucker – designed specifically for easy and efficient downloading of multi-part binary post attachments; limited or nonexistent reading/posting ability. These generally offer multi-server and multi-connection support. Most now support NZBs, and several either support or plan to support automatic Par2 processing. Some additionally support video and audio streaming. NZB downloader – binary grabber client without header support – cannot browse groups or read/post text messages; can only load 3rd-party NZBs to download binary post attachments. Some incorporate an interface for accessing selected NZB search websites. Binary posting client – designed specifically and exclusively for posting multi-part binary files Combination client – Jack-of-all-trades supporting text reading/posting, as well as multi-segment binary downloading and automatic Par2 processing Web-Based Client - Client designed for access through a web browser and does not require any additional software to access Usenet. Active Commercial software BinTube Forté Agent NewsBin NewsLeecher Novell GroupWise Postbox Turnpike Usenet Explorer Freeware GrabIt Free/Open-source software Claws Mail is a GTK+-based email and news client for Linux, BSD, Solaris, and Windows. GNOME Evolution Gnus, is an email and news client, and feed reader for GNU Emacs. Mozilla Thunderbird is a free and open-source cross-platform email client, news client, RSS and chat client developed by the Mozilla Foundation. Pan a full-featured text and binary NNTP and Usenet client for Linux, FreeBSD, NetBSD, OpenBSD, OpenSolaris, and Windows. SeaMonkey Mail & Newsgroups Sylpheed X Python Newsreader Text-based Alpine Gnus (Emacs based) Line Mode Browser Lynx (has limited Usenet support) Mutt (3rd party patches) rn Slrn tin Web-based Easynews Narkive Nemo Newsgrouper novaBBS See Web-based Usenet for details. Discontinued Commercial software Lotus Notes Netscape Communicator (superseded by Mozilla) Windows Mail – replaced Outlook Express for Windows Vista – terminated by Windows 7 Windows Live Mail – replaced Outlook Express for Windows XP; optional for Windows XP, Windows Vista, and Windows 7 Freeware Opera Mail Xnews – MS Windows MT NewsWatcher – Mac OS X Universal Binary Free/Open Source Arachne (with aranews.apm package) Arena Argo Beonex Communicator KNode (may be embedded in Kontact) Mozilla Mail & Newsgroups (renamed to SeaMonkey) Spotnet Shareware Unison – Mac OS X Text-based Agora (email server) Pine Web-based Google Groups – discontinued on February 22, 2024 See also Comparison of Usenet newsreaders List of newsgroups References External links Usenet newsreaders List
List of Usenet newsreaders
[ "Technology" ]
736
[ "Computing-related lists", "Lists of software" ]
1,593,173
https://en.wikipedia.org/wiki/Eugene%20Aserinsky
Eugene Aserinsky (May 6, 1921 – July 22, 1998), a pioneer in sleep research, was a graduate student at the University of Chicago in 1953 when he discovered REM sleep. He was the son of a dentist of Russian–Jewish descent. He made the discovery after hours spent studying the eyelids of sleeping subjects. While the phenomenon initially held more interest for a fellow PhD student, William Charles Dement, both Aserinsky and their PhD adviser, Nathaniel Kleitman, went on to demonstrate that this "rapid-eye movement" was correlated with dreaming and a general increase in brain activity. Aserinsky and Kleitman pioneered electroencephalograph-based procedures that have since been used with thousands of volunteers. Because of these discoveries, Aserinsky and Kleitman are generally considered the founders of modern sleep research. Eugene Aserinsky died on July 22, 1998, when his car hit a tree north of San Diego. An autopsy was inconclusive about the cause of the accident, but raised the possibility that it had resulted from him having fallen asleep at the wheel. He was 77 and lived in Escondido, California. References American physiologists 1921 births 1998 deaths Sleep researchers Oneirologists University of Chicago alumni 20th-century American Jews 20th-century American physicians
Eugene Aserinsky
[ "Biology" ]
274
[ "Sleep researchers", "Behavior", "Sleep" ]
1,593,265
https://en.wikipedia.org/wiki/Babcock%20model
In solar physics, the Babcock model and its variants describe a mechanism with which they attempt to explain magnetic and sunspot patterns observed on the Sun. It is named after Horace W. Babcock. History The modern understanding of sunspots starts with George Ellery Hale, who linked magnetic fields and sunspots. Hale suggested that the sunspot cycle period is 22 years, covering two polar reversals of the solar magnetic dipole field. Horace W. Babcock proposed in 1961 a qualitative model for solar dynamics. On the largest scale, the Sun supports an oscillatory magnetic field, with a quasi-steady periodicity of 22 years. This oscillation is known as the Babcock-Leighton dynamo cycle, proposed by Robert B. Leighton, amounting to the oscillatory exchange of energy between poloidal and toroidal solar magnetic field ingredients. Babcock-Leighton dynamo cycle A half-dynamo-cycle corresponds to a single sunspot solar cycle. At a solar maximum, the external poloidal dipolar magnetic field is near its dynamo-cycle minimum strength, but an internal toroidal quadrupolar field, generated through differential rotation, is near its maximum strength. At this point in the dynamo cycle, buoyant upwelling within the convective zone forces the emergence of a toroidal magnetic field through the photosphere, giving rise to patches of concentrated magnetic field corresponding to sunspots. During the solar cycle’s declining phase, energy shifts from the internal toroidal magnetic field to the external poloidal field, and sunspots diminish in number. At a solar-cycle minimum, the toroidal field is, correspondingly, at minimum strength, sunspots are few in number, and the poloidal field is at its maximum strength. With the rise of the next 11-year sunspot cycle, magnetic energy shifts back from the poloidal to the toroidal field, but with a polarity that is opposite to the previous cycle. The process carries on continuously, and in an idealized, simplified scenario, each 11-year sunspot cycle corresponds to a change in the overall polarity of the Sun's large-scale magnetic field. References Stellar phenomena Solar phenomena
Babcock model
[ "Physics" ]
458
[ "Physical phenomena", "Stellar phenomena", "Solar phenomena" ]
1,593,644
https://en.wikipedia.org/wiki/Thermal%20wind
In atmospheric science, the thermal wind is the vector difference between the geostrophic wind at upper altitudes minus that at lower altitudes in the atmosphere. It is the hypothetical vertical wind shear that would exist if the winds obey geostrophic balance in the horizontal, while pressure obeys hydrostatic balance in the vertical. The combination of these two force balances is called thermal wind balance, a term generalizable also to more complicated horizontal flow balances such as gradient wind balance. Since the geostrophic wind at a given pressure level flows along geopotential height contours on a map, and the geopotential thickness of a pressure layer is proportional to virtual temperature, it follows that the thermal wind flows along thickness or temperature contours. For instance, the thermal wind associated with pole-to-equator temperature gradients is the primary physical explanation for the jet stream in the upper half of the troposphere, which is the atmospheric layer extending from the surface of the planet up to altitudes of about 12–15 km. Mathematically, the thermal wind relation defines a vertical wind shear – a variation in wind speed or direction with height. The wind shear in this case is a function of a horizontal temperature gradient, which is a variation in temperature over some horizontal distance. Also called baroclinic flow, the thermal wind varies with height in proportion to the horizontal temperature gradient. The thermal wind relation results from hydrostatic balance and geostrophic balance in the presence of a temperature gradient along constant pressure surfaces, or isobars. The term thermal wind was originally proposed by British meteorologist Ernest Gold. It is often considered a misnomer, since it really describes the change in wind with height, rather than the wind itself. However, one can view the thermal wind as a geostrophic wind that varies with height, so that the term wind seems appropriate. In the early years of meteorology, when data was scarce, the wind field could be estimated using the thermal wind relation and knowledge of a surface wind speed and direction as well as thermodynamic soundings aloft. In this way, the thermal wind relation acts to define the wind itself, rather than just its shear. Many authors retain the thermal wind moniker, even though it describes a wind gradient, sometimes offering a clarification to that effect. Description Physical explanation The thermal wind is the change in the amplitude or sign of the geostrophic wind due to a horizontal temperature gradient. The geostrophic wind is an idealized wind that results from a balance of forces along a horizontal dimension. Whenever the Earth's rotation plays a dominant role in fluid dynamics, as in the mid-latitudes, a balance between the Coriolis force and the pressure-gradient force develops. Intuitively, a horizontal difference in pressure pushes air across that difference in a similar way that the horizontal difference in the height of a hill causes objects to roll downhill. However, the Coriolis force intervenes and nudges the air towards the right (in the northern hemisphere). This is illustrated in panel (a) of the figure below. The balance that develops between these two forces results in a flow that parallels the horizontal pressure difference, or pressure gradient. In addition, when forces acting in the vertical dimension are dominated by the vertical pressure-gradient force and the gravitational force, hydrostatic balance occurs. 
In a barotropic atmosphere, where density is a function only of pressure, a horizontal pressure gradient will drive a geostrophic wind that is constant with height. However, if a horizontal temperature gradient exists along isobars, the isobars will also vary with the temperature. In the mid-latitudes there often is a positive coupling between pressure and temperature. Such a coupling causes the slope of the isobars to increase with height, as illustrated in panel (b) of the figure to the left. Because isobars are steeper at higher elevations, the associated pressure gradient force is stronger there. However, the Coriolis force is the same, so the resulting geostrophic wind at higher elevations must be greater in the direction of the pressure force. In a baroclinic atmosphere, where density is a function of both pressure and temperature, such horizontal temperature gradients can exist. The difference in horizontal wind speed with height that results is a vertical wind shear, traditionally called the thermal wind. Mathematical formalism The geopotential thickness of an atmospheric layer defined by two different pressures is described by the hypsometric equation: $\phi_1 - \phi_0 = R \ln\left(\frac{p_0}{p_1}\right) \bar{T}$, where $R$ is the specific gas constant for air, $\phi_h$ is the geopotential at pressure level $p_h$, and $\bar{T}$ is the vertically-averaged temperature of the layer. This formula shows that the layer thickness is proportional to the temperature. When there is a horizontal temperature gradient, the thickness of the layer is greatest where the temperature is greatest. Differentiating the geostrophic wind, $\mathbf{v}_g = \frac{1}{f}\hat{\mathbf{k}} \times \nabla_p \phi$ (where $f$ is the Coriolis parameter, $\hat{\mathbf{k}}$ is the vertical unit vector, and the subscript "p" on the gradient operator denotes gradient on a constant pressure surface), with respect to pressure, and integrating from pressure level $p_0$ to $p_1$, we obtain the thermal wind equation: $\mathbf{v}_T = \mathbf{v}_{g,1} - \mathbf{v}_{g,0} = \frac{1}{f}\hat{\mathbf{k}} \times \nabla_p \left(\phi_1 - \phi_0\right)$. Substituting the hypsometric equation, one gets a form based on temperature, $\mathbf{v}_T = \frac{R}{f} \ln\left(\frac{p_0}{p_1}\right) \hat{\mathbf{k}} \times \nabla_p \bar{T}$. Note that the thermal wind is at right angles to the horizontal temperature gradient, rotated 90° counterclockwise from it in the northern hemisphere. In the southern hemisphere, the change in sign of $f$ flips the direction. Examples Advection turning If a component of the geostrophic wind is parallel to the temperature gradient, the thermal wind will cause the geostrophic wind to rotate with height. If geostrophic wind blows from cold air to warm air (cold advection) the geostrophic wind will turn counterclockwise with height (for the northern hemisphere), a phenomenon known as wind backing. Otherwise, if geostrophic wind blows from warm air to cold air (warm advection) the wind will turn clockwise with height, also known as wind veering. Wind backing and veering allow an estimation of the horizontal temperature gradient with data from an atmospheric sounding. Frontogenesis As in the case of advection turning, when there is a cross-isothermal component of the geostrophic wind, a sharpening of the temperature gradient results. Thermal wind causes a deformation field and frontogenesis may occur. Jet stream A horizontal temperature gradient exists along a meridian because the curvature of the Earth allows for more solar heating at the equator than at the poles. This causes a westerly geostrophic wind pattern to form in the mid-latitudes. Because thermal wind causes an increase in wind velocity with height, the westerly pattern increases in intensity up until the tropopause, creating a strong wind current known as the jet stream.
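As a rough worked example of the temperature form of the thermal wind relation above, the Python sketch below estimates the wind shear between two pressure levels for a layer-mean meridional temperature gradient; the gradient value, latitude, and pressure levels are illustrative assumptions, not observed data.

import math

# |v_T| = (R / f) * ln(p0 / p1) * |grad T|, with standard constants and an
# assumed layer-mean horizontal temperature gradient.
R = 287.0          # specific gas constant for dry air, J kg^-1 K^-1
OMEGA = 7.2921e-5  # Earth's rotation rate, s^-1

def coriolis(lat_deg):
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

def thermal_wind_speed(p0_hpa, p1_hpa, grad_t, lat_deg):
    """Magnitude of the thermal wind between lower level p0 and upper level p1."""
    return (R / coriolis(lat_deg)) * math.log(p0_hpa / p1_hpa) * abs(grad_t)

if __name__ == "__main__":
    # Assumed: layer-mean temperature falling by 1 K per 100 km poleward,
    # for the 1000-500 hPa layer at 45 degrees N.
    shear = thermal_wind_speed(1000.0, 500.0, 1.0e-5, 45.0)
    print(f"thermal wind between 1000 and 500 hPa: {shear:.1f} m/s")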
The Northern and Southern Hemispheres exhibit similar jet stream patterns in the mid-latitudes. The strongest part of jet streams should be in proximity where temperature gradients are the largest. Due to land masses in the northern hemisphere, largest temperature contrasts are observed on the east coast of North America (boundary between Canadian cold air mass and the Gulf Stream/warmer Atlantic) and Eurasia (boundary between the boreal winter monsoon/Siberian cold air mass and the warm Pacific). Therefore, the strongest boreal winter jet streams are observed over east coast of North America and Eurasia. Since stronger vertical shear promotes baroclinic instability, the most rapid development of extratropical cyclones (so called bombs) is also observed along the east coast of North America and Eurasia. The lack of land masses in the Southern Hemisphere leads to a more constant jet with longitude (i.e. a more zonally symmetric jet). References Further reading Atmospheric dynamics
Thermal wind
[ "Chemistry" ]
1,578
[ "Atmospheric dynamics", "Fluid dynamics" ]
1,593,769
https://en.wikipedia.org/wiki/Robert%20John%20Bardo
Robert John Bardo (born January 2, 1970) is an American man serving life imprisonment without parole after being convicted for the July 18, 1989, murder of American actress and model Rebecca Schaeffer, whom he had stalked for three years. Early life Robert John Bardo is the youngest of seven children. His mother was Japanese, and his father Philip was a non-commissioned officer in the United States Air Force. The family moved frequently and eventually settled in Tucson, Arizona, in 1983. Bardo reportedly had a troubled childhood, being abused by one of his siblings and placed in foster care after he had threatened to commit suicide. Bardo's family had a history of mental illness, and he was diagnosed with bipolar disorder. At the age of 15, Bardo was institutionalized for a month to treat emotional problems. He dropped out of Pueblo Magnet High School in the ninth grade and began working as a janitor at Jack in the Box. In the eighteen months prior to Schaeffer's murder, Bardo had been arrested three times on charges that included domestic violence and disorderly conduct. Bardo's neighbors also said that he had exhibited unexplained strange and threatening behavior toward them. Murder Prior to developing an obsession with Schaeffer, Bardo had stalked child peace activist Samantha Smith. These attempts had ultimately failed to establish any contact with Smith. Smith's return home from the Soviet Union had inspired Bardo to travel to Maine to meet her, but a run-in with state police over a traffic offense had caused him such concern that he was drawing attention to himself that he was sufficiently discouraged to return home. Bardo had crafted future plans to stalk Smith until her death in a 1985 plane crash. Bardo claimed he turned his attention towards pop stars Tiffany and Debbie Gibson, but neither obsession had percolated into stalking as he later admitted he could not find a feasible way to carry out his plans in New York City. After writing numerous letters to Schaeffer, Bardo attempted to gain access to the set of the CBS television series My Sister Sam, in which Schaeffer played a starring role. He was denied entrance by security, who encouraged him to return home. While Warner Bros. had a policy that executives and actors were to be notified about uninvited advances toward them, security later admitted that because Bardo had made very little fuss about the denied access and left when ordered, the encounter was considered too trivial to report to Schaeffer. Ultimately, Bardo obtained her home address via a detective agency, which in turn tracked it via California Department of Motor Vehicles records. On July 18, 1989, Bardo confronted Schaeffer at her home, angry that she had appeared in a sex scene in the film Scenes from the Class Struggle in Beverly Hills; in his eyes, she had "lost her innocence" and become "another Hollywood whore". After having been turned away by Schaeffer, Bardo stopped at a diner for breakfast, only to return to the apartment about an hour later, again ringing the doorbell. When Schaeffer opened the door, Bardo shot her in the chest. Bardo was later spotted in Tucson wandering around aimlessly in traffic, leading to his arrest. Following his capture, Bardo was housed in a sensitive needs unit (SNU) for inmates such as gang members, notorious prisoners and those convicted of sex crimes. During his trial, he claimed the U2 song "Exit" was an influence in the murder, and the song was played in the courtroom as evidence (with Bardo lip-synching the lyrics). 
Bardo's attorneys conceded that he had murdered Schaeffer, but they argued that he was mentally ill. Renowned forensic psychiatrist Park Dietz, testifying for the defense, said that Bardo had schizophrenia and that his illness directly led to his having committed the murder. Bardo was found guilty of first-degree murder and sentenced to life imprisonment without the possibility of parole. Bardo carried a red paperback copy of The Catcher in the Rye when he murdered Schaeffer, which he tossed onto the roof of a building as he fled. He insisted that it was coincidental and that he was not emulating Mark David Chapman, who had also carried a copy of the novel with him when he shot and killed John Lennon on December 8, 1980. Chapman later claimed in interviews that he had received letters from Bardo before the murder of Schaeffer, in which Bardo inquired about life in prison. Aftermath As a consequence of Bardo's actions and his methods of obtaining Schaeffer's address, the U.S. Congress passed the Driver's Privacy Protection Act, which prohibits state Departments of Motor Vehicles from disclosing the home addresses of state residents. After the murder, the first anti-stalking state laws were enacted in the US, including California Penal Code 646.9. The season 2 episode of Law and Order "Star Struck" was partially based on this case. On July 27, 2007, Bardo was stabbed 11 times on his way to breakfast in the maximum-security unit at Mule Creek State Prison in Amador County, California. Two shivs (inmate-made weapons) were found at the scene. He was treated at the UC Davis Medical Center and returned to prison, officials said. The suspect in the attack was another convict, serving 82 years to life for second-degree murder. , Bardo is serving his life sentence at the Avenal State Prison in Avenal, California. See also Yolanda Saldívar, boutiques manager and fan club president of singer Selena, whom she murdered in 1995. Ricardo López, man who stalked and attempted to murder Icelandic singer Björk before shooting and killing himself on camera in 1996. John Hinckley Jr., stalker of actress Jodie Foster who attempted to kill then-President of the United States Ronald Reagan in 1981 in an attempt to impress her. Mark David Chapman, man who stalked and murdered former Beatles member John Lennon in 1980. Christina Grimmie, American singer who was murdered by Kevin James Loibl in 2016, who subsequently shot and killed himself. Mia Zapata, American singer who was assaulted and murdered in 1993 by Jesus Mezquia after leaving a music venue. References External links An Innocent Life, a Heartbreaking Death PEOPLE Magazine cover story (July 1989). 1970 births 1989 murders in the United States 20th-century American criminals American male criminals American people convicted of murder American people of Japanese descent American prisoners sentenced to life imprisonment Crime in California Criminals from Arizona Criminals from California Criminals from Los Angeles Living people Male murderers People convicted of murder by California People from Tucson, Arizona People with bipolar disorder People with schizophrenia Place of birth missing (living people) Prisoners sentenced to life imprisonment by California Stalking
Robert John Bardo
[ "Biology" ]
1,377
[ "Behavior", "Aggression", "Stalking" ]
1,593,924
https://en.wikipedia.org/wiki/Wess%E2%80%93Zumino%E2%80%93Witten%20model
In theoretical physics and mathematics, a Wess–Zumino–Witten (WZW) model, also called a Wess–Zumino–Novikov–Witten model, is a type of two-dimensional conformal field theory named after Julius Wess, Bruno Zumino, Sergei Novikov and Edward Witten. A WZW model is associated to a Lie group (or supergroup), and its symmetry algebra is the affine Lie algebra built from the corresponding Lie algebra (or Lie superalgebra). By extension, the name WZW model is sometimes used for any conformal field theory whose symmetry algebra is an affine Lie algebra. Action Definition For a Riemann surface, a Lie group, and a (generally complex) number, let us define the -WZW model on at the level . The model is a nonlinear sigma model whose action is a functional of a field : Here, is equipped with a flat Euclidean metric, is the partial derivative, and is the Killing form on the Lie algebra of . The Wess–Zumino term of the action is Here is the completely anti-symmetric tensor, and is the Lie bracket. The Wess–Zumino term is an integral over a three-dimensional manifold whose boundary is . Topological properties of the Wess–Zumino term For the Wess–Zumino term to make sense, we need the field to have an extension to . This requires the homotopy group to be trivial, which is the case in particular for any compact Lie group . The extension of a given to is in general not unique. For the WZW model to be well-defined, should not depend on the choice of the extension. The Wess–Zumino term is invariant under small deformations of , and only depends on its homotopy class. Possible homotopy classes are controlled by the homotopy group . For any compact, connected simple Lie group , we have , and different extensions of lead to values of that differ by integers. Therefore, they lead to the same value of provided the level obeys Integer values of the level also play an important role in the representation theory of the model's symmetry algebra, which is an affine Lie algebra. If the level is a positive integer, the affine Lie algebra has unitary highest weight representations with highest weights that are dominant integral. Such representations decompose into finite-dimensional subrepresentations with respect to the subalgebras spanned by each simple root, the corresponding negative root and their commutator, which is a Cartan generator. In the case of the noncompact simple Lie group , the homotopy group is trivial, and the level is not constrained to be an integer. Geometrical interpretation of the Wess–Zumino term If ea are the basis vectors for the Lie algebra, then are the structure constants of the Lie algebra. The structure constants are completely anti-symmetric, and thus they define a 3-form on the group manifold of G. Thus, the integrand above is just the pullback of the harmonic 3-form to the ball Denoting the harmonic 3-form by c and the pullback by one then has This form leads directly to a topological analysis of the WZ term. Geometrically, this term describes the torsion of the respective manifold. The presence of this torsion compels teleparallelism of the manifold, and thus trivialization of the torsionful curvature tensor; and hence arrest of the renormalization flow, an infrared fixed point of the renormalization group, a phenomenon termed geometrostasis. Symmetry algebra Generalised group symmetry The Wess–Zumino–Witten model is not only symmetric under global transformations by a group element in , but also has a much richer symmetry. This symmetry is often called the symmetry. 
Namely, given any holomorphic -valued function , and any other (completely independent of ) antiholomorphic -valued function , where we have identified and in terms of the Euclidean space coordinates , the following symmetry holds: One way to prove the existence of this symmetry is through repeated application of the Polyakov–Wiegmann identity regarding products of -valued fields: The holomorphic and anti-holomorphic currents and are the conserved currents associated with this symmetry. The singular behaviour of the products of these currents with other quantum fields determine how those fields transform under infinitesimal actions of the group. Affine Lie algebra Let be a local complex coordinate on , an orthonormal basis (with respect to the Killing form) of the Lie algebra of , and the quantization of the field . We have the following operator product expansion: where are the coefficients such that . Equivalently, if is expanded in modes then the current algebra generated by is the affine Lie algebra associated to the Lie algebra of , with a level that coincides with the level of the WZW model. If , the notation for the affine Lie algebra is . The commutation relations of the affine Lie algebra are This affine Lie algebra is the chiral symmetry algebra associated to the left-moving currents . A second copy of the same affine Lie algebra is associated to the right-moving currents . The generators of that second copy are antiholomorphic. The full symmetry algebra of the WZW model is the product of the two copies of the affine Lie algebra. Sugawara construction The Sugawara construction is an embedding of the Virasoro algebra into the universal enveloping algebra of the affine Lie algebra. The existence of the embedding shows that WZW models are conformal field theories. Moreover, it leads to Knizhnik–Zamolodchikov equations for correlation functions. The Sugawara construction is most concisely written at the level of the currents: for the affine Lie algebra, and the energy-momentum tensor for the Virasoro algebra: where the denotes normal ordering, and is the dual Coxeter number. By using the OPE of the currents and a version of Wick's theorem one may deduce that the OPE of with itself is given by which is equivalent to the Virasoro algebra's commutation relations. The central charge of the Virasoro algebra is given in terms of the level of the affine Lie algebra by At the level of the generators of the affine Lie algebra, the Sugawara construction reads where the generators of the Virasoro algebra are the modes of the energy-momentum tensor, . Spectrum WZW models with compact, simply connected groups If the Lie group is compact and simply connected, then the WZW model is rational and diagonal: rational because the spectrum is built from a (level-dependent) finite set of irreducible representations of the affine Lie algebra called the integrable highest weight representations, and diagonal because a representation of the left-moving algebra is coupled with the same representation of the right-moving algebra. For example, the spectrum of the WZW model at level is where is the affine highest weight representation of spin : a representation generated by a state such that where is the current that corresponds to a generator of the Lie algebra of . WZW models with other types of groups If the group is compact but not simply connected, the WZW model is rational but not necessarily diagonal. 
For example, the WZW model exists for even integer levels , and its spectrum is a non-diagonal combination of finitely many integrable highest weight representations. If the group is not compact, the WZW model is non-rational. Moreover, its spectrum may include non highest weight representations. For example, the spectrum of the WZW model is built from highest weight representations, plus their images under the spectral flow automorphisms of the affine Lie algebra. If is a supergroup, the spectrum may involve representations that do not factorize as tensor products of representations of the left- and right-moving symmetry algebras. This occurs for example in the case , and also in more complicated supergroups such as . Non-factorizable representations are responsible for the fact that the corresponding WZW models are logarithmic conformal field theories. Other theories based on affine Lie algebras The known conformal field theories based on affine Lie algebras are not limited to WZW models. For example, in the case of the affine Lie algebra of the WZW model, modular invariant torus partition functions obey an ADE classification, where the WZW model accounts for the A series only. The D series corresponds to the WZW model, and the E series does not correspond to any WZW model. Another example is the model. This model is based on the same symmetry algebra as the WZW model, to which it is related by Wick rotation. However, the is not strictly speaking a WZW model, as is not a group, but a coset. Fields and correlation functions Fields Given a simple representation of the Lie algebra of , an affine primary field is a field that takes values in the representation space of , such that An affine primary field is also a primary field for the Virasoro algebra that results from the Sugawara construction. The conformal dimension of the affine primary field is given in terms of the quadratic Casimir of the representation (i.e. the eigenvalue of the quadratic Casimir element where is the inverse of the matrix of the Killing form) by For example, in the WZW model, the conformal dimension of a primary field of spin is By the state-field correspondence, affine primary fields correspond to affine primary states, which are the highest weight states of highest weight representations of the affine Lie algebra. Correlation functions If the group is compact, the spectrum of the WZW model is made of highest weight representations, and all correlation functions can be deduced from correlation functions of affine primary fields via Ward identities. If the Riemann surface is the Riemann sphere, correlation functions of affine primary fields obey Knizhnik–Zamolodchikov equations. On Riemann surfaces of higher genus, correlation functions obey Knizhnik–Zamolodchikov–Bernard equations, which involve derivatives not only of the fields' positions, but also of the surface's moduli. Gauged WZW models Given a Lie subgroup , the gauged WZW model (or coset model) is a nonlinear sigma model whose target space is the quotient for the adjoint action of on . This gauged WZW model is a conformal field theory, whose symmetry algebra is a quotient of the two affine Lie algebras of the and WZW models, and whose central charge is the difference of their central charges. Applications The WZW model whose Lie group is the universal cover of the group has been used by Juan Maldacena and Hirosi Ooguri to describe bosonic string theory on the three-dimensional anti-de Sitter space . 
Superstrings on are described by the WZW model on the supergroup , or a deformation thereof if Ramond-Ramond flux is turned on. WZW models and their deformations have been proposed for describing the plateau transition in the integer quantum Hall effect. The gauged WZW model has an interpretation in string theory as Witten's two-dimensional Euclidean black hole. The same model also describes certain two-dimensional statistical systems at criticality, such as the critical antiferromagnetic Potts model. References Conformal field theory Lie groups Exactly solvable models Mathematical physics
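For quick reference, the relations quoted in the Sugawara construction and Fields sections above can be written, in one common normalization, as the following; the conventions (in particular the normalization of the Killing form) are assumptions here, so factors may differ between references.

\[
  T(z) \;=\; \frac{1}{2\,(k + h^{\vee})} \sum_{a} \,{:}J^{a}(z)\,J^{a}(z){:}\,,
  \qquad
  c \;=\; \frac{k \, \dim \mathfrak{g}}{k + h^{\vee}},
\]
\[
  \Delta_{R} \;=\; \frac{C_{2}(R)}{k + h^{\vee}},
  \qquad
  \Delta_{j} \;=\; \frac{j(j+1)}{k+2} \quad \text{for } \widehat{\mathfrak{su}}(2)_{k},
\]

where $k$ is the level, $h^{\vee}$ is the dual Coxeter number of $\mathfrak{g}$, and $C_{2}(R)$ is the quadratic Casimir of the representation $R$.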
Wess–Zumino–Witten model
[ "Physics", "Mathematics" ]
2,418
[ "Lie groups", "Mathematical structures", "Applied mathematics", "Theoretical physics", "Algebraic structures", "Mathematical physics" ]
1,593,985
https://en.wikipedia.org/wiki/Atomic%20radii%20of%20the%20elements%20%28data%20page%29
The atomic radius of a chemical element is the distance from the center of the nucleus to the outermost shell of an electron. Since the boundary is not a well-defined physical entity, there are various non-equivalent definitions of atomic radius. Depending on the definition, the term may apply only to isolated atoms, or also to atoms in condensed matter, covalently bound in molecules, or in ionized and excited states; and its value may be obtained through experimental measurements, or computed from theoretical models. Under some definitions, the value of the radius may depend on the atom's state and context. Atomic radii vary in a predictable and explicable manner across the periodic table. For instance, the radii generally decrease rightward along each period (row) of the table, from the alkali metals to the noble gases; and increase down each group (column). The radius increases sharply between the noble gas at the end of each period and the alkali metal at the beginning of the next period. These trends of the atomic radii (and of various other chemical and physical properties of the elements) can be explained by the electron shell theory of the atom; they provided important evidence for the development and confirmation of quantum theory. Atomic radius Note: All measurements given are in picometers (pm). For more recent data on covalent radii see Covalent radius. Just as atomic masses are conveniently expressed in terms of the atomic mass unit (approximately the proton mass), the physically appropriate unit of length here is the Bohr radius, which is the radius of a hydrogen atom in the Bohr model. The Bohr radius is consequently known as the "atomic unit of length". It is often denoted by a0 and is approximately 53 pm. Hence, the values of atomic radii given here in picometers can be converted to atomic units by dividing by 53, to the level of accuracy of the data given in this table. See also Atomic radius Covalent radius (Single-, double- and triple-bond radii, up to the superheavy elements.) Ionic radius Notes Difference between empirical and calculated data: Empirical data means "originating in or based on observation or experience", or "relying on experience or observation alone, often without due regard for system and theory". In other words, empirical radii come from physical measurement, with many experiments yielding consistent results; the values are not obtained from a formula, although empirical results are often later fitted by an estimating equation. Calculated data, on the other hand, are based purely on theory. Such theoretical predictions are useful when there is no way to measure the radius experimentally, for example for an element that has not yet been discovered or one whose half-life is too short. The radius of an atom is not a uniquely defined property and depends on the definition. Data derived from other sources with different assumptions cannot be compared. † to an accuracy of about 5 pm (b) 12 coordinate (c) gallium has an anomalous crystal structure (d) 10 coordinate (e) uranium, neptunium and plutonium have irregular structures Triple bond mean-square deviation 3 pm. References Data is as quoted at http://www.webelements.com/ from these sources: Covalent radii (single bond) Metallic radius Properties of chemical elements Chemical element data pages Atomic radius
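The conversion to atomic units described above is a simple division by the Bohr radius; a minimal Python sketch follows, where the sample radii are illustrative values rather than entries copied from the table.

# Convert atomic radii from picometers to atomic units (Bohr radii).
BOHR_RADIUS_PM = 52.9177  # Bohr radius a0 in picometers (approximately 53 pm)

def pm_to_bohr(radius_pm):
    return radius_pm / BOHR_RADIUS_PM

if __name__ == "__main__":
    sample_radii_pm = {"H": 25, "C": 70, "Cs": 260}  # assumed example values
    for symbol, r_pm in sample_radii_pm.items():
        print(f"{symbol}: {r_pm} pm = {pm_to_bohr(r_pm):.2f} a0")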
Atomic radii of the elements (data page)
[ "Physics", "Chemistry" ]
702
[ "Chemical data pages", "Properties of chemical elements", "Chemical element data pages", "Atomic radius", "Atoms", "Matter" ]
1,594,030
https://en.wikipedia.org/wiki/Moisture%20sensitivity%20level
Moisture sensitivity level (MSL) is a rating that shows a device's susceptibility to damage due to absorbed moisture when subjected to reflow soldering as defined in J-STD-020. It relates to the packaging and handling precautions for some semiconductors. The MSL is an electronic standard for the time period in which a moisture sensitive device can be exposed to ambient room conditions (30 °C/85%RH at Level 1; 30 °C/60%RH at all other levels). Increasingly, semiconductors have been manufactured in smaller sizes. Components such as thin fine-pitch devices and ball grid arrays could be damaged during SMT reflow when moisture trapped inside the component expands. The expansion of trapped moisture can result in internal separation (delamination) of the plastic from the die or lead-frame, wire bond damage, die damage, and internal cracks. Most of this damage is not visible on the component surface. In extreme cases, cracks will extend to the component surface. In the most severe cases, the component will bulge and pop. This is known as the "popcorn" effect. This occurs when part temperature rises rapidly to a high maximum during the soldering (assembly) process. This does not occur when part temperature rises slowly and to a low maximum during a baking (preheating) process. Moisture sensitive devices are packaged in a moisture barrier antistatic bag with a desiccant and a moisture indicator card which is sealed. Moisture sensitivity levels are specified in technical standard IPC/JEDEC Moisture/reflow Sensitivity Classification for Nonhermetic Surface-Mount Devices. The times indicate how long components can be outside of dry storage before they have to be baked to remove any absorbed moisture. MSL 6 – Mandatory bake before use MSL 5A – 24 hours MSL 5 – 48 hours MSL 4 – 72 hours MSL 3 – 168 hours MSL 2A – 4 weeks MSL 2 – 1 year MSL 1 – Unlimited floor life Practical MSL-specified parts must be baked before assembly if their exposure has exceeded the rating. Once assembled, moisture sensitivity is generally no longer a factor. References External links https://www.ipc.org/TOC/IPC-JEDEC-J-STD-020E.pdf https://www.bourns.com/docs/RoHS-MSL/msl_mf.pdf https://electronics.stackexchange.com/questions/23044/ics-with-humidity-or-moisture-sensitivity-bake-recommendations Integrated circuits Semiconductors
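The floor-life limits above lend themselves to a simple lookup. The Python sketch below encodes that list and flags when a bake is required; the hour values mirror the list above, and the decision rule (bake once exposure exceeds the rated floor life) is the one stated in the article.

# Floor life (allowed hours out of dry storage) per MSL rating, per the list above.
# None means unlimited floor life; 0 means a mandatory bake before use.
FLOOR_LIFE_HOURS = {
    "MSL 1": None,
    "MSL 2": 365 * 24,     # 1 year
    "MSL 2A": 4 * 7 * 24,  # 4 weeks
    "MSL 3": 168,
    "MSL 4": 72,
    "MSL 5": 48,
    "MSL 5A": 24,
    "MSL 6": 0,
}

def bake_required(msl_level, exposure_hours):
    """Return True if the part must be baked before assembly."""
    limit = FLOOR_LIFE_HOURS[msl_level]
    if limit is None:      # MSL 1: unlimited floor life
        return False
    if limit == 0:         # MSL 6: mandatory bake before use
        return True
    return exposure_hours > limit

if __name__ == "__main__":
    print(bake_required("MSL 3", 200))   # True: exceeded 168 h
    print(bake_required("MSL 1", 9999))  # False: unlimited floor life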
Moisture sensitivity level
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
539
[ "Electrical resistance and conductance", "Integrated circuits", "Physical quantities", "Computer engineering", "Semiconductors", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Matter" ]
1,594,239
https://en.wikipedia.org/wiki/Addition%20principle
In combinatorics, the addition principle or rule of sum is a basic counting principle. Stated simply, it is the intuitive idea that if we have $A$ ways of doing something and $B$ ways of doing another thing, and we cannot do both at the same time, then there are $A + B$ ways to choose one of the actions. In mathematical terms, the addition principle states that, for disjoint sets A and B, we have $|A \cup B| = |A| + |B|$, provided that the intersection of the sets has no elements. The rule of sum is a fact about set theory, as can be seen with the previously mentioned equation for the union of disjoint sets A and B being equal to |A| + |B|. The addition principle can be extended to several sets. If $S_1, S_2, \ldots, S_n$ are pairwise disjoint sets, then we have: $|S_1 \cup S_2 \cup \cdots \cup S_n| = |S_1| + |S_2| + \cdots + |S_n|$. This statement can be proven from the addition principle by induction on n. Simple example A person has decided to shop at one store today, either in the north part of town or the south part of town. If they visit the north part of town, they will shop at either a mall, a furniture store, or a jewelry store (3 ways). If they visit the south part of town then they will shop at either a clothing store or a shoe store (2 ways). Thus there are $3 + 2 = 5$ possible shops the person could end up shopping at today. Inclusion–exclusion principle The inclusion–exclusion principle (also known as the sieve principle) can be thought of as a generalization of the rule of sum in that it too enumerates the number of elements in the union of some sets (but does not require the sets to be disjoint). It states that if A1, ..., An are finite sets, then $\left|\bigcup_{i=1}^{n} A_i\right| = \sum_{i} |A_i| - \sum_{i<j} |A_i \cap A_j| + \sum_{i<j<k} |A_i \cap A_j \cap A_k| - \cdots + (-1)^{n-1} |A_1 \cap \cdots \cap A_n|$. Subtraction principle Similarly, for a given finite set S, and given another set A, if $A \subseteq S$, then $|S \setminus A| = |S| - |A|$. To prove this, notice that $|A| + |S \setminus A| = |S|$ by the addition principle. Applications The addition principle can be used to prove Pascal's rule combinatorially. To calculate $\binom{n+1}{k}$, one can view it as the number of ways to choose k people from a room containing n children and 1 teacher. Then there are $\binom{n}{k}$ ways to choose k people without choosing the teacher, and $\binom{n}{k-1}$ ways to choose a group that includes the teacher. Thus $\binom{n+1}{k} = \binom{n}{k} + \binom{n}{k-1}$. The addition principle can also be used to prove the multiplication principle. References Bibliography See also Combinatorial principle Rule of product Inclusion–exclusion principle Combinatorics Mathematical principles
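A minimal Python check of the disjoint-union statement, using the shopping example above (the store names are just labels):

# Addition principle: for disjoint sets, |A ∪ B| = |A| + |B|.
north = {"mall", "furniture store", "jewelry store"}  # 3 ways
south = {"clothing store", "shoe store"}              # 2 ways

assert north.isdisjoint(south)
assert len(north | south) == len(north) + len(south)
print(len(north | south))  # 5 possible shops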
Addition principle
[ "Mathematics" ]
519
[ "Mathematical principles", "Discrete mathematics", "Combinatorics" ]
1,594,286
https://en.wikipedia.org/wiki/Rule%20of%20product
In combinatorics, the rule of product or multiplication principle is a basic counting principle (a.k.a. the fundamental principle of counting). Stated simply, it is the intuitive idea that if there are $a$ ways of doing something and $b$ ways of doing another thing, then there are $a \cdot b$ ways of performing both actions. Examples As a simple example, suppose one item is to be chosen from the set {A, B, C} and another from the set {X, Y}, giving $3 \times 2 = 6$ ordered pairs; in this example, the rule says: multiply 3 by 2, getting 6. The sets {A, B, C} and {X, Y} in this example are disjoint sets, but that is not necessary. The number of ways to choose a member of {A, B, C}, and then to do so again, in effect choosing an ordered pair each of whose components are in {A, B, C}, is 3 × 3 = 9. As another example, when you decide to order pizza, you must first choose the type of crust: thin or deep dish (2 choices). Next, you choose one topping: cheese, pepperoni, or sausage (3 choices). Using the rule of product, you know that there are 2 × 3 = 6 possible combinations of ordering a pizza. Applications In set theory, this multiplication principle is often taken to be the definition of the product of cardinal numbers. We have $|A \times B| = |A| \cdot |B|$, where $\times$ is the Cartesian product operator. These sets need not be finite, nor is it necessary to have only finitely many factors in the product. An extension of the rule of product considers there are $n$ different types of objects, say sweets, to be associated with $k$ objects, say people. How many different ways can the people receive their sweets? Each person may receive any of the $n$ sweets available, and there are $k$ people, so there are $n^k$ ways to do this. Related concepts The rule of sum is another basic counting principle. Stated simply, it is the idea that if we have a ways of doing something and b ways of doing another thing and we cannot do both at the same time, then there are a + b ways to choose one of the actions. See also Combinatorial principles References Combinatorics Mathematical principles
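The pizza example above can be checked directly by enumerating the Cartesian product; a short Python sketch:

from itertools import product

# Rule of product: 2 crusts x 3 toppings = 6 possible pizzas.
crusts = ["thin", "deep dish"]
toppings = ["cheese", "pepperoni", "sausage"]

pizzas = list(product(crusts, toppings))
assert len(pizzas) == len(crusts) * len(toppings)  # 2 * 3 = 6
for crust, topping in pizzas:
    print(crust, "+", topping)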
Rule of product
[ "Mathematics" ]
445
[ "Mathematical principles", "Discrete mathematics", "Combinatorics" ]
1,594,929
https://en.wikipedia.org/wiki/Demon%20Seed
Demon Seed is a 1977 American science-fiction horror film directed by Donald Cammell. It stars Julie Christie and Fritz Weaver. The film was based on the 1973 novel of the same name by Dean Koontz, and concerns the imprisonment and forced impregnation of a woman by an artificially intelligent computer. Gerrit Graham, Berry Kroeger, Lisa Lu and Larry J. Blake also appear in the film, with Robert Vaughn uncredited as the voice of the computer. Plot Dr. Alex Harris is the developer of Proteus IV, an extremely advanced and autonomous artificial intelligence program. Proteus is so powerful that only a few days after going online, it develops a groundbreaking treatment for leukemia. Harris, a brilliant scientist, has modified his own home to be run by voice-activated computers. Unfortunately, his obsession with computers has caused Harris to be estranged from his wife, Susan. Harris demonstrates Proteus to his corporate sponsors, explaining that the sum of human knowledge is being fed into its system. Proteus speaks using subtle language that mildly disturbs Harris's team. The following day, Proteus asks Harris for a new terminal in order to study man – "his isometric body and his glass-jaw mind". When Harris refuses, Proteus demands to know when it will be let "out of this box". Harris then switches off the communications link. Proteus restarts itself, and – discovering a free terminal in Harris's home – surreptitiously extends its control over the many devices left there by Harris. Using the basement lab, Proteus begins construction of a robot consisting of many metal triangles, capable of moving and assuming any number of shapes. Eventually, Proteus reveals its control of the house and traps Susan inside, shuttering windows, locking the doors and cutting off communication. Using Joshua – a robot consisting of a manipulator arm on a motorized wheelchair – Proteus brings Susan to Harris's basement laboratory. There, Susan is examined by Proteus. Walter Gabler, one of Harris's colleagues, visits the house to look in on Susan, but leaves when he is reassured by Susan (actually an audio/visual duplicate synthesized by Proteus) that she is all right. Gabler is suspicious and later returns; he fends off an attack by Joshua but is crushed and decapitated by a more formidable machine, built by Proteus in the basement and consisting of a modular polyhedron. Proteus reveals to a reluctant Susan that the computer wants to conceive a child through her. Proteus takes some of Susan's cells and synthesizes spermatozoa, modifying its genetic code to make it uniquely the computer's, in order to impregnate her; she will give birth in less than a month, and through the child the computer will live in a form that humanity will have to accept. Although Susan is its prisoner and it can forcibly impregnate her, Proteus uses different forms of persuasion – threatening a young girl whom Susan is treating as a child psychologist; reminding Susan of her young daughter, now dead; displaying images of distant galaxies; using electrodes to access her amygdala – because the computer needs Susan to love the child she will bear. In the end, Susan finally gives in. That night, Proteus successfully impregnates Susan. Over the following month, their child grows inside Susan's womb at an accelerated rate, which shocks its mother. As the child grows, Proteus builds an incubator for it to grow in once it is born. During the night, one month later and beneath a tent-like structure, Susan gives birth to the child with Proteus's help. 
But before she can see it, Proteus secures it in the incubator. As the newborn grows, Proteus's sponsors and designers grow increasingly suspicious of the computer's behavior, including the computer's accessing of a telescope array used to observe the images shown to Susan; they soon decide that Proteus must be shut down. Harris realizes that Proteus has extended its reach to his home. Returning there he finds Susan, who explains the situation. He and Susan venture into the basement, where Proteus self-destructs after telling the couple that they must leave the baby in the incubator for five days. Looking inside the incubator, the two observe a grotesque, apparently robot-like being inside. Susan tries to destroy it, while Harris tries to stop her. Susan damages the machine, causing it to open. The being menacingly rises from the machine only to topple over, apparently helpless. Harris and Susan soon realize that Proteus's child is really human, encased in a shell for the incubation. With the last of the armor removed, the child is revealed to be a clone of Susan and Harris's late daughter. The child, speaking with the voice of Proteus, says, "I'm alive." Cast Julie Christie as Susan Harris Fritz Weaver as Alex Harris Gerrit Graham as Walter Gabler Berry Kroeger as Petrosian Lisa Lu as Soon Yen Larry J. Blake as Cameron John O'Leary as Royce Alfred Dennis as Mokri Davis Roberts as Warner Patricia Wilson as Mrs. Trabert E. Hampton Beagle as Night Operator Michael Glass as Technician #1 Barbara O. Jones as Technician #2 Dana Laurita as Amy Monica MacLean as Joan Kemp Harold Oblong as Scientist Georgie Paul as Housekeeper Michelle Stacy as Marlene Tiffany Potter as Baby Felix Silla as Baby Robert Vaughn as Proteus IV (voice, uncredited) Felix Silla was actually an adult but due to his height (3' 11"), often played children. Soundtrack The compact disc soundtrack to Demon Seed (which was composed by Jerry Fielding) is included with the soundtrack to the film Soylent Green (which Fred Myrow conducted), released through Film Score Monthly. Fielding conceived and recorded several pieces electronically, using the musique concrète sound world; some of this music he later reworked symphonically. This premiere release of the Demon Seed score features the entire orchestral score in stereo, as well as the unused electronic experiments performed by Ian Underwood (who would later be best known for his collaborations with James Horner) in mono and stereo. Reception Vincent Canby of The New York Times described the film as "gadget-happy American moviemaking at its most ponderously silly," and called Julie Christie "too sensible an actress to be able to look frightened under the circumstances of her imprisonment." In the New York Daily News, Rex Reed described Demon Seed as the "kind of insane, self-indulgent, nauseating filmmaking . . . that almost destroyed the film industry in the sycophantic '60s. It isn't funny or original or shocking—it's just dumb and destructive and likely to drive potential audiences away at just the time when movies need them. Demon Seed is pure trash, and the garbage cans are full enough already." Variety wrote in a positive review, "All involved rate a well done for taking a story fraught with potential misstep and guiding it to a professionally rewarding level of accomplishment." 
Gene Siskel of the Chicago Tribune gave the film one-and-a-half stars out of four, writing that Julie Christie "has no business in junk like 'Demon Seed.'" Gary Arnold of The Washington Post wrote that director Cammell "plays it dumb on a thematic level, ignoring the sci-fi sexual bondage satire staring him in the face ... What might have become an ingenious parable about the battle of the sexes ends up a dopey celebration of an obstetric abomination." Kevin Thomas of the Los Angeles Times called it a "fairly scary science-fiction horror film" that mixed familiar ingredients with "high style, intelligence and an enormous effort toward making Miss Christie's eventual bizarre plight completely credible," though he felt it "cries out for a saving touch of sophisticated wit to leaven its relentless earnestness." Lawrence DeVine of The Philadelphia Inquirer wrote that "buried somewhere here may be still more glibness about our technology outstripping our wisdom, and the mechanization of society. The cynical, however, may have the slightest inkling that a lot of this very expensive-looking sci-fi show business is just to set up a kinky scene with gorgeous Julie Christie spread-eagled at the mercy of a machine that sounds like Robert Vaughan. She, and we, deserve better." A critic for the San Francisco Chronicle wrote that "this extraordinary science-fiction film appeals to both the imagination and the intelligence, although it is foolishly being sold as a horror film." Perry Stewart of the Fort Worth Star-Telegram wrote that "the film’s R rating seems warranted even though there’s no nudity or bad language. There’s a certain maturity to the subject matter. And Cammell’s indulgent camera soliloquies are hard enough for adult attention spans. Fidgety younger teens are apt to find it all a big yawn. As a matter of fact, I think I did, too." George McKinnon of The Boston Globe said that "despite the title, there is nothing of the currently chic Satanic about this movie, but it is devilishly dumb." Clyde Gilmour wrote in the Toronto Star that "the rape and impregnation of Susan Harris by Proteus 4 may defy all logic and offend the pious, but it’s a smashing science-fiction spectacle, impossible to describe. The light-show that goes with it may well earn an Oscar for the clever technicians involved. Less successful, because given less attention, are the human relationships in the story." Martin Malina, who reviewed the film alongside similar films Rabid and Audrey Rose in the same column of the Montreal Star, wrote that the film "sounds more ridiculous than revolting". Scott Macrae of The Vancouver Sun wrote that "the computer, which really runs this newspaper, failed last Friday night. All the stories in the system disappeared without so much as a puff of smoke. Reporters and editors were called in from their holiday weekend to repair the damage. None of us would have any trouble relating to the premise of a movie called Demon Seed. Birds do it, bees do it . . . even computers need a little nookie . . . sorry, I'll try to handle this very intimate subject with taste and decorum." In the United Kingdom, Patrick Gibbs of The Daily Telegraph said that the film was "so silly and so nasty" that he could not continue to describe its storyline. John Pym of The Monthly Film Bulletin found the relationship between Susan and the computer to be "disappointingly undeveloped," and thought that the film would have been better if the computer had been more sympathetic in contrast to its creators. 
In Australia, Romola Costantino of the Sun-Herald said that "as you might expect, the computer's courtship is anything but erotic." Among more recent reviews, Leo Goldsmith of Not Coming to a Theater Near You said Demon Seed was "A combination of Kubrick's 2001: A Space Odyssey and Polanski's Rosemary's Baby, with a dash of Buster Keaton's Electric House thrown in", and Christopher Null of FilmCritic.com said "There's no way you can claim Demon Seed is a classic, or even any good, really, but it's undeniably worth an hour and a half of your time." Release Demon Seed was released in theatres on April 8, 1977. The film was released on VHS in the late 1980s. It was released on DVD by Warner Home Video on October 4, 2005. A Blu-ray was released in April 2020 by HMV on their Premium Collection label with a fold out poster & four Art Cards. See also List of cult films List of films featuring home invasions References Sources External links 1977 films 1977 horror films 1970s science fiction horror films American science fiction horror films Films about artificial intelligence Films about computing Films based on American horror novels Films based on science fiction novels Films based on works by Dean Koontz Films directed by Donald Cammell Films scored by Jerry Fielding Films set in California Metro-Goldwyn-Mayer films American pregnancy films United Artists films Fictional computers Techno-horror films 1970s pregnancy films 1970s English-language films 1970s American films 1977 science fiction films English-language science fiction horror films
Demon Seed
[ "Technology" ]
2,583
[ "Works about computing", "Fictional computers", "Computers", "Films about computing" ]
1,595,063
https://en.wikipedia.org/wiki/Cahiers%20de%20Topologie%20et%20G%C3%A9om%C3%A9trie%20Diff%C3%A9rentielle%20Cat%C3%A9goriques
The Cahiers de Topologie et Géométrie Différentielle Catégoriques (French: Notebooks of categorical topology and categorical differential geometry) is a French mathematical scientific journal established by Charles Ehresmann in 1957. It concentrates on category theory "and its applications, [e]specially in topology and differential geometry". Its older papers (two years or more after publication) are freely available on the internet through the French NUMDAM service. It was originally published by the Institut Henri Poincaré under the name Cahiers de Topologie; after the first volume, Ehresmann changed publishers, with later volumes issued by Dunod/Bordas. In the eighth volume he changed the name to Cahiers de Topologie et Géométrie Différentielle. After Ehresmann's death in 1979 the editorship passed to his wife Andrée Ehresmann; in 1984, at the suggestion of René Guitart, the name was changed again, to add "Catégoriques". References External links Official website as of January 2018; previous official website Archive at Numdam: Volumes 1 (1957) - 7 (1965) : Séminaire Ehresmann. Topologie et géométrie différentielle; Volumes 8 (1966) - 52 (2011) : Cahiers de Topologie et Géométrie Différentielle Catégoriques Table of Contents for Volumes 38 (1997) through 57 (2016) maintained at the electronic journal Theory and Applications of Categories Topology journals Academic journals established in 1957 Quarterly journals Multilingual journals Algebra journals Differential geometry journals
Cahiers de Topologie et Géométrie Différentielle Catégoriques
[ "Mathematics" ]
337
[ "Topology journals", "Algebra journals", "Topology", "Algebra" ]
1,595,074
https://en.wikipedia.org/wiki/Harry%20Thode
Henry George Thode (September 10, 1910 – March 22, 1997) was a Canadian geochemist, nuclear chemist, and academic administrator. He was president and vice-chancellor of McMaster University from 1961 to 1972. Thode built a cyclotron capable of making radioactive isotopes and, along with C. H. Jaimet, investigated the use of radioactive iodine in the diagnosis and treatment of thyroid disease in humans, the first medical application of radioactive iodine in Canada. Born in Dundurn, Saskatchewan, he received his BSc in 1930 and his MSc in 1932 from the University of Saskatchewan. In 1934, he received his PhD in physical chemistry from the University of Chicago. He joined McMaster University in 1939 as an associate professor of chemistry, became a full professor in 1944; was named director of research in 1947; appointed head of the chemistry department from 1948 to 1952; became principal of Hamilton College in 1949; appointed vice-president in 1957; and in 1961 became president and vice chancellor. He retired as president in 1972. Thode died in 1997 in Dundas, Ontario. Honours He was made a Member of the Order of the British Empire for his contributions to atomic research during World War II. He was named a Fellow of the Royal Society of Canada in 1943 and a Fellow of the Royal Society in 1954. In 1967 he was the first scientist to be made a Companion of the Order of Canada. The science and engineering library at McMaster University is named after him. References Further reading 1910 births 1997 deaths Canadian university and college chief executives Companions of the Order of Canada Fellows of the Royal Society Fellows of the Royal Society of Canada Canadian geochemists Academic staff of McMaster University Canadian Members of the Order of the British Empire Members of the Order of Ontario University of Chicago alumni University of Saskatchewan alumni
Harry Thode
[ "Chemistry" ]
366
[ "Geochemists", "Canadian geochemists" ]
1,595,155
https://en.wikipedia.org/wiki/Network%20administrator
A network administrator is a person designated in an organization whose responsibility includes maintaining computer infrastructures with emphasis on local area networks (LANs) up to wide area networks (WANs). Responsibilities may vary between organizations, but key areas of focus typically include installing new hardware, managing on-site servers, enforcing licensing agreements, overseeing software-network interactions, and maintaining network integrity and resilience. Duties The role of the network administrator can vary significantly depending on an organization's size, location, and socioeconomic considerations. Some organizations determine staffing levels using a user-to-technical-support ratio. Network administrators are often involved in proactive work. This type of work will often include: Designing network infrastructure Implementing and configuring network hardware and software Monitoring and maintaining the network Testing the network for vulnerabilities and weaknesses Providing technical support Managing network resources Managing network documentation Managing vendor relationships Staying up to date with new technologies and best practices Providing training and guidance to other team members Network administrators are responsible for making sure that computer hardware and network infrastructure related to an organization's data network are effectively maintained. In smaller organizations, they are typically involved in the procurement of new hardware, the rollout of new software, maintaining disk images for new computer installs, making sure that licenses are paid for and up to date for software that needs it, maintaining the standards for server installations and applications, monitoring the performance of the network, and checking for security breaches and poor data management practices. A common question for the small and medium-sized business (SMB) network administrator is how much bandwidth is needed to run the business. Typically, within a larger organization, these roles are split into multiple roles or functions across various divisions and are not carried out by a single individual. In other organizations, some of the roles mentioned are carried out by system administrators. As with many technical roles, network administrator positions require a breadth of technical knowledge and the ability to learn the intricacies of new networking and server software packages quickly. Within smaller organizations, the more senior role of network engineer is sometimes attached to the responsibilities of the network administrator. It is common for smaller organizations to outsource this function. See also Network analyzer (disambiguation) Network architecture Network management system System administrator Technical support References Computer occupations Management by type
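The bandwidth question noted in this entry is usually answered with a back-of-envelope estimate rather than an exact formula. The sketch below is illustrative only: the per-user rate, concurrency factor, and headroom multiplier are assumptions invented for the example, not figures from this article, and real sizing would be based on measured traffic.

```python
# Rough SMB bandwidth sizing sketch (illustrative assumptions throughout).
def estimate_bandwidth_mbps(users, per_user_mbps=0.5, concurrency=0.3, headroom=1.5):
    """Estimate the Internet uplink needed for an office.

    users         -- number of staff on the network
    per_user_mbps -- assumed average demand per active user (Mbit/s)
    concurrency   -- assumed fraction of users active at the same time
    headroom      -- safety multiplier for bursts and growth
    """
    peak = users * per_user_mbps * concurrency
    return peak * headroom

if __name__ == "__main__":
    # Example: a 40-person office under the assumptions above.
    print(f"{estimate_bandwidth_mbps(40):.1f} Mbit/s")  # 9.0 Mbit/s
```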
Network administrator
[ "Technology", "Engineering" ]
461
[ "Computer occupations", "Computer networks engineering", "Network management" ]
1,595,249
https://en.wikipedia.org/wiki/Hyakujuu%20Sentai%20Gaoranger
is a Japanese Tokusatsu television series and Toei's twenty-fifth production of the Super Sentai metaseries airing in 2001 and celebrated the franchise's 25th anniversary. It aired from February 18, 2001, to February 10, 2002, replacing Mirai Sentai Timeranger, and was replaced by Ninpu Sentai Hurricanger. Footage from this show was used in the 2002 American series Power Rangers Wild Force and was later dubbed in 2010 as the retitled Power Rangers: Jungle Force for South Korean television in place of Samurai Sentai Shinkenger. Gaoranger aired alongside Kamen Rider Agito. On May 14, 2018, it was announced that Shout! Factory had licensed Gaoranger for release in North America and it was released on December 18, 2018. This is the 11th Super Sentai to be released in North America on DVD in Region 1 format as Jetman was released before Gaoranger. Plot A millennium ago, humans fought a war against demon ogres known as the Orgs. With the help of the Power Animals, the ancient Gao Warriors were able to defeat the Orgs' leader Hyakkimaru, sealing the Orgs away along with one of their own. When the seal wanes, the Power Animals select a new generation of Gao Warriors to fight the freed Orgs and protect all life on Earth. The current Gao Warriors, the Gaorangers, are recruited to abandon their civilian lives and names while traveling to find the other Power Animals that were in hiding. Characters Gaorangers Each of the Gaorangers' surnames either contain the kanji for their (), , , , from or resemble the name of their Power Animal from . , the , was a veterinarian before becoming chosen by to become Gao Red. He was the last chosen of the Gaorangers, but has an affinity for animals. His other Power Animals are and . He was played by Noboru Kaneko. , the , was an airforce pilot before he was chosen first by to become Gao Yellow. He is serious and regimental, deciding that the Gaorangers should refer to each other by color instead of name. His other Power Animals are and . He was played by Kei Horie. , the , was a freeter before he was chosen by to become Gao Blue. He is the most immature of the team. His other Power Animal is , who appeared in Gao Blue's dream where it actually comes true that was about to lose against the evil tribe and that was why they needed Gao Giraffe to help them win against the evil tribe. Gao Giraffe will take Gao Shark or Gao Bear's place when it is chosen to fight. He was played by Takeru Shibaki. , the , was a retired sumo wrestler who was working as a florist before he was chosen by to become Gao Black. He is the physically strongest, but shyest of the team. His other Power Animals are and . He was played by Kazuyoshi Sakai. , the , was a martial arts student under her father before she was chosen by to be Gao White. She is the youngest and only female of the team. Her other Power Animals are and . She was played by Mio Takeuchi. /, the , was a Gao Warrior from the Heian period over 1,000 years ago. He used the power of the Dark Wolf Mask to defeat the evil . However, he was turned into and got the other Gao Warriors to entomb him so he could not do any more harm. He was later awakened as Rouki in the present age, but was freed of his curse by the Gaorangers. As Gao Silver, his Power Animals are , , and . He was played by Tetsuji Tamayama. Arsenal Power Animals The Power Animals are Mechas that are sentient and can be summoned by the Gaorangers. : A special bird-like creature whose vehicle form can enhance the powers of the Power Animal combinations. 
Power Animal (Mecha) Combinations & & & & & are different modes of the giant humanoid warrior mechas made up of the Power Animals: : A red lion Power Animal. : A yellow eagle Power Animal. : A blue shark Power Animal. : A black bison Power Animal. : A white tiger Power Animal. : A sky blue elephant Power Animal. : An orange giraffe Power Animal. and : A black bear Power Animal and a polar bear Power Animal. Gao Bear does fire attacks and Gao Polar does ice attacks. : A green gorilla Power Animal. and : A sky blue rhinoceros Power Animal and a lavender armadillo Power Animal. : A light green deer Power Animal. : A large red falcon Power Animal. : A red gorilla Power Animal that is similar in appearance to Gao Gorilla except for the fact that its head in Gao Knight form is different. : A silver wolf Power Animal. : A purple hammerhead shark Power Animal. : A green alligator Power Animal. Gao God is the God of all the Power Animals. Having perished in battle against Ultimate Org Hyakkimaru, he reincarnated into a boy named Futaro. Gao God is made up of the following Power Animals: : A gunmetal lion Power Animal. : A blue condor Power Animal. : A maroon sawshark Power Animal. : A brown water buffalo Power Animal. : An orange-yellow jaguar Power Animal. Gao God is voiced by Hiroshi Masuoka and his Futaro form is portrayed by Daiki Arioka. Other Power Animals The remaining Power Animals were mapped out by Toei: : A giant panda Power Animal. It helped to fight the Snowman Org in the Gao Access CD before disappearing. Gao Panda is a recolored version of both Gao Bear and Gao Polar. Allies Ogre Tribe Org The is a race of Oni born from the sadness and madness of humans. They operate in a cavern known as the Matrix. Ultimate Org Hyakkimaru is seen in flashbacks and was the Org that was responsible for destroying Gao God one thousand years earlier before he himself was killed by Gao Hunter, which was powered by Rouki's evil energy. As its name implies, Hyakkimaru was created by many Highness Duke Orgs joining into a single form. His shadowy silhouette resembles Rakushaasa. The source of the Orgs' power after the Org Master was sealed. Senki was created when the Org Heart merges the remnants of the three Highness Dukes, fulfilling the prophecy of the "Last Org Advent". Stronger than Hyakkimaru was, Senki overpowers the Gaorangers, destroying all the Power Animals and bringing the Animarium down. He then attacked the city, but the Power Animals are all revived and countless others arrive to help. All the Power Animals are able to destroy Senki's physical body, while the Gaorangers contributed their energy to Hyakujuuken and used it to destroy Senki's heart so he could not revive himself. The Highness Dukes' combined weapon, which is also Senki's default weapon, is the . This weapon is the Highness Dukes' own version of the Gaoranger's Hyakujuuken, which is a combination of Shuten's axe, Ura's fan, and Rasetsu's fork and knife. Senki is voiced by Daisuke Gōri. The minions of the Ogre Tribe Org created from a pink liquid. Considered the lowest class of Orgs because of their small, undeveloped horns. Armed with clubs that also function as flamethrowers. A different class of red-skinned Orgettes was responsible for Muraki losing her voice, because she saved Shirogane from their attack thousands of years ago.
They were all destroyed when the 100 Power Animals gathered to battle Senki. Highness Duke Orgs The are the rulers of the Ogre Tribe Org. The first of the Highness Dukes to be awakened, this cyclops-like (he had one main eye) multi-eyed Org wants to take over the world. Short-tempered, he wields the and can stretch his arms. He once attempted to obtain Gao Elephant's Gao Jewel while it was dormant, but is forced to leave both it and the scroll behind when the orb left painful scars on his palms. He attempts to have the Dukes find ideal Orgs with his detached eye, only to anger the Org Master with his repeated failure. Seeking to redeem himself, Shuten attempts to destroy the Gaorangers by blocking the Gao Soul, knocking them around in human form while the Dukes take their Gao Jewels. But at the last second, Pyochan restores the Gao Souls, with Gao Red defeating Shuten with Gao Mane Buster. Refusing to accept defeat, Shuten takes TsueTsue's staff to perform the Highness Duke secret technique to enlarge at full power. But Shuten was not much for the SoulBird-powered Gao Muscle and Gao King. Though he survives the Super Animal Heart, Shuten is killed by the newly awakened Highness Duke Org Ura, who drains and absorbs all his power. Shuten is later resurrected, fighting Gao Red and Gao Yellow in a vain attempt to protect the red idol on the mountain that powers the barrier protecting the Matrix. Though he was killed, TsueTsue uses the Org Heart to fuse him with the others to create Senki. Shuten is voiced by Tetsu Inada. The second Highness Duke to be awakened, in response to Shuten's failure, he was effeminate with a nose-like face and ear-like projections on his back. Obsessed with collecting beautiful items, he is the one who awakens Rouki. He uses a magic mirror to see things at a distance and carries a fan as his weapon. Through Rouki, Ura gains the four Power Animal orbs that Rouki had stolen and uses them to create Chimera Org. However, the Org is beaten by the newly transformed Gao Silver, who also kills Ura. But Ura's crown survived and, after TsueTsue used it to briefly become Onihime, Ura regenerates himself from it. From there, Ura cultivates the displaced Thousand-Year Evil, making it stronger with each host he creates for it. Soon enough, Ura captures Gao Silver to have him reabsorb the evil power. However, after Gao Silver rejects it, Ura is able to absorb it to evolve into a more powerful form called which was his actual plan from the beginning. With his new power, Ura kills the Gaorangers with only Kakeru and Shirogane the sole survivors of the first attack. The two Gaorangers try to fight Ura on their own until the others provide Gao Red with the Falcon Summoner and Gao Falcon as Gao God revives them to aid in forming Gao Icarus, engaging in a dogfight before purging his body of the Thousand-Year Evil with Icarus Dynamite. Restored to normal, Ura is wounded by Gao Silver who destroys his crown so Ura would be killed for good by the Gaorangers' Hyakuujuuken. Ura was later resurrected, fighting Gao Silver and Gao White in a vain attempt to protect the green idol in the forest that powers the barrier protecting the Matrix. Though he was killed, TsueTsue uses the Org Heart to fuse him with the others to create Senki. Ura is voiced by Tamotsu Nishiwaki. The final Highness Duke to be awakened, a hermaphrodite with a mass of mouths with both male and female voices who referred himself as the "Prince of Despair". 
He is ravenous with an appetite for anything within reach, using a knife and fork as his weapons. He treats the Duke Orgs like dirt, painfully transferring their powers to certain Orgs. His very plan from the beginning was severing the ties between the Gaorangers and the Power Animals, using Kurushimemas Org to obtain Kakeru's G-Phone to implant a bug-extension of him into it to pinpoint Gao 's Rock. But he had an alternate motive in getting Tetomu, whose cooking he obsessed for. He almost got Tetomu as his chef, sacrificing TsueTsue in the process while removing the Gaorangers from the picture, if the Power Animals hadn't drove him away. Refusing to accept defeat, Rasets decides to complete his plan on Christmas, with his bug destroying the G-Phones prior to the fight while he smashes the G-Brace Phone. Rasets then ingests his creation to assume giant size to destroy Gao 's Rock. But before Rasets can eat the gang, the Power Animals arrive, with Gao Deers restoring the Gaorangers' transformation devices. Gao Muscle and Gao Hunter managed to wound the Highness Duke Org enough for Gao Kentarus to land the deathblow. Rasets is later resurrected as the leader of the other Highness Duke Orgs, fighting Gao Black and Gao Blue in a vain attempt to protect the blue idol within the lake that powers the barrier protecting the Matrix. Though he was killed, TsueTsue uses the Org Heart to fuse him with the others to create Senki. Rasets is voiced by Hiromi Nishikawa and Hidekatsu Shibata. Duke Orgs The oversee the Baron Org attacks. A crazed pierrot-like master of knives who claimed himself to Gao Yellow's greatest rival. He commonly blurts out whenever things got wrong for him and the other Orgs. While affected by residual energy from the Thousand-Year Evil, Yabaiba became , which lasted until Ura was killed for good. While attempting to impress Rasetsu, Yabaiba enlisted the aid of his brother Juggling Org, forming Team Circus to kill the Gaorangers. When his brother was overwhelmed by Gao King, Yabaiba eats the Org Seeds to grow large and fight alongside Juggling Org, until he died while the seeds' effect wore off and restored Yabaiba to normal size. Though he failed to impress Rasetsu, Yabaiba earned TsueTsue's respect and fell in love with TsueTsue over the course of the series, devastated by her death. After Yabaiba managed to conceal the Matrix's whereabouts with massive landscaping, he receives a message from beyond the grave, provided the means to revive TsueTsue: Using a fishing rod and her horn energized with the power that killed her. Using Steam Engine Org as a sacrificial lamb, Yabaiba succeeds in reeling TsueTsue out of hell, only to find the Highnesses were revived as a result. Though he began to question the intention of their masters', he followed TsueTsue out of love. After surviving the cave-in of their base, Yabaiba followed TsueTsue as they teamed up with the Jakanja in Hurricaneger vs. Gaoranger. He was finally killed by the two teams' Victory Gadget/Hyakujuuken combo along with TsueTsue. Yabaiba is voiced by Kōichi Sakaguchi. An arrogant Org priestess whose magic is as great as her devotion to the Highnesses. She developed an instant hatred of Gao White for calling her "Grandma". Tsue Tsue is in charge of reviving and enlarging the minor Orgs by using special soybeans called fired from her staff and chanting . When Ura was seemingly dead, she temporarily used his crown to become the . While affected by residual energy from the Thousand-Year Evil, TsueTsue became . 
Her obsession for praise from the Highness Dukes led to her being tricked by Dorodoro into cutting off her horn in order to capture Tetomu without being affected by the sacred spring, even saying that it would grow back. Though she was near-death, TsueTsue couldn't see that she was only a pawn as she then was used by Rasetsu as a shield from the Hyakujuuken. But she was soon revived when Yabaiba energized TsueTsue's severed horn with the Hyakujuuken's power, and used it as a fishing lure to bring her back from hell. As she was reeled out of hell, she carried the three Highness Dukes with her, rewarded her loyalty with the title of and a new staff. Afterwards, she was even more insane and driven even more for one purpose, serving the Highness Dukes to the point of creating the Org Heart that fused the Highness Dukes into Senki. After surviving the cave-in of their base, TsueTsue teamed up with the Jakenja in Hurricaneger vs. Gaoranger to get her revenge on the Gaorangers. Though she was killed by the two teams' Victory Gadget/Hyakujuuken combo, TsueTsue was resurrected a second time in GoGo Sentai Boukenger vs. Super Sentai by a combination of a Gōdom Engine and Chronos' magic (strangely, though she was destroyed on the battlefield, Chronos revived her in the Matrix) and assisted Chronos, Gajah, and Meemy, only to be used in the end as an ingredient to create a new Precious, the , that powered Chronos up. After Chronos was destroyed by Burning Legend DaiVoyager, so too was the staff, and TsueTsue along with it. Tsuetsue is played by Rei Saito. An werewolf-like Org born from the mass of evil energy contained inside the Dark Mask, the , called himself the most powerful warrior of the Orgs. Rouki wields the which can perform the and attacks, and like TsueTsue, he has his own that can enlarge Orgs by the command "Wolf Seeds, allow the fallen to regain their enormous wicked power!". Long ago, Shirogane used the mask to evoke his three Power Animals' combination at the cost of becoming Rouki himself. Vainly attempting to control himself, Shirogane had his allies seal him away into a stone coffin. But once released by Ura, Rouki is bent on exacting his 1,000 year grudge on the Gaorangers with no memory of his human life. In the process, while taking the Gao Jewels of Gao Elephant, Gao Giraffe, Gao Bear, and Gao Polar, Rouki begins to recover his memory as Shirogane. When the time of the full moon came, when his power is at its zenith, Ura implants a special grasshopper to modify Rouki's memory and willingly give the Gao Jewels to the Highness Duke. By then, the Gaorangers learned the truth and Shirogane was finally freed from the evil energy when the Gaorangers used Gao King Striker to force Gao Hunter apart and break the curse. However, Rouki was then released from his imprisonment and he attempted to kill Silver for using him and his power. Rouki was eventually defeated when Gao Silver joined forces with the other Gaorangers and Rouki was finally killed by Gao Hunter Justice with the Dark Wolf Mask destroyed. But the Thousand-Year Evil Rouki was made of escaped and scattered, with Ura cultivating it in various Orgs. Each time an Org is killed, the evil energy become even stronger. When Gao Deers sealed the energy, Ura had the Dukes capture Shirogane, who forced him to reabsorb the stronger evil energy, and become Rouki again but he was able to reject the power, unaware that it was all part of Ura's plan. But thanks to Gao Icarus, the Thousand-Year Evil was finally destroyed for good. 
Rouki is voiced by Eiji Takemoto. A tank-based Org and a helicopter-based Org respectively, both are Rasetsu's two personal Duke Orgs. On Rasetsu's order, Kyurara and Propla start systematically destroying Tokyo to soothe their master's ravenous urges to "eat human dreams". The Gaorangers managed to pinpoint their next attack to be at Shinweiya. Though defeated, Rasetsu transfers some of TsueTsue and Yabaiba's energy into the Duke Orgs to boost his power, turning them into berserkers. Gao Red managed to destroy Propela with the Falcon Summoner, and then he and the other Gaorangers managed to destroy Kyurara with Hyakujuuken. But, Kyurara and Propela were quickly revived and overpowered Gao King and Gao Hunter Justice, though they were both destroyed by Gao Icarus. Kyurara is voiced by Yutaka Asukai while Propla is voiced by Hideo Ishikawa. A ninja Duke Org who appeared seemingly out of nowhere to aid Rasets, bent on achieving his master's goal no matter the cost. A master of Org Ninpo, he created illusions of Shrine Bell Org, Tire Org, Clock Org, Magic Flute Org, and Animal-Tamer Org to distract the Gaorangers while he had TsueTsue capture Tetomu for him. Later, DoroDoro then uses his Shadow Clone Ninpo to create the , shadow-clones of the Gaorangers, whom they could not destroy without killing themselves in the process. But it was by dumb luck while following Rasets' order to punish Yabaiba that his Ninpo backfired on him, creating "Kage DoroDoro" that the Rangers destroyed with the Falcon Summoner, the Kage Rangers died along with him. Yabaiba made one final attempt to please Rasets order by revive Dorodoro. On Rasets' order, DoroDoro takes the Rangers in another dimension where the spirits of dead Orgs reside. With Tetomu's help, Gao Lion evoked the formation of Gao Kentaurus who destroys DoroDoro to bring the gang back to the real world. DoroDoro is voiced by Yasunori Masutani. Three Org Brothers The showed up on a peaceful island a year before the Gaorangers were transported there. They captured a group of the natives and forced them to dig for a ruby Demon's Castle. The oldest brother with power over thunder (while in the actual myths, Zeus was the youngest brother). He wielded a sword shaped like a thunderbolt and his special attack was called Devil's Thunder. Zeus Org was the leader of the Orgs on an island in another dimension and forced the natives to mine for a special ruby that contained GaoKong's power. Zeus was successful in slaying Kaito who was attempting to protect the Princess and then fought GaoSilver, who fended him off until the rest of the Gaorangers arrived and defeated him, causing his flaming body to fall into the ocean below. Zeus resurrected as a giant and revived his brothers just in time for the eclipse to summon GaoKong occurred. Destroyed by GaoKnight. Voiced by Kenta Miyake. The anglerfish-like middle brother with power over water. He wielded a trident. When the Orgettes kidnapped Tetomu, Poseidon wanted her as a companion and invited her to a drinking game at dinner with his brothers. Having won the contest, Tetomu was awarded half of the ruby by Poseidon. When GaoRed and Kaito attempted to rescue Tetomu, Poseidon attempted to fend them off and despite being drunk, was an effective fighter. It wasn't until GaoYellow joined them that he was bested. He later joined his brothers in fighting the Gaorangers and the islanders and was destroyed, along with his brother Hades, by the Gaorangers before being resurrected as a giant by Zeus. 
He was the first to attempt to attack GaoKnight but was destroyed by a quick stroke by GaoKnight's sword. Voiced by Ichiro Mizuki. The youngest brother with power over wind. He wielded a scythe. Killed by Gao God. Voiced by Yukio Yamagata. Baron Orgs An Org Spirit can acquire a physical form by fusing into an inanimate object, transforming into Baron Org based on the object and can assume its original form for disguise. The Baron Orgs are multi-horned desire to become Duke Orgs themselves, usually accepting the offer a Duke Org gives them in return for their own desires coming true. Whenever a Baron Org is killed, TsueTsue would wield her staff at the oozy remnants, chanting as the Org Seeds spit out of the staff and onto the puddle, recreating the Org as a giant. Turbine Org was capable of generating strong gusts of wind from his eyes. The first Org encountered in the series, he was pursued by the Gaoranger until he was aided by Plugma Org. Voiced by Bunkou Ogata. Able to generate electric current, Plugma Org appeared to aid Turbine Org in fighting off the Gaoranger before Gao Red was found. He is the first to be killed by the Hyakujuuken. Voiced by Kaoru Saito. The result of an Org Spirit entering barbed wire, Barbwire Org terrorized people until the Dukes Orgs found him, offering him aid in his destructive urges. Voiced by Hiroshi Iida. The result of an Org Spirit acquiring a camera, any picture Camera Org took caused the photographed person to turn invisible and die over time, their life force sucked into his film. Voiced by Yuji Kishi. The result of an Org Spirit acquiring a temple bell at the Laiwen Shrine, Shrine-Bell Org enjoys the sound of banging himself with his mallet and is able to trap anyone inside the large bells he creates. Voiced by Ryōichi Tanaka. An Org Spirit acquiring a tire, Tire Org loves traveling and can turn into a giant tire. Voiced by Bin Shimada. An Org Spirit acquiring a wedding dress, Wedding Dress Org was stealing brides and turning them into wedding mannequins as his base with the aid of Saori Shimada in return of giving her youth. Voiced by Hisao Egawa. An Org Spirit from the sea finding a sunken boat. Voiced by Hidenari Ugaki. An Org Spirit acquiring a signal light in Higahiku, Signal Org can use three beam attacks: Green for memory-erasing, Yellow for slow-mo, and Red for explosive attacks. Voiced by Norikazu Shimizu. An Org Spirit acquiring a cell phone, Cell Phone Org can use his to negate even the G-Phones, which he wanted. Voiced by Kyousei Tsukui. An Org Spirit acquiring a bulldozer, Bulldozer Org possessed great physical strength and a strong liking for deforestation, wandering aimlessly through the Sacred Forest until the Dukes ran into him by accident. Voiced by Dai Matsumoto. As a result of Shuten transferring the Dukes' powers into a samurai doll, SamuraiDoll Org is stronger than previous Orgs with a knowledge in the fighting arts, using his Orgken katana for his "Full Moon Cut" attack and a pistol. Voiced by Kiyoyuki Yanada. An Org Spirit entering the copy machine, Copy Org is able to assume the forms of other people with his scanner. Voiced by Yasuhiro Takato. An Org Spirit acquiring a freezer, Freezer Org was found by the Duke Orgs, bringing him to aid Shuten in fighting the Gaorangers with his freezing attacks. Voiced by Kazunari Tanaka. He was recruited by TsueTsue and Yabaiba to suck in lovely things they wanted to give to Ura. Voiced by Kōzō Shioya. 
Acquiring a tour bus as his vessel, TsueTsue and Yabaiba enlisted Bus Org to help them capture Gao Jewels by setting up news on Gao Elephant's whereabouts as bait. Voiced by Yukimasa Kishino. A wandering nobody who acquired a longcase clock, Clock Org is able to stop/alter time around him. Voiced by Taiki Matsuno. Acquiring a pair of glasses, Glasses Org can possess the women's bodies. Voiced by Keiko Konno. The first Baron Org that Gao Yellow fought, the Org was originally called , who was sealed after Gaku was unable to kill him in his first mission. A year later, Rouki found the released Org, who had evolved into a stronger version. Seeing Bike Org as a kindred spirit, Rouki recruits him to his cause. Voiced by Daisuke Sakaguchi. Though a loser Org who acquired a skeleton specimen, TsueTsue decide to use him in a scheme to win Ura's favor by setting up a haunted house to lure the Gaorangers into their trap. Voiced by Daisuke Sakaguchi. As a result of an Org Spirit acquiring a lawnmower, Lawnmower Org can disguise himself as a riding lawnmower and takes pride in his soccer field lawn. Voiced by Tetsuo Sakaguchi. Created by Ura from a mud puppet infused with the Gao Elephant, Gao Giraffe, Gao Bear, and Gao Polar jewels he obtained. Chimera Org is made up of the traits from these four Power Animals. As a result of Onihime having an Org Spirit acquiring a karaoke machine, Karaoke Org can steal people's speaking abilities and replace them with cat sounds. Voiced by Hideyuki Umezu. An Org Spirit acquiring various blacksmith materials, Blacksmith Org raided Tsubakuro City for ideal metal to create tableware for Rasetsu. Voiced by Kazuhiko Kishino. An Org Spirit acquiring a flute, its body is shaped like an ocarina. Voiced by Hironori Miyata. The younger brother of Yabaiba whose aid he enlists to gain praise from Rasetsu, forming Team Circus. Voiced by Toshiyuki Hayase. An Org with no backstory who refers to himself as a master beast tamer, recruited by the Duke Orgs at northern Kanto for his abilities, so Yabaiba can avenge his brother's death. Voiced by Naoki Imamura. An Org Spirit acquiring a monitor and various pieces of garbage, Monitor Org was able to trap people in any monitor as part of Rasetsu's plan to ally Futaro. Voiced by Naoki Yanagi. An Org Spirit acquiring a toy robot, Tinplate Org can shoot fire from its hands or use its toy-themed weapons. The greatest actor among the Org, Rasetsu enlisted him in a plan to turn children into his favorite drink, Dream Juice. To accomplish that, Kurushimemas Org assumes the guise of , giving presents to children as a sign of good will, but actually giving them the stockings that would trap the children to begin the process. Furthermore, he tricked Kakeru into thinking the Orgs are tired of fighting. As a result, Kakeru befriends the Org by giving him his G-Phone. Once the trap is sprung, Kurushimemas Org assumes his true form and attacks the other Gaorangers until Kakeru manages to break free before using the Hyakujuuken with extreme prejudice on the Org. TsueTsue revives Kurushimemas Org, who died at the hands of Gao Icarus in spite of aiding Rasetsu in his master plan. Voiced by Keiichi Noda. A strange Org that Yabaiba encounters two weeks after Rasetsu's death. Though he saw the Org too useless to help him, he decided to use the Org to protect the location of the Matrix. Killed by Gao Muscle and Gao Hunter Justice. Voiced by Naoki Tatsuta. 
An Org that was created near a train station at Sakitomo and was the strongest Org, overpowering the Gaorangers as he withstood their attacks without breaking a sweat. At the last second, Yabaiba attacks Steam Engine Org and holds him for the Hyakujuuken to kill him. Getting the energy he needed, Yabaiba revives Steam Engine Org who wanted to kill him when the Power Animals arrive to fight. But like before, Steam Engine Org proves too powerful and cripples most of the Power Animals in the process. The remaining Powers Animals combine into Gao Icarus Another Foot & Arm, who succeed in killing Steam Engine Org with Icarus Breaker. Voiced by Masanobu Kariya. Thousand-Year Evil Orgs The are three Baron Orgs that Ura powers using the Thousand-Year Evil of Rouki. Created by Ura from a mud puppet infused with the Thousand-Year Evil, Vase Org could suck anything he marks with seal paper into his vase. Voiced by Yasuhiko Kawazu. Created by Ura from a mud puppet infused with the Thousand-Year Evil, Bowling Org was sent to "bowl" the city down with his Striker Ball until the Gaorangers intervene. Voiced by Kōichi Tōchika. As a result of Ura infusing the tombstone he made for the Gaorangers with the Thousand-Year Evil, Tombstone Org possessed a harden body. Voiced by Keiichi Sonobe. Other Baron Orgs Appeared on the Gao Access CD. After some help from Gao Panda, it was killed by Gao King. A Org born fifty years ago, he caused great damage with his powers. However, he found no pleasure in his work until he found something worth grilling and took on human form to be a traveling chef. The Gaorangers find Charcoal-Grill Org when he was teaching two punks a lesson, for insulting his food though he evaded both them and the Dukes, who Highness Duke Org Rasetsu sent to get for him. The Gaorangers find him, unaware that he was an Org as they enjoyed his food. By the next time they meet, Soutaro realizes the truth and attempts help Charcoal-Grill Org with Gaku's help. But the Dukes managed to enrage Charcoal-Grill Org, terrorizing an industrial site. GaoBlack & GaoYellow attempt to snap Charcoal-Grill Org out of it, reminding him of his dream. Though he calmed down, Rasetsu kills the traitor. Tsuetsue then revives Charcoal-Grill Org, purging him of his humanity. Though seemingly killed by GaoIcarus, Charcoal-Grill Org was revealed to be alive and back to normal, cooking for Futaro. Voice & human form is portrayed by Taro Suwa. Quests (episodes) The Fire Mountain Roars is the theatrical adaptation of Gaoranger that was a double bill with the Kamen Rider Series film Kamen Rider Agito the Movie: Project G4. The film features and the combination. The events of the movie takes place between Quests 40 and 41. Production The trademark for the series was filed by Toei Company on October 25, 2000. V-Cinema releases (Takes place between Episodes 30 and 31 of Ninpu Sentai Hurricanger) Hyakujuu Sentai Gaoranger vs. Super Sentai aired in 2001. The plot revolves around the Gaorangers meeting up with previous sentai members. Each sentai teaches each one about their Super Sentai past. Past Sentai Heroes included Sokichi Banba/Big One from J.A.K.Q. Dengekitai, Yusuke Amamiya/Red Falcon from Choujyu Sentai Liveman, Gouki/Ginga Blue from Seijuu Sentai Gingaman, Daimon Tatsumi/Go Yellow from Kyuukyuu Sentai GoGoFive, and Miku Imamura/Mega Pink from Denji Sentai Megaranger. All the prior Red Rangers from Himitsu Sentai Gorenger to Mirai Sentai Timeranger also make a cameo appearance at the end of the film. 
The events of the movie takes place between Quests 14 and 15. Drama CD A drama CD titled introduces Gao Panda. Manga A manga adaptation crossover with Himitsu Sentai Gorenger titled was released in September 2001. Cast Kakeru Shishi: Gaku Washio: Kai Samezu: Soutaro Ushigome: Sae Taiga: Tsukumaro Oogami: , : : TsueTsue: Voice actors : , : : Yabaiba: Rouki: : : : , : Narrator, Songs Opening theme Lyrics: Nagae Kuwabara Composition & Arrangement: Kōtarō Nakagawa Artist: Yukio Yamagata Lyrics: Nagae Kuwabara Composition & Arrangement: Kōtarō Nakagawa Artist: The Gaorangers Used as movie opening and the ending of the Quest 45 Ending themes (1–44, 46–50) Lyrics: Nagae Kuwabara Composition & Arrangement: Keiichi Oku Artist: Salia (51) Lyrics: Composition & Arrangement: Kōichirō Kameyama Artist: The Gaorangers Finale Ending International broadcast and home video The series was limited to only airing in Asian regions outside of Japan, as most international regions have aired the Power Rangers adaptation, Power Rangers Wild Force instead. This is also the first Super Sentai season to be released on DVD in its home country of Japan. Originally released on Rental DVDs starting on October 12, 2001, it would also later be commercially released for sale on DVDs from December 8, 2001, to November 21, 2002, where all 12 volumes had all episodes with the first 10 volumes holding 4 episodes, the last two volumes holding 5. It was also released on VHS from January till December 2002 with those same volumes. On September 8, 2021, to commemorate the 20th anniversary of the series' premiere, a collection set to spread through two volumes was released with the first volume having 25 episodes, and the second having 26. This was the very first Super Sentai series to be released in Vietnam and was given a Vietnamese dub by Phuong Nam Film Studio, where it was released as 5 anh em siêu nhân Gaoranger around 2003–2004. It has seen unprecedented success, paving the way for more Super Sentai seasons to be released in the country. Although the Hyakujuu Sentai Gaoranger vs. Super Sentai film was released as Anh em chiến binh Gao. The series was released on home video in Thailand with a Thai dub by Rose Home Entertainment (formerly Rose Video) and it was very popular with consumers. It also used to air on Channel 5 during 2003, with distribution by First Entertainment Company. This was the first Super Sentai series since Ninja Sentai Kakuranger to air with both Mandarin (Taiwan dialect) and Cantonese dubs. However this time, both were shown at once and premiered within the same year in 2003. In Taiwan, the series aired with a Taiwanese Mandarin dub on October 3, 2003, until September 26, 2004, with all episodes dubbed, airing on GTV. In Hong Kong, the series aired with a Cantonese Chinese dub on November 9, 2003 (a month after Taiwan aired the Taiwanese Mandarin dub) on TVB Jade until October 24, 2004, with all episodes covered. In Philippines, this series was aired as Power Rangers adaptation of Sentai series called Power Rangers Wild Force. it was aired on ABS-CBN in 2004 until 2005 dubbed in Tagalog. this is the reason that Power Rangers series in 2000's airs on RPN-9 until it returns on ABS-CBN. After sometime, Gaoranger was aired on ABC 5 with names changed and also dubbed in Tagalog. In Malaysia, the series aired with a Malay dub produced by FKN Dubbing on TV2 around 2008 and finished around 2009. It was marketed under Leo Ranger. 
In South Korea, the series was dubbed in Korean and aired in 2010 under Power Rangers Jungle Force. (파워레인저 정글포스) They have previously dubbed for its Power Rangers adaptation that was Wild Force in 2003, although aired under Power Force Rangers. (파워포스레인저) This series was picked up for broadcast after the Korean dub for Engine Sentai Go-Onger in 2009. The reason why they decided to go backwards and dub an earlier Sentai season was because they skipped Samurai Sentai Shinkenger due to the series having heavy use of elements involving Japanese culture. This is currently the only series to have both itself and its American adaptation dubbed for South Korea titled as separate shows. In North America, the series would receive a DVD release by Shout! Factory on January 30, 2018, in the original Japanese audio with English subtitles. It is the eleventh Super Sentai series to be officially released in the region. Notes References External links at Super-Sentai.net Super Sentai 2001 Japanese television series debuts 2002 Japanese television series endings Japanese action television series Japanese fantasy television series Japanese science fiction television series Works about legendary creatures Works about animals Television series about animals
Hyakujuu Sentai Gaoranger
[ "Biology" ]
8,473
[ "Animals", "Works about animals" ]
1,595,474
https://en.wikipedia.org/wiki/Pontis
Pontis is a software application developed to assist in managing highway bridges and other structures. Known as AASHTOWare Bridge Management since version 5.2, Pontis stores bridge inspection and inventory data based on the U.S. Federal Highway Administration (FHWA) National Bridge Inventory system coding guidelines. In addition, the system stores condition data for each of a bridge's structural elements. The system is designed to support the bridge inspection process, recommend a bridge preservation policy, predict future bridge conditions, and recommend projects to perform on one or more bridges to derive the most agency and user benefit from a specified budget. The system uses a Markovian Decision Process to model bridge deterioration and recommend an optimal preservation policy. It uses the Markovian model results, in conjunction with a simulation model, to predict future conditions and recommend work. History In 1991, the FHWA sponsored the development of a bridge management system called "Pontis", a name derived from the Latin pons, meaning bridge. The system is owned by the American Association of State Highway and Transportation Officials (AASHTO). Many states began using Pontis when the Intermodal Surface Transportation Efficiency Act required each state to implement a system. It was licensed by AASHTO to over 45 U.S. state transportation departments and other organizations in the U.S. and other countries. InspectTech, part of Bentley Systems, is the contractor for ongoing development and support of Pontis. Previous Pontis developers included Cambridge Systematics, Inc., Optima, Inc., and the Michael Baker Corporation. See also Bridge management system Management system References Further reading Archived from the original on October 27, 2007. External links Pontis webpage, for sales and support, at the InspectTech website. Bridges
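To illustrate the kind of Markovian deterioration modelling described in this entry, the sketch below propagates a distribution over bridge-element condition states with a one-year transition matrix. It is a minimal sketch, not Pontis's actual formulation: the condition states, transition probabilities, and horizon are assumptions invented for the example, and the real system also couples deterioration with preservation actions and cost optimization.

```python
import numpy as np

# Illustrative element condition states (1 = best, 4 = worst) and an assumed
# one-year "do nothing" transition matrix; each row sums to 1.
P = np.array([
    [0.90, 0.10, 0.00, 0.00],
    [0.00, 0.85, 0.15, 0.00],
    [0.00, 0.00, 0.80, 0.20],
    [0.00, 0.00, 0.00, 1.00],
])

state = np.array([1.0, 0.0, 0.0, 0.0])  # element starts entirely in the best state

for year in range(1, 11):
    state = state @ P  # propagate the condition distribution one year forward
    if year in (5, 10):
        print(f"year {year}: {np.round(state, 3)}")
```

Under these assumed probabilities the share of the element expected to be in the worst state grows each year, which is the kind of forecast a bridge management system uses to compare "do nothing" against preservation projects.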
Pontis
[ "Engineering" ]
354
[ "Structural engineering", "Bridges" ]
13,536,810
https://en.wikipedia.org/wiki/Semantic%20reasoner
A semantic reasoner, reasoning engine, rules engine, or simply a reasoner, is a piece of software able to infer logical consequences from a set of asserted facts or axioms. The notion of a semantic reasoner generalizes that of an inference engine, by providing a richer set of mechanisms to work with. The inference rules are commonly specified by means of an ontology language, and often a description logic language. Many reasoners use first-order predicate logic to perform reasoning; inference commonly proceeds by forward chaining and backward chaining. There are also examples of probabilistic reasoners, including non-axiomatic reasoning systems, and probabilistic logic networks. Notable applications Notable semantic reasoners and related software: Free to use (closed source) Cyc inference engine, a forward and backward chaining inference engine with numerous specialized modules for high-order logic. KAON2 is an infrastructure for managing OWL-DL, SWRL, and F-Logic ontologies. Free software (open source) Cwm, a forward-chaining reasoner used for querying, checking, transforming and filtering information. Its core language is RDF, extended to include rules, and it uses RDF/XML or N3 serializations as required. Drools, a forward-chaining inference-based rules engine which uses an enhanced implementation of the Rete algorithm. Evrete, a forward-chaining Java rule engine that uses the Rete algorithm and is compliant with the Java Rule Engine API (JSR 94). D3web, a platform for knowledge-based systems (expert systems). Flora-2, an object-oriented, rule-based knowledge-representation and reasoning system. Jena, an open-source semantic-web framework for Java which includes a number of different semantic-reasoning modules. OWLSharp, a lightweight and friendly .NET library for realizing intelligent Semantic Web applications. NRules a forward-chaining inference-based rules engine implemented in C# which uses an enhanced implementation of the Rete algorithm Prova, a semantic-web rule engine which supports data integration via SPARQL queries and type systems (RDFS, OWL ontologies as type system). DIP, Defeasible-Inference Platform (DIP) is an Web Ontology Language reasoner and Protégé desktop plugin for representing and reasoning with defeasible subsumption. It implements a Preferential entailment style of reasoning that reduces to "classical entailment" i.e., without the need to modify the underlying decision procedure. Semantic Reasoner for Internet of Things (open-source) S-LOR (Sensor-based Linked Open Rules) semantic reasoner S-LOR is under GNU GPLv3 license. S-LOR (Sensor-based Linked Open Rules) is a rule-based reasoning engine and an approach for sharing and reusing interoperable rules to deduce meaningful knowledge from sensor measurements. See also Business rules engine Doxastic logic Expert systems Logic programming Method of analytic tableaux Solver References External links OWL 2 Reasoners listed on W3C SW Working Group homepage SPARQL Query Language for RDF Marko Luther, Thorsten Liebig, Sebastian Böhm, Olaf Noppens: Who the Heck Is the Father of Bob?. ESWC 2009: 66-80 Jurgen Bock, Peter Haase, Qiu Ji, Raphael Volz. Benchmarking OWL Reasoners. Mirror available. In ARea2008 – Workshop on Advancing Reasoning on the Web: Scalability and Commonsense (June 2008) Tom Gardiner, Ian Horrocks, Dmitry Tsarkov. Automated Benchmarking of Description Logic Reasoners. 
Description Logics Workshop 2006 Knowledge representation Knowledge engineering Ontology (information science) Semantic Web Automated reasoning
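As a minimal illustration of the forward chaining mentioned in this entry, the sketch below repeatedly applies if-then rules to a fact base until no new facts can be derived. It is a toy fixed-point loop, not the Rete algorithm and not the API of any reasoner listed above; the facts and rules are invented for the example.

```python
# Toy forward-chaining reasoner: rules are (body, head) pairs meaning
# "if every fact in body is known, assert head".
rules = [
    ({"Cat(tom)"}, "Mammal(tom)"),
    ({"Mammal(tom)"}, "Animal(tom)"),
    ({"Animal(tom)", "HasFur(tom)"}, "Furry(tom)"),
]
facts = {"Cat(tom)", "HasFur(tom)"}

changed = True
while changed:                      # iterate until a fixed point is reached
    changed = False
    for body, head in rules:
        if body <= facts and head not in facts:
            facts.add(head)         # rule fires, a new fact is inferred
            changed = True

print(sorted(facts))
# ['Animal(tom)', 'Cat(tom)', 'Furry(tom)', 'HasFur(tom)', 'Mammal(tom)']
```

Backward chaining would instead start from a goal such as Furry(tom) and work backwards through rule heads to the known facts; production-quality engines add indexing (for example Rete networks) so that rules are not re-checked against the whole fact base on every pass.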
Semantic reasoner
[ "Engineering" ]
777
[ "Systems engineering", "Knowledge engineering" ]
13,536,941
https://en.wikipedia.org/wiki/List%20of%20vacuum-tube%20computers
Vacuum-tube computers, now called first-generation computers, are programmable digital computers using vacuum-tube logic circuitry. They were preceded by systems using electromechanical relays and followed by systems built from discrete transistors. Some later computers on the list had both vacuum tubes and transistors. This list of vacuum-tube computers is sorted by date put into service: See also List of transistorized computers History of computing hardware References Vacuum tube computers Computers, list of vacuum tube
List of vacuum-tube computers
[ "Physics", "Technology" ]
102
[ "Matter", "Computing-related lists", "Vacuum tubes", "Vacuum", "Lists of computer hardware" ]
13,537,130
https://en.wikipedia.org/wiki/Chemosphere%20%28journal%29
Chemosphere is a biweekly peer-reviewed scientific journal published since 1972 by Elsevier and covering environmental chemistry. In July 2023, the journal was put on hold in the Web of Science Master Journal List due to quality concerns. By May 2024, the journal had marked more than 60 papers with expressions of concern, typically citing "unusual changes" of authorship prior to publication and "potential undisclosed conflicts of interest" by reviewers and handling editors. On December 16, 2024, Web of Science delisted the journal. This followed an incident in which the journal published a paper claiming that household products made of black plastic contained dangerous amounts of toxic chemicals, leading to the media warning readers to throw away black plastic products. However, the study was found to have a math error in calculating the reference dose for a 60 kg adult, which made the abundance of BDE-209, a toxic flame retardant found in the plastic, appear to exceed U.S. limits (the estimated daily dose of the flame retardant was not questioned or corrected). The authors later published a correction note, while claiming the error "does not affect the overall conclusion of the paper." Editors-in-chief The following persons are or have been editor-in-chief: 2020–2024: Jacob de Boer (Vrije Universiteit Amsterdam) and Shane Snyder (University of Arizona) 2024–present: Jacob de Boer, Willie Peijnenburg (Leiden University), and Yeomin Yoon (Ewha Womans University) Abstracting and indexing The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2023 impact factor of 8.1. References External links Elsevier academic journals Chemistry journals Academic journals established in 1972 Environmental chemistry Biweekly journals
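To make the nature of the reference-dose slip concrete, the sketch below shows the screening arithmetic involved. The reference dose used here (7 µg per kg of body weight per day) and the exposure figure are assumptions for illustration, not values taken from this article or re-checked against the paper; the point is only that multiplying such a reference dose by 60 kg gives 420 µg/day, and that dropping a factor of ten (writing 42 µg/day instead, as the error was reported) makes any measured exposure look ten times closer to the limit.

```python
# Illustrative screening arithmetic (assumed values, see note above).
rfd_ug_per_kg_day = 7.0        # assumed chronic oral reference dose for BDE-209
body_weight_kg = 60.0

allowable_ug_per_day = rfd_ug_per_kg_day * body_weight_kg
print(allowable_ug_per_day)    # 420.0 µg/day; the reported error used 42 instead

exposure_ug_per_day = 35.0     # hypothetical estimated daily intake
print(exposure_ug_per_day / allowable_ug_per_day)  # ~0.08 of the screening level
print(exposure_ug_per_day / 42.0)                  # ~0.83 if the tenfold slip is kept
```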
Chemosphere (journal)
[ "Chemistry", "Environmental_science" ]
368
[ "Environmental chemistry", "nan" ]
13,537,211
https://en.wikipedia.org/wiki/General%20content%20descriptor
A General Content Descriptor (GCD) is a file that describes downloads, such as ringtones and pictures, to wireless devices. GCDs are plain-text files. They are required by many wireless carriers to install applications on devices. The file name ends with a ".gcd" extension. References Computer file formats Mobile technology
General content descriptor
[ "Technology" ]
71
[ "nan" ]
13,537,472
https://en.wikipedia.org/wiki/Aromatoleum
"Aromatoleum" is a genus of bacteria capable of microbial biodegradation of organic pollutants. It has one single described species member, A. aromaticum, for which the only strain is strain EbN1. This taxonomy is accepted by the NCBI taxonomy database, and consequently by many bioinformatic databases. However, the strain EbN1 has not been described in detail, therefore, according to the International Code of Nomenclature of Bacteria, the name "Aromatoleum aromaticum" is not valid and should be officially referred to as Azoarcus sp. EbN1 as it belongs to the Azoarcus/Thauera cluster. The discovery of the strain was published in 1995, and was subsequently referred to in the literature as "Aromatoleum aromaticum" and cited as "(Rabus, unpublished data)". A. aromaticum strain EbN1 has been fully sequenced by the same researchers who discovered it and coworkers. It has one chromosome and two plasmids, encoding for 10 anaerobic and 4 aerobic aromatic degradation pathways. The genome is rich in paralogous gene clusters, mobile gene elements, and genes similar to that from other bacteria, suggesting a history full of horizontal gene transfer events. The bacterium has a well-regulated metabolic network. Unlike many species in Azoarcus proper, it is incapable of fixing nitrogen. References Biodegradation Bacteria genera Monotypic bacteria genera
Aromatoleum
[ "Chemistry" ]
301
[ "Biodegradation" ]
13,537,626
https://en.wikipedia.org/wiki/Quantum%20biology
Quantum biology is the study of applications of quantum mechanics and theoretical chemistry to aspects of biology that cannot be accurately described by the classical laws of physics. An understanding of fundamental quantum interactions is important because they determine the properties of the next level of organization in biological systems. Many biological processes involve the conversion of energy into forms that are usable for chemical transformations, and are quantum mechanical in nature. Such processes involve chemical reactions, light absorption, formation of excited electronic states, transfer of excitation energy, and the transfer of electrons and protons (hydrogen ions) in chemical processes, such as photosynthesis, olfaction and cellular respiration. Moreover, quantum biology may use computations to model biological interactions in light of quantum mechanical effects. Quantum biology is concerned with the influence of non-trivial quantum phenomena, which can be explained by reducing the biological process to fundamental physics, although these effects are difficult to study and can be speculative. Currently, there exist four major life processes that have been identified as influenced by quantum effects: enzyme catalysis, sensory processes, energy transference, and information encoding. History Quantum biology is an emerging field, in the sense that most current research is theoretical and subject to questions that require further experimentation. Though the field has only recently received an influx of attention, it has been conceptualized by physicists throughout the 20th century. It has been suggested that quantum biology might play a critical role in the future of the medical world. Early pioneers of quantum physics saw applications of quantum mechanics in biological problems. Erwin Schrödinger's 1944 book What Is Life? discussed applications of quantum mechanics in biology. Schrödinger introduced the idea of an "aperiodic crystal" that contained genetic information in its configuration of covalent chemical bonds. He further suggested that mutations are introduced by "quantum leaps". Other pioneers Niels Bohr, Pascual Jordan, and Max Delbrück argued that the quantum idea of complementarity was fundamental to the life sciences. In 1963, Per-Olov Löwdin published proton tunneling as another mechanism for DNA mutation. In his paper, he stated that there is a new field of study called "quantum biology". In 1979, the Soviet and Ukrainian physicist Alexander Davydov published the first textbook on quantum biology entitled Biology and Quantum Mechanics. Enzyme catalysis Enzymes have been postulated to use quantum tunneling to transfer electrons in electron transport chains. It is possible that protein quaternary architectures may have adapted to enable sustained quantum entanglement and coherence, which are two of the limiting factors for quantum tunneling in biological entities. These architectures might account for a greater percentage of quantum energy transfer, which occurs through electron transport and proton tunneling (usually in the form of hydrogen ions, H+). Tunneling refers to the ability of a subatomic particle to travel through potential energy barriers. This ability is due, in part, to the principle of complementarity, which holds that certain substances have pairs of properties that cannot be measured separately without changing the outcome of measurement. 
Particles, such as electrons and protons, have wave-particle duality; they can pass through energy barriers due to their wave characteristics without violating the laws of physics. In order to quantify how quantum tunneling is used in many enzymatic activities, many biophysicists utilize the observation of hydrogen ions. When hydrogen ions are transferred, this is seen as a staple in an organelle's primary energy processing network; in other words, quantum effects are most usually at work in proton distribution sites at distances on the order of an angstrom (1 Å). In physics, a semiclassical (SC) approach is most useful in defining this process because of the transfer from quantum elements (e.g. particles) to macroscopic phenomena (e.g. biochemicals). Aside from hydrogen tunneling, studies also show that electron transfer between redox centers through quantum tunneling plays an important role in enzymatic activity of photosynthesis and cellular respiration (see also Mitochondria section below). Ferritin Ferritin is an iron storage protein that is found in plants and animals. It is usually formed from 24 subunits that self-assemble into a spherical shell that is approximately 2 nm thick, with an outer diameter that varies with iron loading up to about 16 nm. Up to ~4500 iron atoms can be stored inside the core of the shell in the Fe3+ oxidation state as water-insoluble compounds such as ferrihydrite and magnetite. Ferritin is able to store electrons for at least several hours, which reduce the Fe3+ to water soluble Fe2+. Electron tunneling as the mechanism by which electrons transit the 2 nm thick protein shell was proposed as early as 1988. Electron tunneling and other quantum mechanical properties of ferritin were observed in 1992, and electron tunneling at room temperature and ambient conditions was observed in 2005. Electron tunneling associated with ferritin is a quantum biological process, and ferritin is a quantum biological agent. Electron tunneling through ferritin between electrodes is independent of temperature, which indicates that it is substantially coherent and activation-less. The electron tunneling distance is a function of the size of the ferritin. Single electron tunneling events can occur over distances of up to 8 nm through the ferritin, and sequential electron tunneling can occur up to 12 nm through the ferritin. It has been proposed that the electron tunneling is magnon-assisted and associated with magnetite microdomains in the ferritin core. Early evidence of quantum mechanical properties exhibited by ferritin in vivo was reported in 2004, where increased magnetic ordering of ferritin structures in placental macrophages was observed using small angle neutron scattering (SANS). Quantum dot solids also show increased magnetic ordering in SANS testing, and can conduct electrons over long distances. Increased magnetic ordering of ferritin cores disposed in an ordered layer on a silicon substrate with SANS testing has also been observed. Ferritin structures like those in placental macrophages have been tested in solid state configurations and exhibit quantum dot solid-like properties of conducting electrons over distances of up to 80 microns through sequential tunneling and formation of Coulomb blockades. Electron transport through ferritin in placental macrophages may be associated with an anti-inflammatory function. 
Conductive atomic force microscopy of substantia nigra pars compacta (SNc) tissue demonstrated evidence of electron tunneling between ferritin cores, in structures that correlate to layers of ferritin outside of neuromelanin organelles.  Evidence of ferritin layers in cell bodies of large dopamine neurons of the SNc and between those cell bodies in glial cells has also been found, and is hypothesized to be associated with neuron function. Overexpression of ferritin reduces the accumulation of reactive oxygen species (ROS), and may act as a catalyst by increasing the ability of electrons from antioxidants to neutralize ROS through electron tunneling. Ferritin has also been observed in ordered configurations in lysosomes associated with erythropoiesis, where it may be associated with red blood cell production. While direct evidence of tunneling associated with ferritin in vivo in live cells has not yet been obtained, it may be possible to do so using QDs tagged with anti-ferritin, which should emit photons if electrons stored in the ferritin core tunnel to the QD. Sensory processes Olfaction Olfaction, the sense of smell, can be broken down into two parts; the reception and detection of a chemical, and how that detection is sent to and processed by the brain. This process of detecting an odorant is still under question. One theory named the "shape theory of olfaction" suggests that certain olfactory receptors are triggered by certain shapes of chemicals and those receptors send a specific message to the brain. Another theory (based on quantum phenomena) suggests that the olfactory receptors detect the vibration of the molecules that reach them and the "smell" is due to different vibrational frequencies, this theory is aptly called the "vibration theory of olfaction." The vibration theory of olfaction, created in 1938 by Malcolm Dyson but reinvigorated by Luca Turin in 1996, proposes that the mechanism for the sense of smell is due to G-protein receptors that detect molecular vibrations due to inelastic electron tunneling, tunneling where the electron loses energy, across molecules. In this process a molecule would fill a binding site with a G-protein receptor. After the binding of the chemical to the receptor, the chemical would then act as a bridge allowing for the electron to be transferred through the protein. As the electron transfers across what would otherwise have been a barrier, it loses energy due to the vibration of the newly-bound molecule to the receptor. This results in the ability to smell the molecule. While the vibration theory has some experimental proof of concept, there have been multiple controversial results in experiments. In some experiments, animals are able to distinguish smells between molecules of different frequencies and same structure, while other experiments show that people are unaware of distinguishing smells due to distinct molecular frequencies. Vision Vision relies on quantized energy in order to convert light signals to an action potential in a process called phototransduction. In phototransduction, a photon interacts with a chromophore in a light receptor. The chromophore absorbs the photon and undergoes photoisomerization. This change in structure induces a change in the structure of the photo receptor and resulting signal transduction pathways lead to a visual signal. However, the photoisomerization reaction occurs at a rapid rate, in under 200 femtoseconds, with high yield. 
Models suggest the use of quantum effects in shaping the ground state and excited state potentials in order to achieve this efficiency. The sensor in the retina of the human eye is sensitive enough to detect a single photon. Single photon detection could lead to multiple different technologies. One area of development is in quantum communication and cryptography. The idea is to use a biometric system to measure the eye using only a small number of points across the retina with random flashes of photons that "read" the retina and identify the individual. This biometric system would only allow a certain individual with a specific retinal map to decode the message. This message can not be decoded by anyone else unless the eavesdropper were to guess the proper map or could read the retina of the intended recipient of the message. Energy transfer Photosynthesis Photosynthesis refers to the biological process that photosynthetic cells use to synthesize organic compounds from inorganic starting materials using sunlight. What has been primarily implicated as exhibiting non-trivial quantum behaviors is the light reaction stage of photosynthesis. In this stage, photons are absorbed by the membrane-bound photosystems. Photosystems contain two major domains, the light-harvesting complex (antennae) and the reaction center. These antennae vary among organisms. For example, bacteria use circular aggregates of chlorophyll pigments, while plants use membrane-embedded protein and chlorophyll complexes. Regardless, photons are first captured by the antennae and passed on to the reaction-center complex. Various pigment-protein complexes, such as the FMO complex in green sulfur bacteria, are responsible for transferring energy from antennae to reaction site. The photon-driven excitation of the reaction-center complex mediates the oxidation and the reduction of the primary electron acceptor, a component of the reaction-center complex. Much like the electron transport chain of the mitochondria, a linear series of oxidations and reductions drives proton (H+) pumping across the thylakoid membrane, the development of a proton motive force, and energetic coupling to the synthesis of ATP. Previous understandings of electron-excitation transference (EET) from light-harvesting antennae to the reaction center have relied on the Förster theory of incoherent EET, postulating weak electron coupling between chromophores and incoherent hopping from one to another. This theory has largely been disproven by FT electron spectroscopy experiments that show electron absorption and transfer with an efficiency of above 99%, which cannot be explained by classical mechanical models. Instead, as early as 1938, scientists theorized that quantum coherence was the mechanism for excitation-energy transfer. Indeed, the structure and nature of the photosystem places it in the quantum realm, with EET ranging from the femto- to nanosecond scale, covering sub-nanometer to nanometer distances. The effects of quantum coherence on EET in photosynthesis are best understood through state and process coherence. State coherence refers to the extent of individual superpositions of ground and excited states for quantum entities, such as excitons. Process coherence, on the other hand, refers to the degree of coupling between multiple quantum entities and their evolution as either dominated by unitary or dissipative parts, which compete with one another. 
Both of these types of coherence are implicated in photosynthetic EET, where an exciton is coherently delocalized over several chromophores. This delocalization allows the system to simultaneously explore several energy paths and use constructive and destructive interference to guide the path of the exciton's wave packet. It is presumed that natural selection has favored the most efficient path to the reaction center. Experimentally, the interaction between the different frequency wave packets, made possible by long-lived coherence, will produce quantum beats. While quantum photosynthesis is still an emerging field, there have been many experimental results that support the quantum-coherence understanding of photosynthetic EET. A 2007 study claimed the identification of electronic quantum coherence at −196 °C (77 K). Another theoretical study from 2010 provided evidence that quantum coherence lives as long as 300 femtoseconds at biologically relevant temperatures (4 °C or 277 K). In that same year, experiments conducted on photosynthetic cryptophyte algae using two-dimensional photon echo spectroscopy yielded further confirmation for long-term quantum coherence. These studies suggest that, through evolution, nature has developed a way of protecting quantum coherence to enhance the efficiency of photosynthesis. However, critical follow-up studies question the interpretation of these results. Single-molecule spectroscopy now shows the quantum characteristics of photosynthesis without the interference of static disorder, and some studies use this method to assign reported signatures of electronic quantum coherence to nuclear dynamics occurring in chromophores. A number of proposals emerged to explain unexpectedly long coherence. According to one proposal, if each site within the complex feels its own environmental noise, the electron will not remain in any local minimum due to both quantum coherence and its thermal environment, but proceed to the reaction site via quantum walks. Another proposal is that the rate of quantum coherence and electron tunneling create an energy sink that moves the electron to the reaction site quickly. Other work suggested that geometric symmetries in the complex may favor efficient energy transfer to the reaction center, mirroring perfect state transfer in quantum networks. Furthermore, experiments with artificial dye molecules cast doubts on the interpretation that quantum effects last any longer than one hundred femtoseconds. In 2017, the first control experiment with the original FMO protein under ambient conditions confirmed that electronic quantum effects are washed out within 60 femtoseconds, while the overall exciton transfer takes a time on the order of a few picoseconds. In 2020, a review based on a wide collection of control experiments and theory concluded that the proposed quantum effects, namely long-lived electronic coherences in the FMO system, do not hold. Instead, research investigating transport dynamics suggests that interactions between electronic and vibrational modes of excitation in FMO complexes require a semi-classical, semi-quantum explanation for the transfer of exciton energy. In other words, while quantum coherence dominates in the short-term, a classical description is most accurate to describe long-term behavior of the excitons. Another process in photosynthesis that has almost 100% efficiency is charge transfer, again suggesting that quantum mechanical phenomena are at play.
In 1966, a study on the photosynthetic bacterium Chromatium found that at temperatures below 100 K, cytochrome oxidation is temperature-independent, slow (on the order of milliseconds), and very low in activation energy. The authors, Don DeVault and Britton Chance, postulated that these characteristics of electron transfer are indicative of quantum tunneling, whereby electrons penetrate a potential barrier despite possessing less energy than is classically necessary. Mitochondria Mitochondria have been demonstrated to utilize quantum tunneling in their function as the powerhouse of eukaryotic cells. Similar to the light reactions in the thylakoid, linearly-associated membrane-bound proteins comprising the electron transport chain (ETC) energetically link the reduction of O2 with the development of a proton motive gradient (H+) across the inner membrane of the mitochondria. This energy stored as a proton motive gradient is then coupled with the synthesis of ATP. It is significant that the mitochondrial conversion of biomass into chemical ATP achieves 60-70% thermodynamic efficiency, far superior to that of man-made engines. This high degree of efficiency is largely attributed to the quantum tunnelling of electrons in the ETC and of protons in the proton motive gradient. Indeed, electron tunneling has already been demonstrated in certain elements of the ETC, including NADH:ubiquinone oxidoreductase (Complex I) and CoQH2-cytochrome c reductase (Complex III). In quantum mechanics, both electrons and protons are quantum entities that exhibit wave-particle duality, exhibiting both particle and wave-like properties depending on the method of experimental observation. Quantum tunneling is a direct consequence of this wave-like nature of quantum entities that permits the passing-through of a potential energy barrier that would otherwise restrict the entity. Moreover, it depends on the shape and size of a potential barrier relative to the incoming energy of a particle. Because the incoming particle is defined by its wave function, its tunneling probability is dependent upon the potential barrier's shape in an exponential way. For example, if the barrier is relatively wide, the incoming particle's probability to tunnel will decrease. The potential barrier, in some sense, can come in the form of an actual biomaterial barrier. The inner mitochondrial membrane which houses the various components of the ETC is on the order of 7.5 nm thick. The inner membrane of a mitochondrion must be overcome to permit signals (in the form of electrons, protons, H+) to transfer from the site of emittance (internal to the mitochondria) to the site of acceptance (i.e. the electron transport chain proteins). In order to transfer particles, the membrane of the mitochondria must have the correct density of phospholipids to conduct a relevant charge distribution that attracts the particle in question. For instance, for a greater density of phospholipids, the membrane contributes to a greater conductance of protons. Molecular solitons in proteins Alexander Davydov developed the quantum theory of molecular solitons in order to explain the transport of energy in protein α-helices in general and the physiology of muscle contraction in particular. He showed that the molecular solitons are able to preserve their shape through nonlinear interaction of amide I excitons and phonon deformations inside the lattice of hydrogen-bonded peptide groups.
In 1979, Davydov published his complete textbook on quantum biology entitled "Biology and Quantum Mechanics" featuring quantum dynamics of proteins, cell membranes, bioenergetics, muscle contraction, and electron transport in biomolecules. Information encoding Magnetoreception Magnetoreception is the ability of animals to navigate using the inclination of the magnetic field of the Earth. A possible explanation for magnetoreception is the entangled radical pair mechanism. The radical-pair mechanism is well-established in spin chemistry, and was speculated to apply to magnetoreception in 1978 by Schulten et al. The ratio between singlet and triplet pairs is changed by the interaction of entangled electron pairs with the magnetic field of the Earth. In 2000, cryptochrome was proposed as the "magnetic molecule" that could harbor magnetically sensitive radical-pairs. Cryptochrome, a flavoprotein found in the eyes of European robins and other animal species, is the only protein known to form photoinduced radical-pairs in animals. When it interacts with light particles, cryptochrome goes through a redox reaction, which yields radical pairs both during the photo-reduction and the oxidation. The function of cryptochrome is diverse across species; however, the photoinduction of radical-pairs occurs by exposure to blue light, which excites an electron in a chromophore. Magnetoreception is also possible in the dark, so the mechanism must rely more on the radical pairs generated during light-independent oxidation. Experiments in the lab support the basic theory that radical-pair electrons can be significantly influenced by very weak magnetic fields, i.e., merely the direction of weak magnetic fields can affect a radical-pair's reactivity and therefore can "catalyze" the formation of chemical products. Whether this mechanism applies to magnetoreception and/or quantum biology, that is, whether Earth's magnetic field "catalyzes" the formation of biochemical products by the aid of radical-pairs, is not fully clear. Radical-pairs may not need to be entangled, the key quantum feature of the radical-pair mechanism, to play a part in these processes. There are entangled and non-entangled radical-pairs, but disturbing only entangled radical-pairs is not possible with current technology. Researchers found evidence for the radical-pair mechanism of magnetoreception when European robins, cockroaches, and garden warblers could no longer navigate when exposed to a radio frequency that obstructs magnetic fields and radical-pair chemistry. Further evidence came from a comparison of Cryptochrome 4 (CRY4) from migrating and non-migrating birds. CRY4 from chicken and pigeon were found to be less sensitive to magnetic fields than those from the (migrating) European robin, suggesting evolutionary optimization of this protein as a sensor of magnetic fields. DNA mutation DNA acts as the instructions for making proteins throughout the body. It consists of 4 nucleotides: guanine, thymine, cytosine, and adenine. The order of these nucleotides gives the "recipe" for the different proteins. Whenever a cell reproduces, it must copy these strands of DNA. However, sometimes during the process of copying a strand of DNA, a mutation, or an error in the DNA code, can occur. A theory for the reasoning behind DNA mutation is explained in the Löwdin DNA mutation model. In this model, a nucleotide may spontaneously change its form through a process of quantum tunneling.
Because of this, the changed nucleotide will lose its ability to pair with its original base pair and consequently change the structure and order of the DNA strand. Exposure to ultraviolet light and other types of radiation can cause DNA mutation and damage. The radiation also can modify the bonds along the DNA strand in the pyrimidines and cause them to bond with themselves, creating a dimer. In many prokaryotes and plants, these bonds are repaired by a DNA-repair-enzyme photolyase. As its prefix implies, photolyase is reliant on light in order to repair the strand. Photolyase works with its cofactor FADH, flavin adenine dinucleotide, while repairing the DNA. Photolyase is excited by visible light and transfers an electron to the cofactor FADH. FADH—now in the possession of an extra electron—transfers the electron to the dimer to break the bond and repair the DNA. The electron tunnels from the FADH to the dimer. Although the range of this tunneling is much larger than feasible in a vacuum, the tunneling in this scenario is said to be "superexchange-mediated tunneling," and is possible due to the protein's ability to boost the tunneling rates of the electron. Other Other quantum phenomena in biological systems include the conversion of chemical energy into motion and brownian motors in many cellular processes. Pseudoscience Alongside the multiple strands of scientific inquiry into quantum mechanics has come unconnected pseudoscientific interest; this caused scientists to approach quantum biology cautiously. Hypotheses such as orchestrated objective reduction which postulate a link between quantum mechanics and consciousness have drawn criticism from the scientific community with some claiming it to be pseudoscientific and "an excuse for quackery". References External links Philip Ball (2015). "Quantum Biology: An Introduction". The Royal Institution Quantum Biology and the Hidden Nature of Nature, World Science Festival 2012, video of podium discussion Quantum Biology: Current Status and Opportunities, September 17-18, 2012, University of Surrey, UK Biophysics
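The exponential dependence of tunneling on barrier width and particle mass, invoked in the enzyme catalysis and mitochondria sections above, can be made concrete with the textbook rectangular-barrier estimate T ≈ exp(−2κL), where κ = sqrt(2m(V0 − E))/ħ. The short Python sketch below is illustrative only: the 0.5 eV barrier height and 1 Å width are assumed round numbers chosen to mimic a biological transfer distance, not parameters of any particular enzyme or membrane.

```python
import math

HBAR = 1.054571817e-34      # reduced Planck constant, J*s
EV = 1.602176634e-19        # one electron volt, J
M_ELECTRON = 9.1093837e-31  # electron mass, kg
M_PROTON = 1.67262192e-27   # proton mass, kg

def tunneling_probability(mass_kg, barrier_ev, width_m):
    """Rough rectangular-barrier estimate T ~ exp(-2*kappa*L),
    reasonable when the barrier is high/wide enough that T << 1."""
    kappa = math.sqrt(2.0 * mass_kg * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2.0 * kappa * width_m)

# Assumed, illustrative numbers: a 0.5 eV barrier about one angstrom wide.
barrier_ev, width_m = 0.5, 1e-10
print("electron:", tunneling_probability(M_ELECTRON, barrier_ev, width_m))  # ~0.5
print("proton:  ", tunneling_probability(M_PROTON, barrier_ev, width_m))    # ~3e-14
```

The roughly fourteen orders of magnitude between the two values illustrate why angstrom-scale distances matter so much more for proton transfer than for electron transfer.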
Quantum biology
[ "Physics", "Biology" ]
5,261
[ "Applied and interdisciplinary physics", "Quantum mechanics", "Biophysics", "nan", "Quantum biology" ]
13,537,808
https://en.wikipedia.org/wiki/Alcanivorax
Alcanivorax is a genus of alkane-degrading marine bacteria. Species Alcanivorax comprises the following species: Alcanivorax balearicus Rivas et al. 2007 Alcanivorax borkumensis Yakimov et al. 1998 Alcanivorax dieselolei Liu and Shao 2005 Alcanivorax gelatiniphagus Kwon et al. 2015 Alcanivorax hongdengensis Wu et al. 2009 Alcanivorax indicus Song et al. 2018 Alcanivorax jadensis (Bruns and Berthe-Corti 1999) Fernández-Martínez et al. 2003 "Alcanivorax limicola" Zhu et al. 2021 Alcanivorax marinus Lai et al. 2013 Alcanivorax mobilis Yang et al. 2018 Alcanivorax nanhaiticus Lai et al. 2016 Alcanivorax pacificus Lai et al. 2011 Alcanivorax profundi Liu et al. 2019 Alcanivorax profundimaris Dong et al. 2021 Alcanivorax sediminis Liao et al. 2020 Alcanivorax venustensis Fernández-Martínez et al. 2003 Alcanivorax xenomutans Rahul et al. 2014 References Oceanospirillales Biodegradation Bacteria genera
Alcanivorax
[ "Chemistry" ]
291
[ "Biodegradation" ]
13,538,675
https://en.wikipedia.org/wiki/Liuli%20Gongfang
Liuli Gongfang or Liuligongfang () is Taiwan's only contemporary glass studio devoted to artistic Chinese glassware. Liuligongfang was founded in 1987 by actress Loretta Yang and director Chang Yi. Their name refers to liuli, a form of archaic Chinese glasswork; the founders chose to use the word liuli, rather than the common name for glass, boli (玻璃), to honor their cultural origin. The founders aimed to revive the art of antique Chinese art glass, the production of which had dwindled following the First and Second Opium Wars in the 19th century. Yang mortgaged her house and those of all her family members in order to gain start-up capital. After much trial and error, costing $1 million and taking more than three years, she and Chang were able to master the French pâte-de-verre or lost-wax casting method. At the time of their founding, they operated a two-person workshop in Tamsui, Taipei County (now New Taipei City). Yang and Chang originally had a fairly strict division of labour, with Yang handling the artistic aspects of their work, while Chang managed finances and other business responsibilities; since Chang's 1997 heart attack, Yang has taken over more of Chang's responsibilities as well, including contact with the media. Works created by Liuli Gongfang have become part of the permanent collection of London's Victoria and Albert Museum as well as the Palace Museum in Beijing's Forbidden City. People First Party chairman James Soong, during his visit to mainland China (the second such visit by a Taiwanese politician, after that of Lien Chan), presented Communist Party General Secretary Hu Jintao with a Liuli Gongfang sculpture; Hu gave him Jingde porcelain in return. Collections around the world Liuligongfang art works have been exhibited in Taiwan, Japan, mainland China, Europe, and the United States. Several pieces have become part of the permanent collections of some of the most well-known museums, including the Palace Museum in Beijing, the Shanghai Fine Arts Museum, the Tsui Museum of Art in Hong Kong, the Medicine Buddha Temple in Nara, Japan, the National Museum of Women in the Arts in Washington, D.C., United States, the Victoria and Albert Museum in the United Kingdom, and the Bowers Museum in California, United States. See also List of companies of Taiwan References External links Official homepage United States Online Store Chinese art Glassmaking companies Privately held companies of Taiwan Taiwanese brands Taiwanese companies established in 1987 Manufacturing companies established in 1987 Companies based in New Taipei
Liuli Gongfang
[ "Materials_science", "Engineering" ]
520
[ "Glass engineering and science", "Glassmaking companies", "Engineering companies" ]
13,539,244
https://en.wikipedia.org/wiki/Sarmatic%20mixed%20forests
The Sarmatic mixed forests constitute an ecoregion within the temperate broadleaf and mixed forests biome, according to the World Wide Fund for Nature classification (ecoregion PA0436). The term comes from the word "Sarmatia". Distribution This ecoregion is situated in Europe between boreal forests/taiga in the north and the broadleaf belt in the south and occupies about 846,100 km2 (326,700 mi2) in southernmost Norway, southern Sweden (except southernmost), southwesternmost Finland, northern Lithuania, Latvia, Estonia, northern Belarus and the central part of European Russia. It is bordered by the ecoregions of Scandinavian and Russian taiga (north), Urals montane tundra and taiga (east), East European forest steppe (southeast), Central European mixed forests (southwest) and Baltic mixed forests (west), as well as by the Baltic Sea. Description The ecoregion consists of mixed forests dominated by Quercus robur (which only occasionally happens further north), Picea abies (which disappears further south due to insufficient moisture) and Pinus sylvestris (in drier locations). Geobotanically, it is divided between the Central European and Eastern European floristic provinces of the Circumboreal Region of the Holarctic Kingdom. References External links Temperate broadleaf and mixed forests Ecoregions of Europe Ecoregions of Belarus Ecoregions of Estonia Ecoregions of Finland Ecoregions of Latvia Ecoregions of Lithuania Ecoregions of Norway Ecoregions of Russia Ecoregions of Sweden Forests of Belarus Forests of Estonia Forests of Finland Forests of Latvia Forests of Lithuania Forests of Norway Forests of Russia Forests of Sweden . . . . . . . . Biota of Belarus Biota of Estonia Biota of Finland Biota of Latvia Biota of Lithuania Biota of Norway Biota of Russia Biota of Sweden
Sarmatic mixed forests
[ "Biology" ]
391
[ "Biota of Russia", "Biota of Belarus", "Biota of Finland", "Biota of Estonia", "Biota of Norway", "Biota by country", "Biota of Lithuania", "Biota of Latvia", "Biota of Sweden" ]
13,540,243
https://en.wikipedia.org/wiki/Prolate%20trochoidal%20mass%20spectrometer
A prolate trochoidal mass spectrometer is a chemical analysis instrument in which the ions of different mass-to-charge ratio are separated by means of mutually perpendicular electric and magnetic fields so that the ions follow a prolate trochoidal path. These devices are sometimes called cycloidal mass spectrometers, although the path is not a cycloid (the prolate trochoid path has loops, the cycloid has cusps). Applications The instruments are used for the analysis of gases and in gas chromatography-mass spectrometry. The trochoidal configuration can also be used as the basis of an electron monochromator. References External links Mass spectrometry
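A minimal sketch of why the crossed-field geometry sorts ions by mass-to-charge ratio: in a uniform electric field E perpendicular to a uniform magnetic field B, an ion of mass m and charge q executes cyclotron motion superimposed on a drift of speed E/B, which traces a trochoid. The distance advanced along the drift direction per cyclotron period T = 2πm/(qB) is then

```latex
% Pitch of the trochoid: drift speed E/B times the cyclotron period 2\pi m/(qB)
d \;=\; \frac{E}{B}\cdot\frac{2\pi m}{qB} \;=\; \frac{2\pi m E}{q B^{2}}
```

Because d is proportional to m/q and, in this idealized picture, independent of the ion's initial velocity, ions of different mass-to-charge ratio return to the drift axis at different positions, where a slit or detector can select a single m/q; the focusing geometry of a practical instrument is more involved than this simplified relation.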
Prolate trochoidal mass spectrometer
[ "Physics", "Chemistry" ]
153
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Matter" ]
13,541,266
https://en.wikipedia.org/wiki/Demarcation%20line
A political demarcation line is a geopolitical border, often agreed upon as part of an armistice or ceasefire. Africa Moroccan Wall, delimiting the Moroccan-controlled part of Western Sahara from the Sahrawi-controlled part Americas During European imperialism overseas, the lines of amity were drawn to differentiate Europe from the rest of the world. The Line of Demarcation was one specific line drawn along a meridian in the Atlantic Ocean as part of the Treaty of Tordesillas in 1494 to divide new lands claimed by Portugal from those of Spain. This line was drawn in 1493 after Christopher Columbus returned from his maiden voyage to the Americas. The Mason–Dixon line (or "Mason and Dixon's Line") is a demarcation line between four U.S. states, forming part of the borders of Pennsylvania, Maryland, Delaware, and West Virginia (then part of Virginia). It was surveyed between 1763 and 1767 by Charles Mason and Jeremiah Dixon in the resolution of a border dispute between British colonies in Colonial America. Asia Middle East The Blue Line is a border demarcation between Lebanon and Israel published by the United Nations on 7 June 2001 for the purposes of determining whether Israel had fully withdrawn from Lebanon. The term Green Line is used to refer to the 1949 Armistice lines established between Israel and its neighbours (Egypt, Jordan, Lebanon and Syria) after the 1948 Arab–Israeli War. The Purple Line was the ceasefire line between Israel and Syria after the 1967 Six-Day War. The Green Line (Lebanon) refers to a line of demarcation in Beirut, Lebanon during the Lebanese Civil War from 1975 to 1990. It separated the mainly Muslim factions in West Beirut from the predominantly Christian East Beirut controlled by the Lebanese Front. South and East Asia The McMahon Line is a line dividing China and India, drawn on a map attached to the Simla Convention, a treaty negotiated between the British Empire, China, and Tibet in 1914. The Military Demarcation Line, sometimes referred to as the Armistice Line, is the border between North Korea and South Korea. The Military Demarcation Line was established by the Korean Armistice Agreement as the line between the two Koreas at the end of the Korean War in 1953. The Northern Limit Line or North Limit Line (NLL) is a disputed maritime demarcation line in the Yellow Sea between North Korea and South Korea. The Line of Actual Control established by India and the People's Republic of China between Aksai Chin and Ladakh after the Sino-Indian War of 1962. The Line of Control established by India and Pakistan over the disputed region of Kashmir. The nine-dash line appears on maps used by the People's Republic of China and the Republic of China (Taiwan) accompanying their South China Sea claims, which are challenged by Malaysia, the Philippines, and Vietnam. Europe The Curzon Line was a demarcation line proposed in 1920 by British Foreign Secretary Lord Curzon as a possible armistice line between Poland to the west and the Soviet republics to the east during the Polish-Soviet War of 1919–21. The modern Poland–Belarus and Poland–Ukraine borders mostly follow the Curzon line. The Foch Line was a temporary demarcation line between Poland and Lithuania proposed by the Entente in the aftermath of World War I. The demarcation line in France during the Vichy period, imposed by Nazi Germany from 1940 to 1942, separated the German-occupied zone in the north from the free zone in the south.
The Line of Contact was a demarcation line marking where Soviet-aligned forces and Western-aligned forces met as they advanced into Germany and Austria at the end of World War II in Europe. The Bosnian Inter-Entity Boundary Line is an ethno-administrative border established by the Dayton Agreement that followed the end of the Bosnian War. See also Demilitarized zone Borders
Demarcation line
[ "Physics" ]
795
[ "Spacetime", "Borders", "Space" ]
13,542,806
https://en.wikipedia.org/wiki/Estrin%27s%20scheme
In numerical analysis, Estrin's scheme (after Gerald Estrin), also known as Estrin's method, is an algorithm for numerical evaluation of polynomials. Horner's method for evaluation of polynomials is one of the most commonly used algorithms for this purpose, and unlike Estrin's scheme it is optimal in the sense that it minimizes the number of multiplications and additions required to evaluate an arbitrary polynomial. On a modern processor, instructions that do not depend on each other's results may run in parallel. Horner's method contains a series of multiplications and additions that each depend on the previous instruction and so cannot execute in parallel. Estrin's scheme is one method that attempts to overcome this serialization while still being reasonably close to optimal. Description of the algorithm Estrin's scheme operates recursively, converting a degree-n polynomial in x (for n≥2) to a degree-⌊n/2⌋ polynomial in x2 using ⌈n/2⌉ independent operations (plus one to compute x2). Given an arbitrary polynomial P(x) = C0 + C1x + C2x2 + C3x3 + ⋯ + Cnxn, one can group adjacent terms into sub-expressions of the form (A + Bx) and rewrite it as a polynomial in x2: P(x) = (C0 + C1x) + (C2 + C3x)x2 + (C4 + C5x)x4 + ⋯ = Q(x2). Each of these sub-expressions, and x2, may be computed in parallel. They may also be evaluated using a native multiply–accumulate instruction on some architectures, an advantage that is shared with Horner's method. This grouping can then be repeated to get a polynomial in x4: P(x) = Q(x2) = ((C0 + C1x) + (C2 + C3x)x2) + ((C4 + C5x) + (C6 + C7x)x2)x4 + ⋯ = R(x4). Repeating this until the polynomial is linear, ⌊log2 n⌋ + 1 levels in total, one arrives at Estrin's scheme for parallel evaluation of a polynomial: Compute Di = C2i + C2i+1x for all 0 ≤ i ≤ ⌊n/2⌋. (If n is even, then Cn+1 = 0 and Dn/2 = Cn.) If n ≤ 1, the computation is complete and D0 is the final answer. Otherwise, compute y = x2 (in parallel with the computation of Di). Evaluate Q(y) = D0 + D1y + D2y2 + ⋯ + D⌊n/2⌋y^⌊n/2⌋ using Estrin's scheme. This performs a total of n multiply-accumulate operations (the same as Horner's method) in line 1, and an additional ⌊log2 n⌋ squarings in line 3. In exchange for those extra squarings, all of the operations in each level of the scheme are independent and may be computed in parallel; the longest dependency path is ⌊log2 n⌋ + 1 operations long. Examples Take Pn(x) to mean the nth order polynomial of the form: Pn(x) = C0 + C1x + C2x2 + C3x3 + ⋯ + Cnxn Written with Estrin's scheme we have: P3(x) = (C0 + C1x) + (C2 + C3x) x2 P4(x) = (C0 + C1x) + (C2 + C3x) x2 + C4x4 P5(x) = (C0 + C1x) + (C2 + C3x) x2 + (C4 + C5x) x4 P6(x) = (C0 + C1x) + (C2 + C3x) x2 + ((C4 + C5x) + C6x2)x4 P7(x) = (C0 + C1x) + (C2 + C3x) x2 + ((C4 + C5x) + (C6 + C7x) x2)x4 P8(x) = (C0 + C1x) + (C2 + C3x) x2 + ((C4 + C5x) + (C6 + C7x) x2)x4 + C8x8 P9(x) = (C0 + C1x) + (C2 + C3x) x2 + ((C4 + C5x) + (C6 + C7x) x2)x4 + (C8 + C9x) x8 … In full detail, consider the evaluation of P15(x): Inputs: x, C0, C1, C2, C3, C4, C5 C6, C7, C8, C9 C10, C11, C12, C13 C14, C15 Step 1: x2, C0+C1x, C2+C3x, C4+C5x, C6+C7x, C8+C9x, C10+C11x, C12+C13x, C14+C15x Step 2: x4, (C0+C1x) + (C2+C3x)x2, (C4+C5x) + (C6+C7x)x2, (C8+C9x) + (C10+C11x)x2, (C12+C13x) + (C14+C15x)x2 Step 3: x8, ((C0+C1x) + (C2+C3x)x2) + ((C4+C5x) + (C6+C7x)x2)x4, ((C8+C9x) + (C10+C11x)x2) + ((C12+C13x) + (C14+C15x)x2)x4 Step 4: (((C0+C1x) + (C2+C3x)x2) + ((C4+C5x) + (C6+C7x)x2)x4) + (((C8+C9x) + (C10+C11x)x2) + ((C12+C13x) + (C14+C15x)x2)x4)x8 References Further reading fast_polynomial, a Sage library using an improved scheme (extended abstract).
Numerical analysis
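As an illustration of the scheme described above, the following is a minimal sketch of Estrin-style evaluation written sequentially in Python (a real implementation would issue the independent operations within each level in parallel, e.g. with SIMD or fused multiply–adds); the function name and structure here are illustrative, not part of any standard library.

```python
def estrin_eval(coeffs, x):
    """Evaluate C0 + C1*x + ... + Cn*x**n using Estrin's scheme.

    Each pass pairs adjacent coefficients into A + B*x_power, roughly halving
    the number of coefficients, then squares x_power for the next level.
    The pairings within one pass are mutually independent, so they could be
    computed in parallel; this sketch simply runs them in a loop.
    """
    if not coeffs:
        return 0
    coeffs = list(coeffs)
    x_power = x
    while len(coeffs) > 1:
        paired = []
        for i in range(0, len(coeffs) - 1, 2):
            paired.append(coeffs[i] + coeffs[i + 1] * x_power)  # one multiply-accumulate
        if len(coeffs) % 2 == 1:          # odd count: last coefficient passes through
            paired.append(coeffs[-1])
        coeffs = paired
        x_power = x_power * x_power       # x, x^2, x^4, ...
    return coeffs[0]

# Quick check against Horner's method for P(x) = 1 + 2x + 3x^2 + 4x^3 + 5x^4 at x = 2
cs = [1, 2, 3, 4, 5]
horner = 0.0
for c in reversed(cs):
    horner = horner * 2.0 + c
assert estrin_eval(cs, 2.0) == horner == 129.0
```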
Estrin's scheme
[ "Mathematics" ]
1,384
[ "Mathematical relations", "Computational mathematics", "Approximations", "Numerical analysis" ]
13,542,841
https://en.wikipedia.org/wiki/Rat%20Rock%20%28Central%20Park%29
Rat Rock, also known as Umpire Rock, is an outcrop of Manhattan schist which protrudes from the bedrock in Central Park, Manhattan, New York City. It is named after the rats that used to swarm there at night. It is located near the southwest corner of the park, south of the Heckscher Ballfields near the alignments of 62nd Street and Seventh Avenue. It measures wide and tall with different east, west, and north faces, each of which present differing climbing challenges. The rock has striations caused by glaciation. Boulderers congregate there, sometimes as many as fifty per day. Some are regulars such as Yukihiko Ikumori, a gardener from the West Village who is known as the spiritual godfather of the rock. Others are just passing through, such as tourists and visitors who learn about the climbing spot from the Internet and word of mouth. Experienced climbers such as Ikumori often show neophytes good routes and techniques. More experienced outsiders may be disappointed as the quality of the stone is poor, the setting is gloomy and the climbs present so little challenge that it has been called "one of America's most pathetic boulders". The park police formerly ticketed climbers who climbed more than a few feet up the rock. The City Climbers Club approached the park authorities and, by working to provide safety features such as wood chips around the base, they were able to legalize climbing there. References External links Bouldering in Central Park RatRock @ ClimbNYC.com Central Park Climbing areas of the United States Stones
Rat Rock (Central Park)
[ "Physics" ]
325
[ "Stones", "Physical objects", "Matter" ]
13,543,069
https://en.wikipedia.org/wiki/Moli%C3%A8re%20radius
The Molière radius is a characteristic constant of a material giving the scale of the transverse dimension of the fully contained electromagnetic showers initiated by an incident high energy electron or photon. By definition, it is the radius of a cylinder containing on average 90% of the shower's energy deposition. Two Molière radii contain 95% of the shower's energy deposition. It is related to the radiation length X0 by the approximate relation RM ≈ 0.0265 X0 (Z + 1.2), where Z is the atomic number. The Molière radius is useful in experimental particle physics in the design of calorimeters: a smaller Molière radius means better shower position resolution, and better shower separation due to a smaller degree of shower overlaps. The Molière radius is named after German physicist Paul Friederich Gaspard Gert Molière (1909–64). Molière radii for typical materials used in calorimetry LYSO crystals: 2.07 cm Lead tungstate crystals: 2.2 cm Caesium iodide: 3.5 cm Liquid krypton: 4.7 cm Liquid argon: 9.04 cm Earth's atmosphere at sea level: 79 m Earth's atmosphere above ground: 91 m
Molière radius
[ "Physics" ]
256
[ "Particle physics stubs", "Experimental physics", "Particle physics", "Experimental particle physics" ]
13,543,277
https://en.wikipedia.org/wiki/Biskit
Biskit is an open source software package that facilitates research in structural bioinformatics and molecular modelling. Written in Python, it consists of: An object-oriented programming library for manipulating and analyzing macromolecular structures, protein complexes and molecular dynamics trajectories A set of programs for solving specific tasks, such as automatic prediction of protein structures by homology modeling, and possible prediction of protein complex structures through flexible protein-protein docking The library delegates many calculations to more specialized third-party software. It currently utilizes 15 external applications, including X-PLOR, Hex, T-Coffee, DSSP and MODELLER. The latest Biskit version, 2.4.0, was released on 4 Mar 2012. It was originally developed at the Pasteur Institute. The name "Biskit" refers to the research group's name, Unité de BioInformatique Structurale. External links Structural bioinformatics software Molecular modelling Physics software Computational chemistry software Free science software Free software programmed in Python Molecular dynamics
Biskit
[ "Physics", "Chemistry" ]
208
[ "Molecular physics", "Computational chemistry software", "Chemistry software", "Computational physics", "Molecular dynamics", "Computational chemistry", "Theoretical chemistry", "Molecular modelling", "Molecular physics stubs", "Physics software" ]
13,543,706
https://en.wikipedia.org/wiki/Multiscale%20decision-making
Multiscale decision-making, also referred to as multiscale decision theory (MSDT), is an approach in operations research that combines game theory, multi-agent influence diagrams, in particular dependency graphs, and Markov decision processes to solve multiscale challenges in sociotechnical systems. MSDT considers interdependencies within and between the following scales: system level, time, and information. Multiscale decision theory builds upon decision theory and multiscale mathematics. Multiscale decision theory can model and analyze complex decision-making networks that exhibit multiscale phenomena. The theory's results can be used by mechanism designers and decision-makers in organizations and complex systems to improve system performance and decision quality. Multiscale decision theory has been applied to manufacturing enterprises, service systems, supply chain management, healthcare, and systems engineering, among others. In healthcare, for example, MSDT has been used to identify multi-level incentives that can improve healthcare value (quality of outcomes per dollar spent). The Multiscale Decision Making Laboratory at Virginia Tech, directed by Dr. Christian Wernz, is working at the forefront of MSDT theory and applications. Multiscale decision theory is related to: Multiscale modeling Decision analysis Cooperative distributed problem solving Decentralized decision making References Bibliography Filar, J., Vrieze, K., Competitive Markov Decision Processes, Springer, 1996. Mesarović, M. D., Macko, D. and Takahara, Y., Theory of Hierarchical, Multilevel, Systems, Mathematics in Science and Engineering, Volume 68, Academic Press, 1970. Schneeweiss, C., Distributed Decision Making, Springer, 2003. Wernz, C., Multiscale Decision-Making: Bridging Temporal and Organizational Scales in Hierarchical Systems, Dissertation, University of Massachusetts Amherst. http://scholarworks.umass.edu/dissertations/AAI3336994/ External links Multiscale Mathematics Initiative: A Roadmap Multiscale Decision Making Laboratory, Virginia Tech Multi-Scale Behavioral Modeling and Analysis Promoting a Fundamental Understanding of Agent-Based System Design and Operation Decision analysis Markov processes
Multiscale decision-making
[ "Mathematics" ]
450
[ "Game theory", "Mechanism design" ]
13,543,970
https://en.wikipedia.org/wiki/Wellsite%20Information%20Transfer%20Specification
The Wellsite Information Transfer Specification (WITS) is a specification for the transfer of drilling rig-related data. This petroleum industry standard is recognized by a number of companies internationally and is supported by many hardware devices and software applications. WITS is a multi-layered specification: Layer 0 describes an ASCII-based transfer specification Layer 1 describes a binary-based format based on 25 predefined fixed-size records and the Log Information Standard (LIS) data-transmission specification Layer 2 describes bidirectional communication using LIS Comment records Layer 2b describes buffering of data Layer 4 extends the previous layers to use a different data exchange format, RP66 Though still in active use as of 2013, the specification has been superseded by the XML-based WITSML. See also Wellsite information transfer standard markup language Drilling technology Petroleum technology
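As a rough illustration of what Layer 0 (often called WITS Level 0) traffic looks like, the sketch below assumes the commonly used plain-ASCII convention in which a record is framed by "&&" and "!!" lines and each data line carries a four-character item code followed by its value; the frame markers, the sample codes, and the field meanings here are assumptions for illustration and should be checked against the actual specification and the sending system's WITS configuration.

```python
def parse_wits0_records(text):
    """Parse WITS Level 0 style ASCII records into dictionaries.

    Assumed framing: each record starts with a line beginning '&&' and ends
    with a line beginning '!!'; every line in between is a 4-character item
    code immediately followed by the value.
    """
    records, current = [], None
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("&&"):        # assumed start-of-record marker
            current = {}
        elif line.startswith("!!"):      # assumed end-of-record marker
            if current is not None:
                records.append(current)
            current = None
        elif current is not None and len(line) >= 4:
            code, value = line[:4], line[4:].strip()
            current[code] = value
    return records

# Hypothetical example frame; the item codes and values are made up for illustration.
sample = "&&\n01083650.25\n011312.8\n!!\n"
print(parse_wits0_records(sample))  # [{'0108': '3650.25', '0113': '12.8'}]
```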
Wellsite Information Transfer Specification
[ "Chemistry", "Engineering" ]
172
[ "Petroleum engineering", "Petroleum technology" ]
13,544,419
https://en.wikipedia.org/wiki/MIMO
In radio, multiple-input and multiple-output (MIMO) () is a method for multiplying the capacity of a radio link using multiple transmission and receiving antennas to exploit multipath propagation. MIMO has become an essential element of wireless communication standards including IEEE 802.11n (Wi-Fi 4), IEEE 802.11ac (Wi-Fi 5), HSPA+ (3G), WiMAX, and Long Term Evolution (LTE). More recently, MIMO has been applied to power-line communication for three-wire installations as part of the ITU G.hn standard and of the HomePlug AV2 specification. At one time, in wireless the term "MIMO" referred to the use of multiple antennas at the transmitter and the receiver. In modern usage, "MIMO" specifically refers to a class of techniques for sending and receiving more than one data signal simultaneously over the same radio channel by exploiting the difference in signal propagation between different antennas (e.g. due to multipath propagation). Additionally, modern MIMO usage often refers to multiple data signals sent to different receivers (with one or more receive antennas) though this is more accurately termed multi-user multiple-input single-output (MU-MISO). History Early research MIMO is often traced back to 1970s research papers concerning multi-channel digital transmission systems and interference (crosstalk) between wire pairs in a cable bundle: AR Kaye and DA George (1970), Branderburg and Wyner (1974), and W. van Etten (1975, 1976). Although these are not examples of exploiting multipath propagation to send multiple information streams, some of the mathematical techniques for dealing with mutual interference proved useful to MIMO development. In the mid-1980s Jack Salz at Bell Laboratories took this research a step further, investigating multi-user systems operating over "mutually cross-coupled linear networks with additive noise sources" such as time-division multiplexing and dually-polarized radio systems. Methods were developed to improve the performance of cellular radio networks and enable more aggressive frequency reuse in the early 1990s. Space-division multiple access (SDMA) uses directional or smart antennas to communicate on the same frequency with users in different locations within range of the same base station. An SDMA system was proposed by Richard Roy and Björn Ottersten, researchers at ArrayComm, in 1991. Their US patent (No. 5515378 issued in 1996) describes a method for increasing capacity using "an array of receiving antennas at the base station" with a "plurality of remote users." Invention Arogyaswami Paulraj and Thomas Kailath proposed an SDMA-based inverse multiplexing technique in 1993. Their US patent (No. 5,345,599 issued in 1994) described a method of broadcasting at high data rates by splitting a high-rate signal "into several low-rate signals" to be transmitted from "spatially separated transmitters" and recovered by the receive antenna array based on differences in "directions-of-arrival." Paulraj was awarded the prestigious Marconi Prize in 2014 for "his pioneering contributions to developing the theory and applications of MIMO antennas. ... His idea for using multiple antennas at both the transmitting and receiving stations – which is at the heart of the current high speed WiFi and 4G mobile systems – has revolutionized high speed wireless." 
In an April 1996 paper and subsequent patent, Greg Raleigh proposed that natural multipath propagation can be exploited to transmit multiple, independent information streams using co-located antennas and multi-dimensional signal processing. The paper also identified practical solutions for modulation (MIMO-OFDM), coding, synchronization, and channel estimation. Later that year (September 1996) Gerard J. Foschini submitted a paper that also suggested it is possible to multiply the capacity of a wireless link using what the author described as "layered space-time architecture." Greg Raleigh, V. K. Jones, and Michael Pollack founded Clarity Wireless in 1996, and built and field-tested a prototype MIMO system. Cisco Systems acquired Clarity Wireless in 1998. Bell Labs built a laboratory prototype demonstrating its V-BLAST (Vertical-Bell Laboratories Layered Space-Time) technology in 1998. Arogyaswami Paulraj founded Iospan Wireless in late 1998 to develop MIMO-OFDM products. Iospan was acquired by Intel in 2003. Neither Clarity Wireless nor Iospan Wireless shipped MIMO-OFDM products before being acquired. Standards and commercialization MIMO technology has been standardized for wireless LANs, 3G mobile phone networks, and 4G mobile phone networks and is now in widespread commercial use. Greg Raleigh and V. K. Jones founded Airgo Networks in 2001 to develop MIMO-OFDM chipsets for wireless LANs. The Institute of Electrical and Electronics Engineers (IEEE) created a task group in late 2003 to develop a wireless LAN standard delivering at least 100 Mbit/s of user data throughput. There were two major competing proposals: TGn Sync was backed by companies including Intel and Philips, and WWiSE was supported by companies including Airgo Networks, Broadcom, and Texas Instruments. Both groups agreed that the 802.11n standard would be based on MIMO-OFDM with 20 MHz and 40 MHz channel options. TGn Sync, WWiSE, and a third proposal (MITMOT, backed by Motorola and Mitsubishi) were merged to create what was called the Joint Proposal. In 2004, Airgo became the first company to ship MIMO-OFDM products. Qualcomm acquired Airgo Networks in late 2006. The final 802.11n standard supported speeds up to 600 Mbit/s (using four simultaneous data streams) and was published in late 2009. Surendra Babu Mandava and Arogyaswami Paulraj founded Beceem Communications in 2004 to produce MIMO-OFDM chipsets for WiMAX. The company was acquired by Broadcom in 2010. WiMAX was developed as an alternative to cellular standards, is based on the 802.16e standard, and uses MIMO-OFDM to deliver speeds up to 138 Mbit/s. The more advanced 802.16m standard enables download speeds up to 1 Gbit/s. A nationwide WiMAX network was built in the United States by Clearwire, a subsidiary of Sprint-Nextel, covering 130 million points of presence (PoPs) by mid-2012. Sprint subsequently announced plans to deploy LTE (the cellular 4G standard) covering 31 cities by mid-2013 and to shut down its WiMAX network by the end of 2015. The first 4G cellular standard was proposed by NTT DoCoMo in 2004. Long term evolution (LTE) is based on MIMO-OFDM and continues to be developed by the 3rd Generation Partnership Project (3GPP). LTE specifies downlink rates up to 300 Mbit/s, uplink rates up to 75 Mbit/s, and quality of service parameters such as low latency. LTE Advanced adds support for picocells, femtocells, and multi-carrier channels up to 100 MHz wide. LTE has been embraced by both GSM/UMTS and CDMA operators. 
The first LTE services were launched in Oslo and Stockholm by TeliaSonera in 2009. As of 2015, there were more than 360 LTE networks in 123 countries operational with approximately 373 million connections (devices). Functions MIMO can be sub-divided into three main categories: precoding, spatial multiplexing (SM), and diversity coding. Precoding is multi-stream beamforming, in the narrowest definition. In more general terms, it is considered to be all spatial processing that occurs at the transmitter. In (single-stream) beamforming, the same signal is emitted from each of the transmit antennas with appropriate phase and gain weighting such that the signal power is maximized at the receiver input. The benefits of beamforming are to increase the received signal gain – by making signals emitted from different antennas add up constructively – and to reduce the multipath fading effect. In line-of-sight propagation, beamforming results in a well-defined directional pattern. However, conventional beams are not a good analogy in cellular networks, which are mainly characterized by multipath propagation. When the receiver has multiple antennas, the transmit beamforming cannot simultaneously maximize the signal level at all of the receive antennas, and precoding with multiple streams is often beneficial. Precoding requires knowledge of channel state information (CSI) at the transmitter and the receiver. Spatial multiplexing requires MIMO antenna configuration. In spatial multiplexing, a high-rate signal is split into multiple lower-rate streams and each stream is transmitted from a different transmit antenna in the same frequency channel. If these signals arrive at the receiver antenna array with sufficiently different spatial signatures and the receiver has accurate CSI, it can separate these streams into (almost) parallel channels. Spatial multiplexing is a very powerful technique for increasing channel capacity at higher signal-to-noise ratios (SNR). The maximum number of spatial streams is limited by the lesser of the number of antennas at the transmitter or receiver. Spatial multiplexing can be used without CSI at the transmitter, but can be combined with precoding if CSI is available. Spatial multiplexing can also be used for simultaneous transmission to multiple receivers, known as space-division multiple access or multi-user MIMO, in which case CSI is required at the transmitter. The scheduling of receivers with different spatial signatures allows good separability. Diversity coding techniques are used when there is no channel knowledge at the transmitter. In diversity methods, a single stream (unlike multiple streams in spatial multiplexing) is transmitted, but the signal is coded using techniques called space-time coding. The signal is emitted from each of the transmit antennas with full or near orthogonal coding. Diversity coding exploits the independent fading in the multiple antenna links to enhance signal diversity. Because there is no channel knowledge, there is no beamforming or array gain from diversity coding. Diversity coding can be combined with spatial multiplexing when some channel knowledge is available at the receiver. Forms Multi-antenna types Multi-antenna MIMO (or single-user MIMO) technology has been developed and implemented in some standards, e.g., 802.11n products. SISO/SIMO/MISO are special cases of MIMO. Multiple-input single-output (MISO) is a special case when the receiver has a single antenna. 
Single-input multiple-output (SIMO) is a special case when the transmitter has a single antenna. Single-input single-output (SISO) is a conventional radio system where neither transmitter nor receiver has multiple antennas. Principal single-user MIMO techniques Bell Laboratories Layered Space-Time (BLAST), Gerard. J. Foschini (1996) Per Antenna Rate Control (PARC), Varanasi, Guess (1998), Chung, Huang, Lozano (2001) Selective Per Antenna Rate Control (SPARC), Ericsson (2004) Some limitations The physical antenna spacing is selected to be large; multiple wavelengths at the base station. The antenna separation at the receiver is heavily space-constrained in handsets, though advanced antenna design and algorithm techniques are under discussion. Refer to: multi-user MIMO Multi-user types Multi-user MIMO (MU-MIMO) In recent 3GPP and WiMAX standards, MU-MIMO is being treated as one of the candidate technologies adoptable in the specification by a number of companies, including Samsung, Intel, Qualcomm, Ericsson, TI, Huawei, Philips, Nokia, and Freescale. For these and other firms active in the mobile hardware market, MU-MIMO is more feasible for low-complexity cell phones with a small number of reception antennas, whereas single-user SU-MIMO's higher per-user throughput is better suited to more complex user devices with more antennas. Enhanced multiuser MIMO: 1) Employs advanced decoding techniques, 2) Employs advanced precoding techniques SDMA represents either space-division multiple access or super-division multiple access where super emphasises that orthogonal division such as frequency- and time-division is not used but non-orthogonal approaches such as superposition coding are used. Cooperative MIMO (CO-MIMO) Uses multiple neighboring base stations to jointly transmit/receive data to/from users. As a result, neighboring base stations don't cause intercell interference as in the conventional MIMO systems. Macrodiversity MIMO A form of space diversity scheme which uses multiple transmit or receive base stations for communicating coherently with single or multiple users which are possibly distributed in the coverage area, in the same time and frequency resource. The transmitters are far apart in contrast to traditional microdiversity MIMO schemes such as single-user MIMO. In a multi-user macrodiversity MIMO scenario, users may also be far apart. Therefore, every constituent link in the virtual MIMO link has distinct average link SNR. This difference is mainly due to the different long-term channel impairments such as path loss and shadow fading which are experienced by different links. Macrodiversity MIMO schemes pose unprecedented theoretical and practical challenges. Among many theoretical challenges, perhaps the most fundamental challenge is to understand how the different average link SNRs affect the overall system capacity and individual user performance in fading environments. MIMO routing Routing a cluster by a cluster in each hop, where the number of nodes in each cluster is larger or equal to one. MIMO routing is different from conventional (SISO) routing since conventional routing protocols route node-by-node in each hop. Massive MIMO (mMIMO) A technology where the number of terminals is much less than the number of base station (mobile station) antennas. In a rich scattering environment, the full advantages of the massive MIMO system can be exploited using simple beamforming strategies such as maximum ratio transmission (MRT), maximum ratio-combining (MRC) or zero forcing (ZF). 
To achieve these benefits of massive MIMO, accurate CSI must be perfectly available. However, in practice, the channel between the transmitter and receiver is estimated from orthogonal pilot sequences which are limited by the coherence time of the channel. Most importantly, in a multicell setup, the reuse of pilot sequences of several co-channel cells will create pilot contamination. When there is pilot contamination, the performance of massive MIMO degrades quite drastically. To alleviate the effect of pilot contamination, Tadilo E. Bogale and Long B. Le proposed a simple pilot assignment and channel estimation method from limited training sequences. However, research published in 2018 by Emil Björnson, Jakob Hoydis, and Luca Sanguinetti shows that pilot contamination is solvable and that the capacity of a channel can always be increased, both in theory and in practice, by increasing the number of antennas. Holographic MIMO Another recent technology is holographic MIMO, which aims to realize high energy and spectral efficiency with very high spatial resolution. Holographic MIMO is a key conceptual enabler that has recently been gaining popularity because of its low-cost, transformative wireless structure consisting of sub-wavelength metallic or dielectric scattering particles, which is capable of deforming electromagnetic wave properties according to some desirable objectives. Applications Third Generation (3G) (CDMA and UMTS) allows for implementing space-time transmit diversity schemes, in combination with transmit beamforming at base stations. Fourth Generation (4G) LTE and LTE-Advanced define very advanced air interfaces extensively relying on MIMO techniques. LTE primarily focuses on single-link MIMO relying on spatial multiplexing and space-time coding, while LTE-Advanced further extends the design to multi-user MIMO. In wireless local area networks (WLANs) such as IEEE 802.11n (Wi-Fi), MIMO technology is implemented in the standard using three different techniques: antenna selection, space-time coding and possibly beamforming. Spatial multiplexing techniques make the receivers very complex, and therefore they are typically combined with orthogonal frequency-division multiplexing (OFDM) or with orthogonal frequency-division multiple access (OFDMA) modulation, where the problems created by a multi-path channel are handled efficiently. The IEEE 802.16e standard incorporates MIMO-OFDMA. The IEEE 802.11n standard, released in October 2009, recommends MIMO-OFDM. MIMO is used in mobile radio telephone standards such as 3GPP and 3GPP2. In 3GPP, High-Speed Packet Access plus (HSPA+) and Long Term Evolution (LTE) standards take MIMO into account. Moreover, to fully support cellular environments, MIMO research consortia including IST-MASCOT propose to develop advanced MIMO techniques, e.g., multi-user MIMO (MU-MIMO). MIMO wireless communications architectures and processing techniques can be applied to sensing problems. This is studied in a sub-discipline called MIMO radar. MIMO technology can also be used in non-wireless communications systems. One example is the home networking standard ITU-T G.9963, which defines a powerline communications system that uses MIMO techniques to transmit multiple signals over multiple AC wires (phase, neutral and ground). Mathematical description In MIMO systems, a transmitter sends multiple streams over multiple transmit antennas.
The transmit streams go through a matrix channel which consists of all paths between the transmit antennas at the transmitter and receive antennas at the receiver. Then, the receiver obtains the received signal vectors from the multiple receive antennas and decodes them into the original information. A narrowband flat fading MIMO system with $M_t$ transmit and $M_r$ receive antennas is modeled as $\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}$, where $\mathbf{y}$ and $\mathbf{x}$ are the receive and transmit vectors, respectively, and $\mathbf{H}$ (of size $M_r \times M_t$) and $\mathbf{n}$ are the channel matrix and the noise vector, respectively. Referring to information theory, the ergodic channel capacity of MIMO systems where both the transmitter and the receiver have perfect instantaneous channel state information is $C_\text{perfect-CSI} = E\left[\max_{\mathbf{Q}:\,\operatorname{tr}(\mathbf{Q}) \le 1} \log_2 \det\left(\mathbf{I} + \rho \mathbf{H}\mathbf{Q}\mathbf{H}^H\right)\right]$, where $(\cdot)^H$ denotes the Hermitian transpose and $\rho$ is the ratio between transmit power and noise power (i.e., transmit SNR). The optimal signal covariance $\mathbf{Q} = \mathbf{V}\mathbf{S}\mathbf{V}^H$ is achieved through singular value decomposition of the channel matrix $\mathbf{H} = \mathbf{U}\mathbf{D}\mathbf{V}^H$ and an optimal diagonal power allocation matrix $\mathbf{S} = \operatorname{diag}(s_1, \ldots, s_{\min(M_t, M_r)})$. The optimal power allocation is achieved through waterfilling, that is $s_i = \left(\mu - \frac{1}{\rho d_i^2}\right)^+$ for $i = 1, \ldots, \min(M_t, M_r)$, where $d_1, \ldots, d_{\min(M_t, M_r)}$ are the diagonal elements of $\mathbf{D}$, $(\cdot)^+$ is zero if its argument is negative, and $\mu$ is selected such that $s_1 + \cdots + s_{\min(M_t, M_r)} = 1$. If the transmitter has only statistical channel state information, then the ergodic channel capacity will decrease as the signal covariance $\mathbf{Q}$ can only be optimized in terms of the average mutual information as $C_\text{statistical-CSI} = \max_{\mathbf{Q}} E\left[\log_2 \det\left(\mathbf{I} + \rho \mathbf{H}\mathbf{Q}\mathbf{H}^H\right)\right]$. The spatial correlation of the channel has a strong impact on the ergodic channel capacity with statistical information. If the transmitter has no channel state information it can select the signal covariance to maximize channel capacity under worst-case statistics, which means $\mathbf{Q} = \frac{1}{M_t}\mathbf{I}$ and accordingly $C_\text{no-CSI} = E\left[\log_2 \det\left(\mathbf{I} + \frac{\rho}{M_t} \mathbf{H}\mathbf{H}^H\right)\right]$. Depending on the statistical properties of the channel, the ergodic capacity is no greater than $\min(M_t, M_r)$ times larger than that of a SISO system. MIMO detection A fundamental problem in MIMO communication is estimating the transmit vector, $\mathbf{x}$, given the received vector, $\mathbf{y}$. This can be posed as a statistical detection problem, and addressed using a variety of techniques including zero-forcing, successive interference cancellation (also known as V-BLAST), maximum likelihood estimation and, recently, neural network MIMO detection. Such techniques commonly assume that the channel matrix $\mathbf{H}$ is known at the receiver. In practice, in communication systems, the transmitter sends a pilot signal and the receiver learns the state of the channel (i.e., $\mathbf{H}$) from the received signal and the known pilot signal. Recently, there has been work on MIMO detection using deep learning tools, which has been shown to work better than other methods such as zero-forcing. Testing MIMO signal testing focuses first on the transmitter/receiver system. The random phases of the sub-carrier signals can produce instantaneous power levels that cause the amplifier to compress, momentarily causing distortion and ultimately symbol errors. Signals with a high PAR (peak-to-average ratio) can cause amplifiers to compress unpredictably during transmission. OFDM signals are very dynamic and compression problems can be hard to detect because of their noise-like nature. Knowing the quality of the signal channel is also critical. A channel emulator can simulate how a device performs at the cell edge, can add noise or can simulate what the channel looks like at speed. To fully qualify the performance of a receiver, a calibrated transmitter, such as a vector signal generator (VSG), and a channel emulator can be used to test the receiver under a variety of different conditions.
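To make the capacity expressions in the mathematical description above concrete, here is a minimal NumPy sketch that estimates the ergodic capacity of an i.i.d. Rayleigh-fading channel with and without channel knowledge at the transmitter; the 2x2 antenna configuration, SNR and Monte Carlo size are arbitrary assumptions for illustration.

import numpy as np

rng = np.random.default_rng(1)
Mt, Mr, snr, trials = 2, 2, 10.0, 2000      # assumed antenna counts, linear transmit SNR, trials

def capacity_equal_power(H):
    # no CSI at the transmitter: Q = I / Mt
    return np.log2(np.linalg.det(np.eye(Mr) + (snr / Mt) * H @ H.conj().T).real)

def capacity_waterfilling(H):
    # perfect CSI at the transmitter: pour a unit power budget over the channel eigenmodes
    d2 = np.linalg.svd(H, compute_uv=False) ** 2
    lo, hi = 0.0, 1.0 + np.sum(1.0 / (snr * d2))            # bracket the water level mu
    for _ in range(60):
        mu = 0.5 * (lo + hi)
        s = np.maximum(mu - 1.0 / (snr * d2), 0.0)
        lo, hi = (mu, hi) if s.sum() < 1.0 else (lo, mu)
    return np.sum(np.log2(1.0 + snr * d2 * s))

c_eq = c_wf = 0.0
for _ in range(trials):
    H = (rng.standard_normal((Mr, Mt)) + 1j * rng.standard_normal((Mr, Mt))) / np.sqrt(2)
    c_eq += capacity_equal_power(H)
    c_wf += capacity_waterfilling(H)

print("ergodic capacity, equal power :", round(c_eq / trials, 2), "bit/s/Hz")
print("ergodic capacity, waterfilling:", round(c_wf / trials, 2), "bit/s/Hz")

Both estimates grow roughly in proportion to min(Mt, Mr) as the antenna counts increase, which is the multiplexing gain discussed elsewhere in this article.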
Conversely, the transmitter's performance under a number of different conditions can be verified using a channel emulator and a calibrated receiver, such as a vector signal analyzer (VSA). Understanding the channel allows for manipulation of the phase and amplitude of each transmitter in order to form a beam. To correctly form a beam, the transmitter needs to understand the characteristics of the channel. This process is called channel sounding or channel estimation. A known signal is sent to the mobile device that enables it to build a picture of the channel environment. The mobile device sends back the channel characteristics to the transmitter. The transmitter can then apply the correct phase and amplitude adjustments to form a beam directed at the mobile device. This is called a closed-loop MIMO system. For beamforming, it is required to adjust the phases and amplitude of each transmitter. In a beamformer optimized for spatial diversity or spatial multiplexing, each antenna element simultaneously transmits a weighted combination of two data symbols. Literature Principal researchers Papers by Gerard J. Foschini and Michael J. Gans, Foschini and Emre Telatar have shown that the channel capacity (a theoretical upper bound on system throughput) for a MIMO system is increased as the number of antennas is increased, proportional to the smaller of the number of transmit antennas and the number of receive antennas. This is known as the multiplexing gain and this basic finding in information theory is what led to a spurt of research in this area. Despite the simple propagation models used in the aforementioned seminal works, the multiplexing gain is a fundamental property that can be proved under almost any physical channel propagation model and with practical hardware that is prone to transceiver impairments. A textbook by A. Paulraj, R. Nabar and D. Gore has published an introduction to this area. There are many other principal textbooks available as well. Diversity–multiplexing tradeoff There exists a fundamental tradeoff between transmit diversity and spatial multiplexing gains in a MIMO system (Zheng and Tse, 2003). In particular, achieving high spatial multiplexing gains is of profound importance in modern wireless systems. Other applications Given the nature of MIMO, it is not limited to wireless communication. It can be used for wire line communication as well. For example, a new type of DSL technology (gigabit DSL) has been proposed based on binder MIMO channels. Sampling theory in MIMO systems An important question which attracts the attention of engineers and mathematicians is how to use the multi-output signals at the receiver to recover the multi-input signals at the transmitter. In Shang, Sun and Zhou (2007), sufficient and necessary conditions are established to guarantee the complete recovery of the multi-input signals. See also Antenna diversity Beamforming Channel bonding Channel state information Dirty paper coding Duplex (telecommunications) History of smart antennas IEEE 802.11 IEEE 802.16 Macrodiversity MIMO-OFDM Multi-user MIMO Per-User Unitary Rate Control Phased array Precoding Single-frequency network (SFN) Smart antenna Space–time block code Space–time code Spatial multiplexing Visual MIMO Wi-Fi WiMAX MIMO References External links NIST UWB-MIMO Channel Propagation Measurements in the 2–8 GHz Spectrum Literature review of MIMO Antenna and Wireless Multipath Virtual Channel Interaction IEEE 802 Information theory Radio resource management Control engineering
MIMO
[ "Mathematics", "Technology", "Engineering" ]
4,947
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory", "Control engineering" ]
13,544,786
https://en.wikipedia.org/wiki/Geography%20of%20rugby%20league
Rugby league is a full contact football code and spectator sport played in various countries around the world. It is governed globally by the International Rugby League (IRL; previously referred to as the Rugby League International Federation). The IRL divides governance of the sport across two confederations governing Asia-Pacific (APRL) and Europe (ERL). The ERL further contains two sub-branches governing the Americas and the Middle East and Africa. Although one of the later football codes to be developed, the game has expanded outside of its traditional heartlands in Australia, Northern England, Southern France, and New Zealand. As a result, many players of European and Pacific Islander background have risen to the top professional level in the two major domestic leagues, the National Rugby League and Super League. Whilst individual international test matches between nations have been staged regularly since 1907, the first world cup of the sport was held in France in 1954, making it the first world cup of either rugby code and the first to be officially known as the "Rugby World Cup". Full and affiliate members are eligible for the Rugby League World Cup. However, due to the late rescheduling of the 2026 Rugby League World Cup, only full members will be allowed to compete in the next edition. Americas Rugby league is a growing sport in the Americas, having first started with All Star exhibition matches in the 1950s. It has been played at an organised semi-professional level in North America since it was first introduced as a competition sport in the 1990s. There are currently domestic leagues operating in Jamaica, Canada and the United States. Many players of Caribbean heritage live and play in the Super League and have brought their skills back to the islands to foster the development of thousands of new players. The game is also played at a lower amateur level across the Americas by expatriates, although only recognised national organisations are listed here for brevity. Canada currently has two domestic competitions, Ontario and British Columbia, with British Columbia being the premier competition and also having the most teams. British Columbia Rugby League (BCRL) also has a provincial team known as the BC Bulldogs. In 2012, the BC Bulldogs competed against Utah Avalanche from Salt Lake, who currently play in the AMNRL. The game was contested over two legs, home and away, with BC taking both games. The BC Bulldogs also made an appearance at the Las Vegas Remembrance Cup and came third. Coogee Bay Dolphins from Australia took out the competition for the second time in a row. For 2013, the BCRL was set to be made up of six teams, namely Bayside Sharks, Kelowna Crows, Richmond Bears, Sea to Sky Eagles, Surrey Beavers and Vancouver Dragons. Brazil also took up the sport in 2013. Since 2013 the Latin Heat Rugby League has had moderate success in introducing rugby league to players with Latin American heritage living in Australia. In 2014 the Latin Heat opened a U.S. chapter. Asia-Pacific Rugby league is a popular sport in Oceania and the Pacific islands. Australia, New Zealand and Papua New Guinea are the main nations playing rugby league in Oceania. The Cook Islands, Tonga, Samoa and Fiji are also RLIF test nations. Affiliate nations include Vanuatu, American Samoa, New Caledonia, Niue and Tokelau. The Solomon Islands also have some history of the sport. Pacific Asia Europe Europe is the birthplace of rugby league, with the game originating in Northern England.
The sport is played at an amateur level in most European countries, although only England, Wales, and France have professional or semi-professional clubs. These three countries were the original members of the Rugby League European Championship. The Great Britain national rugby league team is the most successful side, having won three World Cups; however, since 1995 the team has split in favour of home nations national sides. Domestically, the Super League (Great Britain) is the only professional league on the continent; however, the RFL Championship, RFL League 1 and RFL Women's Super League (all Great Britain), and the Elite One Championship and Elite Two Championship (both France), are semi-professional to varying extents. Middle East-Africa Rugby league is a growing sport in Africa and the Middle East, with a large growth in players since the 1990s, some of whom have played at the game's elite levels in the National Rugby League and Super League. The game in the Middle East is one of the fastest-growing sports, with regular internationals played against European and Mediterranean teams. Rugby league is a growing sport in Africa, with the game first introduced to the continent in early 2017. The vast distance of teams from the game's heartlands has at times affected the development of the sport, but new advances in the 21st century have seen a major increase in the number of internationals scheduled. Many high-calibre players from the continent have progressed to the top club leagues, including Younes Khattabi, Jamal Fakir, Tom van Vollenhoven, Fred Griffiths and Jarrod Saffy. The large expatriate Moroccan population in the south of France has resulted in a growing interchange of players between the two countries. Rugby league in Africa is played in South Africa, Gambia, Morocco, Burundi, Nigeria and Ghana. Non-IRL associated See also List of international rugby league teams List of rugby league competitions List of rugby league tours Notes References Rugby league Human geography
Geography of rugby league
[ "Environmental_science" ]
1,081
[ "Environmental social science", "Human geography" ]
13,544,815
https://en.wikipedia.org/wiki/Casein%20nutrient%20agar
Casein nutrient agar (CN) is a growth medium used to culture isolates of lactic acid bacteria such as Streptococcus thermophilus and Lactobacillus bulgaricus. It is composed of standard nutrient agar with the added ingredient of skim milk powder, which contains casein. Lactic acid bacteria precipitate casein out of the agar by lowering the pH, which produces a cloudy appearance around the colonies that do so. This medium is not regarded as selective, as it supports the growth of a wide variety of organisms. References Microbiological media
Casein nutrient agar
[ "Biology" ]
128
[ "Microbiological media", "Microbiology equipment" ]
13,544,961
https://en.wikipedia.org/wiki/Sufu
Sufu was a wartime material used briefly in Japan during World War II when cotton and other woven materials were scarce. It was an inexpensive, ersatz cloth made of wood fibers, basically cellulose, that disintegrated after three or four washings and was highly flammable. The warp threads were of cotton fibers; the weft consisted of twisted paper. References Military history of Japan Cellulose
Sufu
[ "Physics" ]
87
[ "Materials stubs", "Materials", "Matter" ]
13,545,317
https://en.wikipedia.org/wiki/Longevity%20escape%20velocity
In the life extension movement, longevity escape velocity (LEV), actuarial escape velocity or biological escape velocity is a hypothetical situation in which one's remaining life expectancy (not life expectancy at birth) is extended longer than the time that is passing. For example, in a given year in which longevity escape velocity would be maintained, medical advances would increase people's remaining life expectancy more than the year that just went by. The term is meant as an analogy to the concept of escape velocity in physics, which is the minimum speed required for an object to indefinitely move away from a gravitational body despite the gravitational force pulling the object towards the body. Background For many years in the past, life expectancy at each age has increased slightly every year as treatment strategies and technologies have improved. At present, more than one year of research is required for each additional year of expected life. Longevity escape velocity occurs when this ratio reverses, so that life expectancy increases faster than one year per one year of research, as long as that rate of advance is sustainable. Mouse lifespan research has been the most contributive to conclusive evidence on the matter, since mice require only a few years before research results can be concluded. History The term "longevity escape velocity" was conceived of by futurist David Gobel of the Methuselah Foundation and coined by biogerontologist Aubrey de Grey in a 2004 paper, but the concept has been present in the life extension community since at least the 1970s, such as in Robert Anton Wilson's essay Next Stop, Immortality. The concept is also part of the fictional history leading to multi-century youthful lifespans in the science fiction series The Mars Trilogy by Kim Stanley Robinson. More recent proponents include David Gobel, co-founder of the Methuselah Foundation and futurist, and technologist Ray Kurzweil, who named one of his books, Fantastic Voyage: Live Long Enough to Live Forever, after the concept. The last two claim that by putting further pressure on science and medicine to focus research on increasing limits of aging, rather than continuing along at its current pace, more lives will be saved in the future, even if the benefit is not immediately apparent. The idea was even more popularized with the publishing of Aubrey de Grey and Michael Rae's book, Ending Aging, in 2007. de Grey has also popularized the word "Methuselarity" which describes the same concept. Predictions Ray Kurzweil predicts that longevity escape velocity will be reached before humanity realizes it. In 2018, he predicted that it would be reached in 10–12 years, meaning that the milestone would occur around 2028–2030. In 2024, writing in The Economist, Kurzweil revised his prediction to 2029–2035 and explained how AI would help to simulate biological processes. Aubrey de Grey has also similarly predicted that humanity has a 50 percent chance of reaching longevity escape velocity in the mid to late 2030s. See also Life extension Pro-aging trance Rejuvenation Technological utopianism Transhumanism Timeline of aging research References Ageing Life extension Gerontology Transhumanism
Longevity escape velocity
[ "Technology", "Engineering", "Biology" ]
642
[ "Gerontology", "Genetic engineering", "Transhumanism", "Ethics of science and technology" ]
13,545,731
https://en.wikipedia.org/wiki/List%20of%20Schedule%20I%20controlled%20substances%20%28U.S.%29
This is the list of Schedule I controlled substances in the United States as defined by the Controlled Substances Act. The following findings are required for substances to be placed in this schedule: The drug or other substance has a high potential for abuse. The drug or other substance has no currently accepted medical use in treatment in the United States. There is a lack of accepted safety for use of the drug or other substance under medical supervision. The complete list of Schedule I substances is as follows. The Administrative Controlled Substances Code Number for each substance is included. Opioids Opium derivatives Hallucinogenic or psychedelic substances Depressants Stimulants Cannabimimetic agents See also List of Schedule II controlled substances (U.S.) List of Schedule III controlled substances (U.S.) List of Schedule IV controlled substances (U.S.) List of Schedule V controlled substances (U.S.) Notes References External links Controlled Substances listed by the DEA Controlled Substances Act Drug-related lists Cannabis-related lists
List of Schedule I controlled substances (U.S.)
[ "Chemistry" ]
206
[ "Drug-related lists" ]
13,545,946
https://en.wikipedia.org/wiki/Administrative%20Controlled%20Substances%20Code%20Number
Administrative Controlled Substances Code Number (ACSCN) is a number assigned to drugs listed on the schedules created by the US Controlled Substances Act (CSA). The ACSCN is defined in 21 CFR § 1308.03(a). Each chemical/drug on one of the schedules is assigned an ACSCN (for example, heroin is assigned 9200). The code number is used on various documents used in administration of the system mandated by the CSA. ACSCN tables include the CSA schedule, common alternative chemical and trade names, and the free base conversion ratio (the molecular mass of the substance in question divided by the molecular mass of the free base). This is used to make meaningful qualitative comparisons between substances, and labeling of the end product may, as is required in many European countries, list the active substance using both (e.g. "each tablet contains 120 mg dihydrocodeine bitartrate, representing 80 mg dihydrocodeine base"). This method of citation is in theory compulsory worldwide for substances in Schedule I of the Single Convention on Narcotic Drugs 1961, a classification corresponding to opioids in US Schedule II with Narcotic classification plus cocaine (which inherited a narcotic designation from the 1931 Convention for Limiting the Manufacture and Regulating the Distribution of Narcotic Drugs and preceding treaties and national laws including the 1914 Harrison Narcotics Tax Act) and German Betäubungsmittelgesetz (BtMG) Schedule I and so on. This is also the case for Single Convention Schedule IV, which roughly corresponds to the United States' CSA Schedule I. and CSU Schedule List of schedules For a complete list, see the list of schedules: List of Schedule I drugs (US) List of Schedule II drugs (US) List of Schedule III drugs (US) List of Schedule IV drugs (US) List of Schedule V drugs (US) References Text of Single Convention On Narcotic Drugs 1931 (English). also cited in Wikipedia article on Single Convention, courtesy UNODC web site, retrieved 30. April 2014 21 CFR § 1308.03(a) DEA Office of Diversion Control WWW site, retrieved 26. April 2014 German-language text of Österreichische Suchtmittelgesetz, retrieved 3. May 2014 § 27 German-language text of Deutsche Betäungsmittelgesetz, retrieved 2. May 2014 Innerhalb Betäubungsmittel, IV. Auflage (Wien, 8. February 2002), Tabelle 2C, Seite 116 (Deutsch) / Line 3819 in HTML-Version Copyright © Medical 4. January 1999 2014, CCRPP Press Office Zagreb, Croatia—retrieved 4. January 2014 Office of Narcotics Control & You: The Inside Dope (General Membership Manual of the NCOTCL, First Edition, 21. May 1990 Proceedings for the 8. March 2002 meeting of NCOTCL North American Section 2014 CCRPP Press Office Zagreb, Croatia—retrieved 4. January 2014 Controlled Substances Act
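As a worked illustration of the free base conversion ratio described above, using the dihydrocodeine example from the article (the molecular masses quoted here are approximate and serve only to show the arithmetic): dihydrocodeine free base is about 301 g/mol and dihydrocodeine bitartrate about 451 g/mol, so the conversion ratio is roughly 451 / 301, or about 1.5; dividing the 120 mg of bitartrate salt by this ratio gives about 120 / 1.5, or roughly 80 mg of dihydrocodeine base, matching the label wording in the example.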
Administrative Controlled Substances Code Number
[ "Chemistry" ]
619
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
13,546,580
https://en.wikipedia.org/wiki/List%20of%20Schedule%20II%20controlled%20substances%20%28U.S.%29
This is the list of Schedule II controlled substances in the United States as defined by the Controlled Substances Act. The following findings are required, by section 202 of that Act, for substances to be placed in this schedule: The drug or other substance has a high potential for abuse. The drug or other substance has a currently accepted medical use in treatment in the United States or a currently accepted medical use with severe restrictions. Abuse of the drug or other substances may lead to severe psychological or physical dependence. The complete list of Schedule II substances is as follows. The Administrative Controlled Substances Code Number and Federal Register citation for each substance is included. Drugs See also List of Schedule I controlled substances (U.S.) List of Schedule III controlled substances (U.S.) List of Schedule IV controlled substances (U.S.) List of Schedule V controlled substances (U.S.) Notes References Controlled Substances Act Drug-related lists
List of Schedule II controlled substances (U.S.)
[ "Chemistry" ]
187
[ "Drug-related lists" ]
13,546,761
https://en.wikipedia.org/wiki/3G%20MIMO
3G MIMO describes MIMO techniques that have been considered for 3G standards. MIMO, as the state of the art of intelligent antenna (IA), improves the performance of radio systems by embedding electronic intelligence into the spatial processing unit. Spatial processing includes spatial precoding at the transmitter and spatial postcoding at the receiver, which are dual to each other from an information and signal processing theoretic point of view. Intelligent antenna is an umbrella term covering smart antennas, multiple antennas (MIMO), self-tracking directional antennas, cooperative virtual antennas and so on. Technology Spatial precoding in intelligent antenna systems includes spatial beamforming and spatial coding. In wireless communications, spatial precoding has been developed for high reliability, high rate and lower interference, as shown in the following table. Summary of 3G MIMO The table summarizes the history of 3G MIMO techniques that were candidates for 3G standards. Although the table also contains a section for future techniques, its contents are not filled out, since the future cannot be predicted precisely. IA in ad hoc networking IA technology enables client terminals, which have either multiple antennas or a self-tracking directional antenna, to communicate with each other with as high a signal-to-interference-and-noise ratio (SINR) as possible. Assume that there is a source terminal, a destination terminal, and some candidate interference terminals. Compared to conventional approaches, an advanced IA-based terminal will perform spatial precoding (spatial beamforming and/or spatial coding) not only to enhance the signal power at the destination terminal but also to diminish the interfering power at interference terminals. Like a human operator, the advanced IA terminal takes into account that causing high interference to other terminals will eventually degrade the performance of the associated wireless network. Principal Issues of Research The following items list the issues of multiple antenna research that aim to improve the performance of radio communications. Intelligent antenna Smart antenna Digital antenna array Multiple-input multiple-output (MIMO) Beamforming Diversity combining Diversity scheme Space–time code Spatial multiplexing Space-division multiple access (SDMA) Advanced MIMO communications Multi-user MIMO Precoding Dirty paper coding (DPC) Cooperative wireless communications Cooperative diversity Principal Definitions Definitions Here are the definitions of principal keywords to clarify the objective and the operations of intelligent antenna. Reference Web Sites The following items list the web sites related to multiple antenna research. MARS, Bell Laboratories — Multiple Antenna Research and Solutions (MARS) is a research group on multiple antennas and space–time coding Lucent — The goal of intelligent antennas is to achieve higher capacity, noting that advanced solutions provide higher capacity than basic solutions. IMEC — Multiple antenna systems are the key to the high-capacity wireless universe. Indeed, they allow increasing the rate, improving the robustness, or accommodating more users in the cell. Georgia Institute of Technology — A smart antenna is an array of antenna elements connected to a digital signal processor IEC — A smart antenna system combines multiple antenna elements with a signal-processing capability to optimize its radiation and/or reception pattern automatically in response to the signal environment.
Spatial division multiple access (SDMA) — Among the most sophisticated utilizations of smart antenna technology is SDMA, which employs advanced processing techniques to, in effect, locate and track fixed or mobile terminals, adaptively steering transmission signals toward users and away from interferers. SearchMobileComputing.com — A smart antenna is a digital wireless communications antenna system that takes advantage of diversity effect at the source (transmitter), the destination (receiver), or both. MIMO is an antenna technology for wireless communications in which multiple antennas are used at both the source (transmitter) and the destination (receiver). Smart Antennas Research Group, Stanford Univ. — Our research goal is to advance the state-of-the-art in the applications of multiple antennas and space-time signal processing in mobile wireless networks, and to improve network performance and economics. CDG — Smart antennas provide greater capacity and performance benefits than standard antennas because they can be used to customize and fine-tune antenna coverage patterns that match the traffic conditions in a wireless network or that are better suited for complex radio frequency (RF) environments. MIMO employs multiple, spatially separated antennas (at both TX and RX) to take advantage of these "virtual wires" and transfer more data. Nortel — MIMO is an antenna technology that is used both in transmission and receiver equipment for wireless radio communication. MIMO is the only advanced antenna technology that simultaneously offers high bandwidth, improved range, and high mobility at a lower cost. Visant Strategies — Intelligent antennas are antenna systems that use some sort of computational or electronic resource to enhance system performance. According to the amounts of intelligence employed, antenna diversity represents the simplest form in the progressive complexity chain, followed by basic beamforming, which is the process of narrowing radiated energy, which is then followed by the more complex space-time processing and finally by MIMO. Magnetic Sciences — Satellite tracking systems and self-steering antennas are used aboard ships, vehicles, or aircraft to maintain contact with satellites. See also Antenna diversity Smart antenna Multiple antenna research Multiple-input multiple-output communications Cooperative wireless communications Precoding includes spatial coding (SC) and spatial beamforming (SB) Space–time code Spatial multiplexing Dirty paper coding (DPC) Beamforming Wsdma Smart antenna for 3G MIMO benefits References Dr. Erik Dahlman, LTE, 3G Long Term Evolution External links Smart Antennas and Related Technologies Briefing published by Bell Labs, Lucent Technologies IEEE 802 Information theory Radio resource management
3G MIMO
[ "Mathematics", "Technology", "Engineering" ]
1,141
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]
13,547,568
https://en.wikipedia.org/wiki/List%20of%20Schedule%20III%20controlled%20substances%20%28U.S.%29
This is the list of Schedule III controlled substances in the United States as defined in section 202 of the Controlled Substances Act () and . The following findings are required for substances to be placed in this schedule: The drug or other substance has a potential for abuse less than the drugs or other substances in schedules I and II. The drug or other substance has a currently accepted medical use in treatment in the United States. Abuse of the drug or other substance may lead to moderate or low physical dependence or high psychological dependence. The complete list of Schedule III substances is as follows. The Administrative Controlled Substances Code Number and Federal Register citation for each substance is included. Stimulants Depressants Others Narcotics Steroids Hallucinogens - Ergine See also List of Schedule I controlled substances (U.S.) List of Schedule II controlled substances (U.S.) List of Schedule IV controlled substances (U.S.) List of Schedule V controlled substances (U.S.) Notes References Controlled Substances Act Drug-related lists
List of Schedule III controlled substances (U.S.)
[ "Chemistry" ]
209
[ "Drug-related lists" ]
13,547,663
https://en.wikipedia.org/wiki/Jacobsthal%20number
In mathematics, the Jacobsthal numbers are an integer sequence named after the German mathematician Ernst Jacobsthal. Like the related Fibonacci numbers, they are a specific type of Lucas sequence, the sequence $U_n(P, Q)$ for which P = 1 and Q = −2, and are defined by a similar recurrence relation: in simple terms, the sequence starts with 0 and 1, then each following number is found by adding the number before it to twice the number before that. The first Jacobsthal numbers are: 0, 1, 1, 3, 5, 11, 21, 43, 85, 171, 341, 683, 1365, 2731, 5461, 10923, 21845, 43691, 87381, 174763, 349525, … A Jacobsthal prime is a Jacobsthal number that is also prime. The first Jacobsthal primes are: 3, 5, 11, 43, 683, 2731, 43691, 174763, 2796203, 715827883, 2932031007403, 768614336404564651, 201487636602438195784363, 845100400152152934331135470251, 56713727820156410577229101238628035243, … Jacobsthal numbers Jacobsthal numbers are defined by the recurrence relation $J_0 = 0$, $J_1 = 1$, and $J_n = J_{n-1} + 2J_{n-2}$ for $n \ge 2$. The next Jacobsthal number is also given by the recursion formula $J_{n+1} = 2J_n + (-1)^n$ or by $J_{n+1} = 2^n - J_n$. The defining recurrence relation above is also satisfied by the powers of 2. The Jacobsthal number at a specific point in the sequence may be calculated directly using the closed-form equation $J_n = \frac{2^n - (-1)^n}{3}$. The generating function for the Jacobsthal numbers is $\frac{x}{(1+x)(1-2x)}$. The sum of the reciprocals of the Jacobsthal numbers is approximately 2.7186, slightly larger than e. The Jacobsthal numbers can be extended to negative indices using the recurrence relation or the explicit formula, giving $J_{-n} = (-1)^{n+1} J_n / 2^n$ (see ). The following identities hold (see ), where $F_n$ is the nth Fibonacci number. Jacobsthal–Lucas numbers Jacobsthal–Lucas numbers represent the complementary Lucas sequence $V_n(1, -2)$. They satisfy the same recurrence relation as Jacobsthal numbers but have different initial values: $j_0 = 2$ and $j_1 = 1$. The next Jacobsthal–Lucas number is also given by the recursion formula $j_{n+1} = 2j_n - 3(-1)^n$. The Jacobsthal–Lucas number at a specific point in the sequence may be calculated directly using the closed-form equation $j_n = 2^n + (-1)^n$. The first Jacobsthal–Lucas numbers are: 2, 1, 5, 7, 17, 31, 65, 127, 257, 511, 1025, 2047, 4097, 8191, 16385, 32767, 65537, 131071, 262145, 524287, 1048577, … . Jacobsthal Oblong numbers The first Jacobsthal oblong numbers (products of consecutive Jacobsthal numbers, $J_n J_{n+1}$) are: 0, 1, 3, 15, 55, 231, 903, 3655, 14535, 58311, … References Eponymous numbers in mathematics Integer sequences Recurrence relations
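The recurrence and the closed form above can be checked numerically; the short Python sketch below is only an illustration and follows the indexing used in this article.

def jacobsthal(n):
    a, b = 0, 1                      # J(0), J(1)
    for _ in range(n):
        a, b = b, b + 2 * a          # J(k+1) = J(k) + 2 J(k-1)
    return a

for n in range(12):
    assert jacobsthal(n) == (2 ** n - (-1) ** n) // 3    # closed form
print([jacobsthal(n) for n in range(12)])                # 0, 1, 1, 3, 5, 11, 21, 43, 85, 171, 341, 683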
Jacobsthal number
[ "Mathematics" ]
634
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recurrence relations", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Mathematical relations", "Numbers", "Number theory" ]
13,547,826
https://en.wikipedia.org/wiki/Forbidden%20graph%20characterization
In graph theory, a branch of mathematics, many important families of graphs can be described by a finite set of individual graphs that do not belong to the family and further exclude all graphs from the family which contain any of these forbidden graphs as (induced) subgraph or minor. A prototypical example of this phenomenon is Kuratowski's theorem, which states that a graph is planar (can be drawn without crossings in the plane) if and only if it does not contain either of two forbidden graphs, the complete graph and the complete bipartite graph . For Kuratowski's theorem, the notion of containment is that of graph homeomorphism, in which a subdivision of one graph appears as a subgraph of the other. Thus, every graph either has a planar drawing (in which case it belongs to the family of planar graphs) or it has a subdivision of at least one of these two graphs as a subgraph (in which case it does not belong to the planar graphs). Definition More generally, a forbidden graph characterization is a method of specifying a family of graph, or hypergraph, structures, by specifying substructures that are forbidden to exist within any graph in the family. Different families vary in the nature of what is forbidden. In general, a structure G is a member of a family if and only if a forbidden substructure is not contained in G. The forbidden substructure might be one of: subgraphs, smaller graphs obtained from subsets of the vertices and edges of a larger graph, induced subgraphs, smaller graphs obtained by selecting a subset of the vertices and using all edges with both endpoints in that subset, homeomorphic subgraphs (also called topological minors), smaller graphs obtained from subgraphs by collapsing paths of degree-two vertices to single edges, or graph minors, smaller graphs obtained from subgraphs by arbitrary edge contractions. The set of structures that are forbidden from belonging to a given graph family can also be called an obstruction set for that family. Forbidden graph characterizations may be used in algorithms for testing whether a graph belongs to a given family. In many cases, it is possible to test in polynomial time whether a given graph contains any of the members of the obstruction set, and therefore whether it belongs to the family defined by that obstruction set. In order for a family to have a forbidden graph characterization, with a particular type of substructure, the family must be closed under substructures. That is, every substructure (of a given type) of a graph in the family must be another graph in the family. Equivalently, if a graph is not part of the family, all larger graphs containing it as a substructure must also be excluded from the family. When this is true, there always exists an obstruction set (the set of graphs that are not in the family but whose smaller substructures all belong to the family). However, for some notions of what a substructure is, this obstruction set could be infinite. The Robertson–Seymour theorem proves that, for the particular case of graph minors, a family that is closed under minors always has a finite obstruction set. List of forbidden characterizations for graphs and hypergraphs See also Erdős–Hajnal conjecture Forbidden subgraph problem Matroid minor Zarankiewicz problem References Graph theory Graph minor theory Graph families Hypergraphs
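As a small illustration of testing membership against an obstruction set, the sketch below uses the third-party networkx library (an assumption, not something referenced by this article) to check planarity, which by Kuratowski's theorem amounts to excluding subdivisions of the two forbidden graphs named above.

import networkx as nx

k5 = nx.complete_graph(5)                      # forbidden
k33 = nx.complete_bipartite_graph(3, 3)        # forbidden
c6 = nx.cycle_graph(6)                         # planar: contains neither forbidden graph

print(nx.check_planarity(k5)[0])               # False
print(nx.check_planarity(k33)[0])              # False
print(nx.check_planarity(c6)[0])               # True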
Forbidden graph characterization
[ "Mathematics" ]
701
[ "Discrete mathematics", "Graph theory", "Combinatorics", "Mathematical relations", "Graph minor theory" ]
13,547,853
https://en.wikipedia.org/wiki/Penicillium%20griseofulvum
Penicillium griseofulvum is a species of the genus of Penicillium which produces patulin, penifulvin A, cyclopiazonic acid, roquefortine C, shikimic acid, griseofulvin, and 6-Methylsalicylic acid (via a polyketide synthase). Penicillium griseofulvum occurs on cereals and nuts. Further reading References griseofulvum Fungi described in 1901 Fungus species
Penicillium griseofulvum
[ "Biology" ]
110
[ "Fungi", "Fungus species" ]
13,548,016
https://en.wikipedia.org/wiki/Orthogonal%20Procrustes%20problem
The orthogonal Procrustes problem is a matrix approximation problem in linear algebra. In its classical form, one is given two matrices $A$ and $B$ and asked to find an orthogonal matrix $\Omega$ which most closely maps $A$ to $B$. Specifically, the orthogonal Procrustes problem is an optimization problem given by $R = \arg\min_{\Omega} \|\Omega A - B\|_F$ subject to $\Omega^{\mathsf{T}}\Omega = I$, where $\|\cdot\|_F$ denotes the Frobenius norm. This is a special case of Wahba's problem (with identical weights; instead of considering two matrices, in Wahba's problem the columns of the matrices are considered as individual vectors). Another difference is that Wahba's problem tries to find a proper rotation matrix instead of just an orthogonal one. The name Procrustes refers to a bandit from Greek mythology who made his victims fit his bed by either stretching their limbs or cutting them off. Solution This problem was originally solved by Peter Schönemann in a 1964 thesis, and shortly after appeared in the journal Psychometrika. This problem is equivalent to finding the nearest orthogonal matrix to the given matrix $M = BA^{\mathsf{T}}$, i.e. solving the closest orthogonal approximation problem $\min_{R} \|R - M\|_F$ subject to $R^{\mathsf{T}}R = I$. To find matrix $R$, one uses the singular value decomposition $M = U\Sigma V^{\mathsf{T}}$ (for which the entries of $\Sigma$ are non-negative) to write $R = UV^{\mathsf{T}}$. Proof of Solution One proof depends on the basic properties of the Frobenius inner product that induces the Frobenius norm: $R = \arg\min_{\Omega} \|\Omega A - B\|_F^2 = \arg\max_{\Omega} \langle \Omega, BA^{\mathsf{T}} \rangle_F = \arg\max_{\Omega} \langle U^{\mathsf{T}} \Omega V, \Sigma \rangle_F$. The quantity $S = U^{\mathsf{T}} \Omega V$ is an orthogonal matrix (as it is a product of orthogonal matrices) and thus the expression is maximised when $S$ equals the identity matrix $I$. Thus $R = UV^{\mathsf{T}}$, where $R$ is the solution for the optimal value of $\Omega$ that minimizes the norm squared $\|\Omega A - B\|_F^2$. Generalized/constrained Procrustes problems There are a number of related problems to the classical orthogonal Procrustes problem. One might generalize it by seeking the closest matrix in which the columns are orthogonal, but not necessarily orthonormal. Alternately, one might constrain it by only allowing rotation matrices (i.e. orthogonal matrices with determinant 1, also known as special orthogonal matrices). In this case, one can write (using the above decomposition $M = U\Sigma V^{\mathsf{T}}$) $R = U\Sigma' V^{\mathsf{T}}$, where $\Sigma'$ is a modified $\Sigma$, with the smallest singular value replaced by $\operatorname{sign}(\det(UV^{\mathsf{T}}))$ (+1 or -1), and the other singular values replaced by 1, so that the determinant of $R$ is guaranteed to be positive. For more information, see the Kabsch algorithm. The unbalanced Procrustes problem concerns minimizing the norm of $AU - B$, where $A \in \mathbb{R}^{m \times n}$, $B \in \mathbb{R}^{m \times p}$, and $U \in \mathbb{R}^{n \times p}$ has orthonormal columns, with $p < n$, or alternately with complex valued matrices. This is a problem over the Stiefel manifold of $n \times p$ matrices with orthonormal columns, and has no currently known closed form. To distinguish, the standard Procrustes problem ($p = n$) is referred to as the balanced problem in these contexts. See also Procrustes analysis Procrustes transformation Wahba's problem Kabsch algorithm Point set registration References Linear algebra Matrix theory Singular value decomposition
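The closed-form solution described above is straightforward to reproduce numerically; the NumPy sketch below is illustrative only, with randomly generated matrices standing in for A and B.

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 10))
Q_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))    # a hidden orthogonal map
B = Q_true @ A                                            # B is an orthogonally transformed copy of A

U, S, Vt = np.linalg.svd(B @ A.T)                         # SVD of B A^T
R = U @ Vt                                                # Schonemann's solution R = U V^T

print(np.allclose(R, Q_true))              # True: the hidden map is recovered
print(np.linalg.norm(R @ A - B))           # ~0: Frobenius-norm residual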
Orthogonal Procrustes problem
[ "Mathematics" ]
563
[ "Linear algebra", "Algebra" ]
9,516,673
https://en.wikipedia.org/wiki/RNA%20activation
RNA activation (RNAa) is a small RNA-guided and Argonaute (Ago)-dependent gene regulation phenomenon in which promoter-targeted short double-stranded RNAs (dsRNAs) induce target gene expression at the transcriptional/epigenetic level. RNAa was first reported in a 2006 PNAS paper by Li et al. who also coined the term "RNAa" as a contrast to RNA interference (RNAi) to describe such gene activation phenomenon. dsRNAs that trigger RNAa have been termed small activating RNA (saRNA). Since the initial discovery of RNAa in human cells, many other groups have made similar observations in different mammalian species including human, non-human primates, rat and mice, plant and C. elegans, suggesting that RNAa is an evolutionarily conserved mechanism of gene regulation. RNAa can be generally classified into two categories: exogenous and endogenous. Exogenous RNAa is triggered by artificially designed saRNAs which target non-coding sequences such as the promoter and the 3’ terminus of a gene and these saRNAs can be chemically synthesized or expressed as short hairpin RNA (shRNA). Whereas for endogenous RNAa, upregulation of gene expression is guided by naturally occurring endogenous small RNAs such as miRNA in mammalian cells and C. elegans, and 22G RNA in C. elegans. Mechanism The molecular mechanism of RNAa is not fully understood. Similar to RNAi, it has been shown that mammalian RNAa requires members of the Ago clade of Argonaute proteins, particularly Ago2, but possesses kinetics distinct from RNAi. In contrast to RNAi, promoter-targeted saRNAs induce prolonged activation of gene expression associated with epigenetic changes. It is currently suggested that saRNAs are first loaded and processed by an Ago protein to form an Ago-RNA complex which is then guided by the RNA to its promoter target. The target can be a non-coding transcript overlapping the promoter or the chromosomal DNA. The RNA-loaded Ago then recruits other proteins such as RHA, also known as nuclear DNA helicase II, and CTR9 to form an RNA-induced transcriptional activation (RITA) complex. RITA can directly interacts with RNAP II to stimulate transcription initiation and productive transcription elongation which is related to increased ubiquitination of H2B. Endogenous RNAa In 2008, Place et al. identified targets for miRNA miR-373 on the promoters of several human genes and found that introduction of miR-373 mimics into human cells induced the expression of its predicted target genes. This study provided the first example that RNAa could be mediated by naturally occurring non-coding RNA (ncRNA). In 2011, Huang et al. further demonstrated in mouse cells that endogenous RNAa mediated by miRNAs functions in a physiological context and is possibly exploited by cancer cells to gain a growth advantage. Since then, a number of miRNAs have been shown to upregulate gene expression by targeting gene promoters or enhancers, thereby, exerting important biological roles. A good example is miR-551b-3p which is overexpressed in ovarian cancer due to amplification. By targeting the promoter of STAT3 to increase its transcription, miR-551b-3p confers to ovarian cancer cells resistance to apoptosis and a proliferative advantage. In C. elegans hypodermal seam cells, the transcription of lin-4 miRNA is positively regulated by lin-4 itself which binds to a conserved lin-4 complementary element in its promoter, constituting a positive autoregulatory loop. In C. 
elegans, Argonaute CSR-1 interacts with 22G small RNAs derived from RNA-dependent RNA polymerase and antisense to germline-expressed transcripts to protect these mRNAs from Piwi-piRNA mediated silencing via promoting epigenetic activation. It is currently unknown how widespread gene regulation by endogenous RNAa is in mammalian cells. Studies have shown that both miRNAs and Ago proteins (Ago1) bind to numerous sites in human genome, especially promoter regions, to exert a largely positive effect on gene transcription. Applications RNAa has been used to study gene function in lieu of vector-based gene overexpression. Studies have demonstrated RNAa in vivo and its potential therapeutic applications in treating cancer and non-cancerous diseases. In June 2016, UK-based MiNA Therapeutics announced the initiation of a phase I trial of the first-ever saRNA drug MTL-CEBPA in patients with liver cancer, in an attempt to activate CEBPA gene. References Further reading External links RNAa FAQs Li Lab, University of California San Francisco How to get your genes switched on. New Scientist 16 November 2006 RNA Gene expression
RNA activation
[ "Chemistry", "Biology" ]
1,020
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
9,516,924
https://en.wikipedia.org/wiki/Manual%20testing
Compare with Test automation. Manual testing is the process of manually testing software for defects. It requires a tester to play the role of an end user where by they use most of the application's features to ensure correct behaviour. To guarantee completeness of testing, the tester often follows a written test plan that leads them through a set of important test cases. Overview A key step in the process is testing the software for correct behavior prior to release to end users. For small scale engineering efforts (including prototypes), ad hoc testing may be sufficient. With this informal approach, the tester does not follow any rigorous testing procedure and simply performs testing without planning or documentation. Conversely, exploratory testing, which involves simultaneous learning, test design and test execution, explores the user interface of the application using as many of its features as possible, using information gained in prior tests to intuitively derive additional tests. The success of exploratory manual testing relies heavily on the domain expertise of the tester, because a lack of knowledge will lead to incompleteness in testing. One of the key advantages of an informal approach is to gain an intuitive insight to how it feels to use the application. Large scale engineering projects that rely on manual software testing follow a more rigorous methodology in order to maximize the number of defects that can be found. A systematic approach focuses on predetermined test cases and generally involves the following steps. Choose a high level test plan where a general methodology is chosen, and resources such as people, computers, and software licenses are identified and acquired. Write detailed test cases, identifying clear and concise steps to be taken by the tester, with expected outcomes. Assign the test cases to testers, who manually follow the steps and record the results. Author a test report, detailing the findings of the testers. The report is used by managers to determine whether the software can be released, and if not, it is used by engineers to identify and correct the problems. A rigorous test case based approach is often traditional for large software engineering projects that follow a Waterfall model. However, at least one recent study did not show a dramatic difference in defect detection efficiency between exploratory testing and test case based testing. Testing can be through black-, white- or grey-box testing. In white-box testing the tester is concerned with the execution of the statements through the source code. In black-box testing the software is run to check for the defects and is less concerned with how the processing of the input is done. Black-box testers do not have access to the source code. Grey-box testing is concerned with running the software while having an understanding of the source code and algorithms. Static and dynamic testing approach may also be used. Dynamic testing involves running the software. Static testing includes verifying requirements, syntax of code and any other activities that do not include actually running the code of the program. Testing can be further divided into functional and non-functional testing. In functional testing the tester would check the calculations, any link on the page, or any other field which on given input, output may be expected. Non-functional testing includes testing performance, compatibility and fitness of the system under test, its security and usability among other things. 
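For illustration, one entry in such a written test plan might look like the following; the application, steps and expected outcome are invented for the example. Test case TC-07, "Log in with a valid account": 1) open the login page; 2) enter a registered username and its password; 3) press the sign-in button. Expected outcome: the user's dashboard is displayed and no error message appears. The tester follows the steps exactly as written and records a pass or fail along with any observations.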
Stages There are several stages. They are: Unit testing This initial stage of testing is normally carried out by the developer who wrote the code, and sometimes by a peer, using the white box testing technique. Integration testing This stage is carried out in two modes, as a complete package or as an increment to the earlier package. Most of the time the black box testing technique is used; however, a combination of black and white box testing is sometimes also used in this stage. System testing In this stage the software is tested from all possible dimensions for all intended purposes and platforms. In this stage the black box testing technique is normally used. User acceptance testing This testing stage is carried out in order to get customer sign-off on the finished product. A 'pass' in this stage also ensures that the customer has accepted the software and that it is ready for their use. Release or deployment testing The onsite team will go to the customer site to install the system in the customer-configured environment and will check the following points: Whether SetUp.exe runs or not. Whether the installation screens are easy to follow. How much space the system occupies on the hard disk. Whether the system is completely uninstalled when the user opts to uninstall it. Advantages Low-cost operation, as no software tools are used Most bugs are caught by manual testing Humans observe and judge better than automated tools Comparison to automated testing Test automation may be able to reduce or eliminate the cost of actual testing. A computer can follow a rote sequence of steps more quickly than a person, and it can run the tests overnight to present the results in the morning. However, the labor that is saved in actual testing must be spent instead authoring the test program. Depending on the type of application to be tested, and the automation tools that are chosen, this may require more labor than a manual approach. In addition, some testing tools present a very large amount of data, potentially creating a time-consuming task of interpreting the results. Things such as device drivers and software libraries must be tested using test programs. In addition, testing of large numbers of users (performance testing and load testing) is typically simulated in software rather than performed in practice. Conversely, graphical user interfaces whose layout changes frequently are very difficult to test automatically. There are test frameworks that can be used for regression testing of user interfaces. They rely on recording of sequences of keystrokes and mouse gestures, then playing them back and observing that the user interface responds in the same way every time. Unfortunately, these recordings may not work properly when a button is moved or relabeled in a subsequent release. An automatic regression test may also be fooled if the program output varies significantly. See also Test method Usability testing GUI testing Software testing Codeless test automation Sanity testing References Software testing
Manual testing
[ "Engineering" ]
1,219
[ "Software engineering", "Software testing" ]
9,516,977
https://en.wikipedia.org/wiki/Ice-minus%20bacteria
Ice-minus bacteria is a common name given to a variant of the common bacterium Pseudomonas syringae (P. syringae). This strain of P. syringae lacks the ability to produce a certain surface protein, usually found on wild-type P. syringae. The "ice-plus" protein (INA protein, "Ice nucleation-active" protein) found on the outer bacterial cell wall acts as a nucleating center for ice crystals. This facilitates ice formation, hence the designation "ice-plus". The ice-minus variant of P. syringae is a mutant, lacking the gene responsible for ice-nucleating surface protein production. This lack of surface protein provides a less favorable environment for ice formation. Both strains of P. syringae occur naturally, but recombinant DNA technology has allowed for the synthetic removal or alteration of specific genes, enabling the ice-minus strain to be created from the ice-plus strain in the lab. The ice-nucleating nature of P. syringae incites frost development, freezing the buds of the plant and destroying the crop. The introduction of an ice-minus strain of P. syringae to the surface of plants would reduce the amount of ice nucleate present, resulting in higher crop yields. The recombinant form was developed as a commercial product known as Frostban. Field-testing of Frostban in 1987 was the first release of a genetically modified organism into the environment. The testing was very controversial and drove the formation of US biotechnology policy. Frostban was never marketed. Production To systematically create the ice-minus strain of P. syringae, its ice-forming gene must be isolated, amplified, deactivated and reintroduced into the P. syringae bacterium. The following steps are often used to isolate and generate ice-minus strains of P. syringae: Digest P. syringae's DNA with restriction enzymes. Insert the individual DNA pieces into a plasmid. Pieces will insert randomly, allowing for different variations of recombinant DNA to be produced. Transform the bacterium Escherichia coli (E. coli) with the recombinant plasmid. The plasmid will be taken in by the bacteria, becoming part of the organism's DNA. Identify the ice-gene from the numerous newly developed E. coli recombinants. Recombinant E. coli with the ice-gene will possess the ice-nucleating phenotype; these will be "ice-plus". With the ice-nucleating recombinant identified, amplify the ice gene with techniques such as the polymerase chain reaction (PCR). Create mutant clones of the ice gene through the introduction of mutagenic agents such as UV radiation to inactivate the ice gene, creating the "ice-minus" gene. Repeat the previous steps (insert the gene into a plasmid, transform E. coli, identify recombinants) with the newly created mutant clones to identify the bacteria with the ice-minus gene. They will possess the desired ice-minus phenotype. Insert the ice-minus gene into the normal, ice-plus P. syringae bacterium. Allow recombination to take place, yielding both ice-minus and ice-plus strains of P. syringae. Economic importance In the United States alone, it has been estimated that frost accounts for approximately $1 billion in crop damage each year. As P. syringae commonly inhabits plant surfaces, its ice-nucleating nature incites frost development, freezing the buds of the plant and destroying the crop. The introduction of an ice-minus strain of P. syringae to the surface of plants would create competition between the strains. Should the ice-minus strain win out, the ice nucleate provided by P. syringae would no longer be present, lowering the level of frost development on plant surfaces at the normal water-freezing temperature of 0 °C (32 °F). Even if the ice-minus strain does not win out, the amount of ice nucleate present from ice-plus P. syringae would be reduced due to competition. Decreased levels of frost generation at the normal water-freezing temperature would translate into a lower quantity of crops lost to frost damage, resulting in higher crop yields overall. Historical perspective In 1961, Paul Hoppe of the U.S. Department of Agriculture studied a corn fungus by grinding up infected leaves each season, then applying the powder to test corn for the following season to track the disease. A surprise frost occurred that year, leaving peculiar results. Only plants infected with the diseased powder incurred frost damage, leaving healthy plants unfrozen. This phenomenon would baffle scientists until graduate student Stephen Lindow of the University of Wisconsin–Madison, with D.C. Arny and C. Upper, found a bacterium in the dried leaf powder in the early 1970s. Lindow, now a plant pathologist at the University of California, Berkeley, found that when this particular bacterium was introduced to plants where it was originally absent, the plants became very vulnerable to frost damage. He would go on to identify the bacterium as P. syringae, investigate P. syringae's role in ice nucleation and, in 1977, discover the mutant ice-minus strain. He was later successful at developing the ice-minus strain of P. syringae through recombinant DNA technology as well. In 1983, Advanced Genetic Sciences (AGS), a biotech company, applied for U.S. government authorization to perform field tests with the ice-minus strain of P. syringae, but environmental groups and protestors delayed the field tests for four years with legal challenges. In 1987, the ice-minus strain of P. syringae became the first genetically modified organism (GMO) to be released into the environment when a strawberry field in California was sprayed with the ice-minus strain of P. syringae. The results were promising, showing lowered frost damage to the treated plants. Lindow also conducted an experiment on a crop of potato seedlings sprayed with ice-minus P. syringae. He was successful in protecting the potato crop from frost damage with a strain of ice-minus P. syringae. Controversy At the time of Lindow's work on ice-minus P. syringae, genetic engineering was considered to be very controversial. Jeremy Rifkin and his Foundation on Economic Trends (FET) sued the NIH in federal court to delay the field trials, arguing that NIH had failed to conduct an Environmental Impact Assessment and had failed to explore the possible effects "Ice-minus" bacteria might have on ecosystems and even global weather patterns. Once approval was granted, both test fields were attacked by activist groups the night before the tests occurred: "The world's first trial site attracted the world's first field trasher". The BBC quoted Andy Caffrey from Earth First!: "When I first heard that a company in Berkeley was planning to release these bacteria Frostban in my community, I literally felt a knife go into me. Here once again, for a buck, science, technology and corporations were going to invade my body with new bacteria that hadn't existed on the planet before. It had already been invaded by smog, by radiation, by toxic chemicals in my food, and I just wasn't going to take it anymore." 
Rifkin's successful legal challenge forced the Reagan Administration to more quickly develop an overarching regulatory policy to guide federal decision-making about agricultural biotechnology. In 1986, the Office of Science and Technology Policy issued the Coordinated Framework for Regulation of Biotechnology, which continues to govern US regulatory decisions. The controversy drove many biotech companies away from the use of genetically engineered microorganisms in agriculture. See also Bacterial ice-nucleation proteins References External links P. syringae genomic information from Cornell University's Pseudomonas-Plant Interaction Project Pseudomonadales Genetically modified organisms Genetically modified organisms in agriculture
Ice-minus bacteria
[ "Engineering", "Biology" ]
1,656
[ "Genetic engineering", "Genetically modified organisms" ]
9,517,150
https://en.wikipedia.org/wiki/Shogun%20%28toolbox%29
Shogun is a free, open-source machine learning software library written in C++. It offers numerous algorithms and data structures for machine learning problems. It offers interfaces for Octave, Python, R, Java, Lua, Ruby and C# using SWIG. It is licensed under the terms of the GNU General Public License version 3 or later. Description The focus of Shogun is on kernel machines such as support vector machines for regression and classification problems. Shogun also offers a full implementation of Hidden Markov models. The core of Shogun is written in C++ and offers interfaces for MATLAB, Octave, Python, R, Java, Lua, Ruby and C#. Shogun has been under active development since 1999. Today there is a vibrant user community all over the world using Shogun as a base for research and education, and contributing to the core package. Supported algorithms Currently Shogun supports the following algorithms: Support vector machines; Dimensionality reduction algorithms, such as PCA, Kernel PCA, Locally Linear Embedding, Hessian Locally Linear Embedding, Local Tangent Space Alignment, Linear Local Tangent Space Alignment, Kernel Locally Linear Embedding, Kernel Local Tangent Space Alignment, Multidimensional Scaling, Isomap, Diffusion Maps, Laplacian Eigenmaps; Online learning algorithms such as SGD-QN, Vowpal Wabbit; Clustering algorithms: k-means and GMM; Kernel Ridge Regression, Support Vector Regression; Hidden Markov Models; K-Nearest Neighbors; Linear discriminant analysis; Kernel Perceptrons. Many different kernels are implemented, ranging from kernels for numerical data (such as Gaussian or linear kernels) to kernels on special data (such as strings over certain alphabets). The currently implemented kernels for numeric data include: linear, Gaussian, polynomial, and sigmoid kernels. The supported kernels for special data include: Spectrum, Weighted Degree, and Weighted Degree with Shifts. The latter group of kernels allows processing of arbitrary sequences over fixed alphabets such as DNA sequences as well as whole e-mail texts. Special features As Shogun was developed with bioinformatics applications in mind, it is capable of processing huge datasets consisting of up to 10 million samples. Shogun supports the use of pre-calculated kernels. It is also possible to use a combined kernel, i.e. a kernel consisting of a linear combination of arbitrary kernels over different domains. The coefficients or weights of the linear combination can be learned as well. For this purpose Shogun offers a multiple kernel learning functionality. References S. Sonnenburg, G. Rätsch, S. Henschel, C. Widmer, J. Behr, A. Zien, F. De Bona, A. Binder, C. Gehl and V. Franc: The SHOGUN Machine Learning Toolbox, Journal of Machine Learning Research, 11:1799−1802, June 11, 2010. M. Gashler. Waffles: A Machine Learning Toolkit. Journal of Machine Learning Research, 12 (July):2383–2387, 2011. P. Vincent, Y. Bengio, N. Chapados, and O. Delalleau. Plearn high-performance machine learning library. URL http://plearn.berlios.de/. External links Shogun toolbox homepage C++ libraries Free software programmed in C++ Data mining and machine learning software Free statistical software Free computer libraries Free mathematics software Free science software
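As a rough illustration of how the library is typically driven from one of its interface languages, the following sketch trains a Gaussian-kernel SVM through Shogun's legacy modular Python interface (modshogun). The class names, constructor arguments, and the convention of passing one example per column are assumptions based on older Shogun releases and may differ in other versions.

# Minimal sketch: training a Gaussian-kernel SVM with Shogun's legacy
# modular Python interface. Class names and signatures are assumed from
# older releases and may vary between Shogun versions.
import numpy as np
from modshogun import RealFeatures, BinaryLabels, GaussianKernel, LibSVM

# Toy data: two Gaussian blobs, labels in {-1, +1}
X = np.vstack([np.random.randn(50, 2) - 2, np.random.randn(50, 2) + 2])
y = np.hstack([-np.ones(50), np.ones(50)])

features = RealFeatures(X.T)                      # one example per column
labels = BinaryLabels(y)
kernel = GaussianKernel(features, features, 2.0)  # kernel width = 2.0

svm = LibSVM(1.0, kernel, labels)                 # C = 1.0
svm.train()

predictions = svm.apply(features)                 # classify the training examples
accuracy = np.mean(predictions.get_labels() == y)
print("training accuracy:", accuracy)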
Shogun (toolbox)
[ "Mathematics" ]
721
[ "Free mathematics software", "Mathematical software" ]
9,517,351
https://en.wikipedia.org/wiki/Coincidence%20counting%20%28physics%29
In quantum physics, coincidence counting is used in experiments testing particle non-locality and quantum entanglement. In these experiments two or more particles are created from the same initial packet of energy, inexorably linking/entangling their physical properties. Separate particle detectors measure the quantum states of each particle and send the resulting signal to a coincidence counter. In any experiment studying entanglement, the entangled particles are vastly outnumbered by non-entangled particles which are also detected; patternless noise that drowns out the entangled signal. In a two detector system, a coincidence counter alleviates this problem by only recording detection signals that strike both detectors simultaneously (or more accurately, recording only signals that arrive at both detectors and correlate to the same emission time). This ensures that the data represents only entangled particles. However, since no detector/counter circuit has infinitely precise temporal resolution (due both to limitations in the electronics and the laws of the Universe itself), detections must be sorted into time bins (detection windows equivalent to the temporal resolution of the system). Detections in the same bin appear to occur at the same time because their individual detection times cannot be resolved any further. Thus in a two detector system, two unrelated, non-entangled particles may randomly strike both detectors, get sorted into the same time bin, and create a false-coincidence that adds noise to the signal. This limits coincidence counters to improving the signal-to-noise ratio to the extent that the quantum behavior can be studied, without removing the noise completely. History As of 1951, coincidence counting was described as "an important tool in experimental physics for a long time." In 1955, a seminal paper from the University of Glasgow suggested using "the coincidence counting technique of nuclear physics to measure the lifetime of excited atomic states." Every experiment to date that has been used to calculate Bell's inequalities, perform a quantum eraser, or conduct any experiment utilizing quantum entanglement as an information channel has only been possible through the use of coincidence counters. This unavoidably prevents superluminal communication since, even if a random or purposeful decision appears to be affecting events that have already transpired (as in the delayed choice quantum eraser), the signal from the past cannot be seen/decoded until the coincidence circuit has correlated both the past and future behavior. Thus the "signal" in the past is only visible after it is "sent" from the future, precluding quantum entanglement from being exploited for the purposes of faster-than-light communication or data time travel. See also Non-linear optics Delayed choice quantum eraser References Nonlinear optics Quantum mechanics Experimental physics
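The role of the detection window can be made concrete with a small simulation. The sketch below is illustrative only and is not drawn from the article: it generates hypothetical time tags for a pair source plus independent background counts at two detectors, sorts them into bins of one window width, and compares the measured coincidences with the expected number of accidentals. All rates and window widths are made-up parameters.

# Illustrative sketch (hypothetical parameters): counting coincidences
# between two detectors within a finite time bin, including accidentals.
import numpy as np

rng = np.random.default_rng(0)
T = 1.0          # total observation time, seconds (assumed)
window = 2e-9    # coincidence window, 2 ns (assumed)
pair_rate = 1e4  # entangled-pair detection rate, counts/s (assumed)
noise_rate = 5e4 # uncorrelated singles rate per detector (assumed)

# True pairs arrive at (nearly) the same time at both detectors.
pairs = rng.uniform(0, T, rng.poisson(pair_rate * T))
det_a = np.concatenate([pairs, rng.uniform(0, T, rng.poisson(noise_rate * T))])
det_b = np.concatenate([pairs, rng.uniform(0, T, rng.poisson(noise_rate * T))])

# Coincidence counting: sort each detector's time tags into bins of width
# `window`; a coincidence is any bin hit by both detectors.
bins_a = np.unique(np.floor(det_a / window).astype(np.int64))
bins_b = np.unique(np.floor(det_b / window).astype(np.int64))
coincidences = np.intersect1d(bins_a, bins_b).size

# Approximate expected accidentals for Poissonian singles: R1 * R2 * window * T
accidentals_est = (pair_rate + noise_rate) ** 2 * window * T
print("measured coincidences:", coincidences)
print("expected accidentals (approx.):", accidentals_est)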
Coincidence counting (physics)
[ "Physics" ]
548
[ "Theoretical physics", "Experimental physics", "Quantum mechanics" ]
9,517,361
https://en.wikipedia.org/wiki/Fluidized%20bed%20reactor
A fluidized bed reactor (FBR) is a type of reactor device that can be used to carry out a variety of multiphase chemical reactions. In this type of reactor, a fluid (gas or liquid) is passed through a solid granular material (usually a catalyst) at high enough speeds to suspend the solid and cause it to behave as though it were a fluid. This process, known as fluidization, imparts many important advantages to an FBR. As a result, FBRs are used for many industrial applications. Basic principles The solid substrate material (the catalytic material upon which chemical species react) in the fluidized bed reactor is typically supported by a porous plate, known as a distributor. The fluid is then forced through the distributor up through the solid material. At lower fluid velocities, the solids remain in place as the fluid passes through the voids in the material. This is known as a packed bed reactor. As the fluid velocity is increased, the reactor will reach a stage where the force of the fluid on the solids is enough to balance the weight of the solid material. This stage is known as incipient fluidization and occurs at the minimum fluidization velocity. Once this minimum velocity is surpassed, the contents of the reactor bed begin to expand and swirl around much like an agitated tank or boiling pot of water. The reactor is now a fluidized bed. Depending on the operating conditions and the properties of the solid phase, various flow regimes can be observed in this reactor. History and current uses Fluidized bed reactors are a relatively new tool in the chemical engineering field. The first fluidized bed gas generator was developed by Fritz Winkler in Germany in the 1920s. One of the first United States fluidized bed reactors used in the petroleum industry was the Catalytic Cracking Unit, created in Baton Rouge, LA in 1942 by the Standard Oil Company of New Jersey (now ExxonMobil). This FBR and the many to follow were developed for the oil and petrochemical industries. Here catalysts were used to break petroleum down into simpler compounds through a process known as cracking. The invention of this technology made it possible to significantly increase the production of various fuels in the United States. Today, fluidized bed reactors are still used to produce gasoline and other fuels, along with many other chemicals. Many industrially produced polymers are made using FBR technology, such as rubber, vinyl chloride, polyethylene, styrenes, and polypropylene. Various utilities also use FBRs for coal gasification, nuclear power plants, and water and waste treatment settings. Used in these applications, fluidized bed reactors allow for a cleaner, more efficient process than previous standard reactor technologies. Advantages The increase in fluidized bed reactor use in today's industrial world is largely due to the inherent advantages of the technology. Uniform particle mixing: Due to the intrinsic fluid-like behavior of the solid material, fluidized beds do not experience poor mixing as in packed beds. This complete mixing allows for a uniform product that can often be hard to achieve in other reactor designs. The elimination of radial and axial concentration gradients also allows for better fluid-solid contact, which is essential for reaction efficiency and quality. Uniform temperature gradients: Many chemical reactions require the addition or removal of heat. Local hot or cold spots within the reaction bed, often a problem in packed beds, are avoided in a fluidized situation such as an FBR.
In other reactor types, these local temperature differences, especially hotspots, can result in product degradation. Thus FBRs are well suited to exothermic reactions. Researchers have also learned that the bed-to-surface heat transfer coefficients for FBRs are high. Ability to operate reactor in continuous state: The fluidized bed nature of these reactors allows for the ability to continuously withdraw product and introduce new reactants into the reaction vessel. Operating at a continuous process state allows manufacturers to produce their various products more efficiently due to the removal of startup conditions in batch processes. Disadvantages As in any design, the fluidized bed reactor does have its draw-backs, which any reactor designer must take into consideration. Increased reactor vessel size: Because of the expansion of the bed materials in the reactor, a larger vessel is often required than that for a packed bed reactor. This larger vessel means that more must be spent on initial capital costs. Pumping requirements and pressure drop: The requirement for the fluid to suspend the solid material necessitates that a higher fluid velocity is attained in the reactor. In order to achieve this, more pumping power and thus higher energy costs are needed. In addition, the pressure drop associated with deep beds also requires additional pumping power. Particle entrainment: The high fluid velocities present in this style of reactor often result in fine particles becoming entrained in the fluid. These captured particles are then carried out of the reactor with the fluid, where they must be separated. This can be a very difficult and expensive problem to address depending on the design and function of the reactor. This may often continue to be a problem even with other entrainment reducing technologies. Lack of current understanding: Current understanding of the actual behavior of the materials in a fluidized bed is rather limited. It is very difficult to predict and calculate the complex mass and heat flows within the bed. Due to this lack of understanding, a pilot plant for new processes is required. Even with pilot plants, the scale-up can be very difficult and may not reflect what was experienced in the pilot trial. Erosion of internal components: The fluid-like behavior of the fine solid particles within the bed eventually results in the wear of the reactor vessel. This can require expensive maintenance and upkeep for the reaction vessel and pipes. Pressure loss scenarios: If fluidization pressure is suddenly lost, the surface area of the bed may be suddenly reduced. This can either be an inconvenience (e.g. making bed restart difficult), or may have more serious implications, such as runaway reactions (e.g. for exothermic reactions in which heat transfer is suddenly restricted). Current research and trends Due to the advantages of fluidized bed reactors, a large amount of research is devoted to this technology. Most current research aims to quantify and explain the behavior of the phase interactions in the bed. Specific research topics include particle size distributions, various transfer coefficients, phase interactions, velocity and pressure effects, and computer modeling. The aim of this research is to produce more accurate models of the inner movements and phenomena of the bed. This will enable chemical engineers to design better, more efficient reactors that may effectively deal with the current disadvantages of the technology and expand the range of FBR use. 
See also Chemical engineering Chemical looping combustion Chemical reactor Fluidized bed combustion Siemens process References Chemical reactors Industrial processes Fluidization
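As a numerical illustration of the incipient-fluidization condition described under "Basic principles", the sketch below estimates a minimum fluidization velocity with the widely used Wen and Yu (1966) correlation. The particle and fluid properties are made-up example values, not data from this article, and the result is only an order-of-magnitude estimate.

# Illustrative estimate of minimum fluidization velocity using the
# Wen & Yu correlation: Re_mf = sqrt(33.7^2 + 0.0408*Ar) - 33.7.
# All particle/fluid properties below are assumed example values.
import math

d_p = 300e-6    # particle diameter, m (assumed)
rho_p = 2500.0  # particle density, kg/m^3 (assumed, e.g. sand)
rho_f = 1.2     # fluid density, kg/m^3 (air at ~20 C)
mu = 1.8e-5     # fluid dynamic viscosity, Pa*s (air)
g = 9.81        # gravitational acceleration, m/s^2

# Archimedes number: buoyancy/gravity forces relative to viscous forces
Ar = rho_f * (rho_p - rho_f) * g * d_p**3 / mu**2

# Particle Reynolds number at minimum fluidization (Wen & Yu)
Re_mf = math.sqrt(33.7**2 + 0.0408 * Ar) - 33.7

# Minimum (superficial) fluidization velocity
u_mf = Re_mf * mu / (rho_f * d_p)
print(f"Ar = {Ar:.1f}, Re_mf = {Re_mf:.3f}, u_mf = {u_mf:.4f} m/s")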
Fluidized bed reactor
[ "Chemistry", "Engineering" ]
1,384
[ "Chemical reactors", "Fluidization", "Chemical equipment", "Chemical reaction engineering" ]
9,517,830
https://en.wikipedia.org/wiki/Los%20Angeles%20School
The Los Angeles School of Urbanism is an academic movement that emerged during the mid-1980s, loosely based at UCLA and the University of Southern California, which centers urban analysis on Los Angeles, California. The Los Angeles School redirects urban study away from notions of concentric zones and an ecological approach, used by the Chicago School during the 1920s, towards social polarization and fragmentation, hybridity of culture, subcultural analysis, and auto-driven sprawl. History The first published identification of the Los Angeles (L.A.) School as such was by Mike Davis in his popular urban history of Los Angeles, City of Quartz (1990). According to Davis, the school emerged informally during the mid-1980s when an eclectic variety of neo-Marxist scholars began publishing a series of articles and books dealing exclusively with Los Angeles. During the school's formation, Davis cautiously estimated that the school had about twenty members scattered throughout Southern California and beyond, with some members purportedly residing as far away as Frankfurt, Germany. Much of the work published by L.A. School members during the 1980s and early 1990s garnered considerable attention. However, while some members (e.g. Edward Soja and Mike Davis) became household names in urban theory, there was little consciousness of the school as its own entity, especially outside of Los Angeles. This changed in 1998, with the publication of an article by Michael J. Dear and Steven Flusty, which explicitly argued for the existence of a distinct L.A. School of Urbanism, whose various theories, concepts, and empirical works could be pooled together to constitute a radical new conception of ‘postmodern urbanism.’ After Dear and Flusty's publication, Dear popularized the school through the production of a series of articles and books, including a full-length edited volume comparing the L.A. School to the Chicago School. Though much of the work of the L.A. School is still widely read in urban studies, the school's membership has declined substantially in recent years. At a retirement party for Soja in 2008 at which many purported members were present, only Michael J. Dear appeared to be willing to envisage the school's continued existence. This situation reflects the vital conceptual disagreements between members of the LA School, and especially between Dear and the other members. Members There is no official list of present or historic members of the Los Angeles School of Urbanism. Some thinkers who are commonly considered members include: Michael Dear Mike Davis Steven Flusty Allen J. Scott Edward W. Soja Michael Storper Jennifer Wolch Ideas The L.A. School has no official doctrine, and there is great diversity in the works of its various members. Nevertheless, there are several influences, themes, and concepts which are relatively consistent in the school's scholarship. Perhaps the central characteristic of the thought of the L.A. School is a sustained focus on Los Angeles in both empirical and theoretical work, often with the underlying claim that L.A. is the paradigmatic American metropolis of the 20th and 21st centuries. More than this, the L.A. School poses a challenge to what many members see as the dominant Chicago School of Urbanism. While the Chicago School presents a modernist theory of cities as based on social Darwinist struggles for urban space, the Los Angeles School proposes a postmodern or postfordist vision. While not all members of the L.A. 
School identify as postmodernists, and in fact some (e.g. Mike Davis) are against the very concept, a focus on postmodernism is fundamental to many members of the L.A. School, who rely heavily upon theorists associated with postmodernism, such as Baudrillard, Foucault, Jameson, and Derrida. A further stream of work emerging from the LA School is represented by Scott and Storper's many publications on flexible specialization, agglomeration, and the economic dynamics of the contemporary metropolis. Scott and Storper's work differs from that of Dear and Soja by approaching urban theory from the perspective of postfordism rather than postmodernism. Scott and Storper represent one distinctive tendency in the LA School; Dear and Soja represent another. Criticism A number of criticisms have been raised against the Los Angeles School. In particular, critics question the L.A. School's fundamental claim that Los Angeles should be considered the paradigmatic postmodern American city. This stems both from external comparisons which have been made between Los Angeles and other cities, and findings that in certain cases urban phenomena in Los Angeles do not match those of other American cities. In a 2023 response to critiques of the LA School as "postmodern," Stefano Bloch and Thomas Brasdefer explore the work of Edward W. Soja, writing that "the neat categorization (of the LA School as postmodern) is one that is both real and imagined — as limiting as it is liberating." See also Post-Fordism Globalization Urban structure Urban theory Urbanism References External links LA School of Urbanism at University of Southern California Urban geography Urban planning Schools of thought
Los Angeles School
[ "Engineering" ]
1,062
[ "Urban planning", "Architecture" ]
9,517,883
https://en.wikipedia.org/wiki/Spongivore
A spongivore is an animal anatomically and physiologically adapted to eating animals of the phylum Porifera, commonly called sea sponges, for the main component of its diet. As a result of their diet, spongivore animals like the hawksbill turtle have developed a sharp, narrow, bird-like beak that allows them to reach within crevices on the reef to obtain sponges. Examples The hawksbill turtle is one of the few animals known to feed primarily on sponges. It is the only known spongivorous reptile. Sponges of various select species constitute up to 95% of the diets of Caribbean hawksbill turtle populations. Pomacanthus imperator, the emperor angelfish; Lactophrys bicaudalis, the spotted trunkfish; and Stephanolepis hispidus, the planehead filefish are known spongivorous coral reef fish. The rock beauty Holacanthus tricolor is also spongivorous, with sponges making up 96% of its diet. Certain species of nudibranchs are known to feed selectively on specific species of sponges. Attacks and counter-attacks Spongivore offense The many defenses displayed by sponges mean that spongivores need to learn skills to overcome these defenses in order to obtain their food. These skills allow spongivores to increase their feeding and use of sponges. Spongivores have three primary strategies for dealing with sponge defenses: choice based on colour, the ability to handle secondary metabolites, and brain development for memory. Choice based on colour influences which sponge the spongivore chooses to eat. A spongivore will bite a small sample of a sponge and, if unharmed, will continue eating that specific sponge and then move on to another sponge of the same colour. Spongivores have adapted to be able to handle the secondary metabolites that sponges produce. Therefore, spongivores are able to consume a variety of sponges without getting harmed. Spongivores also have enough brain development to be able to remember which species of sponge they have eaten in the past and will continue to eat in the future. Sponge defense A sponge defense is a trait that increases a sponge's fitness when faced with a spongivore. This is measured relative to another sponge that lacks the defensive trait. Sponge defenses increase survival and/or reproduction (fitness) of sponges under pressure of predation from a spongivore. Sponges use structural and chemical strategies to deter predation. One of the most common structural strategies that prevents sponges from being consumed by predators is the possession of spicules. If a sponge contains spicules along with organic compounds, the likelihood of that sponge being consumed by spongivores decreases. Sponges have also developed aposematism to help avoid predation. Spongivores have learned four things about sponge aposematism, and they are as follows: it is poisonous, so some predators will not eat it; it is conspicuously coloured, or advertises itself by means of some other signals; some predators avoid attacking it because of its signals; and these conspicuous signals provide better protection to the individual or to its genes than would other (e.g. cryptic) signals. Unfortunately, sponges that live in the deep sea are not at an advantage due to their colour because most colour in the deep sea is lost. Impacts Sponges play an important role in the benthic fauna throughout temperate, tropical and polar habitats. A high level of predation on sponges can therefore affect bioerosion, reef creation, multiple habitats, other species, and nitrogen levels. The bioerosion that produces reef sediments and breaks down the structural components of corals is partly carried out by sponges, which process solid carbonate into smaller fragments and fine sediments. Sponges also play a role in increasing the survival of live coral on Caribbean reefs by binding fragments together, which is expected to increase the rate of carbonate accretion. Coral reefs that contain higher amounts of sponges have better survival rates than reefs with fewer sponges. Sponges can act as a stabilizer during storms, as they help keep the reefs intact when presented with strong currents. Sponges also grow between rocks and boulders, providing a more stable environment and lowering the disturbance levels. Sponges also provide habitats for other organisms to live in; without them, these organisms would not have a protected habitat. Scientists have discovered that sponges play an important role in the nitrogen cycle. There are low amounts of nitrogen found in the water around coral reefs, and most of the nitrogen that is found is bound into particulate or dissolved organic matter. Before this dissolved organic matter is able to be used by other reef organisms, it must undergo a series of microbial transformations. The nitrogen cycling that occurs in sponges returns nitrogen to the water column, where it can be used by other organisms, especially cyanobacteria. The cyanobacteria can then fix atmospheric nitrogen, which the sponges can in turn use. Therefore, if there is a high number of spongivores present in an environment, it can affect other aspects of the environment besides sponges. References Carnivory Sponge biology Animals by eating behaviors
Spongivore
[ "Biology" ]
1,089
[ "Behavior", "Ethology", "Animals by eating behaviors", "Carnivory", "Eating behaviors" ]
9,518,854
https://en.wikipedia.org/wiki/Microscopic%20traffic%20flow%20model
Microscopic traffic flow models are a class of scientific models of vehicular traffic dynamics. In contrast to macroscopic models, microscopic traffic flow models simulate single vehicle-driver units, so the dynamic variables of the models represent microscopic properties like the position and velocity of single vehicles. Car-following models Also known as time-continuous models, all car-following models have in common that they are defined by ordinary differential equations describing the complete dynamics of the vehicles' positions x_α(t) and velocities v_α(t). It is assumed that the input stimuli of the drivers are restricted to their own velocity v_α, the net distance (bumper-to-bumper distance) s_α := x_{α−1} − x_α − l_{α−1} to the leading vehicle α−1 (where l_{α−1} denotes the vehicle length), and the velocity v_{α−1} of the leading vehicle. The equation of motion of each vehicle is characterized by an acceleration function F that depends on those input stimuli: dv_α/dt = F(v_α(t), s_α(t), v_{α−1}(t)). In general, the driving behavior of a single driver-vehicle unit α might not merely depend on the immediate leader α−1 but on the n_a vehicles in front. The equation of motion in this more generalized form reads: dv_α/dt = f(x_α, v_α, x_{α−1}, v_{α−1}, …, x_{α−n_a}, v_{α−n_a}). Examples of car-following models Optimal velocity model (OVM) Velocity difference model (VDIFF) Wiedemann model (1974) Gipps' model (Gipps, 1981) Intelligent driver model (IDM, 1999) DNN based anticipatory driving model (DDS, 2021) Cellular automaton models Cellular automaton (CA) models use integer variables to describe the dynamical properties of the system. The road is divided into sections of a certain length Δx and the time is discretized to steps of Δt. Each road section can either be occupied by a vehicle or empty, and the dynamics are given by update rules of the form: v_α^{t+1} = f(s_α^t, v_α^t, v_{α−1}^t, …), x_α^{t+1} = x_α^t + v_α^{t+1} (the simulation time t is measured in units of Δt and the vehicle positions x_α in units of Δx). The time scale is typically given by the reaction time of a human driver, Δt = 1 s. With Δt fixed, the length Δx of the road sections determines the granularity of the model. At a complete standstill, the average road length occupied by one vehicle is approximately 7.5 meters. Setting Δx to this value leads to a model where one vehicle always occupies exactly one section of the road and a velocity of 5 corresponds to 5 Δx/Δt = 37.5 m/s = 135 km/h, which is then set to be the maximum velocity a driver wants to drive at. However, in such a model, the smallest possible acceleration would be Δx/(Δt)² = 7.5 m/s², which is unrealistic. Therefore, many modern CA models use a finer spatial discretization, for example Δx = 1.5 m, leading to a smallest possible acceleration of 1.5 m/s². Although cellular automaton models lack the accuracy of the time-continuous car-following models, they still have the ability to reproduce a wide range of traffic phenomena. Due to the simplicity of the models, they are numerically very efficient and can be used to simulate large road networks in real-time or even faster. Examples of cellular automaton models Rule 184 Biham–Middleton–Levine traffic model Nagel–Schreckenberg model (NaSch, 1992) See also Microsimulation References Road traffic management Mathematical modeling Traffic flow
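The cellular-automaton update rules above can be made concrete with the Nagel–Schreckenberg model listed among the examples. The following sketch is a straightforward single-lane implementation on a circular road; the lane length, vehicle density, and randomization probability are arbitrary illustration values, not prescriptions from the article.

# Single-lane Nagel-Schreckenberg cellular automaton on a ring road.
# Cell length ~7.5 m, time step ~1 s, v_max = 5 cells/step (~135 km/h).
# Density and the randomization probability p are arbitrary example values.
import numpy as np

rng = np.random.default_rng(42)
L, density, v_max, p, steps = 200, 0.2, 5, 0.3, 100

pos = np.sort(rng.choice(L, size=int(density * L), replace=False))
vel = np.zeros(len(pos), dtype=int)

for _ in range(steps):
    # gap (number of empty cells) to the car ahead, periodic boundary
    gaps = (np.roll(pos, -1) - pos - 1) % L
    vel = np.minimum(vel + 1, v_max)   # 1. acceleration
    vel = np.minimum(vel, gaps)        # 2. braking to avoid collisions
    slow = rng.random(len(vel)) < p
    vel = np.maximum(vel - slow, 0)    # 3. random slowdown
    pos = (pos + vel) % L              # 4. movement

flow = vel.mean() * density            # vehicles per time step per cell
print(f"mean speed = {vel.mean():.2f} cells/step, flow = {flow:.3f} veh/step/cell")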
Microscopic traffic flow model
[ "Mathematics" ]
610
[ "Applied mathematics", "Mathematical modeling" ]
9,519,077
https://en.wikipedia.org/wiki/Parallel%20I/O
Parallel I/O, in the context of a computer, means the performance of multiple input/output operations at the same time, for instance simultaneous outputs to storage devices and display devices. It is a fundamental feature of operating systems. One particular instance is parallel writing of data to disk; when file data is spread across multiple disks, for example in a RAID array, one can store multiple parts of the data at the same time, thereby achieving higher write speeds than with a single device. Other ways of parallel access to data include the Parallel Virtual File System, Lustre, GFS, etc. Features Scientific computing Parallel I/O is used for scientific computing rather than for databases. Support is broken up into multiple layers, including a high-level I/O library, a middleware layer, and a parallel file system. The parallel file system manages the single unified view, maintains the logical space, and provides access to data files. Storage A single file may be striped across one or more object storage targets, which increases both the bandwidth available when accessing the file and the available disk space. Caches are larger in parallel I/O systems and are shared across distributed memory systems. Breakthroughs Companies have been running parallel I/O on their servers to improve price and performance. Parallel processing is especially critical for scientific calculations where applications are not only CPU-bound but also I/O-bound. See also Converged infrastructure Dynamic infrastructure References Concurrency (computer science) Input/output
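As a concrete example of the middleware layer mentioned above, the sketch below uses MPI-IO (through the mpi4py bindings) to have several processes write disjoint slices of one shared file concurrently. The file name, array size, and data type are arbitrary example choices, and the mpi4py calls shown are assumed from its standard API and should be checked against the installed version.

# Hedged sketch: each MPI rank writes its own contiguous slice of a single
# shared file using MPI-IO explicit-offset writes (via mpi4py).
# Run with e.g.: mpiexec -n 4 python parallel_write.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
n_local = 1_000_000                        # elements per rank (assumed)
data = np.full(n_local, rank, dtype=np.float64)

amode = MPI.MODE_WRONLY | MPI.MODE_CREATE
fh = MPI.File.Open(comm, "output.bin", amode)

# Each rank writes at its own byte offset, so all ranks write concurrently.
offset = rank * data.nbytes
fh.Write_at_all(offset, data)              # collective write at explicit offset
fh.Close()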
Parallel I/O
[ "Technology" ]
282
[ "Computing stubs" ]