Columns: id (int64, 580–79M), url (string, length 31–175), text (string, length 9–245k), source (string, length 1–109), categories (string, 160 classes), token_count (int64, 3–51.8k)
5,063,927
https://en.wikipedia.org/wiki/Sodium%20cyanoborohydride
Sodium cyanoborohydride is a chemical compound with the formula Na[BH3(CN)]. It is a colourless salt used in organic synthesis for chemical reduction, including that of imines and carbonyls. Sodium cyanoborohydride is a milder reductant than other conventional reducing agents. Structure Sodium cyanoborohydride is a salt. The cationic sodium ion, [Na]+, interacts with the anionic cyanoborohydride ion, [BH3(CN)]−. The anionic component of the salt is tetrahedral at the boron atom. The electron-withdrawing cyanide substituent draws electron density away from the negatively charged boron, thus reducing the hydride-donating ability of the anion. This electronic effect gives sodium cyanoborohydride milder reducing qualities than other reducing agents. For example, Na[BH3(CN)] is less reducing than its counterpart sodium borohydride, which contains [BH4]−. Uses Sodium cyanoborohydride is a mild reducing agent. It is generally used for the reduction of imines. These reactions occur at pH < 7 because the iminium ions are the actual substrates. Reductive amination, sometimes called the Borch reaction, is the conversion of a carbonyl into an amine through an intermediate imine. The carbonyl is first treated with ammonia to promote imine formation by nucleophilic attack. The imine is then reduced to an amine by sodium cyanoborohydride. This reaction works on both aldehydes and ketones. The carbonyl can be treated with ammonia, a primary amine, or a secondary amine to produce, respectively, 1°, 2°, and 3° amines. Aromatic ketones and aldehydes can be reductively deoxygenated using sodium cyanoborohydride, meaning that the carbonyl oxygen is removed completely from the molecule. Deoxygenation using sodium cyanoborohydride is often done in the presence of trimethylsilyl chloride (TMSCl). Preparation Sodium cyanoborohydride can be purchased from most chemical suppliers. It can be synthesized by combining sodium cyanide and borane-tetrahydrofuran. Selectivity Since sodium cyanoborohydride is a mild reducing agent, it gives good chemoselectivity for reaction with certain functional groups in the presence of others. For example, sodium cyanoborohydride is generally incapable of reducing amides, ethers, esters and lactones, nitriles, or epoxides. Therefore, it can selectively reduce some functionalities in the presence of others. Some examples of selective reduction include: Reduction of iminium ions in the presence of carbonyls Reduction of aldehydes in the presence of ketones and esters Reduction of aldehydes in the presence of thioesters The selectivity of this reducing agent makes it an important tool in organic synthesis, allowing specific modifications to be made to complex organic molecules. History Georg Wittig was the first to synthesize a cyanoborohydride, by treating lithium borohydride with hydrogen cyanide in 1951. The corresponding compound, sodium cyanoborohydride, was synthesized following a similar rationale by reacting sodium borohydride with hydrogen cyanide. The synthesis was later refined to use sodium cyanide and borane in THF, making the process safer. See also Sodium triacetoxyborohydride – a milder reductant, but unstable in water Sodium borohydride – a stronger, cheaper reductant References Borohydrides Sodium compounds Reducing agents
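The reductive amination sequence described under Uses can be written schematically (a sketch; R and R' are generic placeholders for the carbonyl substituents and the amine, not symbols from the source):

\begin{align*}
\mathrm{R_2C{=}O} + \mathrm{R'NH_2} &\rightleftharpoons \mathrm{R_2C{=}NR'} + \mathrm{H_2O} && \text{(imine/iminium formation, pH} < 7\text{)}\\
\mathrm{R_2C{=}NR'} + \mathrm{Na[BH_3(CN)]} &\longrightarrow \mathrm{R_2CH{-}NHR'} && \text{(hydride delivery by cyanoborohydride)}
\end{align*}

With a primary amine R'NH2 the product is a secondary amine; ammonia and secondary amines give primary and tertiary amines, respectively, as stated above.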
Sodium cyanoborohydride
Chemistry
795
24,444,233
https://en.wikipedia.org/wiki/Comparison%20of%20pumps
This article lists different types of pump and provides a comparison of certain key design features. Different types of pumps are suitable for different applications; for example, a pump's maximum lift height determines the applications it can be used for. Low-lift pumps are only suitable for pumping surface water (e.g., irrigation or drainage of land), while high-lift pumps allow deep-water pumping (e.g., pumping potable water from wells). Direct lift devices Displacement pumps Velocity pumps Buoyancy pumps Impulse pumps Note: reciprocating pumps are cyclic, rotary pumps are typically continuous. References Technological comparisons
Comparison of pumps
Physics,Chemistry,Technology
134
36,172,905
https://en.wikipedia.org/wiki/Chinese%20standard%20movement
The Chinese Standard Movement, also commonly known as the "Tongji" (Chinese: 统机, "unified") movement, is a mechanical watch movement that was developed in the People's Republic of China during its fourth Five-Year Plan in the 1970s. It was designed by engineers from several early Chinese watch factories as part of a Ministry of Light Industry initiative to consolidate the industry, and with a few exceptions it became mandatory for all factories to discontinue the production of their own movements and to mass-produce the standard movement. Because of this, the production of the standard movement defines an entire era in the history of Chinese watchmaking. Once the most commonly produced mechanical/automatic watch movements in China, the numbers produced and their quality (at least for a majority of produced movements) have since declined significantly; today the movement lives on typically in simple (even crude) automatic and skeletonized (i.e. using hollowed-out parts and segments such that the inner workings are more visible) variants, usually installed in cheaply produced watches made in China as well. History Origins By the late 1960s, the Chinese watch industry had matured, with good quality and quantity of output from various factories. To build upon this, the 4th Five Year Plan called for a program of 'consolidation' for the industry, in which a standardized watch design would be manufactured in factories in (almost) all provinces. The resultant movement is known as 统一机芯 (Tongyi Jixin, "Unified Movement") in Chinese, often abbreviated to 统机 (Tongji). The prototype SZ-1 was developed by a design group formed by engineers from many units. The project commenced in 1969 under the guidance of the Ministry of Light Industry, drawing upon the resources of Shanghai Clock & Watch Industry Company, Shanghai Watch Factory, Shanghai No. 2 Watch Factory, Tianjin Clock & Watch Factory, Beijing, Liaoning, Guangzhou & Xi'an Hongqi Watch Factories, Xi'an Fenglei Meters & Watch Company, together with the Clock & Watch Research Institute of the Ministry of Light Industry in Xi'an, and the technicians and scholars of timing instruments of Tianjin University. The group studied many foreign watch designs and combined the merits of them for the prototype SZ-1. Blueprints were finalized in November 1971. The resultant design most closely resembles the Enicar AR1010, found in one of the limited range of Swiss watches sold in China at that time; however, there is no evidence of Enicar involvement in the SZ-1 project. A substantially larger version of the same design, designated HJ1A, was developed by Jilin Watch Factory for use in pocket watches. Mass production Once production of the new watch was established in existing factories, many new factories were built also to make the standard watch. In most factories the complete watch was manufactured in-house, thus the required skills and technologies were distributed more widely across the nation. By the end of the 1970s there were more than 30 complete watch manufacturing enterprises in China; and possibly as many as 50. Watch production in China increased from 6.564 million in 1974 to 33.01 million in 1982. About 82% of Chinese watches produced in 1983 had Standard movements. 
Decline Though the movement was the predominantly-produced watch movement in China until sometime in the 1980s, its manufacture was not immune to the quartz crisis of the watch industry that occurred during that decade; changes in economic policy, replacement designs, factory closings, and the re-purposing of a number of Chinese watch-producing facilities would contribute to declines in its manufacture. Furthermore, the return of Hong Kong to China in 1997 (until then, Hong Kong had been producing its own movements, both quartz and mechanical) also reduced the dominance of the Chinese Standard Movement in terms of numbers manufactured. Current production The movement is frequently seen today in its skeletonized and simple automatic variants in watches whose list prices range between US$10–100. The quality of a majority of movements has declined significantly since its initial manufacture in the 1970s, and in spite of its design (which is considered to be very good) it has since earned a reputation for poor quality, mostly due to quality control and manufacturing quality issues of the facilities in China which still produce this movement. Even at this late stage new variants continue to be developed. The effort involved in such work is a sign that good quality Standard movements will continue to be available from at least a few sources. Liaoning Watch Factory is producing a new automatic standard movement distinguishable by a wider auto-winding bridge that partly covers the mainspring barrel. This has also been seen in combination with a skeleton base movement with a more elaborate cut and decoration than most Standard skeletons. LWF may also be responsible for a new Standard-based open-heart movement, in which the balance has been relocated to the dial side. In 2008 the Shandong Liaocheng Zhong Tai Watch Company introduced a new skeleton version on a 33mm main plate with a simple auto-winding module on the 'magic-lever' principle. All of these variants have been enthusiastically adopted by the many new lower-priced Shenzhen-based brands such as Fineat, and some foreign watch companies such as Invicta. Significance of the Standard Movement The project to establish the Standard watch originally aimed to make a steel-cased 17 jewel watch available to, and within the means of, almost any worker in the People's Republic of China. The often elaborate case-backs and signed crowns of many vintage Standard watches are a testimony to the pride of the local enterprises that built them. The distributed production of a standard design via a vertically-integrated business model, i.e. a single enterprise building the whole watch, has provided a foundation of skills and technology on which the modern Chinese watch industry is built. With greater international market competition a greater horizontal integration in the industry has emerged, but this is possible only due to the skills and technology already in place. Production details and specifications The standard movement was designed to have fewer parts than other similar movements, so that it was easier to produce and service, while at the same time maintaining high accuracy and reliability. The basic specification of the Standard wristwatch caliber is a minimum of 17 jewels, 21,600 bph (beats per hour) escapement, a minimum 40-hour power reserve and an average rate within +/-30 seconds per day. The movement is manufactured in a number of grades (from high to low) in both automatic and manual-winding forms. 
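As a quick arithmetic check on the specification quoted above (using only the stated figures):

\[
\frac{21{,}600\ \text{beats/h}}{3{,}600\ \text{s/h}} = 6\ \text{beats/s} = 3\ \text{Hz}, \qquad \frac{30\ \text{s}}{86{,}400\ \text{s/day}} \approx 0.035\%,
\]

so the escapement makes six half-oscillations per second, and the ±30 seconds per day tolerance corresponds to a rate error of roughly ±0.035%.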
Initially manufactured exclusively by Chinese companies (i.e. state-controlled watch manufacturers), variants of the Chinese Standard Movement can be found in all grades and both forms, including in a number of watches whose marques are not Chinese but are still manufactured in China. References Watches Timekeeping components Chinese inventions
Chinese standard movement
Technology
1,383
501,158
https://en.wikipedia.org/wiki/Intermediate%20filament
Intermediate filaments (IFs) are cytoskeletal structural components found in the cells of vertebrates, and many invertebrates. Homologues of the IF protein have been noted in an invertebrate, the cephalochordate Branchiostoma. Intermediate filaments are composed of a family of related proteins sharing common structural and sequence features. Initially designated 'intermediate' because their average diameter (10 nm) is between those of narrower microfilaments (actin) and wider myosin filaments found in muscle cells, the diameter of intermediate filaments is now commonly compared to actin microfilaments (7 nm) and microtubules (25 nm). Animal intermediate filaments are subcategorized into six types based on similarities in amino acid sequence and protein structure. Most types are cytoplasmic, but one type, Type V is a nuclear lamin. Unlike microtubules, IF distribution in cells shows no good correlation with the distribution of either mitochondria or endoplasmic reticulum. Structure The structure of proteins that form intermediate filaments (IF) was first predicted by computerized analysis of the amino acid sequence of a human epidermal keratin derived from cloned cDNAs. Analysis of a second keratin sequence revealed that the two types of keratins share only about 30% amino acid sequence homology but share similar patterns of secondary structure domains. As suggested by the first model, all IF proteins appear to have a central alpha-helical rod domain that is composed of four alpha-helical segments (named as 1A, 1B, 2A and 2B) separated by three linker regions. The central building block of an intermediate filament is a pair of two intertwined proteins that is called a coiled-coil structure. This name reflects the fact that the structure of each protein is helical, and the intertwined pair is also a helical structure. Structural analysis of a pair of keratins shows that the two proteins that form the coiled-coil bind by hydrophobic interactions. The charged residues in the central domain do not have a major role in the binding of the pair in the central domain. Cytoplasmic IFs assemble into non-polar unit-length filaments (ULFs). Identical ULFs associate laterally into staggered, antiparallel, soluble tetramers, which associate head-to-tail into protofilaments that pair up laterally into protofibrils, four of which wind together into an intermediate filament. Part of the assembly process includes a compaction step, in which ULF tighten and assume a smaller diameter. The reasons for this compaction are not well understood, and IF are routinely observed to have diameters ranging between 6 and 12 nm. The N-terminus and the C-terminus of IF proteins are non-alpha-helical regions and show wide variation in their lengths and sequences across IF families. The N-terminal "head domain" binds DNA. Vimentin heads are able to alter nuclear architecture and chromatin distribution, and the liberation of heads by HIV-1 protease may play an important role in HIV-1 associated cytopathogenesis and carcinogenesis. Phosphorylation of the head region can affect filament stability. The head has been shown to interact with the rod domain of the same protein. C-terminal "tail domain" shows extreme length variation between different IF proteins. The anti-parallel orientation of tetramers means that, unlike microtubules and microfilaments, which have a plus end and a minus end, IFs lack polarity and cannot serve as basis for cell motility and intracellular transport. 
Also, unlike actin or tubulin, intermediate filaments do not contain a binding site for a nucleoside triphosphate. Cytoplasmic IFs do not undergo treadmilling like microtubules and actin fibers, but are dynamic. Biomechanical properties IFs are rather deformable proteins that can be stretched several times their initial length. The key to facilitate this large deformation is due to their hierarchical structure, which facilitates a cascaded activation of deformation mechanisms at different levels of strain. Initially the coupled alpha-helices of unit-length filaments uncoil as they're strained, then as the strain increases they transition into beta-sheets, and finally at increased strain the hydrogen bonds between beta-sheets slip and the ULF monomers slide along each other. Types There are about 70 different human genes coding for various intermediate filament proteins. However, different kinds of IFs share basic characteristics: In general, they are all polymers that measure between 9–11 nm in diameter when fully assembled. Animal IFs are subcategorized into six types based on similarities in amino acid sequence and protein structure: Types I and II – acidic and basic keratins These proteins are the most diverse among IFs and constitute type I (acidic) and type II (basic) IF proteins. The many isoforms are divided in two groups: epithelial keratins (about 20) in epithelial cells (image to right) trichocytic keratins (about 13) (hair keratins), which make up hair, nails, horns and reptilian scales. Regardless of the group, keratins are either acidic or basic. Acidic and basic keratins bind each other to form acidic-basic heterodimers and these heterodimers then associate to make a keratin filament. Cytokeratin filaments laterally associate with each other to create a thick bundle of ~50 nm radius. The optimal radius of such bundles is determined by the interplay between the long range electrostatic repulsion and short range hydrophobic attraction. Subsequently, these bundles would intersect through junctions to form a dynamic network, spanning the cytoplasm of epithelial cells. Type III There are four proteins classed as type III intermediate filament proteins, which may form homo- or heteropolymeric proteins. Desmin IFs are structural components of the sarcomeres in muscle cells and connect different cell organelles like the desmosomes with the cytoskeleton. Glial fibrillary acidic protein (GFAP) is found in astrocytes and other glia. Peripherin found in peripheral neurons. Vimentin, the most widely distributed of all IF proteins, can be found in fibroblasts, leukocytes, and blood vessel endothelial cells. They support the cellular membranes, keep some organelles in a fixed place within the cytoplasm, and transmit membrane receptor signals to the nucleus. Syncoilin is an atypical type III IF protein. Type IV Alpha-internexin Neurofilaments – the type IV family of intermediate filaments that is found in high concentrations along the axons of vertebrate neurons. Synemin Syncoilin Type V – nuclear lamins Lamins Lamins are fibrous proteins having structural function in the cell nucleus. In metazoan cells, there are A and B type lamins, which differ in their length and pI. Human cells have three differentially regulated genes. B-type lamins are present in every cell. B type lamins, lamin B1 and B2, are expressed from the LMNB1 and LMNB2 genes on 5q23 and 19q13, respectively. A-type lamins are only expressed following gastrulation. 
Lamin A and C are the most common A-type lamins and are splice variants of the LMNA gene found at 1q21. These proteins localize to two regions of the nuclear compartment: the nuclear lamina, a proteinaceous layer subjacent to the inner surface of the nuclear envelope, and the nucleoplasm, where they form the nucleoplasmic veil. Comparison of the lamins to vertebrate cytoskeletal IFs shows that lamins have an extra 42 residues (six heptads) within coil 1b. The C-terminal tail domain contains a nuclear localization signal (NLS), an Ig-fold-like domain, and in most cases a carboxy-terminal CaaX box that is isoprenylated and carboxymethylated (lamin C does not have a CaaX box). Lamin A is further processed to remove the last 15 amino acids and its farnesylated cysteine. During mitosis, lamins are phosphorylated by MPF, which drives the disassembly of the lamina and the nuclear envelope. Type VI Beaded filaments: Filensin, Phakinin. Nestin (once proposed for reclassification but, owing to its differences, retained as a type VI IF protein) Vertebrate-only. Related to type I-IV. Used to contain other newly discovered IF proteins not yet assigned to a type. Function Cell adhesion At the plasma membrane, some keratins or desmin interact with desmosomes (cell-cell adhesion) and hemidesmosomes (cell-matrix adhesion) via adapter proteins. Associated proteins Filaggrin binds to keratin fibers in epidermal cells. Plectin links vimentin to other vimentin fibers, as well as to microfilaments, microtubules, and myosin II. Kinesin is being researched and is suggested to connect vimentin to tubulin via motor proteins. Keratin filaments in epithelial cells link to desmosomes (desmosomes connect the cytoskeleton together) through plakoglobin, desmoplakin, desmogleins, and desmocollins; desmin filaments are connected in a similar way in heart muscle cells. Diseases arising from mutations in IF genes Dilated cardiomyopathy (DCM), mutations in the DES gene Arrhythmogenic cardiomyopathy (ACM), mutations in the DES gene Restrictive cardiomyopathy (RCM), mutations in the DES gene Non-compaction cardiomyopathy, mutations in the DES gene Cardiomyopathy in combination with skeletal myopathy (DES) Epidermolysis bullosa simplex; keratin 5 or keratin 14 mutation Laminopathies are a family of diseases caused by mutations in nuclear lamins and include Hutchinson-Gilford progeria syndrome and various lipodystrophies and cardiomyopathies among others. In other organisms IF proteins are universal among animals in the form of a nuclear lamin. The Hydra has an additional "nematocilin" derived from the lamin. Cytoplasmic IFs (type I-IV) are only found in Bilateria; they also arose from a gene duplication event involving "type V" nuclear lamin. In addition, a few other diverse types of eukaryotes have lamins, suggesting an early origin of the protein. There was not really a concrete definition of an "intermediate filament protein", in the sense that a size- or shape-based definition does not cover a monophyletic group. With the inclusion of unusual proteins like the network-forming beaded lamins (type VI), the current classification is moving to a clade containing nuclear lamin and its many descendants, characterized by sequence similarity as well as the exon structure. Functionally similar proteins outside this clade, like crescentins, alveolins, tetrins, and epiplasmins, are therefore only "IF-like". They likely arose through convergent evolution. References Further reading External links Protein families Cytoskeleton
Intermediate filament
Biology
2,453
44,632,934
https://en.wikipedia.org/wiki/Conservative%20replacement
A conservative replacement (also called a conservative mutation or a conservative substitution or a homologous replacement) is an amino acid replacement in a protein that changes a given amino acid to a different amino acid with similar biochemical properties (e.g. charge, hydrophobicity and size). Conversely, a radical replacement, or radical substitution, is an amino acid replacement that exchanges an initial amino acid for a final amino acid with different physicochemical properties. Description There are 20 naturally occurring amino acids; however, some of these share similar characteristics. For example, leucine and isoleucine are both aliphatic, branched hydrophobes. Similarly, aspartic acid and glutamic acid are both small, negatively charged residues. Although there are many ways to classify amino acids, they are often sorted into six main classes on the basis of their structure and the general chemical characteristics of their side chains (R groups). Physicochemical distances aim at quantifying the intra-class and inter-class dissimilarity between amino acids based on their measurable properties, and many such measures have been proposed in the literature. Owing to their simplicity, two of the most commonly used measures are those of Grantham (1974) and Miyata et al. (1979). A conservative replacement is therefore an exchange between two amino acids separated by a small physicochemical distance. Conversely, a radical replacement is an exchange between two amino acids separated by a large physicochemical distance. Impact on function Conservative replacements in proteins tend to perturb function less than non-conservative replacements. This reduced effect on function can also be seen in the pattern of replacements observed in nature: non-conservative replacements between proteins are far more likely to be removed by natural selection due to their deleterious effects. See also Segregating site Ultra-conserved element Sequence alignment Sequence alignment software References Biochemistry Amino acids
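As an illustration of the class-based view described above, here is a minimal Python sketch that labels a substitution as conservative when both residues fall in the same side-chain class. The six classes used are one common illustrative grouping chosen for this sketch, not the specific scheme or distance measures cited in the article.

import itertools

# One common six-class grouping of the 20 standard amino acids (one-letter codes).
# Chosen for illustration only; published schemes (e.g. Grantham distances) differ.
SIDE_CHAIN_CLASSES = {
    "special": {"C", "G", "P"},
    "aliphatic": {"A", "V", "L", "I", "M"},
    "aromatic": {"F", "W", "Y"},
    "polar_uncharged": {"S", "T", "N", "Q"},
    "positively_charged": {"K", "R", "H"},
    "negatively_charged": {"D", "E"},
}

def classify_substitution(original: str, replacement: str) -> str:
    """Return 'conservative' if both residues share a side-chain class, else 'radical'."""
    for members in SIDE_CHAIN_CLASSES.values():
        if original in members and replacement in members:
            return "conservative"
    return "radical"

# Examples from the article: Leu->Ile (both aliphatic) and Asp->Glu (both acidic)
# come out conservative, whereas Asp->Trp crosses classes and is radical.
print(classify_substitution("L", "I"))  # conservative
print(classify_substitution("D", "E"))  # conservative
print(classify_substitution("D", "W"))  # radical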
Conservative replacement
Chemistry,Biology
394
1,118,963
https://en.wikipedia.org/wiki/Nonribosomal%20peptide
Nonribosomal peptides (NRP) are a class of peptide secondary metabolites, usually produced by microorganisms like bacteria and fungi. Nonribosomal peptides are also found in higher organisms, such as nudibranchs, but are thought to be made by bacteria inside these organisms. While there exist a wide range of peptides that are not synthesized by ribosomes, the term nonribosomal peptide typically refers to a very specific set of these as discussed in this article. Nonribosomal peptides are synthesized by nonribosomal peptide synthetases, which, unlike the ribosomes, are independent of messenger RNA. Each nonribosomal peptide synthetase can synthesize only one type of peptide. Nonribosomal peptides often have cyclic and/or branched structures, can contain non-proteinogenic amino acids including D-amino acids, carry modifications like N-methyl and N-formyl groups, or are glycosylated, acylated, halogenated, or hydroxylated. Cyclization of amino acids against the peptide "backbone" is often performed, resulting in oxazolines and thiazolines; these can be further oxidized or reduced. On occasion, dehydration is performed on serines, resulting in dehydroalanine. This is just a sampling of the various manipulations and variations that nonribosomal peptides can perform. Nonribosomal peptides are often dimers or trimers of identical sequences chained together or cyclized, or even branched. Nonribosomal peptides are a very diverse family of natural products with an extremely broad range of biological activities and pharmacological properties. They are often toxins, siderophores, or pigments. Nonribosomal peptide antibiotics, cytostatics, and immunosuppressants are in commercial use. Examples Antibiotics Actinomycin Bacitracin Calcium dependent antibiotic Daptomycin Vancomycin Teixobactin Tyrocidine Gramicidin Zwittermicin A Antibiotic precursors ACV-Tripeptide Cytostatics Epothilone Fabclavine Bleomycin Immunosuppressants Ciclosporin (Cyclosporine A) Siderophores Pyoverdine Enterobactin Myxochelin A Pigments Indigoidine Toxins Microcystins and Nodularins, cyanotoxins from cyanobacteria. Nitrogen storage polymers Cyanophycin – produced by some cyanobacteria Phytotoxins HC-toxin – a virulence factor made by the plant pathogenic fungus Cochliobolus (Helminthosporium) carbonum AM-toxin – made by the plant pathogenic fungus Alternaria alternata pv. Mali victorin – a chlorinated cyclic pentapeptide made by the pathogenic fungus Cochliobolus victoriae. Its nonribosomal synthesis has not been established. Biosynthesis Nonribosomal peptides are synthesized by one or more specialized nonribosomal peptide-synthetase (NRPS) enzymes. The NRPS genes for a certain peptide are usually organized in one operon in bacteria and in gene clusters in eukaryotes. However the first fungal NRP to be found was ciclosporin. It is synthesized by a single 1.6MDa NRPS. The enzymes are organized in modules that are responsible for the introduction of one additional amino acid. Each module consists of several domains with defined functions, separated by short spacer regions of about 15 amino acids. The biosynthesis of nonribosomal peptides shares characteristics with the polyketide and fatty acid biosynthesis. Due to these structural and mechanistic similarities, some nonribosomal peptide synthetases contain polyketide synthase modules for the insertion of acetate or propionate-derived subunits into the peptide chain. 
Note that as many as 10 percent of bacterial NRPS are not laid out as large modular proteins, but as separate enzymes. Some NRPS modules deviate from the standard domain structure, and some extra domains have been described. There are also NRPS enzymes that serve as a scaffold for other modifications to the substrate to incorporate unusual amino acids. Modules The order of modules and domains of a complete nonribosomal peptide synthetase is as follows: Initiation or Starting module: [F/NMT]-A-PCP- Elongation or Extending modules: -(C/Cy)-[NMT]-A-PCP-[E]- Termination or Releasing module: -(TE/R) (Order: N-terminus to C-terminus; []: optional; (): alternatives) Domains F: Formylation (optional) A: Adenylation (required in a module) PCP: Thiolation and peptide carrier protein with attached 4'-phospho-pantetheine (required in a module) C: Condensation forming the amide bond (required in a module) Cy: Cyclization into thiazolines or oxazolines (optional) Ox: Oxidation of thiazolines or oxazolines to thiazoles or oxazoles (optional) Red: Reduction of thiazolines or oxazolines to thiazolidines or oxazolidines (optional) E: Epimerization into D-amino acids (optional) NMT: N-methylation (optional) TE: Termination by a thio-esterase (only found once in an NRPS) R: Reduction to terminal aldehyde or alcohol (optional) X: Recruits cytochrome P450 enzymes (optional) Starting stage Loading: The first amino acid is activated with ATP as a mixed acyl-phosphoric acid anhydride with AMP by the A-domain and loaded onto the serine-attached 4'-phospho-pantetheine (4'PP) sidechain of the PCP-domain, catalyzed by the PCP-domain (thiolation). Some A domains require interaction with MbtH-like proteins for their activity. Sometimes the amino group of the bound amino acid is formylated by an F-domain or methylated by an NMT-domain. Elongation stages Loading: Analogous to the starting stage, each module loads its specific amino acid onto its PCP-domain. Condensation: The C-domain catalyzes the amide bond formation between the thioester group of the growing peptide chain from the previous module and the amino group of the current module. The extended peptide is now attached to the current PCP-domain. Condensation-Cyclization: Sometimes the C-domain is replaced by a Cy-domain, which, in addition to the amide bond formation, catalyzes the reaction of the serine, threonine, or cysteine sidechain with the amide nitrogen, thereby forming oxazolines (from serine or threonine) or thiazolines (from cysteine). Epimerization: Sometimes an E-domain epimerizes the innermost amino acid of the peptide chain into the D-configuration. This cycle is repeated for each elongation module. Termination stage Termination: The TE-domain (thioesterase domain) hydrolyzes the completed polypeptide chain from the PCP-domain of the previous module, thereby often forming cyclic amides (lactams) or cyclic esters (lactones). Alternatively, the peptide can be released by an R-domain that reduces the thioester bond to a terminal aldehyde or alcohol. Processing The final peptide is often modified, e.g., by glycosylation, acylation, halogenation, or hydroxylation. The responsible enzymes are usually associated with the synthetase complex and their genes are organized in the same operons or gene clusters. Priming and deblocking To become functional, the 4'-phospho-pantetheine sidechain of acyl-CoA molecules has to be attached to the PCP-domain by 4'PP transferases (Priming) and the S-attached acyl group has to be removed by specialized associated thioesterases (TE-II) (Deblocking).
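The module grammar given above (initiation, elongation, termination) can be restated in executable form. The following is a minimal, hypothetical Python model written only to illustrate the domain order; the domain names follow the list above, the function and constants are invented for this sketch, and nothing here reflects a real NRPS software package.

# Minimal sketch of the NRPS module grammar described above.
# Only required domains are modelled; the optional domains in brackets are omitted.
INITIATION = ["A", "PCP"]            # [F/NMT]-A-PCP-
ELONGATION = ["C", "A", "PCP"]       # -(C/Cy)-[NMT]-A-PCP-[E]-  (Cy may replace C)
TERMINATION = ["TE"]                 # -(TE/R)  (R may replace TE)

def build_synthetase(n_residues: int) -> list[list[str]]:
    """Return the minimal domain layout for a peptide of n_residues amino acids."""
    if n_residues < 2:
        raise ValueError("an NRPS assembles at least two residues")
    modules = [INITIATION] + [ELONGATION] * (n_residues - 1)
    modules[-1] = modules[-1] + TERMINATION  # releasing domain sits on the last module
    return modules

# Example: a hypothetical tripeptide needs one initiation and two elongation modules,
# with the thioesterase on the final module.
for i, module in enumerate(build_synthetase(3), start=1):
    print(f"module {i}: {'-'.join(module)}")
# module 1: A-PCP
# module 2: C-A-PCP
# module 3: C-A-PCP-TE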
Substrate specificities Most domains have a very broad substrate specificity and usually only the A-domain determines which amino acid is incorporated in a module. Ten amino acids that control substrate specificity and can be considered the 'codons' of nonribosomal peptide synthesis have been identified, and rational protein design has yielded methodologies to computationally switch the specificities of A-domains. The condensation C-domain is also believed to have substrate specificity, especially if located behind an epimerase E-domain-containing module, where it functions as a 'filter' for the epimerized isomer. Computational methods, such as SANDPUMA and NRPSpredictor2, have been developed to predict substrate specificity from DNA or protein sequence data. Mixed with polyketides Due to the similarity with polyketide synthases (PKS), many secondary metabolites are, in fact, fusions of NRPs and polyketides. In essence, this occurs when PK modules follow NRP modules, and vice versa. Although there is a high degree of similarity between the Carrier (PCP/ACP) domains of both types of synthetases, the mechanism of condensation is different from a chemical standpoint: in PKS, carbon-carbon bonds are formed through a Claisen condensation reaction; in NRPs, the C domain catalyzes amide bond formation between the amino acid it adds to the chain (on the PCP of one module) and the nascent peptide (on the PCP of the next module). See also Epothilone Esterase Ribosomally synthesized and post-translationally modified peptides References Further reading Molecular biology Enzymes Glycopeptide antibiotics Antibiotics Peptides
Nonribosomal peptide
Chemistry,Biology
2,137
2,976,342
https://en.wikipedia.org/wiki/Equivariant%20cohomology
In mathematics, equivariant cohomology (or Borel cohomology) is a cohomology theory from algebraic topology which applies to topological spaces with a group action. It can be viewed as a common generalization of group cohomology and an ordinary cohomology theory. Specifically, the equivariant cohomology ring of a space $X$ with an action of a topological group $G$ is defined as the ordinary cohomology ring with coefficient ring $\Lambda$ of the homotopy quotient $EG \times_G X$: $H_G^*(X;\Lambda) = H^*(EG \times_G X;\Lambda)$. If $G$ is the trivial group, this is the ordinary cohomology ring of $X$, whereas if $X$ is contractible, it reduces to the cohomology ring of the classifying space $BG$ (that is, the group cohomology of $G$ when $G$ is finite). If $G$ acts freely on $X$, then the canonical map $EG \times_G X \to X/G$ is a homotopy equivalence and so one gets $H_G^*(X;\Lambda) \cong H^*(X/G;\Lambda)$. Definitions It is also possible to define the equivariant cohomology of $X$ with coefficients in a $G$-module $A$; these are abelian groups. This construction is the analogue of cohomology with local coefficients. If $X$ is a manifold, $G$ a compact Lie group and $\Lambda$ is the field of real numbers or the field of complex numbers (the most typical situation), then the above cohomology may be computed using the so-called Cartan model (see equivariant differential forms). The construction should not be confused with other cohomology theories, such as Bredon cohomology or the cohomology of invariant differential forms: if $G$ is a compact Lie group, then, by the averaging argument, any form may be made invariant; thus, cohomology of invariant differential forms does not yield new information. Koszul duality is known to hold between equivariant cohomology and ordinary cohomology. Relation with groupoid cohomology For a Lie groupoid, the equivariant cohomology of a smooth manifold is a special example of the groupoid cohomology of a Lie groupoid. This is because, given a $G$-space $X$ for a compact Lie group $G$, there is an associated groupoid $[X/G]$ whose equivariant cohomology groups can be computed using the Cartan complex, which is the totalization of the de Rham double complex of the groupoid. The terms in the Cartan complex are $\left(S(\mathfrak{g}^*) \otimes \Omega^*(X)\right)^G$, where $S(\mathfrak{g}^*)$ is the symmetric algebra of the dual Lie algebra of the Lie group $G$, and the superscript denotes the $G$-invariant forms. This is a particularly useful tool for computing the cohomology of the classifying space $BG$ for a compact Lie group $G$, since this can be computed as the cohomology of the groupoid $[\mathrm{pt}/G]$, where the action on a point is trivial; then $H^*_G(\mathrm{pt}) = S(\mathfrak{g}^*)^G$. For example, $H^*_{U(1)}(\mathrm{pt}) = S(\mathfrak{u}(1)^*) \cong \mathbb{R}[t]$ with $t$ in degree 2, since the $U(1)$-action on the dual Lie algebra is trivial. Homotopy quotient The homotopy quotient, also called homotopy orbit space or Borel construction, is a "homotopically correct" version of the orbit space (the quotient of $X$ by its $G$-action) in which $X$ is first replaced by a larger but homotopy equivalent space so that the action is guaranteed to be free. To this end, construct the universal bundle EG → BG for G and recall that EG admits a free G-action. Then the product EG × X (which is homotopy equivalent to X since EG is contractible) admits a "diagonal" G-action defined by (e,x).g = (eg,g−1x); moreover, this diagonal action is free since it is free on EG. So we define the homotopy quotient XG to be the orbit space (EG × X)/G of this free G-action. In other words, the homotopy quotient is the associated X-bundle over BG obtained from the action of G on a space X and the principal bundle EG → BG. This bundle X → XG → BG is called the Borel fibration. An example of a homotopy quotient The following example is Proposition 1 of . Let X be a complex projective algebraic curve.
We identify $X$ as a topological space with the set of its complex points, which is a compact Riemann surface. Let $G$ be a complex simply connected semisimple Lie group. Then any principal $G$-bundle on $X$ is isomorphic to a trivial bundle, since the classifying space $BG$ is 2-connected and $X$ has real dimension 2. Fix some smooth $G$-bundle $P$ on $X$; then any principal $G$-bundle on $X$ is isomorphic to $P$. In other words, the set of all isomorphism classes of pairs consisting of a principal $G$-bundle on $X$ and a complex-analytic structure on it can be identified with the set of complex-analytic structures on $P$, or equivalently the set of holomorphic connections on $X$ (since connections are integrable for dimension reasons). This set is an infinite-dimensional complex affine space and is therefore contractible. Let $\mathcal{G}$ be the group of all automorphisms of $P$ (i.e., the gauge group). Then the homotopy quotient of this affine space by $\mathcal{G}$ classifies complex-analytic (or equivalently algebraic) principal $G$-bundles on $X$; i.e., it is precisely the classifying space of the discrete group . One can define the moduli stack of principal bundles as the quotient stack of this affine space by the gauge group, and then the homotopy quotient is, by definition, the homotopy type of this moduli stack. Equivariant characteristic classes Let $E$ be an equivariant vector bundle on a $G$-manifold $M$. It gives rise to a vector bundle on the homotopy quotient $EG \times_G M$ that pulls back to the bundle $EG \times E$ over $EG \times M$. An equivariant characteristic class of $E$ is then an ordinary characteristic class of this bundle, which is an element of the completion of the cohomology ring $H^*(EG \times_G M) = H^*_G(M)$. (In order to apply Chern–Weil theory, one uses a finite-dimensional approximation of EG.) Alternatively, one can first define an equivariant Chern class and then define other characteristic classes as invariant polynomials of Chern classes as in the ordinary case; for example, the equivariant Todd class of an equivariant line bundle is the Todd function evaluated at the equivariant first Chern class of the bundle. (An equivariant Todd class of a line bundle is a power series (not a polynomial as in the non-equivariant case) in the equivariant first Chern class; hence, it belongs to the completion of the equivariant cohomology ring.) In the non-equivariant case, the first Chern class can be viewed as a bijection between the set of all isomorphism classes of complex line bundles on a manifold $M$ and $H^2(M;\mathbb{Z})$. In the equivariant case, this translates to: the equivariant first Chern class gives a bijection between the set of all isomorphism classes of equivariant complex line bundles and $H^2_G(M;\mathbb{Z})$. Localization theorem The localization theorem is one of the most powerful tools in equivariant cohomology. See also Equivariant differential form Kirwan map Localization formula for equivariant cohomology GKM variety Bredon cohomology Notes References Relation to stacks PDF page 10 has the main result with examples. Further reading External links Excellent survey article describing the basics of the theory and the main important theorems What is the equivariant cohomology of a group acting on itself by conjugation? Algebraic topology Homotopy theory Symplectic topology Group actions (mathematics)
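As a worked illustration of the statement above that a free action lets one compute equivariant cohomology from the ordinary quotient (a standard textbook example, not drawn from this article's references): the antipodal action of $\mathbb{Z}/2$ on the sphere $S^n$ is free, so

\[
H^*_{\mathbb{Z}/2}(S^n;\,\mathbb{Z}/2) \;\cong\; H^*(S^n/(\mathbb{Z}/2);\,\mathbb{Z}/2) \;=\; H^*(\mathbb{RP}^n;\,\mathbb{Z}/2) \;\cong\; (\mathbb{Z}/2)[x]/(x^{n+1}), \qquad \deg x = 1.
\]

In contrast, for the trivial action on a point one gets $H^*_{\mathbb{Z}/2}(\mathrm{pt};\mathbb{Z}/2) = H^*(B\mathbb{Z}/2;\mathbb{Z}/2) = H^*(\mathbb{RP}^\infty;\mathbb{Z}/2) \cong (\mathbb{Z}/2)[x]$, the group cohomology of $\mathbb{Z}/2$, as noted in the definition.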
Equivariant cohomology
Physics,Mathematics
1,577
2,248,699
https://en.wikipedia.org/wiki/Sulfoxide
In organic chemistry, a sulfoxide, also called a sulphoxide, is an organosulfur compound containing a sulfinyl (S=O) functional group attached to two carbon atoms. It is a polar functional group. Sulfoxides are oxidized derivatives of sulfides. Examples of important sulfoxides are alliin, a precursor to the compound that gives freshly crushed garlic its aroma, and dimethyl sulfoxide (DMSO), a common solvent. Structure and bonding Sulfoxides feature relatively short S–O distances. In DMSO, the S–O distance is 1.531 Å. The sulfur center is pyramidal; the sum of the angles at sulfur is about 306°. Sulfoxides are generally represented with the structural formula R−S(=O)−R', where R and R' are organic groups. The bond between the sulfur and oxygen atoms is intermediate between a dative bond and a polarized double bond. The double-bond resonance form implies 10 electrons around sulfur (10-S-3 in N-X-L notation). The double-bond character of the S−O bond may be accounted for by donation of electron density into C−S antibonding orbitals ("no-bond" resonance forms in valence-bond language). Nevertheless, due to its simplicity and lack of ambiguity, the IUPAC recommends use of the expanded octet double-bond structure to depict sulfoxides, rather than the dipolar structure or structures that invoke "no-bond" resonance contributors. The S–O interaction has an electrostatic aspect, resulting in significant dipolar character, with negative charge centered on oxygen. Chirality A lone pair of electrons resides on the sulfur atom, giving it tetrahedral electron-pair geometry and trigonal pyramidal shape (steric number 4 with one lone pair; see VSEPR theory). When the two organic residues are dissimilar, the sulfur atom is a chiral center, for example, in methyl phenyl sulfoxide. The energy barrier required to invert this stereocenter is sufficiently high that sulfoxides are optically stable near room temperature. That is, the rate of racemization is slow at room temperature. The enthalpy of activation for racemization is in the range 35–42 kcal/mol and the corresponding entropy of activation is in the range −8 to +4 cal/(mol·K). The barriers are lower for allylic and benzylic substituents. Preparation Sulfoxides are typically prepared by oxidation of sulfides, sometimes referred to as sulfoxidation. Hydrogen peroxide is a typical oxidant, but periodate has also been used. In these oxidations, care is required to avoid overoxidation to the sulfone. For example, dimethyl sulfide is oxidized to dimethyl sulfoxide and then further to dimethyl sulfone. Unsymmetrical sulfides are prochiral; thus, their oxidation gives chiral sulfoxides. This process can be performed enantioselectively. Symmetrical sulfoxides can be formed from a diorganylzinc compound and liquid sulfur dioxide. Aryl sulfoxides In addition to the oxidation routes, diaryl sulfoxides can be prepared by two Friedel–Crafts arylations of sulfur dioxide using an acid catalyst: 2 ArH + SO2 → Ar2SO + H2O Both aryl sulfinyl chlorides and diaryl sulfoxides can also be prepared from arenes through reaction with thionyl chloride in the presence of Lewis acid catalysts such as BiCl3, Bi(OTf)3, LiClO4, or NaClO4. Reactions Deoxygenation and oxygenation Sulfoxides undergo deoxygenation to give sulfides. Typically metal complexes are used to catalyze the reaction, using hydrosilanes as the stoichiometric reductant.
The deoxygenation of dimethylsulfoxide is catalyzed by DMSO reductase, a molybdoenzyme: OSMe2 + 2e− + 2 H+ → SMe2 + H2O Acid-base reactions The α-CH groups of alkyl sulfoxides are susceptible to deprotonation by strong bases, such as sodium hydride: CH3S(O)CH3 + NaH → CH3S(O)CH2Na + H2 In the Pummerer rearrangement, alkyl sulfoxides react with acetic anhydride to give migration of the oxygen from sulfur to the adjacent carbon as an acetate ester. The first step of the reaction sequence involves the sulfoxide oxygen acting as a nucleophile: Elimination reactions Sulfoxides undergo thermal elimination via an Ei mechanism to yield alkenes and sulfenic acids. CH3S(O)CH2CH2R → CH3SOH + CH2=CHR The sulfenic acids are powerful antioxidants, but lack long-term stability. Some parent sulfoxides are therefore marketed as antioxidant polymer stabilisers. Structures based on thiodipropionate esters are popular. The reverse reaction is possible. Coordination chemistry Sulfoxides, especially DMSO, form coordination complexes with transition metals. Depending on the hard-soft properties of the metal, the sulfoxide binds through either the sulfur or the oxygen atom. The latter is particularly common. Applications and occurrence DMSO is a widely used solvent. The sulfoxide functional group occurs in several drugs. Notable is esomeprazole, the optically pure form of the proton-pump inhibitor omeprazole. Other commercially important sulfoxides include armodafinil. Methionine sulfoxide forms from the amino acid methionine and its accumulation is associated with aging. The enzyme DMSO reductase catalyzes the interconversion of DMSO and dimethylsulfide. Naturally-occurring chiral sulfoxides include alliin and ajoene. Further reading References Functional groups
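Returning to the racemization barrier quoted under Chirality above, a rough transition-state estimate shows why pyramidal inversion at sulfur is negligible at room temperature. Assuming, for illustration, a free energy of activation of about 36 kcal/mol (near the middle of the quoted enthalpy range, neglecting the small entropy term) and T = 298 K, the Eyring equation gives

\[
k = \frac{k_\mathrm{B} T}{h}\, e^{-\Delta G^{\ddagger}/RT} \approx \left(6.2 \times 10^{12}\ \mathrm{s^{-1}}\right) e^{-36/0.59} \approx 2 \times 10^{-14}\ \mathrm{s^{-1}}, \qquad t_{1/2} = \frac{\ln 2}{k} \approx 3 \times 10^{13}\ \mathrm{s},
\]

i.e. a racemization half-life on the order of a million years, consistent with the statement that sulfoxides are optically stable near room temperature.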
Sulfoxide
Chemistry
1,309
37,797,383
https://en.wikipedia.org/wiki/Vector%20measuring%20current%20meter
A vector measuring current meter (VMCM) is an instrument used for obtaining measurements of horizontal velocity in the upper ocean, which exploits two orthogonal cosine response propeller sensors that directly measure the components of horizontal velocity. VMCM was developed in the late 1970s by Drs. Robert Weller and Russ Davis and commercially produced by EG&G Sealink System (currently EdgeTech). The instrument has the capability of one year long deployment at depths of up to 5000 m. Both laboratory and field test results show that the VMCM is capable of making accurate measurements of horizontal velocity in the upper ocean. The VMCM is the current standard for making high quality velocity measurements in near-surface regions and it has been used for benchmarking other current meters. Equipment The main components of a VMCM are its two orthogonal cosine response propeller sensors, that directly measure the components of horizontal velocity parallel to their axes. The orientation of the instrument with respect to magnetic north is sensed with a flux-gate compass, which permits to evaluate the direction of flux, providing the angle of the Y axis with respect to the magnetic North. A microprocessor rotates the X-Y coordinates in the conventional east–west and north–south components of velocity. This is done once each sample interval and, at the end of the record interval, the conventional components of velocity are averaged and the averages are stored on a cassette magnetic tape. Other components of the system are a bearing retainer, an end cap, an outer bearing race, a ball retainer and bearing balls, an encoder and an epoxy or Noryl plastic disk with four magnets, pressure window, an aluminum disk, two magnetodiodes mounted asymmetrically on a printed circuit ring, a hub, and a shaft with inner races machined in it. The function of the magnetodiodes is detecting the rotation of the propeller sensors. Incorporated in the system there is the vector averaging electronics, that uses the pulses from the magnetodiodes and the instrument heading from the flux-gate compass to calculate and record the velocity components. In the 1990s, Way et al. upgraded the electronics by redesigning the vector measuring circuitry, data acquisition, and storage components and retaining instead the propeller sensors assembly, which proved to be reliable in the several tests accomplished. A pressure case houses the electronics and the appendage on which the propellers are mounted on. In its first design of the late 1970s, a VMCM was approximately 2.56 m high and had a mass of 34.5 kg in air. The original VMCM is no longer commercially available from EG&G (currently EdgeTech). The 1970s electronics components are outdated and difficult, if not impossible, to find. Like many of the electronic components the original flux gate compass is no longer available. Propeller sensors The innovation brought from VMCM over other current meters results from the choice of the biaxial propeller sensors, developed with accurate cosine response, and the design of the instrument so that flow interference with the instrument body was minimized. "Cosine response" refers to propellers that only respond to the component of flow parallel to their axis of rotation. Their revolution rate is then proportional to the magnitude of the flow times the cosine of the angle between the axle and the flow vector. 
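Stated as a formula (a restatement of the cosine-response property just described; the symbols are chosen here for illustration), a propeller whose axis makes an angle α with a horizontal flow of speed |v| turns at a rate

\[
\omega \propto |\mathbf{v}| \cos\alpha ,
\]

so two such propellers mounted at right angles in the horizontal plane respond to |v| cos α and |v| sin α, i.e. directly to the two orthogonal components of the horizontal velocity.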
If the angular response function of the propellers is cosinusoidal, then two such sensors at right angles with their axes in the horizontal plane measure orthogonal components of horizontal velocity directly. No computation of components is necessary (though they are rotated from the instrument reference frame into the conventional east–west and north–south components), and summing the components accomplishes the vector averaging. The advantages of a propeller with cosine response have been widely recognized. Weller and Davis designed the propeller sensors and their location within the pressure cage in order to obtain a response as close as possible to an ideal cosinusoidal angular response. After fabricating and testing several families of propellers, they found the best response in a dual propeller (two propellers fixed on an axle) sensor with two five-bladed, 30-degree pitch propellers with a diameter of 22 cm. The propellers are hard anodized, epoxy coated on the exterior, and protected by zinc anodes. They have been made from polycarbonate plastic (LEXAN) and, more recently, from Noryl. Propeller sensors make use of a Cartesian coordinate system and provide orthogonal velocity components in the horizontal plane. The measured coordinates need only be rotated into the conventional east–west and north–south directions. Pressure cage The pressure case houses the electronics and the appendage on which the propellers are mounted. It is fabricated from 6A1-4V titanium alloy rod (1.27 cm diameter), which has a higher yield strength than steel and superior resistance to corrosion and metal fatigue in seawater. Designed in this way, the pressure cage is capable of taking tensions of up to 10,000 pounds and holds the electronics and the propeller sensors in isolation from the tension. This permits safe operation down to 5,000 m depth. Early on, the propeller bearings were a source of failure. After considerable testing, the bearings were upgraded from polycarbonate plastic to silicon nitride and, as a result of this change, there have not been any bearing failures. Data logger/controller In the early 1990s, Brian S. Way et al. developed a new version of the VMCM and greatly improved the electronic system. The new version of the VMCM includes as primary subunits the vector measuring front-end (consisting of rotor and compass hardware interface) and a low-power microcontroller to accomplish the sampling. Initial sampling setup (e.g. sample rate, averaging interval, calibration factors) is set by command from an Onset Computer (Tattletale 8, TT8). However, actual sampling and computation of vector averages are handled in the VMCM front-end subunit. A Microchip Technology PIC microcontroller handles all of these tasks, producing current vector North and East (Vn and Ve) readings at the desired interval. In standard operation with the new version of VMCM, the PIC microcontroller in the VMCM front-end samples the rotors and compass at the rate set by the TT8 initially. At each sample, rotor and compass readings are accumulated for vector-averaging and, at the chosen sample interval, the vector averages Vn and Ve are relayed to the TT8 for further processing and/or storage.
User interface / Setup software The main setup program gives the user the ability to choose from the following commands: record interval, which parameters to log (it is possible to add measurement of other parameters such as temperature, conductivity, oxygen, word time updated with each record, tilt, battery voltage), sample intervals for each selected parameter, start time to begin logging, and end time to stop the logger. In the new version of the VMCM, the ease and flexibility of setting up and adding sensors has decreased the time needed for pre-deployment instrument preparation in port. How VMCM computes horizontal velocity The two orthogonal cosine response propeller sensors directly measure the components of horizontal velocity parallel to their axes. The flux-gate compass senses the orientation of the instrument with respect to magnetic North and permits the direction of flow to be evaluated. The microprocessor rotates the coordinates into the conventional east–west and north–south components of velocity. This is done once each sample interval and, at the end of the record interval, the conventional components of velocity are averaged and the averages are stored. The rotation of the propeller sensors is detected by the magnetodiodes. As a result of the asymmetry in placement of the magnetodiodes, a staggered pair of pulses is produced each quarter revolution; the phase relationship indicates the sense of direction of rotation and the pulse rate indicates the rate of rotation. In order to calculate and record the velocity components, the vector averaging circuitry is turned on by a rotor count, which is signaled by a proper sequence of changes in the levels of the magnetodiodes. The instrument heading (θ) is determined and stored in a register and updated at a 1-Hz rate (once each second). If either propeller rotates sufficiently (the original version of VMCM had a speed threshold of less than one centimeter per second), a pair of pulses is produced by the magnetodiodes of one hub and a count occurs from the rotor. Then, the cosine and sine of the heading (that is currently stored in the heading register) are added to the proper registers that store the two orthogonal velocity components. To accomplish this, at the end of each sampling interval over which the averaging is performed, the following sums are evaluated: $X = \sum_{i=1}^{N}\cos\theta_i - \sum_{j=1}^{M}\sin\theta_j$ and $Y = \sum_{i=1}^{N}\sin\theta_i + \sum_{j=1}^{M}\cos\theta_j$ (with signs fixed by the sensed direction of rotation of each propeller), where N is the number of quarter revolutions by the sensor oriented east–west when θ = 0, M is the number of quarter revolutions by the other sensor, and θi and θj are the headings of the instrument in the heading register when the ith and jth pairs of pulses were supplied by the two propeller sensors. The velocity components are stored in 12-bit registers and, at the end of each sampling interval, they are written as 16-bit words (12 bits of data, 4 bits identifying the channel) on a flash drive support (in its original design of the late 1970s, a cassette tape with more limited storage capacity was used). The instruments typically record the average velocity components every sample interval and the time every hour. Two other channels of information, such as temperature and pressure, can be recorded. Various sample intervals can be selected. As the vector averaging circuitry is turned on only when a pair of magnetodiode pulses occurs, the current drain is proportional to the flow rate of the water.
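The vector-averaging logic described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the instrument firmware: the function and variable names are invented, each quarter-revolution event is assumed to carry the heading recorded at that instant, and the calibration constant converting pulse counts to speed is an assumed placeholder.

import math

# Hypothetical sketch of VMCM-style vector averaging.
# events_x: headings (radians) recorded at each quarter-revolution of the sensor
#           whose axis points east-west when the heading is zero.
# events_y: headings recorded at each quarter-revolution of the orthogonal sensor.
# pulses_per_metre: assumed calibration constant (quarter revolutions per metre of flow).
def vector_average(events_x, events_y, interval_s, pulses_per_metre=100.0):
    east = sum(math.cos(t) for t in events_x) - sum(math.sin(t) for t in events_y)
    north = sum(math.sin(t) for t in events_x) + sum(math.cos(t) for t in events_y)
    scale = 1.0 / (pulses_per_metre * interval_s)   # pulse counts -> metres per second
    return east * scale, north * scale              # mean (east, north) velocity

# Example: 120 pulses on each sensor over a 60 s interval with a steady 45-degree heading.
heading = math.radians(45.0)
u_east, v_north = vector_average([heading] * 120, [heading] * 120, interval_s=60.0)
print(f"east component: {u_east:.3f} m/s, north component: {v_north:.3f} m/s")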
Comparison with other measuring instruments Based on the intercomparison of test data obtained from the VMCM and from other measuring instruments such as the Aanderaa, VACM, electromagnetic current meters, and ACM, the VMCM sensor has been found to introduce the least error in relatively small mean flows when high-frequency oscillatory fluctuations (because of surface waves, mooring motion, or both) are also present. This quality, together with the accuracy of the propeller sensors demonstrated in steady flows, unsteady flows, and combinations of both, makes the VMCM appropriate for making accurate measurements in the upper ocean. References Oceanography Oceans
Vector measuring current meter
Physics,Environmental_science
2,146
518,397
https://en.wikipedia.org/wiki/Angle%20of%20repose
The angle of repose, or critical angle of repose, of a granular material is the steepest angle of descent or dip relative to the horizontal plane on which the material can be piled without slumping. At this angle, the material on the slope face is on the verge of sliding. The angle of repose can range from 0° to 90°. The morphology of the material affects the angle of repose; smooth, rounded sand grains cannot be piled as steeply as can rough, interlocking sands. The angle of repose can also be affected by additions of solvents. If a small amount of water is able to bridge the gaps between particles, electrostatic attraction of the water to mineral surfaces increases the angle of repose, and related quantities such as the soil strength. When bulk granular materials are poured onto a horizontal surface, a conical pile forms. The internal angle between the surface of the pile and the horizontal surface is known as the angle of repose and is related to the density, surface area and shapes of the particles, and the coefficient of friction of the material. Material with a low angle of repose forms flatter piles than material with a high angle of repose. The term has a related usage in mechanics, where it refers to the maximum angle at which an object can rest on an inclined plane without sliding down. This angle is equal to the arctangent of the coefficient of static friction μs between the surfaces. Applications of theory The angle of repose is sometimes used in the design of equipment for the processing of particulate solids. For example, it may be used to design an appropriate hopper or silo to store the material, or to size a conveyor belt for transporting the material. It can also be used in determining whether or not a slope (of a stockpile, or uncompacted gravel bank, for example) would likely collapse; the talus slope is derived from the angle of repose and represents the steepest slope a pile of granular material can take. This angle of repose is also crucial in correctly calculating stability in vessels. It is also commonly used by mountaineers as a factor in analysing avalanche danger in mountainous areas. Formulation If the coefficient of static friction μs of a material is known, then a good approximation of the angle of repose can be made with the function tan θ = μs, i.e. θ = arctan(μs), where θ is the angle of repose. This function is somewhat accurate for piles where individual objects in the pile are minuscule and piled in random order. A simple free body diagram can be used to understand the relationship between the angle of repose and the stability of the material on the slope. For the heaped material to resist collapse, the frictional force must balance the component of the gravitational force acting along the slope, mg sin θ, where m is the mass of the material, g is the gravitational acceleration and θ is the slope angle. The frictional force is equal to the product of the coefficient of static friction μs and the normal force, N = mg cos θ. Setting mg sin θ = μs mg cos θ gives tan θ = μs, where θ is the angle of repose, or the angle at which the slope fails under regular conditions, and μs is the coefficient of static friction of the material on the slope. Measurement There are numerous methods for measuring angle of repose and each produces slightly different results. Results are also sensitive to the exact methodology of the experimenter. As a result, data from different labs are not always comparable. One method is the triaxial shear test, another is the direct shear test. The measured angle of repose may vary with the method used, as described below.
Tilting box method This method is appropriate for fine-grained, non-cohesive materials with individual particle size less than 10 mm. The material is placed within a box with a transparent side to observe the granular test material. It should initially be level and parallel to the base of the box. The box is slowly tilted until the material begins to slide in bulk, and the angle of the tilt is measured. Fixed funnel method The material is poured through a funnel to form a cone. The tip of the funnel should be held close to the growing cone and slowly raised as the pile grows, to minimize the impact of falling particles. Stop pouring the material when the pile reaches a predetermined height or the base a predetermined width. Rather than attempt to measure the angle of the resulting cone directly, divide the height by half the width of the base of the cone. The inverse tangent of this ratio is the angle of repose. Revolving cylinder method The material is placed within a cylinder with at least one transparent end. The cylinder is rotated at a fixed speed, and the observer watches the material move within it. The effect is similar to watching clothes tumble over one another in a slowly rotating clothes dryer. The granular material assumes a certain angle as it flows within the rotating cylinder. This method is recommended for obtaining the dynamic angle of repose, which may vary from the static angle of repose measured by other methods. Of various materials Here is a list of various materials and their angle of repose. All measurements are approximated. With different supports Different supports modify the shape of the pile, in the illustrations below sand piles, although angles of repose remain the same. Exploitation by antlion and wormlion (Vermileonidae) larvae The larvae of the antlions and the unrelated wormlions Vermileonidae trap small insects such as ants by digging conical pits in loose sand, such that the slope of the walls is effectively at the critical angle of repose for the sand. They achieve this by flinging the loose sand out of the pit and permitting the sand to settle at its critical angle of repose as it falls back. Thus, when a small insect, commonly an ant, blunders into the pit, its weight causes the sand to collapse below it, drawing the victim toward the center where the predator that dug the pit lies in wait under a thin layer of loose sand. The larva assists this process by vigorously flicking sand out from the center of the pit when it detects a disturbance. This undermines the pit walls and causes them to collapse toward the center. The sand that the larva flings also pelts the prey with loose rolling material that prevents it from getting any foothold on the easier slopes that the initial collapse of the slope has presented. The combined effect is to bring the prey down to within grasp of the larva, which then can inject venom and digestive fluids. In geotechnics See also The angle of repose plays a part in several topics of technology and science, including: Aeolian processes Barchan Bulk cargo Concrete slump test Grade (slope) Mass wasting Oceanic trench Retaining wall Rotary kiln Sand volcano References Particulates Shear strength Soil mechanics Engineering concepts
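As a concrete illustration of the fixed funnel measurement described above, and of the arctangent relation given in the Formulation section, the short sketch below computes an angle of repose from a measured pile height and base width; the numerical values (pile dimensions and friction coefficient) are illustrative assumptions, not data from this article.

import math

def angle_of_repose_from_pile(height_m: float, base_width_m: float) -> float:
    """Fixed funnel method: angle whose tangent is height divided by half the base width."""
    return math.degrees(math.atan(height_m / (base_width_m / 2.0)))

def angle_of_repose_from_friction(mu_s: float) -> float:
    """Mechanics relation: theta = arctan(mu_s) for an object resting on an inclined plane."""
    return math.degrees(math.atan(mu_s))

# Assumed (illustrative) measurements: a 0.30 m high cone with a 0.90 m wide base.
print(round(angle_of_repose_from_pile(0.30, 0.90), 1))   # ~33.7 degrees
# Assumed (illustrative) coefficient of static friction.
print(round(angle_of_repose_from_friction(0.6), 1))      # ~31.0 degrees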
Angle of repose
Physics,Chemistry,Engineering
1,375
22,256,076
https://en.wikipedia.org/wiki/Thermal%20neutral%20zone
Endothermic organisms known as homeotherms maintain internal temperatures with minimal metabolic regulation within a range of ambient temperatures called the thermal neutral zone (TNZ). Within the TNZ the basal rate of heat production is equal to the rate of heat loss to the environment. Homeothermic organisms adjust to temperatures within the TNZ through different responses requiring little energy. Environmental temperatures can cause fluctuations in a homeothermic organism's metabolic rate. This response is due to the energy required to maintain a relatively constant body temperature above ambient temperature by controlling heat loss and heat gain. The degree of this response depends not only on the species, but also on the levels of insulative and metabolic adaptation. Environmental temperatures below the TNZ, that is, below the lower critical temperature (LCT), require an organism to increase its metabolic rate to meet the environmental demands for heat. Regulation around the TNZ requires metabolic heat production once the LCT is reached, as heat is lost to the environment. The organism reaches the LCT when the Ta (ambient temperature) decreases. When an organism reaches this stage, the metabolic rate increases significantly and thermogenesis increases the Tb (body temperature). If the Ta continues to decrease far below the LCT, hypothermia occurs. Alternatively, evaporative heat loss for cooling occurs at temperatures above the TNZ, beyond the upper critical temperature (UCT) (Speakman and Keijer 2013). When the Ta rises too far above the UCT, the rate of heat gain and the rate of heat production become higher than the rate of heat dissipation (heat loss through evaporative cooling), resulting in hyperthermia. An organism can show postural changes, altering its body shape or moving to expose different areas to the sun or shade, so that heat exchange occurs through radiation, convection and conduction. Vasomotor responses allow control of the flow of blood between the periphery and the core to control heat loss from the surface of the body. Lastly, the organism can show insulation adjustments; a common example is "goosebumps" in humans, where hair follicles are raised by pilomotor muscles, a mechanism also seen in animals' pelage and plumage. In humans The thermoneutral zone describes a range of temperatures of the immediate environment in which a standard healthy adult can maintain normal body temperature without needing to use energy above and beyond the normal basal metabolic rate. It starts at approximately for normal weight men and at around for those who are overweight and extends towards circa . Note this is for a resting human and does not allow for shivering, sweating or exercising. Even with light clothing, radiation and convection losses are dramatically reduced, effectively reducing the TNZ. Hence, a comfortable temperature in a heated building may be 18–22 degrees Celsius (64.4–71.6 degrees Fahrenheit). Humans produce an obligatory of heat energy at rest as a by-product of basic processes like pumping blood, digesting, breathing, biochemical synthesis and catabolism. This is comparable to a common incandescent light bulb. However, adult humans can produce in excess of of heat energy during strenuous exercise. Hence, if the body were perfectly insulated, core temperature would continue to increase until lethal core temperatures were reached.
Conversely, we are normally in surroundings that are considerably cooler than the body's core temperature of creating a gradient for thermal energy flow from the core to the surroundings. Therefore, the body must ensure it can also minimize the loss of heat to around 100 watts, if it is to maintain core temperature. In short, the skin must be able to get rid of 100 watts of heat in relatively warm environments, but also ensure that it does not lose too much more than this in relatively cold environments. The human outer or peripheral shell (skin, subcutaneous fat etc.) acts as an adjustable insulator/radiator with the main mechanism of adjustment being blood flow to this compartment. If the surroundings are warm then heat loss is less, so the body directs more blood to the periphery to maintain the gradient for energy flow. Conversely, if the surroundings are cool, blood flow can be profoundly reduced to the skin, so that heat loss is reduced significantly. These passive processes determine the TNZ, as negligible work is done to redirect blood to the peripheries or the core. Physiological mechanisms: The skin has a huge capacity to accept blood flow resulting in a range of 1ml/100g of skin/min, to 150ml/100g/min. Its metabolic requirements are very low and hence it only requires a very small fraction of the heart's output to maintain its own growth and metabolism. In temperate environments the blood flow to the skin is much higher than required for metabolism, the determining factor is the need for the body to get rid of its heat. In fact, skin can survive for long periods of time (hours) with sub-physiological blood flow and oxygenation, and, as long as this is followed by a period of good perfusion, necrosis will not occur. In temperate environments there is room to increase or decrease blood flow to the skin dramatically. This is achieved by way of special arrangements in the vascular beds of the skin. There are significant numbers of extra vessels, especially in the extremities with their large surface areas (hands, ears, toes etc.). These are direct connections between artery and vein which bypass nourishing capillaries, and are controlled by the sympathetic nervous system. These shunts are normally mostly closed, but opening them up allows the skin to become engorged with blood, and because these vessels have low resistance, the blood flow through them is brisk. Conversely, when blood supply to the skin must be reduced these shunts can be closed and furthermore, the normal mechanism of vasoconstriction of arterioles, can dramatically reduce perfusion of the skin. Across species Different species have different temperatures of their thermal neutral zones. In dogs, the thermoneutral zone ranges from . Domestic cats have a considerably higher thermoneutral zone, ranging between 30 and 38 °C. In horses, the lower critical temperature is 5 °C while the upper critical temperature depends on the definition used. Their thermoneutral zone is roughly . In mice, the lower critical temperature and upper critical temperature can be the same, creating a thermoneutral point instead of a thermoneutral zone. This point varies throughout the day depending on whether the mouse is in the active dark phase (33 °C) or the resting light phase (29 °C). References Animal physiology Thermoregulation
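The balance described in the In humans section above, a resting heat production of roughly 100 watts that must be matched by heat loss to the surroundings, can be illustrated with a toy steady-state model in which dry heat loss equals a whole-body thermal conductance multiplied by the skin-to-ambient temperature difference. The sketch below is only a rough illustration under assumed values: the conductances, ambient temperatures and the 33 °C skin temperature are invented for the example and are not physiological data from this article.

def dry_heat_loss_watts(conductance_w_per_k: float, skin_temp_c: float, ambient_temp_c: float) -> float:
    """Toy steady-state model: dry heat loss = whole-body conductance x (skin - ambient) temperature difference."""
    return conductance_w_per_k * (skin_temp_c - ambient_temp_c)

RESTING_HEAT_PRODUCTION_W = 100.0  # order of magnitude quoted in the article

# Assumed (illustrative) whole-body conductances, standing in for vasoconstricted to vasodilated skin.
for conductance in (5.0, 10.0, 20.0):          # W/K, assumed values
    for ambient in (15.0, 22.0, 28.0):         # degrees C, assumed values
        loss = dry_heat_loss_watts(conductance, skin_temp_c=33.0, ambient_temp_c=ambient)
        balanced = "balances" if abs(loss - RESTING_HEAT_PRODUCTION_W) < 20 else "does not balance"
        print(f"G={conductance:4.1f} W/K, Ta={ambient:4.1f} C -> loss {loss:6.1f} W ({balanced} resting production)")

Under these assumed numbers, a low conductance balances production only in cool air while a high conductance is needed in warm air, which mirrors how adjusting skin blood flow keeps heat loss near heat production across the thermoneutral zone.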
Thermal neutral zone
Biology
1,372
35,759,318
https://en.wikipedia.org/wiki/Thermoanaerobacter%20brockii
Thermoanaerobacter brockii, formerly Thermoanaerobium brockii, is a thermophilic, anaerobic, spore-forming bacterium. The bacterium was first isolated from Yellowstone National Park. The growth range for the organism is 35 to 80 °C and pH 5.5–9.5, with optimal growth conditions at 65–70 °C and pH 7.5. T. brockii stains Gram-positive. While originally thought to be non-spore-forming, the organism was later discovered to produce spores that can survive heating at 115 °C for 80 min. The species was originally classified as Thermoanaerobium brockii, but further analysis placed it in the genus Thermoanaerobacter. The species is named "in honor of T.D. Brock, a pioneer in the golden era of thermophily." References External links Type strain of Thermoanaerobacter brockii at BacDive - the Bacterial Diversity Metadatabase Thermoanaerobacterales Thermophiles Anaerobes Bacteria described in 1983
Thermoanaerobacter brockii
Biology
237
14,300,776
https://en.wikipedia.org/wiki/Structural%20Awards
The Institution of Structural Engineers' Structural Awards have been awarded for the structural design of buildings and infrastructure since 1968. The awards were re-organised in 2006 to include ten categories and the Supreme Award for structural engineering excellence, the highest award a structural project can win. The David Alsop Sustainability Award, in memory of David Alsop, who died on 18 October 1996 while a vice president and president elect of the Institution of Structural Engineers, is made for "an outstanding structure which demonstrates excellent coordination of all aspects of the engineering elements and services combined with elegance, life-time economy and respect for the environment in which the structure is built." It was first awarded in 2000. Laureates Supreme Award The Supreme Award was first awarded in 2003 to recognise the very best of structural engineering design. Other Categories 2016 Award for Sustainability: 5 Broadgate, London, England - Buro Happold Award for Arts or Entertainment Structures: Stavros Niarchos Foundation Cultural Center, Athens, Greece - Expedition Engineering Award for Commercial or Retail Structures: Torre BBVA Bancomer, Mexico City, Mexico - Arup Award for Community or Residential Structures: Grandview Heights Aquatic Centre, Surrey, British Columbia - Fast + Epp Award for Education or Healthcare Structures: Blavatnik School of Government, University of Oxford, Oxford, Pell Frischmann Award for Pedestrian Bridges: Elizabeth Quay Bridge, Perth, Australia - Arup Award for Sports or Leisure Structures: City of Manchester Stadium Expansion, Manchester, England - Buro Happold Award for Infrastructure or Transportation Structures: Transformation of Birmingham New Street railway station, England - Atkins - AKT II Structural Heritage Award: Mount Stewart House, County Down, Northern Ireland, UK - Mann Williams Award for Small Projects: Formby Helical Stair, Formby, UK - Webb Yates Engineers Award for Small Practices: Expo 2015 Hive, Milan, Italy & London, UK - Simmonds Studio Award for Regional Groups: Information Age Gallery, Science Museum, London, UK - Heyne Tillet Steel Commendations No commendations were made. 2015 Award for Sustainability: Housing for low-income communities in El Salvador - Arup Award for Arts or Entertainment Structures: The Vegas High Roller, Las Vegas, United States - Arup Award for Commercial or Retail Structures: Intesa SanPaolo Tower, Turin, Italy - Expedition Engineering - Studio Ossola Award for Community or Residential Structures: Malapa Hominid Fossil Site Cover + Visitors' Platform, Malapa, South Africa - Fellows Consulting Award for Education or Healthcare Structures: Melbourne School of Design, Melbourne, Australia - IrwinConsult Award for Pedestrian Bridges: Greenwich Reach Swing Bridge, London, UK - Flint & Neill Award for Sports or Leisure Structures: Singapore Sports Hub, Singapore - Arup Award for Infrastructure or Transportation Structures: Anaheim Regional Transportation Intermodal Center (ARTIC), Anaheim, California - Thornton Tomasetti Structural Heritage Award: Restoration of Victoria Theatre and Concert Hall, Singapore - T.Y. Lin International Pte. 
Ltd Award for Highway or Railway Bridge Structures: Schuman Bridge, Lyon, France - Flint & Neill Award for Small Projects: Stage by the Sea, Littlehampton, UK - Expedition Engineering Award for Small Practices: Steel and Glass Features for the 300th Anniversary of Omsk, Russia - Malishev Engineers Award for Regional Groups: The SSE Hydro, Glasgow, Scotland - Arup Commendations For Commercial or Retail Structures: Believe in Better Building - Arup Associates and Engenuiti For Education or Healthcare Structures: Alfriston School Swimming Pool, Beaconsfield, UK - Elliott Wood Partnership For Pedestrian Bridges: Jim Stynes Bridge, Melbourne, Australia - Aurecon For Pedestrian Bridges: Merchant Square Footbridge, London, UK - Knight Architects and AKT II For Small Projects: Central London Stone Stair - Webb Yates Engineers 2014 Award for Sustainability: Muregeya Bridge - Ove Arup & Partners Award for Arts or Entertainment Structures: Reid Building, Glasgow School of Art - Arup Award for Commercial or Retail Structures: Glass Lantern, Apple Zorlu - Eckersley O'Callaghan Award for Community or Residential Structures: Kew House - Price & Myers Award for Education or Healthcare Structures: WWF Living Planet Centre - Expedition Engineering Award for Pedestrian Bridges: Footbridge over the Bow, Banff, Alberta - Fast + Epp Award for Sports or Leisure Structures: Adelaide Oval Redevelopment - Arup Award for Infrastructure or Transportation Structures: Pulkovo Airport - Ramboll Structural Heritage Award: Forth Rail Bridge Restoration - Pell Frischmann Award for Highway or Railway Bridge Structures: Elbebridge Schönebeck - Leonhardt, Andrä und Partner Award for Small Projects: Somerset House, The Miles Stair - Techniker Award for Small Practices: Lower Hātea River Crossing (Te Matau a Pohe) - Knight Architects / Peters & Cheung Award for Regional Groups: Bangor Aurora Aquatic & Leisure Complex - WYG Commendations For Sustainability: WWF Living Planet Centre - Expedition Engineering For Commercial or Retail Structures: Trimble Navigation's Office, Christchurch - Opus International Consultants For Community or Residential Structures: Temple Lodge - Ramboll For Highway or Railway Bridge Structures: Shenyang Hun River Ribbon Bridge - Tongji Architectural Design (Group) Co., Ltd. 
For Pedestrian Bridges: Muregeya Bridge - Ove Arup & Partners For Small Projects: Red Bridge House - Lyons O'Neill Ltd For Sports or Leisure Structures: Hazza Bin Zayed Stadium - Thornton Tomasetti For Structural Heritage: Manchester Town Hall Complex Transformation Programme - URS 2013 Award for Sustainability: Halley VI Antarctic Research Station by Aecom (formerly Faber Maunsell) Award for Arts or Entertainment Structures: Gardens by the Bay - Atelier One and Meinhardt Infrastructure Award for Commercial or Retail Structures: CCTV Headquarters - Arup Award for Community or Residential Structures: Bishop Edward King Chapel - Price & Myers Award for Education or Healthcare Structures: The University of Exeter Forum - Buro Happold Award for Pedestrian Bridges: Pembroke College footbridge - Price & Myers Award for Sports or Leisure Structures: First Direct Arena - Arup Award for Infrastructure or Transportation Structures: Emirates Air Line - Expedition Engineering, Buro Happold and URS Structural Heritage Award: Cutty Sark Restoration - Buro Happold Award for Highway or Railway Bridge Structures: Taizhou Bridge - Jiangsu Provincial Communications Planning and Design Institute and Aecom Award for Small Projects: KREOD Pavilion - Ramboll Award for Small Practices: Feature stairs for the new Mariinsky Theatre - Malishev Engineers Commendations For Commercial or Retail Structures: The Shard - WSP For Commercial or Retail Structures: Trinity Leeds gridshell roof - Sinclair Knight Merz For Community or Residential Structures: Tsingtao Pearl Visitor Centre - Fast + Epp For Education or Healthcare Structures: Botanical Garden Hothouse - Søren Jensen Consulting Engineers For Structural Heritage: Tynemouth Metro station - Ramboll For Small Projects: Castle Green Bridge - Flint & Neill 2012 Sustainability Award: Conservation and Restoration of the Iron Market, Port-au-Prince, Haiti - Alan Baxter and Associates Award for Arts or Entertainment Structures: Crystal Bridges Museum of American Art, Bentonville, Arkansas - Buro Happold Award for Commercial or Retail Structures: Al Hamra Tower, Kuwait City, Kuwait - Skidmore, Owings & Merrill Award for Community or Residential Structures: VanDusen Botanical Garden Visitor Centre, Vancouver, British Columbia, Canada - Fast + Epp Award for Education or Healthcare Structures: the Tunbridge Wells Hospital, Tunbridge Wells, Kent, England - Ramboll Award for Pedestrian Bridges: Jarrold Bridge, Norwich, England - Ramboll Award for Sports or Leisure Structures: London Stadium, Stratford, England - Buro Happold Award for Infrastructure or Transportation Structures: London King's Cross railway station Redevelopment, London, England - Arup Structural Heritage Award: West Gate Bridge Strengthening, Melbourne, Victoria, Australia - West Gate Bridge Strengthening Alliance Award for Highway or Railway Bridge Structures: Compiègne Bridge, France - Flint & Neill Award for Small Projects: Rise (sculpture), Belfast, Northern Ireland - Price & Myers Award for Small Practices: Retention and Relocation of Facade at Chenil House, London, England - Considine Consulting Commendations For Education or Healthcare Structures: Centre for Interactive Research on Sustainability, Vancouver, British Columbia, Canada - Fast + Epp For Pedestrian Bridges: Peace Bridge (Foyle), Derry, Northern Ireland - AECOM For Structural Heritage: Victoria Memorial Museum Rehabilitation, Ottawa, Ontario, Canada - Parsons Brinckerhoff Halsall For Structural Heritage: Conservation and Restoration of the Iron 
Market, Port-au-Prince, Haiti - Alan Baxter and Associates For Highway or Railway Bridge Structures: Twin Sails Bridge, Poole, England - Ramboll For Small Practices: Georgia Ministry of Justice Prosecutor's Office, Georgia - Engenuiti 2011 Heritage Award for Buildings or Infrastructure Projects: Royal Shakespeare Theatre redevelopment, Stratford upon Avon, England - Buro Happold Award for Pedestrian Bridges: Media City Footbridge, Manchester, England - Gifford Award for Transportation Structures: Dublin Airport Terminal 2, Dublin, Ireland - Arup Award for Commercial or Retail Structures: Khan Shatyr Entertainment Center, Astana, Kazakhstan - Buro Happold Award for Education or Healthcare Structures: NMIT Arts and Media Centre, Nelson, New Zealand - Aurecon Award for Community or Residential Structures: Elsinore Culture Yard, Elsinore, Denmark - Søren Jensen Consulting Engineers Award for Sports Structures: London Velodrome, England - Expedition Engineering Award for Arts, Leisure or Entertainment Structures: Las Arenas Bullring, Barcelona, Spain - Expedition Engineering Award for Industrial or Process Structures: Port Phillip Estate, Red Hill, Victoria, Australia - Arup Award for Small Practices: Westgate Bridge Suspended Access Platforms, Melbourne, Australia - Alan White Design David Alsop Sustainability Award: Wales Institute for Sustainable Education, Powys, Wales - Buro Happold Award for Small Projects: Bridge of Dreams, Princeton, Canada - Fast + Epp Commendations For Heritage Infrastructure: Mizen Head Footbridge, County Cork, Ireland - Gifford For Pedestrian Bridges: Redhayes Bridge, Exeter, England - Parsons Brinckerhoff For Community or Residential Structures: Bramall Learning Centre, Harrogate, England - Gifford For Sports Structures: Aviva Stadium, Dublin, Ireland - Buro Happold For Sustainability: Lee Valley VeloPark, England - Expedition Engineering For Sustainability: Open Academy, Norwich, England - Ramboll 2010 Heritage Award for Buildings or Infrastructure Projects: Supreme Court of New Zealand - Holmes Consulting Group Award for Pedestrian Bridges: Meads Reach Bridge, Bristol, England - Price & Myers Award for Transportation Structures: Stonecutters Bridge, Hong Kong - Arup Award for Commercial or Retail Structures: Burj Khalifa, Dubai - Skidmore, Owings & Merrill Award for Education or Healthcare Structures: NZi3 Innovation Institute Building, Canterbury, New Zealand - Beca Award for Community or Residential Structures: Chips, Manchester, England - Martin Stockley Associates Award for Arts, Leisure or Entertainment Structures: John Hope Gateway, Edinburgh, Scotland - Buro Happold Award for Sports Structures: Melbourne Rectangular Stadium, Melbourne, Australia - Arup David Alsop Sustainability Award: Forth Road Bridge Main Cable Project - AECOM Award for Small Projects: Serpentine Gallery Pavilion 2009, London, England - Arup Commendations For Pedestrian Bridges: Forthside Bridge - Gifford For Transportation Structures: Dubai Metro (Red Line) - Atkins For Commercial or Retail Structures: Apple Store Upper West Side - Eckersley O’Callaghan Structural Design For Education or Healthcare Structures: The London Clinic - New Cancer Centre - Alan Baxter Associates LLP For Community or Residential Structures: Hull History Centre, Hull, England - Alan Baxter Associates LLP For Sustainability: Queen Elizabeth II Court, Hampshire, England - Gifford 2009 Heritage Award for Buildings: St Martin in the Fields - Alan Baxter & Associates Heritage Award for Infrastructure: Not awarded 
Award for Pedestrian Bridges: Infinity Footbridge - Expedition Engineering Award for Transportation Structures: Clackmannanshire Bridge at Kincardine - Scott Wilson Group Incorporating Benaim Award for Commercial or Retail Structures: Cabot Circus Roof, Bristol - Sinclair Knight Merz Award for Education or Healthcare Structures: Te Puni Village, Wellington, New Zealand - Aurecon Award for Community or Residential Structures: The Cathedral of Christ the Light, California, United States - Skidmore, Owings & Merrill Award for Sports Structures: Richmond Olympic Oval roof, Richmond, British Columbia, Canada - Fast + Epp Structural Engineers Award for Arts, Leisure or Entertainment Structures: Natural History Museum's Darwin Centre Phase Two, London - Arup Award for Industrial or Process Structures: Bodegas Protos Winery, Penafiel, Valladolid, Spain - Arup David Alsop Sustainability Award: Mapungubwe National Park Interpretive Centre, Mapungubwe National Park, South Africa - Henry Fagan & Partners, John Ochsendorf & Michael Ramage Award for Small Projects: Serpentine Gallery Summer Pavilion 2008 London - Arup Commendations For Heritage Infrastructure: Canford Bridge, Dorset, England - Buro Happold For Pedestrian Bridge: Castleford Bridge, Castleford, England - Alan Baxter & Associates For Pedestrian Bridge: Bishops Stortford Goods Yard Footbridge - Gifford For Commercial or Retail Structure: 201 Bishopsgate and the Broadgate Tower - Skidmore, Owings & Merrill For Arts or Entertainment Structure: Curtis R Priem Experimental Media and Performing Arts Center, Troy, New York, United States - Buro Happold For Industrial or Process Structure: Advanced Manufacturing Research Centre, University of Sheffield, England - Buro Happold For Industrial or Process Structure: Lakeside Energy from Waste Plant, Slough, England - Royal Haskoning For Sustainability: Richmond Olympic Oval roof, Richmond, British Columbia, Canada - Fast + Epp Structural Engineers For Small Projects: Rudolph Steiner House, London, England - Gifford 2008 Heritage Award for Buildings: St Pancras railway station, High Speed 1 - The RLE Consortium (Arup, Bechtel, Halcrow, Systra) Heritage Award for Infrastructure: Westminster Bridge Fascia Replacement Project - Hyder Consulting and Tony Gee & Partners Award for Pedestrian Bridges: The Living Bridge, Limerick - Arup Award for Transportation Structures: Roadway Bridge Across the Lockwitz Valley - Leonhardt, Andra und Partner Award for Commercial or Retail Structures: Heathrow Terminal 5 - Arup Award for Education or Healthcare Structures: Thomas Deacon Academy - Buro Happold Award for Community or Residential Structures: Casa Kike - Tall Engineers Ltd Award for Sports Structures: Beijing National Aquatics Centre - Arup and CCDI Award for Arts, Leisure or Entertainment Structures: O2 Arena - Buro Happold Award for Industrial or Process Structures: not awarded David Alsop Sustainability Award: not awarded Award for Small Projects: Spire of Hope, St Annie's Cathedral, Belfast - Ramboll Commendations For Heritage Buildings: Household Cavalry Museum, Horse Guards, Whitehall, London, England - Gifford For Pedestrian Bridges: Tri Countries Bridge, Well am Rhein, Germany - Leohardt, Andra und Partner For Transportation Structures: Fabian Way Bridge, Swansea, Wales - Flint & Neill For Commercial or Retail Structures: BBC W1 Phase 1, London, England - Ramboll For Education or Healthcare Structures: The University Town Library, Shenzhen, China - Shenzhen General Institute of Architectural Design & 
Research For Sports Structures: Kensington Oval, Barbados - Arup Associates with CEP For Industrial or Process Structures: The Solera Factory, Valencia, Spain - Webb Yates For Sustainability: 55 Baker Street, London, England - Expedition Engineering 2007 Heritage Award for Buildings: The Library of Parliament, Canada - Adjeleian Allen Rubeli Ltd Heritage Award for Infrastructure: Dresden Hauptbahnhof - Buro Happold and Schmitt Stumpf Frühauf and Partner (Munich) Award for Pedestrian Bridges: Nescio Bridge, Amsterdam - Arup and Grontmij, Lelystad Award for Transportation Structures: Sheppey Crossing - A249 to Sheerness - Cass Hayward and Capita Symonds Award for Commercial or Retail Structures: The New Beijing Poly Plaza - Skidmore, Owings & Merrill Award for Education or Healthcare Structures: Teaching and Research Complex of Tongji University - Architectural Design & Research Institute of Tongji University Award for Community or Residential Structures: New Life Boat Station RNLI Padstow - John Martin Construction Ltd and Royal Haskoning Ltd Award for Sports Structures: The Emirates Stadium - Buro Happold Award for Arts, Leisure or Entertainment Structures: The Savill Building - Buro Happold and Engineers Haskins Robinson Waters Award for Industrial or Process Structures: The Diamond Synchrotron - Jacobs Engineering Group David Alsop Sustainability Award: Adnams Distribution Centre - Faber Maunsell Award for Small Projects: Pines Calyx Centre - Scott Wilson incorporating Cameron Taylor Commendations For Heritage Buildings: Baskerville House, Birmingham, England - Buro Happold For Heritage Infrastructure: St Pancras railway station Underground redevelopment, London, England - Arup For Transportation Structures: Wadi Abdoun Bridge, Amman, Jordan - Dar Al-handasah Consultants For Community or Residential Structures: York House, Hong Kong - Maunsell Structural Consultants For Sports Structures: University of Phoenix Stadium, Phoenix, Arizona, United States - Walter P Moore For Arts, Leisure or Entertainment Structures: Auckland War Memorial Museum Redevelopment, Auckland, New Zealand - Holmes Consulting Group For Sustainability: Pines Calyx Centre, Dover, England - Scott Wilson Group incorporating Cameron Taylor For Small Projects: Achray Bridge. 
Wales - Forestry Commission - Civil Engineering 2006 Heritage Award for Buildings: Somerset House Floor Stiffening - Alan Baxter & Associates Heritage Award for Infrastructure: SS Great Britain - Fenton Holloway Award for Pedestrian Bridges: Sean O'Casey Pedestrian Bridge - O'Connor Sutton Cronin Award for Transportation Structures: Sungai Prai Bridge - Dar Al-Handasah Consultants Award for Commercial or Retail Structures: New Terminal and Satellite Buildings at Madrid Barajas Airport - Anthony Hunt Associates (now SKM) & TPS Award for Education or Healthcare Structures: Evelina Children's Hospital - Buro Happold Award for Community or Residential Structures: The Hub, Regent's Park - Price & Myers Consulting Engineers Award for Sports Structures: Lingotto Speed Skating Oval - Buro Happold Award for Arts, Leisure or Entertainment Structures: Phaeno Science Centre - Adams Kara Taylor Award for Industrial or Process Structures: Astra Honda Motor New Plant - PT Gistama Intisemesta David Alsop Sustainability Award: Shenzhen Western Corridor - Arup Award for Small Projects: not awarded Commendations For Heritage Buildings: Skyways Project, Liverpool, England - Curtins Consulting For Pedestrian Bridges: Whitemud Creek Arch Bridge, Edmonton, Alberta, Canada - Associated Engineering For Transportation Structures: The Paddington Bridge Project, London, England - Cass Hayward For Commercial or Retail Structures: St Paul's Hotel, Sheffield, England - Buro Happold For Commercial or Retail Structures: Langham Place, Hong Kong - Ove Arup & Partners For Commercial or Retail Structures: The Grand Gateway, Shanghai, China - Maunsell Structural Consultants For Education or Healthcare Structures: The Manchester Interdisciplinary Biocentre, Manchester, England - Faber Maunsell For Community or Residential Structures: Moho, Manchester, England - Joule Consulting Engineers For Arts, Leisure or Entertainment Structures: Spinnaker Tower, Portsmouth, England - Scott Wilson Group For Sports Structures: Allianz Arena, Munich, Germany - ArupSport For Sports Structures: Khalifa Stadium, Doha, Qatar - Arup For Sustainability: Waters' Edge Country Park Visitor & Business Centre, Lincolnshire, England - Furness Partnership 2005 In 2005, the following awards were made: Structural Special Award (two awards): Whitby Bird for Mossbourne Community Academy, London, England Buro Happold for Greenside Place Bridge, Edinburgh, Scotland Structural Special Commendation: Arup for the Airside Centre, Zurich Airport, Zurich, Switzerland Structural Achievement Award (two awards): Arup for Gatwick Airport Bridge, Surrey, England Buro Happold for the Memorial to the Murdered Jews of Europe, Berlin, Germany Structural Achievement Commendation: Gifford for Brading Roman Villa, Isle of Wight, England David Alsop Award: Gifford for Carlton House Studios, Hampshire, England David Alsop Commendation: Buro Happold for the Nomadic Museum, New York, USA Structural Heritage Award (two awards): Arup for the Moat revetment and ramp at the Tower of London, London, England Buro Happold for the New refectory, Norwich Cathedral, Norfolk, England Structural Heritage Commendation: Structwel Designers and Consultants PVT Ltd for Ganesh Hall and Darbar Hall of Rajwada, Indore Madhya Pradesh, India 2004 In 2004, the following awards were made: Structural Special Award (three awards): Building Design Partnership for Umoja House, Dar es Salaam, Tanzania Fluid Structures for Glass Extension, Private House, London, England Fast + EPP for Central City Timber, 
Surrey, British Columbia, Canada Structural Special Commendation: Buro Happold for New Hangar, TAG Farnborough Airport, Hampshire, England Structural Achievement Award: Africon for Pungwe River Bridge, Mozambique Structural Achievement Commendation: WSP Cantor Seinuk for Golden Jubilee Bridges, London, England David Alsop Award: Faber Maunsell for 1 South Gyle Crescent, Edinburgh, Scotland David Alsop Commendation: SKM Anthony Hunts for Eden Foundation Building - Cornwall, England Structural Research and Development Award: Buro Happold for Mechtenberg Brücken - Gelsenkirchen, Germany Structural Heritage Commendation: Opus International Consultants for Seismic Strengthening/Refurbishment of Historic Chief Post Office, Auckland, New Zealand 2003 Structural Special Award: Gifford for the Gateshead Millennium Bridge, Gateshead, England Arup Associates for the City of Manchester Stadium, Manchester, England Structural Achievement Award: Dewhurst Macfarlane & Partners with Goldreich Engineering for the Kimmel Center for the Performing Arts, Philadelphia, USA Hyder Consulting for the Tamar Bridge Strengthening and Widening, Plymouth, England Structural Achievement Commendation: Fast & Epp for Brentwood Town Centre station, Vancouver, British Columbia, Canada Faber Maunsell for the Recital Room, Royal Academy of Music, London, England Structural Research & Development Award: Arup for the London Millennium Bridge, London, England Structural Heritage Commendation: White Young Green for The Light, Leeds, England Peel & Fowler for The Redhouse Cone, Stourbridge, England David Alsop Commendation: Buro Happold for the Weald and Downland Gridshell, Singleton, England 2002 In 2002, the following awards were made: Structural Special Award (three awards): Anthony Hunt Associates Ltd and Mero (UK) for The Eden Project, Cornwall, England Ove Arup & Partners for Osaka Maritime Museum Dome, Osaka, Japan Buro Happold for the Queen Elizabeth II Great Court, British Museum, London, England Structural Achievement Award:Buro Happold for the Japan Pavilion, Expo 2000, Hanover, Germany Structural Achievement Commendation: Buro Happold for the Glasgow Science Centre Tower, Glasgow, Scotland Structural Heritage Award: WSP Group for the redevelopment of Knightsbridge Crown Court for Harrods, London, England Structural Heritage Commendation (three awards): Oscar Faber for Liverpool Lime Street railway station, England Arup Consulting Engineers for the Guinness Storehouse Guinness Brewery, Dublin, Ireland Wright Consulting Engineers for the Church of Assumption of Our Lady, Hartwell, Aylesbury, England David Alsop Award: Buro Happold for Wessex Water Operations Centre, Bath, England David Alsop Commendation: Whitby Bird & Partners for Toyota Manufacturing UK headquarters, Epsom, England 2001 In 2001, the following awards were made: Structural Special Award: Babtie Allott & Lomax and Hollandia for the London Eye Structural Special Commendations: Price & Myers for the Millennium Bridge, Dublin, Ireland WS Atkins Consultants for the Millennium Stadium, Cardiff, Wales Hyder Consulting for the Emirates Towers, Dubai, United Arab Emirates Structural Achievement Awards: Arup GmbH for the Cargo Lifter Airship Hangar, Brand, Germany Hyder Consulting for the strengthening of the M5 Avonmouth Bridge, England Structural Achievement Commendation: WS Atkins Consultants for the Glass Walls, Korean Trade Centre, Seoul, South Korea Structural Heritage Award: Price & Myers for the Royal Court Theatre, London, England Structural Heritage 
Commendation: Oscar Faber for The Triangle, Manchester, England David Alsop Awards: WSP Group for the Sainsbury's Millennium Store, Greenwich, London, England Ove Arup & Partners for Portcullis House, Westminster, London, England 2000 In 2000, the following awards were made: Structural Special Awards: WS Atkins for the Burj Al Arab, in Dubai, United Arab Emirates Buro Happold for the Millennium Dome in London, England Modus Consulting Engineers for Stadium Australia, Sydney, Australia Hyder Consulting for Stratford railway station Concourse, London, England Skidmore, Owings & Merrill for the Jin Mao Tower, Shanghai, China Structural Achievement Awards: Ove Arup & Partners for the Natwest Media Centre at Lord's Cricket Ground, London, England Flint & Neill Partnership for Lockmeadow Bridge, Maidstone, England WSP Group for Canning Town station, London, England Structural Achievement Commendations: Skidmore, Owings & Merrill for the Lisbon Multi-Use Arena, Portugal Ove Arup & Partners for the grandstand at Lord's Cricket Ground, London, England Structural Heritage Award: Building Design Partnership for Neptune Court redevelopment, National Maritime Museum, Greenwich, England Structural Heritage Commendation: Oscar Faber for Oxford Road railway station, Manchester, England See also List of engineering awards References External links http://www.structuralawards.org http://www.istructe.org Architecture awards British science and technology awards Structural engineering awards IStructE Supreme Award laureates
Structural Awards
Engineering
5,310
56,239
https://en.wikipedia.org/wiki/Acrylamide
Acrylamide (or acrylic amide) is an organic compound with the chemical formula CH2=CHC(O)NH2. It is a white odorless solid, soluble in water and several organic solvents. From the chemistry perspective, acrylamide is a vinyl-substituted primary amide (CONH2). It is produced industrially mainly as a precursor to polyacrylamides, which find many uses as water-soluble thickeners and flocculation agents. Acrylamide forms in burnt areas of food, particularly starchy foods like potatoes, when cooked with high heat, above . Despite health scares following this discovery in 2002, and its classification as a probable carcinogen, acrylamide from diet is thought unlikely to cause cancer in humans; Cancer Research UK categorized the idea that eating burnt food causes cancer as a "myth". Production Acrylamide can be prepared by the hydration of acrylonitrile, which is catalyzed enzymatically: CH2=CHCN + H2O → CH2=CHC(O)NH2 This reaction also is catalyzed by sulfuric acid as well as various metal salts. Treatment of acrylonitrile with sulfuric acid gives acrylamide sulfate, . This salt can be converted to acrylamide with a base or to methyl acrylate with methanol. Uses The majority of acrylamide is used to manufacture various polymers, especially polyacrylamide. This water-soluble polymer, which has very low toxicity, is widely used as thickener and flocculating agent. These functions are valuable in the purification of drinking water, corrosion inhibition, mineral extraction, and paper making. Polyacrylamide gels are routinely used in medicine and biochemistry for purification and assays. Toxicity and carcinogenicity Acrylamide can arise in some cooked foods via a series of steps by the reaction of the amino acid asparagine and glucose. This condensation, one of the Maillard reactions, followed by dehydrogenation produces N-(D-glucos-1-yl)-L-asparagine, which upon pyrolysis generates some acrylamide. The discovery in 2002 that some cooked foods contain acrylamide attracted significant attention to its possible biological effects. IARC, NTP, and the EPA have classified it as a probable carcinogen, although epidemiological studies (as of 2019) suggest that dietary acrylamide consumption does not significantly increase people's risk of developing cancer. Europe According to the EFSA, the main toxicity risks of acrylamide are "Neurotoxicity, adverse effects on male reproduction, developmental toxicity and carcinogenicity". However, according to their research, there is no concern on non-neoplastic effects. Furthermore, while the relation between consumption of acrylamide and cancer in rats and mice has been shown, it is still unclear whether acrylamide consumption has an effect on the risk of developing cancer in humans, and existing epidemiological studies in humans are very limited and do not show any relation between acrylamide and cancer in humans. Food industry workers exposed to twice the average level of acrylamide do not exhibit higher cancer rates. United States Acrylamide is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities. Acrylamide is considered a potential occupational carcinogen by U.S. government agencies and classified as a Group 2A carcinogen by the IARC. 
The Occupational Safety and Health Administration and the National Institute for Occupational Safety and Health have set dermal occupational exposure limits at 0.03 mg/m3 over an eight-hour workday. Opinions of health organizations Baking, grilling or broiling food causes significant concentrations of acrylamide. This discovery in 2002 led to international health concerns. Subsequent research has however found that it is not likely that the acrylamides in burnt or well-cooked food cause cancer in humans; Cancer Research UK categorizes the idea that burnt food causes cancer as a "myth". The American Cancer Society says that laboratory studies have shown that acrylamide is likely to be a carcinogen, but that evidence from epidemiological studies suggests that dietary acrylamide is unlikely to raise the risk of people developing most common types of cancer. Hazards Radiolabeled acrylamide is also a skin irritant and may be a tumor initiator in the skin, potentially increasing risk for skin cancer. Symptoms of acrylamide exposure include dermatitis in the exposed area, and peripheral neuropathy. Laboratory research has found that some phytochemicals may have the potential to be developed into drugs which could alleviate the toxicity of acrylamide. Mechanism of action Acrylamide is metabolized to the genotoxic derivative glycidamide. On the other hand, acrylamide and glycidamide can be detoxified via conjugation with glutathione. Occurrence in food Acrylamide was discovered in foods, mainly in starchy foods, such as potato chips (UK: potato crisps), French fries (UK: chips), and bread that had been heated higher than . Production of acrylamide in the heating process was shown to be temperature-dependent. It was not found in food that had been boiled, or in foods that were not heated. Acrylamide has been found in roasted barley tea, called mugicha in Japanese. The barley is roasted so it is dark brown prior to being steeped in hot water. The roasting process produced 200–600 micrograms/kg of acrylamide in mugicha. This is less than the >1000 micrograms/kg found in potato crisps and other fried whole potato snack foods cited in the same study and it is unclear how much of this enters the drink to be ingested. Rice cracker and sweet potato levels were lower than in potatoes. Potatoes cooked whole were found to have significantly lower acrylamide levels than the others, suggesting a link between food preparation method and acrylamide levels. Acrylamide levels appear to rise as food is heated for longer periods of time. Although researchers are still unsure of the precise mechanisms by which acrylamide forms in foods, many believe it is a byproduct of the Maillard reaction. In fried or baked goods, acrylamide may be produced by the reaction between asparagine and reducing sugars (fructose, glucose, etc.) or reactive carbonyls at temperatures above . Later studies have found acrylamide in black olives, dried plums, dried pears, coffee, and peanuts. The US FDA has analyzed a variety of U.S. food products for levels of acrylamide since 2002. Occurrence in cigarettes Cigarette smoking is a major acrylamide source. It has been shown in one study to cause an increase in blood acrylamide levels three-fold greater than any dietary factor. 
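The concentration figures quoted in the occurrence section, for example 200–600 micrograms per kilogram in roasted barley and over 1,000 micrograms per kilogram in some fried potato snacks, can be turned into a rough per-serving intake with simple unit arithmetic, as the sketch below shows. The serving sizes are illustrative assumptions, not values from this article or from any dietary guideline, and for roasted barley only an unknown fraction of the amount in the grain ends up in the drink.

def intake_micrograms(concentration_ug_per_kg: float, serving_g: float) -> float:
    """Convert a concentration in micrograms per kilogram of food into micrograms per serving."""
    return concentration_ug_per_kg * (serving_g / 1000.0)

# Illustrative serving sizes (assumed): 30 g of potato crisps, 10 g of roasted barley per pot of tea.
print(round(intake_micrograms(1000.0, 30.0), 1))  # 30.0 micrograms from a 30 g serving of crisps
print(round(intake_micrograms(600.0, 10.0), 1))   # 6.0 micrograms in 10 g of roasted barley, before steeping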
See also Acrydite: research on this compound casts light on acrylamide Acrolein Alkyl nitrites Deep-frying Deep fryer Vacuum fryer Substance of very high concern Heterocyclic amines Polycyclic aromatic hydrocarbons References Further reading External links Carboxamides Hazardous air pollutants IARC Group 2A carcinogens Monomers Reproductive toxicants Suspected fetotoxicants
Acrylamide
Chemistry,Materials_science
1,627
26,598,534
https://en.wikipedia.org/wiki/Indian%20Financial%20System%20Code
The Indian Financial System Code (IFS Code or IFSC) is an alphanumeric code that facilitates electronic funds transfer in India. A code uniquely identifies each bank branch participating in the three main Payment and settlement systems in India: the National Electronic Funds Transfer (NEFT), Real Time Gross Settlement (RTGS) and Immediate Payment Service (IMPS) systems. Format The IFSC is an 11-character code with the first four alphabetic characters representing the bank name, and the last six characters (usually numeric, but can be alphabetic) representing the branch. The fifth character is 0 (zero) and reserved for future use. Bank IFS Code is used by the NEFT & RTGS systems to route the messages to the destination banks/branches. The format of the IFS Code is shown below. Lists of IFS Codes Bank-wise lists of IFS Codes are available with all the bank-branches participating in inter bank electronic funds transfer. A list of bank-branches participating in NEFT/RTGS and their IFS Code is available on the website of the Reserve Bank of India. All the banks have also been advised to print the IFS code of the branch on cheques issued by branches to their customers. See also List of financial regulatory authorities by country References https://www.npci.org.in/national-automated-clearing-live-members-1 https://rbidocs.rbi.org.in/rdocs/RTGS/DOCs/RTGEB0815.xlsx https://rbidocs.rbi.org.in/rdocs/content/docs/68774.xlsx External links Find IFSC of Banks' Branches on Reserve Bank of India's Website List of NEFT Enabled Bank Branches (with IFSC) on Reserve Bank of India's Website Banking in India Real-time gross settlement E-commerce in India Financial routing standards Standards of India
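Based on the format description above, four alphabetic characters identifying the bank, a fifth character reserved as 0, and six characters (usually numeric, occasionally alphabetic) identifying the branch, a simple validation routine might look like the sketch below. The pattern is inferred from that description, and the example code value is purely hypothetical; this is not an official implementation.

import re

# Pattern assumed from the description above: 4 letters, a literal '0', then 6 alphanumeric characters.
IFSC_PATTERN = re.compile(r"^[A-Z]{4}0[A-Z0-9]{6}$")

def parse_ifsc(code: str):
    """Return (bank_code, branch_code) if the 11-character IFSC matches the described format, else None."""
    code = code.strip().upper()
    if not IFSC_PATTERN.fullmatch(code):
        return None
    return code[:4], code[5:]

# Hypothetical example code, used only to exercise the parser.
print(parse_ifsc("ABCD0123456"))   # ('ABCD', '123456')
print(parse_ifsc("ABCDE123456"))   # None, because the fifth character must be '0'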
Indian Financial System Code
Technology
408
13,791,326
https://en.wikipedia.org/wiki/39%20Cygni
39 Cygni is a binary star system near the southern border of the northern constellation of Cygnus, approximately 270 light years away from Earth. It is visible to the naked eye as an orange-hued star with an apparent visual magnitude of 4.43. The system is moving closer to the Sun with a heliocentric radial velocity of −15 km/s. This is a single-lined spectroscopic binary with an orbital period of about and an eccentricity of 0.5. The projected semi-major axis of the primary star's orbit is , providing a lower bound on the separation of the stars. The system is around four billion years old. The visible component is an evolved K-type giant star with a stellar classification of ; the suffix notation indicates a mild underabundance of iron in the spectrum. It is probably on the horizontal branch, fusing helium in its core, but may be on the red giant branch fusing hydrogen in a shell around an inert helium core. It has 1.9 times the mass of the Sun and has expanded to 25 times the Sun's radius. The star is radiating 186 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,284 K. The unseen secondary component is most probably a main sequence star with a type between F and mid-K, although it may be a white dwarf instead. Its mass is at least 0.7–1.0 times the mass of the Sun. References K-type giants Spectroscopic binaries Cygnus (constellation) Durchmusterung objects Cygni, 39 194317 100587 7806
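The radius, temperature and luminosity quoted above can be cross-checked with the Stefan–Boltzmann relation L = 4πR²σT⁴, which in solar units reduces to L/L☉ = (R/R☉)² (T/T☉)⁴. The short sketch below performs that check; the solar effective temperature of 5,772 K is a standard reference value assumed for the calculation, not taken from this article.

T_SUN_K = 5772.0  # assumed solar effective temperature (IAU nominal value)

def luminosity_solar_units(radius_solar: float, temp_k: float) -> float:
    """Stefan-Boltzmann law in solar units: L/Lsun = (R/Rsun)^2 * (T/Tsun)^4."""
    return radius_solar ** 2 * (temp_k / T_SUN_K) ** 4

# Values quoted for the primary of 39 Cygni: R = 25 Rsun, Teff = 4,284 K.
print(round(luminosity_solar_units(25.0, 4284.0)))   # ~190, close to the quoted 186 Lsun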
39 Cygni
Astronomy
339
40,542,804
https://en.wikipedia.org/wiki/Sulfidation
Sulfidation (British spelling also sulphidation) is a process of installing sulfide ions in a material or molecule. The process is widely used to convert oxides to sulfides but is also related to corrosion and surface modification. Inorganic, materials, and organic chemistry Sulfidation is relevant to the formation of sulfide minerals. A large scale application of sulfidation is the conversion of molybdenum oxides to the corresponding sulfides. This conversion is a step in the preparation of catalysts for hydrodesulfurization wherein alumina impregnated with molybdate salts are converted to molybdenum disulfide by the action of hydrogen sulfide. In organosulfur chemistry, sulfiding is often called thiation. The preparation of thioamides from amides involves thiation. A typical reagent is phosphorus pentasulfide (P4S10). The idealized equation for this conversion is: RC(O)NH2 + 1/4 P4S10 → RC(S)NH2 + 1/4 P4S6O4 This conversion where an oxygen atom in the amide function is replaced by a sulfur atom involves no redox reaction. Sulfidation of metals It is known that aluminum improves the sulfidation resistance of iron alloys. The sulfidation of tungsten is a multiple step process. The first step is an oxidation reaction, converting the tungsten to a tungsten bronze on the surface of the object. The tungsten bronze coating is then converted to a sulfide. One commonly encountered occurrence of sulfidation in manufacturing environments involves the sulfidic corrosion of metal piping. The increased resistance to corrosion found in stainless steel is attributed to a layer of chromium oxide that forms due to oxidation of the chromium found in the alloy. The process of liquid sulfidation has also been used in the manufacturing of diamond-like carbon films. These films are generally used to coat surfaces to reduce the wear due to friction. The inclusion of sulfidation in the process has been shown to reduce the friction coefficient of the diamond-like carbon film. References Corrosion Thermodynamics Chemical reactions
Sulfidation
Physics,Chemistry,Materials_science,Mathematics
454
60,446,213
https://en.wikipedia.org/wiki/Composite%20glass
Composite glass is the collective term for a laminate having at least two glass panes, each connected by an adhesive intermediate layer of plastic, e.g. a casting resin or a thermoplastic composite film, which is highly tear-resistant and viscoelastic. Composite glass should not be confused with composite windows. Applications Windscreens of all kinds of vehicles, as well as crash-proof glazing and pavement lights used in the construction sector, are among the main fields of application. The composite film used mostly in the construction and automotive sectors is composed of polyvinyl butyral (PVB). Other customary intermediate-layer materials include ethylene-vinyl acetate (EVA), polyacrylate (PA), poly(methyl methacrylate) (PMMA) and polyurethane (PUR). Depending on the number, type and thickness of the glass panes and intermediate layers used, composite glasses serve as safety glass, sound-proof glass, fireproof glass, or as throw-through-resistant, breakthrough-resistant or ballistic-resistant glass. Particularly resistant glazing is produced by combining glass panes with one or more panes made from polycarbonate. Smart glasses are also often manufactured as composite glass. Since 2006, interlayer films of PVB, EVA or TPU have also been laminated together with LED and SMD electronics, making possible products such as luminous glass stairways and tables as well as other composite safety glass systems. Scientists in Queensland, Australia, have developed a composite glass intended to make phone screens effectively 'unbreakable'. Examples The so-called "pummel test", among other methods, is used to control the quality of composite glass. See also Laminated glass References Australian Broadcasting Corporation (ABC) News, "Composite glass breakthrough by Queensland researchers could help make phone screens 'unbreakable'" Glass
Composite glass
Physics,Chemistry
445
4,960,751
https://en.wikipedia.org/wiki/Rance%20Tidal%20Power%20Station
The Rance Tidal Power Station is a tidal power station located on the estuary of the Rance River in Brittany, France. Opened in 1966 as the world's first tidal power station, the 240-megawatt (MW) facility was the largest such power station in the world by installed capacity for 45 years until the 254-MW South Korean Sihwa Lake Tidal Power Station surpassed it in 2011. Characteristics The power station has 24 turbines. These reach total peak output at 240 MW, and produce an annual output of approximately 500 GWh (2023: 506 GWh; 491 GWh in 2009, 523 GWh in 2010); thus the average output is approximately 57 MW, and the capacity factor is approximately 24%. The turbines are "bulb" Kaplan turbines, of nominal power 10 MW; their diameter is 5.35 m, each has 4 blades, their nominal rotation speed is 93.75 rpm and their maximal speed 240 rpm. Half of the turbines were built from martensitic stainless steel, the other half from aluminium bronze. The plant is equipped with cathodic protection against corrosion. It supplies 0.12% of the power demand of France. The power density is of the order of 2.6 kW/m2. The cost of electricity production is estimated at 0.12€/KwH . The barrage is long, from Brebis point in the west to Briantais point in the east. The power plant portion of the dam is long and the tidal basin measures . History An early attempt to build a tidal power plant was made at Aber Wrac'h in the Finistère in 1925, but due to insufficient finance, it was abandoned in 1930. Plans for this plant served as the draft for follow-on work. Use of tidal energy is not an entirely new concept, since tidal mills have long existed in areas exposed to tides, particularly along the Rance. The idea of constructing a tidal power plant on the Rance dates to Gerard Boisnoer in 1921. The site was attractive because of the wide average-range between low and high tide levels, with a maximum perigean spring tide range of . The first studies which envisaged a tidal plant on the Rance were done by the Society for the Study of Utilization of the Tides in 1943. Nevertheless, work did not actually commence until 1961. Albert Caquot, the visionary engineer, was instrumental in the construction of the dam, designing an enclosure in order to protect the construction site from the ocean tides and the strong streams. Construction necessitated draining the area where the plant was to be built, which required construction of two dams which took two years. Construction of the plant commenced on 20 July 1963, while the Rance was entirely blocked by the two dams. Construction took three years and was completed in 1966. Charles de Gaulle, then President of France, inaugurated the plant on 26 November of the same year. Inauguration of the road crossing the plant took place on 1 July 1967, and connection of the plant to the French National Power Grid was carried out on 4 December 1967. In total, the plant cost ₣620 million (approximately €94.5 million). It took almost 20 years for the La Rance to pay for itself. Assessments In spite of the high development cost of the project, the costs have now been recovered, and electricity production costs are lower than that of nuclear power generation (1.8 ¢/kWh versus 2.5 ¢/kWh for nuclear). However, the capacity factor of the plant is 28%, lower than 85–90% for nuclear power. Environmental impact The barrage has caused progressive silting of the Rance ecosystem. Sand-eels and plaice have disappeared, though sea bass and cuttlefish have returned to the river. 
By definition, tides still flow in the estuary and the operator, EDF, endeavours to adjust their level to minimize the biological impact. Tourist attraction A tourist facility at the dam is open to visitors. The facility attracted approximately 40,000 visitors in 2011. A lock for navigation at the west end of the dam allows the passage of 1,600-tonne vessels between the English Channel and the Rance. Departmental road 168 crosses the dam and allows vehicles to travel between Dinard and Saint-Malo. There is a drawbridge where the road crosses the lock which is raised to allow larger vessels to pass. The Rance estuary is the first part of the inland waterway from the English Channel to the Bay of Biscay via the Canal d'Ille-et-Rance and the river Vilaine. See also List of tidal power stations List of largest power stations in the world Renewable energy in France References External links La Houille Blanche, n. 2-3, April 1973 La Houille Blanche, n. 3, April 1997 EDF website Energy infrastructure completed in 1963 Tidal power stations in France Coastal construction Buildings and structures in Ille-et-Vilaine Saint-Malo Tidal barrages Électricité de France Tourist attractions in Ille-et-Vilaine Articles containing video clips 1963 establishments in France 20th-century architecture in France
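The average output and capacity factor quoted in the Characteristics section follow from a short calculation on the figures given there (240 MW installed capacity, roughly 500 GWh per year). A minimal Python sketch of that arithmetic, using only those quoted values:

# Average output and capacity factor from the figures quoted above.
installed_capacity_mw = 240.0
annual_output_gwh = 500.0        # approximate annual production
hours_per_year = 8760.0

average_output_mw = annual_output_gwh * 1000.0 / hours_per_year   # about 57 MW
capacity_factor = average_output_mw / installed_capacity_mw       # about 0.24

print(round(average_output_mw), round(capacity_factor, 2))        # 57 0.24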
Rance Tidal Power Station
Engineering
1,050
25,543,876
https://en.wikipedia.org/wiki/Sup45p
Sup45p is the Saccharomyces cerevisiae (a yeast) eukaryotic translation termination factor. More specifically, it is the yeast eukaryotic release factor 1 (eRF1). Its role is to recognize stop codons in messenger RNA and bind to them. It binds to the Sup35p protein and then adopts the shape of a tRNA molecule so that it can safely incorporate itself into the A site of the ribosome, disrupt its progress, and "release" the protein, ending translation. Notes Saccharomyces cerevisiae genes Protein biosynthesis
Sup45p
Chemistry
130
60,046,320
https://en.wikipedia.org/wiki/Acetomicrobium%20flavidum
Acetomicrobium flavidum is a thermophilic bacterium in the genus Acetomicrobium. It was first isolated from a thermophilic, anaerobic sewage sludge digester. The bacterium is Gram-negative and highly motile. The species represented around 25% of the microbial population in the sludge. References Synergistota Bacteria described in 1985
Acetomicrobium flavidum
Biology
87
11,015,838
https://en.wikipedia.org/wiki/Malagasy%20giant%20rat
The Malagasy giant rat (Hypogeomys antimena), also known as the votsotsa or votsovotsa, is a nesomyid rodent found only in the Menabe region of Madagascar. It is an endangered species due to habitat loss, slow reproduction, and limited range (200 square kilometres north of Morondava, between the rivers Tomitsy and Tsiribihina) Pairs are monogamous and females bear only one or two young per year. It is the only extant species in the genus Hypogeomys; another species, Hypogeomys australis, is known from subfossil remains a few thousand years old. Physical description Malagasy giant rats have an appearance somewhat similar to rabbits, though maintaining many rat-like features especially in the face. Males and females both grow to roughly rabbit-size, around and , though with an additional of dark tail. They have a coarse coat which varies from gray to brown to reddish, darkening around the head and fading to white on the belly. They also have prominent, pointed ears and long, muscular back legs, used for jumping to avoid predators. They can leap almost in the air, for which reason they are sometimes called giant jumping rats. Reproduction and maturation The male Malagasy giant rat reaches sexual maturity within one year, but will not mate until reaching 1.5 to two years of age. The female Malagasy giant rat reaches sexual maturity in two years. These rats are one of the few rodent species to practice sexual monogamy. Once mated, a pair will stay together until one of them dies. On the death of a mate, females tend to remain in the burrow until a new male is found. While males usually wait for a new mate as well, they do occasionally move to live with a widowed female. Females give birth to a single offspring after a gestation of 102–138 days (number observed in captivity) once or twice during the mating season, which coincides with the Madagascar rainy season from December to April. The young are raised by both parents, remaining in the family burrow for the first 4–6 weeks, then increasingly exploring and foraging outside. Young males stay with the family unit for one year before achieving sexual maturity and leaving to find their own burrow. Females do not mature for two years and remain with their parents for the extra year. Males are extremely protective of their young. They are known to increase their own predation risk to follow or defend their offspring. Lifestyle and behavior Completely nocturnal, the giant rats live in burrows up to across with as many as six entrances which, even those in regular use, are kept blocked by dirt and leaves to discourage predation by the Malagasy ground boa. The other main traditional predatory threat is the puma-like fossa but increasingly feral dogs and cats introduced to the island are hunting them as well. When foraging, the rats move on all fours, searching the forest floor for fallen fruit, nuts, seeds, and leaves. They have also been known to strip bark from trees and dig for roots and invertebrates. Pairs are highly territorial and the male and female will both defend their territory from other rats. They mark their territory with urine, feces, and scent gland secretions. Conservation and efforts The Malagasy giant rat is listed as critically endangered. Limited range, habitat destruction, increased predation by non-native feral dogs and cats, and disease have all led to the decline. 
Many feral cats also carry the parasite Toxoplasma gondii, which causes toxoplasmosis, an infection that makes rodents lose their fear of cats, to the point of almost being attracted to them, resulting in their being caught and killed more easily. Hantavirus, which causes kidney failure, is another disease ravaging the rodent population. The Madagascan government has enacted laws to protect the giant rat. Much of their territory is now the Kirindy Forest Reserve, where sustainable forestry is practiced. The government has also introduced policies that help the inhabitants of the island coexist with the animals that live there. Gerald Durrell was the first scientist to breed the rats in captivity. In 1990, he brought five specimens to Jersey. Since then, 16 breeding programs have been set up and 12 have been successful. References Hypogeomys Mammals described in 1869 Endemic fauna of Madagascar EDGE species Taxa named by Alfred Grandidier
Malagasy giant rat
Biology
892
70,409,969
https://en.wikipedia.org/wiki/Unditching%20beam
An unditching beam is a device used to aid in the recovery of armoured fighting vehicles when they become bogged or "ditched". The device is a beam attached to the continuous tracks that provides additional traction for the vehicle to extricate itself from a ditch or from boggy conditions. The unditching beam was first introduced into service during the First World War with the British Mark IV tank. It is believed the device was designed by Philip Johnson, who was serving as an engineering officer at the British Army's depot at Érin; originally the device was constructed of a solid beam of oak with two large steel plates bolted to two sides to provide protection. When not in use it was stowed on two rails mounted on the roof of the tank that ran the entire length of the vehicle, and when employed the beam was chained to the tank's tracks, giving the vehicle something firm to drive over. Unditching beams remain a commonly carried standard ancillary on a number of Russian-produced armoured fighting vehicles. See also Unditching roller References Citations Bibliography Automotive engineering World War I military equipment of the United Kingdom
Unditching beam
Engineering
234
47,294,125
https://en.wikipedia.org/wiki/TU%20Corvi
TU Corvi is a yellow-white hued star in the southern constellation of Corvus. It is dimly visible to the naked eye with an apparent visual magnitude of 6.20. The distance to this star can be estimated from its annual parallax shift, yielding a range of about 246 light years. Based upon measured changes in its proper motion, it may be a close binary system. This is an F-type main-sequence star with a stellar classification of F0 V. Previously it had been classed as F0 III, matching an evolved giant star. It is a Delta Scuti variable, varying by an amplitude of 0.025 in B magnitude with a period of 118 minutes. At an age of 786 million years, it has a high rate of spin with a projected rotational velocity of 103 km/s. The star has 1.45 times the mass of the Sun and is radiating 12.6 times the Sun's luminosity from its photosphere at an effective temperature of 7,132 K. References F-type main-sequence stars Delta Scuti variables Corvus (constellation) Durchmusterung objects 109585 061496 4797 Corvi, TU
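The distance figure above comes from the standard parallax relation, distance in parsecs = 1/parallax in arcseconds. The parallax value itself did not survive in this text, so the number used below is back-calculated from the quoted 246 light years and should be treated as an assumption rather than the measured value; a minimal Python sketch:

# Distance from annual parallax. The 13.26 mas value is back-calculated from the
# ~246 light years quoted above and is only illustrative, not the measured parallax.
LIGHT_YEARS_PER_PARSEC = 3.2616

def distance_light_years(parallax_mas: float) -> float:
    distance_pc = 1000.0 / parallax_mas    # d [pc] = 1 / parallax [arcsec]
    return distance_pc * LIGHT_YEARS_PER_PARSEC

print(round(distance_light_years(13.26)))  # about 246, consistent with the text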
TU Corvi
Astronomy
254
21,086,625
https://en.wikipedia.org/wiki/Fluticasone%20furoate
Fluticasone furoate, sold under the brand name Flonase Sensimist among others, is a corticosteroid for the treatment of non-allergic and allergic rhinitis administered by a nasal spray. It is also available as an inhaled corticosteroid to help prevent and control symptoms of asthma. It is derived from cortisol. Unlike fluticasone propionate, which is only approved for children four years and older, fluticasone furoate is approved in children as young as two years of age when used for allergies. It was approved for medical use in the United States in April 2007, and in the European Union in November 2008. In 2021, fluticasone was the 23rd most commonly prescribed medication in the United States, with more than 25million prescriptions. Medical uses Fluticasone furoate is indicated for the treatment of the symptoms of allergic rhinitis, and asthma. Fluticasone Furoate is a corticosteroid medication primarily used to treat allergic rhinitis (hay fever) and non-allergic (perennial) rhinitis. It is also indicated for the treatment of nasal polyps in adults. Additionally, fluticasone furoate nasal spray may be prescribed for the management of symptoms associated with sinusitis. Always consult with a healthcare professional for accurate diagnosis and appropriate treatment options. Fluticasone furoate nasal spray is highly effective in relieving symptoms of allergic rhinitis, including nasal congestion, runny nose, sneezing, and nasal itching. It works by reducing inflammation in the nasal passages and decreasing the production of mucus. Fluticasone Furoate is used as a maintenance treatment for asthma in patients aged 12 years and older. It helps to reduce inflammation in the airways, which is a key component of asthma management. It helps to control symptoms such as wheezing, shortness of breath, chest tightness, and coughing, thereby improving the overall quality of life for individuals with asthma. Regular use of Fluticasone Furoate can reduce the frequency and severity of asthma exacerbations or attacks, helping to prevent serious episodes of breathing difficulty. When used as an inhaler, Fluticasone furoate helps to control asthma symptoms by reducing airway inflammation and preventing asthma attacks. It is often used as a maintenance treatment to provide long-term control of asthma symptoms and improve lung function. Available forms Inhalers: Fluticasone furoate is commonly available in the form of a dry powder inhaler (DPI) for inhalation. This inhaler is used for the maintenance treatment of asthma in patients aged 12 years and older. It delivers the medication directly to the lungs, where it acts to reduce inflammation and improve asthma symptoms. Adult and Paediatric dosage for - Powder inhalation 50 mcg/actuation 100 mcg/actuation 200 mcg/actuation Nasal Spray: Fluticasone furoate is also available as a nasal spray, primarily used for the treatment of allergic rhinitis (hay fever) symptoms such as nasal congestion, sneezing, itching, and runny nose. It helps to reduce inflammation in the nasal passages and provides relief from allergy symptoms. Nasal Drops: In some cases, fluticasone furoate may be available as nasal drops for the treatment of nasal polyps or other specific nasal conditions. These nasal drops are applied directly into the nostrils to reduce inflammation and symptoms associated with nasal polyps. Side effects Common side effects of Fluticasone furoate nasal spray include nasal irritation, dryness, itching, and nosebleeds. 
These side effects are usually mild and transient. Some individuals may experience throat irritation or coughing when using Fluticasone furoate inhalers. Rinsing the mouth and throat with water after inhalation can help reduce these symptoms. Headache is another common side effect reported with the use of Fluticasone furoate nasal spray or inhalers. It is usually mild and resolves with continued use. In rare cases, Fluticasone furoate may cause more serious side effects, such as adrenal suppression, glaucoma, cataracts, or growth retardation in children. These side effects are more likely to occur with long-term, high-dose use, although they are still rare. Serious side effects of Fluticasone Furoate include: hives, difficulty breathing, swelling of the face, lips, tongue, or throat, white patches in the mouth or the tongue, fever, chills, persistent sore throat, mood changes, depression, mood swings, agitation, vision problems, increased thirst or urination, easy bruising or bleeding, bone pain, and severe wheezing It's essential to use Fluticasone furoate as directed by a healthcare professional to maximize its benefits and minimize the risk of side effects. If you experience any concerning side effects while using Fluticasone furoate, it's important to consult your doctor for further evaluation and management. Interactions Fluticasone Furoate has serious interactions with the following drugs: Abametapir Apalutamide Fexinidazole Ombitasvir/paritaprevir/ritonavir and dasabuvir (DSC) Tucatinib Fluticasone Furoate has moderate interactions with at least 45 other drugs. Toxicity The toxicity of Fluticasone furoate is primarily associated with excessive or prolonged use, especially at high doses. While Fluticasone furoate is generally considered safe when used according to prescribed guidelines, long-term or improper use can lead to various adverse effects. Pharmacology Mechanism of action The mechanism of action of Fluticasone furoate, like other corticosteroids, involves its binding to glucocorticoid receptors within cells. Fluticasone furoate enters target cells and binds to glucocorticoid receptors (GRs), which are found in the cytoplasm of many cell types. Upon binding, the Fluticasone furoate-GR complex undergoes conformational changes, leading to its translocation into the cell nucleus. Once in the nucleus, the Fluticasone furoate-GR complex binds to specific DNA sequences known as glucocorticoid response elements (GREs) located in the promoter regions of target genes. Binding of the Fluticasone furoate-GR complex to GREs modulates the transcription of target genes, leading to the production of mRNA molecules. These mRNA molecules are then translated into proteins, which mediate the anti-inflammatory and immunosuppressive effects of Fluticasone furoate. Fluticasone furoate regulates the expression of various genes involved in inflammation, such as cytokines, chemokines, and inflammatory enzymes. By suppressing the production of these inflammatory mediators, Fluticasone furoate reduces inflammation and related symptoms. Fluticasone furoate also inhibits the function of immune cells, such as T lymphocytes and macrophages, by interfering with their activation and proliferation. This immunosuppressive action helps to dampen immune responses and is beneficial in conditions where excessive inflammation or immune activity is harmful, such as allergic rhinitis and asthma. 
Pharmacokinetics The pharmacokinetics and metabolism of Fluticasone furoate involve its absorption, distribution, metabolism, and elimination from the body. Fluticasone furoate is typically administered via inhalation for asthma or intranasal spray for allergic rhinitis. After administration, it is absorbed through the respiratory mucosa. The bioavailability of Fluticasone furoate is relatively low due to extensive first-pass metabolism in the liver. Fluticasone furoate has a high protein binding affinity (approximately 91%) to plasma proteins, primarily to serum albumin. It distributes extensively into tissues after systemic absorption. Fluticasone furoate undergoes extensive metabolism primarily in the liver, mediated by the enzyme cytochrome P450 3A4 (CYP3A4). The major metabolic pathways include oxidation and conjugation reactions. Oxidation may occur at various sites within the molecule, leading to the formation of metabolites with reduced corticosteroid activity. Conjugation reactions, such as glucuronidation and sulfation, also contribute to the formation of metabolites that are more water-soluble and readily excreted from the body. The metabolites of Fluticasone furoate, along with a small portion of unchanged drug, are primarily eliminated via the kidneys in urine and to a lesser extent in feces via biliary excretion. The elimination half-life of Fluticasone furoate is relatively short, ranging from approximately 14 to 24 hours, depending on factors such as dose and route of administration. History Fluticasone furoate or (FF) was discovered by researchers at GlaxoSmithKline, also known as (GSK), and Theravance, Inc. (NASDAQ: THRX). Research first began in 2006, however, its final phases of research began conclusion from the 6th December 2013 and into 2014. Dave Allen, Head, Respiratory Therapy Area Unit, R&D said, “We are pleased to see the results delivered by FF/VI in the treatment of asthma. We have undertaken a large and comprehensive clinical programme providing data on the efficacy and safety profile for FF/VI in asthma. With these additional data we will consider our next steps in relation to an asthma filing in the US.” on 6 December 2013. Dave Allen is responsible for the identification of novel differentiated medicines and their progression to registration and launch at GlaxoSmithKline (GSK). He leads a group of over 200 scientists and clinicians who exploit scientific innovations that have the potential to address the major unmet needs in diseases such as COPD, severe asthma, acute lung injury and idiopathic pulmonary fibrosis. Also noted in the 6th of December 2013 press release from GlaxoSmithKline (GSK), “There is an ongoing unmet medical need among patients with asthma,” said Rick E Winningham, Chief Executive Officer of Theravance. “This is an important outcome for FF/VI and we will continue working with GSK to determine how we can make this potential treatment available to appropriate patients who could benefit from a new asthma medicine.” Fluticasone furoate is most commonly known for its form combinations vilanterol trifenate, known as Fluticasone furoate/vilanterol (FF/VI) for its treatment of bronchospasms for COPD ( Chronic Obstructive Pulmonary Disease ). 
GlaxoSmithKline announced on 20 August 2014 that the Food and Drug Administration (FDA) had approved Arnuity Ellipta (fluticasone furoate inhalation powder), a once-daily inhaled corticosteroid (ICS) medicine, for use in the United States for the maintenance treatment of asthma as prophylactic therapy in patients aged 12 years and older. Arnuity is not indicated for relief of acute bronchospasm. GSK Australia and Theravance, Inc. (NASDAQ: THRX) announced on 22 April 2014 that the Therapeutic Goods Administration (TGA) had approved Breo Ellipta (fluticasone furoate/vilanterol [FF/VI]) for the treatment of patients with asthma or chronic obstructive pulmonary disease (COPD) in Australia. FF/VI was approved by the FDA for sale as Breo Ellipta (fluticasone furoate/vilanterol [FF/VI]) on 30 April 2015 in the United States, for the once-daily treatment of asthma in patients aged 18 years and older. Breo Ellipta is not indicated for the relief of acute bronchospasm. Drug class Fluticasone furoate belongs to the corticosteroid class of drugs. Corticosteroids are a class of steroid hormones produced naturally by the adrenal cortex, which is located on top of the kidneys. They play a crucial role in regulating various physiological processes in the body, including metabolism, immune response, and inflammation. The term also refers to synthetic drugs that mimic the actions of these natural hormones. There are two main types of corticosteroids: glucocorticoids and mineralocorticoids. Glucocorticoids, such as cortisol, are involved in regulating metabolism and suppressing inflammation. They have anti-inflammatory and immunosuppressive properties, making them useful in the treatment of conditions like asthma, arthritis, and autoimmune diseases. Mineralocorticoids, such as aldosterone, primarily regulate electrolyte and fluid balance in the body. Synthetic corticosteroids, like prednisone, dexamethasone, and fluticasone, are commonly used in medicine to reduce inflammation and suppress immune responses in conditions such as allergies, asthma, rheumatoid arthritis, and inflammatory bowel disease. They are available in various forms, including oral tablets, inhalers, creams, and injections, depending on the specific condition being treated and the desired route of administration. Chemistry Fluticasone furoate contains fluorine atoms at specific positions on the steroid nucleus. These fluorinated substituents enhance the molecule's potency and duration of action. The furoate ester group is attached at position 17 of the steroid nucleus; this ester group contributes to the molecule's lipophilicity, which affects its absorption and distribution in the body, and the substituent at this position plays a crucial role in determining the molecule's selectivity and potency. Molecular formula: C27H29F3O6S Molecular weight: 538.6 g/mol Reactivity Fluticasone furoate, like other corticosteroids, exhibits chemical reactivity characteristics that follow from its structure, which combines a corticosteroid backbone with a fluorine substitution pattern: Steroid backbone stability: The steroid backbone of fluticasone furoate is relatively stable under normal conditions, which is important for its pharmaceutical formulation and shelf-life stability.
Ester hydrolysis: The furoate ester group in fluticasone furoate is susceptible to hydrolysis under certain conditions, particularly in aqueous environments with acidic or basic pH. This hydrolysis can break the ester bond, potentially altering the pharmacokinetics and bioavailability of the molecule. Fluorine reactivity: Fluticasone furoate contains fluorine atoms in its structure, which can influence its chemical reactivity. Electrophilic substitution: The presence of fluorine atoms in the molecule can make it susceptible to electrophilic aromatic substitution reactions, in which the fluorine atoms can be replaced by other functional groups under certain conditions. Reduction: The carbonyl group in the molecule might be reduced under appropriate conditions to yield an alcohol derivative. Acid-base reactions: Functional groups such as the ketone and ester moieties can take part in acid-base reactions under appropriate conditions. Oxidation: Depending on the reaction conditions, oxidation of certain functional groups such as alcohols or aldehydes within the molecule might occur. Conjugation: The molecule may undergo conjugation reactions, such as glucuronidation or sulfation, in the liver to facilitate its elimination from the body. Synthesis According to the method patent US8969547B2 (United States): "a solution of Compound II in butanone with DMAP and tripropylamine is treated with furoyl chloride to obtain Compound III, which is then treated with N-methylpiperazine to de-fluoridize to obtain Compound IV. Compound IV is reacted with a fluoromethylating reagent to obtain the fluticasone furoate of Compound I". Society and culture Brand names In the US it is marketed by GlaxoSmithKline for asthma as Arnuity Ellipta and is only available with a prescription. It is sold over-the-counter for allergic rhinitis as Flonase Sensimist. The Veramyst brand name was discontinued in the US. The combination drugs fluticasone furoate/umeclidinium bromide/vilanterol, marketed as Trelegy Ellipta, and fluticasone furoate/vilanterol, marketed as Breo Ellipta (US, Canada, New Zealand) and Relvar Ellipta (EU, UK), are approved for use in the United States for long-term maintenance treatment of airflow obstruction in people with chronic obstructive pulmonary disease (COPD). They are also approved for the treatment of asthma. The combination fluticasone propionate/salmeterol (Advair Diskus) is indicated for the treatment of asthma and chronic obstructive pulmonary disease. References Combination drugs Corticosteroid esters Furoate esters Drugs developed by GSK plc Glucocorticoids Organofluorides 2-Furyl compounds Thioesters
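The molecular weight quoted in the Chemistry section can be checked directly from the molecular formula C27H29F3O6S using standard atomic weights; a minimal Python sketch (atomic weights rounded to commonly tabulated values):

# Molecular weight of fluticasone furoate from its formula C27H29F3O6S.
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "F": 18.998, "O": 15.999, "S": 32.06}
FORMULA = {"C": 27, "H": 29, "F": 3, "O": 6, "S": 1}

molecular_weight = sum(ATOMIC_WEIGHTS[element] * count for element, count in FORMULA.items())
print(f"{molecular_weight:.1f} g/mol")   # 538.6 g/mol, matching the value quoted above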
Fluticasone furoate
Chemistry
3,907
67,710,149
https://en.wikipedia.org/wiki/Cannabielsoin
Cannabielsoin (CBE) is a metabolite of cannabidiol, one of the major chemical components of cannabis. History Cannabielsoin was first described in the scientific literature in 1973. It was concluded that cannabielsoin is formed from cannabidiol during metabolism and is non-psychoactive. See also Cannabicitran Cannabicyclol Cannabimovone Cannabitriol Iso-THC References Cannabis Dibenzofurans Isopropenyl compounds Hydroxyarenes Tertiary alcohols Human drug metabolites Phytocannabinoids
Cannabielsoin
Chemistry
132
233,055
https://en.wikipedia.org/wiki/Rate-monotonic%20scheduling
In computer science, rate-monotonic scheduling (RMS) is a priority assignment algorithm used in real-time operating systems (RTOS) with a static-priority scheduling class. The static priorities are assigned according to the cycle duration of the job, so a shorter cycle duration results in a higher job priority. These operating systems are generally preemptive and have deterministic guarantees with regard to response times. Rate monotonic analysis is used in conjunction with those systems to provide scheduling guarantees for a particular application. Introduction A simple version of rate-monotonic analysis assumes that threads have the following properties: No resource sharing (processes do not share resources, e.g. a hardware resource, a queue, or any kind of semaphore blocking or non-blocking (busy-waits)) Deterministic deadlines are exactly equal to periods Static priorities (the task with the highest static priority that is runnable immediately preempts all other tasks) Static priorities assigned according to the rate monotonic conventions (tasks with shorter periods/deadlines are given higher priorities) Context switch times and other thread operations are free and have no impact on the model It is a mathematical model that contains a calculated simulation of periods in a closed system, where round-robin and time-sharing schedulers fail to meet the scheduling needs otherwise. Rate monotonic scheduling looks at a run modeling of all threads in the system and determines how much time is needed to meet the guarantees for the set of threads in question. Optimality The rate-monotonic priority assignment is optimal under the given assumptions, meaning that if any static-priority scheduling algorithm can meet all the deadlines, then the rate-monotonic algorithm can too. The deadline-monotonic scheduling algorithm is also optimal with equal periods and deadlines; in fact, in this case the algorithms are identical. In addition, deadline monotonic scheduling is optimal when deadlines are less than periods. For the task model in which deadlines can be greater than periods, Audsley's algorithm endowed with an exact schedulability test for this model finds an optimal priority assignment. Upper bounds on utilization Least upper bound Liu and Layland (1973) proved that for a set of n periodic tasks with unique periods, a feasible schedule that will always meet deadlines exists if the CPU utilization is below a specific bound (depending on the number of tasks). The schedulability test for RMS is: U = C1/T1 + C2/T2 + ... + Cn/Tn ≤ n(2^(1/n) - 1), where U is the utilization factor, Ci is the computation time for process i, Ti is the release period (with deadline one period later) for process i, and n is the number of processes to be scheduled. For example, the bound is 2(2^(1/2) - 1) ≈ 0.8284 for two processes. When the number of processes tends towards infinity, this expression tends towards ln 2 ≈ 0.6931. Therefore, a rough estimate when n is large is that RMS can meet all of the deadlines if total CPU utilization, U, is less than 70%. The other 30% of the CPU can be dedicated to lower-priority, non-real-time tasks. For smaller values of n, or in cases where U is close to this estimate, the calculated utilization bound should be used. In practice, for the i-th process, Ci should represent the worst-case (i.e. longest) computation time and Ti should represent the worst-case deadline (i.e. shortest period) in which all processing must occur. Relationship to queueing theory In queueing theory, Ti is called the interarrival time, and Ci is called the service time. These two parameters are often specified as rates: the arrival rate is 1/Ti and the service rate is 1/Ci.
The utilization for each task, denoted , is then: as above. Upper bound for harmonic task sets Liu and Layland noted that this bound may be relaxed to the maximum possible value of 1.0, if for tasks , where and , is an integer multiple of , which is to say that all tasks have a period that is not just a multiple of the shortest period, , but instead that any task's period is a multiple of all shorter periods. This is known as an harmonic task set. An example of this would be: . It is acknowledged by Liu and Layland that it is not always feasible to have a harmonic task set and that in practice other mitigation measures, such as buffering for tasks with soft-time deadlines or using a dynamic priority assignment approach may be used instead to allow for a higher bound. Generalization to harmonic chains Kuo and Mok showed that for a task set made up of harmonic task subsets (known as harmonic chains), the least upper bound test becomes: In the instance where for each task, its period is an exact multiple of every other task that has a shorter period, the task set can be thought of as being composed of harmonic task subsets of size 1 and therefore , which makes this generalization equivalent to Liu and Layland's least upper bound. When , the upper bound becomes 1.0, representing full utilization. Stochastic bounds It has been shown that a randomly generated periodic task system will usually meet all deadlines when the utilization is 88% or less, however this fact depends on knowing the exact task statistics (periods, deadlines) which cannot be guaranteed for all task sets, and in some cases the authors found that the utilization reached the least upper bound presented by Liu and Layland. Hyperbolic bound The hyperbolic bound is a tighter sufficient condition for schedulability than the one presented by Liu and Layland: , where is the CPU utilization for each task. It is the tightest upper bound that can be found using only the individual task utilization factors. Resource sharing In many practical applications, resources are shared and the unmodified RMS will be subject to priority inversion and deadlock hazards. In practice, this is solved by disabling preemption or by priority inheritance. Alternative methods are to use lock-free algorithms or avoid the sharing of a mutex/semaphore across threads with different priorities. This is so that resource conflicts cannot result in the first place. Disabling of preemption The OS_ENTER_CRITICAL() and OS_EXIT_CRITICAL() primitives that lock CPU interrupts in a real-time kernel, e.g. MicroC/OS-II The splx() family of primitives which nest the locking of device interrupts (FreeBSD 5.x/6.x), Priority inheritance The basic priority inheritance protocol promotes the priority of the task that holds the resource to the priority of the task that requests that resource at the time the request is made. Upon release of the resource, the original priority level before the promotion is restored. This method does not prevent deadlocks and suffers from chained blocking. That is, if a high priority task accesses multiple shared resources in sequence, it may have to wait (block) on a lower priority task for each of the resources. The real-time patch to the Linux kernel includes an implementation of this formula. The priority ceiling protocol enhances the basic priority inheritance protocol by assigning a ceiling priority to each semaphore, which is the priority of the highest job that will ever access that semaphore. 
A job cannot preempt a lower priority critical section if its priority is lower than the ceiling priority for that section. This method prevents deadlocks and bounds the blocking time to at most the length of one lower priority critical section. This method can be suboptimal, in that it can cause unnecessary blocking. The priority ceiling protocol is available in the VxWorks real-time kernel. It is also known as Highest Locker's Priority Protocol (HLP). Priority inheritance algorithms can be characterized by two parameters. First, is the inheritance lazy (only when essential) or immediate (boost priority before there is a conflict). Second is the inheritance optimistic (boost a minimum amount) or pessimistic (boost by more than the minimum amount): In practice there is no mathematical difference (in terms of the Liu-Layland system utilization bound) between the lazy and immediate algorithms, and the immediate algorithms are more efficient to implement, and so they are the ones used by most practical systems. An example of usage of basic priority inheritance is related to the "Mars Pathfinder reset bug" which was fixed on Mars by changing the creation flags for the semaphore so as to enable the priority inheritance. Interrupt Service Routines All interrupt service routines (ISRs), whether they have a hard real-time deadline or not should be included in RMS analysis to determine schedulability in cases where ISRs have priorities above all scheduler-controlled tasks. An ISR may already be appropriately prioritized under RMS rules if its processing period is shorter than that of the shortest, non-ISR process. However, an ISR with a period/deadline longer than any non-ISR process period with a critical deadline results in a violation of RMS and prevents the use of the calculated bounds for determining schedulability of a task set. Mitigating mis-prioritized ISRs One method for mitigating a mis-prioritized ISR is to adjust the analysis by reducing the ISR's period to be equal to that of the shortest period, if possible. Imposing this shorter period results in prioritization that conforms to RMS, but also results in a higher utilization factor for the ISR and therefore for the total utilization factor, which may still be below the allowable bound and therefore schedulability can be proven. As an example, consider a hardware ISR that has a computation time, of 500 microseconds and a period, , of 4 milliseconds. If the shortest scheduler-controlled task has a period, of 1 millisecond, then the ISR would have a higher priority, but a lower rate, which violates RMS. For the purposes of proving schedulability, set and recalculate the utilization factor for the ISR (which also raises the total utilization factor). In this case, will change from to . This utilization factor would be used when adding up the total utilization factor for the task set and comparing to the upper bound to prove schedulability. It should be emphasized that adjusting the period of the ISR is for analysis only and that the true period of the ISR remains unchanged. Another method for mitigating a mis-prioritized ISR is to use the ISR to only set a new semaphore/mutex while moving the time-intensive processing to a new process that has been appropriately prioritized using RMS and will block on the new semaphore/mutex. When determining schedulability, a margin of CPU utilization due to ISR activity should be subtracted from the least upper bound. ISRs with negligible utilization may be ignored. 
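The ISR period-adjustment example above works through its numbers in prose; the same arithmetic in a minimal Python sketch (the values are exactly those quoted in the example):

# Adjusting a mis-prioritized ISR's period for RMS analysis (worked example above).
isr_computation_ms = 0.5          # C = 500 microseconds
isr_true_period_ms = 4.0          # true ISR period, 4 milliseconds
shortest_task_period_ms = 1.0     # shortest scheduler-controlled task period

true_utilization = isr_computation_ms / isr_true_period_ms            # 0.125
analysis_utilization = isr_computation_ms / shortest_task_period_ms   # 0.5, for analysis only

print(true_utilization, analysis_utilization)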
Examples Example 1 Under RMS, P2 has the highest release rate (i.e. the shortest release period) and so would have the highest priority, followed by P1 and finally P3. Least Upper Bound The utilization will be: . The sufficient condition for processes, under which we can conclude that the system is schedulable is: Because , and because being below the Least Upper Bound is a sufficient condition, the system is guaranteed to be schedulable. Example 2 Under RMS, P2 has the highest release rate (i.e. the shortest release period) and so would have the highest priority, followed by P3 and finally P1. Least Upper Bound Using the Liu and Layland bound, as in Example 1, the sufficient condition for processes, under which we can conclude that the task set is schedulable, remains: The total utilization will be: . Since , the system is determined not to be guaranteed to be schedulable by the Liu and Layland bound. Hyperbolic Bound Using the tighter Hyperbolic bound as follows: it is found that the task set is schedulable. Example 3 Under RMS, P2 has the highest rate (i.e. the shortest period) and so would have the highest priority, followed by P3 and finally P1. Least Upper Bound Using the Liu and Layland bound, as in Example 1, the sufficient condition for processes, under which we can conclude that the task set is schedulable, remains: The total utilization will be: . Since , the system is determined not to be guaranteed to be schedulable by the Liu and Layland bound. Hyperbolic Bound Using the tighter Hyperbolic bound as follows: Since the system is determined to not be guaranteed to be schedulable by the Hyperbolic bound. Harmonic Task Set Analysis Because , tasks 2 and 3 can be considered a harmonic task subset. Task 1 forms its own harmonic task subset. Therefore, the number of harmonic task subsets, , is . Using the total utilization factor calculated above (0.81875), since the system is determined to be schedulable. See also Deadline-monotonic scheduling Deos, a time and space partitioned real-time operating system containing a working Rate Monotonic Scheduler. Dynamic priority scheduling Earliest deadline first scheduling RTEMS, an open source real-time operating system containing a working Rate Monotonic Scheduler. Scheduling (computing) Queueing theory Kingman's formula References Further reading . , Chapter 6. . External links Mars Pathfinder Bug from Research @ Microsoft What really happened on Mars Rover Pathfinder by Mike Jones from The Risks Digest, Vol. 19, Issue 49 The actual reason for the Mars Pathfinder Bug, by those who actually dealt with it, rather than someone whose company and therefore stock value depended upon the description of the problem, or someone who heard someone talking about the problem. Processor scheduling algorithms Real-time computing
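The sufficient tests discussed in the sections above translate directly into a few lines of code. A minimal Python sketch, assuming each task is given as a (computation time, period) pair; the hyperbolic condition is written here in its usual product form, product of (Ui + 1) ≤ 2, since the formula itself was not preserved in the text above. This checks only the sufficient bounds, not an exact response-time analysis:

# Sufficient schedulability tests for rate-monotonic scheduling.
from math import prod

def liu_layland_test(tasks):
    """tasks: list of (C_i, T_i) pairs. True if U <= n(2^(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    return utilization <= n * (2 ** (1.0 / n) - 1)

def hyperbolic_test(tasks):
    """True if the product of (C_i/T_i + 1) over all tasks is <= 2."""
    return prod(c / t + 1 for c, t in tasks) <= 2

# Hypothetical task set (C, T); total utilization 0.1 + 0.15 + 0.1 = 0.35.
example = [(0.1, 1.0), (0.3, 2.0), (0.4, 4.0)]
print(liu_layland_test(example), hyperbolic_test(example))   # True True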
Rate-monotonic scheduling
Technology
2,798
55,002,259
https://en.wikipedia.org/wiki/Non-canonical%20base%20pairing
Non-canonical base pairs are planar hydrogen bonded pairs of nucleobases, having hydrogen bonding patterns which differ from the patterns observed in Watson-Crick base pairs, as in the classic double helical DNA. The structures of polynucleotide strands of both DNA and RNA molecules can be understood in terms of sugar-phosphate backbones consisting of phosphodiester-linked D 2’ deoxyribofuranose (D ribofuranose in RNA) sugar moieties, with purine or pyrimidine nucleobases covalently linked to them. Here, the N9 atoms of the purines, guanine and adenine, and the N1 atoms of the pyrimidines, cytosine and thymine (uracil in RNA), respectively, form glycosidic linkages with the C1’ atom of the sugars. These nucleobases can be schematically represented as triangles with one of their vertices linked to the sugar, and the three sides accounting for three edges through which they can form hydrogen bonds with other moieties, including with other nucleobases. The side opposite to the sugar linked vertex is traditionally called the Watson-Crick edge, since they are involved in forming the Watson-Crick base pairs which constitute building blocks of double helical DNA. The two sides adjacent to the sugar-linked vertex are referred to, respectively, as the Sugar and Hoogsteen (C-H for pyrimidines) edges. Each of the four different nucleobases are characterized by distinct edge-specific distribution patterns of their respective hydrogen bond donor and acceptor atoms, complementarity with which, in turn, define the hydrogen bonding patterns involved in base pairing. The double helical structures of DNA or RNA are generally known to have base pairs between complementary bases, Adenine:Thymine (Adenine:Uracil in RNA) or Guanine:Cytosine. They involve specific hydrogen bonding patterns corresponding to their respective Watson-Crick edges, and are considered as Canonical Base Pairs. At the same time, the helically twisted backbones in the double helical duplex DNA form two grooves, major and minor, through which the hydrogen bond donor and acceptor atoms corresponding respectively to the Hoogsteen and sugar edges are accessible for additional potential molecular recognition events. Experimental evidences reveal that the nucleotide bases are also capable of forming a wide variety of pairing between bases in various geometries, having hydrogen bonding patterns different from those observed in canonical base pairs. These base pairs, which are generally referred to as Non-Canonical Base Pairs, are held together by multiple hydrogen bonds, and are mostly planar and stable. Most of these play very important roles in shaping the structure and function of different functional RNA molecules. In addition to their occurrences in several double stranded stem regions, most of the loops and bulges that appear in single-stranded RNA secondary structures form recurrent 3D motifs, where non-canonical base pairs play a central role. Non-canonical base pairs also play crucial roles in mediating the tertiary contacts in RNA 3D structures. History Double helical structures of DNA as well as folded single stranded RNA are now known to be stabilized by Watson-Crick base pairing between the purines, adenine and guanine, with the pyrimidines, thymine (or uracil for RNA) and cytosine. In this scheme, the N1 atoms of the purine residues respectively form hydrogen bond with the N3 atoms of the pyrimidine residues in A:T and G:C complementarity. 
The second hydrogen bond in A:T base pairs involves the N6 amino group of adenine and the O4 atom of thymine (or uracil in RNA). Similarly, the second hydrogen bond in G:C base pairs involves O6 atom and N4 amino group of guanine and cytosine, respectively. The G:C base pairs also have a third hydrogen bond involving the N2 amino group of guanine and the O2 atom of cytosine. However, even till about twenty years after this scheme was initially proposed by James D. Watson and Francis H.C. Crick, experimental evidences suggesting other forms of base-base interactions continued to draw the attention of researchers investigating the structure of DNA. The first high resolution structure of an adenine:thymine base pair, as solved by Karst Hoogsteen by single crystal X-ray crystallography in 1959 revealed a structure whose geometry was very different from what was proposed by Watson and Crick. It had two hydrogen bonds involving N7 and N6 atoms of adenine and N3 and O4 (or O2) atoms of thymine. It may be noted that due to use of thymine base with methyl group representing sugar, a symmetry axis appears passing through N1 and C6 atoms and the O2 and O4 atoms appears identical. In order to distinguish this alternate base pairing scheme from the Watson-Crick scheme, base pairs where a hydrogen bond involves the N7 atom of a purine residue have been referred to as Hoogsteen base pair, and later, the purine base edge which includes its N7 atom is referred to as its Hoogsteen edge. The first high resolution structure of guanine:cytosine pair, obtained by W. Guschelbauer also was similar to the Hoogsteen base pair, although this structure required an unusual protonation of N1 imino nitrogen of cytosine, which is possible only at significantly lower pH. Experimental evidences, including low resolution NMR studies as well as high resolution X-ray crystallographic studies, supporting Watson-Crick base pairing were obtained as late as in the early '70s. Almost a decade later, with the advent of efficient DNA synthesis methods, Richard Dickerson followed by several other groups, solved structures of the physiological double helical B-DNA with a complete helical turn, based on the crystals of synthetic DNA oligomers. The pairing geometries of the A:T (A:U in RNA) and G:C pairs in these structures confirmed the common or canonical form of base pairing as proposed by Watson and Crick, while those with all other geometries, and compositions, are now referred to as non-canonical base pairs. It was noticed that even in double stranded DNA, where canonical Watson Crick base pairs associate the two complementary anti-parallel strands together, there were occasional occurrences of Hoogsteen and other non-Watson-Crick base pairs. It was also proposed that within Watson-Crick base pair dominated DNA double helices, Hoogsteen base pair formation could be a transient phenomenon. While canonical Watson-Crick base pairs are most prevalent and are commonly observed in a majority of chromosomal DNA and in most functional RNAs, presence of stable non-canonical base pairs is also extremely significant in DNA biology. An example of non-Watson-Crick, or non-canonical, base pairing can be found at the ends of chromosomal DNA. The 3'-ends of chromosomes contain single stranded overhangs with some conserved sequence motifs (such as TTAGGG in most vertebrates). The single stranded region adopts some definite three-dimensional structures, which has been solved by X-ray crystallography as well as by NMR spectroscopy. 
The single strands containing the above sequence motifs are found to form interesting four stranded mini-helical structures stabilized by Hoogsteen base pairing between guanine residues. In these structures, four guanine residues form a near planar base quartet, referred to as G-quadruplex, where each guanine participates in base pairing with its neighboring guanine, involving their Watson-Crick and Hoogsteen edges in a cyclic manner. The four central carbonyl groups are often stabilized by potassium ions (K+). From the full genomic sequences of different organisms, it has been observed that telomere like sequences sometimes also interrupt double helical regions near transcription start site of some oncogenes, such as c-myc. It is possible that these sequence stretches form G-quadruplex like structures, which can suppress the expression of the related genes. The complementary cytosine rich sequences, on the other strand, may adopt another similar four stranded structure, the i-motif, stabilized by cytosine:cytosine non-canonical base pairs. While non-canonical base pairs are still relatively rare in DNA, in RNA molecules, where generally a single polymeric strand folds onto itself to form various secondary and tertiary structures, the occurrence of non-Watson-Crick base pairs turns out to be far more prevalent. As early as in the 1970s, analysis of the crystal structure of yeast tRNAPhe showed that RNA structures possess significant non-canonical variations in base pairing schemes. Subsequently, the structures of ribozymes, ribosome, riboswitches, etc. have highlighted their abundance, and hence the need for a comprehensive characterization of Non-Canonical Base Pairs. These three-dimensional RNA structures generally possess several secondary structural motifs, such as double helical stems, stems with hairpin loops, symmetric and asymmetric internal loops, kissing loops between two hairpin motifs, pseudoknots, continuous stacks between two segments of helices, multi helix junctions etc. along with single stranded regions. These secondary structural motifs, except for the single stranded motifs, are stabilized by hydrogen bonded base pairs and several of these are non-canonical base pairs, including G:U Wobble base pairs. It is notable in this context, that the Wobble hypothesis of Francis Crick predicted the possibility of G:U base pair, in place of the canonical G:C or A:U base pairs, also mediating the recognition between mRNA codons and tRNA anticodons, during protein synthesis. The G:U wobble base pair is the most numerously observed non-canonical base pair. While, because of its geometric similarity with the canonical base pairs, they frequently occur in the double helical stem regions of RNA structures, the geometric differences continue to draw the attention of nucleic acid researchers, providing new insights related to its structural significance. It may be noted that though the base pairs in the folded RNA structures, give rise to double helical stems, its two cleft regions – the major groove and minor groove, differ in their respective dimensions from those in DNA double helices. Unlike for those in DNA, the sequence discriminating major grooves in RNA double helices are very narrow and deep. On the other hand, the minor groove regions, though wide and shallow, do not carry much sequence specific information in terms of the hydrogen bonding donor-acceptor positioning of the corresponding base pair edges. 
The G:U wobble base pairs, along with the various other non-canonical base pairs, introduce variations in the structures of RNA double helices, thus enhancing the accessibility of the discriminating major groove edges of associated base pairs. This has been seen to be very important for molecular recognition steps during tRNA aminoacylation as well as in ribosome functions. Considering the immense importance of the non-canonical base pairs in RNA structure, folding and functions, researchers from multiple domains – biology, chemistry, physics, mathematics, computer science, etc., have joined in the effort to understand their structure, dynamics, function and their consequences. The complexities associated with experimental handling of RNA further underline the importance of diverse theoretical inputs towards addressing these issues. Types Two bases may approach each other in various ways, eventually leading to specific molecular recognition mediated by, often non-canonical, base pairing interactions, in addition to strong stacking interactions. These are essential for the process of RNA single strands folding into three-dimensional structures. Early studies on such unusual base pairs by Jiri Sponer, Pavel Hobza and their group were somewhat disadvantaged due to the unavailability of suitable unambiguous systematic naming schemes. While some of the observed base pair were assigned names following the Saenger nomenclature scheme. others were arbitrarily assigned names by different researchers.  It may be mentioned that some attempts were also made by Michael Levitt and coworkers to classify base-base association in terms of adjacency of bases, through either pairing or stacking interactions. There was clearly a need for a classification scheme for different types of non-canonical base pairs, which could comprehensively and unambiguously handle newer variants coming up due to the rapid increase in the sampling space. Different approaches which have evolved in response to this need are described below. Based on hydrogen bonding The nucleotide bases are nearly planar heterocyclic moieties, with conjugated pi-electron cloud, and with several hydrogen bonding donors and accepters distributed around the edges, usually designated as W, H or S, based on whether the edges can respectively be involved in forming Watson-Crick base pair, Hoogsteen base pair, or, whether the edge is adjacent to the C2’-OH group of the ribose sugar. Eric Westhof and Neocles Leontis used these edge designations to propose a currently widely accepted nomenclature scheme for base pairs. The hydrogen bonding donor and acceptor atoms could thus be classified in terms of their positioning along their three edges, namely the Watson-Crick or W edge, the Hoogsteen or H edge, and the Sugar or S edge. Since base pairs are mediated through hydrogen bonding interactions based on hydrogen bond donor-acceptor complementarity, this, in turn, provides a convenient bottoms-up approach towards classifying base pair geometries in terms of respective interacting edges of the participating bases. It may be noted that, unlike the Hoogsteen edge of purines, the corresponding edges of the pyrimidine bases do not have any polar hydrogen bond acceptor atom such as N7. However, these bases have C—H groups at their C6 and C5 atoms, which can act as weak hydrogen bond donors, as proposed by Gautam Desiraju. 
The Hoogsteen edge, hence, is also called Hoogsteen/C-H edge in a unified scheme for designating equivalent positions of purines as well as pyrimidines. Thus, the total number of possible edge combinations involved in base pairing are 6, namely Watson-Crick/Watson-Crick (or W:W), Watson-Crick/Hoogsteen (or W:H), Watson-Crick/Sugar (or W:S), Hoogsteen/Hoogsteen (or H:H), Hoogsteen/Sugar (or H:S) and Sugar/Sugar (or S:S). In the canonical Watson-Crick base pairs, the glycosidic bonds attaching the N9 (of purine) and N1 (of pyrimidine) of the paired bases with their respective sugar moieties, are on the same side of the mean hydrogen bonding axis, and are hence called Cis Watson-Crick base pairs. However, the relative orientations of the two sugars may also be Trans with respect to the mean hydrogen bonding direction giving rise to a distinct Trans Watson-Crick geometric class, consisting of species which were earlier referred to as reverse Watson-Crick base pairs according to Saenger nomenclature. The possibility of both Cis and Trans glycosidic bond orientation for each of the 6 possible edge combinations, gives rise to 12 geometric families of base pairs (see table). According to the Leontis-Westhoff scheme, any base pair can be systematically and unambiguously named using the syntax <Base_1: Base_2><Edge_1: Edge_2><Glycosidic Bond Orientation> where Base_1 and Base_2 carry information on respective base identities and their nucleotide number. This nomenclature scheme also allows us to enumerate the total number of distinct possible base pair types. For a given glycosidic bond orientation, say Cis, the four naturally occurring bases each have three possible edges for formation of base pairs giving rise to 12 such possible base pairing edge identities, each of which can in principle form base pairing with any edge of another base, irrespective of complementarity. This gives rise to a 12x12 symmetric matrix displaying 144 pairwise permutations of base pairing edge identities, where, apart from the 12 diagonal entries, others include repeat combinations. Thus, there are 78 (= 12 + 132/2) unique entries corresponding to the cis glycosidic bond orientation.  Considering both cis and trans glycosidic bond orientations, the number of base pair types amounts to 156. Of course, this number 156 is only an indicator. It includes base-edge combinations where base pairs cannot be formed due to absence of hydrogen bond donor acceptor complementarities.  For example, potential pairing between two guanine residues utilizing their Watson-Crick edges in cis form (cWW) is not supported by hydrogen bonding donor-acceptor complementarity, and is not observed with consistent hydrogen bonding pattern. This method of enumerating the possible number of distinct base pair types also does not consider possibilities of multimodality or bifurcated base pairs, or even instances of base pairs involving modified bases, protonated bases and water or ion mediation in hydrogen bond formation. Two cytosine bases can form trans Watson-Crick/Watson-Crick (tWW) base pairing with their neutral as well as hemi protonated forms, possibly both, giving rise to the i-motif DNA. However, both C(+):C tWW and C:C tWW, are counted as one type among 156 possible types. 
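The counting argument above (12 base-edge identities, 78 unique unordered combinations per glycosidic orientation, 156 in total for cis and trans) can be reproduced mechanically; a minimal Python sketch:

# Enumerate base pair "types" in the sense used above: 4 bases x 3 edges give
# 12 base-edge identities; unordered pairs of identities (self-pairs included)
# give 78 combinations per glycosidic orientation, 156 for cis plus trans.
from itertools import combinations_with_replacement

bases = ["A", "G", "C", "U"]
edges = ["W", "H", "S"]       # Watson-Crick, Hoogsteen, Sugar edges
identities = [b + ":" + e for b in bases for e in edges]

pairs_per_orientation = list(combinations_with_replacement(identities, 2))
print(len(identities), len(pairs_per_orientation), 2 * len(pairs_per_orientation))   # 12 78 156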
Based on isosteres Although significant differences are there between structures of non-canonical base pairs belonging to different geometric families, some base pairs within the same geometric family have been found to substitute each other without disrupting the overall structure. These base pairs are called isosteric base pairs. Isosteric base pairs always belong to same geometric families, but all the base pairs in a particular geometric family are not always isosteric. Two base pairs are called isosteric if they meet the following three criteria: (i) The C1′–C1′ distances should be similar; (ii) the paired bases should be related by the similar rotation in 3D space; and (iii) H-bonds formation should occur between equivalent base positions. A detailed approach towards quantifying isostericity, in terms of an IsoDiscrepancy Index (IDI), which can facilitate reliable prediction regarding which base pair substitutions can potentially occur in conserved motifs, was formulated by Neocles Leontis, Craig Zirbel and Eric Westhof. Based on IDI values and available base pair structural data, the group maintains a curated online base pair catalogue and an updated set of Isostericity Matrices (IM) corresponding to each of the 12 geometric families. Using this resource, one can comprehensively classify different types of canonical and non-canonical base pairs in terms of their positions in the Isostericity Matrices. This approach, for example, indicates that the four base pair types: A:U cWW, U:A cWW, G:C cWW and C:G cWW are isosteric to each other. Thus, as also confirmed by detailed sequence comparisons, double mutations altering A:U cWW to U:A cWW or even to G:C cWW may not disturb the structure, and, unless stability issues are involved, the function of the related RNA.  It was also found that the wobble G:U cWW base pair is not really isosteric to U:G cWW base pair, indicating that such double mutations may significantly affect the functioning of the corresponding RNA. On the other hand, some of the base pairs which are stabilized involving Sugar edge of the bases are mutually isosteric. Based on local strand direction It may be noted here that because of the geometric relationship of the bases with the sugar phosphate backbone, these 12 geometric families of base pairs are associated with two possible local strand orientations, namely parallel and antiparallel. For the 6 families with edge combinations involving Watson-Crick and Sugar edges, W:W, W:S and S:S, cis and trans families are respectively associated with antiparallel and parallel 5' to 3' local strand direction. Introduction of the Hoogsteen edge, as one of the partners in the combination, causes an inversion in the relationship. Thus, for W:H and H:S, cis and trans respectively correspond to parallel and antiparallel local strand orientation. As expected, when both the edges are H, a double inversion is observed, and H:H cis and trans correspond respectively to antiparallel and parallel local strand orientations. The annotation of local strand orientation in terms of parallel and antiparallel directions helps to understand which faces of the individual bases can be seen for a given base pair from the 5’- or the 3’ sides. This annotation also helps in classifying the 12 geometries into two groups of 6 each, where the geometries can potentially interconvert within each group, by in-plane relative rotation of the bases. 
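The relationship between geometric family and local strand direction described above reduces to a simple parity rule: combinations of Watson-Crick and Sugar edges give antiparallel strands in cis and parallel strands in trans, and each Hoogsteen edge in the combination flips that assignment once. The function below is only our illustration of that rule, and it assumes anti glycosidic torsions, a caveat discussed next.

```python
def local_strand_orientation(edge1: str, edge2: str, cis: bool) -> str:
    """edge1, edge2 in {"W", "H", "S"}; returns "antiparallel" or "parallel".
    Assumes both glycosidic torsion angles are anti."""
    # Base rule for W/S-only combinations: cis -> antiparallel, trans -> parallel.
    antiparallel = cis
    # Each Hoogsteen edge in the combination inverts the relationship once.
    if [edge1, edge2].count("H") % 2 == 1:
        antiparallel = not antiparallel
    return "antiparallel" if antiparallel else "parallel"

# Examples reproducing the text:
print(local_strand_orientation("W", "W", cis=True))   # antiparallel
print(local_strand_orientation("W", "H", cis=True))   # parallel (one inversion)
print(local_strand_orientation("H", "H", cis=True))   # antiparallel (double inversion)
```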
However, one should note that the above theory is applicable only when the glycosidic torsion angles of both the nucleotide residues are anti. Notably, crystallographic observations and energetic considerations indicate that syn glycosidic torsions are also quite possible.  Hence the above classification of parallel or antiparallel nature of strand directions, by itself, does not always provide the complete understanding. Various functional RNA molecules are stabilized, in their specific folded pattern, by both canonical as well as non-canonical base pairs. Most tRNA molecules, for example, are known to have four short double helical segments, giving rise to a cloverleaf like two-dimensional structure. The three-dimensional structure of tRNA, however, takes an L-shape. This is mediated by several non-canonical base pairs and base triplets. The D-loop and TψC loop are held together by several such base pairs.  There is a variety of non-canonical base pair varieties, which can be browsed through different websites such as NDB, RNABPDB, RNABP COGEST, etc., to get a better understanding. It may be noted that the above scheme is valid for naturally occurring nucleotide bases. However, there are plenty of examples of post-transcriptional chemical modifications of the bases, many of which are seen in tRNAs or ribosomes. It may be important to understand their structural features also. Identification In case of double helical DNA, identification of base pairs is quite trivial using molecular visualizers such as VMD, RasMol, PyMOL etc. It is, however, not so simple for single stranded folded functional RNA molecules.  Several algorithms have been implemented in software tools for the automated detection of base pairs in RNA structures solved by X-ray crystallography, NMR or other methods. Essentially the programs detect hydrogen bonds between two bases, and ensure their (near) planar orientation, before reporting that they constitute a base pair. Since most of the structures of RNA, available in public domain, are solved by X-ray crystallography, the positions of hydrogen atoms are rarely reported. Hence, detection of hydrogen bond becomes a non-trivial job. The DSSR algorithm by Lu and Wilma K. Olson considers two bases to be paired when they detect one or more hydrogen bond(/s) between the bases, by actually modeling the positions of the hydrogen atoms, and by ensuring the perpendiculars to the two bases being nearly parallel to each other. The positions of the hydrogen atoms can be deduced by converting Internal Coordinates (bond length, bond angle and torsion angle) along with positions of precursor atoms, such as amino group nitrogen atoms and those bonded to the nitrogen or Z-matrix to external Cartesian Coordinates. The base pairs identified by this method are listed in NDB and FR3D databases. A unique way of identification of base pairs in RNA was incorporated in MC-Annotate by Francois Major. In this algorithm they make use of the positions of the hydrogen atoms as well as lone-pair electrons using suitable molecular mechanics/dynamics force-fields and derive hydrogen bond formation probabilities for them. The final identifications of base pairs are done based on these probabilities and approach of hydrogen atoms to lone-pairs electrons of nitrogen or oxygen. 
This method also attempted to classify the base pair nomenclature with additional information of each interacting edge, such as Ws indicating the sugar edge corner of the Watson-Crick edge, Wh representing the Hoogsteen edge corner of Watson-Crick edge, Bw indicating bifurcated three-center hydrogen bond involving both the hydrogen atoms of amino groups to form hydrogen bonds with a carbonyl oxygen involving both of its lone-pairs, etc. As claimed by the authors, this nomenclature scheme adds some additional features to the Leontis-Westhof (LW) scheme and may be referred to as the LW+ scheme. A major advantage of this scheme lies in its ability to distinguish between alternative base pairing geometries, where multimodality is observed within an LW family. This method, however, does not consider the possible participation of the 2'-OH group of the ribose sugars in base pair formation. Another algorithm, namely BPFIND by Dhananjay Bhattacharyya and coworkers, demands at least two hydrogen bonds using two distinct sets of donors and acceptors atoms between the bases. This hypothesis driven algorithm considers distances between two pairs of atoms (hydrogen bond donor (D1 and D2) and acceptor (A1 and A2) and four suitably chosen precursor atoms (PD1, PD2, PA1, PA2) corresponding to the D's and A's. Small values of such distances in conjunction with large values of the angles defined by θ1(PD1—D1—A1), θ2(D1—A1—PA1), θ3(PD2—D2—A2), θ4(D2—A2—PA2) (close to 180o or πc) ensures two structural features which characterize well defined base pairs: i) the hydrogen bonds are strong and linear and ii) the two bases are co-planar. Notably, so long as one restricts the search to base pairs which are stabilized by at least two distinct hydrogen bonds, the above algorithms, by and large, yield the same set of base pairs in different RNA structures. Sometimes in the crystal structures it is observed that two closely spaced bases are oriented in such a way that apart from the regular hydrogen bonds two additional electronegative hydrogen bond acceptor atoms are very close to each other, which may cause electrostatic repulsion. The concept of protonated base pairing, implicating a possible protonation of one of these electronegative, (potentially) hydrogen bond acceptor atoms thus converting it into a hydrogen bond donor, was introduced to explain stability of such geometries. Some of the NMR derived structures also support the protonation hypothesis, but possibly more rigorous studies using neutron diffraction or other techniques would be able to confirm it. The quality of the crystal structures permitting, some algorithms also attempted to detect water or cation mediated base pair formation. Stability The canonical Watson-Crick base pairs, G:C and A:T/U as well as most of the non-canonical ones are stabilized by two or more (e.g. 3 in the case of G:C cWW) hydrogen bonds. Justifiably, a significant amount of research on non-canonical base pairs has been carried out towards bench-marking their strengths (interaction energies) and (geometric) stability against those of the canonical base pairs. It may be noted here that base pair geometries, as observed in the crystal structures, are often influenced by several interactions present in the crystal environment, thus perturbing their intrinsically stable geometries arising out of the hydrogen bonding and related interactions between the two bases. 
Therefore, in principle, it is possible that the observed geometries in some cases are intrinsically unstable, and that they are stabilized by other interactions provided by the environment. Several groups have attempted to determine the interaction energies in these non-canonical base pairs using different quantum chemistry based approaches, such as density functional theory (DFT) or MP2 methods. These methods were applied on suitably truncated, hydrogen-added, and geometry optimized models of the base (or nucleoside) pairs extracted from PDB structures. Depending upon the optimization protocol, typically three types of interaction energies have been reported. In the first method, the base pair model geometries, isolated from their respective environments, are fully optimized without any constraints, thus providing the intrinsic geometries and interaction energies of the isolated models. This procedure, however, sometimes leads to optimized geometries of base pairs involving edges different from those in the initial crystal geometry. Abhijit Mitra and collaborators also used an additional second protocol, where the heavy atom (non-hydrogen) coordinates are retained as in the crystal geometries, optimizing only the positions of the added hydrogen atoms. In the third protocol, followed mostly by Jiri Sponer and his group, optimization was carried out with constraints on some angles and dihedrals. Given that the models are extracted from their respective crystal structures, and are isolated from their crystal environments, the second and the third protocols provide two different approaches towards approximating the environmental effects, without explicit consideration of any specific environmental interactions. This has further been addressed in some reports by considering specific environmental factors, such as coordination with magnesium, or even some covalent modifications to the bases. All three protocols are useful in their respective contexts. Further, a comparison of the model geometries obtained by the different protocols provides an idea regarding both the stability of the corresponding base pair geometries and the probable extent and nature of environmental influences. It was found that most non-canonical base pairs having two or more hydrogen bonds generally maintain the same hydrogen bonding pattern in the crystal geometry and in the fully optimized isolated geometry, thus indicating their intrinsic geometric stability. Interaction energies calculated from these optimized models also indicated the energetic stability of the corresponding non-canonical base pairs. The previous notion that non-canonical base pairs are weaker than the Watson-Crick base pairs was found to be incorrect. Interaction energies between the bases of several base pairs, such as G:G tWW, G:G cWH, A:U cHW, G:A cWW, G:U cWW, etc., are found to be larger than that of the canonical A:U cWW base pair. Of course, not all non-canonical base pairs are very strong or stable in terms of interaction energy. Several base pairs have been detected on the basis of weak hydrogen bonds involving C—H...O/N atoms, where interaction energies are rather small. Further, geometry optimizations of some of the observed base pairs, in particular, but not limited to, those involving weak hydrogen bonds or those stabilized by single hydrogen bonds, were found to adopt alternate geometries, thus indicating their intrinsic lack of geometric stability.
These alterations of hydrogen bonding schemes, giving rise to changes in base pairing family upon free optimization, may have functional implications in RNA, such as acting as conformational switches. Accordingly, as mentioned above in connection with Sponer's protocol, there have been some attempts to restrain the experimentally observed geometry while carrying out geometry optimization for interaction energy calculations. Interestingly, in several cases, interaction energies calculated for these ‘away from intrinsically stable’ geometries also indicate good energetic stability. Though the energetics and geometric stabilities of different non-canonical base pairs do not show any generalized correlations, analysis of several databases, such as RNABPDB and RNABP COGEST, which catalogue structural and energetic features of some of the observed base pairs and their stacks, reveals some interesting general trends. For example, geometry optimizations of several base pairs involving the 2’-OH group of the sugar residue resulted in significant alterations from their initial geometry, possibly due to the flexibility of the sugar puckers and glycosidic torsions. The significantly high interaction energies of protonated base pairs, despite the high energy cost of base protonation, also deserve a special mention in this context. This can mostly be attributed to the additional charge-induced dipole interactions which are associated with protonated base pairs. Structure Base pairing An estimated 60% of bases in structured RNA participate in canonical Watson-Crick base pairs. Base pairing occurs when two bases form hydrogen bonds with each other. These hydrogen bonds can be either polar or non-polar interactions. The polar hydrogen bonds are formed by N-H...O/N and/or O-H...O/N interactions. Non-polar hydrogen bonds are formed between C-H...O/N. Edge interactions Each base has three potential edges where it can interact with another base. The purine bases have three edges which are able to hydrogen bond. These are known as the Watson-Crick edge (WC), the Hoogsteen edge (H), and the Sugar edge (S). Pyrimidine bases also have three hydrogen-bonding edges. Like the purines, they have a Watson-Crick edge (WC) and a Sugar edge (S), but the third edge is referred to as the "C-H" edge (H) on the pyrimidine bases. This C-H edge is sometimes also referred to as the Hoogsteen edge for simplicity. The various edges for the purine and pyrimidine bases are shown in Figure 2. Besides the three edges of interaction, base pairs can also vary in their cis/trans forms. The cis and trans structures depend on the orientation of the ribose sugars relative to the hydrogen bonds. These various orientations are shown in Figure 3. Therefore, with the cis/trans forms and the 3 hydrogen bond edges, there are 12 basic types of base pairing geometries which can be found in RNA structures. Those 12 types are WC:WC (cis/trans), WC:HC (cis/trans), WC:S (cis/trans), H:S (cis/trans), H:H (cis/trans), and S:S (cis/trans). Classification These 12 types can be further divided into more subgroups which are dependent on the directionality of the glycosidic bonds and steric extensions. Taking all of the various base combinations into account, there are 169 theoretically possible base pair combinations. The actual number of base pair combinations is lower because some combinations result in non-favorable interactions. The number of possible non-canonical base pairs is still being determined, as it depends strongly on base pairing criteria.
Understanding base pair configuration is similarly difficult since the pairing depends on the surroundings of the bases. These surroundings can consist of adjacent base pairs, adjacent loops, or tertiary interactions (such as a base triple). The bonds between various bases are well defined because of their rigid and planar shape. The spatial interactions between the two bases can be classified in 6 rigid-body parameters or intra-base pair parameters (3 translational, 3 rotational) as shown in Figure 4. These parameters describe the base pair's three-dimensional conformation. The three translational arrangements are known as shear, stretch, and stagger. These three parameters are directly related to the proximity and direction of the hydrogen bonds. The rotational arrangements are buckle, propeller, and opening. Rotational arrangements relate to the non-planar conformation (as compared to the ideal coplanar geometry). Intra-base pair parameters are used to determine the structure and stability of non-canonical base pairs; they were originally devised for base pairs in DNA, but were found to also fit non-canonical base pair models. Types The most common non-canonical base pairs are trans A:G Hoogsteen/sugar edge, A:U Hoogsteen/WC, and G:U Wobble pairs. Hoogsteen base pairs Hoogsteen base pairs occur between adenine (A) and thymine (T), and between guanine (G) and cytosine (C), similarly to Watson-Crick base pairs. However, the purine (A or G) takes on an alternative conformation with respect to the pyrimidine. In the A-T (or A-U) Hoogsteen base pair, the adenine is rotated 180° about the glycosidic bond, resulting in an alternative hydrogen bonding scheme which has one hydrogen bond in common with the Watson-Crick base pair (adenine N6 and thymine O4), while the other hydrogen bond, instead of occurring between adenine N1 and thymine N3 as in the Watson-Crick base pair, occurs between adenine N7 and thymine N3. The A-U Hoogsteen base pair is shown in Figure 5. In the G-C Hoogsteen base pair, like the A-T Hoogsteen base pair, the purine (guanine) is rotated 180° about the glycosidic bond while the pyrimidine (cytosine) remains in place. One hydrogen bond from the Watson-Crick base pair is maintained (guanine O6 and cytosine N4) and the other occurs between guanine N7 and a protonated cytosine N3 (note that the Hoogsteen G-C base pair has two hydrogen bonds, while the Watson-Crick G-C base pair has three). Wobble base pairs Wobble base pairing occurs between two nucleotides that do not form a Watson-Crick base pair; it was proposed by Francis Crick in 1966. The four main examples are guanine-uracil (G-U), hypoxanthine-uracil (I-U), hypoxanthine-adenine (I-A), and hypoxanthine-cytosine (I-C). These wobble base pairs are very important in tRNA. Most organisms have fewer than 45 types of tRNA, even though 61 types would technically be necessary to pair canonically with every sense codon. Wobble base pairing allows the 5' base of the anticodon to form a non-standard pair with the codon. Examples of wobble base pairs are given in Figure 6. 3-D Structure The secondary and three-dimensional structures of RNA are formed and stabilized through non-canonical base pairs. Base pairs make up many secondary structural blocks which aid the folding of RNA complexes and three-dimensional structures. The overall folded RNA is stabilized by canonical base pairing between its tertiary and secondary structures. Due to the many possible non-canonical base pairs, there is a virtually unlimited number of structures, which allows for the diverse functions of RNA.
The arrangement of the non-canonical bases also allow long-range RNA interactions, recognition of proteins and other molecules, and structural stabilizing elements. Many of the common non-canonical base pairs can be added to a stacked RNA stem without disturbing its helical character. Secondary Basic secondary structural elements of RNA include bulges, double helices, hairpin loops, and internal loops. An example of a hairpin loop of RNA is given in Figure 7. As shown in the figure, hairpin loops and internal loops require a sudden change in backbone direction. Non-canonical base pairing allows for the increased flexibility at junctions or turns required in the secondary structure. Tertiary Three-dimensional structures are formed through the long-range intra-molecular interactions between the secondary structures. This leads to the formation of pseudoknots, ribose zippers, kissing hairpin loops, or co-axial pseudocontinuous helices. The three-dimensional structures of RNA are primarily determined through molecular simulations or computationally guided measurements. An example of a Pseudoknot is given in Figure 8. Structural features of a base-pair, formed by two planar rigid units, can be quantified, using six parameters – three translational and three rotational. IUPAC recommended parameters are Propeller, Buckle, Open Angle, Stagger, Shear and Stretch (Figure 8). There are several publicly available software, such as Curves by Richard Lavery, 3DNA by Olson, NUPARM by Manju Bansal, etc., which may be used to calculate these parameters. While the first two calculate the parameters of canonical and non-canonical base-pairs relative to the standard canonical Watson-Crick base pairs geometry, the NUPARM algorithm calculates in absolute terms using base pairing edge specific axis system. Hence, for most non-canonical base-pairs, which involve non-Watson-Crick edges, some of the parameters (Open, Shear and Stretch) calculated by Curves or 3DNA are usually large even in their respective intrinsically most stable geometries.  On the other hand, the values provided by NUPARM indicate the quality of hydrogen bonding and planarity of the two bases in a more realistic fashion. Thus, the NUPARM Stretch values, indicating separation of the two bases of a base pair, and which depend on optimal hydrogen bonding distances, are always around 3Ǻ. Some other general trends observed in the values of the above parameters may be of interest to note. Most of the cis base pairs are seen to have Propeller values around -10o and small values of Buckle and Stagger. The Open and Shear values often depend on positions of the hydrogen bonding atoms. As for example, GU cWW wobble base pairs have Shear value around -2.2Ǻ while GC or AU cWW base pairs have Shear values around zero. The Open values for most base pairs are close to zero but the values are often rather large for those involving 2’-OH group of sugar in the NUPARM derived parameter set. The trans base pairs, however, do not show any systematic trend in their Propeller values. Roles In RNA The structural hierarchy in RNA is usually described in terms of a stem-loop 2D secondary structure, which further folds to form its 3D tertiary structure, stabilized by what are referred to as long range tertiary contacts. Most often the non-canonical base pairs are involved in those tertiary contacts or extra-stem base pairs. 
For example, some of the non-canonical base pairs in tRNA appear between the D-stem and TψC loops (Figure 5), which are close in the three-dimensional structure. Such base pairing interactions give stability to the L-shaped structure of tRNA. In this region, some base pairs are found to be additionally hydrogen bonded to a third base. Thus, the 23rd residue is simultaneously paired to the 9th and 12th residues, together forming a base triple, the smallest member of the class of higher order multiplets. Multiplets One base, in addition to forming a proper planar base pair with a second base, can often participate in base pair formation with a third base, forming a base triple. One classic example is the formation of the DNA triple helix, where two bases of two antiparallel strands form consecutive Watson-Crick base pairs in a double helix and a base of a third strand forms Hoogsteen base pairs with the purine bases of the Watson-Crick base pairs. Many different types of base triples have been reported in the available RNA structures and have been elegantly classified in the literature. Multiplets are however not limited to triplet formation. The formation of a base quartet by four bases is now well documented in the structure of the G-quadruplex characteristically found in telomeres. Here four guanine residues pair up among themselves in a cyclic arrangement involving the Watson-Crick/Hoogsteen cis (cWH) base pairing scheme, so that each guanine interacts with two other guanines. Three to four such guanine quartets stack on top of one another to form a four-stranded DNA structure. In addition to such a cyclic topology, several other topologies of base:base pairings are possible for higher order multiplets such as quartets, pentets, etc. Double helical regions Non-canonical base pairs quite frequently appear within double helical regions of RNA. The G:U cWW non-canonical base pairs are seen very frequently within double helical regions, as this base pair is nearly isosteric to the other canonical ones. Due to the complications of strand direction, as elaborated in the Classification section (Table 1), not all types of non-canonical base pairs can be accommodated within double helical regions with anti glycosidic torsion angles. However, many non-canonical base pairs, e.g. A:G tHS (trans Hoogsteen/Sugar edge), A:U tHW (trans Hoogsteen/Watson-Crick), A:G cWW, etc., are often seen within double helical regions, giving rise to symmetric internal loop-like motifs. Attempts have been made to classify all such situations where two base pairs (canonical or non-canonical) stack in an antiparallel sense, possibly giving rise to double helical regions in RNA structures. These base pairs are quite stable, and they are able to maintain the helical property quite well. The backbone torsion angles around these residues are also generally within reasonable limits: C3'-endo sugar pucker with anti glycosidic torsion, α/γ torsion angles around -60°/60°, β/ε torsion angles around 180°. Recurrent structural motifs Non-canonical base pairs often appear in different structural motifs, including pseudoknots, with their special hydrogen bonding features. Structural features of these recurrent motifs have been archived in searchable databases, such as FR3D and RNA FRABASE. Also, several of these motifs can be identified in a given query PDB file by the NASSAM web-server.
They are most frequently detected at the termini of double helical segments, acting as capping residues, often preceding hairpin loops. The most frequently found non-canonical base pair, namely G:A tSH, is an integral part of GNRA tetraloops, where N can be any nucleotide residue and R is a purine residue. This motif shows some amount of flexibility and alteration of structural features depending on whether the guanine and adenine are paired or not. Several other types of tetraloop motifs, such as UNCG, YNMG, GNAC, CUYG, etc. (where Y stands for pyrimidine and M is either adenine or cytosine), have been found in available RNA structures. However, these do not generally show involvement of non-canonical base pairing. In addition to these common hairpin motifs, where the loop residues largely remain unpaired, there are also a few motifs where the loop residues make extensive interactions between themselves or with other residues external to the loop. A common example is the C-loop motif, where the bulging loop residues form non-canonical base pairs with the bases of double helical regions (Figure 9). The extra base pairs in these cases give additional stabilization to the composite double helix containing the motif. Non-canonical base pairs are also involved in receptor-loop interactions, such as in the T-loop motif. Another interesting example of the involvement of non-canonical base pairs in recurrent contexts was detected in the GAAA receptor motif, which consists of an A:A cHS base pair followed by a U:A tWH base pair, stacked on both sides by G:C cWW base pairs. Here we have successive non-canonical base pairs within an antiparallel RNA double helical domain. Similarly, there is an A:A cSH base pair involving two consecutive residues in this motif. Such pairing between consecutive residues, which is also termed a dinucleotide platform motif, is quite commonly observed. Such platforms appear in many RNA structures, and the pairing can also be between other bases. Dinucleotide platforms have been reported for A:A, A:G, A:U, G:A and G:U base pairs belonging to the cSH class and also for A:A cHH base pairs. These motifs can alter the strand direction within a double helix by the formation of kinks. Such a dinucleotide platform, along with triplet formation, is also an integral component of the sarcin-ricin motif. Modeling Prediction of biomolecular structure from sequence alone is a long-term goal of scientists working in the fields of bioinformatics, computational chemistry and statistical physics, as well as computer science. Prediction of protein structures from amino acid sequence by methods like homology modeling, comparative modeling, threading, etc. has been largely successful due to the availability of about 1200 unique protein folds. Inspired by the protein experience, there are now several approaches towards predicting RNA structures, albeit with varying degrees of success. It can be seen that most of the approaches are essentially limited to the prediction of RNA 2D stem-loop structure, also referred to as RNA secondary structure. For example, prediction of the minimum free energy arrangement of double helical regions of an RNA sequence, using base pairing and stacking energies derived computationally from experimental thermodynamic data, was initially introduced by Ruth Nussinov and later extended by Michael Zuker. This, in turn, has inspired several related modified algorithms, including those that incorporate data on neighboring group interactions.
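As a rough illustration of the dynamic programming idea behind such secondary structure predictors, the sketch below maximizes the number of canonical and G:U wobble pairs rather than evaluating real thermodynamic free energies. It is a simplified recursion in the spirit of Nussinov's algorithm, not a reimplementation of any published predictor, and the function name and toy sequence are ours.

```python
# Minimal Nussinov-style dynamic program: maximize the number of allowed pairs.
def max_pairing(seq, min_loop=3):
    allowed = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
               ("G", "U"), ("U", "G")}          # canonical + wobble only
    n = len(seq)
    dp = [[0] * n for _ in range(n)]            # dp[i][j]: best for seq[i..j]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                  # option 1: leave j unpaired
            for k in range(i, j - min_loop):     # option 2: pair j with some k
                if (seq[k], seq[j]) in allowed:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0

print(max_pairing("GGGAAAUCC"))  # 3 nested pairs for this toy hairpin
```

Real minimum free energy predictors replace the simple +1 pair score with experimentally derived stacking and loop energies, which is precisely where the scarcity of such data for non-canonical pairs becomes limiting.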
Most of these approaches, however, mainly consider data on canonical base pairing, with only a few also considering thermodynamic data on Hoogsteen base pairs. Thus, in addition to the computational costs and complications associated with the identification of pseudoknots, all these methods also suffer from the drawback associated with the paucity of experimental data on non-canonical base pairs. However, there are also several approaches which attempt to predict the tertiary 3D structure corresponding to a given predicted 2D structure. There are also a few involving 3D fragment based modeling, which are getting further facilitated by the increasing availability of motif-wise curated RNA 3D structure data. It is also encouraging to note that there are now some software packages and servers, such as MC-Fold, RNAPDBee, RNAWolfe, etc., available for exploring non-canonical base pairing in RNA 3D structures. Some of these methods depend on structural databases of RNA, such as FRABASE, to obtain 3D coordinates of motifs containing non-canonical base pairs and stitch this information together with the 3D structures of double helices containing canonical base pairs. It may be relevant in this context to mention the approach towards 3D model building of double helical regions with both canonical and non-canonical base pairs used in 3DNA by Olson or in RNAHelix by Bhattacharyya and Bansal. These software suites use base pair parameters to generate 3D coordinates of individual dinucleotide steps, which can be extended to model double helices of arbitrary lengths with canonical or non-canonical base pairs. The above-mentioned methods attempt to model a single structure (2D or 3D) for a given RNA sequence. However, growing evidence indicates that a given RNA sequence can adopt an ensemble of structures and possibly interconvert between them. These ensembles obviously adopt different base pairing patterns between different sets of residues. Thus, there are enough pointers to suggest that the focus on modeling single structures appears to have been a bottleneck for accurate modeling of RNA structure. The theoretical prediction of RNA 2D structure, and consequently 3D structure, can also be confirmed by different chemical probing methods. One of the latest such tools is SHAPE (Selective 2′-hydroxyl acylation analyzed by primer extension), and SHAPE-directed RNA secondary structure prediction appears to be most promising. Coupled with mutational profiling, ensembles of RNA structures, which often include non-canonical base pairing, can be experimentally studied using the SHAPE-MaP approach. One of the ways ahead today appears to be an integration of Zuker's minimum free energy approach with experimentally derived SHAPE data, including simulated SHAPE data as outlined in Montaseri et al. (2016) and Spasic et al. (2017). See also Hoogsteen base pair Wobble base pair References Molecular genetics Nucleic acids
Non-canonical base pairing
Chemistry,Biology
10,917
19,797,453
https://en.wikipedia.org/wiki/Builders%20hardware
Builders' hardware or just builders hardware is a group of metal hardware specifically used for protection, decoration, and convenience in buildings. These products do not form part of the building structure itself; rather, they support the building's fixtures and make them work. Builders hardware usually supports fixtures like windows, doors, and cabinets. Common examples include door handles, door hinges, deadbolts, latches, numerals, letter plates, switch plates, and door knockers. Builders hardware is commonly available in brass, steel, aluminium, stainless steel, and iron. Well-known suppliers of builders hardware exist mainly in China, India, Mexico, and to a lesser extent the U.S. Classifications While builders hardware is classified as such by supplying at least one of the three attributes listed above (protection, decoration, or convenience), it is usually broken down by where it is used or by usage. Bathroom hardware Bathroom hardware includes the products that are used in constructing and maintaining the appearance and decoration of the bathroom. Bathroom products include faucets, showers, holders, tubs, shelves, mirrors, etc. Door hardware All products used for door decoration, maintenance, or any other door-related function come under door hardware, such as door handles, fasteners, hinges, hooks, number plates, knockers, etc. Furniture hardware Furniture hardware comprises products that support the look, design, and durability of furniture. Furniture hardware products include furniture frames, furniture legs, furniture arms, etc. Safety & security hardware Buildings, goods, and their occupants need protection from fire, intruders, and other external agents. Proper protection systems include fire-safe security systems, home monitoring, smoke detectors, locksets, window guards, etc. Plumbing hardware Plumbing hardware products are used for supplying water throughout the building using hoses, pipes, and tubes. These hardware products ensure that water is supplied properly and continuously. Since water runs through or remains in these products at all times, the materials from which they are made need to be highly corrosion resistant and able to withstand extreme temperatures. The most common materials are copper, aluminum, steel, and PVC. Cabinet hardware The products that are used to make cabinets work come under cabinet hardware, such as cabinet fasteners, brackets, latches, hinges, pulls, locks, etc. Cabinet hardware consists of small components that make cabinets functional. These products are made of materials like plastic, metal, and sometimes glass. Window hardware Window hardware does not include the window itself; rather, it consists of smaller components used to install, fix, and protect windows, such as window extrusions, fasteners, handles, hinges, locks, and more. Curtain hardware Curtain hardware includes products like hooks, curtain rings, curtain finials, etc. These products are used to hang curtains at doors, windows, verandas, etc. Curtain hooks and poles are used to handle and move the curtains. Curtain hardware products are made of a variety of materials, including metals and plastics. Aluminum and iron are mostly used for making rings, hooks, rods, and poles. See also Architectural ironmongery References Bibliography Hardware (mechanical)
Builders hardware
Physics,Technology,Engineering
606
50,760,419
https://en.wikipedia.org/wiki/Porn%20studies
Porn studies is the critical academic study of pornography and its associated industry, typically in the broader rubric of the field of sexuality studies. Porn studies takes as its object of research pornography itself — its visual artefacts, cultural role, controversies, and influence on the public — as well as the manner in which pornography is researched. The development of porn studies as a field of academia has been driven by the publication of the same name. Subjects Areas and themes that scholars of porn studies, as a field, may focus on include: gay pornography and how it reproduces idealized pictures of masculinity, the uses of pornographic comics by Japanese women, the proliferation of amateur porn sparked by the Pamela Anderson and Tommy Lee video, interraciality in the porn industry, and more. The field of porn studies situates itself in the broader field of critical studies. In doing so, it aims to "unpack what is at stake in the construction of particular views and practices... draw[ing] on insights from disciplines that acknowledge the complexity of culture and are aware of the shifts and continuities in the ways that sex and media are constructed historically." The critical approach includes an enquiry into the types of theoretical tools suggested by different forms of analysis, and how the questions one asks influence the research that is produced. Studies A Danish study showed that the availability of pornography reduces the incidents of at least some sexual crimes. A 2009 study found that often perceived link between pornography and sexual violence was non-existent. In 2010, a study found that men who watch pornography were more likely to be dissatisfied with their sex lives, although the reverse was true for women; heterosexual couples who watch pornography together were more likely to report higher levels of sexual satisfaction and dedication than those who viewed it alone. A 2016 study found that those who regularly watch pornography are more likely to divorce, although the study did not determine if viewing pornography was a cause for the divorce or a symptom of other problems. Theoretical foundations The philosophical foundation of the discipline porn studies is social constructivism. Thus, scholars of porn studies are not as interested in empirical questions about the effects of pornography on society — which traditionally cover issues like the links between the consumption of pornography and undesired behavioral and social outcomes; whether or not pornography is a public health problem; or whether pornography may have positive social benefits — but instead on questions surrounding how norms shape what is actually researched. This approach to enquiry is opposed to positivist approaches in social science which "obscures the subjective, ideological and normative dimension of scientific paradigms." Criticism Scholars of porn studies may encounter opposition from college and university administrators who are concerned about the consequences of exposing students to potentially obscene material in a typical course. Such concerns include the age of consent of the students viewing the material, and potential legal ramifications. Critics of violence in hardcore pornography have also raised objections to the discipline as a whole for its alleged role in perpetuating the damaging effects of porn. See also Porn Studies (journal) Sexology References Further reading Sexology Pornography Cultural studies
Porn studies
Biology
625
39,539,086
https://en.wikipedia.org/wiki/MPEG%20media%20transport
MPEG media transport (MMT), specified as ISO/IEC 23008-1 (MPEG-H Part 1), is a digital container standard developed by Moving Picture Experts Group (MPEG) that supports High Efficiency Video Coding (HEVC) video. MMT was designed to transfer data using the all-Internet Protocol (All-IP) network. History In April 2013 a list of requirements was released for MMT and the general requirements stated that MMT must have clear advantages when compared to existing container formats and that it must have low computational demands. Also in April 2013 a list of use cases for MMT was released which included the need for it to support Ultra HD video content, 3D video content, interactive content, user-generated content, applications that support multi-device presentation, subtitles, picture-in-picture video, and multiple audio tracks. MPEG has estimated that the first edition of MMT will reach Final Draft International Standard (FDIS) in November 2013. On May 30, 2013, NHK started showing test equipment based on MMT at the NHK Science & Technology Research Laboratories Open House 2013. Schedule The timescale for the completion of the first version of the MMT standard in the MPEG standardization process: October 2010: Call for Proposals March 2011: Working Draft July 2012: Committee Draft January 2013: 2nd Committee Draft April 2013: Draft International Standard November 2013: Final Draft International Standard May 2014: International Standard published Highlights MPEG MMT succeeds MPEG-2 TS as the media transport solution for broadcasting and IP network content distribution, with the aim of serving new applications like UHDTV, second screen, ..., etc., with full support of HTML5 and simplification of packetization and synchronization with a pure IP based transport. It has the following technology innovations: Convergence of IP transport and HTML 5 presentation Multiplexing of various streaming components from different sources Simplification of TS stack and easy conversion between storage file format and streaming format Support multiple devices and hybrid delivery Advanced QoS/QoE engineering features Solutions and demos SKT MMT-based True Realtime (TR) video streaming solution (Oct' 2014) SK Telecom (The leading mobile operator in Korea) and Samsung have developed and tested their True Real-Time Mobile Streaming system based on the emerging MPEG MMT standard over SKT's commercial LTE network with Btv video streaming platform. The results showed a latency reduction of 80%, which would significantly improve the user experience of live content streaming. Current mobile video streaming technologies often suffer up to 15 seconds of latency, but its implementation of MMT has reduced that to 3 seconds. SK Telecom said they will put more effort to strengthen their mobile network service quality by developing innovative and advanced technologies with the aim of having it commercially available next year. Technicolor-Sinclair Demo (Oct 2014) Sinclair Broadcast Group and Technicolor delivered successfully ATSC 3.0 4K UHD testbed platform. The Technicolor platform, based on open audio, video, and transport standards including Scalable HEVC (SHVC), MPEG-H audio, and MPEG-MMT transport, has been integrated into Sinclair's experimental OFDM transmission system in Baltimore, Maryland. The impact of this deployment is that broadcasters will be able to deliver the highest quality content, inclusive of 4K UHD broadcast in a simultaneous transmission to consumers both at home and on the go. 
NHK MMT UHD system demo (May, 2014) In Japan, Super Hi-Vision test services are planned to begin in 2016, and commercial services are planned to begin in 2020. NHK has studied MPEG Media Transport (MMT) as the transport protocol for the next generation of broadcasting systems since it enables hybrid delivery using broadcasting and broadband networks. They have demonstrated MMT-based 8K Super Hi-Vision Broadcasting at their open house exhibition. libatsc3 Android Sample App with MMT MFU playback (January 2020) libatsc3 provides an ATSC 3.0 NGBP Open Source Library - Tools for parsing and decoding STLTP, LMT, LLS, SLS, and NextGen supported standards. In January 2020, libatsc3 released a baseline Android sample app providing PCAP playback of ROUTE/DASH and implemented the world's first open-source MMT player with MFU (Media Fragmentation Unit) de-encapsulation. By using the MFU for media essence decoding (e.g. single samples are pushed to the media decoder), rather than the traditional MPU (Media Presentation Unit) of ISOBMFF and DASH, the baseline NGBP implementation can provide robust media playback regardless of packetized DU (data unit) loss, transient MFU loss, or sustained MPU loss. Rapid recovery and de-encapsulation durability are also enabled by implementing out-of-order de-packetization using the MMTHSample hint at the start of every media sample - providing the sample number, data unit length, and offset. Other implementations relying on ISOBMFF with MOOF and TRUN box provide only one emission of sample length and duration MPU, posing a high risk of full GOP loss disproportionate to the MDAT size (e.g. 1KB of ALC packet loss may result up to the loss of ~1MB or more of the essence). libatsc3 is designed to be robust and durable in inherently lossy ATSC 3.0 IP-multicast emissions, including mobile reception, to demonstrate the potential of NextGen across all devices and platforms. More information at libatsc3 Overview. libatsc3 ExoPlayer MMT Plugin with MFU de-packetization and out-of-order mode support (February 2021) Expanding on the libatsc3 android proof-of-concept, ONEMedia 3.0 and ngbp.org have developed an ExoPlayer plugin for MMT, including support for MFU de-packetization and out-of-order mode support. Source and sample Android Activity available on GitHub: ExoPlayer ISO23008-1 MMT extension See also ISO base media file format — a previous digital container standard created by MPEG and defined in ISO/IEC 14496-12 MPEG transport stream — a previous digital container standard created by MPEG and defined in ISO/IEC 13818-1 MPEG program stream — a previous digital container standard created by MPEG and defined in ISO/IEC 13818-1 References External links MPEG Media Transport Digital container formats Computer file formats IEC standards ISO standards MPEG-H
MPEG media transport
Technology
1,376
23,766,663
https://en.wikipedia.org/wiki/Joule%20Unlimited
Joule Unlimited, formerly known as Joule Biotechnologies, was a producer of alternative energy technologies based in Bedford, Massachusetts. The company developed a process to generate hydrocarbon-based fuel by combining non-fresh water, nutrients, cyanobacteria, carbon dioxide, and sunlight. After ten years of operation and building a demonstration plant in New Mexico, the company shut down in August 2017. The company shut down after management was unable to raise money. Technology claims The company claimed it would be able to produce more than 20,000 gallons of fuel per acre per year (19,000 m3/km2/annum) in almost refined form using carbon dioxide waste from industrial processes and desert land. Helioculture uses photosynthetic organisms, but is otherwise distinct from the process that makes fuel from algae. Oils made from algae usually have to be refined into fuel following a batch process, but helioculture secretes fuel directly rather than storing it in their cells - either ethanol or hydrocarbons - that do not need refining. The helioculture process also does not produce biomass. This process is enabled by the discovery of unique genes coding for enzymatic mechanisms that enable the direct synthesis of such key molecules as alkanes, olefins, ethanol and polymers and other high-value chemicals ordinarily derived from petroleum, using bacterial variants. Helioculture allows for brackish water or graywater, nonindustrial waste water from sources such as baths and washing machines, to be used, while traditional biofuels such as cellulosic ethanol require fresh water. Joule Unlimited claimed that its product would have been cost competitive with crude oil at $50 a barrel ($310/m3). The company also stated that its product could supply all of the transportation fuel for the United States from an area the size of the Texas panhandle. Joule Unlimited did not reveal the name of the organism that it used, although it acknowledged that the company had modified the organism. In September, 2010, Joule received a patent for genetically altered bacterium. People Joule Unlimited was founded in 2007 within Flagship VentureLabs by Noubar Afeyan and David Berry. In addition to its founders, Joule's Board of Directors included Graham Allison, Anatoly Chubais, Stelios Papadopoulos, Caroline Dorsa, and Ruben Vardanian. Joule's Scientific Advisory Board includes synthetic biologists George M. Church and Jim Collins. Audi partnership After building a demonstration plant in New Mexico, Joule Unlimited entered into a strategic partnership with Audi in 2012 to accelerate the commercialization of their fuels, ethanol named Sunflow-E and diesel named Sunflow-D. Audi brands them as e-ethanol and e-diesel respectively. References External links Joule Biotechnologies Unveils Liquid Fuel From Solar Power, Wall St. 
Journal, July 27, 2009 Joule Biotechnologies announce new biofuel jargon, scant details, Scientific American, July 27, 2009 A Cagey Bet On Clean Tech, Forbes, July 27, 2009 Patents assigned to Joule Unlimited US 7785861 ("Hyperphotosynthetic organisms", 2010-08-31) US 7794969 ("Methods and compositions for the recombinant biosynthesis of n-alkanes", 2010-09-14) Patent applications (Joule Unlimited personnel) WO 2010 062707 A1 ("Methods and compositions for producing carbon-based products of interest in micro-organisms", 2010-06-03) WO 2010 036951 A3R4 ("Methods and compositions for producing carbon-based products of interest in micro-organisms", 2010-06-24) WO 2010 068288 A3R4 ("Solar biofactory, photobioreactors, passive thermal regulation systems and methods for producing products", 2010-10-07) WO 2010 017245 A8R5 ("Methods and compositions for producing carbon-based products of interest in micro-organisms", 2010-10-14) Synpcc7942_1594 — acyl-ACP reductase (Synechococcus elongatus PCC 7942) (see US 7794969) "Microbial biosynthesis of alkanes", Schirmer A, Rude MA, Li X, Popova E, del Cardayre SB, Science 2010 Jul 30;329(5991):559-62. Sustainable energy Algae biomass producers Companies based in Bedford, Massachusetts
Joule Unlimited
Engineering,Biology
945
32,867,251
https://en.wikipedia.org/wiki/Daylight%20factor
In architecture, a daylight factor (DF) is the ratio of the light level inside a structure to the light level outside the structure. It is defined as: DF = (Ei / Eo) x 100% where, Ei = illuminance due to daylight at a point on the indoors working plane, Eo = simultaneous outdoor illuminance on a horizontal plane from an unobstructed hemisphere of overcast sky. To calculate Ei, requires knowing the amount of outside light received inside of a building. Light can reach a room via through a glazed window, rooflight, or other aperture via three paths: Direct light from a patch of sky visible at the point considered, known as the sky component (SC), Light reflected from an exterior surface and then reaching the point considered, known as the externally reflected component (ERC), Light entering through the window but reaching the point only after reflection from an internal surface, known as the internally reflected component (IRC). The sum of the three components gives the illuminance level (typically measured in lux) at the point considered: Illuminance = SC + ERC + IRC The daylight factor can be improved by increasing SC (for example placing a window so it "sees" more of the sky rather than adjacent buildings), increasing ERC (for example by painting surrounding buildings white), increasing IRC (for example by using light colours for room surfaces). In most rooms, the ceiling and floor are a fixed colour, and much of the walls are covered by furnishings. This gives less flexibility in changing the daylight factor by using different wall colours than might be expected meaning changing SC is often the key to good daylight design. Architects and engineers use daylight factors in architecture and building design to assess the internal natural lighting levels as perceived on working planes or surfaces. They use this information to determine if light is sufficient for occupants to carry out normal activities. The design day for daylight factor calculations is based on the standard CIE overcast Sky for 21 September at 12:00pm, and where the Ground Ambient light level is 11921 Lux. CIE being the Commission Internationale de l´Eclairage, or International Commission on Illumination. Calculating daylight factors requires complex repetition of calculations and thus is generally undertaken using a complex software product such as Radiance. This is a suite of tools for performing lighting simulation, which includes a renderer as well as many other tools for measuring simulated light levels. It uses ray tracing to perform all lighting calculations. One failing in many of these calculations is that they are often completed without wall hangings or furniture against the walls. This can lead to higher predictions of the daylight factor than is correct. To assess the effect of a poor or good daylight factor, one might compare the results for a given calculation against published design guidance. In the UK this is likely to be CIBSE Lighting Guide 10 (LG10-1999), which broadly bands average daylight factors into the following categories: Under 2 – Not adequately lit – artificial lighting is required all of the time Over 5 – Well lit – artificial lighting generally not required, except at dawn and dusk – but glare and solar gain may cause problems See also Daylighting Right to light Climate based daylight modelling Notes External links International Commission on Illumination Light Visibility Energy-saving lighting Lighting
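As a small numerical illustration of the daylight factor relations defined in this article, the sketch below combines assumed sky, externally reflected, and internally reflected components into an indoor illuminance and a DF value. All figures are invented example numbers, not measured data, and the function is ours rather than part of any lighting tool.

```python
# Toy illustration of DF = (Ei / Eo) x 100% with Ei = SC + ERC + IRC.
def daylight_factor(sky, ext_reflected, int_reflected, outdoor_illuminance):
    """All inputs in lux; returns (indoor illuminance Ei, DF in percent)."""
    e_indoor = sky + ext_reflected + int_reflected        # Ei = SC + ERC + IRC
    df = 100.0 * e_indoor / outdoor_illuminance           # DF = (Ei / Eo) x 100%
    return e_indoor, df

# Example: SC = 150 lx, ERC = 40 lx, IRC = 60 lx under an 11921 lx overcast sky.
ei, df = daylight_factor(150, 40, 60, 11921)
print(round(ei), round(df, 1))   # 250 lx, ~2.1% -> just above the "under 2" band
```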
Daylight factor
Physics,Mathematics
678
8,492,337
https://en.wikipedia.org/wiki/Eratosthenes%20Seamount
The Eratosthenes Seamount or Eratosthenes Tablemount is a seamount in the Eastern Mediterranean, in the Levantine basin about south of western Cyprus. Unlike most seamounts, it is a carbonate platform, not a volcano. It is a large, submerged massif, about . Its peak lies at the depth of and it rises above the surrounding seafloor, which is located at the depth of up to and is a part of the Herodotus Abyssal Plain. It is one of the largest features on the Eastern Mediterranean seafloor. In 2010 and 2012 the Ocean Exploration Trust's vessel EV Nautilus explored the seamount looking for shipwrecks. Three were found; two were Ottoman vessels from the 19th century and the third was from the 4th century BC. Such seamounts are considered to be ideal for the preservation of shipwrecks because at depths of around the areas are not disturbed by trawlers or by sediments coming off land. Oceanography The Cyprus eddy is a sustained mesoscale eddy with a diameter about , regularly appearing above Eratosthenes Seamount. It was surveyed by oceanographic cruises notably in 1995, 2000, 2001 and 2009. Geology During the Messinian crisis, as the sea level in the Mediterranean dropped by about , the seamount emerged. See also CenSeam Ferdinandea Eratosthenes (crater) References External links Mart, Yossi and Robertson, Alastair H. F. (1998). Eratosthenes Seamount: an oceanographic yardstick recording the Late Mesozoic-Tertiary geological history of the Eastern Mediterranean, in Robertson, A.H.F., Emeis, K.-C., Richter, C., and Camerlenghi, A. (eds.), Proceedings of the Ocean Drilling Program, Scientific Results, Vol. 160, Chapter 52, 701–708. Kempler, Ditza (1998). Eratosthenes Seamount: the possible spearhead of incipient continental collision in the Eastern Mediterranean, in Robertson, A.H.F., Emeis, K.-C., Richter, C., and Camerlenghi, A. (eds.), Proceedings of the Ocean Drilling Program, Scientific Results, Vol. 160, Chapter 53, 709–721. Earthref entry Seamounts of the Mediterranean Physical oceanography Continental fragments
Eratosthenes Seamount
Physics
500
50,571
https://en.wikipedia.org/wiki/Transportation%20engineering
Transportation engineering or transport engineering is the application of technology and scientific principles to the planning, functional design, operation and management of facilities for any mode of transportation to provide for the safe, efficient, rapid, comfortable, convenient, economical, and environmentally compatible movement of people and goods. Theory The planning aspects of transportation engineering relate to elements of urban planning, and involve technical forecasting decisions and political factors. Technical forecasting of passenger travel usually involves an urban transportation planning model, requiring the estimation of trip generation, trip distribution, mode choice, and route assignment. More sophisticated forecasting can include other aspects of traveler decisions, including auto ownership, trip chaining (the decision to link individual trips together in a tour) and the choice of residential or business location (known as land use forecasting). Passenger trips are the focus of transportation engineering because they often represent the peak of demand on any transportation system. A review of descriptions of the scope of various committees indicates that while facility planning and design continue to be the core of the transportation engineering field, such areas as operations planning, logistics, network analysis, financing, and policy analysis are also important, particularly to those working in highway and urban transportation. The National Council of Examiners for Engineering and Surveying (NCEES) lists online the safety protocols, geometric design requirements, and signal timing. Transportation engineering primarily involves the planning, design, construction, maintenance, and operation of transportation facilities. The facilities support air, highway, railroad, pipeline, water, and even space transportation. The design aspects of transportation engineering include the sizing of transportation facilities (how many lanes or how much capacity the facility has), determining the materials and thickness used in pavement design, and designing the geometry (vertical and horizontal alignment) of the roadway (or track). Before any planning occurs, an engineer must take what is known as an inventory of the area or, if it is appropriate, of the previous system in place. This inventory or database must include information on population, land use, economic activity, transportation facilities and services, travel patterns and volumes, laws and ordinances, regional financial resources, and community values and expectations. These inventories help the engineer create business models to complete accurate forecasts of the future conditions of the system. Operations and management involve traffic engineering, so that vehicles move smoothly on the road or track. Older techniques include signs, signals, markings, and tolling. Newer technologies involve intelligent transportation systems, including advanced traveler information systems (such as variable message signs), advanced traffic control systems (such as ramp meters), and vehicle infrastructure integration. Human factors are an aspect of transportation engineering, particularly concerning driver-vehicle interface and user interface of road signs, signals, and markings.
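The four-step urban transportation planning model mentioned under Theory (trip generation, trip distribution, mode choice, route assignment) is often illustrated with a gravity model for the trip-distribution step. The sketch below is illustrative only: the zone productions, attractions, travel-time matrix, and impedance parameter are made-up numbers, and production-constrained gravity models used in practice are iterated to balance attractions as well.

```python
import math

def gravity_trip_distribution(productions, attractions, travel_time, beta=0.1):
    """Distribute each zone's produced trips across destination zones in
    proportion to attractions weighted by an exponential travel-time impedance."""
    trips = []
    for i, prod in enumerate(productions):
        weights = [attractions[j] * math.exp(-beta * travel_time[i][j])
                   for j in range(len(attractions))]
        total = sum(weights)
        trips.append([prod * w / total for w in weights])
    return trips  # trips[i][j] = estimated trips from zone i to zone j

# Three illustrative zones: daily trip productions, attractions, and
# zone-to-zone travel times in minutes.
P = [1000, 600, 400]
A = [800, 900, 300]
T = [[5, 15, 25], [15, 5, 20], [25, 20, 5]]
for row in gravity_trip_distribution(P, A, T):
    print([round(x) for x in row])
```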
Specializations Highway engineering Engineers in this specialization: Handle the planning, design, construction, and operation of highways, roads, and other vehicular facilities as well as their related bicycle and pedestrian realms Estimate the transportation needs of the public and then secure the funding for projects Analyze locations of high traffic volumes and high collisions for safety and capacity Use engineering principles to improve the transportation system Utilize the three design controls, which are the drivers, the vehicles, and the roadways themselves Railroad engineering Railway engineers handle the design, construction, and operation of railroads and mass transit systems that use a fixed guideway (such as light rail or monorails). Typical tasks include: Determine horizontal and vertical alignment of the railways Determine station location Design functional segments of stations like lines, platforms, etc. Estimate construction cost Railway engineers work to build a cleaner and safer transportation network by reinvesting and revitalizing the rail system to meet future demands. In the United States, railway engineers work with elected officials in Washington, D.C., on rail transportation issues to make sure that the rail system meets the country's transportation needs. Railroad engineers can also move into the specialized field of train dispatching which focuses on train movement control. Port and harbor engineering Port and harbor engineers handle the design, construction, and operation of ports, harbors, canals, and other maritime facilities. Airport engineering Airport engineers design and construct airports. Airport engineers must account for the impacts and demands of aircraft in their design of airport facilities. These engineers must use the analysis of predominant wind direction to determine runway orientation, determine the size of runway border and safety areas, different wing tip to wing tip clearances for all gates and must designate the clear zones in the entire port. The Civil Engineering Department, consisting of Civil and Structural Engineers, undertakes the structural design of passenger, terminal design and cargo terminals, aircraft hangars (for parking commercial, private and government aircraft), runways and other pavements, technical buildings for installation of airport ground aids etc. for the airports in-house requirements and consultancy projects. They are even responsible for the master plan for airports they are authorized to work with. See also Bicycle transportation engineering Highway engineering List of BIM software Pavement engineering Traffic engineering References External links Home Institute of Transportation Engineers, a professional society for transportation engineers A better future transformed by transportation technology and innovation. ITS America Home ASCE Engineering disciplines Civil engineering
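The wind analysis mentioned under Airport engineering can be made concrete with a simple crosswind-coverage screen: for each candidate runway heading, count the share of observed winds whose crosswind component stays within an allowable limit, and prefer the heading with the best coverage. This is a sketch under assumptions; the 13-knot limit and the wind observations fed to it are illustrative values, not figures quoted in this article.

```python
import math

def crosswind_coverage(runway_heading_deg, wind_observations, limit_knots=13):
    """Fraction of (direction_deg, speed_knots) observations whose crosswind
    component |speed * sin(wind - runway)| is within the allowable limit."""
    ok = sum(1 for wind_dir, speed in wind_observations
             if abs(speed * math.sin(math.radians(wind_dir - runway_heading_deg)))
             <= limit_knots)
    return ok / len(wind_observations)

def best_runway_heading(wind_observations):
    # Candidate headings in 10-degree steps; a runway serves both reciprocal
    # directions, and the crosswind magnitude is the same for either.
    return max(range(0, 180, 10),
               key=lambda h: crosswind_coverage(h, wind_observations))

# Illustrative wind record dominated by westerly winds.
obs = [(270, 18), (250, 16), (300, 20), (260, 22), (280, 15), (90, 12)]
h = best_runway_heading(obs)
print(h, round(crosswind_coverage(h, obs), 2))
```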
Transportation engineering
Engineering
1,023
18,869,479
https://en.wikipedia.org/wiki/Skytap
Skytap, Inc. is a private company based in Seattle, Washington, offering a public service for cloud computing. Skytap provides self-service access to environments for learning, developing, testing, training, and running enterprise applications. The company was founded as Illumita in 2006 and renamed in 2008. Skytap is also offered by IBM to enable enterprises to migrate and modernize their core business applications. The company was purchased by Kyndryl in 2024. History Illumita was founded by Brian Bershad, Hank Levy, and Steve Gribble, a trio of University of Washington professors who had done research on virtualization and cloud computing, and by graduate student David Richardson. Illumita changed its name to Skytap in 2008, and launched its first product, Skytap Virtual Lab, in April of the same year. Skytap received early funding from the Washington Research Foundation. As of 2011, the organization was funded by Insight Venture Partners, the Madrona Venture Group, Ignition Partners, Bezos Expeditions, and OpenView Venture Partners. Skytap Virtual Lab expanded in scope, and was renamed Skytap in 2008. In 2011, Skytap won the Best of VMworld award in the public/hybrid cloud Computing Technologies category for Skytap Cloud, and the company has been named to annual top cloud computing provider lists from Deloitte, Geekwire, Seattle Business Magazine, and the Puget Sound Business Journal. In March 2018, Skytap added John Ludeman to its leadership team as SVP of Engineering. Ludeman joined Skytap after 30 years of engineering experience at Microsoft. In August 2019, Thor Culverhouse stepped down as CEO and former CTO Bradley Schick was named as his replacement. In May 2024, tech infrastructure multinational Kyndryl Holdings purchased Skytap. Skytap Skytap is an enterprise service purpose-built for the development and testing of complex applications. Users can import existing virtualized applications or build new applications in the cloud. Environments can be accessed through any modern web browser, REST-based application programming interface (API), command-line interface (CLI), or application lifecycle management tool (Jenkins, Visual Studio TFS, etc.). Skytap uses a browser-based interface for all system management, and hosts a library of pre-configured virtual machine images. Using either these images or their own imported VMs, users can create sharable configurations of one or more machines, and securely connect to active machines via a proprietary HTML5-based browser client. References Further reading “Skytap Continues Public Cloud Onslaught.” Networkcomputing.com. November 16, 2011. Retrieved January 2, 2012. “Virtualization Roundup: Four Lab Managers Tested and Reviewed.” PCworld.com. June 9, 2010. Retrieved January 2, 2012. “Skytap Raises $10 Million for Cloud Automation Solutions.” Techcruch.com. December 31, 2010. Retrieved January 2, 2012. “Venture Firms Give Startup a Vote of Confidence.” The Seattle Times. August 10, 2007. Retrieved January 2, 2012. External links Cloud infrastructure Cloud computing providers Software companies based in Seattle Software companies established in 2006 2006 establishments in Washington (state) American companies established in 2006 2024 mergers and acquisitions
Skytap
Technology
686
53,855,483
https://en.wikipedia.org/wiki/NGC%20447
NGC 447 is a spiral galaxy of type (R)SB(rs)0/a located in the constellation Pisces. It was first discovered on October 8, 1861 by Heinrich d'Arrest (and later listed as NGC 447); it was also seen in the 1890s by Edward Emerson Barnard (and later listed as IC 1656). It was described by Dreyer as "faint, pretty large, brighter middle, 11th magnitude star to northeast." References External links 0447 18611008 Pisces (constellation) Spiral galaxies 004550
NGC 447
Astronomy
115
43,335,980
https://en.wikipedia.org/wiki/Gene%20drive
A gene drive is a natural process and technology of genetic engineering that propagates a particular suite of genes throughout a population by altering the probability that a specific allele will be transmitted to offspring (instead of the Mendelian 50% probability). Gene drives can arise through a variety of mechanisms. They have been proposed to provide an effective means of genetically modifying specific populations and entire species. The technique can employ adding, deleting, disrupting, or modifying genes. Proposed applications include exterminating insects that carry pathogens (notably mosquitoes that transmit malaria, dengue, and zika pathogens), controlling invasive species, or eliminating herbicide or pesticide resistance. As with any potentially powerful technique, gene drives can be misused in a variety of ways or induce unintended consequences. For example, a gene drive intended to affect only a local population might spread across an entire species. Gene drives that eradicate populations of invasive species in their non-native habitats may have consequences for the population of the species as a whole, even in its native habitat. Any accidental return of individuals of the species to its original habitats, through natural migration, environmental disruption (storms, floods, etc.), accidental human transportation, or purposeful relocation, could unintentionally drive the species to extinction if the relocated individuals carried harmful gene drives. Gene drives can be built from many naturally occurring selfish genetic elements that use a variety of molecular mechanisms. These naturally occurring mechanisms induce similar segregation distortion in the wild, arising when alleles evolve molecular mechanisms that give them a transmission chance greater than the normal 50%. Most gene drives have been developed in insects, notably mosquitoes, as a way to control insect-borne pathogens. Recent developments designed gene drives directly in viruses, notably herpesviruses. These viral gene drives can propagate a modification into the population of viruses, and aim to reduce the infectivity of the virus. Mechanism In sexually-reproducing species, most genes are present in two copies (which can be the same or different alleles), either one of which has a 50% chance of passing to a descendant. By biasing the inheritance of particular altered genes, synthetic gene drives could more effectively spread alterations through a population. Typically, scientists insert the gene drive into an organism's DNA along with the CRISPR-Cas9 machinery. When the modified organism mates and its DNA mixes with that of its mate, the CRISPR-Cas9 tool cuts the partner's DNA at the same spot where the gene drive is located in the first organism. The cell repairs the cut DNA by copying the gene drive from the first organism into the corresponding spot in the DNA of the offspring. This means both copies of the gene (one from each parent) now contain the gene drive. Molecular mechanisms At the molecular level, an endonuclease gene drive works by cutting a chromosome at a specific site that does not encode the drive, inducing the cell to repair the damage by copying the drive sequence onto the damaged chromosome. The cell then has two copies of the drive sequence. The method derives from genome editing techniques and relies on homologous recombination. To achieve this behavior, endonuclease gene drives consist of two nested elements: An endonuclease that selectively cuts at the "target sequence", i.e. 
the rival allele. This can be one of: A homing endonuclease, which is what natural inteins use to propagate. They are, however, very difficult, if not impossible, to retarget. An RNA-guided endonuclease (e.g., Cas9 or Cas12a) and its guide RNA, which can be easily altered to set the target. Cas9 is the most promising technology identified in a 2014 review. Cas9 gene drives have been successfully tested in 2015, and Cas12a in 2023. Any other programmable endonuclease system, such as modular zinc finger nucleases and TALEN. One such drive has been successfully tested in fruit flies, but it turned out to be evolutionarily unstable due to the many-repeat nature of those endonucleases. A template sequence used by the DNA repair machinery after the target sequence is cut. To achieve the self-propagating nature of gene drives, this repair template contains at least the endonuclease sequence. Because the template must be used to repair a double-strand break at the cutting site, its sides are homologous to the sequences that are adjacent to the cutting site in the host genome. By targeting the gene drive to a gene coding sequence, this gene will be inactivated; additional sequences can be introduced in the gene drive to encode new functions. As a result, the gene drive insertion in the genome will re-occur in each organism that inherits one copy of the modification and one copy of the wild-type gene. If the gene drive is already present in the egg cell (e.g. when received from one parent), all the gametes of the individual will carry the gene drive (instead of 50% in the case of a normal gene). Spreading in the population Since it can never more than double in frequency with each generation, a gene drive introduced in a single individual typically requires dozens of generations to affect a substantial fraction of a population. Alternatively, releasing drive-containing organisms in sufficient numbers can affect the rest within a few generations; for instance, by introducing it in every thousandth individual, it takes only 12–15 generations to be present in all individuals. Whether a gene drive will ultimately become fixed in a population and at which speed depends on its effect on individual fitness, on the rate of allele conversion, and on the population structure. In a well mixed population and with realistic allele conversion frequencies (≈90%), population genetics predicts that gene drives get fixed for a selection coefficient smaller than 0.3; in other words, gene drives can be used to spread modifications as long as reproductive success is not reduced by more than 30%. This is in contrast with normal genes, which can only spread across large populations if they increase fitness. Gene drive in viruses Because the strategy usually relies on the simultaneous presence of an unmodified and a gene drive allele in the same cell nucleus, it had generally been assumed that a gene drive could only be engineered in sexually reproducing organisms, excluding bacteria and viruses. However, during a viral infection, viruses can accumulate hundreds or thousands of genome copies in infected cells. Cells are frequently co-infected by multiple virions and recombination between viral genomes is a well-known and widespread source of diversity for many viruses. In particular, herpesviruses are nuclear-replicating DNA viruses with large double-stranded DNA genomes and frequently undergo homologous recombination during their replication cycle. 
These properties have enabled the design of a gene drive strategy that doesn't involve sexual reproduction, instead relying on co-infection of a given cell by a naturally occurring and an engineered virus. Upon co-infection, the unmodified genome is cut and repaired by homologous recombination, producing new gene drive viruses that can progressively replace the naturally occurring population. In cell culture experiments, it was shown that a viral gene drive can spread into the viral population and strongly reduce the infectivity of the virus, which opens novel therapeutic strategies against herpesviruses. Technical limitations Because gene drives propagate by replacing other alleles that contain a cutting site and the corresponding homologies, their application has been mostly limited to sexually reproducing species (because they are diploid or polyploid and alleles are mixed at each generation). As a side effect, inbreeding could in principle be an escape mechanism, but the extent to which this can happen in practice is difficult to evaluate. Due to the number of generations required for a gene drive to affect an entire population, the time to universality varies according to the reproductive cycle of each species: it may require under a year for some invertebrates, but centuries for organisms with years-long intervals between birth and sexual maturity, such as humans. Hence this technology is of most use in fast-reproducing species. Effectiveness in real practice varies between techniques, especially by choice of germline promoter. Lin and Potter 2016 (a) discloses the promoter technology homology assisted CRISPR knockin (HACK) and Lin and Potter 2016 (b) demonstrates actual use, achieving a high proportion of altered progeny from each altered Drosophila mother. Issues Issues highlighted by researchers include: Mutations: A mutation could happen mid-drive, which has the potential to allow unwanted traits to "ride along". Escape: Cross-breeding or gene flow potentially allow a drive to move beyond its target population. Ecological impacts: Even when new traits' direct impact on a target is understood, the drive may have side effects on the surroundings. The Broad Institute of MIT and Harvard added gene drives to a list of uses of gene-editing technology it doesn't think companies should pursue. Bioethics concerns Gene drives affect all future generations and represent the possibility of a larger change in a living species than has been possible before. In December 2015, scientists of major world academies called for a moratorium on inheritable human genome edits that would affect the germline, including those related to CRISPR-Cas9 technologies, but supported continued basic research and gene editing that would not affect future generations. In February 2016, British scientists were given permission by regulators to genetically modify human embryos by using CRISPR-Cas9 and related techniques on condition that the embryos were destroyed in seven days. In June 2016, the US National Academies of Sciences, Engineering, and Medicine released a report on their "Recommendations for Responsible Conduct" of gene drives. A 2018 mathematical modelling studies suggest that despite preexisting and evolving gene drive resistance (caused by mutations at the cutting site), even an inefficient CRISPR "alteration-type" gene drive can achieve fixation in small populations. 
With a small but non-zero amount of gene flow among many local populations, the gene drive can escape and convert outside populations as well. Kevin M. Esvelt stated that an open conversation was needed around the safety of gene drives: "In our view, it is wise to assume that invasive and self-propagating gene drive systems are likely to spread to every population of the target species throughout the world. Accordingly, they should only be built to combat true plagues such as malaria, for which we have few adequate countermeasures and that offer a realistic path towards an international agreement to deploy among all affected nations.". He moved to an open model for his own research on using gene drives to eradicate Lyme disease in Nantucket and Martha's Vineyard. Esvelt and colleagues suggested that CRISPR could be used to save endangered wildlife from extinction. Esvelt later retracted his support for the idea, except for extremely hazardous populations such as malaria-carrying mosquitoes, and isolated islands that would prevent the drive from spreading beyond the target area. History Austin Burt, an evolutionary geneticist at Imperial College London, introduced the possibility of conducting gene drives based on natural homing endonuclease selfish genetic elements in 2003. Researchers had already shown that such genes could act selfishly to spread rapidly over successive generations. Burt suggested that gene drives might be used to prevent a mosquito population from transmitting the malaria parasite or to crash a mosquito population. Gene drives based on homing endonucleases have been demonstrated in the laboratory in transgenic populations of mosquitoes and fruit flies. However, homing endonucleases are sequence-specific. Altering their specificity to target other sequences of interest remains a major challenge. The possible applications of gene drive remained limited until the discovery of CRISPR and associated RNA-guided endonucleases such as Cas9 and Cas12a. In June 2014, the World Health Organization (WHO) Special Programme for Research and Training in Tropical Diseases issued guidelines for evaluating genetically modified mosquitoes. In 2013 the European Food Safety Authority issued a protocol for environmental assessments of all genetically modified organisms. Funding Target Malaria, a project funded by the Bill and Melinda Gates Foundation, invested $75 million in gene drive technology. The foundation originally estimated the technology to be ready for field use by 2029 somewhere in Africa. However, in 2016 Gates changed this estimate to some time within the following two years. In December 2017, documents released under the Freedom of Information Act showed that DARPA had invested $100 million in gene drive research. Control strategies Scientists have designed multiple strategies to maintain control over gene drives. In 2020, researchers reported the development of two active guide RNA-only elements that, according to their study, may enable halting or deleting gene drives introduced into populations in the wild with CRISPR-Cas9 gene editing. The paper's senior author cautions that the two neutralizing systems they demonstrated in cage trials "should not be used with a false sense of security for field-implemented gene drives". If elimination is not necessary, it may be desirable to intentionally preserve the target population at a lower level by using a less severe gene drive technology. 
This works by maintaining the semi-defective population indefinitely in the target area, thereby crowding out potential nearby, wild populations that would otherwise move back in to fill a void. CRISPR CRISPR is the leading genetic engineering method. In 2014, Esvelt and coworkers first suggested that CRISPR/Cas9 might be used to build gene drives. In 2015, researchers reported successful engineering of CRISPR-based gene drives in Saccharomyces, Drosophila, and mosquitoes. They reported efficient inheritance distortion over successive generations, with one study demonstrating the spread of a gene into laboratory populations. Drive-resistant alleles were expected to arise for each of the described gene drives; however, this could be delayed or prevented by targeting highly conserved sites at which resistance was expected to have a severe fitness cost. Because of CRISPR's targeting flexibility, gene drives could theoretically be used to engineer almost any trait. Unlike previous approaches, they could be tailored to block the evolution of drive resistance by targeting multiple sequences. CRISPR could also enable gene drive architectures that control rather than eliminate populations. In 2022, t-CRISPR, was used to pass the “t haplotype” gene to about 95% of offspring. The approach spreads faulty copies of a female fertility gene to offspring, rendering them infertile. The researchers reported that their models suggested that adding 256 altered animals to an island with a population of 200,000 mice would eliminate the population in about 25 years. The traditional approaches of poison and traps were not needed. Applications Gene drives have two main classes of application, which have implications of different significance: introduce a genetic modification in laboratory populations; once a strain or a line carrying the gene drive has been produced, the drive can be passed to any other line by mating. Here, the gene drive is used to much more easily achieve a task that could be accomplished with other techniques. introduce a genetic modification in wild populations. Gene drives constitute a major development that makes possible previously unattainable changes. Because of their unprecedented potential risk, safeguard mechanisms have been proposed and tested. Disease vector species One possible application is to genetically modify mosquitoes, mice, and other disease vectors so that they cannot transmit diseases, such as malaria and dengue fever in the case of mosquitoes, and tick-borne disease in the case of mice. Researchers have claimed that by applying the technique to 1% of the wild population of mosquitoes, that they could eradicate malaria within a year. Invasive species control A gene drive could be used to eliminate invasive species and has, for example, been proposed as a way to eliminate invasive species in New Zealand. Gene drives for biodiversity conservation purposes are being explored as part of The Genetic Biocontrol of Invasive Rodents (GBIRd) program because they offer the potential for reduced risk to non-target species and reduced costs when compared to traditional invasive species removal techniques. Given the risks of such an approach described below, the GBIRd partnership is committed to a deliberate, step-wise process that will only proceed with public alignment, as recommended by the world's leading gene drive researchers from the Australian and US National Academy of Sciences and many others. 
A wider outreach network for gene drive research exists to raise awareness of the value of gene drive research for the public good. Some scientists are concerned about the technique, fearing it could spread and wipe out species in native habitats. The gene could mutate, potentially causing unforeseen problems (as could any gene). Many non-native species can hybridize with native species, such that a gene drive afflicting a non-native plant or animal that hybridizes with a native species could doom the native species. Many non-native species have naturalized into their new environment so well that crops and/or native species have adapted to depend on them. Predator Free 2050 The Predator Free 2050 project is a New Zealand government program to eliminate eight invasive mammalian predator species (including rats, short-tailed weasels, and possums) from the country by 2050. The project was first announced in 2016 by New Zealand's prime minister John Key and in January 2017 it was announced that gene drives would be considered in the effort, but this has not yet been actualised. In 2017, one group in Australia and another in Texas released preliminary research into creating 'daughterless mice' using gene drives in mammals. California In 2017, scientists at the University of California, Riverside developed a gene drive to attack the invasive spotted-wing drosophila, a type of fruit fly native to Asia that is costing California's cherry farms $700 million per year because of its tail's razor-edged ovipositor that destroys unblemished fruit. The primary alternative control strategy involves the use of insecticides called pyrethroids that kill almost all insects that it contacts. Wild animal welfare The transhumanist philosopher David Pearce has advocated for using CRISPR-based gene drives to reduce the suffering of wild animals. Kevin M. Esvelt, an American biologist who has helped develop gene drive technology, has argued that there is a moral case for the elimination of the New World screwworm through such technologies because of the immense suffering that infested wild animals experience when they are eaten alive. See also Biological machines Cas9 Cas12a Meiotic drive Genome editing Population control Sterile insect technique Synthetic biology Target Malaria References Further reading Genetic engineering Genetics techniques Genome editing Sterilization (medicine)
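The figures quoted in the Spreading in the population section above — an allele frequency that can at most roughly double per generation, and roughly 12–15 generations to near-universality from a release of about one in every thousand individuals at realistic (around 90%) conversion rates — can be reproduced with a minimal deterministic single-locus model. The sketch below makes simplifying assumptions (random mating, no resistance alleles, an optional uniform fitness cost applied to all drive carriers) and is not a substitute for the published population-genetic models.

```python
def drive_frequency_trajectory(p0=0.001, conversion=0.9, fitness_cost=0.0,
                               threshold=0.99, max_gens=200):
    """Deterministic spread of a homing gene drive allele.

    p0: initial drive-allele frequency (releasing drive homozygotes as roughly
        1 in 1000 individuals gives p0 of about 0.001).
    conversion: probability that a heterozygote's wild-type allele is converted
        in the germline, so heterozygotes transmit the drive with probability
        (1 + conversion) / 2 instead of the Mendelian 1/2.
    fitness_cost: selection coefficient against drive carriers (a simplification).
    """
    p = p0
    trajectory = [p]
    for _ in range(max_gens):
        q = 1.0 - p
        dd, dw, ww = p * p, 2 * p * q, q * q        # Hardy-Weinberg genotypes
        w_drive, w_wild = 1.0 - fitness_cost, 1.0    # relative fitnesses
        mean_w = (dd + dw) * w_drive + ww * w_wild
        p = (dd * w_drive + dw * w_drive * (1 + conversion) / 2) / mean_w
        trajectory.append(p)
        if p >= threshold:
            break
    return trajectory

traj = drive_frequency_trajectory()
print(f"{len(traj) - 1} generations to reach 99% drive-allele frequency")
```

With the default parameters this reports about 14 generations, consistent with the 12–15 quoted above; note that how a fitness cost is modelled matters, so this particular simplification does not by itself reproduce the 0.3 selection-coefficient threshold mentioned earlier.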
Gene drive
Chemistry,Engineering,Biology
3,830
23,983,507
https://en.wikipedia.org/wiki/WSDMA
WSDMA (Wideband Space Division Multiple Access) is a high-bandwidth channel access method, developed for multi-transceiver systems such as active array antennas. WSDMA is a beamforming technique suitable for overlay on the latest air-interface protocols including WCDMA and OFDM. WSDMA-enabled systems can determine the angle of arrival (AoA) of received signals to spatially divide a cell sector into many sub-sectors. This spatial awareness provides the information necessary to maximise the Carrier to Noise+Interference Ratio (CNIR) link budget through a range of digital processing routines. WSDMA facilitates a flexible approach to how uplink and downlink beamforming is performed and is capable of spatially filtering known interference-generating locations. Key features Transmit and receive beam shaping and steering Multiple sub-sector path processing Spatial interference filtering Sector activity scan Characteristics and principles of operation Active Panel Antenna Calibration Active Panel Antenna systems, comprising a planar array of micro-radios and associated antenna elements, rely upon a comprehensive calibration scheme which is able to correct inter-path signal mismatches in phase, amplitude and latency. This facilitates precise control of the uplink and downlink RF beam pattern and avoids distortion effects that occur in the absence of calibration. Multiple Sub-Sector path processing By dividing the cell sector into a number of sub-sector beams, WSDMA provides the network with spatially filtered signals, maximising the link budget through improved antenna gain and interference mitigation. This allows mobile users in the cell to reduce their uplink transmission power, thereby further reducing interference and minimising both base station and UE power consumption. WSDMA provides simultaneous sector-wide and sub-sector beam processing to improve link performance in multipath environments. Sub-sector beam processing can also be adapted to changing user demographics within the cell sector. Downlink WSDMA Downlink WSDMA provides an optimised RF beam pattern, reducing interference in overlap regions of adjacent cell sectors. Long-term, statistics-based adjustment can optimise cell patterns depending on the user population density per spatial region served by the cell. See also WCDMA OFDM Smart Antennas Beamforming 3G MIMO References B. D. Van Veen and K. M. Buckley, "Beamforming: A versatile approach to spatial filtering", IEEE ASSP Magazine, Apr. 1988. M. R. Karim and M. Sarraf, W-CDMA and cdma2000 for 3G mobile networks. L. Lengier and R. Farrell, Amplitude and Phase Mismatch Calibration Testbed for 2x2 Tower-Top Antenna Array System, China-Ireland Conference on Information and Communications Technologies 2007. Antennas (radio) Signal processing
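The angle-of-arrival and beam-steering steps described above rest on standard narrowband array processing. The sketch below shows only the generic delay-and-sum building block for a uniform linear array; the element count, spacing, and scan grid are arbitrary choices here, and it does not represent WSDMA's proprietary sub-sector processing chain.

```python
import numpy as np

def steering_vector(theta_deg, n_elements, spacing_wavelengths=0.5):
    """Narrowband steering vector for a uniform linear array."""
    n = np.arange(n_elements)
    phase = 2 * np.pi * spacing_wavelengths * n * np.sin(np.radians(theta_deg))
    return np.exp(-1j * phase)

def estimate_aoa(snapshots, n_elements, scan_deg=np.arange(-90, 91)):
    """Conventional (delay-and-sum) beam scan over a sector: steer a beam to
    each candidate angle and return the angle with the most received power.
    snapshots has shape (n_elements, n_snapshots)."""
    power = [np.mean(np.abs(steering_vector(t, n_elements).conj() @ snapshots) ** 2)
             for t in scan_deg]
    return int(scan_deg[int(np.argmax(power))])

# Toy example: a single plane wave arriving from +20 degrees plus noise.
rng = np.random.default_rng(0)
true_aoa, n_el, n_snap = 20, 8, 200
signal = np.exp(1j * 2 * np.pi * rng.random(n_snap))
x = np.outer(steering_vector(true_aoa, n_el), signal)
x += 0.1 * (rng.standard_normal((n_el, n_snap)) + 1j * rng.standard_normal((n_el, n_snap)))
print(estimate_aoa(x, n_el))   # expected to print an angle close to 20
```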
WSDMA
Technology,Engineering
561
55,030,748
https://en.wikipedia.org/wiki/Yichus
Yichus ( yḥws), a Hebrew-based Yiddish word meaning "lineage". In some past and present Jewish communities, good —meaning descent from a family of high reputation—is necessary for a person to be considered as a potential marriage partner. Colloquially, the term refers to the chain of origin for a statement, creative work or object. Etymology first appeared in the Hebrew Bible in the Book of Ezra. It appears in and ), where the Hebrew root (yud-chet-sin) means "relation to" or "related to." In the later rabbinic Hebrew, the last letter of the root changed from sin () to samekh (), though the pronunciation and meaning remained unchanged. The latter spelling (yud-hey-samech) appears frequently in rabbinic literature. Although the word originated in Hebrew, the term is generally accepted as a Yiddish word that has flowed into modern English. The anglicized word has been transliterated as , , , and . History As far back as the Talmudic era, being son-in-law to someone widely respected was valued. Subsequently, even the of being son-in-law to the son-in-law and similar lineage links were valued. From the 14th century onwards, was an important concern for Eastern European Jews. Good could refer to Torah scholarship or wealth, while bad resulted from the suspicion of illegitimate descent. However, many rabbis disapproved of the concept of , instead insisting on judging individuals based on their personal merits. "In Lithuania some Jewish families hid their (lineage)". There was a tension between on one hand, and "meritocratic leadership based on scholarship" on the other. Judgments of became one of the mechanisms which determined social hierarchies. From the 19th century, the significance of declined as more marriages were based on romantic love, and reformers criticized for leading to inbreeding within small circles of "acceptable" families. However, nowadays is still an important qualification for marriage in charedi communities. The family trees, or pedigree charts, of Jewish families, listing genealogy and family history records, have been identified with several names, among which are yichus book, yichus brief, and yichus record. To help a child trace lineage, some families would write a "yichus book". The focus of a yichus brief (letter of relationship) is not as extensive as a yichus book whereas a yichus book or yichus record/"sefer yuchsin"/registry is community-oriented. Some families also kept a separate "Register of Circumcisions". Types Being the (מְחוּתָּן, father of one's child's spouse) of a notable person is sometimes considered important enough to include in a wedding invitation and in giving other credentials. Although primarily used for same generation relatives, it can be used beyond that generation. Being a (literally son-after-son, i.e. patrilineal) descendant is sometimes considered more notable than other forms of descent. For various reasons, surnames/family names were changed, and sometimes reverted. Thus, Jewish family names have not always been a reliable indicator of ancestry. For example: certain family names, such as Cohen, are not as strongly indicative of being a Kohen as Katz. References External links Dem Ganefs Yiches (The Thief's Lineage), a 19th century song parodying the concept of yichus Orthodox Judaism Hebrew words and phrases Yiddish words and phrases Jewish marital law Jewish life cycle Genealogy Family trees
Yichus
Biology
727
38,587,811
https://en.wikipedia.org/wiki/F.%20M.%20Devienne
Fernand Marcel Devienne (20 February 1913 – 19 April 2003) was a French physicist who developed research on molecular beams and spectrum analysis in rarefied gas environments. Life Devienne was born in Marseille on 20 February 1913. A doctor of physics, F. Marcel Devienne was director of a research laboratory (Laboratoire de physique moléculaire des hautes énergies de Peymeinade, now closed) in Peymeinade, Alpes-Maritimes. He also presided over yearly symposia on molecular beams. He was one of the first to study the energy properties of triatomic hydrogen molecules and triatomic deuterium. His research also sought to recreate interstellar-like conditions in order to experiment with the synthesis of biological compounds in such environments. Devienne also conducted extensive fast atom bombardment experiments in mass spectrometry. Devienne died on 19 April 2003 in Cannes. Honours F. M. Devienne was a chevalier of the Legion of Honour, a member of the New York Academy of Sciences, a Fellow of the International Symposium on Molecular Beams, and laureate of the 1997 Lazare-Carnot Prize and of the 1972 Gustave Ribaud Prize of the French Academy of Sciences. Works F. M. Devienne (ed.) Rarefied Gas Dynamics, Pergamon Press, 1960 F. M. Devienne Jets Moléculaires de Hautes Énergies, 1961 Resources F. M. Devienne facts on WorldCat F-Marcel Devienne facts on SciTech References French physicists Scientists from Marseille 1913 births 2003 deaths Molecular physics
F. M. Devienne
Physics,Chemistry
316
32,734,383
https://en.wikipedia.org/wiki/Golden%20age%20of%20Spanish%20software
The golden age of Spanish software () was the period between 1983 and 1992 when Spain became the second-largest producer of 8-bit computer entertainment software in Europe, behind only the United Kingdom. The disappearance of 8-bit technology and its replacement by 16-bit machines marked the end of this era, during which many software companies based in Spain launched their careers: Dinamic Software, Topo Soft, Opera Soft, Made in Spain and Zigurat, among others. The name Edad de oro del soft español was coined by specialized magazines of the time and is still used to refer to these years today. History Rise (1983–1985) In 1983, the first home personal computers started arriving in Spain, all of them 8-bit machines. The ZX Spectrum and Amstrad CPC were the best sellers in the country, followed by the MSX and Commodore 64, among others. These were simple machines with limited resources, and therefore easy to work with, so many young programmers all over the country started experimenting with them. The golden era of Spanish software officially began with the launch of Bugaboo, by PACO & PACO, the first Spanish video game to get massive international distribution. Shortly afterwards, Fred (Roland on the Ropes for Amstrad), by other authors, this time under the company Made in Spain, was another success, and the owners of Made in Spain decided to create Zigurat, a parent company that would at first be dedicated to distribution, turning Made in Spain into a production label for Zigurat, which at first also distributed titles from independent companies. Years later, Made in Spain and Zigurat would completely merge into a single producer and distributor company. Meanwhile, Dinamic Software took its first steps when it launched Yenght, a text adventure, for the ZX Spectrum. In the field of distribution, Erbe Software, the main Spanish software distributor for more than a decade, started its activity. In its first years, Erbe also tried to produce its own titles, but did not keep up this activity for long. Peak (1985–1989) In 1985, with the birth of the magazines Micromanía and Microhobby, video games gained massive popularity, and the remaining top companies of the era started their activity: Opera Soft in 1986 with Livingstone, I Presume, and Topo Soft in 1987 with Spirits, after its authors had programmed Las tres luces de Glaurung (Conquestador) for Erbe Software. The newly born Zigurat had its biggest successes with Sir Fred and El misterio del Nilo, an unofficial version of the movie The Jewel of the Nile, which caused problems internationally because one of the characters of the game was too similar to Michael Douglas, and the authors were forced to change the graphic design of this character in the international versions. Dinamic had its first huge successes with the Johny Jones trilogy, comprising Saimazoom, Babaliba, and above all Abu Simbel Profanation. After this, it would start another trilogy, the Moves trilogy, comprising Army Moves, Navy Moves, and much later Arctic Moves. Little by little, publishing titles starring famous sportsmen became popular. Dinamic was the first, with Basket Master starring Fernando Martín, and was followed by other companies, with titles starring Ángel Nieto, Carlos Sainz, Poli Díaz, Emilio Butragueño and others.
Meanwhile, Opera Soft published Goody, Sol Negro, Cosa Nostra and, above all, La Abadía del Crimen, based on Umberto Eco's The Name of the Rose and considered one of the best titles of the entire golden era of Spanish software, as well as one of the best ever released on the ZX Spectrum. Topo Soft, the last of the big companies, quickly rose to the top with titles such as Mad Mix Game and its sequel, and Survivor, among others. Meanwhile, Dinamic published a text adventure version of Don Quijote, and after that a section of Dinamic dedicated solely to text adventures became independent under the name Aventuras AD, publishing titles such as El Jabato, among others. Decline (1989–1992) In 1985 the 16/32-bit Amiga and Atari ST arrived, followed little by little by IBM PC compatibles and by consoles such as the SNES and Mega Drive. Although the Spanish companies made some small efforts to evolve, they never really switched to 16 bits and concentrated on the declining 8-bit market, which, almost extinct elsewhere in Europe, still had strength in Spain, mainly thanks to Erbe Software, the main distributor in the country, imposing a sales price of 875 pesetas (5.26 euros) for all its titles in an attempt to put an end to piracy. By this point, however, Spanish companies were running into serious financial problems, and one by one they launched their last titles. Topo Soft's founders left the company in 1989 to establish Animagic, whose main title was Mortadelo y Filemon II (Clever and Smart II). Born in bad times, it did not last for long. Topo Soft itself launched Lorna, Journey to the Center of the Earth and, above all, Gremlins 2, the first time a Spanish video game company managed to obtain an exclusive Europe-wide licence for a Hollywood movie. In 1991, aware of the importance of 16-bit machines, the company tried to make the switch with a project to create a desktop environment for MS-DOS, but the project did not succeed, and Topo went bankrupt in 1992. Meanwhile, Opera Soft, after publishing Gonzalezzz, Mot and Angel Nieto Pole 500, started to decline like the rest of the companies. In its last months it launched titles such as La Colmena and one dedicated to Barcelona 92, and disappeared shortly afterwards. Some of its members, such as Gonzalo Suárez Girard (Gonzo Suárez), would later move to Pyro Studios, launching titles such as Commandos: Behind Enemy Lines, among others. Aventuras AD, paradoxically, had its most successful period during this time of decline, launching most of its titles then, mainly the Ci-U-Than Legends trilogy, composed of La Diosa de Cozumel, Los Templos Sagrados and Chichén Itzá, and pioneering a predecessor of graphic adventures in Spain with La Aventura Espacial, a text adventure controlled through menus. Nevertheless, sales did not hold up for long, and Aventuras AD disappeared in 1992. Zigurat and Dinamic were the only companies that survived the golden era of Spanish software, although they had to transform themselves and abandon their previous activity. Zigurat, after the 8-bit market collapsed, started developing coin-op arcade games, an activity it maintained for many years. Dinamic Software, on the other hand, after publishing After the War, Narco Police and Risky Woods, went bankrupt and was refounded as Dinamic Multimedia in 1993, with PC Fútbol as its biggest success during the 1990s.
The bursting of the dot-com bubble finished off Dinamic Multimedia in 2001, but before this the original founders of the company, who had left it in 1999, had already founded FX Interactive, which is still well known today. 2010s resurgence The 1990s and 2000s have been described as "lost decades" for the Spanish video game industry. However, Alberto Flores de Rio wrote in the Encyclopedia of Video Games that the 2010s may mark a resurgence for Spanish-based game development. Akaoni Studio and MercurySteam started off the decade with financially successful games. Alejando Alcolea of Hobby Consolas called 2015 the possible start of a "second golden age of Spanish software". References Spanish software History of video games History of software Home computer software Science and technology in Spain Video gaming in Spain
Golden age of Spanish software
Technology
1,614
3,110,270
https://en.wikipedia.org/wiki/Beta%20Leporis
Beta Leporis (β Leporis, abbreviated Beta Lep, β Lep), formally named Nihal, is the second brightest star in the constellation of Lepus. Nomenclature Beta Leporis is the star's Bayer designation. It is also known by the traditional name Nihal, Arabic for "quenching their thirst". The occasional spelling Nibal appears to be due to a misreading. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Nihal for this star. In Chinese, (), meaning Toilet, refers to an asterism consisting of β Leporis, α Leporis, γ Leporis and δ Leporis. Consequently, the Chinese name for β Leporis itself is (), "the Second Star of Toilet". Properties Based on parallax measurements from the Hipparcos astrometry satellite, this star is located about from the Earth. It has an apparent visual magnitude of 2.84 and a stellar classification of G5 II. The mass of this star is 3.5 times the mass of the Sun and it is about 240 million years old, which is sufficient time for a star this massive to consume the hydrogen at its core and evolve away from the main sequence, becoming a G-type bright giant. The angular diameter of Beta Leporis, after correction for limb darkening, is . At the distance of this star, this yields a physical radius of 15.9 times the radius of the Sun. This is a double star system and may be a binary; the second star has a brightness of 7.34 mag. Using adaptive optics on the AEOS telescope at Haleakala Observatory, the pair was found to be separated by an angle of 2.58 arcseconds at a position angle of 1.4°. Component B has been observed to fluctuate in brightness and is catalogued as the suspected variable star NSV 2008. References Nihal Lepus (constellation) G-type bright giants 5 Leporis, Beta Leporis, 09 036079 025606 BD-20 1096 1829
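The physical radius quoted above follows from the small-angle relation R = (θ / 2) × d between a star's limb-darkened angular diameter θ and its distance d. A minimal sketch; the numbers in the example call are round illustrative values, not the measured diameter and distance (which are elided in the text above):

```python
import math

MAS_TO_RAD = math.radians(1.0 / 3.6e6)   # milliarcseconds to radians
PC_TO_M = 3.0857e16                      # parsec in metres
R_SUN_M = 6.957e8                        # solar radius in metres

def radius_in_solar_radii(angular_diameter_mas, distance_pc):
    """R = (theta / 2) * d for a small angular diameter theta."""
    theta_rad = angular_diameter_mas * MAS_TO_RAD
    return (theta_rad / 2) * distance_pc * PC_TO_M / R_SUN_M

# A 3 mas diameter seen from 50 pc corresponds to roughly 16 solar radii.
print(round(radius_in_solar_radii(3.0, 50.0), 1))
```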
Beta Leporis
Astronomy
484
2,583,687
https://en.wikipedia.org/wiki/Prefoldin%20subunit%203
Prefoldin subunit 3 (VBP-1), also Von Hippel–Lindau binding protein 1, is a prefoldin chaperone protein that binds to von Hippel–Lindau protein and transports it from perinuclear granules to the nucleus or cytoplasm inside the cell. It is also involved in transporting nascent polypeptides to cytosolic chaperonins for post-translational folding. VBP-1 is a 197–amino acid heterohexamer comprising two prefoldin-α and four prefoldin-β subunits, and is a member of the prefoldin-α subunit family. It is ubiquitously expressed in tissues, and is located in the cell nucleus and cytoplasm. The VBP1 gene is located at Xq28. Homologues are known to exist between human VBP-1 and proteins in mice, Drosophila and C. elegans. See also Von Hippel–Lindau tumor suppressor Von Hippel–Lindau disease Eugen von Hippel Arvid Lindau References External links Proteins
Prefoldin subunit 3
Chemistry
235
14,777,889
https://en.wikipedia.org/wiki/TBX3
T-box transcription factor TBX3 is a protein that in humans is encoded by the TBX3 gene. T-box 3 (TBX3) is a member of the T-box gene family of transcription factors which all share a highly conserved DNA binding domain known as the T-box. The T-box gene family consists of 17 members in mouse and humans that are grouped into five subfamilies, namely Brachyury (T), T-brain (Tbr1), TBX1, TBX2, and TBX6. Tbx3 is a member of the Tbx2 subfamily which includes Tbx2, Tbx4 and Tbx5. The human TBX3 gene maps to chromosome 12 at position 12q23-24.1 and consists of 7 exons which encodes a 723 amino acid protein (ENSEMBL assembly release GRCh38.p12). Transcript splicing Alternative processing and splicing results in at least 4 distinct TBX3 isoforms with TBX3 and TBX3+2a being the predominant isoforms. TBX3+2a results from alternative splicing of the second intron which leads to the addition of the +2a exon and consequently this isoform has an additional 20 amino acids within the T-box DNA binding domain. The functions of TBX3 and TBX3+2a may vary slightly across different cell types. Structure and function TBX3 has domains which are important for its transcription factor function which include a DNA-binding domain (DBD) also called the T-box, a nuclear localization signal, two repression domains (R2 and R1) and an activation domain (A). The T-box recognizes a palindromic DNA sequence (T(G/C)ACACCT AGGTGTGAAATT) known as the T-element, or half sites within this sequence called half T-elements, although it can also recognize variations within the consensus T-element sequences. While there are 29 predicted phosphorylation sites in the TBX3 protein only the SP190, SP692 and S720 have been fully characterized. The kinases involved are cyclin A-CDK2 at either SP190 or SP354, p38 mitogen-activated protein (MAP) kinase at SP692 in embryonic kidney cells and AKT3 at S720 in melanoma. These modifications act in a context dependent manner to promote TBX3 protein stability, nuclear localization and transcriptional activity. TBX3 can activate and/or repress its target genes by binding a T-element, or half T-element sites. Indeed, Tbx3 binds highly conserved T-elements to activate the promoters of Eomes, T, Sox17 and Gata6, which are factors essential for mesoderm differentiation and extra embryonic endodermal. Furthermore, in the cancer context, TBX3 directly represses the cell cycle regulators p19ARF/p14ARF, p21WAF1 and TBX2 as well as E-cadherin which encodes a cell adhesion molecule, to promote proliferation and migration. TBX3 directly represses a region of the PTEN promoter which lacks putative T-elements, but which forms an important regulatory unit for PTEN transcriptional activators, thus raising the possibility that TBX3 may also repress some of its target genes through interfering with transcriptional activators. The function of TBX3 as either a transcriptional repressor or transcriptional activator is, in part, modulated by protein co-factors. For example, it can interact with other transcription factors such as Nkx2-5, Msx 1/2 and Sox4 to assist it binding to its target genes to regulate heart development and it can interact with histone deacetylases (HDACs) 1, 2, 3 and 5 to repress p14ARF in breast cancer and with HDAC5 to repress E-cadherin to promote metastasis in hepatocellular carcinoma. Lastly, TBX3 can also co-operate with other factors to inhibit the process of mRNA splicing by directly binding RNAs containing the core motif of a T-element. 
Indeed, TBX3 interacts with Coactivator of AP1 and Estrogen Receptor (CAPERα) to repress the long non-coding RNA, Urothelial Cancer Associated 1 (UCA1), which leads to the bypass of senescence through the stabilization of p16INK4a mRNA. TBX3 has been functionally connected to the regulation of the Wnt signalling, thereby providing a novel explanation of how signalling pathways are orchestrated by tissue-specific transcription factors. Role in development During mouse embryonic development, Tbx3 is expressed in the inner cell mass of the blastocyst, in the extraembryonic mesoderm during gastrulation, and in the developing heart, limbs, musculoskeletal structures, mammary glands, nervous system, skin, eye, liver, pancreas, lungs and genitalia. Tbx3 null embryos show defects in, among other structures, the heart, mammary glands and limbs and they die in utero by embryonic day E16.5, most likely due to yolk sac and heart defects. These observations together with numerous other studies have illustrated that Tbx3 plays crucial roles in the development of the heart, mammary glands, limbs and lungs. TBX3 has been implicated in the regulation of Wnt target genes by tissue-specific crosstalk with the protein BCL9. Role in stem cells Embryonic stem cells (ESCs) and adult stem cells, are undifferentiated cells which when they divide have the potential to either remain a stem cell or to differentiate into other specialized cells. Adult stem cells are multipotent progenitor cells found in numerous adult tissues and, as part of the body repair system, they can develop into more than one cell type but they are more limited than ESCs. TBX3 is highly expressed in mouse ESCs (mESCs) and appears to have a dual role in these cells. Firstly it can enhance and maintain stem cell pluripotency by preventing differentiation and enhancing self-renewal and secondly it can maintain the pluripotency and differentiation potential of mESCS. Induced pluripotent stem cells (iPSCs) are ESC-like cells that can generate scalable quantities of relevant tissue and are of major interest for their application in personalized regenerative medicine, drug screening, and for our understanding of the cell signaling networks that regulate embryonic development and disease. In vitro studies have shown that Tbx3 is an important factor that, together with KLF4, SOX2, OCT4, Nanog, LIN-28A and C-MYC, can reprogram somatic cells to form iPS cells. Clinical significance TBX3 has been implicated in human diseases including the ulnar mammary syndrome, obesity, rheumatoid arthritis and cancer. In humans, heterozygous mutations of TBX3 lead to the autosomal dominant developmental disorder, ulnar mammary syndrome (UMS), which is characterized by a number of clinical features including mammary and apocrine gland hypoplasia, upper limb defects, malformations of areola, dental structures, heart and genitalia. Several UMS causing mutations in the TBX3 gene have been reported which include 5 nonsense, 8 frameshift (due to deletion, duplication and insertion), 3 missense and 2 splice site mutations. Missense mutations within the T-domain, or the loss of RD1 result in aberrant transcripts and truncated proteins of TBX3. These mutations lead to reduced DNA binding, transcriptional control and splicing regulation of TBX3 and the loss of function and are associated with the most severe phenotype of UMS. 
Tbx3 is expressed in heterogenous populations of hypothalamic arcuate nucleus neurons which control energy homeostasis by regulating appetite and energy expenditure and the ablation of TBX3 function in these neurons was shown to cause obesity in mouse models. Importantly, Tbx3 was shown to be a key player in driving the functional heterogeneity of hypothalamic neurons and this function was conserved in mice, drosophila and humans. Genome wide association studies also causally linked TBX3 to rheumatoid arthritis (RA) susceptibility and a recent study identified Tbx3 as a candidate gene for RA in collagen-induced arthritis (CIA) mouse models. The severity of RA directly correlated with TBX3 serum levels in the CIA mouse models. Furthermore, Tbx3 was shown to repress B lymphocyte proliferation and to activate the humoral immune response which is associated with chronic inflammation of the synovium leading to RA. Tbx3 may thus be an important player in regulating the immune system and could be used as a biomarker for the diagnosis of RA severity. TBX3 is overexpressed in a wide range of carcinomas (breast, pancreatic, melanoma, liver, lung, gastric, ovarian, bladder and head and neck cancers) and sarcomas (chondrosarcoma, fibrosarcoma, liposarcoma, rhabdomyosarcoma and synovial sarcoma) and there is compelling evidence that it contributes to several hallmarks of cancer. Indeed, TBX3 can bypass cellular senescence, apoptosis and anoikis as well as promote uncontrolled cell proliferation, tumor formation, angiogenesis and metastasis. Furthermore, TBX3 contributes to the expansion of cancer stem cells (CSCs) and is a key player in regulating pluripotency-related genes in these cells. CSCs contribute to tumor relapse and drug resistance and thus this may be another mechanism by which TBX3 contributes to cancer formation and tumor aggressiveness. The mechanisms by which TBX3 contributes to oncogenic processes involve, in part, its ability to inhibit the tumor suppressor pathways p14ARF/p53/p21WAF1/CIP1, p16INK4a/pRb, p57KIP2, PTEN, E-cadherin and activating the angiogenesis-associated genes FGF2 and VEGF-A and the EMT gene SNAI. Some of the oncogenic signaling molecules identified that upregulate TBX3 include TGF-β, BRAF-MAPK, c-Myc, AKT, and PLCᗴ/PKC. The function of TBX3 is also regulated by phosphorylation by the p38-MAPK, AKT3 and cyclin A/CDK2 and by protein co-factors, which include PRC2, Histone Deacetylases 1, 2, 3 and 5 and CAPERα. There is also evidence that TBX3 may function as a tumour suppressor. During oncogenesis, TBX3 is silenced by methylation in some cancers and this was associated with a poor overall survival, resistance to cancer therapy and a more invasive phenotype. In addition, TBX3 is overexpressed in fibrosarcoma cells and removing TBX3 from these cells led to a more aggressive phenotype. Notes References External links Transcription factors
TBX3
Chemistry,Biology
2,417
938,663
https://en.wikipedia.org/wiki/Multi-task%20learning
Multi-task learning (MTL) is a subfield of machine learning in which multiple learning tasks are solved at the same time, while exploiting commonalities and differences across tasks. This can result in improved learning efficiency and prediction accuracy for the task-specific models, when compared to training the models separately. Inherently, multi-task learning is a multi-objective optimization problem with trade-offs between different tasks. Early versions of MTL were called "hints". In a widely cited 1997 paper, Rich Caruana gave the following characterization: Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better. In the classification context, MTL aims to improve the performance of multiple classification tasks by learning them jointly. One example is a spam filter, which can be treated as distinct but related classification tasks across different users. To make this more concrete, consider that different people have different distributions of features which distinguish spam emails from legitimate ones; for example, an English speaker may find that all emails in Russian are spam, while a Russian speaker will not. Yet there is a definite commonality in this classification task across users; for example, one common feature might be text related to money transfer. Solving each user's spam classification problem jointly via MTL can let the solutions inform each other and improve performance. Further examples of settings for MTL include multiclass classification and multi-label classification. Multi-task learning works because the regularization induced by requiring an algorithm to perform well on a related task can be superior to regularization that prevents overfitting by penalizing all complexity uniformly. One situation where MTL may be particularly helpful is when the tasks share significant commonalities and are generally slightly undersampled. However, as discussed below, MTL has also been shown to be beneficial for learning unrelated tasks. Methods The key challenge in multi-task learning is how to combine learning signals from multiple tasks into a single model. This may strongly depend on how well the different tasks agree with, or contradict, each other. There are several ways to address this challenge: Task grouping and overlap Within the MTL paradigm, information can be shared across some or all of the tasks. Depending on the structure of task relatedness, one may want to share information selectively across the tasks. For example, tasks may be grouped or exist in a hierarchy, or be related according to some general metric. Suppose, as developed more formally below, that the parameter vector modeling each task is a linear combination of some underlying basis. Similarity in terms of this basis can indicate the relatedness of the tasks. For example, with sparsity, overlap of nonzero coefficients across tasks indicates commonality. A task grouping then corresponds to those tasks lying in a subspace generated by some subset of basis elements, where tasks in different groups may be disjoint or overlap arbitrarily in terms of their bases. Task relatedness can be imposed a priori or learned from the data.
Hierarchical task relatedness can also be exploited implicitly, without assuming a priori knowledge or learning task relations explicitly. For example, the explicit learning of sample relevance across tasks can be done to guarantee the effectiveness of joint learning across multiple domains. Exploiting unrelated tasks One can attempt to learn a group of principal tasks using a group of auxiliary tasks that are unrelated to the principal ones. In many applications, joint learning of unrelated tasks which use the same input data can be beneficial. The reason is that prior knowledge about task relatedness can lead to sparser and more informative representations for each task grouping, essentially by screening out idiosyncrasies of the data distribution. Novel methods that build on prior multitask methodology by favoring a shared low-dimensional representation within each task grouping have been proposed. The programmer can impose a penalty on tasks from different groups which encourages the two representations to be orthogonal. Experiments on synthetic and real data have indicated that incorporating unrelated tasks can result in significant improvements over standard multi-task learning methods. Transfer of knowledge Related to multi-task learning is the concept of knowledge transfer. Whereas traditional multi-task learning implies that a shared representation is developed concurrently across tasks, transfer of knowledge implies a sequentially shared representation. Large-scale machine learning projects such as the deep convolutional neural network GoogLeNet, an image-based object classifier, can develop robust representations which may be useful to further algorithms learning related tasks. For example, the pre-trained model can be used as a feature extractor to perform pre-processing for another learning algorithm. Alternatively, the pre-trained model can be used to initialize a model with a similar architecture which is then fine-tuned to learn a different classification task. Multiple non-stationary tasks Traditionally, multi-task learning and transfer of knowledge are applied to stationary learning settings. Their extension to non-stationary environments is termed Group online adaptive learning (GOAL). Sharing information could be particularly useful if learners operate in continuously changing environments, because a learner could benefit from the previous experience of another learner to quickly adapt to its new environment. Such group-adaptive learning has numerous applications, from predicting financial time series, through content recommendation systems, to visual understanding for adaptive autonomous agents. Multi-task optimization In some cases, the simultaneous training of seemingly related tasks may hinder performance compared to single-task models. Commonly, MTL models employ task-specific modules on top of a joint feature representation obtained using a shared module. Since this joint representation must capture useful features across all tasks, MTL may hinder individual task performance if the different tasks seek conflicting representations, i.e., the gradients of different tasks point in opposing directions or differ significantly in magnitude. This phenomenon is commonly referred to as negative transfer. To mitigate this issue, various MTL optimization methods have been proposed. Commonly, the per-task gradients are combined into a joint update direction through various aggregation algorithms or heuristics.
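As a concrete, illustrative example of this gradient-aggregation idea, the following Python sketch averages per-task gradients over a set of shared parameters and optionally applies a PCGrad-style projection that removes the component of one task's gradient that conflicts with another's. It is a minimal sketch under stated assumptions, not a reference implementation of any particular published method; all function and variable names are placeholders.

```python
import numpy as np

def aggregate_task_gradients(grads, conflict_aware=True):
    """Combine per-task gradients into one joint update direction.

    grads: list of 1-D numpy arrays, one gradient per task, all taken with
    respect to the same shared parameters. With conflict_aware=True, any
    component of a task gradient that points against another task's gradient
    is projected out (a PCGrad-style heuristic); otherwise the gradients are
    simply averaged.
    """
    grads = [g.astype(float) for g in grads]
    if conflict_aware:
        adjusted = []
        for i, g in enumerate(grads):
            g = g.copy()
            for j, h in enumerate(grads):
                if i == j:
                    continue
                dot = g @ h
                if dot < 0:  # conflicting directions: remove the conflicting component
                    g = g - dot / (h @ h) * h
            adjusted.append(g)
        grads = adjusted
    return np.mean(grads, axis=0)

# toy usage: two tasks with partially conflicting gradients on shared parameters
g1 = np.array([1.0, 1.0])
g2 = np.array([1.0, -0.5])
step = aggregate_task_gradients([g1, g2])
shared_params = np.zeros(2) - 0.1 * step  # one gradient-descent step
```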
Mathematics Reproducing Hilbert space of vector valued functions (RKHSvv) The MTL problem can be cast within the context of RKHSvv (a complete inner product space of vector-valued functions equipped with a reproducing kernel). In particular, recent focus has been on cases where task structure can be identified via a separable kernel, described below. The presentation here derives from Ciliberto et al., 2015. RKHSvv concepts Suppose the training data set is , with , , where indexes task, and . Let . In this setting there is a consistent input and output space and the same loss function for each task: . This results in the regularized machine learning problem: where is a vector valued reproducing kernel Hilbert space with functions having components . The reproducing kernel for the space of functions is a symmetric matrix-valued function , such that and the following reproducing property holds: The reproducing kernel gives rise to a representer theorem showing that any solution to equation has the form: Separable kernels The form of the kernel induces both the representation of the feature space and structures the output across tasks. A natural simplification is to choose a separable kernel, which factors into separate kernels on the input space and on the tasks . In this case the kernel relating scalar components and is given by . For vector valued functions we can write , where is a scalar reproducing kernel, and is a symmetric positive semi-definite matrix. Henceforth denote . This factorization property, separability, implies the input feature space representation does not vary by task. That is, there is no interaction between the input kernel and the task kernel. The structure on tasks is represented solely by . Methods for non-separable kernels is a current field of research. For the separable case, the representation theorem is reduced to . The model output on the training data is then , where is the empirical kernel matrix with entries , and is the matrix of rows . With the separable kernel, equation can be rewritten as where is a (weighted) average of applied entry-wise to and . (The weight is zero if is a missing observation). Note the second term in can be derived as follows: Known task structure Task structure representations There are three largely equivalent ways to represent task structure: through a regularizer; through an output metric, and through an output mapping. Task structure examples Via the regularizer formulation, one can represent a variety of task structures easily. Letting (where is the TxT identity matrix, and is the TxT matrix of ones) is equivalent to letting control the variance of tasks from their mean . For example, blood levels of some biomarker may be taken on patients at time points during the course of a day and interest may lie in regularizing the variance of the predictions across patients. Letting , where is equivalent to letting control the variance measured with respect to a group mean: . (Here the cardinality of group r, and is the indicator function). For example, people in different political parties (groups) might be regularized together with respect to predicting the favorability rating of a politician. Note that this penalty reduces to the first when all tasks are in the same group. Letting , where is the Laplacian for the graph with adjacency matrix M giving pairwise similarities of tasks. This is equivalent to giving a larger penalty to the distance separating tasks t and s when they are more similar (according to the weight ,) i.e. 
regularizes . All of the above choices of A also induce the additional regularization term which penalizes complexity in f more broadly. Learning tasks together with their structure Learning problem can be generalized to admit learning task matrix A as follows: Choice of must be designed to learn matrices A of a given type. See "Special cases" below. Optimization of Restricting to the case of convex losses and coercive penalties Ciliberto et al. have shown that although is not convex jointly in C and A, a related problem is jointly convex. Specifically on the convex set , the equivalent problem is convex with the same minimum value. And if is a minimizer for then is a minimizer for . may be solved by a barrier method on a closed set by introducing the following perturbation: The perturbation via the barrier forces the objective functions to be equal to on the boundary of . can be solved with a block coordinate descent method, alternating in C and A. This results in a sequence of minimizers in that converges to the solution in as , and hence gives the solution to . Special cases Spectral penalties - Dinnuzo et al suggested setting F as the Frobenius norm . They optimized directly using block coordinate descent, not accounting for difficulties at the boundary of . Clustered tasks learning - Jacob et al suggested to learn A in the setting where T tasks are organized in R disjoint clusters. In this case let be the matrix with . Setting , and , the task matrix can be parameterized as a function of : , with terms that penalize the average, between clusters variance and within clusters variance respectively of the task predictions. M is not convex, but there is a convex relaxation . In this formulation, . Generalizations Non-convex penalties - Penalties can be constructed such that A is constrained to be a graph Laplacian, or that A has low rank factorization. However these penalties are not convex, and the analysis of the barrier method proposed by Ciliberto et al. does not go through in these cases. Non-separable kernels - Separable kernels are limited, in particular they do not account for structures in the interaction space between the input and output domains jointly. Future work is needed to develop models for these kernels. Software package A Matlab package called Multi-Task Learning via StructurAl Regularization (MALSAR) implements the following multi-task learning algorithms: Mean-Regularized Multi-Task Learning, Multi-Task Learning with Joint Feature Selection, Robust Multi-Task Feature Learning, Trace-Norm Regularized Multi-Task Learning, Alternating Structural Optimization, Incoherent Low-Rank and Sparse Learning, Robust Low-Rank Multi-Task Learning, Clustered Multi-Task Learning, Multi-Task Learning with Graph Structures. Literature Multi-Target Prediction: A Unifying View on Problems and Methods Willem Waegeman, Krzysztof Dembczynski, Eyke Huellermeier https://arxiv.org/abs/1809.02352v1 See also Artificial intelligence Artificial neural network Automated machine learning (AutoML) Evolutionary computation Foundation model General game playing Human-based genetic algorithm Kernel methods for vector output Multitask optimization Robot learning Transfer learning James–Stein estimator References External links The Biosignals Intelligence Group at UIUC Washington University in St. 
Louis Department of Computer Science Software The Multi-Task Learning via Structural Regularization Package Online Multi-Task Learning Toolkit (OMT) A general-purpose online multi-task learning toolkit based on conditional random field models and stochastic gradient descent training (C#, .NET) Machine learning
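Because most of the inline formulas in the mathematics section above were lost in formatting, the following LaTeX block restates what we take to be the standard separable-kernel formulation being described (in the spirit of Ciliberto et al., 2015, and related work); it is offered as a plausible reconstruction rather than a verbatim restoration of the original equations.

```latex
% Regularized multi-task problem over a vector-valued RKHS \mathcal{H}
\min_{f \in \mathcal{H}} \;
  \sum_{t=1}^{T} \frac{1}{n_t} \sum_{i=1}^{n_t} L\bigl(y_{ti},\, f_t(x_{ti})\bigr)
  \;+\; \lambda \,\lVert f \rVert_{\mathcal{H}}^{2}

% Separable (matrix-valued) reproducing kernel: scalar input kernel k times task matrix A
\Gamma\bigl((x,t),\,(x',s)\bigr) \;=\; k(x, x')\, A_{ts},
  \qquad A \in S_{+}^{T} \ \text{(symmetric positive semi-definite)}

% Representer theorem under separability, with coefficient vectors c_i \in \mathbb{R}^{T}
f(x) \;=\; \sum_{i=1}^{n} k(x, x_i)\, A\, c_i
```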
Multi-task learning
Engineering
2,723
3,522,125
https://en.wikipedia.org/wiki/Locate%20%28Unix%29
locate is a Unix utility used to find files on filesystems. It searches through a prebuilt database of files generated by the updatedb command or by a daemon, and compressed using incremental encoding. It operates significantly faster than find, but requires regular updating of the database. This sacrifices overall efficiency (because of the regular interrogation of filesystems even when no user needs the information) and absolute accuracy (since the database does not update in real time) for significant speed improvements, particularly on very large filesystems. locate was first created in 1982. The BSD and GNU Findutils versions derive from the original implementation. Their primary database is world-readable, so the index is built as an unprivileged user. The locate command is also included in macOS. mlocate (Merging Locate) and the earlier slocate (Secure Locate) use a restricted-access database, only showing filenames accessible to the user. plocate uses posting lists. Like mlocate and slocate, it only shows files that find would list. Compared to mlocate, it is much faster, and its index is smaller. See also mdfind, a related command in macOS References External links GNU Findutils mlocate Variants: plocate - Variant faster than mlocate, with a smaller index. rlocate - Variant using a kernel module and daemon for continuous updates. KwickFind - KDE GUI frontend for locate Locate32 for Windows - GPL'ed graphical Windows variant (no longer available) GNU Project software Unix file system-related software Information retrieval systems
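The prebuilt-database idea described above can be illustrated with a short, purely conceptual Python sketch of the "updatedb then locate" workflow; it does not reflect the actual database format or code of any of the implementations named here, and the file names used are hypothetical.

```python
import os
import pickle

def updatedb(root, dbfile="filedb.pkl"):
    """Walk the filesystem once and store every path (the slow, periodic step)."""
    paths = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            paths.append(os.path.join(dirpath, name))
    with open(dbfile, "wb") as fh:
        pickle.dump(paths, fh)

def locate(pattern, dbfile="filedb.pkl"):
    """Answer queries from the prebuilt database (the fast step, possibly stale)."""
    with open(dbfile, "rb") as fh:
        paths = pickle.load(fh)
    return [p for p in paths if pattern in p]

# updatedb("/home/user")        # run periodically, e.g. from cron
# print(locate("report.txt"))   # near-instant substring search over the index
```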
Locate (Unix)
Technology
337
31,622,897
https://en.wikipedia.org/wiki/Balete%20tree
The balete tree (also known as balite or baliti) is any of several species of trees in the Philippines from the genus Ficus, which are generally referred to as balete in Filipino. A number of these are strangler figs, as they germinate upon other trees before entrapping their host tree entirely and eventually killing it. Consequently, the young plants are hemiepiphytes, i.e. epiphytes or air plants that grow several hanging roots which eventually touch the ground and take root. Some baletes produce natural rubber of an inferior quality. The Indian rubber tree, F. elastica, was formerly cultivated to some extent for rubber. Some of the species, like tangisang-bayawak (Ficus variegata), are large and could probably be utilized for matchwood. The wood of Ficus species is soft, light, and of inferior quality, and the trees usually have ill-formed, short boles. List of species which share the common name of Balete Ficus microcarpa F. arayatensis Warb. F. balete Merr. F. benjamina Linn. F. benjamina Linn. var. nuda Miq. F. clusioides Miq. F. concinna Miq. F. elastica Roxb. F. forstenii Miq. F. indica Linn. F. parvifolia Miq. F. payapa Blanco F. philipinenses Miq. F. retusa Linn. F. stipulosa Miq. Linn. F. variegata Blume Ornamental use Baletes are planted as graceful trees along avenues in Manila and other large cities in the Philippines, and they are also excellent as shade trees. Several species of the tree are also used for bonsai in the country. Baletes are used as houseplants; however, they are a source of indoor household allergens which may cause respiratory allergies. Philippine folklore In some areas of the country, some people believe that balete trees are dwelling places for supernatural beings (engkanto) like the diwata, kapre or tikbalang. In some places, sorcery rituals are known to be performed inside the chambers formed by the tree. Some superstitious folk also advise against bringing balete into the house as a decorative plant, as it allegedly invites ghosts. Balete Drive in New Manila, Quezon City, named after an enormous balete tree that used to stand in the middle of the street, is allegedly one of the most haunted places in the city. The tale of a white lady who appears at night hailing passing cars has circulated since the 1950s. Extreme examples The balete tree inside the OISCA Farm in Lumapao, Canlaon, Negros Oriental, is estimated by botanists from Silliman University to be around 1,328 years old. It would take at least 42 men to encircle its trunk. At the heart of this wide tree trunk is a cavity where lizards, bats and many insects have made it their home. With fireflies lighting it at night like a year-round Christmas tree, it is one of the city's main tourist attractions. A balete tree called the "Millennium Tree" in Barangay Quirino, Maria Aurora, Aurora province is claimed to be the largest of its kind in Asia. It is estimated to be more than 600 years old and tall with its roots about to in diameter. It is possible for adult people to squeeze into the center of its root network. A 400-year-old balete tree in Barangay Campalanas in Lazi, Siquijor is believed to be the oldest and the biggest in the province. The tree is noted for the spring that emanates from its base and flows straight into a man-made pool.
Gallery See also Bodhi tree, under which the Buddha attained enlightenment about 2,500 years ago Kodama, spirits in Japanese folklore Largest banyan trees Peepal tree, Ficus religiosa Sacred tree Tree spirit Yorishiro, a spirit-attracting object References External links "The Forests of the Philippines" by the Philippine Bureau of Forestry, from Google Books. Flora of the Philippines Trees of the Philippines Plant common names Austronesian spirituality Philippine folk culture
Balete tree
Biology
892
51,108,168
https://en.wikipedia.org/wiki/K2-72d
K2-72d is a small exoplanet orbiting around the red dwarf star K2-72 approximately 227.7 light-years away. K2-72d completes an orbit in 7.8 days, and it has a radius of only 73% of that of the Earth. Host star The planet orbits a (M-type) red dwarf star named K2-72, orbited by a total of four planets, of which K2-72e has the longest orbital period. The star has a mass of 0.27 and a radius of 0.33 . It has a temperature of 3360 K and its age is unknown. In comparison, the Sun is 4.6 billion years old and has a surface temperature of 5778 K. The star's apparent magnitude, or how bright it appears from Earth's perspective, is 15.309. Therefore, it is too dim to be seen with the naked eye and can only be observed with a telescope. Discovery The planet, along with the other three planets in the K2-72 system, were announced in mid-July 2016 as part of the new results from the second mission of the Kepler spacecraft. References Exoplanets discovered in 2016 Transiting exoplanets 72 Aquarius (constellation)
K2-72d
Astronomy
263
2,421,175
https://en.wikipedia.org/wiki/Medoid
Medoids are representative objects of a data set or a cluster within a data set whose sum of dissimilarities to all the objects in the cluster is minimal. Medoids are similar in concept to means or centroids, but medoids are always restricted to be members of the data set. Medoids are most commonly used on data when a mean or centroid cannot be defined, such as graphs. They are also used in contexts where the centroid is not representative of the dataset like in images, 3-D trajectories and gene expression (where while the data is sparse the medoid need not be). These are also of interest while wanting to find a representative using some distance other than squared euclidean distance (for instance in movie-ratings). For some data sets there may be more than one medoid, as with medians. A common application of the medoid is the k-medoids clustering algorithm, which is similar to the k-means algorithm but works when a mean or centroid is not definable. This algorithm basically works as follows. First, a set of medoids is chosen at random. Second, the distances to the other points are computed. Third, data are clustered according to the medoid they are most similar to. Fourth, the medoid set is optimized via an iterative process. Note that a medoid is not equivalent to a median, a geometric median, or centroid. A median is only defined on 1-dimensional data, and it only minimizes dissimilarity to other points for metrics induced by a norm (such as the Manhattan distance or Euclidean distance). A geometric median is defined in any dimension, but unlike a medoid, it is not necessarily a point from within the original dataset. Definition Let be a set of points in a space with a distance function d. Medoid is defined as Clustering with medoids Medoids are a popular replacement for the cluster mean when the distance function is not (squared) Euclidean distance, or not even a metric (as the medoid does not require the triangle inequality). When partitioning the data set into clusters, the medoid of each cluster can be used as a representative of each cluster. Clustering algorithms based on the idea of medoids include: Partitioning Around Medoids (PAM), the standard k-medoids algorithm Hierarchical Clustering Around Medoids (HACAM), which uses medoids in hierarchical clustering Algorithms to compute the medoid of a set From the definition above, it is clear that the medoid of a set can be computed after computing all pairwise distances between points in the ensemble. This would take distance evaluations (with ). In the worst case, one can not compute the medoid with fewer distance evaluations. However, there are many approaches that allow us to compute medoids either exactly or approximately in sub-quadratic time under different statistical models. If the points lie on the real line, computing the medoid reduces to computing the median which can be done in by Quick-select algorithm of Hoare. However, in higher dimensional real spaces, no linear-time algorithm is known. RAND is an algorithm that estimates the average distance of each point to all the other points by sampling a random subset of other points. It takes a total of distance computations to approximate the medoid within a factor of with high probability, where is the maximum distance between two points in the ensemble. Note that RAND is an approximation algorithm, and moreover may not be known apriori. 
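Before turning to the more refined estimators below, a minimal Python sketch may help make the definition concrete: the exhaustive mode evaluates all pairwise distances, matching the O(n²) definition, while the optional sampling mode estimates each point's average distance from a random subset of the others, loosely in the spirit of sampling-based estimators such as RAND. Function names, defaults and the toy data are illustrative only.

```python
import numpy as np

def medoid(points, metric=None, sample_size=None, rng=None):
    """Return the index of the medoid of `points`.

    With sample_size=None, all pairwise distances are evaluated (the O(n^2)
    definition). If sample_size is given, each point's average distance is
    estimated from a random subset of the other points.
    """
    if metric is None:
        metric = lambda a, b: np.linalg.norm(a - b)   # Euclidean by default
    rng = rng or np.random.default_rng()
    n = len(points)
    best_idx, best_cost = -1, float("inf")
    for i in range(n):
        others = [j for j in range(n) if j != i]
        if sample_size is not None and sample_size < len(others):
            others = rng.choice(others, size=sample_size, replace=False)
        cost = sum(metric(points[i], points[j]) for j in others) / len(others)
        if cost < best_cost:
            best_idx, best_cost = i, cost
    return best_idx

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.9, 0.1], [5.0, 5.0]])
print(medoid(pts))                  # exact medoid index
print(medoid(pts, sample_size=2))   # sampled estimate of the medoid
```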
RAND was leveraged by TOPRANK which uses the estimates obtained by RAND to focus on a small subset of candidate points, evaluates the average distance of these points exactly, and picks the minimum of those. TOPRANK needs distance computations to find the exact medoid with high probability under a distributional assumption on the average distances. trimed presents an algorithm to find the medoid with distance evaluations under a distributional assumption on the points. The algorithm uses the triangle inequality to cut down the search space. Meddit leverages a connection of the medoid computation with multi-armed bandits and uses an upper-Confidence-bound type of algorithm to get an algorithm which takes distance evaluations under statistical assumptions on the points. Correlated Sequential Halving also leverages multi-armed bandit techniques, improving upon Meddit. By exploiting the correlation structure in the problem, the algorithm is able to provably yield drastic improvement (usually around 1-2 orders of magnitude) in both number of distance computations needed and wall clock time. Implementations An implementation of RAND, TOPRANK, and trimed can be found here. An implementation of Meddit can be found here and here. An implementation of Correlated Sequential Halving can be found here. Medoids in text and natural language processing (NLP) Medoids can be applied to various text and NLP tasks to improve the efficiency and accuracy of analyses. By clustering text data based on similarity, medoids can help identify representative examples within the dataset, leading to better understanding and interpretation of the data. Text clustering Text clustering is the process of grouping similar text or documents together based on their content. Medoid-based clustering algorithms can be employed to partition large amounts of text into clusters, with each cluster represented by a medoid document. This technique helps in organizing, summarizing, and retrieving information from large collections of documents, such as in search engines, social media analytics and recommendation systems. Text summarization Text summarization aims to produce a concise and coherent summary of a larger text by extracting the most important and relevant information. Medoid-based clustering can be used to identify the most representative sentences in a document or a group of documents, which can then be combined to create a summary. This approach is especially useful for extractive summarization tasks, where the goal is to generate a summary by selecting the most relevant sentences from the original text. Sentiment analysis Sentiment analysis involves determining the sentiment or emotion expressed in a piece of text, such as positive, negative, or neutral. Medoid-based clustering can be applied to group text data based on similar sentiment patterns. By analyzing the medoid of each cluster, researchers can gain insights into the predominant sentiment of the cluster, helping in tasks such as opinion mining, customer feedback analysis, and social media monitoring. Topic modeling Topic modeling is a technique used to discover abstract topics that occur in a collection of documents. Medoid-based clustering can be applied to group documents with similar themes or topics. By analyzing the medoids of these clusters, researchers can gain an understanding of the underlying topics in the text corpus, facilitating tasks such as document categorization, trend analysis, and content recommendation. 
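To make the text-clustering use concrete, here is a minimal, hypothetical Python sketch of the assignment step: documents represented as term-frequency vectors are grouped around medoid documents under cosine dissimilarity (one of the similarity measures discussed in the following section). The vectors, function names and chosen medoid indices are all illustrative, not drawn from any specific system.

```python
import numpy as np

def cosine_dissimilarity(a, b):
    """1 - cosine similarity between two term-frequency vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 1.0 - (a @ b) / denom if denom else 1.0

def assign_to_medoids(doc_vectors, medoid_indices):
    """Assign each document to the medoid document it is most similar to."""
    labels = []
    for v in doc_vectors:
        dists = [cosine_dissimilarity(v, doc_vectors[m]) for m in medoid_indices]
        labels.append(int(np.argmin(dists)))
    return labels

# toy term-frequency vectors for four "documents"
docs = np.array([
    [3, 0, 1],   # mostly topic A
    [2, 1, 0],   # mostly topic A
    [0, 4, 2],   # mostly topic B
    [0, 3, 3],   # mostly topic B
])
labels = assign_to_medoids(docs, medoid_indices=[0, 2])
print(labels)  # each document labelled by the medoid it resembles most
```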
Techniques for measuring text similarity in medoid-based clustering When applying medoid-based clustering to text data, it is essential to choose an appropriate similarity measure to compare documents effectively. Each technique has its advantages and limitations, and the choice of the similarity measure should be based on the specific requirements and characteristics of the text data being analyzed. The following are common techniques for measuring text similarity in medoid-based clustering: Cosine similarity Cosine similarity is a widely used measure to compare the similarity between two pieces of text. It calculates the cosine of the angle between two document vectors in a high-dimensional space. Cosine similarity ranges between -1 and 1, where a value closer to 1 indicates higher similarity, and a value closer to -1 indicates lower similarity. By visualizing two lines originating from the origin and extending to the respective points of interest, and then measuring the angle between these lines, one can determine the similarity between the associated points. Cosine similarity is less affected by document length, so it may be better at producing medoids that are representative of the content of a cluster instead of the length. Jaccard similarity Jaccard similarity, also known as the Jaccard coefficient, measures the similarity between two sets by comparing the ratio of their intersection to their union. In the context of text data, each document is represented as a set of words, and the Jaccard similarity is computed based on the common words between the two sets. The Jaccard similarity ranges between 0 and 1, where a higher value indicates a higher degree of similarity between the documents. Euclidean distance Euclidean distance is a standard distance metric used to measure the dissimilarity between two points in a multi-dimensional space. In the context of text data, documents are often represented as high-dimensional vectors, such as TF vectors, and the Euclidean distance can be used to measure the dissimilarity between them. A lower Euclidean distance indicates a higher degree of similarity between the documents. Using Euclidean distance may result in medoids that are more representative of the length of a document. Edit distance Edit distance, also known as the Levenshtein distance, measures the similarity between two strings by calculating the minimum number of operations (insertions, deletions, or substitutions) required to transform one string into the other. In the context of text data, edit distance can be used to compare the similarity between short text documents or individual words. A lower edit distance indicates a higher degree of similarity between the strings. Medoid applications in large language models Medoids for analyzing large language model embeddings Medoids can be employed to analyze and understand the vector space representations generated by large language models (LLMs), such as BERT, GPT, or RoBERTa. By applying medoid-based clustering on the embeddings produced by these models for words, phrases, or sentences, researchers can explore the semantic relationships captured by LLMs. This approach can help identify clusters of semantically similar entities, providing insights into the structure and organization of the high-dimensional embedding spaces generated by these models. Medoids for data selection and active learning Active learning involves choosing data points from a training pool that will maximize model performance. 
Medoids can play a crucial role in data selection and active learning with LLMs. Medoid-based clustering can be used to identify representative and diverse samples from a large text dataset, which can then be employed to fine-tune LLMs more efficiently or to create better training sets. By selecting medoids as training examples, researchers may obtain a more balanced and informative training set, potentially improving the generalization and robustness of the fine-tuned models. Medoids for model interpretability and safety Applying medoids in the context of LLMs can contribute to improving model interpretability. By clustering the embeddings generated by LLMs and selecting medoids as representatives of each cluster, researchers can provide a more interpretable summary of the model's behavior. This approach can help in understanding the model's decision-making process, identifying potential biases, and uncovering the underlying structure of the LLM-generated embeddings. As the discussion around the interpretability and safety of LLMs continues to grow, medoids may serve as a valuable tool for achieving these goals. Real-world applications As a versatile clustering method, medoids can be applied to a variety of real-world problems in numerous fields, ranging from biology and medicine to advertising, marketing and social networks. Their ability to handle complex data sets makes them a powerful tool in modern data analytics. Gene expression analysis In gene expression analysis, researchers use technologies such as microarrays and RNA sequencing to measure the expression levels of numerous genes in biological samples, which results in multi-dimensional data that can be complex and difficult to analyze. Medoids offer a potential solution by clustering genes based on their expression profiles, enabling researchers to discover co-expressed groups of genes that could provide valuable insights into the molecular mechanisms of biological processes and diseases. Social network analysis In social network analysis, medoids can be an effective tool for recognizing central or influential nodes in a network. Researchers can cluster nodes based on their connectivity patterns and identify the nodes most likely to have a substantial impact on the network's function and structure. One popular approach to using medoids in social network analysis is to compute a distance or similarity metric between pairs of nodes based on their properties. Market segmentation Medoids can also be employed for market segmentation, an analytical procedure that groups customers based on their purchasing behavior, demographic traits, and other attributes. Clustering customers into segments using medoids allows companies to tailor their marketing strategies to the needs of each group. The medoids serve as representative points within each cluster, encapsulating the primary characteristics of the customers in that group. The Within-Groups Sum of Squared Error (WGSS) is a formula employed in market segmentation that aims to quantify the concentration of squared errors within clusters.
It seeks to capture the distribution of errors within groups by squaring them and aggregating the results.The WGSS metric quantifies the cohesiveness of samples within clusters, indicating tighter clusters with lower WGSS values and a correspondingly superior clustering effect. The formula for WGSS is: Where is the average distance of samples within the k-th cluster and is the number of samples in the k-th cluster. Anomaly detection Medoids can also be instrumental in identifying anomalies, and one efficient method is through cluster-based anomaly detection. They can be used to discover clusters of data points that deviate significantly from the rest of the data. By clustering the data into groups using medoids and comparing the properties of every cluster to the data, researchers can clearly detect clusters that are anomalous. Visualization of the medoid-based clustering process Purpose Visualization of medoid-based clustering can be helpful when trying to understand how medoid-based clustering work. Studies have shown that people learn better with visual information. In medoid-based clustering, the medoid is the center of the cluster. This is different from k-means clustering, where the center isn't a real data point, but instead can lie between data points. We use the medoid to group “clusters” of data, which is obtained by finding the element with minimal average dissimilarity to all other objects in the cluster. Although the visualization example used utilizes k-medoids clustering, the visualization can be applied to k-means clustering as well by swapping out average dissimilarity with the mean of the dataset being used. Visualization using one-dimensional data Distance matrix A distance matrix is required for medoid-based clustering, which is generated using Jaccard Dissimilarity (which is 1 - the Jaccard Index). This distance matrix is used to calculate the distance between two points on a one-dimensional graph. The above image shows an example of a Jaccard Dissimilarity graph. Clustering Step 1 Medoid-based clustering is used to find clusters within a dataset. An initial one-dimensional dataset which contains clusters that need to be discovered is used for the process of medoid-based clustering. In the image below, there are twelve different objects in the dataset at varying x-positions. Step 2 K random points are chosen to be the initial centers. The value chosen for K is known as the K-value. In the image below, 3 has been chosen as the K-value. The process for finding the optimal K-value will be discussed in step 7. Step 3 Each non-center object is assigned to its nearest center. This is done using a distance matrix. The lower the dissimilarity, the closer the points are. In the image below, there are 5 objects in cluster 1, 3 in cluster 2, and 4 in cluster 3. Step 4 The new center for each cluster is found by finding the object whose average dissimilarity to all other objects in the cluster is minimal. The center selected during this step is called the medoid. The image below shows the results of medoid selection. Step 5 Steps 3-4 are repeated until the centers no longer move, as in the images below. Step 6 The final clusters are obtained when the centers no longer move between steps. The image below shows what a final cluster can look like. Step 7 The variation is added up within each cluster to see how accurate the centers are. By running this test with different K-values, an "elbow" of the variation graph can be acquired, where the graph's variation levels out. 
The "elbow" of the graph is the optimal K-value for the dataset. Medoids in high dimensions A common problem with k-medoids clustering and other medoid-based clustering algorithms is the "curse of dimensionality", in which the data points contain too many dimensions or features. As dimensions are added, the data become increasingly sparse and it becomes difficult to characterize clusters by Euclidean distance alone. As a result, distance-based similarity measures converge to a constant, and we are left with a characterization of distance between points that may not reflect our data set in meaningful ways. One way to mitigate the effects of the curse of dimensionality is to use spectral clustering. Spectral clustering achieves a more appropriate analysis by reducing the dimensionality of the data using principal component analysis, projecting the data points into the lower-dimensional subspace, and then running the chosen clustering algorithm as before. As with any dimensionality reduction, however, information is lost, so how much reduction is necessary must be weighed in advance against how much information can be sacrificed. High dimensionality does not only affect distance metrics, however: time complexity also increases with the number of features. k-medoids is sensitive to the initial choice of medoids, as they are usually selected randomly. Depending on how the medoids are initialized, k-medoids may converge to different local optima, resulting in different clusters and quality measures, so k-medoids might need to be run multiple times with different initializations, resulting in a much higher run time. One way to counterbalance this is to use k-medoids++, an alternative to k-medoids analogous to its k-means counterpart, k-means++, which chooses the initial medoids based on a probability distribution, as a sort of informed randomness or educated guess. If the initial medoids are chosen this way, the result is an improved runtime and better clustering performance. The k-medoids++ algorithm is described as follows: The initial medoid is chosen randomly among all of the spatial points. For each spatial point p, compute the distance between p and the nearest already-chosen medoid, termed D(p), and sum all of these distances into S. The next medoid is determined using a weighted probability distribution: a random number R between zero and the summed distance S is chosen, and the corresponding spatial point is the next medoid. Steps (2) and (3) are repeated until k medoids have been chosen. Once these initial medoids are selected, the normal variant of k-medoids can be run. References Cluster analysis Means External links StatQuest k-means video used for visual reference in #Visualization_of_the_medoid-based_clustering_process section
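The k-medoids++ initialization steps listed above can be sketched in a few lines of Python; this is an illustrative reading of those steps rather than a canonical implementation, and the helper names are hypothetical.

```python
import numpy as np

def kmedoids_pp_init(points, k, rng=None):
    """Pick k initial medoid indices with a k-medoids++-style rule.

    The first medoid is chosen uniformly at random; each subsequent medoid is
    drawn with probability proportional to a point's distance D(p) to its
    nearest already-chosen medoid, as in the steps described above.
    """
    rng = rng or np.random.default_rng()
    n = len(points)
    medoids = [int(rng.integers(n))]
    while len(medoids) < k:
        # D(p): distance from each point to its nearest chosen medoid
        d = np.array([
            min(np.linalg.norm(points[p] - points[m]) for m in medoids)
            for p in range(n)
        ])
        s = d.sum()
        if s == 0:  # degenerate case: every point coincides with a medoid
            medoids.append(int(rng.integers(n)))
            continue
        r = rng.uniform(0, s)                       # random number R in [0, S)
        next_idx = int(np.searchsorted(np.cumsum(d), r))
        medoids.append(next_idx)
    return medoids

pts = np.random.default_rng(0).normal(size=(20, 2))
print(kmedoids_pp_init(pts, k=3))  # three initial medoid indices
```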
Medoid
Physics,Mathematics
4,004
8,899,175
https://en.wikipedia.org/wiki/The%20COED%20Project
The COED Project, or the COmmunications and EDiting Project, was an innovative software project created by the Computer Division of NOAA, US Department of Commerce in Boulder, Colorado in the 1970s. This project was designed, purchased and implemented by the in-house computing staff rather than any official organization. Intent The computer division previously had a history of frequently replacing its mainframe computers. Starting with a CDC 1604, then a CDC 3600, a couple of CDC 3800s, and finally a CDC 6600. The department also had an XDS 940 timesharing system which would support up to 32 users on dial-up modems. Due to rapidly changing requirements for computer resources, it was expected that new systems would be installed on a regular basis, and the resultant strain on the users to adapt to each new system was perceived to be excessive. The COED project was the result of a study group convened to solve this problem. The project was implemented by the computer specialists who were also responsible for the purchase, installation, and maintenance of all the computers in the division. COED was designed and implemented in long hours of overtime. The data communications aspect of the system was fully implemented and resulted in greatly improved access to the XDS 940 and CDC 6600 systems. It was also used as the front end of the - Free University of Amsterdam's SARA system for many years. Design A complete networked system was a pair of Modcomps: one II handled up to 256 communication ports, and one IV handled the disks and file editing. The system was designed to be fully redundant. If one pair failed the other automatically took over. All computer systems in the network were kept time-synchronized so that all file dates/times would be accurate - synchronized to the National Bureau of Standards atomic clock, housed in the same building. Another innovation was asynchronous dynamic speed recognition. After a terminal connected to a port, the user would type a Carriage Return character, and the software would detect the speed of the terminal (in the range of 110 to 9600 bit/s) and present a log in message to the user at the appropriate speed. Due to limitations of the operating systems which came with the Modcomps, new Operating systems had to be created, CORTEX for the Modcomp II's and IV BRAIN for the Modcomp IV's. History (Dates are approximate - from memory) 1970: First discussions of new communications system for XDS 940 1971: The COED Project was created 1972: The system was designed, funding was approved, a Request for Quote for the hardware was issued and executed 1973: The hardware components—2 Modcomp IV's and 2 Modcomp II's were delivered and installed and implementation began 1976: (April 8) First communication through COED to XDS 940 worked! 1979: project terminated Staff Those involved in the original design meetings were: Ralph Slutz, George Sugar, Jim Winkelman and most of the COED implementors. Support was also provided by Tom Gray. The COED implementors were: W. Schyler (Sky) Stevenson, Project Manager and operating system implementer Howard Bussey, Mark Emmer, David Lillie, and Vern Schryver. The 6600 interface to COED was implemented by Anthony Brittain, Dan Dechatelets and Kathy Browne. External links Author - David Lillie's homepage Computer systems Software projects 1970s establishments in Colorado
The COED Project
Technology,Engineering
705
10,114,984
https://en.wikipedia.org/wiki/Daigremontianin
Daigremontianin is a bufadienolide. Bufadienolides are steroids and cardiac glycoside aglycones (meaning that they bind with carbohydrates to form cardiac glycosides) that are similar to cardenolides, differing only in the structure of the C-17 substituent on the D ring. This chemical has been found to be toxic in experiments on mice. It is one of five bufadienolides that have been isolated from Kalanchoe daigremontiana. Toxicity Crassulaceans are one of the prime sources of bufadienolide cardiac glycosides (including daigremontianin) responsible for an estimated 33% of cattle mortalities related to plant poisoning in South Africa. Crassulacean bufadienolides cause cardiac poisoning, but repeated small doses cause a condition called cotyledonosis, an intoxication affecting nervous and muscular systems of small animals, particularly, sheep in the Karoo area of South Africa. References External links Canadian Biodiversity Information Facility Bufanolides Diols Aldehydes Ketones
Daigremontianin
Chemistry
240
35,990,296
https://en.wikipedia.org/wiki/Amylostereum%20ferreum
Amylostereum ferreum is a species of crust fungus in the family Amylostereaceae. References External links Russulales Fungi described in 1869 Taxa named by Miles Joseph Berkeley Fungus species
Amylostereum ferreum
Biology
42
19,757,699
https://en.wikipedia.org/wiki/Local%20Void
The Local Void is a vast, empty region of space, lying adjacent to the Local Group. Discovered by Brent Tully and Rick Fisher in 1987, the Local Void is now known to be composed of three separate sectors, separated by bridges of "wispy filaments". The precise extent of the void is unknown, but it is at least 45 Mpc (150 million light-years) across, and possibly 150 to 300 Mpc. The Local Void appears to have significantly fewer galaxies than expected from standard cosmology. Location and dimensions Voids are affected by the way gravity causes matter in the universe to "clump together", herding galaxies into clusters and chains, which are separated by regions mostly devoid of galaxies, yet the exact mechanisms are subject to scientific debate. Astronomers have previously noticed that the Milky Way sits in a large, flat array of galaxies called the Local Sheet, which bounds the Local Void. The Local Void extends approximately , beginning at the edge of the Local Group. It is believed that the distance from Earth to the centre of the Local Void must be at least . The size of the Local Void was calculated due to an isolated dwarf galaxy known as ESO 461-36 located inside it. The bigger and emptier the void, the weaker its gravity, and the faster the dwarf should be fleeing the void towards concentrations of matter, yet discrepancies give room for competing theories. Dark energy has been suggested as one alternative explanation for the speedy expulsion of the dwarf galaxy. An earlier "Hubble Bubble" model, based on measured velocities of Type 1a supernovae, proposed a relative void centred on the Milky Way. Recent analysis of that data, however, suggested that interstellar dust had resulted in misleading measurements. Several authors have shown that the local universe up to 300 Mpc from the Milky Way is less dense than surrounding areas – by 15–50%. This has been called the Local Void or Local Hole. Some media reports have dubbed it the KBC Void, although this name has not been taken up in other publications. Effect on surroundings Scientists believe that the Local Void is growing and that the Local Sheet, which makes up one wall of the void, is rushing away from the void's centre at . Concentrations of matter normally pull together, creating a larger void where matter is rushing away. The Local Void is surrounded uniformly by matter in all directions, except for one sector in which there is nothing, which has the effect of taking more matter away from that sector. The effect on the nearby galaxy is astonishingly large. The Milky Way's velocity away from the Local Void is . List of void galaxies Several void galaxies have been found within the Local Void. These include: See also List of voids References Astrochemistry Interstellar media Local Sheet Voids (astronomy)
Local Void
Chemistry,Astronomy
574
49,458,281
https://en.wikipedia.org/wiki/IRsweep
IRsweep is a Swiss company offering optical spectroscopy solutions and multipass absorption cells. The spectroscopy is based on semiconductor quantum cascade laser frequency combs in the mid-infrared wavelength range. The company is based in Zurich, Switzerland and was founded in 2014 and acquired by Sensirion Holding in May 2021. The technology is used for high speed absorption measurements of different molecules and is robust against cross-sensitivities. Such sensor systems are in high demand for process analytics as well as research applications, as the mid-infrared range hosts the strongest absorption features of many molecules. History IRsweep was founded in 2014 as a spin-off from the Swiss Federal Institute of Technology (ETH Zurich). The company commercialized its first product after having developed its prototypes for academic research projects. The first derived product is the IRcell, a cylindrical multipass cell combining a long optical path in a small detection volume. See also Infrared spectroscopy References Companies based in Zurich Spectroscopy Technology companies of Switzerland
IRsweep
Physics,Chemistry
200
34,164,583
https://en.wikipedia.org/wiki/History%20of%20computer%20clusters
The history of computer clusters is best captured by a footnote in Greg Pfister's In Search of Clusters: "Virtually every press release from DEC mentioning clusters says ‘DEC, who invented clusters...’. IBM did not invent them either. Customers invented clusters, as soon as they could not fit all their work on one computer, or needed a backup. The date of the first is unknown, but it would be surprising if it was not in the 1960s, or even late 1950s." The formal engineering basis of cluster computing as a means of doing parallel work of any sort was arguably invented by Gene Amdahl of IBM, who in 1967 published what has come to be regarded as the seminal paper on parallel processing: Amdahl's Law. Amdahl's Law describes mathematically the speedup one can expect from parallelizing any given otherwise serially performed task on a parallel architecture. This article defined the engineering basis for both multiprocessor computing and cluster computing, where the primary differentiator is whether or not the interprocessor communications are supported "inside" the computer (on for example a customized internal communications bus or network) or "outside" the computer on a commodity network. Consequently, the history of early computer clusters is more or less directly tied into the history of early networks, as one of the primary motivations for the development of a network was to link computing resources, creating a de facto computer cluster. Packet switching networks were conceptually invented by the RAND corporation in 1962. Using the concept of a packet switched network, the ARPANET project succeeded in creating in 1969 what was arguably the world's first commodity-network based computer cluster by linking four different computer centers (each of which was something of a "cluster" in its own right, but probably not a commodity cluster). The ARPANET project grew into the Internet—which can be thought of as "the mother of all computer clusters" (as the union of nearly all of the compute resources, including clusters, that happen to be connected). It also established the paradigm in use by all computer clusters in the world today—the use of packet-switched networks to perform interprocessor communications between processor (sets) located in otherwise disconnected frames. The development of customer-built and research clusters proceeded hand in hand with that of both networks and the Unix operating system from the early 1970s, as both TCP/IP and the Xerox PARC project created and formalized protocols for network-based communications. The Hydra operating system was built for a cluster of DEC PDP-11 minicomputers called C.mmp at Carnegie Mellon University in 1971. However, it was not until circa 1983 that the protocols and tools for easily doing remote job distribution and file sharing were defined (largely within the context of BSD Unix, as implemented by Sun Microsystems) and hence became generally available commercially, along with a shared filesystem. The first commercial clustering product was ARCnet, developed by Datapoint in 1977. ARCnet was not a commercial success and clustering per se did not really take off until Digital Equipment Corporation released their VAXcluster product in 1984 for the VAX/VMS operating system. The ARCnet and VAXcluster products not only supported parallel computing, but also shared file systems and peripheral devices. The idea was to provide the advantages of parallel processing, while maintaining data reliability and uniqueness. 
VAXcluster, now VMScluster, is still available on OpenVMS running on Alpha, Itanium and x86-64 systems. Two other noteworthy early commercial clusters were the Tandem Himalaya (a circa 1994 high-availability product) and the IBM S/390 Parallel Sysplex (also circa 1994, primarily for business use). No history of commodity computer clusters would be complete without noting the pivotal role played by the development of Parallel Virtual Machine (PVM) software in 1989. This open source software based on TCP/IP communications enabled the instant creation of a virtual supercomputer—a high performance compute cluster—made out of any TCP/IP connected systems. Free-form heterogeneous clusters built on top of this model rapidly achieved total throughput in FLOPS that greatly exceeded that available even with the most expensive "big iron" supercomputers. PVM and the advent of inexpensive networked PCs led, in 1993, to a NASA project to build supercomputers out of commodity clusters. In 1995 the Beowulf cluster—a cluster built on top of a commodity network for the specific purpose of "being a supercomputer" capable of performing tightly coupled parallel HPC computations—was invented, which spurred the independent development of grid computing as a named entity, although Grid-style clustering had been around at least as long as the Unix operating system and the Arpanet, whether or not it, or the clusters that used it, were named. See also History of supercomputing References Concurrent computing Computer clusters Parallel computing
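As a brief aside on the Amdahl's Law relationship cited near the start of this article, the law is conventionally written as below, where P is the fraction of the work that can be parallelized and n is the number of processors; this is the standard textbook statement rather than a quotation from the 1967 paper.

```latex
S(n) \;=\; \frac{1}{(1 - P) + \dfrac{P}{n}},
\qquad
\lim_{n \to \infty} S(n) \;=\; \frac{1}{1 - P}
```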
History of computer clusters
Technology
1,032
205,472
https://en.wikipedia.org/wiki/List%20of%20international%20call%20prefixes
This is a list of international dialing prefixes used in various countries for direct dialing of international telephone calls. These prefixes are typically required only when dialling from a landline, while in GSM-compliant mobile phone (cell phone) systems, the symbol + before the country code may be used irrespective of where the telephone is used at that moment; the network operator provides the access codes automatically. Countries by international prefix Countries using carrier selection codes The following is a non-exhaustive list of countries that optionally allow for carrier selection in addition to using the standard prefix listed in the preceding section. Historic international prefixes The following are international call prefixes that were used in various countries sometime in the past but are no longer used. See also List of country calling codes (International telephone dialing codes) List of mobile telephone prefixes by country List of North American Numbering Plan area codes Public switched telephone network Notes References External links Standards and recommendations Call Prefixes Telephone numbers Telecommunications lists
List of international call prefixes
Mathematics
202
255,426
https://en.wikipedia.org/wiki/Commuting
Commuting is periodically recurring travel between a place of residence and place of work or study, where the traveler, referred to as a commuter, leaves the boundary of their home community. By extension, it can sometimes be any regular or often repeated travel between locations, even when not work-related. The modes of travel, time taken and distance traveled in commuting varies widely across the globe. Most people in least-developed countries continue to walk to work. The cheapest method of commuting after walking is usually by bicycle, so this is common in low-income countries but is also increasingly practised by people in wealthier countries for environmental, health, and often time reasons. In middle-income countries, motorcycle commuting is very common. The next technology adopted as countries develop is more dependent on location: in more populous, older cities, especially in Eurasia mass transit (rail, bus, etc.) predominates, while in smaller, younger cities, and large parts of North America and Australasia, commuting by personal automobile is more common. A small number of very wealthy people, and those working in remote locations around the world, also commute by air travel, often for a week or more at a time rather than the more typical daily commute. Transportation links that enable commuting also impact the physical layout of cities and regions, allowing a distinction to arise between mostly-residential suburbs and the more economically focused urban core of a city (process known as suburban sprawl), but the specifics of how that distinction is realized remain drastically different between societies, with Eurasian "suburbs" often being more densely populated than North American "urban cores". History The first separation between workplace and place of residence occurred as a result of the invention of the steam railway. The word commuter derives from the early days of rail travel in US cities, such as New York, Philadelphia, Boston and Chicago, where, in the 1840s, the railways engendered suburbs from which travelers paid a reduced or 'commuted' fare into the city. Later, the back formations "commute" and "commuter" were coined therefrom. Commuted tickets would usually allow the traveler to repeat the same journey as often as they liked during the period of validity: normally, the longer the period the cheaper the cost per day. Before the 19th century, most workers lived less than an hour's walk from their work. The Industrial Revolution brought specialization of work and workplaces, and relocated most paid work from households and rural areas to factories in urban areas. Today, many people travel daily to work a long way from their own towns, cities, and villages, especially in industrialised societies. Depending on factors such as the high cost of housing in city centres, lack of public transit, and traffic congestion, modes of travel may include automobiles, motorcycles, trains, aircraft, buses, and bicycles. Where Los Angeles is infamous for its automobile gridlock, commuting in New York is closely associated with the subway; in London and Tokyo and several European cities, "commuter" is automatically associated with rail passengers. In the near future there may be another move away from the traditional "commute" with the introduction of flexible working. Some have suggested that many employees would be far more productive and live healthier, stress-free lives if the daily commute is removed completely. Suburbs Commuting has had a large impact on modern life. 
It has allowed cities to grow to sizes that were previously not practical, and it has led to the proliferation of suburbs. Many large cities or conurbations are surrounded by commuter belts, also known as metropolitan areas, commuter towns, dormitory towns, or bedroom communities. The prototypical commuter lives in one of these areas and travels daily to work or to school in the core city. As urban sprawl pushes further and further away from central business districts, new businesses can appear in outlying cities, leading to the existence of the reverse commuter who lives in a core city but works in the suburbs, and to a type of secondary commuter who lives in a more distant exurb and works in the outlying city or industrial suburb. Gender differences A UK study, published in 2009, found that on average women suffer four times as much psychological stress from their work commute as men do. An Indian study conducted in Mangalore, led by Edmond Fernandes, concluded that a gender-sensitive, commuter-centric road safety policy needs to be developed to protect women while commuting, as they felt stressed and scared to travel alone, particularly at night. Education Institutions that have few dormitories or low or no student housing populations are called commuter schools in the United States, such as community colleges. Traffic Most commuters travel at the same time of day, resulting in the morning and evening rush hours, with congestion on roads and public transport systems not designed or maintained well enough to cope with the peak demands. As an example, Interstate 405, located in Southern California, is one of the busiest freeways in the United States. Commuters may sit in traffic for up to two hours during rush hour. Construction work or collisions on the freeway distract and slow down commuters, contributing to even longer delays. Pollution Cars carrying only one occupant use fuel and roads less efficiently than shared cars or public transport, and increase traffic congestion. Commuting by car is a major factor contributing to air pollution. Carpool lanes can help commuters reach their destinations more quickly, encourage people to socialize and spend time together, and reduce air pollution. Some governments and employers have introduced employee travel reduction programs that encourage such alternatives as carpooling and remote work. Some commuters also arrange carpools through Internet sites to save money. Alternatives like personal rapid transit have also been proposed to reap the energy-efficiency benefits of a mass transit system while maintaining the speed and convenience of individual transport. Traffic emissions, such as those from cars and trucks, also contribute to air pollution. Airborne by-products from vehicle exhaust systems cause air pollution and are a major ingredient in the creation of smog in some large cities. The major culprits from transportation sources are carbon monoxide (CO), nitrogen oxides (NO and NOx), volatile organic compounds, sulfur dioxide, and hydrocarbons. Hydrocarbons are the main components of petroleum fuels such as gasoline and diesel fuel. These molecules react with sunlight, heat, ammonia, moisture, and other compounds to form the noxious vapours, ground-level ozone, and particles that comprise smog. Social trends Commuting trends in the United States In the United States, the Census Bureau's American Community Survey (ACS) collects data on commuting times, allowing an analysis of average commute time by industry, location, and vehicle. 
According to the 2014 ACS, the average commute time for adults in the United States was 26.8 minutes. The occupations with the longest commutes were Construction and Mining (33.4 minutes), Computer Science and Math (31.8), and Business Operations Specialists (30.2), while those in the military had the shortest commute (21). In general, urban and suburban workers in the US have similar commute times (about 30 minutes), while rural workers have significantly shorter commutes (22.6 minutes). In the US, over 90% of workers commute by car, while about 5% commute by public transportation. Statistical models indicate that in addition to demographics and work duration, commute time is one of the most important determinants of discretionary time allocation by individuals. Commuting College Students The number of students who commute to college continues to increase significantly. From 1996 to 2006 alone, the proportion of undergraduate students commuting to campus increased by an estimated 30% to 50%. In a study involving 10 universities in Canada, 61% of students reported that their commute was a challenge to campus participation, while 30% perceived it as a barrier to academic success. Factors influencing satisfaction included commute mode, duration, travel attitudes, and campus type. Notably, 72% of students had one-way commutes of one hour or less, 22% had commutes lasting between 60 and 90 minutes, and 9% faced commutes exceeding 90 minutes. Commuting and the scarcity of local employment Commuting is often made necessary by local employment market factors, which may stem from the decline of manufacturing (i.e., in cities where large manufacturing employers have either closed or laid off workers, with no other employers to absorb that loss) and, in general, the sheer lack of local employment. More specifically, wages from local employers are often insufficient for a worker household to sustain itself. As a result, sustaining the household requires widening the job search beyond the local area to the nearest city or metropolitan area, which in turn requires commuting. Hence, in areas with few or no transit options that can accommodate a worker's schedule, the use of a car becomes necessary. This is a personal choice driven by financial need, highlighting the broader issue of sustaining local economies. Social and health implications of commuting Since commuting largely stems from a need to travel outside a home community to sustain a household income while facing a bleak local employment market, it comes with additional social and health implications. First, the risk of injury and accident increases as the distance driven and the time spent in the vehicle increase; fatigue and hazardous road conditions add to this risk. Second, while income from employment may be greater in other cities, the stress of commuting becomes a factor in personal health. Ironically, the stress of having to locate employment or of being placed in a low-income situation might lead to a similar outcome; this stands in contrast to the satisfaction of a sustainable income and good employment, which is clearly the goal of an individual who is faced with commuting. 
See also References External links "Commuters," a poetic rendition of the New Jersey-to-New York commuting life by Steve Peacock (2011) InDigestMag.com US Commuting Averages (2002) Some Commuters are travelling from France to London Platform 11 – Ireland's National Rail Commuter Group Five Maps That Reveal New Truths About America's Megaregions Transport and the environment Urban geography Types of travel
Commuting
Physics
2,137
7,128,296
https://en.wikipedia.org/wiki/SmartComputing
Smart Computing was a monthly computing and technology magazine published by Sandhills Publishing Company in Lincoln, Nebraska, USA. First released under the name PC Novice, it was published from 1990 to 2013. Content The magazine featured articles, reviews of hardware and software, editorial content and classified advertising. It was geared more toward newer users than its sister publications, Computer Power User and CyberTrend (previously known as PC Today). Articles and Features Technology News and Notes, by Christian Perry - News and a monthly Q/A help desk Tech Diaries, various authors - Reviews Software Head-to-Head, various authors - a comparison of software September 2006: Anti-Spam: , SonicWALL Email Security Desktop, OnlyMyEmail, VQme Anti Spam with Webmail. Winner: SonicWALL Email Security Desktop October 2006: Instant Messaging clients: Yahoo! Messenger 8, AIM Triton 1.5, Google Talk, ICQ 5.1, Trillian 3.1, Windows Live Messenger. Winner: Yahoo! Messenger January 2007: Office suites: StarOffice 8, Microsoft Office 2007 Home and Student Edition, Corel WordPerfect X3 Standard Edition, Ability Office Standard Edition. Winner: StarOffice 8 Software Reviews, various Staff Picks, various - staff's choices of hardware Windows Tips & Tricks, various - helpful hints for using Microsoft Windows General Computing, various - articles about no specific topic Reader's Tips, by readers - readers give hints to other readers Learning Linux, by Vince Cogley, NEW COLUMN - teach yourself using Linux with the Ubuntu distribution Plugged In, various - tips on using the Internet Mr. Modem's Desktop, by Mr. Modem - various tips and Internet links Quick Studies, various - tips on and fixing problems with using very commonly used software Tidbits, by Marty Sems - information on new stuff Tech Support, various - consists of: What to Do When... - a guide on fixing road-block problems Examining Errors - the magazine helps readers with errors Fast Fixes - information on new software updates Q&A - answers to tech support questions FAQ - answers to frequently asked questions; each month all questions are about the same topic Action Editor, unknown - Action Editor comes to the rescue when companies deny service or give bad service Tales From The Trenches, by Gregory Anderson - his bad experiences when using computers and what to do about them if they happen to you Editorial License, by Rod Scher - description unknown See also Computer magazines References External links Publisher's website 1990 establishments in Nebraska 2013 disestablishments in Nebraska Monthly magazines published in the United States Defunct computer magazines published in the United States Home computer magazines Magazines established in 1990 Magazines disestablished in 2013 Magazines published in Nebraska Mass media in Lincoln, Nebraska
SmartComputing
Technology
568
1,874,573
https://en.wikipedia.org/wiki/Cost%20per%20mille
Cost per mille (CPM), also called cost per thousand (CPT) (in Latin, French and Italian, mille means one thousand), is a commonly-used measurement in advertising. It is the cost an advertiser pays for one thousand views or impressions of an advertisement. Radio, television, newspaper, magazine, out-of-home advertising, and online advertising can be purchased on the basis of exposing the ad to one thousand viewers or listeners. It is used in marketing as a benchmarking metric to calculate the relative cost of an advertising campaign or an ad message in a given medium. The "cost per thousand advertising impressions" metric (CPM) is calculated by dividing the cost of an advertising placement by the number of impressions (expressed in thousands) that it generates. CPM is useful for comparing the relative efficiency of various advertising opportunities or media and in evaluating the overall costs of advertising campaigns. For media without countable views, CPM reflects the cost per 1000 estimated views of the ad. This traditional form of measuring advertising cost can also be used in tandem with performance-based models such as percentage of sale, or cost per acquisition (CPA). Purpose The purpose of the CPM metric is to compare the costs of advertising campaigns within and across different media. A typical advertising campaign might try to reach potential consumers in multiple locations and through various media. The cost per thousand impressions (CPM) metric enables marketers to make cost comparisons between these media, both at the planning stage and during reviews of past campaigns. Marketers calculate CPM by dividing advertising campaign costs by the number of impressions (or opportunities-to-see) that are delivered by each part of the campaign. Thus, CPM is the cost of a media campaign, relative to its success in generating impressions to see. As the impression counts are generally sizeable, marketers customarily work with the cost per thousand impressions; dividing by 1,000 is an industry standard. Similarly, revenue can be expressed in terms of Revenue per mille (RPM). In email marketing, CPM (cost per mille) refers to the cost of sending a thousand email messages. Also referred to as CPT (cost per thousand), this pricing method is used by email service providers (ESPs) to cover the cost of the mail server, bandwidth, hosting images, deliverability services, and bounce management. There are other types of CPM, one of which is vCPM (viewable CPM). With viewable CPM, advertisers bid on 1,000 viewable impressions and pay only for impressions that are measured as viewable. Viewable CPM lets advertisers bid on the actual value of an ad appearing in a viewable position on a given placement. Using a higher vCPM bid than the standard CPM bid is usually more effective for winning these more valuable types of impressions. Construction To calculate CPM, marketers first state the results of a media campaign (gross impressions). Second, they divide that result into the relevant media cost: Advertising Cost ($) / Impressions Generated For example: the total cost of running the ad is $15,000 and the total number of impressions generated is 2,400,000. $15,000 / 2,400,000 = $0.00625 per impression, and the CPM is calculated as $0.00625 x 1,000 (meaning per thousand impressions) = $6.25. Note that the CPM is $6.25 and not $0.00625, because the cost is quoted per thousand impressions. In online advertising, if a website sells banner ads for a $20 CPM, that means it costs $20 to show the banner on 1000 page views. 
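As a concrete companion to the worked example above, the following is a minimal Python sketch of the same arithmetic. The helper names and the 50,000-page-view figure in the second part are illustrative assumptions, not taken from the original text.

```python
def cpm(cost, impressions):
    """Cost per mille: cost of a placement per 1,000 impressions generated."""
    return cost / impressions * 1_000

# Worked example from the text: a $15,000 spend generating 2,400,000 impressions.
print(cpm(15_000, 2_400_000))     # 6.25 -> a $6.25 CPM, not $0.00625

def media_cost(cpm_rate, impressions):
    """Going the other way: total cost of buying impressions at a given CPM."""
    return cpm_rate * impressions / 1_000

# A $20 CPM banner shown on 50,000 page views (the 50,000 figure is made up).
print(media_cost(20, 50_000))     # 1000.0 -> $1,000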
While the Super Bowl has the highest per-spot ad cost in the United States, it also has the most television viewers annually. Consequently, its CPM may be comparable to a less expensive spot aired during standard programming. Related metrics and concepts Effective cost per mille The Search Engine Marketing Professionals Organization (SEMPO) defines eCPM as: A hybrid Cost-per-Click (CPC) auction calculated by multiplying the CPC times the click-through rate (CTR), and multiplying that by one thousand. (Represented by: (CPC x CTR) x 1000 = eCPM.) This monetization model is used by Google to rank site-targeted CPM ads (in the Google content network) against keyword-targeted CPC ads (Google AdWords PPC) in their hybrid auction. In internet marketing, effective cost per mille is used to measure the effectiveness of a publisher's inventory being sold (by the publisher) via a CPA, CPC, or Cost per time basis. In other words, the eCPM tells the publisher what they would have received if they sold the advertising inventory on a CPM basis (instead of a CPA, CPC, or Cost per time). This information can be used to compare revenue across channels that may have widely varying traffic—by figuring the earnings per thousand impressions. Example There are two banners: "Super Apps" and "Fantastic Apps". The publishers earn $1 per click. Both banners were published for the duration of one week. "Super Apps" was viewed by 2000 visitors from which 10 clicked on it. "Fantastic Apps" was viewed by 2000 visitors from which 50 clicked on it. This shows that: "Super Apps" has an eCPM of $5 (=($1*10/2000)*1000) "Fantastic Apps" has an eCPM of $25 (=($1*50/2000)*1000) Cost per point (CPP) or cost per rating point (CPR or CPRP) CPP is the cost of an advertising campaign, relative to the rating points delivered. In a manner similar to CPM, cost per point measures the cost per rating point for an advertising campaign by dividing the cost of the advertising by the rating points delivered. The American Marketing Association defines cost-per-rating-point (CPR or CPRP) as: A method of comparing the cost effectiveness of two or more alternative media vehicles in radio or television. CPRP is computed by dividing the cost of the time unit or commercial by the rating of the media vehicle during that time period. See also CPA – Cost per action CPC – Cost per click CPI – Cost per impression CPL – Cost per lead CTR – Click-through rate Digital marketing PPC – Pay per click VTR – View-through rate References Internet terminology Advertising indicators Compensation methods Costs Rates Marketing analytics
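The eCPM calculation above follows the same per-thousand pattern; here is a similar sketch, reusing the "Super Apps" and "Fantastic Apps" figures from the example. The function name and the hybrid-form variables are assumptions made for illustration.

```python
def ecpm(revenue, impressions):
    """Effective CPM: revenue actually earned per 1,000 impressions served."""
    return revenue / impressions * 1_000

# Banner example from the text: publishers earn $1 per click, 2,000 views each.
print(ecpm(revenue=1 * 10, impressions=2_000))   # 5.0  -> "Super Apps"
print(ecpm(revenue=1 * 50, impressions=2_000))   # 25.0 -> "Fantastic Apps"

# SEMPO-style hybrid form, (CPC x CTR) x 1000: a $1 CPC at a 0.5% CTR.
cpc, ctr = 1.00, 10 / 2_000
print(cpc * ctr * 1_000)                         # 5.0, matching the first banner
```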
Cost per mille
Technology
1,349
18,136,205
https://en.wikipedia.org/wiki/LcrV
In molecular biology, LcrV is a protein found in Yersinia pestis and several other bacterial species. It forms part of the Yersinia pestis virulence protein factors that also include the Yops, or Yersinia outer proteins, though the name has been kept by convention. LcrV's main function is not actually known, but it is essential for the production of other Yops. The type III secretion system of Gram-negative bacteria is used to transport virulence factors from the pathogen directly into the host cell and is only triggered when the bacterium comes into close contact with the host. Effector proteins secreted by the type III system do not possess a secretion signal, and are considered unique because of this. Yersinia spp. secrete effector proteins called YopB and YopD that facilitate the spread of other translocated proteins through the type III needle and the host cell cytoplasm. In turn, the transcription of these moieties is thought to be regulated by another gene, lcrV, found on the Yops virulon that encodes the entire type III system. The product of this gene, the LcrV protein, also regulates the secretion of YopD through the type III translocon, and itself acts as a protective "V" antigen for Yersinia pestis, the causative agent of plague. A homologue of the Y. pestis LcrV protein, PcrV, has been found in Pseudomonas aeruginosa, an opportunistic pathogen. In vivo studies using mice found that immunisation with the protein protected burned animals from infection by P. aeruginosa, and enhanced survival. In addition, it is speculated that PcrV determines the size of the needle pore for type III secreted effectors. LcrV is a multifunctional protein that has been shown to act at the level of secretion control by binding the Ysc inner-gate protein LcrG and to modulate the host immune response by altering cytokine production. LcrV is also necessary for full induction of low-calcium response (LCR) stimulon virulence gene transcription. The polypeptide is encoded on a plasmid and is only present when the surrounding temperature is around 37 °C. References Further reading Salyers, Abigail & Whitt, Dixie; Bacterial Pathogenesis: A Molecular Approach, AMS Press Biological Weapons Defense: Infectious Diseases and Counterbioterrorism, Humana Press Protein families Bacterial proteins
LcrV
Biology
533
4,418,897
https://en.wikipedia.org/wiki/Alternant%20matrix
In linear algebra, an alternant matrix is a matrix formed by applying a finite list of functions pointwise to a fixed column of inputs. An alternant determinant is the determinant of a square alternant matrix. Generally, if f1, ..., fn are functions from a set X to a field K, and α1, ..., αm are elements of X, then the alternant matrix has size m × n and is defined by Mij = fj(αi), or, more compactly, M = [fj(αi)]. (Some authors use the transpose of the above matrix.) Examples of alternant matrices include Vandermonde matrices, for which fj(α) = α^(j−1), and Moore matrices, for which fj(α) = α^(q^(j−1)). Properties The alternant can be used to check the linear independence of the functions in function space. For example, let and choose . Then the alternant is the matrix and the alternant determinant is Therefore M is invertible and the vectors form a basis for their spanning set: in particular, and are linearly independent. Linear dependence of the columns of an alternant does not imply that the functions are linearly dependent in function space. For example, let and choose . Then the alternant is and the alternant determinant is 0, but we have already seen that and are linearly independent. Despite this, the alternant can be used to find a linear dependence if it is already known that one exists. For example, we know from the theory of partial fractions that there are real numbers A and B for which Choosing and we obtain the alternant . Therefore, is in the nullspace of the matrix: that is, . Moving to the other side of the equation gives the partial fraction decomposition If αi = αj for any i ≠ j, then the alternant determinant is zero (as a row is repeated). If m = n and the functions are all polynomials, then (αj − αi) divides the alternant determinant for all 1 ≤ i < j ≤ n. In particular, if V is a Vandermonde matrix, then det V divides such polynomial alternant determinants. The ratio det M / det V is therefore a polynomial in α1, ..., αn called the bialternant. The Schur polynomial is classically defined as the bialternant of the polynomials fj(x) = x^(λj + n − j). Applications Alternant matrices are used in coding theory in the construction of alternant codes. See also List of matrices Wronskian References Matrices Determinants
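A short numerical illustration of the definition given above is sketched below. The choice of sin and cos as functions, the sample points 0 and pi/2, and the use of NumPy are assumptions made here for illustration only; they are not taken from the article's own (omitted) examples.

```python
import numpy as np

def alternant(funcs, points):
    """M[i, j] = funcs[j](points[i]): an m x n alternant matrix."""
    return np.array([[f(a) for f in funcs] for a in points])

# Checking linear independence of two functions at two sample points
# (the choice of sin, cos and the points 0, pi/2 is purely illustrative).
M = alternant([np.sin, np.cos], [0.0, np.pi / 2])
print(np.linalg.det(M))   # -1.0, nonzero, so the two functions are independent

# The Vandermonde special case f_j(x) = x**j, whose determinant equals the
# product of the differences of the sample points.
pts = [1.0, 2.0, 4.0]
V = alternant([lambda x, j=j: x**j for j in range(3)], pts)
expected = (2 - 1) * (4 - 1) * (4 - 2)
print(np.isclose(np.linalg.det(V), expected))   # True
```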
Alternant matrix
Mathematics
444
26,898,094
https://en.wikipedia.org/wiki/Von%20Neumann%E2%80%93Morgenstern%20utility%20theorem
In decision theory, the von Neumann–Morgenstern (VNM) utility theorem demonstrates that rational choice under uncertainty involves making decisions that take the form of maximizing the expected value of some cardinal utility function. This function is known as the von Neumann–Morgenstern utility function. The theorem forms the foundation of expected utility theory. In 1947, John von Neumann and Oskar Morgenstern proved that any individual whose preferences satisfied four axioms has a utility function, where such an individual's preferences can be represented on an interval scale and the individual will always prefer actions that maximize expected utility. That is, they proved that an agent is (VNM-)rational if and only if there exists a real-valued function u defined by possible outcomes such that every preference of the agent is characterized by maximizing the expected value of u, which can then be defined as the agent's VNM-utility (it is unique up to affine transformations i.e. adding a constant and multiplying by a positive scalar). No claim is made that the agent has a "conscious desire" to maximize u, only that u exists. VNM-utility is a decision utility in that it is used to describe decisions. It is related, but not necessarily equivalent to, the utility of Bentham's utilitarianism. Set-up In the theorem, an individual agent is faced with options called lotteries. Given some mutually exclusive outcomes, a lottery is a scenario where each outcome will happen with a given probability, all probabilities summing to one. For example, for two outcomes A and B, denotes a scenario where P(A) = 25% is the probability of A occurring and P(B) = 75% (and exactly one of them will occur). More generally, for a lottery with many possible outcomes Ai, we write: with the sum of the s equal to 1. The outcomes in a lottery can themselves be lotteries between other outcomes, and the expanded expression is considered an equivalent lottery: 0.5(0.5A + 0.5B) + 0.5C = 0.25A + 0.25B + 0.50C. If lottery M is preferred over lottery L, we write , or equivalently, . If the agent is indifferent between L and M, we write the indifference relation If M is either preferred over or viewed with indifference relative to L, we write The axioms The four axioms of VNM-rationality are completeness, transitivity, continuity, and independence. These axioms, apart from continuity, are often justified using the Dutch book theorems (whereas continuity is used to set aside lexicographic or infinitesimal utilities). Completeness assumes that an individual has well defined preferences: Axiom 1 (Completeness) For any lotteries and , either or . (the individual must express some preference or indifference). Note that this implies reflexivity. Transitivity assumes that preferences are consistent across any three options: Axiom 2 (Transitivity) If and , then . Continuity assumes that there is a "tipping point" between being better than and worse than a given middle option: Axiom 3 (Continuity): If , then there exists a probability such that where the notation on the left side refers to a situation in which L is received with probability p and N is received with probability (1–p). Instead of continuity, an alternative axiom can be assumed that does not involve a precise equality, called the Archimedean property. 
It says that any separation in preference can be maintained under a sufficiently small deviation in probabilities: Axiom 3′ (Archimedean property): If , then there exists a probability such that Only one of (3) or (3′) need to be assumed, and the other will be implied by the theorem. Independence assumes that a preference holds independently of the probability of another outcome. Axiom 4 (Independence): For any and (with the "irrelevant" part of the lottery underlined): In other words, the probabilities involving cancel out and don't affect our decision, because the probability of is the same in both lotteries. Note that the "only if" direction is necessary for the theorem to work. Without that, we have this counterexample: there are only two outcomes , and the agent is indifferent on , and strictly prefers all of them over . With the "only if" direction, we can argue that implies , thus excluding this counterexample. The independence axiom implies the axiom on reduction of compound lotteries: Axiom 4′ (Reduction of compound lotteries): For any lotteries and any , To see how Axiom 4 implies Axiom 4', set in the expression in Axiom 4, and expand. The theorem For any VNM-rational agent (i.e. satisfying axioms 1–4), there exists a function u which assigns to each outcome A a real number u(A) such that for any two lotteries, where E(u(L)), or more briefly Eu(L) is given by As such, u can be uniquely determined (up to adding a constant and multiplying by a positive scalar) by preferences between simple lotteries, meaning those of the form pA + (1 − p)B having only two outcomes. Conversely, any agent acting to maximize the expectation of a function u will obey axioms 1–4. Such a function is called the agent's von Neumann–Morgenstern (VNM) utility. Proof sketch The proof is constructive: it shows how the desired function can be built. Here we outline the construction process for the case in which the number of sure outcomes is finite. Suppose there are n sure outcomes, . Note that every sure outcome can be seen as a lottery: it is a degenerate lottery in which the outcome is selected with probability 1. Hence, by the Completeness and Transitivity axioms, it is possible to order the outcomes from worst to best: We assume that at least one of the inequalities is strict (otherwise the utility function is trivial—a constant). So . We use these two extreme outcomes—the worst and the best—as the scaling unit of our utility function, and define: and For every probability , define a lottery that selects the best outcome with probability and the worst outcome otherwise: Note that and . By the Continuity axiom, for every sure outcome , there is a probability such that: and For every , the utility function for outcome is defined as so the utility of every lottery is the expectation of u: To see why this utility function makes sense, consider a lottery , which selects outcome with probability . But, by our assumption, the decision maker is indifferent between the sure outcome and the lottery . So, by the Reduction axiom, he is indifferent between the lottery and the following lottery: The lottery is, in effect, a lottery in which the best outcome is won with probability , and the worst outcome otherwise. Hence, if , a rational decision maker would prefer the lottery over the lottery , because it gives him a larger chance to win the best outcome. Hence: if and only if Reaction Von Neumann and Morgenstern anticipated surprise at the strength of their conclusion. 
But according to them, the reason their utility function works is that it is constructed precisely to fill the role of something whose expectation is maximized: "Many economists will feel that we are assuming far too much ... Have we not shown too much? ... As far as we can see, our postulates [are] plausible ... We have practically defined numerical utility as being that thing for which the calculus of mathematical expectations is legitimate." – VNM 1953, § 3.1.1 p.16 and § 3.7.1 p. 28 Thus, the content of the theorem is that the construction of u is possible, and they claim little about its nature. Consequences Automatic consideration of risk aversion It is often the case that a person, faced with real-world gambles with money, does not act to maximize the expected value of their dollar assets. For example, a person who only possesses $1000 in savings may be reluctant to risk it all for a 20% chance odds to win $10,000, even though However, if the person is VNM-rational, such facts are automatically accounted for in their utility function u. In this example, we could conclude that where the dollar amounts here really represent outcomes (cf. "value"), the three possible situations the individual could face. In particular, u can exhibit properties like u($1)+u($1) ≠ u($2) without contradicting VNM-rationality at all. This leads to a quantitative theory of monetary risk aversion. Implications for the expected utility hypothesis In 1738, Daniel Bernoulli published a treatise in which he posits that rational behavior can be described as maximizing the expectation of a function u, which in particular need not be monetary-valued, thus accounting for risk aversion. This is the expected utility hypothesis. As stated, the hypothesis may appear to be a bold claim. The aim of the expected utility theorem is to provide "modest conditions" (i.e. axioms) describing when the expected utility hypothesis holds, which can be evaluated directly and intuitively: "The axioms should not be too numerous, their system is to be as simple and transparent as possible, and each axiom should have an immediate intuitive meaning by which its appropriateness may be judged directly. In a situation like ours this last requirement is particularly vital, in spite of its vagueness: we want to make an intuitive concept amenable to mathematical treatment and to see as clearly as possible what hypotheses this requires." – VNM 1953 § 3.5.2, p. 25 As such, claims that the expected utility hypothesis does not characterize rationality must reject one of the VNM axioms. A variety of generalized expected utility theories have arisen, most of which drop or relax the independence axiom. Implications for ethics and moral philosophy Because the theorem assumes nothing about the nature of the possible outcomes of the gambles, they could be morally significant events, for instance involving the life, death, sickness, or health of others. A von Neumann–Morgenstern rational agent is capable of acting with great concern for such events, sacrificing much personal wealth or well-being, and all of these actions will factor into the construction/definition of the agent's VNM-utility function. In other words, both what is naturally perceived as "personal gain", and what is naturally perceived as "altruism", are implicitly balanced in the VNM-utility function of a VNM-rational individual. Therefore, the full range of agent-focused to agent-neutral behaviors are . 
If the utility of is , a von Neumann–Morgenstern rational agent must be indifferent between and . An agent-focused von Neumann–Morgenstern rational agent therefore cannot favor more equal, or "fair", distributions of utility between its own possible future selves. Distinctness from other notions of utility Some utilitarian moral theories are concerned with quantities called the "total utility" and "average utility" of collectives, and characterize morality in terms of favoring the utility or happiness of others with disregard for one's own. These notions can be related to, but are distinct from, VNM-utility: 1) VNM-utility is a decision utility: it is that according to which one decides, and thus by definition cannot be something which one disregards. 2) VNM-utility is not canonically additive across multiple individuals (see Limitations), so "total VNM-utility" and "average VNM-utility" are not immediately meaningful (some sort of normalization assumption is required). The term E-utility for "experience utility" has been coined to refer to the types of "hedonistic" utility like that of Bentham's greatest happiness principle. Since morality affects decisions, a VNM-rational agent's morals will affect the definition of its own utility function (see above). Thus, the morality of a VNM-rational agent can be characterized by correlation of the agent's VNM-utility with the VNM-utility, E-utility, or "happiness" of others, among other means, but not by disregard for the agent's own VNM-utility, a contradiction in terms. Limitations Nested gambling Since if L and M are lotteries, then pL + (1 − p)M is simply "expanded out" and considered a lottery itself, the VNM formalism ignores what may be experienced as "nested gambling". This is related to the Ellsberg problem where people choose to avoid the perception of risks about risks. Von Neumann and Morgenstern recognized this limitation: "...concepts like a specific utility of gambling cannot be formulated free of contradiction on this level. This may seem to be a paradoxical assertion. But anybody who has seriously tried to axiomatize that elusive concept, will probably concur with it." – VNM 1953 § 3.7.1, p. 28. Incomparability between agents Since for any two VNM-agents X and Y, their VNM-utility functions uX and uY are only determined up to additive constants and multiplicative positive scalars, the theorem does not provide any canonical way to compare the two. Hence expressions like uX(L) + uY(L) and uX(L) − uY(L) are not canonically defined, nor are comparisons like uX(L) < uY(L) canonically true or false. In particular, the aforementioned "total VNM-utility" and "average VNM-utility" of a population are not canonically meaningful without normalization assumptions. Applicability to economics The expected utility hypothesis has been shown to have imperfect predictive accuracy in a set of lab based empirical experiments, such as the Allais paradox. References and further reading Anand, Paul. Foundations of Rational Choice Under Risk Oxford, Oxford University Press. 1993 reprinted 1995, 2002 Fishburn, Peter C. Utility Theory for Decision Making. Huntington, NY. Robert E. Krieger Publishing Co. 1970. Sixto Rios (1998) Some problems and developments in decision science, Revista Matematica Complutense 11(1):113–41. Peterson, Martin (2009). An Introduction to Decision Theory (Cambridge Introductions to Philosophy). Cambridge: Cambridge University Press. Game theory Utility John von Neumann Economics theorems Rational choice theory Decision theory
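As an informal illustration of how the theorem is used, the sketch below represents lotteries as probability maps over outcomes and ranks them by expected utility, then checks the independence axiom numerically for one mixing weight. The outcome names and the utility table are invented for the example and are not taken from the article.

```python
# A made-up utility table over three outcomes (illustrative only).
u = {"worst": 0.0, "middle": 0.4, "best": 1.0}

def expected_utility(lottery, u):
    """Lotteries are {outcome: probability} maps whose values sum to 1."""
    return sum(p * u[o] for o, p in lottery.items())

def weakly_prefers(m, l, u):
    """True if lottery m is weakly preferred to lottery l under utility u."""
    return expected_utility(m, u) >= expected_utility(l, u)

def mix(p, a, b):
    """The compound lottery p*a + (1 - p)*b, expanded to a simple lottery."""
    out = {}
    for o, q in a.items():
        out[o] = out.get(o, 0.0) + p * q
    for o, q in b.items():
        out[o] = out.get(o, 0.0) + (1 - p) * q
    return out

L = {"worst": 0.5, "best": 0.5}   # a 50/50 gamble
M = {"middle": 1.0}               # a sure middling outcome
print(expected_utility(L, u), expected_utility(M, u))  # 0.5 0.4
print(weakly_prefers(L, M, u))                         # True

# Independence: mixing both lotteries with the same third lottery N,
# using the same weight p, preserves the ranking.
N = {"worst": 1.0}
print(weakly_prefers(mix(0.3, L, N), mix(0.3, M, N), u))  # True
```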
Von Neumann–Morgenstern utility theorem
Mathematics
3,076
16,637,535
https://en.wikipedia.org/wiki/Size%20function
Size functions are shape descriptors, in a geometrical/topological sense. They are functions from the half-plane to the natural numbers, counting certain connected components of a topological space. They are used in pattern recognition and topology. Formal definition In size theory, the size function ℓ(M,φ)(x, y) associated with the size pair (M, φ) is defined in the following way. For every pair (x, y) with x < y, ℓ(M,φ)(x, y) is equal to the number of connected components of the set {p in M : φ(p) ≤ y} that contain at least one point at which the measuring function φ (a continuous function from a topological space M to the real line) takes a value smaller than or equal to x. The concept of size function can be easily extended to the case of a measuring function φ from M to R^k, where R^k is endowed with the usual partial order. A survey about size functions (and size theory) can be found in the references. History and applications Size functions were first introduced for the particular case of M equal to the topological space of all piecewise closed paths in a closed manifold embedded in a Euclidean space. Here the topology on M is induced by the norm on the space of paths, while the measuring function takes each path to its length. The case of M equal to the topological space of all ordered tuples of points in a submanifold of a Euclidean space has also been considered, with the topology on M induced by the corresponding metric. An extension of the concept of size function to algebraic topology was made later, when the concept of size homotopy group was introduced; here measuring functions taking values in R^k are allowed. An extension to homology theory (the size functor) was also introduced. The concepts of size homotopy group and size functor are strictly related to the concept of persistent homology group studied in persistent homology. It is worth pointing out that the size function is the rank of the 0-th persistent homology group, while the relation between the persistent homology group and the size homotopy group is analogous to the one existing between homology groups and homotopy groups. Size functions were initially introduced as a mathematical tool for shape comparison in computer vision and pattern recognition, and have constituted the seed of size theory. The main point is that size functions are invariant for every transformation preserving the measuring function. Hence, they can be adapted to many different applications, by simply changing the measuring function in order to get the wanted invariance. Moreover, size functions show properties of relative resistance to noise, depending on the fact that they distribute the information all over the half-plane x < y. Main properties Assume that M is a compact locally connected Hausdorff space. The following statements hold: every size function ℓ(M,φ)(x, y) is a non-decreasing function in the variable x and a non-increasing function in the variable y. every size function ℓ(M,φ)(x, y) is locally right-constant in both its variables. for every x < y, ℓ(M,φ)(x, y) is finite. for every x smaller than the minimum value of φ on M, and every y > x, ℓ(M,φ)(x, y) = 0. for every y greater than or equal to the maximum value of φ on M, and every x < y, ℓ(M,φ)(x, y) equals the number of connected components of M on which the minimum value of φ is smaller than or equal to x. If we also assume that M is a smooth closed manifold and φ is a C1-function, the following useful property holds: in order that (x, y) is a discontinuity point for ℓ(M,φ) it is necessary that either x or y or both are critical values for φ. A strong link between the concept of size function and the concept of natural pseudodistance between size pairs exists: size functions give an easy way to obtain lower bounds for the natural pseudodistance, and this is one of the main motivations for introducing the concept of size function. 
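Since the definition above becomes purely combinatorial once the space is discretised, it can be illustrated with a short sketch. The graph, the measuring function, and the use of the networkx library are assumptions made up for illustration; the theory itself works with topological spaces rather than finite graphs.

```python
import networkx as nx

def size_function(G, phi, x, y):
    """ell(x, y): number of connected components of the sublevel set
    {phi <= y} that contain at least one vertex with phi <= x."""
    sub = G.subgraph([v for v in G if phi[v] <= y])
    return sum(1 for comp in nx.connected_components(sub)
               if any(phi[v] <= x for v in comp))

# A path graph 0-1-2-3-4 with a made-up measuring function on its vertices.
G = nx.path_graph(5)
phi = {0: 0.0, 1: 2.0, 2: 0.5, 3: 3.0, 4: 1.0}

print(size_function(G, phi, x=0.6, y=1.5))  # 2
print(size_function(G, phi, x=0.6, y=2.5))  # 1 (non-increasing in y, as stated above)
print(size_function(G, phi, x=1.0, y=1.5))  # 3 (non-decreasing in x)
```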
Representation by formal series An algebraic representation of size functions in terms of collections of points and lines in the real plane with multiplicities, i.e. as particular formal series, has been furnished in the literature. The points (called cornerpoints) and lines (called cornerlines) of such formal series encode the information about discontinuities of the corresponding size functions, while their multiplicities contain the information about the values taken by the size function. Formally: cornerpoints are defined as those points , with , such that the number is positive. The number is said to be the multiplicity of . cornerlines and are defined as those lines such that The number is said to be the multiplicity of . Representation Theorem: For every , it holds . This representation contains the same amount of information about the shape under study as the original size function does, but is much more concise. This algebraic approach to size functions leads to the definition of new similarity measures between shapes, by translating the problem of comparing size functions into the problem of comparing formal series. The most studied among these metrics between size functions is the matching distance. References See also Size theory Natural pseudodistance Size functor Size homotopy group Size pair Matching distance Topological data analysis Topology Algebraic topology
Size function
Physics,Mathematics
934
58,591,903
https://en.wikipedia.org/wiki/Primate%20sociality
Primate sociality is an area of primatology that aims to study the interactions between three main elements of a primate social network: the social organisation, the social structure and the mating system. The intersection of these three structures describes the socially complex behaviours and relationships occurring among adult males and females of a particular species. Cohesion and stability of groups are maintained through a confluence of factors, including kinship, willingness to cooperate, frequency of agonistic behaviour, and varying intensities of dominance structures. Primate social organisation exists along a spectrum, with networks ranging from solitary neighbourhood systems to multi-individual units to complex multilevel societies that are composed of hierarchically-organised social units. The evolution of diverse primate social systems is considered to be a naturally selected anti-predation response. Increased resource detection, cooperation and social learning are also considered co-benefits of group living. Emergence of group living Like genetic traits, behavioural characteristics can result from natural selection processes. In contrast to many animal decision-making strategies, which favour individual fitness, group living (or sociality) prioritises an inclusive group fitness. Socioecological factors are thought to influence primate social organisation. For example, the main benefits of group living are hypothesised to be: Improved predator detection. Predator vigilance (or awareness) and predator defence are thought to increase with group living. More eyes mean that detection occurs sooner, and communication among members ensures that appropriate responses and actions are taken, minimising the primates' susceptibility to predation. Improved resource (water or food) detection. The hypothesis is that more individuals confer heightened spatial knowledge and an increased ability to detect resources, since more of the landscape is being covered. Opportunity for cooperation. Primate sociality and living in close proximity bolster cooperative behaviours necessary for participating in activities such as hunting, alloparenting, and/or territory or mate defence. Reduced risk of infanticide. There have been observations from certain baboon populations that suggest a correlation between infant survival and group size: e.g. infants are likelier to survive in larger groups. Increased opportunity for social learning. The main constraining factors of social group sizes are related to: Resource abundance. Because living in groups requires members to share access to essential resources (like food, water, mates, sleeping sites), there are selective costs that constrain group size. Pathogen transmission. Larger groups increase exposure to pathogens among their members. Competition and aggression. If intra-group competition becomes too high, the associated stress can potentially impose negative health impacts. Cognitive capabilities. There is an assumption that cognitive abilities must be able to interpret the complex information of group living (including information resulting from social relationships). Interestingly, there are competing hypotheses for the role feeding takes in influencing primate sociality. It is interpreted as having both a positive (resource detection) and a negative (resource competition) effect, depending on the analysis. 
In order for sociality to have been selected for via natural selection, the collective benefits of group living must outweigh the collective costs. Thus, if intra-group competition becomes too great, the group is likely to fracture into smaller units. A thorough review of the literature suggests that the lower threshold of primate group living is determined by risk of predation while the upper limit of group size is determined by feeding competition among individuals. Primate social organisation Social organisation refers to the size (number of individuals), composition (variation between the sexes), and cohesion (relating to proximity and bond strengths among individuals) of the society in consideration. The synchronisation of individuals, or lack thereof, also provides insight into relationships among individuals. There are seven types of primate social organisations identified in the literature (discussed below), including: solitary primate systems, pair-bonded systems, one-male-multi-female systems, one-female-multi-male systems, multi-male-multi-female systems, fission fusion societies, and multilevel societies. Interestingly, primate social organisation is not necessarily species-specific. For instance, an example of within species (intra-species) variation would be tamarins and marmosets. These two primates are part of the callitrichidae family and have been observed to demonstrate pair-bonding systems in some populations while others have one-male unit (OMU) systems. Solitary primate systems, sometimes referred to as neighbourhood systems, occur when an adult male's territory overlaps with one (or more) adult female's territory and individuals conduct activities (most often foraging or offspring care) independently from one another. In this system, solitary does not imply antisocial, but rather behaviour is characterised by this lack of synchronisation among the individuals. In fact, many solitary primates maintain social networks by using vocal or olfactory signals to communicate. Examples of solitary primates: orangutans, galagos, lorises, some species of lemurs, some tarsiers Pair-bonded systems, or pair-living primates, are small social units consisting of one adult male and one adult female, and their immature offspring. There are factors of time and space that define this type social system. Firstly, pair-bonds must demonstrate a long-term affiliative partnership for at least one year or one seasonal cycle. Secondly, there must also be a higher frequency of association (spatial proximity) between the bonded-pair individuals than there is with other individuals. Paternal care of offspring is a relatively uncommon trait in primate social systems; however, the monogamous mating system often observed (though it should not be assumed) in pair-bonding generates an equal variance for offspring success for both pair members. Thus, paternal involvement in off-spring rearing is much likelier to be observed in primate species where pair-living occurs. Examples of pair-bonded primate species: titi monkeys, owl monkeys, some species of marmosets and tamarins, many species of siamangs and gibbons One-female-multi-male groups are composed of one reproductive adult female and two or more adult male partners in the group. If there are other associated females within the group, they will likely have their reproductivity suppressed either via agonistic behaviours (aggressive and submissive interactions) or olfactory signals (such as pheromones). 
This social system promotes cooperative breeding (or alloparenting), where the non-breeding individuals assist in providing care for the offspring produced by the main breeding female. Examples of one-female-multi-male structured primate species: many species of tamarin and marmoset One-male-multi-female groups are usually characterised by a single resident male who defends a group of (often related) adult females against males from outside the group. While tenure is held, this form of social organisation allows a male exclusive access to reproductive females for breeding purposes. A resident male often suffers challenges from extra-group males (perhaps belonging to all-male bachelor groups), whereby these males may attempt takeovers with the goal of gaining sole access to the reproductive females. A takeover by a new resident male could lead to infanticide (infant killing). This behaviour is interpreted as a strategic attempt to bring females back into estrus, which allows mating opportunities to occur sooner for the new resident male. Examples of one-male-multi-female structured primate species: some species of gorillas, numerous colobine and guenon species, patas monkeys, howler monkeys Multi-male-multi-female social systems are characterised by associations between larger numbers of individuals forming groups. Since individuals are able to mate with multiple partners, paternity is often hidden or skewed, which helps ensure the survival of offspring. A variety of social relationships and bonds exist among multi-male-multi-female group members. For instance, some research has led to observations of dyadic relationships, or friendships. These friendships are more moderate forms of the pair-bonded social structures, existing within the multi-male-multi-female system. For instance, one study of savannah baboons (Papio cynocephalus) observed that the lactating females in the group would more closely associate with specific adult males. As further research is conducted on primate friendships, three main benefits have been hypothesised: close associations with a specific male (1) tend to discourage infanticide, (2) tend to reduce the incidence of harassment of the female, and (3) stimulate paternal investment and care in the offspring. The benefits of friendships within the multi-male-multi-female group systems demonstrate advantages similar to those of pair-bonded systems. Examples of multi-male-multi-female structured primate species: many species of macaques, baboons, vervet monkeys, mangabeys, capuchins, squirrel monkeys, woolly monkeys, some colobine species, some lemurs (ring-tailed and sifaka). Fission fusion societies demonstrate a high degree of fluidity by splitting (fission) and merging (fusion) as the group moves across a landscape. This type of organisation is less cohesive than multi-male-multi-female groups, with patterns often reflecting the local availability of resources. For instance, if foraging patch sizes are small, the larger group will often break apart to forage and later merge in order to sleep. This type of society is typically characterised by female philopatry, where female kin lineages make up the core of the groups and males disperse to other groups. Interestingly, some researchers hypothesise that fission fusion societies may have been socially inherited from the last shared common ancestor of humans, chimpanzees and bonobos. 
Examples of primate species with fission fusion societies: humans, chimpanzees, bonobos, spider monkeys Multilevel societies, sometimes referred to as hierarchical or modular societies, are the largest and most complex form of primate social organisation. Social stratification of these societies is discrete and has at least one stable core unit. Typically, multilevel societies are composed of between two and four levels of social structures: one-male units (OMUs, or harems) nested within clans, which are nested within bands, which are nested within a troop. OMUs are composed of a single-male breeding unit (a leader male), several females, and may even have a follower male. Similar to the resident male of a one-male-multi-female group, the OMU leader male is susceptible to takeovers by outside adult males. In some species, there is an additional level to the society: clans. The clan level is nested between the OMU and the band level. Clans consist of OMUs and of all-male units (AMUs) of bachelor males (either kin-related or not). Finally, a band is a coalesced grouping of OMUs who routinely sleep and forage together. The troop is a temporary aggregation of bands who might also forage or sleep in the same sites depending on environmental constraints. While multilevel societies might seem similar to fission fusion societies, they are not. Fission fusion societies have a dynamic element with routine variability whereas multilevel societies maintain stability through the hierarchy of core units. In order to fully understand how these complex societies function, it is important to observe social relationships and their interactions not only within tiers, but between them as well. Modular systems are considered to be an evolutionary construct resulting from the need to split up larger groups, whether they are large multi-male-multi-female groups or an amalgamation of closely related units. Examples of primate species with multilevel societies: hamadryas baboons, geladas, snub-nosed monkeys Primate social structures Primate social structures are meant to describe the diverse relationships that exist between individuals, as well as the patterns of interactions that define them. Researchers hypothesize that environmental and social pressures have allowed for a whole array of inter-individual (between individuals) relationships that promote inclusive group fitness. Inter-individual relationships are thought to be influenced by sex-related variables and can occur (1) between females, (2) between males or (3) between members of the opposite sex. Factors influencing inter-female relationships are primarily thought to be: food competition; group size; and dispersal patterns. These three elements will characterize the degree of competition among female group members. For instance, in a female philopatric society there are often stable kin-based hierarchies that develop. Conversely, in male philopatric species or egalitarian societies, females regularly transfer between groups (eliminating the potential for hierarchies or coalitions) leading to female bonding as a sole mechanism for resource defence against other groups. Inter-male relationships tend to be characterized by agonism and competition over access to females. Socioecological theory predicts that fierce competition exists among male group members over access to females, leading to higher frequencies of agonistic interactions being common. 
Some species of primates demonstrate male-male relationships leading to alliances and affiliative behaviors when inclusive group fitness is being prioritized over individual fitness. Finally, intersexual relationships (between adult male and adult female individuals) are also shaped by a number of factors, including sexual selection, dispersal patterns, dominance structures, certainty of paternity, risk of infanticide and/or the level of sexual dimorphism that is present within a species. Affinity and affiliation between individuals is often largely determined by the dispersal patterns characterizing a primate social system. For instance, chimpanzees (Pan troglodytes) have patrilineal social systems, where the males usually remain in their natal groups and the females emigrate into neighboring groups. Conversely, in the matrilineal societies of bonobos (Pan paniscus), it is the females who remain in their natal groups and the males who disperse to new groups. Dispersal patterns will also likely affect the structure or organization of social hierarchies. There are also affiliative behaviors which encourage stronger associations among individuals over time. Close proximity, grooming and non-aggressive social interactions are expected characteristics of well-bonded primates. Grooming is a multifunctional behavior. Firstly, it is practical. Grooming allows the opportunity for unwanted dirt, dead skin, debris or ectoparasites to be removed from an individual's hair or fur. Moreover, it is a social activity. Grooming helps initiate new relationships and maintain existing ones; it can be used to deflate aggressive social interactions; and, it is beneficial to an individual's health since grooming has been linked to reductions in stress. Agonistic interactions, or agonism, refers to the frequency and degree of aggressive and submissive interactions occurring between individuals. The frequency at which individuals are being subjected to agonistic interactions could be related to factors such as rank (there is evidence of both high- and low-ranking individuals being targets of conspecific harassment) or dispersal patterns (non-resident individuals attempting to emigrate into other groups can often be at higher-risk of harassment from resident group members). Primate social systems and their organisation exist across a spectrum. While some systems reflect a strict dominance hierarchy, others are characterised by more egalitarian structures. A confluence of variables and behaviours, such as diet or dispersal patterns, are thought to shape social systems. Many forms of social hierarchies exist in primate systems. In resident-nepotistic intolerant hierarchies, the stable hierarchy is based on kinship and rank can be linearly traced, as it is inherited. In contrast, in resident-nepotistic-tolerant hierarchies, stability is maintained via inter-individual coalitions and tolerance by dominant individuals. In this system, power is not ultimate; it is partially mitigated by cooperation among subordinate individuals. Another form of dominance structure is related to age. For example, some gorillas demonstrate an age-graded dominance structure: wherein the eldest male member is the highest-ranking dominant male (or alpha). Primate mating systems Primate mating systems infer both a social element and a genetic element. Therefore, a mating system should describe: (1) the interactions and resulting relationship between the mating pairs involved; and (2) the reproductive outcomes from the mating system. 
For instance, monogamy implies exclusive mating access and, thus, greater paternity certainty. Observed mating systems in primates include: monogamy, polyandry, polygyny and polygamy (as described below). Monogamy, or a monogamous mating system, is when one adult male and one adult female have a preferential partner for copulation. There is a long-term temporal element to this category of mating system (longer than one year or one seasonal cycle) and offspring resulting from this mating system will belong to the pair. Additionally, there is an assumption that each member of this partnership has a relatively equal likelihood of reproducing successfully. Though strict monogamy is rare in nature, some primate bonded pairs demonstrate monogamous (or partially monogamous) mating systems. In some monogamous pair-bonded species there have been observations of extra-pair copulations, wherein a male or female member and a partner of the opposite sex, other than the so-called mate, have been witnessed mating. Polyandry, or a polyandrous mating system, is when one reproductive adult female mates with two or more different adult males. In this mating system, the adult males mate exclusively with the adult female. Polygyny, or a polygynous mating system, is when one adult male mates with two or more adult females. It is the most common type of mating system observed in primate studies. Polygyny can occur as a result of spatial constraints where solitary males are able to defend access to nearby solitary females. Another pattern reflects a scramble competition, wherein adult males roam the landscape in search of sexually receptive females, moving on shortly after mating. Harem-polygyny occurs when a single adult male defends access to multiple females in order to gain exclusive mating access. Finally, groups of males might form coalitions in order to successfully defend mating access to females. Polygamy, or a polygamous mating system, is when both males and females mate with two or more partners. In this mating system, offspring paternity might remain unknown. References Primate behavior Sociobiology
Primate sociality
Biology
3,786
2,670,362
https://en.wikipedia.org/wiki/Beta%20Sculptoris
Beta Sculptoris, Latinized from β Sculptoris, is a single, blue-white hued star in the southern constellation of Sculptor. It has an apparent visual magnitude of 4.37, which is bright enough to be seen with the naked eye. Based upon an annual parallax shift of 18.74 mas as seen from Earth, it is located 174 light years from the Sun. This is a B-type giant star with a stellar classification of B9.5IIIp(HgMnSi). It belongs to the class of chemically peculiar stars known as a Mercury-Manganese star, showing overabundances of mercury, manganese, and silicon in its spectrum. It is a suspected α2 CVn variable with magnitude variation from 4.35 to 4.39. The star has nearly three times the mass of the Sun and double the Sun's radius. It is radiating 81 times the Sun's luminosity from its photosphere at an effective temperature of 12,110 K. References B-type giants Alpha2 Canum Venaticorum variables Mercury-manganese stars Suspected variables Sculptor (constellation) Sculptoris, Beta CD-38 15527 221507 116231 8937
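The distance quoted above follows directly from the parallax: a parallax of p milliarcseconds corresponds to 1000/p parsecs, and one parsec is about 3.2616 light years. A minimal Python sketch of that arithmetic (the function name and constant are illustrative, not taken from any astronomy library):

```python
# Convert an annual parallax (milliarcseconds) into a distance estimate.
# Distance in parsecs is the reciprocal of the parallax in arcseconds,
# and one parsec is roughly 3.2616 light years.

LY_PER_PARSEC = 3.2616

def parallax_to_distance(parallax_mas: float):
    """Return (distance in parsecs, distance in light years)."""
    parsecs = 1000.0 / parallax_mas   # 1000 mas per arcsecond
    return parsecs, parsecs * LY_PER_PARSEC

pc, ly = parallax_to_distance(18.74)
print(f"{pc:.1f} pc ~ {ly:.0f} ly")   # about 53.4 pc, i.e. roughly 174 ly
```

Running the sketch with the 18.74 mas parallax quoted above reproduces the figure of about 174 light years given in the text.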
Beta Sculptoris
Astronomy
253
1,896,782
https://en.wikipedia.org/wiki/Area%20of%20a%20circle
In geometry, the area enclosed by a circle of radius r is πr². Here, the Greek letter π represents the constant ratio of the circumference of any circle to its diameter, approximately equal to 3.14159. One method of deriving this formula, which originated with Archimedes, involves viewing the circle as the limit of a sequence of regular polygons with an increasing number of sides. The area of a regular polygon is half its perimeter multiplied by the distance from its center to its sides, and because the sequence tends to a circle, the corresponding formula–that the area is half the circumference times the radius–namely, A = ½ × 2πr × r = πr², holds for a circle. Terminology Although often referred to as the area of a circle in informal contexts, strictly speaking, the term disk refers to the interior region of the circle, while circle is reserved for the boundary only, which is a curve and covers no area itself. Therefore, the area of a disk is the more precise phrase for the area enclosed by a circle. History Modern mathematics can obtain the area using the methods of integral calculus or its more sophisticated offspring, real analysis. However, the area of a disk was studied by the Ancient Greeks. Eudoxus of Cnidus in the fifth century B.C. had found that the area of a disk is proportional to its radius squared. Archimedes used the tools of Euclidean geometry to show that the area inside a circle is equal to that of a right triangle whose base has the length of the circle's circumference and whose height equals the circle's radius in his book Measurement of a Circle. The circumference is 2πr, and the area of a triangle is half the base times the height, yielding the area πr² for the disk. Prior to Archimedes, Hippocrates of Chios was the first to show that the area of a disk is proportional to the square of its diameter, as part of his quadrature of the lune of Hippocrates, but did not identify the constant of proportionality. Historical arguments A variety of arguments have been advanced historically to establish the equation A = πr² to varying degrees of mathematical rigor. The most famous of these is Archimedes' method of exhaustion, one of the earliest uses of the mathematical concept of a limit, as well as the origin of Archimedes' axiom which remains part of the standard analytical treatment of the real number system. The original proof of Archimedes is not rigorous by modern standards, because it assumes that we can compare the length of arc of a circle to the length of a secant and a tangent line, and similar statements about the area, as geometrically evident. Using polygons The area of a regular polygon is half its perimeter times the apothem. As the number of sides of the regular polygon increases, the polygon tends to a circle, and the apothem tends to the radius. This suggests that the area of a disk is half the circumference of its bounding circle times the radius. Archimedes's proof Following Archimedes' argument in The Measurement of a Circle (c. 260 BCE), compare the area enclosed by a circle to a right triangle whose base has the length of the circle's circumference and whose height equals the circle's radius. If the area of the circle is not equal to that of the triangle, then it must be either greater or less. We eliminate each of these by contradiction, leaving equality as the only possibility. We use regular polygons in the same way. Not greater Suppose that the area C enclosed by the circle is greater than the area T = cr/2 of the triangle. Let E denote the excess amount.
Inscribe a square in the circle, so that its four corners lie on the circle. Between the square and the circle are four segments. If the total area of those gaps, G4, is greater than E, split each arc in half. This makes the inscribed square into an inscribed octagon, and produces eight segments with a smaller total gap, G8. Continue splitting until the total gap area, Gn, is less than E. Now the area of the inscribed polygon, Pn = C − Gn, must be greater than that of the triangle. But this forces a contradiction, as follows. Draw a perpendicular from the center to the midpoint of a side of the polygon; its length, h, is less than the circle radius. Also, let each side of the polygon have length s; then the sum of the sides is ns, which is less than the circle circumference. The polygon area consists of n equal triangles with height h and base s, thus equals nhs/2. But since h < r and ns < c, the polygon area must be less than the triangle area, cr/2, a contradiction. Therefore, our supposition that C might be greater than T must be wrong. Not less Suppose that the area enclosed by the circle is less than the area T of the triangle. Let D denote the deficit amount. Circumscribe a square, so that the midpoint of each edge lies on the circle. If the total area gap between the square and the circle, G4, is greater than D, slice off the corners with circle tangents to make a circumscribed octagon, and continue slicing until the gap area is less than D. The area of the polygon, Pn, must be less than T. This, too, forces a contradiction. For, a perpendicular to the midpoint of each polygon side is a radius, of length r. And since the total side length is greater than the circumference, the polygon consists of n identical triangles with total area greater than T. Again we have a contradiction, so our supposition that C might be less than T must be wrong as well. Therefore, it must be the case that the area enclosed by the circle is precisely the same as the area of the triangle. This concludes the proof. Rearrangement proof Following Satō Moshun , Nicholas of Cusa and Leonardo da Vinci , we can use inscribed regular polygons in a different way. Suppose we inscribe a hexagon. Cut the hexagon into six triangles by splitting it from the center. Two opposite triangles both touch two common diameters; slide them along one so the radial edges are adjacent. They now form a parallelogram, with the hexagon sides making two opposite edges, one of which is the base, s. Two radial edges form slanted sides, and the height, h is equal to its apothem (as in the Archimedes proof). In fact, we can also assemble all the triangles into one big parallelogram by putting successive pairs next to each other. The same is true if we increase it to eight sides and so on. For a polygon with 2n sides, the parallelogram will have a base of length ns, and a height h. As the number of sides increases, the length of the parallelogram base approaches half the circle circumference, and its height approaches the circle radius. In the limit, the parallelogram becomes a rectangle with width r and height r. {| class="wikitable" frame="vsides" style="text-align:center" cellspacing="0" cellpadding="3" |+ Unit disk area by rearranging n polygons. |- ! colspan="2" | polygon | rowspan="11" style="padding:1px;"| ! colspan="3" | parallelogram |- ! n !! side !! base !! height !! 
area |- | align="right" | 4 || 1.4142136 || 2.8284271 || 0.7071068 || 2.0000000 |- | align="right" | 6 || 1.0000000 || 3.0000000 || 0.8660254 || 2.5980762 |- | align="right" | 8 || 0.7653669 || 3.0614675 || 0.9238795 || 2.8284271 |- | align="right" | 10 || 0.6180340 || 3.0901699 || 0.9510565 || 2.9389263 |- | align="right" | 12 || 0.5176381 || 3.1058285 || 0.9659258 || 3.0000000 |- | align="right" | 14 || 0.4450419 || 3.1152931 || 0.9749279 || 3.0371862 |- | align="right" | 16 || 0.3901806 || 3.1214452 || 0.9807853 || 3.0614675 |- | align="right" | 96 || 0.0654382 || 3.1410320 || 0.9994646 || 3.1393502 |- | ∞ || 1/∞ || || 1 || |} Modern proofs There are various equivalent definitions of the constant π. The conventional definition in pre-calculus geometry is the ratio of the circumference of a circle to its diameter: However, because the circumference of a circle is not a primitive analytical concept, this definition is not suitable in modern rigorous treatments. A standard modern definition is that is equal to twice the least positive root of the cosine function or, equivalently, the half-period of the sine (or cosine) function. The cosine function can be defined either as a power series, or as the solution of a certain differential equation. This avoids any reference to circles in the definition of , so that statements about the relation of to the circumference and area of circles are actually theorems, rather than definitions, that follow from the analytical definitions of concepts like "area" and "circumference". The analytical definitions are seen to be equivalent, if it is agreed that the circumference of the circle is measured as a rectifiable curve by means of the integral The integral appearing on the right is an abelian integral whose value is a half-period of the sine function, equal to . Thus is seen to be true as a theorem. Several of the arguments that follow use only concepts from elementary calculus to reproduce the formula , but in many cases to regard these as actual proofs, they rely implicitly on the fact that one can develop trigonometric functions and the fundamental constant in a way that is totally independent of their relation to geometry. We have indicated where appropriate how each of these proofs can be made totally independent of all trigonometry, but in some cases that requires more sophisticated mathematical ideas than those afforded by elementary calculus. Onion proof Using calculus, we can sum the area incrementally, partitioning the disk into thin concentric rings like the layers of an onion. This is the method of shell integration in two dimensions. For an infinitesimally thin ring of the "onion" of radius t, the accumulated area is 2t dt, the circumferential length of the ring times its infinitesimal width (one can approximate this ring by a rectangle with width=2t and height=dt). This gives an elementary integral for a disk of radius r. It is rigorously justified by the multivariate substitution rule in polar coordinates. Namely, the area is given by a double integral of the constant function 1 over the disk itself. If D denotes the disk, then the double integral can be computed in polar coordinates as follows: which is the same result as obtained above. An equivalent rigorous justification, without relying on the special coordinates of trigonometry, uses the coarea formula. Define a function by . Note ρ is a Lipschitz function whose gradient is a unit vector (almost everywhere). Let D be the disc in . 
We will show that , where is the two-dimensional Lebesgue measure in . We shall assume that the one-dimensional Hausdorff measure of the circle is , the circumference of the circle of radius r. (This can be taken as the definition of circumference.) Then, by the coarea formula, Triangle proof Similar to the onion proof outlined above, we could exploit calculus in a different way in order to arrive at the formula for the area of a disk. Consider unwrapping the concentric circles to straight strips. This will form a right angled triangle with r as its height and 2r (being the outer slice of onion) as its base. Finding the area of this triangle will give the area of the disk The opposite and adjacent angles for this triangle are respectively in degrees 9.0430611..., 80.956939... and in radians 0.1578311... , 1.4129651.... Explicitly, we imagine dividing up a circle into triangles, each with a height equal to the circle's radius and a base that is infinitesimally small. The area of each of these triangles is equal to . By summing up (integrating) all of the areas of these triangles, we arrive at the formula for the circle's area: It too can be justified by a double integral of the constant function 1 over the disk by reversing the order of integration and using a change of variables in the above iterated integral: Making the substitution converts the integral to which is the same as the above result. The triangle proof can be reformulated as an application of Green's theorem in flux-divergence form (i.e. a two-dimensional version of the divergence theorem), in a way that avoids all mention of trigonometry and the constant . Consider the vector field in the plane. So the divergence of r is equal to two, and hence the area of a disc D is equal to By Green's theorem, this is the same as the outward flux of r across the circle bounding D: where n is the unit normal and ds is the arc length measure. For a circle of radius R centered at the origin, we have and , so the above equality is The integral of ds over the whole circle is just the arc length, which is its circumference, so this shows that the area A enclosed by the circle is equal to times the circumference of the circle. Another proof that uses triangles considers the area enclosed by a circle to be made up of an infinite number of triangles (i.e. the triangles each have an angle of at the centre of the circle), each with an area of (derived from the expression for the area of a triangle: ). Note that due to small angle approximation. Through summing the areas of the triangles, the expression for the area of the circle can therefore be found: Semicircle proof Note that the area of a semicircle of radius r can be computed by the integral . By trigonometric substitution, we substitute , hence The last step follows since the trigonometric identity implies that and have equal integrals over the interval , using integration by substitution. But on the other hand, since , the sum of the two integrals is the length of that interval, which is . Consequently, the integral of is equal to half the length of that interval, which is . Therefore, the area of a circle of radius r, which is twice the area of the semi-circle, is equal to . This particular proof may appear to beg the question, if the sine and cosine functions involved in the trigonometric substitution are regarded as being defined in relation to circles. 
However, as noted earlier, it is possible to define sine, cosine, and in a way that is totally independent of trigonometry, in which case the proof is valid by the change of variables formula and Fubini's theorem, assuming the basic properties of sine and cosine (which can also be proved without assuming anything about their relation to circles). Isoperimetric inequality The circle is the closed curve of least perimeter that encloses the maximum area. This is known as the isoperimetric inequality, which states that if a rectifiable Jordan curve in the Euclidean plane has perimeter C and encloses an area A (by the Jordan curve theorem) then Moreover, equality holds in this inequality if and only if the curve is a circle, in which case and . Fast approximation The calculations Archimedes used to approximate the area numerically were laborious, and he stopped with a polygon of 96 sides. A faster method uses ideas of Willebrord Snell (Cyclometricus, 1621), further developed by Christiaan Huygens (De Circuli Magnitudine Inventa, 1654), described in . Archimedes' doubling method Given a circle, let un be the perimeter of an inscribed regular n-gon, and let Un be the perimeter of a circumscribed regular n-gon. Then un and Un are lower and upper bounds for the circumference of the circle that become sharper and sharper as n increases, and their average (un + Un)/2 is an especially good approximation to the circumference. To compute un and Un for large n, Archimedes derived the following doubling formulae:   (geometric mean), and    (harmonic mean). Starting from a hexagon, Archimedes doubled n four times to get a 96-gon, which gave him a good approximation to the circumference of the circle. In modern notation, we can reproduce his computation (and go further) as follows. For a unit circle, an inscribed hexagon has u6 = 6, and a circumscribed hexagon has U6 = 4. Doubling seven times yields {| class="wikitable" frame="vsides" style="text-align:center" cellspacing="0" cellpadding="3" |+ Archimedes doubling seven times; n = 6 × 2k. |- style="background-color:#eeeeee" ! k !! n !! un !! Un !! |- | 0 || 6 || 6.0000000 || 6.9282032 || 3.2320508 |- | 1 || 12 || 6.2116571 || 6.4307806 || 3.1606094 |- | 2 || 24 || 6.2652572 || 6.3193199 || 3.1461443 |- | 3 || 48 || 6.2787004 || 6.2921724 || 3.1427182 |- | 4 || 96 || 6.2820639 || 6.2854292 || 3.1418733 |- | 5 || 192 || 6.2829049 || 6.2837461 || 3.1416628 |- | 6 || 384 || 6.2831152 || 6.2833255 || 3.1416102 |- | 7 || 768 || 6.2831678 || 6.2832204 || 3.1415970 |} (Here approximates the circumference of the unit circle, which is 2, so approximates .) The last entry of the table has 355⁄113 as one of its best rational approximations; i.e., there is no better approximation among rational numbers with denominator up to 113. The number 355⁄113 is also an excellent approximation to , attributed to Chinese mathematician Zu Chongzhi, who named it Milü. This approximation is better than any other rational number with denominator less than 16,604. The Snell–Huygens refinement Snell proposed (and Huygens proved) a tighter bound than Archimedes': This for n = 48 gives a better approximation (about 3.14159292) than Archimedes' method for n = 768. Derivation of Archimedes' doubling formulae Let one side of an inscribed regular n-gon have length sn and touch the circle at points A and B. Let A′ be the point opposite A on the circle, so that A′A is a diameter, and A′AB is an inscribed triangle on a diameter. By Thales' theorem, this is a right triangle with right angle at B. 
Let the length of A′B be cn, which we call the complement of sn; thus cn2+sn2 = (2r)2. Let C bisect the arc from A to B, and let C′ be the point opposite C on the circle. Thus the length of CA is s2n, the length of C′A is c2n, and C′CA is itself a right triangle on diameter C′C. Because C bisects the arc from A to B, C′C perpendicularly bisects the chord from A to B, say at P. Triangle C′AP is thus a right triangle, and is similar to C′CA since they share the angle at C′. Thus all three corresponding sides are in the same proportion; in particular, we have C′A : C′C = C′P : C′A and AP : C′A = CA : C′C. The center of the circle, O, bisects A′A, so we also have triangle OAP similar to A′AB, with OP half the length of A′B. In terms of side lengths, this gives us In the first equation C′P is C′O+OP, length r + cn, and C′C is the diameter, 2r. For a unit circle we have the famous doubling equation of Ludolph van Ceulen, If we now circumscribe a regular n-gon, with side A″B″ parallel to AB, then OAB and OA″B″ are similar triangles, with A″B″ : AB = OC : OP. Call the circumscribed side Sn; then this is Sn : sn = 1 : 1⁄2cn. (We have again used that OP is half the length of A′B.) Thus we obtain Call the inscribed perimeter un = nsn, and the circumscribed perimeter Un = nSn. Then combining equations, we have so that This gives a geometric mean equation. We can also deduce or This gives a harmonic mean equation. Dart approximation When more efficient methods of finding areas are not available, we can resort to "throwing darts". This Monte Carlo method uses the fact that if random samples are taken uniformly scattered across the surface of a square in which a disk resides, the proportion of samples that hit the disk approximates the ratio of the area of the disk to the area of the square. This should be considered a method of last resort for computing the area of a disk (or any shape), as it requires an enormous number of samples to get useful accuracy; an estimate good to 10−n requires about 100n random samples . Finite rearrangement We have seen that by partitioning the disk into an infinite number of pieces we can reassemble the pieces into a rectangle. A remarkable fact discovered relatively recently is that we can dissect the disk into a large but finite number of pieces and then reassemble the pieces into a square of equal area. This is called Tarski's circle-squaring problem. The nature of Laczkovich's proof is such that it proves the existence of such a partition (in fact, of many such partitions) but does not exhibit any particular partition. Non-Euclidean circles Circles can be defined in non-Euclidean geometry, and in particular in the hyperbolic and elliptic planes. For example, the unit sphere is a model for the two-dimensional elliptic plane. It carries an intrinsic metric that arises by measuring geodesic length. The geodesic circles are the parallels in a geodesic coordinate system. More precisely, fix a point that we place at the zenith. Associated to that zenith is a geodesic polar coordinate system , , , where z is the point . In these coordinates, the geodesic distance from z to any other point having coordinates is the value of at x. A spherical circle is the set of points a geodesic distance R from the zenith point z. Equivalently, with a fixed embedding into , the spherical circle of radius centered at z is the set of x in such that . We can also measure the area of the spherical disk enclosed within a spherical circle, using the intrinsic surface area measure on the sphere. 
The area of the disk of radius R is then given by More generally, if a sphere has radius of curvature , then the area of the disk of radius R is given by Observe that, as an application of L'Hôpital's rule, this tends to the Euclidean area in the flat limit . The hyperbolic case is similar, with the area of a disk of intrinsic radius R in the (constant curvature ) hyperbolic plane given by where cosh is the hyperbolic cosine. More generally, for the constant curvature hyperbolic plane, the answer is These identities are important for comparison inequalities in geometry. For example, the area enclosed by a circle of radius R in a flat space is always greater than the area of a spherical circle and smaller than a hyperbolic circle, provided all three circles have the same (intrinsic) radius. That is, for all . Intuitively, this is because the sphere tends to curve back on itself, yielding circles of smaller area than those in the plane, whilst the hyperbolic plane, when immersed into space, develops fringes that produce additional area. It is more generally true that the area of the circle of a fixed radius R is a strictly decreasing function of the curvature. In all cases, if is the curvature (constant, positive or negative), then the isoperimetric inequality for a domain with area A and perimeter L is where equality is achieved precisely for the circle. Generalizations We can stretch a disk to form an ellipse. Because this stretch is a linear transformation of the plane, it has a distortion factor which will change the area but preserve ratios of areas. This observation can be used to compute the area of an arbitrary ellipse from the area of a unit circle. Consider the unit circle circumscribed by a square of side length 2. The transformation sends the circle to an ellipse by stretching or shrinking the horizontal and vertical diameters to the major and minor axes of the ellipse. The square gets sent to a rectangle circumscribing the ellipse. The ratio of the area of the circle to the square is /4, which means the ratio of the ellipse to the rectangle is also /4. Suppose a and b are the lengths of the major and minor axes of the ellipse. Since the area of the rectangle is ab, the area of the ellipse is ab/4. We can also consider analogous measurements in higher dimensions. For example, we may wish to find the volume inside a sphere. When we have a formula for the surface area, we can use the same kind of "onion" approach we used for the disk. See also Area-equivalent radius Area of a triangle References Bibliography (Originally published by Cambridge University Press, 1897, based on J. L. Heiberg's Greek version.) (Originally Grundzüge der Mathematik, Vandenhoeck & Ruprecht, Göttingen, 1971.) External links Science News on Tarski problem Area Circles Articles containing proofs de:Kreis#Kreisfläche
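The doubling recurrences in the Fast approximation section above are easy to reproduce numerically. Below is a short Python sketch under the stated assumptions (unit circle, starting hexagon with u6 = 6 and U6 = 4√3); the function name and table layout are illustrative rather than taken from any source:

```python
# Archimedes' doubling method for the circumference of the unit circle,
# following the "Fast approximation" section: each doubling takes the
# harmonic mean for the circumscribed perimeter U and the geometric mean
# for the inscribed perimeter u.
import math

def archimedes(doublings: int = 7):
    n, u, U = 6, 6.0, 4.0 * math.sqrt(3.0)    # hexagon: u6 = 6, U6 = 4*sqrt(3)
    rows = [(n, u, U, (u + U) / 4.0)]          # (u+U)/2 estimates 2*pi, so /4 estimates pi
    for _ in range(doublings):
        U = 2.0 * u * U / (u + U)              # harmonic mean  -> U_{2n}
        u = math.sqrt(u * U)                   # geometric mean -> u_{2n}
        n *= 2
        rows.append((n, u, U, (u + U) / 4.0))
    return rows

for n, u, U, pi_est in archimedes():
    print(f"{n:4d}  {u:.7f}  {U:.7f}  {pi_est:.7f}")
```

Seven doublings take the hexagon to a 768-gon and reproduce the last row of the table above, with (u + U)/4 ≈ 3.1415970.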
Area of a circle
Physics,Mathematics
5,915
8,616,527
https://en.wikipedia.org/wiki/Low%20voltage
In electrical engineering, low voltage is a relative term, the definition varying by context. Different definitions are used in electric power transmission and distribution, compared with electronics design. Electrical safety codes define "low voltage" circuits that are exempt from the protection required at higher voltages. These definitions vary by country and specific codes or regulations. IEC Definition The International Electrotechnical Commission (IEC) standard IEC 61140:2016 defines low voltage as 0 to 1000 V AC RMS or 0 to 1500 V DC. Another standard, IEC 60038 (IEC Standard Voltages), which defines power distribution system voltages around the world, specifies supply system low voltage as the range 50 to 1000 V AC or 120 to 1500 V DC. In electrical power systems, low voltage most commonly refers to the mains voltages as used by domestic and light industrial and commercial consumers. "Low voltage" in this context still presents a risk of electric shock, but only a minor risk of electric arcs through the air. United Kingdom British Standard BS 7671, Requirements for Electrical Installations. IET Wiring Regulations, defines supply system low voltage as exceeding extra-low voltage (up to 50 V AC or 120 V ripple-free DC) but not exceeding 1000 V AC or 1500 V DC. The ripple-free direct current requirement only applies to 120 V DC, not to any DC voltage above that. For example, a direct current that exceeds 1500 V during voltage fluctuations is not categorized as low voltage. United States In the United States, the National Electrical Code (NEC), NFPA 70, article 725 (2005), defines low distribution system voltage (LDSV) as up to 49 V. The NFPA standard 79 defines protected extra-low voltage (PELV) as a nominal voltage of 30 Vrms or 60 V DC ripple-free for dry locations, and 6 Vrms or 15 V DC in all other cases. Standard NFPA 70E, Article 130, 2021 Edition, omits energized electrical conductors and circuit parts operating at less than 50 V from its safety requirements for work involving electrical hazards when an electrically safe work condition cannot be established. UL standard 508A, article 43 (table 43.1) defines 0 to 20 V peak / 5 A or 20.1 to 42.4 V peak / 100 VA as low-voltage limited energy (LVLE) circuits. See also High voltage Low Voltage Directive References Further reading Defining Low Voltage Circuits Electricity Electrical engineering Electrical safety
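The IEC 61140 thresholds quoted above lend themselves to a simple range check. A minimal Python sketch (the function name and example calls are illustrative; only the 1000 V AC / 1500 V DC limits come from the standard as cited above):

```python
# Classify a nominal voltage against the IEC 61140:2016 low-voltage band
# quoted above: 0 to 1000 V AC (RMS) or 0 to 1500 V DC.

def is_iec_low_voltage(volts: float, current_type: str) -> bool:
    """current_type is 'AC' (RMS volts) or 'DC'."""
    if volts < 0:
        raise ValueError("use the magnitude of the nominal voltage")
    limit = 1000.0 if current_type.upper() == "AC" else 1500.0
    return volts <= limit

print(is_iec_low_voltage(230, "AC"))    # True  - domestic mains
print(is_iec_low_voltage(11000, "AC"))  # False - a distribution-level voltage
print(is_iec_low_voltage(1500, "DC"))   # True  - upper edge of the DC band
```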
Low voltage
Engineering
477
26,158,226
https://en.wikipedia.org/wiki/Prospective%20Outlook%20on%20Long-term%20Energy%20Systems
Prospective Outlook on Long-term Energy Systems (POLES) is a world simulation model for the energy sector that runs on the Vensim software. It is a techno-economic model with endogenous projection of energy prices, a complete accounting of energy demand and supply of numerous energy vectors and associated technologies, and a carbon dioxide and other greenhouse gases emissions module. History POLES was initially developed in the early 1990s in the Institute of Energy Policy and Economics IEPE (now EDDEN-CNRS) in Grenoble, France. It was conceived on the basis of research issues related to global energy supply and climate change and the long-term impact of energy policies. It was initially developed through a detailed description of sectoral energy demand, electricity capacity planning and fossil fuel exploration and production in the different world regions. Along its development it incorporated theoretical and practical expertise in many fields such as mathematics, economics, engineering, energy analysis, international trade and technical change. The initial development of POLES was financed by the JOULE II and III programmes of the European Commission’s Third and Fourth Framework Programmes (FP) for Research and Technological Development (1990-1994 and 1994-1998) as well as by the French CNRS. Since then, the model has been developed extensively through several projects, some partly financed by FP5, FP6 and FP7, and in collaboration between the EDDEN-CNRS, the consulting company Enerdata and the European Joint Research Centre IPTS. With a history spanning twenty years, it is one of the few energy models worldwide that benefits from a continuous development process and expertise over such an extended time period. Structure The model provides a complete system for the simulation and economic analysis of the world’s energy sector up to 2050. POLES is a partial equilibrium model with a yearly recursive simulation process with a combination of price-induced behavioural equations and a cost- and performance-based system for a large number of energy or energy-related technologies. Contrary to several other energy sector models, international energy prices are endogenous. The main exogenous variables are the gross domestic product and population for each country or region. The model’s structure corresponds to a system of interconnected modules and articulates three levels of analysis: international energy markets, regional energy balances, and national energy demand (which includes new technologies, electricity production, primary energy production systems and sectoral greenhouse gas emissions). POLES breaks down the world into 66 regions, of which 54 correspond to countries (including the 28 countries of the European Union) and 12 correspond to countries aggregates; for each of these regions, a full energy balance is modelled. The model covers 15 energy demand sectors in each region. Demand sectors Each demand sector is described with a high degree of detail, including activity indicators, short- and long-term energy prices and associated elasticities and technological evolution trends (thus including the dynamic cumulative processes associated with technological learning curves). This allows a strong economic consistency in the adjustment of supply and demand by region, as relative price changes at a sectoral level impact all key component of a region’s sector. Sectoral value added is simulated. 
Energy demand for each fuel in a sector follows a market share-based competition driven by energy prices and factors related to policy or development assumptions. The model is composed of the following demand sectors: Residential and Tertiary: two sectors. Industry: Energy uses in industry: four sectors, allowing for a detailed modelling of such energy-intensive industries such as the steel industry, the chemicals industry and the non-metallic minerals industry (cement, glass). Non-energy uses in industry: two sectors, for the transformation sectors such as plastics production and chemical feedstock production. Transport: four sectors (air, rail, road and other). Road transport modelling comprises several vehicle types (passenger cars, merchandise heavy trucks) and allows the study of inter-technology competition with the penetration of alternative vehicles (hybrids, electric or fuel cell vehicles). International bunkers: two sectors. Agriculture: one sector. Oil and gas supply There are 88 oil and gas production regions with inter-regional trade; these producing regions supply the international energy markets, which in turn feed the demand of the 66 aforementioned world regions. Fossil fuel supply modelisation includes a technological improvement in the oil recovery rate, a linkage between new discoveries and cumulative drilling and a feedback of the reserves/production ratio on the oil price. OPEC and non-OPEC production is differentiated. The model includes non-conventional oil resources such as oil shales and tar sands. Power Generation There are 30 electricity generation technologies, among which several technologies that are still marginal or planned, such as thermal production with carbon capture and storage or new nuclear designs. Price-induced diffusion tools such as feed-in tariffs can be included as drivers for projecting the future development of new energy technologies. The model distinguishes four typical daily load curves in a year, with two-hour steps. The load curves are met by a generation mix given by a merit order that is based on marginal costs of operation, maintenance and annualized capital costs. Expected power demand over the year influences investment decisions for new capacity planning in the next step. Emissions and carbon price The model includes accounting of greenhouse gas (GHG) emissions and allows visualising GHG flows on sectoral, regional and global levels. POLES covers fuel combustion-related emissions in all demand sectors, thus covering over half of global GHG emissions. The six Kyoto Protocol GHGs are covered (carbon dioxide, methane, nitrous oxide, sulphur hexafluoride, hydrofluorocarbons and perfluorocarbons). The model can be used to test the sensibility of the energy sector to the carbon price as applied to the price of fossil fuels on a regional level, as envisaged or experimented by cap and trade systems like the EU’s Emissions Trading Scheme. Databases The model’s databases have been developed by IPTS, EDDEN and Enerdata. Data on technological costs and performances were provided by the TECHPOL database. The data for historical energy demand, consumption and prices are compiled and provided by Enerdata. Uses The POLES model can be used to study or test the effect of different energy resources assumptions or energy policies and assess the importance of various driving variables behind energy demand and the penetration rates of certain electricity generation or end-use technologies. 
POLES does not directly provide the macro-economic impact of mitigation solutions as envisaged by the Stern Review, however it allows a detailed assessment of the costs associated with the development of low- or zero-carbon technologies. Linked with GHG emissions profiles, the model can produce marginal abatement cost curves (MACCs) for each region and sector at a desired time; these can be used to quantify the costs related to GHG emissions reduction or as an analysis tool for strategic areas for emissions control policies and emissions trading systems under different market configurations and trading rules. Studies including POLES simulations have been commissioned by international bodies such as several Directorates-General of the European Commission, national energy, environment, industry and transport agencies or private actors in the energy sector. Criticism POLES can model changes in sectoral value added and shifts of activity between sectors. However POLES is not a macroeconomic model in the sense that it uses the gross domestic product as an input and includes no feedback on it that could result from the evolution of the energy system: carbon pricing, falling oil production and its effect on transport and mobility, or growth induced by technological innovation (such as the IT boom of the 1990s). As such, it does not provide the total impact on society of, e.g., climate adaptation or mitigation (it does however quantify the total cost to the energy sector, including investment necessary in the development of low-carbon technologies). The model does not cover all greenhouse gases emissions, notably those related to agriculture (in part), land use, land-use change and forestry. As such, the climate component of the model does not allow to fully project GHG stocks, concentrations and associated temperature rises from anthropogenic climate change. See also Energy economics Energy modeling Energy policy UNFCCC External links Enerdata LEPII-EPE JRC IPTS References Energy economics Energy models
Prospective Outlook on Long-term Energy Systems
Environmental_science
1,690
31,540,845
https://en.wikipedia.org/wiki/Ritter%20reaction
The Ritter reaction (sometimes called the Ritter amidation) is a chemical reaction that transforms a nitrile into an N-alkyl amide using various electrophilic alkylating reagents. The original reaction formed the alkylating agent using an alkene in the presence of a strong acid. Mechanism and scope The Ritter reaction proceeds by the electrophilic addition of either a carbenium ion or covalent species to the nitrile. The resulting nitrilium ion is hydrolyzed to the desired amide. Primary, secondary, tertiary, and benzylic alcohols, as well as tert-butyl acetate, also successfully react with nitriles in the presence of strong acids to form amides via the Ritter reaction. A wide range of nitriles can be used. In particular, cyanide can be used to prepare formamides, which are useful precursors to isocyanides, or may also be hydrolysed to give amines. Applications A large scale application of the Ritter reaction is in the synthesis of tert-octylamine, by way of the intermediate formamide. This process was originally described by Ritter in 1948, and an estimated 10,000 tons/y (year: 2000) of this and related lipophilic amines are prepared in this way. Otherwise, the Ritter reaction is most useful in the formation of amines and amides of pharmaceutical interest. Real world applications include Merck's industrial-scale synthesis of anti-HIV drug Crixivan (indinavir); the production of the falcipain-2 inhibitor PK-11195; the synthesis of the alkaloid aristotelone; and synthesis of Amantadine, an antiviral and antiparkinsonian drug. Other applications of the Ritter reaction include synthesis of dopamine receptor ligands and production of racemic amphetamine from allylbenzene and methyl cyanide. The Ritter reaction is inferior to most amination methods because it cogenerates substantial amounts of salts. Illustrative is the conversion of isobutylene to tert-butylamine using HCN and sulfuric acid followed by base neutralization. The weight of the salt byproduct is greater than the weight of the amine. In the laboratory, the Ritter reaction suffers from the necessity of an extremely strong acid catalyst. Other methods have been proposed in order to promote carbocation formation, including photocatalytic electron transfer or direct photolysis. History The reaction is named after John J. Ritter, who supervised the Ph.D. thesis work of P. Paul Minieri. References Addition reactions Name reactions Amide synthesis reactions Articles containing video clips
Ritter reaction
Chemistry
564
44,744,577
https://en.wikipedia.org/wiki/Sir%20Hans%20Krebs%20Medal
The Sir Hans Krebs Lecture and Medal is awarded annually by the Federation of European Biochemical Societies (FEBS) for outstanding achievements in Biochemistry and Molecular Biology or related sciences. It was endowed by the Lord Rank Centre for Research and named after the German-born British biochemist Sir Hans Adolf Krebs, well known for identifying the urea and citric acid cycles. The awardee receives a silver medal and presents one of the plenary lectures at the FEBS Congress. List of recipients Source: (1968–2002) 2022 Cecília Rodrigues (University of Lisbon, Portugal) 2019 Mathias Uhlen 2018 Albert J.R. Heck 2017 Carol V. Robinson 2016 Kári Stefánsson 2015 Jürgen Knoblich 2014 Michael N. Hall 2013 Richard J. Roberts 2012 V. Ramakrishnan 2011 Elena Conti 2010 Harald Stenmark 2009 Václav Hořejší 2008 Tim Hunt 2007 Tom Rapoport 2006 Aaron Ciechanover 2005 Thomas Jenuwein 2004 Ryszard Gryglewski 2003 No award? 2002 Jacques Pouysségur 2001 Sir Philip Cohen 2000 Thomas Steitz 1999 Stanley B. Prusiner 1998 Bengt I. Samuelsson 1997 David Baltimore 1996 Josef Stefaan Schell 1995 Kim Nasmyth 1994 Jean-Pierre Changeux 1993 Christiane Nüsslein-Volhard 1992 Robert Huber 1991 No Award 1990 Pierre Chambon 1989 Helmut Beinert 1988 No award 1987 Tom Blundell 1986 Gottfried Schatz 1985 Robert Joseph Paton Williams 1984 Richard Henderson 1983 Arthur Kornberg 1982 François Jacob 1981 Cesar Milstein 1980 Sydney Brenner (No lecture due to illness) 1979 Pierre Desnuelle 1978 Peter D. Mitchell 1977 Francis Crick 1976 No award 1975 Heinz-Gunter Wittmann 1974 Charles Weissmann 1973 Arthur B. Pardee 1972 Ephraim Katchalski 1971 David Chilton Phillips 1970 No Award 1969 Alexander Spirin 1968 Max Perutz (inaugural award) See also List of biochemistry awards References Awards established in 1968 Biochemistry awards European science and technology awards
Sir Hans Krebs Medal
Chemistry,Biology
410
11,264,285
https://en.wikipedia.org/wiki/Quantum%20graph
In mathematics and physics, a quantum graph is a linear, network-shaped structure of vertices connected on edges (i.e., a graph) in which each edge is given a length and where a differential (or pseudo-differential) equation is posed on each edge. An example would be a power network consisting of power lines (edges) connected at transformer stations (vertices); the differential equations would then describe the voltage along each of the lines, with boundary conditions for each edge provided at the adjacent vertices ensuring that the current added over all edges adds to zero at each vertex. Quantum graphs were first studied by Linus Pauling as models of free electrons in organic molecules in the 1930s. They also arise in a variety of mathematical contexts, e.g. as model systems in quantum chaos, in the study of waveguides, in photonic crystals and in Anderson localization, or as limit on shrinking thin wires. Quantum graphs have become prominent models in mesoscopic physics used to obtain a theoretical understanding of nanotechnology. Another, more simple notion of quantum graphs was introduced by Freedman et al. Aside from actually solving the differential equations posed on a quantum graph for purposes of concrete applications, typical questions that arise are those of controllability (what inputs have to be provided to bring the system into a desired state, for example providing sufficient power to all houses on a power network) and identifiability (how and where one has to measure something to obtain a complete picture of the state of the system, for example measuring the pressure of a water pipe network to determine whether or not there is a leaking pipe). Metric graphs A metric graph is a graph consisting of a set of vertices and a set of edges where each edge has been associated with an interval so that is the coordinate on the interval, the vertex corresponds to and to or vice versa. The choice of which vertex lies at zero is arbitrary with the alternative corresponding to a change of coordinate on the edge. The graph has a natural metric: for two points on the graph, is the shortest distance between them where distance is measured along the edges of the graph. Open graphs: in the combinatorial graph model edges always join pairs of vertices however in a quantum graph one may also consider semi-infinite edges. These are edges associated with the interval attached to a single vertex at . A graph with one or more such open edges is referred to as an open graph. Quantum graphs Quantum graphs are metric graphs equipped with a differential (or pseudo-differential) operator acting on functions on the graph. A function on a metric graph is defined as the -tuple of functions on the intervals. The Hilbert space of the graph is where the inner product of two functions is may be infinite in the case of an open edge. The simplest example of an operator on a metric graph is the Laplace operator. The operator on an edge is where is the coordinate on the edge. To make the operator self-adjoint a suitable domain must be specified. This is typically achieved by taking the Sobolev space of functions on the edges of the graph and specifying matching conditions at the vertices. The trivial example of matching conditions that make the operator self-adjoint are the Dirichlet boundary conditions, for every edge. An eigenfunction on a finite edge may be written as for integer . 
If the graph is closed with no infinite edges and the lengths of the edges of the graph are rationally independent then an eigenfunction is supported on a single graph edge and the eigenvalues are . The Dirichlet conditions don't allow interaction between the intervals so the spectrum is the same as that of the set of disconnected edges. More interesting self-adjoint matching conditions that allow interaction between edges are the Neumann or natural matching conditions. A function in the domain of the operator is continuous everywhere on the graph and the sum of the outgoing derivatives at a vertex is zero, where if the vertex is at and if is at . The properties of other operators on metric graphs have also been studied. These include the more general class of Schrödinger operators, where is a "magnetic vector potential" on the edge and is a scalar potential. Another example is the Dirac operator on a graph which is a matrix valued operator acting on vector valued functions that describe the quantum mechanics of particles with an intrinsic angular momentum of one half such as the electron. The Dirichlet-to-Neumann operator on a graph is a pseudo-differential operator that arises in the study of photonic crystals. Theorems All self-adjoint matching conditions of the Laplace operator on a graph can be classified according to a scheme of Kostrykin and Schrader. In practice, it is often more convenient to adopt a formalism introduced by Kuchment, see, which automatically yields an operator in variational form. Let be a vertex with edges emanating from it. For simplicity we choose the coordinates on the edges so that lies at for each edge meeting at . For a function on the graph let Matching conditions at can be specified by a pair of matrices and through the linear equation, The matching conditions define a self-adjoint operator if has the maximal rank and The spectrum of the Laplace operator on a finite graph can be conveniently described using a scattering matrix approach introduced by Kottos and Smilansky . The eigenvalue problem on an edge is, So a solution on the edge can be written as a linear combination of plane waves. where in a time-dependent Schrödinger equation is the coefficient of the outgoing plane wave at and coefficient of the incoming plane wave at . The matching conditions at define a scattering matrix The scattering matrix relates the vectors of incoming and outgoing plane-wave coefficients at , . For self-adjoint matching conditions is unitary. An element of of is a complex transition amplitude from a directed edge to the edge which in general depends on . However, for a large class of matching conditions the S-matrix is independent of . With Neumann matching conditions for example Substituting in the equation for produces -independent transition amplitudes where is the Kronecker delta function that is one if and zero otherwise. From the transition amplitudes we may define a matrix is called the bond scattering matrix and can be thought of as a quantum evolution operator on the graph. It is unitary and acts on the vector of plane-wave coefficients for the graph where is the coefficient of the plane wave traveling from to . The phase is the phase acquired by the plane wave when propagating from vertex to vertex . Quantization condition: An eigenfunction on the graph can be defined through its associated plane-wave coefficients. As the eigenfunction is stationary under the quantum evolution a quantization condition for the graph can be written using the evolution operator. 
Eigenvalues occur at values of where the matrix has an eigenvalue one. We will order the spectrum with . The first trace formula for a graph was derived by Roth (1983). In 1997 Kottos and Smilansky used the quantization condition above to obtain the following trace formula for the Laplace operator on a graph when the transition amplitudes are independent of . The trace formula links the spectrum with periodic orbits on the graph. is called the density of states. The right hand side of the trace formula is made up of two terms, the Weyl term is the mean separation of eigenvalues and the oscillating part is a sum over all periodic orbits on the graph. is the length of the orbit and is the total length of the graph. For an orbit generated by repeating a shorter primitive orbit, counts the number of repartitions. is the product of the transition amplitudes at the vertices of the graph around the orbit. Applications Quantum graphs were first employed in the 1930s to model the spectrum of free electrons in organic molecules like Naphthalene, see figure. As a first approximation the atoms are taken to be vertices while the σ-electrons form bonds that fix a frame in the shape of the molecule on which the free electrons are confined. A similar problem appears when considering quantum waveguides. These are mesoscopic systems - systems built with a width on the scale of nanometers. A quantum waveguide can be thought of as a fattened graph where the edges are thin tubes. The spectrum of the Laplace operator on this domain converges to the spectrum of the Laplace operator on the graph under certain conditions. Understanding mesoscopic systems plays an important role in the field of nanotechnology. In 1997 Kottos and Smilansky proposed quantum graphs as a model to study quantum chaos, the quantum mechanics of systems that are classically chaotic. Classical motion on the graph can be defined as a probabilistic Markov chain where the probability of scattering from edge to edge is given by the absolute value of the quantum transition amplitude squared, . For almost all finite connected quantum graphs the probabilistic dynamics is ergodic and mixing, in other words chaotic. Quantum graphs embedded in two or three dimensions appear in the study of photonic crystals. In two dimensions a simple model of a photonic crystal consists of polygonal cells of a dense dielectric with narrow interfaces between the cells filled with air. Studying dielectric modes that stay mostly in the dielectric gives rise to a pseudo-differential operator on the graph that follows the narrow interfaces. Periodic quantum graphs like the lattice in are common models of periodic systems and quantum graphs have been applied to the study the phenomena of Anderson localization where localized states occur at the edge of spectral bands in the presence of disorder. See also Schild's Ladder, a novel dealing with a fictional quantum graph theory Feynman diagram References Quantum mechanics Extensions and generalizations of graphs
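The quantization condition described above can be explored numerically for small graphs. The following Python sketch assumes Neumann (Kirchhoff) matching conditions, for which the transition amplitude between directed edges meeting at a vertex of degree d is 2/d minus one for back-scattering, as given above; the example graph (a triangle with one pendant edge), its edge lengths, and the crude grid search for zeros of the secular function are illustrative choices rather than anything taken from the cited literature:

```python
# Numerical sketch of the bond-scattering-matrix quantization condition for a
# metric graph with Neumann (Kirchhoff) matching: the amplitude from a directed
# edge into vertex v onto a directed edge out of v is 2/deg(v) - delta, where
# delta = 1 only for back-scattering onto the reversed edge.
import numpy as np

# Example: a triangle with one pendant edge, lengths chosen incommensurate.
edges = [(0, 1, 1.0), (1, 2, 1.3), (2, 0, 0.7), (0, 3, 0.9)]   # (vertex, vertex, length)

# Each undirected edge gives two directed bonds.
bonds = [(u, v, L) for (u, v, L) in edges] + [(v, u, L) for (u, v, L) in edges]
degree = {}
for u, v, _ in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

B = len(bonds)
S = np.zeros((B, B))                                  # k-independent vertex amplitudes
for i, (u1, v1, _) in enumerate(bonds):               # incoming bond ends at v1
    for j, (u2, v2, _) in enumerate(bonds):           # outgoing bond starts at u2
        if u2 == v1:
            back = 1.0 if v2 == u1 else 0.0           # reversal of the incoming bond
            S[j, i] = 2.0 / degree[v1] - back

lengths = np.array([L for (_, _, L) in bonds])

def secular(k: float) -> float:
    """|det(I - U(k))| with U(k) = S @ diag(exp(i k L_b))."""
    U = S @ np.diag(np.exp(1j * k * lengths))
    return abs(np.linalg.det(np.eye(B) - U))

# Crude grid scan: local minima close to zero approximate the eigenvalues k.
ks = np.linspace(0.05, 10.0, 20000)
vals = np.array([secular(k) for k in ks])
roots = [round(float(ks[i]), 3) for i in range(1, len(ks) - 1)
         if vals[i] <= vals[i - 1] and vals[i] <= vals[i + 1] and vals[i] < 1e-2]
print(roots)   # tolerance 1e-2 is ad hoc; refine the grid or the threshold as needed
```

The minima of |det(I − U(k))| locate the values of k at which the bond scattering matrix has an eigenvalue equal to one, i.e. the square roots of the Laplace eigenvalues of the graph.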
Quantum graph
Physics,Mathematics
1,998
580,713
https://en.wikipedia.org/wiki/Virtual%20sex
Virtual sex is sexual activity where two or more people (or one person and a virtual character) gather together via some form of communications equipment to arouse each other, often by the means of transmitting sexually explicit messages. Virtual sex describes the phenomenon, no matter the communications equipment used. Digital remote stimulation involves the use of electronic sex toys to stimulate a person in the genital area from a distance Camming is virtual sex that is over video chat from services that provide it. Cybersex is virtual sex typed over the Internet, including IRC, e-mail, instant messaging, chat rooms, webcam, role-playing games, etc. Phone sex is virtual sex spoken over the telephone. Sexting is virtual sex sent via mobile phone network text messaging. The advent of cell phones with built-in digital cameras has undoubtedly added new dimensions to these activities. Modern consumer virtual reality headsets allow users to engage in virtual sex through simulated environments, either with other humans or with virtual characters. These terms and practices continuously evolve as technologies and methods of communication change. Increases in Internet connectivity, bandwidth availability, and the proliferation of webcams have also had implications for virtual sex enthusiasts. It is increasingly common for these activities to include the exchange of pictures or motion video. There are companies which allow paying customers to watch people have live sex or masturbate and at the same time allow themselves to be watched as well. Recently, devices have been introduced and marketed to allow remote-controlled stimulation. Consent An important part of taking part in virtual sex, or sexual acts, would be consent. The ethics of sexting are already being established by young people for whom consent figures as a critical concept. Distinctions between positive and negative experiences of sexting are mostly dependent on whether consent was given to make and share the images. , it is illegal for any person under the age of 18 to consent to any form of virtual sex (only if nude pictures are sent), because images of minors are considered child pornography. Addiction There are approximately one half to 2 million sex addicts in the world that have access to the Internet and the prospectives of virtual sex on the Internet are appealing to them. The internet opens up a world where people can reinvent themselves and try on a completely different online persona; they can freely experiment with and explore a variety of new, hidden or repressed sexual behaviors, fetishes and sexual fantasies. This can feel liberating, but can also be extremely dangerous as it has the potential of becoming addicting and have adverse effects on cybernauts' other aspects of life. What attracts people to sex via the Internet can be explained by the “Triple A” engine of Affordability, Accessibility, and Anonymity. The "Triple A" engine represents the risk factors for people that are already susceptible to sexual compulsivity or psychological vulnerability related to sexual compulsivity. Affordability is about the cheap price of virtual sex. Pornography magazines and videos used to have a price of $20 or more per individual piece, while today anyone can have access to unlimited amount of pornographic content at the price of a $20 monthly subscription to the internet. Accessibility is a person's capacity to have access to the Internet - a service that is virtually accessible to anyone in the world. 
Finally, Anonymity refers to the ability to access sexual content without disclosing one's true identity; this can feel empowering and make it that much easier to engage in virtual sex, as one does not risk being seen by someone they know and feeling ashamed or worried about possible gossip and rumors. When does healthy virtual sex become a pathology? Addiction is defined by three main characteristics: compulsivity (not being able to freely choose when to stop or continue a behavior), continuation of the behavior despite adverse consequences, and obsession with the activity. When one loses control and lets virtual sex negatively affect at least one aspect of their life, it stops being healthy. According to clinical studies, the main adverse consequences of virtual sex addiction concern the damage it causes to marital and other romantic relationships, which are disrupted by online affairs and online sexual compulsivity. In one research study, 53% of the virtual sex addicts interviewed reported that online affairs and sexual compulsivity had been the cause of disruption of their romantic relationships. Virtual sex can become a coping mechanism to temporarily escape real-life problems. However, it is not an effective one and is even potentially harmful, as the underlying issues will go on unaddressed and only become more complex with time. Generally, there are a couple of patterns explaining why one can become addicted to virtual sex and the ways one can use it as a coping mechanism. Often, it is used to cope with emotional problems. Virtual sex can serve as a distraction from painful emotions, such as loneliness, stress, and anxiety, as consuming online pornographic content makes the addict feel more confident, desirable, and excited, creating a numbing effect. Another pattern involves young, insecure, socially awkward or emotionally troubled people who use the internet to interact with others online rather than in person in order to avoid rejection from a real person. On the Internet they can find a virtually unlimited number of people who seem interesting and interested in them. They find the online world more comforting and safe, as it is harder to pick up on social cues of disapproval or judgement. Gradually online friends can become more "real" than offline friends, and an online friend can become an opportunity for an online affair and cybersex. Partners who are cheated on through online affairs find them just as painful as offline ones - they are a significant source of stress, create a feeling of betrayal at having been lied to, and breed insecurity, as the partner negatively compares themselves with the online women or men. Virtual sex can become an escape and a new addiction for recovering sex addicts who are going through a stressful period in their life. Feeling triggered by life problems, former sex addicts can find themselves using online pornographic content as a quick and easy but temporary fix to help them soothe themselves, forget about life's problems, and feel better about themselves. Another pattern is when an individual takes advantage of online sexual content to explore forbidden, hidden, and repressed sexual fantasies, which can become addictive and completely absorb the person into this virtual space. Long-distance relationships Approximately 14 million people in the United States are in a long distance relationship. 
Among young adults, 40% to 50% are in a long distance relationship at any given time, and 75% of college students are in one at some point during their studies. The number of long distance relationships is expected to increase due to the globalized nature of today's world. Hence, the internet might be a useful tool for making long distance relationships work. One way couples in long distance relationships engage in sexual activity online is through sexting. Self-expression through sexting can create a feeling of intimacy and closeness between partners even at a distance. Long distance relationships may be more susceptible to sexual boredom, hence sexting can be an effective way of keeping partners sexually engaged at a distance. One study examined the associations between sexting and feelings of closeness. It found that sexting more often in a long distance relationship was not predictive of greater interpersonal closeness between the partners. However, it did find a correlation between sexting and both sexual satisfaction and relationship satisfaction. See also Red Light Center Teledildonics Virtual reality sex Deuel, Nancy R. 1996. Our passionate response to virtual reality. Computer-mediated Communication: Linguistic, Social, and Cross-Cultural Perspectives, p. 129-146. Ed. by Susan C. Herring. John Benjamins Publishing Company, Philadelphia. Lunceford, Brett. “Virtual Sex.” In Encyclopedia of Gender in Media, edited by Mary Kosut. Thousand Oaks, CA: Sage, 2012. References External links "Cyberwatch: Online Dating and Cybersex" "Teledildonics Products and Teledildonic Devices" at Tinynibbles.com "Cyborgasms: Cybersex Amongst Multiple-selves and Cyborgs in the Narrow-bandwidth Space of American Online Chat Rooms", 1996 MA Dissertation by Robin B. Hamman Sexuality and computing
Virtual sex
Technology
1,696
37,907,595
https://en.wikipedia.org/wiki/Distributed%20object%20middleware
Distributed Object Middleware (DOM) is a type of infrastructure that allows transparent access to remote objects. It is based on the Remote Procedure Call (RPC) mechanism. Some DOM systems, such as CORBA, also enable objects on different platforms to interact. Other examples of DOM systems include Microsoft's Distributed Component Object Model (DCOM) and Enterprise JavaBeans (EJB) by Sun Microsystems (now Oracle Corporation). A minimal sketch of this style of remote invocation is given below. References Middleware
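To make the remote-procedure-call pattern behind such middleware concrete, the following is a minimal sketch using Python's standard-library XML-RPC modules. It illustrates only the general idea of invoking a remote object through a local proxy; the InventoryService class and its get_stock method are invented for the example and do not come from CORBA, DCOM, or EJB.

# Minimal sketch of RPC-style access to a remote object (illustrative only).
# Server side: expose an object's methods over the network.
from xmlrpc.server import SimpleXMLRPCServer

class InventoryService:                      # hypothetical example object
    def get_stock(self, item):               # hypothetical example method
        return {"pencil": 120, "stapler": 8}.get(item, 0)

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_instance(InventoryService())
# server.serve_forever()                     # run in a separate process in practice

# Client side: the proxy makes the remote call look like a local method call,
# which is the transparency a DOM system aims to provide.
from xmlrpc.client import ServerProxy

inventory = ServerProxy("http://localhost:8000")
# print(inventory.get_stock("pencil"))       # invoked as if it were local

Full DOM systems such as CORBA, DCOM, and EJB add features on top of this basic call-forwarding pattern, such as interface definitions, generated stubs and skeletons, object lifecycle management, and, in CORBA's case, cross-language type mapping.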
Distributed object middleware
Technology,Engineering
97
195,963
https://en.wikipedia.org/wiki/ILLIAC%20IV
The ILLIAC IV was the first massively parallel computer. The system was originally designed to have 256 64-bit floating point units (FPUs) and four central processing units (CPUs) able to process 1 billion operations per second. Due to budget constraints, only a single "quadrant" with 64 FPUs and a single CPU was built. Since the FPUs all processed the same instruction – ADD, SUB etc. – in modern terminology, the design would be considered to be single instruction, multiple data, or SIMD. The concept of building a computer using an array of processors came to Daniel Slotnick while working as a programmer on the IAS machine in 1952. A formal design did not start until 1960, when Slotnick was working at Westinghouse Electric and arranged development funding under a US Air Force contract. When that funding ended in 1964, Slotnick moved to the University of Illinois Urbana–Champaign and joined the Illinois Automatic Computer (ILLIAC) team. With funding from Advanced Research Projects Agency (ARPA), they began the design of a newer concept with 256 64-bit processors instead of the original concept with 1,024 1-bit processors. While the machine was being assembled by Burroughs, the university began building a new facility to house it. Political tension over the funding from the US Department of Defense led to the ARPA and the university fearing for the machine's safety. When the first 64-processor quadrant of the machine was completed in 1972, it was sent to the NASA Ames Research Center in California. After three years of thorough modification to fix various flaws, ILLIAC IV was connected to the ARPANET for distributed use in November 1975, becoming the first network-available supercomputer, beating the Cray-1 by nearly 12 months. Running at half its design speed, the one-quadrant ILLIAC IV delivered 50 MFLOP peak, making it the fastest computer in the world at that time. It is also credited with being the first large computer to use solid-state memory, as well as the most complex computer built to that date, with over 1 million gates. Generally considered a failure due to massive budget overruns, the design was instrumental in the development of new techniques and systems for programming parallel systems. In the 1980s, several machines based on ILLIAC IV concepts were successfully delivered. History Origins In June 1952, Daniel Slotnick began working on the IAS machine at the Institute for Advanced Study (IAS) at Princeton University. The IAS machine featured a bit-parallel math unit that operated on 40-bit words. Originally equipped with Williams tube memory, a magnetic drum from Engineering Research Associates was later added. This drum had 80 tracks so two words could be read at a time, and each track stored 1,024 bits. While contemplating the drum's mechanism, Slotnik began to wonder if that was the correct way to build a computer. If the bits of a word were written serially to a single track, instead of in parallel across 40 tracks, then the data could be fed into a bit-serial computer directly from the drum bit-by-bit. The drum would still have multiple tracks and heads, but instead of gathering up a word and sending it to a single ALU, in this concept the data on each track would be read a bit at a time and sent into parallel ALUs. This would be a word-parallel, bit-serial computer. Slotnick raised the idea at the IAS, but John von Neumann dismissed it as requiring "too many tubes". Slotnick left the IAS in February 1954 to return to school for his PhD and the matter was forgotten. 
SOLOMON After completing his PhD and some post-doc work, Slotnick ended up at IBM. By this time, for scientific computing at least, tubes and drums had been replaced with transistors and magnetic-core memory. The idea of parallel processors working on different streams of data from a drum no longer had the same obvious appeal. Nevertheless, further consideration showed that parallel machines could still offer significant performance in some applications; Slotnick and a colleague, John Cocke (better known as the inventor of RISC), wrote a paper on the concept in 1958. After a short time at IBM and then another at Aeronca Aircraft, Slotnick ended up at Westinghouse's Air Arm division, which worked on radar and similar systems. Under a contract from the US Air Force's RADC, Slotnik was able to build a team to design a system with 1,024 bit-serial ALUs, known as "processing elements" or PE's. This design was given the name SOLOMON, after King Solomon, who was both very wise and had 1,000 wives. The PE's would be fed instructions from a single master central processing unit (CPU), the "control unit" or CU. SOLOMON's CU would read instructions from memory, decode them, and then hand them off to the PE's for processing. Each PE had its own memory for holding operands and results, the PE Memory module, or PEM. The CU could access the entire memory via a dedicated memory bus, whereas the PE's could only access their own PEM. To allow results from one PE to be used as inputs in another, a separate network connected each PE to its eight closest neighbours. Several testbed systems were constructed, including a 3-by-3 (9 PE) system and a 10-by-10 model with simplified PEs. During this period, some consideration was given to more complex PE designs, becoming a 24-bit parallel system that would be organized in a 256-by-32 arrangement. A single PE using this design was built in 1963. As the design work continued, the primary sponsor within the US Department of Defense was killed in an accident and no further funding was forthcoming. Looking to continue development, Slotnik approached Livermore, who at that time had been at the forefront of supercomputer purchases. They were very interested in the design but convinced him to upgrade the current design's fixed-point math units to true floating point, which resulted in the SOLOMON.2 design. Livermore would not fund development, instead, they offered a contract in which they would lease the machine once it was completed. Westinghouse management considered it too risky, and shut down the team. Slotnik left Westinghouse attempting to find venture capital to continue the project, but failed. Livermore would later select the CDC STAR-100 for this role, as CDC was willing to take on the development costs. ILLIAC IV When SOLOMON ended, Slotnick joined the Illinois Automatic Computer design (ILLIAC) team at the University of Illinois at Urbana-Champaign. Illinois had been designing and building large computers for the U.S. Department of Defense and the Advanced Research Projects Agency (ARPA) since 1949. In 1964 the university signed a contract with ARPA to fund the effort, which became known as ILLIAC IV, since it was the fourth computer designed and created at the university. Development started in 1965, and a first-pass design was completed in 1966. In contrast to the bit-serial concept of SOLOMON, in ILLIAC IV the PE's were upgraded to be full 64-bit (bit-parallel) processors, using 12,000 gates and 2048-words of thin-film memory. 
The PEs had five 64-bit registers, each with a special purpose. One of these, RGR, was used for communicating data to neighbouring PEs, moving one "hop" per clock cycle. Another register, RGD, indicated whether or not that PE was currently active. "Inactive" PEs could not access memory, but they would pass results to neighbouring PEs using the RGR. The PEs were designed to work as a single 64-bit FPU, two 32-bit half-precision FPUs, or eight 8-bit fixed-point processors. Instead of 1,024 PEs and a single CU, the new design had a total of 256 PEs arranged into four 64-PE "quadrants", each with its own CU. The CU's were also 64-bit designs, with sixty-four 64-bit registers and another four 64-bit accumulators. The system could run as four separate 64-PE machines, two 128-PE machines, or a single 256-PE machine. This allowed the system to work on different problems when the data was too small to demand the entire 256-PE array. Based on a 25 MHz clock, with all 256-PEs running on a single program, the machine was designed to deliver 1 billion floating point operations per second, or in today's terminology, 1 GFLOPS. This made it much faster than any machine in the world; the contemporary CDC 7600 had a clock cycle of 27.5 nanoseconds, or 36 MIPS, although for a variety of reasons it generally offered performance closer to 10 MIPS. To support the machine, an extension to the Digital Computer Laboratory buildings were constructed. Sample work at the university was primarily aimed at ways to efficiently fill the PEs with data, thus conducting the first "stress test" in computer development. In order to make this as easy as possible, several new computer languages were created; IVTRAN and TRANQUIL were parallelized versions of FORTRAN, and Glypnir was a similar conversion of ALGOL. Generally, these languages provided support for loading arrays of data "across" the PEs to be executed in parallel, and some even supported the unwinding of loops into array operations. Construction, problems In early 1966, the university sent out a request for proposals looking for industrial partners interested in building the design. Seventeen responses were received in July, seven responded, and of these three were selected. Several of the responses, including Control Data, attempted to interest them in a vector processor design instead, but as these were already being designed the team was not interested in building another. In August 1966, eight-month contracts were offered to RCA, Burroughs, and Univac to bid on the construction of the machine. Burroughs eventually won the contract, having teamed up with Texas Instruments (TI). Both offered new technical advances that made their bid the most interesting. Burroughs was offering to build a new and much faster version of thin-film memory which would improve performance. TI was offering to build 64-pin emitter-coupled logic (ECL) integrated circuits (ICs) with 20 logic gates each. At the time, most ICs used 16-pin packages and had between 4 and 7 gates. Using TI's ICs would make the system much smaller. Burroughs also supplied the specialized disk drives, which featured a separate stationary head for every track and could offer speeds up to 500 Mbit/s and stored about 80 MB per 36" disk. They would also provide a Burroughs B6500 mainframe to act as a front-end controller, loading data from secondary storage and performing other housekeeping tasks. 
Connected to the B6500 was a third-party laser optical recording medium, a write-once system that stored up to 1 Tbit on thin metal film coated on a strip of polyester sheet carried by a rotating drum. Construction of the new design began at Burroughs' Great Valley Lab. At the time, it was estimated the machine would be delivered in early 1970. After a year of working on the ICs, TI announced that it had been unable to build the 64-pin designs. The more complex internal wiring was causing crosstalk in the circuitry, and TI asked for another year to fix the problems. Instead, the ILLIAC team chose to redesign the machine based on available 16-pin ICs. This required the system to run slower, using a 16 MHz clock instead of the original 25 MHz. The change from 64-pin to 16-pin cost the project about two years and millions of dollars. TI was able to get the 64-pin design working after just over another year, and began offering the parts on the market before ILLIAC was complete. As a result of this change, the individual PC boards grew considerably in size. This doomed Burroughs' efforts to produce a thin-film memory for the machine, because there was no longer enough space for the memory to fit within the design's cabinets. Attempts to increase the size of the cabinets to make room for the memory caused serious problems with signal propagation. Slotnick surveyed the potential replacements and picked a semiconductor memory from Fairchild Semiconductor, a decision that was so opposed by Burroughs that a full review by ARPA followed. In 1969, these problems, combined with the resulting cost overruns from the delays, led to the decision to build only a single 64-PE quadrant, thereby limiting the machine's speed to about 200 MFLOPS. Together, these changes cost the project three years and $6 million. By 1969, the project was spending $1 million a month, and had to be spun out of the original ILLIAC team, who were becoming increasingly vocal in their opposition to the project. Move to Ames By 1970, the machine was finally being built at a reasonable rate and was being readied for delivery in about a year. On 6 January 1970, The Daily Illini, the student newspaper, claimed that the computer would be used to design nuclear weapons. In May, the Kent State shootings took place, and anti-war violence erupted across university campuses. Slotnick grew to be opposed to the use of the machine for classified research, and announced that as long as it was on university grounds, all processing that took place on the machine would be publicly released. He also grew increasingly concerned that the machine would be subject to attack by the more radical student groups, a position that seemed wise after the local students joined the 9 May 1970 nationwide student strike by declaring a "day of Illiaction", and especially after the 24 August bombing of the mathematics building at the University of Wisconsin–Madison. With the help of Hans Mark, the director of the NASA Ames Research Center in what was becoming Silicon Valley, the decision was made in January 1971 to deliver the machine to Ames rather than the university. Because the center was located on an active US Navy base and protected by the U.S. Marines, security would no longer be a concern. The machine was finally delivered to Ames in April 1972, and installed in the Central Computer Facility in building N-233. 
By this point it was several years late and well over budget at a total price of $31 million, almost four times the original estimate of $8 million for the complete 256-PE machine. NASA also decided to replace the B6500 front-end machine with a PDP-10, a machine in common use at Ames that would make it much easier to connect to the ARPAnet. This required the development of new software, especially compilers, on the PDP-10, which caused further delays in bringing the machine online. The Illiac IV was contracted to be managed by ACTS Computing Corporation, a timesharing and remote job entry (RJE) company headquartered in Southfield, Michigan, that had recently been acquired by the conglomerate Lear Siegler Corporation. The DoD contracted with ACTS under a cost-plus-10% contract. This unusual arrangement was due to the constraint that no government employee could be paid more than a member of Congress, and many Illiac IV personnel made more than that limit. Dr. Mel Pirtle, with a background from the University of California, Berkeley and the Berkeley Computer Corporation (BCC), was engaged as the Illiac IV's director. Making it work When the machine first arrived, it could not be made to work. It suffered from all sorts of problems, from cracking PCBs to bad resistors to the packaging of the TI ICs being highly sensitive to humidity. These issues were slowly addressed, and by the summer of 1973 the first programs were able to be run on the system, although the results were highly questionable. Starting in June 1975, a concerted four-month effort began that required, among other changes, replacing 110,000 resistors, rewiring parts to fix propagation delay issues, improving filtering in the power supplies, and a further reduction in clock speed to 13 MHz. At the end of this process, the system was finally working properly. From then on, the system ran Monday morning to Friday afternoon, providing 60 hours of uptime for the users, but requiring 44 hours of scheduled downtime. Nevertheless, it was increasingly used as NASA programmers learned ways to get performance out of the complex system. At first, performance was dismal, with most programs running at about 15 MFLOPS, about three times the average for the CDC 7600. Over time this improved, notably after Ames programmers wrote their own version of FORTRAN, CFD, and learned how to parallelize I/O into the limited PEMs. On problems that could be parallelized the machine was still the fastest in the world, outperforming the CDC 7600 by two to six times, and it is generally credited as the fastest machine in the world until 1981. On 7 September 1981, after nearly 10 years of operation, the ILLIAC IV was turned off. The machine was officially decommissioned in 1982, and NASA's advanced computing division ended with it. One control unit and one processing element chassis from the machine are now on display at the Computer History Museum in Mountain View, less than a mile from its operational site. Aftermath ILLIAC was very late, very expensive, and never met its goal of producing 1 GFLOPS. It was widely considered a failure even by those who worked on it; one stated simply that "any impartial observer has to regard Illiac IV as a failure in a technical sense." In terms of project management it is widely regarded as a failure, running over its cost estimates by four times and requiring years of remedial efforts to make it work. 
As Slotnik himself later put it: However, later analyses note that the project had several long-lasting effects on the computer market as a whole, both intentionally and unintentionally. Among the indirect effects was the rapid update of semiconductor memory after the ILLIAC project. Slotnick received a lot of criticism when he chose Fairchild Semiconductor to produce the memory ICs, as at the time the production line was an empty room and the design existed only on paper. However, after three months of intense effort, Fairchild had a working design being produced en masse. As Slotnick would later comment, "Fairchild did a magnificent job of pulling our chestnuts out of the fire. The Fairchild memories were superb and their reliability to this day is just incredibly good." ILLIAC is considered to have dealt a death blow to magnetic-core memory and related systems like thin-film. Another indirect effect was caused by the complexity of the printed circuit boards (PCBs), or modules. At the original 25 MHz design speed, impedance in the ground wiring proved to be a serious problem, demanding that the PCBs be as small as possible. As their complexity grew, the PCBs had to add more and more layers in order to avoid growing larger. Eventually, they reached 15-layers deep, which proved to be well beyond the capabilities of draftsmen. The design was ultimately completed using new automated design tools provided by a subcontractor, and the complete design required two years of computer time on a Burroughs mainframe. This was a major step forward in computer aided design, and by the mid-1970s such tools were commonplace. ILLIAC also led to major research into the topic of parallel processing that had wide-ranging effects. During the 1980s, with the price of microprocessors falling according to Moore's Law, a number of companies created MIMD (Multiple Instruction, Multiple Data) to build even more parallel machines, with compilers that could make better use of the parallelism. The Thinking Machines CM-5 is an excellent example of the MIMD concept. It was the better understanding of parallelism on ILLIAC that led to the improved compilers and programs that could take advantage of these designs. As one ILLIAC programmer put it, "If anybody builds a fast computer out of a lot of microprocessors, Illiac IV will have done its bit in the broad scheme of things." Most supercomputers of the era took another approach to higher performance, using a single very-high-speed vector processor. Similar to the ILLIAC in some ways, these processor designs loaded up many data elements into a single custom processor instead of a large number of specialized ones. The classic example of this design is the Cray-1, which had performance similar to the ILLIAC. There was more than a little "backlash" against the ILLIAC design as a result, and for some time the supercomputer market looked on massively parallel designs with disdain, even when they were successful. As Seymour Cray famously quipped, "If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?" Description Physical arrangement Each quadrant of the machine was high, deep and long. Arranged beside the quadrant was its input/output (I/O) system, whose disk system stored 2.5 GiB and could read and write data at 1 billion bits per second, along with the B6700 computer that connected to the machine through the same 1,024-bit-wide interface as the disk system. 
The machine consisted of a series of carrier chassis holding a number of the small modules. The majority of these were the Processing Units (PUs), which contained the modules for a single PE, its PEM, and the Memory Logic Unit that handled address translation and I/O. The PUs were identical, so they could be replaced or reordered as required. Processor details Each CU had about 30,000 to 40,000 gates. The CU had sixteen 64-bit registers and a separate sixty-four-slot 64-bit "scratchpad", LDB. There were four accumulators, AC0 through AC3, a program counter ILR, and various control registers. The system had a short instruction pipeline and implemented instruction look-ahead. Each PE had about 12,000 gates and included four 64-bit registers: an accumulator A, an operand buffer B, and a secondary scratchpad S. The fourth, R, was used to broadcast or receive data from the other PEs. The PEs used a carry-lookahead adder, a leading-one detector for Boolean operations, and a barrel shifter. 64-bit additions took about 200 ns and multiplications about 400 ns. The PEs were connected to a private memory bank, the PEM, which held 2,048 64-bit words. Access time was on the order of 250 ns. The PEs used a load/store architecture. The instruction set (ISA) contained two separate sets of instructions, one for the CU (or a unit within it, ADVAST) and another for the PEs. Instructions for the PEs were not decoded, but instead sent directly to the FINST register to be forwarded to the PEs for processing. The ADVAST instructions were decoded and entered the CU's processing pipeline. Logical arrangement Each quadrant contained 64 PEs and one CU. The CU had access to the entire I/O bus and could address all of the machine's memory. The PEs could only access their own local store, the PEM, of 2,048 64-bit words. Both the PEs and CU could use load and store operations to access the disk system. The cabinets were so large that it required 240 ns for signals to travel from one end to the other. For this reason, the CU could not be used to coordinate actions; instead, the entire system was clock-synchronous, with all operations in the PEs guaranteed to take the same amount of time no matter what the operands were. That way the CU could be sure that the operations were complete without having to wait for results or status codes. To improve the performance of operations that required the output of one PE's results to be used as the input to another PE, the PEs were connected directly to their neighbours, as well as to the ones eight steps away - for instance, PE1 was directly connected to PE0 and PE2, as well as PE9 and PE45. The eight-away connections allowed faster transport when the data needed to travel between more distant PEs. Each shift of data moved 64 words in a single 125 ns clock cycle. The system used a one-address format, in which the instructions included the address of one of the operands and the other operand was in the PE's accumulator (the A register). The address was sent to the PEs over a separate "broadcast" bus. Depending on the instruction, the value on the bus might refer to a memory location in the PE's PEM, a value in one of the PE registers, or a numeric constant. Since each PE had its own memory, while the instruction format and the CUs saw the entire address space, the system included an index register (X) to offset the base address. This allowed, for example, the same instruction stream to work on data that was not aligned in the same locations in different PEs. 
The common example would be an array of data that was loaded into different locations in the PEMs, which could then be made uniform by setting the index in the different PEs. Branches In traditional computer designs, instructions are loaded into the CPU one at a time as they are read from memory. Normally, when the CPU completes processing an instruction, the program counter (PC) is incremented by one word and the next instruction is read. This process is interrupted by branches, which cause the PC to jump to one of two locations depending on a test, like whether a given memory address holds a non-zero value. In the ILLIAC design, each PE would be applying this test to different values, and thus have different outcomes. Since those values are private to the PE, the following instructions would need to be loaded based on a value only the PE knew. To avoid the delays that reloading the PE instructions would cause, the ILLIAC loaded the PEMs with the instructions on both sides of the branch. Logical tests did not change the PC; instead, they set "mode bits" that told the PE whether or not to run the next arithmetic instruction. To use this system, the program would be written so that one of the two possible instruction streams followed the test, and ended with an instruction to invert the bits. Code for the second branch would then follow, ending with an instruction to set all the bits to 1. If the test selected the "first" branch, that PE would continue on as normal. When it reached the end of that code, the mode operator instruction would flip the mode bits, and from then on that PE would ignore further instructions. This would continue until it reached the end of the code for the second branch, where the mode reset instruction would turn the PE back on. If a particular PE's test resulted in the second branch being taken, it would instead set the mode bits to ignore further instructions until it reached the end of the first branch, where the mode operator would flip the bits and cause the second branch to begin processing, once again turning them all on at the end of that branch. Since the PEs could operate in 64-, 32- and 8-bit modes, the mode flags had multiple bits so the individual words could be turned on or off. For instance, in the case when the PE was operating in 32-bit mode, one "side" of the PE might have the test come out true while the other side was false. A brief software analogue of this masked-execution scheme is sketched after the reference list below. Terminology CU: control unit CPU: central processing unit ISA: instruction set architecture MAC: multiply-and-accumulate PC: program counter PE: processing element PEM: processing element memory module PU: processing unit See also Amdahl's law, which suggests there are limits to the performance increase of parallel computers ILLIAC III, a special-purpose SIMD machine built around the same time as ILLIAC IV Parallel Element Processing Ensemble, another massively-parallel Burroughs machine, this one a Bell Labs design Bull Gamma 60, an early parallel computer released in 1960 Notes References Citations Bibliography Further reading ILLIAC IV CFD ILLIAC IV External links ILLIAC IV documentation at bitsavers.org Oral history interview with Ivan Sutherland, Charles Babbage Institute, University of Minnesota. Sutherland describes his tenure from 1963 to 1965 as head of the Information Processing Techniques Office (IPTO) and new initiatives such as ILLIAC IV. The Legacy of Illiac IV panel discussion at Computer History Museum, June 24, 1997. Massively parallel computers One-of-a-kind computers Parallel computing Supercomputers
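To illustrate the mode-bit branching scheme described in the Branches section above, here is a small software analogue in Python. It is a sketch of the general masked-execution idea only; the eight-element data array, the threshold test, and the broadcast helper are invented for the illustration and do not correspond to actual ILLIAC IV instructions or register names.

# Software analogue of ILLIAC IV-style masked execution (illustrative only).
# Every simulated PE receives the same instruction stream; a per-PE mode bit
# decides whether the arithmetic actually takes effect in that PE.

values = [3, 9, 1, 12, 7, 5, 14, 2]         # one operand per simulated PE
mode = [True] * len(values)                 # mode bits: all PEs enabled

def broadcast(op):
    """Apply the same operation in every PE whose mode bit is set."""
    for i in range(len(values)):
        if mode[i]:
            values[i] = op(values[i])

# Branch: if a PE's value is at least 8, double it; otherwise add 100.
test = [v >= 8 for v in values]

mode = test[:]                              # enable only PEs where the test was true
broadcast(lambda v: v * 2)                  # first arm of the branch

mode = [not t for t in test]                # "invert the bits": enable the other PEs
broadcast(lambda v: v + 100)                # second arm of the branch

mode = [True] * len(values)                 # "set all the bits to 1": everyone resumes
print(values)                               # [103, 18, 101, 24, 107, 105, 28, 102]

Both arms of the branch pass through the common instruction stream; the mode bits only determine which PEs are affected. The same idea survives today as predication or masking in SIMD instruction sets and GPUs.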
ILLIAC IV
Technology
5,931
51,673,289
https://en.wikipedia.org/wiki/38%20Virginis
38 Virginis is an F-type main sequence star in the constellation of Virgo. It is around 108 light years distant from the Earth. Nomenclature The name 38 Virginis derives from the star being the 38th star in order of right ascension catalogued in the constellation Virgo by Flamsteed in his star catalogue. The designation b of 38 Virginis b derives from the order of discovery and is given to the first planet orbiting a given star, followed by the other lowercase letters of the alphabet. In the case of 38 Virginis, only one planet was discovered, which was designated b. Stellar characteristics 38 Virginis is an F-type main sequence star that has approximately 118% of the mass and 145% of the radius of the Sun. It has a temperature of 6557 K and is about 1.9 billion years old. In comparison, the Sun is about 4.6 billion years old and has a temperature of 5778 K. The star is metal-rich, with a metallicity ([Fe/H]) of 0.07 dex, or 117% of the solar amount. Its luminosity is 3.48 times that of the Sun. A companion star is cataloged in the CCDM at a separation of half an arcsecond. Planetary system The star is known to host one exoplanet, 38 Virginis b, discovered in 2016. The planet has a relatively low eccentricity for a long-period giant exoplanet, at 0.03, and a mass of around 4.5 times that of Jupiter. Its orbit very likely puts it and any moons it may have in the habitable zone of its star. Notes References F-type main-sequence stars Planetary systems with one confirmed planet Virgo (constellation) Double stars Durchmusterung objects Virginis, 38 111998 062875 4891
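The quoted luminosity is consistent with the listed radius and temperature through the Stefan–Boltzmann law. As a rough consistency check in solar units (a back-of-the-envelope calculation, not a figure taken from the source):

L / L_\odot = (R / R_\odot)^2 (T_\mathrm{eff} / T_\odot)^4 \approx (1.45)^2 \times (6557 / 5778)^4 \approx 3.5,

in good agreement with the stated value of 3.48 times the solar luminosity.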
38 Virginis
Astronomy
393
3,116,378
https://en.wikipedia.org/wiki/Tau2%20Eridani
Tau2 Eridani (τ2 Eridani, abbreviated Tau2 Eri, τ2 Eri), formally named Angetenar, is a star in the constellation of Eridanus. It is visible to the naked eye with an apparent visual magnitude of 4.78. The distance to this star, as determined via the parallax method, is around 187 light-years. Nomenclature τ2 Eridani (Latinised to Tau2 Eridani) is the system's Bayer designation. It is one of a series of stars that share the Bayer designation Tau Eridani. It bore the traditional name Angetenar, derived from the Arabic Al Ḥināyat an-Nahr, 'the Bend in the River', near which it lies. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN approved the name Angetenar for this star on 30 June 2017 and it is now included in the List of IAU-approved Star Names. In Chinese, the name meaning Celestial Meadows refers to an asterism consisting of Tau2 Eridani, Gamma Eridani, Pi Eridani, Delta Eridani, Epsilon Eridani, Zeta Eridani, Eta Eridani, Pi Ceti, Tau1 Eridani, Tau3 Eridani, Tau4 Eridani, Tau5 Eridani, Tau6 Eridani, Tau7 Eridani, Tau8 Eridani and Tau9 Eridani. Consequently, the Chinese name for Tau2 Eridani itself derives from this asterism. Properties Tau2 Eridani is an evolved K-type giant star with a stellar classification of K0 III. It is a red clump giant on the horizontal branch of the Hertzsprung–Russell diagram, indicating that it is now generating energy through the thermonuclear fusion of helium at its core. Around 720 million years old, Tau2 Eridani has 2.4 times the mass of the Sun and has expanded to over 8 times the solar radius. It shines with nearly 44 times the Sun's luminosity from an outer atmosphere that has an effective temperature of 5,049 K. It is a member of the Galactic thin disk population. References K-type giants Horizontal-branch stars Eridanus (constellation) Angetenar Eridani, Tau2 Durchmusterung objects Eridani, 02 017824 013288 0850
Tau2 Eridani
Astronomy
542
75,047
https://en.wikipedia.org/wiki/Aeroelasticity
Aeroelasticity is the branch of physics and engineering studying the interactions between the inertial, elastic, and aerodynamic forces occurring while an elastic body is exposed to a fluid flow. The study of aeroelasticity may be broadly classified into two fields: static aeroelasticity dealing with the static or steady state response of an elastic body to a fluid flow, and dynamic aeroelasticity dealing with the body's dynamic (typically vibrational) response. Aircraft are prone to aeroelastic effects because they need to be lightweight while enduring large aerodynamic loads. Aircraft are designed to avoid the following aeroelastic problems: divergence where the aerodynamic forces increase the twist of a wing which further increases forces; control reversal where control activation produces an opposite aerodynamic moment that reduces, or in extreme cases reverses, the control effectiveness; and flutter which is uncontained vibration that can lead to the destruction of an aircraft. Aeroelasticity problems can be prevented by adjusting the mass, stiffness or aerodynamics of structures which can be determined and verified through the use of calculations, ground vibration tests and flight flutter trials. Flutter of control surfaces is usually eliminated by the careful placement of mass balances. The synthesis of aeroelasticity with thermodynamics is known as aerothermoelasticity, and its synthesis with control theory is known as aeroservoelasticity. History The second failure of Samuel Langley's prototype plane on the Potomac was attributed to aeroelastic effects (specifically, torsional divergence). An early scientific work on the subject was George Bryan's Theory of the Stability of a Rigid Aeroplane published in 1906. Problems with torsional divergence plagued aircraft in the First World War and were solved largely by trial-and-error and ad hoc stiffening of the wing. The first recorded and documented case of flutter in an aircraft was that which occurred to a Handley Page O/400 bomber during a flight in 1916, when it suffered a violent tail oscillation, which caused extreme distortion of the rear fuselage and the elevators to move asymmetrically. Although the aircraft landed safely, in the subsequent investigation F. W. Lanchester was consulted. One of his recommendations was that left and right elevators should be rigidly connected by a stiff shaft, which was to subsequently become a design requirement. In addition, the National Physical Laboratory (NPL) was asked to investigate the phenomenon theoretically, which was subsequently carried out by Leonard Bairstow and Arthur Fage. In 1926, Hans Reissner published a theory of wing divergence, leading to much further theoretical research on the subject. The term aeroelasticity itself was coined by Harold Roxbee Cox and Alfred Pugsley at the Royal Aircraft Establishment (RAE), Farnborough in the early 1930s. In the development of aeronautical engineering at Caltech, Theodore von Kármán started a course "Elasticity applied to Aeronautics". After teaching the course for one term, Kármán passed it over to Ernest Edwin Sechler, who developed aeroelasticity in that course and in publication of textbooks on the subject. In 1947, Arthur Roderick Collar defined aeroelasticity as "the study of the mutual interaction that takes place within the triangle of the inertial, elastic, and aerodynamic forces acting on structural members exposed to an airstream, and the influence of this study on design". 
Static aeroelasticity In an aeroplane, two significant static aeroelastic effects may occur. Divergence is a phenomenon in which the elastic twist of the wing suddenly becomes theoretically infinite, typically causing the wing to fail. Control reversal is a phenomenon occurring only in wings with ailerons or other control surfaces, in which these control surfaces reverse their usual functionality (e.g., the rolling direction associated with a given aileron moment is reversed). Divergence Divergence occurs when a lifting surface deflects under aerodynamic load in a direction which further increases lift in a positive feedback loop. The increased lift deflects the structure further, which eventually brings the structure to the point of divergence. Unlike flutter, another aeroelastic problem that involves oscillation, divergence drives the lifting surface steadily in one direction, and when the point of divergence is reached the structure deforms, typically to failure. Control reversal Control surface reversal is the loss (or reversal) of the expected response of a control surface, due to deformation of the main lifting surface. For simple models (e.g. single aileron on an Euler-Bernoulli beam), control reversal speeds can be derived analytically as for torsional divergence. Control reversal can be used to aerodynamic advantage, and forms part of the Kaman servo-flap rotor design. Dynamic aeroelasticity Dynamic aeroelasticity studies the interactions among aerodynamic, elastic, and inertial forces. Examples of dynamic aeroelastic phenomena are: Flutter Flutter is a dynamic instability of an elastic structure in a fluid flow, caused by positive feedback between the body's deflection and the force exerted by the fluid flow. In a linear system, "flutter point" is the point at which the structure is undergoing simple harmonic motion—zero net damping—and so any further decrease in net damping will result in a self-oscillation and eventual failure. "Net damping" can be understood as the sum of the structure's natural positive damping and the negative damping of the aerodynamic force. Flutter can be classified into two types: hard flutter, in which the net damping decreases very suddenly, very close to the flutter point; and soft flutter, in which the net damping decreases gradually. In water the mass ratio of the pitch inertia of the foil to that of the circumscribing cylinder of fluid is generally too low for binary flutter to occur, as shown by explicit solution of the simplest pitch and heave flutter stability determinant. Structures exposed to aerodynamic forces—including wings and aerofoils, but also chimneys and bridges—are generally designed carefully within known parameters to avoid flutter. Blunt shapes, such as chimneys, can give off a continuous stream of vortices known as a Kármán vortex street, which can induce structural oscillations. Strakes are typically wrapped around chimneys to stop the formation of these vortices. In complex structures where both the aerodynamics and the mechanical properties of the structure are not fully understood, flutter can be discounted only through detailed testing. Even changing the mass distribution of an aircraft or the stiffness of one component can induce flutter in an apparently unrelated aerodynamic component. 
At its mildest, this can appear as a "buzz" in the aircraft structure, but at its most violent, it can develop uncontrollably with great speed and cause serious damage to the aircraft or lead to its destruction, as in Northwest Airlines Flight 2 in 1938, Braniff Flight 542 in 1959, or the prototypes for Finland's VL Myrsky fighter aircraft in the early 1940s. Famously, the original Tacoma Narrows Bridge was destroyed as a result of aeroelastic fluttering. Aeroservoelasticity In some cases, automatic control systems have been demonstrated to help prevent or limit flutter-related structural vibration. Propeller whirl flutter Propeller whirl flutter is a special case of flutter involving the aerodynamic and inertial effects of a rotating propeller and the stiffness of the supporting nacelle structure. Dynamic instability can occur involving pitch and yaw degrees of freedom of the propeller and the engine supports leading to an unstable precession of the propeller. Failure of the engine supports led to whirl flutter occurring on two Lockheed L-188 Electra aircraft, in 1959 on Braniff Flight 542 and again in 1960 on Northwest Orient Airlines Flight 710. Transonic aeroelasticity Flow is highly non-linear in the transonic regime, dominated by moving shock waves. Avoiding flutter is mission-critical for aircraft that fly through transonic Mach numbers. The role of shock waves was first analyzed by Holt Ashley. A phenomenon that impacts stability of aircraft known as "transonic dip", in which the flutter speed can get close to flight speed, was reported in May 1976 by Farmer and Hanson of the Langley Research Center. Buffeting Buffeting is a high-frequency instability, caused by airflow separation or shock wave oscillations from one object striking another. It is caused by a sudden impulse of load increasing. It is a random forced vibration. Generally it affects the tail unit of the aircraft structure due to air flow downstream of the wing. The methods for buffet detection are: Pressure coefficient diagram Pressure divergence at trailing edge Computing separation from trailing edge based on Mach number Normal force fluctuating divergence Prediction and cure In the period 1950–1970, AGARD developed the Manual on Aeroelasticity which details the processes used in solving and verifying aeroelastic problems along with standard examples that can be used to test numerical solutions. Aeroelasticity involves not just the external aerodynamic loads and the way they change but also the structural, damping and mass characteristics of the aircraft. Prediction involves making a mathematical model of the aircraft as a series of masses connected by springs and dampers which are tuned to represent the dynamic characteristics of the aircraft structure. The model also includes details of applied aerodynamic forces and how they vary. The model can be used to predict the flutter margin and, if necessary, test fixes to potential problems. Small carefully chosen changes to mass distribution and local structural stiffness can be very effective in solving aeroelastic problems. Methods of predicting flutter in linear structures include the p-method, the k-method and the p-k method. For nonlinear systems, flutter is usually interpreted as a limit cycle oscillation (LCO), and methods from the study of dynamical systems can be used to determine the speed at which flutter will occur. 
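As a minimal illustration of the kind of model used in these predictions, the classic two-dimensional "typical section" found in textbooks reduces static divergence to a single equation. For a wing section of chord c, restrained in twist by a spring of stiffness K_θ per unit span, with lift-curve slope C_{L_α} and elastic axis a distance e behind the aerodynamic centre, the twist grows without bound at the divergence dynamic pressure (this is a standard idealization, not a calculation for any particular aircraft):

q_D = \frac{K_\theta}{e \, c \, C_{L_\alpha}}, \qquad q_D = \tfrac{1}{2} \rho V_D^2 .

Dynamic flutter analysis extends the same idea to the full equations of motion, schematically M\ddot{x} + C\dot{x} + Kx = q_\infty A(k)\,x, where A(k) is an aerodynamic influence matrix that depends on the reduced frequency k; harmonic solutions are sought, and the flutter speed is the lowest airspeed at which the net damping of any mode passes through zero. The p-, k- and p-k methods mentioned above are different iteration schemes for locating that point.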
Media These videos detail the Active Aeroelastic Wing two-phase NASA-Air Force flight research program to investigate the potential of aerodynamically twisting flexible wings to improve maneuverability of high-performance aircraft at transonic and supersonic speeds, with traditional control surfaces such as ailerons and leading-edge flaps used to induce the twist. Notable aeroelastic failures The original Tacoma Narrows Bridge was destroyed as a result of aeroelastic fluttering. Propeller whirl flutter of the Lockheed L-188 Electra on Braniff Flight 542. 1931 Transcontinental & Western Air Fokker F-10 crash. Body freedom flutter of the GAF Jindivik drone. See also Adaptive compliant wing Aerospace engineering Kármán vortex street Mathematical modeling Oscillation Parker Variable Wing Vortex shedding Vortex-induced vibration X-53 Active Aeroelastic Wing References Further reading Bisplinghoff, R. L., Ashley, H. and Halfman, H., Aeroelasticity. Dover Science, 1996, , 880 p. Maurice Biot & L. Arnold (1948) "Low speed flutter and its physical interpretation", Journal of Aeronautical Sciences 15: 232–6 Dowell, E. H., A Modern Course on Aeroelasticity. . Fung, Y. C., An Introduction to the Theory of Aeroelasticity. Dover, 1994, . Hodges, D. H. and Pierce, A., Introduction to Structural Dynamics and Aeroelasticity, Cambridge, 2002, . Wright, J. R. and Cooper, J. E., Introduction to Aircraft Aeroelasticity and Loads, Wiley 2007, . Hoque, M. E., "Active Flutter Control", LAP Lambert Academic Publishing, Germany, 2010, . Collar, A. R., "The first fifty years of aeroelasticity", Aerospace, vol. 5, no. 2, pp. 12–20, 1978. Garrick, I. E. and Reed W. H., "Historical development of aircraft flutter", Journal of Aircraft, vol. 18, pp. 897–912, Nov. 1981. External links Aeroelasticity Branch – NASA Langley Research Center DLR Institute of Aeroelasticity National Aerospace Laboratory The Aeroelasticity Group – Texas A&M University NACA Technical Reports – NASA Langley Research Center NASA Aeroelasticity Handbook Aerodynamics Aircraft wing design Aerospace engineering Solid mechanics Elasticity (physics) Articles containing video clips
Aeroelasticity
Physics,Chemistry,Materials_science,Engineering
2,525
42,764
https://en.wikipedia.org/wiki/Hagia%20Sophia
Hagia Sophia (; ; ; ), officially the Hagia Sophia Grand Mosque,(; ), is a mosque and former church serving as a major cultural and historical site in Istanbul, Turkey. The last of three church buildings to be successively erected on the site by the Eastern Roman Empire, it was completed in AD 537, becoming the world's largest interior space and among the first to employ a fully pendentive dome. It is considered the epitome of Byzantine architecture and is said to have "changed the history of architecture". The site was an Eastern rite church from AD 360 to 1453, except for a brief time as a Latin Catholic church between the Fourth Crusade in 1204 and 1261. After the fall of Constantinople in 1453, it served as a mosque until 1935, when it became an interfaith museum, until being controversially reclassified solely as a mosque in 2020. The current structure was built by the Byzantine emperor Justinian I as the Christian cathedral of Constantinople for the Byzantine Empire between 532 and 537, and was designed by the Greek geometers Isidore of Miletus and Anthemius of Tralles. It was formally called the Church of God's Holy Wisdom, () the third church of the same name to occupy the site, as the prior one had been destroyed in the Nika riots. As the episcopal see of the ecumenical patriarch of Constantinople, it remained the world's largest cathedral for nearly a thousand years, until the Seville Cathedral was completed in 1520. Hagia Sophia became the paradigmatic Orthodox church form, and its architectural style was emulated by Ottoman mosques a thousand years later. The Hagia Sophia served as an architectural inspiration for many other religious buildings including the Hagia Sophia in Thessaloniki, Panagia Ekatontapiliani, the Şehzade Mosque, the Süleymaniye Mosque, the Rüstem Pasha Mosque and the Kılıç Ali Pasha Complex. The religious and spiritual centre of the Eastern Orthodox Church for nearly one thousand years, the church was dedicated to the Holy Wisdom. The church has been described as "holding a unique position in the Christian world", and as "an architectural and cultural icon of Byzantine and Eastern Orthodox civilization". It was where the excommunication of Patriarch Michael I Cerularius was officially delivered by Humbert of Silva Candida, the envoy of Pope Leo IX in 1054, an act considered the start of the East–West Schism. In 1204, it was converted during the Fourth Crusade into a Catholic cathedral under the Latin Empire, before being returned to the Eastern Orthodox Church upon the restoration of the Byzantine Empire in 1261. Enrico Dandolo, the doge of Venice who led the Fourth Crusade and the 1204 Sack of Constantinople, was buried in the church. After the fall of Constantinople to the Ottoman Empire in 1453, it was converted to a mosque by Mehmed the Conqueror and became the principal mosque of Istanbul until the 1616 construction of the Sultan Ahmed Mosque. Upon its conversion, the bells, altar, iconostasis, ambo, and baptistery were removed, while iconography, such as the mosaic depictions of Jesus, Mary, Christian saints and angels were removed or plastered over. Islamic architectural additions included four minarets, a minbar and a mihrab. The patriarchate moved to the Church of the Holy Apostles, which became the city's cathedral. The complex remained a mosque until 1931, when it was closed to the public for four years. It was re-opened in 1935 as a museum under the secular Republic of Turkey, and the building was Turkey's most visited tourist attraction . 
In July 2020, the Council of State annulled the 1934 decision to establish the museum, and the Hagia Sophia was reclassified as a mosque. The 1934 decree was ruled to be unlawful under both Ottoman and Turkish law as Hagia Sophia's , endowed by Sultan Mehmed, had designated the site a mosque. Proponents of the decision argued the Hagia Sophia was the personal property of the sultan. The decision to designate Hagia Sophia as a mosque was highly controversial. It resulted in divided opinions and drew condemnation from the Turkish opposition, UNESCO, the World Council of Churches and the International Association of Byzantine Studies, as well as numerous international leaders, while several Muslim leaders in Turkey and other countries welcomed its conversion into a mosque. History Church of Constantius II The first church on the site was known as the () because of its size compared to the sizes of the contemporary churches in the city. According to the Chronicon Paschale, the church was consecrated on 15 February 360, during the reign of the emperor Constantius II () by the Arian bishop Eudoxius of Antioch. It was built next to the area where the Great Palace was being developed. According to the 5th-century ecclesiastical historian Socrates of Constantinople, the emperor Constantius had "constructed the Great Church alongside that called Irene which because it was too small, the emperor's father [Constantine] had enlarged and beautified". A tradition which is not older than the 7th or 8th century reports that the edifice was built by Constantius' father, Constantine the Great (). Hesychius of Miletus wrote that Constantine built Hagia Sophia with a wooden roof and removed 427 (mostly pagan) statues from the site. The 12th-century chronicler Joannes Zonaras reconciles the two opinions, writing that Constantius had repaired the edifice consecrated by Eusebius of Nicomedia, after it had collapsed. Since Eusebius was the bishop of Constantinople from 339 to 341, and Constantine died in 337, it seems that the first church was erected by Constantius. The nearby Hagia Irene ("Holy Peace") church was completed earlier and served as cathedral until the Great Church was completed. Besides Hagia Irene, there is no record of major churches in the city-centre before the late 4th century. Rowland Mainstone argued the 4th-century church was not yet known as Hagia Sophia. Though its name as the 'Great Church' implies that it was larger than other Constantinopolitan churches, the only other major churches of the 4th century were the Church of St Mocius, which lay outside the Constantinian walls and was perhaps attached to a cemetery, and the Church of the Holy Apostles. The church itself is known to have had a timber roof, curtains, columns, and an entrance that faced west. It likely had a narthex and is described as being shaped like a Roman circus. This may mean that it had a U-shaped plan like the basilicas of San Marcellino e Pietro and Sant'Agnese fuori le mura in Rome. However, it may also have been a more conventional three-, four-, or five-aisled basilica, perhaps resembling the original Church of the Holy Sepulchre in Jerusalem or the Church of the Nativity in Bethlehem. The building was likely preceded by an atrium, as in the later churches on the site. According to Ken Dark and Jan Kostenec, a further remnant of the 4th century basilica may exist in a wall of alternating brick and stone banded masonry immediately to the west of the Justinianic church. 
The top part of the wall is constructed with bricks stamped with brick-stamps dating from the 5th century, but the lower part is constructed of bricks typical of the 4th century. This wall was probably part of the propylaeum at the west front of both the Constantinian and Theodosian Great Churches. The building was accompanied by a baptistery and a skeuophylakion. A hypogeum, perhaps with a martyrium above it, was discovered before 1946, and the remnants of a brick wall with traces of marble revetment were identified in 2004. The hypogeum was a tomb which may have been part of the 4th-century church or may have been from the pre-Constantinian city of Byzantium. The skeuophylakion is said by Palladius to have had a circular floor plan, and since some U-shaped basilicas in Rome were funerary churches with attached circular mausolea (the Mausoleum of Constantina and the Mausoleum of Helena), it is possible it originally had a funerary function, though by 405 its use had changed. A later account credited a woman called Anna with donating the land on which the church was built in return for the right to be buried there. Excavations on the western side of the site of the first church under the propylaeum wall reveal that the first church was built atop a road about wide. According to early accounts, the first Hagia Sophia was built on the site of an ancient pagan temple, although there are no artefacts to confirm this. The Patriarch of Constantinople John Chrysostom came into conflict with Empress Aelia Eudoxia, wife of the emperor Arcadius, and was sent into exile on 20 June 404. During the subsequent riots, this first church was largely burnt down. Palladius noted that the 4th-century skeuophylakion survived the fire. According to Dark and Kostenec, the fire may only have affected the main basilica, leaving the surrounding ancillary buildings intact. Church of Theodosius II A second church on the site was ordered by Theodosius II, who inaugurated it on 10 October 415. The Notitia Urbis Constantinopolitanae, a fifth-century list of monuments, names Hagia Sophia as the "Great Church", while the former cathedral Hagia Irene is referred to as the "Old Church". At the time of Socrates of Constantinople around 440, "both churches [were] enclosed by a single wall and served by the same clergy". Thus, the complex would have encompassed a large area including the future site of the Hospital of Samson. If the fire of 404 destroyed only the 4th-century main basilica church, then the 5th-century Theodosian basilica could have been built surrounded by a complex constructed primarily during the fourth century. During the reign of Theodosius II, the emperor's elder sister, the Augusta Pulcheria, was challenged by the patriarch Nestorius. The patriarch denied the Augusta access to the sanctuary of the "Great Church", likely on 15 April 428. According to the anonymous Letter to Cosmas, the virgin empress, a promoter of the cult of the Virgin Mary who habitually partook in the Eucharist at the sanctuary of Nestorius's predecessors, claimed right of entry because of her equivalent position to the Theotokos – the Virgin Mary – "having given birth to God". Their theological differences were part of the controversy over the title theotokos that resulted in the Council of Ephesus and the stimulation of Monophysitism and Nestorianism, a doctrine which, like Nestorius, rejected the use of the title.
Pulcheria along with Pope Celestine I and Patriarch Cyril of Alexandria had Nestorius overthrown, condemned at the ecumenical council, and exiled. The area of the western entrance to the Justinianic Hagia Sophia revealed the western remains of its Theodosian predecessor, as well as some fragments of the Constantinian church. German archaeologist Alfons Maria Schneider began conducting archaeological excavations during the mid-1930s, publishing his final report in 1941. Excavations in the area that had once been the 6th-century atrium of the Justinianic church revealed the monumental western entrance and atrium, along with columns and sculptural fragments from both 4th- and 5th-century churches. Further digging was abandoned for fear of harming the structural integrity of the Justinianic building, but parts of the excavation trenches remain uncovered, laying bare the foundations of the Theodosian building. The basilica was built by architect Rufinus. The church's main entrance, which may have had gilded doors, faced west, and there was an additional entrance to the east. There was a central pulpit and likely an upper gallery, possibly employed as a matroneum (women's section). The exterior was decorated with elaborate carvings of rich Theodosian-era designs, fragments of which have survived, while the floor just inside the portico was embellished with polychrome mosaics. The surviving carved gable end from the centre of the western façade is decorated with a cross-roundel. Fragments of a frieze of reliefs with 12 lambs representing the 12 apostles also remain; unlike Justinian's 6th-century church, the Theodosian Hagia Sophia had both colourful floor mosaics and external decorative sculpture. At the western end, surviving stone fragments of the structure show there was vaulting, at least at the western end. The Theodosian building had a monumental propylaeum hall with a portico that may account for this vaulting, which was thought by the original excavators in the 1930s to be part of the western entrance of the church itself. The propylaeum opened onto an atrium which lay in front of the basilica church itself. Preceding the propylaeum was a steep monumental staircase following the contours of the ground as it sloped away westwards in the direction of the Strategion, the Basilica, and the harbours of the Golden Horn. This arrangement would have resembled the steps outside the atrium of the Constantinian Old St Peter's Basilica in Rome. Near the staircase, there was a cistern, perhaps to supply a fountain in the atrium or for worshippers to wash with before entering. The 4th-century skeuophylakion was replaced in the 5th century by the present-day structure, a rotunda constructed of banded masonry in the lower two levels and of plain brick masonry in the third. Originally this rotunda, probably employed as a treasury for liturgical objects, had a second-floor internal gallery accessed by an external spiral staircase and two levels of niches for storage. A further row of windows with marble window frames on the third level remain bricked up. The gallery was supported on monumental consoles with carved acanthus designs, similar to those used on the late 5th-century Column of Leo. A large lintel of the skeuophylakion's western entrance – bricked up during the Ottoman era – was discovered inside the rotunda when it was archaeologically cleared to its foundations in 1979, during which time the brickwork was also repointed. The skeuophylakion was again restored in 2014 by the Vakıflar. 
A fire started during the tumult of the Nika Revolt, which had begun nearby in the Hippodrome of Constantinople, and the second Hagia Sophia was burnt to the ground on 13–14 January 532. The court historian Procopius described the destruction. Church of Justinian I (current structure) On 23 February 532, only a few weeks after the destruction of the second basilica, Emperor Justinian I inaugurated the construction of a third and entirely different basilica, larger and more majestic than its predecessors. Justinian appointed two architects, the mathematician Anthemius of Tralles and the geometer and engineer Isidore of Miletus, to design the building. Construction of the church began in 532 during the short tenure of Phocas as praetorian prefect. Although Phocas had been arrested in 529 as a suspected practitioner of paganism, he replaced John the Cappadocian after the Nika Riots saw the destruction of the Theodosian church. According to John the Lydian, Phocas was responsible for funding the initial construction of the building with 4,000 Roman pounds of gold, but he was dismissed from office in October 532. John the Lydian wrote that Phocas had acquired the funds by moral means, but Evagrius Scholasticus later wrote that the money had been obtained unjustly. According to Anthony Kaldellis, both of Hagia Sophia's architects named by Procopius were associated with the school of the pagan philosopher Ammonius of Alexandria. It is possible that both they and John the Lydian considered Hagia Sophia a great temple for the supreme Neoplatonist deity, who manifested himself through light and the sun. John the Lydian describes the church as the "temenos of the Great God". Originally the exterior of the church was covered with marble veneer, as indicated by remaining pieces of marble and surviving attachments for lost panels on the building's western face. The white marble cladding of much of the church, together with gilding of some parts, would have given Hagia Sophia a shimmering appearance quite different from the brick- and plaster-work of the modern period, and would have significantly increased its visibility from the sea. The cathedral's interior surfaces were sheathed with polychrome marbles, green and white with purple porphyry, and gold mosaics. The exterior was clad in stucco that was tinted yellow and red during the 19th-century restorations by the Fossati architects. The construction is described by Procopius in On Buildings. Columns and other marble elements were imported from throughout the Mediterranean, although the columns were once thought to be spoils from cities such as Rome and Ephesus. Even though they were made specifically for Hagia Sophia, they vary in size. More than ten thousand people were employed during the construction process. This new church was contemporaneously recognized as a major work of architecture. Outside the church was an elaborate array of monuments around the bronze-plated Column of Justinian, topped by an equestrian statue of the emperor which dominated the Augustaeum, the open square outside the church which connected it with the Great Palace complex through the Chalke Gate. At the edge of the Augustaeum were the Milion and the Regia, the first stretch of Constantinople's main thoroughfare, the Mese. Also facing the Augustaeum were the enormous Constantinian thermae, the Baths of Zeuxippus, and the Justinianic civic basilica under which was the vast cistern known as the Basilica Cistern. On the opposite side of Hagia Sophia was the former cathedral, Hagia Irene.
Referring to the destruction of the Theodosian Hagia Sophia and comparing the new church with the old, Procopius lauded the Justinianic building, writing in De aedificiis: Upon seeing the finished building, the Emperor reportedly said: "Solomon, I have surpassed thee" (). Justinian and Patriarch Menas inaugurated the new basilica on 27 December 537, 5 years and 10 months after construction started, with much pomp. Hagia Sophia was the seat of the Patriarchate of Constantinople and a principal setting for Byzantine imperial ceremonies, such as coronations. The basilica offered sanctuary from persecution to criminals, although there was disagreement about whether Justinian had intended for murderers to be eligible for asylum. Earthquakes in August 553 and on 14 December 557 caused cracks in the main dome and eastern semi-dome. According to the Chronicle of John Malalas, during a subsequent earthquake on 7 May 558, the eastern semi-dome collapsed, destroying the ambon, altar, and ciborium. The collapse was due mainly to the excessive bearing load and to the enormous shear load of the dome, which was too flat. These caused the deformation of the piers which sustained the dome. Justinian ordered an immediate restoration. He entrusted it to Isidorus the Younger, nephew of Isidore of Miletus, who used lighter materials. The entire vault had to be taken down and rebuilt 20 Byzantine feet () higher than before, giving the building its current interior height of . Moreover, Isidorus changed the dome type, erecting a ribbed dome with pendentives whose diameter was between 32.7 and 33.5 m. Under Justinian's orders, eight Corinthian columns were disassembled from Baalbek, Lebanon and shipped to Constantinople around 560. This reconstruction, which gave the church its present 6th-century form, was completed in 562. The poet Paul the Silentiary composed an ekphrasis, or long visual poem, for the re-dedication of the basilica presided over by Patriarch Eutychius on 24 December 562. Paul the Silentiary's poem is conventionally known under the Latin title Descriptio Sanctae Sophiae, and he was also author of another ekphrasis on the ambon of the church, the Descripto Ambonis. According to the history of the patriarch Nicephorus I and the chronicler Theophanes the Confessor, various liturgical vessels of the cathedral were melted down on the order of the emperor Heraclius () after the capture of Alexandria and Roman Egypt by the Sasanian Empire during the Byzantine–Sasanian War of 602–628. Theophanes states that these were made into gold and silver coins, and a tribute was paid to the Avars. The Avars attacked the extramural areas of Constantinople in 623, causing the Byzantines to move the "garment" relic () of Mary, mother of Jesus to Hagia Sophia from its usual shrine of the Church of the Theotokos at Blachernae just outside the Theodosian Walls. On 14 May 626, the Scholae Palatinae, an elite body of soldiers, protested in Hagia Sophia against a planned increase in bread prices, after a stoppage of the Cura Annonae rations resulting from the loss of the grain supply from Egypt. The Persians under Shahrbaraz and the Avars together laid the siege of Constantinople in 626; according to the Chronicon Paschale, on 2 August 626, Theodore Syncellus, a deacon and presbyter of Hagia Sophia, was among those who negotiated unsuccessfully with the khagan of the Avars. 
A homily, attributed by existing manuscripts to Theodore Syncellus and possibly delivered on the anniversary of the event, describes the translation of the Virgin's garment and its ceremonial re-translation to Blachernae by the patriarch Sergius I after the threat had passed. Another eyewitness account of the Avar–Persian siege was written by George of Pisidia, a deacon of Hagia Sophia and an administrative official for the patriarchate, from Antioch in Pisidia. Both George and Theodore, likely members of Sergius's literary circle, attribute the defeat of the Avars to the intervention of the Theotokos, a belief that strengthened in the following centuries. In 726, the emperor Leo the Isaurian issued a series of edicts against the veneration of images, ordering the army to destroy all icons – ushering in the period of Byzantine iconoclasm. At that time, all religious pictures and statues were removed from the Hagia Sophia. Following a brief hiatus during the reign of Empress Irene (797–802), the iconoclasts returned. Emperor Theophilus had two-winged bronze doors with his monograms installed at the southern entrance of the church. The basilica suffered damage, first in a great fire in 859, and again in an earthquake on 8 January 869 that caused the collapse of one of the half-domes. Emperor Basil I ordered repair of the tympana, arches, and vaults. In his book De caerimoniis aulae Byzantinae ("Book of Ceremonies"), the emperor Constantine VII wrote a detailed account of the ceremonies held in the Hagia Sophia by the emperor and the patriarch. Early in the 10th century, the pagan ruler of the Kievan Rus' sent emissaries to his neighbors to learn about Judaism, Islam, and Roman and Orthodox Christianity. After visiting Hagia Sophia, his emissaries reported back: "We were led into a place where they serve their God, and we did not know where we were, in heaven or on earth." In the 940s or 950s, probably around 954 or 955, after the Rus'–Byzantine War of 941 and the death of the Grand Prince of Kiev, Igor I, his widow Olga of Kiev – regent for her infant son Sviatoslav I – visited the emperor Constantine VII and was received as queen of the Rus' in Constantinople. She was probably baptized in Hagia Sophia's baptistery, taking the name of the reigning augusta, Helena Lecapena, and receiving the title zōstē patrikía and the styles of archontissa and hegemon of the Rus'. Her baptism was an important step towards the Christianization of the Kievan Rus', though the emperor's treatment of her visit in De caerimoniis does not mention baptism. Olga is deemed a saint and equal-to-the-apostles in the Eastern Orthodox Church. According to an early 14th-century source, the second church in Kiev, Saint Sophia's, was founded in anno mundi 6460 in the Byzantine calendar, or . The name of this future cathedral of Kiev probably commemorates Olga's baptism at Hagia Sophia. After the great earthquake of 25 October 989, which collapsed the western dome arch, Emperor Basil II asked the Armenian architect Trdat, creator of the Cathedral of Ani, to direct the repairs. He re-erected and reinforced the fallen dome arch, and rebuilt the west side of the dome with 15 dome ribs. The extent of the damage required six years of repair and reconstruction; the church was re-opened on 13 May 994.
At the end of the reconstruction, the church's decorations were renovated, including the addition of four immense paintings of cherubs; a new depiction of Christ on the dome; a burial cloth of Christ shown on Fridays, and on the apse a new depiction of the Virgin Mary holding Jesus, between the apostles Peter and Paul. On the great side arches were painted the prophets and the teachers of the church. According to the 13th-century Greek historian Niketas Choniates, the emperor John II Comnenus celebrated a revived Roman triumph after his victory over the Danishmendids at the siege of Kastamon in 1133. After proceeding through the streets on foot carrying a cross with a silver quadriga bearing the icon of the Virgin Mary, the emperor participated in a ceremony at the cathedral before entering the imperial palace. In 1168, another triumph was held by the emperor Manuel I Comnenus, again preceding with a gilded silver quadriga bearing the icon of the Virgin from the now-demolished East Gate (or Gate of St Barbara, later the ) in the Propontis Wall, to Hagia Sophia for a thanks-giving service, and then to the imperial palace. In 1181, the daughter of the emperor Manuel I, Maria Comnena, and her husband, the caesar Renier of Montferrat, fled to Hagia Sophia at the culmination of their dispute with the empress Maria of Antioch, regent for her son, the emperor Alexius II Comnenus. Maria Comnena and Renier occupied the cathedral with the support of the patriarch, refusing the imperial administration's demands for a peaceful departure. According to Niketas Choniates, they "transformed the sacred courtyard into a military camp", garrisoned the entrances to the complex with locals and mercenaries, and despite the strong opposition of the patriarch, made the "house of prayer into a den of thieves or a well-fortified and precipitous stronghold, impregnable to assault", while "all the dwellings adjacent to Hagia Sophia and adjoining the Augusteion were demolished by [Maria's] men". A battle ensued in the Augustaion and around the Milion, during which the defenders fought from the "gallery of the Catechumeneia (also called the Makron)" facing the Augusteion, from which they eventually retreated and took up positions in the exonarthex of Hagia Sophia itself. At this point, "the patriarch was anxious lest the enemy troops enter the temple, with unholy feet trample the holy floor, and with hands defiled and dripping with blood still warm plunder the all-holy dedicatory offerings". After a successful sally by Renier and his knights, Maria requested a truce, the imperial assault ceased, and an amnesty was negotiated by the megas doux Andronikos Kontostephanos and the megas hetaireiarches John Doukas. Greek historian Niketas Choniates compared the preservation of the cathedral to the efforts made by the 1st-century emperor Titus to avoid the destruction of the Second Temple during the siege of Jerusalem in the First Jewish–Roman War. Choniates reports that in 1182, a white hawk wearing jesses was seen to fly from the east to Hagia Sophia, flying three times from the "building of the Thōmaitēs" (a basilica erected on the southeastern side of the Augustaion) to the Palace of the Kathisma in the Great Palace, where new emperors were acclaimed. This was supposed to presage the end of the reign of Andronicus I Comnenus (). 
Choniates further writes that in 1203, during the Fourth Crusade, the emperors Isaac II Angelus and Alexius IV Angelus stripped Hagia Sophia of all gold ornaments and silver oil-lamps in order to pay off the Crusaders who had ousted Alexius III Angelus and helped Isaac return to the throne. Upon the subsequent Sack of Constantinople in 1204, the church was further ransacked and desecrated by the Crusaders, as described by Choniates, though he did not witness the events in person. According to his account, composed at the court of the rump Empire of Nicaea, Hagia Sophia was stripped of its remaining metal ornaments, its altar was smashed into pieces, and a "woman laden with sins" sang and danced on the synthronon. He adds that mules and donkeys were brought into the cathedral's sanctuary to carry away the gilded silver plating of the bema, the ambo, and the doors and other furnishings, and that one of them slipped on the marble floor and was accidentally disembowelled, further contaminating the place. According to Ali ibn al-Athir, whose treatment of the Sack of Constantinople was probably dependent on a Christian source, the Crusaders massacred some clerics who had surrendered to them. Much of the interior was damaged and would not be repaired until its return to Orthodox control in 1261. The sack of Hagia Sophia, and Constantinople in general, remained a sore point in Catholic–Eastern Orthodox relations. During the Latin occupation of Constantinople (1204–1261), the church became a Latin Catholic cathedral. Baldwin I of Constantinople () was crowned emperor on 16 May 1204 in Hagia Sophia in a ceremony which closely followed Byzantine practices. Enrico Dandolo, the Doge of Venice who commanded the sack and invasion of the city by the Latin Crusaders in 1204, is buried inside the church, probably in the upper eastern gallery. In the 19th century, an Italian restoration team placed a cenotaph marker, frequently mistaken as being a medieval artifact, near the probable location and is still visible today. The original tomb was destroyed by the Ottomans during the conversion of the church into a mosque. Upon the capture of Constantinople in 1261 by the Empire of Nicaea and the emperor Michael VIII Palaeologus, (), the church was in a dilapidated state. In 1317, emperor Andronicus II Palaeologus () ordered four new buttresses () to be built in the eastern and northern parts of the church, financing them with the inheritance of his late wife, Irene of Montferrat (1314). New cracks developed in the dome after the earthquake of October 1344, and several parts of the building collapsed on 19 May 1346. Repairs by architects Astras and Peralta began in 1354. On 12 December 1452, Isidore of Kiev proclaimed in Hagia Sophia the long-anticipated ecclesiastical union between the western Catholic and eastern Orthodox Churches as decided at the Council of Florence and decreed by the papal bull Laetentur Caeli, though it would be short-lived. The union was unpopular among the Byzantines, who had already expelled the Patriarch of Constantinople, Gregory III, for his pro-union stance. A new patriarch was not installed until after the Ottoman conquest. According to the Greek historian Doukas, the Hagia Sophia was tainted by these Catholic associations, and the anti-union Orthodox faithful avoided the cathedral, considering it to be a haunt of demons and a "Hellenic" temple of Roman paganism. 
Doukas also notes that after the Laetentur Caeli was proclaimed, the Byzantines dispersed discontentedly to nearby venues where they drank toasts to the Hodegetria icon, which had, according to late Byzantine tradition, interceded to save them in the former sieges of Constantinople by the Avar Khaganate and the Umayyad Caliphate. According to Nestor Iskander's Tale on the Taking of Tsargrad, the Hagia Sophia was the focus of an alarming omen interpreted as the Holy Spirit abandoning Constantinople on 21 May 1453, in the final days of the Siege of Constantinople. The sky lit up, illuminating the city, and "many people gathered and saw on the Church of the Wisdom, at the top of the window, a large flame of fire issuing forth. It encircled the entire neck of the church for a long time. The flame gathered into one; its flame altered, and there was an indescribable light. At once it took to the sky. ... The light itself has gone up to heaven; the gates of heaven were opened; the light was received; and again they were closed." This phenomenon was perhaps St Elmo's fire induced by gunpowder smoke and unusual weather. The author relates that the fall of the city to "Mohammadenism" was foretold in an omen seen by Constantine the Great – an eagle fighting with a snake – which also signified that "in the end Christianity will overpower Mohammedanism, will receive the Seven Hills, and will be enthroned in it". The eventual fall of Constantinople had long been predicted in apocalyptic literature. A reference to the destruction of a city founded on seven hills in the Book of Revelation was frequently understood to be about Constantinople, and the Apocalypse of Pseudo-Methodius had predicted an "Ishmaelite" conquest of the Roman Empire. In this text, the Muslim armies reach the Forum Bovis before being turned back by divine intervention; in later apocalyptic texts, the climactic turn takes place at the Column of Theodosius closer to Hagia Sophia; in others, it occurs at the Column of Constantine, which is closer still. Hagia Sophia is mentioned in a hagiography of uncertain date detailing the life of the Eastern Orthodox saint Andrew the Fool. The text is self-attributed to Nicephorus, a priest of Hagia Sophia, and contains a description of the end time in the form of a dialogue, in which the interlocutor, upon being told by the saint that Constantinople will be sunk in a flood and that "the waters as they gush forth will irresistibly deluge her and cover her and surrender her to the terrifying and immense sea of the abyss", says "some people say that the Great Church of God will not be submerged with the city but will be suspended in the air by an invisible power". The reply is given that "When the whole city sinks into the sea, how can the Great Church remain? Who will need her? Do you think God dwells in temples made with hands?" The Column of Constantine, however, is prophesied to endure. From the time of Procopius in the reign of Justinian, the equestrian imperial statue on the Column of Justinian in the Augustaion beside Hagia Sophia, which gestured towards Asia with right hand, was understood to represent the emperor holding back the threat to the Romans from the Sasanian Empire in the Roman–Persian Wars, while the orb or globus cruciger held in the statue's left was an expression of the global power of the Roman emperor. 
Subsequently, in the Arab–Byzantine wars, the threat held back by the statue became the Umayyad Caliphate, and later, the statue was thought to be fending off the advance of the Turks. The identity of the emperor was often confused with that of other famous saint-emperors like Theodosius I and Heraclius. The orb was frequently referred to as an apple in foreigners' accounts of the city, and it was interpreted in Greek folklore as a symbol of the Turks' mythological homeland in Central Asia, the "Lone Apple Tree". The orb fell to the ground in 1316 and was replaced by 1325, but while it was still in place around 1412, by the time Johann Schiltberger saw the statue in 1427, the "empire-apple" () had fallen to the earth. An attempt to raise it again in 1435 failed, and this amplified the prophecies of the city's fall. For the Turks, the "red apple" () came to symbolize Constantinople itself and subsequently the military supremacy of the Islamic caliphate over the Christian empire. In Niccolò Barbaro's account of the fall of the city in 1453, the Justinianic monument was interpreted in the last days of the siege as representing the city's founder Constantine the Great, indicating "this is the way my conqueror will come". According to Laonicus Chalcocondyles, Hagia Sophia was a refuge for the population during the city's capture. Despite the ill-repute and empty state of Hagia Sophia after December 1452, Doukas writes that after the Theodosian Walls were breached, the Byzantines took refuge there as the Turks advanced through the city: "All the women and men, monks, and nuns ran to the Great Church. They, both men and women, were holding in their arms their infants. What a spectacle! That street was crowded, full of human beings." He attributes their change of heart to a prophecy. In accordance with the traditional custom of the time, Sultan Mehmed II allowed his troops and his entourage three full days of unbridled pillage and looting in the city shortly after it was captured. This period saw the destruction of many Orthodox churches; Hagia Sophia itself was looted as the invaders believed it to contain the greatest treasures of the city. Shortly after the defence of the Walls of Constantinople collapsed and the victorious Ottoman troops entered the city, the pillagers and looters made their way to the Hagia Sophia and battered down its doors before storming inside. Once the three days passed, Mehmed was to claim the city's remaining contents for himself. However, by the end of the first day, he proclaimed that the looting should cease as he felt profound sadness when he toured the looted and enslaved city. Throughout the siege of Constantinople, the trapped people of the city participated in the Divine Liturgy and the Prayer of the Hours at the Hagia Sophia, and the church was a safe-haven and a refuge for many of those who were unable to contribute to the city's defence, including women, children, elderly, the sick and the wounded. As they were trapped in the church, the many congregants and other refugees inside became spoils-of-war to be divided amongst the triumphant invaders. The building was desecrated and looted, and those who sought shelter within the church were enslaved. While most of the elderly and the infirm, injured, and sick were killed, the remainder (mainly teenage males and young boys) were chained and sold into slavery. Mosque (1453–1935) Constantinople fell to the attacking Ottoman forces on 29 May 1453. 
Sultan Mehmed II entered the city and performed the Friday prayer and khutbah (sermon) in Hagia Sophia, and this action marked the official conversion of Hagia Sophia into a mosque. The church's priests and religious personnel continued to perform Christian rites, prayers, and ceremonies until they were compelled to stop by the invaders. When Mehmed and his entourage entered the church, he ordered that it be converted into a mosque immediately. One of the ʿulamāʾ (Islamic scholars) present climbed onto the church's ambo and recited the shahada ("There is no god but Allah, and Muhammad is his messenger"), thus marking the beginning of the conversion of the church into a mosque. Mehmed is reported to have taken a sword to a soldier who tried to pry up one of the paving slabs of the Proconnesian marble floor. As described by Western visitors before 1453, such as the Córdoban nobleman Pero Tafur and the Florentine geographer Cristoforo Buondelmonti, the church was in a dilapidated state, with several of its doors fallen from their hinges. Mehmed II ordered a renovation of the building. Mehmed attended the first Friday prayer in the mosque on 1 June 1453. Aya Sofya became the first imperial mosque of Istanbul. Most of the existing houses in the city and the area of the future Topkapı Palace were endowed to the corresponding waqf. From 1478, 2,360 shops, 1,300 houses, 4 caravanserais, 30 boza shops, and 23 shops of sheep heads and trotters gave their income to the foundation. Through the imperial charters of 1520 (AH 926) and 1547 (AH 954), shops and parts of the Grand Bazaar and other markets were added to the foundation. Before 1481, a small minaret was erected on the southwest corner of the building, above the stair tower. Mehmed's successor Bayezid II () later built another minaret at the northeast corner. One of the minarets collapsed after the earthquake of 1509, and around the middle of the 16th century they were both replaced by two diagonally opposite minarets built at the east and west corners of the edifice. In 1498, Bernardo Bonsignori was the last Western visitor to Hagia Sophia to report seeing the ancient Justinianic floor; shortly afterwards the floor was covered over with carpet and not seen again until the 19th century. In the 16th century, Sultan Suleiman the Magnificent () brought two colossal candlesticks from his conquest of the Kingdom of Hungary and placed them on either side of the mihrab. During Suleiman's reign, the mosaics above the narthex and imperial gates depicting Jesus, Mary, and various Byzantine emperors were covered by whitewash and plaster, which were removed in 1930 under the Turkish Republic. During the reign of Selim II (), the building started showing signs of fatigue and was extensively strengthened with the addition of structural supports to its exterior by Ottoman architect Mimar Sinan, who was also an earthquake engineer. In addition to strengthening the historic Byzantine structure, Sinan built two additional large minarets at the western end of the building, the original sultan's lodge and the türbe (mausoleum) of Selim II to the southeast of the building in 1576–1577 (AH 984). In order to do that, parts of the Patriarchate at the south corner of the building were pulled down the previous year. Moreover, the golden crescent was mounted on the top of the dome, and a respect zone 35 arşın (about 24 m) wide was imposed around the building, leading to the demolition of all houses within the perimeter. 
The türbe became the location of the tombs of 43 Ottoman princes. Murad III () imported two large alabaster Hellenistic urns from Pergamon (Bergama) and placed them on two sides of the nave. In 1594 (AH 1004) Mimar (court architect) Davud Ağa built the türbe of Murad III, where the Sultan and his valide, Safiye Sultan were buried. The octagonal mausoleum of their son Mehmed III () and his valide was built next to it in 1608 (AH 1017) by royal architect Dalgiç Mehmet Aĝa. His son Mustafa I () converted the baptistery into his türbe. In 1717, under the reign of Sultan Ahmed III (), the crumbling plaster of the interior was renovated, contributing indirectly to the preservation of many mosaics, which otherwise would have been destroyed by mosque workers. In fact, it was usual for the mosaic's tesserae—believed to be talismans—to be sold to visitors. Sultan Mahmud I ordered the restoration of the building in 1739 and added a medrese (a Koranic school, subsequently the library of the museum), an imaret (soup kitchen for distribution to the poor) and a library, and in 1740 he added a Şadirvan (fountain for ritual ablutions), thus transforming it into a külliye, or social complex. At the same time, a new sultan's lodge and a new mihrab were built inside. Renovation of 1847–1849 The 19th-century restoration of the Hagia Sophia was ordered by Sultan Abdulmejid I () and completed between 1847 and 1849 by eight hundred workers under the supervision of the Swiss-Italian architect brothers Gaspare and Giuseppe Fossati. The brothers consolidated the dome with a restraining iron chain and strengthened the vaults, straightened the columns, and revised the decoration of the exterior and the interior of the building. The mosaics in the upper gallery were exposed and cleaned, although many were recovered "for protection against further damage". Eight new gigantic circular-framed discs or medallions were hung from the cornice, on each of the four piers and at either side of the apse and the west doors. These were designed by the calligrapher Kazasker Mustafa Izzet Efendi (1801–1877) and painted with the names of Allah, Muhammad, the Rashidun (the first four caliphs: Abu Bakr, Umar, Uthman and Ali), and the two grandsons of Muhammad: Hasan and Husayn, the sons of Ali. In 1850, the architects Fossati built a new maqsura or caliphal loge in Neo-Byzantine columns and an Ottoman–Rococo style marble grille connecting to the royal pavilion behind the mosque. The new maqsura was built at the extreme east end of the northern aisle, next to the north-eastern pier. The existing maqsura in the apse, near the mihrab, was demolished. A new entrance was constructed for the sultan: the . The Fossati brothers also renovated the minbar and mihrab. Outside the main building, the minarets were repaired and altered so that they were of equal height. A clock building, the , was built by the Fossatis for use by the muwaqqit (the mosque timekeeper), and a new madrasa (Islamic school) was constructed. The was also built under their direction. When the restoration was finished, the mosque was re-opened with a ceremony on 13 July 1849. An edition of lithographs from drawings made during the Fossatis' work on Hagia Sophia was published in London in 1852, entitled: Aya Sophia of Constantinople as Recently Restored by Order of H.M. The Sultan Abdulmejid. Occupation of Istanbul (1918–1923) In the aftermath of the defeat of the Ottoman Empire in World War I, Constantinople was occupied by British, French, Italian, and Greek forces. 
On , the Greek Orthodox Christian military priest Eleftherios Noufrakis performed an unauthorized Divine Liturgy in the Hagia Sophia, the only such instance since the 1453 fall of Constantinople. The anti-occupation Sultanahmet demonstrations were held next to Hagia Sophia from March to May 1919. In Greece, the 500 drachma banknotes issued in 1923 featured Hagia Sophia. Museum (1935–2020) In 1935, the first Turkish President and founder of the Republic of Turkey, Mustafa Kemal Atatürk, transformed the building into a museum. During the Second World War, the minarets of the museum housed MG 08 machine guns. The carpet and the layer of mortar underneath were removed and marble floor decorations such as the omphalion appeared for the first time since the Fossatis' restoration, when the white plaster covering many of the mosaics had been removed. Due to neglect, the condition of the structure continued to deteriorate, prompting the World Monuments Fund (WMF) to include the Hagia Sophia in their 1996 and 1998 Watch Lists. During this time period, the building's copper roof had cracked, causing water to leak down over the fragile frescoes and mosaics. Moisture entered from below as well. Rising ground water increased the level of humidity within the monument, creating an unstable environment for stone and paint. The WMF secured a series of grants from 1997 to 2002 for the restoration of the dome. The first stage of work involved the structural stabilization and repair of the cracked roof, which was undertaken with the participation of the Turkish Ministry of Culture and Tourism. The second phase, the preservation of the dome's interior, afforded the opportunity to employ and train young Turkish conservators in the care of mosaics. By 2006, the WMF project was complete, though many areas of Hagia Sophia continue to require significant stability improvement, restoration, and conservation. In 2014, Hagia Sophia was the second most visited museum in Turkey, attracting almost 3.3 million visitors annually. While use of the complex as a place of worship (mosque or church) was strictly prohibited, in 1991 the Turkish government allowed the allocation of a pavilion in the museum complex (Ayasofya Müzesi Hünkar Kasrı) for use as a prayer room, and, since 2013, two of the museum's minarets had been used for voicing the call to prayer (the ezan) regularly. From the early 2010s, several campaigns and government high officials, notably Turkey's deputy prime minister Bülent Arınç in November 2013, demanded the Hagia Sophia be converted back into a mosque. In 2015, Pope Francis publicly acknowledged the Armenian genocide, which is officially denied in Turkey. In response, the mufti of Ankara, Mefail Hızlı, said he believed the Pope's remarks would accelerate the conversion of Hagia Sophia into a mosque. On 1 July 2016, Muslim prayers were held again in the Hagia Sophia for the first time in 85 years. That November, a Turkish NGO, the Association for the Protection of Historic Monuments and the Environment, filed a lawsuit for converting the museum into a mosque. The court decided it should stay as a 'monument museum'. In October 2016, Turkey's Directorate of Religious Affairs (Diyanet) appointed, for the first time in 81 years, a designated imam, Önder Soy, to the Hagia Sophia mosque (Ayasofya Camii Hünkar Kasrı), located at the Hünkar Kasrı, a pavilion for the sultans' private ablutions. Since then, the adhan has been regularly called out from the Hagia Sophia's all four minarets five times a day. 
On 13 May 2017, a large group of people, organized by the Anatolia Youth Association (AGD), gathered in front of Hagia Sophia and prayed the morning prayer with a call for the re-conversion of the museum into a mosque. On 21 June 2017 the Directorate of Religious Affairs () organized a special programme, broadcast live by state-run television TRT, which included the recitation of the Quran and prayers in Hagia Sophia, to mark the Laylat al-Qadr. Reversion to mosque (2018–present) Since 2018, Turkish president Recep Tayyip Erdoğan had talked of reverting the status of the Hagia Sophia back to a mosque, as a populist gesture. On 31 March 2018 Erdoğan recited the first verse of the Quran in the Hagia Sophia, dedicating the prayer to the "souls of all who left us this work as inheritance, especially Istanbul's conqueror," strengthening the political movement to make the Hagia Sophia a mosque once again, reversing Atatürk's measure of turning the Hagia Sophia into a secular museum. In March 2019 Erdoğan said that he would change the status of Hagia Sophia from a museum to a mosque, adding that it had been a "very big mistake" to turn it into a museum. As a UNESCO World Heritage site, this change would require approval from UNESCO's World Heritage Committee. In late 2019 Erdoğan's office took over the administration and upkeep of the nearby Topkapı Palace Museum, transferring responsibility for the site from the Ministry of Culture and Tourism by presidential decree. In 2020, Turkey's government celebrated the 567th anniversary of the Conquest of Constantinople with an Islamic prayer in Hagia Sophia. Erdoğan said during a televised broadcast "Al-Fath surah will be recited and prayers will be done at Hagia Sophia as part of conquest festival". In May, during the anniversary events, passages from the Quran were read in the Hagia Sophia. Greece condemned this action, while Turkey in response accused Greece of making "futile and ineffective statements". In June, the head of Turkey's Directorate of Religious Affairs () said that "we would be very happy to open Hagia Sophia for worship" and that if it happened "we will provide our religious services as we do in all our mosques". On 25 June, John Haldon, president of the International Association of Byzantine Studies, wrote an open letter to Erdoğan asking that he "consider the value of keeping the Aya Sofya as a museum". On 10 July 2020, the decision of the Council of Ministers from 1935 to transform the Hagia Sophia into a museum was annulled by the Council of State, decreeing that Hagia Sophia cannot be used "for any other purpose" than being a mosque and that the Hagia Sophia was property of the Fatih Sultan Mehmet Han Foundation. The council reasoned Ottoman Sultan Mehmet II, who conquered Istanbul, deemed the property to be used by the public as a mosque without any fees and was not within the jurisdiction of the Parliament or a ministry council. Despite secular and global criticism, Erdoğan signed a decree annulling the Hagia Sophia's museum status, reverting it to a mosque. The call to prayer was broadcast from the minarets shortly after the announcement of the change and rebroadcast by major Turkish news networks. The Hagia Sophia Museum's social media channels were taken down the same day, with Erdoğan announcing at a press conference that prayers themselves would be held there from 24 July. A presidential spokesperson said it would become a working mosque, open to anyone similar to the Parisian churches Sacré-Cœur and Notre-Dame. 
The spokesperson also said that the change would not affect the status of the Hagia Sophia as a UNESCO World Heritage site, and that "Christian icons" within it would continue to be protected. Earlier the same day, before the final decision, the Turkish Finance and Treasury Minister Berat Albayrak and the Justice Minister Abdulhamit Gül expressed their expectations of opening the Hagia Sophia to worship for Muslims. Mustafa Şentop, Speaker of Turkey's Grand National Assembly, said "a longing in the heart of our nation has ended". A presidential spokesperson claimed that all political parties in Turkey supported Erdoğan's decision, but the Peoples' Democratic Party had previously released a statement denouncing the decision, saying "decisions on human heritage cannot be made on the basis of political games played by the government". The mayor of Istanbul, Ekrem İmamoğlu, said that he supports the conversion "as long as it benefits Turkey", adding that he felt that Hagia Sophia has been a mosque since 1453. Ali Babacan attacked the policy of his former ally Erdoğan, saying the Hagia Sophia issue "has come to the agenda now only to cover up other problems". Orhan Pamuk, Turkish novelist and Nobel laureate, publicly denounced the move, saying "Kemal Atatürk changed... Hagia Sophia from a mosque to a museum, honouring all previous Greek Orthodox and Latin Catholic history, making it as a sign of Turkish modern secularism". On 17 July, Erdoğan announced that the first prayers in the Hagia Sophia would be open to between 1,000 and 1,500 worshippers, stating that Turkey had sovereign power over Hagia Sophia and was not obligated to bend to international opinion. While the Hagia Sophia has now been rehallowed as a mosque, the place remains open for visitors outside of prayer times. Entrance was initially free, but starting from 15 January 2024, foreign nationals have to pay an entrance fee. On 22 July, a turquoise-coloured carpet was laid to prepare the mosque for worshippers, attended by Ali Erbaş, head of the Diyanet. The omphalion was left exposed. Due to the COVID-19 pandemic, Erbaş said Hagia Sophia would accommodate up to 1,000 worshippers at a time and asked that they bring "masks, a prayer rug, patience and understanding". The mosque opened for Friday prayers on 24 July, the 97th anniversary of the signature of the Treaty of Lausanne, which established the borders of the modern Turkish Republic. The mosaics of the Virgin and Child in the apse were covered by white drapes. There had been proposals to conceal the mosaics with lasers during prayer times, but this idea was ultimately shelved. Erbaş proclaimed during his sermon, "Sultan Mehmet the Conqueror dedicated this magnificent construction to believers to remain a mosque until the Day of Resurrection". Erdoğan and some government ministers attended the midday prayers as many worshippers prayed outside; at one point the security cordon was breached and dozens of people broke through police lines. Turkey invited foreign leaders and officials, including Pope Francis, for the prayers. It is the fourth Byzantine church converted from museum to a mosque during Erdoğan's rule. In April 2022, the Hagia Sophia held its first Ramadan tarawih prayer in 88 years. 
International reaction and discussions Days before the final decision on the conversion was made, Ecumenical Patriarch Bartholomew I of Constantinople stated in a sermon that "the conversion of Hagia Sophia into a mosque would disappoint millions of Christians around the world". He also said that Hagia Sophia, which was "a vital center where East is embraced with the West", would "fracture these two worlds" in the event of conversion. The proposed conversion was decried by other Orthodox Christian leaders, the Russian Orthodox Church's Patriarch Kirill of Moscow stating that "a threat to Hagia Sophia [wa]s a threat to all of Christian civilization". Following the Turkish government's decision, UNESCO announced it "deeply regret[ted]" the conversion "made without prior discussion", and asked Turkey to "open a dialogue without delay", stating that the lack of negotiation was "regrettable". UNESCO further announced that the "state of conservation" of Hagia Sophia would be "examined" at the next session of the World Heritage Committee, urging Turkey "to initiate dialogue without delay, in order to prevent any detrimental effect on the universal value of this exceptional heritage". Ernesto Ottone, UNESCO's Assistant Director-General for Culture, said "It is important to avoid any implementing measure, without prior discussion with UNESCO, that would affect physical access to the site, the structure of the buildings, the site's moveable property, or the site's management". UNESCO's statement of 10 July said "these concerns were shared with the Republic of Turkey in several letters, and again yesterday evening with the representative of the Turkish Delegation", without a response. The World Council of Churches, which claims to represent 500 million Christians of 350 denominations, condemned the decision to convert the building into a mosque, saying that it would "inevitably create uncertainties, suspicions and mistrust"; the World Council of Churches urged Turkey's president Erdoğan "to reconsider and reverse" his decision "in the interests of promoting mutual understanding, respect, dialogue and cooperation, and avoiding cultivating old animosities and divisions". At the recitation of the Sunday Angelus prayer at St Peter's Square on 12 July, Pope Francis said, "My thoughts go to Istanbul. I think of Santa Sophia and I am very pained". The International Association of Byzantine Studies announced that its 21st International Congress, due to be held in Istanbul in 2021, would no longer be held there and was postponed to 2022. Josep Borrell, the European Union's High Representative for Foreign Affairs and Vice-President of the European Commission, released a statement calling the decisions by the Council of State and Erdoğan "regrettable" and pointing out that "as a founding member of the Alliance of Civilisations, Turkey has committed to the promotion of inter-religious and inter-cultural dialogue and to fostering of tolerance and co-existence." According to Borrell, the European Union member states' twenty-seven foreign ministers "condemned the Turkish decision to convert such an emblematic monument as the Hagia Sophia" at a meeting on 13 July, saying it "will inevitably fuel the mistrust, promote renewed division between religious communities and undermine our efforts at dialog and cooperation" and that "there was a broad support to call on the Turkish authorities to urgently reconsider and reverse this decision".
Greece denounced the conversion and considered it a breach of the UNESCO World Heritage titling. Greek culture minister Lina Mendoni called it an "open provocation to the civilised world" which "absolutely confirms that there is no independent justice" in Erdoğan's Turkey, and that his Turkish nationalism "takes his country back six centuries". Greece and Cyprus called for EU sanctions on Turkey. Morgan Ortagus, the spokesperson for the United States Department of State, noted: "We are disappointed by the decision by the government of Turkey to change the status of the Hagia Sophia." Jean-Yves Le Drian, foreign minister of France, said his country "deplores" the move, saying "these decisions cast doubt on one of the most symbolic acts of modern and secular Turkey". Vladimir Dzhabarov, deputy head of the foreign affairs committee of the Russian Federation Council, said that it "will not do anything for the Muslim world. It does not bring nations together, but on the contrary brings them into collision" and calling the move a "mistake". The former deputy prime minister of Italy, Matteo Salvini, held a demonstration in protest outside the Turkish consulate in Milan, calling for all plans for accession of Turkey to the European Union to be terminated "once and for all". In East Jerusalem, a protest was held outside the Turkish consulate on 13 July, with the burning of a Turkish flag and the display of the Greek flag and flag of the Greek Orthodox Church. In a statement the Turkish foreign ministry condemned the burning of the flag, saying "nobody can disrespect or encroach our glorious flag". Ersin Tatar, prime minister of the Turkish Republic of Northern Cyprus, which is recognized only by Turkey, welcomed the decision, calling it "sound" and "pleasing". He further criticized the government of Cyprus, claiming that "the Greek Cypriot administration, who burned down our mosques, should not have a say in this". Through a spokesman the Foreign Ministry of Iran welcomed the change, saying the decision was an "issue that should be considered as part of Turkey's national sovereignty" and "Turkey's internal affair". Sergei Vershinin, deputy foreign minister of Russia, said that the matter was of one of "internal affairs, in which, of course, neither we nor others should interfere." The Arab Maghreb Union was supportive. Ekrema Sabri, imam of the al-Aqsa Mosque, and Ahmed bin Hamad al-Khalili, grand mufti of Oman, both congratulated Turkey on the move. The Muslim Brotherhood was also in favour of the news. A spokesman for the Palestinian Islamist movement Hamas called the verdict "a proud moment for all Muslims". Pakistani politician Chaudhry Pervaiz Elahi of the Pakistan Muslim League (Q) welcomed the ruling, claiming it was "not only in accordance with the wishes of the people of Turkey but the entire Muslim world". The Muslim Judicial Council group in South Africa praised the move, calling it "a historic turning point". In Nouakchott, capital of Mauritania, there were prayers and celebrations topped by the sacrifice of a camel. On the other hand, Shawki Allam, grand mufti of Egypt, ruled that conversion of the Hagia Sophia to a mosque is "impermissible". When President Erdoğan announced that the first Muslim prayers would be held inside the building on 24 July, he added that "like all our mosques, the doors of Hagia Sophia will be wide open to locals and foreigners, Muslims and non-Muslims." 
Presidential spokesman İbrahim Kalın said that the icons and mosaics of the building would be preserved, and that "in regards to the arguments of secularism, religious tolerance and coexistence, there are more than four hundred churches and synagogues open in Turkey today." Ömer Çelik, spokesman for the ruling Justice and Development Party (AKP), announced on 13 July that entry to Hagia Sophia would be free of charge and open to all visitors outside prayer times, during which Christian imagery in the building's mosaics would be covered by curtains or lasers. The Turkish foreign minister, Mevlüt Çavuşoğlu, told TRT Haber on 13 July that the government was surprised at the reaction of UNESCO, saying that "We have to protect our ancestors' heritage. The function can be this way or that way – it does not matter". On 14 July the prime minister of Greece, Kyriakos Mitsotakis, said his government was "considering its response at all levels" to what he called Turkey's "unnecessary, petty initiative", and that "with this backward action, Turkey is opting to sever links with western world and its values". In relation to both Hagia Sophia and the Cyprus–Turkey maritime zones dispute, Mitsotakis called for European sanctions against Turkey, referring to it as "a regional troublemaker, and which is evolving into a threat to the stability of the whole south-east Mediterranean region". Dora Bakoyannis, Greek former foreign minister, said Turkey's actions had "crossed the Rubicon", distancing itself from the West. On the day of the building's re-opening, Mitsotakis called the re-conversion evidence of Turkey's weakness rather than a show of power. Armenia's Foreign Ministry expressed "deep concern" about the move, adding that it brought to a close Hagia Sophia's symbolism of "cooperation and unity of humankind instead of clash of civilizations." Catholicos Karekin II, the head of the Armenian Apostolic Church, said the move "violat[ed] the rights of national religious minorities in Turkey." Sahak II Mashalian, the Armenian Patriarch of Constantinople, perceived as loyal to the Turkish government, endorsed the decision to convert the museum into a mosque. He said, "I believe that believers' praying suits better the spirit of the temple instead of curious tourists running around to take pictures." In July 2021, UNESCO asked for an updated report on the state of conservation and expressed "grave concern". There were also some concerns about the future of its World Heritage status. Turkey responded that the changes had "no negative impact" on UNESCO standards and the criticism is "biased and political". Architecture Hagia Sophia is one of the greatest surviving examples of Byzantine architecture. Its interior is decorated with mosaics, marble pillars, and coverings of great artistic value. Justinian had overseen the completion of the greatest cathedral ever built up to that time, and it was to remain the largest cathedral for 1,000 years until the completion of the cathedral in Seville in Spain. The Hagia Sophia uses masonry construction. The structure has brick and mortar joints that are 1.5 times the width of the bricks. The mortar joints are composed of a combination of sand and minute ceramic pieces distributed evenly throughout the mortar joints. This combination of sand and potsherds was often used in Roman concrete, a predecessor to modern concrete. A considerable amount of iron was used as well, in the form of cramps and ties. 
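As a rough illustration of what the quoted joint-to-brick ratio implies, the short Python sketch below treats the 1.5 figure as the ratio of mortar-bed thickness to brick thickness within a single course (an interpretation, not a statement from the sources) and computes the share of the coursework taken up by mortar. The brick thicknesses used are assumed values for the sake of the example, not measurements from Hagia Sophia.

```python
# Illustrative sketch only: interpreting the quoted 1.5:1 joint-to-brick ratio
# as the ratio of mortar-bed thickness to brick thickness in a single course,
# and asking what share of the coursework that leaves as mortar.
# The brick thicknesses below are assumed values, not measurements of Hagia Sophia.

def mortar_share(brick_thickness_cm: float, joint_ratio: float = 1.5) -> float:
    """Fraction of one course (brick plus bed joint) made up of mortar."""
    joint_cm = joint_ratio * brick_thickness_cm
    return joint_cm / (brick_thickness_cm + joint_cm)

for brick_cm in (4.0, 5.0):
    share = mortar_share(brick_cm)
    print(f"brick {brick_cm:.1f} cm -> mortar is {share:.0%} of each course")

# The ratio alone fixes the answer: 1.5 / (1 + 1.5) = 0.6, i.e. roughly 60% of
# the wall's vertical build-up is mortar, whatever the absolute brick size.
```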
Justinian's basilica was at once the culminating architectural achievement of late antiquity and the first masterpiece of Byzantine architecture. Its influence, both architecturally and liturgically, was widespread and enduring in Eastern Christianity, Western Christianity, and Islam alike. The vast interior has a complex structure. The nave is covered by a central dome which at its maximum is from floor level and rests on an arcade of 40 arched windows. Repairs to its structure have left the dome somewhat elliptical, with the diameter varying between . At the western entrance and eastern liturgical side, there are arched openings extended by half domes of identical diameter to the central dome, carried on smaller semi-domed exedrae, a hierarchy of dome-headed elements built up to create a vast oblong interior crowned by the central dome, with a clear span of . The theories of Hero of Alexandria, a Hellenistic mathematician of the 1st century AD, may have been utilized to address the challenges presented by building such an expansive dome over so large a space. Svenshon and Stiffel proposed that the architects used Hero's proposed values for constructing vaults. The square measurements were calculated using the side-and-diagonal number progression, which results in squares defined by the numbers 12 and 17, wherein 12 defines the side of the square and 17 its diagonal, values that had been used as standards since as early as cuneiform Babylonian texts. Each of the four sides of the great square of Hagia Sophia is approximately 31 m long, and it was previously thought that this was the equivalent of 100 Byzantine feet. Svenshon suggested that the size of the side of the central square of Hagia Sophia is not 100 Byzantine feet but instead 99 feet. This measurement is not only rational, but it is also embedded in the system of the side-and-diagonal number progression (70/99) and therefore a usable value in the applied mathematics of antiquity. It gives a diagonal of 140, which is manageable for constructing a huge dome like that of the Hagia Sophia; a short numerical check of this progression is sketched at the end of this passage. Floor The stone floor of Hagia Sophia dates from the 6th century. After the first collapse of the vault, the broken dome was left in situ on the original Justinianic floor and a new floor was laid above the rubble when the dome was rebuilt in 558. From the installation of this second Justinianic floor, the floor became part of the liturgy, with significant locations and spaces demarcated in various ways using different-coloured stones and marbles. The floor is predominantly made up of Proconnesian marble, quarried on Proconnesus (Marmara Island) in the Propontis (Sea of Marmara). This was the main white marble used in the monuments of Constantinople. Other parts of the floor, like the Thessalian verd antique "marble", were quarried in Thessaly in Roman Greece. The Thessalian verd antique bands across the nave floor were often likened to rivers. The floor was praised by numerous authors and repeatedly compared to a sea. The Justinianic poet Paul the Silentiary likened the ambo and the solea connecting it to the sanctuary to an island in a sea, with the sanctuary itself a harbour. The 9th-century Narratio writes of it as "like the sea or the flowing waters of a river". Michael the Deacon in the 12th century also described the floor as a sea in which the ambo and other liturgical furniture stood as islands. 
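As noted above in the discussion of the dome's proportions, the side-and-diagonal number progression supplies rational approximations to the square root of two such as 17/12 and 99/70. The short sketch below, written in Python purely for illustration (the starting pair and variable names are the editor's assumptions, not part of the historical sources), generates the progression and checks that a square side of 99 Byzantine feet corresponds to a diagonal of very nearly 140.

```python
from math import isclose, sqrt

def side_and_diagonal(n_terms):
    """Generate (side, diagonal) pairs: s' = s + d, d' = 2*s + d."""
    s, d = 1, 1
    pairs = []
    for _ in range(n_terms):
        pairs.append((s, d))
        s, d = s + d, 2 * s + d
    return pairs

pairs = side_and_diagonal(7)
print(pairs)  # [(1, 1), (2, 3), (5, 7), (12, 17), (29, 41), (70, 99), (169, 239)]
# The ratios d/s approximate sqrt(2); the pairs (12, 17) and (70, 99) are the
# ones mentioned in the text above.
print(99 / 70, sqrt(2))          # 1.4142857...  vs  1.4142135...
# A square with a side of 99 feet has a diagonal of 99*sqrt(2), i.e. almost
# exactly the manageable whole-number value 140 quoted above.
print(99 * sqrt(2))                              # ~140.007
print(isclose(99 * sqrt(2), 140, rel_tol=1e-3))  # True
```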
During the 15th-century conquest of Constantinople, the Ottoman sultan Mehmed is said to have ascended to the dome and the galleries in order to admire the floor, which according to Tursun Beg resembled "a sea in a storm" or a "petrified sea". Other Ottoman-era authors also praised the floor; Tâcîzâde Cafer Çelebi compared it to waves of marble. The floor was hidden beneath a carpet on 22 July 2020. Narthex and portals The Imperial Gate, or Imperial Door, was the main entrance between the exo- and esonarthex, and it was originally exclusively used by the emperor. A long ramp from the northern part of the outer narthex leads up to the upper gallery. Upper gallery The upper gallery, or matroneum, is horseshoe-shaped; it encloses the nave on three sides and is interrupted by the apse. Several mosaics are preserved in the upper gallery, an area traditionally reserved for the Empress and her court. The best-preserved mosaics are located in the southern part of the gallery. The northern first-floor gallery contains runic graffiti believed to have been left by members of the Varangian Guard. Structural damage caused by natural disasters is visible on the Hagia Sophia's exterior surface. To determine whether the interior of the building had sustained similar damage, studies have been conducted using ground-penetrating radar within the gallery of the Hagia Sophia. With the use of ground-penetrating radar (GPR), teams discovered weak zones within the Hagia Sophia's gallery and also concluded that the curvature of the vault dome has shifted out of proportion compared to its original angular orientation. Dome The dome of Hagia Sophia has spurred particular interest among many art historians, architects, and engineers because of the innovative way the original architects envisioned it. The dome is carried on four spherical triangular pendentives, making the Hagia Sophia one of the first large-scale uses of this element. The pendentives are the corners of the square base of the dome, and they curve upwards into the dome to support it, thus restraining the lateral forces of the dome and allowing its weight to flow downwards. The main dome of the Hagia Sophia was the largest pendentive dome in the world until the completion of St Peter's Basilica, and it has a much lower height than any other dome of such a large diameter. The great dome at the Hagia Sophia is 32.6 meters (one hundred and seven feet) in diameter and is only 0.61 meters (two feet) thick. The main building materials for the original Hagia Sophia were brick and mortar. Brick aggregate was used to make roofs easier to construct. The aggregate weighs 2402.77 kilograms per cubic meter (150 pounds per cubic foot), an average weight for masonry construction at the time. Due to the material's plasticity, it was chosen over cut stone because aggregate can be used over a longer distance. According to Rowland Mainstone, "it is unlikely that the vaulting-shell is anywhere more than one normal brick in thickness". The weight of the dome remained a problem for most of the building's existence. The original cupola collapsed entirely after the earthquake of 558; in 563 a new dome was built by Isidore the Younger, a nephew of Isidore of Miletus. Unlike the original, this included 40 ribs and was raised 6.1 meters (20 feet), in order to lower the lateral forces on the church walls. 
A larger section of the second dome collapsed as well, over two episodes, so that as of 2021, only two sections of the present dome, the north and south sides, are from the 562 reconstruction. Of the whole dome's 40 ribs, the surviving north section contains eight ribs, while the south section includes six ribs. Although this design stabilizes the dome and the surrounding walls and arches, the actual construction of the walls of Hagia Sophia weakened the overall structure. The bricklayers used more mortar than brick, a technique that is effective only if the mortar is allowed to settle, since the building would then have been more flexible; however, the builders did not allow the mortar to cure before they began the next layer. When the dome was erected, its weight caused the walls to lean outward because of the wet mortar underneath. When Isidore the Younger rebuilt the fallen cupola, he had first to build up the interior of the walls to make them vertical again. Additionally, the architect raised the height of the rebuilt dome by approximately 6.1 meters (20 feet) so that the lateral forces would not be as strong and its weight would be transmitted more effectively down into the walls. Moreover, he shaped the new cupola like a scalloped shell or the inside of an umbrella, with ribs that extend from the top down to the base. These ribs allow the weight of the dome to flow between the windows, down the pendentives, and ultimately to the foundation. Hagia Sophia is famous for the light that reflects everywhere in the interior of the nave, giving the dome the appearance of hovering above. This effect was achieved by inserting forty windows around the base of the original structure. Moreover, the insertion of the windows in the dome structure reduced its weight. Buttresses Numerous buttresses have been added throughout the centuries. The flying buttresses to the west of the building, although thought to have been constructed by the Crusaders upon their visit to Constantinople, were actually built during the Byzantine era. This shows that the Romans had prior knowledge of flying buttresses, which can also be seen in Greece at the Rotunda of Galerius in Thessaloniki and at the monastery of Hosios Loukas in Boeotia, and in Italy at the octagonal basilica of San Vitale in Ravenna. Other buttresses were constructed in Ottoman times under the guidance of the architect Sinan. A total of 24 buttresses were added. Minarets The minarets were an Ottoman addition and not part of the original church's Byzantine design. They were built to sound the call to prayer (adhan) and to make announcements. Mehmed had built a wooden minaret over one of the half domes soon after Hagia Sophia's conversion from a cathedral to a mosque. This minaret does not exist today. One of the minarets (at the southeast) was built from red brick and can be dated to the reign of Mehmed or his successor Bayezid II. The other three were built from white limestone and sandstone, of which the slender northeast column was erected by Bayezid II and the two identical, larger minarets to the west were erected by Selim II and designed by the famous Ottoman architect Mimar Sinan. Both are in height, and their thick and massive patterns complete Hagia Sophia's main structure. Many ornaments and details were added to these minarets during repairs in the 15th, 16th, and 19th centuries, reflecting each period's characteristics and ideals. 
Notable elements and decorations Originally, under Justinian's reign, the interior decorations consisted of abstract designs on marble slabs on the walls and floors as well as mosaics on the curving vaults. Of these mosaics, the two archangels Gabriel and Michael are still visible in the spandrels (corners) of the bema. There were already a few figurative decorations, as attested by the late 6th-century ekphrasis of Paul the Silentiary, the Description of Hagia Sophia. The spandrels of the gallery are faced in inlaid thin slabs (opus sectile), showing patterns and figures of flowers and birds in precisely cut pieces of white marble set against a background of black marble. In later stages, figurative mosaics were added, which were destroyed during the iconoclastic controversy (726–843). Present mosaics are from the post-iconoclastic period. Apart from the mosaics, many figurative decorations were added during the second half of the 9th century: an image of Christ in the central dome; Eastern Orthodox saints, prophets and Church Fathers in the tympana below; historical figures connected with this church, such as Patriarch Ignatius; and some scenes from the Gospels in the galleries. Basil II let artists paint a giant six-winged seraph on each of the four pendentives. The Ottomans covered their faces with golden stars, but in 2009, one of them was restored to its original state. Loggia of the Empress The loggia of the empress is located in the centre of the gallery of the Hagia Sophia, above the Imperial Gate and directly opposite the apse. From this matroneum (women's gallery), the empress and the court-ladies would watch the proceedings down below. A green stone disc of verd antique marks the spot where the throne of the empress stood. Lustration urns Two huge marble lustration (ritual purification) urns were brought from Pergamon during the reign of Sultan Murad III. They are from the Hellenistic period and carved from single blocks of marble. Marble Door The Marble Door inside the Hagia Sophia is located in the southern upper enclosure or gallery. It was used by the participants in synods, who entered and left the meeting chamber through this door. It is said that each side is symbolic and that one side represents heaven while the other represents hell. Its panels are covered in fruits and fish motifs. The door opens into a space that was used as a venue for solemn meetings and important resolutions of patriarchate officials. The Nice Door The Nice Door is the oldest architectural element found in the Hagia Sophia dating back to the 2nd century BC. The decorations are of reliefs of geometric shapes as well as plants that are believed to have come from a pagan temple in Tarsus in Cilicia, part of the Cibyrrhaeot Theme in modern-day Mersin Province in south-eastern Turkey. It was incorporated into the building by Emperor Theophilos in 838 where it is placed in the south exit in the inner narthex. Imperial Gate The Imperial Gate is the door that was used solely by the Emperor and his personal bodyguard and retinue. It is the largest door in the Hagia Sophia and has been dated to the 6th century. It is about 7 meters long and Byzantine sources say it was made with wood from Noah's Ark. In April 2022, the door was vandalised by unknown assailant(s). The incident became known after the Association of Art Historians published a photo with the destruction. 
The Greek Foreign Ministry condemned the incident, while Turkish officials claimed that "a citizen has taken a piece of the door" and started an investigation. Wishing column At the northwest of the building, there is a column with a hole in the middle covered by bronze plates. This column goes by different names; the "perspiring" or "sweating column", the "crying column", or the "wishing column". Legend states that it has been moist since the appearance of Gregory Thaumaturgus near the column in 1200. It is believed that touching the moisture cures many illnesses. The Viking Inscription In the southern section of Hagia Sophia, a 9th-century Viking inscription has been discovered, which reads, "Halvdan was here." It is theorized that the inscription was created by a Viking soldier serving as a mercenary in the Eastern Roman Empire. Mosaics The first mosaics which adorned the church were completed during the reign of Justin II. Many of the non-figurative mosaics in the church come from this period. Most of the mosaics, however, were created in the 10th and 12th centuries, following the periods of Byzantine Iconoclasm. During the Sack of Constantinople in 1204, the Latin Crusaders vandalized valuable items in every important Byzantine structure of the city, including the golden mosaics of the Hagia Sophia. Many of these items were shipped to Venice, whose Doge Enrico Dandolo had organized the invasion and sack of Constantinople after an agreement with Prince Alexios Angelos, the son of a deposed Byzantine emperor. 19th-century restoration Following the building's conversion into a mosque in 1453, many of its mosaics were covered with plaster, due to Islam's ban on representational imagery. This process was not completed at once, and reports exist from the 17th century in which travellers note that they could still see Christian images in the former church. In 1847–1849, the building was restored by two Swiss-Italian Fossati brothers, Gaspare and Giuseppe, and Sultan Abdulmejid I allowed them to also document any mosaics they might discover during this process, which were later archived in Swiss libraries. This work did not include repairing the mosaics, and after recording the details about an image, the Fossatis painted it over again. The Fossatis restored the mosaics of the two hexapteryga (singular , pr. hexapterygon, six-winged angel; it is uncertain whether they are seraphim or cherubim) located on the two east pendentives, and covered their faces again before the end of the restoration. The other two mosaics, placed on the west pendentives, are copies in paint created by the Fossatis since they could find no surviving remains of them. As in this case, the architects reproduced in paint damaged decorative mosaic patterns, sometimes redesigning them in the process. The Fossati records are the primary sources about a number of mosaic images now believed to have been completely or partially destroyed in the 1894 Istanbul earthquake. These include a mosaic over a now-unidentified Door of the Poor, a large image of a jewel-encrusted cross, and many images of angels, saints, patriarchs, and church fathers. Most of the missing images were located in the building's two tympana. One mosaic they documented is Christ Pantocrator in a circle, which would indicate it to be a ceiling mosaic, possibly even of the main dome, which was later covered and painted over with Islamic calligraphy that expounds God as the light of the universe. 
The Fossatis' drawings of the Hagia Sophia mosaics are today kept in the Archive of the Canton of Ticino. 20th-century restoration Many mosaics were uncovered in the 1930s by a team from the Byzantine Institute of America led by Thomas Whittemore. The team chose to let a number of simple cross images remain covered by plaster but uncovered all major mosaics found. Because of the building's long history as both a church and a mosque, a particular challenge arises in the restoration process. Christian iconographic mosaics can be uncovered, but often at the expense of important and historic Islamic art. Restorers have attempted to maintain a balance between both Christian and Islamic cultures. In particular, much controversy rests upon whether the Islamic calligraphy on the dome of the cathedral should be removed, in order to permit the underlying Pantocrator mosaic of Christ as Master of the World to be exhibited (assuming the mosaic still exists). The Hagia Sophia has been a victim of natural disasters that have caused deterioration to the building's structure and walls. The deterioration of the Hagia Sophia's walls can be directly attributed to salt crystallization caused by the intrusion of rainwater, which damages both the inner and outer walls. Diverting excess rainwater is therefore the main remedy for the deteriorating walls of the Hagia Sophia. A subsurface structure under the Hagia Sophia, built between 532 and 537, has been investigated using LaCoste-Romberg gravimeters to determine its depth and to discover other hidden cavities beneath the building. The hidden cavities have also acted as a support system against earthquakes. With these findings from the LaCoste-Romberg gravimeters, it was also discovered that the Hagia Sophia's foundation is built on a slope of natural rock. Imperial Gate mosaic The Imperial Gate mosaic is located in the tympanum above that gate, which was used only by the emperors when entering the church. Based on style analysis, it has been dated to the late 9th or early 10th century. The emperor with a nimbus or halo could possibly represent emperor Leo VI the Wise or his son Constantine VII Porphyrogenitus bowing down before Christ Pantocrator, who is seated on a jewelled throne, giving his blessing and holding in his left hand an open book. The text on the book reads: "Peace be with you" (John 20) and "I am the light of the world" (John 8). On each side of Christ's shoulders is a circular medallion with busts: on his left the Archangel Gabriel, holding a staff, on his right his mother Mary. Southwestern entrance mosaic The southwestern entrance mosaic, situated in the tympanum of the southwestern entrance, dates from the reign of Basil II. It was rediscovered during the restorations of 1849 by the Fossatis. The Virgin sits on a throne without a back, her feet resting on a pedestal embellished with precious stones. The Christ Child sits on her lap, giving his blessing and holding a scroll in his left hand. On her left side stands emperor Constantine in ceremonial attire, presenting a model of the city to Mary. The inscription next to him says: "Great emperor Constantine of the Saints". On her right side stands emperor Justinian I, offering a model of the Hagia Sophia. The medallions on both sides of the Virgin's head carry the nomina sacra abbreviating her Greek title Mētēr Theou, "Mother of God". 
The composition of the figure of the Virgin enthroned was probably copied from the mosaic inside the semi-dome of the apse inside the liturgical space. Apse mosaics The mosaic in the semi-dome above the apse at the east end shows Mary, mother of Jesus, holding the Christ Child and seated on a jewelled thokos, a backless throne. Since its rediscovery after a period of concealment in the Ottoman era, it "has become one of the foremost monuments of Byzantium". The infant Jesus's garment is depicted with golden tesserae. Guillaume-Joseph Grelot, who had travelled to Constantinople, in 1672 engraved and in 1680 published in Paris an image of the interior of Hagia Sophia which shows the apse mosaic indistinctly. Together with a picture by Cornelius Loos drawn in 1710, these images are early attestations of the mosaic before it was covered towards the end of the 18th century. The mosaic of the Virgin and Child was rediscovered during the restorations of the Fossati brothers in 1847–1848 and revealed by the restoration of Thomas Whittemore in 1935–1939. It was studied again in 1964 with the aid of scaffolding. It is not known when this mosaic was installed. According to Cyril Mango, the mosaic is "a curious reflection on how little we know about Byzantine art". The work is generally believed to date from after the end of Byzantine Iconoclasm and is usually dated to the patriarchate of Photius I and the time of the emperors Michael III and Basil I. Most specifically, the mosaic has been connected with a surviving homily known to have been written and delivered by Photius in the cathedral on 29 March 867. Other scholars have favoured earlier or later dates for the present mosaic or its composition. Nikolaos Oikonomides pointed out that Photius's homily refers to a standing portrait of the Theotokos – a Hodegetria – while the present mosaic shows her seated. Likewise, a biography of the patriarch Isidore I by his successor Philotheus I, composed before 1363, describes Isidore seeing a standing image of the Virgin at Epiphany in 1347. Serious damage was done to the building by earthquakes in the 14th century, and it is possible that a standing image of the Virgin that existed in Photius's time was lost in the earthquake of 1346, in which the eastern end of Hagia Sophia was partly destroyed. This interpretation supposes that the present mosaic of the Virgin and Child enthroned is of the late 14th century, a time in which, beginning with Nilus of Constantinople, the patriarchs of Constantinople began to have official seals depicting the Theotokos enthroned on a thokos. Still other scholars have proposed an earlier date than the later 9th century. According to George Galavaris, the mosaic seen by Photius was a Hodegetria portrait which after the earthquake of 989 was replaced by the present image not later than the early 11th century. According to Oikonomides, however, the image in fact dates to before the Triumph of Orthodoxy, having been completed during the iconodule interlude between the First Iconoclast (726–787) and the Second Iconoclast (814–842) periods. Oikonomides argues that the older mosaic, having been plastered over during the Second Iconoclasm, was covered in 867 by a new, standing image of the Virgin Hodegetria, which then fell off in the earthquakes of the 1340s and revealed again the late 8th-century image of the Virgin enthroned. 
More recently, analysis of a hexaptych menologion icon panel from Saint Catherine's Monastery at Mount Sinai has determined that the panel, showing numerous scenes from the life of the Virgin and other theologically significant iconic representations, contains an image at the centre very similar to that in Hagia Sophia. The image bears only a brief label in Greek, but in the Georgian language the inscription reveals that the image is labelled "of the semi-dome of Hagia Sophia". This image is therefore the oldest known depiction of the apse mosaic and demonstrates that the apse mosaic's appearance was similar to the present-day mosaic in the late 11th or early 12th centuries, when the hexaptych was inscribed in Georgian by a Georgian monk, which rules out a 14th-century date for the mosaic. The portraits of the archangels Gabriel and Michael (largely destroyed) in the bema of the arch also date from the 9th century. The mosaics are set against the original golden background of the 6th century. These mosaics were believed to be a reconstruction of the mosaics of the 6th century that were previously destroyed during the iconoclastic era by the Byzantines of that time, as represented in the inaugural sermon by the patriarch Photios. However, no record of figurative decoration of Hagia Sophia exists before this time. Emperor Alexander mosaic The Emperor Alexander mosaic is not easy for the first-time visitor to find, as it is located on the second floor in a dark corner of the ceiling. It depicts the emperor Alexander in full regalia, holding a scroll in his right hand and a globus cruciger in his left. A drawing by the Fossatis showed that the mosaic had survived until 1849; Thomas Whittemore, founder of the Byzantine Institute of America, who was granted permission to preserve the mosaics, assumed that it had been destroyed in the earthquake of 1894. The mosaic was rediscovered in 1958, eight years after Whittemore's death, largely through the research of Robert Van Nice. Unlike most of the other mosaics in Hagia Sophia, which had been covered over by ordinary plaster, the Alexander mosaic was simply painted over in a way that mimicked the surrounding mosaic patterns, and it was thus well hidden. It was duly cleaned by Whittemore's successor at the Byzantine Institute, Paul A. Underwood. Empress Zoe mosaic The Empress Zoe mosaic on the eastern wall of the southern gallery dates from the 11th century. Christ Pantocrator, clad in a dark blue robe (as is the custom in Byzantine art), is seated in the middle against a golden background, giving his blessing with the right hand and holding the Bible in his left hand. On either side of his head are the nomina sacra IC and XC, meaning Iēsous Christos. He is flanked by Constantine IX Monomachus and Empress Zoe, both in ceremonial costumes. He is offering a purse, as a symbol of the donation he made to the church, while she is holding a scroll, a symbol of the donations she made. The inscription over the head of the emperor says: "Constantine, pious emperor in Christ the God, king of the Romans, Monomachus". The inscription over the head of the empress reads as follows: "Zoë, the very pious Augusta". The previous heads have been scraped off and replaced by the three present ones. Perhaps the earlier mosaic showed her first husband Romanus III Argyrus or her second husband Michael IV. Another theory is that this mosaic was made for an earlier emperor and empress, with their heads changed into the present ones. 
Comnenus mosaic The Comnenus mosaic, also located on the eastern wall of the southern gallery, dates from 1122. The Virgin Mary is standing in the middle, depicted, as usual in Byzantine art, in a dark blue gown. She holds the Christ Child on her lap. He gives his blessing with his right hand while holding a scroll in his left hand. On her right side stands emperor John II Comnenus, represented in a garb embellished with precious stones. He holds a purse, symbol of an imperial donation to the church. His wife, the empress Irene of Hungary, stands on the left side of the Virgin, wearing ceremonial garments and offering a document. Their eldest son Alexius Comnenus is represented on an adjacent pilaster. He is shown as a beardless youth, probably representing his appearance at his coronation aged seventeen. In this panel one can already see a difference from the Empress Zoe mosaic, which is a century older. There is a more realistic expression in the portraits instead of an idealized representation. The Empress Irene (born Piroska), daughter of Ladislaus I of Hungary, is shown with plaited blond hair, rosy cheeks, and grey eyes, revealing her Hungarian descent. The emperor is depicted in a dignified manner. Deësis mosaic The Deësis mosaic ("Entreaty") probably dates from 1261. It was commissioned to mark the end of 57 years of Latin Catholic use and the return to the Eastern Orthodox faith. It is the third panel situated in the imperial enclosure of the upper galleries. It is widely considered the finest in Hagia Sophia, because of the softness of the features, the humane expressions and the tones of the mosaic. The style is close to that of the Italian painters of the late 13th or early 14th century, such as Duccio. In this panel the Virgin Mary and John the Baptist (Ioannes Prodromos), both shown in three-quarters profile, are imploring the intercession of Christ Pantocrator for humanity on Judgment Day. The bottom part of this mosaic is badly deteriorated. This mosaic is considered the beginning of a renaissance in Byzantine pictorial art. Northern tympanum mosaics The northern tympanum mosaics feature various saints. They have been able to survive due to their high and inaccessible location. They depict Patriarchs of Constantinople John Chrysostom and Ignatios of Constantinople standing, clothed in white robes with crosses, and holding richly jewelled Bibles. The figures of each patriarch, revered as saints, are identifiable by labels in Greek. The mosaics in the other tympana have not survived, probably owing to frequent earthquakes rather than to any deliberate destruction by the Ottoman conquerors. Dome mosaic The dome was decorated with four non-identical figures of the six-winged angels which protect the Throne of God; it is uncertain whether they are seraphim or cherubim. The mosaics survive in the eastern part of the dome, but since the ones on the western side were damaged during the Byzantine period, they have been renewed as frescoes. During the Ottoman period each seraph's (or cherub's) face was covered with metallic lids in the shape of stars, but these were removed to reveal the faces during renovations in 2009. Other burials Selim II (1524 – 15 December 1574); Murad III (1546–1595); Mustafa I (died 20 January 1639), in the courtyard; Enrico Dandolo (died June 1205), in the east gallery; and Gli (died 7 November 2020), in the garden. 
Works influenced by the Hagia Sophia Many buildings have been modeled on the Hagia Sophia's core structure of a large central dome resting on pendentives and buttressed by two semi-domes. Byzantine churches influenced by the Hagia Sophia include the Hagia Sophia in Thessaloniki, and the Hagia Irene. The latter was remodeled to have a dome similar to the Hagia Sophia's during the reign of Justinian. Several mosques commissioned by the Ottoman dynasty have plans based on the Hagia Sophia, including the Süleymaniye Mosque and the Bayezid II Mosque. Ottoman architects preferred to surround the central dome with four semi-domes rather than two. There are four semi-domes on the Sultan Ahmed Mosque, the Fatih Mosque, and the New Mosque (Istanbul). As with the original plan of the Hagia Sophia, these mosques are entered through colonnaded courtyards. However, the courtyard of the Hagia Sophia no longer exists. Neo-Byzantine churches modeled on the Hagia Sophia include the Kronstadt Naval Cathedral, Holy Trinity Cathedral, Sibiu and Poti Cathedral. Each closely replicates the internal geometry of the Hagia Sophia. The layout of the Kronstadt Naval Cathedral is nearly identical to the Hagia Sophia in size and geometry. Its marble revetment also mimics the style of the Hagia Sophia. As with Ottoman mosques, several churches based on the Hagia Sophia include four semi-domes rather than two, such as the Church of Saint Sava in Belgrade. The Catedral Metropolitana Ortodoxa in São Paulo and the Église du Saint-Esprit (Paris) both replace the two large tympanums beneath the main dome with two shallow semi-domes. The Église du Saint-Esprit is two thirds the size of the Hagia Sophia. Several churches combine elements of the Hagia Sophia with a Latin cross plan. For instance, the transept of the Cathedral Basilica of Saint Louis (St. Louis) is formed by two semi-domes surrounding the main dome. The church's column capitals and mosaics also emulate the style of the Hagia Sophia. Other examples include the Alexander Nevsky Cathedral, Sofia, St Sophia's Cathedral, London, Saint Clement Catholic Church, Chicago, and the Basilica of the National Shrine of the Immaculate Conception. Synagogues based on the Hagia Sophia include the Congregation Emanu-El (San Francisco), Great Synagogue of Florence, and Hurva Synagogue. Gallery See also Runic inscriptions in Hagia Sophia List of Byzantine inventions List of tallest domes List of largest monoliths List of oldest church buildings List of tallest structures built before the 20th century List of Turkish Grand Mosques Conversion of non-Islamic places of worship into mosques Notes References Citations Sources Hagia Sophia. Hagia Sophia . Accessed 23 September 2014. . Runciman, Steven (1965). The Fall of Constantinople , 1453. Cambridge: Cambridge University Press. p. 145. . Further reading See also the thematically organised full bibliography in Stroth (2021), pp. 137–183. Harris, Jonathan, Constantinople: Capital of Byzantium. Hambledon/Continuum (2007). Scharf, Joachim: "Der Kaiser in Proskynese. Bemerkungen zur Deutung des Kaisermosaiks im Narthex der Hagia Sophia von Konstantinopel". In: Festschrift Percy Ernst Schramm zu seinem siebzigsten Geburtstag von Schülern und Freunden zugeeignet, Wiesbaden 1964, pp. 27–35. Weitzmann, Kurt, ed., Age of spirituality: late antique and early Christian art, third to seventh century , no. 
592, 1979, Metropolitan Museum of Art, New York, Articles Bordewich, Fergus M., "A Monumental Struggle to Preserve Hagia Sophia", Smithsonian magazine, December 2008 Calian, Florian, The Hagia Sophia and Turkey's Neo-Ottomanism , Armenian Weekly. Ousterhout, Robert G. "Museum or Mosque? Istanbul's Hagia Sophia has been a monument to selective readings of history ." History Today (Sept 2020). Suchkov, Maxim, Why did Moscow call Ankara's Hagia Sophia decision "Turkey's internal affair"? , Middle East Institute. Mosaics Hagia Sophia, hagiasophia.com: Mosaics. External links 360 Degree Virtual Tour of Hagia Sophia Mosque Museum Gigapixel of Hagia Sophia Dome (214 Billion Pixel) Hagia Sophia Museum, Republic of Turkey, Ministry of Culture & Tourism The Most Visited Museums of Turkey: Hagia Sophia Museum, Governorship of Istanbul
Hagia Sophia
Engineering
21,686
22,406,394
https://en.wikipedia.org/wiki/Nimbus%20%28cloud%20computing%29
Nimbus is a toolkit that, once installed on a cluster, provides an infrastructure-as-a-service (IaaS) cloud to its clients via WSRF-based or Amazon EC2 WSDL web service APIs. Nimbus is free and open-source software, subject to the requirements of the Apache License, version 2. Nimbus supports the Xen and KVM hypervisors and the Portable Batch System and Oracle Grid Engine virtual machine schedulers. It allows deployment of self-configured virtual clusters via contextualization. It is configurable with respect to scheduling, networking leases, and usage accounting. Requirements Xen 3.x Kernel-based Virtual Machine Java 1.5+ Python (2.4+) Linux kernel's Netfilter and ebtables for a bridging firewall DHCP server See also Cloud-computing comparison References External links Cloud infrastructure Free software for cloud computing Free software programmed in Java (programming language) Free software programmed in Python Virtualization software for Linux
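Because Nimbus exposes an EC2-compatible web service interface, a client can in principle talk to a Nimbus cloud with ordinary EC2 tooling pointed at the cloud's endpoint. The sketch below uses Python and boto3 purely as an illustration; the endpoint URL, credentials, and image identifier are hypothetical placeholders, and whether a particular Nimbus installation accepts a given client library depends on the deployment.

```python
import boto3

# Hypothetical Nimbus endpoint and credentials supplied by the cloud administrator.
ec2 = boto3.client(
    "ec2",
    endpoint_url="https://nimbus.example.org:8444/",  # placeholder, not a real service
    aws_access_key_id="NIMBUS_ACCESS_KEY",
    aws_secret_access_key="NIMBUS_SECRET_KEY",
    region_name="nimbus",
)

# Launch a single virtual machine from a (hypothetical) cluster-node image.
response = ec2.run_instances(ImageId="cluster-node-image", MinCount=1, MaxCount=1)
instance_id = response["Instances"][0]["InstanceId"]
print("started", instance_id)

# List the instances the cloud reports, with their states.
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])
```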
Nimbus (cloud computing)
Technology
207
3,761,640
https://en.wikipedia.org/wiki/List%20of%20uniform%20polyhedra%20by%20Wythoff%20symbol
There are many relations among the uniform polyhedra. Here they are grouped by the Wythoff symbol. Key Regular All the faces are identical, each edge is identical and each vertex is identical. They all have a Wythoff symbol of the form p|q 2. Convex The Platonic solids. Non-convex The Kepler-Poinsot solids. Quasi-regular Each edge is identical and each vertex is identical. There are two types of faces which appear in an alternating fashion around each vertex. The first row are semi-regular with 4 faces around each vertex. They have Wythoff symbol 2|p q. The second row are ditrigonal with 6 faces around each vertex. They have Wythoff symbol 3|p q or 3/2|p q. Wythoff p q|r Truncated regular forms Each vertex has three faces surrounding it, two of which are identical. These all have Wythoff symbols 2 p|q; some are constructed by truncating the regular solids. Hemipolyhedra The hemipolyhedra all have faces which pass through the origin. Their Wythoff symbols are of the form p p/m|q or p/m p/n|q. With the exception of the tetrahemihexahedron they occur in pairs, and are closely related to the semi-regular polyhedra, like the cuboctahedron. Rhombic quasi-regular Four faces around the vertex in the pattern p.q.r.q. The name rhombic stems from inserting a square in the cuboctahedron and icosidodecahedron. The Wythoff symbol is of the form p q|r. Even-sided forms Wythoff p q r| These have three different faces around each vertex, and the vertices do not lie on any plane of symmetry. They have Wythoff symbol p q r|, and vertex figures 2p.2q.2r. Wythoff p q (r s)| Vertex figure p.q.-p.-q. Wythoff p q (r s)|, mixing pqr| and pqs|. Snub polyhedra These have Wythoff symbol |p q r, and one non-Wythoffian construction is given |p q r s. Polyhedra Uniform polyhedra
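The vertex figures quoted above for the all-even forms (Wythoff symbol p q r|, vertex figure 2p.2q.2r) lend themselves to a small computational check. The Python sketch below was written for this article rather than taken from any polyhedron library; it also tests the standard sphericity condition 1/p + 1/q + 1/r > 1 that a Schwarz triangle must satisfy, a condition not stated explicitly in the text above but widely used when classifying these symbols.

```python
from fractions import Fraction

def is_spherical(p, q, r):
    """A Schwarz triangle (p q r) is spherical when 1/p + 1/q + 1/r > 1."""
    p, q, r = (Fraction(x) for x in (p, q, r))
    return 1 / p + 1 / q + 1 / r > 1

def vertex_figure_all_even(p, q, r):
    """Vertex figure 2p.2q.2r for the form with Wythoff symbol 'p q r|'."""
    return ".".join(str(2 * Fraction(x)) for x in (p, q, r))

# Cube: Wythoff symbol 3|4 2 (the regular form p|q 2); its triangle is (3 4 2).
print(is_spherical(3, 4, 2))               # True
# Truncated cuboctahedron: Wythoff symbol 2 3 4|, vertex figure 4.6.8.
print(vertex_figure_all_even(2, 3, 4))     # 4.6.8
# Fractional entries such as 5/2 (pentagram) appear in the nonconvex forms.
print(is_spherical(2, 3, Fraction(5, 2)))  # True
# (2 3 7) fails the test: it is hyperbolic, so no uniform polyhedron results.
print(is_spherical(2, 3, 7))               # False
```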
List of uniform polyhedra by Wythoff symbol
Physics
509
70,340,520
https://en.wikipedia.org/wiki/Phone%20repair%20with%20rice
Submerging a mobile device in rice is a common piece of repair advice for devices that have suffered water damage. This technique has not been shown to be effective in repairing them. Submerging these devices in a desiccant may or may not be more effective than leaving them to dry in open air, and uncooked rice is inferior to other common desiccants such as silica gel or cat litter. In any case, the practice is not recommended, as starch and particles from the rice can become lodged inside the phone's inner parts. History Rice has traditionally been used to keep camera equipment and films dry in tropical environments. In July 2007, less than a month after the original iPhone was released, a member of MacRumors named jorsuss started a thread titled "I dropped my iPhone in water". They covered the phone in rice, which may have been the first documented attempt to use the procedure on an iPhone. See also IP Code Further reading References Smartphones Misconceptions Rice
Phone repair with rice
Technology
207
1,835,001
https://en.wikipedia.org/wiki/Nakayama%27s%20lemma
In mathematics, more specifically abstract algebra and commutative algebra, Nakayama's lemma — also known as the Krull–Azumaya theorem — governs the interaction between the Jacobson radical of a ring (typically a commutative ring) and its finitely generated modules. Informally, the lemma immediately gives a precise sense in which finitely generated modules over a commutative ring behave like vector spaces over a field. It is an important tool in algebraic geometry, because it allows local data on algebraic varieties, in the form of modules over local rings, to be studied pointwise as vector spaces over the residue field of the ring. The lemma is named after the Japanese mathematician Tadashi Nakayama, who introduced it in its present form, although it was first discovered in the special case of ideals in a commutative ring by Wolfgang Krull and then in general by Goro Azumaya (1951). In the commutative case, the lemma is a simple consequence of a generalized form of the Cayley–Hamilton theorem, an observation made by Michael Atiyah (1969). The special case of the noncommutative version of the lemma for right ideals appears in Nathan Jacobson (1945), and so the noncommutative Nakayama lemma is sometimes known as the Jacobson–Azumaya theorem. The latter has various applications in the theory of Jacobson radicals. Statement Let R be a commutative ring with identity 1. The following is Nakayama's lemma, as it is commonly stated: Statement 1: Let I be an ideal in R, and M a finitely generated module over R. If IM = M, then there exists an element r of R with r ≡ 1 (mod I) such that rM = 0. This is proven below. A useful mnemonic for Nakayama's lemma is "IM = M implies im = m for some i in I". This summarizes the following alternative formulation: Statement 2: Let I be an ideal in R, and M a finitely generated module over R. If IM = M, then there exists an i ∈ I such that im = m for all m ∈ M. Proof: Take i = 1 − r in Statement 1. The following corollary is also known as Nakayama's lemma, and it is in this form that it most often appears. Statement 3: If M is a finitely generated module over R, J(R) is the Jacobson radical of R, and J(R)M = M, then M = 0. Proof: r − 1 (with r as in Statement 1) is in the Jacobson radical, so r is invertible; since rM = 0, it follows that M = 0. More generally, one has that J(R)M is a superfluous submodule of M when M is finitely generated. Statement 4: If M is a finitely generated module over R, N is a submodule of M, and N + J(R)M = M, then N = M. Proof: Apply Statement 3 to M/N. The following result manifests Nakayama's lemma in terms of generators. Statement 5: If M is a finitely generated module over R and the images of elements m1, ..., mn of M in M/J(R)M generate M/J(R)M as an R-module, then m1, ..., mn also generate M as an R-module. Proof: Apply Statement 4 to N = Rm1 + ... + Rmn. If one assumes instead that R is complete and M is separated with respect to the I-adic topology for an ideal I in R, this last statement holds with I in place of J(R) and without assuming in advance that M is finitely generated. Here separatedness means that the I-adic topology satisfies the T1 separation axiom, and is equivalent to the condition that the intersection of the submodules I^kM, over all k, is zero. Consequences Local rings In the special case of a finitely generated module M over a local ring R with maximal ideal m, the quotient M/mM is a vector space over the field R/m. Statement 5 then implies that a basis of M/mM lifts to a minimal set of generators of M. Conversely, every minimal set of generators of M is obtained in this way, and any two such sets of generators are related by an invertible matrix with entries in the ring. Geometric interpretation In this form, Nakayama's lemma takes on concrete geometrical significance. Local rings arise in geometry as the germs of functions at a point. 
Finitely generated modules over local rings arise quite often as germs of sections of vector bundles. Working at the level of germs rather than points, the notion of finite-dimensional vector bundle gives way to that of a coherent sheaf. Informally, Nakayama's lemma says that one can still regard a coherent sheaf as coming from a vector bundle in some sense. More precisely, let be a coherent sheaf of -modules over an arbitrary scheme . The stalk of at a point , denoted by , is a module over the local ring and the fiber of at is the vector space . Nakayama's lemma implies that a basis of the fiber lifts to a minimal set of generators of . That is: Any basis of the fiber of a coherent sheaf at a point comes from a minimal basis of local sections. Reformulating this geometrically, if is a locally free -module representing a vector bundle , and if we take a basis of the vector bundle at a point in the scheme , this basis can be lifted to a basis of sections of the vector bundle in some neighborhood of the point. We can organize this data diagrammaticallywhere is an n-dimensional vector space, to say a basis in (which is a basis of sections of the bundle ) can be lifted to a basis of sections for some neighborhood of . Going up and going down The going up theorem is essentially a corollary of Nakayama's lemma. It asserts: Let be an integral extension of commutative rings, and a prime ideal of . Then there is a prime ideal in such that . Moreover, can be chosen to contain any prime of such that . Module epimorphisms Nakayama's lemma makes precise one sense in which finitely generated modules over a commutative ring are like vector spaces over a field. The following consequence of Nakayama's lemma gives another way in which this is true: If is a finitely generated -module and is a surjective endomorphism, then is an isomorphism. Over a local ring, one can say more about module epimorphisms: Suppose that is a local ring with maximal ideal , and are finitely generated -modules. If is an -linear map such that the quotient is surjective, then is surjective. Homological versions Nakayama's lemma also has several versions in homological algebra. The above statement about epimorphisms can be used to show: Let be a finitely generated module over a local ring. Then is projective if and only if it is free. This can be used to compute the Grothendieck group of any local ring as . A geometrical and global counterpart to this is the Serre–Swan theorem, relating projective modules and coherent sheaves. More generally, one has Let be a local ring and a finitely generated module over . Then the projective dimension of over is equal to the length of every minimal free resolution of . Moreover, the projective dimension is equal to the global dimension of , which is by definition the smallest integer such that Here is the residue field of and is the tor functor. Inverse function theorem Nakayama's lemma is used to prove a version of the inverse function theorem in algebraic geometry: Let be a projective morphism between quasi-projective varieties. Then is an isomorphism if and only if it is a bijection and the differential is injective for all . Proof A standard proof of the Nakayama lemma uses the following technique due to . Let M be an R-module generated by n elements, and φ: M → M an R-linear map. If there is an ideal I of R such that φ(M) ⊂ IM, then there is a monic polynomial with pk ∈ Ik, such that as an endomorphism of M. 
This assertion is precisely a generalized version of the Cayley–Hamilton theorem, and the proof proceeds along the same lines. On the generators xi of M, one has a relation of the form where aij ∈ I. Thus The required result follows by multiplying by the adjugate of the matrix (φδij − aij) and invoking Cramer's rule. One finds then det(φδij − aij) = 0, so the required polynomial is To prove Nakayama's lemma from the Cayley–Hamilton theorem, assume that IM = M and take φ to be the identity on M. Then define a polynomial p(x) as above. Then has the required property: and . Noncommutative case A version of the lemma holds for right modules over non-commutative unital rings R. The resulting theorem is sometimes known as the Jacobson–Azumaya theorem. Let J(R) be the Jacobson radical of R. If U is a right module over a ring, R, and I is a right ideal in R, then define U·I to be the set of all (finite) sums of elements of the form u·i, where · is simply the action of R on U. Necessarily, U·I is a submodule of U. If V is a maximal submodule of U, then U/V is simple. So U·J(R) is necessarily a subset of V, by the definition of J(R) and the fact that U/V is simple. Thus, if U contains at least one (proper) maximal submodule, U·J(R) is a proper submodule of U. However, this need not hold for arbitrary modules U over R, for U need not contain any maximal submodules. Naturally, if U is a Noetherian module, this holds. If R is Noetherian, and U is finitely generated, then U is a Noetherian module over R, and the conclusion is satisfied. Somewhat remarkable is that the weaker assumption, namely that U is finitely generated as an R-module (and no finiteness assumption on R), is sufficient to guarantee the conclusion. This is essentially the statement of Nakayama's lemma. Precisely, one has: Nakayama's lemma: Let U be a finitely generated right module over a (unital) ring R. If U is a non-zero module, then U·J(R) is a proper submodule of U. Proof Let be a finite subset of , minimal with respect to the property that it generates . Since is non-zero, this set is nonempty. Denote every element of by for . Since generates ,. Suppose , to obtain a contradiction. Then every element can be expressed as a finite combination for some . Each can be further decomposed as for some . Therefore, we have . Since is a (two-sided) ideal in , we have for every , and thus this becomes for some , . Putting and applying distributivity, we obtain . Choose some . If the right ideal were proper, then it would be contained in a maximal right ideal and both and would belong to , leading to a contradiction (note that by the definition of the Jacobson radical). Thus and has a right inverse in . We have . Therefore, . Thus is a linear combination of the elements from . This contradicts the minimality of and establishes the result. Graded version There is also a graded version of Nakayama's lemma. Let R be a ring that is graded by the ordered semigroup of non-negative integers, and let denote the ideal generated by positively graded elements. Then if M is a graded module over R for which for i sufficiently negative (in particular, if M is finitely generated and R does not contain elements of negative degree) such that , then . Of particular importance is the case that R is a polynomial ring with the standard grading, and M is a finitely generated module. The proof is much easier than in the ungraded case: taking i to be the least integer such that , we see that does not appear in , so either , or such an i does not exist, i.e., . 
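A small worked example may help fix ideas. The LaTeX fragment below was added here for illustration, with notation chosen to match the statements above; it spells out Statement 1 for a concrete module and records the standard counterexample showing why finite generation cannot be dropped from Statement 3.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

\paragraph{A concrete instance of Statement 1.}
Let $R=\mathbb{Z}$, $I=(5)$, and $M=\mathbb{Z}/4\mathbb{Z}$, which is generated by one
element. Since $\gcd(5,4)=1$, multiplication by $5$ is surjective on $M$, so $IM=M$.
Statement~1 then promises some $r\equiv 1 \pmod I$ with $rM=0$; indeed $r=-4=1-5$ works,
because $-4\cdot m=0$ for every $m\in\mathbb{Z}/4\mathbb{Z}$.

\paragraph{Why finite generation matters in Statement 3.}
Take the local ring $R=\mathbb{Z}_{(p)}$ (the integers localized at a prime $p$), whose
maximal ideal is $\mathfrak{m}=(p)$, and let $M=\mathbb{Q}$. Every rational number is
divisible by $p$ inside $\mathbb{Q}$, so $\mathfrak{m}M=M$, yet $M\neq 0$. This does not
contradict Statement~3, because $\mathbb{Q}$ is not finitely generated as a
$\mathbb{Z}_{(p)}$-module.

\end{document}
```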
See also Module theory Serre–Swan theorem Notes References . . . . . . . Links How to understand Nakayama's Lemma and its Corollaries Theorems in ring theory Algebraic geometry Commutative algebra Lemmas in algebra
Nakayama's lemma
Mathematics
2,542
64,616,308
https://en.wikipedia.org/wiki/Cyparissus%20%28Vignali%29
Cyparissus is a 1620s Baroque painting on a mythological subject from Ovid's Metamorphoses by the Italian painter Jacopo Vignali. It is on display in the Musée des Beaux-Arts of Strasbourg, France, to which it had been donated by the collectors Othon Kaufmann and François Schlageter in 1994. Its inventory number is 994-1-8, or 44.994.1.8. The painting depicts the young Cyparissus mourning his pet deer, which he had mistakenly killed with his own bow and arrow. The young boy's pain is amplified beyond the description given by Ovid, possibly under the inspiration of a 1624 Venetian edition of Giovanni Andrea dell' Anguillara's Metamorfosi ridotte in ottava rima, in which the tearful aspect of the story is emphasized. It is one of the very few profane paintings by Vignali. References External links Cyparissus, presentation on the museum's website Paintings in the Musée des Beaux-Arts de Strasbourg 1620s paintings Mythological paintings Baroque paintings Italian paintings Paintings about death Human–animal interaction Oil on canvas paintings Paintings based on Metamorphoses
Cyparissus (Vignali)
Biology
251
22,102,292
https://en.wikipedia.org/wiki/DySPAN
The Dynamic Spectrum Access Networks Standards Committee (DySPAN-SC), formerly Standards Coordinating Committee 41 (SCC41), and even earlier the IEEE P1900 Standards Committee, is sponsored by the Institute of Electrical and Electronics Engineers (IEEE). The group develops standards for radio and spectrum management. Its working groups and resulting standards, numbered in the 1900 range, are sometimes referred to as IEEE 1900.X. Background The IEEE P1900 Standards Committee was established in March 2005 jointly by the IEEE Communications Society (ComSoc) and the IEEE Electromagnetic Compatibility Society (EMC). The effort developed supporting standards for radio and dynamic spectrum management. On March 22, 2007, the IEEE Standards Board approved its reorganization as Standards Coordinating Committee 41 (SCC41), Dynamic Spectrum Access Networks (DySPAN). The IEEE ComSoc and EMC sponsored this effort, as they did for IEEE 1900. The IEEE 1900 Committee ceased to exist at the inaugural meeting of SCC41 in April 2007. The work of the IEEE 1900.x Working Groups continued under SCC41. SCC41 voted to be directly answerable to ComSoc in December 2010, and was renamed as IEEE DySPAN-SC. At its December 2010 Meeting, the IEEE Standards Association Standards Board (SASB) approved the transfer of projects to the Communications Society Standards Board. Overview DySPAN-SC focuses on Dynamic Spectrum Access and associated technologies. Due to the strong inter-relationships between such topics, it also touches on other areas such as Cognitive Radio. Working groups IEEE DySPAN-SC currently oversees the following standards development working groups: 1900.1 Working Group on Terminology and Concepts for Next Generation Radio Systems and Spectrum Management 1900.2 Working Group on Recommended Practice for Interference and Coexistence Analysis 1900.3 Working Group on Recommended Practice for Conformance Evaluation of Software Defined Radio (SDR) Software Modules 1900.4 Working Group on Architectural Building Blocks Enabling Network-Device Distributed Decision Making for Optimized Radio Resource Usage in Heterogeneous Wireless Access Networks 1900.5 Working Group on Policy Language and Policy Architectures for Managing Cognitive Radio for Dynamic Spectrum Access Applications 1900.6 Working Group on Spectrum Sensing Interfaces and Data Structures for Dynamic Spectrum Access and other Advanced Radio Communication Systems P1900.7 Working Group on Radio Interface for White Space Dynamic Spectrum Access Radio Systems Supporting Fixed and Mobile Operation Proposed standards have "P" prepended to the name until they are ratified. The first to be published was 1900.2 in July 2008. Next was 1900.1 on September 26, 2008. Then 1900.4 was published on February 27, 2009. Work then began on amendment P1900.4.1a for dynamic spectrum access networks in white space frequency bands, and P1900.4.1 for interoperability between components of the IEEE 1900.4 system. The 1900.6 standard was published on April 22, 2011, and work began on an amendment 1900.6a. IEEE 1900.4 The IEEE 1900.4 Working Group is on "Architectural Building Blocks Enabling Network-Device Distributed Decision Making for Optimized Radio Resource Usage in Heterogeneous Wireless Access Networks" It is a working group under the IEEE SCC41. IEEE 1900.4 was published on February 27, 2009. 
There are two projects for the 1900.4 Working Group starting April 2009: 1900.4a: Standard for Architectural Building Blocks Enabling Network-Device Distributed Decision Making for Optimized Radio Resource Usage in Heterogeneous Wireless Access Networks – Amendment: Architecture and Interfaces for Dynamic Spectrum Access Networks in White Space Frequency Bands 1900.4.1: Standard for Interfaces and Protocols Enabling Distributed Decision Making for Optimized Radio Resource Usage in Heterogeneous Wireless Networks Standard Overview Use cases (cases in which the protocols described by this standard will be used) include: Dynamic spectrum assignment Dynamic spectrum sharing Distributed radio resource usage optimization History The protocol was first popularized by various articles, including one on Monday, March 23, 2009. See also IEEE 802.22 standard for Wireless Regional Area Network Software-defined radio Cognitive radio Open spectrum References Further reading Radio technology Wireless networking standards IEEE standards
DySPAN
Technology,Engineering
840
68,824,917
https://en.wikipedia.org/wiki/CP-450
The CP 450 was a large cabinet containing a floppy disk drive interface, working exactly like that of the TRS-80 Color Computer, manufactured by Prológica, a computer company located in Brazil. General information The standard operating system is DOS-400, an adapted and renamed copy of disk Extended Color BASIC (DECB or RSDOS). It was also possible to run other operating systems, such as Microware OS-9 and TSC Flex9. Using OS-9 allowed the user to access all 64 KB of RAM available on this particular version of the CP 400. The CP 450 units stopped being manufactured at the end of 1986, along with other accessories suitable for the CP 400. Bibliography Micro Computador - Curso Básico. Rio de Janeiro: Rio Gráfica, 1984, v. 1, pp. 49–50. ABREU, Carlos Alberto C. 77 programas para linha TRS-80. Rio de Janeiro: Microkit, 1985. References Computer-related introductions in 1984
CP-450
Technology
203
5,931,077
https://en.wikipedia.org/wiki/National%20Art%20Glass%20Gallery
The National Art Glass Gallery is located at the Wagga Wagga Civic Centre. The gallery began collecting studio glass in 1979 under the name Wagga Wagga Art Gallery, and was changed to its current name to recognise the gallery's national significance. Collections References External links National Art Glass Gallery - Wagga Wagga Art Gallery Art museums and galleries in New South Wales Glass museums and galleries Wagga Wagga Art museums and galleries established in 1979 1979 establishments in Australia
National Art Glass Gallery
Materials_science,Engineering
90
52,711,824
https://en.wikipedia.org/wiki/Cytestrol%20acetate
Cytestrol acetate is a steroidal antiestrogen and a cytostatic antineoplastic agent (i.e., chemotherapeutic) which was developed for the treatment of breast cancer but was never marketed. It is an 11α-hydroxylated derivative of ethinylestradiol in which a bis(2-chloroethyl)amine nitrogen mustard moiety has been attached as an ester at the C3 position and acetate esters have been attached at the C11α and C17β positions. The mechanism of action of cytestrol acetate in breast cancer is two-fold: (1) acting as an antiestrogen similarly to fulvestrant or ICI-164384; and (2) having cytostatic actions via the carbamate–nitrogen mustard moiety analogously to estramustine phosphate. The drug shows potent efficacy against breast cancer superior to that of tamoxifen in in vitro models. See also List of hormonal cytostatic antineoplastic agents List of Russian drugs References Acetate esters Antiestrogen esters Antiestrogens Antineoplastic drugs Carbamates Hormonal antineoplastic drugs Nitrogen mustards Organochlorides Prodrugs Russian drugs Chloroethyl compounds
Cytestrol acetate
Chemistry
280
64,118,685
https://en.wikipedia.org/wiki/Haseb%20%28rocket%29
The Haseb rocket is an Iranian 107 mm artillery rocket derived from the Chinese Type 63 multiple rocket launcher. It is fired from a multiple launch rocket system (MLRS) of the same name, Haseb, with a 12-tube, 2-wheel split-rail launcher similar to that of the Type 63 multiple rocket launcher. The rocket has a range of 9 km and a warhead weighing 8 kg. Operators See also Oghab Noor Shahin Arash Yaqeen-1 References Artillery of Iran Rocket artillery
Haseb (rocket)
Astronomy
100
41,287
https://en.wikipedia.org/wiki/Intersymbol%20interference
In telecommunications, intersymbol interference (ISI) is a form of distortion of a signal in which one symbol interferes with subsequent symbols. This is an unwanted phenomenon as the previous symbols have a similar effect as noise, thus making the communication less reliable. The spreading of the pulse beyond its allotted time interval causes it to interfere with neighboring pulses. ISI is usually caused by multipath propagation or the inherent linear or non-linear frequency response of a communication channel causing successive symbols to blur together. The presence of ISI in the system introduces errors in the decision device at the receiver output. Therefore, in the design of the transmitting and receiving filters, the objective is to minimize the effects of ISI, and thereby deliver the digital data to its destination with the smallest error rate possible. Ways to alleviate intersymbol interference include adaptive equalization and error correcting codes. Causes Multipath propagation One of the causes of intersymbol interference is multipath propagation in which a wireless signal from a transmitter reaches the receiver via multiple paths. The causes of this include reflection (for instance, the signal may bounce off buildings), refraction (such as through the foliage of a tree) and atmospheric effects such as atmospheric ducting and ionospheric reflection. Since the various paths can be of different lengths, this results in the different versions of the signal arriving at the receiver at different times. These delays mean that part or all of a given symbol will be spread into the subsequent symbols, thereby interfering with the correct detection of those symbols. Additionally, the various paths often distort the amplitude and/or phase of the signal, thereby causing further interference with the received signal. Bandlimited channels Another cause of intersymbol interference is the transmission of a signal through a bandlimited channel, i.e., one where the frequency response is zero above a certain frequency (the cutoff frequency). Passing a signal through such a channel results in the removal of frequency components above this cutoff frequency. In addition, components of the frequency below the cutoff frequency may also be attenuated by the channel. This filtering of the transmitted signal affects the shape of the pulse that arrives at the receiver. The effects of filtering a rectangular pulse not only change the shape of the pulse within the first symbol period, but it is also spread out over the subsequent symbol periods. When a message is transmitted through such a channel, the spread pulse of each individual symbol will interfere with following symbols. Bandlimited channels are present in both wired and wireless communications. The limitation is often imposed by the desire to operate multiple independent signals through the same area/cable; due to this, each system is typically allocated a piece of the total bandwidth available. For wireless systems, they may be allocated a slice of the electromagnetic spectrum to transmit in (for example, FM radio is often broadcast in the 87.5–108 MHz range). This allocation is usually administered by a government agency; in the case of the United States this is the Federal Communications Commission (FCC). In a wired system, such as an optical fiber cable, the allocation will be decided by the owner of the cable. 
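To make the multipath mechanism concrete, the following short Python sketch is an illustration only: the two-tap channel, symbol values and one-symbol echo delay are arbitrary assumptions, not figures from any particular system. It shows how a delayed echo smears each transmitted symbol into its neighbour.

import numpy as np

# Hypothetical example: binary symbols (+1/-1), one sample per symbol.
symbols = np.array([+1, -1, +1, +1, -1, +1, -1, -1], dtype=float)

# Two-path channel: direct path plus an echo delayed by one symbol
# with half the amplitude (an arbitrary illustrative assumption).
channel = np.array([1.0, 0.5])

received = np.convolve(symbols, channel)[:len(symbols)]

# Each received sample now depends on the current AND the previous symbol,
# which is exactly intersymbol interference.
for tx, rx in zip(symbols, received):
    print(f"sent {tx:+.0f}  received {rx:+.2f}")

In this toy model the spread of received values toward zero corresponds to the closing of the eye opening discussed below; an equalizer would attempt to invert the two-tap channel before the decision device.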
The bandlimiting can also be due to the physical properties of the medium - for instance, the cable being used in a wired system may have a cutoff frequency above which practically none of the transmitted signal will propagate. Communication systems that transmit data over bandlimited channels usually implement pulse shaping to avoid interference caused by the bandwidth limitation. If the channel frequency response is flat and the shaping filter has a finite bandwidth, it is possible to communicate with no ISI at all. Often the channel response is not known beforehand, and an adaptive equalizer is used to compensate the frequency response. Effects on eye patterns One way to study ISI in a PCM or data transmission system experimentally is to apply the received wave to the vertical deflection plates of an oscilloscope and to apply a sawtooth wave at the transmitted symbol rate R (R = 1/T) to the horizontal deflection plates. The resulting display is called an eye pattern because of its resemblance to the human eye for binary waves. The interior region of the eye pattern is called the eye opening. An eye pattern provides a great deal of information about the performance of the pertinent system. The width of the eye opening defines the time interval over which the received wave can be sampled without error from ISI. It is apparent that the preferred time for sampling is the instant of time at which the eye is open widest. The sensitivity of the system to timing error is determined by the rate of closure of the eye as the sampling time is varied. The height of the eye opening, at a specified sampling time, defines the margin over noise. An eye pattern, which overlays many samples of a signal, can give a graphical representation of the signal characteristics. The first image above is the eye pattern for a binary phase-shift keying (PSK) system in which a one is represented by an amplitude of −1 and a zero by an amplitude of +1. The current sampling time is at the center of the image and the previous and next sampling times are at the edges of the image. The various transitions from one sampling time to another (such as one-to-zero, one-to-one and so forth) can clearly be seen on the diagram. The noise margin - the amount of noise required to cause the receiver to get an error - is given by the distance between the signal and the zero amplitude point at the sampling time; in other words, the further from zero at the sampling time the signal is the better. For the signal to be correctly interpreted, it must be sampled somewhere between the two points where the zero-to-one and one-to-zero transitions cross. Again, the further apart these points are the better, as this means the signal will be less sensitive to errors in the timing of the samples at the receiver. The effects of ISI are shown in the second image which is an eye pattern of the same system when operating over a multipath channel. The effects of receiving delayed and distorted versions of the signal can be seen in the loss of definition of the signal transitions. It also reduces both the noise margin and the window in which the signal can be sampled, which shows that the performance of the system will be worse (i.e. it will have a greater bit error ratio). Countering ISI There are several techniques in telecommunications and data storage that try to work around the problem of intersymbol interference. Design systems such that the impulse response is short enough that very little energy from one symbol smears into the next symbol. 
Separate symbols in time with guard periods. Apply an equalizer at the receiver, that, broadly speaking, attempts to undo the effect of the channel by applying an inverse filter. Apply a sequence detector at the receiver, that attempts to estimate the sequence of transmitted symbols using the Viterbi algorithm. Intentional intersymbol interference Coded modulation systems also exist that intentionally build a controlled amount of ISI into the system at the transmitter side, known as faster-than-Nyquist signaling. Such a design trades a computational complexity penalty at the receiver against a Shannon capacity gain of the overall transceiver system. See also Nyquist ISI criterion References Further reading External links Definition of ISI from Federal Standard 1037C Intersymbol interference concept Telecommunication theory Wireless networking Television terminology
Intersymbol interference
Technology,Engineering
1,521
36,653,939
https://en.wikipedia.org/wiki/Pozzolanic%20activity
The pozzolanic activity is a measure of the degree of reaction over time or the reaction rate between a pozzolan and Ca2+ or calcium hydroxide (Ca(OH)2) in the presence of water. The rate of the pozzolanic reaction depends on the intrinsic characteristics of the pozzolan such as the specific surface area, the chemical composition and the active phase content. Physical surface adsorption is not considered as being part of the pozzolanic activity, because no irreversible molecular bonds are formed in the process. Reaction The pozzolanic reaction is the chemical reaction that occurs in portland cement upon the addition of pozzolans. It is the main reaction involved in the Roman concrete invented in Ancient Rome and used to build, for example, the Pantheon. The pozzolanic reaction converts a silica-rich precursor with no cementing properties to a calcium silicate with good cementing properties. In chemical terms, the pozzolanic reaction occurs between calcium hydroxide, also known as portlandite (Ca(OH)2), and silicic acid (written as H4SiO4, or Si(OH)4, in the geochemical notation): Ca(OH)2 + H4SiO4 → CaH2SiO4·2 H2O or summarized in abbreviated cement chemist notation: CH + SH → C-S-H The pozzolanic reaction can also be written in the older industrial silicate notation as: Ca(OH)2 + H2SiO3 → CaSiO3·2 H2O or even directly: CaO + SiO2 + 2 H2O → CaSiO3·2 H2O Both notations still coexist in the literature, depending on the research field considered. However, the more recent geochemical notation, in which the Si atom is tetracoordinated by four hydroxyl groups (Si(OH)4, also commonly noted H4SiO4), is more correct than the older industrial silicate notation, in which silicic acid (H2SiO3) was represented in the same way as carbonic acid (H2CO3), whose geometrical configuration is trigonal planar. When only considering mass balance, they are equivalent and both are used. The product CaH2SiO4·2 H2O is a calcium silicate hydrate, also abbreviated as C-S-H in cement chemist notation; the hyphenation denotes the variable stoichiometry. The atomic (or molar) ratio Ca/Si, CaO/SiO2, or C/S, and the number of water molecules can vary, and the above-mentioned stoichiometry may differ. Many pozzolans also contain aluminate, or Al(OH)4−, that will react with calcium hydroxide and water to form calcium aluminate hydrates such as C4AH13, C3AH6 or hydrogarnet, or in combination with silica C2ASH8 or strätlingite (cement chemist notation). In the presence of anionic groups such as sulfate, carbonate or chloride, AFm phases and AFt or ettringite phases can form. The pozzolanic reaction is a long-term reaction, which involves dissolved silicic acid, water and CaO or Ca(OH)2 or other pozzolans to form a strong cementation matrix. This process is often irreversible. A sufficient amount of free calcium ions and a high pH of 12 or above are needed to initiate and maintain the pozzolanic reaction, because at a pH of around 12 the solubility of silicon and aluminium ions is high enough to support the pozzolanic reaction. Activity determining parameters Particle properties Prolonged grinding results in increased pozzolanic activity by creating a larger specific surface area available for reaction. Moreover, grinding also creates crystallographic defects at and below the particle surface. The dissolution rate of the strained or partially disconnected silicate moieties is strongly enhanced. Even materials which are not commonly regarded as pozzolans, such as quartz, can become reactive once ground below a certain critical particle diameter.
Composition The overall chemical composition of a pozzolan is considered one of the parameters governing the long-term performance (e.g. compressive strength) of the blended cement binder; for example, ASTM C618 prescribes that a pozzolan should contain SiO2 + Al2O3 + Fe2O3 ≥ 70 wt.%. In the case of a (quasi) single-phase material such as blast-furnace slag, the overall chemical composition can be considered a meaningful parameter; for multi-phase materials, only a correlation between the pozzolanic activity and the chemistry of the active phases can be sought. Many pozzolans consist of a heterogeneous mixture of phases of different pozzolanic activity. Obviously, the content of reactive phases is an important property determining the overall reactivity. In general, the pozzolanic activity of phases thermodynamically stable at ambient conditions is low when compared, on an equal specific surface area basis, to less thermodynamically stable phase assemblages. Volcanic ash deposits containing large amounts of volcanic glass or zeolites are more reactive than quartz sands or detrital clay minerals. In this respect, the thermodynamic driving force behind the pozzolanic reaction serves as a rough indicator of the potential reactivity of an (alumino-)silicate material. Similarly, materials showing structural disorder, such as glasses, show higher pozzolanic activities than crystalline, ordered compounds. Reaction conditions The rate of the pozzolanic reaction can also be controlled by external factors such as the mix proportions, the amount of water or space available for the formation and growth of hydration products, and the temperature of reaction. Therefore, typical blended cement mix design properties such as the replacement ratio of pozzolan for Portland cement, the water-to-binder ratio and the curing conditions strongly affect the reactivity of the added pozzolan. Pozzolanic activity tests Mechanical tests Mechanical evaluation of the pozzolanic activity is based upon a comparison of the compressive strength of mortar bars containing pozzolans as a partial replacement for Portland cement to reference mortar bars containing only Portland cement as binder. The mortar bars are prepared, cast, cured and tested following a detailed set of prescriptions. Compressive strength testing is carried out at fixed times, typically 3, 7, and 28 days after mortar preparation. A material is considered pozzolanically active when it contributes to the compressive strength, taking into account the effect of dilution. Most national and international technical standards or norms include variations of this methodology. Chemical tests A pozzolanic material is by definition capable of binding calcium hydroxide in the presence of water. Therefore, the chemical measurement of this pozzolanic activity represents a way of evaluating pozzolanic materials. This can be done by directly measuring the amount of calcium hydroxide a pozzolan consumes over time. At high water-to-binder ratios (suspensions), this can be measured by titrimetry or by spectroscopic techniques. At lower water-to-binder ratios (pastes), thermal analysis or X-ray powder diffraction techniques are commonly used to determine the remaining calcium hydroxide content. Other direct methods have been developed that aim to directly measure the degree of reaction of the pozzolan itself. Here, selective dissolution, X-ray powder diffraction or scanning electron microscopy image analysis methods have been used.
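As a rough illustration of the direct-measurement idea, the idealized 1:1 molar stoichiometry of the pozzolanic reaction given above can be used to estimate how much portlandite one gram of reactive silica could bind. The short Python sketch below is only an order-of-magnitude calculation under that simplifying assumption; real pozzolans react only partially and form C-S-H of variable composition.

# Molar masses in g/mol
M_CaOH2 = 74.09   # portlandite, Ca(OH)2
M_SiO2  = 60.08   # reactive silica, expressed as SiO2

# Idealized 1:1 molar reaction (see the pozzolanic reaction above):
# one mole of Ca(OH)2 consumed per mole of reactive SiO2.
g_silica = 1.0                               # grams of reactive silica
mol_silica = g_silica / M_SiO2               # ≈ 0.0166 mol
g_portlandite_bound = mol_silica * M_CaOH2   # ≈ 1.23 g

print(f"{g_portlandite_bound:.2f} g Ca(OH)2 bound per g reactive SiO2 (idealized)")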
Indirect methods comprise on the one hand methods that investigate which material properties are responsible for the pozzolan's reactivity with portlandite. Material properties of interest are the (re)active silica and alumina content, the specific surface area and/or the reactive mineral and amorphous phases of the pozzolanic material. Other methods indirectly determine the extent of the pozzolanic activity by measuring an indicative physical property of the reacting system. Measurements of the electrical conductivity, chemical shrinkage of the pastes or the heat evolution by heat flow calorimetry reside in the latter category. See also Aerated autoclaved concrete Alkali-aggregate reaction Alkali-carbonate reaction Alkali-silica reaction Calcium silicate hydrate (C-S-H) Calthemite Cement Cement chemist notation Cenospheres Concrete Concrete degradation Energetically modified cement (EMC) Fly ash Geopolymer Metakaolin Portland cement Pozzolan Pozzolana Rice husk ash Roman concrete Silica fume Sodium silicate References Further reading Cook D.J. (1986) Natural pozzolanas. In: Swamy R.N., Editor (1986) Cement Replacement Materials, Surrey University Press, p. 200. Lechtman H. and Hobbs L. (1986) "Roman Concrete and the Roman Architectural Revolution", Ceramics and Civilization Volume 3: High Technology Ceramics: Past, Present, Future, edited by W.D. Kingery and published by the American Ceramics Society, 1986; and Vitruvius, Book II:v,1; Book V:xii2. McCann A.M. (1994) "The Roman Port of Cosa" (273 BC), Scientific American, Ancient Cities, pp. 92–99, by Anna Marguerite McCann. Covers, hydraulic concrete, of "Pozzolana mortar" and the 5 piers, of the Cosa harbor, the Lighthouse on pier 5, diagrams, and photographs. Height of Port city: 100 BC. Cement Concrete Masonry
Pozzolanic activity
Engineering
1,943
11,421,987
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD42
In molecular biology, snoRNA U42 (also known as SNORD42) is a non-coding RNA (ncRNA) molecule which functions in the modification of other small nuclear RNAs (snRNAs). This type of modifying RNA is usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a guide RNA. snoRNA U42 belongs to the C/D box class of snoRNAs which contain the conserved sequence motifs known as the C box (UGAUGA) and the D box (CUGA). Most of the members of the box C/D family function in directing site-specific 2'-O-methylation of substrate RNAs. In the human genome there are two closely related copies of U42 (called U42A and U42B) both located within the introns of the ribosomal protein L23a (RPL23a) gene. Both snoRNAs are predicted to guide the site specific 2'O-ribose methylation of 18S ribosomal RNA (rRNA) residue U116. The mouse orthologue (MBII-287) has also been identified. References External links Small nuclear RNA
Small nucleolar RNA SNORD42
Chemistry
279
384,327
https://en.wikipedia.org/wiki/Intersection%20number
In mathematics, and especially in algebraic geometry, the intersection number generalizes the intuitive notion of counting the number of times two curves intersect to higher dimensions, multiple (more than 2) curves, and accounting properly for tangency. One needs a definition of intersection number in order to state results like Bézout's theorem. The intersection number is obvious in certain cases, such as the intersection of the x- and y-axes in a plane, which should be one. The complexity enters when calculating intersections at points of tangency, and intersections which are not just points, but have higher dimension. For example, if a plane is tangent to a surface along a line, the intersection number along the line should be at least two. These questions are discussed systematically in intersection theory. Definition for Riemann surfaces Let X be a Riemann surface. Then the intersection number of two closed curves on X has a simple definition in terms of an integral. For every closed curve c on X (i.e., smooth function ), we can associate a differential form of compact support, the Poincaré dual of c, with the property that integrals along c can be calculated by integrals over X: , for every closed (1-)differential on X, where is the wedge product of differentials, and is the Hodge star. Then the intersection number of two closed curves, a and b, on X is defined as . The have an intuitive definition as follows. They are a sort of dirac delta along the curve c, accomplished by taking the differential of a unit step function that drops from 1 to 0 across c. More formally, we begin by defining for a simple closed curve c on X, a function fc by letting be a small strip around c in the shape of an annulus. Name the left and right parts of as and . Then take a smaller sub-strip around c, , with left and right parts and . Then define fc by . The definition is then expanded to arbitrary closed curves. Every closed curve c on X is homologous to for some simple closed curves ci, that is, , for every differential . Define the by . Definition for algebraic varieties The usual constructive definition in the case of algebraic varieties proceeds in steps. The definition given below is for the intersection number of divisors on a nonsingular variety X. 1. The only intersection number that can be calculated directly from the definition is the intersection of hypersurfaces (subvarieties of X of codimension one) that are in general position at x. Specifically, assume we have a nonsingular variety X, and n hypersurfaces Z1, ..., Zn which have local equations f1, ..., fn near x for polynomials fi(t1, ..., tn), such that the following hold: . for all i. (i.e., x is in the intersection of the hypersurfaces.) (i.e., the divisors are in general position.) The are nonsingular at x. Then the intersection number at the point x (called the intersection multiplicity at x) is , where is the local ring of X at x, and the dimension is dimension as a k-vector space. It can be calculated as the localization , where is the maximal ideal of polynomials vanishing at x, and U is an open affine set containing x and containing none of the singularities of the fi. 2. The intersection number of hypersurfaces in general position is then defined as the sum of the intersection numbers at each point of intersection. 3. Extend the definition to effective divisors by linearity, i.e., and . 4. 
Extend the definition to arbitrary divisors in general position by noticing every divisor has a unique expression as D = P – N for some effective divisors P and N. So let Di = Pi – Ni, and use rules of the form to transform the intersection. 5. The intersection number of arbitrary divisors is then defined using a "Chow's moving lemma" that guarantees we can find linearly equivalent divisors that are in general position, which we can then intersect. Note that the definition of the intersection number does not depend on the order in which the divisors appear in the computation of this number. Serre's Tor formula Let V and W be two subvarieties of a nonsingular projective variety X such that dim(V) + dim(W) = dim(X). Then we expect the intersection V ∩ W to be a finite set of points. If we try to count them, two kinds of problems may arise. First, even if the expected dimension of V ∩ W is zero, the actual intersection may be of a large dimension: for example the self-intersection number of a projective line in a projective plane. The second potential problem is that even if the intersection is zero-dimensional, it may be non-transverse, for example, if V is a plane curve and W is one of its tangent lines. The first problem requires the machinery of intersection theory, discussed above in detail, which replaces V and W by more convenient subvarieties using the moving lemma. On the other hand, the second problem can be solved directly, without moving V or W. In 1965 Jean-Pierre Serre described how to find the multiplicity of each intersection point by methods of commutative algebra and homological algebra. This connection between a geometric notion of intersection and a homological notion of a derived tensor product has been influential and led in particular to several homological conjectures in commutative algebra. Serre's Tor formula states: let X be a regular variety, V and W two subvarieties of complementary dimension such that V ∩ W is zero-dimensional. For any point x ∈ V ∩ W, let A be the local ring of x. The structure sheaves of V and W at x correspond to ideals I, J ⊆ A. Then the multiplicity of V ∩ W at the point x is where length is the length of a module over a local ring, and Tor is the Tor functor. When V and W can be moved into a transverse position, this homological formula produces the expected answer. So, for instance, if V and W meet transversely at x, the multiplicity is 1. If V is a tangent line at a point x to a parabola W in a plane at a point x, then the multiplicity at x is 2. If both V and W are locally cut out by regular sequences, for example if they are nonsingular, then in the formula above all higher Tor's vanish, hence the multiplicity is positive. The positivity in the arbitrary case is one of Serre's multiplicity conjectures. Further definitions The definition can be vastly generalized, for example to intersections along subvarieties instead of just at points, or to arbitrary complete varieties. In algebraic topology, the intersection number appears as the Poincaré dual of the cup product. Specifically, if two manifolds, X and Y, intersect transversely in a manifold M, the homology class of the intersection is the Poincaré dual of the cup product of the Poincaré duals of X and Y. Snapper–Kleiman definition of intersection number There is an approach to intersection number, introduced by Snapper in 1959-60 and developed later by Cartier and Kleiman, that defines an intersection number as an Euler characteristic. 
Let X be a scheme over a scheme S, Pic(X) the Picard group of X and G the Grothendieck group of the category of coherent sheaves on X whose support is proper over an Artinian subscheme of S. For each L in Pic(X), define the endomorphism c1(L) of G (called the first Chern class of L) by It is additive on G since tensoring with a line bundle is exact. One also has: ; in particular, and commute. (this is nontrivial and follows from a dévissage argument.) The intersection number of line bundles Li's is then defined by: where χ denotes the Euler characteristic. Alternatively, one has by induction: Each time F is fixed, is a symmetric functional in Li's. If Li = OX(Di) for some Cartier divisors Di's, then we will write for the intersection number. Let be a morphism of S-schemes, line bundles on X and F in G with . Then . Intersection multiplicities for plane curves There is a unique function assigning to each triplet consisting of a pair of projective curves, and , in and a point , a number called the intersection multiplicity of and at that satisfies the following properties: if and only if and have a common factor that is zero at if and only if one of or is non-zero (i.e. the point is not in the intersection of the two curves) where for any Although these properties completely characterize intersection multiplicity, in practice it is realised in several different ways. One realization of intersection multiplicity is through the dimension of a certain quotient space of the power series ring . By making a change of variables if necessary, we may assume that . Let and be the polynomials defining the algebraic curves we are interested in. If the original equations are given in homogeneous form, these can be obtained by setting . Let denote the ideal of generated by and . The intersection multiplicity is the dimension of as a vector space over . Another realization of intersection multiplicity comes from the resultant of the two polynomials and . In coordinates where , the curves have no other intersections with , and the degree of with respect to is equal to the total degree of , can be defined as the highest power of that divides the resultant of and (with and seen as polynomials over ). Intersection multiplicity can also be realised as the number of distinct intersections that exist if the curves are perturbed slightly. More specifically, if and define curves which intersect only once in the closure of an open set , then for a dense set of , and are smooth and intersect transversally (i.e. have different tangent lines) at exactly some number points in . We say then that . Example Consider the intersection of the x-axis with the parabola at the origin. Writing and we get Thus, the intersection multiplicity is two; it is an ordinary tangency. Similarly one can compute that the curves and with integers intersect at the origin with multiplicity Self-intersections Some of the most interesting intersection numbers to compute are self-intersection numbers. This means that a divisor is moved to another equivalent divisor in general position with respect to the first, and the two are intersected. In this way, self-intersection numbers can become well-defined, and even negative. Applications The intersection number is partly motivated by the desire to define intersection to satisfy Bézout's theorem. The intersection number arises in the study of fixed points, which can be cleverly defined as intersections of function graphs with a diagonals. 
Calculating the intersection numbers at the fixed points counts the fixed points with multiplicity, and leads to the Lefschetz fixed-point theorem in quantitative form. Notes References Appendix A. Algebraic Curves: An Introduction To Algebraic Geometry, by William Fulton with Richard Weiss. New York: Benjamin, 1969. Reprint ed.: Redwood City, CA, USA: Addison-Wesley, Advanced Book Classics, 1989. . Full text online. Algebraic geometry
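Returning to the resultant characterization of intersection multiplicity for plane curves described above, the worked example of the x-axis and the parabola can be checked mechanically. The following Python sketch uses SymPy purely as an illustration; the curve equations are those of the example in the text.

from sympy import symbols, resultant

x, y = symbols('x y')

# Worked example from the text: the x-axis (y = 0) and the parabola y = x**2.
P = y
Q = y - x**2

# Resultant with respect to y, leaving a polynomial in x.
r = resultant(P, Q, y)
print(r)  # -x**2: the highest power of x dividing it is 2, so the multiplicity at the origin is 2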
Intersection number
Mathematics
2,382
71,700,481
https://en.wikipedia.org/wiki/QQ%20Vulpeculae
QQ Vulpeculae is a cataclysmic variable binary star system in the northern constellation of Vulpecula, abbreviated QQ Vul. It has a brightness that fluctuates around an apparent visual magnitude of 14.7, which is too faint to be viewed with the naked eye. The distance to this system is approximately 981 light years based on parallax measurements. This system was detected as a soft X-ray source using the HEAO-1 satellite during 1977–78. The Einstein Observatory was then used in 1981 to more precisely position the source, which was designated E 2003+225. In 1982, J. A. Nousek and associates observed the optical counterpart and found it varied in brightness with a period of , displaying strong emission lines of hydrogen and helium. They identified it as a variable of the AM Herculis type. The system shows a brightness variation of 0.7 magnitude during each orbit, plus a short-term flickering of 0.2 magnitudes. The accepted model for this class of variable is a binary system with a red dwarf secondary in a close orbit with a magnetic white dwarf. The red dwarf is overflowing its Roche lobe and matter is streaming onto the white dwarf. The magnetic field of the white dwarf draws this material toward the magnetic poles, and the material is heated to a sufficient temperature to emit X-rays. In 1985, a weak, extended radio source was detected at the location of this system, suggesting it may be a remnant of a past nova event. X-ray observations in 1991 suggested there are separate regions of hard and soft X-ray emission, indicating matter is being accreted along two poles. The soft X-ray site is likely at the magnetic pole furthest from the secondary star. The strength of the magnetic field in the white dwarf is estimated at . Over long periods, the system has been shown to switch between states of high and low brightness. K. Mukai and associates in 1986 suggested that the primary dip in the light curve is due to the geometry of the system in combination with a partial eclipse of the primary accretion region by the accretion column. The secondary dip may be caused by the limb of the white dwarf partially eclipsing the active accretion region. The rotation period of the white dwarf appears to be locked to the orbital period. References Further reading Polars (cataclysmic variable stars) Red dwarfs White dwarfs Vulpecula Vulpeculae, QQ
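As an illustrative aside, not drawn from the cited measurements themselves, the quoted distance can be related back to a parallax angle with the standard conversion d[pc] = 1/p[arcsec] and 1 pc ≈ 3.2616 light-years; the short Python check below uses only the 981 light-year figure given above.

# Back-of-the-envelope distance/parallax conversion (illustrative only).
LY_PER_PC = 3.2616                            # light-years per parsec

distance_ly = 981.0                           # distance quoted above
distance_pc = distance_ly / LY_PER_PC         # ≈ 300.8 pc
parallax_mas = 1000.0 / distance_pc           # p[mas] = 1000 / d[pc] ≈ 3.3 mas

print(f"{distance_pc:.1f} pc, implied parallax ≈ {parallax_mas:.2f} mas")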
QQ Vulpeculae
Astronomy
514
57,340,821
https://en.wikipedia.org/wiki/Kittell%20graph
In the mathematical field of graph theory, the Kittell graph is a planar graph with 23 vertices and 63 edges. Its unique planar embedding has 42 triangular faces. The Kittell graph is named after Irving Kittell, who used it as a counterexample to Alfred Kempe's flawed proof of the four-color theorem. Simpler counterexamples include the Errera graph and Poussin graph (both published earlier than Kittell) and the Fritsch graph and Soifer graph. References Individual graphs Planar graphs
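The stated counts are mutually consistent: taking the 42 triangular faces to include the unbounded face, they satisfy Euler's formula for a connected planar graph, V − E + F = 2, and the edge count of a maximal planar graph, E = 3V − 6. A two-line Python check, included here only as an illustration of that arithmetic:

# Consistency check of the counts quoted above for the Kittell graph.
V, E, F = 23, 63, 42
assert V - E + F == 2        # Euler's formula for a connected planar graph
assert E == 3 * V - 6        # edge count of a maximal planar graph (all faces triangular)
print("counts consistent")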
Kittell graph
Mathematics
113
1,411,865
https://en.wikipedia.org/wiki/TRACE
Transition Region and Coronal Explorer (TRACE, or Explorer 73, SMEX-4) was a NASA heliophysics and solar observatory designed to investigate the connections between fine-scale magnetic fields and the associated plasma structures on the Sun by providing high-resolution images and observation of the solar photosphere, the transition region, and the solar corona. A main focus of the TRACE instrument is the fine structure of coronal loops low in the solar atmosphere. TRACE is the third spacecraft in the Small Explorer program, launched on 2 April 1998, and obtained its last science image on 21 June 2010, at 23:56 UTC. Mission The Transition Region and Coronal Explorer (TRACE) is a NASA small explorer mission designed to examine the three-dimensional magnetic structures which emerge through the Sun's photosphere (the visible surface of the Sun) and define both the geometry and dynamics of the upper solar atmosphere (the transition region and corona). Its primary science objectives are to: (1) follow the evolution of magnetic field structures from the solar interior to the corona; (2) investigate the mechanisms of the heating of the outer solar atmosphere; and, (3) determine the triggers and onset of solar flares and mass ejections. TRACE is a single-instrument, three-axis stabilized spacecraft. The spacecraft attitude control system (ACS) utilizes three magnetic-torquer coils, a digital Sun sensor, six coarse Sun sensors, a three-axis magnetometer, four reaction wheels, and three two-axis inertial gyros to maintain pointing. In science mode, the spacecraft uses an instrument-provided guide telescope as a fine guidance sensor to provide a pointing accuracy of less than 5 arcseconds. Power is provided to the spacecraft through the use of four panels of gallium arsenide (GaAs) solar cells with a total area of . The solar array actually produces power of around 220 watts, 85 W of which is used each orbit by the spacecraft and 35 W of which is used by the instrument each orbit. The remaining power is used for operational and decontamination heating of the spacecraft and telescope. A 9 A-hour nickel–cadmium battery (NiCd) provides energy during time when the spacecraft is in the Earth's shadow. Communications are provided via a 5 W S-band transponder, providing up to 2.25 Mbit/s downlink data transmission and 2 kbit/s uplink. Data are transmitted up to six times daily. Data are stored onboard using a solid-state recorder capable of holding up to 300 MB. The command and data handling system uses a 32-bit 80386/80387 processor. Spacecraft The satellite was built by NASA's Goddard Space Flight Center. Its telescope was constructed by a consortium led by Lockheed Martin's Advanced Technology Center. The optics were designed and built to a State of the art surface finish by the Smithsonian Astrophysical Observatory (SAO). The telescope has a aperture and 1024 × 1024 charge-coupled device (CCD) detector giving an 8.5 arcminute field of view (FoV). The telescope is designed to take correlated images in a range of wavelengths from visible light through the Lyman alpha line to far ultraviolet. The different wavelength passbands correspond to plasma emission temperatures from 4,000 to 4,000,000 K. The optics use a special multilayer technique to focus the difficult-to-reflect extreme ultraviolet (EUV) light; the technique was first used for solar imaging in the late 1980s and 1990s, notably by the MSSTA and NIXT sounding rocket payloads. 
Experiment TRACE Imaging Telescope The telescope is of Cassegrain design, long with an aperture of . The focal length is . The field of view of the telescope is 8.5 x 8.5 arcminutes with a spatial resolution of one arcsecond. The light is focused on a 1024 x 1024 element CCD detector (0.5 arcseconds/pixel). The temporal resolution of the instrument is less than 1 second, although the nominal temporal resolution is 5 seconds. Exposure times for observations range between 2 ms and 260 seconds. The primary and secondary mirrors have normal-incidence coatings specially designed for EUV and UV observations which divide the mirrors into quadrants. These segmented coatings are designed to provide identically sized and perfectly coaligned images. Which mirror quadrant is used for an observation is determined by the position of a quadrant selector shutter mechanism, positioned behind the entrance aperture. Three of the mirror coatings provide for observations in specific iron emission bands: Fe IX (central wavelength/bandwidth: 17.3 nm/0.64 nm); Fe XII (19.5 nm/0.65 nm); and Fe XV (28.4 nm/1.07 nm). The final mirror coating allows broadband observations in the ultraviolet (centered on 500 nm). Further selection of observations in the UV can be made through the use of a filter wheel, mounted in front of the CCD. The filter wheel permits continuum observations (170 nm/20 nm) as well as observations in emission bands for C (carbon) I and Fe II (160 nm/27.5 nm), C IV (155 nm/2 nm), and H (Hydrogen) I (Lyman-alpha) (121.6 nm/8.4 nm). The TRACE primary mirror assembly is based on primary mirror support assemblies used in SWATH, a small explorer developed for the U.S. Air Force, and NIXT, a set of rocket flights flown by the Smithsonian Astrophysical Observatory (SAO) five times between 1983 and 1993. Many of the designs and some of the space flight hardware from the MDI instrument on Solar and Heliospheric Observatory (SoHO) was also used. Image gallery See also Explorer program References External links TRACE website by Lockheed Martin TRACE Data Center by Lockheed Martin TRACE website (archived) by NASA's Goddard Space Flight Center TRACE movies archive by Lockheed Martin Spacecraft launched in 1998 Explorers Program Missions to the Sun Solar telescopes Ultraviolet telescopes Solar space observatories Spacecraft launched by Pegasus rockets
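The detector and field-of-view figures quoted above are mutually consistent; the following Python snippet is only a plate-scale sanity check using the numbers already given in the text, not additional instrument data.

# Plate-scale check for the TRACE CCD, using only figures quoted above.
pixels = 1024                 # detector is 1024 x 1024
arcsec_per_pixel = 0.5        # quoted plate scale
fov_arcmin = pixels * arcsec_per_pixel / 60.0
print(f"field of view ≈ {fov_arcmin:.1f} arcmin per side")   # ≈ 8.5 arcmin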
TRACE
Astronomy
1,253
48,498,010
https://en.wikipedia.org/wiki/Management%20plane
In computer networking, the management plane of a networking device is the element of a system that provides configuration, monitoring and management services to all layers of the network stack and other parts of the system. It should be distinguished from the control plane, which is primarily concerned with computing the routing table and the forwarding information base. In system diagrams, the management plane is typically shown in three dimensions as overlapping the network stack, separated along a dimension that delineates the power plane, control plane, data plane, and management plane. References See also Control plane Data plane Management interface Internet architecture
Management plane
Technology
125
587,339
https://en.wikipedia.org/wiki/Circuit%20diagram
A circuit diagram (or: wiring diagram, electrical diagram, elementary diagram, electronic schematic) is a graphical representation of an electrical circuit. A pictorial circuit diagram uses simple images of components, while a schematic diagram shows the components and interconnections of the circuit using standardized symbolic representations. The presentation of the interconnections between circuit components in the schematic diagram does not necessarily correspond to the physical arrangements in the finished device. Unlike a block diagram or layout diagram, a circuit diagram shows the actual electrical connections. A drawing meant to depict the physical arrangement of the wires and the components they connect is called artwork or layout, physical design, or wiring diagram. Circuit diagrams are used for the design (circuit design), construction (such as PCB layout), and maintenance of electrical and electronic equipment. In computer science, circuit diagrams are useful when visualizing expressions using Boolean algebra. Symbols Circuit diagrams are pictures with symbols that have differed from country to country and have changed over time, but are now to a large extent internationally standardized. Simple components often had symbols intended to represent some feature of the physical construction of the device. For example, the symbol for a resistor dates back to the time when that component was made from a long piece of wire wrapped in such a manner as to not produce inductance, which would have made it a coil. These wirewound resistors are now used only in high-power applications, smaller resistors being cast from carbon composition (a mixture of carbon and filler) or fabricated as an insulating tube or chip coated with a metal film. The internationally standardized symbol for a resistor is therefore now simplified to an oblong, sometimes with the value in ohms written inside, instead of the zig-zag symbol. A less common symbol is simply a series of peaks on one side of the line representing the conductor, rather than back-and-forth. The linkages between leads were once simple crossings of lines. With the arrival of computerized drafting, the connection of two intersecting wires was shown by a crossing of wires with a "dot" or "blob" to indicate a connection. At the same time, the crossover was simplified to be the same crossing, but without a "dot". However, there was a danger of confusing the wires that were connected and not connected in this manner, if the dot was drawn too small or accidentally omitted (e.g. the "dot" could disappear after several passes through a copy machine). As such, the modern practice for representing a 4-way wire connection is to draw a straight wire and then to draw the other wires staggered along it with "dots" as connections (see diagram), so as to form two separate T-junctions that brook no confusion and are clearly not a crossover. For crossing wires that are insulated from one another, a small semi-circle symbol is commonly used to show one wire "jumping over" the other wire (similar to how jumper wires are used). A common, hybrid style of drawing combines the T-junction crossovers with "dot" connections and the wire "jump" semi-circle symbols for insulated crossings. In this manner, a "dot" that is too small to see or that has accidentally disappeared can still be clearly differentiated from a "jump". On a circuit diagram, the symbols for components are labelled with a descriptor or reference designator matching that on the list of parts. 
For example, C1 is the first capacitor, L1 is the first inductor, Q1 is the first transistor, and R1 is the first resistor. Often the value or type designation of the component is given on the diagram beside the part, but detailed specifications would go on the parts list. Detailed rules for reference designations are provided in the international standard IEC 61346. Organization It is a usual (although not universal) convention that schematic drawings are organized on the page from left to right and top to bottom in the same sequence as the flow of the main signal or power path. For example, a schematic for a radio receiver might start with the antenna input at the left of the page and end with the loudspeaker at the right. Positive power supply connections for each stage would be shown towards the top of the page, with grounds, negative supplies, or other return paths towards the bottom. Schematic drawings intended for maintenance may have the principal signal paths highlighted to assist in understanding the signal flow through the circuit. More complex devices have multi-page schematics and must rely on cross-reference symbols to show the flow of signals between the different sheets of the drawing. Detailed rules for the preparation of circuit diagrams, and other document types used in electrotechnology, are provided in the international standard IEC 61082-1. Circuit diagrams are often drawn with the same standardized title block and frame as other engineering drawings. Relay logic line diagrams, also called ladder logic diagrams, use another common standardized convention for organizing schematic drawings, with a vertical power supply rail on the left and another on the right, and components strung between them like the rungs of a ladder. Artwork Once the schematic has been made, it is converted into a layout that can be fabricated onto a printed circuit board (PCB). Schematic-driven layout starts with the process of schematic capture. The result is what is known as a rat's nest. The rat's nest is a jumble of wires (lines) criss-crossing each other to their destination nodes. These wires are routed either manually or automatically by the use of electronics design automation (EDA) tools. The EDA tools arrange and rearrange the placement of components and find paths for tracks to connect various nodes. This results in the final layout artwork for the integrated circuit or printed circuit board. A generalized design flow may be as follows: Schematic → schematic capture → netlist → rat's nest → routing → artwork → PCB development and etching → component mounting → testing Education Teaching about the functioning of electrical circuits is often on primary and secondary school curricula. Students are expected to understand the rudiments of circuit diagrams and their functioning. Use of diagrammatic representations of circuit diagrams can aid understanding of principles of electricity. Principles of the physics of circuit diagrams are often taught with the use of analogies, such as comparing the functioning of circuits to other closed systems, for example water heating systems in which pumps are the equivalent of batteries. See also Boxology Circuit design language Electronic symbol Logic gate One-line diagram Pinout Schematic capture Schematic editor References External links Electrical diagrams Electronic design Diagrams
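To illustrate what the "netlist" stage of the design flow above contains, the sketch below builds a toy netlist in Python for a hypothetical two-component circuit; the component names, nets and pin numbers are invented for the example and do not follow any particular EDA tool's file format.

# A toy netlist: each net maps to the component pins it connects.
# Reference designators follow the convention described above (R1, C1, ...).
netlist = {
    "VIN":  [("R1", 1)],
    "VOUT": [("R1", 2), ("C1", 1)],
    "GND":  [("C1", 2)],
}

# Routing means finding a physical path for every net; the "rat's nest"
# is simply these connections drawn as straight lines before routing.
for net, pins in netlist.items():
    print(net, "->", ", ".join(f"{ref}.{pin}" for ref, pin in pins))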
Circuit diagram
Engineering
1,362
14,756,639
https://en.wikipedia.org/wiki/ST2%20cardiac%20biomarker
The ST2 cardiac biomarker (also known as soluble interleukin 1 receptor-like 1) is a protein biomarker of cardiac stress encoded by the IL1RL1 gene. ST2 signals the presence and severity of adverse cardiac remodeling and tissue fibrosis, which occur in response to myocardial infarction, acute coronary syndrome, or worsening heart failure. ST2 provides prognostic information that is independent of other cardiac biomarkers such as BNP, NT-proBNP, highly sensitive troponin, GDF-15, and galectin-3. One study indicated that this discrimination is independent of age, body mass index, history of heart failure, anemia, impaired kidney function and sex. Protein ST2 is a member of the interleukin 1 receptor family. The ST2 protein has two isoforms that are directly implicated in the progression of cardiac disease: a soluble form (referred to as soluble ST2 or sST2) and a membrane-bound receptor form (referred to as the ST2 receptor or ST2L). When the myocardium is stretched, the ST2 gene is upregulated, increasing the concentration of circulating soluble ST2. The ligand for ST2 is the cytokine interleukin-33 (IL-33). Binding of IL-33 to the ST2 receptor, in response to cardiac disease or injury, such as an ischemic event, elicits a cardioprotective effect resulting in preserved cardiac function. This cardioprotective IL-33 signal is counterbalanced by the level of soluble ST2, which binds IL-33 and makes it unavailable to the ST2 receptor for cardioprotective signaling. As a result, the heart is subjected to greater stress in the presence of high levels of soluble ST2. Correlation with mortality Published and peer-reviewed findings indicate that ST2 is a predictor of mortality at presentation. Studies have shown that patients with ST2 levels above a clinical threshold consistently have a much higher risk of mortality while, equally important, patients with ST2 levels below the threshold have a very low risk of mortality. Although it has been shown that ST2 concentrations correlate with heart failure severity, there is no level that perfectly separates patients with and without heart failure for disease diagnosis. However, as a prognostic marker it has been clearly shown that patients are at a higher risk of adverse outcomes when ST2 levels are above a cutoff value of 35 ng/mL. Patients with ACS ST2 is a strong predictor of cardiovascular death and of the risk of developing new heart failure in ST Elevation Myocardial Infarction (STEMI) and NSTE-ACS patients. In patients presenting with Acute Coronary Syndrome (ACS), those in the highest quartile (above 35 ng/mL) have more than 3 times the risk of cardiovascular death and new heart failure at 30 days compared with those in the lower quartiles. At one year, there is a relative risk of 2.3 for adverse outcomes. ST2 is an active participant in the cardiac remodeling pathway and could identify which patients will respond to eplerenone or other therapies that reverse myocardial fibrosis. Clinical utility ST2 has considerable prognostic value and is used as an aid for risk stratification, identifying patients who are at high risk of mortality and rehospitalization among those diagnosed with heart failure. ST2 is independent of natriuretic peptides, such as BNP and NT-proBNP, and therefore provides unique and complementary prognostic information. ST2 is also not adversely influenced by age, impaired renal function or elevated body mass index (BMI), common confounding situations for natriuretic peptide measurements. 
Repeated measurements of ST2 may aid in clinical decision-making. The ST2 test ST2 is measured by an immunoassay, commercially marketed as the Presage ST2 Assay by Critical Diagnostics of San Diego, California. The assay has Food and Drug Administration approval and a CE Mark. References Biomarkers
ST2 cardiac biomarker
Biology
867
3,428,490
https://en.wikipedia.org/wiki/Graph%20Modelling%20Language
Graph Modeling Language (GML) is a hierarchical ASCII-based file format for describing graphs. It has also been named Graph Meta Language. Example A simple graph in GML format:
graph [
  comment "This is a sample graph"
  directed 1
  id 42
  label "Hello, I am a graph"
  node [
    id 1
    label "node 1"
    thisIsASampleAttribute 42
  ]
  node [
    id 2
    label "node 2"
    thisIsASampleAttribute 43
  ]
  node [
    id 3
    label "node 3"
    thisIsASampleAttribute 44
  ]
  edge [
    source 1
    target 2
    label "Edge from node 1 to node 2"
  ]
  edge [
    source 2
    target 3
    label "Edge from node 2 to node 3"
  ]
  edge [
    source 3
    target 1
    label "Edge from node 3 to node 1"
  ]
]
Applications supporting GML Cytoscape, an open source bioinformatics software platform for visualizing molecular interaction networks, loads and saves previously constructed interaction networks in GML. igraph, an open source network analysis library with interfaces to multiple programming languages. Gephi, an open source graph visualization and manipulation software package. Graph-tool, a free Python module for manipulation and statistical analysis of graphs. NetworkX, an open source Python library for studying complex graphs. Tulip (software), free software in the domain of information visualisation, capable of manipulating huge graphs (with more than 1,000,000 elements). yEd, a free Java-based graph editor, supports import from and export to GML. The Graphviz project includes two command-line tools (gml2gv and gv2gml) that can convert to and from the DOT file format. Wolfram Language, a general very high-level programming language, supports GML import and export. See also Graph Query Language (GQL) DGML References External links GML: A portable Graph File Format, Michael Himsolt - 2010/11/30 (archived version) Unravelling Graph-Exchange File Formats, by Matthew Roughan and Jonathan Tuke, 2015, https://arxiv.org/pdf/1503.02781.pdf Computer file formats Graph description languages
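Since several of the tools listed above, for example NetworkX, can parse GML directly, a short Python sketch of round-tripping a graph like the sample is given below; the file name is arbitrary and the snippet assumes NetworkX is installed.

import networkx as nx

# Build a small directed graph similar to the sample above.
G = nx.DiGraph(label="Hello, I am a graph")
for n in (1, 2, 3):
    G.add_node(n, label=f"node {n}")
G.add_edge(1, 2, label="Edge from node 1 to node 2")
G.add_edge(2, 3, label="Edge from node 2 to node 3")
G.add_edge(3, 1, label="Edge from node 3 to node 1")

nx.write_gml(G, "sample.gml")      # serialize to GML
H = nx.read_gml("sample.gml")      # parse it back
print(H.nodes(data=True))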
Graph Modelling Language
Mathematics
461