https://en.wikipedia.org/wiki/3-Methylhexane
3-Methylhexane is a branched hydrocarbon with two enantiomers. It is one of the isomers of heptane. The molecule is chiral, and is one of the two isomers of heptane to have this property, the other being its structural isomer 2,3-dimethylpentane. The enantiomers are (R)-3-methylhexane and (S)-3-methylhexane.
3-Methylhexane
Chemistry
https://en.wikipedia.org/wiki/Substitution%20matrix
In bioinformatics and evolutionary biology, a substitution matrix describes the frequency at which a character in a nucleotide sequence or a protein sequence changes to other character states over evolutionary time. The information is often in the form of log odds of finding two specific character states aligned and depends on the assumed number of evolutionary changes or sequence dissimilarity between compared sequences. It is an application of a stochastic matrix. Substitution matrices are usually seen in the context of amino acid or DNA sequence alignments, where they are used to calculate similarity scores between the aligned sequences.

Background

In the process of evolution, from one generation to the next the amino acid sequences of an organism's proteins are gradually altered through the action of DNA mutations. For example, the sequence ALEIRYLRD could mutate into the sequence ALEINYLRD in one step, and possibly AQEINYQRD over a longer period of evolutionary time. Each amino acid is more or less likely to mutate into various other amino acids. For instance, a hydrophilic residue such as arginine is more likely to be replaced by another hydrophilic residue such as glutamine than it is to be mutated into a hydrophobic residue such as leucine. (Here, a residue refers to an amino acid stripped of a hydrogen and/or a hydroxyl group and inserted in the polymeric chain of a protein.) This is primarily due to redundancy in the genetic code, which translates similar codons into similar amino acids. Furthermore, mutating an amino acid to a residue with significantly different properties could affect the folding and/or activity of the protein. This type of disruptive substitution is likely to be removed from populations by the action of purifying selection because the substitution has a higher likelihood of rendering a protein nonfunctional.
If we have two amino acid sequences in front of us, we should be able to say something about how likely they are to be derived from a common ancestor, or homologous. If we can line up the two sequences using a sequence alignment algorithm such that the mutations required to transform a hypothetical ancestor sequence into both of the current sequences would be evolutionarily plausible, then we would like to assign a high score to the comparison of the sequences. To this end, we will construct a 20x20 matrix where the (i, j)th entry is equal to the probability of the ith amino acid being transformed into the jth amino acid in a certain amount of evolutionary time. There are many different ways to construct such a matrix, called a substitution matrix. Here are the most commonly used ones:

Identity matrix

The simplest possible substitution matrix would be one in which each amino acid is considered maximally similar to itself, but not able to transform into any other amino acid: the 20x20 identity matrix, with ones on the diagonal and zeros elsewhere. This identity matrix will succeed in the alignment of very similar amino acid sequences but will be miserable at aligning two distantly related sequences. We need to figure out all the probabilities in a more rigorous fashion. It turns out that an empirical examination of previously aligned sequences works best.

Log-odds matrices

We express the probabilities of transformation in what are called log-odds scores. The score matrix S is defined as S(i, j) = log( p(i, j) / (q(i) q(j)) ), where p(i, j) is the probability that amino acid i transforms into amino acid j, and q(i), q(j) are the background frequencies of amino acids i and j. The base of the logarithm is not important, and the same substitution matrix is often expressed in different bases.

Example matrices

PAM

One of the first amino acid substitution matrices, the PAM (Point Accepted Mutation) matrix, was developed by Margaret Dayhoff in the 1970s. This matrix is calculated by observing the differences in closely related proteins.
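As a concrete sketch of the log-odds scoring just defined; all frequencies below are made-up illustrative numbers, not values from any published matrix:

```python
import math

# Hypothetical joint frequencies p(a, b) of seeing residue a aligned with
# residue b in trusted alignments, and background frequencies q(a).
p_RQ = 0.0008                      # arginine aligned with glutamine
p_RL = 0.0002                      # arginine aligned with leucine
q = {"R": 0.05, "Q": 0.04, "L": 0.09}

def log_odds(p_ab, q_a, q_b, base=2):
    # S(a, b) = log( p(a, b) / (q(a) * q(b)) )
    return math.log(p_ab / (q_a * q_b), base)

s_RQ = log_odds(p_RQ, q["R"], q["Q"])   # conservative substitution
s_RL = log_odds(p_RL, q["R"], q["L"])   # disruptive substitution

# The hydrophilic-to-hydrophilic pair scores higher than the
# hydrophilic-to-hydrophobic one, as the text predicts.
assert s_RQ > s_RL
```

A score above zero means the pair is seen more often than chance alignment of unrelated sequences would produce; below zero, less often.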
Because of the use of very closely related homologs, the observed mutations are not expected to significantly change the common functions of the proteins. Thus the observed substitutions (by point mutations) are considered to be accepted by natural selection. One PAM unit is defined as 1% of the amino acid positions having been changed. To create a PAM1 substitution matrix, a group of very closely related sequences with mutation frequencies corresponding to one PAM unit is chosen. Based on collected mutational data from this group of sequences, a substitution matrix can be derived. This PAM1 matrix estimates what rate of substitution would be expected if 1% of the amino acids had changed. The PAM1 matrix is used as the basis for calculating other matrices by assuming that repeated mutations would follow the same pattern as those in the PAM1 matrix, and that multiple substitutions can occur at the same site. With this assumption, the PAM2 matrix can be estimated by squaring the probabilities. Using this logic, Dayhoff derived matrices as high as PAM250. Usually the PAM30 and PAM70 matrices are used.

BLOSUM

Dayhoff's methodology of comparing closely related species turned out not to work very well for aligning evolutionarily divergent sequences. Sequence changes over long evolutionary time scales are not well approximated by compounding small changes that occur over short time scales. The BLOSUM (BLOck SUbstitution Matrix) series of matrices rectifies this problem. Henikoff & Henikoff constructed these matrices using multiple alignments of evolutionarily divergent proteins. The probabilities used in the matrix calculation are computed by looking at "blocks" of conserved sequences found in multiple protein alignments. These conserved sequences are assumed to be of functional importance within related proteins and will therefore have lower substitution rates than less conserved regions.
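The PAM extrapolation described above, where PAMn is the nth matrix power of PAM1, can be sketched on a toy three-letter alphabet; the numbers below are illustrative, not Dayhoff's:

```python
def mat_mul(a, b):
    # Plain matrix product for small square matrices.
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(m, n):
    # PAMn = PAM1 applied n times.
    result = m
    for _ in range(n - 1):
        result = mat_mul(result, m)
    return result

# Toy alphabet standing in for the 20 amino acids; row i gives the
# probabilities of what residue i has become after one PAM unit
# (rows sum to 1, so this is a stochastic matrix).
pam1 = [
    [0.990, 0.008, 0.002],
    [0.010, 0.980, 0.010],
    [0.004, 0.006, 0.990],
]

pam2 = mat_pow(pam1, 2)
pam250 = mat_pow(pam1, 250)

# Each power is still a stochastic matrix, and self-identity decays
# with evolutionary distance.
assert all(abs(sum(row) - 1.0) < 1e-9 for row in pam250)
assert pam250[0][0] < pam2[0][0] < pam1[0][0]
```

The same powering logic, applied to a real 20x20 PAM1, is how PAM250 is obtained without ever observing 250% divergence directly.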
To reduce bias from closely related sequences on substitution rates, segments in a block with a sequence identity above a certain threshold were clustered, reducing the weight of each such cluster (Henikoff and Henikoff). For the BLOSUM62 matrix, this threshold was set at 62%. Pair frequencies were then counted between clusters, hence pairs were only counted between segments less than 62% identical. One would use a higher-numbered BLOSUM matrix for aligning two closely related sequences and a lower-numbered one for more divergent sequences. It turns out that the BLOSUM62 matrix does an excellent job detecting similarities in distant sequences, and this is the matrix used by default in most recent alignment applications such as BLAST.

Differences between PAM and BLOSUM

PAM matrices are based on an explicit evolutionary model (i.e. replacements are counted on the branches of a phylogenetic tree: maximum parsimony), whereas the BLOSUM matrices are based on an implicit model of evolution. The PAM matrices are based on mutations observed throughout a global alignment; this includes both highly conserved and highly mutable regions. The BLOSUM matrices are based only on highly conserved regions in series of alignments that are not allowed to contain gaps. The method used to count the replacements is different: unlike the PAM matrix, the BLOSUM procedure uses groups of sequences within which not all mutations are counted the same. Higher numbers in the PAM matrix naming scheme denote larger evolutionary distance, while larger numbers in the BLOSUM matrix naming scheme denote higher sequence similarity and therefore smaller evolutionary distance. Example: PAM150 is used for more distant sequences than PAM100; BLOSUM62 is used for closer sequences than BLOSUM50.

Newer matrices

A number of newer substitution matrices have been proposed to deal with inadequacies in earlier designs. JTT, published in the same year as BLOSUM, also performs clustering and uses an implicit model.
This may help reduce the systematic error from maximum parsimony (MP), but it also wastes sequence information. WAG (Whelan And Goldman), published in 2001, uses a maximum likelihood estimation procedure instead of any form of MP. The substitution scores are calculated based on the likelihood of a change considering multiple tree topologies derived using neighbor-joining. The scores correspond to a substitution model that also includes amino-acid stationary frequencies and a scaling factor in the similarity scoring. There are two versions of the matrix: the WAG matrix, based on the assumption of the same amino-acid stationary frequencies across all the compared proteins, and the WAG* matrix, with different frequencies for each of the included protein families.

Specialized substitution matrices and their extensions

The real substitution rates in a protein depend not only on the identity of the amino acid, but also on the specific structural or sequence context it is in. Many specialized matrices have been developed for these contexts, such as for transmembrane alpha helices, for combinations of secondary structure states and solvent accessibility states, or for local sequence-structure contexts. These context-specific substitution matrices lead to generally improved alignment quality at some cost of speed but are not yet widely used. Recently, sequence context-specific amino acid similarities have been derived that do not need substitution matrices but rely on a library of sequence contexts instead. Using this idea, a context-specific extension of the popular BLAST program (CS-BLAST) has been demonstrated to achieve a twofold sensitivity improvement for remotely related sequences over BLAST at similar speeds.

Terminology

Although "transition matrix" is often used interchangeably with "substitution matrix" in fields other than bioinformatics, the former term is problematic in bioinformatics.
With regard to nucleotide substitutions, "transition" is also used to indicate those substitutions that are between the two-ring purines (A → G and G → A) or between the one-ring pyrimidines (C → T and T → C). Because these substitutions do not require a change in the number of rings, they occur more frequently than the other substitutions. "Transversion" is the term used to indicate the slower-rate substitutions that change a purine to a pyrimidine or vice versa (A ↔ C, A ↔ T, G ↔ C, and G ↔ T).

See also: Models of DNA evolution, Substitution model
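The transition/transversion distinction above reduces to a check on whether the two bases share a ring count, which can be written as a small classifier:

```python
PURINES = {"A", "G"}        # two-ring bases
PYRIMIDINES = {"C", "T"}    # one-ring bases

def classify(ref, alt):
    # A substitution within the purines or within the pyrimidines keeps
    # the ring count and is a transition; crossing between the two groups
    # is a transversion.
    if ref == alt:
        raise ValueError("not a substitution")
    same_ring_count = ({ref, alt} <= PURINES) or ({ref, alt} <= PYRIMIDINES)
    return "transition" if same_ring_count else "transversion"

assert classify("A", "G") == "transition"
assert classify("C", "T") == "transition"
assert classify("A", "C") == "transversion"
assert classify("G", "T") == "transversion"
```

Note that of the 12 possible substitutions, only 4 are transitions, yet transitions typically outnumber transversions in real data, which is why the two rates are modeled separately.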
Substitution matrix
Mathematics,Engineering,Biology
https://en.wikipedia.org/wiki/Bhattacharyya%20distance
In statistics, the Bhattacharyya distance is a quantity which represents a notion of similarity between two probability distributions. It is closely related to the Bhattacharyya coefficient, which is a measure of the amount of overlap between two statistical samples or populations. It is not a metric, despite being named a "distance", since it does not obey the triangle inequality.

History

Both the Bhattacharyya distance and the Bhattacharyya coefficient are named after Anil Kumar Bhattacharyya, a statistician who worked in the 1930s at the Indian Statistical Institute. He developed the measure through a series of papers. He first developed the method to measure the distance between two non-normal distributions and illustrated it with classical multinomial populations; this work, despite being submitted for publication in 1941, appeared almost five years later in Sankhya. He then worked toward a distance measure for probability distributions that are absolutely continuous with respect to the Lebesgue measure, publishing his progress in 1942 in the Proceedings of the Indian Science Congress; the final work appeared in 1943 in the Bulletin of the Calcutta Mathematical Society.

Definition

For discrete probability distributions P and Q on the same domain X, the Bhattacharyya distance is defined as D_B(P, Q) = -ln BC(P, Q), where BC(P, Q) = sum over x in X of sqrt(P(x) Q(x)) is the Bhattacharyya coefficient. For continuous probability distributions with probability density functions p and q, the Bhattacharyya coefficient is defined as BC(p, q) = integral of sqrt(p(x) q(x)) dx. More generally, given two probability measures P and Q on a measurable space (X, B), let lambda be a (sigma-finite) measure such that P and Q are absolutely continuous with respect to lambda, i.e. such that P(dx) = p(x) lambda(dx) and Q(dx) = q(x) lambda(dx) for probability density functions p, q with respect to lambda defined lambda-almost everywhere. Such a measure, even such a probability measure, always exists, e.g. lambda = (P + Q)/2.
Then define the Bhattacharyya measure on (X, B) by bc(dx) = sqrt(p(x) q(x)) lambda(dx). It does not depend on the choice of lambda: if we pick a measure mu such that lambda and another choice lambda' are absolutely continuous with respect to mu, i.e. lambda(dx) = l(x) mu(dx) and lambda'(dx) = l'(x) mu(dx), then the densities of P and Q rescale by the same factors, so sqrt(p(x) q(x)) lambda(dx) and the corresponding expression for lambda' agree mu-almost everywhere. We finally define the Bhattacharyya coefficient BC(P, Q) = integral over X of sqrt(p(x) q(x)) lambda(dx). By the above, this quantity does not depend on lambda, and by the Cauchy-Schwarz inequality 0 <= BC(P, Q) <= 1. The Bhattacharyya distance is then D_B(P, Q) = -ln BC(P, Q).

Gaussian case

Let P = N(mu_p, sigma_p^2) and Q = N(mu_q, sigma_q^2), where N(mu, sigma^2) is the normal distribution with mean mu and variance sigma^2; then D_B(P, Q) = (mu_p - mu_q)^2 / (4 (sigma_p^2 + sigma_q^2)) + (1/2) ln( (sigma_p^2 + sigma_q^2) / (2 sigma_p sigma_q) ). And in general, for two multivariate normal distributions P_i = N(mu_i, Sigma_i), i = 1, 2, D_B = (1/8) (mu_1 - mu_2)^T Sigma^{-1} (mu_1 - mu_2) + (1/2) ln( det Sigma / sqrt(det Sigma_1 det Sigma_2) ), where Sigma = (Sigma_1 + Sigma_2)/2. Note that the first term is a squared Mahalanobis distance.

Properties

0 <= BC <= 1 and 0 <= D_B <= infinity. D_B does not obey the triangle inequality, though the Hellinger distance sqrt(1 - BC) does.

Bounds on Bayes error

The Bhattacharyya distance can be used to upper and lower bound the Bayes error rate L of a two-class problem with prior probabilities p(w1) and p(w2): (1/2)(1 - sqrt(1 - 4 p(w1) p(w2) BC^2)) <= L <= sqrt(p(w1) p(w2)) BC.

Applications

The Bhattacharyya coefficient quantifies the "closeness" of two random statistical samples. Given two sequences of samples drawn from distributions p and q, bin them into n buckets, let the frequency of samples from p in bucket i be p_i, and similarly q_i for q; then the sample Bhattacharyya coefficient is BC = sum over i from 1 to n of sqrt(p_i q_i), which is an estimator of BC(p, q). The quality of the estimate depends on the choice of buckets; too few buckets would overestimate BC, while too many would underestimate it. A common task in classification is estimating the separability of classes. Up to a multiplicative factor, the squared Mahalanobis distance is a special case of the Bhattacharyya distance when the two classes are normally distributed with the same variances. When two classes have similar means but significantly different variances, the Mahalanobis distance would be close to zero, while the Bhattacharyya distance would not be. The Bhattacharyya coefficient is used in the construction of polar codes. The Bhattacharyya distance is used in feature extraction and selection, image processing, speaker recognition, phone clustering, and in genetics.
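A minimal numeric sketch of the discrete coefficient, the distance, and the univariate Gaussian formula above; the distributions are illustrative:

```python
import math

def bc_discrete(p, q):
    # BC(P, Q) = sum over x of sqrt(P(x) * Q(x))
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def bhattacharyya_distance(p, q):
    # D_B(P, Q) = -ln BC(P, Q)
    return -math.log(bc_discrete(p, q))

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    # Univariate normal case:
    # D_B = (mu1 - mu2)^2 / (4 (var1 + var2))
    #       + (1/2) ln( (var1 + var2) / (2 sqrt(var1 var2)) )
    return ((mu1 - mu2) ** 2) / (4 * (var1 + var2)) \
        + 0.5 * math.log((var1 + var2) / (2 * math.sqrt(var1 * var2)))

p = [0.2, 0.5, 0.3]
q = [0.3, 0.4, 0.3]

bc = bc_discrete(p, q)
assert 0.0 < bc <= 1.0                                   # Cauchy-Schwarz bound
assert bhattacharyya_distance(p, q) > 0.0
assert math.isclose(bhattacharyya_distance(p, p), 0.0, abs_tol=1e-12)

# Equal means but very different variances: the Mahalanobis-like first
# term vanishes, yet the Bhattacharyya distance stays strictly positive.
assert math.isclose(bhattacharyya_gaussian(0.0, 1.0, 0.0, 1.0), 0.0, abs_tol=1e-12)
assert bhattacharyya_gaussian(0.0, 1.0, 0.0, 25.0) > 0.0
```

The last two assertions reproduce the separability point made under Applications: two classes with the same mean are indistinguishable to the Mahalanobis distance but not to the Bhattacharyya distance.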
See also: Bhattacharyya angle, Kullback–Leibler divergence, Hellinger distance, Mahalanobis distance, Chernoff bound, Rényi entropy, F-divergence, Fidelity of quantum states

References:
Nielsen, F.; Boltz, S. (2010). "The Burbea–Rao and Bhattacharyya centroids". IEEE Transactions on Information Theory. 57 (8): 5455–5466.
Kailath, T. (1967). "The Divergence and Bhattacharyya Distance Measures in Signal Selection". IEEE Transactions on Communication Technology. 15 (1): 52–60.
Djouadi, A.; Snorrason, O.; Garber, F. (1990). "The quality of Training-Sample estimates of the Bhattacharyya coefficient". IEEE Transactions on Pattern Analysis and Machine Intelligence. 12 (1): 92–97.
Bhattacharyya distance
Physics
https://en.wikipedia.org/wiki/Oil%20and%20natural%20gas%20refining%20in%20Turkmenistan
Turkmenistan, a country rich in both oil and natural gas, has developed a sub-sector of its economy devoted to refining these two fossil fuels.

Oil refining

There are two oil refineries in Turkmenistan, located in the cities of Türkmenbaşy and Seýdi. The Türkmenbaşy oil refinery is the larger of the two, with a capacity of more than 10 million tons of oil per year. The refinery produces a range of products, including unleaded gasoline, petroleum coke, asphalt, laundry detergent, hydro-treated diesel, and lube oil. The Turkmen government has demonstrated interest in attracting foreign investment to build factories producing end-user petroleum-based products such as detergents and tires. The refinery reports that its products are exported to Russia, China, Iran, Afghanistan, Turkey, Pakistan, Tajikistan, and Japan. Turkmenistan has invested US$900 million in a number of projects designed to help increase the country's refining capacity by 95 percent by 2030. The projects include the construction of facilities for coking (carbonization) and tar de-asphalting, with annual capacities of 900,000 and 500,000 tons, respectively. The government of Turkmenistan also constructed a facility to produce asphalt with an annual capacity of 38,000 tons, as well as a facility to produce polypropylene film and an oil refinery with a capacity of 3 billion tons per year. Turkmenistan has commissioned a feasibility study regarding the construction of a new oil refinery in its Balkan province.

Natural gas refining

Seismic and geological research conducted in the Caspian Sea with the help of the United States in 2000 indicated the presence of 11 billion tons of oil and gas off the coast of the country. Historically, Turkmenistan has been a heavy exporter of natural gas, exporting nearly 80% of the raw material it produces. Since the 2010s, however, the country has faced increasing difficulty in growing its natural gas exports.
In response, the government plans to refine natural gas into chemicals such as methanol, synthetic rubber, and materials for paint. In October 2018, the government opened a chemical facility in the Balkan Region of Turkmenistan to produce polyethylene and polypropylene, under contract with LG International, Toyo Engineering, and Hyundai Engineering. The facility cost US$3.4 billion to build and is capable of converting natural gas into 81,000 tons of polypropylene and 386,000 tons of polyethylene every year. The government also built a facility in the Ahal Region of Turkmenistan to convert natural gas to gasoline. It was first conceived in 2013 under a contract including Türkmengaz, Rönesans Türkmen, and Kawasaki Heavy Industries; it cost US$1.7 billion to build and opened in 2019. The plant is capable of converting natural gas into 600,000 tons of A-92 gasoline every year. In addition to gasoline, the government has sought to convert natural gas to liquid petroleum. A new facility was announced in April 2016 by the Turkmenistan Ministry of Oil & Gas, to be created under a contract that includes South Korean LG International Corp., Hyundai Engineering Co., and the Japanese Itochu Corporation. It is projected to convert natural gas into 1.1 million tons of diesel fuel and over 400,000 tons of naphtha every year. The country plans to build additional gas-to-liquids plants in the coming years. For decades, Turkmenistan had a dispute with Azerbaijan regarding the ownership of a large oil and gas block located inside the exclusive economic zones of both countries in the Caspian Sea. In 2021, the two countries signed a memorandum of understanding in Ashgabat to conduct joint exploration and field development activities. Following the 2022 Russian invasion of Ukraine and the subsequent gas dispute, Turkmenistan was considered as an alternative supplier to Europe. In July 2022, Turkmenistan started extraction at a new gas plant in the Mary Region.
Oil and natural gas refining in Turkmenistan
Chemistry
https://en.wikipedia.org/wiki/Bubble%20gum
Bubble gum (or bubblegum) is a type of chewing gum designed to be inflated out of the mouth as a bubble.

Composition

In modern chewing gum, if natural rubber such as chicle is used, it must pass several purity and cleanliness tests. However, most modern types of chewing gum use synthetic gum-base materials. These materials allow for longer-lasting flavor, a softer texture, and a reduction in tackiness.

Mechanical properties

As a type of chewing gum based on long-chain polymers, bubble gum typically exhibits both linear and nonlinear viscoelastic behavior. The distinct deformations under chewing are therefore affected by the shear rate, shear strain, and shear stress applied through the teeth. Characterizing the intrinsic rheological properties of chewing gums is accordingly helpful for future improvement and optimization of commercial products' texture and chewiness. The linear viscoelastic (LVE) property can be probed on pre-shaped gum cuds through a small isothermal strain deformation (i.e., below the yield strain) under small amplitude oscillatory shear (SAOS). Here the critical yield strain is defined as the strain at which the modulus deviates about 10% from its initial value. Below it, gum cuds show elastic deformation that follows power-law behavior, like a critical gel in the linear regime; beyond it, they exhibit nonlinear responses with increasing shear stress (plasticity). Normally, this yield strain is less than 1%. Regarding plastic deformation, the nonlinear viscoelasticity can be explored through shear creep experiments (relaxation time) and the start-up of steady shear stress-controlled uniaxial/biaxial extension. The former demonstrates that the fractional recovery, defined as the ratio of the strain recovered once the shear stress is removed to the strain measured under deformation, is between 25% and 40% for chewing gums under moderate shear stress (~1000 Pa).
This relatively high fractional recovery (the ability to recover its previous shape) is consistent with providing a satisfying sensory feel. By contrast, bubble gums show a fractional recovery lower than 15%. Bubble gums can therefore withstand more substantial stresses before break-up than normal chewing gums. This distinction is mainly due to their purposeful design, which allows them to form and maintain large, stable bubbles when blown up through sizeable shear stress on the tongue. The stretching experiment shows gum cuds exhibiting strain hardening during uniaxial extension. In particular, the LVE regime is absent when a constant Hencky strain rate is applied, like the plastic flow in polycrystals or polymers. Moreover, different Hencky strain rates lead either to extensional viscosity plateaus followed by sagging (macroscopic failure) at a low strain rate, or to necking (strain hardening) at a high strain rate. Typically, the strain softening at a low strain rate reflects the disintegration of brittle networks within the gum. In contrast, the nonuniform deformation of polymers and strain-induced crystallization explain the strain-hardening behavior at a high strain rate.

History

In 1928, Walter Diemer, an accountant for the Fleer Chewing Gum Company in Philadelphia, was experimenting with new gum recipes. One recipe, based on a formula for a chewing gum called "Blibber-Blubber", was found to be less sticky than regular chewing gum and stretched more easily. This gum became highly successful and was eventually named Dubble Bubble by the president of Fleer because of its stretchy texture. It remained the dominant brand of bubble gum until after WWII, when Bazooka bubble gum entered the market. Until the 1970s, bubble gum still tended to stick to one's face as a bubble popped. At that time, synthetic bubble gum was introduced, which would almost never stick. The first brands in the US to use these new synthetic gum bases were Hubba Bubba and Bubble Yum.
Bubble gum got its distinctive pink color because the original recipe Diemer worked on produced a dingy gray colored gum, so he added red dye (diluted to pink), as that was the only dye he had on hand at the time.

Flavors

In taste tests, children tend to prefer strawberry and blue raspberry flavors, rejecting more complex flavors, as they say these make them want to swallow the gum rather than continue chewing.

Bubble gum flavor

While there is a bubble gum "flavor" – which various artificial flavorings including esters are mixed to obtain – it varies from one company to another. Esters used in synthetic bubble gum flavoring may include methyl salicylate, ethyl butyrate, benzyl acetate, amyl acetate or cinnamic aldehyde. A natural bubble gum flavoring can be produced by combining banana, pineapple, cinnamon, cloves, and wintergreen. Vanilla, cherry, lemon, and orange oil have also been suggested as ingredients.

Records

In 1996, Susan Montgomery Williams of Fresno, California, set the Guinness World Record for the largest bubble gum bubble ever blown. However, Chad Fell holds the record for "Largest Hands-free Bubblegum Bubble", achieved on 24 April 2004.

Tourism

Bubblegum Alley is a tourist attraction in downtown San Luis Obispo, California, known for its accumulation of used bubble gum on the walls of an alley. The Market Theater Gum Wall is a brick wall covered in used chewing gum, located in an alleyway in Post Alley under Pike Place Market in Downtown Seattle.

See also: Functional gum, Gumball machine, Gum base, Gum industry, Inca Kola, List of chewing gum brands
Bubble gum
Chemistry
https://en.wikipedia.org/wiki/Ampelmann%20system
The Ampelmann system is an offshore personnel transfer system developed by a company founded in 2008 as a spin-off of the Delft University of Technology. The motion-compensation platform allows access from a moving vessel to offshore structures, even in high wave conditions, transferring offshore crew from various types of vessels to offshore oil & gas platforms, offshore wind turbines, and other fixed and floating structures at sea.

Ampelmann technology

Accessing any offshore structure can be problematic due to the movement of the vessel relative to the structure. The Ampelmann eliminates this relative motion by taking instant measurements of the ship's motions and compensating for them using a Stewart platform. This means that the top of the Ampelmann remains completely stationary relative to the structure. The offshore gangway can then be extended towards the structure, so personnel can walk to work offshore safely, even in high wave conditions. The system operates at a maximum wind speed of 20 m/s (38 knots). Besides transferring people, the system can also be used for cargo transfers of up to 1,000 kg.

Clients

The customers that use this system mainly operate in the offshore oil & gas industry and the offshore wind industry. They use the system to enable their employees to perform maintenance on offshore wind turbines or to work on offshore oil rigs. Both market segments are growing at a rapid pace; the offshore wind sector, for example, grew 54% in 2009. This growth has mainly been driven by the shift toward more sustainable forms of energy generation by governments all over the world. Ampelmann's customer relationships are strictly business-to-business (B2B): platforms are usually owned by private companies, and wind parks are always run by private companies. There is no independent intermediary between Ampelmann and the user of the system.
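The compensation principle can be reduced to a sign relationship: the platform top is driven to the inverse of the measured vessel motion in all six degrees of freedom. The sketch below is an idealized model (real systems use sensor filtering and six hydraulic actuators, none of which is modeled here):

```python
def compensation_setpoint(vessel_motion):
    # Command the platform top to the inverse of the measured vessel
    # motion so the top stays stationary relative to the structure.
    return {dof: -amount for dof, amount in vessel_motion.items()}

# Hypothetical instantaneous vessel motion (metres and radians).
motion = {"surge": 0.40, "sway": -0.20, "heave": 1.10,
          "roll": 0.03, "pitch": -0.02, "yaw": 0.01}
setpoint = compensation_setpoint(motion)

# In this idealized model the net motion of the platform top is zero.
residual = {dof: motion[dof] + setpoint[dof] for dof in motion}
assert all(abs(r) < 1e-12 for r in residual.values())
```

In practice the compensation quality is bounded by sensor latency and actuator stroke, which is why the system carries an operational wave and wind limit.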
Ampelmann system
Engineering
https://en.wikipedia.org/wiki/Order%20management%20system
An order management system, or OMS, is a computer software system used in a number of industries for order entry and processing.

Electronic commerce and catalogers

Orders can be received from businesses, consumers, or a mix of both, depending on the products. Offers and pricing may be done via catalogs, websites, or broadcast network advertisements.

An integrated order management system may encompass these modules:
Product information (descriptions, attributes, locations, quantities)
Inventory available to promise (ATP) and sourcing
Vendors, purchasing, and receiving
Marketing (catalogs, promotions, pricing)
Customers and prospects
Order entry and customer service (including returns and refunds)
Financial processing (credit cards, billing, payment on account)
Order processing (selection, printing, picking, packing, shipping)

There are several business domains which use an OMS for different purposes, but the core reasons remain the same:
Telecom – to keep track of customers, accounts, credit verification, product delivery, billing, etc.
Retail – large retail companies use an OMS to keep track of customer orders, maintain stock levels, handle packaging and shipping, and synchronize orders across various channels, for example when a customer orders online and picks up in store
Pharmaceuticals and healthcare
Automotive – to keep track of parts sourced through OEMs
Financial services

Order management requires multiple steps in a sequential process: capture, validation, fraud check, payment authorization, sourcing, backorder management, pick, pack, ship, and the associated customer communications. Order management systems usually have workflow capabilities to manage this process.

Financial securities

Another use for order management systems is as a software-based platform that facilitates and manages the order execution of securities, typically through the FIX protocol.
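The sequential order workflow described above can be sketched as a simple state machine; the state names follow the steps listed in the text, and the structure is illustrative rather than any particular vendor's design:

```python
from enum import Enum, auto

class OrderState(Enum):
    CAPTURED = auto()
    VALIDATED = auto()
    FRAUD_CHECKED = auto()
    PAYMENT_AUTHORIZED = auto()
    SOURCED = auto()
    PICKED = auto()
    PACKED = auto()
    SHIPPED = auto()

WORKFLOW = list(OrderState)   # the allowed sequential progression

def advance(state):
    # Move an order one step forward in the workflow.
    i = WORKFLOW.index(state)
    if i == len(WORKFLOW) - 1:
        raise ValueError("order already shipped")
    return WORKFLOW[i + 1]

state = OrderState.CAPTURED
steps = 0
while state is not OrderState.SHIPPED:
    state = advance(state)
    steps += 1
assert state is OrderState.SHIPPED and steps == 7
```

A real OMS layers branching on top of this spine, e.g. a failed fraud check or a backorder diverts the order out of the happy path, which is what the workflow engine manages.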
Order management systems, sometimes known in the financial markets as trade order management systems, are used on both the buy-side and the sell-side, although the functionality provided by buy-side and sell-side OMSs differs slightly. Typically only exchange members can connect directly to an exchange, which means that a sell-side OMS usually has exchange connectivity, whereas a buy-side OMS is concerned with connecting to sell-side firms.

Buy-side vs sell-side

An OMS allows firms to input orders to the system for routing to pre-established destinations. It also allows firms to change, cancel, and update orders. When an order is executed on the sell-side, the sell-side OMS must then update its state and send an execution report to the order's originating firm. An OMS should also allow firms to access information on orders entered into the system, including detail on all open orders and on previously completed orders. The development of multi-asset functionality is a pressing concern for firms developing OMS software. The OMS supports portfolio management by translating intended asset allocation actions into marketable orders for the buy-side. This typically falls into four categories:
Re-balance – the periodic reallocation of a fund's asset allocation / market exposures to correct for market valuation fluctuations and cash flows
Tracking – periodic adjustment to align an index fund or SMA with its target
Discretionary – ad hoc reallocation initiated by portfolio managers and analysts
Tactical asset allocation (TAA) – reallocation made to capture temporary inefficiencies

How an OMS works

Changes in positional allocation often affect multiple accounts, creating hundreds or thousands of small orders, which are typically grouped into aggregate market orders and crossing orders to avoid the legitimate fear of front running. When a reallocation involves contradictory operations, trade crossing can sometimes be done.
Crossing orders involve moving shares and cash between internal accounts, and then potentially publishing the resulting "trade" to the listing exchange. Aggregate orders, on the other hand, are traded together. The details of which accounts are participating in the market order are sometimes divulged with the trade (OTC trading) or post-execution (FIX and/or through the settlement process). In some circumstances, such as equities in the United States, an average price for the aggregate market order can be applied to all of the shares allocated to the individual accounts which participated in it. In other circumstances, such as futures or Brazilian markets, each account must be allocated the specific prices at which its portion of the market order was executed. Identifying the price that each account received from the aggregate market order is the regulated and scrutinized post-trade process of trade allocation. Some order management systems go a step further in their trade allocation process by providing tax lot assignment. Because investment managers are graded on unrealized profit & loss, while the investor must pay capital gains tax on realized profit & loss, it is often beneficial for the investor that the exact shares/contracts used in a closing trade be carefully chosen. For example, selling older shares rather than newly acquired shares may reduce the effective tax rate. Because this information does not need to be finalized until capital gains are to be paid or taxes are to be filed, OMS tax lot assignments are usually considered tentative; the tax lot assignments remade or recorded within the accounting system are considered definitive.

Compliance

An OMS is a data-rich source of information which is able to communicate with the front and back office systems (or modules, in the case of a single-platform software).
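The US-equities style of average-price allocation described above can be sketched as follows; the fills and account names are hypothetical:

```python
def allocate_average_price(fills, accounts):
    # fills: list of (shares, price) partial executions of the aggregate
    # market order; accounts: {account: shares requested}. Every account
    # receives the volume-weighted average price.
    total_shares = sum(s for s, _ in fills)
    assert total_shares == sum(accounts.values())
    avg_price = sum(s * p for s, p in fills) / total_shares
    return {acct: (shares, avg_price) for acct, shares in accounts.items()}

fills = [(300, 10.0), (700, 10.5)]            # two partial executions
accounts = {"fund_a": 600, "fund_b": 400}     # participating accounts

allocation = allocate_average_price(fills, accounts)
# VWAP = (300*10.0 + 700*10.5) / 1000 = 10.35, applied to both accounts.
assert allocation["fund_a"] == (600, 10.35)
assert allocation["fund_b"] == (400, 10.35)
```

In markets requiring specific-price allocation, the same post-trade step would instead map individual fills to individual accounts; tax lot assignment similarly selects which specific lots a closing trade consumes.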
Prices, number of shares, volumes, date, time, financial instrument, share class and exchange are all key data values which allow the asset/investment manager to maintain an accurate and true positional view of the portfolio, ensuring all investment guidelines are met and any potential breaches are avoided or resolved in a timely manner. Guidelines between the investor and investment manager are stated in the Investment Policy Statement (IPS), and can be understood as constraints on the asset allocation of the portfolio to ensure the manager does not drift from the stated investment strategy over time in an attempt at TAA. For example, an agreed guideline may include that a set portion of the portfolio should consist of cash and cash equivalents to maintain liquidity levels. Reporting An outcome of an OMS successfully communicating to an asset manager's systems is the ease of producing accurate and timely reporting. All data can be seamlessly interpreted to create valuable information about the portfolio's performance and composition, as well as investment activities, fees and cash flows to a granular level. As investors are demanding increasingly detailed and frequent reporting, an asset manager can benefit from the correct set up of an OMS to deliver information whilst focusing on core activities. Increasing financial regulations are also causing managers to allocate more resources to ensure, firstly, that they are able to obtain the correct data on their trades and, secondly, that they are compliant with the new metrics. For example, if a predetermined percent of the portfolio can hold a certain asset class or risk exposure to the asset class or market, the investment manager must be able to report that this was satisfied during the reporting period. Types Order management systems can be standalone systems like Multiorders or modules of ERP and SCM systems such as Oracle, Megaventory, NetSuite, Ordoro, Fishbowl or Cloud Commerce Pro.
Another distinction is whether the system is on-premises or cloud-based software. On-premises ERP solutions are installed locally on a company's own computers and servers and managed by its own IT staff, while cloud software is hosted on the vendor's servers and accessed through a web browser. Order management systems for financial securities can also be used as standalone systems or as modules of a portfolio management system (PMS) to process trade orders simultaneously across a number of funds; sharing the IT infrastructure lowers operational risk. See also Execution management system Order fulfillment Purchase order Sales order References Enterprise resource planning terminology Electronic trading systems
Order management system
Technology
1,560
621,132
https://en.wikipedia.org/wiki/Priority%20inheritance
In real-time computing, priority inheritance is a method for eliminating unbounded priority inversion. Using this programming method, a process scheduling algorithm increases the priority of a process (A) to the maximum priority of any other process waiting for any resource on which A has a resource lock (if it is higher than the original priority of A). The basic idea of the priority inheritance protocol is that when a job blocks one or more high-priority jobs, it ignores its original priority assignment and executes its critical section at an elevated priority level. After executing its critical section and releasing its locks, the process returns to its original priority level. Example Consider three jobs: H (high priority), M (medium priority) and L (low priority). Suppose that both H and L require some shared resource. If L acquires this shared resource (entering a critical section), and H subsequently requires it, H will block until L releases it (leaving its critical section). Without priority inheritance, process M could preempt process L during the critical section and delay its completion, in effect causing the medium-priority process M to indirectly preempt the high-priority process H. This is a priority inversion bug. With priority inheritance, L will execute its critical section at H's high priority whenever H is blocked on the shared resource. As a result, M will be unable to preempt L and will be blocked. That is, the medium-priority job M must wait for the critical section of the lower-priority job L to be executed, because L has inherited H's priority. When L exits its critical section, it regains its original (low) priority and awakens H (which was blocked by L). H, having high priority, preempts L and runs to completion. This enables M and L to resume in succession and run to completion without priority inversion.
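The protocol described above can be sketched as a toy scheduler model. This is illustrative only: the names `Task` and `PILock` are invented here, the inheritance is non-transitive, and real implementations do this inside the kernel's mutex code.

```python
class Task:
    """A schedulable job with a base and an effective priority."""
    def __init__(self, name, priority):
        self.name = name
        self.base_priority = priority
        self.priority = priority  # effective priority; may be boosted

class PILock:
    """A mutex sketch implementing (non-transitive) priority inheritance."""
    def __init__(self):
        self.owner = None
        self.waiters = []

    def acquire(self, task):
        if self.owner is None:
            self.owner = task
            return True  # lock taken; task enters its critical section
        # Task blocks; the owner inherits the waiter's priority if higher.
        self.waiters.append(task)
        self.owner.priority = max(self.owner.priority, task.priority)
        return False

    def release(self):
        # The owner leaves its critical section and drops back to base priority.
        self.owner.priority = self.owner.base_priority
        # Hand the lock to the highest-priority waiter, if any.
        self.waiters.sort(key=lambda t: t.priority, reverse=True)
        self.owner = self.waiters.pop(0) if self.waiters else None
        return self.owner

# The scenario from the example: L (low) holds the lock, then H (high) blocks on it.
L, M, H = Task("L", 1), Task("M", 2), Task("H", 3)
lock = PILock()
lock.acquire(L)   # L enters its critical section
lock.acquire(H)   # H blocks; L is boosted to priority 3, so M cannot preempt it
```

In a POSIX environment the same behaviour is requested with a mutex attribute protocol of PTHREAD_PRIO_INHERIT rather than hand-rolled as here.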
Operating systems supporting priority inheritance ERIKA Enterprise FreeRTOS Microsoft Azure RTOS, formerly Express Logic's ThreadX Linux VxWorks iRMX See also Priority ceiling protocol References External links "Priority Inheritance: The Real Story" by Doug Locke "Against Priority Inheritance" by Victor Yodaiken "Implementing Concurrency Control With Priority Inheritance in Real-Time CORBA" by Steven Wohlever, Victor Fay Wolfe and Russell Johnston "Priority Inheritance Spin Locks for Multiprocessor Real-Time Systems" by Cai-Dong Wang, Hiroaki Takada and Ken Sakamura "Hardware Support for Priority Inheritance" by Bilge E. S. Akgul, Vincent J. Mooney, Henrik Thane and Pramote Kuacharoen Real-time computing Concurrency control
Priority inheritance
Technology
520
5,574,547
https://en.wikipedia.org/wiki/Cathepsin%20C
Cathepsin C (CTSC) also known as dipeptidyl peptidase I (DPP-I) is a lysosomal exo-cysteine protease belonging to the peptidase C1 protein family, a subgroup of the cysteine cathepsins. In humans, it is encoded by the CTSC gene. Function Cathepsin C appears to be a central coordinator for activation of many serine proteases in immune/inflammatory cells. Cathepsin C catalyses excision of dipeptides from the N-terminus of protein and peptide substrates, except if (i) the amino group of the N-terminus is blocked, (ii) the site of cleavage is on either side of a proline residue, (iii) the N-terminal residue is lysine or arginine, or (iv) the structure of the peptide or protein prevents further digestion from the N-terminus. Structure The cDNAs encoding rat, human, murine, bovine, dog and two Schistosome cathepsin Cs have been cloned and sequenced and show that the enzyme is highly conserved. The human and rat cathepsin C cDNAs encode precursors (prepro-cathepsin C) comprising signal peptides of 24 residues, pro-regions of 205 (rat cathepsin C) or 206 (human cathepsin C) residues and catalytic domains of 233 residues which contain the catalytic residues and are 30-40% identical to the mature amino acid sequences of papain and a number of other cathepsins including cathepsins B, H, K, L, and S. The translated prepro-cathepsin C is processed into the mature form by at least four cleavages of the polypeptide chain. The signal peptide is removed during translocation or secretion of the pro-enzyme (pro-cathepsin C) and a large N-terminal proregion fragment (also known as the exclusion domain), which is retained in the mature enzyme, is separated from the catalytic domain by excision of a minor C-terminal part of the pro-region, called the activation peptide. A heavy chain of about 164 residues and a light chain of about 69 residues are generated by cleavage of the catalytic domain.
Unlike the other members of the papain family, mature cathepsin C consists of four subunits, each composed of the N-terminal proregion fragment, the heavy chain and the light chain. Both the pro-region fragment and the heavy chain are glycosylated. Clinical significance Defects in the encoded protein have been shown to be a cause of Papillon-Lefevre disease, an autosomal recessive disorder characterized by palmoplantar keratosis and periodontitis. Cathepsin C functions as a key enzyme in the activation of granule serine peptidases in inflammatory cells, such as elastase and cathepsin G in neutrophils and chymase and tryptase in mast cells. In many inflammatory diseases, such as rheumatoid arthritis, chronic obstructive pulmonary disease (COPD), inflammatory bowel disease, asthma, sepsis, and cystic fibrosis, a significant portion of the pathogenesis is caused by increased activity of some of these inflammatory proteases. Once activated by cathepsin C, the proteases are capable of degrading various extracellular matrix components, which can lead to tissue damage and chronic inflammation. References Further reading External links The MEROPS online database for peptidases and their inhibitors: C01.070 Proteins Proteases EC 3.4.14 Cathepsins
Cathepsin C
Chemistry
783
53,869,832
https://en.wikipedia.org/wiki/Limits%20of%20stability
Limits of Stability (LoS) are a concept in balance and stability, defined as the points at which the center of gravity (CoG) approaches the limits of the base of support (BoS) and requires a corrective strategy to bring the center of mass (CoM) back within the BoS. In simpler terms, LoS represents the maximum distance an individual can intentionally sway in any direction without losing balance or needing to take a step. The typical range of stable swaying is approximately 12.5° in the front-back (antero-posterior) direction and 16° in the side-to-side (medio-lateral) direction. This stable swaying area is often referred to as the 'Cone of Stability', which varies depending on the specific task being performed. When the CoG moves beyond the BoS, the individual must take a step or hold onto an external support to maintain balance and prevent a fall. These stability limits are perceived rather than solely physiological; they represent the subject's readiness to adjust their CoG position. Clinical significance Limits of Stability (LoS) is a significant variable in assessing stability and voluntary motor control in dynamic states. It provides valuable information by tracking the instantaneous change in the center of mass (COM) velocity and position. LoS is a useful measure for evaluating postural instability and identifying individuals at higher risk of falling, making it a valuable screening tool. Individuals with decreased LoS are more susceptible to falling when shifting their bodyweight forward, backward, or sideways, thus increasing their risk of injuries. A restricted LoS can significantly affect an individual's ability to respond to balance control tests and react to perturbations. This reduction in LoS may be attributed to various factors, including weakness in ankle and foot muscles, musculoskeletal issues in the lower limbs, or an internal perception to resist larger displacements. 
These impairments can be correlated with medical examination findings and serve as an essential outcome measure for rehabilitation of specific underlying body impairments. From a clinical perspective, individuals with better LoS can perform complex mobility tasks without support and are more capable of tolerating environmental challenges. Possible causes of LoS impairment Impaired cognitive processing: This is often associated with aging and can result in attention deficits. Neuromuscular impairments: Conditions such as bradykinesia (slowness of movement), ataxia (lack of muscle coordination), and poor motor control can affect attention and cognitive functions. Musculoskeletal impairments: Weakness, limited range of motion (ROM), and pain in the musculoskeletal system can also impact attention and cognitive abilities. Emotional Overlay: Emotions like fear or anxiety can influence cognitive processing and attention. Aphysiology: This refers to exaggeration or poor effort in performing tasks, which can affect attention and cognitive performance. Limits of Stability Testing Various tools such as the Functional Reach Test (FRT) and Limits Of Stability (LOS) test have been used to assess LoS. Functional Reach Test (FRT): This test is commonly used to assess balance and LoS in the forward direction. It is cost-effective and easy to administer. However, it only measures LoS in the forward direction and is performed in a standing posture with the feet in a static position. Limits Of Stability (LOS) Test: This is a more advanced tool compared to FRT and is used to measure balance under multi-directional conditions. In this test, the subject stands on force plates and intentionally shifts their body weight in the cued direction. Parameters measured in LOS test Reaction Time (RT): The time taken by an individual to start shifting their center of gravity (COG) from the static position after receiving a cue, measured in seconds. 
Movement Velocity (MVL): The average speed at which the COG shifts. EndPoint Excursions (EPE): The distance willingly covered by the subject in their very first attempt towards the target, expressed as a percentage. Maximum Excursions (MXE): The amount of distance the subject actually covered or moved their COG. Directional Control (DCL): A comparison between the amount of movement demonstrated in the desired direction (towards the target) to the amount of external movement in the opposite direction of the target, expressed as a percentage. Interpretation of LOS Results The ability to move around without falling is essential for performing activities of daily living (ADLs). Patients who exhibit delays in reaction time, decreased movement velocity, restricted Limits of Stability (LoS) boundary or cone of stability, or uncontrolled center of gravity (CoG) movement are at a higher risk of falling. A delayed reaction time may indicate cognitive processing issues, while reduced movement velocities may indicate higher-level central nervous system deficits. Reduced endpoint excursions, excessively larger maximum excursions, and poor directional control are all indicative of motor control abnormalities. A LoS score close to 100 represents minimal sway and hence a reduced risk of falling, while scores close to 0 imply a higher risk of falling. Validity and reliability of LOS The LOS test has been validated for use across multiple patient populations, including community-dwelling elderly individuals, those with neurological disorders, and those with back and knee injuries. A study conducted by Wernick-Robinson and collaborators in 1999 on the test-retest reliability suggests that using the amount of distance covered in the functional reach test alone may not be an adequate measure of dynamic balance. The study also highlights that for a better evaluation of postural control, additional assessment of movement strategies is indispensable. 
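Directional control as defined above can be written as a simple ratio. This is an illustrative formula only, assuming DCL compares on-axis movement toward the target against extraneous off-axis movement, as the description suggests:

```python
def directional_control(toward_target, off_target):
    """DCL as a percentage: on-target COG movement vs. extraneous movement.

    toward_target: amount of COG movement in the cued direction.
    off_target:    amount of extraneous movement away from the target.
    """
    return 100.0 * (toward_target - off_target) / toward_target
```

Under this reading, a subject who sways 10 cm toward the target with 2 cm of extraneous movement scores 80%, and a perfectly straight movement scores 100%.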
Similarly, another study conducted by Brouwer et al. also claims that the Limits of Stability (LOS) test is a reliable measure for balance testing in healthy populations. Functional Impact and Implications of LOS The capability of moving around without falling is necessary for activities of daily living (ADLs). Instability during weight-shifting activities or the inability to perform certain weight transfer tasks, such as bending forward to take objects from a shelf or leaning backward to rinse hair in the shower, can result from a restricted LoS boundary. The ability to voluntarily move the COG to positions within the Limits of Stability (LOS) with control is fundamental to independence and safety in mobility tasks, such as reaching for objects, transitioning from seated to standing positions (or standing to seated), and walking. The LoS can be indicative of fall risks in various populations, including the elderly, individuals with movement disorders, and those with neurological impairments. Ensuring adequate control and movement within the LoS is crucial for maintaining independence and safety in everyday mobility tasks. References Geriatrics Biomechanics Medical tests
Limits of stability
Physics
1,343
2,605,397
https://en.wikipedia.org/wiki/Insurance%20score
An insurance score – also called an insurance credit score – is a numerical point system based on select credit report characteristics. There is no direct relationship to financial credit scores used in lending decisions, as insurance scores are not intended to measure creditworthiness, but rather to predict risk. Insurance companies use insurance scores for underwriting decisions, and to partially determine charges for premiums. Insurance scores are applied in personal product lines, namely homeowners and private passenger automobile insurance, and typically not elsewhere. Background Insurance scoring models are built from selections of credit report factors, combined with insurance claim and profitability data, to produce numerical formulae or algorithms. A scoring model may be unique to an insurance company and to each line of business (e.g. homeowners or automobile), in terms of the factors selected for consideration and the weighting of the point assignments. As insurance credit scores are not intended to measure creditworthiness, they commonly focus on financial habits and choices (e.g., age of oldest account, number of inquiries in 24 months, ratio of total balance to total limits, number of open retail credit cards, number of revolving accounts with balances greater than 75% of limits, etc.) Therefore it is possible for a consumer with a high financial credit score, and excellent payment history, to receive a poor insurance score. Insurers consider credit report information in their underwriting and pricing decisions as a predictor of profitability and risk of loss. Various studies have found a strong relationship between credit-based insurance scores and profitability or risk of loss.
The scores are generally most predictive when little or no other information exists, such as in the case of clean driving records, or claims-free policies; in instances where past claims, points, or other similar information exist on record, the personal histories will typically be more predictive than the scores. Insurers consider credit report information, along with other factors, such as driving experience, previous claims and vehicle age, to develop a picture of a consumer's risk profile and to establish premium rates. The correlation between credit-based insurance scores and overall insurance profitability and loss has not been disputed. Support and opposition The use of credit information in insurance pricing and underwriting is heavily disputed. Proponents of insurance credit scoring include insurance companies, the American Academy of Actuaries (AAA), the Insurance Information Institute (III), and credit bureaus such as Fair Isaac and TransUnion. Active opponents include many state insurance departments and regulators, and consumer protection organizations such as the Center for Economic Justice, the Consumer Federation of America, the National Consumer Law Center and Texas Watch. As a result of successful lobbying by the insurance industry, credit scoring is legal in nearly all states. The state of Hawaii has banned all use of credit information in personal automobile underwriting and rating, and other states have established restrictions. A number of states have also made unsuccessful attempts to ban or restrict the practice. The National Association of Insurance Commissioners has acknowledged that a correlation does exist between insurance scores and losses, but asserts that the benefit of credit reports to consumers has not yet been established. Public information Insurance credit-scoring models are considered proprietary, and a trade secret, in most cases.
The designers wish to protect their models from view for a number of reasons: they may provide competitive advantage in the insurance marketplace, or they anticipate consumers might attempt to alter results, by changing the information they provide, if the computations were common knowledge. Thus there is little public information available about the details of insurance credit-scoring models. One actuarial study has been published, The Impact of Personal Credit History on Loss Performance in Personal Lines, by James Monaghan, ACAS MAAA. Allstate has published a private passenger automobile credit scoring model, the ISM7 (NI) Scorecard (where "NI" indicates no inquiries are considered). Key reports and studies Credit-Based Insurance Scores: Impacts on Consumers of Automobile Insurance, A Report to Congress by the Federal Trade Commission. This study found that insurance credit scores are effective predictors of risk. It also showed that African-Americans and Hispanics are substantially overrepresented in the lowest credit scores, and substantially underrepresented in the highest, while Caucasians and Asians are more evenly spread across the scores. The credit scores were also found to predict risk within each of the ethnic groups, leading the Federal Trade Commission (FTC) to conclude that the scoring models are not solely proxies for redlining. The FTC stated that little data was available to evaluate benefit of insurance scores to consumers. The report was disputed by representatives of the Consumer Federation of America, the National Fair Housing Alliance, the National Consumer Law Center, and the Center for Economic Justice, for relying on data provided by the insurance industry, which was not open to examination. The Impact of Personal Credit History on Loss Performance in Personal Lines, by James Monaghan ACAS MAAA. 
This actuarial study matched 170,000 policy records with credit report information to show the correlation between historical loss ratios and various credit report elements. The Use of Credit History for Personal Lines of Insurance: Report to the National Association of Insurance Commissioners, American Academy of Actuaries Risk Classification Subcommittee of the Property/Casualty Products, Pricing and Market Committee. Insurers' Use of Credit Scoring for Homeowners Insurance in Ohio: A Report to the Ohio Civil Rights Commission, from Birny Birnbaum, Center for Economic Justice. Birny Birnbaum, Consulting Economist, argues that insurance credit scoring is inherently unfair to consumers and violates basic risk classification principles. Insurance Credit Scoring: An Unfair Practice, Center for Economic Justice. This report argues that insurance scoring: is inherently unfair; has a disproportionate impact on consumers in poor and minority communities; penalizes consumers for rational behavior and sound financial management practices; penalizes consumers for lenders’ business decisions unrelated to payment history; is an arbitrary practice; and undermines the basic insurance mechanism and public policy goals for insurance. The Use of Credit Scoring in Automobile and Homeowners Insurance, A Report to the Governor, the Legislature and the People of Michigan, by Frank M. Fitzgerald, Commissioner, Office of Financial and Insurance Services. This report reviewed the viewpoints of the industry, agents, consumers, and other interested parties. In conclusion, insurance credit scoring was found to be within the scope of Michigan law. Use of Credit Information by Insurers in Texas, Report to the 79th Legislature, Texas Department of Insurance. This study found a consistent pattern of differences in credit scores among different racial/ethnic groups. Whites and Asians were found to have better scores than Blacks and Hispanics. 
Differences in income levels were not as pronounced as for racial/ethnic groups, but average credit scores at upper income levels were better than those at lower and moderate income levels. The study found a strong relationship between credit scores and claims experience on an aggregate basis. Complaints to the Texas Department of Insurance related to credit scoring peaked at 600 in 2002, then declined and leveled off at 300 per year. Insurance Credit Scoring in Alaska, State of Alaska, Department of Community and Economic Development, Division of Insurance. The study suggested unequal effects on consumers of varying income and ethnic backgrounds. Specifically, the higher income neighborhoods and those with a higher proportion of Caucasians were the least impacted by credit scoring. Although data available for the study was limited, the state of Alaska determined that some restrictions on credit scoring would be appropriate to protect the public. References Sources Insurance Information Institute Credit Insurance Fact Sheets External links The Truth Behind Insurance Scoring, www.InsuranceScored.com Insurance firms blasted for credit score rules, by Herb Weisbaum, Con$umerMan on MSNBC (January 2010) Washington Commissioner Criticizes Insurers' Credit Score Use, Insurance Journal (July 2010) Caution! The Secret Score Behind Your Auto Insurance, Consumer Reports (August 2006) Metrics Insurance score
Insurance score
Mathematics
1,616
56,140,139
https://en.wikipedia.org/wiki/List%20object
In category theory, an abstract branch of mathematics, and in its applications to logic and theoretical computer science, a list object is an abstract definition of a list, that is, a finite ordered sequence. Formal definition Let C be a category with finite products and a terminal object 1. A list object over an object A of C is: an object L_A, a morphism o_A : 1 → L_A, and a morphism s_A : A × L_A → L_A such that for any object B of C with maps b : 1 → B and t : A × B → B, there exists a unique f : L_A → B such that the following diagram commutes: where ⟨id_A, f⟩ denotes the arrow induced by the universal property of the product when applied to id_A (the identity on A) and f. The notation A* (à la Kleene star) is sometimes used to denote lists over A. Equivalent definitions In a category with a terminal object 1, binary coproducts (denoted by +), and binary products (denoted by ×), a list object over A can be defined as the initial algebra of the endofunctor that acts on objects by X ↦ 1 + (A × X) and on arrows by f ↦ [id_1, ⟨id_A, f⟩]. Examples In Set, the category of sets, list objects over a set A are simply finite lists with elements drawn from A. In this case, o_A picks out the empty list and s_A corresponds to appending an element to the head of the list. In the calculus of inductive constructions or similar type theories with inductive types (or heuristically, even strongly typed functional languages such as Haskell), lists are types defined by two constructors, nil and cons, which correspond to o_A and s_A, respectively. The recursion principle for lists guarantees they have the expected universal property. Properties Like all constructions defined by a universal property, lists over an object are unique up to canonical isomorphism. The object L_1 (lists over the terminal object) has the universal property of a natural number object.
In any category with lists, one can define the length of a list to be the unique morphism l : L_A → L_1 which makes the following diagram commute: References See also Natural number object F-algebra Initial algebra Objects (category theory) Topos theory
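In a functional setting, the unique morphism supplied by the universal property is the familiar right fold. A minimal Python sketch, modelling lists over A as ordinary Python lists (the names o, s and foldr mirror the structure maps of the definition and are chosen here for illustration):

```python
def foldr(b, t, xs):
    """The unique morphism f : L_A -> B induced by b : 1 -> B and t : A x B -> B."""
    acc = b
    for x in reversed(xs):
        acc = t(x, acc)
    return acc

# The structure maps: o picks out the empty list, s appends to the head.
o = []
s = lambda x, xs: [x] + xs

# Uniqueness: folding a list with its own constructors is the identity.
identity = lambda xs: foldr(o, s, xs)

# The length morphism L_A -> L_1 is the fold that forgets each element,
# here modelled by counting (L_1 playing the role of the natural numbers).
length = lambda xs: foldr(0, lambda _x, n: n + 1, xs)
```

This is exactly the recursion principle mentioned above: nil and cons correspond to o and s, and every structure-respecting map out of lists factors through foldr.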
List object
Mathematics
449
58,465,846
https://en.wikipedia.org/wiki/Aspergillus%20fumisynnematus
Aspergillus fumisynnematus is a species of fungus in the genus Aspergillus. It is from the Fumigati section. The species was first described in 1993. It has been reported to produce neosartorin, pyripyropens, and fumimycin. Growth and morphology A. fumisynnematus has been cultivated on both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the colonies can be seen in the pictures below. References fumisynnematus Fungi described in 1993 Fungus species
Aspergillus fumisynnematus
Biology
136
18,462,622
https://en.wikipedia.org/wiki/HD%20213429
HD 213429 is a spectroscopic binary system in the equatorial constellation of Aquarius. It has a combined apparent magnitude of 6.16 and is located around 83 light years away. The pair orbit each other with a period of 631 days, at an average separation of 1.74 AU and an eccentricity of 0.38. References External links Image HD 213429 high proper motion star adsabs.harvard.edu/ Aquarius (constellation) 213429 F-type main-sequence stars 8581 Spectroscopic binaries Durchmusterung objects 111170 Gliese and GJ objects
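The quoted period and separation are consistent with Kepler's third law. As a hedged aside, assuming the 1.74 AU figure is the semi-major axis of the relative orbit, the implied total mass of the binary works out to roughly 1.8 solar masses:

```python
# Kepler's third law in solar units: M_total (solar masses) = a^3 / P^2,
# with a in AU and P in years. Assumes 1.74 AU is the semi-major axis
# of the relative orbit of the two components.
P = 631 / 365.25        # orbital period in years
a = 1.74                # average separation in AU
m_total = a**3 / P**2   # implied total mass of the system in solar masses
```

A total near 1.8 solar masses is plausible for a pair containing an F-type main-sequence primary, though the actual component masses would require the full spectroscopic orbit.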
HD 213429
Astronomy
131
212,838
https://en.wikipedia.org/wiki/Sir%20George%20Stokes%2C%201st%20Baronet
Sir George Gabriel Stokes, 1st Baronet (13 August 1819 – 1 February 1903) was an Irish mathematician and physicist. Born in County Sligo, Ireland, Stokes spent all of his career at the University of Cambridge, where he was the Lucasian Professor of Mathematics from 1849 until his death in 1903. As a physicist, Stokes made seminal contributions to fluid mechanics, including the Navier–Stokes equations; and to physical optics, with notable works on polarisation and fluorescence. As a mathematician, he popularised "Stokes' theorem" in vector calculus and contributed to the theory of asymptotic expansions. Stokes, along with Felix Hoppe-Seyler, first demonstrated the oxygen transport function of haemoglobin, and showed colour changes produced by the aeration of haemoglobin solutions. Stokes was made a baronet by the British monarch in 1889. In 1893 he received the Royal Society's Copley Medal, then the most prestigious scientific prize in the world, "for his researches and discoveries in physical science". He represented Cambridge University in the British House of Commons from 1887 to 1892, sitting as a Conservative. Stokes also served as president of the Royal Society from 1885 to 1890 and was briefly the Master of Pembroke College, Cambridge. Stokes's extensive correspondence and his work as Secretary of the Royal Society have led him to be referred to as a gatekeeper of Victorian science, with his influence extending well beyond his own published papers. Biography George Stokes was the youngest son of the Reverend Gabriel Stokes (died 1834), a clergyman in the Church of Ireland who served as rector of Skreen in County Sligo, and his wife Elizabeth Haughton, daughter of the Reverend John Haughton. Stokes's home life was strongly influenced by his father's evangelical Protestantism: three of his brothers entered the Church, of whom the most eminent was John Whitley Stokes, Archdeacon of Armagh.
Alongside a lifelong commitment to his Protestant faith, Stokes's childhood in Skreen had a strong influence on his later decision to pursue fluid dynamics as a research area. His daughter, Isabella Humphreys, wrote that her father "told me that he was nearly carried away by one of these great waves when bathing as a boy off the coast of Sligo, and this first attracted his attention to waves". John and George were always close, and George lived with John while attending school in Dublin. Of all his family he was closest to his sister Elizabeth. Their mother was remembered in the family as "beautiful but very stern". After attending schools in Skreen, Dublin and Bristol, in 1837 Stokes matriculated at Pembroke College, Cambridge. Four years later he graduated as senior wrangler and first Smith's prizeman, achievements that earned him election as a fellow of the college. In accordance with the college statutes, Stokes had to resign the fellowship when he married in 1857. Twelve years later, under new statutes, he was re-elected to the fellowship and he retained that place until 1902, when on the day before his 83rd birthday, he was elected as the college's Master. Stokes did not hold that position for long, for he died at Cambridge on 1 February the following year, and was buried in the Mill Road cemetery. There is also a memorial to him in the north aisle at Westminster Abbey. Career In 1849, Stokes was appointed to the Lucasian professorship of mathematics at Cambridge, a position he held until his death in 1903. On 1 June 1899, the jubilee of this appointment was celebrated there in a ceremony attended by numerous delegates from European and American universities. A commemorative gold medal was presented to Stokes by the chancellor of the university and marble busts of Stokes by Hamo Thornycroft were formally offered to Pembroke College and to the university by Lord Kelvin. At 54 years, Stokes's tenure as the Lucasian Professor was the longest in history. 
Stokes, who was made a baronet in 1889, further served his university by representing it in parliament from 1887 to 1892 as one of the two members for the Cambridge University constituency. In 1885–1890 he was also president of the Royal Society, of which he had been one of the secretaries since 1854. As he was also Lucasian Professor at this time, Stokes was the first person to hold all three positions simultaneously; Newton held the same three, although not at the same time. Stokes was the oldest of the trio of natural philosophers, James Clerk Maxwell and Lord Kelvin being the other two, who especially contributed to the fame of the Cambridge school of mathematical physics in the middle of the 19th century. Stokes's original work began about 1840, and is distinguished for its quantity and quality. The Royal Society's catalogue of scientific papers gives the titles of over a hundred memoirs by him published down to 1883. Some of these are only brief notes, others are short controversial or corrective statements, but many are long and elaborate treatises. Contributions to science In scope, Stokes's work covered a wide range of physical inquiry but, as Marie Alfred Cornu remarked in his Rede Lecture of 1899, the greater part of it was concerned with waves and the transformations imposed on them during their passage through various media. Fluid dynamics Stokes's first published papers, which appeared in 1842 and 1843, were on the steady motion of incompressible fluids and some cases of fluid motion. These were followed in 1845 by one on the friction of fluids in motion and the equilibrium and motion of elastic solids, and in 1850 by another on the effects of the internal friction of fluids on the motion of pendulums. To the theory of sound he made several contributions, including a discussion of the effect of wind on the intensity of sound and an explanation of how the intensity is influenced by the nature of the gas in which the sound is produced. 
These inquiries together put the science of fluid dynamics on a new footing, and provided a key not only to the explanation of many natural phenomena, such as the suspension of clouds in the air, and the subsidence of ripples and waves in water, but also to the solution of practical problems, such as the flow of water in rivers and channels, and the skin resistance of ships. Creeping flow Stokes's work on fluid motion and viscosity led to his calculating the terminal velocity for a sphere falling in a viscous medium. This became known as Stokes' law. He derived an expression for the frictional force (also called drag force) exerted on spherical objects with very small Reynolds numbers. His work is the basis of the falling sphere viscometer, in which the fluid is stationary in a vertical glass tube. A sphere of known size and density is allowed to descend through the liquid. If correctly selected, it reaches terminal velocity, which can be measured by the time it takes to pass two marks on the tube. Electronic sensing can be used for opaque fluids. Knowing the terminal velocity, the size and density of the sphere, and the density of the liquid, Stokes's law can be used to calculate the viscosity of the fluid. A series of steel ball bearings of different diameters is normally used in the classic experiment to improve the accuracy of the calculation. The school experiment uses glycerine as the fluid, and the technique is used industrially to check the viscosity of fluids used in processes. The same theory explains why small water droplets (or ice crystals) can remain suspended in air (as clouds) until they grow to a critical size and start falling as rain (or snow and hail). Similar use of the equation can be made in the settlement of fine particles in water or other fluids. The stokes, the CGS unit of kinematic viscosity, was named in recognition of his work. Light Perhaps his best-known researches are those which deal with the wave theory of light.
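The falling-sphere measurement described above reduces to a single formula: balancing the Stokes drag 6πηrv against the sphere's weight less buoyancy gives the viscosity η = 2r²g(ρs − ρf)/(9v). A minimal Python sketch (the ball and fluid values below are illustrative, not taken from the text):

```python
def stokes_viscosity(radius, v_terminal, rho_sphere, rho_fluid, g=9.81):
    """Dynamic viscosity (Pa*s) from a falling-sphere measurement.

    Valid only in the creeping-flow regime (Reynolds number << 1),
    the condition under which Stokes' law holds.
    """
    return 2 * radius**2 * g * (rho_sphere - rho_fluid) / (9 * v_terminal)

def reynolds_number(radius, v_terminal, rho_fluid, eta):
    """Check that the creeping-flow assumption is satisfied."""
    return rho_fluid * v_terminal * (2 * radius) / eta

# Illustrative values: a 1 mm radius steel ball falling through glycerine,
# timed at 0.01 m/s between the two marks on the tube.
eta = stokes_viscosity(radius=1e-3, v_terminal=0.01,
                       rho_sphere=7800.0, rho_fluid=1260.0)
re = reynolds_number(1e-3, 0.01, 1260.0, eta)
print(f"viscosity = {eta:.3f} Pa*s, Re = {re:.4f}")
```

The terminal velocity here is the distance between the two marks divided by the transit time; repeating the run with several ball diameters, as the text notes, improves the accuracy of the calculation.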
His optical work began at an early period in his scientific career. His first papers on the aberration of light appeared in 1845 and 1846, and were followed in 1848 by one on the theory of certain bands seen in the spectrum. In 1849 he published a long paper on the dynamical theory of diffraction, in which he showed that the plane of polarisation must be perpendicular to the direction of propagation. Two years later he discussed the colours of thick plates. Stokes also investigated George Airy's mathematical description of rainbows. Airy's findings involved an integral that was awkward to evaluate. Stokes expressed the integral as a divergent series, a class of series then little understood. However, by cleverly truncating the series (i.e., ignoring all except the first few terms of the series), Stokes obtained an accurate approximation to the integral that was far easier to evaluate than the integral itself. Stokes's research on asymptotic series led to fundamental insights about such series. Fluorescence In 1852, in his famous paper on the change of wavelength of light, he described the phenomenon of fluorescence, as exhibited by fluorspar and uranium glass, materials which he viewed as having the power to convert invisible ultra-violet radiation into radiation of longer wavelengths that are visible. The Stokes shift, which describes this conversion, is named in Stokes's honour. A mechanical model illustrating the dynamical principle of Stokes's explanation was also shown. The offshoot of this, the Stokes line, is the basis of Raman scattering. In 1883, during a lecture at the Royal Institution, Lord Kelvin said he had heard an account of it from Stokes many years before, and had repeatedly but vainly begged him to publish it. Polarization In the same year, 1852, there appeared the paper on the composition and resolution of streams of polarised light from different sources, and in 1853 an investigation of the metallic reflection exhibited by certain non-metallic substances.
This research highlighted the phenomenon of light polarisation. About 1860 he was engaged in an inquiry on the intensity of light reflected from, or transmitted through, a pile of plates; and in 1862 he prepared for the British Association a valuable report on double refraction, a phenomenon where certain crystals show different refractive indices along different axes. Perhaps the best known such crystal is Iceland spar, a transparent form of calcite. A paper on the long spectrum of the electric light bears the same date, and was followed by an inquiry into the absorption spectrum of blood. Chemical analysis The chemical identification of organic bodies by their optical properties was treated in 1864; and later, in conjunction with the Rev. William Vernon Harcourt, he investigated the relation between the chemical composition and the optical properties of various glasses, with reference to the conditions of transparency and the improvement of achromatic telescopes. A still later paper connected with the construction of optical instruments discussed the theoretical limits to the aperture of microscope objectives. Ophthalmology In 1849, Stokes invented the Stokes lens to detect astigmatism. It is a combination of two cylindrical lenses of equal but opposite power, attached together in such a way that the lenses can be rotated relative to one another. Other work In other areas of physics may be mentioned his paper on the conduction of heat in crystals (1851) and his inquiries in connection with Crookes radiometer; his explanation of the light border frequently noticed in photographs just outside the outline of a dark body seen against the sky (1882); and, still later, his theory of the x-rays, which he suggested might be transverse waves travelling as innumerable solitary waves, not in regular trains.
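The effect of rotating the two cylinders of the Stokes lens described under Ophthalmology above can be worked out with standard dioptric power matrices (a sketch; the power value is illustrative). A cylinder of power C with its axis at angle θ contributes the matrix C·[[sin²θ, −sinθcosθ], [−sinθcosθ, cos²θ]]; summing the +C and −C contributions and taking the principal powers of the result shows the combination sweeping from zero cylinder to a ±C cross cylinder as one lens is rotated:

```python
import math

def crossed_cylinder_powers(C, angle_deg):
    """Principal powers of a Stokes lens: cylinders of power +C and -C
    with axes separated by angle_deg, via the dioptric power matrix method.
    The +C cylinder's axis is taken at 0 degrees."""
    t = math.radians(angle_deg)
    # Entries of the summed 2x2 power matrix for the two cylinders.
    a = -C * math.sin(t) ** 2
    b = C * math.sin(t) * math.cos(t)
    d = C * math.sin(t) ** 2
    # Eigenvalues of the symmetric matrix [[a, b], [b, d]].
    mean = (a + d) / 2
    half = math.sqrt(((a - d) / 2) ** 2 + b ** 2)
    return mean - half, mean + half

# Axes aligned: the lens is neutral. Axes crossed at 90 degrees:
# principal powers -C and +C, i.e. a cross cylinder of total power 2C.
print(crossed_cylinder_powers(1.0, 0))
print(crossed_cylinder_powers(1.0, 90))
```

The net cylinder power comes out as 2C·sin(angle between axes), which is why rotating one element through 90° takes the combination continuously from zero up to its maximum cylinder, letting the examiner bracket a patient's astigmatism.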
Two long papers published in 1849, one on attractions and Clairaut's theorem and the other on the variation of gravity at the surface of the Earth (the basis of Stokes's gravity formula), also demand notice, as do his mathematical memoirs on the critical values of sums of periodic series (1847) and on the numerical calculation of a class of definite integrals and infinite series (1850), and his discussion of a differential equation relating to the breaking of railway bridges (1849), research related to his evidence given to the Royal Commission on the Use of Iron in Railway Structures after the Dee Bridge disaster of 1847. Unpublished research Many of Stokes's discoveries were not published, or were only touched upon in the course of his oral lectures. One such example is his work in the theory of spectroscopy. In his presidential address to the British Association in 1871, Lord Kelvin stated his belief that the application of the prismatic analysis of light to solar and stellar chemistry had never been suggested directly or indirectly by anyone else when Stokes taught it to him at Cambridge University some time prior to the summer of 1852, and he set forth the conclusions, theoretical and practical, which he learnt from Stokes at that time, and which he afterwards gave regularly in his public lectures at Glasgow. These statements, containing as they do the physical basis on which spectroscopy rests, and the way in which it is applicable to the identification of substances existing in the sun and stars, make it appear that Stokes anticipated Gustav Kirchhoff by at least seven or eight years. Stokes, however, in a letter published some years after the delivery of this address, stated that he had failed to take one essential step in the argument: not perceiving that emission of light of definite wavelength not merely permitted, but necessitated, absorption of light of the same wavelength.
He modestly disclaimed "any part of Kirchhoff's admirable discovery," adding that he felt some of his friends had been over-zealous in his cause. It must be said, however, that English men of science have not accepted this disclaimer in all its fullness, and still attribute to Stokes the credit of having first enunciated the fundamental principles of spectroscopy. In another way, too, Stokes did much for the progress of mathematical physics. Soon after he was elected to the Lucasian chair he announced that he regarded it as part of his professional duties to help any member of the university with difficulties he might encounter in his mathematical studies, and the assistance rendered was so real that pupils were glad to consult him, even after they had become colleagues, on mathematical and physical problems in which they found themselves at a loss. Then during the thirty years he acted as secretary of the Royal Society, he exercised an enormous if inconspicuous influence on the advancement of mathematical and physical science, not only directly by his own investigations, but indirectly by suggesting problems for inquiry and inciting men to attack them, and by his readiness to give encouragement and help. Contributions to engineering Stokes was involved in several investigations into railway accidents, especially the Dee Bridge disaster in May 1847, and he served as a member of the subsequent Royal Commission into the use of cast iron in railway structures. He contributed to the calculation of the forces exerted by moving engines on bridges. The bridge failed because a cast iron beam was used to support the loads of passing trains. Cast iron is brittle in tension or bending, and many other similar bridges had to be demolished or reinforced. He appeared as an expert witness at the Tay Bridge disaster, where he gave evidence about the effects of wind loads on the bridge. 
The centre section of the bridge (known as the High Girders) was completely destroyed during a storm on 28 December 1879, while an express train was in the section, and everyone aboard died (more than 75 victims). The Board of Inquiry listened to many expert witnesses, and concluded that the bridge was "badly designed, badly built and badly maintained". As a result of his evidence, he was appointed a member of the subsequent Royal Commission into the effect of wind pressure on structures. The effects of high winds on large structures had been neglected at that time, and the commission conducted a series of measurements across Britain to gain an appreciation of wind speeds during storms, and the pressures they exerted on exposed surfaces. Work on religion Stokes generally held conservative religious values and beliefs. In 1886, he became president of the Victoria Institute, which had been founded to defend evangelical Christian principles against challenges from the new sciences, especially the Darwinian theory of biological evolution. He gave the 1891 Gifford lecture on natural theology. He was also the vice-president of the British and Foreign Bible Society and was actively involved in doctrinal debates concerning missionary work. However, although his religious views were mostly orthodox, he was unusual among Victorian evangelicals in rejecting eternal punishment in hell, and instead was a proponent of Christian conditionalism. As President of the Victoria Institute, Stokes wrote: "We all admit that the book of Nature and the book of Revelation come alike from God, and that consequently there can be no real discrepancy between the two if rightly interpreted. The provisions of Science and Revelation are, for the most part, so distinct that there is little chance of collision. But if an apparent discrepancy should arise, we have no right on principle, to exclude either in favour of the other. 
For however firmly convinced we may be of the truth of revelation, we must admit our liability to err as to the extent or interpretation of what is revealed; and however strong the scientific evidence in favour of a theory may be, we must remember that we are dealing with evidence which, in its nature, is probable only, and it is conceivable that wider scientific knowledge might lead us to alter our opinion". Personal life Stokes married Mary Susanna Robinson, the only daughter of Irish astronomer Rev Thomas Romney Robinson, at St Patrick's Cathedral, Armagh on 4 July 1857. They had five children: Arthur Romney, who inherited the baronetcy; Susanna Elizabeth, who died in infancy; Isabella Lucy (Mrs Laurence Humphry), who contributed the personal memoir of her father in "Memoir and Scientific Correspondence of the Late George Gabriel Stokes, Bart"; Dr William George Gabriel, physician, a troubled man who committed suicide aged 30 while temporarily insane; and Dora Susanna, who died in infancy. His male line and hence his baronetcy have since become extinct. Legacy and honours Lucasian Professor of Mathematics at Cambridge University. From the Royal Society, of which he became a fellow in 1851, he received the Rumford Medal in 1852 in recognition of his inquiries into the wavelength of light, and later, in 1893, the Copley Medal. In 1869 he presided over the Exeter meeting of the British Association. In 1874 he was elected an International Honorary Member of the American Academy of Arts and Sciences. In 1883 he was elected an International Member of the United States National Academy of Sciences. From 1883 to 1885 he was Burnett lecturer at Aberdeen; his lectures on light, published in 1884–1887, dealt with its nature, its use as a means of investigation, and its beneficial effects. On 18 April 1888 he was admitted as a Freeman of the City of London.
On 6 July 1889 Queen Victoria made him a Baronet as Sir George Gabriel Stokes of Lensfield Cottage in the Baronetage of the United Kingdom; the title became extinct in 1916. In 1889, he was elected an International Member of the American Philosophical Society. In 1891, as Gifford lecturer, he published a volume on Natural Theology. He was a member of the Prussian Order Pour le Mérite. His academic distinctions included honorary degrees from many universities, including Doctor mathematicae (honoris causa) from the Royal Frederick University on 6 September 1902, when they celebrated the centennial of the birth of mathematician Niels Henrik Abel. The stokes, a unit of kinematic viscosity, is named after him. In 1909, the Stokes Society at Pembroke College was founded as an academic hub for undergraduate scientists across the University. It remains active as of 2023. In July 2017, Dublin City University named a building after Stokes in recognition of his contributions to physics and mathematics. Publications Stokes's mathematical and physical papers (see external links) were published in a collected form in five volumes; the first three (Cambridge, 1880, 1883, and 1901) under his own editorship, and the two last (Cambridge, 1904 and 1905) under that of Sir Joseph Larmor, who also selected and arranged the Memoir and Scientific Correspondence of Stokes published at Cambridge in 1907. See also Stokes flow List of presidents of the Royal Society References Further reading Wilson, David B., Kelvin and Stokes: A Comparative Study in Victorian Physics (1987). Peter R Lewis, Beautiful Railway Bridge of the Silvery Tay: Reinvestigating the Tay Bridge Disaster of 1879, Tempus (2004). PR Lewis, Disaster on the Dee: Robert Stephenson's Nemesis of 1847, Tempus Publishing (2007). George Gabriel Stokes: Life, Science and Faith, edited by Mark McCartney, Andrew Whitaker, and Alastair Wood, Oxford University Press, 2019. External links Biography on Dublin City University Web site (1907), ed. by J.
Larmor Mathematical and physical papers volume 1 and volume 2 from the Internet Archive Mathematical and physical papers, volumes 1 to 5 from the University of Michigan Digital Collection. Life and work of Stokes Natural Theology (1891), Adam and Charles Black. (1891–93 Gifford Lectures) 1819 births 1903 deaths Scientists from County Sligo 19th-century Anglo-Irish people Alumni of Pembroke College, Cambridge Fellows of Pembroke College, Cambridge Masters of Pembroke College, Cambridge Baronets in the Baronetage of the United Kingdom 19th-century Irish mathematicians 19th-century Irish physicists British physicists Optical physicists 19th-century British mathematicians Fluid dynamicists Irish Anglicans Lucasian Professors of Mathematics Members of the Parliament of the United Kingdom for English constituencies Members of the Parliament of the United Kingdom for the University of Cambridge Fellows of the Royal Society Presidents of the Royal Society Foreign associates of the National Academy of Sciences UK MPs 1886–1892 Viscosity Recipients of the Copley Medal Recipients of the Pour le Mérite (civil class) Senior Wranglers Members of the American Philosophical Society Presidents of the Cambridge Philosophical Society
Sir George Stokes, 1st Baronet
https://en.wikipedia.org/wiki/Pelorism
Pelorism is the term, said to have been first used by Charles Darwin, for the formation of 'peloric flowers': botanically, the abnormal production of radially symmetrical (actinomorphic) flowers in a species that usually produces bilaterally symmetrical (zygomorphic) flowers. These flowers are spontaneous floral symmetry mutants. The term epanody is also applied to this phenomenon. Bilaterally symmetrical (zygomorphic) flowers are known to have evolved several times from radially symmetrical (actinomorphic) flowers, these changes being linked to increasing specialisation in pollinators. History Pelorism has been of interest since a five-spurred variety of the common toad-flax (Linaria vulgaris L.) was first discovered in 1742 by a young Uppsala botanist on an island in the Stockholm archipelago and then in 1744 described by Carl Linnaeus. The mutant, spreading vegetatively, had five spurs rather than the usual one; however, the rest of the plant was normal. Linnaeus found that this variety was contrary to his concept that genera and species had universally arisen through an act of "original creation and remained unchanged since then". Linnaeus called this type of mutant a 'Peloria', from the Greek for 'monster' or 'prodigy', because of the huge implications for the then current belief that species were immutable. He wrote that "This is certainly no less remarkable than if a cow were to give birth to a calf with a wolf's head." The peloric plant fascinated Linnaeus to the extent that he grew it at his summer residence in Hammarby, and his explanation for it was that a toad-flax had been pollinated by another species. Charles Darwin, Charles Victor Naudin, Johann Wolfgang von Goethe and Hugo de Vries amongst others analysed and wrote about this significant mutation. The botanist Thomas Fairchild corresponded with Carl Linnaeus.
Fairchild scientifically produced an artificial hybrid, Dianthus caryophyllus x barbatus, in 1717 that became known as 'Fairchild's Mule', a cross between a sweet william and a carnation. The production of hybrids further undermined the belief that species as created by God were immutable. Appearance and causation It is notable in foxgloves that the terminal flower more frequently develops peloric features than the lateral flowers, and this has been put down to terminal buds having a greater supply of sap. Charles Darwin took a particular interest in peloric flowers, growing and cross-pollinating antirrhinums himself, as he saw the phenomenon as suggestive of a partial reversion to a past or ancestral type. Peloric forms quite commonly appear as random mutations in several orchid species in nature, and this is genetically controlled, although the expression can be influenced both by environmental changes and by stresses. The occurrence is unstable and the same plant may exhibit normally on the next flowering. In more highly evolved flowers such as orchids the details of pelorism may appear more complex, with characteristics such as the two lateral petals replaced by two additional labella; the labellum replaced by an additional lateral petal in semipeloria; or pseudopeloria, in which the modified labellum is less distinctly altered yet still distinguishable from the lateral petals, conferring a degree of zygomorphy upon the petal whorl. Although the production of peloric flowers has been shown to generally follow standard Mendelian inheritance, this is not fixed and has been linked to the inability of a plant to produce normal internodes, this failure resulting in the fusion of flower buds, giving actinomorphic flowers. Foxglove plants show the terminal peloric bud flowering first, followed by the more normal buds flowering below, the reverse of the normal flowering pattern, with the terminal flowers withering first.
The CYCLOIDEA (CYC) gene controls floral symmetry, and peloric antirrhinums have been artificially induced by knocking this gene out. Etymology Peloria derives, via New Latin, from a Greek word meaning 'monstrous'. Examples Pelorism is found in several orchid species, such as Phalaenopsis, in which it is demonstrated by flowers with abnormal numbers of petals or lips as a genetic trait, the expression of which is environmentally influenced and may appear random. It is widely noted in the mint family and in species such as Digitalis purpurea, gloxinia, Antirrhinum majus, Pelargonium, and auricula. Because peloric flowers are larger and arguably more attractive than normal flowers, plants such as cultivars of the gloxinia (Sinningia speciosa) have been deliberately bred to have peloric flowers. See also Floral symmetry References External links Peloric Foxglove flowers Further reading Plant morphology
Pelorism
https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2079
In molecular biology, glycoside hydrolase family 79 is a family of glycoside hydrolases. Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes. Glycoside hydrolase family 79 includes endo-beta-N-glucuronidase and heparanase (CAZY GH_79). Heparan sulphate proteoglycans (HSPGs) play a key role in the self-assembly, insolubility and barrier properties of basement membranes and extracellular matrices. Hence, cleavage of heparan sulphate (HS) affects the integrity and functional state of tissues and thereby fundamental normal and pathological phenomena involving cell migration and response to changes in the extracellular microenvironment. Heparanase degrades HS at specific intrachain sites. The enzyme is synthesized as a latent approximately 65 kDa protein that is processed at the N-terminus into a highly active approximately 50 kDa form. Experimental evidence suggests that heparanase may facilitate both tumour cell invasion and neovascularisation, both critical steps in cancer progression. The enzyme is also involved in cell migration associated with inflammation and autoimmunity. References EC 3.2.1 Glycoside hydrolase families Protein families
Glycoside hydrolase family 79
https://en.wikipedia.org/wiki/Data%20warehouse
In computing, a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a system used for reporting and data analysis and is a core component of business intelligence. Data warehouses are central repositories of data integrated from disparate sources. They store current and historical data organized so as to make it easy to create reports, query and get insights from the data. Unlike operational databases, they are intended to be used by analysts and managers to help make organizational decisions. The data stored in the warehouse is uploaded from operational systems (such as marketing or sales). The data may pass through an operational data store and may require data cleansing for additional operations to ensure data quality before it is used in the data warehouse for reporting. The two main approaches for building a data warehouse system are extract, transform, load (ETL) and extract, load, transform (ELT). Components The environment for data warehouses and marts includes the following: Source systems of data (often, the company's operational databases, such as relational databases); Data integration technology and processes to extract data from source systems, transform them, and load them into a data mart or warehouse; Architectures to store data in the warehouse or marts; Tools and applications for varied users; Metadata, data quality, and governance processes. Metadata includes data sources (database, table, and column names), refresh schedules and data usage measures. Related systems Operational databases Operational databases are optimized for the preservation of data integrity and speed of recording of business transactions through use of database normalization and an entity–relationship model. Operational system designers generally follow Codd's 12 rules of database normalization to ensure data integrity.
Fully normalized database designs (that is, those satisfying all Codd rules) often result in information from a business transaction being stored in dozens to hundreds of tables. Relational databases are efficient at managing the relationships between these tables. The databases have very fast insert/update performance because only a small amount of data in those tables is affected by each transaction. To improve performance, older data are periodically purged. Data warehouses are optimized for analytic access patterns, which usually involve selecting specific fields rather than all fields as is common in operational databases. Because of these differences in access, operational databases (loosely, OLTP) benefit from the use of a row-oriented database management system (DBMS), whereas analytics databases (loosely, OLAP) benefit from the use of a column-oriented DBMS. Operational systems maintain a snapshot of the business, while warehouses maintain historic data through ETL processes that periodically migrate data from the operational systems to the warehouse. Online analytical processing (OLAP) is characterized by a low rate of transactions and complex queries that involve aggregations. Response time is an effective performance measure of OLAP systems. OLAP applications are widely used for data mining. OLAP databases store aggregated, historical data in multi-dimensional schemas (usually star schemas). OLAP systems typically have a data latency of a few hours, while data mart latency is closer to one day. The OLAP approach is used to analyze multidimensional data from multiple sources and perspectives. The three basic operations in OLAP are roll-up (consolidation), drill-down, and slicing & dicing. Online transaction processing (OLTP) is characterized by a large number of short online transactions (INSERT, UPDATE, DELETE). OLTP systems emphasize fast query processing and maintaining data integrity in multi-access environments.
For OLTP systems, performance is the number of transactions per second. OLTP databases contain detailed and current data. The schema used to store transactional databases is the entity model (usually 3NF). Normalization is the norm for data modeling techniques in this system. Predictive analytics is about finding and quantifying hidden patterns in the data using complex mathematical models that can be used to predict future outcomes. By contrast, OLAP focuses on historical data analysis and is reactive. Predictive systems are also used for customer relationship management (CRM). Data marts A data mart is a simple data warehouse focused on a single subject or functional area. Hence it draws data from a limited number of sources such as sales, finance or marketing. Data marts are often built and controlled by a single department in an organization. The sources could be internal operational systems, a central data warehouse, or external data. As with warehouses, stored data is usually not normalized. Types of data marts include dependent, independent, and hybrid data marts. Variants ETL The typical extract, transform, load (ETL)-based data warehouse uses staging, data integration, and access layers to house its key functions. The staging layer or staging database stores raw data extracted from each of the disparate source data systems. The integration layer integrates disparate data sets by transforming the data from the staging layer, often storing this transformed data in an operational data store (ODS) database. The integrated data are then moved to yet another database, often called the data warehouse database, where the data is arranged into hierarchical groups, often called dimensions, and into facts and aggregate facts. The combination of facts and dimensions is sometimes called a star schema. The access layer helps users retrieve data.
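The staging, integration and access layers just described can be caricatured in a few lines of Python. This toy sketch (all table names, field names and records are invented for illustration) extracts raw records into a staging area, cleanses them, loads a dimension table and a fact table of a star schema with surrogate keys, and finishes with a roll-up query from the access layer:

```python
# Toy ETL pipeline: staging -> integration -> star-schema warehouse -> access.
# All source data and field names are illustrative, not from any real system.

raw_staging = [  # staging layer: data exactly as extracted from the source
    {"customer": " Alice ", "country": "IE", "amount": "120.50"},
    {"customer": "Bob",     "country": "ie", "amount": "80.00"},
    {"customer": "Alice",   "country": "IE", "amount": "42.25"},
]

def transform(row):
    """Integration layer: cleanse and conform a raw record."""
    return {
        "customer": row["customer"].strip().title(),
        "country": row["country"].upper(),
        "amount": float(row["amount"]),
    }

# Warehouse layer: one dimension table and one fact table (a star schema).
dim_customer = {}   # natural key -> surrogate key
fact_sales = []

for raw in raw_staging:
    row = transform(raw)
    key = (row["customer"], row["country"])
    sk = dim_customer.setdefault(key, len(dim_customer) + 1)
    fact_sales.append({"customer_sk": sk, "amount": row["amount"]})

# Access layer: a roll-up (consolidation) over the fact table.
totals = {}
for fact in fact_sales:
    totals[fact["customer_sk"]] = totals.get(fact["customer_sk"], 0.0) + fact["amount"]
print(dim_customer, totals)
```

In practice each layer is a separate database or schema and the transformations are run by an ETL tool, or, in the ELT variant, by SQL inside the warehouse itself; the shape of the flow is the same.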
The main source of the data is cleansed, transformed, catalogued, and made available for use by managers and other business professionals for data mining, online analytical processing, market research and decision support. However, the means to retrieve and analyze data, to extract, transform, and load data, and to manage the data dictionary are also considered essential components of a data warehousing system. Many references to data warehousing use this broader context. Thus, an expanded definition of data warehousing includes business intelligence tools, tools to extract, transform, and load data into the repository, and tools to manage and retrieve metadata. ELT ELT-based data warehousing gets rid of a separate ETL tool for data transformation. Instead, it maintains a staging area inside the data warehouse itself. In this approach, data gets extracted from heterogeneous source systems and are then directly loaded into the data warehouse, before any transformation occurs. All necessary transformations are then handled inside the data warehouse itself. Finally, the manipulated data gets loaded into target tables in the same data warehouse. Benefits A data warehouse maintains a copy of information from the source transaction systems. This architectural complexity provides the opportunity to: Integrate data from multiple sources into a single database and data model. Congregate data from multiple sources into a single database so that a single query engine can be used to present data in an operational data store. Mitigate the problem of isolation-level lock contention in transaction processing systems caused by long-running analysis queries in transaction processing databases. Maintain data history, even if the source transaction systems do not. Integrate data from multiple source systems, enabling a central view across the enterprise. This benefit is always valuable, but particularly so when the organization grows via merging.
Improve data quality, by providing consistent codes and descriptions, flagging or even fixing bad data. Present the organization's information consistently. Provide a single common data model for all data of interest regardless of data source. Restructure the data so that it makes sense to the business users. Restructure the data so that it delivers excellent query performance, even for complex analytic queries, without impacting the operational systems. Add value to operational business applications, notably customer relationship management (CRM) systems. Make decision–support queries easier to write. Organize and disambiguate repetitive data. History The concept of data warehousing dates back to the late 1980s when IBM researchers Barry Devlin and Paul Murphy developed the "business data warehouse". In essence, the data warehousing concept was intended to provide an architectural model for the flow of data from operational systems to decision support environments. The concept attempted to address the various problems associated with this flow, mainly the high costs associated with it. In the absence of a data warehousing architecture, an enormous amount of redundancy was required to support multiple decision support environments. In larger corporations, it was typical for multiple decision support environments to operate independently. Though each environment served different users, they often required much of the same stored data. The process of gathering, cleaning and integrating data from various sources, usually from long-term existing operational systems (usually referred to as legacy systems), was typically in part replicated for each environment. Moreover, the operational systems were frequently reexamined as new decision support requirements emerged. Often new requirements necessitated gathering, cleaning and integrating new data from "data marts" that was tailored for ready access by users. 
Additionally, with the publication of The IRM Imperative (Wiley & Sons, 1991) by James M. Kerr, the idea of managing and putting a dollar value on an organization's data resources and then reporting that value as an asset on a balance sheet became popular. In the book, Kerr described a way to populate subject-area databases from data derived from transaction-driven systems to create a storage area where summary data could be further leveraged to inform executive decision-making. This concept served to promote further thinking of how a data warehouse could be developed and managed in a practical way within any enterprise. Key developments in early years of data warehousing: 1960s – General Mills and Dartmouth College, in a joint research project, develop the terms dimensions and facts. 1970s – ACNielsen and IRI provide dimensional data marts for retail sales. 1970s – Bill Inmon begins to define and discuss the term Data Warehouse. 1975 – Sperry Univac introduces MAPPER (MAintain, Prepare, and Produce Executive Reports), a database management and reporting system that includes the world's first 4GL. It is the first platform designed for building Information Centers (a forerunner of contemporary data warehouse technology). 1983 – Teradata introduces the DBC/1012 database computer specifically designed for decision support. 1984 – Metaphor Computer Systems, founded by David Liddle and Don Massaro, releases a hardware/software package and GUI for business users to create a database management and analytic system. 1988 – Barry Devlin and Paul Murphy publish the article "An architecture for a business and information system" where they introduce the term "business data warehouse". 1990 – Red Brick Systems, founded by Ralph Kimball, introduces Red Brick Warehouse, a database management system specifically for data warehousing. 1991 – James M. 
Kerr authors The IRM Imperative, which suggests data resources could be reported as an asset on a balance sheet, furthering commercial interest in the establishment of data warehouses. 1991 – Prism Solutions, founded by Bill Inmon, introduces Prism Warehouse Manager, software for developing a data warehouse. 1992 – Bill Inmon publishes the book Building the Data Warehouse. 1995 – The Data Warehousing Institute, a for-profit organization that promotes data warehousing, is founded. 1996 – Ralph Kimball publishes the book The Data Warehouse Toolkit. 1998 – Focal modeling is implemented as an ensemble (hybrid) data warehouse modeling approach, with Patrik Lager as one of the main drivers. 2000 – Dan Linstedt releases in the public domain the Data vault modeling, conceived in 1990 as an alternative to Inmon and Kimball to provide long-term historical storage of data coming in from multiple operational systems, with emphasis on tracing, auditing and resilience to change of the source data model. 2008 – Bill Inmon, along with Derek Strauss and Genia Neushloss, publishes "DW 2.0: The Architecture for the Next Generation of Data Warehousing", explaining his top-down approach to data warehousing and coining the term, data-warehousing 2.0. 2008 – Anchor modeling was formalized in a paper presented at the International Conference on Conceptual Modeling, and won the best paper award 2012 – Bill Inmon develops and makes public technology known as "textual disambiguation". Textual disambiguation applies context to raw text and reformats the raw text and context into a standard data base format. Once raw text is passed through textual disambiguation, it can easily and efficiently be accessed and analyzed by standard business intelligence technology. Textual disambiguation is accomplished through the execution of textual ETL. Textual disambiguation is useful wherever raw text is found, such as in documents, Hadoop, email, and so forth. 
2013 – Data vault 2.0 was released, having some minor changes to the modeling method, as well as integration with best practices from other methodologies, architectures and implementations including agile and CMMI principles. Data organization Facts A fact is a value or measurement in the system being managed. Raw facts are ones reported by the reporting entity. For example, in a mobile telephone system, if a base transceiver station (BTS) receives 1,000 requests for traffic channel allocation, allocates channels for 820, and rejects the rest, it could report three facts (requests received, channels allocated, requests rejected) to a management system. Raw facts are aggregated to higher levels in various dimensions to extract information more relevant to the service or business. These are called aggregated facts or summaries. For example, if there are three BTSs in a city, then the facts above can be aggregated to the city level in the network dimension by summing the three per-BTS facts across the city's BTSs. Dimensional versus normalized approach for storage of data The two most important approaches to store data in a warehouse are dimensional and normalized. The dimensional approach uses a star schema as proposed by Ralph Kimball. The normalized approach, also called the third normal form (3NF), is an entity-relational normalized model proposed by Bill Inmon. Dimensional approach In a dimensional approach, transaction data is partitioned into "facts", which are usually numeric transaction data, and "dimensions", which are the reference information that gives context to the facts. For example, a sales transaction can be broken up into facts such as the number of products ordered and the total price paid for the products, and into dimensions such as order date, customer name, product number, order ship-to and bill-to locations, and salesperson responsible for receiving the order. This dimensional approach makes data easier to understand and speeds up data retrieval.
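The BTS example above can be made concrete: raw per-BTS facts are rolled up along the network dimension to city-level aggregated facts. The per-BTS numbers below are invented for illustration.

```python
# Rolling raw per-BTS facts up to city-level aggregated facts
# (the city name, BTS ids, and counts are hypothetical).
raw_facts = [
    # (city, bts_id, requested, allocated, rejected)
    ("Pretoria", "bts1", 1000, 820, 180),
    ("Pretoria", "bts2", 1500, 1400, 100),
    ("Pretoria", "bts3",  500,  490,  10),
]

def aggregate_by_city(facts):
    """Sum each raw fact over all BTSs in a city (the network dimension)."""
    summary = {}
    for city, _bts, req, alloc, rej in facts:
        r, a, j = summary.get(city, (0, 0, 0))
        summary[city] = (r + req, a + alloc, j + rej)
    return summary

print(aggregate_by_city(raw_facts))
# {'Pretoria': (3000, 2710, 290)}
```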
Dimensional structures are easy for business users to understand because the structure is divided into measurements/facts and context/dimensions. Facts are related to the organization's business processes and operational system, and dimensions are the context about them (Kimball, Ralph 2008). Another advantage is that the dimensional model does not necessarily require a relational database. Thus, this type of modeling technique is very useful for end-user queries in the data warehouse. The model of facts and dimensions can also be understood as a data cube, where dimensions are the categorical coordinates in a multi-dimensional cube, and the fact is a value corresponding to the coordinates. The main disadvantages of the dimensional approach are: It is complicated to maintain the integrity of facts and dimensions when loading the data warehouse with data from different operational systems. It is difficult to modify the warehouse structure if the organization changes the way it does business. Normalized approach In the normalized approach, the data in the warehouse are stored following, to a degree, database normalization rules. Normalized relational database tables are grouped into subject areas (for example, customers, products and finance). When used in large enterprises, the result is dozens of tables linked by a web of joins (Kimball, Ralph 2008). The main advantage of this approach is that it is straightforward to add information into the database. Disadvantages include that, because of the large number of tables, it can be difficult for users to join data from different sources into meaningful information and access the information without a precise understanding of the data sources and the data structure of the data warehouse. Both normalized and dimensional models can be represented in entity–relationship diagrams because both contain joined relational tables. The difference between them is the degree of normalization.
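A star schema of the kind described above can be sketched as one fact table whose rows reference dimension tables, with queries joining the two. This is a toy example in SQLite; the table names, columns, and figures are all illustrative.

```python
# A tiny star schema: numeric facts in fact_sales, context in dim_* tables.
# All names and values are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_customer (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_date     (id INTEGER PRIMARY KEY, day  TEXT);
    CREATE TABLE fact_sales   (customer_id INT, date_id INT,
                               units INT, total REAL);
    INSERT INTO dim_customer VALUES (1, 'ACME'), (2, 'BETA');
    INSERT INTO dim_date     VALUES (1, '2024-01-01'), (2, '2024-01-02');
    INSERT INTO fact_sales   VALUES (1, 1, 3, 30.0), (1, 2, 2, 20.0),
                                    (2, 1, 5, 50.0);
""")

# The dimension gives context to the facts: total sales per customer.
rows = con.execute("""
    SELECT c.name, SUM(f.units), SUM(f.total)
    FROM fact_sales f
    JOIN dim_customer c ON f.customer_id = c.id
    GROUP BY c.name ORDER BY c.name
""").fetchall()
print(rows)  # [('ACME', 5, 50.0), ('BETA', 5, 50.0)]
```

The fact table holds only foreign keys and numbers; everything human-readable lives in the dimensions, which is what makes the layout easy for business users to navigate.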
These approaches are not mutually exclusive, and there are other approaches. Dimensional approaches can involve normalizing data to a degree (Kimball, Ralph 2008). In Information-Driven Business, Robert Hillard compares the two approaches based on the information needs of the business problem. He concludes that normalized models hold far more information than their dimensional equivalents (even when the same fields are used in both models) but at the cost of usability. The technique measures information quantity in terms of information entropy and usability in terms of the Small Worlds data transformation measure. Design methods Bottom-up design In the bottom-up approach, data marts are first created to provide reporting and analytical capabilities for specific business processes. These data marts can then be integrated to create a comprehensive data warehouse. The data warehouse bus architecture is primarily an implementation of "the bus", a collection of conformed dimensions and conformed facts, which are dimensions that are shared (in a specific way) between facts in two or more data marts. Top-down design The top-down approach is designed using a normalized enterprise data model. "Atomic" data, that is, data at the greatest level of detail, are stored in the data warehouse. Dimensional data marts containing data needed for specific business processes or specific departments are created from the data warehouse. Hybrid design Data warehouses often resemble the hub and spokes architecture. Legacy systems feeding the warehouse often include customer relationship management and enterprise resource planning, generating large amounts of data. To consolidate these various data models, and facilitate the extract transform load process, data warehouses often make use of an operational data store, the information from which is parsed into the actual data warehouse. To reduce data redundancy, larger systems often store the data in a normalized way. 
Data marts for specific reports can then be built on top of the data warehouse. A hybrid (also called ensemble) data warehouse database is kept on third normal form to eliminate data redundancy. A normal relational database, however, is not efficient for business intelligence reports where dimensional modelling is prevalent. Small data marts can shop for data from the consolidated warehouse and use the filtered, specific data for the fact tables and dimensions required. The data warehouse provides a single source of information from which the data marts can read, providing a wide range of business information. The hybrid architecture allows a data warehouse to be replaced with a master data management repository where operational (not static) information could reside. The data vault modeling components follow hub and spokes architecture. This modeling style is a hybrid design, consisting of the best practices from both third normal form and star schema. The data vault model is not a true third normal form, and breaks some of its rules, but it is a top-down architecture with a bottom up design. The data vault model is geared to be strictly a data warehouse. It is not geared to be end-user accessible, which, when built, still requires the use of a data mart or star schema-based release area for business purposes. Characteristics There are basic features that define the data in the data warehouse that include subject orientation, data integration, time-variant, nonvolatile data, and data granularity. Subject-oriented Unlike the operational systems, the data in the data warehouse revolves around the subjects of the enterprise. Subject orientation is not database normalization. Subject orientation can be really useful for decision-making. Gathering the required objects is called subject-oriented. Integrated The data found within the data warehouse is integrated. Since it comes from several operational systems, all inconsistencies must be removed. 
These inconsistencies involve naming conventions, measurement of variables, encoding structures, physical attributes of data, and so forth. Time-variant While operational systems reflect current values as they support day-to-day operations, data warehouse data represents a long time horizon (up to 10 years), which means it stores mostly historical data. It is mainly meant for data mining and forecasting. (E.g. if a user is searching for a buying pattern of a specific customer, the user needs to look at data on current and past purchases.) Nonvolatile The data in the data warehouse is read-only, which means it cannot be updated, created, or deleted (unless there is a regulatory or statutory obligation to do so). Options Aggregation In the data warehouse process, data can be aggregated in data marts at different levels of abstraction. The user may start looking at the total sale units of a product in an entire region. Then the user looks at the states in that region. Finally, they may examine the individual stores in a certain state. Therefore, typically, the analysis starts at a higher level and drills down to lower levels of detail. Virtualization With data virtualization, the data used remains in its original locations and real-time access is established to allow analytics across multiple sources, creating a virtual data warehouse. This can aid in resolving some technical difficulties such as compatibility problems when combining data from various platforms, lowering the risk of error caused by faulty data, and guaranteeing that the newest data is used. Furthermore, avoiding the creation of a new database containing personal information can make it easier to comply with privacy regulations. However, with data virtualization, the connection to all necessary data sources must be operational as there is no local copy of the data, which is one of the main drawbacks of the approach.
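The region → state → store drill-down described under Aggregation can be sketched by grouping the same rows at successively finer levels of a location hierarchy. The sales figures below are invented for illustration.

```python
# Drill-down sketch: aggregate invented unit sales at region, state,
# or store level of a (region, state, store) hierarchy.
from collections import defaultdict

sales = [
    # (region, state, store, units_sold) — hypothetical data
    ("West", "CA", "store1", 120),
    ("West", "CA", "store2",  80),
    ("West", "OR", "store3",  50),
    ("East", "NY", "store4", 200),
]

def totals(rows, level):
    """Total units at a level of detail: 0=region, 1=state, 2=store."""
    out = defaultdict(int)
    for row in rows:
        out[row[:level + 1]] += row[3]  # key on the hierarchy prefix
    return dict(out)

print(totals(sales, 0))  # {('West',): 250, ('East',): 200}
print(totals(sales, 1))  # {('West', 'CA'): 200, ('West', 'OR'): 50, ('East', 'NY'): 200}
```

In a real warehouse the coarser levels would usually be precomputed as aggregated facts rather than recomputed per query, but the hierarchy-prefix idea is the same.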
Architecture An organization can use numerous methods to construct and organize its data warehouse. The hardware utilized, software created and data resources specifically required for the correct functionality of a data warehouse are the main components of the data warehouse architecture. All data warehouses have multiple phases in which the requirements of the organization are modified and fine-tuned. Evolution in organization use These terms refer to the level of sophistication of a data warehouse: Offline operational data warehouse Data warehouses in this stage of evolution are updated on a regular time cycle (usually daily, weekly or monthly) from the operational systems and the data is stored in an integrated reporting-oriented database. Offline data warehouse Data warehouses at this stage are updated from data in the operational systems on a regular basis and the data warehouse data are stored in a data structure designed to facilitate reporting. On-time data warehouse Data warehouses at this stage (online integrated data warehousing) are updated in real time: data in the warehouse is updated for every transaction performed on the source data. Integrated data warehouse These data warehouses assemble data from different areas of business, so users can look up the information they need across other systems. See also List of business intelligence software References Further reading Davenport, Thomas H. and Harris, Jeanne G. Competing on Analytics: The New Science of Winning (2007) Harvard Business School Press. Ganczarski, Joe. Data Warehouse Implementations: Critical Implementation Factors Study (2009) VDM Verlag. Kimball, Ralph and Ross, Margy. The Data Warehouse Toolkit Third Edition (2013) Wiley. Linstedt, Graziano, Hultgren. The Business of Data Vault Modeling Second Edition (2010) Dan Linstedt. William Inmon. Building the Data Warehouse (2005) John Wiley and Sons. Data engineering
Data warehouse
Engineering
https://en.wikipedia.org/wiki/List%20of%20fungi%20of%20South%20Africa%20%E2%80%93%20A
This is an alphabetical list of fungal taxa as recorded from South Africa. Currently accepted names have been appended. Ab Genus Abrothallus Pérez-Ort. & Suija (2013), Abrothallaceae (Lichenicolous fungi) Abrothallus parmeliarum Nyl. probably (Sommerf.) Arnold 1874 Ac Genus Acarospora A.Massal. (1852) Acarosporaceae (Lichenised fungi) Acarospora angolensis H. Magn. 1929 Acarospora austroafricana (Zahlbr.) H. Magn. 1933 Acarospora bella Jatta 1906 Acarospora bylii H. Magn. 1933 Acarospora calviniensis H. Magn. 1933 Acarospora capensis Zahlbr. 1926 Acarospora cervina A. Massal. (1852), Acarospora citrina (Taylor) Zahlbr. (1913). Acarospora crassilabra (Müll. Arg.) Zahlbr. 1927 Acarospora deserticola Zahlbr. 1926 Acarospora finckei Zahlbr. 1927 Acarospora finckei var. lobulata H. Magn. ex Zahlbr. 1932 Acarospora fuscata (Ach.) Arnold (1871) Acarospora immixta H. Magn. 1929 Acarospora initialis H. Magn. 1929 Acarospora initialis var. perfectior H. Magn. 1933 Acarospora insculpta H. Magn. 1933 Acarospora intermixta H. Magn. 1933 Acarospora intrusa H. Magn. 1933 Acarospora laeta H. Magn. 1933 Acarospora laeta var. annularis H. Magn. 1933 Acarospora laevigata H. Magn. 1933 Acarospora longispora H. Magn. 1933 Acarospora lucida H. Magn. 1929 Acarospora luderitzensis H. Magn. 1933 Acarospora macrospora (Hepp ex Nyl.) A. Massal. ex Bagl. (1857) Acarospora meridionalis H. Magn. 1932 Acarospora negligens H. Magn. (1929) Acarospora ochrophaea H. Magn. 1933 Acarospora ortendahlii H.Magn.* Acarospora perexigua (Müll. Arg.) Hue 1909 Acarospora porinoides (Stizenb.) Zahlbr. 1927 Acarospora rhodesiae H. Magn. 1933 Acarospora socialis H. Magn. 1929 Acarospora steineri H. Magn. 1933 Acarospora subbadia H. Magn. 1933 Acarospora subochracea H. Magn. 1932 Acarospora subtersa H. Magn. 1929 Acarospora sulphurata var. austroafricana Zahlbr. 1926 Acarospora tenuis (Vain.) H. Magn. 1929 Acarospora tersa Zahlbr. probably (Fr.) J. Steiner 1897 Acarospora tersa var. bella (Ach.) Vain.
ex Van der Byl 1931 Acarospora tersa var. tenuis Wain probably Vain. 1901 Acarospora tersa var. thaeodes Wain probably Vain. 1901 Acarospora tersa var. thaeodes Zahlbr.* Acarospora thaeodes A. Massal. 1861 Acarospora xanthophana (Nyl.) Jatta 1906 Family: Acarosporaceae Zahlbr. 1906 Genus: Achorion Remak 1845 Achorion schoenleinii Remak ex Guég. 1845, accepted as Trichophyton schoenleinii (Lebert) Langeron & Miloch. ex Nann., (1934) Genus: Acremoniella Sacc. 1886 Acremoniella sp. Genus: Acremonium Link 1809 Acremonium verticillatum Link 1809, accepted as Cladobotryum verticillatum (Link) S. Hughes, (1958) Acremonium sp. Genus: Acrospeira Berk. & Broome 1857 Acrospeira macrosporoidea (Berk. & Broome) Wiltshire 1938, accepted as Monodictys castaneae (Wallr.) S. Hughes, (1958) Genus: Actinonema Actinonema rosae (Lib.) Fr. 1849, accepted as Diplocarpon rosae (Lib.) F.A. Wolf, (1912) Genus: Actinopeltella Actinopeltella nitida Doidge 1924, accepted as Actinopeltis nitida (Doidge) Arx, (1962) Ae Genus: Aecidium Pers. 1796, accepted as Puccinia Pers., (1794), (Rust fungi) Aecidium acalyphicolum Doidge* Aecidium acanthopsidis Syd. & P. Syd. 1915 Aecidium albilabrum Kalchbr. 1871 Aecidium albo-atrum P.Henn.* Aecidium anceps Syd. & P. Syd. 1901 Aecidium ancylanthi Henn. 1903 Aecidium antholyzae Bubak* Aecidium ari Desm. 1823 accepted as Puccinia sessilis J. Schröt., (1870) [1869] Aecidium aroideum Cooke 1879 Aecidium australe Berk. 1843 Aecidium banketense Hopk. 1938 recorded as Aecidium banketensis Aecidium barleriae Doidge 1948 Aecidium baumanianum Henn. 1903, Aecidium baumii P.Henn.* Aecidium benguellense Lagerh. 1889 Aecidium berkleyae Henn. & Pole-Evans 1908 Aecidium bicolor Sacc. 1899 Aecidium brideliae Henn. & Pole-Evans 1908 Aecidium brunswigiae Henn. 1898 Aecidium bulbines Henn. & Pole-Evans 1908 Aecidium burtt-davyi Doidge 1939 Aecidium bylianum Syd. 1924 Aecidium capense Berk. & M.A. 
Curtis 1860 Aecidium cardiospermi Cooke 1882 accepted as Dietelia cardiospermi (Cooke) Berndt & A.R. Wood, (2012) Aecidium cephalandrae Cooke 1884 Aecidium cephalariae Syd. & P. Syd. 1912 Aecidium chlorophyti Kalchbr. probably Har. & Pat. 1909 Aecidium clarum Syd. & P. Syd. 1912 Aecidium clematidis-brachiatae Doidge 1927 Aecidium clerodendricola Henn. 1903 Aecidium clutiae Doidge 1927 recorded as Aecidium cluytiae Aecidium compositarum DC. , possibly Mart. 1817 accepted as Puccinia lapsanae Fuckel [as 'lampsanae'], (1860), or Rabenh. 1851 Aecidium conyzae-pinnatilobatae P. Syd. & Syd. 1923 Aecidium cookeanum De Toni 1888 Aecidium corycii Doidge 1927 Aecidium crini Kalchbr. 1882 Aecidium crypticum Kalchbr. & Cooke 1880 Aecidium cussoniae Kalchbr. 1882 Aecidium davyi Syd. & P. Syd. 1912, Aecidium decipiens Syd. & P. Syd. 1923 Aecidium denekiae Doidge 1927 Aecidium dielsii Henn. 1902 Aecidium dinteri Doidge 1939 Aecidium diospyri A.L. Sm. 1898 Aecidium dipcadi-viridis Doidge 1948 Aecidium dissotidis Cooke 1882 Aecidium doidgeae Syd. & P. Syd.(1912) recorded as Aecidium doidgei Aecidium dolichi Cooke 1882 accepted as Synchytrium dolichi (Cooke) Gäum., (1927) Aecidium dubiosum Syd. & P. Syd. 1901 Aecidium elegans Dietel 1888 accepted as Endophyllum macowanii Pole-Evans as 'macowani' (1909) [1908] Aecidium elytropappi Henn. 1898 accepted as Endophyllum elytropappi (Henn.) A.R. Wood & Crous, (2005) Aecidium englerianum Henn. & Lindau 1893 Aecidium eriospermi Henn. 1897 Aecidium evansii Henn. 1908 Aecidium fluggeae Doidge 1927 Aecidium flustra Berk. ex Syd. & P. Syd. 1923 Aecidium garckeanum Henn. 1891 Aecidium gomphostigmae Doidge 1927 Aecidium habunguense Henn. 1903 accepted as Puccinia agrophila Syd., (1937) Aecidium hartwegiae Thüm. 1877 Aecidium helichrysi Doidge 1927 Aecidium heliotropicola P.H.B. Talbot (1948) recorded as Aecidium heliotropicolum Aecidium hoffmanni P. Syd. & Syd. 
1923 Aecidium hibisci Cooke 1892 Aecidium impatientis-capensis Doidge 1927 Aecidium incertum Syd. & P. Syd. 1901 Aecidium inornatum Kalchbr. 1882 accepted as Ravenelia inornata (Kalchbr.) Dietel, (1894) Aecidium ipomoeae Thüm. 1840 accepted as Albugo ipomoeae-panduratae (Schwein.) Swingle, (1892) Aecidium kakelense Henn. 1903 Aecidium kraussiae P. Syd. & Syd. 1923 Aecidium lebeckiae Henn. 1898 Aecidium leguminosarum Rabenh. probably Opiz 1836, accepted as Uromyces viciae-fabae (Pers.) J. Schröt., (1875) Aecidium leiocarpum Syd. & P. Syd. 1917 Aecidium leonotidis Henn. 1895 accepted as Puccinia leonotidis (Henn.) Arthur,(1921) Aecidium litakunense Doidge 1939 Aecidium longaense Henn. 1903 Aecidium loranthi Cooke 1885 Aecidium macarangae P.Henn.* Aecidium macowanianum Thüm. 1875 Aecidium macowanianum f. conyzae-pinnatilobatae Thüm. 1875 accepted as Endophyllum macowanianum (Thüm.) Pole-Evans, (1907) Aecidium menthae DC., (1815), accepted as Puccinia menthae Pers. (1801) Aecidium menyharthi Henn. 1906 Aecidium metalasiae Syd. & P. Syd. 1912, accepted as Endophyllum metalasiae (Syd. & P. Syd.) A.R. Wood & Berndt, (2012) Aecidium moggii Doidge 1939 Aecidium myrsiphylli Kalchbr. 1882 Aecidium nestlerae Doidge 1948 Aecidium nidorellae Doidge 1927 Aecidium ornamentale Kalchbr. 1875 accepted as Ravenelia ornamentalis (Kalchbr.) Dietel, (1906) Aecidium ornithogaleum Bubák 1905, accepted as Puccinia hordei G.H.Otth (1871)[1870] Aecidium ornithogali Kalchbr. possibly *Aecidium ornithogaleum Bubák 1905, Aecidium osteospermi Doidge 1927 accepted as Endophyllum osteospermi (Doidge) A.R. Wood, (1998) Aecidium osyridicarpi Massee 1911 accepted as Puccinia osyridicarpi (Massee) Grove [as osyridocarpi], (1916) Aecidium oxalidis Thüm. 1876 accepted as Puccinia sorghi Schwein., (1832) [1834] Aecidium pachystigmae Doidge 1927, Aecidium pelargonii Thüm. 1877, accepted as Puccinia pelargonii (Thüm.) P. Syd. & Syd., (1904) Aecidium pentziae-globosae Doidge 1948 Aecidium permultum Syd. & P. 
Syd. 1912 Aecidium pienarii Doidge (1927) Aecidium plectranthi Barclay 1890 accepted as Coleosporium plectranthi (Barclay) Sacc., (1891) Aecidium plectroniae Cooke 1882 Aecidium plectroniicola Henn. 1903 Aecidium pottsii Doidge 1927 Aecidium pretoriense Doidge 1927 Aecidium pychnostachydis Syd. possibly Aecidium pycnostachydis (Kalchbr.) Doidge 1927, Aecidium rafniae MacOwan Aecidium ranunculacearum DC. 1815, accepted as Uromyces dactylidis G.H. Otth, (1861) Aecidium relhaniae Dippen. 1931 Aecidium resinaecola (F. Rudolphi) G. Winter 1884 recorded as Aecidium resinicolum Aecidium resinaecola var. tumefaciens G. Winter 1884 recorded as Aecidium resinicolum var. tumefaciens Aecidium rhamni Pers. may be Aecidium rhamni J.F. Gmel. (1792), accepted as Puccinia coronata Corda (1837) Aecidium rhamni f. Rhamni prinoides Thuem.* Aecidium rhynchosiae Cooke 1882 accepted as Synchytrium dolichi (Cooke) Gäum., (1927) Aecidium royenae Cooke & Massee 1889 Aecidium rubellum Pers. ex J.F. Gmel. 1792 accepted as Puccinia phragmitis (Schumach.) Tul., (1854) Aecidium rumicis f. Rumicis eckloniana Thuem.* Aecidium schlechterianum Henn. 1898 Aecidium senecionis Desm. 1836 Aecidium senecionis f. Senecionis mikanoides Thuem.* Aecidium senecionis f. Senecionis napifolii Thuem.* Aecidium senecionis f. Senecionis quinquelobi Thuem.* Aecidium senecionum Desm.* Aecidium serrae Syd. & P. Syd. 1912 Aecidium spinicolum Doidge Aecidium spinicola Doidge [as 'spinicolum'], (1948) Aecidium stobaeae Kalchbr. & Cooke 1879 Puccinia stobaeae MacOwan [as 'stobeae'], (1882) Aecidium talinophilum P. Syd. & Syd. 1923 Aecidium tetragonii Doidge Aecidium tetragoniae Doidge, (1939) Aecidium thunbergiae Cooke 1882 accepted as Puccinia thunbergiae Cooke, (1882) Aecidium tinneae Henn. 1903 Aecidium transvaaliae Henn. & Pole-Evans 1908 Aecidium truncatum Kalchbr. Aecidium tubulosum Pat. & Gaillard 1889 Aecidium tylophorae Cooke 1890 Aecidium uleanum Pazschke 1892 Aecidium urgineae Henn. 
& Pole-Evans 1908 Aecidium urtica Schw. possibly one of: Aecidium urticae Schumach. 1803, accepted as Puccinia urticata; Aecidium urticae DC. 1815; or Aecidium urticae Sawada 1944; Aecidium vangueriae Cooke 1882 Aecidium vernoniae-monocephalae Doidge 1927 Aecidium vernoniae-podocomae Doidge 1927 Aecidium viborgiae P.Henn. Aecidium wiborgiae Henn. [as 'viborgiae'], (1898) Aecidium vignae Cooke Aecidium vitis A.L.Sm. Aecidium welwitschii Lagerh. Aecidium wiborgiae Henn. (1898) recorded as viborgiae Aecidium withaniae Thüm. 1877 Aecidium woodianum Doidge 1927 Aecidium sp. Genus: Aegerita Pers. 1794, Aegerita penniseti Henn., (1904), accepted as Beniowskia sphaeroidea (Kalchbr. & Cooke) E.W. Mason, (1928) Ag Family: Agaricaceae Chevall. 1826 Genus: Agaricus L. 1753 Agaricus (Amanita) muscarius Linn, ex Fr. L. 1753, accepted as Amanita muscaria (L.) Lam., (1783) Agaricus (Amanita) praetorius Fr. 1838 basionym Agaricus praetorius Fr., (1838) Agaricus campestris L. 1753 Agaricus comtulus Fr. 1838 Agaricus dialeri (Bres. & Torrend) Sacc. & Trotter 1912 accepted as Leucoagaricus dialeri (Bres. & Torrend) D.A. Reid, (1975) Agaricus (Clitocybe) amarus (Alb. & Schwein.) Fr. 1821 accepted as Lepista amara (Alb. & Schwein.) Maire, (1930) Agaricus (Clitocybe) expallens Pers. 1801 accepted as Pseudoclitocybe expallens (Pers.) M.M. Moser, (1967) Agaricus (Clitocybe) fragrans Sow. ex Fr. Fr. 1815? Agaricus (Clitocybe) gentianeus Quel* Agaricus (Clitocybe) laccata Scop, ex Fr. probably Agaricus laccatus Scop., (1772) Agaricus (Clitocybe) membranaceus Fr. possibly one of; Agaricus membranaceus Hoffm. 1787; Agaricus membranaceus Scop. 1788, accepted as Homophron cernuum; Agaricus membranaceus Bolton 1788; Agaricus membranaceus Vahl 1790, accepted as Infundibulicybe gibba; Agaricus membranaceus Cooke & Massee 1892, accepted as Lepiota membranacea Agaricus (Clitocybe) sinopicus Fr. 1818 accepted as Bonomyces sinopicus (Fr.) Vizzini, (2014) Agaricus (Clitocybe) splendens Pers.
1801 accepted as Paralepista splendens (Pers.) Vizzini, (2012) Agaricus (Clitocybe) trullaeformis Fr. Agaricus trulliformis Fr. [as 'trullaeformis'], (1821) accepted as Infundibulicybe trulliformis (Fr.) Gminder, (2016) Agaricus (Clitocybe) zizyphinus Vivian Agaricus ziziphina Viv., (1834) syn. Clitocybe ziziphina (Viv.) Sacc., (1887) Agaricus (Collybia) acervatus Fr. 1821 accepted as Connopus acervatus (Fr.) K.W. Hughes, Mather & R.H. Petersen 2010 Agaricus (Collybia) alveolatus Kalchbr. 1881 accepted as Hymenopellis alveolata (Kalchbr.) R.H. Petersen [as alveolatus], (2010) Agaricus (Collybia) butyraceus Bull. 1792 accepted as Rhodocollybia butyracea (Bull.) Lennox, (1979) Agaricus (Collybia) chortophilus Berk. 1843 accepted as Neoclitocybe chortophila (Berk.) D.A. Reid, (1975) Agaricus (Collybia) confluens Pers. 1796 accepted as Collybiopsis confluens (Pers.) R.H. Petersen, (2021) Agaricus (Collybia) dryophilus Bull. 1790 accepted as Gymnopus dryophilus (Bull.) Murrill, N. (1916) Agaricus (Collybia) extuberans Fr. 1838 accepted as Gymnopus ocior (Pers.) Antonín & Noordel., (1997) Agaricus (Collybia) homotrichus Berk. possibly Agaricus hemitrichus Pers. 1801, accepted as Cortinarius hemitrichus (Pers.) Fr., (1838) Agaricus (Collybia) macilentus Fr. 1821 accepted as Agaricus macilentus Fr., (1821) Agaricus (Collybia) melinosarcus Kalchbr. 1876 accepted as Agaricus melinosarcus Kalchbr., (1876) Agaricus (Collybia) radicatus Relhan 1786 accepted as Hymenopellis radicata (Relhan) R.H. Petersen, (2010) Agaricus (Collybia) radicatus var. brachypus Kalchbr. 1881 accepted as Hymenopellis radicata (Relhan) R.H. Petersen, (2010) Agaricus (Collybia) stridulus Fr. 1870 accepted as Melanoleuca stridula (Fr.) Singer, (1943) Agaricus (Collybia) velutipes Fr. possibly Curtis 1782, accepted as Flammulina velutipes (Curtis) Singer, (1951) [1949] Agaricus (Coprinus) ephemerus Bull. 1786 accepted as Coprinellus ephemerus (Bull.)
Redhead, Vilgalys & Moncalvo, (2001) Agaricus (Crepidotus) applanatus Pers. 1796 accepted as Crepidotus applanatus (Pers.) P. Kumm., (1871) Agaricus (Crepidotus) episphaeria Berk. 1846 accepted as Agaricus episphaeria Berk. 1846 Agaricus (Crepidotus) inandae Cooke 1890 accepted as Agaricus inandae Cooke 1890 Agaricus (Crepidotus) pogonatus Kalchbr. 1881 accepted as Agaricus pogonatus Kalchbr. 1881 Agaricus (Crepidotus) proteus Kalchbr. 1876 accepted as Melanotus proteus (Sacc.) Singer, (1946) Agaricus (Crepidotus) scalaris var. lobulatus Kalchbr. 1881 Agaricus scalaris var. lobulatus Kalchbr., (1881) Agaricus cretaceus Fr. * Agaricus (Entoloma) sagittaeformis Kalchbr. & Cooke Agaricus sagittiformis Kalchbr. & Cooke [as sagittaeformis], (1881) accepted as Termitomyces sagittiformis (Kalchbr. & Cooke) D.A. Reid [as sagittaeformis], (1975) Agaricus (Flammula) alnicola Fr. 1821 accepted as Flammula alnicola (Fr.) P. Kumm., (1871) Agaricus (Flammula) flavidus Schaeff. 1774 accepted as Pholiota flavida (Schaeff.) Singer, (1951) [1949] Agaricus (Flammula) harmoge Fr. 1838 accepted as Agaricus harmoge Fr. 1838 Agaricus (Flammula) janus Berk. & Br. accepted as Agaricus janus Berk. & Broome 1871 Agaricus (Flammula) tilopus Kalchbr. & MacOwan 1881 accepted as Pholiota tilopus (Kalchbr. & MacOwan) D.A. Reid, (1975) Agaricus (Galera) eatoni Berk. 1876 accepted as Agaricus eatonii Berk. 1876 Agaricus (Galera) hypnorum Schrank 1789 accepted as Galerina hypnorum (Schrank) Kühner, (1935) Agaricus (Galera) peroxydatus Berk. 1843 accepted as Conocybe peroxydata (Berk.) D.A. Reid, (1975) Agaricus (Galera) tener Schaeff. 1774 accepted as Conocybe tenera (Schaeff.) Kühner, (1935) Agaricus (Hebeloma) spoliatus Fr. 1838 accepted as Hebeloma spoliatum (Fr.) Gillet, [1878] Agaricus (Hypholoma) candolleanus Fr. 1818 accepted as Psathyrella candolleana (Fr.) Maire, (1937) Agaricus (Hypholoma) capnolepis Kalchbr.
1881 accepted as Agaricus capnolepis Kalchbr., (1881) Agaricus (Hypholoma) fascicularis Huds. 1778 accepted as Hypholoma fasciculare (Huds.) P. Kumm., (1871) Agaricus (Lepiota) africamus Kalchbr. possibly Agaricus africanus (Fayod) Sacc. 1895 Agaricus (Lepiota) atricapillus Cooke & Massee 1888 accepted as Agaricus atricapillus Cooke & Massee 1888 Agaricus (Lepiota) cuculliformis Fr. 1849 accepted as Agaricus cuculliformis Fr. 1849 Agaricus (Lepiota) excoriatus Schaeff. 1774 accepted as Macrolepiota excoriata (Schaeff.) Wasser, (1978) Agaricus (Lepiota) kunzei Fr. 1849 accepted as Agaricus kunzei Fr., (1849) [1848] Agaricus (Lepiota) magnannulatus Kalchbr. 1881 accepted as Agaricus magnannulatus Kalchbr. 1881 Agaricus (Lepiota) montagnei Kalchbr. 1881 accepted as Agaricus montagnei Kalchbr. 1881 Agaricus (Lepiota) polysarcos Kalchbr. & MacOwan (1881) accepted as Agaricus polysarcos Kalchbr. & MacOwan, (1881) Agaricus (Lepiota) procerus Scop. 1772 accepted as Agaricus procerus Scop., (1772) Agaricus (Lepiota) pteropus Kalchbr. & MacOwan 1880 accepted as Agaricus pleropus Kalchbr. & MacOwan [as pteropus], (1880) Agaricus (Lepiota) rubricatus Berk. & Br. 1871 accepted as Agaricus rubricatus Berk. & Broome, (1871) Agaricus (Lepiota) sulfurellus Kalchbr. & MacOwan accepted as Agaricus sulfurellus Kalchbr. 1879 Agaricus (Lepiota) various Kalchbr. & MacOwan. possibly one of: Agaricus varius Schaeff., (1774) accepted as Cortinarius varius (Schaeff.) Fr., (1838) [1836-1838] or Agaricus varius Bolton, (1788) accepted as Panaeolus fimicola (Pers.) Gillet, (1878) Agaricus (Lepiota) zeyheri Berk. 1843 accepted as Leucocoprinus zeyheri (Berk.) Singer, (1943) Agaricus (Lepiota) zeyheri var. telosus Kalchbr. & MacOwan 1881 Agaricus (Lepiota) zeyheri var. verrucellosus Kalchbr. possibly Miq. ex Kalchbr. 1881 Agaricus muscarius Linn. 1753 accepted as Amanita muscaria (L.) Lam., (1783) Agaricus (Mycena) actiniceps Kalchbr. & Cooke 1881 accepted as Marasmius actiniceps (Kalchbr.
& Cooke) D.A. Reid, (1975) Agaricus (Mycena) argutus Kalchbr. 1881 Agaricus argutus Kalchbr., (1881) Agaricus (Mycena) capillaris Schumach. 1803 accepted as Mycena capillaris (Schumach.) P. Kumm., (1871) Agaricus (Mycena) clavicularis Fr. 1821 accepted as Mycena clavicularis (Fr.) Gillet, (1876) [1878] Agaricus (Mycena) corticola Fr. 1821 accepted as Mycena clavicularis (Fr.) Gillet, (1876) [1878] Agaricus (Mycena) debilis Fr. 1838 Agaricus debilis Fr. 1838 Agaricus (Mycena) dilatatus Fr. 1815 accepted as Mycena stylobates (Pers.) P. Kumm., (1871) Agaricus (Mycena) dregeanus Lév. 1846 Agaricus dregeanus Lév. 1846 Agaricus (Mycena) galeropsis Fr. 1877 Agaricus galeropsis Fr. 1877 Agaricus (Mycena) heliscus Berk. & Broome 1871 accepted as Gloiocephala helisca (Berk. & Broome) Pegler, (1986) Agaricus (Mycena) hiemalis Osbeck 1788 accepted as Phloeomana hiemalis (Osbeck) Redhead, (2016) Agaricus (Mycena) macrorrhizus Fr. 1848 Agaricus macrorrhizus Fr. 1848 Agaricus (Mycena) rhodiophyllus Kalchbr. 1881 Agaricus rhodiophyllus Kalchbr. 1881 Agaricus (Mycena) sciolus Kalchbr. 1881 Agaricus sciolus Kalchbr. 1881 Agaricus (Mycena) tintinnabulum (Paulet) Fr. 1838 Agaricus tintinnabulum (Paulet) Fr. 1838 Agaricus (Mycena) vitreus Fr. 1821 Mycena vitrea (Fr.) Quél., (1872) Agaricus (Naucoria) arenicola Berk. 1843 accepted as Agrocybe pediades (Fr.) Fayod, (1889) Agaricus (Naucoria) furfuraceus Pers. 1801 accepted as Tubaria furfuracea (Pers.) Gillet, (1876) [1878] Agaricus (Naucoria) pediades Fr. 1821 accepted as Agrocybe pediades (Fr.) Fayod, (1889) Agaricus (Naucoria) pygmaeus Bull. 1791 accepted as Psathyrella pygmaea (Bull.) Singer, (1951) [1949] Agaricus (Naucoria) semiorbicularis Bull. 1789 accepted as Agrocybe pediades (Fr.) Fayod, (1889) Agaricus (Naucoria) undulosus Jungh. possibly Agaricus undulosus Fr., (1838) [1836-1838] Agaricus (Nolanea) castus MacOwan 1881 accepted as Mycena casta (MacOwan) D.A. Reid, (1975) Agaricus (Omphalia) griseo-pallidus Desm. 
(1826) accepted as Arrhenia griseopallida (Desm.) Watling, (1989) [1988] Agaricus (Omphalia) integrellus Pers. 1800 accepted as Delicatula integrella (Pers.) Fayod, (1889) Agaricus (Omphalia) linopus Kalchbr. 1881 accepted as Agaricus linopus Kalchbr., (1881) Agaricus (Omphalia) micromeles Berk. & Broome 1871 accepted as Agaricus micromeles Berk. & Broome, (1871) Agaricus (Omphalia) paurophyllus Berk. 1876 accepted as Agaricus paurophyllus Berk., (1876) Agaricus (Omphalia) polypus Kalchbr. 1877 accepted as Marasmius polypus (Kalchbr.) D.A. Reid, (1975) Agaricus (Omphalia) rusticus Fr. 1838 accepted as Arrhenia rustica (Fr.) Redhead, Lutzoni, Moncalvo & Vilgalys, (2002) Agaricus (Omphalia) scyphiformis Fr. 1818 accepted as Agaricus scyphiformis Fr., (1818) Agaricus (Omphalia) scyphoides Fr. 1821 accepted as Clitopilus scyphoides (Fr.) Singer, (1946) Agaricus (Omphalia) syndesmius Kalchbr. 1881 accepted as Agaricus syndesmius Kalchbr., (1881) Agaricus (Omphalia) umbelliferus Linn. ex Fr. var. cinnabarinus Berk. possibly Agaricus umbellifer L., (1753) (Checked to here on Index Fungorum) Agaricus (Panaeolus) caliginosus Jungh. 1830 accepted as Agaricus caliginosus Jungh., (1830) Agaricus (Panaeolus) campanulatis L. 1753 accepted as Panaeolus papilionaceus (Bull.) Quél., (1872) Agaricus (Panaeolus) fimicolus Fr. accepted as Agaricus fimicola Fr., (1821) Agaricus (Panaeolus) papilionaceus Bull. 1781 accepted as Panaeolus papilionaceus (Bull.) Quél., (1872) Agaricus (Panaeolus) separatum Linn. ex Fr. Agaricus separatus L. 1753 accepted as Panaeolus semiovatus (Sowerby) S. Lundell & Nannf. (1938) Agaricus (Pholiota) aurivellus Batsch ex Fr.* Agaricus (Pholiota) mycenoides Fr.* Agaricus (Pholiota) spectabilis Fr. possibly Weinm. 1824, accepted as Gymnopilus junonius (Fr.) P.D. Orton, (1960) Agaricus (Pholiota) togularis Bull. 1793 accepted as Agrocybe praecox (Pers.)
Fayod, (1889) Agaricus (Pholiota) unicolor Vahl 1792 accepted as Galerina marginata (Batsch) Kühner, (1935) Agaricus (Pleurotus) atrocaeruleus Fr. accepted as Agaricus atrocoeruleus Fr. [as atrocœruleus], (1815), accepted as Hohenbuehelia atrocoerulea (Fr.) Singer [as atrocaerulea], (1951) [1949] Agaricus (Pleurotus) aureo-tomentosus Kalchbr. Agaricus aureotomentosus Kalchbr. [as aureo-tomentosus], (1880) Agaricus (Pleurotus) caveatus Berk. & M.A. Curtis 1868, accepted as Crepidotus caveatus (Berk. & M.A. Curtis) Murrill, (1916) Agaricus (Pleurotus) clusilis Kalchbr. 1880 accepted as Marasmiellus clusilis (Sacc.) D.A. Reid, (1975) Agaricus (Pleurotus) contrarius Kalchbr. 1881 accepted as Marasmiellus contrarius (Sacc.) D.A. Reid, (1975) Agaricus (Pleurotus) flabellatus Berk. & Broome 1871, accepted as Pleurotus flabellatus Sacc., (1887) Agaricus (Pleurotus) gilvescens Kalchbr. 1881 accepted as Agaricus gilvescens Kalchbr. 1881 Agaricus (Pleurotus) limpidus Fr. 1838 accepted as Agaricus limpidus Fr., (1838) [1836-1838] Agaricus (Pleurotus) olearius DC. 1815 accepted as Omphalotus olearius (DC.) Singer, (1948) [1946] Agaricus (Pleurotus) perpusillus Fr. possibly Agaricus perpusillus Lumn. 1791 Agaricus (Pleurotus) radiatim-plicatus Kalchbr. 1881 accepted as Marasmiellus radiatim-plicatus (Kalchbr.) D.A. Reid, (1975) Agaricus (Pleurotus) sciadeum Kalchbr. & MacOwan 1881 Agaricus sciadeum Kalchbr. & MacOwan 1881, accepted as Hohenbuehelia sciadium (Kalchbr. & MacOwan) Singer [as sciadea], (1951) [1949] Agaricus (Pleurotus) sciadeum var. salmoneus Kalchbr. & MacOwan, (1881) accepted as Phyllotopsis salmonea (Kalchbr. & MacOwan) D.A. Reid [as Phylotopis salmoneus], (1975) Agaricus (Pleurotus) septicus Fr. 1821 accepted as Agaricus septicus Fr., (1821) Agaricus (Pleurotus) striatulus Fr. may be one of: Agaricus striatulus J.F. Gmel. 1792, accepted as Gloeophyllum striatum (Fr.) Murrill, Bull. (1905); Agaricus striatulus Pers.
1801, accepted as Resupinatus striatulus (Pers.) Murrill, (1915) or Agaricus striatulus Schumach. 1803 Agaricus (Pluteus) cervinus Schaeff. 1774 accepted as Pluteus cervinus (Schaeff.) P. Kumm., (1871) Agaricus (Psalliota) arvensis Schaeff. 1774 accepted as Agaricus arvensis Schaeff., (1774) Agaricus (Psalliota) arvensis var. grossus Berk.* Agaricus (Psalliota) campestris L. 1753 accepted as Agaricus campestris L. [as campester], (1753) Agaricus (Psalliota) campestris (b) praticola. probably Agaricus campestris var. praticola Vittad. ex Fr. 1838, accepted as Agaricus campestris L. [as campester], (1753) Agaricus (Psalliota) pratensis var. australis Berk. 1843 accepted as Cuphophyllus pratensis (Pers.) Bon, (1985) [1984] Agaricus (Psalliota) sylvaticus Schaeff. 1774 accepted as Agaricus sylvaticus Schaeff., (1774) Agaricus (Psathyra) corrugis Pers. 1794 accepted as Psathyrella corrugis (Pers.) Konrad & Maubl., (1949) [1948] Agaricus (Psathyra) spadiceo-griseus Schaeff. 1774 accepted as Psathyrella spadiceogrisea (Schaeff.) Maire, (1937) Agaricus (Psathyrella) disseminatus Pers. 1801 accepted as Coprinellus disseminatus (Pers.) J.E. Lange [as disseminata], (1938) Agaricus (Psathyrella) gracilis Fr. 1821, accepted as Psathyrella corrugis (Pers.) Konrad & Maubl., (1949) [1948] Agaricus (Psathyrella) pronus Fr. 1838 accepted as Psathyrella prona (Fr.) Gillet, (1878) Agaricus (Psathyrella) subtilis Fr. 1821 accepted as Agaricus subtilis Fr., (1821) Agaricus (Psathyrella) sp. Agaricus (Psilocybe) atrorufus Schaeff. 1774 accepted as Deconica montana (Pers.) P.D. Orton, (1960) Agaricus (Psilocybe) atrorufus var. montanus Pers. ex Fr.* Agaricus (Psilocybe) ericaeus Pers. 1801 accepted as Hypholoma ericaeum (Pers.) Kühner, Bull. (1936) Agaricus (Psilocybe) foenisecii Pers. 1800 accepted as Panaeolina foenisecii (Pers.) Maire, (1933) Agaricus (Psilocybe) semilanceatus Fr. 1818 accepted as Psilocybe semilanceata (Fr.) P. Kumm., (1871) Agaricus (Psilocybe) squalens Fr.
1838 accepted as Agaricus squalens Fr., (1838) [1836-1838] Agaricus (Psilocybe) taediosus Kalchbr. 1880 accepted as Stropharia taediosa (Kalchbr.) D.A. Reid, (1975) Agaricus (Psilocybe) udus Pers. 1801 accepted as Bogbodia uda (Pers.) Redhead, (2013) Agaricus purpuratus Kalchbr. var. cinerea possibly Agaricus purpuratus Cooke & Massee 1890 Agaricus (Schulzeria) umkowaani Cooke & Massee, 1889 accepted as Termitomyces umkowaan (Cooke & Massee) D.A. Reid [as umkowaani], (1975) Agaricus (Stropharia) melaspermus Bull. ex Fr. possibly Agaricus melaspermus Fr., (1838) [1836-1838] Agaricus (Stropharia) obturatus Fr. 1821 accepted as Psilocybe coronilla (Bull.) Noordel., (1995) Agaricus (Stropharia) olivaceo-flava Kalchbr. & MacOwan accepted as Agaricus olivaceoflavus Kalchbr. & MacOwan [as olivaceo-flavus], (1881) Agaricus (Stropharia) semiglobatus Batsch 1786 accepted as Protostropharia semiglobata (Batsch) Redhead, Moncalvo & Vilgalys, (2013) Agaricus (Tricholoma) caffrorum Kalchbr. & MacOwan 1881 accepted as Lepista caffrorum (Kalchbr. & MacOwan) Singer, (1951) [1949] Agaricus (Tricholoma) caffrorum var. sulonensis Kalchbr. & MacOwan 1881 accepted as Lepista caffrorum (Kalchbr. & MacOwan) Singer, (1951) [1949] Agaricus (Tricholoma) georgii Clus. ex Fr. possibly L. 1753, accepted as Calocybe gambosa (Fr.) Donk, (1962) Agaricus (Tricholoma) melaleucus var. porphyroleucus (Bull.) Fr. 1821 accepted as Melanoleuca polioleuca (Fr.) Kühner & Maire, Bull. (1934) Agaricus (Tricholoma) ustalis Fr. 1818 accepted as Tricholoma ustale (Fr.) P. Kumm., (1871) Agaricus (Volvaria) bombycinus Schaeff. 1774 accepted as Volvariella bombycina (Schaeff.) Singer, (1951) [1949] Ai Genus: Aithaloderma Syd. & P. Syd. 1913 Aithaloderma capense [as capensis] Doidge 1927 accepted as Chaetothyrium capense (Doidge) Hansf. (1950) Al Genus: Alectoria Ach. 1809? (lichens) Alectoria chalybeiformis (L.) Röhl. 1813 f. terrestris Stizenb. 1890 Alectoria jubata (L.) Ach. 1810 Alectoria usneoides (Ach.) Ach.
1810 accepted as Ramalina usnea (L.) R. Howe, (1914) Genus: Aleurodiscus Aleurodiscus acerinus (Pers.) Höhn. & Litsch. 1907 accepted as Dendrothele acerina (Pers.) P.A. Lemke, (1965) Aleurodiscus acerinus var. longisporus Höhn. & Litsch. 1907 accepted as Dendrothele acerina (Pers.) P.A. Lemke, (1965) Aleurodiscus capensis Lloyd 1920 accepted as Aleurocystis capensis (Lloyd) Lloyd, (1920) [1921] Aleurodiscus cerussatus [as cerrussatus] (Bres.) Höhn. & Litsch. 1907 Aleurodiscus corneus Lloyd 1920, Aleurodiscus disciformis (DC.) Pat. 1894 Genus: Allantonectria Earle 1901, accepted as Thyronectria Sacc., (1875), Sordariomycetes Allantonectria miltina (Durieu & Mont.) Weese 1910 Genus: Allarthothelium (Vain.) Zahlbr. 1908, accepted as Arthonia Ach., (1806) (Ramalinaceae C. Agardh [as 'Ramalineae'], (1821)) Allarthothelium minimum Vain. 1926, accepted as Bilimbia minima (Vain.) Räsänen, (1943) Genus: Allomyces E.J. Butler 1911 Allomyces arbusculus E.J. Butler 1911 Genus: Aloysiella Mattir. & Sacc. 1908, accepted as Metacapnodium Speg., (1918) Aloysiella ruwenzorensis Mattir. & Sacc. 1908 Genus: Alternaria Nees 1816 Alternaria allii Nolla. (1927), accepted as Alternaria solani Sorauer, (1896) Alternaria brassicae (Berk.) Sacc. 1880 Alternaria brassicae f. phaseoli [as var. phaseoli] Brunaud 1894 accepted as Alternaria brassicae (Berk.) Sacc., (1880) Alternaria circinans (Berk. & M.A. Curtis) P.C. Bolle 1924 accepted as Alternaria brassicicola (Schwein.) Wiltshire, (1947) Alternaria citri Ellis & N. Pierce 1902 Alternaria crassa (Sacc.) Rands (1917) Alternaria cucumerina (Ellis & Everh.) J.A. Elliott 1917 Alternaria dianthi F. Stevens & J.G. Hall 1909 Alternaria gossypina (Thüm.) J.C.F. Hopkins 1931 Alternaria herculea (Ellis & G. Martin) J.A. Elliott (1917), accepted as Alternaria brassicae (Berk.) Sacc., (1880) Alternaria longipes Tisd. & Wadk. possibly (Ellis & Everh.) E.W. Mason 1928, Alternaria macrospora Zimm.
(1904), Alternaria macrospora (group) possibly accepted as Alternaria brassicae (Berk.) Sacc. (1880) Alternaria solani (Ellis & G. Martin) L.R. Jones & Grout 1896 Alternaria solani (group) Alternaria tabacina (Ellis & Everh.) Hori (1903) Alternaria tenuis Nees (1817), accepted as Alternaria alternata (Fr.) Keissl. (1912) Alternaria violae L.D. Galloway & Dorsett 1900 Alternaria sp. Am Genus: Amanita Amanita mappa Quel. possibly (Batsch) Bertill. 1866, accepted as Amanita citrina Pers., Tent. (1797) Amanita muscaria S.F. Gray possibly (L.) Lam., (1783) Amanita pantherina Quel. possibly (DC.) Krombh. 1846 Amanita phalloides Secr. 1833 Amanita rubescens (Pers. ex Fr.) Gray (1797) possibly Pers. 1797 Amanita solitaria Secr. Genus: Amanitopsis Roze accepted as Amanita Pers. (1794) Amanitopsis praetoria (Fr.) Sacc. 1887 basionym Agaricus praetorius Fr. 1838 Family: Amaurochaetaceae Rostaf. ex Cooke 1877 Genus: Amaurochaete Rostaf. 1873 Amaurochaete fuliginosa (Sowerby) T. Macbr. 1899, Genus: Amauroderma Murrill, (1905) Amauroderma argenteofulvum (Van der Byl) Doidge 1950 Amauroderma fuscoporia Wakef. 1948 accepted as Amauroderma fuscoporium Wakef. [as 'fuscoporia'], (1948) Amauroderma rugosum Lloyd possibly (Blume & T. Nees) Torrend 1920, accepted as Sanguinoderma rugosum (Blume & T. Nees) Y.F. Sun, D.H. Costa & B.K. Cui, (2020) Genus: Amazonia Theiss. 1913 Amazonia asterinoides (G. Winter) Theiss. 1913, Amazonia goniomae Doidge 1924 Genus: Amphiloma Nyl. (1855), accepted as Lepraria Ach. (1803) Amphiloma elegans (Link) Körb. 1855 accepted as Xanthoria elegans (Link) Th. Fr., (1860) Amphiloma elegantissimum (Nyl.) Müll. Arg. 1888, accepted as Stellarangia elegantissima (Nyl.) Frödén, Arup & Søchting, (2013) Amphiloma eudoxum Müll. Arg. 1888, accepted as Teloschistopsis eudoxa (Müll. Arg.) Frödén, Arup & Søchting, (2013) Amphiloma leucoxanthum Müll. Arg. 1888 An Genus: Anaptychia Körb. 1848 Anaptychia corallophora Wain. probably Anaptychia coralliphora (Taylor) Zahlbr.
[as 'corallophora'], (1931), accepted as Polyblastidium corallophorum (Taylor) Kalb, (2015) Anaptychia dactyliza (Nyl.) Zahlbr. 1924 Anaptychia granulifera (Ach.) A. Massal. 1853 Anaptychia hypoleuca (Ach.) A. Massal. 1860, accepted as Polyblastidium hypoleucum (Ach.) Kalb, (2015) Anaptychia hypoleuca var. colorata Zahlbr. 1927 Anaptychia leucomela Massal. Anaptychia leucomelos (L.) A. Massal. [as 'leucomela'], (1890) accepted as Leucodermia leucomelos (L.) Kalb, (2015) Anaptychia leucomelaena Vain.* Anaptychia leucomelaena var. angustifolia Müll. Arg. possibly Anaptychia leucomelos var. angustifolia (Meyen & Flot.) Müll. Arg., (1894) accepted as Anaptychia leucomelaena var. multifida f. squarrosa Vain. possibly Anaptychia leucomelos f. squarrosa Vain. [as Anaptychia leucomelaena f. squarrosa], (1901) or Anaptychia leucomelos var. multifida (Meyen & Flot.) Vain., (1890) Anaptychia obesa f. caesiocrocata (Nyl.) Zahlbr. 1931 Anaptychia palpebrata (Taylor) Vain. 1898 Anaptychia podocarpa (Bél.) A. Massal. 1860 accepted as Heterodermia podocarpa (Bél.) D.D. Awasthi, (1973) Anaptychia speciosa (Wulfen) A. Massal. 1853 accepted as Heterodermia speciosa (Wulfen) Trevis., (1868) Anaptychia speciosa f. sorediosa (Müll. Arg.) Zahlbr. 1931 Anaptychia speciosa var. esorediata Vain. 1901 Anaptychia speciosa var. lobulifera Vain. 1901 Genus: Anelleria Anelleria separata Karst.* Genus: Angelina Fr. 1849 Angelina nigrocinnabarina (Schwein.) Berk. & M.A. Curtis 1868 accepted as Blitridium nigrocinnabarinum (Schwein.) Sacc., (1889) Genus: Antennaria Link 1809 Antennaria (Goleroa) engleriana v.Hohn.* Genus: Anthostomella Sacc. 1875 Anthostomella africana Berl. & Vogl. possibly (Kalchbr. & Cooke) Sacc. 1882 Anthostomella capensis Doidge 1948 Anthostomella cassinopsidis Rehm 1906 Anthostomella cassinopsidis (Kalchbr. & Cooke) Petr. & Syd. 1925 Anthostomella nigroannulata Sacc. 1882 Anthostomella salaciae Doidge 1948 Genus: Anthracophyllum Ces. 1879 Anthracophyllum nigritum (Lév.)
Kalchbr., (1881) as Anthracophyllum nigrita Genus: Anthracothecium Hampe ex A. Massal. 1860 Anthracothecium biferum Zahlbr. 1932 Anthracothecium duplicans (Nyl.) Müll. Arg. 1880 accepted as Pyrenula duplicans (Nyl.) Aptroot, (2008) Anthracothecium pyrenuloides (Mont.) Müll. Arg. 1880 accepted as Pyrenula pyrenuloides (Mont.) R.C. Harris, (1989), Anthracothecium thelomorphum (Tuck.) Zahlbr. (1922) [as thelemorphum] Anthracothecium thwaitesii (Leight.) Müll. Arg. 1880 Anthracothecium variolosum (Pers.) Müll. Arg. 1880 Genus: Anthurus Kalchbr. & MacOwan 1880 Anthurus archeri (Berk.) E. Fisch., (1886) accepted as Clathrus archeri (Berk.) Dring 1980 Anthurus macowani Marloth 1913 Anthurus woodii MacOwan 1880 Ap Genus: Aphysa Theiss. & Syd. 1917, accepted as Coleroa Rabenh., (1850) Aphysa lebeckiae (Verwoerd & Dippen.) Doidge 1942, accepted as Coleroa lebeckiae (Verwoerd & Dippen.) Arx, (1962) Aphysa rhynchosiae (Kalchbr. & Cooke) Theiss. & Syd. 1917, accepted as Coleroa rhynchosiae (Kalchbr. & Cooke) E. Müll., (1962) Aphysa senniana (Sacc.) Doidge 1941, accepted as Coleroa senniana (Sacc.) Arx, (1962) Genus: Appendiculella Höhn. 1919 Appendiculella calostroma (Desm.) Höhn. 1919 Ar Family: Arachniaceae Coker & Couch 1928 Genus: Arachnion Schwein. 1822 Arachnion alborosellum Verwoerd 1926 [as alborosella] Arachnion album Schwein. 1822 Arachnion firmoderma Verwoerd 1926 Arachnion giganteum Lloyd* Arachnion scleroderma Lloyd 1915 Genus: Arcangeliella Cavara 1900 accepted as Lactarius Pers., (1797) Arcangeliella africana (Lloyd) Zeller & C.W. Dodge 1935 accepted as Neosecotium africanum (Lloyd) Singer & A.H. Sm., (1960) Genus: Arctomia Th. Fr. 1861 (lichens) Arctomia muscicola A.L. Sm. 1932 Genus: Armillaria (Fr.) Staude 1857 Armillaria mellea Quel. possibly (Vahl) P. Kumm. 1871 Armillaria ramentacea Quel. possibly (Bull. ex Pers.) Gillet 1874, accepted as Tricholoma ramentaceum (Bull. ex Pers.) Ricken, (1915) Genus: Arrhenia Fr.
1849 Arrhenia cucullata Family: Arthoniaceae Rchb. 1841 Genus: Arthonia Ach. 1806 Arthonia albida (Müll. Arg.) Willey 1890 Arthonia angulata Fée 1837 Arthonia angulosa Müll. Arg. 1887 Arthonia antillarum (Fée) Nyl. 1867 Arthonia argentea Stizenb. 1891 Arthonia calospora Müll. Arg. 1882 Arthonia capensis Stizenb. 1891 accepted as Tryblidaria capensis (Stizenb.) Vouaux, (1914) Arthonia cinnabarina (DC.) Wallr. 1831 accepted as Coniocarpon cinnabarinum DC., (1805) Arthonia circumscissa Merrill possibly Vain. 1890, accepted as Cyclographina circumscissa (Vain.) Makhija & Patw., (1995) Arthonia consanguinea (Müll. Arg.) Willey 1890 Arthonia dispersa Nyl. possibly (Schrad.) Dufour 1818, accepted as Naevia dispersa (Schrad.) Thiyagaraja, Lücking & K.D. Hyde, (2020) or Arthonia dispersa subsp. excipienda (Nyl.) Nyl. 1861 Arthonia fusconigra Nyl. 1859 Arthonia gregaria (Weigel) Körb. 1855 accepted as Coniocarpon cinnabarinum DC., (1805) Arthonia hormidiella Stirt. 1877 Arthonia lecideicarpa Zahlbr. 1932 Arthonia melanopsis Stirt. 1877 Arthonia nana Stizenb. 1891 Arthonia oblongula Müll. Arg. 1887 Arthonia obvelata (Müll. Arg.) Willey 1890 Arthonia palmicola Ach. 1814 Arthonia platygraphidea Nyl. 1863 Arthonia polymorpha Ach. 1814 Arthonia propinqua Nyl. 1863 Arthonia pyrenuloides Müll. Arg. 1887 Arthonia rubrofuscescens Vain. 1926 Arthonia variabilis Müll. Arg. 1887 Arthonia violascens Flot. ex Nyl. Arthonia wilmsiana Müll. Arg. 1886 Genus: Arthopyrenia A. Massal. 1852 Arthopyrenia alboatra Müll. Arg. 1883 Arthopyrenia capensis Zahlbr. 1921 Arthopyrenia cinchonae (Ach.) Müll. Arg. 1883 Arthopyrenia cinchonae var. fumida (Stizenb.) Zahlbr. 1921 accepted as Constrictolumina cinchonae (Ach.) Lücking, M.P. Nelsen & Aptroot, (2016) Arthopyrenia fallax (Nyl.) Arnold 1873 accepted as Pseudosagedia fallax (Nyl.) Oxner, (1956) Arthopyrenia knysnana Zahlbr. 1932 Arthopyrenia leucanthes (Stirt.) Zahlbr. 1922 Arthopyrenia norata A. Massal. 1861 Arthopyrenia paraphysata Zahlbr. 
1932 Arthopyrenia pruinosogrisea (C. Knight) Müll. Arg. 1894 Arthopyrenia recepta Müll. Arg. 1883 Arthopyrenia simulans Müll. Arg. 1887 Genus: Arthothelium A. Massal. 1852 (Lichens) Arthothelium abnorme (Ach.) Müll. Arg. 1880 Arthothelium albidum Müll. Arg. 1887 Arthothelium album Zahlbr. 1932 Arthothelium argenteum (Stizenb.) Zahlbr. 1922 Arthothelium consanguineum Müll. Arg. 1888 Arthothelium fusconigrum (Nyl.) Müll. Arg. 1894 Arthothelium melanopsis (Stirt.) Zahlbr. 1922 Arthothelium michylum Vain. 1922 Arthothelium obvelatum Müll. Arg. 1887 Arthothelium phaeosporum Zahlbr. 1936 Arthothelium psyllodes Zahlbr. 1936 Arthothelium psyllodes var. precursum Zahlbr. 1936 Arthothelium violascens (Flot.) Zahlbr. 1922 Genus: Arthrobotryum Ces. 1854 Arthrobotryum melanoplaca Berk. & M.A. Curtis 1868 accepted as Spiropes melanoplaca (Berk. & M.A. Curtis) M.B. Ellis, (1968) Genus: Arthrosporium Sacc. 1880 Arthrosporium parasiticum G. Winter 1886 accepted as Atractilina parasitica (G. Winter) Deighton & Piroz., (1972) As Genus: Ascobolus Pers. ex J.F. Gmel. 1792 Ascobolus ciliatus Berk. possibly J.C. Schmidt 1817 accepted as Lasiobolus papillatus (Pers.) Sacc., (1884) or Ascobolus ciliatus var. ciliatus Berk. 1836 Ascobolus ciliatus Schum.* Ascobolus furfuraceus Pers. 1794 Ascobolus stercorarius (Bull.) J. Schröt. 1893 accepted as Ascobolus furfuraceus Pers., (1794) Genus: Ascochyta Lib. 1830 Ascochyta alkekengi C. Massal. 1900 Ascochyta atropunctata G. Winter 1885 Ascochyta calpurniae G. Winter 1885 Ascochyta caricae Pat. 1891 Ascochyta cherimoliae Thüm. 1879. Ascochyta citricola McAlpine 1899 Ascochyta dianthi (Alb. & Schwein.) Berk. 1860 accepted as Septoria dianthi (Alb. & Schwein.) Desm., (1849) Ascochyta kentiae Maubl. 1903 Ascochyta nicotianae Pass. 1881 accepted as Boeremia exigua (Desm.) Aveskamp, Gruyter & Verkley, (2010) Ascochyta papaveris Oudem. 1885, accepted as Diplodina papaveris (Oudem.) 
Lind, (1926) Ascochyta parasitica Fautrey 1891 accepted as Sirococcus conigenus (Pers.) P.F. Cannon & Minter, (1983) Ascochyta pisi Lib. 1830, accepted as Didymella pisi Chilvers, J.D. Rogers & Peever, (2009) Genus: Ascophanus Boud. 1869 Ascophanus durbanensis Van der Byl 1925 accepted as Iodophanus durbanensis (Van der Byl) Kimbr., Luck-Allen & Cain, (1969) Ascophanus granulatus (Bull.) Speg. 1878 accepted as Cheilymenia granulata (Bull.) J. Moravec, (1990) Ascophanus granuliformis (P. Crouan & H. Crouan) Boud. 1869 accepted as Coprotus granuliformis (P. Crouan & H. Crouan) Kimbr., (1967) Ascophanus sp. Genus: Ascostratum Syd. & P. Syd. 1912 Ascostratum insigne Syd. & P. Syd. 1912 Genus: Ascotricha Berk. 1838, Ascotricha chartarum Berk. 1838 Genus: Aseroe Labill. 1800 Aseroe rubra Labill. 1800 Family: Ashbyaceae C.W. Dodge 1935 Genus: Ashbya Guillierm. 1928 Ashbya gossypii (S.F. Ashby & W. Nowell) Guillierm. (1928), accepted as Eremothecium gossypii (S.F. Ashby & W. Nowell) Kurtzman, J. (1995) Genus: Aspergillus P. Micheli 1729 Aspergillus amstelodami (L. Mangin) Thom & Church 1926 Aspergillus candidus Link 1809 Aspergillus carbonarius (Bainier) Thom 1916 Aspergillus eburneus Biourge. accepted as Aspergillus neoniveus Samson, S.W. Peterson, Frisvad & Varga, (2011) Aspergillus flavus Link 1809 Aspergillus fumigatus Fresen. 1863 Aspergillus glaucus (L.) Link 1809 Aspergillus minutus Gilman & Abott probably E.V. Abbott 1927; Aspergillus niger Tiegh. 1867 Aspergillus ochraceus (series) Aspergillus melleus Yukawa. Aspergillus parasiticus Speare Aspergillus repens (Corda) Sacc. (1882), valid on Species Fungorum accepted as Aspergillus reptans Samson & W. Gams, (1986) per Mycobank Aspergillus repens-glaucus (series) possibly G. Wilh. 1877 Aspergillus sartoryi Syd. 1913 Aspergillus sulphureus (Fresen.) Thom & Church 1926 Aspergillus sydowi (Bainier & Sartory) Thom & Church 1926 Aspergillus terreus Thom 1918 Aspergillus versicolor (Vuill.) Tirab. 
1908 Aspergillus welwitschiae (Bres.) Henn. 1907 Aspergillus sp. Genus: Aspicilia A. Massal. 1852 Aspicilia nubila (Stizenb.) Hue 1912 Genus: Asterella (may refer to Asterella Rostr. 1888, accepted as Venturia; Venturiaceae, Asterella (Sacc.) Sacc. 1891, accepted as Asterina; Asterinaceae, or Asterella Hara 1936, accepted as Astrosphaeriella; Pleosporales) Asterella infuscans (G. Winter) Sacc. 1891 Asterella phaeostroma (Cooke) Sacc. 1891 Asterella rehmii Henn. 1893 accepted as Placoasterella rehmii (Henn.) Theiss. & Syd., (1915) Family: Asterinaceae Hansf. 1946 Genus: Asterina Lév. 1845 Asterina africana (Van der Byl) Doidge 1942 Asterina africana var. kiggelariae Doidge 1942 accepted as Asterina africana (Van der Byl) Doidge, (1942) Asterina aulica Syd. 1938 Asterina balansae var. africana Theiss.* Asterina bosmanae Doidge 1942 Asterina bottomleyae Doidge 1942 Asterina capensis Kalchbr. & Cooke 1880 accepted as Meliola capensis (Kalchbr. & Cooke) Theiss., (1912) Asterina capparicola [as capparidicola] Doidge (1942) Asterina celtidicola Henn. 1905 Asterina celtidicola var. microspora Doidge 1920 accepted as Asterina celtidicola Henn. 1905 Asterina clausenicola Doidge 1920 Asterina combreti Syd. & P. Syd. 1910 Asterina combreti var. kutuensis v. Hohn* Asterina confluens Kalchbr. & Cooke 1880 Asterina crotonicola Doidge 1922 Asterina crotoniensis R.W. Ryan 1939 Asterina delicata Doidge 1920 Asterina diplocarpa Cooke 1882 Asterina diplocarpa var. hibisci Doidge 1942 accepted as Asterina hibisci (Doidge) Hosag., (2004) Asterina dissiliens (Syd.) Doidge 1942 accepted as Prillieuxina dissiliens (Syd.) Arx, (1962) Asterina dissiliens var. senegalensis Doidge 1942 accepted as Prillieuxina dissiliens (Syd.) Arx, (1962) Asterina ditricha Kalchbr. & Cooke 1880 Asterina elegans Doidge 1942 Asterina erysiphoides Kalchbr. & Cooke 1880 accepted as Asterostomella erysiphoides (Kalchbr. & Cooke) Bat. 
& Cif., (1959) Asterina excoecariae Doidge 1920 Asterina ferruginosa Doidge 1920 Asterina fimbriata Kalchbr. & Cooke 1880 Asterina fleuryae Doidge 1942 Asterina gerbericola Doidge 1924 Asterina gibbosa var. megathyria Doidge 1920, accepted as Asterolibertia megathyria (Doidge) Doidge, (1942) Asterina grewiae Cooke 1882 Asterina grewiae var. zonata Doidge 1942 accepted as Asterina grewiae Cooke 1882 Asterina hendersoni Doidge 1920 Asterina inconspicua (Doidge) Doidge 1942 accepted as Prillieuxina inconspicua (Doidge) Arx, (1962) Asterina infuscans G. Winter 1885 Asterina interrupta G. Winter 1884 accepted as Vizella interrupta (G. Winter) S. Hughes, (1953) Asterina knysnae Doidge 1942 Asterina loranthicola Syd. & P. Syd. 1914 Asterina macowaniana Kalchbr. & Cooke 1880 Asterina myriadea Cooke 1882 Asterina natalensis Doidge 1920 Asterina natalitia Doidge 1942 Asterina nodosa Doidge 1942 Asterina oncinotidis Doidge 1942 Asterina opaca Syd. & P. Syd. 1912 Asterina oxyanthi Doidge 1942 Asterina pavoniae Werderm. 1923 Asterina peglerae Doidge 1920 Asterina pemphidioides Cooke 1876 Asterina peraffinis Speg. 1889 Asterina phaeostroma Cooke 1882 Asterina polythyria Doidge 1920 Asterina punctiformis var. fimbriata (Kalchbr. & Cooke) Theiss. Asterina radiofissilis (Sacc.) Theiss. 1912 Asterina raripoda Doidge 1920 accepted as Maublancia raripoda (Doidge) Arx, (1962) Asterina reticulata Kalchbr. & Cooke 1880 Asterina rhamnicola Doidge 1920 accepted as Schiffnerula rhamnicola (Doidge) S. Hughes, (1987) Asterina rinoreae Doidge 1942 accepted as Asteridiella rinoreae (Doidge) Hansf. (1961) Asterina robusta Doidge 1920 Asterina saniculae Doidge 1942 Asterina scolopiae Doidge 1922 Asterina secamonicola Doidge 1927 Asterina similis Cooke 1882 Asterina solaris Kalchbr. & Cooke 1880 accepted as Asterodothis solaris (Kalchbr. & Cooke) Theiss., (1912) Asterina sphaerasca Thüm.
1875 Asterina streptocarpi Doidge 1924 Asterina stylospora Cooke 1882 accepted as Capnodiastrum stylosporum (Cooke) Petr., (1952) Asterina syzygii Doidge 1942 Asterina tenuis G. Winter 1886 Asterina tertia var. africana Doidge 1920 accepted as Asterina tertia Racib. 1913 Asterina toruligena Cooke 1882 Asterina trichiliae Doidge 1920 Asterina trichocladi Doidge 1942 accepted as Maublancia trichocladii (Doidge) Arx, (1962) Asterina uncinata Doidge 1920 Asterina undulata Doidge 1920 Asterina vagans Speg. 1888 Asterina vagans var. subreticulata Theiss.* Asterina vanderbijlii Werderm. 1923 [as van der Bylii] Asterina vepridis Doidge 1942 Asterina woodiana (Doidge) Doidge 1942 Asterina woodii Doidge 1942 Asterina xumenensis Doidge 1942 Asterina zeyheri Doidge 1942 Genus: Asterinella Theiss. 1912 Asterinella acokantherae Doidge 1920 accepted as Lembosina acokantherae (Doidge) Arx [as 'acocantherae'], (1962) Asterinella burchelliae Doidge 1920 accepted as Asterolibertia burchelliae (Doidge) Doidge, (1942) Asterinella contorta (Doidge) Hansf. 1946 Asterinella dissiliens Syd. 1924 accepted as Prillieuxina dissiliens (Syd.) Arx, (1962) Asterinella dissiliens var. senegalensis Doidge* Asterinella inconspicua (Doidge) Hansf. 1948 accepted as Prillieuxina inconspicua (Doidge) Arx, (1962) Asterinella lembosioides Doidge 1920 accepted as Echidnodes lembosioides (Doidge) Doidge, (1942) Asterinella mimusopsis Doidge [as 'mimusopsidis'], (1922) Asterinella pterocelastri Doidge 1924 accepted as Prillieuxina pterocelastri (Doidge) R.W. Ryan, (1939) Asterinella tecleae Doidge 1942 Asterinella woodiana Doidge 1920, Genus: Asterodothis Theiss. 1912 Asterodothis solaris (Kalchbr. & Cooke) Theiss. 1912 Genus: Asterolibertia G. Arnaud 1918 Asterolibertia burchelliae (Doidge) Doidge 1942 Asterolibertia megathyria (Doidge) Doidge 1942 Asterolibertia megathyria var. randiae Doidge 1942 Genus: Asteroma DC. 1815 Asteroma pallidum Kalchbr. * Asteroma pullum Kalchbr. 1875 Genus: Asteromyxa Theiss.
& Syd. 1918 accepted as Dimeriella Speg., (1908) Asteromyxa inconspicua Doidge 1924 accepted as Prillieuxina inconspicua (Doidge) Arx, (1962) Genus: Asterostomella Speg. 1886 Asterostomella eugeniicola Doidge [as 'eugenicola'], (1942) Asterostomella reticulata v.Hohn.* Asterostomella visci Doidge 1942 Genus: Asterostroma Massee 1889 Asterostroma cervicolor (Berk. & M.A. Curtis) Massee 1889 Genus: Asterostromella Höhn. & Litsch. 1907 accepted as Vararia P. Karst., (1898) Asterostromella rumpiana P.H.B. Talbot 1948 Genus: Astrosporina J. Schröt. 1889 accepted as Inocybe (Fr.) Fr., (1863) Astrosporina maritima (P. Karst.) Rea 1922 accepted as Inocybe impexa (Lasch) Kuyper, (1986) Family: Astrotheliaceae Zahlbr. 1898 Au Genus: Auerswaldia possibly Rabenh. 1857, accepted as Melanospora Ceratostomataceae, or Auerswaldia Sacc. 1883; Dothideaceae Auerswaldia disciformis G. Winter 1884 accepted as Auerswaldiella winteri Arx & E. Müll., (1954) Auerswaldia examinans (Berk.) Sacc. 1883 accepted as Bagnisiella examinans (Berk.) Arx & E. Müll., (1975) Auerswaldia scabies (Kalchbr. & Cooke) Sacc. 1883 accepted as Phyllachora scabies (Kalchbr. & Cooke) Cooke, (1885) Family: Auriculariaceae Fr. 1838 Genus: Auricularia Bull. 1780 Auricularia auricula-judae Seer. possibly (Bull.) Quél. 1886 Auricularia delicata (Mont. ex Fr.) Henn. 1893 Auricularia eminii Henn. 1893 [as Emini] Auricularia flava Lloyd 1922 Auricularia fuscosuccinea (Mont.) Henn. 1893 Auricularia lobata Sommerf. 1826 accepted as Auricularia mesenterica (Dicks.) Pers., (1822) Auricularia mesenterica Fr. possibly (Dicks.) Pers. 1822 Auricularia mesenterica var. lobata Quel. * Auricularia nigra P.Henn. possibly (Sw.) Earle 1899 accepted as Auricularia nigricans (Sw.) Birkebak, Looney & Sánchez-García, (2013) Auricularia ornata Pers. 1827 Auricularia polytricha (Mont.) Sacc. (1885), accepted as Auricularia nigricans (Sw.) Birkebak, Looney & Sánchez-García, (2013) Auricularia squamosa Pat. & Har. 
1893 See also List of bacteria of South Africa List of Oomycetes of South Africa List of slime moulds of South Africa List of fungi of South Africa List of fungi of South Africa – A List of fungi of South Africa – B List of fungi of South Africa – C List of fungi of South Africa – D List of fungi of South Africa – E List of fungi of South Africa – F List of fungi of South Africa – G List of fungi of South Africa – H List of fungi of South Africa – I List of fungi of South Africa – J List of fungi of South Africa – K List of fungi of South Africa – L List of fungi of South Africa – M List of fungi of South Africa – N List of fungi of South Africa – O List of fungi of South Africa – P List of fungi of South Africa – Q List of fungi of South Africa – R List of fungi of South Africa – S List of fungi of South Africa – T List of fungi of South Africa – U List of fungi of South Africa – V List of fungi of South Africa – W List of fungi of South Africa – X List of fungi of South Africa – Y List of fungi of South Africa – Z References Sources Further reading External links Species Fungorum – a nomenclature database Name search at Index Fungorum Fungi A South Africa
List of fungi of South Africa – A
Biology
17,874
187,926
https://en.wikipedia.org/wiki/Centroid
In mathematics and physics, the centroid, also known as geometric center or center of figure, of a plane figure or solid figure is the arithmetic mean position of all the points in the surface of the figure. The same definition extends to any object in n-dimensional Euclidean space. In geometry, one often assumes uniform mass density, in which case the barycenter or center of mass coincides with the centroid. Informally, it can be understood as the point at which a cutout of the shape (with uniformly distributed mass) could be perfectly balanced on the tip of a pin. In physics, if variations in gravity are considered, then a center of gravity can be defined as the weighted mean of all points weighted by their specific weight. In geography, the centroid of a radial projection of a region of the Earth's surface to sea level is the region's geographical center. History The term "centroid" was coined in 1814. It is used as a substitute for the older terms "center of gravity" and "center of mass" when the purely geometrical aspects of that point are to be emphasized. The term is peculiar to the English language; French, for instance, uses "centre de gravité" on most occasions, and other languages use terms of similar meaning. The center of gravity, as the name indicates, is a notion that arose in mechanics, most likely in connection with building activities. It is uncertain when the idea first appeared, as the concept likely occurred to many people individually with minor differences. Nonetheless, the center of gravity of figures was studied extensively in Antiquity; Bossut credits Archimedes (287–212 BCE) with being the first to find the centroid of plane figures, although he never defines it. A treatment of centroids of solids by Archimedes has been lost. It is unlikely that Archimedes learned the theorem that the medians of a triangle meet in a point—the center of gravity of the triangle—directly from Euclid, as this proposition is not in the Elements.
The first explicit statement of this proposition is due to Heron of Alexandria (perhaps the first century CE) and occurs in his Mechanics. It may be added, in passing, that the proposition did not become common in the textbooks on plane geometry until the nineteenth century. Properties The geometric centroid of a convex object always lies in the object. A non-convex object might have a centroid that is outside the figure itself. The centroid of a ring or a bowl, for example, lies in the object's central void. If the centroid is defined, it is a fixed point of all isometries in its symmetry group. In particular, the geometric centroid of an object lies in the intersection of all its hyperplanes of symmetry. The centroid of many figures (regular polygon, regular polyhedron, cylinder, rectangle, rhombus, circle, sphere, ellipse, ellipsoid, superellipse, superellipsoid, etc.) can be determined by this principle alone. In particular, the centroid of a parallelogram is the meeting point of its two diagonals. This is not true of other quadrilaterals. For the same reason, the centroid of an object with translational symmetry is undefined (or lies outside the enclosing space), because a translation has no fixed point. Examples The centroid of a triangle is the intersection of the three medians of the triangle (each median connecting a vertex with the midpoint of the opposite side). For other properties of a triangle's centroid, see below. Determination Plumb line method The centroid of a uniformly dense planar lamina, such as in figure (a) below, may be determined experimentally by using a plumbline and a pin to find the collocated center of mass of a thin body of uniform density having the same shape. The body is held by the pin, inserted at a point, off the presumed centroid in such a way that it can freely rotate around the pin; the plumb line is then dropped from the pin (figure b). 
The position of the plumbline is traced on the surface, and the procedure is repeated with the pin inserted at any different point (or a number of points) off the centroid of the object. The unique intersection point of these lines will be the centroid (figure c). Provided that the body is of uniform density, all lines made this way will include the centroid, and all lines will cross at exactly the same place. This method can be extended (in theory) to concave shapes where the centroid may lie outside the shape, and virtually to solids (again, of uniform density), where the centroid may lie within the body. The (virtual) positions of the plumb lines need to be recorded by means other than by drawing them along the shape. Balancing method For convex two-dimensional shapes, the centroid can be found by balancing the shape on a smaller shape, such as the top of a narrow cylinder. The centroid occurs somewhere within the range of contact between the two shapes (and exactly at the point where the shape would balance on a pin). In principle, progressively narrower cylinders can be used to find the centroid to arbitrary precision. In practice air currents make this infeasible. However, by marking the overlap range from multiple balances, one can achieve a considerable level of accuracy. Of a finite set of points The centroid of a finite set of k points x_1, x_2, ..., x_k in R^n is C = (x_1 + x_2 + ⋯ + x_k)/k. This point minimizes the sum of squared Euclidean distances between itself and each point in the set. By geometric decomposition The centroid of a plane figure X can be computed by dividing it into a finite number of simpler figures X_1, X_2, ..., X_n, computing the centroid C_i and area A_i of each part, and then computing C = (Σ_i C_i A_i) / (Σ_i A_i). Holes in the figure, overlaps between the parts, or parts that extend outside the figure can all be handled using negative areas A_i. Namely, the measures A_i should be taken with positive and negative signs in such a way that the sum of the signs of A_i for all parts that enclose a given point p is 1 if p belongs to X and 0 otherwise.
For example, the figure below (a) is easily divided into a square and a triangle, both with positive area; and a circular hole, with negative area (b). The centroid of each part can be found in any list of centroids of simple shapes (c). Then the centroid of the figure is the weighted average of the three points. The vertical position of the centroid is found in the same way as the horizontal position. The same formula holds for any three-dimensional objects, except that each A_i should be the volume of X_i rather than its area. It also holds for any subset of R^d, for any dimension d, with the areas replaced by the d-dimensional measures of the parts. By integral formula The centroid of a subset X of R^d can also be computed by the formula C = (∫ x g(x) dx) / (∫ g(x) dx), where the integrals are taken over the whole space and g is the characteristic function of the subset X: g(x) = 1 if x belongs to X and g(x) = 0 otherwise. Note that the denominator is simply the measure of the set X. This formula cannot be applied if the set X has zero measure, or if either integral diverges. Another formula for the centroid is C_k = (∫ z S_k(z) dz) / (∫ S_k(z) dz), where C_k is the kth coordinate of C, and S_k(z) is the measure of the intersection of X with the hyperplane defined by the equation x_k = z. Again, the denominator is simply the measure of X. For a plane figure, in particular, the barycentric coordinates are C_x = (∫ x S_y(x) dx) / A and C_y = (∫ y S_x(y) dy) / A, where A is the area of the figure X, S_y(x) is the length of the intersection of X with the vertical line at abscissa x, and S_x(y) is the length of the intersection of X with the horizontal line at ordinate y. Of a bounded region The centroid of a region bounded by the graphs of the continuous functions f and g such that f(x) ≥ g(x) on the interval [a, b] is given by C_x = (1/A) ∫_a^b x (f(x) − g(x)) dx and C_y = (1/A) ∫_a^b ((f(x) + g(x))/2) (f(x) − g(x)) dx, where A is the area of the region (given by A = ∫_a^b (f(x) − g(x)) dx). With an integraph An integraph (a relative of the planimeter) can be used to find the centroid of an object of irregular shape with smooth (or piecewise smooth) boundary. The mathematical principle involved is a special case of Green's theorem.
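As an illustration of the signed-area decomposition described above, the sketch below computes a centroid as a weighted average of part centroids. The figure and its dimensions (a 2×2 square, a right triangle with legs 1 and 2 attached to its right edge, and a circular hole of radius 0.5) are hypothetical, not the one in the article:

```python
# Centroid by geometric decomposition with signed areas (illustrative figure).
import math

parts = [
    # (signed area, (cx, cy)) for each simple part
    (4.0, (1.0, 1.0)),                # square [0,2] x [0,2]
    (1.0, (7/3, 2/3)),                # triangle (2,0)-(3,0)-(2,2); centroid = vertex mean
    (-math.pi * 0.5**2, (0.8, 1.0)),  # circular hole centered at (0.8, 1): negative area
]

A = sum(a for a, _ in parts)
cx = sum(a * c[0] for a, c in parts) / A
cy = sum(a * c[1] for a, c in parts) / A
print(f"centroid = ({cx:.4f}, {cy:.4f})")
```

The sign convention handles the hole automatically; the same loop works for solids with volumes in place of areas.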
Of an L-shaped object This is a method of determining the centroid of an L-shaped object. Divide the shape into two rectangles, as shown in fig 2. Find the centroids of these two rectangles by drawing the diagonals. Draw a line joining the centroids. The centroid of the shape must lie on this line. Divide the shape into two other rectangles, as shown in fig 3. Find the centroids of these two rectangles by drawing the diagonals. Draw a line joining the centroids. The centroid of the L-shape must lie on this line. As the centroid of the shape must lie along both of these lines, it must be at their intersection. The point might lie inside or outside the L-shaped object. Of a triangle The centroid of a triangle is the point of intersection of its medians (the lines joining each vertex with the midpoint of the opposite side). The centroid divides each of the medians in the ratio 2:1, which is to say it is located 1/3 of the distance from each side to the opposite vertex (see figures at right). Its Cartesian coordinates are the means of the coordinates of the three vertices. That is, if the three vertices are A = (x_A, y_A), B = (x_B, y_B), and C = (x_C, y_C), then the centroid (most commonly denoted G in triangle geometry) is G = ((x_A + x_B + x_C)/3, (y_A + y_B + y_C)/3). The centroid is therefore at 1/3 : 1/3 : 1/3 in barycentric coordinates. In trilinear coordinates the centroid can be expressed in any of these equivalent ways in terms of the side lengths a, b, c and vertex angles A, B, C: 1/a : 1/b : 1/c = bc : ca : ab = csc A : csc B : csc C. The centroid is also the physical center of mass if the triangle is made from a uniform sheet of material; or if all the mass is concentrated at the three vertices, and evenly divided among them. On the other hand, if the mass is distributed along the triangle's perimeter, with uniform linear density, then the center of mass lies at the Spieker center (the incenter of the medial triangle), which does not (in general) coincide with the geometric centroid of the full triangle.
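The vertex-averaging formula and the median property can be checked numerically; the triangle below is an arbitrary example, not one from the article:

```python
# Centroid of a triangle as the mean of its vertices, checked against a
# median: the centroid sits 2/3 of the way from a vertex to the midpoint
# of the opposite side.
A = (0.0, 0.0)
B = (4.0, 0.0)
C = (1.0, 3.0)

G = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)

def point_on_segment(p, q, t):
    """Point at parameter t in [0, 1] along the segment from p to q."""
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

# Median from A to the midpoint of BC: G should be 2/3 of the way along it.
M = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)
print(G, point_on_segment(A, M, 2/3))
```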
The area of the triangle is 3/2 times the length of any side times the perpendicular distance from the side to the centroid. A triangle's centroid G lies on its Euler line between its orthocenter H and its circumcenter O, exactly twice as close to the latter as to the former: GH = 2·GO. In addition, for the nine-point center N on this line we have GH = 4·GN and GO = 2·GN. If G is the centroid of the triangle, then the vectors from G to the three vertices sum to zero. The isogonal conjugate of a triangle's centroid is its symmedian point. Any of the three medians through the centroid divides the triangle's area in half. This is not true for other lines through the centroid; the greatest departure from the equal-area division occurs when a line through the centroid is parallel to a side of the triangle, creating a smaller triangle and a trapezoid; in this case the trapezoid's area is 5/9 that of the original triangle. Let P be any point in the plane of a triangle with vertices A, B, C and centroid G. Then the sum of the squared distances of P from the three vertices exceeds the sum of the squared distances of the centroid from the vertices by three times the squared distance between P and G: PA² + PB² + PC² = GA² + GB² + GC² + 3·PG². The sum of the squares of the triangle's sides equals three times the sum of the squared distances of the centroid from the vertices: AB² + BC² + CA² = 3·(GA² + GB² + GC²). A triangle's centroid is the point that maximizes the product of the directed distances of a point from the triangle's sidelines. Let ABC be a triangle, let G be its centroid, and let D, E, F be the midpoints of segments BC, CA, AB, respectively.
Then, for any point P in the plane of ABC, an inequality relates the distances from P to the vertices and to these midpoints. Of a polygon The centroid of a non-self-intersecting closed polygon defined by n vertices (x_0, y_0), (x_1, y_1), ..., (x_{n−1}, y_{n−1}) is the point (C_x, C_y), where C_x = (1/(6A)) Σ_{i=0}^{n−1} (x_i + x_{i+1})(x_i y_{i+1} − x_{i+1} y_i) and C_y = (1/(6A)) Σ_{i=0}^{n−1} (y_i + y_{i+1})(x_i y_{i+1} − x_{i+1} y_i), and where A is the polygon's signed area, as described by the shoelace formula: A = (1/2) Σ_{i=0}^{n−1} (x_i y_{i+1} − x_{i+1} y_i). In these formulae, the vertices are assumed to be numbered in order of their occurrence along the polygon's perimeter; furthermore, the vertex (x_n, y_n) is assumed to be the same as (x_0, y_0), meaning i + 1 on the last case must loop around to i = 0. (If the points are numbered in clockwise order, the area A, computed as above, will be negative; however, the centroid coordinates will be correct even in this case.) Of a cone or pyramid The centroid of a cone or pyramid is located on the line segment that connects the apex to the centroid of the base. For a solid cone or pyramid, the centroid is 1/4 the distance from the base to the apex. For a cone or pyramid that is just a shell (hollow) with no base, the centroid is 1/3 the distance from the base plane to the apex. Of a tetrahedron and n-dimensional simplex A tetrahedron is an object in three-dimensional space having four triangles as its faces. A line segment joining a vertex of a tetrahedron with the centroid of the opposite face is called a median, and a line segment joining the midpoints of two opposite edges is called a bimedian. Hence there are four medians and three bimedians. These seven line segments all meet at the centroid of the tetrahedron. The medians are divided by the centroid in the ratio 3:1. The centroid of a tetrahedron is the midpoint between its Monge point and circumcenter (center of the circumscribed sphere). These three points define the Euler line of the tetrahedron that is analogous to the Euler line of a triangle. These results generalize to any n-dimensional simplex in the following way.
If the set of vertices of a simplex is {v_0, v_1, ..., v_n}, then considering the vertices as vectors, the centroid is C = (1/(n+1)) Σ_{i=0}^{n} v_i. The geometric centroid coincides with the center of mass if the mass is uniformly distributed over the whole simplex, or concentrated at the vertices as equal masses. Of a hemisphere The centroid of a solid hemisphere (i.e. half of a solid ball) divides the line segment connecting the sphere's center to the hemisphere's pole in the ratio 3:8 (i.e. it lies 3/8 of the way from the center to the pole). The centroid of a hollow hemisphere (i.e. half of a hollow sphere) divides the line segment connecting the sphere's center to the hemisphere's pole in half. See also Chebyshev center Circular mean Fréchet mean k-means algorithm List of centroids Medoid Pappus's centroid theorem Notes References External links Encyclopedia of Triangle Centers by Clark Kimberling. The centroid is indexed as X(2). Characteristic Property of Centroid at cut-the-knot Interactive animations showing Centroid of a triangle and Centroid construction with compass and straightedge Experimentally finding the medians and centroid of a triangle at Dynamic Geometry Sketches, an interactive dynamic geometry sketch using the gravity simulator of Cinderella. Affine geometry Geometric centers Means Triangle centers
Centroid
Physics,Mathematics
3,059
13,190,594
https://en.wikipedia.org/wiki/Harry%20H.%20Goode
Harry H. Goode (June 30, 1909 – October 30, 1960) was an American computer engineer and systems engineer and professor at the University of Michigan. He is known as co-author of the book Systems Engineering from 1957, which is one of the earliest significant books directly related to systems engineering. Biography Harry H. Goode (né Goodstein) was born in New York City in 1909. He received his B.A. in history from New York University in 1931, when the country was in the depths of the Depression. While studying chemical engineering at Cooper Union, Goode earned his living playing the clarinet and saxophone in New York jazz bands. He received his second bachelor's degree in 1940. During the war he attended Columbia University and received a master's degree in mathematics in 1945. In 1941 Goode started working as a statistician for the New York City Department of Health. From 1946 to 1949 Goode worked for the U.S. Navy in Sands Point, Long Island, where he became head of the Special Projects Branch. Here he contributed to flight control simulation training, aircraft instrumentation, antisubmarine warfare, weapons systems design, and computer research, and initiated computer-based simulation projects. In the 1950s Goode became a professor at the University of Michigan. Until his death in 1960 he was president of the National Joint Computer Committee (NJCC). He was the principal architect of what was to become AFIPS (American Federation of Information Processing Societies). Had he lived, Goode undoubtedly would have become the first president of AFIPS, for he was the prime mover in organizing the three American constituent societies that were members of NJCC into one federation.
Work Harry Goode worked on the research frontiers of Management Science, Operations Research and Systems engineering in connection with organisms as systems, the reactions of groups, models of human preference, the experimental exploration of human observation, detection, and decision making, and the analysis and synthesis of speech. Harry H. Goode Memorial Award The IEEE Computer Society yearly awards a Harry H. Goode Memorial Award for achievements in the information processing field which are considered either a single contribution of theory, design, or technique of outstanding significance, or the accumulation of important contributions on theory or practice over an extended time period, the total of which represent an outstanding contribution. Publications Goode wrote several books and articles. Books: 1944 Mathematical Analysis of Ordinary and Deviated Pursuit Curves, with Leonard Gillman, Special Devices Section, Training Division, Bureau of Aeronautics, Navy Department, 264 pp. 1944. 1957 Systems Engineering: An Introduction to the Design of Large-Scale Systems, with Robert Engel Machol, McGraw-Hill, 551 pp. Articles, a selection: 1945 "Service Records and Their Administrative Uses", with Abraham H. Kantrow, Leona Baumgartner, in: Am J Public Health Nations Health. 1945 October; 35(10): 1063–1069. 1956 "The Use of a Digital Computer to Model a Signalized Intersection", with C.H. Pollmar and J.B. Wright, in: Proceedings of Highway Research Board, vol. 35, 1956, pp. 548 – 557. 1957 "Survey of Operations Research and Systems Engineering", Paper presented at Conference of Engineering Deans on Science and Technology, Purdue University, September 1957. 1958 "Greenhouses of Science for Management", in: Management Science, Vol. 4, No. 4 (Jul. 1958), pp. 365–381. 1958 "Simulation: Simulation and display of four inter-related vehicular traffic intersections", with C. 
True Wendell, Paper presented at the 13th national meeting of the Association for Computing Machinery ACM '58. About Harry H. Goode: Isaac L. Auerbach, "Harry H. Goode, June 30, 1909-October 30, 1960", IEEE Annals of the History of Computing, vol. 08, no. 3, pp. 257–260, Jul-Sept 1986. Robert E. Machol, Harry H. Goode, System Engineer, in: Science, Volume 133, Issue 3456, pp. 864–866, 03/1961. References External links Harry H. Goode Memorial Award, IEEE Computer Society. The McGraw-Hill Series in Control Systems Engineering overview. by Kent H Lundberg, January 2004. 1909 births 1960 deaths American engineering writers Systems engineers Columbia Graduate School of Arts and Sciences alumni University of Michigan faculty New York University alumni Cooper Union alumni 20th-century American writers
Harry H. Goode
Engineering
922
953,148
https://en.wikipedia.org/wiki/Lagrange%20reversion%20theorem
In mathematics, the Lagrange reversion theorem gives series or formal power series expansions of certain implicitly defined functions; indeed, of compositions with such functions. Let v be a function of x and y in terms of another function f such that v = x + y f(v). Then for any function g, for small enough y: g(v) = g(x) + Σ_{k=1}^∞ (y^k / k!) (∂/∂x)^{k−1} [f(x)^k g′(x)]. If g is the identity, this becomes v = x + Σ_{k=1}^∞ (y^k / k!) (∂/∂x)^{k−1} [f(x)^k]. In which case the equation can be derived using perturbation theory. In 1770, Joseph Louis Lagrange (1736–1813) published his power series solution of the implicit equation for v mentioned above. However, his solution used cumbersome series expansions of logarithms. In 1780, Pierre-Simon Laplace (1749–1827) published a simpler proof of the theorem, which was based on relations between partial derivatives with respect to the variable x and the parameter y. Charles Hermite (1822–1901) presented the most straightforward proof of the theorem by using contour integration. Lagrange's reversion theorem is used to obtain numerical solutions to Kepler's equation. Simple proof We start by writing: Writing the delta-function as an integral we have: The integral over k then gives and we have: Rearranging the sum and cancelling then gives the result: References External links Lagrange Inversion [Reversion] Theorem on MathWorld Cornish–Fisher expansion, an application of the theorem Article on equation of time contains an application to Kepler's equation. Theorems in analysis Inverse functions fr:Théorème d'inversion de Lagrange
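The application to Kepler's equation mentioned above can be sketched numerically. Reverting E = M + e·sin(E) in powers of the eccentricity e gives the classical expansion E = M + e·sin(M) + (e²/2)·sin(2M) + (e³/8)·(3·sin(3M) − sin(M)) + O(e⁴); below, this truncated series is compared with a Newton iteration (the values of M and e are arbitrary illustrations):

```python
# Lagrange reversion applied to Kepler's equation E = M + e*sin(E).
import math

def kepler_series(M, e):
    """Truncated Lagrange-reversion series for the eccentric anomaly."""
    return (M + e * math.sin(M)
              + (e**2 / 2) * math.sin(2 * M)
              + (e**3 / 8) * (3 * math.sin(3 * M) - math.sin(M)))

def kepler_newton(M, e, iters=50):
    """Solve E - e*sin(E) = M by Newton's method, starting from E = M."""
    E = M
    for _ in range(iters):
        E -= (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
    return E

M, e = 1.0, 0.1
print(kepler_series(M, e), kepler_newton(M, e))
```

For small e the two agree to the expected O(e⁴) truncation error.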
Lagrange reversion theorem
Mathematics
311
55,004,265
https://en.wikipedia.org/wiki/Klaas%20Wynne
Klaas Wynne (also Wijnne; born 1964) is a professor in the School of Chemistry at the University of Glasgow and chair of Chemical Physics. He was previously a professor in the Department of Physics at the University of Strathclyde (1996–2010). Education He received his BSc in chemistry from the University of Amsterdam in 1987 and his PhD in chemistry from the University of Amsterdam in 1990 under the supervision of Joop van Voorst. He did his postdoctoral fellowship in the laboratory of Robin Hochstrasser at the University of Pennsylvania. Research Wynne has authored over 90 published scientific papers. His work is focused on the structure and dynamics of liquids and solutions as well as peptides, proteins, and other biomolecules treated as amorphous objects behaving much like liquids. He described the Mayonnaise Effect, which explains the anomalous increase of the viscosity of solutions with concentration in terms of a jamming transition. He is particularly interested in phase behaviour such as "supercooling of liquids, folding transitions in peptides, phase separation and nucleation using laser-tweezing, nucleation of crystals from solution", and liquid-liquid and liquid-crystalline transitions. These phenomena are studied using femtosecond spectroscopies such as ultrafast optical Kerr-effect spectroscopy, time-domain terahertz spectroscopy (THz-TDS) as well as optical microscopy and various other forms of spectroscopy. Awards and honours Chemical Dynamics Award of the Royal Society of Chemistry (RSC), 2018. Associate Editor Journal of the American Chemical Society (JACS), 2017–2020. Elected Fellow of the Royal Society of Edinburgh (FRSE), 2015. Member of Faraday Division Council, 2013–2016. Member of the editorial advisory board of the Journal of Physical Chemistry, 2012–2015. Visiting professor in the Department of Chemical and Process Engineering, University of Strathclyde, 2012–2014. 
Member of the board of Chemical Physics (Elsevier), 2012- Fellow of the Royal Society of Chemistry, 2006. Fellow of the Institute of Physics, 2005. NATO research fellowship, 1991. References External links Chemical Photonics | Wynne lab Ultrafast Chemical Physics in the city of Glasgow The Biomolecular spectroscopy & dynamics Cluster (BioC) 1964 births Living people Academics of the University of Glasgow Fellows of the Royal Society of Edinburgh Fellows of the Royal Society of Chemistry Fellows of the Institute of Physics 21st-century Scottish chemists Physical chemists Scientists from Amsterdam 21st-century Dutch chemists University of Amsterdam alumni
Klaas Wynne
Chemistry
530
21,787,021
https://en.wikipedia.org/wiki/Movat%27s%20stain
Movat's stain is a pentachrome stain originally developed by Henry Zoltan Movat (1923–1995), a Hungarian-Canadian pathologist in Toronto, in 1955 to highlight the various constituents of connective tissue, especially cardiovascular tissue, by five colors in a single stained slide. In 1972, H. K. Russell, Jr. modified the technique so as to reduce the time for staining and to increase the consistency and reliability of the staining, creating the Russell–Movat stain. Principle Modified Russell–Movat staining highlights numerous tissue components in histological slides. It is obtained by a mix of five stains: alcian blue, Verhoeff hematoxylin, and crocein scarlet combined with acidic fuchsine and saffron. At pH 2.5, alcian blue is fixed by electrostatic binding with the acidic mucopolysaccharides. The Verhoeff hematoxylin has a high affinity for nuclei and elastin fibers, which are negatively charged. The combination of crocein scarlet with acidic fuchsine stains acidophilic tissue components in red. Collagen and reticulin fibers are then decolorized by a reaction with phosphotungstic acid and stained yellow by saffron. Uses Modified Russell–Movat staining is used to study the heart, blood vessels and connective tissues. It can also be used to diagnose vascular and lung diseases. Gallery References See also Cardiovascular disease Staining
Movat's stain
Chemistry,Biology
311
37,662,322
https://en.wikipedia.org/wiki/Brandt%20matrix
In mathematics, Brandt matrices are matrices, introduced by Heinrich Brandt, that are related to the number of ideals of given norm in an ideal class of a definite quaternion algebra over the rationals, and that give a representation of the Hecke algebra. Eichler calculated the traces of the Brandt matrices. Let O be an order in a quaternion algebra with class number H, and I1, ..., IH invertible left O-ideals representing the classes. Fix an integer m. Let ej denote the number of units in the right order of Ij, and let Bij denote the number of α in Ij−1Ii with reduced norm N(α) equal to mN(Ii)/N(Ij), divided by ej. The Brandt matrix B(m) is the H×H matrix with entries Bij. Up to conjugation by a permutation matrix it is independent of the choice of representatives Ij; it is dependent only on the level of the order O. References Number theory Matrices
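For a maximal order with class number 1, such as the Hurwitz quaternions (where the algebra is ramified only at 2), B(m) is a 1×1 matrix and its entry reduces to counting elements of norm m, normalized by the e_1 = 24 units. The classical count of Hurwitz quaternions of norm m is 24 times the sum of the odd divisors of m; the brute-force sketch below (a worked illustration, not from the article) checks this:

```python
# Brute-force count of Hurwitz quaternions of reduced norm m.
from itertools import product

def hurwitz_count(m):
    """Number of Hurwitz quaternions a + b*i + c*j + d*k of norm m
    (components all integers, or all half-odd-integers)."""
    count = 0
    bound = int(m**0.5) + 1
    # all-integer (Lipschitz) quaternions
    for a, b, c, d in product(range(-bound, bound + 1), repeat=4):
        if a*a + b*b + c*c + d*d == m:
            count += 1
    # components all half-odd-integers k/2 (k odd); then 4*m = sum of odd squares
    odds = [k for k in range(-2 * bound - 1, 2 * bound + 2) if k % 2 != 0]
    for a, b, c, d in product(odds, repeat=4):
        if a*a + b*b + c*c + d*d == 4 * m:
            count += 1
    return count

def sigma_odd(m):
    return sum(d for d in range(1, m + 1) if m % d == 0 and d % 2 == 1)

for m in range(1, 8):
    print(m, hurwitz_count(m), 24 * sigma_odd(m))
```

Dividing the count by the 24 units gives the Brandt matrix entry σ_odd(m), consistent with its role as a Hecke eigenvalue in this class-number-one case.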
Brandt matrix
Mathematics
207
15,706,459
https://en.wikipedia.org/wiki/Peter%20Debye%20Award
The Peter Debye Award in Physical Chemistry is awarded annually by the American Chemical Society "to encourage and reward outstanding research in physical chemistry". The award is named after Peter Debye and granted without regard to age or nationality. Recipients 2020 Laura Gagliardi 2019 Daniel M. Neumark 2018 2017 2016 Mark A. Ratner 2015 Xiaoliang Sunney Xie 2014 Henry F. Schaefer III 2013 William E. Moerner 2012 David Chandler 2011 Louis E. Brus 2010 George Schatz 2009 Richard J. Saykally 2008 Michael L. Klein 2007 John T. Yates, Jr. 2006 Donald Truhlar 2005 Stephen Leone 2004 William Carl Lineberger 2003 William H. Miller 2002 Giacinto Scoles 2001 John Ross 2000 Peter G. Wolynes 1999 Jesse L. Beauchamp 1998 Graham R. Fleming 1997 Robin M. Hochstrasser 1996 Ahmed Zewail 1995 John C. Tully 1994 William A. Klemperer 1993 F. Sherwood Rowland 1992 Frank H. Stillinger 1991 Richard N. Zare 1990 Harden M. McConnell 1989 Gabor A. Somorjai 1988 Rudolph A. Marcus 1987 Harry G. Drickamer 1986 Yuan T. Lee 1985 Stuart A. Rice 1984 B. Seymour Rabinovitch 1983 George C. Pimentel 1982 Peter M. Rentzepis 1981 Richard B. Bernstein 1976 Robert W. Zwanzig 1975 Herbert S. Gutowsky 1974 Walter H. Stockmayer 1973 William N. Lipscomb, Jr. 1972 Clyde A. Hutchison, Jr. 1971 Norman Davidson 1970 1969 Paul J. Flory 1968 George B. Kistiakowsky 1967 Joseph E. Mayer 1966 Joseph O. Hirschfelder 1965 Lars Onsager 1964 Henry Eyring 1963 Robert S. Mulliken 1962 E. Bright Wilson, Jr. See also List of chemistry awards References Awards of the American Chemical Society Physical chemistry Awards established in 1962 Peter Debye
Peter Debye Award
Physics,Chemistry
398
636,424
https://en.wikipedia.org/wiki/Manilkara%20bidentata
Manilkara bidentata is a species of Manilkara native to a large area of northern South America, Central America and the Caribbean. Common names include bulletwood, balatá, ausubo, massaranduba, quinilla, and (ambiguously) "cow-tree". Description The balatá is a large tree, growing to tall. The leaves are alternate, elliptical, entire, and long. The flowers are white, and are produced at the beginning of the rainy season. The fruit is a yellow berry, in diameter, which is edible; it contains one (occasionally two) seed(s). Its latex is used industrially for products such as chicle. Uses The latex is extracted in the same manner in which sap is extracted from the rubber tree. It is then dried to form an inelastic rubber-like material. It is almost identical to gutta-percha (produced from a closely related southeast Asian tree), and is sometimes called gutta-balatá. Balatá was often used in the production of high-quality golf balls, to use as the outer layer of the ball. Balatá-covered balls have a high spin rate, but do not travel as far as most balls with a Surlyn cover. Due to the nondurable nature of the material the golf club strikes, balatá-covered balls do not last long before needing to be replaced. While once favored by professional and low-handicap players, they are now obsolete, replaced by newer Surlyn and urethane technology. In 1943, Major League Baseball used balata instead of rubber in its baseballs due to wartime rationing. The balata balls initially displayed significantly less resilience (bounce) than rubber-core balls, forcing MLB to reformulate the balata-ball design several weeks into the season. MLB resumed using rubber in 1944. Today, Brazil is the largest producer of Massaranduba wood, where it is cut in the Amazon rainforest. The tree is a hardwood with a red heart, which is used for furniture and as a construction material where it grows. 
Locals often refer to it as bulletwood for its extremely hard wood, which is so dense that it does not float in water. Drilling is necessary to drive nailed connections. In trade, it is occasionally (and incorrectly) called "brazilwood". The fruit, like that of the related sapodilla (M. zapota), is edible. Though its heartwood may present in a shade of purple, Manilkara bidentata should not be confused with another tropical tree widely known as "purpleheart", Peltogyne pubescens. This timber is being used to produce outdoor furniture and is being marketed as "Pacific Jarrah" in Australia. References External links bidentata Plants described in 1807 Trees of South America Natural materials Organic polymers Rubber Elastomers
Manilkara bidentata
Physics,Chemistry
591
4,866,818
https://en.wikipedia.org/wiki/NSMB%20%28mathematics%29
NSMB is a computer system for solving Navier–Stokes equations using the finite volume method. It supports meshes built of several blocks (multi-blocks) and supports parallelisation. The name stands for "Navier–Stokes multi-block". It was developed by a consortium of European scientific institutions and companies, between 1992 and 2003. References Numerical software
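The finite volume idea at the core of such solvers can be illustrated with a generic one-dimensional sketch (first-order upwind for linear advection on a periodic domain). This is purely illustrative and not NSMB source code:

```python
# Generic 1-D finite-volume scheme: first-order upwind for u_t + a*u_x = 0.
import math

def advect_fv(u, a, dx, dt, steps):
    """Update cell averages from upwind face fluxes (requires a > 0 and
    CFL number a*dt/dx <= 1 for stability); periodic boundaries."""
    u = list(u)
    n = len(u)
    c = a * dt / dx
    for _ in range(steps):
        # flux difference across each cell; u[i-1] wraps around periodically
        u = [u[i] - c * (u[i] - u[i - 1]) for i in range(n)]
    return u

n = 100
dx = 1.0 / n
u0 = [math.sin(2 * math.pi * (i + 0.5) * dx) for i in range(n)]
# advect a sine wave once around the periodic domain (total time = 1.0)
u1 = advect_fv(u0, a=1.0, dx=dx, dt=0.5 * dx, steps=2 * n)
print(max(abs(p - q) for p, q in zip(u1, u0)))  # small first-order diffusion error
```

Because cell updates exchange face fluxes, the scheme conserves the total of u exactly, which is the defining property of finite volume discretizations.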
NSMB (mathematics)
Mathematics
74
21,066,211
https://en.wikipedia.org/wiki/Greenwood%20Clean%20Energy
Greenwood Clean Energy, Inc. is an American manufacturing company headquartered in Redmond, Washington, that manufactures wood and biomass central heating systems. History The company was founded in 2009. A year later, the Clean Energy Company (CEC) acquired the Greenwood brand, certain assets, and intellectual property of Greenwood Technologies. CEC became Greenwood Clean Energy, Inc. and began marketing its appliances under the Greenwood brand. Emissions Greenwood has participated with the U.S. Environmental Protection Agency to advocate for the use of more efficient wood boilers. Greenwood Clean Energy's Frontier CX heating appliance meets the requirements for efficiency and emissions outlined in the Washington State Department of Ecology standards of less than 4.5 grams of particulate matter per hour using the Douglas Fir test fuel. Awards Greenwood has received numerous awards, including: 2008 Reader's Choice Award, Contractor magazine; Plumbing & Mechanical magazine; 2007 Brilliant Innovation Award, Discover Brilliant International Conference; 2007 AHR Expo Innovation Award, AHR Expo; and 2006 Vesta Award: Groundbreaking Advancement in Renewable Fuel Heating, Hearth and Home. References Boilers Companies based in Redmond, Washington Residential heating Heaters
Greenwood Clean Energy
Chemistry
230
34,855,349
https://en.wikipedia.org/wiki/Richard%20Stribeck
Richard Stribeck (7 July 1861, Stuttgart – 29 March 1950) was a German engineer, after whom the Stribeck curve is named. Life Stribeck studied mechanical engineering at the Technical University of Stuttgart from 1880 to 1885 and then worked as a designer in Königsberg. In 1888 he became a professor in Stuttgart, and in 1890 Professor of Mechanical Engineering at the Technical University of Darmstadt. In 1893 he took a professorship at the Dresden Technical University, and in 1896 became head of the university's laboratory. From 1898 he headed the physical metallurgy department of the Technical Institute and directed the private military-industrial laboratory (Center for Scientific-Technical Research) in Neubabelsberg. In 1902 he described the friction coefficient in lubricated bearings, now known as the Stribeck curve. From 1908 Stribeck worked for Friedrich Krupp AG in Essen, and from 1919 at Robert Bosch GmbH in Stuttgart. Stribeck was a college friend of the industrialist Robert Bosch; the two had studied together at the Royal Württemberg Polytechnic in Stuttgart and remained close throughout their lives. In recognition of his services, Stribeck was nominated for the Wilhelm Exner Medal. Work Richard Stribeck carried out studies in the field of tribology, focusing on friction in lubricated sliding contacts such as journal bearings. His work led to the development of the Stribeck curve, a fundamental tribological concept that shows how operating conditions (in particular normal load, lubricant viscosity and lubricant entrainment velocity) influence the friction coefficient in fluid-lubricated contacts. For these contributions, he was named as one of 23 "Men of Tribology" by Duncan Dowson.
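The Stribeck curve is conventionally plotted against the Hersey parameter ηv/P (lubricant viscosity times entrainment velocity divided by normal load), which combines exactly the operating conditions named above. As a hedged illustration, the sketch below computes this parameter and bins it into the three classical lubrication regimes; the threshold values are placeholders for illustration, not standard constants.

```python
def hersey_number(viscosity, velocity, load):
    """Hersey parameter eta*v/P, the horizontal axis of the Stribeck
    curve (units and normalisation vary between conventions)."""
    return viscosity * velocity / load

def lubrication_regime(h, boundary_limit=1e-9, mixed_limit=1e-7):
    """Classify a Hersey value into the three classical regimes.
    The default thresholds are illustrative placeholders only."""
    if h < boundary_limit:
        return "boundary"       # asperity contact dominates, high friction
    if h < mixed_limit:
        return "mixed"          # partial fluid film, friction falling
    return "hydrodynamic"       # full film, friction rises with viscosity
```

Along the curve, friction is high in the boundary regime, falls through the mixed regime to a minimum, then rises again in the hydrodynamic regime as viscous shear grows.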
Popular culture On Tim Allen's sitcom Last Man Standing, on ABC, Allen's wife attempts to prove there are no ghosts at work by explaining how frictional contact mechanics caused a cold glass to slide spontaneously across a counter, finishing with the statement that "it's your basic Stribeck curve". References German mechanical engineers Engineers from Stuttgart 1861 births 1950 deaths Tribologists
Richard Stribeck
Materials_science
436
3,000,842
https://en.wikipedia.org/wiki/Unique%20games%20conjecture
In computational complexity theory, the unique games conjecture (often referred to as UGC) is a conjecture made by Subhash Khot in 2002. The conjecture postulates that the problem of determining the approximate value of a certain type of game, known as a unique game, has NP-hard computational complexity. It has broad applications in the theory of hardness of approximation. If the unique games conjecture is true and P ≠ NP, then for many important problems it is not only impossible to get an exact solution in polynomial time (as postulated by the P versus NP problem), but also impossible to get a good polynomial-time approximation. The problems for which such an inapproximability result would hold include constraint satisfaction problems, which crop up in a wide variety of disciplines. The conjecture is unusual in that the academic world seems about evenly divided on whether it is true or not. Formulations The unique games conjecture can be stated in a number of equivalent ways. Unique label cover The following formulation of the unique games conjecture is often used in hardness of approximation. The conjecture postulates the NP-hardness of the following promise problem known as label cover with unique constraints. For each edge, the colors on the two vertices are restricted to some particular ordered pairs. Unique constraints means that for each edge none of the ordered pairs have the same color for the same node. This means that an instance of label cover with unique constraints over an alphabet of size k can be represented as a directed graph together with a collection of permutations πe: [k] → [k], one for each edge e of the graph. An assignment to a label cover instance gives to each vertex of G a value in the set [k] = {1, 2, ... k}, often called “colours.” Such instances are strongly constrained in the sense that the colour of a vertex uniquely defines the colours of its neighbours, and hence for its entire connected component. 
Thus, if the input instance admits a valid assignment, then such an assignment can be found efficiently by iterating over all colours of a single node. In particular, the problem of deciding if a given instance admits a satisfying assignment can be solved in polynomial time. The value of a unique label cover instance is the fraction of constraints that can be satisfied by any assignment. For satisfiable instances, this value is 1 and is easy to find. On the other hand, it seems to be very difficult to determine the value of an unsatisfiable game, even approximately. The unique games conjecture formalises this difficulty. More formally, the (c, s)-gap label-cover problem with unique constraints is the following promise problem (Lyes, Lno): Lyes = {G: Some assignment satisfies at least a c-fraction of constraints in G} Lno = {G: Every assignment satisfies at most an s-fraction of constraints in G} where G is an instance of the label cover problem with unique constraints. The unique games conjecture states that for every sufficiently small pair of constants ε, δ > 0, there exists a constant k such that the (1 − δ, ε)-gap label-cover problem with unique constraints over alphabet of size k is NP-hard. Maximizing Linear Equations Modulo k Consider the following system of linear equations over the integers modulo k: When each equation involves exactly two variables, this is an instance of the label cover problem with unique constraints; such instances are known as instances of the Max2Lin(k) problem. It is not immediately obvious that the inapproximability of Max2Lin(k) is equivalent to the UGC, but this is in fact the case, by a reduction. Namely, the UGC is equivalent to: for every sufficiently small pair of constants ε, δ > 0, there exists a constant k such that the (1 − δ, ε)-gap Max2Lin(k) problem is NP-hard. 
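For very small instances, the value of a unique label cover instance can be computed exactly by brute force over all colourings, which makes the definitions above concrete. This is a sketch; `ulc_value` and its encoding of the permutation constraints are illustrative, not from the literature.

```python
from itertools import product

def ulc_value(num_vertices, k, constraints):
    """Exact value of a unique-label-cover instance by brute force.
    constraints: list of (u, v, pi) where pi is a tuple giving a
    permutation of range(k); the directed edge (u, v) is satisfied
    when colour[v] == pi[colour[u]]."""
    best = 0.0
    for colouring in product(range(k), repeat=num_vertices):
        sat = sum(1 for u, v, pi in constraints
                  if colouring[v] == pi[colouring[u]])
        best = max(best, sat / len(constraints))
    return best

# A triangle over alphabet size k = 2.  When the permutations around the
# cycle compose to the identity, the instance is fully satisfiable.
ident = (0, 1)
swap = (1, 0)
satisfiable = [(0, 1, swap), (1, 2, swap), (0, 2, ident)]
frustrated = [(0, 1, swap), (1, 2, swap), (0, 2, swap)]
```

The `satisfiable` instance has value 1 (propagating any colour of vertex 0 satisfies everything), while `frustrated` forces vertex 2 to take two different colours at once, so at most 2 of its 3 constraints can hold.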
Connection with computational topology It has been argued that the UGC is essentially a question of computational topology, involving local-global principles (the latter are also evident in the proof of the 2-2 Games Conjecture, see below). Linial observed that unique label cover is an instance of the Maximum Section of a Covering Graph problem (covering graphs is the terminology from topology; in the context of unique games these are often referred to as graph lifts). To date, all known problems whose inapproximability is equivalent to the UGC are instances of this problem, including Unique Label Cover and Max2Lin(k). When the latter two problems are viewed as instances of Max Section of a Covering Graph, the reduction between them preserves the structure of the graph covering spaces, so not only the problems, but the reduction between them has a natural topological interpretation. Grochow and Tucker-Foltz exhibited a third computational topology problem whose inapproximability is equivalent to the UGC: 1-Cohomology Localization on Triangulations of 2-Manifolds. Two-prover proof systems A unique game is a special case of a two-prover one-round (2P1R) game. A two-prover one-round game has two players (also known as provers) and a referee. The referee sends each player a question drawn from a known probability distribution, and the players each have to send an answer. The answers come from a set of fixed size. The game is specified by a predicate that depends on the questions sent to the players and the answers provided by them. The players may decide on a strategy beforehand, although they cannot communicate with each other during the game. The players win if the predicate is satisfied by their questions and their answers. A two-prover one-round game is called a unique game if for every question and every answer by the first player, there is exactly one answer by the second player that results in a win for the players, and vice versa. 
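The value of such a game, its maximum winning probability, can be found for tiny instances by enumerating deterministic strategy pairs, which suffice to attain the optimum. As an illustrative example (names below are assumptions, not standard code), the well-known CHSH game fits the unique-game format: for each question pair and each first-prover answer there is exactly one winning second-prover answer, and its classical value is 3/4.

```python
from itertools import product

def game_value(n_q1, n_q2, n_ans, dist, predicate):
    """Exact value of a two-prover one-round game: the best winning
    probability over all pairs of deterministic strategies (a strategy
    maps each question index to an answer index)."""
    best = 0.0
    for s1 in product(range(n_ans), repeat=n_q1):
        for s2 in product(range(n_ans), repeat=n_q2):
            win = sum(p * predicate(q1, q2, s1[q1], s2[q2])
                      for (q1, q2), p in dist.items())
            best = max(best, win)
    return best

# CHSH as a unique game: the players win iff a1 XOR a2 == q1 AND q2,
# so each (question pair, a1) admits exactly one winning a2.
chsh_dist = {(q1, q2): 0.25 for q1 in (0, 1) for q2 in (0, 1)}
chsh_pred = lambda q1, q2, a1, a2: (a1 ^ a2) == (q1 & q2)
```

No deterministic pair can win all four question pairs, and the constant strategy "always answer 0" wins three of them, giving value 0.75.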
The value of a game is the maximum winning probability for the players over all strategies. The unique games conjecture states that for every sufficiently small pair of constants ε, δ > 0, there exists a constant k such that the following promise problem (Lyes, Lno) is NP-hard: Lyes = {G: the value of G is at least 1 − δ} Lno = {G: the value of G is at most ε} where G is a unique game whose answers come from a set of size k. Probabilistically checkable proofs Alternatively, the unique games conjecture postulates the existence of a certain type of probabilistically checkable proof for problems in NP. A unique game can be viewed as a special kind of nonadaptive probabilistically checkable proof with query complexity 2, where for each pair of possible queries of the verifier and each possible answer to the first query, there is exactly one possible answer to the second query that makes the verifier accept, and vice versa. The unique games conjecture states that for every sufficiently small pair of constants there is a constant such that every problem in NP has a probabilistically checkable proof over an alphabet of size with completeness , soundness , and randomness complexity which is a unique game. Relevance The unique games conjecture was introduced by Subhash Khot in 2002 in order to make progress on certain questions in the theory of hardness of approximation. The truth of the unique games conjecture would imply the optimality of many known approximation algorithms (assuming P ≠ NP). For example, the approximation ratio achieved by the algorithm of Goemans and Williamson for approximating the maximum cut in a graph is optimal to within any additive constant assuming the unique games conjecture and P ≠ NP. A list of results that the unique games conjecture is known to imply is shown in the adjacent table together with the corresponding best results for the weaker assumption P ≠ NP. 
A constant of or means that the result holds for every constant (with respect to the problem size) strictly greater than or less than , respectively. Discussion and alternatives Currently, there is no consensus regarding the truth of the unique games conjecture. Certain stronger forms of the conjecture have been disproved. A different form of the conjecture postulates that distinguishing the case when the value of a unique game is at least from the case when the value is at most is impossible for polynomial-time algorithms (but perhaps not NP-hard). This form of the conjecture would still be useful for applications in hardness of approximation. The constant in the above formulations of the conjecture is necessary unless P = NP. If the uniqueness requirement is removed the corresponding statement is known to be true by the parallel repetition theorem, even Results Marek Karpinski and Warren Schudy have constructed linear time approximation schemes for dense instances of unique games problem. In 2008, Prasad Raghavendra has shown that if the unique games conjecture is true, then for every constraint satisfaction problem the best approximation ratio is given by a certain simple semidefinite programming instance, which is in particular polynomial. In 2010, Prasad Raghavendra and David Steurer defined the gap-small-set expansion problem, and conjectured that it is NP-hard. The resulting small set expansion hypothesis implies the unique games conjecture. It has also been used to prove strong hardness of approximation results for finding complete bipartite subgraphs. In 2010, Sanjeev Arora, Boaz Barak and David Steurer found a subexponential time approximation algorithm for the unique games problem. A key ingredient in their result was the spectral algorithm of Alexandra Kolla (see also the earlier manuscript of A. Kolla and Madhur Tulsiani). 
The latter also re-proved that unique games on expander graphs could be solved in polynomial time, and was one of (if not the) first graph algorithms to take advantage of the full spectrum of a graph rather than just its first two eigenvalues. In 2012, it was shown that distinguishing instances with value at most from instances with value at least is NP-hard. In 2018, after a series of papers, a weaker version of the conjecture, called the 2-2 games conjecture, was proven. In a certain sense, this proves "a half" of the original conjecture. This also improves the best known gap for unique label cover: it is NP-hard to distinguish instances with value at most from instances with value at least . References Further reading . 2002 in computing Approximation algorithms Computational complexity theory Computational hardness assumptions Unsolved problems in computer science Conjectures
Unique games conjecture
Mathematics
2,167
58,527
https://en.wikipedia.org/wiki/Finitely%20generated%20abelian%20group
In abstract algebra, an abelian group is called finitely generated if there exist finitely many elements in such that every in can be written in the form for some integers . In this case, we say that the set is a generating set of or that generate . So, finitely generated abelian groups can be thought of as a generalization of cyclic groups. Every finite abelian group is finitely generated. The finitely generated abelian groups can be completely classified. Examples The integers, , are a finitely generated abelian group. The integers modulo , , are a finite (hence finitely generated) abelian group. Any direct sum of finitely many finitely generated abelian groups is again a finitely generated abelian group. Every lattice forms a finitely generated free abelian group. There are no other examples (up to isomorphism). In particular, the group of rational numbers is not finitely generated: if are rational numbers, pick a natural number coprime to all the denominators; then cannot be generated by . The group of non-zero rational numbers is also not finitely generated. The groups of real numbers under addition and non-zero real numbers under multiplication are also not finitely generated. Classification The fundamental theorem of finitely generated abelian groups can be stated two ways, generalizing the two forms of the fundamental theorem of finite abelian groups. The theorem, in both forms, in turn generalizes to the structure theorem for finitely generated modules over a principal ideal domain, which in turn admits further generalizations. Primary decomposition The primary decomposition formulation states that every finitely generated abelian group G is isomorphic to a direct sum of primary cyclic groups and infinite cyclic groups. A primary cyclic group is one whose order is a power of a prime. 
That is, every finitely generated abelian group is isomorphic to a group of the form where n ≥ 0 is the rank, and the numbers q1, ..., qt are powers of (not necessarily distinct) prime numbers. In particular, G is finite if and only if n = 0. The values of n, q1, ..., qt are (up to rearranging the indices) uniquely determined by G, that is, there is one and only one way to represent G as such a decomposition. The proof of this statement uses the basis theorem for finite abelian group: every finite abelian group is a direct sum of primary cyclic groups. Denote the torsion subgroup of G as tG. Then, G/tG is a torsion-free abelian group and thus it is free abelian. tG is a direct summand of G, which means there exists a subgroup F of G s.t. , where . Then, F is also free abelian. Since tG is finitely generated and each element of tG has finite order, tG is finite. By the basis theorem for finite abelian group, tG can be written as direct sum of primary cyclic groups. Invariant factor decomposition We can also write any finitely generated abelian group G as a direct sum of the form where k1 divides k2, which divides k3 and so on up to ku. Again, the rank n and the invariant factors k1, ..., ku are uniquely determined by G (here with a unique order). The rank and the sequence of invariant factors determine the group up to isomorphism. Equivalence These statements are equivalent as a result of the Chinese remainder theorem, which implies that if and only if j and k are coprime. History The history and credit for the fundamental theorem is complicated by the fact that it was proven when group theory was not well-established, and thus early forms, while essentially the modern result and proof, are often stated for a specific case. Briefly, an early form of the finite case was proven by Gauss in 1801, the finite case was proven by Kronecker in 1870, and stated in group-theoretic terms by Frobenius and Stickelberger in 1878. 
The finitely presented case is solved by Smith normal form, and hence frequently credited to , though the finitely generated case is sometimes instead credited to Poincaré in 1900; details follow. Group theorist László Fuchs states: The fundamental theorem for finite abelian groups was proven by Leopold Kronecker in 1870, using a group-theoretic proof, though without stating it in group-theoretic terms; a modern presentation of Kronecker's proof is given in , 5.2.2 Kronecker's Theorem, 176–177. This generalized an earlier result of Carl Friedrich Gauss from Disquisitiones Arithmeticae (1801), which classified quadratic forms; Kronecker cited this result of Gauss's. The theorem was stated and proved in the language of groups by Ferdinand Georg Frobenius and Ludwig Stickelberger in 1878. Another group-theoretic formulation was given by Kronecker's student Eugen Netto in 1882. The fundamental theorem for finitely presented abelian groups was proven by Henry John Stephen Smith in , as integer matrices correspond to finite presentations of abelian groups (this generalizes to finitely presented modules over a principal ideal domain), and Smith normal form corresponds to classifying finitely presented abelian groups. The fundamental theorem for finitely generated abelian groups was proven by Henri Poincaré in 1900, using a matrix proof (which generalizes to principal ideal domains). This was done in the context of computing the homology of a complex, specifically the Betti number and torsion coefficients of a dimension of the complex, where the Betti number corresponds to the rank of the free part, and the torsion coefficients correspond to the torsion part. Kronecker's proof was generalized to finitely generated abelian groups by Emmy Noether in 1926. 
Corollaries Stated differently the fundamental theorem says that a finitely generated abelian group is the direct sum of a free abelian group of finite rank and a finite abelian group, each of those being unique up to isomorphism. The finite abelian group is just the torsion subgroup of G. The rank of G is defined as the rank of the torsion-free part of G; this is just the number n in the above formulas. A corollary to the fundamental theorem is that every finitely generated torsion-free abelian group is free abelian. The finitely generated condition is essential here: is torsion-free but not free abelian. Every subgroup and factor group of a finitely generated abelian group is again finitely generated abelian. The finitely generated abelian groups, together with the group homomorphisms, form an abelian category which is a Serre subcategory of the category of abelian groups. Non-finitely generated abelian groups Note that not every abelian group of finite rank is finitely generated; the rank 1 group is one counterexample, and the rank-0 group given by a direct sum of countably infinitely many copies of is another one. See also The composition series in the Jordan–Hölder theorem is a non-abelian generalization. Notes References Reprinted (pp. 367–409) in The Collected Mathematical Papers of Henry John Stephen Smith, Vol. I, edited by J. W. L. Glaisher. Oxford: Clarendon Press (1894), xcv+603 pp. Abelian group theory Algebraic structures
Finitely generated abelian group
Mathematics
1,545
55,726,954
https://en.wikipedia.org/wiki/Integrated%20transport%20network
An integrated transport network is a transport system that allows travellers to have a seamless, rapid public transport experience. Journeys are optimised to have as little interchange as possible, services are scheduled to minimise waiting times, and ticketing or other administrative tasks are reduced to the minimum. The concept may be applied to single transport modes or to combinations. History One of the first integrated transport networks was the Rede Integrada de Transporte in Curitiba, Brazil. Opened in 1979, the RIT was one of the first systems to merge multiple aspects of a metropolitan transport network into one. In the UK, the concept of "integrated transport" was first publicised by Tony Blair's 1997 Labour government in a 1998 white paper, titled "A New Deal for Transport: Better for everyone". A response to growing concerns around traffic congestion, it suggested that improving and better "integrating" existing public transport systems would reduce overall congestion. Planning In order to achieve an integrated network including different modes of transport, all agencies responsible for the various modes must work effectively together. If they are fragmented, and especially if they are in direct competition with each other, effective joint working is unlikely. Traveller information Prospective travellers must be able to access information about their possible journey. For public transport services, travellers should be able to access real-time information on services, or should (in high-use areas) be confident that the next service will arrive very shortly. See also Transit-oriented development References Intermodal transport Sustainable urban planning Sustainable transport Transit-oriented developments
Integrated transport network
Physics
313
382,683
https://en.wikipedia.org/wiki/Landing%20gear
Landing gear is the undercarriage of an aircraft or spacecraft that is used for taxiing, takeoff or landing. For aircraft, it is generally needed for all three of these. It was also formerly called alighting gear by some manufacturers, such as the Glenn L. Martin Company. For aircraft, Stinton makes the terminology distinction undercarriage (British) = landing gear (US). For aircraft, the landing gear supports the craft when it is not flying, allowing it to take off, land, and taxi without damage. Wheeled landing gear is the most common, with skis or floats needed to operate from snow/ice/water and skids for vertical operation on land. Retractable undercarriages fold away during flight, which reduces drag, allowing for faster airspeeds. Landing gear must be strong enough to support the aircraft and its design affects the weight, balance and performance. It often comprises three wheels, or wheel-sets, giving a tripod effect. Some unusual landing gear have been evaluated experimentally. These include: no landing gear (to save weight), made possible by operating from a catapult cradle and flexible landing deck: air cushion (to enable operation over a wide range of ground obstacles and water/snow/ice); tracked (to reduce runway loading). For launch vehicles and spacecraft landers, the landing gear usually only supports the vehicle on landing and during subsequent surface movement, and is not used for takeoff. Given their varied designs and applications, there exist dozens of specialized landing gear manufacturers. The three largest are Safran Landing Systems, Collins Aerospace (part of Raytheon Technologies) and Héroux-Devtek. Aircraft The landing gear represents 2.5 to 5% of the maximum takeoff weight (MTOW) and 1.5 to 1.75% of the aircraft cost, but 20% of the airframe direct maintenance cost. 
A suitably-designed wheel can support , tolerate a ground speed of 300 km/h and roll a distance of ; it has a time between overhauls of 20,000 hours and a lifetime of 60,000 hours or 20 years. Gear arrangements Wheeled undercarriages normally come in two types: Conventional landing gear or "taildragger", where there are two main wheels towards the front of the aircraft and a single, much smaller, wheel or skid at the rear. The same helicopter arrangement is called tricycle tailwheel. Tricycle landing gear, where there are two main wheels (or wheel assemblies) under the wings and a third smaller wheel in the nose. The PZL.37 Łoś was the first bomber aircraft with twin wheels on a single shock absorber. The same helicopter arrangement is called tricycle nosewheel. The taildragger arrangement was common during the early propeller era, as it allows more room for propeller clearance. Most modern aircraft have tricycle undercarriages. Taildraggers are considered harder to land and take off (because the arrangement is usually unstable, that is, a small deviation from straight-line travel will tend to increase rather than correct itself), and usually require special pilot training. A small tail wheel or skid/bumper may be added to a tricycle undercarriage to prevent damage to the underside of the fuselage if over-rotation occurs on take-off leading to a tail strike. Aircraft with tail-strike protection include the B-29 Superfortress, Boeing 727 trijet and Concorde. Some aircraft with retractable conventional landing gear have a fixed tailwheel. Hoerner estimated the drag of the Bf 109 fixed tailwheel and compared it with that of other protrusions such as the pilot's canopy. A third arrangement (known as tandem or bicycle) has the main and nose gear located fore and aft of the center of gravity (CG) under the fuselage with outriggers on the wings.
This is used when there is no convenient location on either side of the fuselage to attach the main undercarriage or to store it when retracted. Examples include the Lockheed U-2 spy plane and the Harrier jump jet. The Boeing B-52 uses a similar arrangement, except that the fore and aft gears each have two twin-wheel units side by side. Quadricycle gear is similar to bicycle but with two sets of wheels displaced laterally in the fore and aft positions. Raymer classifies the B-52 gear as quadricycle. The experimental Fairchild XC-120 Packplane had quadricycle gear located in the engine nacelles to allow unrestricted access beneath the fuselage for attaching a large freight container. Helicopters use skids, pontoons or wheels depending on their size and role. Retractable gear To decrease drag in flight, undercarriages retract into the wings and/or fuselage with wheels flush with the surrounding surface, or concealed behind flush-mounted doors; this is called retractable gear. If the wheels do not retract completely but protrude partially exposed to the airstream, it is called a semi-retractable gear. Most retractable gear is hydraulically operated, though some is electrically operated or even manually operated on very light aircraft. The landing gear is stowed in a compartment called a wheel well. Pilots confirming that their landing gear is down and locked refer to "three greens" or "three in the green", a reference to the electrical indicator lights (or painted panels of mechanical indicator units) from the nosewheel/tailwheel and the two main gears. Blinking green lights or red lights indicate the gear is in transit and neither up and locked nor down and locked. When the gear is fully stowed with the up-locks secure, the lights often extinguish to follow the dark cockpit philosophy; some airplanes have gear-up indicator lights.
Redundant systems are used to operate the landing gear and redundant main gear legs may also be provided so the aircraft can be landed in a satisfactory manner in a range of failure scenarios. The Boeing 747 was given four separate and independent hydraulic systems (when previous airliners had two) and four main landing gear posts (when previous airliners had two). Safe landing would be possible if two main gear legs were torn off provided they were on opposite sides of the fuselage. In the case of power failure in a light aircraft, an emergency extension system is always available. This may be a manually operated crank or pump, or a mechanical free-fall mechanism which disengages the uplocks and allows the landing gear to fall under gravity. Shock absorbers Aircraft landing gear includes wheels equipped with solid shock absorbers on light planes, and air/oil oleo struts on larger aircraft. Large aircraft As aircraft weights have increased more wheels have been added and runway thickness has increased to keep within the runway loading limit. The Zeppelin-Staaken R.VI, a large German World War I long-range bomber of 1916, used eighteen wheels for its undercarriage, split between two wheels on its nose gear struts, and sixteen wheels on its main gear units—split into four side-by-side quartets each, two quartets of wheels per side—under each tandem engine nacelle, to support its loaded weight of almost . Multiple "tandem wheels" on an aircraft—particularly for cargo aircraft, mounted to the fuselage lower sides as retractable main gear units on modern designs—were first seen during World War II, on the experimental German Arado Ar 232 cargo aircraft, which used a row of eleven "twinned" fixed wheel sets directly under the fuselage centerline to handle heavier loads while on the ground. Many of today's large cargo aircraft use this arrangement for their retractable main gear setups, usually mounted on the lower corners of the central fuselage structure. 
The prototype Convair XB-36 had most of its weight on two main wheels, which needed runways at least thick. Production aircraft used two four-wheel bogies, allowing the aircraft to use any airfield suitable for a B-29. A relatively light Lockheed JetStar business jet, with four wheels supporting , needed a thick flexible asphalt pavement. The Boeing 727-200 with four tires on two legs main landing gears required a thick pavement. The thickness rose to for a McDonnell Douglas DC-10-10 with supported on eight wheels on two legs. The heavier, , DC-10-30/40 were able to operate from the same thickness pavements with a third main leg for ten wheels, like the first Boeing 747-100, weighing on four legs and 16 wheels. The similar-weight Lockheed C-5, with 24 wheels, needs an pavement. The twin-wheel unit on the fuselage centerline of the McDonnell Douglas DC-10-30/40 was retained on the MD-11 airliner and the same configuration was used on the initial Airbus A340-200/300, which evolved in a complete four-wheel undercarriage bogie for the heavier Airbus A340-500/-600. The up to Boeing 777 has twelve main wheels on two three-axles bogies, like the later Airbus A350. The Airbus A380 has a four-wheel bogie under each wing with two sets of six-wheel bogies under the fuselage. The Antonov An-225, the largest cargo aircraft, had 4 wheels on the twin-strut nose gear units like the smaller Antonov An-124, and 28 main gear wheels. The A321neo has a twin-wheel main gear inflated to 15.7 bar (228 psi), while the A350-900 has a four-wheel main gear inflated to 17.1 bar (248 psi). STOL aircraft STOL aircraft have a higher sink-rate requirement if a carrier-type, no-flare landing technique has to be adopted to reduce touchdown scatter. For example, the Saab 37 Viggen, with landing gear designed for a 5m/sec impact, could use a carrier-type landing and HUD to reduce its scatter from 300 m to 100m. 
The de Havilland Canada DHC-4 Caribou used long-stroke legs to land from a steep approach with no float. Operation from water A flying boat has a lower fuselage with the shape of a boat hull giving it buoyancy. Wing-mounted floats or stubby wing-like sponsons are added for stability. Sponsons are attached to the lower sides of the fuselage. A floatplane has two or three streamlined floats. Amphibious floats have retractable wheels for land operation. An amphibious aircraft or amphibian usually has two distinct landing gears, namely a "boat" hull/floats and retractable wheels, which allow it to operate from land or water. Beaching gear is detachable wheeled landing gear that allows a non-amphibious floatplane or flying boat to be maneuvered on land. It is used for aircraft maintenance and storage and is either carried in the aircraft or kept at a slipway. Beaching gear may consist of individual detachable wheels or a cradle that supports the entire aircraft. In the former case, the beaching gear is manually attached or detached with the aircraft in the water; in the latter case, the aircraft is maneuvered onto the cradle. Helicopters are able to land on water using floats or a hull and floats. For take-off a step and planing bottom are required to lift from the floating position to planing on the surface. For landing a cleaving action is required to reduce the impact with the surface of the water. A vee bottom parts the water and chines deflect the spray to prevent it damaging vulnerable parts of the aircraft. Additional spray control may be needed using spray strips or inverted gutters. A step is added to the hull, just behind the center of gravity, to stop water clinging to the afterbody so the aircraft can accelerate to flying speed. The step allows air, known as ventilation air, to break the water suction on the afterbody. Two steps were used on the Kawanishi H8K. A step increases the drag in flight. 
The drag contribution from the step can be reduced with a fairing. A faired step was introduced on the Short Sunderland III. One goal of seaplane designers was the development of an open ocean seaplane capable of routine operation from very rough water. This led to changes in seaplane hull configuration. High length/beam ratio hulls and extended afterbodies improved rough water capabilities. A hull much longer than its width also reduced drag in flight. An experimental development of the Martin Marlin, the Martin M-270, was tested with a new hull with a greater length/beam ratio of 15 obtained by adding 6 feet to both the nose and tail. Rough-sea capability can be improved with lower take-off and landing speeds because impacts with waves are reduced. The Shin Meiwa US-1A is a STOL amphibian with blown flaps and blown control surfaces. The ability to land and take off at relatively low speeds of about 45 knots, together with the hydrodynamic features of the hull (its long length/beam ratio and inverted spray gutter, for example), allows operation in wave heights of 15 feet. The inverted gutters channel spray to the rear of the propeller discs. Low-speed maneuvering is necessary between slipways, buoys and take-off and landing areas. Water rudders are used on seaplanes ranging in size from the Republic RC-3 Seabee to the Beriev A-40. Hydroflaps were used on the Martin Marlin and Martin SeaMaster. Hydroflaps, submerged at the rear of the afterbody, act as a speed brake or differentially as a rudder. A fixed fin, known as a skeg, has been used for directional stability. A skeg was added to the second step on the Kawanishi H8K flying boat hull. High speed impacts in rough water between the hull and wave flanks may be reduced using hydro-skis which hold the hull out of the water at higher speeds. Hydro-skis replace the need for a boat hull and only require a plain fuselage which planes at the rear.
Alternatively, skis with wheels can be used for land-based aircraft which start and end their flight from a beach or floating barge. Hydro-skis with wheels were demonstrated as an all-purpose landing gear conversion of the Fairchild C-123, known as the Panto-base Stroukoff YC-134. A seaplane designed from the outset with hydro-skis was the Convair F2Y Sea Dart prototype fighter. The skis incorporated small wheels, with a third wheel on the fuselage, for ground handling. In the 1950s hydro-skis were envisaged as a ditching aid for large piston-engined aircraft. Water-tank tests done using models of the Lockheed Constellation, Douglas DC-4 and Lockheed Neptune concluded that chances of survival and rescue would be greatly enhanced by preventing critical damage associated with ditching. Shipboard operation The landing gear on fixed-wing aircraft that land on aircraft carriers has a higher sink-rate requirement because the aircraft are flown onto the deck with no landing flare. Other features are related to catapult take-off requirements for specific aircraft. For example, the Blackburn Buccaneer was pulled down onto its tail-skid to set the required nose-up attitude. The naval McDonnell Douglas F-4 Phantom II in UK service needed an extending nosewheel leg to set the wing attitude at launch. The landing gear for an aircraft using a ski-jump on take-off is subjected to loads of 0.5g which also last for much longer than a landing impact. Helicopters may have a deck-lock harpoon to anchor them to the deck. In-flight use Some aircraft have a requirement to use the landing gear as a speed brake. Flexible mounting of the stowed main landing gear bogies on the Tupolev Tu-22R raised the aircraft flutter speed to . The bogies oscillated within the nacelle under the control of dampers and springs as an anti-flutter device. Gear common to different aircraft Some experimental aircraft have used gear from existing aircraft to reduce program costs.
The Martin-Marietta X-24 lifting body used the nose/main gear from the North American T-39 / Northrop T-38, and the Grumman X-29 used gear from the Northrop F-5 / General Dynamics F-16. Other types Skids Skids have been used on aircraft landing gear. The North American X-15 used skids as the rear landing gear and the Rockwell HiMAT used them in testing. When an airplane needs to land on surfaces covered by snow, the landing gear usually consists of skis or a combination of wheels and skis. Detachable Some aircraft use wheels for takeoff and jettison them when airborne for improved streamlining without the complexity, weight and space requirements of a retraction mechanism. The wheels are sometimes mounted onto axles that are part of a separate "dolly" (for main wheels only) or "trolley" (for a three-wheel set with a nosewheel) chassis. Landing is done on skids or similar simple devices (fixed or retractable). The SNCASE Baroudeur used this arrangement. Historical examples include the "dolly"-using Messerschmitt Me 163 Komet rocket fighter, the Messerschmitt Me 321 Gigant troop glider, and the first eight "trolley"-using prototypes of the Arado Ar 234 jet reconnaissance bomber. The main disadvantage of the takeoff dolly/trolley and landing skid system on German World War II aircraft—intended for a sizable number of late-war German jet and rocket-powered military aircraft designs—was that aircraft would likely be scattered all over a military airfield after they had landed from a mission, and would be unable to taxi on their own to an appropriately hidden "dispersal" location, which could easily leave them vulnerable to being shot up by attacking Allied fighters. A related contemporary example is the wingtip support wheels ("pogos") on the Lockheed U-2 reconnaissance aircraft, which fall away after take-off and drop to earth; the aircraft then relies on titanium skids on the wingtips for landing.
Rearwards and sideways retraction Some main landing gear struts on World War II aircraft, in order to allow a single-leg main gear to more efficiently store the wheel within either the wing or an engine nacelle, rotated the single gear strut through a 90° angle during the rearwards-retraction sequence to allow the main wheel to rest "flat" above the lower end of the main gear strut, or flush within the wing or engine nacelle, when fully retracted. Examples are the Curtiss P-40, Vought F4U Corsair, Grumman F6F Hellcat, Messerschmitt Me 210 and Junkers Ju 88. The Aero Commander family of twin-engined business aircraft also shares this feature on the main gears, which retract aft into the ends of the engine nacelles. The rearward-retracting nosewheel strut on the Heinkel He 219 and the forward-retracting nose gear strut on the later Cessna Skymaster similarly rotated 90 degrees as they retracted. On most World War II single-engined fighter aircraft (and even one German heavy bomber design) with sideways-retracting main gear, the main gear that retracted into the wings was raked forward in the "down" position for better ground handling, with a retracted position that placed the main wheels at some distance aft of their position when down. This led to a complex angular geometry for setting up the "pintle" angles at the top ends of the struts for the retraction mechanism's axis of rotation, with some aircraft, like the P-47 Thunderbolt and Grumman Bearcat, even mandating that the main gear struts lengthen as they were extended to give sufficient ground clearance for their large four-bladed propellers. One exception to the need for this complexity in many WW II fighter aircraft was Japan's famous Zero fighter, whose main gear stayed at a perpendicular angle to the centerline of the aircraft when extended, as seen from the side.
Variable axial position of main wheels The main wheels on the Vought F7U Cutlass could move 20 inches between a forward and aft position. The forward position was used for take-off to give a longer lever-arm for pitch control and a greater nose-up attitude. The aft position was used to reduce landing bounce and the risk of tip-back during ground handling. Tandem layout The tandem or bicycle layout is used on the Hawker Siddeley Harrier, which has two main wheels behind a single nose wheel under the fuselage and a smaller wheel near the tip of each wing. On second-generation Harriers, the wing is extended past the outrigger wheels to allow greater wing-mounted munition loads to be carried, or to permit wing-tip extensions to be bolted on for ferry flights. A tandem layout was tested by Martin using a specially modified Martin B-26 Marauder (the XB-26H) to evaluate its use on Martin's first jet bomber, the Martin XB-48. This configuration proved so manoeuvrable that it was also selected for the B-47 Stratojet. It was also used on the U-2, Myasishchev M-4, Yakovlev Yak-25, Yak-28 and Sud Aviation Vautour. A variation of the multi-tandem layout is also used on the B-52 Stratofortress, which has four main wheel bogies (two forward and two aft) underneath the fuselage and a small outrigger wheel supporting each wing-tip. The B-52's landing gear is also unique in that all four pairs of main wheels can be steered. This allows the landing gear to line up with the runway and thus makes crosswind landings easier (using a technique called crab landing). Since tandem aircraft cannot rotate for takeoff, the forward gear must be long enough to give the wings the correct angle of attack during takeoff. During landing, the forward gear must not touch the runway first, otherwise the rear gear will slam down and may cause the aircraft to bounce and become airborne again.
Crosswind landing accommodation One very early undercarriage incorporating castoring for crosswind landings was pioneered on the Bleriot VIII design of 1908. It was later used in the much more famous Blériot XI Channel-crossing aircraft of 1909 and also copied in the earliest examples of the Etrich Taube. In this arrangement the main landing gear's shock absorption was taken up by a vertically sliding, bungee-cord-sprung upper member. The vertical post along which the upper member slid to take landing shocks also had its lower end as the rotation point for the forward end of the main wheel's suspension fork, allowing the main gear to pivot on moderate crosswind landings. Manually adjusted main-gear units on the B-52 can be set for crosswind take-offs, although the facility rarely has to be used from SAC-designated airfields, which have their major runways aligned with the predominant strongest wind direction. The Lockheed C-5 Galaxy has swivelling six-wheel main units for crosswind landings and castoring rear units to prevent tire scrubbing on tight turns.
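The heading offset that a crosswind-capable gear must accommodate can be estimated from the wind triangle. A hedged sketch, with hypothetical wind and approach-speed figures rather than limits for any aircraft named above:

```python
import math

# Rough wind-triangle estimate of the crab angle: the angle between the
# aircraft heading and the runway track needed to hold the centerline.
# Gear that can swivel (as on the B-52 or C-5) lets the wheels align with
# the runway while the fuselage keeps this crab angle.

def crab_angle_deg(crosswind_kt: float, approach_speed_kt: float) -> float:
    """Crab angle in degrees for a given direct crosswind component."""
    return math.degrees(math.asin(crosswind_kt / approach_speed_kt))

print(f"{crab_angle_deg(25.0, 140.0):.1f} degrees")  # about 10 degrees
```

The small-angle behaviour (angle roughly proportional to crosswind at typical approach speeds) is why even a modest swivel range covers most operational crosswinds.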
Kneeling gear was used on the North American FJ-1 Fury and on early versions of the McDonnell F2H Banshee, but was found to be of little use operationally, and was omitted from later Navy fighters. The nosewheel on the Lockheed C-5 partially retracts against a bumper to assist in loading and unloading of cargo using ramps through the forward, "tilt-up" hinged fuselage nose while stationary on the ground. The aircraft also tilts backwards. The Messier twin-wheel main units fitted to the Transall and other cargo aircraft can tilt forward or backward as necessary. The Boeing AH-64 Apache helicopter is able to kneel to fit inside the cargo hold of a transport aircraft and for storage. Tail support Aircraft landing gear can include devices to prevent the fuselage tipping back onto the ground while the aircraft is being loaded. Some commercial aircraft have used tail props when parked at the gate. The Douglas C-54 had a critical CG location which required a ground handling strut. The Lockheed C-130 and Boeing C-17 Globemaster III use ramp supports. The unladen CG of the rear-engined Ilyushin IL-62 is aft of the main gear due to design decisions stemming from efforts to reduce overall weight, systems complexity and drag; to prevent the fuselage from tilting back when unloaded, the aircraft has a unique fully retractable vertical tail strut with castering wheels to allow towing or pushback. The strut is not intended for taxiing or flight, when the weight of the crew, passengers, cargo and fuel provides the necessary fore-aft balance. Monowheel To minimize drag, modern gliders usually have a single wheel, retractable or fixed, centered under the fuselage, which is referred to as monowheel gear or monowheel landing gear. Monowheel gear is also used on some powered aircraft, where drag reduction is a priority, such as the Europa Classic.
Much like the Me 163 rocket fighter, some gliders from before the Second World War used a take-off dolly that was jettisoned on take-off; these gliders then landed on a fixed skid, necessarily in a taildragger configuration. Helicopters Light helicopters use simple landing skids to save weight and cost. The skids may have attachment points for wheels so that they can be moved for short distances on the ground. Skids are impractical for helicopters weighing more than four tons. Some high-speed machines have retractable wheels, but most use fixed wheels for their robustness, and to avoid the need for a retraction mechanism. Tailsitter Experimental tailsitter aircraft use landing gear located in their tails for VTOL operation. Light aircraft For light aircraft a type of landing gear which is economical to produce is a simple wooden arch laminated from ash, as used on some homebuilt aircraft. A similar arched gear is often formed from spring steel. The Cessna Airmaster was among the first aircraft to use spring steel landing gear. The main advantage of such gear is that no other shock-absorbing device is needed; the deflecting leaf provides the shock absorption. Folding gear The limited space available to stow landing gear has led to many complex retraction mechanisms, each unique to a particular aircraft. An early example, the German Bomber B combat aircraft design competition winner, the Junkers Ju 288, had a complex "folding" main landing gear unlike that of any other aircraft designed by either the Axis or Allied sides in the war: each single oleo strut was attached only to the lower end of its Y-form main retraction strut, carrying the twinned main gear wheels, and swiveled downwards and aftwards during retraction to shorten the gear for stowage in the engine nacelle it was mounted in. However, the single pivot-point design also led to numerous incidents of collapsed main gear units on its prototype airframes.
Tracked Increased contact area can be obtained with very large wheels, many smaller wheels or track-type gear. Tracked gear made by Dowty was fitted to a Westland Lysander in 1938 for taxi tests, then to a Fairchild Cornell and a Douglas Boston. Bonmartini, in Italy, fitted tracked gear to a Piper Cub in 1951. Track-type gear was also tested using a C-47, C-82 and B-50. A much heavier aircraft, an XB-36, was made available for further tests, although there was no intention of using it on production aircraft. The stress on the runway was reduced to one third that of the B-36 four-wheel bogie. Ground carriage Ground carriage is a long-term (after 2030) concept of flying without landing gear. It is one of many aviation technologies being proposed to reduce greenhouse gas emissions. Leaving the landing gear on the ground reduces weight and drag. Jettisoning the gear after take-off was done for different, military reasons during World War II, using the "dolly" and "trolley" arrangements of the German Me 163B rocket fighter and the Arado Ar 234A prototype jet reconnaissance bomber. Steering There are several types of steering. Taildragger aircraft may be steered by rudder alone (depending upon the prop wash produced by the aircraft to turn it) with a freely pivoting tail wheel, or by a steering linkage with the tail wheel, or by differential braking (the use of independent brakes on opposite sides of the aircraft to turn the aircraft by slowing one side more sharply than the other). Aircraft with tricycle landing gear usually have a steering linkage with the nosewheel (especially in large aircraft), but some allow the nosewheel to pivot freely and use differential braking and/or the rudder to steer the aircraft, like the Cirrus SR22. Some aircraft require that the pilot steer by using rudder pedals; others allow steering with the yoke or control stick. Some allow both. Still others have a separate control, called a tiller, used for steering on the ground exclusively.
Rudder When an aircraft is steered on the ground exclusively using the rudder, it needs a substantial airflow past the rudder, which can be generated either by the forward motion of the aircraft or by propeller slipstream. Rudder steering requires considerable practice to use effectively. Although it needs airflow past the rudder, it has the advantage of not needing any friction with the ground, which makes it useful for aircraft on water, snow or ice. Direct Some aircraft link the yoke, control stick, or rudder directly to the wheel used for steering. Manipulating these controls turns the steering wheel (the nose wheel for tricycle landing gear, and the tail wheel for taildraggers). The connection may be a firm one in which any movement of the controls turns the steering wheel (and vice versa), or it may be a soft one in which a spring-like mechanism twists the steering wheel but does not force it to turn. The former provides positive steering but makes it easier to skid the steering wheel; the latter provides softer steering (making it easy to overcontrol) but reduces the probability of skidding. Aircraft with retractable gear may disable the steering mechanism wholly or partially when the gear is retracted. Differential braking Differential braking depends on asymmetric application of the brakes on the main gear wheels to turn the aircraft. For this, the aircraft must be equipped with separate controls for the right and left brakes (usually on the rudder pedals). The nose or tail wheel usually is not equipped with brakes. Differential braking requires considerable skill. In aircraft with several methods of steering that include differential braking, differential braking may be avoided because of the wear it puts on the braking mechanisms. Differential braking has the advantage of being largely independent of any movement or skidding of the nose or tailwheel. 
Tiller A tiller in an aircraft is a small wheel or lever, sometimes accessible to one pilot and sometimes duplicated for both pilots, that controls the steering of the aircraft while it is on the ground. The tiller may be designed to work in combination with other controls such as the rudder or yoke. In large airliners, for example, the tiller is often used as the sole means of steering during taxi, and then the rudder is used to steer during takeoff and landing, so that both the aerodynamic control surfaces and the landing gear can be controlled simultaneously when the aircraft is moving at aerodynamic speeds. Tires and wheels The specified selection criteria, e.g. minimum size, weight, or pressure, are used to select suitable tires and wheels from the manufacturer's catalog and from industry standards found in the Aircraft Yearbook published by the Tire and Rim Association, Inc. Gear loading The choice of the main wheel tires is made on the basis of the static loading case. The total main gear load F_m is calculated assuming that the aircraft is taxiing at low speed without braking: F_m = (l_n / (l_m + l_n)) W, where W is the weight of the aircraft and l_m and l_n are the distances measured from the aircraft's center of gravity (cg) to the main and nose gear, respectively. The choice of the nose wheel tires is based on the nose wheel load F_n during braking at maximum effort. Taking moments about the main gear contact point, and neglecting the lift L, the drag D and the thrust T, which fall away at low speed, gives F_n = ((l_m + μ h_cg) / (l_m + l_n)) W, where μ is the braking friction coefficient and h_cg is the height of the aircraft cg above the static groundline. Typical values for μ on dry concrete vary from 0.35 for a simple brake system to 0.45 for an automatic brake pressure control system. Because lift unloads the wheels as speed increases, the maximum nose gear load occurs at low speed. Reverse thrust decreases the nose gear load, and hence the condition T = 0 results in the maximum value. To ensure that the rated loads will not be exceeded in the static and braking conditions, a seven percent safety factor is used in the calculation of the applied loads.
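The static load split and the low-speed braking case described above can be sketched numerically. This is a minimal illustration, not a sizing method: the weight, geometry and cg height below are hypothetical round numbers, and the braking formula is the usual quasi-static weight-transfer approximation with lift, drag and thrust neglected.

```python
# Hedged sketch of taxiing and braking gear loads. W is aircraft weight (N),
# l_m and l_n are the horizontal distances (m) from the cg to the main and
# nose gear, h_cg is the cg height above the static groundline, and mu is
# the braking friction coefficient.

def main_gear_load(W: float, l_m: float, l_n: float) -> float:
    """Static main gear load while taxiing: F_m = W * l_n / (l_m + l_n)."""
    return W * l_n / (l_m + l_n)

def nose_gear_braking_load(W: float, l_m: float, l_n: float,
                           h_cg: float, mu: float) -> float:
    """Low-speed maximum-braking nose load: F_n = W * (l_m + mu*h_cg) / (l_m + l_n)."""
    return W * (l_m + mu * h_cg) / (l_m + l_n)

SAFETY = 1.07          # the seven percent factor on applied loads

W = 600_000.0          # N, hypothetical aircraft weight
l_m, l_n = 1.0, 14.0   # m: main gear close behind the cg, nose gear far ahead
h_cg = 2.5             # m, hypothetical cg height
mu = 0.45              # automatic brake pressure control on dry concrete

F_m = main_gear_load(W, l_m, l_n)
F_n = nose_gear_braking_load(W, l_m, l_n, h_cg, mu)
print(f"main (static): {SAFETY * F_m:,.0f} N, nose (braking): {SAFETY * F_n:,.0f} N")
```

Because the main gear sits close behind the cg (l_m small), it carries nearly all of the weight while taxiing, which is why main-wheel tires are sized by the static case and nose-wheel tires by the braking case.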
Inflation pressure Provided that the wheel load and configuration of the landing gear remain unchanged, the weight and volume of the tire will decrease with an increase in inflation pressure. From the flotation standpoint, a decrease in the tire contact area will induce a higher bearing stress on the pavement, which may reduce the number of airfields available to the aircraft. Braking will also become less effective due to a reduction in the frictional force between the tires and the ground. In addition, the decrease in the size of the tire, and hence the size of the wheel, could pose a problem if internal brakes are to be fitted inside the wheel rims. The arguments against higher pressure are of such a nature that commercial operators generally prefer lower pressures in order to maximize tire life and minimize runway stress. To prevent punctures from stones, Philippine Airlines had to operate their Hawker Siddeley 748 aircraft with pressures as low as the tire manufacturer would permit. However, too low a pressure can lead to an accident, as in Nigeria Airways Flight 2120. A rough general rule for required tire pressure is given by the manufacturer in their catalog. Goodyear, for example, advises the pressure to be 4% higher than that required for a given weight, or a fraction of the rated static load and inflation. Tires of many commercial aircraft are required to be filled with nitrogen, and not subsequently diluted with more than 5% oxygen, to prevent auto-ignition of volatile vapors, given off by the tire lining when brakes overheat, inside the tire. Naval aircraft use different pressures when operating from a carrier and ashore. For example, the Northrop Grumman E-2 Hawkeye tire pressures are on ship and ashore.
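A catalog-style pressure rule of the kind mentioned above can be sketched as follows. This is an assumption-laden illustration, not Goodyear's actual procedure: it takes the 4% margin from the text but assumes pressure scales linearly with load between the rated figures, and the rated load and pressure are hypothetical.

```python
# Hedged sketch: scale a (hypothetical) catalog rated pressure to the
# actual wheel load, then add a 4% operating margin as mentioned in the
# text. Real catalogs tabulate load/inflation pairs per tire size.

def required_inflation(load: float, rated_load: float,
                       rated_pressure: float, margin: float = 0.04) -> float:
    """Linear load scaling from the rated point, plus the percentage margin."""
    return rated_pressure * (load / rated_load) * (1.0 + margin)

# hypothetical tire: rated 180 psi at a 20,000 lb static load
p = required_inflation(17_500, 20_000, 180.0)
print(f"{p:.1f} psi")
```

The margin guards against normal pressure loss in service, the flotation and braking trade-offs above being the reason operators do not simply run far above it.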
En-route deflation is used in the Lockheed C-5 Galaxy to suit airfield conditions at the destination, but adds excessive complication to the landing gear and wheels. Future developments Airport community noise is an environmental issue which has brought into focus the contribution of aerodynamic noise from the landing gear. A NASA long-term goal is to confine objectionable aircraft noise to within the airport boundary. During the approach to land the landing gear is lowered several miles from touchdown, and the landing gear is the dominant airframe noise source, followed by deployed high-lift devices. With engines at a reduced power setting on the approach, it is necessary to reduce airframe noise to make a significant reduction to total aircraft noise. Add-on fairings are one approach for reducing the noise from the landing gear; a longer-term approach is to address noise generation during initial design. Airline specifications require an airliner to reach up to 90,000 take-offs and landings and roll 500,000 km on the ground in its lifetime. Conventional landing gear is designed to absorb the energy of a landing and does not perform well at reducing ground-induced vibrations in the airframe during landing ground roll, taxi and take-off. Airframe vibrations and fatigue damage can be reduced using semi-active oleos which vary damping over a wide range of ground speeds and runway quality. Accidents Malfunctions or human errors (or a combination of these) related to retractable landing gear have been the cause of numerous accidents and incidents throughout aviation history. Distraction and preoccupation during the landing sequence played a prominent role in the approximately 100 gear-up landing incidents that occurred each year in the United States between 1998 and 2003. A gear-up landing, also known as a belly landing, is an accident that results from the pilot forgetting to lower the landing gear, or being unable to do so because of a malfunction.
Although rarely fatal, a gear-up landing can be very expensive if it causes extensive airframe/engine damage. For propeller-driven aircraft a prop strike may require an engine overhaul. Some aircraft have a stiffened fuselage underside or added features to minimize structural damage in a wheels-up landing. When the Cessna Skymaster was converted for a military spotting role (the O-2 Skymaster), fiberglass railings were added to the length of the fuselage; they were adequate to support the aircraft without damage if it was landed on a grassy surface. The Bombardier Dash 8 is notorious for its landing gear problems. Three incidents, on Scandinavian Airlines flights SK1209, SK2478, and SK2867, led to Scandinavian retiring all of its Dash 8s. The cause of these incidents was a locking mechanism that failed to work properly. This also caused concern among other airlines operating the type: Bombardier Aerospace ordered all Dash 8s with 10,000 or more hours to be grounded, and it was soon found that 19 Horizon Airlines and 8 Austrian Airlines Dash 8s had locking mechanism problems, which caused several hundred flights to be canceled. On September 21, 2005, JetBlue Airways Flight 292 successfully landed with its nose gear turned 90 degrees sideways, resulting in a shower of sparks and flame after touchdown. On November 1, 2011, LOT Polish Airlines Flight LO16 successfully belly landed at Warsaw Chopin Airport due to technical failures; all 231 people on board escaped without injury. Emergency extension systems In the event of a failure of the aircraft's landing gear extension mechanism, a backup is provided. This may be an alternate hydraulic system, a hand-crank, compressed air (nitrogen), a pyrotechnic device, or a free-fall system. A free-fall or gravity drop system uses gravity to deploy the landing gear into the down and locked position.
To accomplish this the pilot activates a switch or mechanical handle in the cockpit, which releases the up-lock. Gravity then pulls the landing gear down and deploys it. Once in position the landing gear is mechanically locked and safe to use for landing. Ground resonance in rotorcraft Rotorcraft with fully articulated rotors may experience a dangerous and self-perpetuating phenomenon known as ground resonance, in which the unbalanced rotor system vibrates at a frequency coinciding with the natural frequency of the airframe, causing the entire aircraft to violently shake or wobble in contact with the ground. Ground resonance occurs when shock is continuously transmitted to the turning rotors through the landing gear, causing the angles between the rotor blades to become uneven; this is typically triggered if the aircraft touches the ground with forward or lateral motion, or touches down on one corner of the landing gear due to sloping ground or the craft's flight attitude. The resulting violent oscillations may cause the rotors or other parts to catastrophically fail, detach, and/or strike other parts of the airframe; this can destroy the aircraft in seconds and critically endanger persons unless the pilot immediately initiates a takeoff or closes the throttle and reduces rotor pitch. Ground resonance was cited in 34 National Transportation Safety Board incident and accident reports in the United States between 1990 and 2008. Rotorcraft with fully articulated rotors typically have shock-absorbing landing gear designed to prevent ground resonance; however, poor landing gear maintenance and improperly inflated tires may contribute to the phenomenon. Helicopters with skid-type landing gear are less prone to ground resonance than those with wheels. Stowaways Unauthorized passengers have been known to stowaway on larger aircraft by climbing a landing gear strut and riding in the compartment meant for the wheels. 
There are extreme dangers to this practice, with numerous deaths reported. Dangers include a lack of oxygen at high altitude, temperatures well below freezing, crush injury or death from the gear retracting into its confined space, and falling out of the compartment during takeoff or landing. Spacecraft Launch vehicles Landing gear has traditionally not been used on the vast majority of launch vehicles, which take off vertically and are destroyed on falling back to earth. With some exceptions for suborbital vertical-landing vehicles (e.g., the Masten Xoie or Armadillo Aerospace's Lunar Lander Challenge vehicle), or for spaceplanes that use the vertical takeoff, horizontal landing (VTHL) approach (e.g., the Space Shuttle orbiter, or the USAF X-37), landing gear have been largely absent from orbital vehicles during the early decades since the advent of spaceflight technology, when orbital space transport has been the exclusive preserve of national-monopoly governmental space programs. Each spaceflight system through 2015 had relied on expendable boosters to begin each ascent to orbital velocity. Advances during the 2010s in private space transport, where new competition to governmental space initiatives has emerged, have included the explicit design of landing gear into orbital booster rockets. SpaceX has initiated and funded a multimillion-dollar reusable launch system development program to pursue this objective. As part of this program, SpaceX built, and flew eight times in 2012–2013, a first-generation test vehicle called Grasshopper with a large fixed landing gear in order to test low-altitude vehicle dynamics and control for vertical landings of a near-empty orbital first stage. A second-generation test vehicle called F9R Dev1 was built with extensible landing gear. 
The prototype was flown four times in 2014 for low-altitude tests, with all landing attempts successful, before being self-destructed for safety reasons on a fifth test flight due to a blocked engine sensor port. The orbital-flight version of the test vehicles, Falcon 9 and Falcon Heavy, includes a lightweight, deployable landing gear for the booster stage: a nested, telescoping piston on an A-frame. The total span of the four carbon fiber/aluminum extensible landing legs is approximately , and they weigh less than ; the deployment system uses high-pressure helium as the working fluid. The first test of the extensible landing gear was successfully accomplished in April 2014 on a Falcon 9 returning from an orbital launch and was the first successful controlled ocean soft touchdown of a liquid-rocket-engine orbital booster. After a single successful booster recovery in 2015, and several in 2016, the recovery of SpaceX booster stages became routine by 2017. Landing legs had become an ordinary operational part of orbital spaceflight launch vehicles. The newest launch vehicle under development at SpaceX, Starship, is expected to have landing legs on its first stage, called Super Heavy, like Falcon 9, but also has landing legs on its reusable second stage, a first for launch vehicle second stages. The first prototype of Starship, Starhopper, built in early 2019, had three fixed landing legs with replaceable shock absorbers. In order to reduce the mass of the flight vehicle and the payload penalty for a reusable design, the long-term plan is for Super Heavy to land directly back at the launch site on special ground equipment that is part of the launch mount. Landers Spacecraft designed to land safely on extraterrestrial bodies such as the Moon or Mars are known as either legged landers (for example the Apollo Lunar Module) or pod landers (for example Mars Pathfinder) depending on their landing gear.
Pod landers are designed to land in any orientation, after which they may bounce and roll before coming to rest; at that point they must be righted into the correct orientation to function. The whole vehicle is enclosed in crushable material or airbags for the impacts and may have opening petals to right it. Features for landing and movement on the surface were combined in the landing gear for the Mars Science Laboratory. For landing on low-gravity bodies, landing gear may include hold-down thrusters, harpoon anchors and foot-pad screws, all of which were incorporated in the design of comet-lander Philae for redundancy. In the case of Philae, however, both harpoons and the hold-down thruster failed, resulting in the craft bouncing before landing for good at a non-optimal orientation. See also Dayton-Wright RB-1 Racer, an early example of an airplane with retractable landing gear. Landing gear extender Tundra tire, a low-pressure landing gear tire allowing landings on rough surfaces Undercarriage arrangements of jetliners and other aircraft. Verville Racer Aircraft, an early example of an airplane with retractable landing gear. References External links Aircraft undercarriage Articles containing video clips Aircraft systems
Landing gear
https://en.wikipedia.org/wiki/Theddlethorpe%20Gas%20Terminal
Theddlethorpe Gas Terminal (TGT) is a former gas terminal on the Lincolnshire coast on Mablethorpe Road at Theddlethorpe St Helen close to Mablethorpe in East Lindsey in England. It is just off the A1031 and next door to a holiday camp and Mablethorpe Seal Sanctuary and Wildlife Centre (Animal Gardens). History Plans for the terminal were proposed by the Gas Council from December 1969. Planning permission was given in April 1970. It was built in 1972 to receive gas from the Viking gas field from 4 July 1972, becoming the UK's third main gas terminal, then owned by Conoco. The first stage cost around £5 million. A new offshore gas pipeline had to be built for the plant. It was originally called the Viking Gas Terminal, changing to its current name in 1984. In the early 1990s, a new pipeline was built to the terminal by Kinetica, a company jointly owned by PowerGen and Conoco. The pipeline to Killingholme was opened by Tim Eggar on 21 July 1992. Operation The main site was owned by ConocoPhillips, with pipelines to National Grid's National Transmission System, and E.ON's Killingholme Pipeline System to both Killingholme A power station and Killingholme B power station, transporting 256,000 m3/h at a pressure of 40–55 bar. At one time, 10% of the UK's ever-increasing gas requirements came from Theddlethorpe. By August 2018 gas production through Theddlethorpe was about 4 million standard cubic metres (mscm) per day, representing about 2.5% of the UK seasonal demand of 160 mscm per day. Around one hundred people worked on the site. The 30-inch line from the NTS terminal (Feeder No. 8) was routed to Hatton, Lincolnshire, where it connected to the 36-inch NTS Wisbech to Scunthorpe line (Feeder No. 7). In 1988, in association with the LOGGS development, a second 30-inch line (Feeder No. 17) was laid from the Theddlethorpe terminal to Hatton. In 2017 ConocoPhillips announced that the Theddlethorpe terminal was to close in 2018. 
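The throughput figures above imply a simple share-of-demand calculation; as a sketch in Python (the function name is illustrative, and the figures are those quoted for August 2018):

```python
def share_of_demand(throughput_mscm_day: float, demand_mscm_day: float) -> float:
    """Return the percentage of daily gas demand met by a given throughput."""
    return throughput_mscm_day / demand_mscm_day * 100

# ~4 mscm/day through Theddlethorpe against a UK seasonal demand of ~160 mscm/day.
print(share_of_demand(4, 160))  # 2.5
```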
Production from Theddlethorpe ceased at 05:00 on 15 August 2018. Offshore pipeline systems These are in the UK Southern North Sea Basin, part of the UK Continental Shelf (UKCS). There were four major pipeline systems. Lincolnshire Offshore Gas Gathering System (LOGGS) collected gas from the V-field series of gas fields plus Audrey WD, WM & XW, Annabel, Alison KX, Ann XM, Anglia YM YD, Jupiter, Saturn ND, Mimas MN, Tethys TN, Ganymede ZD, Europa EZ, N.W. Bell ZX and Callisto ZM. A 118 km 36-inch diameter pipeline transported gas from the LOGGS PP installation to Theddlethorpe; it was commissioned in 1988 and ceased production in August 2018. The LOGGS installation comprised five bridge-linked platforms PR (riser), PC (compression), PP (production), PA (accommodation) and PD (drilling). Caister Murdoch System (CMS) collected gas from the Boulton BM, Boulton H HM, Caister CM, Cavendish RM, Hawksley EM, Hunter HK, Kelvin TM, Ketch KA, McAdam MM, Munro MH, Murdoch MD, Schooner SA, and Watt QM gas fields. A 188 km 26-inch pipeline transported gas from the Murdoch installation to Theddlethorpe; it was commissioned in 1993 and ceased production in August 2018. The Viking field collected gas from the Viking, Victor JM & JD, Victoria SM and Vixen VM fields. The Viking Transportation System (VTS) transported gas from the Viking B complex (bridge-linked platforms BA, BD, BP, BC) via a 26.9 km 16-inch pipeline to the LOGGS complex for onward transmission to Theddlethorpe. The Viking B field originally exported gas via a 10.9 km 24-inch pipeline to the Viking AR platform and thence via a 138 km 28-inch pipeline, commissioned in July 1972, to Theddlethorpe; these lines were disused from 2009 when the VTS was commissioned. Pickerill field: a 66 km 24-inch pipeline transported gas from the Pickerill A installation to Theddlethorpe; it was commissioned in 1992 and ceased production in August 2018. The Juliet gas field was tied into Pickerill A in 2014. 
Natural gas liquids Liquids from the refinery operation were transferred to Phillips 66's (previously ConocoPhillips', when the two companies were one) Humber Refinery next door to the Killingholme Power Station (ICHP), twenty-six miles away to the north-west of Theddlethorpe. Gas fields The following gas fields produced fluids to the Theddlethorpe gas terminal. Viking The main field that connected to the terminal was the Viking gas field, via the Viking Transportation System. The field is off the Lincolnshire coast, and is in two areas: Viking A and Viking B. It was 50% owned by ConocoPhillips. It had initial recoverable reserves of 125 billion m3. The North Viking field was discovered in March 1969 and South Viking in December 1968; production began on North Viking (Viking A) in July 1972 and on South Viking (Viking B) in August 1973. It was initially operated by Conoco and the National Coal Board, then by ConocoPhillips on behalf of BP (former Britoil), and was jointly owned by both. It is close to the Indefatigable field, and a plan was to use the (nearer) Bacton gas plant instead. Production from the Viking gas field was the main incentive to build the Theddlethorpe site. Offshore installations within the field include Viking AR, the Viking B complex (bridge-linked BA, BD, BP & BC), Viking CD, Viking DD, Viking ED, Viking GD, Viking HD, Viking JD, Viking KD & Viking LD. Other Viking A installations were decommissioned in 1991 and removed in 1994. Installations CD, DD, ED, GD and HD ceased production in 2011-15 and were removed in 2017-18. Vixen This field was owned 50:50 by ConocoPhillips Ltd and BP (Britoil plc). Operated by ConocoPhillips. It is off the Lincolnshire coast. Gas was transported from the Vixen VM subsea wellhead to the terminal via the Viking Transportation System. The field was discovered in May 1999, and production began in October 2000. Part of the V field system and named after the de Havilland Sea Vixen. 
Boulton Owned and run by ConocoPhillips. Subsea wellhead Boulton HM produced gas via the Watt QW subsea template to Murdoch MD; gas from the Boulton BM installation was transported to the terminal via the Caister-Murdoch System (CMS) via the Murdoch field. It was discovered in November 1984 with production starting in December 1997, and named after Matthew Boulton, a colleague of James Watt. Caister It was originally run by Total, and then operated by ConocoPhillips. The Caister installation was designated CM. Gas was transported via the Murdoch field and the Caister Murdoch System (CMS) to the terminal. It was discovered in January 1968 with production starting October 1993, and named after Caister Castle in Norfolk. It was 50% owned by Consort Europe Resources (became part of E.ON Ruhrgas), 21% by GDF Britain Ltd, and 30% by ConocoPhillips. It was latterly owned 40% by E.ON Ruhrgas UK Caister Ltd, 39% by ConocoPhillips UK Ltd, and 21% by GDF Suez E & P UK Ltd. Murdoch The field is from the Lincolnshire coast. It was run by ConocoPhillips and named after the Scottish engineer William Murdoch, a compatriot of James Watt, who is best known for inventing gas lighting using coal gas. It was discovered in August 1987 with production starting in October 1993. It was owned 54% by ConocoPhillips, 34% by Tullow Exploration Ltd and 11% by GDF Britain Ltd. It is now owned 59% by ConocoPhillips UK Ltd, 26% by GDF Suez E & P UK Ltd, and 14% by Tullow Oil SK Ltd. The subsea Murdoch K field (KM) was run by Tullow Oil. The Murdoch installation comprised three bridge-linked platforms designated MD, MC and MA. Gas was transported by the Caister Murdoch System to the terminal. Cavendish The field was owned by RWE Dea AG of Germany (Operator) and Dana Petroleum. It used the Caister Murdoch System and was discovered in January 1989. The Cavendish installation has the field designation RM. Named after the British scientist Henry Cavendish, who discovered hydrogen. 
Saltfleetby The onshore field was discovered in October 1997 and opened in December 1999. Originally run by Roc Oil of Australia, it was latterly operated by Wingas (owned by Gazprom) who bought it in December 2004. The field was only 5 miles from Theddlethorpe and was named after Saltfleetby, the nearest village to the field. Schooner The field opened in October 1996. It was run by Tullow Oil, which it bought from Shell and Esso in 2004. Owned 90% by Tullow Oil SK Ltd, 5% by GDF Britain Ltd, and 5% by E.ON Ruhrgas UK EU Ltd. The Schooner SA installation used the Caister Murdoch System and was discovered in December 1986. Named after the schooner boat. Ketch The field opened in October 1999 and was run by Tullow Oil, which it bought from Shell in 2004. The Ketch KA installation used the Caister Murdoch System. Discovered in November 1984. Named after the ketch boat. Ann Discovered in May 1966. Production began in October 1993. Used the LOGGS system. Was owned 85% by Venture Production (North Sea Developments) Ltd and 15% by Roots Gas Ltd (based in Aberdeen), and latterly owned completely by Venture, who operated the field. It comprised two subsea installations with the field designation Ann A4 and Ann XM. Decommissioned after a decision made in June 2017. Audrey Discovered in March 1976. Production began in October 1988. Used the LOGGS system. Was jointly owned by Conoco and Centrica, and latterly owned by Centrica Energy who operated the field. The field was much larger than the neighbouring Ann field. It comprised a subsea installation Audrey WM and two platforms Audrey 1 WD and Audrey 2 XW. Decommissioned after a decision made in June 2017. Alison Discovered in February 1987 with production starting in October 1995. A small field. Was owned 85% by Venture Production (North Sea Developments) Ltd and 15% by Roots Gas Ltd, and then owned by Centrica Energy (who bought Venture Production plc in 2009), who operated it. 
Alison is a subsea installation with the field designation KX. Decommissioned after a decision made in June 2017. Anglia Discovered in December 1985, with production starting in November 1991. Was owned 55% by CalEnergy Gas (UK) Ltd, 32% by Consort North Sea Ltd, 12% by Highland Energy Ltd. Latterly owned 25% by Dana Petroleum (since September 2006), 12% by RWE Dea UK SNS Ltd, 30% by GDF Suez E & P UK Ltd, and 30% by First Oil. Was operated by CalEnergy and then operated by GDF Suez until 2011, since when it was run by Ithaca Energy. Used the LOGGS system. It comprised the subsea installation Anglia YM and platform YD. Pickerill Discovered in December 1984 with production starting in August 1992. Comprised two platforms Pickerill A and Pickerill B. Originally run by ARCO and latterly run by Perenco. Was owned 43% by ARCO British Ltd, 23% by AGIP (UK) Ltd, 23% by Superior Oil (UK) Ltd and 10% by Marubeni Oil & Gas (UK) Ltd. Latterly owned 95% by Perenco UK Ltd and 5% by Marubeni. Topaz The field began operations in November 2009. It was run by RWE Dea. Named after the mineral topaz, an aluminium silicate. Kelvin Operated by ConocoPhillips and used the Caister-Murdoch system. Discovered in September 2005 with production starting in November 2007. Owned 50% by ConocoPhillips (UK) Ltd, 27% by GDF Suez E & P UK Ltd, and 22% by Tullow Oil SK Ltd. The Kelvin platform has the field designation Kelvin TM. Named after William Thomson, 1st Baron Kelvin. Rita Operated by E.ON Ruhrgas UK North Sea Ltd. Discovered in May 1996; production began in March 2009. Owned 74% by E.ON Ruhrgas UK Caister Ltd and 26% by GDF Suez E & P UK Ltd. Comprised a subsea wellhead RH; gas was transported via the Hunter field (HK). 
Jupiter area These fields were Ganymede ZD (discovered June 1989 with production starting October 1995), Sinope (discovered January 1991 with production starting October 1999), Callisto ZM (discovered February 1990 with production starting October 1995), Europa EZ (discovered September 1972 with production starting October 1999) and NW Bell ZX (discovered in 1994 and production began in August 1999). They used the LOGGS pipeline via Ganymede ZD and were operated by ConocoPhillips. The fields are named after the moons of Jupiter. They were owned 20% by ConocoPhillips, 30% by Statoil and 50% by Superior Oil Company (latterly owned by Esso). Saturn area These fields were Saturn (discovered December 1987 with production starting in September 2005), Mimas MN (discovered in May 1989 with production starting in June 2007), Hyperion, Atlas, Rhea (all three operating as one from September 2005 and discovered in January 1991) and Tethys TN (discovered in February 1991 with production starting in February 2007). The platforms had the field designations Saturn ND, Mimas MN and Tethys TN. They used the LOGGS pipeline. The fields were named after the moons of Saturn. Owned by ConocoPhillips, RWE Dea AG, and Venture North Sea Gas Ltd. Operated by ConocoPhillips. V fields These fields are Vulcan (discovered April 1983 with production starting October 1988), South Valiant & North Valiant (discovered in July 1970 and January 1971 with production starting for both in October 1988), Vanguard (discovered December 1982 with production starting October 1988), Victor JD (discovered May 1972, production started September 1984, ceased 2015), Vampire OD (discovered in January 1994, production started October 1999, ceased 2016), Viscount VD (production ceased 2015) and Valkyrie OD. They use the LOGGS pipeline via the Viking platform. They are mostly jointly owned by ConocoPhillips and BP (former Britoil). 
Named after aircraft: the Avro Vulcan, Vickers Valiant, Handley Page Victor, Vickers Viscount, XB-70 Valkyrie, and de Havilland Vampire. The V field project was officially opened by Margaret Thatcher on 1 September 1988, when she visited the terminal. In the LOGGS system, the accommodation platform is separate from the production platform. The V-field comprised the following installations: North Valiant 1 PD (bridge-linked to LOGGS), North Valiant 2 SP, South Valiant TD, Vanguard QD, Victor JD and subsea Victor JM, Vulcan RD and UR, Vampire/Valkyrie OD and Viscount VD. The Vulcan UR installation will be removed in 2018-19; the Vampire OD and Viscount VD installations will be removed in 2020. Juliet Juliet was discovered by GDF SUEZ in block 47/14b in December 2008. This field was operated by GDF SUEZ and production started at the beginning of January 2014, with the west well. Production at the east well started during the first quarter of 2014. The production was sent via pipeline to the Pickerill A platform (see above), and from there to the Theddlethorpe Gas Terminal. Clipper South RWE Dea UK is the largest shareholder, with a 50% equity share in the gas field. Fairfield Energy and Bayern Gas each hold 25% equity in the project. Developed in 2012 as a single platform designated RL producing to the LOGGS installation. In November 2018 the export of fluids was rerouted to the Clipper complex and thence to Bacton. Decommissioning Following the end of production, the Viking, LOGGS, Pickerill and CMS pipelines were flushed, cleaned and filled with seawater. The onshore lines from the Theddlethorpe terminal – the 30-inch and 36-inch lines to the National Grid terminal, the 30-inch Killingholme line, the 6-inch Humber oil refinery line, and the Saltfleetby gas pipeline – were purged, flushed and disconnected from the terminal. All the plant at the terminal was emptied, purged and flushed. This work constitutes the first phase of decommissioning. 
In 2019 Chrysaor assumed ownership of ConocoPhillips' North Sea assets. Chrysaor were granted planning permission to demolish Theddlethorpe gas terminal by Lincolnshire County Council in January 2020. The third and fourth phases will be the remediation followed by restoration of the site back to agricultural land; this is expected to be complete by 2022. In March 2021 Chrysaor Holdings merged with Premier Oil to form Harbour Energy. In July 2021 Look North reported that Radioactive Waste Management (RWM), a government agency, was in early discussions with Lincolnshire County Council regarding a proposal to store spent nuclear material at the site. However, Harbour Energy plan to utilise the site and some of the spent offshore gas fields for carbon capture and storage. See also Easington Gas Terminal Bacton Gas Terminal CATS Terminal Rampside Gas Terminal St Fergus Gas Terminal List of oil and gas fields of the North Sea Offshore installation identification Lincolnshire Offshore Gas Gathering System Viking gas field Caister Murdoch System gas fields Pickerill and Juliet gas fields References External links Harbour Energy Gas fields Vixen Viking Boulton Caister Murdoch Fields owned by Tullow Oil Anglia Topaz Pickerill at Perenco Map of the fields at UKOOG (PDF) via archive .org News items Ship narrowly misses Murdoch gas rigs in January 2007 Video clips Gas platforms and the Mablethorpe area seen from a commercial flight Energy infrastructure completed in 1972 Buildings and structures in Lincolnshire ConocoPhillips East Lindsey District Engie Natural gas infrastructure in the United Kingdom Natural gas plants Natural gas terminals North Sea energy Science and technology in Lincolnshire Mablethorpe
Theddlethorpe Gas Terminal
https://en.wikipedia.org/wiki/Digital%20rhetoric
Digital rhetoric is communication that exists in the digital sphere. It can be expressed in many different forms, including text, images, videos, and software. Due to the increasingly mediated nature of contemporary society, distinctions between digital and non-digital environments are less clear. This has expanded the scope of digital rhetoric to account for the increased fluidity with which humans interact with technology. The field of digital rhetoric is not yet fully established. It draws theory and practices from the tradition of rhetoric as both an analytical tool and a production guide. As a whole, it can be categorized as a meta-discipline. Due to evolving study, digital rhetoric has held various meanings for different scholars over time. It can take on a variety of meanings based on what is being analyzed, depending on the concept, forms or objects of study, or rhetorical approach. Digital rhetoric can also be analyzed through the lenses of different social movements. Digital rhetoric lacks a strict definition amongst scholars. The discussion and debate toward reaching a definition accounts for much of the writing, study, and teaching of the topic. One of the most straightforward definitions for "digital rhetoric" is that it is the application of rhetorical theory to digital communication. Definition Precursors to Digital Rhetoric and Rhetoric in the Early Computer Age Rhetoric has developed alongside many technological developments, with digital mediums being amongst the most recent and transformative. There are many ancient and historical examples of "machines to think with." Early examples of devices being used for the purpose of guiding thought include Martianus Capella's 9th century glossed collections of prose, philosophy and other writings, and the biblical concordances developed by monks between the 12th and 13th centuries. 
Some argue that types of man-made artwork and codes, such as ancient Egyptian hieroglyphics, were some of the first forms of digital rhetoric. In 1917, C. I. Scofield created an annotated version of the King James Bible. This version of the Bible would indicate passages that related to one another throughout both the Old and New Testaments, guiding the reader's interpretation of the work. These "connected topic concordances" are similar to "key-word-in-context concordances" used on modern computers. By the 1960s, early computers had become more prominent in many environments, and began seeing application outside of math and science. In 1964, Harvard's Allan B. Ellis published an analysis of how computers could be used to better understand literary works, through having text from The Adventures of Huckleberry Finn plugged into punched cards and having the computer analyze the titular character. Between the mid-1960s and early 1970s, there were several experiments to investigate the potential for computers in the grading of academic papers. These computers were programmed to approximate the way that teachers generally approached the grading process, and judge the content for its quality of vocabulary, composition, and approach. Evolving definition of 'digital rhetoric' The following subsections detail the evolving definition of 'digital rhetoric' as a term since its creation in 1989. Early definitions (1989–2015) The term was coined by rhetorician Richard A. Lanham in a 1989 lecture and was first published in his 1993 essay collection, The Electronic Word: Democracy, Technology, and the Arts. Lanham avoided coming to a firm definition, instead aiming to connect digital communication to examples from traditional communication, discussing the relationship between postmodern theory, digital arts, and classical rhetoric. 
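The key-word-in-context concordances mentioned earlier are straightforward to model on a modern computer; a minimal sketch (the tokenization, punctuation handling, and window size are illustrative assumptions, not a description of any historical system):

```python
def kwic(text, keyword, window=3):
    """List every occurrence of keyword with `window` words of context on each side."""
    words = text.split()
    lines = []
    for i, word in enumerate(words):
        # Compare case-insensitively, ignoring trailing punctuation.
        if word.lower().strip(".,;:!?") == keyword.lower():
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            lines.append(f"{left} [{word}] {right}")
    return lines

sample = "In the beginning was the word, and the word was with meaning."
for line in kwic(sample, "word"):
    print(line)
```

Each output line centers one occurrence of the keyword, which is the basic presentation a concordance offers a reader.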
Digital rhetoric theory is primarily based in traditional rhetoric and shares many of its methods and characteristics, including its status as a meta-discipline. Lanham's work referred to many works of Hypertext theory. Hypertext theory is a similar but narrower concept than digital rhetoric; it studied the consequences of computer users interacting with hypertext links. Much of the writing on the theory focused on the meaning that hypertext links give to words, the relationship they enforce between users and particular words, and how this could be implemented in rhetorical and educational settings. In 1997, University of Calgary professor Doug Brent expanded on the concept of hypertext theory, approaching the topic from a rhetorical framework, where past studies had depended more on literary analysis. This presented hypertext as a kind of "new rhetoric". The same year, Bowling Green State University scholar Gary Heba united studies of hypertext and visual rhetoric into the concept of "HyperRhetoric", a multimedia communication experience that could not be replicated outside of an internet setting. Heba stated that as the online landscape and the perspectives of users change, HyperRhetoric must also adapt and evolve. This fluidity remains a characteristic of digital rhetoric. The late 1990s and early 2000s represented a greater shift towards rhetoric in digital communication study, and how "persuasion" functions in an online setting. In 2005, Rensselaer Polytechnic Institute scholar James P. Zappen expanded the conversation beyond persuasion and into digital rhetoric's capacity for creative expression in exploring the behavior of individuals and groups in online settings. Recent scholarship (2015–present) In his 2015 book Digital Rhetoric: Theory, Method, Practice, Douglas Eyman defined digital rhetoric as "the application of rhetorical theory (as analytic method or heuristic for production) to digital texts and performances". 
By this definition, digital rhetoric can be applied as an analytic method for digital content and be a basis for future study, offering rhetorical questions as research guidelines. Eyman categorized the emerging field of digital rhetoric as interdisciplinary in nature, related to fields like: digital literacy, visual rhetoric, new media, human–computer interaction, critical code studies, and a variety of many more. In 2018, rhetorician Angela Haas offered her own definition of digital rhetoric, defining it as "the digital negotiation of information – and its historical, social, economic, and political contexts and influences – to affect change". Haas emphasized that digital rhetoric does not solely apply to text-based items—it can also apply to image-based or system-based items. In this way, any form of communication that occurs in the digital sphere can be counted as digital rhetoric. In 2023, scholars Zoltan P. Majdik and S. Scott Graham considered not only the rhetorical landscape of artificial intelligence but what it might mean to use artificial intelligence as a resource for rhetorical scholarship. The authors posit a dual perspective on AI—first, as a rapidly developing technology that will have profound effects on human communication, and second, as an object of study for communication scholars who might want to consider using AI in the same way they might consider using any other resource or technology. In 2024, Penn State rhetorician Stuart A. Selber defined digital rhetoric studies through a selection of guiding questions: How does traditional rhetoric inform the study of digital communication as a rhetorical medium? When traditional rhetoric fails to inform scholars in a situation exclusive to digital formats, are new concepts required or can traditional concepts be reconsidered? If new ideas are needed, what will be their source? How will they be examples of rhetoric? 
Selber stated that a concept is rhetorical if it helps in analyzing how speakers use the circumstances of society and their message's medium to influence the opinions of others. Other definitions While most research represents a traditionally Western view of rhetoric, Arthur Smith of UCLA explains that the ancient rhetoric of many cultures, such as African rhetoric, existed independently of Western influence, and developed in ways that reflect the values and functions of those societies. Today, rhetoric encompasses all forms of discourse that serve any given purpose within specific contexts, while also being shaped by those contexts. Some scholars interpret this rhetorical discourse with greater focus on the digital aspect. University of Texas's Casey Boyle, Rutgers University-Camden's James Brown Jr., and University of Virginia's Steph Ceraso claim that "the digital" is no longer a single strategy that can be used to enhance traditional rhetoric, but an "ambient condition" that encompasses all parts of life. As technology has become more ubiquitous, the lines between traditional and digital rhetoric have blurred. Technology and rhetoric can influence and transform each other. Concepts Circulation Circulation theorizes the ways that text and discourse move through time and space, and any kind of media can be circulated. A new form of communication is composed, created, and distributed through digital technologies. Media scholar Henry Jenkins explains there is a shift from distribution to circulation, which signals a move toward an increasingly participatory model of culture in which people shape, share, re-frame, and remix media content in ways not previously possible within traditional rhetorical formats like print. The various concepts of circulation include: Collaboration – Digital rhetoric has taken on a very collaborative nature through the use of digital platforms. 
Sites such as YouTube and Wikipedia offer opportunities for "new forms of collaborative production". Digital platforms have created opportunities for more people to enact and create, as digital platforms open doors for collaborative communication that can occur synchronously, asynchronously, over far distances, and across multiple disciplines and professions. Crowdsourcing – Daren Brabham describes the concept of crowdsourcing as the use of modern technology to collaborate, create, and solve problems collectively. Ethical concerns have been raised while engaging in crowdsourcing, specifically in situations that lack a clear set of compensation practices or protections in place to secure information. Delivery – Digital technologies allow rhetoric to be delivered in new "electronic forms of discourse". Acts and modes of communication can be represented digitally by combining multiple different forms of media into a composite, helping to create an easy user experience. The growing popularity of the Internet meme is an example of combining, circulating, and delivering media in a collaborative effort through file sharing. Although memes are shared through micro-level interactions, they often have a macro-level, large-scale impact. Another form of rhetorical delivery are encyclopedias, which traditionally were printed and based primarily on text and images. However, modern technological developments now enable online encyclopedias to integrate sound, animation, video, algorithmic search functions, and high-level productions into a cohesive multimedia experience as part of their new forms of digital rhetoric. Critical literacy Critical literacy is the ability to identify bias in media, under the assumption that all media is biased. It can also be defined as a communicative tool to lead to social change and promote social action by using a critical lens when approaching social-political topics. 
In order to identify bias amid the immense volume of information imposed on digital audiences, individuals need to develop the ability to process and critically examine content—on both familiar and unfamiliar topics. In an essay on critical literacy in writing, the University of Melbourne stated the importance of developing these skills through reading and questioning what texts are trying to accomplish. Ultimately, this allows an idea's interpretation to come from the reader, not the writer. For example, a study conducted at Indiana University Bloomington used algorithms to assess 14 million Twitter messages containing statements about the 2016 U.S. presidential campaign and election. They found that from May 2016 to March 2017, social bots were responsible for causing approximately 389,000 unsupported political claims to go viral. Interactivity Interactivity in digital rhetoric can be defined as the ways in which readers connect to and communicate with digital texts. This includes interaction among audience members, between the audience and the message being sent, between the audience and the medium, and between separate mediums. Readers have the ability to like, share, repost, comment on, and remix online content. These interactions allow writers, scholars, and content creators to get a better idea of how their work is affecting their audience. Some ways communicators promote interactivity include the following: Mind sharing refers to the methods and components of communication through which collective intelligence is gathered and transferred. It is based on the sharing of emotional, knowledge-based, and goal-based information. The human ability of language is the primary example of mind-sharing. Mind sharing functions as a method of concept sharing, presenting generally agreed upon meanings for words and phrases, and concept activation sharing, where these specific meanings prompt reactions when communicated. 
Multimodality is a form of communication that uses multiple methods (or modes) to inform audiences of an idea. It can involve a mix of written text, pictures, audio, or videos. These communications offer a wealth of information that could not be accessed through traditional methods, but can be disorganized and difficult to draw conclusions from. All writing and all communication are, theoretically, multimodal. Remix is a method of digital rhetoric that manipulates and transforms an original work to convey a new message. The use of remix can help the creator make an argument by connecting seemingly unrelated ideas into a convincing whole. As modern technology develops, self-publication sites such as YouTube, SoundCloud, and WordPress have stimulated remix culture, allowing for easier creation and dissemination of reworked content. Unlike appropriation, which is the use and potential recontextualization of existing material without significant modification, 'remix' is defined by Ridolfo and Devoss as "the process of taking old pieces of text, images, sounds, and video and stitching them together to form a new product". A popular example of remixing is the creation and sharing of memes. Procedural rhetoric Procedural rhetoric is rhetoric formed through processes or practices. Some scholars view video games as one of these processes through which rhetoric can be formed. For example, ludology scholar and game designer Gonzalo Frasca posits that the simulational nature of computers and video games offers a "natural medium for modeling reality and fiction". Therefore, according to Frasca, video games can take on a new form of digital rhetoric in which reality is mimicked but also created for the future. Similarly, scholar Ian Bogost argues that video games can serve as models for how 'real-world' cultural and social systems operate. 
They also argue for the necessity of literacy in playing video games, as this allows players to challenge (and ultimately accept or reject) the rhetorical standpoints of these games.
Rhetorical velocity
Rhetorical velocity is the concept of authors writing in a way in which they are able to predict how their work might be recomposed. Scholars Jim Ridolfo and Danielle DeVoss first coined this idea in 2009 when they described rhetorical velocity as "a conscious rhetorical concern for distance, travel, speed, and time, pertaining specifically to theorizing instances of strategic appropriation by a third party". Author Sean Morey agrees with this definition of rhetorical velocity and describes it as a creator anticipating the response their work will generate. For example, digital rhetoric is often labeled using tags, which are keywords used to help readers find, view, and share relevant texts and information. These tags can be found on blog posts, news articles, scholarly journals, and more. Tagging allows writers, scholars, and content creators to organize their work and make it more accessible and understandable to readers. Appropriation carries both positive and negative connotations for rhetorical velocity. In some ways, appropriation is a tool that can be used for the reapplication of outdated ideas to make them better. In other ways, appropriation is seen as a threat to creative and cultural identities. Social media receives the bulk of this scrutiny due to the lack of education of its users. Most "contributors are often unaware of what they are contributing", which perpetuates the negative connotation. Scholars in digital rhetoric—such as Jessica Reyman, Amy Hea, and Johndan Johnson-Eilola—explore this topic and its effects on society.
Scholars have also connected the role of rhetorical velocity to visual rhetoric through a study of environmental image circulation, demonstrating that "while environmental image circulation is often viewed as an ambivalent, or even performative, practice for environmental citizenship, it is also an important space for cultivating participatory culture online."
Visual rhetoric
Digital rhetoric often invokes visual rhetoric due to digital rhetoric's reliance on visuals. Charles Hill states that images "do not necessarily have to portray an object, or even a class of objects, that exists or ever did exist" to remain impactful. However, the use of imagery for rhetorical purposes in digital spaces cannot always be easily differentiated from "traditional" physical visual mediums. As such, approaching this concept requires a careful analysis of the viewer, situational, and visual contexts involved. A prominent part of this concept is its intersection of perspective with technology, as computers allow users to create a curated view of online space. Examples of the Internet relying on and reshaping visual rhetoric include social media platforms like Instagram and highly realistic deepfakes. Digitally produced art is a significant way users express themselves on technological platforms; the unique intersection of text and image has given rise to new rhetorical language through the modification of slang and in-group language. In particular, the culturally specific and nuanced use of pop culture references through Internet memes has gradually built upon itself to create complex, highly flexible, and Internet-specific (or even platform-specific) dialects of speech. Through popularity-based natural selection, edits of commonly accepted meme templates fuel the cycle of rhetorical creation. Other forms of digital-visual rhetoric include remixing and parodying.
In the chapter "Digital Rhetoric Practice" in Digital Rhetoric: Theory, Method, Practice, Douglas Eyman speaks about the growth of digital rhetoric in a digital world. Digital rhetoric has become distinguished from its other rhetoric counterparts, as it is an easily accessible path for people to spread their messages by reusing already existing content and putting their own twist on it. This is widespread because of meme cultures and online video platforms. Digital-visual rhetoric does not rely only on intentional manipulation. Sometimes, meanings can arise from unexpected places and otherwise-overlooked features. For example, emojis, which permeate daily communication, can carry heavy consequences. Varying skin tones provided (or excluded) by developers for emojis may perpetuate preexisting racial biases of colorism. Even otherwise-innocuous images of peaches and eggplants are regular stand-ins for genital regions; they can be both harmless modes of flirtation and tools for sexually harassing women online when sent en masse. The concept of the avatar also illustrates visual rhetoric's deeply personal impact, particularly when using Miami University scholar James E. Porter's definition of the avatar as an extended "virtual body". While scholars such as Beth Kolko hoped for an equitable online world free of physical barriers, social issues such as gender discrimination and racism still persist in digital realms. For example, Victoria Woolums found that, in the video game World of Warcraft, an avatar's gender identity instigated bias from other characters even though that identity may not be physically accurate to its user. These relationships are further complicated by the varying degrees of anonymity characterizing inter-user communications in online spaces. While the possibility of true privacy can be facilitated by impersonal avatars, they are still personal manifestations of a user's self in the context of digital spaces.
Furthermore, the tools available to curate and express these avatars are platform-dependent and ripe for both liberation and exploitation. In circumstances such as Gamergate or debates regarding influencer culture and its portrayals of impossible, computer-edited body image, self-presentation is heavily mediated by accessibility to and mastery of online avatars.
Forms and objects of study
Infrastructure
Information infrastructure is the underlying organization of public information on the Internet, which impacts how and what the public accesses online. Databases and search engines are information infrastructure, as they play a large role in access to and dissemination of information. Information infrastructure often consists of algorithms and metadata standards, which curate the information presented to the public.
Software
Coding and software engineering are not often recognized as rhetorical writing practices, but in the process of writing code, people instruct machines to "make arguments and judgments and address audiences both mechanic and human". Technologies themselves can be viewed as rhetorical genres, simultaneously guiding users' experiences and communication with each other and being shaped and improved through human use. Choices baked into software that are invisible to users impact the user experience and reveal information about the priorities of the software engineers. For instance, while Facebook allows users to choose over 50 gender identities to display on their public profile, an investigation into the platform's software revealed that users are filtered into the male-female gender binary within the database for targeted advertising purposes. As another example, pieces of software called BitTorrent trackers facilitate the massive distribution of information on Wikipedia. Software facilitates the collective rhetorical action of this encyclopedia.
The field of software studies encourages the investigation into and recognition of software's impacts on people and culture.
People
Online communities
Online communities are groups of people with common interests that interact and engage over the Internet. Many online communities are found within social networking sites, online forums, and chat rooms, such as Facebook, Twitter, Reddit, and 4chan, where members can share and discuss information and inquiries. These online spaces often establish their own rules, norms, and culture, and in some cases, users will adopt community-specific terminology or phrases. Scholars have noted that online communities have especially gained prominence among users like e-patients and victim-survivors of abuse. Within online health and support groups respectively, members have been able to find others who share similar experiences, receive advice and emotional support, and record their own narratives. Online communities foster a sense of community but in some cases can foster polarization. Communities face issues with online harassment in the form of trolling, cyberbullying, and hate speech. According to the Pew Research Center, 41% of Americans have experienced some form of online harassment, with 75% of these experiences occurring over social media. Another area of concern is the influence of algorithms on delineating the online communities a user comes in contact with. Personalizing algorithms can tailor a user's experience to their analytically determined preferences, which creates a "filter bubble". The user loses agency in content accessibility and information dissemination when these bubbles are created. The loss of agency can lead to polarization, but recent research indicates that individual-level polarization is rare. Most polarization is due to the influx of users with extreme views, who can encourage other users to move from "gateway communities" toward partisan fringes.
Social media
Social media makes human connection formal, manageable, and profitable to social media companies. The technology that promotes this human connection is not human, but automated. As people use social media and shape their experiences on the platforms to meet their interests, the technology also affects how users interact with each other and the world. Social media also allows for the weaving of "offline and online communities into integrated movements". Users' actions, such as liking, commenting, sending, retweeting, or saving a post, contribute to the algorithmic customization of their personalized content. Social media's reach is determined by these algorithms. Social media also offers various image-altering tools that can impact image perception—making the platform less human and more automated.
Digital activism
Digital activism serves an agenda-setting function, as it can influence mainstream media and news outlets. Hashtags, which curate posts with similar themes and ideas into a central location on a digital platform, aid in bringing exposure to social and political issues. The subsequent discussions these hashtags create put pressure on private institutions and governments to address these issues, as can be seen with movements like #CripTheVote, #BringBackOurGirls, or #MeToo. Many recent social movements have originated on Twitter, as Twitter Topic Networks provide a framework for online community organizing. Digital activism gives people who may not have had a voice previously an equal chance to be heard. Though some believe that digital activism has a universal function, it takes different forms and philosophies in different parts of the world. In some parts of the world, it takes on a "techno-political" approach, basing communications on broad political, social, and economic trends and relying on technology prevalent in the free culture movement.
Others take a "techno-pragmatic" philosophy, focused more on specific political and social goals, often at a more personal level. Some areas remain "techno-fragmented," where there are few intersections between traditional and digital forms of activism.
Influencers and content creators
As social media becomes increasingly available, the influencer/content creator position has become recognized as a profession. The large and rapidly growing consumer presence on social media creates a source of consumer information for advertisers that is both helpful and overwhelming. There is substantial potential to identify "market mavens" on social media due to fandom culture and the nature of influencer/content creator followings. Social media has opened up business opportunities for corporations to employ influencer marketing, through which they can more easily find suitable influencers to advertise their products to their viewers.
Online learning
Although online learning existed previously, its prevalence increased during the COVID-19 pandemic. Online learning platforms are known as e-learning management systems (ELMS). They allow both students and teachers access to a shared, digital space which includes classroom resources, assignments, discussions, and social networking through direct messaging and email. Although socialization is a component of ELMS, not all students utilize these resources; rather, they focus on the lecturer as the primary source of knowledge. Research into the long-term effects of emergency online learning, which many turned to during the height of the pandemic, is ongoing; however, one study concluded that students' "motivation, self-efficacy, and cognitive engagement decreased after the transition".
Interactive media
Video games
The procedural and interactive nature of video games makes them rich examples of procedural rhetoric. This rhetoric can range from games designed to bolster children's learning to games that challenge players' assumptions about their world.
An educational video game developed for students at the University of Texas at Austin, titled Rhetorical Peaks, was made with the goal of examining rhetoric's procedural nature and capturing the constantly changing contexts of rhetoric. The open-ended nature of the game, as well as the developers' intent that the game be played within a classroom setting, encouraged collaboration among students and the development of individual interpretations of the game's plot based on vague clues; this ultimately helped them realize that there must be a willingness to move between lines of thought and to work both within and beyond accepted limits in understanding rhetoric. In mainstream gaming, each game has its own language, which helps shape the way information is transferred between players in its community. Within the realm of online gaming—which includes games such as Call of Duty or League of Legends—players can communicate with each other and create their own rhetoric within the established world of the game, which allows players to influence and be influenced by the other gamers around them. The game Detroit: Become Human offers another way of encouraging digital rhetoric within the gaming community. This decision-based video game gives the player the power to create their own story that deals with gender, race, and sexuality. Its futuristic message of a human-to-machine relationship prompts discussion due to the difficult moral decisions made while playing. At the end, players can take surveys to see how other players around the world felt about certain decisions.
Podcasting
Podcasting is another form of digital rhetoric. Podcasting can augment the ancient progymnasmata in ways that illuminate the relationship between rhetoric and digital sound. Podcasting can teach rhetorical practices through soundwriting.
A rhetorical pedagogy oriented around narrative nonfiction podcasting may—if it can overcome some key limitations—hold the potential to spark social change.
Mobile applications
Mobile applications (apps) are computer programs designed specifically for mobile devices, such as phones or tablets. Mobile apps cater to a wide range of audiences and needs, and allow for a "cultural hybridity of habit" that lets anyone stay connected with anyone, anywhere. Because so many different apps are available for researching or publishing work, there is always access to changing cultures and lifestyles. Furthermore, mobile apps allow individual users to manage aspects of their lives, while the apps themselves are able to change and upgrade socially. Information access on mobile devices poses challenges to user interfaces, notably due to the small screen and keys (or lack thereof) in comparison to larger counterparts such as laptops and PCs. However, mobile devices also have the advantage of heightened physical interactivity through touch, and in this way present experiences engaging multiple senses. Likewise, mobile technologies offer location-based affordances for layering different types of information in communication design. With these varying factors, mobile applications need trustworthy, reliable, and helpful UI and UX design to create a successful user experience.
Immersive media
Emerging immersive technologies such as virtual reality remove the visual presence of devices and mimic emotional experiences. User immersion into virtual reality includes simulated real-life communication; virtual reality provides the illusion of being somewhere the body physically is not, which contributes to widespread communication that reaches the point of telepresence and telexistence.
Digital museums, serious games, and interactive documentaries often utilize virtual reality and augmented reality elements to relate users to historical settings and events, to teach them about the topic or to inform them of a specific point of view. While these are useful in conveying information in an immersive setting with an accessible narrative, those narratives can simplify the context to a point where some of the nuance is lost. Museums that employ immersive exhibits often find that tourists engage with them for leisure, rather than to gain a thorough learning experience.
Critical approaches
Technofeminism
Digital rhetoric gives a platform to technofeminism, a concept that brings together the intersections of gender, capitalism, and technology. Technofeminism advocates for equality for women in technology-heavy fields and researches the relationship between women and their devices. Intersectionality is a term coined by Kimberlé Crenshaw that recognizes societal injustices based on overlapping identities. It is often challenging for women to navigate finding and interacting in digital spaces without harassment or gender biases. Digital activism is important for underrepresented communities, such as gender non-conforming and transgender people of all races, disabled people, and people of color. Technofeminism and intersectionality are still not very prevalent in the development of new technologies and research. In the journal Computers and Composition, only five articles explicitly use the term intersectionality or technofeminism. Online feminism also faces challenges of reactive sexism and misogyny. In one example, of the over 600 million internet users in India, 63% are male and 39% are female. This contrast in users often makes these heavily male digital spaces hostile to women.
While some feminist social media movements are able to inspire policy change or shine a light on issues facing women, others have been subject to severe backlashes with few achievements to show as a result, even if the movement reaches a wide audience.
Rhetorical feminism
Cheryl Glenn, in her article "The Language of Rhetorical Feminism, Anchored in Hope", explores the study of rhetoric, feminism, and hope, introducing a theoretical framework she calls "rhetorical feminism". This framework began as a platform for recognizing and valuing the traditionally overlooked rhetorical practices and powers of marginalized groups called "Others". Glenn's approach is meant to challenge biased attitudes and actions, and to promote what some consider an inclusive and tolerant societal discourse. In connection to digital rhetoric, the article underscores the power of digital platforms in their ability to either facilitate or obstruct democratic dialogues. Glenn acknowledges the influence of rhetoric across traditional and digital domains to challenge systems seen as unjust and engage individuals in democratic practices. Glenn's stance within the article aligns with the broader narrative of digital rhetoric, which often explores the dynamics of power, representation, and access to digital platforms in molding public discourse.
Digital cultural rhetoric
As the Internet has expanded, digital media or rhetoric has come to be used to represent or identify a culture. Scholars have studied how digital rhetoric is affected by one's personal factors, such as race, religion, and sexuality. Due to these factors, people utilize different tools and absorb information differently. Digital culture has created the need for specialized communities on the web. Computer-mediated communities such as Reddit can give a voice to these specialized communities. One can experience and converse with other like-minded people on the web via comment sections and shared online migration.
The creation of digital cultural rhetoric has allowed for the use of online slang that other communities may not be aware of. Online communities that explore digital cultural rhetoric allow users to discover their social identity and confront stereotypes that they face (or faced).
Embodiment
Embodiment is the idea that every person has a unique relationship with technology based on their unique set of identities. Studying the relationship between bodies and technology is one way that digital rhetoricians are able to promote equal access and opportunity within the digital sphere. Since technology is considered to be an extension of the real world, users are also shaped by the experiences they have in digital spaces. The artificial interactions that occur in online environments allow users to exist in a way that is additive to their human experience.
Pedagogy
With digital rhetoric becoming increasingly present, pedagogical approaches have been proposed by scholars to teach digital rhetoric in the classroom. Courses in digital rhetoric study the intersectionality between users and digital material, as well as how different backgrounds such as age, ethnicity, and gender can affect these interactions. Studies of digital pedagogy offer insight into the advantages and disadvantages of implementing digital technology in educational settings, and the consequences of incorrect use. Examples include electronic libraries and databases, as well as "thinking tools" used by students for the purposes of transcription, editing, and tagging of works. Digital pedagogy is a wider scope of study than online pedagogy, focusing not only on the internet but also on the devices and mediums that convey online communication.
Higher education
Several scholars teach digital rhetoric courses at universities in the US, although their approaches vary considerably.
Jeff Grabill, a scholar with a background in English, education, and technology, encourages his contemporaries to find a bridge between the scholarly field of digital rhetoric and its implementation. Another rhetorician, Cheryl Ball, specializes in areas that include multimodal composition and editing practices, digital media scholarship, digital publishing, and university writing pedagogy. Ball teaches students to write and compose multimodal texts by analyzing rhetorical options and choosing the most appropriate genres, technologies, media, and modes for a particular situation. Multimodality also influenced Understanding Rhetoric: A Graphic Guide to Writing by Elizabeth Losh (et al.), which engages literacy through the comic form. A similar approach also inspired Melanie Gagich to completely alter the curriculum of her first-year English course, aiming to redefine digital projects as rigorous academic assignments and teach her students necessary audience analysis skills. Such a design ultimately allowed students in Gagich's classroom to develop their creativity and confidence as writers. In another approach, Douglas Eyman recommends a course in web authoring and design that provides undergraduates more practical instruction in the production and rhetorical understanding of digital texts; specifically, it provides opportunities for students to learn fundamentals of web writing and design conventions, rules, and procedures. Similarly, Collin Bjork argues that "integrating digital rhetoric with usability testing can help researchers cultivate a more complex understanding of how students, instructors, and interfaces interact in OWI [online writing instruction]". Other scholars focus more on the relationship between digital rhetoric and social impact. Scholars Lori Beth De Hertogh (et al.)
and Angela Haas have published materials discussing intersectionality and digital rhetoric, arguing that the two are inseparable and that classes covering digital rhetoric must also explore intersectionality. Iowa State's Lauren Malone has also analyzed the relationship between identity and teaching digital rhetoric through research on the online engagement of queer and transgender people of color. From this research, Malone created a series of steps for digital rhetoric instructors to take in order to foster inclusivity within their classrooms. In her work, scholar Melanie Kill has introduced digital rhetoric to college-aged students, arguing for the importance of editing Wikipedia and capitalizing on their privilege of education and access to materials. Similar to De Hertogh (et al.) and Haas, Kill believes an education in digital rhetoric serves all students, as it facilitates positive social change.
K–12
Many educational systems are framed so that students actively participate in technological systems as designers of digital rhetoric, not passive users. There are three core goals students have identified for their coursework: building their own digital space, learning all aspects of digital rhetoric (including the theory, technology, and uses), and applying it in their own lives. The ecological system generated by the interactions of students with classmates, digital media, and other individuals is the basis of "interconnected" rhetorical processes and shared digital work. Video games are one avenue through which students learn to design the rhetoric and code underlying their technological systems. Video game use has evolved rapidly since the 1980s, and current video games have been incorporated into education. Scholar Ian Bogost suggests that video games can be utilized in a multitude of subjects to serve as models for studying the non-digital world.
Specifically, he notes that video games could be used as an "entry point" into computer science for students who may not have been interested in the field. Games and game technology enhance learning by operating at the "outer and growing edge of a player's competence". Games challenge students at levels that cause frustration but preserve the motivation to solve the challenge at this edge. Bogost also notes that video games can be taught as rhetorical and expressive in nature, allowing children to model their experiences through programming. When video games' computational systems are dissected, the ethics and rhetoric within them are exposed. Analysis of video games as an interactive medium reveals the underlying rhetoric through the performative activity of the player. Recognition of procedural rhetoric through course studies reflects how these mediums can augment politics, advertisement, and information. To help address the rhetoric in video game code, scholar Collin Bjork makes a series of recommendations for integrating digital rhetoric with usability testing in online writing instruction. Some scholars have also identified specific practices for digital rhetoric instruction in pre-collegiate classrooms. As Douglas Eyman points out, students require agency when learning digital rhetoric, meaning instructors designing lessons must allow students to interact with the technology directly and enact change on the design. This is consistent with discoveries by other professors, who claim that among the primary goals of students in a digital rhetoric classroom are to create space for themselves, form connections with peers, and deeply understand the field's significance. These interpersonal connections reflect a "thick correlation between digitalization and empowering pedagogy".
Pre-K
The United States Government's Office of Educational Technology has emphasized four guiding principles when using technology with early learners:
When used appropriately, technology can be a tool for learning.
The use of technology should allow for increased access to learning opportunities for all children.
Technology can be used to strengthen relationships between children and their families, early educators, and friends.
Technology is most effective when early learners are interacting with adults and peers.
Adults can also supervise children online to support this effectiveness. Despite these four pillars, most studies conclude that learning technology for children under the age of two is not beneficial. At most, technology can be used to promote relationship development for these children; for instance, by using video chat software to connect with loved ones at a distance.
Digital rhetoric as a field of study
In 2009, rhetorician Elizabeth Losh offered this four-part definition of digital rhetoric in her book Virtualpolitik:
The conventions of new digital genres that are used for everyday discourse, as well as for special occasions, in average people's lives.
Public rhetoric, often in the form of political messages from government institutions, that is represented or recorded through digital technology and disseminated via electronically distributed networks.
The emerging scholarly discipline concerned with the rhetorical interpretation of computer-generated media as objects of study.
Mathematical theories of communication from the field of information science, many of which attempt to quantify the amount of uncertainty in a given linguistic exchange or the likely paths through which messages travel.
Losh's definition demonstrates that digital rhetoric is a field that relies on different methods to study various types of information, such as code, text, visuals, videos, and so on. Douglas Eyman suggests that classical theories can be mapped onto digital media, but that a larger academic focus should be placed on the "extension of rhetorical theory". Careers in developing and analyzing the rhetoric in code form a prominent field of study.
Computers and Composition, a journal established in 1985, focuses on computer communication and has considered the use of "rhetoric as their conceptual framework" and the digital rhetoric in software development. Studies on how digital rhetoric implicates various topics are ongoing and encompass many fields. In his book Digital Griots: African American Rhetoric in a Multimedia Age, Adam J. Banks states that modern-day storytellers, like stand-up comics and spoken word poets, give African American rhetoric a flexible approach that is still true to tradition. While digital rhetoric can be used to facilitate traditions, select cultures face several practical application issues. Radhika Gajjala, professor at Bowling Green State University, writes that South Asian cyberfeminists face issues with regard to building their web presence.
Research ethics
Writing and rhetoric scholars Heidi McKee and James E. Porter discuss the complicated issue of Internet users posting information publicly on the Internet but expecting the post to be semi-private. This appears contradictory, but socially the Internet is composed of millions of social identities, social groups, social norms, and social influence. These social aspects of the Internet are important to consider when studying digital topics because the digital and non-digital are getting harder to distinguish from one another. A study conducted by Rösner and Krämer in 2016 showed that participants' identities would reflect the norms of these online social groups. Similar to how social groups are seen in an in-person setting, posts on forums, comment sections, and social media are like conversations with friends in a public setting. Typically, researchers would not use a conversation overheard in public, but an online conversation is available well beyond its social group.
James Zappen, in his article "Digital Rhetoric: Toward an Integrated Theory", adds that many of these groups foster a creative and collaborative nature to share information with the public. McKee and Porter suggest the use of a casuistic heuristic approach to doing digital research. This method of study is based on the moral principle of 'do no harm' to the audience and on generating the formulas or diagrams needed to guide the researcher when gathering data. It is noted that this method does not provide all the answers. Instead, it is a starting point for the scholar to approach the digital world. More scholars have added their own take to an ethical approach for digital data. Many have a case-based approach with add-on consent from participants (if possible), anonymity to participants, and consideration of what harm could come to the groups being studied. Eyman gives background information on ancient rhetoric going all the way back to Aristotle, including illustrations of both conventional and modern rhetoric. Beginning with ancient Greece and the medieval eras, he traces a shift to more modern methods and instances. He explains three modes of expression: Ethos, Logos, and Pathos. The term "digital" also refers to the physical production of texts, whether they are produced in print or electronically. In rhetorical studies, text can be seen as the medium for persuasive discourse or arguments; however, this tradition is primarily associated with printed texts, with less regard to 'Electric rhetoric', 'computational rhetoric', and 'technorhetoric'. Eyman explores how traditional concepts in rhetoric, such as Ethos, Logos, and Pathos, have been modernized to remain relevant today. He clarifies that these age-old methods of persuasion still hold significance but have evolved in how they are applied. For instance, establishing credibility (Ethos) is no longer solely dependent upon the speaker's character.
It now encompasses elements of online presence, such as maintaining a reputation, a substantial following, and producing valuable content. When making logical arguments (Logos), incorporating elements such as charts or videos can aid in clarifying intricate concepts and significantly increase the audience's comprehension. To enhance emotional connections (Pathos), integrating visuals along with sound and video components can intensify the impact of messages by adding a personal and profound touch. Eyman also notes the shift in dynamics brought about by digital platforms, where persuasion becomes a two-way process between speakers and audiences, unlike the traditional one-way communication in rhetoric. The ability of audiences to actively engage by commenting and sharing enables them to influence and steer conversations in digital spaces. This shift illustrates how a simple post on social media has the potential to spark extensive discussions and interactions as it resonates with a wider audience. In today's fast-paced communication landscape, communicators need to be prepared for interactions and varied responses from their audience, which can impact the effectiveness of their message. Moreover, Eyman discusses the ethical issues surrounding digital communication strategies. New digital technologies allow tailored messages to target audiences through algorithms that determine content visibility. This gives rise to concerns relating to data privacy and openness. For example, the use of algorithms to hand-pick user content may subtly shape users' viewpoints without their awareness. Eyman emphasizes the importance of handling this form of "persuasion" carefully, due to its significant impact on public viewpoints and beliefs. Overall, Eyman believes that digital communicators should be careful with these tools and use them ethically. He argues that rhetoric in the digital world isn't just about persuading; it's also about understanding the impact of these methods and respecting the audience's trust and privacy.
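The kind of algorithmic curation described above can be sketched as a toy model. Everything here is invented for illustration (the field names, weights, and posts are hypothetical, and real platform ranking systems are proprietary and far more complex), but it shows how rewarding engagement and topic affinity can quietly narrow what a user sees:

```python
# Hypothetical toy model of engagement-based content curation.
# All field names, weights, and data are invented for illustration.

def score(post, user_interests):
    # Reward raw engagement signals...
    engagement = post["likes"] + 2 * post["shares"] + 3 * post["comments"]
    # ...and boost topics the user already engages with -- the "hand-picking"
    # that can shape viewpoints without the user's awareness.
    affinity = 2.0 if post["topic"] in user_interests else 1.0
    return engagement * affinity

posts = [
    {"id": 1, "topic": "politics", "likes": 50, "shares": 10, "comments": 5},
    {"id": 2, "topic": "science",  "likes": 80, "shares": 5,  "comments": 2},
    {"id": 3, "topic": "politics", "likes": 20, "shares": 2,  "comments": 1},
]

# Rank the feed for a user interested in politics: the lower-engagement
# politics post still outscores nothing here, but the affinity boost lifts
# post 1 above the more-liked science post.
feed = sorted(posts, key=lambda p: score(p, {"politics"}), reverse=True)
print([p["id"] for p in feed])
```

Because the scoring function boosts topics the user already favors, repeated application of such a ranker tends to reinforce existing interests, which is precisely the transparency concern raised above.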
This balanced approach encourages effective yet ethical communication. Narrative Rhetoric Digital storytelling is another development that has grown with the advancement of technology. While most of these works have appeared in the context of fiction, nonfiction rhetorical works have also taken on elements of narrative theory in a digital setting. Nonfiction "Interactive Digital Narratives" use strategies usually employed in the service of fictional storytelling as a way of conveying information or trying to convince others of a certain position or argument. Practical examples of IDNs being applied to works of rhetoric include interactive documentaries, documentaries which the user engages with on a level beyond simply observing them, and serious games, video games made with goals of nonrecreational education and training. The interactive nature of these communications means that the rhetoric of the narrative is being constantly reshaped and reinterpreted, so that many digital narratives go on without any true ending. Prolepsis Prolepsis refers to the methods by which someone anticipates possible responses and arguments to a message. In digital communication, this exists in the form of social media proleptic cues, where a user issues a social media post that makes a claim about the future or attempts to influence actions toward what the future should become. Other users who respond to these posts, in the form of comments or other validating/invalidating reactions, do so based on their own views on the predictions made. These responses serve as feedback for the original user, and as guiding tools that those responding use to gauge and adapt their own predictions. The nature of these statements makes it possible for anyone to inspire conversation or calls to action over a certain topic, even if they are ill-informed on the subject.
Instances such as these can often lead to the spread of misinformation and disinformation online. The misuse of prolepsis in a digital sphere often occurs through false citations of authority, appeals to cultural and societal fears, and the employment of slippery slope arguments. Social issues Access Referred to as the digital divide, disparities in economic access and user-level access are recurring issues in digital rhetoric. These issues show up most prevalently at the intersection of computers and writing, though the digital divide impacts a multitude of online forums, user bases, and communities. A lack of access can refer to inequality in obtaining information, means of communication, and opportunities. For many who teach digital rhetoric in schools and universities, student access to technologies at home and in school is an operative concern. There is some debate about whether mobile devices like smartphones make technology access more equitable. In addition, the socioeconomic divide that is created due to accessibility is a major factor in digital rhetoric. For instance, Linda Darling-Hammond, an NIH researcher and professor of education at Stanford University, discusses the lack of educational resources that children of color in America face. Further, Angela M. Haas, author of "Wampum as Hypertext: An American Indian Intellectual Tradition of Multimedia Theory and Practice", describes access in a more theoretical way. Her text explains that through access one can connect a physical body with the digital space. Another contributing factor is technology diffusion, which refers to how the market for new technology changes over time, and how that influences technology use and production across society. Studies conducted by scholar Sunil Wattal conclude that technology diffusion mimics social class status. As such, technology diffusion varies from community to community, making it a much greater challenge to ensure access equity across classes.
These examples preface the topic that access encompasses every aspect of one's life and must be perceived as such. If accessibility is not resolved at a foundational level, then social discrimination will be further perpetuated. Another issue of access comes in the form of paywalls, which can be a major hindrance for education and reduce accessibility to many educational tools and materials. This practice can increase barriers to scholarship, limit the amount of information that is open access, and has forced some universities to pay over $11 million annually for access to certain works. Open access removes the barriers of access fees and the restrictions of copyright and licensing, allowing more equal access to works. Open access and digital rhetoric do not eliminate copyright, but they eliminate restrictions by giving authors the choice to maintain their right to copy and distribute their materials however they choose, or to turn the rights over to a specific journal. Digital rhetoric involves works that are found online, and open access allows more people to reach these works. Politics The increased digitalization of media has amplified the influence of digital rhetoric in politics, as it introduces a more direct relationship between politicians and citizens. Digital communication platforms and social networking sites allow citizens to share information and engage in debate with other people of similar or distinct political ideologies, which has been shown to influence and predict the political behavior of individuals outside the digital world. Some politicians have used digital rhetoric as a persuasive tool to communicate information to citizens. Reciprocally, digital rhetoric has enabled increasing political participation among citizens. Theoretical research on digital rhetoric in politics has attributed the increase of political participation to three models: the motivation model, the learning model, and the attitude model.
The motivation model proposes that digital rhetoric has decreased the opportunity costs of participating in politics, since it makes information readily available to the people. The learning model attributes the increase in political participation to the vast amount of political information available on the Internet, which increases the inclusion of citizens in the political process. The attitude model extends the previous two by suggesting that digital rhetoric has changed the perception of citizens towards politics, particularly by providing interactive tools that allow people to engage in the political process. Online harassment Online harassment has, over time, become an increasingly persistent issue, especially on social media. Analysis has linked cyberbullying-specific behaviors, including perpetration and victimization, to a number of detrimental psychosocial outcomes. The trend of people posting about their characters and lifestyles reinforces stereotypes (such as "hillbillies"), an outcome based on the fact that the rhetoric of difference is a naturalized component of ethnic and racial identity. Due to limits on the number of characters available to convey a message (for example, Twitter's 280-character limit), messages in digital rhetoric tend to be scarcely explained, allowing stereotypes to flourish. Erika Sparby theorized that anonymity and the use of pseudonyms or avatars on social media give users more confidence to address someone or something negatively. In 2005, these issues led to the launch of the first cyberbullying prevention campaign: STOMP Out Bullying. Like the abundance of campaigns that would form in the next fifteen years, it focuses on creating cyberbullying awareness and on reducing and preventing bullying. The challenge of bullying within social media has increased following the rise of "cancel culture", which aims to end the career of a culprit through any means possible, mainly the boycott of their works.
More recently, techniques utilizing machine learning and artificial intelligence have become popular in synthesizing deepfakes: realistic but fake videos of people whose faces are swapped out with other people's faces. These kinds of videos can be created with easily obtainable, simple software, inciting concerns that people may use the software to blackmail or bully people online. A large quantity of images containing faces is required to create a deepfake. In addition, specific types of characteristics, such as different exposure and color levels, need to be consistent to make a realistic video. However, given the vast amounts of photos of people publicly available on the Internet from social media sites, there is concern about the extent to which people can use deepfakes as a bullying tactic. There have already been multiple incidents of this kind of harassment being used to bully people. One example involved a mother who used deepfake software to frame a few of her daughter's classmates at school by inserting them into fake pornographic videos. Because machine learning and artificial intelligence are relatively new subfields of computer science and mathematics, there has not been enough time for deepfake video detection technologies to mature; so far, deepfakes are only detectable by using the human eye to spot irregularities in the movement of the people in the videos. Misinformation and disinformation While digital rhetoric can often be used to persuade, in some cases it is used to spread false and inaccurate information. The proliferation of illegitimate information over the Internet has given rise to the term misinformation, which is defined as the spread of false claims that may or may not be intended to mislead others. This is not to be confused with disinformation, which is illegitimate or inaccurate information that is spread with the intent to mislead others.
Both misinformation and disinformation have consequences for the knowledge, perceptions, and, in some cases, actions of individuals. Social media specifically has greatly impacted the spread of false information. Scientific facts, such as the damaging environmental impacts of climate change, now come into question on a daily basis. Social media has contributed to the proliferation of misinformation/disinformation because of its viral and largely unfiltered nature. Everyday users have the power to join and perpetuate a narrative that could be entirely false. In recent years, the term "fake news"—used synonymously with misinformation—has been highly popularized and politicized in digital spaces. The effects of misinformation were further on display during the 2020 United States presidential election, where social media usage had an impact on Congress. Starting as early as April 2020, then-President Donald Trump tweeted about the dangers of widespread mail voting fraud, though studies had shown that mail voting fraud is rare and the dangers are negligible. After losing the election, Trump continued to use Twitter as his main platform to speak about rigged elections, mail-in voter fraud, and other proven falsehoods. On January 6, 2021, Congress was set to certify the results of the 2020 election while a rally of Trump supporters was protesting the election results based on Trump's claims of fraud. This assembly of his supporters quickly turned violent, as a mob stormed the Capitol with the intent to overturn the election results. The insurrection led to the death of five people. Trump was permanently suspended from Twitter two days later because his involvement in the insurrection violated Twitter's terms and conditions regarding the "glorification of violence". He was also suspended from other major social media sites such as Facebook and YouTube.
This incident started a heated debate about social media companies' abilities to limit free speech; ultimately, these companies are still private businesses who are allowed to determine their own terms and conditions as they see fit, which users must agree to in order to use these platforms in the first place. Legitimacy There is controversy regarding the innovative nature of digital rhetoric. Arguments opposed to legitimizing web text are Platonically based, in that they reject the new form of scholarship (web text) and praise the old form (print) in the same way that oral communication was originally favored over written communication. Originally some traditionalists did not regard online open access journals with the same legitimacy as print journals for this reason; however, digital arenas have become the primary place for disseminating academic information in many areas of scholarship. Modern scholars struggle to "claim academic legitimacy" in these new media forms, as the tendency of pedagogy is to write about a subject rather than actively work in it. Within the past decade, more scholarly texts have been openly accessible, which provides an innovative way for students to gain access to textual materials online for free, such as scholarly journals like Kairos, Harlot of the Arts, and Enculturation. COVID-19 pandemic The persistence of the global COVID-19 pandemic has changed both physical and digital spaces. The resulting isolation and economic shutdowns complicated existing issues and created a new set of globalized challenges as it "imposed" a change to the "psychosocial environment". The pandemic has forced the majority of individuals with Internet access to depend on technology in order to remain connected to the outside world, and on a larger scale, global economies have become reliant on transitioning business to digital platforms. Additionally, the pandemic forced schools across the globe to switch to an 'online only' approach. 
By March 25, 2020, all school systems in the United States had closed indefinitely. In search of a platform to host online learning, many schools adopted the popular video chat service Zoom as their method of providing socially distant instruction. In April 2020, Zoom was hosting over 300 million daily meetings, as opposed to 10 million in December 2019. The shift to online learning demonstrated the current state of accessibility to digital information while promoting the use of digital learning through Zoom meetings, YouTube videos, and broadcasting systems such as Open Broadcaster Software. Still, it is questioned whether or not the switch to online learning has had detrimental impacts on students. In particular, it has been difficult to transition younger students, who often miss the social aspects of a school setting, to completely online models of learning. The pandemic has also contributed to creating misleading rhetoric in online spaces. Heightened public health concerns combined with the accessibility of social media led to the rapid spread of both misinformation and disinformation regarding COVID-19. Some people online theorized that the deadly virus could be cured by the ingestion of bleach, while others believed the disease to have been intentionally started by China in an attempt to take over the world. Trump also supported taking hydroxychloroquine to prevent the contraction of COVID-19, though the World Health Organization (WHO) had advised on numerous occasions that the drug shows no signs of preventing the spread of the virus. Despite their illegitimate nature, these conspiracy theories have spread rapidly in digital spaces. As a result, the WHO declared the proliferation of misinformation regarding the virus an "infodemic". In response, many social media sites moved to strengthen their policies relating to false information, but many misleading claims still find their way online.
See also Artificial intelligence rhetoric Composition studies Computer-mediated communication Digital humanities Digital literacy Digital media Hypermedia Internet studies Media studies Technological convergence Feminist technoscience Technofeminism References Communication studies Digital humanities Internet culture Rhetoric
Digital rhetoric
Technology
12,295
64,411,788
https://en.wikipedia.org/wiki/Super-AGB%20star
A super-AGB star is a star with a mass intermediate between those that end their lives as a white dwarf and those that end with a core collapse supernova, and properties intermediate between asymptotic giant branch (AGB) stars and red supergiants. They have initial masses of in stellar-evolutionary models, but have exhausted their core hydrogen and helium, left the main sequence, and expanded to become large, cool, and luminous. HR diagram Super-AGB stars occupy the top-right of the Hertzsprung–Russell diagram (HR diagram), and have cool temperatures between 3,000 and , which is similar to normal AGB stars and red supergiant stars (RSG stars). These cool temperatures allow molecules to form in their photospheres and atmospheres. Super-AGB stars emit most of their light in the infra-red spectrum because of their extremely cool temperatures. The Chandrasekhar limit and their life A super-AGB star's core may grow to (or past) the Chandrasekhar mass because of continued hydrogen (H) and helium (He) shell burning, ending as core-collapse supernovae. The most massive super-AGB stars (at around ) are theorized to end in electron capture supernovae. The error in this determination due to uncertainties in the third dredge-up efficiency and AGB mass-loss rate could lead to about a doubling of the number of electron-capture supernovae, which also supports the theory that these stars make up 66% of the supernovae detected by satellites. These stars are at a similar stage in life to red giant stars, such as Aldebaran, Mira, and Chi Cygni, and are at a stage where they start to brighten, and their brightness tends to vary, along with their size and temperature. These stars represent a transition to the more massive supergiant stars that undergo full fusion of elements heavier than helium. During the triple-alpha process, some elements heavier than carbon are also produced: mostly oxygen, but also some magnesium, neon, and even heavier elements, gaining an oxygen-neon (ONe) core. 
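As a rough illustration of the mass limit discussed in this section, here is a minimal sketch (mine, not from the article) of the standard approximation for the Chandrasekhar mass, M_Ch ≈ 1.456 (2/μ_e)² solar masses, where μ_e is the mean molecular weight per electron (≈ 2 for an oxygen-neon core):

```python
# Back-of-the-envelope sketch of the Chandrasekhar limit as a function of
# the mean molecular weight per electron mu_e, using the standard
# approximation M_Ch ~ 1.456 * (2 / mu_e)**2 solar masses.

def chandrasekhar_mass(mu_e=2.0):
    """Approximate Chandrasekhar mass in solar masses for a degenerate core
    with mean molecular weight per electron mu_e (~2 for an O-Ne core)."""
    return 1.456 * (2.0 / mu_e) ** 2

# An oxygen-neon core must approach roughly this mass before electron
# capture can trigger core collapse.
print(f"{chandrasekhar_mass(2.0):.2f} solar masses")  # ~1.46
```

A super-AGB star whose core grows towards this value through continued hydrogen and helium shell burning is the candidate progenitor of an electron-capture supernova.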
Super-AGB stars develop partially degenerate carbon–oxygen cores that are large enough to ignite carbon in a flash analogous to the earlier helium flash. The second dredge-up is very strong in this mass range and that keeps the core size below the level required for burning of neon as occurs in higher-mass supergiants. References attribution contains text copied from Asymptotic giant branch available under CC-BY-SA-3.0 Asymptotic-giant-branch stars Stellar evolution
Super-AGB star
Physics
553
62,029,996
https://en.wikipedia.org/wiki/Prymnesin-B1
Prymnesin-B1 is a chemical with the molecular formula . It is a member of the prymnesins, a class of ladder-frame polyether phycotoxins made by the alga Prymnesium parvum. It is known to be toxic to fish. It is a so-called "B-type" prymnesin; these differ in the number of backbone cycles when compared to A-type prymnesins like prymnesin-2. Structures Prymnesin-B1 is formed of a large polyether polycyclic core with several conjugated double and triple bonds, chlorine and nitrogen heteroatoms, and a single sugar moiety consisting of α-D-galactopyranose. Biosynthesis The backbone of B-type prymnesins like prymnesin-B1 is reportedly made by giant polyketide synthase enzymes dubbed the "PKZILLAs". See also Prymnesin-1 Prymnesin-2 References Phycotoxins Polyether toxins Primary alcohols Secondary alcohols Conjugated enynes Organochlorides Halohydrins Halogen-containing natural products Amines Conjugated diynes Glycosides Heterocyclic compounds with 5 rings Oxygen heterocycles
Prymnesin-B1
Chemistry
281
56,275,958
https://en.wikipedia.org/wiki/Bharat%20Bhushan%20%28academic%29
Bharat Bhushan is an American engineer. He is an Ohio Eminent Scholar and the Howard D. Winbigler Professor at Ohio State University. Education Bhushan graduated with a BE in mechanical engineering from the Birla Institute of Technology and Science, Pilani in 1970, an MS in mechanical engineering from the Massachusetts Institute of Technology in 1971, and an MS in mechanics from the University of Colorado, Boulder (CU) in 1973. He received his PhD from CU in 1979. He also has an MBA in management from the Rensselaer Polytechnic Institute (1980). In addition to five earned college degrees, he has also received five honorary doctorate degrees: in 1990 from the University of Trondheim; in 1996 from the Warsaw University of Technology; in 2000 from the Metal-Polymer Research Institute of the National Academy of Sciences of Belarus; in 2011 from the University of Kragujevac; and in 2019 from the University of Tyumen, Russia. Research Bhushan's research interests include nanotribology, nanomechanics, scanning probe microscopy, biotechnology, nanotechnology, biomimetics, and science and public policy. He has authored or co-authored ten books and more than 900 scientific papers, and holds more than 25 US and foreign patents. He is one of 1,500 Google Scholar 'Highly Cited Researchers in All Fields', with an 'h-index' of 130, and one of Scopus's 440 scientists for career-long citation impact across all fields worldwide. He is the fourth Highly Cited Researcher in Mechanical Engineering and an ISI Highly Cited Researcher in Materials Science and in the Cross-field Category. Among various fellowships, in 1998 and 2007 he received the Alexander von Humboldt Research Prize for Senior Scientists; in 1999, he received the Fulbright Senior Scholar Award; and in 2002 and 2007, he received the Max Planck Foundation Research Award for Outstanding Foreign Scientists. He also received the International Award from the Society of Tribologists and Lubrication Engineers.
In 2015, he received the Institution of Chemical Engineers (UK) Global Award. In 2020, he received the Mayo D. Hersey Award from the American Society of Mechanical Engineers and the Tribology Gold Medal from the International Tribology Council. References External links Nanoprobe Laboratory at OSU 20th-century American engineers 21st-century American engineers American mechanical engineers American nanotechnologists Tribologists Ohio State University faculty Living people American people of Indian descent Year of birth missing (living people)
Bharat Bhushan (academic)
Materials_science
501
21,081,997
https://en.wikipedia.org/wiki/European%20Flight%20Test%20Safety%20Award
The European Flight Test Safety Award was created by Heidi Biermeier, the fiancée of test pilot Gérard Guillaumaud, after his fatal accident. The regulations of the award state that recipients must be individuals who made significant contributions in the area of safety within flight testing. Award Ceremony The award was first granted in October 2007 in London at the award dinner concluding the 1st European Flight Test Safety Workshop. The workshop is hosted by the Flight Test Safety Committee of the Society of Experimental Test Pilots (SETP) and the Society of Flight Test Engineers (SFTE). The recipient is nominated by a jury consisting of two flight test experts and the founder of the award, Ms. Heidi Biermeier. Recipients 2007 Dipl.-Ing. Dr.-Ing. Dieter W. Reisinger, MSc (Austrian Airlines) 2008 Gérard Temme of CertiFlyer B.V. 2009 Patrick L. Svatek (flight test engineer at NAVAIR, now USNTPS Patuxent River) 2010 Billie Flynn (experimental test pilot), Lockheed Martin 2011 General (retired) Desmond Barker (CSIR), author of the book Zero Error Margin - Airshow Display Flying Analyzed 2012 Capt. David C. Carbaugh, chief pilot, Boeing Flight Operations Safety 2013 Maurice "Moe" Girard, senior engineering test pilot, Bombardier 2014 Gulfstream 2015 Daniel Schwenzel (Airbus Helicopters) Workshop Locations and Themes 2007 London 2008 Amsterdam 2009 Vienna, "First Flight" 2010 London 2011 Salzburg, "Displaying Prototype Aircraft - Risks and Preparation" 2012 Salzburg, "Loss of Control - Tackling Aviation's #1 Killer" 2013 Amsterdam, "Human Machine Interface and Flight Deck Design" 2014 Manching, "Safety Management Systems in Flight Test Organizations" 2015 Aix-en-Provence, "Finding the Black Swan" There is also an annual North American Flight Test Safety Workshop.
See also List of aviation awards References External links Website Flight Test Safety Committee Flight Test Safety Workshop 2009 in Vienna Flight Test Safety Workshop, May 2010 in San Jose, California Flight Test Safety Workshop 2011 in Salzburg Society of Experimental Test Pilots Aerospace engineering Aviation safety Aviation awards
European Flight Test Safety Award
Engineering
434
38,551,417
https://en.wikipedia.org/wiki/Tylopilus%20humilis
Tylopilus humilis, commonly known as the humble bolete, is a bolete fungus in the family Boletaceae. It was first described scientifically in 1967 by Harry Delbert Thiers from collections made in Mendocino, California. It is found in the United States. See also List of North American boletes References External links humilis Fungi described in 1967 Fungi of the United States Fungi without expected TNC conservation status Fungus species
Tylopilus humilis
Biology
92
42,620,730
https://en.wikipedia.org/wiki/Living%20Fossil%20%28short%20story%29
"Living Fossil" is a science fiction story by American writer L. Sprague de Camp, on the concepts of human extinction and future evolution. It was first published in the magazine Astounding Science-Fiction for February 1939. It first appeared in book form in the anthology A Treasury of Science Fiction (Crown Publishers, 1948); it later appeared in the anthologies Gates to Tomorrow (Atheneum, 1973), and The SFWA Grand Masters, Volume 1 (Tor Books, 1999). The story has been translated into Danish, Swedish and Italian. It is perhaps the earliest work of fiction dealing with the afterwards popular theme of humanity being replaced by other intelligent primates in the future, later epitomized by Pierre Boulle's Planet of the Apes. Plot summary In the far future (perhaps five to ten million years from now), humans and much of the world's fauna have gone extinct, and new creatures have evolved from the remaining species to take their places. Jmu, intelligent primates evolved from capuchin monkeys, now fill the niche left by humans, giant agoutis that of horses, giant tapirs that of elephants. There are also giant rabbits. Other animals, like bears, lions, deer, geese, ducks, snakes, dragonflies, grasshoppers, fleas and mayflies, continue to survive in their previous ecological roles. It is a world of depleted resources, much of these having been used up by humans, but the Jmu have developed to a fairly high level their own technology, including aeronautical balloons, rifles, binoculars and cameras. Two Jmu from South America, zoologist Nawputta and his guide Chujee, an amateur naturalist, are exploring what was once the Pittsburgh area of North America's Eastern Forest. Their goal is to catalogue new species and investigate the scant, ruinous remains of human civilization. They encounter Nguchoy tsu Chaw, a timber scout for the local Jmu colony. He is alone; his own partner, Jawga tsu Shrra, was recently killed by a rattlesnake. 
Nguchoy treats the newcomers with suspicion, but he helpfully steers them towards a huge stand of valuable pine. In the pine forest the scientists happen upon fresh bones that Nawputta excitedly identifies as human, previously only known from fossils. They appear to have been shot by Jmu. Later, Nawputta manages to shoot a live specimen, a primitive armed with a wooden club, which he proceeds to skin and dissect in the interest of science. Discovered by other humans, he and Chujee hastily retreat as the humans rouse the countryside with signal drums and the whole tribe hunts them with spears. The Jmu drive the tribe off with gunfire and escape a subsequent ambush. They outdistance pursuit, but the humans are still on their trail. Nawputta and Chujee rendezvous back at Nguchoy's camp, finding him absent. Ruminating on previous suspicions, they reason that the timber scout encountered the humans first and stirred them up by murdering the man whose remains they had initially found. He then directed his fellow Jmu into the same area, intending that they meet their own deaths at the hands of the angered humans, leaving him with sole, undisputed claim to the valuable timber. In this light, it also occurs to them that the death of Nguchoy's partner came at a most convenient time for him. They locate the grave of Jawga and find he died by gunshot, not snakebite. On Nguchoy's return, they surprise the scout, who confesses. They thereupon confiscate his canoe and depart down the river, leaving him alone to face the vengeance of the approaching humans. Nawputta plans to return to South America before the local colonists rediscover and despoil the forest, hoping to have the human habitat set aside as a preserve for these living fossils.
Critics Alexei and Cory Panshin have noted the environmentalist subtext of the story, observing that it suggests "that our fall came to pass not through the operation of some iron law of growth and decay, but rather as the result of a multiplicity of human failings, not the least of which was abuse of the environment. ... But for de Camp, mankind was by no means inevitably doomed. There was an obvious way forward, and that was for us to embrace nature, and not to rebel against it." Relation to other works The plot feature of other primates taking the place of an extinct humanity in the far future is also explored in de Camp's novel Genus Homo (1950), written in collaboration with P. Schuyler Miller. Another use of intelligent non-human primates can be found in de Camp's later short story "The Blue Giraffe" (1939). The resolving device of a scientist engineering the death of a greedy antagonist in defense of science is echoed in his later short story "In-Group" (1952). References External links
Living Fossil (short story)
17,639,853
https://en.wikipedia.org/wiki/Tecticornia%20bibenda
Tecticornia bibenda is a species of plant in the subfamily Salicornioideae of the family Amaranthaceae from Western Australia. Its segmented stem gives it an appearance similar to the Michelin Man. T. bibenda was ranked tenth in the top species of 2008 by the International Institute for Species Exploration. References
Tecticornia bibenda
195,113
https://en.wikipedia.org/wiki/Digital%20divide
The digital divide is the unequal access to digital technology, including smartphones, tablets, laptops, and the internet. The digital divide worsens inequality around access to information and resources. In the Information Age, people without access to the Internet and other technology are at a disadvantage, for they are unable or less able to connect with others, find and apply for jobs, shop, and learn. People who are homeless, living in poverty, elderly people, and those living in rural communities may have limited access to the Internet; in contrast, urban middle-class and upper-class people have easy access to the Internet. Another divide is between producers and consumers of Internet content, which could be a result of educational disparities. While social media use varies across age groups, a 2010 US study reported no racial divide. History The historical roots of the digital divide in America refer to the increasing gap that occurred during the early modern period between those who could and could not access the real-time forms of calculation, decision-making, and visualization offered via written and printed media. Within this context, ethical discussions regarding the relationship between education and the free distribution of information were raised by thinkers such as Immanuel Kant, Mary Wollstonecraft, and Jean Jacques Rousseau (1712–1778). The latter advocated that governments should intervene to ensure that any society's economic benefits are fairly and meaningfully distributed. Amid the Industrial Revolution in Great Britain, Rousseau's idea helped to justify poor laws that created a safety net for those who were harmed by new forms of production. Later, when telegraph and postal systems evolved, many used Rousseau's ideas to argue for full access to those services, even if it meant subsidizing hard-to-serve citizens.
Thus, "universal services" referred to innovations in regulation and taxation that would allow phone services such as AT&T in the United States to serve hard-to-serve rural users. In 1996, as telecommunications companies merged with Internet companies, the Federal Communications Commission adopted Telecommunications Services Act of 1996 to consider regulatory strategies and taxation policies to close the digital divide. Though the term "digital divide" was coined among consumer groups that sought to tax and regulate information and communications technology (ICeT) companies to close the digital divide, the topic soon moved onto a global stage. The focus was the World Trade Organization which passed a Telecommunications Services Act, which resisted regulation of ICT companies so that they would be required to serve hard to serve individuals and communities. In 1999, to assuage anti-globalization forces, the WTO hosted the "Financial Solutions to Digital Divide" in Seattle, US, co-organized by Craig Warren Smith of Digital Divide Institute and Bill Gates Sr. the chairman of the Bill and Melinda Gates Foundation. It catalyzed a full-scale global movement to close the digital divide, which quickly spread to all sectors of the global economy. In 2000, US president Bill Clinton mentioned the term in the State of the Union Address. During the COVID-19 pandemic At the outset of the COVID-19 pandemic, governments worldwide issued stay-at-home orders that established lockdowns, quarantines, restrictions, and closures. The resulting interruptions to schooling, public services, and business operations drove nearly half of the world's population into seeking alternative methods to live while in isolation. These methods included telemedicine, virtual classrooms, online shopping, technology-based social interactions and working remotely, all of which require access to high-speed or broadband internet access and digital technologies. 
A Pew Research Center study reports that 90% of Americans describe the use of the Internet as "essential" during the pandemic. The accelerated use of digital technologies creates a landscape where the ability, or lack thereof, to access digital spaces becomes a crucial factor in everyday life. According to the Pew Research Center, 59% of children from lower-income families were likely to face digital obstacles in completing school assignments. These obstacles included the use of a cellphone to complete homework, having to use public Wi-Fi because of unreliable internet service in the home, and lack of access to a computer in the home. This difficulty, termed the homework gap, affects more than 30% of K-12 students living below the poverty threshold, and disproportionately affects American Indian/Alaska Native, Black, and Hispanic students. These types of interruptions or privilege gaps in education exemplify problems in the systemic marginalization of historically oppressed individuals in primary education. The pandemic exposed inequities that cause discrepancies in learning. A lack of "tech readiness", that is, confident and independent use of devices, was reported among the US elderly population, with more than 50% reporting an inadequate knowledge of devices and more than one-third reporting a lack of confidence. Moreover, according to a UN research paper, similar results can be found across various Asian countries, with those above the age of 74 reporting less frequent and less confident use of digital devices. This aspect of the digital divide became acute during the pandemic as healthcare providers increasingly relied upon telemedicine to manage chronic and acute health conditions. Aspects There are manifold definitions of the digital divide, all with slightly different emphases, which is evidenced by related concepts like digital inclusion, digital participation, digital skills, media literacy, and digital accessibility.
Infrastructure The infrastructure by which individuals, households, businesses, and communities connect to the Internet comprises the physical media that people use to connect, such as desktop computers, laptops, basic mobile phones or smartphones, iPods or other MP3 players, gaming consoles such as Xbox or PlayStation, electronic book readers, and tablets such as iPads. Traditionally, the nature of the divide has been measured in terms of the existing numbers of subscriptions and digital devices. Given the increasing number of such devices, some have concluded that the digital divide among individuals has increasingly been closing as the result of a natural and almost automatic process. Others point to persistent lower levels of connectivity among women, racial and ethnic minorities, people with lower incomes, rural residents, and less educated people as evidence that addressing inequalities in access to and use of the medium will require much more than the passing of time. Recent studies have measured the digital divide not in terms of technological devices, but in terms of the existing bandwidth per individual (in kbit/s per capita). As shown in the Figure on the side, the digital divide in kbit/s is not monotonically decreasing but re-opens with each new innovation. For example, "the massive diffusion of narrow-band Internet and mobile phones during the late 1990s" increased digital inequality, as well as "the initial introduction of broadband DSL and cable modems during 2003–2004 increased levels of inequality". During the mid-2000s, communication capacity was more unequally distributed than during the late 1980s, when only fixed-line phones existed. The most recent increase in digital equality stems from the massive diffusion of the latest digital innovations (i.e. fixed and mobile broadband infrastructures, e.g. 5G and fiber-optic FTTH).
Measurement methodologies of the digital divide, and more specifically an Integrated Iterative Approach General Framework (Integrated Contextual Iterative Approach – ICI) and the digital divide modeling theory under measurement model DDG (Digital Divide Gap), are used to analyze the gap existing between developed and developing countries, and the gap among the 27 member states of the European Union. The Good Things Foundation, a UK non-profit organisation, collates data on the extent and impact of the digital divide in the UK and lobbies the government to fix digital exclusion. Skills and digital literacy Research from 2001 showed that the digital divide is more than just an access issue and cannot be alleviated merely by providing the necessary equipment. There are at least three factors at play: information accessibility, information utilization, and information receptiveness. More than just accessibility, the digital divide consists of society's lack of knowledge on how to make use of the information and communication tools once they exist within a community. Information professionals have the ability to help bridge the gap by providing reference and information services to help individuals learn and utilize the technologies to which they do have access, regardless of the economic status of the individual seeking help. Location One can connect to the internet in a variety of locations, such as homes, offices, schools, libraries, public spaces, and Internet cafes. Levels of connectivity often vary between rural, suburban, and urban areas. In 2017, the Wireless Broadband Alliance published the white paper The Urban Unconnected, which highlighted that in the eight countries with the world's highest GNP about 1.75 billion people had no internet connection, and one third of them lived in the major urban centers.
Delhi (5.3 million, 9% of the total population), São Paulo (4.3 million, 36%), New York (1.6 million, 19%), and Moscow (2.1 million, 17%) registered the highest numbers of citizens who had no internet access of any type. As of 2021, only about half of the world's population had access to the internet, leaving 3.7 billion people without internet. A majority of those are in developing countries, and a large portion of them are women. Also, the governments of different countries have different policies about privacy, data governance, freedom of speech, and many other factors. Government restrictions make it challenging for technology companies to provide services in certain countries. This disproportionately impacts different regions of the world; Europe has the highest percentage of the population online while Africa has the lowest. From 2010 to 2014, Europe went from 67% to 75%, and in the same time span Africa went from 10% to 19%. Network speeds play a large role in the quality of an internet connection. Large cities and towns may have better access to high-speed internet than rural areas, which may have limited or no service. Households can be locked into a specific service provider, since it may be the only carrier that even offers service to the area. This applies to regions that have developed networks, like the United States, but also applies to developing countries, where very large areas have virtually no coverage. In those areas there are very limited actions that a consumer could take, since the issue is mainly infrastructure. Technologies that provide an internet connection through satellite are becoming more common, like Starlink, but they are still not available in many regions. Based on location, a connection may be so slow as to be virtually unusable, solely because a network provider has limited infrastructure in the area.
For example, downloading 5 GB of data in Taiwan might take about 8 minutes, while the same download might take 30 hours in Yemen. From 2020 to 2022, average download speeds in the EU climbed from 70 Mbps to more than 120 Mbps, owing mostly to the demand for digital services during the pandemic. There is still a large rural-urban disparity in internet speeds, with metropolitan areas in France and Denmark reaching rates of more than 150 Mbps, while many rural areas in Greece, Croatia, and Cyprus have speeds of less than 60 Mbps. The EU aspires to complete gigabit coverage by 2030; however, as of 2022, just over 60% of Europe had high-speed internet infrastructure, signalling the need for further enhancements. Applications Common Sense Media, a nonprofit group based in San Francisco, surveyed almost 1,400 parents and reported in 2011 that 47 percent of families with incomes of more than $75,000 had downloaded apps for their children, while only 14 percent of families earning less than $30,000 had done so. Reasons and correlating variables As of 2014, the digital divide was known to exist for a number of reasons. Obtaining access to ICTs and using them actively has been linked to demographic and socio-economic characteristics including income, education, race, gender, geographic location (urban-rural), age, skills, awareness, political, cultural and psychological attitudes. Multiple regression analysis across countries has shown that income levels and educational attainment are identified as providing the most powerful explanatory variables for ICT access and usage. Evidence was found that Caucasians are much more likely than non-Caucasians to own a computer as well as have access to the Internet in their homes. As for geographic location, people living in urban centers have more access to and show more usage of computer services than those in rural areas.
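The Taiwan/Yemen comparison above is a simple consequence of bandwidth arithmetic. A minimal sketch follows; the connection speeds used are back-calculated from the stated download times and are illustrative assumptions, not figures from the source:

```python
def download_time_hours(size_gb: float, speed_mbps: float) -> float:
    """Hours needed to download size_gb gigabytes at speed_mbps megabits/s.

    Uses decimal units: 1 GB = 8,000 megabits.
    """
    megabits = size_gb * 8 * 1000
    seconds = megabits / speed_mbps
    return seconds / 3600

# Assumed illustrative speeds: ~83 Mbps versus ~0.37 Mbps.
print(download_time_hours(5, 83))    # roughly 0.13 hours, about 8 minutes
print(download_time_hours(5, 0.37))  # roughly 30 hours
```

The same file is therefore over two hundred times slower to retrieve on the slower connection, which is the kind of gap the rural-urban and cross-country speed figures above describe.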
In developing countries, a digital divide between women and men is apparent in tech usage, with men more likely to be competent tech users. Controlled statistical analysis has shown that income, education and employment act as confounding variables and that women with the same level of income, education and employment actually embrace ICT more than men (see Women and ICT4D); this argues against any suggestion that women are "naturally" more technophobic or less tech-savvy. However, each nation has its own set of causes of the digital divide. For example, the digital divide in Germany is unique because it is not largely due to differences in the quality of infrastructure. The correlation between income and internet use suggests that the digital divide persists at least in part due to income disparities. Most commonly, a digital divide stems from poverty and the economic barriers that limit resources and prevent people from obtaining or otherwise using newer technologies. In research, while each explanation is examined, others must be controlled to eliminate interaction effects or mediating variables, but these explanations are meant to stand as general trends, not direct causes. Measurements of the intensity of usage, such as incidence and frequency, vary by study. Some report usage as access to the Internet and ICTs while others report usage as having previously connected to the Internet. Some studies focus on specific technologies, others on a combination (such as Infostate, proposed by Orbicom-UNESCO, the Digital Opportunity Index, or ITU's ICT Development Index). Economic gap in the United States During the mid-1990s, the United States Department of Commerce, National Telecommunications & Information Administration (NTIA) began publishing reports about the Internet and access to and usage of the resource.
The first of three reports is titled "Falling Through the Net: A Survey of the "Have Nots" in Rural and Urban America" (1995), the second is "Falling Through the Net II: New Data on the Digital Divide" (1998), and the final report is "Falling Through the Net: Defining the Digital Divide" (1999). The NTIA's final report attempted to clearly define the term digital divide as "the divide between those with access to new technologies and those without". Since the introduction of the NTIA reports, much of the early, relevant literature began to reference the NTIA's digital divide definition. The digital divide is commonly defined as being between the "haves" and "have-nots". The U.S. Federal Communications Commission's (FCC) 2019 Broadband Deployment Report indicated that 21.3 million Americans do not have access to wired or wireless broadband internet. As of 2020, BroadbandNow, an independent research company studying access to internet technologies, estimated that the actual number of Americans without high-speed internet is twice that number. According to a 2021 Pew Research Center report, smartphone ownership and internet use have increased for all Americans; however, a significant gap still exists between those with lower incomes and those with higher incomes: U.S. households earning $100K or more are twice as likely to own multiple devices and have home internet service as those making between $30K and $100K, and three times as likely as those earning less than $30K per year. The same research indicated that 13% of the lowest-income households had no access to internet or digital devices at home compared to only 1% of the highest-income households. According to a Pew Research Center survey of U.S. adults conducted from January 25 to February 8, 2021, the digital lives of Americans with high and low incomes are varied. Meanwhile, the proportion of Americans that use home internet or cell phones has remained constant between 2019 and 2021.
A quarter of those with yearly average earnings under $30,000 (24%) say they do not own a smartphone. Four out of every ten low-income people (43%) do not have home internet access or a computer. Furthermore, the majority of lower-income Americans do not own a tablet device. On the other hand, every technology is practically universal among people earning $100,000 or more per year. Americans with larger family incomes are also more likely to buy a variety of internet-connected products. Wi-Fi at home, a smartphone, a computer, and a tablet are used by around six out of ten families making $100,000 or more per year, compared to 23 percent of lower-income households. Racial gap in the United States Although many groups in society are affected by a lack of access to computers or the Internet, communities of color are specifically observed to be negatively affected by the digital divide. Pew research shows that as of 2021, home broadband rates are 81% for White households, 71% for Black households and 65% for Hispanic households. While 63% of adults find the lack of broadband to be a disadvantage, only 49% of White adults do. Smartphone and tablet ownership remains consistent, with about 8 out of 10 Black, White, and Hispanic individuals reporting owning a smartphone and half owning a tablet. A 2021 survey found that a quarter of Hispanics rely on their smartphone and do not have access to broadband. Physical and mental disability gap Inequities in access to information technologies are present among individuals living with a physical disability in comparison to those who are not living with a disability. In 2011, according to the Pew Research Center, 54% of households with a person who had a disability had home Internet access, compared to 81% of households without a person with a disability.
Some types of disability, such as quadriplegia or a disability affecting the hands, can prevent an individual from interacting with computer and smartphone screens. However, there is still a lack of access to technology and home Internet access among those who have cognitive and auditory disabilities as well. There is concern over whether the increase in the use of information technologies will increase equality through offering opportunities for individuals living with disabilities, or whether it will only add to the present inequalities and lead to individuals living with disabilities being left behind in society. Issues such as the perception of disabilities in society, national and regional government policy, corporate policy, mainstream computing technologies, and real-time online communication have been found to contribute to the impact of the digital divide on individuals with disabilities. In 2022, a survey of people in the UK with severe mental illness found that 42% lacked basic digital skills, such as changing passwords or connecting to Wi-Fi. People with disabilities are also the targets of online abuse. Online disability hate crimes increased by 33% across the UK between 2016–17 and 2017–18, according to a report published by Leonard Cheshire, a health and welfare charity. Accounts of online hate abuse towards people with disabilities were shared during an incident in 2019 when model Katie Price's son was the target of online abuse that was attributed to his having a disability. In response to the abuse, Price launched a campaign to ensure that Britain's MPs hold accountable those who perpetrate online abuse towards those with disabilities. Online abuse towards individuals with disabilities is a factor that can discourage people from engaging online, which could prevent people from learning information that could improve their lives.
Many individuals living with disabilities face online abuse in the form of accusations of benefit fraud and "faking" their disability for financial gain, which in some cases leads to unnecessary investigations. Gender gap Due to the rapidly declining price of connectivity and hardware, skills deficits have eclipsed barriers of access as the primary contributor to the gender digital divide. Studies show that women are less likely to know how to leverage devices and Internet access to their full potential, even when they do use digital technologies. In rural India, for example, a study found that the majority of women who owned mobile phones only knew how to answer calls. They could not dial numbers or read messages without assistance from their husbands, due to a lack of literacy and numeracy skills. A survey of 3,000 respondents across 25 countries found that adolescent boys with mobile phones used them for a wider range of activities, such as playing games and accessing financial services online. Adolescent girls in the same study tended to use just the basic functionalities of their phones, such as making calls and using the calculator. Similar trends can be seen even in areas where Internet access is near-universal. A survey of women in nine cities around the world revealed that although 97% of women were using social media, only 48% of them were expanding their networks, and only 21% of Internet-connected women had searched online for information related to health, legal rights or transport. In some cities, less than one quarter of connected women had used the Internet to look for a job. Studies show that despite strong performance in computer and information literacy (CIL), girls do not have confidence in their ICT abilities. According to the International Computer and Information Literacy Study (ICILS) assessment, girls' self-efficacy scores (their perceived, as opposed to their actual, abilities) for advanced ICT tasks were lower than boys'.
A paper published by J. Cooper from Princeton University points out that learning technology is designed to be receptive to men instead of women. Overall, the study presents the problem of various perspectives in society that are a result of gendered socialization patterns that hold that computers are a part of the male experience, since computers have traditionally been presented as toys for boys. This divide persists as children grow older, and young girls are not encouraged as much to pursue degrees in IT and computer science. In 1990, the percentage of women in computing jobs was 36%; however, by 2016, this number had fallen to 25%. This can be seen in the underrepresentation of women in IT hubs such as Silicon Valley. There has also been the presence of algorithmic bias in machine learning algorithms implemented by major companies. In 2015, Amazon had to abandon a recruiting algorithm that showed a difference between the ratings that candidates received for software developer jobs as well as other technical jobs. As a result, it was revealed that Amazon's machine algorithm was biased against women and favored male resumes over female resumes. This was because Amazon's computer models were trained to vet patterns in resumes over a ten-year period. During this ten-year period, the majority of the resumes belonged to male individuals, which is a reflection of male dominance across the tech industry. Age gap The age gap contributes to the digital divide due to the fact that people born before 1983 did not grow up with the internet. According to Marc Prensky, people who fall into this age range are classified as "digital immigrants." A digital immigrant is defined as "a person born or brought up before the widespread use of digital technology." The internet became officially available for public use on January 1, 1983; anyone born before then has had to adapt to the new age of technology.
On the contrary, people born after 1983 are considered "digital natives". Digital natives are defined as people born or brought up during the age of digital technology. Across the globe, there is a 10% difference in internet usage between people aged 15–24 years old and people aged 25 years or older. According to the International Telecommunication Union (ITU), 75% of people aged 15–24 used the internet in 2022 compared to 65% of people aged 25 years or older. The largest digital divide between generations occurs in Africa, with 55% of the younger age group using the internet compared to 36% of people aged 25 years or older. The smallest divide occurs in the Commonwealth of Independent States, with 91% of the younger age group using the internet compared to 83% of people aged 25 years or older. In addition to being less connected with the internet, older generations are less likely to use financial technology, also known as fintech. Fintech is any way of managing money via digital devices. Some examples of fintech include digital payment apps such as Venmo and Apple Pay, tax services such as TurboTax, and applying for a mortgage digitally. In data from the World Bank Findex, 40% of people younger than 40 years old utilized fintech compared to less than 25% of people aged 60 years or older. Global level The divide between differing countries or regions of the world is referred to as the global digital divide, which examines the technological gap between developing and developed countries. The divide within countries (such as the digital divide in the United States) may refer to inequalities between individuals, households, businesses, or geographic areas, usually at different socioeconomic levels or other demographic categories. In contrast, the global digital divide describes disparities in access to computing and information resources, and the opportunities derived from such access.
As the internet rapidly expands, it is difficult for developing countries to keep up with the constant changes. In 2014, only three countries (China, the US, and Japan) hosted 50% of the globally installed bandwidth potential. This concentration is not new, as historically only ten countries have hosted 70–75% of the global telecommunication capacity (see Figure). The U.S. lost its global leadership in terms of installed bandwidth in 2011, replaced by China, which hosted more than twice as much national bandwidth potential in 2014 (29% versus 13% of the global total). Some zero-rating programs such as Facebook Zero offer free/subsidized data access to certain websites. Critics object that this is an anti-competitive program that undermines net neutrality and creates a "walled garden". A 2015 study reported that 65% of Nigerians, 61% of Indonesians, and 58% of Indians agree with the statement that "Facebook is the Internet", compared with only 5% in the US. Implications Social capital Once an individual is connected, Internet connectivity and ICTs can enhance his or her future social and cultural capital. Social capital is acquired through repeated interactions with other individuals or groups of individuals. Connecting to the Internet creates another set of means by which to achieve repeated interactions. ICTs and Internet connectivity enable repeated interactions through access to social networks, chat rooms, and gaming sites. Once an individual has access to connectivity, obtains the infrastructure by which to connect, and can understand and use the information that ICTs and connectivity provide, that individual is capable of becoming a "digital citizen." Economic disparity In the United States, research provided by Unguarded Availability Services notes a direct correlation between a company's access to technological advancements and its overall success in bolstering the economy.
The study, which includes over 2,000 IT executives and staff officers, indicates that 69 percent of employees feel they do not have access to sufficient technology to make their jobs easier, while 63 percent of them believe the lack of technological mechanisms hinders their ability to develop new work skills. Additional analysis provides more evidence of how the digital divide also affects the economy in places all over the world. A BCG report suggests that in countries like Sweden, Switzerland, and the U.K., the digital connection among communities is made easier, allowing for their populations to obtain a much larger share of the economies via digital business. In fact, in these places, populations hold shares approximately 2.5 percentage points higher. During a meeting with the United Nations, a Bangladesh representative expressed his concern that poor and undeveloped countries would be left behind due to a lack of funds to bridge the digital gap. Education The digital divide impacts children's ability to learn and grow in low-income school districts. Without Internet access, students are unable to cultivate necessary technological skills to understand today's dynamic economy. The need for the internet starts while children are in school – necessary for matters such as school portal access, homework submission, and assignment research. The Federal Communications Commission's Broadband Task Force created a report showing that about 70% of teachers give students homework that demands access to broadband. Approximately 65% of young scholars use the Internet at home to complete assignments as well as connect with teachers and other students via discussion boards and shared files. A recent study indicates that approximately 50% of students say that they are unable to finish their homework due to an inability to either connect to the Internet or, in some cases, find a computer.
Additionally, The Public Policy Institute of California reported in 2023 that 27% of the state's school children lack the necessary broadband to attend school remotely, and 16% have no internet connection at all. As a result, 42% of students say they received a lower grade because of this disadvantage. According to research conducted by the Center for American Progress, "if the United States were able to close the educational achievement gaps between native-born white children and black and Hispanic children, the U.S. economy would be 5.8 percent—or nearly $2.3 trillion—larger in 2050". By contrast, well-off families, especially tech-savvy parents in Silicon Valley, carefully limit their own children's screen time. The children of wealthy families attend play-based preschool programs that emphasize social interaction instead of time spent in front of computers or other digital devices, and they pay to send their children to schools that limit screen time. American families that cannot afford high-quality childcare options are more likely to use tablet computers filled with apps for children as a cheap replacement for a babysitter, and their government-run schools encourage screen time during school. Students in school are also learning about the digital divide. To reduce the impact of the digital divide and increase digital literacy in young people at an early age, governments have begun to develop and focus policy on embedding digital literacies in both student and educator programs, for instance, in Initial Teacher Training programs in Scotland.
The National Framework for Digital Literacies in Initial Teacher Education was developed by representatives from Higher Education institutions that offer Initial Teacher Education (ITE) programs in conjunction with the Scottish Council of Deans of Education (SCDE), with the support of the Scottish Government. This policy-driven approach aims to establish an academic grounding in the exploration of learning and teaching digital literacies and their impact on pedagogy, as well as ensuring educators are equipped to teach in the rapidly evolving digital environment and continue their own professional development. Demographic differences Factors such as nationality, gender, and income contribute to the digital divide across the globe. Depending on these characteristics, a person's access to the internet can be substantially reduced. According to a study conducted by the ITU in 2022, Africa has the fewest people on the internet at a 40% rate; the next lowest internet population is the Asia-Pacific region at 64%. Internet access remains a problem in Least Developed Countries and Landlocked Developing Countries: both have 36% of people using the internet, compared to a 66% average around the world. Men generally have more access to the internet around the world. The gender parity score across the globe is 0.92. A gender parity score is calculated as the percentage of women who use the internet divided by the percentage of men who use the internet. Ideally, countries want to have gender parity scores between 0.98 and 1.02. The region with the least gender parity is Africa with a score of 0.75. The next lowest gender parity score belongs to the Arab States at 0.87. The Americas, the Commonwealth of Independent States, and Europe all have the highest gender parity scores, staying between 0.98 and 1. Gender parity scores are often impacted by class. Low income regions have a score of 0.65 while upper-middle income and high income regions have a score of 0.99.
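The parity-score arithmetic described above is simple to sketch in code. In the snippet below, the function names and the sample usage percentages are illustrative assumptions; only the 0.98–1.02 parity band and the regional scores (Africa 0.75, Arab States 0.87) come from the ITU figures quoted in the text.

```python
def gender_parity_score(pct_women_online: float, pct_men_online: float) -> float:
    """Percentage of women using the internet divided by the percentage of men."""
    return pct_women_online / pct_men_online

def has_gender_parity(score: float) -> bool:
    """Countries aim for a score between 0.98 and 1.02, per the convention above."""
    return 0.98 <= score <= 1.02

# Hypothetical usage rates chosen to reproduce the regional scores quoted above:
africa = gender_parity_score(30.0, 40.0)       # 0.75, matching the Africa figure
arab_states = gender_parity_score(52.2, 60.0)  # 0.87, matching the Arab States figure

assert abs(africa - 0.75) < 1e-9 and not has_gender_parity(africa)
assert abs(arab_states - 0.87) < 1e-9
```

Note that the score is a ratio of shares, so it says nothing about absolute access levels: a region where 10% of women and 10% of men are online scores a "perfect" 1.0.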
The difference between economic classes has been a prevalent issue with the digital divide up to this point. People who earn low incomes use the internet at a 26% rate, followed by lower-middle income at 56%, upper-middle income at 79%, and high income at 92%. The staggering difference between low income and high income individuals can be traced to the affordability of mobile products. Products are becoming more affordable as the years pass; according to the ITU, "the global median price of mobile-broadband services dropped from 1.9 percent to 1.5 percent of average gross national income (GNI) per capita." There is still plenty of work to be done, as there remains a 66-percentage-point gap between low income and high income individuals' access to the internet. Facebook divide The Facebook divide, a concept derived from the "digital divide", is the phenomenon with regard to access to, use of, and impact of Facebook on society. It was coined at the International Conference on Management Practices for the New Economy (ICMAPRANE-17) on February 10–11, 2017. Additional concepts of Facebook Native and Facebook Immigrants were suggested at the conference. Facebook divide, Facebook native, Facebook immigrants, and Facebook left-behind are concepts for social and business management research. Facebook immigrants utilize Facebook for their accumulation of both bonding and bridging social capital. Facebook natives, Facebook immigrants, and Facebook left-behind induced the situation of Facebook inequality. In February 2018, the Facebook Divide Index was introduced at the ICMAPRANE conference in Noida, India, to illustrate the Facebook divide phenomenon. Solutions In the year 2000, the United Nations Volunteers (UNV) program launched its Online Volunteering service, which uses ICT as a vehicle for and in support of volunteering. It constitutes an example of a volunteering initiative that effectively contributes to bridging the digital divide.
ICT-enabled volunteering has a clear added value for development. If more people collaborate online with more development institutions and initiatives, this will imply an increase in person-hours dedicated to development cooperation at essentially no additional cost. This is the most visible effect of online volunteering for human development. Since May 17, 2006, the United Nations has raised awareness of the divide by way of the World Information Society Day. In 2001, it set up the Information and Communications Technology (ICT) Task Force. Later UN initiatives in this area are the World Summit on the Information Society since 2003, and the Internet Governance Forum, set up in 2006. As of 2009, the borderline between ICT as a necessity good and ICT as a luxury good was roughly around US$10 per person per month, or US$120 per year, meaning that people consider ICT expenditure of US$120 per year a basic necessity. Since more than 40% of the world population lives on less than US$2 per day, and around 20% live on less than US$1 per day (or less than US$365 per year), these income segments would have to spend one third of their income on ICT (120/365 = 33%). The global average of ICT spending is a mere 3% of income. Potential solutions include driving down the costs of ICT, which includes low-cost technologies and shared access through Telecentres. In 2022, the US Federal Communications Commission started a proceeding "to prevent and eliminate digital discrimination and ensure that all people of the United States benefit from equal access to broadband internet access service, consistent with Congress's direction in the Infrastructure Investment and Jobs Act." Social media websites serve as both manifestations of and means by which to combat the digital divide. The former describes phenomena such as the divided user demographics that make up sites such as Facebook, WordPress and Instagram.
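The affordability arithmetic in the paragraph above (US$120 per year against the under-US$1-per-day income segment) can be reproduced directly. All figures below are the ones quoted in the text; the variable names are mine.

```python
ict_cost_per_year = 10 * 12   # US$10 per person per month: the 2009 necessity/luxury borderline
income_low = 1 * 365          # the under-US$1-per-day income segment, per year

ict_share = ict_cost_per_year / income_low
assert round(ict_share * 100) == 33   # the "one third of income" figure (120/365)

# Compare with the global average ICT spend of 3% of income:
assert ict_share / 0.03 > 10  # this segment would spend over ten times the global average share
```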
Each of these sites hosts communities that engage with otherwise marginalized populations. Libraries In 2010, an "online indigenous digital library as part of public library services" was created in Durban, South Africa, to narrow the digital divide by not only giving the people of the Durban area access to this digital resource, but also by incorporating the community members into the process of creating it. In 2002, the Gates Foundation started the Gates Library Initiative, which provides training assistance and guidance in libraries. In Kenya, lack of funding, language barriers, and technology illiteracy contributed to an overall lack of computer skills and educational advancement. This slowly began to change when foreign investment began. In the early 2000s, the Carnegie Foundation funded a revitalization project through the Kenya National Library Service. Those resources enabled public libraries to provide information and communication technologies to their patrons. In 2012, public libraries in the Busia and Kiberia communities introduced technology resources to supplement curriculum for primary schools. By 2013, the program expanded into ten schools. Effective use Even though individuals might be capable of accessing the Internet, many still face barriers to entry, such as a lack of infrastructure or the inability to understand the information that the Internet provides. Some individuals can connect, but they do not have the knowledge to use what information ICTs and Internet technologies provide them. This leads to a focus on capabilities and skills, as well as awareness to move from mere access to effective usage of ICT. Community informatics (CI) focuses on issues of "use" rather than "access". CI is concerned with ensuring the opportunity not only for ICT access at the community level but also, according to Michael Gurstein, that the means for the "effective use" of ICTs for community betterment and empowerment are available.
Gurstein has also extended the discussion of the digital divide to include issues around access to and the use of "open data", and coined the term "data divide" to refer to this issue area. Criticism Knowledge divide Since gender, age, race, income, and educational digital divides have lessened compared to the past, some researchers suggest that the digital divide is shifting from a gap in access and connectivity to ICTs to a knowledge divide. A knowledge divide concerning technology presents the possibility that the gap has moved beyond access and the resources to connect to ICTs, to the interpretation and understanding of information presented once connected. Second-level digital divide The second-level digital divide, also referred to as the production gap, describes the gap that separates the consumers of content on the Internet from the producers of content. As the technological digital divide is decreasing between those with access to the Internet and those without, the meaning of the term digital divide is evolving. Previously, digital divide research was focused on accessibility to the Internet and Internet consumption. However, with an increasing number of the population gaining access to the Internet, researchers are examining how people use the Internet to create content and what impact socioeconomics are having on user behavior. New applications have made it possible for anyone with a computer and an Internet connection to be a creator of content, yet the majority of user-generated content available widely on the Internet, like public blogs, is created by a small portion of the Internet-using population.
Web 2.0 technologies like Facebook, YouTube, Twitter, and blogs enable users to participate online and create content without having to understand how the technology actually works, leading to an ever-increasing digital divide between those who have the skills and understanding to interact more fully with the technology and those who are passive consumers of it. Some of the reasons for this production gap include material factors like the type of Internet connection one has and the frequency of access to the Internet. The more frequently a person has access to the Internet and the faster the connection, the more opportunities they have to gain the technology skills and the more time they have to be creative. Other reasons include cultural factors often associated with class and socioeconomic status. Users of lower socioeconomic status are less likely to participate in content creation due to disadvantages in education and lack of the necessary free time for the work involved in blog or website creation and maintenance. Additionally, there is evidence to support the existence of the second-level digital divide at the K-12 level based on how educators use technology for instruction. Schools' economic factors have been found to explain variation in how teachers use technology to promote higher-order thinking skills.
See also Achievement gap Civic opportunity gap Computer technology for developing areas Digital divide by country Digital divide in Canada Digital divide in China Digital divide in South Africa Digital divide in Thailand Digital rights in the Caribbean Digital inclusion Digital rights Global Internet usage Government by algorithm Information society International communication Internet geography Internet governance List of countries by Internet connection speeds Light-weight Linux distribution Literacy National broadband plans from around the world NetDay Net neutrality Rural Internet Satellite internet Starlink Groups devoted to digital divide issues Center for Digital Inclusion Digital Textbook, a South Korean project that intends to distribute tablet notebooks to elementary school students. Inveneo Michelson 20MM Foundation TechChange United Nations Information and Communication Technologies Task Force References Sources Citations The British Museum. "Our Earliest Technology?" Smarthistory. Accessed October 12, 2022. "What Is The Digital Divide and How Is It Being Bridged?" Wise, Jason. "How Many People Own Televisions in 2022? (Ownership Stats)." EarthWeb, September 6, 2022. Statista Research Department. "Internet and Social Media Users in the World 2022." Statista, September 20, 2022. Bibliography Borland, J. (April 13, 1998). "Move Over Megamalls, Cyberspace Is the Great Retailing Equalizer". Knight Ridder/Tribune Business News. James, J. (2004). Information Technology and Development: A new paradigm for delivering the Internet to rural areas in developing countries. New York, NY: Routledge. (print). (e-book). Southwell, B. G. (2013). Social networks and popular understanding of science and health: sharing disparities. Baltimore, MD: Johns Hopkins University Press. (book).
World Summit on the Information Society (WSIS), 2005. "What's the state of ICT access around the world?" Retrieved July 17, 2009. World Summit on the Information Society (WSIS), 2008. "ICTs in Africa: Digital Divide to Digital Opportunity". Retrieved July 17, 2009. Further reading "Falling Through the Net: Defining the Digital Divide" (PDF ), NTIS, U.S. Department of Commerce, July 1999. DiMaggio, P. & Hargittai, E. (2001). "From the "Digital Divide" to 'Digital Inequality': Studying Internet Use as Penetration Increases", Working Paper No. 15, Center for Arts and Cultural Policy Studies, Woodrow Wilson School, Princeton University. Retrieved May 31, 2009. Foulger, D. (2001). "Seven bridges over the global digital divide" . IAMCR & ICA Symposium on Digital Divide, November 2001. Retrieved July 17, 2009. Council of Economic Advisors (2015). Mapping the Digital Divide. "A Nation Online: Entering the Broadband Age", NTIS, U.S. Department of Commerce, September 2004. Rumiany, D. (2007). "Reducing the Global Digital Divide in Sub-Saharan Africa" . Posted on Global Envision with permission from Development Gateway, January 8, 2007. Retrieved July 17, 2009. "Telecom use at the Bottom of the Pyramid 2 (use of telecom services and ICTs in emerging Asia)", LIRNEasia, 2007. "Telecom use at the Bottom of the Pyramid 3 (Mobile2.0 applications, migrant workers in emerging Asia)", LIRNEasia, 2008–09. "São Paulo Special: Bridging Brazil's digital divide", Digital Planet, BBC World Service, October 2, 2008. Graham, M. (2009). "Global Placemark Intensity: The Digital Divide Within Web 2.0 Data", Floatingsheep Blog. Yfantis, V. (2017). Disadvantaged Populations And Technology In Music. External links Digital Inclusion Network, an online exchange on topics related to the digital divide and digital inclusion, E-Democracy.org. 
E-inclusion, an initiative of the European Commission to ensure that "no one is left behind" in enjoying the benefits of Information and Communication Technologies (ICT). eEurope – An information society for all, a political initiative of the European Union. Statistics from the International Telecommunication Union (ITU) Divide Information society Technology development Economic geography Cultural globalization Global inequality Rural economics Social inequality
Digital divide
Technology
9,138
24,154,893
https://en.wikipedia.org/wiki/C14H20N2
{{DISPLAYTITLE:C14H20N2}} The molecular formula C14H20N2 may refer to: Cipralisant Diethyltryptamine, a psychedelic drug 5-Ethyl-DMT N-Methyl-N-isopropyltryptamine N-t-Butyltryptamine Methylpropyltryptamine
C14H20N2
Chemistry
78
12,194,069
https://en.wikipedia.org/wiki/Hibiscadelphus%20wilderianus
Hibiscadelphus wilderianus, also known as the Maui hau kuahiwi, is an extinct species of flowering plant in the family Malvaceae that was endemic to Hawaii. Extinction The plant was endemic to ancient lava fields on the southern slopes of Mount Haleakalā, on Maui, Hawaii. Its forest habitat was devastated by cattle ranchers, and the final tree was found dying in 1912. Today it is believed to be extinct. In 2019 the scent of the flower was recreated using DNA sequenced from a preserved specimen. References Species known from a single specimen wilderianus Extinct flora of Hawaii Endemic flora of Hawaii Biota of Maui Plant extinctions since 1500 Taxonomy articles created by Polbot
Hibiscadelphus wilderianus
Biology
145
12,194,868
https://en.wikipedia.org/wiki/C3H5N
{{DISPLAYTITLE:C3H5N}} The molecular formula C3H5N (molar mass: 55.08 g/mol, exact mass: 55.0422 u) may refer to: 1-Azabicyclo[1.1.0]butane 1-Azetine 2-Azetine Propargylamine (2-propynylamine) Propionitrile (propanenitrile)
C3H5N
Chemistry
102
31,244,986
https://en.wikipedia.org/wiki/Plancherel%20measure
In mathematics, Plancherel measure is a measure defined on the set of irreducible unitary representations of a locally compact group G, that describes how the regular representation breaks up into irreducible unitary representations. In some cases the term Plancherel measure is applied specifically in the context of the group G being the finite symmetric group S_n – see below. It is named after the Swiss mathematician Michel Plancherel for his work in representation theory. Definition for finite groups Let G be a finite group; we denote the set of its irreducible representations by Ĝ. The corresponding Plancherel measure over the set Ĝ is defined by μ(π) = (dim π)^2 / |G|, where π ∈ Ĝ, and dim π denotes the dimension of the irreducible representation π. Definition on the symmetric group An important special case is the case of the finite symmetric group S_n, where n is a positive integer. For this group, the set of irreducible representations is in natural bijection with the set of integer partitions of n. For an irreducible representation associated with an integer partition λ, its dimension is known to be equal to f^λ, the number of standard Young tableaux of shape λ, so in this case Plancherel measure is often thought of as a measure on the set of integer partitions of given order n, given by μ(λ) = (f^λ)^2 / n!. The fact that those probabilities sum up to 1 follows from the combinatorial identity Σ_{λ ⊢ n} (f^λ)^2 = n!, which corresponds to the bijective nature of the Robinson–Schensted correspondence. Application Plancherel measure appears naturally in combinatorial and probabilistic problems, especially in the study of the longest increasing subsequence of a random permutation σ. As a result of its importance in that area, in many current research papers the term Plancherel measure almost exclusively refers to the case of the symmetric group S_n. Connection to longest increasing subsequence Let L(σ) denote the length of a longest increasing subsequence of a random permutation σ in S_n chosen according to the uniform distribution.
Let λ denote the shape of the corresponding Young tableaux related to σ by the Robinson–Schensted correspondence. Then the following identity holds: L(σ) = λ_1, where λ_1 denotes the length of the first row of λ. Furthermore, from the fact that the Robinson–Schensted correspondence is bijective it follows that the distribution of λ is exactly the Plancherel measure on S_n. So, to understand the behavior of L(σ), it is natural to look at λ_1 with λ chosen according to the Plancherel measure in S_n, since these two random variables have the same probability distribution. Poissonized Plancherel measure Plancherel measure is defined on S_n for each integer n. In various studies of the asymptotic behavior of L(σ) as n → ∞, it has proved useful to extend the measure to a measure, called the Poissonized Plancherel measure, on the set of all integer partitions. For any θ > 0, the Poissonized Plancherel measure with parameter θ is defined by μ_θ(λ) = e^(−θ) θ^|λ| (f^λ / |λ|!)^2 for all integer partitions λ. Plancherel growth process The Plancherel growth process is a random sequence of Young diagrams λ^(1), λ^(2), λ^(3), …, such that each λ^(n) is a random Young diagram of order n whose probability distribution is the nth Plancherel measure, and each successive λ^(n) is obtained from its predecessor by the addition of a single box, according to the transition probability p(ν, λ) = f^λ / (n f^ν), for any given Young diagrams ν and λ of sizes n − 1 and n, respectively. So, the Plancherel growth process can be viewed as a natural coupling of the different Plancherel measures of all the symmetric groups, or alternatively as a random walk on Young's lattice. It is not difficult to show that the probability distribution of λ^(n) in this walk coincides with the Plancherel measure on S_n. Compact groups The Plancherel measure for compact groups is similar to that for finite groups, except that the measure need not be finite. The unitary dual is a discrete set of finite-dimensional representations, and the Plancherel measure of an irreducible finite-dimensional representation is proportional to its dimension.
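The symmetric-group formulas above are easy to check numerically. The sketch below (helper names are mine; the formulas are the ones stated in the article) computes f^λ with the hook length formula, verifies that the Plancherel probabilities (f^λ)^2/n! sum to 1, and checks that the growth-process transition probabilities f^λ/(n f^ν) leaving each ν also sum to 1:

```python
from fractions import Fraction
from math import factorial

def partitions(n, max_part=None):
    """Integer partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def num_syt(shape):
    """f^lambda, the number of standard Young tableaux, via the hook length formula."""
    n = sum(shape)
    conj = [sum(1 for row in shape if row > j) for j in range(shape[0])] if shape else []
    hooks = 1
    for i, row in enumerate(shape):
        for j in range(row):
            hooks *= (row - j) + (conj[j] - i) - 1
    return factorial(n) // hooks

def plancherel(n):
    """Plancherel measure on partitions of n: mu(lambda) = (f^lambda)^2 / n!."""
    return {lam: Fraction(num_syt(lam) ** 2, factorial(n)) for lam in partitions(n)}

def add_one_box(nu):
    """Partitions covering nu in Young's lattice (one box added)."""
    out = set()
    for i in range(len(nu) + 1):
        new = list(nu) + [0]
        new[i] += 1
        new = tuple(x for x in new if x > 0)
        if all(new[k] >= new[k + 1] for k in range(len(new) - 1)):
            out.add(new)
    return out

n = 6
mu = plancherel(n)
assert sum(mu.values()) == 1   # sum over lambda of (f^lambda)^2 equals n!

for nu in partitions(n - 1):   # transition probabilities leaving each nu sum to 1
    total = sum(Fraction(num_syt(lam), n * num_syt(nu)) for lam in add_one_box(nu))
    assert total == 1
```

Exact rational arithmetic with `Fraction` makes the two identities hold exactly rather than up to floating-point error; for example, num_syt((2, 1)) returns 2, matching the two standard tableaux of the hook shape (2,1).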
Abelian groups The unitary dual of a locally compact abelian group is another locally compact abelian group, and the Plancherel measure is proportional to the Haar measure of the dual group. Semisimple Lie groups The Plancherel measure for semisimple Lie groups was found by Harish-Chandra. The support is the set of tempered representations, and in particular not all unitary representations need occur in the support. References Representation theory
Plancherel measure
Mathematics
861
387,241
https://en.wikipedia.org/wiki/Bar%20%28music%29
In musical notation, a bar (or measure) is a segment of music bounded by vertical lines, known as bar lines (or barlines), usually indicating one or more recurring beats. The length of the bar, measured by the number of note values it contains, is normally indicated by the time signature. Types of bar lines Regular bar lines consist of a thin vertical line extending from the top line to the bottom line of the staff, sometimes also extending between staves in the case of a grand staff or a family of instruments in an orchestral score. A double bar line (or double bar) consists of two single bar lines drawn close together, separating two sections within a piece, or a bar line followed by a thicker bar line, indicating the end of a piece or movement. Note that double bar refers not to a type of bar (i.e., measure), but to a type of bar line. Typically, a double bar is used when followed by a new key signature, whether or not it marks the beginning of a new section. A repeat sign (or, repeat bar line) looks like the music end, but it has two dots, one above the other, indicating that the section of music that is before is to be repeated. The beginning of the repeated passage can be marked by a begin-repeat sign; if this is absent, the repeat is understood to be from the beginning of the piece or movement. This begin-repeat sign, if appearing at the beginning of a staff, does not act as a bar line because no bar is before it; its only function is to indicate the beginning of the passage to be repeated. A mensurstrich is a bar line which stretches only between staves of a score, not through each staff; this is a specialized notation used by editors of early music to help orient modern musicians when reading music which was originally written without bar lines. 
Lines extending only partway through the staff are rarely used, sometimes to help orient the reader in very long measures in complex time signatures, or as brief section divisions in Gregorian chant notation. Some composers use dashed or dotted bar lines; others (including Hugo Distler) have placed bar lines at different places in the different parts to indicate different stress patterns from part to part. If many consecutive bars contain only rests, they may be replaced by a single bar containing a multirest, as shown. The number above shows the number of bars replaced. Bars and stresses Whether the music contains a regular meter or mixed meters, the first note in the bar (known as the downbeat) is usually stressed slightly in relation to the other notes in the bar. Igor Stravinsky said of bar lines: Bars and bar lines also indicate grouping: rhythmically of beats within and between bars, within and between phrases, and on higher levels such as meter. Numbering bars The first metrically complete bar within a piece of music is called "bar 1" or "m. 1". When the piece begins with an anacrusis (an incomplete bar at the beginning of a piece of music), "bar 1" or "m. 1" is the following bar. Bars contained within first or second endings are numbered consecutively. History The earliest bar lines, used in keyboard and vihuela music in the 15th and 16th centuries, did not reflect a regular meter at all but were only section divisions, or in some cases marked off every beat. Bar lines began to be introduced into ensemble music in the late 16th century but continued to be used irregularly. Not until the mid-17th century were bar lines used in the modern style with every measure being the same length, and they began to be associated with time signatures. Modern editions of early music that was originally notated without bar lines sometimes use a mensurstrich as a compromise.
Hypermeasure A hypermeasure, large-scale or high-level measure, or measure-group is a metric unit in which, generally, each regular measure is one beat (actually hyperbeat) of a larger meter. Thus a beat is to a measure as a measure/hyperbeat is to a hypermeasure. Hypermeasures must be larger than a notated bar, perceived as a unit, consist of a pattern of strong and weak beats, and along with adjacent hypermeasures, which must be of the same length, create a sense of hypermeter. The term was coined by Edward T. Cone in Musical Form and Musical Performance (New York: Norton, 1968), and is similar to the less formal notion of a phrase. See also Bar-line shift Tala (music) Wazn References Further reading Cone, Edward T. (1968). Musical Form and Musical Performance. . Musical notation Rhythm and meter
Bar (music)
Physics
971
5,280,060
https://en.wikipedia.org/wiki/Computer%20Chronicles
Computer Chronicles (also titled as The Computer Chronicles from 1983 to 1989) is an American half-hour television series that was broadcast on PBS public television from 1983 to 2002. It documented and explored the personal computer as it grew from its infancy in the early 1980s to its rise in the global market at the turn of the 21st century. Episodes reviewed a variety of home and business computers, including hardware accessories, software and other consumer computing devices and gadgetry. Often a news-like segment of the show reported on new developments and announcements in the computer industry. A wide range of computing topics were showcased and demonstrated, ranging from business, education, gaming, and digital music creation and editing, to networking and online telecommunication. History and overview The series was created by Stewart Cheifet (later the show's co-host), who was then the station manager of the College of San Mateo's KCSM-TV (now independent non-commercial KPJK). The show was initially broadcast as a local weekly series beginning in 1981. The show was, at various points in its run, produced by KCSM-TV, WITF-TV in Harrisburg, Pennsylvania, and KTEH in San Jose. It became a national series on PBS in 1983, running until 2002, with Cheifet as host. Gary Kildall, founder of the software company Digital Research, served as Cheifet's co-host from 1983 to 1990, providing insights and commentary on products, as well as discussions on the future of the ever-expanding personal computer sphere. After Kildall left the show, Cheifet served as solo host from 1991 onward. After Kildall's death in 1994, the show paid tribute to him in a special episode. Computer Chronicles had several supporting presenters appearing alongside Cheifet, including: George Morrow: Presenter, commentator and occasional co-host who for a time headed the Morrow Design company; Morrow was a well-known face on the Chronicles until the 1990s. Morrow died in 2003.
Paul Schindler: Featured predominantly in software reviews, Schindler contributed to the series until the early 1990s. Tim Bajarin: Author and columnist who appeared on a few of the 1990s episodes as a co-host and contributor. Wendy Woods: Provided reports for many software and hardware products, as well as talking with the main presenters in the studio about specific topics. Janelle Stelson: Presented the news and reviews segment. Jan Lewis: Former president of the Palo Alto Research Group (not to be confused with Xerox PARC), served as both co-host and interviewee throughout the 1980s. Herb Lechner: With SRI International, served as both co-host and interviewee on some of the earliest episodes. Format The Computer Chronicles format remained relatively unchanged throughout its run, except perhaps for the noticeable difference in presenting style; originally formal, with Cheifet and the guests wearing business suits (with neckties) customary in the professional workplace in the early 1980s, it evolved by the 1990s into a more relaxed, casual style, with Cheifet and guests adopting the "business casual" style of dress that the Silicon Valley computer industry arguably helped pioneer. Beginning in 1984, the last five minutes or so featured Random Access, a segment that gave the viewer the latest computer news from the home and business markets. Stewart Cheifet, Janelle Stelson, Maria Gabriel and various other individuals presented the segment. Random Access was discontinued in 1997. The Online Minute, introduced in 1995 and lasting until 1997, gave the viewers certain Web sites that dealt with the episode's topic. It featured Giles Bateman, who designed the show's "Web page" opening sequence that was used from that period up until the show's end. The opening graphics were changed in 1989, and the show was renamed "Computer Chronicles", omitting the word "The".
The graphics were redesigned again in 1995, with the "Web page" graphics designed by Giles Bateman, and redesigned again in 1998 to show clips from the show in a "multiple window" format. The theme tune from 1983 to 1989 was "Byte by Byte" by Craig Palmer for the Network Music Library. From 1990 until the show's end, the theme song was Zenith, composed for OmniMusic by John Manchester. Another feature on the show was Stewart's "Pick of the Week", in which he detailed a popular piece of software or gadget on the market that appealed to him and might appeal to the home audience. From 1994 to 1997, the show was produced by PCTV, based in New Hampshire in cooperation with KCSM-TV. Starting in the fall of 1997 and continuing to its end, the show was produced by KTEH San Jose and Stewart Cheifet Productions. Availability The show ended its run in 2002. Almost all episodes of Computer Chronicles have been made available for free download at the Internet Archive. There is also an unofficial YouTube channel with episodes. Many episodes of the show have been dubbed into other languages, including Arabic, French and Spanish. See also Net Cafe, de facto spin-off of Computer Chronicles co-hosted by Cheifet that aired from 1996 to 2002 WDR Computerclub, similar show in German TV References External links archive.org - Computer Bowl archives Computer Chronicles history and information American non-fiction television series PBS original programming 1981 American television series debuts 1990s American television series 2002 American television series endings Computer television series
Computer Chronicles
Technology
1,110
44,860,596
https://en.wikipedia.org/wiki/Aliflurane
Aliflurane (code name Hoechst Compound 26 or 26-P) is a halocarbon drug which was investigated as an inhalational anesthetic but was never marketed. See also Halopropane Norflurane Roflurane Synthane Teflurane References General anesthetics Cyclopropanes Ethers Organochlorides Organofluorides GABAA receptor positive allosteric modulators Fluranes
Aliflurane
Chemistry
98
24,906,899
https://en.wikipedia.org/wiki/Measurement%20of%20sea%20ice
Measurement of sea ice is important for safety of navigation and for monitoring the environment, particularly the climate. Sea ice extent interacts with large climate patterns such as the North Atlantic oscillation and Atlantic Multidecadal Oscillation, to name just two, and influences climate in the rest of the globe. The amount of sea ice coverage in the Arctic has been of interest for centuries, as the Northwest Passage was of high interest for trade and seafaring. There is a longstanding history of records and measurements of some effects of the sea ice extent, but comprehensive measurements remained sparse until the 1950s, and continuous coverage began with the satellite era in the late 1970s. Modern direct records include data about ice extent, ice area, concentration, thickness, and the age of the ice. The current trends in the records show a significant decline in Northern Hemisphere sea ice and a small but statistically significant increase in the winter Southern Hemisphere sea ice. Furthermore, current research is establishing extensive sets of multi-century historical records of Arctic and subarctic sea ice, drawing on, among other sources, high-resolution paleo-proxy sea-ice records. Arctic sea ice is a dynamic component of the climate system and is linked to the Atlantic multidecadal variability and the historical climate over various decades. Sea ice patterns show cyclical changes, but modeling predictions have so far revealed no clear pattern. Methods of measuring sea ice Early observations Records assembled by Vikings showing the number of weeks per year that ice occurred along the north coast of Iceland date back to A.D. 870, but a more complete record exists since 1600. More extensive written records of Arctic sea ice date back to the mid-18th century. The earliest of those records relate to Northern Hemisphere shipping lanes, but records from that period are sparse. 
Air temperature records dating back to the 1880s can serve as a stand-in (proxy) for Arctic sea ice, but such temperature records were initially collected at only 11 locations. Russia's Arctic and Antarctic Research Institute has compiled ice charts dating back to 1933. Today, scientists studying Arctic sea ice trends can rely on a fairly comprehensive record dating back to 1953, using a combination of satellite records, shipping records, and ice charts from several countries. In the Antarctic, direct data prior to the satellite record are even more sparse. To try to extend the historical record of Southern Hemisphere sea ice extent further back in time, scientists have been investigating various proxies for sea ice extent. One is the records kept by Antarctic whalers, which document the location of every whale caught and relate directly to sea ice observations. Whaling records suggest an abrupt mid-twentieth-century decline in Antarctic sea-ice extent, whereas direct global estimates of the Antarctic sea-ice cover from satellite observations since the 1970s show no clear trend. Because whales tend to congregate near the sea ice edge to feed, their locations can serve as a proxy for the ice extent. Other proxies use the presence of phytoplankton-derived organic compounds and traces of other extremophiles in Antarctic ice cores and sediments. Since phytoplankton grow most abundantly along the edges of the ice pack, the concentration of these sulfur-containing organic compounds and their geochemistry provide indicators of how far the ice edge extended from the continent. Further extensive sets of multicentury historical records of Arctic and subarctic sea ice are being assembled, drawing on, among other sources, high-resolution paleo-proxy sea-ice records. Satellites Useful satellite data concerning sea ice began in December 1972 with the Electrically Scanning Microwave Radiometer (ESMR) instrument. 
However, this was not directly comparable with the later SMMR/SSMI, and so the practical record begins in late 1978 with the launch of NASA's Scanning Multichannel Microwave Radiometer (SMMR) satellite, and continues with the Special Sensor Microwave/Imager (SSMI). The Advanced Microwave Scanning Radiometer (AMSR) and Cryosat-2 provide separate records. Since 1979, satellites have provided a consistent continuous record of sea ice. However, the record relies on stitching together measurements from a series of different satellite-borne instruments, which can lead to errors associated with intercalibration across sensor changes. Satellite images of sea ice are made from observations of microwave energy radiated from the Earth's surface. Because ocean water emits microwaves differently from sea ice, ice "looks" different from water to the satellite sensor—see sea ice emissivity modelling. The observations are processed into digital picture elements, or pixels. Each pixel represents a square surface area on Earth. The first instruments provided a resolution of approximately 25 kilometers by 25 kilometers; later instruments provide higher resolution. Algorithms examine the microwave emissions, and their vertical and horizontal polarisations, and estimate the ice area. Sea ice may be considered in terms of total volume, or in terms of areal coverage. Estimates of ice volume are harder to obtain as they require a knowledge of the ice thickness, which is complicated to measure directly; efforts such as PIOMAS use a combination of observations and modelling to estimate total volume. There are two ways to express the total polar ice cover: ice area and ice extent. To estimate ice area, scientists calculate the percentage of sea ice in each pixel, multiply by the pixel area, and total the amounts. To estimate ice extent, scientists set a threshold percentage and count every pixel meeting or exceeding that threshold as "ice-covered." The common threshold is 15%. 
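The pixel-based area and extent calculations described above can be sketched as follows. The grid values, pixel size, and variable names are illustrative assumptions for a small hypothetical scene, not data from any actual sensor.

```python
import numpy as np

# Hypothetical 3x3 grid of per-pixel sea ice concentrations (fractions 0..1),
# each pixel covering a nominal 25 km x 25 km footprint (an assumption,
# matching the approximate resolution of the early instruments).
concentration = np.array([
    [0.90, 0.60, 0.10],
    [0.75, 0.20, 0.05],
    [0.14, 0.16, 0.00],
])
PIXEL_AREA_KM2 = 25 * 25      # 625 km^2 per pixel
EXTENT_THRESHOLD = 0.15       # the common 15% cutoff

# Ice area: sum of (concentration x pixel area) over all pixels.
ice_area = float(concentration.sum() * PIXEL_AREA_KM2)

# Ice extent: total area of pixels at or above the threshold,
# each counted as fully "ice-covered".
ice_extent = float((concentration >= EXTENT_THRESHOLD).sum() * PIXEL_AREA_KM2)

print(f"ice area   = {ice_area:.1f} km^2")
print(f"ice extent = {ice_extent:.1f} km^2")
```

Note how extent exceeds area here: every pixel above the cutoff contributes its full footprint to extent, which is exactly the consistency trade-off discussed in the text.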
The threshold-based approach may seem less accurate, but it has the advantage of being more consistent. When scientists are analyzing satellite data, it is easier to say whether there is or is not at least 15% ice cover in a pixel than it is to say, for example, whether the ice cover is 70 percent or 75 percent. By reducing the uncertainty in the amount of ice, scientists can be more certain that changes in sea ice cover over time are real. A careful analysis of satellite radar altimetry echoes can distinguish between those backscattered from the open ocean, new ice, or multi-year ice. The difference between the elevation of the echoes from snow/sea ice and open water gives the elevation of the ice above the ocean; the ice thickness can be computed from this. The technique has a limited vertical resolution and is easily confused by the presence of even small amounts of open water. Hence it has mostly been used in the Arctic, where the ice is thicker and more continuous. Recent advances have led to the development of new experimental sea-ice thickness products from satellite radar altimetry during the Arctic melt season. Submarines Starting in 1958, U.S. Navy submarines collected upward-looking sonar profiles, for navigation and defense, and converted the information into estimates of ice thickness. Data from U.S. and Royal Navy submarines available from the NSIDC include maps showing submarine tracks. Data are provided as ice draft profiles and as statistics derived from the profile data. Statistics files include information concerning ice draft characteristics, keels, level ice, leads, and undeformed and deformed ice. Buoys Buoys are placed on the ice to measure ice properties and weather conditions by the participants of the International Arctic Buoy Programme and its sister, the International Programme for Antarctic Buoys. 
Buoys can have sensors to measure air temperature, atmospheric pressure, snow and ice thickness, snow and ice temperature, ocean currents, sea ice motion, sea level pressure, sea surface temperature and salinity, skin temperature, surface winds, water temperature, and longwave and shortwave radiation. Ice mass balance (IMB) buoys measure air, snow, ice and seawater in situ temperature, and temperature after internal heating cycles. Such heating cycles allow more accurate identification of snow-ice and ice-water interfaces. Temperature buoys allow estimation of conductive, latent and ocean heat fluxes for undeformed ice and for pressure ridges. Upward looking sonar Upward looking sonar (ULS) devices can be deployed under polar ice over a period of months or even years, and can provide a complete profile of ice thickness for a single site. Sonars directly measure sea ice draft, so an accurate estimate of sea ice thickness requires knowledge of snow thickness and of snow and sea-ice densities. The accuracy of sonar measurements also depends on the salinity of the seawater between the sonar and the sea ice, and many sonar installations therefore also include a CTD and an ADCP. Upward looking or multibeam sonars can also be mounted on remotely operated underwater vehicles (ROVs) to survey sea ice draft over areas several hundred meters across and over several months. Auxiliary observations Auxiliary observations of sea ice are made from shore stations, ships, and aircraft. Although in recent years remotely sensed data have come to play a major role in sea ice analysis, it is not yet possible to compile a complete and accurate picture of sea ice conditions from this data source alone. Auxiliary sea ice observations play a major role in confirming remotely sensed ice information or providing important corrections to the overall picture of ice conditions. The most important auxiliary sea ice observation is the location of the ice edge. 
Its value reflects both the importance of the ice edge location in general and the difficulty of accurately locating the ice edge with remotely sensed data. It is also useful to provide a description of the ice edge in terms of indications of freezing or thawing, wind-driven advance or retreat, and compactness or diffuseness. Other important auxiliary information includes the location of icebergs, floebergs, ice islands, old ice, ridging and hummocking. These ice features are poorly monitored by remote sensing techniques but are very important aspects of the ice cover. Types of measurements Sea ice extent Sea ice extent is the area of sea with a specified amount of ice, usually at least 15%. To satellite microwave sensors, surface melt appears to be open water rather than water on top of sea ice. So, while reliable for measuring area most of the year, the microwave sensors are prone to underestimating the actual ice concentration and area when the surface is melting. Sea ice area To estimate ice area, scientists calculate the percentage of sea ice in each pixel, multiply by the pixel area, and total the amounts. To estimate ice extent, scientists set a threshold percentage and count every pixel meeting or exceeding that threshold as "ice-covered." The National Snow and Ice Data Center, one of NASA's Distributed Active Archive Centers, monitors sea ice extent using a threshold of 15 percent. Sea ice concentration Sea ice concentration is the percentage of an area that is covered with sea ice. Sea ice thickness Sea ice thickness decreases as the ice melts and increases when winds and currents push the ice together. The European Space Agency's Cryosat-2 satellite was launched in April 2010 on a quest to map the thickness and shape of the Earth's polar ice cover. Its single instrument, a SAR/Interferometric Radar Altimeter, is able to measure the sea ice freeboard. 
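Converting a measured freeboard into an ice thickness relies on assuming the floe is in hydrostatic equilibrium. The sketch below shows that conversion; the density values and the example inputs are nominal illustrative assumptions, not values given in this article.

```python
# Nominal densities in kg/m^3 (assumed typical values, for illustration only).
RHO_WATER = 1024.0  # sea water
RHO_ICE = 915.0     # sea ice
RHO_SNOW = 320.0    # snow

def thickness_from_freeboard(freeboard_m: float, snow_depth_m: float) -> float:
    """Estimate ice thickness from ice freeboard and snow depth.

    Hydrostatic balance of a floating floe:
        rho_ice * h_ice + rho_snow * h_snow = rho_water * (h_ice - freeboard)
    Solving for h_ice:
        h_ice = (rho_water * freeboard + rho_snow * h_snow) / (rho_water - rho_ice)
    """
    return (RHO_WATER * freeboard_m + RHO_SNOW * snow_depth_m) / (RHO_WATER - RHO_ICE)

# Example: a 0.30 m ice freeboard under 0.20 m of snow.
h = thickness_from_freeboard(0.30, 0.20)
print(f"estimated ice thickness: {h:.2f} m")
```

The small density contrast between sea ice and water in the denominator is why the technique is so sensitive to errors: a few centimeters of freeboard error translate into tens of centimeters of thickness error, consistent with the limited vertical resolution noted in the text.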
Sea ice age The age of the ice is another key descriptor of the state of the sea ice cover, since older ice tends to be thicker and more resilient than younger ice. Sea ice rejects salt over time and becomes less salty, resulting in a higher melting point. A simple two-stage approach classifies sea ice into first-year and multiyear ice. First-year ice has not yet survived a summer melt season, while multiyear ice has survived at least one summer and can be several years old. See sea ice growth processes. Sea ice mass balance Sea ice mass balance is the balance of how much the ice grows in the winter and melts in the summer. For Arctic sea ice, virtually all of the growth occurs on the bottom of the ice. Melting occurs on both the top and the bottom of the ice. In the vast majority of cases all of the snow melts during the summer, typically in just a couple of weeks. The mass balance is a powerful concept since it is the great integrator of the heat budget. If there is a net increase of heat, then the ice will thin. A net cooling will result in thicker ice. Making direct measurements of the mass balance is simple. An array of stakes and thickness gauges is used to measure ablation and accumulation of ice and snow at the top and bottom of the ice cover. In spite of the importance of mass balance measurements and the relatively simple equipment involved in making them, there are few observational results. This is due, in large part, to the expense involved in operating a long-term field camp to serve as the base for these studies. Sea ice volume There are no Arctic-wide or Antarctic-wide measurements of the volume of sea ice, but the volume of the Arctic sea ice is calculated using the Pan-Arctic Ice Ocean Modeling and Assimilation System (PIOMAS), developed at the University of Washington Applied Physics Laboratory/Polar Science Center. PIOMAS blends satellite-observed sea ice concentrations into model calculations to estimate sea ice thickness and volume. 
Comparison with submarine, mooring, and satellite observations helps increase confidence in the model results. ICESat was a laser-altimeter-equipped satellite that could measure the freeboard of ice floes. Its active service period was from February 2003 to October 2009. Together with a set of auxiliary data, such as ice density, snow cover thickness, air pressure and water salinity, one can calculate the floe thickness and thus its volume. Its data have been compared with the respective PIOMAS data and reasonable agreement has been found. Cryosat-2, launched in April 2010, has the ability to measure the freeboard of ice floes, just like ICESat, except that it uses radar instead of laser pulses. Its data are likewise combined with the PIOMAS model. Trends in the data Reliable and consistent records for all seasons are only available during the satellite era, from 1979 onwards. Northern hemisphere According to scientific measurements, both the thickness and extent of summer sea ice in the Arctic have shown a dramatic decline over the past thirty years. Southern hemisphere Records before the satellite era are sparse. In a 1997 paper, Abrupt mid-twentieth-century decline in Antarctic sea-ice extent from whaling records, William K. de la Mare found a southward shift in the ice edge based on whaling records; these findings have been questioned, but later papers by de la Mare and by Cotte support the same conclusion. The satellite-derived Antarctic sea ice trends show a pronounced increase in the central Pacific sector by ~4–10% per decade and a decrease in the Bellingshausen/western Weddell sector with similar percentages but lower extent. The former is closely connected to the Antarctic Oscillation, while the latter reflects impacts of positive polarities of the El Niño–Southern Oscillation (ENSO). 
The magnitude of the ice changes associated with the AAO and ENSO is smaller than the regional ice trends, and local (or less well understood large-scale) processes still need to be investigated for a complete explanation. Use of 1981 to 2010 as a baseline Scientists use the 1981 to 2010 average because it provides a consistent baseline for year-to-year comparisons of sea ice extent. Thirty years is considered a standard baseline period for weather and climate, and the satellite record is now long enough to provide a thirty-year baseline period. See also Polar amplification Polar vortex References External links National Snow and Ice Data Center Arctic Sea Ice News & Analysis Earth phenomena Aquatic ecology Articles containing video clips
Measurement of sea ice
Physics,Biology
3,132
53,980,637
https://en.wikipedia.org/wiki/List%20of%20investigational%20antidepressants
This is a list of investigational antidepressants, or drugs that are currently under development for clinical use in the treatment of depression but are not yet approved. Specific indications include major depressive disorder, treatment-resistant depression, dysthymia, bipolar depression, and postpartum depression, among others. Chemical/generic names are listed first, with developmental code names, synonyms, and brand names in parentheses. This list was last comprehensively updated in August 2024. It is likely to become outdated with time. Under development Preregistration Buprenorphine/samidorphan (ALKS-5461) – μ-opioid receptor partial agonist, κ-opioid receptor antagonist, δ-opioid receptor antagonist, and μ-opioid receptor antagonist combination – New Drug Application (NDA) rejected in 2019, no updates since 2021 Phase 3 Aticaprant (AVTX-501; CERC-501; JNJ-3964; JNJ-67953964; LY-2456302) – κ-opioid receptor antagonist CYB003 (CYB-003; deuterated psilocybin analogue) – serotonin 5-HT2A receptor agonist and psychedelic hallucinogen Cycloserine/lurasidone (Cyclurad; NRX-101) – ionotropic glutamate NMDA receptor glycine site partial agonist and atypical antipsychotic (non-selective monoamine receptor modulator) combination Esmethadone (dextromethadone; REL-1017) – ionotropic glutamate NMDA receptor antagonist and other actions Ketamine (HTX-100; NRX-100) – ionotropic glutamate NMDA receptor antagonist Navacaprant (BTRX-140; BTRX-335140; CYM-53093; NMRA-140; NMRA-335140) – κ-opioid receptor antagonist Pimavanserin (ACP-103; BVF-048; Nuplazid) – serotonin 5-HT2A receptor antagonist or inverse agonist Psilocybin (COMP-360) – non-selective serotonin receptor agonist and psychedelic hallucinogen Seltorexant (JNJ-42847922; JNJ-7922; MIN-202) – orexin OX2 receptor antagonist SEP-4199 (non-racemic amisulpride; aramisulpride/esamisulpride [85:15 ratio]) – atypical antipsychotic (dopamine D2 and D3 receptor antagonist and serotonin 5-HT2B and 5-HT7 receptor antagonist) 
SNG-12 (Synapsinae) – glycine transporter 1 (GlyT1) inhibitor Solriamfetol (Sunosi; JZP-110; SKLN05; ARL-N05; YKP-10A; R-228060; ADX-N05) – norepinephrine and dopamine reuptake inhibitor and trace amine-associated receptor 1 (TAAR1) agonist Ulotaront (SEP-363856; SEP-856) – serotonin 5-HT1A receptor agonist and trace amine-associated receptor 1 (TAAR1) agonist Phase 2/3 PRAX-114 – extrasynaptic GABAA receptor-preferring positive allosteric modulator and neurosteroid Phase 2 4-Chlorokynurenine (4-CL-KYN; AV-101) – ionotropic glutamate NMDA receptor glycine site antagonist and kynurenine modulator Ademetionine (MSI-190; MSI-195; S-Adenosyl-L-Methionine; Sam-E; SAMe; Strada) – cofactor in monoamine neurotransmitter biosynthesis ALTO-100 (NSI-189) – unknown mechanism of action (hippocampal neurogenesis stimulant and indirect brain-derived neurotrophic factor (BDNF) modulator) ALTO-203 – histamine H3 receptor agonist Apimostinel (AGN-241660; GATE-202; NRX-1074) – ionotropic glutamate NMDA receptor glycine site partial agonist Arketamine ((R)-ketamine; PCN-101) – ionotropic glutamate NMDA receptor antagonist Azetukalner (1OP-2198; Encukalner; VRX-621698; XEN-1101; XPF-008) – KCNQ potassium channel agonist BHV-7000 (BPN-25203; KB-3061) – KCNQ2 potassium channel stimulant BI-1358894 – transient receptor potential cation TRPC4 and TRPC5 channel inhibitor Brezivaptan (ANC-501; THY-1773; TS-1211; TS-121) – vasopressin V1b receptor antagonist Centanafadine (CTN; EB-1020) – serotonin, norepinephrine, and dopamine reuptake inhibitor Deudextromethorphan/quinidine (AVP-786; CTP-786; d-DM/Q; d6-DM/Q; deuterated dextromethorphan/ultra-low-dose quinidine) – sigma σ1 receptor agonist, serotonin reuptake inhibitor, ionotropic glutamate NMDA receptor antagonist, other actions, and CYP2D6 inhibitor combination Dimethyltryptamine (N,N-Dimethyltryptamine; DMT; N,N-DMT; BMND-01; BMND-02; BMND-03) – non-selective serotonin receptor agonist and psychedelic hallucinogen Emestedastat (UE-2343; 
Xanamem) – 11β-hydroxysteroid dehydrogenase type 1 (11β-HSD1) inhibitor (glucocorticoid synthesis inhibitor) Esketamine (CLE-100) – ionotropic glutamate NMDA receptor antagonist FKB-01MD (FKB01MD; TGBA-01AD; TGBA01AD) – serotonin reuptake inhibitor, serotonin 5-HT1A receptor agonist, serotonin 5-HT1D receptor modulator, and serotonin 5-HT2 receptor agonist GM-1020 – ionotropic glutamate NMDA receptor antagonist GM-2505 – serotonin 5-HT2A and 5-HT2C receptor agonist, psychedelic hallucinogen, and serotonin releasing agent Itruvone (PH-10; PH-10A; PH10-NS) – chemoreceptor cell stimulant, vomeropherine, and neurosteroid JNJ-54175446 (JNJ-5446) – purinergic P2X7 receptor antagonist JNJ-55308942 – purinergic P2X7 receptor antagonist Ketamine (extended-release; R-107; R107) – ionotropic glutamate NMDA receptor antagonist Ketamine (intranasal; Ereska; PMI-100; PMI-150; SLS-002; TUR-002) – ionotropic glutamate NMDA receptor antagonist Ketamine (prolonged-release oral; KET-01) – ionotropic glutamate NMDA receptor antagonist Liafensine (BMS-820836; DB-104) – serotonin, norepinephrine, and dopamine reuptake inhibitor Lisdexamfetamine (Vyvanse) – norepinephrine and dopamine releasing agent Lysergic acid diethylamide (LSD; MB-22001) – non-selective serotonin receptor agonist and psychedelic hallucinogen Mebufotenin (5-MeO-DMT; 5-Methoxy-N,N-Dimethyltryptamine; BPL-002; BPL-003) – non-selective serotonin receptor agonist and psychedelic hallucinogen Mifepristone (Mifeprex; RU-38486; RU-486) – progesterone receptor antagonist, glucocorticoid receptor antagonist, and androgen receptor antagonist NBI-1070770 – NR2B subunit-containing ionotropic glutamate NMDA receptor negative allosteric modulator NORA-520 (brexanolone prodrug) – GABAA receptor positive allosteric modulator and neurosteroid Onabotulinum toxin A (BoNTA; Botox; Botulinum toxin A injectable; GSK-1358820; OnabotA X; onabotulinumtoxinA X; Vistabel; Vistabex) – acetylcholine release inhibitor and neuromuscular blocking 
agent Onfasprodil (CAD-9271; MIJ-821; MIJ821) – NR2B subunit-containing ionotropic glutamate NMDA receptor negative allosteric modulator OPC-64005 – serotonin, norepinephrine, and dopamine reuptake inhibitor Osavampator (TAK-653; NBI-1065845; NBI-845) – ionotropic glutamate AMPA receptor positive allosteric modulator OSU-6162 (PNU-9639; PNU-96391; PNU-96391A) – serotonin 5-HT2A receptor partial agonist (non-hallucinogenic), dopamine D2 receptor partial agonist, and sigma σ1 receptor ligand (so-called "monoaminergic stabilizer") PDC-1421 (ABV-1504; ABV-1505; ABV-1601; BLI-1005) – norepinephrine reuptake inhibitor Pegipanermin (DN-TNF; INB-03; LIVNate; Quellor; XENP-1595; XENP-345; XPro-1595; XPro595; XPro) – tumor necrosis factor α (TNFα) inhibitor Pramipexole (CTC-413; CTC-501) – dopamine D2, D3, and D4 receptor agonist Pregnenolone methyl ether (3β-methoxypregnenolone; MAP-4343) – microtubule-associated protein (MAP) stimulant and tubulin polymerization promoter Ralmitaront (RG-7906; RO-6889450) – trace amine-associated receptor 1 (TAAR1) agonist RE-104 (FT-104; 4-HO-DiPT/iprocin prodrug) – serotonin 5-HT2A receptor agonist and psychedelic hallucinogen Rislenemdaz (AVTX-301; CERC-301; MK-0657) – NR2B subunit-containing ionotropic glutamate NMDA receptor negative allosteric modulator Ropanicant (SUVN-911) – α4β2 nicotinic acetylcholine receptor antagonist SP-624 – sirtuin 6 (SIRT6) stimulant SPN-820 (NV-5138; SPN-821) – sestrin2 modulator and mammalian target of rapamycin complex 1 (mTORC1) stimulant Tebideutorexant (JNJ-61393215; JNJ-3215) – orexin OX1 receptor antagonist Tildacerfont (SPR-001; LY-2371712) – corticotropin releasing factor receptor 1 (CRF1) antagonist Tramadol (controlled-release; ETS-6103; ETX-6103; Viotra) – μ-opioid receptor agonist, serotonin and norepinephrine reuptake inhibitor, serotonin 5-HT2C receptor antagonist, and other actions TS-161 – metabotropic glutamate mGlu2 and mGlu3 receptor antagonist Zelquistinel (AGN-241751; GATE-251) – 
ionotropic glutamate NMDA receptor partial positive allosteric modulator Phase 1/2 PT-00114 (PT100114) – corticotropin releasing hormone (CRH) inhibitor Phase 1 ABX-002 (LL-340001 prodrug) – thyroid hormone receptor β (TRβ) agonist Agomelatine (ALTO-300; agomelatine 25mg formulation) – serotonin 5-HT2C receptor antagonist and melatonin MT1 and MT2 receptor agonist BI-1569912 – NR2B subunit-containing ionotropic glutamate NMDA receptor negative allosteric modulator BRII-296 (extended-release injectable aqueous suspension formulation of brexanolone) – GABAA receptor positive allosteric modulator and neurosteroid Brilaroxazine (RP-5000; RP-5063) – atypical antipsychotic (non-selective monoamine receptor modulator) Carbidopa/oxitriptan (EVX-101) – serotonin precursor (5-hydroxytryptophan; 5-HTP) and aromatic L-amino acid decarboxylase (AAAD) inhibitor combination Crisdesalazine (AAD-2004) – microsomal prostaglandin E2 synthase-1 (mPGES-1) inhibitor DGX-001 – gut–brain axis modulator Dimethyltryptamine (N,N-dimethyltryptamine; DMT; N,N-DMT; VLS-01) – non-selective serotonin receptor agonist and psychedelic hallucinogen DLX-001 (DLX-1; AAZ; AAZ-A-154) – non-hallucinogenic serotonin 5-HT2A receptor agonist DSP-3456 – metabotropic glutamate mGlu2 and mGlu3 receptor negative allosteric modulator Ebselen (DR-3305; Harmokisane; PZ-51; SPI-1005; SPI-3005) – multiple mechanisms of action Icalcaprant (CVL-354) – κ-opioid receptor antagonist KAR-2618 (GFB-887) – transient receptor potential cation TRPC4 and TRPC5 channel inhibitor PIPE-307 – muscarinic acetylcholine M1 receptor antagonist SAL-0114 – undefined mechanism of action Scopolamine (DPI-385-CVS; DPI-386; DPI-386 Nasal Gel; DPI-386-SG; DPI-386-SS; DPI-387; DPI-521-CG; DPI-550-TBI; INSCOP spray) – non-selective muscarinic acetylcholine receptor antagonist Traneurocin (NA-831; Cycloprolylglycine; CPG) – unknown or undefined / ionotropic glutamate AMPA receptor positive allosteric modulator, GABAA receptor positive 
allosteric modulator, and racetam-like drug XW-10508 (oral esketamine conjugate prodrug) – ionotropic glutamate NMDA receptor antagonist Preclinical 2-Bromo-LSD (bromolysergide; BETR-001, TD-0148A) – non-hallucinogenic serotonin 5-HT2A receptor agonist and other actions ACD856 (ACD-856) – tropomyosin receptor kinase TrkA, TrkB, and TrkC positive allosteric modulator ALTO-202 – NR2B subunit-containing ionotropic glutamate NMDA receptor antagonist Brexpiprazole (long-acting injectable; MTD-211) – atypical antipsychotic (non-selective monoamine receptor modulator) CB-03 (CB-04; CB-003; CB03-154) – KCNQ2 and KCNQ3 potassium channel stimulant CRHR1 antagonist therapeutic (HMNC Brain Health) – corticotropin releasing factor receptor 1 (CRF1) antagonist Duloxetine (oral suspension) – serotonin and norepinephrine reuptake inhibitor ENX-104 – presynaptic dopamine D2 and D3 autoreceptor antagonist (at low doses) ENX-105 – dopamine D2 and D3 receptor antagonist and serotonin 5-HT1A and 5-HT2A receptor agonist (non-hallucinogenic) Etifoxine deuterated (GRX-917) – GABAA receptor positive allosteric modulator and translocator protein (TSPO; peripheral benzodiazepine receptor) agonist (neurosteroidogenesis stimulant) GABA positive allosteric modulator (CS Bay Therapeutics) – GABAA receptor positive allosteric modulator INV-88 – macrophage migration inhibitory factor (MIF) inhibitor ITI-333 – serotonin 5-HT2A receptor antagonist, μ-opioid receptor biased partial agonist, α1A-adrenergic receptor antagonist, and dopamine D1 receptor antagonist ITI-1549 – non-hallucinogenic serotonin 5-HT2A receptor agonist and serotonin 5-HT2B receptor antagonist Ketamine (depot; ALA-3000) – ionotropic glutamate NMDA receptor antagonist Lithium cocrystal (AL-001; LiProSal; lithium salicylate L-proline ionic cocrystal) – unknown mechanism of action and mood stabilizer (improved formulation of lithium) LPH-5 – selective serotonin 5-HT2A receptor partial agonist and psychedelic hallucinogen LPCN-1154 
(LPCN1154; oral brexanolone) – GABAA receptor positive allosteric modulator and neurosteroid Lucid-PSYCH (Lucid-201) – undefined mechanism of action and psychedelic hallucinogen Mebufotenin (5-MeO-DMT; 5-methoxy-N,N-dimethyltryptamine; LSR-1019) – non-selective serotonin receptor agonist and psychedelic hallucinogen Midomafetamine (microneedle transdermal patch; 3,4-methylenedioxymethamphetamine; MDMA) – serotonin, norepinephrine, and dopamine releasing agent and weak serotonin 5-HT2A, 5-HT2B, and 5-HT2C receptor agonist (entactogen and weak psychedelic hallucinogen) Nezavist – GABAA receptor positive allosteric modulator NLX-101 (F-15599) – serotonin 5-HT1A receptor full agonist PSIL-001 (DMT analogue) – serotonin 5-HT1 receptor modulator (non-hallucinogenic) PSIL-002 (DMT analogue) – serotonin 5-HT1 receptor modulator (non-hallucinogenic) Psylo-4001 – serotonin 5-HT2A receptor agonist and psychedelic hallucinogen SYT-510 – endocannabinoid reuptake inhibitor TF-0066 – undefined mechanism of action Research BHV-5000 – low-trapping ionotropic glutamate NMDA receptor antagonist NP-10679 – NR2B subunit-containing ionotropic glutamate NMDA receptor negative allosteric modulator Psilocybin (MYCO-001; MYCO-003) – non-selective serotonin receptor agonist and psychedelic hallucinogen Small molecule therapeutic - Rugen Therapeutics – undefined mechanism of action Phase unknown Amuxetine – serotonin, norepinephrine, and dopamine reuptake inhibitor EDG-005 – undefined mechanism of action EDG-006 – undefined mechanism of action Iloperidone (Fanapt; Fanaptum; Fiapta; HP-873; ILO-522; VYV-683; Zomaril) – atypical antipsychotic (non-selective monoamine receptor modulator) INV-407 – undefined mechanism of action Ketamine (intravenous/oral; Braxia) – ionotropic glutamate NMDA receptor antagonist SK-2110 (buprenorphine implant) – μ-opioid receptor partial agonist, κ-opioid receptor antagonist, and δ-opioid receptor antagonist – under development in China Venlafaxine 
(controlled-release) – serotonin and norepinephrine reuptake inhibitor Not under development Development suspended BVF-045 (bupropion/undisclosed serotonin reuptake inhibitor) – norepinephrine and dopamine reuptake inhibitor, nicotinic acetylcholine receptor negative allosteric modulator, and serotonin reuptake inhibitor combination Dexmedetomidine (BXCL-501; Igalmi; KalmPen) – α2-adrenergic receptor agonist ETX-155 – GABAA receptor positive allosteric modulator Ganaxolone (CCD-1042; Ztalmy) – GABAA receptor positive allosteric modulator and neurosteroid No development reported AAG-561 – corticotropin releasing hormone (CRH) inhibitor Adinazolam (Deracyn; U-41123; U-41123F) – GABAA receptor positive allosteric modulator and benzodiazepine Amitifadine (DOV-21947; EB-1010) – serotonin, norepinephrine, and dopamine reuptake inhibitor AN-788 (NSD-788) – serotonin and dopamine reuptake inhibitor ANAVEX 1-41 (blarcamesine analogue) – sigma σ1 receptor agonist, muscarinic acetylcholine receptor modulator, and sodium and chloride channel modulator Aripiprazole (transdermal; AQS-1301) – atypical antipsychotic (non-selective monoamine receptor modulator) Arketamine (HR-071603; (R)-ketamine nasal spray) – ionotropic glutamate NMDA receptor antagonist AZD-8108 – ionotropic glutamate NMDA receptor antagonist BCI-632 – metabotropic glutamate mGlu2 and mGlu3 receptor antagonist BCI-838 – metabotropic glutamate mGlu2 and mGlu3 receptor antagonist BMS-866949 (CSTI-500) – serotonin, norepinephrine, and dopamine reuptake inhibitor BNC-210 (IW-2143) – α7 nicotinic acetylcholine receptor negative allosteric modulator Bryostatin 1 (MW-904) – protein kinase C (PKC) stimulant BTRX-246040 (LY-2940094) – nociceptin receptor antagonist Bupropion/naltrexone (Contrave) – norepinephrine and dopamine reuptake inhibitor, nicotinic acetylcholine receptor negative allosteric modulator, and μ-opioid receptor antagonist combination Cericlamine (JO-1017) – serotonin reuptake inhibitor 
Citalopram/pipamperone (PipCit; PNB-01) – serotonin reuptake inhibitor and typical antipsychotic combination Depression therapy - Genopia Biomedical – undefined mechanism of action (R)-Desmethylsibutramine ((+)-desmethylsibutramine; or (R)-desmethylsibutramine/(+)-didesmethylsibutramine) – serotonin, norepinephrine, and dopamine reuptake inhibitor Dipraglurant (ADX-48621; mGluR5-NAM) – metabotropic glutamate mGlu5 receptor negative allosteric modulator Erteberel (LY-500307; SERBA-1) – estrogen receptor β (ERβ) agonist Esketamine (esketamine DPI; Falkieri; PG061; S-ketamine) – ionotropic glutamate NMDA receptor antagonist Eszopiclone (Lunesta) – GABAA receptor positive allosteric modulator and Z-drug EVT-101 (ENS-101) – NR2B subunit-containing ionotropic glutamate NMDA receptor negative allosteric modulator Fananserin (RP-62203) – serotonin 5-HT2A receptor antagonist and dopamine D4 receptor antagonist Fibroblast growth factor 1 (FGF-1) – fibroblast growth factor receptor (FGFR) agonist Filorexant (MK-6096) – orexin OX1 and OX2 receptor antagonist GEA-857 (alaproclate analogue) – potassium conductance putative blocker GSK-588045 – serotonin 5-HT1A, 5-HT1B, and 5-HT1D receptor antagonist GSK-1360707 – serotonin, norepinephrine, and dopamine reuptake inhibitor HS-10353 – GABAA receptor positive allosteric modulator Hypidone – serotonin reuptake inhibitor and serotonin 5-HT1A and 5-HT6 receptor agonist Igmesine (CI-1019; JO-1784) – sigma σ1 receptor agonist Imiloxan (RS-21361) – α2-adrenergic receptor antagonist IN-ASTR-001 – undefined mechanism of action Ketamine (transdermal patch; SHX-001) – ionotropic glutamate NMDA receptor antagonist Ketamine (sublingual; ketamine wafer; SLS-003; Wafermine) – ionotropic glutamate NMDA receptor antagonist KFM-19 – adenosine A1 receptor antagonist LSM-6 (3-hydroxy-N,N-dimethylphenethylamine) – undefined mechanism of action (adrenergic and serotonergic agent; constituent of Limacia scanden Lour.) 
– was under development in Malaysia Lumateperone deuterated (ITI-1284) – atypical antipsychotic (non-selective monoamine receptor modulator) Midomafetamine (3,4-methylenedioxymethamphetamine; MDMA; ecstasy) – serotonin, norepinephrine, and dopamine releasing agent and weak serotonin 5-HT2A, 5-HT2B, and 5-HT2C receptor agonist (entactogen and weak psychedelic hallucinogen) Mitizodone (HEC-113995) – serotonin reuptake inhibitor and serotonin 5-HT1A and 5-HT1B receptor partial agonist Nivacortol (nivazole; NEBO-174; novozola) – glucocorticoid receptor antagonist Omiloxetine – serotonin reuptake inhibitor Oxitriptan (5-hydroxytryptophan; 5-HTP; EVX-301) – serotonin precursor Pseudohypericin – undefined mechanism of action (constituent of St John's wort) Psilocybin (CYB-001; INT0052/2020) – non-selective serotonin receptor agonist and psychedelic hallucinogen Psilocybin (biosynthetic psilocybin; PB-1818) – non-selective serotonin receptor agonist and psychedelic hallucinogen QRX-002 – ionotropic glutamate NMDA receptor antagonist RG-7351 – trace amine-associated receptor 1 (TAAR1) agonist Riluzole (sublingual; BHV-0223; Nurtec) – complex mechanism of action or glutamatergic modulator Risperidone (Risperdal) – atypical antipsychotic (non-selective monoamine receptor modulator) SAR-102779 (SAR-10279) – neurokinin NK2 receptor antagonist SD-254 (deuterated venlafaxine) – serotonin and norepinephrine reuptake inhibitor SEP-378614 – undefined mechanism of action SNA-1 – undefined mechanism of action SPL-801-B ((2R,6R)-hydroxynorketamine; 6-HNK) – non-hallucinogenic ketamine derivative/metabolite TrkB receptor antagonist (Celon Pharma) – tropomyosin receptor kinase TrkB antagonist YDP-2225 – undefined mechanism of action Development discontinued ABT-436 – vasopressin V1b receptor antagonist Adatanserin (WAY-SEB-324; WY-50324; SEB-324) – serotonin 5-HT1A receptor partial agonist and serotonin 5-HT2A and 5-HT2C receptor antagonist ADX-71149 (JNJ-1813; JNJ-40411813; 
JNJ-mGluR2-PAM) – metabotropic glutamate mGlu2 receptor modulator Amesergide (LY-237733) – serotonin 5-HT2A, 5-HT2B, and 5-HT2C receptor antagonist, α2-adrenergic receptor antagonist, and other actions Amibegron (SR-58611; SR-58611A) – β3-adrenergic receptor agonist Aprepitant (MK-869) – neurokinin NK1 receptor antagonist ARA-014418 (AR-A014418; GSK-3β Inhibitor VIII) – glycogen synthase kinase GSK-3β inhibitor Armodafinil (CEP-10953; Nuvigil; (R)-modafinil) – atypical dopamine reuptake inhibitor Atipamezole (antisedan; MPV-1248) – α2-adrenergic receptor antagonist Atomoxetine (LY-139603; Strattera; Tomoxetine) – norepinephrine reuptake inhibitor AZD-2066 – metabotropic glutamate mGlu5 receptor antagonist AZD-2327 – δ-opioid receptor agonist AZD-7268 – δ-opioid receptor agonist AZD-8129 (AR-A000002; AR-A2XX; AR-A2) – serotonin 5-HT1B receptor antagonist Basimglurant (NOE-101; RG-7090; RO-4917523) – metabotropic glutamate mGlu5 receptor antagonist Befloxatone (MD-370503) – monoamine oxidase MAO-A reversible inhibitor BMS-181101 (BMY-42569) – serotonin reuptake inhibitor and serotonin 5-HT1A and 5-HT1D receptor agonist Buspirone (transdermal; BuSpar Patch) – serotonin 5-HT1A receptor partial agonist Casopitant (GW-679769; GW679769) – neurokinin NK1 receptor antagonist Centpropazine – unknown mechanism of action Cibinetide (ARA-290) – erythropoietin receptor (EpoR) agonist Citalopram (controlled-release) – serotonin reuptake inhibitor Clavulanic acid (RX-10100; Serdaxin; Zoraxel) – β-lactamase inhibitor and unknown mechanism of action (glutamate transporter GLT1 expression enhancer, dopamine, glutamate, and serotonin modulator, possibly via Munc18-1 and Rab4 interactions) Clovoxamine (DU-23811) – serotonin and norepinephrine reuptake inhibitor Coluracetam (BCI-540; MKC-231) – ionotropic glutamate AMPA receptor positive allosteric modulator, choline uptake and acetylcholine synthesis enhancer, and racetam CP-316311 – corticotropin releasing hormone (CRH) inhibitor 
Crinecerfont (NBI-74788; SSR-125543; SSR-125543A) – corticotropin releasing factor receptor 1 (CRF1) antagonist CRL-41789 – undefined mechanism of action Cutamesine (AGY-94806; Msc-1; SA-4503) – sigma σ1 receptor agonist CX157 (KP157; TriRima; Tyrima) – monoamine oxidase MAO-A reversible inhibitor Dapoxetine (LY-210448; Priligy) – serotonin reuptake inhibitor Dasotraline (DSP-225289; SEP-225289; SEP-0225289; SEP-289) – serotonin, norepinephrine, and dopamine reuptake inhibitor DDP-225 – norepinephrine reuptake inhibitor and serotonin 5-HT3 receptor antagonist Decoglurant – metabotropic glutamate mGlu2 and mGlu3 receptor antagonist Delequamine (RS-15385; RS-15385197) – α2-adrenergic receptor antagonist Delucemine (NPS-1506) – ionotropic glutamate NMDA receptor polyamine site antagonist and serotonin reuptake inhibitor Dexmecamylamine (AT-5214; NIH-11008; S-mecamylamine; TC-5214) – α3β4 and α4β2 nicotinic acetylcholine receptor negative allosteric modulator Dexnafenodone (LU-43706) – serotonin and norepinephrine reuptake inhibitor DMP-695 – corticotropin releasing hormone (CRH) inhibitor DOV-216303 – serotonin, norepinephrine, and dopamine reuptake inhibitor DPC-368 – undefined mechanism of action DSP-1200 – serotonin 5-HT2A receptor antagonist, dopamine D2 receptor antagonist, and α2A-adrenergic receptor antagonist Edivoxetine (EDP-125; LY-2216684) – norepinephrine reuptake inhibitor Elzasonan (CP-448187) – serotonin 5-HT1B and 5-HT1D receptor antagonist Emapunil (AC-5216; XBD-173) – translocator protein (TSPO; peripheral benzodiazepine receptor) agonist (neurosteroidogenesis stimulant) Emicerfont (GW-876008; GW-876008X) – corticotropin releasing factor receptor 1 (CRF1) antagonist Eplivanserin (Ciltyri; Sliwens; SR-46349; SR-46349B; SR-46615A) – serotonin 5-HT2A receptor antagonist Eptapirone (F-11440) – serotonin 5-HT1A receptor full agonist Esreboxetine ((S,S)-Reboxetine; AXS-14; PNU-165442G) – norepinephrine reuptake inhibitor Esuprone (LU-43839) – monoamine 
oxidase MAO-A reversible inhibitor Ethyl eicosapentaenoic acid (Ethyl-EPA) – omega-3 fatty acid Farampator (CX-691; ORG-24448) – ionotropic glutamate AMPA receptor positive allosteric modulator Fasoracetam (AEVI-001; LAM-105; MDGN-001; NFC-1; NS-105) – unknown mechanism of action (metabotropic glutamate receptor modulator) and racetam FCE-25876 – serotonin reuptake inhibitor Flerobuterol (CRL-40827) – β-adrenergic receptor agonist Flesinoxan (DU-29373) – serotonin 5-HT1A receptor agonist Flibanserin (Addyi; BIMT-17; Girosa) – serotonin 5-HT1A receptor agonist, serotonin 5-HT2A, 5-HT2B, and 5-HT2C receptor antagonist, and dopamine D4 receptor antagonist (R)-Fluoxetine – serotonin reuptake inhibitor Fluparoxan (GR-50360; GR-50360A) – α2-adrenergic receptor antagonist Gaboxadol (LU-02030; LU-2-030; MK-0928; OV-101; THIP) – GABAA receptor agonist Girisopam (EGIS-5810; GYKI-51189) – GABAA receptor positive allosteric modulator and benzodiazepine GYKI-52895 – dopamine reuptake inhibitor Haloperidol (CLR-3001) – typical antipsychotic (non-selective monoamine receptor modulator; low-dose withdrawal therapy) HT-2157 (SNAP-37889) – galanin GAL3 receptor antagonist ICI-170809 (ZM-170809) – serotonin 5-HT2A, 5-HT2B, and 5-HT2C receptor antagonist Idazoxan – α2-adrenergic receptor antagonist Ipsapirone (BAY-Q-7821; TVX-Q-7821) – serotonin 5-HT1A receptor partial agonist IRFI-165 – adenosine A1 receptor antagonist Istradefylline (KW-6002; Nourianz; Nouriast) – adenosine A2 receptor antagonist JNJ-18038683 – serotonin 5-HT7 receptor antagonist JNJ-19567470 (CRA-5626; R-317573) – corticotropin releasing factor receptor 1 (CRF1) antagonist JNJ-26489112 – unknown mechanism of action (topiramate successor) JNJ-39393406 – α7 nicotinic acetylcholine receptor positive allosteric modulator Lanicemine (AZD-6765) – low-trapping ionotropic glutamate NMDA receptor antagonist LB-100 (LB-1) – protein phosphatase 2A (PP2A) inhibitor Levoprotiline (CGP-12103; CGP-12103A; CGS-12103; 
R(–)-oxaprotiline; R(–)-hydroxymaprotiline) – histamine H1 receptor antagonist, other actions, and tetracyclic antidepressant Lithium – unknown mechanism of action and mood stabilizer Litoxetine (IXA-001; SL-810385) – serotonin reuptake inhibitor and weak serotonin 5-HT3 receptor antagonist Losmapimod (FTX-1821; GS-856553; GSK-856553; GW-856553; GW-856553X) – p38-α/β mitogen-activated protein kinase (MAPK) inhibitor and double homeobox 4 (DUX4) inhibitor LU-AA34893 (LU-AA-34893) – serotonin receptor modulator LU-AA39959 (LU-AA-39959) – ion channel modulator Lubazodone (SM-50C; YM-35992; YM-992) – serotonin reuptake inhibitor and serotonin 5-HT2A receptor antagonist Masitinib (AB-07105; AB-1010; Alsitek; Masican; Masipro; Masiviera) – tyrosine kinase inhibitor and other actions Mecamylamine (Inversine; Tridmac) – nicotinic acetylcholine receptor negative allosteric modulator MIN-117 (WF-516) – serotonin and dopamine reuptake inhibitor, serotonin 5-HT1A and 5-HT7 receptor antagonist, and α1-adrenergic receptor antagonist MK-1942 – undefined mechanism of action ML-105 – undefined mechanism of action Naloxone/tianeptine (TNX-601; TNX-601-CR; TNX-601-ER) – weak and atypical μ- and δ-opioid receptor agonist, other actions, tricyclic antidepressant, and μ-opioid receptor antagonist combination Naluzotan (PRX-00023) – serotonin 5-HT1A receptor partial agonist and sigma σ1 receptor antagonist NB-415 – vasopressin V1b receptor antagonist Neboglamine (nebostinel; CR-2249; XY-2401) – ionotropic glutamate NMDA receptor glycine site positive allosteric modulator Nefiracetam (BRN 6848330; CCRIS 6729; DM 9384; DMPPA; DN-9384; DZL-221; HL-0812; HPI-001; Motiva; Translon) – unknown mechanism of action (voltage-gated calcium channel potentiator, α4β2 nicotinic acetylcholine receptor potentiator, ionotropic glutamate NMDA receptor potentiator (possible glycine site partial positive allosteric modulator), ionotropic glutamate AMPA receptor potentiator, and GABAA receptor agonist) and 
racetam Nemifitide (INN-00835) – unknown mechanism of action NS-2359 (GSK-372475) – serotonin, norepinephrine, and dopamine reuptake inhibitor NS-2389 (GW-650250; GW-650250A) – serotonin, norepinephrine, and dopamine reuptake inhibitor ORM-10921 – α2C-adrenergic receptor antagonist Orvepitant (GW-823296; GW-823296X; GW823296) – neurokinin NK1 receptor antagonist Osanetant (ACER-801; SR-142801; SR-142806) – neurokinin NK3 receptor antagonist Pexacerfont (BMS-562086) – corticotropin releasing factor receptor 1 (CRF1) antagonist PF-04455242 – κ-opioid receptor antagonist PT-150 (ORG-34517; SCH-900636) – glucocorticoid receptor antagonist and androgen receptor antagonist Radafaxine (GW-353162; (2S,3S)-hydroxybupropion) – norepinephrine and dopamine reuptake inhibitor Ramelteon (Rozerem; TAK-375) – melatonin MT1 and MT2 receptor agonist Rapastinel (BV-102; GLYX-13; TPPT-amide) – ionotropic glutamate NMDA receptor glycine site partial agonist RG-7166 – serotonin, norepinephrine, and dopamine reuptake inhibitor Ritanserin (R-55667) – serotonin 5-HT2A and 5-HT2C receptor antagonist Robalzotan (AZD-7371; NAD-299) – serotonin 5-HT1A receptor antagonist Rolipram (ME-3167; ZK-62711) – phosphodiesterase 4 (PDE4) inhibitor Sabcomeline (BCI-224; CEB-242; Memric; SB-202026) – muscarinic acetylcholine M1 receptor agonist Saredutant (SR-48968) – neurokinin NK2 receptor antagonist SB-236057 – serotonin 5-HT1B receptor inverse agonist SB-245570 – serotonin 5-HT1B receptor antagonist Sibutramine (Aoquqing; BTS-54524; Ectiva; KES-524; Meridia; Reductase; Reductil; Reduxade; Sibutral) – serotonin, norepinephrine, and dopamine reuptake inhibitor Siramesine (LU-28179) – sigma σ2 receptor agonist Sirukumab (CNTO-136; Plivensia) – interleukin 6 (IL-6) inhibitor SKL-10406 (SKL-DEP) – serotonin, norepinephrine, and dopamine reuptake inhibitor SKL-PSY (FZ-016) – serotonin 5-HT1A receptor agonist Sodium phenylbutyrate (slow-release; LU-901; Lunaphen) – histone deacetylase (HDAC) inhibitor 
SSR-149415 (SR-149415) – vasopressin V1b receptor antagonist SSR-241586 – neurokinin NK2 and NK3 receptor antagonist Tandospirone (metanopirone; Sediel; SM-3997) – serotonin 5-HT1A receptor partial agonist and α2-adrenergic receptor antagonist Tasimelteon (BMS-214778; Hetlioz; VEC-162) – melatonin MT1 and MT2 receptor agonist Tedatioxetine (LU-AA24530) – serotonin, norepinephrine, and dopamine reuptake inhibitor, serotonin 5-HT2A, 5-HT2C, and 5-HT3 receptor antagonist, and α1A-adrenergic receptor antagonist Tianeptine (JNJ-39823277; TPI-1062) – weak and atypical μ- and δ-opioid receptor agonist, other actions, and tricyclic antidepressant TS-111 – undefined mechanism of action Tulrampator (CX-1632; S-47445) – ionotropic glutamate AMPA receptor positive allosteric modulator Vanoxerine (boxeprazine; GBR-12909) – atypical dopamine reuptake inhibitor Verucerfont (GSK-561679; NBI-77860) – corticotropin releasing factor receptor 1 (CRF1) antagonist Vestipitant (GW-597599; GW-597599B) – neurokinin NK1 receptor antagonist Viloxazine (Qelbree) – norepinephrine reuptake inhibitor VN-2222 – serotonin reuptake inhibitor and serotonin 5-HT1A receptor partial agonist VUFB-17649 – serotonin reuptake inhibitor VUFB-18285 – serotonin reuptake inhibitor ZD-4974 – neurokinin NK1 receptor antagonist Zelatriazin (TAK-041; NBI-1065846; NBI-846) – G protein-coupled receptor 139 (GPR139) agonist Ziprasidone (Geodon) – atypical antipsychotic (non-selective monoamine receptor modulator) Preregistration submission withdrawal Aripiprazole/sertraline (ASC-01) – atypical antipsychotic (non-selective monoamine receptor modulator) and serotonin reuptake inhibitor combination Formal development never or not yet started Mevidalen (LY-3154207; D1 PAM) – dopamine D1 receptor positive allosteric modulator – under development for Lewy body disease Nitrous oxide (N2O; "laughing gas") – ionotropic glutamate NMDA receptor antagonist – being studied for depression but doesn't seem to be being formally 
developed towards approval Clinically used drugs Approved drugs Agomelatine (Valdoxan) – serotonin 5-HT2C receptor antagonist and melatonin MT1 and MT2 receptor agonist Amineptine (Survector, Maneon) – norepinephrine and dopamine reuptake inhibitor – withdrawn Amisulpride (Solian) – atypical antipsychotic (dopamine D2 and D3 receptor antagonist and serotonin 5-HT2B and 5-HT7 receptor antagonist) Amitriptyline (Elavil) – tricyclic antidepressant (non-selective monoamine reuptake inhibitor and/or receptor modulator) Amoxapine (Asendin) – tetracyclic antidepressant (non-selective monoamine reuptake inhibitor and/or receptor modulator) Aripiprazole (Abilify) – atypical antipsychotic (non-selective monoamine receptor modulator) Brexanolone (allopregnanolone; SAGE-547; SGE-102; Zulresso) – GABAA receptor positive allosteric modulator and neurosteroid – approved for postpartum depression Brexpiprazole (Rexulti) – atypical antipsychotic (non-selective monoamine receptor modulator) Bupropion (Wellbutrin) – norepinephrine and dopamine reuptake inhibitor and nicotinic acetylcholine receptor negative allosteric modulator Bupropion/dextromethorphan (Auvelity) – sigma σ1 receptor agonist, serotonin reuptake inhibitor, norepinephrine and dopamine reuptake inhibitor, nicotinic acetylcholine receptor negative allosteric modulator, ionotropic glutamate NMDA receptor antagonist, other actions, and CYP2D6 inhibitor combination Butriptyline (Evadyne) – tricyclic antidepressant (non-selective monoamine reuptake inhibitor and/or receptor modulator) – discontinued Cariprazine (Vraylar) – atypical antipsychotic (non-selective monoamine receptor modulator) Citalopram (Celexa) – serotonin reuptake inhibitor Desipramine (Norpramin) – tricyclic antidepressant (non-selective monoamine reuptake inhibitor and/or receptor modulator) Desvenlafaxine (Pristiq) – serotonin and norepinephrine reuptake inhibitor Desvenlafaxine (extended-release; Khedezla) – serotonin and norepinephrine reuptake 
inhibitor – withdrawn Desvenlafaxine (extended-release; WIP-DF17) – serotonin and norepinephrine reuptake inhibitor – registered in South Korea Dosulepin (dothiepin; Prothiaden) – tricyclic antidepressant (non-selective monoamine reuptake inhibitor and/or receptor modulator) Doxepin (Sinequan) – tricyclic antidepressant (non-selective monoamine reuptake inhibitor and/or receptor modulator) Duloxetine (Cymbalta; Drizalma Sprinkle) – serotonin and norepinephrine reuptake inhibitor Escitalopram (Lexapro) – serotonin reuptake inhibitor Esketamine (Spravato) – ionotropic glutamate NMDA receptor antagonist Fluoxetine (Prozac; Sarafem) – serotonin reuptake inhibitor Fluvoxamine (Luvox) – serotonin reuptake inhibitor Gepirone (Exxua) – serotonin 5-HT1A receptor partial agonist and α2-adrenergic receptor antagonist Hypericum extract (LI-160; St John's wort) – undefined mechanism of action Imipramine (Tofranil) – tricyclic antidepressant (non-selective monoamine reuptake inhibitor and/or receptor modulator) Iproniazid (Marsilid) – monoamine oxidase MAO-A and MAO-B irreversible inhibitor – withdrawn Isocarboxazid (Marplan) – monoamine oxidase MAO-A and MAO-B irreversible inhibitor Levomilnacipran (Fetzima) – serotonin and norepinephrine reuptake inhibitor Levosulpiride (L-sulpiride; Levobren; Levopraid; Levosulpride; RV-12309; Sulpepta) – dopamine D2 and D3 receptor antagonist and serotonin 5-HT4 receptor agonist Lofepramine (Lomont) – tricyclic antidepressant (non-selective monoamine reuptake inhibitor and/or receptor modulator) Lumateperone (Caplyta) – atypical antipsychotic (non-selective monoamine receptor modulator) Lurasidone (Latuda) – atypical antipsychotic (non-selective monoamine receptor modulator) Maprotiline (Ludiomil) – tetracyclic antidepressant (non-selective monoamine reuptake inhibitor and/or receptor modulator) Mianserin (Tolvon) – tetracyclic antidepressant (non-selective monoamine reuptake inhibitor and/or receptor modulator) Milnacipran (Dalcipran; Ixel; 
Savella) – serotonin and norepinephrine reuptake inhibitor Mirtazapine (Remeron) – α2-adrenergic receptor antagonist, serotonin 5-HT2A, 5-HT2C, and 5-HT3 receptor antagonist, histamine H1 receptor inverse agonist, and tetracyclic antidepressant Moclobemide (Aurorix; Manerix) – monoamine oxidase MAO-A reversible inhibitor Nefazodone (BMY-13754; Dutonin; MJ-13754; MS-13754; Nefadar; Serzone) – serotonin 5-HT1A receptor ligand, serotonin 5-HT2A and 5-HT2C receptor antagonist, α1- and α2-adrenergic receptor antagonist, weak serotonin, norepinephrine, and dopamine reuptake inhibitor, and other actions – mostly withdrawn Nomifensine (Merital; Alival) – norepinephrine and dopamine reuptake inhibitor – withdrawn Nortriptyline (Aventyl) – tricyclic antidepressant (non-selective monoamine reuptake inhibitor and/or receptor modulator) Olanzapine (Zyprexa) – atypical antipsychotic (non-selective monoamine receptor modulator) Olanzapine/fluoxetine (OFC; Symbyax; ZypZac) – atypical antipsychotic (non-selective monoamine receptor modulator) and serotonin reuptake inhibitor combination Opipramol (Ensidon; G-33040; Insidon; Nisidana) – sigma σ1 and σ2 receptor agonist, serotonin 5-HT2A receptor antagonist, dopamine D2 receptor antagonist, α1-adrenergic receptor antagonist, histamine H1 receptor antagonist, other actions, and tricyclic antidepressant Paroxetine (Paxil; Seroxat) – serotonin reuptake inhibitor Phenelzine (Nardil) – monoamine oxidase MAO-A and MAO-B irreversible inhibitor Protriptyline (Vivactil) – tricyclic antidepressant (non-selective monoamine reuptake inhibitor and/or receptor modulator) Quetiapine (Seroquel) – atypical antipsychotic (non-selective monoamine receptor modulator) Reboxetine (Edronax) – norepinephrine reuptake inhibitor Selegiline (Emsam) – monoamine oxidase MAO-B irreversible inhibitor, catecholaminergic activity enhancer, and weak norepinephrine releasing agent (via metabolites) Sertraline (Zoloft; Lustral) – serotonin reuptake inhibitor 
Setiptiline (Tecipul; Tesolon) – serotonin receptor antagonist, α2-adrenergic receptor antagonist, norepinephrine reuptake inhibitor, and tetracyclic antidepressant Tianeptine (Coaxil; Stablon; Tatinol) – weak and atypical μ- and δ-opioid receptor agonist, other actions, and tricyclic antidepressant Toludesvenlafaxine (ansofaxine; 4-methylbenzoate desvenlafaxine; desvenlafaxine prodrug; LPM-570065; LY-03005; Ruoxinlin) – serotonin, norepinephrine, and dopamine reuptake inhibitor Tranylcypromine (Parnate) – monoamine oxidase MAO-A and MAO-B irreversible inhibitor Trazodone (Oleptro; Trittico) – serotonin 5-HT1A receptor partial agonist, serotonin 5-HT2A and 5-HT2C receptor antagonist, α1- and α2-adrenergic receptor antagonist, weak serotonin reuptake inhibitor, and other actions Trimipramine (Surmontil) – tricyclic antidepressant (non-selective monoamine reuptake inhibitor and/or receptor modulator) Venlafaxine (Effexor) – serotonin and norepinephrine reuptake inhibitor Vilazodone (Viibryd) – serotonin reuptake inhibitor and serotonin 5-HT1A receptor agonist Viloxazine (Vivalan) – norepinephrine reuptake inhibitor Vortioxetine (Trintellix) – serotonin reuptake inhibitor, serotonin 5-HT1A and 5-HT1B receptor agonist, and serotonin 5-HT1D, 5-HT3, and 5-HT7 receptor antagonist Zimelidine (Zelmid) – serotonin reuptake inhibitor – withdrawn Zuranolone (BIIB-125; S-812217; SAGE-217; SGE-797; Zurzuvae) – GABAA receptor positive allosteric modulator and neurosteroid – approved for postpartum depression See also List of antidepressants List of investigational drugs References Further reading External links AdisInsight - Springer Antidepressants, investigational Dynamic lists Experimental antidepressants
List of investigational antidepressants
https://en.wikipedia.org/wiki/Integrated%20enterprise%20modeling
Integrated enterprise modeling (IEM) is an enterprise modeling method used for the capture and reengineering of processes in manufacturing enterprises as well as in the public sector and among service providers. In integrated enterprise modeling, different aspects such as functions and data are described in one model. Furthermore, the method supports analyses of business processes independently of the existing organizational structure. Integrated enterprise modeling was developed at the Fraunhofer Institute for Production Systems and Design Technology (German: IPK) in Berlin, Germany.

Integrated enterprise modeling topics

Base constructs

The integrated enterprise modeling (IEM) method uses an object-oriented approach and adapts it to the description of enterprises. The core of the method is an application-oriented division of all elements of an enterprise into the generic object classes "product", "resource" and "order".

Product

The object class "product" represents all objects whose production and sale are the goal of the enterprise under consideration, as well as all objects that flow into the end product. This includes raw materials, intermediate products, components and end products, as well as services and the data describing them.

Order

The object class "order" describes all types of orders in the enterprise. The objects of the class "order" represent the information that is relevant from the point of view of planning, control and supervision of the enterprise processes. They specify what is to be executed, when, on which objects, under whose responsibility, and with which resources.

Resource

The IEM class "resource" contains all agents required in the enterprise for the execution or support of activities. Among other things, these are employees, business partners, all kinds of documents, as well as information systems and operating supplies. The classes "product", "order" and "resource" can be refined and specified step by step. 
In this way, both industry-typical and enterprise-specific product, order and resource subclasses can be represented. Structures (e.g. parts lists or organisation charts) can be represented as relational features of the classes with the help of being-part-of and consists-of relations between different subclasses.

Action

The activities necessary for the production of products and the provision of services can be described as follows: an activity is the purposeful change of objects. The goal orientation of the activities implies explicit or implicit planning and control. The execution of the activities falls to capable agents. From these considerations, definitions can be derived for the following constructs:

An action is an object-neutral description of activities: a verbal description of a work task, procedure or process;
A function describes the transition of objects of a class from one defined state into another defined state by means of an action; and
An activity specifies, for the state transformation of objects of a class described by a function, the controlling order and the resources necessary for the execution of this transformation in the enterprise, each represented by an object state description.

Views

All modeled data of the enterprise under consideration are recorded in the model core of an integrated enterprise modeling (IEM) model in two main views: the "information model" and the "business process model". All relevant objects of an enterprise, their properties and relations are shown in the "information model". These are the class trees of the object classes "product", "order" and "resource". The "business process model" represents enterprise processes and their relations to each other. Activities are shown in their interaction with the objects. 
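The constructs above can be illustrated in code. The following is a minimal sketch, not the Fraunhofer IPK implementation: the three generic IEM object classes with consists-of relations, plus an activity that links a function (a state change of objects) to its controlling order and executing resources. All concrete names (gearbox, "production order 4711", etc.) are invented for illustration.

```python
class IEMObject:
    """Common base for the generic classes 'product', 'order' and 'resource'."""
    def __init__(self, name, state="defined"):
        self.name = name
        self.state = state
        self.parts = []          # "consists-of" relations to sub-objects

    def consists_of(self, *parts):
        self.parts.extend(parts)
        return self

class Product(IEMObject): pass   # raw materials, components, end products, services
class Order(IEMObject): pass     # planning, control and supervision information
class Resource(IEMObject): pass  # employees, documents, information systems, ...

class Activity:
    """A function (defined state -> defined state) plus its order and resources."""
    def __init__(self, action, obj, pre_state, post_state, order, resources):
        self.action = action                 # object-neutral description, e.g. "assemble"
        self.obj = obj
        self.pre_state, self.post_state = pre_state, post_state
        self.order = order                   # controlling order
        self.resources = resources           # executing/supporting resources

    def execute(self):
        # the function only fires from its defined initial state
        assert self.obj.state == self.pre_state, "object not in required state"
        self.obj.state = self.post_state

# Example: an enterprise-specific refinement of the generic classes.
gearbox = Product("gearbox", state="parts available")
gearbox.consists_of(Product("housing"), Product("gear set"))

assemble = Activity(
    action="assemble",
    obj=gearbox,
    pre_state="parts available",
    post_state="assembled",
    order=Order("production order 4711"),
    resources=[Resource("assembly line"), Resource("worker")],
)
assemble.execute()
print(gearbox.state)  # -> assembled
```

The subclassing of `Product` mirrors how industry- and enterprise-specific class trees refine the generic classes, and `consists_of` stands in for the being-part-of/consists-of relations.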
Process modeling

The structuring of the enterprise processes in integrated enterprise modeling (IEM) is achieved by hierarchical subdivision with the help of decomposition. Decomposition means the breakdown of a system into subsystems, each of which contains components that stand in a logical cohesion. Process modeling is thus a partitioning of processes into subprocesses. Every subprocess describes a self-contained task. The decomposition of single processes can be continued until the subprocesses are manageable, i.e. appropriately small. However, they should not be too fine-grained, because a high number of detailed processes increases the complexity of a business process model. A process modeler therefore has to strike a balance between the complexity of the model and the level of detail of the process description. A model depth of at most three to four decomposition levels (model levels) is generally recommended. On a model level, business process flows are represented with the aid of graphical combination elements. There are five basic types of combinations between activities:

Sequential order: In a sequential order the activities are executed one after another.
Parallel branching: A parallel branching means that all parallel branched activities have to be completed before the following activity can be started. It is not necessary that the parallel activities are executed at the same time; they can also be deferred.
Case distinction: An either/or decision. The case distinction is a branching into alternative processes depending on defined conditions.
Uniting: The end of a parallel or alternative execution, or the merging of process chains, is indicated by the uniting.
Loop: A feedback (loop, cycle) is represented by means of case distinction and uniting. 
The activities included in the loop are executed as long as the continuation condition holds.

Modeling procedure

The modeling procedure for the representation of business processes in IEM covers the following steps: system delimitation; modeling; model evaluation and use; and model change. The system delimitation is the basis of efficient modeling. Starting from the problem definition, the area of the real system to be represented is selected and interfaces to its environment are defined. In addition, the level of detail of the model is determined, i.e. the depth of the hierarchical decomposition relations in the "business process model" view. The delimited real system is transformed into an abstract model with the help of the IEM method. Modeling in IEM consists of the construction of the two main views, the "information model" and the "business process model". The "information model" is made by specifying the object classes to be modeled for "product", "order" and "resource", with their class structures as well as descriptive and relational features. The "business process model" is formed by identifying and describing functions and activities and combining them into processes. As a general rule, the "information model" is constructed first; here the modeler can draw on available reference class structures. Reference classes that do not correspond to the real system or were not found to be relevant during system delimitation are deleted, and missing relevant classes are inserted. Once the object base is fixed, activities and functions are attached to the objects according to the "generic activity model" and combined into business processes with the help of combination elements. The result is a model that can be analysed and changed as required. It often happens that new relevant object classes are identified during the construction of the "business process model", so that the class trees are completed step by step. 
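The five combination elements can be sketched as small combinators over step functions that transform a state, as below. This is an illustrative reading of the constructs, not MO²GO code; all function names are invented. Uniting is implicit here as the point where a parallel or alternative branch returns control.

```python
def sequence(*steps):                     # sequential order: one after another
    def run(state):
        for step in steps:
            state = step(state)
        return state
    return run

def parallel(*branches):                  # parallel branching (+ uniting):
    def run(state):                       # all branches must complete before
        for branch in branches:           # the following activity starts
            state = branch(state)
        return state
    return run

def case(condition, if_true, if_false):   # case distinction: either/or decision
    def run(state):
        return if_true(state) if condition(state) else if_false(state)
    return run

def loop(condition, body):                # loop built from case distinction + uniting
    def run(state):
        while condition(state):
            state = body(state)
        return state
    return run

# Example process: machine parts, then (in parallel) inspect and document,
# then keep inspecting in a loop until quality is sufficient.
def machine(s):  return {**s, "machined": True}
def inspect(s):  return {**s, "quality": s["quality"] + 1}
def document(s): return {**s, "documented": True}

process = sequence(
    machine,
    parallel(inspect, document),
    loop(lambda s: s["quality"] < 3, inspect),
)
final = process({"quality": 0})
print(final["quality"])  # -> 3
```

The nesting of combinators also mirrors decomposition: each `sequence(...)` could itself stand for a subprocess refined on a lower model level.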
The construction of the two views is therefore an iterative process. Afterwards, weak points and improvement potentials can be identified in the course of the model evaluation. This can lead to model changes whose realization should eliminate the weak points and exploit the improvement potentials in the real system.

Modeling tool MO²GO

The software tool MO²GO (method for an object-oriented business process optimization) supports the modeling process based on integrated enterprise modeling (IEM). Different analyses of a given model are available, for example for the planning and implementation of information systems. The MO²GO system is easily expandable and enables a fast modeling approach. The currently used MO²GO system consists of the following components:

MO²GO version 2.4: This component offers modeling functions for class structures and process chains as well as mechanisms for the analysis of IEM models.
MO²GO Macro editor version 2.1: The macro editor supports the creation of MO²GO macros for user-defined evaluation procedures.
MO²GO Viewer version 1.07: The Java-based and licence-free MO²GO Viewer is an easy-to-use interface for navigating through MO²GO process chains.
MO²GO XML converter version 1.0: Nowadays IT implementation works mainly with UML diagrams. MO²GO provides a component that generates an XML file from the model which can be imported into UML tools.
MO²GO Web publisher version 2.0: The Web publisher is an analysis mechanism that can be started directly from MO²GO 2.4. The result of the evaluation of the model contents is a process assistant based on a text and hyperlink representation. To adapt the process assistant flexibly to user requirements, the Web publisher contains a configuration component.

MO²GO process assistant

The IEM business process models contain much information that can not only be used by system analysts but can also be helpful for employees in their daily work. 
To provide this model information to the staff and to let employees share in the results of the modeling, a special tool was developed at the Fraunhofer IPK. This is a web-based process assistant whose contents are generated automatically from the IEM business process model of the enterprise. The process assistant provides all users with the information of the business process model in HTML form over the enterprise intranet. No special method or tool knowledge is required to use it, beyond basic computer and Internet experience. The process assistant has been developed so that employees can find answers to questions quickly and precisely, e.g.: What are the processes in the enterprise? How are they structured? Who is involved in a certain process, and with which responsibility? Which documents and application systems are used? Or, conversely: In which processes is a certain organisational unit involved? In which processes is a certain document or application system used? To turn the business process model into an informative process assistant, certain modeling rules must be followed. This means, for example, that the individual actions must be stored with their descriptions, the responsibilities of the organisational units must be indicated explicitly, and the paths to the documents must be entered in the class tree. Meeting these conditions means additional time expenditure during modeling; once they are met, all employees can "surf" through an informative enterprise documentation on the intranet with the help of the process assistant. They can choose between a graphic view and a text-based description according to their preferences and prior methodical knowledge. The graphic view is provided by the MO²GO Viewer, a viewer tool for MO²GO models.
The process assistant and the MO²GO Viewer are connected so that the graphic representation of the process under consideration can be accessed context-sensitively from the process assistant. Users can call up all templates, specifications and documents for the working sequence online, both from the process assistant and from the MO²GO Viewer. Therefore, the process assistant can be employed not only for tracing the modeling results but also in daily business, for training new employees as well as for executing process steps. To improve usability in the daily routine, the process assistant can be adapted flexibly to the needs of the users. This customization can be carried out both with regard to the layout and to the main content emphases of the process assistant. Areas of application of the IEM Knowledge is used in organisations as a resource to render services for customers. Services are rendered through actions that are described as processes or business processes. Analysing and improving the handling of knowledge presupposes a common understanding of this context. An explicit description of the processes is therefore required, because they represent the context for the respective knowledge contents. Process modeling is a powerful instrument for the design and implementation of process-oriented knowledge management. In the context of the method of business process-oriented knowledge management (GPO KM) developed at the Fraunhofer IPK, the method of "integrated enterprise modeling" (IEM) is used. It makes it possible to represent, describe, analyse and design organisational processes. The IEM features few object classes and is easy to learn, easy to understand, and fast to apply. Furthermore, the object orientation of the IEM opens up the possibility of representing knowledge as an object class.
For the knowledge-oriented modeling of business processes according to the IEM method, the relevant knowledge contents have to be specified by knowledge domain and knowledge carrier and represented as resources in the business process model. In further applications, IEM is used to create models across organisations (e.g. companies) to achieve a common understanding between the involved stakeholders and to derive services (create software and define the ASP). In this context the object-oriented basis of IEM has been used to create a common semantics across the single company models and to achieve compliant enterprise models (predefined classes – terminology, model templates, etc.). The reason is that the terminology used within a model has to be understandable independently of the modeling language (see also SDDEM). See also Business process modeling References Further reading Bernus, P.; Mertins, K.; Schmidt, G. (2006). Handbook on Architectures of Information Systems. Berlin: Springer, 2006 (International handbook on information systems), Second Edition 2006 Mertins, K. (1994). Modellierungsmethoden für rechnerintegrierte Produktionsprozesse. Hanser Fachbuchverlag, Germany, ASIN 3446177469 Mertins, K.; Süssenguth, W.; Jochem, R. (1994). Modellierungsmethoden für rechnerintegrierte Produktionsprozesse. Carl Hanser Verlag, Germany Mertins, K.; Jochem, R. (1997). Qualitätsorientierte Gestaltung von Geschäftsprozessen. Beuth-Verlag, Berlin (Germany) Mertins, K.; Jochem, R. (1998). MO²GO. Handbook on Architectures of Information Systems. Springer-Verlag, Berlin (Germany) Mertins, K.; Jaekel, F-W. (2006). MO²GO: User Oriented Enterprise Models for Organizational and IT Solutions. In: Bernus, P.; Mertins, K.; Schmidt, G.: Handbook on Architectures of Information Systems. Second Edition. Springer-Verlag, Berlin. Spur, G.; Mertins, K.; Jochem, R.; Warnecke, H.J. (1993). Integrierte Unternehmensmodellierung. Beuth Verlag GmbH, Germany Schwermer, M.
(1998): Modellierungsvorgehen zur Planung von Geschäftsprozessen (Dissertation) FhG/IPK Berlin (Germany), External links Fraunhofer Institute for Production Systems and Design Technology Modeling tool MO²GO Enterprise modelling Systems engineering
Necrotizing gingivitis (NG) is a common, non-contagious infection of the gums with sudden onset. The main features are painful, bleeding gums, and ulceration of interdental papillae (the sections of gum between adjacent teeth). This disease, along with necrotizing periodontitis (NP) and necrotizing stomatitis, is classified as a necrotizing periodontal disease, one of the three general types of gum disease caused by inflammation of the gums (periodontitis). The often severe gum pain that characterizes NG distinguishes it from the more common gingivitis or chronic periodontitis which is rarely painful. If NG is improperly treated or neglected, it may become chronic and/or recurrent. The causative organisms are mostly anaerobic bacteria, particularly Fusobacteriota and spirochete species. Predisposing factors include poor oral hygiene, smoking, poor nutrition, psychological stress, and a weakened immune system. When the attachments of the teeth to the bone are involved, the term NP is used. Treatment of NG is by removal of dead gum tissue and antibiotics (usually metronidazole) in the acute phase, and improving oral hygiene to prevent recurrence. Although the condition has a rapid onset and is debilitating, it usually resolves quickly and does no serious harm. The informal name trench mouth arose during World War I as many soldiers developed the disease, probably because of the poor conditions and extreme psychological stress. Signs and symptoms In the early stages some affected people may complain of a feeling of tightness around the teeth. Three signs/symptoms must be present to diagnose this condition: Severe gum pain. Profuse gum bleeding that requires little or no provocation. Interdental papillae are ulcerated with dead tissue. The papillary necrosis of NG has been described as "punched out". Other signs and symptoms may be present, but not always. Foul breath. Bad taste (metallic taste). 
Malaise, fever and/or cervical lymph node enlargement are rare (unlike the typical features of herpetic stomatitis). Pain is fairly well localized to the affected areas. Systemic reactions may be more pronounced in children. Cancrum oris (noma) is a very rare complication, usually in debilitated children. Similar features but with more intense pain may be seen in necrotizing periodontitis in HIV/AIDS. Causes Necrotizing periodontal diseases are caused by a mixed bacterial infection that includes anaerobes such as P. intermedia and Fusobacterium as well as spirochetes, such as Treponema. Necrotizing gingivitis may also be associated with diseases in which the immune system is compromised, including HIV/AIDS. Necrotizing gingivitis is an opportunistic infection that occurs on a background of impaired local or systemic host defenses. The predisposing factors for necrotizing gingivitis are smoking, psychological stress, malnutrition, and immunosuppression. The following zones of infection have been described (superficial to deep): the bacterial zone, the neutrophil-rich zone, the necrotic zone and the spirochetal zone. Diagnosis Diagnosis is usually clinical. A smear may demonstrate fusospirochaetal bacteria and leukocytes; a blood picture is occasionally taken. The important differentiation is with acute leukemia or herpetic stomatitis. Classification Necrotizing gingivitis is part of a spectrum of disease termed necrotizing periodontal diseases. It is the most minor form of this spectrum, with more advanced stages being termed necrotizing periodontitis, necrotizing stomatitis, and the most extreme, cancrum oris. Necrotizing periodontitis (NP) is where the infection leads to attachment loss, and involves only the gingiva, periodontal ligament and alveolar bone. Progression of the disease into tissue beyond the mucogingival junction characterizes necrotizing stomatitis.
Treatment Treatment includes irrigation and debridement of necrotic areas (areas of dead and/or dying gum tissue), oral hygiene instruction and the use of mouth rinses and pain medication. If there is systemic involvement, then oral antibiotics may be given, such as metronidazole. As these diseases are often associated with systemic medical issues, proper management of the systemic disorders is appropriate. Prognosis Untreated, the infection may lead to rapid destruction of the periodontium and can spread, as necrotizing stomatitis or noma, into neighbouring tissues in the cheeks, lips or the bones of the jaw. As stated, the condition can occur and be especially dangerous in people with weakened immune systems. This progression to noma is possible in malnourished susceptible individuals, with severe disfigurement possible. Epidemiology In developed countries, this disease occurs mostly in young adults. In developing countries, NG may occur in children of low socioeconomic status, usually occurring with malnutrition (especially inadequate protein intake) and shortly after the onset of viral infections (e.g. measles). Predisposing factors include smoking, viral respiratory infections and immune defects, such as in HIV/AIDS. Uncommon, except in lower socioeconomic classes, this typically affects adolescents and young adults, especially in institutions, armed forces, etc., or people with HIV/AIDS. The disease has occurred in epidemic-like patterns, but it is not contagious. History Necrotizing gingivitis has been observed for centuries. Xenophon observed sore mouth and foul-smelling breath in Greek soldiers in the 4th century BC. In 1778, Hunter described the clinical features of necrotizing gingivitis, differentiating it from scurvy (avitaminosis C) and chronic periodontitis.
Jean Hyacinthe Vincent, a French physician working at the Paris Pasteur Institute, described a fusospirochetal infection of the pharynx and palatine tonsils causing "ulcero-membranous pharyngitis and tonsillitis", which later became known as Vincent's angina. In 1904, Vincent described the same pathogenic organisms in "ulceronecrotic gingivitis". Vincent's angina is sometimes confused with NG; however, the former is a tonsillitis and pharyngitis while the latter involves the gums, and the two conditions usually occur in isolation from each other. The term trench mouth evolved because the disease was observed in front-line soldiers during World War I, thought to be at least partly a result of the extreme psychological stress they were exposed to. The same condition appeared in civilians during periods of bombing raids; these civilians were away from the front line and had relatively good diets during wartime due to rationing, so it is assumed that psychological stress was the significant causative factor. It has also been associated with high tobacco use in the army. Many other historical names for this condition (and Vincent's angina) have been used, including: "acute membranous gingivitis", "fusospirillary gingivitis", "fusospirillosis", "fusospirochetal gingivitis", "phagedenic gingivitis", "Vincent stomatitis", "Vincent gingivitis", and "Vincent infection". In the late 1980s-early 1990s, it was originally thought that some necrotizing periodontal diseases seen in severely affected AIDS patients were strictly a sequela of HIV, and it was even called HIV-associated periodontitis. It is now understood that its association with HIV/AIDS was due to the immunocompromised status of such patients; it also occurs with higher prevalence in association with other diseases in which the immune system is compromised. The 1999 American Academy of Periodontology Classification termed the condition "necrotizing ulcerative periodontitis".
The "ulcerative" descriptor was removed from the name, because ulceration is considered to be secondary to the necrosis. See also Canker sore Mouth ulcer References External links Periodontal disorders Conditions of the mucous membranes Necrosis Trench warfare
In mathematics, the natural numbers are the numbers 0, 1, 2, 3, and so on, possibly excluding 0. Some start counting with 0, defining the natural numbers as the non-negative integers 0, 1, 2, 3, ..., while others start with 1, defining them as the positive integers 1, 2, 3, ... Some authors acknowledge both definitions whenever convenient. Sometimes, the whole numbers are the natural numbers plus zero. In other cases, the whole numbers refer to all of the integers, including negative integers. The counting numbers are another term for the natural numbers, particularly in primary school education, and are ambiguous as well, although they typically start at 1. The natural numbers are used for counting things, like "there are six coins on the table", in which case they are called cardinal numbers. They are also used to put things in order, like "this is the third largest city in the country", in which case they are called ordinal numbers. Natural numbers are also used as labels, like jersey numbers on a sports team, where they serve as nominal numbers and do not have mathematical properties. The natural numbers form a set, commonly symbolized as a bold N or blackboard bold ℕ. Many other number sets are built from the natural numbers. For example, the integers are made by adding 0 and negative numbers. The rational numbers add fractions, and the real numbers add infinite decimals. Complex numbers add the square root of −1. This chain of extensions canonically embeds the natural numbers in the other number systems. Natural numbers are studied in different areas of math. Number theory looks at things like how numbers divide evenly (divisibility), or how prime numbers are spread out. Combinatorics studies counting and arranging numbered objects, such as partitions and enumerations. History Ancient roots The most primitive method of representing a natural number is to use one's fingers, as in finger counting. Putting down a tally mark for each object is another primitive method.
Later, a set of objects could be tested for equality, excess or shortage—by striking out a mark and removing an object from the set. The first major advance in abstraction was the use of numerals to represent numbers. This allowed systems to be developed for recording large numbers. The ancient Egyptians developed a powerful system of numerals with distinct hieroglyphs for 1, 10, and all powers of 10 up to over 1 million. A stone carving from Karnak, dating from around 1500 BCE and now at the Louvre in Paris, depicts 276 as 2 hundreds, 7 tens, and 6 ones; and similarly for the number 4,622. The Babylonians had a place-value system based essentially on the numerals for 1 and 10, using base sixty, so that the symbol for sixty was the same as the symbol for one—its value being determined from context. A much later advance was the development of the idea that 0 can be considered as a number, with its own numeral. The use of a 0 digit in place-value notation (within other numbers) dates back as early as 700 BCE by the Babylonians, who omitted such a digit when it would have been the last symbol in the number. The Olmec and Maya civilizations used 0 as a separate number as early as the 1st century BCE, but this usage did not spread beyond Mesoamerica. The use of a numeral 0 in modern times originated with the Indian mathematician Brahmagupta in 628 CE. However, 0 had been used as a number in the medieval computus (the calculation of the date of Easter), beginning with Dionysius Exiguus in 525 CE, without being denoted by a numeral. Standard Roman numerals do not have a symbol for 0; instead, nulla (or the genitive form nullae) from nullus, the Latin word for "none", was employed to denote a 0 value. The first systematic study of numbers as abstractions is usually credited to the Greek philosophers Pythagoras and Archimedes. Some Greek mathematicians treated the number 1 differently than larger numbers, sometimes even not as a number at all.
Euclid, for example, defined a unit first and then a number as a multitude of units, thus by his definition, a unit is not a number and there are no unique numbers (e.g., any two units from indefinitely many units is a 2). However, in the definition of perfect number which comes shortly afterward, Euclid treats 1 as a number like any other. Independent studies on numbers also occurred at around the same time in India, China, and Mesoamerica. Emergence as a term Nicolas Chuquet used the term progression naturelle (natural progression) in 1484. The earliest known use of "natural number" as a complete English phrase is in 1763. The 1771 Encyclopaedia Britannica defines natural numbers in the logarithm article. Starting at 0 or 1 has long been a matter of definition. In 1727, Bernard Le Bovier de Fontenelle wrote that his notions of distance and element led to defining the natural numbers as including or excluding 0. In 1889, Giuseppe Peano used N for the positive integers and started at 1, but he later changed to using N0 and N1. Historically, most definitions have excluded 0, but many mathematicians such as George A. Wentworth, Bertrand Russell, Nicolas Bourbaki, Paul Halmos, Stephen Cole Kleene, and John Horton Conway have preferred to include 0. Mathematicians have noted tendencies in which definition is used, such as algebra texts including 0, number theory and analysis texts excluding 0, logic and set theory texts including 0, dictionaries excluding 0, school books (through high-school level) excluding 0, and upper-division college-level books including 0. There are exceptions to each of these tendencies and as of 2023 no formal survey has been conducted. Arguments raised include division by zero and the size of the empty set. Computer languages often start from zero when enumerating items like loop counters and string- or array-elements. Including 0 began to rise in popularity in the 1960s. 
The ISO 31-11 standard included 0 in the natural numbers in its first edition in 1978 and this has continued through its present edition as ISO 80000-2. Formal construction In 19th century Europe, there was mathematical and philosophical discussion about the exact nature of the natural numbers. Henri Poincaré stated that axioms can only be demonstrated in their finite application, and concluded that it is "the power of the mind" which allows conceiving of the indefinite repetition of the same act. Leopold Kronecker summarized his belief as "God made the integers, all else is the work of man". The constructivists saw a need to improve upon the logical rigor in the foundations of mathematics. In the 1860s, Hermann Grassmann suggested a recursive definition for natural numbers, thus stating they were not really natural—but a consequence of definitions. Later, two classes of such formal definitions emerged, using set theory and Peano's axioms respectively. Later still, they were shown to be equivalent in most practical applications. Set-theoretical definitions of natural numbers were initiated by Frege. He initially defined a natural number as the class of all sets that are in one-to-one correspondence with a particular set. However, this definition turned out to lead to paradoxes, including Russell's paradox. To avoid such paradoxes, the formalism was modified so that a natural number is defined as a particular set, and any set that can be put into one-to-one correspondence with that set is said to have that number of elements. In 1881, Charles Sanders Peirce provided the first axiomatization of natural-number arithmetic. In 1888, Richard Dedekind proposed another axiomatization of natural-number arithmetic, and in 1889, Peano published a simplified version of Dedekind's axioms in his book The principles of arithmetic presented by a new method (). This approach is now called Peano arithmetic. 
It is based on an axiomatization of the properties of ordinal numbers: each natural number has a successor and every non-zero natural number has a unique predecessor. Peano arithmetic is equiconsistent with several weak systems of set theory. One such system is ZFC with the axiom of infinity replaced by its negation. Theorems that can be proved in ZFC but cannot be proved using the Peano axioms include Goodstein's theorem. Notation The set of all natural numbers is standardly denoted N or ℕ. Older texts have occasionally employed J as the symbol for this set. Since natural numbers may contain 0 or not, it may be important to know which version is referred to. This is often specified by the context, but may also be done by using a subscript or a superscript in the notation, such as: Naturals without zero: ℕ* = {1, 2, 3, ...} Naturals with zero: ℕ₀ = {0, 1, 2, 3, ...} Alternatively, since the natural numbers naturally form a subset of the integers ℤ, they may be referred to as the positive, or the non-negative integers, respectively. To be unambiguous about whether 0 is included or not, sometimes a superscript "*" or "+" is added in the former case, and a subscript (or superscript) "0" is added in the latter case: ℤ⁺ = {1, 2, 3, ...}, ℤ₀⁺ = {0, 1, 2, 3, ...}. Properties This section uses the convention ℕ = ℕ₀ = {0, 1, 2, ...}. Addition Given the set ℕ of natural numbers and the successor function S sending each natural number to the next one, one can define addition of natural numbers recursively by setting a + 0 = a and a + S(b) = S(a + b) for all a, b. Thus, a + 1 = a + S(0) = S(a + 0) = S(a), a + 2 = a + S(1) = S(a + 1) = S(S(a)), and so on. The algebraic structure (ℕ, +) is a commutative monoid with identity element 0. It is a free monoid on one generator. This commutative monoid satisfies the cancellation property, so it can be embedded in a group. The smallest group containing the natural numbers is the integers. If 1 is defined as S(0), then b + 1 = b + S(0) = S(b + 0) = S(b). That is, b + 1 is simply the successor of b. Multiplication Analogously, given that addition has been defined, a multiplication operator × can be defined via a × 0 = 0 and a × S(b) = (a × b) + a.
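The recursive definitions of addition and multiplication above can be sketched directly in code. This is an illustrative Python sketch (not part of the original article), using plain integers as stand-ins for Peano numerals, with the successor function as the only primitive:

```python
# Peano-style arithmetic built from the successor function S alone.
def S(n):
    """Successor: the next natural number."""
    return n + 1


def add(a, b):
    """a + 0 = a ;  a + S(b) = S(a + b)."""
    return a if b == 0 else S(add(a, b - 1))


def mul(a, b):
    """a x 0 = 0 ;  a x S(b) = (a x b) + a."""
    return 0 if b == 0 else add(mul(a, b - 1), a)
```

Unwinding the recursion reproduces the derivations in the text, e.g. add(a, 2) = S(S(a)).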
This turns (ℕ, ×) into a free commutative monoid with identity element 1; a generator set for this monoid is the set of prime numbers. Relationship between addition and multiplication Addition and multiplication are compatible, which is expressed in the distribution law: a × (b + c) = (a × b) + (a × c). These properties of addition and multiplication make the natural numbers an instance of a commutative semiring. Semirings are an algebraic generalization of the natural numbers where multiplication is not necessarily commutative. The lack of additive inverses, which is equivalent to the fact that ℕ is not closed under subtraction (that is, subtracting one natural from another does not always result in another natural), means that ℕ is not a ring; instead it is a semiring (also known as a rig). If the natural numbers are taken as "excluding 0", and "starting at 1", the definitions of + and × are as above, except that they begin with a + 1 = S(a) and a × 1 = a. Furthermore, (ℕ*, +) has no identity element. Order In this section, juxtaposed variables such as ab indicate the product a × b, and the standard order of operations is assumed. A total order on the natural numbers is defined by letting a ≤ b if and only if there exists another natural number c where a + c = b. This order is compatible with the arithmetical operations in the following sense: if a, b and c are natural numbers and a ≤ b, then a + c ≤ b + c and ac ≤ bc. An important property of the natural numbers is that they are well-ordered: every non-empty set of natural numbers has a least element. The rank among well-ordered sets is expressed by an ordinal number; for the natural numbers, this is denoted as ω (omega). Division In this section, juxtaposed variables such as ab indicate the product a × b, and the standard order of operations is assumed.
While it is in general not possible to divide one natural number by another and get a natural number as result, the procedure of division with remainder or Euclidean division is available as a substitute: for any two natural numbers a and b with b ≠ 0 there are natural numbers q and r such that a = bq + r and r < b. The number q is called the quotient and r is called the remainder of the division of a by b. The numbers q and r are uniquely determined by a and b. This Euclidean division is key to the several other properties (divisibility), algorithms (such as the Euclidean algorithm), and ideas in number theory. Algebraic properties satisfied by the natural numbers The addition (+) and multiplication (×) operations on natural numbers as defined above have several algebraic properties: Closure under addition and multiplication: for all natural numbers a and b, both a + b and a × b are natural numbers. Associativity: for all natural numbers a, b, and c, a + (b + c) = (a + b) + c and a × (b × c) = (a × b) × c. Commutativity: for all natural numbers a and b, a + b = b + a and a × b = b × a. Existence of identity elements: for every natural number a, a + 0 = a and a × 1 = a. If the natural numbers are taken as "excluding 0", and "starting at 1", then for every natural number a, a × 1 = a. However, the "existence of additive identity element" property is not satisfied. Distributivity of multiplication over addition: for all natural numbers a, b, and c, a × (b + c) = (a × b) + (a × c). No nonzero zero divisors: if a and b are natural numbers such that a × b = 0, then a = 0 or b = 0 (or both). Generalizations Two important generalizations of natural numbers arise from the two uses of counting and ordering: cardinal numbers and ordinal numbers. A natural number can be used to express the size of a finite set; more precisely, a cardinal number is a measure for the size of a set, which is even suitable for infinite sets. The numbering of cardinals usually begins at zero, to accommodate the empty set ∅. This concept of "size" relies on maps between sets, such that two sets have the same size, exactly if there exists a bijection between them.
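The quotient-remainder property just stated can be illustrated by computing Euclidean division by repeated subtraction. This is a sketch added for illustration (Python's built-in divmod returns the same pair):

```python
# Euclidean division: for natural numbers a and b with b != 0, find
# q and r with a = b*q + r and r < b, by subtracting b until a < b.
def euclidean_division(a, b):
    if b == 0:
        raise ValueError("division by zero is undefined")
    q = 0
    while a >= b:
        a -= b
        q += 1
    return q, a   # (quotient, remainder)
```

For example, euclidean_division(17, 5) gives quotient 3 and remainder 2, since 17 = 5·3 + 2.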
The set of natural numbers itself, and any bijective image of it, is said to be countably infinite and to have cardinality aleph-null (ℵ₀). Natural numbers are also used as linguistic ordinal numbers: "first", "second", "third", and so forth. The numbering of ordinals usually begins at zero, to accommodate the order type of the empty set ∅. This way they can be assigned to the elements of a totally ordered finite set, and also to the elements of any well-ordered countably infinite set without limit points. This assignment can be generalized to general well-orderings with a cardinality beyond countability, to yield the ordinal numbers. An ordinal number may also be used to describe the notion of "size" for a well-ordered set, in a sense different from cardinality: if there is an order isomorphism (more than a bijection) between two well-ordered sets, they have the same ordinal number. The first ordinal number that is not a natural number is expressed as ω; this is also the ordinal number of the set of natural numbers itself. The least ordinal of cardinality ℵ₀ (that is, the initial ordinal of ℵ₀) is ω, but many well-ordered sets with cardinal number ℵ₀ have an ordinal number greater than ω. For finite well-ordered sets, there is a one-to-one correspondence between ordinal and cardinal numbers; therefore they can both be expressed by the same natural number, the number of elements of the set. This number can also be used to describe the position of an element in a larger finite, or an infinite, sequence. A countable non-standard model of arithmetic satisfying the Peano Arithmetic (that is, the first-order Peano axioms) was developed by Skolem in 1933. The hypernatural numbers are an uncountable model that can be constructed from the ordinary natural numbers via the ultrapower construction. Other generalizations are discussed in . Georges Reeb used to claim provocatively that "The naïve integers don't fill up ℕ".
Formal definitions There are two standard methods for formally defining natural numbers. The first one, named for Giuseppe Peano, consists of an autonomous axiomatic theory called Peano arithmetic, based on a few axioms called the Peano axioms. The second definition is based on set theory. It defines the natural numbers as specific sets. More precisely, each natural number n is defined as an explicitly defined set, whose elements allow counting the elements of other sets, in the sense that the sentence "a set S has n elements" means that there exists a one to one correspondence between the two sets n and S. The sets used to define natural numbers satisfy the Peano axioms. It follows that every theorem that can be stated and proved in Peano arithmetic can also be proved in set theory. However, the two definitions are not equivalent, as there are theorems that can be stated in terms of Peano arithmetic and proved in set theory, which are not provable inside Peano arithmetic. A probable example is Fermat's Last Theorem. The definition of the natural numbers as sets satisfying the Peano axioms provides a model of Peano arithmetic inside set theory. An important consequence is that, if set theory is consistent (as is usually assumed), then Peano arithmetic is consistent. In other words, if a contradiction could be proved in Peano arithmetic, then set theory would be contradictory, and every theorem of set theory would be both true and false. Peano axioms The five Peano axioms are the following: 0 is a natural number. Every natural number has a successor which is also a natural number. 0 is not the successor of any natural number. If the successor of x equals the successor of y, then x equals y. The axiom of induction: If a statement is true of 0, and if the truth of that statement for a number implies its truth for the successor of that number, then the statement is true for every natural number. These are not the original axioms published by Peano, but are named in his honor.
Some forms of the Peano axioms have 1 in place of 0. In ordinary arithmetic, the successor of x is x + 1. Set-theoretic definition Intuitively, the natural number n is the common property of all sets that have n elements. So, it seems natural to define n as an equivalence class under the relation "can be made in one to one correspondence". This does not work in all set theories, as such an equivalence class would not be a set (because of Russell's paradox). The standard solution is to define a particular set with n elements that will be called the natural number n. The following definition was first published by John von Neumann, although Levy attributes the idea to unpublished work of Zermelo in 1916. As this definition extends to infinite sets as a definition of ordinal number, the sets considered below are sometimes called von Neumann ordinals. The definition proceeds as follows: Call 0 = { }, the empty set. Define the successor S(a) of any set a by S(a) = a ∪ {a}. By the axiom of infinity, there exist sets which contain 0 and are closed under the successor function. Such sets are said to be inductive. The intersection of all inductive sets is still an inductive set. This intersection is the set of the natural numbers. It follows that the natural numbers are defined iteratively as follows: 0 = { }, 1 = {0} = {{ }}, 2 = {0, 1} = {{ }, {{ }}}, 3 = {0, 1, 2}, etc. It can be checked that the natural numbers satisfy the Peano axioms. With this definition, given a natural number n, the sentence "a set S has n elements" can be formally defined as "there exists a bijection from n to S." This formalizes the operation of counting the elements of S. Also, m ≤ n if and only if m is a subset of n. In other words, the set inclusion defines the usual total order on the natural numbers. This order is a well-order. It follows from the definition that each natural number is equal to the set of all natural numbers less than it.
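The iterative von Neumann construction above can be mimicked concretely with Python frozensets standing in for the sets of the construction (an illustrative sketch, not part of the article):

```python
# Von Neumann naturals: 0 is the empty set and S(a) = a UNION {a}.
# Each natural n then literally has n elements, m <= n corresponds to
# m being a subset of n, and each natural contains all smaller ones.
def successor(a):
    return a | frozenset([a])          # S(a) = a ∪ {a}


def von_neumann(n):
    x = frozenset()                    # 0 = { }
    for _ in range(n):
        x = successor(x)
    return x
```

For instance, von_neumann(3) is a three-element set containing von_neumann(0), von_neumann(1) and von_neumann(2), matching "each natural number is equal to the set of all natural numbers less than it."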
This definition can be extended to the von Neumann definition of ordinals for defining all ordinal numbers, including the infinite ones: "each ordinal is the well-ordered set of all smaller ordinals." If one does not accept the axiom of infinity, the natural numbers may not form a set. Nevertheless, the natural numbers can still be individually defined as above, and they still satisfy the Peano axioms. There are other set-theoretical constructions. In particular, Ernst Zermelo provided a construction that is nowadays only of historical interest, and is sometimes referred to as the Zermelo ordinals. It consists in defining 0 as the empty set, and S(a) = {a}. With this definition each nonzero natural number is a singleton set. So, the property of the natural numbers of representing cardinalities is not directly accessible; only the ordinal property (being the nth element of a sequence) is immediate. Unlike von Neumann's construction, the Zermelo ordinals do not extend to infinite ordinals. See also Sequence – Function of the natural numbers in another set Notes References Bibliography – English translation of . External links Cardinal numbers Elementary mathematics Integers Number theory Sets of real numbers
Natural number
Mathematics
4,260
7,807,871
https://en.wikipedia.org/wiki/Basu%27s%20theorem
In statistics, Basu's theorem states that any boundedly complete minimal sufficient statistic is independent of any ancillary statistic. This is a 1955 result of Debabrata Basu. It is often used in statistics as a tool to prove independence of two statistics, by first demonstrating one is complete sufficient and the other is ancillary, then appealing to the theorem. An example of this is to show that the sample mean and sample variance of a normal distribution are independent statistics, which is done in the Example section below. This property (independence of sample mean and sample variance) characterizes normal distributions. Statement Let Pθ, θ ∈ Θ, be a family of distributions on a measurable space (X, Σ), and let T and A be statistics mapping from (X, Σ) to some measurable space (Y, Ξ). If T is a boundedly complete sufficient statistic for θ, and A is ancillary to θ, then, conditional on θ, T is independent of A. Proof Let PθT and PθA be the marginal distributions of T and A respectively. Denote by A−1(B) the preimage of a set B under the map A. For any measurable set B ⊆ Y we have PθA(B) = Pθ(A−1(B)) = ∫ Pθ(A−1(B) | T = t) PθT(dt). The distribution PθA does not depend on θ because A is ancillary; write it PA. Likewise, Pθ(· | T = t) does not depend on θ because T is sufficient; write it P(· | T = t). Therefore, for every θ, ∫ [PA(B) − P(A−1(B) | T = t)] PθT(dt) = 0. Note the integrand (the function inside the integral) is a function of t and not θ. Therefore, since T is boundedly complete, the bounded function g(t) = PA(B) − P(A−1(B) | T = t) is zero for almost all values of t, and thus PA(B) = P(A−1(B) | T = t) for almost all t. Therefore, A is independent of T. Example Independence of sample mean and sample variance of a normal distribution Let X1, X2, ..., Xn be independent, identically distributed normal random variables with mean μ and variance σ2. Then with respect to the parameter μ, one can show that the sample mean, X̄ = (X1 + ... + Xn)/n, is a complete and sufficient statistic – it is all the information one can derive to estimate μ, and no more – and the sample variance, s2 = Σi (Xi − X̄)2/(n − 1), is an ancillary statistic – its distribution does not depend on μ. Therefore, from Basu's theorem it follows that these statistics are independent of one another for any value of μ.
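The example above can be checked numerically: if the sample mean and sample variance of normal data are independent, their empirical correlation over many simulated samples should be near zero. A quick illustrative sketch (not a proof; the parameter values and thresholds are arbitrary):

```python
import random
from statistics import mean, stdev

random.seed(0)
mu, sigma, n, reps = 5.0, 2.0, 10, 50_000

means, variances = [], []
for _ in range(reps):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = mean(x)
    s2 = sum((xi - xbar) ** 2 for xi in x) / (n - 1)  # sample variance
    means.append(xbar)
    variances.append(s2)

# Basu's theorem: for normal data the sample mean and sample variance are
# independent, so their empirical correlation should be close to zero.
mb, vb = mean(means), mean(variances)
cov = sum((m - mb) * (v - vb) for m, v in zip(means, variances)) / (reps - 1)
r = cov / (stdev(means) * stdev(variances))
assert abs(r) < 0.03
```

Repeating the experiment with a non-normal distribution (e.g. exponential) produces a clearly nonzero correlation, consistent with the characterization property mentioned above.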
This independence result can also be proven by Cochran's theorem. Further, this property (that the sample mean and sample variance of the normal distribution are independent) characterizes the normal distribution – no other distribution has this property. Notes References Mukhopadhyay, Nitis (2000). Probability and Statistical Inference. Statistics: A Series of Textbooks and Monographs 162. Florida: CRC Press USA. Theorems in statistics Independence (probability theory) Articles containing proofs
Basu's theorem
Mathematics
504
24,488,598
https://en.wikipedia.org/wiki/C22H23O11
{{DISPLAYTITLE:C22H23O11}} The molecular formula C22H23O11 (aglycone cation: molar mass 463.41 g/mol, exact mass 463.124036578 u; as the chloride C22H23O11Cl: molar mass 498.9 g/mol) may refer to: Peonidin-3-O-glucoside Pulchellidin 3-rhamnoside
C22H23O11
Chemistry
121
46,832,709
https://en.wikipedia.org/wiki/REACH%20authorisation%20procedure
The authorisation procedure is one of the regulatory tools of the European regulation (EC) No 1907/2006 (REACH), aiming to ban the use of substances of very high concern (SVHC) included in Annex XIV of REACH, so as to replace them with technically and economically feasible alternatives. This process concerns manufacturers, importers and downstream users of substances. Only Representatives of foreign manufacturers can also apply for an authorisation. Authorisation today impacts many industries, including the aerospace, electronics, automotive, energy, and paint industries. Moreover, defence applications are not de facto exempted from the authorisation process. Member states must decide on a case-by-case basis whether a company can benefit or not from this exemption (outlined in article 2.3 of REACH). General principles The REACH regulation relies on four main procedures: registration, evaluation, restriction and authorisation of chemical substances. From the Candidate List to Annex XIV of REACH EU Member States or the European Chemicals Agency (ECHA), on request of the European Commission, can submit proposals to identify Substances of Very High Concern, based on the criteria laid down in article 57 of REACH: Carcinogenic, mutagenic or toxic for reproduction, meeting the criteria for classification as CMR category 1A or 1B in accordance with Regulation No 1272/2008 (CLP), Persistent, bioaccumulative and toxic (PBT), Very persistent and very bioaccumulative (vPvB), Or substances of equivalent concern. This work is supported by Expert Groups of ECHA and the EU Member States and is based on various criteria and screening methodologies in order to identify the most relevant SVHCs. Annex XIV is the last step of this prioritisation process. It lists SVHCs which exhibit a particularly high risk for human health or the environment (based on their inherent properties, quantities and uses), in order to forbid their use on the EU market.
Recommendations to include SVHCs in Annex XIV are made by ECHA and are debated by all relevant stakeholders (Member States, companies, NGOs, etc.). The final decision to include a substance in Annex XIV is taken by the European Commission. When listed in Annex XIV of the REACH regulation, a substance is assigned a "sunset date" after which its use is banned, unless an authorisation is granted for a definite period of time. As of 2 March 2022, 223 substances are listed on the Candidate List and 54 substances are listed in Annex XIV. The Candidate List is usually updated every 6 months, and Annex XIV every 12 to 18 months. Scope of authorisation The authorisation procedure is complex and concerns manufacturers, importers, downstream users and Only Representatives of substances for which: No alternative to the Annex XIV substance is deemed technically and/or economically feasible, or Alternatives may exist but still need time to be fully qualified and deployed. The ban on the use takes effect at the sunset date. As of this date, the use of the substance is only possible for companies which have been granted an authorisation, or for those that have submitted their dossier before the Latest Application Date. The latter benefit from a transitional period, pending the European Commission's final decision. An exception is made for downstream users in the case where an upstream actor within the supply chain has been granted an authorisation for this very substance and this very use. From this point of view, it is the supply chain of the substance that matters. For instance, subcontractors of authorised importers will not be covered if the Annex XIV substance they use is supplied via a supply chain for which no authorisation application has been made or granted.
Finally, downstream users that do not need to apply for an authorisation nevertheless have the obligation to notify their use(s) to ECHA (art. 66 of REACH) and to check the compliance of their risk management measures. Concerned companies are thus invited to take measures as soon as a substance they use enters the Candidate List, by enquiring about the impacted actors and their strategies. The use applied for "Use" under the authorisation process Authorisation applications are made for one or several specific uses. Article 3 of REACH thus defines a "use" as: "any processing, formulation, consumption, storage, keeping, treatment, filling into containers, transfer from one container to another, mixing, production of an article or any other use". In the framework of an authorisation dossier, the description of the use applied for has to specify the market, the supply chain, the processes or the type of articles concerned. The use applied for has to be consistent enough to cover the Exposure Scenario but also the Analysis of Alternatives. The use applied for should not be mistaken for the identified use, which corresponds to the REACH registration process. An identified use focuses on the process and does not consider questions of performance or markets. Cases where no authorisation is needed A few exceptions exist, for which the application for authorisation is not required: The manufacture of a substance does not require an authorisation, since it is not considered as a use; Impurities, additives and components of another substance do not constitute a use of the substance as such (except if Annex XIV refers to them); The formulation of a mixture is considered as a use under REACH, and the substance of Annex XIV is only subject to authorisation if it exceeds the required concentrations.
While the production of an article may require an authorisation at some point, finished articles themselves do not require an authorisation to be placed on the market, even though they still contain a substance subject to authorisation. Consequently, articles requiring the use of an Annex XIV substance can still be produced outside the EU and then imported. In that particular case, a future restriction procedure could however limit the placing on the market of such articles if a risk remains (art. 58.6 of REACH). Review period The review period is the duration for which the EU Commission authorises the use of a substance after the sunset date. The following durations are considered for review periods: Less than 7 years: in case the Analysis of Alternatives report is insufficient and/or doubts remain on the impacts of granting an authorisation or, alternatively, when a quick transition is possible; 7 years: standard duration to develop a technically and economically feasible alternative solution; 12 years or more: long investment cycles, low risks. In exceptional cases, a longer duration may be considered if it can be demonstrated that 12 years would create disproportionate impacts compared with a longer review period. At the end of the review period, the application for authorisation is reassessed to evaluate the progress made in terms of research & development or substitution. Applications to extend the review period have to be made at the latest 18 months before the expiry date. The European Commission may also reduce this duration if new circumstances appear, in terms of risks or impacts. Only the Court of Justice of the European Union is competent to rule on appeals concerning applications for authorisation. Member States are, in turn, responsible for controlling the implementation of decisions.
The Application for Authorisation dossier (AfA) The application for authorisation (AfA) is made up of three main parts: the Chemical Safety Report (CSR), the Analysis of Alternatives (AoA) and the Socio-Economic Analysis (SEA). The goal of this dossier is to demonstrate that no alternative substance is immediately available, that risks are controlled, and that the social and economic advantages of the use of the substance outweigh the risks to human health or the environment. The dossier usually takes 6 to 18 months of preparation, and ECHA's guidelines are available to assist with its drafting. The application for authorisation should be filed before the Latest Application Date (LAD), set at 18 months before the sunset date. Filing before the LAD enables applicants to benefit from a transitional period, pending the European Commission's decision. Basics of an AfA An AfA can be made for one or several substances (in that case, grouping will need to be demonstrated on the basis of Annex XI of REACH), for one or several uses applied for, and by one or several companies. The latter case is called a joint application, and it requires appointing a main applicant that will be the contact point for ECHA. Two submission routes Two submission routes are provided for by the REACH regulation: the adequate control route and the socio-economic analysis route. Content of the dossier Chemical Safety Report (CSR) For the adequate control route, the goal of the Chemical Safety Report is to prove that threshold values are respected; for the socio-economic route, the goal of the Chemical Safety Report is to demonstrate that risks are reduced to the minimum.
The Chemical Safety Report contains: A summary of the risk management measures A declaration on the implementation and the communication of risk management measures along the supply chain The identity of the substance and the identified uses The human health and environmental hazard assessment The assessment of the properties for which the substance was included in Annex XIV The exposure assessment The risk characterisation Analysis of Alternatives (AoA) The AoA aims to demonstrate that no alternative is appropriate, i.e. technically and/or economically feasible, less risky and available. The Analysis of Alternatives therefore presents all the substance's alternative solutions and contains: Part 1 – Summary Part 2 – Description of the substance's function Part 3 – Identification of potential alternatives Part 4 – Description of the suitability and availability of alternatives Part 5 – Conclusion of the report Socio-Economic Analysis (SEA) The Socio-Economic Analysis is a compulsory document for the socio-economic route and can also complement an application justified by the adequate control route. It aims to demonstrate that the advantages of the use of the substance outweigh the risks to human health or the environment. For this purpose, applicants must compare two scenarios: the ‘use scenario’ (continued use of the substance) on the one hand and the ‘non-use scenario’ (cessation of use of the substance) on the other hand, and discuss their impacts. It contains: Part 1 – Summary Part 2 – Definition of the objective and scope Part 3 – Analysis of impacts Part 4 – Comparison of impacts Part 5 – Conclusion of the report Submission of the dossier Dossiers should be submitted during submission windows in February, May, August and November. ECHA strongly advises following these windows, as plenary sessions of the two committees (RAC and SEAC) are organised in March, June, September and December of each year.
Submitting before the plenary sessions, during the submission windows, therefore helps the efficient assessment of the application. Applications are deemed received after the business rules check is successfully passed and provided ECHA's fees are paid on time. Examination of the dossier The examination of the dossier is carried out by the Risk Assessment Committee (RAC) and the Socio-Economic Analysis Committee (SEAC) and opens with a public consultation on alternatives. For 8 weeks, companies, NGOs or any other interested party have the possibility to comment on and possibly challenge the alternatives proposed by the applicant. The consultation can be followed by additional discussions with the two committees in order to clarify the application. This process is called a trialogue, and stakeholders can be invited to participate. The duration of the examination varies according to the complexity and the clarity of the dossier. The committees however have the obligation to deliver their first opinion on the dossier at the latest 10 months after its submission. The dossier, together with the committees' opinion, is then sent to the Commission. The whole process can take up to 2 years. Implementation and feedback As of May 4, 2015, 28 applications for authorisation had been filed, for a total of 56 uses. A specific strategy for every use Each dossier requires implementing a specific strategy, be it to define the uses, the analysis of the industrial processes and associated risks, the alternatives, or the socio-economic impacts of banning the substance. The committees expect each dossier to contain a precise description of the industrial process and the operational conditions that are representative of the dossier, as well as the risk management measures implemented by the applicant. The main issue in the application for authorisation process is the duration of the review period which will be granted.
It is therefore critical to bring all the necessary precision to the dossier to justify the requested review period duration. A justification that is too weak or an argumentation that is too generic may lead to the granting of a shorter-than-requested review period. An in-depth analysis of the companies' activity Beyond the simple drafting of the application for authorisation dossier, the whole process implies both expertise and a deep analysis of a company's activity, on multiple aspects: Technical aspects (Chemical Safety Report, Analysis of Alternatives), Business aspects (supply chain security, public consultation) and Strategic aspects (anticipating the development and growth of the activity within a 5- to 10-year timeframe). This analysis therefore requires an extensive collection of information and also, possibly, contacts with customers so as to strengthen the analysis of alternatives. Public consultation at the heart of the decision Public consultation is one of the core mechanisms of the authorisation process. The involvement of stakeholders (competing companies, universities, laboratories, NGOs, Member States, etc.) in the consultation process has been growing over recent years (reaching up to 400 comments for a single dossier), and the influence of comments, in particular concerning alternatives, makes it a major step of the authorisation process. In order to streamline this process, templates for comments and instructions are available on the ECHA website. Consulting costs The consulting costs of an application for authorisation have been estimated by ECHA and amount, on average, to around 230,000 EUR for a single use. Notes and references References External links Access control Chemical safety European Union regulations Toxicology
REACH authorisation procedure
Chemistry,Environmental_science
2,803
286,789
https://en.wikipedia.org/wiki/The%20Colophon%2C%20A%20Book%20Collectors%27%20Quarterly
The Colophon, subtitled A Book Collectors' Quarterly or A quarterly for booklovers, was a limited edition quarterly periodical begun late in 1929 and continuing in various guises until 1950. It was the brainchild of Elmer Adler (1884–1962), founder of Pynson Printers of New York City. His idea was that various printers around the world would be willing to contribute their time and expertise to produce signatures (articles) using their own choice of papers, typography and illustration. These articles would then be bound together in boards by Pynson Printers and marketed to 2,000 subscribers. Content Some articles comment on a current or historical issue related to printing, publishing or art. Other articles were themselves intended to be an example of printing or a work of art. As each article or item was normally a short section produced by a different designer and printer, a typical issue included a range of styles, papers, and typography, often using unusual or experimental approaches not widely seen in longer or mainstream printed items. Each issue would also include an original piece of graphic art, some signed by the artist, many produced by using etching, lithography, or engraving. Some of these are rare and valuable; as a result, some copies of the bound volumes have been vandalized to remove the prints. Editorial board Adler gathered around him an editorial board of John T. Winterich, Alfred Stanford and F.B. Adams, Jr. and contributing editors including Rockwell Kent, W. A. Dwiggins, Frederic Goudy, Dard Hunter, Bruce Rogers, A. Edward Newton and many others who were well known in the book world. It was their responsibility to provide not only editorial expertise but articles (and in the case of Rockwell Kent, The Colophon logos and colophons). Publication history Beginning in early 1930, this "adventure in enthusiasm", as Adler called it, was greeted with public enthusiasm as well, requiring a waiting list for subscriptions. 
However, it soon ran into difficulty as the Depression made the then costly subscription price of $15 per year difficult for many. By 1935, only 1,700 subscribers could be found. Nonetheless, the quality of The Colophon remained unsurpassed, through the good will of printers, authors and artists, as well as with the help of a number of anonymous financial gifts. From 1935 to 1938, The Colophon entered a new phase with less lofty production values (at a price of $6 per year), before returning to a higher level of quality in 1939 with the 'New Graphic Series'. Starting again in 1948, the name The New Colophon: A Book Collectors' Quarterly was used by Philip Duschnes and the quarterly was entirely printed by the Anthoensen Press of Portland, Maine, continuing publication until 1950 in a fourth and final format. Contributors Many of the writers of the day, such as Sherwood Anderson and Edith Wharton, wrote autobiographical articles about their first books, and many artists now famous, such as Paul Landacre, Gustave Baumann, Howard Cook, and Emil Ganso, provided original prints. Issues Issues of The Colophon and the principal related publications (48 volumes) were: Original Series (20): Vol. 1, part 1 (Feb. 1930) - v. 5, part 20 (Mar. 1935), quarto. Parts 1-4 issued to 2,000 subscribers; 5–12, to 3,000 subscribers; 13–16, to 2,500 subscribers; pt. 17–20 to 1,700 subscribers. Index, the Colophon, 1930–1935; with a history of the quarterly by John T. Winterich New Series (12): Vol. 1, no. 1 (Summer 1935) - v. 3, no. 4 (Autumn 1938), octavo The Annual of Bookmaking (1): 1938, quarto New Graphic Series (4): Vol. 1, no. 1 (Mar. 1939) - v. 1, no. 4 (Feb. 1940), quarto The New Colophon (9): Vol. 1, pt. 1 (Jan. 1948)- v. 3 (1950); v. 1-2 also pts. 1-8 [New York, N.Y. : Duschnes Crawford], quarto Index 1936 - 1950; An Index to The Colophon, New Series . . . . by Keller, Dean H.
Metuchen: Scarecrow Press, 1968. 139 pp. Boards. Covers the three series of The Colophon published after the conclusion of the original series: New Series, New Graphic Series, and the New Colophon. Cover artists Here is a list of the cover artists through 1935: Edward A. Wilson Part 1 1930 February Joseph Sinel Part 2 1930 May Gustave Jensen Part 3 1930 September Donald McKay Part 4 1930 December W. A. Dwiggins Part 5 1931 March T.M Cleland Part 6 1931 June Leroy Appleton Part 7 1931 September Frank McIntosh Part 8 1931 December Edward A. Wilson Part 9 1932 February Boris Artzybasheff Part 10 1932 May T.M. Cleland Part 11 1932 September Ervine A. Metzl Part 12 1932 December John Atherton Part 13 1933 February Marie Lawson Part 14 1933 June Jack Tinker Part 15 1933 October Louis Bouché Part 16 1934 March Carl Noell Part 17 1934 June Farle A. Drewry Part 18 1934 September Kirk C. Wilkinson Part 19 1934 December Frederic W. Goudy Part 20 1935 March References External links An online archive at Carnegie Mellon University Library's site has images of many of the pages. A searchable index for this periodical artistarchive.com Book design Defunct hobby magazines published in the United States Magazines established in 1929 Magazines disestablished in 1950 Magazines published in Maine Quarterly magazines published in the United States Mass media in Portland, Maine
The Colophon, A Book Collectors' Quarterly
Engineering
1,236
10,278
https://en.wikipedia.org/wiki/List%20of%20explosives%20used%20during%20World%20War%20II
Almost all the common explosives listed here were mixtures of several common components: Ammonium picrate TNT (Trinitrotoluene) PETN (Pentaerythritol tetranitrate) RDX Powdered aluminium. This is only a partial list; there were many others. Many of these compositions are now obsolete and only encountered in legacy munitions and unexploded ordnance. Two nuclear explosives, containing uranium and plutonium respectively, were also used in the bombings of Hiroshima and Nagasaki. See also List of Japanese World War II explosives Explosive material Little Boy Fat Man Explosives
List of explosives used during World War II
Chemistry
125
76,974,432
https://en.wikipedia.org/wiki/DevOps%20Research%20and%20Assessment
DevOps Research and Assessment (abbreviated to DORA) is a team that is part of Google Cloud that engages in opinion polling of software engineers to conduct research for the DevOps movement. The DORA team was founded by Nicole Forsgren, Jez Humble and Gene Kim, and conducted research for the DevOps company Puppet before later becoming an independent team (with Puppet continuing to produce reports by a new team). Whilst the founding members have departed, the DORA team continue to publish research in the form of annual State of DevOps Reports. State of DevOps Reports The DORA team began publishing State of DevOps Reports in 2013. The latest DORA State of DevOps Report, published in 2023, found culture and a customer-centric focus key to success, whilst AI was providing limited benefits. DORA Four Key Metrics For the purposes of their research, Four Key Metrics, sometimes referred to as DORA Metrics, are used to assess the performance of teams. The four metrics are as follows: Change Lead Time - Time to implement, test, and deliver code for a feature (measured from first commit to deployment) Deployment Frequency - Number of deployments in a given duration of time Change Failure Rate - Percentage of failed changes over all changes (regardless of success) Mean Time to Recovery (MTTR) - Time it takes to restore service after a production failure Using these performance measures, the team are able to assess how practices (like outsourcing) and risk factors impact performance metrics for an engineering team. These metrics can be crudely measured using psychometrics or using commercial services. Limitations These metrics have been used by organisations to evaluate team-by-team performance, a use case which the DORA team issued a warning against in October 2023. Some professionals have argued that using the DORA Four Key Metrics as a target within engineering teams encourages focus on the wrong incentives.
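The four key metrics defined above are simple aggregations over a team's deployment log. A minimal sketch in Python, using invented records (the timestamps, field layout and numbers are all hypothetical, not from DORA):

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical deployment records for one team over a 30-day window.
# Each entry: (first commit time, deploy time, did the change fail?,
#              time service was restored if it failed, else None)
T0 = datetime(2024, 1, 1)
deployments = [
    (T0, T0 + timedelta(days=2), False, None),
    (T0 + timedelta(days=3), T0 + timedelta(days=5), True,
     T0 + timedelta(days=5, hours=4)),
    (T0 + timedelta(days=10), T0 + timedelta(days=12), False, None),
    (T0 + timedelta(days=20), T0 + timedelta(days=21), False, None),
]
window_days = 30

# Change lead time: first commit -> deployment, averaged (in hours)
lead_time = mean((deploy - commit).total_seconds() / 3600
                 for commit, deploy, _, _ in deployments)

# Deployment frequency: deployments per day in the window
frequency = len(deployments) / window_days

# Change failure rate: failed changes over all changes
failure_rate = sum(failed for _, _, failed, _ in deployments) / len(deployments)

# Mean time to recovery: failed deployment -> restoration (in hours)
recoveries = [(restored - deploy).total_seconds() / 3600
              for _, deploy, failed, restored in deployments if failed]
mttr = mean(recoveries)
```

This also illustrates the limitation discussed below: all four numbers describe speed and stability of delivery only, and say nothing about whether the changes delivered any value.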
For example, James Walker, CEO at Curiosity Software, has argued the "metrics aren’t a definitive route to DevOps success" and has pointed to challenges in using them for team comparisons. Research conducted by the computer scientist Junade Ali and the British polling firm Survation found that both software engineers (when building software systems) and the public (when using software systems) found other factors mattered significantly more than the outcome measures treated as the "Four Key Metrics" (which ultimately measure the speed of resolving issues and the speed of fixing bugs, and are used to create the findings in the book), and that risk and reward appetite varies from sector to sector. Ali has also criticised the research on the basis that reputable opinion polling firms which comply with the rules of organisations like the British Polling Council publish their full results and raw data tables, which the DORA team did not do - and additionally that the sponsors of the polling (Google Cloud and previously Puppet) create products which have a vested interest in having software engineers deliver faster (despite research indicating high levels of burnout amongst software engineers), a conclusion which the results of the research ultimately supported. Despite the authors arguing that speed of delivery and software quality go hand in hand, Ali has offered several counter-examples, including the comparatively high quality of aviation software despite infrequent changes, contrasted with rapid application development being pioneered in the software that resulted in the British Post Office scandal and agile software development being used in the software responsible for the 2009–2011 Toyota vehicle recalls.
The software developer Bryan Finster has also discussed how, as correlation does not imply causation, organisations that are considered "high performing" in the research are not high performing because they focussed on the DORA metrics, but instead focussed on delivering value to users, arguing the metrics should be used as "trailing indicators for poor health, not indicators everything is going well". Accelerate (book) Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations is a software engineering book co-authored by Nicole Forsgren, Jez Humble and Gene Kim from their time in the DORA team. The book explores how software development teams using Lean Software and DevOps can measure their performance, and how the performance of software engineering teams impacts the overall performance of an organization. The book discusses their research conducted as part of the DORA team for the annual State of DevOps Reports. In total, the authors considered 23,000 data points from a variety of companies of various sizes (from start-ups to enterprises), for-profit and not-for-profit, and both those with legacy systems and those with modern systems. 24 Key Capabilities The authors outline 24 practices to improve software delivery which they refer to as "key capabilities" and group them into five categories.
Continuous Delivery Use version control for all production artifacts Automate your deployment process Implement Continuous Integration Use trunk-based development methods Implement test automation Support test data management Shift Left on Security Implement Continuous Delivery (CD) Architecture Use a Loosely Coupled Architecture Architect for Empowered Teams Product and Process Gather and Implement Customer Feedback Make the Flow of Work Visible through the Value Stream Work in Small Batches Foster and Enable Team Experimentation Lean Management and Monitoring Have a Lightweight Change Approval Process Monitor across Application and Infrastructure to Inform Business Decisions Check System Health Proactively Improve Processes and Manage Work with Work-In-Process (WIP) Limits Visualize Work to Monitor Quality and Communicate throughout the Team Cultural Support a Generative Culture Encourage and Support Learning Support and Facilitate Collaboration among Teams Provide Resources and Tools that Make Work Meaningful Support or Embody Transformational Leadership References External links Q&A on the Book Accelerate: Building and Scaling High Performance Technology Organizations Computer programming books Computer books
DevOps Research and Assessment
Technology
1,151
12,805,751
https://en.wikipedia.org/wiki/Historia%20naturalis%20palmarum
Historia naturalis palmarum: opus tripartitum ("Natural History of Palms, a work in three volumes") is a highly illustrated, three-volume botanical book of palms (Arecaceae) by German botanist Carl Friedrich Philipp von Martius. The work is in Latin and was published in imperial folio format in Leipzig (Lipsiae) by T.O. Weigel, volume one in 1823 and the final volume in 1850. It includes more than 550 pages of text and 240 chromolithographs, including views of habitats and botanical dissections. Historia naturalis palmarum was based on Martius' travels in Brazil and Peru with zoologist Johann Baptist von Spix from 9 December 1817 to 1820. Their expedition was sponsored by the King of Bavaria, Maximilian I, with instructions to investigate natural history and tribal Indians. The pair travelled over 2,250 km (1,400 mi) throughout the Amazon Basin, the most species-rich palm region in the world, collecting and sketching specimens. They began in Rio de Janeiro and São Paulo before making their way north and inland. They became the first non-Portuguese Europeans to obtain permission to visit the Brazilian Amazon. On their return to Bavaria, the King awarded both men knighthoods and lifetime pensions. In the first volume, Martius outlined the modern classification of palms and prepared the first maps of palm biogeography. The second volume described the palms of Brazil, and in the third, known as Expositio Systematica, he systematically described all known genera of the palm family, based on his own work and that of others. The majority of drawings of palms for the second volume, dedicated to Brazilian palms, were credited to Martius, with just a few landscapes, representing areas not travelled by Martius, taken from works by Frans Post and Johann Moritz Rugendas. The book was reprinted in two volumes in 1971. 
Other works by Martius based on the expedition were Reise in Brasilien (Journey in Brazil), published in three volumes in 1823, 1828 and 1831, and the massive 40-volume monograph Flora Brasiliensis, which was completed by others in 1906. E. J. H. Corner (1966) described the book as "the most magnificent treatment of palms that has been produced". Alexander von Humboldt said of the work and author: "For as long as palms are named and known, the name of Martius will be famous." See also Genera Palmarum Notes References — online copy External links 1823 non-fiction books 19th-century books in Latin Florae (publication) Botany in South America Books about Brazil
Historia naturalis palmarum
Biology
531
1,392,529
https://en.wikipedia.org/wiki/Geobacteraceae
The Geobacteraceae are a family within the Thermodesulfobacteriota. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI) See also List of bacterial orders List of bacteria genera References Bacteria families Thermodesulfobacteriota
Geobacteraceae
Biology
80
76,016,053
https://en.wikipedia.org/wiki/Moral%20geography
Moral geographies (a term coined by Felix Driver) are, according to David Smith (2000), the study of human geography with a normative emphasis. The kinds of questions examined include whether distance from a phenomenon lessens one's duty, whether there is a substantial difference between private spaces and public spaces, and which moral positions are personal, which are societal, which are absolute and which are relative. One key question is how to respect difference whilst recognizing universal rights. References Human geography
Moral geography
Environmental_science
105
1,710,903
https://en.wikipedia.org/wiki/Jaakko%20Hintikka
Kaarlo Jaakko Juhani Hintikka (12 January 1929 – 12 August 2015) was a Finnish philosopher and logician. Hintikka is regarded as the founder of formal epistemic logic and of game semantics for logic. Life and career Hintikka was born in Helsingin maalaiskunta (now Vantaa). In 1953, he received his doctorate from the University of Helsinki for a thesis entitled Distributive Normal Forms in the Calculus of Predicates. He was a student of Georg Henrik von Wright. Hintikka was a Junior Fellow at Harvard University (1956-1969), and held several professorial appointments at the University of Helsinki, the Academy of Finland, Stanford University, Florida State University and finally Boston University from 1990 until his death. A prolific author or co-author of over 30 books and over 300 scholarly articles, Hintikka contributed to mathematical logic, philosophical logic, the philosophy of mathematics, epistemology, language theory, and the philosophy of science. His works have appeared in over nine languages. Hintikka edited the academic journal Synthese from 1962 to 2002, and was a consultant editor for more than ten journals. He was the first vice-president of the Fédération Internationale des Sociétés de Philosophie, the vice-president of the Institut International de Philosophie (1993–1996), as well as a member of the American Philosophical Association, the International Union of History and Philosophy of Science, Association for Symbolic Logic, and a member of the governing board of the Philosophy of Science Association. In 2005, he won the Rolf Schock Prize in logic and philosophy "for his pioneering contributions to the logical analysis of modal concepts, in particular the concepts of knowledge and belief". In 1985, he was president of the Florida Philosophical Association. He was a member of the Norwegian Academy of Science and Letters. 
On May 26, 2000, Hintikka received an honorary doctorate from the Faculty of History and Philosophy at Uppsala University, Sweden Philosophical work Early in his career, he devised a semantics of modal logic essentially analogous to Saul Kripke's frame semantics, and discovered the now widely taught semantic tableau, independently of Evert Willem Beth. Later, he worked mainly on game semantics, and on independence-friendly logic, known for its "branching quantifiers", which he believed do better justice to our intuitions about quantifiers than does conventional first-order logic. He did important exegetical work on Aristotle, Immanuel Kant, Ludwig Wittgenstein, and Charles Sanders Peirce. Hintikka's work can be seen as a continuation of the analytic tendency in philosophy founded by Franz Brentano and Peirce, advanced by Gottlob Frege and Bertrand Russell, and continued by Rudolf Carnap, Willard Van Orman Quine, and by Hintikka's teacher Georg Henrik von Wright. For instance, in 1998 he wrote The Principles of Mathematics Revisited, which takes an exploratory stance comparable to that Russell made with his The Principles of Mathematics in 1903. Selected books For a bibliography, see Auxier and Hahn (2006). 1962. Knowledge and Belief – An Introduction to the Logic of the Two Notions 1969. Models for Modalities: Selected Essays 1973 Logic, Language-Games and Information: Kantian Themes in the Philosophy of Logic 1975. The intentions of intentionality and other new models for modalities 1976. The semantics of questions and the questions of semantics: case studies in the interrelations of logic, semantics, and syntax 1989. The Logic of Epistemology and the Epistemology of Logic 1996. Ludwig Wittgenstein: Half-Truths and One-and-a-Half-Truths 1996. Lingua Universalis vs Calculus Ratiocinator 1996. The Principles of Mathematics Revisited 1998. Paradigms for Language Theory and Other Essays 1998. Language, Truth and Logic in Mathematics 1999. 
Inquiry as Inquiry: A Logic of Scientific Discovery 2004. Analyses of Aristotle 2007. Socratic Epistemology: Explorations of Knowledge-Seeking by Questioning See also Rudolf Carnap Saul Kripke Charles Sanders Peirce Willard Van Orman Quine Alfred Tarski Ludwig Wittgenstein Doxastic logic Hintikka set Notes Further reading Auxier, R.E., & Hahn, L. (eds.) 2006. The Philosophy of Jaakko Hintikka (The Library of Living Philosophers). Open Court. Includes a complete bibliography of Hintikka's publications. Bogdan, Radu (ed.) 1987. Jaakko Hintikka. Kluwer Academic Publishers. Kolak, Daniel 2001. On Hintikka. Wadsworth. Kolak, Daniel & Symons, John (eds.) 2004. Quantifiers, Questions and Quantum Physics: Essays on the Philosophy of Jaakko Hintikka. Springer. External links Jaakko Hintikka's personal website Philosopher Jaakko Hintikka reveals love affair between his wife and JFK. Helsinki Times. Jaakko Hintikka in 375 humanists – 20 May 2015. Faculty of Arts, University of Helsinki. 1929 births 20th-century Finnish philosophers 21st-century Finnish philosophers Analytic philosophers Florida State University faculty Game theorists 2015 deaths Members of the Norwegian Academy of Science and Letters Foreign members of the Russian Academy of Sciences Modal logicians People from Vantaa Rolf Schock Prize laureates Boston University faculty Finnish expatriates in the United States Harvard Fellows Members of the American Philosophical Society Epistemologists
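The possible-world semantics associated with Hintikka's epistemic logic makes "the agent knows φ" true at a world exactly when φ holds at every world the agent considers possible from there. A minimal sketch of that evaluation follows; the worlds, accessibility relation, and valuation are invented for illustration, not drawn from any specific Hintikka text.

```python
# Minimal possible-world (Kripke-style) semantics for the epistemic
# operator K, in the spirit of Hintikka's "Knowledge and Belief".
# The model below (worlds, accessibility, valuation) is illustrative.

worlds = {"w1", "w2", "w3"}

# access[w] = the worlds the agent cannot distinguish from w
access = {
    "w1": {"w1", "w2"},
    "w2": {"w1", "w2"},
    "w3": {"w3"},
}

# valuation: the atomic propositions true at each world
val = {
    "w1": {"p"},
    "w2": {"p", "q"},
    "w3": {"q"},
}

def holds(world, prop):
    """Truth of an atomic proposition at a world."""
    return prop in val[world]

def knows(world, prop):
    """K(prop) at `world`: prop holds at every accessible world."""
    return all(holds(v, prop) for v in access[world])

print(knows("w1", "p"))  # True: p holds at both w1 and w2
print(knows("w1", "q"))  # False: q fails at w1 itself
print(knows("w3", "q"))  # True
```

The same pattern extends to nested operators (knowledge about knowledge) by making `knows` accept arbitrary formula objects rather than atomic proposition names.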
Jaakko Hintikka
Mathematics
1,143
1,241,487
https://en.wikipedia.org/wiki/Silverstone%20%28plastic%29
SilverStone is a non-stick plastic coating made by DuPont. Released in 1976, this three-coat (primer/midcoat/topcoat) fluoropolymer system formulated with PTFE and PFA produces a more durable finish than Teflon coating. As of 1980, DuPont required that pans carrying the brand be of a heavier weight than others on the market. After the coating was applied, the cookware was baked in a 700–800 degree oven to affix the coating. The process for creating SilverStone cookware begins with sandblasting the products, which creates an uneven surface that encourages adherence. A primer layer of Teflon is then sprayed on and baked at high heat to achieve "a secure mechanical grip." Gizmodo reported in 2014 that one or two additional layers were applied after the initial layer. References External links DuPont history of Silverstone If nothing sticks to Teflon, how does it stick to pans? Plastics Lubricants DuPont products Fluoropolymers Brand name materials
Silverstone (plastic)
Physics,Chemistry
219
45,198,458
https://en.wikipedia.org/wiki/Moroccan%20units%20of%20measurement
A number of units of measurement were used in Morocco to measure length, mass, capacity, etc. The metric system has been compulsory in Morocco since 1923. System before metric system A number of local units were used. Length Several units were used. These units were variable, not rigidly defined. Some units included: 1 cubit = 0.533 m 1 canna = 0.533 m 1 pic = 0.61 m 1 tonni = pic. The code, covid, covado, cadee, or dhra varied from 19.85 to 22.48 in; perhaps the best value was 20.92 in (0.5313 m). Mass Several units were used. These units were variable, not rigidly defined. Some units included: 1 = 507.5 g 1 = 507.5 g 1 = 3 kg 1 = 22 rotal 1 = 100 rotal. One rotl of commerce was equal to 1.19 lb, while one rotl of the markets was equal to 1.7 lb. Capacity Several units were used. These units were variable, not rigidly defined. Some units included: 1 = 56 L 1 = 56 L 1 mudd = 14 L 1 = 14 L. References Culture of Morocco Morocco
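The length values above reduce to simple conversion factors through metres. A small sketch follows; the factor table merely restates the approximate values listed above, which were variable in practice, and the function names are illustrative.

```python
# Approximate pre-metric Moroccan length units in metres, as listed
# above. These units were variable, not rigidly defined, so the
# factors are only nominal values.
TO_METRES = {
    "cubit": 0.533,
    "canna": 0.533,
    "pic": 0.61,
    "code": 0.5313,  # "best value" cited for the code/covado/dhra
}

def to_metres(value, unit):
    """Convert a quantity in a local unit to metres."""
    return value * TO_METRES[unit]

def convert(value, from_unit, to_unit):
    """Convert between two local units via metres."""
    return to_metres(value, from_unit) / TO_METRES[to_unit]

print(to_metres(10, "pic"))                  # 6.1 (metres)
print(round(convert(1, "canna", "pic"), 3))  # 0.874 (pic per canna)
```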
Moroccan units of measurement
Mathematics
261
57,262,748
https://en.wikipedia.org/wiki/Macalloy
McCalls Special Products Ltd is a British manufacturer of steel bar and cable components for tensioned concrete, ground anchors, curtain walling, and steel structures. It operates under the Macalloy brand and claims to be a world leader in that market. Macalloy's work supports landmarks including the Sphere at the Academy Museum of Motion Pictures, VTB Stadium, Stade Roland Garros, Tottenham Hotspur Stadium, Soccer City Stadium, Marina Bay Sands, Jewel Changi Airport, Forth Road Bridge, and Burj Al Arab. , of the company's products are exported. History McCall and Company Ltd was founded in 1921 by T H McCall, Edwin Llwewllyn Raworth, and C W Hamilton, on Queens Street, Sheffield to supply steel rebar for concrete contractors. In 1927, it moved to former railway engineering sheds on Nunnery Lane for two years then again to its steel supplier, United Bar Strip Mills' Templeborough Steelworks. In 1948, the firm began to manufacture bars for post tensioned concrete. These were used to reinforce a central span of the 1960 M2 Medway Bridge, then the world's widest prestressed concrete span. McCall and Company Ltd became a subsidiary of United Steel Companies Ltd in 1962 and three years later moved to Meadowhall Road, Rotherham. In 1966, United Steel Companies Ltd was nationalised as British Steel. Allied Steel and Wire Ltd purchased McCall and Company Ltd in 1975, relocating it to Hawke Street as McCalls Special Products Ltd. Allied Steel and Wire Ltd failed in 2002 and in 2003, a management team led by Peter Hoy purchased the assets of McCalls Special Products Ltd. A new Company was incorporated to continue the McCalls Special Products Ltd name and trade. In 2006, the business moved to the former Dinnington Colliery. Awards McCalls Special Products Ltd was awarded a Queens Award for Export Achievement in 1996 and in 2010, a Queen's Award for Enterprise: International Trade (Export). 
Controversies Clyde Arc Bridge Severfield plc subsidiary Watson Steel Structures Ltd fabricated the Clyde Arc Bridge in 2007. It had to be closed in 2008 because a clevis connector failed and a 35 metre long tension bar fell onto the carriageway. Another clevis was found to be cracked and it was decided to replace all 14 tension bars in the structure. Watson Steel Structures Ltd claimed £1.8 million from Macalloy, the clevis supplier, alleging its product was faulty. Macalloy denied the claim and countered Watson Steel Structures Ltd had only specified minimum yield stress for the components. Delhi footbridge collapse Twenty seven workers were injured, five of them seriously, by the collapse of a footbridge to the Delhi Commonwealth Games Stadium. The 2010 collapse was highlighted by commentators questioning how ready Delhi was to host the Games. Macalloy fabricated components of the 95 metre collapsed structure to a design provided by Tandon Consultants. The Government of Delhi opened an investigation and demanded explanations from Macalloy. Fatal accident In 2015, McCalls Special Products Ltd pleaded guilty to breaches of Section 2 and 3 of the Health and Safety at Work etc. Act 1974. It was fined £200,000 on each, to a total of £400,000, plus £16,804 in costs. The charges related to a 2013 fatality when the victim's clothes caught in a tape wrapping machine and he was dragged in, suffering crush injuries. Macalloy was criticised for inadequate machine guards and risk assessments. References External links Macalloy British companies established in 1921 Companies based in South Yorkshire Dinnington, South Yorkshire Steel companies of the United Kingdom Reinforced concrete Structural steel Privately held companies of the United Kingdom Family-owned companies of the United Kingdom Family-owned companies of England Privately held companies of England
Macalloy
Engineering
757
54,201,297
https://en.wikipedia.org/wiki/Dacuronium%20bromide
Dacuronium bromide (INN, BAN) (developmental code name NB-68) is an aminosteroid neuromuscular blocking agent which was never marketed. It acts as a competitive antagonist of the nicotinic acetylcholine receptor (nAChR). References Muscle relaxants Nicotinic antagonists Quaternary ammonium compounds Abandoned drugs Neuromuscular blockers Cyclopentanols Piperidines Acetate esters Bromides
Dacuronium bromide
Chemistry
99
14,441,061
https://en.wikipedia.org/wiki/GPR128
G protein-coupled receptor 128 is a protein encoded by the ADGRG7 gene. GPR128 is a member of the adhesion GPCR family. Adhesion GPCRs are characterized by an extended extracellular region often possessing N-terminal protein modules that is linked to a TM7 region via a domain known as the GPCR-Autoproteolysis INducing (GAIN) domain. Tissue distribution GPR128 is specifically expressed in human liver as well as in mouse bone marrow and intestinal tissues. Function Ni et al. showed that Gpr128 deletion in mice causes reduced body weight and increased intestinal contraction frequency. Clinical significance A 111-kb copy number gain with breakpoints within the TRK-fused gene (a target of translocations in lymphoma and thyroid tumors) and GPR128 has been identified in the genome of patients with atypical myeloproliferative neoplasms. Notably, the fused gene was also detected in a few healthy individuals. References External links Adhesion GPCR consortium G protein-coupled receptors
GPR128
Chemistry
224
16,334,130
https://en.wikipedia.org/wiki/On-die%20termination
On-die termination (ODT) is the technology where the termination resistor for impedance matching in transmission lines is located inside a semiconductor chip instead of on a printed circuit board (PCB). Overview of electronic signal termination In lower frequency (slow edge rate) applications, interconnection lines can be modelled as "lumped" circuits. In this case, there is no need to consider the concept of "termination". Under the low-frequency condition, every point in an interconnect wire can be assumed to have the same voltage as every other point for any instance in time. However, if the propagation delay in a wire, PCB trace, cable, or connector is significant (for example, if the delay is greater than 1/6 of the rise time of the digital signal), the "lumped" circuit model is no longer valid and the interconnect has to be analyzed as a transmission line. In a transmission line, the signal interconnect path is modeled as a circuit containing distributed inductance, capacitance, and resistance throughout its length. For a transmission line to minimize distortion of the signal, the impedance of every location on the transmission line should be uniform throughout its length. If there is any place in the line where the impedance is not uniform for some reason (open circuit, impedance discontinuity, different material) the signal gets modified by reflection at the impedance change point which results in distortion, ringing, and so forth. When the signal path has impedance discontinuity, in other words, an impedance mismatch, then a termination impedance with the equivalent amount of impedance is placed at the point of line discontinuity. This is described as "termination". For example, resistors can be placed on computer motherboards to terminate high-speed busses. There are several ways of termination depending on how the resistors are connected to the transmission line. Parallel termination and series termination are examples of termination methodologies. 
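The rule of thumb above, treating an interconnect as a transmission line once its propagation delay exceeds roughly 1/6 of the signal rise time, can be sketched numerically. The propagation velocity and the example trace dimensions below are illustrative assumptions, not values from the text.

```python
# Rule-of-thumb check: does a PCB trace need transmission-line
# treatment (and hence termination)? Treat it as a transmission line
# when propagation delay > rise_time / 6. The default velocity
# (~15 cm/ns, typical for FR-4) is an assumed illustrative value.

def needs_tline_treatment(length_cm, rise_time_ns, velocity_cm_per_ns=15.0):
    delay_ns = length_cm / velocity_cm_per_ns
    return delay_ns > rise_time_ns / 6.0

# A 10 cm trace with a 1 ns edge: delay 0.67 ns > 0.17 ns threshold
print(needs_tline_treatment(10, 1.0))  # True -> analyze as a transmission line
# A 1 cm trace with a slow 5 ns edge: delay 0.07 ns < 0.83 ns threshold
print(needs_tline_treatment(1, 5.0))   # False -> lumped model is adequate
```

The same comparison explains why ever-faster edge rates, rather than trace length alone, pushed termination from the board into the die.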
On-die termination Instead of having the necessary resistive termination located on the motherboard, the termination is located inside the semiconductor chips, a technique called On-Die Termination (abbreviated to ODT). Why is on-die termination needed? Although the termination resistors on the motherboard reduce some reflections on the signal lines, they are unable to prevent reflections resulting from the stub lines that connect to the components on the module card (e.g. DRAM module). A signal propagating from the controller to the components encounters an impedance discontinuity at the stub leading to the components on the module. The signal that propagates along the stub to the component (e.g. DRAM component) will be reflected onto the signal line, thereby introducing unwanted noise into the signal. In addition, on-die termination can reduce the number of resistor elements and the complexity of wiring on the motherboard. Accordingly, the system design can be simpler and more cost-effective. Example of ODT: DRAM On-die termination is implemented with several combinations of resistors on the DRAM silicon along with other circuit trees. DRAM circuit designers can use a combination of transistors that have different values of turn-on resistance. In the case of DDR2, there are three internal resistance values: 150 Ω, 75 Ω, and 50 Ω. The resistors can be combined to create a proper equivalent impedance value to the outside of the chip, whereby the signal line (transmission line) of the motherboard is controlled by the on-die termination operation signal. Where an on-die termination value control circuit exists, the DRAM controller manages the on-die termination resistance through a programmable configuration register that resides in the DRAM. The internal on-die termination values in DDR3 are 120 Ω, 60 Ω, 40 Ω, and so forth. How On-Die Termination (ODT) Works: An Example of DRAM Utilizing On-Die Termination (ODT) involves two steps. 
First, the On-Die Termination (ODT) value must be selected within the DRAM. Second, it can be dynamically enabled/disabled using the ODT pin from the ODT controller. ODT can be configured by different methods. In DRAM, it is done by setting up the device's extended mode register with the proper ODT value. There are synchronous and asynchronous timing requirements, depending on the state of the DRAM device. Essentially, the On-Die Termination (ODT) is turned on just before the data transfer and then shut off immediately after. If there is more than one DRAM device loaded on the channel, either the active or inactive DRAM can terminate the signal. This flexibility enables optimal termination to occur as precisely as needed. Let's try to understand how On-Die Termination (ODT) works in DRAM read and write operations. All data-group signals fall under point-to-point signaling. The data-group signals are driven by the DRAM controller on writes and driven by the DRAM memories during reads. No external resistors are needed on these routes on the PCB, as the DRAM controller and memory are equipped with ODT. The receivers in both cases (DRAM memory on writes and DRAM controller on reads) will assert on-die termination (ODT) at the appropriate times. The following diagrams show the impedances seen on these nets during write and read cycles. On-Die Termination (ODT) in Write Cycle Let's take an example of the impedances seen on the nets during a write cycle as per the picture below. During writes, the output impedance of the DRAM device is approximately 45 Ω. It is recommended that the SDRAM be implemented with a 240 Ω RZQ reference resistor. Assuming the RZQ resistor is 240 Ω, termination resistors can be configured to present an On-Die Termination (ODT) of RZQ/6 for an effective termination of 40 Ω. On-die Termination (ODT) in Read Cycle The picture shows the impedances seen on the PCB nets during a read cycle. 
During reads, it is recommended that the DRAM be configured for an effective drive impedance of RZQ/7 or 34 Ω (assuming the RZQ resistor is 240 Ω). The on-die termination (ODT) within the DRAM controller will have an effective Thevenin impedance of 45 Ω. Fly-By Signals Now let's talk about the fly-by signals, which include the address, control, command, and clock routing groups. The fly-by signals consist of the fly-by routing from the DRAM controller, stubs at each SDRAM, and terminations after the last SDRAM. In this example, the address, control, and command groups will be terminated through a 39.2 Ω resistor to VTT. The clock pairs will be terminated through 39.2 Ω resistors to a common node connected to a capacitor that is then connected to VDDQ. The DRAM controller will present a 45 Ω output impedance when driving these signals. See also Reflections of signals on conducting lines References Semiconductors
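The RZQ-derived figures quoted above follow directly from dividing the 240 Ω reference resistor; a small sketch of the arithmetic (the divisor choices follow the DDR3-style convention mentioned in the text, and the parallel-combination helper is a generic circuit identity):

```python
# ODT and driver impedances are specified as fractions of the RZQ
# reference resistor (240 ohm in the example above).
RZQ = 240.0

def odt_ohms(divisor):
    """Effective impedance for an RZQ/divisor setting."""
    return RZQ / divisor

print(odt_ohms(6))            # 40.0  -> 40 ohm effective termination
print(round(odt_ohms(7), 1))  # 34.3  -> ~34 ohm read drive impedance

# Two termination resistances seen in parallel (e.g. a split
# termination to VDDQ and VSS) combine as:
def parallel(r1, r2):
    return r1 * r2 / (r1 + r2)

print(parallel(120.0, 120.0))  # 60.0 ohm Thevenin equivalent
```

The parallel example shows why a split termination built from two 120 Ω legs presents the same 60 Ω Thevenin impedance as a single RZQ/4 resistor.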
On-die termination
Physics,Chemistry,Materials_science,Engineering
1,497
59,714,879
https://en.wikipedia.org/wiki/NGC%203665
NGC 3665 is a lenticular galaxy located in the constellation Ursa Major. It is located at a distance of circa 85 million light-years from Earth, which, given its apparent dimensions, means that NGC 3665 is about 85,000 light years across. It was discovered by William Herschel on March 23, 1789. Characteristics NGC 3665 is a lenticular galaxy whose disk is characterised by the presence of a circular dust lane. The galaxy has high molecular gas content, as determined by the detection of CO lines. The molecular gas mass in the galaxy is estimated to be about 10^8.91 solar masses. The galaxy has a UV excess that indicates the presence of a young stellar population. The total star formation rate in NGC 3665 is estimated to be 1.7 solar masses per year. This rate is less than the one expected based on the molecular gas reservoirs of the galaxy. It has been suggested that the compact yet massive bulge of the galaxy has stabilised the cold gas, and thus suppressed star formation. The nucleus of the galaxy is active and hosts a low luminosity transition active galactic nucleus (AGN). The most accepted theory for the energy source of AGNs is the presence of an accretion disk around a supermassive black hole. The mass of the black hole in the centre of NGC 3665 is measured using the rotation of the molecular gas around the nucleus as , or based on the M–sigma relation. NGC 3665 has been found to emit radio waves. Its emission appears elongated at a position angle perpendicular to the dust lane, with the most luminous region being in the nucleus. The emission extends for more than 5 arcminutes at 610 MHz. At 1.4 GHz, the galaxy has one jet with FRI morphology, which extends for more than 3 kpc. At 5 GHz, emission appears in the nucleus and in two jet-like structures. The southeast source has not been resolved, while the northwest one extends for 0.7 arcseconds, which corresponds to 120 pc at the distance of NGC 3665. The radio emission is likely associated with a low luminosity AGN. 
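The conversion between angular and physical size used above follows from the small-angle approximation, linear size = distance × angle (in radians). A sketch follows; the distances are illustrative assumptions: 85 million light-years is roughly 26 Mpc (1 pc ≈ 3.26 ly), while the 120 pc jet length quoted in the text implies a somewhat larger assumed distance of about 35 Mpc.

```python
# Small-angle relation: physical size = distance * angle (radians).
# The distances below are assumptions for illustration, since the
# literature uses slightly different distance estimates for NGC 3665.
ARCSEC_PER_RAD = 206265.0

def linear_size_pc(distance_pc, angle_arcsec):
    """Physical size in parsecs subtended by an angle at a distance."""
    return distance_pc * (angle_arcsec / ARCSEC_PER_RAD)

# The 0.7-arcsecond northwest radio jet:
print(round(linear_size_pc(26e6, 0.7)))  # ~88 pc, assuming 26 Mpc
print(round(linear_size_pc(35e6, 0.7)))  # ~119 pc, assuming 35 Mpc
```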
Supernova One supernova has been detected in NGC 3665, SN 2002hl. It was discovered by T. Boles in unfiltered CCD images taken on November 5, 2002 with a 0.35-m reflector, as part of the U.K. Nova/Supernova Patrol. The supernova had then a magnitude of 16.3. The spectrum of the supernova obtained on November 5 indicated it was a type Ia supernova about one or two months after maximum light. Nearby galaxies NGC 3665 is the brightest member of a galaxy group known as the NGC 3665 group. Other members of the group include NGC 3648, NGC 3652, NGC 3658, and UGC 6433. NGC 3658 lies 15 arcminutes from NGC 3665. References External links NGC 3665 on SIMBAD Lenticular galaxies Ursa Major 3665 06426 35064 Astronomical objects discovered in 1789 Discoveries by William Herschel Galaxies discovered in 1789 +07-24-003
NGC 3665
Astronomy
634
1,885,045
https://en.wikipedia.org/wiki/List%20of%20R-phrases
R-phrases (short for risk phrases) are defined in Annex III of European Union Directive 67/548/EEC: Nature of special risks attributed to dangerous substances and preparations. The list was consolidated and republished in Directive 2001/59/EC, where translations into other EU languages may be found. These risk phrases are used internationally, not just in Europe, and there is an ongoing effort towards complete international harmonization using the Globally Harmonized System of Classification and Labelling of Chemicals (GHS) which now generally replaces these risk phrases. Risk phrases Missing R-numbers indicate phrases that have been deleted or replaced by other phrases. Combinations R-phrases no longer in use R13: Extremely flammable liquefied gas. R47: May cause birth defects. See also List of S-phrases Material safety data sheet Risk and Safety Statements References External links Chemical Risk & Safety Phrases. in 23 European Languages Occupational safety and health International standards Regulation of chemicals in the European Union Safety codes
List of R-phrases
Chemistry
200
35,544,075
https://en.wikipedia.org/wiki/Methoxyketamine
Methoxyketamine or 2-MeO-2-deschloroketamine is a designer drug of the arylcyclohexylamine class first reported in 1963. It is an analog of ketamine in which the chlorine atom has been replaced with a methoxy group. Its synthesis by rearrangement of an amino ketone has been reported. As an arylcyclohexylamine, methoxyketamine most likely functions as an NMDA receptor antagonist. It produces sedative, hallucinogenic, and (at high doses) anesthetic effects, but with a lower potency than ketamine itself. See also 2-Fluorodeschloroketamine Bromoketamine Methoxmetamine Trifluoromethyldeschloroketamine References Arylcyclohexylamines Designer drugs Dissociative drugs Ketones 2-Methoxyphenyl compounds
Methoxyketamine
Chemistry
205
2,958,034
https://en.wikipedia.org/wiki/Supply%20network%20operations
Supply network operations or supply chain operations involve the synchronized execution of compliant manufacturing and logistics processes across a dynamically reconfigurable supply network to profitably meet demand. References Supply chain management Business process Transport operations Global business organization
Supply network operations
Physics
47
3,989,011
https://en.wikipedia.org/wiki/The%20Music%20of%20the%20Primes
The Music of the Primes (British subtitle: Why an Unsolved Problem in Mathematics Matters; American subtitle: Searching to Solve the Greatest Mystery in Mathematics) is a 2003 book by Marcus du Sautoy, a professor in mathematics at the University of Oxford, on the history of prime number theory. In particular he examines the Riemann hypothesis, the proof of which would revolutionize our understanding of prime numbers. He traces the prime number theorem back through history, highlighting the work of some of the greatest mathematical minds along the way. The cover design for the hardback version of the book contains several pictorial depictions of prime numbers, such as the number 73 bus. It also has an image of a clock, referring to clock arithmetic, which is a significant theme in the text. References 2003 non-fiction books Mathematics books Analytic number theory Prime numbers Fourth Estate books
The Music of the Primes
Mathematics
178
23,014,654
https://en.wikipedia.org/wiki/Scud%20missile
A Scud missile is one of a series of tactical ballistic missiles developed by the Soviet Union during the Cold War. It was exported widely to both Second and Third World countries. The term comes from the NATO reporting name attached to the missile by Western intelligence agencies. The Russian names for the missile are the R-11 (the first version), and the R-17 (later R-300) Elbrus (later developments). The name Scud has been widely used to refer to these missiles and the wide variety of derivative variants developed in other countries based on the Soviet design. Scud missiles have been used in combat since the 1970s, mostly in wars in the Middle East. They became familiar to the Western public during the 1991 Persian Gulf War, when Iraq fired dozens at Saudi Arabia and Israel. In Russian service it is being replaced by the 9K720 Iskander. Development The first use of the term Scud was in the NATO name SS-1b Scud-A, applied to the R-11 Zemlya ballistic missile. The earlier R-1 missile had carried the NATO name SS-1 Scunner, but was of a very different design, almost directly a copy of the German V-2 rocket. The R-11 used technology gained from the V-2 as well, but was a new design, smaller and differently shaped than the V-2 and R-1 weapons. The R-11 was developed by the Korolyev OKB and entered service in 1957. The most revolutionary innovation in the R-11 was the engine, designed by A. M. Isaev. Far simpler than the V-2's multi-chamber design, and employing an anti-oscillation baffle to prevent chugging, it was a forerunner to the larger engines used in Soviet launch vehicles. Further developed variants were the R-17 (later R-300) Elbrus / SS-1c Scud-B in 1961 and the SS-1d Scud-C in 1965, both of which could carry either a conventional high-explosive, a 5- to 80-kiloton thermonuclear, or a chemical (thickened VX) warhead. The SS-1e Scud-D variant developed in the 1980s can deliver a terminally guided warhead capable of greater precision. 
All models are long (except Scud-A, which is shorter) and in diameter. They are propelled by a single liquid-fuel rocket engine burning kerosene and corrosion-inhibited red fuming nitric acid (IRFNA) with unsymmetrical dimethylhydrazine (UDMH, Russian TG-02 like German Tonka 250) as liquid igniter (self-ignition with IRFNA) in all models. The missile reaches a maximum speed of Mach 5. Variants Scud-A The first of the "Scud" series, designated R-11 (SS-1B Scud-A) originated in a 1951 requirement for a ballistic missile with similar performance to the German V-2 rocket. The R-11 was developed by engineer Viktor Makeev, who was then working in the OKB-1, headed by Sergey Korolev. It first flew on 18 April 1953, was fitted with an Isayev engine using kerosene and nitric acid as propellant. On 13 December 1953, a production order was passed with SKB-385 in Zlatoust, a factory dedicated to producing long-range rockets. In June 1955, Makeev was appointed chief designer of the SKB-385 to oversee the program and, in July, the R-11 was formally accepted into military service. The definitive R-11M, designed to carry a nuclear warhead, was accepted officially into service on 1 April 1958. The launch system was placed on an IS-2 chassis and received the GRAU designation 8K11; only 100 Scud-A launchers were built. The R-11M had a maximum range of 270 km, but when carrying a nuclear warhead, this was reduced to 150 km. Its purpose was strictly as a mobile nuclear strike vector, giving the Soviet Army the ability to hit European targets from forward areas, armed with a nuclear warhead with an estimated yield of 50 kilotons. A naval variant, the R-11FM (SS-N-1 Scud-A) was first tested in February 1955, and was first launched from a converted Project 611 (Zulu class) submarine in September of the same year. While the initial design was done by Korolev's OKB-1, the program was transferred to Makeev's SKB-385 in August 1955. 
It became operational in 1959 and was deployed onboard Project 611 and Project 629 (Golf Class) submarines. During its service, 77 launches were conducted, of which 59 were successful. Scud-B The successor to the R-11, the R-17 (SS-1C Scud-B), renamed R-300 in the 1970s, was the most prolific of the series, with a production run estimated at 7,000. It served in 32 countries and four countries besides the Soviet Union manufactured copied versions. The first launch was conducted in 1961, and it entered service in 1964. The R-17 was an improved version of the R-11. It could carry nuclear, chemical, conventional or fragmentation weapons. At first, the Scud-B was carried on a tracked transporter erector launcher (TEL) similar to that of the Scud-A, designated 2P19, but this was not successful and a wheeled replacement was designed by the Titan Central Design Bureau, becoming operational in 1967. The new MAZ-543 vehicle was officially designated 9P117 Uragan. The launch sequence could be conducted autonomously, but was usually directed from a separate command vehicle. The missile is raised to a vertical position by means of hydraulically powered cranes, which usually takes four minutes, while the total sequence lasts about one hour. Scud-C The Makeyev OKB also worked on an extended-range version of the R-17, known in the West as SS-1d Scud-C, that was first launched from Kapustin Yar in 1965. Its range was brought up to 500–600 km, but at the cost of a greatly reduced accuracy and warhead size. Eventually, the advent of more modern types in the same category, such as the TR-1 Temp (SS-12 Scaleboard), made the Scud-C redundant, and it apparently did not enter service with the Soviet armed forces. Scud-D The R-17 VTO (SS-1e Scud-D) project was an attempt to enhance the accuracy of the R-17. 
The Central Scientific Research Institute for Automation and Hydraulics (TsNIAAG) began work on the project in 1967, using an optical guidance system in which a photograph of the target (provided through air reconnaissance) was inserted into a holder. This method was impractical, as the system was only effective in clear weather and it was difficult to take the proper photographs under field conditions. The Soviet artillery troops did not favor the concept because of these limitations. In 1974, the VTO programme was revisited to take advantage of miniaturized computer hardware, with the guidance system relying on a digitized image (DSMAC). This also made it possible to reassign targets from the missile warhead's computer library. The warhead was tested on an Su-17 in 1975, and the first test launch was conducted on 29 September 1979, in which the missile hit within a few meters of the target. Development continued through the 1980s, and the design was modified to have a separating warhead and to have it make terminal corrections before impact. The first two test launches of this version in 1984 failed; the inner surface of the optical lens in the missile's nose suffered from dust buildup, and this was corrected after redesign work. The system was accepted into initial service as the 9K720 Aerofon in 1989. However, by this time, more advanced weapons were in use, such as the OTR-21 Tochka (SS-21) and the R-400 Oka (SS-23), and the Scud-D was not acquired by the Soviet Armed Forces. Instead, in the 1990s, it was proposed for export as an upgrade for existing Scud-B users. Unlike previous Scud versions, the 9K720 had a warhead that separated from the missile's body and was fitted with its own terminal guidance system. With a TV camera fitted in the nose, the system could compare the target area with data from an onboard computer library (DSMAC). In this way, it was thought to attain a circular error probable (CEP) of 50 m.
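The circular error probable quoted for such systems is conventionally the radius within which half of the shots land, i.e. the median radial miss distance. A minimal sketch of that definition, using made-up dispersion figures rather than any documented Scud data:

```python
import math
import random

def cep(impacts):
    """Circular error probable: the radius of the circle, centered on the
    aim point, that contains half of the impact points -- i.e. the median
    radial miss distance."""
    radii = sorted(math.hypot(x, y) for x, y in impacts)
    n = len(radii)
    mid = n // 2
    return radii[mid] if n % 2 else 0.5 * (radii[mid - 1] + radii[mid])

# Simulated impacts: 1,000 points with a made-up 30 m standard deviation
# per axis (an illustrative figure, not a documented Scud dispersion).
random.seed(1)
impacts = [(random.gauss(0, 30.0), random.gauss(0, 30.0)) for _ in range(1000)]
print(round(cep(impacts)))  # close to 1.1774 * 30 ≈ 35 m for a circular normal
```

For a circular bivariate normal distribution the CEP works out to roughly 1.1774 times the per-axis standard deviation, which the simulated median approximates.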
Characteristics

Al Hussein and Iraqi variants

Hwasong-5/Shahab-1
North Korea obtained its first Scud-Bs from Egypt in 1979 or 1980. These missiles were reverse engineered and reproduced using North Korean infrastructure, including the 125 factory at Pyongyang, a research and development institute at Sanum-dong and the Musudan-ri Launch Facility. The first prototypes were completed in 1984 and designated Hwasong-5. They were exact replicas of the R-17Es obtained from Egypt. The first test flights occurred in April 1984, but the first version saw only limited production and no operational deployment, as its purpose was only to validate the production process. Production of the definitive version began at a slow rate in 1985. The type incorporated several minor improvements over the original Soviet design. The range was increased by 10 to 15 percent, and it could carry high-explosive (HE) or cluster chemical warheads. Throughout the production cycle, until it was phased out in favour of the Hwasong-6 in 1989, the DPRK manufacturers are thought to have made small enhancements, in particular to the guidance system. In 1985, Iran acquired 90 to 100 Hwasong-5 missiles from North Korea. A production line was also established in Iran, where the Hwasong-5 was produced as the Shahab-1.

Hwasong-6/Shahab-2
The Hwasong-6 was first test-flown in June 1990 and entered full-scale production the same year, or in 1991, until it was superseded by the Rodong-1. It features an improved guidance system and a range of 500 km, but its payload was reduced to 770 kg; the dimensions are identical to those of the original Scud. Due to difficulties in procuring MAZ-543 TELs, the North Koreans had to produce a local copy. By 1999, North Korea was estimated to have produced 600 to 1,000 Hwasong-6 missiles, of which 25 served for testing, 300 to 500 were exported, and 300 to 600 are used by the Korean People's Army.
The Hwasong-6 was exported to Iran, where it is known as the Shahab-2, and to Syria, where it is manufactured under license with Chinese assistance. Also, according to SIPRI, 150 Scud-Cs were exported to Syria in 1991–96, 5 to Libya in 1999, and 45 to Yemen in 2001–02.

Hwasong-7/Shahab-3
The Nodong (also referred to as Rodong or Hwasong-7) was the first North Korean missile to feature important modifications from the Scud design. Development began in 1988, and the first missile was launched in 1990, but it apparently exploded on its launch pad. A second test was carried out successfully in May 1993. The main characteristics of the Rodong are a range of 1,000 km and a CEP estimated at 2,000–4,000 m, giving the North Koreans the ability to strike Japan. The missile is substantially larger than the Hwasong series, and its Isayev 9D21 engine was upgraded with help from the Makeyev OKB. Some assistance also came from China and Ukraine, while a new TEL was designed using an Italian Iveco truck chassis and an Austrian crane. The rapidity with which the Rodong was designed and exported after just two tests came as a surprise to many Western observers, and led to some speculation that it was in fact based on a cancelled Soviet project from the Cold War period, but this has not been proven. Iran is known to have financed much of the Rodong program, and in return is allowed to produce the missile as the Shahab-3. While the first prototypes may have been acquired as early as 1992, production began only in 2001, with assistance from Russia. The Rodong has also been exported to Egypt and Libya.

Hwasong-9/Scud-ER
The Hwasong-9, also called the Scud-ER (extended range), is essentially a North Korean modification of the Hwasong-6 that exchanges payload for greater range; estimates range from to as much as through a reduced payload of and enlarging the fuel and oxidant tanks, along with a slight enlargement of the fuselage.
The missile is single-stage and road-mobile, employing an HE, submunition, chemical, or potentially miniaturized nuclear warhead with a CEP of . Its range allows the North Korean military to strike anywhere on the Korean peninsula and threaten areas of Japan. Development of the Hwasong-9 reportedly began in 1991 and production started in 1994. Deployment began in 2003, intelligence imagery first observed it in 2005, and it was first revealed publicly in 2007. Reports suggest Syria received Scud-ER missiles in 2000, giving it the ability to target all of Israel and southeastern Turkey, including Ankara. Syria reportedly converted its own Hwasong-6 production line in order to make the longer-range Hwasong-9. The Scud-ER/Hwasong-9 has a demonstrated range of 1,000 km with a 500 kg payload. South Korean and United States intelligence assessed that the missile can travel over 1,000 km; Japan rated its range at 1,000 km in its 2015 white paper and considered increasing the range estimate in its 2016 white paper. The UN confirmed that North Korea has assisted Syria in the development of a manoeuvrable vehicle for the Hwasong-9/Scud-ER since 2008. The UN also confirmed that the missile guidance and electronics were upgraded and improved by Syrian experts.

Qiam 1
Iran began development of the indigenous Qiam missile prior to 2010, when it was first publicly tested. It is developed from the Shahab-2/Hwasong-6. The Qiam 1 has a range of and (CEP) accuracy. The most noticeable difference from the Shahab-2 is the lack of fins, which could reduce the missile's radar signature during ascent, as fins reflect radar. Removing fins from a missile also reduces the structural mass, so the payload weight or missile range can be increased. Without the fins and their associated drag, the missile can be more responsive to changes in trajectory.
Iranian sources cite an improved guidance system on the missile, and analysts note that adjusting the missile's in-flight trajectory without fins requires a highly responsive guidance system. The Qiam 1's accuracy is also improved by the addition of a separable warhead. Other changes to the warhead include a "baby-bottle" shape, possibly to increase drag and thus stability during reentry at the expense of range, potentially increasing accuracy. The shape can also increase the terminal velocity of the warhead, making it harder to intercept. Deliveries began in either 2010 or 2011. The missile's first combat use was against ISIS militants on 18 June 2017. The Burkan 2-H used by the Houthis in Yemen is potentially related, or the Qiam 1 has potentially been used by that group.

Burkan-1
The Houthi forces in Yemen unveiled the Burkan-1 (also spelled Borkan 1 and Burqan 1) on 2 September 2016, when it was fired toward King Fahd International Airport. The missile's range is , greater than that of the Soviet-made Scud-B missiles the Houthi forces took control of in 2015. Missiles shot down mid-flight in October 2016 and July 2017 were claimed by Saudi Arabia to have targeted the holy city of Mecca, while the Houthis claimed the targets were airports in the region.

Burkan 2-H
The Houthi forces in Yemen unveiled the Burkan 2-H (also spelled Borkan H2 and Burqan 2H) when it was launched at Saudi Arabia on 22 July 2017. Analysts identify it as based on the Iranian Qiam 1/Scud-C, Iranian Shahab-2/Scud-C, or Scud-D missile. Pictures indicate a "baby bottle" re-entry vehicle, like those of the Shahab-3 and Qiam 1 missiles. The missile's exact range is unknown, but is greater than . It was launched in July 2017, and a second launch was claimed on 4 November 2017, with the missile shot down over the Saudi Arabian capital, Riyadh. According to the US State Department, the missile was actually an Iranian Qiam 1.
Saudi Arabia's Ministry of Culture and Information also supplied the Associated Press with pictures from a military briefing of what it claimed were components from the intercepted missile bearing Iranian markings matching those in other pictures of the Qiam 1.

Golan-1 and Golan-2
The Golan missiles are Syrian modernizations and licensed-production versions of Scud missiles (versions B, C and D). The Golan-1 missile is simply a licensed copy of the Scud-C missile without major changes. The Golan-2 missile is a modernization of the Scud-D, with an increased range of up to 850 km (compared to 700 km for the basic version). Syrian engineers have also converted various versions of Scud missiles (possibly including Golan missiles) to use cluster munitions. The missiles were developed and produced in Syria by the Syrian Scientific Studies and Research Center. The number of missiles produced is unknown. They were used in the Syrian Civil War.

Operational use
The Scud missile family is one of the few ballistic missiles to be extensively used in actual warfare by different forces, second only to the V-2 in terms of combat launches. The first recorded combat use of the Scud was at the end of the Yom Kippur War in 1973, when three missiles were fired by Egypt against Israeli-held Arish and a bridgehead on the western bank of the Suez Canal. Seven Israeli soldiers were killed. Libya responded to U.S. airstrikes in 1986 by firing two Scud missiles at a U.S. Coast Guard navigation station on the nearby Italian island of Lampedusa, which missed their target. Scud missiles were used in several regional conflicts, including use by Soviet and Afghan communist forces in Afghanistan, and by Iranians and Iraqis against one another in the so-called "War of the Cities" during the Iran–Iraq War. Scuds were used by Iraq during the Gulf War against Israel and coalition targets in Saudi Arabia. More than a dozen Scuds were fired from Afghanistan at targets in Pakistan in 1988, and against targets within Afghanistan in March 1991.
There were also a small number of Scud missiles used in the 1994 civil war in Yemen, as well as by Russian forces in Chechnya from 1996 onwards. The missiles saw some minor use by forces loyal to Muammar Gaddafi in the Libyan Civil War. They have also reportedly been used in the ongoing Syrian civil war by the Syrian Army.

Iran–Iraq War
Iraq was the first to use ballistic missiles during the Iran–Iraq War, firing limited numbers of Frog-7 rockets at the towns of Dezful and Ahvaz. On 27 October 1982, Iraq launched its first Scud-Bs at Dezful, killing 21 civilians and wounding 100. Scud strikes continued during the following years, intensifying sharply in 1985, with more than 100 missiles falling inside Iran. In response, the Iranians searched for a source of ballistic weapons, finally meeting success in 1985, when they obtained a small number of Scud-Bs from Libya and Syria: in addition to supplying these missiles, Syria also taught Iran the technology to produce them. These weapons were assigned to a special unit, the Khatam Al-Anbya force, attached to the Pasdaran. On 12 March, the first Iranian Scuds fell on Baghdad and Kirkuk. The strikes infuriated Saddam Hussein, but the Iraqi response was limited by the range of its Scuds, which could not reach Tehran. After a request for TR-1 Temp (SS-12 Scaleboard) missiles was refused by the Soviets, Iraq turned to developing its own long-range version of the Scud missile, which became known as the Al Hussein. In the meantime, both sides quickly ran out of missiles and had to contact their international partners for resupply. In 1986, Iraq ordered 300 Scud-Bs from the Soviet Union, while Iran turned to North Korea for missile deliveries and for assistance in developing a domestic missile industry. By 1988, the fighting along the border had reached a stalemate, and both belligerents began employing terror tactics in order to break the deadlock.
Lasting from 29 February to 20 April, this phase of the conflict became known as the "war of the cities" and saw an intensive use of Scud missiles in what became known as the "Scud duel". The first rounds were fired by Iraq, when seven Al-Husseins landed in Tehran on 29 February. In all, Iraq fired 189 missiles, mostly of the Al-Hussein type, of which 135 landed in Tehran, 23 in Qom, 22 in Isfahan, four in Tabriz, three in Shiraz and two in Karaj. During this episode, Iraq's missiles killed 2,000 Iranians, injured 6,000, and caused a quarter of Tehran's population of ten million to flee the city. The Iranian response included launching 75 to 77 Hwasong-5s, a North Korean Scud variant, at targets in Iraq, mostly in Baghdad. Prior to the 2003 invasion of Iraq, the government of Saddam Hussein asserted that Iran had fired dozens of Scud missiles at the People's Mujahedin (MKO) in Iraq in 1999 and 2001, with the MKO itself claiming that Iran fired more missiles at Iraq in 2001 than it did during the entire Iran–Iraq War.

Civil war in Afghanistan
The most intensive, and least well-known, use of Scud missiles occurred during the civil war in Afghanistan between 1989 and 1992. As compensation for the withdrawal of Soviet troops in 1989, the USSR agreed to deliver sophisticated weapons to the Democratic Republic of Afghanistan (DRA), among which were large quantities of Scud-Bs, and possibly some Scud-Cs as well. The first 500 were transferred during the early months of 1989 and soon proved to be a critical strategic asset for the DRA. Every Scud battery was composed of three TELs, three reloading vehicles, a mobile meteorological unit, one tanker and several command and control trucks. During the mujahideen attack against Jalalabad, between March and June 1989, three firing batteries manned by Afghan crews advised by Soviets fired approximately 438 missiles in defense of the embattled garrison.
Soon all the heavily contested areas of Afghanistan, such as the Salang Pass and the city of Kandahar, were under attack by Scud missiles. Due to its imprecision, the Scud was used as an area bombing weapon, and its effect was psychological as well as physical: the missiles would explode without warning, as they travelled faster than the sound they produced in flight. At the time, reports indicated that Scud attacks had devastating consequences for the morale of the Afghan rebels, who eventually learned that by applying guerrilla tactics and keeping their forces dispersed and hidden, they could minimize casualties from Scud attacks. The Scud was also used as a punitive weapon, striking areas that were held by the resistance. In March 1991, shortly after the town of Khost was captured, it was hit by a Scud attack. On 20 April 1991, the marketplace of Asadabad was hit by two Scuds, which killed 300 and wounded 500 inhabitants. Though the exact toll is unknown, these attacks resulted in heavy civilian casualties. The explosions destroyed the headquarters of Islamic leader Jamil al-Rahman and killed a number of his followers. In all, between October 1988 and February 1992, with 1,700 to 2,000 Scud launches, Afghanistan saw the greatest concentration of ballistic missiles fired since World War II. After January 1992, the Soviet advisors were withdrawn, reducing the Afghan army's ability to use its ballistic missiles. On 24 April 1992, the mujahideen forces of Ahmad Shah Massoud captured the main Scud stockpile at Afshur, but members of the 99th Missile Brigade had ditched their uniforms, leaving Massoud's men with no way of operating such weapons. As the communist government collapsed, the few remaining Scuds and their TELs were divided among the rival factions fighting for power. Due to the lack of knowledge of such weapons, between April 1992 and 1996 only 44 Scuds were fired in Afghanistan.
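The "explode without warning" effect can be put into rough numbers: a missile arriving at around Mach 5 outruns the sound of its own flight, so the impact precedes any audible cue. A small sketch using assumed round-number speeds (not figures from the text):

```python
# Assumed speeds for illustration: ~1700 m/s for a roughly Mach 5 missile
# and ~340 m/s for sound at sea level. Neither figure is from the source.
MISSILE_SPEED = 1700.0  # m/s
SOUND_SPEED = 340.0     # m/s

def lead_time(distance_m):
    """Seconds by which impact precedes the arrival of the sound the
    missile produced when it was `distance_m` from the target."""
    return distance_m / SOUND_SPEED - distance_m / MISSILE_SPEED

# Sound emitted 10 km out arrives roughly 23.5 s after the missile does.
print(round(lead_time(10_000), 1))
```

Under these assumptions, even the noise of the final ten kilometres of flight arrives well after the warhead, which matches the reported absence of warning.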
When the Taliban came to power in 1996, they captured a few of the remaining Scuds, but lack of maintenance had reduced the state of the missile force to such an extent that there were only five Scud firings until 2001. After the U.S. invasion of Afghanistan, the last four surviving Scud launchers were destroyed in 2005.

Gulf War

Scud attacks
At the outbreak of the Gulf War, Iraq had an effective, if limited, ballistic missile force. Besides the original Scud-B, several local variants had been developed. These included the Al-Hussein, developed during the Iran–Iraq War; the Al-Hijarah, a shortened Al-Hussein; and the Al-Abbas, an extended-range Scud fired from fixed launching sites, which was never used. The Soviet-built MAZ-543 vehicle was the prime launcher, along with a few locally designed TELs, the Al Nida and the Al Waleed. Scuds were responsible for most of the coalition deaths outside Iraq and Kuwait. Of a total of 88 Scud missiles, 46 were fired into Saudi Arabia and 42 into Israel. Twenty-eight members of the Pennsylvania National Guard were killed when a Scud struck a United States Army barracks in Dhahran, Saudi Arabia.

Scud hunting
The United States Air Force organized air patrols over areas where Scud launchers were suspected to operate, namely western Iraq near the Jordanian border, where the Scuds were fired at Israel, and southern Iraq, where they were aimed at Saudi Arabia. A-10 strike aircraft flew over these zones during the day, and F-15Es fitted with LANTIRN pods and synthetic aperture radars patrolled at night. However, the infrared and radar signatures of the Iraqi TELs were almost impossible to distinguish from those of ordinary trucks and from the surrounding electromagnetic clutter. During the war, while patrolling, strike aircraft managed to sight mobile TELs on 42 occasions, but on only eight of those occasions were the aircraft able to locate the targets well enough to release their ordnance.
In addition, the Iraqi missile units dispersed their Scud TELs and hid them in culverts, wadis, or under highway bridges. They also practiced "shoot-and-scoot" tactics, withdrawing the launcher to a hidden location immediately after it had fired, while the launch sequence, which usually took 90 minutes, was reduced to half an hour. This enabled them to preserve their forces, despite optimistic claims by the coalition. A post-war Pentagon study concluded that relatively few launchers had been destroyed by coalition aircraft. Ground-based special forces from the United Kingdom were covertly inserted into Iraq to locate and destroy Scud launchers, either by directing airstrikes or in some cases by attacking them directly with MILAN man-portable missiles. An example was the 8-man SAS patrol designated Bravo Two Zero, led by "Andy McNab" (a pseudonym). This patrol resulted in the death or capture of all but one of its members, "Chris Ryan". The mobility of Scud TELs allowed for a choice of firing position and increased the survivability of the weapon system to such an extent that, of the approximately 100 launchers claimed destroyed by coalition pilots and special forces in the Gulf War, not a single destruction could be confirmed afterwards. After the war, UNSCOM investigations showed that Iraq still had 12 MAZ-543 vehicles, as well as seven Al-Waleed and Al-Nidal launchers, and 62 complete Al-Hussein missiles.

1994 Yemen civil war
During the 1994 civil war in Yemen, South Yemen separatists fired Scud missiles at the Yemeni capital of Sana'a.

Chechen Wars
A small number of Scuds were used by Russian forces in 1996 during the First Chechen War and in late 1999/early 2000 during the Second Chechen War, including during the Battle of Grozny (1999–2000). Although frequently reported by media as Scuds, the majority of the 60–100 SRBMs fired in the Chechen Wars were OTR-21 Tochka (SS-21 Scarab-B) missiles.
Libyan Civil War
In May 2011, early during the Libyan Civil War, it was rumored that Scud-Bs had been fired by Muammar Gaddafi's forces against anti-Gaddafi forces. The first confirmed use happened several months later, when on 15 August 2011, as anti-Gaddafi forces encircled the Gaddafi-controlled capital of Tripoli, Libyan Army forces near Gaddafi's hometown of Sirte fired a Scud missile toward anti-Gaddafi positions in Cyrenaica, well over 100 kilometers away. The missile struck the desert near Ajdabiya, causing no casualties. On 22 August 2011, a second Scud-B was also fired by Gaddafi forces in Sirte. On 23 August, opposition forces in Misrata reported that four Scud-B missiles had been fired against the city from Sirte, but had caused no damage. Initial claims that an Aegis Ballistic Missile Defense System-equipped US Navy cruiser shot down the missiles over the Gulf of Sidra were later denied by US DoD officials.

Syrian Civil War
On 12 December 2012, it was reported by various outlets that the Syrian Army had begun using short-range ballistic missiles against rebels in the Syrian civil war. According to NATO officials, "allied intelligence, surveillance and reconnaissance assets" had detected the launch of a number (later reports said at least six) of unguided, short-range ballistic missiles inside Syria. The trajectory and distance travelled indicated that they were Scud-type missiles, although no information on the type of Scud being used was provided at the time. An American intelligence official, who asked not to be identified, confirmed that missiles had been fired from the Damascus area at targets in northern Syria, where the majority of the rebels' bases and facilities are located. Three districts in the rebel-held eastern part of Aleppo and the nearby city of Tel Rifat were hit by ballistic missiles on 22 February 2013, flattening up to 20 houses in each of the places hit.
Human Rights Watch inspector Ole Solvang toured the areas targeted by Scuds on 25 February, saying that he "has never seen such destruction" during his past visits to the country. According to the New York-based organization, at least 141 people were killed in the attacks, including 71 children. The statement added that there was no sign of rebel presence in the areas hit, meaning that the attacks were unlawful. Syrian Information Minister Omran al-Zoabi denied that the government was using ballistic weapons, even as opposition activists claimed more than 30 had been launched since December 2012.

Yemeni Civil War (2015–present)
The Houthis possessed 300 Scud missiles as of June 2015, although Saudi-led air strikes allegedly damaged or destroyed "most of them." Between 2015 and November 2017, Houthi forces fired more than 170 ballistic missiles at Saudi Arabia, including Scud, Scarab, and modified SA-2 missiles. As of October 2016, there were 85 confirmed interceptions using Patriot missiles. In addition to Scud-B missiles, there is a report of a single Scud-C missile launched on 6 June 2015 at Al-Salil Military Base. Local versions of Scud missiles, known as the Burkan 1 and Burkan 2-H, have also been displayed and used by the Houthis beginning in September 2016.

Nagorno-Karabakh war (2020)
On 11 October 2020, a Scud missile was fired from the territory of Nagorno-Karabakh at Ganja, Azerbaijan, the country's second-largest city. As a result, according to official Azerbaijani sources, 10 people, including four women, were killed and 35 people, including children, were injured. On 16 October 2020, Armenian Armed Forces in Nagorno-Karabakh fired another Scud missile at Ganja. Officials in Azerbaijan announced that at least 13 people, including two infants, had been killed, with more than 50 others injured. Azerbaijan destroyed at least one Scud missile launcher during the course of the war.
Operators
Operators of Scuds or Scud derivatives as of 2022 are:

Current operators
 (Scud-B, Scud-D): Some sources said that Algeria has received Scud B and D
 (Scud-D): 8 launchers, 32 missiles
 (Scud-B)
 (Scud-B, Hwasong-6, Project T)
 (Scud-B, Hwasong-5, Shahab-1, Shahab-2, Shahab-3, Rodong-1, Qiam 1)
 (Scud-B)
 (Scud-B)
 (Hwasong-6, Hwasong-5)
 (Scud-E, Scud-B, Scud-C, Hwasong-5, Hwasong-6, Rodong-1)
 (Scud-B)
 (Scud-B, Scud-C, Golan-1, Golan-2, Scud-D, Scud-ER, Shahab-1, Shahab-2, Hwasong-5, Hwasong-6, Rodong-1)
 c. 30 Scud-B missiles and four TELs acquired in 1995, and converted into targets by Lockheed Martin.
 (Scud-B, Scud-C, Hwasong-5, Hwasong-6)
 (Scud-B, Scud-C, Volcano 1, Volcano H-2)

Former operators
 (Scud-B) – 30 launchers and 2,300 missiles delivered to the DRA between 1988 and 1991. After the US invasion of Afghanistan, the last 4 remaining launchers were scrapped in 2005
 60 launchers, retired in May 2005
 (Scud-B) – 36 launchers, retired
 (Scud-B) – 30 launchers
 (Scud-B) – 27 launchers, retired
 (Scud-A, Scud-B) – 24 launchers plus decoys, retired 1990
 (Scud-B) – 9 launchers, retired, destroyed in 1995
 (Scud-B, Al-Hussein, Al-Abbas) – 24–36 launchers plus decoys, 819 missiles, plus 11 MAZ-543 launchers for Al-Hussein.
 (Scud-B) – 30 launchers, retired in 1989
 (Scud-B) – unknown number of Scuds acquired from the former Afghan National Army in the 1980s as part of foreign technology assessments; none were deployed in military service.
 (Scud-B) – 18 launchers, retired
 (Scud-C, Scud-D) – ~300 launchers remaining at the dissolution of the Soviet Union, retired
 6 launchers and an unknown quantity of R-17E missiles bought in 1979
 ~660 launchers
 (Scud-B) – retired
 25 Hwasong-5s purchased from North Korea in 1989. The UAE military were not satisfied with the quality of the missiles, and they were kept in storage.
 50 launchers and 185 missiles, all destroyed

See also
List of missiles
Shahab-1 – an Iranian copy of the Scud-B
9K720 Iskander – Russian Scud replacement

Notes

References

Further reading

External links
R-11 / SS-1B SCUD-A
A Lucid Interval

1953 in spaceflight
Chemical weapon delivery systems
Cold War missiles of the Soviet Union
Tactical ballistic missiles of the Soviet Union
Tactical ballistic missiles of Iraq
Tactical ballistic missiles of Iran
Weapons of Afghanistan
Weapons of Egypt
Military equipment introduced in the 1950s
Scud missile
https://en.wikipedia.org/wiki/GAS6
Growth arrest-specific 6, also known as GAS6, is a human gene coding for the GAS6 protein. It is similar to Protein S, with the same domain organization and 43% amino acid identity. It was originally identified as a gene upregulated in growth-arrested fibroblasts.

Function
Gas6 is a gamma-carboxyglutamic acid (Gla) domain-containing protein thought to be involved in the stimulation of cell proliferation.

Interactions
Gas6 has been shown to interact with the AXL receptor tyrosine kinase, MerTK and TYRO3. The formation of Gla requires a vitamin K-dependent enzymatic reaction that carboxylates the gamma carbon of certain glutamic acid residues of the protein during its production in the endoplasmic reticulum. The action of vitamin K is therefore essential for GAS6 function.

References

Further reading
GAS6
https://en.wikipedia.org/wiki/Caldarium
A caldarium (also called a calidarium, cella caldaria or cella coctilium) was a room with a hot plunge bath, used in a Roman bath complex. The boiler supplying hot water to a baths complex was also called . This was a very hot and steamy room heated by a hypocaust, an underfloor heating system using tunnels with hot air, heated by a furnace tended by slaves. It was also the hottest room in the regular sequence of bathing rooms; after the caldarium, bathers would progress back through the tepidarium to the frigidarium. A caldarium in both public and private baths followed a common plan with three main parts. The common arrangement included a warm-water bath – usually called , but also referred to as or – sunk into the floor; a semicircular alcove – – where bathers would sit in order to induce sweating; and, in the middle of the room, a vacant space – or – meant for physical exercise before going to sit in the alcove. The bath's patrons would use olive oil to cleanse themselves, applying it to their bodies and using a strigil to remove the excess. This was sometimes left on the floor for the slaves to pick up, or put back in the pot for the women to use for their hair. The temperature of the is not known exactly; however, a floor surface temperature above would have been uncomfortable to stand on with bare feet.

See also
Ancient Roman bathing

References

External links
Greek and Roman baths at the Perseus Project

Ancient Roman baths
Rooms
Caldarium
https://en.wikipedia.org/wiki/Amanita%20daucipes
Amanita daucipes is a species of fungus in the family Amanitaceae of the mushroom order Agaricales. Found exclusively in North America, the mushroom may be recognized in the field by its medium to large white cap with pale orange tints, and the dense covering of pale orange or reddish-brown powdery conical warts on the cap surface. The mushroom also has a characteristic large bulb at the base of its stem with a blunt, short rooting base, whose shape is suggestive of the common names carrot-footed lepidella, carrot-foot amanita, and turnip-foot amanita. The mushroom has a strong odor that has been described variously as "sweet and nauseous", or compared to an old ham bone or soap. Edibility is unknown for the species, but consumption of species belonging to the Amanita subgroup Lepidella is risky.

Taxonomy
Amanita daucipes was first described in 1856 by the mycologists Miles Joseph Berkeley and Camille Montagne, who named it Agaricus daucipes. It was later renamed Amanitopsis daucipes by Pier Andrea Saccardo in 1887. In 1899, American mycologist Curtis Gates Lloyd transferred the species to the genus Amanita. It is in the section Lepidella of the genus Amanita, in the subgenus Lepidella, a grouping of related Amanita mushrooms characterized by their amyloid spores. Other North American species in this subgenus include A. abrupta, A. atkinsoniana, A. chlorinosma, A. cokeri, A. mutabilis, A. onusta, A. pelioma, A. polypyramis, A. ravenelii and A. rhopalopus. Its common names include the "carrot-foot Amanita", the "turnip-foot Amanita", and the "carrot-footed Lepidella". The specific epithet daucipes means "carrot foot".

Description
The caps of the fruit bodies initially have a convex shape before flattening out in maturity, and measure in diameter. The cap surface is dry to shiny, and white with a pale orange hue. It is densely covered with white to pale orange or reddish-brown conical warts.
The warts, remnants of the universal veil, are randomly distributed on the cap surface and become fluffier and cotton-like (flocculent) near the edge (or margin) of the cap. Drier specimens may have the cap surface completely cracked around the bases of the individual warts. The conical warts are detersile, meaning they may be easily removed from the cap surface without leaving a residue or a scar. The margin of the cap does not have striations, and like other Lepidella members, may have irregular veil remnants hanging from it. The gills are free, crowded closely together, moderately narrow, and white to yellowish white in color. The short gills that do not extend the full distance from the stem to the cap edge (known as lamellulae) are rounded to attenuate (gradually narrowing), and of varying lengths. The stem is long, thick, and is attached to the center of the cap. It tapers slightly towards the apex, and is solid, dry, white or sometimes with a pale orange tint, and covered with tufts of soft woolly hairs. If handled, the stem will slowly bruise and discolor to approximately the same color as the cap. The basal bulb is large, reaching up to , and is broadly spindle- to turnip-shaped. The bulb has a circular ridge on its upper part where the universal veil was previously attached, and the bulb may have longitudinal splits. It is covered with pinkish to reddish veil remains. The partial veil forms an ephemeral ring on the upper part of the stem. It is white to pale yellow, and usually falls off as the cap expands; fragments of the ring may often be found lying on the ground near the base of the stem. The universal veil remnants, when present, are similar to that on the cap. The flesh is firm and white. Fruit bodies have an odor that is strong and unpleasant, described as "sweet and nauseous". The odor has also been compared to that of "an old ham bone or soap" or "decaying protein", especially older specimens. 
Microscopic characteristics Viewed in deposit, such as with a spore print, the spores of A. daucipes are white, cream, or yellowish in color. Viewed with a microscope, they have an ellipsoid to elongate shape (sometimes kidney-shaped, or reniform), and dimensions of 8–11 by 5–7 μm. They are translucent (hyaline), with thin walls, and are amyloid, meaning that they absorb iodine when stained with Melzer's reagent. The basidia (the spore-bearing cells) are 30–50 by 7–11 μm, club-shaped, and 4-spored, with clamps at their bases. The cheilocystidia are abundant, small, roughly spherical to club-shaped cells, with dimensions of 15–40 by 10–28 μm. The cap cuticle is between 75 and 180 μm thick, and consists of a dense layer of thin-walled, interwoven, and slightly gelatinized hyphae that are 2–5 μm in diameter. Clamp connections are present in the hyphae of this species. Similar species A. daucipes is superficially similar to another related North American species, the chlorine lepidella (A. chlorinosma), but may be distinguished from the latter by its color and the large basal bulb. Further, A. daucipes has "tougher, more distinct volval scales that are tinged with orange-yellow to orange-brown or light reddish-brown." Distribution and habitat A. daucipes is a mycorrhizal species, and its fruit bodies may be found growing solitary or scattered on the ground in mixed coniferous and deciduous forests (especially those dominated by oak trees) in Maryland, North Carolina, New Jersey, Ohio, Pennsylvania, Tennessee, Virginia, West Virginia, Kentucky, and Texas; other associated tree species include hickory (genus Carya) and birch (Betula). A predilection for disturbed soil, such as roadsides, has been noted. Amanita authority Cornelis Bas, writing in his extensive 1969 monograph on the genus, considered A. daucipes to be a rare species; subsequent investigations have shown it to be common in oak forests in the eastern United States.
The southern end of its distribution extends to Sonora, Mexico. Toxicity The edibility of A. daucipes is unknown, but the mushroom is not recommended for consumption because the Lepidella section of Amanita also contains several poisonous species. See also List of Amanita species References daucipes Inedible fungi Fungi of North America Fungi described in 1856 Taxa named by Camille Montagne Fungus species
Amanita daucipes
Biology
1,466
28,049,207
https://en.wikipedia.org/wiki/Digital%20Envoy
Digital Envoy, Inc., part of Dominion Enterprises, is a media and information services company. It is the parent company of Digital Element and Digital Resolve, whose primary product is the NetAcuity IP location service. The current president of Digital Envoy is Jerrod Stoller. Company and history Founded in 1999 by Rob Friedman, Sanjay Parekh and Dennis Maicon, Digital Envoy patented and introduced geotargeting technology (also known as geolocation or IP location technology). In 2005, Digital Envoy created two business units: Digital Element and Digital Resolve. In 2007, Landmark Communications, Inc. purchased Digital Envoy. The company became part of Dominion Enterprises in 2009. Digital Envoy, through its Digital Element business unit, is often considered to be the main supplier of IP targeting technology to the online advertising industry. According to AdOps Insider, "practically speaking, most of the advertising industry relies on a small company called Digital Envoy" for IP address geolocation of advertisements. In August 2021, Digital Envoy acquired the location data provider X-Mode and rebranded it as "Outlogic". In 2024, the Federal Trade Commission (FTC) issued an order prohibiting Digital Envoy's data broker companies X-Mode Social and its successor Outlogic from selling or sharing sensitive location data. The action followed allegations that the company sold precise location data without proper consumer consent, potentially revealing visits to sensitive locations such as medical clinics and places of worship. References Mass media companies of the United States Geomarketing Data collection Data brokers
Digital Envoy
Technology
311
40,059,937
https://en.wikipedia.org/wiki/Mean%20field%20annealing
Mean field annealing is a deterministic approximation to the simulated annealing technique of solving optimization problems. This method uses mean field theory and is based on Peierls' inequality. References Mathematical optimization
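The idea can be illustrated with a minimal sketch (an illustrative Ising-style formulation; the coupling matrix w, the bias vector b, and the geometric cooling schedule are assumptions chosen for this example, not details fixed by the method):

```python
import math

def mean_field_annealing(w, b, T=5.0, T_min=1e-3, cooling=0.9, sweeps=10):
    """Deterministic 'annealing': instead of sampling random spin flips as in
    simulated annealing, iterate the mean-field equations
        v_i = tanh((sum_j w[i][j] * v[j] + b[i]) / T)
    while lowering the temperature T on a geometric schedule.  The averages
    v_i harden into a spin configuration sign(v_i) as T approaches 0."""
    n = len(w)
    v = [0.0] * n  # mean spin values in (-1, 1)
    while T > T_min:
        for _ in range(sweeps):
            for i in range(n):
                h = b[i] + sum(w[i][j] * v[j] for j in range(n) if j != i)
                v[i] = math.tanh(h / T)
        T *= cooling
    return [1 if x > 0 else -1 for x in v]
```

Because every spin is replaced by its thermal average rather than flipped at random, the procedure is fully deterministic: the same couplings and biases always yield the same final configuration.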
Mean field annealing
Mathematics
42
11,650,133
https://en.wikipedia.org/wiki/Process%20decision%20program%20chart
Process Decision Program Chart (PDPC) is a technique designed to help prepare contingency plans. The emphasis of the PDPC is to identify the consequential impact of failure on activity plans, and to create appropriate contingency plans to limit risks. Process diagrams and planning tree diagrams are extended by a couple of levels when the PDPC is applied to the bottom-level tasks on those diagrams. Methodology From the bottom level of some activity box, the PDPC adds levels for: identifying what can go wrong (failure mode or risks) consequences of that failure (effect or consequence) possible countermeasures (risk mitigation action plan) Similar techniques The PDPC is similar to the failure mode and effects analysis (FMEA) in that both identify risks, consequences of failure, and contingency actions. The FMEA adds prioritized risk levels through rating relative risk for each potential failure point. The Evaporating Cloud is a visually similar technique that is used for conflict management and problem solving. It follows the flow of data, either horizontally or vertically, breaking ideas into increments of change that are easy to follow. References Further reading Charts Disaster management tools Failure Reliability engineering Risk analysis methodologies
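The three added levels can be pictured with a small data sketch (the task, failure text, and field names below are hypothetical, invented purely for illustration):

```python
# Hypothetical bottom-level task from a planning tree, extended with the
# three PDPC levels: failure mode, consequence, and countermeasures.
task = {
    "task": "Migrate database",
    "risks": [
        {
            "failure": "Data loss during transfer",        # what can go wrong
            "consequence": "Service outage, lost orders",  # effect of that failure
            "countermeasures": [                           # risk mitigation plan
                "Take full backup first",
                "Dry-run on a staging copy",
            ],
        }
    ],
}

def contingency_report(node):
    """Flatten a PDPC node into (failure, consequence, countermeasure) rows."""
    rows = []
    for risk in node["risks"]:
        for counter in risk["countermeasures"]:
            rows.append((risk["failure"], risk["consequence"], counter))
    return rows
```

Each row of the report pairs one identified failure and its consequence with one contingency action, mirroring the levels the PDPC appends below a bottom-level activity box.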
Process decision program chart
Engineering
244
11,569,233
https://en.wikipedia.org/wiki/Amphobotrys%20ricini
Amphobotrys ricini is a species of fungus in the family Sclerotiniaceae. It is a plant pathogen that causes disease on several species including gray mold blight on Euphorbia milii and poinsettia. Originally described as a species of Botrytis in 1949, it was transferred to the genus Amphobotrys in 1973. References Fungi described in 1949 Fungal plant pathogens and diseases Sclerotiniaceae Fungus species
Amphobotrys ricini
Biology
94
5,924,217
https://en.wikipedia.org/wiki/Hilbert%20symbol
In mathematics, the Hilbert symbol or norm-residue symbol is a function (–, –) from K× × K× to the group of nth roots of unity in a local field K such as the fields of reals or p-adic numbers. It is related to reciprocity laws, and can be defined in terms of the Artin symbol of local class field theory. The Hilbert symbol was introduced by David Hilbert in his Zahlbericht, with the slight difference that he defined it for elements of global fields rather than for the larger local fields. The Hilbert symbol has been generalized to higher local fields. Quadratic Hilbert symbol Over a local field K whose multiplicative group of non-zero elements is K×, the quadratic Hilbert symbol is the function (–, –) from K× × K× to {−1,1} defined by (a, b) = 1 if z² = ax² + by² has a non-zero solution (x, y, z) in K³, and (a, b) = −1 otherwise. Equivalently, (a, b) = 1 if and only if b is equal to the norm of an element of the quadratic extension K(√a). Properties The following three properties follow directly from the definition, by choosing suitable solutions of the diophantine equation above: If a is a square, then (a, b) = 1 for all b. For all a,b in K×, (a, b) = (b, a). For any a in K× such that a−1 is also in K×, we have (a, 1−a) = 1. The (bi)multiplicativity, i.e., (a, b1b2) = (a, b1)·(a, b2) for any a, b1 and b2 in K×, is, however, more difficult to prove, and requires the development of local class field theory. The third property shows that the Hilbert symbol is an example of a Steinberg symbol and thus factors over the second Milnor K-group K₂^M(K), which is by definition K× ⊗ K× / (a ⊗ (1−a), a ∈ K× \ {1}). By the first property it even factors over K₂^M(K)/2. This is the first step towards the Milnor conjecture. Interpretation as an algebra The Hilbert symbol can also be used to denote the central simple algebra over K with basis 1, i, j, k and multiplication rules i² = a, j² = b, ij = −ji = k. In this case the algebra represents an element of order 2 in the Brauer group of K, which is identified with −1 if it is a division algebra and +1 if it is isomorphic to the algebra of 2 by 2 matrices.
Hilbert symbols over the rationals For a place v of the rational number field and rational numbers a, b we let (a, b)v denote the value of the Hilbert symbol in the corresponding completion Qv. As usual, if v is the valuation attached to a prime number p then the corresponding completion is the p-adic field and if v is the infinite place then the completion is the real number field. Over the reals, (a, b)∞ is +1 if at least one of a or b is positive, and −1 if both are negative. Over the p-adics with p odd, writing a = p^α u and b = p^β v, where u and v are integers coprime to p, we have (a, b)p = (−1)^(αβ·ε(p)) (u/p)^β (v/p)^α, where ε(p) = (p − 1)/2 and the expression involves two Legendre symbols. Over the 2-adics, again writing a = 2^α u and b = 2^β v, where u and v are odd numbers, we have (a, b)2 = (−1)^(ε(u)ε(v) + αω(v) + βω(u)), where ε(u) = (u − 1)/2 and ω(u) = (u² − 1)/8. It is known that if v ranges over all places, (a, b)v is 1 for almost all places. Therefore, the following product formula makes sense: ∏v (a, b)v = 1. It is equivalent to the law of quadratic reciprocity. Kaplansky radical The Hilbert symbol on a field F defines a map to the Brauer group Br(F) of F, sending (a, b) to the class of the corresponding quaternion algebra. The kernel of this mapping, the elements a such that (a,b)=1 for all b, is the Kaplansky radical of F. The radical is a subgroup of F*/F*2, identified with a subgroup of F*. The radical is equal to F* if and only if F has u-invariant at most 2. In the opposite direction, a field with radical F*2 is termed a Hilbert field. The general Hilbert symbol If K is a local field containing the group of nth roots of unity for some positive integer n prime to the characteristic of K, then the Hilbert symbol (,) is a function from K*×K* to μn. In terms of the Artin symbol it can be defined by σa(b^(1/n)) = (a, b)·b^(1/n), where σa is the Artin symbol of a. Hilbert originally defined the Hilbert symbol before the Artin symbol was discovered, and his definition (for n prime) used the power residue symbol when K has residue characteristic coprime to n, and was rather complicated when K has residue characteristic dividing n.
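The explicit formulas over the rationals translate directly into a short computation. The following sketch (the helper names hilbert, legendre and split are ad hoc, not a standard API) also allows the product formula to be checked numerically:

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p and a coprime to p,
    computed via Euler's criterion a^((p-1)/2) mod p."""
    return 1 if pow(a % p, (p - 1) // 2, p) == 1 else -1

def split(n, p):
    """Write n = p**alpha * u with u coprime to p; return (alpha, u)."""
    alpha = 0
    while n % p == 0:
        n //= p
        alpha += 1
    return alpha, n

def hilbert(a, b, p=None):
    """Quadratic Hilbert symbol (a, b)_v for nonzero integers a, b.
    p=None denotes the infinite place (the reals)."""
    if p is None:
        return -1 if a < 0 and b < 0 else 1
    alpha, u = split(a, p)
    beta, v = split(b, p)
    if p == 2:
        # (-1)^(eps(u)*eps(v) + alpha*omega(v) + beta*omega(u))
        e = ((u - 1) // 2) * ((v - 1) // 2)
        e += alpha * ((v * v - 1) // 8) + beta * ((u * u - 1) // 8)
        return -1 if e % 2 else 1
    # (-1)^(alpha*beta*eps(p)) times the two Legendre symbols
    e = alpha * beta * ((p - 1) // 2)
    return (-1) ** (e % 2) * legendre(u, p) ** beta * legendre(v, p) ** alpha
```

Since (a, b)v = 1 at every place whose prime does not divide 2ab, the product formula can be verified by multiplying the symbol at the infinite place with the symbols at the primes dividing 2ab.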
Properties The Hilbert symbol is (multiplicatively) bilinear: (ab,c) = (a,c)(b,c) and (a,bc) = (a,b)(a,c); skew symmetric: (a,b) = (b,a)^−1; nondegenerate: (a,b)=1 for all b if and only if a is in K*n. It detects norms (hence the name norm residue symbol): (a,b)=1 if and only if a is a norm of an element in K(b^(1/n)). It has the "symbol" properties: (a,1–a)=1, (a,–a)=1. Hilbert's reciprocity law Hilbert's reciprocity law states that if a and b are in an algebraic number field containing the nth roots of unity then ∏p (a, b)p = 1, where the product is over the finite and infinite primes p of the number field, and where (,)p is the Hilbert symbol of the completion at p. Hilbert's reciprocity law follows from the Artin reciprocity law and the definition of the Hilbert symbol in terms of the Artin symbol. Power residue symbol If K is a number field containing the nth roots of unity, p is a prime ideal not dividing n, π is a prime element of the local field of p, and a is coprime to p, then the power residue symbol (a/p) is related to the Hilbert symbol by (a/p) = (π, a)p. The power residue symbol is extended to fractional ideals by multiplicativity, and defined for elements of the number field by putting (a/b) = (a/(b)), where (b) is the principal ideal generated by b. Hilbert's reciprocity law then implies the following reciprocity law for the residue symbol, for a and b prime to each other and to n: See also Azumaya algebra External links HilbertSymbol at Mathworld References Class field theory Quadratic forms David Hilbert
Hilbert symbol
Mathematics
1,375
560,807
https://en.wikipedia.org/wiki/Feedlot
A feedlot or feed yard is a type of animal feeding operation (AFO) which is used in intensive animal farming, notably beef cattle, but also swine, horses, sheep, turkeys, chickens or ducks, prior to slaughter. Large beef feedlots are called concentrated animal feeding operations (CAFO) in the United States and intensive livestock operations (ILOs) or confined feeding operations (CFO) in Canada. They may contain thousands of animals in an array of pens. The basic purpose of the feedlot is to increase the amount of fat gained by each animal as quickly as possible; if animals are kept in confined quarters rather than being allowed to range freely over grassland, they will gain weight more quickly and efficiently with the added benefit of economies of scale. Regulation Most feedlots require some type of governmental approval to operate, which generally consists of an agricultural site permit. Feedlots also would have an environmental plan in place to deal with the large amount of waste that is generated from the numerous livestock housed. The environmental farm plan is set in place to raise awareness about the environment and covers 23 different aspects around the farm that may affect the environment. The Environmental Protection Agency has authority under the Clean Water Act to regulate all animal feeding operations in the United States. This authority is delegated to individual states in some cases. In Canada, regulation of feedlots is shared between all levels of government. Certain provinces are required by law to have a nutrient management plan, which looks at everything the farm is going to feed to their animals, down to the minerals. New farms are required to complete and obtain a license under the livestock operations act, which looks at proper manure storage as well as proper distance away from other farms or dwellings. 
A mandatory RFID tag is required in every animal that passes through a Canadian feedlot; these are called CCIA tags (Canadian Cattle Identification Agency tags) and are controlled by the Canadian Food Inspection Agency (CFIA). In Australia this role is handled by the National Feedlot Accreditation Scheme (NFAS). Scheduling The cattle industry works in sequence: before entering a feedlot, young calves are typically born in the spring and spend the summer with their mothers in a pasture or on rangeland. These producers are called cow-calf operations and are essential for feedlot operations to run. Once the young calves reach a weight between they are rounded up and either sold directly to feedlots, or sent to cattle auctions for feedlots to bid on them. Once transferred to a feedlot, they are housed and looked after for the next six to eight months, where they are fed a total mixed ration to gain weight. Feedlot diets encourage growth of muscle mass and the distribution of some fat (known as marbling in butchered meat). The marbling is desirable to consumers, as it contributes to flavour and tenderness. These animals may gain an additional 400–600 pounds (180–270 kg) during their approximately 200 days in the feedlot, depending on their entrance weight into the lot and how well they gain muscle. Once cattle are fattened up to their finished weight, the fed cattle are transported to a slaughterhouse.
Some rations may also contain roughage such as corn stalks, straw, sorghum, or other hay, as well as cottonseed meal and premixes, which may contain antibiotics, fermentation products, micro- and macrominerals, and other essential ingredients that are purchased from mineral companies, usually in sacked form, for blending into commercial rations. Feed companies may add a drug prescribed by a veterinarian into a farm's feed if required. Farmers generally work with nutritionists who aid in the formulation of these rations to ensure their animals are getting the recommended levels of minerals and vitamins, but also to make sure the animals are not wasting feed in their manure. In the American northwest and Canada, barley, low-grade durum wheat, chickpeas (garbanzo beans), oats and occasionally potatoes are used as feed. In a typical feedlot, a cow's diet is roughly 62% roughage, 31% grain, 5% supplements (minerals and vitamins), and 2% premix. High-grain diets lower the pH in the animals' rumen. Due to the stressors of these conditions, and due to some illnesses, it may be necessary to give the animals antibiotics on occasion. Animal health and welfare A feedlot is highly dependent on the health of its livestock, as disease can have a great impact on the animals, and controlling sickness can be difficult with numerous animals living together. Many feedlots will have an entrance protocol in which new animals entering the lot are given vaccines to protect them against potential sickness that may arise in the first few weeks in the feedlot. These entrance protocols are usually discussed and created with the farm's veterinarian, as there are numerous factors that can impact the health of feedlot cattle. One challenging but crucial role on a feedlot is to identify any sick cattle and treat them in order to return them to health.
Knowing when an animal is sick is sometimes difficult, as cattle are prey animals and will try to hide their weakness from potential threats. A sick animal will generally look gaunt, may have a snotty and/or dry nose, and will have droopy ears; catching these symptoms early may be the key to successfully treating an animal. The best indicator of health is the body temperature of a cow, but taking it is not always possible when looking over many animals per day. The diet of the animals and the different ingredients within the ration are controversial. Cattle in feedlots are fed grain rather than more natural forage. This is designed to make them gain weight faster, but it leads to internal abscesses and discomfort. Grain-based diets can also lead to the growth of harmful bacteria such as Clostridium perfringens and E. coli. Too much grain in the diet can cause cattle to have issues such as bloating, diarrhea and digestive discomfort, which is why close monitoring of the animals, as well as working with ruminant nutritionists, is very important for farmers. Animal welfare is a major point of controversy for feedlots today, as consumers have shown their concern for the welfare of these animals. Indoor feedlots with concrete surfaces can cause leg problems including swollen joints. On outdoor feedlots, welfare issues include mud in rainy areas; heat stress in feedlots that are not shaded; insufficient water to drink; excessive cold; and problems with cattle handling (e.g. electric prods). Water troughs shared among many cattle can increase the spread of diseases including bovine respiratory disease.
Generally, feedlots provide bedding for their animals such as straw, sawdust, wood shavings, or other byproducts from crops (soybean chaff, corn chaff), which are then mixed in with the manure as the livestock use the bedding. Once the bedding has outlasted its use, the manure is either spread directly on the fields or stockpiled to break down and begin composting. A less common type of recycling in the feedlot industry is liquid manure, in which minimal bedding is found in the manure so that it stays a liquid; it is then spread on the fields in liquid form. Increasing numbers of cattle feedlots are utilizing out-wintering pads made of timber residue bedding in their operations. Nutrients are retained in the waste timber and livestock effluent and can be recycled within the farm system after use. Biogas plants are also able to use livestock manure to create biofuels; these anaerobic digestion systems capture methane in a usable form while concentrating nitrogen, a valuable nutrient found in the manure, which can then be spread on the fields. History Cattle feeding on a large scale was first introduced in the early 1960s, when a demand for higher quality beef in large quantities emerged. Farmers started becoming familiar with the finishing of beef, but also showed interest in various other aspects associated with the feedlot such as soil health, crop management, and how to manage labour costs. From the early 1960s to the 1990s, feeding beef cattle in the feedlot style showed immense growth, and even today the feedlot industry is constantly being upgraded with new knowledge, science and technology. In the early 20th century, feeder operations were separate from all other related operations and feedlots were non-existent. They appeared in the 1950s and 1960s as a result of hybrid grains and irrigation techniques; the ensuing larger grain crops led to abundant grain harvests.
It was suddenly possible to feed large numbers of cattle in one location and so, to cut transportation costs, grain farms and feedlot locations merged. Cattle were no longer sent from all across the southern states to places like California, where large slaughter houses were located. In the 1980s, meat packers followed the path of feedlots and are now located close by to them as well. Marketing There are many methods used to sell cattle to meat packers. Spot, or cash, marketing is the traditional and most commonly used method. Prices are influenced by current supply & demand and are determined by live weight or per head. Similar to this is forward contracting, in which prices are determined the same way but are not directly influenced by market demand fluctuations. Forward contracts determine the selling price between the two parties negotiating for a set amount of time. However, this method is the least used because it requires some knowledge of production costs and the willingness of both sides to take a risk in the futures market. Another method, formula pricing, is becoming the most popular process, as it more accurately represents the value of meat received by the packer. This requires trust between the packers and feedlots though, and is under criticism from the feedlots because the amount paid to the feedlots is determined by the packers’ assessment of the meat received. Finally, live- or carcass-weight based formula pricing is most common. Other types include grid pricing and boxed beef pricing. The most controversial marketing method stems from the vertical integration of packer-owned feedlots, which still represents less than 10% of all methods, but has been growing over the years. Alternatives The alternative to feedlots is to allow cattle to graze on grass throughout their lives, but this is not efficient and can be very challenging. For Canada and the Northern USA, year round grazing is not possible due to the severe winter weather conditions. 
Controlled grazing methods of this sort necessitate higher beef prices and the cattle take longer to reach market weight. See also Intensive fish farm Golden Triangle of Meat-packing Livestock Managed intensive grazing Temple Grandin References Further reading Encyclopedia of Oklahoma History and Culture – Feedlots External links Canada Beef Inc Texas Cattle Feeders Association Clean Water and Factory Farms – Inhumane Treatment of Farm Animals Australian Lot Feeders Association "Power Steer", Michael Pollan, New York Times, March 31, 2002 Broken Bow South Lot, possibly the world's largest capacity Livestock Meat industry Intensive farming Cruelty to animals
Feedlot
Chemistry
2,401
343,286
https://en.wikipedia.org/wiki/Toffoli%20gate
In logic circuits, the Toffoli gate, also known as the CCNOT gate (“controlled-controlled-not”), invented by Tommaso Toffoli, is a CNOT gate with two control qubits and one target qubit. That is, the target qubit (third qubit) will be inverted if the first and second qubits are both 1. It is a universal reversible logic gate, which means that any classical reversible circuit can be constructed from Toffoli gates. The truth table and matrix are as follows: Background An input-consuming logic gate L is reversible if it meets the following conditions: (1) L(x) = y is a gate where for any output y, there is a unique input x; (2) The gate L is reversible if there is a gate L´(y) = x which maps y to x, for all y. An example of a reversible logic gate is a NOT, which can be described by its truth table below: The common AND gate is not reversible, because the inputs 00, 01 and 10 are all mapped to the output 0. Reversible gates have been studied since the 1960s. The original motivation was that reversible gates dissipate less heat (or, in principle, no heat). More recent motivation comes from quantum computing. In quantum mechanics the quantum state can evolve in two ways: by Schrödinger's equation (unitary transformations), or by their collapse. Logic operations for quantum computers, of which the Toffoli gate is an example, are unitary transformations and therefore evolve reversibly. Hardware description The classical Toffoli gate can be implemented in a hardware description language such as Verilog (using continuous assignments, since the outputs are combinational nets): module toffoli_gate (input u1, input u2, input in, output v1, output v2, output out); assign v1 = u1; assign v2 = u2; assign out = in ^ (u1 & u2); endmodule Universality and Toffoli gate Any reversible gate that consumes its inputs and allows all input computations must have no more input bits than output bits, by the pigeonhole principle. For one input bit, there are two possible reversible gates. One of them is NOT.
The other is the identity gate, which maps its input to the output unchanged. For two input bits, the only non-trivial gate is the controlled NOT gate (CNOT), which XORs the first bit to the second bit and leaves the first bit unchanged. Unfortunately, there are reversible functions that cannot be computed using just those gates. For example, AND cannot be achieved by those gates. In other words, the set consisting of NOT and XOR gates is not universal. To compute an arbitrary function using reversible gates, the Toffoli gate, proposed in 1980 by Toffoli, can indeed achieve the goal. It can be also described as mapping bits {a, b, c} to {a, b, c XOR (a AND b)}. This can also be understood as a modulo operation on bit c: {a, b, c} → {a, b, (c + ab) mod 2}, often written as {a, b, c} → {a, b, c ⨁ ab}. The Toffoli gate is universal; this means that for any Boolean function f(x1, x2, ..., xm), there is a circuit consisting of Toffoli gates that takes x1, x2, ..., xm and some extra bits set to 0 or 1 to outputs x1, x2, ..., xm, f(x1, x2, ..., xm), and some extra bits (called garbage). A NOT gate, for example, can be constructed from a Toffoli gate by setting the three input bits to {a, 1, 1}, making the third output bit (1 XOR (a AND 1)) = NOT a; (a AND b) is the third output bit from {a, b, 0}. Essentially, this means that one can use Toffoli gates to build systems that will perform any desired Boolean function computation in a reversible manner. Related logic gates The Fredkin gate is a universal reversible 3-bit gate that swaps the last two bits if the first bit is 1; a controlled-swap operation. The n-bit Toffoli gate is a generalization of the Toffoli gate. It takes n bits x1, x2, ..., xn as inputs and outputs n bits. The first n − 1 output bits are just x1, ..., xn−1. The last output bit is (x1 AND ... AND xn−1) XOR xn. The Toffoli gate can be realized by five two-qubit quantum gates, but it can be shown that it is not possible using fewer than five. 
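The constructions described above, NOT from inputs {a, 1, 1} and AND from {a, b, 0}, can be verified with a short classical simulation (an illustrative sketch of the classical gate; the function names are ad hoc, not tied to any particular library):

```python
def toffoli(a, b, c):
    """CCNOT on classical bits: flip c exactly when a and b are both 1,
    i.e. map (a, b, c) to (a, b, c XOR (a AND b))."""
    return a, b, c ^ (a & b)

def not_gate(a):
    """NOT from Toffoli: fix both controls to 1 and read the third output."""
    return toffoli(a, 1, 1)[2]

def and_gate(a, b):
    """AND from Toffoli: set the target bit to 0 and read the third output."""
    return toffoli(a, b, 0)[2]

# The gate is its own inverse, hence reversible: applying it twice
# restores every one of the eight possible inputs.
for bits in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]:
    assert toffoli(*toffoli(*bits)) == bits
```

Setting extra input bits to constants and discarding garbage outputs in this way is exactly how a Toffoli circuit computes an arbitrary Boolean function reversibly.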
Another universal gate, the Deutsch gate, can be realized by five optical pulses with neutral atoms; the Deutsch gate is, on its own, a universal gate for quantum computing. The Margolus gate (named after Norman Margolus), also called the simplified Toffoli, is very similar to a Toffoli gate but with a −1 in the diagonal: RCCX = diag(1, 1, 1, 1, 1, −1, X). The Margolus gate is also universal for reversible circuits and acts very similarly to a Toffoli gate, with the advantage that it can be constructed with about half the CNOT gates required for a Toffoli gate. The iToffoli gate was implemented in superconducting qubits with pair-wise coupling by simultaneously applying noncommuting operations. Relation to quantum computing Any reversible gate can be implemented on a quantum computer, and hence the Toffoli gate is also a quantum operator. However, the Toffoli gate cannot be used for universal quantum computation, though it does mean that a quantum computer can implement all possible classical computations. The Toffoli gate has to be combined with some inherently quantum gate(s) in order to be universal for quantum computation. In fact, any single-qubit gate with real coefficients that can create a nontrivial quantum state suffices. A Toffoli gate based on quantum mechanics was successfully realized in January 2009 at the University of Innsbruck, Austria. While a circuit-model implementation of an n-qubit Toffoli requires at least 2n CNOT gates, the best known upper bound stands at 6n − 12 CNOT gates. It has been suggested that trapped-ion quantum computers may be able to implement an n-qubit Toffoli gate directly. The application of many-body interactions could be used for direct operation of the gate in trapped-ion, Rydberg-atom and superconducting-circuit implementations. Following the dark-state manifold, the Khazali–Mølmer Cn-NOT gate operates with only three pulses, departing from the circuit-model paradigm.
The iToffoli gate was implemented in a single step using three superconducting qubits with pair-wise coupling. See also Controlled NOT gate Fredkin gate Reversible computing Bijection Quantum computing Quantum logic gate Quantum programming Adiabatic logic References External links CNOT and Toffoli Gates in Multi-Qubit Setting at the Wolfram Demonstrations Project. Logic gates Quantum gates Reversible computing Italian inventions
Toffoli gate
Physics
1,581
68,046,678
https://en.wikipedia.org/wiki/Nirogacestat
Nirogacestat, sold under the brand name Ogsiveo, is an anti-cancer medication used for the treatment of desmoid tumors. It is a selective gamma secretase inhibitor that is taken by mouth. The most common side effects include diarrhea, ovarian toxicity, rash, nausea, fatigue, stomatitis, headache, abdominal pain, cough, alopecia, upper respiratory tract infection and dyspnea. Nirogacestat was approved for medical use in the United States in November 2023. It is the first medication approved by the US Food and Drug Administration (FDA) for the treatment of desmoid tumors. The FDA considers it to be a first-in-class medication. Medical uses Nirogacestat is indicated for adults with progressing desmoid tumors who require systemic treatment. History The effectiveness of nirogacestat was evaluated in DeFi (NCT03785964), an international, multicenter, randomized (1:1), double-blind, placebo-controlled trial in 142 adult participants with progressing desmoid tumors not amenable to surgery. Participants were randomized to receive 150 milligrams (mg) of nirogacestat or placebo orally, twice daily, until disease progression or unacceptable toxicity. The main efficacy outcome measure was progression-free survival (the length of time after the start of treatment for which a person is alive and their cancer does not grow or spread). Objective response rate (a measure of tumor shrinkage) was an additional efficacy outcome measure. The pivotal clinical trial demonstrated that nirogacestat provided clinically meaningful and statistically significant improvement in progression-free survival compared to placebo. Additionally, the objective response rate was also statistically different between the two arms with a response rate of 41% in the nirogacestat arm and 8% in the placebo arm. The progression-free survival results were also supported by an assessment of patient-reported pain favoring the nirogacestat arm. 
As of 2021, nirogacestat was in phase II clinical trials for unresectable desmoid tumors. A phase III clinical trial, DeFi, was also in progress for nirogacestat in adults with desmoid tumors and aggressive fibromatosis, and three further trials were recruiting patients to combine nirogacestat with other anticancer therapies in multiple myeloma, including the UNIVERSAL study of nirogacestat with the allogeneic CAR-T therapy ALLO-715. The FDA granted the application for nirogacestat priority review, fast track, breakthrough therapy, and orphan drug designations. The FDA granted the approval of Ogsiveo to SpringWorks Therapeutics Inc. Society and culture Legal status Nirogacestat was granted breakthrough therapy designation by the FDA in September 2019, for adults with progressive, unresectable, recurrent or refractory desmoid tumors or deep fibromatosis. References Amides Chemotherapy Fluoroarenes Gamma secretase inhibitors Imidazoles Orphan drugs Secondary amines Tetralins Neopentyl compounds
Nirogacestat
Chemistry
659
52,974,068
https://en.wikipedia.org/wiki/N%2CN%2CN%E2%80%B2%2CN%E2%80%B2-Tetramethylformamidinium%20chloride
N,N,N′,N′-Tetramethylformamidinium chloride is the simplest representative of quaternary formamidinium cations of the general formula [R2N−CH=NR2]+ with a chloride as a counterion, in which all hydrogen atoms of the protonated formamidine [HC(=NH2)NH2]+ are replaced by methyl groups. Deprotonation results in the exceptionally basic bis(dimethylamino)carbene R2N−C̈−NR2. Preparation It is generated by protonation of (CH3)3COCH(N(CH3)2)2 (Bredereck's reagent). (CH3)3COCH(N(CH3)2)2 + H+ → (CH3)3COH + [CH(N(CH3)2)2]+ N,N,N′,N′-Tetramethylformamidinium chloride is also obtained in high yield (95%) in the reaction of dimethylformamide (DMF) with dimethylcarbamoyl chloride. The conversion of DMF with thionyl chloride in a ratio of 3:1 gives the product in a significantly lower yield (72%), which appears, however, more realistic in view of the tricky handling of the chloride salt. Properties N,N,N′,N′-Tetramethylformamidinium chloride is a light yellow, strongly hygroscopic solid. For drying, the salt is dissolved in dichloromethane and the solution is treated with solid anhydrous sodium sulfate. After several dissolutions in dichloromethane and acetone, and precipitations with tetrahydrofuran, a colorless solid is obtained, which is stable when sealed from air and moisture. The question of a mesomeric equilibrium between the ionic formamidinium chloride and the covalent bis(dimethylamino)chloromethane structure was decided, by reaction with germanium dichloride or tin(II) chloride, in favour of the ionic N,N,N′,N′-tetramethylformamidinium chloride. The hygroscopicity of the chloride salt complicates the handling of the compound. Therefore, syntheses of the more easily handled salts N,N,N′,N′-tetramethylformamidinium methylsulfate (from the dimethylformamide–dimethyl sulfate complex) and N,N,N′,N′-tetramethylformamidinium p-toluenesulfonate (from dimethylformamide and p-toluenesulfonyl chloride) were also investigated.
Applications N,N,N′,N′-Tetramethylformamidinium chloride is useful as a reagent for aminomethylenation (that is, to introduce a =CH−NR1R2 function into CH-acidic compounds). For example, ethyl cyanoacetate reacts with the formamidinium salt in the presence of solid sodium hydroxide to give ethyl (dimethylaminomethylene)cyanoacetate in practically quantitative yield. The aminomethylenation provides intermediates for the synthesis of heterocycles such as indoles, pyrimidines, pyridines and quinolones. N,N,N′,N′-Tetramethylformamidinium chloride reacts with alkali metal dimethylamides (such as lithium dimethylamide or sodium dimethylamide) to give tris(dimethylamino)methane in yields of 55% to 84%. The reaction product is suited as a reagent for formylation and aminomethylenation. From N,N,N′,N′-tetramethylformamidinium chloride and sodium ethoxide in ethanol, dimethylformamide diethyl acetal is formed in 68% yield. In aqueous sodium cyanide, N,N,N′,N′-tetramethylformamidinium reacts to give bis(dimethylamino)acetonitrile. From N,N,N′,N′-tetramethylformamidinium and anhydrous hydrogen cyanide, dimethylaminomalonic acid dinitrile is obtained in 92% yield. N,N,N′,N′-Tetramethylformamidinium can also be converted by reaction with cyclo-aliphatic amines into the corresponding heterocyclic formamidines. The use of N,N,N′,N′-tetramethylformamidinium chloride as a catalyst in the preparation of acyl chlorides from carboxylic acids and phosgene has also been reported. Strong bases (such as phenyllithium) can abstract a proton from the formamidinium cation, forming bis(dimethylamino)carbene. See also Bis(dimethylamino)methane References Amidines Dimethylamino compounds
N,N,N′,N′-Tetramethylformamidinium chloride
Chemistry
1,115
1,214,697
https://en.wikipedia.org/wiki/Wet-bulb%20globe%20temperature
The wet-bulb globe temperature (WBGT) is a measure of environmental heat as it affects humans. Unlike a simple temperature measurement, WBGT accounts for all four major environmental heat factors: air temperature, humidity, radiant heat (from sunlight or sources such as furnaces), and air movement (wind or ventilation). It is used by industrial hygienists, athletes, sporting events and the military to determine appropriate exposure levels to high temperatures. A WBGT meter combines three sensors: a dry-bulb thermometer, a natural (static) wet-bulb thermometer, and a black globe thermometer. For outdoor environments, the meter uses all sensor data inputs, calculating WBGT as: WBGT = 0.7 Tw + 0.2 Tg + 0.1 Td, where Tw = Natural wet-bulb temperature (combined with dry-bulb temperature indicates humidity) Tg = Globe thermometer temperature (measured with a globe thermometer, also known as a black globe thermometer) Td = Dry-bulb temperature (actual air temperature). Temperatures may be in either Celsius or Fahrenheit. Indoors, where there is no solar radiation load, the following formula is used: WBGT = 0.7 Tw + 0.3 Tg. If a meter is not available, the WBGT can be calculated from current or historic weather data. A clothing adjustment may be added to the WBGT to determine the "effective WBGT", WBGTeff. Uses The American Conference of Governmental Industrial Hygienists publishes threshold limit values (TLVs) that have been adopted by many governments for use in the workplace. The process for determining the WBGT is also described in ISO 7243, Hot Environments - Estimation of the Heat Stress on Working Man, Based on the WBGT Index. The American College of Sports Medicine bases its guidelines for the intensity of sport practices on the WBGT. In hot areas, some US military installations display a flag to indicate the heat category based on the WBGT. The military publishes guidelines for water intake and physical activity level for acclimated and unacclimated individuals in different uniforms based on the heat category.
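The two formulas and the flag categories can be sketched in code (a minimal illustration; the function names are ours, and the Fahrenheit category thresholds are the bands from the table below):

```python
def wbgt_outdoor(tw, tg, td):
    """Outdoor WBGT: 70% natural wet-bulb + 20% black globe + 10% dry-bulb."""
    return 0.7 * tw + 0.2 * tg + 0.1 * td

def wbgt_indoor(tw, tg):
    """Indoor WBGT: with no solar radiation load, the dry-bulb term is dropped."""
    return 0.7 * tw + 0.3 * tg

def heat_category_f(wbgt_f):
    """Map a WBGT in degrees Fahrenheit to the five flag categories."""
    for upper_limit, category, flag in [(82.0, 1, "white"), (85.0, 2, "green"),
                                        (88.0, 3, "yellow"), (90.0, 4, "red")]:
        if wbgt_f < upper_limit:
            return category, flag
    return 5, "black"

# Example: Tw = 75 °F, Tg = 97 °F, Td = 85 °F gives WBGT = 80.4 °F,
# which falls in category 1 (white flag).
```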
The University of Georgia adapted these categories for use in college sports as a guideline for how strenuous practices can be. {| class="wikitable" !Category||WBGT (°F)||WBGT (°C)||Flag color |- |1||≤ 78–81.9||≤ 25.6–27.7||style="background:white;color:black"|White |- |2||82–84.9||27.8–29.4||style="background:limegreen;color:black"|Green |- |3||85–87.9||29.5–31.0||style="background:yellow;color:black"|Yellow |- |4||88–89.9||31.1–32.1||style="background:red;color:white"|Red |- |5||≥ 90||≥ 32.2||style="background:black;color:white"|Black |} Related temperature comfort measures The heat index used by the U.S. National Weather Service and the humidex used by the Meteorological Service of Canada, along with the wind chill used in both countries, are also measures of perceived heat or cold, but they do not account for the effects of radiation. The NWS office in Tulsa, Oklahoma, in conjunction with Oral Roberts University's mathematics department, published an approximation formula for the WBGT that takes into account cloud cover and wind speed; in limited experimentation (four samples), the office claimed the estimate was regularly accurate to within , even with a simplification that reduces the equation from a fourth-degree polynomial to a linear relationship (the authors noted that the linear approximation was not tested for air temperatures under since the WBGT is designed to measure heat stress, which seldom occurs below that threshold). See also Hygrometer References Further reading Air Force Pamphlet 48-151 U.S. Army Technical Bulletin Medical 507/Air Force Pamphlet 48-152 Zunis Foundation background article External links Thermal Comfort observations from the Australian Bureau of Meteorology Extreme Hot or Cold Temperature Conditions from the Canadian Centre for Occupational Health and Safety OSHA Technical Manual: Heat Stress from the U.S.
Occupational Safety and Health Administration Occupational safety and health Atmospheric thermodynamics Temperature Meteorological indices
Wet-bulb globe temperature
Physics,Chemistry
902
14,725,565
https://en.wikipedia.org/wiki/Homeobox%20A10
Homeobox protein Hox-A10 is a protein that in humans is encoded by the HOXA10 gene. Function In vertebrates, the genes encoding the class of transcription factors called homeobox genes are found in clusters named A, B, C, and D on four separate chromosomes. Expression of these proteins is spatially and temporally regulated during embryonic development. This gene is part of the A cluster on chromosome 7 and encodes a DNA-binding transcription factor that may regulate gene expression, morphogenesis, and differentiation. More specifically, it may function in fertility, embryo viability, and regulation of hematopoietic lineage commitment. Alternatively spliced transcript variants encoding different isoforms have been described. Downregulation of HOXA10 is observed in the human and baboon decidua after implantation and this downregulation promotes trophoblast invasion by activating STAT3. Interactions Homeobox A10 has been shown to interact with PTPN6. See also Homeobox References Further reading External links Transcription factors
Homeobox A10
Chemistry,Biology
222
28,076,898
https://en.wikipedia.org/wiki/Roman%20Zubarev
Roman A. Zubarev is a professor of medicinal proteomics in the Department of Medical Biochemistry and Biophysics at the Karolinska Institutet. His research focuses on the use of mass spectrometry in biology and medicine. Early life and education M.S. in applied physics at Moscow Engineering Physics Institute, 1986 Ph.D. in ion physics at Uppsala University, 1997 Research interests Electron-capture dissociation. In 1997, while in Fred McLafferty's lab at Cornell University, Zubarev discovered the phenomenon of electron-capture dissociation (ECD) of polypeptides. He later developed ECD and other ion–electron reactions as analytical techniques in Odense (1998–2002) and Uppsala (2002–2008). Isotopic resonance hypothesis. Discovered the phenomenon of isotopic resonance in Uppsala (2008), formulated the isotopic resonance hypothesis and experimentally verified it in Stockholm (2009–2013). Isoaspartate theory of Alzheimer's disease. The role of isoaspartate in Alzheimer's disease was first suggested as early as 1991. Over the next three decades the evidence gradually accumulated, strongly supported by blood proteomics data. Origin of life studies. Zubarev showed that the organic matter produced abiotically in a Miller–Urey experiment can be assimilated by bacteria, and thus argued that the early Earth was a hospitable place for life. For the first time in the history of science, Zubarev obtained a living cell from dead matter. In that landmark experiment, bacteria were separated into lipids, nucleic acids and proteins, and these ingredients were isolated and incubated separately and in a mixture. After incubation, the isolates were seeded on a Petri dish. While isolated molecules showed no growth (negative control), lipid-containing mixtures gave bacterial colonies, showing that life can self-assemble from a mixture of the right ingredients.
Awards Curt Brunnée Award, 2006 Biemann Medal, 2007 Gold medal, Russian Mass Spectrometry Society, 2013 External links Roman Zubarev (Karolinska Institutet) Roman Zubarev (Uppsala University) References Mass spectrometrists Living people Cornell University people Year of birth missing (living people) People from Stavropol
Roman Zubarev
Physics,Chemistry
479
14,644,044
https://en.wikipedia.org/wiki/Okimate%2010
The Okimate 10 by Oki Electric Industry was a low-cost 1980s color printer with interface "plug 'n print" modules for Commodore, Atari, IBM PC, and Apple Inc. home computers. Unlike direct thermal printers, which require heat-sensitive thermal paper, the Okimate used thermal transfer technology and was advertised as being able to print on any type of paper. In practice, however, printing on common printer/copier paper did not produce adequate results. The best results were obtained by printing on special "thermal transfer paper", which looks like ordinary copier paper but is actually an ultra-smooth paper for the wax transfer to adhere to. A thermal transfer printer contains a ribbon cartridge that uses a wax ink. When the heating elements in the print head heat up, they melt the wax and transfer it to the paper, hence the need for the paper to be very smooth. This also means that the ribbon cannot be reused after the head runs over it, since the wax transfers off the ribbon onto the paper. The Okimate 10 had two interchangeable wax-ink cartridges: a black one and a color one. The black cartridge was used for text printing, and the color one for graphics. The color ribbon had three primary colors which were overlaid and dithered on top of each other to create secondary colors. Thus, to print a graphic, the printer typically needed to make three passes over the same line before advancing. It was one of the first low-cost color printers available to consumers and became a popular printer for printing computer art drawn with software packages such as KoalaPad, Deluxe Paint, Doodle! and NEOchrome, but it was criticized for its slowness and high cost of operation, as the wax-coated ribbon only lasted for one pass, unlike an ink ribbon. The Okimate 10 was succeeded by the Okimate 20. Reception Ahoy! favorably reviewed the Okimate 10 with the Commodore 64 interface, calling the color output "impressive enough" given the slow speed.
It concluded that "for the home user for whom it is intended, it represents an excellent value". References External links Contemporary review of the Okimate 20 RUN Magazine Dec, 1986 Non-impact printing
Okimate 10
Technology
445
30,257,036
https://en.wikipedia.org/wiki/Methiopropamine
Methiopropamine (MPA), also known as N-methylthiopropamine, is an organic compound structurally related to methamphetamine. Originally reported in 1942, the molecule consists of a thiophene group with an alkyl amine substituent at the 2-position. It appeared for public sale in the United Kingdom in December 2010 as a "research chemical" or "legal high", recently branded as Blow. It has limited popularity as a recreational stimulant. Pharmacology Methiopropamine functions as a norepinephrine–dopamine reuptake inhibitor (NDRI) that is approximately 1.85 times more selective for norepinephrine than dopamine. It is approximately one-third as potent as dextroamphetamine as a norepinephrine reuptake inhibitor and one-fifth as much as a dopamine reuptake inhibitor. It displays negligible activity as a serotonin reuptake inhibitor. Methiopropamine has the potential for significant acute toxicity with cardiovascular, gastrointestinal, and psychotic symptoms. Metabolism For N-alkyl amphetamines, deamination and N-dealkylation are the major elimination pathways and renal excretion is a minor one. Methiopropamine is metabolized into active thiopropamine, 4-hydroxymethiopropamine and thiophene S-oxides. These N-demethylated metabolites are further deaminated by the cytochrome P450 enzyme CYP2C19 in the liver transforming them into inactive 1-(thiophen-2-yl)-2-propan-2-one which can be seen as a phenylacetone derivative. Thiophene-2-carboxylic acid is the final major metabolic product. It is very hydrophilic and is excreted in urine. Methiopropamine and especially thiopropamine are also excreted renally, unchanged. Synthesis There is a four-step synthesis of methiopropamine. 
It begins with (thiophen-2-yl)magnesium bromide, which is reacted with propylene oxide, yielding 1-(thiophen-2-yl)-2-hydroxypropane, which is reacted with phosphorus tribromide, yielding 1-(thiophen-2-yl)-2-bromopropane, which is finally reacted with methylamine, yielding 1-(thiophen-2-yl)-2-methylaminopropane. Legal status China As of October 2015, MPA is a controlled substance in China. Finland Methiopropamine is illegal in Finland; it is scheduled in the "government decree on narcotic substances, preparations and plants". Germany Methiopropamine is explicitly illegal in Germany. United Kingdom Following the ban on ethylphenidate, authorities noticed an increase in methiopropamine use by injecting users. The ACMD suggested it be banned on 18 November 2015 as it had similar effects to ethylphenidate. The government enacted a temporary class drug order (TCDO) a week later, which came into force on 27 November 2015. Though ordinarily a TCDO would only last one year, the ACMD reported that since its invocation the prevalence of MPA had significantly decreased, and that it had been challenging to collect information about the drug. As a result, they requested that the TCDO be extended a further year. Methiopropamine was made a Class B controlled drug under the Misuse of Drugs Act 1971 (as amended) (Amendment)(No.2) Order 2017 [SI 2017/1114], which came into effect on 27 November 2017. United States Methiopropamine is scheduled at the federal level in the United States. The DEA had planned to place methiopropamine in Schedule I of Controlled Substances and was accepting public comments until October 4, 2021. Later, the compound was placed in Schedule I. Florida Methiopropamine is a Schedule I controlled substance in the state of Florida, making it illegal to buy, sell, or possess in Florida.
Tasmania (Australia) Methiopropamine is a "controlled substance" and therefore an "illegal drug" to import, possess or sell/traffic in without express authority of the relevant government agency. See also 5-MMPA α-Pyrrolidinopentiothiophenone (α-PVT) Thiopropamine, demethylated counterpart Propylhexedrine, another ring substituted stimulant used as over-the-counter decongestant Thiothinone References Amines Designer drugs Norepinephrine–dopamine reuptake inhibitors Stimulants Thiopropamines
Methiopropamine
Chemistry
1,027
276,410
https://en.wikipedia.org/wiki/Module%20%28mathematics%29
In mathematics, a module is a generalization of the notion of vector space in which the field of scalars is replaced by a (not necessarily commutative) ring. The concept of a module also generalizes the notion of an abelian group, since the abelian groups are exactly the modules over the ring of integers. Like a vector space, a module is an additive abelian group, and scalar multiplication is distributive over the operations of addition between elements of the ring or module and is compatible with the ring multiplication. Modules are very closely related to the representation theory of groups. They are also one of the central notions of commutative algebra and homological algebra, and are used widely in algebraic geometry and algebraic topology. Introduction and definition Motivation In a vector space, the set of scalars is a field and acts on the vectors by scalar multiplication, subject to certain axioms such as the distributive law. In a module, the scalars need only be a ring, so the module concept represents a significant generalization. In commutative algebra, both ideals and quotient rings are modules, so that many arguments about ideals or quotient rings can be combined into a single argument about modules. In non-commutative algebra, the distinction between left ideals, ideals, and modules becomes more pronounced, though some ring-theoretic conditions can be expressed either about left ideals or left modules. Much of the theory of modules consists of extending as many of the desirable properties of vector spaces as possible to the realm of modules over a "well-behaved" ring, such as a principal ideal domain. 
However, modules can be quite a bit more complicated than vector spaces; for instance, not all modules have a basis, and, even for those that do (free modules), the number of elements in a basis need not be the same for all bases (that is to say that they may not have a unique rank) if the underlying ring does not satisfy the invariant basis number condition, unlike vector spaces, which always have a (possibly infinite) basis whose cardinality is then unique. (These last two assertions require the axiom of choice in general, but not in the case of finite-dimensional vector spaces, or certain well-behaved infinite-dimensional vector spaces such as Lp spaces.) Formal definition Suppose that R is a ring, and 1 is its multiplicative identity. A left R-module M consists of an abelian group (M, +) and an operation · : R × M → M such that for all r, s in R and x, y in M, we have r · (x + y) = r · x + r · y, (r + s) · x = r · x + s · x, (rs) · x = r · (s · x), and 1 · x = x. The operation · is called scalar multiplication. Often the symbol · is omitted, but in this article we use it and reserve juxtaposition for multiplication in R. One may write RM to emphasize that M is a left R-module. A right R-module MR is defined similarly in terms of an operation · : M × R → M. Authors who do not require rings to be unital omit condition 4 in the definition above; they would call the structures defined above "unital left R-modules". In this article, consistent with the glossary of ring theory, all rings and modules are assumed to be unital. An (R,S)-bimodule is an abelian group together with both a left scalar multiplication · by elements of R and a right scalar multiplication ∗ by elements of S, making it simultaneously a left R-module and a right S-module, satisfying the additional condition (r · x) ∗ s = r · (x ∗ s) for all r in R, x in M, and s in S. If R is commutative, then left R-modules are the same as right R-modules and are simply called R-modules. Examples If K is a field, then K-modules are called K-vector spaces (vector spaces over K).
If K is a field, and K[x] a univariate polynomial ring, then a K[x]-module M is a K-module with an additional action of x on M by a group homomorphism that commutes with the action of K on M. In other words, a K[x]-module is a K-vector space M combined with a linear map from M to M. Applying the structure theorem for finitely generated modules over a principal ideal domain to this example shows the existence of the rational and Jordan canonical forms. The concept of a Z-module agrees with the notion of an abelian group. That is, every abelian group is a module over the ring of integers Z in a unique way. For n > 0, let n · x = x + x + ... + x (n summands), 0 · x = 0, and (−n) · x = −(n · x). Such a module need not have a basis—groups containing torsion elements do not. (For example, in the group of integers modulo 3, one cannot find even one element that satisfies the definition of a linearly independent set, since when an integer such as 3 or 6 multiplies an element, the result is 0. However, if a finite field is considered as a module over the same finite field taken as a ring, it is a vector space and does have a basis.) The decimal fractions (including negative ones) form a module over the integers. Only singletons are linearly independent sets, but there is no singleton that can serve as a basis, so the module has no basis and no rank, in the usual sense of linear algebra. However, this module has a torsion-free rank equal to 1. If R is any ring and n a natural number, then the cartesian product Rn is both a left and right R-module over R if we use the component-wise operations. Hence when n = 1, R is an R-module, where the scalar multiplication is just ring multiplication. The case n = 0 yields the trivial R-module {0} consisting only of its identity element. Modules of this type are called free and if R has invariant basis number (e.g. any commutative ring or field) the number n is then the rank of the free module.
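The Z-module structure on an abelian group can be made concrete in code. The sketch below (class name and layout are ours) models Z/nZ with the forced scalar action n · x = x + ... + x, and illustrates why torsion prevents a basis:

```python
class ZModN:
    """The abelian group Z/nZ viewed as a module over the integers Z.

    The scalar action k . x is k-fold repeated addition, extended to
    negative k via (-k) . x = -(k . x), as in the definition above.
    """
    def __init__(self, n):
        self.n = n

    def add(self, x, y):
        return (x + y) % self.n

    def neg(self, x):
        return (-x) % self.n

    def smul(self, k, x):
        # For a Z-module the action is forced: k . x = x + x + ... + x,
        # which for Z/nZ is just multiplication reduced mod n.
        return (k * x) % self.n

m = ZModN(3)
# In Z/3Z every nonzero element is torsion: 3 . 1 = 0, so no singleton {x}
# is linearly independent and the module has no basis.
assert m.smul(3, 1) == 0
# A module axiom, e.g. (r + s) . x = r . x + s . x, holds:
r, s, x = 5, 7, 2
assert m.smul(r + s, x) == m.add(m.smul(r, x), m.smul(s, x))
```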
If Mn(R) is the ring of n × n matrices over a ring R, M is an Mn(R)-module, and ei is the matrix with 1 in the (i, i)-entry (and zeros elsewhere), then eiM is an R-module, since R ≅ eiMn(R)ei. So M breaks up as the direct sum of R-modules, M = e1M ⊕ ... ⊕ enM. Conversely, given an R-module M0, then M0⊕n is an Mn(R)-module. In fact, the category of R-modules and the category of Mn(R)-modules are equivalent. In the special case where the module M0 is just R as a module over itself, Rn is an Mn(R)-module. If S is a nonempty set, M is a left R-module, and MS is the collection of all functions f : S → M, then with addition and scalar multiplication in MS defined pointwise by (f + g)(s) = f(s) + g(s) and (r · f)(s) = r · f(s), MS is a left R-module. The right R-module case is analogous. In particular, if R is commutative then the collection of R-module homomorphisms h : M → N (see below) is an R-module (and in fact a submodule of NM). If X is a smooth manifold, then the smooth functions from X to the real numbers form a ring C∞(X). The set of all smooth vector fields defined on X forms a module over C∞(X), and so do the tensor fields and the differential forms on X. More generally, the sections of any vector bundle form a projective module over C∞(X), and by Swan's theorem, every projective module is isomorphic to the module of sections of some vector bundle; the category of C∞(X)-modules and the category of vector bundles over X are equivalent. If R is any ring and I is any left ideal in R, then I is a left R-module, and analogously right ideals in R are right R-modules. If R is a ring, we can define the opposite ring Rop, which has the same underlying set and the same addition operation, but the opposite multiplication: if ab = c in R, then ba = c in Rop. Any left R-module M can then be seen to be a right module over Rop, and any right module over R can be considered a left module over Rop. Modules over a Lie algebra are (associative algebra) modules over its universal enveloping algebra. If R and S are rings with a ring homomorphism f : R → S, then every S-module M is an R-module by defining r · m = f(r) · m.
In particular, S itself is such an R-module. Submodules and homomorphisms Suppose M is a left R-module and N is a subgroup of M. Then N is a submodule (or more explicitly an R-submodule) if for any n in N and any r in R, the product r · n (or n · r for a right R-module) is in N. If X is any subset of an R-module M, then the submodule spanned by X is defined to be the intersection ⋂ N, where N runs over the submodules of M that contain X, or explicitly the set of finite sums r1x1 + ... + rkxk with ri in R and xi in X, which is important in the definition of tensor products of modules. The set of submodules of a given module M, together with the two binary operations + (the module spanned by the union of the arguments) and ∩, forms a lattice that satisfies the modular law: given submodules U, N1, N2 of M such that N1 ⊆ N2, the following two submodules are equal: (N1 + U) ∩ N2 = N1 + (U ∩ N2). If M and N are left R-modules, then a map f : M → N is a homomorphism of R-modules if for any m, n in M and r, s in R, f(r · m + s · n) = r · f(m) + s · f(n). This, like any homomorphism of mathematical objects, is just a mapping that preserves the structure of the objects. Another name for a homomorphism of R-modules is an R-linear map. A bijective module homomorphism is called a module isomorphism, and the two modules M and N are called isomorphic. Two isomorphic modules are identical for all practical purposes, differing solely in the notation for their elements. The kernel of a module homomorphism is the submodule of M consisting of all elements that are sent to zero by f, and the image of f is the submodule of N consisting of values f(m) for all elements m of M. The isomorphism theorems familiar from groups and vector spaces are also valid for R-modules. Given a ring R, the set of all left R-modules together with their module homomorphisms forms an abelian category, denoted by R-Mod (see category of modules). Types of modules Finitely generated An R-module M is finitely generated if there exist finitely many elements x1, ..., xn in M such that every element of M is a linear combination of those elements with coefficients from the ring R.
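The modular law can be checked concretely for the Z-module Z, whose submodules are exactly the subgroups nZ. Representing each submodule by its non-negative generator, sum and intersection of submodules become gcd and lcm of generators (a small sketch; the helper names are ours):

```python
from math import gcd

# Submodules of the Z-module Z are the subgroups nZ, each determined by its
# non-negative generator n.  The lattice operations translate as:
#   aZ + bZ = gcd(a, b)Z        (module spanned by the union)
#   aZ ∩ bZ = lcm(a, b)Z        (intersection)
def add_sub(a, b):
    return gcd(a, b)

def cap_sub(a, b):
    return a * b // gcd(a, b) if a and b else 0

def modular_law_holds(n1, u, n2):
    """Check (N1 + U) ∩ N2 == N1 + (U ∩ N2) for N1 ⊆ N2 (i.e. n2 divides n1)."""
    assert n1 % n2 == 0, "requires N1 ⊆ N2"
    lhs = cap_sub(add_sub(n1, u), n2)
    rhs = add_sub(n1, cap_sub(u, n2))
    return lhs == rhs

# N1 = 4Z ⊆ N2 = 2Z, U = 6Z: both sides equal 2Z.
assert modular_law_holds(4, 6, 2)
```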
Cyclic A module is called a cyclic module if it is generated by one element. Free A free R-module is a module that has a basis, or equivalently, one that is isomorphic to a direct sum of copies of the ring R. These are the modules that behave very much like vector spaces. Projective Projective modules are direct summands of free modules and share many of their desirable properties. Injective Injective modules are defined dually to projective modules. Flat A module is called flat if taking the tensor product of it with any exact sequence of R-modules preserves exactness. Torsionless A module is called torsionless if it embeds into its algebraic dual. Simple A simple module S is a module that is not {0} and whose only submodules are {0} and S. Simple modules are sometimes called irreducible. Semisimple A semisimple module is a direct sum (finite or not) of simple modules. Historically these modules are also called completely reducible. Indecomposable An indecomposable module is a non-zero module that cannot be written as a direct sum of two non-zero submodules. Every simple module is indecomposable, but there are indecomposable modules that are not simple (e.g. uniform modules). Faithful A faithful module M is one where the action of each nonzero r in R on M is nontrivial (i.e. r · x ≠ 0 for some x in M). Equivalently, the annihilator of M is the zero ideal. Torsion-free A torsion-free module is a module over a ring such that 0 is the only element annihilated by a regular element (non zero-divisor) of the ring; equivalently, r · m = 0 implies that r is a zero-divisor or m = 0. Noetherian A Noetherian module is a module that satisfies the ascending chain condition on submodules, that is, every increasing chain of submodules becomes stationary after finitely many steps. Equivalently, every submodule is finitely generated. Artinian An Artinian module is a module that satisfies the descending chain condition on submodules, that is, every decreasing chain of submodules becomes stationary after finitely many steps.
Graded A graded module is a module with a decomposition as a direct sum M = ⊕x Mx over a graded ring R = ⊕x Rx such that RxMy ⊆ Mx+y for all x and y. Uniform A uniform module is a module in which all pairs of nonzero submodules have nonzero intersection. Further notions Relation to representation theory A representation of a group G over a field k is a module over the group ring k[G]. If M is a left R-module, then the action of an element r in R is defined to be the map that sends each x to rx (or xr in the case of a right module), and is necessarily a group endomorphism of the abelian group (M, +). The set of all group endomorphisms of M is denoted EndZ(M) and forms a ring under addition and composition, and sending a ring element r of R to its action actually defines a ring homomorphism from R to EndZ(M). Such a ring homomorphism is called a representation of R over the abelian group M; an alternative and equivalent way of defining left R-modules is to say that a left R-module is an abelian group M together with a representation of R over it. Such a representation may also be called a ring action of R on M. A representation is called faithful if and only if the map R → EndZ(M) is injective. In terms of modules, this means that if r is an element of R such that r · x = 0 for all x in M, then r = 0. Every abelian group is a faithful module over the integers or over some ring of integers modulo n, Z/nZ.
Modules over commutative rings can be generalized in a different direction: take a ringed space (X, OX) and consider the sheaves of OX-modules (see sheaf of modules). These form a category OX-Mod, and play an important role in modern algebraic geometry. If X has only a single point, then this is a module category in the old sense over the commutative ring OX(X). One can also consider modules over a semiring. Modules over rings are abelian groups, but modules over semirings are only commutative monoids. Most applications of modules are still possible. In particular, for any semiring S, the matrices over S form a semiring over which the tuples of elements from S are a module (in this generalized sense only). This allows a further generalization of the concept of vector space incorporating the semirings from theoretical computer science. Over near-rings, one can consider near-ring modules, a nonabelian generalization of modules. See also Group ring Algebra (ring theory) Module (model theory) Module spectrum Annihilator Notes References F.W. Anderson and K.R. Fuller: Rings and Categories of Modules, Graduate Texts in Mathematics, Vol. 13, 2nd Ed., Springer-Verlag, New York, 1992. Nathan Jacobson: Structure of Rings, Colloquium Publications, Vol. 37, 2nd Ed., AMS Bookstore, 1964. External links Algebraic structures Module
Module (mathematics)
Mathematics
3,481
47,707,754
https://en.wikipedia.org/wiki/EGGS%20Design
EGGS is a design agency founded in Oslo, Norway, in 2012. It specialises in cross-disciplinary design projects where expertise in a diversity of fields is required. With offices in Norway, Brazil, and Denmark, EGGS has created projects across a variety of industrial sectors. History The consultancy was founded in Oslo in 2012 through the merging of the established product design agency Kadabra Design and the digital design agency Oslo-D. EGGS has offices in Trondheim, Stavanger, and Oslo in Norway, in São Paulo in Brazil, and opened its fifth office in Copenhagen, Denmark, in 2018. Organisational structure The agency is owned by its employees, who work in cross-disciplinary teams with the goal of incorporating clients and end-users in a user-centered design process. More than 80 people with expertise in service design, digital design, UX, interaction design, product design, technology and development, innovation, process facilitation, education, business design, and organisational design combine to deliver projects in a variety of industries, including maritime, marine technology, consumer goods, healthcare, public health care services, IT, oil and energy, banking and finance, transportation, and airport infrastructure and services. Design awards A number of EGGS projects have received design awards, and the agency itself has been recognised with the Young Talent Design Award from the Norwegian Design Council, the Red Dot Design Award, and awards in the Norwegian State Design Competition. References External links Companies based in Oslo Design companies
EGGS Design
Engineering
297
26,910,524
https://en.wikipedia.org/wiki/Computational%20transportation%20science
Computational Transportation Science (CTS) is an emerging discipline that combines computer science and engineering with the modeling, planning, and economic aspects of transport. The discipline studies how to improve the safety, mobility, and sustainability of the transport system by taking advantage of information technologies and ubiquitous computing. The subjects encompassed by CTS go beyond vehicular technology, addressing pedestrian systems on hand-held devices as well as issues such as transport data mining (or movement analysis) and data management aspects. CTS allows for an increasing flexibility of the system, as local and autonomous negotiations between transport peers, partners, and supporting infrastructure are allowed. Thus, CTS provides means to study localized computing, self-organization, cooperation, and simulation of transport systems. Several academic conferences on CTS have been held to date: The Fourth ACM SIGSPATIAL International Workshop on Computational Transportation Science The Third ACM SIGSPATIAL International Workshop on Computational Transportation Science Dagstuhl Seminar 10121 on Computational Transportation Science The Second International Workshop on Computational Transportation Science The First International Workshop on Computational Transportation Science There is also an IGERT PhD program in Computational Transportation Science at the University of Illinois at Chicago. References External links Computational Transportation Science Transportation engineering Computational science Computational fields of study
Computational transportation science
Mathematics,Technology,Engineering
266
6,185,605
https://en.wikipedia.org/wiki/Principles%20of%20attention%20stress
The principles of attention stress is a user interface design theory to measure the amount of attention required to perform certain tasks in a web application. It was developed by Antradar Software in an attempt to benchmark the ease of use of open-source CMS products and to monitor trends in UI design. The attention stress theory is based on many psychological observations, of which the two most important are attention shift and selection threshold. Attention shift addresses the issue of "getting lost", or the experience of a "broken flow". It is usually measured by the number of page refreshes or the amount of hand–eye coordination required to complete a task. According to attention shift, new pages cause more stress than pop-ups, and pop-ups are more "expensive" than things like inline editing. Selection threshold deals with the matter of "being overwhelmed". It is observed that when users are presented with more than 4 choices at a time, their decisions tend to be based on random guessing rather than reasoning. This is especially true of users who suffer minor dyslexic symptoms. A well-known solution to this problem is the "personal menu" in Microsoft Office products, where rarely used menu items are hidden from the users. Although the emergence of AJAX provides many ways to reduce attention shift, the paradox between attention shift and selection threshold still cannot be resolved. Because of the nature of some application logic, the overall attention stress bears a lower bound. This limit is termed "UI capacity" in the principles of attention stress. See also Attention management Attentive user interface Cognitive load User interfaces
Principles of attention stress
Technology
324
421,597
https://en.wikipedia.org/wiki/Tractive%20effort
In railway engineering, the term tractive effort describes the pulling or pushing capability of a locomotive. The published tractive force value for any vehicle may be theoretical—that is, calculated from known or implied mechanical properties—or obtained via testing under controlled conditions. The discussion herein covers the term's usage in mechanical applications in which the final stage of the power transmission system is one or more wheels in frictional contact with a railroad track. Defining tractive effort The term tractive effort is often qualified as starting tractive effort, continuous tractive effort and maximum tractive effort. These terms apply to different operating conditions, but are related by common mechanical factors: input torque to the driving wheels, the wheel diameter, coefficient of friction (μ) between the driving wheels and supporting surface, and the weight applied to the driving wheels (w). The product of w and μ is the factor of adhesion, which determines the maximum torque that can be applied before the onset of wheelspin or wheelslip. Starting tractive effort Starting tractive effort is the tractive force that can be generated at a standstill. This figure is important on railways because it determines the maximum train weight that a locomotive can set into motion. Maximum tractive effort Maximum tractive effort is defined as the highest tractive force that can be generated under any condition that is not injurious to the vehicle or machine. In most cases, maximum tractive effort is developed at low speed and may be the same as the starting tractive effort. Continuous tractive effort Continuous tractive effort is the tractive force that can be maintained indefinitely, as distinct from the higher tractive effort that can be maintained for a limited period of time before the power transmission system overheats. 
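The factor of adhesion introduced above bounds the usable tractive effort: whatever the motors or cylinders can deliver, slip begins once the force at the rail exceeds μ × w. A minimal numerical sketch (the weight and friction figures are illustrative, not from the article):

```python
def adhesion_limited_te(weight_on_drivers_n, mu):
    """Maximum tractive effort (N) before wheelslip: the factor of adhesion mu * w."""
    return mu * weight_on_drivers_n

# Illustrative figures: 120 t resting on the driving wheels, mu = 0.25 (dry steel on steel).
w = 120_000 * 9.81   # weight on driving wheels, in newtons
print(adhesion_limited_te(w, 0.25))  # roughly 294 kN
```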
Due to the relationship between power (P), velocity (v) and force (F), described as P = Fv, or equivalently F = P/v, tractive effort varies inversely with speed at any given level of available power. Continuous tractive effort is often shown in graph form at a range of speeds as part of a tractive effort curve. Vehicles having a hydrodynamic coupling, hydrodynamic torque multiplier or electric motor as part of the power transmission system may also have a maximum continuous tractive effort rating, which is the highest tractive force that can be produced for a short period of time without causing component harm. The period of time for which the maximum continuous tractive effort may be safely generated is usually limited by thermal considerations, such as temperature rise in a traction motor. Tractive effort curves Specifications of locomotives often include tractive effort curves, showing the relationship between tractive effort and velocity. The shape of the graph is shown at right. The line AB shows operation at the maximum tractive effort, the line BC shows continuous tractive effort that is inversely proportional to speed (constant power). Tractive effort curves often have graphs of rolling resistance superimposed on them—the intersection of the rolling resistance graph and tractive effort graph gives the maximum velocity at zero grade (when net tractive effort is zero). Rail vehicles In order to start a train and accelerate it to a given speed, the locomotive(s) must develop sufficient tractive force to overcome the train's resistance, which is a combination of axle bearing friction, the friction of the wheels on the rails (which is substantially greater on curved track than on tangent track), and the force of gravity if on a grade. Once in motion, the train will develop additional drag as it accelerates due to aerodynamic forces, which increase with the square of the speed. 
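The constant-power relationship F = P/v, capped by the maximum tractive effort, produces the AB/BC shape of a tractive effort curve described above. A minimal sketch with illustrative numbers (the 3 MW / 300 kN locomotive is hypothetical):

```python
def tractive_effort(power_w, speed_ms, max_te_n):
    """Available tractive effort at a given speed: F = P/v in the constant-power
    region (line BC), limited by the maximum tractive effort (line AB)."""
    if speed_ms <= 0:
        return max_te_n          # at a standstill, only the maximum/adhesion limit applies
    return min(max_te_n, power_w / speed_ms)

# Illustrative locomotive: 3 MW at the rail, 300 kN maximum tractive effort.
P, F_max = 3_000_000, 300_000
for v in (0, 5, 10, 20, 40):     # m/s
    print(v, tractive_effort(P, v, F_max))
```

Below 10 m/s the curve is flat at 300 kN (line AB); above it, effort falls as P/v (line BC), halving each time the speed doubles.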
Drag may also be produced at speed due to truck (bogie) hunting, which will increase the rolling friction between wheels and rails. If acceleration continues, the train will eventually attain a speed at which the available tractive force of the locomotive(s) will exactly offset the total drag, causing acceleration to cease. This top speed will be increased on a downgrade due to gravity assisting the motive power, and will be decreased on an upgrade due to gravity opposing the motive power. Tractive effort can be theoretically calculated from a locomotive's mechanical characteristics (e.g., steam pressure, weight, etc.), or by actual testing with strain sensors on the drawbar and a dynamometer car. Power at rail is a railway term for the available power for traction, that is, the power that is available to propel the train. Steam locomotives An estimate for the tractive effort of a single cylinder steam locomotive can be obtained from the cylinder pressure, cylinder bore, stroke of the piston and the diameter of the wheel. The torque developed by the linear motion of the piston depends on the angle that the driving rod makes with the tangent of the radius on the driving wheel. For a more useful value an average value over the rotation of the wheel is used. The driving force is the torque divided by the wheel radius. As an approximation, the following formula can be used (for a two-cylinder locomotive): t = (0.85 × d² × s × p) / w, where t is tractive effort in pounds-force, d is the piston diameter in inches (bore), s is the piston stroke in inches, p is the working pressure in pounds per square inch, and w is the diameter of the driving wheels in inches. The constant 0.85 was the Association of American Railroads (AAR) standard for such calculations, and overestimated the efficiency of some locomotives and underestimated that of others. Modern locomotives with roller bearings were probably underestimated. 
European designers used a constant of 0.6 instead of 0.85, so the two cannot be compared without a conversion factor. In Britain main-line railways generally used a constant of 0.85 but builders of industrial locomotives often used a lower figure, typically 0.75. The constant c also depends on the cylinder dimensions and the time at which the steam inlet valves are open; if the steam inlet valves are closed immediately after obtaining full cylinder pressure the piston force can be expected to have dropped to less than half the initial force, giving a low c value. If the cylinder valves are left open for longer the value of c will rise nearer to one. Three or four cylinders (simple) The result should be multiplied by 1.5 for a three-cylinder locomotive and by two for a four-cylinder locomotive. Alternatively, tractive effort of all "simple" (i.e. non-compound) locomotives can be calculated thus: t = (c × n × d² × s × p) / (2w), with c typically 0.85, where t is tractive effort in pounds-force, n is the number of cylinders, d is the piston diameter in inches, s is the piston stroke in inches, p is the maximum rated boiler pressure in psi, and w is the diameter of the driving wheels in inches. Multiple cylinders (compound) For other numbers and combinations of cylinders, including double and triple expansion engines the tractive effort can be estimated by adding the tractive efforts due to the individual cylinders at their respective pressures and cylinder strokes. Values and comparisons for steam locomotives Tractive effort is the figure often quoted when comparing the powers of steam locomotives, but is misleading because tractive effort shows the ability to start a train, not the ability to haul it. Possibly the highest tractive effort ever claimed was for the Virginian Railway's 2-8-8-8-4 triplex locomotive, which in simple expansion mode had a calculated starting T.E. of 199,560 lbf (887.7 kN)—but the boiler could not produce enough steam to haul at speeds over 5 mph (8 km/h). 
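The n-cylinder formula above, t = (c × n × d² × s × p) / (2w), can be sketched as follows. As a check, the Union Pacific Big Boy figures (four 23.75 in × 32 in cylinders, 300 psi boiler pressure, 68 in drivers — standard published dimensions, not taken from this article) reproduce the 135,375 lbf starting tractive effort the article quotes for that class.

```python
def steam_tractive_effort(n_cyl, bore_in, stroke_in, pressure_psi, driver_in, c=0.85):
    """Starting tractive effort (lbf) of a simple-expansion steam locomotive:
    t = c * d^2 * s * p / w for two cylinders, scaled by n/2 for n cylinders."""
    return (n_cyl / 2) * c * bore_in**2 * stroke_in * pressure_psi / driver_in

# Union Pacific Big Boy: four 23.75 in x 32 in cylinders, 300 psi, 68 in drivers.
print(steam_tractive_effort(4, 23.75, 32, 300, 68))  # approximately 135,375 lbf
```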
Of more successful steam locomotives, those with the highest rated starting tractive effort were the Virginian Railway AE-class 2-10-10-2s, at 176,000 lbf (783 kN) in simple-expansion mode (or 162,200 lb if calculated by the usual formula). The Union Pacific Big Boys had a starting T.E. of 135,375 lbf (602 kN); the Norfolk & Western's Y5, Y6, Y6a, and Y6b class 2-8-8-2s had a starting T.E. of 152,206 lbf (677 kN) in simple expansion mode (later modified to 170,000 lbf (756 kN), as some enthusiasts claim); and the Pennsylvania Railroad's freight duplex Q2 attained 114,860 lbf (510.9 kN, including booster)—the highest for a rigid-framed locomotive. Later two-cylinder passenger locomotives were generally 40,000 to 80,000 lbf (170 to 350 kN) of T.E. Diesel and electric locomotives For an electric locomotive or a diesel-electric locomotive, starting tractive effort can be calculated from the amount of weight on the driving wheels (which may be less than the total locomotive weight in some cases), combined stall torque of the traction motors, the gear ratio between the traction motors and axles, and driving wheel diameter. For a diesel-hydraulic locomotive, the starting tractive effort is affected by the stall torque of the torque converter, as well as gearing, wheel diameter and locomotive weight. The relationship between power and tractive effort was expressed by Hay (1978) as t = (E × P) / v, where t is tractive effort in newtons (N), P is the power in watts (W), E is the efficiency, with a suggested value of 0.82 to account for losses between the motor and the rail, as well as power diverted to auxiliary systems such as lighting, and v is the speed in metres per second (m/s). Freight locomotives are designed to produce higher maximum tractive effort than passenger units of equivalent power, necessitated by the much higher weight that is typical of a freight train. 
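Hay's power-to-tractive-effort relation, t = (E × P) / v, can be sketched directly; the 3 MW locomotive below is an illustrative figure, not from the article.

```python
def hay_tractive_effort(power_w, speed_ms, efficiency=0.82):
    """Tractive effort (N) from power, per Hay (1978): t = E * P / v,
    with E accounting for motor-to-rail losses and auxiliary loads."""
    return efficiency * power_w / speed_ms

# Illustrative: a 3 MW locomotive at 15 m/s (54 km/h).
print(hay_tractive_effort(3_000_000, 15))  # roughly 164 kN
```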
In modern locomotives, the gearing between the traction motors and axles is selected to suit the type of service in which the unit will be operated. As traction motors have a maximum speed at which they can rotate without incurring damage, gearing for higher tractive effort is at the expense of top speed. Conversely, the gearing used with passenger locomotives favors speed over maximum tractive effort. Electric locomotives with monomotor bogies are sometimes fitted with two-speed gearing. This allows higher tractive effort for hauling freight trains but at reduced speed. Examples include the SNCF classes BB 8500 and BB 25500. See also Braking Drag equation Factor of adhesion, which is simply the weight on the locomotive's driving wheels divided by the starting tractive effort Power classification – British Railways and London, Midland and Scottish railway classification scheme Rail adhesion Tractor pulling, bollard pull – articles relating to tractive effort for other forms of vehicle Notes References Further reading A simple guide to train physics Tractive effort, acceleration and braking Rolling stock Force Vehicles
Tractive effort
Physics,Mathematics
2,162