https://en.wikipedia.org/wiki/Planted%20motif%20search
In the field of computational biology, a planted motif search (PMS), also known as an (l, d)-motif search (LDMS), is a method for identifying conserved motifs within a set of nucleic acid or peptide sequences. PMS is known to be NP-complete. The time complexities of most planted motif search algorithms depend exponentially on the alphabet size and on l. The PMS problem was first introduced by Keich and Pevzner. The problem of identifying meaningful patterns (e.g., motifs) in biological data has been studied extensively, since such patterns play a vital role in understanding gene function and human disease, and may serve as therapeutic drug targets.

Description

The search problem may be summarized as follows: the input is n strings (s1, s2, ..., sn) of length m each over an alphabet Σ, and two integers l and d. Find all strings x such that |x| = l and every input string contains at least one variant of x at a Hamming distance of at most d. Each such x is referred to as an (l, d) motif.

For example, if the input strings are GCGCGAT, CACGTGA, and CGGTGCC, with l = 3 and d = 1, then GGT is a motif of interest: the first input string contains GAT, the second contains CGT, and the third contains GGT itself, and each of these substrings is within a Hamming distance of 1 from GGT. The variants of a motif that occur in the input strings are called instances of the motif; for example, GAT is an instance of the motif GGT that occurs in the first input string. Any given set of input strings may contain zero or more (l, d) motifs.

Many of the known algorithms for PMS consider DNA strings, for which Σ = {G, C, T, A}. There are also algorithms that deal with protein strings.

Notation

The following mathematical notation is often used to describe PMS algorithms. Assume that S = {s1, s2, s3, ..., sn} is the given set of input strings over an alphabet Σ.
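To make the definition concrete, here is a minimal brute-force solver in Python (a naive sketch for illustration, not one of the published PMS algorithms): it enumerates every candidate l-mer over the alphabet and keeps those within Hamming distance d of some l-mer of every input string.

```python
from itertools import product

def hamming(a, b):
    # Hamming distance between two equal-length strings
    return sum(x != y for x, y in zip(a, b))

def lmers(s, l):
    # all substrings of s of length l
    return [s[i:i + l] for i in range(len(s) - l + 1)]

def planted_motif_search(strings, l, d, alphabet="ACGT"):
    """Brute force: try all |alphabet|**l candidates (exponential in l)."""
    motifs = []
    for cand in map("".join, product(alphabet, repeat=l)):
        if all(min(hamming(cand, w) for w in lmers(s, l)) <= d for s in strings):
            motifs.append(cand)
    return motifs
```

On the example above, planted_motif_search(["GCGCGAT", "CACGTGA", "CGGTGCC"], 3, 1) reports GGT among the (3, 1) motifs.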
An l-mer of any string is simply a substring of that string of length l. Let dH(a, b) denote the Hamming distance between any two l-mers a and b. Let a be an l-mer and s be an input string; then let dH(a, s) denote the minimum Hamming distance between a and any l-mer b of s. If a is any l-mer and S is a set of input strings, then let dH(a, S) denote max_{s∈S} dH(a, s). Let u be any l-mer. The d-neighborhood of u, denoted Bd(u), is the set of all l-mers v such that dH(u, v) ≤ d; in other words, Bd(u) = {v : dH(u, v) ≤ d}. Any such l-mer v is called a d-neighbor of u. Bd(x, y) denotes the common d-neighborhood of two l-mers x and y, i.e., the set of all l-mers that are within a distance of d from both x and y. Bd(x, y, z), etc., are defined similarly.

Algorithms

The scientific literature describes numerous algorithms for solving the PMS problem. These algorithms fall into two major types: those that may not return the optimal answer(s), referred to as approximation (or heuristic) algorithms, and those that always return the optimal answer(s), called exact algorithms.

Approximate

Examples of approximation (or heuristic) algorithms include Random Projection, PatternBranching, MULTIPROFILER, CONSENSUS, and ProfileBranching. These algorithms have been experimentally demonstrated to perform well.

Random projection

The algorithm is based on random projections. Let the motif M of interest be an l-mer and C be the collection of all l-mers from all n input strings. The algorithm projects these l-mers along k randomly chosen positions (for some appropriate value of k). The projection of each l-mer may be thought of as an integer, and the projected values (which are k-mers) are grouped according to their integer values. In other words, all the l-mers are hashed, using the projected k-mer of each l-mer as its hash value.
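A sketch of this projection-and-hash step (the function and parameter names are illustrative, not from the Random Projection paper):

```python
import random
from collections import defaultdict

def project_buckets(strings, l, k, seed=0):
    """One round of random projection: hash every l-mer in the input by the
    k-mer obtained from k randomly chosen positions of the l-mer."""
    rng = random.Random(seed)
    positions = sorted(rng.sample(range(l), k))     # the k projection positions
    buckets = defaultdict(list)
    for s in strings:
        for i in range(len(s) - l + 1):
            u = s[i:i + l]
            key = "".join(u[p] for p in positions)  # the projected k-mer
            buckets[key].append(u)
    return buckets
```

Buckets with unusually many l-mers are then handed to the refinement step described next.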
All the l-mers that have the same hash value fall into the same hash bucket. Since the instances of any (l, d) motif are similar to each other, many of these instances will fall into the same bucket. (Note that the Hamming distance between any two instances of an (l, d) motif is no more than 2d.) The key idea of this algorithm is to examine those buckets that contain a large number of l-mers. For each such bucket, an expectation maximization (EM) algorithm is used to check whether an (l, d) motif can be found using the l-mers in the bucket.

Pattern branching

This algorithm is a local search algorithm. If u is any l-mer, then, for DNA strings, there are Σ_{i=0}^{d} C(l, i) 3^i l-mers that are d-neighbors of u. The algorithm starts from each l-mer u in the input, searches the neighbors of u, scores them appropriately, and outputs the best-scoring neighbor.

Exact

Many exact algorithms are known for solving the PMS problem as well. Examples include the ones in (Martinez 1983), (Brazma, et al. 1998), (Galas, et al. 1985), (Sinha, et al. 2000), (Staden 1989), (Tompa 1999), (Helden, et al. 1998), (Rajasekaran, et al.), (Davila and Rajasekaran 2006), (Davila, Balla, and Rajasekaran 2006), Voting, and RISOTTO.

WINNOWER and SP-STAR

The WINNOWER algorithm is a heuristic algorithm and works as follows. If A and B are two instances of the same motif in two different input strings, then the Hamming distance between A and B is at most 2d (and their expected Hamming distance can be shown to be less than 2d, since the mutated positions of the two instances may overlap). WINNOWER constructs a collection C of all possible l-mers in the input. A graph G(V, E) is constructed in which each l-mer of C is a node, and two nodes u and v are connected by an edge if and only if the Hamming distance between u and v is at most 2d and they come from two different input strings. If M is an (l, d) motif and M1, M2, ..., Mn are instances of M in the input strings, then these instances clearly form a clique in G. The WINNOWER algorithm has two phases.
In the first phase, it identifies large cliques in G. In the second phase, each such clique is examined to see whether a motif can be extracted from it. Since the CLIQUE problem is intractable, WINNOWER uses a heuristic: it iteratively constructs cliques of larger and larger sizes. If N = mn, then the run time of the algorithm is O(N^{2d+1}). The algorithm runs in a reasonable amount of time in practice, especially for small values of d. Another algorithm, SP-STAR, is faster than WINNOWER and uses less memory. WINNOWER treats all the edges of G equally, without distinguishing between edges based on similarities; SP-STAR scores the l-mers of C as well as the edges of G appropriately, and hence eliminates more edges than WINNOWER per iteration.

(Bailey and Elkan, 1994) employs expectation maximization algorithms, while Gibbs sampling is used by (Lawrence et al., 1993). MULTIPROFILER and MEME are also known PMS algorithms.

PMS series

In the last decade, a series of algorithms with PMS as a prefix has been developed in the lab of Rajasekaran. Some of these algorithms are described below.

PMS0

PMS0 works as follows. Let s1, s2, ..., sn be a given set of input strings, each of length m. Let C be the collection of l-mers in s1, and let C′ = ∪_{u∈C} Bd(u). For each element v of C′, check whether it is a valid (l, d)-motif. Given an l-mer v, this check can be made in O(mnl) time. Since C contains up to m − l + 1 l-mers, each with Σ_{i=0}^{d} C(l, i) 3^i d-neighbors for an alphabet of size 4, the run time of PMS0 is O(m^2 n l Σ_{i=0}^{d} C(l, i) 3^i).

PMS1

This algorithm is based on radix sorting and has the following steps:
1. Generate the set of all l-mers in each input string. Let Ci correspond to the l-mers of si, for 1 ≤ i ≤ n.
2. For each l-mer u in Ci (1 ≤ i ≤ n), generate Bd(u). Let Li be a collection of all of these neighbors (corresponding to all the l-mers of si).
3. Sort Li (using radix sort) and eliminate any duplicates.
4. Compute ∩_{i=1}^{n} Li. This can be done by merging the lists L1, L2, ..., Ln.
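The PMS1 pipeline above can be sketched as follows; Python sets stand in for the radix-sorted, duplicate-free lists Li:

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def neighbors(u, d, alphabet="ACGT"):
    """B_d(u): every string within Hamming distance d of u (recursive)."""
    if d == 0 or not u:
        return {u}
    out = set()
    for tail in neighbors(u[1:], d, alphabet):       # keep the first character
        out.add(u[0] + tail)
    for tail in neighbors(u[1:], d - 1, alphabet):   # spend one mismatch here
        for c in alphabet:
            if c != u[0]:
                out.add(c + tail)
    return out

def pms1(strings, l, d):
    lists = []
    for s in strings:
        Li = set()
        for i in range(len(s) - l + 1):
            Li |= neighbors(s[i:i + l], d)           # neighbors of every l-mer
        lists.append(Li)
    return set.intersection(*lists)                  # the valid (l, d) motifs
```

Note that |B_d(u)| = Σ_{i=0}^{d} C(l, i) 3^i for DNA; for example |B_1(ACG)| = 1 + 9 = 10.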
All the l-mers in this intersection are valid (l, d) motifs.

PMS2

Let the motif M of interest be of length l. If M occurs in every input string, then any substring of M also occurs in every input string (here, occurrence means occurrence within a Hamming distance of d). It follows that there are at least l − k + 1 strings, each of length k (for k ≤ l), such that each of these occurs in every input string. Let Q be the collection of k-mers in M. Note that, in every input string si, there will be at least one position ij such that a k-mer of Q occurs starting from ij, another k-mer of Q occurs starting from ij + 1, and so on, with the last k-mer occurring at ij + l − k. An l-mer can be obtained by combining these k-mers that occur starting from each such ij.

PMS2 works as follows. In the first phase, find all the (k, d) motifs present in all the input strings (for some appropriate value of k < l). In the second phase, look for (l − k + 1) of these (k, d) motifs that occur starting from successive positions in each of the input strings. From every such collection of (l − k + 1) (k, d)-motifs, an l-mer is generated (if possible). Each such l-mer is a candidate (l, d)-motif; for each candidate, check whether it is an (l, d)-motif in O(mnl) time, and return it as output if so.

PMS3

This algorithm enables one to handle large values of d. Let d′ = d/2, and let M be the motif to be found, with |M| = l = 2l′ for some integer l′. Let M1 refer to the first half of M and M2 to the second half. Let s = a1a2...am be one of the input strings. M occurs in every input string; let its occurrence (within a Hamming distance of d) in s start at position i. Let s′ = ai ai+1 ... ai+l′−1 and s′′ = ai+l′ ... ai+l−1. It is clear that either the Hamming distance between M1 and s′ is at most d′, or the Hamming distance between M2 and s′′ is at most d′. Thus either M1 or M2 occurs in each input string at a Hamming distance of at most d′.
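The second phase of PMS2 extends (k, d)-motifs found at l − k + 1 consecutive start positions into a single candidate l-mer. The overlap check might look like the sketch below (stitch is a hypothetical helper name, assuming k ≥ 2): consecutive k-mers must agree on their shared k − 1 characters.

```python
def stitch(kmers):
    """Combine k-mers assumed to start at consecutive positions into one
    candidate string, or return None if two consecutive k-mers disagree
    on their overlap of k - 1 characters (assumes k >= 2)."""
    out = kmers[0]
    for nxt in kmers[1:]:
        if out[-(len(nxt) - 1):] != nxt[:-1]:
            return None          # overlaps disagree: no candidate l-mer
        out += nxt[-1]           # extend by the new k-mer's last character
    return out
```

For instance, stitch(["ACG", "CGT", "GTA"]) yields the 5-mer candidate ACGTA, while stitch(["ACG", "GGT"]) yields None.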
As a result, in at least n′ = n/2 of the input strings, the same half (either M1 or M2) occurs with a Hamming distance of at most d′. The algorithm first obtains all the (l′, d′)-motifs that occur in at least n/2 of the input strings. It then uses these motifs and the above observations to identify all the (l, d)-motifs present in the input strings.

PMSPrune

This algorithm introduces a tree structure for the motif candidates and uses a branch-and-bound algorithm to reduce the search space. Let S = {s1, s2, ..., sn} be a given set of input strings. PMSPrune follows the same strategy as PMS0: for every l-mer y in s1, it generates the set of neighbors of y and, for each of them, checks whether it is a motif. Some key steps in the algorithm are:

It generates the d-neighborhood of every l-mer y in s1 using a tree of height d. The root of this tree holds y; every l-mer at a distance of 1 from y is a node at depth 1, every l-mer at a distance of 2 from y is a node at depth 2, and so on. When a node in this tree is visited, the algorithm checks whether the corresponding l-mer is an (l, d)-motif; i.e., if the l-mer is x, it checks whether dH(x, S) ≤ d, and if so outputs x. In either case it moves to the next node; the tree is explored in a depth-first manner.

If each node in the tree were visited for each l-mer y in s1, the run time of PMSPrune would be at least as much as that of PMS0. PMSPrune therefore uses pruning conditions to discard subtrees that cannot possibly contain any motifs. For an l-mer x that corresponds to a node whose subtree has height h, the algorithm uses the values of dH(x, S) and h to prune the descendants of x. PMSPrune calculates dH(x, S) for the nodes x in the tree incrementally, taking into account the way in which the neighborhood is generated.

PMS4

PMS4 is a technique that can be used to speed up any algorithm for the PMS problem.
In many of the above algorithms there are two phases: in the first phase a set of candidate motifs is computed, and in the second phase each candidate is checked for being a valid (l, d)-motif, which takes O(mnl) time per candidate. PMS4 employs a similar two-phase strategy. Let A be any PMS algorithm.

1. Run the algorithm A on k input strings (where k < n). An optimal value of k can be determined empirically. The k strings may be picked in a number of ways; for example, they could be the first k strings, a random k strings, and so on. Let C be the collection of (l, d)-motifs found in these k strings. Clearly, C is a superset of the (l, d)-motifs present in the n given input strings.
2. For each l-mer v in C, check whether v is a valid motif in O(mnl) time. If so, output v.

PMS5 and PMS6

PMS5 is an extension of PMS0. If S = {s1, s2, ..., sn} is a set of strings (not necessarily of the same length), let Ml,d(S) denote the set of (l, d)-motifs present in S. Let S′ = {s2, s3, ..., sn}. PMS5 computes the (l, d)-motifs of S as ∪_L (Bd(L) ∩ Ml,d(S′)), where L runs over the l-mers of s1. One of the key steps in the algorithm is a subroutine to compute the common d-neighborhood of three l-mers.

Let x, y, z be any three l-mers. To compute Bd(x, y, z), PMS5 represents Bd(x) as a tree Td(x). Each node in this tree represents an l-mer in Bd(x); the root stands for the l-mer x itself, and Td(x) has a depth of d. The nodes of Td(x) are traversed in a depth-first manner (a node and the l-mer it represents may be used interchangeably). While the tree is traversed, a node t is output if t is in Bd(y) ∩ Bd(z). When a node t is visited, the algorithm checks whether t has a descendant t′ that is in Bd(y) ∩ Bd(z), and prunes the subtree rooted at t if there is no such descendant. In PMS5, the problem of checking whether t has any descendant in Bd(y) ∩ Bd(z) is formulated as an integer linear program (ILP) on ten variables, which is solved in O(1) time.
Solving the ILP instances is done as a preprocessing step, and the results are stored in a lookup table. Algorithm PMS6 is an extension of PMS5 that improves the preprocessing step and uses efficient hashing techniques to store the lookup tables; as a result, it is typically faster than PMS5 (Shibdas Bandyopadhyay, Sartaj Sahni, and Sanguthevar Rajasekaran, "PMS6: A fast algorithm for motif discovery," 2012 IEEE 2nd International Conference on Computational Advances in Bio and Medical Sciences (ICCABS), pp. 1–6, 2012).

qPMSPrune and qPMS7

Given a set S = {s1, s2, ..., sn} of strings and integers l, d, and q, an (l, d, q)-motif is defined to be a string M of length l that occurs in at least q of the n input strings within a Hamming distance of d. The qPMS (quorum Planted Motif Search) problem is to find all the (l, d, q)-motifs present in the input strings. The qPMS problem captures the nature of motifs more precisely than the PMS problem does because, in practice, some motifs may not have instances in all of the input strings. Any algorithm for solving the qPMS problem (when q ≠ n) is typically named with a prefix of q.

qPMSPrune is one of the first algorithms to address this version of the PMS problem. It exploits the following fact: if M is any (l, d, q)-motif of the input strings s1, s2, ..., sn, then there exist an i (with 1 ≤ i ≤ n − q + 1) and an l-mer x of si such that M is in Bd(x) and M is an (l, d, q−1)-motif of the input strings excluding si. The algorithm processes every si, 1 ≤ i ≤ n. While processing si, it considers every l-mer x of si; for each x, it constructs Bd(x) and identifies the elements of Bd(x) that are (l, d, q−1)-motifs with respect to the input strings other than si. Bd(x) is represented as a tree with x as the root, traversed in a depth-first manner. The algorithm does not traverse the entire tree: some of the subtrees are pruned using effective pruning conditions.
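A simplified version of this family of tree searches (PMS0's strategy plus one conservative pruning rule based on the triangle inequality, which is weaker than the actual conditions used by PMSPrune and qPMSPrune) can be sketched as:

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def dist_to_string(x, s):
    # d_H(x, s): best match of x against any window of s
    l = len(x)
    return min(hamming(x, s[i:i + l]) for i in range(len(s) - l + 1))

def dist_to_set(x, strings):
    # d_H(x, S): worst best-match distance over all input strings
    return max(dist_to_string(x, s) for s in strings)

def pruned_search(strings, l, d, alphabet="ACGT"):
    found = set()

    def dfs(x, depth, start):
        h = d - depth                 # height still available below this node
        dx = dist_to_set(x, strings)
        if dx <= d:
            found.add(x)
        # any descendant y satisfies d_H(y, S) >= dx - h, so prune when even
        # the best possible descendant cannot reach distance <= d
        if dx > d + h or h == 0:
            return
        for pos in range(start, l):   # mutate positions left to right, once each
            for c in alphabet:
                if c != x[pos]:
                    dfs(x[:pos] + c + x[pos + 1:], depth + 1, pos + 1)

    for i in range(len(strings[0]) - l + 1):
        dfs(strings[0][i:i + l], 0, 0)
    return found
```

Mutating positions strictly left to right guarantees that each d-neighbor of a starting l-mer is generated exactly once.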
In particular, a subtree is pruned if it can be inferred that none of the nodes in that subtree carries a motif of interest.

Algorithm qPMS7 is an extension of qPMSPrune. Specifically, it is based on the following observation: if M is any (l, d, q)-motif of the input strings s1, s2, ..., sn, then there exist 1 ≤ i ≠ j ≤ n, an l-mer x of si, and an l-mer y of sj such that M is in Bd(x, y) and M is an (l, d, q−2)-motif of the input strings excluding si and sj. The algorithm considers every possible pair (i, j), 1 ≤ i, j ≤ n, i ≠ j. For each pair (i, j), every possible pair of l-mers (x, y) is considered, where x is from si and y is from sj. For each such x and y, the algorithm identifies all the elements of Bd(x, y) that are (l, d, q−2)-motifs with respect to the input strings other than si and sj. An acyclic graph, called Gd(x, y), is used to represent and explore Bd(x, y); it is traversed in a depth-first manner. Like qPMSPrune, qPMS7 employs pruning conditions to prune subgraphs of Gd(x, y).

RISOTTO

RISOTTO employs a suffix tree to identify the (l, d)-motifs. It is somewhat similar to PMS0: for every l-mer in s1, it generates the d-neighborhood, and for every l-mer in this neighborhood it walks through a suffix tree to check whether this l-mer is an (l, d)-motif.

Voting

Voting is similar to PMS1. Instead of using radix sorting, it uses hashing to compute the lists Li and their intersections.

Relative performance

PMS algorithms are typically tested on random benchmark data generated as follows: twenty strings, each of length 600, are generated randomly from the alphabet of interest. A motif M is also generated randomly and planted in each of the input strings within a Hamming distance of d; the motif instances are also generated randomly. Certain instances of the (l, d)-motif problem have been identified to be challenging.
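The quorum variant only changes the acceptance test: instead of requiring a close match in every string, a candidate needs close matches in at least q strings. A sketch (is_q_motif is an illustrative name, not from the qPMS papers):

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def dist_to_string(x, s):
    # best match of x against any window of s
    l = len(x)
    return min(hamming(x, s[i:i + l]) for i in range(len(s) - l + 1))

def is_q_motif(x, strings, d, q):
    """True if x occurs within Hamming distance d in at least q input strings."""
    return sum(dist_to_string(x, s) <= d for s in strings) >= q
```

With q = n this reduces to the ordinary PMS validity check dH(x, S) ≤ d.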
For a given value of l, the instance (l, d) is called challenging if d is the smallest integer for which the expected number of (l, d)-motifs that occur by random chance (in addition to the planted one) is one or more. For example, the following instances are challenging: (9, 2), (11, 3), (13, 4), (15, 5), (17, 6), (19, 7), etc. The performance of PMS algorithms is customarily reported only on challenging instances. The paper introducing qPMS7 gives a time comparison of several algorithms (qPMSPrune, qPMSPruneI, Pampa, Voting, RISOTTO, PMS5, PMS6, and qPMS7) on the challenging DNA instances for the special case q = n; in that comparison, the alphabet is Σ = {A, C, G, T}, n = 20, m = 600, and q = n = 20.
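The expected-count criterion can be evaluated directly. Under a uniform random model, a fixed l-mer matches a given length-l window within distance d with probability p = Σ_{i=0}^{d} C(l, i) (3/4)^i (1/4)^{l−i}; treating the m − l + 1 windows of a string as independent (an approximation) gives:

```python
from math import comb

def expected_random_motifs(l, d, n=20, m=600):
    """Approximate expected number of l-mers over {A,C,G,T} that qualify as an
    (l, d) motif of n random length-m strings, assuming independent windows."""
    # probability a fixed l-mer is within distance d of one random window
    p = sum(comb(l, i) * 3**i for i in range(d + 1)) / 4**l
    # probability it appears (within distance d) somewhere in one string
    q = 1 - (1 - p) ** (m - l + 1)
    # expected number of qualifying l-mers over all 4**l candidates
    return 4**l * q**n
```

For the standard benchmark (n = 20, m = 600), expected_random_motifs(9, 2) comes out above 1 while expected_random_motifs(9, 1) is far below 1, consistent with (9, 2) being listed as challenging.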
https://en.wikipedia.org/wiki/Weather%20hole
A weather hole is a location that receives calmer weather than the surrounding area: an area that thunderstorms often miss, or near which approaching storms often dissipate.

Details

A 2005 study entitled "Do Meteorologists Suppress Thunderstorms?: Radar-Derived Statistics and the Behavior of Moist Convection" examined 28 target cities and random control points in the U.S. The targets were compared with their surroundings to determine whether they were weather holes or the opposite, hot spots; the determination was made by measuring and comparing the convective echoes of the target points with those of the surrounding area. Only one target was classified as a weather hole, and one as a hot spot; most targets experienced convective echoes as often as their surroundings. According to the study's authors, the "results suggest that meteorologists are unnecessarily cranky about the frequency of storms in their hometowns." It has been suggested that Grand Forks, ND is a weather hole.
https://en.wikipedia.org/wiki/Mootral
Mootral is a British-Swiss company that is developing a food supplement to reduce methane emissions from ruminant animals, chiefly cows and sheep, but also goats. Methane is a major target greenhouse gas: in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC), its warming multiplier relative to carbon dioxide was recommended to increase from 23 to 72, because of the magnitude of its effect and its short lifetime in Earth's atmosphere.

Natural feed supplement

The Mootral natural feed supplement is produced by Neem Biotech. The active ingredient is an organosulfur compound normally found in garlic. Research at Aberystwyth University, Wales, has demonstrated up to a 94% reduction in methane production, but that was an in vitro test (a rumen model) with a high dose; the same authors found only a 15–25% reduction in vivo when they tested on sheep.

Emission trading

Companies using Mootral's feed supplement generate carbon credits that may be used to offset their emissions levels or sold to third parties. In December 2019, Verra announced that it had approved Mootral's approach as the world's first methodology to reduce methane emissions from ruminant livestock.

Publicity

Mootral attracted much attention as runner-up in the FT Global Climate Challenge and the Dutch Postcode Lottery. Mootral was a finalist in the Shell/BBC/Newsweek World Challenge 2009 as one of the 12 most promising solutions to climate change. Mootral is privately funded by Chris Sacca and Tribe Capital.
https://en.wikipedia.org/wiki/Vela%20Supernova%20Remnant
The Vela supernova remnant is a supernova remnant in the southern constellation Vela. Its source, a Type II supernova, exploded approximately 11,000 years ago at a distance of about 900 light-years. The association of the Vela supernova remnant with the Vela pulsar, made by astronomers at the University of Sydney in 1968, was direct observational evidence that supernovae form neutron stars. The Vela supernova remnant includes NGC 2736. Viewed from Earth, it overlaps the Puppis A supernova remnant, which is four times more distant; both the Puppis and Vela remnants are among the largest and brightest features in the X-ray sky.

The Vela supernova remnant is one of the closest known to us. The Geminga pulsar is closer (and also resulted from a supernova), and in 1998 another near-Earth supernova remnant was discovered, RX J0852.0-4622, which from our point of view appears to be contained in the southeastern part of the Vela remnant. This remnant was not seen earlier because, in most wavelengths, it is lost in the Vela remnant.
https://en.wikipedia.org/wiki/Segmented%20filamentous%20bacteria
Segmented filamentous bacteria, or Candidatus Savagella, are members of the gut microbiota of rodents, fish and chickens, and have been shown to potently induce immune responses in mice. They form a distinct lineage within the Clostridiaceae, and the name Candidatus Savagella has been proposed for this lineage. They were previously named Candidatus Arthromitus because of their morphological resemblance to bacterial filaments previously observed in the guts of insects by Joseph Leidy. Although they are widely referred to as segmented filamentous bacteria, the term is somewhat problematic, as it does not distinguish between the bacteria that colonize different hosts, or even establish whether segmented filamentous bacteria are in fact several different bacterial species.

In mice, these bacteria grow primarily in the terminal ileum, in close proximity to the intestinal epithelium, where they are thought to help induce T helper 17 cell responses. Intriguingly, segmented filamentous bacteria were found to expand in AID-deficient mice, which lack the ability to mount an appropriate humoral immune response because of impaired somatic hypermutation; parabiotic experiments revealed the importance of IgA in eliminating segmented filamentous bacteria. This goes hand in hand with an earlier study demonstrating the ability of monocolonization with segmented filamentous bacteria to dramatically increase mucosal IgA levels. Segmented filamentous bacteria are species-specific and may be important to immune development.
https://en.wikipedia.org/wiki/Chyme
Chyme or chymus (from Greek χυμός khymos, "juice") is the semi-fluid mass of partly digested food that is expelled by the stomach, through the pyloric valve, into the duodenum (the beginning of the small intestine). Chyme results from the mechanical and chemical breakdown of a bolus and consists of partially digested food, water, hydrochloric acid, and various digestive enzymes. Chyme slowly passes through the pyloric sphincter and into the duodenum, where the extraction of nutrients begins. Depending on the quantity and contents of the meal, the stomach digests the food into chyme in anywhere from 40 minutes to 3 hours.

With a pH of approximately 2, chyme emerging from the stomach is very acidic. The duodenum secretes a hormone, cholecystokinin (CCK), which causes the gall bladder to contract, releasing alkaline bile into the duodenum; CCK also causes the release of digestive enzymes from the pancreas. The duodenum, a short section of the small intestine located between the stomach and the rest of the small intestine, also produces the hormone secretin to stimulate the pancreatic secretion of large amounts of sodium bicarbonate, which raises the pH of the chyme to 7. The chyme then moves through the jejunum and the ileum, where digestion progresses, while the non-useful portion continues onward into the large intestine. The duodenum is protected by a thick layer of mucus and the neutralizing actions of the sodium bicarbonate and bile.

At a pH of 7, the enzymes that were present in the stomach are no longer active. The breakdown of any nutrients still present is carried out by anaerobic bacteria, which at the same time help to package the remains. These bacteria also help synthesize vitamin B and vitamin K, which will be absorbed along with other nutrients.

Properties

Chyme has a low pH that is countered by the production of bile, which helps the further digestion of food.
Chyme is part liquid and part solid: a thick semifluid mass of partially digested food and digestive secretions that is formed in the stomach and small intestine during digestion. Chyme also contains cells from the mouth and esophagus that slough off due to the mechanical action of chewing and swallowing.

Path of chyme

After hours of mechanical and chemical digestion, food has been reduced to chyme. As particles of food become small enough, they are passed out of the stomach at regular intervals into the small intestine, which stimulates the pancreas to release fluid containing a high concentration of bicarbonate. This fluid neutralizes the gastric juices, which could otherwise damage the lining of the intestine and result in a duodenal ulcer. Other secretions from the pancreas, gallbladder, liver, and glands in the intestinal wall assist digestion, as they contain a variety of digestive enzymes and chemicals that help break down complex compounds into ones that can be absorbed and used by the body. When food particles are sufficiently reduced in size and composition, they are absorbed by the intestinal wall and transported to the bloodstream. Some food material is passed from the small intestine to the large intestine, where bacteria break down any proteins and starches in chyme that were not digested fully in the small intestine. When all of the nutrients have been absorbed from chyme, the remaining waste material changes into semisolids called feces, which pass to the rectum to be stored until discharged from the body during defecation.

Uses

The chyme of an unweaned calf is the defining ingredient of pajata, a traditional Roman recipe. Chyme is also sometimes used in pinapaitan, a bitter Ilocano stew.
https://en.wikipedia.org/wiki/Smart%20card%20management%20system
A smart card management system (SCMS) is a system for managing smart cards through their life cycle: the system can issue smart cards, maintain them while in use, and finally take them out of use (end of life, EOL). Chip/smart cards provide the foundation for secure electronic identity and can be used to control access to facilities, networks or computers. Because smart cards are security credentials for authenticating the smart card holder (for example, using two-factor authentication), the security requirements for a smart card management system are often high, and the vendors of these systems are therefore found in the computer security industry.

Smart card management systems are generally implemented as software applications. If the system needs to be accessible by more than one operator or user simultaneously (as is normally the case), the application is often provided in the form of a server application accessible from several different client systems. An alternative approach is to have multiple synchronized systems.

Smart card management systems connect smart cards to other systems; which systems they must connect to depends on the use case for the smart cards. Typical systems to connect to include:

- Connected smart card reader
- Unconnected (RFID) smart card reader
- Card printer
- User directory
- Certificate authority
- Hardware security module
- Physical access control systems

During the smart card lifecycle, the smart card changes state (examples of such states include issued, blocked and revoked). The process of taking a smart card from one state to another is the main responsibility of a smart card management system. Different smart card management systems call these processes by different names.
The most widely used names of these processes are listed and briefly explained below:

- Register – adding a smart card to the smart card management system
- Issue – issuing or personalizing the smart card for a smart card holder
- Initiate – activating the smart card for first use by the smart card holder
- Deactivate – putting the smart card on hold in the backend system
- Activate – reactivating the smart card from a deactivated state
- Lock – also called block; smart card holder access to the smart card is disabled
- Unlock – also called unblock; smart card holder access to the smart card is re-enabled
- Revoke – credentials on the smart card are made invalid
- Retire – the smart card is disconnected from the smart card holder
- Delete – the smart card is permanently removed from the system
- Unregister – the smart card is removed from the system (but could potentially be reused)
- Backup – backing up smart card certificates and selected keys
- Restore – restoring smart card certificates and selected keys
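The lifecycle processes above can be modeled as a small state machine. The sketch below uses the process names from the list, but the allowed-transition table itself is an illustrative assumption; which transitions are legal varies by product and is not fixed by any SCMS standard.

```python
# Allowed lifecycle transitions, keyed by process name.
# NOTE: this table is an illustrative guess, not a standard.
TRANSITIONS = {
    "register":   ("none", "registered"),
    "issue":      ("registered", "issued"),
    "initiate":   ("issued", "active"),
    "deactivate": ("active", "deactivated"),
    "activate":   ("deactivated", "active"),
    "lock":       ("active", "locked"),
    "unlock":     ("locked", "active"),
    "revoke":     ("active", "revoked"),
    "retire":     ("revoked", "retired"),
    "unregister": ("retired", "none"),
}

class SmartCard:
    def __init__(self):
        self.state = "none"

    def apply(self, process):
        """Apply a lifecycle process, enforcing the transition table."""
        before, after = TRANSITIONS[process]
        if self.state != before:
            raise ValueError(f"cannot {process} a card in state {self.state}")
        self.state = after
```

For example, applying register, issue, and initiate in order leaves a card in the active state, while applying unlock to an active card raises an error.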
Smart card management system
Engineering
639
38,235,429
https://en.wikipedia.org/wiki/HD%20103774
HD 103774 is a star with a close orbiting planetary companion in the southern constellation of Corvus. With an apparent visual magnitude of 7.13, it is too faint to be readily visible to the naked eye. Parallax measurements provide a distance estimate of 184 light years from the Sun. It is drifting closer with a radial velocity of −3 km/s. The star has an absolute magnitude of 3.41. The stellar classification of HD 103774 is F6 V, indicating this is an F-type main-sequence star that is generating energy through core hydrogen fusion. It is a young star, with age estimates ranging from 260 million up to 2 billion years. The star is mildly active and is spinning with a projected rotational velocity of 8 km/s. It has 1.4 times the mass and 1.56 times the radius of the Sun. The star is radiating 3.7 times the luminosity of the Sun from its photosphere at an effective temperature of 6,391 K. Planetary system This star has been under observation as part of a survey using the HARPS spectrograph for a period of 7.5 years. In 2012, the detection of an exoplanetary companion using the radial velocity method was announced; this result was published in January 2013. The object is orbiting close to the host star with a period of just 5.9 days and an eccentricity (ovalness) of 0.09. As the inclination of the orbital plane is unknown, only a lower limit on the mass can be determined; this lower bound is about equal to the mass of Saturn. There is marginal evidence for an infrared excess at a wavelength of 12 μm, indicating the likely grain size. More measurements are needed to confirm this signal. References F-type main-sequence stars Planetary systems with one confirmed planet Corvus (constellation) BD-11 3211 103774 058263
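The quoted radius, effective temperature and luminosity are mutually consistent, as a quick Stefan–Boltzmann check shows. This is a sketch, assuming the IAU nominal solar effective temperature of 5772 K (a value not stated in the article).

```python
# Cross-check of the quoted stellar parameters via the Stefan-Boltzmann law,
# in solar units: L/Lsun = (R/Rsun)^2 * (T/Tsun)^4.
T_SUN = 5772.0  # K, IAU nominal solar effective temperature (assumption)

def luminosity_solar(r_solar, t_eff):
    """Luminosity in solar units, from radius in solar radii and Teff in K."""
    return r_solar ** 2 * (t_eff / T_SUN) ** 4

# 1.56 Rsun at 6391 K gives roughly 3.7 Lsun, matching the quoted value.
l_star = luminosity_solar(1.56, 6391.0)
```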
HD 103774
Astronomy
403
1,564,929
https://en.wikipedia.org/wiki/Marin%20Computer%20Center
Opened in 1977 in Marin County, California, the Marin Computer Center was the world's first public access microcomputer center. The non-profit company was co-created by David Fox (later to become one of Lucasfilm Games' founding members) and author Annie Fox. MCC (as it was known) initially featured the Atari 2600, an Equinox 100, 9 Processor Technology Sol 20 computers (S-100 bus systems), the Radio Shack Model I and the Commodore PET. In addition to providing computer access to the public it had classes on the programming language BASIC. Later, it added Apple II and Atari 8-bit computers, for a total of about 40 systems. The Foxes left MCC in 1981, turning it over to new management, and later to the teens and young adults who helped run it. See also Public computer External links Marin Computer Center in People's Computers, Nov-Dec 1978 You Want to Open a What? - Article from Creative Computing, November 1984 Electric Eggplant Scan of 1981 advertisement for Marin Computer Center Buildings and structures in Marin County, California History of computing
Marin Computer Center
Technology
226
3,764,317
https://en.wikipedia.org/wiki/Corey%E2%80%93Kim%20oxidation
The Corey–Kim oxidation is an oxidation reaction used to synthesize aldehydes and ketones from primary and secondary alcohols. It is named for American chemist and Nobel laureate Elias James Corey and Korean-American chemist Choung Un Kim. Although the Corey–Kim oxidation possesses the distinctive advantage over the Swern oxidation of allowing operation above −25 °C, it is not so commonly used, owing to issues with selectivity in substrates susceptible to chlorination by N-chlorosuccinimide. Reaction mechanism Dimethyl sulfide (Me2S) is treated with N-chlorosuccinimide (NCS), resulting in formation of an "active DMSO" species that is used for the activation of the alcohol. Addition of triethylamine to the activated alcohol leads to its oxidation to an aldehyde or ketone and the regeneration of dimethyl sulfide. At variance with other alcohol oxidations using "activated DMSO", the reactive oxidizing species is not generated by reaction of DMSO with an electrophile. Rather, it is formed by oxidation of dimethyl sulfide with an oxidant (NCS). Under Corey–Kim conditions, allylic and benzylic alcohols have a tendency to evolve to the corresponding allyl and benzyl chlorides unless the alcohol activation is very quickly followed by the addition of triethylamine. In fact, Corey–Kim conditions with no addition of triethylamine are very efficient for the transformation of allylic and benzylic alcohols to chlorides in the presence of other alcohols. Variations Substituting dimethyl sulfide with something less noxious has been the goal of several research projects. Ohsugia et al. substituted a long-chain sulfide, dodecyl methyl sulfide, for dimethyl sulfide. Crich et al. utilized fluorous technology in a similar manner. See also Pfitzner–Moffatt oxidation Sulfonium-based oxidation of alcohols to aldehydes References External links Corey–Kim Oxidation from Organic-Chemistry Organic oxidation reactions Name reactions
Corey–Kim oxidation
Chemistry
431
19,090,834
https://en.wikipedia.org/wiki/John%20Loveday%20%28physicist%29
John Stephen Loveday is an experimental physicist working in high pressure research. He was educated at Coopers School in Chislehurst and at the University of Bristol, from where he took his PhD in Physics. He currently works as a Reader in the School of Physics and Astronomy at the University of Edinburgh, Scotland. Loveday is considered one of the pioneers of neutron diffraction at high pressure and was a founder member of the Paris–Edinburgh high-pressure neutron diffraction collaboration. His specialism is in techniques for high-pressure neutron scattering and examining the application of these techniques for investigating structures and transitions in planetary ices, hydrates, water and other simple molecular systems. He is the author of more than seventy papers and his work on the behaviour of clathrate hydrates at high pressure and their relevance to models of planetary bodies including Titan was published in Nature and has been highly cited. In 2004 he helped establish the Centre for Science at Extreme Conditions, where he works with Andrew D. Huxley and Paul Attfield. References External links University of Edinburgh: School of Physics and Astronomy:John Loveday (Accessed August 2012) Experimental physicists Living people Alumni of the University of Bristol Academics of the University of Edinburgh Year of birth missing (living people)
John Loveday (physicist)
Physics
256
2,014,760
https://en.wikipedia.org/wiki/Finnish%20Meteorological%20Institute
The Finnish Meteorological Institute (FMI) is the government agency responsible for gathering and reporting weather data and forecasts in Finland. It is a part of the Ministry of Transport and Communications, but it operates semi-autonomously. The Institute is an impartial research and service organisation with expertise covering a wide range of atmospheric science activities beyond gathering and reporting weather data and forecasts. The headquarters of the Institute is on the Kumpula Campus in Helsinki, Finland. Services FMI provides weather forecasts for aviation, traffic, shipping and the media, as well as for private citizens via the internet and mobile devices. It also provides air quality services. For sea areas, it provides information about ice cover, sea level changes and waves. In 2013, FMI made data sets such as weather, sea and climate observation data, time series and model data openly available. The open data is targeted to benefit application developers who want to develop new services, applications and products. In 2009, researchers from VTT published a study assessing the benefits generated by the services offered by the Finnish Meteorological Institute. They put the annual benefits in the range of 260–290 million euros, while the annual budget of the institute was around 50–60 million euros, which leads to an estimated annual benefit-cost ratio for the services of at least 5:1. Observations The Finnish Meteorological Institute makes observations of the atmosphere, sea and space at over 400 stations around Finland. Its weather radar network consists of 10 C-band Doppler weather radars. Research The research areas of FMI include meteorology, air quality, climate change, earth observation, and marine and arctic research.
Scientific research at FMI is mainly organized around three centers ("Weather, Sea and Climate Service Center", "Observing and Information Service Systems Center" and "Space and Earth Observation Center") and two programs ("Meteorological and Marine Research Program" and "Climate Research Program"). Every year FMI's researchers publish about 300 peer-reviewed articles. Air quality activities The Finnish Meteorological Institute has investigated air quality processes and air pollution prevention techniques since the early 1970s. Its staff members have comprehensive competence within the areas of meteorology, physics, chemistry, biology and engineering. Integrated work is done in cooperation with many other European research institutes and universities. The air quality activities conducted by the Institute include:
Research, testing and development of air quality measuring methods and equipment
Development of air pollutant emission inventories
Development of air pollution dispersion models
Performing chemical analyses of air quality
Study and development of air pollution prevention techniques
The suite of local-scale (0–30 km) dispersion models available at the Institute includes:
An urban-area, multiple-source dispersion model
Vehicular pollution line-source dispersion models
Dispersion models for hazardous materials
Dispersion models for odorous compounds
Dispersion models for larger scales (30 to 3000 km) are also available. Space Research The Finnish Meteorological Institute is one of the few places in Finland where space research takes place. The institute has been a part of several high-profile NASA and ESA missions, such as Phoenix, Mars Science Laboratory, Rosetta and BepiColombo, in addition to leading a lander mission of its own, MetNet. It worked with Spain and the United States to contribute to the Rover Environmental Monitoring Station (REMS) on Mars Science Laboratory (Curiosity).
The Finnish Meteorological Institute has designed and produced parts for the robotic space probe Rosetta and the robotic lander Philae, which sent some data from comet 67P/Churyumov–Gerasimenko in 2014–2015. The electric solar wind sail, invented in 2006 by FMI scientist Pekka Janhunen, won the 2010 Finnish Quality Innovation Prize in the Potential Innovations category. It was tested on the ESTCube-1 satellite. Staff The number of full-time staff of the Finnish Meteorological Institute is about 540. Permanent staff members account for about two-thirds of the Institute's personnel and those with contractual posts account for the remainder. The Institute operates on a round-the-clock basis, and about 30 percent of the full-time staff work in shifts. 54 percent of the staff have university degrees and 15 percent have a licentiate or PhD degree. The average age of the staff is 43 years. See also Climate change in Finland Foreca List of atmospheric dispersion models National Center for Atmospheric Research National Environmental Research Institute of Denmark NILU, the Norwegian Institute for Air Research Roadway air dispersion modeling SILAM Swedish Meteorological and Hydrological Institute TA Luft UK Atmospheric Dispersion Modelling Liaison Committee UK Dispersion Modelling Bureau University Corporation for Atmospheric Research References External links Finnish Meteorological Institute Public GIS Map Database Ministry of Transport and Communications Finland Atmospheric dispersion modeling Governmental meteorological agencies in Europe Research institutes in Finland
Finnish Meteorological Institute
Chemistry,Engineering,Environmental_science
974
447,152
https://en.wikipedia.org/wiki/Facial%20%28sexual%20act%29
A facial is a sexual activity in which a man ejaculates semen onto the face of one or more sexual partners. A facial is a form of non-penetrative sex, though it is generally performed after some other means of sexual stimulation, such as vaginal sex, anal sex, oral sex, manual sex or masturbation. Facials are regularly portrayed in pornographic films and videos, often as a way to close a scene. The performance of a facial is typically preceded by activities that result in the sexual arousal and stimulation of the ejaculating participant. After the prerequisite level of sexual stimulation has been achieved, and ejaculation becomes imminent, the male will position his penis in a way that allows the discharged semen to be deposited onto his partner's face. The volume of semen that is ejaculated depends on several factors, including the male's health, age, degree of sexual excitement, and the time since his last ejaculation. Normal quantities of ejaculate range from 1.5 to 5.0 milliliters (1 teaspoon). Seconds after being deposited onto the face, the semen thickens, before liquefying 15–30 minutes later. Cultural depictions Predating the modern age of pornography, facials were described in literature. As an example, the French aristocrat Marquis de Sade wrote about performing facials in his work The 120 Days of Sodom, written in 1785. One passage of the novel reads "… I show them my prick, then what do you suppose I do? I squirt the fuck in their face… That's my passion my child, I have no other… and you're about to behold it." In mainstream pornography In the 1970s, the hardcore pornography genre introduced the stereotypical cumshot (also known as the money shot) scene as a central element (leitmotif) of the hardcore film, in which the male actor ejaculates in a way ensuring maximum visibility of the act itself. These scenes may involve the female actor "calling for" the shot to be directed at some specific part of her body. 
Now facial cumshots are regularly portrayed in pornographic films, videos, magazines and internet web sites. In addition to mainstream pornography, the popularity of facials has led to creation of its own niche market, like video series that specialize in showing the act. In 2010, psychologist Ana Bridges and colleagues conducted a content analysis of best-selling heterosexual pornographic videos showing that over 96% of all scenes concluded with a male performer ejaculating onto the body of his female partner. The mouth was the most common area to be ejaculated upon. When all regions of the face are included, facial cum shots occur in approximately 62% of scenes where external ejaculation occurs. In feminist pornography When feminist pornography emerged in 1980s, pioneer Candida Royalle always excluded facial cum shots, and with few exceptions all other male external ejaculations, from her sex scenes. Ms. Naughty's (since 2000) and Petra Joy's work (since 2004) has followed the same principle. In the early works Tristan Taormino (since 1999), facials were also deliberately excluded, but after her thinking about feminist porn gradually changed, she sometimes included such acts in her later productions. Erika Lust has occasionally featured facials ever since her 2004 debut The Good Girl. Health risks Transmission of disease Any sexual activity that involves contact with the bodily fluids of another person contains the risk of transmission of sexually transmitted infections (STIs/STDs). Semen is in itself generally harmless on the skin or if swallowed. However, semen can be the vehicle for many sexually transmitted infections, such as HIV and hepatitis. The California Occupational Safety and Health Administration categorizes semen as "other potentially infectious material" or OPIM. The risks incurred by the giving and receiving partner during the facial sexual act are drastically different. For the ejaculating partner there is almost no risk of contracting an STI. 
For the receiving partner, the risk is higher. Since potentially infected semen could come into contact with broken skin or sensitive mucous membranes (i.e., eyes, lips, mouth), there is a risk of contracting an infectious disease. Allergic reactions In rare cases, people have been known to experience allergic reactions to seminal fluids, known as human seminal plasma hypersensitivity. Symptoms can be either localized or systemic, and may include itching, redness, swelling, or blisters within 30 minutes of contact. They may also include hives and even difficulty breathing. Treatment options for semen allergy include avoiding exposure to seminal fluid by use of condoms and attempting desensitization. Criticisms and responses Criticisms There are a variety of views, ranging from facials being an act of degradation that elicits humiliation, to facials being grounded in mutual respect and eliciting pleasure. Feminist views of the depiction of male-on-female facials are primarily critical, even amongst some sex-positive feminists (including feminist pornographers), although other sex-positive feminists regard it as always acceptable, or only acceptable if certain conditions are met. General Sex therapist Ruth Westheimer believes facials are "humiliating and not sexy". She advises the average person contemplating oral sex to not think that a facial is a necessary part of the act. In response to an inquiry from a reader, sex columnist Dan Savage wrote: "Facials are degrading—and that's why they're so hot." Daily Nexus columnist Nina Love Anthony views the practice of facials in a non-threatening light, feeling that it adds variety to the sexual experience. In one of her weekly articles she wrote, "But let's give credit where credit is due: The money shot, by itself, is great for a number of reasons. Blowing it on someone's face is like a change-up pitch—if you've been throwing the heat for a while, maybe you should consider hooking the curve ball."
She continues with, "Also, being on the receiving end of the shot can satisfy the secret porn star in everyone and it's minor kink for beginners." Anti-porn feminists Sociologists Gail Dines, Robert Jensen and Russo echo these sentiments in the book Pornography: The Production and Consumption of Inequality. It asserts, "In pornography, ejaculating onto a woman is a primary method by which she is turned into a slut, something (not really someone) whose primary, if not only, purpose is to be sexual with men." Radical feminist and noted critic of pornography Andrea Dworkin said "it is a convention of pornography that the sperm is on her not in her. It marks the spot, what he owns and how he owns it. The ejaculation on her is a way of saying (through showing) that she is contaminated with his dirt; that she is dirty." In Padraig McGrath's review of Laurence O'Toole's book Pornocopia – Porn, Sex, Technology and Desire, he rhetorically asks whether "…women enjoy having men ejaculate on their faces?" He suggests that the role of such a scene is to illustrate that "…it doesn't matter what the woman likes—she'll like whatever the man wants her to like because she has no inner life of her own, in turn because she's not a real person". McGrath argues that there is a "power-aspect" to depictions such as cum shots. He suggests that the "…central theme [of pornography] is power…[,] implicitly violent… eroticized hatred." Gail Dines, writing in Pornland: How Porn Has Hijacked Our Sexuality, describes the money shot of a man ejaculating on the face or body of a woman as "one of the most degrading acts in porn". To Dines, the ejaculate on the female performer's body "marks the woman as used goods", conveying a sense of ownership, and she quotes veteran porn actor and producer Bill Margold as saying, "I'd like to really show what I believe the men want to see: violence against women. I firmly believe that we serve a purpose by showing that. 
The most violent we can get is the cum shot in the face. Men get off behind that because they get even with the women they can't have." She adds that at least for some posters on adult forums discussing such scenes, the pleasure is derived from watching a woman suffer. However, Dines also describes that "when you speak to pornographers, they tend themselves not to know" the origins of these sorts of things. Feminist pornographers Feminist pornographers disagree amongst themselves whether facials should be regarded as representing or having the effect of gender inequality, should therefore not be considered feminist and thus excluded from feminist pornography, or that such depictions can be feminist if many female viewers enjoy it, or depending on a number of factors such as consent, context, chemistry, and performer agency. It is widely recognised amongst sex-positive feminists that the fact that people see facials in porn can lead them to want to do it in real life with their partners as well, and that this could (but, according to some, does not necessarily have to) have a negative impact on real-life sexuality. Pornography-actress-turned-filmmaker Candida Royalle was a critic of "cum shot" scenes in mainstream pornography. She produced pornographic films aimed at women and their partners that avoid the "misogynous predictability" and depiction of sex in "…as grotesque and graphic [a way] as possible." Royalle was also critical of the male-centredness of the typical pornography film, in which scenes end when the male actor ejaculates, and therefore decided to exclude all facial cum shots, and with few exceptions all other male external ejaculations, from her porn films. Commenting on Erika Lust's work, feminist pornographer Petra Joy (2007) argued: 'Feminism is committed to equality of the sexes, so surely "feminist porn" should show women as equals to men rather than as subservient beings... 
If you want to show cum on a woman's face that's fine but don't call it feminist.' Lust (2007) retorted, mocking 'the Church of the Pure Feminist Porn Producers... declaring that certain sexual practices that me and other women across the world happen to like, are a sin.' Separately, as some of her critics alleged, Tristan Taormino (2013) has admitted that she cannot control how certain portrayals such as facials may be received by some viewers, 'specifically that men's orgasms represent the apex of a scene (and of sex itself) and women's bodies are things to be used, controlled, and marked like territory'. When making her first film, Taormino 'embraced the notion that certain depictions were turn-offs to all women, like facial cum shots. But my thinking on this has changed over time. I believe viewers appreciate consent, context, chemistry, and performer agency more than the presence or absence of a specific act.' Responses Sociologist Lisa Jean Moore suggests that Dworkin's explanation does not take into account that it is the pleasure the actresses exhibit that the male partners enjoy, and that it is more accurate to think men want their semen to be wanted. Correspondingly it used to be a porn industry standard for the actress to act eager and loving for the facial she receives, and not in displeasure. If displeasure was shown it was usually considered a failed shot. Women's activist Beatrice Faust argued, "since ejaculating into blank space is not much fun, ejaculating over a person who responds with enjoyment sustains a lighthearted mood as well as a degree of realism." She goes on to say "Logically, if sex is natural and wholesome and semen is as healthy as sweat, there is no reason to interpret ejaculation as a hostile gesture." 
Joseph Slade, professor at Ohio University, notes in his book Pornography and sexual representation: a reference guide that adult industry actresses in the 1960s and 1970s did not trust birth control methods, and that more than one actress of the period told him that ejaculation inside her body was deemed inconsiderate if not rude. Sexologist Peter Sándor Gardos argues that his research suggests that "… the men who get most turned on by watching cum shots are the ones who have positive attitudes toward women" (at the annual meeting of the Society for the Scientific Study of Sex in 1992). Later, at the World Pornography Conference in 1998, he reported a similar conclusion, namely that "no pornographic image is interpretable outside of its historical and social context. Harm or degradation does not reside in the image itself". Cindy Patton, activist and scholar on human sexuality, claims that in western culture male sexual fulfillment is synonymous with orgasm and that the male orgasm is an essential punctuation of the sexual narrative. No orgasm, no sexual pleasure. No cum shot, no narrative closure. In other words, the cum shot is the period at the end of the sentence. In her essay "Speaking Out: Teaching In", Patton reached the conclusion that critics have devoted too little space to discovering the meaning that viewers attach to specific acts such as cum shots. See also Bukkake Creampie Cum shot Dirty Sanchez Erotic humiliation Fellatio Gokkun Money shot Pearl necklace Sexual slang Snowballing References Bibliography Pornography terminology Sexual acts Orgasm Oral eroticism Face Ejaculation
Facial (sexual act)
Biology
2,809
71,099,027
https://en.wikipedia.org/wiki/Chialvo%20map
The Chialvo map is a two-dimensional map proposed by Dante R. Chialvo in 1995 to describe the generic dynamics of excitable systems. The model is inspired by Kunihiko Kaneko's coupled map lattice numerical approach, which considers time and space as discrete variables but the state as a continuous one. Later on, Rulkov popularized a similar approach. By using only three parameters, the model is able to efficiently mimic generic neuronal dynamics in computational simulations, as single elements or as parts of inter-connected networks. The model The model is an iterative map in which, at each time step, the behavior of one neuron is updated by the following equations: x_{n+1} = x_n² exp(y_n − x_n) + k, y_{n+1} = a y_n − b x_n + c, in which x is called the activation or action potential variable, and y is the recovery variable. The model has four parameters: k is a time-dependent additive perturbation or a constant bias, a is the time constant of recovery, b is the activation-dependence of the recovery process and c is an offset constant. The model has a rich dynamics, ranging from oscillatory to chaotic behavior, as well as non-trivial responses to small stochastic fluctuations. Analysis Bursting and chaos The map is able to capture the aperiodic solutions and the bursting behavior which are remarkable in the context of neural systems. For example, with a, c and k held fixed, varying b moves the system from oscillations to aperiodic bursting solutions. Fixed points Considering the case where b = 0, the model mimics the lack of ‘voltage-dependent inactivation’ of real neurons, and the evolution of the recovery variable converges to the fixed value y* = c/(1 − a). Therefore, the dynamics of the activation variable is basically described by iterating the one-dimensional map x_{n+1} = x_n² exp(y* − x_n) + k, which as a function of k has a period-doubling bifurcation structure. Examples Example 1 A practical implementation is the combination of neurons over a lattice; for that, a coupling constant can be defined for combining the neurons.
For neurons in a single row, the evolution of the action potential in time can be defined by diffusive coupling of each neuron (indexed along the row) with its nearest neighbors at each time step. For suitable parameter values, in the absence of perturbations the neurons sit at the resting state. If we introduce a stimulus over cell 1, it induces two propagated waves circulating in opposite directions that eventually collide and die in the middle of the ring. Example 2 Analogous to the previous example, it is possible to create a set of coupled neurons over a 2-D lattice; in this case the evolution of action potentials is given by the same diffusive coupling, with each neuron indexed by its row and column in a square lattice. With this example, spiral waves can be represented for specific values of the parameters. In order to visualize the spirals, the initial condition is set in a specific configuration and the recovery variable is initialized accordingly. The map can also present chaotic dynamics for certain parameter values, with the activation variable behaving chaotically across a square network. The map can be used to simulate a nonquenched disordered lattice (as in Ref ), where each map connects with its four nearest neighbors on a square lattice and, in addition, each map has a probability of connecting to another one chosen randomly; multiple coexisting circular excitation waves will emerge at the beginning of the simulation until spirals take over. Chaotic and periodic behavior for a neuron For a single neuron, in the limit of b = 0, the map becomes one-dimensional, since y converges to a constant. If the bias parameter k is scanned over a range, different orbits will be seen, some periodic, others chaotic, that appear between two fixed points, one near the origin and the other close to the excitable regime. References Chaotic maps Neuroscience
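The single-neuron map is straightforward to iterate; the sketch below uses parameter values that are illustrative assumptions (chosen in the excitable-to-chaotic range the article discusses), not values taken from the source.

```python
import math

def chialvo_step(x, y, a=0.89, b=0.6, c=0.28, k=0.03):
    """One iteration of the Chialvo map.

    x: activation (action potential) variable
    y: recovery variable
    a, b, c, k: recovery time constant, activation-dependence of the
    recovery process, offset, and additive bias (illustrative values).
    """
    x_new = x * x * math.exp(y - x) + k
    y_new = a * y - b * x + c
    return x_new, y_new

# Iterate a single neuron from an arbitrary initial condition.
x, y = 0.1, 0.1
traj = []
for _ in range(1000):
    x, y = chialvo_step(x, y)
    traj.append(x)

# With b = 0 the recovery variable decouples and converges to c / (1 - a),
# reducing the dynamics to the one-dimensional map described above.
```

Coupled lattices of such maps (Examples 1 and 2) are built by adding a diffusive neighbor term to each neuron's update before iterating.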
Chialvo map
Mathematics,Biology
765
1,711,021
https://en.wikipedia.org/wiki/Dry%20media%20reaction
A dry media reaction or solid-state reaction or solventless reaction is a chemical reaction performed in the absence of a solvent. Dry media reactions have been developed in the wake of developments in microwave chemistry, and are a part of green chemistry. The drivers for the development of dry media reactions in chemistry are:
economics (money saved on solvents)
ease of purification (no solvent removal after synthesis)
high reaction rate (due to the high concentration of reactants)
environmental friendliness (no solvent is required; see green chemistry)
Drawbacks to overcome:
the reactants should mix to a homogeneous system
high viscosity in the reactant system
unsuitability for solvent-assisted chemical reactions
problems with dissipating heat safely; risk of thermal runaway
accelerated side reactions
very high energy consumption from milling if the reagents are solids
In one type of solventless reaction a liquid reactant is used neat; for instance, the reaction of 1-bromonaphthalene with Lawesson's reagent is done with no added liquid solvent, the 1-bromonaphthalene itself acting as the solvent. A reaction closer to a true solventless reaction is a Knoevenagel condensation of ketones with malononitrile, where a 1:1 mixture of the two reactants (and ammonium acetate) is irradiated in a microwave oven. Colin Raston's research group has been responsible for a number of new solvent-free reactions. In some of these reactions all the starting materials are solids; they are ground together with some sodium hydroxide to form a liquid, which turns into a paste and then hardens to a solid. In another development, the two components of an aldol reaction are combined with the asymmetric catalyst S-proline in a ball mill in a mechanosynthesis. The reaction product has 97% enantiomeric excess. A reaction rate acceleration is observed in several systems when a homogeneous solvent system is rapidly evaporated in a rotavap under vacuum, one of them a Wittig reaction.
The reaction goes to completion in 5 minutes with immediate evaporation, whereas the same reaction in solution (dichloromethane) reaches only 70% conversion after the same 5 minutes, and even after 24 hours some of the aldehyde remains. References Chemical reactions Green chemistry
Dry media reaction
Chemistry,Engineering,Environmental_science
476
20,207,864
https://en.wikipedia.org/wiki/Landsberg%E2%80%93Schaar%20relation
In number theory and harmonic analysis, the Landsberg–Schaar relation (or identity) is the following equation, which is valid for arbitrary positive integers p and q:

(1/√p) Σ_{n=0}^{p−1} exp(2πi n² q / p) = (e^{πi/4}/√(2q)) Σ_{n=0}^{2q−1} exp(−πi n² p / (2q)).

The standard way to prove it is to put x = 2iq/p + ε, where ε > 0, in this identity due to Jacobi (which is essentially just a special case of the Poisson summation formula in classical harmonic analysis):

Σ_{n=−∞}^{+∞} e^{−π n² x} = (1/√x) Σ_{n=−∞}^{+∞} e^{−π n² / x},  valid for Re(x) > 0,

and then let ε → 0. A proof using only finite methods was discovered in 2018 by Ben Moore. If we let q = 1, the identity reduces to a formula for the quadratic Gauss sum modulo p. The Landsberg–Schaar identity can be rephrased more symmetrically as

(1/√p) Σ_{n=0}^{p−1} exp(πi n² q / p) = (e^{πi/4}/√q) Σ_{n=0}^{q−1} exp(−πi n² p / q),

provided that we add the hypothesis that pq is an even number. References Theorems in analytic number theory
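The relation is easy to spot-check numerically. A minimal sketch (the (p, q) pairs are arbitrary choices for the check):

```python
import cmath

def lhs(p, q):
    """Left-hand side: normalized quadratic Gauss sum."""
    s = sum(cmath.exp(2j * cmath.pi * n * n * q / p) for n in range(p))
    return s / p ** 0.5

def rhs(p, q):
    """Right-hand side: dual sum with the e^{i pi/4} phase factor."""
    pref = cmath.exp(1j * cmath.pi / 4) / (2 * q) ** 0.5
    return pref * sum(cmath.exp(-1j * cmath.pi * n * n * p / (2 * q))
                      for n in range(2 * q))

# Both sides agree to floating-point precision for arbitrary p, q >= 1.
for p, q in [(1, 1), (3, 1), (5, 3), (7, 4), (12, 5)]:
    assert abs(lhs(p, q) - rhs(p, q)) < 1e-9
```

Setting q = 1 in `lhs` gives the normalized quadratic Gauss sum modulo p, as noted above.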
Landsberg–Schaar relation
Mathematics
165
6,006,708
https://en.wikipedia.org/wiki/Stable%20isotope%20labeling%20by%20amino%20acids%20in%20cell%20culture
Stable isotope labeling by/with amino acids in cell culture (SILAC) is a technique based on mass spectrometry that detects differences in protein abundance among samples using non-radioactive isotopic labeling. It is a popular method for quantitative proteomics. Procedure Two populations of cells are cultivated in cell culture. One of the cell populations is fed with growth medium containing normal amino acids. In contrast, the second population is fed with growth medium containing amino acids labeled with stable (non-radioactive) heavy isotopes. For example, the medium can contain arginine labeled with six carbon-13 atoms (13C) instead of the normal carbon-12 (12C). When the cells are growing in this medium, they incorporate the heavy arginine into all of their proteins. Thereafter, all peptides containing a single arginine are 6 Da heavier than their normal counterparts. Alternatively, uniform labeling with 13C or 15N can be used. Proteins from both cell populations are combined and analyzed together by mass spectrometry, as pairs of chemically identical peptides of different stable-isotope composition can be differentiated in a mass spectrometer owing to their mass difference. The ratio of peak intensities in the mass spectrum for such peptide pairs reflects the abundance ratio for the two proteins. Applications A SILAC approach involving incorporation of tyrosine labeled with nine carbon-13 atoms (13C) instead of the normal carbon-12 (12C) has been utilized to study tyrosine kinase substrates in signaling pathways. SILAC has emerged as a very powerful method to study cell signaling, post-translational modifications such as phosphorylation, protein–protein interactions and the regulation of gene expression. In addition, SILAC has become an important method in secretomics, the global study of secreted proteins and secretory pathways. It can be used to distinguish between proteins secreted by cells in culture and serum contaminants.
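The 6 Da shift described above appears in a spectrum as an m/z spacing between the light and heavy peaks that shrinks with charge state. A minimal sketch of that arithmetic (the function name and the choice of six 13C atoms are illustrative; the 13C–12C mass difference of about 1.003355 Da is a standard physical constant):

```python
DELTA_C13_C12 = 1.003355  # mass difference between 13C and 12C, in daltons

def silac_pair_spacing(n_heavy_isotopes: int, charge: int) -> float:
    """m/z spacing between the light and heavy peaks of a SILAC peptide
    pair whose heavy form carries n_heavy_isotopes extra-neutron atoms."""
    return n_heavy_isotopes * DELTA_C13_C12 / charge

# Arginine labeled with six 13C atoms: ~6.02 Da total mass shift,
# observed as ~3.01 m/z units at charge 2 and ~2.01 at charge 3.
for z in (1, 2, 3):
    print(z, round(silac_pair_spacing(6, z), 4))
```

The same arithmetic explains why NeuCode-style labels, whose mass differences are only millidaltons, require high-resolution instruments to separate the peak pairs.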
It has also been adapted as a 'forward+reverse' SILAC method for simultaneous labeling of host and microbe, which enables the study of host-microbe interactions. Standardized protocols of SILAC for various applications have also been published. Pulsed SILAC Pulsed SILAC (pSILAC) is a variation of the SILAC method where the labelled amino acids are added to the growth medium for only a short period of time. This allows monitoring differences in de novo protein production rather than raw concentration. NeuCode SILAC Traditionally the level of multiplexing in SILAC was limited due to the number of SILAC isotopes available. Recently, a new technique called NeuCode (neutron encoding) SILAC, has augmented the level of multiplexing achievable with metabolic labeling (up to 4). The NeuCode amino acid method is similar to SILAC but differs in that the labeling only utilizes heavy amino acids. The use of only heavy amino acids eliminates the need for 100% incorporation of amino acids needed for SILAC. The increased multiplexing capability of NeuCode amino acids is from the use of mass defects from extra neutrons in the stable isotopes. These small mass differences however need to be resolved on high-resolution mass spectrometers. References Further reading External links SILAC resource Mann Lab SILAC resource Pandey Lab SILAC resource Center for Experimental Bioinformatics (CEBI) Biochemistry methods Biotechnology Mass spectrometry Proteomics Protein–protein interaction assays
Stable isotope labeling by amino acids in cell culture
Physics,Chemistry,Biology
718
22,571,832
https://en.wikipedia.org/wiki/Journal%20of%20the%20American%20Medical%20Informatics%20Association
The Journal of the American Medical Informatics Association is a peer-reviewed scientific journal covering research in the field of medical informatics published by the American Medical Informatics Association. According to the Journal Citation Reports, the journal has a 2021 impact factor of 7.942. It is among the very top journals in the category "Medical Informatics". References External links Bimonthly journals Academic journals established in 1994 Biomedical informatics journals Oxford University Press academic journals English-language journals Academic journals associated with learned and professional societies of the United States
Journal of the American Medical Informatics Association
Biology
109
59,995,500
https://en.wikipedia.org/wiki/Crouzeix%27s%20conjecture
Crouzeix's conjecture is an unsolved problem in matrix analysis. It was proposed by Michel Crouzeix in 2004, and it can be stated as follows:

$$\|f(A)\| \le 2 \sup_{z \in W(A)} |f(z)|,$$

where the set $W(A) = \{x^{*}Ax : x \in \mathbb{C}^{n}, \|x\| = 1\}$ is the field of values of an n×n (i.e. square) complex matrix $A$ and $f$ is a complex function that is analytic in the interior of $W(A)$ and continuous up to the boundary of $W(A)$. Slightly reformulated, the conjecture can also be stated as follows: for all square complex matrices $A$ and all complex polynomials $p$:

$$\|p(A)\| \le 2 \sup_{z \in W(A)} |p(z)|$$

holds, where the norm on the left-hand side is the spectral operator 2-norm. History Crouzeix's theorem, proved in 2007, states that:

$$\|f(A)\| \le 11.08 \sup_{z \in W(A)} |f(z)|$$

(the constant 11.08 is independent of the matrix dimension, thus transferable to infinite-dimensional settings). Michel Crouzeix and Cesar Palencia proved in 2017 that the result holds with the constant $1+\sqrt{2}$, improving the original constant of 11.08. The not yet proved conjecture states that the constant can be refined to 2. Special cases While the general case is unknown, it is known that the conjecture holds for some special cases. For instance, it holds for all normal matrices, for tridiagonal 3×3 matrices with elliptic field of values centered at an eigenvalue and for general n×n matrices that are nearly Jordan blocks. Furthermore, Anne Greenbaum and Michael L. Overton provided numerical support for Crouzeix's conjecture. Further reading References See also Von Neumann's inequality Conjectures Matrix theory Unsolved problems in mathematics
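The quantities in the conjecture can be explored numerically. The sketch below (a NumPy sketch; the routine names are ours) samples the boundary of the field of values via the standard construction — for each rotation angle, the top eigenvector of the Hermitian part of e^{iθ}A gives a boundary point x*Ax — and evaluates the ratio ‖p(A)‖ / max_{W(A)} |p(z)| for the 2×2 Jordan block with p(z) = z, the classical example where the conjectured constant 2 is actually attained:

```python
import numpy as np

def fov_boundary(A, num=720):
    """Sample boundary points of the field of values W(A): for each angle
    theta, the top eigenvector x of the Hermitian part of e^{i*theta} A
    yields the boundary point x* A x."""
    pts = []
    for theta in np.linspace(0.0, 2 * np.pi, num, endpoint=False):
        H = (np.exp(1j * theta) * A + np.exp(-1j * theta) * A.conj().T) / 2
        _, V = np.linalg.eigh(H)       # eigenvalues in ascending order
        x = V[:, -1]                   # eigenvector of the largest eigenvalue
        pts.append(x.conj() @ A @ x)
    return np.array(pts)

def matrix_poly(coeffs, A):
    """Evaluate a polynomial (highest-degree coefficient first) at a
    square matrix via Horner's scheme."""
    P = np.zeros_like(A, dtype=complex)
    I = np.eye(A.shape[0])
    for c in coeffs:
        P = P @ A + c * I
    return P

# Extremal example: 2x2 Jordan block, p(z) = z.  W(J) is the closed disk
# of radius 1/2, so max |p| on W(J) is 1/2, while ||p(J)|| = ||J|| = 1.
J = np.array([[0.0, 1.0], [0.0, 0.0]])
norm_pA = np.linalg.norm(matrix_poly([1.0, 0.0], J), 2)   # spectral norm
max_on_fov = np.abs(np.polyval([1.0, 0.0], fov_boundary(J))).max()
print(norm_pA / max_on_fov)  # the ratio reaches the conjectured constant 2
```

For arbitrary matrices the proven Crouzeix–Palencia bound guarantees the ratio never exceeds 1 + √2 ≈ 2.414; the conjecture is that it never exceeds 2, which this Jordan-block example shows would be sharp.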
Crouzeix's conjecture
Mathematics
304
39,901,743
https://en.wikipedia.org/wiki/List%20of%20PowerPC-based%20game%20consoles
There are several ways in which game consoles can be categorized. One is by console generation, and another is by computer architecture. Game consoles have long used specialized and customized computer hardware based on some standardized processor instruction set architecture. In this case, it is PowerPC and Power ISA, processor architectures initially developed in the early 1990s by the AIM alliance, i.e. Apple, IBM, and Motorola. Even though these consoles share much in regard to instruction set architecture, game consoles are still highly specialized computers, so it is not common for games to be readily portable or compatible between devices. Only Nintendo has kept a level of portability between its consoles, and even there it is not universal. The first devices used standard processors, but later consoles used bespoke processors with special features, primarily developed by or in cooperation with IBM for the explicit purpose of being in a game console. In this regard, these computers can be considered "embedded". All three major consoles of the seventh generation were PowerPC based. As of 2019, no PowerPC-based game consoles are in production. The most recent release, Nintendo's Wii U, has since been discontinued and succeeded by the Nintendo Switch (which uses an Nvidia Tegra ARM processor). The Wii Mini, the last PowerPC-based game console to remain in production, was discontinued in 2017. List See also PowerPC applications List of PowerPC processors References Embedded systems Lists of video game consoles
List of PowerPC-based game consoles
Technology,Engineering
299
17,064,122
https://en.wikipedia.org/wiki/Ver%20%28command%29
In computing, ver (short for version) is a command in various command-line interpreters (shells) such as COMMAND.COM, cmd.exe and 4DOS/4NT. It prints the name and version of the operating system, the command shell, or in some implementations the version of other commands. It is roughly equivalent to the Unix command uname. Implementations The command is available in FLEX, HDOS, DOS, FlexOS, SpartaDOS X, 4690 OS, OS/2, Windows, and ReactOS. It is also available in the open-source MS-DOS emulator DOSBox, in the KolibriOS Shell and in the EFI shell. TSC FLEX In TSC's FLEX operating system, the VER command is used to display the version number of a utility or program. In some versions the command is called VERSION. DOS The command is available in MS-DOS versions 2 and later. MS-DOS versions up to 6.22 typically derive the DOS version from the DOS kernel. This may be different from the string printed on start-up. The argument "/r" can be added to give more information and to list whether DOS is running in the HMA (high memory area). PC DOS typically derives the version from an internal string in command.com (so PC DOS 6.1 command.com reports the version as 6.10, although the kernel version is 6.00). DR DOS 6.0 also includes an implementation of the command. DR-DOS reports whatever value the environment variable OSVER reports. PTS-DOS includes an implementation of this command that can display, modify, and restore the DOS version number. IBM OS/2 OS/2 command.com reports an internal string, with the OS/2 version. The underlying kernel here is 5.00, but modified to report x0.xx (where x.xx is the OS/2 version). Microsoft Windows Windows 9x command.com reports a string from inside command.com. The build version (e.g. 2222) is also derived from there. Windows NT command.com reports either the 32-bit processor string (4nt, cmd), or under some loads, MS-DOS 5.00.500 (for all builds). The underlying kernel reports 5.00 or 5.50 depending on the interrupt.
MS-DOS 5.00 commands run unmodified on NT. Microsoft Windows also includes a GUI (Windows dialog) variant of the command called winver, which shows the Service Pack or Windows Update installed (if any) as well as the version. In Windows before Windows for Workgroups 3.11, running winver from DOS reported an embedded string in winver.exe. Windows also includes the setver command that is used to set the version number that the MS-DOS subsystem (NTVDM) reports to a DOS program. This command is not available on Windows XP 64-Bit Edition. DOSBox In DOSBox, the command is used to view and set the reported DOS version. It also displays the running DOSBox version. The syntax to set the reported DOS version is the following:

VER SET <MAJOR> [MINOR]

The parameter MAJOR is the number before the period, and MINOR is what comes after. Versions can range from 0.0 to 255.255. Any values over 255 will loop from zero. (That is, 256=0, 257=1, 258=2, etc.) Others AmigaDOS provides a version command. It displays the current version number of the Kickstart and Workbench. The DEC OS/8 CCL ver command prints the version numbers of both the OS/8 Keyboard Monitor and CCL. Syntax

C:\WINDOWS\system32>ver

Microsoft Windows [Version 10.0.10586]

Some versions of MS-DOS support an undocumented /r switch, which will show the revision as well as the version. Version list The following table lists version numbers from various Microsoft operating systems: See also Comparison of Microsoft Windows versions List of DOS commands uname References Further reading External links ver | Microsoft Docs How to find Windows version, service pack number and edition from CMD How to determine what version of Windows you are running in a batch file Internal DOS commands MSX-DOS commands OS/2 commands ReactOS commands Windows commands Microcomputer software Windows administration
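A minimal sketch of two details described above: parsing a ver-style output string into a version tuple, and DOSBox's wrap-from-zero rule for VER SET components (the function names and the regular expression are ours, not part of any of these shells):

```python
import re

_VERSION_RE = re.compile(r"\[Version (\d+(?:\.\d+)+)\]")

def parse_ver(output: str) -> tuple:
    """Extract the version tuple from output such as
    'Microsoft Windows [Version 10.0.10586]'."""
    m = _VERSION_RE.search(output)
    if m is None:
        raise ValueError("unrecognized ver output: %r" % output)
    return tuple(int(part) for part in m.group(1).split("."))

def dosbox_ver_component(value: int) -> int:
    """DOSBox VER SET components loop from zero past 255
    (256=0, 257=1, 258=2, etc.)."""
    return value % 256

print(parse_ver("Microsoft Windows [Version 10.0.10586]"))  # (10, 0, 10586)
print(dosbox_ver_component(257))                            # 1
```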
Ver (command)
Technology
915
12,774,077
https://en.wikipedia.org/wiki/Aronszajn%20tree
In set theory, an Aronszajn tree is a tree of uncountable height with no uncountable branches and no uncountable levels. For example, every Suslin tree is an Aronszajn tree. More generally, for a cardinal κ, a κ-Aronszajn tree is a tree of height κ in which all levels have size less than κ and all branches have height less than κ (so Aronszajn trees are the same as ℵ₁-Aronszajn trees). They are named for Nachman Aronszajn, who constructed an Aronszajn tree in 1934; his construction was described by Kurepa (1935). A cardinal κ for which no κ-Aronszajn trees exist is said to have the tree property (sometimes the condition that κ is regular and uncountable is included). Existence of κ-Aronszajn trees Kőnig's lemma states that ℵ₀-Aronszajn trees do not exist. The existence of Aronszajn trees (ℵ₁-Aronszajn trees) was proven by Nachman Aronszajn, and implies that the analogue of Kőnig's lemma does not hold for uncountable trees. The existence of ℵ₂-Aronszajn trees is undecidable in ZFC: more precisely, the continuum hypothesis implies the existence of an ℵ₂-Aronszajn tree, and Mitchell and Silver showed that it is consistent (relative to the existence of a weakly compact cardinal) that no ℵ₂-Aronszajn trees exist. Jensen proved that V = L implies that there is a κ-Aronszajn tree (in fact a κ-Suslin tree) for every infinite successor cardinal κ. It has also been shown (using a large cardinal axiom) that it is consistent that no ℵₙ-Aronszajn trees exist for any finite n other than 1. If κ is weakly compact then no κ-Aronszajn trees exist. Conversely, if κ is inaccessible and no κ-Aronszajn trees exist, then κ is weakly compact. Special Aronszajn trees An Aronszajn tree is called special if there is a function f from the tree to the rationals so that f(x) < f(y) whenever x < y. Martin's axiom MA(ℵ₁) implies that all Aronszajn trees are special, a proposition sometimes abbreviated by EATS.
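The specialness condition just defined can be displayed together with the easy direction of the classical equivalence between specialness and decomposability into countably many antichains (this equivalence is standard background, recorded here as a sketch rather than taken from the article):

```latex
% A tree T is special when a strictly increasing rational-valued
% map exists:
\[
  \exists\, f \colon T \to \mathbb{Q} \quad\text{such that}\quad
  x <_T y \implies f(x) < f(y).
\]
% Given such an f, the fibers of f decompose T into countably many
% antichains: two comparable points cannot share an f-value, so each
% fiber f^{-1}(\{q\}) is an antichain.
\[
  T \;=\; \bigcup_{q \in \mathbb{Q}} f^{-1}(\{q\}),
  \qquad f^{-1}(\{q\}) \text{ an antichain for each } q \in \mathbb{Q}.
\]
```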
The stronger proper forcing axiom implies the stronger statement that for any two Aronszajn trees there is a club set of levels such that the restrictions of the trees to this set of levels are isomorphic, which says that in some sense any two Aronszajn trees are essentially isomorphic. On the other hand, it is consistent that non-special Aronszajn trees exist, and this is also consistent with the generalized continuum hypothesis plus Suslin's hypothesis. Construction of a special Aronszajn tree A special Aronszajn tree can be constructed as follows. The elements of the tree are certain well-ordered sets of rational numbers with supremum that is rational or −∞. If x and y are two of these sets then we define x ≤ y (in the tree order) to mean that x is an initial segment of the ordered set y. For each countable ordinal α we write Uα for the elements of the tree of level α, so that the elements of Uα are certain sets of rationals with order type α. The special Aronszajn tree T is the union of the sets Uα for all countable α. We construct the countable levels Uα by transfinite induction on α as follows, starting with the empty set as U0: If α + 1 is a successor then Uα+1 consists of all extensions of a sequence x in Uα by a rational greater than sup x. Uα+1 is countable as it consists of countably many extensions of each of the countably many elements in Uα. If α is a limit then let Tα be the tree of all points of level less than α. For each x in Tα and for each rational number q greater than sup x, choose a level α branch of Tα containing x with supremum q. Then Uα consists of these branches. Uα is countable as it consists of countably many branches for each of the countably many elements in Tα. The function f(x) = sup x is rational or −∞, and has the property that if x < y then f(x) < f(y). Any branch in T is countable as f maps branches injectively to −∞ and the rationals.
T is uncountable, as it has a non-empty level Uα for each countable ordinal α, and the countable ordinals together make up the first uncountable ordinal. This proves that T is a special Aronszajn tree. This construction can be used to construct κ-Aronszajn trees whenever κ is a successor of a regular cardinal and the generalized continuum hypothesis holds, by replacing the rational numbers by a more general η set. See also Kurepa tree Aronszajn line References External links PlanetMath Trees (set theory) Independence results
Aronszajn tree
Mathematics
1,077
49,167,470
https://en.wikipedia.org/wiki/Match%20Analysis
Match Analysis is a US company with headquarters in Emeryville, California. The company employs 70 staff in their offices and data collection facilities in California and Mexico City, Mexico. The company provides video analysis tools and digital library archiving services supplying performance and physical tracking data to football (soccer) coaches, teams, and players. The objective is to improve individual and team performance and/or analyze opposition patterns of play to give tactical advantage. Match Analysis records and verifies over 2,500 distinct events per football match with every touch by every player catalogued, synchronized against video feeds, and stored in a searchable video database. History Match Analysis was founded in 2000 by Mark Brunkhart, its current President, after he developed a system to help amateur football players see the game objectively. The system evolved from a collection of printed reports and info graphics into video analysis software and statistical data tools supplied to professional and amateur football teams, governing bodies/professional organizations and media partners around the world. Match Analysis is one of the pioneers of statistical analysis in football. In 2002, the company released Mambo Studio, the first video editing and retrieval system for football. In 2004, Tango Online was launched to replace printed reports with the first instant access online video database of a complete league. In May 2012, Match Analysis acquired Edinburgh based Spinsight Ltd purchasing the intellectual property and other assets relating to its K2 Panoramic Video Camera System. Match Analysis signed strategic alliances with Major League Soccer and Liga MX in 2013. In addition Match Analysis's K2 Panoramic Video Camera System was implemented in every stadium across Major League Soccer and Liga MX in the summer of 2013. 
During November 2015, Match Analysis participated in discussions with IFAB and FIFA at their headquarters in Zürich, Switzerland to advise on global standards for electronic performance and tracking systems. In May 2016, Match Analysis announced the introduction of Tango VIP, their new foundational technology platform for their extensive online presence. Products Match Analysis tools and services provide video indexing and archiving, statistical analysis, live data collection, player tracking, fitness reports, and performance analysis. The company's product range includes Mambo Studio, K2 Panoramic Video, TrueView Visualizations, Tango Online, Tango Live, Tango ToGo, Player Tracking and Fitness Reports. Clients The company has worked with eight different national teams, including Germany, the United States, and Mexico, and has relationships with over 50 professional clubs. Match Analysis currently supports league-wide deals with Major League Soccer and Liga MX. Over the past decade, Match Analysis has worked with almost every major professional club in North America and media outlets including the New York Times World Cup coverage. Current Match Analysis clients include all 18 Liga MX clubs in Mexico, 17 MLS clubs, the Mexico national team, PRO Professional Referee Organization and a wide array of college and amateur sides. References External links Association football equipment Motion in computer vision Tracking
Match Analysis
Physics,Technology
582
38,993,942
https://en.wikipedia.org/wiki/Windows%208.1
Windows 8.1 is a release of the Windows NT operating system developed by Microsoft. It was released to manufacturing on August 27, 2013, and broadly released for retail sale on October 17, 2013, about a year after the retail release of its predecessor; it was succeeded by Windows 10 on July 29, 2015. Windows 8.1 was made available for download via MSDN and TechNet, and as a free upgrade for users of retail copies of Windows 8 and Windows RT via the Windows Store. A server version, Windows Server 2012 R2, was released on October 18, 2013. Windows 8.1 aimed to address complaints of Windows 8 users and reviewers on launch. Enhancements include an improved Start screen, additional snap views, additional bundled apps, tighter OneDrive (formerly SkyDrive) integration, Internet Explorer 11 (IE11), a Bing-powered unified search system, restoration of a visible Start button on the taskbar, and the ability to restore the previous behavior of opening the user's desktop on login instead of the Start screen. Windows 8.1 also added support for then-emerging technologies like high-resolution displays, 3D printing, Wi-Fi Direct, and Miracast streaming, as well as the ReFS file system. Windows 8.1 received a more positive reception than Windows 8, with reviewers appreciating the expanded functionality available to apps in comparison to Windows 8, its OneDrive integration, its user interface tweaks, and the addition of expanded tutorials for operating the Windows 8 interface. Despite these improvements, Windows 8.1 was still criticized for not addressing all issues of Windows 8 (such as poor integration between Metro-style apps and the desktop interface), and for the potential privacy implications of the expanded use of online services. Mainstream support for Windows 8.1 ended on January 9, 2018, and extended support ended on January 10, 2023.
Mainstream support for the Embedded Industry edition of Windows 8.1 ended on July 10, 2018, and extended support ended on July 11, 2023. History In February 2013, ZDNet writer Mary Jo Foley disclosed potential rumors about "Blue", the codename for a wave of planned updates across several Microsoft products and services, including Windows 8, Windows Phone 8, Outlook.com, and SkyDrive. In particular, the report detailed that Microsoft was planning to shift to a more "continuous" development model, which would see major revisions to its main software platforms released on a consistent yearly cycle to keep up with market demands. Lending credibility to the reports, Foley noted that a Microsoft staff member had listed experience with "Windows Blue" on his LinkedIn profile, and listed it as a separate operating system from 8. A post-RTM build of Windows 8, build 9364, was leaked in March 2013. The build, which was believed to be of "Windows Blue", revealed a number of enhancements across Windows 8's interface, including additional size options for tiles, expanded color options on the Start screen, the expansion of PC Settings to include more options that were previously exclusive to the desktop Control Panel, the ability for apps to snap to half of the screen, the ability to take screenshots from the Share charm, additional stock apps, increased SkyDrive integration (such as automatic device backups) and Internet Explorer 11. Shortly afterward on March 26, 2013, corporate vice president of corporate communications Frank X. Shaw officially acknowledged the "Blue" project, stating that continuous development would be "the new normal" at Microsoft, and that "our product groups are also taking a unified planning approach so people get what they want—all of their devices, apps and services working together wherever they are and for whatever they are doing." 
In early May, press reports announcing the upcoming version in Financial Times and The Economist negatively compared Windows 8 to New Coke. The theme was then echoed and debated in the computer press. Shaw rejected this criticism as "extreme", adding that he saw a comparison with Diet Coke as more appropriate. On May 14, 2013, Microsoft announced that "Blue" was officially unveiled as Windows 8.1. Following a keynote presentation focusing on this version, the public beta of Windows 8.1 was released on June 26, 2013, during Build. Build 9600 of Windows 8.1 was released to OEM hardware partners on August 27, 2013, and became generally available on October 17, 2013. Unlike past releases of Windows and its service packs, volume license customers and subscribers to MSDN Plus and TechNet Plus were unable to obtain the RTM version upon its release; a spokesperson stated that the change in policy was to allow Microsoft to work with OEMs "to ensure a quality experience at general availability." Microsoft stated that Windows 8.1 would be released to the general public on October 17, 2013. However, after criticism, Microsoft reversed its decision and released the RTM build on MSDN and TechNet on September 9, 2013. Microsoft announced that Windows 8.1, along with Windows Server 2012 R2, was released to manufacturing on August 27, 2013. Prior to the release of Windows 8.1, Microsoft premiered a new television commercial in late-September 2013 that focused on its changes as part of the "Windows Everywhere" campaign. Shortly after its release, Windows RT 8.1 was temporarily recalled by Microsoft following reports that some users had encountered a rare bug which corrupted the operating system's Boot Configuration Data during installation, resulting in an error on startup. On October 21, 2013, Microsoft confirmed that the bug was limited to the original Surface tablet, and only affected 1 in 1000 installations. 
The company released recovery media and instructions which could be used to repair the device, and restored access to Windows RT 8.1 the next day. It was also found that changes to screen resolution handling on Windows 8.1 resulted in mouse input lag in certain video games that do not use the DirectInput APIs—particularly first-person shooter games, including Deus Ex: Human Revolution, Hitman: Absolution, and Metro 2033. Users also found the issues to be more pronounced when using gaming mice with high resolution and/or polling rates. Microsoft released a patch to fix the bug on certain games in November 2013, and acknowledged that it was caused by "changes to mouse-input processing for low-latency interaction scenarios". Update On April 8, 2014, Microsoft released the Windows 8.1 Update, which included all past updates plus new features. It was unveiled by Microsoft vice president Joe Belfiore at Mobile World Congress on February 23, 2014, and detailed in full at Microsoft's Build conference on April 2. Belfiore noted that the update would lower the minimum system requirements for Windows, so it can be installed on devices with as little as 1 GB of RAM and 16 GB of storage. Unlike Windows 8.1 itself, this cumulative update is distributed through Windows Update, and must be installed in order to receive any further patches for Windows 8.1. At the 2014 Build conference, during April, Microsoft's Terry Myerson unveiled further user interface changes for Windows 8.1, including the ability to run Metro-style apps inside desktop windows, and a revised Start menu, which creates a compromise between the Start menu design used by Windows 7 and the Start screen, by combining the application listing in the first column with a second that can be used to display app tiles, whereas Windows 8.0 used a screen hotspot ("hot corner"). Myerson stated that these changes would occur in a future update, but did not elaborate further. 
A distinction is the removal of the tooltip with the preview thumbnail of the Start screen. Microsoft also unveiled a concept known as "Universal Windows apps", in which a Windows Runtime app can be ported to Windows Phone 8.1 and Xbox One while sharing a common codebase. While it does not entirely unify Windows' app ecosystem with that of Windows Phone, it will allow developers to synchronize data between versions of their app on each platform, and bundle access to Windows, Windows Phone, and Xbox One versions of an app in a single purchase. Microsoft originally announced that users who did not install the update would not receive any other updates after May 13, 2014. However, meeting this deadline proved challenging: The ability to deploy Windows 8.1 Update through Windows Server Update Services (WSUS) was disabled shortly after its release following the discovery of a bug which affects the ability to use WSUS as a whole in certain server configurations. Microsoft later fixed the issue but users continued to report that the update may fail to install. Microsoft's attempt to fix the problem was ineffective, to the point that Microsoft pushed the support deadline further to June 30, 2014. On 16 May, Microsoft released additional updates to fix a problem of BSOD in the update. Distribution Microsoft markets Windows 8.1 as an "update" for Windows 8, avoiding the term "upgrade". Microsoft's support lifecycle policy treats Windows 8.1 similar to previous service packs of Windows: It is part of Windows 8's support lifecycle, and upgrading to Windows 8.1 is required to maintain access to support and Windows updates after January 12, 2016. Retail and OEM copies of Windows 8, Windows 8 Pro, and Windows RT can be upgraded through Windows Store free of charge. 
However, volume license customers, TechNet or MSDN subscribers and users of Windows 8 Enterprise must acquire standalone installation media for Windows 8.1 and install through the traditional Windows setup process, either as an in-place upgrade or clean install. This requires a Windows 8.1-specific product key. Upgrading through Windows Store requires each machine to download an upgrade package as big as 2–3.6 GB. Unlike the traditional Windows service packs, the standalone installer, which could be downloaded once and installed as many times as needed, requires a Windows 8.1-specific product key. On July 1, 2014, acknowledging difficulties users may have had through the Windows Store update method, Microsoft began to phase in an automatic download process for Windows 8.1. Windows 8 was re-issued at retail as Windows 8.1 alongside the online upgrade for those who did not currently own a Windows 8 license. Retail copies of Windows 8.1 contain "Full" licenses that can be installed on any computer, regardless of their existing operating system, unlike Windows 8 retail copies, which were only available at retail with upgrade licenses. Microsoft stated that the change was in response to customer feedback, and to allow more flexibility for users. Pricing for the retail copies of Windows 8.1 remained the same. Windows 8.1 with Bing is a reduced-cost SKU of Windows 8.1 that was introduced by Microsoft in May 2014 in an effort to further encourage the production of low-cost Windows devices, whilst "driving end-user usage of Microsoft Services such as Bing and OneDrive". It is subsidized by Microsoft's Bing search engine, which is set as the default within Internet Explorer and cannot be changed by OEMs. However, this restriction does not apply to end-users, who can still change the default search engine freely. It is otherwise and functionally identical to the base edition of Windows 8.1. 
New and changed features Many of the changes on Windows 8.1, particularly to the user interface, were made in response to criticisms from early adopters and other critics after the release of Windows 8. User interface and desktop The Start screen received several enhancements on Windows 8.1, including an extended "All Apps" view with sort modes (accessed by clicking a new down arrow button or swiping upward), small and extra-large sizes for tiles, and colored tiles for desktop program shortcuts. Additional customization options were also added, such as expanded color options, new backgrounds (some of which incorporate animated elements), and the ability for the Start screen to use the desktop background instead. Applications were no longer added to the Start screen automatically when installed, and all applications now have colored tiles (desktop programs were previously shown in a single color). The app snapping system was also extended; up to four apps can be snapped onto a single display depending on screen size, apps can be snapped to fill half the screen, and can also be used on any display in a multi-monitor configuration. Apps can also launch other apps in a snapped view to display content; for example, the Mail app can open a photo attachment in a picture viewer snapped to another half of the screen. Improved support is also provided by apps for using devices in a portrait (vertical) orientation. The lock screen offers the ability to use a photo slideshow as its backdrop, and a shortcut to the Camera app by swiping up. The on-screen keyboard has an improved autocomplete mechanism which displays multiple word suggestions, and allows users to select from them by sliding on the spacebar. The autocomplete dictionary is also automatically updated using data from Bing, allowing it to recognize and suggest words relating to current trends and events. 
Similarly to Windows Phone, certain apps now display a narrow bar with three dots on it to indicate the presence of a pop-up menu accessible by swiping, clicking on the dots, or right-clicking. To improve the usability of the desktop interface, a visible Start button was restored to the taskbar for opening the Start screen, and the Quick Links menu (accessed by right-clicking the Start button or pressing ) now contains shutdown and sign-out options. Users can also modify certain user interface behaviors, such as disabling the upper hot corners for using the charms and recent apps list, going to the desktop instead of the Start screen on login or after closing all apps on a screen, automatically opening the "All Apps" view on the Start screen when opened, and prioritizing desktop programs on the "Category" sort mode on "All Apps". To assist users in learning the Windows 8 user interface, an interactive tutorial was also offered, along with a new Help + Tips app for additional information. In contrast, Windows RT 8.1 downplays the desktop interface further by not displaying the Desktop tile on its default Start screen at all (however, it can still be manually added to the Start screen). Windows manager Chaitanya Sareen stated that the restoration of the visible Start button was intended to be a "warm blanket" for users who had become confused by the removal of the button on 8; the Start button was originally removed to reflect Windows 8's treatment of the desktop as an "app" rather than the main interface. Further interface behavior changes are made on the April 2014 "Windows 8.1 Update", which are oriented towards non-touch environments (such as desktop and laptop PCs) that use a keyboard and mouse, and improve integration between Windows Store apps and the desktop. 
When a mouse is in use, the Desktop is shown on startup by default, the Start screen uses context menus instead of a toolbar across the bottom of the screen for manipulating tiles, an autohiding title bar with minimize and close buttons is displayed within apps at the top of the screen, the taskbar can display and pin apps alongside desktop programs and be accessed from within apps, and visible search and power buttons are added to the Start screen. In non-touch environments, the default image viewer and media player programs were changed back to Windows Photo Viewer and Windows Media Player in lieu of the Xbox Video and Photos apps.

Apps

The suite of pre-loaded apps bundled with Windows 8 was changed in Windows 8.1; PC Settings was expanded to include options that were previously exclusive to the desktop Control Panel, Windows Store was updated with an improved interface for browsing apps and automatic updates, the Mail app includes an updated interface and additional features, the Camera app integrates Photosynth for creating panoramas, and additional editing tools were added to the Photos app (while integration with Flickr and Facebook was completely removed). A number of additional stock apps were also added, including Calculator, Food and Drink, Health and Fitness, Sound Recorder, Reading List (which can be used to collect and sync content from apps through OneDrive), Scan, and Help + Tips. For Windows RT users, Windows 8.1 also adds a version of Microsoft Outlook to the included Office 2013 RT suite. However, it does not support data loss protection, Group Policy, Lync integration, or creating emails with information rights management. Windows Store is enabled by default within Windows To Go environments. On January 31, 2020, Microsoft released the new Microsoft Edge web browser for Windows 8.1.

Online services and functionality

Windows 8.1 adds tighter integration with several Microsoft-owned services.
OneDrive (formerly SkyDrive) is integrated at the system level to sync user settings and files. Files are automatically downloaded in the background when they are accessed from the user's OneDrive folder, unless they are marked to be available offline. By default, only file metadata and thumbnails are stored locally, and reparse points are used to give the appearance of a normal directory structure to provide backwards compatibility. The OneDrive app was updated to include a local file manager. OneDrive use on Windows 8.1 requires that a user's Windows account be linked to a Microsoft account; the previous SkyDrive desktop client (which did not have this requirement) is not supported on Windows 8.1. A Bing-based unified search system was added; it can analyze a user's search habits to return results featuring relevant local and online content. Full-screen "hero" displays aggregate news articles, Wikipedia entries, multimedia, and other content related to a search query; for instance, searching for a music performer would return photos of the performer, a biography, and their available songs and albums on Xbox Music. The messaging app from Windows 8 has been replaced by Skype, which also allows users to accept calls directly from the lock screen. Windows 8.1 also includes Internet Explorer 11, which adds support for SPDY and WebGL, and expanded developer tools. The Metro-style variant of IE11 also adds tab syncing, the ability to open an unlimited number of tabs, and Reading List integration. Due to Facebook Connect service changes, Facebook support is disabled in all bundled apps effective June 8, 2015.

Security and hardware compatibility

On compatible hardware, Windows 8.1 also features a transparent "device encryption" system based on BitLocker. Encryption begins as soon as a user begins using the system; the recovery key is stored to either the user's Microsoft account or an Active Directory login, allowing it to be retrieved from any computer.
While device encryption is offered on all editions of Windows 8.1 unlike BitLocker (which is exclusive to the Pro and Enterprise editions), device encryption requires that the device meet the Connected Standby specification and have a Trusted Platform Module (TPM) 2.0 chip. Windows 8.1 also introduces improved fingerprint recognition APIs, which allow user login, User Account Control, Windows Store and Windows Store apps to use enrolled fingerprints as an authentication method. A new kiosk mode known as "Assigned Access" was also added, allowing a device to be configured to use a single app in a restricted environment. Additionally, Windows Defender includes an intrusion detection system which can scan network activity for signs of malware. Windows 8.1 also allows third-party VPN clients to automatically trigger connections. For enterprise device management, Windows 8.1 adds support for the Workplace Join feature of Windows Server 2012 R2, which allows users to enroll their own device into corporate networks with finer control over access to resources and security requirements. Windows 8.1 also supports the OMA Device Management specifications. Remote Data Control can be used to remotely wipe specific "corporate" data from Windows 8.1 devices. The 64-bit variants of Windows 8.1 no longer support processors which do not implement the double-width compare and exchange (CMPXCHG16B) CPU instruction (which the installer reports as a lack of support for "CompareExchange128"). A Microsoft spokesperson noted that the change primarily affects systems with older AMD 64-bit processors, and that "the number of affected processors are extremely small since this instruction has been supported for greater than 10 years." It mostly concerns Socket 754 and Socket 939 Athlon 64 processors from 2004 and 2005; Socket AM2 CPUs should all have the instruction.
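The "CompareExchange128" check corresponds to a single CPUID feature flag: CPUID leaf 1 reports CMPXCHG16B support in bit 13 of the ECX register. A minimal sketch of the bit test (the register values below are illustrative; actually executing CPUID requires native code, not Python):

```python
CMPXCHG16B_BIT = 13  # CPUID.01H:ECX bit 13, the "CX16" feature flag

def supports_cmpxchg16b(ecx_leaf1):
    """Given the ECX value returned by CPUID leaf 1, report CX16 support."""
    return bool((ecx_leaf1 >> CMPXCHG16B_BIT) & 1)

# Illustrative register values, not read from real hardware
print(supports_cmpxchg16b(1 << 13))  # bit 13 set → True
print(supports_cmpxchg16b(0))        # bit 13 clear → False
```

Note that a CPU-level flag test alone cannot detect platform limitations outside the processor itself, such as chipset or motherboard restrictions.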
Brad Chacos of PC World also reported a case in which Windows 8.1 rejected an Intel Core 2 Quad Q9300 and a Q9550S despite their support for this instruction, because the associated Intel DP35DP motherboard did not. These changes do not affect the 32-bit variants of Windows 8.1.

Hardware functionality

Windows 8.1 adds support for 3D printing, pairing with printers using NFC tags, Wi-Fi Direct, Miracast media streaming, tethering, and NVMe. In response to the increasing pixel density in displays, Windows 8.1 can scale text and GUI elements up to 200% (whereas Windows 8 supported only 150%) and set scaling settings independently on each display in multi-monitor configurations.

Removed features

Backup and Restore, the backup component of Windows that had been deprecated but was available in Windows 8 through a Control Panel applet called "Windows 7 File Recovery", was removed. Windows 8.1 also removes the graphical user interface for the Windows System Assessment Tool, meaning that the Windows Experience Index is no longer displayed. The command line variant of the tool remains available on the system. Microsoft reportedly removed the graphical Windows Experience Index in order to promote the idea that all kinds of hardware run Windows 8 equally well. Windows 8.1 removed the ability of several Universal Windows Platform apps to act as "hubs" connecting similar services within a single interface:

The Photos app lost the ability to view photos from Facebook, Flickr or SkyDrive (branded as OneDrive since February 2014). Instead, each service provider is expected to create its own app;
The Messaging app, which was interoperable with Windows Live Messenger and Facebook Chat, was deprecated in favor of a Skype app that is not compatible with Facebook Chat;
The Calendar app can only connect to Microsoft services such as Outlook.com and Microsoft Exchange, with support for Google Calendar removed.
Since October 2016, all future patches are cumulative as with Windows 10; individual patches can no longer be downloaded. Users can only upgrade their previous version of Windows to Windows 8.1 using the revised installer introduced in Windows 8: they can no longer use setup.exe in the sources folder to upgrade their system.

Reception

Windows 8.1 received more positive reviews than Windows 8. Tom Warren of The Verge still considered the platform to be a "work in progress" due to the number of apps available, the impaired level of capabilities that apps have in comparison to desktop programs, and because he felt that mouse and keyboard navigation was still "awkward". However, he touted many of the major changes on Windows 8.1, such as the expanded snapping functionality, increased Start screen customization, SkyDrive and Bing integration, improvements to stock apps, and particularly he considered the Mail app to be "lightyears ahead" of the original version from Windows 8. He concluded that "Microsoft has achieved a lot within 12 months, even if a lot of the additions feel like they should have been there from the very start with Windows 8." Joel Hruska of ExtremeTech criticized continuing integration problems between the Desktop and apps on Windows 8.1, pointing out examples such as the Photos app, which "still refuses to acknowledge that users might have previous photo directories", and that the Mail app "still can't talk to the desktop—if you try to send an email from the Desktop without another mail client installed, Windows will tell you there's no mail client capable of performing that action." However, he praised the improvements to other apps, such as People and News (pointing out UI improvements, and the News app using proper links when sharing stories, rather than non-standard links that can only be recognized by the app).
Although praising the more flexible snapping system, he still pointed out flaws, such as an inability to maintain snap configurations in certain situations. Windows 8.1's search functionality was met with mixed reviews; while noting the Bing integration and updated design, the system was panned for arbitrarily leaving out secondary storage devices from the "Everything" mode. Peter Bright of Ars Technica praised many of the improvements on Windows 8.1, such as its more "complete" touch interface, the "reasonable" tutorial content, the new autocomplete tools on the on-screen keyboard, software improvements, and the deep SkyDrive integration. However, he felt that the transition between the desktop and apps "still tends to feel a bit disjointed and disconnected" (even though the option to use the desktop wallpaper on the Start screen made it feel more integrated with the desktop interface rather than dissimilar), and that the restoration of the Start button made the two interfaces feel even more inconsistent because of how differently it operates between the desktop and apps. Certain aspects of Windows 8.1 were also cause for concern because of their privacy implications. In his review of Windows 8.1, Joel Hruska noted that Microsoft had deliberately made it harder for users to create a "Local" account that is not tied to a Microsoft account for syncing, as it "[makes] clear that the company really, really, wants you to share everything you do with it, and that's not something an increasing number of people and businesses are comfortable doing." Woody Leonhard of InfoWorld noted that by default Windows 8.1's "Smart Search" system sends search queries and other information to Microsoft, which could be used for targeted advertising. Leonhard considered this to be ironic, given that Microsoft had criticized Google's use of similar tactics with its "Scroogled" advertising campaign.
Market share

According to Net Applications, the adoption rate in October 2024 for Windows 8.1 was 0.31%. Windows 8.1 reached a peak adoption rate of 13.12% in June 2015, compared with Windows 8's peak adoption rate of 8.02% in September 2013.

See also

Comparison of operating systems
History of operating systems
List of operating systems
Microsoft Windows version history

References

Further reading

External links

Windows 8.1 Update (KB 2919355)
Windows 8.1 update history
Introducing Windows 8.1 for IT Professionals - Technical Overview
Bott, Ed. (2013)

2013 software IA-32 operating systems Windows 8 8.1 X86-64 operating systems Tablet operating systems Products and services discontinued in 2023 Microsoft Windows
Windows 8.1
Technology
5,486
24,214,437
https://en.wikipedia.org/wiki/Gripple
A Gripple wire joiner is a device used to join and tension wire, to terminate and suspend wires and wire ropes, and also to support false ceilings, cable baskets, and similar items. They are manufactured in Sheffield, England by Gripple Ltd. The name derives from the fact that the device both "grips" and "pulls" wire.

History

Wire salesman Hugh Facey invented the original "Gripple" wire tensioner and joiner after a conversation in 1986 with a Welsh farmer. The first Gripple wire joiner was launched in the UK in 1988, and Gripple Ltd was established in 1991.

Description

Wire or wire rope is inserted into a channel in the Gripple wire joiner, where it is gripped by a spring-loaded roller or wedge, and tensioned by being pulled through. The channel is mirrored on the opposite side of the Gripple wire joiner, allowing a second piece of wire to be joined. A Gripple wire joiner with its channel oriented vertically and used with wire rope makes a convenient suspension system capable of holding up substantial loads, and this has given rise to a range of Gripple suspension systems, which are sold to the construction industry worldwide. Thousands of Gripple wire joiners hold together the Great Dingo Fence in Australia, the world's longest fence. The company produces over 30 million Gripple wire joiners per year. These are also used extensively in the sport of orienteering as a form of security for the controls, particularly in urban environments.

Images

See also

List of companies in Sheffield

Notes

References

External links

Official company website

Manufacturing companies of the United Kingdom Wire Fasteners Fences Companies based in Sheffield English inventions Companies established in 1988
Gripple
Engineering
344
43,815,140
https://en.wikipedia.org/wiki/Interceptor%20ditch
In geotechnical engineering, an interceptor ditch is a small ditch or channel constructed to intercept and drain water to an area where it can be safely discharged. Interceptor ditches are used for excavations of limited depth made in coarse-grained soils, and are constructed around the area to be dewatered. Sump pits are also placed at suitable intervals for the installation of centrifugal pumps to remove the collected water in an efficient manner. In fine sands and silts, there may be sloughing, erosion or quick conditions; for such soils the method is confined to a depth of 1 to 2 m. Interceptor ditches are most economical for carrying away water which emerges on the slopes and near the bottom of the foundation pit. The size of a ditch depends on the original ground slope, runoff area, type of soil and vegetation, and other factors related to runoff volume.

Construction guidelines

The interceptor ditch commonly consists of a ditch and may have an associated dike. Sediment control measures may be required to filter or trap sediments before the runoff leaves the construction area. The construction of the interceptor ditch at the crown of a slope is normally accomplished prior to the excavation of the cut section.

Maintenance

Inspection and maintenance are necessary after the completion of construction of any structure. Some steps followed in the maintenance of interceptor ditches are summarized below:

Periodic inspection and maintenance will be required based on post-construction site conditions.
Make any repairs necessary to ensure that the ditch is operating properly.
Locate any damaged areas and repair as necessary.
Remove any channel obstructions (particularly waste materials) which would otherwise obstruct dewatering.

See also

Earthworks (engineering)
Digging

References

Soil mechanics Geotechnical engineering Civil engineering
Interceptor ditch
Physics,Engineering
341
39,292,725
https://en.wikipedia.org/wiki/C18H16O4
The molecular formula C18H16O4 may refer to:

Truxillic acid, several crystalline stereoisomeric cyclic dicarboxylic acids
Truxinic acid, several stereoisomeric cyclic dicarboxylic acids with the formula C18H16O4
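For reference, the molar mass shared by these stereoisomers follows directly from the formula and standard atomic masses; a quick sketch (atomic-mass values rounded to three decimals):

```python
# Standard atomic masses in g/mol (rounded to three decimals)
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(counts):
    """Molar mass of a formula given as an {element: count} mapping."""
    return sum(ATOMIC_MASS[el] * n for el, n in counts.items())

# C18H16O4, the formula shared by the truxillic and truxinic acids
print(round(molar_mass({"C": 18, "H": 16, "O": 4}), 2))  # → 296.32
```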
C18H16O4
Chemistry
67
11,210,719
https://en.wikipedia.org/wiki/Cornish%20hedge
A Cornish hedge is an ancient style of hedge built of stone and earth found in Cornwall, southwest England. Sometimes hedging plants or trees are planted on the hedge to increase its windbreaking height. A rich flora develops over the lifespan of a Cornish hedge. The Cornish hedge contributes to the distinctive field-pattern of the Cornish landscape, and forms the county's largest semi-natural wildlife habitat.

Construction

A Cornish hedge has two sides which are built by placing huge stone blocks into the earth and packing them in with sub-soil. Smaller interlocking rocks are used to build the hedge high until it reaches a level when random turns into neat rows of square stones called "edgers". Two inches of grass are sliced from the ground and stuck on top of the structure with sticks. — Article in The West Briton

The hedge is slightly wider at the bottom than at the top, because of the large "grounder" stones at the base. The structure is very stable and will stand for a hundred years or more. The hedge has two stone faces with soil between the two walls. Bushes such as gorse may grow on the top, rooted in the soil between the walls. It is called a hedge because of its living component. A professional hedger can build about a metre of double-sided hedge in a day. The archaeologist Francis Pryor observes:

History

There are about of hedges in Cornwall today, and their development over the centuries is preserved in their structure. The first Cornish hedges enclosed land for cereal crops during the Neolithic Age (4000–6000 years ago). Prehistoric farms were of about , with fields about for hand cultivation. Some hedges date from the Bronze and Iron Ages, 2000–4000 years ago, when Cornwall's traditional pattern of landscape became established. Others were built during the Mediaeval field rationalisations; more originated in the tin and copper industrial boom of the 18th and 19th centuries, when heaths and uplands were enclosed.
Cornish planning authorities have frequently made it a condition of approval of new developments that the site is bounded by newly made Cornish hedges.

Maintenance

Cornish hedges suffer from the effects of tree roots, burrowing rabbits, rain, wind, farm animals and people. Eventually the hedge sides lose their batter, bulge outwards and stones fall. How often repairs are needed depends on how well the hedge was built, its stone, and what has happened to it since it was last repaired. Typically a hedge needs a cycle of repair every 150 years or so, or less often if it is fenced. Building new hedges, and repairing existing hedges, is a skilled craft, and there are professional hedgers in Cornwall. The Guild of Cornish Hedgers is the main body promoting the understanding of Cornish hedges in Cornwall. Charles, at that time Prince of Wales, visited Boscastle on 15 July 2019 to commemorate the anniversary of the Cornwall AONB and to visit a local Cornish hedge restoration project. ‘Kerdroya: The Cornish Hedge Community Heritage Project’ is being carried out in partnership by Golden Tree Productions and the Cornwall Area of Outstanding Natural Beauty (AONB).

See also

Devon hedge
Bocage

References

Further reading

Balchin, W. G. V. (1954) Cornwall: an illustrated essay on the history of the landscape; chap. 4. London: Hodder and Stoughton

External links

The Guild of Cornish Hedgers

Buildings and structures in Cornwall Agriculture in England Fences Types of wall Garden features Environment of Cornwall Agricultural buildings in Cornwall
Cornish hedge
Engineering
702
68,225,807
https://en.wikipedia.org/wiki/C/2021%20J1%20%28Maury%E2%80%93Attard%29
C/2021 J1 (Maury-Attard) is a Halley-type comet discovered on May 9, 2021, by French amateur astronomers Alain Maury and Georges Attard with the MAP (Maury/Attard/Parrott) observation program. It is the first comet discovered with the synthetic tracking technique, made possible with the Tycho Tracker commercial software developed by Daniel Parrott. When it was discovered, it had a magnitude of 19. It has a 124 day observation arc. It came to perihelion on 19 February 2021. The next perihelion will be in early 2154. References External links Halley-type comets 20210103 Comets in 2021
C/2021 J1 (Maury–Attard)
Astronomy
145
23,534,147
https://en.wikipedia.org/wiki/PSI%20Protein%20Classifier
PSI Protein Classifier is a program that generalizes the results of both successive and independent iterations of the PSI-BLAST program. PSI Protein Classifier determines whether the proteins found by PSI-BLAST belong to known families. The unclassified proteins are grouped according to similarity. PSI Protein Classifier also allows evolutionary distances between families of homologous proteins to be measured by the number of PSI-BLAST iterations.

Sources

D.G. Naumoff and M. Carreras. PSI Protein Classifier: a new program automating PSI-BLAST search results. Molecular Biology (Engl Transl), 2009, 43(4):652-664. PDF

External links

PSI Protein Classifier

Bioinformatics algorithms Phylogenetics software Laboratory software
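The article does not spell out the grouping procedure; purely as an illustration, grouping "according to similarity" can be sketched as single-linkage clustering over a pairwise similarity predicate. The toy predicate and identifiers below are assumptions for demonstration, not Naumoff and Carreras's published method:

```python
def group_by_similarity(items, similar):
    """Single-linkage grouping: an item joins the first group containing
    any member it is similar to; otherwise it starts a new group."""
    groups = []
    for item in items:
        for g in groups:
            if any(similar(item, member) for member in g):
                g.append(item)
                break
        else:
            groups.append([item])
    return groups

# Toy similarity: protein IDs sharing a 3-character prefix (hypothetical)
similar = lambda a, b: a[:3] == b[:3]
print(group_by_similarity(["GH13_a", "GH13_b", "GH31_x"], similar))
# → [['GH13_a', 'GH13_b'], ['GH31_x']]
```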
PSI Protein Classifier
Chemistry,Biology
153
21,850,316
https://en.wikipedia.org/wiki/Mobile%20Display%20Digital%20Interface
Mobile Display Digital Interface (MDDI) is a high-speed digital interface developed by Qualcomm to interconnect the upper and lower clamshell in a flip phone. The MDDI solution supports variable data rates of up to 3.2 Gbit/s, and decreases the number of signals that connect the digital baseband controller with the LCD and camera. The integration of MDDI is said to enable the adoption of advanced features, such as high-definition (QVGA) LCDs and high-resolution megapixel cameras for wireless devices, and supports capabilities such as driving an external display or a video projector from a handset. A Video Electronics Standards Association (VESA) approved standard, the MDDI solution is currently available and integrated into select Qualcomm chipsets. See also List of display interfaces DBI from MIPI DSI from MIPI References External links Qualcomm MDDI page Mobile telecommunications
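To put the 3.2 Gbit/s figure in context, the uncompressed bandwidth a display link must carry can be estimated from resolution, colour depth and refresh rate. A rough sketch, where the 24-bit colour and 60 Hz figures are illustrative assumptions rather than MDDI requirements:

```python
def raw_display_bandwidth_bps(width, height, bits_per_pixel, refresh_hz):
    """Uncompressed pixel bandwidth of a display link, in bits per second."""
    return width * height * bits_per_pixel * refresh_hz

# QVGA (320x240), assuming 24-bit colour at 60 Hz
bps = raw_display_bandwidth_bps(320, 240, 24, 60)
print(f"{bps / 1e6:.1f} Mbit/s")  # → 110.6 Mbit/s, well within 3.2 Gbit/s
```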
Mobile Display Digital Interface
Technology
195
355,011
https://en.wikipedia.org/wiki/Icon%20%28computing%29
In computing, an icon is a pictogram or ideogram displayed on a computer screen in order to help the user navigate a computer system. The icon itself is a quickly comprehensible symbol of a software tool, function, or a data file, accessible on the system and is more like a traffic sign than a detailed illustration of the actual entity it represents. It can serve as an electronic hyperlink or file shortcut to access the program or data. The user can activate an icon using a mouse, pointer, finger, or voice commands. Their placement on the screen, also in relation to other icons, may provide further information to the user about their usage. In activating an icon, the user can move directly into and out of the identified function without knowing anything further about the location or requirements of the file or code. Icons as parts of the graphical user interface of a computer system, in conjunction with windows, menus and a pointing device (mouse), belong to the much larger topic of the history of the graphical user interface that has largely supplanted the text-based interface for casual use.

Overview

The computing definition of "icon" can include three distinct semiotical elements:

Icon, which resembles its referent (such as a road sign for falling rocks). This category includes stylized drawings of objects from the office environment or from other professional areas such as printers, scissors, file cabinets and folders.
Index, which is associated with its referent (smoke is a sign of fire). This category includes stylized drawings used to refer to actions "printer" and "print", "scissors" and "cut" or "magnifying glass" and "search".
Symbol, which is related to its referent only by convention (letters, musical notation, mathematical operators etc.). This category includes standardized symbols found across many electronic devices, such as the power on/off symbol and the USB icon.
The majority of icons are encoded and decoded using metonymy, synecdoche, and metaphor. An example of metaphorical representation characterizes all the major desktop-based computer systems including the desktop that uses an iconic representation of objects from the 1980s office environment to transpose attributes from a familiar context/object to an unfamiliar one. This is known as skeuomorphism, and an example is the use of the floppy disk to represent saving data; even though floppy disks have been obsolete for roughly a quarter century, it is still recognized as "the save icon". Metonymy is in itself a subset of metaphors that use one entity to point to another related to it such as using a fluorescent bulb instead of a filament one to represent power saving settings. Synecdoche is considered as a special case of metonymy, in the usual sense of the part standing for the whole such as a single component for the entire system, speaker driver for the entire audio system settings. Additionally, a group of icons can be categorised as brand icons, used to identify commercial software programs and are related to the brand identity of a company or software. These commercial icons serve as functional links on the system to the program or data files created by a specific software provider. Although icons are usually depicted in graphical user interfaces, icons are sometimes rendered in a TUI using special characters such as MouseText or PETSCII. The design of all computer icons is constricted by the limitations of the device display. They are limited in size, with the standard size of about a thumbnail for both desktop computer systems and mobile devices. They are frequently scalable, as they are displayed in different positions in the software, a single icon file such as the Apple Icon Image format can include multiple versions of the same icon optimized to work at a different size, in colour or grayscale as well as on dark and bright backgrounds. 
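When a multi-resolution icon file is rendered, the system must choose which stored variant to scale. One plausible policy (a sketch of the general idea, not any particular operating system's algorithm) is to prefer the smallest stored size at or above the requested size, since downscaling a larger image preserves more detail than upscaling a smaller one:

```python
def pick_icon_size(available, requested):
    """Choose the stored icon size to scale from: the smallest size at or
    above the request, falling back to the largest when none is big enough."""
    larger = [s for s in sorted(available) if s >= requested]
    return larger[0] if larger else max(available)

sizes = [16, 32, 128, 256, 512]     # a typical size ladder in an icon file
print(pick_icon_size(sizes, 48))    # → 128
print(pick_icon_size(sizes, 1024))  # → 512
```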
The colors used, for both the image and the icon background, should stand out on different system backgrounds and among each other. The detailing of the icon image needs to be simple, remaining recognizable in varying graphical resolutions and screen sizes. Computer icons are by definition language-independent but often not culturally independent; they do not rely on letters or words to convey their meaning. These visual parameters place rigid limits on the design of icons, frequently requiring the skills of a graphic artist in their development. Because of their condensed size and versatility, computer icons have become a mainstay of user interaction with electronic media. Icons also provide rapid entry into the system functionality. On most systems, users can create and delete, replicate, select, click or double-click standard computer icons and drag them to new positions on the screen to create a customized user environment.

Types

Standardized electrical device symbols

A series of recurring computer icons are taken from the broader field of standardized symbols used across a wide range of electrical equipment. Examples of these are the power symbol and the USB icon, which are found on a wide variety of electronic devices. The standardization of electronic icons is an important safety-feature on all types of electronics, enabling a user to more easily navigate an unfamiliar system. As a subset of electronic devices, computer systems and mobile devices use many of the same icons; they are incorporated into the design of both the computer hardware and the software. On the hardware, these icons identify the functionality of specific buttons and plugs. In the software, they provide a link into the customizable settings. System warning icons also belong to the broader area of ISO standard warning signs.
These warning icons, first designed to regulate automobile traffic in the early 1900s, have become standardized and widely understood by users without the necessity of further verbal explanations. In designing software operating systems, different companies have incorporated and defined these standard symbols as part of their graphical user interface. For example, the Microsoft MSDN defines the standard icon use of error, warning, information and question mark icons as part of their software development guidelines. Different organizations are actively involved in standardizing these icons, as well as providing guidelines for their creation and use. The International Electrotechnical Commission (IEC) has defined "Graphical symbols for use on equipment", published as IEC 417, a document which displays IEC standardized icons. Another organization invested in the promotion of effective icon usage is the ICT (information and communications technologies), which has published guidelines for the creation and use of icons. Many of these icons are available on the Internet, either to purchase or as freeware to incorporate into new software.

Metaphorical icons

An icon is a signifier pointing to the signified. Easily comprehendible icons will make use of familiar visual metaphors directly connected to the signified: actions the icon initiates or the content that would be revealed. Metaphors, metonymy and synecdoche are used to encode the meaning in an icon system. The signified can have multiple natures: virtual objects such as files and applications, actions within a system or an application (e.g. snap a picture, delete, rewind, connect/disconnect etc.), actions in the physical world (e.g. print, eject DVD, change volume or brightness etc.) as well as physical objects (e.g. monitor, compact disk, mouse, printer etc.).

The Desktop metaphor

A subgroup of the more visually rich icons is based on objects lifted from a 1970s physical office space and desktop environment.
It includes the basic icons used for a file, file folder, trashcan, inbox, together with the spatial real estate of the screen, i.e. the electronic desktop. This model originally enabled users, familiar with common office practices and functions, to intuitively navigate the computer desktop and system. (Desktop Metaphor, pg 2). The icons stand for objects or functions accessible on the system and enable the user to do tasks common to an office space. These desktop computer icons developed over several decades; data files in the 1950s, the hierarchical storage system (i.e. the file folder and filing cabinet) in the 1960s, and finally the desktop metaphor itself (including the trashcan) in the 1970s. Dr. David Canfield Smith associated the term "icon" with computing in his landmark 1975 PhD thesis "Pygmalion: A Creative Programming Environment". In his work, Dr. Smith envisioned a scenario in which "visual entities", called icons, could execute lines of programming code, and save the operation for later re-execution. Dr. Smith later served as one of the principal designers of the Xerox Star, which became the first commercially available personal computing system based on the desktop metaphor when it was released in 1981. "The icons on [the desktop] are visible concrete embodiments of the corresponding physical objects." The desktop and icons displayed in this first desktop model are easily recognizable by users several decades later, and display the main components of the desktop metaphor GUI. This model of the desktop metaphor has been adopted by most personal computing systems in the last decades of the 20th century; it remains popular as a "simple intuitive navigation by single user on single system." It is only at the beginning of the 21st century that personal computing is evolving a new metaphor based on Internet connectivity and teams of users, cloud computing. 
In this new model, data and tools are no longer stored on a single system; instead they are stored someplace else, "in the cloud". The cloud metaphor is replacing the desktop model; it remains to be seen how many of the common desktop icons (file, file folder, trashcan, inbox, filing cabinet) find a place in this new metaphor. Brand icons for commercial software A further type of computer icon is more related to the brand identity of the software programs available on the computer system. These brand icons are bundled with their product and installed on a system with the software. They function in the same way as the hyperlink icons described above, representing functionality accessible on the system and providing links to either a software program or data file. Beyond this, they act as a company identifier and advertiser for the software or company. Because these company and program logos represent the company and product itself, much attention is given to their design, done frequently by commercial artists. To regulate their use, these brand icons are registered as trademarks and are considered part of the company's intellectual property. In closed systems such as iOS and Android, the use of icons is to a degree regulated or guided to create a sense of consistency in the UI. Overlay icons On some GUI systems (e.g. Windows), a subsystem can add a smaller secondary icon to an icon which represents an object (e.g. a file), laid over the primary icon and usually positioned in one of its corners, to indicate the status of the object represented by the primary icon. For instance, the subsystem for locking files can add a "padlock" overlay icon on an icon which represents a file in order to indicate that the file is locked. Placement and spacing In order to display the number of icons representing the growing complexity offered on a device, different systems have come up with different solutions for screen space management.
The computer monitor continues to display primary icons on the main page or desktop, allowing easy and quick access to the most commonly used functions for a user. This screen space also invites almost immediate user customization, as the user adds favourite icons to the screen and groups related icons together. Secondary icons of system programs are also displayed on the task bar or the system dock. These secondary icons do not provide a link like the primary icons; instead, they are used to show the availability of a tool or file on the system. Spatial management techniques play a bigger role in mobile devices with their much smaller screen real estate. In response, mobile devices have introduced, among other visual devices, scrolling screen displays and selectable tabs displaying groups of related icons. Even with these evolving display systems, the icons themselves remain relatively constant in both appearance and function. Above all, the icon itself must remain clearly identifiable on the display screen regardless of its position and size. Programs might display their icon not only as a desktop hyperlink, but also in the program title bar, on the Start menu, in the Microsoft tray or the Apple dock. In each of these locations, the primary purpose is to identify and advertise the program and functionality available. This need for recognition in turn sets specific design restrictions on effective computer icons. Design In order to maintain consistency in the look of a device, OS manufacturers offer detailed guidelines for the development and use of icons on their systems. This is true for both standard system icons and third party application icons to be included in the system. The system icons currently in use have typically gone through widespread international acceptance and understandability testing. Icon design factors have also been the topic of extensive usability studies.
The design itself involves a high level of skill in combining an attractive graphic design with the required usability features. Shape The icon needs to be clear and easily recognizable, able to display on monitors of widely varying size and resolutions. Its shape should be simple with clean lines, without too much detailing in the design. Together with the other design details, the shape also needs to make it unique on the display and clearly distinguishable from other icons. Color The icon needs to be colorful enough to easily pick out on the display screen, and contrast well with any background. With the increasing ability to customize the desktop, it is important for the icon itself to display in a standard color which cannot be modified, retaining its characteristic appearance for immediate recognition by the user. Through color it should also provide some visual indicator as to the icon state; activated, available or currently not accessible ("greyed out"). Size and scalability The standard icon is generally the size of an adult thumb, enabling both easy visual recognition and use in a touchscreen device. For individual devices the display size correlates directly to the size of the screen real estate and the resolution of the display. Because they are used in multiple locations on the screen, the design must remain recognizable at the smallest size, for use in a directory tree or title bar, while retaining an attractive shape in the larger sizes. In addition to scaling, it may be necessary to remove visual details or simplify the subject between discrete sizes. Larger icons serve also as part of the accessibility features for the visually impaired on many computer systems. The width and height of the icon are the same (1:1 aspect ratio) in almost all areas of traditional use. Motion Icons can also be augmented with iconographic motion - geometric manipulations applied to a graphical element over time, for example, a scale, rotation, or other deformation. 
One example is when application icons "wobble" in iOS to convey to the user they are able to be repositioned by being dragged. This is different from an icon with animated graphics, such as a Throbber. In contrast to static icons and icons with animated graphics, kinetic behaviors do not alter the visual content of an element (whereas fades, blurs, tints, and addition of new graphics, such as badges, exclusively alter an icon's pixels). Stated differently, pixels in an icon can be moved, rotated, stretched, and so on - but not altered or added to. Research has shown iconographic motion can act as a powerful and reliable visual cue, a critical property for icons to embody. Localization In its primary function as a symbolic image, the icon design should ideally be divorced from any single language. For products which are targeting the international marketplace, the primary design consideration is that the icon is non-verbal; localizing text in icons is costly and time-consuming. Cultural context Beyond text, there are other design elements which can be dependent upon the cultural context for interpretation. These include color, numbers, symbols, body parts and hand gestures. Each of these elements needs to be evaluated for their meaning and relevance across all markets targeted by the product. Related visual tools Other graphical devices used in the computer user interface fulfill GUI functions on the system similar to the computer icons described above. However each of these related graphical devices differs in one way or another from the standard computer icon. Windows The graphical windows on the computer screen share some of the visual and functional characteristics of the computer icon. Windows can be minimized to an icon format to serve as a hyperlink to the window itself. Multiple windows can be open and even overlapping on the screen. 
However, where the icon provides a single button to initiate some function, the principal function of the window is a workspace, which can be minimized to an icon hyperlink when not in use. Control widgets Over time, certain GUI widgets have gradually appeared which are useful in many contexts. These are graphical controls which are used across computer systems and can be intuitively manipulated by the user even in a new context because the user recognises them from having seen them in a more familiar context. Examples of these control widgets are scroll bars, sliders, listboxes and buttons used in many programs. Using these widgets, a user is able to define and manipulate the data and the display for the software program they are working with. The first set of computer widgets was originally developed for the Xerox Alto. Now they are commonly bundled in widget toolkits and distributed as part of a development package. These control widgets are standardized pictograms used in the graphical interface; they offer an expanded set of user functionalities beyond the hyperlink function of computer icons. Emoticons Another GUI icon is exemplified by the smiley face, a pictogram embedded in a text message. The smiley, and by extension other emoticons, are used in computer text to convey information in a non-verbal binary shorthand, frequently involving the emotional context of the message. These icons were first developed for computers in the 1980s as a response to the limited storage and transmission bandwidth used in electronic messaging. Since then they have become both abundant and more sophisticated in their keyboard representations of varying emotions. They have developed from keyboard character combinations into real icons. They are widely used in all forms of electronic communications, always with the goal of adding context to the verbal content of the message.
In adding an emotional overlay to the text, they have also enabled electronic messages to substitute for and frequently supplant voice-to-voice messaging. These emoticons are very different from the icon hyperlinks described above. They do not serve as links, and are not part of any system function or computer software. Instead they are part of the communication language of users across systems. For these computer icons, customization and modifications are not only possible but in fact expected of the user. Hyperlinks A text hyperlink performs much the same function as the functional computer icon: it provides a direct link to some function or data available on the system. Although they can be customized, these text hyperlinks generally share a standardized recognizable format: blue text with underlining. Hyperlinks differ from functional computer icons in that they are normally embedded in text, whereas icons are displayed as stand-alone on the screen real estate. They are also displayed in text, either as the link itself or a friendly name, whereas icons are defined as being primarily non-textual. Icon creation Because of the design requirements, icon creation can be a time-consuming and costly process. A plethora of icon creation tools can be found on the Internet, ranging from professional level tools through utilities bundled with software development programs to stand-alone freeware. Given this wide availability of icon tools and icon sets, a problem can arise with custom icons which are mismatched in style to the other icons included on the system. Tools Icons underwent a change in appearance from the early 8-bit pixel art used pre-2000 to a more photorealistic appearance featuring effects such as softening, sharpening, edge enhancement, a glossy or glass-like appearance, or drop shadows which are rendered with an alpha channel.
Icon editors used on these early platforms usually contain a rudimentary raster image editor capable of modifying images of an icon pixel by pixel, by using simple drawing tools, or by applying simple image filters. Professional icon designers seldom modify icons inside an icon editor and use a more advanced drawing or 3D modeling application instead. The main function performed by an icon editor is generation of icons from images. An icon editor resamples a source image to the resolution and color depth required for an icon. Other functions performed by icon editors are icon extraction from executable files (exe, dll), creation of icon libraries, or saving individual images of an icon. All icon editors can make icons for system files (folders, text files, etc.), and for web pages. These have a file extension of .ICO for Windows and web pages or .ICNS for the Macintosh. If the editor can also make a cursor, the image can be saved with a file extension of .CUR or .ANI for both Windows and the Macintosh. Using a new icon is simply a matter of moving the image into the correct file folder and using the system tools to select the icon. In Windows XP you could go to My Computer, open Tools on the explorer window, choose Folder Options, then File Types, select a file type, click on Advanced and select an icon to be associated with that file type. Developers also use icon editors to make icons for specific program files. Assignment of an icon to a newly created program is usually done within the Integrated Development Environment used to develop that program. However, if one is creating an application in the Windows API he or she can simply add a line to the program's resource script before compilation. Many icon editors can copy a unique icon from a program file for editing. Only a few can assign an icon to a program file, a much more difficult task. Simple icon editors and image-to-icon converters are also available online as web applications. 
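The container-building step an icon editor performs can be illustrated at the byte level. The sketch below is an illustration of the .ICO file layout only, not any particular editor's implementation: it generates a tiny solid-colour PNG from scratch and wraps it in a single-image icon container (a 6-byte ICONDIR header followed by one 16-byte directory entry and the image data).

```python
import struct
import zlib

def png_chunk(tag, payload):
    # A PNG chunk: big-endian length, 4-byte tag, payload, CRC over tag+payload.
    return (struct.pack(">I", len(payload)) + tag + payload
            + struct.pack(">I", zlib.crc32(tag + payload) & 0xFFFFFFFF))

def tiny_png(size=16, rgba=(255, 0, 0, 255)):
    """Build a minimal solid-colour 8-bit RGBA PNG of side `size`."""
    ihdr = struct.pack(">IIBBBBB", size, size, 8, 6, 0, 0, 0)  # colour type 6 = RGBA
    row = b"\x00" + bytes(rgba) * size       # filter byte 0, then the pixel row
    idat = zlib.compress(row * size)
    return (b"\x89PNG\r\n\x1a\n"
            + png_chunk(b"IHDR", ihdr)
            + png_chunk(b"IDAT", idat)
            + png_chunk(b"IEND", b""))

def make_ico(png_bytes, size=16):
    """Wrap one PNG image in a .ICO container (PNG payloads are valid from Vista on)."""
    header = struct.pack("<HHH", 0, 1, 1)    # reserved, type 1 = icon, image count 1
    offset = 6 + 16                          # ICONDIR plus one ICONDIRENTRY
    entry = struct.pack("<BBBBHHII", size, size, 0, 0, 1, 32,
                        len(png_bytes), offset)
    return header + entry + png_bytes
```

Real editors do considerably more (multiple sizes and colour depths per file, BMP-style payloads for older Windows versions), but the directory-plus-images structure is the same.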
List of tools This is a list of notable computer icon software. Axialis IconWorkshop – Supports both Windows and Mac icons. (Commercial, Windows) IcoFX – Icon editor supporting Windows Vista and Macintosh icons with PNG compression (Commercial, Windows) IconBuilder – Plug-in for Photoshop; focused on Mac. (Commercial, Windows/Mac) Microangelo Toolset – a set of tools (Studio, Explorer, Librarian, Animator, On Display) for editing Windows icons and cursors. (Commercial, Windows) Microsoft Visual Studio - can author ICO/CUR files but cannot edit 32-bit icon frames with 8-bit transparency. (Commercial, Windows) The following is a list of raster graphic applications capable of creating and editing icons: GIMP – Image Editor Supports reading and writing Windows ICO/CUR/ANI files and PNG files that can be converted to Mac .icns files. (Open Source, Free Software, Multi-Platform) ImageMagick and GraphicsMagick – Command Line image conversion & generation that can be used to create Windows ICO files and PNG files that can be converted to Mac .ICNS files. (Open Source, Free Software, Multi-Platform) IrfanView – Support converting graphic file formats into Windows ICO files. (Proprietary, free for non-commercial use, Windows) ResEdit – Supports creating classic Mac OS icon resources. (Proprietary, Discontinued, Classic Mac OS) See also Apple Icon Image format Distinguishable interfaces Favicon Font Awesome ICO (file format) Icon design Iconfinder Resource (Windows) Semasiography The Noun Project Unicode symbols WIMP (computing) XPM References Further reading Wolf, Alecia. 2000. "Emotional Expression Online: Gender Differences in Emoticon Katz, James E., editor (2008). Handbook of Mobile Communication Studies. MIT Press, Cambridge, Massachusetts. Levine, Philip and Scollon, Ron, editors (2004). Discourse & Technology: Multimodal Discourse Analysis. Georgetown University Press, Washington, D.C. Abdullah, Rayan and Huebner, Roger (2006). 
Pictograms, Icons and Signs: A Guide to Information Graphics. Thames & Hudson, London. Handa, Carolyn (2004). Visual Rhetoric in a Digital World: A Critical Sourcebook. Bedford / St. Martins, Boston. Zenon W. Pylyshyn and Liam J. Bannon (1989). Perspectives on the Computer Revolution. Ablex, New York. External links Graphical user interface elements Pictograms
Icon (computing)
Mathematics,Technology
5,025
39,747,568
https://en.wikipedia.org/wiki/Kirjava%20%22Puolue%22%20%E2%80%93%20Elonkeh%C3%A4n%20Puolesta
Kirjava "Puolue" – Elonkehän Puolesta (KiPu) was a Finnish political party founded in 1988, best known for its alliance with Pertti "Veltto" Virtanen. It was a faction of the Green movement, which is now represented by the Green League in parliament. Virtanen went on to change allegiance to the Finns Party and was re-elected for two terms in 2007 and 2011. History The original name from 1992 was Vihreät (The Greens); it then changed to Ekologinen puolue Vihreät (Ecological Party the Greens) and became Kirjava Puolue in 1998. The only MP the party had was Virtanen, who served between 1995 and 1999. In 2003, the party was removed from the party register after failing to gain MPs in two consecutive elections. The party advocated degrowth and rejection of Finnish membership in global economic organizations, namely the EU, WTO, GATT, IMF and World Bank. It also opposed new construction, wanted to reduce energy consumption and limit population growth. Instead, it proposed that organic farming should be the main livelihood. References Defunct political parties in Finland Defunct green political parties Green parties in Europe Degrowth 1988 establishments in Finland Political parties with year of disestablishment missing Political parties established in 1988
Kirjava "Puolue" – Elonkehän Puolesta
Environmental_science
279
8,364,462
https://en.wikipedia.org/wiki/Energetic%20space
In mathematics, more precisely in functional analysis, an energetic space is, intuitively, a subspace of a given real Hilbert space equipped with a new "energetic" inner product. The motivation for the name comes from physics, as in many physical problems the energy of a system can be expressed in terms of the energetic inner product. An example of this will be given later in the article. Energetic space Formally, consider a real Hilbert space $X$ with the inner product $(\cdot|\cdot)$ and the norm $\|\cdot\|$. Let $Y$ be a linear subspace of $X$ and $B: Y \to X$ be a strongly monotone symmetric linear operator, that is, a linear operator satisfying $(Bu|v) = (u|Bv)$ for all $u, v$ in $Y$, and $(Bu|u) \ge c\,\|u\|^2$ for some constant $c > 0$ and all $u$ in $Y$. The energetic inner product is defined as $(u|v)_E = (Bu|v)$ for all $u, v$ in $Y$, and the energetic norm is $\|u\|_E = (u|u)_E^{1/2}$ for all $u$ in $Y$. The set $Y$ together with the energetic inner product is a pre-Hilbert space. The energetic space $X_E$ is defined as the completion of $Y$ in the energetic norm. $X_E$ can be considered a subset of the original Hilbert space $X$, since any Cauchy sequence in the energetic norm is also Cauchy in the norm of $X$ (this follows from the strong monotonicity property of $B$). The energetic inner product is extended from $Y$ to $X_E$ by $(u|v)_E = \lim_{n\to\infty} (u_n|v_n)_E$, where $(u_n)$ and $(v_n)$ are sequences in Y that converge to points in $X_E$ in the energetic norm. Energetic extension The operator $B$ admits an energetic extension $B_E$ defined on $X_E$ with values in the dual space $X_E^*$ that is given by the formula $\langle B_E u \mid v \rangle = (u|v)_E$ for all $u, v$ in $X_E$. Here, $\langle \cdot \mid \cdot \rangle$ denotes the duality bracket between $X_E^*$ and $X_E$, so $\langle B_E u \mid v \rangle$ actually denotes $(B_E u)(v)$. If $u$ and $v$ are elements in the original subspace $Y$, then $\langle B_E u \mid v \rangle = (Bu|v)$ by the definition of the energetic inner product. If one views $Bu$, which is an element in $X$, as an element in the dual $X^*$ via the Riesz representation theorem, then $Bu$ will also be in the dual $X_E^*$ (by the strong monotonicity property of $B$).
Via these identifications, it follows from the above formula that $B_E u = Bu$ for all $u$ in $Y$. In different words, the original operator $B: Y \to X$ can be viewed as an operator $B: Y \to X_E^*$, and then $B_E: X_E \to X_E^*$ is simply the function extension of $B$ from $Y$ to $X_E$. An example from physics Consider a string whose endpoints are fixed at two points $a < b$ on the real line (here viewed as a horizontal line). Let the vertical outer force density at each point $x$ ($a \le x \le b$) on the string be $f(x)\,e$, where $e$ is a unit vector pointing vertically and $f: [a, b] \to \mathbb{R}$. Let $u(x)$ be the deflection of the string at the point $x$ under the influence of the force. Assuming that the deflection is small, the elastic energy of the string is $\frac{1}{2}\int_a^b u'(x)^2\,dx$, and the total potential energy of the string is $F(u) = \frac{1}{2}\int_a^b u'(x)^2\,dx - \int_a^b u(x) f(x)\,dx$. The deflection $u(x)$ minimizing the potential energy will satisfy the differential equation $-u'' = f$ with boundary conditions $u(a) = u(b) = 0$. To study this equation, consider the space $X = L^2(a, b)$, that is, the Lp space of all square-integrable functions $u: [a, b] \to \mathbb{R}$ in respect to the Lebesgue measure. This space is Hilbert in respect to the inner product $(u|v) = \int_a^b u(x) v(x)\,dx$, with the norm being given by $\|u\| = \sqrt{(u|u)}$. Let $Y$ be the set of all twice continuously differentiable functions $u: [a, b] \to \mathbb{R}$ with the boundary conditions $u(a) = u(b) = 0$. Then $Y$ is a linear subspace of $X$. Consider the operator $B: Y \to X$ given by the formula $Bu = -u''$, so the deflection satisfies the equation $Bu = f$. Using integration by parts and the boundary conditions, one can see that $(Bu|v) = -\int_a^b u''(x) v(x)\,dx = \int_a^b u'(x) v'(x)\,dx = (u|Bv)$ for any $u$ and $v$ in $Y$. Therefore, $B$ is a symmetric linear operator. $B$ is also strongly monotone, since, by the Friedrichs's inequality, $\|u\|^2 \le C \int_a^b u'(x)^2\,dx = C\,(Bu|u)$ for some $C > 0$. The energetic space in respect to the operator $B$ is then the Sobolev space $H_0^1(a, b)$. We see that the elastic energy of the string which motivated this study is $\frac{1}{2}\int_a^b u'(x)^2\,dx = \frac{1}{2}(u|u)_E$, so it is half of the energetic inner product of $u$ with itself. To calculate the deflection $u$ minimizing the total potential energy of the string, one writes this problem in the form $(u|v)_E = (f|v)$ for all $v$ in $X_E$. Next, one usually approximates $u$ by some $u_h$, a function in a finite-dimensional subspace of the true solution space. For example, one might let $u_h$ be a continuous piecewise linear function in the energetic space, which gives the finite element method.
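This last step can be made concrete with a minimal sketch (hypothetical function name, with the constant load f = 1 as the example). For continuous piecewise-linear elements on a uniform mesh over (0, 1) with zero boundary values, the stiffness matrix of the hat basis is tridiagonal, so the linear system can be solved directly with the Thomas algorithm:

```python
def fem_poisson_1d(f, n):
    """Piecewise-linear FEM for -u'' = f on (0,1) with u(0) = u(1) = 0,
    using n interior nodes on a uniform mesh. Returns the nodal values."""
    h = 1.0 / (n + 1)
    diag = [2.0 / h] * n          # stiffness matrix diagonal
    off = -1.0 / h                # constant sub/super-diagonal
    # Load vector via lumped quadrature: b_i ~ h * f(x_i)
    b = [h * f((i + 1) * h) for i in range(n)]
    # Thomas algorithm (forward sweep)
    c = [0.0] * n                 # modified superdiagonal
    d = [0.0] * n                 # modified right-hand side
    c[0] = off / diag[0]
    d[0] = b[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - off * c[i - 1]
        c[i] = off / m
        d[i] = (b[i] - off * d[i - 1]) / m
    # Back substitution
    u = [0.0] * n
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return u
```

For f = 1 the exact deflection is u(x) = x(1 - x)/2, and this scheme reproduces it exactly at the mesh nodes, so the sketch is easy to check against the closed-form solution.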
The approximation $u_h$ can be computed by solving a system of linear equations. The energetic norm turns out to be the natural norm in which to measure the error between $u$ and $u_h$; see Céa's lemma. See also Inner product space Positive-definite kernel References Functional analysis Hilbert spaces
Energetic space
Physics,Mathematics
788
617,193
https://en.wikipedia.org/wiki/God%20Bless%20You%2C%20Dr.%20Kevorkian
God Bless You, Dr. Kevorkian, by Kurt Vonnegut, is a collection of short fictional interviews written by Vonnegut and first broadcast on WNYC. The title parodies that of Vonnegut's 1965 novel God Bless You, Mr. Rosewater. It was published in book form in 1999. Synopsis The premise of the collection is that Vonnegut employs Dr. Jack Kevorkian to give him near-death experiences, allowing Vonnegut access to heaven and those in it for a limited time. While in the afterlife Vonnegut interviews a range of people including Adolf Hitler, William Shakespeare, Eugene V. Debs, Isaac Asimov, Isaac Newton and the ever-present Kilgore Trout (a fictional character created by Vonnegut in his earlier works). Resources The book's page in the website of Seven Stories Press Many of the original WNYC radio reports forming the basis of the book References 1999 short story collections Bangsian fantasy Books by Kurt Vonnegut Fiction about the afterlife Seven Stories Press books Cultural depictions of physicians Cultural depictions of Adolf Hitler Cultural depictions of writers Cultural depictions of William Shakespeare Cultural depictions of Isaac Newton Isaac Asimov
God Bless You, Dr. Kevorkian
Astronomy
244
45,689,762
https://en.wikipedia.org/wiki/Interpersonal%20Reactivity%20Index
The Interpersonal Reactivity Index (IRI) is a published measurement tool for the multi-dimensional assessment of empathy. It was developed by Mark H. Davis, a professor of psychology at Eckerd College. The paper describing the IRI, published in 1983, has been cited over 10,000 times, according to Google Scholar. The IRI is a self-report measure comprising 28 items answered on a 5-point Likert scale ranging from "Does not describe me well" to "Describes me very well". The four subscales are: Perspective Taking – the tendency to spontaneously adopt the psychological point of view of others. Fantasy – taps respondents' tendencies to transpose themselves imaginatively into the feelings and actions of fictitious characters in books, movies, and plays. Empathic Concern – assesses "other-oriented" feelings of sympathy and concern for unfortunate others. Personal Distress – measures "self-oriented" feelings of personal anxiety and unease in tense interpersonal settings. Example questions: 11. I sometimes try to understand my friends better by imagining how things look from their perspective. 28. Before criticizing somebody, I try to imagine how I would feel if I were in their place. Versatility and Adaptability A study by De Corte et al. (2007) translated the IRI into Dutch. The researchers found that their translation is just as valid and reliable as Davis's original version, albeit in their educated, still Westernized sample. Another study by Péloquin and Lafontaine (2010) adapted the IRI to specifically measure empathy in couples rather than individuals. This was achieved by rewording some phrases used in the original, for example replacing references to "people" and "somebody" with "my partner," etc. Several couples were also asked to return after twelve months to be re-evaluated. This new version still adequately measured empathy as well as demonstrated predictive validity in the returning couples, correlating relationship satisfaction and each partner's empathy.
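The four-subscale structure described above can be sketched as a simple scoring routine. Note that the item-to-subscale assignment and the reverse-keyed set below are hypothetical placeholders chosen for illustration only; the published instrument defines the actual keying, which should be taken from the original paper.

```python
# Hypothetical item-to-subscale assignment, for illustration only.
SUBSCALES = {
    "perspective_taking": [3, 8, 11, 15, 21, 25, 28],
    "fantasy":            [1, 5, 7, 12, 16, 23, 26],
    "empathic_concern":   [2, 4, 9, 14, 18, 20, 22],
    "personal_distress":  [6, 10, 13, 17, 19, 24, 27],
}
REVERSED = {3, 4, 7, 12, 13, 14, 15, 18, 19}   # hypothetical reverse-keyed items

def score_iri(responses):
    """responses: dict mapping item number (1-28) to a rating on a 0-4 scale.
    Returns each subscale score as the sum of its (reverse-keyed where
    marked) 7 items."""
    def keyed(item):
        rating = responses[item]
        return 4 - rating if item in REVERSED else rating
    return {name: sum(keyed(i) for i in items)
            for name, items in SUBSCALES.items()}
```

Because each subscale sums seven items, translated or adapted versions like those discussed here can be scored the same way once their item keying is fixed.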
Garcia-Barrera, Karr, Trujillo-Orrego, Trujillo-Orrego, and Pineda (2017) translated and modified the IRI into a Colombian Spanish version. This version was used to measure empathy in Colombian militants returning to society after having seen combat. The study encountered more difficulty in obtaining valid and reliable findings than previous studies. They attributed this difficulty largely to the lack of education of the participants, which resulted in the introspective and abstract items of the IRI being difficult to understand. References External links "Interpersonal Reactivity Index" on Eckerd College website Mark H. Davis web page on Eckerd College website Personality tests Interpersonal relationships
Interpersonal Reactivity Index
Biology
545
44,441,142
https://en.wikipedia.org/wiki/Robert%20R.%20Squires
Robert Reed Squires (January 11, 1953 – September 30, 1998) was an American chemist known for his work in gas phase ion chemistry and flowing afterglow mass spectrometry. Early life and education Squires was born in Northern California and grew up in Los Angeles. He received an A.A. degree at El Camino College in 1973 and then returned to Northern California where he received a B.A. at California State University, Chico. He then went on to Yale University where he worked in the laboratory of Kenneth B. Wiberg on the thermochemistry of organic compounds. He received his M.Phil. degree in 1977 and a Ph.D. in 1980. He took a postdoctoral position with Charles DePuy and Veronica Bierbaum at the University of Colorado, Boulder where he studied the reactions of gas-phase ions using the flowing afterglow technique. Academic career Squires took a position as an assistant professor at Purdue University in 1981 where he constructed two unique mass spectrometers: a flowing afterglow triple quadrupole mass spectrometer and a flowing afterglow selected ion flow tube triple quadrupole mass spectrometer. In 1986, he was promoted to Associate Professor and in 1990 to Professor. Awards Among his awards were an Alfred P. Sloan Foundation Fellowship in 1987, the American Chemical Society Nobel Laureate Signature Award for Graduate Education in Chemistry (with Susan Graul) in 1991, and the American Society for Mass Spectrometry Biemann Medal in 1998. The Purdue University Department of Chemistry Robert R. Squires Scholarship was established in his honor. References 1953 births 1998 deaths 20th-century American chemists Mass spectrometrists
Robert R. Squires
Physics,Chemistry
347
49,635
https://en.wikipedia.org/wiki/SEX%20%28computing%29
In computing, the SEX assembly language mnemonic has often been used for the "Sign EXtend" machine instruction found in the Motorola 6809. A computer's or CPU's "sex" can also mean the endianness of the computer architecture used. x86 computers do not have the same "byte sex" as HC11 computers, for example. Functions are sometimes needed for computers of different endianness to communicate with each other over the internet, as protocols often use big endian byte coding by default. On the RCA 1802 series of microprocessors, the SEX, for "SEt X," instruction is used to designate which of the machine's sixteen 16-bit registers is to be the X (index) register. SEX in software: rarely used jargon The TLA SEX has humorously been said to stand for Software EXchange, meaning copying of software. As file sharing has sometimes spread computer viruses, it has been stated that “illicit SEX can transmit viral diseases to your computer.” The involvement of FTP servers' /pub directories in this process has led to the name being explained as a contraction of 'pubic'. References Machine code Computer jargon
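The byte-order conversion mentioned above can be sketched in Python, whose struct module makes the byte sex explicit: the ">" prefix requests big-endian (network) order regardless of the host machine, and sys.byteorder reports the host's own byte sex.

```python
import struct
import sys

def to_network(value):
    """Pack a 32-bit unsigned integer in big-endian (network) byte order,
    independent of the host machine's byte sex."""
    return struct.pack(">I", value)

def from_network(data):
    """Unpack a big-endian 32-bit unsigned integer."""
    return struct.unpack(">I", data)[0]

# sys.byteorder is "little" on x86, "big" on big-endian architectures.
host_sex = sys.byteorder
```

This is the same idea behind the classic htonl/ntohl pair in C: data crossing the wire is normalized to one agreed byte order so machines of either sex can interoperate.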
SEX (computing)
Technology
244
77,834,346
https://en.wikipedia.org/wiki/Zandelisib
Zandelisib is an investigational new drug that is being evaluated to treat follicular lymphoma. It is a phosphatidylinositol 3 kinase delta inhibitor. References Antineoplastic drugs Amines Benzimidazoles Morpholines Piperidines Triazines Difluoromethyl compounds
Zandelisib
Chemistry
72
11,243,697
https://en.wikipedia.org/wiki/Suits%20index
The Suits index of a public policy is a measure of tax progressiveness, named for economist Daniel B. Suits. Similar to the Gini coefficient, the Suits index is calculated by comparing the area under the Lorenz curve to the area under a proportional line. For a progressive tax (for example, where higher income tax units pay a greater fraction of their income as tax), the Suits index is positive. A proportional tax (for example, where each unit pays an equal fraction of income) has a Suits index of zero, and a regressive tax (for example, where lower income tax units pay a greater fraction of income in tax) has a negative Suits index. A theoretical tax where the richest person pays all the tax has a Suits index of 1, and a tax where the poorest person pays everything has a Suits index of −1. Tax preferences (credits and deductions) also have a Suits index. Types of tax Income tax By definition, a flat income tax has a Suits index of zero. However, almost all income tax systems allow for some amount of income to be earned without tax (an exemption amount) to avoid collecting tax from very low income units. Also, most income tax systems provide for higher marginal tax rates at higher income. These effects combine to make income taxes generally progressive, and therefore have a positive Suits index. Sales tax Sales taxes are generally charged on each purchase, with no low income exemption. Additionally, lower income tax units generally spend a greater proportion of income on taxable purchases, while higher income units will save or invest a larger part of income. Therefore, sales taxes are generally regressive, and have a negative Suits index. Excise taxes Excise taxes are typically charged on items like gasoline, alcohol or tobacco products. Since the tax rate is typically high, and there is a practical limit to the amount of product that can be consumed, this tax is generally more regressive and has a very negative Suits index. 
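The definition above, comparing the accumulated tax curve against the proportional line, can be sketched numerically. The function below (a hypothetical name; a trapezoidal approximation over discrete tax units sorted by income) computes S = 1 - 2A, where A is the area under the tax concentration curve T(y) plotted against the cumulative income share y.

```python
def suits_index(incomes, taxes):
    """Approximate the Suits index from per-unit incomes and taxes.
    Units are sorted by income; the area under the tax concentration
    curve T(y) is accumulated by the trapezoid rule."""
    pairs = sorted(zip(incomes, taxes))
    total_income = sum(i for i, _ in pairs)
    total_tax = sum(t for _, t in pairs)
    cum_i = cum_t = 0.0          # cumulative income / tax shares
    y_prev = t_prev = 0.0
    area = 0.0
    for inc, tax in pairs:
        cum_i += inc / total_income
        cum_t += tax / total_tax
        # trapezoid between successive points of the concentration curve
        area += (cum_i - y_prev) * (cum_t + t_prev) / 2.0
        y_prev, t_prev = cum_i, cum_t
    return 1.0 - 2.0 * area
```

With this sketch, a proportional tax (T(y) = y) comes out at zero, a progressive schedule positive, and a regressive one negative, matching the sign conventions stated above.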
Properties The Suits index has the useful property that the total Suits index of a group of taxes or policies is the revenue-weighted sum of the individual indexes. The Suits index is also related closely to the Gini coefficient. While a Gini coefficient of zero means that everyone receives the same income or benefit as a per capita value, a Suits index of zero means that each person pays the same tax as a percentage of income. Additionally, a poll tax has a Suits index equal to the negative of the Gini coefficient for the same group. Examples References Welfare economics Economic indicators Fiscal policy Index numbers
Suits index
Mathematics
519
53,830,256
https://en.wikipedia.org/wiki/Methane%20emissions
Increasing methane emissions are a major contributor to the rising concentration of greenhouse gases in Earth's atmosphere, and are responsible for up to one-third of near-term global heating. During 2019, about 60% (360 million tons) of methane released globally was from human activities, while natural sources contributed about 40% (230 million tons). Reducing methane emissions by capturing and utilizing the gas can produce simultaneous environmental and economic benefits. Since the Industrial Revolution, concentrations of methane in the atmosphere have more than doubled, and about 20 percent of the warming the planet has experienced can be attributed to the gas. About one-third (33%) of anthropogenic emissions are from gas release during the extraction and delivery of fossil fuels, mostly due to gas venting and gas leaks from both active fossil fuel infrastructure and orphan wells. Russia is the world's top methane emitter from oil and gas. Animal agriculture is a similarly large source (30%), primarily because of enteric fermentation by ruminant livestock such as cattle and sheep. According to the Global Methane Assessment published in 2021, methane emissions from livestock (including cattle) are the largest sources of agricultural emissions worldwide. A single cow can produce up to 99 kg of methane gas per year. Ruminant livestock can produce 250 to 500 L of methane per day. Human consumer waste flows, especially those passing through landfills and wastewater treatment, have grown to become a third major category (18%). Plant agriculture, including both food and biomass production, constitutes a fourth group (15%), with rice production being the largest single contributor. The world's wetlands contribute about three-quarters (75%) of the enduring natural sources of methane. Seepages from near-surface hydrocarbon and clathrate hydrate deposits, volcanic releases, wildfires, and termite emissions account for much of the remainder.
Contributions from the surviving wild populations of ruminant mammals are vastly overwhelmed by those of cattle, humans, and other livestock animals. The Economist recommended setting methane emissions targets, as a reduction in methane emissions would allow more time to tackle the more challenging carbon emissions. Atmospheric concentration and warming influence The atmospheric methane (CH4) concentration is increasing and exceeded 1860 parts per billion in 2019, equal to two-and-a-half times the pre-industrial level. The methane itself causes direct radiative forcing that is second only to that of carbon dioxide (CO2). Due to interactions with oxygen compounds stimulated by sunlight, CH4 can also increase the atmospheric presence of shorter-lived ozone and water vapour, themselves potent warming gases: atmospheric researchers call this amplification of methane's near-term warming influence indirect radiative forcing. When such interactions occur, longer-lived and less-potent CO2 is also produced. Including both the direct and indirect forcings, the increase in atmospheric methane is responsible for about one-third of near-term global heating. Though methane causes far more heat to be trapped than the same mass of carbon dioxide, less than half of the emitted CH4 remains in the atmosphere after a decade. On average, carbon dioxide warms for much longer, assuming no change in rates of carbon sequestration. The global warming potential (GWP) is a way of comparing the warming due to other gases to that from carbon dioxide, over a given time period. Methane's GWP20 of 85 means that a ton of CH4 emitted into the atmosphere creates approximately 85 times the atmospheric warming of a ton of CO2 over a period of 20 years. On a 100-year timescale, methane's GWP100 is in the range of 28–34. Methane emissions are important as reducing them can buy time to tackle carbon emissions.
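The GWP figures quoted above can be used to express a methane emission in CO2-equivalent terms. A minimal sketch (the GWP100 value of 30 is an illustrative midpoint chosen from the quoted 28–34 range, not a figure from the article):

```python
# GWP values for methane as quoted above.
GWP20_CH4 = 85    # 20-year horizon
GWP100_CH4 = 30   # illustrative midpoint of the quoted 28-34 range

def co2_equivalent(tonnes_ch4, gwp):
    """CO2-equivalent tonnes for a methane emission over the chosen horizon."""
    return tonnes_ch4 * gwp

# One tonne of methane, compared over the two horizons:
print(co2_equivalent(1, GWP20_CH4))   # 85 tonnes CO2e over 20 years
print(co2_equivalent(1, GWP100_CH4))  # 30 tonnes CO2e over 100 years
```

The large gap between the two horizons reflects methane's short atmospheric lifetime: its warming is front-loaded, which is why cutting methane buys near-term time.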
Overview of emission sources Biogenic methane is actively produced by microorganisms in a process called methanogenesis. Under certain conditions, the process mix responsible for a sample of methane may be deduced from the ratio of the isotopes of carbon, and through analysis methods similar to carbon dating. Anthropogenic emission volumes from some sources remain more uncertain than others, due in part to localized emission spikes not captured by the limited global measurement capability. The time required for a methane emission to become well-mixed throughout Earth's troposphere is about 1–2 years. Satellite data indicate that over 80% of the growth of methane emissions during 2010–2019 came from tropical terrestrial emissions. There is accumulating research and data showing that methane emissions from the oil and gas industry – that is, from fossil fuel extraction, distribution and use – are much larger than previously thought. Natural Natural sources have always been a part of the methane cycle. Wetland emissions have been declining due to draining for agricultural and building areas. Methanogenesis Most ecological emissions of methane relate directly to methanogens generating methane in warm, moist soils as well as in the digestive tracts of certain animals. Methanogens are methane-producing microorganisms. In order to produce energy, they use an anaerobic process called methanogenesis. This process is used in lieu of aerobic (oxygen-using) processes because methanogens are unable to metabolise in the presence of even small concentrations of oxygen. When acetate is broken down in methanogenesis, the result is the release of methane into the surrounding environment. Methanogenesis, the scientific term for methane production, occurs primarily in anaerobic conditions because of the lack of availability of other oxidants. In these conditions, microscopic organisms called archaea use acetate and hydrogen to break down essential resources in a process called fermentation.
Acetoclastic methanogenesis – certain archaea cleave acetate produced during anaerobic fermentation to yield methane and carbon dioxide. H3C-COOH → CH4 + CO2 Hydrogenotrophic methanogenesis – archaea oxidize hydrogen with carbon dioxide to yield methane and water. 4H2 + CO2 → CH4 + 2H2O While acetoclastic methanogenesis and hydrogenotrophic methanogenesis are the two major source reactions for atmospheric methane, other minor biological methane source reactions also occur. For example, it has been discovered that leaf surface wax exposed to UV radiation in the presence of oxygen is an aerobic source of methane. Natural methane cycles Emissions of methane into the atmosphere are directly related to temperature and moisture. Thus, the natural environmental changes that occur during seasonal change act as a major control of methane emission. Additionally, even changes in temperature during the day can affect the amount of methane that is produced and consumed. Its concentration is higher in the Northern Hemisphere since most sources (both natural and human) are located on land and the Northern Hemisphere has more land mass. The concentrations vary seasonally, with, for example, a minimum in the northern tropics during April−May mainly due to removal by the hydroxyl radical. For example, plants that produce methane can emit as much as two to four times more methane during the day than during the night. This is directly related to the fact that plants tend to rely on solar energy to enact chemical processes. Additionally, methane emissions are affected by the level of water sources. Seasonal flooding during the spring and summer naturally increases the amount of methane released into the air. Wetlands In wetlands, where the rate of methane production is high, plants help methane travel into the atmosphere—acting like inverted lightning rods as they direct the gas up through the soil and into the air. 
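As a quick sanity check, both reactions above are atom-balanced; a short script can verify this by counting atoms on each side (the hard-coded formula dictionaries below simply transcribe the two equations):

```python
from collections import Counter

def atoms(species):
    """Sum atom counts over a list of {element: count} formula dicts."""
    total = Counter()
    for counts in species:
        total.update(counts)
    return total

# Acetoclastic: CH3COOH -> CH4 + CO2
assert atoms([{'C': 2, 'H': 4, 'O': 2}]) == atoms([{'C': 1, 'H': 4}, {'C': 1, 'O': 2}])

# Hydrogenotrophic: 4 H2 + CO2 -> CH4 + 2 H2O
assert atoms([{'H': 8}, {'C': 1, 'O': 2}]) == atoms([{'C': 1, 'H': 4}, {'H': 4, 'O': 2}])

print("both methanogenesis reactions are atom-balanced")
```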
They are also suspected to produce methane themselves, but because the plants would have to use aerobic conditions to produce methane, the process itself is still unidentified, according to a 2014 Biogeochemistry article. A 1994 article on methane emissions from northern wetlands said that since the 1800s, atmospheric methane concentrations increased annually at a rate of about 0.9%. Human-caused methane emissions The AR6 of the IPCC said, "It is unequivocal that the increases in atmospheric carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) since the pre-industrial period are overwhelmingly caused by human activities." Atmospheric methane accounted for 20% of the total radiative forcing (RF) from all of the long-lived and globally mixed greenhouse gases. According to the 2021 assessment by the Climate and Clean Air Coalition (CCAC) and the United Nations Environment Programme (UNEP), over 50% of global methane emissions are caused by human activities: fossil fuels (35%), waste (20%), and agriculture (40%). The oil and gas industry accounts for 23%, and coal mining for 12%. Twenty percent of global anthropogenic emissions stem from landfills and wastewater. Manure and enteric fermentation represent 32%, and rice cultivation represents 8%. The most clearly identified rise in atmospheric methane as a result of human activity occurred in the 1700s during the Industrial Revolution. During the 20th century, mainly because of the use of fossil fuels, the concentration of methane in the atmosphere increased, then stabilized briefly in the 1990s, only to begin to increase again in 2007. After 2014, the increase accelerated and by 2017 reached 1,850 parts per billion (ppb).
Increases in methane levels due to modern human activities arise from a number of specific sources, including industrial activity; extraction of oil and natural gas from underground reserves; transportation of oil and natural gas via pipeline; and thawing permafrost in Arctic regions, due to global warming caused by human use of fossil fuels. The primary component of natural gas is methane, which is emitted to the atmosphere in every stage of natural gas "production, processing, storage, transmission, and distribution". Emissions due to oil and gas extraction A 2005 Wuppertal Institute for Climate, Environment and Energy article identified pipelines that transport natural gas as a source of methane emissions. The article cited the example of the Trans-Siberian natural gas pipeline system, which carries gas with a methane concentration of 97% from the Yamburg and Urengoy gas fields in Russia to western and Central Europe. In accordance with the IPCC and other natural gas emissions control groups, measurements had to be taken throughout the pipeline to measure methane emissions from technological discharges and leaks at the pipeline fittings and vents. Although the majority of the pipeline's measured emissions were carbon dioxide, a significant amount of methane was also consistently released as a result of leaks and breakdowns. In 2001, natural gas emissions from the pipeline and natural gas transportation system accounted for 1% of the natural gas produced. Between 2001 and 2005, this was reduced to 0.7%; even the 2001 value was significantly less than that of 1996. A 2012 Climatic Change article and a 2014 publication by a team of scientists led by Robert W. Howarth said that there was strong evidence that "shale gas has a larger GHG footprint than conventional gas, considered over any time scale. The GHG footprint of shale gas also exceeds that of oil or coal when considered at decadal time scales."
Howarth called for policy changes to regulate methane emissions resulting from hydraulic fracturing and shale gas development. A 2013 study by a team of researchers led by Scot M. Miller said that U.S. greenhouse gas reduction policies in 2013 were based on what appeared to be significant underestimates of anthropogenic methane emissions. The article said that "greenhouse gas emissions from agriculture and fossil fuel extraction and processing" – oil and/or natural gas – were "likely a factor of two or greater than cited in existing studies." By 2001, following a detailed study of anthropogenic sources of climate change, IPCC researchers found that there was "stronger evidence that most of the warming observed over the last 50 years [was] attributable to human activities." Since the Industrial Revolution, humans have had a major impact on concentrations of atmospheric methane, increasing atmospheric concentrations roughly 250%. According to the 2021 IPCC report, 30–50% of the current rise in temperatures is caused by emissions of methane, and reducing methane emissions is a fast way to mitigate climate change. An alliance of 107 countries, including Brazil, the EU and the US, has joined the pact known as the Global Methane Pledge, committing to a collective goal of reducing global methane emissions by at least 30% from 2020 levels by 2030. The European Union adopted methane regulations in 2024. The law requires oil and gas developers to monitor, measure, and report methane emissions. Producers must stop flaring unused natural gas and use satellite imagery to detect leaks. Animals and livestock Ruminant animals, particularly cows and sheep, contain bacteria in their gastrointestinal systems that help to break down plant material. Some of these microorganisms use the acetate from the plant material to produce methane, and because these bacteria live in the stomachs and intestines of ruminants, whenever the animal "burps" or defecates, it emits methane as well.
Based upon a 2012 study in the Snowy Mountains region of Australia, the amount of methane emitted by one cow is equivalent to the amount of methane that around 3.4 hectares of methanotrophic bacteria can consume. The research showed 8 tonnes of methane oxidized by methanotrophic bacteria per year on a 1,000 hectare farm, while 200 cows on the same farm emitted 5.4 tonnes of methane per year. Hence, one cow emitted 27 kg of methane per year while the bacteria oxidized 8 kg per hectare, so the emissions of one cow were oxidized by 27/8 ≈ 3.4 hectares. Termites also contain methanogenic microorganisms in their gut. However, some of these microorganisms are so unique that they live nowhere else in the world except in the third gut of termites. These microorganisms also break down biotic components to produce ethanol, as well as methane as a byproduct. However, unlike ruminants, which lose 20% of the energy from the plants they eat, termites only lose 2% of their energy in the process. Thus comparatively, termites do not have to eat as much food as ruminants to obtain the same amount of energy, and give off proportionally less methane. In 2001, NASA researchers confirmed the significant role of enteric fermentation in livestock in global warming. A 2006 UN FAO report stated that livestock generate more greenhouse gases as measured in CO2 equivalents than the entire transportation sector. Livestock accounts for 9% of anthropogenic CO2, 65% of anthropogenic nitrous oxide and 37% of anthropogenic methane. Since then, animal science and biotechnology researchers have focused research on methanogens in the rumen of livestock and mitigation of methane emissions. Nicholas Stern, the author of the 2006 Stern Review on climate change, has stated "people will need to turn vegetarian if the world is to conquer climate change".
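The arithmetic in the Snowy Mountains example above can be reproduced directly (variable names are illustrative; the figures are the ones quoted in the study):

```python
# Figures quoted above from the 2012 Snowy Mountains study.
farm_hectares = 1000
methane_oxidized_tonnes = 8.0   # oxidized by soil methanotrophs per year
cows = 200
herd_emissions_tonnes = 5.4     # emitted by the whole herd per year

per_cow_kg = herd_emissions_tonnes * 1000 / cows                 # kg CH4 per cow per year
per_hectare_kg = methane_oxidized_tonnes * 1000 / farm_hectares  # kg oxidized per hectare
hectares_per_cow = per_cow_kg / per_hectare_kg                   # land needed to offset one cow

print(per_cow_kg, per_hectare_kg, round(hectares_per_cow, 1))  # 27.0 8.0 3.4
```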
In 2003, the National Academy of Sciences' president, Ralph Cicerone, an atmospheric scientist, raised concerns that the increase in the number of methane-producing dairy and beef cattle was a "serious topic", as methane was the "second-most-important greenhouse gas in the atmosphere". Approximately 5% of the methane is released via the flatus, whereas the other 95% is released via eructation. Vaccines are under development to reduce the amount introduced through eructation. Asparagopsis seaweed as a livestock feed additive has reduced methane emissions by more than 80%. Waste Landfills Due to the large collections of organic matter and availability of anaerobic conditions, landfills are the third largest source of atmospheric methane in the United States, accounting for roughly 18.2% of methane emissions globally in 2014. When waste is first added to a landfill, oxygen is abundant and the waste thus undergoes aerobic decomposition, during which very little methane is produced. However, generally within a year oxygen levels are depleted and anaerobic conditions dominate the landfill, allowing methanogens to take over the decomposition process. These methanogens emit methane into the atmosphere, and even after the landfill is closed, the mass of decaying matter allows the methanogens to continue producing methane for years. Waste water treatment Waste water treatment facilities act to remove organic matter, solids, pathogens, and chemical hazards that result from human contamination. Methane emission in waste treatment facilities occurs as a result of anaerobic treatment of organic compounds and anaerobic biodegradation of sludge. Release of stored methane from the Arctic Others Aquatic ecosystems Natural and anthropogenic methane emissions from aquatic ecosystems are estimated to contribute about half of total global emissions. Urbanization and eutrophication are expected to lead to increased methane emissions from aquatic ecosystems.
Ecological conversion Conversion of forests and natural environments into agricultural plots increases the amount of nitrogen in the soil, which inhibits methane oxidation, weakening the ability of the methanotrophic bacteria in the soil to act as sinks. Additionally, by changing the level of the water table, humans can directly affect the soil's ability to act as a source or sink. The relationship between water table levels and methane emission is explained in the wetlands section of natural sources. Rice agriculture Rice agriculture is a significant source of methane. With warm weather and water-logged soil, rice paddies act like wetlands, but are generated by humans for the purpose of food production. Due to the swamp-like environment of rice fields, these paddies emitted about 30 million of the 400 million metric tons of anthropogenic methane in 2022. Biomass burning Incomplete burning of both living and dead organic matter results in the emission of methane. While natural wildfires can contribute to methane emissions, the vast majority of biomass burning occurs as a result of humans – including everything from accidental fires set by civilians to deliberate burns used to clear land and the burning of waste biomass. Oil and natural gas supply chain Methane is a primary component of natural gas, and thus during the production, processing, storage, transmission, and distribution of natural gas, a significant amount of methane is lost into the atmosphere. According to the EPA Inventory of U.S. Greenhouse Gas Emissions and Sinks: 1990–2015 report, 2015 methane emissions from natural gas and petroleum systems totaled 8.1 Tg per year in the United States. Individually, the EPA estimates that the natural gas system emitted 6.5 Tg per year of methane while petroleum systems emitted 1.6 Tg per year of methane.
Methane emissions occur in all sectors of the natural gas industry, from drilling and production, through gathering, processing and transmission, to distribution. These emissions occur through normal operation, routine maintenance, fugitive leaks, system upsets, and venting of equipment. In the oil industry, some underground crude contains natural gas that is entrained in the oil at high reservoir pressures. When oil is removed from the reservoir, associated gas is produced. However, a review of methane emissions studies reveals that the EPA Inventory of Greenhouse Gas Emissions and Sinks: 1990–2015 report likely significantly underestimated 2015 methane emissions from the oil and natural gas supply chain. The review concluded that in 2015 the oil and natural gas supply chain emitted 13 Tg per year of methane, which is about 60% more than the EPA report for the same time period. The authors write that the most likely cause for the discrepancy is undersampling by the EPA of so-called "abnormal operating conditions", during which large quantities of methane can be emitted. Coal mining In 2014, NASA researchers reported the discovery of a methane cloud floating over the Four Corners region of the south-west United States. The discovery was based on data from the European Space Agency's Scanning Imaging Absorption Spectrometer for Atmospheric Chartography instrument from 2002 to 2012. The report concluded that "the source is likely from established gas, coal, and coalbed methane mining and processing." The region emitted 590,000 metric tons of methane every year between 2002 and 2012—almost 3.5 times the widely used estimates in the European Union's Emissions Database for Global Atmospheric Research. In 2019, the International Energy Agency (IEA) estimated that the methane emissions leaking from the world's coalmines are warming the global climate at the same rate as the shipping and aviation industries combined.
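The EPA inventory totals and the size of the discrepancy with the independent review, both quoted above, can be checked directly:

```python
# 2015 figures quoted above, in Tg of methane per year.
epa_natural_gas = 6.5
epa_petroleum = 1.6
epa_total = epa_natural_gas + epa_petroleum   # EPA inventory total
review_total = 13.0                           # the independent review's estimate

print(round(epa_total, 1))                           # 8.1
print(round((review_total / epa_total - 1) * 100))   # 60  (percent above the EPA figure)
```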
Methane gas from methane clathrates At high pressures, such as are found on the bottom of the ocean, methane forms a solid clathrate with water, known as methane hydrate. An unknown, but possibly very large, quantity of methane is trapped in this form in ocean sediments. Researchers are investigating possible changes in this process (clathrate gun hypothesis). However, the 2021 IPCC Sixth Assessment Report found that it was "very unlikely that gas clathrates (mostly methane) in deeper terrestrial permafrost and subsea clathrates will lead to a detectable departure from the emissions trajectory during this century". Methane slip from gas engines The use of natural gas and biogas in internal combustion engines for such applications as electricity production, cogeneration, and heavy vehicles or marine vessels such as LNG carriers using the boil-off gas for propulsion, emits a certain percentage of unburned hydrocarbons, of which 85% is methane. The risk that this methane slip may offset or even cancel out the advantages of lower CO2 and particle emissions is described in a 2016 EU issue paper on methane slip from marine engines: "Emissions of unburnt methane (known as the 'methane slip') were around 7 g per kg LNG at higher engine loads, rising to 23–36 g at lower loads. This increase could be due to slow combustion at lower temperatures, which allows small quantities of gas to avoid the combustion process". Road vehicles run at low load more often than marine engines, causing relatively higher methane slip. Global methane emissions monitoring The Tropospheric Monitoring Instrument aboard the European Space Agency's Sentinel-5P spacecraft, launched in October 2017, provides the most detailed publicly available methane emissions monitoring. It has a resolution of about 50 square kilometres.
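A rough illustration of why methane slip matters can be built from the slip figures quoted above (7–36 g CH4 per kg LNG) and the GWP20 of 85 quoted earlier in the article. This sketch assumes, for simplicity, that LNG is pure methane, so complete combustion of 1 kg yields 44/16 = 2.75 kg of CO2 by stoichiometry; these simplifying assumptions are mine, not the article's:

```python
GWP20 = 85                              # 20-year GWP of methane, quoted earlier
co2_per_kg_burned_g = 1000 * 44 / 16    # g of CO2 from fully burning 1 kg of CH4 (2750 g)

# High-load slip vs worst-case low-load slip, in grams of CH4 per kg of LNG.
for slip_g in (7, 36):
    slip_co2e_g = slip_g * GWP20        # slip expressed as CO2-equivalent (20-year basis)
    share = slip_co2e_g / co2_per_kg_burned_g
    print(f"slip {slip_g} g/kg -> {slip_co2e_g} g CO2e ({share:.0%} of combustion CO2)")
```

Under these assumptions, worst-case low-load slip (36 g/kg) carries a 20-year warming effect larger than the CO2 from burning the fuel itself, consistent with the concern that slip "may offset or even cancel out" the fuel's CO2 advantage.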
MethaneSAT is under development by the Environmental Defense Fund in partnership with researchers at Harvard University, to monitor methane emissions with an improved resolution of 1 kilometer. MethaneSAT is designed to monitor 50 major oil and gas facilities, and could also be used for monitoring of landfills and agriculture. It receives funding from the Audacious Project (a collaboration of TED and the Gates Foundation), and is projected to launch as soon as 2024. In 2023, 12 satellites were deployed by GHGSat for monitoring methane emissions. Uncertainties in methane emissions, including so-called "super-emitter" fossil extractions and unexplained atmospheric fluctuations, highlight the need for improved monitoring at both regional and global scale. Satellites have recently begun to come online with the capability to measure methane and other more powerful greenhouse gases with improving resolution. The Tropomi instrument on Sentinel-5P, launched in 2017 by the European Space Agency, can measure methane, sulphur dioxide, nitrogen dioxide, carbon monoxide, aerosol, and ozone concentrations in Earth's troposphere at resolutions of several kilometers. In 2022, a study that used the instrument's data to monitor large methane emissions worldwide was published; 1,200 large methane plumes were detected over oil and gas extraction sites. NASA's instrument also identified super-emitters. A 50% increase was observed in large methane emission events detected by satellites in 2023 compared to 2022. Japan's GOSAT-2 platform, launched in 2018, provides similar capability. The Claire satellite, launched in 2016 by the Canadian firm GHGSat, uses data from Tropomi to home in on sources of methane emissions as small as 15 m2. Other satellites are planned that will increase the precision and frequency of methane measurements, as well as provide a greater ability to attribute emissions to terrestrial sources. These include MethaneSAT (discussed above) and CarbonMapper.
Global maps combining satellite data to help identify and monitor major methane emission sources are being built. The International Methane Emissions Observatory was created by the UN. Quantifying the global methane budget In order to mitigate climate change, scientists have been focusing on quantifying the global methane (CH4) budget as the concentration of methane continues to increase—it is now second after carbon dioxide in terms of climate forcing. Further understanding of atmospheric methane is necessary in "assessing realistic pathways" towards climate change mitigation. Various research groups have published differing estimates of global methane emissions. National reduction policies In 2010, China implemented regulations requiring coal plants to either capture methane emissions or convert the methane into carbon dioxide. According to a Nature Communications paper published in January 2019, methane emissions instead increased 50 percent between 2000 and 2015. In March 2020, Exxon called for stricter methane regulations, which would include detection and repair of methane leaks, minimization of venting and releases of unburned methane, and reporting requirements for companies. However, in August 2020, the U.S. Environmental Protection Agency rescinded a prior tightening of methane emission rules for the U.S. oil and gas industry. Approaches to reduce emissions Natural gas industries About 40% of methane emissions from the fossil fuel industry could be "eliminated at no net cost for firms", according to the International Energy Agency (IEA), by using existing technologies. That 40% represents 9% of all human methane emissions. To reduce emissions from the natural gas industries, the EPA developed the Natural Gas STAR Program, also known as Gas STAR. The Coalbed Methane Outreach Program (CMOP) helps and encourages the mining industry to find ways to use or sell methane that would otherwise be released from the coal mine into the atmosphere.
In 2023, the European Union agreed to legislation that will require fossil fuel companies to monitor and report methane leaks and to repair them within a short time period. The law also compels remediation of methane venting and methane flaring. The United States and China stated that they will include methane reduction targets in their next climate plans but have not enacted rules that would compel monitoring, reporting or repair of methane leaks. Livestock In order to counteract the amount of methane that ruminants give off, a type of drug called monensin (marketed as Rumensin) has been developed. This drug is classified as an ionophore, an antibiotic that is naturally produced by a harmless bacterial strain. The drug not only improves feed efficiency but also reduces the amount of methane gas emitted from the animal and its manure. In addition to medicine, specific manure management techniques have been developed to counteract emissions from livestock manure. Educational resources have begun to be provided for small farms. Management techniques include daily pickup and storage of manure in a completely closed-off storage facility that will prevent runoff from reaching bodies of water. The manure can then be kept in storage until it is either reused as fertilizer or taken away and stored in an offsite compost. Nutrient levels of various animal manures are provided for optimal use as compost for gardens and agriculture. Crops and soils In order to reduce effects on methane oxidation in soil, several steps can be taken. Controlling the usage of nitrogen-enhancing fertilizer and reducing the amount of nitrogen pollution into the air can both lower inhibition of methane oxidation. Additionally, using drier growing conditions for crops such as rice and selecting strains of crops that produce more food per unit area can reduce the amount of land with ideal conditions for methanogenesis.
Careful selection of areas of land conversion (for example, plowing down forests to create agricultural fields) can also reduce the destruction of major areas of methane oxidation. Landfills To counteract methane emissions from landfills, on March 12, 1996, the EPA (Environmental Protection Agency) added the "Landfill Rule" to the Clean Air Act. This rule requires large landfills that have ever accepted municipal solid waste, were in use as of November 8, 1987, can hold at least 2.5 million metric tons of waste with a volume greater than 2.5 million cubic meters, and/or have nonmethane organic compound (NMOC) emissions of at least 50 metric tons per year to collect and combust emitted landfill gas. This set of requirements excludes 96% of the landfills in the USA. While the direct result of this is landfills reducing emission of non-methane compounds that form smog, the indirect result is a reduction of methane emissions as well. In an attempt to absorb the methane that is already being produced from landfills, experiments have been conducted in which nutrients were added to the soil to allow methanotrophs to thrive. These nutrient-supplemented landfills have been shown to act as a small-scale methane sink, allowing the abundant methanotrophs to absorb methane from the air for use as energy, effectively reducing the landfill's emissions. See also Notes References External links Greenhouse gas emissions Methane
Methane emissions
Chemistry
5,916
66,305,965
https://en.wikipedia.org/wiki/Cobalt%20tris%28diethyldithiocarbamate%29
Cobalt tris(diethyldithiocarbamate) is the coordination complex of cobalt with diethyldithiocarbamate with the formula Co(S2CNEt2)3 (Et = ethyl). It is a diamagnetic green solid that is soluble in organic solvents. Synthesis, structure, bonding Cobalt tris(dithiocarbamate)s are typically prepared by air-oxidation of mixtures of dithiocarbamate salts and cobalt(II) nitrate. Cobalt tris(diethyldithiocarbamate) is an octahedral coordination complex of low-spin Co(III) with idealized D3 symmetry. The Co-S distances are 267 pm. Reactions Oxidation of Co(Et2dtc)3 occurs at mild potentials to give the cobalt(IV) derivative. Treatment of Co(Et2dtc)3 with fluoroboric acid results in the removal of 0.5 equiv of ligand, giving a binuclear cation: 2Co(Et2dtc)3 + HBF4 → [Co2(Et2dtc)5]BF4 + "Et2NdtcH" See also Iron tris(diethyldithiocarbamate) - a related diethyldithiocarbamate complex of iron References Dithiocarbamates Cobalt complexes
Cobalt tris(diethyldithiocarbamate)
Chemistry
283
8,195,160
https://en.wikipedia.org/wiki/Blue%20Norther%20%28weather%29
A Blue Norther, also known as a Texas Norther, is a fast moving cold front marked by a rapid drop in temperature, strong winds, and dark blue or "black" skies. The cold front originates from the north, hence the "norther", and can send temperatures plummeting by 20 or 30 degrees in merely minutes. Effects The Midwestern United States lacks natural geographic barriers to protect itself from the frigid winter air masses that originate in Canada and the arctic. Multiple times per year conditions will become favorable to push severe cold fronts as far south as Texas, bringing sleet and snow and causing the windchill to plunge into the teens. Depending on the time of year, high temperatures that immediately precede a Texas Norther can reach 85 °F (29°C) or even 90 °F (32°C) under bright sunlight in nearly-calm conditions before the cold front approaches. However, most Blue Northers do not advance as far south as Mexico, and even the most severe examples typically reach their apex midway through Texas. For example, cities in North Texas, like Dallas, experience drastically more Blue Northers than cities along the Gulf of Mexico, like Houston. As a city is struck by a Blue Norther, its temperatures can be 30 to 50 degrees colder than neighboring cities that are only a few miles away that have not yet been struck. Blue Northers can be dangerous due to their volatile temperature swings which catch some people unprepared. Frequency Blue Northers occur multiple times per year. They are usually recorded between the months of November and March, although they have been recorded less frequently in October and April as well. The Blue Norther phenomenon is especially common in November, when the last vestiges of autumn are still clinging to life. One of the most famous Blue Northers was the Great Blue Norther of November 11, 1911, which spawned multiple tornadoes and dropped temperatures 40 degrees in only 15 minutes and 67 degrees in 10 hours, a world record. 
See also Weather front References External links Severe weather and convection Convection Cold Weather fronts Climate of the United States
Blue Norther (weather)
Physics,Chemistry
426
40,543,215
https://en.wikipedia.org/wiki/Rendezvous%20hashing
Rendezvous or highest random weight (HRW) hashing is an algorithm that allows clients to achieve distributed agreement on a set of k options out of a possible set of n options. A typical application is when clients need to agree on which sites (or proxies) objects are assigned to. Consistent hashing addresses the special case k = 1, using a different method. Rendezvous hashing is both much simpler and more general than consistent hashing (see below). History Rendezvous hashing was invented by David Thaler and Chinya Ravishankar at the University of Michigan in 1996. Consistent hashing appeared a year later in the literature. Given its simplicity and generality, rendezvous hashing is now being preferred to consistent hashing in real-world applications. Rendezvous hashing was used very early on in many applications including mobile caching, router design, secure key establishment, and sharding and distributed databases. Other examples of real-world systems that use rendezvous hashing include the GitHub load balancer, the Apache Ignite distributed database, the Tahoe-LAFS file store, the CoBlitz large-file distribution service, Apache Druid, IBM's Cloud Object Store, the Arvados Data Management System, Apache Kafka, and the Twitter EventBus pub/sub platform. One of the first applications of rendezvous hashing was to enable multicast clients on the Internet (in contexts such as the MBONE) to identify multicast rendezvous points in a distributed fashion. It was used in 1998 by Microsoft's Cache Array Routing Protocol (CARP) for distributed cache coordination and routing. Some Protocol Independent Multicast routing protocols use rendezvous hashing to pick a rendezvous point. Problem definition and approach Algorithm Rendezvous hashing solves a general version of the distributed hash table problem: We are given a set of n sites (servers or proxies, say). How can any set of clients, given an object O, agree on a k-subset of sites to assign to O? 
The standard version of the problem uses k = 1. Each client is to make its selection independently, but all clients must end up picking the same subset of sites. This is non-trivial if we add a minimal disruption constraint and require that when a site fails or is removed, only objects mapping to that site need be reassigned to other sites. The basic idea is to give each site S_i a score (a weight) for each object O, and assign the object to the highest-scoring site. All clients first agree on a hash function h. For object O, the site S_i is defined to have weight w_i = h(S_i, O). Each client independently computes these weights and picks the k sites that yield the k largest hash values. The clients have thereby achieved distributed k-agreement. If a site S is added or removed, only the objects mapping to S are remapped to different sites, satisfying the minimal disruption constraint above. The HRW assignment can be computed independently by any client, since it depends only on the identifiers for the set of sites and the object being assigned. HRW easily accommodates different capacities among sites. If a site has twice the capacity of the other sites, we simply represent it twice in the list; clearly, twice as many objects will then map to it as to the other sites. Properties Consider the simple version of the problem, with k = 1, where all clients are to agree on a single site for an object O. Approaching the problem naively, it might appear sufficient to treat the n sites as buckets in a hash table and hash the object name O into this table. Unfortunately, if any of the sites fails or is unreachable, the hash table size changes, forcing all objects to be remapped. This massive disruption makes such direct hashing unworkable. Under rendezvous hashing, however, clients handle site failures by picking the site that yields the next largest weight. Remapping is required only for objects currently mapped to the failed site, and disruption is minimal. 
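The weight-and-pick-maximum scheme above can be sketched in a few lines of Python (a minimal illustration, not any particular production implementation; SHA-256 stands in for the agreed hash function h, and the site and object names are arbitrary):

```python
import hashlib

def hrw_weight(site: str, obj: str) -> int:
    """Deterministic score for a (site, object) pair: the role of h(S_i, O)."""
    digest = hashlib.sha256(f"{site}\x00{obj}".encode()).hexdigest()
    return int(digest, 16)

def top_k_sites(sites: list[str], obj: str, k: int = 1) -> list[str]:
    """Every client computes the same weights, so all clients
    independently agree on the same k sites for the object."""
    return sorted(sites, key=lambda s: hrw_weight(s, obj), reverse=True)[:k]
```

Because each site's weight depends only on the (site, object) pair, removing any site other than the winner leaves the winner unchanged, which is exactly the minimal-disruption property described above.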
Rendezvous hashing has the following properties: Low overhead: The hash function used is efficient, so overhead at the clients is very low. Load balancing: Since the hash function is randomizing, each of the n sites is equally likely to receive the object O. Loads are uniform across the sites. Site capacity: Sites with different capacities can be represented in the site list with multiplicity in proportion to capacity. A site with twice the capacity of the other sites will be represented twice in the list, while every other site is represented once. High hit rate: Since all clients agree on placing an object O into the same site S_O, each fetch or placement of O into S_O yields the maximum utility in terms of hit rate. The object O will always be found unless it is evicted by some replacement algorithm at S_O. Minimal disruption: When a site fails, only the objects mapped to that site need to be remapped. Disruption is at the minimal possible level, as has been formally proved. Distributed k-agreement: Clients can reach distributed agreement on k sites simply by selecting the top k sites in the ordering. O(log n) running time via skeleton-based hierarchical rendezvous hashing The standard version of rendezvous hashing described above works quite well for moderate n, but when n is extremely large, a hierarchical use of rendezvous hashing achieves O(log n) running time. This approach creates a virtual hierarchical structure (called a "skeleton"), and achieves O(log n) running time by applying HRW at each level while descending the hierarchy. The idea is to first choose a constant cluster size m and organize the n sites into clusters. Next, build a virtual hierarchy by choosing a constant fanout f and imagining these clusters placed at the leaves of a tree of virtual nodes, each with fanout f. In the accompanying diagram, the cluster size is m = 4, and the skeleton fanout is f = 3. Assuming 108 sites (real nodes) for convenience, we get a three-tier virtual hierarchy. Since f = 3, each virtual node has a natural numbering in base 3. 
Thus, the 27 virtual nodes at the lowest tier would be numbered 000 through 222 in base 3 (we can, of course, vary the fanout at each level; in that case, each node is identified by the corresponding mixed-radix number). The easiest way to understand the virtual hierarchy is by starting at the top and descending the virtual hierarchy. We successively apply rendezvous hashing to the set of virtual nodes at each level of the hierarchy, and descend the branch defined by the winning virtual node. We can in fact start at any level in the virtual hierarchy. Starting lower in the hierarchy requires more hashes, but may improve load distribution in the case of failures. For example, instead of applying HRW to all 108 real nodes in the diagram, we can first apply HRW to the 27 lowest-tier virtual nodes, selecting one. We then apply HRW to the four real nodes in its cluster, and choose the winning site. We only need 27 + 4 = 31 hashes, rather than 108. If we apply this method starting one level higher in the hierarchy, we would need 9 + 3 + 4 = 16 hashes to get to the winning site. The figure shows how, if we proceed starting from the root of the skeleton, we may successively choose one winning virtual node per tier and finally end up with a single winning real site (site 74 in the figure). The virtual hierarchy need not be stored, but can be created on demand, since the virtual node names are simply prefixes of base-f (or mixed-radix) representations. We can easily create appropriately sorted strings from the digits, as required. In the example, we would be working with one-digit strings at tier 1, two-digit strings at tier 2, and three-digit strings at tier 3. Clearly, the skeleton has height O(log n), since m and f are both constants. The work done at each level is O(f), and f is a constant. The value of f can be chosen based on factors like the anticipated failure rate and the degree of desired load balancing. A higher value of f leads to less load skew in the event of failure, at the cost of higher search overhead. Choosing a skeleton with a single tier over all the clusters is equivalent to non-hierarchical rendezvous hashing. 
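The on-demand descent can be sketched as follows (an illustrative Python sketch; SHA-256 stands in for the hash function, and the 108-site layout with m = 4 and f = 3 mirrors the example above; the site names are arbitrary):

```python
import hashlib
from itertools import product

def weight(name: str, obj: str) -> int:
    """HRW score for a (virtual or real) node name and an object."""
    return int(hashlib.sha256(f"{name}\x00{obj}".encode()).hexdigest(), 16)

# 108 real sites in 27 leaf clusters of 4; leaf clusters named by base-3 strings.
SITES = [f"site{i}" for i in range(108)]
LEAVES = ["".join(p) for p in product("012", repeat=3)]
CLUSTERS = {leaf: SITES[4 * i:4 * i + 4] for i, leaf in enumerate(LEAVES)}

def assign(obj: str) -> str:
    """Descend the skeleton from the root: HRW over the f children at each
    tier, then HRW over the m real nodes of the chosen leaf cluster."""
    prefix = ""
    for _ in range(3):  # three virtual tiers
        prefix = max((prefix + d for d in "012"),
                     key=lambda v: weight(v, obj))
    return max(CLUSTERS[prefix], key=lambda s: weight(s, obj))
```

Each assignment here costs 3 + 3 + 3 + 4 = 13 hashes instead of 108, and the skeleton is never materialized: virtual node names are generated as digit prefixes during the descent.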
In practice, the hash function is very cheap, so the non-hierarchical version can work quite well unless n is very high. For any given object, it is clear that each leaf-level cluster, and hence each of the n sites, is chosen with equal probability. Replication, site failures, and site additions One can enhance resiliency to failures by replicating each object O across the r < m highest-ranking sites for O, choosing r based on the level of resiliency desired. The simplest strategy is to replicate only within the leaf-level cluster. If the leaf-level site selected for O is unavailable, we select the next-ranked site for O within the same leaf-level cluster. If O has been replicated within the leaf-level cluster, we are sure to find O in the next available site in the ranked order of r sites. All objects that were held by the failed server appear in some other site in its cluster. (Another option is to go up one or more tiers in the skeleton and select an alternate from among the sibling virtual nodes at that tier. We then descend the hierarchy to the real nodes, as above.) When a site is added to the system, it may become the winning site for some objects already assigned to other sites. Objects mapped to other clusters will never map to this new site, so we need only consider objects held by other sites in its cluster. If the sites are caches, attempting to access an object mapped to the new site will result in a cache miss, the corresponding object will be fetched and cached, and operation returns to normal. If sites are servers, some objects must be remapped to this newly added site. As before, objects mapped to other clusters will never map to this new site, so we need only consider objects held by sites in its cluster. That is, we need only remap objects currently present in the m sites in this local cluster, rather than the entire set of objects in the system. New objects mapping to this site will of course be automatically assigned to it. 
Comparison with consistent hashing Because of its simplicity, lower overhead, and generality (it works for any k < n), rendezvous hashing is increasingly being preferred over consistent hashing. Recent examples of its use include the GitHub load balancer, the Apache Ignite distributed database, and the Twitter EventBus pub/sub platform. Consistent hashing operates by mapping sites uniformly and randomly to points on a unit circle called tokens. Objects are also mapped to the unit circle and placed in the site whose token is the first encountered traveling clockwise from the object's location. When a site is removed, the objects it owns are transferred to the site owning the next token encountered moving clockwise. Provided each site is mapped to a large number (100–200, say) of tokens, this will reassign objects in a relatively uniform fashion among the remaining sites. If sites are mapped to points on the circle randomly by hashing 200 variants of the site ID, say, the assignment of any object requires storing or recalculating 200 hash values for each site. However, the tokens associated with a given site can be precomputed and stored in a sorted list, requiring only a single application of the hash function to the object, and a binary search to compute the assignment. Even with many tokens per site, however, the basic version of consistent hashing may not balance objects uniformly over sites, since when a site is removed, each object assigned to it is distributed only over as many other sites as the site has tokens (say 100–200). Variants of consistent hashing (such as Amazon's Dynamo) that use more complex logic to distribute tokens on the unit circle offer better load balancing than basic consistent hashing, reduce the overhead of adding new sites, reduce metadata overhead, and offer other benefits. Advantages of Rendezvous hashing over consistent hashing Rendezvous hashing (HRW) is much simpler conceptually and in practice. 
It also distributes objects uniformly over all sites, given a uniform hash function. Unlike consistent hashing, HRW requires no precomputing or storage of tokens. Consider k = 1. An object O is placed into one of n sites by computing the n hash values h(S_i, O) and picking the site that yields the highest hash value. If a new site is added, new object placements or requests will compute n + 1 hash values, and pick the largest of these. If an object already in the system maps to this new site, it will be fetched afresh and cached there. All clients will henceforth obtain it from this site, and the old cached copy at the previous site will ultimately be replaced by the local cache management algorithm. If a site is taken offline, its objects will be remapped uniformly to the remaining n - 1 sites. Variants of the HRW algorithm, such as the use of a skeleton (see above), can reduce the time for object location to O(log n), at the cost of less global uniformity of placement. When n is not too large, however, the O(n) placement cost of basic HRW is not likely to be a problem. HRW completely avoids all the overhead and complexity associated with correctly handling multiple tokens for each site and the associated metadata. Rendezvous hashing also has the great advantage that it provides simple solutions to other important problems, such as distributed k-agreement. Consistent hashing is a special case of Rendezvous hashing Rendezvous hashing is both simpler and more general than consistent hashing. Consistent hashing can be shown to be a special case of HRW by an appropriate choice of a two-place hash function. From the site identifier S, the simplest version of consistent hashing computes a list of token positions on the unit circle, e.g., by hashing S together with a token index. Define the two-place hash function h(S, O) to be the reciprocal of the clockwise distance along the unit circle from the object's position to the nearest token of site S (since this distance has some minimal non-zero value, there is no problem translating it to a unique integer in some bounded range). 
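The reduction can be checked directly in code (an illustrative sketch; the hash function, discretized circle size, and token count are arbitrary choices, not part of either algorithm's definition). The HRW weight below is the negated clockwise distance to a site's nearest token, so picking the maximum weight reproduces the plain consistent-hashing assignment:

```python
import hashlib

CIRCLE = 2 ** 32  # discretized unit circle

def point(name: str) -> int:
    """Map a name to a position on the circle."""
    return int(hashlib.sha256(name.encode()).hexdigest(), 16) % CIRCLE

def tokens(site: str, r: int = 8) -> list[int]:
    """Token positions for a site (r hashed variants of the site ID)."""
    return [point(f"{site}#{i}") for i in range(r)]

def ch_owner(sites: list[str], obj: str) -> str:
    """Plain consistent hashing: first token clockwise from the object."""
    p = point(obj)
    return min(sites, key=lambda s: min((t - p) % CIRCLE for t in tokens(s)))

def hrw_owner(sites: list[str], obj: str) -> str:
    """HRW with weight = -(clockwise distance to the site's nearest token)."""
    p = point(obj)
    return max(sites, key=lambda s: -min((t - p) % CIRCLE for t in tokens(s)))
```

Both functions select the site whose token minimizes the clockwise distance from the object, so they agree on every object.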
This will duplicate exactly the assignment produced by consistent hashing. It is not possible, however, to reduce HRW to consistent hashing (assuming the number of tokens per site is bounded), since HRW potentially reassigns the objects from a removed site to an unbounded number of other sites. Weighted variations In the standard implementation of rendezvous hashing, every node receives a statically equal proportion of the keys. This behavior, however, is undesirable when the nodes have different capacities for processing or holding their assigned keys. For example, if one of the nodes had twice the storage capacity as the others, it would be beneficial if the algorithm could take this into account such that this more powerful node would receive twice the number of keys as each of the others. A straightforward mechanism to handle this case is to assign two virtual locations to this node, so that if either of that larger node's virtual locations has the highest hash, that node receives the key. But this strategy does not work when the relative weights are not integer multiples. For example, if one node had 42% more storage capacity, it would require adding many virtual nodes in different proportions, leading to greatly reduced performance. Several modifications to rendezvous hashing have been proposed to overcome this limitation. Cache Array Routing Protocol The Cache Array Routing Protocol (CARP) is a 1998 IETF draft that describes a method for computing load factors which can be multiplied by each node's hash score to yield an arbitrary level of precision for weighting nodes differently. However, one disadvantage of this approach is that when any node's weight is changed, or when any node is added or removed, all the load factors must be re-computed and relatively scaled. 
When the load factors change relative to one another, it triggers movement of keys between nodes whose weight was not changed, but whose load factor did change relative to other nodes in the system. This results in excess movement of keys. Controlled replication Controlled replication under scalable hashing or CRUSH is an extension to RUSH that improves upon rendezvous hashing by constructing a tree where a pseudo-random function (hash) is used to navigate down the tree to find which node is ultimately responsible for a given key. It permits perfect stability for adding nodes; however, it is not perfectly stable when removing or re-weighting nodes, with the excess movement of keys being proportional to the height of the tree. The CRUSH algorithm is used by the Ceph data storage system to map data objects to the nodes responsible for storing them. Other variants In 2005, Christian Schindelhauer and Gunnar Schomaker described a logarithmic method for re-weighting hash scores in a way that does not require relative scaling of load factors when a node's weight changes or when nodes are added or removed. This enabled the dual benefits of perfect precision when weighting nodes, along with perfect stability, as only a minimum number of keys needed to be remapped to new nodes. A similar logarithm-based hashing strategy is used to assign data to storage nodes in Cleversafe's data storage system, now IBM Cloud Object Storage. Systems using Rendezvous hashing Rendezvous hashing is being used widely in real-world systems. A partial list includes Oracle Database In-Memory, the GitHub load balancer, the Apache Ignite distributed database, the Tahoe-LAFS file store, the CoBlitz large-file distribution service, Apache Druid, IBM's Cloud Object Store, the Arvados Data Management System, Apache Kafka, and the Twitter EventBus pub/sub platform. 
Implementation Implementation is straightforward once a hash function is chosen (the original work on the HRW method makes a hash function recommendation). Each client only needs to compute a hash value for each of the n sites, and then pick the largest. This algorithm runs in O(n) time. If the hash function is efficient, the O(n) running time is not a problem unless n is very large. Weighted rendezvous hash Python code implementing a weighted rendezvous hash:

import mmh3
import math
from dataclasses import dataclass


def hash_to_unit_interval(s: str) -> float:
    """Hashes a string onto the unit interval (0, 1]"""
    return (mmh3.hash128(s) + 1) / 2**128


@dataclass
class Node:
    """Class representing a node that is assigned keys as part of a
    weighted rendezvous hash."""
    name: str
    weight: float

    def compute_weighted_score(self, key: str):
        score = hash_to_unit_interval(f"{self.name}: {key}")
        log_score = 1.0 / -math.log(score)
        return self.weight * log_score


def determine_responsible_node(nodes: list[Node], key: str):
    """Determines which node of a set of nodes of various weights is
    responsible for the provided key."""
    return max(
        nodes, key=lambda node: node.compute_weighted_score(key), default=None)

Example outputs of WRH:

>>> import wrh
>>> node1 = wrh.Node("node1", 100)
>>> node2 = wrh.Node("node2", 200)
>>> node3 = wrh.Node("node3", 300)
>>> str(wrh.determine_responsible_node([node1, node2, node3], "foo"))
"Node(name='node1', weight=100)"
>>> str(wrh.determine_responsible_node([node1, node2, node3], "bar"))
"Node(name='node2', weight=200)"
>>> str(wrh.determine_responsible_node([node1, node2, node3], "hello"))
"Node(name='node2', weight=200)"
>>> nodes = [node1, node2, node3]
>>> from collections import Counter
>>> responsible_nodes = [wrh.determine_responsible_node(
...     nodes, f"key: {key}").name for key in range(45_000)]
>>> print(Counter(responsible_nodes))
Counter({'node3': 22487, 'node2': 15020, 'node1': 7493})

References External links Rendezvous Hashing: an alternative to Consistent Hashing Algorithms Articles with example Python (programming language) code Hashing
Rendezvous hashing
Mathematics
4,231
15,470,144
https://en.wikipedia.org/wiki/Ouzo%20effect
The ouzo effect, also known as the louche effect and spontaneous emulsification, is the phenomenon of formation of a milky oil-in-water emulsion when water is added to ouzo and other anise-flavored liqueurs and spirits, such as pastis, rakı, arak, sambuca and absinthe. Such emulsions occur with only minimal mixing and are highly stable. Observation and explanation First, a strongly hydrophobic essential oil such as trans-anethole is dissolved in a water-miscible solvent, such as ethanol, and the ethanol itself forms a solution (a homogeneous mixture) with water. If the concentration of ethanol is then lowered by the addition of more water, the hydrophobic substance precipitates from the solution and forms an emulsion with the remaining ethanol-water mixture. The tiny droplets of the substance in the emulsion scatter light and thus make the mixture appear white. Oil-in-water emulsions are not normally stable: oil droplets coalesce until complete phase separation is achieved at macroscopic levels. Addition of a small amount of surfactant or the application of high shear rates (strong stirring) can stabilize the oil droplets. In a water-rich ouzo mixture, the droplet coalescence is dramatically slowed without mechanical agitation, dispersing agents, or surfactants. It forms a stable homogeneous fluid dispersion by liquid–liquid nucleation. The size of the droplets, when measured by small-angle neutron scattering, was found to be on the order of a micron. Using dynamic light scattering, Sitnikova et al. showed that the droplets of oil in the emulsion grow by Ostwald ripening, and that droplets do not coalesce. The Ostwald ripening rate is observed to diminish with increasing ethanol concentrations until the droplets stabilize at a constant average size. Based on thermodynamic considerations of the multi-component mixture, the emulsion derives its stability from trapping between the binodal and spinodal curves in the phase diagram. 
However, the microscopic mechanisms responsible for the observed slowing of Ostwald ripening rates at increasing ethanol concentrations appear not fully understood. Applications Emulsions have many commercial uses. A large range of prepared food products, detergents, and body-care products take the form of emulsions that are required to be stable over a long period of time. The ouzo effect is seen as a potential mechanism for generating surfactant-free emulsions without the need for high-shear stabilisation techniques that are costly in large-scale production processes. The creation of a variety of dispersions such as pseudolatexes, silicone emulsions, and biodegradable polymeric nanocapsules, have been synthesized using the ouzo effect, though as stated previously, the exact mechanism of this effect remains unclear. Nanoparticles formed using the ouzo effect are thought to be kinetically stabilized as opposed to thermodynamically stabilized micelles formed using a surfactant due to the fast solidification of the polymer during the preparation process. See also Interface and colloid science Miniemulsion Anise-flavored liqueurs on the list of liqueurs Spinodal References External links Colloidal chemistry Chemical mixtures Soft matter Absinthe Articles containing video clips
Ouzo effect
Physics,Chemistry,Materials_science
696
4,994,693
https://en.wikipedia.org/wiki/1979%20Mississauga%20train%20derailment
The Mississauga train derailment, also known as the Mississauga Miracle, occurred on November 10, 1979, in Mississauga, Ontario, Canada, when a CP Rail freight train carrying hazardous chemicals derailed and caught fire. More than 200,000 people were evacuated in the largest peacetime evacuation in North America until Hurricane Katrina. The fire was caused by a failure of the lubricating system. No deaths resulted from the incident. Causes A CP Rail freight train, led by FP7A #4069, F9B #1964, GP35 #5005 and GP38AC #3010, was eastbound from Windsor, Ontario. The train consisted of 106 cars that carried multiple chemicals and explosives, including styrene, toluene, propane, caustic soda, and chlorine. On the 33rd car, heat began to build up in an improperly lubricated journal bearing on one of the wheels, resulting in the condition known among train workers as a "hot box". (This was one of the few journal bearings still in use at that time, as most had long since been replaced with roller bearings.) Residents living beside the tracks reported smoke and sparks coming from the car, and those who were close to Mississauga thought the train was on fire. The friction eventually burned through the axle and bearing, and as the train was passing the Mavis Road level crossing, near the intersection with Dundas Street, a wheelset (one axle and pair of wheels) fell off completely. Explosion and evacuation At 11:53 p.m., at the Mavis Road crossing, the damaged bogie (undercarriage) left the track, causing the remaining parts of the train to derail. The impact caused several tank cars filled with propane to burst into flames. The derailment also ruptured several other tankers, spilling styrene, toluene, propane, caustic soda, and chlorine onto the tracks and into the air. A huge explosion resulted, sending a fireball into the sky that was visible from a great distance. 
As the flames were erupting, the train's brakeman, Larry Krupa, 27, at the suggestion of the engineer (also his father-in-law), managed to close an air brake angle cock at the west end of the undamaged 32nd car, allowing the engineer to release the air brakes between the locomotives and the derailed cars and move the front part of the train eastward along the tracks, away from danger. This prevented those cars from becoming involved in the fire, which was important, as many of them also contained dangerous goods. Krupa was later recommended for the Order of Canada for his bravery, which a later writer has described as "bordering on lunacy." After more explosions, firefighters concentrated on cooling the cars, allowing the fire to burn itself out, but a ruptured chlorine tank became a cause for concern. With the possibility of a deadly cloud of chlorine gas spreading through suburban Mississauga, more than 200,000 people were evacuated. A number of residents (mostly in the extreme west and north of Mississauga) allowed evacuees to stay with them until the crisis abated. Some of these people were later moved again as their hosts were also evacuated. The evacuation was managed by various officials, including the mayor of Mississauga, Hazel McCallion, the Peel Regional Police and other governmental authorities. McCallion sprained her ankle early during the crisis, but continued to hobble to press conferences. 
It was the last major explosion in the Greater Toronto Area until the Sunrise Propane blast in 2008. Due to the speed and efficiency with which it was conducted, many cities later studied and modelled their own emergency plans after Mississauga's. As a result of the accident, rail regulators in both the U.S. and Canada required that any line used to carry hazardous materials into or through a populated area have hotbox detectors. Larry Krupa was inducted into the North America Railway Hall of Fame for his contribution to the railway industry. He was recognized in the "National" division of the "Railway Workers & Builders" category. The city of Mississauga sued CP in hopes of holding the railroad responsible for the massive emergency services bill. However, the city dropped its suit after CP dropped its longstanding opposition to passenger service on its trackage near Mississauga. This cleared the way for GO Transit to open the Milton line two years later. Hazel McCallion, in her first term as mayor at the time of the accident, was continuously re-elected until her retirement in 2014 at age 93. In popular culture "Party Rapp," a 1979 rap song by Mr. Q, references the derailment, the first known explicit reference to Canada in a Canadian hip hop song. "Trainwreck 1979," a 2014 rock single by Canadian band Death From Above 1979, is about the derailment: It ran off the track, 11-79 While the immigrants slept, there wasn't much time The mayor came calling and got 'em outta bed They packed up their families and headed upwind A poison cloud, a flaming sky, 200,000 people and no one died And all before the pocket dial, yeah! See also List of rail accidents (1970–79) List of rail accidents in Canada References "Miracle of Mississauga", Toronto Sun Publishing, 1979. 
History of Mississauga Explosions in Canada Explosions in 1979 Gas explosions Railway accidents in 1979 Train and rapid transit fires Accidents and incidents involving Canadian Pacific Railway Mississauga Train Derailment, 1979 Disasters in Ontario Environmental disasters in Canada 1970s fires in North America 1979 fires 1979 in the environment Derailments in Canada 1979 in Ontario November 1979 events in Canada Railway accidents and incidents in Ontario
1979 Mississauga train derailment
Chemistry
1,278
37,399,176
https://en.wikipedia.org/wiki/Lowitz%20arc
A Lowitz arc is an optical phenomenon that occurs in the atmosphere; specifically, it is a rare type of ice crystal halo that forms a luminous arc which extends inwards from a sun dog (parhelion) and may continue above or below the sun. History The phenomenon is named after Johann Tobias Lowitz (or Lovits) (1757 - 1804), a German-born Russian apothecary and experimental chemist. On the morning of June 18, 1790 in St. Petersburg, Russia, Lowitz witnessed a spectacular display of solar halos. Among his observations, he noted arcs descending from the sun dogs and extending below the sun: Original (in French): 6. Ces deux derniers parhélies qui se trouvoient à quelque distance des intersections du grand cercle horizontal par les deux couronnes qui entourent le soleil, renvoyoient d'abord des deux cotés de parties d'arc très courtes colorées xi & yk dont la direction s'inclinoit au dessous du soleil jusqu'aux deux demi-arcs de cercle intérieurs die & dke. En second lieu ils étoient pourvues des queues longues, claires & blanches x ζ & y η , opposées au soleil & renfermées dans la circonference du grand cercle afbg. Translation : 6. These last two parhelia which were at some distance from the intersections of the great horizontal circle by the two coronas which surrounded the sun, sent, in the first place, from the two sides very short colored arcs xi & yk whose direction inclined below the sun as far as the two interior semicircular arcs die & dke. In the second place, they had long tails, bright and white x ζ & y η , directed away from the sun and included in the circumference of the great circle afhg. Lowitz formally reported the phenomenon to the St. Petersburg Academy of Sciences on October 18, 1790, including a detailed illustration of what he had witnessed. The illustration included what are now called “lower Lowitz arcs”. 
However, some scientists (not unreasonably) doubted the existence of the phenomenon: it occurs only rarely; since Lowitz arcs were little known, people who witnessed them did not always recognize them; and, until the advent of small, inexpensive digital cameras, witnesses rarely had cameras at hand to record them, and even when they did, the cameras were not always sensitive enough to record the faint Lowitz arcs. Only since circa 1990 have photographs of what are clearly Lowitz arcs become available for study and analysis. The phenomenon and hypotheses about its cause Sometimes, when the sun is low in the sky, there are luminous spots to the left and right of the sun and at the same elevation as the sun. These luminous spots are called "sun dogs" or "parhelia". (Often on these occasions, the sun is also surrounded by a luminous ring or halo, the angle between the sun and the halo (with the observer at the angle's vertex) measuring 22°.) On rare occasions, faint arcs extend upwards or downwards from these sun dogs. These arcs extending from the sun dogs are "Lowitz arcs". As many as three distinct arcs may extend from the sun dogs. The short arc that first inclines towards the sun and then extends downward is called the "lower Lowitz arc". A longer second arc may also extend downward from the sun dog but then curve under the sun, perhaps joining the other sun dog; this is the "middle Lowitz arc" or "circular Lowitz arc". Finally, a third arc may extend upwards from the sun dog; this is the "upper Lowitz arc". In his diagram of 1790, Lowitz recorded only a lower Lowitz arc. Like the 22° solar halo and sun dogs, Lowitz arcs are believed to be caused by sunlight refracting (bending) through ice crystals. However, there still remains some dispute about the shape and orientation of the ice crystals that produce Lowitz arcs. 
In 1840, the German astronomer Johann Gottfried Galle (1812 - 1910) proposed that lower Lowitz arcs were produced as sun dogs are; that is, by sunlight refracting through hexagonal ice crystals. However, in the case of sun dogs, the columnar crystals are oriented vertically, whereas in the case of Lowitz arcs, Galle proposed, the crystals oscillated about their vertical axes. Charles Sheldon Hastings (1848 - 1932), an American physicist who specialized in optics, suggested in 1901 that Lowitz arcs were due to hexagonal plates of ice, which oscillated around a horizontal axis in the plane of the plate as the plate fell, similar to the fluttering of a falling leaf. Later, in 1920, he proposed that the plates rotate, rather than merely oscillate, around their long diagonals. According to Hastings, sunlight enters one of the faces on the edge of the plate, is refracted, propagates through the ice crystal, and then exits through another face on the edge of the plate, which is at 60° to the first face, refracts again as it exits, and finally reaches the observer. Because the ice plates rotate, plates throughout an arc are—at some time during each rotation—oriented to refract sunlight to the observer. A hexagonal plate has three long diagonals about which it can rotate, but rotation around only one of the axes causes the lower Lowitz arc. The other Lowitz arcs—the middle and upper arcs—are caused by sunlight passing through the two other pairs of faces of the hexagonal ice plate. However, since circa 1990, photographs of what are clearly Lowitz arcs have become available for study. Furthermore, numerical ray-tracing software allows Lowitz arcs to be simulated by computers, so that, from hypotheses about the shape and orientation of ice crystals, the shape and intensity of a hypothetical Lowitz arc can be predicted and compared against photographs of actual arcs. 
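Hastings's geometry, in which light enters one edge face of a hexagonal plate and exits another face oriented at 60° to it, is the same prism geometry that produces the 22° halo. As a rough check (an illustrative sketch; the refractive index of ice, about 1.31, is an assumed value rather than a figure from this article), the minimum-deviation formula for a 60° prism gives a deviation close to 22°:

```python
import math

def min_deviation(apex_deg, n):
    """Minimum deviation angle (degrees) for light passing symmetrically
    through a prism of the given apex angle and refractive index."""
    a = math.radians(apex_deg)
    return math.degrees(2 * math.asin(n * math.sin(a / 2)) - a)

# Alternate edge faces of a hexagonal ice plate meet at 60 degrees;
# the refractive index of ice is roughly 1.31 (assumed value).
print(round(min_deviation(60, 1.31), 1))  # -> 21.8, close to the 22-degree halo
```

This reproduces only the deviation angle; predicting the full shape and intensity of the arcs requires the ray-tracing simulations described in the text.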
As a result of such simulations, the traditional explanation of Lowitz arcs has been found to have some shortcomings. Specifically, simulations assuming that only perfectly hexagonal, rotating plates produce Lowitz arcs predict the wrong intensities for the arcs. More accurate simulations were obtained by assuming that the plates were almost horizontal, or that the ice crystals had a more rhombic shape or were hexagonal columns that were oriented horizontally. Hence the exact mechanism by which Lowitz arcs are produced remains unresolved. References Further reading Auguste Bravais (1847) "Mémoire sur les halos et les phénomènes optiques qui les accompagnent" (Memoir on halos and the optical phenomena that accompany them), Journal de l'Ecole royale polytechnique, 18 : 1-270. See: pages 47-49: "§ X. -- Arcs obliques de Lowitz." (Lowitz's oblique arcs). Josef Maria Pernter and Felix Maria Exner, Meteorologische Optik, 2nd ed. (Vienna, Austria: Wilhelm Braumüller, 1922). See: pages 360-380: 47. Nebensonne, Halo von 22° und Lowitz' schiefe Bögen. (47. Sun dogs, 22° halo and Lowitz's oblique bows.) William Jackson Humphreys, Physics of the Air, 2nd ed. (New York, New York: McGraw-Hill, 1929); Lowitz arcs are discussed on pages 495-501. Robert Greenler, Rainbows, Halos, and Glories (Cambridge, England: Cambridge University Press, 1980); Lowitz arcs are discussed on pages 44-47. Walter Tape (researcher of halos): Walter Tape, ed., Atmospheric Halos, Antarctic Research Series, vol. 64 (Washington, D. C.: American Geophysical Union, 1994). On page 98 (in the absence of photographic evidence), Tape regards Lowitz arcs as merely Parry arcs. Walter Tape and Jarmo Moilanen, Atmospheric Halos and the Search for Angle X (Washington, D. C.: American Geophysical Union, 2006). Walter Tape's website James R. Mueller, Robert G. Greenler, and A. James Mallmann (August 1, 1979) "Arcs of Lowitz," Journal of the Optical Society of America, 69 (8) : 1103-1106. R.A.R. 
(Ronald Alfred Ranson) Tricker, Introduction to Meteorological Optics (New York, New York: Elsevier, 1970). Marko Riikonen, Halot. Jääkidepilvien valoilmiöt [Halos. The optical phenomena of ice crystal clouds] (Helsinki, Finland: Ursa, 2011) -- in Finnish. External links Atmospheric Optics (Website devoted to halos, etc.) Atmospheric Optics : Lowitz arcs Atmospheric Optics : HaloSim3 (software for simulating halos, etc.) Halo Observation Project : database of observations of rare halos, etc., with photos (from 1990s to 2006) Arbeitskreis Meteore e.V. (German group devoted to observations of atmospheric optics): Arbeitskreis Meteore e.V. : Upper Lowitz arc Arbeitskreis Meteore e.V. : Lowitz arc Arbeitskreis Meteore e.V. : Spectacular display of halos, etc., on November 27, 2010 in the Sudelfeld in the Bavarian Alps (in German) Ice Crystal Halos : a collection of photos of halos, etc. Photos of halo phenomena at Mount Geigelstein in the Bavarian Alps (Oct. 15, 2005) Optical phenomena Atmospheric optical phenomena
Lowitz arc
Physics
2,049
53,720,265
https://en.wikipedia.org/wiki/Syntactic%20noise
In computer science, syntactic noise is syntax within a programming language that makes the programming language more difficult to read and understand for humans. It fills the language with excessive clutter that makes it a hassle to write code. Syntactic noise is considered to be the opposite of syntactic sugar, which is syntax that makes a programming language more readable and enjoyable for the programmer. References Computer jargon
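The contrast can be illustrated with two equivalent snippets (Python is used here purely as an example language). The first buries the intent, summing the squares of the even numbers, under index bookkeeping; the second states it directly using the language's syntactic sugar:

```python
nums = [1, 2, 3, 4, 5, 6]

# Noisy version: the counter, bounds check and accumulator are clutter
# unrelated to the actual intent.
total = 0
i = 0
while i < len(nums):
    if nums[i] % 2 == 0:
        total = total + nums[i] * nums[i]
    i = i + 1

# Sugared version: a generator expression states the intent directly.
total_sugared = sum(n * n for n in nums if n % 2 == 0)

print(total, total_sugared)  # -> 56 56
```

Which constructs count as noise versus sugar is partly a matter of taste and of the language's conventions; the example only illustrates the distinction drawn above.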
Syntactic noise
Technology
85
19,289,994
https://en.wikipedia.org/wiki/Oleandrin
Oleandrin is a cardiac glycoside found in the poisonous plant oleander (Nerium oleander L.). As a main phytochemical of oleander, oleandrin is associated with the toxicity of oleander sap, and has similar properties to digoxin. Oleander has been used in traditional medicine for its presumed therapeutic purposes, such as for treating cardiac insufficiency. There is no clinical evidence that oleander or its constituents, including oleandrin, are safe or effective. Oleandrin is not approved by regulatory agencies as a prescription drug or dietary supplement. Structure and reactivity The structure of oleandrin contains a central steroid nucleus with an unsaturated lactone ring structure on C17 and a dideoxy arabinose group on C3. In addition, the steroid ring bears an acetyloxy substituent on C16. The sugar forming the glycoside is L-oleandrose. Oleandrin closely resembles other glycosides such as ouabain and digoxin, but is less potent than digoxin. It is, however, like its derivative oleandrigenin, a more potent glycoside than ouabain. Synthesis Oleandrin and its derivative oleandrigenin are formed in the N. oleander plant. Oleandrin can be extracted from the leaves and other parts of the plant, but can also be produced in the laboratory using cell cultures. Here, oleandrin synthesis (along with that of other metabolites) can be stimulated in untransformed plant cell cultures by supplementation with phytohormones. However, this is not enough to produce large quantities because of early cell death. Transgenic cultures of Agrobacteria are able to synthesize great quantities of oleandrin and other metabolites of the oleander plant, fit for pharmaceutical purposes. Related substances Oleandrin is, apart from its pure form, also closely related to structurally similar glycosides and alkaloids, which all have more or less the same characteristics as oleandrin: Oleandrigenin is a deglycosylated metabolite of oleandrin. It has, however, a milder effect. 
Conessine Neritaloside Odoroside Metabolism Although oleandrigenin is not formed in human plasma, it was found in volunteers injected with oleandrin, suggesting that it is formed in other human tissues. Because of its lipophilic properties, oleandrin is easily absorbed in the gastrointestinal tract after oral dosing. The clearance is slow. The plasma concentration reaches its maximum about twenty minutes after oral intake; the half-life is about 2 hours after oral dosing but only about half an hour after IV administration. It is excreted mostly in feces, but also in urine. Because the main route of excretion is biliary excretion into the feces, it is mainly the liver that is exposed to oleandrin. As excretion in urine is only a minor route, the kidneys are less exposed. There is also accumulation in the heart, which explains its potential for cardiac toxicity. Mechanism of action Because of its properties as a cardiac glycoside, oleandrin interferes with some essential processes within the cell, the most important of these being the inhibition of the Na-K ATPase. This protein enables the cell to exchange the cations Na+ and K+ between the intracellular and extracellular spaces, by which, for instance, electric signaling is made possible in nerve cells. Oleandrin binds to specific amino acids in the protein, causing it to lose its function. Apart from being a potent toxic compound, there are no results on oleandrin from human clinical research that support its use as a treatment for cancer or any disease. Toxicity Due to its considerable toxicity, use of oleander or its constituents, such as oleandrin, is regarded as unsafe and potentially lethal. Use of oleander may cause contact dermatitis, headache, nausea, lethargy, and high blood levels of potassium, with symptoms appearing within a few hours of ingestion. In one fatality, the blood concentration of oleandrin and a related cardiac glycoside from the oleander plant was estimated at 20 ng/ml. 
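The elimination figures above (an oral half-life of roughly 2 hours) can be sketched as simple first-order decay. This is an illustrative simplification, not a pharmacokinetic model from the literature; the 20 ng/ml starting value reuses the concentration quoted for the fatality above purely as a worked number:

```python
def plasma_concentration(c0, t_hours, half_life_hours=2.0):
    """Concentration remaining after t_hours of first-order elimination
    with the given half-life (simplified single-compartment sketch)."""
    return c0 * 0.5 ** (t_hours / half_life_hours)

# Starting from 20 ng/ml with the ~2 h oral half-life cited above:
print(plasma_concentration(20, 2))  # -> 10.0 (one half-life)
print(plasma_concentration(20, 8))  # -> 1.25 (four half-lives)
```

In reality, absorption, tissue distribution, and biliary recycling make the actual concentration profile more complex, as the text notes.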
In practice, there have been adult cases wherein 14–20 oleander leaves (of unknown oleandrin concentration) proved not to be fatal, but also a lethal case of a child that consumed only one leaf. Symptoms Symptoms of oleandrin poisoning can cause both gastrointestinal and cardiac effects. The gastrointestinal effects can consist of nausea, abdominal pain, and vomiting, as well as higher salivation and diarrhea (which may contain blood). After these first symptoms, the heart may be affected by tachyarrhythmia, bradyarrhythmia, premature ventricular contractions, or atrioventricular blockage. Also, xanthopsia (yellow vision), a burning sensation of the mucous membranes of the eyes, and gastrointestinal tract and respiratory paralysis can occur. Reactions to poisonings from this plant can also affect the central nervous system. These symptoms can include drowsiness, tremors, or shaking of the muscles, seizures, collapse, and even coma that can lead to death. Oleander sap can cause skin irritations, severe eye inflammation and irritation, and allergy reactions characterized by dermatitis when administered topically. Diagnosis Diagnosis of oleandrin poisoning is mainly based on description of the plant, how much of it was ingested, time since ingestion, and symptoms. Three methods are used for detecting oleandrin in the blood. Fluorescence polarization immunoassay is widely used. This test is slower and has a lower sensitivity than digoxin immunoassay (Digoxin III). A direct analytic technique like liquid chromatography-electrospray tandem mass spectrometry is used when there are medical or legal issues. Treatment Oleander toxicity should be treated aggressively, including as needed gastric lavage or induced emesis. Onset of symptoms may vary with the way of intake. Teas made of leaves or root of N. oleander give rise to a more acute onset, while eating raw leaves causes a slower onset of symptoms. 
Management of oleandrin poisoning involves the following measures. There is a lack of evidence weighing efficacy against harm. Activated charcoal is still used, since it binds toxins in the gastrointestinal tract to reduce absorption. It is uncertain whether repeated administration of activated charcoal is effective, although in theory it interrupts enterohepatic cycling; this treatment is also used for poisoning with digoxin, another cardiac glycoside. Supportive care, such as monitoring vital signs and electrolyte and fluid balance, is important. Patients may present with hypovolemia due to vomiting and diarrhea, but severely elevated potassium can also occur. Electrolyte balance is vital, since patients with low cardiac glycoside levels can still die after adequate digoxin Fab antibody treatment if they have disturbed electrolyte levels. Treatment of slow heart rate and heart rhythm irregularities may require intravenous isoprenaline or atropine. In moderate cases, with prolongation of the PR interval and progression to AV dissociation, cardiac pacing is used. The effectiveness of all these interventions is unknown, and they are associated with side effects. Therefore, consultation with a cardiologist is recommended when managing significant N. oleander-induced arrhythmias. The use of intravenous anti-digoxin Fab has proven successful in cases of oleandrin poisoning. A dose of 400 mg is used in digoxin poisoning, but a dose of 800 mg is recommended for oleandrin poisoning due to the lower binding affinity of the antibody to oleandrin. Patients receiving an adequate dose of anti-digoxin Fab show a good response, with serious arrhythmias resolving within two hours in fifty percent of cases. Treated patients showed a rapid increase in heart rate and a significant decline in serum potassium levels. Anti-digoxin Fab is used only sparingly in developing countries because of its high cost, even though it is such an effective treatment. 
Traditional medicine Although oleander has been used in traditional medicine for treating various disorders, there is no evidence that it is safe or effective for any medicinal purpose. Political controversy During the COVID-19 pandemic, Donald Trump's Secretary of Housing and Urban Development Ben Carson, and MyPillow CEO Mike Lindell, a major Trump booster and an investor in a company that develops oleandrin, promoted oleandrin as a potential treatment of the disease in a July 2020 Oval Office meeting with Trump, who expressed enthusiasm for the substance. These claims were widely regarded by scientists as dubious, misleading, and alarming, as well as having no clinical proof of safety or effectiveness. The unproven claims of benefit further caused concern among scientists that the Trump administration might force unwarranted FDA approval of oleandrin as a safe and effective treatment for COVID-19 infection. However, on 14 August 2020, the FDA rejected the application for marketing an oleandrin dietary supplement by Phoenix Biotechnology, Inc. – the manufacturer of the product – due to concerns that oleandrin would not be safe to consume. Effects on animals Oleandrin poisoning by eating oleander leaves can be lethal at low dosages. Deaths of sheep have been reported after ingestion of as little as one oleander leaf. Symptoms present in poisoned animals include bloody diarrhea and colic, the latter especially in horses. Because the leaf itself is quite bitter, only starving animals are likely to eat the plant. The lethal dosage for animals is estimated to be about 0.5 mg/kg. References Acetate esters ATPase inhibitors Cardenolides Cardiac glycosides Plant toxins
Oleandrin
Chemistry
2,060
44,981,215
https://en.wikipedia.org/wiki/West%20Anatolia%20region%20%28statistical%29
The West Anatolia Region (Turkish: Batı Anadolu Bölgesi) (TR5) is a statistical region in Turkey. Its largest city is Ankara, which serves as the national capital. Subregions and provinces Ankara Subregion (TR51) Ankara Province (TR510) Konya Subregion (TR52) Konya Province (TR521) Karaman Province (TR522) Age groups Internal immigration State register location of West Anatolia residents Marital status of 15+ population by gender Education status of 15+ population by gender See also NUTS of Turkey References External links TURKSTAT Sources ESPON Database Statistical regions of Turkey
West Anatolia region (statistical)
Mathematics
130
216,592
https://en.wikipedia.org/wiki/Postal%20stationery
A piece of postal stationery is a stationery item, such as a stamped envelope, letter sheet, postal card, lettercard, aerogram or wrapper, with an imprinted stamp or inscription indicating that a specific rate of postage or related service has been prepaid. It does not, however, include any postcard without a pre-printed stamp, and it is different from freepost for preprinted cards issued by businesses. In general, postal stationery is handled similarly to postage stamps; it is sold from post offices either at the face value of the printed postage or, more likely, with a surcharge to cover the additional cost of the stationery. It can take the form of an official mail issue produced only for the use of government departments. History Postal stationery has been in use since at least 1608 with folded letters bearing the coat of arms of Venice. Other early examples include British newspaper stamps, first issued in 1712; 25-centime letter sheets issued in 1790 by the government of Luxembourg; and Australian postal stationery that predated better-known issues like the British Mulready stationery introduced in 1840. The first modern form of postal stationery was the stamped, or postal stationery, envelope created by the United Kingdom around 1841. Other countries quickly followed suit, including the United States, which released the Nesbitt series of stamped envelopes in 1853. A variation of the stamped envelope, the registered envelope, has been widely used throughout Great Britain and the British Commonwealth, although none have been issued in the United States due to differences in mail registration procedures. Another form of stamped envelope is the so-called wrapper, a postal stationery envelope used to prepay the cost of delivery for a newspaper or periodical. Wrappers were first introduced in 1861 by the United States, which was followed by 110 other countries in total. 
All countries have since stopped producing them due to declining sales, with Cyprus being the last country to stop their use in 1991. The next innovation in postal stationery came in 1869 with the introduction of the postal card in Austria-Hungary. Postal cards are a type of cardstock that contains an imprinted stamp or indicium. They quickly caught on due to being mostly uniform and less bulky than traditional letters: Great Britain, Finland, Switzerland and Württemberg had all issued postal cards by 1871, followed by the United States in 1873. Despite its popularity, the postal card was soon followed by the letter card. A letter card is a postal stationery item consisting of a folded card with a prepaid imprinted stamp. The format was first issued by Belgium in 1882. Great Britain issued its first official letter cards in 1892 and Newfoundland introduced small reply cards starting in 1912. Letter cards had the advantage of providing twice as much room for writing a message as postal cards and were more private due to being folded over. A variation of the letter card called an aerogram was introduced in 1933 by a lieutenant colonel serving in the Middle East theatre, although the format was not officially endorsed by the Universal Postal Union until 1952. An aerogram is a thin, lightweight piece of foldable paper used for writing letters and sending them via airmail; unlike letter cards, aerograms can come unstamped and be issued by private companies. Collecting Most postal stationery pieces are collected as entires, that is, the whole card, sheet, or envelope. In the 19th century, it was common to collect "cut squares" (or cut-outs in the UK), which involved clipping the embossed or otherwise pre-printed indicia from postal stationery entires. This destroyed the envelope. As a result, one cannot tell from a cut square what specific envelope it came from and, many times, the cancellation information. 
The manner in which the stamped envelope is cut out (defined by the term "knife") cannot be determined from a cut square. Thus, most collectors prefer entires to cut squares. Many country-specific stamp catalogs list postal stationery and there are books devoted to the postal stationery of individual countries. The current, but now dated, principal encyclopedic work is the nineteen volume Higgins & Gage World Postal Stationery Catalog. Collectors societies Collectors of postal stationery may seek out postal stationery societies or study groups in other countries. These societies provide information, publications and guidance to those who are interested. They include: Australia: The Postal Stationery Society of Australia Belgium: Societe Belge de l'Entier Postal Canada: British North America Philatelic Society Postal Stationery Study Group France: Entiers Postaux Français Germany: Berliner Ganzsachen-Sammler-Verein Great Britain / UK: Postal Stationery Society of Great Britain, The Postal Stationery Society Netherlands: Nederlandse Vereniging van Poststukken Switzerland: Swiss Postal Stationery Collectors Society / Schweizerischer Ganzsachen-Sammler-Verein (SGSSV) United States: United Postal Stationery Society Publications Catalogs World wide Higgins & Gage World Postal Stationery Catalog Postal stationery: A Collector's guide to a Fascinating World-Wide Philatelic Pursuit The Collectors' Guide to Postal Stationery What is Postal Stationery? 
Great Britain British Postal Stationery, A Priced Handbook of the Postal Stationery of Great Britain Collect British Postal Stationery: A Simplified Listing of British Postal Stationery 1840 to 2007 Postal Stationery of Great Britain United States Guide to the Stamped Envelopes and Wrappers of the United States United States Postal Card Catalog Hawaii Postal Stationery United States Stamped Envelopes Essays and Proofs Canada Canada & Newfoundland Postal Stationery Catalogue Canadian Precancelled Postal Stationery Handbook The Postal Stationery of Canada. A reference catalogue The Postal History of the Post Card in Canada 1878 - 1911 Australia The Postal Stationery of the Commonwealth of Australia India The Comprehensive India States Postal Stationery Listing A Guide to Modern Indian Postal Stationery, 1947-2003, Vol. 1 (Envelopes) Encyclopedia of Indian Postal Stationery British India Postal Stationery A Guide to Postal Stationery of India Vol. I, II, III, IV Phila Catalogue of Indian Postal Stationery The Indian Convention States Postal Stationery The Comprehensive India States Postal Stationery Listing Postal Stationery of British India, 1856-1947 Russia Marked Postal Cards of the USSR South America Postal Stationery of Mexico Postal Stationery of the Canal Zone Postal Stationery of Peru The Postal Stationery of the Cuban Republic Postal Stationery of Cuba and Puerto Rico Under United States Administration Postal Cards of Spanish Colonial Cuba, Philippines and Puerto Rico Africa Liberian Postal Stationery Periodicals Postal Stationery (United Postal Stationery Society) The Postal Stationery Collector (Postal Stationery Society of Australia) The Postal Card Specialist References External links British Postal Stationery from 1971 (archived 8 April 2016) Catalog of Postal Stationery Items Postal Cards of Cuba (1878–1958): Comprehensive Cuban collection (archived 11 February 2011) Postal Stationery of Denmark : Internet display of a Danish postal stationery 
collection, 1871–1905 by Lars Engelbrecht. Postal stationery at postalhistory.org (archived 5 January 2002) The FIP (Federation Internationale de Philatelie) Postal Stationery Commission: Worldwide Philatelic terminology Postal systems Stationery Envelopes
Postal stationery
Technology
1,496
2,339,273
https://en.wikipedia.org/wiki/Best%20available%20technology
The best available technology or best available techniques (BAT) is the technology approved by legislators or regulators for meeting output standards for a particular process, such as pollution abatement. Similar terms are best practicable means or best practicable environmental option. BAT is a moving target on practices, since developing societal values and advancing techniques may change what is currently regarded as "reasonably achievable", "best practicable" and "best available". A literal understanding would connect it with a "spare no expense" doctrine which prescribes the acquisition of the best state-of-the-art technology available, without regard for traditional cost-benefit analysis. In practical use, the cost aspect is also taken into account. See also discussions on the topic of the precautionary principle which, along with considerations of best available technologies and cost-benefit analyses, is also involved in discussions leading to formulation of environmental policies and regulations (or opposition to same). History Best practicable means appeared as early as the Leeds Act of 1848, was used for the first time in UK national primary legislation in section 5 of the Salmon Fishery Act 1861, and found another early use in the Alkali Act Amendment Act 1874. Best available techniques not entailing excessive costs (BATNEEC), sometimes referred to as best available technology, was introduced in 1984 into European Economic Community law with Directive 84/360/EEC. The BAT concept was first used in the 1992 OSPAR Convention for the protection of the marine environment of the North-East Atlantic for all types of industrial installations (for instance, chemical plants). Some legal doctrine deems that it has already acquired the status of customary law. In the United States, BAT or similar terminology is used in the Clean Air Act and Clean Water Act. 
European Union directives Best available techniques not entailing excessive costs (BATNEEC), sometimes referred to as best available technology, was introduced in 1984 with Directive 84/360/EEC and applied to air pollution emissions from large industrial installations. In 1996, Directive 84/360/EEC was superseded by the Integrated Pollution Prevention and Control (IPPC) Directive 96/61/EC, which applied the framework concept of Best Available Techniques (BAT) to, amongst others, the integrated control of pollution to the three media air, water and soil. The concept is also part of the directive's recast in 2008 (Directive 2008/1/EC) and its successor directive, the Industrial Emissions Directive 2010/75/EU published in 2010. A list, with "Adopted Documents", of industries which are subject to the IPPC directive contains more than 30 entries, including everything from the ceramic manufacturing industry to the wood-based panels production industry. BAT for a given industrial sector are described in reference documents called BREFs (Best Available Techniques Reference documents), as defined in article 3(11) of the Industrial Emissions Directive. BREFs are the result of an exchange of information between European Union Member States, the industries concerned, non-governmental organizations promoting environmental protection and the European Commission pursuant to article 13 of the directive. This exchange of information is referred to as the Sevilla process because it is steered by the European IPPC Bureau within the Institute for Prospective Technological Studies of the European Commissions' Joint Research Centre, which is based in Seville. The process is codified into law by Commission Implementing Decision 2012/119/EU. The most important chapter of the BREFs, the BAT conclusions, are published as implementing decisions of the European Commission in the Official Journal of the European Union. 
According to article 14(3) of the Industrial Emissions Directive, the BAT conclusions shall be the reference for setting permit conditions of large industrial installations. Pollution control According to article 15(2) of the Industrial Emissions Directive, emission limit values and the equivalent parameters and technical measures in permits shall be based on the best available techniques, without prescribing the use of any technique or specific technology. The directive includes a definition of best available techniques in article 3(10): "best available techniques" means the most effective and advanced stage in the development of activities and their methods of operation which indicates the practical suitability of particular techniques for providing the basis for emission limit values and other permit conditions designed to prevent and, where that is not practicable, to reduce emissions and the impact on the environment as a whole: - "techniques" includes both the technology used and the way in which the installation is designed, built, maintained, operated and decommissioned; - "available" means those developed on a scale which allows implementation in the relevant industrial sector, under economically and technically viable conditions, taking into consideration the costs and advantages, whether or not the techniques are used or produced inside the Member State in question, as long as they are reasonably accessible to the operator; - "best" means most effective in achieving a high general level of protection of the environment as a whole. Food, drink and milk industries A Reference Document on Best Available Techniques (BREF) in the food, drink and milk industries of the European Union was published in August 2006, and reflected an information exchange carried out according to Article 16.2 of Council Directive 96/61/EC. It runs to more than 600 pages, and is replete with tables and flowchart diagrams. 
The 2006 BREF on these industries was superseded by another published in January 2017, which runs to more than 1000 pages. United States environmental law Clean Air Act The Clean Air Act Amendments of 1990 require that certain facilities employ Best Available Control Technology to limit emissions. ...an emission limitation based on the maximum degree of reduction of each pollutant subject to regulation under this Act emitted from or which results from any major emitting facility, which the permitting authority, on a case-by-case basis, taking into account energy, environmental, and economic impacts and other costs, determines is achievable for such facility through application of production processes and available methods, systems, and techniques, including fuel cleaning, clean fuels, or treatment or innovative fuel combustion techniques for control of each such pollutant. Clean Water Act The Clean Water Act (CWA) requires issuance of national industrial wastewater discharge regulations (called "effluent guidelines"), which are based on BAT and several related standards. ...effluent limitations for categories and classes of point sources,... which (i) shall require application of the best available technology economically achievable for such category or class, which will result in reasonable further progress toward the national goal of eliminating the discharge of all pollutants. ...Factors relating to the assessment of best available technology shall take into account the age of equipment and facilities involved, the process employed, the engineering aspects of the application of various types of control techniques, process changes, the cost of achieving such effluent reduction, non-water quality environmental impact (including energy requirements), and such other factors as the Administrator deems appropriate. In the development of the effluent standards, the BAT concept is a "model" technology rather than a specific regulatory requirement. The U.S. 
Environmental Protection Agency (EPA) identifies a particular model technology for an industry, and then writes a regulatory performance standard based on the model. The performance standard is typically expressed as a numeric effluent limit measured at the discharge point. The industrial facility may use any technology that meets the performance standard. A related CWA provision for cooling water intake structures requires standards based on "best technology available." ...the location, design, construction, and capacity of cooling water intake structures reflect the best technology available for minimizing adverse environmental impact. International conventions The concept of BAT is also used in a number of international conventions such as the Minamata Convention on Mercury, the Stockholm Convention on Persistent Organic Pollutants, or the OSPAR Convention for the protection of the marine environment of the North-East Atlantic. See also Appropriate technology Best Available Control Technology Lowest Achievable Emissions Rate References External links BAT reference documents and BAT conclusions of the European Union OECD overview on BAT and similar concepts worldwide Air pollution European Union directives European Union food law Pollution control technologies United States federal environmental legislation Water pollution Industrial ecology History of agriculture in the United Kingdom Agricultural health and safety Food law
Best available technology
Chemistry,Engineering,Environmental_science
1,688
14,167,166
https://en.wikipedia.org/wiki/CALM2
Calmodulin 2 is a protein that in humans is encoded by the CALM2 gene. A member of the calmodulin family of signaling molecules, it is an intermediary between calcium ions, which act as a second messenger, and many intracellular processes, such as the contraction of cardiac muscle. Clinical significance Mutations in CALM2 are associated with cardiac arrhythmias. In particular, several single-nucleotide polymorphisms of CALM2 have been reported as potential causes of sudden infant death syndrome. Due to their heritability, CALM2 mutations can affect multiple children in a family, and the discovery of the deadly consequences of these mutations has led to challenges against the murder convictions of mothers of multiple deceased infants, as in the Australian case of Kathleen Folbigg, acquitted after more than 20 years' imprisonment. Interactions CALM2 has been shown to interact with AKAP9. References External links Further reading EF-hand-containing proteins
CALM2
Chemistry
196
77,457,489
https://en.wikipedia.org/wiki/Bal%20%26%20co
Distillerie Bal & Co was a jenever distillery in Merksem, founded in 1861. History After an early career as clerks at the Van den Bergh Distillery, Jan Baptist and Corneel Jozef Bal opened their own jenever distillery under the name La Couronne - De Kroon Distillery. When Jan Baptist Bal died in 1866, his brother Corneel Jozef and his 4 sons continued the company under the name Distillerie C.J. Bal & Cie. In 1894 the company's name was changed to Usines Bal & Cie. The Neoclassical offices (1888) of De Kroon Steam Distillery, a.k.a. Distillerie à vapeur 'La Couronne', were located in Korte Winkelstraat in Antwerp. They were designed by Antwerp architect Joseph Hertogs (1861-1931). The company expanded its activities to include a yeast factory, on the corner of Cassiersstraat and Van de Wervestraat in Antwerp, and the production of butter and margarine in Merksem. In 1959 the company was taken over by Frans Hol. International Exhibitions The distillery took part in several international exhibitions: Antwerp, 1885 Antwerp, 1894 Ghent, 1913 Products Known registered labels of the company: Bal's Oude Klare, registered on 11 May 1906 Hollandia Bitter (eau-de-vie d'amer), registered on 28 May 1907 Zuivere Oude Graan Genever van 't kruikje, registered on 8 November 1923 Graanjenever van 't kruikje, Bal & Co, registered on 8 November 1923 References Distilleries Antwerp
Bal & co
Chemistry
353
56,997,080
https://en.wikipedia.org/wiki/LISAA%20School%20of%20Art%20%26%20Design
LISAA School of Art & Design (L'Institut Supérieur des Arts Appliqués) is a French private college for applied arts education founded in 1986. LISAA has locations in Paris, Rennes, Nantes, Strasbourg, Bordeaux and Toulouse. The school is one of about 100 recognized by the French Ministry of Culture and Communication. Diplomas are offered in graphic design, animation & video games, interior architecture & design, and fashion. History In 1986, Michel Glize, an architect and entrepreneur, founded LISAA. In 2012, the school was sold to Galileo Global Education. Teaching The main concentrations in the academic curriculum are graphic design, animation & video games, interior architecture & design, and fashion. Many programs last five years (bac+5), while some are shorter (bachelors, BTS, MANAA). Foundation year Two one-year foundation courses are offered at LISAA: Introductory course in applied arts Foundation year in Architecture (preparatory class for schools of architecture) Fashion courses Master and bachelor diplomas exist for the fashion industry: Master of Interior Decoration Master of Journalist, blogger, influencer fashion/beauty & luxury Master of Fashion Design & Business Two or three year course of Artistic make-up artist Bachelor (BTS) of Fashion Design / Pattern Making Bachelor (BTS) of Fashion Design / Textiles, materials, surfaces Bachelor (BTS) of Fashion Design / Textile Design Master of Fashion & Luxury Management Interior architecture and design courses Master, bachelor diplomas are offered in the interior architecture and design field: Master of Interior Decoration Postgraduate program of Interior Architecture & Design (5 years) Foundation year in Architecture (preparatory class for schools of architecture in one year) Bachelor (BTS) of Interior Architecture Master of Interior Architecture & Connected Design Master of Interior Architecture & Service Design Master of Interior Architecture & Global Design Animation and video games Bachelor, master diplomas 
are offered in the animation and video game field: MBA Video Game Production (1 year) Master Supervisor & Director Animation & Special Effects (2-year programme) Master of Video Game Creative Director Bachelor of 2-D/3-D Animation Bachelor of 2-D Animation Bachelor of 3-D Animation Bachelor of 2-D/3-D Video Games Bachelor of Visual Effects Graphic and motion design Bachelor, master diplomas are offered in the graphic and motion design field: Bachelor (BTS) of Graphic Design / Print Bachelor (BTS) of Graphic Design / Digital Media Bachelor (BTS) of Graphic Design Bachelor Motion Design Bachelor Graphic Design Master Digital Art Direction / Animated media Master Digital Art Direction / UX Design Master Digital Art Direction Master Art Direction for Creative & Cultural Industries Partnerships Academic partnerships LISAA has academic partnerships with schools in Europe and the Americas: Helmo (Liège / Belgium) Vilnius College of Design (Vilnius / Lithuania) Thomas More Mechelen (Malines / Belgium) IED (Milan / Italy) IED (Madrid / Spain) IADE Instituto Superior de Design (Lisbon / Portugal) VIA (The Netherlands) KISD (Cologne / Germany) Hochschule (Trier / Germany) Manchester Metropolitan University (United Kingdom) Abadir Academy (Italy) University of Applied Sciences MACROMEDIA Leeds College of Arts (United Kingdom) ARTCOM (Morocco) Nazareth College, Rochester, New York (USA) Universidad IEU - (Mexico) Rankings The French student magazines "l'Étudiant" and "Le Figaro Etudiant" regularly rank LISAA among the top schools in France in various fields of applied arts. International LISAA School of Art & Design is a member of CUMULUS, the international association of art and design universities. Design students take part in exchanges with other CUMULUS member schools. Ten percent of the cohort is of foreign origin. Ownership The school was bought by the investment fund Galileo Global Education in 2012. 
References External links Private universities and colleges in France Art schools in France Art schools in Paris Design schools in France Architecture schools in France Industrial design Education in Paris Education in Île-de-France
LISAA School of Art & Design
Engineering
819
63,770,837
https://en.wikipedia.org/wiki/Sequences%20%28book%29
Sequences is a mathematical monograph on integer sequences. It was written by Heini Halberstam and Klaus Roth, published in 1966 by the Clarendon Press, and republished in 1983 with minor corrections by Springer-Verlag. Although planned to be part of a two-volume set, the second volume was never published. Topics The book has five chapters, each largely self-contained and loosely organized around different techniques used to solve problems in this area, with an appendix on the background material in number theory needed for reading the book. Rather than being concerned with specific sequences such as the prime numbers or square numbers, its topic is the mathematical theory of sequences in general. The first chapter considers the natural density of sequences, and related concepts such as the Schnirelmann density. It proves theorems on the density of sumsets of sequences, including Mann's theorem, according to which the Schnirelmann density of a sumset is at least the sum of the Schnirelmann densities (or one, if that sum exceeds one), and Kneser's theorem on the structure of sequences whose lower asymptotic density is subadditive. It studies essential components: sequences that, when added to any sequence of Schnirelmann density strictly between zero and one, increase that density. It proves that additive bases are essential components, and gives examples of essential components that are not additive bases. The second chapter concerns the number of representations of the integers as sums of a given number of elements from a given sequence, and includes the Erdős–Fuchs theorem according to which this number of representations cannot be close to a linear function. The third chapter continues the study of numbers of representations, using the probabilistic method; it includes the theorem that there exists an additive basis of order two whose number of representations is logarithmic, later strengthened to all orders in the Erdős–Tetali theorem. 
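For readers unfamiliar with the notation, the two central quantities of the first chapter can be stated compactly (these are the standard definitions, not quotations from the book):

```latex
% Schnirelmann density of a set A of positive integers,
% where A(n) counts the elements of A lying in [1, n]:
\sigma(A) = \inf_{n \ge 1} \frac{A(n)}{n}

% Mann's theorem, for the sumset A + B = \{a + b : a \in A,\ b \in B\}
% (with 0 taken to belong to both A and B):
\sigma(A + B) \ge \min\bigl(1,\ \sigma(A) + \sigma(B)\bigr)
```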
After a chapter on sieve theory and the large sieve (unfortunately missing significant developments that happened soon after the book's publication), the final chapter concerns primitive sequences of integers, sequences like the prime numbers in which no element is divisible by another. It includes Behrend's theorem that such a sequence must have logarithmic density zero, and the seemingly-contradictory construction by Abram Samoilovitch Besicovitch of primitive sequences with natural density close to 1/2. It also discusses the sequences that contain all integer multiples of their members, the Davenport–Erdős theorem according to which the lower natural and logarithmic density exist and are equal for such sequences, and a related construction of Besicovitch of a sequence of multiples that has no natural density. Audience and reception This book is aimed at other mathematicians and students of mathematics; it is not suitable for a general audience. However, reviewer J. W. S. Cassels suggests that it could be accessible to advanced undergraduates in mathematics. Reviewer E. M. Wright notes the book's "accurate scholarship", "most readable exposition", and "fascinating topics". Reviewer Marvin Knopp describes the book as "masterly", and as the first book to overview additive combinatorics. Similarly, although Cassels notes the existence of material on additive combinatorics in the books Additive Zahlentheorie (Ostmann, 1956) and Addition Theorems (Mann, 1965), he calls this "the first connected account" of the area, and reviewer Harold Stark notes that much of material covered by the book is "unique in book form". Knopp also praises the book for, in many cases, correcting errors or deficiencies in the original sources that it surveys. Reviewer Harold Stark writes that the book "should be a standard reference in this area for years to come". 
References Additive combinatorics Integer sequences Mathematics books 1966 non-fiction books 1983 non-fiction books Clarendon Press books
Sequences (book)
Mathematics
794
49,875,385
https://en.wikipedia.org/wiki/Holin%20superfamily%20VI
The Holin superfamily VI is a superfamily of integral membrane transport proteins. It is one of seven holin superfamilies. In general, these proteins are thought to play a role in regulated cell death, although functionality varies between families and individual members. The Holin superfamily VI includes two TC families: 1.E.12 - The φAdh Holin (φAdh Holin) Family 1.E.26 - The Holin LLH (Holin LLH) Family Superfamily VI includes families with members only from Bacillota. These proteins appear to have one N-terminal transmembrane segment (TMS), followed by an amphipathic, weakly hydrophobic peak that was not predicted to be transmembrane by the topological programs used by Reddy and Saier (2013). The average sizes of the members of the two families belonging to the Holin superfamily VI are 135 ± 11 and 130 ± 26 amino acyl residues (aas), respectively. See also Holin Lysin Transporter Classification Database References Holins Protein families Protein superfamilies
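The topology predictions mentioned above are typically built on sliding-window hydropathy scans. The sketch below is a generic Kyte–Doolittle scan written for illustration, not the specific programs used by Reddy and Saier (2013); the example sequence is invented.

```python
# Generic sliding-window hydropathy scan of the kind topology-prediction
# programs use to flag candidate transmembrane segments (TMSs).
# Kyte-Doolittle hydropathy values for the 20 standard amino acids.
KD = {'I': 4.5, 'V': 4.2, 'L': 3.8, 'F': 2.8, 'C': 2.5, 'M': 1.9, 'A': 1.8,
      'G': -0.4, 'T': -0.7, 'S': -0.8, 'W': -0.9, 'Y': -1.3, 'P': -1.6,
      'H': -3.2, 'E': -3.5, 'Q': -3.5, 'D': -3.5, 'N': -3.5, 'K': -3.9,
      'R': -4.5}

def hydropathy_profile(seq, window=19):
    """Mean hydropathy of each length-`window` stretch; sustained peaks
    above roughly 1.6 on this scale are commonly read as candidate TMSs."""
    return [sum(KD[a] for a in seq[i:i + window]) / window
            for i in range(len(seq) - window + 1)]

# A run of strongly hydrophobic residues scores as TMS-like:
print(max(hydropathy_profile('M' + 'LIV' * 8 + 'KDKD')) > 1.6)  # True
```

An amphipathic, weakly hydrophobic region, by contrast, produces only a shallow peak that stays below the usual cutoff, which is why such segments can be missed by these programs.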
Holin superfamily VI
Biology
229
3,980,235
https://en.wikipedia.org/wiki/Prairie%20Creek%20Redwoods%20State%20Park
Prairie Creek Redwoods State Park is a state park, located in Humboldt County, California, near the town of Orick and north of Eureka. The park is a coastal sanctuary for old-growth Coast Redwood trees. The park is jointly managed by the California Department of Parks and Recreation and the National Park Service as a part of the Redwood National and State Parks. This group of parks (which includes Del Norte Coast Redwoods State Park, Jedediah Smith Redwoods State Park, and Redwood National Park) has been collectively designated as a World Heritage Site and forms a part of the California Coast Ranges International Biosphere Reserve. The meadow along the Newton B. Drury Scenic Parkway, with its population of Roosevelt elk, is considered a centerpiece of the park, located near the information center and campground. These open areas of grassland within the redwood forest are locally known as prairies; and the park takes its name from Prairie Creek flowing near the western edge of the meadow and along the west side of the parkway. Other popular sites in the park are Fern Canyon and Gold Bluffs Beach. The park is also home to the tailed frog and several species of salmon. History The Yurok, who traditionally lived near the Klamath River and along the coastline of the Pacific Ocean, primarily lived on the lands of what is now known as Prairie Creek Redwoods State Park. Yurok villages, numbering no less than fifty in total, were located from Little River, California, in the south to the Wilson Creek Basin which runs into False Klamath Cove in the north (on the southern edge of Del Norte Coast Redwoods State Park). Some of the first Euro-Americans to visit the vicinity arrived in 1851 with the discovery of gold in the area that would become known as Gold Bluffs. Gold Bluffs had at one time been a substantial mining camp, although little remains of the camp today. With the end of the Civil War and a fall in the price of gold, operations at the Gold Bluffs were shut down. 
In 1872 a Captain Taylor of New York visited the Gold Bluffs to obtain the mine and exploit the rich sands supposedly deposited offshore. In the spring of 1873, over 100 tons of sand were raised from an area from one-half mile to within of the bluffs, and in depths of from eight to four fathoms of water. However, by the 1880s activities at the Gold Bluffs again began to slump. One story of the region comes from the area of Gold Bluffs. In a newspaper article from 1984, Thelma Hufford recorded this tale: At Upper Bluffs, imported miners from Cornwall [England], who were expert tunnel builders and who know how to set up timbers in a tunnel [were employed] ... Arthur Davison said [a tunnel] was opened in 1898. The tunnel was through to the coast. It was long and . The tunnel was built to bring water from Prairie Creek to the headwaters of Butler Creek. A reservoir was built on the west side of the creek. It was used for five years. ... The opening, Fay Aldrich said, was on the Prairie Creek side of Joe Stockel's place near the apple tree on Highway 101. By 1920 mining operations at the Gold Bluffs had been closed. The park was created in 1923 with an initial donation of by then owner Zipporah Russ to the Save the Redwoods League. By 1931, the League had acquired an additional from the Sage Land and Improvement Company, a large timber concern. During the great depression, a Civilian Conservation Corps camp was stationed in the park, clearing out campsites and creating fences on the borders of the prairie. Trees Notable redwoods include Big Tree, Corkscrew Redwood, and the Cathedral Trees. Many redwoods in the park have reached tall. Besides Coast redwoods, other tall coniferous tree species in the park's forests include coast Douglas fir, Sitka spruce and western hemlock. 
Trails Trails in the park include: Miners Ridge and James Irvine – Brown Creek Loop – Big Tree Loop – Ten Taypo Trail – Rhododendron and Cal Barrel – West Ridge and Prairie Creek South – West Ridge and Rhododendron North – The Friendship Ridge Trail – The Ah Pah Trail – The Nature Trail – Hiker Jim Hamm was attacked by a mountain lion while hiking the Brown Creek Loop in 2007. Facilities A visitor center is provided with displays, wall maps and bookstore. It is along the same drive as the campground and some day-use parking is available. There is also parking along parts of the parkway, where elk may be seen; no day-use fee is required to park and leave vehicles in that area during daylight hours. Restrooms are located near the visitor center and also nearby at the Big Tree parking lot. References External links California State Parks: official Prairie Creek Redwoods State Park website Images and Information including Prairie Creek's Atlas Grove North Coast Redwood Interpretive Association State parks of California Redwood National and State Parks Parks in Humboldt County, California Coast redwood groves Old-growth forests Beaches of Del Norte County, California Protected areas established in 1925 1925 establishments in California Beaches of Northern California
Prairie Creek Redwoods State Park
Biology
1,043
70,888,472
https://en.wikipedia.org/wiki/Antimonide%20iodide
Antimonide iodides or iodide antimonides are compounds containing anions composed of iodide (I−) and antimonide (Sb3−). They can be considered as mixed anion compounds. They are in the category of pnictide halides. Related compounds include the antimonide chlorides, antimonide bromides, phosphide iodides, and arsenide iodides. List References Antimonides Iodides Mixed anion compounds
Antimonide iodide
Physics,Chemistry
107
10,307,206
https://en.wikipedia.org/wiki/Legacy%20Plug%20and%20Play
The term Legacy Plug and Play, also shortened to Legacy PnP, describes a series of specifications and Microsoft Windows features geared towards operating system configuration of devices; some of the associated device IDs are now assigned by the UEFI Forum. The standards were primarily aimed at the IBM PC standard bus, later dubbed Industry Standard Architecture (ISA). Related specifications are also defined for the common external or specialist buses commonly attached via ISA at the time of development, including RS-232 and parallel port devices. As a Windows feature, Plug and Play refers to operating system functionality that supports connectivity, configuration and management of native plug and play devices. Originally considered part of the same feature set as the specifications, Plug and Play in this context refers primarily to the responsibilities and interfaces associated with Windows driver development. Plug and Play allows for detection of devices without user intervention, and occasionally for minor configuration of device resources, such as I/O ports and device memory maps. PnP is a specific set of standards, not to be confused with the generic term plug and play, which describes any hardware specification that alleviates the need for user configuration of device resources. ACPI is the successor to Legacy Plug and Play. Overview The Plug and Play standard requires configuration of devices to be handled by the PnP BIOS, which then provides details of resource allocations to the operating system. The process is invoked at boot time. When the computer is first turned on, compatible devices are identified and assigned non-conflicting I/O addresses, interrupt request numbers and DMA channels. The term was adopted by Microsoft in reference to their Windows 95 product. Other operating systems, such as AmigaOS Autoconfig and the Mac OS NuBus system, had already supported such features for some time (under various names, or no name). 
Even Yggdrasil Linux advertised itself as "Plug and Play Linux" at least two years before Windows 95. But the term plug and play gradually became universal due to worldwide acceptance of Windows. Typically, non-PnP devices need to be identified in the computer's BIOS setup so that the PnP system will not assign other devices the resources in use by the non-PnP devices. Problems in the interactions between legacy non-PnP devices and the PnP system can cause it to fail, leading to this technology having historically been referred to as "plug and pray". Specifications Legacy Plug and Play Specification was defined by Microsoft and Intel, which proposed changes to legacy hardware, as well as the BIOS to support operating system-bound discovery of devices. These roles were later assumed by the ACPI standard, which also moves support for power management and configuration into the operating system, as opposed to the firmware as previously required by the "Plug and Play BIOS" and APM specifications. The following standards compose what Microsoft describe as Legacy Plug and Play, as opposed to native Plug-and-Play specifications such as PCI and USB. Plug and Play BIOS Specification Plug and Play ISA Specification Plug and Play Design Specification for IEEE 1394 Plug and Play External COM Device Specification Plug and Play Parallel Port Device Specification Plug and Play ATA Specification Plug and Play SCSI Specification Legacy Plug and Play Guidelines Windows Vista requires an ACPI-compliant BIOS, and the ISAPnP is disabled by default. Requirements To use Plug and Play, three requirements have to be met: The OS must be compatible with Plug and Play. The BIOS must support Plug and Play. The device to be installed must be a Plug and Play compliant device. Hardware identification Plug-and-play hardware typically also requires some sort of ID code that it can supply, in order for the computer software to correctly identify it. 
The Plug and Play ID can have two forms: a 3-letter manufacturer ID plus a 4-digit hexadecimal product number (e.g. PNP0A08), or a 4-letter manufacturer ID plus a 4-digit hexadecimal product number (e.g. MSFT0101). In addition, a PnP device may have a Class Code and a Subsystem ID. This ID code system was not integrated into the early Industry Standard Architecture (ISA) hardware common in PCs when Plug and Play was first introduced. ISA Plug and Play caused some of the greatest difficulties that made PnP initially very unreliable. This led to the derisive term "Plug and Pray", since I/O addresses and IRQ lines were often set incorrectly in the early days. Later computer buses like MCA, EISA and PCI (which was becoming the industry standard at that time) integrated this functionality. Finally, the operating system of the computer needs to be able to handle these changes. Typically, this means looking for interrupts from the bus saying that the configuration has changed, and then reading the information from the bus to locate what happened. Older bus designs often required the entire system to be read in order to locate these changes, which can be time-consuming for many devices. More modern designs use some sort of system to either reduce or eliminate this "hunt"; for example, USB uses a hub system for this purpose. When the change is located, the OS then examines the information in the device to figure out what it is. It then has to load the appropriate device drivers in order to make it work. In the past, this was an all-or-nothing affair, but modern operating systems often include the ability to find the proper driver on the Internet and install it automatically. 
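The two ID forms can be illustrated with a small parser. This is a hypothetical helper written for this article, not part of any Windows or UEFI API; it only splits an ID string into its vendor prefix and 16-bit product number.

```python
def parse_pnp_id(hwid: str):
    """Split a PnP/ACPI hardware ID into its vendor prefix and product
    number (hypothetical helper, for illustration only)."""
    # Vendor prefix is 3 letters (legacy PNP IDs) or 4 letters (ACPI IDs);
    # the remaining 4 characters are a hexadecimal product number.
    prefix_len = len(hwid) - 4
    if prefix_len not in (3, 4) or not hwid[:prefix_len].isalpha():
        raise ValueError(f"not a PnP/ACPI hardware ID: {hwid!r}")
    return hwid[:prefix_len], int(hwid[prefix_len:], 16)

print(parse_pnp_id("PNP0A08"))   # ('PNP', 2568)
print(parse_pnp_id("MSFT0101"))  # ('MSFT', 257)
```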
See also User friendliness Extended System Configuration Data (ESCD) Universal Plug and Play (UPnP) Low Pin Count (LPC) References External links UEFI Forum PNP ID and ACPI ID Registry Microsoft Plug and Play Specifications and Papers https://web.archive.org/web/20040615191235/http://www.microsoft.com/whdc/system/pnppwr/pnp/pnpid.mspx (P&P ID) https://web.archive.org/web/20041019180414/http://www.microsoft.com/whdc/archive/idpnp.mspx https://web.archive.org/web/20050107175505/http://www.microsoft.com/whdc/archive/pnpbiosp.mspx Plug-n-Play SECS/GEM for Legacy Equipment IBM PC compatibles Computer peripherals Motherboard BIOS
Legacy Plug and Play
Technology
1,321
62,224,619
https://en.wikipedia.org/wiki/Undercut%20%28welding%29
In welding, undercutting is when the weld reduces the cross-sectional thickness of the base metal. This type of defect reduces the strength of the weld and workpieces. One reason for this defect is excessive current, causing the edges of the joint to melt and drain into the weld; this leaves a drain-like impression along the length of the weld. Another is poor technique that does not deposit enough filler metal along the edges of the weld. A third reason is using an incorrect filler metal, because it will create greater temperature gradients between the center of the weld and the edges. Other causes include too small an electrode angle, a damp electrode, excessive arc length, and slow speed. References Welding
Undercut (welding)
Engineering
159
77,132,409
https://en.wikipedia.org/wiki/Claudia%20Turro
Claudia Turro is an American inorganic chemist who is the Dow Professor of Chemistry at The Ohio State University (OSU). Since July 2019 she has been the Chair of the OSU Department of Chemistry and Biochemistry. She was elected Fellow of the American Chemical Society in 2010 and is a member of the American Academy of Arts and Sciences (2023) and the National Academy of Sciences (2024). Education Claudia Turro earned her B.S. with Honors from Michigan State University in 1987. She completed her Ph.D. in 1992 at the same institution, where she collaborated with Daniel G. Nocera and George E. Leroi. Following this, she was awarded a Jane Coffin Childs Memorial Fund for Medical Research Postdoctoral Fellowship, which allowed her to conduct postdoctoral research at Columbia University with Nicholas J. Turro (no relation) from 1992 to 1995. Research and career Turro joined the faculty of The Ohio State University in 1996. She and her group study light-initiated reactions of metal complexes, with applications in photochemotherapy (PCT) and treatment of diseases, luminescent sensors, and solar energy conversion. They investigate the excited states of mononuclear and dinuclear transition metal complexes to enhance their reactivity. Their research focuses on controlling the dynamics of excited states, including photophysical properties and reactivity, such as energy transfer, charge separation, recombination, and photochemical reactions. This understanding is crucial for applications in solar energy, PCT, and sensing. Awards and honors 1998 Early CAREER Award by the National Science Foundation 1999 Arnold and Mabel Beckman Foundation Young Investigators Award 2010 Elected Fellow of the American Chemical Society 2012 Fellow of the American Association for the Advancement of Science 2014 Award in Photochemistry from the Inter-American Photochemical Society 2016 Recipient of Edward W. 
Morley Medal from the Cleveland section of the ACS 2023 Elected member of the American Academy of Arts and Sciences 2024 Elected member of the National Academy of Sciences Selected publications References Living people American chemists American inorganic chemists American women chemists Fellows of the American Chemical Society Lists of American Academy of Arts and Sciences members Members of the United States National Academy of Sciences Year of birth missing (living people)
Claudia Turro
Chemistry
451
6,850,683
https://en.wikipedia.org/wiki/Shell%20shoveling
Shell shoveling, in network security, is the act of redirecting the input and output of a shell to a service so that it can be remotely accessed, creating a remote shell. In computing, the most basic method of interfacing with the operating system is the shell. On Microsoft Windows-based systems, this is a program called cmd.exe or COMMAND.COM. On Unix or Unix-like systems, it may be any of a variety of programs such as bash, ksh, etc. This program accepts commands typed from a prompt and executes them, usually in real time, displaying the results to what is referred to as standard output, usually a monitor or screen. In the shell shoveling process, one of these programs is set to run (perhaps silently or without notifying someone observing the computer), accepting input from a remote system and redirecting output to the same remote system; therefore the operator of the shoveled shell is able to operate the computer as if they were present at the console. See also Console redirection CTTY (DOS command) Serial over LAN redirection (SOL) Remote Shell References Further reading Computer network security Command shells
Shell shoveling
Engineering
239
47,782,660
https://en.wikipedia.org/wiki/OREDA
The Offshore and Onshore Reliability Data (OREDA) project was established in 1981 in cooperation with the Norwegian Petroleum Directorate (now Petroleum Safety Authority Norway). It is "one of the main reliability data sources for the oil and gas industry" and considered "a unique data source on failure rates, failure mode distribution and repair times for equipment used in the offshore industr[y]". OREDA's original objective was the collection of petroleum industry safety equipment reliability data. The current organization, as a cooperating group of several petroleum and natural gas companies, was established in 1983, and at the same time the scope of OREDA was extended to cover reliability data from a wide range of equipment used in oil and gas exploration and production (E&P). OREDA primarily covers offshore, subsea and topside equipment, but also includes some onshore E&P and some downstream equipment. The main objective of the OREDA project is to contribute to improved safety and cost-effectiveness in design and operation of oil and gas E&P facilities, through collection and analysis of maintenance and operational data, establishment of a high-quality reliability database, and exchange of reliability, availability, maintenance and safety (RAMS) technology among the participating companies. History Work on the OREDA project proceeds in phases spanning 2–3 years. Handbooks summarizing the data collected and other work completed are issued regularly. Phase I (1983–1985) The primary activity during this phase was the collection and compilation of data from offshore drilling installations, and the publication of these data in the first OREDA Handbook. This demonstrated the ability of the eight petroleum industry companies involved in the project to cooperate on safety issues. While data in this initial phase included a wide range of equipment types, the level of detail was not as complete as in later phases of the project. 
Data collected in this phase are published in the OREDA Handbook (1984 edition); Phase I data are not, however, included in the OREDA database. Phase II (1987–1990): To improve data quality, the project's scope was altered to include the collection of production-critical equipment data only. Data began to be stored in a Windows OS database. The development of a tailor-made data collection and analysis program, the OREDA software, was begun. Data collected in this phase are published in the OREDA Handbook (1992 edition), which also contains re-published data collected in Phase I. Phase III (1990–1992): The number of equipment categories included was increased, and more data on maintenance programs were collected. Data quality was improved by following established "Guidelines for Data Collection" and via improved quality control. The user interface of the OREDA software was improved, and programming changes allowed it to be used as a broader-purpose tool for data collection. Data collected in this phase are published in the OREDA Handbook (1997 edition). Phase IV (1993–1996): New software for data collection and analysis was developed, plus specific software and procedures for automatic data import and conversion. Data collected were mainly for the same equipment classes as in Phase III, and the data collection was, to a greater extent than previously, carried out by the companies themselves. Data on planned maintenance were also included. Data collected in this phase are published in the OREDA Handbook (2002 edition). Phase V (1997–2000): New classes of equipment were added to the project, coinciding with a greater emphasis on the collection of subsea data. Development of a new ISO standard, "Petroleum and natural gas industries — Collection and exchange of reliability and maintenance data for equipment", was begun; ISO standard 14224 was issued in July 1999. Data collected in this phase are published in the OREDA Handbook (2002 edition).
Phase VI (2000–2001): Data collection on subsea equipment and new equipment classes was prioritised. A forum for co-operation between major subsea equipment manufacturers was formed. Data collected in this phase are published in the OREDA Handbook (2009 edition). Phase VII (2002–2003): Priority continued to be given to subsea equipment data collection. A revision of ISO 14224 was begun, with contributions from members of the OREDA project. Data collected in this phase are published in the OREDA Handbook (2009 edition). Phase VIII (2004–2005): Phase VIII mainly continued the goals and activities of Phase VII. OREDA members participated in the revision of ISO 14224, issued in December 2006. Data collected in this phase are published in the OREDA Handbook (2015 edition). Phase IX (2006–2008): OREDA software and taxonomy were made consistent with ISO 14224. There was a continued focus on including worldwide safety data. In observance of OREDA's 25-year anniversary, a seminar was conducted. Data collected in this phase are published in the OREDA Handbook (2015 edition). Phase X (2009–2011): The 5th OREDA Handbook (2009 edition) was released; new safety analysis software was developed; initial steps toward SIL (safety integrity level) data based on OREDA were taken; and GDF Suez and Petrobras became associated members. Phase XI (2012–2014): New data collection software was developed; the 6th OREDA Handbook (2015 edition) was planned; a quality assurance review of the database was conducted; and a new logo was designed, as were new looks for both the Handbook and the website. Phase XII (2015–2017): The OREDA project entered its 12th phase in 2015. During this phase, the 6th OREDA Handbook (2015 edition) was published. A new webshop for the project was established in collaboration with the European Safety Reliability & Data Association (ESReDA). Phase XIII (2018–): From 2018 the OREDA project will enter its 13th phase.
Digitalization and efficiency improvements are part of the industry, and there is a need for OREDA data as a decision-support tool and as support for equipment in operation. Cost-effective solutions are a focus area in the industry, and in line with this trend the OREDA project will provide more efficient procedures and digitalized solutions. Participants At times companies have left or joined the project, sometimes as the result of name changes or mergers. The following table lists which companies have contributed data to the OREDA project in phases VIII, IX and XII. Organization The OREDA project's Steering Committee consists of one member and one deputy member from each of the participating companies. From these members a chairperson is elected, who appoints a Project Manager to coordinate activities approved by the Steering Committee. The Project Manager is also responsible for data quality assurance. Det Norske Veritas (DNV, now called DNV GL), an international certification body and classification society, served as Project Manager during phases I and II, and SINTEF (Stiftelsen for INdustriell og TEknisk Forskning; "Foundation for Scientific and Industrial Research" in English) during phases III–IX, after which DNV GL again took over Project Manager duties. The OREDA Handbook releases have been prepared as separate projects, but in consultation with the OREDA Steering Committee; the current version, 2015's 6th Edition, was prepared by SINTEF and NTNU (Norges Teknisk-Naturvitenskapelige Universitet; "Norwegian University of Science and Technology" in English), and is marketed by DNV GL. Need Before the OREDA project began collecting data, "no authenticated source of failure information existed for offshore installations," and risk assessments had to be made using "generic data from onshore petroleum plants and data from other industries."
Data By 1996, OREDA had collated data on 24,000 pieces of equipment in use in offshore installations, and documented 33,000 equipment failures. The severity of failures documented in the database is categorized as critical, degradation, incipient, or unknown. The database contains data from almost 300 installations, over 15,000 pieces of equipment, nearly 40,000 failure records, and close to 75,000 maintenance records. Access to these data, and to the search and analysis functions of the OREDA software, is restricted to the OREDA member companies, though contractors working with member companies may be granted temporary access. Database structure Data are entered by installation and by owner. Each piece of equipment (e.g. a gas turbine) constitutes a single database inventory record, which includes a technical description of the equipment and of its environmental and operating conditions, along with all associated failure events. Every failure event is given a set of data including failure cause, date, effect, and mode. Corrective and preventive maintenance data are also included. Software The OREDA software handles data acquisition, analysis and collation. Features include advanced data search, automated data transfer, quality checking, reliability analyses, a tailor-made module for subsea data which includes an event-logging tool, and the option to configure user-defined applications. It can also be used to collect internal company data. The most current version of the software, released in concert with the 6th edition of the OREDA Handbook, contains an expanded set of equipment classes, including common subsea components, subsea control systems, subsea power cables, subsea pumps, and subsea vessels. Impact Use of the OREDA database has "led to significant savings in the development and operation of platforms."
OREDA's example has inspired the creation of similar inter-company cooperation projects in related fields, such as the SPARTA (System Performance, Availability and Reliability Trend Analysis) database created by the wind farm industry in the UK. References Energy economics International energy organizations Organizations established in 1981 Petroleum politics
https://en.wikipedia.org/wiki/Barringer%20Medal
The Barringer Medal recognizes outstanding work in the field of impact cratering and/or work that has led to a better understanding of impact phenomena. The Barringer Medal and Award were established to honor the memory of D. Moreau Barringer Sr. and his son D. Moreau Barringer Jr. and are sponsored by the Barringer Crater Company. The medal is awarded by the Meteoritical Society. The senior Barringer was the first to seriously propose an impact origin for the crater that now bears his name. Barringer Medal Winners The first recipient, Eugene Shoemaker, co-discovered Comet Shoemaker–Levy 9 and was the first to offer accepted proof of Barringer Crater’s meteoritic origin. See also List of astronomy awards Glossary of meteoritics References Astronomy prizes Meteorite prizes Awards established in 1984
https://en.wikipedia.org/wiki/Alias%20%28command%29
In computing, alias is a command in various command-line interpreters (shells), which enables the replacement of a word by another string. It is mainly used for abbreviating a system command, or for adding default arguments to a regularly used command. alias is available in Unix shells, AmigaDOS, 4DOS/4NT, FreeDOS, KolibriOS, Windows PowerShell, ReactOS, and the EFI shell. Aliasing functionality in the MS-DOS and Microsoft Windows operating systems is provided by the DOSKey command-line utility. An alias will last for the life of the shell session. Regularly used aliases can be set from the shell's rc file (such as .bashrc) so that they will be available upon the start of the corresponding shell session. The alias commands may either be written in the config file directly or sourced from a separate file. History In Unix, aliases were introduced in the C shell and survive in descendant shells such as tcsh and bash. C shell aliases were strictly limited to one line. This was useful for creating simple shortcut commands, but not more complex constructs. Older versions of the Bourne shell did not offer aliases, but they did provide functions, which are more powerful than the csh alias concept. The alias concept from csh was imported into the Bourne Again Shell (bash) and the Korn shell (ksh). With shells that support both functions and aliases but no parameterized inline shell scripts, the use of functions wherever possible is recommended. Cases where aliases are necessary include situations where chained aliases are required (bash and ksh). The command has also been ported to the IBM i operating system. Usage Creating aliases Common Unix shells Non-persistent aliases can be created by supplying name/value pairs as arguments for the alias command.
In Unix shells the syntax is: alias gc='git commit' C shell The corresponding syntax in the C shell or tcsh shell is: alias gc "git commit" This alias means that when the command gc is read in the shell, it will be replaced with git commit and that command will be executed instead. 4DOS In the 4DOS/4NT shell the following syntax is used to define cp as an alias for the 4DOS copy command: alias cp copy Windows PowerShell To create a new alias in Windows PowerShell, the new-alias cmdlet can be used: new-alias ci copy-item This creates a new alias called ci that will be replaced with the copy-item cmdlet when executed. In PowerShell, an alias cannot be used to specify default arguments for a command. Instead, this must be done by adding items to the collection $PSDefaultParameterValues, one of the PowerShell preference variables. Viewing currently defined aliases To view defined aliases the following commands can be used: alias # Used without arguments; displays a list of all current aliases alias -p # List aliases in a way that allows re-creation by sourcing the output; not available in 4DOS/4NT and PowerShell alias myAlias # Displays the command for a defined alias Overriding aliases In Unix shells, it is possible to override an alias by quoting any character in the alias name when using the alias. For example, consider the following alias definition: alias ls='ls -la' To override this alias and execute the ls command as it was originally defined, the following syntax can be used: 'ls' or \ls In the 4DOS/4NT shell it is possible to override an alias by prefixing it with an asterisk. For example, consider the following alias definition: alias dir = *dir /2/p The asterisk in the 2nd instance of dir causes the unaliased dir to be invoked, preventing recursive alias expansion. 
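The lifecycle described above can be illustrated in one short bash sketch (the alias names are arbitrary examples). One bash-specific detail worth noting: non-interactive shells only expand aliases after `shopt -s expand_aliases` is set, whereas interactive sessions expand them by default.

```shell
#!/usr/bin/env bash
# Sketch of the alias lifecycle in bash. Interactive shells expand aliases
# by default; scripts must opt in explicitly:
shopt -s expand_aliases

alias gc='git commit'        # create: 'gc' now expands to 'git commit'
alias gc                     # inspect: prints  alias gc='git commit'
echo "${BASH_ALIASES[gc]}"   # bash also exposes definitions in this array

alias echo='echo [aliased]'  # shadow a builtin to make expansion visible
echo hello                   # alias expands: prints  [aliased] hello
\echo hello                  # quoting any character bypasses it: prints  hello

unalias echo gc              # remove both aliases again
```

The backslash escape in the sketch is the same bypass mechanism described under Overriding aliases below.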
Also the user can get the unaliased behaviour of dir at the command line by using the same syntax: *dir Changing aliases In Windows PowerShell, the set verb can be used with the alias cmdlet to change an existing alias: set-alias ci cls The alias ci will now point to the cls command. In the 4DOS/4NT shell, the eset command provides an interactive command line to edit an existing alias: eset /a cp The /a causes the alias cp to be edited, as opposed to an environment variable of the same name. Removing aliases In Unix shells and 4DOS/4NT, aliases can be removed by executing the unalias command: unalias copy # Removes the copy alias unalias -a # The -a switch will remove all aliases; not available in 4DOS/4NT unalias * # 4DOS/4NT equivalent of `unalias -a` - wildcards are supported In Windows PowerShell, the alias can be removed from the alias:\ drive using remove-item: remove-item alias:ci # Removes the ci alias Features Chaining An alias usually replaces just the first word. But some shells allow a sequence of words to be replaced; this particular feature is unavailable through the function mechanism. The usual syntax is to define the first alias with a trailing space character. For instance, using the two aliases: alias list='ls ' # note the trailing space to trigger chaining alias long='-Flas' # options to ls for a long listing allows: list long myfile # becomes "ls -Flas myfile" when run for a long listing, where "long" is also evaluated as an alias. Command arguments In the C Shell, arguments can be embedded inside the command using the string \!*. For example, with this alias: alias ls-more 'ls \!* | more' ls-more /etc /usr expands to ls /etc /usr | more to list the contents of the directories /etc and /usr, pausing after every screenful. Without \!*, alias ls-more 'ls | more' would instead expand to ls | more /etc /usr which incorrectly attempts to open the directories in more.
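In bash, which has no csh-style `\!*` placeholder, the same effect is obtained with a function, where `"$@"` stands for all arguments with quoting preserved. A minimal sketch (the `wrap` helper is a made-up example to show that a function, unlike a plain bash alias, can embed its arguments anywhere in the command line):

```shell
#!/usr/bin/env bash
# bash equivalent of the csh alias:  alias ls-more 'ls \!* | more'
# "$@" expands to all function arguments, individually quoted.
ls-more () {
    ls "$@" | more
}
# e.g.:  ls-more /etc /usr   runs  ls /etc /usr | more

# Unlike a bash alias, a function can embed its arguments anywhere in the
# command, not just have them appended at the end:
wrap () {
    printf '<%s>\n' "$@"
}
wrap a 'b c'    # prints <a> then <b c>; the quoted argument stays one word
```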
The Bash and Korn shells instead use shell functions — see § Alternatives below. Alternatives Aliases should usually be kept simple. Where they would not be simple, the recommendation is usually to use one of the following: Shell scripts, which essentially provide the full ability to create new system commands. Symbolic links in the user's PATH (such as /bin). This method is useful for providing an additional way of calling the command, and in some cases may allow access to a buried command function for the small number of commands that use their invocation name to select the mode of operation. Shell functions, especially if the command being created needs to modify the internal runtime environment of the shell itself (such as environment variables), needs to change the shell's current working directory, or must be implemented in a way which guarantees it will appear in the command search path for anything but an interactive shell (especially any "safer" version of a standard command). The most common form of aliases, which just add a few options to a command and then include the rest of the command line, can be converted easily to shell functions following this pattern: alias ll='ls -Flas' # long listing, alias ll () { ls -Flas "$@" ; } # long listing, function To prevent a function from calling itself recursively, use the command builtin: ls () { command ls --color=auto "$@" ; } In older Bourne shells, use /bin/ls instead of command ls. References Further reading External links Bash man page for alias The alias Command by The Linux Information Project (LINFO) Alias IBM i Qshell commands ReactOS commands Windows commands Unix SUS2008 utilities Windows administration
https://en.wikipedia.org/wiki/Ogden%E2%80%93Roxburgh%20model
The Ogden–Roxburgh model is an approach published in 1999 which extends hyperelastic material models to allow for the Mullins effect. It is used in several commercial finite element codes, and is named after R. W. Ogden and D. G. Roxburgh. The fundamental idea of the approach can already be found in a paper by De Souza Neto et al. from 1994. The basis of pseudo-elastic material models is a hyperelastic second Piola–Kirchhoff stress $S_0$, which is derived from a suitable strain energy density function $W_0$: $S_0 = 2\,\partial W_0/\partial C$, where $C$ is the right Cauchy–Green deformation tensor. The key idea of pseudo-elastic material models is that the stress during the first loading process is equal to the basic stress $S_0$. Upon unloading and reloading, $S_0$ is multiplied by a positive softening function $\eta$, so that $S = \eta\,S_0$. The function $\eta$ thereby depends on the strain energy $W_0$ of the current load and its maximum $W_{0,\max}$ in the history of the material: $\eta = \eta(W_0, W_{0,\max})$. It was shown that this idea can also be used to extend arbitrary inelastic material models for softening effects. References Continuum mechanics Elasticity (physics) Rubber properties Solid mechanics
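The structure of the model can be summarized in a short sketch. The first two relations restate the pseudo-elastic idea described above; the specific error-function expression for the softening function is the form commonly used in finite element implementations and is an assumption here rather than taken from this text, with $r$, $m$ and $\beta$ as material parameters:

```latex
% Basic (primary-loading) stress from the strain energy density W_0:
\boldsymbol{S}_0 = 2\,\frac{\partial W_0}{\partial \boldsymbol{C}}

% Pseudo-elastic stress: the basic stress scaled by a softening function,
% with eta = 1 on the primary loading path (W_0 = W_{0,max}):
\boldsymbol{S} = \eta\,\boldsymbol{S}_0,
\qquad \eta = \eta\!\left(W_0,\, W_{0,\max}\right) \le 1

% A commonly implemented choice for the softening function:
\eta = 1 - \frac{1}{r}\,
  \operatorname{erf}\!\left(\frac{W_{0,\max} - W_0}{m + \beta\,W_{0,\max}}\right)
```

On the primary loading path the error-function argument vanishes, so $\eta = 1$ and the hyperelastic response is recovered; on unloading $W_0 < W_{0,\max}$ gives $\eta < 1$, reproducing the stress softening of the Mullins effect.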
https://en.wikipedia.org/wiki/WorldWideWhiteboard
World Wide Whiteboard is a Web-based online collaboration and conferencing tool designed for use in online education. It was developed by Link-Systems International (LSI), a privately held distance-learning software corporation in Tampa, Florida. The World Wide Whiteboard went online in 1996, under the name NetTutor, although the LSI NetTutor online tutoring service is technically an implementation of the World Wide Whiteboard product. Version 3.8 of the World Wide Whiteboard is used in the current NetTutor online tutoring service, and in its on-campus online tutoring programs, online courses, and collaborative learning environments. As a Java applet, it can be run on Windows, Mac, and Linux without downloading software. LSI maintains the application and leases both hosted and unhosted access to it. LSI operations, tutoring, product development, online content services, management, and technical support are housed in the company's Tampa offices. History The World Wide Whiteboard is the first software product developed by Link-Systems International (LSI). LSI was launched in 1995 primarily as a company that converted text-based content, like scholarly journals, into an SGML format. Incorporated in the State of Florida on February 27, 1996, LSI expanded its mission to the Web-based implementation of a variety of traditionally face-to-face academic activities. The company developed what it called the "Net Tutor" product that included a whiteboard-like interface and, later, a tutoring service that used the whiteboard to conduct online tutoring. For about five years, the software was leased as NetTutor to schools, individual educators, and programs to, for example, conduct online tutoring using tutors provided by the institution, as well as for online instruction and office hours. Eventually, LSI renamed the interface to World Wide Whiteboard. 
The company still owns the NetTutor trademark, which refers to the online tutoring it supplies via the World Wide Whiteboard and using professional tutors it employs. This history seems to justify the company's claim that it was the first to offer commercially a tool for Web access to a shared, real-time environment with such education-oriented features as subject-specific toolbars. The World Wide Whiteboard was also adopted as an option available with certain textbooks by publishers such as McGraw-Hill, John Wiley and Sons, Pearson, Cengage Learning, and Bedford, Freeman and Worth. The use of the World Wide Whiteboard by campuses and educational programs to support online environments, give classes and hold faculty office hours and meetings expanded over the next decade. In 2010, LSI began the development of an HTML5 version of the World Wide Whiteboard. This version has now replaced the earlier, Java-based version and allows for the use of the interface on mobile devices. This is one of several cases where software developers have opted for browser-based development, counting on the future development of HTML5 APIs to support audio and video interaction, rather than self-standing phone apps. As of 2013, installations of World Wide Whiteboard had been converted from Java-based to HTML5-based versions. As the World Wide Whiteboard gained more popularity, LSI was included in the Inc. 5000 in 2014. Use The World Wide Whiteboard product appeared as the first successful web collaboration tool in wide use in online education. Online whiteboards generally can accommodate a theoretically unlimited number of participants and an instructor in a live or synchronous interactive session. The World Wide Whiteboard allows for both audio and video streams, as well as numerous asynchronous modes of interaction. Usage studies A study at Hampton University in 1999 concluded that the World Wide Whiteboard could effectively support such activities as online office hours. 
A 2004 study at Stony Brook University comparing the World Wide Whiteboard with tools available in Blackboard concluded that "despite some flaws, according to our research NetTutor remains the only workable math-friendly e-learning communication system." The World Wide Whiteboard supported the online tutoring programs of individual universities, such as at Utah Valley State College, in a study describing its use as "one of the earliest synchronous models for math tutoring". The World Wide Whiteboard was used to coordinate education across multiple campuses at the University of Idaho, in a study beginning in 2005, that showed increasing acceptance of Web-based online tutoring in the university setting. Certain of these studies have been cited widely, mainly because of the seminal role of the World Wide Whiteboard as an educational Web conferencing tool. Other questions raised by scholars include: How comfortable is the average learner with the technology of the World Wide Whiteboard? Does the conversation with an online educator—even a "live" one—fully synthesize the give and take of the classroom environment? Degree of customization The technology of the World Wide Whiteboard is not geared to a particular pedagogic or andragogic approach. Its set of activities can therefore be used by any educator for online communication. This also means that questions about how the World Wide Whiteboard is set up are matters for the institution to decide. LSI has no consultancy component, so that, while figures about usage of the World Wide Whiteboard can be used to demonstrate how a school's Quality Enhancement Program is working, it is up to the user to create the documentation for this or any other accreditation-related measures. However, each feature of the World Wide Whiteboard can be customized for use in different institutions and for different subjects. The appearance of the dashboard, and symbols displayed on the whiteboard interface can be customized, as well. 
As an example, fractions and graphing tools may appear on an Algebra class whiteboard, while there may be integral signs as well on the toolbar for a calculus course. See also E-learning Interactive whiteboards Notes References Collison, G., Elbaum, B., Haavind, S. & Tinker, R. (2000). Facilitating online learning: Effective strategies for moderators. Atwood Publishing, Madison. Hewitt, Beth L. (2010). The online writing conference: a guide for teachers and tutors. Boynton/Cook Heinemann, Portsmouth, NJ. Jacques, D., and Salmon, G (2007) Learning in Groups: A Handbook for on and off line environments, Routledge, London and New York. Kersaint, G., Barber, J., Dogbey, J. and Kephart, D. (2011) "The Effect of Access to an Online Tutorial Service on College Algebra Student Outcomes." Mentoring and Tutoring: Partnership in Learning. 19(1), February, 2011. Educational technology companies of the United States Educational math software Science education software
https://en.wikipedia.org/wiki/Extreme%20mass%20ratio%20inspiral
In astrophysics, an extreme mass ratio inspiral (EMRI) is the orbit of a relatively light object around a much heavier (by a factor 10,000 or more) object, that gradually spirals in due to the emission of gravitational waves. Such systems are likely to be found in the centers of galaxies, where stellar mass compact objects, such as stellar black holes and neutron stars, may be found orbiting a supermassive black hole. In the case of a black hole in orbit around another black hole this is an extreme mass ratio binary black hole. The term EMRI is sometimes used as a shorthand to denote the emitted gravitational waveform as well as the orbit itself. The main reason for scientific interest in EMRIs is that they are one of the most promising sources for gravitational wave astronomy using future space-based detectors such as the Laser Interferometer Space Antenna (LISA). If such signals are successfully detected, they will allow accurate measurements of the mass and angular momentum of the central object, which in turn gives crucial input for models for the formation and evolution of supermassive black holes. Moreover, the gravitational wave signal provides a detailed map of the spacetime geometry surrounding the central object, allowing unprecedented tests of the predictions of general relativity in the strong gravity regime. Overview Scientific potential If successfully detected, the gravitational wave signal from an EMRI will carry a wealth of astrophysical data. EMRIs evolve slowly and complete many (~10,000) cycles before eventually plunging. Therefore, the gravitational wave signal encodes a precise map of the spacetime geometry of the supermassive black hole. Consequently, the signal can be used as an accurate test of the predictions of general relativity in the regime of strong gravity; a regime in which general relativity is completely untested. 
In particular, it is possible to test the hypothesis that the central object is indeed a supermassive black hole to high accuracy by measuring the quadrupole moment of the gravitational field to an accuracy of a fraction of a percent. In addition, each observation of an EMRI system will allow an accurate determination of the parameters of the system, including: The mass and angular momentum of the central object to an accuracy of 1 in 10,000. By gathering the statistics of the mass and angular momentum of a large number of supermassive black holes, it should be possible to answer questions about their formation. If the angular momentum of the supermassive black holes is large, then they probably acquired most of their mass by swallowing gas from their accretion disc. Moderate values of the angular momentum indicate that the object is most likely formed from the merger of several smaller objects with a similar mass, while low values indicate that the mass has grown by swallowing smaller objects coming in from random directions. The mass of the orbiting object to an accuracy of 1 in 10,000. The population of these masses could yield interesting insights into the population of compact objects in the nuclei of galaxies. The eccentricity (1 in 10,000) and the (cosine of the) inclination (1 in 100–1000) of the orbit. The statistics for the values concerning the shape and orientation of the orbit contain information about the formation history of these objects. (See the Formation section below.) The luminosity distance (5 in 100) and position (with an accuracy of 10^−3 steradian) of the system. Because the shape of the signal encodes the other parameters of the system, we know how strong the signal was when it was emitted. Consequently, one can infer the distance of the system from the observed strength of the signal (since it diminishes with the distance travelled).
Unlike other means of determining distances of the order of several billion light-years, the determination is completely self-contained and does not rely on the cosmic distance ladder. If the system can be matched with an optical counterpart, then this provides a completely independent way of determining the Hubble parameter at cosmic distances. Testing the validity of the Kerr conjecture. This hypothesis states that all black holes are rotating black holes of the Kerr or Kerr–Newman types. Formation It is currently thought that the centers of most (large) galaxies consist of a supermassive black hole of 10^6 to 10^9 solar masses surrounded by a cluster of 10^7 to 10^8 stars, maybe 10 light-years across, called the nucleus. The orbits of the objects around the central supermassive black hole are continually perturbed by two-body interactions with other objects in the nucleus, changing the shape of the orbit. Occasionally, an object may pass close enough to the central supermassive black hole for its orbit to produce large amounts of gravitational waves, significantly affecting the orbit. Under specific conditions such an orbit may become an EMRI. In order to become an EMRI, the back-reaction from the emission of gravitational waves must be the dominant correction to the orbit (compared to, for example, two-body interactions). This requires that the orbiting object passes very close to the central supermassive black hole. A consequence of this is that the inspiralling object cannot be a large heavy star, because it will be ripped apart by the tidal forces. However, if the object passes too close to the central supermassive black hole, it will make a direct plunge across the event horizon. This will produce a brief violent burst of gravitational radiation which would be hard to detect with currently planned observatories. Consequently, the creation of an EMRI requires a fine balance between objects passing too close and too far from the central supermassive black hole.
Currently, the best estimates are that a typical supermassive black hole will capture an EMRI once every 10^6 to 10^8 years. This makes witnessing such an event in our Milky Way unlikely. However, a space-based gravitational wave observatory like LISA will be able to detect EMRI events up to cosmological distances, leading to an expected detection rate somewhere between a few and a few thousand per year. Extreme mass ratio inspirals created in this way tend to have very large eccentricities (e > 0.9999). The initial, high eccentricity orbits may also be a source of gravitational waves, emitting a short burst as the compact object passes through periapsis. These gravitational wave signals are known as extreme mass ratio bursts. As the orbit shrinks due to the emission of gravitational waves, it becomes more circular. When it has shrunk enough for the gravitational waves to become strong and frequent enough to be continuously detectable by LISA, the eccentricity will typically be around 0.7. Since the distribution of objects in the nucleus is expected to be approximately spherically symmetric, there is expected to be no correlation between the initial plane of the inspiral and the spin of the central supermassive black hole. In 2011, an impediment to the formation of EMRIs was proposed. The "Schwarzschild Barrier" was thought to be an upper limit to the eccentricity of orbits near a supermassive black hole. Gravitational scattering, driven by torques from the slightly asymmetric distribution of mass in the nucleus ("resonant relaxation"), would result in a random walk in each star's eccentricity. When a star's eccentricity became sufficiently large, its orbit would begin to undergo relativistic precession, and the effectiveness of the torques would be quenched. It was believed that there would be a critical eccentricity, at each value of the semi-major axis, at which stars would be "reflected" back to lower eccentricities.
However, it is now clear that this barrier is nothing but an illusion, probably originating from an animation based on numerical simulations, as described in detail in two works. The role of the spin It was realised that the role of the spin of the central supermassive black hole in the formation and evolution of EMRIs is crucial. For a long time it had been believed that any EMRI originating farther away than a certain critical radius of about a hundredth of a parsec would either be scattered away from the capture orbit or plunge directly into the supermassive black hole on an extremely radial orbit. These events would lead to one or a few bursts, but not to a coherent set of thousands of them. Indeed, when the spin is taken into account, it has been shown that these capture orbits accumulate thousands of cycles in the detector band. Since they are driven by two-body relaxation, which is chaotic in nature, they are unaffected by anything related to a potential Schwarzschild barrier. Moreover, since they originate in the bulk of the stellar distribution, the rates are larger. Additionally, due to their larger eccentricity, they are louder, which enhances the detection volume. It is therefore expected that EMRIs originate at these distances, and that they dominate the rates. Alternatives Several alternative processes for the production of extreme mass ratio inspirals are known. One possibility would be for the central supermassive black hole to capture a passing object that is not bound to it. However, the window where the object passes close enough to the central black hole to be captured, but far enough to avoid plunging directly into it, is extremely small, making it unlikely that such events contribute significantly to the expected event rate. Another possibility is present if the compact object occurs in a bound binary system with another object. 
If such a system passes close enough to the central supermassive black hole, it is split apart by the tidal forces, ejecting one of the objects from the nucleus at a high velocity while the other is captured by the central black hole with a relatively high probability of becoming an EMRI. If more than 1% of the compact objects in the nucleus are found in binaries, this process may compete with the "standard" picture described above. EMRIs produced by this process typically have a low eccentricity, becoming very nearly circular by the time they are detectable by LISA. A third option is that a giant star passes close enough to the central massive black hole for the outer layers to be stripped away by tidal forces, after which the remaining core may become an EMRI. However, it is uncertain whether the coupling between the core and outer layers of giant stars is strong enough for stripping to have a significant effect on the orbit of the core. Finally, supermassive black holes are often accompanied by an accretion disc of matter spiraling towards the black hole. If this disc contains enough matter, instabilities can collapse to form new stars. If massive enough, these can collapse to form compact objects, which are automatically on a trajectory to become an EMRI. Extreme mass ratio inspirals created in this way are characterized by the fact that their orbital plane is strongly correlated with the plane of the accretion disc and the spin of the supermassive black hole. Intermediate mass ratio inspirals Besides stellar black holes and supermassive black holes, it is speculated that a third class of intermediate mass black holes with masses between 10² and 10⁴ solar masses also exists. One way that these may possibly form is through a runaway series of collisions of stars in a young cluster of stars. If such a cluster forms within a thousand light years of the galactic nucleus, it will sink towards the center due to dynamical friction. 
Once close enough, the stars are stripped away through tidal forces and the intermediate mass black hole may continue on an inspiral towards the central supermassive black hole. Such a system, with a mass ratio around 1000, is known as an intermediate mass ratio inspiral (IMRI). There are many uncertainties in the expected frequency of such events, but some calculations suggest there may be up to several tens of these events detectable by LISA per year. If these events do occur, they will result in an extremely strong gravitational wave signal that can easily be detected. Another possible route to an intermediate mass ratio inspiral is for an intermediate mass black hole in a globular cluster to capture a stellar mass compact object through one of the processes described above. Since the central object is much smaller, these systems will produce gravitational waves with a much higher frequency, opening the possibility of detecting them with the next generation of Earth-based observatories, such as Advanced LIGO and Advanced Virgo. Although the event rates for these systems are extremely uncertain, some calculations suggest that Advanced LIGO may see several of them per year. Modelling Although the strongest gravitational wave signals from EMRIs may easily be distinguished from the instrumental noise of the gravitational wave detector, most signals will be deeply buried in the instrumental noise. However, since an EMRI will go through many cycles of gravitational waves (~10⁵) before making the plunge into the central supermassive black hole, it should still be possible to extract the signal using matched filtering. In this process, the observed signal is compared with a template of the expected signal, amplifying components that are similar to the theoretical template. To be effective, this requires accurate theoretical predictions for the wave forms of the gravitational waves produced by an extreme mass ratio inspiral. 
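The matched-filtering idea can be illustrated with a toy example: a known "chirp" template is correlated against a noisy data stream, and the correlation peaks where the buried signal sits. All parameter values below are illustrative, and the white-noise model is far simpler than real detector noise.

```python
# Toy matched-filtering demo: recover a weak chirp buried in Gaussian noise
# by sliding a known template along the data and correlating.
import numpy as np

rng = np.random.default_rng(0)

# Template: a short chirp whose frequency sweeps upward, as in an inspiral.
t = np.linspace(0, 1, 200)
template = np.sin(2 * np.pi * (5 + 10 * t) * t)

# Data: the template injected at a known offset, buried in noise.
true_offset = 300
data = rng.normal(0.0, 0.3, 1000)
data[true_offset : true_offset + len(template)] += template

# Matched filter: correlate the template against every offset in the data.
snr = np.correlate(data, template, mode="valid")
recovered = int(np.argmax(snr))

print("recovered offset:", recovered)  # expect 300
```

Even though the injected chirp is comparable in amplitude to the noise at any single sample, the correlation coherently sums its energy over all 200 samples, so the peak stands far above the noise floor. The same principle, applied with template banks covering the EMRI parameter space, is what makes signals with ~10⁵ cycles extractable.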
This, in turn, requires accurate modelling of the trajectory of the EMRI. The equations of motion in general relativity are notoriously hard to solve analytically. Consequently, one needs to use some sort of approximation scheme. Extreme mass ratio inspirals are well suited for this, as the mass of the compact object is much smaller than that of the central supermassive black hole, allowing it to be ignored or treated perturbatively. Issues with traditional binary modelling approaches Post-Newtonian expansion One common approach is to expand the equations of motion for an object in terms of its velocity divided by the speed of light, v/c. This approximation is very effective if the velocity is very small, but becomes rather inaccurate if v/c grows larger than about 0.3. For binary systems of comparable mass, this limit is not reached until the last few cycles of the orbit. EMRIs, however, spend their last thousand to a million cycles in this regime, making the post-Newtonian expansion an inappropriate tool. Numerical relativity Another approach is to solve the equations of motion numerically in full. The non-linear nature of the theory makes this very challenging, but significant success has been achieved in numerically modelling the final phase of the inspiral of binaries of comparable mass. The large number of cycles of an EMRI makes the purely numerical approach prohibitively expensive in terms of computing time. Gravitational self force The large value of the mass ratio in an EMRI opens another avenue for approximation: expansion in one over the mass ratio. To zeroth order, the path of the lighter object will be a geodesic in the Kerr spacetime generated by the supermassive black hole. Corrections due to the finite mass of the lighter object can then be included, order-by-order in the mass ratio, as an effective force on the object. This effective force is known as the gravitational self force. 
Over the last decade, considerable progress has been made in calculating the gravitational self force for EMRIs. Numerical codes are available to calculate the gravitational self force on any bound orbit around a non-rotating (Schwarzschild) black hole, and significant progress has been made towards calculating it around a rotating (Kerr) black hole. Notes References Further reading External links The Schwarzschild Barrier Black holes Binary systems Gravitational-wave astronomy
Extreme mass ratio inspiral
Physics,Astronomy
3,124
37,412,684
https://en.wikipedia.org/wiki/Dreaming%20%28journal%29
Dreaming is a peer-reviewed academic journal published by the American Psychological Association on behalf of the International Association for the Study of Dreams. IASD's other peer-reviewed publication, the International Journal of Dream Research (IJoDR) is published on Heidelberg University Library servers. The Dreaming journal covers research on dreaming, as well as on dreaming from the viewpoint of any of the arts and humanities. The current editor-in-chief is Deirdre Barrett (Harvard Medical School). Abstracting and indexing According to the Journal Citation Reports, the journal has a 2020 impact factor of 0.76. See also Dreams in analytical psychology International Association for the Study of Dreams (IASD) Oneirology References External links with free sample articles International Association for the Study of Dreams Oneirology Analytical psychology Sleep physiology American Psychological Association academic journals English-language journals Psychology journals Quarterly journals Academic journals established in 1991 1991 establishments in the United States
Dreaming (journal)
Mathematics,Biology
188
11,353,099
https://en.wikipedia.org/wiki/Architects%20Act%201997
The Architects Act 1997 (c. 22) is the consolidating Act of the Parliament of the United Kingdom for the keeping and publishing of the statutory Register of Architects by the Architects Registration Board. It has the long title: An Act to consolidate the enactments relating to architects. It consolidated two Acts of the 1930s as later amended both by primary legislation and by Orders in Council implementing the EC directive on architects providing for the recognition of architects qualified in other EC states, and the changes which had been made by Part III of the Housing Grants, Construction and Regeneration Act 1996. Passage of the consolidating Bill The Architects Act 1997 consolidated the originating and amending Acts relating to the registration of architects, namely the Architects Acts 1931-1996 (section 125 of the Housing Grants, Construction and Regeneration Act 1996). The Bill was introduced to the House of Lords on 17 December 1996 by the Lord Chancellor, Lord Mackay of Clashfern, and given its first reading. It received its second reading without call for a debate on 20 January 1997 and was passed to the Joint Committee on Consolidation Bills. On 3 March 1997 it was read for a third time without debate and passed to the House of Commons. The Bill was given Royal Assent on 19 March 1997 and came into force on 21 July 1997. Legislative continuity The legislative continuity from the originating Act of 1931 to the consolidating Act of 1997 is shown by paragraph 19(2)(a) of Schedule 2 in the 1997 Act: "the Council" means the Architects' Registration Council of the United Kingdom established under the 1931 Act, which was renamed as the Board by section 118(1) of the 1996 Act. 
The Architects Registration Board (ARB) has limited powers to make rules in the manner prescribed by the Architects Act 1997, but not the power to make regulations which had previously been ascribed to the registration body when it was constituted as the Architects' Registration Council of the United Kingdom (ARCUK). Statutory purpose The Act embodied previous legislation consequent upon EU directives concerning the mutual recognition of professional qualifications in the member states of the European Union and other EEA States, and certain changes which had been made to the previous legislation after the publication of the Warne Report in 1993. For the purpose of ascertaining the duties and functions which the Architects Registration Board is required to execute and perform under the Architects Act 1997, the constraints on the Board include the requirements judicially applicable in the name of administrative law. In May 2006 ministerial responsibility for the ARB was transferred from the ODPM to the DCLG (Department for Communities and Local Government). The DCLG website shows that of "four categories" of "non-departmental public bodies" (NDPBs) the ARB was being classified (at the end of May 2007) as one of two "public corporations", the other one being the Audit Commission, a body of entirely different political and legislative origin, function and capacities, so that the two have practically nothing in common. The website there briefly described the ARB as: The independent statutory regulator of all UK registered architects which has a dual mandate to protect the consumer and to safeguard the reputation of architects. That appears to have been more a politically advised than a factual statement, in that it lacks congruity with an ordinary or accurate reading of the legislation enacted by Parliament (see further information below "Accuracy of Government Information"). 
Amendment in 2008 under the European Communities Act 1972 Amendments made in June 2008 by statutory instrument established rules for the recognition of professional qualifications enabling migrants from the European Economic Area or Switzerland to register as architects in the United Kingdom. It also set out provisions for facilitating temporary and occasional professional services cross-border. Changes: before and after July 1997 The previous legislation had enabled and required the Register of Architects to be established, maintained and published; and for that purpose there had been a Council, called the Architects' Registration Council of the United Kingdom (ARCUK), which had been established as a body corporate by the originating Act, namely the Architects (Registration) Act, 1931. The changes embodied in the consolidating Act of 1997 had first been enacted in Part III of the Housing Grants, Construction and Regeneration Act 1996. The changes had been made on the basis of a government consultation document dated 19 July 1994 which the Department of the Environment had issued with the title "Reform of Architects Registration". The consultation document had set out fourteen proposals for reform, stemming from a request from ARCUK to the Government in 1992 that the Architects Registration Acts should be reviewed; and stated that a report on the review which had been carried out had been published by HMSO in 1993. The report had been made by Mr E J D Warne, CB, and is commonly known as "The Warne Report". The legislation which followed carried the proposed purposes into effect only in part. In the consultation document the purpose of the reformed body was stated to be: setting criteria for admission to the Register; preventing misuse of the title "architect"; and the discipline of unprofessional conduct, and the setting of fee levels. To that end, fourteen proposals had been enumerated. 
Some were later abandoned; and others substantially altered, whether in the Bill which was presented to Parliament or in its passage through Parliament, including: that the reformed Board would be given statutory authority to make regulations consistent with the provisions of the legislation governing architects registration; that the reformed Board would publicise a statement of the criteria for disciplinary offences; and that in disciplinary cases, there would be a range of non-monetary penalties, and hearings would be before a small statutory committee composed of both architects and non-architects, with a right to appeal to the full Board. Proposals mentioned in the consultation document which were later enacted and are now operative were: that ARCUK would remain as a legal entity, but, with no impact on its role or status, the name would be changed to "Architects Registration Board" (ARB); that there should be an office of Registrar whose functions would be to maintain the Register and carry out the instructions of the Board; that the Board would be made up of 8 lay members appointed by the Government, and 7 architects elected by registered architects; and that the Board of Architectural Education would be abolished, by reason of it being an unwieldy body which would be unnecessary for fulfilling the functions of the reformed Registration Board. The Table of Derivations, set out at the end of the Act after Schedule 3, by showing the changes which had been made by the 1996 Act to the originating Act of 1931 (as it had by then been amended by the 1938 Act and other legislation), distinguishes them from the provisions which were in the legislation before the 1996 Act, and so were operative in the time of ARCUK and have remained operative from 21 July 1997, when the reconstituting changes took effect. 
Professional Conduct Committee One of the changes made was replacing the Discipline Committee of ARCUK with a Professional Conduct Committee under Part III of the Act with statutory powers to inflict fines expressly on a par with criminal penalties. Under the Act, the committee was to be a body having persons who were not themselves members of the profession in the decisive majority and who would not necessarily have the appropriate skill and knowledge to be able to act competently and fairly in respect of hazarding an architect's professional reputation or livelihood; nor would members of the committee be acting under the judicial oath of a judge or a magistrate in a court of criminal or civil jurisdiction, or pursuant to the consensual jurisdiction of an arbitrator. As a safeguard of due process in accordance with the rule of law the statutory provisions for constituting the Professional Conduct Committee in Part II of Schedule 1 of the Act (and as later amended) reflect the usual practice for appointing a legally qualified chairman, with appropriate experience, who can be held to have a professional and judicial responsibility for protecting the basic right of any accused person, whose reputation and livelihood could be at stake, to a fair and unprejudiced hearing and trial. Exoneration Under the first section of Part III of the Act the Board is required to issue a code "laying down standards of professional conduct and practice expected of registered persons", but the same section states explicitly that failure to comply with the provisions of the code shall not be taken of itself to constitute unacceptable professional conduct. In the case of an architect against whom an allegation of unacceptable professional conduct or serious professional incompetence has not been sustained by the Professional Conduct Committee the Act provides for publication of an exonerating statement. 
Definitions The Table of Derivations also shows that certain definitions which were inserted for the purpose of the consolidation included one to make clear that where there is a reference to "unacceptable professional conduct", it has the same meaning as it has in section 14 (not vice versa): in section 14(1) the phrase is expanded as "conduct which falls short of the standard required of a registered person". Interpretation Legislative context of Architects Registration Under the legislation, the registration body has been a statutory corporation from its inception, first as a Council of numerous persons nominated mainly by professional bodies under the 1931 Act, and, from July 1997, as a Board of fifteen persons, of which the majority has been appointed by the Privy Council in the manner prescribed by paragraph 3(1) of Schedule 1 of the Act, that is: ...after consultation with the Secretary of State and such other persons or bodies as the Privy Council thinks fit, to represent the interests of users of architectural services and the general public, Members of the general public clearly have an interest to the extent that the legislation makes it a criminal offence to infringe the restrictions placed upon the freedom of individuals (including qualified architects), and of firms and partnerships, and companies and corporations of all kinds, to use the word "architect". From the 1880s, it has been a moot point whether the effect of such registration and protection of the title "architect" would be to place an undue burden on the profession for too little benefit for the public, or to confer an unfair advantage on the profession, or one section of it, as against competitors. 
But in more recent decades the statutory Register of Architects and the protection of the title "architect" under the legislation has been affected by the obligation of the Government to secure compliance with obligations in connection with membership of the European Union and the European Economic Area. This has brought in its train questions about the criteria and standards for deciding upon equivalence of professional qualifications in EU and EEA countries, and the legitimate expectations of those who have qualified. Accuracy of UK government information about architects registration In May 2006 the then Prime Minister (Mr Blair) arranged for ministerial responsibility for the Architects Registration Board to be transferred from the then Office of the Deputy Prime Minister (ODPM) to a newly formed Department which was to be called "Communities and Local Government" (DCLG) and to be headed by a Secretary of State (Ruth Kelly), to whom the Prime Minister addressed a letter setting out what was required. This Department's official website published the Prime Minister's letter and stated its vision to be of "prosperous and cohesive communities, offering a safe, healthy and sustainable environment for all". 
The website also had a page for describing the Architects Registration Board where it offered a summary of the effect of the legislation, but which had a thread of inaccuracy in three out of four sentences, namely: stating that ARB "succeeded" ARCUK, when the legislation had expressly stated that it was the same body but with another name as from July 1997 (See above: Continuity of Legislation); stating that it was established to guarantee the professional competence of architects to consumers, when there is nothing in the legislation or otherwise giving ARB either the legal powers or the funds to honour any such guarantee; and that all architects must be registered by ARB in order to practise legally in the United Kingdom, when under the legislation (see related articles cited below) an architect, or any other person, is free to perform or supply the services of an architect subject only to the restrictions on the use of the vernacular word "architect" contained in the legislation first enacted by Parliament in the 1938 Act. See also Architects (Registration) Acts, 1931 to 1938 Architects Registration in the United Kingdom Architects Registration Board Architects' Registration Council of the United Kingdom Housing Grants, Construction and Regeneration Act 1996 Reform of Architects Registration References External links The Rt Hon Tony Blair, former Prime Minister About the DCLG Letter from the Prime Minister to Ruth Kelly, 9 May 2006 Non-Departmental Public Bodies The Architects Registration Board Parliamentary Stages of a Government Bill - see page 8 for Consolidation Bills Registration of architects in the United Kingdom United Kingdom Acts of Parliament 1997 Architectural education
Architects Act 1997
Engineering
2,597
517,385
https://en.wikipedia.org/wiki/Dasher%20%28software%29
Dasher is an input method and computer accessibility tool which enables users to compose text without using a keyboard, by entering text on a screen with a pointing device such as a mouse, a touch screen, or a mouse operated by the foot or head. Such instruments can serve as prosthetic devices for disabled people who cannot use standard keyboards, or where the use of one is impractical. Dasher is free and open-source software, subject to the requirements of the GNU General Public License (GPL), version 2. Dasher is available for operating systems with GTK+ support, i.e. Linux, BSDs and other Unix-like systems, as well as macOS, Microsoft Windows, Pocket PC, iOS and Android. Dasher was invented by David J. C. MacKay and developed by David Ward and other members of MacKay's Cambridge research group. The Dasher project is supported by the Gatsby Charitable Foundation and by the EU AEGIS project. Design The writer selects a letter from those displayed on the screen by using a pointer, whereupon the system uses a probabilistic predictive model to anticipate the likely character combinations for the next piece of text, and accords these higher priority by displaying them more prominently than less likely letter combinations. This saves the user effort and time as they proceed to choose the next letter from those offered. The process of composing text in this way has been likened to an arcade game, as users zoom through characters that fly across the screen and select them in order to compose text. The system learns from experience which letter combinations are the most popular, and changes its display over time to reflect this. 
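The predictive display can be sketched as follows: a bigram model trained on previously seen text estimates the probability of each next character, and each candidate is given a share of a unit interval proportional to that probability, much as in arithmetic coding, on which Dasher's interface is based. This is a minimal illustration; Dasher's actual language model is considerably more sophisticated.

```python
# Minimal sketch of the idea behind Dasher's display: estimate next-character
# probabilities from previously seen text and give likelier characters a
# larger share of a unit interval (i.e. more screen space).
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which character follows which in the training text."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1
    return counts

def allocate(counts, context):
    """Return (char, start, end) spans of [0, 1) sized by probability."""
    c = counts[context]
    total = sum(c.values())
    spans, lo = [], 0.0
    for ch, n in c.most_common():
        hi = lo + n / total
        spans.append((ch, lo, hi))
        lo = hi
    return spans

counts = train_bigrams("to the top of the tower")
for ch, lo, hi in allocate(counts, "t"):
    print(f"{ch!r}: [{lo:.2f}, {hi:.2f})")
```

Here, after the context character "t", the model has seen "o" more often than "h", so "o" receives the larger span; in Dasher the corresponding region of the screen would be drawn proportionally larger and hence be easier to steer into.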
Features The Dasher package contains various architecture-independent data files: alphabet descriptions for over 150 languages letter colours settings training files in all supported languages References External links User interfaces User interface techniques Pointing-device text input Disability software Free software programmed in C Free software programmed in C++ Free software programmed in Java (programming language) GNOME Accessibility Cross-platform free software Free and open-source Android software
Dasher (software)
Technology
419
7,057,536
https://en.wikipedia.org/wiki/Eva%20Nogales
Eva Nogales (born in Colmenar Viejo, Madrid, Spain) is a Spanish-American biophysicist at the Lawrence Berkeley National Laboratory and a professor at the University of California, Berkeley, where she served as head of the Division of Biochemistry, Biophysics and Structural Biology of the Department of Molecular and Cell Biology (2015–2020). She is a Howard Hughes Medical Institute investigator. Nogales is a pioneer in using electron microscopy for the structural and functional characterization of macromolecular complexes. She used electron crystallography to obtain the first structure of tubulin and identify the binding site of the important anti-cancer drug taxol. She is a leader in combining cryo-EM, computational image analysis and biochemical assays to gain insights into function and regulation of biological complexes and molecular machines. Her work has uncovered aspects of cellular function that are relevant to the treatment of cancer and other diseases. Early life and education Eva Nogales obtained her BS degree in physics from the Autonomous University of Madrid in 1988. She later earned her PhD from the University of Keele in 1992 while working at the Synchrotron Radiation Source under the supervision of Joan Bordas. Career During her post-doctoral work in the laboratory of Ken Downing at the Lawrence Berkeley National Laboratory, Eva Nogales was the first to determine the atomic structure of tubulin and the location of the taxol-binding site by electron crystallography. She became an assistant professor in the Department of Molecular and Cell Biology at the University of California, Berkeley in 1998. In 2000 she became an investigator in the Howard Hughes Medical Institute. 
As cryo-EM techniques became more powerful, she became a leader in applying cryo-EM to the study of microtubule structure and function and other large macromolecular assemblies such as eukaryotic transcription and translation initiation complexes, the polycomb complex PRC2, and telomerase. Selected publications Awards 2000: investigator, Howard Hughes Medical Institute 2005: Early Career Life Scientist Award, American Society for Cell Biology 2006: Chabot Science Award for Excellence 2015: Dorothy Crowfoot Hodgkin Award, Protein Society 2015: Elected as a member of the US National Academy of Sciences 2016: Elected to the American Academy of Arts and Sciences 2018: Women in Cell Biology Award (Senior), American Society for Cell Biology 2019: Grimwade Medal for Biochemistry. 2021: AAAS Fellows Award. 2023: Shaw Prize in Life Sciences. Personal life Nogales is married to Howard Padmore and they have two children. References External links Molecules in motion Nogales lab Howard Hughes Medical Investigators Living people Molecular biologists Spanish biophysicists Women biophysicists Biophysicists Spanish emigrants to the United States University of California, Berkeley faculty American women biologists 21st-century American women scientists Members of the United States National Academy of Sciences Alumni of Keele University Autonomous University of Madrid alumni Structural biologists 1965 births
Eva Nogales
Chemistry
600
21,671,434
https://en.wikipedia.org/wiki/Barlow%27s%20law
Barlow's law is an incorrect physical law proposed by Peter Barlow in 1825 to describe the ability of wires to conduct electricity. It says that the strength of the effect of electricity passing through a wire varies inversely with the square root of its length and directly with the square root of its cross-sectional area, or, in modern terminology: I ∝ √A/√L, where I is electric current, A is the cross-sectional area of the wire, and L is the length of the wire. Barlow formulated his law in terms of the diameter d of a cylindrical wire. Since A is proportional to the square of d, the law becomes I ∝ d/√L for cylindrical wires. Barlow undertook his experiments with the aim of determining whether long-distance telegraphy was feasible and believed that he proved that it was not. The publication of Barlow's law delayed research into telegraphy for several years, until 1831 when Joseph Henry and Philip Ten Eyck constructed a circuit 1,060 feet long, which used a large battery to activate an electromagnet. Barlow did not investigate the dependence of the current strength on electric tension (that is, voltage). He endeavoured to keep this constant, but admitted there was some variation. Barlow was not entirely certain that he had found the correct law, writing "the discrepancies are rather too great to enable us to say, with confidence, that such is the law in question." In 1827, Georg Ohm published a different law, in which current varies inversely with the wire's length, not its square root; that is, I = k/L, where k is a constant dependent on the circuit setup. Ohm's law is now considered the correct law, and Barlow's false. The law Barlow proposed was not in error due to poor measurement; in fact, it fits Barlow's careful measurements quite well. Heinrich Lenz pointed out that Ohm took into account "all the conducting resistances … of the circuit", whereas Barlow did not. Ohm explicitly included a term for what we would now call the internal resistance of the battery. 
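Lenz's observation can be illustrated numerically: if a battery with electromotive force E and internal resistance r drives a wire whose resistance grows linearly with length L, the measured current I = E/(r + kL) is not a pure power law in L, yet over a limited range of lengths a least-squares fit yields a fractional exponent between 0 and -1, much as Barlow's measurements suggested an inverse square root. The values below are illustrative, not Barlow's actual data.

```python
# Why Barlow's data fit a power law: with internal resistance r included,
# I = E / (r + k*L) interpolates between constant current (small L) and
# I ~ 1/L (large L), so a fit over a finite range gives a fractional exponent.
# All parameter values are illustrative.
import numpy as np

E, r, k = 1.0, 5.0, 1.0          # EMF, internal resistance, wire ohms per unit length
L = np.linspace(1.0, 10.0, 50)   # wire lengths tested
I = E / (r + k * L)

# Least-squares fit of log I = const + slope * log L
slope, _ = np.polyfit(np.log(L), np.log(I), 1)
print(f"best-fit power-law exponent: {slope:.2f}")
```

With these illustrative values the fitted exponent comes out near -1/2, showing how careful measurements on a circuit dominated by the battery's internal resistance could appear to obey Barlow's inverse-square-root law rather than Ohm's inverse-length law.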
Barlow did not have this term and approximated the results with a power law instead. Ohm's law in modern usage is rarely stated with this explicit term, but nevertheless an awareness of it is necessary for a full understanding of the current in a circuit. References History of science Obsolete theories in physics
Barlow's law
Physics,Technology
473
3,207,172
https://en.wikipedia.org/wiki/Malcolm%20Smith%20%28artist%29
Malcolm H. Smith (1910–1966) was an American artist identified with the retro-futurist tradition. Early life Malcolm Hadden Smith was born in Memphis, Tennessee in 1910. He displayed a talent for drawing and painting at an early age. He also enjoyed reading pulp sci-fi books and was a world-class target archer, winning many trophies and medals. He studied art at Southwestern Junior College in Memphis. In his early 20s, he moved to Chicago, Illinois, and attended the American Academy of Art and the Art Institute. While attending college, Malcolm worked as a haberdasher and a cabinetmaker. After graduating from school, he immediately started work as a commercial illustrator. He did illustrations for newspapers, magazine ads, posters and billboards. Career In the mid to late 1930s, Malcolm started working as a freelance artist for the various pulp publishers in Chicago. He did illustrations and paintings for mystery, romance, detective, western, and sci-fi pulps. He was, for a while, the art director at Ziff-Davis Publishing. At other times he worked in a Chicago-based artist's cooperative called Bendelow and Associates. Malcolm often used himself, his friends, and family members as models for his works. In the later part of 1959, Malcolm moved his family to Huntsville, Alabama, where he worked as a concept artist and animator for the Marshall Space Flight Center of NASA. While at NASA, Malcolm worked with Fred Ordway, who became the technical adviser for the Kubrick movie 2001: A Space Odyssey. Fred suggested to Arthur C. Clarke that they use Malcolm for the concept and design work for the movie. Arthur C. Clarke sent a letter to Malcolm requesting his assistance on the movie, but, at the time, Malcolm was busy illustrating a book called "In Search for Life on Other Worlds" and he was unavailable for the movie. Also in the 1960s, Malcolm was asked to appear on the show "I've Got A Secret" as a famous sci-fi artist. 
Throughout his career he became friends with many other artists, writers, scientists and engineers, including Virgil Finlay, Ray Paul, Edgar Rice Burroughs, and Wernher von Braun. Malcolm H. Smith is considered by many sci-fi fans to be one of the founding fathers of this genre of art. He died in 1966 of lung cancer, and is buried in Huntsville, Alabama. References External links Field Guide To Wild American Pulp Artists 1910 births 1966 deaths American artists Pulp fiction artists Space artists
Malcolm Smith (artist)
Astronomy
534
41,669
https://en.wikipedia.org/wiki/Ringdown
In telephony, ringdown is a method of signaling an operator in which telephone ringing current is sent over the line to operate a lamp or cause the operation of a self-locking relay known as a drop. Ringdown is used in manual operation, and is distinguished from automatic signaling by dialing a number. The signal consists of a continuous or pulsed alternating current (AC) signal transmitted over the line. It may be used with or without a telephone switchboard. The term originated in magneto telephone signaling in which cranking the magneto generator, either integrated into the telephone set or housed in a connected ringer box, would not only ring its bell but also cause a drop to fall down at the telephone exchange switchboard, marked with the number of the line to which the magneto telephone instrument was connected. At the end of the conversation, one participant would crank to ring off, signaling the operator to take down the connection. In modern British English, "ring off" still means ending a telephone conversation, though it is of course done by other means. Ring off is also used figuratively to indicate no longer communicating with a person. The last ringdown telephone exchange in the United States was located at Bryant Pond, Maine, had 400+ subscribers, and converted to dial service in October 1983. Ringdown operator In telephone systems where calls from distant automated exchanges arrive for manual subscribers or non-dialable points, there often would be a ringdown operator (reachable from the distant operator console by dialling NPA+181) who would manually ring the desired subscriber on a party line or toll station. On some systems, this function was carried out by the inward operator (NPA+121). In both cases, this is a telephone operator at the destination who provides assistance solely to other operators on inbound toll calls; the ringdown operator nominally cannot be dialed directly by the subscriber. 
Non-operator use In an application not involving a telephone operator, a two-point automatic ringdown circuit, or ringdown, has a telephone at each end. When the telephone at one end goes off-hook, the phone at the other end instantly rings. No dialing is involved and therefore telephone sets without dials are sometimes used. Many ringdown circuits work in both directions. In some cases a circuit is designed to work in one direction only. That is, going off-hook at one end (end A) rings the other (end B). Going off-hook at end B has no effect at end A. Ringdown features are often part of a key telephone system. In the wire spring relay key service units of the Bell System 1A2, a model 216 automatic ringdown was used to operate the circuit. In the 400-series units, a number of different KTUs operate (supervise) a ringdown, including the model 415. In other situations, the ringdown is powered and operated by equipment inside the telephone exchange. In the case of enterprises with a private branch exchange (PBX) switch, the ringdown can be operated by the PBX key. The switch is programmed to ring a specific extension (the called phone) when a defined extension (the calling phone) goes off-hook. The PBX does not offer dial tone to the calling extension: it only detects on-hook or off-hook status. Voice over IP adapters can be networked and configured to provide automatic ringdown by selecting a dial plan which replaces the empty string with a predefined number or SIP address, dialed immediately. (Some Cisco VoIP phones and analog adapters treat a dial plan of (S0 <:1234567890>) as a hotline configuration which dials 1-234-567890 zero seconds after the telephone is taken off-hook, for instance). These circuits are used: over high-volume routes where one site calls another very frequently. Example: an information desk and the information desk staff supervisor's desk. where a tamper-proof ability to call from one point to another is needed. 
Example: a phone used to summon a taxicab to an airport or hotel. where a limited ability to contact one entity (but no ability to make outside calls) is desired. Example: a "house phone" in a hotel lobby to the live operator at the hotel's switchboard where the public, or users that are not trained in using a specific office telephone system, must place calls. Example: the after-hours phone to reach the watchman from the front door at a warehouse. in locations where emergencies are handled and the time required to dial digits would cause an unacceptable delay in handling of an emergency. Example: an airport control tower to the airport's fire station or fire dispatch center. Example: Independent System Operator (ISO) communication to a power plant. in situations where the called party needs to be certain of who is calling. Example: a hospital emergency department and an ambulance dispatch center. In some cases, automatic ringdown circuits have one-to-many configurations. When one phone goes off-hook, a group of phones is made to ring simultaneously. In cases where one or both ends of the circuit terminate in a key telephone system, a well designed system will have no hold feature on the ringdown circuit unless supervision provides a Calling Party Control (CPC) signal. PLAR Private line automatic ringdown (PLAR) is a type of analog signaling often used in telephone-based systems. When a device is taken off-hook, ringing voltage is automatically applied to a circuit to alert other stations on the line. When answered on another station, a call is maintained over the circuit. The telephone company switch is not involved in the process, making this a private line. See also Courtesy phone Dedicated line References External links The Last Ringdown, 1980 documentary on the Bryant Pond Telephone Company PLAR Configuration Example, on Cisco Call Manager (CUCM) v.6 Communication circuits Telephony equipment
Ringdown
Engineering
1,220
57,148,349
https://en.wikipedia.org/wiki/HD%20191806%20b
HD 191806 b is an exoplanet orbiting HD 191806, a K-type star. It has a minimum mass of 8.52 times that of Jupiter. It does not orbit within the habitable zone. In 2022, the inclination and true mass of HD 191806 b were measured via astrometry. References External links Giant planets Exoplanets discovered in 2016 Exoplanets detected by radial velocity Exoplanets detected by astrometry Cygnus (constellation)
HD 191806 b
Astronomy
103
902,820
https://en.wikipedia.org/wiki/Emissivity
The emissivity of the surface of a material is its effectiveness in emitting energy as thermal radiation. Thermal radiation is electromagnetic radiation that most commonly includes both visible radiation (light) and infrared radiation, which is not visible to human eyes. A portion of the thermal radiation from very hot objects is easily visible to the eye. The emissivity of a surface depends on its chemical composition and geometrical structure. Quantitatively, it is the ratio of the thermal radiation from a surface to the radiation from an ideal black surface at the same temperature as given by the Stefan–Boltzmann law. (A comparison with Planck's law is used if one is concerned with particular wavelengths of thermal radiation.) The ratio varies from 0 to 1. The surface of a perfect black body (with an emissivity of 1) emits thermal radiation at the rate of approximately 448 watts per square metre (W/m²) at a room temperature of 25 °C (298 K). Real objects have emissivities less than 1.0, and emit radiation at correspondingly lower rates. However, wavelength- and subwavelength-scale particles, metamaterials, and other nanostructures may have an emissivity greater than 1. Practical applications Emissivities are important in a variety of contexts: Insulated windows Warm surfaces are usually cooled directly by air, but they also cool themselves by emitting thermal radiation. This second cooling mechanism is important for simple glass windows, which have emissivities close to the maximum possible value of 1.0. "Low-E windows" with transparent low-emissivity coatings emit less thermal radiation than ordinary windows. In winter, these coatings can halve the rate at which a window loses heat compared to an uncoated glass window. Solar heat collectors Similarly, solar heat collectors lose heat by emitting thermal radiation. Advanced solar collectors incorporate selective surfaces that have very low emissivities.
These collectors waste very little of the solar energy through emission of thermal radiation. Thermal shielding For the protection of structures from high surface temperatures, such as reusable spacecraft or hypersonic aircraft, high-emissivity coatings (HECs), with emissivity values near 0.9, are applied on the surface of insulating ceramics. This facilitates radiative cooling and protection of the underlying structure and is an alternative to ablative coatings, used in single-use reentry capsules. Passive daytime radiative cooling Daytime passive radiative coolers use the extremely cold temperature of outer space (~2.7 K) to emit heat and lower ambient temperatures while requiring zero energy input. These surfaces minimize the absorption of solar radiation to lessen heat gain in order to maximize the emission of LWIR thermal radiation. It has been proposed as a solution to global warming. Planetary temperatures The planets are solar thermal collectors on a large scale. The temperature of a planet's surface is determined by the balance between the heat absorbed by the planet from sunlight, heat emitted from its core, and thermal radiation emitted back into space. Emissivity of a planet is determined by the nature of its surface and atmosphere. Temperature measurements Pyrometers and infrared cameras are instruments used to measure the temperature of an object by using its thermal radiation; no actual contact with the object is needed. The calibration of these instruments involves the emissivity of the surface that's being measured. Mathematical definitions In its most general form, emissivity can be specified for a particular wavelength, direction, and polarization. However, the most commonly used form of emissivity is the hemispherical total emissivity, which considers emissions as totaled over all wavelengths, directions, and polarizations, given a particular temperature. Some specific forms of emissivity are detailed below. 
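The definitions that follow rest on the Stefan–Boltzmann law, which is easy to check numerically. The short Python sketch below (added for illustration; the 298.15 K room temperature and the 0.9 emissivity value are assumptions, not figures from the article) reproduces the roughly 448 W/m² black-body figure quoted above and shows how a grey surface scales it:

```python
# Stefan–Boltzmann check: radiant exitance of a black body at room
# temperature, and of a grey body with an assumed emissivity of 0.9.
SIGMA = 5.670374419e-8  # Stefan–Boltzmann constant, W·m⁻²·K⁻⁴

def radiant_exitance(T_kelvin, emissivity=1.0):
    """Power emitted per unit area (W/m²) by a surface at temperature T."""
    return emissivity * SIGMA * T_kelvin ** 4

black = radiant_exitance(298.15)       # ideal black body at 25 °C: ≈ 448 W/m²
grey = radiant_exitance(298.15, 0.9)   # e.g. a high-emissivity coating

print(f"black body: {black:.0f} W/m²")
print(f"ε = 0.9:    {grey:.0f} W/m²")
```

Emissivity enters only as a multiplicative factor, so the grey surface emits exactly 90% of the black-body value at the same temperature.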
Hemispherical emissivity Hemispherical emissivity of a surface, denoted ε, is defined as ε = Me / Me°, where Me is the radiant exitance of that surface and Me° is the radiant exitance of a black body at the same temperature as that surface. Spectral hemispherical emissivity Spectral hemispherical emissivity in frequency and spectral hemispherical emissivity in wavelength of a surface, denoted εν and ελ, respectively, are defined as εν = Me,ν / Me,ν° and ελ = Me,λ / Me,λ°, where Me,ν is the spectral radiant exitance in frequency of that surface; Me,ν° is the spectral radiant exitance in frequency of a black body at the same temperature as that surface; Me,λ is the spectral radiant exitance in wavelength of that surface; Me,λ° is the spectral radiant exitance in wavelength of a black body at the same temperature as that surface. Directional emissivity Directional emissivity of a surface, denoted εΩ, is defined as εΩ = Le,Ω / Le,Ω°, where Le,Ω is the radiance of that surface and Le,Ω° is the radiance of a black body at the same temperature as that surface. Spectral directional emissivity Spectral directional emissivity in frequency and spectral directional emissivity in wavelength of a surface, denoted εν,Ω and ελ,Ω, respectively, are defined as εν,Ω = Le,Ω,ν / Le,Ω,ν° and ελ,Ω = Le,Ω,λ / Le,Ω,λ°, where Le,Ω,ν is the spectral radiance in frequency of that surface; Le,Ω,ν° is the spectral radiance in frequency of a black body at the same temperature as that surface; Le,Ω,λ is the spectral radiance in wavelength of that surface; Le,Ω,λ° is the spectral radiance in wavelength of a black body at the same temperature as that surface. Hemispherical emissivity can also be expressed as a weighted average of the directional spectral emissivities as described in textbooks on "radiative heat transfer". Emissivities of common surfaces Emissivities ε can be measured using simple devices such as Leslie's cube in conjunction with a thermal radiation detector such as a thermopile or a bolometer.
The apparatus compares the thermal radiation from a surface to be tested with the thermal radiation from a nearly ideal, black sample. The detectors are essentially black absorbers with very sensitive thermometers that record the detector's temperature rise when exposed to thermal radiation. For measuring room temperature emissivities, the detectors must absorb thermal radiation completely at infrared wavelengths near 10×10⁻⁶ metre (10 μm). Visible light has a wavelength range of about 0.4–0.7×10⁻⁶ metre from violet to deep red. Emissivity measurements for many surfaces are compiled in handbooks and texts. Some of these are listed in the following table. Notes: These emissivities are the total hemispherical emissivities from the surfaces. The values of the emissivities apply to materials that are optically thick. This means that the absorptivity at the wavelengths typical of thermal radiation doesn't depend on the thickness of the material. Very thin materials emit less thermal radiation than thicker materials. Most emissivities in the chart above were recorded at room temperature.
Black soot absorbs thermal radiation very well; it has an emissivity as large as 0.97, and hence soot is a fair approximation to an ideal black body. With the exception of bare, polished metals, the appearance of a surface to the eye is not a good guide to emissivities near room temperature. For example, white paint absorbs very little visible light. However, at an infrared wavelength of 10×10⁻⁶ metre (10 μm), paint absorbs light very well, and has a high emissivity. Similarly, pure water absorbs very little visible light, but water is nonetheless a strong infrared absorber and has a correspondingly high emissivity. Emittance Emittance (or emissive power) is the total amount of thermal energy emitted per unit area per unit time for all possible wavelengths. Emissivity of a body at a given temperature is the ratio of the total emissive power of a body to the total emissive power of a perfectly black body at that temperature. Following Planck's law, the total energy radiated increases with temperature while the peak of the emission spectrum shifts to shorter wavelengths. The energy emitted at shorter wavelengths increases more rapidly with temperature. For example, an ideal blackbody in thermal equilibrium at , will emit 97% of its energy at wavelengths below . The term emissivity is generally used to describe a simple, homogeneous surface such as silver. Similar terms, emittance and thermal emittance, are used to describe thermal radiation measurements on complex surfaces such as insulation products. Measurement of Emittance Emittance of a surface can be measured directly or indirectly from the emitted energy from that surface. In the direct radiometric method, the emitted energy from the sample is measured directly using a spectroscope such as Fourier transform infrared spectroscopy (FTIR). In the indirect calorimetric method, the emitted energy from the sample is measured indirectly using a calorimeter.
In addition to these two commonly applied methods, an inexpensive emission measurement technique based on the principle of two-color pyrometry has also been developed. Emissivities of planet Earth The emissivity of a planet or other astronomical body is determined by the composition and structure of its outer skin. In this context, the "skin" of a planet generally includes both its semi-transparent atmosphere and its non-gaseous surface. The resulting radiative emissions to space typically function as the primary cooling mechanism for these otherwise isolated bodies. The balance between all other incoming plus internal sources of energy versus the outgoing flow regulates planetary temperatures. For Earth, equilibrium skin temperatures range near the freezing point of water, 260±50 K (-13±50 °C, 8±90 °F). The most energetic emissions are thus within a band spanning about 4-50 μm as governed by Planck's law. Emissivities for the atmosphere and surface components are often quantified separately, and validated against satellite- and terrestrial-based observations as well as laboratory measurements. These emissivities serve as parametrizations within some simpler meteorologic and climatologic models. Surface Earth's surface emissivities (εs) have been inferred with satellite-based instruments by directly observing surface thermal emissions at nadir through a less obstructed atmospheric window spanning 8-13 μm. Values range about εs=0.65-0.99, with lowest values typically limited to the most barren desert areas. Emissivities of most surface regions are above 0.9 due to the dominant influence of water, including oceans, land vegetation, and snow/ice. Globally averaged estimates for the hemispheric emissivity of Earth's surface are in the vicinity of εs=0.95. Atmosphere Water also dominates the planet's atmospheric emissivity and absorptivity in the form of water vapor.
Clouds, carbon dioxide, and other components make substantial additional contributions, especially where there are gaps in the water vapor absorption spectrum. Nitrogen (N2) and oxygen (O2), the primary atmospheric components, interact less significantly with thermal radiation in the infrared band. Direct measurement of Earth's atmospheric emissivities (εa) is more challenging than for land surfaces due in part to the atmosphere's multi-layered and more dynamic structure. Upper and lower limits have been measured and calculated for εa in accordance with extreme yet realistic local conditions. At the upper limit, dense low cloud structures (consisting of liquid/ice aerosols and saturated water vapor) close the infrared transmission windows, yielding near to black body conditions with εa≈1. At a lower limit, clear sky (cloud-free) conditions promote the largest opening of transmission windows. The more uniform concentration of long-lived trace greenhouse gases in combination with water vapor pressures of 0.25-20 mbar then yield minimum values in the range of εa=0.55-0.8 (with ε=0.35-0.75 for a simulated water-vapor-only atmosphere). Carbon dioxide (CO2) and other greenhouse gases contribute about ε=0.2 to εa when atmospheric humidity is low. Researchers have also evaluated the contribution of differing cloud types to atmospheric absorptivity and emissivity. Today, the detailed processes and complex properties of radiation transport through the atmosphere are evaluated by general circulation models using radiation transport codes and databases such as MODTRAN/HITRAN. Emission, absorption, and scattering are thereby simulated through both space and time. For many practical applications it may not be possible, economical or necessary to know all emissivity values locally. "Effective" or "bulk" values for an atmosphere or an entire planet may be used.
These can be based upon remote observations (from the ground or outer space) or defined according to the simplifications utilized by a particular model. For example, an effective global value of εa≈0.78 has been estimated from application of an idealized single-layer-atmosphere energy-balance model to Earth. Effective emissivity due to atmosphere The IPCC reports an outgoing thermal radiation flux (OLR) of 239 (237–242) W m⁻² and a surface thermal radiation flux (SLR) of 398 (395–400) W m⁻², where the parenthesized amounts indicate the 5-95% confidence intervals as of 2015. These values indicate that the atmosphere (with clouds included) reduces Earth's overall emissivity, relative to its surface emissions, by a factor of 239/398 ≈ 0.60. In other words, emissions to space are given by OLR = εeff σ Tse⁴, where εeff is the effective emissivity of Earth as viewed from space, σ is the Stefan–Boltzmann constant, and Tse is the effective temperature of the surface. History The concepts of emissivity and absorptivity, as properties of matter and radiation, appeared in the late-eighteenth through mid-nineteenth century writings of Pierre Prévost, John Leslie, Balfour Stewart and others. In 1860, Gustav Kirchhoff published a mathematical description of their relationship under conditions of thermal equilibrium (i.e. Kirchhoff's law of thermal radiation). By 1884 the emissive power of a perfect blackbody was inferred by Josef Stefan using John Tyndall's experimental measurements, and derived by Ludwig Boltzmann from fundamental statistical principles. Emissivity, defined as a further proportionality factor to the Stefan–Boltzmann law, was thus implied and utilized in subsequent evaluations of the radiative behavior of grey bodies. For example, Svante Arrhenius applied the recent theoretical developments to his 1896 investigation of Earth's surface temperatures as calculated from the planet's radiative equilibrium with all of space.
By 1900 Max Planck empirically derived a generalized law of blackbody radiation, thus clarifying the emissivity and absorptivity concepts at individual wavelengths. Other radiometric coefficients See also Albedo Black-body radiation Passive daytime radiative cooling Radiant barrier Reflectance Sakuma–Hattori equation Stefan–Boltzmann law View factor Wien's displacement law References Further reading An open community-focused website & directory with resources related to spectral emissivity and emittance. On this site, the focus is on available data, references and links to resources related to spectral emissivity as it is measured & used in thermal radiation thermometry and thermography (thermal imaging). Resources, Tools and Basic Information for Engineering and Design of Technical Applications. This site offers an extensive list of other material not covered above. Physical quantities Radiometry Heat transfer
Emissivity
Physics,Chemistry,Mathematics,Engineering
3,352
18,661,117
https://en.wikipedia.org/wiki/Lattice%20problem
In computer science, lattice problems are a class of optimization problems related to mathematical objects called lattices. The conjectured intractability of such problems is central to the construction of secure lattice-based cryptosystems: lattice problems are an example of NP-hard problems which have been shown to be average-case hard, providing a test case for the security of cryptographic algorithms. In addition, some lattice problems which are worst-case hard can be used as a basis for extremely secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against quantum computers. For applications in such cryptosystems, lattices over vector spaces (often ℝⁿ) or free modules (often ℤⁿ) are generally considered. For all the problems below, assume that we are given (in addition to other more specific inputs) a basis for the vector space V and a norm N. The norm usually considered is the Euclidean norm L2. However, other norms (such as Lp) are also considered and show up in a variety of results. Throughout this article, let λ(L) denote the length of the shortest non-zero vector in the lattice L: that is, λ(L) = min { N(v) : v ∈ L, v ≠ 0 }. Shortest vector problem (SVP) In the SVP, a basis of a vector space V and a norm N (often L2) are given for a lattice L and one must find the shortest non-zero vector in V, as measured by N, in L. In other words, the algorithm should output a non-zero vector v such that N(v) = λ(L). In the γ-approximation version SVPγ, one must find a non-zero lattice vector of length at most γ·λ(L) for given γ ≥ 1. Hardness results The exact version of the problem is only known to be NP-hard for randomized reductions. By contrast, the corresponding problem with respect to the uniform norm is known to be NP-hard.
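To make the problem statement concrete, the following Python sketch (added for illustration; it is not an algorithm from the literature, and the coefficient bound is an assumption that happens to suffice for this toy lattice) finds a shortest non-zero vector of a small two-dimensional lattice by exhaustive search over integer coefficients:

```python
from itertools import product

def shortest_vector(basis, coeff_bound=5):
    """Brute-force SVP in the Euclidean norm: try every integer
    coefficient vector with entries in [-coeff_bound, coeff_bound].
    Exponential in the dimension -- for illustration only."""
    dim = len(basis)
    best, best_norm2 = None, None
    for coeffs in product(range(-coeff_bound, coeff_bound + 1), repeat=dim):
        if all(c == 0 for c in coeffs):
            continue  # the zero vector is excluded by definition
        v = [sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(dim)]
        norm2 = sum(x * x for x in v)  # squared Euclidean length
        if best_norm2 is None or norm2 < best_norm2:
            best, best_norm2 = v, norm2
    return best, best_norm2

# Basis (2, 1), (1, 2): a shortest non-zero vector is ±(1, -1), squared length 2.
v, n2 = shortest_vector([(2, 1), (1, 2)])
print(v, n2)
```

Note that no short vector of this lattice lies among the basis vectors themselves, which is exactly what makes the problem non-trivial; real SVP instances have hundreds of dimensions, where such enumeration is hopeless.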
Algorithms for the Euclidean norm To solve the exact version of the SVP under the Euclidean norm, several different approaches are known, which can be split into two classes: algorithms requiring superexponential time (2^O(n log n)) and polynomial memory, and algorithms requiring both exponential time and space (2^Θ(n)) in the lattice dimension n. The former class of algorithms most notably includes lattice enumeration and random sampling reduction, while the latter includes lattice sieving, computing the Voronoi cell of the lattice, and discrete Gaussian sampling. An open problem is whether algorithms for solving exact SVP exist running in single exponential time (2^O(n)) and requiring memory scaling polynomially in the lattice dimension. To solve the γ-approximation version SVPγ with γ > 1 for the Euclidean norm, the best known approaches are based on using lattice basis reduction. For large γ, the Lenstra–Lenstra–Lovász (LLL) algorithm can find a solution in time polynomial in the lattice dimension. For smaller values of γ, the Block Korkine–Zolotarev algorithm (BKZ) is commonly used, where the input to the algorithm (the blocksize β) determines the time complexity and output quality: for large approximation factors γ, a small block size β suffices, and the algorithm terminates quickly. For small γ, larger β are needed to find sufficiently short lattice vectors, and the algorithm takes longer to find a solution. The BKZ algorithm internally uses an exact SVP algorithm as a subroutine (running in lattices of dimension at most β), and its overall complexity is closely related to the costs of these SVP calls in dimension β. GapSVP The problem GapSVPβ consists of distinguishing between the instances of SVP in which the length of the shortest vector is at most 1 or larger than β, where β can be a fixed function of the dimension n of the lattice. Given a basis for the lattice, the algorithm must decide whether λ(L) ≤ 1 or λ(L) > β. Like other promise problems, the algorithm is allowed to err on all other cases.
Yet another version of the problem is GapSVPζ,γ for some functions ζ and γ. The input to the algorithm is a basis B and a number d. It is assured that all the vectors in the Gram–Schmidt orthogonalization are of length at least 1, that λ(L) ≤ ζ(n), and that 1 ≤ d ≤ ζ(n)/γ(n), where n is the dimension. The algorithm must accept if λ(L) ≤ d, and reject if λ(L) ≥ γ(n)·d. For large ζ (i.e. ζ(n) > 2^(n/2)), the problem is equivalent to GapSVPγ because a preprocessing done using the LLL algorithm makes the second condition (and hence, ζ) redundant. Closest vector problem (CVP) In CVP, a basis of a vector space V and a metric M (often L2) are given for a lattice L, as well as a vector v in V but not necessarily in L. It is desired to find the vector in L closest to v (as measured by M). In the γ-approximation version CVPγ, one must find a lattice vector at distance at most γ·dist(v, L). Relationship with SVP The closest vector problem is a generalization of the shortest vector problem. It is easy to show that given an oracle for CVPγ (defined below), one can solve SVPγ by making some queries to the oracle. The naive method to find the shortest vector by calling the CVPγ oracle to find the closest vector to 0 does not work because 0 is itself a lattice vector and the algorithm could potentially output 0. The reduction from SVPγ to CVPγ is as follows: Suppose that the input to the SVPγ instance is the basis B = [b1, b2, ..., bn] for lattice L. Consider the basis B(i) obtained from B by replacing bi with 2bi, and let xi be the vector returned by the CVPγ oracle on input (B(i), bi). The claim is that the shortest vector in the set {xi − bi : 1 ≤ i ≤ n} is the shortest vector in the given lattice. Hardness results Goldreich et al. showed that any hardness of SVP implies the same hardness for CVP. Using PCP tools, Arora et al. showed that CVP is hard to approximate within factor 2^(log^(1−ε) n) unless NP is in quasi-polynomial time. Dinur et al. strengthened this by giving an NP-hardness result with ε = (log log n)^c for c < 1/2. Sphere decoding Algorithms for CVP, especially the Fincke and Pohst variant, have been used for data detection in multiple-input multiple-output (MIMO) wireless communication systems (for coded and uncoded signals).
In this context it is called sphere decoding due to the sphere radius used internally in many CVP solutions. It has been applied in the field of the integer ambiguity resolution of carrier-phase GNSS (GPS). It is called the LAMBDA method in that field. In the same field, the general CVP problem is referred to as Integer Least Squares. GapCVP This problem is similar to the GapSVP problem. For GapCVPβ, the input consists of a lattice basis B and a vector v, and the algorithm must answer which one of the following holds: there is a lattice vector such that the distance between it and v is at most 1, or every lattice vector is at a distance greater than β away from v. The opposite condition is that the closest lattice vector is at a distance between 1 and β, hence the name GapCVP. Known results The problem is trivially contained in NP for any approximation factor. Schnorr, in 1987, showed that deterministic polynomial time algorithms can solve the problem for γ = 2^O(n (log log n)² / log n). Ajtai et al. showed that probabilistic algorithms can achieve a slightly better approximation factor of γ = 2^O(n log log n / log n). In 1993, Banaszczyk showed that GapCVPn is in NP ∩ coNP. In 2000, Goldreich and Goldwasser showed that γ = √(n/log n) puts the problem in both NP and coAM. In 2005, Aharonov and Regev showed that for some constant c, the problem with γ = c·√n is in NP ∩ coNP. For lower bounds, Dinur et al. showed in 1998 that the problem is NP-hard for γ = n^(c/log log n). Shortest independent vectors problem (SIVP) Given a lattice L of dimension n, the algorithm must output n linearly independent lattice vectors v1, v2, ..., vn so that max_i ‖vi‖ ≤ min_B max_i ‖bi‖, where the right-hand side considers all bases B of the lattice. In the γ-approximate version, given a lattice L with dimension n, one must find n linearly independent vectors of length max_i ‖vi‖ ≤ γ·λn(L), where λn(L) is the nth successive minimum of L. Bounded distance decoding This problem is similar to CVP. Given a vector such that its distance from the lattice is at most λ(L)/2, the algorithm must output the closest lattice vector to it.
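A toy CVP instance can be solved the same naive way as SVP, by exhaustively searching a bounded range of integer coefficients. The Python sketch below is added for illustration only (the coefficient bound is an assumption that suffices for this example; real instances are intractable this way):

```python
from itertools import product

def closest_vector(basis, target, coeff_bound=5):
    """Brute-force CVP in the Euclidean norm over integer combinations
    of the basis vectors with coefficients in [-coeff_bound, coeff_bound].
    Exponential in the dimension -- for illustration only."""
    dim = len(basis)
    best, best_d2 = None, None
    for coeffs in product(range(-coeff_bound, coeff_bound + 1), repeat=dim):
        w = tuple(sum(c * b[i] for c, b in zip(coeffs, basis)) for i in range(dim))
        d2 = sum((x - t) ** 2 for x, t in zip(w, target))  # squared distance
        if best_d2 is None or d2 < best_d2:
            best, best_d2 = w, d2
    return best, best_d2

# Lattice generated by (2, 0) and (0, 2); the target (1.2, 0.9) is not in it.
w, d2 = closest_vector([(2, 0), (0, 2)], (1.2, 0.9))
print(w)  # (2, 0), the nearest lattice point
```

Unlike the SVP case, the zero vector is a legitimate answer here, so no special casing is needed.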
Covering radius problem Given a basis for the lattice, the algorithm must find the largest distance (or in some versions, its approximation) from any vector to the lattice. Shortest basis problem Many problems become easier if the input basis consists of short vectors. An algorithm that solves the Shortest Basis Problem (SBP) must, given a lattice basis B, output an equivalent basis B′ such that the length of the longest vector in B′ is as short as possible. The approximation version SBPγ problem consists of finding a basis whose longest vector is at most γ times longer than the longest vector in the shortest basis. Use in cryptography Average-case hardness of problems forms a basis for proofs-of-security for most cryptographic schemes. However, experimental evidence suggests that most NP-hard problems lack this property: they are probably only worst case hard. Many lattice problems have been conjectured or proven to be average-case hard, making them an attractive class of problems to base cryptographic schemes on. Moreover, worst-case hardness of some lattice problems has been used to create secure cryptographic schemes. The use of worst-case hardness in such schemes makes them among the very few schemes that are very likely secure even against quantum computers. The above lattice problems are easy to solve if the algorithm is provided with a "good" basis. Lattice reduction algorithms aim, given a basis for a lattice, to output a new basis consisting of relatively short, nearly orthogonal vectors. The Lenstra–Lenstra–Lovász lattice basis reduction algorithm (LLL) was an early efficient algorithm for this problem which could output an almost reduced lattice basis in polynomial time. This algorithm and its further refinements were used to break several cryptographic schemes, establishing its status as a very important tool in cryptanalysis.
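In dimension two, the flavor of lattice basis reduction can be shown in full. The Python sketch below (added for illustration) implements Lagrange–Gauss reduction, the two-dimensional precursor of LLL; unlike LLL in higher dimensions, it provably returns a shortest non-zero vector of the lattice:

```python
def gauss_reduce(u, v):
    """Lagrange–Gauss reduction of a 2D lattice basis (u, v).
    On return, u is a shortest non-zero vector of the lattice."""
    norm2 = lambda w: w[0] * w[0] + w[1] * w[1]
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
    while True:
        if norm2(u) > norm2(v):
            u, v = v, u  # keep u the shorter vector
        # subtract the nearest-integer multiple of u from v (size reduction)
        m = round(dot(u, v) / norm2(u))
        if m == 0:
            return u, v  # basis is reduced
        v = (v[0] - m * u[0], v[1] - m * u[1])

u, v = gauss_reduce((2, 1), (1, 2))
print(u)  # a shortest vector, e.g. (-1, 1) or (1, -1)
```

Each size-reduction step strictly shortens the longer basis vector, so the loop terminates; the same "swap and size-reduce" pattern is what LLL generalizes, block by block, in higher dimensions.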
The success of LLL on experimental data led to a belief that lattice reduction might be an easy problem in practice; however, this belief was challenged in the late 1990s, when several new results on the hardness of lattice problems were obtained, starting with the result of Ajtai. In his seminal papers, Ajtai showed that the SVP problem was NP-hard and discovered some connections between the worst-case complexity and average-case complexity of some lattice problems. Building on these results, Ajtai and Dwork created a public-key cryptosystem whose security could be proven using only the worst case hardness of a certain version of SVP, thus making it the first result to have used worst-case hardness to create secure systems. See also Learning with errors Short integer solution problem References Further reading Computational hardness assumptions Lattice-based cryptography Mathematical problems Post-quantum cryptography
Lattice problem
Mathematics
2,192
52,385
https://en.wikipedia.org/wiki/Axiom%20of%20pairing
In axiomatic set theory and the branches of logic, mathematics, and computer science that use it, the axiom of pairing is one of the axioms of Zermelo–Fraenkel set theory. It was introduced by Zermelo (1908) as a special case of his axiom of elementary sets. Formal statement In the formal language of the Zermelo–Fraenkel axioms, the axiom reads: ∀A ∀B ∃C ∀D [D ∈ C ⇔ (D = A ∨ D = B)]. In words: Given any object A and any object B, there is a set C such that, given any object D, D is a member of C if and only if D is equal to A or D is equal to B. Consequences As noted, what the axiom is saying is that, given two objects A and B, we can find a set C whose members are exactly A and B. We can use the axiom of extensionality to show that this set C is unique. We call the set C the pair of A and B, and denote it {A,B}. Thus the essence of the axiom is: Any two objects have a pair. The set {A,A} is abbreviated {A}, called the singleton containing A. Note that a singleton is a special case of a pair. Being able to construct a singleton is necessary, for example, to show the non-existence of the infinitely descending chains from the Axiom of regularity. The axiom of pairing also allows for the definition of ordered pairs. For any objects a and b, the ordered pair is defined by the following: (a, b) = {{a}, {a, b}}. Note that this definition satisfies the condition (a, b) = (c, d) if and only if a = c and b = d. Ordered n-tuples can be defined recursively as follows: (a1, ..., an) = ((a1, ..., an−1), an). Alternatives Non-independence The axiom of pairing is generally considered uncontroversial, and it or an equivalent appears in just about any axiomatization of set theory. Nevertheless, in the standard formulation of the Zermelo–Fraenkel set theory, the axiom of pairing follows from the axiom schema of replacement applied to any given set with two or more elements, and thus it is sometimes omitted. The existence of such a set with two elements, such as { {}, { {} } }, can be deduced either from the axiom of empty set and the axiom of power set or from the axiom of infinity.
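The Kuratowski definition of the ordered pair, (a, b) = {{a}, {a, b}}, can be modelled directly with Python frozensets; its characteristic property — two pairs are equal exactly when their components agree in order — then falls out of plain set equality. An illustrative model only, not part of the article.

```python
# Kuratowski ordered pair (a, b) = {{a}, {a, b}} modelled with frozensets.

def kpair(a, b):
    return frozenset({frozenset({a}), frozenset({a, b})})

assert kpair(1, 2) == kpair(1, 2)
assert kpair(1, 2) != kpair(2, 1)                   # order matters
assert kpair(3, 3) == frozenset({frozenset({3})})   # (a, a) collapses to {{a}}
print("Kuratowski pair property holds")
```

The collapse of (a, a) to {{a}} mirrors the remark in the text that the singleton {A} is the special case {A, A} of a pair.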
In the absence of some of the stronger ZFC axioms, the axiom of pairing can still, without loss, be introduced in weaker forms. Weaker In the presence of standard forms of the axiom schema of separation we can replace the axiom of pairing by its weaker version: ∀A ∀B ∃C (A ∈ C ∧ B ∈ C). This weak axiom of pairing implies that any given objects A and B are members of some set C. Using the axiom schema of separation we can construct the set whose members are exactly A and B. Another axiom which implies the axiom of pairing in the presence of the axiom of empty set is the axiom of adjunction: ∀A ∀B ∃C ∀D [D ∈ C ⇔ (D ∈ A ∨ D = B)]. It differs from the standard one by use of D ∈ A instead of D = A. Using {} for A and x for B, we get {x} for C. Then use {x} for A and y for B, getting {x,y} for C. One may continue in this fashion to build up any finite set. And this could be used to generate all hereditarily finite sets without using the axiom of union. Stronger Together with the axiom of empty set and the axiom of union, the axiom of pairing can be generalised to the following schema: ∀A1 ... ∀An ∃C ∀D [D ∈ C ⇔ (D = A1 ∨ ... ∨ D = An)], that is: Given any finite number of objects A1 through An, there is a set C whose members are precisely A1 through An. This set C is again unique by the axiom of extensionality, and is denoted {A1,...,An}. Of course, we can't refer to a finite number of objects rigorously without already having in our hands a (finite) set to which the objects in question belong. Thus, this is not a single statement but instead a schema, with a separate statement for each natural number n. The case n = 1 is the axiom of pairing with A = A1 and B = A1. The case n = 2 is the axiom of pairing with A = A1 and B = A2. The cases n > 2 can be proved using the axiom of pairing and the axiom of union multiple times. For example, to prove the case n = 3, use the axiom of pairing three times, to produce the pair {A1,A2}, the singleton {A3}, and then the pair {{A1,A2},{A3}}. The axiom of union then produces the desired result, {A1,A2,A3}.
We can extend this schema to include n=0 if we interpret that case as the axiom of empty set. Thus, one may use this as an axiom schema in the place of the axioms of empty set and pairing. Normally, however, one uses the axioms of empty set and pairing separately, and then proves this as a theorem schema. Note that adopting this as an axiom schema will not replace the axiom of union, which is still needed for other situations. References Paul Halmos, Naive set theory. Princeton, NJ: D. Van Nostrand Company, 1960. Reprinted by Springer-Verlag, New York, 1974 (Springer-Verlag edition). Jech, Thomas, 2003. Set Theory: The Third Millennium Edition, Revised and Expanded. Springer. Kunen, Kenneth, 1980. Set Theory: An Introduction to Independence Proofs. Elsevier. Axioms of set theory
Axiom of pairing
Mathematics
1,179
69,728,002
https://en.wikipedia.org/wiki/Mona%20Canyon
Mona Canyon (Spanish: Cañón de la Mona), also known as the Mona Rift, is a submarine canyon located in the Mona Passage, between the islands of Hispaniola (particularly the Dominican Republic) and Puerto Rico, with steep walls measuring between in height from bottom to top. The Mona Canyon stretches from the Desecheo Island platform, specifically the Desecheo Ridge, in the south to the Puerto Rico Trench, which contains some of the deepest points in the Atlantic Ocean, in the north. The canyon is also particularly associated with earthquakes and subsequent tsunamis, with the 1918 Puerto Rico earthquake having its epicenter in the submarine canyon. Geomorphology The Mona submarine canyon geomorphology is highly complex and largely unexplored. The complex seafloor is the result of oceanographic and tectonic forces that are actively forming and reshaping the landscape of the region. The canyon is located in an intricate and irregular tectonic region at the boundary between the Caribbean and North American plates, where the east–west traversing subduction Septentrional Fault ends in an approximately hole west of the landform. See also List of submarine canyons References Canyons and gorges of the United States Geography of Puerto Rico Landforms of Puerto Rico Physical oceanography Submarine canyons of the Atlantic Ocean
Mona Canyon
Physics
270
38,707,231
https://en.wikipedia.org/wiki/Snub%20tetraoctagonal%20tiling
In geometry, the snub tetraoctagonal tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of sr{8,4}. Images Drawn in chiral pairs, with edges missing between black triangles: Related polyhedra and tiling The snub tetraoctagonal tiling is seventh in a series of snub polyhedra and tilings with vertex figure 3.3.4.3.n. References John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations) See also Square tiling Tilings of regular polygons List of uniform planar tilings List of regular polytopes External links Hyperbolic and Spherical Tiling Gallery KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings Hyperbolic Planar Tessellations, Don Hatch Chiral figures Hyperbolic tilings Isogonal tilings Snub tilings Uniform tilings
Snub tetraoctagonal tiling
Physics,Chemistry
224
19,114,761
https://en.wikipedia.org/wiki/TOX3
TOX high mobility group box family member 3, also known as TOX3, is a human gene. The protein encoded by this gene is a member of a subfamily of transcription factors that also includes TOX, TOX2, and TOX4 that share almost identical HMG-box DNA-binding domains which function to modify chromatin structure by unwinding and bending DNA. The protein TOX3 has a glutamine-rich C-terminus due to CAG repeats. TOX3 is located on human chromosome band 16q12.1. The gene consists of seven exons and is highly expressed in both the brain and luminal epithelial breast tissue. Mutations in the gene are associated with increased susceptibility to breast cancer. TOX3 plays a role in regulating calcium-dependent transcription and interacts with cAMP-response-element-binding protein (CREB) and CREB-binding protein (CBP). It also increases transcription via interaction with CITED1, a transcription co-regulator that increases transcription factor activity. Disease linkage Mutations in the TOX3 gene are associated with an increased risk of breast cancer. The risk allele rs3803662 is a low-penetrance SNP (single nucleotide polymorphism) associated with decreased expression of TOX3 and an increase in breast cancer risk. The risk locus was reported to regulate affinity of FOXA1 binding to chromatin, potentially affecting TOX3 expression. This locus also interacts with high-penetrance mutations BRCA1 and BRCA2 to increase risk. The rs3803662 variant has a high frequency in the population, with a minor allele frequency of 0.25. Little is known of the transcriptional mechanisms and protein interactions of TOX3. However, a 2019 publication identified TOX3 as a cancer suppressor gene in clear cell renal cell carcinoma (ccRCC) and reported that downregulation of TOX3 facilitates the epithelial mesenchymal transition by decreasing repression of SNAI1 and SNAI2, resulting in tumor growth and metastasis. 
As in breast cancer, downregulation of TOX3 is associated with a worse prognosis in ccRCC patients. References Transcription factors
TOX3
Chemistry,Biology
470
2,902,713
https://en.wikipedia.org/wiki/33%20Arietis
33 Arietis (abbreviated 33 Ari) is a binary star in the northern constellation of Aries. 33 Arietis is the Flamsteed designation. The combined apparent magnitude of 5.33 is bright enough to be seen with the naked eye. Based upon an annual parallax shift of 14.09 mas, the distance to this system is approximately . The primary component is an A-type main sequence star with a magnitude of 5.40 and a stellar classification of A3 V. It has a magnitude 8.40 companion at an angular separation of 28.6 arcseconds. An excess of infrared emission suggests the presence of circumstellar dust in this system. In the 24μm band, this debris disk has a mean temperature of 815 K, which puts it at a radius of 0.85 astronomical units (AU) from the primary star. Excess emission appears in the 70μm band, which has a temperature of 103 K and a radius out to 42 AU. This star was located in the constellation Musca Borealis. References External links Aladin previewer Aladin sky atlas HR 782 CCDM J02407+2704 016628 Binary stars 012489 Arietis, 33 Aries (constellation) A-type main-sequence stars 0782 Durchmusterung objects
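The distance quoted from the parallax above follows from the standard relation d[pc] = 1/p[arcsec]. A quick check using the article's 14.09 mas figure (the light-year conversion factor 3.2616 is standard, not from the article):

```python
# Distance from annual parallax: d in parsecs = 1 / parallax in arcsec.
PARALLAX_MAS = 14.09                      # value quoted in the article

d_parsec = 1000.0 / PARALLAX_MAS          # milliarcseconds -> parsecs
d_lightyears = d_parsec * 3.2616          # 1 pc = 3.2616 ly

print(f"{d_parsec:.1f} pc ≈ {d_lightyears:.0f} light-years")
```

This yields roughly 71 parsecs (about 230 light-years), consistent with the sentence whose numeric value was lost in extraction.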
33 Arietis
Astronomy
278
6,553,354
https://en.wikipedia.org/wiki/Tightness%20of%20measures
In mathematics, tightness is a concept in measure theory. The intuitive idea is that a given collection of measures does not "escape to infinity". Definitions Let X be a Hausdorff space, and let Σ be a σ-algebra on X that contains the topology of X. (Thus, every open subset of X is a measurable set and Σ is at least as fine as the Borel σ-algebra on X.) Let M be a collection of (possibly signed or complex) measures defined on Σ. The collection M is called tight (or sometimes uniformly tight) if, for any ε > 0, there is a compact subset Kε of X such that, for all measures μ in M, |μ|(X \ Kε) < ε, where |μ| is the total variation measure of μ. Very often, the measures in question are probability measures, so the last part can be written as μ(X \ Kε) < ε. If a tight collection M consists of a single measure μ, then (depending upon the author) μ may either be said to be a tight measure or to be an inner regular measure. If Y is an X-valued random variable whose probability distribution on X is a tight measure then Y is said to be a separable random variable or a Radon random variable. Another equivalent criterion of tightness of a collection is sequential weak compactness. We say the family of probability measures is sequentially weakly compact if for every sequence from the family, there is a subsequence of measures that converges weakly to some probability measure. It can be shown that a family of measures is tight if and only if it is sequentially weakly compact. Examples Compact spaces If X is a metrizable compact space, then every collection of (possibly complex) measures on X is tight. This is not necessarily so for non-metrisable compact spaces. If we take [0, ω1] with its order topology, then there exists a measure μ on it that is not inner regular. Therefore, the singleton {μ} is not tight. Polish spaces If X is a Polish space, then every probability measure on X is tight. Furthermore, by Prokhorov's theorem, a collection of probability measures on X is tight if and only if it is precompact in the topology of weak convergence.
A collection of point masses Consider the real line R with its usual Borel topology. Let δx denote the Dirac measure, a unit mass at the point x in R. The collection {δn : n ∈ N} is not tight, since the compact subsets of R are precisely the closed and bounded subsets, and any such set, since it is bounded, has δn-measure zero for large enough n. On the other hand, the collection {δ1/n : n ∈ N} is tight: the compact interval [0, 1] will work as Kε for any ε > 0. In general, a collection of Dirac delta measures on Rn is tight if, and only if, the collection of their supports is bounded. A collection of Gaussian measures Consider n-dimensional Euclidean space Rn with its usual Borel topology and σ-algebra. Consider a collection of Gaussian measures Γ = {γi : i ∈ I}, where the measure γi has expected value (mean) mi and covariance matrix Ci. Then the collection Γ is tight if, and only if, the collections {mi : i ∈ I} and {Ci : i ∈ I} are both bounded. Tightness and convergence Tightness is often a necessary criterion for proving the weak convergence of a sequence of probability measures, especially when the measure space has infinite dimension. See Finite-dimensional distribution Prokhorov's theorem Lévy–Prokhorov metric Weak convergence of measures Tightness in classical Wiener space Tightness in Skorokhod space Tightness and stochastic ordering A family of real-valued random variables (Xα) is tight if and only if there exists an almost surely finite random variable X such that Xα ≼ X for all α, where ≼ denotes the stochastic order defined by Y ≼ Z if E[f(Y)] ≤ E[f(Z)] for all nondecreasing functions f. Exponential tightness A strengthening of tightness is the concept of exponential tightness, which has applications in large deviations theory. A family of probability measures (μδ) on a Hausdorff topological space X is said to be exponentially tight if, for any ε > 0, there is a compact subset Kε of X such that limsup δ log μδ(X \ Kε) < −ε as δ ↓ 0. References (See chapter 2) Measure theory Measures (measure theory)
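The point-mass example above — the family of unit masses at 1, 2, 3, ... escapes every compact interval, while the family of unit masses at 1, 1/2, 1/3, ... does not — can be checked numerically. A toy illustration only; the helper name is a choice made here.

```python
# Numerical illustration of the Dirac-mass example: for the family of
# unit masses at n = 1, 2, 3, ..., every compact interval [-K, K]
# eventually receives zero mass, so no single compact set captures
# mass 1 - epsilon for all members at once.

def dirac_mass_inside(point, K):
    """Mass that the unit point mass at `point` assigns to [-K, K]."""
    return 1.0 if -K <= point <= K else 0.0

K = 100
masses = [dirac_mass_inside(n, K) for n in range(1, 201)]
print(masses[:3], masses[-3:])   # early members captured, later ones escape

# By contrast, the family at 1/n is tight: [0, 1] contains every point mass.
assert all(dirac_mass_inside(1.0 / n, 1) == 1.0 for n in range(1, 1000))
```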
Tightness of measures
Physics,Mathematics
787
50,447,852
https://en.wikipedia.org/wiki/Hyperview%20%28computing%29
A hyperview in computing is a hypertextual view of the content of a database or set of data on a group of activities. As with a hyperdiagram multiple views are linked to form a hyperview. References Database theory
Hyperview (computing)
Technology
48
26,412,019
https://en.wikipedia.org/wiki/Physics%20of%20magnetic%20resonance%20imaging
Magnetic resonance imaging (MRI) is a medical imaging technique mostly used in radiology and nuclear medicine in order to investigate the anatomy and physiology of the body, and to detect pathologies including tumors, inflammation, neurological conditions such as stroke, disorders of muscles and joints, and abnormalities in the heart and blood vessels among other things. Contrast agents may be injected intravenously or into a joint to enhance the image and facilitate diagnosis. Unlike CT and X-ray, MRI uses no ionizing radiation and is, therefore, a safe procedure suitable for diagnosis in children and for repeated examinations. Patients with specific non-ferromagnetic metal implants, cochlear implants, and cardiac pacemakers nowadays may also have an MRI in spite of effects of the strong magnetic fields. This does not apply to older devices, and details for medical professionals are provided by the device's manufacturer. Certain atomic nuclei are able to absorb and emit radio frequency energy when placed in an external magnetic field. In clinical and research MRI, hydrogen atoms are most often used to generate a detectable radio-frequency signal that is received by antennas close to the anatomy being examined. Hydrogen atoms are naturally abundant in people and other biological organisms, particularly in water and fat. For this reason, most MRI scans essentially map the location of water and fat in the body. Pulses of radio waves excite the nuclear spin energy transition, and magnetic field gradients localize the signal in space. By varying the parameters of the pulse sequence, different contrasts may be generated between tissues based on the relaxation properties of the hydrogen atoms therein. When inside the magnetic field (B0) of the scanner, the magnetic moments of the protons align to be either parallel or anti-parallel to the direction of the field.
While each individual proton can only have one of two alignments, the collection of protons appears to behave as though it can have any alignment. Most protons align parallel to B0 as this is a lower energy state. A radio frequency pulse is then applied, which can excite protons from parallel to anti-parallel alignment; only the latter are relevant to the rest of the discussion. In response to the force bringing them back to their equilibrium orientation, the protons undergo a rotating motion (precession), much like a spun wheel under the effect of gravity. The protons will return to the low energy state by the process of spin-lattice relaxation. This appears as a magnetic flux, which yields a changing voltage in the receiver coils to give a signal. The frequency at which a proton or group of protons in a voxel resonates depends on the strength of the local magnetic field around the proton or group of protons: a stronger field corresponds to a larger energy difference and higher frequency photons. By applying additional magnetic fields (gradients) that vary linearly over space, specific slices to be imaged can be selected, and an image is obtained by taking the 2-D Fourier transform of the spatial frequencies of the signal (k-space). Due to the magnetic Lorentz force from B0 on the current flowing in the gradient coils, the gradient coils will try to move, producing loud knocking sounds, for which patients require hearing protection. History The MRI scanner was developed from 1975 to 1977 at the University of Nottingham by Prof Raymond Andrew FRS FRSE following from his research into nuclear magnetic resonance. The full body scanner was created in 1978. Nuclear magnetism Subatomic particles have the quantum mechanical property of spin. Certain nuclei such as 1H (protons), 2H, 3He, 23Na or 31P, have a non-zero spin and therefore a magnetic moment. In the case of the so-called spin-1/2 nuclei, such as 1H, there are two spin states, sometimes referred to as up and down.
Nuclei such as 12C have no unpaired neutrons or protons, and no net spin; however, the isotope 13C does. When these spins are placed in a strong external magnetic field they precess around an axis along the direction of the field. Protons align in two energy eigenstates (the Zeeman effect): one low-energy and one high-energy, which are separated by a very small splitting energy. Resonance and relaxation Quantum mechanics is required to accurately model the behaviour of a single proton. However, classical mechanics can be used to describe the behaviour of an ensemble of protons adequately. As with other spin-1/2 particles, whenever the spin of a single proton is measured it can only have one of two results commonly called parallel and anti-parallel. When we discuss the state of a proton or protons we are referring to the wave function of that proton which is a linear combination of the parallel and anti-parallel states. In the presence of the magnetic field, B0, the protons will appear to precess at the Larmor frequency determined by the particle's gyro-magnetic ratio and the strength of the field. The static fields used most commonly in MRI cause precession which corresponds to a radiofrequency (RF) photon. The net longitudinal magnetization in thermodynamic equilibrium is due to a tiny excess of protons in the lower energy state. This gives a net polarization that is parallel to the external field. Application of an RF pulse can tip this net polarization vector sideways (with a so-called 90° pulse), or even reverse it (with a so-called 180° pulse). The protons will come into phase with the RF pulse and therefore each other. The recovery of longitudinal magnetization is called longitudinal or T1 relaxation and occurs exponentially with a time constant T1. The loss of phase coherence in the transverse plane is called transverse or T2 relaxation.
T1 is thus associated with the enthalpy of the spin system, or the number of nuclei with parallel versus anti-parallel spin. T2 on the other hand is associated with the entropy of the system, or the number of nuclei in phase. When the radio frequency pulse is turned off, the transverse vector component produces an oscillating magnetic field which induces a small current in the receiver coil. This signal is called the free induction decay (FID). In an idealized nuclear magnetic resonance experiment, the FID decays approximately exponentially with a time constant T2. However, in practical MRI there are small differences in the static magnetic field at different spatial locations ("inhomogeneities") that cause the Larmor frequency to vary across the body. This creates destructive interference, which shortens the FID. The time constant for the observed decay of the FID is called the T2* relaxation time, and is always shorter than T2. At the same time, the longitudinal magnetization starts to recover exponentially with a time constant T1 which is much larger than T2 (see below). In MRI, the static magnetic field is augmented by a field gradient coil to vary across the scanned region, so that different spatial locations become associated with different precession frequencies. Only those regions where the field is such that the precession frequencies match the RF frequency will experience excitation. Usually, these field gradients are modulated to sweep across the region to be scanned, and it is the almost infinite variety of RF and gradient pulse sequences that gives MRI its versatility. Change of field gradient spreads the responding FID signal in the frequency domain, but this can be recovered and measured by a refocusing gradient (to create a so-called "gradient echo"), or by a radio frequency pulse (to create a so-called "spin-echo"), or in digital post-processing of the spread signal.
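The mono-exponential recovery and decay just described can be written as Mz(t) = M0(1 − e^(−t/T1)) and Mxy(t) = M0 e^(−t/T2). A minimal sketch; the specific time constants below are illustrative order-of-magnitude soft-tissue values (consistent with the "around one second" / "a few tens of milliseconds" figures in the next paragraph), not measurements from the article.

```python
# Mono-exponential relaxation model: longitudinal recovery with time
# constant T1 and transverse decay with time constant T2.
import math

def longitudinal(t, T1, M0=1.0):
    return M0 * (1.0 - math.exp(-t / T1))   # Mz(t): recovery toward M0

def transverse(t, T2, M0=1.0):
    return M0 * math.exp(-t / T2)           # Mxy(t): FID envelope decay

T1, T2 = 1000.0, 50.0   # ms: illustrative soft-tissue values (assumed)
print(round(longitudinal(T1, T1), 3))   # fraction recovered after one T1
print(round(transverse(T2, T2), 3))     # fraction remaining after one T2
```

After one time constant the longitudinal magnetization has recovered to 1 − 1/e ≈ 63% while the transverse signal has decayed to 1/e ≈ 37%, which is why TR is compared against T1 and TE against T2 when designing sequences.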
The whole process can be repeated when some T1-relaxation has occurred and the thermal equilibrium of the spins has been more or less restored. The repetition time (TR) is the time between two successive excitations of the same slice. Typically, in soft tissues T1 is around one second while T2 and T2* are a few tens of milliseconds. However, these values can vary widely between different tissues, as well as between different external magnetic fields. This behavior is one factor giving MRI its tremendous soft tissue contrast. MRI contrast agents, such as those containing Gadolinium(III) work by altering (shortening) the relaxation parameters, especially T1. Imaging Imaging schemes A number of schemes have been devised for combining field gradients and radio frequency excitation to create an image: 2D or 3D reconstruction from projections, such as in computed tomography. Building the image point-by-point or line-by-line. Gradients in the RF field rather than the static field. Although each of these schemes is occasionally used in specialist applications, the majority of MR Images today are created either by the two-dimensional Fourier transform (2DFT) technique with slice selection, or by the three-dimensional Fourier transform (3DFT) technique. Another name for 2DFT is spin-warp. What follows here is a description of the 2DFT technique with slice selection. The 3DFT technique is rather similar except that there is no slice selection and phase-encoding is performed in two separate directions. Echo-planar imaging Another scheme which is sometimes used, especially in brain scanning or where images are needed very rapidly, is called echo-planar imaging (EPI): In this case, each RF excitation is followed by a train of gradient echoes with different spatial encoding. Multiplexed-EPI is even faster, e.g., for whole brain functional MRI (fMRI) or diffusion MRI.
Image contrast and contrast enhancement Image contrast is created by differences in the strength of the NMR signal recovered from different locations within the sample. This depends upon the relative density of excited nuclei (usually water protons), on differences in relaxation times (T1, T2, and T2*) of those nuclei after the pulse sequence, and often on other parameters discussed under specialized MR scans. Contrast in most MR images is actually a mixture of all these effects, but careful design of the imaging pulse sequence allows one contrast mechanism to be emphasized while the others are minimized. The ability to choose different contrast mechanisms gives MRI tremendous flexibility. In the brain, T1-weighting causes the nerve connections of white matter to appear white, and the congregations of neurons of gray matter to appear gray, while cerebrospinal fluid (CSF) appears dark. The contrast of white matter, gray matter and cerebrospinal fluid is reversed using T2 or T2* imaging, whereas proton-density-weighted imaging provides little contrast in healthy subjects. Additionally, functional parameters such as cerebral blood flow (CBF), cerebral blood volume (CBV) or blood oxygenation can affect T1, T2, and T2* and so can be encoded with suitable pulse sequences. In some situations it is not possible to generate enough image contrast to adequately show the anatomy or pathology of interest by adjusting the imaging parameters alone, in which case a contrast agent may be administered. This can be as simple as water, taken orally, for imaging the stomach and small bowel. However, most contrast agents used in MRI are selected for their specific magnetic properties. Most commonly, a paramagnetic contrast agent (usually a gadolinium compound) is given. Gadolinium-enhanced tissues and fluids appear extremely bright on T1-weighted images. This provides high sensitivity for detection of vascular tissues (e.g., tumors) and permits assessment of brain perfusion (e.g., in stroke).
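The interplay of TR, TE, T1 and T2 described above is often reasoned about with the simplified spin-echo signal model S ∝ PD · (1 − e^(−TR/T1)) · e^(−TE/T2). A sketch under assumed inputs: the tissue parameters below are illustrative textbook-style values for white matter and CSF, not figures from the article.

```python
# Simplified spin-echo signal model: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2).
import math

def spin_echo_signal(pd, T1, T2, TR, TE):
    return pd * (1.0 - math.exp(-TR / T1)) * math.exp(-TE / T2)

tissues = {                      # (proton density, T1 ms, T2 ms) - assumed
    "white matter": (0.7, 600.0, 80.0),
    "CSF":          (1.0, 4000.0, 2000.0),
}

# Short TR / short TE emphasizes T1 differences: long-T1 CSF recovers
# little longitudinal magnetization between excitations, so it comes
# out dark, as the text describes for T1-weighted brain images.
for name, (pd, T1, T2) in tissues.items():
    print(name, round(spin_echo_signal(pd, T1, T2, TR=500.0, TE=15.0), 3))
```

Lengthening TR and TE in the same model reverses the ordering (T2-weighting), mirroring the contrast reversal between T1- and T2-weighted images noted above.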
There have been concerns raised recently regarding the toxicity of gadolinium-based contrast agents and their impact on persons with impaired kidney function. (See Safety/Contrast agents below.) More recently, superparamagnetic contrast agents, e.g., iron oxide nanoparticles, have become available. These agents appear very dark on T2*-weighted images and may be used for liver imaging, as normal liver tissue retains the agent, but abnormal areas (e.g., scars, tumors) do not. They can also be taken orally, to improve visualization of the gastrointestinal tract, and to prevent water in the gastrointestinal tract from obscuring other organs (e.g., the pancreas). Diamagnetic agents such as barium sulfate have also been studied for potential use in the gastrointestinal tract, but are less frequently used. k-space In 1983, Ljunggren and Twieg independently introduced the k-space formalism, a technique that proved invaluable in unifying different MR imaging techniques. They showed that the demodulated MR signal S(t) generated by the interaction between an ensemble of freely precessing nuclear spins in the presence of a linear magnetic field gradient G and a receiver-coil equals the Fourier transform of the effective spin density, . Fundamentally, the signal is derived from Faraday's law of induction: where: In other words, as time progresses the signal traces out a trajectory in k-space with the velocity vector of the trajectory proportional to the vector of the applied magnetic field gradient. By the term effective spin density we mean the true spin density corrected for the effects of T1 preparation, T2 decay, dephasing due to field inhomogeneity, flow, diffusion, etc. and any other phenomena that affect that amount of transverse magnetization available to induce signal in the RF probe or its phase with respect to the receiving coil's electromagnetic field.
From the basic k-space formula, it follows immediately that we reconstruct an image by taking the inverse Fourier transform of the sampled data, viz. Using the k-space formalism, a number of seemingly complex ideas became simple. For example, it becomes very easy (for physicists, in particular) to understand the role of phase encoding (the so-called spin-warp method). In a standard spin echo or gradient echo scan, where the readout (or view) gradient is constant (e.g., Gx), a single line of k-space is scanned per RF excitation. When the phase encoding gradient is zero, the line scanned is the kx axis. When a non-zero phase-encoding pulse is added in between the RF excitation and the commencement of the readout gradient, this line moves up or down in k-space, i.e., we scan the line ky = constant. The k-space formalism also makes it very easy to compare different scanning techniques. In single-shot EPI, all of k-space is scanned in a single shot, following either a sinusoidal or zig-zag trajectory. Since alternating lines of k-space are scanned in opposite directions, this must be taken into account in the reconstruction. Multi-shot EPI and fast spin echo techniques acquire only part of k-space per excitation. In each shot, a different interleaved segment is acquired, and the shots are repeated until k-space is sufficiently well-covered. Since the data at the center of k-space represent lower spatial frequencies than the data at the edges of k-space, the TE value for the center of k-space determines the image's T2 contrast. The importance of the center of k-space in determining image contrast can be exploited in more advanced imaging techniques. One such technique is spiral acquisition—a rotating magnetic field gradient is applied, causing the trajectory in k-space to spiral out from the center to the edge.
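The statement that the acquired signal is the Fourier transform of the spin density, and that the image is recovered by the inverse transform, can be illustrated in one dimension with a hand-rolled discrete Fourier transform. A toy sketch (real reconstruction uses 2-D FFTs); the "spin density" profile below is invented for illustration.

```python
# 1-D illustration of the k-space relation: the sampled signal is the
# DFT of the spin density, and the image is recovered by the inverse DFT.
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

density = [0, 0, 1, 1, 1, 0, 0, 0]        # toy 1-D "spin density" (assumed)
kspace = dft(density)                      # what the receiver samples
image = [abs(v) for v in idft(kspace)]     # magnitude reconstruction

print([round(v, 6) for v in image])        # round-trip recovers the profile
```

Taking the magnitude in the last step mirrors the remark later in the text that the magnitude of the Fourier transform, rather than its phase, is usually displayed.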
Due to T2 and T2* decay the signal is greatest at the start of the acquisition, hence acquiring the center of k-space first improves contrast to noise ratio (CNR) when compared to conventional zig-zag acquisitions, especially in the presence of rapid movement. Since position and k are conjugate variables (with respect to the Fourier transform) we can use the Nyquist theorem to show that the step in k-space determines the field of view of the image (maximum frequency that is correctly sampled) and the maximum value of k sampled determines the resolution; i.e., FOV ∝ 1/Δk and resolution ∝ 1/kmax. (These relationships apply to each axis independently.) Example of a pulse sequence In the timing diagram, the horizontal axis represents time. The vertical axis represents: (top row) amplitude of radio frequency pulses; (middle rows) amplitudes of the three orthogonal magnetic field gradient pulses; and (bottom row) receiver analog-to-digital converter (ADC). Radio frequencies are transmitted at the Larmor frequency of the nuclide to be imaged. For example, for 1H in a magnetic field of 1 T, a frequency of 42.5781 MHz would be employed. The three field gradients are labeled GX (typically corresponding to a patient's left-to-right direction and colored red in diagram), GY (typically corresponding to a patient's front-to-back direction and colored green in diagram), and GZ (typically corresponding to a patient's head-to-toe direction and colored blue in diagram). Where negative-going gradient pulses are shown, they represent reversal of the gradient direction, i.e., right-to-left, back-to-front or toe-to-head. For human scanning, gradient strengths of 1–100 mT/m are employed: Higher gradient strengths permit better resolution and faster imaging. The pulse sequence shown here would produce a transverse (axial) image. The first part of the pulse sequence, SS, achieves "slice selection".
A shaped pulse (shown here with a sinc modulation) causes a 90° nutation of longitudinal nuclear magnetization within a slab, or slice, creating transverse magnetization. The second part of the pulse sequence, PE, imparts a phase shift upon the slice-selected nuclear magnetization, varying with its location in the Y direction. The third part of the pulse sequence, another slice selection (of the same slice) uses another shaped pulse to cause a 180° rotation of transverse nuclear magnetization within the slice. This transverse magnetisation refocuses to form a spin echo at a time TE. During the spin echo, a frequency-encoding (FE) or readout gradient is applied, making the resonant frequency of the nuclear magnetization vary with its location in the X direction. The signal is sampled nFE times by the ADC during this period, as represented by the vertical lines. Typically nFE of between 128 and 512 samples are taken. The longitudinal magnetisation is then allowed to recover somewhat and after a time TR the whole sequence is repeated nPE times, but with the phase-encoding gradient incremented (indicated by the horizontal hatching in the green gradient block). Typically nPE of between 128 and 512 repetitions are made. The negative-going lobes in GX and GZ are imposed to ensure that, at time TE (the spin echo maximum), phase only encodes spatial location in the Y direction. Typically TE is between 5 ms and 100 ms, while TR is between 100 ms and 2000 ms. After the two-dimensional matrix (typical dimension between 128 × 128 and 512 × 512) has been acquired, producing the so-called k-space data, a two-dimensional inverse Fourier transform is performed to provide the familiar MR image. Either the magnitude or phase of the Fourier transform can be taken, the former being far more common. 
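The numbers quoted in the k-space discussion and the pulse-sequence description above can be checked with a little arithmetic. The sketch below uses the stated 1H gyromagnetic ratio; the FOV, matrix size and TR are example values chosen within the quoted ranges:

```python
# 1H Larmor frequency: f = (gamma / 2*pi) * B0, quoted as 42.5781 MHz at 1 T.
GAMMA_BAR_1H = 42.5781e6                  # Hz per tesla

def larmor_frequency_hz(b0_tesla):
    return GAMMA_BAR_1H * b0_tesla

assert larmor_frequency_hz(1.0) == 42.5781e6

# Nyquist relations, applied per axis: the k-space step fixes the
# field of view (FOV = 1/dk) and the largest k sampled fixes the
# resolution (pixel = 1/(2*k_max)).
fov = 0.256                               # metres (example FOV)
n_pe = 256                                # phase-encoding steps (matrix rows)
dk = 1.0 / fov                            # k-space step
k_max = (n_pe / 2) * dk                   # largest spatial frequency sampled
pixel = 1.0 / (2.0 * k_max)               # 0.001 m, i.e. 1 mm pixels

# One phase-encoding line is acquired per repetition time TR, so the
# total acquisition lasts TR * nPE.
tr = 0.5                                  # seconds, within the 100-2000 ms range
scan_time = tr * n_pe                     # 128 s for a 256-line matrix

print(pixel, scan_time)
```

Doubling the matrix size at fixed FOV halves the pixel size but also doubles the scan time, which is the basic trade-off behind the fast-imaging techniques discussed earlier.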
Overview of main sequences

MRI scanner

Construction and operation
The major components of an MRI scanner are: the main magnet, which polarizes the sample, the shim coils for correcting inhomogeneities in the main magnetic field, the gradient system which is used to localize the MR signal and the RF system, which excites the sample and detects the resulting NMR signal. The whole system is controlled by one or more computers.

Magnet
The magnet is the largest and most expensive component of the scanner, and the remainder of the scanner is built around it. The strength of the magnet is measured in teslas (T). Clinical magnets generally have a field strength in the range 0.1–3.0 T, with research systems available up to 9.4 T for human use and 21 T for animal systems. In the United States, field strengths up to 7 T have been approved by the FDA for clinical use. Just as important as the strength of the main magnet is its precision. The straightness of the magnetic lines within the center (or, as it is technically known, the iso-center) of the magnet needs to be near-perfect. This is known as homogeneity. Fluctuations (inhomogeneities in the field strength) within the scan region should be less than three parts per million (3 ppm). Three types of magnets have been used:

Permanent magnet: Conventional magnets made from ferromagnetic materials (e.g., steel alloys containing rare-earth elements such as neodymium) can be used to provide the static magnetic field. A permanent magnet that is powerful enough to be used in an MRI will be extremely large and bulky; they can weigh over 100 tonnes. Permanent magnet MRIs are very inexpensive to maintain; this cannot be said of the other types of MRI magnets, but there are significant drawbacks to using permanent magnets. They are only capable of achieving weak field strengths compared to other MRI magnets (usually less than 0.4 T) and they are of limited precision and stability.
Permanent magnets also present special safety issues; since their magnetic fields cannot be "turned off," ferromagnetic objects are virtually impossible to remove from them once they come into direct contact. Permanent magnets also require special care when they are being brought to their site of installation.

Resistive electromagnet: A solenoid wound from copper wire is an alternative to a permanent magnet. An advantage is low initial cost, but field strength and stability are limited. The electromagnet requires considerable electrical energy during operation which can make it expensive to operate. This design is essentially obsolete.

Superconducting electromagnet: When a niobium-titanium or niobium-tin alloy is cooled by liquid helium to 4 K (−269 °C, −452 °F) it becomes a superconductor, losing resistance to flow of electric current. An electromagnet constructed with superconductors can have extremely high field strengths, with very high stability. The construction of such magnets is extremely costly, and the cryogenic helium is expensive and difficult to handle. However, despite their cost, helium cooled superconducting magnets are the most common type found in MRI scanners today. Most superconducting magnets have their coils of superconductive wire immersed in liquid helium, inside a vessel called a cryostat. Despite thermal insulation, sometimes including a second cryostat containing liquid nitrogen, ambient heat causes the helium to slowly boil off. Such magnets, therefore, require regular topping-up with liquid helium. Generally a cryocooler, also known as a coldhead, is used to recondense some helium vapor back into the liquid helium bath. Several manufacturers now offer 'cryogenless' scanners, where instead of being immersed in liquid helium the magnet wire is cooled directly by a cryocooler.
Alternatively, the magnet may be cooled by carefully placing liquid helium in strategic spots, dramatically reducing the amount of liquid helium used, or high-temperature superconductors may be used instead.

Magnets are available in a variety of shapes. However, permanent magnets are most frequently C-shaped, and superconducting magnets most frequently cylindrical. C-shaped superconducting magnets and box-shaped permanent magnets have also been used.

Magnetic field strength is an important factor in determining image quality. Higher magnetic fields increase signal-to-noise ratio, permitting higher resolution or faster scanning. However, higher field strengths require more costly magnets with higher maintenance costs, and have increased safety concerns. A field strength of 1.0–1.5 T is a good compromise between cost and performance for general medical use. However, for certain specialist uses (e.g., brain imaging) higher field strengths are desirable, with some hospitals now using 3.0 T scanners.

Shims
When the MR scanner is placed in the hospital or clinic, its main magnetic field is far from homogeneous enough to be used for scanning. The magnet's field must therefore be measured and shimmed before any fine tuning of the field using a sample can take place. After a sample is placed into the scanner, the main magnetic field is distorted by susceptibility boundaries within that sample, causing signal dropout (regions showing no signal) and spatial distortions in acquired images. For humans or animals the effect is particularly pronounced at air-tissue boundaries such as the sinuses (due to paramagnetic oxygen in air) making, for example, the frontal lobes of the brain difficult to image. To restore field homogeneity a set of shim coils is included in the scanner. These are resistive coils, usually at room temperature, capable of producing field corrections distributed as several orders of spherical harmonics.
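The practical effect of shimming shows up directly in the free-induction decay (FID): spins in an inhomogeneous field precess at a spread of frequencies, so their summed signal beats and develops a lumpy envelope, whereas a homogeneous field leaves a clean exponential decay. A minimal illustrative simulation (the frequency offsets and decay constant are arbitrary demonstration values):

```python
import numpy as np

# Time axis and intrinsic T2 decay for the simulated FID.
t = np.linspace(0.0, 0.2, 2000)   # seconds
t2 = 0.1                          # intrinsic decay constant, seconds

def fid_envelope(offsets_hz):
    """Magnitude FID from spin packets at the given off-resonance offsets."""
    signal = sum(np.exp(2j * np.pi * f * t) for f in offsets_hz)
    return np.abs(signal) * np.exp(-t / t2)

homogeneous = fid_envelope([0.0])                 # well-shimmed field
inhomogeneous = fid_envelope([-40.0, 0.0, 40.0])  # spread of frequencies

# Well shimmed: a monotonic exponential decay.
assert (np.diff(homogeneous) <= 0).all()

# Poorly shimmed: the envelope rises again after its first dip (a "hump").
dip = int(np.argmin(inhomogeneous[:1000]))
assert inhomogeneous[dip:].max() > inhomogeneous[dip]
```

Adjusting shim currents until the measured FID looks like the first case rather than the second is exactly the automated check described here.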
After placing the sample in the scanner, the B0 field is 'shimmed' by adjusting currents in the shim coils. Field homogeneity is measured by examining an FID signal in the absence of field gradients. The FID from a poorly shimmed sample will show a complex decay envelope, often with many humps. Shim currents are then adjusted to produce a large amplitude exponentially decaying FID, indicating a homogeneous B0 field. The process is usually automated.

Gradients
Gradient coils are used to spatially encode the positions of protons by varying the magnetic field linearly across the imaging volume. The Larmor frequency will then vary as a function of position in the x, y and z-axes. Gradient coils are usually resistive electromagnets powered by sophisticated amplifiers which permit rapid and precise adjustments to their field strength and direction. Typical gradient systems are capable of producing gradients from 20 to 100 mT/m (i.e., in a 1.5 T magnet, when a maximal z-axis gradient is applied, the field strength may be 1.45 T at one end of a 1 m long bore and 1.55 T at the other). It is the magnetic gradients that determine the plane of imaging—because the orthogonal gradients can be combined freely, any plane can be selected for imaging. Scan speed is dependent on performance of the gradient system. Stronger gradients allow for faster imaging, or for higher resolution; similarly, gradient systems capable of faster switching can also permit faster scanning. However, gradient performance is limited by safety concerns over nerve stimulation. Some important characteristics of gradient amplifiers and gradient coils are slew rate and gradient strength. As mentioned earlier, a gradient coil will create an additional, linearly varying magnetic field that adds or subtracts from the main magnetic field. This additional magnetic field will have components in all 3 directions, viz.
x, y and z; however, only the component along the magnetic field (usually called the z-axis, hence denoted Gz) is useful for imaging. Along any given axis, the gradient will add to the magnetic field on one side of the zero position and subtract from it on the other side. Since the additional field is a gradient, it has units of gauss per centimeter or millitesla per meter (mT/m). High performance gradient coils used in MRI are typically capable of producing a gradient magnetic field of approximately 30 mT/m or higher for a 1.5 T MRI. The slew rate of a gradient system is a measure of how quickly the gradients can be ramped on or off. Typical higher performance gradients have a slew rate of up to 100–200 T·m−1·s−1. The slew rate depends both on the gradient coil (it takes more time to ramp up or down a large coil than a small coil) and on the performance of the gradient amplifier (it takes a lot of voltage to overcome the inductance of the coil) and has significant influence on image quality.

Radio frequency system
The radio frequency (RF) transmission system consists of an RF synthesizer, power amplifier and transmitting coil. That coil is usually built into the body of the scanner. The power of the transmitter is variable, but high-end whole-body scanners may have a peak output power of up to 35 kW, and be capable of sustaining average power of 1 kW. Although these electromagnetic fields are in the RF range of tens of megahertz (often in the shortwave radio portion of the electromagnetic spectrum) at powers usually exceeding the highest powers used by amateur radio, there is very little RF interference produced by the MRI machine. The reason for this is that the MRI is not a radio transmitter. The RF frequency electromagnetic field produced in the "transmitting coil" is a magnetic near-field with very little associated changing electric field component (such as all conventional radio wave transmissions have).
Thus, the high-powered electromagnetic field produced in the MRI transmitter coil does not produce much electromagnetic radiation at its RF frequency, and the power is confined to the coil space and not radiated as "radio waves." Thus, the transmitting coil is a good EM field transmitter at radio frequency, but a poor EM radiation transmitter at radio frequency. The receiver consists of the coil, pre-amplifier and signal processing system. The RF electromagnetic radiation produced by nuclear relaxation inside the subject is true EM radiation (radio waves), and these leave the subject as RF radiation, but they are of such low power as to also not cause appreciable RF interference that can be picked up by nearby radio tuners (in addition, MRI scanners are generally situated in metal mesh lined rooms which act as Faraday cages.) While it is possible to scan using the integrated coil for RF transmission and MR signal reception, if a small region is being imaged, then better image quality (i.e., higher signal-to-noise ratio) is obtained by using a close-fitting smaller coil. A variety of coils are available which fit closely around parts of the body such as the head, knee, wrist, breast, or internally, e.g., the rectum. A recent development in MRI technology has been the development of sophisticated multi-element phased array coils which are capable of acquiring multiple channels of data in parallel. This 'parallel imaging' technique uses unique acquisition schemes that allow for accelerated imaging, by replacing some of the spatial coding originating from the magnetic gradients with the spatial sensitivity of the different coil elements. However, the increased acceleration also reduces the signal-to-noise ratio and can create residual artifacts in the image reconstruction. Two frequently used parallel acquisition and reconstruction schemes are known as SENSE and GRAPPA. 
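The SNR cost of parallel imaging mentioned above is commonly summarized (for example, in the SENSE literature) as SNR_parallel = SNR_full / (g·√R), where R is the acceleration factor and g ≥ 1 is the coil-geometry ("g-factor") penalty. This relation and the numbers below are a standard textbook sketch, not figures taken from this article:

```python
import math

def parallel_snr(snr_full, r, g=1.0):
    """SNR after R-fold undersampling with coil-geometry penalty g >= 1."""
    return snr_full / (g * math.sqrt(r))

# Four-fold acceleration with ideal coils (g = 1) halves the SNR...
print(parallel_snr(100.0, 4))        # 50.0

# ...and a realistic g-factor above 1 costs slightly more.
print(parallel_snr(100.0, 4, g=1.25))
```

This is why the acceleration factors used in practice are modest: the scan-time saving is linear in R, but the SNR penalty grows with √R and worsens further wherever the coil geometry is unfavourable.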
A detailed review of parallel imaging techniques can be found here:

References

Further reading

Magnetic resonance imaging
Physics of magnetic resonance imaging
Chemistry
6,392
58,980,330
https://en.wikipedia.org/wiki/IC%202574
IC 2574, also known as Coddington's Nebula, is a dwarf spiral galaxy discovered by American astronomer Edwin Foster Coddington in 1898. Located in Ursa Major, a constellation in the northern sky, it is an outlying member of the M81 Group. It is believed that 90% of its mass is in the form of dark matter. IC 2574 does not show evidence of interaction with other galaxies. It is currently forming stars; a UV analysis showed clumps of star formation 85 to 500 light-years (26 to 150 pc) in size.

References

Dwarf spiral galaxies
Intermediate spiral galaxies
Magellanic spiral galaxies
Ursa Major
M81 Group
2574
05666
30819
Discoveries by Edwin Foster Coddington
Astronomical objects discovered in 1898
IC 2574
Astronomy
154
62,578,977
https://en.wikipedia.org/wiki/European%20Accessibility%20Act
The European Accessibility Act (EAA) is a directive of the European Union (EU) which took effect in April 2019. The directive aims to improve trade in accessible products and services between EU member states by removing country-specific rules. Businesses benefit from having a common set of rules within the EU, which should facilitate easier cross-border trade. It should also allow a greater market for companies providing accessible products and services. Persons with disabilities and elderly people will benefit from having more accessible products and services in the market. An increased market size should produce more competitive prices. There should be fewer barriers within the EU and more job opportunities as well. Originally proposed in 2011, this act was built to complement the EU's Web Accessibility Directive, which targets the public sector and became law in 2016. It also reflects the obligations of the UN's Convention on the Rights of Persons with Disabilities. It covers a wide range of products and services, including personal devices such as computers, smartphones, e-books, and TVs, as well as public services like television broadcast, automated teller machines (ATMs), ticketing machines, public transport services, banking services and e-commerce sites. The laws, regulations and administrative provisions necessary to comply with this Directive had to be adopted and published by the member states by 28 June 2022. Three years later, by 28 June 2025, the requirements of the European Accessibility Act must be applied. The requirements and obligations of this Directive do not apply to microenterprises providing services within the scope of this Directive – whereby 'microenterprise' means an enterprise which employs fewer than 10 persons and which has an annual turnover not exceeding €2 million or an annual balance sheet total not exceeding €2 million.
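The microenterprise exemption above amounts to a simple threshold test, which can be sketched as a predicate (an illustration of the stated thresholds only, not legal advice; the function name is ours):

```python
def is_exempt_microenterprise(employees, turnover_eur, balance_sheet_eur):
    """Fewer than 10 persons AND (turnover <= EUR 2M OR balance sheet <= EUR 2M)."""
    financial_ceiling = 2_000_000
    return employees < 10 and (turnover_eur <= financial_ceiling
                               or balance_sheet_eur <= financial_ceiling)

print(is_exempt_microenterprise(8, 1_500_000, 3_000_000))   # True: small enough
print(is_exempt_microenterprise(12, 1_500_000, 1_500_000))  # False: 12 employees
```

Note that the headcount condition must hold together with at least one of the two financial ceilings, mirroring the "and ... or" wording of the directive.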
The European policy of applying "Design for all" principles on digital technology led to the creation of the European Harmonized Accessibility Standards EN 301 549, which defines "Accessibility requirements suitable for public procurement of ICT products and services in Europe".

See also
Web Accessibility Directive for the public sector.
Accessible Canada Act (2019) for the corresponding Canadian federal legislation.
Americans with Disabilities Act (1990) for the corresponding American federal legislation.
Disability Discrimination Act (1995) and Equality Act (2010) for the corresponding UK legislation.

External links
European Accessibility Act: A Big Step On A Long Journey
EU Web Accessibility Compliance and Legislation
ACCESSIBILITY REQUIREMENTS FOR PRODUCTS AND SERVICES / 2015-12
All set for design for all! An update on the European Accessibility Act
Video by the European Commission explaining the benefits and background of the EAA
Lainey Feingold's Global Law and Policy: Europe
Digital Accessibility Centre's - The European Accessibility Act: Understanding Digital Accessibility

References

2019 in Europe
2019 in law
Disability legislation
Accessibility
European Union directives
Accessible procurement
European Accessibility Act
Engineering
551
15,669,613
https://en.wikipedia.org/wiki/Spinning%20dancer
The Spinning Dancer, also known as the Silhouette Illusion, is a kinetic, bistable, animated optical illusion originally distributed as a GIF animation showing a silhouette of a pirouetting female dancer. The illusion, created in 2003 by Japanese web designer Nobuyuki Kayahara, involves the apparent direction of motion of the figure. Some observers initially see the figure as spinning clockwise (as viewed from above) and some counterclockwise. Additionally, some may see the figure suddenly spin in the opposite direction.

Effect
The illusion derives from the lack of visual cues for depth. For instance, as the dancer's arms move from viewer's left to right, it is possible to view her arms passing between her body and the viewer (that is, in the foreground of the picture, in which case she would be circling counterclockwise on her right foot) and it is also possible to view her arms as passing behind the dancer's body (that is, in the background of the picture, in which case she is seen circling clockwise on her left foot). When she is facing to the left or to the right, the breasts and ponytail clearly define the direction she is facing, although there is ambiguity in which leg is which. However, as she moves away from facing to the left (or from facing to the right), the dancer can be seen facing in either of two directions. At first, these two directions are fairly close to each other (both left, say, but one facing slightly forward, the other facing slightly backward) but they become further away from each other until they reach a position where her ponytail and breasts are in line with the viewer (so that neither the breasts nor the ponytail are seen so readily). In this position, she could be facing either away from the viewer or towards the viewer, so that the two possible positions are 180 degrees apart. Another aspect of this illusion can be triggered by placing a mirror vertically beside the image.
The natural expectation would be for the normal image and its reflection to spin in opposite directions. This does not necessarily happen, and provides a paradoxical situation where both dancers appear to spin in the same direction.

Psychology of visual perception
It has been established that the silhouette is more often seen rotating clockwise than counterclockwise. According to an online survey of over 1600 participants, approximately two thirds of observers initially perceived the silhouette to be rotating clockwise. In addition, observers who initially perceived a clockwise rotation had more difficulty experiencing the alternative. These results can be explained by a psychological study providing evidence for a viewing-from-above bias that influences observers' perceptions of the silhouette. Kayahara's dancer is presented with a camera elevation slightly above the horizontal plane. Consequently, the dancer may also be seen from above or below in addition to spinning clockwise or counterclockwise, and facing toward or away from the observer. Upon inspection, one may notice that in Kayahara's original illusion, seeing the dancer spin clockwise is paired with constantly holding an elevated viewpoint and seeing the dancer from above. The opposite is also true; an observer maintaining a counterclockwise perception has assumed a viewpoint below the dancer. If observers report perceiving Kayahara's original silhouette as spinning clockwise more often than counterclockwise, there are two chief possibilities. They may have a bias to see her spinning clockwise, or they may have a bias to assume a viewpoint from above. To tease these two possibilities apart, the researchers created their own versions of Kayahara's silhouette illusion by recreating the dancer and varying the camera elevations. This allowed for clockwise-from-above (like Kayahara's original) and clockwise-from-below pairings.
The results indicated that there was no clockwise bias, but rather a viewing-from-above bias. Furthermore, this bias was dependent upon camera elevation. In other words, the greater the camera elevation, the more often an observer saw the dancer from above. In popular psychology, the illusion has been incorrectly identified as a personality test that supposedly reveals which hemisphere of the brain is dominant in the observer. Under this wrong interpretation, it has been popularly called the "right brain–left brain test", and was widely circulated on the Internet during late 2007 to early 2008. A 2014 paper describes the brain activation related to the switching of perception. Utilizing fMRI in a volunteer capable of switching at will the direction of rotation, it was found that a part of the right parietal lobe is responsible for the switching. The authors relate this brain activation to the above described spontaneous brain fluctuations.

Bistable perception
There are other optical illusions that depend on the same or a similar kind of visual ambiguity known as multistable, in that case bistable, perception. One example is the Necker cube. Depending on the perception of the observer, the apparent direction of spin may change any number of times, a typical feature of so-called bistable percepts such as the Necker cube which may be perceived from time to time as seen from above or below. These alternations are spontaneous and may randomly occur without any change in the stimulus or intention by the observer. However, some observers may have difficulty perceiving a change in motion at all. One way of changing the direction perceived is to use averted vision and mentally look for an arm going behind instead of in front, then carefully move the eyes back. Some may perceive a change in direction more easily by narrowing visual focus to a specific region of the image, such as the spinning foot or the shadow below the dancer and gradually looking upwards.
One can also try to tilt one's head to perceive a change in direction. Another way is to watch the foot in the base shadow and perceive its toes as always pointing away from oneself, which can help trigger a direction change. One can also close one's eyes, envision the dancer spinning in a particular direction, and then reopen them; the dancer should change directions. Still another way is to wait for the dancer's legs to cross in the projection and then try to perceive a change in the direction in what follows. One can also try using one's peripheral vision to distract the dominant part of the brain: by slowly looking away from the dancer, one may begin to see her spin in the other direction. Perhaps the easiest method is to blink rapidly (slightly varying the rate if necessary) until consecutive images are going in the "new" direction. Then one can open one's eyes and the new rotational direction is maintained. It is even possible to see the illusion in a way that the dancer is not spinning at all, but simply rotating back and forth 180 degrees. Slightly altered versions of the animation have been created with an additional visual cue to assist viewers who have difficulty seeing one rotation direction or the other. Labels and white edges have been added to the legs, to make it clear which leg is passing in front of the other. First looking at one of these modified images can then sometimes make the original dancer image spin in the corresponding direction.

Further analysis

References

Dance animation
Optical illusions
Perception
Silhouettes
2003 in art
Spinning dancer
Physics
1,442
54,389,086
https://en.wikipedia.org/wiki/Inosperma%20calamistratoides
Inosperma calamistratoides is a species of fungus in the family Inocybaceae found in New Zealand.

References

Fungi of New Zealand
calamistratoides
Fungus species
Inosperma calamistratoides
Biology
38
28,017,022
https://en.wikipedia.org/wiki/Metaflumizone
Metaflumizone is a semicarbazone broad-spectrum insecticide developed by Nihon Nohyaku with activity on Lepidoptera, Coleoptera, and certain Hemiptera. It is also used for the veterinary treatment of fleas and ticks, marketed under the brand name ProMeris. A discontinued variant of ProMeris, called ProMeris Duo or ProMeris for Dogs, was indicated for canine use and was a formulated blend of metaflumizone and amitraz. The metaflumizone-only formulation is waterproof and typically remains effective for 30–45 days in a cutaneous application at the base of the neck.

Similar insecticides
Metaflumizone is chemically similar to pyrazoline sodium channel blocker insecticides (SCBIs) discovered at Philips-Duphar in the early 1970s, but is less dangerous to mammals than earlier compounds.

Action
Metaflumizone belongs to IRAC group 22B and works by blocking sodium channels in target insects, resulting in flaccid paralysis. Metaflumizone blocks sodium channels by binding selectively to the slow-inactivated state, which is characteristic of the SCBIs. The toxin has been tested for efficacy against Spodoptera eridania moths and is indicated for control of fleas and ticks. However, in a cross comparison with other veterinary flea control substances, metaflumizone was not shown to result in a significant reduction in the number of engorged adult female Culex mosquitoes. Therefore, its usefulness as a heartworm control treatment is likely to be insignificant when compared with comparable treatments such as selamectin that do impact the mosquito disease vector.

Adverse effects reported
In 2011, Pfizer Animal Care decided to cease production of the drug based on findings which linked its use to an elevated incidence of the autoimmune disorder pemphigus foliaceus.

References

External links

Insecticides
Trifluoromethyl compounds
Nitriles
Trifluoromethyl ethers
Metaflumizone
Chemistry
411
71,799,261
https://en.wikipedia.org/wiki/Smart%20Upper%20Stage%20for%20Innovative%20Exploration
The Smart Upper Stage for Innovative Exploration (SUSIE) is a proposal for a reusable spacecraft designed by ArianeGroup. It is capable of crewed operations, carrying up to five astronauts to low Earth orbit (LEO), or alternatively functioning as an automated freighter capable of delivering payloads of up to seven tons. It is envisioned to be launched on the Ariane 64 launch vehicle for European Space Agency (ESA) missions.

History
Work on what would become SUSIE commenced during 2020; in addition to ArianeGroup, various other European aerospace companies, including Airbus, Thales Alenia Space, and D-Orbit, have been early contributors to the project. The existence of SUSIE was revealed during the 2022 International Astronautical Congress in Paris. From an early stage, its development has been actively supported via research funding provided by the ESA's "New European Space Transportation Solutions" (NESTS) initiative; it has also benefitted from other programmes, such as the Intermediate eXperimental Vehicle. Work on SUSIE was reportedly undertaken in response to the recognition of a strategic priority to ensure the ESA possesses autonomous logistics capabilities.

SUSIE is designed to be a fully reusable spacecraft; both the takeoff and landing phases are to be performed vertically. It has an internal cargo bay volume of 40 cubic meters, which is capable of accommodating up to five astronauts or, in an automated cargo configuration, carrying a maximum payload of seven tons. The design of SUSIE is intended to be scalable without necessitating significant aerodynamic changes; this scalability permits it to better perform various mission roles. The payload bay is to be adaptable, such as being convertible into additional habitable volume for the crew to occupy during a longer mission, or being replaced with propellant tanks and engines that could function comparably to a complete upper stage.
On longer duration crewed missions, such as beyond Earth orbit, the payload bay can convert into extra habitable volume for the crew to live comfortably. With the addition of a suitable additional space transfer module, SUSIE could reportedly conduct lunar missions. It has also been envisioned that it could participate in the construction of large orbital infrastructure and deorbit end-of-life satellites and other orbital debris. The takeoff of SUSIE requires an external launch vehicle, which is initially intended to be the Ariane 64 launch vehicle. Later on, SUSIE could be used in conjunction with a future ArianeGroup reusable heavy-lift launcher. When paired with the Ariane 64, the latter's payload fairing is substituted for SUSIE. When fully fuelled, the total mass of the spacecraft is predicted to be 25 tons, which corresponds to the low Earth orbit (LEO) performance of the Ariane 64. During atmospheric re-entry, SUSIE is intended to perform a propulsive landing (instead of using parachutes); one advantage of this approach is that the mission abort safety system would remain effective at all stages of a crewed mission, not only during the launch phase. Throughout the descent, no greater than three Gs is to be encountered at any point. Instead of using an escape tower, the envisaged emergency crew escape system uses a series of rocket motors at key locations across the exterior of the craft. Several comparisons have been made to other contemporary reusable spaceship programmes, including the SpaceX Starship, SpaceX Dragon 2, and Boeing Starliner, in particular due to the 'bellyflop' style maneuver that SUSIE is envisioned to perform during re-entry. On 25 October 2023, a 1/6th-scale demonstrator, weighing 100 kg and with a height of 2m, was test-fired by ArianeGroup for the first time at their facility in Les Mureaux outside Paris. 
By this point, ArianeGroup had also reportedly started work on an intermediate version of SUSIE, which would be smaller than the heavy version. So-called 'hop' testing of the demonstrator is scheduled to continue through to mid-2025; early tests are to be focused on guidance and navigation functionality, while later testing shall include rocket-powered controlled descent, drop, and abort sequences. The proposed development timeline sets out that a smaller commercial cargo version of SUSIE could potentially be ready for 2028, while crewed missions using the full-scale craft would not be expected to occur before the early 2030s. The project has yet to secure both approval and funding from European officials.

References

External links
Announcement at press.ariane.group

Arianespace
Proposed crewed spacecraft
Smart Upper Stage for Innovative Exploration
Astronomy
910
5,205,171
https://en.wikipedia.org/wiki/Guanabenz
Guanabenz (pronounced GWAHN-a-benz, sold under the trade name Wytensin) is an alpha agonist selective for the alpha-2 adrenergic receptor. It is an antihypertensive drug used in the treatment of high blood pressure (hypertension). The most common side effects during guanabenz therapy are dizziness, drowsiness, dry mouth, headache and weakness. Guanabenz can make one drowsy or less alert, therefore driving or operating dangerous machinery is not recommended.

Research
Guanabenz also has some anti-inflammatory properties in different pathological situations, including multiple sclerosis. Guanabenz was found in one study to exert an inhibitory effect by decreasing the abundance of the enzyme CH25H, a cholesterol hydroxylase linked to antiviral immunity. Therefore, it is suggested that the drug and similar compounds could be used to treat type I interferon-dependent pathologies and that the CH25H enzyme could be a therapeutic target to control these diseases, including amyotrophic lateral sclerosis.

See also
Guanoxabenz
Guanfacine

References

Alpha-2 adrenergic receptor agonists
Chloroarenes
Guanidines
Guanabenz
Chemistry
271
9,898,403
https://en.wikipedia.org/wiki/Andrey%20Kursanov
Andrey Lvovich Kursanov (; 8 November 1902 – 20 September 1999) was a Soviet specialist on the physiology and biochemistry of plants. He was an academician of the Soviet and Russian Academies of Sciences since 1953. He was a member of the Presidium of the Academy of Sciences of the Soviet Union in 1957–1963. Kursanov graduated from Moscow State University in 1926. He was awarded the degree of Doctor of Sciences in biology in 1940 and became a professor at his alma mater in 1944. In 1954, Kursanov and Boris Rybakov represented the Soviet Academy of Sciences at the Columbia University Bicentennial in New York City. Professor Kursanov was awarded a number of honorary doctorates and was an honorary member of a number of foreign scientific societies and academies. He was elected a foreign fellow of the American Academy of Arts and Sciences in 1962 and member of the Polish Academy of Sciences in 1965. Awards and honors Hero of Socialist Labour (1969) Order of Lenin, four times (1953, 1969, 1972, 1975) Order of the October Revolution (1982) Order of the Red Banner of Labour, twice (1945, 1962) Lomonosov Gold Medal (1983) References 1902 births 1999 deaths Scientists from Moscow Academic staff of Moscow State Pedagogical University Academicians of the Russian Academy of Agriculture Sciences Academicians of the VASKhNIL Fellows of the American Academy of Arts and Sciences Full Members of the Russian Academy of Sciences Full Members of the USSR Academy of Sciences Members of the German National Academy of Sciences Leopoldina Members of the Polish Academy of Sciences Moscow State University alumni Heroes of Socialist Labour Recipients of the Lomonosov Gold Medal Recipients of the Order of Lenin Recipients of the Order of the Red Banner of Labour Plant physiologists Russian biochemists Soviet biochemists Burials at Novodevichy Cemetery
Andrey Kursanov
Technology
375
12,912,325
https://en.wikipedia.org/wiki/Sodium%20sesquicarbonate
Sodium sesquicarbonate (systematic name: trisodium hydrogendicarbonate) Na3H(CO3)2 is a double salt of sodium bicarbonate and sodium carbonate (NaHCO3 · Na2CO3), and has a needle-like crystal structure. However, the term is also applied to an equimolar mixture of those two salts, with whatever water of hydration the sodium carbonate includes, supplied as a powder. The dihydrate, Na3H(CO3)2 · 2H2O, occurs in nature as the evaporite mineral trona. Due to concerns about the toxicity of borax, which was withdrawn as a cleaning and laundry product, sodium sesquicarbonate is sold in the European Union (EU) as "Borax substitute". It is also known as one of the E number food additives, E500(iii). Uses Sodium sesquicarbonate is used in bath salts, swimming pools, as an alkalinity source for water treatment, and as a phosphate-free product replacing trisodium phosphate for heavy-duty cleaning. Sodium sesquicarbonate is used in the conservation of copper and copper alloy artifacts that corrode due to contact with salt (called "bronze disease" due to its effect on bronze). The chloride from salt forms copper(I) chloride. In the presence of oxygen and water, even the small amount of moisture in the atmosphere, the cuprous chloride forms copper(II) chloride and hydrochloric acid, the latter of which dissolves the metal and forms more cuprous chloride in a self-sustaining reaction that leads to the complete destruction of the object. Treatment with sodium sesquicarbonate removes copper(II) chlorides from the corroded layer. It is also used as a precipitating water softener, which combines with hard water minerals (calcium- and magnesium-based minerals) to form an insoluble precipitate, removing these hardness minerals from the water. It is the carbonate moiety which forms the precipitate, the bicarbonate being included to moderate the material's alkalinity.
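The water-softening step described above amounts to a simple precipitation reaction. Schematically (a sketch only, with the sodium and bicarbonate spectator species omitted; magnesium behaves analogously):

```latex
\mathrm{Ca^{2+}_{(aq)} \;+\; CO^{2-}_{3\,(aq)} \;\longrightarrow\; CaCO_{3\,(s)}\!\downarrow}
```

It is this insoluble calcium carbonate that carries the hardness ions out of solution, while the bicarbonate half of the double salt keeps the bath less caustic than pure sodium carbonate would be.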
In Chinese cuisine, it is known as mǎyájiǎn (simplified: 马牙碱; traditional: 馬牙鹼), which roughly translates to "horse tooth alkaline", and is traditionally used as an ingredient in the marinade for century eggs, a dish generally made from duck eggs preserved whole in a highly alkaline mixture. References Sodium compounds Carbonates Acid salts Double salts E-number additives
Sodium sesquicarbonate
Chemistry
529
57,797,047
https://en.wikipedia.org/wiki/Space%20jellyfish
A space jellyfish (also jellyfish UFO or rocket jellyfish) is a rocket launch-related phenomenon caused by sunlight reflecting off the high-altitude rocket plume gases emitted by a launching rocket during morning or evening twilight. The observer is in darkness, while the exhaust plumes at high altitudes are still in direct sunlight. This luminous apparition is reminiscent of a jellyfish. Sightings of the phenomenon have led to panic, fear of nuclear missile strike, and reports of unidentified flying objects. List of rocket launches causing space jellyfish See also Noctilucent cloud Exhaust gas Contrail Twilight phenomenon Notes References Further reading External links Associated Press, , 10 December 2009 News4JAX (WJXT4), , 6 May 2022 UFO-related phenomena Atmospheric optical phenomena Rocketry Smoke
Space jellyfish
Physics,Engineering
162
17,539,184
https://en.wikipedia.org/wiki/Opaque%20travel%20inventory
An opaque inventory is the market of selling unsold travel inventory at a discounted price. The inventory is called "opaque" because the specific suppliers (i.e. hotel, airline, etc.) remain hidden until after the purchase has been completed. This is done to prevent sales of unsold inventory from cannibalizing full-price retail sales. According to TravelClick, the opaque channel accounted for 6% of all hotel reservations for major brands in 2012, up 2% from 2010. The primary consumers of opaque inventories are price-conscious people whose primary aim is the cheapest travel possible and who are less concerned with the specifics of their travel plans. Hotel discounts of 30-60% are typical, with larger bargains at higher-star hotels. While one has control over the dates and times of a travel itinerary, the downside is that these purchases are absolutely non-refundable and non-changeable and, as noted above, the specific hotel or airline is not revealed until after purchase. The main sources of opaque inventories are Hotwire.com and Priceline.com, but Travelocity.com and Expedia.com also offer opaque booking options. Hotwire has a fixed pricing model, where it sells a room at a fixed price with a limited description of a given venue, whereas Priceline offers both a similar fixed pricing model and a bidding model where travelers bid for a hotel room from among a group of hotels of a given star rating and location. Hotel deals on opaque travel sites are typically greater than airline discounts, largely because airlines have limited seating and also take monetary cuts when publishing discounted fares, whereas a hotel sells to opaque sites to fill empty rooms. In response to these opaque travel sites, there are also 'decoder' sites that use feedback from recent purchasers to help construct a profile of the opaque travel property. References Travel technology Inventory optimization Computer reservation systems
Opaque travel inventory
Technology
395
2,892,279
https://en.wikipedia.org/wiki/Eta%20Aurigae
Eta Aurigae (η Aurigae, abbreviated Eta Aur, η Aur), officially named Haedus, is a star in the northern constellation of Auriga. With an apparent visual magnitude of 3.18, it is visible to the naked eye. Based upon parallax measurements made during the Hipparcos mission, this star is approximately distant from the Sun. Nomenclature η Aurigae (Latinised to Eta Aurigae) is the star's Bayer designation. Along with Zeta Aurigae, it represents one of the kids of the she-goat Capella, from which it derived its Latin traditional name Haedus II or Hoedus II, from the Latin haedus "kid" (Zeta Aurigae was Haedus I). It also had the less common traditional name Mahasim, from the Arabic المِعْصَم al-miʽşam "the wrist" (of the charioteer), which it shared with Theta Aurigae. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN approved the names Haedus for Eta Aurigae and Saclateni for Zeta Aurigae A on 30 June 2017, and both are now included in the List of IAU-approved Star Names. In Chinese, (), meaning Pillars, refers to an asterism consisting of Eta Aurigae, Epsilon Aurigae, Zeta Aurigae, Upsilon Aurigae, Nu Aurigae, Tau Aurigae, Chi Aurigae and 26 Aurigae. Consequently, the Chinese name for Eta Aurigae itself is (, ). Properties Since 1943, the spectrum of Eta Aurigae has served as one of the stable anchor points by which other stars are classified. Eta Aurigae is larger than the Sun, with more than five times the Sun's mass and over three times its radius. The spectrum of this star matches a stellar classification of B3 V, which is a B-type main-sequence star that is generating its energy through the nuclear fusion of hydrogen at its core. It is radiating 1,450 times the Sun's luminosity from its outer atmosphere at an effective temperature of . Based upon its projected rotational velocity of 95 km/s, it is spinning with a rotation period of only 1.8 days.
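The quoted rotation period follows directly from the projected rotational velocity and the stellar radius. A minimal sketch of the arithmetic, assuming the star is viewed roughly equator-on (sin i ≈ 1) and taking a radius of 3.25 solar radii, an assumed figure consistent with "over three times" the Sun's radius:

```python
import math

R_SUN_KM = 695_700            # nominal solar radius in km
radius_km = 3.25 * R_SUN_KM   # assumed stellar radius (hypothetical value)
v_eq_km_s = 95.0              # projected rotational velocity from the text

# Period = equatorial circumference / equatorial speed
circumference_km = 2 * math.pi * radius_km
period_days = circumference_km / v_eq_km_s / 86_400  # seconds per day

print(f"rotation period ≈ {period_days:.1f} days")
```

The result comes out near 1.7 days, matching the quoted "only 1.8 days" to within the uncertainty of the assumed radius and inclination.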
Eta Aurigae is around 39 million years old. References External links HR 1641 Image Eta Aurigae 032630 023767 Aurigae, Eta Auriga B-type main-sequence stars Haedus Aurigae, 10 1641 Durchmusterung objects
Eta Aurigae
Astronomy
581
48,084,368
https://en.wikipedia.org/wiki/Microsoft%20Lumia%20550
Microsoft Lumia 550 is a budget smartphone manufactured by Microsoft Mobile as part of its Lumia family of Windows-based mobile computing products. It was introduced along with the Lumia 950 and the Lumia 950 XL on 6 October 2015 at a press event held in New York City. The Lumia 550 was released in December 2015 with Windows 10 Mobile version 1511. Windows 10 Mobile version 1607, the Anniversary Update, was released in August 2016. Windows 10 Mobile version 1709, the Fall Creators Update, was the final software update. Specifications Hardware The Lumia 550 has a 4.7-inch IPS LCD display, quad-core 1.1 GHz Cortex-A7 Qualcomm Snapdragon 210 processor, 1 GB of RAM and 8 GB of internal storage that can be expanded using microSD cards up to 256 GB. The phone has a 2100 mAh Li-ion battery, 5 MP rear camera and 2 MP front-facing camera. It is available in black and white. Software The Lumia 550 ships with Windows 10 Mobile. Reception Michael Allison of MSPoweruser criticized the Lumia 550 for having lower specs than its predecessor, such as an inferior 5 MP camera that lacks a wide-angle lens, missing functionality and a shorter battery life, making it uncompetitive with similar phones in its price range. Sean Cameron of TechRadar gave the Lumia 550 3 stars out of 5, praising its screen, camera and speakers, but criticizing the battery life and performance and calling the Windows 10 Mobile operating system "unfinished". References External links Windows 10 Mobile devices Mobile phones introduced in 2015 Discontinued smartphones 550 Microsoft hardware Mobile phones with user-replaceable battery
Microsoft Lumia 550
Technology
339