Dataset columns: id (int64): 580 to 79M; url (string): lengths 31 to 175; text (string): lengths 9 to 245k; source (string): lengths 1 to 109; categories (string): 160 distinct values; token_count (int64): 3 to 51.8k
8,726,659
https://en.wikipedia.org/wiki/Structural%20Engineering%20exam
The Structural Engineering exam is a written examination given by state licensing boards in the United States as part of the testing for licensing structural engineers. This exam is written by the National Council of Examiners for Engineering and Surveying. It is given in eight-hour segments over two days, with the first day covering vertical forces. Problems involving lateral forces are covered on the second day. Each day's morning session features multiple-choice questions, while the afternoon sessions are devoted to essay questions. References Structural engineering Standardized tests in the United States Engineering education
Structural Engineering exam
Engineering
109
62,141,894
https://en.wikipedia.org/wiki/Truthful%20cake-cutting
Truthful cake-cutting is the study of algorithms for fair cake-cutting that are also truthful mechanisms, i.e., they incentivize the participants to reveal their true valuations to the various parts of the cake. The classic divide and choose procedure for cake-cutting is not truthful: if the cutter knows the chooser's preferences, they can get much more than 1/2 by acting strategically. For example, suppose the cutter values a piece by its size while the chooser values a piece by the amount of chocolate in it. So the cutter can cut the cake into two pieces with almost the same amount of chocolate, such that the smaller piece has slightly more chocolate. Then, the chooser will take the smaller piece and the cutter will win the larger piece, which may be worth much more than 1/2 (depending on how the chocolate is distributed). Randomized mechanisms There is a trivial randomized truthful mechanism for fair cake-cutting: select a single agent uniformly at random, and give him/her the entire cake. This mechanism is trivially truthful because it asks no questions. Moreover, it is fair in expectation: the expected value of each partner is exactly 1/n. However, the resulting allocation is not fair. The challenge is to develop truthful mechanisms that are fair ex-post and not just ex-ante. Several such mechanisms have been developed. Exact division mechanism An exact division (aka consensus division) is a partition of the cake into n pieces such that each agent values each piece at exactly 1/n. The existence of such a division is a corollary of the Dubins–Spanier convexity theorem. Moreover, there exists such a division with at most cuts; this is a corollary of the Stromquist–Woodall theorem and the necklace splitting theorem. In general, an exact division cannot be found by a finite algorithm. However, it can be found in some special cases, for example when all agents have piecewise-linear valuations. Suppose we have a non-truthful algorithm (or oracle) for finding an exact division. It can be used to construct a randomized mechanism that is truthful in expectation. The randomized mechanism is a direct-revelation mechanism - it starts by asking all agents to reveal their entire value-measures: Ask the agents to report their value measures. Use the existing algorithm/oracle to generate an exact division. Perform a random permutation on the consensus partition and give each partner one of the pieces. Here, the expected value of each agent is always 1/n regardless of the reported value function. Hence, the mechanism is truthful – no agent can gain anything from lying. Moreover, a truthful partner is guaranteed a value of exactly 1/n with probability 1 (not only in expectation). Hence the partners have an incentive to reveal their true value functions. Super-proportional mechanism A super-proportional division is a cake-division in which each agent receives strictly more than 1/n by their own value measures. Such a division is known to exist if and only if there are at least two agents that have different valuations to at least one piece of the cake. Any deterministic mechanism that always returns a proportional division, and always returns a super-proportional division when it exists, cannot be truthful. Mossel and Tamuz present a super-proportional randomized mechanism that is truthful in expectation: Pick a division from a certain distribution D over divisions. Ask each agent to evaluate his/her piece. If all n evaluations are more than 1/n, then implement the allocation and finish. 
Otherwise, use the exact-division mechanism. The distribution D in step 1 should be chosen such that, regardless of the agents' valuations, there is a positive probability that a super-proportional division be selected if it exists. Then, in step 2 it is optimal for each agent to report the true value: reporting a lower value either has no effect or might cause the agent's value to drop from super-proportional to just proportional (in step 4); reporting a higher value either has no effect or might cause the agent's value to drop from proportional to less than 1/n (in step 3). Approximate exact division using queries Suppose that, rather than directly revealing their valuations, the agents reveal their values indirectly by answering mark and eval queries (as in the Robertson-Webb model). Branzei and Miltersen show that the exact-division mechanism can be "discretized" and executed in the query model. This yields, for any , a randomized query-based protocol, that asks at most queries, is truthful in expectation, and allocates each agent a piece of value between and , by the valuations of all agents. On the other hand, they prove that, in any deterministic truthful query-based protocol, if all agents value all parts of the cake positively, there is at least one agent who gets the empty piece. This implies that, if there are only two agents, then at least one agent is a "dictator" and gets the entire cake. Obviously, any such mechanism cannot be envy-free. Randomized mechanism for piecewise-constant valuations Suppose all agents have piecewise-constant valuations. This means that, for each agent, the cake is partitioned into finitely many subsets, and the agent's value density in each subset is constant. For this case, Aziz and Ye present a randomized algorithm that is more economically-efficient: Constrained Serial Dictatorship is truthful in expectation, robust proportional, and satisfies a property called unanimity: if each agent's most preferred 1/n length of the cake is disjoint from other agents, then each agent gets their most preferred 1/n length of the cake. This is a weak form of efficiency that is not satisfied by the mechanisms based on exact division. When there are only two agents, it is also polynomial-time and robust envy-free. Deterministic mechanisms: piecewise-constant valuations For deterministic mechanisms, the results are mostly negative, even when all agents have piecewise-constant valuations. Kurokawa, Lai and Procaccia prove that there is no deterministic, truthful and envy-free mechanism that requires a bounded number of Robertson-Webb queries. Aziz and Ye prove that there is no deterministic truthful mechanism that satisfies either one of the following properties: Proportional and Pareto-optimal; Robust-proportional and non-wasteful ("non-wasteful" means that no piece is allocated to an agent who does not want it; it is weaker than Pareto-optimality). Menon and Larson introduce the notion of ε-truthfulness, which means that no agent gains more than a fraction ε from misreporting, where ε is a positive constant independent of the agents' valuations. They prove that no deterministic mechanism satisfies either one of the following properties: ε-truthful, approximately-proportional and non-wasteful (for approximation constants at most 1/n); Truthful, approximately-proportional and connected (for approximation constant at most 1/n). 
They present a minor modification to the Even–Paz protocol and prove that it is ε-truthful with ε = 1 - 3/(2n) when n is even, and ε = 1 - 3/(2n) + 1/n2 when n is odd. Bei, Chen, Huzhang, Tao and Wu prove that there is no deterministic, truthful and envy-free mechanism, even in the direct-revelation model, that satisfies either one of the following additional properties: Connected pieces; Non-wasteful; Position oblivious - the allocation of a cake-part is based only on the agents' valuations of that part, and not on its relative position on the cake. Note that these impossibility results hold with or without free disposal. On the positive side, in a replicate economy, where each agent is replicated k times, there are envy-free mechanisms in which truth-telling is a Nash equilibrium: With connectivity requirement, in any envy-free mechanism, truth-telling converges to a Nash equilibrium when k approaches infinity; Without connectivity requirement, in the mechanism that allocates each homogeneous sub-interval equally among all agents, truth-telling is a Nash equilibrium already when k ≥ 2. Tao improves the previous impossibility result by Bei, Chen, Huzhang, Tao and Wu and shows that there is no deterministic, truthful and proportional mechanism, even in the direct-revelation model, and even when all of the followings hold: There are only two agents; Agents are hungry: each agent's valuation is positive (i.e., cannot be 0); The mechanism is allowed to leave some part of the cake unallocated. It is open whether this impossibility result extends to three or more agents. On the positive side, Tao presents two algorithms that attain a weaker notion called "proportional risk-averse truthfulness" (PRAT). It means that, in any profitable deviation for agent i, there exist valuations of the other agents, for which i gets less than his proportional share. This property is stronger than "risk-averse truthfulness", which means that, in any profitable deviation for i, there exist valuations of the other agents, for which i gets less than his value in a truthful reporting. He presents an algorithm that is PRAT and envy-free, and an algorithm that is PRAT, proportional and connected. Piecewise-uniform valuations Suppose all agents have piecewise-uniform valuations. This means that, for each agent, there is a subset of the cake that is desirable for the agent, and the agent's value for each piece is just the amount of desirable cake that it contains. For example, suppose some parts of the cake are covered by a uniform layer of chocolate, while other parts are not. An agent who values each piece only by the amount of chocolate it contains has a piecewise-uniform valuation. This is a special case of piecewise-constant valuations. Several truthful algorithms have been developed for this special case. Chen, Lai, Parkes and Procaccia present a direct-revelation mechanism that is deterministic, proportional, envy-free, Pareto-optimal, and polynomial-time. It works for any number of agents. Here is an illustration of the CLPP mechanism for two agents (where the cake is an interval). Ask each agent to report his/her desired intervals. Each sub-interval, that is desired by no agent, is discarded. Each sub-interval, that is desired by exactly one agent, is allocated to that agent. The sub-intervals, that are desired by both agents, are allocated such that both agents get an equal total length. 
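To make the two-agent illustration above concrete, here is a minimal sketch in Python. It assumes the cake is [0,1] and that each agent reports a list of disjoint desired intervals; the function names, the interval representation, and the choice to split each commonly desired sub-interval in half (which gives both agents equal total contested length) are illustrative assumptions, not the authors' implementation.

```python
def clpp_two_agents(alice, bob):
    """Two-agent illustration of the CLPP mechanism on the cake [0,1].

    alice, bob: lists of disjoint (start, end) intervals the agent desires.
    Returns (alice_share, bob_share) as the total desired length each receives.
    Illustrative sketch only, not the authors' code.
    """
    # Build elementary sub-intervals from all reported endpoints.
    points = sorted({0.0, 1.0, *(p for iv in alice + bob for p in iv)})

    def wants(intervals, lo, hi):
        # The sub-interval (lo, hi) is desired if it lies inside a reported interval.
        return any(a <= lo and hi <= b for a, b in intervals)

    alice_share = bob_share = 0.0
    for lo, hi in zip(points, points[1:]):
        length = hi - lo
        a, b = wants(alice, lo, hi), wants(bob, lo, hi)
        if a and not b:          # desired by Alice only -> allocated to Alice (step 3)
            alice_share += length
        elif b and not a:        # desired by Bob only -> allocated to Bob (step 3)
            bob_share += length
        elif a and b:            # desired by both -> split so totals are equal (step 4)
            alice_share += length / 2
            bob_share += length / 2
        # desired by neither -> discarded (step 2, free disposal)
    return alice_share, bob_share


# Example: Alice wants [0, 0.6], Bob wants [0.4, 1.0]; the overlap [0.4, 0.6] is split.
print(clpp_two_agents([(0.0, 0.6)], [(0.4, 1.0)]))  # -> (0.5, 0.5)
```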
Now, if an agent says that he wants an interval that he actually does not want, then he may get more useless cake in step 3 and less useful cake in step 4. If he says that he does not want an interval that he actually wants, then he gets less useful cake in step 3 and more useful cake in step 4; however, the amount given in step 4 is shared with the other agent, so all in all, the lying agent is at a loss. The mechanism can be generalized to any number of agents. The CLPP mechanism relies on the free disposal assumption, i.e., the ability to discard pieces that are not desired by any agent. Note: Aziz and Ye presented two mechanisms that extend the CLPP mechanism to piecewise-constant valuations - Constrained Cake Eating Algorithm and Market Equilibrium Algorithm. However, both these extensions are no longer truthful when the valuations are not piecewise-uniform. Maya and Nisan show that the CLPP mechanism is unique in the following sense. Consider the special case of two agents with piecewise-uniform valuations, where the cake is [0,1], Alice wants only the subinterval [0,a] for some a<1, and Bob desires only the subinterval [1−b,1] for some b<1. Consider only non-wasteful mechanisms - mechanisms that allocate each piece desired by at least one player to a player who wants it. Each such mechanism must give Alice a subset [0,c] for some c<1 and Bob a subset [1−d,1] for some d<1. In this model: A non-wasteful deterministic mechanism is truthful iff, for some parameter t in [0,1], it gives Alice the interval [0, min(a, max(1−b,t))] and Bob the interval [1−min(b,max(1−a,1−t)),1]. Such a mechanism is envy-free iff t=1/2; in this case it is equivalent to the CLPP mechanism. They also show that, even for 2 agents, any truthful mechanism achieves at most 0.93 of the optimal social welfare. Li, Zhang and Zhang show that the CLPP mechanism works well even when there are externalities (i.e., some agents derive some benefit from the value given to others), as long as the externalities are sufficiently small. On the other hand, if the externalities (either positive or negative) are large, no truthful non-wasteful and position independent mechanism exists. Alijani, Farhadi, Ghodsi, Seddighin and Tajik present several mechanisms for special cases of piecewise-uniform valuations: The expansion process handles piecewise-uniform valuations where each agent has a single desired interval, and moreover, the agents' desired intervals satisfy an ordering property. It is polynomial-time, truthful, envy-free, and guarantees connected pieces. The expansion process with unlocking handles piecewise-uniform valuations where each agent has a single desired interval, but without the ordering requirement. It is polynomial-time, truthful, envy-free, and not necessarily connected, but it makes at most 2n−2 cuts. Bei, Huzhang and Suksompong present a mechanism for two agents with piecewise-uniform valuations that has the same properties as CLPP (truthful, deterministic, proportional, envy-free, Pareto-optimal and runs in polynomial time), but guarantees that the entire cake is allocated: Find the smallest x in [0,1] such that Alice's desired length in [0,x] equals Bob's desired length in [x,1]. Give Alice the intervals in [0,x] valued by Alice and the intervals in [x,1] not valued by Bob; give the remainder to Bob. The BHS mechanism works both for cake-cutting and for chore division (where the agents' valuations are negative).
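A rough sketch of the two-agent BHS rule just described, again assuming piecewise-uniform valuations reported as interval lists on [0,1]. The bisection search for the balancing cut point and the tie handling (it finds a balancing x, not necessarily the smallest one) are simplifying assumptions rather than the authors' implementation.

```python
def desired_length(intervals, lo, hi):
    """Total length of the reported intervals that falls inside [lo, hi]."""
    return sum(max(0.0, min(b, hi) - max(a, lo)) for a, b in intervals)


def bhs_two_agents(alice, bob, tol=1e-9):
    """Sketch of the two-agent BHS allocation on the cake [0,1]."""
    # 1) Find a cut point x where Alice's desired length in [0,x] equals
    #    Bob's desired length in [x,1] (f is non-decreasing, so bisect).
    f = lambda x: desired_length(alice, 0.0, x) - desired_length(bob, x, 1.0)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    x = (lo + hi) / 2

    # 2) Allocate: Alice gets what she values in [0,x] plus what Bob does not
    #    value in [x,1]; Bob gets the rest.  Report each agent's desired length.
    points = sorted({0.0, 1.0, x, *(p for iv in alice + bob for p in iv)})
    alice_val = bob_val = 0.0
    for a, b in zip(points, points[1:]):
        to_alice = ((b <= x and desired_length(alice, a, b) > 0)
                    or (a >= x and desired_length(bob, a, b) == 0))
        if to_alice:
            alice_val += desired_length(alice, a, b)
        else:
            bob_val += desired_length(bob, a, b)
    return x, alice_val, bob_val


# Example from the text: Alice wants [0,1], Bob wants [0,0.5] -> x = 0.25,
# Alice's desired length 0.75, Bob's desired length 0.25.
print(bhs_two_agents([(0.0, 1.0)], [(0.0, 0.5)]))
```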
Note that BHS does not satisfy some natural desirable properties: It does not guarantee connected pieces, for example when Alice wants [0,1] and Bob wants [0,0.5], then x=0.25, Alice gets [0,0.25] and [0.5,1], and Bob gets [0.25,0.5]. It is not anonymous (see symmetric fair cake-cutting): if Alice wants [0,1] and Bob wants [0,0.5], then Alice gets a desired length of 0.75 and Bob gets 0.25, but if the valuations are switched (Alice wants [0,0.5] and Bob wants [0,1]), then x=0.5 and both agents get desired length 0.5. It is not position oblivious: if Alice wants [0,0.5] and Bob wants [0,1] then both agents get value 0.5, but if Alice's desired interval moves to [0.5,1] then x=0.75 and Alice gets 0.25 and Bob gets 0.75. This is not a problem with the specific mechanism: it is provably impossible to have a truthful and envy-free mechanism that allocates the entire cake and guarantees any of these three properties, even for two agents with piecewise-uniform valuations. The BHS mechanism was extended to any number of agents, but only for a special case of piecewise-uniform valuations, in which each agent desires only a single interval of the form [0, xi]. Ianovsky proves that no truthful mechanism can attain a utilitarian-optimal cake-cutting, even when all agents have piecewise-uniform valuations. Moreover, no truthful mechanism can attain an allocation with utilitarian welfare at least as large as any other mechanism. However, there is a simple truthful mechanism (denoted Lex Order) that is non-wasteful: give to agent 1 all pieces that he likes; then, give to agent 2 all pieces that he likes and were not yet given to agent 1; etc. A variant of this mechanism is the Length Game, in which the agents are renamed by the total length of their desired intervals, such that the agent with the shortest interval is called 1, the agent with the next-shortest interval is called 2, etc. This is not a truthful mechanism, however: If all agents are truthful, then it produces a utilitarian-optimal allocation. If the agents are strategic, then all its well-behaved Nash equilibria are Pareto-efficient and envy-free, and yield the same payoffs as the CLPP mechanism. Summary of truthful mechanisms and impossibility results See also Strategic fair division Truthful resource allocation References Fair division protocols Mechanism design Cake-cutting
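For the Lex Order mechanism mentioned above (each agent, in a fixed order, takes every remaining piece they like), here is a hedged sketch; the interval representation and helper names are illustrative assumptions, not taken from the cited work.

```python
def lex_order(reports):
    """Sketch of the Lex Order mechanism for piecewise-uniform valuations.

    reports: list of interval lists, one per agent, in a fixed agent order.
    Each agent in turn receives every elementary piece they like that has not
    already been given away.  Illustrative only.
    """
    points = sorted({0.0, 1.0, *(p for ivs in reports for iv in ivs for p in iv)})
    taken = [False] * (len(points) - 1)
    allocation = [[] for _ in reports]
    for agent, intervals in enumerate(reports):
        for i, (lo, hi) in enumerate(zip(points, points[1:])):
            liked = any(a <= lo and hi <= b for a, b in intervals)
            if liked and not taken[i]:
                allocation[agent].append((lo, hi))
                taken[i] = True
    return allocation


# Agent 0 likes [0, 0.5]; agent 1 likes [0.25, 1].  Agent 0 takes all of [0, 0.5];
# agent 1 is left with [0.5, 1].
print(lex_order([[(0.0, 0.5)], [(0.25, 1.0)]]))
```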
Truthful cake-cutting
Mathematics
3,693
5,205,992
https://en.wikipedia.org/wiki/Simiispumavirus%20pantrosch
Simian foamy virus (SFV), historically Human foamy virus (HFV), is a species of the genus Spumavirus that belongs to the family of Retroviridae. It has been identified in a wide variety of primates, including prosimians, New World and Old World monkeys, as well as apes, and each species has been shown to harbor a unique (species-specific) strain of SFV, including African green monkeys, baboons, macaques, and chimpanzees. As it is related to the more well-known retrovirus human immunodeficiency virus (HIV), its discovery in primates has led to some speculation that HIV may have been spread to the human species in Africa through contact with blood from apes, monkeys, and other primates, most likely through bushmeat-hunting practices. The foamy viruses derive their name from the characteristic ‘foamy’ appearance of the cytopathic effect (CPE) induced in the cells. Foamy virus in humans occurs only as a result of zoonotic infection. Description Although the simian foamy virus is endemic in African apes and monkeys, there are extremely high infection rates in captivity, ranging from 70% to 100% in adult animals. As humans are in close proximity to infected individuals, people who have had contact with primates can become infected with SFV, making SFV a zoonotic virus. Its ability to cross over to humans was proven in 2004 by a joint United States and Cameroonian team which found the retrovirus in gorillas, mandrills, and guenons; unexpectedly, they also found it in 10 of 1,100 local Cameroon residents. Of those found infected, the majority are males who had been bitten by a primate. While this only accounts for 1% of the population, this detail alarms some who fear the outbreak of another zoonotic epidemic. SFV causes cells to fuse with each other to form syncytia, whereby the cell becomes multi-nucleated and many vacuoles form, giving it a "foamy" appearance. Structure The SFV is a spherical, enveloped virus that ranges from 80 to 100 nm in diameter. The cellular receptors have not been characterized, but it is hypothesized that it has a molecular structure with near ubiquitous prevalence, since a wide range of cells are permissible to infection. FV is characterized by an immature looking core with an electron lucent center with glycoprotein spikes on the surface. As a retrovirus, SFV poses the following structural characteristics: Envelope: Composed of phospholipids taken from a lipid bilayer, in this case the endoplasmic reticulum; this difference gives FV a unique morphology. Additional glycoproteins are synthesized from the env gene. The envelope protects the interior of the virus from the environment, and enables entry by fusing to the membrane of the permissive cell. RNA: The genetic material that carries the code for protein production to create additional viral particles. Proteins: consisting of gag proteins, protease (PR), pol proteins, and env proteins. Group-specific antigen (gag) proteins are major components of the viral capsid. Protease performs proteolytic cleavages during virion maturation to make mature gag and pol proteins. Pol proteins are responsible for synthesis of viral DNA and integration into host DNA after infection. Env proteins are required for the entry of virions into the host cell. 
The ability of the retrovirus to bind to its target host cell using specific cell-surface receptors is given by the surface component (SU) of the Env protein, while the ability of the retrovirus to enter the cell via membrane fusion is imparted by the membrane-anchored trans-membrane component (TM). Lack of or imperfections in Env proteins make the virus non-infectious. Genome As a retrovirus, the genomic material is monopartite, linear, positive-sense single-stranded RNA that forms a double stranded DNA intermediate through the use of the enzyme reverse transcriptase. The RNA strand is approximately 12kb's in length, with a 5'-cap and a 3'poly-A tail. The first full genome annotation of a proviral SFV isolated from cynomolgus macaque (Macaca fascicularis) had been performed in December 2016, where it revealed two regulatory sequences, tas and bet, in addition to the structural sequences of gag, pol and env. There are two long terminal repeats (LTRs) of about 600 nucleotides long at the 5' and 3' ends that function as promoters, with an additional internal promoter (IP) located near the 3' end of env. The LTRs contain the U3, R, and U5 regions that are characteristic of retroviruses. There is also a primer binding site (PBS) at the 5'end and a polypurine tract (PPT) at the 3'end. Whereas gag, pol, and env are conserved throughout retroviruses, the tas gene is unique and found only in Spumaviridae. It encodes for a trans-activator protein required for transcription from both the LTR promoter and the IP. The synthesized Tas protein, which was initially known as Bel-1, is a 36-kDa phosphoprotein which contains an acidic transcription activation domain at its C-terminus and a centrally located DNA binding domain. The Bet protein is required for viral replication, as it counteracts the innate antiretroviral activity of APOBEC3 family defense factors by obstructing their incorporation into virions. The DNA found is linear and the length of the genome. The genome encodes the usual retroviral genes pol, gag, and env as well as two additional genes tas or bel-1 and bet. The role for bet is not quite clear, research has shown that it is dispensable for replication of the virus in tissue culture. Recently, a novel mechanism was reported where foamy virus accessory protein Bet (unlike HIV-1 Vif) impaired the cytoplasmic solubility of APOBEC3G. The tas gene, however, is required for replication. It encodes a protein that functions in transactivating the long terminal repeat (LTR) promoter. FV has a second promoter, the internal promoter (IP) which is located in the env gene. The IP drives expression of the tas and bet genes. The IP is also unique in that the virus has the capacity to transcribe mRNAs from it; usually the complex retroviruses exclusively express transcripts from the LTR. The structural genes of FV are another one of its unique features. The Gag protein is not efficiently cleaved into the mature virus which lends to the immature morphology. The Pol precursor protein is only partially cleaved; the integrase domain is removed by viral protease. As in other retroviruses, the Env protein is cleaved into surface and transmembrane domains but the FV Env protein also contains an endoplasmic reticulum retention signal which is part of why the virus buds from the endoplasmic reticulum. Another area of difference between FV and other retroviruses is the possibility of recycling the core once the virus is in the cell. 
Replication cycle FV replication more closely resembles the Hepadnaviridae, which are another family of reverse transcriptase encoding viruses. Reverse transcription of the genome occurs at a later step in the replication cycle, which results in the infectious particles having DNA rather than RNA, this also leads to less integration in the host genome. Entry into cell The virus attaches to host receptors through the SU glycoprotein, and the TM glycoprotein mediates fusion with the cell membrane. The entry receptor that triggers viral entry has not been identified, but the absence of heparan sulfate in one study resulted in a decrease of infection, acknowledging it as an attachment factor that assists in mediating the entry of the viral particle. It is not clear if the fusion is pH-dependent or independent, although some evidence has been provided to indicate that SFV does enter cells through a pH-dependent step. Once the virus has entered the interior of the cell, the retroviral core undergoes structural transformations through the activity of viral proteases. Studies have revealed that there are three internal protease-dependent cleavage sites that are critical for the virus to be infectious. One mutation within the gag gene had caused a structural change to the first cleavage site, preventing subsequent cleavage at the two other sites by the viral PR, reflecting its prominent role. Once disassembled, the genetic material and enzymes are free within the cytoplasm to continue with the viral replication. Whereas most retroviruses deposit ssRNA(+) into the cell, SFV and other related species are different in that up to 20% of released viral particles already contains dsDNA genomes. This is due to a unique feature of spumaviruses in which the onset of reverse transcription of genomic RNA occurs before release rather than after entry of the new host cell like in other retroviruses. Replication and transcription As both ssRNA(+) and dsDNA enter the cell, the remaining ssRNA is copied into dsDNA through reverse transcriptase. Nuclear entry of the viral dsDNA is covalently integrated into the cell's genome by the viral integrase, forming a provirus. The integrated provirus utilizes the promoter elements in the 5'LTR to drive transcription. This gives rise to the unspliced full length mRNA that will serve as genomic RNA to be packaged into virions, or used as a template for translation of gag. The spliced mRNAs encode pol (PR, RT, RnaseH, IN) and env (SU, TM) that will be used to later assemble the viral particles. The Tas trans-activator protein augments transcription directed by the LTR through cis-acting targets in the U3 domain of the LTR. The presence of this protein is crucial, as in the absence of Tas, LTR-mediated transcription cannot be detected. Foamy viruses utilize multiple promoters.The IP is required for viral infectivity in tissue culture, as this promoter has a higher basal transcription level than the LTR promoter, and its use leads to transcripts encoding Tas and Bet. Once levels of Tas accumulate, it begins to make use of the LTR promoter, which binds Tas with lower affinity than the IP and leads to accumulation of gag, pol, and env transcripts. Assembly and release The SFV capsid is assembled in the cytoplasm as a result of multimerization of Gag molecules, but unlike other related viruses, SFV Gag lacks an N-terminal myristylation signal and capsids are not targeted to the plasma membrane (PM). 
They require expression of the envelope protein for budding of intracellular capsids from the cell, suggesting a specific interaction between the Gag and Env proteins. Evidence for this interaction was discovered in 2001 when a deliberate mutation for a conserved arginine (Arg) residue at position 50 to alanine of the SFVcpz inhibited proper capsid assembly and abolished viral budding even in the presence of the envelope glycoproteins. Analysis of the glycoproteins on the envelope of the viral particle indicate that it is localized to the endoplasmic reticulum (ER), and that once it buds from the organelle, the maturation process is finalized and can leave to infect additional cells. A dipeptide of two lysine residues (dilysine) was the identified motif that determined to be the specific molecule that mediated the signal, localizing viral particles in the ER. Modulation and interaction of host cell There is little data on how SFV interacts with the host cell as the infection takes its course. The most obvious effect that can be observed is the formation of syncytia that results in multinucleated cells. While the details for how SFV can induce this change are not known, the related HIV does cause similar instances among CD4+ T cells. As the cell transcribes the integrated proviral genome, glycoproteins are produced and displayed at the surface of the cell. If enough proteins are at the surface with other CD4+ T cells nearby, the glycoproteins will attach and result in the fusion of several cells. Foamy degeneration, or vacuolization is another observable change within the cells, but it is unknown how SFV results in the formation of numerous cytoplasmic vacuoles. This is another characteristic of retroviruses, but there are no studies or explanations on why this occurs. Transmission and pathogenicity The transmission of SFV is believed to spread through saliva, because large quantities of viral RNA, indicative of SFV gene expression and replication, are present in cells of the oral mucosa. Aggressive behaviors such as bites, to nurturing ones such as a mother licking an infant all have the ability to spread the virus. Studies of natural transmission suggest that infants of infected mothers are resistant to infection, presumably because of passive immunity from maternal antibodies, but infection becomes detectable by three years of age. Little else is known about the prevalence and transmission patterns of SFV in wild-living primate populations. The first case of a spumavirus being isolated from a primate was in 1955 (Rustigan et al., 1955) from the kidneys. What is curious about the cytopathology of SFV is that while it results in rapid cell death for cells in vitro, it loses its highly cytopathic nature in vivo. With little evidence to suggest that SFV infection causes illness, some scientists believe that it has a commensal relationship to simians. In one study to determine the effects of SFV(mac239) on rhesus macaques that were previously infected with another type of the virus, the experiment had provided evidence that previous infection can increase the risk viral loads reaching unsustainable levels, killing CD4+ T cells and ultimately resulting in the expiration of the doubly infected subjects. SFV/SIV models have since been proposed to replicate the relationship between SFV and HIV in humans, a potential health concern for officials. 
Tropism SFV can infect a wide range of cells, with in vitro experiments confirming that fibroblasts, epithelial cells, and neural cells all showed extensive cytopathology that is characteristic of foamy virus infection. The cytopathic effects in B lymphoid cells and macrophages was reduced, where reverse transcriptase values were lower when compared to fibroblasts and epithelial cells. Cells that expressed no signs of cytopathy from SFV were the Jurkat and Hut-78 T-cell lines. Cospeciation of SFV and primates The phylogenetic tree analysis of SFV polymerase and mitochondrial cytochrome oxidase subunit II (COII has been shown as a powerful marker used for primate phylogeny) from African and Asian monkeys and apes provides very similar branching order and divergence times among the two trees, supporting the cospeciation. Also, the substitution rate in the SFV gene was found to be extremely slow, i.e. the SFV has evolved at a very low rate (1.7×10−8 substitutions per site per year). These results suggest SFV has been cospeciated with Old World primates for about 30 million years, making them the oldest known vertebrate RNA viruses. The SFV sequence examination of species and subspecies within each clade of the phylogenetic tree of the primates indicated cospeciation of SFV and the primate hosts, as well. A strong linear relationship was found between the branch lengths for the host and SFV gene trees, which indicated synchronous genetic divergence in both data sets. By using the molecular clock, it was observed that the substitution rates for the host and SFV genes were very similar. The substitution rates for host COII gene and the SFV gene were found out to be and respectively. This is the slowest rate of substitution observed for RNA viruses and is closer to that of DNA viruses and endogenous retroviruses. This rate is quite different from that of exogenous RNA viruses such as HIV and influenza A virus (10−3 to 10−4 substitutions per site per year). Prevalence Researchers in Cameroon, the Democratic Republic of the Congo, France, Gabon, Germany, Japan, Rwanda, the United Kingdom, and the United States have found that simian foamy virus is widespread among wild chimpanzees throughout equatorial Africa. Humans exposed to wild primates, including chimpanzees, can acquire SFV infections. Since the long-term consequences of these cross-species infections are not known, it is important to determine to what extent wild primates are infected with simian foamy viruses. In this study, researchers tested this question for wild chimpanzees by using novel noninvasive methods. Analyzing over 700 fecal samples from 25 chimpanzee communities across sub-Saharan Africa, the researchers obtained viral sequences from a large proportion of these communities, showing a range of infection rates from 44% to 100%. Major disease outbreaks have originated from cross-species transmission of infectious agents between primates and humans, making it important to learn more about how these cross-species transfers occur. The high SFV infection rates of chimpanzees provide an opportunity to monitor where humans are exposed to these viruses. Identifying the locations may help determine where the highest rates of human–chimpanzee interactions occur. This may predict what other pathogens may jump the species barrier next. Human infection Persistence in the absence of disease, but in the presence of antibodies is a defining characteristic of FV infection. 
HFV has been isolated from patients with various neoplastic and degenerative diseases such as myasthenia gravis, multiple sclerosis, De Quervain's thyroiditis, and Graves’ disease but the virus’ etiological role is still unclear. Recent studies indicate that it is not pathogenic in humans and experimentally infected animals. If, in fact, HFV is not pathogenic in humans and is a retrovirus, it is an ideal vector for gene therapy. Another important feature of the virus is that the Gag, Pol, and Env proteins are synthesized independently; this is important because it means that each protein can be provided in trans on three different plasmids to create a stable packaging cell line. Having this would possibly reduce the need for a replication-competent helper virus. Other advantages are human to human transmission has never been reported, it has a safer spectrum of insertional mutagenesis than other retroviruses, and since there are two promoters in the genome, it may be possible to make a vector that expresses the foreign genes under the control of both promoters. A disadvantage of HFV as a gene therapy vector is that since it buds from an intracellular membrane (endoplasmic reticulum membrane); it results in low extracellular titers of the viral vector. History The first description of foamy virus (FV) was in 1954. It was found as a contaminant in primary monkey kidney cultures. The first isolate of the “foamy viral agent” was in 1955. Not too long after this, it was isolated from a wide variety of New and Old World monkeys, cats, and cows. It was not until several years later that humans entered the scene. In 1971, a viral agent with FV-like characteristics was isolated from lymphoblastoid cells released from a human nasopharyngeal carcinoma (NPC) from a Kenyan patient. The agent was termed a human FV because of its origin, and named SFVcpz(hu) as the prototypic laboratory strain. The SFV came from its similarity to simian foamy virus (SFV). Not long after this, a group of researchers concluded that it was a distinct type of FV and most closely related to SFV types 6 and 7, both of which were isolated from chimpanzees. In another report, however, a different group of researchers claimed that SFVcpz(hu) was not a distinct type of FV but rather a variant strain of chimpanzee FV. The debate came to an end in 1994 when the virus was cloned and sequenced. The sequencing showed that there are 86–95% identical amino acids between the SFV and the one isolated from the Kenyan patient. In addition, phylogenetic analysis showed that the pol regions of the two genomes shared 89–92% of their nucleotides and 95–97% of the amino acids are identical between the human virus and various SFV strains. These results indicated that SFVcpz(hu) is likely a variant of SFV and not a unique isolate. When looking at the origin of the human FV, sequence comparisons showed that from four different species of chimpanzees, SFVcpz(hu) was most closely related to the Eastern chimpanzee. This subspecies has a natural habitat in Kenya and thus was most likely the origin of this SFV variant, and the virus was probably acquired as a zoonotic infection. References External links Animal viral diseases Primate diseases Spumaviruses Unaccepted virus taxa
Simiispumavirus pantrosch
Biology
4,500
12,769,013
https://en.wikipedia.org/wiki/Permissive%20dialing
In North America, permissive dialing is the ability to make phone calls in an area subject to a newly introduced area code by using both the new and preexisting dialing methods. When an area is given a new area code under a split plan, the area's previous area code would no longer be valid for calls in the area, so calls to numbers using the old area code will not work. To alleviate misdialing frustration, the local routing can be set up such that both the old and new area codes will work for the same telephone exchange. During this period, the local numbering authority must not reassign the area's existing exchanges to the remaining area of the old area code, nor vice versa. At the end of the permissive dialing period, the old area code is no longer valid for numbers in the affected area. Under an overlay plan, permissive dialing refers to the ability to continue to connect calls via 7-digit dialing while also making 10-digit dialing valid. Again, the affected area must not introduce any new ambiguous telephone exchanges. At the end of the period, 10-digit dialing becomes mandatory. References External links FCC Area Code Fact Sheet, 1995 Area Code 878 press release, 2001 Telephone numbers
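The split-plan routing behaviour described above can be illustrated with a small sketch. All codes, exchanges, and dates below are made up for illustration; they do not describe any real numbering plan change.

```python
from datetime import date

# Illustrative data only: hypothetical area codes, exchanges, and cutover date.
OLD_CODE, NEW_CODE = "200", "300"
MOVED_EXCHANGES = {"555", "836"}          # exchanges reassigned to the new code
PERMISSIVE_END = date(2024, 6, 1)         # hypothetical end of permissive dialing

def routes(area_code, exchange, today):
    """Return True if a dialed area code + exchange still connects.

    During the permissive period the old area code is still accepted for
    exchanges that moved to the new code; afterwards only the new code works.
    """
    if area_code == NEW_CODE and exchange in MOVED_EXCHANGES:
        return True                                   # new dialing always works
    if area_code == OLD_CODE and exchange in MOVED_EXCHANGES:
        return today <= PERMISSIVE_END                # old dialing only during the period
    return False

print(routes("200", "555", date(2024, 1, 15)))   # True  (permissive period)
print(routes("200", "555", date(2024, 9, 1)))    # False (period ended)
```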
Permissive dialing
Mathematics
260
1,631,010
https://en.wikipedia.org/wiki/Risk-adjusted%20return%20on%20capital
Risk-adjusted return on capital (RAROC) is a risk-based profitability measurement framework for analysing risk-adjusted financial performance and providing a consistent view of profitability across businesses. The concept was developed by Bankers Trust and principal designer Dan Borge in the late 1970s. Note, however, that increasingly return on risk-adjusted capital (RORAC) is used as a measure, whereby the risk adjustment of capital is based on the capital adequacy guidelines as outlined by the Basel Committee. Basic formula The formula is given by RAROC = (risk-adjusted return) / (economic capital). Broadly speaking, in business enterprises, risk is traded off against benefit. RAROC is defined as the ratio of risk-adjusted return to economic capital. The economic capital is the amount of money which is needed to secure survival in a worst-case scenario; it is a buffer against unexpected shocks in market values. Economic capital is a function of market risk, credit risk, and operational risk, and is often calculated by VaR. This use of capital based on risk improves the capital allocation across different functional areas of banks, insurance companies, or any business in which capital is placed at risk for an expected return above the risk-free rate. The RAROC system allocates capital for two basic reasons: Risk management Performance evaluation For risk management purposes, the main goal of allocating capital to individual business units is to determine the bank's optimal capital structure—that is, economic capital allocation is closely correlated with individual business risk. As a performance evaluation tool, it allows banks to assign capital to business units based on the economic value added of each unit. Decision measures based on regulatory and economic capital With the financial crisis of 2007, and the introduction of the Dodd–Frank Act and Basel III, minimum regulatory capital requirements have become onerous. Stringent regulatory capital requirements spurred debates on the validity of required economic capital in managing an organization's portfolio composition, with some arguing that constraining requirements should have organizations focus entirely on the return on regulatory capital in measuring profitability and in guiding portfolio composition. The counterargument highlights that concentration and diversification effects should play a prominent role in portfolio selection – dynamics recognized in economic capital, but not regulatory capital. It did not take long for the industry to recognize the relevance and importance of both regulatory and economic measures, and to eschew focusing exclusively on one or the other. Relatively simple rules were devised to have both regulatory and economic capital enter into the process. In 2012, researchers at Moody's Analytics designed a formal extension to the RAROC model that accounts for regulatory capital requirements as well as economic risks. In the framework, capital allocation can be represented as a composite capital measure (CCM) that is a weighted combination of economic and regulatory capital – with the weight on regulatory capital determined by the degree to which an organization is capital constrained. See also Enterprise risk management Omega ratio Risk return ratio Risk-return spectrum Sharpe ratio Sortino ratio Notes References External links RAROC & Economic Capital Between RAROC and a hard place Actuarial science Capital requirement Financial ratios Financial risk
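As a rough illustration of the ratio defined above and of the composite capital measure, here is a small sketch. The decomposition of the risk-adjusted return in the numerator (revenue minus costs minus expected loss, plus a risk-free return on the capital held) is a common textbook convention assumed here for illustration, and all numbers are made up.

```python
def raroc(expected_revenue, costs, expected_loss, economic_capital,
          risk_free_rate=0.0):
    """RAROC = risk-adjusted return / economic capital (basic form).

    The numerator decomposition used here is an assumed textbook convention,
    not taken from the article.
    """
    risk_adjusted_return = (expected_revenue - costs - expected_loss
                            + risk_free_rate * economic_capital)
    return risk_adjusted_return / economic_capital


def composite_capital(economic_capital, regulatory_capital, weight_regulatory):
    """Composite capital measure: a weighted mix of economic and regulatory capital."""
    w = weight_regulatory
    return w * regulatory_capital + (1 - w) * economic_capital


# Illustrative numbers only.
ec, rc = 80.0, 100.0
print(raroc(expected_revenue=20.0, costs=5.0, expected_loss=3.0,
            economic_capital=ec, risk_free_rate=0.02))        # (20-5-3+1.6)/80 = 0.17
print(composite_capital(ec, rc, weight_regulatory=0.4))       # 0.4*100 + 0.6*80 = 88.0
```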
Risk-adjusted return on capital
Mathematics
622
5,737,035
https://en.wikipedia.org/wiki/Chordin
Chordin (from Greek χορδή, string, catgut) is a protein with a prominent role in dorsal–ventral patterning during early embryonic development. In humans it is encoded by the CHRD gene. History Chordin was originally identified in the African clawed frog (Xenopus laevis) in the laboratory of Edward M. De Robertis as a key developmental protein that dorsalizes early vertebrate embryonic tissues. It was first hypothesized that chordin plays a role in the dorsal homeobox genes in Spemann's organizer. The chordin gene was discovered through its activation following use of gsc (goosecoid) and Xnot mRNA injections. The discoverers of chordin concluded that it is expressed in embryo regions where gsc and Xnot were also expressed, which included the prechordal plate, the notochord, and the chordoneural hinge. The expression of the gene in these regions led to the name chordin. Initial functions of chordin were thought to include recruitment of neighboring cells to assist in the forming of the axis along with mediating cell interactions for organization of tail, head, and body regions. Protein Structure Chordin is a 941-amino-acid protein, whose three-dimensional transmission electron microscopy structure resembles a horseshoe. A characteristic structural feature of chordin is the presence of four cysteine-rich repeats, which are 58–75 residues long, each containing 10 cysteines with characteristic spacings. These repeats are homologous with domains in a number of extracellular matrix proteins, including von Willebrand factor. There are five named isoforms of this protein that are produced by alternative splicing. Gene structure CHRD comprises 23 exons, spans 11.5 kb, and is localized at 3q27. The THPO (thrombopoietin) gene is located in the same single cosmid clone along with the eukaryotic translation initiation factor-4-gamma gene (EIF4G1). Function Chordin dorsalizes the developing embryo by binding ventralizing TGFβ proteins such as bone morphogenetic proteins (BMP) through its four cysteine-rich regions. Chordin blocks BMP signaling by preventing BMP from interacting with cell surface receptors, which inhibits the formation of epidermis and promotes the formation of neural tissue. Chordin specifically inhibits BMP-2, -4, and -7. Chordin function is improved by a few co-factors that include the Twisted Gastrulation gene (Tsg) and the zinc metalloprotease. Tsg improves the ability of Chordin to become a BMP antagonist. The zinc metalloprotease functions by cleaving chordin, which allows for improved BMP signaling from complexes that were previously inactive. This occurs by improving Chordin's substrate ability in cleavage reactions and by releasing BMP from chordin products. Experiments with zebrafish showed that a chordin gene mutation can lead to less neural and dorsal tissue. Target gene deletions of chordin, follistatin, and noggin in mice were shown to also have effects on neural induction, while deletion of both chordin and noggin showed more severe effects on neural development. The phenotype for this type of deletion showed almost full headlessness. This is significant because when only noggin is deficient there are mild defects but the head still forms. Noggin has been shown to overlap with chordin in its expression at the midgastrula stage. Further experiments testing the role of both noggin and chordin showed that these two proteins are essential for mesodermal development and anterior pattern elaboration.
However, noggin and chordin were not shown to play a significant role in the development of the anterior visceral endoderm. Chordin mRNA in mice are expressed early on during the anterior primitive streak. In the chick embryo it is expressed in the anterior cells of Koller's sickle, which form the anterior cells of the primitive streak, a key structure through which gastrulation occurs. As the streak evolves to a node and axial mesoderm, the chordin mRNA is still expressed. This evidence suggests a patterning role of chordin during the early embryo stages. When chordin was inactivated, animals may initially appear to have normal development, but later on issues manifest in the inner and outer ear along with pharyngeal and cardiovascular abnormalities. Experiments with Xenopus embryos showed that overexpression of BMP1 and TLL1 can be used to counteract chordin's dorsalization functions. This finding suggests that the major chordin antagonist is BMP1. In mice, chordin is expressed in the node but not in the anterior visceral endoderm. It has been found to be required for forebrain development. In developing mice that are deficient in both chordin and noggin, the head is nearly absent. Chordin is also involved in avian gastrulation and may also play a role in organogenesis. References Proteins Vertebrate developmental biology Von Willebrand factor type C domain CHRD domain
Chordin
Chemistry
1,076
53,124,143
https://en.wikipedia.org/wiki/Bang%27s%20theorem%20on%20tetrahedra
In geometry, Bang's theorem on tetrahedra states that, if a sphere is inscribed within a tetrahedron, and segments are drawn from the points of tangency to each vertex on the same face of the tetrahedron, then all four points of tangency have the same triple of angles. In particular, it follows that the 12 triangles into which the segments subdivide the faces of the tetrahedron form congruent pairs across each edge of the tetrahedron. It is named after A. S. Bang, who posed it as a problem in 1897. References Theorems in geometry Euclidean solid geometry Tetrahedra
Bang's theorem on tetrahedra
Physics,Mathematics
134
529,891
https://en.wikipedia.org/wiki/Ventilation%20%28architecture%29
Ventilation is the intentional introduction of outdoor air into a space. Ventilation is mainly used to control indoor air quality by diluting and displacing indoor pollutants; it can also be used to control indoor temperature, humidity, and air motion to benefit thermal comfort, satisfaction with other aspects of the indoor environment, or other objectives. The intentional introduction of outdoor air is usually categorized as either mechanical ventilation, natural ventilation, or mixed-mode ventilation. Mechanical ventilation is the intentional fan-driven flow of outdoor air into and/or out from a building. Mechanical ventilation systems may include supply fans (which push outdoor air into a building), exhaust fans (which draw air out of a building and thereby cause equal ventilation flow into a building), or a combination of both (called balanced ventilation if it neither pressurizes nor depressurizes the inside air, or only slightly depressurizes it). Mechanical ventilation is often provided by equipment that is also used to heat and cool a space. Natural ventilation is the intentional passive flow of outdoor air into a building through planned openings (such as louvers, doors, and windows). Natural ventilation does not require mechanical systems to move outdoor air. Instead, it relies entirely on passive physical phenomena, such as wind pressure, or the stack effect. Natural ventilation openings may be fixed, or adjustable. Adjustable openings may be controlled automatically (automated), owned by occupants (operable), or a combination of both. Cross ventilation is a phenomenon of natural ventilation. Mixed-mode ventilation systems use both mechanical and natural processes. The mechanical and natural components may be used at the same time, at different times of day, or in different seasons of the year. Since natural ventilation flow depends on environmental conditions, it may not always provide an appropriate amount of ventilation. In this case, mechanical systems may be used to supplement or regulate the naturally driven flow. Ventilation is typically described as separate from infiltration. Infiltration is the circumstantial flow of air from outdoors to indoors through leaks (unplanned openings) in a building envelope. When a building design relies on infiltration to maintain indoor air quality, this flow has been referred to as adventitious ventilation. The design of buildings that promote occupant health and well-being requires a clear understanding of the ways that ventilation airflow interacts with, dilutes, displaces, or introduces pollutants within the occupied space. Although ventilation is an integral component of maintaining good indoor air quality, it may not be satisfactory alone. A clear understanding of both indoor and outdoor air quality parameters is needed to improve the performance of ventilation in terms of occupant health and energy. In scenarios where outdoor pollution would deteriorate indoor air quality, other treatment devices such as filtration may also be necessary. In kitchen ventilation systems, or for laboratory fume hoods, the design of effective effluent capture can be more important than the bulk amount of ventilation in a space. More generally, the way that an air distribution system causes ventilation to flow into and out of a space impacts the ability of a particular ventilation rate to remove internally generated pollutants. The ability of a system to reduce pollution in space is described as its "ventilation effectiveness". 
However, the overall impacts of ventilation on indoor air quality can depend on more complex factors such as the sources of pollution, and the ways that activities and airflow interact to affect occupant exposure. An array of factors related to the design and operation of ventilation systems are regulated by various codes and standards. Standards dealing with the design and operation of ventilation systems to achieve acceptable indoor air quality include the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standards 62.1 and 62.2, the International Residential Code, the International Mechanical Code, and the United Kingdom Building Regulations Part F. Other standards that focus on energy conservation also impact the design and operation of ventilation systems, including ASHRAE Standard 90.1, and the International Energy Conservation Code. When indoor and outdoor conditions are favorable, increasing ventilation beyond the minimum required for indoor air quality can significantly improve both indoor air quality and thermal comfort through ventilative cooling, which also helps reduce the energy demand of buildings. During these times, higher ventilation rates, achieved through passive or mechanical means (air-side economizer, ventilative pre-cooling), can be particularly beneficial for enhancing people's physical health. Conversely, when conditions are less favorable, maintaining or improving indoor air quality through ventilation may require increased use of mechanical heating or cooling, leading to higher energy consumption. Ventilation should be considered for its relationship to "venting" for appliances and combustion equipment such as water heaters, furnaces, boilers, and wood stoves. Most importantly, building ventilation design must be careful to avoid the backdraft of combustion products from "naturally vented" appliances into the occupied space. This issue is of greater importance for buildings with more air-tight envelopes. To avoid the hazard, many modern combustion appliances utilize "direct venting" which draws combustion air directly from outdoors, instead of from the indoor environment. Design of air flow in rooms The air in a room can be supplied and removed in several ways, for example via ceiling ventilation, cross ventilation, floor ventilation or displacement ventilation. Furthermore, the air can be circulated in the room using vortexes which can be initiated in various ways: Ventilation rates for indoor air quality The ventilation rate, for commercial, industrial, and institutional (CII) buildings, is normally expressed by the volumetric flow rate of outdoor air, introduced to the building. The typical units used are cubic feet per minute (CFM) in the imperial system, or liters per second (L/s) in the metric system (even though cubic meter per second is the preferred unit for volumetric flow rate in the SI system of units). The ventilation rate can also be expressed on a per person or per unit floor area basis, such as CFM/p or CFM/ft², or as air changes per hour (ACH). Standards for residential buildings For residential buildings, which mostly rely on infiltration for meeting their ventilation needs, a common ventilation rate measure is the air change rate (or air changes per hour): the hourly ventilation rate divided by the volume of the space (I or ACH; units of 1/h). During the winter, ACH may range from 0.50 to 0.41 in a tightly air-sealed house to 1.11 to 1.47 in a loosely air-sealed house. 
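A small worked example of the air-change-rate definition above, together with the per-floor-area plus per-person form of the residential rate quoted in the next paragraph. The house dimensions, occupancy, and airflow values are illustrative only, not taken from any standard.

```python
def air_changes_per_hour(flow_cfm, volume_ft3):
    """ACH = hourly ventilation rate divided by the volume of the space.

    flow_cfm: outdoor airflow in cubic feet per minute.
    volume_ft3: volume of the space in cubic feet.
    """
    return flow_cfm * 60.0 / volume_ft3


def residential_rate_cfm(floor_area_ft2, occupants):
    """Per-area plus per-person outdoor air rate, in the form used by the
    residential standard discussed in the next paragraph
    (3 CFM per 100 ft^2 of floor area plus 7.5 CFM per person)."""
    return 3.0 * floor_area_ft2 / 100.0 + 7.5 * occupants


# Illustrative numbers: a 2,000 ft^2, 4-occupant house with 8 ft ceilings.
flow = residential_rate_cfm(2000.0, 4)                 # 60 + 30 = 90 CFM
print(flow, air_changes_per_hour(flow, 2000.0 * 8.0))  # 90 CFM, about 0.34 ACH
```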
ASHRAE now recommends ventilation rates dependent upon floor area, as a revision to the 62-2001 standard, in which the minimum ACH was 0.35, but no less than 15 CFM/person (7.1 L/s/person). As of 2003, the standard has been changed to 3 CFM/100 sq. ft. (15 L/s/100 sq. m.) plus 7.5 CFM/person (3.5 L/s/person). Standards for commercial buildings Ventilation rate procedure The Ventilation Rate Procedure is a rate-based procedure defined in the standard; it prescribes the rate at which ventilation air must be delivered to a space and various means to condition that air. Air quality is assessed (through CO2 measurement) and ventilation rates are mathematically derived using constants. The Indoor Air Quality Procedure uses one or more guidelines for the specification of acceptable concentrations of certain contaminants in indoor air but does not prescribe ventilation rates or air treatment methods. This addresses both quantitative and subjective evaluations and is based on the Ventilation Rate Procedure. It also accounts for potential contaminants that may have no measured limits, or for which no limits are set (such as formaldehyde off-gassing from carpet and furniture). Natural ventilation Natural ventilation harnesses naturally available forces to supply and remove air in an enclosed space. Poor ventilation in rooms has been identified as significantly increasing localized moldy smells in specific places of the room, including room corners. There are three types of natural ventilation occurring in buildings: wind-driven ventilation, pressure-driven flows, and stack ventilation. The pressures generated by 'the stack effect' rely upon the buoyancy of heated or rising air. Wind-driven ventilation relies upon the force of the prevailing wind to pull and push air through the enclosed space as well as through breaches in the building's envelope. Almost all historic buildings were ventilated naturally. The technique was generally abandoned in larger US buildings during the late 20th century as the use of air conditioning became more widespread. However, with the advent of advanced Building Performance Simulation (BPS) software, improved Building Automation Systems (BAS), Leadership in Energy and Environmental Design (LEED) design requirements, and improved window manufacturing techniques, natural ventilation has made a resurgence in commercial buildings both globally and throughout the US. The benefits of natural ventilation include: Improved indoor air quality (IAQ) Energy savings Reduction of greenhouse gas emissions Occupant control Reduction in occupant illness associated with sick building syndrome Increased worker productivity Techniques and architectural features used to ventilate buildings and structures naturally include, but are not limited to: Operable windows Clerestory windows and vented skylights Lev/convection doors Night purge ventilation Building orientation Wind capture façades Airborne diseases Natural ventilation is a key factor in reducing the spread of airborne illnesses such as tuberculosis, the common cold, influenza, meningitis or COVID-19. Opening doors and windows is a good way to maximize natural ventilation, which would make the risk of airborne contagion much lower than with costly and maintenance-requiring mechanical systems. Old-fashioned clinical areas with high ceilings and large windows provide the greatest protection.
Natural ventilation costs little and is maintenance-free, and is particularly suited to limited-resource settings and tropical climates, where the burden of TB and institutional TB transmission is highest. In settings where respiratory isolation is difficult and climate permits, windows and doors should be opened to reduce the risk of airborne contagion. Natural ventilation is not practical in much of the infrastructure, however, because of climate. This means that the facilities need to have effective mechanical ventilation systems and/or use ceiling-level UV or far-UV ventilation systems. Ventilation is measured in terms of air changes per hour (ACH). The CDC recommends that all spaces have a minimum of 5 ACH. For hospital rooms with airborne contagions, the CDC recommends a minimum of 12 ACH. Challenges in facility ventilation are public unawareness, ineffective government oversight, poor building codes that are based on comfort levels, poor system operations, poor maintenance, and lack of transparency. Pressure, both political and economic, to improve energy conservation has led to decreased ventilation rates. Heating, ventilation, and air conditioning rates have dropped since the energy crisis in the 1970s and the banning of cigarette smoking in the 1980s and 1990s. Mechanical ventilation Mechanical ventilation of buildings and structures can be achieved by the use of the following techniques: Whole-house ventilation Mixing ventilation Displacement ventilation Dedicated subaerial air supply Demand-controlled ventilation (DCV) Demand-controlled ventilation (DCV, also known as Demand Control Ventilation) makes it possible to maintain air quality while conserving energy. ASHRAE has determined that "It is consistent with the ventilation rate procedure that demand control be permitted for use to reduce the total outdoor air supply during periods of less occupancy." In a DCV system, CO2 sensors control the amount of ventilation. During peak occupancy, CO2 levels rise, and the system adjusts to deliver the same amount of outdoor air as would be used by the ventilation-rate procedure. However, when spaces are less occupied, CO2 levels fall, and the system reduces ventilation to conserve energy. DCV is a well-established practice, and is required in high occupancy spaces by building energy standards such as ASHRAE 90.1. Personalized ventilation Personalized ventilation is an air distribution strategy that allows individuals to control the amount of ventilation received. The approach delivers fresh air more directly to the breathing zone and aims to improve the air quality of inhaled air. Personalized ventilation provides much higher ventilation effectiveness than conventional mixing ventilation systems by displacing pollution from the breathing zone with far less air volume. Beyond improved air quality benefits, the strategy can also improve occupants' thermal comfort, perceived air quality, and overall satisfaction with the indoor environment. Individuals' preferences for temperature and air movement are not equal, and so traditional approaches to homogeneous environmental control have failed to achieve high occupant satisfaction. Techniques such as personalized ventilation facilitate control of a more diverse thermal environment that can improve thermal satisfaction for most occupants. 
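As a rough illustration of the demand-controlled ventilation strategy described above, the sketch below scales outdoor-air delivery between a minimum and a design maximum according to a measured CO2 concentration. The setpoints, flow rates, and function name are assumptions made for this example, not values taken from ASHRAE 90.1 or any other standard.

```python
# Minimal sketch of a demand-controlled ventilation (DCV) rule: outdoor-air flow is
# interpolated between a base rate and a design rate as indoor CO2 rises between two
# illustrative setpoints. All numbers here are assumptions for the example.

def dcv_outdoor_air(co2_ppm: float,
                    low_ppm: float = 600.0,     # below this, occupancy is assumed low
                    high_ppm: float = 1000.0,   # at or above this, deliver the design rate
                    min_flow_ls: float = 30.0,  # base outdoor-air flow, L/s
                    design_flow_ls: float = 200.0) -> float:
    """Return the outdoor-air flow (L/s) for a measured CO2 concentration (ppm)."""
    if co2_ppm <= low_ppm:
        return min_flow_ls
    if co2_ppm >= high_ppm:
        return design_flow_ls
    fraction = (co2_ppm - low_ppm) / (high_ppm - low_ppm)
    return min_flow_ls + fraction * (design_flow_ls - min_flow_ls)

for reading in (450, 700, 800, 1100):
    print(reading, "ppm ->", round(dcv_outdoor_air(reading), 1), "L/s")
```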
Local exhaust ventilation Local exhaust ventilation addresses the issue of avoiding the contamination of indoor air by specific high-emission sources by capturing airborne contaminants before they are spread into the environment. This can include water vapor control, lavatory effluent control, solvent vapors from industrial processes, and dust from wood- and metal-working machinery. Air can be exhausted through pressurized hoods or by the use of fans pressurizing a specific area. A local exhaust system is composed of five basic parts: A hood that captures the contaminant at its source Ducts for transporting the air An air-cleaning device that removes/minimizes the contaminant A fan that moves the air through the system An exhaust stack through which the contaminated air is discharged In the UK, the use of LEV systems has regulations set out by the Health and Safety Executive (HSE) which are referred to as the Control of Substances Hazardous to Health (CoSHH). Under CoSHH, legislation is set to protect users of LEV systems by ensuring that all equipment is tested at least every fourteen months to ensure the LEV systems are performing adequately. All parts of the system must be visually inspected and thoroughly tested and where any parts are found to be defective, the inspector must issue a red label to identify the defective part and the issue. The owner of the LEV system must then have the defective parts repaired or replaced before the system can be used. Smart ventilation Smart ventilation is a process of continually adjusting the ventilation system in time, and optionally by location, to provide the desired IAQ benefits while minimizing energy consumption, utility bills, and other non-IAQ costs (such as thermal discomfort or noise). A smart ventilation system adjusts ventilation rates in time or by location in a building to be responsive to one or more of the following: occupancy, outdoor thermal and air quality conditions, electricity grid needs, direct sensing of contaminants, and the operation of other air-moving and air-cleaning systems. In addition, smart ventilation systems can provide information to building owners, occupants, and managers on operational energy consumption and indoor air quality, as well as a signal when systems need maintenance or repair. Being responsive to occupancy means that a smart ventilation system can adjust ventilation depending on demand, such as reducing ventilation if the building is unoccupied. Smart ventilation can time-shift ventilation to periods when a) indoor-outdoor temperature differences are smaller (and away from peak outdoor temperatures and humidity), b) when indoor-outdoor temperatures are appropriate for ventilative cooling, or c) when outdoor air quality is acceptable. Being responsive to electricity grid needs means providing flexibility to electricity demand (including direct signals from utilities) and integration with electric grid control strategies. Smart ventilation systems can have sensors to detect airflow, system pressures, or fan energy use in such a way that system failures can be detected and repaired, as well as when system components need maintenance, such as filter replacement. Ventilation and combustion Combustion (in a fireplace, gas heater, candle, oil lamp, etc.) consumes oxygen while producing carbon dioxide and other unhealthy gases and smoke, requiring ventilation air. An open chimney promotes infiltration (i.e. 
natural ventilation) because of the negative pressure change induced by the buoyant, warmer air leaving through the chimney. The warm air is typically replaced by heavier, cold air. Ventilation in a structure is also needed for removing water vapor produced by respiration, burning, and cooking, and for removing odors. If water vapor is permitted to accumulate, it may damage the structure, insulation, or finishes. When operating, an air conditioner usually removes excess moisture from the air. A dehumidifier may also be appropriate for removing airborne moisture. Calculation for acceptable ventilation rate Ventilation guidelines are based on the minimum ventilation rate required to maintain acceptable levels of effluents. Carbon dioxide is used as a reference point, as it is the gas of highest emission at a relatively constant value of 0.005 L/s. The mass balance equation is: Q = G/(Ci − Ca) Q = ventilation rate (L/s) G = CO2 generation rate Ci = acceptable indoor CO2 concentration Ca = ambient CO2 concentration Smoking and ventilation ASHRAE standard 62 states that air removed from an area with environmental tobacco smoke shall not be recirculated into ETS-free air. A space with ETS requires more ventilation to achieve similar perceived air quality to that of a non-smoking environment. The amount of ventilation in an ETS area is equal to the amount of an ETS-free area plus the amount V, where: V = DSD × VA × A/60E V = recommended extra flow rate in CFM (L/s) DSD = design smoking density (estimated number of cigarettes smoked per hour per unit area) VA = volume of ventilation air per cigarette for the room being designed (ft3/cig) E = contaminant removal effectiveness History Primitive ventilation systems were found at the Pločnik archeological site (belonging to the Vinča culture) in Serbia and were built into early copper smelting furnaces. The furnace, built on the outside of the workshop, featured earthen pipe-like air vents with hundreds of tiny holes in them and a prototype chimney to ensure air goes into the furnace to feed the fire and smoke comes out safely. Passive ventilation and passive cooling systems were widely written about around the Mediterranean by Classical times. Both sources of heat and sources of cooling (such as fountains and subterranean heat reservoirs) were used to drive air circulation, and buildings were designed to encourage or exclude drafts, according to climate and function. Public bathhouses were often particularly sophisticated in their heating and cooling. Icehouses are some millennia old, and were part of a well-developed ice industry by classical times. The development of forced ventilation was spurred by the common belief in the late 18th and early 19th century in the miasma theory of disease, where stagnant 'airs' were thought to spread illness. An early method of ventilation was the use of a ventilating fire near an air vent which would forcibly cause the air in the building to circulate. English engineer John Theophilus Desaguliers provided an early example of this when he installed ventilating fires in the air tubes on the roof of the House of Commons. Starting with the Covent Garden Theatre, gas burning chandeliers on the ceiling were often specially designed to perform a ventilating role. Mechanical systems A more sophisticated system involving the use of mechanical equipment to circulate the air was developed in the mid-19th century. 
A basic system of bellows was put in place to ventilate Newgate Prison and outlying buildings, by the engineer Stephen Hales in the mid-1700s. The problem with these early devices was that they required constant human labor to operate. David Boswell Reid was called to testify before a Parliamentary committee on proposed architectural designs for the new House of Commons, after the old one burned down in a fire in 1834. In January 1840 Reid was appointed by the committee for the House of Lords dealing with the construction of the replacement for the Houses of Parliament. The post was in the capacity of ventilation engineer, in effect; and with its creation there began a long series of quarrels between Reid and Charles Barry, the architect. Reid advocated the installation of a very advanced ventilation system in the new House. His design had air being drawn into an underground chamber, where it would undergo either heating or cooling. It would then ascend into the chamber above through thousands of small holes drilled into the floor, and would be extracted through the ceiling by a special ventilation fire within a great stack. Reid's reputation was made by his work in Westminster. He was commissioned for an air quality survey in 1837 by the Leeds and Selby Railway in their tunnel. The steam vessels built for the Niger expedition of 1841 were fitted with ventilation systems based on Reid's Westminster model. Air was dried, filtered and passed over charcoal. Reid's ventilation method was also applied more fully to St. George's Hall, Liverpool, where the architect, Harvey Lonsdale Elmes, requested that Reid should be involved in ventilation design. Reid considered this the only building in which his system was completely carried out. Fans With the advent of practical steam power, ceiling fans could finally be used for ventilation. Reid installed four steam-powered fans in the ceiling of St George's Hospital in Liverpool, so that the pressure produced by the fans would force the incoming air upward and through vents in the ceiling. Reid's pioneering work provides the basis for ventilation systems to this day. He was remembered as "Dr. Reid the ventilator" in the twenty-first century in discussions of energy efficiency by Lord Wade of Chorlton. History and development of ventilation rate standards Ventilating a space with fresh air aims to avoid "bad air". The study of what constitutes bad air dates back to the 1600s, when the scientist Mayow studied asphyxia of animals in confined bottles. The poisonous component of air was later identified as carbon dioxide (CO2) by Lavoisier in the very late 1700s, starting a debate as to the nature of "bad air" which humans perceive to be stuffy or unpleasant. Early hypotheses included excess concentrations of CO2 and oxygen depletion. However, by the late 1800s, scientists thought biological contamination, not oxygen or CO2, was the primary component of unacceptable indoor air. However, it was noted as early as 1872 that CO2 concentration closely correlates to perceived air quality. The first estimate of minimum ventilation rates was developed by Tredgold in 1836. This was followed by subsequent studies on the topic by Billings in 1886 and Flugge in 1905. The recommendations of Billings and Flugge were incorporated into numerous building codes from 1900 to the 1920s and published as an industry standard by ASHVE (the predecessor to ASHRAE) in 1914. The study continued into the varied effects of thermal comfort, oxygen, carbon dioxide, and biological contaminants. 
The research was conducted with human subjects in controlled test chambers. Two studies, published between 1909 and 1911, showed that carbon dioxide was not the offending component. Subjects remained satisfied in chambers with high levels of CO2, so long as the chamber remained cool. (Subsequently, it has been determined that CO2 is, in fact, harmful at concentrations over 50,000 ppm.) ASHVE began a robust research effort in 1919. By 1935, ASHVE-funded research conducted by Lemberg, Brandt, and Morse – again using human subjects in test chambers – suggested the primary component of "bad air" was an odor, perceived by the human olfactory nerves. Human response to odor was found to be logarithmic to contaminant concentrations, and related to temperature. At lower, more comfortable temperatures, lower ventilation rates were satisfactory. A 1936 human test chamber study by Yaglou, Riley, and Coggins culminated much of this effort, considering odor, room volume, occupant age, cooling equipment effects, and recirculated air implications, which guided ventilation rates. The Yaglou research has been validated, and adopted into industry standards, beginning with the ASA code in 1946. From this research base, ASHRAE (having replaced ASHVE) developed space-by-space recommendations, and published them as ASHRAE Standard 62-1975: Ventilation for acceptable indoor air quality. As more architecture incorporated mechanical ventilation, the cost of outdoor air ventilation came under some scrutiny. In response to the 1973 oil crisis and conservation concerns, ASHRAE Standards 62-73 and 62-81 reduced required ventilation from 10 CFM (4.76 L/s) per person to 5 CFM (2.37 L/s) per person. In cold, warm, humid, or dusty climates, it is preferable to minimize ventilation with outdoor air to reduce energy use, cost, and filtration requirements. This critique (e.g. Tiller) led ASHRAE to reduce outdoor ventilation rates in 1981, particularly in non-smoking areas. However, subsequent research by Fanger, W. Cain, and Janssen validated the Yaglou model. The reduced ventilation rates were found to be a contributing factor to sick building syndrome. The 1989 ASHRAE standard (Standard 62-89) states that appropriate ventilation guidelines are 20 CFM (9.2 L/s) per person in an office building, and 15 CFM (7.1 L/s) per person for schools, while the 2004 Standard 62.1-2004 has lower recommendations again (see tables below). ANSI/ASHRAE (Standard 62-89) speculated that "comfort (odor) criteria are likely to be satisfied if the ventilation rate is set so that 1,000 ppm CO2 is not exceeded", while OSHA has set a limit of 5,000 ppm over 8 hours. ASHRAE continues to publish space-by-space ventilation rate recommendations, which are decided by a consensus committee of industry experts. The modern descendants of ASHRAE standard 62-1975 are ASHRAE Standard 62.1, for non-residential spaces, and ASHRAE 62.2 for residences. In 2004, the calculation method was revised to include both an occupant-based contamination component and an area-based contamination component. These two components are additive, to arrive at an overall ventilation rate. The change was made to recognize that densely populated areas were sometimes overventilated (leading to higher energy and cost) using a per-person methodology. 
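The additive occupant-plus-area calculation described above can be illustrated with a short sketch. The per-person and per-area rates used here are placeholders of the kind tabulated in Standard 62.1 for specific occupancy categories; actual design values should be taken from the standard itself.

```python
# Minimal sketch of the additive breathing-zone outdoor-air calculation introduced in
# ASHRAE 62.1-2004: Vbz = Rp * Pz + Ra * Az. The rates Rp and Ra below are placeholder
# values for illustration; real values depend on the occupancy category in the standard.

def breathing_zone_outdoor_air(people: int, floor_area_ft2: float,
                               rp_cfm_per_person: float, ra_cfm_per_ft2: float) -> float:
    """Occupant component plus area component, in CFM."""
    return rp_cfm_per_person * people + ra_cfm_per_ft2 * floor_area_ft2

# Hypothetical zone: 20 occupants in 2,000 square feet.
vbz = breathing_zone_outdoor_air(people=20, floor_area_ft2=2000.0,
                                 rp_cfm_per_person=5.0, ra_cfm_per_ft2=0.06)
print(f"Breathing-zone outdoor airflow: {vbz:.0f} CFM")   # 220 CFM
print(f"Per person: {vbz / 20:.0f} CFM/person")           # 11 CFM/person
```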
Occupant Based Ventilation Rates, ANSI/ASHRAE Standard 62.1-2004 Area-based ventilation rates, ANSI/ASHRAE Standard 62.1-2004 The addition of occupant- and area-based ventilation rates found in the tables above often results in significantly reduced rates compared to the former standard. This is compensated in other sections of the standard, which require that this minimum amount of air is delivered to the breathing zone of the individual occupant at all times. The total outdoor air intake of the ventilation system (in multiple-zone variable air volume (VAV) systems) might therefore be similar to the airflow required by the 1989 standard. From 1999 to 2010, there was considerable development of the application protocol for ventilation rates. These advancements address occupant- and process-based ventilation rates, room ventilation effectiveness, and system ventilation effectiveness. Problems In hot, humid climates, unconditioned ventilation air can deliver approximately 260 milliliters of water per day for each cubic meter per hour (m3/h) of outdoor air (or about one pound of water per day for each cubic foot per minute of outdoor air), as an annual average. This is a great deal of moisture and can create serious indoor moisture and mold problems. For example, given a 150 m2 building with an airflow of 180 m3/h, this could result in about 47 liters of water accumulated per day. Ventilation efficiency is determined by design and layout, and is dependent upon the placement and proximity of diffusers and return air outlets. If they are located closely together, supply air may mix with stale air, decreasing the efficiency of the HVAC system and creating air quality problems. System imbalances occur when components of the HVAC system are improperly adjusted or installed, and can create pressure differences (too much circulating air creating a draft, or too little circulating air creating stagnancy). Cross-contamination occurs when pressure differences arise, forcing potentially contaminated air from one zone to an uncontaminated zone. This often involves undesired odors or VOCs. Re-entry of exhaust air occurs when exhaust outlets and fresh air intakes are too close together, when prevailing winds change exhaust patterns, or when infiltration occurs between intake and exhaust air flows. Entrainment of contaminated outdoor air through intake flows will result in indoor air contamination. There are a variety of contaminated air sources, ranging from industrial effluent to VOCs given off by nearby construction work. A recent study revealed that in urban European buildings equipped with ventilation systems lacking outdoor air filtration, the exposure to outdoor-originating pollutants indoors resulted in more Disability-Adjusted Life Years (DALYs) than exposure to indoor-emitted pollutants. 
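Two of the back-of-the-envelope calculations mentioned in this article, the CO2 mass balance from the "Calculation for acceptable ventilation rate" section and the moisture-load estimate just above, can be reproduced in a few lines. The generation rate and the moisture factor are the article's own figures; the CO2 concentrations are hypothetical inputs chosen only to show the arithmetic.

```python
# Two small checks of figures quoted in the article. Inputs marked "hypothetical" are
# assumptions for illustration only.

# 1) CO2 mass balance: Q = G / (Ci - Ca), with concentrations given as ppm by volume.
def required_ventilation_l_per_s(g_l_per_s: float, ci_ppm: float, ca_ppm: float) -> float:
    return g_l_per_s / ((ci_ppm - ca_ppm) * 1e-6)

q = required_ventilation_l_per_s(g_l_per_s=0.005,  # CO2 generation rate from the article
                                 ci_ppm=1000.0,    # hypothetical acceptable indoor level
                                 ca_ppm=400.0)     # hypothetical ambient level
print(f"Required outdoor air: {q:.1f} L/s per occupant")        # about 8.3 L/s

# 2) Moisture load: ~260 mL of water per day for each m^3/h of outdoor air.
airflow_m3_per_h = 180.0
litres_per_day = airflow_m3_per_h * 0.260
print(f"Moisture delivered: about {litres_per_day:.0f} L/day")  # about 47 L/day
```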
See also Architectural engineering Biological safety Cleanroom Environmental tobacco smoke Fume hood Head-end power Heating, ventilation, and air conditioning Heat recovery ventilation Mechanical engineering Room air distribution Sick building syndrome Siheyuan Solar chimney Tulou Windcatcher References External links Air Infiltration & Ventilation Centre (AIVC) Publications from the Air Infiltration & Ventilation Centre (AIVC) International Energy Agency (IEA) Energy in Buildings and Communities Programme (EBC) Publications from the International Energy Agency (IEA) Energy in Buildings and Communities Programme (EBC) ventilation-related research projects-annexes: EBC Annex 9 Minimum Ventilation Rates EBC Annex 18 Demand Controlled Ventilation Systems EBC Annex 26 Energy Efficient Ventilation of Large Enclosures EBC Annex 27 Evaluation and Demonstration of Domestic Ventilation Systems EBC Annex 35 Control Strategies for Hybrid Ventilation in New and Retrofitted Office Buildings (HYBVENT) EBC Annex 62 Ventilative Cooling International Society of Indoor Air Quality and Climate Indoor Air Journal Indoor Air Conference Proceedings American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) ASHRAE Standard 62.1 – Ventilation for Acceptable Indoor Air Quality ASHRAE Standard 62.2 – Ventilation for Acceptable Indoor Air Quality in Residential Buildings Heating, ventilation, and air conditioning Building biology Fluid dynamics Indoor air pollution
Ventilation (architecture)
Chemistry,Engineering
6,179
24,457,748
https://en.wikipedia.org/wiki/Central%20Sava%20Statistical%20Region
The Central Sava Statistical Region () is a statistical region in Slovenia. This statistical region in the Sava Hills is the smallest region in the country in terms of both area and population. In early 2010 almost 41,700 people lived on 264 km2, meaning that together with the Central Slovenia Statistical Region it is the most densely populated statistical region. The natural and geographic features of this region create conditions for industrial activities, and more than a third of gross value added is still generated by manufacturing, mining, and other industry. In 2013, the region once again recorded the highest negative annual population growth rate (−11.9‰), which was mainly a result of migration to other statistical regions. Among all statistical regions in 2013, this region had the highest negative net migration between regions; namely, −9.5. This region also stands out by the age of mothers at childbirth. In 2013 first-time mothers in the region were on average 28.5 years old, whereas first-time mothers in the Central Slovenia Statistical Region were on average 1 year older. In the same year, the number of unemployed persons increased further. The registered unemployment rate was among the highest in the country (16.6%). In comparison with other regions, this is 7 percentage points more than in the region with the lowest registered unemployment rate, Upper Carniola, and almost 1 percentage point less than in the region with the highest unemployment rate, the Mura Statistical Region. According to the labour migration index, this is the most residential statistical region. In 2013, 60% of people in the region worked in their region of residence, and 40% worked in another region. Municipalities The Central Sava Statistical Region comprises the following four municipalities: Hrastnik Litija Trbovlje Zagorje ob Savi Demographics The population in 2020 was 41,657. It has a total area of 264 km2. Economy Employment structure: 51.2% services, 46.9% industry, 1.9% agriculture. Tourism It attracts very few tourists, with only 0.1% of the total number of tourists in Slovenia. Transportation Length of motorways: 0.6 km Length of other roads: 751 km Sources Slovenian regions in figures 2014 Statistical regions of Slovenia
Central Sava Statistical Region
Mathematics
454
57,117,046
https://en.wikipedia.org/wiki/NGC%204918
NGC 4918 is a spiral galaxy in the constellation Virgo. The object was discovered in 1886 by the American astronomer Francis Preserved Leavenworth. References Notes Unbarred spiral galaxies Virgo (constellation) 4918 044934
NGC 4918
Astronomy
48
51,388,505
https://en.wikipedia.org/wiki/Community%20respiration
Community respiration (CR) refers to the total amount of carbon dioxide that is produced by individual organisms in a given community, originating from the cellular respiration of organic material. CR is an important ecological index as it dictates the amount of production available to higher trophic levels and influences biogeochemical cycles. CR is often used as a proxy for the biological activity of the microbial community. Overview The process of cellular respiration is foundational to the ecological index of community respiration (CR). Cellular respiration can be used to explain relationships between heterotrophic organisms and the autotrophic ones they consume. The process of cellular respiration consists of a series of metabolic reactions that use material produced by autotrophic organisms, such as oxygen (O2) and glucose (C6H12O6), to convert its chemical energy into adenosine triphosphate (ATP), which can then be used in other metabolic reactions to power the organism, creating carbon dioxide (CO2) and water (H2O) as by-products. The overall process of cellular respiration can be summarized as C6H12O6 + 6O2 → 6CO2 + 6H2O + ATP. The ATP created during cellular respiration is absolutely necessary for a living being to function, as it is the "energy currency" of the cell and none of the other metabolic functions could be sustained without it. The process of cellular respiration is an essential component of the carbon cycle, which tracks the recycling of carbon through the earth and atmosphere in various compounds such as CO2, H2CO3, HCO3−, C6H12O6, and CH4, to name a few. The concentration of carbon dioxide in a given area can act as a proxy indicator for the metabolic function of an individual, or of individuals, in that area. Since the process of cellular respiration consumes oxygen and produces carbon dioxide, the amount of carbon dioxide can be used to infer the amount of oxygen used in the environment specifically for metabolic requirements. Since cellular respiration has been studied in depth across all taxa, research surrounding the process can have many further biological implications. Community respiration is a good example, but data on it remain limited, and more research is needed to further elucidate its usefulness as an ecological index. Significance Community respiration (CR) is an important ecological index used primarily in marine and freshwater aquatic ecosystems and is often tightly coupled with gross primary production (GPP). Since CR is a measure of the total amount of CO2 that is produced by all the organisms in a community solely from cellular respiration, it can be a useful tool for finding the amount of O2 that is used directly to fuel cellular respiration. These quantities can be isolated by using the electron transport system (ETS) as a respiratory index and measuring it, which indicates the rate of cellular respiration. CR can be used in conjunction with other ecological indexes such as dissolved oxygen concentration (DO), gross primary production (GPP), nutrient availability, light availability, and temperature. Using CR as a measure of the total amount of carbon dioxide that is produced by a community aids our understanding of an ecosystem's biogeochemical cycles. CR is also useful in understanding an ecosystem's net balance and trophic levels. Comparing CR with dissolved oxygen, another ecological index, is one of the more useful applications of community respiration, and global warming is of particular concern to scientists in this context. 
As ocean temperatures rise, levels of dissolved oxygen drop because of the oxygen loss that accompanies warmer water. GPP and CR are expected to diverge significantly because of their differing sensitivities to global warming. See also Microbial ecology References Ecology
Community respiration
Biology
754
865,142
https://en.wikipedia.org/wiki/Hydrozincite
Hydrozincite, also known as zinc bloom or marionite, is a white carbonate mineral consisting of Zn5(CO3)2(OH)6. It is usually found in massive rather than crystalline form. It occurs as an oxidation product of zinc ores and as post mine incrustations. It occurs associated with smithsonite, hemimorphite, willemite, cerussite, aurichalcite, calcite and limonite. It was first described in 1853 for an occurrence in Bad Bleiberg, Carinthia, Austria and named for its chemical content. References Mineral galleries data External links Carbonate minerals Luminescent minerals Minerals described in 1853 Minerals in space group 12 Monoclinic minerals Zinc minerals
Hydrozincite
Chemistry
154
46,349,305
https://en.wikipedia.org/wiki/Protein%20methylation
Protein methylation is a type of post-translational modification featuring the addition of methyl groups to proteins. It can occur on the nitrogen-containing side-chains of arginine and lysine, but also at the amino- and carboxy-termini of a number of different proteins. In biology, methyltransferases catalyze the methylation process, activated primarily by S-adenosylmethionine. Protein methylation has been most studied in histones, where the transfer of methyl groups from S-adenosyl methionine is catalyzed by histone methyltransferases. Histones that are methylated on certain residues can act epigenetically to repress or activate gene expression. Methylation by substrate Multiple sites of proteins can be methylated. For some types of methylation, such as N-terminal methylation and prenylcysteine methylation, additional processing is required, whereas other types of methylation such as arginine methylation and lysine methylation do not require pre-processing. Arginine Arginine can be methylated once (monomethylated arginine) or twice (dimethylated arginine). Methylation of arginine residues is catalyzed by three different classes of protein arginine methyltransferases (PRMTs): Type I PRMTs (PRMT1, PRMT2, PRMT3, PRMT4, PRMT6, and PRMT8) attach two methyl groups to a single terminal nitrogen atom, producing asymmetric dimethylarginine (N G,N G-dimethylarginine). In contrast, type II PRMTs (PRMT5 and PRMT9) catalyze the formation of symmetric dimethylarginine with one methyl group on each terminal nitrogen (symmetric N G,N' G-dimethylarginine). Type I and II PRMTs both generate N G-monomethylarginine intermediates; PRMT7, the only known type III PRMT, produces only monomethylated arginine. Arginine-methylation usually occurs at glycine and arginine-rich regions referred to as "GAR motifs", which is likely due to the enhanced flexibility of these regions that enables insertion of arginine into the PRMT active site. Nevertheless, PRMTs with non-GAR consensus sequences exist. PRMTs are present in the nucleus as well as in the cytoplasm. In interactions of proteins with nucleic acids, arginine residues are important hydrogen bond donors for the phosphate backbone — many arginine-methylated proteins have been found to interact with DNA or RNA. Enzymes that facilitate histone acetylation as well as histones themselves can be arginine methylated. Arginine methylation affects the interactions between proteins and has been implicated in a variety of cellular processes, including protein trafficking, signal transduction and transcriptional regulation. In epigenetics, arginine methylation of histones H3 and H4 is associated with a more accessible chromatin structure and thus higher levels of transcription. The existence of arginine demethylases that could reverse arginine methylation is controversial. Lysine Lysine can be methylated once, twice, or three times by lysine methyltransferases (PKMTs). Most lysine methyltransferases contain an evolutionarily conserved SET domain, which possesses S-adenosylmethionine-dependent methyltransferase activity, but are structurally distinct from other S-adenosylmethionine binding proteins. Lysine methylation plays a central part in how histones interact with proteins. Lysine methylation can be reverted by lysine demethylases (PKDMs). Different SET domain-containing proteins possess distinct substrate specificities. For example, SET1, SET7 and MLL methylate lysine 4 of histone H3, whereas Suv39h1, ESET and G9a specifically methylate lysine 9 of histone H3. 
Methylation at lysine 4 and methylation at lysine 9 are mutually exclusive and the epigenetic consequences of site-specific methylation are diametrically opposed: Methylation at lysine 4 correlates with an active state of transcription, whereas methylation at lysine 9 is associated with transcriptional repression and heterochromatin. Other lysine residues on histone H3 and histone H4 are also important sites of methylation by specific SET domain-containing enzymes. Although histones are the prime target of lysine methyltransferases, other cellular proteins carry N-methyllysine residues, including elongation factor 1A and the calcium sensing protein calmodulin. N-terminal methylation Many eukaryotic proteins are post-translationally modified on their N-terminus. A common form of N-terminal modification is N-terminal methylation (Nt-methylation) by N-terminal methyltransferases (NTMTs). Proteins containing the consensus motif H2N-X-Pro-Lys- (where X can be Ala, Pro or Ser) after removal of the initiator methionine (iMet) can be subject to N-terminal α-amino-methylation. Monomethylation may have slight effects on α-amino nitrogen nucleophilicity and basicity, whereas trimethylation (or dimethylation in the case of proline) will result in abolition of nucleophilicity and a permanent positive charge on the N-terminal amino group. Although from a biochemical point of view demethylation of amines is possible, Nt-methylation is considered irreversible as no N-terminal demethylase has been described to date. Histone variants CENP-A and CENP-B have been found to be Nt-methylated in vivo. Prenylcysteine Eukaryotic proteins with C-termini that end in a CAAX motif are often subjected to a series of posttranslational modifications. The CAAX-tail processing takes place in three steps: First, a prenyl lipid anchor is attached to the cysteine through a thioether linkage. Then endoproteolysis occurs to remove the last three amino acids of the protein to expose the prenylcysteine α-COOH group. Finally, the exposed prenylcysteine group is methylated. The importance of this modification can be seen in targeted disruption of the methyltransferase for mouse CAAX proteins, where loss of isoprenylcysteine carboxyl methyltransferase resulted in mid-gestation lethality. The biological function of prenylcysteine methylation is to facilitate the targeting of CAAX proteins to membrane surfaces within cells. Prenylcysteine can be demethylated and this reverse reaction is catalyzed by isoprenylcysteine carboxyl methylesterases. CAAX box-containing proteins that are prenylcysteine methylated include Ras, GTP-binding proteins, nuclear lamins and certain protein kinases. Many of these proteins participate in cell signaling, and they utilize prenylcysteine methylation to concentrate them on the cytosolic surface of the plasma membrane where they are functional. Methylations on the C-terminus can increase a protein's chemical repertoire and are known to have a major effect on the functions of a protein. Protein phosphatase 2 In eukaryotic cells, phosphatases catalyze the removal of phosphate groups from tyrosine, serine and threonine phosphoproteins. The catalytic subunit of the major serine/threonine phosphatases, like Protein phosphatase 2, is covalently modified by the reversible methylation of its C-terminus to form a leucine carboxy methyl ester. Unlike CAAX motif methylation, no C-terminal processing is required to facilitate methylation. 
This C-terminal methylation event regulates the recruitment of regulatory proteins into complexes through the stimulation of protein–protein interactions, thus indirectly regulating the activity of the serine-threonine phosphatase complex. Methylation is catalyzed by a unique protein phosphatase methyltransferase. The methyl group is removed by a specific protein phosphatase methylesterase. These two opposed enzymes make serine-threonine phosphatase methylation a dynamic process in response to stimuli. L-isoaspartyl Damaged proteins accumulate isoaspartyl residues, which cause protein instability, loss of biological activity and stimulation of autoimmune responses. The spontaneous age-dependent degradation of L-aspartyl residues results in the formation of a succinimidyl intermediate, a succinimide radical. This is spontaneously hydrolyzed either back to L-aspartyl or, in a more favorable reaction, to abnormal L-isoaspartyl. A methyltransferase-dependent pathway exists for the conversion of L-isoaspartyl back to L-aspartyl. To prevent the accumulation of L-isoaspartyl, this residue is methylated by the protein L-isoaspartyl methyltransferase, which catalyzes the formation of a methyl ester, which in turn is converted back to a succinimidyl intermediate. Loss and gain of function mutations have unmasked the biological importance of the L-isoaspartyl O-methyltransferase in age-related processes: Mice lacking the enzyme die young of fatal epilepsy, whereas flies engineered to over-express it have an increase in life span of over 30%. Physical effects A common theme with methylated proteins, as with phosphorylated proteins, is the role this modification plays in the regulation of protein–protein interactions. The arginine methylation of proteins can either inhibit or promote protein–protein interactions depending on the type of methylation. The asymmetric dimethylation of arginine residues in close proximity to proline-rich motifs can inhibit the binding to SH3 domains. The opposite effect is seen with interactions between the survival of motor neurons protein and the snRNP proteins SmD1, SmD3 and SmB/B', where binding is promoted by symmetric dimethylation of arginine residues in the snRNP proteins. A well-characterized example of a methylation-dependent protein–protein interaction is related to the selective methylation of lysine 9 by SUV39H1 on the N-terminal tail of histone H3. Di- and tri-methylation of this lysine residue facilitates the binding of heterochromatin protein 1 (HP1). Because HP1 and Suv39h1 interact, it is thought that the binding of HP1 to histone H3 is maintained and even allowed to spread along the chromatin. The HP1 protein harbors a chromodomain which is responsible for the methyl-dependent interaction between it and lysine 9 of histone H3. It is likely that additional chromodomain-containing proteins will bind to the same site as HP1, and to other lysine-methylated positions on histones H3 and H4. C-terminal protein methylation regulates the assembly of protein phosphatase. Methylation of the protein phosphatase 2A catalytic subunit enhances the binding of the regulatory B subunit and facilitates holoenzyme assembly. 
Protein methylation
Chemistry
2,471
2,035,678
https://en.wikipedia.org/wiki/Gibbs%27%20inequality
In information theory, Gibbs' inequality is a statement about the information entropy of a discrete probability distribution. Several other bounds on the entropy of probability distributions are derived from Gibbs' inequality, including Fano's inequality. It was first presented by J. Willard Gibbs in the 19th century. Gibbs' inequality Suppose that P = {p_1, ..., p_n} and Q = {q_1, ..., q_n} are discrete probability distributions. Then −∑_{i=1}^n p_i log2 p_i ≤ −∑_{i=1}^n p_i log2 q_i, with equality if and only if p_i = q_i for i = 1, ..., n. Put in words, the information entropy of a distribution P is less than or equal to its cross entropy with any other distribution Q. The difference between the two quantities is the Kullback–Leibler divergence or relative entropy, so the inequality can also be written: D_KL(P‖Q) = ∑_{i=1}^n p_i log2 (p_i / q_i) ≥ 0. Note that the use of base-2 logarithms is optional, and allows one to refer to the quantity on each side of the inequality as an "average surprisal" measured in bits. Proof For simplicity, we prove the statement using the natural logarithm, denoted by ln, since log2 a = ln a / ln 2, so the particular logarithm base that we choose only scales the relationship by the factor 1 / ln 2. Let I denote the set of all i for which p_i is non-zero. Then, since ln x ≤ x − 1 for all x > 0, with equality if and only if x = 1, we have: −∑_{i∈I} p_i ln (q_i / p_i) ≥ −∑_{i∈I} p_i (q_i / p_i − 1) = −∑_{i∈I} q_i + ∑_{i∈I} p_i = −∑_{i∈I} q_i + 1 ≥ 0. The last inequality is a consequence of the p_i and q_i being part of a probability distribution. Specifically, the sum of all non-zero values is 1. Some non-zero q_i, however, may have been excluded since the choice of indices is conditioned upon the p_i being non-zero. Therefore, the sum of the q_i may be less than 1. So far, over the index set I, we have: −∑_{i∈I} p_i ln (q_i / p_i) ≥ 0, or equivalently −∑_{i∈I} p_i ln q_i ≥ −∑_{i∈I} p_i ln p_i. Both sums can be extended to all i = 1, ..., n, i.e. including p_i = 0, by recalling that the expression p ln p tends to 0 as p tends to 0, and −p ln q tends to +∞ as q tends to 0 for p > 0. We arrive at −∑_{i=1}^n p_i ln q_i ≥ −∑_{i=1}^n p_i ln p_i. For equality to hold, we require q_i / p_i = 1 for all i ∈ I so that the equality ln x = x − 1 holds, and ∑_{i∈I} q_i = 1, which means q_i = 0 if p_i = 0, that is, if i ∉ I. This can happen if and only if p_i = q_i for i = 1, ..., n. Alternative proofs The result can alternatively be proved using Jensen's inequality, the log sum inequality, or the fact that the Kullback-Leibler divergence is a form of Bregman divergence. Proof by Jensen's inequality Because log is a concave function, we have that: ∑_i p_i log (q_i / p_i) ≤ log ∑_i p_i (q_i / p_i) = log ∑_i q_i = 0, where the first inequality is due to Jensen's inequality, and Q being a probability distribution implies the last equality. Furthermore, since log is strictly concave, by the equality condition of Jensen's inequality we get equality when q_1 / p_1 = q_2 / p_2 = ... = q_n / p_n and ∑_i q_i = 1. Suppose that this ratio is σ, then we have that 1 = ∑_i q_i = ∑_i σ p_i = σ, where we use the fact that P and Q are probability distributions. Therefore, the equality happens when p_i = q_i. Proof by Bregman divergence Alternatively, it can be proved by noting that q − p + p ln (p / q) ≥ 0 for all p, q > 0, with equality holding iff p = q. Then, summing over the states, we have D_KL(P‖Q) = ∑_i (q_i − p_i + p_i ln (p_i / q_i)) = ∑_i p_i ln (p_i / q_i) ≥ 0, with equality holding iff P = Q. This is because the KL divergence is the Bregman divergence generated by the function x ↦ x ln x. Corollary The entropy of P is bounded by: H(p_1, ..., p_n) ≤ log2 n. The proof is trivial – simply set q_i = 1/n for all i. See also Information entropy Bregman divergence Log sum inequality References Information theory Coding theory Probabilistic inequalities Articles containing proofs
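As a quick numerical illustration of the inequality (not part of the original article), the sketch below compares entropy and cross entropy for an arbitrary pair of randomly generated distributions.

```python
# Numerical check of Gibbs' inequality: H(P) <= H(P, Q), i.e. the KL divergence is >= 0.
# The distributions are random examples; any valid P and Q would do.
import numpy as np

rng = np.random.default_rng(0)

def random_distribution(n: int) -> np.ndarray:
    x = rng.random(n)
    return x / x.sum()

p = random_distribution(6)
q = random_distribution(6)

entropy = -np.sum(p * np.log2(p))           # H(P)
cross_entropy = -np.sum(p * np.log2(q))     # H(P, Q)
kl = np.sum(p * np.log2(p / q))             # D_KL(P || Q)

print(f"H(P)     = {entropy:.4f} bits")
print(f"H(P, Q)  = {cross_entropy:.4f} bits")
print(f"KL(P||Q) = {kl:.4f} bits (always >= 0, zero only when P = Q)")
assert cross_entropy >= entropy - 1e-12
```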
Gibbs' inequality
Mathematics,Technology,Engineering
650
29,194,266
https://en.wikipedia.org/wiki/Fort%20du%20T%C3%A9l%C3%A9graphe
The Fort du Télégraphe, or Fort Berwick, is located in the Maurienne valley on the road to the Col du Galibier between Valloire and Saint-Michel-de-Maurienne, at the Col du Télégraphe, dominating the valley of the Arc. The location at an altitude of previously accommodated a telegraph of the Chappe system using articulating arms to send messages between Lyon and Milan, and after 1809, Venice. The fort has two entrances with drawbridges to allow access to different levels of the fort, with inclined ramps to allow easy movement of artillery pieces. When completed in 1884 after four years of construction, the fort was manned by 170 men, firing four artillery pieces at the main fort and four more at detached batteries. History The site was first occupied by Marshal Berwick in the early 18th century. The Fort du Télégraphe was completed between 1886 and 1890 as a part of the Séré de Rivières system of fortifications. It saw no action until 1940, when it fired on Italian forces with 155 mm guns. The fort was part of the "Second Position" (Deuxième Position), a backup to the main fortifications of the modern Alpine Line, the southwestern component of the Maginot Line. The fort was armed with six 155 mm and four 95 mm guns, manned by the 6th battery of the 164th Position Artillery Regiment (164e Régiment d'Artillerie de Position (RAP)). In 1944 the fort was used by the French Forces of the Interior (FFI) as an artillery position. The fort's peacetime barracks have been retained as high-altitude quarters for the 93rd Mountain Artillery Regiment (Regiment d'Artillerie de Montagne (RAM)) of the 27th Mountain Infantry Brigade. The Fort du Télégraphe is open for visitation during the summer months. The continuing military use has resulted in its preservation, much as the fort's advantageous position has resulted in its continued use as a communications post, albeit using microwaves in place of semaphores. References External links Fort du Télégraphe at fortiffsere.fr Fort du Télégraphe at savoie-fortifications Fort du Télégraphe at Les Sentinelles des Alpes Séré de Rivières system Fortified Sector of Savoy
Fort du Télégraphe
Engineering
463
62,296,023
https://en.wikipedia.org/wiki/Decamethylzirconocene%20dichloride
Decamethylzirconocene dichloride is an organozirconium compound with the formula Cp*2ZrCl2 (where Cp* is C5(CH3)5, derived from pentamethylcyclopentadiene). It is a pale yellow, moisture-sensitive solid that is soluble in nonpolar organic solvents. The complex has been the subject of extensive research. It is a precursor to many other complexes, including the dinitrogen complex [Cp*2Zr]2(N2)3. It is a precatalyst for the polymerization of ethylene and propylene. Further reading References Organozirconium compounds Metallocenes Chloro complexes Cyclopentadienyl complexes Zirconium(IV) compounds
Decamethylzirconocene dichloride
Chemistry
170
43,471,536
https://en.wikipedia.org/wiki/Prime%20avoidance%20lemma
In algebra, the prime avoidance lemma says that if an ideal I in a commutative ring R is contained in a union of finitely many prime ideals Pi's, then it is contained in Pi for some i. There are many variations of the lemma (cf. Hochster); for example, if the ring R contains an infinite field or a finite field of sufficiently large cardinality, then the statement follows from a fact in linear algebra that a vector space over an infinite field or a finite field of large cardinality is not a finite union of its proper vector subspaces. Statement and proof The following statement and argument are perhaps the most standard. Statement: Let E be a subset of R that is an additive subgroup of R and is multiplicatively closed. Let be ideals such that are prime ideals for . If E is not contained in any of 's, then E is not contained in the union . Proof by induction on n: The idea is to find an element that is in E and not in any of 's. The basic case n = 1 is trivial. Next suppose n ≥ 2. For each i, choose where the set on the right is nonempty by inductive hypothesis. We can assume for all i; otherwise, some avoids all the 's and we are done. Put . Then z is in E but not in any of 's. Indeed, if z is in for some , then is in , a contradiction. Suppose z is in . Then is in . If n is 2, we are done. If n > 2, then, since is a prime ideal, some is in , a contradiction. E. Davis' prime avoidance There is the following variant of prime avoidance due to E. Davis. Proof: We argue by induction on r. Without loss of generality, we can assume there is no inclusion relation between the 's; since otherwise we can use the inductive hypothesis. Also, if for each i, then we are done; thus, without loss of generality, we can assume . By inductive hypothesis, we find a y in J such that . If is not in , we are done. Otherwise, note that (since ) and since is a prime ideal, we have: . Hence, we can choose in that is not in . Then, since , the element has the required property. Application Let A be a Noetherian ring, I an ideal generated by n elements and M a finite A-module such that . Also, let = the maximal length of M-regular sequences in I = the length of every maximal M-regular sequence in I. Then ; this estimate can be shown using the above prime avoidance as follows. We argue by induction on n. Let be the set of associated primes of M. If , then for each i. If , then, by prime avoidance, we can choose for some in such that = the set of zero divisors on M. Now, is an ideal of generated by elements and so, by inductive hypothesis, . The claim now follows. Notes References Mel Hochster, Dimension theory and systems of parameters, a supplementary note Abstract algebra
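Because the symbols in the statement above were lost in transcription, a standard formulation of the lemma is restated below in LaTeX. This is the textbook version, and it may differ in notation or in minor hypotheses from the exact statement the article originally carried.

```latex
% A standard formulation of prime avoidance, restated here for reference.
\begin{quote}
Let $R$ be a commutative ring and let $I_1, \dots, I_n$ be ideals of $R$ such that
at most two of the $I_j$ are not prime. If a subset $E \subseteq R$ that is closed
under addition and multiplication (in particular, any ideal) satisfies
\[
  E \subseteq I_1 \cup I_2 \cup \dots \cup I_n ,
\]
then $E \subseteq I_j$ for some $j$. In particular, if an ideal $I$ is contained in
a finite union of prime ideals $P_1, \dots, P_n$, then $I \subseteq P_i$ for some $i$.
\end{quote}
```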
Prime avoidance lemma
Mathematics
653
20,167,416
https://en.wikipedia.org/wiki/Bombardment%20of%20Papeete
The Bombardment of Papeete occurred in French Polynesia when German warships attacked on 22 September 1914, during World War I. The German armoured cruisers Scharnhorst and Gneisenau entered the port of Papeete on the island of Tahiti and sank the French gunboat Zélée and the freighter Walküre before bombarding the town's fortifications. French shore batteries and a gunboat resisted the German intrusion but were greatly outgunned. The main German objective was to seize the coal piles stored on the island, but these were destroyed by the French at the start of the action. The German vessels were largely undamaged but the French lost their gunboat. Several of Papeete's buildings were destroyed and the town's economy was severely disrupted. The main strategic consequence of the engagement was the disclosure of the cruisers' positions to the British Admiralty, which led to the Battle of Coronel where the entire German East Asia Squadron defeated a Royal Navy squadron. The depletion of Scharnhorst's and Gneisenau's ammunition at Papeete also contributed to their subsequent destruction at the Battle of the Falklands. Background Word of war reached Admiral Maximilian von Spee—of the German East Asia Squadron—while at Ponape (17 July – 6 August). He concentrated the majority of his squadron at Pagan Island in the nearby Mariana Islands, and then steamed off into the Pacific with the Scharnhorst-class armored cruisers Scharnhorst and Gneisenau, the Königsberg-class light cruiser Nürnberg, the auxiliary cruiser SMS Titania, and several colliers at his disposal. Nürnberg and Titania were sent to gather intelligence at Hawaii and raid the cable station at Fanning Island. Spee then learned that Australian and New Zealand forces had captured German Samoa, and he sailed off in his flagship Scharnhorst—along with her sister ship Gneisenau—to engage what Allied forces they could find there. Failing to catch the Samoa Expeditionary Force at Apia and having seen no action at all since leaving Pagan Island, the men of Spee's armored cruisers were eager to meet the enemy in battle. Spee decided to raid Papeete in Tahiti on his way to rendezvous with the rest of his squadron at Easter Island. The French held over of high-quality Cardiff coal at the port, and Spee hoped to seize the coal piles to replenish his squadron's supply. Additionally, Spee aimed at destroying what Allied shipping he could find in the harbour, and thought the raid might help raise his men's morale. Spee intended to coal at Suwarrow Atoll before sailing to Papeete, but was prevented by foul weather. Instead, Spee decided to take Scharnhorst and Gneisenau and attempt to resupply at Bora Bora while Nürnberg and Titania were dispatched to Nuku Hiva to guard the fleet's colliers. The German admiral intended to keep his vessels' identities secret by disguising them as French ships, flying French flags, and only allowing French- and English-speaking members of his crew contact with the Frenchmen present there. Spee managed to replenish his food stores using gold seized by Titania and Nürnberg during their raid of Fanning, and was able to discover the strength of the French military in the region as well as the exact size and positions of the coal piles at Papeete. The French had no heavy defenses at Papeete but had been warned that Spee's squadron might raid Tahiti and that a German squadron had been sighted off Samoa. 
Although Papeete was the capital of the French Settlements in Oceania, by 1914 it had become a colonial backwater, lacking a wireless station and having a garrison of only 25 colonial infantry and 20 gendarmes. In order to bolster the town's defenses, Lieutenant Maxime Destremau—commander of the old wooden gunboat Zélée and the ranking officer at Papeete—had his ship's stern gun and all of her and guns removed from his vessel and placed ashore to be used in place of Papeete's antiquated land batteries. Several Ford trucks were turned into impromptu armored cars by mounting them with Zélée's 37-mm guns, and 160 sailors and marines drilled in preparation to repel any German attempt at landing. Zélée retained only her 100-mm bow gun and 10 men under the ship's second in command. In addition to the gunboat and harbor fortifications, the French also had at Papeete the unarmed German freighter Walküre, which had been captured by Zélée at the start of the war. Despite the French preparations, the two German cruisers were more than a match for the forces Destremau commanded at Papeete. Both Scharnhorst and Gneisenau heavily outgunned Zélée, each being armed with eight guns, six guns, eighteen guns, and four torpedo tubes. Spee's forces also outnumbered the French with over 1,500 sailors aboard their vessels, more than enough to form a landing party and overwhelm the forces Destremau had to oppose them. Battle At 07:00 on 22 September 1914, the French sighted two unidentified cruisers approaching the harbor of Papeete. The alarm was raised, the harbor's signal beacons destroyed, and three warning shots were fired by the French batteries to signal the approaching cruisers that they must identify themselves. The cruisers replied with a shot of their own and raised the German colors, signaling the town to surrender. The French refused the German demands, and Spee's vessels began to shell the shore batteries and town from a distance of . The land batteries and the gunboat in the harbor returned fire but scored no hits on the armored cruisers. Having difficulty in discovering the exact position of the French batteries, the German cruisers soon turned their attention to the French shipping in the harbor. The French commander—Destremau—had ordered the coal piles burned at the start of the action and now smoke began billowing over the town. Zélée and Walküre were sighted and fired upon by the Germans. The French had begun to scuttle both vessels when the action started, but both were still afloat when Scharnhorst and Gneisenau began firing upon them and finished the two ships off. By now, most of Papeete's inhabitants had fled and the town had caught fire from the German shelling, with two blocks of Papeete set alight. With the coal piles destroyed and the threat of mines in the harbor, Spee saw no meaningful purpose in making a landing. Accordingly, the German admiral withdrew his ships from Papeete's harbor by 11:00. After leaving Papeete, the ships steamed out towards Nuku Hiva to meet Nürnberg, Titania, and the colliers waiting there. Aftermath By the time Spee withdrew his ships, large portions of the town had been destroyed. Two entire blocks of Papeete had burnt to the ground before the fires were finally put out. A copra store, a market, and several other buildings and residences were among those destroyed by the shellfire and resulting inferno. 
While the majority of Papeete's civilians fled to the interior of the island as soon as the fighting began, a Japanese civilian and a Polynesian boy were both killed by German shellfire. Although the two French vessels in the harbor had been sunk, there were no military casualties on either side and the German vessels took no damage. Overall, the bombardment was estimated in 1915 to have caused over 2 million francs' worth of property damage, some of which was recouped through the seizure of a German store on the island. In addition to the seizure of their property, several local Germans were interned and forced to repair the damage Spee's squadron had caused. Perhaps the most lasting effect of the bombardment on the French was the dramatic fall of copra prices in the region, as local suppliers had previously sold a majority of their produce to German merchants in the area who were now interned. Further havoc and distress spread throughout the island 18 days after Spee's squadron had left when rumors started to spread that a second German bombardment was about to begin. After withdrawing, Scharnhorst and Gneisenau rendezvoused with Nürnberg and Titania at Nuku Hiva, where they resupplied and their crews took shore leave before moving on to meet the rest of the squadron at Easter Island. Although the Germans had destroyed the shipping at Papeete and wreaked havoc in the town, they had been denied their primary objective of seizing the French coal piles and replenishing their own stocks. Spee's raid allowed the British Admiralty to receive word on his position and heading, allowing them to inform Rear Admiral Christopher Cradock of the German intentions thus leading to the Battle of Coronel. Another effect was the reduction of ammunition available to the two German cruisers. The hundreds of shells fired by Spee's ships at Papeete were irreplaceable. The depletion of ammunition as a result of the action at Papeete contributed to the German East Asia Squadron's failure to adequately defend itself at the Battle of the Falkland Islands against British battlecruisers. Lieutenant Destremau was chastised by his misinformed superior officer for his actions during the defense of Papeete and for the loss of the gunboat Zélée. He was summoned back to Toulon under arrest to be court-martialled but died of illness in 1915 before the trial. In 1918, Destremau was finally recognized for his actions at Papeete and was posthumously awarded the Légion d'honneur. Citations References Further reading Heinz Burmester: Die Beschießung von Papeete durch deutsche Panzerkreuzer – ein neutraler Bericht, Deutsches Schiffahrtsarchiv 7, 1984, pp. 147–152. 1914 in France Conflicts in 1914 History of French Polynesia 1914 in French Polynesia Naval battles of the Asian and Pacific Theatre (World War I) Naval battles of World War I involving France Naval battles of World War I involving Germany September 1914 Naval bombing operations and battles of World War I Attacks on naval bases Attacks on buildings and structures in Oceania Scorched earth operations Industrial fires and explosions Coal mining disasters in Oceania 1914 fires 1910s fires in Oceania Attacks on military installations in the 1910s
Bombardment of Papeete
Chemistry
2,125
61,646,222
https://en.wikipedia.org/wiki/Journal%20of%20Modern%20Dynamics
The Journal of Modern Dynamics is a peer-reviewed scientific journal of mathematics published by the American Institute of Mathematical Sciences with the support of the Anatole Katok Center for Dynamical Systems and Geometry (Pennsylvania State University). The editor-in-chief is Giovanni Forni (University of Maryland College Park). History The journal was established in 2007 with Anatole Katok as the founding editor-in-chief. It covers the theory of dynamical systems with particular emphasis on the mutual interaction between dynamics and other major areas of mathematical research: number theory, symplectic geometry, differential geometry, rigidity, quantum chaos, Teichmüller theory, geometric group theory, and harmonic analysis on manifolds. Until 2015 the journal was published quarterly. Since then, accepted papers are published online first and a single printed volume is published yearly. Abstracting and indexing The journal is abstracted and indexed in: Current Contents/Physical, Chemical & Earth Sciences EBSCO databases MathSciNet Science Citation Index Expanded Scopus Zentralblatt MATH According to MathSciNet, the journal has a 2018 Mathematical Citation Quotient of 0.89. References External links Dynamical systems journals Physics journals Academic journals established in 2007 English-language journals Continuous journals
Journal of Modern Dynamics
Mathematics
256
50,448,673
https://en.wikipedia.org/wiki/Levinson%27s%20theorem
Levinson's theorem is an important theorem of scattering theory. In non-relativistic quantum mechanics, it relates the number of bound states in channels with a definite orbital momentum to the difference in phase of a scattered wave at infinite and zero momenta. It was published by Norman Levinson in 1949. The theorem applies to a wide range of potentials that increase limitedly at zero distance and decrease sufficiently fast as the distance grows. Statement of theorem The difference in the -wave phase shift of a scattered wave at infinite momentum, , and zero momentum, , for a spherically symmetric potential is related to the number of bound states by: , where or . The scenario is uncommon and can only occur in -wave scattering, if a bound state with zero energy exists. The following conditions are sufficient to guarantee the theorem: continuous in except for a finite number of finite discontinuities, Generalizations of Levinson's theorem include tensor forces, nonlocal potentials, and relativistic effects. In relativistic scattering theory, essential information about the system is contained in the Jost function, whose analytical properties are well defined and can be used to prove and generalize Levinson's theorem. The presence of Castillejo, Dalitz and Dyson (CDD) poles and Jaffe and Low primitives which correspond to zeros of the Jost function at the unitary cut modifies the theorem. In general case, the phase difference at infinite and zero particle momenta is determined by the number of bound states, , the number of primitives, , and the number of CDD poles, : . The bound states and primitives give a negative contribution to the phase asymptotics, while the CDD poles give a positive contribution. In the context of potential scattering, a decrease (increase) in the scattering phase shift due to greater particle momentum is interpreted as the action of a repulsive (attractive) potential. The following universal properties of the Jost function, , are essential to guarantee the generalized theorem: an analytic function of the square of energy, , in the center-of-mass frame of the scattered particles with a cut from threshold to infinity, simple zeros below the threshold, simple zeros above the threshold, and simple poles on the real axis. The zeros correspond to bound states and primitives in a fixed channel with total angular momentum . References External links Larry Spruch, "Levinson's Theorem", http://physics.nyu.edu/LarrySpruch/LevinsonsTheorem.PDF#Levinson_theorem. M. Wellner, "Levinson's Theorem (an Elementary Derivation," Atomic Energy Research Establishment, Harwell, England. March 1964. Theorems in quantum mechanics de:Compton-Effekt#Compton-Wellenlänge
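For reference, the relation in the non-relativistic statement above is usually written as follows; this is a sketch of the standard textbook form (with δℓ denoting the phase shift for orbital angular momentum ℓ and nℓ the number of bound states in that channel), not a formula reproduced from the cited references:

```latex
% Standard (textbook) form of Levinson's theorem; notation assumed as described above.
% \delta_\ell(k): scattering phase shift at momentum k for orbital angular momentum \ell
% n_\ell: number of bound states in the channel with angular momentum \ell
\[
  \delta_\ell(0) - \delta_\ell(\infty) = n_\ell\,\pi .
\]
% Exceptional case: an s-wave (\ell = 0) bound state exactly at zero energy
% contributes an extra half unit of \pi.
\[
  \delta_0(0) - \delta_0(\infty) = \Bigl(n_0 + \tfrac{1}{2}\Bigr)\pi .
\]
```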
Levinson's theorem
Physics,Mathematics
583
12,295,288
https://en.wikipedia.org/wiki/Hampson%E2%80%93Linde%20cycle
The Hampson–Linde cycle is a process for the liquefaction of gases, especially for air separation. William Hampson and Carl von Linde independently filed for patents of the cycle in 1895: Hampson on 23 May 1895 and Linde on 5 June 1895. The Hampson–Linde cycle introduced regenerative cooling, a positive-feedback cooling system. The heat exchanger arrangement permits an absolute temperature difference (e.g. J–T cooling for air) to go beyond a single stage of cooling and can reach the low temperatures required to liquefy "fixed" gases. The Hampson–Linde cycle differs from the Siemens cycle only in the expansion step. Whereas the Siemens cycle has the gas do external work to reduce its temperature, the Hampson–Linde cycle relies solely on the Joule–Thomson effect; this has the advantage that the cold side of the cooling apparatus needs no moving parts. The cycle The cooling cycle proceeds in several steps: The gas is compressed, which adds external energy into the gas, to give it what is needed for running through the cycle. Linde's US patent gives an example with the low side pressure of and high side pressure of . The high pressure gas is then cooled by immersing the gas in a cooler environment; the gas loses some of its energy (heat). Linde's patent example gives an example of brine at 10°C. The high pressure gas is further cooled with a countercurrent heat exchanger; the cooler gas leaving the last stage cools the gas going to the last stage. The gas is further cooled by passing the gas through a Joule–Thomson orifice (expansion valve); the gas is now at the lower pressure. The low pressure gas is now at its coolest in the current cycle. Some of the gas condenses and becomes output product. The low pressure gas is directed back to the countercurrent heat exchanger to cool the warmer, incoming, high-pressure gas. After leaving the countercurrent heat exchanger, the gas is warmer than it was at its coldest, but cooler than it started out at step 1. The gas is sent back to the compressor, mixed with warm incoming makeup gas (to replace condensed product), and returned to the compressor to make another trip through the cycle (and become still colder). In each cycle the net cooling is more than the heat added at the beginning of the cycle. As the gas passes more cycles and becomes cooler, reaching lower temperatures at the expansion valve becomes more difficult. References Further reading Thermodynamic cycles Cryogenics Industrial gases 1895 in science 1895 in Germany
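The positive feedback of the cycle, in which each pass through the heat exchanger and valve starts from the temperature reached on the previous pass, can be illustrated with a toy numerical model. The sketch below does not use figures from the patents: the pressures, the constant Joule–Thomson coefficient of roughly 0.25 K/bar for air near room temperature, and the heat-exchanger effectiveness are assumed illustrative values, and the model ignores the strong temperature dependence of the real effect and the latent heat released once air starts to condense.

```python
# Toy model of regenerative Hampson–Linde cooling via the Joule–Thomson effect.
# All numbers below are assumed, illustrative values, not data from the patents.
MU_JT = 0.25          # K/bar, rough Joule–Thomson coefficient of air near 300 K
P_HIGH, P_LOW = 200.0, 1.0   # bar, assumed compressor outlet and return pressures
T_IN = 283.0          # K, gas temperature after compression and external cooling
EFFECTIVENESS = 0.9   # countercurrent heat-exchanger effectiveness (assumed)
T_LIQUEFACTION = 80.0 # K, rough boiling range of air at ~1 bar; model stops here

t_cold = T_IN         # temperature of the returning low-pressure stream
for cycle in range(1, 11):
    # High-pressure gas is pre-cooled against the returning cold stream ...
    t_valve_in = T_IN - EFFECTIVENESS * (T_IN - t_cold)
    # ... and drops further when throttled through the expansion valve.
    t_cold = max(t_valve_in - MU_JT * (P_HIGH - P_LOW), T_LIQUEFACTION)
    print(f"cycle {cycle:2d}: temperature after expansion ≈ {t_cold:5.1f} K")
    if t_cold <= T_LIQUEFACTION:
        print("air would begin to liquefy; constant-coefficient model stops here")
        break
```

Each pass leaves the returning stream colder than before, so the gas entering the valve is colder on the next pass; this is the regenerative feedback the cycle relies on.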
Hampson–Linde cycle
Physics,Chemistry
541
23,774,111
https://en.wikipedia.org/wiki/C3H6S
{{DISPLAYTITLE:C3H6S}} The molecular formula C3H6S (molar mass: 74.14 g/mol, exact mass: 74.0190 u) may refer to: Allyl mercaptan (AM) Thietane Thioacetone
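The molar mass and exact mass quoted above follow directly from standard atomic weights and monoisotopic masses; a quick check (a minimal sketch, with the usual reference values hard-coded):

```python
# Check the average molar mass and monoisotopic (exact) mass of C3H6S.
atomic_weight = {"C": 12.011, "H": 1.008, "S": 32.06}        # standard atomic weights
monoisotopic  = {"C": 12.0,   "H": 1.00783, "S": 31.97207}   # most abundant isotopes
formula = {"C": 3, "H": 6, "S": 1}

molar = sum(n * atomic_weight[el] for el, n in formula.items())
exact = sum(n * monoisotopic[el] for el, n in formula.items())
print(f"molar mass ≈ {molar:.2f} g/mol")   # ≈ 74.14 g/mol
print(f"exact mass ≈ {exact:.4f} u")       # ≈ 74.0190 u
```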
C3H6S
Chemistry
65
14,879,596
https://en.wikipedia.org/wiki/CPLX1
Complexin-1 is a protein that in humans is encoded by the CPLX1 gene. Function Proteins encoded by the complexin/synaphin gene family are cytosolic proteins that function in synaptic vesicle exocytosis. These proteins bind syntaxin, part of the SNAP receptor. The protein product of this gene binds to the SNAP receptor complex and disrupts it, allowing transmitter release. Interactions CPLX1 has been shown to interact with SNAP-25 and STX1A. References External links Further reading
CPLX1
Chemistry
112
52,207,873
https://en.wikipedia.org/wiki/Buzi%20%28fortification%29
Buzi are small forts built along the northern frontier of China. They are prevalent in the Loess Plateau of Shaanxi, Gansu and Ningxia provinces, usually square or oval (as hill forts) and built out of rammed earth walls. Geography The forts are built on hilltops, at strategic locations or within villages. They were mainly financed by local notables, and constructed by villagers of the area. A large number of forts are found in Tianshui (over 500) and Dingxi prefectures, totalling over 1400 forts. One of the densest concentrations of forts is Tongwei County, which has the nickname "thousand forts county". Qin'an County is home to over 200 forts, including three larger castles. In Wushan County over 200 of these forts are estimated to exist, of which 61 are relatively well preserved. Although each fort may not be impressive on its own, the combined defense line of forts has been compared to the Great Wall of China. Usage Some of the forts date back to the Qin dynasty, though many are around 150 years old, dating to the late Qing dynasty. During the Dungan Revolt, villagers sought refuge from the raiding and fighting in these forts, strengthened and expanded existing forts, and even constructed new forts with the same methods. The forts have been used for defensive purposes as late as the Sino-Japanese war. Nowadays, most of the forts lie abandoned, partly due to the difficulty of reaching the hilltops. The courtyards of some forts have been filled by farmhouses or Taoist temples. The defenders inside the forts varied: some larger forts were permanently manned by trained military, while smaller ones were just refuge places for nearby villagers. References Fortification lines Chinese architectural history Military history of China Forts in China Buildings and structures in Gansu
Buzi (fortification)
Engineering
361
40,048,943
https://en.wikipedia.org/wiki/Geoffrey%20G.%20Parker
Geoffrey G Parker is a scholar whose work focuses on distributed innovation, energy markets, and the economics of information. He co-developed the theory of two-sided markets with Marshall Van Alstyne. His current research includes studies of platform business strategy, data governance, and technical/economic systems to integrate distributed energy resources. Parker is Professor of Engineering and Director, Master of Engineering Management, (MEM) Thayer School of Engineering at Dartmouth College, the first national research university to graduate a class of engineers with more women than men. He has set the Thayer School of Engineering apart with the introduction of Data Analytics and Platform Design classes, emphasizing the business aspects of engineering and giving engineers the background they need to be business innovators and entrepreneurs. Parker is part of a unique culture that is breaking gender barriers. Parker is also a Faculty Fellow at MIT and the MIT Center for Digital Business. Parker is co-author of the book Platform Revolution, which was included among the 16 must-read business books for 2016 by Forbes. Early life and education Geoffrey Parker was born in Dayton, Ohio. He received a BS in Electrical Engineering and Computer Science from Princeton University in 1986. He then completed the General Electric Company Financial Management Training Program and held multiple positions in engineering and finance at General Electric in North Carolina and Wisconsin. He obtained an MS in Electrical Engineering (Technology and Policy Program) in 1993 and a PhD in Management Science in 1998, both at the Massachusetts Institute of Technology. Career Parker is Professor of Engineering and Director, Master of Engineering Management, Thayer School of Engineering, Dartmouth College. In addition, he is a Fellow at MIT's Initiative on the Digital Economy where he leads platform industry research studies and co-chairs the annualMIT Platform Strategy Summit. Parker is a visiting scholar at the MIT Sloan School. His teaching includes platform strategy courses that provide managers the tools they need to understand the digital economy and technical courses that give students the skills they need to transform large data sets into actionable knowledge. He was formerly Professor of Management Science at Tulane University where he served as Director of the Tulane Energy Institute. Parker has taught undergraduate and full-time MBA courses as well as professional MBA and executive MBA programs. Parker served as a National Science Foundation panelist from 2009 to 2011. He is a senior editor for the journal Production and Operations Management, an associate editor for the journal Management Science and President of the Industry Studies Association. Parker is a member of General Electric’s Learning Advisory Board, consisting of academics drawn from across Africa, the United States of America and the United Kingdom, that assists in development and broadening of skills across Africa. Parker co-organizes and co-chairs the annual MIT Platform Strategy Summit, an executive meeting on platform-centered economics and management, where he stressed the growth of platforms, their welfare implications and their takeover of government functions. At the same time, he co-chairs an academic meeting, the Platform Strategy Research Symposium. Parker served as chair of the U.S.-Israel Energy Summit in 2014. 
Work Parker has made significant contributions to the field of network economics and strategy as co-developer of the theory of two-sided markets with Marshall Van Alstyne. Parker and Van Alstyne observed that, unlike traditional value chains with cost and revenue on different sides, two-sided networks have cost and revenue on both sides, because the “platform” has a distinct group of users on each side. Their approach has been described as the “chicken and egg” problem of how to build a platform. They concluded that the problem must be solved by platform owners, typically by cross-subsidizing between groups or even giving away products or services for free. Two-sided network effects can cause markets to concentrate in the hands of a few firms. These properties inform the strategies and antitrust law approaches at all firms involved in the network. His research includes studies of distributed innovation, business platform strategy, and platforms to integrate intermittent energy. Parker is a frequent keynote speaker and advises senior leaders on their organizations’ platform strategies. Before attending MIT, he held positions in engineering and finance at GE. Publications Parker's research has appeared in journals such as Harvard Business Review, MIT Sloan Management Review, Energy Economics, Information Systems Research', Journal of Economics and Management Strategy, Management Science, Production and Operations Management, and Strategic Management Journal. His work has also been featured on business news publications such as “MarketWatch” and Wired. He is the co-author of Platform Revolution: How Networked Markets Are Transforming the Economy and How to Make Them Work for You. The book describes the information technologies, standards, and rules that make up platforms, and are used and developed by the biggest and most innovative global companies. Forbes included it among 16 must-read business books for 2016, describing it as "a practical guide to the new business model that is transforming the way we work and live." Parker also co-wrote Operations Management For Dummies within the For Dummies franchise. Awards Parker won the Wick Skinner Early Career Research Accomplishments Award in 2003. He was given the Dean's Excellence in Teaching Award for Graduate Education at Freeman School of Business in 2014. References External links Platform Economics Platform Revolution Articles available for download at SSRN Living people Economists from Ohio Information economists Innovation economists Information systems researchers Energy economists Tulane University faculty MIT School of Engineering alumni 21st-century American economists Year of birth missing (living people)
Geoffrey G. Parker
Technology
1,091
1,456,984
https://en.wikipedia.org/wiki/Organic%20synthesis
Organic synthesis is a branch of chemical synthesis concerned with the construction of organic compounds. Organic compounds are molecules consisting of combinations of covalently-linked hydrogen, carbon, oxygen, and nitrogen atoms. Within the general subject of organic synthesis, there are many different types of synthetic routes that can be completed including total synthesis, stereoselective synthesis, automated synthesis, and many more. Additionally, in understanding organic synthesis it is necessary to be familiar with the methodology, techniques, and applications of the subject. Total synthesis A total synthesis refers to the complete chemical synthesis of molecules from simple, natural precursors. Total synthesis is accomplished either via a linear or convergent approach. In a linear synthesis—often adequate for simple structures—several steps are performed sequentially until the molecule is complete; the chemical compounds made in each step are called synthetic intermediates. Most often, each step in a synthesis is a separate reaction taking place to modify the starting materials. For more complex molecules, a convergent synthetic approach may be better suited. This type of reaction scheme involves the individual preparations of several key intermediates, which are then combined to form the desired product. Robert Burns Woodward, who received the 1965 Nobel Prize for Chemistry for several total syntheses including his synthesis of strychnine, is regarded as the grandfather of modern organic synthesis. Some latter-day examples of syntheses include Wender's, Holton's, Nicolaou's, and Danishefsky's total syntheses of the anti-cancer drug paclitaxel (trade name Taxol). Methodology and applications Before beginning any organic synthesis, it is important to understand the chemical reactions, reagents, and conditions required in each step to guarantee successful product formation. When determining optimal reaction conditions for a given synthesis, the goal is to produce an adequate yield of pure product with as few steps as possible. When deciding conditions for a reaction, the literature can offer examples of previous reaction conditions that can be repeated, or a new synthetic route can be developed and tested. For practical, industrial applications additional reaction conditions must be considered to include the safety of both the researchers and the environment, as well as product purity. Synthetic techniques Organic Synthesis requires many steps to separate and purify products. Depending on the chemical state of the product to be isolated, different techniques are required. For liquid products, a very common separation technique is liquid–liquid extraction and for solid products, filtration (gravity or vacuum) can be used. Liquid–liquid extraction Liquid–liquid extraction uses the density and polarity of the product and solvents to perform a separation. Based on the concept of "like-dissolves-like", non-polar compounds are more soluble in non-polar solvents, and polar compounds are more soluble in polar solvents. By using this concept, the relative solubility of compounds can be exploited by adding immiscible solvents into the same flask and separating the product into the solvent with the most similar polarity. Solvent miscibility is of major importance as it allows for the formation of two layers in the flask, one layer containing the side reaction material and one containing the product. 
As a result of the differing densities of the layers, the product-containing layer can be isolated and the other layer can be removed. Heated reactions and reflux condensers Many reactions require heat to increase reaction speed. However, in many situations increased heat can cause the solvent to boil uncontrollably which negatively affects the reaction, and can potentially reduce product yield. To address this issue, reflux condensers can be fitted to reaction glassware. Reflux condensers are specially calibrated pieces of glassware that possess two inlets for water to run in and out through the glass against gravity. This flow of water cools any escaping substrate and condenses it back into the reaction flask to continue reacting and ensure that all product is contained. The use of reflux condensers is an important technique within organic syntheses and is utilized in reflux steps, as well as recrystallization steps. When being used for refluxing a solution, reflux condensers are fitted and closely observed. Reflux occurs when condensation can be seen dripping back into the reaction flask from the reflux condenser; 1 drop every second or few seconds. For recrystallization, the product-containing solution is equipped with a condenser and brought to reflux again. Reflux is complete when the product-containing solution is clear. Once clear, the reaction is taken off heat and allowed to cool which will cause the product to re-precipitate, yielding a purer product. Gravity and vacuum filtration Solid products can be separated from a reaction mixture using filtration techniques. To obtain solid products a vacuum filtration apparatus can be used. Vacuum filtration uses suction to pull liquid through a Büchner funnel equipped with filter paper, which catches the desired solid product. This process removes any unwanted solution in the reaction mixture by pulling it into the filtration flask and leaving the desired product to collect on the filter paper. Liquid products can also be separated from solids by using gravity filtration. In this separatory method, filter paper is folded into a funnel and placed on top of a reaction flask. The reaction mixture is then poured through the filter paper, at a rate such that the total volume of liquid in the funnel does not exceed the volume of the funnel. This method allows for the product to be separated from other reaction components by the force of gravity, instead of a vacuum. Stereoselective synthesis Most complex natural products are chiral, and the bioactivity of chiral molecules varies with the enantiomer. Some total syntheses target racemic mixtures, which are mixtures of both possible enantiomers. A single enantiomer can then be selected via enantiomeric resolution.   As chemistry has developed methods of stereoselective catalysis and kinetic resolution have been introduced whereby reactions can be directed, producing only one enantiomer rather than a racemic mixture. Early examples include stereoselective hydrogenations (e.g., as reported by William Knowles and Ryōji Noyori) and functional group modifications such as the asymmetric epoxidation by Barry Sharpless; for these advancements in stereochemical preference, these chemists were awarded the Nobel Prize in Chemistry in 2001. Such preferential stereochemical reactions give chemists a much more diverse choice of enantiomerically pure materials. Using techniques developed by Robert B. 
Woodward paired with advancements in synthetic methodology, chemists have been able synthesize stereochemically selective complex molecules without racemization. Stereocontrol provides the target molecules to be synthesized as pure enantiomers (i.e., without need for resolution). Such techniques are referred to as stereoselective synthesis. Synthesis design Many synthetic procedures are developed from a retrosynthetic framework, a type of synthetic design developed by Elias James Corey, for which he won the Nobel Prize in Chemistry in 1990. In this approach, the synthesis is planned backwards from the product, obliging to standard chemical rules. Each step breaks down the parent structure into achievable components, which are shown via the use of graphical schemes with retrosynthetic arrows (drawn as ⇒, which in effect, means "is made from"). Retrosynthesis allows for the visualization of desired synthetic designs. Automated organic synthesis A recent development within organic synthesis is automated synthesis. To conduct organic synthesis without human involvement, researchers are adapting existing synthetic methods and techniques to create entirely automated synthetic processes using organic synthesis software. This type of synthesis is advantageous as synthetic automation can increase yield with continual "flowing" reactions. In flow chemistry, substrates are continually fed into the reaction to produce a higher yield. Previously, this type of reaction was reserved for large-scale industrial chemistry but has recently transitioned to bench-scale chemistry to improve the efficiency of reactions on a smaller scale. Currently integrating automated synthesis into their work is SRI International, a nonprofit research institute. Recently SRI International has developed Autosyn, an automated multi-step chemical synthesizer that can synthesize many FDA-approved small molecule drugs. This synthesizer demonstrates the versatility of substrates and the capacity to potentially expand the type of research conducted on novel drug molecules without human intervention. Automated chemistry and the automated synthesizers used demonstrate a potential direction for synthetic chemistry in the future. Characterization Necessary to organic synthesis is characterization. Characterization refers to the measurement of chemical and physical properties of a given compound, and comes in many forms. Examples of common characterization methods include: nuclear magnetic resonance (NMR), mass spectrometry, Fourier-transform infrared spectroscopy (FTIR), and melting point analysis. Each of these techniques allow for a chemist to obtain structural information about a newly synthesized organic compound. Depending on the nature of the product, the characterization method used can vary. Relevance Organic synthesis is an important chemical process that is integral to many scientific fields. Examples of fields beyond chemistry that require organic synthesis include the medical industry, pharmaceutical industry, and many more. Organic processes allow for the industrial-scale creation of pharmaceutical products. An example of such a synthesis is Ibuprofen. Ibuprofen can be synthesized from a series of reactions including: reduction, acidification, formation of a Grignard reagent, and carboxylation. In the synthesis of Ibuprofen proposed by Kjonass et al., p-isobutylacetophenone, the starting material, is reduced with sodium borohydride (NaBH4) to form an alcohol functional group. 
The resulting intermediate is acidified with HCl to create a chlorine group. The chlorine group is then reacted with magnesium turnings to form a Grignard reagent. This Grignard is carboxylated and the resulting product is worked up to synthesize ibuprofen. This synthetic route is just one of many medically and industrially relevant reactions that have been created, and continued to be used. See also Automated synthesis Electrosynthesis Methods in Organic Synthesis (journal) Organic Syntheses (journal) References Further reading External links The Organic Synthesis Archive Chemical synthesis database https://web.archive.org/web/20070927231356/http://www.webreactions.net/search.html https://www.organic-chemistry.org/synthesis/ Prof. Hans Reich's collection of natural product syntheses Chemical synthesis semantic wiki
Organic synthesis
Chemistry
2,181
3,654,070
https://en.wikipedia.org/wiki/Hazen%E2%80%93Williams%20equation
The Hazen–Williams equation is an empirical relationship that relates the flow of water in a pipe with the physical properties of the pipe and the pressure drop caused by friction. It is used in the design of water pipe systems such as fire sprinkler systems, water supply networks, and irrigation systems. It is named after Allen Hazen and Gardner Stewart Williams. The Hazen–Williams equation has the advantage that the coefficient C is not a function of the Reynolds number, but it has the disadvantage that it is only valid for water. Also, it does not account for the temperature or viscosity of the water, and therefore is only valid at room temperature and conventional velocities. General form Henri Pitot discovered that the velocity of a fluid was proportional to the square root of its head in the early 18th century. It takes energy to push a fluid through a pipe, and Antoine de Chézy discovered that the hydraulic head loss was proportional to the velocity squared. Consequently, the Chézy formula relates hydraulic slope S (head loss per unit length) to the fluid velocity V and hydraulic radius R: The variable C expresses the proportionality, but the value of C is not a constant. In 1838 and 1839, Gotthilf Hagen and Jean Léonard Marie Poiseuille independently determined a head loss equation for laminar flow, the Hagen–Poiseuille equation. Around 1845, Julius Weisbach and Henry Darcy developed the Darcy–Weisbach equation. The Darcy-Weisbach equation was difficult to use because the friction factor was difficult to estimate. In 1906, Hazen and Williams provided an empirical formula that was easy to use. The general form of the equation relates the mean velocity of water in a pipe with the geometric properties of the pipe and the slope of the energy line. where: V is velocity (in ft/s for US customary units, in m/s for SI units) k is a conversion factor for the unit system (k = 1.318 for US customary units, k = 0.849 for SI units) C is a roughness coefficient R is the hydraulic radius (in ft for US customary units, in m for SI units) S is the slope of the energy line (head loss per length of pipe or hf/L) The equation is similar to the Chézy formula but the exponents have been adjusted to better fit data from typical engineering situations. A result of adjusting the exponents is that the value of C appears more like a constant over a wide range of the other parameters. The conversion factor k was chosen so that the values for C were the same as in the Chézy formula for the typical hydraulic slope of S=0.001. The value of k is 0.001−0.04. Typical C factors used in design, which take into account some increase in roughness as pipe ages are as follows: Pipe equation The general form can be specialized for full pipe flows. Taking the general form and exponentiating each side by gives (rounding exponents to 3–4 decimals) Rearranging gives The flow rate , so The hydraulic radius (which is different from the geometric radius ) for a full pipe of geometric diameter is ; the pipe's cross sectional area is , so U.S. 
customary units (Imperial) When used to calculate the pressure drop using the US customary units system, the equation is: where: Spsi per foot = frictional resistance (pressure drop per foot of pipe) in psig/ft (pounds per square inch gauge pressure per foot) Sfoot of water per foot of pipe Pd = pressure drop over the length of pipe in psig (pounds per square inch gauge pressure)L = length of pipe in feetQ = flow, gpm (gallons per minute)C = pipe roughness coefficientd = inside pipe diameter, in (inches) Note: Caution with U S Customary Units is advised. The equation for head loss in pipes, also referred to as slope, S, expressed in "feet per foot of length" vs. in 'psi per foot of length' as described above, with the inside pipe diameter, d, being entered in feet vs. inches, and the flow rate, Q, being entered in cubic feet per second, cfs, vs. gallons per minute, gpm, appears very similar. However, the constant is 4.73 vs. the 4.52 constant as shown above in the formula as arranged by NFPA for sprinkler system design. The exponents and the Hazen-Williams "C" values are unchanged. SI units When used to calculate the head loss with the International System of Units, the equation will then become where: S = Hydraulic slope hf = head loss in meters (water) over the length of pipe L = length of pipe in meters Q = volumetric flow rate, m3/s (cubic meters per second) C = pipe roughness coefficient d'' = inside pipe diameter, m (meters) Note: pressure drop can be computed from head loss as hf × the unit weight of water (e.g., 9810 N/m3 at 4 deg C) See also Darcy–Weisbach equation and Prony equation for alternatives Fluid dynamics Friction Minor losses in pipe flow Plumbing Pressure Volumetric flow rate References Further reading Williams and Hazen, Second edition, 1909 External links Engineering Toolbox reference Engineering toolbox Hazen–Williams coefficients Online Hazen–Williams calculator for gravity-fed pipes. Online Hazen–Williams calculator for pressurized pipes. https://books.google.com/books?id=DxoMAQAAIAAJ&pg=PA736 https://books.google.com/books?id=RAMX5xuXSrUC&pg=PA145 States pocket calculators and computers make calculations easier. H-W is good for smooth pipes, but Manning better for rough pipes (compared to D-W model). Eponymous equations of physics Equations of fluid dynamics Piping Plumbing Hydraulics Hydrodynamics Irrigation
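A commonly quoted way of writing the SI head-loss form described above is hf = 10.67 · L · Q^1.852 / (C^1.852 · d^4.87), with Q in m³/s and d and L in metres. The short sketch below evaluates it for an assumed pipe; the flow rate, diameter, length, and roughness coefficient are illustrative choices, not values taken from the article:

```python
# Hazen–Williams head loss in SI units (commonly quoted coefficient ~10.67):
#   h_f = 10.67 * L * Q**1.852 / (C**1.852 * d**4.87)
# Q in m^3/s, d and L in metres, h_f in metres of water column.
def hazen_williams_head_loss(q_m3s: float, d_m: float, length_m: float, c: float) -> float:
    return 10.67 * length_m * q_m3s**1.852 / (c**1.852 * d_m**4.87)

# Assumed example: 50 L/s through 100 m of 200 mm pipe with C = 140.
h_f = hazen_williams_head_loss(q_m3s=0.05, d_m=0.20, length_m=100.0, c=140.0)
print(f"head loss ≈ {h_f:.2f} m of water over 100 m of pipe")
# Pressure drop ≈ h_f * unit weight of water (about 9810 N/m^3), as noted above.
```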
Hazen–Williams equation
Physics,Chemistry,Engineering
1,243
105,836
https://en.wikipedia.org/wiki/Oncotic%20pressure
Oncotic pressure, or colloid osmotic-pressure, is a type of osmotic pressure induced by the plasma proteins, notably albumin, in a blood vessel's plasma (or any other body fluid such as blood and lymph) that causes a pull on fluid back into the capillary. It has an effect opposing both the hydrostatic blood pressure, which pushes water and small molecules out of the blood into the interstitial spaces at the arterial end of capillaries, and the interstitial colloidal osmotic pressure. These interacting factors determine the partitioning of extracellular water between the blood plasma and the extravascular space. Oncotic pressure strongly affects the physiological function of the circulatory system. It is suspected to have a major effect on the pressure across the glomerular filter. However, this concept has been strongly criticised and attention has shifted to the impact of the intravascular glycocalyx layer as the major player. Etymology The word 'oncotic' by definition is termed as 'pertaining to swelling', indicating the effect of oncotic imbalance on the swelling of tissues. The word itself is derived from onco- and -ic; 'onco-' meaning 'pertaining to mass or tumors' and '-ic', which forms an adjective. Description Throughout the body, dissolved compounds have an osmotic pressure. Because large plasma proteins cannot easily cross through the capillary walls, their effect on the osmotic pressure of the capillary interiors will, to some extent, balance out the tendency for fluid to leak out of the capillaries. In other words, the oncotic pressure tends to pull fluid into the capillaries. In conditions where plasma proteins are reduced, e.g. from being lost in the urine (proteinuria), there will be a reduction in oncotic pressure and an increase in filtration across the capillary, resulting in excess fluid buildup in the tissues (edema). The large majority of oncotic pressure in capillaries is generated by the presence of high quantities of albumin, a protein that constitutes approximately 80% of the total oncotic pressure exerted by blood plasma on interstitial fluid . The total oncotic pressure of an average capillary is about 28 mmHg with albumin contributing approximately 22 mmHg of this oncotic pressure, despite only representing 50% of all protein in blood plasma at 35-50 g/L. Because blood proteins cannot escape through capillary endothelium, oncotic pressure of capillary beds tends to draw water into the vessels. It is necessary to understand the oncotic pressure as a balance; because the blood proteins reduce interior permeability, less plasma fluid can exit the vessel. Oncotic pressure is represented by the symbol Π or π in the Starling equation and elsewhere. The Starling equation in particular describes filtration in volume/s () by relating oncotic pressure () to capillary hydrostatic pressure (), interstitial fluid hydrostatic pressure (), and interstitial fluid oncotic pressure (), as well as several descriptive coefficients, as shown below: At the arteriolar end of the capillary, blood pressure starts at about 36 mm Hg and decreases to around 15 mm Hg at the venous end, with oncotic pressure at a stable 25–28 mm Hg. Within the capillary, reabsorption due to this venous pressure difference is estimated to be around 90% that of the filtered fluid, with the extra 10% being returned via lymphatics in order to maintain stable blood volume. 
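The balance described above can be made concrete with a small numerical sketch of the Starling relationship, Jv = Kf[(Pc − Pi) − σ(πc − πi)]. The pressures below are the representative figures mentioned in the text (capillary hydrostatic pressure falling from roughly 36 to 15 mmHg along the capillary, plasma oncotic pressure near 28 mmHg); the interstitial values and the reflection coefficient σ are assumed illustrative numbers:

```python
# Net Starling filtration pressure along a capillary.
# Positive values favour filtration out of the vessel, negative favour reabsorption.
def net_filtration_pressure(p_c, p_i, pi_c, pi_i, sigma=0.9):
    """(P_c - P_i) - sigma * (pi_c - pi_i), all pressures in mmHg."""
    return (p_c - p_i) - sigma * (pi_c - pi_i)

pi_c, pi_i = 28.0, 3.0   # plasma / interstitial oncotic pressure (illustrative)
p_i = 0.0                # interstitial hydrostatic pressure (illustrative)

for label, p_c in (("arteriolar end", 36.0), ("venous end", 15.0)):
    nfp = net_filtration_pressure(p_c, p_i, pi_c, pi_i)
    direction = "filtration" if nfp > 0 else "reabsorption"
    print(f"{label}: net pressure {nfp:+.1f} mmHg -> {direction}")
```

With these numbers the net pressure is positive at the arteriolar end and negative at the venous end, matching the filtration-then-reabsorption pattern described in the text.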
Physiological impact In tissues, physiological disruption can arise with decreased oncotic pressure, which can be determined using blood tests for protein concentration. Decreased colloidal osmotic pressure, most notably seen in hypoalbuminemia, can cause edema and decrease in blood volume as fluid is not reabsorbed into the bloodstream. Colloid pressure in these cases can be lost due to a number of different factors, but primarily decreased colloid production or increased loss of colloids through glomerular filtration. This low pressure often correlates with poor surgical outcomes. In the clinical setting, there are two types of fluids that are used for intravenous drips: crystalloids and colloids. Crystalloids are aqueous solutions of mineral salts or other water-soluble molecules. Colloids contain larger insoluble molecules, such as gelatin. There is some debate concerning the advantages and disadvantages of using biological vs. synthetic colloid solutions. Oncotic pressure values are approximately 290 mOsm per kg of water, which slightly differs from the osmotic pressure of the blood that has values approximating 300 mOsm /L. These colloidal solutions are typically used to remedy low colloid concentration, such as in hypoalbuminemia, but is also suspected to assist in injuries that typically increase fluid loss, such as burns. References External links Overview at cvphysiology.com Physiology
Oncotic pressure
Biology
1,068
45,355,856
https://en.wikipedia.org/wiki/List%20of%20F4%20polytopes
{{DISPLAYTITLE:List of F4 polytopes}} In 4-dimensional geometry, there are 9 uniform 4-polytopes with F4 symmetry, and one chiral half symmetry, the snub 24-cell. There is one self-dual regular form, the 24-cell with 24 vertices. Visualization Each can be visualized as symmetric orthographic projections in Coxeter planes of the F4 Coxeter group, and other subgroups. The 3D pictures are drawn as Schlegel diagram projections, centered on the cell at pos. 3, with a consistent orientation, and the 5 cells at position 0 are shown solid. Coordinates Vertex coordinates for all 15 forms are given below, including dual configurations from the two regular 24-cells. (The dual configurations are named in bold.) Active rings in the first and second nodes generate points in the first column. Active rings in the third and fourth nodes generate the points in the second column. The sum of each of these points is then permuted by coordinate position and sign combination. This generates all vertex coordinates. Edge lengths are 2. The only exception is the snub 24-cell, which is generated by only half of the coordinate permutations, those with an even number of coordinate swaps. φ = (√5+1)/2. References J.H. Conway and M.J.T. Guy: Four-Dimensional Archimedean Polytopes, Proceedings of the Colloquium on Convexity at Copenhagen, pages 38 and 39, 1965 John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26) H.S.M. Coxeter: H.S.M. Coxeter, Regular Polytopes, 3rd Edition, Dover New York, 1973 Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, Wiley::Kaleidoscopes: Selected Writings of H.S.M. Coxeter (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380–407, MR 2,10] (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966 External links Uniform, convex polytopes in four dimensions, Marco Möller Uniform 4-polytopes
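The "permute coordinate positions, then apply sign combinations" recipe can be illustrated for the simplest member of the family, the regular 24-cell, whose vertices are commonly given as all coordinate permutations and sign combinations of (1, 1, 0, 0); in that normalization the edge length is √2 rather than the 2 used in the table above. (The snub 24-cell additionally involves the golden ratio φ.) A minimal sketch:

```python
# Generate the 24 vertices of the regular 24-cell: all coordinate permutations
# and sign combinations of (1, 1, 0, 0).  Edge length is sqrt(2) in this scaling.
from itertools import permutations, product
from math import dist, sqrt

vertices = set()
for perm in set(permutations((1, 1, 0, 0))):             # permute coordinate positions
    nonzero = [i for i, c in enumerate(perm) if c != 0]
    for signs in product((1, -1), repeat=len(nonzero)):  # apply sign combinations
        v = list(perm)
        for i, s in zip(nonzero, signs):
            v[i] *= s
        vertices.add(tuple(v))

print(len(vertices))                                     # 24
shortest = min(dist(u, v) for u in vertices for v in vertices if u != v)
print(abs(shortest - sqrt(2)) < 1e-12)                   # True: edge length sqrt(2)
```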
List of F4 polytopes
Physics
616
18,573,838
https://en.wikipedia.org/wiki/New%20York%20Studio%20and%20Forum%20of%20Stage%20Design
The Studio and Forum of Stage Design was an American training school for theatre designers that was started by scenic designer Lester Polakov in 1958 in the Greenwich Village neighborhood of the Manhattan borough of New York City, New York. History Polakov was born in Chicago, Illinois, in 1916 and studied in New York with George Grosz and at Columbia University. He began his career designing sets in summer stock theatre and in 1939 made his debut in New York City as the scenic designer for The Mother. This was quickly followed in 1940 by designing the costumes for Reunion in New York, as well as assisting scenic designer Harry Horner on Lady in the Dark. After service in World War II, he resumed designing and also painting, with several exhibitions of his paintings as a result. In 1958, he established the Lester Polakov Studio of Stage Design, later known as the Studio and Forum of Stage Design, where he employed some of the best-known designers of sets, lights, and costumes to teach design. In addition to teaching and overseeing the operation of the school, he continued to design sets and costumes for the stage. In 1993, Polakov published his book, We Live to Paint Again. In 1999, Polakov was the recipient of the Distinguished Achievement Award in Scenery presented to him by the United States Institute for Theatre Technology. His credits include Call Me Mister (1946, scenic design), Crime and Punishment (1947, costume design), The Member of the Wedding (1950, scenic, costume, and lighting design; also replacement stage manager), The Skin of Our Teeth (1955, scenic design), Great Day in the Morning (1962, scenic and lighting design), and Charlotte (1980, scenic design). He was a scenic designer who, through his years of working in theatre, gathered together a group of like-minded designers to create a teaching environment that was esteemed, though it was not connected to any university or academic institution. Through this school, and the corresponding work done by union members of United Scenic Artists l.u. 829 (now a part of the International Alliance of Theatrical Stage Employees), they created and promoted the idea of conceptual-based designing. All the teachers in the school were working professionals, usually Broadway-based designers. The list included: Lester Polakov, John Gleason, Tom Skelton and Arden Fingerhut. The school had courses that included scenic design, scenic painting, still-life sketching, costume design and lighting design. Today, none of the original creators of this school survive. See also References Organizations with year of establishment missing 1958 establishments in New York City Defunct schools in New York City Educational institutions established in 1958 Greenwich Village Performing arts education in New York City Universities and colleges in Manhattan Scenic design
New York Studio and Forum of Stage Design
Engineering
563
48,657,291
https://en.wikipedia.org/wiki/Loenhout
Loenhout is a village and deelgemeente (sub-municipality) of the municipality of Wuustwezel in the province of Antwerp, Belgium. The village is located near the Dutch border, and about north-east of the city of Antwerp. History The area around Loenhout used to contain two heerlijkheden under the same Lord, but one was a fief of Hoogstraten while the other belonged to the Duchy of Brabant. Loenhout became a parish in the 13th century, and was awarded to the St. Bernard's Abbey of Hemiksen in 1277. The Dutch Revolt in the late 16th century resulted in the destruction of the castle and a near depopulation of the area. In the 18th century, during the rule of Maria Theresa of Naples and Sicily, the wilderness was cultivated, and Loenhout became an agricultural community specialising in livestock. Loenhout was an independent municipality until 1977, when it was merged with Wuustwezel. Nature and landscape Loenhout is located in the North Campine, a region with a poor sandy soil sparsely populated before the 20th century. It is a flat area with a height of 13-22 meters. A number of streams run from south to north, of which the Kleine Aa (Small Aa river) is the most important. These streams come together at the Belgian-Dutch border to form the Aa of Weerijs. The A1 (E19) motorway and the high-speed train line (Thalys) Schiphol – Antwerp cut through the landscape southeast of Loenhout. Underground gas storage facility The Belgian Fluxys company operates an Underground Gas Storage (UGS) facility of 680 million cubic meters of natural gas in Loenhout. The gas is stored in a fissured aquifer system in the top Dinantian karstic limestones (Visean age) of the Heibaart structure. The Lower Carboniferous carbonates in the Campine-Brabant Basin are highly fractured, a prerequisite condition for storing gas or recovering deep geothermal energy. The Heibaart structure was investigated by means of several deep exploration boreholes, the first performed for Petrofina in 1962. Petrofina, the main petroleum company in Belgium at the time, was drilling in the north of Belgium in the hope of discovering a petroleum reservoir. The exploration campaign was not successful in that respect, but it did identify a promising geological trapping structure that could be used for gas storage. The gas storage project was developed two decades later. The Loenhout UGS is one of the two gas storage systems connected to the Fluxys gas transport grid. The gas storage system consists of a geological dome structure (in fact looking more like an 'upside-down soup plate' than like a well-shaped dome) covered by a layer of low-permeability caprock. The impermeable caprock is water- and gas-tight and confines the high-calorie gas in the underlying fissured limestone aquifer. From April to November, natural gas is injected under pressure into the system, drawing down the water table. In the winter, when gas consumption is high, gas is retrieved from the reservoir and the water table rises again. This infrastructure, unique in Belgium, was drilled at 1,000 – 1,500 m depth in the 1970s below the land of five municipalities (Loenhout, Wuustwezel, Hoogstraten, Rijkevorsel and Brecht) but is relatively poorly known by the public in Belgium.
With its 9 terawatt-hours (TWh) of storage capacity, the gas storage site of Loenhout plays a strategic role in the security of Belgium's gas supply (4.74% of the 190 TWh of natural gas annually consumed in Belgium in 2021), as emphasized by the Belgian federal government in the context of the European gas crisis resulting from the 2022 Russian invasion of Ukraine. The gas storage system acts as a buffer to attenuate seasonal fluctuations in the gas market. Filled during the summer, it covers nearly 15% of peak winter gas consumption in Belgium. In 2007, a second potential site for underground gas storage in the Campine-Brabant Basin was investigated by VITO, the Flemish Institute for Technological Research, in Poederlee (municipality of Lille, Belgium). Its estimated total gas storage capacity was lower (300 million cubic meters) than that of Loenhout (680 million cubic meters), but the project was abandoned by Fluxys and Gazprom in 2008. There were two reasons for this: (1) the seismic survey showed that only 120 million cubic meters of gas could be stored in the reservoir structure, and (2) GREG, the regulatory body for energy in Belgium, issued negative advice to the government. GREG disagreed with the project because it was a joint venture between Fluxys and the Russian gas giant Gazprom and because, according to the contract of the joint company NV Poederlee Gas Storage, Gazprom would have had a monopoly on operating the site for 25 years without any regulation. Events The largest bloemencorso (flower parade) of Belgium is held in Loenhout. All the villages and hamlets in the area compete over who has the most beautiful floats made out of flowers. The event is held on the second Sunday of September. The Azencross is an annual cyclo-cross race held at the end of December in Loenhout. Notable people Goswin Haex van Loenhout (1398–1475), Roman Catholic prelate who served as Auxiliary Bishop of Utrecht Johannes Stadius (1527–1579), astronomer, astrologer, and mathematician Marten Van Riel (born 1992), triathlete Gallery References Populated places in Antwerp Province Wuustwezel Natural gas storage
Loenhout
Chemistry
1,224
12,353,466
https://en.wikipedia.org/wiki/Gadd45
The Growth Arrest and DNA Damage or gadd45 genes, including GADD45A (originally termed gadd45) GADD45B (originally termed MyD118), and GADD45G (originally termed CR6), are implicated as stress sensors that modulate the response of mammalian cells to genotoxic/physiological stress, and modulate tumor formation. Gadd45 proteins interact with other proteins implicated in stress responses, including PCNA, p21, Cdc2/CyclinB1, MEKK4, and p38 kinase. GADD45 proteins regulate differentiation at the two cell stage of embryogenesis, a key stage of zygotic genome activation. GADD45 likely acts by promoting TET-mediated DNA demethylation leading to the induction of expression of genes necessary for zygote activation. Overexpression of the GADD45 gene in the Drosophila melanogaster nervous system significantly increases longevity. This longevity increase can be attributed to more efficient recognition and repair of spontaneous DNA damages generated by physiological processes and environmental factors. History Gadd45a was discovered and characterized in the laboratory of Dr. Albert J. Fornace Jr. in 1988. Gadd45b (MyD118) was discovered and characterized in the laboratories of Drs. Dan A. Liebermann and Barbara Hoffman in 1991. Gadd45g (CR6) was discovered and characterized in the laboratories of Drs. Kenneth Smith, Dan A. Liebermann, and Barbara Hoffman in 1993 and 1999. See also GADD45A GADD45B GADD45G References External links Mammal genes
Gadd45
Chemistry,Biology
339
66,979,123
https://en.wikipedia.org/wiki/WiFi%20Sensing
Wi-Fi Sensing (also referred to as WLAN Sensing) is a technology that uses existing Wi-Fi signals for the purpose of  detecting events or changes such as motion, gesture recognition, and biometric measurement (e.g. breathing). Wi-Fi Sensing allows for the utilization of conventional  Wi-Fi transceiver hardware and Radio Frequency (RF) spectrum for both communication and sensing purposes. The integration of communication and sensing functionalities within mobile networking technology constitutes a large area of exploration and is commonly referred to as Joint Communications and radar/radio Sensing (JCAS). This convergence of technologies presents an opportunity to harness pre-existing hardware and infrastructure, fostering the emergence of novel services, while facilitating a higher level of interaction with networked devices (e.g. IoT and automation).   Wi-Fi technology operates across multiple frequency bands, Broadly categorized into two groups: (a) sub-7 GHz (including 2.4 GHz, 5 GHz and 6 GHz) and (b) 60 GHz. Common Wi-Fi routers and IoT devices (including those compliant with IEEE 802.11n/ac/ax/be, or Wi-Fi 4/5/6/7) predominantly operate within the sub-7 GHz range. The widespread global adoption of these frequencies has at times resulted in pronounced network congestion, particularly in the 2.4 GHz and 5 GHz bands. Consequently, the 6 GHz band, characterized by reduced congestion and reduced latency, has been introduced. Separately, a new branch of Wi-Fi, called WiGig, operates at 60 GHz supporting higher data rates over very short distances through wider bandwidth (including IEEE 802.11ad/aj/ay). These two groups provide a unique range of possible use cases dependent on the physical electro-magnetic propagation properties, approved power levels, and allocated bandwidth resources. The features of this technology can be broadly categorized into four domains: Detection (binary classification, e.g. intruder detection, fall-down detection, presence detection), Localization (e.g. where motion occurs) Recognition (multi-class classification, e.g. gesture, gait, human/pet, activity of daily living), and Estimation (e.g. quantity values of size, length, angle, distance, breathing rate, heart rate, people counting, etc.). To date, detection of motion, filter of motion (i.e., pets and fans), the relative amount of motion and as well as localization have been included in commercialized Wi-Fi Sensing applications. Technical Wi-Fi possesses a structured architecture comprising a well-defined Medium Access Control (MAC) layer, complemented by a distinct PHY layer, as specified in the 802.11 standard. Wi-Fi Sensing leverages the standard physical layer (PHY) of Wi-Fi for both sensing measurements as well as digital communication. Since the PHY has been designed for communications, sensing operations must rely on the normal transmissions as defined by the 802.11 standard. Wi-Fi Sensing adds measurements of the orthogonal frequency-division multiplexing (OFDM) RF signals used by the PHY to detect features in the local physical environment. Using measurements like received signal strength and signal phase information among others, it is possible to detect objects in proximity to the radio. Noting how these change over time enables interpretation of changes in the environment. Development work continues on more powerful processing, higher resolution measurements in new generation radios, and new software models to enable better detection within the local environment. 
This improves performance in existing use cases and opens new opportunities for the technology. History Wi-Fi Sensing originated with the establishment of the Wi-Fi Sensing Work Group by the Wireless Broadband Alliance (WBA). The WBA, an industry association, focuses on promoting the widespread integration of wireless broadband and advancing converged wireless services. Key figures within the WBA, including executives and wireless technology experts, recognized the innovative potential of Wi-Fi Sensing. In response, the WBA strategically formed the Wi-Fi Sensing Work Group to raise awareness and foster industry engagement. The Wi-Fi Sensing group, acknowledging the imperative for scalability and widespread acceptance, initially conducted extensive work on the potential of Wi-Fi Sensing, which involved delving into the applications, key performance indicators (KPIs), testing guidelines, and challenges associated with the technology. The culmination of these efforts formed the basis for a formal proposal that was presented to the IEEE with the aim of establishing standardized protocols for Wi-Fi Sensing. On September 29, 2020, the IEEE Standards Association granted approval to the IEEE 802.11bf project, which focuses on Wireless Local Area Network (WLAN) Sensing standardization. The primary objective of this endeavor was the formulation of standards governing the interoperability of wireless devices compliant with the IEEE 802.11bf specifications. These standards were designed to facilitate the generation and provision of low-level (PHY and MAC) channel measurements such as channel state information (CSI). This initiative sought to enable a wide range of Wi-Fi Sensing applications. IEEE 802.11bf supports WLAN sensing across both sub-7 GHz and 60 GHz frequency bands. Academic Much of the early academic research on wireless sensing was based on large Software-Defined-Radio (SDR) hardware, such as the Ettus Research USRP. By employing wireless signals distinct from conventional Wi-Fi, SDR technology offered the advantage of flexibility, enabling the execution of custom operations that were impossible with off-the-shelf Wi-Fi hardware due to its inherently inflexible design and implementation. The requirement of a high-end SDR, however, made such systems challenging to commercialize as products. Subsequent efforts within the academic community shifted the focus from SDR hardware back to standard Wi-Fi, culminating in the development of tools for extracting Channel State Information (CSI) measurements from standard 802.11n Network Interface Cards (NICs). Some early academic papers and conference mentions include: “Advancing wireless link signatures for location distinction,” Proc. of ACM MobiCom, 2008, pp. “FIMD: Fine-grained Device-free Motion Detection,” 2012 IEEE 18th International Conference on Parallel and Distributed Systems, pp.
219-234 “E-eyes: Device-free Location-oriented Activity Identification Using Fine-grained Wi-Fi Signatures” from 2014 at the 20th annual international conference on Mobile computing and networking “Tracking Vital Signs During Sleep Leveraging Off-the-shelf Wi-Fi” from the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing in 2015 “Gait recognition using Wi-Fi signals” from the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing "Inferring motion direction using commodity wi-fi for interactive exergames" at the CHI Conference from the 2017 CHI conference on human factors in computing systems “Smart user authentication through actuation of daily activities leveraging Wi-Fi-enabled IoT” at the 18th ACM international symposium on mobile ad hoc networking and computing in 2017 There is also a book available via Cambridge Press, Wireless AI: Wireless Sensing, Positioning, IoT, and Communications. Industry Associations The Wireless Broadband Alliance (WBA) has taken proactive measures to foster industry awareness and comprehension of Wi-Fi Sensing by instituting a specialized workgroup dedicated to this technology domain. As part of their educational initiative, the WBA created a Wi-Fi Sensing workgroup which has released a series of white papers addressing various facets of Wi-Fi Sensing. In October 2019, The Wireless Broadband Alliance (WBA) published an industry white paper, Wi-Fi Sensing: A New Technology Emerges, providing a comprehensive analysis of existing Wi-Fi standards discerning gaps and unexplored domains that hold promise for potential enhancements. The paper explores early applications of Wi-Fi Sensing, including motion detection, gesture recognition, and biometric measurement. Moreover, it identifies potential business opportunities within the home security, health care, enterprise, and building automation/management markets. Subsequently, the WBA supplemented this foundational document with additional white papers on “Wi-Fi Sensing:Test Methodology and Performance Metrics” and “Wi-Fi Sensing: Deployment Guidelines”, extending these resources to its membership base. The IEEE 802.11bf Task Group, operating within the broader IEEE 802.11 Working Group, is diligently working on the standardization of Wi-Fi Sensing. Recognizing the growing importance and potential of Wi-Fi Sensing in various applications, spanning from smart homes to healthcare, the task group aims to establish a unified framework for its implementation. Their efforts revolve around defining technical specifications and protocols to ensure interoperability, reliability, and operational efficiency of Wi-Fi Sensing technologies. By setting these standards, the IEEE 802.11bf Task Group is facilitating an integrated and harmonized adoption of Wi-Fi Sensing across industries, ensuring seamless communication among devices and systems thereby maximizing the benefits of this innovative technology. Commercialization 2017: Aura branded consumer product introduced 2017: Cloud-based sensing solution 2018: Expansion of Wi-Fi Sensing into Wi-Fi mesh networks 2019: Successful field testing and deployment of Aerial's Wi-Fi Sensing by Telefonica S.A (Spain) to its ISP customers (Telefonica, Aerial) 2021: Airties integrates Wi-Fi Sensing into its Wi-Fi 6 access points for ISPs 2022: First commercial Elder Care solution 2022: “Home Awareness” launched by Verizon Fios 2023: “TruPresence” incorporated into Airties access points References Wi-Fi
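As a toy illustration of the principle sketched in the Technical section, where changes in channel measurements over time indicate changes in the environment, the snippet below flags "motion" whenever the short-window variance of CSI subcarrier amplitudes rises above a threshold. The data are synthetic, and the window length and threshold are arbitrary illustrative choices, not parameters from the 802.11bf specification or any commercial product:

```python
# Toy motion detector based on Wi-Fi channel state information (CSI) amplitudes.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_subcarriers = 600, 56
csi_amp = 1.0 + 0.02 * rng.standard_normal((n_frames, n_subcarriers))   # quiet room
csi_amp[200:350] += 0.15 * rng.standard_normal((150, n_subcarriers))    # "motion" burst

WINDOW = 50          # frames per analysis window (assumed)
THRESHOLD = 0.005    # variance threshold (assumed; would be calibrated in practice)

for start in range(0, n_frames - WINDOW + 1, WINDOW):
    window = csi_amp[start:start + WINDOW]
    score = window.var(axis=0).mean()   # temporal variance, averaged over subcarriers
    state = "motion" if score > THRESHOLD else "still"
    print(f"frames {start:3d}-{start + WINDOW - 1:3d}: score={score:.4f} -> {state}")
```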
WiFi Sensing
Technology
1,966
19,634,105
https://en.wikipedia.org/wiki/JANOG
JANOG is the Internet network operators' group for the Japanese Internet service provider (ISP) community. It was established in 1997. JANOG holds regular meetings for the ISP community, with hundreds of attendees. Although JANOG has no formal budget of its own, it draws on the resources of its member companies to run these meetings and its other activities. References External links JANOG English-language home page Main Japanese-language JANOG home page Internet Network Operators' Groups Computer networking Organizations established in 1997 Internet in Japan Trade associations based in Japan
JANOG
Technology,Engineering
107
3,293,594
https://en.wikipedia.org/wiki/Decimal%20representation
A decimal representation of a non-negative real number $r$ is its expression as a sequence of symbols consisting of decimal digits traditionally written with a single separator: $r = b_k b_{k-1} \cdots b_0 . a_1 a_2 \cdots$ Here $.$ is the decimal separator, $k$ is a nonnegative integer, and $b_0, \ldots, b_k, a_1, a_2, \ldots$ are digits, which are symbols representing integers in the range 0, ..., 9. Commonly, $b_k \neq 0$ if $k \geq 1$. The sequence of the $a_i$—the digits after the dot—is generally infinite. If it is finite, the lacking digits are assumed to be 0. If all $a_i$ are $0$, the separator is also omitted, resulting in a finite sequence of digits, which represents a natural number. The decimal representation represents the infinite sum: $r = \sum_{i=0}^{k} b_i 10^i + \sum_{i=1}^{\infty} \frac{a_i}{10^i}.$ Every nonnegative real number has at least one such representation; it has two such representations (with $b_k \neq 0$ if $k \geq 1$) if and only if one has a trailing infinite sequence of $0$, and the other has a trailing infinite sequence of $9$. For having a one-to-one correspondence between nonnegative real numbers and decimal representations, decimal representations with a trailing infinite sequence of $9$ are sometimes excluded. Integer and fractional parts The natural number $\sum_{i=0}^{k} b_i 10^i$ is called the integer part of $r$, and is denoted by $a_0$ in the remainder of this article. The sequence of the $a_i$ represents the number $0.a_1 a_2 \ldots = \sum_{i=1}^{\infty} \frac{a_i}{10^i},$ which belongs to the interval $[0,1)$ and is called the fractional part of $r$ (except when all $a_i$ are equal to $9$). Finite decimal approximations Any real number can be approximated to any desired degree of accuracy by rational numbers with finite decimal representations. Assume $x \geq 0$. Then for every integer $n \geq 1$ there is a finite decimal $r_n = a_0.a_1 a_2 \cdots a_n$ such that: $r_n \leq x < r_n + \frac{1}{10^n}.$ Proof: Let $r_n = \frac{p}{10^n}$, where $p = \lfloor 10^n x \rfloor$. Then $p \leq 10^n x < p + 1$, and the result follows from dividing all sides by $10^n$. (The fact that $r_n$ has a finite decimal representation is easily established.) Non-uniqueness of decimal representation and notational conventions Some real numbers have two infinite decimal representations. For example, the number 1 may be equally represented by 1.000... as by 0.999... (where the infinite sequences of trailing 0's or 9's, respectively, are represented by "..."). Conventionally, the decimal representation without trailing 9's is preferred. Moreover, in the standard decimal representation of $x$, an infinite sequence of trailing 0's appearing after the decimal point is omitted, along with the decimal point itself if $x$ is an integer. Certain procedures for constructing the decimal expansion of $x$ will avoid the problem of trailing 9's. For instance, the following algorithmic procedure will give the standard decimal representation: Given $x \geq 0$, we first define $a_0$ (the integer part of $x$) to be the largest integer such that $a_0 \leq x$ (i.e., $a_0 = \lfloor x \rfloor$). If $x = a_0$ the procedure terminates. Otherwise, for $(a_i)_{i=0}^{n-1}$ already found, we define $a_n$ inductively to be the largest integer such that: $a_0 + \frac{a_1}{10} + \frac{a_2}{10^2} + \cdots + \frac{a_n}{10^n} \leq x.$ The procedure terminates whenever $a_n$ is found such that equality holds in this inequality; otherwise, it continues indefinitely to give an infinite sequence of decimal digits. It can be shown that $x = \sup_n \left\{ a_0 + \frac{a_1}{10} + \cdots + \frac{a_n}{10^n} \right\}$ (conventionally written as $x = a_0.a_1 a_2 a_3 \cdots$), where $a_1, a_2, a_3, \ldots \in \{0, 1, \ldots, 9\}$, and the nonnegative integer $a_0$ is represented in decimal notation. This construction is extended to $x < 0$ by applying the above procedure to $-x$ and denoting the resultant decimal expansion by $-a_0.a_1 a_2 a_3 \cdots$. Types Finite The decimal expansion of non-negative real number x will end in zeros (or in nines) if, and only if, x is a rational number whose denominator is of the form $2^n 5^m$, where m and n are non-negative integers. Proof: If the decimal expansion of x ends in zeros, then $x = \frac{r}{10^n} = \frac{r}{2^n 5^n}$ for some n, so the denominator of x is of the form $10^n = 2^n 5^n$. Conversely, if the denominator of x is of the form $2^n 5^m$, then $x = \frac{p}{2^n 5^m} = \frac{2^m 5^n p}{2^{n+m} 5^{n+m}} = \frac{2^m 5^n p}{10^{n+m}}$ for some p, so x is of the form $\frac{q}{10^k}$ for some k, and hence its decimal expansion ends in zeros. 
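The digit-by-digit construction described above translates directly into code. The following sketch is illustrative only (the function name and the digit cap are arbitrary choices, not from the article's sources); it uses Python's fractions module to hold $x$ exactly, and because it stops as soon as the remainder is zero it never produces trailing 9's.

    from fractions import Fraction

    def decimal_digits(x: Fraction, max_digits: int = 20):
        """Return (integer_part, list_of_fractional_digits) of a non-negative rational x."""
        assert x >= 0
        a0 = int(x)                      # largest integer a0 with a0 <= x
        digits = []
        remainder = x - a0               # fractional part, 0 <= remainder < 1
        for _ in range(max_digits):
            if remainder == 0:           # equality reached: the expansion terminates
                break
            remainder *= 10
            d = int(remainder)           # next digit, the largest choice keeping the sum <= x
            digits.append(d)
            remainder -= d
        return a0, digits

    print(decimal_digits(Fraction(1, 4)))    # (0, [2, 5])            -> 0.25
    print(decimal_digits(Fraction(1, 3)))    # (0, [3, 3, 3, ...])    truncated at 20 digits
    print(decimal_digits(Fraction(8, 1)))    # (8, [])                -> 8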
Infinite Repeating decimal representations Some real numbers have decimal expansions that eventually get into loops, endlessly repeating a sequence of one or more digits: $\tfrac{1}{3}$ = 0.33333... $\tfrac{1}{7}$ = 0.142857142857... $\tfrac{1318}{185}$ = 7.1243243243... Every time this happens the number is still a rational number (i.e. can alternatively be represented as a ratio of an integer and a positive integer). Also the converse is true: The decimal expansion of a rational number is either finite, or endlessly repeating. Finite decimal representations can also be seen as a special case of infinite repeating decimal representations. For example, $\tfrac{36}{25}$ = 1.44 = 1.4400000...; the endlessly repeated sequence is the one-digit sequence "0". Non-repeating decimal representations Other real numbers have decimal expansions that never repeat. These are precisely the irrational numbers, numbers that cannot be represented as a ratio of integers. Some well-known examples are: $\sqrt{2}$ = 1.41421356237309504880...   e  = 2.71828182845904523536...   π  = 3.14159265358979323846... Conversion to fraction Every decimal representation of a rational number can be converted to a fraction by converting it into a sum of the integer, non-repeating, and repeating parts and then converting that sum to a single fraction with a common denominator. For example, a repeating block of $k$ digits satisfies the lemma $0.\overline{d_1 d_2 \cdots d_k} = \frac{d_1 d_2 \cdots d_k}{10^k - 1},$ so that $7.1\overline{243} = 7 + \frac{1}{10} + \frac{1}{10} \cdot \frac{243}{999} = \frac{1318}{185}$. If there are no repeating digits one assumes that there is a forever repeating 0, although since that makes the repeating term zero the sum simplifies to two terms and a simpler conversion. See also Decimal Series (mathematics) IEEE 754 Simon Stevin References Further reading Mathematical notation Articles containing proofs
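The conversion just described is mechanical enough to code. The sketch below is illustrative (not from the article's sources): it takes the integer part, the non-repeating fractional digits and the repeating block as strings and returns an exact Fraction using the $10^k - 1$ lemma.

    from fractions import Fraction

    def repeating_decimal_to_fraction(integer: str, non_repeating: str, repeating: str) -> Fraction:
        """E.g. 7.1(243): integer='7', non_repeating='1', repeating='243'."""
        result = Fraction(int(integer))
        n = len(non_repeating)
        if non_repeating:
            result += Fraction(int(non_repeating), 10 ** n)
        if repeating:
            k = len(repeating)
            # A repeating block contributes block / (10^k - 1), shifted past the
            # non-repeating digits by a further factor of 10^n.
            result += Fraction(int(repeating), (10 ** k - 1) * 10 ** n)
        return result

    print(repeating_decimal_to_fraction("7", "1", "243"))   # 1318/185
    print(repeating_decimal_to_fraction("0", "", "3"))      # 1/3
    print(repeating_decimal_to_fraction("1", "44", ""))     # 36/25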
Decimal representation
Mathematics
1,161
35,136,449
https://en.wikipedia.org/wiki/Lactofuchsin%20mount
A Lactofuchsin mount (also spelled Lacto-fuchsin or Lacto-Fuchsin) is a technique used for mounting fungi with hyphae on a microscope slide for examination. The main advantage of a lactofuchsin mount is that if performed correctly, it preserves the structure and arrangement of any hyphae that are present. Advantages To examine the hyphae of fungi under a microscope, a wet mount is essential. While this is possible to do with a water based mount, a better result can be obtained with lactofuchsin mounting fluid, which both sticks to the cell walls and colours the cell walls red in the process. Lactofuchsin, a 1% solution of basic fuchsine in lactic acid, dries much slower than water, so the slide may be preserved for a longer period, particularly if the edges of the finished slide are sealed, for example with clear nail polish. In addition, the refractive index of the fluid is significantly different to that of the cell walls, which provides a stronger visual contrast of the cell walls against the background. Disadvantages A significant disadvantage of Lactofuchsin is its cost; prices are over US$100 for a small 20mL bottle. Only a few drops are used for each mount. Lactofuchsin is poisonous. References Microscopy Mycology
Lactofuchsin mount
Chemistry,Biology
279
2,526,931
https://en.wikipedia.org/wiki/Isotopes%20of%20terbium
Naturally occurring terbium (65Tb) is composed of one stable isotope, 159Tb. Thirty-seven radioisotopes have been characterized, with the most stable being 158Tb with a half-life of 180 years, 157Tb with a half-life of 71 years, and 160Tb with a half-life of 72.3 days. All of the remaining radioactive isotopes have half-lives that are less than 6.907 days, and the majority of these have half-lives that are less than 24 seconds. This element also has 27 meta states, with the most stable being 156m1Tb (t1/2 = 24.4 hours), 154m2Tb (t1/2 = 22.7 hours) and 154m1Tb (t1/2 = 9.4 hours). The primary decay mode before the most abundant stable isotope, 159Tb, is electron capture, and the primary mode behind is beta decay. The primary decay products before 159Tb are element Gd (gadolinium) isotopes, and the primary products after 159Tb are element Dy (dysprosium) isotopes. List of isotopes |-id=Terbium-135 | 135Tb | style="text-align:right" | 65 | style="text-align:right" | 70 | 134.96452(43)# | 1.01(28) ms | p | 134Gd | (7/2−) | |-id=Terbium-139 | 139Tb | style="text-align:right" | 65 | style="text-align:right" | 74 | 138.94833(32)# | 1.6(2) s | β+ | 139Gd | 5/2−# | |-id=Terbium-140 | rowspan=3|140Tb | rowspan=3 style="text-align:right" | 65 | rowspan=3 style="text-align:right" | 75 | rowspan=3|139.94581(86) | rowspan=3|2.29(15) s | β+ (99.74%) | 140Gd | rowspan=3|(7+) | rowspan=3| |- | EC (<3%) | 140Gd |- | β+, p (0.26%) | 139Eu |-id=Terbium-141 | 141Tb | style="text-align:right" | 65 | style="text-align:right" | 76 | 140.94145(11) | 3.5(2) s | β+ | 141Gd | (5/2−) | |-id=Terbium-141m | style="text-indent:1em" | 141mTb | colspan="3" style="text-indent:2em" | 0(200)# keV | 7.9(6) s | β+ | 141Gd | 11/2−# | |-id=Terbium-142 | rowspan=3|142Tb | rowspan=3 style="text-align:right" | 65 | rowspan=3 style="text-align:right" | 77 | rowspan=3|141.93928(75) | rowspan=3|597(17) ms | β+ (96.8%) | 142Gd | rowspan=3|1+ | rowspan=3| |- | EC (3.2%) | 142Gd |- | β+, p (0.0022%) | 141Eu |-id=Terbium-142m1 | style="text-indent:1em" | 142m1Tb | colspan="3" style="text-indent:2em" | 279.7(4) keV | 303(17) ms | IT | 142Tb | 5− | |-id=Terbium-142m2 | style="text-indent:1em" | 142m2Tb | colspan="3" style="text-indent:2em" | 652.1(6) keV | 26(1) μs | IT | 142Tb | 8+ | |-id=Terbium-143 | 143Tb | style="text-align:right" | 65 | style="text-align:right" | 78 | 142.935137(55) | 12(1) s | β+ | 143Gd | (11/2−) | |-id=Terbium-143m | style="text-indent:1em" | 143mTb | colspan="3" style="text-indent:2em" | 0(100)# keV | 17(4) s | | | 5/2+# | |-id=Terbium-144 | 144Tb | style="text-align:right" | 65 | style="text-align:right" | 79 | 143.933045(30) | ~1 s | β+ | 144Gd | 1+ | |-id=Terbium-144m1 | rowspan=2 style="text-indent:1em" | 144m1Tb | rowspan=2 colspan="3" style="text-indent:2em" | 396.9(5) keV | rowspan=2|4.25(15) s | IT (66%) | 144Tb | rowspan=2|6− | rowspan=2| |- | β+ (34%) | 144Gd |-id=Terbium-144m2 | style="text-indent:1em" | 144m2Tb | colspan="3" style="text-indent:2em" | 476.2(5) keV | 2.8(3) μs | IT | 144Tb | (8−) | |-id=Terbium-144m3 | style="text-indent:1em" | 144m3Tb | colspan="3" style="text-indent:2em" | 517.1(5) keV | 670(60) ns | IT | 144Tb | (9+) | |-id=Terbium-144m4 | style="text-indent:1em" | 144m4Tb | colspan="3" style="text-indent:2em" | 544.5(6) keV | <300 ns | IT | 144Tb | (10+) | |-id=Terbium-145 | 145Tb | style="text-align:right" | 65 | style="text-align:right" | 80 | 144.92872(12) | 30.9(6) s | β+ | 145Gd | (11/2−) | |-id=Terbium-145m | style="text-indent:1em" | 145mTb | colspan="3" style="text-indent:2em" 
| 860(230) keV | | | | (3/2+) | |-id=Terbium-146 | 146Tb | style="text-align:right" | 65 | style="text-align:right" | 81 | 145.927253(48) | 8(4) s | β+ | 146Gd | 1+ | |-id=Terbium-146m1 | style="text-indent:1em" | 146m1Tb | colspan="3" style="text-indent:2em" | 150(100)# keV | 24.1(5) s | β+ | 146Gd | 5− | |-id=Terbium-146m2 | style="text-indent:1em" | 146m2Tb | colspan="3" style="text-indent:2em" | 930(100)# keV | 1.18(2) ms | IT | 146Tb | 10+ | |-id=Terbium-147 | 147Tb | style="text-align:right" | 65 | style="text-align:right" | 82 | 146.9240546(87) | 1.64(3) h | β+ | 147Gd | (1/2+) | |-id=Terbium-147m | style="text-indent:1em" | 147mTb | colspan="3" style="text-indent:2em" | 50.6(9) keV | 1.87(5) min | β+ | 147Gd | (11/2−) | |-id=Terbium-148 | 148Tb | style="text-align:right" | 65 | style="text-align:right" | 83 | 147.924275(13) | 60(1) min | β+ | 148Gd | 2− | |-id=Terbium-148m1 | style="text-indent:1em" | 148m1Tb | colspan="3" style="text-indent:2em" | 90.1(3) keV | 2.20(5) min | β+ | 148Gd | (9)+ | |-id=Terbium-148m2 | style="text-indent:1em" | 148m2Tb | colspan="3" style="text-indent:2em" | 8618.6(10) keV | 1.310(7) μs | IT | 148Tb | (27+) | |-id=Terbium-149 | rowspan=2|149Tb | rowspan=2 style="text-align:right" | 65 | rowspan=2 style="text-align:right" | 84 | rowspan=2|148.9232538(39) | rowspan=2|4.118(25) h | β+ (83.3%) | 149Gd | rowspan=2|1/2+ | rowspan=2| |- | α (16.7%) | 145Eu |-id=Terbium-149m | rowspan=2 style="text-indent:1em" | 149mTb | rowspan=2 colspan="3" style="text-indent:2em" | 35.78(13) keV | rowspan=2|4.16(4) min | β+ (99.98%) | 149Gd | rowspan=2|11/2− | rowspan=2| |- | α (0.022%) | 145Eu |-id=Terbium-150 | 150Tb | style="text-align:right" | 65 | style="text-align:right" | 85 | 149.9236648(79) | 3.48(16) h | β+ | 150Gd | (2)− | |-id=Terbium-150m | style="text-indent:1em" | 150mTb | colspan="3" style="text-indent:2em" | 461(27) keV | 5.8(2) min | β+ | 150Gd | 9+ | |-id=Terbium-151 | rowspan=2|151Tb | rowspan=2 style="text-align:right" | 65 | rowspan=2 style="text-align:right" | 86 | rowspan=2|150.9231090(44) | rowspan=2|17.609(1) h | β+ (99.99%) | 151Gd | rowspan=2|1/2+ | rowspan=2| |- | α (.0095%) | 147Eu |-id=Terbium-151m | rowspan=2 style="text-indent:1em" | 151mTb | rowspan=2 colspan="3" style="text-indent:2em" | 99.53(5) keV | rowspan=2|25(3) s | IT (93.4%) | 151Tb | rowspan=2|11/2− | rowspan=2| |- | β+ (6.6%) | 151Gd |-id=Terbium-152 | rowspan=3|152Tb | rowspan=3 style="text-align:right" | 65 | rowspan=3 style="text-align:right" | 87 | rowspan=3|151.924082(43) | rowspan=3|17.8784(95) h | EC (83%) | rowspan=2|152Gd | rowspan=3|2− | rowspan=3| |- | β+ (17%) |- | α (<7×10−7%) | 148Eu |-id=Terbium-152m1 | style="text-indent:1em" | 152m1Tb | colspan="3" style="text-indent:2em" | 342.15(16) keV | 960(10) ns | IT | 152Tb | 5− | |-id=Terbium-152m2 | rowspan=2 style="text-indent:1em" | 152m2Tb | rowspan=2 colspan="3" style="text-indent:2em" | 501.74(19) keV | rowspan=2|4.2(1) min | IT (78.9%) | 152Tb | rowspan=2|8+ | rowspan=2| |- | β+ (21.1%) | 152Gd |-id=Terbium-153 | 153Tb | style="text-align:right" | 65 | style="text-align:right" | 88 | 152.9234417(42) | 2.34(1) d | β+ | 153Gd | 5/2+ | |-id=Terbium-153m | style="text-indent:1em" | 153mTb | colspan="3" style="text-indent:2em" | 163.175(5) keV | 186(4) μs | IT | 153Tb | 11/2− | |-id=Terbium-154 | 154Tb | style="text-align:right" | 65 | style="text-align:right" | 89 | 153.924684(49) | 9.994(39) h | β+ | 154Gd | 3− | |-id=Terbium-154m1 | style="text-indent:1em" | 154m1Tb | colspan="3" style="text-indent:2em" | 130(50)# 
keV | 21.5(4) h | β+ | 154Gd | 0− | |-id=Terbium-154m2 | style="text-indent:1em" | 154m2Tb | colspan="3" style="text-indent:2em" | 200(150)# keV | 22.7(5) h | β+ | 154Gd | 7− | |-id=Terbium-154m3 | style="text-indent:1em" | 154m3Tb | colspan="3" style="text-indent:2em" | 405(150)# keV | 513(42) ns | IT | 154Tb | | |-id=Terbium-155 | 155Tb | style="text-align:right" | 65 | style="text-align:right" | 90 | 154.923510(11) | 5.32(6) d | EC | 155Gd | 3/2+ | |-id=Terbium-156 | 156Tb | style="text-align:right" | 65 | style="text-align:right" | 91 | 155.9247542(40) | 5.35(10) d | β+ | 156Gd | 3− | |-id=Terbium-156m1 | style="text-indent:1em" | 156m1Tb | colspan="3" style="text-indent:2em" | 88.4(2) keV | 5.3(2) h | IT | 156Tb | (0+) | |-id=Terbium-156m2 | style="text-indent:1em" | 156m2Tb | colspan="3" style="text-indent:2em" | 100(50)# keV | 24.4(10) h | IT | 156Tb | (7−) | |-id=Terbium-157 | 157Tb | style="text-align:right" | 65 | style="text-align:right" | 92 | 156.9240319(11) | 71(7) y | EC | 157Gd | 3/2+ | |-id=Terbium-158 | rowspan=2|158Tb | rowspan=2 style="text-align:right" | 65 | rowspan=2 style="text-align:right" | 93 | rowspan=2|157.9254199(14) | rowspan=2|180(11) y | β+ (83.4%) | 158Gd | rowspan=2|3− | rowspan=2| |- | β− (16.6%) | 158Dy |-id=Terbium-158m1 | style="text-indent:1em" | 158m1Tb | colspan="3" style="text-indent:2em" | 110.3(12) keV | 10.70(17) s | IT | 158Tb | 0− | |-id=Terbium-158m2 | style="text-indent:1em" | 158m2Tb | colspan="3" style="text-indent:2em" | 388.39(11) keV | 0.40(4) ms | IT | 158Tb | 7− | |-id=Terbium-159 | 159Tb | style="text-align:right" | 65 | style="text-align:right" | 94 | 158.9253537(12) | colspan=3 align=center|Stable | 3/2+ | 1.0000 |-id=Terbium-160 | 160Tb | style="text-align:right" | 65 | style="text-align:right" | 95 | 159.9271746(12) | 72.3(2) d | β− | 160Dy | 3− | |-id=Terbium-161 | 161Tb | style="text-align:right" | 65 | style="text-align:right" | 96 | 160.9275768(13) | 6.948(5) d | β− | 161Dy | 3/2+ | |-id=Terbium-162 | 162Tb | style="text-align:right" | 65 | style="text-align:right" | 97 | 161.9292754(22) | 7.60(15) min | β− | 162Dy | (1−) | |-id=Terbium-162m | style="text-indent:1em" | 162mTb | colspan="3" style="text-indent:2em" | 286(3) keV | 10# min | | | 4−# | |-id=Terbium-163 | 163Tb | style="text-align:right" | 65 | style="text-align:right" | 98 | 162.9306536(44) | 19.5(3) min | β− | 163Dy | 3/2+ | |-id=Terbium-164 | 164Tb | style="text-align:right" | 65 | style="text-align:right" | 99 | 163.9333276(20) | 3.0(1) min | β− | 164Dy | (5+) | |-id=Terbium-164m | style="text-indent:1em" | 164mTb | colspan="3" style="text-indent:2em" | 145(12) keV | 2# min | | | 2+# | |-id=Terbium-165 | 165Tb | style="text-align:right" | 65 | style="text-align:right" | 100 | 164.9349552(17) | 2.11(10) min | β− | 165Dy | (3/2+) | |-id=Terbium-165m | style="text-indent:1em" | 165mTb | colspan="3" style="text-indent:2em" | 207(5) keV | 0.81(8) μs | IT | 165Tb | (7/2−) | |-id=Terbium-166 | 166Tb | style="text-align:right" | 65 | style="text-align:right" | 101 | 165.9379397(16) | 27.1(15) s | β− | 166Dy | (1−) | |-id=Terbium-166m | style="text-indent:1em" | 166mTb | colspan="3" style="text-indent:2em" | 159.0(15) keV | 3.5(4) μs | IT | 166Tb | 4−# | |-id=Terbium-167 | 167Tb | style="text-align:right" | 65 | style="text-align:right" | 102 | 166.9400070(21) | 18.9(16) s | β− | 167Dy | (3/2+) | |-id=Terbium-167m | style="text-indent:1em" | 167mTb | colspan="3" style="text-indent:2em" | 200(6) keV | 1.2(1) μs | IT | 167Tb | (7/2−) | |-id=Terbium-168 | 168Tb | 
style="text-align:right" | 65 | style="text-align:right" | 103 | 167.9433371(45) | 9.4(4) s | β− | 168Dy | (4−) | |-id=Terbium-168m | style="text-indent:1em" | 168mTb | colspan="3" style="text-indent:2em" | 211(1) keV | 0.71(3) μs | IT | 168Tb | (6+) | |-id=Terbium-169 | 169Tb | style="text-align:right" | 65 | style="text-align:right" | 104 | 168.94581(32)# | 5.13(32) s | β− | 169Dy | 3/2+# | |-id=Terbium-170 | 170Tb | style="text-align:right" | 65 | style="text-align:right" | 105 | 169.94986(32)# | 960(78) ms | β− | 170Dy | 2−# | |-id=Terbium-171 | 171Tb | style="text-align:right" | 65 | style="text-align:right" | 106 | 170.95301(43)# | 1.23(10) s | β− | 171Dy | 3/2+# | |-id=Terbium-172 | 172Tb | style="text-align:right" | 65 | style="text-align:right" | 107 | 171.95739(54)# | 760(190) ms | β− | 172Dy | 6+# | |-id=Terbium-173 | 173Tb | style="text-align:right" | 65 | style="text-align:right" | 108 | 172.96081(54)# | 400# ms[>550 ns] | | | 3/2+# | |-id=Terbium-174 | 174Tb | style="text-align:right" | 65 | style="text-align:right" | 109 | 173.96568(54)# | 240# ms[>550 ns] | | | 2−# | References Isotope masses from: Isotopic compositions and standard atomic masses from: Half-life, spin, and isomer data selected from the following sources. Terbium Terbium
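The half-lives quoted above can be turned into decay estimates with the standard exponential decay law, $N(t) = N_0 \cdot 2^{-t/t_{1/2}}$. The short sketch below is a generic illustration, not part of the isotope tables; the isotope choices and time spans are only examples.

    def remaining_fraction(t: float, half_life: float) -> float:
        """Fraction of nuclei left after time t (same unit as half_life)."""
        return 0.5 ** (t / half_life)

    # 160Tb (half-life 72.3 days): fraction left after one year
    print(remaining_fraction(365.0, 72.3))    # ~0.03

    # 158Tb (half-life ~180 years): fraction left after 50 years
    print(remaining_fraction(50.0, 180.0))    # ~0.82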
Isotopes of terbium
Chemistry
5,725
6,453,717
https://en.wikipedia.org/wiki/Air%20source%20heat%20pump
An air source heat pump (ASHP) is a heat pump that can absorb heat from air outside a building and release it inside; it uses the same vapor-compression refrigeration process and much the same equipment as an air conditioner, but in the opposite direction. ASHPs are the most common type of heat pump and, usually being smaller, tend to be used to heat individual houses or flats rather than blocks, districts or industrial processes. Air-to-air heat pumps provide hot or cold air directly to rooms, but do not usually provide hot water. Air-to-water heat pumps use radiators or underfloor heating to heat a whole house and are often also used to provide domestic hot water. An ASHP can typically gain 4 kWh thermal energy from 1 kWh electric energy. They are optimized for flow temperatures between , suitable for buildings with heat emitters sized for low flow temperatures. With losses in efficiency, an ASHP can even provide full central heating with a flow temperature up to . about 10% of building heating worldwide is from ASHPs. They are the main way to phase out gas boilers (also known as "furnaces") from houses, to avoid their greenhouse gas emissions. Air-source heat pumps are used to move heat between two heat exchangers, one outside the building which is fitted with fins through which air is forced using a fan and the other which either directly heats the air inside the building or heats water which is then circulated around the building through radiators or underfloor heating which releases the heat to the building. These devices can also operate in a cooling mode where they extract heat via the internal heat exchanger and eject it into the ambient air using the external heat exchanger. Some can be used to heat water for washing which is stored in a domestic hot water tank. Air-source heat pumps are relatively easy and inexpensive to install, so are the most widely used type. In mild weather, coefficient of performance (COP) may be between 2 and 5, while at temperatures below around an air-source heat pump may still achieve a COP of 1 to 4. While older air-source heat pumps performed relatively poorly at low temperatures and were better suited for warm climates, newer models with variable-speed compressors remain highly efficient in freezing conditions allowing for wide adoption and cost savings in places like Minnesota and Maine in the United States. Technology Air at any natural temperature contains some heat. An air source heat pump transfers some of this from one place to another, for example between the outside and inside of a building. An air-to air system can be designed to transfer heat in either direction, to heat or cool the interior of the building in winter and summer respectively. Internal ducting may be used to distribute the air. An air-to-water system only pumps heat inwards, and can provide space heating and hot water. For simplicity, the description below focuses on use for interior heating. The technology is similar to a refrigerator or freezer or air conditioning unit: the different effect is due to the location of the different system components. Just as the pipes on the back of a refrigerator become warm as the interior cools, so an ASHP warms the inside of a building whilst cooling the outside air. The main components of a split-system (called split as there are both inside and outside coils) air source heat pump are: An outdoor evaporator heat exchanger coil, which extracts heat from ambient air One or more indoor condenser heat exchanger coils. 
They transfer the heat into the indoor air, or an indoor heating system such as water-filled radiators or underfloor circuits and a domestic hot water tank. Less commonly a packaged ASHP has everything outside, with hot (or cold) air sent inside through a duct. These are also called monobloc and are useful for keeping flammable propane outside the house. An ASHP can provide three or four times as much heat as an electric resistance heater using the same amount of electricity. Burning gas or oil will emit carbon dioxide and also NOx, which can be harmful to health. An air source heat pump issues no carbon dioxide, nitrogen oxide or any other kind of gas. It uses a small amount of electricity to transfer a large amount of heat. Most ASHPs are reversible and are able to either warm or cool buildings and in some cases also provide domestic hot water. The use of an air-to-water heat pump for house cooling has been criticised. Heating and cooling is accomplished by pumping a refrigerant through the heat pump's indoor and outdoor coils. Like in a refrigerator, a compressor, condenser, expansion valve and evaporator are used to change states of the refrigerant between colder liquid and hotter gas states. When the liquid refrigerant at a low temperature and low pressure passes through the outdoor heat exchanger coils, ambient heat causes the liquid to boil (change to gas or vapor). Heat energy from the outside air has been absorbed and stored in the refrigerant as latent heat. The gas is then compressed using an electric pump; the compression increases the temperature of the gas. Inside the building, the gas passes through a pressure valve into heat exchanger coils. There, the hot refrigerant gas condenses back to a liquid and transfers the stored latent heat to the indoor air, water heating or hot water system. The indoor air or heating water is pumped across the heat exchanger by an electric pump or fan. The cool liquid refrigerant then re-enters the outdoor heat exchanger coils to begin a new cycle. Each cycle usually takes a few minutes. Most heat pumps can also operate in a cooling mode where the cold refrigerant is moved through the indoor coils to cool the room air. As of 2024 tech other than vapour compression is insignificant in the market. Usage ASHPs are the most common type of heat pump and, usually being smaller, are generally more suitable to heat individual houses rather than blocks of flats, compact urban districts or industrial processes. In dense city centres heat networks may be better than ASHP. Air source heat pumps are used to provide interior space heating and cooling even in colder climates, and can be used efficiently for water heating in milder climates. A major advantage of some ASHPs is that the same system may be used for heating in winter and cooling in summer. Though the cost of installation is generally high, it is less than the cost of a ground source heat pump, because a ground source heat pump requires excavation to install its ground loop. The advantage of a ground source heat pump is that it has access to the thermal storage capacity of the ground which allows it to produce more heat for less electricity in cold conditions. Home batteries can mitigate the risk of power cuts and like ASHPs are becoming more popular. Some ASHPs can be coupled to solar panels as primary energy source, with a conventional electric grid as backup source. Thermal storage solutions incorporating resistance heating can be used in conjunction with ASHPs. 
Storage may be more cost-effective if time of use electricity rates are available. Heat is stored in high density ceramic bricks contained within a thermally-insulated enclosure; storage heaters are an example. ASHPs may also be paired with passive solar heating. Thermal mass (such as concrete or rocks) heated by passive solar heat can help stabilize indoor temperatures, absorbing heat during the day and releasing heat at night, when outdoor temperatures are colder and heat pump efficiency is lower. Replacing gas heating in existing houses Good home insulation is important. ASHPs are bigger than gas boilers and need more space outside, so the process is more complex and can be more expensive than if it was possible to just remove a gas boiler and install an ASHP in its place. If running costs are important choosing the right size is important because an ASHP which is too large will be more expensive to run. It can be more complicated to retrofit conventional heating systems that use radiators/radiant panels, hot water baseboard heaters, or even smaller diameter ducting, with ASHP-sourced heat. The lower heat pump output temperatures means radiators (and possibly pipes) may have to be replaced with larger sizes, or a low temperature underfloor heating system installed instead. Alternatively, a high temperature heat pump can be installed and existing heat emitters can be retained, however these heat pumps are more expensive to buy and run so may only be suitable for buildings which are hard to alter or insulate, such as some large historic houses. ASHP are claimed to be healthier than fossil-fuelled heating such as gas heaters by maintaining a more even temperature and avoiding harmful fumes risk. By filtering the air and reducing humidity in hot humid summer climates, they are also said to reduce dust, allergens, and mold, which poses a health risk. In cold climates Operation of normal ASHPs is generally not recommended below −10 °C. However, ASHPs designed specifically for very cold climates (in the US, these are certified under Energy Star) can extract useful heat from ambient air as cold as but electric resistance heating may be more efficient below −25 °C. This is made possible by the use of variable-speed compressors, powered by inverters. Although air source heat pumps are less efficient than well-installed ground source heat pumps (GSHPs) in cold conditions, air source heat pumps have lower initial costs and may be the most economic or practical choice. A hybrid system, with both a heat pump and an alternative source of heat such as a fossil fuel boiler, may be suitable if it is impractical to properly insulate a large house. Alternatively multiple heat pumps or a high temperature heat pump may be considered. In some weather conditions condensation will form and then freeze onto the coils of the heat exchanger of the outdoor unit, reducing air flow through the coils. To clear this condensation, the unit operates a defrost cycle, switching to cooling mode for a few minutes and heating the coils until the ice melts. Air-to-water heat pumps use heat from the circulating water for this purpose, which results in a small and probably undetectable drop in water temperature; for air-to-air systems, heat is either taken from the air in the building or using an electrical heater. Some air-to-air systems simply stop the operation of the fans of both units and switch to cooling mode so that the outdoor unit returns to being the condenser such that it heats up and defrosts. 
As discussed above, typical air-source heat pumps (ASHPs) struggle to perform efficiently at low temperatures. Ground-source heat pumps (GSHPs), which transfer heat to or from the ground using fluid-filled underground pipes (ground heat exchangers or GHEs), offer higher efficiency but are expensive to install due to labor and material costs. A ground source air heat pump (GSAHP)—or water-to-refrigerant type GSHPs —presents a viable alternative, integrating elements of ASHPs and water-to-water GSHPs. A GSAHP has three components: a GHE (vertical or horizontal), a heat pump, and a fan coil unit (FCU). The heat pump unit contains an evaporator, compressor, condenser, and expansion valve. Thermal energy is extracted from the ground through an antifreeze solution in the GHE, transferred to the refrigerant in the heat pump, and compressed before being delivered to a refrigerant-to-air heat exchanger. A fan then circulates the heated air indoors. Unlike conventional GSHPs, GSAHPs eliminate the need for hydronic systems (e.g., underfloor heating systems or wall-mounted radiators), relying instead on fans to distribute heat directly into indoor air. This reduces installation costs and complexity while retaining the efficiency benefits of GSHPs in cold climates. By extracting heat from stable ground temperatures, GSAHPs outperform ASHPs in low temperatures, achieving higher efficiency and reduced greenhouse gas emissions. Installation costs for GSAHPs are intermediate between ASHP and GSHP systems; while they eliminate the need for indoor pipework, they still require drilling or digging for the GHE. Electricity consumption drives the climate impact of heat pump systems. GSAHPs demonstrate a coefficient of performance (COP) approximately 35% higher than ASHPs under certain conditions, due to the stable ground temperatures they leverage. Additionally, the operation phase accounts for 84% of its climate impacts over a heat pump's life cycle, highlighting the importance of efficiency (i.e., higher COPs) in reducing emissions. The global warming potential (GWP) of GSAHPs is nearly 40% lower than ASHPs, further demonstrating their environmental advantages in cold climates. This efficiency advantage is especially pronounced during winter when ASHP efficiency typically declines. GSAHPs consume less electricity for heating, resulting in lower greenhouse gas emissions, particularly in regions with high heating demands and carbon-intensive electricity grids. Noise An air source heat pump requires an outdoor unit containing moving mechanical components including fans which produce noise. Modern devices offer schedules for silent mode operation with reduced fan speed. This will reduce the maximum heating power but can be applied at mild outdoor temperatures without efficiency loss. Acoustic enclosures are another approach to reduce the noise in a sensitive neighbourhood. In insulated buildings, operation can be paused at night without significant temperature loss. Only at low temperatures, frost protection forces operation after a few hours. Proper siting is also important. In the United States, the allowed night-time noise level is 45 A-weighted decibels (dBA). In the UK the limit is set at 42 dB measured from the nearest neighbour according to the MCS 020 standard or equivalent. In Germany the limit in residential areas is 35, which is usually measured by European Standard EN 12102. 
Another feature of air source heat pumps (ASHPs) external heat exchangers is their need to stop the fan from time to time for a period of several minutes in order to get rid of frost that accumulates in the outdoor unit in the heating mode. After that, the heat pump starts to work again. This part of the work cycle results in two sudden changes of the noise made by the fan. The acoustic effect of such disruption is especially powerful in quiet environments where background night-time noise may be as low as 0 to 10dBA. This is included in legislation in France. According to the French concept of noise nuisance, "noise emergence" is the difference between ambient noise including the disturbing noise, and ambient noise without the disturbing noise. By contrast a ground source heat pump has no need for an outdoor unit with moving mechanical components. Efficiency ratings The efficiency of air source heat pumps is measured by the coefficient of performance (COP). A COP of 4 means the heat pump produces 4 units of heat energy for every 1 unit of electricity it consumes. Within temperature ranges of to , the COP for many machines is fairly stable. Approximately TheoreticalMaxCOP = (desiredIndoorTempC + 273) ÷ (desiredIndoorTempC - outsideTempC). In mild weather with an outside temperature of , the COP of efficient air source heat pumps ranges from 4 to 6. However, on a cold winter day, it takes more work to move the same amount of heat indoors than on a mild day. The heat pump's performance is limited by the Carnot cycle and will approach 1.0 as the outdoor-to-indoor temperature difference increases, which for most air source heat pumps happens as outdoor temperatures approach .Heat pump construction that enables carbon dioxide as a refrigerant may have a COP of greater than 2 even down to −20 °C, pushing the break-even figure downward to . A ground source heat pump has comparatively less of a change in COP as outdoor temperatures change, because the ground from which they extract heat has a more constant temperature than outdoor air. The design of a heat pump has a considerable impact on its efficiency. Many air source heat pumps are designed primarily as air conditioning units, mainly for use in summer temperatures. Designing a heat pump specifically for the purpose of heat exchange can attain greater COP and an extended life cycle. The principal changes are in the scale and type of compressor and evaporator. Seasonally adjusted heating and cooling efficiencies are given by the heating seasonal performance factor (HSPF) and seasonal energy efficiency ratio (SEER) respectively. In the US the legal minimum efficiency is 14 or 15 SEER and 8.8 HSPF. Variable speed compressors are more efficient because they can often run more slowly and because the air passes through more slowly giving its water more time to condense, thus more efficient as drier air is easier to cool. However, they are more expensive and more likely to need maintenance or replacement. Maintenance such as changing filters can improve performance by 10% to 25%. Refrigerant types Impact on decarbonization and electricity supply Heat pumps are key to decarbonizing home energy use by phasing out gas boilers. As of 2024 the IEA says that 500 million tonnes of CO2 emissions could be cut by 2030. 
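The approximate formula for the theoretical maximum COP quoted above can be evaluated directly. The sketch below only illustrates that formula (a Carnot-style bound with temperatures in °C and the 273 offset used in the text); the function and variable names are illustrative, and real heat pumps achieve only a fraction of this bound.

    def theoretical_max_cop(indoor_c: float, outdoor_c: float) -> float:
        """Carnot-style upper bound on heating COP, as in the text's approximate formula."""
        return (indoor_c + 273.0) / (indoor_c - outdoor_c)

    for outdoor in (10, 0, -10, -20):
        print(outdoor, round(theoretical_max_cop(21.0, outdoor), 1))
    # 10 -> 26.7, 0 -> 14.0, -10 -> 9.5, -20 -> 7.2

Practical COPs are far lower (typically 2 to 5), but the same downward trend with falling outdoor temperature applies.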
As wind farms are increasingly used to supply electricity to some grids, such as Canada's Yukon Territory, the increased winter load matches well with the increased winter generation from wind turbines, and calmer days result in decreased heating load for most houses even if the air temperature is low. Heat pumps could help stabilize grids through demand response. As heat pump penetration increases some countries, such as the UK, may need to encourage households to use thermal energy storage, such as very well insulated water tanks. In some countries, such as Australia, integration of this thermal storage with rooftop solar would also help. Although higher cost heat pumps can be more efficient a 2024 study concluded that for the UK "from an energy system perspective, it is overall cost-optimal to design heat pumps with nominal COP in the range of 2.8–3.2, which typically has a specific cost lower than 650 £/kWth, and simultaneously to invest in increased capacities of renewable energy generation technologies and batteries, in the first instance, followed by OCGT and CCGT with CCS." Economics Cost buying and installing an ASHP in an existing house is expensive if there is no government subsidy, but the lifetime cost will likely be less than or similar to a gas boiler and air conditioner. This is generally also true if cooling is not required, as the ASHP will likely last longer if only heating. The lifetime cost of an air source heat pump will be affected by the price of electricity compared to gas (where available), and may take two to ten years to break even. The IEA recommends governments subsidize the purchase price of residential heat pumps, and some countries do so. Market In Norway, Australia and New Zealand most heating is from heat pumps. In 2022 heat pumps outsold fossil fuel based heating in the US and France. In the UK, annual heat pump sales have steadily grown in recent years with 26,725 heat pumps sold in 2018, a figure which has increased to 60,244 heat pumps sales in 2023. ASHPs can be helped to compete by increasing the price of fossil gas compared to that of electricity and using suitable flexible electricity pricing. In the US air-to-air is the most common type. over 80% of heat pumps are air source. In 2023 the IEA appealed for better data - especially on air-to-air. Maintenance and reliability Many of the maintenance needs for air source heat pumps reflect that of conventional air conditioning and furnace installations, such as regular air filter replacements and cleaning of both the indoor evaporator and outdoor condenser coils. However, there are additional maintenance measures unique to the operation of air source heat pumps that concern the physical means by which a heat pump extracts heat from the outdoor air. Since a heat pump running in cooling mode operates essentially the same as a conventional air conditioning system, these measures primarily concern the performance of ASHPs during the winter, especially in colder climates. In colder climates, where the compressor works harder to extract heat from the outside air, it is critical to prevent the buildup of ice and frost on the outdoor coil to maintain ASHP performance. This buildup acts as an insulation layer and decreases the rate of heat exchange by blocking the continuous flow of air over the outdoor coil. To prevent this issue, it is necessary to keep the outdoor coil clean of any dirt or grime, as this can trap moisture from the air, which freezes over the coil. 
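The "two to ten years to break even" range depends mainly on the electricity-to-gas price ratio and the seasonal COP. The following payback sketch is purely illustrative: every number in it (prices, efficiencies, heat demand, extra purchase cost) is a hypothetical assumption, not data from this article.

    def simple_payback_years(extra_capital: float, annual_heat_kwh: float,
                             gas_price: float, gas_efficiency: float,
                             elec_price: float, seasonal_cop: float) -> float:
        """Years until an ASHP's running-cost savings repay its extra purchase cost.

        Assumes the ASHP is actually cheaper to run; if savings are zero or
        negative, running costs alone never repay the extra capital.
        """
        gas_cost = annual_heat_kwh / gas_efficiency * gas_price
        ashp_cost = annual_heat_kwh / seasonal_cop * elec_price
        return extra_capital / (gas_cost - ashp_cost)

    # Hypothetical example: 15,000 kWh/yr of heat, gas at 0.08/kWh burned at 85% efficiency,
    # electricity at 0.15/kWh, seasonal COP 3.5, ASHP costing 3,000 more than a boiler.
    print(round(simple_payback_years(3000, 15000, 0.08, 0.85, 0.15, 3.5), 1))   # ~3.9 years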
In addition, it is necessary to keep the fins surrounding the condenser coil and air intake grill of the outdoor unit free of any debris, such as leaves, that could further block airflow and impede heat exchange. This upkeep helps minimize the need for frequent defrost cycles that put the heat pump into cooling mode and send heated refrigerant to the condenser coil to melt accumulated ice. These defrost cycles can cause pressure fluctuations in the refrigerant lines that lead to refrigerant leaks and diminish performance. When heating performance drops, an ASHP can remain reliable through its auxiliary heating strip that provides an additional source of heat through electrical resistance to compensate for any heat losses, although this process is significantly less efficient. It is thought that ASHP need less maintenance than fossil fuelled heating, and some say that ASHPs are easier to maintain than ground source heat pumps due to the difficulty of finding and fixing underground leaks. Installing too small an ASHP could shorten its lifetime (but one which is too large will be less efficient). However others say that boilers require less maintenance than ASHPs. A Consumer Reports survey found that "on average, around half of heat pumps are likely to experience problems by the end of the eighth year of ownership". History Modern chemical refrigeration techniques developed after the proposal of the Carnot cycle in 1824. Jacob Perkins invented an ice-making machine that used ether in 1843, and Edmond Carré built a refrigerator that used water and sulfuric acid in 1850. In Japan, Fusanosuke Kuhara, founder of Hitachi, Ltd., made an air conditioner for his own home use using compressed CO2 as a refrigerant in 1917. In 1930 Thomas Midgley Jr. discovered dichlorodifluoromethane, a chlorinated fluorocarbon (CFC) known as freon. CFCs rapidly replaced traditional refrigerant substances, including CO2 (which proved hard to compress for domestic use), for use in heat pumps and refrigerators. But from the 1980s CFCs began to lose favor as refrigerant when their damaging effects on the ozone layer were discovered. Two alternative types of refrigerant, hydrofluorocarbons (HFCs) and hydrochlorofluorocarbons (HCFCs), also lost favor when they were identified as greenhouse gases (additionally, HCFCs were found to be more damaging to the ozone layer than originally thought). The Vienna Convention for the Protection of the Ozone Layer, the Montreal Protocol and the Kyoto Protocol call for the complete abandonment of such refrigerants by 2030. In 1989, amid international concern about the effects of chlorofluorocarbons and hydrochlorofluorocarbons on the ozone layer, scientist Gustav Lorentzen and SINTEF patented a method for using CO2 as a refrigerant in heating and cooling. Further research into CO2 refrigeration was then conducted at Shecco (Sustainable HEating and Cooling with CO2) in Brussels, Belgium, leading to increasing use of CO2 refrigerant technology in Europe. In 1993 the Japanese company Denso, in collaboration with Gustav Lorentzen, developed an automobile air conditioner using CO2 as a refrigerant. They demonstrated the invention at the June 1998 International Institute of Refrigeration/Gustav Lorentzen Conference. After the conference, CRIEPI (Central Research Institute of Electric Power Industry) and TEPCO (The Tokyo Electric Power Company) approached Denso about developing a prototype air conditioner using natural refrigerant materials instead of freon. 
Together they produced 30 prototype units for a year-long experimental installation at locations throughout Japan, from the cold climate of Hokkaidō to hotter Okinawa. After this successful feasibility study, Denso obtained a patent to compress CO2 refrigerant for use in a heat pump from SINTEF in September 2000. During the early 21st century CO2 heat pumps, under the EcoCute patent, became popular for new-build housing in Japan but were slower to take off elsewhere. Manufacturing Demand for heat pumps increased in the first quarter of the 21st century in the US and Europe, with governments subsidizing them to increase energy security and decarbonisation. Europeans tend to use air-to-water (also called hydronic) systems which utilize radiators, rather than the air-to-air systems more common elsewhere. Asian countries made three-quarters of heat pumps globally in 2021. See also :Category:Heating, ventilation, and air conditioning companies Transcritical cycle References Sources IPCC reports Consumer electronics Heating, ventilation, and air conditioning Energy conservation Building engineering Construction Energy economics Environmental design Heating Heat pumps Sustainable technologies
Air source heat pump
Engineering,Environmental_science
5,228
55,763,195
https://en.wikipedia.org/wiki/N11%20%28emission%20nebula%29
N11 (also known as LMC N11, LHA 120-N 11) is the brightest emission nebula in the north-west part of the Large Magellanic Cloud in the Dorado constellation. The N11 complex is the second largest H II region of that galaxy, the largest being the Tarantula Nebula. It covers an area approximately 6 arc minutes across. It has an elliptical shape and consists of a large bubble of generally clear interstellar space surrounded by nine large nebulae. It was named by Karl Henize in 1956. Seen close up, the nebula shows pink clouds of glowing gas which resemble candy floss. It has been well studied over the years and extends 1,000 light-years across. Its particularly notable features include a huge cavity measuring 80 by 60 pc and a five-million-year-old central cluster (NGC 1761). It is surrounded by several ionized clouds where young O stars are forming. Several massive stars are within it, including LH 9, LH 10, LH 13, and LH 14. It includes a supernova remnant, N11L. In the very centre of NGC 1761 is the bright multiple star HD 32228, which contains a rare blue Wolf-Rayet star, type WC5 or WC6, and an O-type bright giant. Bean Nebula The brightest nebulosity within N11 is the northern region N11B (NGC 1763), also known as the Bean Nebula because of its shape. Other notable nebulae On N11B's north-east edge is the more compact N11A, known as the Rose Nebula, which has rose-like petals of gas and dust that are illuminated by the massive hot stars within its centre. It is also known as IC 2116 and was catalogued as a star, HD 32340. The east side of the N11 complex is N11C (NGC 1769), an emission nebula containing at least two compact open clusters. Outside the main "bubble" of N11 to the northeast is N11E, also known as NGC 1773, a small bright nebula containing several massive young stars. The south portion of the bubble is N11F, also called NGC 1760. The western portion of the bubble is faint and poorly defined. To the south-west of N11 is the 7th magnitude red giant HD 31754, a foreground star/star system, lying close to our sightline along with the open cluster NGC 1733. Three background galaxies, visible with most southern deep-space telescopes and observatories, lie west of N11: the pair PGC 16243 and PGC 16244; and LEDA 89996. To the south of them lie NGC 1731 and TYC 8889-619-1, which are part of the galaxy's N4 complex. The bright globular cluster NGC 1783 lies to the north of N11. Gallery References Large Magellanic Cloud Emission nebulae Dorado H II regions Star-forming regions Astronomical objects discovered in 1956
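For scale, an angular extent can be converted to a physical one at the distance of the Large Magellanic Cloud. The sketch below assumes a round distance of about 163,000 light-years (~50 kpc), a figure not taken from this article; it converts the quoted 6-arcminute extent into light-years and, conversely, the quoted 1,000-light-year extent into arcminutes, which suggests the two published figures describe structures of different size.

    import math

    LMC_DISTANCE_LY = 163_000            # assumed round value, roughly 50 kpc

    def angular_to_physical(arcmin: float, distance_ly: float = LMC_DISTANCE_LY) -> float:
        """Physical extent (light-years) subtended by an angle at a given distance."""
        return distance_ly * math.radians(arcmin / 60.0)

    def physical_to_angular(size_ly: float, distance_ly: float = LMC_DISTANCE_LY) -> float:
        """Angular extent (arcminutes) of a structure of given physical size."""
        return math.degrees(size_ly / distance_ly) * 60.0

    print(round(angular_to_physical(6.0)))      # ~284 ly for a 6-arcminute region
    print(round(physical_to_angular(1000.0)))   # ~21 arcmin for a 1,000 ly complex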
N11 (emission nebula)
Astronomy
611
32,961,505
https://en.wikipedia.org/wiki/RFID%20testing
RFID is a wireless technology supported by many different vendors for tags (also called transponders or smart cards) and readers (also called interrogators or terminals). In order to ensure global operability of the products multiple test standards have been developed. Furthermore, standardization organizations like ETSI organize RFID Plugtests, where products from multiple vendors are tested against each other in order to ensure interoperability. Test standards The most important test standards are: ISO/IEC 10373-6 for conformance to ISO/IEC 14443 ISO/IEC 10373-7 for conformance to ISO/IEC 15693 ISO/IEC 18047 multiple parts for conformance to ISO/IEC 18000 multiple parts ISO/IEC 18046 multiple parts for performance of ISO/IEC 18000 systems, interrogators and tags Testing Equipment Vendors CISC Semiconductor GmbH JX Instrumentation Ltd Voyantic Ltd Testing Vendors MET Laboratories, Inc. TRaC Global FIME Arsenal Testhouse See also RFID Journal Reference External links QR RFID Reader Radio-frequency identification
RFID testing
Engineering
219
11,587,822
https://en.wikipedia.org/wiki/Tarka%20Shastra
Tarka Shastra (, IAST: ) is a Sanskrit term for the philosophy of dialectics, logic and reasoning, and art of debate that analyzes the nature and source of knowledge and its validity. Shastra in Sanskrit means that which gives teaching, instruction or command. Tarka means debate or an argument. According to one reckoning, there are six shastras. Vyākaraṇa is one of them. Four of the shastras are particularly important: Vyākaraṇa, Mīmāṃsā, Tarka, and Vedanta. Tarka shastra has concepts called purva paksha and apara paksha. When one raises a point (purva paksha) the other party criticizes it (apara paksha). Then the debate starts. Each one tries to support his point of view by getting various references. The meaning of the word tarka also is specific, in that it does not imply a pure logical analysis but a complex activity of discourse guided by strict definitions and goals. Tarka-Sangraha is a foundational text followed as guidelines for logic and discourse ever since it was composed in the second half of 17th century CE. Tarka may be translated as "hypothetical argument". Tarka is the process of questioning and cross-questioning that leads to a particular conclusion. It is a form of supposition that can be used as an aid to the attainment of valid knowledge. There are several scholars renowned as well-versed in Tarka shastra: Adi Shankara (sixth century CE), Udyotakara (Nyāyavārttika, 6th–7th century), Vācaspati Miśra (Tatparyatika, 9th century), Ramanujacharya (9th century), Udayanacharya (Tātparyaparishuddhi, 10th century), Jayanta Bhatta (Nyāyamanjari, 9th century), Madhvacharya (13th century), Visvanatha (Nyāyasūtravṛtti, 17th century), Rādhāmohana Gosvāmī (Nyāyasūtravivarana, 18th century), and Kumaran Asan (1873–1924). Paruthiyur Krishna Sastri (1842–1911) and Sengalipuram Anantarama Dikshitar (1903–1969) specialized in Vyākaraṇa, Mīmāṃsā and Tarka shastra. References Bibliography JSTOR: WorldCat: Krishna Jain (2011). Tarka-śāstra: eka rūpa-rekhā (Raj Verma Sinha, translator) [A textbook of logic: an introduction]. Naī Dillī: Ḍī. Ke. Priṇṭavarlḍa. , , [language: Hindi, translated from 2007 English original , , ] Pavitra Kumāra Śarmā (2007). Tarka śāstra. Jayapura: Haṃsā Prakāśana. [language: Hindi] Gulābarāya. Tarka śāstra. Kāśī: Nāgarīpracāriṇī Sabhā. [language: Hindi] (on Hindu logic) George William Brown (1915). Hindi logic. Jubbulpore: Christian Mission Press. [language: Hindi] External links Tarka Shastra - Shastra Nethralaya Samskara - The Forty Samskaras Hindu philosophical concepts Hermeneutics Ritual
Tarka Shastra
Biology
686
47,079,128
https://en.wikipedia.org/wiki/Neoichnology
Neoichnology (Greek néos „new“, íchnos „footprint“, logos „science“) is the science of footprints and traces of extant animals. Thus, it is a counterpart to paleoichnology, which investigates tracks and traces of fossil animals. Neoichnological methods are used in order to study the locomotion and the resulting tracks of both invertebrates and vertebrates. Often these methods are applied in the field of palaeobiology to gain a deeper understanding of fossilized footprints. Neoichnological methods Working with living animals Typically, when working with living animals, a race track is prepared and covered with a substrate, which allows for the production of footprints, i.e. sand of varying moisture content, clay or mud. After preparation, the animal is lured or shooed over the race track. This results in the production of numerous footprints that constitute a complete track. In some cases the animal is filmed during track production in order to subsequently study the impact of the animal's velocity or its behavior on the produced track. This poses an important advantage of working with living animals: changes in speed or direction, resting, slippage or moments of fright become visible in the produced tracks. After track production and prior to reuse, the track can be photographed, drawn or molded. Changes in the experimental setup are possible throughout the experiment, i. e. regulation of the moisture content of the substrate. As an alternative, also tracks of free living animals can be studied in nature (i.e. nearby lakes) and without any special experimental setup. However, without the standardized environment of the lab, matching the tracks with the behavior of the animal during track production is undoubtedly harder. Working with foot models or severed limbs Another field of methods is the experimental work done with foot models or severed limbs. With these methods, the natural behavior of the animal is excluded from the analysis. In a typical experimental setup, the prepared foot is pressed into the substrate of interest, which again allows for the production of a footprint. Other than in the methods previously mentioned, the experimenter has now the opportunity to regulate manually the pressure, direction and speed of foot touchdown. Because of that, the effects of those manipulations can be studied more directly. The layering of differently colored substrates furthermore allows to study the consequences of touchdown in lower substrate layers. References External links Neoichnological analysis of blue-tongued skink locomotion: https://www.youtube.com/watch?v=GRMeWDf_KVc Subfields of paleontology Zoology
Neoichnology
Biology
524
36,048,753
https://en.wikipedia.org/wiki/Strawberry%20Tree%20%28solar%20energy%20device%29
The Strawberry Tree is the world’s first public solar charger for mobile devices. It was developed by Serbian company Strawberry Energy. It won first place in the European Commission’s "Sustainable energy week 2011" competition in Brussels, in the category Consuming. Functionalities Strawberry Tree is a solar and WiFi station which is permanently installed in public places such as streets, parks and squares, providing passersby with the opportunity to charge their mobile devices for free when they are outside. Its main parts are: Solar panels that transform solar energy to electrical energy Rechargeable batteries which accumulate energy and make Strawberry Tree function for more than 14 days without sunshine Sixteen cords for different types of mobile devices such as mobile phones, cameras, mp3 players etc. Smart electronics which enables balance between produced and consumed energy Also, Strawberry Tree provides free wireless internet in the immediate surroundings. History The first idea of a public solar charger for mobile devices, Strawberry tree was developed by Miloš Milisavljević, founder of Strawberry Energy company. , there were eleven Strawberry Trees installed. The first Strawberry Tree was installed in October, 2010 in the main square of Obrenovac municipality, Serbia. During the first 40 days from presentation of the solar charger, 10,000 chargings were measured. One year later, in cooperation with Telekom Serbia Company, a second Public solar charger for mobile devices was set up in Zvezdara municipality, Belgrade, Serbia. In the same month, a third Strawberry Tree was set in Novi Sad, Serbia. By the beginning of 2012, more than 100,000 chargings had been achieved on all three Strawberry trees. In cooperation with Telekom Serbia Company, Strawberry energy also installed Strawberry Tree at these locations: Kikinda, Serbia, in July, 2012. Vranje, Serbia, in August, 2012. Bor, Serbia, in October, 2012. Valjevo, Serbia, in October, 2012. In cooperation with city of Belgrade and Palilula municipality, Strawberry energy installed Strawberry Tree Black in Belgrade in Tašmajdan Park, in November 2012, with a completely new design by Serbian architect Miloš Milivojević. In the beginning of 2013, Strawberry energy, in cooperation with the city of Belgrade and Mikser organization, set up Public solar charger Strawberry Tree Flow with the new design by Serbian designers Tamara Švonja and Vojin Stojadinović, in Slavija square, Belgrade, Serbia. Later in 2013, through the project "Bijeljina and Bogatić – together on the way towards energy sustainability through increasing energy efficiency and promotion of renewable energy sources" within Cross Border Cooperation Programme Serbia – Bosnia and Herzegovina, two solar chargers have been installed in Bijeljina: in front of Cultural center and in the City park. References External links Electric power Solar energy companies Serbian inventions 2010 establishments in Serbia Energy in Serbia Charging stations Applications of photovoltaics Projects established in 2010
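The claim that the station can run "for more than 14 days without sunshine" is ultimately a battery-sizing question. The sketch below is purely illustrative: the storage capacity and load figures are invented assumptions, not specifications of the Strawberry Tree.

    def days_of_autonomy(battery_wh: float, avg_load_w: float) -> float:
        """How long a fully charged battery can cover a constant average load."""
        return battery_wh / (avg_load_w * 24.0)

    # Hypothetical numbers: 5 kWh of storage, charging ports averaging a combined
    # 10 W over the day plus ~3 W for the electronics and Wi-Fi hotspot.
    print(round(days_of_autonomy(5000, 10 + 3), 1))   # ~16 days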
Strawberry Tree (solar energy device)
Physics,Engineering
597
6,176,311
https://en.wikipedia.org/wiki/Inertial%20reference%20unit
An inertial reference unit (IRU) is a type of inertial sensor which uses gyroscopes (electromechanical, ring laser gyro or MEMS) and accelerometers (electromechanical or MEMS) to determine a moving aircraft’s or spacecraft’s change in rotational attitude (angular orientation relative to some reference frame) and translational position (typically latitude, longitude and altitude) over a period of time. In other words, an IRU allows a device, whether airborne or submarine, to travel from one point to another without reference to external information. Another name often used interchangeably with IRU is Inertial Measurement Unit. The two basic classes of IRUs/IMUs are "gimballed" and "strapdown". The older, larger gimballed systems have become less prevalent over the years as the performance of newer, smaller strapdown systems has improved greatly via the use of solid-state sensors and advanced real-time computer algorithms. Gimballed systems are still used in some high-precision applications where strapdown performance may not be as good. See also Air data inertial reference unit Inertial measurement unit External links Optical Inertial Reference Units (IRUs) Navigational equipment Aircraft instruments Avionics
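Because the article describes how an IRU turns gyroscope and accelerometer outputs into attitude and position, here is a deliberately simplified dead-reckoning sketch (2-D, flat and non-rotating Earth, no sensor-error model). It illustrates the general strapdown idea only and is not the algorithm of any particular IRU.

# Minimal 2-D strapdown dead reckoning: integrate gyro rate to heading, rotate
# body-frame acceleration into the navigation frame, then integrate
# acceleration -> velocity -> position. Real IRUs use 3-D quaternion/DCM
# propagation, gravity and Earth-rate compensation, and error filtering.
import math

def dead_reckon(samples, dt):
    """samples: iterable of (yaw_rate_rad_s, accel_x_body, accel_y_body)."""
    heading, vx, vy, x, y = 0.0, 0.0, 0.0, 0.0, 0.0
    for yaw_rate, ax_b, ay_b in samples:
        heading += yaw_rate * dt                      # attitude update
        c, s = math.cos(heading), math.sin(heading)
        ax_n = c * ax_b - s * ay_b                    # body -> navigation frame
        ay_n = s * ax_b + c * ay_b
        vx += ax_n * dt; vy += ay_n * dt              # velocity update
        x += vx * dt;   y += vy * dt                  # position update
    return heading, (vx, vy), (x, y)

# One second of constant forward acceleration while turning gently.
print(dead_reckon([(0.01, 1.0, 0.0)] * 100, dt=0.01))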
Inertial reference unit
Technology,Engineering
267
40,856,991
https://en.wikipedia.org/wiki/John%20F.%20Rockart
John Fralick (Jack) Rockart (1931 – February 3, 2014) was an American organizational theorist, and Senior Lecturer Emeritus at the Center for Information Systems Research at the MIT Sloan School of Management. Biography Born in New York City, Rockart received his AB from the Woodrow Wilson School of Public and International Affairs at Princeton University, his MBA from Harvard Business School, and in 1968 his PhD in management from MIT. After graduation, Rockart started his academic career as Instructor in the MIT Sloan School of Management. In 1967 he was appointed Assistant Professor, in 1970 Associate Professor, and in 1974 Senior Lecturer. Since 1976 he was also Director of the MIT Center for Information Systems Research (CISR), where he was succeeded by Peter Weill in 2000. In 1989 Rockart was awarded the Nonfiction Computer Press Association Book of the Year Award; and in 2003, the Leo Award by the Association for Information Systems. Rockart was founding editor-in-chief of the MIS Quarterly Executive. Rockart's research interests focused on the "managers’ usage of computer-based information with a special concentration on the need to design information flow for effective decision making... [and] the changing role of information technology and the implementation of integrated global systems." Selected publications Books and papers: Bullen, Christine V., and John F. Rockart. A primer on critical success factors. (1981). Rockart, John F., and David W. De Long. Executive support systems: The emergence of top management computer use. Dow Jones-Irwin, 1988. Articles, a selection: Rockart, John F. "Chief executives define their own data needs." Harvard Business Review 57.2 (1979): 81. Rockart, John F. "The changing role of the information systems executive: a critical success factors perspective." Sloan Management Review Fall 1982; 24, pp. 3–13 Rockart, John F., and Lauren S. Flannery. "The management of end user computing." Communications of the ACM 26.10 (1983): 776-784. Malone, Thomas W., and John F. Rockart. "Computers, networks and the corporation." Scientific American 265.3 (1991): 128-136. Rockart, John F., Michael J. Earl, and Jeanne W. Ross. "Eight imperatives for the new IT organization." Sloan management review 38.1 (1996): 43-55. References External links MIT CISR remembers Jack Rockart at cisr.mit.edu, 2014.02.05. Jack Rockart, 82; cofounded IT research center at MIT at bostonglobe.com 2014.02.26. John F. "Jack" Rockart: Obituary 1931 births 2014 deaths American business theorists Information systems researchers Princeton School of Public and International Affairs alumni Harvard Business School alumni MIT Sloan School of Management alumni MIT Sloan School of Management faculty
John F. Rockart
Technology
602
4,194,692
https://en.wikipedia.org/wiki/Microspore
Microspores are land plant spores that develop into male gametophytes, whereas megaspores develop into female gametophytes. The male gametophyte gives rise to sperm cells, which are used for fertilization of an egg cell to form a zygote. Megaspores are structures that are part of the alternation of generations in many seedless vascular cryptogams, all gymnosperms and all angiosperms. Plants with heterosporous life cycles using microspores and megaspores arose independently in several plant groups during the Devonian period. Microspores are haploid, and are produced from diploid microsporocytes by meiosis. Morphology The microspore has three different types of wall layers. The outer layer is called the perispore, the next is the exospore, and the inner layer is the endospore. The perispore is the thickest of the three layers while the exospore and endospore are relatively equal in width. Seedless vascular plants In heterosporous seedless vascular plants, modified leaves called microsporophylls bear microsporangia containing many microsporocytes that undergo meiosis, each producing four microspores. Each microspore may develop into a male gametophyte consisting of a somewhat spherical antheridium within the microspore wall. Either 128 or 256 sperm cells with flagella are produced in each antheridium. The only heterosporous ferns are aquatic or semi-aquatic, including the genera Marsilea, Regnellidium, Pilularia, Salvinia, and Azolla. Heterospory also occurs in the lycopods in the spikemoss genus Selaginella and in the quillwort genus Isoëtes. Types of seedless vascular plants: Water ferns Spikemosses Quillworts Gymnosperms In seed plants the microspores develop into pollen grains each containing a reduced, multicellular male gametophyte. The megaspores, in turn, develop into reduced female gametophytes that produce egg cells that, once fertilized, develop into seeds. Pollen cones or microstrobili usually develop toward the tips of the lower branches in clusters up to 50 or more. The microsporangia of gymnosperms develop in pairs toward the bases of the scales, which are therefore called microsporophylls. Each of the microsporocytes in the microsporangia undergoes meiosis, producing four haploid microspores. These develop into pollen grains, each consisting of four cells and, in conifers, a pair of external air sacs. The air sacs give the pollen grains added buoyancy that helps with wind dispersal. Types of Gymnosperms: Conifers Pines Ginkgos Cycads Gnetophytes Angiosperms As the anther of a flowering plant develops, four patches of tissue differentiate from the main mass of cells. These patches of tissue contain many diploid microsporocyte cells, each of which undergoes meiosis producing a quartet of microspores. Four chambers (pollen sacs) lined with nutritive tapetal cells are visible by the time the microspores are produced. After meiosis, the haploid microspores undergo several changes: The microspore divides by mitosis producing two cells. The first of the cells (the generative cell) is small and is formed inside the second larger cell (the tube cell). The members of each part of the microspores separate from each other. A double-layered wall then develops around each microspore. These steps occur in sequence and when complete, the microspores have become pollen grains. Embryogenesis Although it is not the usual route of a microspore, this process is the most effective way of yielding haploid and double haploid plants through the use of male sex hormones. 
Under certain stressors such as heat or starvation, plants select for microspore embryogenesis. It was found that over 250 different species of angiosperms responded this way. In the anther, after a microspore undergoes microsporogenesis, it can deviate towards embryogenesis and become star-like microspores. The microspore can then go one of four ways: Become an embryogenic microspore, undergo callogenesis to organogenesis (haploid/double haploid plant), become a pollen-like structure or die. Microspore embryogenesis is used in biotechnology to produce double haploid plants, which are immediately fixed as homozygous for each locus in only one generation. The haploid microspore is stressed to trigger the embryogenesis pathway and the resulting haploid embryo either doubles its genome spontaneously or with the help of chromosome doubling agents. Without this double haploid technology, conventional breeding methods would take several generations of selection to produce a homozygous line. See also Microsporangium Spore Megaspore References Plant reproduction
Microspore
Biology
1,050
6,315
https://en.wikipedia.org/wiki/Air%20%28classical%20element%29
Air or Wind is one of the four classical elements along with water, earth and fire in ancient Greek philosophy and in Western alchemy. Greek and Roman tradition According to Plato, it is associated with the octahedron; air is considered to be both hot and wet. The ancient Greeks used two words for air: aer meant the dim lower atmosphere, and aether meant the bright upper atmosphere above the clouds. Plato, for instance writes that "So it is with air: there is the brightest variety which we call aether, the muddiest which we call mist and darkness, and other kinds for which we have no name...." Among the early Greek Pre-Socratic philosophers, Anaximenes (mid-6th century BCE) named air as the arche. A similar belief was attributed by some ancient sources to Diogenes Apolloniates (late 5th century BCE), who also linked air with intelligence and soul (psyche), but other sources claim that his arche was a substance between air and fire. Aristophanes parodied such teachings in his play The Clouds by putting a prayer to air in the mouth of Socrates. Air was one of many archai proposed by the Pre-socratics, most of whom tried to reduce all things to a single substance. However, Empedocles of Acragas (c. 495-c. 435 BCE) selected four archai for his four roots: air, fire, water, and earth. Ancient and modern opinions differ as to whether he identified air by the divine name Hera, Aidoneus or even Zeus. Empedocles’ roots became the four classical elements of Greek philosophy. Plato (427–347 BCE) took over the four elements of Empedocles. In the Timaeus, his major cosmological dialogue, the Platonic solid associated with air is the octahedron which is formed from eight equilateral triangles. This places air between fire and water which Plato regarded as appropriate because it is intermediate in its mobility, sharpness, and ability to penetrate. He also said of air that its minuscule components are so smooth that one can barely feel them. Plato's student Aristotle (384–322 BCE) developed a different explanation for the elements based on pairs of qualities. The four elements were arranged concentrically around the center of the universe to form the sublunary sphere. According to Aristotle, air is both hot and wet and occupies a place between fire and water among the elemental spheres. Aristotle definitively separated air from aether. For him, aether was an unchanging, almost divine substance that was found only in the heavens, where it formed celestial spheres. Humorism and temperaments In ancient Greek medicine, each of the four humours became associated with an element. Blood was the humor identified with air, since both were hot and wet. Other things associated with air and blood in ancient and medieval medicine included the season of spring, since it increased the qualities of heat and moisture; the sanguine temperament (of a person dominated by the blood humour); hermaphrodite (combining the masculine quality of heat with the feminine quality of moisture); and the northern point of the compass. Alchemy The alchemical symbol for air is an upward-pointing triangle, bisected by a horizontal line. Modern reception The Hermetic Order of the Golden Dawn, founded in 1888, incorporates air and the other Greek classical elements into its teachings. The elemental weapon of air is the dagger which must be painted yellow with magical names and sigils written upon it in violet. Each of the elements has several associated spiritual beings. 
The archangel of air is Raphael, the angel is Chassan, the ruler is Ariel, the king is Paralda, and the air elementals (following Paracelsus) are called sylphs. Air is considerable and it is referred to the upper left point of the pentagram in the Supreme Invoking Ritual of the Pentagram. Many of these associations have since spread throughout the occult community. In the Golden Dawn and many other magical systems, each element is associated with one of the cardinal points and is placed under the care of guardian Watchtowers. The Watchtowers derive from the Enochian system of magic founded by Dee. In the Golden Dawn, they are represented by the Enochian elemental tablets. Air is associated with the east, which is guarded by the First Watchtower. Air is one of the five elements that appear in most Wiccan and Pagan traditions. Wicca in particular was influenced by the Golden Dawn system of magic and Aleister Crowley's mysticism. Parallels in non-Western traditions Air is not one of the traditional five Chinese classical elements. Nevertheless, the ancient Chinese concept of Qi or chi is believed to be close to that of air. Qi is believed to be part of every living thing that exists, as a kind of "life force" or "spiritual energy". It is frequently translated as "energy flow", or literally as "air" or "breath". (For example, tiānqì, literally "sky breath", is the Chinese word for "weather"). The concept of qi is often reified, however no scientific evidence supports its existence. The element air also appears as a concept in the Buddhist philosophy which has an ancient history in China. Some Western modern occultists equate the Chinese classical element of metal with air, others with wood due to the elemental association of wind and wood in the bagua. Enlil was the god of air in ancient Sumer. Shu was the ancient Egyptian deity of air and the husband of Tefnut, goddess of moisture. He became an emblem of strength by virtue of his role in separating Nut from Geb. Shu played a primary role in the Coffin Texts, which were spells intended to help the deceased reach the realm of the afterlife safely. On the way to the sky, the spirit had to travel through the air as one spell indicates: "I have gone up in Shu, I have climbed on the sunbeams." According to Jain beliefs, the element air is inhabited by one-sensed beings or spirits called vāyukāya ekendriya, sometimes said to inhabit various kinds of winds such as whirlwinds, cyclones, monsoons, west winds and trade winds. Prior to reincarnating into another lifeform, spirits can remain as vāyukāya ekendriya from anywhere between one instant to up to three-thousand years, depending on the karma of the spirits. See also Atmosphere of Earth Sky deity Wind deity Notes References Barnes, Jonathan. Early Greek Philosophy. London: Penguin, 1987. Brier, Bob. Ancient Egyptian Magic. New York: Quill, 1980. Guthrie, W. K. C. A History of Greek Philosophy. 6 volumes. Cambridge: Cambridge University Press, 1962–81. Hutton, Ronald. Triumph of the Moon: A History of Modern Pagan Witchcraft. Oxford: Oxford University Press, 1999, 2001. Kraig, Donald Michael. Modern Magick: Eleven Lessons in the High Magickal Arts. St. Paul: Llewellyn, 1994. Lloyd, G. E. R. Aristotle: The Growth and Structure of His Thought. Cambridge: Cambridge University Press, 1968. Plato. Timaeus and Critias. Translated by Desmond Lee. Revised edition. London: Penguin, 1977. Regardie, Israel. The Golden Dawn. 6th edition. St. Paul: Llewellyn, 1990. Schiebinger, Londa. The Mind Has No Sex? 
Women in the Origins of Modern Science. Cambridge: Harvard University Press, 1989. Valiente, Doreen. Witchcraft for Tomorrow. Custer, Wash.: Phoenix Publishing, 1978. Valiente, Doreen. The Rebirth of Witchcraft. Custer, Wash.: Phoenix Publishing, 1989. Vlastos, Gregory. Plato’s Universe. Seattle: University of Washington Press, 1975. Further reading Cunningham, Scott. Earth, Air, Fire and Water: More Techniques of Natural Magic. Starhawk. The Spiral Dance: A Rebirth of the Ancient Religion of the Great Goddess. 3rd edition. 1999. External links Atmosphere of Earth Classical elements Esoteric cosmology History of astrology Technical factors of astrology Gases Concepts in ancient Greek metaphysics
Air (classical element)
Physics,Chemistry,Astronomy
1,709
36,033,938
https://en.wikipedia.org/wiki/Mu2%20Chamaeleontis
{{DISPLAYTITLE:Mu2 Chamaeleontis}} Mu2 Chamaeleontis (μ2 Cha) is a star located in the constellation Chamaeleon. It is not bright enough to be readily visible to the naked eye, having an apparent visual magnitude of 6.60, but has an absolute magnitude of 0.59. The distance to this object is approximately 556 light years, based on the star's parallax. The star's radial velocity is poorly constrained, but it appears to be drifting further away at the rate of around +3 km/s. This object is an aging G-type giant star with a stellar classification of G6/8 III. Having exhausted the supply of hydrogen at its core, the star has cooled and expanded until now it has 11 times the girth of the Sun. It is a suspected variable star of unknown type. The star is radiating 71 times the luminosity of the Sun from its swollen photosphere at an effective temperature of 4,967 K. References G-type giants Suspected variables Chamaeleon Chamaeleontis, Mu2 Durchmusterung objects 088351 049326 3997
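The radius, effective temperature and luminosity quoted above are tied together by the Stefan-Boltzmann law, and a short calculation shows how. The nominal solar effective temperature of 5772 K is an assumption made here, so the result only lands near the quoted 71 solar luminosities; rounding and the adopted solar values account for the difference.

# Rough consistency check: L/Lsun = (R/Rsun)**2 * (Teff/Tsun)**4
R_ratio = 11.0      # article: about 11 times the Sun's girth, hence ~11 solar radii
T_eff = 4967.0      # K, from the article
T_sun = 5772.0      # K, nominal solar effective temperature (assumed)
L_ratio = R_ratio**2 * (T_eff / T_sun)**4
print(round(L_ratio))   # ~66, in the same ballpark as the quoted 71 Lsun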
Mu2 Chamaeleontis
Astronomy
252
38,683,862
https://en.wikipedia.org/wiki/International%20Middleware%20Conference
The International Middleware Conference brings together academic and industrial delegates who have an interest in the development, optimisation, evaluation and evolution of middleware. History The first instance of the Middleware conference was held in 1998. Since 2003 the conference has been run annually. Many recent conference events have been ACM/IFIP/USENIX supported events. Conference structure Middleware uses a single-track conference program, although it includes a growing number of submission categories. As of 2013, these include: Research papers Experimentation and deployment papers Big ideas papers The conference also includes: Tutorials Demonstrations and posters A doctoral workshop A number (six, in 2012) of workshops are typically co-located with the main conference. See also List of computer science conferences References External links http://www.middleware-conference.org/ https://web.archive.org/web/20130511173841/http://2013.middleware-conference.org/ Computer science conferences Middleware
International Middleware Conference
Technology,Engineering
205
11,421,770
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORA35
In molecular biology, Homo sapiens snoRA35 (also known as HBI-36) is an H/ACA box snoRNA, first cloned from an adult mouse brain cDNA library by Cavaillé et al. (2000), and was found to be specifically expressed in the choroid plexus. Its human orthologue, HBI-36, was identified by a homology search and found to be specifically expressed in the brain. Its gene resides in the second intron of the serotonin receptor 2c (5HT-2c) gene, which is predominantly expressed in choroid plexus epithelial cells. The human 5HT-2c mRNA was predicted to be 2'-O-methylated by the C/D box snoRNP HBII-52 at a position also subjected to A-to-I editing. HBI-36 has no documented RNA target. References External links Small nuclear RNA
Small nucleolar RNA SNORA35
Chemistry
201
2,870,411
https://en.wikipedia.org/wiki/Omega%20Aquilae
The Bayer designation Omega Aquilae (ω Aql / ω Aquilae) is shared by two stars in the constellation Aquila: Omega¹ Aquilae (Flamsteed designation 25 Aquilae.) Omega² Aquilae (Flamsteed designation 29 Aquilae.) They are separated by 0.51° on the sky. Aquilae, Omega Aquila (constellation)
Omega Aquilae
Astronomy
89
61,166,674
https://en.wikipedia.org/wiki/C14H19NO4
{{DISPLAYTITLE:C14H19NO4}} The molecular formula C14H19NO4 (molar mass: 265.31 g/mol, exact mass: 265.1314 u) may refer to: Anisomycin, also known as flagecidin Filenadol Molecular formulas
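The molar and exact (monoisotopic) masses quoted above follow directly from the atomic composition. The snippet below recomputes them from standard atomic weights and principal-isotope masses; it is added purely as an illustration of where those figures come from.

# Recompute average molar mass and monoisotopic ("exact") mass for C14H19NO4.
composition = {"C": 14, "H": 19, "N": 1, "O": 4}
average = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}              # standard atomic weights
monoisotopic = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}  # principal isotopes
molar = sum(n * average[el] for el, n in composition.items())
exact = sum(n * monoisotopic[el] for el, n in composition.items())
print(round(molar, 2), "g/mol,", round(exact, 4), "u")   # ~265.31 g/mol and ~265.1314 u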
C14H19NO4
Physics,Chemistry
68
21,687,322
https://en.wikipedia.org/wiki/Fc%20%28Unix%29
fc is a standard program on Unix and Unix-like operating systems that lists, edits and re-executes commands previously entered in an interactive shell. fc is a builtin command in the Bash and Zsh shells and is an initialism for "fix command". It is particularly helpful for editing complex, multi-line commands. The editor can be specified by setting the EDITOR (changes the default editor) or the FCEDIT environment variable. Examples The -l flag lists previous command history; in the example the output shows the command ls as item 1001 in the user's history. $ fc -l 1001 ls The -s flag with this index then re-executes history entry 1001: $ fc -s 1001 ls More powerfully, -s enables inline substitution of the form old=new. $ ls floder # user typo $ fc -s floder=folder # Command revised and re-run with the correction ls folder Most powerfully, executing fc on its own opens the last command executed in an editor. The editor can be specified on the command line (-e) or via the FCEDIT environment variable. The user can thus fully modify the last command in the editor; upon exiting, the resulting command is executed. $ fc # Change 'ls' to 'ls -la' in editor and exit ls -la See also List of Unix commands Solaris manual page for fc command References External links Standard Unix programs Unix SUS2008 utilities
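As a further illustration of editor selection (generic usage in the article's own example style, not taken from the original text), the editor can also be chosen per invocation with -e, or through FCEDIT:

$ fc -e vi        # open the previous command in vi; the edited command runs when the editor exits
$ export FCEDIT=nano
$ fc              # with no arguments, opens the last command in $FCEDIT (here nano)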
Fc (Unix)
Technology
299
61,086,978
https://en.wikipedia.org/wiki/Mycena%20lazulina
Mycena lazulina is a bioluminescent species of mushroom in the genus Mycena and family Mycenaceae. It was first described in 2016 from southwestern Japan. The specific epithet, lazulina, is Latin for blue (c.f. lapis lazuli). See also List of bioluminescent fungi References External links Mycena lazulina in Mycobank Gallery of images at The Forum of Fungi Bioluminescent fungi lazulina Fungi described in 2016 Fungus species
Mycena lazulina
Biology
107
19,371,215
https://en.wikipedia.org/wiki/Metric%20circle
In mathematics, a metric circle is the metric space of arc length on a circle, or equivalently on any rectifiable simple closed curve of bounded length. The metric spaces that can be embedded into metric circles can be characterized by a four-point triangle equality. Some authors have called metric circles Riemannian circles, especially in connection with the filling area conjecture in Riemannian geometry, but this term has also been used for other concepts. A metric circle, defined in this way, is unrelated to and should be distinguished from a metric ball, the subset of a metric space within a given radius from a central point. Characterization of subspaces A metric space is a subspace of a metric circle (or of an equivalently defined metric line, interpreted as a degenerate case of a metric circle) if every four of its points can be permuted and labeled so that their pairwise distances satisfy two triangle equalities (the four-point condition mentioned above). A space with this property has been called a circular metric space. Filling The Riemannian unit circle of length 2π can be embedded, without any change of distance, into the metric of geodesics on a unit sphere, by mapping the circle to a great circle and its metric to great-circle distance. The same metric space would also be obtained from distances on a hemisphere. This differs from the boundary of a unit disk, for which opposite points on the unit disk would have distance 2, instead of their distance of π on the Riemannian circle. This difference in internal metrics between the hemisphere and the disk led Mikhael Gromov to pose his filling area conjecture, according to which the unit hemisphere is the minimum-area surface having the Riemannian circle as its boundary. References circle Circles Metric geometry Bernhard Riemann
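For concreteness, the arc-length metric on the unit circle can be written down explicitly; this standard formula is added as an illustration and is not part of the original article. Identifying points with angles θ1, θ2 in [0, 2π),

\[ d(\theta_1, \theta_2) = \min\bigl( |\theta_1 - \theta_2|,\ 2\pi - |\theta_1 - \theta_2| \bigr), \]

so antipodal points lie at distance π, while their chordal (straight-line) distance across the disk is 2, which is the contrast drawn in the Filling section above.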
Metric circle
Mathematics
357
26,457,869
https://en.wikipedia.org/wiki/C22H35NO2
{{DISPLAYTITLE:C22H35NO2}} The molecular formula C22H35NO2 (molar mass: 345.52 g/mol, exact mass: 345.2668 u) may refer to: Himbacine LY-255582 Molecular formulas
C22H35NO2
Physics,Chemistry
62
31,019,592
https://en.wikipedia.org/wiki/C25H36O4
{{DISPLAYTITLE:C25H36O4}} The molecular formula C25H36O4 (molar mass: 400.55 g/mol, exact mass: 400.2614 u) may refer to: Ajulemic acid HU-320, or 7-nor-7-carboxy-CBD-1,1-DMH
C25H36O4
Chemistry
80
35,283,555
https://en.wikipedia.org/wiki/Ellen%20Gleditsch
Ellen Gleditsch (29 December 1879 – 5 June 1968) was a Norwegian radiochemist and Norway's second female professor. Starting her career as an assistant to Marie Curie, she became a pioneer in radiochemistry, establishing the half-life of radium and helping demonstrate the existence of isotopes. She was Vice President of the Norwegian Association for Women's Rights 1937–1939. Early life and education Ellen Gleditsch was born in 1879 in Mandal, Norway. She was the daughter of Petra Birgitte Hansen (1857–1913) and headmaster Karl Kristian Gleditsch (1851–1913). Her siblings included architect Eivind Gleditsch(nl), Adler (1893–1978) who lived with her for the rest of her life following the death of their parents, Liv Gleditsch (1895–1977) who graduated with a degree in chemistry, and civil engineer and geodesist Kristian Gleditsch. The family moved to Trondhjem and then Fredrikshald in 1905. She was the niece of Jens Gran Gleditsch and Kristen Gran Gleditsch, a first cousin of Henry Gleditsch and second cousin of Rolf Juell Gleditsch and Odd Gleditsch, Sr. Her sister in law through Kristian was Nini Haslund Gleditsch (1908–1996). Although she graduated from high school at the top of her class, the college entrance exams were not available to women at the time. Therefore, she worked as a pharmacy assistant where she was able to work toward a non-academic degree in chemistry and pharmacology in 1902. In 1905 with the support of her mentor Eyvind Bødtker, she passed the university entrance exam, but chose to study in Paris. Career After starting her career in pharmacy, she went on to study radioactivity at the Sorbonne and work in Marie Curie's laboratory from 1907 to 1912. At the Curie Institute, Gleditsch performed a technique called fractional crystallisations, which purified radium. The work, which was highly specialized and few could complete, allowed her laboratory fees to be waived. She spent five years of analysis with Curie and returned even after leaving the lab to supervise experiments. In 1911, she received a "Licenciée en sciences degree" from the Sorbonne and was awarded a teaching post at University of Oslo where she worked with Margot Dorenfeldt. After working one year, Gleditsch won the first scholarship ever given to a woman from the American-Scandinavian Association to study in the United States, but was turned down by both of the schools at which she applied. She went anyway and despite having been rejected was able to work at the laboratory of Bertram Boltwood at Yale University, where she measured the half-life of radium, creating a standard measurement that was used for many years. One of the scientists who had originally turned her away from Yale, co-authored two articles with her and in June 1914, Smith College awarded her an honorary doctorate for her work. In 1913–14, she returned to the University of Oslo and became the second woman to be elected to Oslo's Academy of Science in 1917. During the 1920s, Gleditsch made several trips to France to assist Curie, as well as a trip to Cornwall to investigate a mine located there. In 1919, Gleditsch co-founded the Norwegian Women Academics' Association, to focus on development of science and the conditions under which women scientists worked. She also believed that cooperation of scientists would foster peace. She served as president of the organization from 1924 to 1928. 
Joining the International Federation of University Women in 1920, she served as its President from 1926 to 1929, working to provide scholarships to enable women to study abroad. In 1929, she made a trip to the United States traveling from New York to California with the intention of promoting scholarships for women. Though her appointment as professor at Oslo in 1929 caused controversy, she successfully started a radioactivity research group there. Throughout the 1930s, she continued to produce articles in English, French, German and Norwegian. She also hosted a series of radio shows to promote and popularize scientific study. In the 1930s she directed, a laboratory doing radiochemistry in Norway, which was used as an underground laboratory by scientists fleeing from the Nazi regime. In 1939, she was appointed to the International committee on intellectual cooperation, where Marie Curie had also been sitting a few years earlier. When Norway was occupied during the war, she hid scientists and continued using her home for experiments. During a raid on her laboratory in 1943, the women scientists were able to rescue the radioactive minerals, but all of the men were arrested. She retired from the university in 1946 and began working with UNESCO in their efforts to end illiteracy. In 1949, she was actively involved on the working committee and in 1952 was named to the Norwegian commission working to control use of the atomic bomb. That same year she resigned from UNESCO in protest over the admittance of Spain under Franco's fascist regime as a member. In 1962 at the age of 83, she received an honorary doctorate from the Sorbonne, the first woman to receive such an honor. Honours and awards In 1920, Ellen Gleditsch was awarded Fridtjof Nansen's reward for outstanding research. In 1948 she was awarded an honorary doctorate by the University of Strasbourg. In 1946 she was appointed a Knight of the 1st Class of the Order of St. Olav. In 1957 she became an honorary citizen of Paris. In 1962, she was named an honorary doctor at the University of the Sorbonne, as the first woman ever. In 1966, she was appointed an honorary member of the Norwegian Chemical Society. Commemoration Oslo Municipality has named a road after her; Ellen Gleditsch's road is located in the district Stovner in Oslo. In November 2018, OsloMet named a university building (P35) on the Pilstredet campus after her. In 2019, she got a street named after her in her hometown Mandal. Ellen Gleditsch road is located on Malmøy. In 2021, Radiumhospitalet's new cyclotron was named Ellen Gleditsch. Works (with Marie Curie) Sur le radium et l'uranium contenus dans les mineraux radioactifs, Comptes Rendus 148:1451 (1909) 'Ratio Between Uranium and Radium in the Radio-active Minerals', Comptes Rendus 149:267 (1909). Sur le rapport entre l'uranium et le radium dans les mineraux actifs, Radium 8:256 (1911). References External links Ellen Gleditsch at the Journal of Chemical Education Scientist of the Day – Ellen Gleditsch at Linda Hall Library Further reading 1879 births 1968 deaths Norwegian chemists 20th-century Norwegian women scientists Nuclear chemists Norwegian women chemists Norwegian Association for Women's Rights people People from Mandal, Norway Order of Saint Olav
Ellen Gleditsch
Chemistry
1,452
52,433,615
https://en.wikipedia.org/wiki/Lead%E2%80%93crime%20hypothesis
After decades of increasing crime across the industrialised world, crime rates started to decline sharply in the 1990s, a trend that continued into the new millennium. Many explanations have been proposed, including situational crime prevention and interactions between many other factors with complex, multifactorial causation. Lead is widely understood to be toxic to multiple organs of the human body, particularly the human brain. Concerns about even low levels of exposure began in the 1970s; in the decades since, scientists have concluded that no safe threshold for lead exposure exists. The major source of lead exposure during the 20th century was leaded gasoline. Proponents of the lead–crime hypothesis argue that the removal of lead additives from motor fuel, and the consequent decline in children's lead exposure, explains the fall in crime rates in the United States beginning in the 1990s. This hypothesis also offers an explanation of the rise in crime in the preceding decades as the result of increased lead exposure throughout the mid-20th century. The lead–crime hypothesis is not mutually exclusive with other explanations of the drop in US crime rates, which include analysis of the hypothesized legalized abortion and crime effect. The difficulty in measuring the effect of lead exposure on crime rates lies in separating the effect from other indicators of poverty such as poorer schools, nutrition, and medical care, exposure to other pollutants, and other variables that may lead to crime. Background and research Usage of lead in modern history Lead, a naturally occurring metal of bluish-grey color, has been used for multiple purposes in the history of human civilization. Advantages include being somewhat soft and pliable as well as resistant to corrosion compared to other metals. The widespread substance is also able to function as a shield against various forms of radiation. Expanded scientific investigation into organolead chemistry and the varied ways in which human biology changes due to lead exposure took place throughout the 20th century. Although it has continued to be in wide use even into the 21st century, greater understanding of blood lead levels (BLLs) and other factors have meant that a new scientific consensus has emerged. No 'safe' level of lead in the human bloodstream exists as such; any amount can contribute to neurological problems and other health issues. Medical analyses of the role of lead exposure in the brain note increases in impulsive actions and social aggression as well as the possibility of developing attention deficit hyperactivity disorder (ADHD). Those conditions likely influence personality traits and behavioral choices, with examples including poor job performance, patterns of substance abuse, and teenage pregnancy. Evidence that lead exposure contributes to lower intelligence quotient (IQ) scores goes back to a seminal 1979 study in Nature, with later analysis finding the link particularly robust. The international process of trying to lower the prevalence of lead has been largely spearheaded by the Partnership for Clean Fuels and Vehicles (PCFV). The non-governmental organization partners with major oil companies, various governmental departments, multiple civil society groups, and other such institutions worldwide. Efforts to phase out lead in transport fuel achieved major gains in over seventy-five nations.
In discussions at the 2002 'Earth Summit', institutions under the umbrella of the United Nations vowed to emphasize public–private partnerships (PPPs) in order to help developing and transitional countries go unleaded. Correlation between lead exposure and crime In terms of crime, multiple commentators and researchers have noted that, after decades of relatively steady increases, crime rates in the United States started to sharply decline in the 1990s. The trend continued even into the new millennium. Multiple possible explanations have been suggested, with academic studies pointing to complex, multifactorial causation concurrent with various social trends. The economists Steven D. Levitt and John J. Donohue III, of the University of Chicago and Stanford University, respectively, have argued that the decline in U.S. crime rates was the combined result of an increase in the number of police, hikes in size of the prison population, waning of the spread of crack cocaine, and the widespread legalization of abortion from the 1970s onward. Possible other factors include changes in alcohol consumption. Later studies have upheld many of these findings while disputing others. While noting that correlation does not imply causation, the fact that in the United States anti-lead efforts took place simultaneously alongside falls in violent crime rates attracted attention from researchers. Changes were not uniform across the country, even while increasingly stringent Environmental Protection Agency rules went into force from the 1970s onward. Several areas had far greater lead exposure compared to others for years. A 2007 report published by The B.E. Journal of Economic Analysis & Policy, authored by Jessica Wolpaw Reyes of Amherst College, found that between 1992 and 2002 the phase-out of lead from gasoline in the U.S. "was responsible for approximately a 56% decline in violent crime". While cautioning that the findings relating to "murder are not robust if New York and the District of Columbia are included," the author concluded that "[o]verall, the phase-out of lead and the legalization of abortion appear to have been responsible for significant reductions in violent crime rates." She additionally speculated that by "2020, all adults in their 20s and 30s will have grown up without any direct exposure to gasoline lead during childhood, and their crime rates could be correspondingly lower." In 2011, a report published by the official United Nations News Centre remarked, "Ridding the world of leaded petrol [...] has resulted in $2.4 trillion in annual benefits, 1.2 million fewer premature deaths, higher overall intelligence and 58 million fewer crimes". The California State University did the specific study. Then U.N. Environment Programme (UNEP) executive director Achim Steiner argued, "Although this global effort has often flown below the radar of media and global leaders, it is clear that the elimination of leaded petrol is an immense achievement on par with the global elimination of major deadly diseases." In a 2013 article, Mother Jones ran a report by Kevin Drum arguing: Drum writes: According to Reyes, "Childhood lead exposure increases the likelihood of behavioral and cognitive traits such as impulsivity, aggressivity, and low IQ that are strongly associated with criminal behavior". 
A May 2017 study by Anna Aizer and Janet Currie found that lead exposure in childhood substantially increased school suspensions and juvenile detention among boys in Rhode Island, suggesting that the phasing out of leaded gasoline may explain a significant part of the decline in crime in the United States beginning in the 1990s. Systematic reviews / meta-analysis The first meta-analysis of the lead-crime hypothesis was published in 2022. "The Lead-Crime Hypothesis: A Meta-Analysis", authored by Anthony Higney, Nick Hanley, and Mirko Moro consolidates findings of 24 studies on the subject. It found that there is substantial evidence linking lead exposure to a heightened risk of criminal behavior, particularly violent crimes. This aligns with earlier research suggesting lead exposure may foster impulsive and aggressive tendencies, potential precursors to violent offenses. The study concluded that, while a correlation between declining lead pollution and declining criminality is supported by research, it is likely not a significant factor in reduced crime rates, and that the link is generally overstated in lead-crime literature. The study's implications point towards the potential benefits of reducing lead exposure to decrease crime rates. Such reductions could be achieved through initiatives like removing lead from products like gasoline and paint, water pipes and enhancing lead abatement measures in schools and residences. See also Brain health and pollution Euthenics Biosocial criminology Environmental toxicology Evolutionary mismatch Lead abatement Lead poisoning Organolead chemistry Pollution control Societal impacts of cars Statistical correlations of criminal behavior Tetraethyllead References Further reading Correlates of crime Environmental toxicology Lead poisoning Mass poisoning
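To make the empirical strategy behind studies of this kind more concrete, here is a toy sketch of a lagged regression: childhood lead exposure entered against crime rates roughly 20 years later, with a control variable. The data, variable names and the 20-year lag are illustrative assumptions, not the actual specification of any study cited above.

# Toy lagged regression: crime in year t vs. lead exposure ~20 years earlier,
# with a simple control, estimated by ordinary least squares (numpy only).
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1960, 2021)
lead = np.clip(np.sin((years - 1940) / 15.0) + 1.5, 0, None)   # synthetic exposure index
poverty = rng.normal(10, 1, years.size)                         # synthetic control variable
lag = 20
lead_lagged = np.roll(lead, lag)                                # lead[t - lag] for t >= lag
crime = 2.0 * lead_lagged + 0.5 * poverty + rng.normal(0, 0.3, years.size)

# Drop the first `lag` years, where the lagged exposure is undefined.
X = np.column_stack([np.ones(years.size - lag), lead_lagged[lag:], poverty[lag:]])
beta, *_ = np.linalg.lstsq(X, crime[lag:], rcond=None)
print(dict(zip(["intercept", "lead_lag20", "poverty"], beta.round(2))))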
Lead–crime hypothesis
Environmental_science
1,606
66,479,755
https://en.wikipedia.org/wiki/Grammatophora%20%28alga%29
Grammatophora is a genus of Chromista belonging to the family Grammatophoraceae. The genus was first described by C. G. Ehrenberg in 1840. Species: Grammatophora marina Grammatophora oceanica References Diatoms Diatom genera
Grammatophora (alga)
Biology
59
19,594,213
https://en.wikipedia.org/wiki/Planck%20constant
The Planck constant, or Planck's constant, denoted by h, is a fundamental physical constant of foundational importance in quantum mechanics: a photon's energy is equal to its frequency multiplied by the Planck constant, and the wavelength of a matter wave equals the Planck constant divided by the associated particle momentum. The closely related reduced Planck constant, equal to h/(2π) and denoted ħ, is commonly used in quantum physics equations. The constant was postulated by Max Planck in 1900 as a proportionality constant needed to explain experimental black-body radiation. Planck later referred to the constant as the "quantum of action". In 1905, Albert Einstein associated the "quantum" or minimal element of the energy to the electromagnetic wave itself. Max Planck received the 1918 Nobel Prize in Physics "in recognition of the services he rendered to the advancement of Physics by his discovery of energy quanta". In metrology, the Planck constant is used, together with other constants, to define the kilogram, the SI unit of mass. The SI units are defined in such a way that, when the Planck constant is expressed in SI units, it has the exact value h = 6.62607015×10^−34 J⋅s. History Origin of the constant Planck's constant was formulated as part of Max Planck's successful effort to produce a mathematical expression that accurately predicted the observed spectral distribution of thermal radiation from a closed furnace (black-body radiation). This mathematical expression is now known as Planck's law. In the last years of the 19th century, Max Planck was investigating the problem of black-body radiation first posed by Kirchhoff some 40 years earlier. Every physical body spontaneously and continuously emits electromagnetic radiation. There was no expression or explanation for the overall shape of the observed emission spectrum. At the time, Wien's law fit the data for short wavelengths and high temperatures, but failed for long wavelengths. Also around this time, but unknown to Planck, Lord Rayleigh had derived theoretically a formula, now known as the Rayleigh–Jeans law, that could reasonably predict long wavelengths but failed dramatically at short wavelengths. Approaching this problem, Planck hypothesized that the equations of motion for light describe a set of harmonic oscillators, one for each possible frequency. He examined how the entropy of the oscillators varied with the temperature of the body, trying to match Wien's law, and was able to derive an approximate mathematical function for the black-body spectrum, which gave a simple empirical formula for long wavelengths. Planck tried to find a mathematical expression that could reproduce Wien's law (for short wavelengths) and the empirical formula (for long wavelengths). This expression included a constant, h, which is thought to stand for Hilfsgröße ("auxiliary quantity"), and subsequently became known as the Planck constant. The expression formulated by Planck showed that the spectral radiance per unit frequency of a body for frequency ν at absolute temperature T is given by B_ν(ν, T) = (2hν^3/c^2) · 1/(e^(hν/(k_B T)) − 1), where k_B is the Boltzmann constant, h is the Planck constant, and c is the speed of light in the medium, whether material or vacuum. Planck soon realized that his solution was not unique. There were several different solutions, each of which gave a different value for the entropy of the oscillators. To save his theory, Planck resorted to using the then-controversial theory of statistical mechanics, which he described as "an act of desperation". 
One of his new boundary conditions was With this new condition, Planck had imposed the quantization of the energy of the oscillators, in his own words, "a purely formal assumption ... actually I did not think much about it", but one that would revolutionize physics. Applying this new approach to Wien's displacement law showed that the "energy element" must be proportional to the frequency of the oscillator, the first version of what is now sometimes termed the "Planck–Einstein relation": Planck was able to calculate the value of from experimental data on black-body radiation: his result, , is within 1.2% of the currently defined value. He also made the first determination of the Boltzmann constant from the same data and theory. Development and application The black-body problem was revisited in 1905, when Lord Rayleigh and James Jeans (together) and Albert Einstein independently proved that classical electromagnetism could never account for the observed spectrum. These proofs are commonly known as the "ultraviolet catastrophe", a name coined by Paul Ehrenfest in 1911. They contributed greatly (along with Einstein's work on the photoelectric effect) in convincing physicists that Planck's postulate of quantized energy levels was more than a mere mathematical formalism. The first Solvay Conference in 1911 was devoted to "the theory of radiation and quanta". Photoelectric effect The photoelectric effect is the emission of electrons (called "photoelectrons") from a surface when light is shone on it. It was first observed by Alexandre Edmond Becquerel in 1839, although credit is usually reserved for Heinrich Hertz, who published the first thorough investigation in 1887. Another particularly thorough investigation was published by Philipp Lenard (Lénárd Fülöp) in 1902. Einstein's 1905 paper discussing the effect in terms of light quanta would earn him the Nobel Prize in 1921, after his predictions had been confirmed by the experimental work of Robert Andrews Millikan. The Nobel committee awarded the prize for his work on the photo-electric effect, rather than relativity, both because of a bias against purely theoretical physics not grounded in discovery or experiment, and dissent amongst its members as to the actual proof that relativity was real. Before Einstein's paper, electromagnetic radiation such as visible light was considered to behave as a wave: hence the use of the terms "frequency" and "wavelength" to characterize different types of radiation. The energy transferred by a wave in a given time is called its intensity. The light from a theatre spotlight is more intense than the light from a domestic lightbulb; that is to say that the spotlight gives out more energy per unit time and per unit space (and hence consumes more electricity) than the ordinary bulb, even though the color of the light might be very similar. Other waves, such as sound or the waves crashing against a seafront, also have their intensity. However, the energy account of the photoelectric effect did not seem to agree with the wave description of light. The "photoelectrons" emitted as a result of the photoelectric effect have a certain kinetic energy, which can be measured. 
This kinetic energy (for each photoelectron) is independent of the intensity of the light, but depends linearly on the frequency; and if the frequency is too low (corresponding to a photon energy that is less than the work function of the material), no photoelectrons are emitted at all, unless a plurality of photons, whose energetic sum is greater than the energy of the photoelectrons, acts virtually simultaneously (multiphoton effect). Assuming the frequency is high enough to cause the photoelectric effect, a rise in intensity of the light source causes more photoelectrons to be emitted with the same kinetic energy, rather than the same number of photoelectrons to be emitted with higher kinetic energy. Einstein's explanation for these observations was that light itself is quantized; that the energy of light is not transferred continuously as in a classical wave, but only in small "packets" or quanta. The size of these "packets" of energy, which would later be named photons, was to be the same as Planck's "energy element", giving the modern version of the Planck–Einstein relation: E = hf. Einstein's postulate was later proven experimentally: the constant of proportionality between the frequency of incident light and the kinetic energy of photoelectrons was shown to be equal to the Planck constant h. Atomic structure In 1912 John William Nicholson developed an atomic model and found that the angular momentum of the electrons in the model was related to h/2π. Nicholson's nuclear quantum atomic model influenced the development of Niels Bohr's atomic model and Bohr quoted him in his 1913 paper on the Bohr model of the atom. Bohr's model went beyond Planck's abstract harmonic oscillator concept: an electron in a Bohr atom could only have certain defined energies E_n, defined by E_n = −hcR∞/n^2, where c is the speed of light in vacuum, R∞ is an experimentally determined constant (the Rydberg constant) and n is a positive integer (n = 1, 2, 3, ...). This approach also allowed Bohr to account for the Rydberg formula, an empirical description of the atomic spectrum of hydrogen, and to account for the value of the Rydberg constant in terms of other fundamental constants. In discussing angular momentum of the electrons in his model Bohr introduced the quantity ħ = h/(2π), now known as the reduced Planck constant, as the quantum of angular momentum. Uncertainty principle The Planck constant also occurs in statements of Werner Heisenberg's uncertainty principle. Given numerous particles prepared in the same state, the uncertainty in their position, Δx, and the uncertainty in their momentum, Δp, obey Δx·Δp ≥ ħ/2, where the uncertainty is given as the standard deviation of the measured value from its expected value. There are several other such pairs of physically measurable conjugate variables which obey a similar rule. One example is time vs. energy. The inverse relationship between the uncertainty of the two conjugate variables forces a tradeoff in quantum experiments, as measuring one quantity more precisely results in the other quantity becoming imprecise. In addition to some assumptions underlying the interpretation of certain values in the quantum mechanical formulation, one of the fundamental cornerstones to the entire theory lies in the commutator relationship between the position operator x̂ and the momentum operator p̂: [x̂_i, p̂_j] = iħδ_ij, where δ_ij is the Kronecker delta. Photon energy The Planck relation connects the particular photon energy E with its associated wave frequency f: E = hf. This energy is extremely small in terms of ordinarily perceived everyday objects. 
Since the frequency f, wavelength λ, and speed of light c are related by c = fλ, the relation can also be expressed as E = hc/λ. de Broglie wavelength In 1923, Louis de Broglie generalized the Planck–Einstein relation by postulating that the Planck constant represents the proportionality between the momentum and the quantum wavelength of not just the photon, but the quantum wavelength of any particle. This was confirmed by experiments soon afterward. This holds throughout the quantum theory, including electrodynamics. The de Broglie wavelength λ of the particle is given by λ = h/p, where p denotes the linear momentum of a particle, such as a photon, or any other elementary particle. The energy of a photon with angular frequency ω = 2πf is given by E = ħω, while its linear momentum relates to p = ħk, where k is an angular wavenumber. These two relations are the temporal and spatial parts of the special relativistic expression using 4-vectors. Statistical mechanics Classical statistical mechanics requires the existence of h (but does not define its value). Eventually, following upon Planck's discovery, it was speculated that physical action could not take on an arbitrary value, but instead was restricted to integer multiples of a very small quantity, the "[elementary] quantum of action", now called the Planck constant. This was a significant conceptual part of the so-called "old quantum theory" developed by physicists including Bohr, Sommerfeld, and Ishiwara, in which particle trajectories exist but are hidden, but quantum laws constrain them based on their action. This view has been replaced by fully modern quantum theory, in which definite trajectories of motion do not even exist; rather, the particle is represented by a wavefunction spread out in space and in time. Related to this is the concept of energy quantization which existed in old quantum theory and also exists in altered form in modern quantum physics. Classical physics cannot explain quantization of energy. Dimension and value The Planck constant has the same dimensions as action and as angular momentum. The Planck constant is fixed at h = 6.62607015×10^−34 J⋅s as part of the definition of the SI units. This value is used to define the SI unit of mass, the kilogram: "the kilogram [...] is defined by taking the fixed numerical value of h to be 6.62607015×10^−34 when expressed in the unit J⋅s, which is equal to kg⋅m^2⋅s^−1, where the metre and the second are defined in terms of speed of light and duration of hyperfine transition of the ground state of an unperturbed caesium-133 atom." Technologies of mass metrology such as the Kibble balance refine the value of the kilogram by applying the fixed value of the Planck constant. Significance of the value The Planck constant is one of the smallest constants used in physics. This reflects the fact that on a scale adapted to humans, where energies are typically of the order of kilojoules and times are typically of the order of seconds or minutes, the Planck constant is very small. When the product of energy and time for a physical event approaches the Planck constant, quantum effects dominate. Equivalently, the order of the Planck constant reflects the fact that everyday objects and systems are made of a large number of microscopic particles. For example, in green light (with a wavelength of 555 nanometres or a frequency of 540 THz) each photon has an energy E = hf ≈ 3.58×10^−19 J. That is a very small amount of energy in terms of everyday experience, but everyday experience is not concerned with individual photons any more than with individual atoms or molecules. 
An amount of light more typical in everyday experience (though much larger than the smallest amount perceivable by the human eye) is the energy of one mole of photons; its energy can be computed by multiplying the photon energy by the Avogadro constant, with the result of , about the food energy in three apples. Reduced Planck constant Many equations in quantum physics are customarily written using the reduced Planck constant, equal to and denoted (pronounced h-bar). The fundamental equations look simpler when written using as opposed to and it is usually rather than that gives the most reliable results when used in order-of-magnitude estimates. For example, using dimensional analysis to estimate the ionization energy of a hydrogen atom, the relevant parameters that determine the ionization energy are the mass of the electron the electron charge and either the Planck constant or the reduced Planck constant : Since both constants have the same dimensions, they will enter the dimensional analysis in the same way, but with the estimate is within a factor of two, while with the error is closer to Names and symbols The reduced Planck constant is known by many other names: reduced Planck's constant ), the rationalized Planck constant (or rationalized Planck's constant , the Dirac constant (or Dirac's constant ), the Dirac (or Dirac's ), the Dirac (or Dirac's ), and h-bar. It is also common to refer to this as "Planck's constant" while retaining the relationship . By far the most common symbol for the reduced Planck constant is . However, there are some sources that denote it by instead, in which case they usually refer to it as the "Dirac " (or "Dirac's "). History The combination appeared in Niels Bohr's 1913 paper, where it was denoted by For the next 15 years, the combination continued to appear in the literature, but normally without a separate symbol. Then, in 1926, in their seminal papers, Schrödinger and Dirac again introduced special symbols for it: in the case of Schrödinger, and in the case of Dirac. Dirac continued to use in this way until 1930, when he introduced the symbol in his book The Principles of Quantum Mechanics. See also Committee on Data of the International Science Council International System of Units Introduction to quantum mechanics List of scientists whose names are used in physical constants Planck units Wave–particle duality Hashgraph Notes References Citations Sources External links "The role of the Planck constant in physics" – presentation at 26th CGPM meeting at Versailles, France, November 2018 when voting took place. "The Planck constant and its units" – presentation at the 35th Symposium on Chemical Physics at the University of Waterloo, Waterloo, Ontario, Canada, November 3 2019. Fundamental constants 1900 in science Max Planck
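As a quick numerical check of the green-light figures quoted above (added here as an illustration, not part of the original article), the photon energy E = hf and the energy of a mole of such photons can be recomputed directly:

# Recompute the photon-energy example for 555 nm green light.
h = 6.62607015e-34      # Planck constant, J*s (exact by SI definition)
c = 2.99792458e8        # speed of light, m/s (exact)
N_A = 6.02214076e23     # Avogadro constant, 1/mol (exact)
wavelength = 555e-9     # m
f = c / wavelength                  # ~5.40e14 Hz (540 THz)
E_photon = h * f                    # ~3.58e-19 J per photon
E_mole = E_photon * N_A             # ~2.16e5 J, i.e. roughly 216 kJ per mole of photons
print(f"{f:.3e} Hz, {E_photon:.3e} J, {E_mole/1000:.0f} kJ/mol")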
Planck constant
Physics
3,285
71,739,928
https://en.wikipedia.org/wiki/Cliff%20Obrecht
Clifford Obrecht (born 1985 or 1986) is an Australian billionaire technology entrepreneur, who is the co-founder (with Melanie Perkins) and chief operating officer (COO) of Canva and owns 18% of the company. Early life and education Clifford was born to Stan Obrecht, a government employee and Mary Obrecht, a school teacher. He grew up in , a suburb to the north of Perth. Obrecht earned a degree from The University of Western Australia. Personal life In January 2021, he married Melanie Perkins on Rottnest Island. Net worth Obrecht first appeared on The Australian Financial Review Rich List in 2020 with a net worth of 3.43 billion. , The Australian Financial Review assessed his and Perkins' joint net worth as 13.18 billion on the 2023 Rich List; making them the ninth wealthiest Australians. As of September 2022, Forbes assessed his net worth at 6.5 billion. Notes : Obrecht's net worth is assessed in Financial Review Rich List as being held jointly with his spouse and business partner, Melanie Perkins. References 1980s births Year of birth missing (living people) Living people Australian billionaires Australian company founders Technology company founders Businesspeople from Perth, Western Australia 21st-century Australian businesspeople University of Western Australia alumni
Cliff Obrecht
Technology
259
69,562,680
https://en.wikipedia.org/wiki/Samarium%28III%29%20phosphide
Samarium(III) phosphide is an inorganic compound of samarium and phosphorus with the chemical formula SmP. Synthesis Samarium(III) phosphide can be obtained by heating samarium and phosphorus: 4 Sm + P4 → 4 SmP Physical properties Samarium(III) phosphide forms crystals of a cubic system, space group Fm3m, cell size a = 0.5760 nm, Z = 4, with a structure similar to that of sodium chloride (NaCl). The compound exists in the temperature range of 1315–2020 °C and has a homogeneity region described by the formula SmPx with x ranging from 0.982 to 1. Chemical properties Samarium(III) phosphide readily dissolves in nitric acid. Uses Samarium(III) phosphide is a semiconductor used in high-power, high-frequency applications and in laser diodes. References Phosphides Samarium(III) compounds Semiconductors Rock salt crystal structure
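As an illustrative use of the crystallographic data quoted above, the C++ sketch below estimates the X-ray density of SmP from the cell edge and Z of the rock-salt cell; the standard atomic masses are assumed, and the result is an estimate rather than a measured value.

```cpp
#include <cstdio>

int main() {
    // X-ray density of SmP from the cubic rock-salt cell: a = 0.5760 nm, Z = 4.
    const double NA   = 6.02214076e23;   // Avogadro constant, 1/mol
    const double M_Sm = 150.36;          // molar mass of Sm, g/mol (standard value)
    const double M_P  = 30.974;          // molar mass of P, g/mol (standard value)

    double a   = 0.5760e-7;              // cell edge in cm
    double V   = a * a * a;              // cell volume, cm^3
    double rho = 4.0 * (M_Sm + M_P) / (NA * V);
    printf("estimated density of SmP: %.2f g/cm^3\n", rho);  // roughly 6.3 g/cm^3
    return 0;
}
```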
Samarium(III) phosphide
Physics,Chemistry,Materials_science,Engineering
201
434,188
https://en.wikipedia.org/wiki/Bioremediation
Bioremediation broadly refers to any process wherein a biological system (typically bacteria, microalgae, fungi in mycoremediation, and plants in phytoremediation), living or dead, is employed for removing environmental pollutants from air, water, soil, flue gasses, industrial effluents etc., in natural or artificial settings. The natural ability of organisms to adsorb, accumulate, and degrade common and emerging pollutants has attracted the use of biological resources in treatment of contaminated environment. In comparison to conventional physicochemical treatment methods bioremediation may offer advantages as it aims to be sustainable, eco-friendly, cheap, and scalable. Most bioremediation is inadvertent, involving native organisms. Research on bioremediation is heavily focused on stimulating the process by inoculation of a polluted site with organisms or supplying nutrients to promote their growth. Environmental remediation is an alternative to bioremediation. While organic pollutants are susceptible to biodegradation, heavy metals cannot be degraded, but rather oxidized or reduced. Typical bioremediations involves oxidations. Oxidations enhance the water-solubility of organic compounds and their susceptibility to further degradation by further oxidation and hydrolysis. Ultimately biodegradation converts hydrocarbons to carbon dioxide and water. For heavy metals, bioremediation offers few solutions. Metal-containing pollutant can be removed, at least partially, with varying bioremediation techniques. The main challenge to bioremediations is rate: the processes are slow. Bioremediation techniques can be classified as (i) in situ techniques, which treat polluted sites directly, vs (ii) ex situ techniques which are applied to excavated materials. In both these approaches, additional nutrients, vitamins, minerals, and pH buffers are added to enhance the growth and metabolism of the microorganisms. In some cases, specialized microbial cultures are added (biostimulation). Some examples of bioremediation related technologies are phytoremediation, bioventing, bioattenuation, biosparging, composting (biopiles and windrows), and landfarming. Other remediation techniques include thermal desorption, vitrification, air stripping, bioleaching, rhizofiltration, and soil washing. Biological treatment, bioremediation, is a similar approach used to treat wastes including wastewater, industrial waste and solid waste. The end goal of bioremediation is to remove harmful compounds to improve soil and water quality. In situ techniques Bioventing Bioventing is a process that increases the oxygen or air flow into the unsaturated zone of the soil, this in turn increases the rate of natural in situ degradation of the targeted hydrocarbon contaminant. Bioventing, an aerobic bioremediation, is the most common form of oxidative bioremediation process where oxygen is provided as the electron acceptor for oxidation of petroleum, polyaromatic hydrocarbons (PAHs), phenols, and other reduced pollutants. Oxygen is generally the preferred electron acceptor because of the higher energy yield and because oxygen is required for some enzyme systems to initiate the degradation process. Microorganisms can degrade a wide variety of hydrocarbons, including components of gasoline, kerosene, diesel, and jet fuel. Under ideal aerobic conditions, the biodegradation rates of the low- to moderate-weight aliphatic, alicyclic, and aromatic compounds can be very high. 
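The oxygen budget behind aerobic biodegradation can be illustrated with a rough calculation. The C++ sketch below assumes octane as a model hydrocarbon and a typical dissolved-oxygen concentration of about 9 mg/L (the solubility limit discussed later in this article); it estimates how much oxygen, and how much aerated water, full mineralisation of 1 kg of hydrocarbon would require. The numbers are illustrative only.

```cpp
#include <cstdio>

int main() {
    // Model compound: octane, C8H18 + 12.5 O2 -> 8 CO2 + 9 H2O (full mineralisation).
    const double M_fuel = 114.23;        // g/mol, octane
    const double M_O2   = 32.00;         // g/mol
    double o2_per_g_fuel = 12.5 * M_O2 / M_fuel;   // ~3.5 g O2 per g hydrocarbon

    double fuel_kg = 1.0;                          // assumed mass to degrade
    double o2_kg   = fuel_kg * o2_per_g_fuel;

    // Water in equilibrium with air holds only ~8-10 mg/L of dissolved oxygen,
    // so delivering this much O2 by recirculating aerated water takes a lot of water.
    double do_mg_per_L = 9.0;                      // assumed dissolved O2
    double water_m3 = o2_kg * 1e6 / do_mg_per_L / 1000.0;
    printf("O2 needed: %.1f kg; aerated water needed: ~%.0f m^3\n", o2_kg, water_m3);
    return 0;
}
```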
As molecular weight of the compound increases, the resistance to biodegradation increases simultaneously. This results in higher contaminated volatile compounds due to their high molecular weight and an increased difficulty to remove from the environment. Most bioremediation processes involve oxidation-reduction reactions where either an electron acceptor (commonly oxygen) is added to stimulate oxidation of a reduced pollutant (e.g. hydrocarbons) or an electron donor (commonly an organic substrate) is added to reduce oxidized pollutants (nitrate, perchlorate, oxidized metals, chlorinated solvents, explosives and propellants). In both these approaches, additional nutrients, vitamins, minerals, and pH buffers may be added to optimize conditions for the microorganisms. In some cases, specialized microbial cultures are added (bioaugmentation) to further enhance biodegradation. Approaches for oxygen addition below the water table include recirculating aerated water through the treatment zone, addition of pure oxygen or peroxides, and air sparging. Recirculation systems typically consist of a combination of injection wells or galleries and one or more recovery wells where the extracted groundwater is treated, oxygenated, amended with nutrients and re-injected. However, the amount of oxygen that can be provided by this method is limited by the low solubility of oxygen in water (8 to 10 mg/L for water in equilibrium with air at typical temperatures). Greater amounts of oxygen can be provided by contacting the water with pure oxygen or addition of hydrogen peroxide (H2O2) to the water. In some cases, slurries of solid calcium or magnesium peroxide are injected under pressure through soil borings. These solid peroxides react with water releasing H2O2 which then decomposes releasing oxygen. Air sparging involves the injection of air under pressure below the water table. The air injection pressure must be great enough to overcome the hydrostatic pressure of the water and resistance to air flow through the soil. Biostimulation Bioremediation can be carried out by bacteria that are naturally present. In biostimulation, the population of these helpful bacteria can be increased by adding nutrients. Bacteria can in principle be used to degrade hydrocarbons. Specific to marine oil spills, nitrogen and phosphorus have been key nutrients in biodegradation. The bioremediation of hydrocarbons suffers from low rates. Bioremediation can involve the action of microbial consortium. Within the consortium, the product of one species could be the substrate for another species. Anaerobic bioremediation can in principle be employed to treat a range of oxidized contaminants including chlorinated ethylenes (PCE, TCE, DCE, VC), chlorinated ethanes (TCA, DCA), chloromethanes (CT, CF), chlorinated cyclic hydrocarbons, various energetics (e.g., perchlorate, RDX, TNT), and nitrate. This process involves the addition of an electron donor to: 1) deplete background electron acceptors including oxygen, nitrate, oxidized iron and manganese and sulfate; and 2) stimulate the biological and/or chemical reduction of the oxidized pollutants. The choice of substrate and the method of injection depend on the contaminant type and distribution in the aquifer, hydrogeology, and remediation objectives. Substrate can be added using conventional well installations, by direct-push technology, or by excavation and backfill such as permeable reactive barriers (PRB) or biowalls. 
Slow-release products composed of edible oils or solid substrates tend to stay in place for an extended treatment period. Soluble substrates or soluble fermentation products of slow-release substrates can potentially migrate via advection and diffusion, providing broader but shorter-lived treatment zones. The added organic substrates are first fermented to hydrogen (H2) and volatile fatty acids (VFAs). The VFAs, including acetate, lactate, propionate and butyrate, provide carbon and energy for bacterial metabolism. Bioattenuation During bioattenuation, biodegradation occurs naturally without the addition of nutrients or bacteria. The indigenous microbes present will determine the metabolic activity and act as a natural attenuation. While there is no anthropogenic involvement in bioattenuation, the contaminated site must still be monitored. Biosparging Biosparging is a groundwater remediation process in which oxygen, and possibly nutrients, are injected. When oxygen is injected, indigenous bacteria are stimulated to increase the rate of degradation. However, biosparging focuses on saturated contaminated zones, specifically related to ground water remediation. UNICEF, power producers, bulk water suppliers, and local governments are early adopters of low cost bioremediation, such as aerobic bacteria tablets which are simply dropped into water. Ex situ techniques Biopiles Biopiles, similar to bioventing, are used to remove petroleum pollutants by introducing aerobic, hydrocarbon-degrading microorganisms to contaminated soils. However, the soil is excavated and piled with an aeration system. This aeration system enhances microbial activity by introducing air under positive pressure or extracting it under negative pressure. Windrows Windrow systems are similar to composting techniques, in which the soil is periodically turned in order to enhance aeration. This periodic turning also allows contaminants present in the soil to be uniformly distributed, which accelerates the process of bioremediation. Landfarming Landfarming, or land treatment, is a method commonly used for sludge spills. This method disperses contaminated soil and aerates it by cyclical turning. This process is an above-ground application, and the contaminated soils are required to be shallow in order for microbial activity to be stimulated. However, if the contamination is deeper than 5 feet, then the soil is required to be excavated to above ground. While it is an ex situ technique, it can also be considered an in situ technique, as landfarming can be performed at the site of contamination. In situ vs. Ex situ Ex situ techniques are often more expensive because of excavation and transportation costs to the treatment facility, while in situ techniques are performed at the site of contamination so they only have installation costs. While in situ treatment costs less, it also offers less ability to determine the scale and spread of the pollutant. The pollutant ultimately determines which bioremediation method to use. The depth and spread of the pollutant are other important factors. Heavy metals Heavy metals are introduced into the environment by both anthropogenic activities and natural factors. Anthropogenic activities include industrial emissions, electronic waste, and mining. Natural factors include mineral weathering, soil erosion, and forest fires. Heavy metals including cadmium, chromium, lead and uranium are unlike organic compounds and cannot be biodegraded. 
However, bioremediation processes can potentially be used to minimize the mobility of these materials in the subsurface, lowering the potential for human and environmental exposure. Heavy metals from these sources are predominantly present in water bodies due to runoff, where they are taken up by marine fauna and flora. Hexavalent chromium (Cr[VI]) and uranium (U[VI]) can be reduced to less mobile and/or less toxic forms (e.g., Cr[III], U[IV]). Similarly, reduction of sulfate to sulfide (sulfidogenesis) can be used to immobilize certain metals (e.g., zinc, cadmium). The mobility of certain metals including chromium (Cr) and uranium (U) varies depending on the oxidation state of the material. Microorganisms can be used to lower the toxicity and mobility of chromium by reducing hexavalent chromium, Cr(VI), to trivalent Cr(III). Reduction of the more mobile U(VI) species affords the less mobile U(IV) derivatives. Microorganisms are used in this process because the reduction rate of these metals is often slow in the absence of microbial interactions. Research is also underway to develop methods to remove metals from water by enhancing the sorption of the metal to cell walls. This approach has been evaluated for treatment of cadmium, chromium, and lead. Genetically modified bacteria have also been explored for use in the sequestration of arsenic. Phytoextraction processes concentrate contaminants in the biomass for subsequent removal. Metal extractions can in principle be performed in situ or ex situ, where in situ is preferred since it avoids the expense of excavating the substrate. Bioremediation is not specific to metals. In 2010 there was a massive oil spill in the Gulf of Mexico. Populations of bacteria and archaea were used to rejuvenate the coast after the oil spill. These microorganisms have over time developed metabolic networks that can utilize hydrocarbons such as oil and petroleum as a source of carbon and energy. Microbial bioremediation is a very effective modern technique for restoring natural systems by removing toxins from the environment. Pesticides Of the many ways to deal with pesticide contamination, bioremediation promises to be one of the more effective. Many sites around the world are contaminated with agrichemicals. These agrichemicals often resist biodegradation by design, harming all manner of organic life and causing long-term health issues such as cancer, rashes, blindness, paralysis, and mental illness. An example is lindane, which was a commonly used insecticide in the 20th century. Long-term exposure poses a serious threat to humans and the surrounding ecosystem. Lindane reduces the potential of beneficial bacteria in the soil such as nitrogen-fixing cyanobacteria. It also causes central nervous system issues in smaller mammals, such as seizures, dizziness, and even death. What makes it so harmful to these organisms is how quickly it is distributed through the brain and fatty tissues. While lindane has been mostly limited to specific uses, it is still produced and used around the world. Actinobacteria have been promising candidates for in situ techniques specifically aimed at removing pesticides. When certain strains of Actinobacteria have been grouped together, their efficiency in degrading pesticides is enhanced. The technique is also reusable and strengthens through further use: by limiting the migration space of these cells, they can be targeted at specific areas without fully consuming their degradative capacity. 
Despite encouraging results, Actinobacteria have only been used in controlled lab settings, and further development is needed to establish the cost-effectiveness and scalability of their use. Limitations of bioremediation Bioremediation can be used to mineralize organic pollutants, to partially transform the pollutants, or to alter their mobility. Heavy metals and radionuclides generally cannot be biodegraded, but can be bio-transformed to less mobile forms. In some cases, microbes do not fully mineralize the pollutant, potentially producing a more toxic compound. For example, under anaerobic conditions, the reductive dehalogenation of TCE may produce dichloroethylene (DCE) and vinyl chloride (VC), which are suspected or known carcinogens. However, the microorganism Dehalococcoides can further reduce DCE and VC to the non-toxic product ethene. The molecular pathways for bioremediation are of considerable interest. In addition, knowing these pathways will help develop new technologies that can deal with sites that have uneven distributions of a mixture of contaminants. Biodegradation requires a microbial population with the metabolic capacity to degrade the pollutant. The biological processes used by these microbes are highly specific; therefore, many environmental factors must be taken into account and regulated as well. It can be difficult to extrapolate the results from small-scale test studies to large field operations. In many cases, bioremediation takes more time than other alternatives such as land filling and incineration. Bioventing, for example, is an inexpensive way to bioremediate contaminated sites; however, the process is extensive and can take a few years to decontaminate a site. Another major drawback is finding the right species to perform bioremediation. In order to prevent the introduction and spread of an invasive species into the ecosystem, an indigenous species is needed, as well as one plentiful enough to clean the whole site without exhausting the population. Finally, the species should be resilient enough to withstand the environmental conditions. These specific criteria may make it difficult to perform bioremediation on a contaminated site. In agricultural industries, the use of pesticides is a top factor in direct soil contamination and runoff water contamination. The main limitation on the remediation of pesticides is their low bioavailability. Altering the pH and temperature of the contaminated soil is one way to increase bioavailability, which, in turn, increases degradation of harmful compounds. The compound acrylonitrile is commonly produced in industrial settings but adversely contaminates soils. Microorganisms containing nitrile hydratases (NHase) degraded harmful acrylonitrile compounds into non-polluting substances. Since experience with harmful contaminants is limited, laboratory practices are required to evaluate effectiveness, treatment designs, and estimate treatment times. Bioremediation processes may take several months to several years depending on the size of the contaminated area. Genetic engineering The use of genetic engineering to create organisms specifically designed for bioremediation is under preliminary research. Two categories of genes can be inserted into the organism: degradative genes, which encode proteins required for the degradation of pollutants, and reporter genes, which encode proteins able to monitor pollution levels. 
Numerous members of Pseudomonas have been modified with the lux gene for the detection of the polyaromatic hydrocarbon naphthalene. A field test for the release of the modified organism has been successful on a moderately large scale. There are concerns surrounding release and containment of genetically modified organisms into the environment due to the potential of horizontal gene transfer. Genetically modified organisms are classified and controlled under the Toxic Substances Control Act of 1976 under United States Environmental Protection Agency. Measures have been created to address these concerns. Organisms can be modified such that they can only survive and grow under specific sets of environmental conditions. In addition, the tracking of modified organisms can be made easier with the insertion of bioluminescence genes for visual identification. Genetically modified organisms have been created to treat oil spills and break down certain plastics (PET). Additive manufacturing Additive manufacturing technologies such as bioprinting offer distinctive benefits that can be leveraged in bioremediation to develop structures with characteristics tailored to biological systems and environmental cleanup needs, and even though the adoption of this technology in bioremediation is in its early stages, the area is seeing massive growth. See also Bioremediation of radioactive waste Biosurfactant Chelation Dutch pollutant standards Folkewall In situ chemical oxidation In situ chemical reduction List of environment topics Mega Borg Oil Spill Microbial biodegradation Mycoremediation Mycorrhizal bioremediation Pleurotus Phytoremediation Pseudomonas putida (used for degrading oil) Restoration ecology Xenocatabolism References External links Phytoremediation, hosted by the Missouri Botanical Garden To remediate or to not remediate? Anaerobic Bioremediation Biotechnology Environmental soil science Environmental engineering Environmental terminology Conservation projects Ecological restoration Soil contamination Radioactive waste
Bioremediation
Chemistry,Technology,Engineering,Biology,Environmental_science
3,979
3,117,974
https://en.wikipedia.org/wiki/Microsoft%20Transaction%20Server
Microsoft Transaction Server (MTS) was software that provided services to Component Object Model (COM) software components, to make it easier to create large distributed applications. The major services provided by MTS were automated transaction management, instance management (or just-in-time activation) and role-based security. MTS is considered to be the first major software to implement aspect-oriented programming. MTS was first offered in the Windows NT 4.0 Option Pack. In Windows 2000, MTS was enhanced and better integrated with the operating system and COM, and was renamed COM+. COM+ added object pooling, loosely-coupled events and user-defined simple transactions (compensating resource managers) to the features of MTS. COM+ is still provided with Windows Server 2003 and Windows Server 2008, and the Microsoft .NET Framework provides a wrapper for COM+ in the EnterpriseServices namespace. The Windows Communication Foundation (WCF) provides a way of calling COM+ applications with web services. However, COM+ is based on COM, and Microsoft's strategic software architecture is now web services and .NET, not COM. There are pure .NET-based alternatives for many of the features provided by COM+, and in the long term it is likely COM+ will be phased out. Architecture A basic MTS architecture comprises: the MTS Executive (mtxex.dll) the Factory Wrappers and Context Wrappers for each component the MTS Server Component MTS clients auxiliary systems like: COM runtime services the Service Control Manager (SCM) the Microsoft Distributed Transaction Coordinator (MS-DTC) the Microsoft Message Queue (MSMQ) the COM-Transaction Integrator (COM-TI) etc. COM components that run under the control of the MTS Executive are called MTS components. In COM+, they are referred to as COM+ Applications. MTS components are in-process DLLs. MTS components are deployed and run in the MTS Executive which manages them. As with other COM components, an object implementing the IClassFactory interface serves as a Factory Object to create new instances of these components. MTS inserts a Factory Wrapper Object and an Object Wrapper between the actual MTS object and its client. This interposing of wrappers is called interception. Whenever the client makes a call to the MTS component, the wrappers (Factory and Object) intercept the call and inject their own instance-management algorithm called the Just-In-Time Activation (JITA) into the call. The wrapper then makes this call on the actual MTS component. Interception was considered difficult at the time due to a lack of extensible metadata. In addition, based on the information from the component's deployment properties, transaction logic and security checks also take place in these wrapper objects. For every MTS-hosted object, there also exists a Context Object, which implements the IObjectContext interface. The Context Object maintains specific information about that object, such as its transactional information, security information and deployment information. Methods in the MTS component call into the Context Object through its IObjectContext interface. MTS does not create the actual middle-tier MTS object until the call from a client reaches the container. Since the object is not running all the time, it does not use up a lot of system resources (even though an object wrapper and skeleton for the object do persist). As soon as the call comes in from the client, the MTS wrapper process activates its Instance Management algorithm called JITA. 
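The interception and just-in-time activation idea can be sketched in ordinary C++. The snippet below is only a self-contained model of the pattern, not the real MTS API: an actual component would be a COM DLL obtaining its IObjectContext through GetObjectContext() from the MTS headers, and the names used here (ContextObject, OrderComponent, InterceptedPlaceOrder) are illustrative stand-ins.

```cpp
#include <cstdio>
#include <memory>

// Stand-in for the per-object Context Object (cf. IObjectContext in MTS).
struct ContextObject {
    bool finished = false;
    bool aborted  = false;
    void SetComplete() { finished = true; }                  // vote to commit
    void SetAbort()    { finished = true; aborted = true; }  // vote to roll back
};

// The business component itself; it keeps no state between calls.
struct OrderComponent {
    void PlaceOrder(ContextObject& ctx, int quantity) {
        if (quantity <= 0) {          // business rule fails: vote to abort
            ctx.SetAbort();
            return;
        }
        printf("order for %d items written to the database\n", quantity);
        ctx.SetComplete();            // work done: vote to commit, allow deactivation
    }
};

// Stand-in for the Object Wrapper: it intercepts the client call, creates the
// component just in time, forwards the call, and destroys the component after
// the call has voted (the JITA, stateless-component model described here).
void InterceptedPlaceOrder(int quantity) {
    ContextObject ctx;
    auto component = std::make_unique<OrderComponent>();  // just-in-time activation
    component->PlaceOrder(ctx, quantity);
    component.reset();                                     // deactivate immediately
    printf("transaction outcome: %s\n", ctx.aborted ? "aborted" : "committed");
}

int main() {
    InterceptedPlaceOrder(3);    // valid order -> committed
    InterceptedPlaceOrder(-1);   // invalid order -> aborted
    return 0;
}
```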
The actual MTS object is created "just in time" to service the request from the wrapper. And when the request is serviced and the reply is sent back to the client, the component either calls SetComplete()/SetAbort(), or its transaction ends, or the client calls Release() on the reference to the object, and the actual MTS object is destroyed. In short, MTS uses a stateless component model. Generally, when a client requests services from a typical MTS component, the following sequence occurs on the server : acquire a database connection read the component's state from either the Shared Property Manager or from an already existing object or from the client perform the business logic write the component's changed state, if any, back to the database close and release the database connection vote on the result of the transaction. MTS components do not directly commit transactions, rather they communicate their success or failure to MTS. It is thus possible to implement high-latency resources as asynchronous resource pools, which should take advantage of the stateless JIT activation afforded by the middleware server. References External links and references Quick Tour of Microsoft Transaction Server Windows components Windows communication and services Inter-process communication Microsoft application programming interfaces Component-based software engineering Transaction processing
Microsoft Transaction Server
Technology
1,002
379,185
https://en.wikipedia.org/wiki/Gastrovascular%20cavity
The gastrovascular cavity is the primary organ of digestion and circulation in two major animal phyla: the Coelenterates or cnidarians (including jellyfish and corals) and Platyhelminthes (flatworms). The cavity may be extensively branched into a system of canals. In cnidarians, the gastrovascular system is also known as the coelenteron, and is commonly known as a "blind gut" or "blind sac", since food enters and waste exits through the same orifice. The radially symmetrical cnidarians have a sac-like body in two distinct layers, the epidermis and gastrodermis, with a jellylike layer called the mesoglea between. Extracellular digestion takes place within the central cavity of the sac-like body. This cavity has only one opening to the outside which, in most cnidarians, is surrounded by tentacles for capturing prey. References Cnidarian biology Digestive system
Gastrovascular cavity
Biology
205
37,441
https://en.wikipedia.org/wiki/Nitrous%20oxide
Nitrous oxide (dinitrogen oxide or dinitrogen monoxide), commonly known as laughing gas, nitrous, or factitious air, among others, is a chemical compound, an oxide of nitrogen with the formula . At room temperature, it is a colourless non-flammable gas, and has a slightly sweet scent and taste. At elevated temperatures, nitrous oxide is a powerful oxidiser similar to molecular oxygen. Nitrous oxide has significant medical uses, especially in surgery and dentistry, for its anaesthetic and pain-reducing effects, and it is on the World Health Organization's List of Essential Medicines. Its colloquial name, "laughing gas", coined by Humphry Davy, describes the euphoric effects upon inhaling it, which cause it to be used as a recreational drug inducing a brief "high". When abused chronically, it may cause neurological damage through inactivation of vitamin B12. It is also used as an oxidiser in rocket propellants and motor racing fuels, and as a frothing gas for whipped cream. Nitrous oxide is also an atmospheric pollutant, with a concentration of 333 parts per billion (ppb) in 2020, increasing at 1 ppb annually. It is a major scavenger of stratospheric ozone, with an impact comparable to that of CFCs. About 40% of human-caused emissions are from agriculture, as nitrogen fertilisers are digested into nitrous oxide by soil micro-organisms. As the third most important greenhouse gas, nitrous oxide substantially contributes to global warming. Reduction of emissions is an important goal in the politics of climate change. Discovery and early use The gas was first synthesised in 1772 by English natural philosopher and chemist Joseph Priestley who called it dephlogisticated nitrous air (see phlogiston theory) or inflammable nitrous air. Priestley published his discovery in the book Experiments and Observations on Different Kinds of Air (1775), where he described how to produce the preparation of "nitrous air diminished", by heating iron filings dampened with nitric acid. The first important use of nitrous oxide was made possible by Thomas Beddoes and James Watt, who worked together to publish the book Considerations on the Medical Use and on the Production of Factitious Airs (1794). This book was important for two reasons. First, James Watt had invented a novel machine to produce "factitious airs" (including nitrous oxide) and a novel "breathing apparatus" to inhale the gas. Second, the book also presented the new medical theories by Thomas Beddoes, that tuberculosis and other lung diseases could be treated by inhalation of "Factitious Airs". The machine to produce "Factitious Airs" had three parts: a furnace to burn the needed material, a vessel with water where the produced gas passed through in a spiral pipe (for impurities to be "washed off"), and finally the gas cylinder with a gasometer where the gas produced, "air", could be tapped into portable air bags (made of airtight oily silk). The breathing apparatus consisted of one of the portable air bags connected with a tube to a mouthpiece. With this new equipment being engineered and produced by 1794, the way was paved for clinical trials, which began in 1798 when Thomas Beddoes established the "Pneumatic Institution for Relieving Diseases by Medical Airs" in Hotwells (Bristol). In the basement of the building, a large-scale machine was producing the gases under the supervision of a young Humphry Davy, who was encouraged to experiment with new gases for patients to inhale. 
The first important work of Davy was examination of the nitrous oxide, and the publication of his results in the book: Researches, Chemical and Philosophical (1800). In that publication, Davy notes the analgesic effect of nitrous oxide at page 465 and its potential to be used for surgical operations at page 556. Davy coined the name "laughing gas" for nitrous oxide. Despite Davy's discovery that inhalation of nitrous oxide could relieve a conscious person from pain, another 44 years elapsed before doctors attempted to use it for anaesthesia. The use of nitrous oxide as a recreational drug at "laughing gas parties", primarily arranged for the British upper class, became an immediate success beginning in 1799. While the effects of the gas generally make the user appear stuporous, dreamy and sedated, some people also "get the giggles" in a state of euphoria, and frequently erupt in laughter. One of the earliest commercial producers in the U.S. was George Poe, cousin of the poet Edgar Allan Poe, who also was the first to liquefy the gas. The first time nitrous oxide was used as an anaesthetic drug in the treatment of a patient was when dentist Horace Wells, with assistance by Gardner Quincy Colton and John Mankey Riggs, demonstrated insensitivity to pain from a dental extraction on 11 December 1844. In the following weeks, Wells treated the first 12 to 15 patients with nitrous oxide in Hartford, Connecticut, and, according to his own record, only failed in two cases. In spite of these convincing results having been reported by Wells to the medical society in Boston in December 1844, this new method was not immediately adopted by other dentists. The reason for this was most likely that Wells, in January 1845 at his first public demonstration to the medical faculty in Boston, had been partly unsuccessful, leaving his colleagues doubtful regarding its efficacy and safety. The method did not come into general use until 1863, when Gardner Quincy Colton successfully started to use it in all his "Colton Dental Association" clinics, that he had just established in New Haven and New York City. Over the following three years, Colton and his associates successfully administered nitrous oxide to more than 25,000 patients. Today, nitrous oxide is used in dentistry as an anxiolytic, as an adjunct to local anaesthetic. Nitrous oxide was not found to be a strong enough anaesthetic for use in major surgery in hospital settings, however. Instead, diethyl ether, being a stronger and more potent anaesthetic, was demonstrated and accepted for use in October 1846, along with chloroform in 1847. When Joseph Thomas Clover invented the "gas-ether inhaler" in 1876, however, it became a common practice at hospitals to initiate all anaesthetic treatments with a mild flow of nitrous oxide, and then gradually increase the anaesthesia with the stronger ether or chloroform. Clover's gas-ether inhaler was designed to supply the patient with nitrous oxide and ether at the same time, with the exact mixture being controlled by the operator of the device. It remained in use by many hospitals until the 1930s. Although hospitals today use a more advanced anaesthetic machine, these machines still use the same principle launched with Clover's gas-ether inhaler, to initiate the anaesthesia with nitrous oxide, before the administration of a more powerful anaesthetic. 
Colton's popularisation of nitrous oxide led to its adoption by a number of less than reputable quacksalvers, who touted it as a cure for consumption, scrofula, catarrh and other diseases of the blood, throat and lungs. Nitrous oxide treatment was administered and licensed as a patent medicine by the likes of C. L. Blood and Jerome Harris in Boston and Charles E. Barney of Chicago. Chemical properties and reactions Nitrous oxide is a colourless gas with a faint, sweet odour. Nitrous oxide supports combustion by releasing the dipolar bonded oxygen radical, and can thus relight a glowing splint. is inert at room temperature and has few reactions. At elevated temperatures, its reactivity increases. For example, nitrous oxide reacts with at to give : This reaction is the route adopted by the commercial chemical industry to produce azide salts, which are used as detonators. Mechanism of action The pharmacological mechanism of action of inhaled is not fully known. However, it has been shown to directly modulate a broad range of ligand-gated ion channels, which likely plays a major role. It moderately blocks NMDAR and β-subunit-containing nACh channels, weakly inhibits AMPA, kainate, GABA and 5-HT receptors, and slightly potentiates GABA and glycine receptors. It also has been shown to activate two-pore-domain channels. While affects several ion channels, its anaesthetic, hallucinogenic and euphoriant effects are likely caused mainly via inhibition of NMDA receptor-mediated currents. In addition to its effects on ion channels, may act similarly to nitric oxide (NO) in the central nervous system. Nitrous oxide is 30 to 40 times more soluble than nitrogen. The effects of inhaling sub-anaesthetic doses of nitrous oxide may vary unpredictably with settings and individual differences; however, Jay (2008) suggests that it reliably induces the following states and sensations: Intoxication Euphoria/dysphoria Spatial disorientation Temporal disorientation Reduced pain sensitivity A minority of users also experience uncontrolled vocalisations and muscular spasms. These effects generally disappear minutes after removal of the nitrous oxide source. Anxiolytic effect In behavioural tests of anxiety, a low dose of is an effective anxiolytic. This anti-anxiety effect is associated with enhanced activity of GABA receptors, as it is partially reversed by benzodiazepine receptor antagonists. Mirroring this, animals that have developed tolerance to the anxiolytic effects of benzodiazepines are partially tolerant to . Indeed, in humans given 30% , benzodiazepine receptor antagonists reduced the subjective reports of feeling "high", but did not alter psychomotor performance. Analgesic effect The analgesic effects of are linked to the interaction between the endogenous opioid system and the descending noradrenergic system. When animals are given morphine chronically, they develop tolerance to its pain-killing effects, and this also renders the animals tolerant to the analgesic effects of . Administration of antibodies that bind and block the activity of some endogenous opioids (not β-endorphin) also block the antinociceptive effects of . Drugs that inhibit the breakdown of endogenous opioids also potentiate the antinociceptive effects of . Several experiments have shown that opioid receptor antagonists applied directly to the brain block the antinociceptive effects of , but these drugs have no effect when injected into the spinal cord. 
Apart from an indirect action, nitrous oxide, like morphine also interacts directly with the endogenous opioid system by binding at opioid receptor binding sites. Conversely, α-adrenoceptor antagonists block the pain-reducing effects of when given directly to the spinal cord, but not when applied directly to the brain. Indeed, α-adrenoceptor knockout mice or animals depleted in norepinephrine are nearly completely resistant to the antinociceptive effects of . Apparently -induced release of endogenous opioids causes disinhibition of brainstem noradrenergic neurons, which release norepinephrine into the spinal cord and inhibit pain signalling. Exactly how causes the release of endogenous opioid peptides remains uncertain. Production Various methods of producing nitrous oxide are used. Industrial methods Nitrous oxide is prepared on an industrial scale by carefully heating ammonium nitrate at about 250 °C, which decomposes into nitrous oxide and water vapour. The addition of various phosphate salts favours formation of a purer gas at slightly lower temperatures. This reaction may be difficult to control, resulting in detonation. Laboratory methods The decomposition of ammonium nitrate is also a common laboratory method for preparing the gas. Equivalently, it can be obtained by heating a mixture of sodium nitrate and ammonium sulfate: Another method involves the reaction of urea, nitric acid and sulfuric acid: Direct oxidation of ammonia with a manganese dioxide-bismuth oxide catalyst has been reported: cf. Ostwald process. Hydroxylammonium chloride reacts with sodium nitrite to give nitrous oxide. If the nitrite is added to the hydroxylamine solution, the only remaining by-product is salt water. If the hydroxylamine solution is added to the nitrite solution (nitrite is in excess), however, then toxic higher oxides of nitrogen also are formed: Treating with and HCl also has been demonstrated: Hyponitrous acid decomposes to NO and water with a half-life of 16 days at 25 °C at pH 1–3. Atmospheric occurrence Nitrous oxide is a minor component of Earth's atmosphere and is an active part of the planetary nitrogen cycle. Based on analysis of air samples gathered from sites around the world, its concentration surpassed 330 ppb in 2017. The growth rate of about 1 ppb per year has also accelerated during recent decades. Nitrous oxide's atmospheric abundance has grown more than 20% from a base level of about 270 ppb in 1750. Important atmospheric properties of are summarized in the following table: In 2022 the IPCC reported that: "The human perturbation of the natural nitrogen cycle through the use of synthetic fertilizers and manure, as well as nitrogen deposition resulting from land-based agriculture and fossil fuel burning has been the largest driver of the increase in atmospheric N2O of 31.0 ± 0.5 ppb (10%) between 1980 and 2019." Emissions by source 17.0 (12.2 to 23.5) million tonnes total annual average nitrogen in was emitted in 2007–2016. About 40% of emissions are from humans and the rest are part of the natural nitrogen cycle. The emitted each year by humans has a greenhouse effect equivalent to about 3 billion tonnes of carbon dioxide: for comparison humans emitted 37 billion tonnes of actual carbon dioxide in 2019, and methane equivalent to 9 billion tonnes of carbon dioxide. Most of the emitted into the atmosphere, from natural and anthropogenic sources, is produced by microorganisms such as denitrifying bacteria and fungi in soils and oceans. 
Soils under natural vegetation are an important source of nitrous oxide, accounting for 60% of all naturally produced emissions. Other natural sources include the oceans (35%) and atmospheric chemical reactions (5%). Wetlands can also be emitters of nitrous oxide. Emissions from thawing permafrost may be significant, but as of 2022 this is not certain. The main components of anthropogenic emissions are fertilised agricultural soils and livestock manure (42%), runoff and leaching of fertilisers (25%), biomass burning (10%), fossil fuel combustion and industrial processes (10%), biological degradation of other nitrogen-containing atmospheric emissions (9%) and human sewage (5%). Agriculture enhances nitrous oxide production through soil cultivation, the use of nitrogen fertilisers and animal waste handling. These activities stimulate naturally occurring bacteria to produce more nitrous oxide. Nitrous oxide emissions from soil can be challenging to measure as they vary markedly over time and space, and the majority of a year's emissions may occur when conditions are favorable during "hot moments" and/or at favorable locations known as "hotspots". Among industrial emissions, the production of nitric acid and adipic acid are the largest sources of nitrous oxide emissions. The adipic acid emissions specifically arise from the degradation of the nitrolic acid intermediate derived from the nitration of cyclohexanone. Biological processes Microbial processes that generate nitrous oxide may be classified as nitrification and denitrification. Specifically, they include: aerobic autotrophic nitrification, the stepwise oxidation of ammonia () to nitrite () and to nitrate () anaerobic heterotrophic denitrification, the stepwise reduction of to , nitric oxide (NO), and ultimately , where facultative anaerobe bacteria use as an electron acceptor in the respiration of organic material in the condition of insufficient oxygen () nitrifier denitrification, which is carried out by autotrophic -oxidising bacteria and the pathway whereby ammonia () is oxidised to nitrite (), followed by the reduction of to nitric oxide (NO), and molecular nitrogen () heterotrophic nitrification aerobic denitrification by the same heterotrophic nitrifiers fungal denitrification non-biological chemodenitrification These processes are affected by soil chemical and physical properties such as the availability of mineral nitrogen and organic matter, acidity and soil type, as well as climate-related factors such as soil temperature and water content. The emission of the gas to the atmosphere is limited greatly by its consumption inside the cells, by a process catalysed by the enzyme nitrous oxide reductase. Uses Rocket motors Nitrous oxide may be used as an oxidiser in a rocket motor. Compared to other oxidisers, it is much less toxic and more stable at room temperature, making it easier to store and safer to carry on a flight. Its high density and low storage pressure (when maintained at low temperatures) make it highly competitive with stored high-pressure gas systems. In a 1914 patent, American rocket pioneer Robert Goddard suggested nitrous oxide and gasoline as possible propellants for a liquid-fuelled rocket. Nitrous oxide has been the oxidiser of choice in several hybrid rocket designs (using solid fuel with a liquid or gaseous oxidiser). The combination of nitrous oxide with hydroxyl-terminated polybutadiene fuel has been used by SpaceShipOne and others. 
It also is notably used in amateur and high power rocketry with various plastics as the fuel. Nitrous oxide may also be used as a monopropellant. In the presence of a heated catalyst at a temperature of , decomposes exothermically into nitrogen and oxygen. Because of the large heat release, the catalytic action rapidly becomes secondary, as thermal autodecomposition becomes dominant. In a vacuum thruster, this may provide a monopropellant specific impulse (I) up to 180 s. While noticeably less than the I available from hydrazine thrusters (monopropellant, or bipropellant with dinitrogen tetroxide), the decreased toxicity makes nitrous oxide a worthwhile option. The ignition of nitrous oxide depends critically on pressure. It deflagrates at approximately at a pressure of 309 psi (21 atmospheres). At 600 , the required ignition energy is only 6 joules, whereas at 130 psi a 2,500-joule ignition energy input is insufficient. Internal combustion engine In vehicle racing, nitrous oxide (often called "nitrous") increases engine power by providing more oxygen during combustion, thus allowing the engine to burn more fuel. It is an oxidising agent roughly equivalent to hydrogen peroxide, and much stronger than molecular oxygen. Nitrous oxide is not flammable at low pressure/temperature, but at about , its breakdown delivers more oxygen than atmospheric air. It often is mixed with another fuel that is easier to deflagrate. Nitrous oxide is stored as a compressed liquid. In an engine intake manifold, the evaporation and expansion of the liquid causes a large drop in intake charge temperature, resulting in a denser charge and allowing more air/fuel mixture to enter the cylinder. Sometimes nitrous oxide is injected into (or prior to) the intake manifold, whereas other systems directly inject it just before the cylinder (direct port injection). The technique was used during World War II by Luftwaffe aircraft with the GM-1 system to boost the power output of aircraft engines. Originally meant to provide the Luftwaffe standard aircraft with superior high-altitude performance, technological considerations limited its use to extremely high altitudes. Accordingly, it was only used by specialised planes such as high-altitude reconnaissance aircraft, high-speed bombers and high-altitude interceptor aircraft. It sometimes could be found on Luftwaffe aircraft also fitted with another engine-boost system, MW 50, a form of water injection for aviation engines that used methanol for its boost capabilities. One of the major problems of nitrous oxide oxidant in a reciprocating engine is excessive power: if the mechanical structure of the engine is not properly reinforced, it may be severely damaged or destroyed. It is important with nitrous oxide augmentation of petrol engines to maintain proper and evenly spread operating temperatures and fuel levels to prevent pre-ignition (also called detonation or spark knock). However, most problems associated with nitrous oxide come not from excessive power but from excessive pressure, since the gas builds up a much denser charge in the cylinder. The increased pressure and temperature can melt, crack, or warp the piston, valve, and cylinder head. Automotive-grade liquid nitrous oxide differs slightly from medical-grade. A small amount of sulfur dioxide () is added to prevent substance abuse. Aerosol propellant for food The gas is approved for use as a food additive (E number: E942), specifically as an aerosol spray propellant. 
It is commonly used in aerosol whipped cream canisters and cooking sprays. The gas is extremely soluble in fatty compounds. In pressurised aerosol whipped cream, it is dissolved in the fatty cream until it leaves the can, when it becomes gaseous and thus creates foam. This produces whipped cream four times the volume of the liquid, whereas whipping air into cream only produces twice the volume. Unlike air, nitrous oxide inhibits rancidification of the butterfat. Carbon dioxide cannot be used for whipped cream because it is acidic in water, which would curdle the cream and give it a seltzer-like "sparkle". Extra-frothed whipped cream produced with nitrous oxide is unstable, and will return to liquid within half an hour to one hour. Thus, it is not suitable for decorating food that will not be served immediately. In December 2016, there was a shortage of aerosol whipped cream in the United States, with canned whipped cream use at its peak during the Christmas and holiday season, due to an explosion at the Air Liquide nitrous oxide facility in Florida in late August. The company prioritized the remaining supply of nitrous oxide to medical customers rather than to food manufacturing. Also, cooking spray, made from various oils with lecithin emulsifier, may use nitrous oxide propellant, or alternatively food-grade alcohol or propane. Medical Nitrous oxide has been used in dentistry and surgery, as an anaesthetic and analgesic, since 1844. In the early days, the gas was administered through simple inhalers consisting of a breathing bag made of rubber cloth. Today, the gas is administered in hospitals by means of an automated relative analgesia machine, with an anaesthetic vaporiser and a medical ventilator, that delivers a precisely dosed and breath-actuated flow of nitrous oxide mixed with oxygen in a 2:1 ratio. Nitrous oxide is a weak general anaesthetic, and so is generally not used alone in general anaesthesia, but used as a carrier gas (mixed with oxygen) for more powerful general anaesthetic drugs such as sevoflurane or desflurane. It has a minimum alveolar concentration of 105% and a blood/gas partition coefficient of 0.46. The use of nitrous oxide in anaesthesia can increase the risk of postoperative nausea and vomiting. Dentists use a simpler machine which only delivers an / mixture for the patient to inhale while conscious but must still be a recognised purpose designed dedicated relative analgesic flowmeter with a minimum 30% of oxygen at all times and a maximum upper limit of 70% nitrous oxide. The patient is kept conscious throughout the procedure, and retains adequate mental faculties to respond to questions and instructions from the dentist. Inhalation of nitrous oxide is used frequently to relieve pain associated with childbirth, trauma, oral surgery and acute coronary syndrome (including heart attacks). Its use during labour has been shown to be a safe and effective aid for birthing women. Its use for acute coronary syndrome is of unknown benefit. In Canada and the UK, Entonox and Nitronox are used commonly by ambulance crews (including unregistered practitioners) as rapid and highly effective analgesic gas. Fifty percent nitrous oxide can be considered for use by trained non-professional first aid responders in prehospital settings, given the relative ease and safety of administering 50% nitrous oxide as an analgesic. The rapid reversibility of its effect would also prevent it from precluding diagnosis. 
Recreational Recreational inhalation of nitrous oxide, to induce euphoria and slight hallucinations, began with the British upper class in 1799 in gatherings known as "laughing gas parties". From the 19th century, the widespread availability of the gas for medical and culinary purposes allowed for recreational use to greatly expand globally. In the UK as of 2014, nitrous oxide was estimated to be used by almost half a million young people at nightspots, festivals and parties. Widespread recreational use of the drug throughout the UK was featured in the 2017 Vice documentary Inside The Laughing Gas Black Market, in which journalist Matt Shea met with dealers of the drug who stole it from hospitals. A significant issue cited in London's press is the effect of nitrous oxide canister littering, which is highly visible and causes significant complaints from communities. Prior to 8 November 2023 in the UK, nitrous oxide was subject to the Psychoactive Substances Act 2016, making it illegal to produce, supply, import or export nitrous oxide for recreational use. The updated law prohibited possession of nitrous oxide, classifying it as a Class C drug under the Misuse of Drugs Act 1971. While nitrous oxide is understood by most recreational users to give a "safe high", many are unaware that excessive consumption may cause neurological harm which, if left untreated, can cause permanent neurological damage. In Australia, recreation use became a public health concern following a rise in reports of neurotoxicity and emergency room admissions. In the state of South Australia, legislation was passed in 2020 to restrict canister sales. In 2024, under the street name "Galaxy Gas", nitrous oxide has exploded in popularity among young people for recreational use. Most of the popularity has been fostered through TikTok. Safety Nitrous oxide is a significant occupational hazard for surgeons, dentists and nurses. Because the gas is minimally metabolised in humans (with a rate of 0.004%), it retains its potency when exhaled into the room by the patient, and can intoxicate the clinic staff if the room is poorly ventilated, with potential chronic exposure. A continuous-flow fresh-air ventilation system or scavenger system may be needed to prevent waste-gas buildup. The National Institute for Occupational Safety and Health recommends that workers' exposure to nitrous oxide should be controlled during the administration of anaesthetic gas in medical, dental and veterinary operators. It set a recommended exposure limit (REL) of 25 ppm (46 mg/m3) to escaped anaesthetic. Exposure to nitrous oxide causes short-term impairment of cognition, audiovisual acuity, and manual dexterity, as well as spatial and temporal disorientation, putting the user at risk of accidental injury. Nitrous oxide is neurotoxic, and medium or long-term habitual consumption of significant quantities can cause neurological harm with the potential for permanent damage if left untreated. It is believed that, like other NMDA receptor antagonists, produces Olney's lesions in rodents upon prolonged (several hour) exposure. However, because it is normally expelled from the body rapidly, it is less likely to be neurotoxic than other NMDAR antagonists. In rodents, short-term exposure results in only mild injury that is rapidly reversible, and neuronal death occurs only after constant and sustained exposure. Nitrous oxide may also cause neurotoxicity after extended exposure because of hypoxia. 
This is especially true of non-medical formulations such as whipped-cream chargers ("whippits" or "nangs"), which contain no oxygen gas. In reports to poison control centers, heavy users (≥400 g or ≥200 L of gas in one session) or frequent users (regular, i.e., daily or weekly) have developed signs of peripheral neuropathy: ataxia (gait abnormalities) or paresthesia (perception of sensations such as tingling, numbness, or prickling, mostly in the extremities). Such early signs of neurological damage indicate chronic toxicity. Nitrous oxide might have therapeutic use in treating stroke. In a rodent model, nitrous oxide at 75% by volume reduced ischemia-induced neuronal death induced by occlusion of the middle cerebral artery, and decreased NMDA-induced Ca2+ influx in neuronal cell cultures, a cause of excitotoxicity. Occupational exposure to ambient nitrous oxide has been associated with DNA damage, due to interruptions in DNA synthesis. This correlation is dose-dependent and does not appear to extend to casual recreational use; however, further research is needed to confirm the level of exposure needed to cause damage. Inhalation of pure nitrous oxide causes oxygen deprivation, resulting in low blood pressure, fainting, and even heart attacks. This can occur if the user inhales large quantities continuously, as with a strap-on mask connected to a gas canister or other inhalation system, or prolonged breath-holding. Long-term exposure to nitrous oxide may cause vitamin B deficiency. This can cause serious neurotoxicity if the user has preexisting vitamin B deficiency. It inactivates the cobalamin form of vitamin B by oxidation. Symptoms of vitamin B deficiency, including sensory neuropathy, myelopathy and encephalopathy, may occur within days or weeks of exposure to nitrous oxide anaesthesia in people with subclinical vitamin B deficiency. Symptoms are treated with high doses of vitamin B, but recovery can be slow and incomplete. People with normal vitamin B levels have stores to make the effects of nitrous oxide insignificant, unless exposure is repeated and prolonged (nitrous oxide abuse). Vitamin B levels should be checked in people with risk factors for vitamin B deficiency prior to using nitrous oxide anaesthesia. Several experimental studies in rats indicate that chronic exposure of pregnant females to nitrous oxide may have adverse effects on the developing fetus. At room temperature () the saturated vapour pressure is 50.525 bar, rising up to 72.45 bar at —the critical temperature. The pressure curve is thus unusually sensitive to temperature. As with many strong oxidisers, contamination of parts with fuels have been implicated in rocketry accidents, where small quantities of nitrous/fuel mixtures explode due to "water hammer"-like effects (sometimes called "dieseling"—heating due to adiabatic compression of gases can reach decomposition temperatures). Some common building materials such as stainless steel and aluminium can act as fuels with strong oxidisers such as nitrous oxide, as can contaminants that may ignite due to adiabatic compression. There also have been incidents where nitrous oxide decomposition in plumbing has led to the explosion of large tanks. Environmental impact Global accounting of sources and sinks over the decade ending 2016 indicates that about 40% of the average 17 TgN/yr (teragrams, or million metric tons, of nitrogen per year) of emissions originated from human activity, and shows that emissions growth chiefly came from expanding agriculture. 
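These figures can be turned into a rough CO2-equivalent as a consistency check. The C++ sketch below uses the 17 TgN/yr total and the roughly 40% anthropogenic share quoted above, standard molar masses, and an assumed 100-year global warming potential of about 265 (the IPCC AR5 value cited below); the result is close to the "about 3 billion tonnes of carbon dioxide" figure given earlier in the article.

```cpp
#include <cstdio>

int main() {
    // Figures from the text: ~17 TgN/yr emitted in total, ~40% from human activity.
    double n_total_TgN  = 17.0;
    double anthro_share = 0.40;

    // Convert nitrogen mass to N2O mass (molar masses: N2O ~44.01, N2 ~28.01 g/mol).
    double n2o_total_Tg  = n_total_TgN * 44.01 / 28.01;
    double n2o_anthro_Tg = anthro_share * n2o_total_Tg;

    // Assumed 100-year global warming potential of ~265 (IPCC AR5).
    double gwp100  = 265.0;
    double co2e_Gt = n2o_anthro_Tg * gwp100 / 1000.0;   // Tg -> Gt (billion tonnes)

    printf("anthropogenic N2O: ~%.1f Tg/yr, ~%.1f Gt CO2-eq/yr\n",
           n2o_anthro_Tg, co2e_Gt);   // ~10.7 Tg/yr and ~2.8 Gt CO2-eq/yr
    return 0;
}
```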
Nitrous oxide has significant global warming potential as a greenhouse gas. On a per-molecule basis, considered over a 100-year period, nitrous oxide has 265 times the atmospheric heat-trapping ability of carbon dioxide (CO2). However, because of its low concentration (less than 1/1,000 of that of CO2), its contribution to the greenhouse effect is less than one third that of carbon dioxide, and also less than methane. On the other hand, since about 40% of the N2O entering the atmosphere is the result of human activity, control of nitrous oxide is part of efforts to curb greenhouse gas emissions. Most human-caused nitrous oxide released into the atmosphere is a greenhouse gas emission from agriculture, when farmers add nitrogen-based fertilizers onto the fields, and through the breakdown of animal manure. Reduction of these emissions is a recurring topic in the politics of climate change. Nitrous oxide is also released as a by-product of burning fossil fuel, though the amount released depends on which fuel was used. It is also emitted through the manufacture of nitric acid, which is used in the synthesis of nitrogen fertilizers. The production of adipic acid, a precursor to nylon and other synthetic clothing fibres, also releases nitrous oxide. A rise in atmospheric nitrous oxide concentrations has been implicated as a possible contributor to the extremely intense global warming during the Cenomanian-Turonian boundary event. Nitrous oxide has also been implicated in thinning the ozone layer. A 2009 study suggested that N2O emission was the single most important ozone-depleting emission and it was expected to remain the largest throughout the 21st century. Legality In India, transfer of nitrous oxide from bulk cylinders to smaller, more transportable E-type, 1,590-litre-capacity tanks is legal when intended for medical anaesthesia. The Ministry of Health has warned that nitrous oxide is a prescription medicine whose sale or possession without a prescription is an offense under the Medicines Act. This would seemingly prohibit all non-medicinal uses of nitrous oxide, although it is implied that only recreational use will be targeted. In August 2015, the Council of the London Borough of Lambeth (UK) banned the use of the drug for recreational purposes, making offenders liable to an on-the-spot fine of up to £1,000. In September 2023, the UK Government announced that nitrous oxide would be made illegal by the end of the year, with possession potentially carrying up to a two-year prison sentence or an unlimited fine. Possession of nitrous oxide is legal under United States federal law and is not subject to DEA purview. It is, however, regulated by the Food and Drug Administration under the Food, Drug, and Cosmetic Act; prosecution is possible under its "misbranding" clauses, prohibiting the sale or distribution of nitrous oxide for the purpose of human consumption without a proper medical license. Many states have laws regulating the possession, sale and distribution of nitrous oxide. Such laws usually ban distribution to minors or limit the amount that may be sold without special license. For example, in California, possession for recreational use is prohibited and qualifies as a misdemeanor. See also DayCent Fink effect Nitrous oxide fuel blend References Further reading External links Occupational Safety and Health Guideline for Nitrous Oxide Paul Crutzen Interview Freeview video of Paul Crutzen Nobel Laureate for his work on decomposition of ozone talking to Harry Kroto Nobel Laureate by the Vega Science Trust. 
National Pollutant Inventory – Oxide of nitrogen fact sheet National Institute for Occupational Safety and Health – Nitrous Oxide CDC – NIOSH Pocket Guide to Chemical Hazards – Nitrous Oxide Nitrous Oxide FAQ Erowid article on Nitrous Oxide Nitrous oxide fingered as monster ozone slayer , Science News Dental Fear Central article on the use of nitrous oxide in dentistry Altered States Database 5-HT3 antagonists Aerosol propellants Dissociative drugs E-number additives Euphoriants GABAA receptor positive allosteric modulators Gaseous signaling molecules General anesthetics Glycine receptor agonists Greenhouse gases Industrial gases Industrial hygiene Inhalants Nitrogen oxides Monopropellants Nicotinic antagonists Nitrogen cycle NMDA receptor antagonists Rocket oxidizers Trace gases World Health Organization essential medicines Neurotoxins
Nitrous oxide
Chemistry,Environmental_science
7,657
28,530,086
https://en.wikipedia.org/wiki/Cortinarius%20anomalus
Cortinarius anomalus, also known as the variable webcap, is a basidiomycete fungus of the genus Cortinarius. It produces a medium-sized mushroom with a grayish-brown cap up to wide, gray-violet gills and a whitish stem with pale yellow belts below. The mushroom grows solitarily or in scattered groups on the ground in deciduous and coniferous forests. It is found throughout the temperate zone of the northern hemisphere. Taxonomy, phylogeny, and naming The species was first described as Agaricus anomalus by Elias Magnus Fries in 1818. Fries later transferred it to the genus Cortinarius in 1838 in his Epicrisis Systematis Mycologici. Friedrich Otto Wünsche placed it in Dermocybe as Dermocybe anomala. Phylogenetic analysis suggests that Cortinarius anomalus is closely related to Cortinarius collinitus, Cortinarius violaceus, and Cortinarius odorifer. The fungus is commonly known as the "variable webcap". The specific epithet anomalus is derived from the Latin word for "paradoxical". Description The cap is up to , initially almost spherical, then expanded convex and finally flattened. The cap has a broad, blunt and low umbo, which frequently lies in a depression since the margin which is initially rolled inward, then straight, often becomes turned upward. The cap cuticle is dry and difficult to peel. The cap surface is dry or humid, non-shiny in the center, but shiny towards the margin which is covered with fibrils when young. The cap is almost uniformly colored dirty rusty-brown or ashy-brown to grayish-tan, sometimes slightly paler towards the margin, with or without a faint grayish-violet tinge when young. The gills are moderately crowded, about wide when mature, thin, and whitish-blue, grayish-blue or pale lilac when young. As the mushroom matures, the gill color rapidly fades and soon becomes brown, then a rusty-clay color, without any trace of the blue characteristic of young specimens. The gill attachment to the stem is adnate (fused to the stem) and emarginate (notched). The edge of the gills is pale, and the edge ranges from finely denticulate (with a very finely toothed margin) to straight. The stem is long and thick, cylindrical above, slightly club-shaped below, and usually somewhat curved. It is initially very fibrillose, later silvery shiny and wavy, violet or grayish violet at the apex when young, more gray or grayish-brown at the base. The violet coloring soon disappears and then the stem is whitish or pale clay brownish and silkily fibrillose. Beneath the cap there is a golden yellow ring-like region. On the rest of the stem there are sometimes remnants of the partial veil as yellowish-saffron hairy tufts, which form incomplete rings or scattered minute scales. The cortina (a cobweb-like partial veil consisting of silky fibrils) is thick, whitish, and lasts only a short time. The flesh in the cap is thin, rarely thicker than , whitish to pale violet or pale lilac in the upper part of the stem when young, but soon fading, grayish-white in the lower part of the stem. Its smell is faintly fruity, and its taste mild. It is considered inedible. The spore deposit is rusty-brown. The spores are spherical to egg-shaped, with a distinct apiculus (the part of a spore which attaches to the sterigmata at the end of a basidium), finely verrucose, 5.7–9 by 7–8.5 μm. The basidia (spore-bearing cells) are four-spored and measure 30–40 by 8–9 μm. 
Similar species Cortinarius alboviolaceus is silvery-white to gray-violet when young and has a thick, white fibrillose veil, a bulkier stem, and elliptical spores. C. caninus has a browner cap when young, with more developed veil remnants, which are also browner. Distribution and habitat Cortinarius anomalus is a common species in deciduous, mixed, or more rarely coniferous woods. The fruit bodies appear late in the summer and autumn throughout the temperate zone of the northern hemisphere. See also List of Cortinarius species References External links anomalus Fungi described in 1818 Fungi of North America Fungi of Europe Inedible fungi Fungus species
Cortinarius anomalus
Biology
957
46,602,031
https://en.wikipedia.org/wiki/Multiplier%20%28linguistics%29
In linguistics, more precisely in traditional grammar, a multiplier is a word that counts how many times its object should be multiplied, such as single or double. They are contrasted with distributive numbers. In English, this part of speech is relatively marginal, and less recognized than cardinal numbers and ordinal numbers. English In English native multipliers exist, formed by the suffix -fold, as in onefold, twofold, threefold. However, these have largely been replaced by single, double, and triple, which are of Latin origin, via French. They have a corresponding distributive number formed by suffixing -y (reduction of Middle English -lely > -ly), as in singly. However, the series is primarily used for the first few numbers; quadruple and quintuple are less common, and hextuple and above are quite rare. For larger multiples a cardinal number and a counter are used instead, such as "five portions" or "a portion five times the normal size" instead of "a quintuple portion". In espresso servings, the Italian solo, doppio, and triplo are sometimes used, with doppio being most common. The Latin multipliers simplex, duplex, triplex etc. have occasional use in English, primarily in technical use, though duplex is more common. See also Cardinal number Distributive number Ordinal number Multiple (mathematics) Numeral (linguistics) Numerals
Multiplier (linguistics)
Mathematics
321
24,385,535
https://en.wikipedia.org/wiki/C18H18
{{DISPLAYTITLE:C18H18}} The molecular formula C18H18 (molar mass: 234.33 g/mol, exact mass: 234.1409 u) may refer to: Cyclooctadecanonaene, or [18]annulene Retene Molecular formulas
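As a quick arithmetic check of the molar mass quoted above, a minimal sketch using conventional atomic weights is shown below; small differences from the quoted 234.33 g/mol come only from rounding of the atomic-weight values.

```python
# Quick check of the molar mass quoted above for C18H18, using conventional
# atomic weights (approximate values).
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008}  # g/mol

formula = {"C": 18, "H": 18}
molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in formula.items())
print(f"C18H18 molar mass = {molar_mass:.2f} g/mol")  # about 234.34 g/mol
```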
C18H18
Physics,Chemistry
68
6,269,757
https://en.wikipedia.org/wiki/Thioredoxin%20fold
The thioredoxin fold is a protein fold common to enzymes that catalyze disulfide bond formation and isomerization. The fold is named for the canonical example thioredoxin and is found in both prokaryotic and eukaryotic proteins. It is an example of an alpha/beta protein fold that has oxidoreductase activity. The fold's spatial topology consists of a four-stranded antiparallel beta sheet sandwiched between three alpha helices. The strand topology is 2134 with 3 antiparallel to the rest. Sequence conservation Despite sequence variability in many regions of the fold, thioredoxin proteins share a common active site sequence with two reactive cysteine residues: Cys-X-Y-Cys, where X and Y are often but not necessarily hydrophobic amino acids. The reduced form of the protein contains two free thiol groups at the cysteine residues, whereas the oxidized form contains a disulfide bond between them. Disulfide bond formation Different thioredoxin fold-containing proteins vary greatly in their reactivity and in the pKa of their free thiols, which derives from the ability of the overall protein structure to stabilize the activated thiolate. Although the structure is fairly consistent among proteins containing the thioredoxin fold, the pKa is extremely sensitive to small variations in structure, especially in the placement of protein backbone atoms near the first cysteine. Examples Human proteins containing this domain include: DNAJC10 ERP70 GLRX3 P4HB; PDIA2; PDIA3; PDIA4; PDIA5; PDIA6 (P5); PDILT QSOX1; QSOX2 STRF8 TXN; TXN2; TXNDC1; TXNDC10; TXNDC11; TXNDC13; TXNDC14; TXNDC15; TXNDC16; TXNDC2; TXNDC3; TXNDC4; TXNDC5; TXNDC6; TXNDC8; TXNL1; TXNL3 References External links SCOP thioredoxin superfamily CATH glutaredoxin topology Protein domains Protein folds Protein superfamilies
Thioredoxin fold
Biology
470
4,558,491
https://en.wikipedia.org/wiki/Statistical%20machine%20translation
Statistical machine translation (SMT) is a machine translation approach where translations are generated on the basis of statistical models whose parameters are derived from the analysis of bilingual text corpora. The statistical approach contrasts with the rule-based approaches to machine translation as well as with example-based machine translation; it superseded the previous rule-based approach, which required explicit description of each and every linguistic rule, which was costly and often did not generalize to other languages. Since 2003, the statistical approach itself has been gradually superseded by deep learning-based neural machine translation. The first ideas of statistical machine translation were introduced by Warren Weaver in 1949, including the ideas of applying Claude Shannon's information theory. Statistical machine translation was re-introduced in the late 1980s and early 1990s by researchers at IBM's Thomas J. Watson Research Center. Before the introduction of neural machine translation, it was by far the most widely studied machine translation method. Basis The idea behind statistical machine translation comes from information theory. A document is translated according to the probability distribution p(e|f) that a string e in the target language (for example, English) is the translation of a string f in the source language (for example, French). The problem of modeling the probability distribution has been approached in a number of ways. One approach which lends itself well to computer implementation is to apply Bayes' theorem, that is p(e|f) ∝ p(f|e) p(e), where the translation model p(f|e) is the probability that the source string is the translation of the target string, and the language model p(e) is the probability of seeing that target language string. This decomposition is attractive as it splits the problem into two subproblems. Finding the best translation ẽ is done by picking the one that gives the highest probability: ẽ = arg max_e p(e|f) = arg max_e p(f|e) p(e). For a rigorous implementation of this one would have to perform an exhaustive search by going through all strings in the native language. Performing the search efficiently is the work of a machine translation decoder that uses the foreign string, heuristics and other methods to limit the search space while keeping acceptable quality. This trade-off between quality and time usage can also be found in speech recognition. As the translation systems are not able to store all native strings and their translations, a document is typically translated sentence by sentence. Language models are typically approximated by smoothed n-gram models, and similar approaches have been applied to translation models, but this introduces additional complexity due to different sentence lengths and word orders in the languages. Statistical translation models were initially word based (Models 1–5 from IBM, the Hidden Markov model from Stephan Vogel, and Model 6 from Franz Josef Och), but significant advances were made with the introduction of phrase-based models. Later work incorporated syntax or quasi-syntactic structures. Benefits The most frequently cited benefits of statistical machine translation (SMT) over the rule-based approach are: More efficient use of human and data resources There are many parallel corpora in machine-readable format and even more monolingual data. Generally, SMT systems are not tailored to any specific pair of languages. More fluent translations owing to use of a language model Shortcomings Corpus creation can be costly. Specific errors are hard to predict and fix. 
Results may have superficial fluency that masks translation problems. Statistical machine translation usually works less well for language pairs with significantly different word order. The benefits obtained for translation between Western European languages are not representative of results for other language pairs, owing to smaller training corpora and greater grammatical differences. Word-based translation In word-based translation, the fundamental unit of translation is a word in some natural language. Typically, the number of words in translated sentences differs, because of compound words, morphology and idioms. The ratio of the lengths of sequences of translated words is called fertility, which tells how many foreign words each native word produces. It is necessarily assumed that each word and its translations cover the same concept. In practice this is not really true. For example, the English word corner can be translated in Spanish by either rincón or esquina, depending on whether it is to mean its internal or external angle. Simple word-based translation cannot translate between languages with different fertility. Word-based translation systems can relatively simply be made to cope with high fertility, such that they could map a single word to multiple words, but not the other way around. For example, if we were translating from English to French, each word in English could produce any number of French words—sometimes none at all. But there is no way to group two English words producing a single French word. An example of a word-based translation system is the freely available GIZA++ package (GPLed), which includes the training program for the IBM models, the HMM model and Model 6. Word-based translation is not widely used today; phrase-based systems are more common. Most phrase-based systems are still using GIZA++ to align the corpus. The alignments are used to extract phrases or deduce syntax rules. Matching words in bi-text is still a problem actively discussed in the community. Because of the predominance of GIZA++, there are now several distributed implementations of it online. Phrase-based translation In phrase-based translation, the aim is to reduce the restrictions of word-based translation by translating whole sequences of words, where the lengths may differ. The sequences of words are called blocks or phrases. These are typically not linguistic phrases, but phrasemes that were found using statistical methods from corpora. It has been shown that restricting the phrases to linguistic phrases (syntactically motivated groups of words, see syntactic categories) decreased the quality of translation. The chosen phrases are further mapped one-to-one based on a phrase translation table, and may be reordered. This table could be learnt based on word-alignment, or directly from a parallel corpus. The second model is trained using the expectation maximization algorithm, similarly to the word-based IBM model. Syntax-based translation Syntax-based translation is based on the idea of translating syntactic units, rather than single words or strings of words (as in phrase-based MT), i.e. (partial) parse trees of sentences/utterances. The statistical counterpart of the old idea of syntax-based translation did not take off until the advent of strong stochastic parsers in the 1990s. Examples of this approach include DOP-based MT and later synchronous context-free grammars. 
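As a concrete illustration of the noisy-channel scoring introduced in the Basis section, the sketch below picks the candidate translation that maximizes p(f|e)·p(e) for a single source phrase. The phrase table and language model probabilities are invented for illustration; a real phrase-based system would learn them from aligned corpora and use n-gram language models together with a beam-search decoder.

```python
import math

# Toy noisy-channel scoring: pick the target string e maximizing
# p(f|e) * p(e), following the Bayes decomposition described above.
# All probabilities below are invented for illustration only.

translation_model = {  # p(f|e) for a French phrase f given an English phrase e
    ("la maison", "the house"): 0.8,
    ("la maison", "home"): 0.2,
}
language_model = {  # unigram-style p(e); a real system would use n-grams
    "the house": 0.03,
    "home": 0.05,
}

def score(f, e):
    """Log-probability log p(f|e) + log p(e) of candidate translation e."""
    return math.log(translation_model[(f, e)]) + math.log(language_model[e])

source = "la maison"
candidates = ["the house", "home"]
best = max(candidates, key=lambda e: score(source, e))
print(best)  # "the house": 0.8 * 0.03 = 0.024 beats 0.2 * 0.05 = 0.010
```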
Hierarchical phrase-based translation Hierarchical phrase-based translation combines the phrase-based and syntax-based approaches to translation. It uses synchronous context-free grammar rules, but the grammars can be constructed by an extension of methods for phrase-based translation without reference to linguistically motivated syntactic constituents. This idea was first introduced in Chiang's Hiero system (2005). Language models A language model is an essential component of any statistical machine translation system, which aids in making the translation as fluent as possible. It is a function that takes a translated sentence and returns the probability of it being said by a native speaker. A good language model will for example assign a higher probability to the sentence "the house is small" than to "small the is house". Other than word order, language models may also help with word choice: if a foreign word has multiple possible translations, these functions may give better probabilities for certain translations in specific contexts in the target language. Systems implementing statistical machine translation Google Translate (started transition to neural machine translation in 2016) Microsoft Translator (started transition to neural machine translation in 2016) Yandex.Translate (switched to hybrid approach incorporating neural machine translation in 2017) Challenges with statistical machine translation Problems with statistical machine translation include: Sentence alignment Single sentences in one language can be found translated into several sentences in the other and vice versa. Long sentences may be broken up, while short sentences may be merged. There are even languages that use writing systems without clear indication of a sentence end, such as Thai. Sentence aligning can be performed through the Gale-Church alignment algorithm. Efficient search and retrieval of the highest scoring sentence alignment is possible through this and other mathematical models. Word alignment Sentence alignment is usually either provided by the corpus or obtained by the aforementioned Gale-Church alignment algorithm. To learn e.g. the translation model, however, we need to know which words align in a source-target sentence pair. The IBM-Models or the HMM-approach were attempts at solving this challenge. Function words that have no clear equivalent in the target language are another issue for the statistical models. For example, when translating from English to German, in the sentence "John does not live here", the word "does" has no clear alignment in the translated sentence "John wohnt hier nicht". Through logical reasoning, it may be aligned with the words "wohnt" (as it contains grammatical information for the English word "live") or "nicht" (as it only appears in the sentence because it is negated) or it may be unaligned. Statistical anomalies An example of such an anomaly is the phrase "I took the train to Berlin" being mistranslated as "I took the train to Paris" due to the statistical abundance of "train to Paris" in the training set. Idiom and register Depending on the corpora used, the use of idiom and linguistic register might not receive a translation that accurately represents the original intent. For example, the popular Canadian Hansard bilingual corpus primarily consists of parliamentary speech examples, where "Hear, Hear!" is frequently associated with "Bravo!" 
Using a model built on this corpus to translate ordinary speech in a conversational register would lead to incorrect translation of the word hear as Bravo! This problem is connected with word alignment, as in very specific contexts the idiomatic expression may align with words that result in an idiomatic expression of the same meaning in the target language; however, this is unlikely, as such an alignment usually does not work in any other contexts. For that reason, idioms can only be subjected to phrasal alignment, as they cannot be decomposed further without losing their meaning. This problem was specific to word-based translation. Different word orders Word order differs between languages. Some classification can be done by naming the typical order of subject (S), verb (V) and object (O) in a sentence and one can talk, for instance, of SVO or VSO languages. There are also additional differences in word orders, for instance, where modifiers for nouns are located, or where the same words are used as a question or a statement. In speech recognition, the speech signal and the corresponding textual representation can be mapped to each other in blocks in order. This is not always the case with the same text in two languages. For SMT, the machine translator can only manage small sequences of words, and word order has to be handled by the program designer. Attempts at solutions have included re-ordering models, where a distribution of location changes for each item of translation is guessed from aligned bi-text. Different location changes can be ranked with the help of the language model and the best can be selected. Out of vocabulary (OOV) words SMT systems typically store different word forms as separate symbols without any relation to each other, and word forms or phrases that were not in the training data cannot be translated. This might be because of the lack of training data, changes in the human domain where the system is used, or differences in morphology. See also AppTek Cache language model Duolingo Europarl corpus Example-based machine translation Google Translate Hybrid machine translation Microsoft Translator Moses (machine translation), free software Rule-based machine translation SDL Language Weaver Statistical parsing Notes and references External links Annotated list of statistical natural language processing resources — Includes links to freely available statistical machine translation software Machine translation Statistical natural language processing
Statistical machine translation
Technology
2,431
2,567,355
https://en.wikipedia.org/wiki/Minatec
Minatec (initially called the Micro and Nanotechnology Innovation Centre) is a research complex specializing in micro/nano technologies in Grenoble, France. The center was inaugurated in June 2006 by François Loos, French Minister Delegate for Industry, as a partnership between LETI (the Electronics and Information Technologies Laboratory of CEA, the French Atomic Energy Commission) and Grenoble Institute of Technology (Université Grenoble Alpes). The site was already home to LETI, Europe's top center for applied research in microelectronics and nanotechnology. Minatec combines a physical research campus with a network of companies, researchers, and engineering schools. It was launched to foster technology transfer, with applications in energy and communications. The complex is home to 3,000 researchers, 1,200 students, and 600 technology transfer experts on a 20-hectare campus offering 10,000 square meters of cleanroom space. It offers a continuum that includes students, technology transfer, industry, and applied research. The Minatec campus has dedicated special-events facilities (900 m²), including 20-person conference rooms and a 400-seat amphitheater. These spaces are available to researchers for their scientific events such as the international conference held every two years. Minatec includes fundamental research labs like INAC and FMNT, plus a major technological research lab, Leti. MINATEC also cooperates with the INSTITUT NÉEL and RTRA, which are located nearby. Funding Minatec represents an investment of 193.5 million euros between 2002 and 2005, mainly paid by local authorities and the CEA. See also Polygone Scientifique References External links Research institutes in France Microtechnology Nanotechnology institutions Educational institutions in Grenoble Grenoble Institute of Technology Science and technology in Grenoble Organizations established in 2006 2006 establishments in France
Minatec
Materials_science,Engineering
376
59,609,098
https://en.wikipedia.org/wiki/Optical%20baffle
An optical baffle is an opto-mechanical construction designed to block light from a source shining into the front of an optical system and reaching the image as unwanted light. Principles Optical systems which have stringent requirements on stray light levels often need optical baffles. There are many designs, depending on the desired goals. Generic optical baffle designs and their advantages for stray light control can be classified as reflective, absorbing or refractive; reimaging and nonreimaging systems. References Optical devices
Optical baffle
Materials_science,Engineering
102
3,446,698
https://en.wikipedia.org/wiki/Humidistat
A humidistat or hygrostat is an electronic device analogous to a thermostat but which responds to relative humidity, not temperature. A typical humidistat is usually included with portable humidifiers or dehumidifiers. It can also be included with combined air cleaner or humidifier units to control the humidity level of a home or any other indoor space. Usage Humidistats are used in a number of devices including dehumidifiers, humidifiers, and microwave ovens. In humidifiers and dehumidifiers, the humidistat is used where constant relative humidity conditions need to be maintained, such as in a refrigerator, greenhouse, or climate-controlled warehouse. In these applications, adjusting the controls sets the humidistat. In microwaves they are used in conjunction with smart cooking one-button features such as those for microwave popcorn. Humidistats employ hygrometers, but the two are not the same: a humidistat has the functionality of a switch and is not just a measuring instrument like a hygrometer is. For heating, ventilation, and air conditioning (HVAC) of buildings, humidistats or humidity sensors are used to sense the relative humidity of the air in the controlled space and turn the HVAC equipment on and off. References External links Types Of Humidifiers Switches Temperature control Humidity and hygrometry
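A minimal sketch of the switching behaviour described above is given below; the set point and hysteresis band are illustrative values rather than figures from any standard, and an actual humidistat may be a simple mechanical or electronic switch rather than software.

```python
# Minimal sketch of a humidistat switching a dehumidifier on and off around
# a set point, with a dead band to avoid rapid cycling. Values are illustrative.

class Humidistat:
    def __init__(self, setpoint=50.0, hysteresis=3.0):
        self.setpoint = setpoint        # target relative humidity, %RH
        self.hysteresis = hysteresis    # dead band around the set point
        self.dehumidifier_on = False

    def update(self, measured_rh):
        """Switch the dehumidifier based on the measured relative humidity."""
        if measured_rh > self.setpoint + self.hysteresis:
            self.dehumidifier_on = True
        elif measured_rh < self.setpoint - self.hysteresis:
            self.dehumidifier_on = False
        return self.dehumidifier_on

controller = Humidistat()
for rh in (48, 55, 52, 46):
    print(rh, controller.update(rh))  # stays off, turns on, stays on, turns off
```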
Humidistat
Technology
283
4,936,677
https://en.wikipedia.org/wiki/Cranege%20brothers
Thomas and George Cranege (also spelled Cranage), who worked in the ironworking industry in England in the 1760s, are notable for introducing a new method of producing wrought iron from pig iron. Experiment of 1766 The process of converting pig iron into wrought iron (also known as bar iron) was at that time carried out in a finery forge, which was fuelled by charcoal. Charcoal was a limited resource, but coal, more widely available, could not be used because the sulphur in coal would adversely affect the quality of the wrought iron. George Cranege worked in Coalbrookdale in Shropshire, at the ironworks established by Abraham Darby I, and his brother Thomas worked at a forge in Bridgnorth in Shropshire. They suggested to Richard Reynolds, manager of the works at Coalbrookdale, that the conversion process could be done in a reverberatory furnace, where the iron did not mix with the coal. Reynolds was sceptical, but authorized the brothers to try out the idea. Richard Reynolds, in a letter dated 25 April 1766 to his colleague Thomas Goldney III, described his conversation with the Craneges and the experiment: I told them, consistent with the notion I had adopted in common with all others I had conversed with, that I thought it impossible, because the vegetable salts in the charcoal being an alkali acted as an absorbent to the sulphur of the iron, which occasions the red-short quality of the iron, and pit coal abounding with sulphur would increase it.... They replied that from the observations they had made, and repeated conversations together, they were both firmly of opinion that the alteration from the quality of pig iron into that of bar iron was effected merely by heat, and if I would give them leave, they would make a trial some day.... A trial of it has been made this week, and the success has surpassed the most sanguine expectations.... I look upon it as one of the most important discoveries ever made.... A patent for the process, dated 17 June 1766, in the name of the brothers Cranege, was secured. It apparently made little difference to the lives of the brothers. The process was improved soon afterwards by Peter Onions, who received a patent in 1783, and by Henry Cort, who received patents in 1783 and 1784 for his improvements. References History of metallurgy People of the Industrial Revolution
Cranege brothers
Chemistry,Materials_science
502
50,639,093
https://en.wikipedia.org/wiki/Bright%20Computing
Bright Computing, Inc. is a developer of software for deploying and managing high-performance computing (HPC) clusters, Kubernetes clusters, and OpenStack private clouds in on-premises data centers as well as in the public cloud. History Bright Computing was founded in 2009 by Matthijs van Leeuwen, who spun the company out of ClusterVision, which he had co-founded with Alex Ninaber and Arijan Sauer. Alex and Matthijs had worked together at UK’s Compusys, which was one of the first companies to commercially build HPC clusters. They left Compusys in 2002 to start ClusterVision in the Netherlands, after determining there was a growing market for building and managing supercomputer clusters using off-the-shelf hardware components and open source software, tied together with their own customized scripts. ClusterVision also provided delivery and installation support services for HPC clusters at universities and government entities. In 2004, Martijn de Vries joined ClusterVision and began development of cluster management software. The software was made available to customers in 2008, under the name ClusterVisionOS v4. In 2009, Bright Computing was spun out of ClusterVision. ClusterVisionOS was renamed Bright Cluster Manager, and van Leeuwen was named Bright Computing’s CEO. In February 2016, Bright appointed Bill Wagner as chief executive officer. Matthijs van Leeuwen became chief strategy officer, and then left the company and board of directors in 2018. In January 2022 Bright was acquired by Nvidia. Nvidia cited plans to use Bright's Amsterdam facility as a development center. Customers Early customers included Boeing, Sandia National Laboratories, Virginia Tech, Hewlett Packard, NSA, and Drexel University. Many early customers were introduced through resellers, including SICORP, Cray, Dell, and Advanced HPC. As of 2019, the company had more than 700 customers, including more than fifty Fortune 500 companies. Products and services Bright Cluster Manager for HPC lets customers deploy and manage complete clusters. It provides management for the hardware, the operating system, the HPC software, and users. In 2014, the company announced Bright OpenStack, software to deploy, provision, and manage OpenStack-based private cloud infrastructures. In 2016, Bright started bundling several machine learning frameworks and associated tools and libraries with the product, to make it very easy to get machine learning workloads up and running on a Bright cluster. In December 2018, version 8.2 was released, which introduced support for the ARM64 architecture, edge capabilities to build clusters spread out over many different geographical locations, improved workload accounting & reporting features, as well as many improvements to Bright's integration with Kubernetes. Bright Cluster Manager software is frequently sold through original equipment manufacturer (OEM) resellers, including Dell and HPE. Bright Computing was covered by Software Magazine and Yahoo! Finance, among other publications. Awards In 2016, Bright Computing was awarded a €1.5M Horizon 2020 SME Instrument grant from the European Commission. Bright Computing was one of only 33 grant recipients from 960 submitted proposals. In its category only 5 out of 260 grants were awarded. 
2015 HPCwire Editor’s Choice Award for “Best HPC Cluster Solution or Technology" Main Software 50 “Highest Growth” award winner, 2013 Deloitte Technology Fast50 “Rising Star 2013” award winner Bio-IT World Conference & Expo ‘13, Boston, MA, winner of “IT Hardware & Infrastructure” category of the “Best of Show Award” program Red Herring Top 100 Global Award, 2013 References Big data companies Cloud computing Cloud infrastructure Cluster computing Data management Supercomputers
Bright Computing
Technology
763
44,927,859
https://en.wikipedia.org/wiki/HIP%2085605
HIP 85605 is a star in the constellation Hercules with a visual apparent magnitude of 11.03. It was once thought to be an M dwarf or K-type main-sequence star and a possible companion of the brighter star HIP 85607, but they are now known to be an optical double, both objects being red giants at much greater distances (HIP 85605 is 1,790 ± 30 light-years away, and HIP 85607 is 1,323 ± 13 light-years away). Distance estimation The original Hipparcos parallax measurement in 1997 was 202 mas, which would have placed it 16.1 light-years from the Solar System. In 2007, van Leeuwen revised the number to 147 mas (0.147 arcseconds), or 22.2 light-years. With this new value, HIP 85605 would have been unlikely to be one of the 100 closest star systems to the Sun. In 2014, it was estimated that HIP 85605 could approach to about from the Sun within 240,000 to 470,000 years, assuming the then-known parallax and distance measurements to the object were correct. In that case its gravitational influence could have disrupted the orbits of comets in the Oort cloud and caused some of them to enter the inner Solar System. With the release of Gaia DR2, it was determined that HIP 85605 is actually much more distant, at 1,790 ± 30 light-years, and as such will not pass close to the Sun at any point in time. See also Stars that actually passed/will pass close to the Sun: Scholz's Star Gliese 710 List of nearest stars and brown dwarfs Notes References External links Frequently asked questions to Close encounters of the stellar kind by C.A.L. Bailer-Jones K-type main-sequence stars Hercules (constellation) 085605
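The distances quoted above follow directly from the parallax measurements; a minimal sketch of the conversion (distance in parsecs equals one over the parallax in arcseconds) is shown below.

```python
# Parallax-to-distance conversion behind the figures quoted above:
# distance in parsecs is 1 / parallax in arcseconds, and 1 pc is about 3.2616 ly.
LY_PER_PARSEC = 3.2616

def parallax_mas_to_lightyears(parallax_mas):
    """Distance in light-years for a parallax given in milliarcseconds."""
    return LY_PER_PARSEC * 1000.0 / parallax_mas

for label, p in [("Hipparcos 1997", 202.0), ("van Leeuwen 2007", 147.0)]:
    print(f"{label}: {parallax_mas_to_lightyears(p):.1f} ly")
# About 16.1 ly and 22.2 ly, matching the values in the article.
```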
HIP 85605
Astronomy
389
2,542,609
https://en.wikipedia.org/wiki/International%20Congress%20Calendar
The International Congress Calendar is a calendar of events organized by non-profit international organizations, mainly those organizations which are included in the Yearbook of International Organizations. It has been published since 1960 by the Union of International Associations (UIA) and includes over 425,000 meetings. Over 15,000 new meetings are included every year. It is one of the most comprehensive sources of information on future international meetings organized or sponsored by international organizations. All information is provided, or confirmed, by the organizations themselves. The Calendar is published in print and online. References International Congress Calendar (official page) See also Yearbook of International Organizations Union of International Associations Encyclopedia of World Problems and Human Potential Anthony Judge Calendars
International Congress Calendar
Physics
139
55,372,363
https://en.wikipedia.org/wiki/Chantal%20David
Chantal David (born 1964) is a French Canadian mathematician who works as a professor of mathematics at Concordia University. Her interests include analytic number theory, arithmetic statistics, and random matrix theory, and she has shown interest in elliptic curves and Drinfeld modules. She is the 2013 winner of the Krieger–Nelson Prize, given annually by the Canadian Mathematical Society to an outstanding female researcher in mathematics. Education and career David completed her doctorate in mathematics in 1993 at McGill University, under the supervision of Ram Murty. Her thesis was entitled Supersingular Drinfeld Modules. In the same year, she joined the faculty at Concordia University. She became the deputy director of the Centre de Recherches Mathématiques in 2004. In 2008, David was an invited professor at Université Henri Poincaré. She spent September 2009 through April 2010 at the Institute for Advanced Study. From January through May 2017, she co-organized a program on analytic number theory at the Mathematical Sciences Research Institute. Research In 1999, David published a paper with Francesco Pappalardi which proved that the Lang–Trotter conjecture holds in most cases. She has shown that for several families of curves over finite fields, the zeroes of zeta functions are compatible with the Katz–Sarnak conjectures. She has also used random matrix theory to study the zeroes in families of elliptic curves. David and her collaborators have exhibited a new Cohen–Lenstra phenomenon for the group of points of elliptic curves over finite fields. Awards and honors David was awarded the Krieger-Nelson Prize by the Canadian Mathematical Society in 2013. References External links 1964 births Place of birth missing (living people) Living people Canadian mathematicians Women mathematicians 21st-century Canadian women scientists Academic staff of Concordia University McGill University Faculty of Science alumni Number theorists French mathematicians
Chantal David
Mathematics
367
9,305,752
https://en.wikipedia.org/wiki/Fanno%20flow
In fluid dynamics, Fanno flow (after Italian engineer Gino Girolamo Fanno) is the adiabatic flow through a constant area duct where the effect of friction is considered. Compressibility effects often come into consideration, although the Fanno flow model certainly also applies to incompressible flow. For this model, the duct area remains constant, the flow is assumed to be steady and one-dimensional, and no mass is added within the duct. The Fanno flow model is considered an irreversible process due to viscous effects. The viscous friction causes the flow properties to change along the duct. The frictional effect is modeled as a shear stress at the wall acting on the fluid with uniform properties over any cross section of the duct. For a flow with an upstream Mach number greater than 1.0 in a sufficiently long duct, deceleration occurs and the flow can become choked. On the other hand, for a flow with an upstream Mach number less than 1.0, acceleration occurs and the flow can become choked in a sufficiently long duct. It can be shown that for flow of calorically perfect gas the maximum entropy occurs at M = 1.0. Theory The Fanno flow model begins with a differential equation that relates the change in Mach number with respect to the length of the duct, dM/dx. Other terms in the differential equation are the heat capacity ratio, γ, the Fanning friction factor, f, and the hydraulic diameter, Dh: Assuming the Fanning friction factor is a constant along the duct wall, the differential equation can be solved easily. One must keep in mind, however, that the value of the Fanning friction factor can be difficult to determine for supersonic and especially hypersonic flow velocities. The resulting relation is shown below where L* is the required duct length to choke the flow assuming the upstream Mach number is supersonic. The left-hand side is often called the Fanno parameter. Equally important to the Fanno flow model is the dimensionless ratio of the change in entropy over the heat capacity at constant pressure, cp. The above equation can be rewritten in terms of a static to stagnation temperature ratio, which, for a calorically perfect gas, is equal to the dimensionless enthalpy ratio, H: The equation above can be used to plot the Fanno line, which represents a locus of states for given Fanno flow conditions on an H-ΔS diagram. In the diagram, the Fanno line reaches maximum entropy at H = 0.833 and the flow is choked. According to the Second law of thermodynamics, entropy must always increase for Fanno flow. This means that a subsonic flow entering a duct with friction will have an increase in its Mach number until the flow is choked. Conversely, the Mach number of a supersonic flow will decrease until the flow is choked. Each point on the Fanno line corresponds with a different Mach number, and the movement to choked flow is shown in the diagram. The Fanno line defines the possible states for a gas when the mass flow rate and total enthalpy are held constant, but the momentum varies. Each point on the Fanno line will have a different momentum value, and the change in momentum is attributable to the effects of friction. Additional Fanno flow relations As was stated earlier, the area and mass flow rate in the duct are held constant for Fanno flow. Additionally, the stagnation temperature remains constant. These relations are shown below with the * symbol representing the throat location where choking can occur. A stagnation property contains a 0 subscript. 
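The equations referred to in this section were not carried over into this text. The sketch below therefore uses the standard textbook Fanno relations for a calorically perfect gas, which should be checked against the cited sources before use; it evaluates the Fanno parameter 4fL*/Dh and the static-to-sonic temperature ratio for one subsonic and one supersonic Mach number.

```python
import math

# Standard textbook Fanno relations for a calorically perfect gas, with f the
# Fanning friction factor and D_h the hydraulic diameter. These are assumed
# forms, not reproduced from this article's (missing) equations.

def fanno_parameter(M, gamma=1.4):
    """4 f L* / D_h needed to choke the flow starting from Mach number M."""
    return ((1 - M**2) / (gamma * M**2)
            + (gamma + 1) / (2 * gamma)
            * math.log((gamma + 1) * M**2 / (2 + (gamma - 1) * M**2)))

def temperature_ratio(M, gamma=1.4):
    """T / T* along the Fanno line (the stagnation temperature is constant)."""
    return (gamma + 1) / (2 + (gamma - 1) * M**2)

for M in (0.5, 2.0):
    print(f"M={M}: 4fL*/Dh={fanno_parameter(M):.3f}, T/T*={temperature_ratio(M):.3f}")
# Both a subsonic and a supersonic duct flow need a finite additional length
# to reach M = 1, where the Fanno parameter goes to zero.
```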
Differential equations can also be developed and solved to describe Fanno flow property ratios with respect to the values at the choking location. The ratios for the pressure, density, temperature, velocity and stagnation pressure are shown below, respectively. They are represented graphically along with the Fanno parameter. Applications The Fanno flow model is often used in the design and analysis of nozzles. In a nozzle, the converging or diverging area is modeled with isentropic flow, while the constant area section afterwards is modeled with Fanno flow. For given upstream conditions at point 1 as shown in Figures 3 and 4, calculations can be made to determine the nozzle exit Mach number and the location of a normal shock in the constant area duct. Point 2 labels the nozzle throat, where M = 1 if the flow is choked. Point 3 labels the end of the nozzle where the flow transitions from isentropic to Fanno. With a high enough initial pressure, supersonic flow can be maintained through the constant area duct, similar to the desired performance of a blowdown-type supersonic wind tunnel. However, these figures show the shock wave before it has moved entirely through the duct. If a shock wave is present, the flow transitions from the supersonic portion of the Fanno line to the subsonic portion before continuing towards M = 1. The movement in Figure 4 is always from the left to the right in order to satisfy the second law of thermodynamics. The Fanno flow model is also used extensively with the Rayleigh flow model. These two models intersect at points on the enthalpy-entropy and Mach number-entropy diagrams, which is meaningful for many applications. However, the entropy values for each model are not equal at the sonic state. The change in entropy is 0 at M = 1 for each model, but the previous statement means the change in entropy from the same arbitrary point to the sonic point is different for the Fanno and Rayleigh flow models. If initial values of si and Mi are defined, a new equation for dimensionless entropy versus Mach number can be defined for each model. These equations are shown below for Fanno and Rayleigh flow, respectively. Figure 5 shows the Fanno and Rayleigh lines intersecting with each other for initial conditions of si = 0 and Mi = 3. The intersection points are calculated by equating the new dimensionless entropy equations with each other, resulting in the relation below. The intersection points occur at the given initial Mach number and its post-normal shock value. For Figure 5, these values are M = 3 and 0.4752, which can be found in the normal shock tables listed in most compressible flow textbooks. A given flow with a constant duct area can switch between the Fanno and Rayleigh models at these points. See also Rayleigh flow Mass injection flow Isentropic process Isothermal flow Gas dynamics Compressible flow Choked flow Enthalpy Entropy Isentropic nozzle flow References External links Purdue University Adiabatic and Isothermal Fanno flow calculators University of Kentucky Fanno flow Webcalculator Maurice W. Downey, Gino Fanno Flow regimes Aerodynamics
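The quoted intersection values (M = 3 and 0.4752) are an upstream Mach number and its value downstream of a normal shock; a minimal sketch using the standard normal-shock relation for a calorically perfect gas reproduces the second number.

```python
import math

# Standard normal-shock relation for a calorically perfect gas; an assumed
# textbook form used here only to check the 0.4752 value quoted above.

def mach_after_normal_shock(M1, gamma=1.4):
    """Downstream Mach number M2 for an upstream Mach number M1 > 1."""
    num = 1 + (gamma - 1) / 2 * M1**2
    den = gamma * M1**2 - (gamma - 1) / 2
    return math.sqrt(num / den)

print(f"{mach_after_normal_shock(3.0):.4f}")  # 0.4752, as quoted above
```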
Fanno flow
Chemistry,Engineering
1,396
24,120,049
https://en.wikipedia.org/wiki/Radio%20frequency%20over%20glass
In telecommunications, radio frequency over glass (RFoG) is a deep-fiber network design in which the coax portion of the hybrid fiber coax (HFC) network is replaced by a single-fiber passive optical network (PON). Downstream and return-path transmission use different wavelengths to share the same fiber (typically 1550 nm downstream, and 1310 nm or 1590/1610 nm upstream). The return-path wavelength standard is expected to be 1610 nm, but early deployments have used 1590 nm. Using 1590/1610 nm for the return path allows the fiber infrastructure to support both RFoG and a standards-based PON simultaneously, operating with 1490 nm downstream and 1310 nm return-path wavelengths. Advantages RFoG delivers the same services as an RF/DOCSIS/HFC network, with the added benefit of improved noise performance and increased usable RF spectrum in both the downstream and return-path directions. Both RFoG and HFC systems can concurrently operate out of the same headend/hub, making RFoG a good solution for node-splitting and capacity increases on an existing network. RFoG allows service providers to continue to leverage traditional HFC equipment and back-office applications with the new FTTP deployments. Cable operators can continue to rely on the existing provisioning and billing systems, cable modem termination system (CMTS) platforms, headend equipment, set-top boxes, conditional access technology and cable modems while gaining benefits inherent with RFoG and FTTx. RFoG provides several benefits over traditional network architecture: More downstream spectrum; RFoG systems support 1 GHz and beyond, directly correlating to increased video and/or downstream data service support More upstream bandwidth; RFoG's improved noise characteristics allow for the use of the full 5–42 MHz return-path spectrum. Additionally, higher-performance RFoG systems not only support DOCSIS 3.0 with bonding, but also enable 64 quadrature amplitude modulation (QAM) upstream transmission in a DOCSIS 3.0 bonded channel, dramatically increasing return-path bandwidth. Improved operational expenses; RFoG brings the benefits of a passive fiber topology. Removing active devices in the access network reduces overall power requirements, as well as ongoing maintenance costs that would normally be needed for active elements (such as nodes and amplifiers). Both cost savings and increased capacity for new services (revenue generating and/or competitive positioning) are driving the acceptance of RFoG as a cost-effective step on the path towards a 100-percent PON-based access network. Implementation As with an HFC architecture, video controllers and data-networking services are fed through a CMTS/edge router. These electrical signals are then converted to optical ones, and transported via a 1550 nm wavelength through a wavelength-division multiplexing (WDM) platform and a passive splitter to a fiber-optic micro-node located at the customer premises. If necessary, an optical amplifier can be used to boost the downstream optical signal to cover a greater distance. The fiber-optic micro-nodes – which are also referred to as RFoG optical-networking units (R-ONUs) – terminate the fiber connection and convert traffic for delivery over the in-home network. Video traffic can be fed over coax to a set-top box, while voice and data traffic can be delivered to an embedded multimedia terminal adapter (eMTA), which connects to analog telephone lines over the subscriber’s internal phone wiring and to PCs via Ethernet or WiFi. 
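The wavelength plan described above can be summarised in a short sketch; the values are the nominal ones from the text, and the check simply shows why the 1590/1610 nm return option avoids colliding with a standards-based PON on the same fiber.

```python
# Sketch of the nominal RFoG wavelength plan described above (values in nm).
# Illustrative only; actual deployments vary.

RFOG = {"downstream": 1550, "upstream_legacy": 1310, "upstream_alt": 1610}
PON = {"downstream": 1490, "upstream": 1310}

def conflicts(upstream_choice):
    """Wavelengths an RFoG overlay would share with the PON plan."""
    used_by_rfog = {RFOG["downstream"], RFOG[upstream_choice]}
    return sorted(used_by_rfog & set(PON.values()))

print(conflicts("upstream_legacy"))  # [1310] -> clashes with the PON upstream
print(conflicts("upstream_alt"))     # []     -> RFoG and PON can coexist
```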
The return path for voice, data, and video traffic is over a 1310 or 1590/1610 nm wavelength to a return path receiver, which converts the optical signal to RF and feeds it back into the CMTS and video controller. Although RFoG is providing a capacity increase, one undesired effect of the system is that more than one R-ONU can have the optical return path activated at the same time and on the same wavelength (for instance, one R-ONU falsely triggered by ingress); thus, an optical collision may occur (optical beating). R-ONUs convert optical signals into electrical ones. This is done in place of the same function traditionally performed back at the higher-level serving area nodes in the HFC network. The RF infrastructure remains in place; the difference is that the fiber termination is moved from a fiber node to the customer's premises. The R-ONU can be located in any type of premises: a home, a business, a multi-tenant dwelling (MTU/MDU), or apartments in an MTU. When the network is upgraded, the RFoG elements can remain in place while the provider rolls out the necessary components (OLTs and ONTs) for a full PON implementation. Standards The Society of Cable and Telecommunications Engineers (SCTE) has approved SCTE 174 2010, the standards for RFoG. The standard has been approved by the American National Standard Institute (ANSI). Status Cable service providers (also known as MSOs) have generally responded favorably to the technology and the benefits it brings to their networks. Many have tried the technology, and some have begun to deploy RFoG. Following positive experience with smaller deployments in newly built housing and with the finalization of the standard, it is expected to become more widely adopted. References Leveraging RFoG to Deliver DOCSIS and GPON Services Over Fiber (Motorola Whitepaper, 09/2008) “RFoG for Business Services” by Michael Emmendorfer Is Radio Frequency over Glass (RFoG) the Solution for CATV Operators (PBN Whitepaper, 08/2009) Radio Frequency over Glass Fiber-to-the-Home Specification (ANSI SCTE 174 2010 document) External links Society of Telecommunications Engineers Broadband Digital cable Fiber-optic communications Network architecture
Radio frequency over glass
Engineering
1,185
19,604,228
https://en.wikipedia.org/wiki/Dark%20energy
In physical cosmology and astronomy, dark energy is a proposed form of energy that affects the universe on the largest scales. Its primary effect is to drive the accelerating expansion of the universe. Assuming that the lambda-CDM model of cosmology is correct, dark energy dominates the universe, contributing 68% of the total energy in the present-day observable universe while dark matter and ordinary (baryonic) matter contribute 26% and 5%, respectively, and other components such as neutrinos and photons are nearly negligible. Dark energy's density is very low: ( in mass-energy), much less than the density of ordinary matter or dark matter within galaxies. However, it dominates the universe's mass–energy content because it is uniform across space. The first observational evidence for dark energy's existence came from measurements of supernovae. Type Ia supernovae have constant luminosity, which means that they can be used as accurate distance measures. Comparing this distance to the redshift (which measures the speed at which the supernova is receding) shows that the universe's expansion is accelerating. Prior to this observation, scientists thought that the gravitational attraction of matter and energy in the universe would cause the universe's expansion to slow over time. Since the discovery of accelerating expansion, several independent lines of evidence have been discovered that support the existence of dark energy. The exact nature of dark energy remains a mystery, and many possible explanations have been theorized. The main candidates are a cosmological constant (representing a constant energy density filling space homogeneously) and scalar fields (dynamic quantities having energy densities that vary in time and space) such as quintessence or moduli. A cosmological constant would remain constant across time and space, while scalar fields can vary. Yet other possibilities are interacting dark energy, an observational effect, and cosmological coupling. History of discovery and previous speculation Einstein's cosmological constant The "cosmological constant" is a constant term that can be added to the Einstein field equations of general relativity. If considered as a "source term" in the field equation, it can be viewed as equivalent to the mass of empty space (which conceptually could be either positive or negative), or "vacuum energy". The cosmological constant was first proposed by Einstein as a mechanism to obtain a solution to the gravitational field equation that would lead to a static universe, effectively using dark energy to balance gravity. Einstein gave the cosmological constant the symbol Λ (capital lambda). Einstein stated that the cosmological constant required that 'empty space takes the role of gravitating negative masses which are distributed all over the interstellar space'. The mechanism was an example of fine-tuning, and it was later realized that Einstein's static universe would not be stable: local inhomogeneities would ultimately lead to either the runaway expansion or contraction of the universe. The equilibrium is unstable: if the universe expands slightly, then the expansion releases vacuum energy, which causes yet more expansion. Likewise, a universe which contracts slightly will continue contracting. According to Einstein, "empty space" can possess its own energy. Because this energy is a property of space itself, it would not be diluted as space expands. 
As more space comes into existence, more of this energy-of-space would appear, thereby causing accelerated expansion. These sorts of disturbances are inevitable, due to the uneven distribution of matter throughout the universe. Further, observations made by Edwin Hubble in 1929 showed that the universe appears to be expanding and is not static. Einstein reportedly referred to his failure to predict the idea of a dynamic universe, in contrast to a static universe, as his greatest blunder. Inflationary dark energy Alan Guth and Alexei Starobinsky proposed in 1980 that a negative pressure field, similar in concept to dark energy, could drive cosmic inflation in the very early universe. Inflation postulates that some repulsive force, qualitatively similar to dark energy, resulted in an enormous and exponential expansion of the universe slightly after the Big Bang. Such expansion is an essential feature of most current models of the Big Bang. However, inflation must have occurred at a much higher (negative) energy density than the dark energy we observe today, and inflation is thought to have completely ended when the universe was just a fraction of a second old. It is unclear what relation, if any, exists between dark energy and inflation. Even after inflationary models became accepted, the cosmological constant was thought to be irrelevant to the current universe. Nearly all inflation models predict that the total (matter+energy) density of the universe should be very close to the critical density. During the 1980s, most cosmological research focused on models with critical density in matter only, usually 95% cold dark matter (CDM) and 5% ordinary matter (baryons). These models were found to be successful at forming realistic galaxies and clusters, but some problems appeared in the late 1980s: in particular, the model required a value for the Hubble constant lower than preferred by observations, and the model under-predicted observations of large-scale galaxy clustering. These difficulties became stronger after the discovery of anisotropy in the cosmic microwave background by the COBE spacecraft in 1992, and several modified CDM models came under active study through the mid-1990s: these included the Lambda-CDM model and a mixed cold/hot dark matter model. The first direct evidence for dark energy came from supernova observations in 1998 of accelerated expansion in Riess et al. and in Perlmutter et al., and the Lambda-CDM model then became the leading model. Soon after, dark energy was supported by independent observations: in 2000, the BOOMERanG and Maxima cosmic microwave background experiments observed the first acoustic peak in the cosmic microwave background, showing that the total (matter+energy) density is close to 100% of critical density. Then in 2001, the 2dF Galaxy Redshift Survey gave strong evidence that the matter density is around 30% of critical. The large difference between these two supports a smooth component of dark energy making up the difference. Much more precise measurements from WMAP in 2003–2010 have continued to support the standard model and give more accurate measurements of the key parameters. The term "dark energy", echoing Fritz Zwicky's "dark matter" from the 1930s, was coined by Michael S. Turner in 1998. Change in expansion over time High-precision measurements of the expansion of the universe are required to understand how the expansion rate changes over time and space. 
In general relativity, the evolution of the expansion rate is estimated from the curvature of the universe and the cosmological equation of state (the relationship between temperature, pressure, and combined matter, energy, and vacuum energy density for any region of space). Measuring the equation of state for dark energy is one of the biggest efforts in observational cosmology today. Adding the cosmological constant to cosmology's standard FLRW metric leads to the Lambda-CDM model, which has been referred to as the "standard model of cosmology" because of its precise agreement with observations. As of 2013, the Lambda-CDM model is consistent with a series of increasingly rigorous cosmological observations, including the Planck spacecraft and the Supernova Legacy Survey. First results from the SNLS reveal that the average behavior (i.e., equation of state) of dark energy behaves like Einstein's cosmological constant to a precision of 10%. Recent results from the Hubble Space Telescope Higher-Z Team indicate that dark energy has been present for at least 9 billion years and during the period preceding cosmic acceleration. Nature The nature of dark energy is more hypothetical than that of dark matter, and many things about it remain in the realm of speculation. Dark energy is thought to be very homogeneous and not dense, and is not known to interact through any of the fundamental forces other than gravity. Since it is rarefied and un-massive—roughly 10−27 kg/m3—it is unlikely to be detectable in laboratory experiments. The reason dark energy can have such a profound effect on the universe, making up 68% of universal density in spite of being so dilute, is that it is believed to uniformly fill otherwise empty space. The vacuum energy, that is, the particle-antiparticle pairs generated and mutually annihilated within a time frame in accord with Heisenberg's uncertainty principle in the energy-time formulation, has often been invoked as the main contribution to dark energy. The mass–energy equivalence postulated by special relativity implies that the vacuum energy should exert a gravitational force. Hence, the vacuum energy is expected to contribute to the cosmological constant, which in turn impinges on the accelerated expansion of the universe. However, the cosmological constant problem asserts that there is a huge disagreement between the observed values of vacuum energy density and the theoretical large value of zero-point energy obtained by quantum field theory; the problem remains unresolved. Independently of its actual nature, dark energy would need to have a strong negative pressure to explain the observed acceleration of the expansion of the universe. According to general relativity, the pressure within a substance contributes to its gravitational attraction for other objects just as its mass density does. This happens because the physical quantity that causes matter to generate gravitational effects is the stress–energy tensor, which contains both the energy (or matter) density of a substance and its pressure. In the Friedmann–Lemaître–Robertson–Walker metric, it can be shown that a strong constant negative pressure (i.e., tension) in all the universe causes an acceleration in the expansion if the universe is already expanding, or a deceleration in contraction if the universe is already contracting. This accelerating expansion effect is sometimes labeled "gravitational repulsion". 
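A compact way to see why a strong negative pressure accelerates the expansion is the standard Friedmann acceleration equation. The block below is a minimal sketch of that textbook relation, not a result derived in this article; a, ρ, p and w denote the usual scale factor, energy density, pressure and equation-of-state parameter.

```latex
% Sketch of the standard Friedmann acceleration equation (textbook form).
\[
  \frac{\ddot{a}}{a} \;=\; -\,\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right)
\]
% For a component with equation of state p = w * rho * c^2, the right-hand side
% is positive (accelerating expansion) exactly when
\[
  \rho + \frac{3p}{c^{2}} < 0
  \quad\Longleftrightarrow\quad
  w < -\tfrac{1}{3}.
\]
% A cosmological constant has w = -1, comfortably inside this range.
```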
Technical definition In standard cosmology, there are three components of the universe: matter, radiation, and dark energy. Matter is anything whose energy density scales with the inverse cube of the scale factor, i.e., ρ ∝ a−3, while radiation is anything whose energy density scales with the inverse fourth power of the scale factor (ρ ∝ a−4). This can be understood intuitively: for an ordinary particle in a cube-shaped box, doubling the length of an edge of the box decreases the density (and hence energy density) by a factor of eight (23). For radiation, the decrease in energy density is greater, because an increase in spatial distance also causes a redshift. The final component is dark energy: it is an intrinsic property of space and has a constant energy density, regardless of the dimensions of the volume under consideration (ρ ∝ a0). Thus, unlike ordinary matter, it is not diluted by the expansion of space. Evidence of existence The evidence for dark energy is indirect but comes from three independent sources: Distance measurements and their relation to redshift, which suggest the universe has expanded more in the latter half of its life than in the former half of its life. The theoretical need for a type of additional energy that is not matter or dark matter to form the observationally flat universe (absence of any detectable global curvature). Measurements of large-scale wave patterns of mass density in the universe. Supernovae In 1998, the High-Z Supernova Search Team published observations of Type Ia ("one-A") supernovae. In 1999, the Supernova Cosmology Project followed by suggesting that the expansion of the universe is accelerating. The 2011 Nobel Prize in Physics was awarded to Saul Perlmutter, Brian P. Schmidt, and Adam G. Riess for their leadership in the discovery. Since then, these observations have been corroborated by several independent sources. Measurements of the cosmic microwave background, gravitational lensing, and the large-scale structure of the cosmos, as well as improved measurements of supernovae, have been consistent with the Lambda-CDM model. Some people argue that the only indications for the existence of dark energy are observations of distance measurements and their associated redshifts. Cosmic microwave background anisotropies and baryon acoustic oscillations serve only to demonstrate that distances to a given redshift are larger than would be expected from a "dusty" Friedmann–Lemaître universe and the local measured Hubble constant. Supernovae are useful for cosmology because they are excellent standard candles across cosmological distances. They allow researchers to measure the expansion history of the universe by looking at the relationship between the distance to an object and its redshift, which gives how fast it is receding from us. The relationship is roughly linear, according to Hubble's law. It is relatively easy to measure redshift, but finding the distance to an object is more difficult. Usually, astronomers use standard candles: objects for which the intrinsic brightness, or absolute magnitude, is known. This allows the object's distance to be measured from its actual observed brightness, or apparent magnitude. Type Ia supernovae are the most accurate known standard candles across cosmological distances because of their extreme and consistent luminosity. Recent observations of supernovae are consistent with a universe made up 71.3% of dark energy and 27.4% of a combination of dark matter and baryonic matter. 
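As a rough illustration of the standard-candle reasoning just described, the sketch below turns an apparent and an absolute magnitude into a luminosity distance and pairs it with a redshift to get a crude expansion-rate estimate. The input numbers and the low-redshift approximation are illustrative assumptions, not measurements quoted in this article.

```python
def luminosity_distance_pc(apparent_mag, absolute_mag):
    """Invert the distance modulus relation m - M = 5*log10(d / 10 pc)."""
    return 10.0 ** ((apparent_mag - absolute_mag) / 5.0 + 1.0)

# Illustrative values (assumed, not taken from real data):
m = 13.9      # apparent peak magnitude of a hypothetical Type Ia supernova
M = -19.3     # absolute peak magnitude commonly assumed for Type Ia supernovae
z = 0.01      # redshift of the host galaxy

d_mpc = luminosity_distance_pc(m, M) / 1.0e6   # distance in megaparsecs

# At low redshift, Hubble's law gives v ~ c*z and v ~ H0*d, so H0 ~ c*z/d.
c_km_s = 299_792.458
H0 = c_km_s * z / d_mpc

print(f"distance ~ {d_mpc:.1f} Mpc, crude H0 ~ {H0:.1f} km/s/Mpc")
```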
Large-scale structure The theory of large-scale structure, which governs the formation of structures in the universe (stars, quasars, galaxies and galaxy groups and clusters), also suggests that the density of matter in the universe is only 30% of the critical density. A 2011 survey, the WiggleZ galaxy survey of more than 200,000 galaxies, provided further evidence towards the existence of dark energy, although the exact physics behind it remains unknown. The WiggleZ survey from the Australian Astronomical Observatory scanned the galaxies to determine their redshift. Then, by exploiting the fact that baryon acoustic oscillations have left regularly spaced voids of ≈150 Mpc diameter, surrounded by galaxies, the voids were used as standard rulers to estimate distances to galaxies as far as 2,000 Mpc (redshift 0.6), allowing for an accurate estimate of the speeds of galaxies from their redshift and distance. The data confirmed cosmic acceleration up to half of the age of the universe (7 billion years) and constrained its inhomogeneity to 1 part in 10. This provides a confirmation of cosmic acceleration independent of supernovae. Cosmic microwave background The existence of dark energy, in whatever form, is needed to reconcile the measured geometry of space with the total amount of matter in the universe. Measurements of cosmic microwave background anisotropies indicate that the universe is close to flat. For the shape of the universe to be flat, the mass–energy density of the universe must be equal to the critical density. The total amount of matter in the universe (including baryons and dark matter), as measured from the cosmic microwave background spectrum, accounts for only about 30% of the critical density. This implies the existence of an additional form of energy to account for the remaining 70%. The Wilkinson Microwave Anisotropy Probe (WMAP) spacecraft seven-year analysis estimated a universe made up of 72.8% dark energy, 22.7% dark matter, and 4.5% ordinary matter. Work done in 2013 based on the Planck spacecraft observations of the cosmic microwave background gave a more accurate estimate of 68.3% dark energy, 26.8% dark matter, and 4.9% ordinary matter. Late-time integrated Sachs–Wolfe effect Accelerated cosmic expansion causes gravitational potential wells and hills to flatten as photons pass through them, producing cold spots and hot spots on the cosmic microwave background aligned with vast supervoids and superclusters. This so-called late-time Integrated Sachs–Wolfe effect (ISW) is a direct signal of dark energy in a flat universe. It was reported at high significance in 2008 by Ho et al. and Giannantonio et al. Observational Hubble constant data A new approach to test evidence of dark energy through observational Hubble constant data (OHD), also known as cosmic chronometers, has gained significant attention in recent years. The Hubble constant, H(z), is measured as a function of cosmological redshift. OHD directly tracks the expansion history of the universe by taking passively evolving early-type galaxies as "cosmic chronometers". In this way, this approach provides standard clocks in the universe. The core of this idea is the measurement of the differential age evolution as a function of redshift of these cosmic chronometers. Thus, it provides a direct estimate of the Hubble parameter, H(z) ≈ −[1/(1 + z)] Δz/Δt. The reliance on a differential quantity brings more information and is appealing for computation: it can minimize many common issues and systematic effects; a toy numerical illustration is sketched below. 
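A minimal numerical sketch of the differential-age idea follows. The galaxy redshifts and ages are invented for illustration; only the relation H(z) ≈ −Δz / [(1 + z) Δt] is taken from the cosmic-chronometer method described above.

```python
# Toy cosmic-chronometer estimate: H(z) ~ -(1/(1+z)) * dz/dt, where dt is the
# difference in cosmic time traced by the ages of passively evolving galaxies
# observed at two nearby redshifts. All numbers below are invented.

GYR_TO_S = 3.156e16      # seconds per gigayear
KM_PER_MPC = 3.086e19    # kilometres per megaparsec

def hubble_from_chronometers(z1, age1_gyr, z2, age2_gyr):
    """Return H at the mean redshift, in km/s/Mpc, from two (redshift, age) samples."""
    dz = z2 - z1
    dt = (age2_gyr - age1_gyr) * GYR_TO_S   # negative when z2 > z1 (younger galaxies)
    z_mean = 0.5 * (z1 + z2)
    h_per_s = -dz / ((1.0 + z_mean) * dt)   # H in units of 1/s
    return h_per_s * KM_PER_MPC

# Hypothetical data: the galaxy seen at higher redshift is measured to be younger.
print(hubble_from_chronometers(z1=0.20, age1_gyr=10.0, z2=0.28, age2_gyr=9.2))
```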
Analyses of supernovae and baryon acoustic oscillations (BAO) are based on integrals of the Hubble parameter, whereas observational H(z) data measure it directly. For these reasons, this method has been widely used to examine the accelerated cosmic expansion and study properties of dark energy. Theories of dark energy Dark energy's status as a hypothetical force with unknown properties makes it an active target of research. The problem is attacked from a variety of angles, such as modifying the prevailing theory of gravity (general relativity), attempting to pin down the properties of dark energy, and finding alternative ways to explain the observational data. Cosmological constant The simplest explanation for dark energy is that it is an intrinsic, fundamental energy of space. This is the cosmological constant, usually represented by the Greek letter Λ (Lambda, hence the name Lambda-CDM model). Since energy and mass are related according to the equation E = mc2, Einstein's theory of general relativity predicts that this energy will have a gravitational effect. It is sometimes called vacuum energy because it is the energy density of empty space – of vacuum. A major outstanding problem is that quantum field theories predict a huge cosmological constant, about 120 orders of magnitude too large. This would need to be almost, but not exactly, cancelled by an equally large term of the opposite sign. Some supersymmetric theories require a cosmological constant that is exactly zero. Also, it is unknown whether there is a metastable vacuum state in string theory with a positive cosmological constant, and it has been conjectured by Ulf Danielsson et al. that no such state exists. This conjecture would not rule out other models of dark energy, such as quintessence, that could be compatible with string theory. Quintessence In quintessence models of dark energy, the observed acceleration of the scale factor is caused by the potential energy of a dynamical field, referred to as the quintessence field. Quintessence differs from the cosmological constant in that it can vary in space and time. In order for it not to clump and form structure like matter, the field must be very light so that it has a large Compton wavelength. In the simplest scenarios, the quintessence field has a canonical kinetic term, is minimally coupled to gravity, and does not feature higher-order operators in its Lagrangian. No evidence of quintessence is yet available, nor has it been ruled out. It generally predicts a slightly slower acceleration of the expansion of the universe than the cosmological constant. Some scientists think that the best evidence for quintessence would come from violations of Einstein's equivalence principle and variation of the fundamental constants in space or time. Scalar fields are predicted by the Standard Model of particle physics and string theory, but an analogous problem to the cosmological constant problem (or the problem of constructing models of cosmological inflation) occurs: renormalization theory predicts that scalar fields should acquire large masses. The coincidence problem asks why the acceleration of the Universe began when it did. If acceleration began earlier in the universe, structures such as galaxies would never have had time to form, and life, at least as we know it, would never have had a chance to exist. Proponents of the anthropic principle view this as support for their arguments. However, many models of quintessence have a so-called "tracker" behavior, which solves this problem. 
In these models, the quintessence field has a density which closely tracks (but is less than) the radiation density until matter–radiation equality, which triggers quintessence to start behaving as dark energy, eventually dominating the universe. This naturally sets the low energy scale of the dark energy. In 2004, when scientists fit the evolution of dark energy with the cosmological data, they found that the equation of state had possibly crossed the cosmological constant boundary (w = −1) from above to below. A no-go theorem has been proved showing that this scenario requires models with at least two types of quintessence. This scenario is the so-called Quintom scenario. Some special cases of quintessence are phantom energy, in which the energy density of quintessence actually increases with time, and k-essence (short for kinetic quintessence) which has a non-standard form of kinetic energy such as a negative kinetic energy. They can have unusual properties: phantom energy, for example, can cause a Big Rip. A group of researchers argued in 2021 that observations of the Hubble tension may imply that only quintessence models with a nonzero coupling constant are viable. Interacting dark energy This class of theories attempts to come up with an all-encompassing theory of both dark matter and dark energy as a single phenomenon that modifies the laws of gravity at various scales. This could, for example, treat dark energy and dark matter as different facets of the same unknown substance, or postulate that cold dark matter decays into dark energy. Another class of theories that unifies dark matter and dark energy comprises covariant theories of modified gravity. These theories alter the dynamics of spacetime such that the modified dynamics accounts for what has been attributed to the presence of dark energy and dark matter. Dark energy could in principle interact not only with the rest of the dark sector, but also with ordinary matter. However, cosmology alone is not sufficient to effectively constrain the strength of the coupling between dark energy and baryons, so that other indirect techniques or laboratory searches have to be adopted. It was briefly theorized in the early 2020s that an excess observed in the XENON1T detector in Italy may have been caused by a chameleon model of dark energy, but further experiments disproved this possibility. Variable dark energy models The density of dark energy might have varied in time during the history of the universe. Modern observational data allows us to estimate the present density of dark energy. Using baryon acoustic oscillations, it is possible to investigate the effect of dark energy in the history of the universe, and constrain parameters of the equation of state of dark energy. To that end, several models have been proposed. One of the most popular models is the Chevallier–Polarski–Linder model (CPL), sketched below. Some other common models are Barboza & Alcaniz (2008), Jassal et al. (2005), Wetterich (2004), and Oztas et al. (2018). Possibly decreasing levels Researchers using the Dark Energy Spectroscopic Instrument (DESI) to make the largest 3-D map of the universe as of 2024 have obtained an expansion history measured to better than 1% precision. From this level of detail, DESI Director Michael Levi stated: "We're also seeing some potentially interesting differences that could indicate that dark energy is evolving over time. Those may or may not go away with more data, so we're excited to start analyzing our three-year dataset soon." 
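For reference, the CPL parametrization mentioned above is normally written as a two-parameter equation of state; the block below sketches that standard form, with the fit parameters w0 and wa left unspecified.

```latex
% Standard Chevallier–Polarski–Linder (CPL) parametrization of the dark-energy
% equation of state, in terms of the scale factor a or the redshift z = 1/a - 1:
\[
  w(a) = w_{0} + w_{a}\,(1 - a),
  \qquad
  w(z) = w_{0} + w_{a}\,\frac{z}{1 + z}.
\]
% A cosmological constant corresponds to w_0 = -1, w_a = 0; surveys such as DESI
% fit (w_0, w_a) to test whether dark energy evolves with time.
```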
Observational skepticism Some alternatives to dark energy, such as inhomogeneous cosmology, aim to explain the observational data by a more refined use of established theories. In this scenario, dark energy does not actually exist, and is merely a measurement artifact. For example, if we are located in an emptier-than-average region of space, the observed cosmic expansion rate could be mistaken for a variation in time, or acceleration. A different approach uses a cosmological extension of the equivalence principle to show how space might appear to be expanding more rapidly in the voids surrounding our local cluster. While weak, such effects considered cumulatively over billions of years could become significant, creating the illusion of cosmic acceleration, and making it appear as if we live in a Hubble bubble. Yet other possibilities are that the accelerated expansion of the universe is an illusion caused by our motion relative to the rest of the universe, or that the statistical methods employed were flawed. A laboratory direct detection attempt failed to detect any force associated with dark energy. Observational skepticism explanations of dark energy have generally not gained much traction among cosmologists. For example, a paper that suggested the anisotropy of the local Universe has been misrepresented as dark energy was quickly countered by another paper claiming errors in the original paper. Another study questioning the essential assumption that the luminosity of Type Ia supernovae does not vary with stellar population age was also swiftly rebutted by other cosmologists. As a general relativistic effect due to black holes This theory was formulated by researchers of the University of Hawaiʻi at Mānoa in February 2023. The idea is that if one requires the Kerr metric (which describes rotating black holes) to asymptote to the Friedmann-Robertson-Walker metric (which describes the isotropic and homogeneous universe that is the basic assumption of modern cosmology), then one finds that black holes gain mass as the universe expands. The rate is measured to be approximately m ∝ a3, where a is the scale factor. This particular rate means that the energy density of black holes remains constant over time (their number density dilutes as a−3 while their individual masses grow as a3), mimicking dark energy (see Dark energy#Technical definition). The theory is called "cosmological coupling" because the black holes couple to a cosmological requirement. Other astrophysicists are skeptical, with a variety of papers claiming that the theory fails to explain other observations. Other mechanisms driving acceleration Modified gravity The evidence for dark energy is heavily dependent on the theory of general relativity. Therefore, it is conceivable that a modification to general relativity also eliminates the need for dark energy. There are many such theories, and research is ongoing. The measurement of the speed of gravity in the first gravitational wave measured by non-gravitational means (GW170817) ruled out many modified gravity theories as explanations of dark energy. Astrophysicist Ethan Siegel states that, while such alternatives gain mainstream press coverage, almost all professional astrophysicists are confident that dark energy exists and that none of the competing theories successfully explain observations to the same level of precision as standard dark energy. Implications for the fate of the universe Cosmologists estimate that the acceleration began roughly 5 billion years ago. 
Before that, it is thought that the expansion was decelerating, due to the attractive influence of matter. The density of dark matter in an expanding universe decreases more quickly than dark energy, and eventually the dark energy dominates. Specifically, when the volume of the universe doubles, the density of dark matter is halved, but the density of dark energy is nearly unchanged (it is exactly constant in the case of a cosmological constant). Projections into the future can differ radically for different models of dark energy. For a cosmological constant, or any other model that predicts that the acceleration will continue indefinitely, the ultimate result will be that galaxies outside the Local Group will have a line-of-sight velocity that continually increases with time, eventually far exceeding the speed of light. This is not a violation of special relativity because the notion of "velocity" used here is different from that of velocity in a local inertial frame of reference, which is still constrained to be less than the speed of light for any massive object (see Uses of the proper distance for a discussion of the subtleties of defining any notion of relative velocity in cosmology). Because the Hubble parameter is decreasing with time, there can actually be cases where a galaxy that is receding from us faster than light does manage to emit a signal which reaches us eventually. However, because of the accelerating expansion, it is projected that most galaxies will eventually cross a type of cosmological event horizon where any light they emit past that point will never be able to reach us at any time in the infinite future because the light never reaches a point where its "peculiar velocity" toward us exceeds the expansion velocity away from us (these two notions of velocity are also discussed in Uses of the proper distance). Assuming the dark energy is constant (a cosmological constant), the current distance to this cosmological event horizon is about 16 billion light years, meaning that a signal from an event happening at present would eventually be able to reach us in the future if the event were less than 16 billion light years away, but the signal would never reach us if the event were more than 16 billion light years away. As galaxies approach the point of crossing this cosmological event horizon, the light from them will become more and more redshifted, to the point where the wavelength becomes too large to detect in practice and the galaxies appear to vanish completely (see Future of an expanding universe). Planet Earth, the Milky Way, and the Local Group of galaxies of which the Milky Way is a part, would all remain virtually undisturbed as the rest of the universe recedes and disappears from view. In this scenario, the Local Group would ultimately suffer heat death, just as was hypothesized for the flat, matter-dominated universe before measurements of cosmic acceleration. There are other, more speculative ideas about the future of the universe. The phantom energy model of dark energy results in divergent expansion, which would imply that the effective force of dark energy continues growing until it dominates all other forces in the universe. Under this scenario, dark energy would ultimately tear apart all gravitationally bound structures, including galaxies and solar systems, and eventually overcome the electrical and nuclear forces to tear apart atoms themselves, ending the universe in a "Big Rip". 
On the other hand, dark energy might dissipate with time or even become attractive. Such uncertainties leave open the possibility of gravity eventually prevailing and lead to a universe that contracts in on itself in a "Big Crunch", or that there may even be a dark energy cycle, which implies a cyclic model of the universe in which every iteration (Big Bang then eventually a Big Crunch) takes about a trillion (1012) years. While none of these are supported by observations, they are not ruled out. In philosophy of science The astrophysicist David Merritt identifies dark energy as an example of an "auxiliary hypothesis", an ad hoc postulate that is added to a theory in response to observations that falsify it. He argues that the dark energy hypothesis is a conventionalist hypothesis, that is, a hypothesis that adds no empirical content and hence is unfalsifiable in the sense defined by Karl Popper. However, his opinion is not shared by all scientists. See also Conformal gravity Dark Energy Spectroscopic Instrument Dark matter De Sitter invariant special relativity Illustris project Inhomogeneous cosmology Joint Dark Energy Mission Negative mass Quintessence: The Search for Missing Mass in the Universe Dark Energy Survey Quantum vacuum state Notes References External links Euclid ESA Satellite, a mission to map the geometry of the dark universe "Surveying the dark side" by Roberto Trotta and Richard Bower, Astron.Geophys. 1998 neologisms Concepts in astronomy Dark concepts in astrophysics Energy (physics) Physical cosmological concepts Unsolved problems in astronomy Unsolved problems in physics
Dark energy
Physics,Astronomy,Mathematics
6,592
730,906
https://en.wikipedia.org/wiki/Exclusion%20principle%20%28philosophy%29
The exclusion principle is a philosophical principle that states: If an event e causes event e*, then there is no event e# such that e# is non-supervenient on e and e# causes e*. In physicalism The exclusion principle is most commonly applied when one poses this scenario: One usually considers the desire to lift one's arm as a mental event, and the lifting of one's arm as a physical event. According to the exclusion principle, there must be no event that does not supervene on e while causing e*. To show this better, substitute "the desire to lift one's arm" for "e", and "one to lift their arm" for "e*". If the desire to lift one's arm causes one to lift their arm, then there is no event such that it is non-supervenient on the desire to lift one's arm and it causes one to lift their arm. This is interpreted as meaning that mental events supervene upon the physical. However, some philosophers do not accept this principle, and instead accept epiphenomenalism, which states that mental events are caused by physical events, but physical events are not caused by mental events (called causal impotence). If, however, e# does not cause e, then there is no way to verify that e* exists. This debate has not been settled in the philosophical community. External links Princeton University Press Stanford Encyclopedia of Philosophy Arguments in philosophy of mind Concepts in epistemology Causality Physicalism Metaphysical principles Concepts in the philosophy of mind
Exclusion principle (philosophy)
Physics
326
1,126,135
https://en.wikipedia.org/wiki/Alfred%20Wegener%20Institute%20for%20Polar%20and%20Marine%20Research
The Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (German: Alfred-Wegener-Institut, Helmholtz-Zentrum für Polar- und Meeresforschung) is located in Bremerhaven, Germany, and is a member of the Helmholtz Association of German Research Centres. It conducts research in the Arctic, the Antarctic, and the high and mid latitude oceans. Additional research topics are: North Sea research, marine biological monitoring, and technical marine developments. The institute was founded in 1980 and is named after meteorologist, climatologist, and geologist Alfred Wegener. AWI is the biggest institution for polar and ocean research and science in Germany. The annual budget is 140 million euros (2018) and the institute has a staff of more than 1000 people. History The foundation of the AWI took place in a political environment characterized by the systemic competition between East and West. The GDR had been conducting its own Antarctic research for decades. In the 1970s it became clear that there would be a scarcity of biological and mineral resources. Partly due to the experience of the oil crisis of 1973, the Federal Republic of Germany decided to intensify its activities in polar research for geostrategic reasons and to undertake larger research projects in the Antarctic Ocean and in Antarctica. In 1975/76 and 1977/78, expeditions were conducted to explore the migration routes of krill. In 1978 the German Bundestag decided that polar research would be a governmental task of national interest, that West Germany would become a member of the Antarctic Treaty System, and that it would found a polar research institute. In 1980 the "AWI act" was passed by the Bürgerschaft of Bremen. The founding director was Gotthilf Hempel. The construction of the first German Antarctic base, the first Georg von Neumayer station (GvN station I), had already begun in 1979. In 1981 the station was operational. In 1978, the Federal Ministry of Education and Research commissioned the tender for a research icebreaker. After the public tender, the hull of the first German polar research ship was laid by HDW Howaldtswerke-Deutsche Werft in 1981. The RV Polarstern has been in operation for the AWI since 1982. On 24 February 1985, the Polar 3, a research airplane of the institute of the type Dornier 228, was shot down by members of the Polisario Front over Western Sahara. Both pilots and the mechanic died. Polar 3, together with the unharmed Polar 2, was on its way back from Antarctica and had taken off in Dakar, Senegal, to reach Arrecife, Canary Islands. In 1986 the main building of AWI (Building D) was built at the Old Harbour (Alter Hafen) in the centre of Bremerhaven, to plans by the architect Oswald Mathias Ungers. In 2004 the headquarters of AWI moved to the Fischereihafenschleuse, and a new building by Otto Steidle was built at Am Handelshafen. In January 2005, Polar 4 was severely damaged during a rough landing at the British overwintering station Rothera on the Antarctic Peninsula. As it was impossible to repair the plane, the aircraft had to be decommissioned. Since then, scientific and logistical tasks of polar flights have been performed by Polar 2. After years of preparation, the Alfred Wegener Institute conducted, starting in September 2019, the international Arctic expedition MOSAiC (the Multidisciplinary drifting Observatory for the Study of Arctic Climate), one of the largest research undertakings of its kind. Around 442 scientists from 20 countries worked on different tasks in extreme weather. 
The research expedition had a budget of 140 million euros. No other polar research expedition has since received as much media attention as the MOSAiC expedition. The Alfred Wegener Institute enlarged its press department before and during the expedition, and hired a "Communications Manager MOSAiC" and its own photographer to feed "MOSAiC" channels on Twitter and Instagram. At the beginning, the AWI focus was to set up the complex infrastructure for research in the Arctic and Antarctic regions. In addition to international prestige, the territorial claim to resources from terrestrial and maritime areas was one of Germany's reasons for the cost-intensive work of the Alfred Wegener Institute. Climatologists and geophysicists at AWI recognized the serious effects of global warming in the most affected geographical areas early on, in the 1980s, but gained little attention outside the scientific community. In the 1990s, the mainly geophysical-oceanographic research was expanded to include the biological aspects of polar and deep-sea habitats, among other things. From the 2000s, the problem of climate change reached the consciousness of German society and of the politicians who funded the AWI. The focus and promotion of the institute's work were adapted to the debate about global change. Current projects often also aim to research specific aspects of climate change and the effects of global warming, especially in the polar regions. Under director Boetius, the public relations and marketing of polar research were pushed forward. In 2024, the AWI signed a memorandum of understanding with Antarctica New Zealand to foster cooperation between the two polar science bodies, amid China's growing presence in Antarctica. Research The institute has three major departments: Climate System Department, which studies oceans, ice and atmosphere as physical and chemical systems. Biosciences Department, which studies the biological processes in marine and coastal ecosystems. Geoscientific Department, which studies climate development, especially as revealed by sediments. Facilities The institute is distributed over several sites within North Germany and the Otto Schmidt Laboratory for Polar and Marine Research (OSL) at the Arctic and Antarctic Research Institute (AARI) in Saint Petersburg, a Russian-German cooperation in the field of Arctic research named after the polar explorer Otto Schmidt. Bremerhaven The headquarters was founded by Gotthilf Hempel. Nowadays, the AWI has several buildings within the city of Bremerhaven. Building D is located next to the old port (German: Alter Hafen). The dark clinker-brick building was designed by Oswald Mathias Ungers in 1985, who won the BDA architecture prize for the building. It hosts the AWI library, the main lecture hall and various laboratories and offices. The main building E is next to the lock Doppelschleuse. Its main characteristics are the chequered tiles and its three office towers. The building was designed by Otto Steidle and constructed in 2004 as an extension of the complex A, B, C. House F is close to the Weser ferry at the Geeste estuary. The housing association StäWoG (German: Städtische Wohnungsgesellschaft) renovated the building of the former nautical college (German: Hochschule für Seefahrt) in 1999. Because of this history, there is a planetarium, which is nowadays used by Bremerhaven's friends of the stars. The small Bathymetry Building is located close to the radar tower. 
The Nordsee Villa formerly belonged to the fast-food restaurant chain Nordsee and hosts a few offices of AWI. Nowadays it is a Haus der Technik and part of the oldest German institute for technical further education, in close cooperation with RWTH Aachen and other universities. The Harbour Warehouse (German: Hafenlager) is located within the Lloyd Werft shipyard. Potsdam The Forschungsstelle Potsdam is situated on the Telegrafenberg next to Potsdam. It has belonged to AWI since 1992. The research focuses on atmospheric physics and atmospheric chemistry on the one hand and periglacial research on the other. Sylt The Wadden Sea Station Sylt is located on the North German island of Sylt. It was founded in 1924 as an oyster laboratory to study the decline of oyster stocks and how they could be cultivated. In 1937, the name changed from oyster laboratory to Wadden Sea station. The station grew, and in 1949 it was moved from the northernmost edge of the island to the current location, next to the harbour of List. In 1998 the station became part of AWI. Nowadays, there are about 30 scientists and technicians. Two guest houses make it possible to hold workshops, and video conferences with the AWI headquarters are possible. The research focuses on coastal ecology and coastal geology. In the 1930s there were oyster reefs below the mussel banks at the water level. Below these, there were sabellaria reefs, which have since been destroyed by fishery. Nowadays only the mussel banks are left. Helgoland The Biologische Anstalt Helgoland is situated on the island of Heligoland (German: Helgoland). The station has existed since 1892. Scientists study the ecology of the North Sea at this research station. Since 1962, at the Heligoland roadstead, phytoplankton and water samples have been taken every weekday morning, the turbidity is measured (e.g. using a Secchi disk) and other parameters are recorded. The North Sea has warmed by 1.65 °C since the start of the time series. Stations The institute maintains several research stations around the Arctic Ocean and on the Antarctic continent. Neumayer Station Neumayer Station III is located some distance away from the previous station, Neumayer II, which is now abandoned and covered by a thick ice cover. The new station is a futuristic-looking combined platform above the snow surface, offering space for research, operations, and living since 2009. The station stands on 16 hydraulic posts which are used to adjust the building to the growing snow cover. A balloon-launching hall is located on the station's roof. Below the station, PistenBully snow groomers, Ski-Doos, sledges, and other equipment are stored in a garage built beneath the snow, with a ramp whose lid seals the opening through which the vehicles enter. In summer, the station can host up to 40 people. The station contains several laboratories, has a weather balloon launching facility, and a hospital with telemedical equipment. The station has a stairwell and several utility and storage rooms in the garage. There is a snowmelt and power unit at the station. Dallmann Laboratory In cooperation with the Instituto Antártico Argentino (IAA), in 1994 the AWI opened a research station on King George Island. The station is named after Eduard Dallmann, a German whaler, trader and polar explorer who lived near Bremen. Koldewey Station Koldewey Station is named after the German polar explorer Carl Koldewey and is part of the French-German AWIPEV Arctic research base in Ny-Ålesund on Svalbard. 
Kohnen Station Kohnen Station was established in 2001 as a logistical base for ice core drilling in Dronning Maud Land, Antarctica. Samoylov Station Samoylov Station is a Russian research station that lies within the Lena Delta close to the Laptev Sea. The station was set up as a logistic base for joint Russian-German permafrost studies by the Lena Delta Reserve (LDR) and the AWI. Ships Altogether there are six ships that belong to AWI. RV Polarstern The AWI flagship is Germany's research icebreaker RV Polarstern. The ship was commissioned in 1982. The double-hulled icebreaker is operational down to temperatures as low as −50 °C (−58 °F). Polarstern can break through sea ice of 1.5 m thickness at a speed of 5 knots; thicker ice must be broken by ramming. In 2022 the German Bundestag approved a budget of 2 million euros for the contract award procedure for the construction of the new icebreaker Polarstern II. RV Heincke The vessel RV Heincke is a multifunctional and low-noise ship for research in ice-free waters, named after the German zoologist and ichthyologist Friedrich Heincke. With a length of 54.6 m, a width of 12.5 m and a draft of 4.16 m, the ship is categorized as a "medium research vessel" within the German research fleet. The ship was put into operation in 1990; its building costs were around 16 million euros. On the vessel, up to 12 scientists and 8 crew members can work for up to 30 passage days. This corresponds to an operating range of roughly 7500 nautical miles. The shipowner is Briese Schiffahrts GmbH & Co. KG from Leer, a city in East Frisia. RV Uthörn The research cutter RV Uthörn is named after the small island Uthörn next to Sylt in the North Sea. The vessel is regularly on research tours in the German Bight, but is also used to supply the AWI branch Biologische Anstalt Helgoland mentioned above. Two scientists and four crew members can live and work on board for up to 180 days, but the vessel is mainly used for day trips. Another purpose is short-term cruises of a few hours for up to 25 students to demonstrate oceanographic and biological sampling methods. Commissioned in 1982, RV Uthörn replaced a vessel of the same name which was built in 1947 and had a length of 24 m. The current vessel is powered by two V12 four-stroke diesel engines manufactured by the company MWM GmbH from Mannheim. Each engine delivers up to 231 kW to a controllable-pitch propeller; the maximal speed is around 10 kn. On the working deck, there is a dry lab and a laboratory for wet work like sorting fish. The ship is equipped with standard sampling devices: on board there are a demersal trawl, a Van Veen grab sampler, Niskin bottles, and even deprecated reversing thermometers for teaching purposes. Mya, Mya II, Aade and Diker The research catamaran Mya was specially designed for research in the intertidal zone; it could fall dry at low tide. In 2013 it was replaced by the conventional ship Mya II. The main research areas are the Wadden Sea and offshore wind farms. Last but not least, there are two small motor boats, Aade and Diker, for sampling and diving operations around Heligoland. 
Aircraft Past aircraft The Alfred Wegener Institute operated five airplanes under the name of Polar, those being: Polar 1, a Dornier 128 commissioned in 1983, now in possession of the TU Braunschweig Polar 2, a Dornier 228 commissioned in 1983, still in service with the AWI Polar 3, like Polar 2 commissioned in 1983, shot down possibly by an SA-2 Guideline missile on 24 February 1985 over Western Sahara Polar 4, a Dornier 228 commissioned in 1985, severely damaged in a landing at the British Rothera Research Station in 2005, now on display at the Institute Current fleet The home base of the AWI aircraft fleet is Bremen Airport. AWI uses two Basler BT-67s. These planes are 20 m long, 5.2 m high and have a wingspan of 29 m. The empty weight is 7680 kg; with ski landing gear it weighs 8340 kg. The minimal cruising speed is 156 km/h. Without payload, the flying range is around 3900 km. The planes are maintained by the company Kenn Borek Air, located in Calgary, Alberta, Canada. Polar 5 The plane hull was built in 1942 but was completely refurbished after the AWI acquired the plane in 2007. Since then it "has supplied a large volume of valuable data", said Prof. Heinrich Miller, the former director of the AWI. Polar 6 This plane, with the call sign C-GHGF, was acquired by AWI in 2011. The BMBF, the German Federal Ministry of Education and Research, funded the purchase and equipping of the plane with a total of 9.78 million euros. See also Open access in Germany Ocean Frontier Institute, an oceans research centre in Halifax, Canada References External links Official website Research institutes in Germany Environmental research institutes Earth science research institutes Organisations based in Bremerhaven Environmental organizations established in 1980 Research institutes established in 1980 1980 establishments in West Germany Antarctica research agencies Non-profit organisations based in Bremen (state)
Alfred Wegener Institute for Polar and Marine Research
Environmental_science
3,354
43,672,392
https://en.wikipedia.org/wiki/Networks%20and%20States
Networks and States: The Global Politics of Internet Governance is a 2010 book by Milton L. Mueller, professor at the Syracuse University School of Information Studies. The book examines the influence of networks on government and on the global politics of Internet governance. Synopsis Chapter I, Networks and Governance Chapter II, Transnational Institutions Chapter III, Drivers of Internet Governance Sources Internet governance Books about the Internet 2010 non-fiction books
Networks and States
Technology
75
3,405,004
https://en.wikipedia.org/wiki/Rng%20%28algebra%29
In mathematics, and more specifically in abstract algebra, a rng (or non-unital ring or pseudo-ring) is an algebraic structure satisfying the same properties as a ring, but without assuming the existence of a multiplicative identity. The term rng, pronounced like rung, is meant to suggest that it is a ring without i, that is, without the requirement for an identity element. There is no consensus in the community as to whether the existence of a multiplicative identity must be one of the ring axioms. The term rng was coined to alleviate this ambiguity when people want to refer explicitly to a ring without the axiom of multiplicative identity. A number of algebras of functions considered in analysis are not unital, for instance the algebra of functions decreasing to zero at infinity, especially those with compact support on some (non-compact) space. Definition Formally, a rng is a set R with two binary operations called addition and multiplication such that (R, +) is an abelian group, (R, ·) is a semigroup, and multiplication distributes over addition. A rng homomorphism is a function f from one rng to another such that f(x + y) = f(x) + f(y) and f(x · y) = f(x) · f(y) for all x and y in R. If R and S are rings, then a ring homomorphism is the same as a rng homomorphism that maps 1 to 1. Examples All rings are rngs. A simple example of a rng that is not a ring is given by the even integers with the ordinary addition and multiplication of integers. Another example is given by the set of all 3-by-3 real matrices whose bottom row is zero. Both of these examples are instances of the general fact that every (one- or two-sided) ideal is a rng. Rngs often appear naturally in functional analysis when linear operators on infinite-dimensional vector spaces are considered. Take for instance any infinite-dimensional vector space V and consider the set of all linear operators with finite rank (i.e. those whose image is finite-dimensional). Together with addition and composition of operators, this is a rng, but not a ring. Another example is the rng of all real sequences that converge to 0, with component-wise operations. Also, many test function spaces occurring in the theory of distributions consist of functions decreasing to zero at infinity, like e.g. Schwartz space. Thus, the function everywhere equal to one, which would be the only possible identity element for pointwise multiplication, cannot exist in such spaces, which therefore are rngs (for pointwise addition and multiplication). In particular, the real-valued continuous functions with compact support defined on some topological space, together with pointwise addition and multiplication, form a rng; this is not a ring unless the underlying space is compact. Example: even integers The set 2Z of even integers is closed under addition and multiplication and has an additive identity, 0, so it is a rng, but it does not have a multiplicative identity, so it is not a ring. In 2Z, the only multiplicative idempotent is 0, the only nilpotent is 0, and the only element with a reflexive inverse is 0. Example: finite quinary sequences The direct sum of countably many copies of Z/5Z (that is, the set of sequences over Z/5Z with only finitely many nonzero terms), equipped with coordinate-wise addition and multiplication, is a rng with the following properties: Its idempotent elements form a lattice with no upper bound. Every element x has a reflexive inverse, namely an element y such that xyx = x and yxy = y. 
For every finite subset of the rng, there exists an idempotent in the rng that acts as an identity for the entire subset: the sequence with a one at every position where a sequence in the subset has a non-zero element at that position, and zero in every other position. Properties Adjoining an identity element (Dorroh extension) Every rng R can be enlarged to a ring R^ by adjoining an identity element. A general way in which to do this is to formally add an identity element 1 and let R^ consist of integral linear combinations of 1 and elements of R, with the premise that none of its nonzero integral multiples coincide or are contained in R. That is, elements of R^ are of the form n · 1 + r, where n is an integer and r ∈ R. Multiplication is defined by linearity: (n1 · 1 + r1) · (n2 · 1 + r2) = n1n2 · 1 + (n1r2 + n2r1 + r1r2). More formally, we can take R^ to be the cartesian product Z × R and define addition and multiplication by (n1, r1) + (n2, r2) = (n1 + n2, r1 + r2) and (n1, r1) · (n2, r2) = (n1n2, n1r2 + n2r1 + r1r2). The multiplicative identity of R^ is then (1, 0). There is a natural rng homomorphism j : R → R^ defined by j(r) = (0, r). This map has the following universal property: given any ring S and any rng homomorphism f : R → S, there is a unique ring homomorphism g : R^ → S such that g ∘ j = f. The map g can be defined by g(n, r) = n · 1S + f(r). There is a natural surjective ring homomorphism R^ → Z which sends (n, r) to n. The kernel of this homomorphism is the image of R in R^. Since j is injective, we see that R is embedded as a (two-sided) ideal in R^ with the quotient ring R^/R isomorphic to Z. It follows that every rng is an ideal in some ring, and every ideal of a ring is a rng. Note that j is never surjective. So, even when R already has an identity element, the ring R^ will be a larger one with a different identity. The ring R^ is often called the Dorroh extension of R after the American mathematician Joe Lee Dorroh, who first constructed it. The process of adjoining an identity element to a rng can be formulated in the language of category theory. If we denote the category of all rings and ring homomorphisms by Ring and the category of all rngs and rng homomorphisms by Rng, then Ring is a (nonfull) subcategory of Rng. The construction of R^ given above yields a left adjoint to the inclusion functor I : Ring → Rng. Notice that Ring is not a reflective subcategory of Rng because the inclusion functor is not full. Properties weaker than having an identity There are several properties that have been considered in the literature that are weaker than having an identity element, but not so general. For example: Rings with enough idempotents: A rng R is said to be a ring with enough idempotents when there exists a subset E of R given by orthogonal (i.e. ef = 0 for all e ≠ f in E) idempotents (i.e. e2 = e for all e in E) such that R = ⊕e∈E eR = ⊕e∈E Re. Rings with local units: A rng R is said to be a ring with local units in case for every finite set r1, r2, ..., rt in R we can find an idempotent e in R such that eri = ri and rie = ri for every i. s-unital rings: A rng R is said to be s-unital in case for every finite set r1, r2, ..., rt in R we can find s in R such that sri = ri and ris = ri for every i. Firm rings: A rng R is said to be firm if the canonical homomorphism R ⊗R R → R given by r ⊗ s ↦ rs is an isomorphism. Idempotent rings: A rng R is said to be idempotent (or an irng) in case R2 = R, that is, for every element r of R we can find elements ri and si in R such that r = Σi risi. It is not difficult to check that each of these properties is weaker than having an identity element and weaker than the property preceding it. Rings are rings with enough idempotents, using E = {1}. A ring with enough idempotents that has no identity is for example the ring of infinite matrices over a field with just a finite number of nonzero entries. Those matrices with a 1 in precisely one entry of the main diagonal and 0's in all other entries are the orthogonal idempotents. 
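A minimal computational sketch of the Dorroh extension described above is given below, using the even integers 2Z as the rng being enlarged; the class name and the printed checks are illustrative choices, not part of any standard library.

```python
# Sketch of the Dorroh extension R^ = Z x R for the rng R = 2Z (even integers).
# Elements are pairs (n, r); the pair (1, 0) is the adjoined multiplicative identity.

class Dorroh:
    def __init__(self, n, r):
        assert r % 2 == 0, "second component must lie in the rng 2Z"
        self.n, self.r = n, r

    def __add__(self, other):
        return Dorroh(self.n + other.n, self.r + other.r)

    def __mul__(self, other):
        # (n1, r1) * (n2, r2) = (n1*n2, n1*r2 + n2*r1 + r1*r2)
        return Dorroh(self.n * other.n,
                      self.n * other.r + other.n * self.r + self.r * other.r)

    def __repr__(self):
        return f"({self.n}, {self.r})"

one = Dorroh(1, 0)       # the adjoined identity
x = Dorroh(0, 4)         # j(4), the image of 4 under j(r) = (0, r)
y = Dorroh(0, 6)         # j(6)

print(one * x, x * one)  # both (0, 4): (1, 0) acts as a two-sided identity
print(x * y)             # (0, 24), matching the product 4 * 6 inside 2Z
```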
Rings with enough idempotents are rings with local units as can be seen by taking finite sums of the orthogonal idempotents to satisfy the definition. Rings with local units are in particular s-unital; s-unital rings are firm and firm rings are idempotent. Rng of square zero A rng of square zero is a rng R such that xy = 0 for all x and y in R. Any abelian group can be made a rng of square zero by defining the multiplication so that xy = 0 for all x and y; thus every abelian group is the additive group of some rng. The only rng of square zero with a multiplicative identity is the zero ring {0}. Any additive subgroup of a rng of square zero is an ideal. Thus a rng of square zero is simple if and only if its additive group is a simple abelian group, i.e., a cyclic group of prime order. Unital homomorphism Given two unital algebras A and B, an algebra homomorphism f : A → B is unital if it maps the identity element of A to the identity element of B. If the associative algebra A over the field K is not unital, one can adjoin an identity element as follows: take the direct sum A ⊕ K as underlying K-vector space and define multiplication ∗ by (x, r) ∗ (y, s) = (xy + sx + ry, rs) for x, y in A and r, s in K. Then ∗ is an associative operation with identity element (0, 1). The old algebra A is contained in the new one, and in fact A ⊕ K is the "most general" unital algebra containing A, in the sense of universal constructions. See also Semiring Citations References Ring theory Algebraic structures Algebras
Rng (algebra)
Mathematics
1,948
14,445,752
https://en.wikipedia.org/wiki/GPR119
G protein-coupled receptor 119 also known as GPR119 is a G protein-coupled receptor that in humans is encoded by the GPR119 gene. GPR119, along with GPR55 and GPR18, has been implicated as a novel cannabinoid receptor. Pharmacology GPR119 is expressed predominantly in the pancreas and gastrointestinal tract in rodents and humans, as well as in the brain in rodents. Activation of the receptor has been shown to cause a reduction in food intake and body weight gain in rats. GPR119 has also been shown to regulate incretin and insulin hormone secretion. As a result, new drugs acting on the receptor have been suggested as novel treatments for obesity and diabetes. Ligands A number of endogenous, synthetic and plant-derived ligands for this receptor have been identified: 2-Oleoylglycerol (2OG) Anandamide AR-231,453 MBX-2982 Oleoylethanolamide (OEA) (Endogenous Ligand) PSN-375,963 PSN-632,408 Human microbiota and GPR119 activation Commensal bacteria are found to have important roles in human health, as bacterial metabolites are likely to be key components of host interactions through which they affect mammalian physiology. N-acyl amide synthase genes are enriched in gastrointestinal bacteria, and the lipids that they encode interact with GPCRs that regulate gastrointestinal tract physiology. Cell-based models have demonstrated that commensal GPR119 agonists regulate metabolic hormones and glucose homeostasis as efficiently as human ligands, and the clearest overlap in structure and function between bacterial and human GPCR-active ligands is found for the endocannabinoid receptor GPR119. Experiments have isolated both the palmitoyl and oleoyl analogs of N-acyl serinol and found that the latter differs from 2-OG (C21H40O4) only by the presence of an amide instead of an ester, and from OEA (C20H39NO2) by the presence of an additional ethanol substituent. N-Oleoyl serinol (C21H41NO3; 18:1,n-9) is a similarly potent GPR119 agonist compared to the endogenous ligand OEA (EC50 12 μM vs. 7 μM) but elicits almost a 2-fold greater maximum activation. These findings suggest that chemical mimicry of eukaryotic signalling molecules may be common among commensal bacteria, which communicate through interactions between these two fundamental systems, together forming the gut microbiota-endocannabinoidome axis. Evolution Paralogues Source: GPR6 MC5R MC3R MC4R CNR1 GPR12 S1PR1 MC1R S1PR3 S1PR5 GPR3 S1PR2 CNR2 LPAR3 LPAR1 LPAR2 MC2R S1PR4 References Further reading G protein-coupled receptors
GPR119
Chemistry
664
45,485,130
https://en.wikipedia.org/wiki/Deadjectival%20verb
A deadjectival verb is a type of verb derived from an adjective. In English, the verb may be created by adding a suffix to the adjective, e.g., intense (A) + -ify (verbalizer) → intensify, or by adding a prefix, e.g., en- + large → enlarge. References Parts of speech Verb types
Deadjectival verb
Technology
74
48,314,578
https://en.wikipedia.org/wiki/VFTS%20352
VFTS 352 is a contact binary star system located in the Tarantula Nebula, which is part of the Large Magellanic Cloud. It is the most massive and earliest spectral type overcontact system known. The discovery of this O-type binary star system made use of the European Southern Observatory's Very Large Telescope, and the description was published on 13 October 2015. VFTS 352 is composed of two very hot (40,000 °C), bright and massive stars of equal size that orbit each other in little more than a day. The stars are so close that their atmospheres overlap. Both stars are rotating at a rate equal to their orbital period; that is, they are tidally locked. Extreme stars like the two components of VFTS 352 are thought to be the main producers of elements such as oxygen. The future of VFTS 352 is uncertain, and there are two possible scenarios. If the two stars merge, a very rapidly rotating star will be produced. If it keeps spinning rapidly it might end its life in a long-duration gamma-ray burst. In a second hypothetical scenario, the components would end their lives in supernova explosions, forming a close binary black hole system, hence a potential gravitational wave source through black hole–black hole merger. See also Contact binary (small Solar System body), two asteroids gravitating toward each other until they touch References Stars in the Large Magellanic Cloud Binary stars O-type main-sequence stars Tarantula Nebula Extragalactic stars J05382845-6911191 Dorado Emission-line stars
VFTS 352
Astronomy
329
45,235,736
https://en.wikipedia.org/wiki/Zoxazolamine
Zoxazolamine (INN, USAN, BAN) (brand name Contrazole, Deflexol, Flexin, Miazol, Uri-Boi, Zoxamine, Zoxine) is a muscle relaxant that is no longer marketed. It was synthesized in 1953 and introduced clinically in 1955 but was withdrawn due to hepatotoxicity. One of its active metabolites, chlorzoxazone, was found to show less toxicity, and was subsequently marketed in place of zoxazolamine. These drugs activate IKCa channels. References Amines Benzoxazoles Hepatotoxins Muscle relaxants Organochlorides Withdrawn drugs
Zoxazolamine
Chemistry
145
63,577
https://en.wikipedia.org/wiki/Cashew
Cashew is the common name of a tropical evergreen tree Anacardium occidentale, in the family Anacardiaceae. It is native to South America and is the source of the cashew nut and the cashew apple, an accessory fruit. The tree can grow as tall as , but the dwarf cultivars, growing up to , prove more profitable, with earlier maturity and greater yields. The cashew nut is edible and is eaten on its own as a snack, used in recipes, or processed into cashew cheese or cashew butter. The nut is often simply called a 'cashew'. In 2019, four million tonnes of cashew nuts were produced globally, with Ivory Coast and India the leading producers. As well as the nut and fruit, the plant has several other uses. The shell of the cashew seed yields derivatives that can be used in many applications including lubricants, waterproofing, paints, and, starting in World War II, arms production. The cashew apple is a light reddish to yellow fruit, whose pulp and juice can be processed into a sweet, astringent fruit drink or fermented and distilled into liquor. Description The cashew tree is large and evergreen, growing to tall, with a short, often irregularly shaped trunk. The leaves are spirally arranged, leathery textured, elliptic to obovate, long and broad, with smooth margins. The flowers are produced in a panicle or corymb up to long; each flower is small, pale green at first, then turning reddish, with five slender, acute petals long. The largest cashew tree in the world covers an area around and is located in Natal, Brazil. The fruit of the cashew tree is an accessory fruit (sometimes called a pseudocarp or false fruit). What appears to be the fruit is an oval or pear-shaped structure, a hypocarpium, that develops from the pedicel and the receptacle of the cashew flower. Called the cashew apple, better known in Central America as , it ripens into a yellow or red structure about long. The true fruit of the cashew tree is a kidney-shaped or boxing glove-shaped drupe that grows at the end of the cashew apple. The drupe first develops on the tree and then the pedicel expands to become the cashew apple. The drupe becomes the true fruit, a single shell-encased seed, which is often considered a nut in the culinary sense. The seed is surrounded by a double shell that contains an allergenic phenolic resin, anacardic acid—which is a potent skin irritant chemically related to the better-known and also toxic allergenic oil urushiol, which is found in the related poison ivy and lacquer tree. Etymology The English name derives from the Portuguese name for the fruit of the cashew tree: (), also known as , which itself is from the Tupi word , literally meaning "nut that produces itself". The generic name Anacardium is composed of the Greek prefix ana- (), the Greek cardia (), and the Neo-Latin suffix . It possibly refers to the heart shape of the fruit, to "the top of the fruit stem" or to the seed. The word anacardium was earlier used to refer to Semecarpus anacardium (the marking nut tree) before Carl Linnaeus transferred it to the cashew; both plants are in the same family. The epithet occidentale derives from the Western (or Occidental) world. The plant has diverse common names in various languages among its wide distribution range, including (French) with the fruit referred to as , (), or (Portuguese). Distribution and habitat The species is native to tropical South America and later was distributed around the world in the 1500s by Portuguese explorers. 
Portuguese colonists in Brazil began exporting cashew nuts as early as the 1550s. The Portuguese took it to Goa, formerly Estado da Índia Portuguesa in India, between 1560 and 1565. From there, it spread throughout Southeast Asia and eventually Africa. Cultivation The cashew tree is cultivated in the tropics between 25°N and 25°S, and is well-adapted to hot lowland areas with a pronounced dry season, where the mango and tamarind trees also thrive. The traditional cashew tree is tall (up to ) and takes three years from planting before it starts production, and eight years before economic harvests can begin. More recent breeds, such as the dwarf cashew trees, are up to tall and start producing after the first year, with economic yields after three years. The cashew nut yields for the traditional tree are about , in contrast to over a ton per hectare for the dwarf variety. Grafting and other modern tree management technologies are used to further improve and sustain cashew nut yields in commercial orchards. Production In 2021, global production of cashew nuts (as the kernel) was 3.7 million tonnes, led by Ivory Coast and India with a combined 43% of the world total (table). Trade The top ten exporters of cashew nuts (in-shell; HS Code 080131) in value (USD) in 2021 were Ghana, Tanzania, Guinea-Bissau, Nigeria, Ivory Coast, Burkina Faso, Senegal, Indonesia, United Arab Emirates (UAE), and Guinea. From 2017 to 2021, the top ten exporters of cashew nuts (shelled; HS Code 080132) were Vietnam, India, the Netherlands, Germany, Brazil, Ivory Coast, Nigeria, Indonesia, Burkina Faso, and the United States. In 2014, the rapid growth of cashew cultivation in the Ivory Coast made this country the top African exporter. Fluctuations in world market prices, poor working conditions, and low pay for local harvesting have caused discontent in the cashew nut industry. Almost all cashews produced in Africa between 2000 and 2019 were exported as raw nuts which are much less profitable than shelled nuts. One of the goals of the African Cashew Alliance is to promote Africa's cashew processing capabilities to improve the profitability of Africa's cashew industry. In 2011, Human Rights Watch reported that forced labour was used for cashew processing in Vietnam. Around 40,000 current or former drug users were forced to remove shells from "blood cashews" or perform other work and often beaten at more than 100 rehabilitation centers. Toxicity Some people are allergic to cashews, but they are a less frequent allergen than other tree nuts or peanuts. For up to 6% of children and 3% of adults, consuming cashews may cause allergic reactions, ranging from mild discomfort to life-threatening anaphylaxis. These allergies are triggered by the proteins found in tree nuts, and cooking often does not remove or change these proteins. Reactions to cashew and tree nuts can also occur as a consequence of hidden nut ingredients or traces of nuts that may inadvertently be introduced during food processing, handling, or manufacturing. The shell of the cashew nut contains oil compounds that can cause contact dermatitis similar to poison ivy, primarily resulting from the phenolic lipids, anacardic acid, and cardanol. Because it can cause dermatitis, cashews are typically not sold in the shell to consumers. Readily and inexpensively extracted from the waste shells, cardanol is under research for its potential applications in nanomaterials and biotechnology. 
Uses Nutrition Raw cashew nuts are 5% water, 30% carbohydrates, 44% fat, and 18% protein (table). In a 100-gram reference amount, raw cashews provide 553 kilocalories, 67% of the Daily Value (DV) in total fats, 36% DV of protein, 13% DV of dietary fiber and 11% DV of carbohydrates. Cashew nuts are rich sources (20% or more of the DV) of dietary minerals, including particularly copper, manganese, phosphorus, and magnesium (79–110% DV), and of thiamin, vitamin B6 and vitamin K (32–37% DV). Iron, potassium, zinc, and selenium are present in significant content (14–61% DV) (table). Cashews (100 g, raw) contain of beta-sitosterol. Nut and shell Culinary uses for cashew seeds in snacking and cooking are similar to those for all tree seeds called nuts. Cashews are commonly used in South Asian cuisine, whole for garnishing sweets or curries, or ground into a paste that forms a base of sauces for curries (e.g., korma), or some sweets (e.g., kaju barfi). It is also used in powdered form in the preparation of several Indian sweets and desserts. In Goan cuisine, both roasted and raw kernels of Goa Kaju are used whole for making curries and sweets. Cashews are also used in Thai and Chinese cuisines, generally in whole form. In the Philippines, cashew is a known product of Antipolo and is eaten with suman. The province of Pampanga also has a sweet dessert called turrones de casuy, which is cashew marzipan wrapped in white wafers. In Indonesia, roasted and salted cashews are called kacang mete or kacang mede, while the cashew apple is called jambu monyet ( 'monkey rose apple'). In the 21st century, cashew cultivation increased in several African countries to meet the manufacturing demands for cashew milk, a plant milk alternative to dairy milk. In Mozambique, bolo polana is a cake prepared using powdered cashews and mashed potatoes as the main ingredients. This dessert is common in South Africa. Husk The cashew nut kernel has a slight curvature and two cotyledons, each representing around 20–25% of the weight of the nut. It is encased in a reddish-brown membrane called a husk, which accounts for approximately 5% of the total nut. Cashew nut husk is used in emerging industrial applications, such as an adsorbent, composites, biopolymers, dyes and enzyme synthesis. Apple The mature cashew apple can be eaten fresh, cooked in curries, or fermented into vinegar, citric acid or an alcoholic drink. It is also used to make preserves, chutneys, and jams in some countries, such as India and Brazil. In many countries, particularly in South America, the cashew apple is used to flavor drinks, both alcoholic and nonalcoholic. In Brazil, cashew fruit juice and fruit pulp are used in the production of sweets, and juice mixed with alcoholic beverages such as cachaça, and as flour, milk, or cheese. In Panama, the cashew fruit is cooked with water and sugar for a prolonged time to make a sweet, brown, paste-like dessert called ( being a Spanish name for cashew). Cashew nuts are more widely traded than cashew apples, because the fruit, unlike the nut, is easily bruised and has a very limited shelf life. Cashew apple juice, however, may be used for manufacturing blended juices. When the apple is consumed, its astringency is sometimes removed by steaming the fruit for five minutes before washing it in cold water. Steeping the fruit in boiling salt water for five minutes also reduces the astringency. 
In Cambodia, where the plant is usually grown as an ornamental rather than an economic tree, the fruit is a delicacy and is eaten with salt. Alcohol In the Indian state of Goa, the ripened cashew apples are mashed, and the juice, called "neero", is extracted and kept for fermentation for a few days. This fermented juice then undergoes a double distillation process. The resulting beverage is called feni or fenny. Feni is about 40–42% alcohol (80–84 proof). The single-distilled version is called urrak, which is about 15% alcohol (30 proof). In Tanzania, the cashew apple (bibo in Swahili) is dried and reconstituted with water and fermented, then distilled to make a strong liquor called gongo. Nut oil Cashew nut oil is a dark yellow oil derived from pressing the cashew nuts (typically from lower-value broken chunks created accidentally during processing) and is used for cooking or as a salad dressing. The highest quality oil is produced from a single cold pressing. Shell oil Cashew nutshell liquid (CNSL) or cashew shell oil (CAS registry number 8007-24-7) is a natural resin with a yellowish sheen found in the honeycomb structure of the cashew nutshell, and is a byproduct of processing cashew nuts. As it is a strong irritant, it should not be confused with edible cashew nut oil. It is dangerous to handle in small-scale processing of the shells, but is itself a raw material with multiple uses. It is used in tropical folk medicine and for anti-termite treatment of timber. Its composition varies depending on how it is processed. Cold, solvent-extracted CNSL is mostly composed of anacardic acids (70%), cardol (18%) and cardanol (5%). Heating CNSL decarboxylates the anacardic acids, producing a technical grade of CNSL that is rich in cardanol. Distillation of this material gives distilled, technical CNSL containing 78% cardanol and 8% cardol (cardol has one more hydroxyl group than cardanol). This process also reduces the degree of thermal polymerization of the unsaturated alkyl-phenols present in CNSL. Anacardic acid is also used in the chemical industry for the production of cardanol, which is used for resins, coatings, and frictional materials. These substances are skin allergens, like lacquer and the oils of poison ivy, and they present a danger during manual cashew processing. This natural oil phenol has interesting chemical structural features that can be modified to create a wide spectrum of biobased monomers. These capitalize on the chemically versatile construct, which contains three functional groups: the aromatic ring, the hydroxyl group, and the double bonds in the flanking alkyl chain. These include polyols, which have recently seen increased demand for their biobased origin and key chemical attributes such as high reactivity, range of functionalities, reduction in blowing agents, and naturally occurring fire retardant properties in the field of rigid polyurethanes, aided by their inherent phenolic structure and larger number of reactive units per unit mass. CNSL may be used as a resin for carbon composite products. CNSL-based novolac is another versatile industrial monomer deriving from cardanol typically used as a reticulating agent (hardener) for epoxy matrices in composite applications providing good thermal and mechanical properties to the final composite material. Animal feed Discarded cashew nuts unfit for human consumption, alongside the residues of oil extraction from cashew kernels, can be fed to livestock. Animals can also eat the leaves of cashew trees. 
Other uses As well as the nut and fruit, the plant has several other uses. In Cambodia, the bark gives a yellow dye, the timber is used in boat-making, and for house-boards, and the wood makes excellent charcoal. The shells yield a black oil used as a preservative and water-proofing agent in varnishes, cement, and as a lubricant or timber seal. Timber is used to manufacture furniture, boats, packing crates, and charcoal. Its juice turns black on exposure to air, providing an indelible ink. See also List of culinary nuts Semecarpus anacardium (the Oriental Anacardium), a native of India and closely related to the cashew References Anacardium Crops originating from South America Drupes Edible nuts and seeds Flora of Southern America Fruit trees Medicinal plants of South America Nut oils Plants described in 1753 Resins Tropical agriculture
Cashew
Physics
3,404
39,840,659
https://en.wikipedia.org/wiki/ZSpace%20%28company%29
zSpace, Inc. is an American technology firm based in San Jose, California that combines elements of virtual and augmented reality in a computer. zSpace mostly provides AR/VR technology to the education market. It allows teachers and learners to interact with simulated objects in virtual environments. zSpace does not require the use of a head-mounted display. Users experience 3D content through a 3D computer screen, aided by head-tracking technology and a stylus. The hardware switches between the left and right images through a circularly polarized light that enters the eye. In some models, eyewear contains small reflective tabs that the computer uses to track where users are looking. Other models are equipped with head tracking technology and do not require any glasses or eyewear. Paul Kellenberger is the company's current CEO and president. History zSpace was founded as Infinite Z in 2007. Infinite Z's virtual-holographic platform was created with backing from the Central Intelligence Agency's In-Q-Tel fund, which invests in technology startups. Infinite Z formally changed its name to zSpace in 2013. In 2014, zSpace collaborated with NASA to be tested as an interface technology for future robots, using the program to interact with simulated objects in virtual environments using imaging displays. In November 2012, zSpace released an independent software development kit. In the same year, zSpace collaborated with researchers at the University of Tokyo to develop a high-speed gesture tracking system. The technology is used in hospitals by surgeons before procedures. Although the initial target markets for zSpace were enterprise-based, company employees and customers began to recognize the potential for zSpace in education, including K-12, higher education and career and technical education (CTE). In September 2015, zSpace announced a partnership with Leopoly, a 3D content provider and modelling platform, to create an application that enabled users to create and customize digital objects for 3D printing. That same year, the company released an updated version of its desktop all-in-one system, zSpace for Education. The new platform allowed users to manipulate an array of virtual, 3D objects including building circuitry and experimenting with gravity. The release included approximately 250 STEAM (science, technology, art and math) lesson plans aligned to the Common Core, Next Generation Science Standards (NGSS) and other state standards for K-12 education. In January 2016, zSpace released a VR web browser it developed in partnership with Google Chrome's WebGL team. zSpace and GeoGebra announced the release of VR Math in February 2016 with subjects like geometry, algebra, spreadsheets, graphing, statistics and calculus. At the ISTE conference in June 2016, zSpace announced Human Anatomy Atlas content in partnership with Visible Body. The company also announced that it had partnered with Google to combine zSpace's VR technology with the Google Expeditions Pioneer Program. Beginning that same year, zSpace demonstrated its technology to schools across the country via its Mobile Classroom Tour. The tour allows K-12 students around the country to engage with the company's STEAM applications in a lab setting and experience a variety of different simulations. The company partnered with Shenzhen GTA Education Tech Ltd., and Mimbus in 2017 on automotive training and welding applications. 
In 2018, zSpace announced integration with Autodesk Tinkercad as well as a partnership with Merriam-Webster's online dictionary. At ISTE 2018, zSpace released its first Windows 10 laptop for schools. In 2019, zSpace added Career and Technical Education (CTE) to its roster of applications, which prepares students for certifications and supplementary trainings through VR learning. In 2022, zSpace announced a merger with EdtechX Holdings Acquisition Corporation II (Nasdaq: EDTXU, EDTX, and EDTXW) (“EdtechX II”), an edtech-focused SPAC. As a result of the merger, the combined company is expected to be named zSpace Technologies, Inc. and listed on the Nasdaq Stock Market under the new ticker symbol ZSPX. In December 2024, the company went public via an initial public offering on the Nasdaq. Product zSpace Inspire and Inspire Pro were launched in January 2022 which allows users to experience AR/VR without a head-mounted display (HMD) or glasses. The system includes integrated face-tracking technology, a haptic-feedback stylus, and a stylus sensor module, which tracks the position of the stylus to create the AR/VR experience. It runs on a Windows 11 Operating System with a 15.6-inch, Ultra HD 3840 x 2160 pixel display, a NVIDIA GeForce RTX graphics card, and face-tracking cameras. The AIO (All-in-One) and AIO Pro, launched in 2015, is geared towards users running performance-heavy applications, such as software developers, designers and CTE students and professionals. The AIO includes a haptic-feedback stylus and eyewear which is tracked by technology built into the display. The system has a 24-inch display, runs on Windows 10 and has an Intel i3 or i7 processor. The zSpace Laptop uses the same basic technology of the AIO products, requiring specialized eyewear and the haptic-feedback stylus. The processor is an AMD 7th generation APU that combines the CPU cache and discrete class Radeon GPU on the same chip die. Recognition The company's technology has been awarded Tech & Learning's "Best in Show" at the ISTE conference from 2015 to 2019 and the magazine's "Award in Excellence" in 2016, 2017 and 2018. In 2016, zSpace was named one of the 5 most innovative and fast-growing companies in America by Inc. magazine while also ranking 143 on the Inc. 5000 list and second in the Silicon Valley Business Journal. In 2017, zSpace ranked #305 on Inc.'s 5000 lists. As of 2019, zSpace operates in more than 1,500 school districts, community colleges and universities to over 1 million students. zSpace was recognized on the 2019 Fast Company list for the "World's Most Innovative Companies" in Education. The zSpace Laptop won the 2019 Edison Award in Edutech and the Cool Tool Award for Best VR/AR Solution by EdTech. It was named one of three Grand Prize Winners for Tech & Learning's "Awards of Excellence" for 2019. zSpace was recognized as one of the "100 Best Inventions of 2019" by Time. See also References Technology companies established in 2007 Companies based in San Jose, California Display technology companies Virtual reality companies Technology companies of the United States American companies established in 2007 2007 establishments in California Computer companies of the United States Computer hardware companies Computer systems companies Companies listed on the Nasdaq 2024 initial public offerings
ZSpace (company)
Technology
1,404
813,086
https://en.wikipedia.org/wiki/Rotational%20frequency
Rotational frequency, also known as rotational speed or rate of rotation (symbols ν, lowercase Greek nu, and also n), is the frequency of rotation of an object around an axis. Its SI unit is the reciprocal second (s−1); other common units of measurement include the hertz (Hz), cycles per second (cps), and revolutions per minute (rpm). Rotational frequency can be obtained by dividing the angular frequency, ω, by a full turn (2π radians): ν = ω/(2π rad). It can also be formulated as the instantaneous rate of change of the number of rotations, N, with respect to time, t: n = dN/dt (as per the International System of Quantities). Similar to ordinary period, the reciprocal of rotational frequency is the rotation period or period of rotation, T = ν−1 = n−1, with dimension of time (SI unit seconds). Rotational velocity is the vector quantity whose magnitude equals the scalar rotational speed. In the special cases of spin (around an axis internal to the body) and revolution (external axis), the rotation speed may be called spin speed and revolution speed, respectively. Rotational acceleration is the rate of change of rotational velocity; it has dimension of squared reciprocal time and SI units of squared reciprocal seconds (s−2); thus, it is a normalized version of angular acceleration and it is analogous to chirpyness. Related quantities Tangential speed v (Latin letter v), rotational frequency ν, and radial distance r are related by the following equation: v = 2πrν. An algebraic rearrangement of this equation allows us to solve for rotational frequency: ν = v/(2πr). Thus, the tangential speed will be directly proportional to r when all parts of a system simultaneously have the same ν, as for a wheel, disk, or rigid wand. The direct proportionality of v to r is not valid for the planets, because the planets have different rotational frequencies. Regression analysis Rotational frequency can measure, for example, how fast a motor is running. Rotational speed is sometimes used to mean angular frequency rather than the quantity defined in this article. Angular frequency gives the change in angle per time unit, which is given with the unit radian per second in the SI system. Since 2π radians or 360 degrees correspond to a cycle, we can convert angular frequency to rotational frequency by ν = ω/(2π), where ν is the rotational frequency, with unit cycles per second, and ω is the angular frequency, with unit radian per second or degree per second. For example, a stepper motor might turn exactly one complete revolution each second. Its angular frequency is 360 degrees per second (360°/s), or 2π radians per second (2π rad/s), while the rotational frequency is 60 rpm. Rotational frequency is not to be confused with tangential speed, despite some relation between the two concepts. Imagine a merry-go-round with a constant rate of rotation. No matter how close to or far from the axis of rotation you stand, your rotational frequency will remain constant. However, your tangential speed does not remain constant. If you stand two meters from the axis of rotation, your tangential speed will be double what it would be if you were standing only one meter from the axis of rotation. See also Angular velocity Radial velocity Rotation period Rotational spectrum Tachometer Notes References Kinematic properties Temporal rates Rotation
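The conversions above are simple enough to check numerically. The following is a minimal Python sketch, not part of the article, that encodes ν = ω/(2π), the rpm conversion, and the tangential-speed relation v = 2πrν; the function names are illustrative, and it reproduces the stepper-motor and merry-go-round examples from the text.

import math

def rotational_frequency_from_angular(omega_rad_per_s: float) -> float:
    """nu = omega / (2*pi): convert angular frequency (rad/s) to rotational frequency (Hz)."""
    return omega_rad_per_s / (2 * math.pi)

def rpm_from_hz(nu_hz: float) -> float:
    """Revolutions per minute from revolutions per second."""
    return nu_hz * 60.0

def tangential_speed(nu_hz: float, radius_m: float) -> float:
    """v = 2*pi*r*nu: tangential speed of a point at distance r from the rotation axis."""
    return 2 * math.pi * radius_m * nu_hz

if __name__ == "__main__":
    omega = 2 * math.pi  # stepper-motor example: 2*pi rad/s, i.e. 360 degrees per second
    nu = rotational_frequency_from_angular(omega)
    print(f"nu = {nu:.3f} Hz = {rpm_from_hz(nu):.0f} rpm")  # 1.000 Hz = 60 rpm
    # Merry-go-round example: same rotational frequency, tangential speed doubles with radius.
    print(tangential_speed(nu, 1.0), tangential_speed(nu, 2.0))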
Rotational frequency
Physics,Mathematics
674
2,956,372
https://en.wikipedia.org/wiki/Foot%20per%20second
The foot per second (plural feet per second) is a unit of both speed (scalar) and velocity (vector quantity, which includes direction). It expresses the distance in feet (ft) traveled or displaced, divided by the time in seconds (s). The corresponding unit in the International System of Units (SI) is the meter per second. Abbreviations include ft/s, fps, and the scientific notation ft s−1. Conversions See also Foot per second squared, a corresponding unit of acceleration. Feet per minute References Units of velocity Customary units of measurement in the United States
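The Conversions table did not survive extraction here, so the following minimal Python sketch (not part of the article) shows the factors implied by the definition 1 ft = 0.3048 m exactly; the constant and function names are illustrative.

# Conversion sketch for foot per second, using the exact definition 1 ft = 0.3048 m.
FT_PER_S_TO_M_PER_S = 0.3048          # exact by definition
FT_PER_S_TO_KM_PER_H = 0.3048 * 3.6   # = 1.09728, exact
FT_PER_S_TO_MPH = 3600.0 / 5280.0     # = 0.681818..., exact as a ratio

def fps_to_si(speed_fps: float) -> float:
    """Convert a speed in feet per second to metres per second."""
    return speed_fps * FT_PER_S_TO_M_PER_S

if __name__ == "__main__":
    print(fps_to_si(1.0))                   # 0.3048 m/s
    print(1.0 * FT_PER_S_TO_MPH)            # ~0.6818 mph
    print(100.0 * FT_PER_S_TO_KM_PER_H)     # 109.728 km/h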
Foot per second
Mathematics
121
46,921,420
https://en.wikipedia.org/wiki/Grand%20Maket%20Rossiya
Grand Maket Rossiya () is a private museum in Saint Petersburg, Russia. It is a model layout designed on a scale of 1:87 (HO scale) and covers an area of . In this area, collective images of regions of the Russian Federation are represented. It is the largest model layout in Russia and the second largest in the world (after the Miniatur Wunderland in Hamburg, Germany). The model is located in a two-story building built in 1953 in the Stalinist Empire style. The creator of the project is the Saint Petersburg businessman Sergey Morozov. Layout The model took five years to build and employed over one hundred people. First, a wooden frame to be used under the model was made. Then the foundations for the roads and railroads were made. Later, the model wooden ribs were installed and a layer of plaster (11 tons were used) was applied. Museum opening The first visitors were allowed in the museum in April 2011. For the next 14 months, the museum worked in test mode, only taking visitors at the weekend. The official opening took place on 8 June 2012. View The model presents an image of everyday life in Russia, realized through models. These everyday situations represent different human activities, such as work, leisure, sports, studying, military service, country life, travel, mass celebrations and even an attempt to escape from prison. Ground transportation is represented by various kinds of cars and trucks, trams, buses, trains, and agricultural, construction and military equipment. Visitors have an opportunity to set things in motion on the model by pushing interactive buttons placed around the layout. Technical solutions The day/night system Every 13 minutes the lighting of the model is changed as day turns gradually into night. The night lighting lasts for 2 minutes. More than 800 000 LED lights in different colours were used to illuminate the model without creating shadows. Movement of road vehicles The movement of the cars in the model is realistic. The cars and buses stop at traffic lights, bus stops and flashing signals, and they change speeds and bypass each other. The electrical energy for the automobiles is obtained remotely from underneath the model, so the cars themselves don't appear to have a power supply. This was the first use of this method for moving model cars. Railway traffic To optimize the traffic on the layout, it was built on multiple levels using more than of rails and 452 switch boxes. The total number of rolling stock is more than 2700, of which 250 are locomotives and 10 are special cleaning trains. The variety of the moving trains is created using two revolver exchangers that store up to 60 trains and dispatch them when needed. On the layout there are also two turntables which can turn the locomotives up to 180 degrees. The largest height difference in the model is . This is achieved with over of spiral lifts. Awards In 2013, Sergey Morozov won the 'sobaka.ru' award as one of the top 50 most famous people in Saint Petersburg, in the science field. In 2014, Grand Maket Russia entered the list of the top 10 museums in Russia, as ranked by users of the tourist site TripAdvisor. In 2014, Sergey Morozov won the 'Boss of the Year' business award in the 'Breakthrough Boss of the Year' category. In 2014, Grand Maket Russia entered the top 10 most-photographed places in Saint Petersburg. References External links Model railway shows and exhibitions Museums in Saint Petersburg Scale modeling Railway museums in Russia 2012 establishments in Russia Museums established in 2012
Grand Maket Rossiya
Physics
715
22,945,040
https://en.wikipedia.org/wiki/Society%20of%20Toxicology
The Society of Toxicology (SOT) is a learned society (professional association) based in the United States that supports scientific inquiry in the field of toxicology. Goals The SOT is committed to creating a safer and healthier world by advancing the science of toxicology. The Society promotes the acquisition and utilization of knowledge in toxicology, aids in the protection of public health, and facilitates disciplines. SOT's definition of toxicology is 'the study of the adverse effects of chemical, physical or biological agents on living organisms and the ecosystem, including the prevention and amelioration of such adverse effects.' The society organizes an annual meeting (usually in the early spring) and several smaller colloquia via its special interest sections and groups. It publishes the journal Toxicological Sciences, as well as public position papers and guidelines on conflicts of interest in toxicology. Membership Full membership of the society is restricted to people with significant published work and/or professional experience in toxicology and members are bound by a Code of Ethics. There are also several categories of associate and student membership for people who do not fulfill the professional requirements for full membership. The Society has more than 8,000 members from 70 countries. Leadership SOT is run by a team of full-time board members, called councilors. The councilors are selected by other full SOT members and other SOT members who manage the affairs of SOT. The elections are made by ballot. References External links Official website 1961 establishments in the United States Scientific societies based in the United States Toxicology organizations Organizations established in 1961
Society of Toxicology
Environmental_science
320
54,268,322
https://en.wikipedia.org/wiki/Cloud%20tree
A cloud tree is a tree shaped using topiary techniques. The leaves are pruned into a ball or cloud shape, leaving the stems thin and exposed. The shape of the tree as a whole resembles a set of clouds. Cloud trees differ from bonsai trees because they are not miniature. Typically, cloud trees are planted in plain soil, rather than in pots. Similarly to bonsai, the practice of shaping cloud trees comes from Japan, deriving from a Japanese style of gardening known as Niwaki. Gallery References External links http://www.silktree.co.uk/cloudtree.html https://web.archive.org/web/20180312204223/http://warners.com.au/our-plants/plant/cloud-tree Site of Royal Horticulture Society (RHS) Japanese style of gardening Landscape architecture Trees
Cloud tree
Engineering
184
4,608,190
https://en.wikipedia.org/wiki/Night%20skiing
Night skiing is the sport of skiing or snowboarding after sundown, offered at many ski areas. There are floodlights – with metal halide, LED or magnetic induction lamps – along the piste which allow for better visibility. The night skiing session typically begins around sunset, and ends between 8:00 PM and 10:30 PM. Night skiing offers reduced price access versus daylight hours. Trails at night are normally not as busy as during the day, but there are usually fewer runs available. The trails also tend to be icier than during the day, due to melting and refreezing. Starting in 1997 Planai in Austria has held a World Cup slalom competition at night. A few ski resorts offer opportunities for night skiing wearing personal headlamps, or by the light of the full moon. History Processions of skiers holding torches, lanterns or flares while skiing down a slope at night has been a scheduled event of winter festivals such as the Nordic Games since at least 1903. The dramatic spectacle of torchlight ski descents is a program element at the Holmenkollen Ski Festival, and ski resort holiday celebrations. A torchlit ski race was held in Switzerland in 1920. In the 1925 Winter Carnival at Rumford, Maine, night ski jumping was included. Chicopee Ski Club in Ontario Canada had lighted night skiing in 1935, with lights powered by car batteries. Lighted slope skiing at Bousquet Ski Area in Pittsfield, Massachusetts began in 1936 thanks to a local partnership with General Electric. Other early lighted slopes include Fryeburg, Maine (1936), North Creek, New York (1937), Rossland, British Columbia (1937), Jackson, New Hampshire (1937), Hyak, Washington (1938), Juneau, Alaska (1938), Lake Placid, New York (1938) and Brattleboro, Vermont (1938). References External links Types of skiing Lighting Skiing
Night skiing
Astronomy
389
2,314,852
https://en.wikipedia.org/wiki/LF-space
In mathematics, an LF-space, also written (LF)-space, is a topological vector space (TVS) X that is a locally convex inductive limit of a countable inductive system of Fréchet spaces. This means that X is a direct limit of a direct system (X_n, i_nm) in the category of locally convex topological vector spaces, where each X_n is a Fréchet space. The name LF stands for Limit of Fréchet spaces. If each of the bonding maps i_nm is an embedding of TVSs then the LF-space is called a strict LF-space. This means that the subspace topology induced on X_n by X_n+1 is identical to the original topology on X_n. Some authors (e.g. Schaefer) define the term "LF-space" to mean "strict LF-space," so when reading mathematical literature, it is recommended to always check how LF-space is defined. Definition Inductive/final/direct limit topology Throughout, it is assumed that 𝒞 is either the category of topological spaces or some subcategory of the category of topological vector spaces (TVSs); If all objects in the category have an algebraic structure, then all morphisms are assumed to be homomorphisms for that algebraic structure. (I, ≤) is a non-empty directed set; X_• = (X_i)_i∈I is a family of objects in 𝒞 where (X_i, τ_i) is a topological space for every index i; To avoid potential confusion, τ_i should not be called X_i's "initial topology" since the term "initial topology" already has a well-known definition. The topology τ_i is called the original topology on X_i or X_i's given topology. X is a set (and if objects in 𝒞 also have algebraic structures, then X is automatically assumed to have whatever algebraic structure is needed); f_• = (f_i)_i∈I is a family of maps where for each index i, the map f_i has prototype f_i : X_i → X. If all objects in the category have an algebraic structure, then these maps are also assumed to be homomorphisms for that algebraic structure. If it exists, then the final topology on X in 𝒞, also called the colimit or inductive topology in 𝒞, and denoted by τ_f• or τ_f, is the finest topology on X such that (X, τ_f) is an object in 𝒞, and for every index i, the map f_i : (X_i, τ_i) → (X, τ_f) is a continuous morphism in 𝒞. In the category of topological spaces, the final topology always exists and moreover, a subset U ⊆ X is open (resp. closed) in (X, τ_f) if and only if f_i^−1(U) is open (resp. closed) in (X_i, τ_i) for every index i. However, the final topology may not exist in the category of Hausdorff topological spaces due to the requirement that (X, τ_f) belong to the original category (i.e. belong to the category of Hausdorff topological spaces). Direct systems Suppose that (I, ≤) is a directed set and that for all indices i ≤ j there are (continuous) morphisms f_ij : X_i → X_j in 𝒞 such that if i = j then f_ii is the identity map on X_i and if i ≤ j ≤ k then the following compatibility condition is satisfied: f_ik = f_jk ∘ f_ij, where this means that the composition f_jk ∘ f_ij : X_i → X_k is equal to f_ik : X_i → X_k. If the above conditions are satisfied then the triple ((X_i)_i∈I, (f_ij)_i≤j, I) formed by the collections of these objects, morphisms, and the indexing set is known as a direct system in the category 𝒞 that is directed (or indexed) by I. Since the indexing set I is a directed set, the direct system is said to be directed. The maps f_ij are called the bonding, connecting, or linking maps of the system. If the indexing set I is understood then I is often omitted from the above tuple (i.e. not written); the same is true for the bonding maps if they are understood. Consequently, one often sees written "X_• is a direct system" where "X_•" actually represents a triple with the bonding maps and indexing set either defined elsewhere (e.g. canonical bonding maps, such as natural inclusions) or else the bonding maps are merely assumed to exist but there is no need to assign symbols to them (e.g. the bonding maps are not needed to state a theorem).
Direct limit of a direct system For the construction of a direct limit of a general inductive system, please see the article: direct limit. Direct limits of injective systems If each of the bonding maps f_ij is injective then the system is called injective. If the X_i's have an algebraic structure, say addition for example, then for any x ∈ X_i and y ∈ X_j, we pick any index k such that i ≤ k and j ≤ k and then define their sum in X_k by using the addition operator of X_k. That is, x + y := f_ik(x) + f_jk(y), where the + on the right-hand side is the addition operator of X_k. This sum is independent of the index k that is chosen. In the category of locally convex topological vector spaces, the topology on the direct limit X of an injective directed inductive system of locally convex spaces can be described by specifying that an absolutely convex subset U of X is a neighborhood of 0 if and only if U ∩ X_i is an absolutely convex neighborhood of 0 in X_i for every index i. Direct limits in Top Direct limits of directed direct systems always exist in the categories of sets, topological spaces, groups, and locally convex TVSs. In the category of topological spaces, if every bonding map f_ij is injective (resp. surjective, bijective, a homeomorphism, a topological embedding, a quotient map) then so is every map f_i : X_i → X. Problem with direct limits Direct limits in the categories of topological spaces, topological vector spaces (TVSs), and Hausdorff locally convex TVSs are "poorly behaved". For instance, the direct limit of a sequence (i.e. indexed by the natural numbers) of locally convex nuclear Fréchet spaces may fail to be Hausdorff (in which case the direct limit does not exist in the category of Hausdorff TVSs). For this reason, only certain "well-behaved" direct systems are usually studied in functional analysis. Such systems include LF-spaces. However, non-Hausdorff locally convex inductive limits do occur in natural questions of analysis. Strict inductive limit If each of the bonding maps f_ij is an embedding of TVSs onto proper vector subspaces and if the system is directed by ℕ with its natural ordering, then the resulting limit is called a strict (countable) direct limit. In such a situation we may assume without loss of generality that each X_i is a vector subspace of X_i+1 and that the subspace topology induced on X_i by X_i+1 is identical to the original topology on X_i. In the category of locally convex topological vector spaces, the topology on a strict inductive limit X of Fréchet spaces can be described by specifying that an absolutely convex subset U is a neighborhood of 0 if and only if U ∩ X_i is an absolutely convex neighborhood of 0 in X_i for every i. Properties An inductive limit in the category of locally convex TVSs of a family of bornological (resp. barrelled, quasi-barrelled) spaces has this same property. LF-spaces Every LF-space is a meager subset of itself. The strict inductive limit of a sequence of complete locally convex spaces (such as Fréchet spaces) is necessarily complete. In particular, every LF-space is complete. Every LF-space is barrelled and bornological, which together with completeness implies that every LF-space is ultrabornological. An LF-space that is the inductive limit of a countable sequence of separable spaces is separable. LF spaces are distinguished and their strong duals are bornological and barrelled (a result due to Alexander Grothendieck). If X is the strict inductive limit of an increasing sequence of Fréchet spaces X_1 ⊆ X_2 ⊆ ⋯ then a subset B of X is bounded in X if and only if there exists some index i such that B is a bounded subset of X_i. A linear map from an LF-space into another TVS is continuous if and only if it is sequentially continuous.
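As a concrete illustration of the "pick a common index" construction described above, the following minimal Python sketch (not from the source) models the injective system R^1 → R^2 → ⋯ whose bonding maps pad a vector with zeros, and adds two elements of the direct limit by first pushing both into a common index; all names are illustrative.

# Minimal sketch: addition in the direct limit of the injective system R^1 -> R^2 -> ...
# where the bonding map f_mn (m <= n) pads a vector in R^m with zeros to land in R^n.
# An element of the limit is represented as (index n, vector in R^n); two elements are
# combined by mapping both into a common index k, exactly as in the text above.
from typing import List, Tuple

Element = Tuple[int, List[float]]  # (index n, vector in R^n)

def bond(x: List[float], n: int) -> List[float]:
    """f_mn : R^m -> R^n for m = len(x) <= n, given by zero-padding."""
    assert len(x) <= n
    return x + [0.0] * (n - len(x))

def add(a: Element, b: Element) -> Element:
    """Add two elements of the direct limit by pushing both into a common index k >= i, j."""
    (i, x), (j, y) = a, b
    k = max(i, j)  # any k with i <= k and j <= k gives the same result
    return k, [u + v for u, v in zip(bond(x, k), bond(y, k))]

if __name__ == "__main__":
    a: Element = (2, [1.0, 2.0])            # lives in R^2
    b: Element = (4, [0.5, 0.0, 3.0, 1.0])  # lives in R^4
    print(add(a, b))                        # (4, [1.5, 2.0, 3.0, 1.0])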
A linear map from an LF-space X into a Fréchet space Y is continuous if and only if its graph is closed in X × Y. Every bounded linear operator from an LF-space into another TVS is continuous. If X is an LF-space defined by a sequence (X_i) then the strong dual space of X is a Fréchet space if and only if all the X_i are normable. Thus the strong dual space of an LF-space is a Fréchet space if and only if it is an LB-space. Examples Space of smooth compactly supported functions A typical example of an LF-space is C_c^∞(ℝ^n), the space of all infinitely differentiable functions on ℝ^n with compact support. The LF-space structure is obtained by considering a sequence of compact sets K_1 ⊆ K_2 ⊆ ⋯ ⊆ K_i ⊆ ⋯ ⊆ ℝ^n with ⋃_i K_i = ℝ^n and, for all i, K_i a subset of the interior of K_i+1. Such a sequence could be the balls of radius i centered at the origin. The space C_c^∞(K_i) of infinitely differentiable functions on ℝ^n with compact support contained in K_i has a natural Fréchet space structure, and C_c^∞(ℝ^n) inherits its LF-space structure as described above. The LF-space topology does not depend on the particular sequence of compact sets K_i. With this LF-space structure, C_c^∞(ℝ^n) is known as the space of test functions, of fundamental importance in the theory of distributions. Direct limit of finite-dimensional spaces Suppose that for every positive integer n, X_n := ℝ^n, and for m < n, consider X_m as a vector subspace of X_n via the canonical embedding X_m → X_n defined by (x_1, …, x_m) ↦ (x_1, …, x_m, 0, …, 0). Denote the resulting LF-space by X. Since any TVS topology on X makes continuous the inclusions of the X_m's into X, the latter space has the maximum among all TVS topologies on an ℝ-vector space with countable Hamel dimension. It is an LC topology, associated with the family of all seminorms on X. Also, the TVS inductive limit topology of X coincides with the topological inductive limit; that is, the direct limit of the finite-dimensional spaces X_n in the category TOP and in the category TVS coincide. The continuous dual space X′ of X is equal to the algebraic dual space of X, that is, the space ℝ^ℕ of all real-valued sequences, and the weak topology on X′ is equal to the strong topology on X′ (i.e. σ(X′, X) = b(X′, X)). In fact, it is the unique LC topology on X′ whose topological dual space is X. See also DF-space Direct limit Final topology F-space LB-space Citations Bibliography Topological vector spaces
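To make the test-function example easier to parse after the markup loss above, here is a compact LaTeX restatement in my own notation; it only re-expresses what the text already states (an exhausting sequence of compact sets and the strict inductive limit topology) and introduces no new claims.

% Restatement (assumed notation) of the test-function example: C_c^infty(R^n) as a strict LF-space.
\[
  K_1 \subseteq K_2 \subseteq \cdots, \qquad
  K_i \subseteq \operatorname{int} K_{i+1}, \qquad
  \bigcup_{i \ge 1} K_i = \mathbb{R}^n ,
\]
\[
  C_c^\infty(\mathbb{R}^n) \;=\; \bigcup_{i \ge 1} C_c^\infty(K_i)
  \;=\; \varinjlim_i \, C_c^\infty(K_i),
\]
% Each C_c^infty(K_i) carries its natural Frechet topology (uniform convergence of all
% derivatives on K_i), and C_c^infty(R^n) carries the strict inductive limit topology:
% an absolutely convex set U is a neighbourhood of 0 if and only if U \cap C_c^\infty(K_i)
% is a neighbourhood of 0 in C_c^\infty(K_i) for every i.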
LF-space
Mathematics
2,044