Dataset fields: id (int64, 39 to 79M), url (string, length 32 to 168), text (string, length 7 to 145k), source (string, length 2 to 105), categories (list, 1 to 6 items), token_count (int64, 3 to 32.2k), subcategories (list, 0 to 27 items).
32,271,415
https://en.wikipedia.org/wiki/Sono-Seq
Sono-Seq (Sonication of Cross-linked Chromatin Sequencing) is a method in molecular biology used to determine the sequences of DNA regions in the genome that lie near the open chromatin of expressed genes. It is also known as the "Input" in the ChIP-seq protocol, since it follows the same steps except that it does not require immunoprecipitation. References Molecular biology techniques Gene expression
Sono-Seq
[ "Chemistry", "Biology" ]
90
[ "Gene expression", "Molecular biology techniques", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
37,898,548
https://en.wikipedia.org/wiki/Thermally%20isolated%20system
In thermodynamics, a thermally isolated system can exchange no mass or heat energy with its environment. The internal energy of a thermally isolated system may therefore change due to the exchange of work energy. The entropy of a thermally isolated system will increase over time if it is not at equilibrium, but as long as it is at equilibrium, its entropy will be at a maximum and constant value and will not change, no matter how much work energy the system exchanges with its environment. To maintain this constant entropy, any exchange of work energy with the environment must therefore be quasi-static in nature in order to ensure that the system remains essentially at equilibrium during the process. The opposite of a thermally isolated system is a thermally open system, which allows the transfer of heat energy and entropy. Thermally open systems may vary, however, in the rate at which they equilibrate, depending on the nature of the boundary of the open system. At equilibrium, the temperatures on both sides of a thermally open boundary are equal. At equilibrium, only a thermally isolating boundary can support a temperature difference. See also Closed system Dynamical system Mechanically isolated system Open system Thermodynamic system Isolated system References Thermodynamic systems
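To make the statements above concrete, here is a minimal worked form of the first and second laws specialized to a thermally isolated system (a sketch added for this summary, using the sign convention that δW is work done on the system; the symbols are not from the article itself):

    % Thermally isolated system: no heat crosses the boundary
    \begin{aligned}
      \delta Q &= 0 \quad\text{(thermally isolating boundary)}\\
      dU &= \delta Q + \delta W_{\mathrm{on}} = \delta W_{\mathrm{on}} \quad\text{(internal energy changes only through work)}\\
      dS &\ge \frac{\delta Q}{T} = 0, \quad\text{with } dS = 0 \text{ only for quasi-static (reversible) work, as stated above.}
    \end{aligned}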
Thermally isolated system
[ "Physics", "Chemistry", "Mathematics" ]
255
[ "Thermodynamics stubs", "Thermodynamic systems", "Physical systems", "Thermodynamics", "Physical chemistry stubs", "Dynamical systems" ]
37,898,741
https://en.wikipedia.org/wiki/Mechanically%20isolated%20system
In thermodynamics, a mechanically isolated system is a system that is mechanically constrained to disallow deformations, so that it cannot perform any work on its environment. It may, however, exchange heat across the system boundary. For a simple system, mechanical isolation is equivalent to a state of constant volume, and any process which occurs in such a simple system is said to be isochoric. The opposite of a mechanically isolated system is a mechanically open system, which allows the transfer of mechanical energy. For a simple system, a mechanically open boundary is one that is allowed to move under pressure differences between the two sides of the boundary. At mechanical equilibrium, the pressures on both sides of a mechanically open boundary are equal, but only a mechanically isolating boundary can support pressure differences. See also Closed system Thermally isolated system Dynamical system Open system Thermodynamic system Isolated system References Thermodynamic systems
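For the simple-system case described above, the corresponding energy balance can be sketched as follows (an illustrative addition using the convention that p dV is work done by the system; not part of the original article):

    % Mechanically isolated (isochoric) simple system: the boundary cannot move
    \begin{aligned}
      dV &= 0 \quad\text{(constant volume)}\\
      \delta W_{\mathrm{by}} &= p\,dV = 0\\
      dU &= \delta Q - p\,dV = \delta Q, \qquad \Delta U = Q \;(= C_V\,\Delta T \text{ if } C_V \text{ is constant}).
    \end{aligned}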
Mechanically isolated system
[ "Physics", "Chemistry", "Mathematics" ]
187
[ "Thermodynamic systems", "Thermodynamics stubs", "Physical systems", "Thermodynamics", "Physical chemistry stubs", "Dynamical systems" ]
37,899,800
https://en.wikipedia.org/wiki/Tris%28triphenylphosphine%29rhodium%20carbonyl%20hydride
Carbonyl hydrido tris(triphenylphosphine)rhodium(I) [Carbonyl(hydrido)tris(triphenylphosphane)rhodium(I)] is an organorhodium compound with the formula [RhH(CO)(PPh3)3] (Ph = C6H5). It is a yellow, benzene-soluble solid, which is used industrially for hydroformylation. Preparation [RhH(CO)(PPh3)3] was first prepared by the reduction of [RhCl(CO)(PPh3)2], e.g. with sodium tetrahydroborate, or triethylamine and hydrogen, in ethanol in the presence of excess triphenylphosphine: [RhCl(CO)(PPh3)2] + NaBH4 + PPh3 → [RhH(CO)(PPh3)3] + NaCl + BH3 It can also be prepared from an aldehyde, rhodium trichloride and triphenylphosphine in basic alcoholic media. Structure The complex adopts a trigonal bipyramidal geometry with trans CO and hydrido ligands, resulting in pseudo-C3v symmetry. The Rh-P, Rh-C, and Rh-H distances are 2.32, 1.83, and 1.60 Å, respectively. This complex is one of a small number of stable pentacoordinate rhodium hydrides. Use in hydroformylation This precatalyst was discovered in attempts to use tris(triphenylphosphine)rhodium chloride as a hydroformylation catalyst. It was found that the complex would quickly carbonylate and that the catalytic activity of the resulting material was enhanced by a variety of additives but inhibited by halides. This inhibition did not occur in the presence of base, suggesting that the hydrido complex represented the catalytically active form. Mechanistic considerations [RhH(CO)(PPh3)3] is a catalyst for the selective hydroformylation of 1-olefins to produce aldehydes at low pressures and mild temperatures. The selectivity for n-aldehydes increases in the presence of excess PPh3 and at low CO partial pressures. The first step in the hydroformylation process is the dissociative substitution of an alkene for a PPh3. Migratory insertion of the alkene into the Rh-H bond of this 18-electron complex can give either a primary or a secondary rhodium alkyl. This step sets the regiochemistry of the product; however, it is rapidly reversible. The 16-electron alkyl complex undergoes migratory insertion of a CO to form the coordinatively unsaturated acyl; uptake of another ligand once again gives an 18-electron acyl complex. The last step is hydrogenolysis of this acyl (oxidative addition of H2 followed by reductive elimination), which cleaves off the aldehyde product and regenerates the rhodium catalyst. References Rhodium(I) compounds Catalysts Homogeneous catalysis Triphenylphosphine complexes Reagents for organic chemistry Carbonyl complexes Hydrido complexes
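To make the mechanistic sequence easier to follow, the dissociative hydroformylation cycle described above can be summarized schematically as follows (a simplified sketch written for this summary, with L = PPh3 and R the alkene substituent; only the linear pathway is shown and ligand counts at each stage are idealized):

    % Schematic dissociative hydroformylation cycle (L = PPh3, R = alkene substituent)
    \begin{aligned}
    \mathrm{RhH(CO)L_3} &\rightleftharpoons \mathrm{RhH(CO)L_2} + \mathrm{L} && \text{(phosphine dissociation)}\\
    \mathrm{RhH(CO)L_2} + \mathrm{RCH{=}CH_2} &\rightleftharpoons \mathrm{RhH(CO)L_2(\text{alkene})} && \text{(alkene coordination)}\\
    \mathrm{RhH(CO)L_2(\text{alkene})} &\rightleftharpoons \mathrm{Rh(CH_2CH_2R)(CO)L_2} && \text{(insertion into the Rh-H bond; sets regiochemistry)}\\
    \mathrm{Rh(CH_2CH_2R)(CO)L_2} + \mathrm{CO} &\longrightarrow \mathrm{Rh(C(O)CH_2CH_2R)(CO)L_2} && \text{(CO uptake and migratory insertion)}\\
    \mathrm{Rh(C(O)CH_2CH_2R)(CO)L_2} + \mathrm{H_2} &\longrightarrow \mathrm{RhH(CO)L_2} + \mathrm{RCH_2CH_2CHO} && \text{(hydrogenolysis; aldehyde released)}
    \end{aligned}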
Tris(triphenylphosphine)rhodium carbonyl hydride
[ "Chemistry" ]
691
[ "Catalysis", "Catalysts", "Reagents for organic chemistry", "Homogeneous catalysis", "Chemical kinetics" ]
37,905,049
https://en.wikipedia.org/wiki/C19H20FN3
{{DISPLAYTITLE:C19H20FN3}} The molecular formula C19H20FN3 (molar mass: 309.38 g/mol, exact mass: 309.1641 u) may refer to: Fluperlapine, or fluoroperlapine Gevotroline (WY-47,384) Molecular formulas
C19H20FN3
[ "Physics", "Chemistry" ]
78
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
29,993,443
https://en.wikipedia.org/wiki/Establishment%20of%20sister%20chromatid%20cohesion
Sister chromatid cohesion refers to the process by which sister chromatids are paired and held together during certain phases of the cell cycle. Establishment of sister chromatid cohesion is the process by which chromatin-associated cohesin protein becomes competent to physically bind together the sister chromatids. In general, cohesion is established during S phase as DNA is replicated, and is lost when chromosomes segregate during mitosis and meiosis. Some studies have suggested that cohesion aids in aligning the kinetochores during mitosis by forcing the kinetochores to face opposite cell poles. Cohesin loading Cohesin first associates with the chromosomes during G1 phase. The cohesin ring is composed of two SMC (structural maintenance of chromosomes) proteins and two additional Scc proteins. Cohesin may originally interact with chromosomes via the ATPase domains of the SMC proteins. In yeast, the loading of cohesin on the chromosomes depends on proteins Scc2 and Scc4. Cohesin interacts with the chromatin at specific loci. High levels of cohesin binding are observed at the centromere. Cohesin is also loaded at cohesin attachment regions (CARs) along the length of the chromosomes. CARs are approximately 500-800 base pair regions spaced at approximately 9 kilobase intervals along the chromosomes. In yeast, CARs tend to be rich in adenine-thymine base pairs. CARs are independent of origins of replication. Establishment of cohesion Establishment of cohesion refers to the process by which chromatin-associated cohesin becomes cohesion-competent. Chromatin association of cohesin is not sufficient for cohesion. Cohesin must undergo subsequent modification ("establishment") to be capable of physically holding the sister chromosomes together. Though cohesin can associate with chromatin earlier in the cell cycle, cohesion is established during S phase. Early data suggesting that S phase is crucial to cohesion was based on the fact that after S phase, sister chromatids are always found in the bound state. Tying establishment to DNA replication allows the cell to institute cohesion as soon as the sister chromatids are formed. This solves the problem of how the cell might properly identify and pair sister chromatids by ensuring that the sister chromatids are never separate once replication has occurred. The Eco1/Ctf7 gene (yeast) was one of the first genes to be identified as specifically required for the establishment of cohesion. Eco1 must be present in S phase to establish cohesion, but its continued presence is not required to maintain cohesion. Eco1 interacts with many proteins directly involved in DNA replication, including the processivity clamp PCNA, clamp loader subunits, and a DNA helicase. Though Eco1 contains several functional domains, it is the acetyltransferase activity of the protein which is crucial for establishment of cohesion. During S phase, Eco1 acetylates lysine residues in the Smc3 subunit of cohesin. Smc3 remains acetylated until at least anaphase. Once cohesin has been removed from the chromatin, Smc3 is deacetylated by Hos1. The Pds5 gene was also identified in yeast as necessary for the establishment of cohesion. In humans, the gene has two homologs, Pds5A and Pds5B. Pds5 interacts with chromatin-associated cohesin. Pds5 is not strictly establishment-specific, as Pds5 is necessary for maintenance of cohesion during G2 and M phase. The loss of Pds5 negates the requirement for Eco1. As such, Pds5 is often termed an "anti-establishment" factor. 
In addition to interacting with cohesin, Pds5 also interacts with Wapl (wings apart-like), another protein that has been implicated in the regulation of sister chromatid cohesion. Human Wapl binds cohesin through the Scc cohesin subunits (in humans, Scc1 and SA1). Wapl has been tied to the loss of cohesin from the chromatids during M phase. Wapl interacts with Pds5 through phenylalanine-glycine-phenylalanine (FGF) sequence motifs. One model of establishment of cohesion suggests that establishment is mediated by the replacement of Wapl in the Wapl-Pds5-cohesin complex with the Sororin protein. Like Wapl, Sororin contains an FGF domain and is capable of interacting with Pds5. In this model, put forward by Nishiyama et al., Wapl interacts with Pds5 and cohesin during G1, before establishment. During S phase, Eco1 (Esco1/Esco2 in humans) acetylates Smc3. This results in recruitment of Sororin. Sororin then replaces Wapl in the Pds5-cohesin complex. This new complex is the established, cohesion-competent cohesin state. At entry to mitosis, Sororin is phosphorylated and replaced again by Wapl, leading to loss of cohesion. Sororin also has chromatin binding activity independent of its ability to mediate cohesion. Meiosis Cohesion proteins SMC1ß, SMC3, REC8 and STAG3 appear to participate in the cohesion of sister chromatids throughout the meiotic process in human oocytes. SMC1ß, REC8 and STAG3 are meiosis specific cohesin proteins. The STAG3 protein is essential for female meiosis and fertility. Cohesins are involved in meiotic recombination. Ties to DNA replication A growing body of evidence ties establishment of cohesion to DNA replication. As mentioned above, functional coupling of these two processes prevents the cell from having to later distinguish which chromosomes are sisters by ensuring that the sister chromatids are never separate after replication. Another significant tie between DNA replication and cohesion pathways is through Replication Factor C (RFC). This complex, the "clamp loader," is responsible for loading PCNA onto DNA. An alternative form of RFC is required for sister chromatin cohesion. This alternative form is composed of core RFC proteins RFC2, RFC3, RFC4, and RFC5, but replaces the RFC1 protein with cohesion specific proteins Ctf8, Ctf18, and Dcc1. A similar function-specific alternative RFC (replacing RFC1 with Rad24) plays a role in the DNA damage checkpoint. The presence of an alternative RFC in the cohesion pathway can be interpreted as evidence in support of the polymerase switch model for cohesion establishment. Like the non-cohesion RFC, the cohesion RFC loads PCNA onto DNA. Some of the evidence tying cohesion and DNA replication comes from the multiple interactions of Eco1. Eco1 interacts with PCNA, RFC subunits, and a DNA helicase, Chl1, either physically or genetically. Studies have also found replication-linked proteins which influence cohesion independent of Eco1. The Ctf18 subunit of the cohesion-specific RFC can interact with cohesin subunits Smc1 and Scc1. Polymerase switch model Though the protein was originally identified as a Topoisomerase I redundant factor, the TRF4 gene product was later shown to be required for sister chromatid cohesion. Wang et al. showed that Trf4 is actually a DNA polymerase, which they called Polymerase κ. This polymerase is also referred to as Polymerase σ. In the same paper in which they identified Pol σ, Wang et al. suggested a polymerase switch model for establishment of cohesion. 
In this model, upon reaching a CAR, the cell switches DNA polymerases in a mechanism similar to that used in Okazaki fragment synthesis. The cell off-loads the processive replication polymerase and instead uses Pol σ for synthesis of the CAR region. It has been suggested that the cohesion-specific RFC could function in off-loading or on-loading PCNA and polymerases in such a switch. Ties to DNA damage pathways Changes in patterns of sister chromatid cohesion have been observed in cases of DNA damage. Cohesin is required for repair of DNA double-strand breaks (DSBs). One mechanism of DSB repair, homologous recombination (HR), requires the presence of the sister chromatid for repair at the break site. Thus, it is possible that cohesion is required for this process because it ensures that the sister chromatids are physically close enough to undergo HR. DNA damage can lead to cohesin loading at non-CAR sites and establishment of cohesion at these sites even during G2 phase. In the presence of ionizing radiation (IR), the Smc1 subunit of cohesin is phosphorylated by the ataxia telangiectasia mutated (ATM) kinase. ATM is a key kinase in the DNA damage checkpoint. Defects in cohesion can increase genome instability, a result consistent with the ties between cohesion and DNA damage pathways. In the bacterium Escherichia coli, repair of mitomycin C-induced DNA damage occurs by a sister chromatid cohesion process involving the RecN protein. Sister chromatid interaction followed by homologous recombination appears to significantly contribute to the repair of DNA double-strand damage. Medical relevance Defects in the establishment of sister chromatid cohesion have serious consequences for the cell and are therefore tied to many human diseases. Failure to establish cohesion correctly or inappropriate loss of cohesion can lead to missegregation of chromosomes during mitosis, which results in aneuploidy. The loss of the human homologs of core cohesin proteins or of Eco1, Pds5, Wapl, Sororin, or Scc2 has been tied to cancer. Mutations affecting cohesion and establishment of cohesion are also responsible for Cornelia de Lange Syndrome and Roberts Syndrome. Diseases arising from defects in cohesin or other proteins involved in sister chromatid cohesion are referred to as cohesinopathies. Cornelia de Lange Syndrome Genetic alterations in genes NIPBL, SMC1A, SMC3, RAD21 and HDAC8 are associated with Cornelia de Lange Syndrome. The proteins encoded by these genes all function in the chromosome cohesion pathway that is employed in the cohesion of sister chromatids during mitosis, DNA repair, chromosome segregation and the regulation of developmental gene expression. Defects in these functions likely underlie many of the features of Cornelia de Lange Syndrome. References Molecular genetics
Establishment of sister chromatid cohesion
[ "Chemistry", "Biology" ]
2,264
[ "Molecular genetics", "Molecular biology" ]
53,585,184
https://en.wikipedia.org/wiki/VIP2%20experiment
The VIP2 experiment (Violation of the Pauli Principle) is an atomic physics experiment studying the possible violation of the Pauli exclusion principle for electrons. The experiment is located in the underground laboratory of Gran Sasso, LNGS-INFN, near the town of L'Aquila in Italy. It is run by an international collaboration of researchers from Austria, Italy, France and Romania. The sources of funding include the INFN (Italy), the Austrian Science Fund and the John Templeton Foundation (JTF). Within the JTF project, the implications for physics, cosmology and philosophy are also being investigated. Principle of the experiment The different approaches to investigating the Pauli exclusion principle need to be distinguished according to whether they fulfill the Messiah–Greenberg superselection rule. This rule states that the symmetry of the wave function of a steady state is constant in time. As a consequence, the symmetry of a quantum state can only change if a particle which is new to the system interacts with the state. One way to fulfill this rule and test the Pauli exclusion principle with high precision is to introduce "new" electrons into a conductor. The electrons form new quantum states with the atoms in the conductor. These "new" states could violate the Pauli exclusion principle. The aim of VIP2 is to search for new quantum states which have a symmetric component in an otherwise antisymmetric state. These non-Paulian states can be identified by the characteristic X-rays emitted during atomic transitions to the ground state that are prohibited by the Pauli exclusion principle. An example of a transition of this kind would be a third electron arriving on the 1s level. The emitted X-rays are detected by silicon drift detectors. The energies of the transitions between Pauli-forbidden states were calculated using a multi-configuration Dirac–Fock method, and they differ slightly from the corresponding normal transitions. For example, the Pauli-forbidden K-alpha transition in copper, which is used in VIP2, where 2 electrons are in the 1s orbital before the transition happens, is shifted by 300 eV to lower energies with respect to the normal transition. VIP2 runs at LNGS, where the background radiation introduced by cosmic rays is strongly reduced. Data are taken in alternating runs without current (background) and in runs with current (signal). By analysing the detected energy spectra in the region where Pauli exclusion principle violating transitions are expected, upper limits on the probability for a violation of the Pauli exclusion principle are obtained. Results The experiment has been taking data in stable conditions since summer 2016 in the Gran Sasso underground laboratory. With the data taken until the end of 2020, an intermediate upper limit can be set on the probability that the Pauli exclusion principle is violated in an atom of copper. The VIP-2 experiment is in 2023 in its last year of operation. A major upgrade is on the way, which will be characterized by a larger vacuum chamber allowing a stronger current, and thicker silicon drift detectors, allowing a higher efficiency of X-ray detection and sensitivity to Pauli-forbidden transitions in heavier elements. References Physics experiments Pauli exclusion principle Quantum mechanics
VIP2 experiment
[ "Physics" ]
642
[ "Physics experiments", "Theoretical physics", "Quantum mechanics", "Experimental physics", "Pauli exclusion principle" ]
53,585,937
https://en.wikipedia.org/wiki/Quantum%20coin%20flipping
Consider two remote players, connected by a channel, who do not trust each other. The problem of them agreeing on a random bit by exchanging messages over this channel, without relying on any trusted third party, is called the coin flipping problem in cryptography. Quantum coin flipping uses the principles of quantum mechanics to encrypt messages for secure communication. It is a cryptographic primitive which can be used to construct more complex and useful cryptographic protocols, e.g. Quantum Byzantine agreement. Unlike other types of quantum cryptography (in particular, quantum key distribution), quantum coin flipping is a protocol used between two users who do not trust each other. Consequently, both users (or players) want to win the coin toss and will attempt to cheat in various ways. In the classical setting, i.e. without quantum communication, one player can (in principle) always cheat against any protocol. There are classical protocols based on commitment schemes, but they assume that the players lack the computing power to break the scheme. In contrast, quantum coin flipping protocols can resist cheating even by players with unlimited computing power. The most basic figure of merit for a coin-flipping protocol is given by its bias, a number between 0 and 1/2. The bias of a protocol captures the success probability of an all-powerful cheating player who uses the best conceivable strategy. A protocol with bias 0 means that no player can cheat. A protocol with bias 1/2 means that at least one player can always succeed at cheating. Obviously, the smaller the bias, the better the protocol. When the communication is over a quantum channel, it has been shown that even the best conceivable protocol cannot have a bias less than 1/√2 - 1/2 ≈ 0.207. Consider the case where each player knows the preferred bit of the other. A coin flipping problem which makes this additional assumption constitutes the weaker variant thereof, called weak coin flipping (WCF). In the case of classical channels this extra assumption yields no improvement. On the other hand, it has been proven that WCF protocols with arbitrarily small biases do exist. However, the best known explicit WCF protocol has bias 1/6. Although quantum coin flipping offers clear advantages over its classical counterpart in theory, accomplishing it in practice has proven difficult. History Theory Manuel Blum introduced coin flipping as part of a classical system in 1983 based on computational algorithms and assumptions. Blum's version of coin flipping answers the following cryptographic problem: Alice and Bob are recently divorced, living in two separate cities, and want to decide who gets to keep the car. To decide, Alice wants to flip a coin over the telephone. However, Bob is concerned that if he were to tell Alice heads, she would flip the coin and automatically tell him that he lost. Thus, the problem with Alice and Bob is that they do not trust each other; the only resource they have is the telephone communication channel, and there is no third party available to read the coin. Therefore, Alice and Bob must either be truthful and agree on a value or be convinced that the other is cheating. In 1984, quantum cryptography emerged from a paper written by Charles H. Bennett and Gilles Brassard. In this paper, the two introduced the idea of using quantum mechanics to enhance previous cryptographic protocols such as coin flipping.
Since then, many researchers have applied quantum mechanics to cryptography as they have proven theoretically to be more secure than classical cryptography, however, demonstrating these protocols in practical systems is difficult to accomplish. Experiment As published in 2014, a group of scientists at the Laboratory for Communication and Processing of Information (LTCI) in Paris have implemented quantum coin flipping protocols experimentally. The researchers have reported that the protocol performs better than a classical system over a suitable distance for a metropolitan area optical network. Definition Coin flipping In cryptography, coin flipping is defined to be the problem where two mutually distrustful and remote players want to agree on a random bit without relying on any third party. Strong coin flipping In quantum cryptography, strong coin flipping (SCF) is defined to be a coin flipping problem where each player is oblivious to the preference of the other. Weak coin flipping In quantum cryptography, weak coin flipping (WCF) is defined to be a coin flipping problem where each player knows the preference of the other. It follows that the players have opposite preferences. If this were not the case then the problem will be pointless as the players can simply choose the outcome they desire. Bias Consider any coin flipping protocol. Let Alice and Bob be the two players who wish to implement the protocol. Consider the scenario where Alice cheats using her best strategy against Bob who honestly follows the protocol. Let the probability that Bob obtains the outcome Alice preferred be given by . Consider the reversed situation, i.e. Bob cheats using his best strategy against Alice who honestly follows the protocol. Let the corresponding probability that Alice obtains the outcome Bob preferred to be given by . The bias of the protocol is defined to be . The half is subtracted because a player will get the desired value half the time purely by chance. Extensions Coin flipping can be defined for biased coins as well, i.e. the bits are not equally likely. The notion of correctness has also been formalized which requires that when both players follow the protocol (nobody cheats) the players always agree on the bit generated and that the bit follows some fixed probability distribution. Protocols Using conjugate encoding Quantum coin flipping and other types of quantum cryptography communicate information through the transmission of qubits. The accepting player does not know the information in the qubit until he performs a measurement. Information about each qubit is stored on and carried by a single photon. Once the receiving player measures the photon, it is altered, and will not produce the same output if measured again. Since a photon can only be read the same way once, any other party attempting to intercept the message is easily detectable. Quantum coin flipping is when random qubits are generated between two players that do not trust each other because both of them want to win the coin toss, which could lead them to cheat in a variety of ways. The essence of coin flipping occurs when the two players issue a sequence of instructions over a communication channel that then eventually results in an output. A basic quantum coin flipping protocol involves two people: Alice and Bob. Alice sends Bob a set number of Κ photon pulses in the quantum states . Each of these photon pulses is independently prepared following a random choice by Alice of basis αi and bit ci where i = 1, 2, 3...Κ. 
Bob then measures the pulses from Alice by identifying a random basis βi. Bob records these photons and then reports back the first successfully measured photon j to Alice along with a random bit b. Alice reveals the basis and bit that she used at the basis Bob gave her. If the two bases and bits match, then both parties are truthful and can exchange information. If the bit reported by Bob is different than that of Alice's, one is not being truthful. A more general explanation of the above protocol is as follows: Alice first chooses a random basis (such as diagonally) and a sequence of random qubits. Alice then encodes her chosen qubits as a sequence of photons following the chosen basis. She then sends these qubits as a train of polarized photons to Bob through the communication channel. Bob chooses a sequence of reading bases randomly for each individual photon. He then reads the photons and records the results in two tables. One table is of the rectilinear (horizontal or vertical) received photons and one of the diagonally received photons. Bob may have holes in his tables due to losses in his detectors or in the transmission channels. Bob now makes a guess as to which basis Alice used and announces his guess to Alice. If he guessed correctly, he wins and if not, he loses. Alice reports whether he won or not by announcing what basis she used to Bob. Alice then confirms the information by sending Bob her entire original qubit sequence that she used in step 1. Bob compares Alice's sequence with his tables to confirm that no cheating occurred on Alice's part. The tables should correspond to Alice's basis and there should be no correlation with the other table. Assumptions There are a few assumptions that must be made for this protocol to work properly. The first is that Alice can create each state independent of Bob, and with an equal probability. Second, for the first bit that Bob successfully measures, his basis and bit are both random and completely independent of Alice. The last assumption, is that when Bob measures a state, he has a uniform probability to measure each state, and no state is easier to be detected than others. This last assumption is especially important because if Alice were aware of Bob's inability to measure certain states, she could use that to her advantage. Cheating The key issue with coin flipping is that it occurs between two distrustful parties. These two parties are communicating through the communication channel some distance from each other and they have to agree on a winner or loser with each having a 50 percent chance of winning. However, since they are distrustful of one another, cheating is likely to occur. Cheating can occur in a number of ways such as claiming they lost some of the message when they do not like the result or increasing the average number of photons contained in each of the pulses. For Bob to cheat, he would have to be able to guess Alice's basis with a probability greater than . In order to accomplish this, Bob would have to be able to determine a train of photons randomly polarized in one basis from a train of photons polarized in another basis. Alice, on the other hand, could cheat in a couple of different ways, but she has to be careful because Bob could easily detect it. When Bob sends a correct guess to Alice, she could convince Bob that her photons are actually polarized the opposite of Bob's correct guess. Alice could also send Bob a different original sequence than she actually used in order to beat Bob. 
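The exchange described above can be illustrated with a small simulation. The sketch below is written for this summary (the function and variable names are invented, and measurements are treated as ideal and lossless); it models only honest players following the conjugate-encoding protocol: Alice's random bases and bits, Bob's random measurement bases, his basis guess, and the final comparison.

    import random

    BASES = ("rectilinear", "diagonal")  # stand-ins for the two conjugate bases

    def measure(photon_basis, photon_bit, measure_basis):
        """Idealized single-photon measurement: the bit is recovered only when the
        measurement basis matches the preparation basis; otherwise the outcome is random."""
        if measure_basis == photon_basis:
            return photon_bit
        return random.randint(0, 1)

    def coin_flip(n_pulses=20):
        # Alice prepares each pulse with a random basis and a random bit.
        alice_bases = [random.choice(BASES) for _ in range(n_pulses)]
        alice_bits = [random.randint(0, 1) for _ in range(n_pulses)]

        # Bob measures every pulse in a randomly chosen basis and records the outcomes.
        bob_bases = [random.choice(BASES) for _ in range(n_pulses)]
        bob_results = [measure(ab, abit, bb)
                       for ab, abit, bb in zip(alice_bases, alice_bits, bob_bases)]

        # Bob announces the index of the first pulse he measured successfully
        # (here every pulse succeeds, so index 0) together with his basis guess.
        j = 0
        bob_guess = random.choice(BASES)

        # Alice reveals the basis and bit she used for pulse j; Bob wins if his guess
        # matches her basis. Alice then discloses her full sequence so Bob can check
        # it against his records (cheat detection, only sketched here for pulse j).
        bob_wins = (bob_guess == alice_bases[j])
        consistent = (bob_bases[j] != alice_bases[j]) or (bob_results[j] == alice_bits[j])
        return bob_wins, consistent

    if __name__ == "__main__":
        wins = sum(coin_flip()[0] for _ in range(10_000))
        print(f"Bob wins {wins / 10_000:.3f} of honest rounds (expected about 0.5)")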
Detecting a third-party Single photons are used to pass the information from one player to the other (qubits). In this protocol, the information is encoded in the single photons with polarization directions of 0, 45, 90, and 135 degrees, non-orthogonal quantum states. When a third party attempts to read or gain information on the transmission, they alter the photon's polarization in a random way that is likely detected by the two players because it does not match the pattern exchanged between the two legitimate users. The Dip Dip Boom protocol (weak coin flipping with bias ) The Dip Dip Boom (DDB) protocol is a quantum version of the following game. Consider a list of numbers , each between 0 and 1. The players, Alice and Bob, take turns to say "Dip" or "Boom" with probability at round . The player who says "Boom" wins. Obviously, a cheating player can simply say "Boom" and win as there are no rewards for longer games. We will consider games that terminate so that for some (large) , say , we set . Consider round . Let us denote by and the probability of, respectively, Alice and Bob winning. Let be the probability that the game remains undecided. These numbers for the classical game described above can be evaluated inductively. We now describe the quantum version. Let be a three dimensional Hilbert space spanned by . Let be a two dimensional Hilbert space which is spanned by . Initialisation: Alice holds the registers and initialises the state to . Bob holds the register and initialises it to the state . Iteration: For to the following must be performed. For odd we set X=A (for Alice) and Y=B (for Bob); for even we set X=B and Y=A. X implements the operation . X sends the message register to Y. Y implements the operation . Y measures the message register in the computational basis. If the outcome is BOOM then Y aborts and declares him/herself the winner. Measurement: Alice and Bob both measure their local register and respectively. If the outcome is U then they declare themselves to be the winner. If the outcome is A then Alice is the winner and for B it is Bob. Remarks To obtain a balanced protocol one must choose the s such that . If both players follow the protocol, i.e. no player cheats, then the outcome at the end of step two will never be BOOM and neither will the outcome at step 3 be . The bias analysis of this protocol uses SDP duality. For large the bias of the protocol can be made arbitrarily close to . Optimal strong coin flipping It has been shown that using a WCF protocol with an arbitrarily small bias one can construct a SCF protocol with bias arbitrarily close to which is known to be optimal. Experimental implementation Using conjugate encoding As mentioned in the history section, scientists at the LTCI in Paris have experimentally carried out a quantum coin flipping protocol. Previous protocols called for a single photon source or an entangled source to be secure. However, these sources are why it is difficult for quantum coin flipping to be implemented. Instead, the researchers at LTCI used the effects of quantum superposition rather than a single photon source, which they claim makes implementation easier with the standard photon sources available. The researchers used the Clavis2 platform developed by IdQuantique for their protocol, but needed to modify the Clavis2 system in order for it to work for the coin flipping protocol. The experimental setup they used with the Clavis2 system, involves a two-way approach. Light pulsed at 1550 nanometres is sent from Bob to Alice. 
Alice then uses a phase modulator to encrypt the information. After encryption, she then uses a Faraday mirror to reflect and attenuate the pulses at her chosen level and sends them back to Bob. Using two high-quality single-photon detectors, Bob chooses a measurement basis in his phase modulator to detect the pulses from Alice. They replaced the detectors on Bob's side because of the low detection efficiencies of the previous detectors. When they replaced the detectors, they were able to show a quantum advantage on a channel for over . A couple of other challenges the group faced were reprogramming the system, because photon source attenuation was high, and performing system analyses to identify losses and errors in system components. With these corrections, the scientists were capable of implementing a coin flipping protocol by introducing a small honest abort probability, the probability that two honest participants cannot obtain a coin flip at the end of the protocol, but at a short communication distance. References Quantum cryptography Information theory
Quantum coin flipping
[ "Mathematics", "Technology", "Engineering" ]
3,008
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]
53,589,134
https://en.wikipedia.org/wiki/Ab%20initio%20methods%20%28nuclear%20physics%29
In nuclear physics, ab initio methods seek to describe the atomic nucleus from the bottom up by solving the non-relativistic Schrödinger equation for all constituent nucleons and the forces between them. This is done either exactly for very light nuclei (up to four nucleons) or by employing certain well-controlled approximations for heavier nuclei. Ab initio methods constitute a more fundamental approach compared to e.g. the nuclear shell model. Recent progress has enabled ab initio treatment of heavier nuclei such as nickel. A significant challenge in the ab initio treatment stems from the complexities of the inter-nucleon interaction. The strong nuclear force is believed to emerge from the strong interaction described by quantum chromodynamics (QCD), but QCD is non-perturbative in the low-energy regime relevant to nuclear physics. This makes the direct use of QCD for the description of the inter-nucleon interactions very difficult (see lattice QCD), and a model must be used instead. The most sophisticated models available are based on chiral effective field theory. This effective field theory (EFT) includes all interactions compatible with the symmetries of QCD, ordered by the size of their contributions. The degrees of freedom in this theory are nucleons and pions, as opposed to quarks and gluons as in QCD. The effective theory contains parameters called low-energy constants, which can be determined from scattering data. Chiral EFT implies the existence of many-body forces, most notably the three-nucleon interaction which is known to be an essential ingredient in the nuclear many-body problem. After arriving at a Hamiltonian H (based on chiral EFT or other models), one must solve the Schrödinger equation H Ψ = E Ψ, where Ψ is the many-body wavefunction of the A nucleons in the nucleus. Various ab initio methods have been devised to numerically find solutions to this equation: Green's function Monte Carlo (GFMC) No-core shell model (NCSM) Coupled cluster (CC) Self-consistent Green's function (SCGF) In-medium similarity renormalization group (IM-SRG) Further reading References Nuclear physics
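As a concrete illustration of the structure just described (a standard schematic form written for this summary; the symbols are chosen here and are not defined in the article), the A-nucleon Hamiltonian built from two- and three-nucleon interactions and the equation to be solved can be written as:

    \hat{H} \;=\; \sum_{i=1}^{A} \frac{\hat{p}_i^{\,2}}{2m}
            \;+\; \sum_{i<j}^{A} \hat{V}^{NN}_{ij}
            \;+\; \sum_{i<j<k}^{A} \hat{V}^{3N}_{ijk},
    \qquad
    \hat{H}\,\Psi \;=\; E\,\Psi .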
Ab initio methods (nuclear physics)
[ "Physics" ]
469
[ "Nuclear physics" ]
40,666,874
https://en.wikipedia.org/wiki/Bis%28dinitrogen%29bis%281%2C2-bis%28diphenylphosphino%29ethane%29molybdenum%280%29
trans-Bis(dinitrogen)bis[1,2-bis(diphenylphosphino)ethane]molybdenum(0) is a coordination complex with the formula Mo(N2)2(dppe)2. It is a relatively air stable yellow-orange solid. It is notable as being the first discovered dinitrogen containing complex of molybdenum. Structure Mo(N2)2(dppe)2 is an octahedral complex with idealized D2h point group symmetry. The dinitrogen ligands are mutually trans across the metal center. The Mo-N bond has a length of 2.01 Å, and the N-N bond has a length of 1.10 Å. This length is close to the free nitrogen bond length, but coordination to the metal weakens the N-N bond making it susceptible to electrophilic attack. Synthesis The first synthetic route to Mo(N2)2(DPPE)2 involved a reduction of molybdenum(III) acetylacetonate with triethylaluminium in the presence of dppe and nitrogen. A higher yielding synthesis involves a four-step process. In the first step, molybdenum(V) chloride is reduced by acetonitrile (CH3CN) to give [MoCl4(CH3CN)2]. Acetonitrile is displaced by tetrahydrofuran (THF) to give [MoCl4(THF)2]. This Mo(IV) compound is reduced by tin powder to [MoCl3(thf)3]. The desired compound is formed in the presence of nitrogen gas, dppe ligand, and magnesium turnings as the reductant: 3 Mg + 2 MoCl3(THF)3 + 4 Ph2PCH2CH2PPh2 + 4 N2 → 2 trans-[Mo(N2)2(Ph2PCH2CH2PPh2)2] + 3 MgCl2 + 6 THF Reactivity The terminal nitrogen is susceptible to electrophilic attack, allowing for the fixation of nitrogen to ammonia in the presence of acid. In this way, Mo(N2)2(dppe)2 serves as a model for biological nitrogen fixation. Carbon-nitrogen bonds can also be formed with this complex through condensation reactions with ketones and aldehydes, and substitution reactions with acid chlorides. The terminal nitrogen can also be silylated. See also Transition metal dinitrogen complex Nitrogen fixation References Coordination complexes Molybdenum(0) compounds Phosphine complexes Nitrogen compounds
Bis(dinitrogen)bis(1,2-bis(diphenylphosphino)ethane)molybdenum(0)
[ "Chemistry" ]
562
[ "Coordination chemistry", "Coordination complexes" ]
40,670,179
https://en.wikipedia.org/wiki/C10H20O3
{{DISPLAYTITLE:C10H20O3}} The molecular formula C10H20O3 may refer to: Hydroxydecanoic acids 10-Hydroxydecanoic acid Myrmicacin (3-hydroxydecanoic acid) Promoxolane Molecular formulas
C10H20O3
[ "Physics", "Chemistry" ]
63
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
40,674,284
https://en.wikipedia.org/wiki/Curtright%20field
In theoretical physics, the Curtright field (named after Thomas Curtright) is a tensor quantum field of mixed symmetry, whose gauge-invariant dynamics are dual to those of the general relativistic graviton in higher (D>4) spacetime dimensions. Or at least this holds for the linearized theory. For the full nonlinear theory, less is known. Several difficulties arise when interactions of mixed symmetry fields are considered, but at least in situations involving an infinite number of such fields (notably string theory) these difficulties are not insurmountable. The Lanczos tensor has a gauge-transformation dynamics similar to that of Curtright. But Lanczos tensor exists only in 4D. Overview In four spacetime dimensions, the field is not dual to the graviton, if massless, but it can be used to describe massive, pure spin 2 quanta. Similar descriptions exist for other massive higher spins, in D≥4. The simplest example of the linearized theory is given by a rank three Lorentz tensor whose indices carry the permutation symmetry of the Young diagram corresponding to the integer partition 3=2+1. That is to say, and where indices in square brackets are totally antisymmetrized. The corresponding field strength for is This has a nontrivial trace where is the Minkowski metric with signature The action for in D spacetime dimensions is bilinear in the field strength and its trace. This action is gauge invariant, assuming there is zero net contribution from any boundaries, while the field strength itself is not. The gauge transformation in question is given by where S and A are arbitrary symmetric and antisymmetric tensors, respectively. An infinite family of mixed symmetry gauge fields arises, formally, in the zero tension limit of string theory, especially if D>4. Such mixed symmetry fields can also be used to provide alternate local descriptions for massive particles, either in the context of strings with nonzero tension, or else for individual particle quanta without reference to string theory. See also Kalb–Ramond field p-form electrodynamics dual graviton Massive gravity Montonen-Olive duality References String theory Hypothetical particles Gauge bosons
Curtright field
[ "Physics", "Astronomy" ]
449
[ "Hypothetical particles", "Matter", "Astronomical hypotheses", "Unsolved problems in physics", "String theory", "Physics beyond the Standard Model", "Subatomic particles" ]
28,465,562
https://en.wikipedia.org/wiki/Vuilleumier%20cycle
The Vuilleumier cycle was patented by a Swiss-American engineer named Rudolph Vuilleumier in 1918. The purpose of Vuilleumier's machine was to create a heat pump that would use heat at high temperature as its energy input. The Vuilleumier cycle utilizes working-gas expansion and compression in three variable-volume spaces in order to pump heat from a low to a moderate temperature level. The interesting characteristic of the Vuilleumier machine is that the induced volume variations are realized not by the input of work but thermally. For this reason it has the potential to serve modern applications in which polluting the environment is not an option. It is a good candidate for such applications, as it consists only of metallic parts and an inert gas. Using these units for heating and cooling buildings, large energy savings can be achieved, as they can be operated at small scale in individual buildings or at large scale, providing heating power to entire blocks of buildings without using fossil fuels. The use of Vuilleumier machines for industrial applications or inside vehicles is also a feasible option. Another field where these machines have already been employed is cryogenics, as they are also able to provide refrigeration at very low temperatures, like the very similar and well-known Stirling refrigerators. The Vuilleumier cycle is a thermodynamic cycle with applications in low-temperature cooling. In some respects it resembles a Stirling cycle or engine, although it has two "displacers" with a mechanical linkage connecting them, as compared to one in the Stirling cycle. The hot displacer is larger than the cold displacer. The coupling maintains the appropriate phase difference. The displacers do no work; they are not pistons. Thus no work is required in the ideal case to operate the cycle. In reality, friction and other losses mean that some work is required. Devices operating on this cycle have been able to produce temperatures as low as 15 K using liquid nitrogen to pre-cool. Without pre-cooling, 77 K was reached with a heat flow of 1 W. The cycle was first patented by Vuilleumier in 1918 with patent US1275507, and again in Leiden by KW Taconis in 1951. In March 2014, the Vuilleumier cycle was tested as a way of updating conventional HVAC (heating, ventilation, and air-conditioning) systems, utilizing the cycle's thermodynamic process of moving heat energy; the results showed increased output efficiencies coupled with a reduced carbon footprint. This work was completed by ThermoLift, a company based out of the Advanced Energy Research and Technology Center at Stony Brook University, with collaboration from the US Department of Energy and the New York State Energy Research and Development Authority (NYSERDA). This work culminated in the demonstration of the ThermoLift system at Oak Ridge National Laboratory in August 2018. The demonstration showed that the ThermoLift technology (TCHP) is able to achieve coefficients of performance (COP) for the cycle that well exceed the DOE's target COPs for cold-climate heat pumps (although not exceeding geothermal heat pump efficiencies). Furthermore, due to the nature of the TCHP, there is no significant capacity decrease as the inlet temperature to the cold heat exchanger (HX) decreases. External links Method and Apparatus for Inducing Heat Changes, Patent Application by R.
Vuilleumier, 1918 All US Patents based on Vuilleumier's original patent, 938 patents as of Sept 2020 Vuilleumier Cycle Cryogenic Refrigeration, Technical Report, Air Force Flight Dynamics Laboratory, Air Force Wright Aeronautical Laboratories, Air Force System Command, April 1976 Fractional Watt Vuilleumier Cryogenic Refrigerator, Final Report for Task 1 Preliminary Design, The Garrett Corporation, AiResearch Manufacturing Co, Prepared for Goddard Space Flight Center, March 1973 The High-Capacity, Spaceborne, Vuilleumier Refrigerator by R. D. Doody, November 1980 References Experimental techniques in low-temperature physics, Guy Kendall White, Philip J. Meeson, Oxford University Press, 2002, p. 30 Link Energy Savings Potential and RD&D Opportunities for Non-Vapor-Compression HVAC Technologies, U.S. Department of Energy, William Goetzler, Robert Zogg, Jim Young, Caitlin Johnson, Navigant Consulting, Inc. March 2014 Link Thermodynamics
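To make the statement that the ideal cycle needs no work input quantitative, here is a minimal sketch (added for this summary, not taken from the article or its references) of the steady-state balances for an idealized, reversible three-temperature machine driven by heat at a hot temperature T_h, lifting heat from a cold temperature T_c and rejecting it at an intermediate temperature T_m:

    % Ideal (work-free, reversible) three-temperature heat pump / refrigerator
    \begin{aligned}
    \dot{Q}_h + \dot{Q}_c &= \dot{Q}_m && \text{(energy balance, no shaft work)}\\
    \frac{\dot{Q}_h}{T_h} + \frac{\dot{Q}_c}{T_c} &= \frac{\dot{Q}_m}{T_m} && \text{(entropy balance in the reversible limit)}\\
    \mathrm{COP}_{\mathrm{cooling}} = \frac{\dot{Q}_c}{\dot{Q}_h} &= \frac{T_c\,(T_h - T_m)}{T_h\,(T_m - T_c)} && \text{(ideal limit that follows from the two balances)}
    \end{aligned}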
Vuilleumier cycle
[ "Physics", "Chemistry", "Mathematics" ]
915
[ "Thermodynamics", "Dynamical systems" ]
28,469,244
https://en.wikipedia.org/wiki/Helmholtz%20reciprocity
The Helmholtz reciprocity principle describes how a ray of light and its reverse ray encounter matched optical adventures, such as reflections, refractions, and absorptions in a passive medium, or at an interface. It does not apply to moving, non-linear, or magnetic media. For example, incoming and outgoing light can be considered as reversals of each other, without affecting the bidirectional reflectance distribution function (BRDF) outcome. If light was measured with a sensor and that light reflected on a material with a BRDF that obeys the Helmholtz reciprocity principle one would be able to swap the sensor and light source and the measurement of flux would remain equal. In the computer graphics scheme of global illumination, the Helmholtz reciprocity principle is important if the global illumination algorithm reverses light paths (for example raytracing versus classic light path tracing). Physics The Stokes–Helmholtz reversion–reciprocity principle was stated in part by Stokes (1849) and with reference to polarization on page 169 of Hermann Helmholtz's Handbuch der physiologischen Optik of 1856 as cited by Gustav Kirchhoff and by Max Planck. As cited by Kirchhoff in 1860, the principle is translated as follows:A ray of light proceeding from point 1 arrives at point 2 after suffering any number of refractions, reflections, &c. At point 1 let any two perpendicular planes a1, b1 be taken in the direction of the ray; and let the vibrations of the ray be divided into two parts, one in each of these planes. Take similar planes a2, b2 in the ray at point 2; then the following proposition may be demonstrated. If when the quantity of light i polarized in the plane a1 proceeds from 1 in the direction of the given ray, that part k thereof of light polarized in a2 arrives at 2, then, conversely, if the quantity of light i polarized in a2 proceeds from 2, the same quantity of light k polarized in a1 [Kirchhoff's published text here corrected by Wikipedia editor to agree with Helmholtz's 1867 text] will arrive at 1. Simply put, in suitable conditions, the principle states that the source and observation point may be switched without changing the measured intensity. Intuitively, "If I can see you, you can see me." Like the principles of thermodynamics, in suitable conditions, this principle is reliable enough to use as a check on the correct performance of experiments, in contrast with the usual situation in which the experiments are tests of a proposed law. In his magisterial proof of the validity of Kirchhoff's law of equality of radiative emissivity and absorptivity, Planck makes repeated and essential use of the Stokes–Helmholtz reciprocity principle. Rayleigh stated the basic idea of reciprocity as a consequence of the linearity of propagation of small vibrations, light consisting of sinusoidal vibrations in a linear medium. When there are magnetic fields in the path of the ray, the principle does not apply. Departure of the optical medium from linearity also causes departure from Helmholtz reciprocity, as well as the presence of moving objects in the path of the ray. Helmholtz reciprocity referred originally to light. This is a particular form of electromagnetism that may be called far-field radiation. For this, the electric and magnetic fields do not need distinct descriptions, because they propagate feeding each other evenly. 
So the Helmholtz principle is a more simply described special case of electromagnetic reciprocity in general, which is described by distinct accounts of the interacting electric and magnetic fields. The Helmholtz principle rests mainly on the linearity and superposability of the light field, and it has close analogues in non-electromagnetic linear propagating fields, such as sound. It was discovered before the electromagnetic nature of light became known. The Helmholtz reciprocity theorem has been rigorously proven in a number of ways, generally making use of quantum mechanical time-reversal symmetry. As these more mathematically complicated proofs may detract from the simplicity of the theorem, A.P Pogany and P. S. Turner have proven it in only a few steps using a Born series. Assuming a light source at a point A and an observation point O, with various scattering points between them, the Schrödinger equation may be used to represent the resulting wave function in space: By applying a Green's function, the above equation can be solved for the wave function in an integral (and thus iterative) form: where . Next, it is valid to assume the solution inside the scattering medium at point O may be approximated by a Born series, making use of the Born approximation in scattering theory. In doing so, the series may be iterated through in the usual way to generate the following integral solution: Noting again the form of the Green's function, it is apparent that switching and in the above form will not change the result; that is to say, , which is the mathematical statement of the reciprocity theorem: switching the light source A and observation point O does not alter the observed wave function. Applications One simple yet important implication of this reciprocity principle is that any light directed through a lens in one direction (from object to image plane) is optically equal to its conjugate, i.e. light being directed through the same set-up but in the opposite direction. An electron being focused through any series of optical components does not “care” from which direction it comes; as long as the same optical events happen to it, the resulting wave function will be the same. For that reason, this principle has important applications in the field of transmission electron microscopy (TEM). The notion that conjugate optical processes produce equivalent results allows the microscope user to grasp a deeper understanding of, and have considerable flexibility in, techniques involving electron diffraction, Kikuchi patterns, dark-field images, and others. An important caveat to note is that in a situation where electrons lose energy after interacting with the scattering medium of the sample, there is not time-reversal symmetry. Therefore, reciprocity only truly applies in situations of elastic scattering. In the case of inelastic scattering with small energy loss, it can be shown that reciprocity may be used to approximate intensity (rather than wave amplitude). So in very thick samples or samples in which inelastic scattering dominates, the benefits of using reciprocity for the previously mentioned TEM applications are no longer valid. Furthermore, it has been demonstrated experimentally that reciprocity does apply in a TEM under the right conditions, but the underlying physics of the principle dictates that reciprocity can only be truly exact if ray transmission occurs through only scalar fields, i.e. no magnetic fields. 
We can therefore conclude that the distortions to reciprocity due to magnetic fields of the electromagnetic lenses in TEM may be ignored under typical operating conditions. However, users should be careful not to apply reciprocity to magnetic imaging techniques, TEM of ferromagnetic materials, or extraneous TEM situations without careful consideration. Generally, polepieces for TEM are designed using finite element analysis of generated magnetic fields to ensure symmetry.   Magnetic objective lens systems have been used in TEM to achieve atomic-scale resolution while maintaining a magnetic field free environment at the plane of the sample, but the method of doing so still requires a large magnetic field above (and below) the sample, thus negating any reciprocity enhancement effects that one might expect. This system works by placing the sample in between the front and back objective lens polepieces, as in an ordinary TEM, but the two polepieces are kept in exact mirror symmetry with respect to the sample plane between them. Meanwhile, their excitation polarities are exactly opposite, generating magnetic fields that cancel almost perfectly at the plane of the sample. However, since they do not cancel elsewhere, the electron trajectory must still pass through magnetic fields. Reciprocity can also be used to understand the main difference between TEM and scanning transmission electron microscopy (STEM), which is characterized in principle by switching the position of the electron source and observation point. This is effectively the same as reversing time on a TEM so that electrons travel in the opposite direction. Therefore, under appropriate conditions (in which reciprocity does apply), knowledge of TEM imaging can be useful in taking and interpreting images with STEM. See also Reciprocity (electromagnetism) References Optics
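The "swap the source and the sensor" statement at the start of this article can also be illustrated numerically. The sketch below is written for this summary (the variable names are invented and it is not code from any cited work); it models a linear, passive, non-magnetic optical system as a symmetric transmission matrix and checks that the flux measured from point i to point j equals the flux from j to i:

    import numpy as np

    rng = np.random.default_rng(0)

    # A linear, passive, non-magnetic optical system can be modeled by a symmetric
    # transmission matrix T: T[j, i] is the fraction of power launched at port i
    # that arrives at port j. Symmetry of T is the discrete analogue of
    # Helmholtz reciprocity ("if I can see you, you can see me").
    n_ports = 5
    A = rng.uniform(0.0, 1.0, size=(n_ports, n_ports))
    T = 0.5 * (A + A.T)          # symmetrize to build a reciprocal system

    def measured_flux(transmission, source, sensor, power=1.0):
        """Power detected at `sensor` when `power` is launched at `source`."""
        return power * transmission[sensor, source]

    src, det = 1, 3
    forward = measured_flux(T, src, det)
    reverse = measured_flux(T, det, src)   # swap source and sensor
    print(f"source->sensor: {forward:.6f}, sensor->source: {reverse:.6f}")
    assert np.isclose(forward, reverse), "reciprocity should hold for a symmetric T"

    # A magnetized (non-reciprocal) element breaks the symmetry, and with it the
    # equality of the two measurements:
    T_nonreciprocal = T.copy()
    T_nonreciprocal[det, src] *= 0.5
    print(measured_flux(T_nonreciprocal, src, det),
          measured_flux(T_nonreciprocal, det, src))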
Helmholtz reciprocity
[ "Physics", "Chemistry" ]
1,777
[ "Applied and interdisciplinary physics", "Optics", " molecular", "Atomic", " and optical physics" ]
28,472,413
https://en.wikipedia.org/wiki/Virbhadra%E2%80%93Ellis%20lens%20equation
The Virbhadra–Ellis lens equation in astronomy and mathematics relates the angular position of an unlensed source, the angular position of its image, the Einstein bending angle of light, and the angular diameter lens-source and observer-source distances. This approximate lens equation is useful for studying gravitational lensing in strong and weak gravitational fields when the angular source position is small. References Gravitational lensing Astrophysics Equations of astronomy
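For reference, a commonly quoted form of the equation is reproduced below from standard accounts (the notation is assumed here, since the symbols were not preserved in the text above, and the expression should be checked against the original Virbhadra and Ellis paper): β is the angular source position, θ the angular image position, \hat{α} the Einstein bending angle, and D_{ds}, D_{s} the lens-source and observer-source angular diameter distances.

    \tan\beta \;=\; \tan\theta \;-\; \frac{D_{ds}}{D_{s}}\left[\tan\theta + \tan\!\left(\hat{\alpha} - \theta\right)\right]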
Virbhadra–Ellis lens equation
[ "Physics", "Astronomy" ]
85
[ "Concepts in astronomy", "Astronomy stubs", "Astrophysics", "Astrophysics stubs", "Equations of astronomy", "Astronomical sub-disciplines" ]
28,473,219
https://en.wikipedia.org/wiki/Vaidya%20metric
In general relativity, the Vaidya metric describes the non-empty external spacetime of a spherically symmetric and nonrotating star which is either emitting or absorbing null dusts. It is named after the Indian physicist Prahalad Chunnilal Vaidya and constitutes the simplest non-static generalization of the non-radiative Schwarzschild solution to Einstein's field equation, and therefore is also called the "radiating(shining) Schwarzschild metric". From Schwarzschild to Vaidya metrics The Schwarzschild metric as the static and spherically symmetric solution to Einstein's equation reads To remove the coordinate singularity of this metric at , one could switch to the Eddington–Finkelstein coordinates. Thus, introduce the "retarded(/outgoing)" null coordinate by and Eq(1) could be transformed into the "retarded(/outgoing) Schwarzschild metric" or, we could instead employ the "advanced(/ingoing)" null coordinate by so Eq(1) becomes the "advanced(/ingoing) Schwarzschild metric" Eq(3) and Eq(5), as static and spherically symmetric solutions, are valid for both ordinary celestial objects with finite radii and singular objects such as black holes. It turns out that, it is still physically reasonable if one extends the mass parameter in Eqs(3) and Eq(5) from a constant to functions of the corresponding null coordinate, and respectively, thus The extended metrics Eq(6) and Eq(7) are respectively the "retarded(/outgoing)" and "advanced(/ingoing)" Vaidya metrics. It is also sometimes useful to recast the Vaidya metrics Eqs(6)(7) into the form where represents the metric of flat spacetime: using . Outgoing Vaidya with pure Emitting field As for the "retarded(/outgoing)" Vaidya metric Eq(6), the Ricci tensor has only one nonzero component while the Ricci curvature scalar vanishes, because . Thus, according to the trace-free Einstein equation , the stress–energy tensor satisfies where and are null (co)vectors (c.f. Box A below). Thus, is a "pure radiation field", which has an energy density of . According to the null energy conditions we have and thus the central body is emitting radiation. Following the calculations using Newman–Penrose (NP) formalism in Box A, the outgoing Vaidya spacetime Eq(6) is of Petrov-type D, and the nonzero components of the Weyl-NP and Ricci-NP scalars are It is notable that, the Vaidya field is a pure radiation field rather than electromagnetic fields. The emitted particles or energy-matter flows have zero rest mass and thus are generally called "null dusts", typically such as photons and neutrinos, but cannot be electromagnetic waves because the Maxwell-NP equations are not satisfied. By the way, the outgoing and ingoing null expansion rates for the line element Eq(6) are respectively Suppose , then the Lagrangian for null radial geodesics of the "retarded(/outgoing)" Vaidya spacetime Eq(6) is where dot means derivative with respect to some parameter . This Lagrangian has two solutions, According to the definition of in Eq(2), one could find that when increases, the areal radius would increase as well for the solution , while would decrease for the solution . Thus, should be recognized as an outgoing solution while serves as an ingoing solution. Now, we can construct a complex null tetrad which is adapted to the outgoing null radial geodesics and employ the Newman–Penrose formalism for perform a full analysis of the outgoing Vaidya spacetime. 
Such an outgoing adapted tetrad can be set up as and the dual basis covectors are therefore In this null tetrad, the spin coefficients are The Weyl-NP and Ricci-NP scalars are given by Since the only nonvanishing Weyl-NP scalar is , the "retarded(/outgoing)" Vaidya spacetime is of Petrov-type D. Also, there exists a radiation field as . For the "retarded(/outgoing)" Schwarzschild metric Eq(3), let , and then the Lagrangian for null radial geodesics will have an outgoing solution and an ingoing solution . Similar to Box A, now set up the adapted outgoing tetrad by so the spin coefficients are and the Weyl-NP and Ricci-NP scalars are given by The "retarded(/outgoing)" Schwarzschild spacetime is of Petrov-type D with being the only nonvanishing Weyl-NP scalar. Ingoing Vaidya with pure absorbing field As for the "advanced/ingoing" Vaidya metric Eq(7), the Ricci tensors again have one nonzero component and therefore and the stress–energy tensor is This is a pure radiation field with energy density , and once again it follows from the null energy condition Eq(11) that , so the central object is absorbing null dusts. As calculated in Box C, the nonzero Weyl-NP and Ricci-NP components of the "advanced/ingoing" Vaidya metric Eq(7) are Also, the outgoing and ingoing null expansion rates for the line element Eq(7) are respectively The advanced/ingoing Vaidya solution Eq(7) is especially useful in black-hole physics as it is one of the few existing exact dynamical solutions. For example, it is often employed to investigate the differences between different definitions of the dynamical black-hole boundaries, such as the classical event horizon and the quasilocal trapping horizon; and as shown by Eq(17), the evolutionary hypersurface is always a marginally outer trapped horizon (). Suppose , then the Lagrangian for null radial geodesics of the "advanced(/ingoing)" Vaidya spacetime Eq(7) is which has an ingoing solution and an outgoing solution in accordance with the definition of in Eq(4). Now, we can construct a complex null tetrad which is adapted to the ingoing null radial geodesics and employ the Newman–Penrose formalism for perform a full analysis of the Vaidya spacetime. Such an ingoing adapted tetrad can be set up as and the dual basis covectors are therefore In this null tetrad, the spin coefficients are The Weyl-NP and Ricci-NP scalars are given by Since the only nonvanishing Weyl-NP scalar is , the "advanced(/ingoing)" Vaidya spacetime is of Petrov-type D, and there exists a radiation field encoded into . For the "advanced(/ingoing)" Schwarzschild metric Eq(5), still let , and then the Lagrangian for the null radial geodesics will have an ingoing solution and an outgoing solution . Similar to Box C, now set up the adapted ingoing tetrad by so the spin coefficients are and the Weyl-NP and Ricci-NP scalars are given by The "advanced(/ingoing)" Schwarzschild spacetime is of Petrov-type D with being the only nonvanishing Weyl-NP scalar. Comparison with the Schwarzschild metric As a natural and simplest extension of the Schwazschild metric, the Vaidya metric still has a lot in common with it: Both metrics are of Petrov-type D with being the only nonvanishing Weyl-NP scalar (as calculated in Boxes A and B). However, there are three clear differences between the Schwarzschild and Vaidya metric: First of all, the mass parameter for Schwarzschild is a constant, while for Vaidya is a u-dependent function. 
Schwarzschild is a solution to the vacuum Einstein equation, while Vaidya is a solution to the trace-free Einstein equation with a nontrivial pure radiation energy field. As a result, all Ricci-NP scalars for Schwarzschild vanish, while Vaidya has a nonvanishing Ricci-NP scalar. Schwarzschild has 4 independent Killing vector fields, including a timelike one, and thus is a static metric, while Vaidya has only 3 independent Killing vector fields regarding the spherical symmetry, and consequently is nonstatic. Consequently, the Schwarzschild metric belongs to Weyl's class of solutions while the Vaidya metric does not. Extension of the Vaidya metric Kinnersley metric While the Vaidya metric is an extension of the Schwarzschild metric to include a pure radiation field, the Kinnersley metric constitutes a further extension of the Vaidya metric; it describes a massive object that accelerates in recoil as it emits massless radiation anisotropically. The Kinnersley metric is a special case of the Kerr–Schild metric, and in Cartesian spacetime coordinates it takes a form in which, for the duration of this section, all indices are raised and lowered using the "flat space" metric, the "mass" is an arbitrary function of the proper-time along the mass's world line as measured using the "flat" metric, the world line of the mass is arbitrary and its derivative gives the four-velocity of the mass, a "flat metric" null-vector field is implicitly defined by Eqn. (20), and the proper-time parameter is implicitly extended to a scalar field throughout spacetime by viewing it as constant on the outgoing light cone of the "flat" metric that emerges from the corresponding event on the world line. Grinding out the Einstein tensor for the metric and integrating the outgoing energy–momentum flux "at infinity," one finds that the metric describes a mass with proper-time dependent four-momentum that emits a net flux of radiation at a proper rate as viewed from the mass's instantaneous rest-frame; the radiation flux has an angular distribution involving complicated scalar functions of the world line and its derivatives, together with the instantaneous rest-frame angle between the 3-acceleration and the outgoing null-vector. The Kinnersley metric may therefore be viewed as describing the gravitational field of an accelerating photon rocket with a very badly collimated exhaust. In the special case of an unaccelerated world line (four-velocity independent of proper-time), the Kinnersley metric reduces to the Vaidya metric. Vaidya–Bonner metric Since the radiated or absorbed matter might be electrically non-neutral, the outgoing and ingoing Vaidya metrics Eqs(6)(7) can be naturally extended to include varying electric charges, Eqs(18)(19). These are called the Vaidya–Bonner metrics, and they can also be regarded as extensions of the Reissner–Nordström metric, analogously to the correspondence between Vaidya and Schwarzschild metrics. See also Schwarzschild metric Null dust solution References Exact solutions in general relativity Black holes Astrophysics
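For reference, the line elements referred to above as Eq(6) and Eq(7) are commonly written as follows; geometrized units (G = c = 1) and the usual sign conventions are assumed here.

```latex
% retarded (outgoing) Vaidya metric, Eq(6), with mass function M(u):
ds^2 = -\left(1-\frac{2M(u)}{r}\right)du^2 - 2\,du\,dr + r^2\left(d\theta^2+\sin^2\theta\,d\phi^2\right)

% advanced (ingoing) Vaidya metric, Eq(7), with mass function M(v):
ds^2 = -\left(1-\frac{2M(v)}{r}\right)dv^2 + 2\,dv\,dr + r^2\left(d\theta^2+\sin^2\theta\,d\phi^2\right)
```

Setting the mass function to a constant recovers the retarded and advanced Schwarzschild forms Eq(3) and Eq(5).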
Vaidya metric
[ "Physics", "Astronomy", "Mathematics" ]
2,405
[ "Exact solutions in general relativity", "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Mathematical objects", "Astrophysics", "Equations", "Density", "Stellar phenomena", "Astronomical objects", "Astronomical sub-disciplines" ]
28,473,945
https://en.wikipedia.org/wiki/Photon%20surface
Photon sphere (definition): A photon sphere of a static spherically symmetric metric is a timelike hypersurface such that the deflection angle of a light ray diverges as its closest distance of approach tends to the radius of that hypersurface. For a general static spherically symmetric metric, the photon sphere radius is determined by an equation relating the metric functions and their radial derivatives (a common form is reconstructed below). The concept of a photon sphere in a static spherically symmetric metric was generalized to a photon surface of any metric. Photon surface (definition): A photon surface of (M, g) is an immersed, nowhere spacelike hypersurface S of (M, g) such that, for every point p ∈ S and every null vector k ∈ TpS, there exists a null geodesic γ: (−ε, ε) → M of (M, g) such that γ̇(0) = k and |γ| ⊂ S. Both definitions give the same result for a general static spherically symmetric metric. Theorem: Subject to an energy condition, a black hole in any spherically symmetric spacetime must be surrounded by a photon sphere. Conversely, subject to an energy condition, any photon sphere must cover more than a certain amount of matter, a black hole, or a naked singularity. References Black holes General relativity
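A hedged reconstruction of the photon sphere equation referred to above, with the metric written (notation assumed here) as ds² = −A(r) dt² + B(r) dr² + D(r) dΩ²: requiring a circular null orbit, i.e. a stationary point of the effective angular potential, gives

```latex
% photon sphere condition for ds^2 = -A(r)\,dt^2 + B(r)\,dr^2 + D(r)\,d\Omega^2
A'(r)\,D(r) = A(r)\,D'(r)
```

For the Schwarzschild metric, A(r) = 1 − 2M/r and D(r) = r², this reduces to r = 3M, the familiar photon sphere radius.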
Photon surface
[ "Physics", "Astronomy" ]
243
[ "Physical phenomena", "Black holes", "Physical quantities", "Unsolved problems in physics", "Astronomy stubs", "Astrophysics", "General relativity", "Stellar astronomy stubs", "Astrophysics stubs", "Density", "Relativity stubs", "Theory of relativity", "Stellar phenomena", "Astronomical ob...
28,474,959
https://en.wikipedia.org/wiki/Exon%20skipping
In molecular biology, exon skipping is a form of RNA splicing used to cause cells to “skip” over faulty or misaligned sections (exons) of genetic code, leading to a truncated but still functional protein despite the genetic mutation. Mechanism Exon skipping is used to restore the reading frame within a gene. Genes are the genetic instructions for creating a protein, and are composed of introns and exons. Exons are the sections of DNA that contain the instruction set for generating a protein; they are interspersed with non-coding regions called introns. The introns are later removed before the protein is made, leaving only the coding exon regions. Splicing naturally occurs in pre-mRNA when introns are being removed to form mature-mRNA that consists solely of exons. Starting in the late 1990s, scientists realized they could take advantage of this naturally occurring cellular splicing to downplay genetic mutations into less harmful ones. The mechanism behind exon skipping is a mutation specific antisense oligonucleotide (AON). An antisense oligonucleotide is a synthesized short nucleic acid polymer, typically fifty or fewer base pairs in length that will bind to the mutation site in the pre-messenger RNA, to induce exon skipping. The AON binds to the mutated exon, so that when the gene is then translated from the mature mRNA, it is “skipped” over, thus restoring the disrupted reading frame. This allows for the generation of an internally deleted, but largely functional protein. Some mutations require exon skipping at multiple sites, sometimes adjacent to one another, in order to restore the reading frame. Multiple exon skipping has successfully been carried out using a combination of AONs that target multiple exons. As a treatment for Duchenne muscular dystrophy Exon skipping is being heavily researched for the treatment of Duchenne muscular dystrophy (DMD), where the muscular protein dystrophin is prematurely truncated, which leads to a non-functioning protein. Successful treatment by way of exon skipping could lead to a mostly functional dystrophin protein, and create a phenotype similar to the less severe Becker muscular dystrophy (BMD). In the case of Duchenne muscular dystrophy, the protein that becomes compromised is dystrophin. The dystrophin protein has two essential functional domains that flank a central rod domain consisting of repetitive and partially dispensable segments. Dystrophin’s function is to maintain muscle fiber stability during contraction by linking the extra cellular matrix to the cytoskeleton. Mutations that disrupt the open reading frame within dystrophin create prematurely truncated proteins that are unable to perform their job. Such mutations lead to muscle fiber damage, replacement of muscle tissue by fat and fibrotic tissue, and premature death typically occurring in the early twenties of DMD patients. Comparatively, mutations that do not upset the open reading frame, lead to a dystrophin protein that is internally deleted and shorter than normal, but still partially functional. Such mutations are associated with the much milder Becker muscular dystrophy. Mildly affected BMD patients carrying deletions that involve over two thirds of the central rod domain have been described, suggesting that this domain is largely dispensable. Dystrophin can maintain a large degree of functionality so long as the essential terminal domains are unaffected, and exon skipping only occurs within the central rod domain. 
Given these parameters, exon skipping can be used to restore an open reading frame by inducing a deletion of one or several exons within the central rod domain, and thus converting a DMD phenotype into a BMD phenotype. The genetic mutation that leads to Becker muscular dystrophy is an in-frame deletion. This means that, out of the 79 exons that code for dystrophin, one or several in the middle may be removed, without affecting the exons that follow the deletion. This allows for a shorter-than-normal dystrophin protein that maintains a degree of functionality. In Duchenne muscular dystrophy, the genetic mutation is out-of-frame. Out-of-frame mutations cause a premature stop in protein generation - the ribosome is unable to “read” the RNA past the point of initial error - leading to a severely shortened and completely non-functional dystrophin protein. The goal of exon skipping is to manipulate the splicing pattern so that an out-of-frame mutation becomes an in-frame mutation, thus changing a severe DMD mutation into a less harmful in-frame BMD mutation. One exon-skipping drug was approved in 2016, by the US FDA: eteplirsen (ExonDys51), a Morpholino oligo from Sarepta Therapeutics targeting exon 51 of human dystrophin. Another exon-skipping Morpholino, golodirsen (Vyondys 53) (targeting dystrophin exon 53), was approved in the United States in December 2019. A third antisense oligonucleotide, viltolarsen (Viltepso), targeting dystrophin exon 53 was approved for medical use in the United States in August 2020. Genetic testing, usually from blood samples, can be used to determine the precise nature and location of the DMD mutation in the dystrophin gene. It is known that these mutations cluster in areas known as the 'hot spot' regions — primarily in exons 45–53 and to a lesser extent exons 2–20. As the majority of DMD mutations occur in these 'hot spot' regions, a treatment which causes these exons to be skipped could be used to treat up to 50% of DMD patients. See also Antisense therapy Poison exon References Genetics
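A toy sketch of the reading-frame arithmetic described above (the exon lengths used here are illustrative assumptions, not authoritative dystrophin exon sizes): a deletion whose total length is not a multiple of three shifts the frame, and skipping one additional exon can bring the running total back to a multiple of three.

```python
# Illustrative exon lengths in base pairs (assumed values for demonstration only).
exon_lengths = {49: 102, 50: 109, 51: 233, 52: 118}

def frame_shift(removed_exons):
    """Net reading-frame shift (0, 1 or 2) caused by removing these exons."""
    return sum(exon_lengths[e] for e in removed_exons) % 3

print(frame_shift([50]))      # 1 -> out of frame, DMD-like premature stop
print(frame_shift([50, 51]))  # 0 -> frame restored by additionally skipping exon 51
```

This is the same logic by which an antisense oligonucleotide targeting one additional exon can convert an out-of-frame deletion into an in-frame, BMD-like one.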
Exon skipping
[ "Biology" ]
1,222
[ "Genetics" ]
46,719,532
https://en.wikipedia.org/wiki/Electromagnetic%20metasurface
An electromagnetic metasurface refers to a kind of artificial sheet material with sub-wavelength features. Metasurfaces can be either structured or unstructured with subwavelength-scaled patterns. In electromagnetic theory, metasurfaces modulate the behaviors of electromagnetic waves through specific boundary conditions rather than constitutive parameters (such as refractive index) in three-dimensional (3D) space, which is commonly exploited in natural materials and metamaterials. Metasurfaces may also refer to the two-dimensional counterparts of metamaterials. There are also 2.5D metasurfaces that involve the third dimension as additional degree of freedom for tailoring their functionality. Definitions Metasurfaces have been defined in several ways by researchers. 1, “An alternative approach that has gained increasing attention in recent years deals with one- and two-dimensional (1D and 2D) plasmonic arrays with subwavelength periodicity, also known as metasurfaces. Due to their negligible thickness compared to the wavelength of operation, metasurfaces can (near resonances of unit cell constituents) be considered as an interface of discontinuity enforcing an abrupt change in both the amplitude and phase of the impinging light”. 2, “Our results can be understood using the concept of a metasurface, a periodic array of scattering elements whose dimensions and periods are small compared with the operating wavelength”. 3, “Metasurfaces based on thin films”. A highly absorbing ultrathin film on a substrate can also be considered as a metasurface, with properties not occurring in natural materials. Following this definition, the thin metallic films such as that in superlens are also the early type of metasurfaces. History The research of electromagnetic metasurfaces has a long history. Early in 1902, Robert W. Wood found that the reflection spectra of subwavelength metallic grating had dark areas. This unusual phenomenon was named Wood's anomaly and led to the discovery of the surface plasmon polariton (SPP), a particular electromagnetic wave excited at metal surfaces. Subsequently, another important phenomenon, the Levi-Civita relation, was introduced, which states that a subwavelength-thick film can result in a dramatic change in electromagnetic boundary conditions. Generally speaking, metasurfaces could include some traditional concepts in the microwave spectrum, such as frequency selective surfaces (FSS), impedance sheets, and even Ohmic sheets. In the microwave regime, the thickness of these metasurfaces can be much smaller than the wavelength of operation (for example, 1/1000 of the wavelength) since the skin depth could be minimal for highly conductive metals. Recently, some novel phenomena were demonstrated, such as ultra-broadband coherent perfect absorption. The results showed that a 0.3 nm thick film could absorb all electromagnetic waves across the RF, microwave, and terahertz frequencies. In optical applications, an anti-reflective coating could also be regarded as a simple metasurface, as first observed by Lord Rayleigh. In recent years, several new metasurfaces have been developed, including plasmonic metasurfaces, metasurfaces based on geometric phases, metasurfaces based on impedance sheets, and glide-symmetric metasurfaces. 
Applications One of the most important applications of metasurfaces is to control a wavefront of electromagnetic waves by imparting local, gradient phase shifts to the incoming waves, which leads to a generalization of the ancient laws of reflection and refraction. In this way, a metasurface can be used as a planar lens, illumination lens, planar hologram, vortex generator, beam deflector, axicon and so on. Besides the gradient metasurface lenses, metasurface-based superlenses offer another degree of control of the wavefront by using evanescent waves. With surface plasmons in the ultrathin metallic layers, perfect imaging and super-resolution lithography could be possible, which breaks the common assumption that all optical lens systems are limited by diffraction, a phenomenon called the diffraction limit. Another promising application is in the field of stealth technology. A target's radar cross-section (RCS) has conventionally been reduced by either radiation-absorbent material (RAM) or by purpose shaping of the targets such that the scattered energy can be redirected away from the source. Unfortunately, RAMs have narrow frequency-band functionality, and purpose shaping limits the aerodynamic performance of the target. Metasurfaces have been synthesized that redirect scattered energy away from the source using either array theory or the generalized Snell's law. This has led to aerodynamically favorable shapes for the targets with reduced RCS. Metasurface can also be integrated with optical waveguides for controlling guided electromagnetic waves. Applications for meta-waveguides such as integrated waveguide mode converters, structured-light generations, versatile multiplexers, and photonic neural networks can be enabled. In addition, metasurfaces are also applied in electromagnetic absorbers, polarization converters, polarimeters, and spectrum filters. Metasurface-empowered novel bioimaging and biosensing devices have also emerged and been reported recently. For many optically based bioimaging devices, their bulk footprint and heavy physical weight have limited their usage in clinical settings. Simulation Various methods are available for simulating the interaction of electromagnetic waves on metasurfaces, and to enable their design, such as finite-difference time-domain (FDTD), finite-element methods (FEM) and rigorous coupled-wave analysis (RCWA). For planar optical metasurfaces, prism-based algorithms allow for triangular prismatic space discretization, which is optimal for planar geometries. The prism-based algorithm has fewer elements than conventional tetrahedral methods, bringing higher computational efficiency. A simulation toolkit has been released online, enabling users to efficiently analyze metasurfaces with customized pixel patterns. Optical characterization Characterizing metasurfaces in the optical domain requires advanced imaging methods since the involved optical properties often include both phase and polarization properties. Recent works suggest that vectorial ptychography, a recently developed computational imaging method, can be of relevance. It combines the Jones matrix mapping with a microscopic lateral resolution, even on large specimens. See also Kinoform References Photonics Metamaterials
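A short sketch of the generalized law of refraction that underlies the wavefront-control applications described above. The wavelength, media, and imposed phase gradient below are made-up numbers, not parameters of any specific device.

```python
import numpy as np

# Generalized Snell's law for a phase-gradient metasurface:
#   n_t*sin(theta_t) - n_i*sin(theta_i) = (lambda0 / (2*pi)) * dPhi/dx
lambda0 = 850e-9              # free-space wavelength [m] (assumed)
n_i, n_t = 1.0, 1.5           # incident and transmission media (assumed)
dphi_dx = 2 * np.pi / 15e-6   # imposed interface phase gradient [rad/m] (assumed)

theta_i = np.deg2rad(10.0)
sin_t = (n_i * np.sin(theta_i) + lambda0 / (2 * np.pi) * dphi_dx) / n_t
theta_t = np.degrees(np.arcsin(sin_t))
print(f"anomalous refraction angle ~ {theta_t:.1f} degrees")
```

With the phase gradient set to zero the expression reduces to the ordinary Snell's law, which is the sense in which metasurfaces "generalize" the classical laws of refraction.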
Electromagnetic metasurface
[ "Materials_science", "Engineering" ]
1,334
[ "Metamaterials", "Materials science" ]
37,908,022
https://en.wikipedia.org/wiki/K-band%20multi-object%20spectrograph
The K-band multi-object spectrograph, or KMOS for short, is an instrument mounted on ESO’s Very Large Telescope Antu (UT1) at the Paranal Observatory in Chile. KMOS is a multi-object spectrograph able to observe 24 objects at the same time in infrared light and to map out how their properties vary from place to place. It will provide crucial data to help understand how galaxies grew and evolved in the early Universe. References External links KMOS Science Pages at ESO Spectrometers Telescope instruments European Southern Observatory
K-band multi-object spectrograph
[ "Physics", "Chemistry", "Astronomy" ]
117
[ "Telescope instruments", "Spectrum (physical sciences)", "Astronomical instruments", "Spectrometers", "Spectroscopy" ]
37,914,029
https://en.wikipedia.org/wiki/Stochastic%20Eulerian%20Lagrangian%20method
In computational fluid dynamics, the Stochastic Eulerian Lagrangian Method (SELM) is an approach to capture essential features of fluid-structure interactions subject to thermal fluctuations while introducing approximations which facilitate analysis and the development of tractable numerical methods. SELM is a hybrid approach utilizing an Eulerian description for the continuum hydrodynamic fields and a Lagrangian description for elastic structures. Thermal fluctuations are introduced through stochastic driving fields. Approaches also are introduced for the stochastic fields of the SPDEs to obtain numerical methods taking into account the numerical discretization artifacts to maintain statistical principles, such as fluctuation-dissipation balance and other properties in statistical mechanics. The SELM fluid-structure equations typically used are The pressure p is determined by the incompressibility condition for the fluid The operators couple the Eulerian and Lagrangian degrees of freedom. The denote the composite vectors of the full set of Lagrangian coordinates for the structures. The is the potential energy for a configuration of the structures. The are stochastic driving fields accounting for thermal fluctuations. The are Lagrange multipliers imposing constraints, such as local rigid body deformations. To ensure that dissipation occurs only through the coupling and not as a consequence of the interconversion by the operators the following adjoint conditions are imposed Thermal fluctuations are introduced through Gaussian random fields with mean zero and the covariance structure To obtain simplified descriptions and efficient numerical methods, approximations in various limiting physical regimes have been considered to remove dynamics on small time-scales or inertial degrees of freedom. In different limiting regimes, the SELM framework can be related to the immersed boundary method, accelerated Stokesian dynamics, and arbitrary Lagrangian Eulerian method. The SELM approach has been shown to yield stochastic fluid-structure dynamics that are consistent with statistical mechanics. In particular, the SELM dynamics have been shown to satisfy detailed-balance for the Gibbs–Boltzmann ensemble. Different types of coupling operators have also been introduced allowing for descriptions of structures involving generalized coordinates and additional translational or rotational degrees of freedom. For numerically discretizing the SELM SPDEs, general methods were also introduced for deriving numerical stochastic fields for SPDEs that take discretization artifacts into account to maintain statistical principles, such as fluctuation-dissipation balance and other properties in statistical mechanics. SELM methods have been used for simulations of viscoelastic fluids and soft materials, particle inclusions within curved fluid interfaces and other microscopic systems and engineered devices. See also Immersed boundary method Stokesian dynamics Volume of fluid method Level-set method Marker-and-cell method References Software : Numerical Codes and Simulation Packages Mango-Selm : Stochastic Eulerian Lagrangian and Immersed Boundary Methods, 3D Simulation Package, (Python interface, LAMMPS MD Integration), P. Atzberger, UCSB Fluid mechanics Computational fluid dynamics Numerical differential equations
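The full SELM SPDEs are not reproduced here; the following deliberately simplified sketch only illustrates the fluctuation–dissipation balance the framework is designed to preserve, for a single overdamped Lagrangian degree of freedom in an assumed harmonic potential. All parameter values are arbitrary.

```python
import numpy as np

# Overdamped Langevin update for one Lagrangian variable X with drag gamma.
# The noise amplitude sqrt(2*kB*T*dt/gamma) is chosen so that discrete
# fluctuation-dissipation balance holds and X samples exp(-Phi(X)/(kB*T))
# for the assumed potential Phi(X) = k*X^2/2.
kB_T, gamma, dt = 1.0, 1.0, 1e-3
k = 4.0

rng = np.random.default_rng(0)
X, samples = 0.0, []
for step in range(200_000):
    force = -k * X                                   # -dPhi/dX
    X += (force / gamma) * dt + np.sqrt(2 * kB_T * dt / gamma) * rng.standard_normal()
    samples.append(X)

print(np.var(samples[50_000:]), kB_T / k)            # both ~ 0.25: equipartition holds
```

In the discretized SELM methods the same requirement, that the injected stochastic forcing match the discrete dissipation so the Gibbs–Boltzmann distribution is sampled, is imposed on the coupled Eulerian and Lagrangian fields rather than on a single scalar variable.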
Stochastic Eulerian Lagrangian method
[ "Physics", "Chemistry", "Engineering" ]
616
[ "Computational fluid dynamics", "Computational physics", "Civil engineering", "Fluid mechanics", "Fluid dynamics" ]
37,914,061
https://en.wikipedia.org/wiki/Principle%20of%20normality
The principle of normality in solid mechanics states that if a normal to the yield locus is constructed at the point of yielding, the plastic strain increments produced by yielding are in the same ratio as the components of that normal in stress space; in other words, the plastic strain increment vector is directed along the outward normal to the yield surface. References Solid mechanics
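In the associated flow rule form in which this principle is usually stated (the symbols are the conventional ones and are assumed here: f the yield function, σ_ij the stress components, dλ ≥ 0 a plastic multiplier):

```latex
d\varepsilon^{p}_{ij} = d\lambda\,\frac{\partial f}{\partial \sigma_{ij}}, \qquad d\lambda \ge 0
```

so the plastic strain increment is proportional to, and therefore shares the ratios of, the gradient of f, which is the outward normal to the yield locus in stress space.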
Principle of normality
[ "Physics" ]
49
[ "Solid mechanics", "Mechanics" ]
45,386,524
https://en.wikipedia.org/wiki/Chlorobis%28cyclooctene%29rhodium%20dimer
Chlorobis(cyclooctene)rhodium dimer is an organorhodium compound with the formula Rh2Cl2(C8H14)4, where C8H14 is cis-cyclooctene. Sometimes abbreviated Rh2Cl2(coe)4, it is a red-brown, air-sensitive solid that is a precursor to many other organorhodium compounds and catalysts. The complex is prepared by treating an alcohol solution of hydrated rhodium trichloride with cyclooctene at room temperature. The coe ligands are easily displaced by other more basic ligands, more so than the diene ligands in the related complex cyclooctadiene rhodium chloride dimer. Catalyst for C-H activation C-H activation is often catalyzed by chlorobis(cyclooctene)rhodium dimer, as demonstrated in the synthesis of a strained bicyclic enamine. The synthesis of a mescaline analogue involves enantioselective annulation of an aryl imine via C-H activation. The total synthesis of lithospermic acid employs "guided C-H functionalization" at a late stage on a highly functionalized system. The directing group, a chiral nonracemic imine, is capable of performing an intramolecular alkylation, which allows for the rhodium-catalyzed conversion of the imine to the dihydrobenzofuran. References Organorhodium compounds Homogeneous catalysis Alkene complexes Dimers (chemistry) Chloro complexes Rhodium(I) compounds
Chlorobis(cyclooctene)rhodium dimer
[ "Chemistry", "Materials_science" ]
346
[ "Catalysis", "Homogeneous catalysis", "Dimers (chemistry)", "Polymer chemistry" ]
45,386,574
https://en.wikipedia.org/wiki/Chlorobis%28cyclooctene%29iridium%20dimer
Chlorobis(cyclooctene)iridium dimer is an organoiridium compound with the formula Ir2Cl2(C8H14)4, where C8H14 is cis-cyclooctene. Sometimes abbreviated Ir2Cl2(coe)4, it is a yellow, air-sensitive solid that is used as a precursor to many other organoiridium compounds and catalysts. The compound is prepared by heating an alcohol solution of sodium hexachloroiridate with cyclooctene in ethanol. The coe ligands are easily displaced by other more basic ligands, more so than the diene ligands in the related complex cyclooctadiene iridium chloride dimer. For example, with triphenylphosphine (PPh3), it reacts to give IrCl(PPh3)3: Ir2Cl2(C8H14)4 + 6 PPh3 → 2 IrCl(PPh3)3 + 4 C8H14 References Organoiridium compounds Homogeneous catalysis Alkene complexes Dimers (chemistry) Chloro complexes Eight-membered rings
Chlorobis(cyclooctene)iridium dimer
[ "Chemistry", "Materials_science" ]
242
[ "Catalysis", "Homogeneous catalysis", "Dimers (chemistry)", "Polymer chemistry" ]
33,867,932
https://en.wikipedia.org/wiki/Geospatial%20topology
Geospatial topology is the study and application of qualitative spatial relationships between geographic features, or between representations of such features in geographic information, such as in geographic information systems (GIS). For example, the fact that two regions overlap or that one contains the other are examples of topological relationships. It is thus the application of the mathematics of topology to GIS, and is distinct from, but complementary to the many aspects of geographic information that are based on quantitative spatial measurements through coordinate geometry. Topology appears in many aspects of geographic information science and GIS practice, including the discovery of inherent relationships through spatial query, vector overlay and map algebra; the enforcement of expected relationships as validation rules stored in geospatial data; and the use of stored topological relationships in applications such as network analysis. Spatial topology is the generalization of geospatial topology for non-geographic domains, e.g., CAD software. Topological relationships In keeping with the definition of topology, a topological relationship between two geographic phenomena is any spatial relation that is not sensitive to measurable aspects of space, including transformations of space (e.g. map projection). Thus, it includes most qualitative spatial relations, such as two features being "adjacent," "overlapping," "disjoint," or one being "within" another; conversely, one feature being "5km from" another, or one feature being "due north of" another are metric relations. One of the first developments of Geographic Information Science in the early 1990s was the work of Max Egenhofer, Eliseo Clementini, Peter di Felice, and others to develop a concise theory of such relations commonly called the 9-Intersection Model, which characterizes the range of topological relationships based on the relationships between the interiors, exteriors, and boundaries of features. These relationships can also be classified semantically: Inherent relationships are those that are important to the existence or identity of one or both of the related phenomena, such as one expressed in a boundary definition or being a manifestation of a mereological relationship. For example, Nebraska lies within the United States simply because the former was created by the latter as a partition of the territory of the latter. The Missouri River is adjacent to the state of Nebraska because the definition of the boundary of the state says so. These relationships are often stored and enforced in topologically-savvy data. Coincidental relationships are those that are not crucial to the existence of either, although they can be very important. For example, the fact that the Platte River passes through Nebraska is coincidental because both would still exist unproblematically if the relationship did not exist. These relationships are rarely stored as such, but are usually discovered and documented by spatial analysis methods. Topological data structures and validation Topology was a very early concern for GIS. The earliest vector systems, such as the Canadian Geographic Information System, did not manage topological relationships, and problems such as sliver polygons proliferated, especially in operations such as vector overlay. In response, topological vector data models were developed, such as GBF/DIME (U.S. Census Bureau, 1967) and POLYVRT (Harvard University, 1976). 
The strategy of the topological data model is to store topological relationships (primarily adjacency) between features, and use that information to construct more complex features. Nodes (points) are created where lines intersect and are attributed with a list of the connecting lines. Polygons are constructed from any sequence of lines that forms a closed loop. These structures had three advantages over non-topological vector data (often called "spaghetti data"): First, they were efficient (a crucial factor given the storage and processing capacities of the 1970s), because the shared boundary between two adjacent polygons was only stored once; second, they facilitated the enforcement of data integrity by preventing or highlighting topological errors, such as overlapping polygons, dangling nodes (a line not properly connected to other lines), and sliver polygons (small spurious polygons created where two lines should match but do not); and third, they made the algorithms for operations such as vector overlay simpler. Their primary disadvantage was their complexity, being difficult for many users to understand and requiring extra care during data entry. These became the dominant vector data model of the 1980s. By the 1990s, the combination of cheaper storage and new users who were not concerned with topology led to a resurgence in spaghetti data structures, such as the shapefile. However, the need for stored topological relationships and integrity enforcement still exists. A common approach in current data is to store such as an extended layer on top of data that is not inherently topological. For example, the Esri geodatabase stores vector data ("feature classes") as spaghetti data, but can build a "network dataset" structure of connections on top of a line feature class. The geodatabase can also store a list of topological rules, constraints on topological relationships within and between layers (e.g., counties cannot have gaps, state boundaries must coincide with county boundaries, counties must collectively cover states) that can be validated and corrected. Other systems, such as PostGIS, take a similar approach. A very different approach is to not store topological information in the data at all, but to construct it dynamically, usually during the editing process, to highlight and correct possible errors; this is a feature of GIS software such as ArcGIS Pro and QGIS. Topology in spatial analysis Several spatial analysis tools are ultimately based on the discovery of topological relationships between features: spatial query, in which one is searching for the features in one dataset based on desired topological relationships to the features of a second dataset. For example, "where are the student locations within the boundaries of School X?" spatial join, in which the attribute tables of two datasets are combined, with rows being matched based on a desired topological relationship between features in the two datasets, rather than using a stored key as in a normal table join in a relational database. For example, joining the attributes of a schools layer to the table of students based on which school boundary each student resides within. vector overlay, in which two layers (usually polygons) are merged, with new features being created where features from the two input datasets intersect. transport network analysis, a large class of tools in which connected lines (e.g., roads, utility infrastructure, streams) are analyzed using the mathematics of graph theory. 
The most common example is determining the optimal route between two locations through a street network, as implemented in most street web maps. Oracle and PostGIS provide fundamental topological operators allowing applications to test for "such relationships as contains, inside, covers, covered by, touch, and overlap with boundaries intersecting." Unlike the PostGIS documentation, the Oracle documentation draws a distinction between "topological relationships [which] remain constant when the coordinate space is deformed, such as by twisting or stretching" and "relationships that are not topological [which] include length of, distance between, and area of." These operators are leveraged by applications to ensure that data sets are stored and processed in a topologically correct fashion. However, topological operators are inherently complex and their implementation requires care to be taken with usability and conformance to standards. See also Digital topology DE-9IM (Dimensionally Extended 9-Intersection Model) References Geographic data and information Cartography Geometric topology Spatial analysis
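A small illustration of the predicates and the DE-9IM matrix mentioned above, using the Shapely library; the coordinates are arbitrary, and any two partially overlapping squares plus one nested square give the same pattern of results.

```python
from shapely.geometry import Polygon

a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
b = Polygon([(2, 2), (6, 2), (6, 6), (2, 6)])
c = Polygon([(1, 1), (2, 1), (2, 2), (1, 2)])

print(a.overlaps(b))   # True  - interiors intersect, neither contains the other
print(a.contains(c))   # True  - c lies within a
print(a.touches(b))    # False - a and b share interior points, not just boundary
print(a.relate(b))     # DE-9IM matrix string for the two squares, e.g. '212101212'
```

Stored topological rules in a geodatabase amount to asserting that certain of these predicates must (or must not) hold for every pair of features in the governed layers.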
Geospatial topology
[ "Physics", "Mathematics", "Technology" ]
1,530
[ "Geometric topology", "Spatial analysis", "Geographic data and information", "Topology", "Space", "Data", "Spacetime" ]
33,868,489
https://en.wikipedia.org/wiki/Stephen%20Busby
Stephen Busby FRS is a British biochemist and professor at the University of Birmingham. His research is concerned with the molecular mechanisms controlling gene expression in bacteria, especially regulation of transcription initiation in Escherichia coli. Career Stephen Busby started his career working for several years at the Pasteur Institute in Paris, where he remained until moving to the University of Birmingham in 1983. After obtaining his doctorate at Oxford, he worked in the laboratory of George Radda, in collaboration with Rex Richards, on nuclear magnetic resonance of metabolites. Subsequently, his interest moved towards regulatory mechanisms and transcription in bacteria, participating in making recommendations about transcription initiation, and developing new methods for studying recombinant protein production. Administrative activities Busby was Head of the School of Biosciences at the University of Birmingham between 2012 and 2016. Over much of the same period he was chair of the Biochemical Society (2011–2016). He has been a Member of BBSRC Committee E (Fellowships). References External links http://www.jic.ac.uk/corporate/about/organisation/scienceandimpactadvisoryboard.html http://www.astbury.leeds.ac.uk/Intro/SAB.html https://web.archive.org/web/20120426002158/http://www2.bioch.ox.ac.uk/oubs/pastevents.php British biochemists Fellows of the Royal Society Academics of the University of Birmingham Living people Year of birth missing (living people) Place of birth missing (living people)
Stephen Busby
[ "Chemistry" ]
338
[ "Biochemistry stubs", "Biochemists", "Biochemist stubs" ]
33,872,518
https://en.wikipedia.org/wiki/De%20Donder%E2%80%93Weyl%20theory
In mathematical physics, the De Donder–Weyl theory is a generalization of the Hamiltonian formalism in the calculus of variations and classical field theory over spacetime which treats the space and time coordinates on equal footing. In this framework, the Hamiltonian formalism in mechanics is generalized to field theory in the way that a field is represented as a system that varies both in space and in time. This generalization is different from the canonical Hamiltonian formalism in field theory, which treats space and time variables differently and describes classical fields as infinite-dimensional systems evolving in time. De Donder–Weyl formulation of field theory The De Donder–Weyl theory is based on a change of variables known as a Legendre transformation. Let x^i be spacetime coordinates, for i = 1 to n (with n = 4 representing 3 + 1 dimensions of space and time), let y^a be field variables, for a = 1 to m, and let L be the Lagrangian density. With the polymomenta defined as p^i_a = ∂L/∂(∂y^a/∂x^i) and the De Donder–Weyl Hamiltonian function defined as H = p^i_a ∂y^a/∂x^i − L (summation over repeated indices), the De Donder–Weyl equations are ∂y^a/∂x^i = ∂H/∂p^i_a and ∂p^i_a/∂x^i = −∂H/∂y^a. This De Donder–Weyl Hamiltonian form of the field equations is covariant and is equivalent to the Euler–Lagrange equations when the Legendre transformation to the variables p^i_a and H is not singular. The theory is a formulation of a covariant Hamiltonian field theory which is different from the canonical Hamiltonian formalism, and for n = 1 it reduces to Hamiltonian mechanics (see also action principle in the calculus of variations). In 1935 Hermann Weyl developed the Hamilton–Jacobi theory for the De Donder–Weyl theory. Similarly to the Hamiltonian formalism in mechanics formulated using the symplectic geometry of phase space, the De Donder–Weyl theory can be formulated using multisymplectic geometry or polysymplectic geometry and the geometry of jet bundles. A generalization of the Poisson brackets to the De Donder–Weyl theory and the representation of the De Donder–Weyl equations in terms of generalized Poisson brackets satisfying the Gerstenhaber algebra was found by Kanatchikov in 1993. History The formalism, now known as De Donder–Weyl (DW) theory, was developed by Théophile De Donder and Hermann Weyl. Hermann Weyl made his proposal in 1934, inspired by the work of Constantin Carathéodory, which in turn was founded on the work of Vito Volterra. The work of De Donder, on the other hand, started from the theory of integral invariants of Élie Cartan. The De Donder–Weyl theory has been a part of the calculus of variations since the 1930s and initially it found very few applications in physics. Recently it was applied in theoretical physics in the context of quantum field theory and quantum gravity. In 1970, Jedrzej Śniatycki, the author of Geometric quantization and quantum mechanics, developed an invariant geometrical formulation of jet bundles, building on the work of De Donder and Weyl. In 1999 Igor Kanatchikov showed that the De Donder–Weyl covariant Hamiltonian field equations can be formulated in terms of Duffin–Kemmer–Petiau matrices. See also Hamiltonian field theory Covariant Hamiltonian field theory Further reading Selected papers on GEODESIC FIELDS, Translated and edited by D. H. Delphenich. Part 1, Part 2 H.A. Kastrup, Canonical theories of Lagrangian dynamical systems in physics, Physics Reports, Volume 101, Issues 1–2, Pages 1-167 (1983). Mark J. Gotay, James Isenberg, Jerrold E. Marsden, Richard Montgomery: "Momentum Maps and Classical Relativistic Fields. 
Part I: Covariant Field Theory" Cornelius Paufler, Hartmann Römer: De Donder–Weyl equations and multisymplectic geometry , Reports on Mathematical Physics, vol. 49 (2002), no. 2–3, pp. 325–334 Krzysztof Maurin: The Riemann legacy: Riemannian ideas in mathematics and physics, Part II, Chapter 7.16 Field theories for calculus of variation for multiple integrals, Kluwer Academic Publishers, , 1997, p. 482 ff. References Calculus of variations Mathematical physics
De Donder–Weyl theory
[ "Physics", "Mathematics" ]
907
[ "Applied mathematics", "Theoretical physics", "Mathematical physics" ]
33,876,683
https://en.wikipedia.org/wiki/Piezoluminescence
Piezoluminescence is a form of luminescence created by pressure upon certain solids. This phenomenon is characterized by recombination processes involving electrons, holes and impurity ion centres. Some piezoelectric crystals give off a certain amount of piezoluminescence when under pressure. Irradiated salts, such as NaCl, KCl, KBr and polycrystalline chips of LiF (TLD-100), have been found to exhibit piezoluminescent properties. It has also been discovered that ferroelectric polymers exhibit piezoluminescence upon the application of stress. In the folk-literature surrounding psychedelic production, DMT, 5-MeO-DMT, and LSD have been reported to exhibit piezoluminescence. As specifically noted in the book Acid Dreams, it is stated that Augustus Owsley Stanley III, one of the most prolific producers of LSD in the 1960s, observed piezoluminescence in the compound's purest form, an observation confirmed by Alexander Shulgin: "A totally pure salt, when dry and when shaken in the dark, will emit small flashes of white light." See also Mechanoluminescence Triboluminescence References Luminescence
Piezoluminescence
[ "Chemistry" ]
263
[ "Luminescence", "Molecular physics" ]
43,542,926
https://en.wikipedia.org/wiki/Einstein%20problem
In plane geometry, the einstein problem asks about the existence of a single prototile that by itself forms an aperiodic set of prototiles; that is, a shape that can tessellate space but only in a nonperiodic way. Such a shape is called an einstein, a word play on ein Stein, German for "one stone". Several variants of the problem, depending on the particular definitions of nonperiodicity and the specifications of what sets may qualify as tiles and what types of matching rules are permitted, were solved beginning in the 1990s. The strictest version of the problem was solved in 2023, after an initial discovery in 2022. The einstein problem can be seen as a natural extension of the second part of Hilbert's eighteenth problem, which asks for a single polyhedron that tiles Euclidean 3-space, but such that no tessellation by this polyhedron is isohedral. Such anisohedral tiles were found by Karl Reinhardt in 1928, but these anisohedral tiles all tile space periodically. Proposed solutions In 1988, Peter Schmitt discovered a single aperiodic prototile in three-dimensional Euclidean space. While no tiling by this prototile admits a translation as a symmetry, some have a screw symmetry. The screw operation involves a combination of a translation and a rotation through an irrational multiple of π, so no number of repeated operations ever yield a pure translation. This construction was subsequently extended by John Horton Conway and Ludwig Danzer to a convex aperiodic prototile, the Schmitt–Conway–Danzer tile. The presence of the screw symmetry resulted in a reevaluation of the requirements for non-periodicity. Chaim Goodman-Strauss suggested that a tiling be considered strongly aperiodic if it admits no infinite cyclic group of Euclidean motions as symmetries, and that only tile sets which enforce strong aperiodicity be called strongly aperiodic, while other sets are to be called weakly aperiodic. In 1996, Petra Gummelt constructed a decorated decagonal tile and showed that when two kinds of overlaps between pairs of tiles are allowed, the tiles can cover the plane, but only non-periodically. A tiling is usually understood to be a covering with no overlaps, and so the Gummelt tile is not considered an aperiodic prototile. An aperiodic tile set in the Euclidean plane that consists of just one tile–the Socolar–Taylor tile–was proposed in early 2010 by Joshua Socolar and Joan Taylor. This construction requires matching rules, rules that restrict the relative orientation of two tiles and that make reference to decorations drawn on the tiles, and these rules apply to pairs of nonadjacent tiles. Alternatively, an undecorated tile with no matching rules may be constructed, but the tile is not connected. The construction can be extended to a three-dimensional, connected tile with no matching rules, but this tile allows tilings that are periodic in one direction, and so it is only weakly aperiodic. Moreover, the tile is not simply connected. The hat and the spectre In November 2022, hobbyist David Smith discovered a "hat"-shaped tile formed from eight copies of a 60°–90°–120°–90° kite (deltoidal trihexagonals), glued edge-to-edge, which seemed to only tile the plane aperiodically. Smith recruited help from mathematicians Craig S. Kaplan, Joseph Samuel Myers, and Chaim Goodman-Strauss, and in March 2023 the group posted a preprint proving that the hat, when considered with its mirror image, forms an aperiodic prototile set. 
Furthermore, the hat can be generalized to an infinite family of tiles with the same aperiodic property. As of July 2024 this result has been formally published in the journal Combinatorial Theory. In May 2023 the same team (Smith, Myers, Kaplan, and Goodman-Strauss) posted a new preprint about a family of shapes, called "spectres" and related to the "hat", each of which can tile the plane using only rotations and translations. Furthermore, the "spectre" tile is a "strictly chiral" aperiodic monotile: even if reflections are allowed, every tiling is non-periodic and uses only one chirality of the spectre. That is, there are no tilings of the plane that use both the spectre and its mirror image. In 2023, a public contest run by the National Museum of Mathematics in New York City and the United Kingdom Mathematics Trust in London asked people to submit creative renditions of the hat einstein. Out of over 245 submissions from 32 countries, three winners were chosen and received awards at a ceremony at the House of Commons. Applications Einstein tile's molecular analogs may be used to form chiral, two dimensional quasicrystals. See also Binary tiling, a weakly aperiodic tiling of the hyperbolic plane with a single tile Schmitt–Conway–Danzer tile, in three dimensions References External links An aperiodic monotile by Smith, Myers, Kaplan, and Goodman-Strauss Haran, Brady; MacDonald, Ayliean (2023). "A New Tile in Newtyle" (video). Numberphile. Aperiodic tilings
Einstein problem
[ "Physics" ]
1,099
[ "Tessellation", "Aperiodic tilings", "Symmetry" ]
43,548,860
https://en.wikipedia.org/wiki/MassMatrix
MassMatrix is mass spectrometry data analysis software that uses a statistical model to achieve increased mass accuracy over other database search algorithms. This search engine is set apart from others due to its ability to discriminate efficiently between true and false positives in the high-mass-accuracy data obtained from present-day mass spectrometer instruments. It is also useful for identifying disulphide bonds in tandem mass spectrometry data. References Mass spectrometry software
MassMatrix
[ "Physics", "Chemistry" ]
140
[ "Spectrum (physical sciences)", "Chemistry software", "Theoretical chemistry stubs", "Computational chemistry stubs", "Mass spectrometry software", "Mass spectrometry", "Computational chemistry", "Physical chemistry stubs" ]
36,469,390
https://en.wikipedia.org/wiki/Quantum%20metamaterial
Quantum metamaterials apply the science of metamaterials and the rules of quantum mechanics to control electromagnetic radiation. In the broad sense, a quantum metamaterial is a metamaterial in which certain quantum properties of the medium must be taken into account and whose behaviour is thus described by both Maxwell's equations and the Schrödinger equation. Its behaviour reflects the existence of both EM waves and matter waves. The constituents can be at nanoscopic or microscopic scales, depending on the frequency range (e.g., optical or microwave). In a more strict approach, a quantum metamaterial should demonstrate coherent quantum dynamics. Such a system is essentially a spatially extended controllable quantum object that allows additional ways of controlling the propagation of electromagnetic waves. Quantum metamaterials can be narrowly defined as optical media that: Are composed of quantum coherent unit elements with engineered parameters; Exhibit controllable quantum states of these elements; Maintain quantum coherence for longer than the traversal time of a relevant electromagnetic signal. Research Fundamental research in quantum metamaterials creates opportunities for novel investigations in quantum phase transition, new perspectives on adiabatic quantum computation and a route to other quantum technology applications. Such a system is essentially a spatially-extended controllable quantum object that allows additional ways of controlling electromagnetic wave propagation. In other words, quantum metamaterials incorporate quantum coherent states in order to control and manipulate electromagnetic radiation. With these materials, quantum information processing is combined with the science of metamaterials (periodic artificial electromagnetic materials). The unit cells can be imagined to function as qubits that maintain quantum coherence "long enough for the electromagnetic pulse to travel across". The quantum state is achieved through the material's individual cells. As each cell interacts with the propagating electromagnetic pulse, the whole system retains quantum coherence. Several types of metamaterials are being studied. Nanowires can use quantum dots as the unit cells or artificial atoms of the structure, arranged as periodic nanostructures. This material demonstrates a negative index of refraction and effective magnetism and is simple to build. The radiated wavelength of interest is much larger than the constituent diameter. Another type uses periodically arranged cold atom cells, accomplished with ultra-cold gasses. A photonic bandgap can be demonstrated with this structure, along with tunability and control as a quantum system. Quantum metamaterial prototypes based on superconducting devices with and without Josephson junctions are being actively investigated. Recently a superconducting quantum metamaterial prototype based on flux qubits was realized. See also Negative index metamaterials Introduction to quantum mechanics Nanotechnology History of metamaterials References External links META 12. Special Sessions. Conference on Quantum Metamaterials Quantum metamaterials SPIE Metamaterials Quantum mechanics
Quantum metamaterial
[ "Physics", "Materials_science", "Engineering" ]
586
[ "Applied and interdisciplinary physics", "Metamaterials", "Quantum mechanics", "Materials science", "Applications of quantum mechanics" ]
48,400,112
https://en.wikipedia.org/wiki/Accretion%20disk
An accretion disk is a structure (often a circumstellar disk) formed by diffuse material in orbital motion around a massive central body. The central body is most frequently a star. Friction, uneven irradiance, magnetohydrodynamic effects, and other forces induce instabilities causing orbiting material in the disk to spiral inward toward the central body. Gravitational and frictional forces compress and raise the temperature of the material, causing the emission of electromagnetic radiation. The frequency range of that radiation depends on the central object's mass. Accretion disks of young stars and protostars radiate in the infrared; those around neutron stars and black holes in the X-ray part of the spectrum. The study of oscillation modes in accretion disks is referred to as diskoseismology. Manifestations Accretion disks are a ubiquitous phenomenon in astrophysics; active galactic nuclei, protoplanetary disks, and gamma ray bursts all involve accretion disks. These disks very often give rise to astrophysical jets coming from the vicinity of the central object. Jets are an efficient way for the star-disk system to shed angular momentum without losing too much mass. The most prominent accretion disks are those of active galactic nuclei and of quasars, which are thought to be massive black holes at the center of galaxies. As matter enters the accretion disc, it follows a trajectory called a tendex line, which describes an inward spiral. This is because particles rub and bounce against each other in a turbulent flow, causing frictional heating which radiates energy away, reducing the particles' angular momentum, allowing the particle to drift inward, driving the inward spiral. The loss of angular momentum manifests as a reduction in velocity; at a slower velocity, the particle must adopt a lower orbit. As the particle falls to this lower orbit, a portion of its gravitational potential energy is converted to increased velocity and the particle gains speed. Thus, the particle has lost energy even though it is now travelling faster than before; however, it has lost angular momentum. As a particle orbits closer and closer, its velocity increases; as velocity increases frictional heating increases as more and more of the particle's potential energy (relative to the black hole) is radiated away; the accretion disk of a black hole is hot enough to emit X-rays just outside the event horizon. The large luminosity of quasars is believed to be a result of gas being accreted by supermassive black holes. Elliptical accretion disks formed at tidal disruption of stars can be typical in galactic nuclei and quasars. The accretion process can convert about 10 percent to over 40 percent of the mass of an object into energy as compared to around 0.7 percent for nuclear fusion processes. In close binary systems the more massive primary component evolves faster and has already become a white dwarf, a neutron star, or a black hole, when the less massive companion reaches the giant state and exceeds its Roche lobe. A gas flow then develops from the companion star to the primary. Angular momentum conservation prevents a straight flow from one star to the other and an accretion disk forms instead. Accretion disks surrounding T Tauri stars or Herbig stars are called protoplanetary disks because they are thought to be the progenitors of planetary systems. The accreted gas in this case comes from the molecular cloud out of which the star has formed rather than a companion star. 
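The trade-off described above — a particle spiralling inward moves faster even as it loses total energy and angular momentum — can be checked with the Newtonian circular-orbit relations v = sqrt(GM/r), specific energy E = -GM/(2r), and specific angular momentum L = sqrt(GM r). The short sketch below is a minimal illustration, not part of the original article; the chosen radii and the one-solar-mass central object are arbitrary example values.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30         # mass of central object (assumed: 1 solar mass), kg

def circular_orbit(r):
    """Speed, specific energy and specific angular momentum of a circular orbit of radius r (m)."""
    v = math.sqrt(G * M / r)        # orbital speed
    E = -G * M / (2.0 * r)          # specific orbital energy (J/kg); more negative = more bound
    L = math.sqrt(G * M * r)        # specific angular momentum (m^2/s)
    return v, E, L

for r in (1.0e10, 1.0e9):           # outer and inner radius (m), arbitrary example values
    v, E, L = circular_orbit(r)
    print(f"r = {r:.1e} m: v = {v:.3e} m/s, E = {E:.3e} J/kg, L = {L:.3e} m^2/s")
# Moving inward (smaller r): v increases, while E decreases (becomes more negative) and L decreases,
# consistent with the article's description of the inward spiral.
```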
Accretion disk physics In the 1940s, models were first derived from basic physical principles. In order to agree with observations, those models had to invoke a yet unknown mechanism for angular momentum redistribution. If matter is to fall inward it must lose not only gravitational energy but also angular momentum. Since the total angular momentum of the disk is conserved, the angular momentum loss of the mass falling into the center has to be compensated by an angular momentum gain of the mass far from the center. In other words, angular momentum should be transported outward for matter to accrete. According to the Rayleigh stability criterion, d(R²Ω)/dR > 0, where Ω represents the angular velocity of a fluid element and R its distance to the rotation center, an accretion disk is expected to be a laminar flow. This prevents the existence of a hydrodynamic mechanism for angular momentum transport. On one hand, it was clear that viscous stresses would eventually cause the matter toward the center to heat up and radiate away some of its gravitational energy. On the other hand, viscosity itself was not enough to explain the transport of angular momentum to the exterior parts of the disk. Turbulence-enhanced viscosity was the mechanism thought to be responsible for such angular-momentum redistribution, although the origin of the turbulence itself was not well understood. The conventional α-model (discussed below) introduces an adjustable parameter α describing the effective increase of viscosity due to turbulent eddies within the disk. In 1991, with the rediscovery of the magnetorotational instability (MRI), S. A. Balbus and J. F. Hawley established that a weakly magnetized disk accreting around a heavy, compact central object would be highly unstable, providing a direct mechanism for angular-momentum redistribution. α-Disk model Shakura and Sunyaev (1973) proposed turbulence in the gas as the source of an increased viscosity. Assuming subsonic turbulence and the disk height as an upper limit for the size of the eddies, the disk viscosity can be estimated as ν = α c_s H, where c_s is the sound speed, H is the scale height of the disk, and α is a free parameter between zero (no accretion) and approximately one. In a turbulent medium ν ≈ v_turb l_turb, where v_turb is the velocity of turbulent cells relative to the mean gas motion and l_turb is the size of the largest turbulent cells, which is estimated as l_turb ≈ H = c_s/Ω and v_turb ≈ c_s, where Ω = (GM/r³)^(1/2) is the Keplerian orbital angular velocity and r is the radial distance from the central object of mass M. By using the equation of hydrostatic equilibrium, combined with conservation of angular momentum and assuming that the disk is thin, the equations of disk structure may be solved in terms of the α parameter. Many of the observables depend only weakly on α, so this theory is predictive even though it has a free parameter. Using Kramers' opacity law it is found that the scale height H, the mid-plane temperature T_c and the mid-plane density ρ are each given by power laws in α, the accretion rate Ṁ, the mass of the central accreting object (in units of a solar mass), the radius of the point in the disk, and a factor that vanishes at the inner radius R_in, the radius where angular momentum stops being transported inward. The Shakura–Sunyaev α-disk model is both thermally and viscously unstable. An alternative model, known as the β-disk, which is stable in both senses, assumes that the viscosity is proportional to the gas pressure, ν ∝ α p_gas. In the standard Shakura–Sunyaev model, viscosity is assumed to be proportional to the total pressure p_tot = p_rad + p_gas, since ν = α c_s H = α c_s²/Ω = α p_tot/(ρ Ω). 
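As a rough numerical illustration of the standard α-prescription, ν = α c_s H with H ≈ c_s/Ω, the sketch below estimates the turbulent viscosity and the corresponding viscous inflow timescale t_visc ≈ R²/ν. All input numbers (α = 0.1, the temperature, the radius, a solar-mass central object, pure hydrogen gas) are illustrative assumptions, not values taken from the article.

```python
import math

G   = 6.674e-11      # m^3 kg^-1 s^-2
k_B = 1.381e-23      # J/K
m_H = 1.673e-27      # kg, hydrogen atom mass
M   = 1.989e30       # kg, central mass (assumed: 1 solar mass)

alpha = 0.1          # assumed viscosity parameter
T     = 1.0e4        # K, assumed mid-plane temperature
R     = 1.0e10       # m, assumed radius in the disk

c_s    = math.sqrt(k_B * T / m_H)       # isothermal sound speed of hydrogen gas
Omega  = math.sqrt(G * M / R**3)        # Keplerian angular velocity
H      = c_s / Omega                    # disk scale height
nu     = alpha * c_s * H                # Shakura-Sunyaev turbulent viscosity
t_visc = R**2 / nu                      # viscous (inflow) timescale

print(f"c_s ≈ {c_s:.2e} m/s, H/R ≈ {H/R:.3f}  (thin disk if much less than 1)")
print(f"nu ≈ {nu:.2e} m^2/s, t_visc ≈ {t_visc / 3.15e7:.2e} yr")
```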
The Shakura–Sunyaev model assumes that the disk is in local thermal equilibrium, and can radiate its heat efficiently. In this case, the disk radiates away the viscous heat, cools, and becomes geometrically thin. However, this assumption may break down. In the radiatively inefficient case, the disk may "puff up" into a torus or some other three-dimensional solution like an Advection Dominated Accretion Flow (ADAF). The ADAF solutions usually require that the accretion rate is smaller than a few percent of the Eddington limit. Another extreme is the case of Saturn's rings, where the disk is so gas-poor that its angular momentum transport is dominated by solid body collisions and disk-moon gravitational interactions. The model is in agreement with recent astrophysical measurements using gravitational lensing. Magnetorotational instability Balbus and Hawley (1991) proposed a mechanism involving magnetic fields to generate the angular momentum transport. A simple system displaying this mechanism is a gas disk in the presence of a weak axial magnetic field. Two radially neighboring fluid elements will behave as two mass points connected by a massless spring, the spring tension playing the role of the magnetic tension. In a Keplerian disk the inner fluid element would be orbiting more rapidly than the outer, causing the spring to stretch. The inner fluid element is then forced by the spring to slow down, correspondingly reducing its angular momentum and causing it to move to a lower orbit. The outer fluid element, being pulled forward, will speed up, increasing its angular momentum and moving to a larger-radius orbit. The spring tension will increase as the two fluid elements move further apart and the process runs away. It can be shown that in the presence of such a spring-like tension the Rayleigh stability criterion is replaced by the requirement that the angular velocity increase outward, dΩ²/dR > 0. Most astrophysical disks do not meet this criterion and are therefore prone to this magnetorotational instability. The magnetic fields present in astrophysical objects (required for the instability to occur) are believed to be generated via dynamo action. Magnetic fields and jets Accretion disks are usually assumed to be threaded by the external magnetic fields present in the interstellar medium. These fields are typically weak (about a few micro-Gauss), but they can get anchored to the matter in the disk, because of its high electrical conductivity, and carried inward toward the central star. This process can concentrate the magnetic flux around the centre of the disk, giving rise to very strong magnetic fields. Formation of powerful astrophysical jets along the rotation axis of accretion disks requires a large-scale poloidal magnetic field in the inner regions of the disk. Such magnetic fields may be advected inward from the interstellar medium or generated by a magnetic dynamo within the disk. Magnetic field strengths of at least order 100 Gauss seem necessary for the magneto-centrifugal mechanism to launch powerful jets. There are problems, however, in carrying external magnetic flux inward toward the central star of the disk. High electric conductivity dictates that the magnetic field is frozen into the matter which is being accreted onto the central object with a slow velocity. However, the plasma is not a perfect electric conductor, so there is always some degree of dissipation. The magnetic field diffuses away faster than the rate at which it is being carried inward by accretion of matter. 
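To make the magnetorotational stability criterion discussed above concrete: a Keplerian disk has Ω ∝ R^(-3/2), so dΩ²/dR < 0 at every radius and the criterion for stability is violated everywhere. The snippet below is a small illustrative check with an assumed solar-mass central object, not part of the original article.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M = 1.989e30         # assumed central mass: 1 solar mass, kg

def omega_sq(R):
    """Square of the Keplerian angular velocity at radius R (m)."""
    return G * M / R**3

# Numerical derivative d(Omega^2)/dR at a few radii
for R in (1.0e9, 1.0e10, 1.0e11):
    dR = 1.0e-4 * R
    dOmega2_dR = (omega_sq(R + dR) - omega_sq(R - dR)) / (2 * dR)
    stable = dOmega2_dR > 0        # MRI stability requires angular velocity increasing outward
    print(f"R = {R:.1e} m: d(Omega^2)/dR = {dOmega2_dR:.3e} -> {'stable' if stable else 'MRI-unstable'}")
# Every radius comes out MRI-unstable, in line with the statement that most
# astrophysical disks do not satisfy the modified stability criterion.
```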
A simple solution is assuming a viscosity much larger than the magnetic diffusivity in the disk. However, numerical simulations and theoretical models show that the viscosity and magnetic diffusivity have almost the same order of magnitude in magneto-rotationally turbulent disks. Some other factors may possibly affect the advection/diffusion rate: reduced turbulent magnetic diffusion on the surface layers; reduction of the Shakura–Sunyaev viscosity by magnetic fields; and the generation of large scale fields by small scale MHD turbulence –a large scale dynamo. In fact, a combination of different mechanisms might be responsible for efficiently carrying the external field inward toward the central parts of the disk where the jet is launched. Magnetic buoyancy, turbulent pumping and turbulent diamagnetism exemplify such physical phenomena invoked to explain such efficient concentration of external fields. Analytic models of sub-Eddington accretion disks (thin disks, ADAFs) When the accretion rate is sub-Eddington and the opacity very high, the standard thin accretion disk is formed. It is geometrically thin in the vertical direction (has a disk-like shape), and is made of a relatively cold gas, with a negligible radiation pressure. The gas goes down on very tight spirals, resembling almost circular, almost free (Keplerian) orbits. Thin disks are relatively luminous and they have thermal electromagnetic spectra, i.e. not much different from that of a sum of black bodies. Radiative cooling is very efficient in thin disks. The classic 1974 work by Shakura and Sunyaev on thin accretion disks is one of the most often quoted papers in modern astrophysics. Thin disks were independently worked out by Lynden-Bell, Pringle, and Rees. Pringle contributed in the past thirty years many key results to accretion disk theory, and wrote the classic 1981 review that for many years was the main source of information about accretion disks, and is still very useful today. A fully general relativistic treatment, as needed for the inner part of the disk when the central object is a black hole, has been provided by Page and Thorne, and used for producing simulated optical images by Luminet and Marck, in which, although such a system is intrinsically symmetric its image is not, because the relativistic rotation speed needed for centrifugal equilibrium in the very strong gravitational field near the black hole produces a strong Doppler redshift on the receding side (taken here to be on the right) whereas there will be a strong blueshift on the approaching side. Due to light bending, the disk appears distorted but is nowhere hidden by the black hole. When the accretion rate is sub-Eddington and the opacity very low, an ADAF (advection dominated accretion flow) is formed. This type of accretion disk was predicted in 1977 by Ichimaru. Although Ichimaru's paper was largely ignored, some elements of the ADAF model were present in the influential 1982 ion-tori paper by Rees, Phinney, Begelman, and Blandford. ADAFs started to be intensely studied by many authors only after their rediscovery in the early 1990s by Popham and Narayan in numerical models of accretion disk boundary layers. Self-similar solutions for advection-dominated accretion were found by Narayan and Yi, and independently by Abramowicz, Chen, Kato, Lasota (who coined the name ADAF), and Regev. Most important contributions to astrophysical applications of ADAFs have been made by Narayan and his collaborators. 
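Since this part of the article repeatedly compares accretion rates with the Eddington limit, a short calculation of the Eddington luminosity L_Edd = 4πGMm_p c/σ_T and the corresponding accretion rate Ṁ_Edd = L_Edd/(η c²) may help fix the scales involved. The radiative efficiency η = 0.1 and the two example masses below are assumptions made only for this illustration.

```python
import math

G       = 6.674e-11     # m^3 kg^-1 s^-2
c       = 2.998e8       # m/s
m_p     = 1.673e-27     # kg, proton mass
sigma_T = 6.652e-29     # m^2, Thomson scattering cross-section
M_sun   = 1.989e30      # kg
eta     = 0.1           # assumed radiative efficiency of a thin disk

def eddington(M):
    """Eddington luminosity (W) and Eddington accretion rate (kg/s) for mass M (kg)."""
    L_edd = 4 * math.pi * G * M * m_p * c / sigma_T
    Mdot_edd = L_edd / (eta * c**2)
    return L_edd, Mdot_edd

for name, M in (("stellar-mass black hole (10 Msun)", 10 * M_sun),
                ("supermassive black hole (1e8 Msun)", 1e8 * M_sun)):
    L_edd, Mdot_edd = eddington(M)
    print(f"{name}: L_Edd ≈ {L_edd:.2e} W, Mdot_Edd ≈ {Mdot_edd / M_sun * 3.15e7:.2e} Msun/yr")
```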
ADAFs are cooled by advection (heat captured in matter) rather than by radiation. They are very radiatively inefficient, geometrically extended, similar in shape to a sphere (or a "corona") rather than a disk, and very hot (close to the virial temperature). Because of their low efficiency, ADAFs are much less luminous than the Shakura–Sunyaev thin disks. ADAFs emit a power-law, non-thermal radiation, often with a strong Compton component. Analytic models of super-Eddington accretion disks (slim disks, Polish doughnuts) The theory of highly super-Eddington black hole accretion, M≫MEdd, was developed in the 1980s by Abramowicz, Jaroszynski, Paczyński, Sikora, and others in terms of "Polish doughnuts" (the name was coined by Rees). Polish doughnuts are low viscosity, optically thick, radiation pressure supported accretion disks cooled by advection. They are radiatively very inefficient. Polish doughnuts resemble in shape a fat torus (a doughnut) with two narrow funnels along the rotation axis. The funnels collimate the radiation into beams with highly super-Eddington luminosities. Slim disks (name coined by Kolakowska) have only moderately super-Eddington accretion rates, M≥MEdd, rather disk-like shapes, and almost thermal spectra. They are cooled by advection, and are radiatively ineffective. They were introduced by Abramowicz, Lasota, Czerny, and Szuszkiewicz in 1988. Excretion disk The opposite of an accretion disk is an excretion disk where instead of material accreting from a disk on to a central object, material is excreted from the center outward onto the disk. Excretion disks are formed when stars merge. See also Accretion Astrophysical jet Blandford–Znajek process Circumstellar disc Circumplanetary disk Dynamo theory Exoasteroid Gravitational singularity Quasi-star Reverberation mapping Ring system Solar nebula Spin-flip Notes References External links Professor John F. Hawley homepage, University of Virginia (archived 2015) The Dynamical Structure of Nonradiative Black Hole Accretion Flows, John F. Hawley and Steven A. Balbus, 2002 March 19, The Astrophysical Journal, 573:738-748, 2002 July 10 Accretion discs, Scholarpedia - Black holes Space plasmas Concepts in astronomy Unsolved problems in physics Vortices Articles containing video clips Circumstellar disks
Accretion disk
[ "Physics", "Chemistry", "Astronomy", "Mathematics" ]
3,450
[ "Space plasmas", "Black holes", "Physical phenomena", "Physical quantities", "Vortices", "Concepts in astronomy", "Unsolved problems in physics", "Astrophysics", "Density", "Dynamical systems", "Stellar phenomena", "Astronomical objects", "Fluid dynamics" ]
48,403,381
https://en.wikipedia.org/wiki/Dragon%20king%20theory
Dragon king is a double metaphor for an event that is both extremely large in size or effect (a "king") and born of unique origins (a "dragon") relative to its peers (other events from the same system). DK events are generated by or correspond to mechanisms such as positive feedback, tipping points, bifurcations, and phase transitions, that tend to occur in nonlinear and complex systems, and serve to amplify Dragon king events to extreme levels. By understanding and monitoring these dynamics, some predictability of such events may be obtained. The dragon king theory was developed by Didier Sornette, who hypothesizes that many crises are in fact DKs rather than black swans, i.e., they may be predictable to some degree. Given the importance of crises to the long-term organization of a variety of systems, the DK theory urges that special attention be given to the study and monitoring of extremes, and that a dynamic view be taken. From a scientific viewpoint, such extremes are interesting because they may reveal underlying, often hidden, organizing principles. Practically speaking, one should study extreme risks, but not forget that significant uncertainty will almost always be present, and should be rigorously considered in decisions regarding risk management and design. The Dragon king theory is related to concepts such as black swan theory, outliers, complex systems, nonlinear dynamics, power laws, extreme value theory, prediction, extreme risks, and risk management. Black swans and dragon kings A black swan can be considered a metaphor for an event that is surprising (to the observer), has a major effect, and, after being observed, is rationalized in hindsight. The theory of black swans is epistemological, relating to the limited knowledge and understanding of the observer. The term was introduced and popularized by Nassim Taleb and has been associated with concepts such as heavy tails, non-linear payoffs, model error, and even Knightian uncertainty, whose "unknowable unknown" event terminology was popularized by former United States Secretary of Defense Donald Rumsfeld. Taleb claims that black swan events are not predictable, and in practice, the theory encourages one to "prepare rather than predict", and limit one's exposure to extreme fluctuations. The black swan concept is important and poses a valid criticism of people, firms, and societies that are irresponsible in the sense that they are overly confident in their ability to anticipate and manage risk. However, claiming that extreme events are—in general—unpredictable may also lead to a lack of accountability in risk management roles. In fact, it is known that in a wide range of physical systems that extreme events are predictable to some degree. One simply needs to have a sufficiently deep understanding of the structure and dynamics of the focal system, and the ability to monitor it. This is the domain of the dragon kings. Such events have been referred to as "grey swans" by Taleb. A more rigorous distinction between black swans, grey swans, and dragon kings is difficult as black swans are not precisely defined in physical and mathematical terms. However, technical elaboration of concepts in the Black Swan book are elaborated in the Silent Risk document. An analysis of the precise definition of a black swan in a risk management context was written by professor Terje Aven. Beyond power laws It is well known that many phenomena in both the natural and social sciences have power law statistics (Pareto distribution). 
Furthermore, from extreme value theory, it is known that a broad range of distributions (the Frechet class) have tails that are asymptotically power law. The result of this is that, when dealing with crises and extremes, power law tails are the "normal" case. The unique property of power laws is that they are scale-invariant, self-similar and fractal. This property implies that all events—both large and small—are generated by the same mechanism, and thus there will be no distinct precursors by which the largest events may be predicted. A well-known conceptual framework for events of this type is self-organized criticality. Such concepts are compatible with the theory of the black swan. However Taleb has also stated that considering the power law as a model instead of a model with lighter tails (e.g., a Gaussian) "converts black swans into gray ones", in the sense that the power law model gives non-negligible probability to large events. In a variety of studies it has been found that, despite the fact that a power law models the tail of the empirical distribution well, the largest events are significantly outlying (i.e., much larger than what would be expected under the model). Such events are interpreted as dragon kings as they indicate a departure from the generic process underlying the power law. Examples of this include the largest radiation release events occurring in nuclear power plant accidents, the largest city (agglomeration) within the sample of cities in a country, the largest crashes in financial markets, and intraday wholesale electricity prices. Mechanisms Physically speaking, dragon kings may be associated with the regime changes, bifurcations, and tipping points of complex out-of-equilibrium systems. For instance, the catastrophe (fold bifurcation) of the global ecology illustrated in the figure could be considered to be a dragon king: Many observers would be surprised by such a dramatic change of state. However, it is well known that in dynamic systems, there are many precursors as the system approaches the catastrophe. Positive feedback is also a mechanism that can spawn dragon kings. For instance, in a stampede the number of cattle running increases the level of panic which causes more cattle to run, and so on. In human dynamics such herding and mob behavior has also been observed in crowds, stock markets, and so on (see herd behavior). The role of positive feedback loops in dragon king formation is well documented, particularly in oscillatory and cascading networks. Such a system periodically reaches a tipping point, becoming unstable and triggering a self-amplifying cascade of cascades, leading to a dragon king. Dragon kings are also caused by attractor bubbling in coupled oscillator systems. Attractor bubbling is a generic behavior appearing in networks of coupled oscillators where the system typically orbits in an invariant manifold with a chaotic attractor (where the peak trajectories are low), but is intermittently pushed (by noise) into a region where orbits are locally repelled from the invariant manifold (where the peak trajectories are large). These excursions form the dragon kings, as illustrated in the figure. It is claimed that such models can describe many real phenomena such as earthquakes, brain activity, etc. A block and spring mechanical model, considered as a model of geological faults and their earthquake dynamics, produced a similar distribution. It could also be the case that dragon kings are created as a result of system control or intervention. 
That is, trying to suppress the release of stress or death in dynamic complex systems may lead to an accumulation of stress or a maturation towards instability. For instance, brush/forest fires are a natural occurrence in many areas. Such fires are inconvenient and thus we may wish that they are diligently extinguished. This leads to long periods without inconvenient fires, however, in the absence of fires, dead wood accumulates. Once this accumulation reaches a critical point, and a fire starts, the fire becomes so large that it cannot be controlled—a singular event that could be considered to be a dragon king. Other policies, such as doing nothing (allowing for small fires to occur naturally), or performing strategic controlled burning, would avoid enormous fires by allowing for frequent small ones. Another example is monetary policy. Quantitative easing programs and low interest rate policies are common, with the intention of avoiding recessions, promoting growth, etc. However, such programs build instability by increasing income inequality, keeping weak firms alive, and inflating asset bubbles. Ultimately such policies, aimed at smoothing out economic fluctuations, will enable an enormous correction—a dragon king. As discussed in the previous section, dragon kings often arise in systems where other events follow a power law distribution. Mechanisms generating power laws are well-studied under the framework of self-organized criticality (SOC). Many systems exhibiting dragon kings alongside power laws can be understood by how their dynamics differ from pure SOC. SOC typically involves self-organization (SO) around a continuous absorbing-state phase transition (ASPT), where SO refers to an emergent balance between external driving and internal dissipation, and ASPT marks a transition between active and inactive (absorbing) phases. If this phase transition is first-order (discontinuous), it produces hysteresis, leading to large dragon king events. Even continuous transitions, which typically result in SOC, can enter regimes that produce dragon kings, depending on the nuances of the driving and dissipation. The figure on the right illustrates the taxonomy of dragon kings based on this classification: Set A includes all dragon kings, while sets B and C distinguish between those with self-organization near continuous and discontinuous transitions. Sets D and E further focus on cases with specifically absorbing-state transitions. Dragon kings as statistical outliers Dragon kings are outliers by definition. However, when calling DKs outliers there is an important proviso: In standard statistics outliers are typically erroneous values and are discarded, or statistical methods are chosen that are somehow insensitive to outliers. Contrariwise, DKs are outliers that are highly informative, and should be the focus of much statistical attention. Thus a first step is identifying DKs in historical data. Existing tests are either based on the asymptotic properties of the empirical distribution function (EDF) or on an assumption about the underlying cumulative distribution function (CDF) of the data. It turns out that testing for outliers relative to an exponential distribution is very general. The latter follows from the Pickands–Balkema–de Haan theorem of extreme value theory which states that a wide range of distributions asymptotically (above high thresholds) have exponential or power law tails. As an aside, this is one explanation why power law tails are so common when studying extremes. 
To finish the point, since the logarithm of a variable with a power law tail has an exponential tail, one can take the logarithm of power law data and then test for outliers relative to an exponential tail. There are many test statistics and techniques for testing for outliers in an exponential sample. An inward test sequentially tests the largest point, then the second largest, and so on, until the first test that is not rejected (i.e., the null hypothesis that the point is not an outlier is not rejected). The number of rejected tests identifies the number of outliers. For instance, working with the sorted sample, the inward robust test computes a test statistic for the point being tested, r, where m is the pre-specified maximum number of outliers. At each step the p-value for the test statistic must be computed and, if lower than some level, the test rejected. This test has many desirable properties: it does not require that the number of outliers be specified, it is not prone to under (masking) and over (swamping) estimation of the number of outliers, it is easy to implement, and the test is independent of the value of the parameter of the exponential tail. Examples Some examples of where dragon kings have been detected as outliers include: financial crashes as measured by drawdowns, where the outliers correspond to terrorist attacks (e.g., the 2005 London bombing), and the flash crash of 2010; the radiation released and financial losses caused by accidents at nuclear power plants, where outliers correspond to runaway disasters where safety mechanisms were overwhelmed; the largest city (measured by the population in its agglomeration) in the population of cities within a country, where the largest city plays a disproportionately important role in the dynamics of the country, and benefits from unique growth; intraday wholesale electricity prices; and three-wave nonlinear interaction—it is possible to suppress the emergence of dragon kings. Modeling and prediction How one models and predicts dragon kings depends on the underlying mechanism. However, the common approach will require continuous monitoring of the focal system and comparing measurements with a (non-linear or complex) dynamic model. It has been proposed that the more homogeneous the system, and the stronger its interactions, the more predictable it will be. For instance, in non-linear systems with phase transitions at a critical point, it is well known that a window of predictability occurs in the neighborhood of the critical point due to precursory signs: the system recovers more slowly from perturbations, autocorrelation changes, variance increases, spatial coherence increases, etc. These properties have been used for prediction in many applications ranging from changes in the biosphere to rupture of pressure tanks on the Ariane rocket. The applications to a wide range of phenomena have stimulated the complex systems perspective, a trans-disciplinary approach that does not depend on a first-principles understanding. For the phenomenon of unsustainable growth (e.g., of populations or stock prices), one can consider a growth model that features a finite-time singularity, which is a critical point where the growth regime changes. In systems that are discrete scale invariant such a model is power law growth, decorated with a log-periodic function. Fitting this model on the growth data (non-linear regression) allows for the prediction of the singularity, i.e., the end of unsustainable growth. 
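The log-transform trick mentioned above can be sketched in a few lines: Pareto-distributed data become exponential after taking logarithms, and unusually large points can then be screened against the fitted exponential tail. The snippet below is a simplified illustration of that idea, not the inward robust test from the dragon king literature; the sample size, the injected outliers, and the 0.001 significance threshold are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a Pareto (power law) bulk plus two injected "dragon kings"
alpha = 1.5
bulk = 1.0 + rng.pareto(alpha, size=5000)            # classical Pareto samples with minimum 1
data = np.concatenate([bulk, [5.0e4, 2.0e5]])        # two artificially huge events

logs_sorted = np.sort(np.log(data))                  # log of Pareto data has an exponential tail

# Fit the exponential rate on the bulk, excluding the m largest (suspect) points
m = 10                                               # maximum number of suspected outliers
rate = 1.0 / np.mean(logs_sorted[:-m])               # MLE of the exponential rate on the rest

# Screen the m largest points: survival probability under the fitted exponential,
# with a crude Bonferroni-style correction for the sample size
n = len(logs_sorted)
for x in logs_sorted[-m:]:
    p_single = np.exp(-rate * x)                     # P(X > x) for an exponential variable
    p_adjusted = min(1.0, n * p_single)
    flag = "candidate dragon king" if p_adjusted < 0.001 else "consistent with power law tail"
    print(f"log-value {x:6.2f}: adjusted tail prob {p_adjusted:.2e} -> {flag}")
```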
This has been applied to many problems, for instance: rupture in materials, earthquakes, and the growth and burst of bubbles in financial markets. An interesting dynamic to consider, one that may reveal the development of a blockbuster success, is epidemic phenomena: e.g., the spread of plague, viral phenomena in media, the spread of panic and volatility in stock markets, etc. In such a case, a powerful approach is to decompose activity/fluctuations into exogenous and endogenous parts, and learn about the endogenous dynamics that may lead to highly influential bursts in activity. Prediction and decision-making Given a model and data, one can obtain a statistical model estimate. This model estimate can then be used to compute interesting quantities such as the conditional probability of the occurrence of a dragon king event in a future time interval, and the most probable occurrence time. When doing statistical modeling of extremes, and using complex or nonlinear dynamic models, there is bound to be substantial uncertainty. Thus, one should be diligent in uncertainty quantification: not only considering the randomness present in the fitted stochastic model, but also the uncertainty of its estimated parameters (e.g., with Bayesian techniques or by first simulating parameters and then simulating from the model with those parameters), and the uncertainty in model selection (e.g., by considering an ensemble of different models). One can then use the estimated probabilities and their associated uncertainties to inform decisions. In the simplest case, one performs a binary classification: predicting that a dragon king will occur in a future interval if its probability of occurrence is high enough, with sufficient certainty. For instance, one may take a specific action if a dragon king is predicted to occur. An optimal decision will then balance the cost of false negatives/false positives and misses/false alarms according to a specified loss function. For instance, if the cost of a miss is very large relative to the cost of a false alarm, the optimal decision will detect dragon kings more frequently than they occur. One should also study the true positive rate of the prediction. The smaller this value is, the weaker the test, and the closer one is to black swan territory. In practice, the selection of the optimal decision and the computation of its properties must be done by cross-validation with historical data (if available), or on simulated data (if one knows how to simulate the dragon kings). In a dynamic setting the dataset will grow over time, and the model estimate and its estimated probabilities will evolve. One may then consider combining the sequence of estimates/probabilities when performing prediction. In this dynamic setting, the test will likely be weak most of the time (e.g., when the system is around equilibrium), but as one approaches a dragon king, and precursors become visible, the true positive rate should increase. The importance of extreme risks Dragon kings form special kinds of events leading to extreme risks (which can also be opportunities). That extreme risks are important should be self-evident. Natural disasters provide many examples (e.g., asteroid impacts leading to extinction). 
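The cost trade-off described above has a simple closed form in the binary case: with a cost c_fa for a false alarm and c_miss for a miss (and zero cost for correct decisions), the expected loss is minimized by raising an alarm whenever the predicted probability exceeds c_fa / (c_fa + c_miss). The sketch below encodes this standard rule; the cost values and probabilities are made-up examples, not figures from the dragon king literature.

```python
def alarm_threshold(cost_false_alarm: float, cost_miss: float) -> float:
    """Probability threshold that minimizes expected loss for a binary alarm decision."""
    return cost_false_alarm / (cost_false_alarm + cost_miss)

def decide(p_event: float, cost_false_alarm: float, cost_miss: float) -> bool:
    """Raise an alarm if the expected loss of staying silent exceeds that of alarming."""
    return p_event >= alarm_threshold(cost_false_alarm, cost_miss)

# Example: a miss is assumed 100 times more costly than a false alarm,
# so the optimal rule alarms even at fairly low predicted probabilities.
c_fa, c_miss = 1.0, 100.0
print(f"threshold = {alarm_threshold(c_fa, c_miss):.3f}")   # about 0.010
for p in (0.005, 0.02, 0.2):
    print(f"P(dragon king) = {p:.3f} -> alarm: {decide(p, c_fa, c_miss)}")
```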
Some statistical examples of the effect of extremes are that: the largest nuclear power plant accident (the Chernobyl disaster) had roughly the same damage cost (as measured by estimated US dollar cost) as all (approximately 175) other historical nuclear accidents together, the largest 10 percent of private data breaches from organizations account for 99 percent of the total breached private information, the largest five epidemics since 1900 caused 20 times the fatalities of the remaining 1363, etc. In general such statistics arise in the presence of heavy-tailed distributions, and the presence of dragon kings will augment the already oversized effect of extreme events. Despite the importance of extreme events, due to ignorance, misaligned incentives, and cognitive biases, there is often a failure to adequately anticipate them. Technically speaking, this leads to poorly specified models whose distributions are not heavy-tailed enough and which under-appreciate both serial and multivariate dependence of extreme events. Some examples of such failures in risk assessment include the use of Gaussian models in finance (Black–Scholes, the Gaussian copula, LTCM), the use of Gaussian processes and linear wave theory failing to predict the occurrence of rogue waves, the failure of economic models in general to predict the financial crisis of 2007–2008, and the under-appreciation of external events, cascades, and nonlinear effects in probabilistic risk assessment, leading to not anticipating the Fukushima Daiichi nuclear disaster in 2011. Such influential failures emphasize the importance of the study of extremes. Risk management The dragon king concept raises many questions about how one can deal with risk. Of course, if possible, exposure to large risks should be avoided (often referred to as the "black swan approach"). However, in many developments, exposure to risk is a necessity, and a trade-off between risk and return needs to be navigated. In an adaptive system, where prediction of dragon kings is successful, one can act to defend the system or even profit. How to design such resilient systems, as well as their real-time risk monitoring systems, is an important and interdisciplinary problem where dragon kings must be considered. On another note, when it comes to the quantification of risk in a given system (whether it be a bank, an insurance company, a dike, a bridge, or a socio-economic system), risk needs to be accounted for over a period, such as annually. Typically one is interested in statistics such as the annual probability of loss or damage in excess of some value (value at risk), other tail risk measures, and return periods. To provide such risk characterizations, the dynamic dragon kings must be reasoned about in terms of annual frequency and severity statistics. These frequency and severity statistics can then be brought together in a model such as a compound Poisson process. Provided that the statistical properties of the system are consistent over time (stationary), frequency and severity statistics may be constructed based on past observations, simulations, and/or assumptions. If not, one may only construct scenarios. However, in any case, given the uncertainty present, a range of scenarios should be considered. Due to the shortage of data for extreme events, the principle of parsimony, and theoretical results from extreme value theory about universal tail models, one typically relies on a generalized Pareto distribution (GPD) tail model. However, such a model excludes DKs. 
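To illustrate the peaks-over-threshold approach mentioned at the end of this passage, the sketch below fits a generalized Pareto distribution to threshold exceedances with scipy and converts the fit into an annual exceedance probability. It is a minimal sketch under assumed inputs (synthetic losses, an arbitrary threshold, and ten years of data); as noted in the article, such a fit by itself does not capture dragon kings.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)

# Synthetic loss events over an assumed 10-year record (heavy-tailed toy data)
years = 10.0
losses = rng.pareto(2.0, size=400) * 10.0

threshold = np.quantile(losses, 0.95)                  # arbitrary high threshold
exceedances = losses[losses > threshold] - threshold

# Fit the GPD to the exceedances (location fixed at 0)
shape, loc, scale = genpareto.fit(exceedances, floc=0)

# Annual frequency of exceeding the threshold, then probability of exceeding a larger level
rate_per_year = len(exceedances) / years
level = threshold * 3.0                                # example loss level of interest
p_beyond = genpareto.sf(level - threshold, shape, loc=0, scale=scale)
annual_prob = 1.0 - np.exp(-rate_per_year * p_beyond)  # assuming Poisson arrivals of exceedances

print(f"GPD shape = {shape:.2f}, scale = {scale:.2f}")
print(f"P(annual loss exceeds {level:.1f}) ≈ {annual_prob:.3f}")
```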
Thus, when one has sufficient reason to believe that dragon kings are present, or if one simply wants to consider a scenario, one may, for example, consider a density mixture of a GPD and a density for the DK regime. References Prediction Dynamical systems Risk management Statistical outliers Theory Metaphors referring to animals Dragons in popular culture Fictional kings
Dragon king theory
[ "Physics", "Mathematics" ]
4,259
[ "Mechanics", "Dynamical systems" ]
48,404,487
https://en.wikipedia.org/wiki/Three-island%20principle
The three-island principle was a technique used in the construction of steel-hulled ships whereby a ship was built with a forecastle, bridge deck, and poop. The technique allowed the economical and efficient construction of ships and was particularly common in tramp steamers and smaller vessels of the nineteenth and early twentieth centuries. The Knight of Malta, for instance, a 1929 steam ferry of only 16 ft draught that operated between Malta and Sicily, was built on the principle. See also Deck (ship) References Shipbuilding
Three-island principle
[ "Engineering" ]
105
[ "Shipbuilding", "Marine engineering" ]
48,407,923
https://en.wikipedia.org/wiki/Hume%20Feldman
Hume A. Feldman is a physicist specializing in cosmology and astrophysics. He is a Fellow of the American Physical Society and a professor and chair (2013-2023) of the Department of Physics and Astronomy at the University of Kansas. Education Feldman graduated from the University of California at Santa Cruz in 1983. He received his PhD at Stony Brook University in New York in 1989, working with Robert Brandenberger. He was then a postdoc at the Canadian Institute for Theoretical Astrophysics, Toronto, 1989–91, a research fellow at the University of Michigan 1991–94, and a research professor in the Physics Department at Princeton University 1994–96. Research career Hume has been a researcher in the study of the large-scale peculiar velocity field for the past two decades. His explanation of the systematic errors, aliasing and incomplete cancellations of small-scale noise masquerading as large-scale signal led to the reemergence of peculiar velocities as a premier tool to probe the dynamics and statistics of the large-scale structure of the Universe. He developed a formalism to optimize the determination of cosmic flows from proper distance surveys and enabled for the first time direct comparison of independent surveys and cosmological models as a function of scale, thus establishing the cosmological significance of flow probes. His work has led to many widely cited results (such as nearly 3-sigma flows on 100 Mpc/h scales), renewed discussion of imposing flow constraints on cosmological models and the redesign of proper distance surveys. He is the coauthor of two recent papers that brought back this field from a decade with no new data (with over 300 and 200 citations, respectively). Hume was a coauthor of the best-cited article on cosmological perturbations (>4000 citations) developing a gauge invariant formalism that is widely considered to be the gold standard in this sub-discipline. His seminal work on the approximation of the matter power spectrum from redshift surveys (>1000 citations) has opened the door to a whole industry of cosmological probes and N-point functions determination in Fourier space. He was a coauthor on the "Loitering Universe" series of papers that predicted an accelerating universe as a solution to the age problem in 1992 and which included a scalar field that acted like an effective cosmological constant or a quintessence field years before the type Ia supernovae results. He worked extensively on the constraints on galaxy bias, matter density, and primordial non-Gaussianity in redshift surveys and his detection of the bispectrum signal was the first observational confirmation of the Gravitational Instability Model. He helped develop an artificial neural network formalism to interpolate the fully non-linear power spectrum of matter fluctuations and provided the community with fast and accurate (<1% error) software to determine the non-linear power spectrum given an input cosmology. His APS Fellow Citation reads: Public activities He travels widely for presentations to a diverse array of groups, from elementary to high school students to churches, interest groups, legislators and professional societies. He became involved in the creationism/intelligent design controversy in Kansas in 1999 when the nearly successful attempts of various fundamentalist groups to force the teaching of religion in public schools began in earnest. 
Hume was one of the leaders of a group of academics and educators that addressed, publicized and confronted the issue head-on by organizing conferences, public forums and workshops, through political advocacy, including testimony to the Kansas legislature, and by providing high school science teachers with the tools and knowledge to discuss these issues locally with parents, students and the interested public. References Living people Year of birth missing (living people) Physical cosmology University of Kansas faculty American physicists University of California, Santa Cruz alumni Stony Brook University alumni University of Michigan fellows Fellows of the American Physical Society
Hume Feldman
[ "Physics", "Astronomy" ]
800
[ "Astronomical sub-disciplines", "Theoretical physics", "Physical cosmology", "Astrophysics" ]
48,407,985
https://en.wikipedia.org/wiki/Structure%20function
The structure function, like the fragmentation function, is a probability density function in physics. It is somewhat analogous to the structure factor in solid-state physics, and the form factor (quantum field theory). The nucleon (proton and neutron) electromagnetic form factors describe the spatial distributions of electric charge and current inside the nucleon and thus are intimately related to its internal structure; these form factors are among the most basic observables of the nucleon. (Nucleons are the building blocks of almost all ordinary matter in the universe. The challenge of understanding the nucleon's structure and dynamics has occupied a central place in nuclear physics.) The structure functions are important in the study of deep inelastic scattering. The fundamental understanding of structure functions in terms of QCD is one of the outstanding problems in hadron physics. Why do quarks form colourless hadrons with only two stable configurations, proton and neutron? One important step towards answering this question is to characterize the internal structure of the nucleon. High energy electron scattering provides one of the most powerful tools to investigate this structure. See also Photon structure function Fragmentation function Parton distribution function Transverse momentum distributions References Scattering
Structure function
[ "Physics", "Chemistry", "Materials_science" ]
246
[ "Scattering stubs", "Scattering", "Condensed matter physics", "Particle physics", "Nuclear physics" ]
49,642,031
https://en.wikipedia.org/wiki/Surface%20activated%20bonding
Surface activated bonding (SAB) is a non-high-temperature wafer bonding technology with atomically clean and activated surfaces. Surface activation prior to bonding by using fast atom bombardment is typically employed to clean the surfaces. High-strength bonding of semiconductors, metals, and dielectrics can be obtained even at room temperature. Overview In the standard SAB method, wafer surfaces are activated by argon fast atom bombardment in ultra-high vacuum (UHV) of 10−4–10−7 Pa. The bombardment removes adsorbed contaminants and native oxides on the surfaces. The activated surfaces are atomically clean and reactive for formation of direct bonds between wafers when they are brought into contact even at room temperature. Researches on SAB The SAB method has been studied for bonding of various materials, as shown in Table I. The standard SAB, however, failed to bond some materials such as SiO2 and polymer films. The modified SAB was developed to solve this problem, by using a sputtering deposited Si intermediate layer to improve the bond strength. The combined SAB has been developed for SiO2-SiO2 and Cu/SiO2 hybrid bonding, without use of any intermediate layer. Technical specifications References Wafer bonding Semiconductor technology Electronics manufacturing Packaging (microfabrication) Semiconductor device fabrication Microelectronic and microelectromechanical systems
Surface activated bonding
[ "Materials_science", "Engineering" ]
289
[ "Electronics manufacturing", "Packaging (microfabrication)", "Microtechnology", "Materials science", "Semiconductor device fabrication", "Electronic engineering", "Semiconductor technology", "Microelectronic and microelectromechanical systems" ]
49,644,528
https://en.wikipedia.org/wiki/Sarooj
Sarooj is a traditional water-resistant mortar used in Iranian architecture, used in the construction of bridges and yakhchāl, ancient Persian ice houses. It is made of clay and limestone mixed in a six-to-four ratio to make a stiff mix, and kneaded for three days. A portion of furnace slags from baths is combined with cattail (Typha) fibers, egg, and straw, and fixed, then beaten with a wooden stick for even mixing. Egg whites can be used as a water reducer as needed. History Mosaddad et al. report the use of a mixture consisting of lime, sand and ash in the construction of an 1800 year-old Sasanian bridge-dam on the Karoon river south of Shooshtar. The Sheikh's biogas bath-house in Isphahan featured a water-impermeable sarooj composed of lime, egg white, and bamboo dust. Another alternative formulation used for yakhchāl and water tanks in Iran uses "sand, clay, egg whites, lime, goat hair, and ash in specific proportions." All of these examples utilize pozzolanic properties and/or incorporate biopolymerization to increase the durability and impermeability of the plaster. See also Qadad, another preindustrial waterproof plaster Tadelakt, another waterproof lime soap plaster References Concrete Plastering Building materials Architecture in Iran Moisture protection
Sarooj
[ "Physics", "Chemistry", "Engineering" ]
297
[ "Structural engineering", "Building engineering", "Coatings", "Architecture", "Construction", "Materials", "Concrete", "Plastering", "Matter", "Building materials" ]
55,065,601
https://en.wikipedia.org/wiki/MASCARA
MASCARA (Multi-site All-Sky CAmeRA) is an exoplanet experiment by Leiden University. It has two stations, one in each hemisphere, each of which use cameras to make short exposure photographs of most of the visible sky to observe stars to a magnitude of 8.4. The Northern Hemisphere station at Roque de los Muchachos Observatory, La Palma, started observations in February 2015. The Southern Hemisphere station at La Silla Observatory, Chile, saw first light in July 2017. MASCARA-1b On 17 July 2017, the discovery of MASCARA-1b, a confirmed superjovian exoplanet with a mass 3.7, was reported by the survey team. MASCARA-1b is a hot Jupiter transiting its parent A-type star; its orbit is misaligned with the star's rotation. The planet was found unusually reflective for hot Jupiter with the measured geometric albedo of 0.171 and dayside temperature of 3062 K. Attempts to spectroscopically characterize its composition were failing as in 2022 due to relatively high planetary surface gravity resulting in compact atmosphere. MASCARA-2b A second planet, MASCARA-2b, also known as KELT-20b, was also announced in 2017. It is a hot Jupiter orbiting an A-type star. The carbon monoxide, steam and neutral iron detection in the atmosphere of MASCARA-2b was announced in 2022. MASCARA-4b A planet MASCARA-4b (also known as HD 85628 Ab) discovery was announced in 2019. It is a hot Jupiter on retrograde and slightly eccentric orbit. The planet is unusually reflective for a hot Jupiter. Hydrogen, sodium, magnesium, calcium and iron emission from planetary atmosphere was detected. MASCARA-5b In 2021, a planet MASCARA-5b (more commonly known as TOI-1431 b), is an Ultra-hot Jupiter. Its dayside temperature is 2,700 K (2,427 °C), making it hotter than 40% of stars in our galaxy. The nightside temperature is 2,600 K (2,300 °C). List of discovered exoplanets References Papers G.J.J. Talens et al., The Multi-site All-Sky CAmeRA: Finding transiting exoplanets around bright (mV < 8) stars, accepted for publication in A&A External links MASCARA website at Leiden University MASCARA, ESO Leiden University Exoplanet search projects Astronomical instruments Telescope instruments
MASCARA
[ "Astronomy" ]
508
[ "Astronomy projects", "Exoplanet search projects", "Telescope instruments", "Astronomical instruments" ]
55,071,594
https://en.wikipedia.org/wiki/Neutron%20embrittlement
Neutron embrittlement, sometimes more broadly radiation embrittlement, is the embrittlement of various materials due to the action of neutrons. This is primarily seen in nuclear reactors, where the release of high-energy neutrons causes the long-term degradation of the reactor materials. The embrittlement is caused by the microscopic movement of atoms that are hit by the neutrons; this same action also gives rise to neutron-induced swelling causing materials to grow in size, and the Wigner effect causing energy buildup in certain materials that can lead to sudden releases of energy. Neutron embrittlement mechanisms include: Hardening and dislocation pinning due to nanometer features created by irradiation Generation of lattice defects in collision cascades via the high-energy recoil atoms produced in the process of neutron scattering. Diffusion of major defects, which leads to higher amounts of solute diffusion, as well as formation of nanoscale defect-solute cluster complexes, solute clusters, and distinct phases. Embrittlement in Nuclear Reactors Neutron irradiation embrittlement limits the service life of reactor-pressure vessels (RPV) in nuclear power plants due to the degradation of reactor materials. In order to perform at high efficiency and safely contain coolant water at temperatures around 290°C and pressures of ~7 MPa (for boiling water reactors) to 14 MPa (for pressurized water reactors), the RPV must be heavy-section steel. Due to regulations, RPV failure probabilities must be very low. To achieve sufficient safety, the design of the reactor assumes large cracks and extreme loading conditions. Under such conditions, a probable failure mode is rapid, catastrophic fracture if the vessel steel is brittle. Tough RPV base metals that are typically used are A302B, A533B plates, or A508 forgings; these are quenched and tempered, low-alloy steels with primarily tempered bainitic microstructures. Over the past few decades, RPV embrittlement has been addressed by the use of tougher steels with lower trace impurity contents, the decrease of neutron flux that the vessel is subject to, and the elimination of beltline welds. However, embrittlement remains an issue for older reactors. Pressurized water reactors are more susceptible to embrittlement than boiling water reactors. This is due to PWRs sustaining more neutron impacts. To counteract this, many PWRs have a specific core design that reduces the number of neutrons hitting the vessel wall. Moreover, PWR designs must be especially mindful of embrittlement because of pressurized thermal shock, an accident scenario that occurs when cold water enters a pressurized reactor vessel, introducing large thermal stress. This thermal stress may cause fracture if the reactor vessel is sufficiently brittle. See also Radiation damage References Specific Materials degradation embrittlement Scientific terminology
Neutron embrittlement
[ "Physics", "Materials_science", "Engineering" ]
602
[ "Materials degradation", "Nuclear and atomic physics stubs", "Materials science", "Nuclear physics" ]
55,072,303
https://en.wikipedia.org/wiki/Phase%20shift%20torque%20measurement
Phase shift torque measurement involves the use of a shaft, which is either an integral part of the rotating machine under test - such as a turbine, compressor, or jet engine - or positioned between the machine and a dynamometer. The shaft has a pair of identical toothed disks attached at each end and often has a slender portion to enhance its angle of twist. The twist of the shaft can be determined from the phase difference of the magnetically or optically detected wave pattern from each of the disks. Under no-load the waves are synchronised and as a load is applied to the shaft their phase difference increases. The shaft's angle of twist is determined from the measured phase difference. Since the twist of a shaft is linearly proportional to the applied torque within the elastic limit (up to its yield strength), the torque can be calculated using established formulas of torsion mechanics. Phase shift torque meters can measure shaft power to 0.1% accuracy in R & D applications, and to 1.0% when designed for permanent installation, both at confidence levels of 95%. As of 1991, phase shift torque measurement instrumentation had been installed on gas turbine systems with a total power of 2 GW, with over 2 million operational hours recorded, demonstrating good reliability. These systems operated at speeds of up to 90,000 rpm and achieved power outputs of up to 50 MW. References Torque Measurement Dynamometers Mechanical engineering
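The torsion relation alluded to in the text can be made explicit: for a solid circular shaft, the twist angle is θ = T L / (G J) with polar moment of area J = π d⁴ / 32, so the torque follows from the measured twist as T = G J θ / L and the transmitted power is P = T ω. The sketch below applies these textbook formulas to made-up shaft dimensions and a made-up measured twist; it is an illustration only, not a description of any particular commercial instrument.

```python
import math

# Assumed shaft and measurement values (illustrative only)
G_shear = 79.3e9            # Pa, shear modulus of steel
d       = 0.05              # m, shaft diameter
L       = 0.40              # m, gauge length between the two toothed disks
theta   = math.radians(0.8) # rad, twist angle derived from the measured phase difference
rpm     = 12000.0           # shaft speed

J     = math.pi * d**4 / 32.0        # polar second moment of area of a solid circular shaft
T     = G_shear * J * theta / L      # torque from the elastic torsion formula
omega = 2.0 * math.pi * rpm / 60.0   # angular speed in rad/s
P     = T * omega                    # transmitted shaft power

print(f"J = {J:.3e} m^4, torque = {T / 1000:.1f} kN·m, power = {P / 1e6:.2f} MW")
```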
Phase shift torque measurement
[ "Physics", "Mathematics", "Technology", "Engineering" ]
288
[ "Force", "Applied and interdisciplinary physics", "Physical quantities", "Quantity", "Classical mechanics stubs", "Classical mechanics", "Measurement", "Size", "Measuring instruments", "Dynamometers", "Mechanical engineering", "Wikipedia categories named after physical quantities", "Torque" ...
55,085,415
https://en.wikipedia.org/wiki/Pesampator
Pesampator (; developmental code names BIIB-104 and PF-04958242) is a positive allosteric modulator (PAM) of the AMPA receptor (AMPAR), an ionotropic glutamate receptor, which was under development by Pfizer for the treatment of cognitive symptoms in schizophrenia. In March 2018, the development of the drug was transferred over from Pfizer to Biogen. It was also under development for the treatment of age-related sensorineural hearing loss, but development for this indication was terminated due to insufficient effectiveness. In July 2022, Biogen discontinued the development of pesampator for cognitive symptoms in schizophrenia due to ineffectiveness. Pesampator belongs to the biarylpropylsulfonamide group of AMPAR PAMs, which also includes LY-404187, LY-503430, and mibampator (LY-451395) among others. It is described as a "high-impact" AMPAR PAM, unlike so-called "low-impact" AMPAR PAMs like CX-516 and its congener farampator (CX-691, ORG-24448). In animals, low doses of pesampator have been found to enhance cognition and memory, whereas higher doses produce motor coordination disruptions and convulsions. The same effects, as well as neurotoxicity at higher doses, have been observed with orthosteric and other high-impact allosteric AMPAR activators. In healthy volunteers, pesampator has been found to significantly reduce ketamine-induced deficits in verbal learning and working memory without attenuating ketamine-induced psychotomimetic effects. It was able to complete reverse ketamine-induced impairments in spatial working memory in the participants. In addition to its actions on the AMPAR, pesampator has been reported to act as a GlyT1 glycine transporter blocker. As such, it is also a glycine reuptake inhibitor, and may act indirectly to activate the glycine receptor and the glycine co-agonist site of the NMDA receptor by increasing extracellular levels of glycine. See also AMPA receptor positive allosteric modulator List of investigational antipsychotics References AMPA receptor positive allosteric modulators Experimental drugs Glycine reuptake inhibitors Nitriles Nootropics Sulfonamides Tetrahydrofurans Thiophenes
Pesampator
[ "Chemistry" ]
536
[ "Nitriles", "Functional groups" ]
32,272,502
https://en.wikipedia.org/wiki/Baghouse
A baghouse, also known as a baghouse filter, bag filter, or fabric filter is an air pollution control device and dust collector that removes particulates entrained in gas released from commercial processes. Power plants, steel mills, pharmaceutical producers, food manufacturers, chemical producers and other industrial companies often use baghouses to control emission of air pollutants. Baghouses came into widespread use in the late 1970s after the invention of high-temperature fabrics (for use in the filter media) capable of withstanding temperatures over . Unlike electrostatic precipitators, where performance may vary significantly depending on process and electrical conditions, functioning baghouses typically have a particulate collection efficiency of 99% or better, even when particle size is very small. Operation Most baghouses use long, cylindrical bags (or tubes) made of woven or felted fabric as a filter medium. For applications where there is relatively low dust loading and gas temperatures are or less, pleated, nonwoven cartridges are sometimes used as filtering media instead of bags. Dust-laden gas or air enters the baghouse through hoppers and is directed into the baghouse compartment. The gas is drawn through the bags, either on the inside or the outside depending on cleaning method, and a layer of dust accumulates on the filter media surface until air can no longer move through it. When a sufficient pressure drop (ΔP) occurs, the cleaning process begins. Cleaning can take place while the baghouse is online (filtering) or is offline (in isolation). When the compartment is clean, normal filtering resumes. Baghouses are very efficient particulate collectors because of the dust cake formed on the surface of the bags. The fabric provides a surface on which dust collects through the following four mechanisms: Inertial collection – Dust particles strike the fibers placed perpendicular to the gas-flow direction instead of changing direction with the gas stream. Interception – Particles that do not cross the fluid streamlines come in contact with fibers because of the fiber size. Brownian movement – Submicrometre particles are diffused, increasing the probability of contact between the particles and collecting surfaces. Electrostatic forces – The presence of an electrostatic charge on the particles and the filter can increase dust capture. A combination of these mechanisms results in formation of the dust cake on the filter, which eventually increases the resistance to gas flow. The filter must be cleaned periodically. To ensure the filter bags have a long lifespan they are commonly coated with a filter enhancer (pre-coat). The use of chemically inert limestone (calcium carbonate) is most common as it increases efficiency of dust collection (including fly ash) via formation of what is called a dustcake or coating on the surface of the filter media. This traps fine particulates but also provides protection for the bag itself from moisture, and oily or sticky particulates which can bind the filter media. Without a pre-coat the filter bag allows fine particulates to bleed through the bag filter system, especially during start-up, as the bag can only do part of the filtration leaving the finer parts to the filter enhancer dustcake. Parts Fabric filters generally have the following parts: Clean plenum Dusty plenum Bag, cage, venturi assembly Tubeplate RAV/SCREW Compressed air header Blow pipe Housing and hopper Types Baghouses are classified by the cleaning method used. 
The three most common types of baghouses are mechanical shakers, reverse air, and pulse jet. Mechanical shakers In mechanical-shaker baghouses, tubular filter bags are fastened onto a cell plate at the bottom of the baghouse and suspended from horizontal beams at the top. Dirty gas enters the bottom of the baghouse and passes through the filter, and the dust collects on the inside surface of the bags. Cleaning a mechanical-shaker baghouse is accomplished by shaking the top horizontal bar from which the bags are suspended. Vibration produced by a motor-driven shaft and cam creates waves in the bags to shake off the dust cake. Shaker baghouses range in size from small, handshaker devices to large, compartmentalized units. They can operate intermittently or continuously. Intermittent units can be used when processes operate on a batch basis; when a batch is completed, the baghouse can be cleaned. Continuous processes use compartmentalized baghouses; when one compartment is being cleaned, the airflow can be diverted to other compartments. In shaker baghouses, there must be no positive pressure inside the bags during the shake cycle. Pressures as low as can interfere with cleaning. The air-to-cloth ratio for shaker baghouses is relatively low, hence the space requirements are quite high. However, because of the simplicity of design, they are popular in the minerals processing industry. Reverse air In reverse-air baghouses, the bags are fastened onto a cell plate at the bottom of the baghouse and suspended from an adjustable hanger frame at the top. Dirty gas flow normally enters the baghouse and passes through the bag from the inside, and the dust collects on the inside of the bags. Reverse-air baghouses are compartmentalized to allow continuous operation. Before a cleaning cycle begins, filtration is stopped in the compartment to be cleaned. Bags are cleaned by injecting clean air into the dust collector in a reverse direction, which pressurizes the compartment. The pressure makes the bags collapse partially, causing the dust cake to crack and fall into the hopper below. At the end of the cleaning cycle, reverse airflow is discontinued, and the compartment is returned to the main stream. The flow of the dirty gas helps maintain the shape of the bag. However, to prevent total collapse and fabric chafing during the cleaning cycle, rigid rings are sewn into the bags at intervals. Space requirements for a reverse-air baghouse are comparable to those of a shaker baghouse; however, maintenance needs are somewhat greater. Pulse jet In reverse pulse-jet baghouses, individual bags are supported by a metal cage (filter cage), which is fastened onto a cell plate at the top of the baghouse. Dirty gas enters from the bottom of the baghouse and flows from outside to inside the bags. The metal cage prevents collapse of the bag. The pulse-jet baghouse was invented by MikroPul (currently part of the Nederman group and still a major supplier of filtration solutions) in the 1950s. Bags are cleaned by a short burst of compressed air injected through a common manifold over a row of bags. The compressed air is accelerated by a venturi nozzle mounted at the top of the bag. Since the duration of the compressed-air burst is short (about 0.1 seconds), it acts as a rapidly moving air bubble, traveling through the entire length of the bag and causing the bag surfaces to flex. This flexing of the bags breaks the dust cake, and the dislodged dust falls into a storage hopper below. 
Reverse pulse-jet dust collectors can be operated continuously and cleaned without interruption of flow because the burst of compressed air is very small compared with the total volume of dusty air through the collector. On account of this continuous-cleaning feature, reverse-jet dust collectors are usually not compartmentalized. The short cleaning cycle of reverse-jet collectors reduces recirculation and redeposit of dust. These collectors provide more complete cleaning and reconditioning of bags than shaker or reverse-air cleaning methods. Also, the continuous-cleaning feature allows them to operate at higher air-to-cloth ratios, so the space requirements are lower. A digital sequential timer turns on the solenoid valve at set intervals to inject air into the blow pipe and clean the filters. Bag cleaning Cleaning sequences Two main sequence types are used to clean baghouses: Intermittent (periodic) cleaning Continuous cleaning Intermittently cleaned baghouses are composed of many compartments or sections. Each compartment is periodically closed off from the incoming dirty gas stream, cleaned, and then brought back online. While an individual compartment is offline, the gas stream is diverted away from that compartment. This makes shutting down the production process unnecessary during cleaning cycles. In continuously cleaned baghouses, the compartments are always filtering. A blast of compressed air momentarily interrupts the collection process to clean the bag. This is known as pulse jet cleaning. Pulse jet cleaning does not require taking compartments offline. Continuously cleaned baghouses are designed to prevent complete shutdown during bag maintenance and failures of the primary system. Methods Shaking A rod connecting to the bag is powered by a motor. This provides motion to remove caked-on particles. The speed and motion of the shaking depends on the design of the bag and composition of the particulate matter. Generally shaking is horizontal. The top of the bag is closed and the bottom is open. When shaken, the dust collected on the inside of the bag is freed. No dirty gas flows through a bag while it is being cleaned. This redirection of air flow illustrates why baghouses must be compartmentalized. Reverse air Air flow gives the bag structure. Dirty air flows through the bag from the inside, allowing dust to collect on the interior surface. During cleaning, gas flow is restricted from a specific compartment. Without the flowing air, the bags relax. The cylindrical bag contains rings that prevent it from completely collapsing under the pressure of the air. A fan blows clean air in the reverse direction. The relaxation and reverse air flow cause the dust cake to crumble and release into the hopper. Upon the completion of the cleaning process, dirty air flow continues and the bag regains its shape. Pulse jet This type of baghouse cleaning (also known as pressure-jet cleaning) is the most common. It was invented and patented by MikroPul in 1956. A high pressure blast of air is used to remove dust from the bag. The blast enters the top of the bag tube, temporarily ceasing the flow of dirty air. The shock of air causes a wave of expansion to travel down the fabric. The flexing of the bag shatters and discharges the dust cake. The air burst is about 0.1 second and it takes about 0.5 seconds for the shock wave to travel down the length of the bag. Due to its rapid release, the blast of air does not interfere with contaminated gas flow. 
Therefore, pulse-jet baghouses can operate continuously and are not usually compartmentalized. The blast of compressed air must be powerful enough to ensure that the shock wave will travel the entire length of the bag and fracture the dust cake. The efficiency of the cleaning system allows the unit to have a much higher gas to cloth ratio (or volumetric throughput of gas per unit area of filter) than shaking and reverse air bag filters. This kind of filter thus requires a smaller area to admit the same volume of air. Sonic The least common type of cleaning method is sonic. Some baghouses have ultrasonic horns installed to provide supplementary vibration to increase dust cleaning. The horns, which generate high intensity sound waves at the low end of the ultrasonic spectrum, are turned on just before or at the start of the cleaning cycle to help break the bonds between particles on the filter media surface and aid in dust removal. Sonic cleaning is commonly combined with another method of cleaning to ensure thorough cleaning. Rotating cage Although the principles of this method are basic, the rotating mechanical cage cleaning method is relatively new to the international market. This method can be visualized as hanging a rug on a clothes line and beating the dust out of it. The rotating mechanical cage option consists of a fixed cage attached to the cell plate. Nested inside the cage holding the bag is a secondary cage that is allowed to rotate 90 degrees to impact the inside of the filter bag. This beating action accomplishes the same desired effect of creating a force that dislodges the particulates as the cage moves. This rotating action can be adjusted to achieve the desired whipping effect on the inside of the bag. Cartridge collectors Cartridge collectors use perforated metal cartridges that contain a pleated, nonwoven filtering medium, as opposed to the woven or felt bags used in baghouses. The pleated design allows for a greater total filtering surface area than in a conventional bag of the same diameter. The greater filtering area results in a reduced air to media ratio, pressure drop, and overall collector size. Cartridge collectors are available in single use or continuous duty designs. In single-use collectors, the dirty cartridges are changed and collected dirt is removed while the collector is off. In the continuous duty design, the cartridges are cleaned by the conventional pulse-jet cleaning system. Performance Baghouse performance is dependent upon inlet and outlet gas temperature, pressure drop, opacity, and gas velocity. The chemical composition, moisture, acid dew point, and particle loading and size distribution of the gas stream are essential factors as well. Gas temperature – Fabrics are designed to operate within a certain temperature range. Fluctuation outside of these limits, even for a short period of time, can weaken, damage, or ruin the bags. Pressure drop – Baghouses operate most effectively within a certain pressure drop range. This spectrum is based on a specific gas volumetric flow rate. Opacity – Opacity measures the quantity of light scattering that occurs as a result of the particles in a gas stream. Opacity is not an exact measurement of the concentration of particles; however, it is a good indicator of the amount of dust leaving the baghouse. Gas volumetric flow rate – Baghouses are created to accommodate a range of gas flows. An increase in gas flow rates causes an increase in operating pressure drop and air-to-cloth ratio. 
These increases put more mechanical strain on the baghouses, resulting in more frequent cleanings and high particle velocity, two factors that shorten bag life. Design variables Pressure drop, filter drag, air-to-cloth ratio, and collection efficiency are essential factors in the design of a baghouse. Pressure drop (ΔP) is the resistance to air flow across the baghouse. A high pressure drop corresponds with a higher resistance to airflow. Pressure drop is calculated by determining the difference in total pressure at two points, typically the inlet and outlet. Filter drag is the resistance across the fabric-dust layer. The air-to-cloth ratio (ft/min or cm/s) is defined as the amount of gas entering the baghouse divided by the surface area of the filter cloth. Filter media Fabric filter bags are oval or round tubes, typically long and in diameter, made of woven or felted material. Nonwoven materials are either felted or membrane. Nonwoven materials are attached to a woven backing (scrim). Felted filters contain randomly placed fibers supported by a woven backing material (scrim). In a membrane filter, a thin, porous membrane is bound to the scrim. High energy cleaning techniques such as pulse jet require felted fabrics. Woven filters have a definite repeated pattern. Low energy cleaning methods such as shaking or reverse air allow for woven filters. Various weaving patterns such as plain weave, twill weave, or sateen weave, increase or decrease the amount of space between individual fibers. The size of the space affects the strength and permeability of the fabric. A tighter weave corresponds with low permeability and, therefore, more efficient capture of fine particles. Reverse air bags have anti-collapse rings sewn into them to prevent pancaking when cleaning energy is applied. Pulse jet filter bags are supported by a metal cage, which keeps the fabric taut. To lengthen the life of filter bags, a thin layer of PTFE (teflon) membrane may be adhered to the filtering side of the fabric, keeping dust particles from becoming embedded in the filter media fibers. Some baghouses use pleated cartridge filters, similar to what is found in home air filtration systems. This allows much greater surface area for higher flow at the cost of additional complexity in manufacture and cleaning. See also Dust collector Electrostatic precipitator References External links Baghouse Dust Collector Information Baghouse Technical Drawings Baghouse Knowledgebase Baghouse / Fabric Filter Glossary of Terms Guide To Choosing The Correct Baghouse Filter Pollution control technologies Filters Particulate control
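The air-to-cloth ratio defined above is straightforward to compute. The Python lines below are only a rough sketch with invented example numbers (the bag count, bag dimensions, and gas flow are hypothetical and not taken from any real installation); they apply the generic definitions of cylindrical bag area and of air-to-cloth ratio as gas flow divided by total cloth area, rather than any particular vendor's sizing method.

```python
import math

# Assumed example pulse-jet baghouse geometry -- illustrative values only.
n_bags = 200
bag_diameter_m = 0.15
bag_length_m = 3.0

# Filtering surface of one cylindrical bag (side wall only).
area_per_bag = math.pi * bag_diameter_m * bag_length_m      # m^2
total_cloth_area = n_bags * area_per_bag                    # m^2

# Assumed gas volumetric flow rate through the collector.
gas_flow_m3_per_min = 400.0

# Air-to-cloth ratio: gas flow entering the baghouse divided by cloth area.
air_to_cloth = gas_flow_m3_per_min / total_cloth_area       # m/min

print(f"cloth area = {total_cloth_area:.0f} m^2")
print(f"air-to-cloth ratio = {air_to_cloth:.2f} m/min "
      f"({air_to_cloth * 3.28084:.1f} ft/min)")
```

A higher gas flow through the same cloth area raises this ratio and, as noted above, the operating pressure drop along with it.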
Baghouse
[ "Chemistry", "Engineering" ]
3,312
[ "Chemical equipment", "Filters", "Pollution control technologies", "Filtration", "Environmental engineering" ]
32,273,514
https://en.wikipedia.org/wiki/Minisci%20reaction
The Minisci reaction is a named reaction in organic chemistry. It is a nucleophilic radical substitution on an electron-deficient aromatic compound, most commonly the introduction of an alkyl group to a nitrogen-containing heterocycle. The reaction was published in 1971 by F. Minisci. In the case of N-heterocycles, the conditions must be acidic to ensure protonation of the heterocycle. A typical reaction is that between pyridine and pivalic acid with silver nitrate, sulfuric acid and ammonium persulfate to form 2-tert-butylpyridine. The reaction resembles Friedel-Crafts alkylation but with opposite reactivity and selectivity. The Minisci reaction often produces a mixture of regioisomers that can complicate product purification, but modern reaction conditions are very mild, allowing a wide range of alkyl groups to be introduced. Depending on the radical source used, one side-reaction is acylation, with the ratio between alkylation and acylation depending on the substrate and the reaction conditions. Due to the inexpensive raw materials and simple reaction conditions, the Minisci reaction has found many applications in heterocyclic chemistry. Utility of the Minisci Reaction The reaction allows for alkylation of electron-deficient heterocyclic species, which is not possible with Friedel-Crafts chemistry. A method for alkylating electron-deficient arenes, nucleophilic aromatic substitution, is also unavailable to electron-deficient heterocycles as the ionic nucleophilic species used will deprotonate the heterocycle rather than acting as a nucleophile. Again, in contrast to nucleophilic aromatic substitution, the Minisci reaction does not require functionalisation of the arene, allowing for direct C-H functionalisation. Further to this, the generated alkyl radical species will not rearrange during the reaction in the way that alkyl fragments appended by Friedel-Crafts alkylation often will, meaning that groups such as n-pentyl and cyclopropyl can be added unchanged. The alkyl radical is also a 'soft' nucleophile and so is very unlikely to interact with any 'hard' electrophiles (carbonyl species for example) already present on the heterocycle, which increases the functional group tolerance of the reaction. The reaction has been the subject of much research in recent years, with a focus placed on improved reactivity towards a greater variety of heterocycles, increasing the number of alkylating reagents that can be used, and employing milder oxidants and acids. Mechanism A free radical is formed from the carboxylic acid in an oxidative decarboxylation with silver salts and an oxidizing agent. The oxidizing agent (ammonium persulfate) oxidizes the Ag(+) to Ag(2+) under the acidic reaction conditions. This induces a hydrogen atom abstraction by the silver, followed by radical decarboxylation. The carbon-centered radical then reacts with the pyridinium aromatic compound. The ultimate product is formed by rearomatization. The acylated product is formed from the acyl radical. References Organic reactions Name reactions Carbon-carbon bond forming reactions
Minisci reaction
[ "Chemistry" ]
702
[ "Coupling reactions", "Name reactions", "Carbon-carbon bond forming reactions", "Organic reactions" ]
32,274,237
https://en.wikipedia.org/wiki/Metro%20Bridge
Metro Bridge can refer to a bridge span on a metro (subway) system: Russia Moscow Metro Preobrazhensky Metro Bridge Luzhniki Metro Bridge Smolensky Metro Bridge Novosibirsk Metro Novosibirsk Metro Bridge Turkey Istanbul Metro Golden Horn Metro Bridge Ukraine Kyiv Metro Kyiv Metro Bridge, on the Sviatoshynsko-Brovarska Line Pivdennyi Bridge, on the Syretsko-Pecherska Line Podilskyi Metro Bridge, on the Podilsko-Vyhurivska Line (under construction) Kharkiv Metro Kharkiv Metro Bridge, on the Saltivska Line United Kingdom Tyne and Wear Metro Byker Metro Bridge United States New York City Subway Broadway Bridge Manhattan Bridge Williamsburg Bridge Bridges
Metro Bridge
[ "Engineering" ]
157
[ "Structural engineering", "Bridges" ]
32,274,243
https://en.wikipedia.org/wiki/GRB%20090429B
GRB 090429B was a gamma-ray burst observed on 29 April 2009 by the Burst Alert Telescope aboard the Swift satellite. The burst triggered a standard burst-response observation sequence, which started 106 seconds after the burst. The X-ray telescope aboard the satellite identified an uncatalogued fading source. No optical or UV counterpart was seen in the UV–optical telescope. Around 2.5 hours after the burst trigger, a series of observations was carried out by the Gemini North telescope, which detected a bright object in the infrared part of the spectrum. No evidence of a host galaxy was found either by Gemini North or by the Hubble Space Telescope. Though this burst was detected in 2009, it was not until May 2011 that its distance estimate of 13.14 billion light-years was announced. With 90% likelihood, the burst had a photometric redshift greater than z = 9.06, which would make it the most distant GRB known, although the error bar on this estimate is large, providing a lower limit of z > 7. The amount of energy released in the burst was estimated at 3.5 × 10⁵² erg. For a comparison, the Sun's luminosity is 3.8 × 10³³ erg/s. See also GRB 090423, the most distant gamma-ray burst with spectroscopic confirmation References 090429B 20090429 April 2009 Canes Venatici
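To put the quoted energy in context, a quick back-of-the-envelope comparison with the Sun can be made. The Python lines below simply divide the burst's estimated energy by the solar luminosity figure given above; both numbers are taken from the text, and the conversion to years is ordinary arithmetic rather than a published result.

```python
# Figures quoted above.
burst_energy_erg = 3.5e52        # estimated energy released by GRB 090429B
solar_luminosity_erg_s = 3.8e33  # Sun's energy output per second

seconds = burst_energy_erg / solar_luminosity_erg_s
years = seconds / (365.25 * 24 * 3600)
print(f"The Sun would need ~{years:.1e} years to radiate this much energy.")
# ~2.9e11 years, i.e. roughly 20 times the current age of the universe.
```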
GRB 090429B
[ "Physics", "Astronomy" ]
299
[ "Physical phenomena", "Astronomical events", "Constellations", "Canes Venatici", "Gamma-ray bursts", "Stellar phenomena" ]
32,285,510
https://en.wikipedia.org/wiki/Magnetic%20radiation%20reaction%20force
The magnetic radiation reaction force is a force on an electromagnet when its magnetic moment changes. One can derive an electric radiation reaction force for an accelerating charged particle caused by the particle emitting electromagnetic radiation. Likewise, a magnetic radiation reaction force can be derived for an accelerating magnetic moment emitting electromagnetic radiation. Similar to the electric radiation reaction force, three conditions must be met in order to derive the following formula for the magnetic radiation reaction force. First, the motion of the magnetic moment must be periodic, an assumption used to derive the force. Second, the magnetic moment is traveling at non-relativistic velocities (that is, much slower than the speed of light). Finally, the formula applies only insofar as the force is proportional to the fifth derivative of the position as a function of time (sometimes somewhat facetiously referred to as the "Crackle"). Unlike the Abraham–Lorentz force, the force points in the direction opposite of the "Crackle". Definition and description Mathematically, the magnetic radiation reaction force is given by, in SI units: where: is the force, is the Pop (the third derivative of acceleration, or the fifth derivative of displacement), is the permeability of free space, is the speed of light in free space, is the electric charge of the particle, and is the radius of the magnetic moment. Note that this formula applies only for non-relativistic velocities. Physically, a time-changing magnetic moment emits radiation similar to the Larmor formula of an accelerating charge. Since momentum is conserved, the magnetic moment is pushed in the direction opposite the direction of the emitted radiation. In fact the formula above for radiation force can be derived from the magnetic version of the Larmor formula, as shown below. Background In classical electrodynamics, problems are typically divided into two classes: Problems in which the charge and current sources of fields are specified and the fields are calculated, and The reverse situation, problems in which the fields are specified and the motion of particles is calculated. In some fields of physics, such as plasma physics and the calculation of transport coefficients (conductivity, diffusivity, etc.), the fields generated by the sources and the motion of the sources are solved self-consistently. In such cases, however, the motion of a selected source is calculated in response to fields generated by all other sources. Rarely is the motion of a particle (source) due to the fields generated by that same particle calculated. The reason for this is twofold: Neglect of the "self-fields" usually leads to answers that are accurate enough for many applications, and Inclusion of self-fields leads to problems in physics such as renormalization, some of which are still unsolved, that relate to the very nature of matter and energy. These conceptual problems created by self-fields are highlighted in a standard graduate text. [Jackson] The difficulties presented by this problem touch one of the most fundamental aspects of physics, the nature of the elementary particle. Although partial solutions, workable within limited areas, can be given, the basic problem remains unsolved. One might hope that the transition from classical to quantum-mechanical treatments would remove the difficulties. While there is still hope that this may eventually occur, the present quantum-mechanical discussions are beset with even more elaborate troubles than the classical ones. 
It is one of the triumphs of comparatively recent years (~1948–50) that the concepts of Lorentz covariance and gauge invariance were exploited sufficiently cleverly to circumvent these difficulties in quantum electrodynamics and so allow the calculation of very small radiative effects to extremely high precision, in full agreement with experiment. From a fundamental point of view, however, the difficulties remain. The magnetic radiation reaction force is the result of the most fundamental calculation of the effect of self-generated fields. It arises from the observation that accelerating non-relativistic particles with associated magnetic moment emit radiation. The Abraham–Lorentz force is the average force that an accelerating charged particle feels in the recoil from the emission of radiation. The introduction of quantum effects leads one to quantum electrodynamics. The self-fields in quantum electrodynamics generate a finite number of infinities in the calculations that can be removed by the process of renormalization. This has led to a theory that is able to make the most accurate predictions that humans have made to date. See precision tests of QED. The renormalization process fails, however, when applied to the gravitational force. The infinities in that case are infinite in number, which causes the failure of renormalization. Therefore general relativity has unsolved self-field problems. String theory is a current attempt to resolve these problems for all forces. Derivation We begin with the Larmor formula for radiation of the second derivative of a magnetic moment with respect to time: In the case that the magnetic moment is produced by an electric charge moving along a circular path is where is the position of the charge relative to the center of the circle and is the instantaneous velocity of the charge. The above Larmor formula becomes as follows: If we assume the motion of a charged particle is periodic, then the average work done on the particle by the Abraham–Lorentz force is the negative of the Larmor power integrated over one period from to : Notice that we can integrate the above expression by parts. If we assume that there is periodic motion, the boundary term in the integral by parts disappears: Integrating by parts a second time, we find Clearly, we can identify Signals from the future Below is an illustration of how a classical analysis can lead to surprising results. The classical theory can be seen to challenge standard pictures of causality, thus signaling either a breakdown or a need for extension of the theory. In this case the extension is to quantum mechanics and its relativistic counterpart quantum field theory. See the quote from Rohrlich in the introduction concerning "the importance of obeying the validity limits of a physical theory". For a particle in an external force , we have where This equation can be integrated once to obtain The integral extends from the present to infinitely far in the future. Thus future values of the force affect the acceleration of the particle in the present. The future values are weighted by the factor which falls off rapidly for times greater than in the future. Therefore, signals from an interval approximately into the future affect the acceleration in the present. For an electron, this time is approximately sec, which is the time it takes for a light wave to travel across the "size" of an electron. 
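The equations referred to in the definition and derivation above did not survive extraction. For orientation only, the block below reproduces the standard textbook expressions that the argument leans on: the magnetic-dipole Larmor formula, the magnetic moment of a point charge on a circular path, and, for comparison, the electric Abraham–Lorentz force. These are generic results with the usual textbook coefficients, not necessarily the exact forms used in the original article.

```latex
% Standard results assumed in the sketch above (SI units); coefficients are the
% usual textbook ones and may differ from the stripped originals.
\[ P \;=\; \frac{\mu_0\,\lvert\ddot{\mathbf{m}}\rvert^{2}}{6\pi c^{3}} \]          % magnetic-dipole Larmor formula
\[ \mathbf{m} \;=\; \tfrac{1}{2}\,q\,\mathbf{r}\times\mathbf{v} \]                 % moment of a point charge on a circular path
\[ \mathbf{F}_{\mathrm{AL}} \;=\; \frac{\mu_0 q^{2}}{6\pi c}\,\dot{\mathbf{a}} \]  % electric Abraham--Lorentz force, for comparison
```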
See also Max Abraham Hendrik Lorentz Cyclotron radiation Electromagnetic mass Radiation resistance Radiation damping Synchrotron radiation Wheeler–Feynman absorber theory References Further reading See sections 11.2.2 and 11.2.3 Jose A. Heras, The Radiation Force of an Electron Reexamined, 2003, http://www.joseheras.com/jheras_papers/JAH-PAPER_16.pdf. Donald H. Menzel, Fundamental Formulas of Physics, 1960, Dover Publications Inc., vol. 1, page 345. External links MathPages - Does A Uniformly Accelerating Charge Radiate? Feynman: The Development of the Space-Time View of Quantum Electrodynamics Heras: The Radiation Reaction Force of an Electron Reexamined Electrodynamics Electromagnetic radiation
Magnetic radiation reaction force
[ "Physics", "Mathematics" ]
1,526
[ "Physical phenomena", "Electromagnetic radiation", "Radiation", "Electrodynamics", "Dynamical systems" ]
32,287,441
https://en.wikipedia.org/wiki/C13H10O5
The molecular formula C13H10O5 (molar mass: 246.21 g/mol, exact mass: 246.0528 u) may refer to: Citromycin Hispidin Isopimpinellin Molecular formulas
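The molar and exact masses quoted for this formula can be reproduced with a few lines of Python. The atomic masses below are standard average and monoisotopic values, and the calculation is just the weighted sum over the formula C13H10O5; small differences from the quoted figures come from rounding of the atomic weights.

```python
# Standard atomic weights (average) and monoisotopic masses.
average = {"C": 12.011, "H": 1.008, "O": 15.999}       # g/mol
monoisotopic = {"C": 12.000, "H": 1.00783, "O": 15.99491}  # u
formula = {"C": 13, "H": 10, "O": 5}

molar_mass = sum(average[el] * n for el, n in formula.items())
exact_mass = sum(monoisotopic[el] * n for el, n in formula.items())
print(f"molar mass = {molar_mass:.2f} g/mol")   # ~246.22, close to the 246.21 quoted above
print(f"exact mass = {exact_mass:.4f} u")       # ~246.0528
```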
C13H10O5
[ "Physics", "Chemistry" ]
65
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
31,275,000
https://en.wikipedia.org/wiki/Radiation%20effects%20from%20the%20Fukushima%20Daiichi%20nuclear%20disaster
The radiation effects from the Fukushima Daiichi nuclear disaster are the observed and predicted effects as a result of the release of radioactive isotopes from the Fukushima Daiichi Nuclear Power Plant following the 2011 Tōhoku 9.0 magnitude earthquake and tsunami (Great East Japan Earthquake and the resultant tsunami). The release of radioactive isotopes from reactor containment vessels was a result of venting in order to reduce gaseous pressure, and the discharge of coolant water into the sea. This resulted in Japanese authorities implementing a 30-km exclusion zone around the power plant and the continued displacement of approximately 156,000 people as of early 2013. The number of evacuees has declined to 49,492 as of March 2018. Radioactive particles from the incident, including iodine-131 and caesium-134/137, have since been detected at atmospheric radionuclide sampling stations around the world, including in California and the Pacific Ocean. Preliminary dose-estimation reports by WHO and the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) indicate that, outside the geographical areas most affected by radiation, even in locations within Fukushima prefecture, the predicted risks remain low and no observable increases in cancer above natural variation in baseline rates are anticipated. In comparison, after the Chernobyl reactor accident, only 0.1% of the 110,000 cleanup workers surveyed have so far developed leukemia, although not all cases resulted from the accident. However, 167 Fukushima plant workers received radiation doses that slightly elevate their risk of developing cancer. Estimated effective doses from the accident outside of Japan are considered to be below, or far below, the dose levels regarded as very small by the international radiological protection community. The United Nations Scientific Committee on the Effects of Atomic Radiation is expected to release a final report on the effects of radiation exposure from the accident by the end of 2013. A June 2012 Stanford University study estimated, using a linear no-threshold model, that the radioactivity release from the Fukushima Daiichi nuclear plant could cause 130 deaths from cancer globally (the lower bound for the estimate being 15 and the upper bound 1100) and 199 cancer cases in total (the lower bound being 24 and the upper bound 1800), most of which are estimated to occur in Japan. Radiation exposure to workers at the plant was projected to result in 2 to 12 deaths. However, a December 2012 UNSCEAR statement to the Fukushima Ministerial Conference on Nuclear Safety advised that "because of the great uncertainties in risk estimates at very low doses, UNSCEAR does not recommend multiplying very low doses by large numbers of individuals to estimate numbers of radiation-induced health effects within a population exposed to incremental doses at levels equivalent to or lower than natural background levels." Health effects Preliminary dose-estimation reports by the World Health Organization and the United Nations Scientific Committee on the Effects of Atomic Radiation indicate that 167 plant workers received radiation doses that slightly elevate their risk of developing cancer, but that this risk may not be statistically detectable, as has happened in the case of the Chernobyl nuclear disaster. 
After the Chernobyl accident, only 0.1% of the 110,000 cleanup workers surveyed have so far developed leukemia, although not all cases resulted from the accident. Estimated effective doses from the Fukushima accident outside Japan are considered to be below (or far below) the dose levels regarded as very small by the international radiological protection community. According to the Japanese Government, 180,592 people in the general population were screened in March 2011 for radiation exposure, and no case was found that affected health. Thirty workers conducting operations at the plant had exposure levels greater than 100 mSv. It is believed that the health effects of the radioactivity release are primarily psychological rather than physical. Even in the most severely affected areas, radiation doses never reached more than a quarter of the radiation dose linked to an increase in cancer risk (25 mSv, whereas 100 mSv has been linked to an increase in cancer rates among victims in Hiroshima and Nagasaki). However, people who have been evacuated have suffered from depression and other mental health effects. While there were no deaths caused by radiation exposure, approximately 18,500 people died due to the earthquake and tsunami. Very few cancers would be expected as a result of the very low radiation doses received by the public. John Ten Hoeve and Stanford University professor Mark Z. Jacobson suggest that according to the linear no-threshold model (LNT) the accident is most likely to cause an eventual total of 130 (15–1100) cancer deaths, while noting that the validity of the LNT model at such low doses remains subject to debate. Radiation epidemiologist Roy Shore contends that estimating health effects in a population from the LNT model "is not wise because of the uncertainties". The LNT model did not accurately model casualties from Chernobyl, Hiroshima, or Nagasaki; it greatly overestimated the casualties. Evidence that the LNT model is a gross distortion of damage from radiation has existed since 1946 and was suppressed by Nobel Prize winner Hermann Müller in favour of assertions that no amount of radiation is safe. Two years after the incident (in 2013), the World Health Organization (WHO) indicated that the residents of the area who were evacuated were exposed to little radiation and that radiation-induced health impacts are likely to be below detectable levels. The health risks in the WHO assessment attributable to the Fukushima radioactivity release were calculated by largely applying the conservative Linear no-threshold model of radiation exposure, which assumes that even the smallest amount of radiation exposure has a negative health effect. The WHO report released in 2013 predicts that for populations living around the Fukushima nuclear power plant there is a 70% higher relative risk of developing thyroid cancer for females exposed as infants, a 7% higher relative risk of leukemia in males exposed as infants, and a 6% higher relative risk of breast cancer in females exposed as infants. 
The WHO stresses that the percentages stated in that section of their report are relative risk increases of developing these cancers, not absolute risk increases, since the lifetime absolute baseline risk of developing thyroid cancer in females is 0.75% and the radiation-induced cancer risk is now predicted to increase that 0.75% to 1.25%, with this 0.75% to 1.25% change representing the "70% higher relative risk": The WHO calculations determined that the most at-risk group, infants who were in the most affected area, would experience an absolute increase in the risk of cancer (of all types) during their lifetime of approximately one percent (1%) due to the accident. And the lifetime risk increase for thyroid cancer due to the accident for a female infant in the most affected radiation location was estimated to be one half of one percent (0.5%). Cancer risks for unborn children are considered to be similar to those for one-year-old infants. The estimated risk of cancer to people who were children and adults during the Fukushima accident in the most affected area was determined to be lower than that of the most at-risk group—infants. A thyroid ultrasound screening programme was conducted in 2013 in the entire Fukushima prefecture; this screening programme is, due to the screening effect, likely to record an increase in the incidence of thyroid disease due to early detection of non-symptomatic disease cases. About one-third of people (~30%) in industrialized nations are presently diagnosed with cancer during their lifetimes. Radiation exposure can increase cancer risk, with the cancers that arise being indistinguishable from cancers resulting from other causes. In the general population, no increase is expected in the frequency of tissue reactions attributable to radiation exposure and no increase is expected in the incidence of congenital or developmental abnormalities, including cognitive impairment attributable to in-utero radiation exposure. No significant increase in heritable effects has been found in studies of the children of the survivors of the atomic bombings of Hiroshima and Nagasaki or in the offspring of cancer survivors treated with radiotherapy, which indicates that moderate acute radiation exposures have little impact on the overall risk of heritable effects in humans. As of August 2013, there have been more than 40 children newly diagnosed with thyroid cancer and other cancers in Fukushima prefecture. 18 of these were diagnosed with thyroid cancer, but these cancers are not attributed to radiation from Fukushima, as similar patterns occurred before the accident in 2006 in Japan, with 1 in 100,000 children per year developing thyroid cancer in that year, that is, this is not higher than the pre-accident rate. While controversial scientist Christopher Busby disagrees, claiming the rate of thyroid cancer in Japan was 0.0 children per 100,000 in 2005, the Japan Cancer Surveillance Research Group showed a thyroid cancer rate of 1.3 per 100,000 children in 2005 based on official cancer cases. As a point of comparison, thyroid cancer incidence rates after the Chernobyl accident of 1986 did not begin to increase above the prior baseline value of about 0.7 cases per 100,000 people per year until 1989 to 1991, 3 to 5 years after the accident in both the adolescent and children age groups. 
Therefore, data from Chernobyl suggests that an increase in thyroid cancer around Fukushima is not expected to begin to be seen until at least 3 to 5 years after the accident. According to the Tenth Report of the Fukushima Prefecture Health Management Survey released in February 2013, more than 40% of children screened around Fukushima prefecture were diagnosed with thyroid nodules or cysts. Ultrasonographically detectable thyroid nodules and cysts are extremely common and can be found at a frequency of up to 67% in various studies. 186 (0.5%) of these had nodules larger than 5.1 mm and/or cysts larger than 20.1 mm and underwent further investigation. None had thyroid cancer. Fukushima Medical University gives the number of children diagnosed with thyroid cancer as of December 2013 as 33 and concluded, "[I]t is unlikely that these cancers were caused by the exposure from 131I from the nuclear power plant accident in March 2011". A 2013 article in the Stars and Stripes asserted that a Japanese government study released in February of that year had found that more than 25 times as many people in the area had developed thyroid cancer compared with data from before the disaster. As part of the ongoing precautionary ultrasound screening program in and around Fukushima, 36% of children in Fukushima Prefecture in 2012 were found to have thyroid nodules or cysts, but these are not considered abnormal. This screening programme is, due to the screening effect, likely, according to the WHO, to lead to an increase in the incidence of the diagnosis of thyroid disease due to early detection of non-symptomatic disease cases. For example, the overwhelming majority of thyroid growths prior to the accident, and in other parts of the world, are overdiagnosed (that is, a benign growth that will never cause any symptoms, illness, or death for the patient, even if nothing is ever done about the growth) with autopsy studies, again done prior to the accident and in other parts of the world, on people who died from other causes showing that more than one third (33%+) of adults technically has a thyroid growth/cancer, but it is benign/never caused them any harm. A 2019 study evaluated the first and the second screening rounds of the Fukushima Health Management Survey (FHMS, 2011–2016) separately as well as combined, covering 184 confirmed cancer cases in 1.080 million observed radiation-exposed person-years. The authors concluded "We suggest an innovative statistical technique to determine the municipality-specific average exposed person-time of the participants in the FHMS. The knowledge of the exposed person-time enables the assessment of the association between the radiation dose rate and the thyroid cancer detection rate more precisely than in previous studies. The thyroid cancer detection rate and the radiation dose-rate in the 59 municipalities in the Fukushima prefecture show statistically significant dose-response relationships. The detection rate ratio per μSv/h was 1.065 (1.013, 1.119) based on all data in both examination rounds combined. In the 53 municipalities subjected to less than 2 μSv/h, the detection rate ratio was considerably higher: 1.555 (1.096, 2.206). Therefore, it became evident that the radiation contamination due to the Fukushima nuclear power plant accidents is positively associated with the thyroid cancer detection rate in children and adolescents. 
This corroborates previous studies providing evidence for a causal relation between nuclear accidents and the subsequent occurrence of thyroid cancer." Thyroid cancer is one of the most survivable cancers, with an approximate 94% survival rate after first diagnosis, and that rate increases to a 100% survival rate with catching it early. For example, from 1989 to 2005, an excess of 4000 children and adolescent cases of thyroid cancer were observed in those who lived around Chernobyl; of these 4000 people, nine have died so far, a 99% survival rate. In the 47 prefectures of Japan from 2012 onward, the annual proportion of low birth weight babies (< 2500 g) was associated with the prefecture-specific dose-rate derived from Cs-137 deposition after the nuclear power plant accidents. One μSv/h (equivalent to 8.8 mSv/year) increased the odds of observing a low birth weight baby by approximately 10%. Psychological effects of perceived radiation exposure A survey by the newspaper Mainichi Shimbun computed that there were 1,600 deaths related to the evacuation, comparable to the 1,599 deaths due to the earthquake and tsunami in the Fukushima Prefecture. In the former Soviet Union, many patients with negligible radioactive exposure after the Chernobyl disaster displayed extreme anxiety about low level radiation exposure, and therefore developed many psychosomatic problems, including radiophobia, and with this an increase in fatalistic alcoholism being observed. As Japanese health and radiation specialist Shunichi Yamashita noted: Findings from the Chernobyl disaster indicated the need for rigorous resource allocation, and research findings from Chernobyl were utilized in the Fukushima nuclear power plant disaster response. A survey by the Iitate, Fukushima local government obtained responses from approximately 1,743 people who have evacuated from the village, which lies within the emergency evacuation zone around the crippled Fukushima Daiichi Plant. It shows that many residents are experiencing growing frustration and instability due to the nuclear crisis and an inability to return to the lives they were living before the disaster. Sixty percent of respondents stated that their health and the health of their families had deteriorated after evacuating, while 39.9% reported feeling more irritated compared to before the disaster. Summarizing all responses to questions related to evacuees' current family status, one-third of all surveyed families live apart from their children, while 50.1% live away from other family members (including elderly parents) with whom they lived before the disaster. The survey also showed that 34.7% of the evacuees have suffered salary cuts of 50% or more since the outbreak of the nuclear disaster. A total of 36.8% reported a lack of sleep, while 17.9% reported smoking or drinking more than before they evacuated. Experts on the ground in Japan agree that mental health challenges are the most significant issue. Stress, such as that caused by dislocation, uncertainty, and concern about unseen toxicants, often manifests in physical ailments, such as heart disease. After a nuclear power plant disaster, residents of the affected areas are at a higher risk for mental health illnesses such as depression, anxiety, post-traumatic stress disorder (PTSD), medically unexplained somatic symptoms, and suicide. These mental health illnesses, among others, have been highly prevalent in the Fukushima residents following the nuclear power plant disaster. 
Stressors that were identified as risk factors for these negative mental outcomes include: length of duration of evacuation, house damage, separation from family members, inability to contact family members and friends after the disaster, watching the earthquake on television, life-threatening experience during the quake and tsunamis, injury, plant explosion, unemployment among middle-aged men, burying loved ones themselves, lack of social support, pre-existing health problems, misunderstanding of radiation exposure risk, lack of clarity regarding benefits, on-going stigma regarding radiation, distrust of government, distrust of public health authorities, distrust of the Tokyo Electric Power Company (TEPCO) management, burnout among mental health workers, low income, loss of colleagues, and intra-family conflict. Poor mental health has been associated with early mortality, disability, and the overuse of medical services. The populations at the highest risk for mental health illnesses following the disaster are the nuclear power plant workers, mothers with infants, children, and middle-aged unemployed males. At-risk populations were identified through the implementation of surveys such as the Fukushima Health Management Survey (FHMS). The FHMS began soon after the Fukushima disaster and has tracked health outcomes for several years after the event. The intent and goals of the FHMS surveys were to "monitor the long-term health of residents, promote their future well-being, and confirm whether long-term low-dose radiation exposure has health effects". The FHMS is a general survey that includes four detailed surveys (thyroid ultrasound exam, comprehensive health check, mental health and lifestyle survey, and pregnancy and birth survey). These surveys, which were given to both children and adults, addressed mental status, physical status, 6-month activities, perception of radiation risk, and experiences during and after the disaster. The FHMS surveys are ongoing and are reported out on an annual basis. According to the FHMS survey, the top three mental health diagnoses (discussed further below) include depression, anxiety, and post-traumatic stress disorder. Depression The disaster affected all ages of the population, and though adolescents were more likely to develop mental health problems in general, older adults were more likely to develop depression. For evacuees living in temporary housing, depression was more often long-term when compared to the general population in Fukushima. Rates of depression were high among mothers who lived in Fukushima and were pregnant when the disaster occurred, and remained high in the months after the baby was born. Depressive symptoms occurred even more so in women who experienced interruption in obstetrical care because of the nuclear disaster and potentially from damaged healthcare buildings. About a quarter of the women who were pregnant at the time of the disaster experienced symptoms of depression, and though the proportion of concerned expectant mothers decreased over time, counselling services were still provided in the years following due to the number of women concerned about the potential health effects from the event. Also, one study looking at elderly individuals from Iwanuma City in the Miyagi prefecture found that exercise may help decrease depressive symptoms among older adults who survived the earthquake and tsunami disaster. Anxiety One of the most common fears regarding nuclear disasters is radiation exposure. 
Parental anxiety was one of the reasons for thyroid ultrasound examinations in children after the disaster. In 2015, one study found that in a group of 300,473 children that had undergone a thyroid ultrasound since the Fukushima nuclear disaster, nearly half of this sample had developed nodules or cysts; 116 children from this sample developed nodules that were malignant or otherwise suspicious. Measures were taken to decrease the amount of radiation exposure due to side effects expected to potentially occur with exposure. For example, restrictions were placed on certain food from the region and internationally; Japanese goods were placed under restrictions by some countries initially after the disaster. Tough restrictions were left in place because the public generally did not have a clear idea of the risks from radiation exposure, and implementing policy changes to reflect less restrictive, but low-risk, levels of radiation received push-back in Japan. Furthermore, those in the evacuation zone had to wait to return home, and some residents were unable to return home until several years after the event when living restrictions were finally lifted. However, lifting the living restrictions did not always help residents as most were uneasy about returning home due to fears of health hazards as well as the stability of the communities if they were to return home. Low doses of radiation may not contribute much to create health effects like cancer, and given that such low doses may never lead to disease for most individuals, this raises the question of how evacuation should be handled in a situation like that of Fukushima. Ethical considerations need to be taken regarding the impact on mental health versus the costs of allowing low amounts of exposure. The many ways in which nuclear radiation could affect people in the area, whether through real health consequences or fear, are cause for anxiety; however, it appears that these fears may be settling in the Fukushima population as symptoms of anxiety have become less prevalent over time since the disaster. PTSD At least 10% of participants in studies following the Fukushima disaster developed PTSD. Among power plant workers from the event, it is possible that risk for post-traumatic stress disorder increased with age as younger workers tended not to develop this response as often as older workers. After the nuclear disaster, stigmatization and discrimination were issues in general for nuclear power plant workers in the region whether they worked at the Daiichi plant or another plant that was not part of the nuclear disaster. Greater amounts of discrimination and stressors in the first two to three months after the disaster were associated with general psychological distress and PTSD symptoms a year later, according to one study that assessed the mental health impact of slurs and discrimination on power plant workers. Like other mental health problems, the need for support for PTSD symptoms decreased over time; one study found that the percentage of Fukushima Prefecture adult participants needing support was 15.8% in 2013, a nearly 6% decrease compared to what was observed in 2011 after the disaster. For all mental health needs, support services were provided soon after the disaster and in years following in order to help individuals suffering from symptoms of depression, anxiety, and PTSD; and it appears these services may have been worthwhile as symptoms of these mental health issues decreased in prevalence over time. 
Depression, anxiety, and PTSD were not the only notable mental health concerns that came out of the Fukushima nuclear disaster. Other mental health issues that came out of the event include increased suicide risk. Suicide One of the most severe long-term effects the survey found is an increase in rates of suicide. In the first few years after the disaster, suicide rates decreased, but after 2013, there was a significant increase in the rate of suicide that surpassed the rate of suicide in the year before the disaster. The rate of suicide also increased more rapidly in Fukushima at this time than in surrounding prefectures that were affected by the earthquake and tsunami. There has been suggestion that support services may have helped cause the decrease in rate of suicide for the first few years after the disaster, and the relapse in 2014 may indicate further need for these resources. Overall, the FHMS survey and other research assisted with identifying the barriers to adequate mental health care. The barriers identified to improving the mental health outcomes of Fukushima residents include: delays and miscommunication of benefits, a decline of health professionals assisting due to "burn out", rumors and public stigma of radiation, cultural stigma in Japan against mental health disorders (causing affected individuals to be less likely to seek assistance), distrust in authorities (i.e. government and healthcare professionals), and tension with the community health workers due to differences in perceptions of radiation risk. Based on these barriers, researchers were able to make recommendations for prevention and treatment of such mental health outcomes. In order to effectively assist Fukushima residents and reduce the negative mental health outcomes, there needs to be further research to adequately identify the risk factors for mental health disorders. By doing so, efficient programs may be implemented. Programs (including mental health screenings), treatments, and resource distribution should be focused on high-risk groups immediately after the disaster, such as mothers and infants and nuclear power plant workers. Strategies that aim to reduce the incidence of the negative cultural stigma on mental health disorders in Japan should be implemented. Furthermore, researchers and policymakers should continue to monitor for long-term mental effects as they may not be present right away. Total emissions On 24 May 2012, more than a year after the disaster, TEPCO released their estimate of radioactivity releases due to the Fukushima Daiichi Nuclear Disaster. An estimated 538.1 petabecquerels (PBq) of iodine-131, caesium-134 and caesium-137 was released. 520 PBq was released into the atmosphere between 12 and 31 March 2011 and 18.1 PBq into the ocean from 26 March to 30 September 2011. A total of 511 PBq of iodine-131 was released into both the atmosphere and the ocean, 13.5 PBq of caesium-134 and 13.6 PBq of caesium-137. In May 2012, TEPCO reported that at least 900 PBq had been released "into the atmosphere in March last year [2011] alone" up from previous estimates of 360–370 PBq total. The primary releases of radioactive nuclides have been iodine and caesium; strontium and plutonium have also been found. These elements have been released into the air via steam; and into the water leaking into groundwater or the ocean. 
The expert who prepared a frequently cited Austrian Meteorological Service report asserted that the "Chernobyl accident emitted much more radioactivity and a wider diversity of radioactive elements than Fukushima Daiichi has so far, but it was iodine and caesium that caused most of the health risk – especially outside the immediate area of the Chernobyl plant." Iodine-131 has a half-life of 8 days while caesium-137 has a half-life of over 30 years. The IAEA has developed a method that weighs the "radiological equivalence" of different elements. TEPCO has published estimates using a simple-sum methodology, and has not released a total water and air release estimate. According to a June 2011 report of the International Atomic Energy Agency (IAEA), at that time no confirmed long-term health effects to any person had been reported as a result of radiation exposure from the nuclear accident. According to a report published by one expert in the Journal of Atomic research, the Japanese government claims that the release of radioactivity is about one-tenth of that from the Chernobyl disaster, and the contaminated area is also about one-tenth that of Chernobyl.
Air releases
A 12 April report prepared by NISA estimated the total release of iodine-131 at 130 PBq and of caesium-137 at 6.1 PBq. On 23 April the NSC updated its release estimates, but it did not re-estimate the total release, instead indicating that 154 TBq of air release were occurring daily as of 5 April. On 24 August 2011, the Nuclear Safety Commission (NSC) of Japan published the results of the recalculation of the total amount of radioactive materials released into the air during the incident at the Fukushima Daiichi Nuclear Power Station. The total amounts released between 11 March and 5 April were revised downwards to 130 PBq for iodine-131 (I-131) and 11 PBq for caesium-137 (Cs-137). Earlier estimations were 150 PBq and 12 PBq. On 20 September the Japanese government and TEPCO announced the installation of new filters at reactors 1, 2 and 3 to reduce the release of radioactive materials into the air. Gases from the reactors would be decontaminated before being released into the air. In the first half of September 2011 the amount of radioactive substances released from the plant was about 200 million becquerels per hour, according to TEPCO, approximately one four-millionth of the level of the initial stages of the accident in March. According to TEPCO the emissions immediately after the accident were around 220 billion becquerels; readings declined after that, and in November and December 2011 they dropped to 17 thousand becquerels, about one thirteen-millionth of the initial level. But in January 2012, due to human activities at the plant, the emissions rose again to 19 thousand becquerels. Radioactive materials around reactor 2, where the surroundings were still highly contaminated, were stirred up by the workers going in and out of the building when they inserted an optical endoscope into the containment vessel as a first step toward decommissioning the reactor.
Iodine-131
A widely cited Austrian Meteorological Service report estimated the total amount of I-131 released into the air as of 19 March based on extrapolating data from several days of ideal observation at some of its worldwide CTBTO radionuclide measuring facilities (Freiburg, Germany; Stockholm, Sweden; Takasaki, Japan and Sacramento, USA) during the first 10 days of the accident.
The report's estimates of total I-131 emissions based on these worldwide measuring stations ranged from 10 PBq to 700 PBq. This estimate was 1% to 40% of the 1760 PBq of the I-131 estimated to have been released at Chernobyl. A later, 12 April 2011, NISA and NSC report estimated the total air release of iodine-131 at 130 PBq and 150 PBq, respectively – about 30 grams. However, on 23 April, the NSC revised its original estimates of iodine-131 released. The NSC did not estimate the total release size based upon these updated numbers, but estimated a release of 0.14 TBq per hour (0.00014 PBq/hr) on 5 April. On 22 September the results were published of a survey conducted by the Japanese Science Ministry. This survey showed that radioactive iodine was spread northwest and south of the plant. Soil samples were taken at 2,200 locations, mostly in Fukushima Prefecture, in June and July, and with this a map was created of the radioactive contamination as of 14 June. Because of the short half-life of 8 days only 400 locations were still positive. This map showed that iodine-131 spread northwest of the plant, just like caesium-137 as indicated on an earlier map. But I-131 was also found south of the plant at relatively high levels, even higher than those of caesium-137 in coastal areas south of the plant. According to the ministry, clouds moving southwards apparently caught large amounts of iodine-131 that were emitted at the time. The survey was done to determine the risks for thyroid cancer within the population. Tellurium-129m On 31 October the Japanese ministry of Education, Culture, Sports, Science and Technology released a map showing the contamination of radioactive tellurium-129m within a 100-kilometer radius around the Fukushima No. 1 nuclear plant. The map displayed the concentrations found of tellurium-129m – a byproduct of uranium fission – in the soil at 14 June 2011. High concentrations were discovered northwest of the plant and also at 28 kilometers south near the coast, in the cities of Iwaki, Fukushima Prefecture, and Kitaibaraki, Ibaraki Prefecture. Iodine-131 was also found in the same areas, and most likely the tellurium was deposited at the same time as the iodine. The highest concentration found was 2.66 million becquerels per square meter, two kilometers from the plant in the empty town of Okuma. Tellurium-129m has a half-life of 33.6 days, so present levels are a very small fraction of the initial contamination. Tellurium has no biological functions, so even when drinks or food were contaminated with it, it would not accumulate in the body, like iodine in the thyroid gland. Caesium-137 On 24 March 2011, the Austrian Meteorological Service report estimated the total amount of caesium-137 released into the air as of 19 March based on extrapolating data from several days of ideal observation at a handful of worldwide CTBTO radionuclide measuring facilities. The agency estimated an average being 5 PBq daily. Over the course of the disaster, Chernobyl put out a total of 85 PBq of caesium-137. However, later reporting on 12 April estimated total caesium releases at 6.1 PBq to 12 PBq, respectively by NISA and NSC – about 2–4 kg. On 23 April, NSC updated this number to 0.14 TBq per hour of caesium-137 on 5 April, but did not recalculate the entire release estimate. 
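The relationship between the activity figures quoted above (in becquerels) and the corresponding physical masses, such as the roughly 30 grams for 130 PBq of iodine-131 or the 2–4 kg for 6.1–12 PBq of caesium-137, follows from the radioactive decay law. The sketch below is an illustrative calculation only; the half-lives and molar masses used are standard reference values and are not taken from the cited reports:

```python
import math

AVOGADRO = 6.022e23  # atoms per mole

def activity_to_mass_grams(activity_bq, half_life_s, molar_mass_g):
    """Convert an activity in becquerels to a mass in grams.

    A = lambda * N with lambda = ln(2) / half-life, so
    N = A / lambda and mass = (N / N_A) * molar mass.
    """
    decay_constant = math.log(2) / half_life_s   # per second
    atoms = activity_bq / decay_constant
    return atoms / AVOGADRO * molar_mass_g

DAY = 86400.0
YEAR = 365.25 * DAY

# 130 PBq of iodine-131 (half-life about 8.02 days) -> roughly 30 g
print(activity_to_mass_grams(130e15, 8.02 * DAY, 131))   # ~28 g

# 6.1-12 PBq of caesium-137 (half-life about 30.1 years) -> roughly 2-4 kg
print(activity_to_mass_grams(6.1e15, 30.1 * YEAR, 137))  # ~1.9e3 g
print(activity_to_mass_grams(12e15, 30.1 * YEAR, 137))   # ~3.7e3 g
```

This reproduces the order of magnitude of the masses quoted above.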
Strontium-90
On 12 October 2011 a concentration of 195 becquerels per kilogram of strontium-90 was found in the sediment on the roof of an apartment building in the city of Yokohama, south of Tokyo, some 250 km from the plant in Fukushima. This first finding of strontium above 100 becquerels per kilogram raised serious concerns that leaked radioactivity might have spread far further than the Japanese government expected. The finding was made by a private agency that conducted the test at the request of a resident. After this finding, Yokohama city started an investigation of soil samples collected from areas near the building. The science ministry said that the source of the strontium was still unclear.
Plutonium isotopes
On 30 September 2011, the Japanese Ministry of Education and Science published the results of a plutonium fallout survey, for which 50 soil samples were collected in June and July from a radius of slightly more than 80 km around the Fukushima Daiichi plant. Plutonium was found in all samples, which is to be expected since plutonium from the nuclear weapon tests of the 1950s and '60s is found everywhere on the planet. The highest levels found (of Pu-239 and Pu-240 combined) were 15 becquerels per square meter in Fukushima Prefecture and 9.4 becquerels per square meter in Ibaraki Prefecture, compared to a global average of 0.4 to 3.7 Bq/kg from atomic bomb tests. Earlier in June, university researchers detected smaller amounts of plutonium in soil outside the plant after they collected samples during filming by NHK. A recent study published in Nature found up to 35 Bq/kg of plutonium-241 in leaf litter in 3 out of 19 sites in the most contaminated zone in Fukushima. They estimated the Pu-241 dose for a person living for 50 years in the vicinity of the most contaminated site to be 0.44 mSv. However, the Cs-137 activity at the sites where Pu-241 was found was very high (up to 4.7 MBq/kg, or about 135,000 times greater than the plutonium-241 activity), which suggests that it will be the Cs-137 that prevents habitation, rather than the relatively small amounts of plutonium of any isotope in these areas.
Water releases
Discharge of radioactive water from the Fukushima Daiichi Nuclear Power Plant began in April 2011. On 21 April, TEPCO estimated that 520 tons of radioactive water leaked into the sea before leaks in a pit in unit 2 were plugged, totaling 4.7 PBq of water release (calculated by simple sum, which is inconsistent with the IAEA methodology for mixed-nuclide releases; about 20,000 times the facility's annual limit). TEPCO's detailed estimates were 2.8 PBq of I-131, 0.94 PBq of Cs-134, and 0.94 PBq of Cs-137. Another 300,000 tons of relatively less-radioactive water had already been reported to have leaked or been purposefully pumped into the sea to free room for storage of highly radioactively contaminated water. TEPCO had attempted to contain contaminated water in the harbor near the plant by installing "curtains" to prevent outflow, but now believes this effort was unsuccessful. According to a report published in October 2011 by the French Institute for Radiological Protection and Nuclear Safety, between 21 March and mid-July around 2.7 × 10¹⁶ Bq (27 PBq) of caesium-137 (about 8.4 kg) entered the ocean, about 82 percent having flowed into the sea before 8 April. This represents the largest single emission of artificial radioactivity into the sea ever observed.
However, the Fukushima coast has some of the world's strongest currents, and these transported the contaminated waters far into the Pacific Ocean, causing great dispersion of the radioactive elements. The results of measurements of both the seawater and the coastal sediments led to the supposition that the consequences of the accident, in terms of radioactivity, would be minor for marine life as of autumn 2011 (weak concentration of radioactivity in the water and limited accumulation in sediments). On the other hand, significant pollution of sea water along the coast near the nuclear plant might persist, because of the continuing arrival of radioactive material transported towards the sea by surface water running over contaminated soil. Further, some coastal areas might have less-favorable dilution or sedimentation characteristics than those observed so far. Finally, the possible presence of other persistent radioactive substances, such as strontium-90 or plutonium, has not been sufficiently studied. Recent measurements show persistent contamination of some marine species (mostly fish) caught along the coast of Fukushima district. Organisms that filter water and fish at the top of the food chain are, over time, the most sensitive to caesium pollution. It is thus justified to maintain surveillance of marine life that is fished in the coastal waters off Fukushima. Despite caesium concentrations in the waters off Japan being 10 to 1,000 times above pre-accident levels, radiation risks are below what is generally considered harmful to marine animals and human consumers. A year after the disaster, in April 2012, sea fish caught near the Fukushima power plant still contained as much radioactive caesium-134 and caesium-137 as fish caught in the days after the disaster. At the end of October 2012 TEPCO admitted that it could not exclude radioactivity releases into the ocean, although the radiation levels had stabilised. Undetected leaks into the ocean from the reactors could not be ruled out, because their basements remained flooded with cooling water, and the 2,400-foot-long steel and concrete wall between the site's reactors and the ocean, which was to reach 100 feet underground, was still under construction and would not be finished before mid-2014. Around August 2012 two greenling caught close to the Fukushima shore were found to contain more than 25 kBq per kilogram of caesium, the highest caesium levels found in fish since the disaster and 250 times the government's safety limit. In August 2013, a Nuclear Regulatory Authority task force reported that contaminated groundwater had breached an underground barrier, was rising toward the surface and exceeded legal limits of radioactive discharge. The underground barrier was only effective in solidifying the ground at least 1.8 meters below the surface, and water began seeping through shallow areas of earth into the sea.
Radiation at the plant site
Radiation fluctuated widely on the site after the tsunami and often correlated with fires and explosions on site. The radiation dose rate at one location between reactor units 3 and 4 was measured at 400 mSv/h at 10:22 JST on 13 March, causing experts to urge rapid rotation of emergency crews as a method of limiting exposure to radiation. Dose rates of 1,000 mSv/h were reported (but not confirmed by the IAEA) close to certain reactor units on 16 March, prompting a temporary evacuation of plant workers, with radiation levels subsequently dropping back to 800–600 mSv/h.
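The call for rapid rotation of emergency crews follows from simple arithmetic: at a constant dose rate, the time before a worker reaches a given cumulative dose limit is the limit divided by the rate. The sketch below is an order-of-magnitude illustration using the 100 mSv and 250 mSv limits discussed in the next section; it is not a description of TEPCO's actual dose-management procedure:

```python
def stay_time_minutes(dose_limit_msv, dose_rate_msv_per_h):
    """Minutes before an accumulated dose reaches the given limit,
    assuming a constant dose rate."""
    return dose_limit_msv / dose_rate_msv_per_h * 60.0

# At 400 mSv/h (the rate measured between units 3 and 4 on 13 March):
print(stay_time_minutes(100, 400))   # 15 minutes to the pre-accident annual limit
print(stay_time_minutes(250, 400))   # 37.5 minutes to the emergency limit

# At 1,000 mSv/h (the unconfirmed rate reported on 16 March):
print(stay_time_minutes(250, 1000))  # 15 minutes
```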
At times, radiation monitoring was hampered: some radiation levels may have been higher than 1 Sv/h, but "authorities say 1,000 millisieverts [per hour] is the upper limit of their measuring devices."
Exposure of workers
Prior to the accident, the maximum permissible dose for Japanese nuclear workers was 100 mSv per year, but on 15 March 2011, the Japanese Health and Labor Ministry increased that annual limit to 250 mSv for emergency situations. This level is below the 500 mSv/year considered acceptable for emergency work by the World Health Organization. Some contract companies working for TEPCO have opted not to use the higher limit. On 15 March, TEPCO decided to work with a skeleton crew (called the Fukushima 50 in the media) in order to minimize the number of people exposed to radiation. On 17 March, IAEA reported that 17 persons had suffered deposition of radioactive material on their faces; the levels of exposure were too low to warrant hospital treatment. On 22 March, World Nuclear News reported that one worker had received over 100 mSv during "venting work" at Unit 3. An additional six had received over 100 mSv, and for one of them a level of over 150 mSv was reported for unspecified activities on site. On 24 March, three workers were exposed to high levels of radiation, two of whom required hospital treatment after radioactive water seeped through their protective clothes while working in unit 3. Based on the dosimeter values, exposures of 170 mSv were estimated; the injuries indicated exposure of 2,000 to 6,000 mSv around their ankles. They were not wearing protective boots, as their employing firm's safety manuals "did not assume a scenario in which its employees would carry out work standing in water at a nuclear power plant". The radioactivity of the water was about 3.9 MBq per cubic centimetre. As of 24 March 19:30 (JST), 17 workers (of whom 14 were from plant operator TEPCO) had been exposed to levels of over 100 mSv. By 29 March, the number of workers reported to have been exposed to levels of over 100 mSv had increased to 19. An American physician reported that Japanese doctors had considered banking blood for future treatment of workers exposed to radiation. TEPCO started a re-assessment of the approximately 8,300 workers and emergency personnel who had been involved in responding to the incident, which revealed that by 13 July, of the approximately 6,700 personnel tested so far, 88 had received between 100 and 150 mSv, 14 between 150 and 200 mSv, 3 between 200 and 250 mSv, and 6 above 250 mSv. TEPCO has been criticized over its provision of safety equipment for its workers. After NISA warned TEPCO that workers were sharing dosimeters, since most of the devices were lost in the disaster, the utility sent more to the plant. Japanese media have reported that workers indicate that standard decontamination procedures are not being observed. Other reports suggest that contract workers are given more dangerous work than TEPCO employees. TEPCO has also sought workers willing to risk high radiation levels for short periods of time in exchange for high pay. Confidential documents acquired by the Japanese Asahi newspaper suggest that TEPCO hid high levels of radioactive contamination from employees in the days following the incident.
In particular, the Asahi reported that radiation levels of 300 mSv/h were detected at least twice on 13 March, but that "the workers who were trying to bring the disaster under control at the plant were not informed of the levels." Workers on-site now wear full-body radiation protection gear, including masks and helmets covering their entire heads, but this means they face another enemy: heat. As of 19 July 2011, 33 cases of heat stroke had been recorded. In these harsh working conditions, two workers in their 60s died from heart failure.
Iodine intake
On 19 July 2013 TEPCO said that 1,973 employees would have a thyroid-radiation dose exceeding 100 millisieverts. 19,592 workers—3,290 TEPCO employees and 16,302 employees of contractor firms—were given health checks. The radiation doses were checked for 522 workers. Those were reported to the World Health Organization in February 2013. From this sample, 178 had experienced a dose of 100 millisieverts or more. After the U.N. Scientific Committee on the Effects of Atomic Radiation questioned the reliability of TEPCO's thyroid gland dosage readings, the Japanese Health Ministry ordered TEPCO to review the internal dosage readings. The intake of radioactive iodine was calculated based on the radioactive caesium intake and other factors: the airborne iodine-to-caesium ratio on the days that the people worked at the reactor compound and other data. For one worker, a reading of more than 1,000 millisieverts was found. According to the workers, TEPCO did little to inform them about the hazards of the intake of radioactive iodine. All workers with an estimated dose of 100 millisieverts were offered a free annual ultrasound thyroid test for the rest of their lives. But TEPCO did not know how many of these people had already received a medical screening. A schedule for the thyroid gland test was not announced. TEPCO did not indicate what would be done if abnormalities were spotted during the tests.
Radiation within the primary containment of the reactors
Within the primary containment of reactors 1, 2, 3 and 4, widely varying levels of radiation were reported.
Radiation outside primary containment of the reactors
Outside the primary containment, plant radiation-level measurements have also varied significantly. On 25 March, an analysis of stagnant water in the basement floor of the turbine building of Unit 1 showed heavy contamination. On 27 March, TEPCO reported that stagnant water in the basement of unit 2 (inside the reactor/turbine building complex, but outside the primary containment) was measured at 1000 mSv/h or more, which prompted evacuation. The exact dose rate remains unknown as the technicians fled the place after their first measurement went off-scale. Additional basement and trench-area measurements indicated 60 mSv/h in unit 1, "over 1000" mSv/h in unit 2, and 750 mSv/h in unit 3. The report indicated the main source was iodine-134, with a half-life of less than an hour, which resulted in a radioactive iodine concentration 10 million times the normal value in the reactor. TEPCO later retracted its report, stating that the measurements were inaccurate, and attributed the error to comparing the isotope responsible, iodine-134, to normal levels of another isotope. Measurements were then corrected, stating that the iodine levels were 100,000 times the normal level. On 28 March, the erroneous radiation measurement caused TEPCO to reevaluate the software used in analysis.
Measurements within the reactor/turbine buildings, but not in the basement and trench areas, were made on 18 April. These robotic measurements indicated up to 49 mSv/h in unit 1 and 57 mSv/h in unit 3. This is substantially lower than the basement and trench readings, but still exceeds safe working levels without constant worker rotation. Inside primary containment, levels are much higher. By 23 March 2011, neutron radiation had been observed outside the reactors 13 times at the Fukushima I site. While this could indicate ongoing fission, a recriticality event was not believed to account for these readings. Based on those readings and TEPCO reports of high levels of chlorine-38, Dr. Ferenc Dalnoki-Veress speculated that transient criticalities may have occurred. However, Edwin Lyman at the Union of Concerned Scientists was skeptical, believing the reports of chlorine-38 to be in error. TEPCO's chlorine-38 report was later retracted. Noting that limited, uncontrolled chain reactions might occur at Fukushima I, a spokesman for the International Atomic Energy Agency (IAEA) "emphasized that the nuclear reactors won't explode." On 15 April, TEPCO reported that nuclear fuel had melted and fallen to the lower containment sections of three of the Fukushima I reactors, including reactor three. The melted material was not expected to breach one of the lower containers, causing a serious radioactivity release. Instead, the melted fuel was thought to have dispersed uniformly across the lower portions of the containers of reactors No. 1, No. 2 and No. 3, making the resumption of the fission process, known as a "recriticality," most unlikely. On 19 April, TEPCO estimated that the unit-2 turbine basement contained 25,000 cubic meters of contaminated water. The water was measured to have 3 MBq/cm3 of Cs-137 and 13 MBq/cm3 of I-131: TEPCO characterized this level of contamination as "extremely high." To attempt to prevent leakage to the sea, TEPCO planned to pump the water from the basement to the Centralized Radiation Waste Treatment Facility. A suspected hole from the melting of fuel in unit 1 has allowed water to leak in an unknown path from unit 1 which has exhibited radiation measurements "as high as 1,120 mSv/h." Radioactivity measurements of the water in the unit-3 spent-fuel pool were reported at 140 kBq of radioactive caesium-134 per cubic centimeter, 150 kBq of caesium-137 per cubic centimeter, and 11 kBq per cubic centimeter of iodine-131 on 10 May. Site contamination Soil TEPCO have reported at three sites 500 meters from the reactors that the caesium-134 and caesium-137 levels in the soil are between 7.1 kBq and 530 kBq per kilo of undried soil. Small traces of plutonium have been found in the soil near the stricken reactors: repeated examinations of the soil suggest that the plutonium level is similar to the background level caused by atomic bomb tests. As the isotope signature of the plutonium is closer to that of power-reactor plutonium, TEPCO suggested that "two samples out of five may be the direct result of the recent incident." The more important thing to look at is the curium level in the soil; the soil does contain a short-lived isotope (curium-242) which shows that some alpha emitters have been released in small amounts by the accident. The release of the beta/gamma emitters such as caesium-137 has been far greater. In the short and medium term the effects of the iodine and the caesium release will dominate the effect of the accident on farming and the general public. 
In common with almost all soils, the soil at the reactor site contains uranium, but the concentration and isotope signature suggest that this is the normal, natural uranium present in soil. Radioactive strontium-89 and strontium-90 were discovered in soil at the plant on 18 April, with amounts detected in soil one-half kilometer from the facility ranging from 3.4 to 4,400 Bq/kg of dry soil. Strontium remains in soil from above-ground nuclear testing; however, the amounts measured at the facility are approximately 130 times greater than the amount typically associated with previous nuclear testing. The isotope signature of the release looks very different from that of the Chernobyl accident: the Japanese accident has released much less of the involatile plutonium, minor actinides and fission products than Chernobyl did. On 31 March, TEPCO reported that it had measured radioactivity in the plant-site groundwater which was 10,000 times the government limit. The company did not think that this radioactivity had spread to drinking water. NISA questioned the radioactivity measurement and TEPCO is re-evaluating it. Some debris around the plant has been found to be highly radioactive, including a concrete fragment emitting 900 mSv/h.
Air and direct radiation
Air outside, but near, unit 3 was reported at 70 mSv/h on 26 April 2011. This was down from radiation levels as high as 130 mSv/h near units 1 and 3 in late March. Removal of debris reduced the radiation measurements from localized highs of up to 900 mSv/h to less than 100 mSv/h at all exterior locations near the reactors; however, readings of 160 mSv/h were still measured at the waste-treatment facility.
Discharge to seawater and contaminated sealife
Results revealed on 22 March from a sample taken by TEPCO about 100 m south of the discharge channel of units 1–4 showed elevated levels of Cs-137, caesium-134 (Cs-134) and I-131. A sample of seawater taken on 22 March 330 m south of the discharge channel (30 kilometers off the coastline) had elevated levels of I-131 and Cs-137. Also, north of the plant, elevated levels of these isotopes were found on 22 March (as well as Cs-134, tellurium-129 and tellurium-129m (Te-129m)), although the levels were lower. Samples taken on 23 and/or 24 March contained about 80 Bq/mL of iodine-131 (1,850 times the statutory limit) and 26 Bq/mL of caesium-137, most likely caused by atmospheric deposition. By 26 and 27 March these levels had decreased to 50 Bq/mL of iodine-131 and 7 Bq/mL of caesium-137 (80 times the limit). Hidehiko Nishiyama, a senior NISA official, stated that radionuclide contamination would "be very diluted by the time it gets consumed by fish and seaweed." Above the seawater, IAEA reported "consistently low" dose rates of 0.04–0.1 μSv/h on 27 March. By 29 March iodine-131 levels in seawater 330 m south of a key discharge outlet had reached 138 Bq/mL (3,355 times the legal limit), and by 30 March, iodine-131 concentrations had reached 180 Bq/mL at the same location near the Fukushima Daiichi nuclear plant, 4,385 times the legal limit. The high levels could be linked to a feared overflow of highly radioactive water that appeared to have leaked from the unit-2 turbine building. On 15 April, I-131 levels were 6,500 times the legal limits. On 16 April, TEPCO began dumping zeolite, a mineral "that absorbs radioactive substances, aiming to slow down contamination of the ocean." Seawater radionuclide concentrations measured on 29 March 2011 were as follows:
Nuclide | Concentration (Bq/cm³) | Regulatory limit (Bq/cm³) | Concentration / regulatory limit
        | 0.16 | 40 | 0.0004
        | 130 | 0.04 | 3,250
        | 31 | 0.06 | 517
        | 2.8 | 0.3 | 9.3
        | 32 | 0.09 | 356
        | 5.0 | 0.3 | 17
        | 2.5 | 0.4 | 6.3
On 4 April, it was reported that the "operators of Japan's crippled power plant say they will release more than 10,000 tons of contaminated water into the ocean to make room in their storage tanks for water that is even more radioactive." Measurements taken on 21 April indicated 186 Bq/L at 34 km from the Fukushima plant; Japanese media reported this level of seawater contamination as second only to that from the Sellafield nuclear accident. On 11 May, TEPCO announced it believed it had sealed a leak from unit 3 to the sea; TEPCO did not immediately announce the amount of radioactivity released by the leak. On 13 May, Greenpeace announced that 10 of the 22 seaweed samples it had collected near the plant showed 10,000 Bq/kg or higher, five times the Japanese food standard of 2 kBq/kg for iodine-131 (the standard for radioactive caesium is 500 Bq/kg). In addition to the large releases of contaminated water (520 tons and 4.7 PBq) believed to have leaked from unit 2 from mid-March until early April, another release of radioactive water is believed to have contaminated the sea from unit 3, because on 16 May TEPCO announced seawater measurements of 200 Bq per cubic centimeter of caesium-134, 220 Bq per cubic centimeter of caesium-137, and unspecified high levels of iodine shortly after discovering a unit-3 leak. At two locations, 20 kilometers north and south of the plant and 3 kilometers from the coast, TEPCO found strontium-89 and strontium-90 in the seabed soil. The samples were taken on 2 June. Up to 44 becquerels per kilogram of strontium-90, which has a half-life of 29 years, were detected. These isotopes were also found in soil and in seawater immediately after the accident. Samples taken from fish and seafood caught off the coast of Ibaraki and Chiba did not contain radioactive strontium. As of October 2012, regular sampling of fish and other sea life off the coast of Fukushima showed that total caesium levels in bottom-dwelling fish were higher off Fukushima than elsewhere, with levels above regulatory limits, leading to a fishing ban for some species. Caesium levels had not decreased 1 year after the accident. Continuous monitoring of radioactivity levels in seafood by the Japanese Ministry of Agriculture, Forestry and Fisheries (MAFF) shows that for Fukushima Prefecture the proportion of catches which exceed Japanese safety standards has been decreasing continuously, falling below 2% in the second half of 2013 and below 0.5% in the fourth quarter of 2014. None of the fish caught in 2014 exceeded the less stringent pre-Fukushima standards. For the rest of Japan, the peak figure using the post-Fukushima standards was 4.7% immediately after the catastrophe, falling below 0.5% by mid-2012, and below 0.1% by mid-2013. In February 2014, NHK reported that TEPCO was reviewing its radioactivity data after finding much higher levels of radioactivity than were reported earlier. TEPCO now says that levels of 5 MBq of strontium per liter were detected in groundwater collected in July 2013, and not 0.9 MBq as initially reported.
Radiation and nuclide detection in Japan
Periodic overall reports of the situation in Japan are provided by the United States Department of Energy.
In April 2011, the United States Department of Energy published projections of the radiation risks over the next year for people living in the neighborhood of the plant. Potential exposure could exceed 20 mSv/year (2 rems/year) in some areas up to 50 kilometers from the plant. That is the level at which relocation would be considered in the US, and it is a level that could cause roughly one extra cancer case in 500 young adults. However, natural radiation levels are higher in some parts of the world than the projected level mentioned above, and about 4 people out of 10 can be expected to develop cancer without exposure to radiation. Further, the radiation exposure resulting from the incident for most people living in Fukushima is so small compared to background radiation that it may be impossible to find statistically significant evidence of increases in cancer. The highest radiation detection outside of Fukushima peaked at 40 mSv. This represents a much lower level than the amount required to increase a person's risk of cancer; 100 mSv represents the level at which a definitive increased risk of cancer occurs. Radiation above this level increases the risk of cancer, and after 400 mSv radiation poisoning can occur, but is unlikely to be fatal.
Air exposure within 30 kilometers
The zone within 20 km from the plant was evacuated on 12 March, while residents within a distance of up to 30 km were advised to stay indoors. IAEA reported on 14 March that about 150 people in the vicinity of the plant "received monitoring for radiation levels"; 23 of these people were also decontaminated. From 25 March, nearby residents were encouraged to participate in voluntary evacuation. At a distance of from the site, radiation of 3–170 μSv/h was measured to the north-west on 17 March, while it was 1–5 μSv/h in other directions. Experts said exposure to this amount of radiation for 6 to 7 hours would result in absorption of the maximum level considered safe for one year. On 16 March Japan's ministry of science measured radiation levels of up to 330 μSv/h 20 kilometers northwest of the power plant. At some locations around 30 km from the Fukushima plant, the dose rates rose significantly in 24 hours on 16–17 March: in one location from 80 to 170 μSv/h and in another from 26 to 95 μSv/h. The levels varied according to the direction from the plant. In most locations, the levels remained well below those that would damage human health. Natural exposure varies from place to place but delivers a dose equivalent in the vicinity of 2.4 mSv/year, or about 0.3 μSv/h. For comparison, one chest x-ray is about 0.2 mSv and an abdominal CT scan is supposed to be less than 10 mSv (but it has been reported that some abdominal CT scans can deliver as much as 90 mSv). People can mitigate their exposure to radiation through a variety of protection techniques. On 22 April 2011 a Japanese government report was presented by Minister of Trade Yukio Edano to leaders of the town Futaba. In it, predictions were made about radioactivity releases for the years 2012 up to 2132. According to this report, in several parts of Fukushima Prefecture – including Futaba and Okuma – the air would remain dangerously radioactive, at levels above 50 millisieverts a year. This was all based on measurements done in November 2011.
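The dose figures above mix hourly rates (microsieverts per hour) with annual doses (millisieverts and rems); the conversions are straightforward. The following sketch is illustrative only, assumes continuous exposure at a constant rate, and uses the standard 1 rem = 10 mSv equivalence behind the "20 mSv/year (2 rems/year)" figure quoted above:

```python
HOURS_PER_YEAR = 24 * 365.25

def annual_dose_msv(rate_usv_per_h):
    """Annual dose in millisieverts from a constant dose rate in microsieverts/hour."""
    return rate_usv_per_h * HOURS_PER_YEAR / 1000.0

def msv_to_rem(dose_msv):
    """Convert millisieverts to rem (1 rem = 10 mSv)."""
    return dose_msv / 10.0

print(annual_dose_msv(0.3))   # ~2.6 mSv/year, close to the quoted natural background
print(annual_dose_msv(170))   # ~1,490 mSv/year at the highest rate measured north-west of the site
print(msv_to_rem(20))         # 2 rem, matching the 20 mSv/year figure above
```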
In August 2012, Japanese academic researchers announced that 10,000 people living near the plant in Minamisoma City at the time of the accident had been exposed to well less than 1 millisievert of radiation. The researchers stated that the health dangers from such exposure was "negligible". Said participating researcher Masaharu Tsubokura, "Exposure levels were much lower than those reported in studies even several years after the Chernobyl incident." Most detailed radiation map published by the Japanese government A detailed map was published by the Ministry of Education, Culture, Sports, Science and Technology, going online on 18 October 2011. The map contains the caesium concentrations and radiation levels caused by the airborne radioactivity from the Fukushima nuclear reactor. This website contains both web-based and PDF versions of the maps, providing information by municipality as had been the case previously, but also measurements by district. The maps were intended to help the residents who had called for better information on contamination levels between areas of the same municipalities, using soil and air sample data already released. A grid is laid over a map of most of eastern Japan. Selecting a square in the grid zooms in on that area, at which point users can choose more detailed maps displaying airborne contamination levels, caesium-134 or -137 levels, or total caesium levels. Radiation maps Ground and water contamination within 30 kilometers The unrecovered bodies of approximately 1,000 quake and tsunami victims within the plant's evacuation zone are believed to be inaccessible at the time of 1 April 2011 due to detectable levels of radiation. Air exposure outside of 30 kilometers Radiation levels in Tokyo on 15 March were at one point measured at 0.809 μSv/hour although they were later reported to be at "about twice the normal level". Later, on 15 March 2011, Edano reported that radiation levels were lower and the average radiation dose rate over the whole day was 0.109 μSv/h. The wind direction on 15 March dispersed radioactivity away from the land and back over the Pacific Ocean. On 16 March, the Japanese radiation warning system, SPEEDI, indicated high levels of radioactivity would spread further than 30 km from the plant, but Japanese authorities did not relay the information to citizens because "the location or the amount of radioactive leakage was not specified at the time." From 17 March, IAEA received regular updates on radiation from 46 cities and indicated that they had remained stable and were "well below levels which are dangerous to human health". In hourly measurements of these cities until 20 March, no significant changes were reported. On 18 June 2012 it became known that from 17 to 19 March 2011 in the days directly after the explosions, American military aircraft gathered radiation data in an area with a radius of 45 kilometers around the plant for the U.S. Department of Energy. The maps revealed radiation levels of more than 125 microsieverts per hour at 25 kilometers northwest of the plant, which means that people in these areas were exposed to the annual permissible dose within eight hours. The maps were neither made public nor used for evacuation of residents. On 18 March 2011 the U.S. government sent the data through the Japanese Foreign Ministry to the NISA under the Ministry of Economy, Trade and Industry, and the Japanese Ministry of Education, Culture, Sports, Science and Technology got the data on 20 March. 
The data were not forwarded to the prime minister's office and the Nuclear Safety Commission, and subsequently not used to direct the evacuation of the people living around the plant. Because a substantial portion of radioactive materials released from the plant went northwest and fell onto the ground, and some residents were "evacuated" in this direction, these people could have avoided unnecessary exposure to radiation had the data been published directly. According to Tetsuya Yamamoto, chief nuclear safety officer of the Nuclear Safety Agency, "It was very regrettable that we didn't share and utilize the information." But an official of the Science and Technology Policy Bureau of the technology ministry, Itaru Watanabe, said it was more appropriate for the United States, rather than Japan, to release the data. On 23 March – after the Americans – Japan released its own fallout maps, compiled by Japanese authorities from measurements and predictions from the computer simulations of SPEEDI. On 19 June 2012 Minister of Science Hirofumi Hirano said that Japan would review the decision of the Science Ministry and the Nuclear-Safety Agency in 2011 to ignore the radiation maps provided by the United States. He defended his ministry's handling of the matter with the remark that its task was to measure radiation levels on land. But the government should reconsider its decision not to publish the maps or use the information. Studies would be done by the authorities, whether the maps could have been a help with the evacuations. On 30 March 2011, the IAEA stated that its operational criteria for evacuation were exceeded in the village of Iitate, Fukushima, north-west of Fukushima I, outside the existing radiation exclusion zone. The IAEA advised the Japanese authorities to carefully assess the situation there. Experts from Kyoto University and Hiroshima University released a study of soil samples, on 11 April, that revealed that "as much as 400 times the normal levels of radiation could remain in communities beyond a 30-kilometer radius from the Fukushima" site. Urine samples taken from 10 children in the capital of Fukushima Prefecture were analyzed in a French laboratory. All of them contained caesium-134. The sample of an eight-year-old girl contained 1.13 becquerels/liter. The children were living up to 60 kilometers away from the troubled nuclear power plant. The Fukushima Network for Saving Children urged the Japanese government to check the children in Fukushima. The Japanese non-profit Radiation Effects Research Foundation said that people should not overreact, because there are no reports known of health problems with these levels of radiation. Radioactive dust particles On 31 October 2011 a scientist from the Worcester Polytechnic Institute, Marco Kaltofen, presented his findings on the releases of radioactive isotopes from the Fukushima accidents at the annual meeting of the American Public Health Association (APHA). Airborne dust contaminated with radioactive particles was released from the reactors into the air. This dust was found in Japanese car filters: they contained caesium-134 and caesium-137, and cobalt at levels as high as 3 nCi total activity per sample. Materials collected during April 2011 from Japan also contained iodine-131. Soil and settled dust were collected from outdoors and inside homes, and also from used children's shoes. High levels of caesium were found on the shoelaces. 
US air-filter and dust samples did not contain "hot" particles, except for air samples collected in Seattle, Washington in April 2011. Dust particles contaminated with radioactive caesium were found more than 100 miles from the Fukushima site, and could be detected on the U.S. West Coast. Ground, water and sewage contamination outside of 30 kilometers Tests concluded between 10 and 20 April revealed radioactive caesium in amounts of 2.0 and 3.2 kBq/kg in soil from the Tokyo districts of Chiyoda and Koto, respectively. On 5 May, government officials announced that radioactivity levels in Tokyo sewage had spiked in late March. Simple-sum measurements of all radioactive isotopes in sewage burned at a Tokyo treatment plant measured 170,000 Bq/kg "in the immediate wake of the Fukushima nuclear crisis". The government announced that the reason for the spike was unclear, but suspected rainwater. The 5 May announcement further clarified that as of 28 April, the radioactivity level in Tokyo sewage was 16,000 Bq/kg. A detailed map of ground contamination within 80 kilometers of the plant, the joint product of the U.S. Department of Energy and the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT), was released on 6 May. The map showed that a belt of contamination, with radioactivity from 3 to 14.7 MBq caesium-137 per square meter, spread to the northwest of the nuclear plant. For comparison, areas with activity levels with more than 0.55 MBq caesium-137 per square meter were abandoned after the 1986 Chernobyl accident. The village of Iitate and the town of Namie are impacted. Similar data was used to establish a map that would calculate the amount of radiation a person would be exposed to if a person were to stay outdoors for eight hours per day through 11 March 2012. Scientists preparing this map, as well as earlier maps, targeted a 20 mSv/a dosage target for evacuation. The government's 20 mSv/a target led to the resignation of Toshiso Kosako, Special Adviser on radiation safety issues to Japanese Prime Minister Naoto Kan, who stated "I cannot allow this as a scholar", and argued that the target is too high, especially for children; he also criticized the increased limit for plant workers. In response, parents' groups and schools in some smaller towns and cities in Fukushima Prefecture have organized decontamination of soil surrounding schools, defying orders from Tokyo asserting that the schools are safe. Eventually, the Fukushima education board plans to replace the soil at 26 schools with the highest radiation levels. Anomalous "hot spots" have been discovered in areas far beyond the adjacent region. For example, experts cannot explain how radioactive caesium from the reactors at Fukushima ended up in Kanagawa more than to the south. In the first week of September the Ministry of Science published a new map showing radiation levels in Fukushima and four surrounding prefectures, based on the results of an aerial survey. In the map, different colors were used to show the level of radiation at locations one meter above the ground. Red: 19 microsieverts per hour or higher. The red band pointed in a north-west direction and was more than 30 kilometers long. Yellow: radiation between 3.8 and 19 microsieverts per hour. This corresponds to less than a chest X-ray to 3 chest X-rays. This is the threshold to designate an area an evacuation zone. The yellow area extended far beyond the evacuation zone already put into place. 
Light green: radiation between 0.5 and 1 microsievert per hour. This was still above normal background levels, but corresponds to an annual dose far below one hundred millisieverts, a level of exposure which should cause no harm to people. This zone contained most of Fukushima Prefecture, southern parts of Miyagi Prefecture, and northern parts of Tochigi and Ibaraki prefectures. Up to 307,000 becquerels of caesium per kilogram of soil were detected during a survey held in Fukushima City, 60 kilometers away from the crippled reactors, on 14 September 2011. This was triple the level above which, under Japanese government rules, contaminated soil must be sealed into concrete. According to "Citizens Against Fukushima Aging Nuclear Power Plants", these readings were comparable to the high levels in special regulated zones where evacuation was required after the Chernobyl accident. They urged the government to designate the area as a hot spot, where residents would be advised to evacuate voluntarily and be eligible for state assistance. Professor Tomoya Yamauchi of the University of Kobe, who led the study of soil samples from five locations around the district, noted that the decontamination conducted in some of the areas tested had not yet reduced the radiation to pre-accident levels. On 18 October 2011 a hot spot was found in a public square in the Nedokoyadai district of the city of Kashiwa, Chiba, by a resident walking with a dosimeter. He informed the city council. Their first readings were off the scale, as their Geiger counter could measure up to 10 microsieverts per hour. Later measurements by the Chiba environment foundation reported a final result of 57.5 microsieverts per hour. On 21 October the roads around the place were sealed off, and the place was covered with sandbags three meters thick. Further investigations and check-ups were planned for 24 October 2011. These investigations showed, on 23 October, levels of up to 276,000 becquerels of radioactive caesium per kilogram of soil, 30 centimeters below the surface. Town officials' first comments on the reading were that there could not be a link with the Fukushima disaster, but after the discovery of this large amount of caesium, officials of the Science Ministry could not deny the possibility that the cause could be traced to the Fukushima site. In October 2011, radiation levels as high as those in the evacuation zone around Japan's Fukushima nuclear plant were detected in a Tokyo suburb. Japanese officials said the contamination was linked to the Fukushima nuclear disaster. Contamination levels "as high as those inside Fukushima's no-go zone have been detected, with officials speculating that the hotspot was created after radioactive caesium carried in rain water became concentrated because of a broken gutter". In October 2011 the Japanese Ministry of Science launched a phone hotline to deal with concerns about radiation exposure outside Fukushima Prefecture. Concerned Japanese citizens had been walking with Geiger counters through their localities in search of places with raised radiation levels. Whenever a site was found with a radiation dose at one meter above the ground of more than one microsievert per hour and higher than that of nearby areas, it was to be reported to the hotline. One microsievert per hour is the limit above which topsoil at school playgrounds would be removed, subsidized by the state of Japan. Local governments were asked to carry out simple decontamination works, such as clearing mud from ditches, if necessary.
If radiation levels remained more than one microsievert per hour higher than those of nearby areas even after the cleaning, the ministry offered to help with further decontamination. On the website of the ministry a guideline was posted on how to measure radiation levels properly, how to hold the dosimeter and how long to wait for a reliable reading. In October 2011 hotspots were reported on the grounds of two elementary schools in Abiko in Chiba: 11.3 microsieverts per hour was detected on 25 September just above the surface of the ground near a ditch in the compound of the Abiko Municipal Daiichi Elementary School. At 50 centimeters above the ground the reading was 1.7 microsieverts per hour. The soil in the ditch contained 60,768 becquerels per kilogram. After the soil was removed, the radiation decreased to 0.6 microsieverts per hour at 50 centimeters above ground level. 10.1 microsieverts per hour was found at the Abiko Municipal Namiki Elementary School near the surface of the ground where sludge removed from the school's swimming pool had been buried. The area was covered with a waterproof tarp and dirt was put on top of the tarp to decrease the radiation; 0.6 microsieverts per hour was measured 50 centimeters above the ground after this was done. Radioactive caesium was found in waste water discharged into Tokyo Bay from a cement factory in Chiba Prefecture, east of Tokyo. In September and October two water samples were taken, measuring 1,103 becquerels per liter and 1,054 becquerels per liter respectively. These were 14 to 15 times higher than the limit set by NISA. Ash from incinerators in the prefecture constituted the raw material used to produce cement. In this process toxic substances are filtered out of the ashes, and the water used to clean these filters was discharged into Tokyo Bay. On 2 November 2011 this waste-water discharge was halted, and the Japanese authorities started a survey of the caesium contamination of the seawater of Tokyo Bay near the plant.
Caesium-134 and caesium-137 soil contamination map
On 12 November the Japanese government published a contamination map compiled by helicopter. This map covered a much wider area than before. Six new prefectures (Iwate, Yamanashi, Nagano, Shizuoka, Gifu, and Toyama) were included in this new map of the soil radioactivity of caesium-134 and caesium-137 in Japan. Contamination between 30,000 and 100,000 becquerels per square meter was found in Ichinoseki and Oshu (Iwate Prefecture), in Saku, Karuizawa and Sakuho (Nagano Prefecture), in Tabayama (Yamanashi Prefecture) and elsewhere.
Computer simulations of caesium contamination
Based on radiation measurements made all over Japan between 20 March and 20 April 2011, and the atmospheric patterns in that period, computer simulations were performed by an international team of researchers, in cooperation with the University of Nagoya, in order to estimate the spread of radioactive materials like caesium-137. Their results, published in two studies on 14 November 2011, suggested that caesium-137 reached as far as the northernmost island of Hokkaido and the regions of Chugoku and Shikoku in western Japan, more than 500 kilometers from the Fukushima plant. Rain deposited the caesium in the soil, where it accumulated. Measured radioactivity per kilogram reached 250 becquerels in eastern Hokkaido, and 25 becquerels in the mountains of western Japan. According to the research group, these levels were not high enough to require decontamination.
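When interpreting combined caesium-134/caesium-137 maps such as the one described above, it helps to remember that caesium-134 (half-life about 2.06 years) decays far faster than caesium-137 (about 30.1 years), so the total caesium activity falls substantially within the first few years even without decontamination. The sketch below is a purely illustrative projection; the starting activity is a hypothetical figure and the half-lives are standard reference values, not data from the surveys described here:

```python
import math

def remaining_activity(initial_bq, half_life_years, elapsed_years):
    """Activity remaining after exponential radioactive decay."""
    return initial_bq * math.exp(-math.log(2) * elapsed_years / half_life_years)

# Assume a soil sample initially containing equal activities of Cs-134 and Cs-137,
# e.g. 10,000 Bq/kg of each (a purely hypothetical figure).
for years in (0, 2, 5, 10):
    cs134 = remaining_activity(10_000, 2.06, years)
    cs137 = remaining_activity(10_000, 30.1, years)
    print(f"{years:2d} years: Cs-134 {cs134:7.0f} Bq/kg, Cs-137 {cs137:7.0f} Bq/kg, "
          f"total {cs134 + cs137:7.0f} Bq/kg")
```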
Professor Tetsuzo Yasunari of the University of Nagoya called for a national soil testing program because of the nationwide spread of radioactive material, and suggested that identified hotspots (places with high radiation levels) should be marked with warning signs. The first study concentrated on caesium-137. Around the nuclear plant, places were found containing up to 40,000 becquerels/kg, 8 times the governmental safety limit of 5,000 becquerels/kg. Places further away were just below this maximum. East and north-east of the plant the soil was contaminated the most. North-west and westwards the soil was less contaminated, because of the protection offered by mountains. The second study had a wider scope, and was meant to study the geographic spread of further radioactive isotopes, such as tellurium and iodine. Because these isotopes are deposited in the soil with rain, Norikazu Kinoshita and his colleagues observed the effect of two specific rain showers on 15 and 21 March 2011. The rainfall on 15 March contaminated the grounds around the plant; the second shower transported the radioactivity much further from the plant, in the direction of Tokyo. According to the authors, the soil should be decontaminated, but where this proves impossible, farming should be limited.
Elementary school yard in Tokyo
On 13 December 2011 extremely high readings of radioactive caesium – 90,600 becquerels per kilogram, 11 times the governmental limit of 8,000 becquerels – were detected in a groundsheet at the Suginami Ward elementary school in Tokyo, at a distance of 230 kilometers from Fukushima. The sheet was used to protect the school lawn against frost from 18 March until 6 April 2011. Until November this sheet was stored alongside a gymnasium. In places near this storage area, up to 3.95 microsieverts per hour were measured one centimeter above the ground. The school planned to burn the sheet. Further inspections were requested.
Radiation exposure in the city of Fukushima
All citizens of the town of Fukushima received dosimeters to measure the precise dose of radiation to which they were exposed. After September the city of Fukushima collected the 36,478 "glass badge" dosimeters from all its citizens for analysis. It turned out that 99 percent had not been exposed to more than 0.3 millisieverts in September 2011; the exceptions included four young children from one family: a girl, in the third year of elementary school, had received 1.7 millisieverts, and her three brothers had been exposed to 1.4 to 1.6 millisieverts. Their home was situated near a highly radioactive spot, and after this finding the family moved out of Fukushima Prefecture. A city official said that this kind of exposure would not affect their health. Similar results were obtained for a three-month period from September 2011: among a group of 36,767 residents in Fukushima city, 36,657 had been exposed to less than 1 millisievert, and the average dose was 0.26 millisieverts. For 10 residents, the readings ranged from 1.8 to 2.7 millisieverts, but these values are mostly believed to be related to usage errors (dosimeters left outside or exposed to X-ray luggage screening).
Disposal of radioactive ash
Due to objections from concerned residents it became more and more difficult to dispose of the ashes of burned household garbage in and around Tokyo. The ashes of waste facilities in the Tohoku, Kanto and Kōshin'etsu regions were proven to be contaminated with radioactive caesium.
According to the guidelines of the Ministry of the Environment, ashes with radioactivity of 8,000 becquerels per kilogram or lower could be buried. Ashes with caesium levels between 8,000 and 100,000 becquerels per kilogram should be secured and buried in concrete vessels. A survey of 410 waste-disposal facility sites was carried out to see how the ash disposal was proceeding. At 22 sites, mainly in the Tokyo metropolitan area, ashes with levels under 8,000 becquerels could not be buried due to the objections of concerned residents. At 42 sites, ashes were found that contained over 8,000 becquerels of caesium, which could not be buried. The ministry made plans to send officials to meetings in the municipalities to explain to the Japanese people that the waste disposal was done safely, and to demonstrate how the disposal of the ashes above 8,000 becquerels was conducted. On 5 January 2012 the Nambu (south) Clean Center, a waste incinerator in Kashiwa, Chiba, was taken out of production by the city council because the storage room was completely filled with 200 metric tons of radioactive ash that could not be disposed of in landfills. Storage at the plant was full, with 1,049 drums, and some 30 tons more were still to be taken out of the incinerator. In September 2011, the facility had been closed for two months for the same reason. The center's special advanced procedures were able to minimize the volume of the ash, but radioactive caesium was concentrated to levels above the national limit of 8,000 becquerels per kilogram for waste disposal in landfills. It was not possible to secure new storage space for the radioactive ash. Radiation levels in Kashiwa were higher than in surrounding areas, and ashes containing up to 70,800 becquerels of radioactive caesium per kilogram – higher than the national limit – were detected in the city. Other cities around Kashiwa were facing the same problem: radioactive ash was piling up. Chiba Prefecture asked Abiko and Inzai to accept temporary storage at the Teganuma waste-disposal facility located at their border. But this met strong opposition from their citizens.
Deposition of radioactivity and effect on agricultural products and building materials
Radiation monitoring in all 47 prefectures showed wide variation, but an upward trend in 10 of them on 23 March. No deposition could be determined in 28 of them until 25 March. The highest values obtained for iodine-131 were in Ibaraki (480 Bq/m2 on 25 March) and Yamagata (750 Bq/m2 on 26 March). For caesium-137, the highest values were in Yamagata, at 150 and 1,200 Bq/m2 respectively. Measurements made in Japan in a number of locations have shown the presence of radionuclides in the ground. On 19 March, upland soil levels of 8,100 Bq/kg of Cs-137 and 300,000 Bq/kg of I-131 were reported. One day later, the measured levels were 163,000 Bq/kg of Cs-137 and 1,170,000 Bq/kg of I-131.
Summary of restrictions imposed by the Japanese government as of 25 April 2011
Agricultural products
On 19 March, the Japanese Ministry of Health, Labour and Welfare announced that levels of radioactivity exceeding legal limits had been detected in milk produced in the Fukushima area and in certain vegetables in Ibaraki. On 21 March, IAEA confirmed that "in some areas, iodine-131 in milk and in freshly grown leafy vegetables, such as spinach and spring onions, is significantly above the levels set by Japan for restricting consumption". One day later, iodine-131 (sometimes above safe levels) and caesium-137 (always at safe levels) detection was reported in Ibaraki prefecture.
On 21 March, levels of radioactivity in spinach grown in the open air in Kitaibaraki city in Ibaraki, around 75 kilometers south of the nuclear plant, were 24,000 becquerels (Bq)/kg of iodine-131, 12 times the limit of 2,000 Bq/kg, and 690 Bq/kg of caesium, 190 Bq/kg above the limit. In four prefectures (Ibaraki, Tochigi, Gunma, Fukushima), distribution of spinach and kakina was restricted, as well as milk from Fukushima. On 23 March, similar restrictions were placed on more leafy vegetables (komatsuna, cabbages) and all flowerhead brassicas (like cauliflower) in Fukushima, while parsley and milk distribution was restricted in Ibaraki. On 24 March, IAEA reported that virtually all milk samples and vegetable samples taken in Fukushima and Ibaraki on 18–21 and 16–22 March respectively were above the limit. Samples from Chiba, Ibaraki and Tochigi also had excessive levels in celery, parsley, spinach and other leafy vegetables. In addition, certain samples of beef, mainly taken on 27–29 March, showed concentrations of iodine-131 and/or caesium-134 and caesium-137 above the regulatory levels. After the detection of radioactive caesium above legal limits in sand lances caught off the coast of Ibaraki Prefecture, the government of the prefecture banned such fishing. On 11 May, caesium levels in tea leaves from a prefecture "just south of Tokyo" were reported to exceed government limits: this was the first agricultural product from Kanagawa Prefecture that exceeded safety limits. In addition to Kanagawa Prefecture, agricultural products from Tochigi and Ibaraki prefectures have also been found to exceed the government limits; for example, pasture grass collected on 5 May measured 3,480 Bq/kg of radioactive caesium, approximately 11 times the state limit of 300 becquerels. Even into July radioactive beef was found on sale in eleven prefectures, as far away as Kōchi and Hokkaido. Authorities explained that until that point testing had been performed on the skin and exterior of livestock. Animal feed and meat cuts had not been checked for radioactivity previously. Hay and straw were found contaminated with caesium from the reactors, including outside the evacuation zone. The news of the contamination of foods with radioactive substances leaking from the Fukushima nuclear reactors damaged the mutual trust between local food producers, including farmers, and consumers. The source of the caesium was found to be rice straw that had been fed to the animals. A notice from the Japanese government that was sent to cattle farmers after the nuclear accident made no mention of the possibility that rice straw could be contaminated with radioactive materials from the fallout. Beef from Fukushima Prefecture was removed from the distribution channels. Health minister Kohei Otsuka stated on 17 July 2011 that this removal might not be sufficient. The urine of all cattle for sale was tested; cows that showed levels of radioactive substances higher than the government-set limit were to be returned to farms so they could be decontaminated by being fed safe hay. The minister said that the government should try to buy uncontaminated straw and hay in other parts of the country and offer this to the farmers in the affected areas. All shipments of beef raised in Fukushima Prefecture were prohibited after 19 July. The meat of some 132 cows was sold to at least 36 of the 47 prefectures of Japan. In more and more places contaminated meat was found.
In March 2012 up to 18,700 becquerels per kilogram of radioactive caesium was detected in yamame, or landlocked masu salmon, caught in the Niida river near the town of Iitate, which was over 37 times the legal limit of 500 becquerels/kg. The fish was caught for testing purposes prior to the opening of the fishing season. Fishing cooperatives were asked to refrain from catching and eating yamame fish from this river and all streams adjacent to it. No fish was sold in local markets. No fishing was allowed in the river Nojiri in the region Okuaizu in Fukushima after mid-March 2012. The fish caught in this river contained 119 to 139 becquerels of radioactive caesium per kilogram, although this river is located some 130 kilometers from the damaged reactors. In 2011 the fish at this place had measured about 50 becquerels per kilogram, and the fishing season was opened as usual. But fishing was not popular in 2011. Local people hoped it would be better in 2012. After the new findings the fishing season was postponed. On 28 March 2012 smelt caught in the Akagi Onuma lake near the city of Maebashi in the prefecture Gunma was found to be contaminated with 426 becquerels per kilogram of caesium. In April 2012 radioactive caesium concentrations of 110 becquerels per kilogram were found in silver crucian carp caught in the Tone River north of Tokyo, some 180 kilometers away from the Fukushima Daiichi Plant. Six fishery cooperatives and 10 towns along the river were asked to stop all shipments of fish caught in the river. In March 2012 fish and shellfish caught in a pond near the same river were found to contain levels above the new legal limit of 100 becquerels per kilogram. The Dutch bio-farming company Waterland International and a Japanese federation of farmers made an agreement in March 2012 to plant and grow camellia on 2,000 to 3,000 hectares. The seeds would be used to produce biodiesel, which could be used to generate electricity. According to director William Nolten the region had great potential for the production of clean energy. Some 800,000 hectares in the region could no longer be used to produce food, and after the disaster, because of fears of contamination, Japanese consumers refused to buy food produced in the region anyway. Experiments would be done to find out whether camellia was capable of extracting caesium from the soil. An experiment with sunflowers had had no success. High levels of radioactive caesium were found in 23 varieties of freshwater fish sampled at five rivers and lakes in Fukushima Prefecture between December 2011 and February 2012, and in 8 locations on the open sea. On 2 July 2012 the Ministry of the Environment published findings of radioactive caesium between 61 and 2,600 becquerels per kilogram. The 2,600 becquerels were found in a kind of goby caught in the Mano River, which flows from Iitate Village to the city of Minamisoma, north of the nuclear plant. Water bugs, common food for freshwater fish, also showed high levels of 330 to 670 becquerels per kilogram. Marine fish were found to be less contaminated, showing levels between 2.15 and 260 Bq/kg. Marine fish might be more capable of excreting caesium from their bodies, because saltwater fish have the ability to excrete salt. The Japanese Ministry of the Environment would closely monitor freshwater fish, as radioactive caesium might remain for much longer periods in their bodies. According to Japanese regulations, food is considered safe for consumption up to a maximum of 100 Bq/kg.
In August 2012, the Health Ministry found that caesium levels had dropped to undetectable levels in most cultivated vegetables from the affected area, while food sourced from forests, rivers or lakes in the Tohoku and northern Kanto regions was still showing excessive contamination. In a murasoi (a rock-fish, Sebastes pachycephalus) caught in January 2013 off the coast of Fukushima, an enormous amount of radioactive caesium was found: 254,000 becquerels/kilogram, or 2,540 times the legal limit in Japan for seafood. On 21 February 2013 a greenling – 38 centimeters long and weighing 564 grams – was caught near a water intake of the reactor units. It set a new record, containing 740,000 becquerels of radioactive caesium per kilogram, 7,400 times the Japanese limit deemed safe for human consumption. The previous record of caesium concentration in fish was 510,000 Bq/kg, detected in another greenling. A net was installed by TEPCO on the sea floor in order to prevent migrating fish from escaping from the contaminated area. Cattle and beef As of July 2011, the Japanese government had been unable to control the spread of radioactive material into the nation's food, and "Japanese agricultural officials say meat from more than 500 cattle that were likely to have been contaminated with radioactive caesium has made its way to supermarkets and restaurants across Japan". On 22 July it became known that at least 1,400 cows that had been fed contaminated hay and rice straw had been shipped from 76 farms; the feed had been distributed by agents in Miyagi and farmers in the prefectures of Fukushima and Iwate, near the crippled Fukushima Daiichi nuclear power plant. Supermarkets and other stores were asking their customers to return the meat. Farmers were asking for help, and the Japanese government was considering whether it should buy and burn all this suspect meat. The beef contained about 2 percent more caesium than the government's strict limit. On 26 July it was reported that more than 2,800 carcasses of cattle fed with caesium-contaminated feed had been shipped for public consumption to 46 of the 47 prefectures in Japan, with only Okinawa remaining free. Part of this beef, which had reached the markets, still needed to be tested. In an attempt to ease consumer concern the Japanese government promised to impose inspections on all this beef, and to buy the meat back when higher-than-permissible caesium levels were detected during the tests. The government planned to eventually pass on the buy-back costs to TEPCO. The same day the Japanese ministry of agriculture urged farmers and merchants to refrain from the use and sale of compost made of manure from cows that might have been fed the contaminated straw. The measure also applied to humus made from fallen leaves. This voluntary ban could be lifted after guidelines for safe levels of radioactive caesium in compost and humus had been developed. On 28 July a ban was imposed on all shipments of cattle from the prefecture Miyagi. Some 1,031 animals that had probably been fed contaminated rice straw had been shipped. Measurements of 6 of them revealed 1,150 becquerels per kilogram, more than twice the government-set safety level. Because the origins were scattered all over the prefecture, Miyagi became the second prefecture with a ban on all beef-cattle shipments. In the year before 11 March about 33,000 cattle had been traded from Miyagi. On 1 August a ban was put on all cattle in the prefecture Iwate, after 6 cows from two villages were found with high levels of caesium.
Iwate was the third prefecture where this was decided. Shipments of cattle and meat would only be allowed after examination, and when the level of caesium was below the regulatory standard. In Iwate some 36,000 cattle were produced in a year. All cattle would be checked for radioactive contamination before shipment, and the Japanese government asked the prefecture to temporarily reduce the number of shipments to match its inspection capability. On 3 August, the prefecture Shimane, in western Japan, decided to conduct radiation checks on all beef cattle to ease consumer concerns about food safety; starting from the second week of August all cattle were tested. In late July, rice straw with radioactive caesium levels exceeding the government safety guideline had been discovered at one farm in this prefecture. Although all other tests of beef cattle found far lower levels of radioactivity than the government standard, prices of beef from Shimane plummeted and wholesalers avoided all cattle from the prefecture. All processed beef would undergo preliminary screening, and meat registering 250 becquerels per kilogram or more of radioactive caesium – half the government safety level – would be tested further. In the second week of August, Fukushima Prefecture initiated a buy-out of all cattle that could not be sold because of the high levels of caesium in the meat. The prefecture decided to buy back all beef cattle that had become too old for shipment due to the shipping suspension in place since July. On 2 August a group of farmers agreed with the Fukushima prefectural government to set up a consultative body to regulate this process. The prefectural government provided the subsidies needed. There was some delay, because the farmers and the local government could not agree on the prices. The problems for the farmers were growing, because they did not know how to protect their cattle from contamination and did not know how to feed their cattle. The farmers said that the buy-back plan needed to be implemented immediately. On 5 August 2011, in response to calls for more support by farmers, the Japanese government revealed a plan to buy up all beef contaminated with radioactive caesium that had already reached the distribution chains, as an additional measure to support beef cattle farmers. The plan included the buy-out of about 3,500 head of cattle suspected of having been fed contaminated rice straw with caesium in excess of the safety limit, regardless of the fact that some of the beef could be within the national safety limits; all this meat would be burned, to keep it out of distribution channels. Other measures were the expansion of subsidies to beef cattle farmers: farmers who were unable to ship their cattle due to restrictions received 50,000 yen (~630 dollars) per head of cattle regardless of the cattle's age, and financial support was offered to prefectures that were buying up beef cattle that had become too old to ship due to the ban. The Japanese government planned to go on buying all beef containing unsafe levels of radioactive caesium that reached the market through private organizations. On 19 August 2011 it was reported that the meat of 4 cows from one Fukushima farm had been found to be contaminated with radioactive caesium in excess of the government-set safety limits. The day after, the meat of 5 other cows from this farm was also found to contain radioactive caesium. Because of this, the central government delayed lifting a shipment ban on Fukushima beef.
The 9 cows were among a total of over 200 head of cattle shipped from the farm and slaughtered at a facility in Yokohama city between the 11 March nuclear accident and April. The beef had been stored by a food producer. The farmer denied feeding the cows contaminated rice straw; instead, he had used imported hay that had been stored at another farm. Japan banned Fukushima beef. The animals had been affected through their feed: it was reported that 136 cows had consumed feed contaminated with radioactive caesium, and a number of cows were found to have consumed rice straw containing high levels of radioactive caesium. This meat had already been distributed nationwide and, it was reported, "could have already reached consumers." Officials traced contaminated beef to farms near the Fukushima power plant, and to farms 100 km (70 miles) away. "The government has also acknowledged that the problem could be wider than just Fukushima." By August 2012, sampling of beef from affected areas revealed that 3 out of 58,460 beef samples contained radioactivity above regulatory limits. Much of the radioactivity is believed to have come from contaminated feed. Radioactivity infiltration into the beef supply has subsided with time, and is projected to continue decreasing. Nattō In August 2011, a group of 5 manufacturers of nattō, or fermented soybeans, in Mito, Ibaraki planned to seek damages from TEPCO because their sales had fallen by almost 50 percent. Nattō is normally packed in rice straw, and after the discovery of caesium contamination the manufacturers had lost many customers. Lost sales from April to August 2011 had risen to around 1.3 million dollars. Tea-leaves On 3 September 2011 radioactive caesium exceeding the government's safety limit had been detected in tea leaves in Chiba and Saitama prefectures, near Tokyo. This was the ministry's first discovery of radioactive substances beyond legal limits since the tests of foodstuffs started in August. These tests were conducted in order to verify local government data using different numbers and kinds of food samples. Tea leaves of one type of tea from Chiba Prefecture contained 2,720 becquerels of radioactive caesium per kilogram, more than 5 times the legal safety limit. A maximum of 1,530 becquerels per kilogram was detected in 3 kinds of tea leaves from Saitama Prefecture. Investigations were done to find out where the tea was grown, and to determine how much tea had already made its way to market. Tea producers were asked to recall their products, when necessary. As tea leaves are never directly consumed, tea brewed from processed leaves is expected to contain no more than 1/35th of the caesium density; in the case of 2,720 Bq/kg, the tea would show just 77 Bq/L, below the 200 Bq/L legal limit in force at the time. In the prefecture Shizuoka at the beginning of April 2012, tea leaves grown inside a greenhouse were found to contain less than 10 becquerels per kilogram, below the new limit of 100 becquerels. The tests were done in a governmental laboratory in Kikugawa city, to check caesium concentrations before the tea-harvest season started at the end of April. The health ministry published in August 2012 that caesium levels in tea made from "yacon" leaves and in samples of Japanese tea had "shot through the ceiling" that year. Rice On 19 August radioactive caesium was found in a sample of rice. This was in Ibaraki Prefecture, just north of Tokyo, in a sample of rice from the city of Hokota, about 100 miles south of the nuclear plant.
The prefecture said the radioactivity was well within safe levels: it measured 52 becquerels per kilogram, about one-tenth of the government-set limit for grains. Two other samples tested at the same time showed no contamination. The Agriculture Ministry said it was the first time that more than trace levels of caesium had been found in rice. On 16 September 2011 the results of the measurements of radioactive caesium in rice were published. Results were available for around 60 percent of all test locations. Radioactive materials were detected in 94 locations, or 4.3 percent of the total. But the highest level detected so far, in Fukushima prefecture, was 136 becquerels per kilogram, about a quarter of the government's safety limit of 500 becquerels per kilogram. Tests were conducted in 17 prefectures, and were completed in more than half of them. In 22 locations radioactive materials were detected in harvested rice. The highest level measured was 101.6 becquerels per kilogram, or one fifth of the safety limit. Shipments of rice started in 15 prefectures, including all 52 municipalities in the prefecture Chiba. In Fukushima shipments of ordinary rice started in 2 municipalities, and those of early-harvested rice in 20 municipalities. On 23 September 2011 radioactive caesium in concentrations above the governmental safety limit was found in rice samples collected in an area in the northeastern part of the prefecture Fukushima. Rice samples taken before the harvest showed 500 becquerels per kilogram in the city of Nihonmatsu. The Japanese government ordered a two-way testing procedure of samples taken before and after the harvest. Pre-harvest tests were carried out in nine prefectures in the regions of Tohoku and Kanto. After the discovery of this high level of caesium, the prefectural government increased the number of places to be tested within the city from 38 to about 300. The city of Nihonmatsu held an emergency meeting on 24 September with officials from the prefecture government. The farmers who had already started harvesting were ordered to store their crop until the post-harvest test results were available. On 16 November 630 becquerels per kilogram of radioactive caesium was detected in rice harvested in the Oonami district in Fukushima City. All rice from the fields nearby was stored, and none of this rice had been sold to the market. On 18 November all 154 farmers in the district were asked to suspend all shipments of rice. Tests were ordered on rice samples from all 154 farms in the district. The result of this testing was reported on 25 November: five more farms in the Oonami district of Fukushima City, at a distance of 56 kilometers from the disaster reactors, were found with caesium-contaminated rice. The highest level of caesium detected was 1,270 becquerels per kilogram. On 28 November 2011 the prefecture of Fukushima reported the discovery of caesium-contaminated rice, up to 1,050 becquerels per kilogram, in samples from 3 farms in the city of Date, at a distance of 50 kilometers from the Fukushima Daiichi reactors. Some 9 kilograms of this rice had already been sold locally before this date. Officials tried to find out who had bought it. Because of this and earlier finds, the government of the prefecture Fukushima decided to check more than 2,300 farms across the district for caesium contamination.
A more precise number was mentioned by the Japanese newspaper The Mainichi Daily News: on 29 November orders were given to 2,381 farms in Nihonmatsu and Motomiya to suspend part of their rice shipments. This number, added to the already halted shipments at 1,941 farms in 4 other districts including Date, raised the total to 4,322 farms. Rice exports from Japan to China became possible again after a bilateral governmental agreement in April 2012. With government-issued certificates of origin, Japanese rice produced outside the prefectures of Chiba, Fukushima, Gunma, Ibaraki, Niigata, Nagano, Miyagi, Saitama, Tokyo and Tochigi was allowed to be exported. In the first shipment 140,000 tons of Hokkaido rice of the 2011 harvest was sold to the China National Cereals, Oils and Foodstuffs Corporation. Noodles On 7 February 2012 noodles contaminated with radioactive caesium (258 becquerels of caesium per kilogram) were found in a restaurant in Okinawa. The noodles, called "Okinawa soba", were apparently produced with water filtered through contaminated ashes from wood originating from the prefecture Fukushima. On 10 February 2012 the Japanese Agency for Forestry issued a warning not to use ashes from wood or charcoal, even when the wood itself contained less than the government-set maximum of 40 becquerels per kilogram for wood or 280 becquerels for charcoal. When the standards were set, nobody had considered that the ashes might be used in the production of foods. In Japan, however, it was customary to use ashes when kneading noodles or to remove the bitter taste, or "aku", from "devil's tongue" and wild vegetables. Mushrooms On 13 October 2011 the city of Yokohama terminated the use of dried shiitake mushrooms in school lunches after tests had found radioactive caesium in them of up to 350 becquerels per kilogram. Samples of shiitake mushrooms grown outdoors on wood in a city in the prefecture Ibaraki, 170 kilometers from the nuclear plant, contained 830 becquerels per kilogram of radioactive caesium, exceeding the government's limit of 500 becquerels. Radioactively contaminated shiitake mushrooms, above 500 becquerels per kilogram, were also found in two cities of the prefecture Chiba; therefore restrictions were imposed on the shipments from these cities. On 29 October the Fukushima prefectural government announced that shiitake mushrooms grown indoors at a farm in Soma, situated on the coast north of the Fukushima Daiichi plant, were contaminated with radioactive caesium: they contained 850 becquerels per kilogram, exceeding the national safety limit of 500 becquerels. The mushrooms were grown on beds made of woodchips mixed with other nutrients. The woodchips in the mushroom beds, sold by the agricultural cooperative of Soma, were thought to have caused the contamination. Since 24 October 2011 this farm had shipped 1,070 100-gram packages of shiitake mushrooms to nine supermarkets. Besides these, no other shiitake mushrooms produced by the farm were sold to customers. In the city of Yokohama, in March and October, food containing dried shiitake mushrooms was served to 800 people; the mushrooms came from a farm near the city, at a distance of 250 kilometers from Fukushima. The test results for these mushrooms showed 2,770 becquerels per kilogram in March and 955 becquerels per kilogram in October, far above the limit of 500 becquerels per kilogram set by the Japanese government.
The mushrooms were checked for contamination in the first week of November, after requests from concerned people with questions about possible contamination of the food served. No mushrooms were sold elsewhere. On 10 November 2011, some 120 kilometers southwest of the Fukushima reactors, in the prefecture Tochigi, 649 becquerels of radioactive caesium per kilogram was measured in kuritake mushrooms. Four other cities in Tochigi had already stopped the sales and shipments of the mushrooms grown there. The farmers were asked to stop all shipments and to recall the mushrooms already on the market. Drinking water The regulatory safe levels for iodine-131 and caesium-137 in drinking water in Japan are 100 Bq/kg and 200 Bq/kg respectively. The Japanese science ministry said on 20 March that radioactive substances had been detected in tap water in Tokyo, as well as in Tochigi, Gunma, Chiba and Saitama prefectures. IAEA reported on 24 March that drinking water in Tokyo, Fukushima and Ibaraki had been above regulatory limits between 16 and 21 March. On 26 March, IAEA reported that the values were again within legal limits. On 23 March, Tokyo drinking water exceeded the safe level for infants, prompting the government to distribute bottled water to families with infants. Measured levels were caused by iodine-131 (I-131) and were 103, 137 and 174 Bq/L. On 24 March, iodine-131 was detected in 12 of 47 prefectures, of which the level in Tochigi was the highest at 110 Bq/kg. Caesium-137 was detected in 6 prefectures, but always below 10 Bq/kg. On 25 March, tap water was reported to have fallen to 79 Bq/kg and to be safe for infants in Tokyo and Chiba, but to still exceed limits in Hitachi and Tokaimura. On 27 April, "radiation in Tokyo's water supply fell to undetectable levels for the first time since 18 March." (Graphs of iodine-131 contamination measured at water-purification plants from 16 March to 7 April are not reproduced here.) On 2 July, radioactive caesium-137 was detected in samples of tap water taken in Tokyo's Shinjuku ward for the first time since April. The concentration was 0.14 becquerel per kilogram, while none had been detected the previous day; this compares with 0.21 becquerel on 22 April, according to the Tokyo Metropolitan Institute of Public Health. No caesium-134 or iodine-131 was detected. The level was below the safety limit set by the government. Hironobu Unesaki, a nuclear engineering professor at Kyoto University, commented that this was unlikely to be the result of new radioactive materials being introduced into the water supply, "because no other elements were detected, especially the more sensitive iodine". Breast milk Small amounts of radioactive iodine were found in the breast milk of women living east of Tokyo. However, the levels were below the safety limits for tap water consumption by infants. Regulatory limits for infants in Japan are several orders of magnitude below what is known to potentially affect human health. Radiation protection standards in Japan are currently stricter than international recommendations and the standards of most other states, including those in North America and Europe. By November 2012, no radioactivity was being detected in the breast milk of Fukushima mothers: 100% of samples contained no detectable amount of radioactivity. Baby-milk In mid-November 2011 radioactive caesium was found in milk powder for baby food produced by the food company Meiji Co.
Although this firm had been warned about the matter three times, it was only taken seriously by its consumer service after the company was approached by Kyodo News. Up to 30.8 becquerels per kilogram was found in Meiji Step milk powder. While this is under the governmental safety limit of 200 becquerels per kilogram, such contamination could be more harmful to young children. Because of this caesium-contaminated milk powder, the Japanese minister of health Yoko Komiyama said at a press conference on 9 December 2011 that her ministry would start regular tests on baby food products in connection with the Fukushima Daiichi nuclear plant crisis, every three months and more frequently when necessary. Komiyama said: "As mothers and other consumers are very concerned (about radiation), we want to carry out regular tests". Tests done by the government in July and August 2011 on 25 baby products had not revealed any contamination. Children In a survey by the local and central governments conducted on 1,080 children aged 0 to 15 in Iwaki, Kawamata and Iitate on 26–30 March, almost 45 percent of the children had experienced thyroid exposure to radioactive iodine, although in all cases the amounts of radiation did not warrant further examination, according to the Nuclear Safety Commission on Tuesday 5 July. In October 2011, hormonal irregularities in 10 evacuated children were reported. However, the organization responsible for the study said that no link had been established between the children's condition and exposure to radiation. On 9 October a survey started in the prefecture Fukushima: ultrasonic examinations were done of the thyroid glands of all 360,000 children between 0 and 18 years of age. Follow-up tests will be done for the rest of their lives. This was done in response to concerned parents, alarmed by the evidence showing an increased incidence of thyroid cancer among children after the 1986 Chernobyl disaster. The project was carried out by Fukushima Medical University. The results of the tests were to be mailed to the children within a month. The initial testing of all children was to be completed by the end of 2014; after this the children will undergo a thyroid checkup every 2 years until they turn 20, and once every 5 years above that age. In November 2011, radioactive caesium was found in 104 of the urine samples taken from 1,500 pre-school children (ages 6 years or younger) from the city of Minamisoma in the prefecture Fukushima. Most had levels between 20 and 30 becquerels per liter, just above the detection limit, but 187 becquerels was found in the urine of a one-year-old baby boy. The parents had been concerned about internal exposure. Local governments covered the tests for elementary schoolchildren and older students. According to RHC JAPAN, a medical consultancy firm in Tokyo, these levels could not harm the health of the children. But director Makoto Akashi of the National Institute of Radiological Sciences said that, although those test results should be verified, this still proved the possibility of internal exposure in the children of Fukushima, and that the internal exposure would not increase if all food was tested for radioactivity before consumption.
Soil Also in July, citizens' groups reported that a survey of soil at four places in the city of Fukushima, taken on 26 June, showed that all samples were contaminated with radioactive caesium, measuring 16,000 to 46,000 becquerels per kilogram and exceeding the legal limit of 10,000 becquerels per kg. A study published in PNAS found that caesium-137 had "strongly contaminated the soils in large areas of eastern and northeastern Japan." Wildlife After the discovery of 8,000 becquerels of caesium per kilogram in wild mushrooms, and of a wild boar with radioactivity about 6 times the safety limit, Professor Yasuyuki Muramatsu of Gakushuin University urged detailed checks on wild plants and animals. Radioactive caesium in soil and fallen leaves in forests would, in his opinion, be easily absorbed by mushrooms and edible plants. He said that wild animals like boars were bound to accumulate high levels of radioactivity by eating contaminated mushrooms and plants. The professor added that detailed studies of wild plants and animals were needed. Across Europe the Chernobyl incident had had similar effects on wild fauna and flora. The first study of the effects of radioactive contamination following the Fukushima Daiichi nuclear disaster suggested, through standard point count censuses, that the abundance of birds was negatively correlated with radioactive contamination, and that among the 14 species in common between the Fukushima and the Chernobyl regions, the decline in abundance was presently steeper in Fukushima. However, one criticism of this conclusion is that naturally there would be fewer bird species living on a smaller amount of land, that is, in the most contaminated areas, than one would find living in a larger body of land, that is, in the broader area. Scientists in Alaska are testing seals struck with an unknown illness to see if it is connected to radiation from Fukushima. About a year after the nuclear disaster some Japanese scientists found what they regarded as an increased number of mutated butterflies. In their paper they said this was an unexpected finding, as "insects are very resistant to radiation." Since these were recent findings, the study suggests that these mutations have been passed down from older generations. Timothy Jorgensen, of the Department of Radiation Medicine and the Health Physics Program of Georgetown University, raised a number of issues with this "simply not credible" paper in the journal Nature, and concluded that the team's paper is "highly suspect due to both their internal inconsistencies and their incompatibility with earlier and more comprehensive radiation biology research on insects". Plankton Radioactive caesium was found in high concentration in plankton in the sea near the Fukushima Daiichi Nuclear Power Plant. Samples were taken up to 60 kilometers from the coast of Iwaki city in July 2011 by scientists of the Tokyo University of Marine Science and Technology. Up to 669 becquerels per kilogram of radioactive caesium was measured in samples of animal plankton taken 3 kilometers offshore. The leader of the research group, Professor Takashi Ishimaru, said that the sea current continuously carried contaminated water southwards from the plant. Further studies would be needed to determine the effect on the food chain and fish.
Building materials Detectable levels of radiation were found in an apartment building in Nihonmatsu, Fukushima, where the foundation had been made using concrete containing crushed stone collected from a quarry near the troubled Fukushima Daiichi nuclear power plant, situated inside the evacuation zone. Of the 12 households living there, 10 had relocated there after the quake. After inspection at the quarry – situated inside the evacuation zone around the nuclear plant, in the town of Namie, Fukushima – between 11 and 40 microsieverts of radiation per hour were detected one meter above gravel held at eight storage sites in the open, while 16 to 21 microsieverts per hour were detected at three locations covered by roofs. About 5,200 metric tons of gravel was shipped from this place and used as building material. On 21 January 2012 the association of quarry agents in the prefecture Fukushima asked its members to voluntarily check their products for radioactivity to ease public concerns over radioactive contamination of building materials. The minister of industry, Yukio Edano, instructed TEPCO to pay compensation for the economic damages. Elevated radiation levels were found at many structures constructed after the quake: schools, private houses and roads. Because of the public anger raised by these finds, the government of Nihonmatsu, Fukushima decided to examine all 224 city construction projects started after the quake. Some 200 construction companies had received stone from the Namie quarry, and the material was used at at least 1,000 building sites. The contaminated stone was found in some 49 houses and apartments. Radiation levels of 0.8 microsieverts per hour were found, almost as high as the radiation levels outside the homes. None of these represents a potential danger to human health. By 22 January 2012, a Japanese government survey had identified around 60 houses built with the radioactively contaminated concrete. Even after 12 April 2011, when the area was declared to be an evacuation zone, the shipments continued, and the stone was used for building purposes. In the first weeks of February 2012 up to 214,200 becquerels of radioactive caesium per kilogram was measured in samples of gravel from the quarry near Namie, situated inside the evacuation zone. The gravel stored outside showed about 60,000–210,000 becquerels of caesium in most samples. Of the 25 quarries in the evacuation zones, up to 122,400 becquerels of radioactive caesium was found at one that had been closed since the nuclear crisis broke out on 11 March 2011. In one quarry that was still operational, 5,170 becquerels per kilogram was found. Inspections were done at some 150 of the 1,100 construction sites where the gravel from the Namie quarry was suspected to have been used. At 27 locations the radioactivity levels were higher than in the surrounding area. Hot spots at school-yards On 6 May 2012 it became known that, according to municipal education board documents – reports submitted by each school in Fukushima prefecture in April – so-called "hot spots" existed at at least 14 elementary schools, 7 junior high schools and 5 nursery schools, where the radiation exposure was more than 3.8 microsieverts per hour, resulting in an annual cumulative dose above 20 millisieverts. However, all restrictions that had limited the children's time playing outside at the school playgrounds to a maximum of three hours were lifted by the education board at the beginning of the new academic year in April.
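For orientation, the 3.8 microsieverts-per-hour threshold mentioned above corresponds to a roughly 20-millisievert annual dose only under an assumed exposure scenario; the scenario sketched below (8 hours spent outdoors and 16 hours indoors with a shielding factor of 0.4) is the one commonly attributed to the Japanese authorities and is an illustrative assumption added here, not a figure taken from the school reports:

$$\left(8\,\mathrm{h}\times 3.8\,\tfrac{\mu\mathrm{Sv}}{\mathrm{h}} + 16\,\mathrm{h}\times 0.4\times 3.8\,\tfrac{\mu\mathrm{Sv}}{\mathrm{h}}\right)\times 365 \approx 2.0\times 10^{4}\,\mu\mathrm{Sv} \approx 20\,\mathrm{mSv}.$$

Continuous exposure at 3.8 microsieverts per hour for all 8,760 hours of a year would instead amount to about 33 millisieverts, so the 20-millisievert figure only follows once an indoor-shielding assumption of this kind is made.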
The documents were obtained by a citizens' group after a formal request to disclose the information. Tokiko Noguchi, the group's leader, insisted that the education board should restore the restrictions. New radioactivity limits for food in Japan On 22 December 2011 the Japanese government announced new limits for radioactive caesium in food. The new norms would be enforced from April 2012. On 31 March 2012 the Ministry of Health, Labor and Welfare of Japan published a report on radioactive caesium found in food. Between January and around 15 March 2012, on 421 occasions food was found containing more than 100 becquerels of caesium per kilogram. All were found within 8 prefectures: Chiba, Fukushima (285 finds), Gunma, Ibaraki (36 finds), Iwate, Miyagi, Tochigi (29 finds) and Yamagata. Most often it involved fish (landlocked salmon, flounder and other seafood), followed by shiitake mushrooms and the meat of wild animals. In the first week of April 2012 caesium contamination above legal limits was found in shiitake mushrooms (141 becquerels/kg) in Manazuru, Kanagawa prefecture, situated some 300 kilometers from Fukushima; in bamboo shoots in two cities in Chiba prefecture; and in bamboo shoots and shiitake mushrooms in 5 cities in the Kantō region of Ibaraki prefecture. In Gunma prefecture 106 becquerels/kg was found in beef. Sharper limits for meat would take effect in October 2012, but in order to ease consumer concern the farmers were asked to refrain from shipping. Decontamination efforts In August 2011 Prime Minister Naoto Kan informed the Governor of Fukushima Prefecture about the plans to build a central storage facility to store and treat nuclear waste, including contaminated soil, in Fukushima. On 27 August, at a meeting in Fukushima City, Governor Yuhei Sato voiced his concern about the sudden proposals and the implications they would have for the prefecture and its inhabitants, who had already endured so much from the nuclear accident. Kan said that the government had no intention of making the facility a final disposal site, but that the request was needed in order to make a start with decontamination. Distribution outside Japan Short-lived radioactive iodine-131 from the disaster was found in giant kelp off the coast of California, causing no detectable effects to the kelp or other wildlife. All of the radioactive material had dissipated completely within one month of detection. According to a professor at Stanford, meteorological effects were involved, and "81 percent of all the emissions were deposited over the ocean" instead of mainly being spread inland. Distribution by sea Seawater containing measurable levels of iodine-131 and caesium-137 was collected by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) on 22–23 March at several points 30 km from the coastline; iodine concentrations were "at or above Japanese regulatory limits" while caesium was "well below those limits", according to an IAEA report on 24 March. On 25 March, IAEA indicated that in the long term, caesium-137 (with a half-life of 30 years) would be the most relevant isotope as far as doses were concerned, and indicated the possibility "to follow this nuclide over long distances for several years." The organization also said it could take months or years for the isotope to reach "other shores of the Pacific".
The survey by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) revealed that radioactive caesium released from the Fukushima I Nuclear Power Plant had reached the ocean 2,000 kilometers from the plant, and 5,000 meters deep, one month after the accident. It is considered that airborne caesium particles fell on the ocean surface and sank attached to the bodies of dead plankton. The survey results were announced at a symposium held on 20 November in Tokyo. From 18 to 30 April, JAMSTEC collected "marine snow", sub-millimeter particles made mostly of dead plankton and sand, off the coast of the Kamchatka Peninsula, 2,000 kilometers away from Fukushima, and off the coast of the Ogasawara Islands, 1,000 kilometers away, at 5,000 meters below the ocean surface. The Agency detected radioactive caesium in both locations, and from the ratio of caesium-137 to caesium-134 and other observations it was determined that it came from the Fukushima I Nuclear Power Plant. The concentration of radioactive caesium was still being analyzed, according to the Agency. It has thus been confirmed that radioactive materials in the ocean move and spread not just by ocean currents but by various other means. Distribution by air The United Nations predicted that the initial radioactivity plume from the stricken Japanese reactors would reach the United States by 18 March. Health and nuclear experts emphasized that radioactivity in the plume would be diluted as it traveled and, at worst, would have extremely minor health consequences in the United States. A simulation by the Belgian Institute for Space Aeronomy indicated that trace amounts of radioactivity would reach California and Mexico around 19 March. These predictions were tested by a worldwide network of highly sensitive radioactive-isotope measuring equipment, with the resulting data used to assess any potential impact to human health as well as the status of the reactors in Japan. Consequently, by 18 March radioactive fallout including isotopes of iodine-131, iodine-132, tellurium-132, iodine-133, caesium-134 and caesium-137 was detected in air filters at the University of Washington, Seattle, USA. Due to an anticyclone south of Japan, favorable westerly winds were dominant during most of the first week of the accident, depositing most of the radioactive material out to sea and away from population centers, though some unfavorable wind directions deposited radioactive material over Tokyo. A low-pressure area over eastern Japan gave less favorable wind directions on 21–22 March, with a wind shift to the north around midnight on Tuesday; after the shift, the plume was forecast to be pushed out to sea again for the following days. Roughly similar predictions for the following 36 hours were presented by the Finnish Meteorological Institute. In spite of winds blowing towards Tokyo during 21–22 March, one commentator noted: "From what I've been able to gather from official reports of radioactivity releases from the Fukushima plant, Tokyo will not receive levels of radiation dangerous to human health in the coming days, should emissions continue at current levels." The Norwegian Institute for Air Research provided continuous forecasts of the radioactive cloud and its movement. These were based on the FLEXPART model, originally designed for forecasting the spread of radioactivity from the Chernobyl disaster. As of 28 April, the Washington State Department of Health, located in the U.S.
state closest to Japan, reported that levels of radioactive material from the Fukushima plant had dropped significantly, and were now often below levels that could be detected with standard tests. Response in other countries Rush for iodine Fear of radiation from Japan prompted a global rush for iodine pills, including in the United States, Canada, Russia, Korea, China, Malaysia and Finland. There was a rush for iodized salt in China, and a rush for iodine antiseptic solution appeared in Malaysia. WHO warned against consumption of iodine pills without consulting a doctor and also warned against drinking iodine antiseptic solution. The United States Pentagon said troops were receiving potassium iodide before missions to areas where radiation exposure was considered likely. The World Health Organisation said it had received reports of people being admitted to poison centres around the world after taking iodine tablets in response to fears about harmful levels of radiation coming out of the damaged nuclear power plant in Fukushima. U.S. military In Operation Tomodachi, the United States Navy dispatched the aircraft carrier USS Ronald Reagan and other vessels of the Seventh Fleet to fly a series of helicopter operations. A U.S. military spokesperson said that low-level radiation forced a change of course en route to Sendai. The Reagan and sailors aboard were exposed to "a month's worth of natural background radiation from the sun, rocks or soil" in an hour, and the carrier was repositioned. Seventeen sailors were decontaminated after they and their three helicopters were found to have been exposed to low levels of radioactivity. The aircraft carrier USS George Washington was docked for maintenance at Yokosuka Naval Base when instruments detected radiation at 07:00 JST on 15 March. Rear Admiral Richard Wren stated that the nuclear crisis in Fukushima was too distant from Yokosuka to warrant a discussion about evacuating the base. Daily monitoring and some precautionary measures were recommended for Yokosuka and Atsugi bases, such as limiting outdoor activities and securing external ventilation systems. As a precaution, the Washington was pulled out of its Yokosuka port later in the week. The Navy also temporarily stopped moving its personnel to Japan. Isotopes of concern The isotope iodine-131 is easily absorbed by the thyroid. Persons exposed to releases of I-131 from any source have a higher risk for developing thyroid cancer or thyroid disease, or both. Iodine-131 has a short half-life of approximately 8 days, and therefore is an issue mostly in the first weeks after the incident. Children are more vulnerable to I-131 than adults. The increased risk for thyroid neoplasm remains elevated for at least 40 years after exposure. Potassium iodide tablets prevent iodine-131 absorption by saturating the thyroid with non-radioactive iodine. Japan's Nuclear Safety Commission recommended that local authorities instruct evacuees leaving the 20-kilometre area to ingest stable (not radioactive) iodine. CBS News reported that the number of doses of potassium iodide available to the public in Japan was inadequate to meet the perceived needs for an extensive radioactive contamination event. Caesium-137 is also a particular threat because it behaves like potassium and is taken up by cells throughout the body. Additionally, it has a long, 30-year half-life. Cs-137 can cause acute radiation sickness, and increase the risk for cancer because of exposure to high-energy gamma radiation.
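The practical difference between an 8-day and a 30-year half-life can be illustrated with the exponential decay law; the relation below is standard textbook background added here for clarity, not a result taken from the cited reports. The fraction of a radionuclide remaining after a time $t$ is

$$\frac{N(t)}{N_0} = 2^{-t/t_{1/2}} = e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{t_{1/2}},$$

so after roughly 80 days (ten half-lives) only about $2^{-10} \approx 0.1\%$ of any released iodine-131 remains, whereas caesium-137, with its 30-year half-life, has barely decayed at all over the same period and therefore dominates the longer-term contamination.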
Internal exposure to Cs-137, through ingestion or inhalation, allows the radioactive material to be distributed in the soft tissues, especially muscle tissue, exposing these tissues to the beta particles and gamma radiation and increasing cancer risk. Prussian blue helps the body excrete caesium-137. Strontium-90 behaves like calcium, and tends to deposit in bone and blood-forming tissue (bone marrow). 20–30% of ingested Sr-90 is absorbed and deposited in the bone. Internal exposure to Sr-90 is linked to bone cancer, cancer of the soft tissue near the bone, and leukemia. Risk of cancer increases with increased exposure to Sr-90. Plutonium is also present in the MOX fuel of the Unit 3 reactor and in spent fuel rods. Officials at the International Atomic Energy Agency say the presence of MOX fuel does not add significantly to the dangers. Plutonium-239 is long-lived and potentially toxic, with a half-life of 24,000 years. Radioactive products with long half-lives release less radioactivity per unit time than products with a short half-life, as isotopes with a longer half-life emit particles much less frequently. For example, one mole (131 grams) of iodine-131 undergoes about 6×10²³ decays, 99.9% of them within three months, whilst one mole (238 grams) of uranium-238 also undergoes about 6×10²³ decays, 99.9% of them within 45 billion years, but only about 40 parts per trillion of them in the first three months. Experts commented that the long-term risk associated with plutonium toxicity is "highly dependent on the geochemistry of the particular site." Regulatory levels An overview of the regulatory levels in Japan was given in a summary table (not reproduced here). Summarised daily events On 11 March, Japanese authorities reported that there had been no "release of radiation" from any of the power plants. On 12 March, the day after the earthquake, increased levels of iodine-131 and caesium-137 were reported near Unit 1 on the plant site. On 13 March, venting to release pressure started at several reactors, resulting in the release of radioactive material. From 12 to 15 March the people of Namie were evacuated by the local officials to a place in the north of the town. This may have been in an area directly affected by a cloud of radioactive materials from the plants. There are conflicting reports about whether or not the government knew at the time the extent of the danger, or even how much danger there was. Chief Cabinet Secretary Yukio Edano announced on 15 March 2011 that radiation dose rates had been measured as high as 30 mSv/h on the site of the plant between units 2 and 3, as high as 400 mSv/h near unit 3, between it and unit 4, and 100 mSv/h near unit 4. He said, "there is no doubt that unlike in the past, the figures are the level at which human health can be affected." Prime Minister Naoto Kan urged people living between 20 and 30 kilometers from the plant to stay indoors. "The danger of further radiation leaks (from the plant) is increasing", Kan warned the public at a press conference, while asking people to "act calmly". A spokesman for Japan's nuclear safety agency said TEPCO had told it that radiation levels in Ibaraki, between Fukushima and Tokyo, had risen but did not pose a health risk. Edano reported that the average radiation dose rate over the whole day was 0.109 μSv/h. 23 out of 150 tested persons living close to the plant were decontaminated. On 16 March power plant staff were briefly evacuated after smoke rose above the plant and radiation levels measured at the gate increased to 10 mSv/h.
Media reported 1,000 mSv/h close to the leaking reactor, with radiation levels subsequently dropping back to 800–600 mSv/h. Japan's defence ministry criticized the nuclear-safety agency and TEPCO after some of its troops were possibly exposed to radiation when working on the site. Japan's ministry of science (MEXT) measured radiation levels of up to 0.33 mSv/h 20 kilometers northwest of the power plant. Japan's Nuclear Safety Commission recommended that local authorities instruct evacuees leaving the 20-kilometre area to ingest stable (not radioactive) iodine. On 17 March IAEA radiation monitoring over 47 cities showed that levels of radiation in Tokyo had not risen. Although at some locations around 30 km from the Fukushima plant the dose rates had risen significantly in the preceding 24 hours (in one location from 80 to 170 μSv/h and in another from 26 to 95 μSv/h), levels varied according to the direction from the plant. Spinach grown in the open air around 75 kilometers south of the nuclear plant had elevated levels of radioactive iodine and caesium. On 18 March IAEA clarified that, contrary to several news reports, the IAEA had not received any notification from the Japanese authorities of people sickened by radiation contamination. On 19 March MEXT said a trace amount of radioactive substances had been detected in tap water in Tokyo, as well as in Tochigi, Gunma, Chiba and Saitama prefectures. The Japanese Ministry of Health, Labour and Welfare announced that radioactivity levels exceeding legal limits had been detected in milk produced in the Fukushima area and in certain vegetables in Ibaraki. Measurements made by Japan in a number of locations have shown the presence of radionuclides such as iodine-131 (I-131) and caesium-137 (Cs-137) on the ground. On 23 March, MEXT released new environmental data. Radioactivity readings for soil and pond samples were highest at one location 40 km northwest of the plant. On 19 March, upland soil there contained 28.1 kBq/kg of Cs-137 and 300 kBq/kg of I-131. One day later, these same figures were 163 kBq/kg of Cs-137 and 1,170 kBq/kg of I-131. Cs-137 of 163 kBq/kg is equal to 3,260 kBq/m2. On 24 March, three workers were exposed to high levels of radiation, which caused two of them to require hospital treatment after radioactive water seeped through their protective clothes while working in unit 3. It rained in Tokyo from the morning of 21 March to 24 March. The rain brought radioactive fallout there. In Shinjuku, based on research by the Tokyo Metropolitan Institute of Public Health, 83,900 Bq/m2 of I-131, 6,310 Bq/m2 of Cs-134, and 6,350 Bq/m2 of Cs-137 were detected in total as radioactive fallout over these four days, including the 24 hours from 9:00 am on 20 March to 9:00 am on 21 March. On 25 March the German Ministry of the Environment announced that small amounts of radioactive iodine had been observed at three places within the German atmosphere. On 26 March, Japan's nuclear safety agency said that contamination from iodine-131 in seawater near the discharge had increased to 1,850 times the limit. 27 March: Levels of "over 1000" (the upper limit of the measuring device) and 750 mSv/h were reported from water within units 2 (but outside the containment structure) and 3 respectively. A statement that this level was "ten million times the normal level" in unit 2 was later retracted and attributed to iodine-134 rather than to a longer-lived element. Japan's Nuclear and Industrial Safety Agency indicated that "The level of radiation is greater than 1,000 millisieverts.
It is certain that it comes from atomic fission [...]. But we are not sure how it came from the reactor." 29 March: iodine-131 levels in seawater 330m south of a key discharge outlet had reached 138 Bq/mL (3,355 times the legal limit). 30 March: iodine-131 concentrations in seawater had reached 180 Bq/mL at a location 330m south of a plant discharge, 4,385 times the legal limit. Tests indicating 3.7 MBq/m2 of Cs-137 caused the IAEA to state that its criteria for evacuation were exceeded in the village of Iitate, Fukushima, outside the existing radiation exclusion zone. On 31 March, IAEA corrected the value of iodine-131 that had been detected in the Iitate village to 20 million Bq/m2. The value that had been announced at a press interview was about 2 million Bq/m2. On 1 April, besides leafy vegetables and parsley, beef with iodine-131 and/or caesium-134 and caesium-137 levels above the regulatory limit was also reported. 3 April: Health officials reported that radioactive substances higher than the legal limits had been found in mushrooms. The Japanese government publicly stated that it expected ongoing radioactive-material releases for "months" assuming normal containment measures were used. 4 to 10 April: TEPCO announced it had begun dumping 9,100 tons of water, contaminated at 100 times the limit, from a wastewater treatment plant; the dumping would take 6 days. 5 April: Fish caught 50 miles off the coast of Japan had radioactivity exceeding safe levels. 15 April: Iodine-131 in seawater was measured at 6,500 times the legal limit, while levels of caesium-134 and caesium-137 rose nearly fourfold, possibly due to installation of steel plates meant to reduce the possibility of water leaking into the ocean. 18 April: High levels of radioactive strontium-90 were discovered in soil at the plant, prompting the government to begin regularly testing for the element. 22 April: The Japanese government asked residents to leave Iitate and four other villages within a month due to radiation levels. See also Fukushima 50 Hibakusha (surviving victims of the atomic bombings of Hiroshima and Nagasaki) Lists of nuclear disasters and radioactive incidents References Sources External links Japan Radiation Map PM Information on contaminated water leakage at TEPCO's Fukushima Daiichi Nuclear Power Station, Prime Minister of Japan and His Cabinet MOFA Information on contaminated water leakage at TEPCO's Fukushima Daiichi Nuclear Power Station, Ministry of Foreign Affairs TEPCO News Releases, Tokyo Electric Power Company NRA, Japan, Nuclear Regulation Authority NISA, Nuclear and Industrial Safety Agency, former organization IAEA Update on Japan Earthquake, International Atomic Energy Agency Navigating Fukushima: Lessons from Chernobyl, Potential Radiation Effects, and Other Health Impacts, Q&A with Dr. Scott Davis about the mechanics of the crisis in Fukushima and how it compares to Chernobyl Detailed measurements of radiation levels in air at Fukushima I Fukushima Daiichi nuclear disaster Water pollution in Japan Radioactively contaminated areas Radiation health effects
Radiation effects from the Fukushima Daiichi nuclear disaster
[ "Chemistry", "Materials_science", "Technology" ]
30,177
[ "Radiation health effects", "Radioactively contaminated areas", "Radioactive contamination", "Soil contamination", "Radiation effects", "Radioactivity" ]
31,279,073
https://en.wikipedia.org/wiki/Nuclear%20magnetic%20resonance%20crystallography
Nuclear magnetic resonance crystallography (NMR crystallography) is a method which utilizes primarily NMR spectroscopy to determine the structure of solid materials on the atomic scale. Thus, solid-state NMR spectroscopy would be used primarily, possibly supplemented by quantum chemistry calculations (e.g. density functional theory), powder diffraction etc. If suitable crystals can be grown, any crystallographic method would generally be preferred to determine the crystal structure comprising in case of organic compounds the molecular structures and molecular packing. The main interest in NMR crystallography is in microcrystalline materials which are amenable to this method but not to X-ray, neutron and electron diffraction. This is largely because interactions of comparably short range are measured in NMR crystallography. Introduction When applied to organic molecules, NMR crystallography aims at including structural information not only of a single molecule but also on the molecular packing (i.e. crystal structure). Contrary to X-ray, single crystals are not necessary with solid-state NMR and structural information can be obtained from high-resolution spectra of disordered solids. E.g. polymorphism is an area of interest for NMR crystallography since this is encountered occasionally (and may often be previously undiscovered) in organic compounds. In this case a change in the molecular structure and/or in the molecular packing can lead to polymorphism, and this can be investigated by NMR crystallography. Dipolar couplings-based approach The spin interaction that is usually employed for structural analyses via solid state NMR spectroscopy is the magnetic dipolar interaction. Additional knowledge about other interactions within the studied system like the chemical shift or the electric quadrupole interaction can be helpful as well, and in some cases solely the chemical shift has been employed as e.g. for zeolites. The “dipole coupling”-based approach parallels protein NMR spectroscopy to some extent in that e.g. multiple residual dipolar couplings are measured for proteins in solution, and these couplings are used as constraints in the protein structure calculation. In NMR crystallography the observed spins in case of organic molecules would often be spin-1/2 nuclei of moderate frequency (, , , etc.). I.e. is excluded due to its large magnetogyric ratio and high spin concentration leading to a network of strong homonuclear dipolar couplings. There are two solutions with respect to 1H: spin diffusion experiments (see below) and specific labelling with spins (spin = 1). The latter is also popular e.g. in NMR spectroscopic investigations of hydrogen bonds in solution and the solid state. Both intra- and intermolecular structural elements can be investigated e.g. via deuterium REDOR (an established solid state NMR pulse sequence to measure dipolar couplings between deuterons and other spins). This can provide an additional constraint for an NMR crystallographic structural investigation in that it can be used to find and characterize e.g. intermolecular hydrogen bonds. Dipolar interaction The above-mentioned dipolar interaction can be measured directly, e.g. between pairs of heteronuclear spins like 13C/15N in many organic compounds. Furthermore, the strength of the dipolar interaction modulates parameters like the longitudinal relaxation time or the spin diffusion rate which therefore can be examined to obtain structural information. E.g. 
1H spin diffusion has been measured providing rich structural information. Chemical shift interaction The chemical shift interaction can be used in conjunction with the dipolar interaction to determine the orientation of the dipolar interaction frame (principal axes system) with respect to the molecular frame (dipolar chemical shift spectroscopy). For some cases there are rules for the chemical shift interaction tensor orientation as for the 13C spin in ketones due to symmetry arguments (sp2 hybridisation). If the orientation of a dipolar interaction (between the spin of interest and e.g. another heteronucleus) is measured with respect to the chemical shift interaction coordinate system, these two pieces of information (chemical shift tensor/molecular orientation and the dipole tensor/chemical shift tensor orientation) combined give the orientation of the dipole tensor in the molecular frame. However, this method is only suitable for small molecules (or polymers with a small repetition unit like polyglycine) and it provides only selective (and usually intramolecular) structural information. Crystal Structure Refinements The dipolar interaction yields the most direct information with respect to structure as it makes it possible to measure the distances between the spins. The sensitivity of this interaction is however lacking and even though dipolar-based NMR crystallography makes the elucidation of structures possible, other methods are necessary to obtain high resolution structures. For these reasons much work was done to include the use other NMR observables such as chemical shift anisotropy, J-coupling and the quadrupolar interaction. These anisotropic interactions are highly sensitive to the 3D local environment making it possible to refine the structures of powdered samples to structures rivaling the quality of single crystal X-ray diffraction. These however rely on adequate methods for predicting these interactions as they do not depend in a straightforward fashion on the structure. Comparison with diffraction methods A drawback of NMR crystallography is that the method is typically more time-consuming and more expensive (due to spectrometer costs and isotope labelling) than X-ray crystallography, it often elucidates only part of the structure, and isotope labelling and experiments may have to be tailored to obtain key structural information. Also a given molecular structure may not always be suitable for a pure NMR-based NMR crystallographic approach, but it can still play an important role in a multimodality (NMR+diffraction) study. Unlike in the case of diffraction methods, it appears that NMR crystallography needs to work on a case-by-case basis. The reason is that different molecular systems will exhibit different spin physics and different observables which can be probed. The method may therefore not find widespread use as different systems will require tailored experimental designs to study them. References Crystallography Scientific techniques Solid-state chemistry
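The dipolar route to internuclear distances mentioned above can be made concrete with a small back-of-the-envelope calculation. The sketch below is not taken from the article; it simply assumes the standard point-dipole expression for the dipolar coupling constant and literature gyromagnetic ratios, so the numbers are illustrative only.

```python
# Hedged illustration (not from the article): magnitude of the through-space
# dipolar coupling between two nuclei, using the standard point-dipole formula
# b = -(mu0 / 4*pi) * gamma1 * gamma2 * hbar / r^3 (in rad/s), converted to Hz.
import math

MU0 = 4 * math.pi * 1e-7        # vacuum permeability, T*m/A
HBAR = 1.054571817e-34          # reduced Planck constant, J*s

# Gyromagnetic ratios in rad/(s*T); standard literature values.
GAMMA = {"13C": 6.728284e7, "15N": -2.7126e7, "1H": 2.6752218744e8}

def dipolar_coupling_hz(nuc1: str, nuc2: str, r_angstrom: float) -> float:
    """Dipolar coupling constant (Hz) for two nuclei a distance r apart."""
    r = r_angstrom * 1e-10
    b = -(MU0 / (4 * math.pi)) * GAMMA[nuc1] * GAMMA[nuc2] * HBAR / r**3
    return b / (2 * math.pi)

# Example: a directly bonded 13C-15N pair (~1.33 A) vs. a longer contact (~3 A).
for r in (1.33, 3.0):
    print(f"13C-15N at {r:.2f} A: {dipolar_coupling_hz('13C', '15N', r):.1f} Hz")
```

For a directly bonded 13C–15N pair this gives a coupling on the order of a kilohertz, falling off with the cube of the distance — which is why the interaction is most useful for comparatively short-range structural constraints.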
Nuclear magnetic resonance crystallography
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,301
[ "Nuclear magnetic resonance", "Materials science", "Crystallography", "Condensed matter physics", "nan", "Nuclear physics", "Solid-state chemistry" ]
39,358,688
https://en.wikipedia.org/wiki/Axial%20fan%20design
An axial fan is a type of fan that causes gas to flow through it in an axial direction, parallel to the shaft about which the blades rotate. The flow is axial at entry and exit. The fan is designed to produce a pressure difference, and hence force, to cause a flow through the fan. Factors which determine the performance of the fan include the number and shape of the blades. Fans have many applications including in wind tunnels and cooling towers. Design parameters include power, flow rate, pressure rise and efficiency. Axial fans generally comprise fewer blades (two to six) than centrifugal fans. Axial fans commonly have larger radius and lower speed (ω) than ducted fans (esp. at similar power. Stress proportional to r^2). Calculation of parameters Since the calculation cannot be done using the inlet and outlet velocity triangles, which is not the case in other turbomachines, calculation is done by considering a mean velocity triangle for flow only through an infinitesimal blade element. The blade is divided into many small elements and various parameters are determined separately for each element. There are two theories that solve the parameters for axial fans: Slipstream Theory Blade Element Theory Slipstream theory In the figure, the thickness of the propeller disc is assumed to be negligible. The boundary between the fluid in motion and fluid at rest is shown. Therefore, the flow is assumed to be taking place in an imaginary converging duct where: D = Diameter of the Propeller Disc. Ds = Diameter at the Exit. In the figure, across the propeller disc, velocities (C1 and C2) cannot change abruptly across the propeller disc as that will create a shockwave but the fan creates the pressure difference across the propeller disc. and The area of the propeller disc of diameter D is: The mass flow rate across the propeller is: Since thrust is change in mass multiplied by the velocity of the mass flow i.e., change in momentum, the axial thrust on the propeller disc due to change in momentum of air, which is: Applying Bernoulli's principle upstream and downstream: On subtracting the above equations: Thrust difference due to pressure difference is projected area multiplied by the pressure difference. Axial thrust due to pressure difference comes out to be: Comparing this thrust with the axial thrust due to change in momentum of air flow, it is found that: A parameter 'a' is defined such that - where Using the previous equation and "a", an expression for Cs comes out to be: Calculating the change in specific stagnation enthalpy across disc: Now, Ideal Value of Power supplied to the Propeller = Mass flow rate * Change in Stagnation enthalpy; where If propeller was employed to propel an aircraft at speed = Cu; then Useful Power = Axial Thrust * Speed of Aircraft; Hence the expression for efficiency comes out to be: Let Ds be the diameter of the imaginary outlet cylinder. By Continuity Equation; From the above equations it is known that - Therefore; Hence the flow can be modeled where the air flows through an imaginary diverging duct, where diameter of propeller disc and diameter of the outlet are related. Blade element theory In this theory, a small element (dr) is taken at a distance r from the root of the blade and all the forces acting on the element are analysed to get a solution. It is assumed that the flow through each section of small radial thickness dr is assumed to be independent of the flow through other elements. 
Resolving Forces in the figure - Lift Coefficient (CL) and Drag Coefficient (CD) are given as - Also from the figure - Now, No. of Blades (z) and Spacing (s) are related as, and the total thrust for the elemental section of the propeller is zΔFx. Therefore, Similarly, solving for ΔFy, ΔFy is found out to be - and Finally, thrust and torque can be found out for an elemental section as they are proportional to Fx and Fy respectively. Performance characteristics The relationship between the pressure variation and the volume flow rate are important characteristics of fans. The typical characteristics of axial fans can be studied from the performance curves. The performance curve for the axial fan is shown in the figure. (The vertical line joining the maximum efficiency point is drawn which meets the Pressure curve at point "S") The following can be inferred from the curve - As the flow rate increases from zero the efficiency increases to a particular point reaches maximum value and then decreases. The power output of the fans increases with almost constant positive slope. The pressure fluctuations are observed at low discharges and at flow rates(as indicated by the point "S" ) the pressure deceases. The pressure variations to the left of the point "S" causes for unsteady flow which are due to the two effects of Stalling and surging. Causes of unstable flow Stalling and surging affects the fan performance, blades, as well as output and are thus undesirable. They occur because of the improper design, fan physical properties and are generally accompanied by noise generation. Stalling effect/Stall The cause for this is the separation of the flow from the blade surfaces. This effect can be explained by the flow over an air foil. When the angle of incidence increases (during the low velocity flow) at the entrance of the air foil, flow pattern changes and separation occurs. This is the first stage of stalling and through this separation point the flow separates leading to the formation of vortices, back flow in the separated region. For a further the explanation of stall and rotating stall, refer to compressor surge. The stall zone for the single axial fan and axial fans operated in parallel are shown in the figure. The following can be inferred from the graph : For the Fans operated in parallel, the performance is less when compared to the individual fans. The fans should be operated in safe operation zone to avoid the stalling effects. VFDs are not practical for some Axial fans Many Axial fan failures have happened after controlled blade axial fans were locked in a fixed position and Variable Frequency Drives (VFDs) were installed. The VFDs are not practical for some Axial fans. Axial fans with severe instability regions should not be operated at blades angles, rotational speeds, mass flow rates, and pressures that expose the fan to stall conditions. Surging effect/Surge Surging should not be confused with stalling. Stalling occurs only if there is insufficient air entering into the fan blades causing separation of flow on the blade surface. Surging or the Unstable flow causing complete breakdown in fans is mainly contributed by the three factors System surge Fan surge Paralleling System surge This situation occurs when the system resistance curve and static pressure curve of the fan intersect have similar slope or parallel to each other. Rather than intersecting at a definite point the curves intersect over certain region reporting system surge. 
These characteristics are not observed in axial fans. Fan surge This unstable operation results from the development of pressure gradients in the opposite direction of the flow. Maximum pressure is observed at the discharge of the impeller blade and minimum pressure on the side opposite to the discharge side. When the impeller blades are not rotating these adverse pressure gradients pump the flow in the direction opposite to the direction of the fan. The result is the oscillation of the fan blades creating vibrations and hence noise. Paralleling This effect is seen only in case of multiple fans. The air flow capacities of the fans are compared and connected in same outlet or same inlet conditions. This causes noise, specifically referred to as Beating in case of fans in parallel. To avoid beating use is made of differing inlet conditions, differences in rotational speeds of the fans, etc. Methods to avoid unsteady flow By designing the fan blades with proper hub-to-tip ratio and analyzing performance on the number of blades so that the flow doesn't separate on the blade surface these effects can be reduced. Some of the methods to overcome these effects are re-circulation of excess air through the fan, axial fans are high specific speed devices operating them at high efficiency and to minimize the effects they have to be operated at low speeds. For controlling and directing the flow use of guide vanes is suggested. Turbulent flows at the inlet and outlet of the fans cause stalling so the flow should be made laminar by the introduction of a stator to prevent the effect. See also Mechanical fan Propeller (marine) Propeller (aircraft) Industrial fan Ceiling fan Turbofan Ducted propeller Window fan Compressor surge Compressor stall Propeller walk Cavitation Azimuth thruster Kitchen rudder Paddle steamer Propulsor Cleaver Folding propeller Modular propeller Supercavitating propeller Notes References External links Plastic blades for gas turbine engines - Orenda Engines Ltd Fibre blade attatchment - US Department of Navy Turbine Blade - Ex-Cell-O Corp Composite fan blade - Williams International Corp Method for manufacturing a fiber reinforcement body for a metal matrix composite - Toyota Industries Corp Ventilation fans Mechanical engineering Turbomachinery Gas technologies
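As a rough quantitative companion to the slipstream (actuator-disc) treatment earlier in this article, the sketch below works through the standard Froude momentum-theory relations, which is what that section appears to derive. The symbols (Cu for the flight speed, Cs for the slipstream velocity, a for the inflow factor) follow the article's usage, but the air density and the numerical inputs are illustrative assumptions rather than values from the article.

```python
# Hedged sketch of standard actuator-disc / Froude momentum theory: given the
# flight speed Cu, disc diameter D and inflow factor a, estimate thrust, ideal
# power and the ideal (Froude) efficiency.
import math

RHO = 1.225  # air density, kg/m^3 (sea-level value; an assumption)

def actuator_disc(Cu: float, D: float, a: float):
    A = math.pi * D**2 / 4            # propeller disc area
    C_disc = Cu * (1 + a)             # axial velocity at the disc
    Cs = Cu * (1 + 2 * a)             # far-downstream slipstream velocity
    m_dot = RHO * A * C_disc          # mass flow rate through the disc
    thrust = m_dot * (Cs - Cu)        # change in momentum -> axial thrust
    power_ideal = m_dot * (Cs**2 - Cu**2) / 2   # rise in kinetic-energy flux
    eta_froude = thrust * Cu / power_ideal      # simplifies to 1 / (1 + a)
    return thrust, power_ideal, eta_froude

T, P, eta = actuator_disc(Cu=30.0, D=1.0, a=0.1)
print(f"thrust = {T:.1f} N, ideal power = {P/1000:.2f} kW, efficiency = {eta:.3f}")
```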
Axial fan design
[ "Physics", "Chemistry", "Engineering" ]
1,824
[ "Chemical equipment", "Turbomachinery", "Applied and interdisciplinary physics", "Mechanical engineering" ]
39,359,065
https://en.wikipedia.org/wiki/Phase%20Transitions%20and%20Critical%20Phenomena
Phase Transitions and Critical Phenomena is a 20-volume series of books, comprising review articles on phase transitions and critical phenomena, published during 1972-2001. It is "considered the most authoritative series on the topic". Volumes 1-6 were edited by Cyril Domb and Melville S. Green, and after Green's death, volumes 7-20 were edited by Domb and Joel Lebowitz. Volume 4 was never published. Volume 5 was published in two volumes, as 5A and 5B. The first volume was praised for its coherent approach. While praised for its sound theoretical approach, the first volume remained at considerable distance from being able to explain experimental results in things like structural phase transitions. The second volume was praised for being well written, and was suggested as a standard reference. The third volume was also suggested as an index for researchers. Contents Volume 1: Exact Results (1972) 'Introductory Note on Phase Transitions and Critical Phenomena', by C.N. Yang. 'Rigorous Results and Theorems', by R.B. Griffiths. 'Dilute Quantum Systems', by J. Ginibre. 'The C*-Algebraic Approach to Phase Transitions', by G.G. Emch. 'One-dimensional Models — Short Range Forces', by C.J. Thompson 'Two-dimensional Ising Models', by H.N.V. Temperley. 'Transformation of Ising Models', by I. Syozi. 'Two-dimensional Ferroelectric Models', by E.H. Lieb and Fa Yueh Wu. Volume 2: (1972) 'Thermodynamics' M.J. Buckingham. 'Equilibrium Scaling in Fluids and Magnets', by M. Vicentini-Missoni. 'Surface Tension of Fluids', by B. Widom. 'Surface and Size Effects in Lattice Models', by P.G. Watson. 'Exact Calculations on a Random Ising Systems', by B. McCoy. 'Percolation and Cluster Size', by J.W. Essam. 'Melting and Statistical Geometry of Simple Liquids', by R. Collins. 'Lattice Gas Theories of Melting', by L.K. Runnels. 'Closed Form Approximations for Lattice Systems', by D.M. Burley. 'Critical Properties of the Spherical Model', by G.S. Joyce. 'Kinetics of Ising Models', by K. Kawasaki. Volume 3: Series Expansions for Lattice Models (1974) 'Graph Theory and Embeddings', by C. Domb. 'Computer Techniques for Evaluating Lattice Constants', by J.L. Martin. 'Linked Cluster Expansion' M. Wortis. 'Asymptotic Analysis of Coefficients', by A.J. Guttmann and D.S. Gaunt. 'Heisenberg Model', by G.S. Rushbrooke, G.A. Baker Jr and P.J. Wood. 'Ising Model', by C. Domb. 'Classical Vector Models', by H. Eugene Stanley. 'D-vector Model or "Universality Hamiltonian": Properties of Isotropically-Interacting D-Dimensional Classical Spins', by H. Eugene Stanley. 'X-Y Model', by D.B. Betts. 'Ferroelectric Models', by J.F. Nagle. Volume 4: (Never published) Volume 5a (1976) 'Scaling, Universality and Operator Algebras', by L.P. Kadanoff. 'Generalized Landau Theories', by M. Luban. 'Neutron Scattering and Spatial Correlation near the Critical Point', by J. Als-Nielsen. 'Mode Coupling and Critical Dynamics', by K. Kawasaki. Volume 5b (1976) 'Monte Carlo Investigations of Phase Transitions and Critical Phenomena', by K. Binder. 'System with Weak Long-Range Potentials', by P.C. Hemmer and J.L. Lebowitz. 'Correlation Functions and their Generating Functionals: General Relations with Applications to the Theory of Fluids', by G. Stell. 'Heisenberg Ferromagnet in the Green's Function Approximation', by R.A. Tahir-Kheli. 'Thermal Measurements and Critical Phenomena in Liquids', by A.V. Voronel. Volume 6: (1976) 'The Renormalization Group — Introduction', by Kenneth G. Wilson. 'The Critical State, General Aspects', by F.J. Wegner. 
'Field Theoretical Approach to Critical Phenomena' E. Brezin, J.C. Le Guillou and J. Zinn-Justin. 'The 1/n Expansion', by Shang-Keng Ma. 'The ε-Expansion for Exponents and the Equation of State in Isotropic Systems', by D.J. Wallace. 'Dependence of Universal Critical Behaviour on Symmetry and Range of Interaction', by A. Aharony. 'Renormalization: Theory Ising-like Spin Systems', by Th. Niemeijer and J.M.J. van Leeuwen. 'Renormalization Group Approach to Critical Phenomena', by C. Di Castro and G. Jona-Lasinio. Volume 7: (1983) 'Defect-Mediated Phase Transitions', by D.R. Nelson. 'Conformational Phase Transitions in a Macromolecule: Exactly Solvable Models', by F.W. Wiegel. 'Dilute Magnetism', by R.B. Stinchcombe. Volume 8: (1983) 'Critical Behaviour at Surfaces', by K. Binder. 'Finite-Size Scaling', by M.N. Barber. 'The Dynamics of First Order Phase Transitions', by J.D. Gunton, and P.S. Sahni. Volume 9: (1984) 'Theory of Tricritical Points', by I.D. Lawrie and S. Sarbach. 'Multicritical Points in Fluid Mixtures: Experimental Studies', by C.M. Knobler and R.L. Scott. 'Critical Point Statistical Mechanics and Quantum Field Theory', by G.A. Baker, Jr. Volume 10: (1986) 'Surface Structures and Phase Transitions — Exact Results', by D.B. Abraham. 'Field-theoretic Approach to Critical Behaviour at Surfaces', by H.W. Diehl. 'Renormalization Group Theory of Interfaces', by D. Jasnow. Volume 11: (1987) 'Coulomb Gas Formulation of Two-Dimensional Phase Transitions', by B. Nienhus. 'Conformal Invariance', by J.L. Cardy. 'Low-Temperature Properties of Classical Lattice Systems: Phase Transitions and Phase Diagrams', by J. Slawny. Volume 12: (1988) 'Wetting Phenomena', by S. Dietrich. 'The Domain Wall Theory of Two-Dimensional Commensurate-Incommensurate Phase Transition', by M. P. M. den Nijs. 'The Growth of Fractal Aggregates and their Fractal Measures', by P. Meakin. Volume 13: (1989) 'Asymptotic Analysis of Power-Series Expansions', by A.J. Guttmann. 'Dimer Models on Anisotropic Lattices', by S.M. Bhattachargee, S.O. Carlos, J.F. Nagle and C.S.O Yokoi. Volume 14: (1991) 'Universal Critical-Point Amplitude Relations' V. Privman, P.C. Hohenberg, and A. Aharony. 'The Behavior of Interfaces in Ordered and Disordered Systems', by R. Lipowsky, G. Forgacs and Th.M. Nieuwenhuizen. Volume 15: (1992) 'Spatially Modulated Structures in Systems with Competing Interactions', by W. Selke. 'The Large-n Limit in Statistical Mechanics and the Spectral Theory of Disordered Systems', by A.M. Khorunzhy, B.A. Khorzhenko, L.A. Pastur and M.V. Shcherbina. Volume 16: (1994) 'Self-assembling Amphiphilic Systems', by G. Gompper and M. Schick. Volume 17: (1995) 'Statistical Mechanics of Driven Diffusive Systems', by B. Schmittmann and R.K.P. Zia. Volume 18: (2001) 'The Random Geometry of Equilibrium Phases', by O. Häggström, H.O. Georgii, and C. Maes. 'Exact Combinatorial Algorithms: Ground States of Disordered Systems', by M.J. Alava, P.M. Duxbury, C.F. Moukarzel and H. Rieger. Volume 19: (2001) 'Exactly Solvable Models for Many-Body Systems Far from Equilibrium', by G.M. Schütz. 'Polymerized Membranes, a Review', by K.J. Wiese. Volume 20: (2001) Cumulative author, title and subject index including table of contents. Footnotes Physics books Phase transitions Critical phenomena
Phase Transitions and Critical Phenomena
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
1,952
[ "Physical phenomena", "Phase transitions", "Critical phenomena", "Phases of matter", "Condensed matter physics", "Statistical mechanics", "Matter", "Dynamical systems" ]
52,206,441
https://en.wikipedia.org/wiki/Sigma-D%20relation
The Sigma-D relation, or Σ-D Relation, is the claimed relation between the radio surface brightness and diameter of a supernova remnant. It is generally regarded as of limited physical use, since it has very large scatter and is dominated by observational selection biases. References Physical cosmology Astrophysics Equations of astronomy
Sigma-D relation
[ "Physics", "Astronomy" ]
68
[ "Astronomical sub-disciplines", "Concepts in astronomy", "Theoretical physics", "Astronomy stubs", "Astrophysics", "Astrophysics stubs", "Equations of astronomy", "Physical cosmology" ]
52,213,563
https://en.wikipedia.org/wiki/Cross%20ventilation
Cross ventilation is a natural phenomenon where wind, fresh air or a breeze enters upon an opening, such as a window, and flows directly through the space and exits through an opening on the opposite side of the building (where the air pressure is lower). This produces a cool stream of air and as well as a current across the room from the exposed area to the sheltered area. Cross ventilation is a wind-driven effect and requires no energy, in addition to being the most effective method of wind ventilation. A commonly used technique to remove pollutants and heat in an indoor environment, cross ventilation can also decrease or even obviate the need for an air-conditioner and can improve indoor air quality. Other terms used for the effect include, cross-breeze, cross-draft, wind effect ventilation and cross-flow ventilation. Process The phenomenon occurs when openings in an environment (including vehicles) or building (houses, factories, sheds, etc) are set on opposite or adjoining walls, which allow air to enter and exit, thus creating a current of air across the interior environment. Windows or vents positioned on opposite sides of the room allow passive breezes a pathway through the structure, which circulate the air and provide passive cooling. There is also a pressure difference between the opposite sides of the establishment. The effect is mostly driven by the wind, whereby the air is pulled into the building on the high pressure windward part and is pushed out on the low pressure downwind side of the establishment (because of the pressure difference between the openings). A wind's effect on a structure creates regions that have positive pressure on the building's upwind area and a negative pressure on the downwind side. Thus, the building shape and local wind patterns are critical in making wind pressures that force airflow through its openings. If the windows on both sides of the buildings are opened, the overpressure on the side facing the wind, and/or low pressure on the adjacent protected side, will make a current of air through the room from the uncovered side towards the sheltered side. If there are windows on both sides in a building, cross ventilation is appropriate where the width of the room is up to five times the floor-to-ceiling height. If openings are only one side then wind-driven ventilation is more suited for structures where the width is around 2.5 times the floor to ceiling height. Factors Cross ventilation relies on many factors, such as the tightness of the establishment, wind direction and how much wind is available, its potential travel through chimneys, vents and other openings in the home. Casement windows can be installed to improve cross-breezes. Air quality may also affect cross ventilation. Although cross ventilation is generally more direct at its job than stack ventilation, the cons include its effects being unproductive on hot, still days, when it is most necessary. Moreover, cross ventilation is generally only suitable for narrow buildings. The contrasting height of the openings (walls, sill, panels or furniture) ordered by the space also immediately influence the level and velocity of ventilation. Effectiveness Cross ventilation works well in climates with hotter temperatures, where the system allows continual changes of the air within the building, refreshing it and reducing the temperature inside the structure and also when the window on the windward side of the building is not opened as much as the one on the leeward side. 
Cross ventilation will not be efficacious if the windows are more than 12m apart and if a window is behind a door that is regularly shut. An opened window that faces a prevailing wind and is conjugated with another window on the opposite side of a building will supply natural ventilation for fresh air. A decent and effective cross ventilation will remove heat from the interior and keep indoor air temperatures approximately 1.5 °C (2.7°F) below the outdoor air temperatures, ensuring that there is a steady inflow and outflow of fresh air inside the building. Besides windows, other openings like brise soleils, doors, louvers or ventilation grills and ducts can also work as effective ventilation openings, though an awning window provides the least effectivity. The wind surrounding building structures is important when it comes to assessing the air quality and thermal comfort indoors since both air and heat exchange rely heavily on the wind pressure on the exterior of the building. For the best airflow, the windward windows of the occupied space should not be opened as much as those on the leeward side. Disadvantages of wind-driven ventilation include capricious wind speeds and directions (which may create a strong unpleasant draft), and the polluted air from the outside that may tarnish the indoor air quality. Moreover, cross ventilation is not recommended for use in disease prevention when air is being moved by the cross ventilation from an unclean area into a clean area. Types There are four different types of cross ventilation: Single-sided ventilation: This method depends on the pressure contrasts between different openings within the occupied space. For rooms that only feature a single opening, the ventilation is impelled by turbulence, thereby creating a pumping activity on that lone opening, causing small inflows and outflows. Single-sided ventilation has a weak effect. It is preferable when cross ventilation is not achievable, where it uses windows or vents at the other side of the space to control air pressure. Cross ventilation (single spaces): Being unsophisticated and efficacious, this type of ventilation is a horizontal process that is driven by pressure differences between the windward and leeward sides of the occupied indoor environment. Ventilation here is generally provided using windows and vents at either side of a building where the variation in pressure draw air in and out. Cross ventilation (double-banked spaces): Involving banked rooms, this method features openings in the hallway structure. The openings allow a way for noise to move between spaces. It can provide a much higher air-exchange rate in comparison with single-sided ventilation. Stack ventilation: This ventilation is a vertical process and it's beneficiary for taller buildings with central atriums. It draws cooler air in at a lower level, whereby the air rises thereafter due to heat exposure before it is ventilated out at a higher level. Benefits from temperature compartmentalization and related pressure quality of the air, whereby warm air loses density when it rises and the cooler air supplants it. 
Equation For a simple volume with two openings, the cross wind flow rate can be calculated using the following equation: where is the far-field wind speed; is a local pressure drag coefficient for the building, defined at the location of the upstream opening; is a local pressure drag coefficient for the building, defined at the location of the downstream opening; is the cross-sectional area of the upstream opening; is the cross-sectional area of the downstream opening; is the discharge coefficient of the upstream opening; and is the discharge coefficient of the downstream opening. For rooms with single opening, the calculation of ventilation rate is more complicated than cross ventilation due to the bi-directional flow and strong turbulent effect. The ventilation rate for single-sided ventilation can be accurately predicted by combining different models for mean flow, pulsating flow and eddy penetration. The mean flow rate for single-sided ventilation is determined by: where l = width of the window; h = elevation of the top edge of the window; z0 = elevation of neural level (where inside and outside pressure balance); zref = reference elevation where the wind velocity is measured (at 10 m) and = mean wind velocity at the reference elevation. As observed in the equation (1), the air exchange depends linearly on the wind speed in the urban place where the architectural project will be built. CFD (Computational Fluid Dynamics) tools and zonal modelings are usually used to design naturally ventilated buildings. Windcatchers can assist wind-driven ventilation by guiding air in and out of structures. See also Windcatcher Passive cooling Passive ventilation Room air distribution Thermal comfort Stack effect References Ventilation Heating, ventilation, and air conditioning Sustainable building Building engineering Fluid dynamics Passive cooling Passive ventilation Architectural elements Low-energy building Cooling technology Physical phenomena Wind
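A minimal numerical sketch of the two-opening relation described above is given below, written in the common series-orifice form (free-stream wind speed, pressure coefficients at the two openings, opening areas and discharge coefficients). The coefficient values, wind speed and room volume are illustrative assumptions, not values taken from the article.

```python
# Hedged sketch of cross-ventilation flow through two openings on opposite
# facades, assuming the standard series-orifice form; requires cp_up > cp_down.
import math

def cross_vent_flow(u_wind, cp_up, cp_down, a_up, a_down, cd_up=0.6, cd_down=0.6):
    """Volumetric flow rate (m^3/s) driven by the wind-pressure difference."""
    resistance = 1.0 / (cd_up * a_up) ** 2 + 1.0 / (cd_down * a_down) ** 2
    return u_wind * math.sqrt((cp_up - cp_down) / resistance)

# Example: 3 m/s wind, windward Cp = +0.5, leeward Cp = -0.3, two 1 m^2 windows.
q = cross_vent_flow(u_wind=3.0, cp_up=0.5, cp_down=-0.3, a_up=1.0, a_down=1.0)
ach = q * 3600 / 120.0   # air changes per hour for an assumed 120 m^3 room
print(f"Q = {q:.2f} m^3/s  (~{ach:.0f} air changes per hour)")
```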
Cross ventilation
[ "Physics", "Chemistry", "Technology", "Engineering" ]
1,664
[ "Sustainable building", "Physical phenomena", "Building engineering", "Chemical engineering", "Architecture", "Construction", "Architectural elements", "Civil engineering", "Piping", "Components", "Fluid dynamics" ]
52,214,944
https://en.wikipedia.org/wiki/Proof%20of%20space
Proof of space (PoS) is a type of consensus algorithm achieved by demonstrating one's legitimate interest in a service (such as sending an email) by allocating a non-trivial amount of memory or disk space to solve a challenge presented by the service provider. The concept was formulated in 2013 by Dziembowski et al. and (with a different formulation) by Ateniese et al.. Proofs of space are very similar to proofs of work (PoW), except that instead of computation, storage is used to earn cryptocurrency. Proof-of-space is different from memory-hard functions in that the bottleneck is not in the number of memory access events, but in the amount of memory required. After the release of Bitcoin, alternatives to its PoW mining mechanism were researched, and PoS was studied in the context of cryptocurrencies. Proofs of space are seen as fairer and greener alternatives by blockchain enthusiasts due to the general-purpose nature of storage and the lower energy cost required by storage. In 2014, Signum (formerly Burstcoin) became the first practical implementation of a PoS (initially as proof of capacity) blockchain technology and is still actively developed. Other than Signum, several theoretical and practical implementations of PoS have been released and discussed, such as SpaceMint and Chia, but some were criticized for increasing demand and shortening the life of storage devices due to greater disc reading requirements than Signum. Concept description A proof-of-space is a piece of data that a prover sends to a verifier to prove that the prover has reserved a certain amount of space. For practicality, the verification process needs to be efficient, namely, consume a small amount of space and time. For security, it should be hard for the prover to pass the verification if it does not actually reserve the claimed amount of space. One way of implementing PoS is by using hard-to-pebble graphs. The verifier asks the prover to build a labeling of a hard-to-pebble graph. The prover commits to the labeling. The verifier then asks the prover to open several random locations in the commitment. Proof of storage A proof of storage (also proof of retrievability, proof of data possession) is related to a proof-of-space, but instead of showing that space is available for solving a puzzle, the prover shows that space is actually used to store a piece of data correctly at the time of proof. Proof of capacity A proof of capacity is a system where miners are allowed to pre-calculate ("plot") PoW functions and store them onto the HDD. The first implementation of proof of capacity was Signum (formerly burstcoin). Conditional proof of capacity The Proof of Capacity (PoC) consensus algorithm is used in some cryptocurrencies. Conditional Proof of Capacity (CPOC) is an improved version of PoC. It has a work, stake, and capacity system that works like the PoW, PoS, and PoC algorithms. By pledging their digital assets, users receive a higher income as a reward. Additionally, CPOC has designed a new reward measure for top users. In this algorithm, miners add a conditional component to the proof by ensuring that their plot file contains specific data related to the previous block. This additional condition enhances the security and decentralization of the consensus mechanism beyond traditional proof-of-capacity algorithms. Proof of space-time A proof of space-time (PoST) is a proof that shows the prover has spent an amount of time keeping the reserved space unchanged. 
Its creators reason that the cost of storage is inextricably linked not only to its capacity, but to the time in which that capacity is used. It is related to a proof-of-storage (but without necessarily storing any useful data), although the Moran-Orlov construction also allows a tradeoff between space and time. The first implementation of PoST is with the Chia blockchain. Uses Proofs of space could be used as an alternative to proofs of work in the traditional client puzzle applications, such as anti-spam measures and denial of service attack prevention. Proof-of-Space has also been used for malware detection, by determining whether the L1 cache of a processor is empty (e.g., has enough space to evaluate the PoS routine without cache misses) or contains a routine that resisted being evicted. Signum (formerly Burstcoin) The first blockchain to use hard disk based blockchain validation, established in 2014. Signum Proof of Capacity consumes disk space rather than computing resources to mine a block. Unlike PoW, where the miners keep changing the block header and hash to find the solution, proof of capacity (as implemented by Burstcoin, and developed further by Signum) generates random solutions, also called plots, using the Shabal cryptographic algorithm in advance and stores it on hard drives. This stage is called plotting, and it may take days or even weeks depending on the storage capacity of the drive. In the next stage - mining, miners match their solutions to the most recent puzzle and the node with the fastest solution gets to mine the next block. SpaceMint In 2015, a paper proposed a cryptocurrency called SpaceMint. It attempts to solve some of the practical design problems associated with the pebbling-based PoS schemes. In using PoS for decentralized cryptocurrency, the protocol has to be adapted to work in a non-interactive protocol since each individual in the network has to behave as a verifier. Chia In 2018, a proposed cryptocurrency Chia presented two papers presenting a new protocol based on proof of space and proof of time. In February 2021, Chia published a white paper outlining its business and has since launched its mainnet and Chia coin (XCH) using the Proof of Space Time concept. The spacetime model of Chia also depends on "plotting" (generation of proof-of-space files) to the storage medium to solve a puzzle. Unlike many proof-of-storage cryptocurrencies, Chia plots do not store any useful data. Also, Chia's proof-of-time method for plotting has raised concerns over shortened lifespans of solid-state drives due to the intensity of write activity involved in plot generation (typically, plotting occurs on an SSD and then the finished plots are transferred to a hard disk drive for long-term storage). See also Proof of work Proof of stake Proof of authority Proof of personhood References Cryptocurrencies Distributed algorithms Cryptography
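To illustrate the plot-then-challenge pattern described above, here is a deliberately simplified toy sketch. It uses a flat table of hashes rather than the hard-to-pebble graphs or Shabal-based plots of real systems such as Signum or Chia, so it demonstrates only the protocol flow, not the security argument.

```python
# Hedged toy sketch of "plotting" then answering spot-check challenges.
# Note: unlike real constructions, a cheating prover here could recompute each
# answer on demand; real schemes use hard-to-pebble graphs (or similar) to make
# dropping the stored table genuinely costly.
import hashlib, os, random

def plot(seed: bytes, n_entries: int) -> list:
    """'Plotting': precompute and store a table derived from the seed."""
    return [hashlib.sha256(seed + i.to_bytes(8, "big")).digest() for i in range(n_entries)]

def challenge(n_entries: int, k: int) -> list:
    """Verifier picks k random indices to be opened."""
    return random.sample(range(n_entries), k)

def respond(table: list, indices: list) -> list:
    """Prover answers directly from its stored table."""
    return [table[i] for i in indices]

def verify(seed: bytes, indices: list, answers: list) -> bool:
    """Verifier spot-checks the answers (cheap: only k hashes)."""
    return all(hashlib.sha256(seed + i.to_bytes(8, "big")).digest() == a
               for i, a in zip(indices, answers))

seed = os.urandom(16)
table = plot(seed, n_entries=10_000)     # prover dedicates storage
idx = challenge(10_000, k=16)            # verifier challenges
print("proof accepted:", verify(seed, idx, respond(table, idx)))
```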
Proof of space
[ "Mathematics", "Engineering" ]
1,390
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
52,215,328
https://en.wikipedia.org/wiki/Rankine%20half%20body
In the field of fluid dynamics, a Rankine half body is a feature of fluid flow discovered by Scottish physicist and engineer William Rankine that is formed when a fluid source is added to a fluid undergoing potential flow. Superposition of uniform flow and source flow yields the Rankine half body flow. A practical example of this type of flow is a bridge pier or a strut placed in a uniform stream. The resulting stream function (\psi) and velocity potential (\phi) are obtained by simply adding the stream function and velocity potential for each individual flow. Solution The flow equations of the Rankine half body are solved using the principle of superposition, combining the solutions for the uniform flow of the stream and the radial flow of the source. Given a uniform flow of speed U in the x-direction and a source of strength m at the origin, we have \psi = U r \sin\theta + \frac{m}{2\pi}\theta and \phi = U r \cos\theta + \frac{m}{2\pi}\ln r. The stagnation point for this flow can be determined by equating the velocity to zero in either direction. Because of the symmetry of the flow in the y-direction, the stagnation point must lie on the x-axis. Equating both u_r = U\cos\theta + \frac{m}{2\pi r} and u_\theta = -U\sin\theta to zero, we obtain \theta = \pi and r = \frac{m}{2\pi U}; at this point we have the stagnation point. Now, we note that the streamline passing through the stagnation point has the constant value \psi = \frac{m}{2}, so following this constant streamline gives the outline of the body: U r \sin\theta + \frac{m}{2\pi}\theta = \frac{m}{2}. Then, r = \frac{m(\pi - \theta)}{2\pi U \sin\theta} describes the half body outline. Significance This type of flow provides important information about the flow over the front part of a streamlined body. Close to the boundary the ideal solution probably does not represent the real flow accurately. The pressure and velocity of the flow near the boundary layer are estimated by applying Bernoulli's principle to the potential-flow approximation. The above equations may be used to calculate the stress on a body placed in the flow stream. See also Rankine body References http://www.iust.ac.ir/files/mech/mazidi_9920c/fluid_ii/lecture8.pdf (pg no 22.23) http://www-mdp.eng.cam.ac.uk/web/library/enginfo/aerothermal_dvd_only/aero/fprops/poten/node35.html http://nptel.ac.in/courses/101103004/15 http://poisson.me.dal.ca/site2/courses/mech3300/Superposition.pdf https://faculty.poly.edu/~rlevicky/Files/Other/Handout14_6333.pdf http://web.mit.edu/2.016/www/handouts/2005Reading4.pdf http://www1.maths.leeds.ac.uk/~kersale/2620/Notes/chapter_4.pdf Fluid dynamics
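A short numerical sketch of the superposition above, with illustrative (assumed) values for the stream speed and source strength:

```python
# Hedged sketch: uniform stream of speed U plus a source of strength m, using
# common textbook notation. The dividing streamline psi = m/2 is the half body.
import math

U = 10.0      # free-stream speed, m/s (assumed)
m = 20.0      # source strength (volume flow per unit depth), m^2/s (assumed)

def stream_function(r: float, theta: float) -> float:
    return U * r * math.sin(theta) + m * theta / (2 * math.pi)

# Stagnation point: on the x-axis upstream of the source, at r = m / (2*pi*U).
r_stag = m / (2 * math.pi * U)
print(f"stagnation point at x = {-r_stag:.3f} m")

# Half-body outline: r(theta) = m * (pi - theta) / (2 * pi * U * sin(theta)).
for theta_deg in (30, 60, 90, 120, 150):
    th = math.radians(theta_deg)
    r = m * (math.pi - th) / (2 * math.pi * U * math.sin(th))
    print(f"theta = {theta_deg:3d} deg -> r = {r:.3f} m, psi = {stream_function(r, th):.3f}")

# Far downstream the body approaches a half-width of m / (2*U).
print(f"asymptotic half-width = {m / (2 * U):.3f} m")
```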
Rankine half body
[ "Chemistry", "Engineering" ]
559
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
26,656,965
https://en.wikipedia.org/wiki/ACN-PCN%20method
The Aircraft Classification Number (ACN) – Pavement Classification Number (PCN) method is a standardized international airport pavement rating system promulgated by the ICAO in 1981. The method has been the official ICAO pavement rating system for pavements intended for aircraft of apron (ramp) mass greater than 5700 kg from 1981 to 2020. The method is scheduled to be replaced by the ACR-PCR method by November 28, 2024. For the safe and efficient use of pavements, the method has been designed to: enable aircraft operators to determine the permissible operating weights for their aircraft; assist aircraft manufacturers to ensure compatibility between airfield pavements and the aircraft under development; permit airport authorities to report on the aircraft they can accept and allow them to use any evaluation procedure of their choice to ascertain the loading the pavements can accept. The method relies on the plain comparison of two numbers: The ACN, a number that expresses the relative effect on an airplane of a given weight on a pavement structure for a specified standard subgrade strength; The PCN, a number (and series of letters) representing the pavement bearing strength (on the same scale as ACN) of a given pavement section (runway, taxiway, apron) for unrestricted operations. Aircraft Classification Number (ACN) The ACN calculation process is fully described in ICAO Doc 9157 Aerodrome Design Manual – Part 3 "Pavements" (2nd ed.). The procedure to calculate the ACN is as such: Design a theoretical pavement according to a defined criterion: For flexible pavements, design the pavement for 10,000 load applications of the aircraft according to the CBR design procedure combined with Boussinesq's solution for deflection in the elastic half-space For rigid pavements, design the pavement to reach a standard flexural stress of 2.75 MPa at the bottom of the cement concrete layer according to Westergaard theory Calculate the single wheel load, inflated at 1.25 MPa, that would require the same pavement – this is the Derived Single Wheel Load (DSWL) The ACN is defined as twice the DSWL, expressed in thousands of kilograms The ACN are calculated for four standard subgrade strengths, for flexible and rigid pavements, thus leading to 8 different values. ACNs depend on the landing gear geometry (number of wheels and wheel spacing), the landing gear load (that is dependent upon the aircraft weight and center of gravity) and the tire pressure. Normally, the aftmost center of gravity for the Maximum Ramp Weight (MRW) lead to the critical ACN. Aircraft manufacturers publish the ACNs of their aircraft in their respective Aircraft Characteristics manuals. The ICAO Aerodrome Design Manual contains the source code of computer programs for the calculation of ACNs. The FAA also developed COMFAA, a software enabling the calculation of ACNs for different aircraft depending on the input parameters. Pavement Classification Number (PCN) Contrary to the ACN, the ICAO does not prescribe a standardized calculation procedure for the PCN. Different PCN calculation procedures may therefore be found around the world. However, the ICAO defines a standardized reporting format for the PCN that comprises the PCN numerical value and a series of 4 letters. PCN may also be known as Load Classification Number or LCN. PCNs depend on both the pavement structure and the aircraft traffic operated on the pavement. 
The PCNs are determined by airports for their runways, taxiways and aprons and published in the Aeronautical Information Publication (AIP). Application by aerodrome authorities and aircraft operators An aircraft having an ACN (at a given weight) equal to or less than the PCN can operate without restriction on the pavement, provided that its tire pressure does not exceed the PCN limitation. If the ACN exceeds the PCN, some restrictions (for example on weight of frequency of operation) may apply depending on the national or local regulations for overload operations. With the exception of massive overloading, pavements in their structural behaviour are not subject to particular limiting load above which they suddenly or catastrophically fail. As a result, minor or medium overload operations may be allowed by the airport authority depending on the corresponding loss in pavement life expectancy. Evolutions and limitations The ACN-PCN method underwent 2 major changes since its introduction in 1981: In 2007, the ICAO adopted a new set of alpha-factors for the calculation of ACNs on flexible pavements based on findings from full-scale pavement tests. This led to a reduction of the flexible ACNs for all landing gears with four wheels or more. In 2013, the ICAO adopted new limits for the tire pressure categories, again based on findings from full-scale pavement tests. Despite these changes, the ACN-PCN method gradually became inconsistent with recent pavement design methods, mostly based on Linear Elastic Analysis (LEA) or Finite Element Method (FEM). The method is also failing to consider accurately the effect of modern landing gear configurations (with multi-wheels arrangements) and the improved characteristics of new-generation pavement materials. As a result, the ICAO triggered the development of a new pavement rating method aimed at overcoming these deficiencies. This new system, the ACR-PCR method, became effective in July 2020. Aircraft ACN list References Airport infrastructure Pavement engineering
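The basic comparison rule lends itself to a few lines of code. The sketch below is a simplified illustration only: the tire-pressure category limits shown are the commonly quoted post-2013 values and should be treated as assumptions here, and a real overload assessment involves far more than this single check.

```python
# Hedged sketch of the ACN-PCN comparison: unrestricted operation requires
# ACN <= PCN and a tire pressure within the reported pressure category.
# Category limits below (MPa) are assumed illustrative values.
TIRE_PRESSURE_LIMIT_MPA = {"W": float("inf"), "X": 1.75, "Y": 1.25, "Z": 0.50}

def unrestricted_ok(acn: float, tire_pressure_mpa: float,
                    pcn: float, pressure_code: str) -> bool:
    """True if the aircraft may use the pavement without overload restrictions."""
    return acn <= pcn and tire_pressure_mpa <= TIRE_PRESSURE_LIMIT_MPA[pressure_code]

# Example: pavement reported as PCN 80 with pressure code X, aircraft ACN 62 at 1.4 MPa.
print(unrestricted_ok(acn=62, tire_pressure_mpa=1.4, pcn=80, pressure_code="X"))
```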
ACN-PCN method
[ "Engineering" ]
1,077
[ "Airport infrastructure", "Aerospace engineering" ]
26,658,547
https://en.wikipedia.org/wiki/BTB/POZ%20domain
The BTB/POZ domain (BTB for BR-C, ttk and bab or POZ for Pox virus and Zinc finger) is a structural domain found in proteins across the domain Eukarya. Given its prevalence in eukaryotes and its absence in Archaea and bacteria, it likely arose after the origin of eukaryotes. While primarily a protein-protein interaction domain, some BTB domains have additional functionality in transcriptional regulation, cytoskeletal mobility, protein ubiquitination and degradation, and ion channel formation and operation. BTB domains have traditionally been classified by the other structural features present in the protein. Discovery The BTB/POZ domain was first described by two independent research groups in 1994. Researchers at UCLA found a conserved 115 amino acid motif in nine Drosophila proteins, including Broad complex, tramtrack, and bric-a-brac, and labelled the conserved region the BTB domain. At the same time, a group at Imperial Cancer Research Fund Laboratories in London discovered the same 120 amino acid motif in a set of otherwise unrelated zinc finger proteins and a set of pox-virus proteins, and thus named the region the POZ domain. Structure The motif is approximately 120 amino acids long, with a core fold of 95 amino acids that form five alpha helices and three beta sheets. The alpha helices form two hairpin structures, A1/A2 and A4/A5, out of the first and second and the fourth and fifth alpha helices respectively. The remaining alpha helix, A3, bridges the two. The three beta sheets cap the A1/A2 hairpin. Additional secondary structures can surround this core fold. For example, BTB domains in Kelch proteins, C2H2 zinc finger proteins, and HTH-containing proteins frequently include an additional alpha helix and beta sheet at the N-terminus of the domain. Function The BTB domain is primarily a protein-protein interaction domain. In zinc-finger proteins, it commonly forms homodimers with other BTB domains, mediates heteromeric dimerization, and recruits transcriptional corepressors. References Protein domains
BTB/POZ domain
[ "Biology" ]
442
[ "Protein domains", "Protein classification" ]
26,660,462
https://en.wikipedia.org/wiki/Shvo%20catalyst
The Shvo catalyst is an organoruthenium compound that catalyzes the hydrogenation of polar functional groups including aldehydes, ketones and imines. The compound is of academic interest as an early example of a catalyst for transfer hydrogenation that operates by an "outer sphere mechanism". Related derivatives are known where p-tolyl replaces some of the phenyl groups. Shvo's catalyst represents a subset of homogeneous hydrogenation catalysts that involves both metal and ligand in its mechanism. Synthesis and structure The catalyst is named after Youval Shvo, who uncovered it through studies on the effect of diphenylacetylene on the catalytic properties of triruthenium dodecacarbonyl. The reaction of diphenylacetylene and Ru3(CO)12 gives the piano stool complex (C5Ph4O)Ru(CO)3. Subsequent hydrogenation of this tricarbonyl affords Shvo's catalyst. The iron analogue is also known, see Knölker complex. The compound contains a pair of equivalent Ru centres that are bridged by a strong hydrogen bond and a bridging hydride. In solution, the complex dissociates unsymmetrically: (C5Ph4O)2HRu2H(CO)4 ⇌ (C5Ph4OH)RuH(CO)2 + (C5Ph4O)Ru(CO)2 Hydrogenation catalysis In the presence of a suitable hydrogen donor or hydrogen gas, Shvo's catalyst effects the hydrogenation of several polar functional groups, e.g. aldehydes, ketones, imines, and iminium ions. Many alkenes and ketones undergo hydrogenation, although conditions are forcing: 145 °C (500 psi). One obstacle to the use of Shvo's catalyst in the hydrogenation of alkynes is its propensity to bind the alkyne quite tightly, forming a stable complex that gradually poisons the catalyst. Intramolecular reactions proceed as well, illustrated by the conversion of allylic alcohols to ketones. Shvo's catalyst also catalyzes dehydrogenations. Mechanism The mechanism of hydrogenation catalyzed by Shvo's catalyst has been a matter of debate, broadly between two alternative descriptions of the double bond's interaction with the complex at the rate-determining step. The proposed alternatives are an inner-sphere mechanism, where the transition state involves interaction with the metal only, and an outer-sphere mechanism, in which the cyclopentadienol proton also interacts with the substrate. Kinetic isotope studies provide evidence of a concerted transfer due to strong rate influence from both the ligand -OH and the metal hydride. Other reactions Shvo's catalyst facilitates the Tishchenko reaction, i.e., the formation of esters from alcohols. The early step in this reaction is the conversion of the primary alcohol to the aldehyde. Addition of the amine is facilitated through oxidation to the ynone, followed by reduction of the product. Another case of "hydrogen borrowing", the alkylation of amines using other amines is also promoted by Shvo's catalyst. The reaction proceeds through oxidation to an imine, which allows nucleophilic attack, followed by an elimination step and reduction of the double bond. References Organoruthenium compounds Carbonyl complexes Hydrogenation catalysts
Shvo catalyst
[ "Chemistry" ]
699
[ "Hydrogenation catalysts", "Hydrogenation" ]
26,666,199
https://en.wikipedia.org/wiki/Hypercomplex%20analysis
In mathematics, hypercomplex analysis is the extension of complex analysis to the hypercomplex numbers. The first instance is functions of a quaternion variable, where the argument is a quaternion (in this case, the sub-field of hypercomplex analysis is called quaternionic analysis). A second instance involves functions of a motor variable where arguments are split-complex numbers. In mathematical physics, there are hypercomplex systems called Clifford algebras. The study of functions with arguments from a Clifford algebra is called Clifford analysis. A matrix may be considered a hypercomplex number. For example, the study of functions of 2 × 2 real matrices shows that the topology of the space of hypercomplex numbers determines the function theory. Functions such as square root of a matrix, matrix exponential, and logarithm of a matrix are basic examples of hypercomplex analysis. The function theory of diagonalizable matrices is particularly transparent since they have eigendecompositions. Suppose A = \sum_i \lambda_i E_i, where the E_i are projections. Then for any polynomial f, f(A) = \sum_i f(\lambda_i) E_i. The modern terminology for a "system of hypercomplex numbers" is an algebra over the real numbers, and the algebras used in applications are often Banach algebras since Cauchy sequences can be taken to be convergent. Then the function theory is enriched by sequences and series. In this context the extension of holomorphic functions of a complex variable is developed as the holomorphic functional calculus. Hypercomplex analysis on Banach algebras is called functional analysis. See also Giovanni Battista Rizza References Sources Daniel Alpay (ed.) (2006) Wavelets, Multiscale systems and Hypercomplex Analysis, Springer. Enrique Ramirez de Arellanon (1998) Operator theory for complex and hypercomplex analysis, American Mathematical Society (Conference proceedings from a meeting in Mexico City in December 1994). J. A. Emanuello (2015) Analysis of functions of split-complex, multi-complex, and split-quaternionic variables and their associated conformal geometries, Ph.D. Thesis, Florida State University. Sorin D. Gal (2004) Introduction to the Geometric Function theory of Hypercomplex variables, Nova Science Publishers. Irene Sabadini and Franciscus Sommen (eds.) (2011) Hypercomplex Analysis and Applications, Birkhauser Mathematics. Irene Sabadini & Michael V. Shapiro & F. Sommen (editors) (2009) Hypercomplex Analysis, Birkhauser. Sabadini, Sommen, Struppa (eds.) (2012) Advances in Hypercomplex Analysis, Springer. Functions and mappings Hypercomplex numbers Mathematical analysis
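The eigendecomposition remark above can be checked numerically. The following sketch (using NumPy, an implementation choice rather than anything prescribed by the sources) builds the spectral projections of a symmetric 2 × 2 matrix and compares f(A) computed through them against a truncated power series for the matrix exponential.

```python
# Hedged sketch: f(A) = sum_i f(lambda_i) E_i for a diagonalizable matrix A,
# illustrated with f = exp and verified against a truncated Taylor series.
import math
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # symmetric, hence diagonalizable

lam, V = np.linalg.eigh(A)              # eigenvalues and orthonormal eigenvectors

# Spectral projections E_i = v_i v_i^T, so that A = sum_i lam_i E_i.
E = [np.outer(V[:, i], V[:, i]) for i in range(len(lam))]
assert np.allclose(A, sum(l * P for l, P in zip(lam, E)))

# exp(A) via the spectral decomposition ...
f_A = sum(np.exp(l) * P for l, P in zip(lam, E))

# ... compared with a truncated power series sum_k A^k / k!.
expm_series = sum(np.linalg.matrix_power(A, k) / math.factorial(k) for k in range(30))
print(np.allclose(f_A, expm_series))    # True: both give the matrix exponential
```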
Hypercomplex analysis
[ "Mathematics" ]
554
[ "Mathematical analysis", "Mathematical structures", "Functions and mappings", "Mathematical objects", "Algebraic structures", "Mathematical relations", "Hypercomplex numbers", "Numbers" ]
26,669,297
https://en.wikipedia.org/wiki/Reduced%20dynamics
In quantum mechanics, especially in the study of open quantum systems, reduced dynamics refers to the time evolution of a density matrix for a system coupled to an environment. Consider a system and environment initially in the state \rho_{SE}(0) (which in general may be entangled) and undergoing unitary evolution given by U. Then the reduced dynamics of the system alone is simply \rho_S(t) = \mathrm{Tr}_E\left[U\,\rho_{SE}(0)\,U^\dagger\right]. If we assume that the mapping \rho_S(0) \mapsto \rho_S(t) is linear and completely positive, then the reduced dynamics can be represented by a quantum operation. This means we can express it in the operator-sum form \rho_S(t) = \sum_i K_i\,\rho_S(0)\,K_i^\dagger, where the K_i are operators on the Hilbert space of the system alone, and no reference is made to the environment. In particular, if the system and environment are initially in a product state \rho_{SE}(0) = \rho_S(0) \otimes \rho_E(0), it can be shown that the reduced dynamics are completely positive. However, the most general possible reduced dynamics are not completely positive. Notes References Nielsen, Michael A. and Isaac L. Chuang (2000). Quantum Computation and Quantum Information, Cambridge University Press. Quantum information science
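A small numerical sketch of the partial-trace prescription above, for a single system qubit coupled to a single environment qubit; the choice of initial states and of a CNOT as the joint unitary is purely illustrative.

```python
# Hedged sketch: evolve system + environment jointly under a unitary U, then
# trace out the environment to obtain the reduced state of the system.
import numpy as np

def partial_trace_env(rho_se: np.ndarray, dim_s: int, dim_e: int) -> np.ndarray:
    """Trace out the environment from a (dim_s*dim_e) x (dim_s*dim_e) density matrix."""
    rho = rho_se.reshape(dim_s, dim_e, dim_s, dim_e)
    return np.trace(rho, axis1=1, axis2=3)

# System qubit in |+><+|, environment qubit in |0><0| (a product state).
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_s0 = np.outer(plus, plus)
rho_e0 = np.diag([1.0, 0.0])
rho_se = np.kron(rho_s0, rho_e0)

# Joint unitary: a CNOT with the system as control (an arbitrary illustrative choice).
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

rho_s_t = partial_trace_env(U @ rho_se @ U.conj().T, dim_s=2, dim_e=2)
print(np.round(rho_s_t, 3))   # coherence is lost: the qubit is left in I/2
```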
Reduced dynamics
[ "Physics" ]
197
[ "Quantum mechanics", "Quantum physics stubs" ]
26,669,532
https://en.wikipedia.org/wiki/Entropy%20exchange
In quantum mechanics, and especially quantum information processing, the entropy exchange of a quantum operation E acting on the density matrix ρ of a system Q is defined as S(ρ, E) = S(ρR′Q′), where S(ρR′Q′) is the von Neumann entropy of the system Q and a fictitious purifying auxiliary system R after they are operated on by E. Here, ρRQ = |ψ⟩⟨ψ| is a purification of ρ, and ρR′Q′ = (IR ⊗ E)(ρRQ), where in the above equation E acts on Q, leaving R unchanged. References Quantum information science
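For a trace-preserving operation given in operator-sum form with Kraus operators Ei, the entropy exchange can equivalently be computed as the von Neumann entropy of the matrix W with entries Wij = tr(Ei ρ Ej†); this is a standard reformulation of the definition above. A minimal sketch assuming NumPy; the channel (a bit-flip channel) and the state are arbitrary illustrative choices:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho log2 rho), computed from the eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def entropy_exchange(rho, kraus_ops):
    """Entropy exchange via the matrix W_ij = tr(E_i rho E_j^dagger)."""
    W = np.array([[np.trace(Ei @ rho @ Ej.conj().T) for Ej in kraus_ops]
                  for Ei in kraus_ops])
    return von_neumann_entropy(W)

# Illustrative example: a bit-flip channel acting on one qubit.
p = 0.25
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
kraus = [np.sqrt(1 - p) * I2, np.sqrt(p) * sx]

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
print(entropy_exchange(rho, kraus))   # 0 would mean no entropy is exchanged
```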
Entropy exchange
[ "Physics" ]
71
[ "Quantum mechanics", "Quantum physics stubs" ]
42,094,122
https://en.wikipedia.org/wiki/Geometric%20terms%20of%20location
Geometric terms of location describe directions or positions relative to the shape of an object. These terms are used in descriptions of engineering, physics, and other sciences, as well as ordinary day-to-day discourse. Though these terms themselves may be somewhat ambiguous, they are usually used in a context in which their meaning is clear. For example, when referring to a drive shaft it is clear what is meant by axial or radial directions. Or, in a free body diagram, one may similarly infer a sense of orientation by the forces or other vectors represented. Examples Common geometric terms of location are: Axial – along the center of a round body, or the axis of rotation of a body Radial – along a direction pointing along a radius from the center of an object, or perpendicular to a curved path. Circumferential (or azimuthal) – following around a curve or circumference of an object. For instance: the pattern of cells in Taylor–Couette flow varies along the azimuth of the experiment. Tangential – intersecting a curve at a point and parallel to the curve at that point. Collinear – in the same line Parallel – in the same direction. Transverse – intersecting at any angle, i.e. not parallel. Orthogonal (or perpendicular) – at a right angle (at the point of intersection). Elevation – along a curve from a point on the horizon to the zenith, directly overhead. Depression – along a curve from a point on the horizon to the nadir, directly below. Vertical – spanning the height of a body. Longitudinal – spanning the length of a body. Lateral – spanning the width of a body. The distinction between width and length may be unclear out of context. Adjacent – next to Lineal – following along a given path. The shape of the path is not necessarily straight (compare to linear). For instance, a length of rope might be measured in lineal meters or feet. See arc length. Projection / Projected - in architecture, facade sticking out; convex. Recession / Recessed - the action of receding; away from an observer; concave. See also References Orientation (geometry) Position
Geometric terms of location
[ "Physics", "Mathematics" ]
442
[ "Geometric measurement", "Point (geometry)", "Physical quantities", "Position", "Topology", "Space", "Vector physical quantities", "Geometry", "Spacetime", "Wikipedia categories named after physical quantities", "Orientation (geometry)" ]
42,095,335
https://en.wikipedia.org/wiki/Surface%20hopping
Surface hopping is a mixed quantum-classical technique that incorporates quantum mechanical effects into molecular dynamics simulations. Traditional molecular dynamics assume the Born-Oppenheimer approximation, where the lighter electrons adjust instantaneously to the motion of the nuclei. Though the Born-Oppenheimer approximation is applicable to a wide range of problems, there are several applications, such as photoexcited dynamics, electron transfer, and surface chemistry where this approximation falls apart. Surface hopping partially incorporates the non-adiabatic effects by including excited adiabatic surfaces in the calculations, and allowing for 'hops' between these surfaces, subject to certain criteria. Motivation Molecular dynamics simulations numerically solve the classical equations of motion. These simulations, though, assume that the forces on the electrons are derived solely by the ground adiabatic surface. Solving the time-dependent Schrödinger equation numerically incorporates all these effects, but is computationally unfeasible when the system has many degrees of freedom. To tackle this issue, one approach is the mean field or Ehrenfest method, where the molecular dynamics is run on the average potential energy surface given by a linear combination of the adiabatic states. This was applied successfully for some applications, but has some important limitations. When the difference between the adiabatic states is large, then the dynamics must be primarily driven by only one surface, and not an average potential. In addition, this method also violates the principle of microscopic reversibility. Surface hopping accounts for these limitations by propagating an ensemble of trajectories, each one of them on a single adiabatic surface at any given time. The trajectories are allowed to 'hop' between various adiabatic states at certain times such that the quantum amplitudes for the adiabatic states follow the time dependent Schrödinger equation. The probability of these hops are dependent on the coupling between the states, and is generally significant only in the regions where the difference between adiabatic energies is small. Theory behind the method The formulation described here is in the adiabatic representation for simplicity. It can easily be generalized to a different representation. The coordinates of the system are divided into two categories: quantum () and classical (). The Hamiltonian of the quantum degrees of freedom with mass is defined as: , where describes the potential for the whole system. The eigenvalues of as a function of are called the adiabatic surfaces :. Typically, corresponds to the electronic degree of freedom, light atoms such as hydrogen, or high frequency vibrations such as O-H stretch. The forces in the molecular dynamics simulations are derived only from one adiabatic surface, and are given by: where represents the chosen adiabatic surface. The last equation is derived using the Hellmann-Feynman theorem. The brackets show that the integral is done only over the quantum degrees of freedom. Choosing only one adiabatic surface is an excellent approximation if the difference between the adiabatic surfaces is large for energetically accessible regions of . When this is not the case, the effect of the other states become important. This effect is incorporated in the surface hopping algorithm by considering the wavefunction of the quantum degrees of freedom at time t as an expansion in the adiabatic basis: , where are the expansion coefficients. 
Substituting the above equation into the time dependent Schrödinger equation gives , where and the nonadiabatic coupling vector are given by The adiabatic surface can switch at any given time t based on how the quantum probabilities are changing with time. The rate of change of is given by: , where . For a small time interval dt, the fractional change in is given by . This gives the net change in flux of population from state . Based on this, the probability of hopping from state j to n is proposed to be . This criterion is known as the "fewest switching" algorithm, as it minimizes the number of hops required to maintain the population in various adiabatic states. Whenever a hop takes place, the velocity is adjusted to maintain conservation of energy. To compute the direction of the change in velocity, the nuclear forces in the transition is where is the eigen value. For the last equality, is used. This shows that the nuclear forces acting during the hop are in the direction of the nonadiabatic coupling vector . Hence is a reasonable choice for the direction along which velocity should be changed. Frustrated hops If the velocity reduction required to conserve energy while making a hop is greater than the component of the velocity to be adjusted, then the hop is known as frustrated. In other words, a hop is frustrated if the system does not have enough energy to make the hop. Several approaches have been suggested to deal with these frustrated hops. The simplest of these is to ignore these hops. Another suggestion is not to change the adiabatic state, but reverse the direction of the component of the velocity along the nonadiabatic coupling vector. Yet another approach is to allow the hop to happen if an allowed hopping point is reachable within uncertainty time , where is the extra energy that the system needed to make the hop possible. Ignoring forbidden hops without any form of velocity reversal does not recover the correct scaling for Marcus theory in the nonadiabatic limit, but a velocity reversal can usually correct the errors Decoherence time Surface hopping can develop nonphysical coherences between the quantum coefficients over large time which can degrade the quality of the calculations, at times leading the incorrect scaling for Marcus theory. To eliminate these errors, the quantum coefficients for the inactive state can be damped or set to zero after a predefined time has elapsed after the trajectory crosses the region where hopping has high probabilities. Outline of the algorithm The state of the system at any time is given by the phase space of all the classical particles, the quantum amplitudes, and the adiabatic state. The simulation broadly consists of the following steps: Step 1. Initialize the state of the system. The classical positions and velocities are chosen based on the ensemble required. Step 2. Compute forces using Hellmann-Feynman theorem, and integrate the equations of motion by time step to obtain the classical phase space at time . Step 3. Integrate the Schrödinger equation to evolve quantum amplitudes from time to in increments of . This time step is typically much smaller than . Step 4. Compute probability of hopping from current state to all other states. Generate a random number, and determine whether a switch should take place. If a switch does occur, change velocities to conserve energy. Go back to step 2, till trajectories have been evolved for the desired time. 
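A skeleton of the loop above, assuming NumPy; the functions adiabatic_energies, gradients and nonadiabatic_coupling are hypothetical placeholders for a user-supplied model with one nuclear degree of freedom, and the simple Euler substepping of the amplitudes is for illustration only, not a production integrator:

```python
import numpy as np

# --- model interface (hypothetical placeholders supplied by the user) -------
def adiabatic_energies(x):      # returns the array of adiabatic surfaces E_n(x)
    raise NotImplementedError
def gradients(x):               # returns the array of derivatives dE_n/dx
    raise NotImplementedError
def nonadiabatic_coupling(x):   # returns the antisymmetric coupling matrix d_kj(x)
    raise NotImplementedError

def fssh_trajectory(x, v, mass, state, c, dt, n_steps, hbar=1.0):
    """Sketch of one fewest-switches surface-hopping trajectory (1 nuclear DOF).

    c is the complex array of quantum amplitudes, one entry per adiabatic state.
    """
    rng = np.random.default_rng()
    for _ in range(n_steps):
        # Step 2: classical propagation on the active adiabatic surface (velocity Verlet).
        v += 0.5 * dt * (-gradients(x)[state]) / mass
        x += dt * v
        v += 0.5 * dt * (-gradients(x)[state]) / mass

        # Step 3: propagate the quantum amplitudes with smaller substeps.
        E, d = adiabatic_energies(x), nonadiabatic_coupling(x)
        n_sub = 100
        for _ in range(n_sub):
            dcdt = -1j / hbar * E * c - v * (d @ c)
            c += (dt / n_sub) * dcdt

        # Step 4: fewest-switches hopping probabilities out of the active state.
        pop = np.abs(c[state]) ** 2
        xi = rng.random()
        acc = 0.0
        for k in range(len(c)):
            if k == state:
                continue
            g = max(0.0, -2.0 * dt * v * d[k, state]
                    * np.real(np.conj(c[k]) * c[state]) / pop)
            acc += g
            if xi < acc:
                # Rescale the velocity to conserve energy; frustrated hops are ignored.
                ke_new = 0.5 * mass * v**2 + (E[state] - E[k])
                if ke_new >= 0.0:
                    v = np.sign(v) * np.sqrt(2.0 * ke_new / mass)
                    state = k
                break
    return x, v, state, c
```

In practice an ensemble of such trajectories is run with sampled initial conditions, and observables are averaged over the ensemble.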
Applications The method has been applied successfully to understand dynamics of systems that include tunneling, conical intersections and electronic excitation. Limitations and foundations In practice, surface hopping is computationally feasible only for a limited number of quantum degrees of freedom. In addition, the trajectories must have enough energy to be able to reach the regions where probability of hopping is large. Most of the formal critique of the surface hopping method comes from the unnatural separation of classical and quantum degrees of freedom. Recent work has shown, however, that the surface hopping algorithm can be partially justified by comparison with the Quantum Classical Liouville Equation. It has further been demonstrated that spectroscopic observables can be calculated in close agreement with the formally exact hierarchical equations of motion. See also Computational chemistry Molecular dynamics Path integral molecular dynamics Quantum chemistry References External links Newton-X: A package for Newtonian dynamics close to the crossing seam. Movie examples of surface hopping. Quantum mechanics Molecular dynamics
Surface hopping
[ "Physics", "Chemistry" ]
1,569
[ "Molecular physics", "Theoretical physics", "Quantum mechanics", "Computational physics", "Molecular dynamics", "Computational chemistry" ]
50,748,263
https://en.wikipedia.org/wiki/M-Xylylenediamine
m-Xylylenediamine is an organic compound with the formula C6H4(CH2NH2)2. A colorless oily liquid, it is produced by hydrogenation of isophthalonitrile. Uses and reactions m-Xylylenediamine (MXDA) is used in a variety of industrial applications including amine based curing agents for epoxy resins which may then be formulated into coatings, adhesives, sealants, and elastomers. m-Xylylenediamine undergoes the Sommelet reaction to give isophthalaldehyde. Hazards Exposure to m-xylylenediamine may occur by inhalation, skin contact, eye exposure, or ingestion. It can cause chemical burns, tissue damage, delayed pulmonary edema, shock, and skin sensitization. Symptoms of inhalation include a burning sensation in the respiratory tract, cough, sore throat, labored breathing, and dyspnea (shortness of breath). It is also flammable and produces toxic fumes when burned. m-Xylylenediamine reacts with acids, acid chlorides, and acid anhydrides. References External links Safety Data Sheet Product Data Diamines Aromatic compounds Monomers
M-Xylylenediamine
[ "Chemistry", "Materials_science" ]
268
[ "Organic compounds", "Aromatic compounds", "Monomers", "Polymer chemistry" ]
50,759,262
https://en.wikipedia.org/wiki/Chemical%20reactor%20materials%20selection
Chemical reactor materials selection is an important aspect in the design of a chemical reactor. There are four main groups of chemical reactors - CSTR, PFR, semi-batch, and catalytic - with variations on each. Depending on the nature of the chemicals involved in the reaction, as well as the operating conditions (e.g. temperature and pressure), certain materials will perform better over others. Material Options There are several broad classes of materials available for use in creating a chemical reactor. Some examples include metals, glasses, ceramics, polymers, carbon, and composites. Metals are the most common class of materials for chemical engineering equipment as they are comparatively easy to manufacture, have high strength, and are resistant to fracture. Glass is common in chemical laboratory equipment, but highly prone to fracture and so is not useful in large-scale industrial use. Ceramics are not that common of a material for chemical reactors as they are brittle and difficult to manufacture. Polymers have begun to gain more popularity in piping and valves as they aid in temperature stability. There are several forms of carbon, but the most useful form for reactors is carbon or graphite fibers in composites. Criteria for Selection The last important criteria for a particular material is its safety. Engineers have a responsibility to ensure the safety of those who handle equipment or utilize a building or road for example, by minimizing the risks of injuries or casualties. Other considerations include strength, resistance to sudden failure from either mechanical or thermal shock, corrosion resistance, and cost, to name a few. To compare different materials to each other, it may prove useful to consult an ASHBY diagram and the ASME Pressure Vessel Codes. The material choice would be ideally drawn from known data as well as experience. Having a deeper understanding of the component requirements and the corrosion and degradation behavior will aid in materials selection. Additionally, knowing the performance of past systems, whether they be good or bad, will benefit the user in deciding on alternative alloys or using a coated system; if previous information is not available, then performing tests is recommended. High Temperature Operation High temperature reactor operation includes a host of problems such as distortion and cracking due to thermal expansion and contraction, and high temperature corrosion. Some indications that the latter is occurring include burnt or charred surfaces, molten phases, distortion, thick scales, and grossly thinned metal. Some typical high-temperature alloys include iron, nickel, or cobalt that have >20% chromium for the purpose of forming a protective oxide against further oxidation. There are also various other elements to aid in corrosion resistance such as aluminum, silicon, and rare earth elements such as yttrium, cerium, and lanthanum. Other additions such as reactive or refractory metals, can improve the mechanical properties of the reactor. Refractory metals can experience catastrophic oxidation, which turns metals into a powdery oxide with little use. This damage is worse in stagnant conditions, however silicide coatings have been proven to offer some resistance. References Chemical reactors
Chemical reactor materials selection
[ "Chemistry", "Engineering" ]
606
[ "Chemical reactors", "Chemical reaction engineering", "Chemical equipment" ]
53,594,762
https://en.wikipedia.org/wiki/Von%20K%C3%A1rm%C3%A1n%20swirling%20flow
Von Kármán swirling flow is a flow created by a uniformly rotating infinitely long plane disk, named after Theodore von Kármán who solved the problem in 1921. The rotating disk acts as a fluid pump and is used as a model for centrifugal fans or compressors. This flow is classified under the category of steady flows in which vorticity generated at a solid surface is prevented from diffusing far away by an opposing convection, the other examples being the Blasius boundary layer with suction, stagnation point flow etc. Flow description Consider a planar disk of infinite radius rotating at a constant angular velocity in fluid which is initially at rest everywhere. Near to the surface, the fluid is being turned by the disk, due to friction, which then causes centrifugal forces which move the fluid outwards. This outward radial motion of the fluid near the disk must be accompanied by an inward axial motion of the fluid towards the disk to conserve mass. Theodore von Kármán noticed that the governing equations and the boundary conditions allow a solution such that and are functions of only, where are the velocity components in cylindrical coordinate with being the axis of rotation and represents the plane disk. Due to symmetry, pressure of the fluid can depend only on radial and axial coordinate . Then the continuity equation and the incompressible Navier–Stokes equations reduce to where is the kinematic viscosity. No rotation at infinity Since there is no rotation at large , becomes independent of resulting in . Hence and . Here the boundary conditions for the fluid are Self-similar solution is obtained by introducing following transformation, where is the fluid density. The self-similar equations are with boundary conditions for the fluid are The coupled ordinary differential equations need to be solved numerically and an accurate solution is given by Cochran(1934). The inflow axial velocity at infinity obtained from the numerical integration is , so the total outflowing volume flux across a cylindrical surface of radius is . The tangential stress on the disk is . Neglecting edge effects, the torque exerted by the fluid on the disk with large () but finite radius is The factor is added to account for both sides of the disk. From numerical solution, torque is given by . The torque predicted by the theory is in excellent agreement with the experiment on large disks up to the Reynolds number of about , the flow becomes turbulent at high Reynolds number. Rigid body rotation at infinity This problem was addressed by George Keith Batchelor(1951). Let be the angular velocity at infinity. Now the pressure at is . Hence and . Then the boundary conditions for the fluid are Self-similar solution is obtained by introducing following transformation, The self-similar equations are with boundary conditions for the fluid is The solution is easy to obtain only for i.e., the fluid at infinity rotates in the same sense as the plate. For , the solution is more complex, in the sense that many-solution branches occur. Evans(1969) obtained solution for the range . Zandbergen and Dijkstra showed that the solution exhibits a square root singularity as and found a second-solution branch merging with the solution found for . The solution of the second branch is continued till , at which point, a third-solution branch is found to emerge. They also discovered an infinity of solution branches around the point . Bodoyni(1975) calculated solutions for large negative , showed that the solution breaks down at . 
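Returning to the basic no-rotation-at-infinity problem, the self-similar system in a standard normalization (F = u_r/(rΩ), G = u_θ/(rΩ), H = u_z/√(νΩ), ζ = z√(Ω/ν)) reads 2F + H′ = 0, F² − G² + HF′ = F″ and 2FG + HG′ = G″, with F(0) = H(0) = 0, G(0) = 1 and F, G → 0 at infinity. A minimal boundary-value sketch, assuming SciPy and an arbitrary truncation of the semi-infinite domain, recovers Cochran's wall gradients and an axial inflow of roughly 0.88√(νΩ):

```python
import numpy as np
from scipy.integrate import solve_bvp

ZMAX = 20.0     # truncation of the semi-infinite domain (arbitrary choice)

def rhs(z, y):
    # y = [F, F', G, G', H]
    F, Fp, G, Gp, H = y
    return np.vstack([Fp,
                      F**2 - G**2 + H * Fp,   # radial momentum
                      Gp,
                      2 * F * G + H * Gp,     # azimuthal momentum
                      -2 * F])                # continuity: H' = -2F

def bc(ya, yb):
    # F(0) = 0, G(0) = 1, H(0) = 0, F(inf) = 0, G(inf) = 0
    return np.array([ya[0], ya[2] - 1.0, ya[4], yb[0], yb[2]])

z = np.linspace(0.0, ZMAX, 200)
y0 = np.zeros((5, z.size))
y0[2] = np.exp(-z)          # crude initial guess: G decaying to zero

sol = solve_bvp(rhs, bc, z, y0)
print("F'(0)  =", sol.y[1, 0])    # ~ 0.51  (radial wall shear)
print("G'(0)  =", sol.y[3, 0])    # ~ -0.62 (azimuthal wall shear, sets the torque)
print("H(inf) =", sol.y[4, -1])   # ~ -0.88 (axial inflow toward the disk)
```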
If the rotating plate is allowed to have uniform suction velocity at the plate, then meaningful solution can be obtained for . For ( represents solid body rotation, the whole fluid rotates at the same speed) the solution reaches the solid body rotation at infinity in an oscillating manner from the plate. The axial velocity is negative for and positive for . There is an explicit solution when . Nearly rotating at the same speed, Since both boundary conditions for are almost equal to one, one would expect the solution for to slightly deviate from unity. The corresponding scales for and can be derived from the self-similar equations. Therefore, To the first order approximation(neglecting ), the self-similar equation becomes with exact solutions These solution are similar to an Ekman layer solution. Non-Axisymmetric solutions The flow accepts a non-axisymmetric solution with axisymmetric boundary conditions discovered by Hewitt, Duck and Foster. Defining and the governing equations are with boundary conditions The solution is found to exist from numerical integration for . Bödewadt flow Bödewadt flow describes the flow when a stationary disk is placed in a rotating fluid. Two rotating coaxial disks This problem was addressed by George Keith Batchelor(1951), Keith Stewartson(1952) and many other researchers. Here the solution is not simple, because of the additional length scale imposed in the problem i.e., the distance between the two disks. In addition, the uniqueness and existence of a steady solution are also depend on the corresponding Reynolds number . Then the boundary conditions for the fluid are In terms of , the upper wall location is simply . Thus, instead of the scalings used before, it is convenient to introduce following transformation, so that the governing equations become with six boundary conditions and the pressure is given by Here boundary conditions are six because pressure is not known either at the top or bottom wall; is to be obtained as part of solution. For large Reynolds number , Batchelor argued that the fluid in the core would rotate at a constant velocity, flanked by two boundary layers at each disk for and there would be two uniform counter-rotating flow of thickness for . However, Stewartson predicted that for the fluid in the core would not rotate at , but just left with two boundary layers at each disk. It turns out, Stewartson predictions were correct (see Stewartson layer). There is also an exact solution if the two disks are rotating about different axes but for . Applications Von Kármán swirling flow finds its applications in wide range of fields, which includes rotating machines, filtering systems, computer storage devices, heat transfer and mass transfer applications, combustion-related problems, planetary formations, geophysical applications etc. References Bibliography Fluid dynamics Flow regimes
Von Kármán swirling flow
[ "Chemistry", "Engineering" ]
1,247
[ "Piping", "Chemical engineering", "Flow regimes", "Fluid dynamics" ]
53,596,792
https://en.wikipedia.org/wiki/FGLM%20algorithm
FGLM is one of the main algorithms in computer algebra, named after its designers, Faugère, Gianni, Lazard and Mora. They introduced their algorithm in 1993. The input of the algorithm is a Gröbner basis of a zero-dimensional ideal in the ring of polynomials over a field with respect to a monomial order, together with a second monomial order. As its output, it returns a Gröbner basis of the ideal with respect to the second ordering. The algorithm is a fundamental tool in computer algebra and has been implemented in most of the computer algebra systems. The complexity of FGLM is O(nD^3), where n is the number of variables of the polynomials and D is the degree of the ideal. There are several generalizations of and various applications for FGLM. References Computer algebra Commutative algebra Polynomials
FGLM algorithm
[ "Mathematics", "Technology" ]
176
[ "Polynomials", "Computer algebra", "Computational mathematics", "Fields of abstract algebra", "Computer science", "Commutative algebra", "Algebra" ]
30,006,925
https://en.wikipedia.org/wiki/G1%20and%20G1/S%20cyclins-%20budding%20yeast
Cln1, Cln2, and Cln3 are cyclin proteins expressed in the G1-phase of the cell cycle of budding yeast. Like other cyclins, they function by binding and activating cyclin-dependent kinase. They are responsible for initiating entry into a new mitotic cell cycle at Start. As described below, Cln3 is the primary regulator of this process during normal yeast growth, with the other two G1 cyclins performing their function upon induction by Cln3. However, Cln1 and Cln2 are also directly regulated by pathways sensing extracellular conditions, including the mating pheromone pathway. Cln3 Cln3 is thought to be the main regulator linking cell growth to the cell cycle. This is because it is the most upstream regulator of Start and because, unlike other cyclins, concentration of Cln3 does not oscillate much with the cell cycle (see Cln3). Rather, Cln3 activity is thought to increase gradually throughout the cycle in response to cell growth. Furthermore, Cln3 levels differ between mother and daughter cells, a difference that explains the asymmetry in cell cycle behavior between these two cell types. Cln3 regulation also responds to external signals, including stress signals that stop division. Cln1,2 The G1 cyclins CLN1 and CLN2, upon transcriptional activation by Cln3 in mid-G1, bind Cdk1 (Cdc28) to complete progression through Start. These cyclins oscillate during the cell cycle, rising in late G1 and falling in early S phase. The primary function of G1/S cyclin-Cdk complexes is to trigger progression through Start and initiate the processes leading to DNA replication, principally by shutting down the various braking systems that suppress S-phase Cdk activity in G1. G1/S cyclins also initiate other early cell-cycle events such as duplication of the spindle pole body in yeast. The rise of G1/S cyclins is accompanied by the appearance of the S cyclins (Clb5 and Clb6 in budding yeast), which form S cyclin-Cdk complexes that are directly responsible for stimulating DNA replication. Cln1 and Cln2 are involved in regulation of the cell cycle. Cln1 is closely related to Cln2 and has overlapping functions with Cln2. For instance, Cln1 and Cln2 repress the mating factor response pathway at Start. Additionally, both Cln1 and Cln2 are expressed in late G1 phase when they associate with Cdc28p to activate its kinase activity. Lastly, late G1-specific expression for both of them depends on the transcription factor complexes MBF and SBF. References Cell cycle
G1 and G1/S cyclins- budding yeast
[ "Biology" ]
586
[ "Cell cycle", "Cellular processes" ]
30,007,269
https://en.wikipedia.org/wiki/Pencil-beam%20scanning
Pencil beam scanning is the practice of steering a beam of radiation or charged particles across an object. It is often used in proton therapy to reduce unnecessary radiation exposure to surrounding non-cancerous cells. Ionizing radiation Photon or x-ray beams in intensity-modulated radiation therapy (IMRT) use pencil beam scanning to precisely target a tumor. Photon pencil beam scans are defined as the crossing of two beams to a fine point. Charged particles Several charged-particle devices used at proton therapy cancer centers use pencil beam scanning. The newer proton therapy machines use a pencil beam scanning technology. This technique is also called spot scanning. The Paul Scherrer Institute was the developer of spot scanning. Intensity Modulated Proton Therapy Varian's IMPT system uses all pencil-beam controlled protons, where the beam intensity can also be controlled at this small level. This can be done by going back and forth over a previously radiated area during the same radiation session. See also Pencil (mathematics) Pencil (optics) Radiation treatment planning mean free path Monte Carlo method for photon transport Hybrid theory for photon transport in tissue Diffusion theory Monte Carlo method Varian Medical Systems References Medical physics Radiobiology Radiation therapy
Pencil-beam scanning
[ "Physics", "Chemistry", "Biology" ]
232
[ "Radiobiology", "Radioactivity", "Applied and interdisciplinary physics", "Medical physics" ]
30,008,292
https://en.wikipedia.org/wiki/Descartes%20snark
In the mathematical field of graph theory, a Descartes snark is an undirected graph with 210 vertices and 315 edges. It is a snark, a graph with three edges at each vertex that cannot be partitioned into three perfect matchings. It was first discovered by William Tutte in 1948 under the pseudonym Blanche Descartes. A Descartes snark is obtained from the Petersen graph by replacing each vertex with a nonagon and each edge with a particular graph closely related to the Petersen graph. Because there are multiple ways to perform this procedure, there are multiple Descartes snarks. References Graph families
Descartes snark
[ "Mathematics" ]
133
[ "Graph theory stubs", "Mathematical relations", "Graph theory" ]
30,014,576
https://en.wikipedia.org/wiki/A-frame%20complex
In organometallic chemistry, A-frame complexes are coordination compounds that contain two bridging bidentate ligands and a single atom bridge. They have the formula , where bd is a bidentate ligand like dppm, and X and L are a wide variety of ligands. The term was coined to describe products arising from the oxidative addition to Rh(I)Rh(I) complexes. Scope of compounds A-frame complexes typically consist of a pair of square-planar metal centres. Consequently, this family of complexes is found for those metals that tend to adopt that geometry, Rh, Ir, Ni, Pd, Pt, and Au. In addition to dppm, the analogous tetramethyldiphosphine (dmpm) also forms such complexes as do some related ligands, such as diphenyl-2-pyridylphosphine. The bridging site can be occupied by a variety of ligands, including CO, SO, NO, CH2, hydride, and chloride. Preparation A frame complexes are often produced by the addition of reagents of the type AX2 to low valent complexes of dppm: 2 M(0) + AX2 + 2 dppm → M2(μ-A)(dppm)2X2 Alternatively the group "A" can be added across a preformed M-M bond, as indicated by the oxidative addition of elemental sulfur: Pd2(dppm)2Cl2 + S → Pd2(μ-S)(dppm)2Cl2 References Coordination complexes
A-frame complex
[ "Chemistry" ]
342
[ "Coordination chemistry", "Coordination complexes" ]
40,674,864
https://en.wikipedia.org/wiki/Translation%20operator%20%28quantum%20mechanics%29
In quantum mechanics, a translation operator is defined as an operator which shifts particles and fields by a certain amount in a certain direction. It is a special case of the shift operator from functional analysis. More specifically, for any displacement vector , there is a corresponding translation operator that shifts particles and fields by the amount . For example, if acts on a particle located at position , the result is a particle at position . Translation operators are unitary. Translation operators are closely related to the momentum operator; for example, a translation operator that moves by an infinitesimal amount in the direction has a simple relationship to the -component of the momentum operator. Because of this relationship, conservation of momentum holds when the translation operators commute with the Hamiltonian, i.e. when laws of physics are translation-invariant. This is an example of Noether's theorem. Action on position eigenkets and wavefunctions The translation operator moves particles and fields by the amount . Therefore, if a particle is in an eigenstate of the position operator (i.e., precisely located at the position ), then after acts on it, the particle is at the position : An alternative (and equivalent) way to describe what the translation operator determines is based on position-space wavefunctions. If a particle has a position-space wavefunction , and acts on the particle, the new position-space wavefunction is defined by This relation is easier to remember as which can be read as: "The value of the new wavefunction at the new point equals the value of the old wavefunction at the old point". Here is an example showing that these two descriptions are equivalent. The state corresponds to the wavefunction (where is the Dirac delta function), while the state corresponds to the wavefunction These indeed satisfy Momentum as generator of translations In introductory physics, momentum is usually defined as mass times velocity. However, there is a more fundamental way to define momentum, in terms of translation operators. This is more specifically called canonical momentum, and it is usually but not always equal to mass times velocity. One notable exception pertains to a charged particle in a magnetic field in which the canonical momentum includes both the usual momentum and a second terms proportional to the magnetic vector potential. This definition of momentum is especially important because the law of conservation of momentum applies only to canonical momentum, and is not universally valid if momentum is defined instead as mass times velocity (the so-called "kinetic momentum"), for reasons explained below. The (canonical) momentum operator is defined as the gradient of the translation operators near the origin: where is the reduced Planck constant. For example, what is the result when the operator acts on a quantum state? To find the answer, translate the state by an infinitesimal amount in the -direction, calculate the rate that the state is changing, and multiply the result by . For example, if a state does not change at all when it is translated an infinitesimal amount the -direction, then its -component of momentum is 0. More explicitly, is a vector operator (i.e. a vector operator consisting of three operators ), components is given by: where is the identity operator and is the unit vector in the -direction. ( and are defined analogously.) The equation above is the most general definition of . 
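A numerical check of the generator relationship, assuming NumPy; the grid size, box length and Gaussian test state are arbitrary illustrative choices. The momentum operator is applied in the Fourier (momentum) basis, the translation operator is built as exp(−i p a/ħ), and the sketch verifies that it shifts the wavepacket by a while leaving the momentum expectation and the norm unchanged:

```python
import numpy as np

hbar = 1.0
N, L = 512, 40.0                                   # grid points and box length
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
p = hbar * 2 * np.pi * np.fft.fftfreq(N, d=dx)     # momentum values on the grid

def translate(psi, a):
    """Apply T(a) = exp(-i p a / hbar) in the momentum (Fourier) basis."""
    return np.fft.ifft(np.exp(-1j * p * a / hbar) * np.fft.fft(psi))

# Gaussian wavepacket centered at x0 with mean wavenumber k0.
x0, k0, sigma = -5.0, 2.0, 1.0
psi = np.exp(-(x - x0) ** 2 / (2 * sigma**2) + 1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

a = 3.0
psi_shifted = translate(psi, a)

# <x> moves by a, <p> is unchanged, and the norm is preserved (T is unitary).
mean_x = lambda f: np.sum(x * np.abs(f) ** 2) * dx
mean_p = lambda f: (np.sum(np.conj(f) * np.fft.ifft(p * np.fft.fft(f))) * dx).real
print(mean_x(psi_shifted) - mean_x(psi))        # ~ 3.0
print(mean_p(psi_shifted) - mean_p(psi))        # ~ 0.0
print(np.sum(np.abs(psi_shifted) ** 2) * dx)    # ~ 1.0
```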
In the special case of a single particle with wavefunction , can be written in a more specific and useful form. In one dimension: While in three dimensions, as an operator acting on position-space wavefunctions. This is the familiar quantum-mechanical expression for , but we have derived it here from a more basic starting point. We have now defined in terms of translation operators. It is also possible to write a translation operator as a function of . The method consists of considering an infinitesimal action on a wavefunction, and expanding the transformed wavefunction as a sum of the initial wavefunction and a first-order perturbative correction; and then expressing a finite translation as a huge number of consecutive tiny translations, and then use the fact that infinitesimal translations can be written in terms of . From what has been stated previously, we know from above that if acts on that the result is The right-hand side may be written as a Taylor series We suppose that for an infinitesimal translation that the higher-order terms in the series become successively smaller. From which we write With this preliminary result, we proceed to write the an infinite amount of infinitesimal actions as The right-hand side is precisely a series for an exponential. Hence, where is the operator exponential and the right-hand side is the Taylor series expansion. For very small , one can use the approximation: The operator equation is an operator version of Taylor's theorem; and is, therefore, only valid under caveats about being an analytic function. Concentrating on the operator part, it shows that is an infinitesimal transformation, generating translations of the real line via the exponential. It is for this reason that the momentum operator is referred to as the generator of translation. A nice way to double-check that these relations are correct is to do a Taylor expansion of the translation operator acting on a position-space wavefunction. Expanding the exponential to all orders, the translation operator generates exactly the full Taylor expansion of a test function: So every translation operator generates exactly the expected translation on a test function if the function is analytic in some domain of the complex plane. Properties Successive translations In other words, if particles and fields are moved by the amount and then by the amount , overall they have been moved by the amount . For a mathematical proof, one can look at what these operators do to a particle in a position eigenstate: Since the operators and have the same effect on every state in an eigenbasis, it follows that the operators are equal. Identity translation The translation , i.e. a translation by a distance of 0 is the same as the identity operator which leaves all states unchanged. Inverse The translation operators are invertible, and their inverses are: This follows from the "successive translations" property above, and the identity translation. Translation operators commute with each other because both sides are equal to . Translation operators are unitary To show that translation operators are unitary, we first must prove that the momentum operator is Hermitian. Then, we can prove that the translation operator meets two criteria that are necessary to be a unitary operator. To begin with, the linear momentum operator is the rule that assigns to any in the domain the one vector in the codomain is. Since therefore the linear momentum operator is, in fact, a Hermitian operator. 
Detailed proofs of this can be found in many textbooks and online (e.g. https://physics.stackexchange.com/a/832341/194354). Having in hand that the momentum operator is Hermitian, we can prove that the translation operator is a unitary operator. First, it must shown that translation operator is a bounded operator. It is sufficient to state that for all that Second, it must be (and can be) shown that A detailed proof can be found in reference https://math.stackexchange.com/a/4990451/309209. Translation Operator operating on a bra A translation operator operating on a bra in the position eigenbasis gives: Splitting a translation into its components According to the "successive translations" property above, a translation by the vector can be written as the product of translations in the component directions: where are unit vectors. Commutator with position operator Suppose is an eigenvector of the position operator with eigenvalue . We have while Therefore, the commutator between a translation operator and the position operator is: This can also be written (using the above properties) as: where is the identity operator. Commutator with momentum operator Since translation operators all commute with each other (see above), and since each component of the momentum operator is a sum of two scaled translation operators (e.g. ), it follows that translation operators all commute with the momentum operator, i.e. This commutation with the momentum operator holds true generally even if the system is not isolated where energy or momentum may not be conserved. Translation group The set of translation operators for all , with the operation of multiplication defined as the result of successive translations (i.e. function composition), satisfies all the axioms of a group: Closure When two translations are done consecutively, the result is a single different translation. (See "successive translations" property above.) Existence of identity A translation by the vector is the identity operator, i.e. the operator that has no effect on anything. It functions as the identity element of the group. Every element has an inverse As proven above, any translation operator is the inverse of the reverse translation . Associativity This is the claim that . It is true by definition, as is the case for any group based on function composition. Therefore, the set of translation operators for all forms a group. Since there are continuously infinite number of elements, the translation group is a continuous group. Moreover, the translation operators commute among themselves, i.e. the product of two translation (a translation followed by another) does not depend on their order. Therefore, the translation group is an abelian group. The translation group acting on the Hilbert space of position eigenstates is isomorphic to the group of vector additions in the Euclidean space. Expectation values of position and momentum in the translated state Consider a single particle in one dimension. Unlike classical mechanics, in quantum mechanics a particle neither has a well-defined position nor a well-defined momentum. In the quantum formulation, the expectation values play the role of the classical variables. For example, if a particle is in a state , then the expectation value of the position is , where is the position operator. If a translation operator acts on the state , creating a new state then the expectation value of position for is equal to the expectation value of position for plus the vector . 
This result is consistent with what you would expect from an operation that shifts the particle by that amount. On the other hand, when the translation operator acts on a state, the expectation value of the momentum is not changed. This can be proven in a similar way as the above, but using the fact that translation operators commute with the momentum operator. This result is again consistent with expectations: translating a particle does not change its velocity or mass, so its momentum should not change. Translational invariance In quantum mechanics, the Hamiltonian is the operator corresponding to the total energy of a system. For any in the domain, let the one vector in the codomain be a newly translated state. If then a Hamiltonian is said to be invariant. Since the translation operator is a unitary operator, the antecedent can also be written as Since this hold for any in the domain, the implication is that or that Thus, if Hamiltonian commutes with the translation operator, then the Hamiltonian is invariant under translation. Loosely speaking, if we translate the system, then measure its energy, then translate it back, it amounts to the same thing as just measuring its energy directly. Continuous translational symmetry First we consider the case where all the translation operators are symmetries of the system. Second we consider the case where the translation operator is not a symmetries of the system. As we will see, only in the first case does the conservation of momentum occur. For example, let be the Hamiltonian describing all particles and fields in the universe, and let be the continuous translation operator that shifts all particles and fields in the universe simultaneously by the same amount. If we assert the a priori axiom that this translation is a continuous symmetry of the Hamiltonian (i.e., that is independent of location), then, as a consequence, conservation of momentum is universally valid. On the other hand, perhaps and refer to just one particle. Then the translation operators are exact symmetries only if the particle is alone in a vacuum. Correspondingly, the momentum of a single particle is not usually conserved (it changes when the particle bumps into other objects or is otherwise deflected by the potential energy fields of the other particles), but it is conserved if the particle is alone in a vacuum. Since the Hamiltonian operator commutes with the translation operator when the Hamiltonian is an invariant with respect to translation, therefore Further, the Hamiltonian operator also commutes with the infinitesimal translation operator In summary, whenever the Hamiltonian for a system remains invariant under continuous translation, then the system has conservation of momentum, meaning that the expectation value of the momentum operator remains constant. This is an example of Noether's theorem. Discrete translational symmetry There is another special case where the Hamiltonian may be translationally invariant. This type of translational symmetry is observed whenever the potential is periodic: In general, the Hamiltonian is not invariant under any translation represented by with arbitrary, where has the property: and, (where is the identity operator; see proof above). But, whenever coincides with the period of the potential , Since the kinetic energy part of the Hamiltonian is already invariant under any arbitrary translation, being a function of , the entire Hamiltonian satisfies, Now, the Hamiltonian commutes with translation operator, i.e. 
they can be simultaneously diagonalised. Therefore, the Hamiltonian is invariant under such translation (which no longer remains continuous). The translation becomes discrete with the period of the potential. Discrete translation in periodic potential: Bloch's theorem The ions in a perfect crystal are arranged in a regular periodic array. So we are led to the problem of an electron in a potential with the periodicity of the underlying Bravais lattice for all Bravais lattice vectors However, perfect periodicity is an idealisation. Real solids are never absolutely pure, and in the neighbourhood of the impurity atoms the solid is not the same as elsewhere in the crystal. Moreover, the ions are not in fact stationary, but continually undergo thermal vibrations about their equilibrium positions. These destroy the perfect translational symmetry of a crystal. To deal with this type of problems the main problem is artificially divided in two parts: (a) the ideal fictitious perfect crystal, in which the potential is genuinely periodic, and (b) the effects on the properties of a hypothetical perfect crystal of all deviations from perfect periodicity, treated as small perturbations. Although, the problem of electrons in a solid is in principle a many-electron problem, in independent electron approximation each electron is subjected to the one-electron Schrödinger equation with a periodic potential and is known as Bloch electron (in contrast to free particles, to which Bloch electrons reduce when the periodic potential is identically zero.) For each Bravais lattice vector we define a translation operator which, when operating on any function shifts the argument by : Since all translations form an Abelian group, the result of applying two successive translations does not depend on the order in which they are applied, i.e. In addition, as the Hamiltonian is periodic, we have, Hence, the for all Bravais lattice vectors and the Hamiltonian form a set of commutating operators. Therefore, the eigenstates of can be chosen to be simultaneous eigenstates of all the : The eigenvalues of the translation operators are related because of the condition: We have, And, Therefore, it follows that, Now let the 's be the three primitive vector for the Bravais lattice. By a suitable choice of , we can always write in the form If is a general Bravais lattice vector, given by it follows then, Substituting one gets, where and the 's are the reciprocal lattice vectors satisfying the equation Therefore, one can choose the simultaneous eigenstates of the Hamiltonian and so that for every Bravais lattice vector , So, This result is known as Bloch's Theorem. Time evolution and translational invariance In the passive transformation picture, translational invariance requires, It follows that where is the unitary time evolution operator. When the Hamiltonian is time independent, If the Hamiltonian is time dependent, the above commutation relation is satisfied if or commutes with for all t. Example Suppose at two observers A and B prepare identical systems at and (fig. 1), respectively. If be the state vector of the system prepared by A, then the state vector of the system prepared by B will be given by Both the systems look identical to the observers who prepared them. After time , the state vectors evolve into and respectively. Using the above-mentioned commutation relation, the later may be written as, which is just the translated version of the system prepared by A at time . 
Therefore, the two systems, which differed only by a translation at , differ only by the same translation at any instant of time. The time evolution of both the systems appear the same to the observers who prepared them. It can be concluded that the translational invariance of Hamiltonian implies that the same experiment repeated at two different places will give the same result (as seen by the local observers). See also Bloch state Group Periodic function Shift operator Symmetries in quantum mechanics Time translation symmetry Translational symmetry References Quantum mechanics Unitary operators
Translation operator (quantum mechanics)
[ "Physics" ]
3,581
[ "Quantum operators", "Quantum mechanics" ]
40,677,335
https://en.wikipedia.org/wiki/Resonances%20in%20scattering%20from%20potentials
In quantum mechanics, resonance cross section occurs in the context of quantum scattering theory, which deals with studying the scattering of quantum particles from potentials. The scattering problem deals with the calculation of flux distribution of scattered particles/waves as a function of the potential, and of the state (characterized by conservation of momentum/energy) of the incident particle. For a free quantum particle incident on the potential, the plane wave solution to the time-independent Schrödinger wave equation is: For one-dimensional problems, the transmission coefficient is of interest. It is defined as: where is the probability current density. This gives the fraction of incident beam of particles that makes it through the potential. For three-dimensional problems, one would calculate the scattering cross-section , which, roughly speaking, is the total area of the incident beam which is scattered. Another quantity of relevance is the partial cross-section, , which denotes the scattering cross section for a partial wave of a definite angular momentum eigenstate. These quantities naturally depend on , the wave-vector of the incident wave, which is related to its energy by: The values of these quantities of interest, the transmission coefficient (in case of one dimensional potentials), and the partial cross-section show peaks in their variation with the incident energy . These phenomena are called resonances. One-dimensional case Mathematical description A one-dimensional finite square potential is given by The sign of determines whether the square potential is a well or a barrier. To study the phenomena of resonance, the time-independent Schrödinger equation for a stationary state of a massive particle with energy is solved: The wave function solutions for the three regions are Here, and are the wave numbers in the potential-free region and within the potential respectively: To calculate , a coefficient in the wave function is set as , which corresponds to the fact that there is no wave incident on the potential from the right. Imposing the condition that the wave function and its derivative should be continuous at the well/barrier boundaries and , the relations between the coefficients are found, which allows to be found as: It follows that the transmission coefficient reaches its maximum value of 1 when: for any integer value . This is the resonance condition, which leads to the peaking of to its maxima, called resonance. Physical picture (Standing de Broglie Waves and the Fabry-Pérot Etalon) From the above expression, resonance occurs when the distance covered by the particle in traversing the well and back () is an integer multiple of the De Broglie wavelength of a particle inside the potential (). For , reflections at potential discontinuities are not accompanied by any phase change. Therefore, resonances correspond to the formation of standing waves within the potential barrier/well. At resonance, the waves incident on the potential at and the waves reflecting between the walls of the potential are in phase, and reinforce each other. Far from resonances, standing waves can't be formed. Then, waves reflecting between both walls of the potential (at and ) and the wave transmitted through are out of phase, and destroy each other by interference. The physics is similar to that of transmission in Fabry–Pérot interferometer in optics, where the resonance condition and functional form of the transmission coefficient are the same. 
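Carrying out the algebra sketched above for the finite square potential yields the standard textbook transmission coefficient T(E) = 1 / (1 + V0^2 sin^2(k2 L) / (4 E (E − V0))), which equals 1 exactly when k2 L = nπ. A minimal sketch, assuming NumPy and Matplotlib and using arbitrary well parameters in natural units, plots T against the incident energy and marks the predicted resonance energies:

```python
import numpy as np
import matplotlib.pyplot as plt

hbar = m = 1.0          # natural units (arbitrary choice)
V0, L = -10.0, 4.0      # potential value inside (negative => well) and width

E = np.linspace(0.01, 20.0, 2000)       # incident energies of scattering states
k2 = np.sqrt(2 * m * (E - V0)) / hbar   # wave number inside the potential
T = 1.0 / (1.0 + V0**2 * np.sin(k2 * L) ** 2 / (4 * E * (E - V0)))

plt.plot(E, T)
# Mark the predicted resonance energies, where k2 * L = n * pi.
n = np.arange(1, 30)
E_res = (n * np.pi * hbar / L) ** 2 / (2 * m) + V0
E_res = E_res[(E_res > 0) & (E_res < E.max())]
plt.plot(E_res, np.ones_like(E_res), 'o')
plt.xlabel('incident energy E')
plt.ylabel('transmission coefficient T')
plt.show()
```

The peaks sharpen at low incident energy and flatten at high energy, in line with the behaviour of the resonance curves described below.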
Nature of resonance curves The transmission coefficient swings between its maximum of 1 and minimum of as a function of the length of square well () with a period of . The minima of the transmission tend to in the limit of large energy , resulting in more shallow resonances, and inversely tend to in the limit of low energy , resulting in sharper resonances. This is demonstrated in plots of transmission coefficient against incident particle energy for fixed values of the shape factor, defined as See also Resonance (particle physics) Levinson's theorem Feshbach–Fano partitioning References Quantum mechanics
Resonances in scattering from potentials
[ "Physics" ]
787
[ "Theoretical physics", "Quantum mechanics" ]
40,679,734
https://en.wikipedia.org/wiki/IEC%2062061
IEC/EN 62061, ”Safety of machinery: Functional safety of electrical, electronic and programmable electronic control systems”, is the machinery specific implementation of IEC/EN 61508. It provides requirements that are applicable to the system level design of all types of machinery safety-related electrical control systems and also for the design of non-complex subsystems or devices. The risk assessment results in a risk reduction strategy which in turn, identifies the need for safety-related control functions. These functions must be documented and must include: Functional requirements specification Safety integrity requirements specification The functional requirements include details like frequency of operation, required response time, operating modes, duty cycles, operating environment, and fault reaction functions. The safety integrity requirements are expressed in levels called safety integrity level (SIL). Depending on the complexity of the system, some or all of the elements in Table 14 must be considered to determine whether the system design meets the required SIL. External links IEC 62061 at International Electrotechnical Commission Electrical standards 62061 Safety
IEC 62061
[ "Physics", "Technology" ]
211
[ "Electrical standards", "Electrical systems", "Computer standards", "IEC standards", "Physical systems" ]
28,479,408
https://en.wikipedia.org/wiki/Wirtinger%20derivatives
In complex analysis of one and several complex variables, Wirtinger derivatives (sometimes also called Wirtinger operators), named after Wilhelm Wirtinger who introduced them in 1927 in the course of his studies on the theory of functions of several complex variables, are partial differential operators of the first order which behave in a very similar manner to the ordinary derivatives with respect to one real variable, when applied to holomorphic functions, antiholomorphic functions or simply differentiable functions on complex domains. These operators permit the construction of a differential calculus for such functions that is entirely analogous to the ordinary differential calculus for functions of real variables. Historical notes Early days (1899–1911): the work of Henri Poincaré Wirtinger derivatives were used in complex analysis at least as early as in the paper , as briefly noted by and by . In the third paragraph of his 1899 paper, Henri Poincaré first defines the complex variable in and its complex conjugate as follows Then he writes the equation defining the functions he calls biharmonique, previously written using partial derivatives with respect to the real variables with ranging from 1 to , exactly in the following way This implies that he implicitly used below: to see this it is sufficient to compare equations 2 and 2' of . Apparently, this paper was not noticed by early researchers in the theory of functions of several complex variables: in the papers of , (and ) and of all fundamental partial differential operators of the theory are expressed directly by using partial derivatives respect to the real and imaginary parts of the complex variables involved. In the long survey paper by (first published in 1913), partial derivatives with respect to each complex variable of a holomorphic function of several complex variables seem to be meant as formal derivatives: as a matter of fact when Osgood expresses the pluriharmonic operator and the Levi operator, he follows the established practice of Amoroso, Levi and Levi-Civita. The work of Dimitrie Pompeiu in 1912 and 1913: a new formulation According to , a new step in the definition of the concept was taken by Dimitrie Pompeiu: in the paper , given a complex valued differentiable function (in the sense of real analysis) of one complex variable defined in the neighbourhood of a given point he defines the areolar derivative as the following limit where is the boundary of a disk of radius entirely contained in the domain of definition of i.e. his bounding circle. This is evidently an alternative definition of Wirtinger derivative respect to the complex conjugate variable: it is a more general one, since, as noted a by , the limit may exist for functions that are not even differentiable at According to , the first to identify the areolar derivative as a weak derivative in the sense of Sobolev was Ilia Vekua. In his following paper, uses this newly defined concept in order to introduce his generalization of Cauchy's integral formula, the now called Cauchy–Pompeiu formula. 
The work of Wilhelm Wirtinger The first systematic introduction of Wirtinger derivatives seems due to Wilhelm Wirtinger in the paper in order to simplify the calculations of quantities occurring in the theory of functions of several complex variables: as a result of the introduction of these differential operators, the form of all the differential operators commonly used in the theory, like the Levi operator and the Cauchy–Riemann operator, is considerably simplified and consequently easier to handle. The paper is deliberately written from a formal point of view, i.e. without giving a rigorous derivation of the properties deduced. Formal definition Despite their ubiquitous use, it seems that there is no text listing all the properties of Wirtinger derivatives: however, fairly complete references are the short course on multidimensional complex analysis by , the monograph of , and the monograph of which are used as general references in this and the following sections. Functions of one complex variable Consider the complex plane (in a sense of expressing a complex number for real numbers and ). The Wirtinger derivatives are defined as the following linear partial differential operators of first order: Clearly, the natural domain of definition of these partial differential operators is the space of functions on a domain but, since these operators are linear and have constant coefficients, they can be readily extended to every space of generalized functions. Functions of n > 1 complex variables Consider the Euclidean space on the complex field The Wirtinger derivatives are defined as the following linear partial differential operators of first order: As for Wirtinger derivatives for functions of one complex variable, the natural domain of definition of these partial differential operators is again the space of functions on a domain and again, since these operators are linear and have constant coefficients, they can be readily extended to every space of generalized functions. Relation with complex differentiation When a function is complex differentiable at a point, the Wirtinger derivative agrees with the complex derivative . This follows from the Cauchy-Riemann equations. For the complex function which is complex differentiable where the third equality uses the first definition of Wirtinger's derivatives for and . It can also be done through actual application of the Cauchy-Riemann equations. The final equality comes from it being one of four equivalent formulations of the complex derivative through partial derivatives of the components. The second Wirtinger derivative is also related with complex differentiation; is equivalent to the Cauchy-Riemann equations in a complex form. Basic properties In the present section and in the following ones it is assumed that is a complex vector and that where are real vectors, with n ≥ 1: also it is assumed that the subset can be thought of as a domain in the real euclidean space or in its isomorphic complex counterpart All the proofs are easy consequences of and and of the corresponding properties of the derivatives (ordinary or partial). Linearity If and are complex numbers, then for the following equalities hold Product rule If then for the product rule holds This property implies that Wirtinger derivatives are derivations from the abstract algebra point of view, exactly like ordinary derivatives are. 
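Written out explicitly in real coordinates — a standard restatement of the one-variable definitions given above, with z = x + iy — the Wirtinger operators are

\[
\frac{\partial}{\partial z} = \frac{1}{2}\left(\frac{\partial}{\partial x} - i\,\frac{\partial}{\partial y}\right),
\qquad
\frac{\partial}{\partial \bar z} = \frac{1}{2}\left(\frac{\partial}{\partial x} + i\,\frac{\partial}{\partial y}\right).
\]

With these formulas the remarks above can be checked directly: for a differentiable function f, the Cauchy–Riemann equations are equivalent to \(\partial f/\partial \bar z = 0\), and in that case \(\partial f/\partial z\) coincides with the complex derivative \(f'(z)\).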
Chain rule This property takes two different forms respectively for functions of one and several complex variables: for the n > 1 case, to express the chain rule in its full generality it is necessary to consider two domains and and two maps and having natural smoothness requirements. Functions of one complex variable If and then the chain rule holds Functions of n > 1 complex variables If and then for the following form of the chain rule holds Conjugation If then for the following equalities hold See also CR–function Dolbeault complex Dolbeault operator Pluriharmonic function Notes References Historical references . "On a boundary value problem" (free translation of the title) is the first paper where a set of (fairly complicate) necessary and sufficient conditions for the solvability of the Dirichlet problem for holomorphic functions of several variables is given. . . "Areolar derivative and functions of bounded variation" (free English translation of the title) is an important reference paper in the theory of areolar derivatives. . "Studies on essential singular points of analytic functions of two or more complex variables" (English translation of the title) is an important paper in the theory of functions of several complex variables, where the problem of determining what kind of hypersurface can be the boundary of a domain of holomorphy. . "On the hypersurfaces of the 4-dimensional space that can be the boundary of the domain of existence of an analytic function of two complex variables" (English translation of the title) is another important paper in the theory of functions of several complex variables, investigating further the theory started in . . "On the functions of two or more complex variables" (free English translation of the title) is the first paper where a sufficient condition for the solvability of the Cauchy problem for holomorphic functions of several complex variables is given. . , available at DigiZeitschriften. . . . , available at DigiZeitschriften. In this important paper, Wirtinger introduces several important concepts in the theory of functions of several complex variables, namely Wirtinger's derivatives and the tangential Cauchy-Riemann condition. Scientific references . Introduction to complex analysis is a short course in the theory of functions of several complex variables, held in February 1972 at the Centro Linceo Interdisciplinare di Scienze Matematiche e Loro Applicazioni "Beniamino Segre". . . . . . . . . "Elementary introduction to the theory of functions of complex variables with particular regard to integral representations" (English translation of the title) are the notes form a course, published by the Accademia Nazionale dei Lincei, held by Martinelli when he was "Professore Linceo". . A textbook on complex analysis including many historical notes on the subject. . Notes from a course held by Francesco Severi at the Istituto Nazionale di Alta Matematica (which at present bears his name), containing appendices of Enzo Martinelli, Giovanni Battista Rizza and Mario Benedicty. An English translation of the title reads as:-"Lectures on analytic functions of several complex variables – Lectured in 1956–57 at the Istituto Nazionale di Alta Matematica in Rome". Complex analysis Differential operators Mathematical analysis
Wirtinger derivatives
[ "Mathematics" ]
1,923
[ "Mathematical analysis", "Differential operators" ]
28,487,427
https://en.wikipedia.org/wiki/Witsenhausen%27s%20counterexample
Witsenhausen's counterexample, shown in the figure below, is a deceptively simple toy problem in decentralized stochastic control. It was formulated by Hans Witsenhausen in 1968. It is a counterexample to a natural conjecture that one can generalize a key result of centralized linear–quadratic–Gaussian control systems—that in a system with linear dynamics, Gaussian disturbance, and quadratic cost, affine (linear) control laws are optimal—to decentralized systems. Witsenhausen constructed a two-stage linear quadratic Gaussian system where two decisions are made by decision makers with decentralized information and showed that for this system, there exist nonlinear control laws that outperform all linear laws. The problem of finding the optimal control law remains unsolved. Statement of the counterexample The statement of the counterexample is simple: two controllers attempt to control the system by attempting to bring the state close to zero in exactly two time steps. The first controller observes the initial state There is a cost on the input of the first controller, and a cost on the state after the input of the second controller. The input of the second controller is free, but it is based on noisy observations of the state after the first controller's input. The second controller cannot communicate with the first controller and thus cannot observe either the original state or the input of the first controller. Thus the system dynamics are with the second controller's observation equation The objective is to minimize an expected cost function, where the expectation is taken over the randomness in the initial state and the observation noise , which are distributed independently. The observation noise is assumed to be distributed in a Gaussian manner, while the distribution of the initial state value differs depending on the particular version of the problem. The problem is to find control functions that give at least as good a value of the objective function as do any other pair of control functions. Witsenhausen showed that the optimal functions and cannot be linear. Specific results of Witsenhausen Witsenhausen obtained the following results: An optimum exists (Theorem 1). The optimal control law of the first controller is such that (Lemma 9). The exact solution is given for the case in which both controllers are constrained to be linear (Lemma 11). If has a Gaussian distribution and if at least one of the controllers is constrained to be linear, then it is optimal for both controllers to be linear (Lemma 13). The exact nonlinear control laws are given for the case in which has a two-point symmetric distribution (Lemma 15). If has a Gaussian distribution, for some values of the preference parameter a non-optimal nonlinear solution for the control laws is given which gives a lower value for the expected cost function than does the best linear pair of control laws (Theorem 2). The significance of the problem The counterexample lies at the intersection of control theory and information theory. Due to its hardness, the problem of finding the optimal control law has also received attention from the theoretical computer science community. The importance of the problem was reflected upon in the 47th IEEE Conference on Decision and Control (CDC) 2008, Cancun, Mexico, where an entire session was dedicated to understanding the counterexample 40 years after it was first formulated. 
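In a commonly used notation, adopted here for concreteness, the two-stage problem described above reads

\[
x_1 = x_0 + u_1,\qquad u_1=\gamma_1(x_0),\qquad
y = x_1 + v,\qquad u_2=\gamma_2(y),\qquad x_2 = x_1 - u_2,
\]

with the initial state \(x_0\) and the observation noise \(v\sim\mathcal N(0,1)\) independent, and with the objective

\[
\min_{\gamma_1,\gamma_2}\;\mathbb E\!\left[\,k^{2}u_1^{2} + x_2^{2}\,\right],
\]

where \(k>0\) weights the cost of the first controller's input. In the Gaussian version of the problem \(x_0\sim\mathcal N(0,\sigma^{2})\), and Witsenhausen's Theorem 2 exhibits, for suitable values of \(k\) and \(\sigma\), nonlinear (signalling-type) strategies \(\gamma_1,\gamma_2\) that achieve a strictly lower expected cost than the best affine pair.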
The problem is of conceptual significance in decentralized control because it shows that it is important for the controllers to communicate with each other implicitly in order to minimize the cost. This suggests that control actions in decentralized control may have a dual role: those of control and communication. The hardness of the problem The hardness of the problem is attributed to the fact that information of the second controller depends on the decisions of the first controller. Variations considered by Tamer Basar show that the hardness is also because of the structure of the performance index and the coupling of different decision variables. It has also been shown that problems of the spirit of Witsenhausen's counterexample become simpler if the transmission delay along an external channel that connects the controllers is smaller than the propagation delay in the problem. However, this result requires the channels to be perfect and instantaneous, and hence is of limited applicability. In practical situations, the channel is always imperfect, and thus one can not assume that decentralized control problems are simple in presence of external channels. A justification of the failure of attempts that discretize the problem came from the computer science literature: Christos Papadimitriou and John Tsitsiklis showed that the discrete version of the counterexample is NP-complete. Attempts at obtaining a solution A number of numerical attempts have been made to solve the counterexample. Focusing on a particular choice of problem parameters , researchers have obtained strategies by discretization and using neural networks. Further research (notably, the work of Yu-Chi Ho, and the work of Li, Marden and Shamma) has obtained slightly improved costs for the same parameter choice. The best known numerical results for a variety of parameters, including the one mentioned previously, are obtained by a local search algorithm proposed by S.-H. Tseng and A. Tang in 2017. The first provably approximately optimal strategies appeared in 2010 (Grover, Park, Sahai) where information theory is used to understand the communication in the counterexample. The optimal solution of the counterexample is still an open problem. References Control theory Stochastic control
Witsenhausen's counterexample
[ "Mathematics" ]
1,126
[ "Applied mathematics", "Control theory", "Dynamical systems" ]
28,492,034
https://en.wikipedia.org/wiki/Niobium%28IV%29%20fluoride
Niobium(IV) fluoride is a chemical compound with the formula NbF4. It is a nonvolatile black solid. Properties Niobium(IV) fluoride strongly absorbs water vapor and turns into another compound in moist air. It reacts with water to form a brown solution and a brown precipitate whose components are unknown. It is stable between 275 °C and 325 °C when heated in a vacuum. However, it rapidly disproportionates at 350 °C to form niobium(V) fluoride and niobium(III) fluoride: 2 NbF4 → NbF5 + NbF3 (at 350 °C) Structure Niobium(IV) fluoride adopts a crystal structure analogous to that of tin(IV) fluoride, in which each niobium atom is surrounded by six fluorine atoms forming an octahedron. Of the six fluorine atoms surrounding a single niobium atom, four bridge to adjacent octahedra, leading to a structure of octahedra connected in layers. References Niobium(IV) compounds Fluorides Metal halides
Niobium(IV) fluoride
[ "Chemistry" ]
220
[ "Inorganic compounds", "Fluorides", "Metal halides", "Salts" ]
37,917,585
https://en.wikipedia.org/wiki/N%C3%A9el%20effect
In superparamagnetism (a form of magnetism), the Néel effect appears when a superparamagnetic material inside a conducting coil is subjected to magnetic fields at varying frequencies. The non-linearity of the superparamagnetic material acts as a frequency mixer, and the voltage measured at the coil terminals contains several frequency components: the initial frequencies and certain of their linear combinations. This frequency shift of the field to be measured allows the detection of a direct-current field with a standard coil. History In 1949 the French physicist Louis Néel (1904–2000) discovered that finely divided ferromagnetic materials lose their hysteresis below a certain particle size; this phenomenon is known as superparamagnetism. The magnetization of these materials depends on the applied field in a highly non-linear way. The magnetization curve is well described by the Langevin function, but for weak fields it can be written simply as: , where is the susceptibility at zero field and is known as the Néel coefficient. The Néel coefficient reflects the non-linearity of superparamagnetic materials in low fields. Theory Consider a coil of turns and surface area , carrying an excitation current and immersed in a magnetic field collinear with the axis of the coil, with a superparamagnetic material deposited inside the coil. The electromotive force at the terminals of one winding of the coil, , is given by the formula: where is the magnetic induction, given by the equation: In the absence of magnetic material, and . Differentiating this expression, the voltage contains only the frequencies already present in the excitation current or in the magnetic field . In the presence of superparamagnetic material, and neglecting the higher-order terms of the Taylor expansion, we obtain for B: Differentiating the first term again yields voltage components at the frequencies of the excitation current or of the magnetic field . Expanding the second term multiplies frequency components together, so that intermodulation products appear at linear combinations of the original frequencies. The non-linearity of the superparamagnetic material thus acts as a frequency mixer. Calling the total magnetic field within the coil at the abscissa , integrating the induction along the coil between 0 and and differentiating with respect to time gives: with The conventional self-inductance and Rogowski-effect terms are found at the original frequencies. The third term is due to the Néel effect; it reflects the intermodulation between the excitation current and the external field. When the excitation current is sinusoidal, the Néel effect is characterized by the appearance of a second harmonic carrying the information about the field: Applications An important application of the Néel effect is current sensing, by measuring the magnetic field radiated by a current-carrying conductor; this is the principle of Néel-effect current sensors. The Néel effect allows the accurate, contactless measurement of currents, including at very low frequencies, in the manner of a current transformer. The transducer of a Néel-effect current sensor consists of a coil with a core of superparamagnetic nanoparticles. The coil is traversed by an excitation current: . In the presence of an external magnetic field to be measured, , the transducer transposes (via the Néel effect) the information to be measured, H(f), around a carrier frequency, the second harmonic of the excitation current, where it is easier to detect. 
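The mixing mechanism can be sketched as follows; this is a simplified illustration that keeps only a single non-linear correction term (here written as cubic, the lowest-order odd correction in the Langevin expansion), and the sign and normalization of the Néel coefficient \(N_e\) vary between presentations. Writing the weak-field magnetization as

\[
M(H) \;\approx\; \chi_0 H + N_e H^{3},
\]

and taking a total field consisting of the external field to be measured plus a sinusoidal excitation, \(H(t) = H_{\mathrm{ext}} + H_{\mathrm{exc}}\cos(2\pi f t)\), the non-linear term contains

\[
3\,N_e\,H_{\mathrm{ext}}\,H_{\mathrm{exc}}^{2}\cos^{2}(2\pi f t)
= \tfrac{3}{2}\,N_e\,H_{\mathrm{ext}}\,H_{\mathrm{exc}}^{2}\,\bigl(1 + \cos(4\pi f t)\bigr).
\]

Since the induced voltage follows \(e = -nS\,\mathrm{d}B/\mathrm{d}t\) with \(B = \mu_0\,(H + M)\), this produces a component of the coil voltage at the second harmonic \(2f\) whose amplitude is proportional to the Néel coefficient, to the external field \(H_{\mathrm{ext}}\), and to the square of the excitation amplitude.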
The electromotive force generated by the coil is proportional to the magnetic field to be measured: and to the square of the excitation current: To improve the measurement's performance (for example its linearity and its sensitivity to temperature and vibration), the sensor includes a second, permanent feedback winding whose current is adjusted to cancel the second harmonic. The ratio of the feedback current to the primary current is then set by the number of turns of the feedback winding: . References See also Superparamagnetism Louis Néel Magnetic ordering Electric and magnetic fields in matter
Néel effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
839
[ "Magnetic ordering", "Condensed matter physics", "Electric and magnetic fields in matter", "Materials science" ]
37,918,612
https://en.wikipedia.org/wiki/Extractive%20electrospray%20ionization
Extractive electrospray ionization (EESI) is a spray-type, ambient ionization source in mass spectrometry that uses two colliding aerosols, one of which is generated by electrospray. In standard EESI, syringe pumps provide the liquids for both an electrospray and a sample spray. In neutral desorption EESI (ND-EESI), the liquid for the sample aerosol is provided by a flow of nitrogen. Principle of operation A ND-EESI experiment is simple in concept and implementation. A room temperature (20 °C) nitrogen gas stream is flowed through a narrow opening (i.d.~0.1 mm) to form a sharp jet targeted at a surface. The nitrogen molecules desorb analytes from the surface. The jet is only 2–3 mm above the surface, and the gas flow is about 200 mL/min with gas speeds around 300 m/s. The sample area is about 10 mm2. An optional enclosure, most commonly made of glass, can cover the sampling area to ensure proper positioning of the gas jet and the sample transfer line. A tube carries the neutral aerosol to the ESI spray. The sample spray in EESI produces a liquid aerosol with the analyte in sample droplets. The ESI spray produces droplets with protons. The sample droplets and the proton-rich droplets bump into each other. Each droplet has properties: analyte solubility in the ESI spray solvent and surface tension of the spray solution and of the sample solution. With dissimilar properties, some collisions produce no extraction because the droplets “bounce", but with similar properties, some collisions produce coalescence and liquid-liquid extraction. The extent of the extraction depends on the similarity of the properties. Applications Ambient ionization techniques are attractive for many samples for their high tolerance to complex mixtures and for fast testing. EESI has been employed for the rapid characterization of living objects, native proteins, and metabolic biomarkers. EESI has been applied to food samples, urine, serum, exhaled breath and protein samples. A general investigation of urine, serum, milk and milk powders was reported in 2006. Breath analysis of valproic acid with EESI was reported in 2007. The maturity of fruit was classified with the combination of EESI and principal component analysis, and live samples were tested a short time later. Perfumes were classified with the combination of EESI and characteristic ions. On-line monitoring was performed in 2008. Melamine in tainted milk was detected in 2009. Breath analysis was performed with the combination of EESI and an ion trap mass spectrometer. Beverages, over-the-counter drugs, uranyl waste water, and aquiculture water were tested with EESI between 2010 and 2016. See also Secondary electrospray ionization Tandem mass spectrometry References Mass spectrometry Ion source
Extractive electrospray ionization
[ "Physics", "Chemistry" ]
600
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Ion source", "Mass spectrometry", "Matter" ]
37,923,625
https://en.wikipedia.org/wiki/Geoneutrino
In nuclear and particle physics, a geoneutrino is a neutrino or antineutrino emitted during the decay of naturally-occurring radionuclides in the Earth. Neutrinos, the lightest of the known subatomic particles, lack measurable electromagnetic properties and interact only via the weak nuclear force when ignoring gravity. Matter is virtually transparent to neutrinos and consequently they travel, unimpeded, at near light speed through the Earth from their point of emission. Collectively, geoneutrinos carry integrated information about the abundances of their radioactive sources inside the Earth. A major objective of the emerging field of neutrino geophysics involves extracting geologically useful information (e.g., abundances of individual geoneutrino-producing elements and their spatial distribution in Earth's interior) from geoneutrino measurements. Analysts from the Borexino collaboration have been able to get to 53 events of neutrinos originating from the interior of the Earth. Most geoneutrinos are electron antineutrinos originating in decay branches of 40K, 232Th and 238U. Together these decay chains account for more than 99% of the present-day radiogenic heat generated inside the Earth. Only geoneutrinos from 232Th and 238U decay chains are detectable by the inverse beta-decay mechanism on the free proton because these have energies above the corresponding threshold (1.8 MeV). In neutrino experiments, large underground liquid scintillator detectors record the flashes of light generated from this interaction. geoneutrino measurements at two sites, as reported by the KamLAND and Borexino collaborations, have begun to place constraints on the amount of radiogenic heating in the Earth's interior. A third detector (SNO+) is expected to start collecting data in 2017. JUNO experiment is under construction in Southern China. Another geoneutrino detecting experiment is planned at the China Jinping Underground Laboratory. History Neutrinos were hypothesized in 1930 by Wolfgang Pauli. The first detection of antineutrinos generated in a nuclear reactor was confirmed in 1956. The idea of studying geologically produced neutrinos to infer Earth's composition has been around since at least mid-1960s. In a 1984 landmark paper Krauss, Glashow & Schramm presented calculations of the predicted geoneutrino flux and discussed the possibilities for detection. First detection of geoneutrinos was reported in 2005 by the KamLAND experiment at the Kamioka Observatory in Japan. In 2010 the Borexino experiment at the Gran Sasso National Laboratory in Italy released their geoneutrino measurement. Updated results from KamLAND were published in 2011 and 2013, and Borexino in 2013 and 2015. Geological motivation The Earth's interior radiates heat at a rate of about 47 TW (terawatts), which is less than 0.1% of the incoming solar energy. Part of this heat loss is accounted for by the heat generated upon decay of radioactive isotopes in the Earth interior. The remaining heat loss is due to the secular cooling of the Earth, growth of the Earth's inner core (gravitational energy and latent heat contributions), and other processes. The most important heat-producing elements are uranium (U), thorium (Th), and potassium (K). The debate about their abundances in the Earth has not concluded. Various compositional estimates exist where the total Earth's internal radiogenic heating rate ranges from as low as ~10 TW to as high as ~30 TW. 
About 7 TW worth of heat-producing elements reside in the Earth's crust, the remaining power is distributed in the Earth mantle; the amount of U, Th, and K in the Earth core is probably negligible. Radioactivity in the Earth mantle provides internal heating to power mantle convection, which is the driver of plate tectonics. The amount of mantle radioactivity and its spatial distribution—is the mantle compositionally uniform at large scale or composed of distinct reservoirs?—is of importance to geophysics. The existing range of compositional estimates of the Earth reflects our lack of understanding of what were the processes and building blocks (chondritic meteorites) that contributed to its formation. More accurate knowledge of U, Th, and K abundances in the Earth interior would improve our understanding of present-day Earth dynamics and of Earth formation in early Solar System. Counting antineutrinos produced in the Earth can constrain the geological abundance models. The weakly interacting geoneutrinos carry information about their emitters’ abundances and location in the entire Earth volume, including the deep Earth. Extracting compositional information about the Earth mantle from geoneutrino measurements is difficult but possible. It requires a synthesis of geoneutrino experimental data with geochemical and geophysical models of the Earth. Existing geoneutrino data are a byproduct of antineutrino measurements with detectors designed primarily for fundamental neutrino physics research. Future experiments devised with a geophysical agenda in mind would benefit geoscience. Proposals for such detectors have been put forward. Geoneutrino prediction Calculations of the expected geoneutrino signal predicted for various Earth reference models are an essential aspect of neutrino geophysics. In this context, "Earth reference model" means the estimate of heat producing element (U, Th, K) abundances and assumptions about their spatial distribution in the Earth, and a model of Earth's internal density structure. By far the largest variance exists in the abundance models where several estimates have been put forward. They predict a total radiogenic heat production as low as ~10 TW and as high as ~30 TW, the commonly employed value being around 20 TW. A density structure dependent only on the radius (such as the Preliminary Reference Earth Model or PREM) with a 3-D refinement for the emission from the Earth's crust is generally sufficient for geoneutrino predictions. The geoneutrino signal predictions are crucial for two main reasons: 1) they are used to interpret geoneutrino measurements and test the various proposed Earth compositional models; 2) they can motivate the design of new geoneutrino detectors. The typical geoneutrino flux at Earth's surface is few × 106 cm−2⋅s−1. As a consequence of (i) high enrichment of continental crust in heat producing elements (~7 TW of radiogenic power) and (ii) the dependence of the flux on 1/(distance from point of emission)2, the predicted geoneutrino signal pattern correlates well with the distribution of continents. At continental sites, most geoneutrinos are produced locally in the crust. This calls for an accurate crustal model, both in terms of composition and density, a nontrivial task. Antineutrino emission from a volume V is calculated for each radionuclide from the following equation: where dφ(Eν,r)/dEν is the fully oscillated antineutrino flux energy spectrum (in cm−2⋅s−1⋅MeV−1) at position r (units of m) and Eν is the antineutrino energy (in MeV). 
On the right-hand side, ρ is rock density (in kg⋅m−3), A is elemental abundance (kg of element per kg of rock) and X is the natural isotopic fraction of the radionuclide (isotope/element), M is atomic mass (in g⋅mol−1), NA is the Avogadro constant (in mol−1), λ is decay constant (in s−1), dn(Eν)/dEν is the antineutrino intensity energy spectrum (in MeV−1, normalized to the number of antineutrinos nν produced in a decay chain when integrated over energy), and Pee(Eν,L) is the antineutrino survival probability after traveling a distance L. For an emission domain the size of the Earth, the fully oscillated energy-dependent survival probability Pee can be replaced with a simple factor ⟨Pee⟩ ≈ 0.55, the average survival probability. Integration over the energy yields the total antineutrino flux (in cm−2⋅s−1) from a given radionuclide: The total geoneutrino flux is the sum of contributions from all antineutrino-producing radionuclides. The geological inputs—the density and particularly the elemental abundances—carry a large uncertainty. The uncertainty of the remaining nuclear and particle physics parameters is negligible compared to the geological inputs. At present it is presumed that uranium-238 and thorium-232 each produce about the same amount of heat in the Earth's mantle, and these are presently the main contributors to radiogenic heat. However, neutrino flux does not perfectly track heat from radioactive decay of primordial nuclides, because neutrinos do not carry off a constant fraction of the energy from the radiogenic decay chains of these primordial radionuclides. Geoneutrino detection Detection mechanism Instruments that measure geoneutrinos are large scintillation detectors. They use the inverse beta decay reaction, a method proposed by Bruno Pontecorvo that Frederick Reines and Clyde Cowan employed in their pioneering experiments in 1950s. Inverse beta decay is a charged current weak interaction, where an electron antineutrino interacts with a proton, producing a positron and a neutron: Only antineutrinos with energies above the kinematic threshold of 1.806 MeV—the difference between rest mass energies of neutron plus positron and proton—can participate in this interaction. After depositing its kinetic energy, the positron promptly annihilates with an electron: With a delay of few tens to few hundred microseconds the neutron combines with a proton to form a deuteron: The two light flashes associated with the positron and the neutron are coincident in time and in space, which provides a powerful method to reject single-flash (non-antineutrino) background events in the liquid scintillator. Antineutrinos produced in man-made nuclear reactors overlap in energy range with geologically produced antineutrinos and are also counted by these detectors. Because of the kinematic threshold of this antineutrino detection method, only the highest energy geoneutrinos from 232Th and 238U decay chains can be detected. Geoneutrinos from 40K decay have energies below the threshold and cannot be detected using inverse beta decay reaction. Experimental particle physicists are developing other detection methods, which are not limited by an energy threshold (e.g., antineutrino scattering on electrons) and thus would allow detection of geoneutrinos from potassium decay. Geoneutrino measurements are often reported in Terrestrial Neutrino Units (TNU; analogy with Solar Neutrino Units) rather than in units of flux (cm−2 s−1). TNU is specific to the inverse beta decay detection mechanism with protons. 
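In the notation just defined, the emission integral is commonly written in the following form (a restatement of the expression described above; normalization conventions differ slightly between papers):

\[
\frac{d\phi(E_\nu,\mathbf r)}{dE_\nu}
= \frac{X\,\lambda\,N_A}{M}\,\frac{dn(E_\nu)}{dE_\nu}
\int_{V}\frac{A(\mathbf r')\,\rho(\mathbf r')\,P_{ee}\bigl(E_\nu,\lvert\mathbf r-\mathbf r'\rvert\bigr)}{4\pi\,\lvert\mathbf r-\mathbf r'\rvert^{2}}\;d^{3}r' .
\]

Likewise, the detection chain described above consists of the standard reactions

\[
\bar\nu_e + p \;\to\; e^{+} + n,\qquad
e^{+} + e^{-} \;\to\; 2\gamma,\qquad
n + p \;\to\; d + \gamma\;(2.2\ \text{MeV}),
\]

where the first step is possible only for antineutrino energies above the 1.806 MeV threshold quoted above.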
1 TNU corresponds to 1 geoneutrino event recorded over a year-long fully efficient exposure of 1032 free protons, which is approximately the number of free protons in a 1 kiloton liquid scintillation detector. The conversion between flux units and TNU depends on the thorium to uranium abundance ratio (Th/U) of the emitter. For Th/U=4.0 (a typical value for the Earth), a flux of 1.0 × 106 cm−2 s−1 corresponds to 8.9 TNU. Detectors and results Existing detectors KamLAND (Kamioka Liquid Scintillator Antineutrino Detector) is a 1.0 kiloton detector located at the Kamioka Observatory in Japan. Results based on a live-time of 749 days and presented in 2005 mark the first detection of geoneutrinos. The total number of antineutrino events was 152, of which 4.5 to 54.2 were geoneutrinos. This analysis put a 60 TW upper limit on the Earth's radiogenic power from 232Th and 238U. A 2011 update of KamLAND's result used data from 2135 days of detector time and benefited from improved purity of the scintillator as well as a reduced reactor background from the 21-month-long shutdown of the Kashiwazaki-Kariwa plant after Fukushima. Of 841 candidate antineutrino events, 106 were identified as geoneutrinos using unbinned maximum likelihood analysis. It was found that 232Th and 238U together generate 20.0 TW of radiogenic power. Borexino is a 0.3 kiloton detector at Laboratori Nazionali del Gran Sasso near L'Aquila, Italy. Results published in 2010 used data collected over live-time of 537 days. Of 15 candidate events, unbinned maximum likelihood analysis identified 9.9 as geoneutrinos. The geoneutrino null hypothesis was rejected at 99.997% confidence level (4.2σ). The data also rejected a hypothesis of an active georeactor in the Earth's core with power above 3 TW at 95% C.L. A 2013 measurement of 1353 days, detected 46 'golden' anti-neutrino candidates with 14.3±4.4 identified geoneutrinos, indicating a 14.1±8.1 TNU mantle signal, setting a 95% C.L limit of 4.5 TW on geo-reactor power and found the expected reactor signals. In 2015, an updated spectral analysis of geoneutrinos was presented by Borexino based on 2056 days of measurement (from December 2007 to March 2015), with 77 candidate events; of them, only 24 are identified as geonetrinos, and the rest 53 events are originated from European nuclear reactors. The analysis shows that the Earth crust contains about the same amount of U and Th as the mantle, and that the total radiogenic heat flow from these elements and their daughters is 23–36 TW. SNO+ is a 0.8 kiloton detector located at SNOLAB near Sudbury, Ontario, Canada. SNO+ uses the original SNO experiment chamber. The detector is being refurbished and is expected to operate in late 2016 or 2017. Planned and proposed detectors Ocean Bottom KamLAND-OBK OBK is a 50 kiloton liquid scintillation detector for deployment in the deep ocean. JUNO (Jiangmen Underground Neutrino Observatory, website) is a 20 kiloton liquid scintillation detector currently under construction in Southern China. The JUNO detector is scheduled to become operational in 2023. Jinping Neutrino Experiment is a 4 kiloton liquid scintillation detector currently under construction in the China JinPing Underground Laboratory (CJPL) scheduled for completion in 2022. LENA (Low Energy Neutrino Astronomy, website) is a proposed 50 kiloton liquid scintillation detector of the LAGUNA project. 
Proposed sites include Centre for Underground Physics in Pyhäsalmi (CUPP), Finland (preferred) and Laboratoire Souterrain de Modane (LSM) in Fréjus, France. This project seems to be cancelled. at DUSEL (Deep Underground Science and Engineering Laboratory) at Homestake in Lead, South Dakota, USA at BNO (Baksan Neutrino Observatory) in Russia EARTH (Earth AntineutRino TomograpHy) Hanohano (Hawaii Anti-Neutrino Observatory) is a proposed deep-ocean transportable detector. It is the only detector designed to operate away from the Earth's continental crust and from nuclear reactors in order to increase the sensitivity to geoneutrinos from the Earth's mantle. Desired future technologies Directional antineutrino detection. Resolving the direction from which an antineutrino arrived would help discriminate between the crustal geoneutrino and reactor antineutrino signal (most antineutrinos arriving near horizontally) from mantle geoneutrinos (much wider range of incident dip angles). Detection of antineutrinos from 40K decay. Since the energy spectrum of antineutrinos from 40K decay falls entirely below the threshold energy of inverse beta decay reaction (1.8 MeV), a different detection mechanism must be exploited, such as antineutrino scattering on electrons. Measurement of the abundance of 40K within the Earth would constrain Earth's volatile element budget. References Further reading External links Deep Ocean Neutrino Sciences describes deep ocean geo-neutrino detection projects with references and links to workshops. Neutrino Geoscience 2015 Conference provides presentations by experts covering almost all areas of geoneutrino science. Site also contains links to previous "Neutrino Geoscience" meetings. Geoneutrinos.org is an interactive website allowing you to view the geoneutrino spectra anywhere on Earth (see "Reactors" tab) and manipulate global geoneutrino models (see "Model" tab) Geophysics Neutrinos
Geoneutrino
[ "Physics" ]
3,588
[ "Applied and interdisciplinary physics", "Geophysics" ]
45,389,413
https://en.wikipedia.org/wiki/Numerical%20modeling%20in%20echocardiography
Numerical manipulation of Doppler parameters obtained during routine echocardiography has been extensively utilized to non-invasively estimate intra-cardiac pressures, in many cases removing the need for invasive cardiac catheterization. Echocardiography uses ultrasound to create real-time anatomic images of the heart and its structures. Doppler echocardiography utilizes the Doppler principle to estimate intracardiac velocities. Via the modified Bernoulli equation (in its commonly used simplified form, ΔP ≈ 4v², where the velocity v in m/s yields the pressure gradient ΔP in mmHg), velocity is routinely converted to a pressure gradient for use in clinical cardiology decision making. Mathematical modeling of intracardiac velocity parameters has been investigated broadly, for example for the pulmonary circulation and for aortic Doppler in aortic stenosis. Diastolic dysfunction algorithms use complex combinations of these numeric models to estimate intra-cardiac filling pressures. Shunt defects have been studied using the Relative Atrial Index. See also Medical ultrasonography section: Doppler sonography Echocardiography American Society of Echocardiography Christian Doppler References External links Echocardiography Textbook by Bonita Anderson Echocardiography (Ultrasound of the heart) Doppler Examination - Introduction The Doppler Principle and the Study of Cardiac Flows Cardiac imaging Medical ultrasonography Medical equipment Multidimensional signal processing
Numerical modeling in echocardiography
[ "Biology" ]
254
[ "Medical equipment", "Medical technology" ]
45,391,102
https://en.wikipedia.org/wiki/Viral%20dynamics
Viral dynamics is a field of applied mathematics concerned with describing the progression of viral infections within a host organism. It employs a family of mathematical models that describe changes over time in the populations of cells targeted by the virus and the viral load. These equations may also track competition between different viral strains and the influence of immune responses. The original viral dynamics models were inspired by compartmental epidemic models (e.g. the SI model), with which they continue to share many common mathematical features, such as the concept of the basic reproductive ratio (R0). The major distinction between these fields is in the scale at which the models operate: while epidemiological models track the spread of infection between individuals within a population (i.e. "between host"), viral dynamics models track the spread of infection between cells within an individual (i.e. "within host"). Analyses employing viral dynamic models have been used extensively to study HIV, hepatitis B virus, and hepatitis C virus, among other infections References External links Viral Dynamics Mathematical Modeling Training, Center for AIDS Research, University of Washington Evolutionary dynamics Evolutionary biology Virology Immunology Applied mathematics Mathematical modeling
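As an illustration, the simplest and most widely used of these systems is the basic "target-cell-limited" model (notation chosen here; parameter names vary between papers): with T the density of uninfected target cells, I the density of infected cells and V the viral load,

\[
\frac{dT}{dt} = \lambda - dT - \beta T V,\qquad
\frac{dI}{dt} = \beta T V - \delta I,\qquad
\frac{dV}{dt} = p I - c V,
\]

where \(\lambda\) and \(d\) are the production and death rates of target cells, \(\beta\) the infection rate, \(\delta\) the death rate of infected cells, \(p\) the virion production rate and \(c\) the virion clearance rate. The within-host basic reproductive ratio is then \(R_0 = \lambda\beta p/(d\,\delta\, c)\), in direct analogy with the epidemiological quantity mentioned above.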
Viral dynamics
[ "Mathematics", "Biology" ]
236
[ "Immunology", "Applied mathematics", "Evolutionary biology", "Mathematical modeling" ]
45,391,494
https://en.wikipedia.org/wiki/Chlorobis%28ethylene%29rhodium%20dimer
Chlorobis(ethylene)rhodium dimer is an organorhodium compound with the formula Rh2Cl2(C2H4)4. It is a red-orange solid that is soluble in nonpolar organic solvents. The molecule consists of two rhodium centres linked by two bridging chloride ligands, with two ethylene ligands on each rhodium. The ethylene ligands are labile and are readily displaced, even by other alkenes. A variety of homogeneous catalysts have been prepared from this complex. Preparation and reactions The complex is prepared by treating an aqueous methanolic solution of hydrated rhodium trichloride with ethylene at room temperature. Rh(III) is reduced with oxidation of ethylene to acetaldehyde: 2 RhCl3(H2O)3 + 6 C2H4 → Rh2Cl2(C2H4)4 + 2 CH3CHO + 4 HCl + 4 H2O Reflecting the lability of its ligands, the complex does not tolerate recrystallization. The complex reacts slowly with water to give acetaldehyde. With HCl, it gives the anion [RhCl2(C2H4)2]−. Rh2Cl2(C2H4)4 catalyzes the dimerization of ethylene to 1-butene. Carbonylation affords rhodium carbonyl chloride. Treatment with acetylacetone and aqueous KOH gives Rh(acac)(C2H4)2. References Organorhodium compounds Homogeneous catalysis Alkene complexes Dimers (chemistry) Chloro complexes Rhodium(I) compounds
Chlorobis(ethylene)rhodium dimer
[ "Chemistry", "Materials_science" ]
348
[ "Catalysis", "Homogeneous catalysis", "Dimers (chemistry)", "Polymer chemistry" ]
45,395,149
https://en.wikipedia.org/wiki/Rhodium%20carbonyl%20chloride
Rhodium carbonyl chloride is an organorhodium compound with the formula Rh2Cl2(CO)4. It is a red-brown volatile solid that is soluble in nonpolar organic solvents. It is a precursor to other rhodium carbonyl complexes, some of which are useful in homogeneous catalysis. Structure The molecule consists of two planar Rh(I) centers linked by two bridging chloride ligands and four CO ligands. X-ray crystallography shows that the two Rh(I) centers are square planar with the dihedral angle of 126.8° between the two RhCl2 planes. The metals are nonbonding. Synthesis and reactions First prepared by Walter Hieber, it is typically prepared by treating hydrated rhodium trichloride with flowing carbon monoxide, according to this idealized redox equation: 2 RhCl3(H2O)3 + 6 CO → Rh2Cl2(CO)4 + 2 COCl2 + 6 H2O. The complex reacts with triphenylphosphine to give the bis(triphenylphosphine)rhodium carbonyl chloride: Rh2Cl2(CO)4 + 4 PPh3 → 2 trans-RhCl(CO)(PPh3)2 + 2 CO With chloride salts, the dichloride anion forms: Rh2Cl2(CO)4 + 2 Cl− → 2 cis-[RhCl2(CO)2]− With acetylacetone, rhodium carbonyl chloride reacts to give dicarbonyl(acetylacetonato)rhodium(I). The dimer reacts with a variety of Lewis bases (:B) to form adducts RhCl(CO)2:B. Its reaction with tetrahydrothiophene and the corresponding enthalpy are: 1/2 Rh2Cl2(CO)4 + :S(CH2)4 → RhCl(CO)2:S(CH2)4   ΔH = -31.8 kJ mol−1 This enthalpy corresponds to the enthalpy change for a reaction forming one mole of the product, RhCl(CO)2:S(CH2)4, from the acid dimer. The dissociation energy for rhodium(I) dicarbonyl chloride dimer, which is an energy contribution prior to reaction with the donor, Rh2Cl2(CO)4 → 2 RhCl(CO)2 has been determined by the ECW model to be 87.1 kJ mol−1 N-heterocyclic carbene (NHC) ligands react with rhodium carbonyl chloride to give monomeric cis-[RhCl(NHC)(CO)2] complexes. The IR spectra of these complexes have been used to estimate the donor strength of NHCs. References Organorhodium compounds Homogeneous catalysis Dimers (chemistry) Carbonyl complexes Chloro complexes Rhodium(I) compounds
Rhodium carbonyl chloride
[ "Chemistry", "Materials_science" ]
652
[ "Catalysis", "Homogeneous catalysis", "Dimers (chemistry)", "Polymer chemistry" ]
45,395,430
https://en.wikipedia.org/wiki/Augmented%20reality-assisted%20surgery
Augmented reality-assisted surgery (ARAS) is a surgical tool utilizing technology that superimposes a computer-generated image on a surgeon's view of the operative field, thus providing the surgeon with a composite view of the patient in which a computer-generated overlay enhances the operative experience. In addition, augmented reality interfaces (ARI) used with ARAS allow for hands-free interaction by recognizing the surgeon's speech, lowering the chances of physical contamination while operating. ARAS can be used for training, preparation for an operation, or performance of an operation. Surgeons themselves are the main route through which these procedures are brought into medical practice. ARAS can be performed using a wide array of technology, including an optical head-mounted display (OHMD)—such as the Google Glass XE 22.1 or Vuzix STAR 1200 XL. A study recorded a relatively positive reaction among trainees towards this technology in the operating room as the ARAS device guided them through a procedure. Some ARAS devices also provide a digital overlay from robotic and laparoscopic surgery feeds. Augmented reality-assisted surgery devices have been growing in use in various medical applications such as imaging, interactive body mapping, and modeling of possible cancerous growths. The technology is also used for planning before a complicated surgery is performed. It has some specialized uses in urological and cardiovascular areas. Specialized uses A subset of ARAS called augmented reality-assisted urologic surgery (ARAUS) specifically aids with urological surgery. This intraoperative training tool was first described and utilized by Tariq S. Hakky, Ryan M. Dickey, and Larry I. Lipshultz within the Scott Department of Urology, Baylor College of Medicine, and Daniel R. Martinez, Rafael E. Carrion, and Philippe E. Spiess within the Sexual Medicine Program in the Department of Urology, at the University of South Florida. It was initially used to teach medical residents how to place a penile implant from start to finish via an application downloaded onto the OHMD. Intraoperatively, an optical display camera output feed, combined with software allowing for the detection of points of interest, enabled faculty to interact with residents during the placement of the penile implant. Both faculty and residents reported a high degree of satisfaction with the ARAUS experience, and it was shown to be an effective tool in training urological surgical technique. Advantages of ARAUS include real-time feedback to residents during surgery and superior visibility and interaction between faculty and residents. ARAS has also been applied to the cardiovascular realm. Terry Peters of the University of Western Ontario in London, Canada has teamed up with other researchers at the Robarts Research Institute to implement ARAS towards the goal of improving repairs to the heart's mitral valve and replacement of the aortic valve. In an interview for the Medical Augmented Reality Blog, Peters stated that his research team could not only use ARAS to "[improve] the speed and safety of the cardiac valve repair procedure"; they also conducted "the evaluation of an AR environment to plan brain-tumor removal, and the development of an ARF-enhanced system for ultrasound-guided spinal injections." HoloSurgical Inc has developed the clinically tested ARAI™ surgical navigation system that provides real-time patient-specific 3D anatomical visualization for presurgical planning, intraoperative guidance, and postsurgical data analytics. 
The augmented reality component of the system allows the surgeon to focus their attention on the patient's internal anatomy, without actually exposing it. On January 10, 2019, HoloSurgical Inc completed the first spine surgery in the world performed using an augmented reality, artificial intelligence-based navigation system. The system was developed by AI pioneer Pawel Lewicki PhD, surgeon Kris Siemionow MD, PhD, and engineer Cristian Luciano PhD. References Augmented reality User interface techniques Medical technology
Augmented reality-assisted surgery
[ "Biology" ]
784
[ "Medical technology" ]
45,398,062
https://en.wikipedia.org/wiki/Polyadic%20space
In mathematics, a polyadic space is a topological space that is the image under a continuous function of a topological power of an Alexandroff one-point compactification of a discrete space. History Polyadic spaces were first studied by S. Mrówka in 1970 as a generalisation of dyadic spaces. The theory was developed further by R. H. Marty, János Gerlits and Murray G. Bell, the latter of whom introduced the concept of the more general centred spaces. Background A subset K of a topological space X is said to be compact if every open cover of K contains a finite subcover. It is said to be locally compact at a point x ∈ X if x lies in the interior of some compact subset of X. X is a locally compact space if it is locally compact at every point in the space. A proper subset A ⊂ X is said to be dense if the closure Ā = X. A space whose set has a countable, dense subset is called a separable space. For a non-compact, locally compact Hausdorff topological space , we define the Alexandroff one-point compactification as the topological space with the set , denoted , where , with the topology defined as follows: , for every compact subset . Definition Let be a discrete topological space, and let be an Alexandroff one-point compactification of . A Hausdorff space is polyadic if for some cardinal number , there exists a continuous surjective function , where is the product space obtained by multiplying with itself times. Examples Take the set of natural numbers with the discrete topology. Its Alexandroff one-point compactification is . Choose and define the homeomorphism with the mapping It follows from the definition that the image space is polyadic and compact directly from the definition of compactness, without using Heine-Borel. Every dyadic space (a compact space which is a continuous image of a Cantor set) is a polyadic space. Let X be a separable, compact space. If X is a metrizable space, then it is polyadic (the converse is also true). Properties The cellularity of a space is The tightness of a space is defined as follows: let , and . Define Then The topological weight of a polyadic space satisfies the equality . Let be a polyadic space, and let . Then there exists a polyadic space such that and . Polyadic spaces are the smallest class of topological spaces that contain metric compact spaces and are closed under products and continuous images. Every polyadic space of weight is a continuous image of . A topological space has the Suslin property if there is no uncountable family of pairwise disjoint non-empty open subsets of . Suppose that has the Suslin property and is polyadic. Then is dyadic. Let be the least number of discrete sets needed to cover , and let denote the least cardinality of a non-empty open set in . If is a polyadic space, then . Ramsey's theorem There is an analogue of Ramsey's theorem from combinatorics for polyadic spaces. For this, we describe the relationship between Boolean spaces and polyadic spaces. Let denote the clopen algebra of all clopen subsets of . We define a Boolean space as a compact Hausdorff space whose basis is . The element such that is called the generating set for . We say is a -disjoint collection if is the union of at most subcollections , where for each , is a disjoint collection of cardinality at most It was proven by Petr Simon that is a Boolean space with the generating set of being -disjoint if and only if is homeomorphic to a closed subspace of . 
The Ramsey-like property for polyadic spaces as stated by Murray Bell for Boolean spaces is then as follows: every uncountable clopen collection contains an uncountable subcollection which is either linked or disjoint. Compactness We define the compactness number of a space , denoted by , to be the least number such that has an n-ary closed subbase. We can construct polyadic spaces with arbitrary compactness number. We will demonstrate this using two theorems proven by Murray Bell in 1985. Let be a collection of sets and let be a set. We denote the set by ; all subsets of of size by ; and all subsets of size at most by . If and for all , then we say that is n-linked. If every n-linked subset of has a non-empty intersection, then we say that is n-ary. Note that if is n-ary, then so is , and therefore every space with has a closed, n-ary subbase with . Note that a collection of closed subsets of a compact space is a closed subbase if and only if for every closed in an open set , there exists a finite such that and . Let be an infinite set and let by a number such that . We define the product topology on as follows: for , let , and let . Let be the collection . We take as a clopen subbase for our topology on . This topology is compact and Hausdorff. For and such that , we have that is a discrete subspace of , and hence that is a union of discrete subspaces. Theorem (Upper bound on ): For each total order on , there is an -ary closed subbase of . Proof: For , define and . Set . For , and such that , let such that is an -linked subset of . Show that . For a topological space and a subspace , we say that a continuous function is a retraction if is the identity map on . We say that is a retract of . If there exists an open set such that , and is a retract of , then we say that is a neighbourhood retract of . Theorem (Lower bound on ) Let be such that . Then cannot be embedded as a neighbourhood retract in any space with . From the two theorems above, it can be deduced that for such that , we have that . Let be the Alexandroff one-point compactification of the discrete space , so that . We define the continuous surjection by . It follows that is a polyadic space. Hence is a polyadic space with compactness number . Generalisations Centred spaces, AD-compact spaces and ξ-adic spaces are generalisations of polyadic spaces. Centred space Let be a collection of sets. We say that is centred if for all finite subsets . Define the Boolean space , with the subspace topology from . We say that a space is a centred space if there exists a collection such that is a continuous image of . Centred spaces were introduced by Murray Bell in 2004. AD-compact space Let be a non-empty set, and consider a family of its subsets . We say that is an adequate family if: given , if every finite subset of is in , then . We may treat as a topological space by considering it a subset of the Cantor cube , and in this case, we denote it . Let be a compact space. If there exist a set and an adequate family , such that is the continuous image of , then we say that is an AD-compact space. AD-compact spaces were introduced by Grzegorz Plebanek. He proved that they are closed under arbitrary products and Alexandroff compactifications of disjoint unions. It follows that every polyadic space is hence an AD-compact space. The converse is not true, as there are AD-compact spaces that are not polyadic. ξ-adic space Let and be cardinals, and let be a Hausdorff space. 
If there exists a continuous surjection from to , then is said to be a ξ-adic space. ξ-adic spaces were proposed by S. Mrówka, and the following results about them were given by János Gerlits (they also apply to polyadic spaces, as they are a special case of ξ-adic spaces). Let be an infinite cardinal, and let be a topological space. We say that has the property if for any family of non-empty open subsets of , where , we can find a set and a point such that and for each neighbourhood of , we have that . If is a ξ-adic space, then has the property for each infinite cardinal . It follows from this result that no infinite ξ-adic Hausdorff space can be an extremally disconnected space. Hyadic space Hyadic spaces were introduced by Eric van Douwen. They are defined as follows. Let be a Hausdorff space. We denote by the hyperspace of . We define the subspace of by . A base of is the family of all sets of the form , where is any integer, and are open in . If is compact, then we say a Hausdorff space is hyadic if there exists a continuous surjection from to . Polyadic spaces are hyadic. See also Dyadic space Eberlein compactum Stone space Stone–Čech compactification Supercompact space References Properties of topological spaces General topology
Polyadic space
[ "Mathematics" ]
1,948
[ "General topology", "Properties of topological spaces", "Space (mathematics)", "Topological spaces", "Topology" ]
45,399,927
https://en.wikipedia.org/wiki/Functor%20represented%20by%20a%20scheme
In algebraic geometry, a functor represented by a scheme X is a set-valued contravariant functor on the category of schemes such that the value of the functor at each scheme S is (up to natural bijections or one-to-one correspondence) the set of all morphisms . The functor F is then said to be naturally equivalent to the functor of points of X; and the scheme X is said to represent the functor F, and to classify geometric objects over S given by F. A functor producing certain geometric objects over S might be represented by a scheme X. For example, the functor taking S to the set of all line bundles over S (or more precisely n-dimensional linear systems) is represented by the projective space . Another example is the Hilbert scheme X of a scheme Y, which represents the functor sending a scheme S to the set of closed subschemes of which are flat families over S. In some applications, it may not be possible to find a scheme that represents a given functor. This led to the notion of a stack, which is not quite a functor but can still be treated as if it were a geometric space. (A Hilbert scheme is a scheme rather than a stack, because, very roughly speaking, deformation theory is simpler for closed schemes.) Some moduli problems are solved by giving formal solutions (as opposed to polynomial algebraic solutions) and in that case, the resulting functor is represented by a formal scheme. Such a formal scheme is then said to be algebraizable if there is a scheme that can represent the same functor, up to some isomorphisms. Motivation The notion is an analog of a classifying space in algebraic topology, where each principal G-bundle over a space S is (up to natural isomorphisms) the pullback of the universal bundle along some map . To give a principal G-bundle over S is the same as to give a map (called a classifying map) from S to the classifying space . A similar phenomenon in algebraic geometry is given by a linear system: to give a morphism from a base variety S to a projective space is equivalent to giving a basepoint-free linear system (or equivalently a line bundle) on S. That is, the projective space X represents the functor which gives all line bundles over S. Yoneda's lemma says that a scheme X determines and is determined by its functor of points. Functor of points Let X be a scheme. Its functor of points is the functorHom(−,X) : (Affine schemes)op ⟶ Setssending an affine scheme Y to the set of scheme maps . A scheme is determined up to isomorphism by its functor of points. This is a stronger version of the Yoneda lemma, which says that a X is determined by the map Hom(−,X) : Schemesop → Sets. Conversely, a functor F : (Affine schemes)op → Sets is the functor of points of some scheme if and only if F is a sheaf with respect to the Zariski topology on (Affine schemes), and F admits an open cover by affine schemes. Examples Points as characters Let X be a scheme over the base ring B. If x is a set-theoretic point of X, then the residue field is the residue field of the local ring (i.e., the quotient by the maximal ideal). For example, if X is an affine scheme Spec(A) and x is a prime ideal , then the residue field of x is the function field of the closed subscheme . For simplicity, suppose . Then the inclusion of a set-theoretic point x into X corresponds to the ring homomorphism: (which is if .) The above should be compared to the spectrum of a commutative Banach algebra. 
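Concretely — restating the affine case just described — if X = Spec A and the point x corresponds to a prime ideal \(\mathfrak p \subseteq A\), the homomorphism in question is the composite

\[
A \longrightarrow A_{\mathfrak p} \longrightarrow A_{\mathfrak p}/\mathfrak p A_{\mathfrak p} = \kappa(\mathfrak p),
\]

which can also be written as \(A \to A/\mathfrak p \to \operatorname{Frac}(A/\mathfrak p)\), and which reduces to the quotient map \(A \to A/\mathfrak p\) when \(\mathfrak p\) is maximal.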
Points as sections By the universal property of fiber product, each R-point of a scheme X determines a morphism of R-schemes ; i.e., a section of the projection . If S is a subset of X(R), then one writes for the set of the images of the sections determined by elements in S. Spec of the ring of dual numbers Let , the Spec of the ring of dual numbers over a field k and X a scheme over k. Then each amounts to the tangent vector to X at the point that is the image of the closed point of the map. In other words, is the set of tangent vectors to X. Universal object Let be the functor represented by a scheme . Under the isomorphism , there is a unique element of that corresponds to the identity map . This unique element is known as the universal object or the universal family (when the objects being classified are families). The universal object acts as a template from which all other elements in for any scheme can be derived via pullback along a morphism from to . See also Moduli space Weil restriction Rational point Descent along torsors Notes References External links http://www.math.washington.edu/~zhang/Shanghai2011/Slides/ardakov.pdf Algebraic geometry
Functor represented by a scheme
[ "Mathematics" ]
1,074
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Fields of abstract algebra", "Category theory", "Functors", "Mathematical relations", "Algebraic geometry" ]
33,882,236
https://en.wikipedia.org/wiki/Adaptive%20collaborative%20control
Adaptive collaborative control is the decision-making approach used in hybrid models consisting of finite-state machines with functional models as subcomponents to simulate behavior of systems formed through the partnerships of multiple agents for the execution of tasks and the development of work products. The term “collaborative control” originated from work developed in the late 1990s and early 2000s by Fong, Thorpe, and Baur (1999). According to Fong et al., in order for robots to function in collaborative control, they must be self-reliant, aware, and adaptive. In the literature, the adjective “adaptive” is not always shown but is noted in the official sense as it is an important element of collaborative control. The adaptation of traditional applications of control theory in teleoperations sought initially to reduce the sovereignty of “humans as controllers/robots as tools” and had humans and robots working as peers, collaborating to perform tasks and to achieve common goals. Early implementations of adaptive collaborative control centered on vehicle teleoperation. Recent uses of adaptive collaborative control cover training, analysis, and engineering applications in teleoperations between humans and multiple robots, multiple robots collaborating among themselves, unmanned vehicle control, and fault tolerant controller design. Like traditional control methodologies, adaptive collaborative control takes inputs into the system and regulates the output based on a predefined set of rules. The difference is that those rules or constraints only apply to the higher-level strategy (goals and tasks) set by humans. Lower tactical level decisions are more adaptive, flexible, and accommodating to varying levels of autonomy, interaction and agent (human and/or robotic) capabilities. Models under this methodology may query sources in the event there is some uncertainty in a task that affects the overarching strategy. That interaction will produce an alternative course of action if it provides more certainty in support of the overarching strategy. If not, or if there is no response, the model will continue performing as originally anticipated. Several important considerations are necessary for the implementation of adaptive collaborative control for simulation. As discussed earlier, data is provided from multiple collaborators to perform necessary tasks. This basic function requires data fusion on behalf of the model and potentially a need to set a prioritization scheme for handling continuous streaming of recommendations. The degree of autonomy of the robot in the case of human–robot interaction and weighting of decisional authority in robot-robot interaction are important for the control architecture. The design of interfaces is an important human system integration consideration that must be addressed. Because humans can interpret messages in varied ways, it becomes an important design factor to ensure that the robot(s) correctly convey their messages when interacting with humans. History The history of adaptive collaborative control began in 1999 through the efforts of Terrence Fong and Charles Thorpe of Carnegie Mellon University and Charles Baur of École Polytechnique Fédérale de Lausanne. Fong et al. believed existing telerobotic practices, which centered on a human point of view, while sufficient for some domains, were sub-optimal for operating multiple vehicles or controlling planetary rovers. The new approach devised by Fong et al.
focused on a robot-centric teleoperation model that treated the human as a peer and made requests to them in the manner a person would seek advice from experts. In the nominal work, Fong et al. implemented collaborative control design using a PioneerAT mobile robot and a UNIX workstation with wireless communications and distributed message-based computing. Two years later, Fong utilized collaborative control for several more applications, including the collaboration of a single human operator with multiple mobile robots for surveillance and reconnaissance. Around this same time, Goldberg and Chen presented an adaptive collaborative control system that possessed malfunctioning sources. The control design proved to create a model that maintained a robust performance when subjected to a sizeable fraction of malfunctioning sources. In the work, Goldberg and Chen expanded on the definition of collaborative control to include multiple sensors and multiple control processes in addition to human operators as sources. A collaborative, cognitive workspace in the form of a three-dimensional representation developed by Idaho National Laboratory to support understanding of tasks and environments for human operators expounds on Fong's seminal work which used textual dialogue as the human-robot interaction. The success of the 3-D display provided evidence of the use of mental models for increased team success. During that same time, Fong et al. developed a three dimensional display that was formed via a fusion of sensor data. A recent adaptation of adaptive collaborative control in 2010 was used to design a fault tolerant control system using a Lyapunov function based analysis. Initialization The simuland for adaptive collaborative control centers on robotics. As such, adaptive collaborative control follows the tenets of control theory applied to robotics at its basest level. That means the states of the robot are observed at a given instant and noted if it is within some accepted bound. If it is not, the estimated states of the robot are calculated using equations of dynamics and kinematics at some future time. The process of entering observation data into the model to generate initial conditions is called initialization. The process of initialization for adaptive collaborative control occurs differently depending on the environment: robotics only and human-robotic interaction. Under a robotics only environment, initialization occurs very similarly to the description above. The robotics, systems, subsystems, non-human entities observe some state it finds not in accordance with the higher-level strategy. The entities that are aware of this error use the appropriate equations to present a revised value for a future time step to its peers. For human-robotic interactions, initialization can occur at two different levels. The first level is what was previously described. In this instance, the robot notices some anomaly in its states that is not wholly consistent or is problematic with its higher-level strategy. It queries the human seeking advice to regulate its dilemma. In the other case, the human feels cause to either query some aspect of the robot's state (e.g. health, trajectory, speed) or present advice to the robot that is challenged against the robot's existing tactical approach to the higher-level strategy. The main inputs for adaptive collaborative control are a human-initiated dialogue based command or value presented by either a human or robotic element. 
The inputs used in the system models serve as the starting point for the collaboration. A number of ways are available to gather observational data for use in functional models. The easiest method to gather observational data is simple human observation of the robotic system. Self-monitoring attributes such as built-in test (BIT) can provide regular reports on important system characteristics. A common approach to gather observations is to employ sensors throughout the robotic system. Vehicles operating in teleoperations have speedometers to indicate how fast they travel. Robotic systems with either stochastic or cyclic motion often employ accelerometers to note the forces exerted. GPS sensors provide a standardized data type that is used nearly universally for depicting location. Multi-sensor systems have been used to gather heterogeneous observational data for applications in path planning. Computation Adaptive collaborative control is most accurately modeled as a closed loop feedback control system. Closed loop feedback control describes the event where the outputs of a system from an input are used to influence the present or future behavior of the system. The feedback control model is governed by a set of equations that are used to predict the future state of the simuland and regulate its behavior. These equations – in conjunction with principles of control theory – are used to evolve physical operations of the simuland to include, but not limited to: dialogue, path planning, motion, monitoring, and lifting objects over time. Many times, these equations are modeled as nonlinear partial differential equations over a continuous time domain. Due to their complexity, powerful computers are necessary to implement these models. A consequence of using computers to simulate these models is that continuous systems cannot be fully calculated. Instead, numerical solutions, such as the Runge–Kutta methods, are utilized to approximate these continuous models. These equations are initialized from the response of one or more sources and rates of changes and outputs are calculated. These rates of changes predict the states of the simuland a short time in the future. The time increment for this prediction is called a time step. These new states are applied to the model to determine the new rates of changes and observational data. This behavior is continued until the desired number of iterations is completed. In the event a future state violates or comes within a tolerance of the violation the simuland will confer with its human counterpart seeking advice on how to proceed from that point. The outputs, or observational data, are used by the human operators to determine what they believe is the best course of action for the simuland. Their commands are fed with the input into the control system and assessed regarding its effectiveness in resolving the issues. If the human commands are determined to be valuable, the simuland will adjust its control input to what the human suggested. If the human's commands are determined to be unbeneficial, malicious, or non-existent, the model will seek its own correction approach. Domain and Codomain The domain for the models used to conduct adaptive collaborative control is commands, queries, and responses from the human operator at the finite-state machine level. Commands from the human operator allow the agent to be provided with additional input in its decision-making process. 
This information is particularly beneficial when the human is a subject matter expert or the human is aware of how to reach an overarching goal when the agent is focused on only one aspect of the entire problem. Queries from the human are used to gather status information on either support functions of the agent or to determine progress on missions. Many times the robot's response serves as precursor information for issuance of a command as human assistance to the agent. Responses from the human operator are initiated by queries from the agent and feedback into the system to provide additional input to potentially regulate an action or set of actions from the agent. At the functional model level, the system has translated all accepted commands from the human into control inputs used to carry out tasks defined to the agent. Due to the autonomous nature of the simuland, input from the agent is being fed into the machine to operate sustaining functions and tasking that the human operator has ignored or answered to an insufficient manner. The codomain for the models that utilize adaptive collaborative control are queries, information statements, and responses from the agent. Queries and information statements are elements of the dialogue exchange at the finite-state machine level. Queries from the agent are the system's way of soliciting a response from a human operator. This is particularly important when the agent is physically stuck or at a logical impasse. The types of queries the agent can ask must be pre-defined by the modeler. The frequency and detail associated with a particular query depends on the expertise of the human operator or more accurately the expertise of the human operator identified to the agent. When the agent responds it will send an information statement to the human operator. This statement provides a brief description on what the adaptive collaborative control system decided. At the functional model level, the action associated with the information statement is carried out. Applications Vehicle teleoperation Vehicle teleoperation has been around for many years. Early adaptations of vehicle teleoperations were robotic vehicles that were controlled continuously by human operators. Many of these systems were operated with line-of-sight RF communications and are now regarded as toys for children. Recent developments in the area of unmanned systems have brought a measure of autonomy to the robots. Adaptive collaborative control offers a shared mode of control where robotic vehicles and humans exchange ideas and advice regarding the best decisions to make on a route following and obstacle avoidance. This shared mode of operation mitigates problems of humans remotely operating in hazardous environments with poor communications and limited performance when humans have continuous, direct control. For vehicle teleoperations, robots will query humans to receive input on decisions that affect their tasks or when presented with safety-related issues. This dialogue is presented through an interface module that also allows the human operation to view the impact of the dialogue. In addition, this interface module allows the human operator to view what the robot's sensors capture in order to initiate commands or inquiries as necessary. Fault Tolerant System In practice, there are cases where multiple subsystems work together to achieve a common goal. This is a fairly common practice for reliability engineering. 
This technique involves systems working together collaboratively and the reliable operation of the overarching system is an important issue. Fault tolerant strategies are combined with the subsystems to form a fault tolerant collaborative system. A direct application is the case where two robotic manipulators work together to grasp a common object. For these systems, it is important that when one subsystem becomes faulty, the healthy subsystem reconfigures itself to operate alone to ensure the whole system can still perform its operations until the other subsystem is repaired. In this case, the subsystems create a dialogue between themselves to determine one another's status. In the event of one system starting to exhibit numerous or dangerous faults the secondary subsystem takes over the operation until the faulty system can be repaired. Levels of Autonomy Four levels of autonomy have been devised to serve as a baseline for human-robot interactions that included adaptive collaborative control. The four levels, ranging from full manual to fully autonomous, are: tele mode, safe mode, shared mode, and autonomous mode. Adaptive collaborative controllers typically range from shared mode to autonomous mode. The two modes of interest are: Shared mode – robots can relieve the operator of the burden of direct control, using reactive navigation to find a path based on their perception of the environment. Shared mode provides for a dynamic allocation of roles and responsibilities. The robot accepts varying degrees of operator intervention and supports dialogue through the use of a finite number of scripted suggestions (e.g. “Path blocked! Continue left or right?”) and other text messages that appear within the graphical user interface. Autonomous mode – robots self-regulate high-level tasks such as patrol, search region or follow path. In this mode, the only user intervention occurs at the tasking level, i.e. the robot manages all decision-making and navigation. Limitations Like many other control strategies, adaptive collaborative control has limits to its capabilities. Although the adaptive collaborative control allows for many tasks to be automated and other predefined cases to query the human operator, unstructured decision making remains the domain of humans, especially when common sense is required. Particularly, robots possess poor judgment at high-level perceptual functions, including object recognition and situation assessment. A high number of tasks or a particular task that is very involved may create many questions, thereby increasing the complexity of the dialogue. This complexity to the dialogue in turn adds complexity to the system design. To retain its adaptive nature, the flow of control and information through the simuland will vary with time and events. This dynamic makes debugging, verification, and validation difficult because it is harder to precisely identify an error condition or duplicate a failure situation. This becomes particularly problematic if the system must operate in a regulated facility, such as a nuclear power plant or waste water facility. Issues that affect human-based teams also encumber adaptive collaborative controlled systems. In both cases, teams are required to coordinate activities, exchange information, communicate effectively, and minimize the potential for interference. Other factors that affect teams include resource distribution, timing, sequencing, progress monitoring, and procedure maintenance. 
Collaboration involves that all partners exhibit trust in the other collaborators and understand the other. To do so, each collaborator needs to have an accurate idea of what the other is capable of doing and how they will carry out an assignment. In some cases, the agent may have to weigh the responses from a human and the human must believe in the decisions a robot makes. References Robot control Collaboration
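The query-and-regulate cycle described in the Computation and Domain and Codomain sections can be illustrated with a deliberately simplified sketch. The code below is not drawn from any of the cited implementations; the toy dynamics, the constraint, the tolerance, and the advice-handling policy are all invented placeholders chosen only to make the control flow concrete: the agent propagates its state, checks the prediction against a higher-level constraint, and consults the human only when the prediction nears a violation.

```python
# Purely illustrative sketch of an adaptive collaborative control loop.
# All names, dynamics, and thresholds here are assumptions for demonstration.

def propagate(state, control, dt):
    """Toy one-dimensional dynamics: a single explicit-Euler step."""
    return state + control * dt

def near_violation(state, limit, tolerance):
    """True when the (predicted) state is at or near the allowed bound."""
    return abs(state) >= limit - tolerance

def run(initial_state, control, human_advisor, limit=10.0, tolerance=1.0,
        dt=0.1, steps=100):
    state, history = initial_state, []
    for _ in range(steps):
        predicted = propagate(state, control, dt)
        if near_violation(predicted, limit, tolerance):
            advice = human_advisor(predicted, control)   # dialogue: query the human
            trial = propagate(state, advice, dt) if advice is not None else None
            if trial is not None and not near_violation(trial, limit, tolerance):
                control = advice              # useful advice is adopted
            else:
                control = -0.5 * control      # otherwise the agent applies its own correction
        state = propagate(state, control, dt)
        history.append(state)
    return history

# A "human" advisor that, when consulted, recommends halving the commanded rate.
trace = run(0.0, 1.5, lambda predicted, u: 0.5 * u)
```

The essential point mirrored from the description above is that the human is consulted only at the tactical level, and unhelpful or absent advice leaves the agent free to fall back on its own correction.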
Adaptive collaborative control
[ "Engineering" ]
3,268
[ "Robotics engineering", "Robot control" ]
33,883,830
https://en.wikipedia.org/wiki/Metamorphic%20testing
Metamorphic testing (MT) is a property-based software testing technique, which can be an effective approach for addressing the test oracle problem and test case generation problem. The test oracle problem is the difficulty of determining the expected outcomes of selected test cases or to determine whether the actual outputs agree with the expected outcomes. Metamorphic relations (MRs) are necessary properties of the intended functionality of the software, and must involve multiple executions of the software. Consider, for example, a program that implements sin x correct to 100 significant figures; a metamorphic relation for sine functions is "sin (π − x) = sin x". Thus, even though the expected value of sin x1 for the source test case x1 = 1.234 correct to the required accuracy is not known, a follow-up test case x2 = π − 1.234 can be constructed. We can verify whether the actual outputs produced by the program under test from the source test case and the follow-up test case are consistent with the MR in question. Any inconsistency (after taking rounding errors into consideration) indicates a failure of the program, caused by a fault in the implementation. MRs are not limited to programs with numerical inputs or equality relations. As an example, when testing a booking website, a web search for accommodation in Sydney, Australia, returns 1,671 results; are the results of this search correct and complete? This is a test oracle problem. Based on a metamorphic relation, we may filter the price range or star rating and apply the search again; it should return a subset of the previous results. A violation of this expectation would similarly reveal a failure of the system. Metamorphic testing was invented by T.Y. Chen in a technical report in 1998. Since then, more than 150 international researchers and practitioners have applied the technique to real-life applications. Some examples include web services, computer graphics, embedded systems, simulation and modeling, machine learning, decision support, bioinformatics, components, numerical analysis, and compilers. The first major survey of the field of MT was conducted in 2016. It was followed by another major survey in 2018, which highlights the challenges and opportunities and clarifies common misunderstandings. Although MT was initially proposed as a software verification technique, it was later developed into a paradigm that covers verification, validation, and other types of software quality assessment. MT can be applied independently, and can also be combined with other static and dynamic software analysis techniques such as proving and debugging. In August 2018, Google acquired GraphicsFuzz, a startup from Imperial College London, to apply metamorphic testing to graphics device drivers for Android smartphones. See also Mutation testing Property testing QuickCheck References External links Software testing
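The sine example above translates directly into a small test harness. The sketch below is illustrative only: the function names, the set of source inputs, and the tolerance are invented for demonstration and are not part of any cited system; the implementation under test is simply the standard library sine standing in for a black box.

```python
import math

def sin_under_test(x):
    """Stand-in for the implementation being tested (here just math.sin,
    but imagine a black box whose exact outputs have no easy oracle)."""
    return math.sin(x)

def metamorphic_sine_test(source_inputs, tolerance=1e-9):
    """Check the metamorphic relation sin(pi - x) == sin(x) on follow-up cases."""
    failures = []
    for x in source_inputs:
        follow_up = math.pi - x          # construct the follow-up test case
        if abs(sin_under_test(follow_up) - sin_under_test(x)) > tolerance:
            failures.append(x)           # relation violated => implementation fault
    return failures

print(metamorphic_sine_test([1.234, 0.5, 2.0, -3.1]))   # expect [] for a correct sine
```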
Metamorphic testing
[ "Engineering" ]
557
[ "Software engineering", "Software testing" ]
33,885,598
https://en.wikipedia.org/wiki/Volcano%20Ranch%20experiment
The Volcano Ranch experiment was an array of particle detectors in Volcano Ranch, New Mexico, used to measure ultra-high-energy cosmic rays. The array was built by John Linsley and Livio Scarsi in 1959. On February 22, 1962, Linsley observed an air shower at Volcano Ranch created by a primary particle with an energy greater than 10^20 eV, the highest-energy cosmic ray particle ever detected at the time. Linsley continued to operate Volcano Ranch until 1978, when it was closed due to lack of funding. References Cosmic-ray experiments
Volcano Ranch experiment
[ "Physics", "Astronomy" ]
112
[ "Astronomy stubs", "Astrophysics", "Astrophysics stubs", "Particle physics", "Particle physics stubs" ]
33,888,206
https://en.wikipedia.org/wiki/Crisis%20%28dynamical%20systems%29
In applied mathematics and astrodynamics, in the theory of dynamical systems, a crisis is the sudden appearance or disappearance of a strange attractor as the parameters of a dynamical system are varied. This global bifurcation occurs when a chaotic attractor comes into contact with an unstable periodic orbit or its stable manifold. As the orbit approaches the unstable orbit it will diverge away from the previous attractor, leading to a qualitatively different behaviour. Crises can produce intermittent behaviour. Grebogi, Ott, Romeiras, and Yorke distinguished between three types of crises: In the first type, a boundary or exterior crisis, the attractor is suddenly destroyed as the parameters are varied. In the postbifurcation state the motion is transiently chaotic, moving chaotically along the former attractor before being attracted to a fixed point, periodic orbit, quasiperiodic orbit, another strange attractor, or diverging to infinity. In the second type of crisis, an interior crisis, the size of the chaotic attractor suddenly increases. The attractor encounters an unstable fixed point or periodic solution that is inside the basin of attraction. In the third type, an attractor merging crisis, two or more chaotic attractors merge to form a single attractor as the critical parameter value is passed. Note that the reverse case (sudden appearance, shrinking or splitting of attractors) can also occur. The latter two crises are sometimes called explosive bifurcations. While crises are "sudden" as a parameter is varied, the dynamics of the system over time can show long transients before orbits leave the neighbourhood of the old attractor. Typically there is a time constant τ for the length of the transient that diverges as a power law, τ ≈ |p − pc|^(−γ), near the critical parameter value pc. The exponent γ is called the critical crisis exponent. There also exist systems where the divergence is stronger than a power law, so-called super-persistent chaotic transients. See also Intermittency Bifurcation diagram Phase portrait References External links Scholarpedia: Crises Dynamical systems Nonlinear systems Bifurcation theory
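A numerical feel for the scaling of transient lifetimes can be obtained from a standard textbook example not discussed in the article above: the logistic map x_{n+1} = r x_n (1 − x_n), whose chaotic attractor is destroyed in a boundary crisis at r = 4, after which typical orbits escape the unit interval following a chaotic transient. The sketch below simply measures how long orbits survive for parameters slightly past the crisis; the parameter values, iteration cap, and averaging scheme are arbitrary choices made only for illustration.

```python
import random

def transient_length(r, x0, max_iter=1_000_000):
    """Number of logistic-map iterations before the orbit leaves [0, 1]."""
    x = x0
    for n in range(max_iter):
        x = r * x * (1.0 - x)
        if x < 0.0 or x > 1.0:       # orbit has escaped the former attractor
            return n
    return max_iter                   # did not escape within the cap

def mean_transient(r, samples=200, seed=0):
    rng = random.Random(seed)
    return sum(transient_length(r, rng.random()) for _ in range(samples)) / samples

# The mean transient time grows as r approaches the crisis value r_c = 4 from above,
# roughly following a power law in |r - r_c|.
for eps in (1e-2, 1e-3, 1e-4):
    print(eps, mean_transient(4.0 + eps))
```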
Crisis (dynamical systems)
[ "Physics", "Mathematics" ]
439
[ "Bifurcation theory", "Nonlinear systems", "Mechanics", "Dynamical systems" ]
33,888,515
https://en.wikipedia.org/wiki/Tennis%20racket%20theorem
The tennis racket theorem, or intermediate axis theorem, is a kinetic phenomenon of classical mechanics which describes the movement of a rigid body with three distinct principal moments of inertia. It has also been dubbed the Dzhanibekov effect, after Soviet cosmonaut Vladimir Dzhanibekov, who noticed one of the theorem's logical consequences whilst in space in 1985. The effect was known for at least 150 years prior, having been described by Louis Poinsot in 1834 and included in standard physics textbooks such as Classical Mechanics by Herbert Goldstein throughout the 20th century. The theorem describes the following effect: rotation of an object around its first and third principal axes is stable, whereas rotation around its second principal axis (or intermediate axis) is not. This can be demonstrated by the following experiment: Hold a tennis racket at its handle, with its face being horizontal, and throw it in the air such that it performs a full rotation around its horizontal axis perpendicular to the handle (ê2 in the diagram), and then catch the handle. In almost all cases, during that rotation the face will also have completed a half rotation, so that the other face is now up. By contrast, it is easy to throw the racket so that it will rotate around the handle axis (ê1) without accompanying half-rotation around another axis; it is also possible to make it rotate around the vertical axis perpendicular to the handle (ê3) without any accompanying half-rotation. The experiment can be performed with any object that has three different moments of inertia, for instance with a (rectangular) book, remote control, or smartphone. The effect occurs whenever the axis of rotation differs – even only slightly – from the object's second principal axis; air resistance or gravity are not necessary. Theory The tennis racket theorem can be qualitatively analysed with the help of Euler's equations. Under torque-free conditions, they take the following form: I1 dω1/dt = (I2 − I3) ω2 ω3 (1); I2 dω2/dt = (I3 − I1) ω3 ω1 (2); I3 dω3/dt = (I1 − I2) ω1 ω2 (3). Here I1, I2, I3 denote the object's principal moments of inertia, and we assume I1 > I2 > I3. The angular velocities around the object's three principal axes are ω1, ω2, ω3 and their time derivatives are denoted by dω1/dt, dω2/dt, dω3/dt. Stable rotation around the first and third principal axis Consider the situation when the object is rotating around the axis with moment of inertia I1. To determine the nature of equilibrium, assume small initial angular velocities along the other two axes. As a result, according to equation (1), dω1/dt is very small. Therefore, the time dependence of ω1 may be neglected. Now, differentiating equation (2) and substituting dω3/dt from equation (3) gives I2 d²ω2/dt² = (I3 − I1)(I1 − I2) ω1² ω2 / I3, so that d²ω2/dt² is a negative multiple of ω2, because I3 − I1 < 0 and I1 − I2 > 0. Note that ω2 is being opposed and so rotation around this axis is stable for the object. Similar reasoning gives that rotation around the axis with moment of inertia I3 is also stable. Unstable rotation around the second principal axis Now apply the same analysis to the axis with moment of inertia I2. This time dω2/dt is very small. Therefore, the time dependence of ω2 may be neglected. Now, differentiating equation (1) and substituting dω3/dt from equation (3) gives I1 d²ω1/dt² = (I2 − I3)(I1 − I2) ω2² ω1 / I3. Note that ω1 is not opposed (and therefore will grow) and so rotation around the second axis is unstable. Therefore, even a small disturbance, in the form of a very small initial value of ω1 or ω3, causes the object to 'flip'.
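The instability can also be seen numerically. The following sketch integrates the torque-free Euler equations above with a basic fixed-step Runge–Kutta scheme; the moments of inertia (chosen with I1 > I2 > I3 as in the discussion above), the initial condition, and the step size are arbitrary illustrative choices, and the code is not taken from any reference cited in the article.

```python
# Integrate the torque-free Euler equations for principal moments I1 > I2 > I3.
I1, I2, I3 = 3.0, 2.0, 1.0

def euler_rhs(w):
    """Right-hand side of equations (1)-(3), solved for the angular accelerations."""
    w1, w2, w3 = w
    return ((I2 - I3) * w2 * w3 / I1,
            (I3 - I1) * w3 * w1 / I2,
            (I1 - I2) * w1 * w2 / I3)

def rk4_step(w, dt):
    """One classical fourth-order Runge-Kutta step."""
    def add(a, b, s):
        return tuple(x + s * y for x, y in zip(a, b))
    k1 = euler_rhs(w)
    k2 = euler_rhs(add(w, k1, dt / 2))
    k3 = euler_rhs(add(w, k2, dt / 2))
    k4 = euler_rhs(add(w, k3, dt))
    return tuple(x + dt / 6 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(w, k1, k2, k3, k4))

# Spin almost exactly about the intermediate (second) axis, with a tiny perturbation.
w = (1e-4, 1.0, 1e-4)
dt, steps = 0.01, 20000
for n in range(steps):
    w = rk4_step(w, dt)
    if n % 2000 == 0:
        print(f"t = {n * dt:7.2f}  w2 = {w[1]:+.4f}")   # w2 intermittently reverses sign
```

The printed component along the intermediate axis dwells near +1 for a while, then rapidly swings to near −1 and back, which is exactly the flipping behaviour described qualitatively in the text.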
Matrix analysis If the object is mostly rotating along its third axis, so , we can assume does not vary much, and write the equations of motion as a matrix equation:which has zero trace and positive determinant, implying the motion of is a stable rotation around the origin—a neutral equilibrium point. Similarly, the point is a neutral equilibrium point, but is a saddle point. Geometric analysis During motion, both the energy and angular momentum-squared are conserved, thus we have two conserved quantities:and so for any initial condition , the trajectory of must stay on the intersection curve between two ellipsoids defined by This is shown on the animation to the left. By inspecting Euler's equations, we see that implies that two components of are zero—that is, the object is exactly spinning around one of the principal axes. In all other situations, must remain in motion. By Euler's equations, if is a solution, then so is for any constant . In particular, the motion of the body in free space (obtained by integrating ) is exactly the same, just completed faster by a ratio of . Consequently, we can analyze the geometry of motion with a fixed value of , and vary on the fixed ellipsoid of constant squared angular momentum. As varies, the value of also varies—thus giving us a varying ellipsoid of constant energy. This is shown in the animation as a fixed orange ellipsoid and increasing blue ellipsoid. For concreteness, consider , then the angular momentum ellipsoid's major axes are in ratios of , and the energy ellipsoid's major axes are in ratios of . Thus the angular momentum ellipsoid is both flatter and sharper, as visible in the animation. In general, the angular momentum ellipsoid is always more "exaggerated" than the energy ellipsoid. Now inscribe on a fixed ellipsoid of its intersection curves with the ellipsoid of , as increases from zero to infinity. We can see that the curves evolve as follows: For small energy, there is no intersection, since we need a minimum of energy to stay on the angular momentum ellipsoid. The energy ellipsoid first intersects the momentum ellipsoid when , at the points . This is when the body rotates around its axis with the largest moment of inertia. They intersect at two cycles around the points . Since each cycle contains no point at which , the motion of must be a periodic motion around each cycle. They intersect at two "diagonal" curves that intersects at the points , when . If starts anywhere on the diagonal curves, it would approach one of the points, distance exponentially decreasing, but never actually reach the point. In other words, we have 4 heteroclinic orbits between the two saddle points. They intersect at two cycles around the points . Since each cycle contains no point at which , the motion of must be a periodic motion around each cycle. The energy ellipsoid last intersects the momentum ellipsoid when , at the points . This is when the body rotates around its axis with the smallest moment of inertia. The tennis racket effect occurs when is very close to a saddle point. The body would linger near the saddle point, then rapidly move to the other saddle point, near , linger again for a long time, and so on. The motion repeats with period . The above analysis is all done in the perspective of an observer which is rotating with the body. An observer watching the body's motion in free space would see its angular momentum vector conserved, while both its angular velocity vector and its moment of inertia undergo complicated motions in space. 
At the beginning, the observer would see both mostly aligned with the second major axis of . After a while, the body performs a complicated motion and ends up with , and again both are mostly aligned with the second major axis of . Consequently, there are two possibilities: either the rigid body's second major axis is in the same direction, or it has reversed direction. If it is still in the same direction, then viewed in the rigid body's reference frame are also mostly in the same direction. However, we have just seen that and are near opposite saddle points . Contradiction. Qualitatively, then, this is what an observer watching in free space would observe: The body rotates around its second major axis for a while. The body rapidly undergoes a complicated motion, until its second major axis has reversed direction. The body rotates around its second major axis again for a while. Repeat. This can be easily seen in the video demonstration in microgravity. With dissipation When the body is not exactly rigid, but can flex and bend or contain liquid that sloshes around, it can dissipate energy through its internal degrees of freedom. In this case, the body still has constant angular momentum, but its energy would decrease, until it reaches the minimal point. As analyzed geometrically above, this happens when the body's angular velocity is exactly aligned with its axis of maximal moment of inertia. This happened to Explorer 1, the first satellite launched by the United States in 1958. The elongated body of the spacecraft had been designed to spin about its long (least-inertia) axis but refused to do so, and instead started precessing due to energy dissipation from flexible structural elements. In general, celestial bodies large or small would converge to a constant rotation around its axis of maximal moment of inertia. Whenever a celestial body is found in a complex rotational state, it is either due to a recent impact or tidal interaction, or is a fragment of a recently disrupted progenitor. See also References External links on Mir International Space Station Louis Poinsot, Théorie nouvelle de la rotation des corps, Paris, Bachelier, 1834, 170 p. : historically, the first mathematical description of this effect. - intuitive video explanation by Matt Parker The "Dzhanibekov effect" - an exercise in mechanics or fiction? Explain mathematically a video from a space station, The Bizarre Behavior of Rotating Bodies, Veritasium Classical mechanics Physics theorems Juggling
Tennis racket theorem
[ "Physics" ]
1,944
[ "Equations of physics", "Mechanics", "Classical mechanics", "Physics theorems" ]
33,889,100
https://en.wikipedia.org/wiki/Chronophobia
Chronophobia, also known as prison neurosis, is considered an anxiety disorder describing the fear of time and of time moving forward, and it is commonly seen in prison inmates. In addition to prison inmates, chronophobia has also been identified in individuals experiencing quarantine due to COVID-19. As time is understood as a specific concept, chronophobia is categorized as a specific phobia. The term chronophobia comes from the Greek "chronos", meaning time, and "phobos", meaning fear. Symptoms Chronophobia manifests in different ways, since every person who experiences this disorder suffers from different symptoms. Inmates experience a constant psychological discomfort characterized by anxiety, panic, and claustrophobia brought on by the duration and immensity of time. The main symptom of chronophobia is a sense of impending danger of loss and the accompanying desire to keep the memory of what happened. The most common signs include procrastination, poor planning of the working day, the inability to say "no", distraction, and trying to do too much at one time. People also report that they begin to think that nothing is on time and that they are afraid of not completing their tasks on time. Reasons Risk factors and causes Some people are more likely to have chronophobia. The risk is increased for elderly or ill people, because persons who are older or who have terminal medical conditions are more likely to be overpowered by fear of approaching death. They may become fixated on the number of days they have left, which can cause severe anxiety. People in prison are also more likely to develop chronophobia. Prison neurosis is another name for this illness. Inmates, particularly those serving extended sentences, often become fixated on the passage of time. They may believe that time is passing either too slowly or too rapidly, and they frequently count down the days until they are released. They may also experience claustrophobic feelings while in prison. People who have undergone a traumatic experience are also more likely to suffer from chronophobia. They could develop it as a result of PTSD. Many people developed chronophobia after being quarantined due to COVID-19. They became fixated on keeping track of time or felt helpless in the face of the passage of time. People who have a history of mental illness make up the final group. Those with a generalized anxiety disorder, a history of panic attacks or panic disorder, or other phobias are more at risk. People with depression or a substance addiction problem are also more likely to develop a phobia. Chronophobia and other phobias are caused by a combination of environmental and genetic factors. Chronophobia can develop as a result of being imprisoned, having a fatal illness, or surviving a traumatic experience. People who suffer from anxiety or from mental illness are more likely to develop phobias. Mental disease, mood problems, and phobias are often passed down through generations, so people with a family history of such illnesses are at a higher risk. Triggers Chronophobia causes worry, dread, and anxiety for a variety of reasons. Holidays, birthdays, graduations, and anniversaries can all be triggers for this phobia. When triggered, sufferers commonly report the following concerns: They are powerless to stop time from passing them by. Their own passing. They may also be terrified of death or dying (thanatophobia). Time feeling "immense" (very big) or overwhelming. Time moving too slowly.
Treatments Cognitive behavioral therapy Cognitive behavioral therapy (CBT) is a form of talking therapy where the aim is to correct maladaptive thoughts that have a significant negative effect on a person's life. CBT can be used for the treatment of specific phobias. The therapist creates a personalized plan that suits each phobia and person the best. The therapy is goal-oriented and structured and aims to change negative thought patterns during emotional distress and help patients gain information about how their thoughts affect their actions. The goal of the therapy in the case of chronophobia is to gain control over the anxiety and the behavioral patterns created by the overpowering fear responses. Psychopharmacology In some cases, anxiety medication can lead to milder symptoms in phobias. However, when compared to behavioral therapy the results are often less efficient. Short-term pharmacological options often get paired with cognitive behavioral therapy. Long-term plans are rare and are often linked to cases of adverse drug reactions. Specific phobic disorders like chronophobia can be treated with benzodiazepines, selective serotonin reuptake inhibitors (SSRIs), serotonin noradrenaline reuptake inhibitors (SNRIs), monoamine oxidase inhibitors (MAOIs) and β-adrenergic blockers. Hypnotherapy Hypnotherapy strives for a deep level of awareness through focused attention and a structured relaxation process. During the naturally occurring state of heightened awareness or "trance", qualified therapists can help people tune in on the targeted negative or fearful thoughts. It can be used to shape the understanding of the anxiety symptoms connected to chronophobia by altering different structures in memory and perception. Coping methods Relaxation techniques The aim of relaxation techniques is to decrease an individual's physical and psychological anxiety by increasing the individual's sense of calm. Muscle tension and an increased heart rate are physical responses to anxiety, and emotions are psychological responses to anxiety. Relaxation techniques bring an individual's attention to their breathing rhythm which can have a relaxing effect and decrease their chronophobia related anxiety. It also brings attention to their body which has them focusing on the present. Mindfulness techniques Mindfulness techniques can be useful in reducing one's anxiety of passing time because it refocuses one's attention on the present. Meditation is a practice that focuses on sensations, objects, feelings, thoughts and breathing techniques. Mindfulness meditation consists of two parts. First, one pays attention to the present and specifically focusing on one's feelings, thoughts, and sensations. Secondly, one accepts their thoughts and feelings without reaction or judgement before letting them go. The objective of mindfulness meditation is to reduce an individual's anxiety of passing time by refocusing their attention on their breathing and their 5 senses, both of which are neutral topics without emotional connections to chronophobia. Yoga is a combination of exercise and meditation and is often used within mindfulness meditation. Individuals focus on their breathing and balance as they move their bodies into different positions. Like mindfulness meditation yoga brings attention to one's thoughts, feelings and sensations. It promotes a healthy mental state in individual's with chronophobia related stress and anxiety by switching their focus to the exercises. 
Support groups A support group consists of several individuals facing a similar struggle, such as chronophobia, and a leader. Typically the support group is led by someone who is not currently struggling with the same problems as the other members. The goal of a support group is to overcome the shared problem, or, if that is not possible, to find ways of coping with it. Within a chronophobia support group, members can offer and gain advice on how to cope with chronophobia. Members also have the opportunity to share their own experiences and hear other individuals' experiences with chronophobia, which can help them feel less alone in their struggle. Epidemiology The population with the highest prevalence of chronophobia is prisoners, which is why the condition is also sometimes called "prison neurosis". Elderly people and people facing a terminal illness who worry that they will soon die may also experience this fear. People who have survived extreme trauma, such as a natural disaster or a shipwreck, are also at risk if they remain anxious and uncertain about the passing of time and how much they have left. See also List of phobias Phobia Anxiety disorder Specific phobia Neurosis References External links Specific phobia at NIH Anxiety Disorder at MedlinePlus Anxiety and support groups at Curlie Phobia at Medscape Phobias Time
Chronophobia
[ "Physics", "Mathematics" ]
1,708
[ "Physical quantities", "Time", "Quantity", "Spacetime", "Wikipedia categories named after physical quantities" ]
33,889,528
https://en.wikipedia.org/wiki/Paleolightning
Paleolightning refers to the remnants of ancient lightning activity studied in fields such as historical geology, geoarchaeology, and fulminology. Paleolightning provides tangible evidence for the study of lightning activity in Earth's past and the roles lightning may have played in Earth's history. Some studies have speculated that lightning activity played a crucial role in the development of not only Earth's early atmosphere but also early life. Lightning, a non-biological process, has been found to produce biologically useful material through the oxidation and reduction of inorganic matter. Research on the impact of lightning on Earth's atmosphere continues today, especially with regard to feedback mechanisms of lightning-produced nitrate compounds on atmospheric composition and global average temperatures. Detecting lightning activity in the geologic record can be difficult, given the instantaneous nature of lightning strikes in general. However, fulgurite, a glassy tube-like, crust-like, or irregular mineraloid that forms when lightning fuses soil, quartz sands, clay, rock, biomass, or caliche, is prevalent in electrically active regions around the globe and provides evidence of not only past lightning activity, but also patterns of convection. Since lightning channels carry an electric current to the ground, lightning can produce magnetic fields as well. While lightning-magnetic anomalies can provide evidence of lightning activity in a region, these anomalies are often problematic for those examining the magnetic record of rock types because they disguise the natural magnetic fields present. Lightning and early Earth The atmospheric composition of early Earth (the first billion years) was drastically different from its current state. Initially, hydrogen and helium compounds dominated the atmosphere. However, given the relatively small size of these elements and the warmer temperature of Earth compared to other planets at the time, most of these lighter compounds escaped, leaving behind an atmosphere composed mainly of methane, nitrogen, and ammonia, with small concentrations of hydrogen compounds and other gases. The atmosphere was transitioning from a reducing atmosphere (an atmosphere that inhibits oxidation) to an oxidizing one, similar to our current atmosphere. The origin of life on Earth has been a matter of speculation for quite some time. Living things did not spontaneously appear, so some sort of biological or even non-biological process must have been responsible for the generation of life. Lightning is a non-biological process, and many have speculated that lightning was present on early Earth. One of the most famous studies that investigated lightning on the early Earth was the Miller–Urey experiment. Miller–Urey experiment The Miller–Urey experiment sought to recreate the early Earth atmosphere within a laboratory setting to determine the chemical processes that ultimately led to life on Earth. The experiment was based on Oparin's hypothesis, which assumed that some organic matter could be created from inorganic material given a reducing atmosphere. Using a mixture of water, methane, ammonia, and hydrogen in glass tubes, Miller and Urey replicated the effects of lightning on the mixture using electrodes. At the conclusion of the experiment, as much as 15 percent of the carbon from the mixture had formed organic compounds, while 2 percent of the carbon had formed amino acids, which are necessary building blocks of living organisms.
Volcanic lightning on early Earth The actual composition of the atmosphere of the early Earth is an area of great debate. Varying amounts of certain gaseous constituents can greatly impact the overall effect of a particular process, which includes non-biological processes such as the buildup of charge in thunderstorms. It has been argued that volcano-induced lightning in the early stages of Earth's existence, because the volcanic plume was composed of additional "reducing gases", was more effective at stimulating the oxidation of organic material to accelerate the production of life. In the case of volcanic lightning, the lightning discharge almost exclusively occurs directly within the volcanic plume. Since this process occurs fairly close to ground level, it has been suggested that volcanic lightning contributed to the generation of life to a greater extent than lightning produced within clouds that would lower positive or negative charge from a cloud to the ground. Hill (1992) quantified this enhanced contribution by examining estimated hydrogen cyanide (HCN) concentrations from volcanic lightning and "general lightning". Results showed that HCN concentrations for volcanic lightning were an order of magnitude larger than "general lightning". Hydrogen cyanide is yet another compound that has been linked to the generation of life on Earth. However, given that the intensity and amount of volcanic activity during the early stages of Earth's development is not fully understood, hypotheses regarding past volcanic activity (e.g., Hill, 1992) are usually based on present-day observed volcanic activity. Nitrogen fixation and lightning Nitrogen, the most abundant gas in our atmosphere, is crucial for life and a key component to various biological processes. Biologically usable forms of nitrogen, such as nitrates and ammonia, arise via biological and non-biological processes through nitrogen fixation. One example of a non-biological process responsible for nitrogen fixation is lightning. Lightning strikes are short-lived, high-intensity electrical discharges that can reach temperatures five times hotter than the surface of the Sun. As a result, as a lightning channel travels through the air, ionization occurs, forming nitrogen-oxide (NOx) compounds within the lightning channel. Global production as a result of lightning is around 1–20 Tg N yr−1. Some studies have implied that lightning activity may be the "greatest contributor to the global nitrogen budget", even larger than the burning of fossil fuels. With anywhere between 1500 and 2000 thunderstorms and millions of lightning strikes occurring daily around the Earth, it is understandable that lightning activity plays a vital role in nitrogen fixation. While nitrogen oxide compounds are produced as a lightning channel travels toward the ground, some of those compounds are transferred to the geosphere via wet or dry deposition. Variations of nitrogen in terrestrial and oceanic environments impact primary production and other biological processes. Changes in primary production can impact not only the carbon cycle, but also the climate system. The lightning-biota climatic feedback The lightning-biota climatic feedback (LBF) is a negative feedback response to global warming on a time scale of hundreds or thousands of years, as a result of increased concentrations of nitrogen compounds from lightning activity deposited into biological ecosystems. 
A zero-dimension Earth conceptual model, which took into account global temperature, soil available nitrogen, terrestrial vegetation, and global atmospheric carbon dioxide concentration, was used to determine the response of global average temperatures to increased concentrations from lightning strikes. It was hypothesized that as a result of increasing global average temperatures, lightning production would increase because increased evaporation from oceans would promote enhanced convection. As a result of more numerous lightning strikes, nitrogen fixation would deposit more biologically useful forms of nitrogen into various ecosystems, encouraging primary production. Impacts on primary production would affect the carbon cycle, leading to a reduction in atmospheric carbon dioxide. A reduction in atmospheric carbon dioxide would result in a negative feedback, or cooling, of the climate system. Model results indicated that, for the most part, the lightning-biota climatic feedback retarded positive perturbations in atmospheric carbon dioxide and temperature back to an "equilibrium" state. Impacts of the lightning-biota climatic feedback on curbing anthropogenic influences on atmospheric carbon dioxide concentrations were investigated as well. Using current levels of atmospheric carbon dioxide and rates of increase of atmospheric carbon dioxide on a yearly basis based on the time of the article, the lightning-biota climatic feedback once again showed a cooling effect on global average temperatures, given an initial perturbation. Given the simplified nature of the model, several parameters (ozone produced by lightning, etc.) and other feedback mechanisms were neglected, so the significance of the results is still an area of discussion. Lightning in the geologic record Indicators of lightning activity in the geologic record are often difficult to decipher. For example, fossil charcoals from the Late Triassic could potentially be the result of lightning-induced wildfires. Even though lightning strikes are, for the most part, instantaneous events, evidence of lightning activity can be found in objects called fulgurites. Fulgurites Fulgurites (from the Latin fulgur, meaning "lightning") are natural tubes, clumps, or masses of sintered, vitrified, and/or fused soil, sand, rock, organic debris and other sediments that sometimes form when lightning discharges into ground. Fulgurites are classified as a variety of the mineraloid lechatelierite. Fulgurites have no fixed composition because their chemical composition is determined by the physical and chemical properties of material struck by lightning. When lightning strikes a grounding substrate, upwards of 100 million volts (100 MV) are rapidly discharged into the ground. This charge propagates into and rapidly vaporizes and melts silica-rich quartzose sand, mixed soil, clay, or other sediments. This results in the formation of hollow and/or branching assemblages of glassy, protocrystalline, and heterogeneously microcrystalline tubes, crusts, slags, and vesicular masses. Fulgurites are homologous to Lichtenberg figures, which are the branching patterns produced on surfaces of insulators during dielectric breakdown by high-voltage discharges, such as lightning. Fulgurites are indicative of thunderstorms; the distribution of fulgurites can hint at patterns of lightning strikes. Sponholz et al. (1993) studied fulgurite distributions along a north–south cross section in the south central Saharan Desert (Niger). 
The study found that newer fulgurite concentrations increased from north to south, which indicated not only a paleo-monsoon pattern, but also the demarcation for thunderstorms as they progressed from a northern line to a southern location over time. By examining the outcrops in which the fulgurite samples were found, Sponholz et al. (1993) could provide a relative date for the minerals. The fulgurite samples dated back approximately 15,000 years to the mid to upper Holocene. This finding was in agreement with the paleosols of the region, as this period of the Holocene was particularly wet. A wetter climate would suggest that the propensity for thunderstorms was probably elevated, which would result in larger concentrations of fulgurite. These results pointed to the fact that the climate with which the fulgurite was formed was significantly different from the present climate because the current climate of the Saharan Desert is arid. The approximate age of the fulgurite was determined using thermoluminescence (TL). Quartz sands can be used to measure the amount of radiation exposure, so if the temperature at which the fulgurite was formed is known, one could determine the relative age of the mineral by examining the doses of radiation involved in the process. Fulgurites also contain air bubbles. Given that the formation of fulgurite generally takes only about one second, and the process involved in the creation of fulgurite involves several chemical reactions, it is relatively easy to trap gases, such as , within the vesicles. These gases can be trapped for millions of years. Studies have shown that the gases within these bubbles can indicate the soil characteristics during the formation of the fulgurite material, which hint at the paleoclimate. Since fulgurite is almost entirely composed of silica with trace amounts of calcium and magnesium, an approximation of the total amount of organic carbon associated with that lightning strike can be made to calculate a carbon-to-nitrogen ratio to determine the paleoenvironment. Paleomagnetism When geologists study paleoclimate, an important factor to examine is the magnetic field characteristics of rock types to determine not only deviations of Earth's past magnetic field, but also to study possible tectonic activity that might suggest certain climate regimes. Evidence of lightning activity can often be found in the paleomagnetic record. Lightning strikes are the result of tremendous charge buildup in clouds. This excess charge is transferred to the ground via lightning channels, which carry a strong electric current. Because of the intensity of this electric current, when lightning hits the ground, it can produce a strong, albeit brief, magnetic field. Thus, as the electric current travels through soils, rocks, plant roots, etc., it locks a unique magnetic signature within these materials through a process known as lightning-induced remanent magnetization (LIRM). Evidence of LIRM is manifested in concentric magnetic field lines surrounding the location of the lightning strike point. LIRM anomalies normally occur close to the location of the lightning strike, usually encapsulated within several meters of the point of contact. The anomalies are generally linear or radial, which, just like actual lightning channels, branch out from a central point. It is possible to determine the intensity of the electric current from a lightning strike by examining the LIRM signatures. 
Since rocks and soils already carry a preexisting magnetization, the intensity of the electric current can be determined by examining the change between the "natural" magnetic field and the magnetic field induced by the lightning current, which generally acts parallel to the direction of the lightning channel. Another characteristic feature of an LIRM anomaly, compared to other magnetic anomalies, is that its intensity is generally much stronger. However, some have suggested that the anomalies, like other characteristics in the geologic record, might fade over time as the magnetic field redistributes. LIRM anomalies can often be problematic when examining the magnetic characteristics of rock types. LIRM anomalies can disguise the natural remanent magnetization (NRM) of the rocks in question because the subsequent magnetization caused by the lightning strike reconfigures the magnetic record. While investigating the soil attributes at the 30-30 Winchester archeological site in northeastern Wyoming to discern the daily activities of prehistoric people who had once occupied that region, David Maki noticed peculiar anomalies in the magnetic record that did not match the circular magnetic remanent features of the ovens used by these prehistoric groups for cooking and pottery. The LIRM anomaly was significantly bigger than the other magnetic anomalies and formed a dendritic structure. To test the validity of the assertion that the magnetic anomaly was indeed the result of lightning and not another process, Maki (2005) tested the soil samples against known standards indicative of LIRM anomalies developed by Dunlop et al. (1984), Wasilewski and Kletetschka (1999), and Verrier and Rochette (2002). These standards include, but are not limited to: 1) an average REM (the ratio of natural remanent magnetization to a laboratory standard remanence) greater than 0.2, and 2) an average Koenigsberger ratio (the ratio of natural remanent magnetization to the magnetization induced by Earth's present field). The findings indicated evidence of LIRM at the archaeological site. LIRM anomalies also complicated the determination of the relative location of the poles during the late Cretaceous from the magnetic field record of basaltic lava flows in Mongolia. The presence of LIRM-affected rocks was determined when calculated Koenigsberger ratios were drastically higher than those of other magnetic signatures in the region. References External links The Bibliography of Fulgurites Lightning Historical geology Archaeological science
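As a rough illustration of the REM and Koenigsberger-ratio screening criteria mentioned above, the following Python sketch flags hypothetical samples whose remanence ratios are consistent with LIRM. It is not the procedure used in the cited studies; the sample values, susceptibilities, the 40 A/m geomagnetic field strength, and the use of a saturation remanence as the laboratory standard are illustrative assumptions.

```python
# Minimal, illustrative screening of samples for lightning-induced
# remanent magnetization (LIRM) using the two criteria mentioned above.
# All numbers below are made up for demonstration.

def rem_ratio(nrm, sirm):
    """REM: natural remanent magnetization normalized by a laboratory
    standard remanence (here taken to be a saturation remanence, SIRM)."""
    return nrm / sirm

def koenigsberger_ratio(nrm, susceptibility, field_a_per_m):
    """Q: ratio of remanent magnetization to the magnetization induced
    by the ambient geomagnetic field (susceptibility * field)."""
    return nrm / (susceptibility * field_a_per_m)

# Hypothetical measurements: magnetizations in A/m, SI susceptibility,
# geomagnetic field expressed as H (~40 A/m at mid-latitudes, assumed).
samples = {
    "soil_near_strike": {"nrm": 2.0, "sirm": 5.0, "chi": 0.002},
    "background_soil":  {"nrm": 0.05, "sirm": 5.0, "chi": 0.002},
}

H_EARTH = 40.0  # A/m, illustrative value only

for name, s in samples.items():
    rem = rem_ratio(s["nrm"], s["sirm"])
    q = koenigsberger_ratio(s["nrm"], s["chi"], H_EARTH)
    flagged = rem > 0.2          # the REM criterion quoted in the text
    print(f"{name}: REM={rem:.2f}, Q={q:.1f}, LIRM candidate={flagged}")
```

In practice such screening would be applied to oriented laboratory measurements of real specimens rather than the invented numbers used here.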
Paleolightning
[ "Physics" ]
3,171
[ "Physical phenomena", "Electrical phenomena", "Lightning" ]
33,890,874
https://en.wikipedia.org/wiki/De%20novo%20transcriptome%20assembly
De novo transcriptome assembly is the de novo sequence assembly method of creating a transcriptome without the aid of a reference genome. Introduction As a result of the development of novel sequencing technologies, the years between 2008 and 2012 saw a large drop in the cost of sequencing. Per megabase and genome, the cost dropped to 1/100,000th and 1/10,000th of the price, respectively. Prior to this, only transcriptomes of organisms that were of broad interest and utility to scientific research were sequenced; however, the high-throughput sequencing (also called next-generation sequencing) technologies developed in the 2010s are both cost- and labor-effective, and the range of organisms studied via these methods is expanding. Transcriptomes have subsequently been created for chickpea, planarians, Parhyale hawaiensis, as well as the brains of the Nile crocodile, the corn snake, the bearded dragon, and the red-eared slider, to name just a few. Examining non-model organisms can provide novel insights into the mechanisms underlying the "diversity of fascinating morphological innovations" that have enabled the abundance of life on planet Earth. In animals and plants, the "innovations" that cannot be examined in common model organisms include mimicry, mutualism, parasitism, and asexual reproduction. De novo transcriptome assembly is often the preferred method for studying non-model organisms, since it is cheaper and easier than building a genome, and reference-based methods are not possible without an existing genome. The transcriptomes of these organisms can thus reveal novel proteins and their isoforms that are implicated in such unique biological phenomena. De novo vs. reference-based assembly A set of assembled transcripts allows for initial gene expression studies. Prior to the development of transcriptome assembly computer programs, transcriptome data were analyzed primarily by mapping onto a reference genome. Though genome alignment is a robust way of characterizing transcript sequences, this method is disadvantaged by its inability to account for structural alterations of mRNA transcripts, such as alternative splicing. Since a genome contains the sum of all introns and exons that may be present in a transcript, spliced variants that do not align continuously along the genome may be discounted as actual protein isoforms. Even if a reference genome is available, de novo assembly should be performed, as it can recover transcripts that are transcribed from segments of the genome that are missing from the reference genome assembly. Transcriptome vs. genome assembly Unlike genome sequence coverage levels – which can vary randomly as a result of repeat content in non-coding intron regions of DNA – transcriptome sequence coverage levels can be directly indicative of gene expression levels. These repeated sequences also create ambiguities in the formation of contigs in genome assembly, while ambiguities in transcriptome assembly contigs usually correspond to spliced isoforms, or minor variation among members of a gene family. Genome assemblers cannot be used directly for transcriptome assembly for several reasons. First, genome sequencing depth is usually the same across a genome, but the depth of transcripts can vary. Second, both strands are always sequenced in genome sequencing, but RNA-seq can be strand-specific. Third, transcriptome assembly is more challenging because transcript variants from the same gene can share exons and are difficult to resolve unambiguously.
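As a small illustration of the point above that transcriptome coverage tracks expression, the following sketch computes an average read depth and an RPKM-style value for a few hypothetical transcripts. The read counts, transcript lengths, and read length are invented; a real analysis would work from alignment files rather than hard-coded counts.

```python
# Illustrative calculation of per-transcript read coverage and an
# RPKM-style expression proxy: unlike genomic coverage, transcriptome
# coverage reflects how highly a transcript is expressed.
# All counts and lengths below are hypothetical.

read_length = 100  # bp, assumed

transcripts = {          # transcript -> (length in bp, mapped read count)
    "geneA_isoform1": (1500, 9000),
    "geneA_isoform2": (1200, 600),
    "geneB":          (3000, 3000),
}

total_reads = sum(count for _, count in transcripts.values())

for name, (length, count) in transcripts.items():
    coverage = count * read_length / length              # average depth
    rpkm = count / (length / 1e3) / (total_reads / 1e6)  # reads per kb per million mapped reads
    print(f"{name}: depth={coverage:.0f}x, RPKM={rpkm:.1f}")
```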
Method RNA-seq Once RNA is extracted and purified from cells, it is sent to a high-throughput sequencing facility, where it is first reverse transcribed to create a cDNA library. This cDNA can then be fragmented into various lengths depending on the platform used for sequencing. Each of the following platforms utilizes a different type of technology to sequence millions of short reads: 454 Sequencing, Illumina, and SOLiD. Assembly algorithms The cDNA sequence reads are assembled into transcripts via a short read transcript assembly program. Most likely, some amino acid variations among transcripts that are otherwise similar reflect different protein isoforms. It is also possible that they represent different genes within the same gene family, or even genes that share only a conserved domain, depending on the degree of variation. A number of assembly programs are available (see Assemblers). Although these programs have been generally successful in assembling genomes, transcriptome assembly presents some unique challenges. Whereas high sequence coverage for a genome may indicate the presence of repetitive sequences (and thus be masked), for a transcriptome, it may indicate transcript abundance. In addition, unlike genome sequencing, transcriptome sequencing can be strand-specific, due to the possibility of both sense and antisense transcripts. Finally, it can be difficult to reconstruct and tease apart all splicing isoforms. Short read assemblers generally use one of two basic algorithms: overlap graphs and de Bruijn graphs. Overlap graphs are utilized for most assemblers designed for Sanger-sequenced reads. The overlaps between each pair of reads are computed and compiled into a graph, in which each node represents a single sequence read. This algorithm is more computationally intensive than de Bruijn graphs, and most effective in assembling fewer reads with a high degree of overlap. De Bruijn graphs align k-mers (usually 25-50 bp) based on k-1 sequence conservation to create contigs. The k-mers are shorter than the read lengths, allowing fast hashing, so the operations on de Bruijn graphs are generally less computationally intensive. Functional annotation Functional annotation of the assembled transcripts allows for insight into the particular molecular functions, cellular components, and biological processes in which the putative proteins are involved. Blast2GO (B2G) enables Gene Ontology-based data mining to annotate sequence data for which no GO annotation is available yet. It is a research tool often employed in functional genomics research on non-model species. It works by blasting assembled contigs against a non-redundant protein database (at NCBI), then annotating them based on sequence similarity. GOanna is another GO annotation program specific for animal and agricultural plant gene products that works in a similar fashion. It is part of AgBase, a curated, publicly accessible database and suite of computational tools for GO annotation and analysis. Following annotation, KEGG (Kyoto Encyclopedia of Genes and Genomes) enables visualization of metabolic pathways and molecular interaction networks captured in the transcriptome. In addition to being annotated for GO terms, contigs can also be screened for open reading frames (ORFs) in order to predict the amino acid sequence of proteins derived from these transcripts. Another approach is to annotate protein domains and determine the presence of gene families, rather than specific genes.
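To make the de Bruijn graph idea from the "Assembly algorithms" discussion concrete, here is a toy sketch that builds a (k-1)-mer graph from a handful of error-free reads and walks it while the path is unambiguous. The reads and the value of k are invented, and real assemblers add error correction, graph simplification, and isoform resolution on top of this basic construction.

```python
# A toy de Bruijn graph construction: reads are decomposed into k-mers,
# nodes are (k-1)-mers, and edges connect nodes overlapping by k-2 bases.
# A simple unambiguous walk then reconstructs a contig.

from collections import defaultdict

def build_de_bruijn(reads, k):
    graph = defaultdict(list)          # (k-1)-mer -> list of successor (k-1)-mers
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return graph

def extend_contig(graph, start):
    """Walk the graph greedily while exactly one distinct successor exists."""
    contig, node, seen = start, start, {start}
    while len(set(graph.get(node, []))) == 1:
        node = graph[node][0]
        if node in seen:               # stop if the walk enters a cycle
            break
        seen.add(node)
        contig += node[-1]
    return contig

reads = ["ATGGCGTGCA", "GGCGTGCAAT", "GTGCAATCCT"]   # toy error-free reads
k = 5
graph = build_de_bruijn(reads, k)
print(extend_contig(graph, "ATGG"))    # reconstructs ATGGCGTGCAATCCT
```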
Verification and quality control Since a well-resolved reference genome is rarely available, the quality of computer-assembled contigs may be verified either by comparing the assembled sequences to the reads used to generate them (reference-free), or by aligning the sequences of conserved gene domains found in mRNA transcripts to transcriptomes or genomes of closely related species (reference-based). Tools such as Transrate and DETONATE allow statistical analysis of assembly quality by these methods. Another method is to design PCR primers for predicted transcripts, then attempt to amplify them from the cDNA library. Often, exceptionally short reads are filtered out. Short sequences (< 40 amino acids) are unlikely to represent functional proteins, as they are unable to fold independently and form hydrophobic cores. Complementary to these metrics, a quantitative assessment of the gene content may provide additional insights into the quality of the assembly. To perform this step, tools that model the expected gene space based on conserved genes, such as BUSCO, can be used. For eukaryotes, CEGMA may also be used, although it has not been officially supported since 2015. Assemblers The following is a partial compendium of assembly software that has been used to generate transcriptomes, and has also been cited in scientific literature. SeqMan NGen SOAPdenovo-Trans SOAPdenovo-Trans is a de novo transcriptome assembler inherited from the SOAPdenovo2 framework, designed for assembling transcriptomes with alternative splicing and varying expression levels. The assembler provides a more comprehensive way to construct full-length transcript sets compared to SOAPdenovo2. Velvet/Oases The Velvet algorithm uses de Bruijn graphs to assemble transcripts. In simulations, Velvet can produce contigs up to 50-kb N50 length using prokaryotic data and 3-kb N50 in mammalian bacterial artificial chromosomes (BACs). These preliminary transcripts are transferred to Oases, which uses paired-end read and long-read information to build transcript isoforms. Trans-ABySS ABySS is a parallel, paired-end sequence assembler. Trans-ABySS (Assembly By Short Sequences) is a software pipeline written in Python and Perl for analyzing ABySS-assembled transcriptome contigs. This pipeline can be applied to assemblies generated across a wide range of k values. It first reduces the dataset into smaller sets of non-redundant contigs, and identifies splicing events including exon-skipping, novel exons, retained introns, novel introns, and alternative splicing. The Trans-ABySS algorithms are also able to estimate gene expression levels, identify potential polyadenylation sites, and detect candidate gene-fusion events. Trinity Trinity first divides the sequence data into a number of de Bruijn graphs, each representing transcriptional variations at a single gene or locus. It then extracts full-length splicing isoforms and distinguishes transcripts derived from paralogous genes from each graph separately. Trinity consists of three independent software modules, which are used sequentially to produce transcripts: Inchworm assembles the RNA-Seq data into transcript sequences, often generating full-length transcripts for a dominant isoform, but then reports just the unique portions of alternatively spliced transcripts. Chrysalis clusters the Inchworm contigs and constructs complete de Bruijn graphs for each cluster.
Each cluster represents the full transcriptional complexity for a given gene (or a family or set of genes that share a conserved sequence). Chrysalis then partitions the full read set among these separate graphs. Butterfly then processes the individual graphs in parallel, tracing the paths of reads within the graph, ultimately reporting full-length transcripts for alternatively spliced isoforms, and teasing apart transcripts that correspond to paralogous genes. See also Transcriptome Transcriptomics Human-transcriptome database for alternative splicing (H-DBAS) UniGene Full-parasites Exome sequencing References Bioinformatics Systems biology Computational biology Omics
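Returning to the quality metrics discussed under "Verification and quality control" and in the assembler descriptions (for example the N50 values quoted for Velvet), the following minimal sketch computes N50 for a set of contig lengths; the lengths themselves are made up.

```python
# Minimal N50 calculation for an assembled contig set. N50 is the contig
# length at which half of the total assembled bases lie in contigs of
# that length or longer.

def n50(lengths):
    lengths = sorted(lengths, reverse=True)
    half = sum(lengths) / 2
    running = 0
    for length in lengths:
        running += length
        if running >= half:
            return length

contig_lengths = [12000, 8000, 5000, 3000, 2000, 1500, 900, 600]  # hypothetical
print("N50 =", n50(contig_lengths))   # -> 8000
```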
De novo transcriptome assembly
[ "Engineering", "Biology" ]
2,211
[ "Biological engineering", "Bioinformatics", "Omics", "Computational biology", "Systems biology" ]
33,891,046
https://en.wikipedia.org/wiki/Disease%20gene%20identification
Disease gene identification is a process by which scientists identify the mutant genotypes responsible for an inherited genetic disorder. Mutations in these genes can include single nucleotide substitutions, single nucleotide additions/deletions, deletion of the entire gene, and other genetic abnormalities. Significance Knowledge of which genes (when non-functional) cause which disorders will simplify diagnosis of patients and provide insights into the functional characteristics of the mutation. The advent of modern-day high-throughput sequencing technologies combined with insights provided by the growing field of genomics is resulting in more rapid disease gene identification, thus allowing scientists to identify more complex mutations. Generic gene identification procedure Disease gene identification techniques often follow the same overall procedure. DNA is first collected from several patients who are believed to have the same genetic disease. Then, their DNA samples are analyzed and screened to determine probable regions where the mutation could potentially reside. These techniques are mentioned below. These probable regions are then lined up with one another, and the overlapping region should contain the mutant gene. If enough of the genome sequence is known, that region is searched for candidate genes. Coding regions of these genes are then sequenced until a mutation is discovered or another patient is discovered, in which case the analysis can be repeated, potentially narrowing down the region of interest. The differences between most disease gene identification procedures are in the second step (where DNA samples are analyzed and screened to determine regions in which the mutation could reside). Pre-genomics techniques Without the aid of whole-genome sequences, pre-genomics investigations looked at select regions of the genome, often with only minimal knowledge of the gene sequences they were looking at. Genetic techniques capable of providing this sort of information include Restriction Fragment Length Polymorphism (RFLP) analysis and microsatellite analysis. Loss of heterozygosity (LOH) Loss of heterozygosity (LOH) is a technique that can only be used to compare two samples from the same individual. LOH analysis is often used to identify cancer-related genes, in that one sample consists of (mutant) tumor DNA and the other (control) sample consists of genomic DNA from non-cancerous cells from the same individual. RFLPs and microsatellite markers provide patterns of DNA polymorphisms, which can be interpreted as residing in a heterozygous region or a homozygous region of the genome. Provided that all individuals are affected with the same disease resulting from deletion of a single copy of the same gene, all individuals will contain one region where their control sample is heterozygous but the mutant sample is homozygous; this region will contain the disease gene. Post-genomics techniques With the advent of modern laboratory techniques such as high-throughput sequencing and software capable of genome-wide analysis, sequence acquisition has become increasingly less expensive and time-consuming, thus providing significant benefits to science in the form of more efficient disease gene identification techniques.
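The "line up the probable regions and keep the overlap" step of the generic procedure above amounts to a simple interval intersection. The following sketch is purely illustrative: the coordinates are hypothetical and all candidate regions are assumed to lie on the same chromosome.

```python
# Illustrative intersection of candidate intervals from several patients,
# as in the "line up the probable regions and keep the overlap" step.
# Coordinates (chromosome positions in bp) are hypothetical.

def intersect(regions):
    """Each region is a (start, end) interval on the same chromosome;
    returns the interval common to all, or None if they do not overlap."""
    start = max(r[0] for r in regions)
    end = min(r[1] for r in regions)
    return (start, end) if start <= end else None

candidate_regions = [
    (12_000_000, 18_500_000),   # patient 1
    (14_200_000, 21_000_000),   # patient 2
    (13_800_000, 17_900_000),   # patient 3
]
print(intersect(candidate_regions))   # (14200000, 17900000): search here for candidate genes
```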
Identity by descent mapping Identity by descent (IBD) mapping generally uses single nucleotide polymorphism (SNP) arrays to survey known polymorphic sites throughout the genome of affected individuals and their parents and/or siblings, both affected and unaffected. While these SNPs probably do not cause the disease, they provide valuable insight into the makeup of the genomes in question. A region of the genome is considered identical by descent if contiguous SNPs share the same genotype. When comparing an affected individual to his/her affected sibling, all identical regions are recorded. Given that an affected sibling and an unaffected sibling do not have the same disease phenotype, their DNA must by definition differ at the disease locus (barring the presence of a genetic or environmental modifier). Thus, the IBD mapping results can be further supplemented by removing any regions that are identical in both affected individuals and unaffected siblings. This is then repeated for multiple families, thus generating a small overlapping region, which theoretically contains the disease gene. Homozygosity/autozygosity mapping Homozygosity/Autozygosity mapping is a powerful technique, but is only valid when searching for a mutation segregating within a small, closed population. Such a small population, possibly created by the founder effect, will have a limited gene pool, and thus any inherited disease will probably be a result of two copies of the same mutation segregating on the same haplotype. Since affected individuals will probably be homozygous across the disease region, SNP genotypes serve as adequate markers of homozygous and heterozygous regions. Modern-day SNP arrays are used to survey the genome and identify large regions of homozygosity. Homozygous blocks in the genomes of affected individuals can then be laid on top of each other, and the overlapping region should contain the disease gene. This analysis is often extended by analyzing autozygosity (homozygosity arising from inheritance of the same ancestral allele) in the genomes of affected individuals. This can be accomplished by plotting a cumulative LOD score alongside the overlaid blocks of homozygosity. By taking into consideration the population allele frequencies for all SNPs via autozygosity mapping, the results of homozygosity can be confirmed. Furthermore, if two suspicious regions appear as a result of homozygosity mapping, autozygosity mapping may be able to distinguish between the two (e.g., if one block of homozygosity is the result of a very non-diverse region of the genome, the LOD score will be very low). Tools for Homozygosity Mapping HomSI: a homozygous stretch identifier from next-generation sequencing data, a tool that identifies homozygous regions using deep sequence data. Genome-wide knockdown studies Genome-wide knockdown studies are an example of reverse genetics made possible by the acquisition of whole genome sequences, and the advent of genomics and gene-silencing technologies, mainly siRNA and deletion mapping. Genome-wide knockdown studies involve systematic knockdown or deletion of genes or segments of the genome. This is generally done in prokaryotes or in a tissue culture environment due to the massive number of knockdowns that must be performed. After the systematic knockout is completed (and possibly confirmed by mRNA expression analysis), the phenotypic results of the knockdown/knockout can be observed. Observation parameters can be selected to target a highly specific phenotype.
The resulting dataset is then queried for samples that exhibit phenotypes matching the disease in question; the gene(s) knocked down or out in those samples can then be considered candidate disease genes. Whole exome sequencing Whole exome sequencing is a brute-force approach that involves using modern-day sequencing technology and DNA sequence assembly tools to piece together all coding portions of the genome. The sequence is then compared to a reference genome and any differences are noted. After filtering out all known benign polymorphisms, synonymous changes, and intronic changes (that do not affect splice sites), only potentially pathogenic variants will be left. This technique can be combined with the other techniques described above to further narrow the list of candidate variants should more than one be identified. See also Gene Disease Database Gene identification Haplotype tagging References Mutation Genomics Molecular biology Bioinformatics
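As a simplified illustration of the exome-filtering logic described above, the sketch below discards known benign polymorphisms, synonymous changes, and intronic changes away from splice sites from a list of hypothetical variant records. Real pipelines operate on annotated VCF files and use far richer annotations; all identifiers and fields here are invented.

```python
# Illustrative filtering of exome variants: discard known benign
# polymorphisms, synonymous changes, and intronic changes that do not
# affect splice sites, keeping only potentially pathogenic variants.

known_benign = {"rs1001", "rs2002"}   # hypothetical list of benign SNP IDs

variants = [
    {"id": "rs1001", "effect": "missense",   "intronic": False, "splice_site": False},
    {"id": "var_7",  "effect": "synonymous", "intronic": False, "splice_site": False},
    {"id": "var_8",  "effect": "intronic",   "intronic": True,  "splice_site": False},
    {"id": "var_9",  "effect": "intronic",   "intronic": True,  "splice_site": True},
    {"id": "var_10", "effect": "nonsense",   "intronic": False, "splice_site": False},
]

def potentially_pathogenic(v):
    if v["id"] in known_benign:
        return False
    if v["effect"] == "synonymous":
        return False
    if v["intronic"] and not v["splice_site"]:
        return False
    return True

remaining = [v["id"] for v in variants if potentially_pathogenic(v)]
print(remaining)   # ['var_9', 'var_10'] remain as candidate pathogenic variants
```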
Disease gene identification
[ "Chemistry", "Engineering", "Biology" ]
1,525
[ "Bioinformatics", "Biological engineering", "Biochemistry", "Molecular biology" ]
33,891,361
https://en.wikipedia.org/wiki/Wouthuysen%E2%80%93Field%20coupling
Wouthuysen–Field coupling, or the Wouthuysen–Field effect, is a mechanism that couples the excitation temperature, also called the spin temperature, of neutral hydrogen to Lyman-alpha radiation. This coupling plays a role in producing a difference in the temperature of neutral hydrogen and the cosmic microwave background at the end of the Dark Ages and the beginning of the epoch of reionization. It is named for Siegfried Adolf Wouthuysen and George B. Field. Background The period after recombination occurred and before stars and galaxies formed is known as the "dark ages". During this time, the majority of the baryonic matter in the universe was neutral hydrogen. This hydrogen has yet to be observed, but there are experiments underway to detect the hydrogen line produced during this era. The hydrogen line is produced when an electron in a neutral hydrogen atom is excited to the triplet spin state, or de-excited as the electron and proton spins go to the singlet state. The energy difference between these two hyperfine states is approximately 5.9 × 10⁻⁶ electron volts, with a wavelength of 21 centimeters. At times when neutral hydrogen is in thermodynamic equilibrium with the photons in the cosmic microwave background (CMB), the neutral hydrogen and CMB are said to be "coupled", and the hydrogen line is not observable. It is only when the two temperatures differ, i.e. are decoupled, that the hydrogen line can be observed. Coupling mechanism Wouthuysen–Field coupling is a mechanism that couples the spin temperature of neutral hydrogen to Lyman-alpha radiation, which decouples the neutral hydrogen from the CMB. The energy of the Lyman-alpha transition is 10.2 eV; this energy is approximately two million times greater than that of the hydrogen line, and is produced by astrophysical sources such as stars and quasars. Neutral hydrogen absorbs and then re-emits Lyman-alpha photons, and in doing so may end up in either of the two spin states. This process causes a redistribution of the electrons between the hyperfine states, decoupling the neutral hydrogen from the CMB photons. The coupling between Lyman-alpha photons and the hyperfine states depends not on the intensity of the Lyman-alpha radiation, but on the shape of the spectrum in the vicinity of the Lyman-alpha transition. That this mechanism might affect the population of the hyperfine states in neutral hydrogen was first suggested in 1952 by S. A. Wouthuysen, and then further developed by George B. Field in 1959. The effect of Lyman-alpha photons on the hyperfine levels depends upon the relative intensities of the red and blue wings of the Lyman-alpha line, reflecting the very small difference in energy of the hyperfine states relative to the Lyman-alpha transition. At a cosmological redshift of , Wouthuysen–Field coupling is expected to raise the spin temperature of neutral hydrogen above that of the CMB, and produce emission in the hydrogen line. Observational prospects A hydrogen line signal produced by Wouthuysen–Field coupling has not yet been observed. There are multiple experiments and radio observatories that aim to detect the neutral hydrogen line from the Dark Ages and epoch of reionization, the time at which Wouthuysen–Field coupling is expected to be important. These include the Giant Metrewave Radio Telescope, the Precision Array for Probing the Epoch of Reionization, the Murchison Widefield Array, and the Large Aperture Experiment to Detect the Dark Ages.
Proposed observatories that aim to detect evidence of Wouthuysen–Field coupling include the Square Kilometer Array and the Dark Ages Radio Explorer. See also Decoupling (cosmology) Notes References Space plasmas Astrophysics Physical cosmology
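As a quick sanity check on the energy scales quoted in the Background and Coupling mechanism sections, the following short calculation derives the hyperfine-transition energy from the 21 cm wavelength and compares it with the 10.2 eV Lyman-alpha energy. The constants are standard values and the result is approximate.

```python
# Back-of-the-envelope check of the energy scales quoted above:
# the 21 cm hyperfine transition versus the 10.2 eV Lyman-alpha transition.

h = 6.62607015e-34        # Planck constant, J s
c = 2.99792458e8          # speed of light, m/s
eV = 1.602176634e-19      # joules per electron volt

lambda_21cm = 0.211       # m (the 1420.4 MHz hyperfine line)
E_21cm = h * c / lambda_21cm / eV
E_lya = 10.2              # eV, Lyman-alpha

print(f"21 cm photon energy ~ {E_21cm:.2e} eV")        # ~5.9e-06 eV
print(f"Lyman-alpha / 21 cm ~ {E_lya / E_21cm:.2e}")   # ~1.7e+06, i.e. roughly two million
```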
Wouthuysen–Field coupling
[ "Physics", "Astronomy" ]
775
[ "Space plasmas", "Theoretical physics", "Astrophysics", "Physical cosmology", "Astronomical sub-disciplines" ]
36,477,606
https://en.wikipedia.org/wiki/Gadolinium%20oxyorthosilicate
Gadolinium oxyorthosilicate (known as GSO) is a type of scintillating inorganic crystal used for imaging in nuclear medicine and for calorimetry in particle physics. The formula is Gd2SiO5. References Crystals Gadolinium compounds Phosphors and scintillators Silicates
Gadolinium oxyorthosilicate
[ "Chemistry", "Materials_science" ]
78
[ "Crystallography", "Luminescence", "Phosphors and scintillators", "Crystals" ]
36,479,097
https://en.wikipedia.org/wiki/Micro-compounding
Micro-compounding is the mixing or processing of polymer formulations in the melt on a small scale, typically milliliters. It is popular for research and development because it gives faster, more reliable results with smaller samples and less cost. Its applications include pharmaceutical, biomedical, and nutritional areas. Design Micro-compounding is typically performed with a tabletop twin-screw micro-compounder or micro-extruder with a working volume of 5 or 15 milliliters. With such small volumes, it is difficult to have sufficient mixing in a continuous extruder. Therefore, micro-compounders typically have a batch mode (recirculation) and a conical shape. The L/D of a continuous twin-screw extruder is mimicked in a batch micro-compounder by the recirculation mixing time, which is controlled by a manual valve. With this valve, the recirculation can be interrupted to unload the formulation as a strand, or into an injection moulder, a film device, or a fiber line. Typical recirculation times are one to three minutes, depending on the ease of dispersive and distributive mixing of the formulation. Benefits Micro-compounding can now produce films, fibers, and test samples (rods, rings, tablets) from mixtures as small as 5 ml in less than ten minutes. The small footprint requires less lab space than a parallel twin-screw extruder. Micro-extruders have also been used to develop drug-delivery formulations aimed at improving the bioavailability of poorly soluble drugs or providing the sustained release of active ingredients. References Polymer chemistry Chemical processes
Micro-compounding
[ "Chemistry", "Materials_science", "Engineering" ]
344
[ "Materials science", "Chemical processes", "Polymer chemistry", "nan", "Chemical process engineering" ]