| id (int64) | url (string) | text (string) | source (string) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
3,873,319 | https://en.wikipedia.org/wiki/Boron%20trichloride | Boron trichloride is the inorganic compound with the formula BCl3. This colorless gas is a reagent in organic synthesis. It is highly reactive towards water.
Production and structure
Boron reacts with halogens to give the corresponding trihalides. Boron trichloride is, however, produced industrially by chlorination of boron oxide and carbon at 501 °C.
B2O3 + 3 C + 3 Cl2 → 2 BCl3 + 3 CO
The carbothermic reaction is analogous to the Kroll process for the conversion of titanium dioxide to titanium tetrachloride. One consequence of this synthesis route is that samples of boron trichloride are often contaminated with phosgene.
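As a back-of-the-envelope illustration of the mass balance implied by the equation above, the sketch below computes the theoretical BCl3 yield from a boron oxide feed; the 100 kg feed figure is hypothetical, and the atomic masses are standard values.

```python
# Hypothetical mass balance for B2O3 + 3 C + 3 Cl2 -> 2 BCl3 + 3 CO,
# using standard atomic masses (g/mol).
M = {"B": 10.81, "O": 16.00, "C": 12.01, "Cl": 35.45}

m_B2O3 = 2 * M["B"] + 3 * M["O"]         # 69.62 g/mol
m_BCl3 = M["B"] + 3 * M["Cl"]            # 117.16 g/mol

kg_feed = 100.0                          # hypothetical B2O3 feed
mol_B2O3 = kg_feed * 1000 / m_B2O3
kg_BCl3 = 2 * mol_B2O3 * m_BCl3 / 1000   # stoichiometry: 2 mol BCl3 per mol B2O3
print(f"{kg_feed:.0f} kg B2O3 -> {kg_BCl3:.0f} kg BCl3 at 100% conversion")
```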
In the laboratory, BCl3 can be prepared by treating AlCl3 with BF3, a halide exchange reaction.
BCl3 is a trigonal planar molecule like the other boron trihalides. The B–Cl bond length is 175 pm. A degree of π-bonding has been proposed to explain the short B–Cl distance, although there is some debate as to its extent. BCl3 does not dimerize, although NMR studies of mixtures of boron trihalides show the presence of mixed halides. The absence of dimerisation contrasts with the tendencies of AlCl3 and GaCl3, which form dimers or polymers with 4- or 6-coordinate metal centres.
Reactions
BCl3 hydrolyzes readily to give hydrochloric acid and boric acid:
BCl3 + 3 H2O → B(OH)3 + 3 HCl
Alcohols behave analogously giving the borate esters, e.g. trimethyl borate.
As a strong Lewis acid, BCl3 forms adducts with tertiary amines, phosphines, ethers, thioethers, and halide ions. Adduct formation is often accompanied by an increase in B-Cl bond length. BCl3•S(CH3)2 (CAS# 5523-19-3) is often employed as a conveniently handled source of BCl3 because this solid (m.p. 88-90 °C) releases BCl3:
(CH3)2S·BCl3 ⇌ (CH3)2S + BCl3
Mixed aryl and alkyl boron chlorides are also known. Phenylboron dichloride is commercially available. Such species can be prepared by the redistribution reaction of BCl3 with organotin reagents:
2 BCl3 + R4Sn → 2 RBCl2 + R2SnCl2
Reduction
Reduction of BCl3 to elemental boron is conducted commercially. In the laboratory, boron trichloride can be converted to diboron tetrachloride by heating with copper metal:
2 BCl3 + 2 Cu → B2Cl4 + 2 CuCl
B4Cl4 can also be prepared in this way. Colourless diboron tetrachloride (m.p. −93 °C) is a planar molecule in the solid state (similar to dinitrogen tetroxide), but in the gas phase the structure is staggered. It decomposes (disproportionates) at room temperature to give a series of monochlorides having the general formula (BCl)n, in which n may be 8, 9, 10, or 11.
n B2Cl4 → BnCln + n BCl3
The compounds with formulas B8Cl8 and B9Cl9 are known to contain closed cages of boron atoms.
Uses
Boron trichloride is a starting material for the production of elemental boron. It is also used in the refining of aluminium, magnesium, zinc, and copper alloys to remove nitrides, carbides, and oxides from molten metal. It has been used as a soldering flux for alloys of aluminium, iron, zinc, tungsten, and monel. Aluminium castings can be improved by treating the melt with boron trichloride vapors. In the manufacture of electrical resistors, a uniform and lasting adhesive carbon film can be put over a ceramic base using BCl3. It has been used in the field of high-energy fuels and rocket propellants as a source of boron to raise BTU value. BCl3 is also used in plasma etching in semiconductor manufacturing. This gas etches metal oxides by forming volatile BOClx and MxOyClz compounds.
BCl3 is used as a reagent in the synthesis of organic compounds. Like the corresponding bromide, it cleaves C-O bonds in ethers.
Safety
BCl3 is an aggressive reagent that can form hydrogen chloride upon exposure to moisture or alcohols. The dimethyl sulfide adduct (BCl3·SMe2), a solid, is much safer to handle when possible, but H2O will destroy the BCl3 portion while leaving dimethyl sulfide in solution.
See also
List of highly toxic gases
References
Notes
Further reading
External links
Boron compounds
Chlorides
Boron halides | Boron trichloride | [
"Chemistry"
] | 1,082 | [
"Chlorides",
"Inorganic compounds",
"Salts"
] |
3,873,667 | https://en.wikipedia.org/wiki/P110%CE%B4 | Phosphatidylinositol-4,5-bisphosphate 3-kinase catalytic subunit delta isoform also known as phosphoinositide 3-kinase (PI3K) delta isoform or p110δ is an enzyme that in humans is encoded by the PIK3CD gene.
p110δ regulates immune function. In contrast to the other class IA PI3Ks p110α and p110β, p110δ is principally expressed in leukocytes (white blood cells). Genetic and pharmacological inactivation of p110δ has revealed that this enzyme is important for the function of T cells, B cells, mast cells and neutrophils. Hence, p110δ is a promising target for drugs that aim to prevent or treat inflammation, autoimmunity and transplant rejection.
Phosphoinositide 3-kinases (PI3Ks) phosphorylate the 3-prime OH position of the inositol ring of inositol lipids. The class I PI3Ks display a broad phosphoinositide lipid substrate specificity and include p110α, p110β and p110γ. p110α and p110β interact with SH2/SH3-domain-containing p85 adaptor proteins and with GTP-bound Ras.
Biochemistry
Like the other class IA PI3Ks, p110δ is a catalytic subunit, whose activity and subcellular localisation are controlled by an associated p85α, p55α, p50α or p85β regulatory subunit. The p55γ regulatory subunit is not thought to be expressed at significant levels in immune cells. There is no evidence for selective association between p110α, p110β or p110δ for any particular regulatory subunit. The class IA regulatory subunits (collectively referred to here as p85) bind to proteins that have been phosphorylated on tyrosines. Tyrosine kinases often operate near the plasma membrane and hence control the recruitment of p110δ to the plasma membrane where its substrate PtdIns(4,5)P2 is found. The conversion of PtdIns(4,5)P2 to PtdIns(3,4,5)P3 triggers signal transduction cascades controlled by PKB (also known as Akt), Tec family kinases and other proteins that contain PH domains. In immune cells, antigen receptors, cytokine receptors and costimulatory and accessory receptors stimulate tyrosine kinase activity and hence all have the potential to initiate PI3K signalling.
Functions
For reasons that are not well understood, p110δ appears to be activated in preference to p110α and p110β in a number of immune cells. The following is a brief summary of the role of p110δ in selected leukocyte subsets.
T cells
In T cells, the antigen receptor (TCR) and costimulatory receptors (CD28 and ICOS) are thought to be main receptors responsible for recruiting and activating p110δ. Genetic inactivation of p110δ in mice causes T cells to be less responsive to antigen as determined by their reduced ability to proliferate and secrete interleukin 2. T cell specific deletion of p110δ has revealed its role in antibody responses.
This may in part result from incomplete assembly of other signalling proteins at the immune synapse. The TCR cannot stimulate the phosphorylation of Akt in the absence of p110δ activity.
B cells
p110δ is a regulator of B cell proliferation and function. p110δ-deficient mice have deficient antibody responses. They also lack two B cell subsets: B1 cells (found in body cavities such as the peritoneum) and marginal zone B cells (found in the periphery of spleen follicles).
Mast cells
p110δ controls mast cell release of the granules responsible for allergic reactions. Thus inhibition of p110δ reduces allergic responses.
Neutrophils
In conjunction with p110γ, p110δ controls the release of reactive oxygen species in neutrophils.
Dendritic cells
p110δ controls lipopolysaccharide-induced, Toll-like-receptor-4-mediated innate immune responses in dendritic cells, and mice carrying an inactive p110δ are susceptible to lipopolysaccharide-mediated endotoxin shock.
Activated PI3K delta syndrome
Inherited mutations in the PIK3CD gene which increase p110δ catalytic activity cause a primary immunodeficiency syndrome called APDS or PASLI.
Pharmacology
US pharmaceutical company ICOS produced a selective inhibitor of p110δ called IC87114. This inhibitor selectively impairs B cell, mast cell and neutrophil functions and is therefore a potential immune-modulator.
The p110δ inhibitor idelalisib was developed by Gilead Sciences. In a phase III clinical trial for chronic lymphocytic leukemia (CLL), idelalisib in combination with rituximab showed favourable progression-free survival compared with patients who received rituximab and placebo.
In July 2014 idelalisib was approved by the FDA as a treatment for CLL patients.
In September 2017 copanlisib, inhibiting predominantly p110α and p110δ, got FDA approval for the treatment of adult patients with relapsed follicular lymphoma (FL) who have received at least two prior systemic therapies.
In September 2018 duvelisib was approved by the FDA as a treatment for relapsed or refractory CLL, and relapsed follicular lymphoma (FL) patients, who have received at least two prior therapies.
A 2015 study found that p110δ inhibitors had a side-effect of boosting mouse immune responses against multiple cancers, including both solid and hematological types. In mice with breast cancer, survival times nearly doubled and the cancer spread significantly less, with far fewer and smaller tumors. Post-surgical survival also improved. Subject immune systems could also develop an effective memory response, extending protection.
Interactions
PIK3CD interacts with PIK3R1, and PIK3R2.
See also
Phosphoinositide 3-kinase inhibitor
References
Further reading
EC 2.7.1
Immune system | P110δ | [
"Biology"
] | 1,364 | [
"Immune system",
"Organ systems"
] |
3,875,249 | https://en.wikipedia.org/wiki/Copper-clad%20aluminium%20wire | Copper-clad aluminium wire (CCAW or CCA) is a dual-metal electrical conductor composed of an inner aluminium core and outer copper cladding.
Production
A copper strip is formed into the shape of a cylinder as it is wrapped around an aluminium core, and the edges of the copper strip are welded together. The assembly is then pulled through a die, which squeezes and stretches the clad wire while also improving the bond between the copper and the aluminium core.
Uses
The primary applications of this conductor revolve around weight reduction requirements. These applications include high-quality coils, such as the voice coils in headphones or portable loudspeakers; high frequency coaxial applications, such as RF antennas and cable television distribution cables; and power cables.
CCA was also used in electrical wiring for buildings. The copper/aluminium construction was adopted to avoid some of the problems with aluminium wire yet retain most of the cost advantage. Its low copper content makes it unattractive to copper thieves.
CCA is also seen in counterfeit unshielded twisted pair networking cables. These cables are often less expensive than their full-copper counterparts, but official specifications such as Category 6 require conductors to be pure copper. This has exposed manufacturers and installers of cable with fake certification to legal liabilities.
Properties
The properties of copper-clad aluminium wire include:
Less expensive than a pure copper wire
Lighter than pure copper
Higher electrical conductivity than pure aluminium
Higher strength than aluminium
Electrical connections typically more reliable than pure aluminium
Disadvantages
Easily sold as counterfeit copper wire to unaware clients
Much more prone to mechanical fatigue failure than pure copper wire
Gets much hotter than pure copper in case of severe overcurrent, such as short circuits, although this is not an issue for code-compliant installations with proper fuses or circuit breakers.
Skin effect
The skin effect forces alternating current to flow near the outer periphery of any wire: in this case, through the outer copper cladding, which has lower resistivity than the mostly unused aluminium interior. Because the better conductor lies on the outer path, the wire's resistance at high frequencies, where the skin effect is strongest, approaches that of a pure copper wire. This improved conductivity over bare aluminium makes copper-clad aluminium wire a good fit for radio-frequency use.
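To make the frequency dependence concrete, the short sketch below evaluates the standard skin-depth formula δ = sqrt(ρ / (π f μ0)) for copper and aluminium; the resistivities are standard handbook figures, and the comparison is an idealized illustration, not a cable specification.

```python
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
RHO_CU = 1.68e-8            # copper resistivity, ohm*m (handbook value)
RHO_AL = 2.65e-8            # aluminium resistivity, ohm*m (handbook value)

def skin_depth(resistivity, frequency_hz):
    """Skin depth in metres for a non-magnetic conductor."""
    return math.sqrt(resistivity / (math.pi * frequency_hz * MU_0))

# As frequency rises, the skin depth shrinks well below a typical cladding
# thickness, so nearly all of the current flows in the copper layer.
for f in (50.0, 1e3, 1e6, 100e6):
    d_cu = skin_depth(RHO_CU, f) * 1e3
    d_al = skin_depth(RHO_AL, f) * 1e3
    print(f"{f:>12.0f} Hz: skin depth Cu = {d_cu:8.4f} mm, Al = {d_al:8.4f} mm")
```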
The skin effect is similarly exploited in copper-clad steel wire, such as the center conductors of many coaxial cables, which are commonly used for high frequency feedlines with high strength and conductivity requirements.
See also
Copper conductor
Graphene-clad wire
Electroplating
Galvanization
References
Electrical wiring
Power cables
Aluminium
Copper | Copper-clad aluminium wire | [
"Physics",
"Engineering"
] | 532 | [
"Electrical systems",
"Building engineering",
"Physical systems",
"Electrical engineering",
"Electrical wiring"
] |
3,877,901 | https://en.wikipedia.org/wiki/V%28D%29J%20recombination | V(D)J recombination (variable–diversity–joining rearrangement) is the mechanism of somatic recombination that occurs only in developing lymphocytes during the early stages of T and B cell maturation. It results in the highly diverse repertoire of antibodies/immunoglobulins and T cell receptors (TCRs) found in B cells and T cells, respectively. The process is a defining feature of the adaptive immune system.
V(D)J recombination in mammals occurs in the primary lymphoid organs (bone marrow for B cells and thymus for T cells) and in a nearly random fashion rearranges variable (V), joining (J), and in some cases, diversity (D) gene segments. The process ultimately results in novel amino acid sequences in the antigen-binding regions of immunoglobulins and TCRs that allow for the recognition of antigens from nearly all pathogens including bacteria, viruses, parasites, and worms as well as "altered self cells" as seen in cancer. The recognition can also be allergic in nature (e.g. to pollen or other allergens) or may match host tissues and lead to autoimmunity.
In 1987, Susumu Tonegawa was awarded the Nobel Prize in Physiology or Medicine "for his discovery of the genetic principle for generation of antibody diversity".
Background
Human antibody molecules (including B cell receptors) are composed of heavy and light chains, each of which contains both constant (C) and variable (V) regions, genetically encoded on three loci:
The immunoglobulin heavy locus (IGH@) on chromosome 14, containing the gene segments for the immunoglobulin heavy chain.
The immunoglobulin kappa (κ) locus (IGK@) on chromosome 2, containing the gene segments for one type (κ) of immunoglobulin light chain.
The immunoglobulin lambda (λ) locus (IGL@) on chromosome 22, containing the gene segments for another type (λ) of immunoglobulin light chain.
Each heavy chain or light chain gene contains multiple copies of three different types of gene segments for the variable regions of the antibody proteins. For example, the human immunoglobulin heavy chain region contains 2 Constant (Cμ and Cδ) gene segments and 44 Variable (V) gene segments, plus 27 Diversity (D) gene segments and 6 Joining (J) gene segments. The light chain genes possess either a single (Cκ) or four (Cλ) Constant gene segments with numerous V and J gene segments but do not have D gene segments. DNA rearrangement causes one copy of each type of gene segment to be used in any given lymphocyte, generating an enormous antibody repertoire; roughly 3×10^11 combinations are possible, although some are removed due to self-reactivity.
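Using only the heavy-chain segment counts quoted above, a quick calculation shows how combinatorial joining alone already yields thousands of variants; a minimal sketch:

```python
# Combinatorial diversity of the human immunoglobulin heavy-chain locus,
# using the segment counts quoted above: 44 V, 27 D and 6 J segments.
V_SEGMENTS, D_SEGMENTS, J_SEGMENTS = 44, 27, 6

heavy_chain_joins = V_SEGMENTS * D_SEGMENTS * J_SEGMENTS
print(f"distinct heavy-chain V-D-J combinations: {heavy_chain_joins}")  # 7128

# Each heavy chain then pairs with an independently rearranged kappa or
# lambda light chain, and junctional diversity multiplies the repertoire
# further, toward the ~3e11 combinations cited above.
```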
Most T cell receptors are composed of a variable alpha chain and a beta chain. The T cell receptor genes are similar to immunoglobulin genes in that they too contain multiple V, D, and J gene segments in their beta chains (and V and J gene segments in their alpha chains) that are rearranged during the development of the lymphocyte to provide that cell with a unique antigen receptor. The T cell receptor in this sense is the topological equivalent to an antigen-binding fragment of the antibody, both being part of the immunoglobulin superfamily.
An autoimmune response is prevented by eliminating cells that self-react. For T cells, this occurs in the thymus by testing the cell against an array of self antigens expressed through the function of the autoimmune regulator (AIRE). The immunoglobulin lambda light chain locus contains protein-coding genes that can be lost with its rearrangement. This is based on a physiological mechanism and is not pathogenetic for leukemias or lymphomas. A cell persists if it creates a successful product that does not self-react, otherwise it is pruned via apoptosis.
Immunoglobulins
Heavy chain
In the developing B cell, the first recombination event to occur is between one D and one J gene segment of the heavy chain locus. Any DNA between these two gene segments is deleted. This D-J recombination is followed by the joining of one V gene segment, from a region upstream of the newly formed DJ complex, forming a rearranged VDJ gene segment. All other gene segments between the V and D segments are now deleted from the cell's genome. A primary transcript (unspliced RNA) is generated containing the VDJ region of the heavy chain and both the constant mu and delta chains (Cμ and Cδ); i.e., the primary transcript contains the segments V-D-J-Cμ-Cδ. The primary RNA is processed to add a polyadenylated (poly-A) tail after the Cμ chain and to remove the sequence between the VDJ segment and this constant gene segment. Translation of this mRNA leads to the production of the IgM heavy chain protein.
Light chain
The kappa (κ) and lambda (λ) chains of the immunoglobulin light chain loci rearrange in a very similar way, except that the light chains lack a D segment. In other words, the first step of recombination for the light chains involves the joining of the V and J chains to give a VJ complex before the addition of the constant chain gene during primary transcription. Translation of the spliced mRNA for either the kappa or lambda chains results in formation of the Ig κ or Ig λ light chain protein.
Assembly of the Ig μ heavy chain and one of the light chains results in the formation of membrane bound form of the immunoglobulin IgM that is expressed on the surface of the immature B cell.
T cell receptors
During thymocyte development, the T cell receptor (TCR) chains undergo essentially the same sequence of ordered recombination events as that described for immunoglobulins. D-to-J recombination occurs first in the β-chain of the TCR. This process can involve either the joining of the Dβ1 gene segment to one of six Jβ1 segments or the joining of the Dβ2 gene segment to one of six Jβ2 segments. DJ recombination is followed (as above) with Vβ-to-DβJβ rearrangements. All gene segments between the Vβ-Dβ-Jβ gene segments in the newly formed complex are deleted and the primary transcript is synthesized that incorporates the constant domain gene (Vβ-Dβ-Jβ-Cβ). mRNA transcription splices out any intervening sequence and allows translation of the full length protein for the TCR β-chain.
The rearrangement of the alpha (α) chain of the TCR follows β chain rearrangement, and resembles V-to-J rearrangement described for Ig light chains (see above). The assembly of the β- and α- chains results in formation of the αβ-TCR that is expressed on a majority of T cells.
Mechanism
Key enzymes and components
The process of V(D)J recombination is mediated by VDJ recombinase, which is a diverse collection of enzymes. The key enzymes involved are recombination activating genes 1 and 2 (RAG), terminal deoxynucleotidyl transferase (TdT), and Artemis nuclease, a member of the ubiquitous non-homologous end joining (NHEJ) pathway for DNA repair. Several other enzymes are known to be involved in the process and include DNA-dependent protein kinase (DNA-PK), X-ray repair cross-complementing protein 4 (XRCC4), DNA ligase IV, non-homologous end-joining factor 1 (NHEJ1; also known as Cernunnos or XRCC4-like factor [XLF]), the recently discovered Paralog of XRCC4 and XLF (PAXX), and DNA polymerases λ and μ. Some enzymes involved are specific to lymphocytes (e.g., RAG, TdT), while others are found in other cell types and even ubiquitously (e.g., NHEJ components).
To maintain the specificity of recombination, V(D)J recombinase recognizes and binds to recombination signal sequences (RSSs) flanking the variable (V), diversity (D), and joining (J) genes segments. RSSs are composed of three elements: a heptamer of seven conserved nucleotides, a spacer region of 12 or 23 basepairs in length, and a nonamer of nine conserved nucleotides. While the majority of RSSs vary in sequence, the consensus heptamer and nonamer sequences are CACAGTG and ACAAAAACC, respectively; and although the sequence of the spacer region is poorly conserved, the length is highly conserved. The length of the spacer region corresponds to approximately one (12 basepairs) or two turns (23 basepairs) of the DNA helix. Following what is known as the 12/23 Rule, gene segments to be recombined are usually adjacent to RSSs of different spacer lengths (i.e., one has a "12RSS" and one has a "23RSS"). This is an important feature in the regulation of V(D)J recombination.
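As an illustration of how the consensus sequences and the 12/23 rule constrain where recombination can occur, here is a minimal scanner for the exact consensus heptamer and nonamer quoted above; since real RSSs vary in sequence, an actual scanner would use position weight matrices rather than exact matching.

```python
# Minimal scan for consensus recombination signal sequences (RSSs):
# heptamer + 12 or 23 bp spacer + nonamer (consensus sequences from the text).
HEPTAMER = "CACAGTG"
NONAMER = "ACAAAAACC"

def find_consensus_rss(seq):
    """Return (position, spacer_length) for every exact consensus RSS."""
    hits = []
    for i in range(len(seq) - len(HEPTAMER) + 1):
        if seq[i:i + len(HEPTAMER)] == HEPTAMER:
            for spacer in (12, 23):            # the 12/23 rule spacer lengths
                j = i + len(HEPTAMER) + spacer
                if seq[j:j + len(NONAMER)] == NONAMER:
                    hits.append((i, spacer))
    return hits

# toy sequence with one 12-bp-spaced consensus RSS embedded
demo = "TT" + HEPTAMER + "A" * 12 + NONAMER + "GG"
print(find_consensus_rss(demo))   # [(2, 12)]
```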
Process
V(D)J recombination begins when V(D)J recombinase (through the activity of RAG1) binds a RSS flanking a coding gene segment (V, D, or J) and creates a single-strand nick in the DNA between the first base of the RSS (just before the heptamer) and the coding segment. This is essentially energetically neutral (no need for ATP hydrolysis) and results in the formation of a free 3' hydroxyl group and a 5' phosphate group on the same strand. The reactive hydroxyl group is positioned by the recombinase to attack the phosphodiester bond of the opposite strand, forming two DNA ends: a hairpin (stem-loop) on the coding segment and a blunt end on the signal segment. The current model is that DNA nicking and hairpin formation occurs on both strands simultaneously (or nearly so) in a complex known as a recombination center.
The blunt signal ends are flush ligated together to form a circular piece of DNA containing all of the intervening sequences between the coding segments known as a signal joint (although circular in nature, this is not to be confused with a plasmid). While originally thought to be lost during successive cell divisions, there is evidence that signal joints may re-enter the genome and lead to pathologies by activating oncogenes or interrupting tumor suppressor gene functions.
The coding ends are processed further prior to their ligation by several events that ultimately lead to junctional diversity. Processing begins when DNA-PK binds to each broken DNA end and recruits several other proteins including Artemis, XRCC4, DNA ligase IV, Cernunnos, and several DNA polymerases. DNA-PK forms a complex that leads to its autophosphorylation, resulting in activation of Artemis. The coding end hairpins are opened by the activity of Artemis. If they are opened at the center, a blunt DNA end will result; however in many cases, the opening is "off-center" and results in extra bases remaining on one strand (an overhang). These are known as palindromic (P) nucleotides due to the palindromic nature of the sequence produced when DNA repair enzymes resolve the overhang. The process of hairpin opening by Artemis is a crucial step of V(D)J recombination and is defective in the severe combined immunodeficiency (scid) mouse model.
Next, XRCC4, Cernunnos, and DNA-PK align the DNA ends and recruit terminal deoxynucleotidyl transferase (TdT), a template-independent DNA polymerase that adds non-templated (N) nucleotides to the coding end. The addition is mostly random, but TdT does exhibit a preference for G/C nucleotides. As with all known DNA polymerases, the TdT adds nucleotides to one strand in a 5' to 3' direction.
Lastly, exonucleases can remove bases from the coding ends (including any P or N nucleotides that may have formed). DNA polymerases λ and μ then insert additional nucleotides as needed to make the two ends compatible for joining. This is a stochastic process, therefore any combination of the addition of P and N nucleotides and exonucleolytic removal can occur (or none at all). Finally, the processed coding ends are ligated together by DNA ligase IV.
All of these processing events result in a paratope that is highly variable, even when the same gene segments are recombined. V(D)J recombination allows for the generation of immunoglobulins and T cell receptors to antigens that neither the organism nor its ancestor(s) need to have previously encountered, allowing for an adaptive immune response to novel pathogens that develop or to those that frequently change (e.g., seasonal influenza). However, a major caveat to this process is that the DNA sequence must remain in-frame in order to maintain the correct amino acid sequence in the final protein product. If the resulting sequence is out-of-frame, the development of the cell will be arrested, and the cell will not survive to maturity. V(D)J recombination is therefore a very costly process that must be (and is) strictly regulated and controlled.
See also
B cell receptor
T cell receptor
Basel Institute for Immunology
Charles M. Steinberg
NKT cell
Recombination-activating gene
References
Further reading
V(D)J Recombination. Series: Advances in Experimental Medicine and Biology, Vol. 650 Ferrier, Pierre (Ed.) Landes Bioscience 2009, XII, 199 p.
Immune system
Lymphocytes
Immunology | V(D)J recombination | [
"Biology"
] | 3,053 | [
"Organ systems",
"Immunology",
"Immune system"
] |
22,980,399 | https://en.wikipedia.org/wiki/Head%20injury%20criterion | The head injury criterion (HIC) is a measure of the likelihood of head injury arising from an impact. The HIC can be used to assess safety related to vehicles, personal protective gear, and sport equipment.
Normally the variable is derived from the measurements of an accelerometer mounted at the center of mass of a crash test dummy’s head, when the dummy is exposed to crash forces.
It is defined as:

$$\mathrm{HIC} = \max_{t_1,\,t_2} \left\{ (t_2 - t_1) \left[ \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} a(t)\,\mathrm{d}t \right]^{2.5} \right\}$$
where t1 and t2 are the initial and final times (in seconds) chosen to maximize HIC, and acceleration a is measured in gs (standard gravity acceleration). The time duration, t2 – t1, is limited to a maximum value of 36 ms, usually 15 ms.
This means that the HIC includes the effects of head acceleration and the duration of the acceleration. Large accelerations may be tolerated for very short times.
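The following sketch shows how HIC-15 can be computed from a sampled acceleration trace by a brute-force search over all windows no longer than 15 ms; the half-sine test pulse is made up for illustration, not taken from any crash test.

```python
import numpy as np

def hic(t, a, max_window=0.015):
    """Head injury criterion from sampled data (brute-force window search).

    t: sample times in seconds (increasing); a: resultant head acceleration
    in g; max_window: longest allowed t2 - t1 (15 ms for HIC-15).
    """
    # cumulative trapezoidal integral of a(t): the average over a window
    # [t[i], t[j]] is then (I[j] - I[i]) / (t[j] - t[i])
    I = np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * np.diff(t))))
    best = 0.0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            dt = t[j] - t[i]
            if dt > max_window:
                break
            avg = (I[j] - I[i]) / dt
            best = max(best, dt * avg ** 2.5)
    return best

# hypothetical 10 ms half-sine pulse peaking at 80 g, sampled at 10 kHz
t = np.arange(0.0, 0.02, 1e-4)
a = np.where(t < 0.01, 80.0 * np.sin(np.pi * t / 0.01), 0.0)
print(f"HIC-15 = {hic(t, a):.0f}")
```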
At a HIC of 1000, there is an 18% probability of a severe head injury, a 55% probability of a serious injury and a 90% probability of a moderate head injury to the average adult.
Automobile safety
HIC is used to determine the U.S. National Highway Traffic Safety Administration (NHTSA) star rating for automobile safety and to determine ratings given by the Insurance Institute for Highway Safety.
According to the Insurance Institute for Highway Safety, head injury risk is evaluated mainly on the basis of head injury criterion. A value of 700 is the maximum allowed under the provisions of the U.S. advanced airbag regulation (NHTSA, 2000) and is the maximum score for an "acceptable" IIHS rating for a particular vehicle.
A HIC-15 (meaning a measure of impact over 15 milliseconds) of 700 is estimated to represent a 5 percent risk of a severe injury (Mertz et al., 1997). A "severe" injury is one with a score of 4+ on the Abbreviated Injury Scale (AIS).
Data for specific vehicles can be found on various automotive review websites. Some sample data is as follows, for comparative purposes:
The 1998 Ford Windstar, marketed as one of the safest minivans of that year, tested at HIC=305 for the driver.
A small car, a 1998 Dodge Neon, tested at HIC=265.
A common family sedan, a 1998 Toyota Camry, tested at HIC=288.
A 2007 Camry at HIC=175.
A comprehensive searchable database of vehicles and their HIC scores is available at safercar.gov.
Athletics and recreation
Sport physiologists and biomechanics experts use the HIC in the research of safety equipment and guidelines for competitive sport and recreation. In one study, concussions were found to occur at HIC=250 in most athletes. Studies have been conducted in skiing and other sports to test the adequacy of helmets.
See also
Automobile safety
Crash test
Sports injury
Concussion
Sport-related concussion
Concussion grading systems
References
External links
Use of Head Injury Criterion in Crash Test Ratings
Injury Measurements and Criteria
Saving Lives with Impact Protection Product
Automotive safety
Neurotrauma
Mathematical modeling | Head injury criterion | [
"Mathematics"
] | 625 | [
"Applied mathematics",
"Mathematical modeling"
] |
22,981,287 | https://en.wikipedia.org/wiki/Macroscopic%20quantum%20self-trapping | In quantum mechanics, macroscopic quantum self-trapping is when two Bose–Einstein condensates weakly linked by an energy barrier which particles can tunnel through, nevertheless end up with a higher average number of bosons on one side of the junction than the other. The junction of two Bose–Einstein condensates is mostly analogous to a Josephson junction, which is made of two superconductors linked by a non-conducting barrier. However, superconducting Josephson junctions do not display macroscopic quantum self-trapping, and thus macroscopic quantum self-tunneling is a distinguishing feature of Bose–Einstein condensate junctions. Self-trapping occurs when the self-interaction energy between the Bosons is larger than a critical value called .
It was first described in 1997. It has been observed in Bose–Einstein condensates of exciton-polaritons, and predicted for a condensate of magnons.
While the tunneling of a particle through classically forbidden barriers can be described by the particle's wave function, this merely gives the probability of tunneling. Although various factors can increase or decrease the probability of tunneling, one can not be certain whether or not tunneling will occur.
When two condensates are placed in a double potential well and the phase and population differences are such that the system is in equilibrium, the population difference will remain fixed. A naïve conclusion is that there is no tunneling at all and the bosons are truly "trapped" on one side of the junction. However, macroscopic quantum self-trapping does not rule out quantum tunneling, only the possibility of observing it. Whenever a particle tunnels through the barrier, another particle tunnels in the opposite direction. Because the identity of individual particles is lost in that case, no net tunneling can be observed, and the system is considered to remain at rest.
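The effect can be illustrated with the standard two-mode model of a bosonic Josephson junction (from the 1997 description mentioned above), in which the fractional population imbalance z and relative phase φ obey ż = −√(1−z²) sin φ and φ̇ = Λz + (z/√(1−z²)) cos φ, with Λ the ratio of interaction to tunneling energy. The sketch below uses illustrative parameter values: below the critical coupling z oscillates around zero, while above it z stays pinned near its initial value.

```python
import numpy as np

def simulate(z0, phi0, lam, dt=1e-3, steps=20000):
    """Integrate the two-mode equations with a 4th-order Runge-Kutta step."""
    def f(y):
        z, phi = y
        return np.array([
            -np.sqrt(1.0 - z**2) * np.sin(phi),
            lam * z + z / np.sqrt(1.0 - z**2) * np.cos(phi),
        ])
    y = np.array([z0, phi0])
    trace = np.empty(steps)
    for k in range(steps):
        k1 = f(y); k2 = f(y + 0.5*dt*k1); k3 = f(y + 0.5*dt*k2); k4 = f(y + dt*k3)
        y = y + (dt / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
        trace[k] = y[0]
    return trace

for lam in (1.0, 15.0):                 # below / above the critical coupling
    z = simulate(z0=0.6, phi0=0.0, lam=lam)
    print(f"Lambda = {lam:4.1f}: time-averaged imbalance <z> = {z.mean():+.2f}")
# weak coupling: <z> ~ 0 (Josephson oscillations); strong coupling: <z> stays
# near z0, i.e. macroscopic quantum self-trapping
```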
See also
Double-well potential
Gross–Pitaevskii equation
References
Quantum mechanics
Bose–Einstein condensates | Macroscopic quantum self-trapping | [
"Physics",
"Chemistry",
"Materials_science"
] | 414 | [
"Bose–Einstein condensates",
"Theoretical physics",
"Phases of matter",
"Quantum mechanics",
"Quantum physics stubs",
"Condensed matter physics",
"Matter"
] |
22,981,865 | https://en.wikipedia.org/wiki/Quantum%20tunnelling%20composite | Quantum tunnelling composites (QTCs) are composite materials of metals and non-conducting elastomeric binder, used as pressure sensors. They use quantum tunnelling: without pressure, the conductive elements are too far apart to conduct electricity; when pressure is applied, they move closer and electrons can tunnel through the insulator. The effect is far more pronounced than would be expected from classical (non-quantum) effects alone, as classical electrical resistance is linear (proportional to distance), while quantum tunnelling is exponential with decreasing distance, allowing the resistance to change by a factor of up to 1012 between pressured and unpressured states.
Quantum tunneling composites hold multiple designations in specialized literature, such as: conductive/semi-conductive polymer composite, piezo-resistive sensor and force-sensing resistor (FSR). However, in some cases force-sensing resistors may operate predominantly in a percolation regime; this implies that the composite resistance grows with incrementally applied stress or force.
Introduction
QTCs were discovered in 1996 by technician David Lussey while he was searching for a way to develop an electrically conductive adhesive. Lussey founded Peratech Ltd, a company devoted to research on and usage of QTCs. Peratech Ltd and other companies are working on developing quantum tunneling composites to improve touch technology. Use of QTC is currently restricted by its high cost, but this technology is eventually expected to become available to the general user. Quantum tunneling composites are combinations of polymer composites with elastic, rubber-like properties (elastomers) and metal particles (nickel). Because there is no air gap in the sensor, contamination of, or interference between, the contact points is impossible. There is also little to no chance of arcing (electrical sparks between contact points). In the QTC's inactive state, the conductive elements are too far from one another to pass electron charges; thus, current does not flow when there is no pressure on the quantum tunneling composite. A characteristic of a QTC is its spiky, silicon-covered surface. The spikes do not actually touch, but when a force is applied to the QTC the spikes move closer to each other, and a quantum tunneling effect occurs as a high concentration of electrons flows from one spike tip to the next. The electric current stops when the force is removed.
Types
QTCs come in different forms; each form is used differently but shows a similar resistance change when deformed. QTC pills are the most commonly used type. Pills are pressure-sensitive variable resistors; the amount of electric current passed increases roughly exponentially with the amount of pressure applied. QTC pills can be used as input sensors which respond to an applied force. These pills can also be used in devices to control higher currents than QTC sheets. QTC sheets are composed of three layers: a thin layer of QTC material, a conductive material and a plastic insulator. QTC sheets allow a quick switch from high to low resistance and vice versa.
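A toy model of this pressure-resistance relationship is sketched below; the exponential form follows the qualitative description above, but R0 and the sensitivity constant are invented for illustration, not measured QTC parameters.

```python
import math

R0 = 1e12        # unloaded resistance in ohms (hypothetical value)
K_SENS = 2.8     # sensitivity per newton (hypothetical value)

def qtc_resistance(force_n):
    """Toy model: resistance falls roughly exponentially as force rises."""
    return R0 * math.exp(-K_SENS * force_n)

for force in (0.0, 2.0, 5.0, 10.0):
    print(f"{force:4.1f} N -> {qtc_resistance(force):9.2e} ohm")
# spans ~12 orders of magnitude from 0 to 10 N, matching the up-to-10^12
# resistance swing quoted in the introduction
```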
Applications
In February 2008 the newly formed company QIO Systems Inc gained, in a deal with Peratech, the worldwide exclusive license to the intellectual property and design rights for the electronics and textile touchpads based on QTC technology and for the manufacture and sale of ElekTex (QTC-based) textile touchpads for use in both consumer and commercial applications.
QTCs were used to provide fingertip sensitivity in NASA's Robonaut in 2012. Robonaut was able to survive and send detailed feedback from space. The sensors on the human-like robot were able to tell how hard and where it was gripping something.
Quantum tunneling composites are relatively new and are still being researched and developed.
QTC has been implemented within clothing to make “smart”, touchable membrane control panels to control electronic devices within clothing, e.g. mp3 players or mobile phones. This allows equipment to be operated without removing clothing layers or opening fastenings and makes standard equipment usable in extreme weather or environmental conditions such as Arctic/Antarctic exploration or spacesuits.
The following are possible uses of QTCs:
Sporting materials such as training dummies or fencing jackets can be covered in QTC material. Sensors on the material can relay information on the force of an impact.
Mirror and window operation such as gesture, stroke, or swipe can be used in automotive applications. Depending on the amount of pressure applied from the gesture, the car parts will adjust to the desired setting at either a fast speed or a slow speed. The more pressure is applied, the faster the operation will be.
Blood pressure cuffs: QTCs in blood pressure cuffs reduce inaccurate readings from improper cuff attachment. The sensors tell how much tension is needed to read a person's blood pressure.
References
Electrical components
Quantum electronics | Quantum tunnelling composite | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 1,003 | [
"Electrical components",
"Quantum electronics",
"Quantum mechanics",
"Condensed matter physics",
"Nanotechnology",
"Electrical engineering",
"Components"
] |
22,984,567 | https://en.wikipedia.org/wiki/Venturi%20flume | In hydrology, a Venturi flume is a device used for measuring the rate of flow of a liquid in situations with large flow rates, such as a river. It is based on the Venturi effect, for which it is named. It was first developed by V.M. Cone in Fort Collins, Colorado.
The Venturi flume consists of a flume with a constricted section in the center. By the Venturi effect, this causes a drop in the fluid pressure at the center of the constriction. By comparing the fluid pressure at the center of the flume with that earlier in the device, the rate of flow can be measured.
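An idealized, frictionless estimate of the discharge can be obtained from the Bernoulli (specific energy) equation together with continuity, using the water depths measured upstream and at the throat of a rectangular flume; the sketch below uses made-up channel dimensions, and a real flume would apply an empirical discharge coefficient on top of this.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def flume_discharge(b1, b2, h1, h2):
    """Idealized flow rate (m^3/s) through a rectangular Venturi flume.

    b1, b2: channel widths upstream and at the throat (m);
    h1, h2: measured water depths at those sections (m).
    Uses specific-energy conservation h + v^2/2g plus continuity Q = b*h*v.
    """
    a1, a2 = b1 * h1, b2 * h2                  # flow areas
    v2 = math.sqrt(2.0 * G * (h1 - h2) / (1.0 - (a2 / a1) ** 2))
    return a2 * v2

# hypothetical flume: 2.0 m channel narrowing to a 1.0 m throat
print(f"Q = {flume_discharge(b1=2.0, b2=1.0, h1=1.00, h2=0.85):.2f} m^3/s")
```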
References
Fluid mechanics
Fluid dynamics | Venturi flume | [
"Chemistry",
"Engineering"
] | 143 | [
"Chemical engineering",
"Civil engineering",
"Piping",
"Fluid mechanics",
"Fluid dynamics"
] |
22,987,859 | https://en.wikipedia.org/wiki/Endoreversible%20thermodynamics | Endoreversible thermodynamics is a subset of irreversible thermodynamics aimed at making more realistic assumptions about heat transfer than are typically made in reversible thermodynamics. It gives an upper bound on the power that can be derived from a real process that is lower than that predicted by Carnot for a Carnot cycle, and accommodates the exergy destruction occurring as heat is transferred irreversibly.
It is also called finite-time thermodynamics, entropy generation minimization, or thermodynamic optimization.
History
Endoreversible thermodynamics was discovered multiple times, with Reitlinger (1929), Novikov (1957) and Chambadal (1957), although it is most often attributed to Curzon & Ahlborn (1975).
Reitlinger derived it by considering a heat exchanger receiving heat from a finite hot stream fed by a combustion process.
A brief review of the history of these rediscoveries has been published.
Efficiency at maximal power
Consider a semi-ideal heat engine, in which heat transfer takes time, with the heat flow proportional to the temperature difference, $\dot{Q} = \kappa\,\Delta T$, but all other operations happen instantly.
Its maximal efficiency is the standard Carnot result, but it requires heat transfer to be reversible (quasistatic), thus taking infinite time. At maximum power output, its efficiency is the Chambadal–Novikov efficiency:

$$\eta = 1 - \sqrt{\frac{T_C}{T_H}}$$
Due to occasional confusion about the origins of the above equation, it is sometimes named the Chambadal–Novikov–Curzon–Ahlborn efficiency.
Derivation
This derivation is a slight simplification of Curzon & Ahlborn.
Consider a heat engine with a single working fluid cycling around the engine. On one side, the working fluid has temperature $T_1$ and is in direct contact with the hot heat bath (temperature $T_H$). On the other side, it has temperature $T_2$ and is in direct contact with the cold heat bath (temperature $T_C$).

The heat flow into the engine is $\dot{Q}_1 = \kappa_1 (T_H - T_1)$, where $\kappa_1$ is the heat conduction coefficient of the hot-side contact. The heat flow out of the engine is $\dot{Q}_2 = \kappa_2 (T_2 - T_C)$. The power output of the engine is $P = \dot{Q}_1 - \dot{Q}_2$.

Side note: if one cycle of the engine takes time $\tau$, and during this time it is in contact with the hot side only for a time $\tau_1$, then we can reduce to this case by replacing $\kappa_1$ with $\kappa_1 \tau_1 / \tau$. Similar comments apply to the cold side.

By Carnot's theorem, the reversible interior of the engine satisfies $\dot{Q}_1 / T_1 = \dot{Q}_2 / T_2$. This then gives us a problem of constrained optimization:

$$\max\; P = \dot{Q}_1 - \dot{Q}_2 \quad \text{subject to} \quad \frac{\dot{Q}_1}{T_1} = \frac{\dot{Q}_2}{T_2}$$

This can be solved by typical methods, such as Lagrange multipliers, giving us

$$\frac{T_2}{T_1} = \sqrt{\frac{T_C}{T_H}}$$

at which point the engine is operating at efficiency $\eta = 1 - T_2/T_1 = 1 - \sqrt{T_C/T_H}$.
In particular, if $\kappa_2 \gg \kappa_1$, then at maximum power $T_2 \approx T_C$ and $T_1 = \sqrt{T_H T_C}$. This is often the case with practical heat engines in power generation plants, where the working fluid can spend only a small amount of time with the hot bath (nuclear reactor core, coal furnace, etc.), but a much larger amount of time with the cold bath (open atmosphere, a large body of water, etc.).
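A quick numerical check of this result is sketched below: it scans working-fluid temperatures, uses the endoreversibility constraint $\dot{Q}_1/T_1 = \dot{Q}_2/T_2$ to solve for $T_2$, and confirms that the efficiency at maximum power matches $1 - \sqrt{T_C/T_H}$; the bath temperatures and conduction coefficients are arbitrary illustrative values.

```python
import numpy as np

T_H, T_C = 838.0, 298.0     # bath temperatures in kelvin (illustrative)
K1, K2 = 1.0, 1.0           # heat conduction coefficients (arbitrary)

best_power, best_eta = -1.0, None
for T1 in np.linspace(T_C + 1e-6, T_H - 1e-6, 20000):
    s = K1 * (T_H - T1) / T1          # entropy flow through the engine, Q1/T1
    if s >= K2:                        # cold contact cannot carry this flow
        continue
    T2 = T_C / (1.0 - s / K2)          # from endoreversibility Q1/T1 = Q2/T2
    Q1 = K1 * (T_H - T1)
    Q2 = K2 * (T2 - T_C)
    if Q1 - Q2 > best_power:
        best_power, best_eta = Q1 - Q2, 1.0 - T2 / T1

print(f"efficiency at maximum power: {best_eta:.4f}")
print(f"1 - sqrt(T_C/T_H):           {1.0 - np.sqrt(T_C / T_H):.4f}")
```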
Experimental data
For some typical cycles, the above equation (note that absolute temperatures must be used) gives the following results:

| Power plant | $T_C$ (°C) | $T_H$ (°C) | Carnot efficiency | Endoreversible efficiency | Observed efficiency |
|---|---|---|---|---|---|
| West Thurrock (UK) coal-fired power plant | 25 | 565 | 0.64 | 0.40 | 0.36 |
| CANDU (Canada) nuclear power plant | 25 | 300 | 0.48 | 0.28 | 0.30 |
| Larderello (Italy) geothermal power plant | 80 | 250 | 0.33 | 0.178 | 0.16 |

As shown, the endoreversible efficiency much more closely models the observed data.
However, such an engine violates Carnot's principle, which states that work can be done any time there is a difference in temperature. The fact that the hot and cold reservoirs are not at the same temperature as the working fluid they are in contact with means that work can be, and is, done at the hot and cold reservoirs. The result is tantamount to coupling the high- and low-temperature parts of the cycle, so that the cycle collapses.

In the Carnot cycle, the working fluid must always remain at the temperatures of the heat reservoirs it is in contact with, and these isothermal stages must be separated by adiabatic transformations which prevent thermal contact. The efficiency was first derived by William Thomson in his study of an unevenly heated body in which the adiabatic partitions between bodies at different temperatures are removed and maximum work is performed. It is well known that the final temperature is the geometric mean temperature $\sqrt{T_H T_C}$, so that the efficiency is the Carnot efficiency for an engine working between $T_H$ and $\sqrt{T_H T_C}$.
See also
An introduction to endoreversible thermodynamics is given in the thesis by Katharina Wagner. It is also introduced by Hoffman et al.
A thorough discussion of the concept, together with many applications in engineering, is given in the book by Hans Ulrich Fuchs.
References
Thermodynamics | Endoreversible thermodynamics | [
"Physics",
"Chemistry",
"Mathematics"
] | 912 | [
"Thermodynamics",
"Dynamical systems"
] |
22,989,100 | https://en.wikipedia.org/wiki/Kato%20theorem | The Kato theorem, or Kato's cusp condition (after Japanese mathematician Tosio Kato), is used in computational quantum physics. It states that for generalized Coulomb potentials, the electron density has a cusp at the position of the nuclei, where it satisfies
Here denotes the positions of the nuclei, their atomic number and is the Bohr radius.
For a Coulombic system one can thus, in principle, read off all information necessary for completely specifying the Hamiltonian directly from examining the density distribution. This is also known as E. Bright Wilson's argument within the framework of density functional theory (DFT). The electron density of the ground state of a molecular system contains cusps at the location of the nuclei, and by identifying these from the total electron density of the system, the positions are thus established. From Kato's theorem, one also obtains the nuclear charge of the nuclei, and thus the external potential is fully defined. Finally, integrating the electron density over space gives the number of electrons, and the (electronic) Hamiltonian is defined. This is valid in a non-relativistic treatment within the Born–Oppenheimer approximation, and assuming point-like nuclei.
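For the hydrogen atom the cusp condition can be checked directly, since the exact ground-state density is known in closed form; a minimal numerical sketch (atomic units, so $a_0 = 1$):

```python
import numpy as np

# Hydrogen atom ground state: rho(r) = exp(-2 r / a0) / (pi a0^3); Z = 1.
A0, Z = 1.0, 1.0
r = np.linspace(1e-5, 0.01, 1001)
rho = np.exp(-2.0 * r / A0) / (np.pi * A0**3)

slope_at_nucleus = np.gradient(rho, r)[0]    # d(rho)/dr as r -> 0
kato_prediction = -2.0 * Z / A0 * rho[0]     # cusp-condition value

print(f"numerical slope: {slope_at_nucleus:.4f}")
print(f"Kato prediction: {kato_prediction:.4f}")   # both ~ -2/pi = -0.6366
```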
References
Theorems in quantum mechanics | Kato theorem | [
"Physics",
"Mathematics"
] | 259 | [
"Theorems in quantum mechanics",
"Equations of physics",
"Quantum mechanics",
"Theorems in mathematical physics",
"Quantum physics stubs",
"Physics theorems"
] |
22,989,201 | https://en.wikipedia.org/wiki/Pan-genome | In the fields of molecular biology and genetics, a pan-genome (pangenome or supragenome) is the entire set of genes from all strains within a clade. More generally, it is the union of all the genomes of a clade. The pan-genome can be broken down into a "core pangenome" that contains genes present in all individuals, a "shell pangenome" that contains genes present in two or more strains, and a "cloud pangenome" that contains genes only found in a single strain. Some authors also refer to the cloud genome as "accessory genome" containing 'dispensable' genes present in a subset of the strains and strain-specific genes. Note that the use of the term 'dispensable' has been questioned, at least in plant genomes, as accessory genes play "an important role in genome evolution and in the complex interplay between the genome and the environment". The field of study of pangenomes is called pangenomics.
The genetic repertoire of a bacterial species is much larger than the gene content of an individual strain. Some species have open (or extensive) pangenomes, while others have closed pangenomes. For species with a closed pan-genome, very few genes are added per sequenced genome (after sequencing many strains), and the size of the full pangenome can be theoretically predicted. Species with an open pangenome have enough genes added per additional sequenced genome that predicting the size of the full pangenome is impossible. Population size and niche versatility have been suggested as the most influential factors in determining pan-genome size.
Pangenomes were originally constructed for species of bacteria and archaea, but more recently eukaryotic pan-genomes have been developed, particularly for plant species. Plant studies have shown that pan-genome dynamics are linked to transposable elements. The significance of the pan-genome arises in an evolutionary context, especially with relevance to metagenomics, but is also used in a broader genomics context. An open access book reviewing the pangenome concept and its implications, edited by Tettelin and Medini, was published in the spring of 2020.
Etymology
The term 'pangenome' was defined with its current meaning by Tettelin et al. in 2005; it derives 'pan' from the Greek word παν, meaning 'whole' or 'everything', while the genome is a commonly used term to describe an organism's complete genetic material. Tettelin et al. applied the term specifically to bacteria, whose pangenome "includes a core genome containing genes present in all strains and a dispensable genome composed of genes absent from one or more strains and genes that are unique to each strain."
Parts of the pangenome
Core
Is the part of the pangenome that is shared by every genome in the tested set. Some authors have divided the core pangenome into a hard core, those families of homologous genes with at least one copy of the family shared by every genome (100% of genomes), and a soft core or extended core, those families distributed above a certain threshold (e.g. 90%). In a study involving the pangenomes of Bacillus cereus and Staphylococcus aureus, some isolated from the International Space Station, the thresholds used for segmenting the pangenomes were as follows: "cloud", "shell", and "core" corresponding to gene families present in <10%, 10–95%, and >95% of the genomes, respectively.
The core genome size and its proportion of the pangenome depend on several factors, but especially on the phylogenetic similarity of the considered genomes. For example, the core of two identical genomes would also be the complete pangenome. The core of a genus will always be smaller than the core genome of a species. Genes that belong to the core genome are often related to housekeeping functions and the primary metabolism of the lineage; nevertheless, the core genome can also contain some genes that differentiate the species from other species of the genus, i.e. that may be related to pathogenicity or niche adaptation.
Shell
Is the part of the pangenome shared by the majority of the genomes in a pangenome. There is no universally accepted threshold to define the shell genome; some authors consider a gene family part of the shell pangenome if it is shared by more than 50% of the genomes in the pangenome. A family can become part of the shell through several evolutionary dynamics, for example by gene loss in a lineage where it was previously part of the core genome, as is the case of enzymes in the tryptophan operon in Actinomyces, or by gene gain and fixation of a gene family that was previously part of the dispensable genome, as is the case of the trpF gene in several Corynebacterium species.
Cloud
The cloud genome consists of those gene families shared by a minimal subset of the genomes in the pangenome; it includes singletons, genes present in only one of the genomes. It is also known as the peripheral genome or accessory genome. Gene families in this category are often related to ecological adaptation.
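The partition into core, shell and cloud can be computed directly from a gene-family presence/absence matrix; the sketch below uses the >95% / 10–95% / <10% thresholds quoted above and a made-up toy matrix.

```python
import numpy as np

# toy presence/absence matrix: rows = gene families, columns = 12 genomes
pa = np.array([
    [1] * 12,                                  # in all genomes (100%)
    [1] * 11 + [0],                            # in ~92% of genomes
    [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0],      # in ~33% of genomes
    [1] + [0] * 11,                            # singleton (~8%)
])
freq = pa.mean(axis=1)
labels = np.select([freq > 0.95, freq >= 0.10], ["core", "shell"],
                   default="cloud")

for fam, (f, lab) in enumerate(zip(freq, labels)):
    print(f"gene family {fam}: present in {f:6.1%} of genomes -> {lab}")
```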
Classification
The pan-genome can be somewhat arbitrarily classified as open or closed based on the alpha value of Heaps' law, fitted to the number of gene families found as genomes are added:

$$n = \kappa N^{-\alpha}$$

where:

$n$: number of gene families.
$N$: number of genomes.
$\kappa$: constant of proportionality.
$\alpha$: exponent calculated in order to fit the curve of number of gene families vs. each new genome.

If $\alpha \leq 1$, the pangenome is considered open.
If $\alpha > 1$, the pangenome is considered closed.
Usually, pangenome software can calculate the parameters of Heaps' law that best describe the behavior of the data.
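A minimal version of such a fit is sketched below: regress log(new gene families) on log(genome number) to estimate κ and α, then apply the open/closed criterion; the counts are invented illustration data, not from a real species.

```python
import numpy as np

# invented illustration data: new gene families discovered with each
# additional genome (not from a real species)
N = np.array([2, 4, 8, 16, 32, 64])                 # genome number
new_fams = np.array([560, 340, 210, 130, 78, 48])   # new gene families found

# fit n = kappa * N**(-alpha) by linear regression in log-log space
slope, intercept = np.polyfit(np.log(N), np.log(new_fams), 1)
alpha, kappa = -slope, np.exp(intercept)

print(f"kappa = {kappa:.0f}, alpha = {alpha:.2f}")
print("open pangenome" if alpha <= 1 else "closed pangenome")
```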
Open pangenome
An open pangenome occurs when the number of new gene families in a taxonomic lineage keeps increasing, without appearing to approach an asymptote, regardless of how many new genomes are added to the pangenome. Escherichia coli is an example of a species with an open pangenome. Any E. coli genome contains in the range of 4000–5000 genes, and the pangenome estimated for this species from approximately 2000 genomes comprises about 89,000 different gene families. The pangenome of the domain Bacteria is also considered to be open.
Closed Pangenome
A closed pangenome occurs in a lineage when only a few gene families are added as new genomes are incorporated into the pangenome analysis, and the total number of gene families in the pangenome appears asymptotic to some value. Parasites and species that are specialists in a particular ecological niche are believed to tend toward closed pangenomes. Staphylococcus lugdunensis is an example of a commensal bacterium with a closed pan-genome.
History
Pangenome
The original pangenome concept was developed by Tettelin et al. when they analyzed the genomes of eight isolates of Streptococcus agalactiae, where they described a core genome shared by all isolates, accounting for approximately 80% of any single genome, plus a dispensable genome consisting of partially shared and strain-specific genes. Extrapolation suggested that the gene reservoir in the S. agalactiae pan-genome is vast and that new unique genes would continue to be identified even after sequencing hundreds of genomes. The pangenome comprises the entirety of the genes discovered in the sequenced genomes of a given microbial species and it can change when new genomes are sequenced and incorporated into the analysis.
The pangenome of a genomic lineage accounts for intra-lineage gene content variability. Pangenomes evolve through gene duplication, gene gain and loss dynamics, and interaction of the genome with mobile elements, all shaped by selection and drift. Some studies suggest that prokaryote pangenomes are the result of adaptive, not neutral, evolution, conferring on species the ability to migrate to new niches.
Supergenome
The supergenome can be thought of as the real pangenome size if all genomes from a species were sequenced. It is defined as all genes accessible for being gained by a certain species. It cannot be calculated directly, but its size can be estimated from the pangenome size calculated from the available genome data. Estimating the size of the cloud genome can be difficult because of its dependence on the occurrence of rare genes and genomes. In 2011, genomic fluidity was proposed as a measure to categorize the gene-level similarity among groups of sequenced isolates. In some lineages the supergenome appears effectively infinite, as is the case for the Bacteria domain.
Metapangenome
'Metapangenome' has been defined as the outcome of the analysis of pangenomes in conjunction with the environment where the abundance and prevalence of gene clusters and genomes are recovered through shotgun metagenomes. The combination of metagenomes with pangenomes, also referred to as "metapangenomics", reveals the population-level results of habitat-specific filtering of the pangenomic gene pool.
Other authors consider that Metapangenomics expands the concept of pangenome by incorporating gene sequences obtained from uncultivated microorganisms by a metagenomics approach. A metapangenome comprises both sequences from metagenome-assembled genomes (MAGs) and from genomes obtained from cultivated microorganisms. Metapangenomics has been applied to assess diversity of a community, microbial niche adaptation, microbial evolution, functional activities, and interaction networks of the community. The Anvi'o platform developed a workflow that integrates analysis and visualization of metapangenomes by generating pangenomes and study them in conjunction with metagenomes.
Examples
Prokaryote pangenome
In 2018, 87% of the available whole genome sequences were bacterial, fueling researchers' interest in calculating prokaryote pangenomes at different taxonomic levels. In 2015, the pangenome of 44 strains of Streptococcus pneumoniae bacteria showed few new genes discovered with each new genome sequenced. In fact, the predicted number of new genes dropped to zero when the number of genomes exceeded 50 (note, however, that this is not a pattern found in all species). This would mean that S. pneumoniae has a 'closed pangenome'. The main source of new genes in S. pneumoniae was Streptococcus mitis, from which genes were transferred horizontally. The pan-genome size of S. pneumoniae increased logarithmically with the number of strains and linearly with the number of polymorphic sites of the sampled genomes, suggesting that acquired genes accumulate proportionately to the age of clones. Another example of a prokaryote pan-genome is Prochlorococcus, whose core genome set is much smaller than the pangenome, which is used by different ecotypes of Prochlorococcus. Open pan-genomes have been observed in environmental isolates such as Alcaligenes sp. and Serratia sp., showing a sympatric lifestyle. Nevertheless, an open pangenome is not exclusive to free-living microorganisms: a 2015 study on Prevotella bacteria isolated from humans compared the gene repertoires of its species derived from different body sites of the human. It also reported an open pan-genome showing a vast diversity of the gene pool.
Archaea also have some pangenome studies. The Halobacteria pangenome shows the following gene-family counts in its pangenome subsets: core (300) and variable components (soft core: 998, cloud: 36,531, shell: 11,784).
Eukaryote pangenome
Eukaryote organisms such as fungi, animals and plants have also shown evidence of pangenomes. In four fungal species whose pangenomes have been studied, between 80 and 90% of gene models were found to be core genes. The remaining accessory genes were mainly involved in pathogenesis and antimicrobial resistance.
In animals, the human pangenome is being studied. In 2010 a study estimated that a complete human pan-genome would contain ~19–40 Megabases of novel sequence not present in the extant reference human genome. The Human Pangenome consortium has the goal to acknowledge the human genome diversity. In 2023, a draft human pangenome reference was published. It is based on 47 diploid genomes from persons of varied ethnicity. Plans are underway for an improved reference capturing still more biodiversity from a still wider sample.
Among plants, there are examples of pangenome studies in model species, both diploid and polyploid, and a growing list of crops.
Pangenomes have shown promise as a tool in plant breeding by accounting for structural variants and SNPs in non-reference genomes, which helps to solve the problem of missing heritability that persists in genome wide association studies. An emerging plant-based concept is that of pan-NLRome, which is the repertoire of nucleotide-binding leucine-rich repeat (NLR) proteins, intracellular immune receptors that recognize pathogen proteins and confer disease resistance.
Virus pangenome
Viruses do not necessarily have genes extensively shared across clades, as is the case for 16S in bacteria, and therefore the core genome of the full virus domain is empty. Nevertheless, several studies have calculated the pangenome of some viral lineages. The core genome of six species of pandoraviruses comprises 352 gene families, only 4.7% of the pangenome, resulting in an open pangenome.
Data structures
The number of sequenced genomes is continuously growing "simply scaling up established bioinformatics pipelines will not be sufficient for leveraging the full potential of such rich genomic data sets". Pangenome graphs are emerging data structures designed to represent pangenomes and to efficiently map reads to them. They have been reviewed by Eizenga et al.
Software tools
As interest in pangenomes increased, there have been several software tools developed to help analyze this kind of data.
To start a pangenomic analysis, the first step is the homogenization of genome annotation: the same software, such as GeneMark or RAST, should be used to annotate all genomes used. In 2015, a group reviewed the different kinds of analyses and tools a researcher may have available. There are seven kinds of software developed to analyze pangenomes: those dedicated to clustering homologous genes; identifying SNPs; plotting pangenomic profiles; building phylogenetic relationships of orthologous genes/families of strains/isolates; function-based searching; annotation and/or curation; and visualization.
The two most cited software tools for pangenomic analysis at the end of 2014 were Panseq and the pan-genomes analysis pipeline (PGAP). Other options include BPGA – A Pan-Genome Analysis Pipeline for prokaryotic genomes, GET_HOMOLOGUES, Roary, and PanDelos. In 2015, a review focused on prokaryote pangenomes and another on plant pan-genomes were published. Among the first software packages designed for plant pangenomes were PanTools and GET_HOMOLOGUES-EST. In 2018, panX was released, an interactive web tool that allows inspection of the evolutionary history of gene families. panX can display an alignment of genomes, a phylogenetic tree, a mapping of mutations, and inferences about gain and loss of the family on the core-genome phylogeny. In 2019, OrthoVenn 2.0 allowed comparative visualization of families of homologous genes in Venn diagrams of up to 12 genomes. In 2023, BRIDGEcereal was developed to survey and graph indel-based haplotypes from a pan-genome through a gene model ID.
In 2020, Anvi'o became available as a multiomics platform that contains pangenomic and metapangenomic analyses as well as visualization workflows. In Anvi'o, genomes are displayed in concentric circles and each radius represents a gene family, allowing for comparison of more than 100 genomes in its interactive visualization. Also in 2020, a computational comparison of tools for extracting gene-based pangenomic contents (such as GET_HOMOLOGUES, PanDelos, Roary, and others) was released. Tools were compared from a methodological perspective, analyzing the causes that lead a given methodology to outperform other tools. The analysis was performed by taking into account different bacterial populations, which were synthetically generated by changing evolutionary parameters. Results show a differentiation in the performance of each tool that depends on the composition of the input genomes. Again in 2020, several tools introduced a graphical representation of pangenomes showing the contiguity of genes (PPanGGOLiN, Panaroo).
Other software tools for pangenomics include Prodigal, Prokka, PanVis, PanTools, Pangenome Graph Builder (PGGB), PanX, Pagoo, and pgr-tk.
See also
Metagenomics
Pathogenomics
Quasispecies
Human Pangenome Reference
References
Evolutionary biology
Genomics
Microbiology
Pathogen genomics

Contrast transfer function

The contrast transfer function (CTF) mathematically describes how aberrations in a transmission electron microscope (TEM) modify the image of a sample. The CTF sets the resolution of high-resolution transmission electron microscopy (HRTEM), also known as phase contrast TEM.
By considering the recorded image as a CTF-degraded true object, describing the CTF allows the true object to be reverse-engineered. This is typically denoted CTF-correction, and is vital to obtain high resolution structures in three-dimensional electron microscopy, especially electron cryo-microscopy. Its equivalent in light-based optics is the optical transfer function.
Phase contrast in HRTEM
The contrast in HRTEM comes from interference in the image plane between the phases of scattered electron waves with the phase of the transmitted electron wave. Complex interactions occur when an electron wave passes through a sample in the TEM. Above the sample, the electron wave can be approximated as a plane wave. As the electron wave, or wavefunction, passes through the sample, both the phase and the amplitude of the electron beam are altered. The resultant scattered and transmitted electron beam is then focused by an objective lens, and imaged by a detector in the image plane.
Detectors are only able to measure the amplitude, not the phase directly. However, with the correct microscope parameters, the phase interference can be indirectly measured via the intensity in the image plane. Electrons interact very strongly with crystalline solids. As a result, the phase changes due to very small features, down to the atomic scale, can be recorded via HRTEM.
Contrast transfer theory
Contrast transfer theory provides a quantitative method to translate the exit wavefunction to a final image. Part of the analysis is based on Fourier transforms of the electron beam wavefunction. When an electron wavefunction passes through a lens, the wavefunction goes through a Fourier transform. This is a concept from Fourier optics.
Contrast transfer theory consists of four main operations:
Take the Fourier transform of the exit wave to obtain the wave amplitude in the back focal plane of the objective lens
Modify the wavefunction in reciprocal space by a phase factor, also known as the Phase Contrast Transfer Function, to account for aberrations
Inverse Fourier transform the modified wavefunction to obtain the wavefunction in the image plane
Find the square modulus of the wavefunction in the image plane to find the image intensity (this is the signal that is recorded on a detector, and creates an image)
Mathematical form
If we incorporate some assumptions about our sample, then an analytical expression can be found for both phase contrast and the phase contrast transfer function. As discussed earlier, when the electron wave passes through a sample, the electron beam interacts with the sample via scattering, and experiences a phase shift. This is represented by the electron wavefunction exiting from the bottom of the sample. This expression assumes that the scattering causes a phase shift (and no amplitude shift). This is called the Phase Object Approximation.
The exit wavefunction
Following Wade's notation, the exit wavefunction expression is represented by:
$$\tau(x, z) = \tau_0(x, z)\, e^{\,i\phi(x)}$$
where the exit wavefunction τ is a function of both x, in the plane of the sample, and z, perpendicular to the plane of the sample. $\tau_0$ represents the wavefunction incident on the top of the sample. λ is the wavelength of the electron beam, which is set by the accelerating voltage. $V_z(x)$ is the effective potential of the sample, which depends on the atomic potentials within the crystal, represented by $V(x, z)$.
Within the exit wavefunction, the phase shift is represented by:
$$\phi(x) = \sigma V_z(x) = \sigma \int V(x, z)\,\mathrm{d}z,$$
where σ is the interaction constant, proportional to λ.
This expression can be further simplified by taking into account some more assumptions about the sample. If the sample is considered very thin and a weak scatterer, so that the phase shift satisfies φ ≪ 1, then the wavefunction can be approximated by a linear Taylor polynomial expansion. This approximation is called the Weak Phase Object Approximation.
The exit wavefunction can then be expressed as:
$$\tau(x, z) \approx \tau_0(x, z)\,\left(1 + i\,\sigma V_z(x)\right)$$
The phase contrast transfer function
Passing through the objective lens incurs a Fourier transform and phase shift. As such, the wavefunction on the back focal plane of the objective lens can be represented by:
$$\Psi(\theta) = \delta(\theta) + i\,\Phi(\theta)\, e^{\,i\chi(\theta)}$$
θ = the scattering angle between the transmitted electron wave and the scattered electron wave
δ(θ) = a delta function representing the non-scattered, transmitted, electron wave
Φ(θ) = the Fourier transform of the wavefunction's phase
χ(θ) = the phase shift incurred by the microscope's aberrations, also known as the Contrast Transfer Function:
$$\chi(\theta) = \frac{2\pi}{\lambda}\left(\frac{C_s\,\theta^4}{4} - \frac{\Delta f\,\theta^2}{2}\right)$$
λ = the relativistic wavelength of the electron wave, $C_s$ = the spherical aberration of the objective lens
The contrast transfer function can also be given in terms of spatial frequencies, or reciprocal space. With the relationship θ = λu, the phase contrast transfer function becomes:
$$K(u) = \sin\!\left(\frac{\pi}{2}\, C_s \lambda^3 u^4 - \pi\, \Delta f\, \lambda\, u^2\right)$$
$\Delta f$ = the defocus of the objective lens (using the convention that underfocus is positive and overfocus is negative), λ = the relativistic wavelength of the electron wave, $C_s$ = the spherical aberration of the objective lens, u = the spatial frequency (units of m⁻¹)
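As an illustration of the expression above, the short sketch below evaluates K(u) numerically and locates its first zero crossing (the point resolution). The microscope parameters are assumed, order-of-magnitude values for a 300 kV instrument, not tied to any particular microscope:

```python
import numpy as np

def ctf(u, defocus, cs, wavelength):
    """Phase contrast transfer function sin(chi(u)), underfocus positive.

    u: spatial frequency (1/m); defocus, cs, wavelength in metres.
    """
    chi = 0.5 * np.pi * cs * wavelength**3 * u**4 \
          - np.pi * defocus * wavelength * u**2
    return np.sin(chi)

# Illustrative 300 kV values: lambda ~ 1.97 pm, Cs = 1 mm, 60 nm underfocus.
u = np.linspace(0, 8e9, 1000)            # spatial frequencies up to 8 nm^-1
k = ctf(u, defocus=60e-9, cs=1e-3, wavelength=1.97e-12)

# Zero crossings are contrast inversions; the first one is the point resolution.
signs = np.sign(k)
crossings = u[1:][signs[1:] * signs[:-1] < 0]
print(f"first zero crossing near {crossings[0]:.3e} m^-1")
```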
Spherical aberration
Spherical aberration is a blurring effect arising when a lens is not able to converge incoming rays at higher angles of incidence to the focus point, but rather focuses them to a point closer to the lens. This has the effect of spreading an imaged point (which is ideally imaged as a single point in the gaussian image plane) out over a finite-size disc in the image plane. The measure of aberration given in a plane normal to the optical axis is called a transversal aberration. The size (radius) of the aberration disc in this plane can be shown to be proportional to the cube of the incident angle θ under the small-angle approximation, and the explicit form in this case is
$$r_s = M C_s\, \theta^3$$
where $C_s$ is the spherical aberration and M is the magnification, both effectively being constants of the lens settings. One can then go on to note that the difference in refracted angle between an ideal ray and one which suffers from spherical aberration is
$$\Delta\theta_s = \arctan\!\left(\frac{r + r_s}{z_i}\right) - \arctan\!\left(\frac{r}{z_i}\right)$$
where $z_i$ is the distance from the lens to the gaussian image plane and r is the radial distance from the optical axis to the point on the lens which the ray passed through. Simplifying this further (without applying any approximations) shows that
$$\tan(\Delta\theta_s) = \frac{r_s\, z_i}{r^2 + r\, r_s + z_i^2}$$
Two approximations can now be applied to proceed further in a straightforward manner. They rely on the assumption that both $r_s$ and r are much smaller than $z_i$, which is equivalent to stating that we are considering relatively small angles of incidence and consequently also very small spherical aberrations. Under such an assumption, the two leading terms in the denominator are insignificant, and can be approximated as not contributing. By way of these assumptions we have also implicitly stated that the fraction itself can be considered small, and this results in the elimination of the arctangent function by way of the small-angle approximation;
$$\Delta\theta_s \approx \frac{r_s}{z_i} = \frac{M C_s\, \theta^3}{z_i}$$
If the image is considered to be approximately in focus, and the angle of incidence is again considered small, then $z_i \approx M f$,
meaning that an approximate expression for the difference in refracted angle between an ideal ray and one which suffers from spherical aberration, is given by
$$\Delta\theta_s \approx \frac{C_s\, \theta^3}{f}$$
Defocus
As opposed to the spherical aberration, we will proceed by estimating the deviation of a defocused ray from the ideal by stating the longitudinal aberration; a measure of how much a ray deviates from the focal point along the optical axis. Denoting this distance $\Delta b$, it is possible to show that the difference in refracted angle $\Delta\theta_d$ between rays originating from a focused and a defocused object can be related to the refracted angle as
$$\tan\theta_i = \frac{r}{z_i}, \qquad \tan(\theta_i + \Delta\theta_d) = \frac{r}{z_i - \Delta b}$$
where $z_i$ and r are defined in the same way as they were for spherical aberration. Assuming that $\Delta b \ll z_i$ (or equivalently that $\Delta f \ll f$), we can show that
$$\tan(\theta_i + \Delta\theta_d) - \tan\theta_i \approx \frac{r\,\Delta b}{z_i^2}$$
Since we required $\Delta\theta_d$ to be small, and since θ being small implies $r \ll z_i$, we are given an approximation of $\Delta\theta_d$ as
$$\Delta\theta_d \approx \frac{r\,\Delta b}{z_i^2}$$
From the thin-lens formula it can be shown that $\Delta b \approx M^2\, \Delta f$, yielding a final estimation of the difference in refracted angle between in-focus and off-focus rays as
$$\Delta\theta_d \approx \frac{\Delta f}{f}\,\theta$$
Examples
The contrast transfer function determines how much phase signal gets transmitted to the real space wavefunction in the image plane. As the modulus squared of the real space wavefunction gives the image signal, the contrast transfer function limits how much information can ultimately be translated into an image. The form of the contrast transfer function determines the quality of real space image formation in the TEM.
This is an example contrast transfer function. There are a number of things to note:
The function exists in the spatial frequency domain, or k-space
Whenever the function is equal to zero, that means there is no transmittance, or no phase signal is incorporated into the real space image
The first time the function crosses the x-axis is called the point resolution
To maximize phase signal, it is generally better to use imaging conditions that push the point resolution to higher spatial frequencies
When the function is negative, that represents positive phase contrast, leading to a bright background, with dark atomic features
Every time the CTF crosses the x-axis, there is an inversion in contrast
Accordingly, past the point resolution of the microscope the phase information is not directly interpretable, and must be modeled via computer simulation
Scherzer defocus
The defocus value ($\Delta f$) can be used to counteract the spherical aberration to allow for greater phase contrast. This analysis was developed by Scherzer, and is called the Scherzer defocus.
$$\Delta f_{\mathrm{Scherzer}} = \sqrt{\frac{4}{3}\, C_s\, \lambda} \approx 1.15\,\sqrt{C_s\, \lambda}$$
The variables are the same as from the mathematical treatment section, with $\Delta f_{\mathrm{Scherzer}}$ setting the specific Scherzer defocus, $C_s$ as the spherical aberration, and λ as the relativistic wavelength for the electron wave.
The figure in the following section shows the CTF function for a CM300 microscope at the Scherzer defocus. Compared to the CTF function shown above, there is a larger window, also known as a passband, of spatial frequencies with high transmittance. This allows more phase signal to pass through to the image plane.
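A minimal numeric check of the Scherzer condition, using the (4/3 · C_s λ)^(1/2) form given above; the C_s and wavelength values are illustrative assumptions:

```python
import math

# Scherzer defocus for illustrative microscope parameters (assumed values).
cs = 1.2e-3            # spherical aberration, 1.2 mm
wavelength = 1.97e-12  # ~300 kV electron wavelength, m

df_scherzer = math.sqrt(4.0 / 3.0 * cs * wavelength)
print(f"Scherzer defocus: {df_scherzer * 1e9:.1f} nm underfocus")  # ~56 nm
```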
Envelope function
The envelope function represents the effect of additional aberrations that damp the contrast transfer function, and in turn the phase. The envelope terms comprising the envelope function tend to suppress high spatial frequencies. The exact form of the envelope functions can differ from source to source. Generally, they are applied by multiplying the Contrast Transfer Function by an envelope term Et representing temporal aberrations, and an envelope term Es representing spatial aberrations. This yields a modified, or effective Contrast Transfer Function:
$$K_{\mathrm{eff}}(u) = E_t(u)\, E_s(u)\, K(u)$$
Examples of temporal aberrations include chromatic aberrations, energy spread, focal spread, instabilities in the high voltage source, and instabilities in the objective lens current. An example of a spatial aberration includes the finite incident beam convergence.
As shown in the figure, the most restrictive envelope term will dominate in damping the contrast transfer function. In this particular example, the temporal envelope term is the most restrictive. Because the envelope terms damp more strongly at higher spatial frequencies, there comes a point where no more phase signal can pass through. This is called the Information Limit of the microscope, and is one measure of the resolution.
Modeling the envelope function can give insight into both TEM instrument design, and imaging parameters. By modeling the different aberrations via envelope terms, it is possible to see which aberrations are most limiting the phase signal.
Various software packages have been developed to model both the contrast transfer function and the envelope function for particular microscopes and particular imaging parameters.
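As a rough illustration of how an envelope term imposes an information limit, the sketch below applies a Gaussian focal-spread envelope (one commonly used temporal envelope form) and finds where the damping cuts off the signal. The focal-spread value is an assumption for illustration:

```python
import numpy as np

def temporal_envelope(u, wavelength, focus_spread):
    # Gaussian damping from focal spread: exp(-(pi*lambda*delta)^2 * u^4 / 2)
    return np.exp(-0.5 * (np.pi * wavelength * focus_spread) ** 2 * u**4)

u = np.linspace(0, 1e10, 500)
et = temporal_envelope(u, wavelength=1.97e-12, focus_spread=5e-9)

# The information limit is often taken where the envelope falls to exp(-2).
info_limit = u[np.argmax(et < np.exp(-2))]
print(f"information limit ~ {info_limit:.2e} m^-1")
```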
Linear imaging theory vs. non-linear imaging theory
The previous description of the contrast transfer function depends on linear imaging theory. Linear imaging theory assumes that the transmitted beam is dominant and that there is only a weak phase shift by the sample. In many cases, this precondition is not fulfilled. In order to account for these effects, non-linear imaging theory is required. With strongly scattering samples, diffracted electrons will not only interfere with the transmitted beam, but will also interfere with each other. This produces second-order diffraction intensities. Non-linear imaging theory is required to model these additional interference effects.
Contrary to a widespread assumption, the linear/nonlinear imaging theory has nothing to do with kinematical diffraction or dynamical diffraction, respectively.
Linear imaging theory is still used, however, because it has some computational advantages. In Linear imaging theory, the Fourier coefficients for the image plane wavefunction are separable. This greatly reduces computational complexity, allowing for faster computer simulations of HRTEM images.
See also
Airy disk, different but similar phenomena in light
Optical transfer function
Point spread function
Transmission electron microscopy
References
External links
Contrast transfer function (CTF) correction
Talk on the CTF by Henning Stahlberg
CTF reading list
Interactive CTF Modeling
Microscopes
Protein structure

Current (hydrology)

In hydrology, a current in a water body is the flow of water in any one particular direction. The current varies spatially as well as temporally, dependent upon the flow volume of water, stream gradient, and channel geometry. In tidal zones, the current and streams may reverse on the flood tide before resuming on the ebb tide. On a global scale, wind and the rotation of the earth greatly influence the flow of ocean currents.
In a stream or river, the current is influenced by gravity; the term upstream (or upriver) refers to the direction towards the source of the stream (or river), i.e. against the direction of flow. Likewise, the term downstream or downriver describes the direction towards the mouth of the stream or river, in which the current flows. The terms "left bank" and "right bank" refer to the banks as seen looking in the direction of flow, towards the downstream direction.
References
See also
Coastal geography
Hydrology
Rivers
Water streams

Mulard

The mulard (or moulard) is a hybrid between two different genera of domestic duck: the domestic Muscovy duck (Cairina moschata domestica) and the domestic duck (Anas platyrhynchos domesticus), derived from the wild mallard. American Pekins and other domestic ducks are most commonly used to breed mulards due to the breed's high meat production. Like many interspecific F1 hybrids, mulards are sterile, giving them the nickname mule ducks. While it is possible to produce mulards naturally, artificial insemination is used more often with greater success.
The term mulard or moulard is generally reserved for offspring where the parental drake is a Muscovy and the duck is a Pekin. When the drake is a Pekin, the offspring tend to be smaller and are called hinnies.
Husbandry and production
The mulard is commercially produced on farms for meat and foie gras. The White Muscovy and the Pekin are the two most common purebred, commercially farmed ducks. Hybrids of the two are hardier and calmer, in addition to exhibiting natural hybrid vigor.
The incubation period of the hybrid eggs is between that of the mallard and the Muscovy, with an average of 32 days. About half of the eggs hatch into mulard ducks. Mulards tend to combine certain traits of the parent breeds. Due to their Muscovy heritage, they produce leaner meat than Pekins; females tend to be raised for meat, while males are used for foie gras. Like Muscovy ducks, mulards have claws on their feet, but they do not fly or perch; instead, they prefer to stay near water, as Pekins do.
Traditionally, foie gras was primarily produced with geese, but by the 1960s the majority of farmers began to use mulards. Geese are more expensive to maintain than ducks (they are larger and more aggressive), and the more temperamental Muscovies did not accept the process of gavage (force feeding) as readily as Pekins, causing the quality of the foie gras to suffer. This problem was avoided by the introduction of mulards. These hybrids have also become extremely common in countries where foie gras is not produced.
Today in France, the leading foie gras producer and consumer, the use of hybrid ducks outnumbers the use of geese. In 2007, there were 35 million mulard ducks raised in the country, compared with only 800,000 geese. In addition to Europe and the United States, mulards are widely raised throughout Southeast Asia.
Gallery
See also
List of duck breeds
References
Duck breeds
Bird hybrids
Intergeneric hybrids
Interchange (road)

In the field of road transport, an interchange (American English) or a grade-separated junction (British English) is a road junction that uses grade separations to allow for the movement of traffic between two or more roadways or highways, using a system of interconnecting roadways to permit traffic on at least one of the routes to pass through the junction without interruption from crossing traffic streams. It differs from a standard intersection, where roads cross at grade. Interchanges are almost always used when at least one road is a controlled-access highway (freeway) or a limited-access highway (expressway), though they are sometimes used at junctions between surface streets.
Terminology
Note: The descriptions of interchanges apply to countries where vehicles drive on the right side of the road. For left-side driving, the layout of junctions is mirrored. Both North American (NA) and British (UK) terminology is included.
Freeway junction, highway interchange (NA), or motorway junction (UK)
A type of road junction linking one controlled-access highway (freeway or motorway) facility to another, to other roads, or to a rest area or motorway service area. Junctions and interchanges are often (but not always) numbered either sequentially, or by distance from one terminus of the route (the "beginning" of the route).
The American Association of State Highway and Transportation Officials (AASHTO) defines an interchange as "a system of interconnecting roadways in conjunction with one or more grade separations that provides for the movement of traffic between two or more roadways or highways on different levels."
System interchange
A junction that connects multiple controlled-access highways.
Service interchange
A junction that connects a controlled-access facility to a lower-order facility, such as an arterial or collector road.
The mainline is the controlled-access highway in a service interchange, while the crossroad is the lower-order facility that often includes at-grade intersections or roundabouts, which may pass over or under the mainline.
Complete interchange
A junction where all possible movements between highways can be made from any direction.
Incomplete interchange
A junction that is missing at least one movement between highways.
Ramp (NA), or slip road (UK/Ireland)
A short section of road that allows vehicles to enter or exit a controlled-access highway.
Ingressing traffic is entering the highway via an on-ramp or entrance ramp, while egressing traffic is exiting the highway via an off-ramp or exit ramp.
Directional ramp
A ramp that curves toward the desired direction of travel; i.e., a ramp that makes a left turn exits from the left side of the roadway (a left exit).
Semi-directional ramp
A ramp that exits in a direction opposite from the desired direction of travel, then turns toward the desired direction. Most left turn movements are provided by a semi-directional ramp that exits to the right, rather than exiting from the left.
Weaving
An undesirable situation where traffic entering and exiting a highway must cross paths within a limited distance.
History
The concept of the controlled-access highway developed in the 1920s and 1930s in Italy, Germany, the United States, and Canada. Initially, these roads featured at-grade intersections along their length. Interchanges were developed to provide access between these new highways and heavily-travelled surface streets. The Bronx River Parkway and Long Island Motor Parkway were the first roads to feature grade-separations.
Maryland engineer Arthur Hale filed a patent for the design of a cloverleaf interchange on May 24, 1915, though the conceptual roadwork was not realised until a cloverleaf opened on December 15, 1929, in Woodbridge, New Jersey, connecting New Jersey Route 25 and Route 4 (now U.S. Route 1/9 and New Jersey Route 35). It was designed by Philadelphia engineering firm Rudolph and Delano, based on a design seen in an Argentinian magazine.
System interchange
A system interchange connects multiple controlled-access highways, involving no at-grade signalised intersections.
Four-legged interchanges
Cloverleaf interchange
A cloverleaf interchange is a four-legged junction where left turns across opposing traffic are handled by non-directional loop ramps.
It is named for its appearance from above, which resembles a four-leaf clover.
A cloverleaf is the minimum interchange required for a four-legged system interchange. Although they were commonplace until the 1970s, most highway departments and ministries have sought to rebuild them into more efficient and safer designs.
The cloverleaf interchange was invented by Maryland engineer Arthur Hale, who filed a patent for its design on May 24, 1915.
The first one in North America opened on December 15, 1929, in Woodbridge, New Jersey, connecting New Jersey Route 25 and Route 4 (now U.S. Route 1/9 and New Jersey Route 35). It was designed by Philadelphia engineering firm Rudolph and Delano based on a design seen in an Argentinian magazine.
The first cloverleaf in Canada opened in 1938 at the junction of Highway 10 and what would become the Queen Elizabeth Way.
The first cloverleaf outside of North America opened in Stockholm on October 15, 1935. Nicknamed Slussen, it was referred to as a "traffic carousel" and was considered a revolutionary design at the time of its construction.
A cloverleaf offers uninterrupted connections between two roads but suffers from weaving issues. Along the mainline, a loop ramp introduces traffic prior to a second loop ramp providing access to the crossroad, between which ingress and egress traffic mixes. For this reason, the cloverleaf interchange has fallen out of favour in place of combination interchanges. Some interchanges may be built as half cloverleafs containing ghost ramps, which can be upgraded to full cloverleafs if the road is extended; the junction of US 70 and US 17 west of New Bern, North Carolina, is an example.
Stack interchange
A stack interchange is a four-way interchange whereby a semi-directional left turn and a directional right turn are both available. Usually, access to both turns is provided simultaneously by a single off-ramp. Assuming right-handed driving, to cross over incoming traffic and go left, vehicles first exit onto an off-ramp from the rightmost lane. After demerging from right-turning traffic, they complete their left turn by crossing both highways on a flyover ramp or underpass. The penultimate step is a merge with the right-turn on-ramp traffic from the opposite quadrant of the interchange. Finally, an on-ramp merges both streams of incoming traffic into the left-bound highway. As there is only one off-ramp and one on-ramp (in that respective order), stacks do not suffer from the problem of weaving, and due to the semi-directional flyover ramps and directional ramps, they are generally safe and efficient at handling high traffic volumes in all directions.
A standard stack interchange includes roads on four levels, also known as a 4-level stack, including the two perpendicular highways, and one more additional level for each pair of left-turn ramps. These ramps can be stacked (cross) in various configurations above, below, or between the two interchanging highways. This makes them distinct from turbine interchanges, where pairs of left-turn ramps are separated but at the same level. There are some stacks that could be considered 5-level; however, these remain four-way interchanges, since the fifth level actually consists of dedicated ramps for HOV/bus lanes or frontage roads running through the interchange. The stack interchange between I-10 and I-405 in Los Angeles is a 3-level stack, since the semi-directional ramps are spaced out far enough, so they do not need to cross each other at a single point as in a conventional 4-level stack.
Stacks are significantly more expensive than other four-way interchanges are due to the design of the four levels; additionally, they may suffer from objections of local residents because of their height and high visual impact. Large stacks with multiple levels may have a complex appearance and are often colloquially described as Mixing Bowls, Mixmasters (for a Sunbeam Products brand of electric kitchen mixers), or as Spaghetti Bowls or Spaghetti Junctions (being compared to boiled spaghetti). However, they consume a significantly smaller area of land compared to a cloverleaf interchange.
Combination interchange
A combination interchange (sometimes referred to by the portmanteau cloverstack) is a hybrid of other interchange designs. It uses loop ramps to serve slower or less-occupied traffic flow, and flyover ramps to serve faster and heavier traffic flows.
Where local and express roadways serve the same directions and each roadway connects to the interchange on the right-hand side, extra ramps are installed. The combination interchange design is commonly used to upgrade cloverleaf interchanges to increase their capacity and eliminate weaving.
Turbine interchange
The turbine interchange is an alternative four-way directional interchange. The turbine interchange requires fewer levels (usually two or three) while retaining directional ramps throughout. It features right-exit, left-turning ramps that sweep around the center of the interchange in a clockwise spiral. A full turbine interchange features a minimum of 18 overpasses, and requires more land to construct than a four-level stack interchange; however, the bridges are generally short in length. Coupled with reduced maintenance costs, a turbine interchange is a less costly alternative to a stack.
Windmill interchange
A windmill interchange is similar to a turbine interchange, but it has much sharper turns, reducing its size and capacity. The interchange is named for its similar overhead appearance to the blades of a windmill.
A variation of the windmill, called the diverging windmill, increases capacity by altering the direction of traffic flow of the interchanging highways, making the connecting ramps much more direct. There also is a hybrid interchange somewhat like the diverging windmill in which left turn exits merge on the left, but it differs in that the left turn exits use left directional ramps.
Braided interchange
A braided or diverging interchange is a two-level, four-way interchange. An interchange is braided when at least one of the roadways reverses sides. It seeks to make left and right turns equally easy. In a pure braided interchange, each roadway has one right exit, one left exit, one right on-ramp, and one left on-ramp, and both roadways are flipped.
The first pure braided interchange was built in Baltimore at Interstate 95 at Interstate 695; however, the interchange was reconfigured in 2008 to a traditional stack interchange.
Examples
Interstate 65 and Interstate 20/Interstate 59 in Birmingham, Alabama
Interstate 196 and U.S. Route 131 in Grand Rapids, Michigan
Interstate 77 and Interstate 85 in Charlotte, North Carolina
Eastern Ring Road and Southern Ring Branch Road, Riyadh
Three-level roundabout
A three-level roundabout interchange features a grade-separated roundabout which handles traffic exchanging between highways.
The ramps of the interchanging highways meet at a roundabout, or rotary, on a separated level above, below, or in the middle of the two highways.
Three-legged interchanges
These interchanges can also be used to provide a "linking road" to a destination as a service interchange, or to create a new basic road as a service interchange.
Trumpet interchange
Trumpet interchanges may be used where one highway terminates at another highway, and are named as such for their resemblance to trumpets. They are sometimes called jug handles.
These interchanges are very common on toll roads, as they concentrate all entering and exiting traffic into a single stretch of roadway, where toll plazas can be installed once to handle all traffic, especially on ticket-based tollways. A double-trumpet interchange can be found where a toll road meets another toll road or a free highway. They are also useful when most traffic on the terminating highway is going in the same direction. The turn that is used less often would contain the slower loop ramp.
Trumpet interchanges are often used instead of directional or semi-directional T or Y interchanges because they require less bridge construction but still eliminate weaving.
T and Y interchanges
A full Y interchange (also known as a directional T interchange) is typically used when a three-way interchange is required for two or three highways interchanging in semi-parallel/perpendicular directions, but it can also be used in the right-angle case as well. Their connecting ramps can spur from either the right or left side of the highway, depending on the direction of travel and the angle.
Directional T interchanges use flyover/underpass ramps for both connecting and mainline segments, and they require a moderate amount of land and moderate costs since only two levels of roadway are typically used. Their name derives from their resemblance to the capital letter T, depending upon the angle from which the interchange is seen and the alignment of the roads that are interchanging. It is sometimes known as the "New England Y", as this design is often seen in the northeastern United States, particularly in Connecticut.
This type of interchange features directional ramps (no loops, or weaving right to turn left) and can use multilane ramps in comparatively little space. Some designs have two ramps and the "inside" through road (on the same side as the freeway that ends) crossing each other at a three-level bridge. The directional T interchange is preferred to a trumpet interchange because a trumpet requires a loop ramp by which speeds can be reduced, but flyover ramps can handle much faster speeds. The disadvantage of the directional T is that traffic from the terminating road enters and leaves on the passing lane, so the semi-directional T interchange (see below) is preferred.
The interchange of Highway 416 and Highway 417 in Ontario, constructed in the early 1990s, is one of the few directional T interchanges, as most transportation departments had switched to the semi-directional T design.
As with a directional T interchange, a semi-directional T interchange uses flyover (overpass) or underpass ramps in all directions at a three-way interchange. However, in a semi-directional T, some of the splits and merges are switched to avoid ramps to and from the passing lane, eliminating the major disadvantage of the directional T. Semi-directional T interchanges are generally safe and efficient, though they do require more land and are costlier than trumpet interchanges.
Semi-directional T interchanges are built as two- or three-level junctions, with three-level interchanges typically used in urban or suburban areas where land is more expensive. In a three-level semi-directional T, the two semi-directional ramps from the terminating highway cross the surviving highway at or near a single point, which requires both an overpass and underpass. In a two-level semi-directional T, the two semi-directional ramps from the terminating highway cross each other at a different point than the surviving highway, necessitating longer ramps and often one ramp having two overpasses. Highway 412 has a three-level semi-directional T at Highway 407 and a two-level semi-directional T at Highway 401.
Service interchange
Service interchanges are used between a controlled-access route and a crossroad that is not controlled-access. A full cloverleaf may be used as a system or a service interchange.
Diamond interchange
A diamond interchange is an interchange involving four ramps where they enter and leave the freeway at a small angle and meet the non-freeway at almost right angles. These ramps at the non-freeway can be controlled through stop signs, traffic signals, or turn ramps.
Diamond interchanges are much more economical in use of materials and land than other interchange designs, as the junction does not normally require more than one bridge to be constructed. However, their capacity is lower than other interchanges and when traffic volumes are high they can easily become congested.
Double roundabout diamond
A double roundabout diamond interchange, also known as a dumbbell interchange or a dogbone interchange, is similar to the diamond interchange, but uses a pair of roundabouts in place of intersections to join the highway ramps with the crossroad. This typically increases the efficiency of the interchange when compared to a diamond, but is only ideal in light traffic conditions. In the dogbone variation, the roundabouts do not form a complete circle, instead having a teardrop shape, with the points facing towards the center of the interchange. Longer ramps are often required due to line-of-sight requirements at roundabouts.
Partial cloverleaf interchange
A partial cloverleaf interchange (often shortened to the portmanteau parclo) is an interchange with loop ramps in one to three quadrants, and diamond interchange ramps in any number of quadrants. The various configurations are generally a safer modification of the cloverleaf design, due to a partial or complete reduction in weaving, but may require traffic lights on the lesser-travelled crossroad. Depending on the number of ramps used, they take up a moderate to large amount of land, and have varying capacity and efficiency.
Parclo configurations are given names based on the location of and number of quadrants with ramps. The letter A denotes that, for traffic on the controlled-access highway, the loop ramps are located in advance of (or approaching) the crossroad, and thus provide an onramp to the highway. The letter B indicates that the loop ramps are beyond the crossroad, and thus provide an offramp from the highway. These letters can be used together when opposite directions of travel on the controlled-access highway are not symmetrical; thus a parclo AB features a loop ramp approaching the crossroad in one direction, and beyond the crossroad in the opposing direction, as in the example image.
Diverging diamond interchange
A diverging diamond interchange (DDI) or double crossover diamond interchange (DCD) is similar to a traditional diamond interchange, except the opposing lanes on the crossroad cross each other twice, once on each side of the highway. This allows all highway entrances and exits to avoid crossing the opposite direction of travel and saves one signal phase of traffic lights each.
The first DDIs were constructed in the French communities of Versailles (A13 at D182), Le Perreux-sur-Marne (A4 at N486) and Seclin (A1 at D549) in the 1970s. Despite the fact that such interchanges already existed, the idea for the DDI was "reinvented" around 2000, inspired by the freeway-to-freeway interchange between Interstate 95 and I-695 north of Baltimore. The first DDI in the United States opened on July 7, 2009, in Springfield, Missouri, at the junction of Interstate 44 and Missouri Route 13.
Single-point urban interchange
A single-point urban interchange (SPUI) or single-point diamond interchange (SPDI) is a modification of a diamond interchange in which all four ramps to and from a controlled-access highway converge at a single, three-phase traffic light in the middle of an overpass or underpass. While the compact design is safer, more efficient, and offers increased capacity—with three light phases as opposed to four in a traditional diamond, and two left turn queues on the arterial road instead of four—the significantly wider overpass or underpass structure makes them more costly than most service interchanges.
Since single-point urban interchanges can also exist in rural areas, such as the interchange of U.S. Route 23 with M-59 in Michigan, the term single-point diamond interchange is considered the more accurate phrasing.
Single-point interchanges were first built in the early 1970s along U.S. Route 19 in the Tampa Bay area of Florida, including the SR 694 interchange in St. Petersburg and SR 60 in Clearwater.
See also
Free-flow interchange
Grade separation
Intersection (road)
Junction (traffic)
Unused highway
Ramp meter
Roundabout
Notes
References
External links
Glossary Part of the publication Highway Design Handbook for Older Drivers and Pedestrians by the Turner-Fairbank Highway Research Center branch of the U.S. Federal Highway Administration
Kurumi.com U.S. interchanges directory
Detailed history of interchanges with diagrams
How New Jersey Saved Civilization: The first cloverleaf interchange
Bridges
Road infrastructure
Road junction types

Carrier generation and recombination

In solid-state physics of semiconductors, carrier generation and carrier recombination are processes by which mobile charge carriers (electrons and electron holes) are created and eliminated. Carrier generation and recombination processes are fundamental to the operation of many optoelectronic semiconductor devices, such as photodiodes, light-emitting diodes and laser diodes. They are also critical to a full analysis of p-n junction devices such as bipolar junction transistors and p-n junction diodes.
The electron–hole pair is the fundamental unit of generation and recombination in inorganic semiconductors, corresponding to an electron transitioning between the valence band and the conduction band: generation of an electron–hole pair is a transition from the valence band to the conduction band, while recombination is the reverse transition.
Overview
Like other solids, semiconductor materials have an electronic band structure determined by the crystal properties of the material. Energy distribution among electrons is described by the Fermi level and the temperature of the electrons. At absolute zero temperature, all of the electrons have energy below the Fermi level; but at non-zero temperatures the energy levels are filled following a Fermi-Dirac distribution.
In undoped semiconductors the Fermi level lies in the middle of a forbidden band or band gap between two allowed bands called the valence band and the conduction band. The valence band, immediately below the forbidden band, is normally very nearly completely occupied. The conduction band, above the Fermi level, is normally nearly completely empty. Because the valence band is so nearly full, its electrons are not mobile, and cannot flow as electric current.
However, if an electron in the valence band acquires enough energy to reach the conduction band as a result of interaction with other electrons, holes, photons, or the vibrating crystal lattice itself, it can flow freely among the nearly empty conduction band energy states. Furthermore, it will also leave behind a hole that can flow like a physically charged particle.
Carrier generation describes processes by which electrons gain energy and move from the valence band to the conduction band, producing two mobile carriers; while recombination describes processes by which a conduction band electron loses energy and re-occupies the energy state of an electron hole in the valence band.
These processes must conserve both quantized energy and crystal momentum; the vibrating lattice plays a large role in conserving momentum because, as in collisions, photons can transfer very little momentum in relation to their energy.
Relation between generation and recombination
Recombination and generation are always happening in semiconductors, both optically and thermally. As predicted by thermodynamics, a material at thermal equilibrium will have generation and recombination rates that are balanced so that the net charge carrier density remains constant. The resulting probability of occupation of energy states in each energy band is given by Fermi–Dirac statistics.
The product of the electron and hole densities (n and p) is a constant at equilibrium, maintained by recombination and generation occurring at equal rates. When there is a surplus of carriers (i.e., $np > n_0 p_0$), the rate of recombination becomes greater than the rate of generation, driving the system back towards equilibrium. Likewise, when there is a deficit of carriers (i.e., $np < n_0 p_0$), the generation rate becomes greater than the recombination rate, again driving the system back towards equilibrium. As the electron moves from one energy band to another, the energy and momentum that it has lost or gained must go to or come from the other particles involved in the process (e.g. photons, electrons, or the system of vibrating lattice atoms).
Carrier generation
When light interacts with a material, it can either be absorbed (generating a pair of free carriers or an exciton) or it can stimulate a recombination event. The generated photon has similar properties to the one responsible for the event. Absorption is the active process in photodiodes, solar cells and other semiconductor photodetectors, while stimulated emission is the principle of operation in laser diodes.
Besides light excitation, carriers in semiconductors can also be generated by an external electric field, for example in light-emitting diodes and transistors.
When light with sufficient energy hits a semiconductor, it can excite electrons across the band gap. This generates additional charge carriers, temporarily lowering the electrical resistance of materials. This higher conductivity in the presence of light is known as photoconductivity. This conversion of light into electricity is widely used in photodiodes.
Recombination mechanisms
Carrier recombination can happen through multiple relaxation channels. The main ones are band-to-band recombination, Shockley–Read–Hall (SRH) trap-assisted recombination, Auger recombination and surface recombination. These decay channels can be separated into radiative and non-radiative. The latter occurs when the excess energy is converted into heat by phonon emission after the mean lifetime $\tau_{nr}$, whereas in the former at least part of the energy is released by light emission or luminescence after a radiative lifetime $\tau_r$. The carrier lifetime $\tau$ is then obtained from the rates of both types of events according to:
$$\frac{1}{\tau} = \frac{1}{\tau_r} + \frac{1}{\tau_{nr}}$$
From which we can also define the internal quantum efficiency, or quantum yield, $\eta$ as:
$$\eta = \frac{1/\tau_r}{1/\tau_r + 1/\tau_{nr}} = \frac{\tau}{\tau_r} \le 1$$
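A minimal numeric sketch of these two relations, with illustrative lifetimes:

```python
# Internal quantum efficiency from radiative and non-radiative lifetimes
# (illustrative numbers, not measured values).
tau_r = 10e-9    # radiative lifetime, 10 ns
tau_nr = 2e-9    # non-radiative lifetime, 2 ns

tau = 1.0 / (1.0 / tau_r + 1.0 / tau_nr)   # total carrier lifetime
eta = tau / tau_r                          # internal quantum efficiency
print(f"tau = {tau * 1e9:.2f} ns, quantum yield = {eta:.2f}")
# Fast non-radiative decay (tau_nr << tau_r) drags the yield down.
```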
Radiative recombination
Band-to-band radiative recombination
Band-to-band recombination is the name for the process of electrons jumping down from the conduction band to the valence band in a radiative manner. During band-to-band recombination, a form of spontaneous emission, the energy absorbed by a material is released in the form of photons. Generally these photons contain the same or less energy than those initially absorbed. This effect is how LEDs create light. Because the photon carries relatively little momentum, radiative recombination is significant only in direct bandgap materials. This process is also known as bimolecular recombination.
This type of recombination depends on the density of electrons and holes in the excited state, denoted by n and p respectively. Let us represent the radiative recombination rate as $R_r$ and the carrier generation rate as G.

Total generation is the sum of thermal generation $G_0$ and generation due to light shining on the semiconductor $G_L$:
$$G = G_0 + G_L$$
Here we will consider the case in which there is no illumination on the semiconductor. Therefore $G_L = 0$ and $G = G_0$, and we can express the change in carrier density as a function of time as
$$\frac{\mathrm{d}n}{\mathrm{d}t} = G - R_r = G_0 - R_r$$
Because the rate of recombination is affected by both the concentration of free electrons and the concentration of holes that are available to them, we know that $R_r$ should be proportional to np:
$$R_r \propto np$$
and we add a proportionality constant $B_r$ to eliminate the proportionality sign:
$$R_r = B_r\, np$$
If the semiconductor is in thermal equilibrium, the rate at which electrons and holes recombine must be balanced by the rate at which they are generated by the spontaneous transition of an electron from the valence band to the conduction band. The recombination rate $R_{r0}$ must be exactly balanced by the thermal generation rate $G_0$.

Therefore:
$$R_{r0} = G_0 = B_r\, n_0 p_0$$
where $n_0$ and $p_0$ are the equilibrium carrier densities.

Using the mass action law $n_0 p_0 = n_i^2$, with $n_i$ being the intrinsic carrier density, we can rewrite it as
$$R_{r0} = G_0 = B_r\, n_i^2$$
The non-equilibrium carrier densities are given by
$$n = n_0 + \Delta n, \qquad p = p_0 + \Delta p$$
Then the new recombination rate becomes,
$$R_r = B_r\, np = B_r\,(n_0 + \Delta n)(p_0 + \Delta p)$$
Because $\Delta n = \Delta p$ (carriers are generated and recombine in pairs) and $(n_0 + p_0)\,\Delta p \gg \Delta p^2$ (low-level injection), we can say that
$$R_r \approx B_r\left(n_0 p_0 + (n_0 + p_0)\,\Delta p\right)$$
In an n-type semiconductor,
$$n_0 \gg p_0$$
and
$$n_0 \gg \Delta p$$
thus
$$R_r \approx B_r\left(n_0 p_0 + n_0\, \Delta p\right)$$
Net recombination is the rate at which excess holes disappear
$$-\frac{\mathrm{d}\,\Delta p}{\mathrm{d}t} = R_r - G_0 = B_r\, n_0\, \Delta p$$
Solve this differential equation to get a standard exponential decay
$$\Delta p(t) = p_{\max}\, e^{-B_r n_0 t}$$
where $p_{\max}$ is the maximum excess hole concentration when t = 0 (its precise value in terms of the preceding excitation conditions is not discussed here).

When $t \to \infty$, all of the excess holes will have disappeared. Therefore, we can define the lifetime of the excess holes in the material
$$\tau_p = \frac{1}{B_r\, n_0}$$
So the lifetime of the minority carrier is dependent upon the majority carrier concentration.
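The decay law and lifetime above can be illustrated numerically; the radiative coefficient and carrier densities below are assumed, order-of-magnitude values:

```python
import numpy as np

# Decay of excess holes in an n-type sample after excitation is switched off
# (illustrative values; Br is of the order found in direct-gap materials).
Br = 1e-16      # radiative coefficient, m^3/s (assumed)
n0 = 1e22       # majority electron density, m^-3
p_max = 1e18    # initial excess hole density, m^-3

tau_p = 1.0 / (Br * n0)                  # minority-carrier lifetime
t = np.linspace(0, 5 * tau_p, 6)
dp = p_max * np.exp(-t / tau_p)          # standard exponential decay
for ti, pi in zip(t, dp):
    print(f"t = {ti * 1e9:7.1f} ns  ->  excess holes = {pi:.2e} m^-3")
```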
Stimulated emission
Stimulated emission is a process where an incident photon interacts with an excited electron, causing it to recombine and emit a photon with the same properties as the incident photon, in terms of phase, frequency, polarization, and direction of travel. Stimulated emission together with the principle of population inversion are at the heart of operation of lasers and masers. It was shown by Einstein at the beginning of the twentieth century that if the excited and the ground level are non-degenerate then the absorption rate and the stimulated emission rate are the same. Else if level 1 and level 2 are $g_1$-fold and $g_2$-fold degenerate respectively, the new relation is:
$$g_1\, B_{12} = g_2\, B_{21}$$
where $B_{12}$ and $B_{21}$ are the Einstein coefficients for absorption and stimulated emission, respectively.
Trap emission
Trap emission is a multistep process wherein a carrier falls into defect-related wave states in the middle of the bandgap. A trap is a defect capable of holding a carrier. The trap emission process recombines electrons with holes and emits photons to conserve energy. Due to the multistep nature of trap emission, a phonon is also often emitted. Trap emission can proceed by use of bulk defects or surface defects.
Non-radiative recombination
Non-radiative recombination is a process in phosphors and semiconductors, whereby charge carriers recombine releasing phonons instead of photons. Non-radiative recombination in optoelectronics and phosphors is an unwanted process, lowering the light generation efficiency and increasing heat losses.
Non-radiative life time is the average time before an electron in the conduction band of a semiconductor recombines with a hole. It is an important parameter in optoelectronics where radiative recombination is required to produce a photon; if the non-radiative life time is shorter than the radiative, a carrier is more likely to recombine non-radiatively. This results in low internal quantum efficiency.
Shockley–Read–Hall (SRH)
In Shockley-Read-Hall recombination (SRH), also called trap-assisted recombination, the electron in transition between bands passes through a new energy state (localized state) created within the band gap by a dopant or a defect in the crystal lattice; such energy states are called traps. Non-radiative recombination occurs primarily at such sites. The energy is exchanged in the form of lattice vibration, a phonon exchanging thermal energy with the material.
Since traps can absorb differences in momentum between the carriers, SRH is the dominant recombination process in silicon and other indirect bandgap materials. However, trap-assisted recombination can also dominate in direct bandgap materials under conditions of very low carrier densities (very low level injection) or in materials with high density of traps such as perovskites. The process is named after William Shockley, William Thornton Read and Robert N. Hall, who published it in 1952.
Types of traps
Electron traps vs. hole traps
Even though all the recombination events can be described in terms of electron movements, it is common to visualize the different processes in terms of excited electron and the electron holes they leave behind. In this context, if trap levels are close to the conduction band, they can temporarily immobilize excited electrons or in other words, they are electron traps. On the other hand, if their energy lies close to the valence band they become hole traps.
Shallow traps vs. deep traps
The distinction between shallow and deep traps is commonly made depending on how close electron traps are to the conduction band and how close hole traps are to the valence band. If the difference between trap and band is smaller than the thermal energy kBT it is often said that it is a shallow trap. Alternatively, if the difference is larger than the thermal energy, it is called a deep trap. This difference is useful because shallow traps can be emptied more easily and thus are often not as detrimental to the performance of optoelectronic devices.
SRH model
In the SRH model, four things can happen involving trap levels:
An electron in the conduction band can be trapped in an intragap state.
An electron can be emitted into the conduction band from a trap level.
A hole in the valence band can be captured by a trap. This is analogous to a filled trap releasing an electron into the valence band.
A captured hole can be released into the valence band. Analogous to the capture of an electron from the valence band.
When carrier recombination occurs through traps, we can replace the valence density of states by that of the intragap state. The term p is replaced by the density of trapped electrons/holes $N_t f_t$, where $N_t$ is the density of trap states and $f_t$ is the probability of a trap state being occupied. Considering a material containing both types of traps, we can define two trapping coefficients $B_n$, $B_p$ and two de-trapping coefficients $E_n$, $E_p$. In equilibrium, both trapping and de-trapping should be balanced ($E_n = B_n n_1$ and $E_p = B_p p_1$). Then, the four rates as a function of $f_t$ become:
$$r_a = B_n\, n\, N_t (1 - f_t) \quad \text{(electron capture)}$$
$$r_b = B_n\, n_1\, N_t f_t \quad \text{(electron emission)}$$
$$r_c = B_p\, p\, N_t f_t \quad \text{(hole capture)}$$
$$r_d = B_p\, p_1\, N_t (1 - f_t) \quad \text{(hole emission)}$$
where $n_1$ and $p_1$ are the electron and hole densities when the quasi-Fermi level matches the trap energy.

In steady-state conditions, the net recombination rate of electrons should match the net recombination rate for holes, in other words: $r_a - r_b = r_c - r_d$. This eliminates the occupation probability $f_t$ and leads to the Shockley–Read–Hall expression for the trap-assisted recombination:
$$R_{\mathrm{SRH}} = \frac{np - n_i^2}{\tau_p\,(n + n_1) + \tau_n\,(p + p_1)}$$
where the average lifetimes for electrons and holes are defined as:
$$\tau_n = \frac{1}{B_n N_t}, \qquad \tau_p = \frac{1}{B_p N_t}$$
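A small sketch evaluating the SRH expression for a silicon-like example; all parameter values are illustrative assumptions (the trap is taken at midgap, so n₁ = p₁ = n_i):

```python
# Evaluating the Shockley-Read-Hall rate (illustrative, silicon-like numbers).
ni = 1e16        # intrinsic density, m^-3 (~silicon at 300 K)
n1 = p1 = ni     # trap assumed at midgap, so n1 = p1 = ni
tau_n = 1e-6     # electron lifetime, s
tau_p = 1e-6     # hole lifetime, s

n = 1e22         # n-type doping level, m^-3
p = 1e18         # excess holes present, m^-3

r_srh = (n * p - ni**2) / (tau_p * (n + n1) + tau_n * (p + p1))
print(f"SRH recombination rate: {r_srh:.2e} m^-3 s^-1")
# With n >> p, the rate reduces to ~ p / tau_p: minority-carrier limited.
```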
Auger recombination
In Auger recombination the energy is given to a third carrier which is excited to a higher energy level without moving to another energy band. After the interaction, the third carrier normally loses its excess energy to thermal vibrations. Since this process is a three-particle interaction, it is normally only significant in non-equilibrium conditions when the carrier density is very high. The Auger effect process is not easily produced, because the third particle would have to begin the process in the unstable high-energy state.
In thermal equilibrium the Auger recombination rate $R_{A0}$ and thermal generation rate $G_0$ equal each other
$$R_{A0} = G_0 = C_n\, n_0^2\, p_0 + C_p\, n_0\, p_0^2$$
where $C_n$ and $C_p$ are the Auger capture probabilities. The non-equilibrium Auger recombination rate $R_A$ and resulting net recombination rate under steady-state conditions are
$$R_A = C_n\, n^2 p + C_p\, n p^2,$$
$$R = R_A - G_0 = C_n\left(n^2 p - n_0^2 p_0\right) + C_p\left(n p^2 - n_0 p_0^2\right)$$
The Auger lifetime $\tau_A$ is given by
$$\tau_A = \frac{\Delta n}{R}$$
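A rough numeric illustration of the Auger-limited lifetime in heavily doped n-type material, where the net rate reduces to approximately C_n n₀² Δp and hence τ_A ≈ 1/(C_n n₀²); the coefficient is an assumed order of magnitude:

```python
# Auger-limited lifetime in heavily doped n-type material
# (coefficient value is an assumed order of magnitude).
Cn = 1e-43       # Auger coefficient, m^6/s
n0 = 1e25        # heavy n-type doping, m^-3

tau_auger = 1.0 / (Cn * n0**2)
print(f"Auger lifetime ~ {tau_auger * 1e9:.1f} ns")
# Because the rate scales with n^2 * p, Auger dominates at high carrier density.
```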
The mechanism causing LED efficiency droop was identified in 2007 as Auger recombination, which met with a mixed reaction. In 2013, an experimental study claimed to have identified Auger recombination as the cause of efficiency droop. However, it remains disputed whether the amount of Auger loss found in this study is sufficient to explain the droop. Other frequently quoted evidence against Auger as the main droop-causing mechanism is the low-temperature dependence of this mechanism, which is the opposite of that found for the droop.
Surface recombination
Trap-assisted recombination at the surface of a semiconductor is referred to as surface recombination. This occurs when traps at or near the surface or interface of the semiconductor form due to dangling bonds caused by the sudden discontinuation of the semiconductor crystal. Surface recombination is characterized by surface recombination velocity which depends on the density of surface defects. In applications such as solar cells, surface recombination may be the dominant mechanism of recombination due to the collection and extraction of free carriers at the surface. In some applications of solar cells, a layer of transparent material with a large band gap, also known as a window layer, is used to minimize surface recombination. Passivation techniques are also employed to minimize surface recombination.
Langevin recombination
For free carriers in low-mobility systems, the recombination rate is often described with the Langevin recombination rate. The model is often used for disordered systems such as organic materials (and is hence relevant for organic solar cells) and other such systems. The Langevin recombination strength is defined as γ = q(μn + μp)/ε, where q is the elementary charge, μn and μp are the electron and hole mobilities, and ε is the permittivity of the material.
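A short sketch evaluating the Langevin coefficient; the mobility and permittivity values are assumed placeholders typical of a disordered organic film, not measurements:

```python
# Langevin recombination coefficient for a low-mobility organic semiconductor
q = 1.602e-19       # elementary charge, C
eps0 = 8.854e-12    # vacuum permittivity, F/m
eps_r = 3.5         # assumed relative permittivity of an organic film
mu_n = 1.0e-8       # electron mobility, m^2/(V s)
mu_p = 1.0e-8       # hole mobility, m^2/(V s)

gamma = q * (mu_n + mu_p) / (eps0 * eps_r)   # Langevin strength, m^3/s
n = p = 1.0e22                                # carrier densities, m^-3
R = gamma * n * p                             # bimolecular recombination rate
print(f"gamma = {gamma:.2e} m^3/s, R = {R:.2e} m^-3 s^-1")
```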
See also
Auger effect
Cage effect
References
Further reading
N.W. Ashcroft and N.D. Mermin, Solid State Physics, Brooks Cole, 1976
External links
PV Lighthouse Recombination Calculator
PV Lighthouse Band Gap Calculator
PV Education
Semiconductors
Optoelectronics
Charge carriers | Carrier generation and recombination | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,353 | [
"Electrical resistance and conductance",
"Physical phenomena",
"Physical quantities",
"Charge carriers",
"Semiconductors",
"Materials",
"Electrical phenomena",
"Electronic engineering",
"Condensed matter physics",
"Solid state engineering",
"Matter"
] |
2,081,987 | https://en.wikipedia.org/wiki/Chapman%20function | A Chapman function describes the integration of atmospheric absorption along a slant path on a spherical Earth, relative to the vertical case. It applies to any quantity with a concentration decreasing exponentially with increasing altitude. To a first approximation, valid at small zenith angles, the Chapman function for optical absorption is equal to sec(z),
where z is the zenith angle and sec denotes the secant function.
The Chapman function is named after Sydney Chapman, who introduced the function in 1931.
Definition
In an isothermal model of the atmosphere, the density varies exponentially with altitude h according to the Barometric formula:
n(h) = n0 exp(−h/H),
where n0 denotes the density at sea level (h = 0) and H the so-called scale height.
The total amount of matter traversed by a vertical ray starting at altitude h0 towards infinity is given by the integrated density ("column depth")
Xv(h0) = ∫[h0, ∞] n(h) dh = n0 H exp(−h0/H).
For inclined rays having a zenith angle z, the integration is not straightforward due to the non-linear relationship between altitude and path length when considering the
curvature of Earth. Here, the integral reads
Xs(z, h0) = ∫[0, ∞] n(h(s)) ds, with h(s) = √(r0² + s² + 2 r0 s cos z) − R_E,
where we defined r0 = R_E + h0 (R_E denotes the Earth radius).
The Chapman function ch(x, z) is defined as the ratio between slant depth Xs and vertical column depth Xv. Defining x = r0/H, it can be written as
ch(x, z) = Xs/Xv = (1/H) ∫[0, ∞] exp(x − √(x² + (s/H)² + 2 x (s/H) cos z)) ds.
Representations
A number of different integral representations have been developed in the literature. Chapman's original representation reads
.
Huestis developed the representation
,
which does not suffer from numerical singularities present in Chapman's representation.
Special cases
For z = 90° (horizontal incidence), the Chapman function reduces to
ch(x, 90°) = x e^x K1(x).
Here, K1 refers to the modified Bessel function of the second kind of the first order. For large values of x, this can further be approximated by
ch(x, 90°) ≈ √(πx/2).
For x → ∞ and z < 90°, the Chapman function converges to the secant function:
ch(x, z) → sec z.
In practical applications related to the terrestrial atmosphere, where x is of the order of 1000, ch(x, z) ≈ sec z is a good approximation for zenith angles up to 60° to 70°, depending on the accuracy required.
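The defining slant-path integral can be evaluated numerically and compared against the sec z approximation. The Python sketch below assumes an isothermal atmosphere with an 8 km scale height (an illustrative terrestrial value) and integrates along the path with a simple trapezoidal rule:

```python
import numpy as np

# Numerical check of the Chapman function against sec(z)
R_E = 6371.0   # Earth radius, km
H = 8.0        # assumed scale height, km
h0 = 0.0       # starting altitude, km
r0 = R_E + h0

def chapman(z_deg, s_max=3000.0, n_pts=300001):
    """ch = (1/H) * integral of exp((r0 - r(s))/H) ds along the slant path."""
    z = np.radians(z_deg)
    s = np.linspace(0.0, s_max, n_pts)                     # path length, km
    r = np.sqrt(r0**2 + s**2 + 2.0 * r0 * s * np.cos(z))   # distance from Earth's center
    f = np.exp((r0 - r) / H)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s)) / H  # trapezoidal rule

for z in (0.0, 60.0, 75.0, 85.0):
    print(f"z = {z:4.1f} deg: ch = {chapman(z):8.3f}, sec(z) = {1.0/np.cos(np.radians(z)):8.3f}")
```

For small zenith angles the two values agree closely, while near the horizon the Chapman function stays finite and falls below sec z, as described above.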
See also
Air mass
Atmospheric physics
Ionosphere
References
External links
Chapman function at Science World
Radio frequency propagation
Special functions | Chapman function | [
"Physics",
"Mathematics"
] | 390 | [
"Physical phenomena",
"Special functions",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Combinatorics",
"Waves"
] |
2,082,201 | https://en.wikipedia.org/wiki/Potential%20temperature | The potential temperature of a parcel of fluid at pressure P is the temperature that the parcel would attain if adiabatically brought to a standard reference pressure P0, usually 1000 hPa. The potential temperature is denoted θ and, for a gas well-approximated as ideal, is given by
θ = T (P0/P)^(R/cp),
where T is the current absolute temperature (in K) of the parcel, R is the specific gas constant of air, and cp is the specific heat capacity at a constant pressure.
R/cp ≈ 0.286 for air (meteorology). The reference point for potential temperature in the ocean is usually at the ocean's surface, which has a water pressure of 0 dbar. The potential temperature in the ocean does not account for the varying heat capacities of seawater, therefore it is not a conservative measure of heat content. In a temperature versus depth graph, the potential temperature curve will always lie below the actual temperature curve.
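A short Python sketch computing θ from the ideal-gas formula above; the parcel values are illustrative:

```python
# Potential temperature of an air parcel, theta = T * (P0/P)**(R/cp),
# with R/cp ~ 0.286 for dry air
def potential_temperature(T_kelvin, p_hpa, p0_hpa=1000.0, kappa=0.286):
    return T_kelvin * (p0_hpa / p_hpa) ** kappa

# Example: a parcel at 500 hPa and -20 degrees C (illustrative values)
theta = potential_temperature(253.15, 500.0)
print(f"theta = {theta:.1f} K")  # ~308.7 K
```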
Contexts
The concept of potential temperature applies to any stratified fluid. It is most frequently used in the atmospheric sciences and oceanography. The reason that it is used
in both fields is that changes in pressure can result in warmer fluid residing under colder fluid – examples being dropping air temperature with altitude and increasing water temperature with depth in very deep ocean trenches and
within the ocean mixed layer. When the potential temperature is used instead, these apparently unstable conditions vanish as a parcel of fluid is invariant along its isolines. In the oceans, the potential temperature referenced to the surface will be slightly less than the in-situ temperature (the temperature that a water volume has at the specific depth that the instrument measured it in) since the expansion due to reduction in pressure leads to cooling. The numeric difference between the in situ and potential temperature is almost always less than 1.5 degrees Celsius. However, it's important to use potential temperature when comparing temperatures of water from very different depths.
Comments
Potential temperature is a more dynamically important quantity than the actual temperature. This is because it is not affected by the physical lifting or sinking associated with flow over obstacles or large-scale atmospheric turbulence. A parcel of air moving over a small mountain will expand and cool as it ascends the slope, then compress and warm as it descends on the other side- but the potential temperature will not change in the absence of heating, cooling, evaporation, or condensation (processes that exclude these effects are referred to as dry adiabatic). Since parcels with the same potential temperature can be exchanged without work or heating being required, lines of constant potential temperature are natural flow pathways.
Under almost all circumstances, potential temperature increases upwards in the atmosphere, unlike actual temperature which may increase or decrease. Potential temperature is conserved for all dry adiabatic processes, and as such is an important quantity in the planetary boundary layer (which is often very close to being dry adiabatic).
Potential temperature is a useful measure of the static stability of the unsaturated atmosphere. Under normal, stably stratified conditions, the potential temperature increases with height,
and vertical motions are suppressed. If the potential temperature decreases with height,
the atmosphere is unstable to vertical motions, and convection is likely. Since convection acts to quickly mix the atmosphere and return to a stably stratified state, observations of decreasing potential temperature with height are uncommon, except while vigorous convection is underway or during periods of strong insolation. Situations in which the equivalent potential temperature decreases with height, indicating instability in saturated air, are much more common.
Since potential temperature is conserved under adiabatic or isentropic air motions, in steady, adiabatic flow lines or surfaces of constant potential temperature act as streamlines or flow surfaces, respectively. This fact is used in isentropic analysis, a form of synoptic analysis which allows visualization of air motions and in particular analysis of large-scale vertical motion.
Potential temperature perturbations
The atmospheric boundary layer (ABL) potential temperature perturbation is defined as the difference between the potential temperature of the ABL and the potential temperature of the free atmosphere above the ABL. This value is called the potential temperature deficit in the case of a katabatic flow, because the surface will always be colder than the free atmosphere and the PT perturbation will be negative.
Derivation
The enthalpy form of the first law of thermodynamics can be written as:
dh = T ds + v dp,
where dh denotes the enthalpy change, T the temperature, ds the change in entropy, v the specific volume, and p the pressure.
For adiabatic processes, the change in entropy is 0 and the 1st law simplifies to:
dh = v dp
For approximately ideal gases, such as the dry air in the Earth's atmosphere, the equation of state, p v = R T, can be substituted into the 1st law
yielding, after some rearrangement:
dT/T = (R/cp) (dp/p),
where the relation dh = cp dT was used and both terms were divided by the product cp T.
Integrating yields:
T/θ = (p/p0)^(R/cp),
and solving for θ, the temperature a parcel would acquire if moved adiabatically to the pressure level p0, you get:
θ = T (p0/p)^(R/cp).
Potential virtual temperature
The potential virtual temperature θv, defined by
θv = θ (1 + 0.61 r − rL),
is the theoretical potential temperature of the dry air which would have the same density as the humid air at a standard pressure P0. It is used as a practical substitute for density in buoyancy calculations. In this definition θ is the potential temperature, r is the mixing ratio of water vapor, and rL is the mixing ratio of liquid water in the air.
Related quantities
The Brunt–Väisälä frequency is a closely related quantity that uses potential temperature and is used extensively in investigations of atmospheric stability.
See also
Wet-bulb potential temperature
Atmospheric thermodynamics
Conservative temperature
Equivalent potential temperature
References
Bibliography
M K Yau and R.R. Rogers, Short Course in Cloud Physics, Third Edition, published by Butterworth-Heinemann, January 1, 1989, 304 pages.
External links
Eric Weisstein's World of Physics at Wolfram Research
Atmospheric thermodynamics
Meteorological quantities
Physical oceanography | Potential temperature | [
"Physics",
"Mathematics"
] | 1,188 | [
"Applied and interdisciplinary physics",
"Physical quantities",
"Quantity",
"Meteorological quantities",
"Physical oceanography"
] |
2,889,394 | https://en.wikipedia.org/wiki/Affinity%20laws | The affinity laws (also known as the "Fan Laws" or "Pump Laws") for pumps/fans are used in hydraulics, hydronics and/or HVAC to express the relationship between variables involved in pump or fan performance (such as head, volumetric flow rate, shaft speed) and power. They apply to pumps, fans, and hydraulic turbines. In these rotary implements, the affinity laws apply both to centrifugal and axial flows.
The laws are derived using the Buckingham π theorem. The affinity laws are useful as they allow the prediction of the head discharge characteristic of a pump or fan from a known characteristic measured at a different speed or impeller diameter. The only requirement is that the two pumps or fans are dynamically similar, that is, that the ratios of the fluid forces are the same, and that the two impellers are operating at the same efficiency.
Essential to understanding the affinity laws is an understanding of the dimensionless pump discharge and head coefficients. For a given pump, one can compute the discharge coefficient and the head coefficient as follows:
CQ = Q/(n D³) and CH = g H/(n² D²)
The coefficients for a given pump are considered to be constant over a range of input values. Therefore, one can estimate the impact of changing one variable while keeping the others constant. When determining the ideal pump for a given application, one regularly changes the motor (i.e. alters the pump speed), or mills down the impeller diameter, to tune the pump to operate at the flowrate and head needed for the system. The following laws are derived from the two coefficient equations by setting the coefficient for one operating condition (e.g. Q1, n1, D1) equal to the coefficient for a different operating condition (e.g. Q2, n2, D2).
Fan affinity laws
The equations below are the fan affinity laws:
Volume flow rate: Q1/Q2 = (n1/n2) (D1/D2)³
Head or pressure gain: Δp1/Δp2 = (ρ1/ρ2) (n1/n2)² (D1/D2)²
Power consumption: P1/P2 = (ρ1/ρ2) (n1/n2)³ (D1/D2)⁵
where
Q is the volumetric flow rate (units length³/time)
D is the impeller diameter (units of length)
n is the shaft rotational speed (units of 1/time)
ρ is the fluid density (units of mass/length³)
Δp is the pressure or head developed by the fan/pump (units of pressure)
P is the shaft power (units of power, or energy/time).
These laws assume that the pump/fan efficiency remains constant, i.e. η1 = η2, which is rarely exactly true, but can be a good approximation when used over appropriate frequency or diameter ranges. The exact relationship between speed, diameter, and efficiency depends on the particulars of the individual fan or pump design. Product testing or computational fluid dynamics become necessary if the range of acceptability is unknown, or if a high level of accuracy is required in the calculation. Interpolation from accurate data is also more accurate than the affinity laws.
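As a worked example, the laws predict the effect of a pure speed change at constant diameter and density; the numbers below are illustrative:

```python
# Predicting pump performance after a speed change with the affinity laws,
# assuming constant efficiency and impeller diameter (illustrative values)
Q1, H1, P1 = 100.0, 50.0, 15.0   # flow m^3/h, head m, power kW at speed n1
n1, n2 = 1450.0, 1750.0          # shaft speeds, rpm

ratio = n2 / n1
Q2 = Q1 * ratio        # flow scales linearly with speed
H2 = H1 * ratio**2     # head scales with the square of speed
P2 = P1 * ratio**3     # power scales with the cube of speed
print(f"Q2 = {Q2:.1f} m^3/h, H2 = {H2:.1f} m, P2 = {P2:.1f} kW")
```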
Obtaining affinity laws through Buckingham Pi theorem
Consider a pump/fan with the following relevant similarity variables and units:
Q (volumetric flow rate)
D (impeller diameter)
n (rotational speed)
ρ (fluid density)
Δp (pressure gain)
P (shaft power)
There are 6 similarity variables and 3 units: L (length), T (time), and M (mass). Electing the variables D, n and ρ to be fixed, we have 3 dimensionless numbers:
Dimensionless number for Q:
It is trivial to find that the fixed variables enter with exponents 1 and 3, therefore:
Π1 = Q/(n D³)
Dimensionless number for Δp:
Here, the exponents are 1, 2 and 2, therefore:
Π2 = Δp/(ρ n² D²)
Dimensionless number for P:
Here, the exponents are 1, 3 and 5, thus:
Π3 = P/(ρ n³ D⁵)
This simple dimensional analysis indicates that, if two fans or pumps have matching conditions (i.e., all other variables such as shape and flow dynamics are matching), then the dimensionless numbers Π1, Π2 and Π3 will be matching. This rationale results in the fan affinity laws highlighted in the previous section (i.e., equating Π1, Π2 and Π3 between the two operating conditions). Note that in practice, scaling the variables D, n and ρ generally results in significant changes on important parameters in the flow around the impeller blades, such as blade Reynolds number, angle of attack, as well as potential for significant changes in flow state and separation. Thus, the fan affinity laws have a very limited span of validity in practice, but can be used as a "quick and dirty" estimate for a pumping system scaling behavior that can be useful for design efforts.
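The dimensionlessness of the three Π groups can be verified mechanically by bookkeeping unit exponents. In the sketch below, each quantity is represented as a vector of exponents of (L, T, M); a zero vector confirms that the group is dimensionless:

```python
import numpy as np

# Dimensional check of the three pump/fan Pi groups via unit-exponent vectors
units = {            #  L   T   M
    "Q":   np.array([ 3, -1,  0]),   # volumetric flow, L^3/T
    "n":   np.array([ 0, -1,  0]),   # rotational speed, 1/T
    "D":   np.array([ 1,  0,  0]),   # diameter, L
    "rho": np.array([-3,  0,  1]),   # density, M/L^3
    "dp":  np.array([-1, -2,  1]),   # pressure, M/(L T^2)
    "P":   np.array([ 2, -3,  1]),   # power, M L^2/T^3
}

pi1 = units["Q"]  - units["n"]   - 3 * units["D"]                   # Q/(n D^3)
pi2 = units["dp"] - units["rho"] - 2 * units["n"] - 2 * units["D"]  # dp/(rho n^2 D^2)
pi3 = units["P"]  - units["rho"] - 3 * units["n"] - 5 * units["D"]  # P/(rho n^3 D^5)
print(pi1, pi2, pi3)   # each prints [0 0 0]
```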
See also
Centripetal force
References
Hydraulics
Pumps
Ventilation fans
Turbines | Affinity laws | [
"Physics",
"Chemistry"
] | 875 | [
"Pumps",
"Turbomachinery",
"Turbines",
"Physical systems",
"Hydraulics",
"Fluid dynamics"
] |
2,889,448 | https://en.wikipedia.org/wiki/Powder%20coating | Powder coating is a type of coating that is applied as a free-flowing, dry powder. Unlike conventional liquid paint, which is delivered via an evaporating solvent, powder coating is typically applied electrostatically and then cured under heat or with ultraviolet light. The powder may be a thermoplastic or a thermosetting polymer. It is usually used to create a thick, tough finish that is more durable than conventional paint. Powder coating is mainly used for coating of metal objects, particularly those subject to rough use. Advancements in powder coating technology like UV-curable powder coatings allow for other materials such as plastics, composites, carbon fiber, and medium-density fibreboard (MDF) to be powder coated, as little heat or oven dwell time is required to process them.
History, properties, and uses of powder coating
The powder coating process was invented around 1945 by Daniel Gustin, who received US Patent 2538562 for it. This process coats an object electrostatically and then cures it with heat, creating a finish harder and tougher than conventional paint. Originally used on metal manufactures, such as household appliances, aluminium extrusions, drum hardware, automobile parts, and bicycle frames, the practice of powder coating has been expanded to allow finishing of other materials.
Because powder coating does not have a liquid carrier, it can produce thicker coatings than conventional liquid coatings without running or sagging, and powder coating produces minimal appearance differences between horizontally coated surfaces and vertically coated surfaces. Further, because no carrier fluid evaporates away, the coating process emits few volatile organic compounds (VOC). Finally, several powder colors can be applied before all are cured together, allowing color blending and special bleed effects in a single layer.
While it is relatively easy to apply thick coatings that cure to smooth, texture-free coating, it is not as easy to apply smooth thin films. As the film thickness is reduced, the film becomes more and more orange peeled in texture because of the particle size and glass transition temperature (Tg) of the powder.
Most powder coatings have a particle size in the range of 2 to 50 μm, a softening temperature Tg around 80 °C, and a melting temperature around 150 °C, and are cured at around 200 °C for a minimum of 10 minutes to 15 minutes (exact temperatures and times may depend on the thickness of the item being coated). For such powder coatings, film build-ups of greater than 50 μm may be required to obtain an acceptably smooth film. The surface texture which is considered desirable or acceptable depends on the end product. Many manufacturers prefer to have a certain degree of orange peel since it helps to hide metal defects that have occurred during manufacture, and the resulting coating is less prone to showing fingerprints.
There are very specialized operations that apply powder coatings of less than 30 μm or with a Tg below 40 °C in order to produce smooth thin films. One variation of the dry powder coating process, the Powder Slurry process, combines the advantages of powder coatings and liquid coatings by dispersing very fine powders of 1–5 μm sized particles into water, which then allows very smooth, low-film-thickness coatings to be produced.
For small-scale jobs, "rattle can" spray paint is less expensive and complex than powder coating. At the professional scale, the capital expense and time required for a powder coat gun, booth and oven are similar to those for a spray gun system. Powder coatings do have a major advantage in that the overspray can be recycled. However, if multiple colors are being sprayed in a single spray booth, this may limit the ability to recycle the overspray.
Advantages over other coating processes
Powder coatings contain no solvents and release little or no amount of volatile organic compounds (VOC) into the atmosphere. Thus, there is no need for finishers to buy costly pollution control equipment. Companies can comply more easily and economically with environmental regulations, such as those issued by the U.S. Environmental Protection Agency.
Powder coatings can produce much thicker coatings than conventional liquid coatings without running or sagging.
Powder coated items generally have fewer appearance differences than liquid coated items between horizontally coated surfaces and vertically coated surfaces.
A wide range of speciality effects are easily accomplished using powder coatings that would be impossible to achieve with other coating processes.
Curing time is significantly faster with powder coatings than with liquid coatings, especially when using ultraviolet-cured powder coatings or advanced low-bake thermosetting powders.
Types of powder coating
There are three main categories of powder coatings: thermosets, thermoplastics, and UV curable powder coatings. Thermoset powder coatings incorporate a cross-linker into the formulation.
Most common cross-linkers are solid epoxy resins in so-called hybrid powders in mixing ratios of 50/50, 60/40 and 70/30 (polyester resin/ epoxy resin) for indoor applications and triglycidyl isocyanurate (TGIC) in a ratio of 93/7 and β-hydroxy alkylamide (HAA) hardener in 95/5 ratio for outdoor applications. When the powder is baked, it reacts with other chemical groups in the powder to polymerize, improving the performance properties. The chemical cross-linking for hybrids and TGIC powders—representing the major part of the global powder coating market—is based on the reaction of organic acid groups with an epoxy functionality; this carboxy–epoxy reaction is thoroughly investigated and well understood, by addition of catalysts the conversion can be accelerated and curing schedule can be triggered in time and/or temperature. In the powder coating industry it is common to use catalyst masterbatches where 10–15% of the active ingredient is introduced into a polyester carrier resin as matrix. This approach provides the best possible even dispersion of a small amount of a catalyst over the bulk of the powder. Concerning the cross-linking of the TGIC-free alternative based on HAA hardeners, there is no known catalyst available.
For special applications like coil coatings or clear coats it is common to use glycidylesters as hardener component, their cross-linking is based on the carboxy–epoxy chemistry too. A different chemical reaction is used in so-called polyurethane powders, where the binder resin carries hydroxyl functional groups that react with isocyanate groups of the hardener component. The isocyanate group is usually introduced into the powder in blocked form where the isocyanate functionality is pre-reacted with ε-caprolactame as blocking agent or in form of uretdiones, at elevated temperatures (deblocking temperature) the free isocyanate groups are released and available for the cross-linking reaction with hydroxyl functionality.
In general all thermosetting powder formulations contain next to the binder resin and cross-linker additives to support flow out and levelling and for degassing. Common is the use of flow promoter where the active ingredient—a polyacrylate—is absorbed on silica as carrier or as masterbatch dispersed in a polyester resin as matrix. Vast majority of powders contain benzoin as degassing agent to avoid pinholes in final powder coating film.
The thermoplastic variety does not undergo any additional actions during the baking process as it flows to form the final coating. UV-curable powder coatings are photopolymerisable materials containing a chemical photoinitiator that instantly responds to UV light energy by initiating the reaction that leads to crosslinking or cure. The differentiating factor of this process from others is the separation of the melt stage before the cure stage. UV-cured powder will melt in 60 to 120 seconds when reaching a temperature between 110 °C and 130 °C. Once the melted coating is in this temperature window, it is instantly cured when exposed to UV light.
The most common polymers used are polyester, polyurethane, polyester-epoxy (known as hybrid), straight epoxy (fusion bonded epoxy) and acrylics.
Production
The polymer granules are mixed with hardener, pigments and other powder ingredients in an industrial mixer, such as a turbomixer
The mixture is heated in an extruder
The extruded mixture is rolled flat, cooled and broken into small chips
The chips are milled and sieved to make a fine powder
Methodology
The powder coating process involves three basic steps: part preparation or the pre-treatment, the powder application, and curing.
Part preparation processes and equipment
Removal of oil, dirt, lubrication greases, metal oxides, welding scale etc. is essential prior to the powder coating process. It can be done by a variety of chemical and mechanical methods. The selection of the method depends on the size and the material of the part to be powder coated, the type of impurities to be removed and the performance requirement of the finished product. Some heat-sensitive plastics and composites have low surface tensions and plasma treating can be necessary to improve powder adhesion.
Chemical pre-treatments involve the use of phosphates or chromates in submersion or spray application. These often occur in multiple stages and consist of degreasing, etching, de-smutting, various rinses and the final phosphating or chromating of the substrate and new nanotechnology chemical bonding. The pre-treatment process both cleans and improves bonding of the powder to the metal. Recent additional processes have been developed that avoid the use of chromates, as these can be toxic to the environment. Titanium, zirconium and silanes offer similar performance against corrosion and adhesion of the powder.
In many high end applications, the part is electrocoated following the pretreatment process, and subsequent to the powder coating application. This has been particularly useful in automotive and other applications requiring high end performance characteristics.
Another method of preparing the surface prior to coating is known as abrasive blasting or sandblasting and shot blasting. Blast media and blasting abrasives are used to provide surface texturing and preparation, etching, finishing, and degreasing for products made of wood, plastic, or glass. The most important properties to consider are chemical composition and density; particle shape and size; and impact resistance.
Silicon carbide grit blast medium is brittle, sharp, and suitable for grinding metals and low-tensile strength, non-metallic materials. Plastic media blast equipment uses plastic abrasives that are sensitive to substrates such as aluminum, but still suitable for de-coating and surface finishing. Sand blast medium uses high-purity crystals that have low-metal content. Glass bead blast medium contains glass beads of various sizes.
Cast steel shot or steel grit is used to clean and prepare the surface before coating. Shot blasting recycles the media and is environmentally friendly. This method of preparation is highly efficient on steel parts such as I-beams, angles, pipes, tubes and large fabricated pieces.
Different powder coating applications can require alternative methods of preparation such as abrasive blasting prior to coating. The online consumer market typically offers media blasting services coupled with their coating services at additional costs.
A recent development for the powder coating industry is the use of plasma pretreatment for heat-sensitive plastics and composites. These materials typically have low-energy surfaces, are hydrophobic, and have a low degree of wetability which all negatively impact coating adhesion. Plasma treatment physically cleans, etches, and provides chemically active bonding sites for coatings to anchor to. The result is a hydrophilic, wettable surface that is amenable to coating flow and adhesion.
Powder application processes
The most common way of applying the powder coating to metal objects is to spray the powder using an electrostatic gun, or corona gun. The gun imparts a negative charge to the powder, which is then sprayed towards the grounded object by mechanical or compressed air spraying and then accelerated toward the workpiece by the powerful electrostatic charge. There is a wide variety of spray nozzles available for use in electrostatic coating. The type of nozzle used will depend on the shape of the workpiece to be painted and the consistency of the paint. The object is then heated, and the powder melts into a uniform film, and is then cooled to form a hard coating. It is also common to heat the metal first and then spray the powder onto the hot substrate. Preheating can help to achieve a more uniform finish but can also create other problems, such as runs caused by excess powder.
Another type of gun is called a tribo gun, which charges the powder by the triboelectric. In this case, the powder picks up a positive charge while rubbing along the wall of a Teflon tube inside the barrel of the gun. These charged powder particles then adhere to the grounded substrate. Using a tribo gun requires a different formulation of powder than the more common corona guns. Tribo guns are not subject to some of the problems associated with corona guns, however, such as back-ionization and the Faraday cage effect.
Powder can also be applied using specifically adapted electrostatic discs.
Another method of applying powder coating, named as the fluidized bed method, is by heating the substrate and then dipping it into an aerated, powder-filled bed. The powder sticks and melts to the hot object. Further heating is usually required to finish curing the coating. This method is generally used when the desired thickness of coating is to exceed 300 micrometres. This is how most dishwasher racks are coated.
Electrostatic fluidized bed coating
Electrostatic fluidized bed application uses the same fluidizing technique as the conventional fluidized bed dip process but with much more powder depth in the bed. An electrostatic charging medium is placed inside the bed so that the powder material becomes charged as the fluidizing air lifts it up. Charged particles of powder move upward and form a cloud of charged powder above the fluid bed. When a grounded part is passed through the charged cloud the particles will be attracted to its surface. The parts are not preheated as they are for the conventional fluidized bed dip process.
Electrostatic magnetic brush (EMB) coating
A coating method for flat materials that applies powder with a roller, enabling relatively high speeds and accurate layer thickness between 5 and 100 micrometres. The base for this process is conventional copier technology. It is currently in use in some coating applications and looks promising for commercial powder coating on flat substrates (steel, aluminium, MDF, paper, board) as well as in sheet to sheet and/or roll to roll processes. This process can potentially be integrated in an existing coating line.
Curing
Thermoset
When a thermosetting powder is exposed to elevated temperature, it begins to melt, flows out, and then chemically reacts to form a higher-molecular-weight polymer in a network-like structure. This cure process, called crosslinking, requires a certain temperature for a certain length of time in order to reach full cure and establish the full film properties for which the material was designed.
The architecture of the polyester resin and type of curing agent have a major impact on crosslinking.
Common powders cure at object temperature for 10 minutes. In European and Asian markets, a curing schedule of for 10 minutes has been the industrial standard for decades, but is nowadays shifting towards a temperature level of at the same curing time. Advanced hybrid systems for indoor applications are established to cure at a temperature level of preferably for applications on medium-density fiberboards (MDF); outdoor durable powders with triglycidyl isocyanurate (TGIC) as hardener can operate at a similar temperature level, whereas TGIC-free systems with β-hydroxy alkylamides as curing agents are limited to approx. .
The low-temperature bake approach results in energy savings, especially in cases where coating of massive parts are task of the coating operation. The total oven residence time needs to be only 18–19 min to completely cure the reactive powder at .
A major challenge for all low-temperature cures is to optimize simultaneously reactivity, flow-out (aspect of the powder film) and storage stability. Low-temperature-cure powders tend to have less color stability than their standard bake counterparts because they contain catalysts to augment accelerated cure. HAA polyesters tend to overbake yellow more than do TGIC polyesters.
The curing schedule may vary according to the manufacturer's specifications. The application of energy to the product to be cured can be accomplished by convection cure ovens, infrared cure ovens, or by laser curing process. The latter demonstrates significant reduction of curing time.
UV cure
Ultraviolet (UV)-cured powder coatings have been in commercial use since the 1990s and were initially developed to finish heat-sensitive medium density fiberboard (MDF) furniture components. This coating technology requires less heat energy and cures significantly faster than thermally-cured powder coatings. Typical oven dwell times for UV curable powder coatings are 1–2 minutes with temperatures of the coating reaching 110–130 °C. The use of UV LED curing systems, which are highly energy efficient and do not generate IR energy from the lamp head, make UV-cured powder coating even more desirable for finishing a variety of heat-sensitive materials and assemblies. An additional benefit for UV-cured powder coatings is that the total process cycle, application to cure, is faster than other coating methods.
Removing powder coating
Methylene chloride and acetone are generally effective at removing powder coating. Most other organic solvents (thinners, etc.) are completely ineffective. Recently, the suspected human carcinogen methylene chloride is being replaced by benzyl alcohol with great success. Powder coating can also be removed with abrasive blasting. 98% sulfuric acid commercial grade also removes powder coating film. Certain low grade powder coats can be removed with steel wool, though this might be a more labor-intensive process than desired.
Powder coating can also be removed by a burning off process, in which parts are put into a large high-temperature oven with temperatures typically reaching an air temperature of 300–450 °C. The process takes about four hours and requires the parts to be cleaned completely and re-powder coated. Parts made with a thinner-gauge material need to be burned off at a lower temperature to prevent the material from warping.
Market
According to a market report prepared in August 2016 by Grand View Research, Inc., the powder coating industry includes Teflon, anodizing and electro-plating. The global powder coatings market is expected to reach US$16.55 billion by 2024. Increasing use of powder coatings for aluminum extrusion used in windows, door frames, building facades, kitchen, bathroom and electrical fixtures will fuel industry expansion. Rising construction spending in various countries including China, the U.S., Mexico, Qatar, UAE, India, Vietnam, and Singapore will fuel growth over the forecast period. Increasing government support for eco-friendly and economical products will stimulate demand over the forecast period. General industries were the prominent application segment and accounted for 20.7% of the global volume in 2015. The global market is predicted to be 20 billion dollars by 2027.
Increasing demand for tractors in the U.S., Brazil, Japan, India, and China is expected to augment the use of powder coatings on account of its corrosion protection, excellent outdoor durability, and high-temperature performance. Moreover, growing usage in agricultural equipment, exercise equipment, file drawers, computer cabinets, laptop computers, cell phones, and electronic components will propel industry expansion.
See also
Fusion bonded epoxy coating
Hammer paint
Laser printer
Phosphate conversion coating
Powder coating on glass
References
External links
Building
Painting materials
Motorcycle technology
Surface finishing
Coating | Powder coating | [
"Physics",
"Engineering"
] | 4,119 | [
"Building",
"Construction",
"Materials",
"Powders",
"Matter"
] |
2,890,925 | https://en.wikipedia.org/wiki/Acoustic%20ecology | Acoustic ecology, sometimes called ecoacoustics or soundscape studies, is a discipline studying the relationship, mediated through sound, between human beings and their environment. Acoustic ecology studies started in the late 1960s with R. Murray Schafer, a musician, composer and former professor of communication studies at Simon Fraser University (Vancouver, British Columbia, Canada), and his team there as part of the World Soundscape Project. The original WSP team included Barry Truax, Hildegard Westerkamp, Bruce Davies and Peter Huse, among others. The first study produced by the WSP was titled The Vancouver Soundscape. This innovative study raised the interest of researchers and artists worldwide, creating enormous growth in the field of acoustic ecology. In 1993, the members of the by now large and active international acoustic ecology community formed the World Forum for Acoustic Ecology.
The radio art of Schafer and his colleagues has found expression in many different fields. While most have taken some inspiration from Schafer's writings, in recent years there have also been divergences from the initial ideas. The expanded expressions of acoustic ecology are increasing due to the sonic impacts of road and airport construction that affect the soundscapes in and around cities where the human population is most dense. There has also been a broadening of bioacoustics (the use of sound by animals) to consider the subjective and objective responses of animals to human noise, with ocean noise capturing the most attention. Acoustic ecology can also be informative of changes in the climate or other environmental changes, since every day we listen to sounds in the world to identify their sources, such as birds, cars, planes, wind and water, but we do not usually listen to those sounds as a network, a mesh of relationships that forms an ecology. Acoustic ecology finds expression in many different fields that characterize a soundscape: biophony, geophony, and anthrophony.
World Forum for Acoustic Ecology
The World Forum for Acoustic Ecology is an international collective of people and organizations who study the world's soundscapes. There are eight groups that make up the World Forum for Acoustic Ecology: the Australian Forum for Acoustic Ecology, the Canadian Association for Acoustic Ecology, the Finnish Society for Acoustic Ecology, the Hellenic Society for Acoustic Ecology, the Japanese Association for Soundscape Ecology, the Midwest Society for Acoustic Ecology, Red Ecologia Acustica Mexico, and the UK and Ireland Soundscape Community. Every three years since the WFAE's founding at Banff, Canada in 1993, an international symposium has taken place. Stockholm, Amsterdam, Devon, Peterborough, and Melbourne followed. In November 2006, the WFAE meeting took place in Hirosaki, Japan. Koli, Finland, was the meeting place of the latest WFAE world conference.
Members of the WFAE, many of whom are recording artists and composers, are focused on improving the quality of public soundscapes through the design and planning of community spaces that preserve desirable sound while reducing noise pollution. Acoustic ecologists value the exercise of listening as well as promoting a more conscious appreciation and awareness of one's sonic environment.
Bioacoustics
Noise is generally a by-product of increased urbanization and development. As our cities became more industrialized, the volume and frequency of anthrophony, man-made noise signals, increased. Noise can alter the acoustic environment of aquatic and terrestrial habitats.
Animal biodiversity has shown to decline because of chronic noise levels in cities and along roadways. Musician and soundscape ecologist Bernie Krause relates biophony to an orchestra, where different groups of animals in an environment make sounds at different levels to avoid overlap or competition in their communication. Manmade noise such as jets flying over a habitat can disrupt the natural order of these sounds, even putting certain species in danger of predators. For example, some frogs synchronize in a way that protects individuals from attracting attention. The noise of a jet can cause the frogs to stop or fall out of sync, temporarily breaking this effect and exposing them to other animals.
On land, animal communication is shaped by physical characteristics of an environment such as distance, range of vision, weather, and surrounding noise. The physical layout of a habitat may impede the spread of soundwaves while air conditions can affect sound quality and speed. Animals can adapt to factors like distance by adjusting the frequency and amplitude of their calls to maximize communication effectiveness. Some species such as the urban great tits have changed the frequency of their calls to adapt. Soundscapes of particular habitats are always evolving because the activities and species that exist in those habitats change over time.
In terms of evolution, man-made noise is a much more recent phenomenon. Through investigating collected recordings, ecologists can study the ethology of animal acoustic communication, the evolution and development of acoustic behavior, and relationships between animal sounds and their environment. However, these research goals presuppose that the bioacoustic recordings are well analyzed so that the animal species can be accurately recognized. Scientific research has shown that man-made noise has the potential to change behavior, alter physiology and even restructure animal communities.
Soundscapes
Soundscapes are composed of the anthrophony, geophony and biophony of a particular environment. They are specific to location and change over time. Acoustic ecology aims to study the relationship between these things, i.e. the relationship between humans, animals and nature, within these soundscapes. These relationships are delicate and subject to disruption by natural or man-made means.
In his book The Tuning of the World, Schafer used new terms like 'soundmarks' (a specific community's distinctive sounds) and 'keynotes' (prevalent but overlooked background sounds, such as traffic) to help categorize the different elements of a soundscape.
Biophony
Biophony is the study of sounds emerging from animal sources, like whale vocalizations or birdsong.
Geophony
Geophony can be defined as the sounds originating from the Earth's natural processes, such as the blowing of wind or movement of waves.
Anthrophony
Anthrophony is the soundscape defined by man-made sources, like speech or road noise.
Research on the relationship between visual and auditory experiences in urban settings finds that the positive or negative visual perceptions of a landscape can directly affect the emotional assessment of the location's soundscape. A pleasant view or comfortable surroundings can increase people's tolerance and even appreciation for the sounds of an environment.
People's preferences for noise control have been shown to differ based on the culture and technology of the time period as well as the familiarity or practicality of certain sounds. For example, accepted sources of loud noise such as church bells or trucks may not bother a neighborhood as much as someone's new leaf blower, even if it is not as loud as more familiar sounds. This is why it is considered difficult to generalize which sounds are unwanted in a community.
Studying the soundscapes and traumatic impact of war has shown the effectiveness of noise as a psychological weapon to produce fear.
Impacts of Man-Made Sound on Biospheres
Aircraft activity has been increasing around the world and has the potential to change social-ecological systems. In Alaska, for example, communities report that aircraft disturb wildlife and negatively influence harvest practices and experiences. Limited data has restricted knowledge about the extent of aircraft activity over traditional harvest areas. Aircraft overflights of rural subsistence areas are increasing significantly, reaching a median of 12 overflights per day near human development, six times greater than over undeveloped areas. Overflights startle caribou, and harvesters prefer to avoid aircraft, so they must travel farther to harvest successfully, adding costs for fuel, equipment, and effort. Such examples help in understanding the impact of aircraft noise on social-ecological dynamics.
The ambient noise present within the world's oceans; geophonic, anthrophonic, and biophonic, has been identified as a critical indicator to the well-being of the regional biosphere.
Acoustic niche
The acoustic niche hypothesis, as proposed by acoustic ecologist Bernie Krause in 1993, refers to the process in which organisms partition the acoustic domain, finding their own niche in frequency and/or time in order to communicate without competition from other species. The theory draws from the ideas of niche differentiation and can be used to predict differences between young and mature ecosystems. Similar to how interspecific competition can place limits on the number of coexisting species that can utilize a given availability of habitats or resources, the available acoustic space in an environment is a limited resource that is partitioned among those species competing to utilize it.
In mature ecosystems, species will sing at unique bandwidths and specific times, displaying a lack of interspecies competition in the acoustic environment. Conversely, in young ecosystems, one is more likely to encounter multiple species using similar frequency bandwidths, which can result in interference between their respective calls, or a complete lack of activity in uncontested bandwidths. Biological invasions can also result in interference in the acoustic niche, with non-native species altering the dynamics of the native community by producing signals that mask or degrade native signals. This can cause a variety of ecological impacts, such as decreased reproduction, aggressive interactions, and altered predator-prey dynamics. The degree of partitioning in an environment can be used to indicate ecosystem health and biodiversity.
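As a toy illustration of the acoustic niche hypothesis (the frequency bands below are hypothetical, not field measurements), the degree of spectral partitioning between species can be quantified as the overlap of their call bands:

```python
# Toy model of acoustic niche partitioning: species occupying
# non-overlapping frequency bands leave each other's signals unmasked
bands = {
    "insect chorus": (4000, 8000),   # Hz, hypothetical band limits
    "frog chorus":   (1500, 3000),
    "bird song":     (2500, 4500),
}

def overlap(a, b):
    """Width in Hz of the shared portion of two frequency bands."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return max(0, hi - lo)

names = list(bands)
for i, s1 in enumerate(names):
    for s2 in names[i + 1:]:
        print(f"{s1} vs {s2}: {overlap(bands[s1], bands[s2])} Hz overlap")
```

In this sketch the insect and frog bands do not overlap at all, while the bird band partially overlaps both neighbors, the kind of contested bandwidth that, under the hypothesis, would be more typical of a young ecosystem than a mature one.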
List of compositional works
"Dominion" by Barry Truax
"Dominion" uses Canadian soundmarks that were made in different province by the World Soundscape Project at Simon Fraser University for an event of cross-country tour that happened in 1973. What is interesting about those sounds is that they are stretched over the time, so the extended versions allowed the people that listen to the sound in a more harmonic way. Those unique sound signals, were picked up by the live performers and then amplified to give the best experience possible to his audience.
Archaeoacoustics
This is a subfield of archaeology and acoustics that, in general, studies the relationship between people and sound throughout history. It is an interdisciplinary field with methodological contributions from acoustics, archaeology and computer simulation. Many cultures explored through archaeology were mostly oral, which led researchers to believe that studying the sonic nature of archaeological sites and artifacts may reveal new information on the civilization being scrutinized. Marc E. Moglen (2007) recreated pre-historical soundscapes (acoustic ecology) at the University of California, Berkeley's Department of Anthropology, combining compositional techniques with site recordings for a non-diegetic piece in the virtual world of Second Life, on "Okapi Island". At the Center for New Media, the acoustic ecological setting of the former jazz scene in Oakland, CA was developed for a virtual world setting.
See also
Biophony
Bernie Krause
Human auditory ecology
Lombard effect
Marine mammals and sonar
Fisheries acoustics
Noise map
Soundscape
Ecomusicology
References
Bibliography
External links
Acoustic Ecology and the Soundscape Bibliography
Bazilchuk, Nancy. 2007. Choral Reefs: An inexpensive device monitors ocean health through sound. Conservation 8(1).
"An Introduction to Acoustic Ecology" by Kendall Wrightson
"Science of sound" Canadian Geographic
Ecological techniques
Sound
Acoustics | Acoustic ecology | [
"Physics",
"Biology"
] | 2,343 | [
"Soundscape ecology",
"Ecological techniques",
"Classical mechanics",
"Acoustics"
] |
2,891,089 | https://en.wikipedia.org/wiki/CETAC | Chalmers Engineering Trainee Appointment Committee (CETAC) is a student group located at Chalmers University of Technology in Gothenburg, Sweden. The sole purpose of the organisation is to bring American companies and Swedish students together. Each summer, CETAC members go to North America to gain practical experience in their particular field of study.
CETAC was founded in 1966 and has since then sent approximately 10-20 students every year to the United States and Canada. CETAC is only open to students studying Computer Science & Engineering, Electrical Engineering, Engineering Physics, Engineering Mathematics or Software Engineering.
CETAC helps its members with practical details such as insurance and visas, with support from the American-Scandinavian Foundation.
Since May 1, 2006, CETAC has had an alumni organization called CETAC Alumni.
External links
Official webpage of CETAC
Official webpage of CETAC Alumni
American-Scandinavian Foundation
Chalmers University of Technology
Engineering organizations
Student societies in Sweden
Student organizations established in 1966 | CETAC | [
"Engineering"
] | 198 | [
"nan"
] |
2,891,127 | https://en.wikipedia.org/wiki/Equity%20value | Equity value is the value of a company available to owners or shareholders. It is the enterprise value plus all cash and cash equivalents, short and long-term investments, and less all short-term debt, long-term debt and minority interests.
Equity value accounts for all the ownership interest in a firm including the value of unexercised stock options and securities convertible to equity.
From both a mergers-and-acquisitions and an academic perspective, equity value differs from market capitalization or market value in that it incorporates all equity interests in a firm, whereas market capitalization or market value only reflects those common shares currently outstanding.
Calculating equity value
Equity value can be calculated in two ways, either the intrinsic value method, or the fair market value method. The intrinsic value method is calculated as follows:
Equity Value =
Market capitalization
+ Amount by which in-the-money stock options are in the money
+ Value of equity issued from in-the-money convertible securities
- Proceeds from the conversion of convertible securities
The fair market value method is as follows:
Equity Value =
Market capitalization
+ fair value of all stock options (in the money and out of the money), calculated using the Black–Scholes formula or a similar method
+ Value of convertible securities in excess of what the same securities would be valued without the conversion attribute
The fair market value method more accurately captures the value of out of the money securities.
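The two methods can be sketched in Python as follows; all input figures are hypothetical illustration values, not data for any real company:

```python
# Sketch of the two equity-value calculations described above
market_cap = 1_000.0            # $M, shares outstanding * share price

# Intrinsic value method
itm_option_value = 20.0         # $M, amount by which options are in the money
conv_equity_value = 50.0        # $M, equity issued from in-the-money convertibles
conv_proceeds = 30.0            # $M, proceeds received on conversion
equity_value_intrinsic = (market_cap + itm_option_value
                          + conv_equity_value - conv_proceeds)

# Fair market value method
option_fair_value = 35.0        # $M, fair value of ALL options (e.g. Black-Scholes)
conv_conversion_premium = 25.0  # $M, convertibles' value above non-convertible value
equity_value_fmv = market_cap + option_fair_value + conv_conversion_premium

print(f"intrinsic: ${equity_value_intrinsic:.0f}M, fair market: ${equity_value_fmv:.0f}M")
```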
References
Mathematical finance
Fundamental analysis | Equity value | [
"Mathematics"
] | 286 | [
"Applied mathematics",
"Mathematical finance"
] |
2,891,216 | https://en.wikipedia.org/wiki/Pinacol%20coupling%20reaction | A pinacol coupling reaction is an organic reaction in which a carbon–carbon bond is formed between the carbonyl groups of an aldehyde or a ketone in presence of an electron donor in a free radical process. The reaction product is a vicinal diol. The reaction is named after pinacol (also known as 2,3-dimethyl-2,3-butanediol or tetramethylethylene glycol), which is the product of this reaction when done with acetone as reagent. The reaction is usually a homocoupling but intramolecular cross-coupling reactions are also possible. Pinacol was discovered by Wilhelm Rudolph Fittig in 1859.
Reaction mechanism
The first step in the reaction mechanism is a one-electron reduction of the carbonyl group by a reducing agent, such as magnesium, to a ketyl radical anion species. Two ketyl groups react in a coupling reaction yielding a vicinal diol with both hydroxyl groups deprotonated. Addition of water or another proton donor gives the diol. With magnesium as an electron donor, the initial reaction product is a 5-membered cyclic compound with the two oxygen atoms coordinated to the oxidized Mg2+ ion. This complex is broken up by addition of water with formation of magnesium hydroxide. The pinacol coupling can be followed up by a pinacol rearrangement. A related reaction is the McMurry reaction, which uses titanium(III) chloride or titanium(IV) chloride in conjunction with a reducing agent for the formation of the metal-diol complex, and which takes place with an additional deoxygenation reaction step in order to provide an alkene product.
Scope
The pinacol reaction is extremely well-studied and tolerates many different reductants, including electrochemical syntheses. Variants are known for homo- and cross-coupling, intra- and inter-molecular reactions with appropriate diastereo- or enantioselectivity; as of 2006, the only unsettled frontier was enantioselective cross-coupling of aliphatic aldehydes. In general, aryl carbonyls give higher yields than aliphatic carbonyls, and diaryls may spontaneously react with a hydride donor in the presence of light.
Although an active metal reduction, modern pinacol reactions tolerate protic substrates and solvents; it is sometimes performed in water. Ester groups do not react, but some nitriles do. In general, aza variants are less well-studied, but the analogous reaction with imines yields diamines.
Traditionally, the pinacol reductant is an alkali or alkaline earth metal, but these result in low yields and selectivity. Catalytic salts of most early transition metals and a nonmetal reductant (e.g. iodides) give dramatically improved performance, but stoichiometric reductions typically deoxygenate to the alkene (the McMurry reaction).
The reaction's applications include closure of large rings. Two famous examples of pinacol coupling used in total synthesis are the Mukaiyama Taxol total synthesis and the Nicolaou Taxol total synthesis.
Benzophenone may undergo the pinacol coupling photochemically. Benzaldehyde may also be used as a substrate with the use of catalytic vanadium(III) chloride and aluminium metal as the stoichiometric reductant. This heterogeneous reaction in water at room temperature yields 72% after 3 days with 56:44 dl:meso composition.
In another system with benzaldehyde, montmorillonite K-10 and zinc chloride in aqueous THF under ultrasound, the reaction time is reduced to 3 hours (composition 55:45). On the other hand, certain tartaric acid derivatives can be obtained with high diastereoselectivity in a system of samarium(II) iodide and HMPA.
A titanium-catalyzed photocatalytic approach was also developed: the use of catalytic titanocene dichloride in the presence of a red-absorbing organic dye as the photosensitizer, and Hantzsch ester as the terminal reducing agent, enabled the homocoupling reactions of a wide variety of aromatic aldehydes in trifluorotoluene under orange-light irradiation, with high yields and diastereoselectivities (more than 20:1 dl:meso). An enantioselective version (up to 92% e.e.), using catalytic amounts of a chiral titanium salen, was also developed.
p-Hydroxypropiophenone is used as the substrate in the synthesis of diethylstilbestrol.
An unsymmetrical pinacol coupling reaction between para-chloro-acetophenone and acetone was employed to give phenaglycodol in a 40% yield.
References
Further reading
Addition reactions
Free radical reactions
Carbon-carbon bond forming reactions
1859 in science | Pinacol coupling reaction | [
"Chemistry"
] | 1,064 | [
"Coupling reactions",
"Free radical reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
2,891,226 | https://en.wikipedia.org/wiki/Mechanotransduction | In cellular biology, mechanotransduction (mechano + transduction) is any of various mechanisms by which cells convert mechanical stimulus into electrochemical activity. This form of sensory transduction is responsible for a number of senses and physiological processes in the body, including proprioception, touch, balance, and hearing. The basic mechanism of mechanotransduction involves converting mechanical signals into electrical or chemical signals.
In this process, a mechanically gated ion channel makes it possible for sound, pressure, or movement to cause a change in the excitability of specialized sensory cells and sensory neurons. The stimulation of a mechanoreceptor causes mechanically sensitive ion channels to open and produce a transduction current that changes the membrane potential of the cell. Typically the mechanical stimulus gets filtered in the conveying medium before reaching the site of mechanotransduction. Cellular responses to mechanotransduction are variable and give rise to a variety of changes and sensations. Broader issues involved include molecular biomechanics.
Single-molecule biomechanics studies of proteins and DNA, and mechanochemical coupling in molecular motors have demonstrated the critical importance of molecular mechanics as a new frontier in bioengineering and life sciences. Protein domains, connected by intrinsically disordered flexible linker domains, induce long-range allostery via protein domain dynamics.
The resultant dynamic modes cannot generally be predicted from static structures of either the entire protein or of individual domains. They can, however, be inferred by comparing different structures of a protein (as in the Database of Molecular Motions), suggested by sampling extensive molecular dynamics trajectories with principal component analysis, or directly observed using spectra measured by neutron spin echo spectroscopy. Current findings indicate that the mechanotransduction channel in hair cells is a complex biological machine. Mechanotransduction also includes the use of chemical energy to do mechanical work.
Ear
Air pressure changes in the ear canal cause the vibrations of the tympanic membrane and middle ear ossicles. At the end of the ossicular chain, movement of the stapes footplate within the oval window of the cochlea generates a pressure field within the cochlear fluids, imparting a pressure differential across the basilar membrane. A sinusoidal pressure wave results in localized vibrations of the organ of Corti: near the base for high frequencies, near the apex for low frequencies. Hair cells in the cochlea are stimulated when the basilar membrane is driven up and down by differences in the fluid pressure between the scala vestibuli and scala tympani. This motion is accompanied by a shearing motion between the tectorial membrane and the reticular lamina of the organ of Corti, causing the hair bundles that link the two to be deflected, initiating mechano-electrical transduction. When the basilar membrane is driven upward, shear between the hair cells and the tectorial membrane deflects hair bundles in the excitatory direction, toward their tall edge. At the midpoint of an oscillation the hair bundles resume their resting position. When the basilar membrane moves downward, the hair bundles are driven in the inhibitory direction.
Skeletal muscle
When a deformation is imposed on a muscle, changes in cellular and molecular conformations link the mechanical forces with biochemical signals, and the close integration of mechanical signals with electrical, metabolic, and hormonal signaling may disguise the aspect of the response that is specific to the mechanical forces.
Cartilage
One of the main mechanical functions of articular cartilage is to act as a low-friction, load-bearing surface. Due to its unique location at joint surfaces, articular cartilage experiences a range of static and dynamic forces that include shear, compression and tension. These mechanical loads are absorbed by the cartilage extracellular matrix (ECM), where they are subsequently dissipated and transmitted to chondrocytes (cartilage cells).
Chondrocytes sense and convert the mechanical signals they receive into biochemical signals, which subsequently direct and mediate both anabolic (matrix building) and catabolic (matrix degrading) processes. These processes include the synthesis of matrix proteins (type II collagen and proteoglycans), proteases, protease inhibitors, transcription factors, cytokines and growth factors.
The balance that is struck between anabolic and catabolic processes is strongly influenced by the type of loading that cartilage experiences. High strain rates (such as occur during impact loading) cause tissue damage, degradation, decreased matrix production and apoptosis. Decreased mechanical loading over long periods, such as during extended bed-rest, causes a loss of matrix production. Static loads have been shown to be detrimental to biosynthesis, while oscillatory loads at low frequencies (similar to that of a normal walking gait) have been shown to be beneficial in maintaining health and increasing matrix synthesis. Due to the complexity of in-vivo loading conditions and the interplay of other mechanical and biochemical factors, the question of what an optimal loading regimen may be, or whether one exists, remains unanswered.
Although studies have shown that, like most biological tissues, cartilage is capable of mechanotransduction, the precise mechanisms by which this is done remain unknown. However, there exist a few hypotheses which begin with the identification of mechanoreceptors.
In order for mechanical signals to be sensed, there need to be mechanoreceptors on the surface of chondrocytes. Candidates for chondrocyte mechanoreceptors include stretch-activated ion channels (SAC), the hyaluronan receptor CD44, annexin V (a collagen type II receptor), and integrin receptors (of which there exist several types on chondrocytes).
Using the integrin-linked mechanotransduction pathway as an example (being one of the better studied pathways), it has been shown to mediate chondrocyte adhesion to cartilage surfaces, mediate survival signaling and regulate matrix production and degradation.
Integrin receptors have an extracellular domain that binds to ECM proteins (collagen, fibronectin, laminin, vitronectin and osteopontin), and a cytoplasmic domain that interacts with intracellular signaling molecules. When an integrin receptor binds to its ECM ligand and is activated, additional integrins cluster around the activated site. In addition, kinases (e.g., focal adhesion kinase, FAK) and adapter proteins (e.g., paxillin (Pax), talin (Tal) and Shc) are recruited to this cluster, which is called the focal adhesion complex (FAC). The activation of these FAC molecules, in turn, triggers downstream events that up-regulate and/or down-regulate intracellular processes such as transcription factor activation and gene regulation, resulting in apoptosis or differentiation.
In addition to binding to ECM ligands, integrins are also receptive to autocrine and paracrine signals such as growth factors in the TGF-beta family. Chondrocytes have been shown to secrete TGF-b, and upregulate TGF-b receptors in response to mechanical stimulation; this secretion may be a mechanism for autocrine signal amplification within the tissue.
Integrin signaling is just one example of multiple pathways that are activated when cartilage is loaded. Some intracellular processes that have been observed to occur within these pathways include phosphorylation of ERK1/2, p38 MAPK, and SAPK/ERK kinase-1 (SEK-1) of the JNK pathway as well as changes in cAMP levels, actin re-organization and changes in the expression of genes which regulate cartilage ECM content.
More recent studies have hypothesized that the chondrocyte primary cilium acts as a mechanoreceptor for the cell, transducing forces from the extracellular matrix into the cell. Each chondrocyte has one cilium, and it is hypothesized to transmit mechanical signals by bending in response to ECM loading. Integrins have been identified on the upper shaft of the cilium, acting as anchors to the collagen matrix around it. Studies published by Wann et al. in FASEB Journal demonstrated for the first time that primary cilia are required for chondrocyte mechanotransduction: chondrocytes derived from IFT88 mutant mice did not express primary cilia and did not show the characteristic mechanosensitive up-regulation of proteoglycan synthesis seen in wild-type cells.
It is important to examine the mechanotransduction pathways in chondrocytes, since mechanical loading conditions that are excessive or injurious upregulate synthetic activity and increase catabolic signalling cascades involving mediators such as NO and MMPs. In addition, studies by Chowdhury TT and Agarwal S have shown that mechanical loading which represents physiological loading conditions blocks the production of catabolic mediators (iNOS, COX-2, NO, PGE2) induced by inflammatory cytokines (IL-1) and restores anabolic activities. An improved understanding of the interplay of biomechanics and cell signalling will thus help to develop therapeutic methods for blocking catabolic components of the mechanotransduction pathway. A better understanding of the optimal levels of in vivo mechanical forces is therefore necessary for maintaining the health and viability of cartilage, so that preventative techniques may be devised against cartilage degradation and disease.
References
Further reading
Kandel, E.R., Schwartz, J.H., Jessell, T.M. Principles of Neural Science. 4th ed. New York: McGraw-Hill, 2000.
External links
www.du.edu/~kinnamon/3640/hearing/hearing.html
Biophysics
Cell signaling | Mechanotransduction | [
"Physics",
"Biology"
] | 2,134 | [
"Applied and interdisciplinary physics",
"Biophysics"
] |
2,892,078 | https://en.wikipedia.org/wiki/Ferrate%28VI%29 | Ferrate(VI) is the inorganic anion with the chemical formula [FeO4]2−. It is photosensitive, contributes a pale violet colour to compounds and solutions containing it, and is one of the strongest water-stable oxidizing species known. Although classified as a weak base, concentrated solutions containing ferrate(VI) are corrosive, attack the skin, and are stable only at high pH. It is similar to the somewhat more stable permanganate.
Nomenclature
The term ferrate is normally used to mean ferrate(VI), although it can refer to other iron-containing anions, many of which are more commonly encountered than salts of [FeO4]2−. These include the highly reduced species disodium tetracarbonylferrate, Na2[Fe(CO)4], and salts of the iron(III) complex tetrachloroferrate, [FeCl4]−, as in 1-butyl-3-methylimidazolium tetrachloroferrate. Although rarely studied, the ferrate(V) [FeO4]3− and ferrate(IV) [FeO4]4− oxyanions of iron also exist. These too are called ferrates.
Synthesis
Ferrate(VI) salts are formed by oxidizing iron in an aqueous medium with strong oxidizing agents under alkaline conditions, or in the solid state by heating a mixture of iron filings and powdered potassium nitrate.
For example, ferrates are produced by heating iron(III) hydroxide with sodium hypochlorite in alkaline solution:
2 Fe(OH)3 + 3 NaOCl + 4 NaOH → 2 Na2FeO4 + 5 H2O + 3 NaCl
The anion is typically precipitated as the barium(II) salt, forming barium ferrate.
Properties
Fe(VI) is a strong oxidizing agent over the entire pH range, with a reduction potential (Fe(VI)/Fe(III) couple) varying from +2.2 V to +0.7 V versus SHE in acidic and basic media respectively.
[FeO4]2− + 8 H+ + 3 e− → Fe3+ + 4 H2O; E0 = +2.20 V (acidic medium)
[FeO4]2− + 4 H2O + 3 e− → Fe(OH)3 + 5 OH−; E0 = +0.72 V (basic medium)
Because of this, the ferrate(VI) anion is unstable at neutral or acidic pH values, decomposing to iron(III). The reduction proceeds through intermediate species in which iron has oxidation states +5 and +4; these anions are even more reactive than ferrate(VI). Under alkaline conditions ferrates are more stable, lasting for about 8 to 9 hours at pH 8 or 9.
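The pH dependence implied by these half-reactions can be made quantitative with the Nernst equation. The sketch below (Python; illustrative only, assuming 25 °C, unit activities for every species except H+, and validity only in the acidic regime where Fe3+ is the product) shows how the Fe(VI)/Fe(III) potential falls as pH rises:

```python
# Nernst-equation sketch for the acidic Fe(VI)/Fe(III) couple.
E0_ACIDIC = 2.20   # V vs SHE (standard potential quoted above)
N_ELECTRONS = 3    # electrons transferred
N_PROTONS = 8      # protons consumed by the reduction

def ferrate_potential(pH):
    """E = E0 - (0.05916 / z) * m * pH at 25 degC, other activities = 1."""
    return E0_ACIDIC - (0.05916 / N_ELECTRONS) * N_PROTONS * pH

for pH in (0, 4, 7):
    print(f"pH {pH}: E = {ferrate_potential(pH):+.2f} V")   # +2.20, +1.57, +1.10 V
```

At higher pH the iron speciation changes (Fe(OH)3 rather than Fe3+), so the basic couple must be treated separately, consistent with the +0.72 V figure above.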
Aqueous solutions of ferrates are pink when dilute, and deep red or purple at higher concentrations. The ferrate ion is a stronger oxidizing agent than permanganate, and oxidizes ammonia to molecular nitrogen.
The ferrate(VI) ion has two unpaired electrons and is thus paramagnetic. It has a tetrahedral molecular geometry, isostructural with the chromate and permanganate ions.
Applications
Ferrates are excellent disinfectants, and are capable of removing and destroying viruses. They are also of interest as a potentially environmentally friendly water-treatment chemical, as the byproduct of ferrate oxidation is the relatively benign iron(III).
Sodium ferrate (Na2FeO4) is a useful reagent with good selectivity. It is stable in aqueous solution at high pH, remaining soluble even in an aqueous solution saturated with sodium hydroxide.
See also
High-valent iron
Potassium ferrate
Barium ferrate
References
Oxidizing agents
Transition metal oxyanions
Oxometallates | Ferrate(VI) | [
"Chemistry"
] | 787 | [
"Ferrates",
"Redox",
"Oxidizing agents",
"Salts"
] |
2,892,412 | https://en.wikipedia.org/wiki/Hirzebruch%E2%80%93Riemann%E2%80%93Roch%20theorem | In mathematics, the Hirzebruch–Riemann–Roch theorem, named after Friedrich Hirzebruch, Bernhard Riemann, and Gustav Roch, is Hirzebruch's 1954 result generalizing the classical Riemann–Roch theorem on Riemann surfaces to all complex algebraic varieties of higher dimensions. The result paved the way for the Grothendieck–Hirzebruch–Riemann–Roch theorem proved about three years later.
Statement of Hirzebruch–Riemann–Roch theorem
The Hirzebruch–Riemann–Roch theorem applies to any holomorphic vector bundle E on a compact complex manifold X, to calculate the holomorphic Euler characteristic of E in sheaf cohomology, namely the alternating sum
χ(X, E) = Σ_{i=0}^{n} (−1)^i dim H^i(X, E)
of the dimensions as complex vector spaces, where n is the complex dimension of X.
Hirzebruch's theorem states that χ(X, E) is computable in terms of the Chern classes ck(E) of E, and the Todd classes of the holomorphic tangent bundle of X. These all lie in the cohomology ring of X; by use of the fundamental class (or, in other words, integration over X) we can obtain numbers from classes in H^{2n}(X). The Hirzebruch formula asserts that
χ(X, E) = Σ_j ch_j(E) td_{n−j}(X), evaluated on the fundamental class of X,
where the sum is taken over all relevant j (so 0 ≤ j ≤ n), using the Chern character ch(E) in cohomology. In other words, the products are formed in the cohomology ring of all the 'matching' degrees that add up to 2n. Formulated differently, it gives the equality
χ(X, E) = ∫_X ch(E) td(X),
where td(X) is the Todd class of the tangent bundle of X.
Significant special cases are when E is a complex line bundle, and when X is an algebraic surface (Noether's formula). Weil's Riemann–Roch theorem for vector bundles on curves, and the Riemann–Roch theorem for algebraic surfaces (see below), are included in its scope. The formula also expresses in a precise way the vague notion that the Todd classes are in some sense reciprocals of the Chern character.
Riemann–Roch theorem for curves
For curves, the Hirzebruch–Riemann–Roch theorem is essentially the classical Riemann–Roch theorem. To see this, recall that for each divisor D on a curve there is an invertible sheaf O(D) (which corresponds to a line bundle) such that the linear system of D is more or less the space of sections of O(D). For curves the Todd class is 1 + c1(T(X))/2, and the Chern character of the sheaf O(D) is just 1 + c1(O(D)), so the Hirzebruch–Riemann–Roch theorem states that
h0(O(D)) − h1(O(D)) = c1(O(D)) + c1(T(X))/2
(integrated over X).
But h0(O(D)) is just l(D), the dimension of the linear system of D, and by Serre duality h1(O(D)) = h0(O(K − D)) = l(K − D), where K is the canonical divisor. Moreover, c1(O(D)) integrated over X is the degree of D, and c1(T(X)) integrated over X is the Euler class 2 − 2g of the curve X, where g is the genus. So we get the classical Riemann–Roch theorem
l(D) − l(K − D) = deg(D) + 1 − g.
For vector bundles V, the Chern character is rank(V) + c1(V), so we get Weil's Riemann–Roch theorem for vector bundles over curves:
h0(V) − h1(V) = deg(V) + rank(V)(1 − g).
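As a sanity check, HRR reproduces a familiar count on the projective line (a standard worked example; the computation below assumes X = P^1 and E = O(d) with d ≥ 0, and writes H for the class of a point, so that H integrates to 1):

```latex
% ch and td on P^1, using c_1(T_{P^1}) = 2H and H^2 = 0:
\mathrm{ch}(\mathcal{O}(d)) = 1 + dH, \qquad
\mathrm{td}(T_{\mathbb{P}^1}) = 1 + \tfrac{1}{2}\,c_1(T_{\mathbb{P}^1}) = 1 + H,
\\
\chi\bigl(\mathbb{P}^1,\mathcal{O}(d)\bigr)
  = \int_{\mathbb{P}^1} (1 + dH)(1 + H)
  = \int_{\mathbb{P}^1} (d + 1)\,H
  = d + 1 .
```

This agrees with h0(O(d)) = d + 1 and h1(O(d)) = 0 for d ≥ 0, i.e. with the classical count of sections of a degree-d line bundle in genus 0.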
Riemann–Roch theorem for surfaces
For surfaces, the Hirzebruch–Riemann–Roch theorem is essentially the Riemann–Roch theorem for surfaces
χ(D) = χ(O) + D·(D − K)/2,
combined with the Noether formula
χ(O) = (c1^2 + c2)/12,
where c1 and c2 are the Chern classes of the surface.
If we want, we can use Serre duality to express h2(O(D)) as h0(O(K − D)), but unlike the case of curves there is in general no easy way to write the h1(O(D)) term in a form not involving sheaf cohomology (although in practice it often vanishes).
Asymptotic Riemann–Roch
Let D be an ample Cartier divisor on an irreducible projective variety X of dimension n. Then
χ(X, O_X(mD)) = (D^n / n!) m^n + O(m^{n−1}).
More generally, if F is any coherent sheaf on X then
χ(X, F ⊗ O_X(mD)) = rank(F) (D^n / n!) m^n + O(m^{n−1}).
See also
Grothendieck–Riemann–Roch theorem - contains many computations and examples
Hilbert polynomial - HRR can be used to compute Hilbert polynomials
References
Friedrich Hirzebruch, Topological Methods in Algebraic Geometry
External links
The Hirzebruch-Riemann-Roch Theorem
Topological methods of algebraic geometry
Theorems in complex geometry
Theorems in algebraic geometry
Bernhard Riemann | Hirzebruch–Riemann–Roch theorem | [
"Mathematics"
] | 1,000 | [
"Theorems in algebraic geometry",
"Theorems in complex geometry",
"Theorems in geometry"
] |
2,892,513 | https://en.wikipedia.org/wiki/Nonimaging%20optics | Nonimaging optics (also called anidolic optics) is a branch of optics concerned with the optimal transfer of light radiation between a source and a target. Unlike traditional imaging optics, the techniques involved do not attempt to form an image of the source; instead, an optical system optimized for radiative transfer from the source to the target is desired.
Applications
The two design problems that nonimaging optics solves better than imaging optics are:
solar energy concentration: maximizing the amount of energy applied to a receiver, typically a solar cell or a thermal receiver
illumination: controlling the distribution of light, typically so it is "evenly" spread over some areas and completely blocked from other areas
Typical variables to be optimized at the target include the total radiant flux, the angular distribution of optical radiation, and the spatial distribution of optical radiation. These variables on the target side of the optical system often must be optimized while simultaneously considering the collection efficiency of the optical system at the source.
Solar energy concentration
For a given concentration, nonimaging optics provide the widest possible acceptance angles and, therefore, are the most appropriate for use in solar concentration as, for example, in concentrated photovoltaics. When compared to "traditional" imaging optics (such as parabolic reflectors or Fresnel lenses), the main advantages of nonimaging optics for concentrating solar energy are:
wider acceptance angles resulting in higher tolerances (and therefore higher efficiencies) for:
less precise tracking
imperfectly manufactured optics
imperfectly assembled components
movements of the system due to wind
finite stiffness of the supporting structure
deformation due to aging
capture of circumsolar radiation
other imperfections in the system
higher solar concentrations
smaller solar cells (in concentrated photovoltaics)
higher temperatures (in concentrated solar thermal)
lower thermal losses (in concentrated solar thermal)
widen the applications of concentrated solar power, for example to solar lasers
possibility of a uniform illumination of the receiver
improve reliability and efficiency of the solar cells (in concentrated photovoltaics)
improve heat transfer (in concentrated solar thermal)
design flexibility: different kinds of optics with different geometries can be tailored for different applications
Also, for low concentrations, the very wide acceptance angles of nonimaging optics can avoid solar tracking altogether or limit it to a few positions a year.
The main disadvantage of nonimaging optics when compared to parabolic reflectors or Fresnel lenses is that, for high concentrations, they typically have one more optical surface, slightly decreasing efficiency. That, however, is only noticeable when the optics are aiming perfectly towards the Sun, which is typically not the case because of imperfections in practical systems.
Illumination optics
Examples of nonimaging optical devices include optical light guides, nonimaging reflectors, nonimaging lenses or a combination of these devices. Common applications of nonimaging optics include many areas of illumination engineering (lighting). Examples of modern implementations of nonimaging optical designs include automotive headlamps, LCD backlights, illuminated instrument panel displays, fiber optic illumination devices, LED lights, projection display systems and luminaires.
When compared to "traditional" design techniques, nonimaging optics has the following advantages for illumination:
better handling of extended sources
more compact optics
color mixing capabilities
combination of light sources and light distribution to different places
well suited to be used with increasingly popular LED light sources
tolerance to variations in the relative position of light source and optic
Examples of nonimaging illumination optics using solar energy are anidolic lighting or solar pipes.
Other applications
Modern portable and wearable optical devices and systems of small size and low weight may require nanotechnology. This need may be addressed by nonimaging metaoptics, which uses metalenses and metamirrors for the optimal transfer of light energy.
Collecting radiation emitted by high-energy particle collisions using the fewest photomultiplier tubes.
Collecting luminescent radiation in photon upconversion devices, with the compound parabolic concentrator being to date the most promising geometrical-optics collector.
Some of the design methods for nonimaging optics are also finding application in imaging devices, for example some with ultra-high numerical aperture.
Theory
Early academic research in nonimaging optical mathematics seeking closed-form solutions was first published in textbook form in 1978. A modern textbook illustrating the depth and breadth of research and engineering in this area was published in 2004. A thorough introduction to this field was published in 2008.
Special applications of nonimaging optics, such as Fresnel lenses for solar concentration or solar concentration in general, have also been published, although this last reference by O'Gallagher mostly describes work developed some decades ago. Other publications include book chapters.
Imaging optics can concentrate sunlight to, at most, the same flux found at the surface of the Sun.
Nonimaging optics have been demonstrated to concentrate sunlight to 84,000 times the ambient intensity of sunlight, exceeding the flux found at the surface of the Sun, and approaching the theoretical (2nd law of thermodynamics) limit of heating objects to the temperature of the Sun's surface.
The simplest way to design nonimaging optics is called "the method of strings", based on the edge ray principle. Other more advanced methods were developed starting in the early 1990s that can better handle extended light sources than the edge-ray method. These were developed primarily to solve the design problems related to solid state automobile headlamps and complex illumination systems. One of these advanced design methods is the simultaneous multiple surface design method (SMS). The 2D SMS design method is described in detail in the aforementioned textbooks. The 3D SMS design method was developed in 2003 by a team of optical scientists at Light Prescriptions Innovators.
Edge ray principle
In simple terms, the edge ray principle states that if the light rays coming from the edges of the source are redirected towards the edges of the receiver, this will ensure that all light rays coming from inner points of the source end up on the receiver. There is no condition on image formation; the only goal is to transfer the light from the source to the target.
Figure Edge ray principle on the right illustrates this principle. A lens collects light from a source S1S2 and redirects it towards a receiver R1R2.
The lens has two optical surfaces and, therefore, it is possible to design it (using the SMS design method) so that the light rays coming from the edge S1 of the source are redirected towards edge R1 of the receiver, as indicated by the blue rays. By symmetry, the rays coming from edge S2 of the source are redirected towards edge R2 of the receiver, as indicated by the red rays. The rays coming from an inner point S in the source are redirected towards the target, but they are not concentrated onto a point and, therefore, no image is formed.
Actually, if we consider a point P on the top surface of the lens, a ray coming from S1 through P will be redirected towards R1. Also a ray coming from S2 through P will be redirected towards R2. A ray coming through P from an inner point S in the source will be redirected towards an inner point of the receiver. This lens then guarantees that all light from the source crossing it will be redirected towards the receiver. However, no image of the source is formed on the target. Imposing the condition of image formation on the receiver would imply using more optical surfaces, making the optic more complicated, but would not improve light transfer between source and target (since all light is already transferred). For that reason nonimaging optics are simpler and more efficient than imaging optics in transferring radiation from a source to a target.
Design methods
Nonimaging optics devices are obtained using different methods. The most important are: the flow-line or Winston-Welford design method, the SMS or Miñano-Benitez design method and the Miñano design method using Poisson brackets. The first (flow-line) is probably the most used, although the second (SMS) has proven very versatile, resulting in a wide variety of optics. The third has remained in the realm of theoretical optics and has not found real world application to date. Often optimization is also used.
Typically optics have refractive and reflective surfaces, and light travels through media of different refractive indices as it crosses the optic. In those cases a quantity called optical path length (OPL) may be defined as OPL = Σ_i n_i d_i, where index i indicates different ray sections between successive deflections (refractions or reflections), n_i is the refractive index and d_i the distance in each section i of the ray path.
The OPL is constant between wavefronts. This can be seen for refraction in the figure "constant OPL" to the right. It shows a separation c(τ) between two media of refractive indices n1 and n2, where c(τ) is described by a parametric equation with parameter τ. Also shown are a set of rays perpendicular to wavefront w1 and traveling in the medium of refractive index n1. These rays refract at c(τ) into the medium of refractive index n2 in directions perpendicular to wavefront w2. Ray rA crosses c at point c(τA) and, therefore, ray rA is identified by parameter τA on c. Likewise, ray rB is identified by parameter τB on c. Writing d1(τ) for the distance from w1 to c(τ) along the incident ray and d2(τ) for the distance from c(τ) to w2 along the refracted ray, ray rA has optical path length S(τA) = n1 d1(τA) + n2 d2(τA). Also, ray rB has optical path length S(τB) = n1 d1(τB) + n2 d2(τB). The difference in optical path length for rays rA and rB is given by:
S(τB) − S(τA) = ∫ from τA to τB of (dS/dτ) dτ.
In order to calculate the value of this integral, we evaluate dS/dτ, again with the help of the same figure. We have dd1/dτ = t1 · (dc/dτ) and dd2/dτ = −t2 · (dc/dτ), where t1 and t2 are unit vectors along the incident and refracted rays. These expressions can be rewritten as dd1/dτ = |dc/dτ| sin θ1 and dd2/dτ = −|dc/dτ| sin θ2, where θ1 and θ2 are the angles the rays make with the normal to c. From the law of refraction n1 sin θ1 = n2 sin θ2, and therefore dS/dτ = n1 dd1/dτ + n2 dd2/dτ = 0, leading to S(τA) = S(τB). Since these may be arbitrary rays crossing c, it may be concluded that the optical path length between w1 and w2 is the same for all rays perpendicular to incoming wavefront w1 and outgoing wavefront w2.
Similar conclusions may be drawn for the case of reflection; in that case n1 = n2 and the law of reflection θ1 = θ2 leads to the same result. This relationship between rays and wavefronts is valid in general.
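As a minimal numerical illustration of this definition (a Python sketch; representing a ray simply by its successive deflection points is an assumption made for illustration):

```python
import numpy as np

def optical_path_length(points, indices):
    """OPL = sum_i n_i * d_i over the successive straight sections of a ray.
    points  -- successive deflection points of the ray
    indices -- refractive index in each section (len(points) - 1 entries)"""
    pts = [np.asarray(p, dtype=float) for p in points]
    lengths = [np.linalg.norm(b - a) for a, b in zip(pts, pts[1:])]
    return sum(n * d for n, d in zip(indices, lengths))

# A ray refracted once at the origin, going from glass (n = 1.5) into air:
print(optical_path_length([(0, -2), (0, 0), (1, 1)], [1.5, 1.0]))  # 3 + sqrt(2)
```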
Flow-line design method
The flow-line (or Winston-Welford) design method typically leads to optics which guide the light confining it between two reflective surfaces. The best known of these devices is the CPC (Compound Parabolic Concentrator).
These types of optics may be obtained, for example, by applying the edge ray of nonimaging optics to the design of mirrored optics, as shown in figure "CEC" on the right. It is composed of two elliptical mirrors e1 with foci S1 and R1 and its symmetrical e2 with foci S2 and R2.
Mirror e1 redirects the rays coming from the edge S1 of the source towards the edge R1 of the receiver and, by symmetry, mirror e2 redirects the rays coming from the edge S2 of the source towards the edge R2 of the receiver. This device does not form an image of the source S1S2 on the receiver R1R2 as indicated by the green rays coming from a point S in the source that end up on the receiver but are not focused onto an image point. Mirror e2 starts at the edge R1 of the receiver since leaving a gap between mirror and receiver would allow light to escape between the two. Also, mirror e2 ends at ray r connecting S1 and R2 since cutting it short would prevent it from capturing as much light as possible, but extending it above r would shade light coming from S1 and its neighboring points of the source. The resulting device is called a CEC (Compound Elliptical Concentrator).
A particular case of this design happens when the source S1S2 becomes infinitely large and moves to an infinite distance. Then the rays coming from S1 become parallel rays and the same for those coming from S2 and the elliptical mirrors e1 and e2 converge to parabolic mirrors p1 and p2. The resulting device is called a CPC (Compound Parabolic Concentrator), and shown in the "CPC" figure on the left. CPCs are the most common seen nonimaging optics. They are often used to demonstrate the difference between Imaging optics and nonimaging optics.
When seen from the CPC, the incoming radiation (emitted from the infinite source at an infinite distance) subtends an angle ±θ (total angle 2θ). This is called the acceptance angle of the CPC. The reason for this name can be appreciated in the figure "rays showing the acceptance angle" on the right. An incoming ray r1 at an angle θ to the vertical (coming from the edge of the infinite source) is redirected by the CPC towards the edge R1 of the receiver.
Another ray r2 at an angle α<θ to the vertical (coming from an inner point of the infinite source) is redirected towards an inner point of the receiver. However, a ray r3 at an angle β>θ to the vertical (coming from a point outside the infinite source) bounces around inside the CPC until it is rejected by it. Therefore, only the light inside the acceptance angle ±θ is captured by the optic; light outside it is rejected.
The ellipses of a CEC can be obtained by the (pins and) string method, as shown in the figure "string method" on the left. A string of constant length is attached to edge point S1 of the source and edge point R1 of the receiver.
The string is kept stretched while moving a pencil up and down, drawing the elliptical mirror e1. We can now consider a wavefront w1 as a circle centered at S1. This wavefront is perpendicular to all rays coming out of S1, and the distance from S1 to w1 is constant for all its points. The same is valid for wavefront w2 centered at R1. The distance from w1 to w2 is then constant for all light rays reflected at e1, and these light rays are perpendicular to both the incoming wavefront w1 and the outgoing wavefront w2.
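A short numerical sketch of this construction (Python; the point names follow the figure, while the sampling details are assumptions for illustration):

```python
import numpy as np

def string_method_ellipse(S1, R1, string_length, num=200):
    """Points P with |P - S1| + |P - R1| = string_length: the ellipse traced
    by a taut string of fixed length attached to foci S1 and R1. A real CEC
    keeps only the arc of this ellipse allowed by the edge rays."""
    S1, R1 = np.asarray(S1, float), np.asarray(R1, float)
    center = 0.5 * (S1 + R1)
    c = 0.5 * np.linalg.norm(R1 - S1)   # half the focal separation
    a = 0.5 * string_length             # semi-major axis
    if a <= c:
        raise ValueError("string must be longer than the distance S1-R1")
    b = np.sqrt(a * a - c * c)          # semi-minor axis
    u = (R1 - S1) / (2.0 * c)           # unit vector along the major axis
    v = np.array([-u[1], u[0]])         # unit vector along the minor axis
    t = np.linspace(0.0, 2.0 * np.pi, num)
    return center + np.outer(np.cos(t), a * u) + np.outer(np.sin(t), b * v)

mirror_e1 = string_method_ellipse(S1=(0.0, 5.0), R1=(1.0, 0.0), string_length=7.0)
```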
Optical path length (OPL) is constant between wavefronts. When applied to nonimaging optics, this result extends the string method to optics with both refractive and reflective surfaces. Figure "DTIRC" (Dielectric Total Internal Reflection Concentrator) on the left shows one such example.
The shape of the top surface s is prescribed, for example, as a circle. Then the lateral wall m1 is calculated by the condition of constant optical path length S=d1+n d2+n d3 where d1 is the distance between incoming wavefront w1 and point P on the top surface s, d2 is the distance between P and Q and d3 the distance between Q and outgoing wavefront w2, which is circular and centered at R1. Lateral wall m2 is symmetrical to m1. The acceptance angle of the device is 2θ.
These optics are called flow-line optics and the reason for that is illustrated in figure "CPC flow-lines" on the right. It shows a CPC with an acceptance angle 2θ, highlighting one of its inner points P.
The light crossing this point is confined to a cone of angular aperture 2α. A line f is also shown whose tangent at point P bisects this cone of light and, therefore, points in the direction of the "light flow" at P. Several other such lines are also shown in the figure. They all bisect the edge rays at each point inside the CPC and, for that reason, their tangent at each point points in the direction of the flow of light. These are called flow-lines and the CPC itself is just a combination of flow line p1 starting at R2 and p2 starting at R1.
Variations to the flow-line design method
There are some variations to the flow-line design method.
One variation is multichannel or stepped flow-line optics, in which light is split into several "channels" and then recombined into a single output. Aplanatic (a particular case of SMS) versions of these designs have also been developed. The main application of this method is in the design of ultra-compact optics.
Another variation is the confinement of light by caustics. Instead of light being confined by two reflective surfaces, it is confined by a reflective surface and a caustic of the edge rays. This provides the possibility to add lossless non-optical surfaces to the optics.
Simultaneous multiple surface (SMS) design method
This section describes the design procedure.
The SMS (or Miñano-Benitez) design method is very versatile and many different types of optics have been designed using it. The 2D version allows the design of two (although more are also possible) aspheric surfaces simultaneously. The 3D version allows the design of optics with free-form (also called anamorphic) surfaces which may not have any kind of symmetry.
SMS optics are also calculated by applying a constant optical path length between wavefronts. Figure "SMS chain" on the right illustrates how these optics are calculated. In general, the rays perpendicular to incoming wavefront w1 will be coupled to outgoing wavefront w4 and the rays perpendicular to incoming wavefront w2 will be coupled to outgoing wavefront w3, and these wavefronts may be any shape. However, for the sake of simplicity, this figure shows a particular case of circular wavefronts. This example shows a lens of a given refractive index n designed for a source S1S2 and a receiver R1R2.
The rays emitted from edge S1 of the source are focused onto edge R1 of the receiver and those emitted from edge S2 of the source are focused onto edge R2 of the receiver. We first choose a point T0 and its normal on the top surface of the lens. We can now take a ray r1 coming from S2 and refract it at T0. Choosing now the optical path length S22 between S2 and R2 we have one condition that allows us to calculate point B1 on the bottom surface of the lens. The normal at B1 can also be calculated from the directions of the incoming and outgoing rays at this point and the refractive index of the lens. Now we can repeat the process taking a ray r2 coming from R1 and refracting it at B1. Choosing now the optical path length S11 between R1 and S1 we have one condition that allows us to calculate point T1 on the top surface of the lens. The normal at T1 can also be calculated from the directions of the incoming and outgoing rays at this point and the refractive index of the lens. Now, refracting at T1 a ray r3 coming from S2 we can calculate a new point B3 and corresponding normal on the bottom surface using the same optical path length S22 between S2 and R2. Refracting at B3 a ray r4 coming from R1 we can calculate a new point T3 and corresponding normal on the top surface using the same optical path length S11 between R1 and S1. The process continues by calculating another point B5 on the bottom surface using another edge ray r5, and so on. The sequence of points T0 B1 T1 B3 T3 B5 is called an SMS chain.
Another SMS chain can be constructed towards the right starting at point T0. A ray from S1 refracted at T0 defines a point and normal B2 on the bottom surface, by using constant optical path length S11 between S1 and R1. Now a ray from R2 refracted at B2 defines a new point and normal T2 on the top surface, by using constant optical path length S22 between S2 and R2. The process continues as more points are added to the SMS chain.
In this example shown in the figure, the optic has a left-right symmetry and, therefore, points B2 T2 B4 T4 B6 can also be obtained by symmetry about the vertical axis of the lens.
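The step-by-step calculation above lends itself to a compact numerical sketch. The fragment below (Python; a simplified 2D sketch under assumed geometry, not the full Miñano-Benitez implementation) performs one SMS step: march along a refracted ray inside the lens until the prescribed optical path length to the opposite focus is met, then recover the surface normal there from the vector form of Snell's law:

```python
import numpy as np

def refract(d, nrm, n1, n2):
    """Vector form of Snell's law. d: unit ray direction; nrm: unit normal
    pointing against d; n1 -> n2: refractive indices."""
    eta = n1 / n2
    cos_i = -np.dot(nrm, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                          # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * nrm

def sms_step(P, d, n_lens, focus, S_remaining):
    """From point P follow direction d inside the lens (index n_lens) to the
    point B where n_lens * t + |B - focus| = S_remaining (with B = P + t d),
    so the remaining optical path to 'focus' is met exactly. The normal at B
    follows from Snell's law: n_lens*d - d_out is parallel to the normal."""
    f = lambda t: n_lens * t + np.linalg.norm(P + t * d - focus) - S_remaining
    lo, hi = 1e-9, S_remaining / n_lens      # bracket (assumes f(lo) < 0)
    for _ in range(80):                      # plain bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    B = P + 0.5 * (lo + hi) * d
    d_out = (focus - B) / np.linalg.norm(focus - B)
    N = n_lens * d - d_out
    return B, N / np.linalg.norm(N)
```

Iterating sms_step while alternating between the two surfaces and the two foci reproduces the chain T0, B1, T1, B3, T3, ... described above.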
Now we have a sequence of spaced points on the plane. Figure "SMS skinning" on the left illustrates the process used to fill the gaps between points, completely defining both optical surfaces.
We pick two points, say B1 and B2, with their corresponding normals and interpolate a curve c between them. Now we pick a point B12 and its normal on c. A ray r1 coming from R1 and refracted at B12 defines a new point T01 and its normal between T0 and T1 on the top surface, by applying the same constant optical path length S11 between S1 and R1. Now a ray r2 coming from S2 and refracted at T01 defines a new point and normal on the bottom surface, by applying the same constant optical path length S22 between S2 and R2. The process continues with rays r3 and r4 building a new SMS chain filling the gaps between points. Picking other points and corresponding normals on curve c gives us more points in between the other SMS points calculated originally.
In general, the two SMS optical surfaces do not need to be refractive. Refractive surfaces are noted R (from Refraction) while reflective surfaces are noted X (from the Spanish word refleXión). Total Internal Reflection (TIR) is noted I. Therefore, a lens with two refractive surfaces is an RR optic, while another configuration with a reflective and a refractive surface is an XR optic. Configurations with more optical surfaces are also possible and, for example, if light is first refracted (R), then reflected (X) then reflected again by TIR (I), the optic is called an RXI.
The SMS 3D is similar to the SMS 2D, only now all calculations are done in 3D space. Figure "SMS 3D chain" on the right illustrates the algorithm of an SMS 3D calculation.
The first step is to choose the incoming wavefronts w1 and w2 and outgoing wavefronts w3 and w4, the optical path length S14 between w1 and w4, and the optical path length S23 between w2 and w3. In this example the optic is a lens (an RR optic) with two refractive surfaces, so its refractive index must also be specified. One difference between the SMS 2D and the SMS 3D is in how to choose the initial point T0, which is now on a chosen 3D curve a. The normal chosen for point T0 must be perpendicular to curve a. The process now evolves similarly to the SMS 2D. A ray r1 coming from w1 is refracted at T0 and, with the optical path length S14, a new point B2 and its normal are obtained on the bottom surface. Now ray r2 coming from w3 is refracted at B2 and, with the optical path length S23, a new point T2 and its normal are obtained on the top surface. With ray r3 a new point B4 and its normal are obtained, with ray r4 a new point T4 and its normal are obtained, and so on. This process is performed in 3D space and the result is a 3D SMS chain. As with the SMS 2D, a set of points and normals to the left of T0 can also be obtained using the same method. Now, choosing another point T0 on curve a, the process can be repeated and more points obtained on the top and bottom surfaces of the lens.
The power of the SMS method lies in the fact that the incoming and outgoing wavefronts can themselves be free-form, giving the method great flexibility. Also, by designing optics with reflective surfaces or combinations of reflective and refractive surfaces, different configurations are possible.
Miñano design method using Poisson brackets
This design method was developed by Miñano and is based on Hamiltonian optics, the Hamiltonian formulation of geometrical optics which shares much of the mathematical formulation with Hamiltonian mechanics. It allows the design of optics with variable refractive index, and therefore solves some nonimaging problems that are not solvable using other methods. However, manufacturing of variable refractive index optics is still not possible and this method, although potentially powerful, did not yet find a practical application.
Conservation of etendue
Conservation of etendue is a central concept in nonimaging optics. In concentration optics, it relates the acceptance angle with the maximum concentration possible. Conservation of etendue may be seen as a constant volume moving in phase space.
Köhler integration
In some applications it is important to achieve a given irradiance (or illuminance) pattern on a target, while allowing for movements or inhomogeneities of the source. Figure "Köhler integrator" on the right illustrates this for the particular case of solar concentration. Here the light source is the sun moving in the sky. On the left this figure shows a lens L1 L2 capturing sunlight incident at an angle α to the optical axis and concentrating it onto a receiver L3 L4. As seen, this light is concentrated onto a hotspot on the receiver. This may be a problem in some applications. One way around this is to add a new lens extending from L3 to L4 that captures the light from L1 L2 and redirects it onto a receiver R1 R2, as shown in the middle of the figure.
The situation in the middle of the figure shows a nonimaging lens L1 L2 designed in such a way that sunlight (here considered as a set of parallel rays) incident at an angle θ to the optical axis will be concentrated to point L3. On the other hand, nonimaging lens L3 L4 is designed in such a way that light rays coming from L1 are focused on R2 and light rays coming from L2 are focused on R1. Therefore, ray r1 incident on the first lens at an angle θ will be redirected towards L3. When it hits the second lens, it is coming from point L1 and it is redirected by the second lens to R2. Ray r2, also incident on the first lens at an angle θ, is likewise redirected towards L3. However, when it hits the second lens, it is coming from point L2 and it is redirected by the second lens to R1. Intermediate rays incident on the first lens at an angle θ will be redirected to points between R1 and R2, fully illuminating the receiver.
Something similar happens in the situation shown in the same figure, on the right. Ray r3 incident on the first lens at an angle α<θ will be redirected towards a point between L3 and L4. When it hits the second lens, it is coming from point L1 and it is redirected by the second lens to R2. Also, Ray r4 incident on the first lens at an angle α<θ will be redirected towards a point between L3 and L4. When it hits the second lens, it is coming from point L2 and it is redirected by the second lens to R1. Intermediate rays incident on the first lens at an angle α<θ will be redirected to points between R1 and R2, also fully illuminating the receiver.
This combination of optical elements is called Köhler illumination. Although the example given here was for solar energy concentration, the same principles apply for illumination in general. In practice, Köhler optics are typically not designed as a combination of nonimaging optics, but they are simplified versions with a lower number of active optical surfaces. This decreases the effectiveness of the method, but allows for simpler optics. Also, Köhler optics are often divided into several sectors, each one of them channeling light separately and then combining all the light on the target.
An example of one of these optics used for solar concentration is the Fresnel-R Köhler.
Compound parabolic concentrator
In the drawing opposite there are two parabolic mirrors CC' (red) and DD' (blue). Both parabolas are cut at B and A respectively. A is the focal point of parabola CC' and B is the focal point of the parabola DD'. The area DC is the entrance aperture and the flat absorber is AB. This CPC has an acceptance angle of θ.
The parabolic concentrator has an entrance aperture of DC and a focal point F.
The parabolic concentrator only accepts rays of light that are perpendicular to the entrance aperture DC. The tracking of this type of concentrator must be more exact and requires expensive equipment.
The compound parabolic concentrator accepts a greater amount of light and needs less accurate tracking.
For a 3-dimensional "nonimaging compound parabolic concentrator", the maximum concentration possible in air or in vacuum (equal to the ratio of input and output aperture areas) is:
C_max = 1 / sin^2(θ),
where θ is the half-angle of the acceptance angle (of the larger aperture).
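This limit is easy to evaluate numerically (a Python sketch; the dielectric-filled variant C_max = (n/sin θ)^2 is the standard extension and is included here as a noted assumption):

```python
import numpy as np

def cpc_max_concentration(theta_deg, n=1.0):
    """3D CPC concentration limit: 1/sin^2(theta) in air or vacuum (n = 1),
    (n/sin(theta))^2 when the concentrator is filled with a dielectric."""
    return (n / np.sin(np.radians(theta_deg))) ** 2

for theta in (0.27, 1.0, 5.0, 30.0):   # 0.27 deg is roughly the Sun's half-angle
    print(f"theta = {theta:5.2f} deg -> C_max = {cpc_max_concentration(theta):,.0f}x")
```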
History
The development started in the mid-1960s at three different locations: by V. K. Baranov (USSR) with the study of focons (focusing cones), by Martin Ploke (Germany), and by Roland Winston (United States). This led to the independent origin of the first nonimaging concentrators, later applied to solar energy concentration. Among these three earliest works, the most developed was the American one, resulting in what nonimaging optics is today.
A good introduction was published by Roland Winston: "Nonimaging Optics", Scientific American, vol. 264, no. 3, 1991, pp. 76–81.
There are different commercial companies and universities working on nonimaging optics. Currently the largest research group in this subject is the Advanced Optics group at the CeDInt, part of the Technical University of Madrid (UPM).
See also
Etendue
Acceptance angle
Concentrated photovoltaics
Concentrated solar power
Solid-state lighting
Lighting
Anidolic lighting
Hamiltonian optics
Winston cone
References
External links
Oliver Dross et al., Review of SMS design methods and real-world applications, SPIE Proceedings Vol. 5529, pp. 35–47, 2004
Compound Parabolic Concentrator for Passive Radiative Cooling
Photovoltaic applications of Compound Parabolic Concentrator (CPC)
Optics | Nonimaging optics | [
"Physics",
"Chemistry"
] | 6,261 | [
"Applied and interdisciplinary physics",
"Optics",
" molecular",
"Atomic",
" and optical physics"
] |
2,893,444 | https://en.wikipedia.org/wiki/Building%20automation | Building automation (BAS), also known as a building management system (BMS) or a building energy management system (BEMS), is the automatic centralized control of a building's HVAC (heating, ventilation and air conditioning), electrical, lighting, shading, access control, security systems, and other interrelated systems. Some objectives of building automation are improved occupant comfort, efficient operation of building systems, reduction in energy consumption, reduced operating and maintaining costs and increased security.
BAS functionality may keep a building's climate within a specified range, provide light to rooms based on occupancy, monitor performance and device failures, and provide malfunction alarms to building maintenance staff. A BAS works to reduce building energy and maintenance costs compared to a non-controlled building. Most commercial, institutional, and industrial buildings built after 2000 include a BAS, whilst older buildings may be retrofitted with a new BAS.
A building controlled by a BAS is often referred to as an "intelligent building", a "smart building", or (if a residence) a smart home. Commercial and industrial buildings have historically relied on robust proven protocols (like BACnet) while proprietary protocols (like X-10) were used in homes.
With the advent of wireless sensor networks and the Internet of Things, an increasing number of smart buildings are resorting to using low-power wireless communication technologies such as Zigbee, Bluetooth Low Energy and LoRa to interconnect the local sensors, actuators and processing devices.
Almost all multi-story green buildings are designed to accommodate a BAS for the energy, air and water conservation characteristics. Electrical device demand response is a typical function of a BAS, as is the more sophisticated ventilation and humidity monitoring required of "tight" insulated buildings. Most green buildings also use as many low-power DC devices as possible. Even a passivhaus design intended to consume no net energy whatsoever will typically require a BAS to manage heat capture, shading and venting, and scheduling device use.
Characteristics
Building management systems are most commonly implemented in large projects with extensive mechanical, HVAC, and electrical systems. Systems linked to a BMS typically represent 40% of a building's energy usage; if lighting is included, this number approaches 70%. BMS systems are a critical component of managing energy demand. Improperly configured BMS systems are believed to account for 20% of building energy usage, or approximately 8% of total energy usage in the United States.
In addition to controlling the building's internal environment, BMS systems are sometimes linked to access control (turnstiles and access doors controlling who is allowed access and egress to the building) or to other security systems such as closed-circuit television (CCTV) and motion detectors. Fire alarm systems and elevators are also sometimes linked to a BMS for monitoring. If a fire is detected, the fire alarm panel can close dampers in the ventilation system to stop smoke spreading, shut down air handlers, start smoke-evacuation fans, and send all the elevators to the ground floor and park them to prevent people from using them.
Building management systems have also included disaster-response mechanisms (such as base isolation) to save structures from earthquakes. More recently, companies and governments have been working to find similar solutions for flood zones and coastal areas at risk from rising sea levels. Self-adjusting floating environments draw from existing technologies used to float concrete bridges and runways, such as Washington's SR 520 and Japan's Mega-Float.
Types of inputs and outputs
Sensors
Analog inputs are used to read a variable measurement. Examples are temperature, humidity and pressure sensors which could be thermistor, 4–20 mA, 0–10 volt or platinum resistance thermometer (resistance temperature detector), or wireless sensors.
A digital input indicates a device is on or off. Some examples of digital inputs would be a door contact switch, a current switch, an air flow switch, or a voltage-free relay contact (dry contact). Digital inputs could also be pulse inputs counting the pulses over a period of time. An example is a turbine flow meter transmitting flow data as a frequency of pulses to an input.
Nonintrusive load monitoring is software relying on digital sensors and algorithms to discover appliance or other loads from the electrical or magnetic characteristics of the circuit, although it detects the event by analog means. Such monitors are extremely cost-effective in operation and useful not only for identification but also to detect start-up transients, line or equipment faults, etc.
Controls
Analog outputs control the speed or position of a device, such as a variable frequency drive, an I-P (current to pneumatics) transducer, or a valve or damper actuator. An example is a hot water valve opening up 25% to maintain a setpoint. Another example is a variable frequency drive ramping up a motor slowly to avoid a hard start.
Digital outputs are used to open and close relays and switches as well as drive a load upon command. An example would be to turn on the parking lot lights when a photocell indicates it is dark outside. Another example would be to open a valve by allowing 24 V DC/AC to pass through the output, powering the valve. Digital outputs could also be pulse-type outputs emitting a frequency of pulses over a given period of time. An example is an energy meter calculating kWh and emitting a corresponding frequency of pulses.
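Converting such a pulse count back to energy and average demand is simple arithmetic (a Python sketch; the meter constant of 1000 pulses per kWh is an assumed nameplate value):

```python
def pulses_to_kwh(pulse_count, pulses_per_kwh=1000):
    """Energy represented by a pulse count, given the meter constant."""
    return pulse_count / pulses_per_kwh

def average_power_kw(pulse_count, interval_s, pulses_per_kwh=1000):
    """Average demand over the counting interval."""
    return pulses_to_kwh(pulse_count, pulses_per_kwh) * 3600.0 / interval_s

# 50 pulses in a 15-minute interval -> 0.05 kWh -> 0.2 kW average demand
print(average_power_kw(pulse_count=50, interval_s=900))
```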
Infrastructure
Controller
Controllers are essentially small, purpose-built computers with input and output capabilities. These controllers come in a range of sizes and capabilities to control devices commonly found in buildings, and to control sub-networks of controllers.
Inputs allow a controller to read temperature, humidity, pressure, current flow, air flow, and other essential factors. The outputs allow the controller to send command and control signals to slave devices, and to other parts of the system. Inputs and outputs can be either digital or analog. Digital outputs are also sometimes called discrete depending on manufacturer.
Controllers used for building automation can be grouped in three categories: programmable logic controllers (PLCs), system/network controllers, and terminal unit controllers. An additional device may also be used to integrate third-party systems (e.g. a stand-alone AC system) into a central building automation system.
Terminal unit controllers usually are suited for control of lighting and/or simpler devices such as a package rooftop unit, heat pump, VAV box, fan coil, etc. The installer typically selects one of the available pre-programmed personalities best suited to the device to be controlled, and does not have to create new control logic.
Occupancy
Occupancy is one of two or more operating modes for a building automation system; unoccupied, morning warmup, and night-time setback are other common modes.
Occupancy is usually based on time-of-day schedules. In occupancy mode, the BAS aims to provide a comfortable climate and adequate lighting, often with zone-based control so that users on one side of a building have a different thermostat (or a different system, or sub-system) than users on the opposite side.
A temperature sensor in the zone provides feedback to the controller, so it can deliver heating or cooling as needed.
If enabled, morning warmup (MWU) mode occurs prior to occupancy. During morning warmup the BAS tries to bring the building to setpoint just in time for occupancy. The BAS often factors in outdoor conditions and historical experience to optimize MWU. This is also referred to as optimized start.
Some buildings rely on occupancy sensors to activate lighting or climate conditioning. Given the potential for long lead times before a space becomes sufficiently cool or warm, climate conditioning is not often initiated directly by an occupancy sensor.
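A minimal sketch of time-of-day mode selection and setpoint setback (Python; the schedule, setpoints and fixed warm-up lead time are illustrative assumptions, whereas a real BAS computes the optimized-start lead adaptively from outdoor conditions and history, as described above):

```python
from datetime import time

OCCUPIED_START, OCCUPIED_END = time(8, 0), time(18, 0)  # assumed schedule
SETPOINTS_C = {"occupied": 21.0, "unoccupied": 15.0}    # assumed setpoints
WARMUP_LEAD_H = 2                                       # fixed lead (assumed)

def mode(now):
    """Operating mode for a given time of day."""
    if OCCUPIED_START <= now < OCCUPIED_END:
        return "occupied"
    warmup_start = time(OCCUPIED_START.hour - WARMUP_LEAD_H,
                        OCCUPIED_START.minute)
    if warmup_start <= now < OCCUPIED_START:
        return "morning_warmup"
    return "unoccupied"

def heating_setpoint(now):
    """Morning warm-up already drives the zone toward the occupied setpoint."""
    return SETPOINTS_C["unoccupied" if mode(now) == "unoccupied" else "occupied"]

print(mode(time(6, 30)), heating_setpoint(time(6, 30)))  # morning_warmup 21.0
```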
Lighting
Lighting can be turned on, off, or dimmed with a building automation or lighting control system based on time of day, or on occupancy sensors, photosensors and timers. One typical example is to keep the lights in a space on for half an hour after the last motion is sensed. A photocell placed outside a building can sense darkness, and the time of day, and modulate lights in outer offices and the parking lot.
Lighting is also a good candidate for demand response, with many control systems providing the ability to dim (or turn off) lights to take advantage of DR incentives and savings.
In newer buildings, the lighting control can be based on the field bus Digital Addressable Lighting Interface (DALI). Lamps with DALI ballasts are fully dimmable. DALI can also detect lamp and ballast failures on DALI luminaires and signal these failures.
Shading and glazing
Shading and glazing are essential components of the building system: they affect occupants' visual, acoustical, and thermal comfort and provide the occupant with a view outdoors. Automated shading and glazing systems are solutions for controlling solar heat gains and glare. The term refers to the use of technology to control external or internal shading devices (such as blinds and shades) or the glazing itself. The system responds actively and rapidly to changing outdoor data (such as solar radiation and wind) and to the changing interior environment (such as temperature, illuminance, and occupant demands). Building shading and glazing systems can contribute to thermal and lighting improvement from both the energy-conservation and the comfort points of view.
Dynamic shading
Dynamic shading devices allow control of the daylight and solar energy entering the built environment in relation to outdoor conditions, daylighting demands and solar positions. Common products include venetian blinds, roller shades, louvers, and shutters. They are mostly installed on the interior side of the glazing system because of the low maintenance cost, but can also be used on the exterior or in a combination of both.
Air handlers
Most air handlers mix return and outside air so less temperature/humidity conditioning is needed. This can save money by using less chilled or heated water (not all AHUs use chilled or hot water circuits). Some external air is needed to keep the building's air healthy. To optimize energy efficiency while maintaining healthy indoor air quality (IAQ), demand control (or controlled) ventilation (DCV) adjusts the amount of outside air based on measured levels of occupancy.
Analog or digital temperature sensors may be placed in the space or room, the return and supply air ducts, and sometimes the external air. Actuators are placed on the hot and chilled water valves, the outside air and return air dampers. The supply fan (and return if applicable) is started and stopped based on either time of day, temperatures, building pressures or a combination.
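The demand-controlled ventilation logic mentioned above can be sketched as a simple mapping from a measured CO2 level to an outside-air damper position (Python; the thresholds and minimum position are assumptions, with actual setpoints coming from ventilation standards such as ASHRAE 62.1 and local codes):

```python
def outside_air_damper(co2_ppm, min_pos=0.15, max_pos=1.0,
                       co2_low=600.0, co2_high=1100.0):
    """Minimum ventilation below co2_low, full outside air above co2_high,
    linear interpolation in between."""
    if co2_ppm <= co2_low:
        return min_pos
    if co2_ppm >= co2_high:
        return max_pos
    frac = (co2_ppm - co2_low) / (co2_high - co2_low)
    return min_pos + frac * (max_pos - min_pos)

print(outside_air_damper(850.0))  # 0.575 -> damper slightly over half open
```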
Alarms and security
All modern building automation systems have alarm capabilities. It does little good to detect a potentially hazardous or costly situation if no one who can solve the problem is notified. Notification can be through a computer (email or text message), pager, cellular phone voice call, audible alarm, or all of these. For insurance and liability purposes all systems keep logs of who was notified, when and how.
Alarms may immediately notify someone or only notify when alarms build to some threshold of seriousness or urgency. At sites with several buildings, momentary power failures can cause hundreds or thousands of alarms from equipment that has shut down – these should be suppressed and recognized as symptoms of a larger failure. Some sites are programmed so that critical alarms are automatically re-sent at varying intervals. For example, a repeating critical alarm (of an uninterruptible power supply in 'bypass') might resound at 10 minutes, 30 minutes, and every 2 to 4 hours thereafter until the alarms are resolved.
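The escalating re-notification pattern in this example can be written as a small schedule generator (a Python sketch; the 10-minute, 30-minute, then every-few-hours cadence mirrors the example above and is an assumption, not a standard):

```python
def renotify_offsets_minutes(repeat_h=3.0, horizon_h=24.0):
    """Minutes after the initial critical alarm at which to re-send it,
    until the alarm is resolved or the horizon is reached."""
    offsets, t = [10, 30], 30.0
    while t + repeat_h * 60.0 <= horizon_h * 60.0:
        t += repeat_h * 60.0
        offsets.append(int(t))
    return offsets

print(renotify_offsets_minutes()[:5])  # [10, 30, 210, 390, 570]
```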
Security systems can be interlocked to a building automation system. If occupancy sensors are present, they can also be used as burglar alarms. Because security systems are often deliberately sabotaged, at least some detectors or cameras should have battery backup and wireless connectivity and the ability to trigger alarms when disconnected. Modern systems typically use power-over-Ethernet (which can operate a pan-tilt-zoom camera and other devices up to 30–90 watts) which is capable of charging such batteries and keeps wireless networks free for genuinely wireless applications, such as backup communication in outage.
Fire alarm panels and their related smoke alarm systems are usually hard-wired to override building automation. For example: if the smoke alarm is activated, all the outside air dampers close to prevent air coming into the building, and an exhaust system can isolate the blaze. Similarly, electrical fault detection systems can turn entire circuits off, regardless of the number of alarms this triggers or persons this distresses. Fossil fuel combustion devices also tend to have their own over-rides, such as natural gas feed lines that turn off when slow pressure drops are detected (indicating a leak), or when excess methane is detected in the building's air supply.
Buses and protocols
Most building automation networks consist of a primary and secondary bus which connect high-level controllers (generally specialized for building automation, but may be generic programmable logic controllers) with lower-level controllers, input/output devices and a user interface (also known as a human interface device). ASHRAE's open protocol BACnet or the open protocol LonTalk specify how most such devices interoperate. Modern systems use SNMP to track events, building on decades of history with SNMP-based protocols in the computer networking world.
Physical connectivity between devices was historically provided by dedicated optical fiber, ethernet, ARCNET, RS-232, RS-485 or a low-bandwidth special purpose wireless network. Modern systems rely on standards-based multi-protocol heterogeneous networking such as that specified in the IEEE 1905.1 standard and verified by the nVoy auditing mark. These typically accommodate only IP-based networking but can make use of any existing wiring, and also integrate powerline networking over AC circuits, power over Ethernet low-power DC circuits, and high-bandwidth wireless networks such as LTE, IEEE 802.11n and IEEE 802.11ac, often integrating these using the building-specific wireless mesh open standard Zigbee.
Proprietary hardware dominates the controller market. Each company has controllers for specific applications. Some are designed with limited controls and no interoperability, such as simple packaged roof top units for HVAC. Software will typically not integrate well with packages from other vendors. Cooperation is at the Zigbee/BACnet/LonTalk level only.
Current systems provide interoperability at the application level, allowing users to mix-and-match devices from different manufacturers, and to provide integration with other compatible building control systems. These typically rely on SNMP, long used for this same purpose to integrate diverse computer networking devices into one coherent network.
Protocols and industry standards
1-Wire
BACnet
Bluetooth
DALI
Dynet
EnOcean
KNX
LonTalk
OPC
OpenTherm
OpenWebNet
VSCP
Z-Wave
Zigbee
Security concerns
With the growing spectrum of capabilities and connections to the Internet of Things, building automation systems have repeatedly been reported to be vulnerable, allowing hackers and cybercriminals to attack their components. Buildings can be exploited by hackers to measure or change their environment: sensors allow surveillance (e.g. monitoring the movements of employees or the habits of inhabitants), while actuators allow actions to be performed in buildings (e.g. opening doors or windows for intruders). Several vendors and committees have started to improve the security features in their products and standards, including KNX, Zigbee and BACnet (see recent standards or standard drafts). However, researchers report several open problems in building automation security.
On November 11, 2019, a 132-page security research paper was released titled "I Own Your Building (Management System)" by Gjoko Krstic and Sipke Mellema that addressed more than 100 vulnerabilities affecting various BMS and access control solutions by various vendors.
Room automation
Room automation is a subset of building automation with a similar purpose; it is the consolidation of one or more systems under centralized control, though in this case in one room.
The most common examples of room automation are corporate boardrooms, presentation suites, and lecture halls, where the operation of the large number of devices that define the room function (such as videoconferencing equipment, video projectors, lighting control systems, public address systems, etc.) would make manual operation of the room very complex. It is common for room automation systems to employ a touchscreen as the primary way of controlling each operation.
See also
Building information modeling (BIM)
Control engineering
Digital home
Index of home automation articles
Smart environment
Testing, adjusting, balancing
References
External links
v:Building Automation - learning resources for professionals in this area
Sustainable urban planning | Building automation | [
"Engineering"
] | 3,463 | [
"Building engineering",
"Building automation",
"Automation"
] |
2,893,603 | https://en.wikipedia.org/wiki/Hirudiculture | Hirudiculture is the culture, or farming, of leeches in both natural and artificial environments. This practice drew the attention of Parisian savants and members of the French Société Zoologique d'Acclimatation in the mid-to-late 19th century as a part of a larger interest in the culture of fish and oysters. Leech culture was seen as a solution to growing demand for medicinal leeches throughout the world.
The use of leeches for medicinal purposes, or hirudotherapy, has been revived by contemporary medicine.
See also
Aquaculture
References
External links
Leechcraft in nineteenth century British medicine
Pharmacy
Aquaculture | Hirudiculture | [
"Chemistry"
] | 132 | [
"Pharmacology",
"Pharmacy"
] |
935,041 | https://en.wikipedia.org/wiki/Urbanism | Urbanism is the study of how inhabitants of urban areas, such as towns and cities, interact with the built environment. It is a direct component of disciplines such as urban planning, a profession focusing on the design and management of urban areas, and urban sociology, an academic field which studies urban life.
Many architects, planners, geographers, and sociologists investigate the way people live in densely populated urban areas. There is a wide variety of different theories and approaches to the study of urbanism. However, in some contexts internationally, urbanism is synonymous with urban planning, and urbanist refers to an urban planner.
The term urbanism originated in the late nineteenth century with the Spanish civil engineer Ildefons Cerdà, whose intent was to create an autonomous activity focused on the spatial organization of the city. Urbanism's emergence in the early 20th century was associated with the rise of centralized manufacturing, mixed-use neighborhoods, social organizations and networks, and what has been described as "the convergence between political, social and economic citizenship".
Urbanism can be understood as placemaking and the creation of place identity at a citywide level, however as early as 1938 Louis Wirth wrote that it is necessary to stop 'identify[ing] urbanism with the physical entity of the city', go 'beyond an arbitrary boundary line' and consider how 'technological developments in transportation and communication have enormously extended the urban mode of living beyond the confines of the city itself.'
Concepts
Network-based theories
Gabriel Dupuy applied network theory to the field of urbanism, suggesting that the single dominant characteristic of modern urbanism is its networked character, as opposed to segregated conceptions of space (i.e. zones, boundaries and edges).
Stephen Graham and Simon Marvin argue that we are witnessing a post-urban environment where decentralized, loosely connected neighborhoods and zones of activity assume the former organizing role played by urban spaces. Their theory of splintering urbanism involves the "fragmentation of the social and material fabric of cities" into "cellular clusters of globally connected high-service enclaves and network ghettos" driven by electronic networks that segregate as much as they connect. Dominique Lorrain argues that the process of splintering urbanism began towards the end of the 20th century with the emergence of the gigacity, a new form of a networked city characterised by three-dimensional size, network density and the blurring of city boundaries.
Manuel Castells suggested that within a network society, "premium" infrastructure networks (high-speed telecommunications, "smart" highways, global airline networks) selectively connect together the most favored users and places and bypass the less favored. Graham and Marvin argue that attention to infrastructure networks is reactive to crises or collapse, rather than sustained and systematic, because of a failure to understand the links between urban life and urban infrastructure networks.
Other modern theorists
Douglas Kelbaugh identifies three paradigms within urbanism: New Urbanism, Everyday Urbanism, and Post-Urbanism.
Paul L. Knox refers to one of many trends in contemporary urbanism as the "aestheticization of everyday life".
Alex Krieger states that urban design is less a technical discipline than a mind-set based on a commitment to cities.
Other contemporary urbanists such as Edward Soja and Liz Ogbu focus on urbanism as a field for applying principles of community building and spatial justice.
See also
New urbanism
Ecological urbanism, which extends concepts of landscape urbanism
Feminist urbanism
Green urbanism
Landscape urbanism, an urbanism where cities are seen through the lens of landscape architecture and ecology
Latino urbanism
Principles of Intelligent Urbanism
Sustainable Urbanism
Unitary urbanism, a critique of urbanism as a technology of power by the situationists
Urban economics, the application of economic models and tools to analyse urban issues such as crime, housing and public transit
Urban geography
Urbanate, a living environment envisioned by the Technocracy movement
Urban vitality
World Urbanism Day
Endnotes
External links
International Forum on Urbanism
Urban planning
2010s fads and trends
2020s fads and trends | Urbanism | [
"Engineering"
] | 827 | [
"Urban planning",
"Architecture"
] |
935,361 | https://en.wikipedia.org/wiki/36%20Ophiuchi | 36 Ophiuchi (or Guniibuu for component A) is a triple star system 19.5 light-years from Earth. It is in the constellation Ophiuchus.
The primary and secondary stars (also known as HD 155886) are nearly identical orange main-sequence dwarfs of spectral type K2/K1. This binary is unusual because its eruptions do not seem to conform to the Waldmeier effect; that is, the strongest eruptions of HD 155886 are not the ones characterized by a fast eruption onset. The tertiary star is an orange main-sequence dwarf of spectral type K5.
Star C is separated from the A-B pair by 700 arcseconds, compared to a minimum of 4.6 arcseconds for A-B, so its effect on the movements of the A-B pair is small. A and B have active chromospheres.
At present the distance between the stars forming the AB-pair is 5.1 arcseconds and the position angle is 139 degrees, while star C is 731.6 arcseconds away from the A-component and situated at a position angle of 74 degrees.
Nomenclature
In the beliefs of the Kamilaroi and Euahlayi Aboriginal peoples in New South Wales, Australia, the star is called Guniibuu that represents the robin red-breast bird (Petroica boodang). In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN approved the name Guniibuu for the star A on 10 August 2018 and it is now so included in the List of IAU-approved Star Names.
Hunt for substellar objects
The McDonald Observatory team has set limits to the presence of one or more planets around 36 Ophiuchi A with masses between 0.13 and 5.4 Jupiter masses and average separations spanning between 0.05 and 5.2 astronomical units (AU), although beyond 1.5 AU orbits are inherently unstable around either 36 Ophiuchi A or 36 Ophiuchi B.
Star C (HD 156026) is among five nearby K-type stars identified as being in a 'sweet spot' between Sun-analog stars and M stars for the likelihood of evolved life, per analysis by Giada Arney of NASA's Goddard Space Flight Center.
Notes
References
Further reading
External links
https://arxiv.org/abs/astro-ph/0604171
Ophiuchus
Triple star systems
Ophiuchi, 36
K-type main-sequence stars
Ophiuchi, 36
6401 2
155885 5886 6026
084405 78
CD-26 12026
Ophiuchi, A
RS Canum Venaticorum variables
0663 4
Ophiuchi, V2215
Guniibuu | 36 Ophiuchi | [
"Astronomy"
] | 597 | [
"Ophiuchus",
"Constellations"
] |
935,830 | https://en.wikipedia.org/wiki/Kelvin%20wave | A Kelvin wave is a wave in the ocean, a large lake or the atmosphere that balances the Earth's Coriolis force against a topographic boundary such as a coastline, or a waveguide such as the equator. A feature of a Kelvin wave is that it is non-dispersive, i.e., the phase speed of the wave crests is equal to the group speed of the wave energy for all frequencies. This means that it retains its shape as it moves in the alongshore direction over time.
In fluid dynamics, a Kelvin wave is also a long-scale perturbation mode of a vortex in superfluid dynamics; in terms of the meteorological or oceanographical derivation, one may assume that the meridional velocity component vanishes (i.e. there is no flow in the north–south direction, thus making the momentum and continuity equations much simpler). This wave is named after its discoverer, Lord Kelvin (1879).
Coastal Kelvin wave
In a stratified ocean of mean depth H, whose height is perturbed by some amount η (a function of position and time), free waves propagate along coastal boundaries (and hence become trapped in the vicinity of the coast itself) in the form of Kelvin waves. These waves are called coastal Kelvin waves. Using the assumption that the cross-shore velocity v is zero at the coast, v = 0, one may solve a frequency relation for the phase speed of coastal Kelvin waves, which are among the class of waves called boundary waves, edge waves, trapped waves, or surface waves (similar to the Lamb waves). Assuming that the depth H is constant, the (linearised) primitive equations then become the following:
the continuity equation (accounting for the effects of horizontal convergence and divergence): ∂η/∂t + H (∂u/∂x + ∂v/∂y) = 0
the u-momentum equation: ∂u/∂t − fv = −g ∂η/∂x
the v-momentum equation: ∂v/∂t + fu = −g ∂η/∂y
in which f is the Coriolis coefficient, which depends on the latitude φ: f = 2Ω sin(φ)
where Ω ≈ 2π / (86164 s) ≈ 7.29 × 10⁻⁵ rad/s is the angular speed of rotation of the earth.
If one assumes that u, the flow perpendicular to the coast, is zero, then the primitive equations become the following:
the continuity equation: ∂η/∂t + H ∂v/∂y = 0
the u-momentum equation (geostrophic balance across the shore): g ∂η/∂x = fv
the v-momentum equation: ∂v/∂t = −g ∂η/∂y
The first and third of these equations are solved at constant x by waves moving in either the positive or negative y direction at a speed c = √(gH), the speed of so-called shallow-water gravity waves without the effect of Earth's rotation. However, only one of the two solutions is valid, having an amplitude that decreases with distance from the coast, whereas in the other solution the amplitude increases with distance from the coast. For an observer traveling with the wave, the coastal boundary (maximum amplitude) is always to the right in the northern hemisphere and to the left in the southern hemisphere (i.e. these waves move equatorward – negative phase speed – at the western side of an ocean and poleward – positive phase speed – at the eastern boundary; the waves move cyclonically around an ocean basin).
If we assume constant f, the general solution is an arbitrary wave form propagating at speed c in the alongshore direction, multiplied by an exponential factor e^(±fx/c) whose sign is chosen so that the amplitude decreases with distance from the coast.
Equatorial Kelvin wave
Kelvin waves can also exist going eastward parallel to the equator. Although waves can cross the equator, the Kelvin wave solution does not. The primitive equations are identical to those used to develop the coastal Kelvin wave solution (U-momentum, V-momentum, and continuity equations). Because these waves are equatorial, the Coriolis parameter vanishes at 0 degrees; therefore, it is necessary to use the equatorial beta plane approximation: f = βy,
where β is the variation of the Coriolis parameter with latitude. The wave speed is identical to that of coastal Kelvin waves (for the same depth H), indicating that the equatorial Kelvin waves propagate toward the east without dispersion (as if the earth were a non-rotating planet). The dependence of the amplitude on x (here the north–south direction) though is now Gaussian: e^(−βx²/(2c)).
For a depth of four kilometres, the wave speed, c = √(gH), is about 200 metres per second, but for the first baroclinic mode in the ocean, a typical phase speed would be about 2.8 m/s, causing an equatorial Kelvin wave to take about 2 months to cross the Pacific Ocean between New Guinea and South America; for higher ocean and atmospheric modes, the phase speeds are comparable to fluid flow speeds.
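These figures can be checked with a few lines of arithmetic; the Pacific width used below (roughly 17,000 km between New Guinea and South America) is an assumption made for the example:

```python
# Back-of-the-envelope check of the wave speeds and crossing time quoted above.

import math

g, H = 9.81, 4000.0                  # gravity (m/s^2), ocean depth (m)
c_barotropic = math.sqrt(g * H)      # shallow-water gravity wave speed
print(f"barotropic speed: {c_barotropic:.0f} m/s")      # ~198 m/s, "about 200"

c_baroclinic = 2.8                   # typical first-mode phase speed (m/s)
width = 17_000e3                     # assumed Pacific width (m)
days = width / c_baroclinic / 86400
print(f"first-mode Pacific crossing: {days:.0f} days")  # ~70 days, ~2 months
```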
When the wave at the Equator is moving to the east, a height gradient going downwards toward the north is countered by a force toward the Equator because the water will be moving eastward and the Coriolis force acts to the right of the direction of motion in the Northern Hemisphere, and vice versa in the Southern Hemisphere. Note that for a wave moving toward the west, the Coriolis force would not restore a northward or southward deviation back toward the Equator; thus, equatorial Kelvin waves are only possible for eastward motion (as noted above). Both atmospheric and oceanic equatorial Kelvin waves play an important role in the dynamics of El Nino-Southern Oscillation, by transmitting changes in conditions in the Western Pacific to the Eastern Pacific.
There have been studies that connect equatorial Kelvin waves to coastal Kelvin waves. Moore (1968) found that as an equatorial Kelvin wave strikes an "eastern boundary", part of the energy is reflected in the form of planetary and gravity waves; and the remainder of the energy is carried poleward along the eastern boundary as coastal Kelvin waves. This process indicates that some energy may be lost from the equatorial region and transported to the poleward region.
Equatorial Kelvin waves are often associated with anomalies in surface wind stress. For example, positive (eastward) anomalies in wind stress in the central Pacific excite positive anomalies in 20 °C isotherm depth which propagate to the east as equatorial Kelvin waves.
In 2017, using data from ERA5, equatorial Kelvin waves were shown to be a case of classical topologically protected excitations, similar to those found in a topological insulator.
See also
Rossby wave
Rossby-gravity waves
Equatorial Rossby wave
Kelvin–Helmholtz instability
Edge wave
Tropical wave
References
External links
Overview of Kelvin waves from the American Meteorological Society.
US Navy page on Kelvin waves.
Slideshow at utexus.edu about Kelvin waves.
Kelvin Wave Renews El Niño - NASA, Earth Observatory, image of the day, 2010 March 21
Gravity waves
Tropical meteorology
Atmospheric dynamics
Fluid mechanics
Wave | Kelvin wave | [
"Chemistry",
"Engineering"
] | 1,321 | [
"Atmospheric dynamics",
"Gravity waves",
"Civil engineering",
"Fluid mechanics",
"Fluid dynamics"
] |
936,059 | https://en.wikipedia.org/wiki/Representative%20sequences | In social sciences and other domains, representative sequences are whole sequences that best characterize or summarize a set of sequences. In bioinformatics, representative sequences also designate substrings of a sequence that characterize the sequence.
Social sciences
In Sequence analysis in social sciences, representative sequences are used to summarize sets of sequences describing for example the family life course or professional career of several thousands individuals.
The identification of representative sequences proceeds from the pairwise dissimilarities between sequences. One typical solution is the medoid sequence, i.e., the observed sequence that minimizes the sum of its distances to all other sequences in the set. Another solution is the densest observed sequence, i.e., the sequence with the greatest number of other sequences in its neighborhood. When the diversity of the sequences is large, a single representative is often insufficient to efficiently characterize the set. In such cases, a smallest possible set of representative sequences covering a given percentage of all sequences (i.e., including each sequence in the neighborhood of at least one representative) is sought.
A solution also considered is to select the medoids of relative frequency groups. More specifically, the method consists in sorting the sequences (for example, according to the first principal coordinate of the pairwise dissimilarity matrix), splitting the sorted list into equal sized groups (called relative frequency groups), and selecting the medoids of the equal sized groups.
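A minimal sketch of these selection rules, assuming a precomputed pairwise dissimilarity matrix D; the toy data and the use of distance row-sums as the sorting criterion (in place of the first principal coordinate) are simplifying assumptions:

```python
# Medoid, densest-sequence, and relative-frequency-group selections, given a
# precomputed pairwise dissimilarity matrix D.

import numpy as np

def medoid(D: np.ndarray) -> int:
    """Index of the sequence minimizing the sum of distances to all others."""
    return int(np.argmin(D.sum(axis=1)))

def densest(D: np.ndarray, radius: float) -> int:
    """Index of the sequence with the most neighbors within `radius`."""
    return int(np.argmax((D <= radius).sum(axis=1)))

def frequency_group_medoids(D: np.ndarray, order: np.ndarray, k: int) -> list:
    """Medoids of k equal-sized groups of the sorted sequence list."""
    reps = []
    for group in np.array_split(order, k):
        sub = D[np.ix_(group, group)]          # within-group dissimilarities
        reps.append(int(group[medoid(sub)]))
    return reps

rng = np.random.default_rng(0)
X = rng.random((100, 1))
D = np.abs(X - X.T)                      # toy dissimilarity matrix
order = np.argsort(D.sum(axis=1))        # stand-in for a principal coordinate
print(medoid(D), densest(D, 0.1), frequency_group_medoids(D, order, 5))
```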
The methods for identifying representative sequences described above have been implemented in the R package TraMineR.
Bioinformatics
Representative sequences are short regions within protein sequences that can be used to approximate the evolutionary relationships of those proteins, or the organisms from which they come. Representative sequences are contiguous subsequences (typically 300 residues) from ubiquitous, conserved proteins, such that each orthologous family of representative sequences taken alone gives a distance matrix in close agreement with the consensus matrix.
Use
Protein sequences can provide data about the biological function and evolution of proteins and protein domains. Grouping and interrelating protein sequences can therefore provide information about both human biological processes, and the evolutionary development of biological processes on earth; such sequence clusters allow for the effective coverage of sequence space. Sequence clusters can reduce a large database of sequences to a smaller set of sequence representatives, each of which should represent its cluster at the sequence level. Sequence representatives allow the effective coverage of the original database with fewer sequences. The database of sequence representatives is called non-redundant, as similar (or redundant) sequences have been removed at a certain similarity threshold.
See also
Sequence analysis in social sciences
Sequence analysis in bioinformatics
References
Protein structure
Bioinformatics
Social sciences | Representative sequences | [
"Chemistry",
"Engineering",
"Biology"
] | 546 | [
"Bioinformatics",
"Biological engineering",
"Protein structure",
"Structural biology"
] |
938,177 | https://en.wikipedia.org/wiki/Medicinal%20chemistry | Medicinal or pharmaceutical chemistry is a scientific discipline at the intersection of chemistry and pharmacy involved with designing and developing pharmaceutical drugs. Medicinal chemistry involves the identification, synthesis and development of new chemical entities suitable for therapeutic use. It also includes the study of existing drugs, their biological properties, and their quantitative structure-activity relationships (QSAR).
Medicinal chemistry is a highly interdisciplinary science combining organic chemistry with biochemistry, computational chemistry, pharmacology, molecular biology, statistics, and physical chemistry.
Compounds used as medicines are most often organic compounds, which are often divided into the broad classes of small organic molecules (e.g., atorvastatin, fluticasone, clopidogrel) and "biologics" (infliximab, erythropoietin, insulin glargine), the latter of which are most often medicinal preparations of proteins (natural and recombinant antibodies, hormones etc.). Medicines can also be inorganic and organometallic compounds, commonly referred to as metallodrugs (e.g., platinum, lithium and gallium-based agents such as cisplatin, lithium carbonate and gallium nitrate, respectively). The discipline of medicinal inorganic chemistry investigates the role of metals in medicine (metallotherapeutics), which involves the study and treatment of diseases and health conditions associated with inorganic metals in biological systems. There are several metallotherapeutics approved for the treatment of cancer (e.g., containing Pt, Ru, Gd, Ti, Ge, V, and Ga), antimicrobials (e.g., Ag, Cu, and Ru), diabetes (e.g., V and Cr), broad-spectrum antibiotics (e.g., Bi), and bipolar disorder (e.g., Li). Other areas of study include: metallomics, genomics, proteomics, diagnostic agents (e.g., MRI: Gd, Mn; X-ray: Ba, I) and radiopharmaceuticals (e.g., 99mTc for diagnostics, 186Re for therapeutics).
In particular, medicinal chemistry in its most common practice—focusing on small organic molecules—encompasses synthetic organic chemistry and aspects of natural products and computational chemistry in close combination with chemical biology, enzymology and structural biology, together aiming at the discovery and development of new therapeutic agents. Practically speaking, it involves chemical aspects of identification, and then systematic, thorough synthetic alteration of new chemical entities to make them suitable for therapeutic use. It includes synthetic and computational aspects of the study of existing drugs and agents in development in relation to their bioactivities (biological activities and properties), i.e., understanding their structure–activity relationships (SAR). Pharmaceutical chemistry is focused on quality aspects of medicines and aims to assure fitness for purpose of medicinal products.
At the biological interface, medicinal chemistry combines to form a set of highly interdisciplinary sciences, setting its organic, physical, and computational emphases alongside biological areas such as biochemistry, molecular biology, pharmacognosy and pharmacology, toxicology and veterinary and human medicine; these, with project management, statistics, and pharmaceutical business practices, systematically oversee altering identified chemical agents such that after pharmaceutical formulation, they are safe and efficacious, and therefore suitable for use in treatment of disease.
In the path of drug discovery
Discovery
Discovery is the identification of novel active chemical compounds, often called "hits", which are typically found by assay of compounds for a desired biological activity. Initial hits can come from repurposing existing agents toward new pathologic processes, and from observations of biologic effects of new or existing natural products from bacteria, fungi, plants, etc. In addition, hits also routinely originate from structural observations of small molecule "fragments" bound to therapeutic targets (enzymes, receptors, etc.), where the fragments serve as starting points to develop more chemically complex forms by synthesis. Finally, hits also regularly originate from en-masse testing of chemical compounds against biological targets using biochemical or chemoproteomics assays, where the compounds may be from novel synthetic chemical libraries known to have particular properties (kinase inhibitory activity, diversity or drug-likeness, etc.), or from historic chemical compound collections or libraries created through combinatorial chemistry. While a number of approaches toward the identification and development of hits exist, the most successful techniques are based on chemical and biological intuition developed in team environments through years of rigorous practice aimed solely at discovering new therapeutic agents.
Hit to lead and lead optimization
Further chemistry and analysis is necessary, first to identify the "triage" compounds that do not provide series displaying suitable SAR and chemical characteristics associated with long-term potential for development, then to improve the remaining hit series concerning the desired primary activity, as well as secondary activities and physiochemical properties such that the agent will be useful when administered in real patients. In this regard, chemical modifications can improve the recognition and binding geometries (pharmacophores) of the candidate compounds, and so their affinities for their targets, as well as improving the physicochemical properties of the molecule that underlie necessary pharmacokinetic/pharmacodynamic (PK/PD), and toxicologic profiles (stability toward metabolic degradation, lack of geno-, hepatic, and cardiac toxicities, etc.) such that the chemical compound or biologic is suitable for introduction into animal and human studies.
Process chemistry and development
The final synthetic chemistry stages involve the production of a lead compound in suitable quantity and quality to allow large scale animal testing, and then human clinical trials. This involves the optimization of the synthetic route for bulk industrial production, and discovery of the most suitable drug formulation. The former of these is still the bailiwick of medicinal chemistry, the latter brings in the specialization of formulation science (with its components of physical and polymer chemistry and materials science). The synthetic chemistry specialization in medicinal chemistry aimed at adaptation and optimization of the synthetic route for industrial scale syntheses of hundreds of kilograms or more is termed process synthesis, and involves thorough knowledge of acceptable synthetic practice in the context of large scale reactions (reaction thermodynamics, economics, safety, etc.). Critical at this stage is the transition to more stringent GMP requirements for material sourcing, handling, and chemistry.
Synthetic analysis
The synthetic methodology employed in medicinal chemistry is subject to constraints that do not apply to traditional organic synthesis. Owing to the prospect of scaling the preparation, safety is of paramount importance. The potential toxicity of reagents affects methodology.
Structural analysis
The structures of pharmaceuticals are assessed in many ways, in part as a means to predict efficacy, stability, and accessibility. Lipinski's rule of five focuses on the number of hydrogen bond donors and acceptors, number of rotatable bonds, surface area, and lipophilicity. Other parameters by which medicinal chemists assess or classify their compounds are: synthetic complexity, chirality, flatness, and aromatic ring count.
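As a hedged illustration, a rule-of-five style filter can be scripted with the open-source RDKit toolkit (assumed available; it is not mentioned in the text above), using the classic thresholds of at most 5 hydrogen-bond donors, 10 acceptors, a molecular weight of 500, and a logP of 5:

```python
# Sketch of a Lipinski-style screen using RDKit descriptors.

from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_rule_of_five(smiles: str) -> bool:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"could not parse SMILES: {smiles}")
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

print(passes_rule_of_five("CC(=O)OC1=CC=CC=C1C(=O)O"))  # aspirin -> True
```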
Structural analysis of lead compounds is often performed through computational methods prior to actual synthesis of the ligand(s). This is done for a number of reasons, including but not limited to: time and financial considerations (expenditure, etc.). Once the ligand of interest has been synthesized in the laboratory, analysis is then performed by traditional methods (TLC, NMR, GC/MS, and others).
Training
Medicinal chemistry is by nature an interdisciplinary science, and practitioners have a strong background in organic chemistry, which must eventually be coupled with a broad understanding of biological concepts related to cellular drug targets. Scientists working in medicinal chemistry are principally industrial scientists (but see following), working as part of an interdisciplinary team that uses their chemistry abilities, especially their synthetic abilities, to use chemical principles to design effective therapeutic agents. The length of training is intense, with practitioners often required to attain a 4-year bachelor's degree followed by a 4–6 year Ph.D. in organic chemistry. Most training regimens also include a postdoctoral fellowship period of 2 or more years after receiving a Ph.D. in chemistry, making the total length of training range from 10 to 12 years of college education. However, employment opportunities at the Master's level also exist in the pharmaceutical industry, and at that and the Ph.D. level there are further opportunities for employment in academia and government.
Graduate level programs in medicinal chemistry can be found in traditional medicinal chemistry or pharmaceutical sciences departments, both of which are traditionally associated with schools of pharmacy, and in some chemistry departments. However, the majority of working medicinal chemists have graduate degrees (MS, but especially Ph.D.) in organic chemistry, rather than medicinal chemistry, and the preponderance of positions are in research, where the net is necessarily cast widest, and most broad synthetic activity occurs.
In research of small molecule therapeutics, an emphasis on training that provides for breadth of synthetic experience and "pace" of bench operations is clearly present (e.g., for individuals with pure synthetic organic and natural products synthesis in Ph.D. and post-doctoral positions, ibid.). In the medicinal chemistry specialty areas associated with the design and synthesis of chemical libraries or the execution of process chemistry aimed at viable commercial syntheses (areas generally with fewer opportunities), training paths are often much more varied (e.g., including focused training in physical organic chemistry, library-related syntheses, etc.).
As such, most entry-level workers in medicinal chemistry, especially in the U.S., do not have formal training in medicinal chemistry but receive the necessary medicinal chemistry and pharmacologic background after employment—at entry into their work in a pharmaceutical company, where the company provides its particular understanding or model of "medichem" training through active involvement in practical synthesis on therapeutic projects. (The same is somewhat true of computational medicinal chemistry specialties, but not to the same degree as in synthetic areas.)
See also
Bioisostere
Biological machines
Chemoproteomics
Drug design
Pharmacognosy
Pharmacokinetics
Pharmacology
Pharmacophore
Xenobiotic metabolism
References
Cheminformatics | Medicinal chemistry | [
"Chemistry",
"Biology"
] | 2,109 | [
"Computational chemistry",
"Cheminformatics",
"Medicinal chemistry",
"Biochemistry",
"nan"
] |
938,244 | https://en.wikipedia.org/wiki/Downwash | In aeronautics, downwash is the change in direction of air deflected by the aerodynamic action of an airfoil, wing, or helicopter rotor blade in motion, as part of the process of producing lift. In helicopter aerodynamics discussions, it may be referred to as induced flow.
Lift on an airfoil is an example of the application of Newton's third law of motion – the force required to deflect the air in the downwards direction is equal in magnitude and opposite in direction to the lift force on the airfoil. Lift on an airfoil is also an example of the Kutta-Joukowski theorem. The Kutta condition explains the existence of downwash at the trailing edge of the wing.
See also
Brownout (aeronautics)
Jet blast
Slipstream
Wake turbulence
Wingtip vortices
References
Aerodynamics
Vortices | Downwash | [
"Chemistry",
"Mathematics",
"Engineering"
] | 170 | [
"Vortices",
"Aerodynamics",
"Aerospace engineering",
"Fluid dynamics",
"Dynamical systems"
] |
938,631 | https://en.wikipedia.org/wiki/Antineutron | The antineutron is the antiparticle of the neutron with symbol . It differs from the neutron only in that some of its properties have equal magnitude but opposite sign. It has the same mass as the neutron, and no net electric charge, but has opposite baryon number (+1 for neutron, −1 for the antineutron). This is because the antineutron is composed of antiquarks, while neutrons are composed of quarks. The antineutron consists of one up antiquark and two down antiquarks.
Background
The antineutron was discovered in proton–antiproton collisions at the Bevatron (Lawrence Berkeley National Laboratory) by the team of Bruce Cork, Glen Lambertson, Oreste Piccioni, and William Wenzel in 1956, one year after the antiproton was discovered.
Since the antineutron is electrically neutral, it cannot easily be observed directly. Instead, the products of its annihilation with ordinary matter are observed. In theory, a free antineutron should decay into an antiproton, a positron, and a neutrino in a process analogous to the beta decay of free neutrons. There are theoretical proposals of neutron–antineutron oscillations, a process that implies the violation of the baryon number conservation.
Magnetic moment
The magnetic moment of the antineutron is the opposite of that of the neutron. It is +1.91 μN for the antineutron but −1.91 μN for the neutron (relative to the direction of the spin). Here μN is the nuclear magneton.
See also
Antimatter
Neutron magnetic moment
List of particles
References
External links
LBL Particle Data Group: summary tables
suppression of neutron-antineutron oscillation
Elementary particles: includes information about antineutron discovery (archived link)
"Is Antineutron the Same as Neutron?" explains how the antineutron differs from the regular neutron despite having the same, that is zero, charge.
Antimatter
Baryons
Neutron
Nucleons | Antineutron | [
"Physics"
] | 424 | [
"Antimatter",
"Nucleons",
"Matter",
"Nuclear physics"
] |
938,659 | https://en.wikipedia.org/wiki/Vertical%20deflection | The vertical deflection (VD) or deflection of the vertical (DoV), also known as deflection of the plumb line and astro-geodetic deflection, is a measure of how far the gravity direction at a given point of interest is rotated by local mass anomalies such as nearby mountains. They are widely used in geodesy, for surveying networks and for geophysical purposes.
The vertical deflections are the angular components between the tangent to the true zenith–nadir curve (the plumb line) and the normal vector to the surface of the reference ellipsoid (chosen to approximate the Earth's sea-level surface). VDs are caused by mountains and by underground geological irregularities and can amount to angles of 10″ in flat areas or 20–50″ in mountainous terrain.
The deflection of the vertical has a north–south component ξ (xi) and an east–west component η (eta). The value of ξ is the difference between the astronomic latitude and the geodetic latitude (taking north latitudes to be positive and south latitudes to be negative); the latter is usually calculated from geodetic network coordinates. The value of η is the product of the cosine of the latitude and the difference between the astronomic longitude and the geodetic longitude (taking east longitudes to be positive and west longitudes to be negative). When a new mapping datum replaces the old, with new geodetic latitudes and longitudes on a new ellipsoid, the calculated vertical deflections will also change.
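A direct transcription of these component definitions into code, with made-up station coordinates used purely for illustration:

```python
# Compute (xi, eta) from astronomic and geodetic coordinates given in degrees;
# results are converted to arcseconds. The station values are hypothetical.

import math

def deflection_components(astro_lat, astro_lon, geodetic_lat, geodetic_lon):
    """Return (xi, eta) in arcseconds, north/east positive, per the sign
    convention described above."""
    xi = astro_lat - geodetic_lat
    eta = (astro_lon - geodetic_lon) * math.cos(math.radians(geodetic_lat))
    return xi * 3600, eta * 3600

xi, eta = deflection_components(47.001389, 15.000833, 47.000000, 15.000000)
print(f"xi = {xi:.1f} arcsec, eta = {eta:.1f} arcsec")   # xi = 5.0, eta = 2.0
```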
Determination
The deflections reflect the undulation of the geoid and gravity anomalies, for they depend on the gravity field and its inhomogeneities.
Vertical deflections are usually determined astronomically. The true zenith is observed astronomically with respect to the stars, and the ellipsoidal zenith (theoretical vertical) by geodetic network computation, which always takes place on a reference ellipsoid. Additionally, the very local variations of the vertical deflection can be computed from gravimetric survey data and by means of digital terrain models (DTM), using a theory originally developed by Vening-Meinesz.
VDs are used in astrogeodetic levelling: as a vertical deflection describes the difference between the geoidal vertical direction and the ellipsoidal normal direction, it represents the horizontal spatial gradient of the geoid undulations (i.e., the inclination between geoid and reference ellipsoid).
In practice, the deflections are observed at special points with spacings of 20 or 50 kilometers. The densification is done by a combination of DTM models and areal gravimetry. Precise vertical deflection observations have accuracies of ±0.2″ (on high mountains ±0.5″); calculated values are accurate to about 1–2″.
The maximal vertical deflection of Central Europe seems to be a point near the Großglockner (3,798 m), the highest peak of the Austrian Alps. The approx. values are ξ = +50″ and η = −30″. In the Himalaya region, very asymmetric peaks may have vertical deflections up to 100″ (0.03°). In the rather flat area between Vienna and Hungary the values are less than 15", but scatter by ±10″ for irregular rock densities in the subsurface.
More recently, a combination of digital camera and tiltmeter have also been used, see zenith camera.
Application
Vertical deflections are principally used in four matters:
For precise calculation of survey networks. The geodetic theodolites and levelling instruments are oriented with respect to the true vertical, but its deflection exceeds the geodetic measuring accuracy by a factor of 5 to 50. Therefore, the data would have to be corrected exactly with respect to the global ellipsoid. Without these reductions, the surveys may be distorted by some centimeters or even decimeters per km.
For the geoid determination (mean sea level) and for exact transformation of elevations. The global geoidal undulations amount to 50–100 m, and their regional values to 10–50 m. They are adequate to the integrals of VD components ξ,η and therefore can be calculated with cm accuracy over distances of many kilometers.
For GPS surveys. The satellites measurements refer to a pure geometrical system (usually the WGS84 ellipsoid), whereas the terrestrial heights refer to the geoid. We need accurate geoid data to combine the different types of measurements.
For geophysics. Because VD data are affected by the physical structure of the Earth's crust and mantle, geodesists are engaged in models to improve our knowledge of the Earth's interior. Additionally and similar to applied geophysics, the VD data can support the future exploration of raw materials, oil, gas or ores.
Historical implications
Vertical deflections were used to measure Earth's density in the Schiehallion experiment.
Vertical deflection is the reason why modern prime meridian passes more than 100 m to the east of the historical astronomic prime meridian in Greenwich.
The meridian arc measurement made by Nicolas-Louis de Lacaille north of Cape Town in 1752 (de Lacaille's arc measurement) was affected by vertical deflection. The resulting discrepancy with Northern Hemisphere measurements was not explained until a visit to the area by George Everest in 1820; Maclear's arc measurement resurvey ultimately confirmed Everest's conjecture.
Errors in the meridian arc determination of Delambre and Méchain, which affected the original definition of the metre, were long known to be mainly caused by an uncertain determination of Barcelona's latitude, later explained by vertical deflection. When these errors were acknowledged in 1866, it became urgent to proceed to a new measurement of the French arc between Dunkirk and Perpignan. The operations concerning the revision of the French arc, linked to the Spanish triangulation, were completed only in 1896. Meanwhile, the French geodesists had accomplished in 1879 the junction of Algeria to Spain, with the help of the geodesists of the Madrid Institute headed by the late Carlos Ibáñez e Ibáñez de Ibero (1825–1891).
Until the Hayford ellipsoid was calculated in 1910, vertical deflections were treated as random errors. Plumb line deviations were identified by Jean Le Rond d'Alembert as an important source of error in geodetic surveys as early as 1756. Later, in 1828, Carl Friedrich Gauss proposed the concept of the geoid.
See also
Deviation survey
Gravity anomaly
Isostasy
Vertical direction
Zenith
Notes
References
External links
The NGS website gives vertical deflection anywhere in the United States here and here.
Geodesy
Geophysics
Gravity | Vertical deflection | [
"Physics",
"Mathematics"
] | 1,431 | [
"Applied mathematics",
"Applied and interdisciplinary physics",
"Geodesy",
"Geophysics"
] |
938,663 | https://en.wikipedia.org/wiki/Multi-task%20learning | Multi-task learning (MTL) is a subfield of machine learning in which multiple learning tasks are solved at the same time, while exploiting commonalities and differences across tasks. This can result in improved learning efficiency and prediction accuracy for the task-specific models, when compared to training the models separately.
Inherently, multi-task learning is a multi-objective optimization problem having trade-offs between different tasks.
Early versions of MTL were called "hints".
In a widely cited 1997 paper, Rich Caruana gave the following characterization:Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better.
In the classification context, MTL aims to improve the performance of multiple classification tasks by learning them jointly. One example is a spam-filter, which can be treated as distinct but related classification tasks across different users. To make this more concrete, consider that different people have different distributions of features which distinguish spam emails from legitimate ones, for example an English speaker may find that all emails in Russian are spam, not so for Russian speakers. Yet there is a definite commonality in this classification task across users, for example one common feature might be text related to money transfer. Solving each user's spam classification problem jointly via MTL can let the solutions inform each other and improve performance. Further examples of settings for MTL include multiclass classification and multi-label classification.
Multi-task learning works because regularization induced by requiring an algorithm to perform well on a related task can be superior to regularization that prevents overfitting by penalizing all complexity uniformly. One situation where MTL may be particularly helpful is if the tasks share significant commonalities and are generally slightly under sampled. However, as discussed below, MTL has also been shown to be beneficial for learning unrelated tasks.
Methods
The key challenge in multi-task learning is how to combine learning signals from multiple tasks into a single model. This may strongly depend on how well different tasks agree with each other, or contradict each other. There are several ways to address this challenge:
Task grouping and overlap
Within the MTL paradigm, information can be shared across some or all of the tasks. Depending on the structure of task relatedness, one may want to share information selectively across the tasks. For example, tasks may be grouped or exist in a hierarchy, or be related according to some general metric. Suppose, as developed more formally below, that the parameter vector modeling each task is a linear combination of some underlying basis. Similarity in terms of this basis can indicate the relatedness of the tasks. For example, with sparsity, overlap of nonzero coefficients across tasks indicates commonality. A task grouping then corresponds to those tasks lying in a subspace generated by some subset of basis elements, where tasks in different groups may be disjoint or overlap arbitrarily in terms of their bases. Task relatedness can be imposed a priori or learned from the data. Hierarchical task relatedness can also be exploited implicitly without assuming a priori knowledge or learning relations explicitly. For example, the explicit learning of sample relevance across tasks can be done to guarantee the effectiveness of joint learning across multiple domains.
Exploiting unrelated tasks
One can attempt learning a group of principal tasks using a group of auxiliary tasks, unrelated to the principal ones. In many applications, joint learning of unrelated tasks which use the same input data can be beneficial. The reason is that prior knowledge about task relatedness can lead to sparser and more informative representations for each task grouping, essentially by screening out idiosyncrasies of the data distribution. Novel methods which build on a prior multitask methodology by favoring a shared low-dimensional representation within each task grouping have been proposed. The programmer can impose a penalty on tasks from different groups which encourages the two representations to be orthogonal. Experiments on synthetic and real data have indicated that incorporating unrelated tasks can result in significant improvements over standard multi-task learning methods.
Transfer of knowledge
Related to multi-task learning is the concept of knowledge transfer. Whereas traditional multi-task learning implies that a shared representation is developed concurrently across tasks, transfer of knowledge implies a sequentially shared representation. Large scale machine learning projects such as the deep convolutional neural network GoogLeNet, an image-based object classifier, can develop robust representations which may be useful to further algorithms learning related tasks. For example, the pre-trained model can be used as a feature extractor to perform pre-processing for another learning algorithm. Or the pre-trained model can be used to initialize a model with similar architecture which is then fine-tuned to learn a different classification task.
Multiple non-stationary tasks
Traditionally Multi-task learning and transfer of knowledge are applied to stationary learning settings. Their extension to non-stationary environments is termed Group online adaptive learning (GOAL). Sharing information could be particularly useful if learners operate in continuously changing environments, because a learner could benefit from previous experience of another learner to quickly adapt to their new environment. Such group-adaptive learning has numerous applications, from predicting financial time-series, through content recommendation systems, to visual understanding for adaptive autonomous agents.
Multi-task optimization
In some cases, the simultaneous training of seemingly related tasks may hinder performance compared to single-task models. Commonly, MTL models employ task-specific modules on top of a joint feature representation obtained using a shared module. Since this joint representation must capture useful features across all tasks, MTL may hinder individual task performance if the different tasks seek conflicting representation, i.e., the gradients of different tasks point to opposing directions or differ significantly in magnitude. This phenomenon is commonly referred to as negative transfer. To mitigate this issue, various MTL optimization methods have been proposed. Commonly, the per-task gradients are combined into a joint update direction through various aggregation algorithms or heuristics.
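A minimal sketch of one such aggregation heuristic, in the spirit of the published "gradient surgery" (PCGrad) method; the method is named here only for attribution, since the text above does not prescribe a specific algorithm. When two task gradients conflict, each is projected onto the normal plane of the other before averaging:

```python
# Two-task gradient aggregation with conflict removal.

import numpy as np

def combine_task_gradients(g1: np.ndarray, g2: np.ndarray) -> np.ndarray:
    """Average two task gradients, removing conflicting components first."""
    if g1 @ g2 < 0:                               # negative dot product: conflict
        g1p = g1 - (g1 @ g2) / (g2 @ g2) * g2     # project g1 off g2
        g2p = g2 - (g2 @ g1) / (g1 @ g1) * g1     # project g2 off g1
        return 0.5 * (g1p + g2p)
    return 0.5 * (g1 + g2)                        # no conflict: plain average

print(combine_task_gradients(np.array([1.0, 0.0]), np.array([-0.5, 1.0])))
# [0.4 0.7] -- a joint update that no longer opposes either task
```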
Mathematics
Reproducing kernel Hilbert space of vector-valued functions (RKHSvv)
The MTL problem can be cast within the context of RKHSvv (a complete inner product space of vector-valued functions equipped with a reproducing kernel). In particular, recent focus has been on cases where task structure can be identified via a separable kernel, described below. The presentation here derives from Ciliberto et al., 2015.
RKHSvv concepts
Suppose the training data set is S_t = {(x_i^t, y_i^t)}, i = 1..n_t, with x_i^t ∈ X and y_i^t ∈ Y, where t ∈ {1, ..., T} indexes the task. Let n = Σ_t n_t. In this setting there is a consistent input and output space and the same loss function L: R × R → R₊ for each task. This results in the regularized machine learning problem:

min_{f ∈ H} Σ_{t=1..T} (1/n_t) Σ_{i=1..n_t} L(y_i^t, f_t(x_i^t)) + λ ‖f‖²_H     (1)

where H is a vector-valued reproducing kernel Hilbert space with functions f: X → Y^T having components f_t: X → Y.
The reproducing kernel for the space H of functions f: X → R^T is a symmetric matrix-valued function Γ: X × X → R^(T×T), such that Γ(·, x)c ∈ H and the following reproducing property holds:

⟨f(x), c⟩_(R^T) = ⟨f, Γ(·, x)c⟩_H

The reproducing kernel gives rise to a representer theorem showing that any solution to equation (1) has the form:

f(x) = Σ_{t=1..T} Σ_{i=1..n_t} Γ(x, x_i^t) c_i^t
Separable kernels
The form of the kernel Γ induces both the representation of the feature space and structures the output across tasks. A natural simplification is to choose a separable kernel, which factors into separate kernels on the input space X and on the tasks {1, ..., T}. In this case the kernel relating scalar components f_t and f_s is given by γ((x_i, t), (x_j, s)) = k(x_i, x_j) A_(s,t). For vector valued functions we can write Γ(x_i, x_j) = k(x_i, x_j) A, where k is a scalar reproducing kernel, and A is a symmetric positive semi-definite T × T matrix. Henceforth denote by S₊^T the set of symmetric positive semi-definite T × T matrices.
This factorization property, separability, implies the input feature space representation does not vary by task. That is, there is no interaction between the input kernel and the task kernel. The structure on tasks is represented solely by A. Methods for non-separable kernels Γ are a current field of research.
For the separable case, the representation theorem is reduced to f(x) = Σ_i k(x, x_i) A c_i. The model output on the training data is then KCA, where K is the n × n empirical kernel matrix with entries K_(i,j) = k(x_i, x_j), and C is the n × T matrix whose rows are the vectors c_i.
With the separable kernel, equation (1) can be rewritten as

min_{C ∈ R^(n×T)} V(Y, KCA) + λ tr(KCAC^⊤)     (P)

where V is a (weighted) average of L applied entry-wise to Y and KCA. (The weight is zero if y_i^t is a missing observation.)
Note the second term in (P) can be derived as follows:

‖f‖²_H = ⟨Σ_i Γ(·, x_i) c_i, Σ_j Γ(·, x_j) c_j⟩_H
= Σ_{i,j} ⟨Γ(·, x_i) c_i, Γ(·, x_j) c_j⟩_H  (bilinearity)
= Σ_{i,j} ⟨c_i, Γ(x_i, x_j) c_j⟩_(R^T)  (reproducing property)
= Σ_{i,j} k(x_i, x_j) c_i^⊤ A c_j = tr(KCAC^⊤)
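For the squared loss V(Y, KCA) = ‖Y − KCA‖²_F with uniform weights, setting the gradient of (P) to zero gives the linear system KCA + λC = Y, which decouples over the eigenvectors of A. The following numpy sketch solves it; the RBF kernel, the task-similarity matrix A, and the toy data are assumptions for illustration, not part of the source presentation:

```python
# Separable-kernel multi-task ridge regression: solve K C A + lam*C = Y by
# diagonalizing A and solving one ridge system per eigen-direction.

import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_separable_mtl(K, A, Y, lam):
    """Solve K C A + lam C = Y for the coefficient matrix C (n x T)."""
    sig, U = np.linalg.eigh(A)           # A = U diag(sig) U^T, A in S_+^T
    Yt = Y @ U                           # rotate tasks into the eigenbasis
    n = K.shape[0]
    Ct = np.column_stack([np.linalg.solve(s * K + lam * np.eye(n), Yt[:, j])
                          for j, s in enumerate(sig)])
    return Ct @ U.T                      # rotate coefficients back

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Y = np.column_stack([X @ np.array([1.0, 0.0, 0.0]),
                     X @ np.array([0.9, 0.1, 0.0])])   # two related tasks
A = np.array([[1.0, 0.8], [0.8, 1.0]])                 # task-similarity matrix
K = rbf_kernel(X, X)
C = fit_separable_mtl(K, A, Y, lam=0.1)
print(np.linalg.norm(Y - K @ C @ A))     # training residual; outputs are K C A
```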
Known task structure
Task structure representations
There are three largely equivalent ways to represent task structure: through a regularizer, through an output metric, or through an output mapping.
Task structure examples
Via the regularizer formulation, one can represent a variety of task structures easily.
Letting A† be a weighted combination of the T × T identity matrix I_T and the T × T matrix of ones 11^⊤ is equivalent to letting a parameter control the variance Σ_t ‖f_t − f̄‖² of the tasks around their mean f̄ = (1/T) Σ_t f_t. For example, blood levels of some biomarker may be taken on patients at time points during the course of a day and interest may lie in regularizing the variance of the predictions across patients.
Letting A† encode group membership in the same way is equivalent to letting a parameter control the variance measured with respect to a group mean: Σ_r Σ_(t ∈ G_r) ‖f_t − (1/|G_r|) Σ_(s ∈ G_r) f_s‖². (Here |G_r| is the cardinality of group r, and membership is expressed by an indicator function.) For example, people in different political parties (groups) might be regularized together with respect to predicting the favorability rating of a politician. Note that this penalty reduces to the first when all tasks are in the same group.
Letting A† = L, where L = D − M is the Laplacian for the graph with adjacency matrix M giving pairwise similarities of tasks, is equivalent to giving a larger penalty to the distance separating tasks t and s when they are more similar (according to the weight M_(t,s)), i.e. it regularizes Σ_(t,s) ‖f_t − f_s‖² M_(t,s).
All of the above choices of A also induce an additional regularization term that penalizes complexity in f more broadly.
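As a quick numerical sanity check of the graph-Laplacian example (a toy illustration with random stand-in matrices, not data from any cited work), the identity Σ_(t,s) M_(t,s)‖f_t − f_s‖² = 2 tr(F^⊤LF) with L = D − M can be verified directly:

```python
# Verify the graph-Laplacian regularizer identity on random task vectors.

import numpy as np

rng = np.random.default_rng(1)
T, d = 4, 3
M = rng.random((T, T)); M = (M + M.T) / 2; np.fill_diagonal(M, 0)  # similarities
L = np.diag(M.sum(axis=1)) - M                                     # Laplacian
F = rng.normal(size=(T, d))                                        # row t = f_t

lhs = sum(M[t, s] * np.sum((F[t] - F[s]) ** 2)
          for t in range(T) for s in range(T))
rhs = 2 * np.trace(F.T @ L @ F)
print(np.isclose(lhs, rhs))   # True
```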
Learning tasks together with their structure
The learning problem can be generalized to admit learning the task matrix A as well:

min_{C ∈ R^(n×T), A ∈ S₊^T} V(Y, KCA) + λ tr(KCAC^⊤) + F(A)     (Q)

The choice of the penalty F: S₊^T → R₊ must be designed to learn matrices A of a given type. See "Special cases" below.
Optimization of (Q)
Restricting to the case of convex losses and coercive penalties, Ciliberto et al. have shown that although (Q) is not convex jointly in C and A, a related problem is jointly convex.
Specifically, on the convex set C = {(C, A) ∈ R^(n×T) × S₊^T | Range(C^⊤KC) ⊆ Range(A)}, the equivalent problem

min_{(C, A) ∈ C} V(Y, KC) + λ tr(A†C^⊤KC) + F(A)     (R)

is convex with the same minimum value, and if (C_R, A_R) is a minimizer for (R) then (C_R A_R†, A_R) is a minimizer for (Q).
(R) may be solved by a barrier method on a closed set by introducing the following perturbation:

min_{C ∈ R^(n×T), A ∈ S₊^T} V(Y, KC) + λ tr(A†(C^⊤KC + δ²I_T)) + F(A)     (S)

The perturbation via the barrier δ² tr(A†) forces the objective functions to be equal to +∞ on the boundary of R^(n×T) × S₊^T.
(S) can be solved with a block coordinate descent method, alternating in C and A. This results in a sequence of minimizers (C_m, A_m) of (S) with δ_m → 0 that converges to the solution of (R), and hence gives the solution to (Q).
Special cases
Spectral penalties - Dinuzzo et al. suggested setting F as the Frobenius norm √(tr(A^⊤A)). They optimized (Q) directly using block coordinate descent, not accounting for difficulties at the boundary of R^(n×T) × S₊^T.
Clustered tasks learning - Jacob et al. suggested to learn A in the setting where T tasks are organized in R disjoint clusters. In this case let E ∈ {0,1}^(T×R) be the matrix with E_(t,r) = 1 when task t belongs to cluster r (and 0 otherwise). Setting M = E(E^⊤E)^(−1)E^⊤ and U = (1/T)11^⊤, the task matrix A† can be parameterized as a function of M: A†(M) = ε_M U + ε_B(M − U) + ε_W(I − M), with terms that penalize the average, the between-clusters variance and the within-clusters variance respectively of the task predictions. The set of valid M is not convex, but there is a convex relaxation S_c = {M ∈ S₊^T : I − M ∈ S₊^T and tr(M) = R}. In this formulation, F(A) is the indicator of the set {A : A = A(M) for some M ∈ S_c}.
Generalizations
Non-convex penalties - Penalties can be constructed such that A is constrained to be a graph Laplacian, or that A has low rank factorization. However these penalties are not convex, and the analysis of the barrier method proposed by Ciliberto et al. does not go through in these cases.
Non-separable kernels - Separable kernels are limited, in particular they do not account for structures in the interaction space between the input and output domains jointly. Future work is needed to develop models for these kernels.
Software package
A Matlab package called Multi-Task Learning via StructurAl Regularization (MALSAR) implements the following multi-task learning algorithms: Mean-Regularized Multi-Task Learning, Multi-Task Learning with Joint Feature Selection, Robust Multi-Task Feature Learning, Trace-Norm Regularized Multi-Task Learning, Alternating Structural Optimization, Incoherent Low-Rank and Sparse Learning, Robust Low-Rank Multi-Task Learning, Clustered Multi-Task Learning, Multi-Task Learning with Graph Structures.
Literature
Multi-Target Prediction: A Unifying View on Problems and Methods Willem Waegeman, Krzysztof Dembczynski, Eyke Huellermeier https://arxiv.org/abs/1809.02352v1
See also
Artificial intelligence
Artificial neural network
Automated machine learning (AutoML)
Evolutionary computation
Foundation model
General game playing
Human-based genetic algorithm
Kernel methods for vector output
Multitask optimization
Robot learning
Transfer learning
James–Stein estimator
References
External links
The Biosignals Intelligence Group at UIUC
Washington University in St. Louis Department of Computer Science
Software
The Multi-Task Learning via Structural Regularization Package
Online Multi-Task Learning Toolkit (OMT) A general-purpose online multi-task learning toolkit based on conditional random field models and stochastic gradient descent training (C#, .NET)
Machine learning | Multi-task learning | [
"Engineering"
] | 2,723 | [
"Artificial intelligence engineering",
"Machine learning"
] |
27,758,020 | https://en.wikipedia.org/wiki/Hydraulic%20roughness | Hydraulic roughness is the measure of the amount of frictional resistance water experiences when passing over land and channel features. It quantifies the impact of surface irregularities and obstructions on the flow of water.
One roughness coefficient is Manning's n-value. Manning's n is used extensively around the world to predict the degree of roughness in channels. The coefficient is critical in hydraulic engineering, floodplain management, and sediment transport studies. Flow velocity is strongly dependent on the resistance to flow. An increase in this n value will cause a decrease in the velocity of water flowing across a surface.
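The effect of n on velocity can be made concrete with the Manning equation, v = (k/n)·R^(2/3)·S^(1/2) with k = 1 in SI units (the equation itself is standard but not quoted above; the channel values below are illustrative only):

```python
def manning_velocity(n, hydraulic_radius_m, slope):
    """Manning's equation, v = (1/n) * R**(2/3) * S**(1/2), in SI units."""
    return (1.0 / n) * hydraulic_radius_m ** (2 / 3) * slope ** 0.5

R, S = 1.2, 0.002                  # hydraulic radius (m) and channel slope (m/m)
for n in (0.025, 0.035, 0.060):    # smooth channel -> brushy floodplain
    print(f"n = {n:.3f}  ->  v = {manning_velocity(n, R, S):.2f} m/s")
```

As expected, the computed velocity falls as the roughness coefficient rises.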
Manning's n
The value of Manning's n is affected by many variables. Factors like suspended load, sediment grain size, presence of bedrock or boulders in the stream channel, variations in channel width and depth, and overall sinuosity of the stream channel can all affect Manning's n value. For instance, a narrow, rocky channel with irregular obstructions such as large boulders will have a higher n value than a smooth, straight channel with uniform depth.
Biological factors have the greatest overall effect on Manning's n; the main observable influences are bank stabilization by vegetation, the height of grass and brush across a floodplain, and stumps and logs creating natural dams.
Additionally, seasonal changes in vegetation density and growth can cause fluctuations in Manning's n values.
Biological Importance
Recent studies have found a relationship between hydraulic roughness and salmon spawning habitat; “bed-surface grain size is responsive to hydraulic roughness caused by bank irregularities, bars, and wood debris… We find that wood debris plays an important role at our study sites, not only providing hydraulic roughness but also influencing pool spacing, frequency of textural patches, and the amplitude and wavelength of bank and bar topography and their consequent roughness. Channels with progressively greater hydraulic roughness have systematically finer bed surfaces, presumably due to reduced bed shear stress, resulting in lower channel competence and diminished bed load transport capacity, both of which promote textural fining”.
Textural fining of stream beds can affect more than just salmon spawning habitats, “bar and wood roughness create a greater variety of textural patches, offering a range of aquatic habitats that may promote biologic diversity or be of use to specific animals at different life stages.”
References
Erosion
Hydraulics
Hydraulic engineering | Hydraulic roughness | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 480 | [
"Hydrology",
"Physical systems",
"Hydrology stubs",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering",
"Fluid dynamics"
] |
27,759,167 | https://en.wikipedia.org/wiki/AZFinText | Arizona Financial Text System (AZFinText) is a textual-based quantitative financial prediction system written by Robert P. Schumaker of University of Texas at Tyler and Hsinchun Chen of the University of Arizona.
System
This system differs from other systems in that it uses financial text as one of its key means of predicting stock price movement. This reduces the information lag-time problem evident in many similar systems, where new information (e.g., losing a costly court battle or having a product recall) must be transcribed before the quant can react appropriately. AZFinText overcomes these limitations by utilizing the terms used in financial news articles to predict future stock prices twenty minutes after the news article has been released.
It is believed that certain article terms can move stocks more than others. Terms such as factory exploded or workers strike will have a depressing effect on stock prices whereas terms such as earnings rose will tend to increase stock prices.
When a human trading expert sees certain terms, they will react in a somewhat predictable fashion. AZFinText capitalizes on the arbitrage opportunities that exist when investment experts over and under-react to certain news stories. By analyzing breaking financial news articles and focusing on specific parts of speech, portfolio selection, term weighting and even article sentiment, the AZFinText system becomes a powerful tool and is a radically different way of looking at stock market prediction.
Overview of research
The foundation of AZFinText can be found in the ACM TOIS article. Within this paper, the authors tested several different prediction models and linguistic textual representations. From this work, it was found that using the article terms and the price of the stock at the time the article was released was the most effective model and using proper nouns was the most effective textual representation technique. Combining the two, AZFinText netted a 2.84% trading return over the five-week study period.
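AZFinText's actual feature pipeline and learner are described in the cited papers and are not reproduced here; the toy sketch below (invented headlines, prices, and model choice) only illustrates the general idea of combining article terms with the price at release to predict the price twenty minutes later:

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge

# Hypothetical training articles, the stock price when each was released,
# and the price 20 minutes later (all values invented for illustration).
articles = ["earnings rose on strong demand",
            "workers strike halts production",
            "factory exploded near the plant",
            "earnings rose beating estimates"]
price_at_release = np.array([10.0, 12.0, 11.0, 9.5])
price_20min_later = np.array([10.3, 11.6, 10.2, 9.9])

vec = CountVectorizer()
X_terms = vec.fit_transform(articles)                  # bag-of-terms features
X = hstack([X_terms, csr_matrix(price_at_release).T])  # append release price

model = Ridge(alpha=1.0).fit(X, price_20min_later)

new_terms = vec.transform(["workers strike at factory"])
x_new = hstack([new_terms, csr_matrix([[11.5]])])
print(model.predict(x_new))    # predicted price 20 minutes after release
```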
AZFinText was then extended to study what combination of peer organizations helps to best train the system. Using the premise that IBM has more in common with Microsoft than with GM, AZFinText studied the effect of varying peer-based training sets. To do this, AZFinText trained on the various levels of GICS and evaluated the results. It was found that sector-based training was most effective, netting an 8.50% trading return, outperforming Jim Cramer, Jim Jubak and DayTraders.com during the study period. AZFinText was also compared against the top 10 quantitative systems and outperformed 6 of them.
A third study investigated the role of portfolio building in a textual financial prediction system. From this study, Momentum and Contrarian stock portfolios were created and tested. Using the premise that past winning stocks will continue to win and past losing stocks will continue to lose, AZFinText netted a 20.79% return during the study period. It was also noted that traders were generally overreacting to news events, creating the opportunity of abnormal returns.
A fourth study looked into using author sentiment as an added predictive variable. Using the premise that an author can unwittingly influence market trades simply by the terms they use, AZFinText was tested using tone and polarity features. It was found that contrarian activity was occurring within the market, where stocks covered by articles of a positive tone would decrease in price and stocks covered by articles of a negative tone would increase in price.
A further study investigated what article verbs have the most influence on stock price movement. From this work, it was found that planted, announcing, front, smaller and crude had the highest positive impact on stock price.
Notable publicity
AZFinText has been the topic of discussion by numerous media outlets. Some of the more notable ones include The Wall Street Journal, MIT's Technology Review, Dow Jones Newswire, WBIR in Knoxville, TN, Slashdot and other media outlets.
References
External links
https://blogs.wsj.com/digits/2010/06/21/using-artificial-intelligence-to-digest-financial-news/
slashdot.org/story/10/06/12/1341212/Quant-AI-Picks-Stocks-Better-Than-Humans
www.technologyreview.com/blog/guest/25308/
Mathematical finance
Artificial intelligence
Artificial intelligence engineering | AZFinText | [
"Mathematics",
"Engineering"
] | 876 | [
"Artificial intelligence engineering",
"Applied mathematics",
"Mathematical finance",
"Software engineering"
] |
27,762,016 | https://en.wikipedia.org/wiki/Cell%20Stem%20Cell | Cell Stem Cell is a peer-reviewed scientific journal published by Cell Press, an imprint of Elsevier.
History
The journal was established in 2007 and focuses on stem cell research.
Both research articles and reviews are published, at about a 7 to 1 ratio.
References
External links
Stem Cell Therapy
Stem Cell Treatment
Monthly journals
Stem cell research
Developmental biology journals
English-language journals
Delayed open access journals
Cell Press academic journals
Academic journals established in 2007 | Cell Stem Cell | [
"Chemistry",
"Biology"
] | 87 | [
"Translational medicine",
"Tissue engineering",
"Stem cell research"
] |
33,111,104 | https://en.wikipedia.org/wiki/G%C3%B6del%27s%20speed-up%20theorem | In mathematics, Gödel's speed-up theorem, proved by Kurt Gödel in 1936, shows that there are theorems whose proofs can be drastically shortened by working in more powerful axiomatic systems.
Kurt Gödel showed how to find explicit examples of statements in formal systems that are provable in that system but whose shortest proof is unimaginably long. For example, the statement:
"This statement cannot be proved in Peano arithmetic in fewer than a googolplex symbols"
is provable in Peano arithmetic (PA) but the shortest proof has at least a googolplex symbols, by an argument similar to the proof of Gödel's first incompleteness theorem: If PA is consistent, then it cannot prove the statement in fewer than a googolplex symbols, because the existence of such a proof would itself be a theorem of PA, a contradiction. But simply enumerating all strings of length up to a googolplex and checking that each such string is not a proof (in PA) of the statement, yields a proof of the statement (which is necessarily longer than a googolplex symbols).
The statement has a short proof in a more powerful system: in fact the proof given in the previous paragraph is a proof in the system of Peano arithmetic plus the statement "Peano arithmetic is consistent" (which, per the incompleteness theorem, cannot be proved in Peano arithmetic).
In this argument, Peano arithmetic can be replaced by any more powerful consistent system, and a googolplex can be replaced by any number that can be described concisely in the system.
Harvey Friedman found explicit natural examples of this phenomenon, giving statements in Peano arithmetic and other formal systems whose shortest proofs are ridiculously long. For example, the statement
"there is an integer n such that if there is a sequence of rooted trees T1, T2, ..., Tn such that Tk has at most k + 10 vertices, then some tree can be homeomorphically embedded in a later one"
is provable in Peano arithmetic, but the shortest proof has length at least A(1000), where A(0) = 1 and A(n+1) = 2^A(n). The statement is a special case of Kruskal's theorem and has a short proof in second order arithmetic.
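The growth of this bound is easy to appreciate numerically; a short Python snippet (illustration only) evaluating the recursion shows that A(5) already has nearly 20,000 decimal digits:

```python
def A(n):
    """A(0) = 1 and A(n+1) = 2**A(n): an iterated exponential (tower)."""
    value = 1
    for _ in range(n):
        value = 2 ** value
    return value

print([A(n) for n in range(5)])   # [1, 2, 4, 16, 65536]
print(len(str(A(5))))             # 19729 decimal digits; A(1000) is a
                                  # tower of 2s far beyond physical writing
```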
If one takes Peano arithmetic together with the negation of the statement above, one obtains an inconsistent theory whose shortest known contradiction is equivalently long.
See also
Blum's speedup theorem
List of long proofs
References
Proof theory
Theorems in the foundations of mathematics
Speed-up theorem | Gödel's speed-up theorem | [
"Mathematics"
] | 538 | [
"Mathematical theorems",
"Foundations of mathematics",
"Proof theory",
"Mathematical logic",
"Mathematical problems",
"Theorems in the foundations of mathematics"
] |
33,113,800 | https://en.wikipedia.org/wiki/HRS%20Computing | HRS Computing is an open-source scientific software which simulates the hyper Rayleigh scattering (HRS) in nonlinear optics.
The software is designed for researchers, and it is used to verify the agreement between theoretical models and experimental data.
Main features
From the physics point of view, the software provides coefficients that are useful for determining the microscopic structure of composites, molecules, etc.:
the dipolar and quadripolar coefficients
the depolarization factor
Using these coefficients, the software also provides:
the visualization of simulated polar graphics generated by HRS
molecular position and dipolar momentum in 3D
easy data and graphics export
External links
HRS Computing official site
References
Physics software | HRS Computing | [
"Physics"
] | 137 | [
"Physics software",
"Computational physics"
] |
33,116,840 | https://en.wikipedia.org/wiki/Tissue%20Engineering%20and%20Regenerative%20Medicine%20International%20Society | Tissue Engineering and Regenerative Medicine International Society is an international learned society dedicated to tissue engineering and regenerative medicine.
Background
Regenerative medicine involves processes of replacing, engineering or regenerating human cells, tissues or organs to restore or establish normal function. A major technology of regenerative medicine is tissue engineering, which has variously been defined as "an interdisciplinary field that applies the principles of engineering and the life sciences toward the development of biological substitutes that restore, maintain, or improve tissue function", or "the creation of new tissue by the deliberate and controlled stimulation of selected target cells through a systematic combination of molecular and mechanical signals".
History
Tissue engineering emerged during the 1990s as a potentially powerful option for regenerating tissue and research initiatives were established in various cities in the US and in European countries including the UK, Italy, Germany and Switzerland, and also in Japan. Soon fledgling societies were formed in these countries in order to represent these new sciences, notably the European Tissue Engineering Society (ETES) and, in the US, the Tissue Engineering Society (TES), soon to become the Tissue Engineering Society international (TESi) and the Regenerative Medicine Society (RMS).
Because of the overlap between the activities of these societies and the increasing globalization of science and medicine, considerations of a merger between TESI and ETES and RMS were initiated in 2004 and agreement was reached during 2005 about the formation of the consolidated society, the Tissue Engineering and Regenerative Medicine International Society (TERMIS). Election of officers for TERMIS took place in September 2005, and the by-laws were approved by the Board.
Rapid progress in the organization of TERMIS took place during late 2005 and 2006. The SYIS, Student and Young Investigator Section was established in January 2006, website and newsletter launched and membership dues procedures put in place.
Structure and governance
It was determined that each Chapter would have its own Council, the overall activities being determined by the Governing Board, on which each Council was represented, and an executive committee.
Society chapters
At the beginning of the Society, it was agreed that there would be Continental Chapters of TERMIS, initially TERMIS-North America (TERMIS-NA) and TERMIS-Europe (TERMIS-EU), to be joined at the time of the major Shanghai conference in October 2005 by TERMIS-Asia Pacific (TERMIS-AP). It was subsequently agreed that the remit of TERMIS-North America should be expanded to incorporate activity in South America, the chapter becoming TERMIS-Americas (TERMIS-AM) officially in 2012.
Student and Young Investigator Section
The Student and Young Investigator Section of TERMIS (TERMIS-SYIS) brings together undergraduate and graduate students, post-doctoral researchers and young investigators in industry and academia related to tissue engineering and regenerative medicine. It follows the organizational and working pattern of TERMIS.
Activities
Journal
A contract was signed between TERMIS and the Mary Ann Liebert publisher which designated the journal Tissue Engineering, Parts A, B, and C as the official journal of TERMIS with free on-line access for the membership.
Conferences
It was agreed that there would be a World Congress every three years, with each Chapter organizing its own conference in the intervening two years.
Awards
Each TERMIS chapter has defined awards to recognize outstanding scientists and their contributions within the community.
TERMIS-AP
The Excellence Achievement Award has been established to recognize a researcher in the Asia-Pacific region who has made continuous and landmark contributions to the tissue engineering and regenerative medicine field.
The Outstanding Scientist Award has been established to recognize a mid-career researcher in the Asia-Pacific region who has made significant contributions to the TERM field.
The Young Scholar Award has been established to recognize a young researcher in the Asia-Pacific region who has made significant and consistent achievements in the TERM field, showing clear evidence of their potential to excel.
The Mary Ann Liebert, Inc. Best TERM Paper Award has been established to recognize a student researcher (undergraduate/graduate/postdoc) in the Asia-Pacific region who has achieved outstanding research accomplishments in the TERM field.
The TERMIS-AP Innovation Team Award has been established to recognize a team of researchers in the Asia-Pacific region. It aims to recognize successful applications of tissue engineering and regenerative medicine leading to the development of relevant products/therapies/technologies which will ultimately benefit the patients.
TERMIS-EU
The Career Achievement Award is aimed towards a recognition of individuals who have made outstanding contributions to the field of TERM and have carried out most of their career in the TERMIS-EU geographical area.
The Mid Terms Career Award was established in 2020 to recognize individuals who are within 10–20 years of obtaining their PhD, with a successful research group and clear evidence of outstanding performance.
The Robert Brown Early Career Principal Investigator Award recognizes individuals who are within 2–10 years of obtaining their PhD, with clear evidence of a growing profile.
Award recipients
Fellows
Fellows of Tissue Engineering and Regenerative Medicine (FTERM) recipients are:
Alini, Mauro
Atala, Anthony
Badylak, Stephen
Cancedda, Ranieri
Cao, Yilin
Chatzinikolaidou, Maria
El Haj, Alicia
Fontanilla, Marta
Germain, Lucie
Gomes, Manuela
Griffith, Linda
Guldberg, Robert
Hellman, Kiki
Hilborn, Jöns
Hubbell, Jeffrey
Hutmacher, Dietmar
Khang, Gilson
Kirkpatrick, C. James
Langer, Robert
Lee, Hai-Bang
Lee, Jin Ho
Lewandowska-Szumiel, Malgorzata
Marra, Kacey
Martin, Ivan
McGuigan, Alison
Mikos, Antonios
Mooney, David
Motta, Antonella
Naughton, Gail
Okano, Teruo
Pandit, Abhay
Parenteau, Nancy
Radisic, Milica
Ratner, Buddy
Redl, Heinz
Reis, Rui L.
Richards, R. Geoff
Russell, Alan
Schenke-Layland, Katja
Shoichet, Molly
Smith, David
Tabata, Yasuhiko
Tuan, Rocky
Vacanti, Charles
Vacanti, Joseph
van Osch, Gerjo
Vunjak-Novakovic, Gordana
Wagner, William
Weiss, Anthony S.
Emeritus
Johnson, Peter
Williams, David
Deceased Fellows
Nerem, Robert
References
External links
TERMIS homepage
Tissue Engineering Journal
Tissue engineering
Medical associations based in the United States
Medical and health organizations based in California
International medical associations | Tissue Engineering and Regenerative Medicine International Society | [
"Chemistry",
"Engineering",
"Biology"
] | 1,310 | [
"Biological engineering",
"Cloning",
"Chemical engineering",
"Tissue engineering",
"Medical technology"
] |
33,125,193 | https://en.wikipedia.org/wiki/Crowdsourced%20testing | Crowdsourced testing is an emerging trend in software testing which exploits the benefits, effectiveness, and efficiency of crowdsourcing and the cloud platform. It differs from traditional testing methods in that the testing is carried out by a number of different testers from different places, and not by hired consultants and professionals. The software is put to test under diverse realistic platforms which makes it more reliable, cost-effective, and can be fast. In addition, crowdsource testing can allow for remote usability testing because specific target groups can be recruited through the crowd.
This method of testing is considered when the software is more user-centric: i.e., software whose success is determined by user feedback and which has a diverse user space. It is frequently implemented for gaming and mobile applications, when experts who may be difficult to find in one place are required for specific testing, or when the company lacks the resources or time to carry out the testing internally.
Crowdsourced testing may be considered to be a sub-type of software testing outsourcing.
References
Crowdsourcing
Software testing | Crowdsourced testing | [
"Engineering"
] | 220 | [
"Software engineering",
"Software testing"
] |
38,596,565 | https://en.wikipedia.org/wiki/Ethenium | In chemistry, ethenium, protonated ethylene or ethyl cation is a positive ion with the formula . It can be viewed as a molecule of ethylene () with one added proton (), or a molecule of ethane () minus one hydride ion (). It is a carbocation; more specifically, a nonclassical carbocation.
Preparation
Ethenium has been observed in rarefied gases subjected to radiation. Another preparation method is to react certain gaseous proton donors with ethane at ambient temperature and pressures below 1 mmHg. (Other proton donors form ethanium in preference to ethenium.)
At room temperature and in a rarefied methane atmosphere, ethanium slowly dissociates to ethenium and H2. The reaction is much faster at 90 °C.
Stability and reactions
Contrary to some earlier reports, ethenium was found to be largely unreactive towards neutral methane at ambient temperature and low pressure (on the order of 1 mmHg), even though the reaction yielding sec-C3H7+ and H2 is believed to be exothermic.
Structure
The structure of ethenium's ground state was in dispute for many years, but it was eventually agreed to be a non-classical structure, with the two carbon atoms and one of the hydrogen atoms forming a three-center two-electron bond. Calculations have shown that higher homologues, like the propyl and n-butyl cations also have bridged structures. Generally speaking, bridging appears to be a common means by which 1° alkyl carbocations achieve additional stabilization. Consequently, true 1° carbocations (with a classical structure) may be rare or nonexistent.
References
Carbocations
Physical chemistry | Ethenium | [
"Physics",
"Chemistry"
] | 367 | [
"Physical chemistry",
"Applied and interdisciplinary physics",
"nan"
] |
25,901,829 | https://en.wikipedia.org/wiki/Metamaterial%20absorber | A metamaterial absorber is a type of metamaterial intended to efficiently absorb electromagnetic radiation such as light. Furthermore, metamaterials are an advance in materials science. Hence, those metamaterials that are designed to be absorbers offer benefits over conventional absorbers such as further miniaturization, wider adaptability, and increased effectiveness. Intended applications for the metamaterial absorber include emitters, photodetectors, sensors, spatial light modulators, infrared camouflage, wireless communication, and use in solar photovoltaics and thermophotovoltaics.
For practical applications, the metamaterial absorbers can be divided into two types: narrow band and broadband. For example, metamaterial absorbers can be used to improve the performance of photodetectors. Metamaterial absorbers can also be used for enhancing absorption in both solar photovoltaic and thermo-photovoltaic applications. Skin depth engineering can be used in metamaterial absorbers in photovoltaic applications as well as other optoelectronic devices, where optimizing the device performance demands minimizing resistive losses and power consumption, such as photodetectors, laser diodes, and light emitting diodes.
In addition, the advent of metamaterial absorbers enable researchers to further understand the theory of metamaterials which is derived from classical electromagnetic wave theory. This leads to understanding the material's capabilities and reasons for current limitations.
Unfortunately, achieving broadband absorption, especially in the THz region (and higher frequencies), still remains a challenging task because of the intrinsically narrow bandwidth of surface plasmon polaritons (SPPs) or localized surface plasmon resonances (LSPRs) generated on metallic surfaces at the nanoscale, which are exploited as a mechanism to obtain perfect absorption.
Metamaterials
Metamaterials are artificial materials which exhibit unique properties which do not occur in nature. These are usually arrays of structures which are smaller than the wavelength they interact with. These structures have the capability to control electromagnetic radiation in unique ways that are not exhibited by conventional materials. It is the spacing and shape of a given metamaterial's components that define its use and the way it controls electromagnetic radiation. Unlike most conventional materials, researchers in this field can physically control electromagnetic radiation by altering the geometry of the material's components. Metamaterial structures are used in a wide range of applications and across a broad frequency range from radio frequencies, to microwave, terahertz, across the infrared spectrum and almost to visible wavelengths.
Absorbers
"An electromagnetic absorber neither reflects nor transmits the incident radiation. Therefore, the power of the impinging wave is mostly absorbed in the absorber materials. The performance of an absorber depends on its thickness and morphology, and also the materials used to fabricate it."
"A near unity absorber is a device in which all incident radiation is absorbed at the operating frequency–transmissivity, reflectivity, scattering and all other light propagation channels are disabled. Electromagnetic (EM) wave absorbers can be categorized into two types: resonant absorbers and broadband absorbers.
Principal conceptions
A metamaterial absorber utilizes the effective medium design of metamaterials and the loss components of permittivity and magnetic permeability to create a material that has a high ratio of electromagnetic radiation absorption. Loss is noted in applications of negative refractive index (photonic metamaterials, antenna systems metamaterials) or transformation optics (metamaterial cloaking, celestial mechanics), but is typically undesired in these applications.
Complex permittivity and permeability are derived from metamaterials using the effective medium approach. As effective media, metamaterials can be characterized with complex ε(ω) = ε1 + iε2 for effective permittivity and μ(ω) = μ1 + iμ2 for effective permeability. Complex values of permittivity and permeability typically correspond to attenuation in a medium. Most of the work in metamaterials is focused on the real parts of these parameters, which relate to wave propagation rather than attenuation. The loss (imaginary) components are small in comparison to the real parts and are often neglected in such cases.
However, the loss terms (ε2 and μ2) can also be engineered to create high attenuation and correspondingly large absorption. By independently manipulating resonances in ε and μ it is possible to absorb both the incident electric and magnetic field. Additionally, a metamaterial can be impedance-matched to free space by engineering its permittivity and permeability, minimizing reflectivity. Thus, it becomes a highly capable absorber.
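The impedance-matching argument can be illustrated with the textbook normal-incidence reflection coefficient of a half-space, r = (Z − Z0)/(Z + Z0) with Z/Z0 = √(μr/εr); this is a generic relation, not a specific absorber design, and the parameter values below are invented:

```python
import numpy as np

def reflectivity(eps_r, mu_r):
    """Normal-incidence power reflectivity |r|^2 of a half-space,
    with r = (Z - Z0)/(Z + Z0) and Z/Z0 = sqrt(mu_r / eps_r)."""
    z_rel = np.sqrt(mu_r / eps_r)       # impedance relative to free space
    r = (z_rel - 1) / (z_rel + 1)
    return abs(r) ** 2

# Lossy but mismatched: large complex permittivity, non-magnetic (mu_r = 1).
print(reflectivity(eps_r=4 + 3j, mu_r=1 + 0j))     # sizeable reflection
# Same loss with permeability engineered so that mu_r = eps_r: impedance
# matched, nothing is reflected, and the loss terms can absorb the wave.
print(reflectivity(eps_r=4 + 3j, mu_r=4 + 3j))     # -> 0.0
```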
This approach can be used to create thin absorbers. Typical conventional absorbers are thick compared to wavelengths of interest, which is a problem in many applications. Since metamaterials are characterized based on their subwavelength nature, they can be used to create effective yet thin absorbers. This is not limited to electromagnetic absorption either.
See also
Negative index metamaterials
History of metamaterials
Metamaterial cloaking
Nonlinear metamaterials
Photonic crystal
Seismic metamaterials
Split-ring resonator
Acoustic metamaterials
Plasmonic metamaterials
Superlens
Terahertz metamaterials
Transformation optics
Theories of cloaking
References
Further reading
The above PDF download is a self-published version of this paper. The Salisbury screen was invented by American engineer Winfield Salisbury in 1952.
Salisbury W. W. "Absorbent body for electromagnetic waves", United States patent number 2599944 10 June 1952. Also cited in Munk
External links
Images - A simple schematic of Miniaturized Microwave Absorbers from Kamil Boratay Alıcı (Ph. D., Physics) of the Nanotechnology Research Center, Bilkent University.
Metamaterials | Metamaterial absorber | [
"Materials_science",
"Engineering"
] | 1,236 | [
"Metamaterials",
"Materials science"
] |
25,902,186 | https://en.wikipedia.org/wiki/PD%205500 | PD 5500 is a specification for unfired pressure vessels. It specifies requirements for the design, manufacture, inspection and testing of unfired pressure vessels made from carbon, ferritic alloy, and austenitic steels. It also includes material supplements containing requirements for vessels made from aluminium, copper, nickel, titanium and duplex.
PD 5500 is the UK’s national pressure vessels code, although the code is used outside the UK. A new edition of PD5500 is published every three years. An amendment is usually published every year in September.
BS5500 was declassified as a full British Standard and reclassified as a 'Publicly Available Specification', which led to it being renamed PD5500. PD5500 was withdrawn from the list of British Standards because it was not harmonized with the European Pressure Equipment Directive (2014/68/EU, formerly 97/23/EC). EN 13445 was introduced as the harmonized standard. Harmonized standards carry presumed conformity with the requirements of the Pressure Equipment Directive, whereas other pressure vessel design codes such as PD5500 or ASME must demonstrate conformity against each of the Essential Safety Requirements of the Pressure Equipment Directive before conformity can be declared. PD5500 is currently published as a "Published Document" (PD) by the British Standards Institution.
Brexit
In the UK the Pressure Equipment Safety Regulations 2016 enacted the PED into UK law. Since the UK exited the European Union, the PED no longer applies and the Pressure Equipment Safety Regulations 2016 have been amended by the enactment of the UK Product Safety and Metrology Regulations, which update a number of pieces of legislation which required amendments to operate outside of the EU.
Under this new legislation Harmonised Standards are now referred to as Designated Standards, but the practice of demonstrating compliance remains largely the same. EN 13445 is recognised as a Designated Standard, while other codes such as PD5500 must still demonstrate conformity against each Essential Safety Requirement.
See also
Pressure Equipment Directive
Pressure Equipment Safety Regulations
The Product Safety and Metrology etc. (Amendment etc.) (EU Exit) Regulations 2019
PD5500 - British Standard Institute
References
Pressure vessels
British Standards
Structural engineering standards | PD 5500 | [
"Physics",
"Chemistry",
"Engineering"
] | 443 | [
"Structural engineering",
"Chemical equipment",
"Physical systems",
"Hydraulics",
"Structural engineering standards",
"Pressure vessels"
] |
25,902,271 | https://en.wikipedia.org/wiki/Mesenchymal%E2%80%93epithelial%20transition | A mesenchymal–epithelial transition (MET) is a reversible biological process that involves the transition from motile, multipolar or spindle-shaped mesenchymal cells to planar arrays of polarized cells called epithelia. MET is the reverse process of epithelial–mesenchymal transition (EMT) and it has been shown to occur in normal development, induced pluripotent stem cell reprogramming, cancer metastasis and wound healing.
Introduction
Unlike epithelial cells – which are stationary and characterized by an apico-basal polarity, with binding by a basal lamina, tight junctions, gap junctions, adherent junctions and expression of cell-cell adhesion markers such as E-cadherin – mesenchymal cells do not make mature cell-cell contacts, can invade through the extracellular matrix, and express markers such as vimentin, fibronectin, N-cadherin, Twist, and Snail. MET also plays a critical role in metabolic switching and epigenetic modifications. In general, epithelium-associated genes are upregulated and mesenchyme-associated genes are downregulated in the process of MET.
In development
During embryogenesis and early development, cells switch back and forth between different cellular phenotypes via MET and its reverse process, epithelial–mesenchymal transition (EMT). Developmental METs have been studied most extensively in embryogenesis during somitogenesis and nephrogenesis, and in carcinogenesis during metastasis, but MET also occurs in cardiogenesis and foregut development. MET is an essential process in embryogenesis that gathers mesenchymal-like cells into cohesive structures. Although the mechanism of MET is quite similar across the morphogenesis of various organs, each process has a unique signaling pathway to induce changes in gene expression profiles.
Nephrogenesis
One example of this, the most well described of the developmental METs, is kidney ontogenesis. The mammalian kidney is primarily formed by two early structures: the ureteric bud and the nephrogenic mesenchyme, which form the collecting duct and nephrons respectively (see kidney development for more details). During kidney ontogenesis, a reciprocal induction of the ureteric bud epithelium and nephrogenic mesenchyme occurs. As the ureteric bud grows out of the Wolffian duct, the nephrogenic mesenchyme induces the ureteric bud to branch. Concurrently, the ureteric bud induces the nephrogenic mesenchyme to condense around the bud and undergo MET to form the renal epithelium, which ultimately forms the nephron. Growth factors, integrins, cell adhesion molecules, and protooncogenes, such as c-ret, c-ros, and c-met, mediate the reciprocal induction in metanephrons and consequent MET.
Somitogenesis
Another example of developmental MET occurs during somitogenesis. Vertebrate somites, the precursors of axial bones and trunk skeletal muscles, are formed by the maturation of the presomitic mesoderm (PSM). The PSM, which is composed of mesenchymal cells, undergoes segmentation by delineating somite boundaries (see somitogenesis for more details). Each somite is encapsulated by an epithelium, formerly mesenchymal cells that had undergone MET. Two Rho family GTPases – Cdc42 and Rac1 – as well as the transcription factor Paraxis are required for chick somitic MET.
Cardiogenesis
Development of the heart involves several rounds of EMT and MET. During development, the splanchnopleure undergoes EMT and produces endothelial progenitors, which then form the endocardium through MET. The pericardium is formed by sinus venosus mesenchymal cells that undergo MET. Quite similar processes also occur during regeneration of the injured heart. The injured pericardium undergoes EMT and is transformed into adipocytes or myofibroblasts, which induce arrhythmia and scars. MET then leads to the formation of vascular and epithelial progenitors that can differentiate into vasculogenic cells, leading to regeneration of the heart injury.
Hepatogenesis
In cancer
While relatively little is known about the role MET plays in cancer compared with the extensive studies of EMT in tumor metastasis, MET is believed to participate in the establishment and stabilization of distant metastases by allowing cancerous cells to regain epithelial properties and integrate into distant organs. Between these two states, cells occur in an 'intermediate state', or so-called partial EMT.
In recent years, researchers have begun to investigate MET as one of many potential therapeutic targets in the prevention of metastases. This approach to preventing metastasis is known as differentiation-based therapy or differentiation therapy and it can be used for development of new anti-cancer therapeutic strategies.
In iPS cell reprogramming
A number of different cellular processes must take place in order for somatic cells to undergo reprogramming into induced pluripotent stem cells (iPS cells). iPS cell reprogramming, also known as somatic cell reprogramming, can be achieved by ectopic expression of Oct4, Klf4, Sox2, and c-Myc (OKSM). Upon induction, mouse fibroblasts must undergo MET to successfully begin the initiation phase of reprogramming. Epithelial-associated genes such as E-cadherin/Cdh1, Cldns −3, −4, −7, −11, Occludin (Ocln), Epithelial cell adhesion molecule (Epcam), and Crumbs homolog 3 (Crb3), were all upregulated before Nanog, a key transcription factor in maintaining pluripotency, was turned on. Additionally, mesenchymal-associated genes such as Snail, Slug, Zeb −1, −2, and N-cadherin were downregulated within the first 5 days post-OKSM induction. Addition of exogenous TGF-β1, which blocks MET, decreased iPS reprogramming efficiency significantly. These findings are all consistent with previous observations that embryonic stem cells resemble epithelial cells and express E-cadherin.
Recent studies have suggested that ectopic expression of Klf4 in iPS cell reprogramming may be specifically responsible for inducing E-cadherin expression by binding to promoter regions and the first intron of CDH1 (the gene encoding for E-cadherin).
See also
Epithelial–mesenchymal transition
References
Developmental biology
Oncology | Mesenchymal–epithelial transition | [
"Biology"
] | 1,436 | [
"Behavior",
"Developmental biology",
"Reproduction"
] |
25,903,837 | https://en.wikipedia.org/wiki/Edge%20crush%20test | The edge crush test is a laboratory test method that is used to measure the cross-direction crushing of a sample of corrugated board. It gives information on the ability of a particular board construction to resist crushing. It provides some relationship with the peak top-to-bottom compression strength of empty singlewall regular slotted containers in laboratory conditions.
The edge crush resistance R, expressed in kilonewtons per meter (kN/m), is calculated by the equation R = F̄ / 100, where F̄ is the mean value of the maximum force in newtons and 100 mm is the length of the specimen's loaded edge, so that R comes out in N/mm = kN/m. More details are laid down in ISO 3037.
Corrugated fiberboard can be evaluated by many material test methods including an edge crush test. There have been efforts to estimate the compression strength of a box (usually empty, regular singlewall slotted containers, top-to-bottom) based on various board properties. Some have involved finite element analysis. One of the commonly referenced empirical estimations was published by McKee in 1963. This used the board ECT, the MD and CD flexural stiffness, the box perimeter, and the box depth. Simplifications have used a formula involving the board ECT, the board thickness, and the box perimeter. Most estimations do not relate well to other box orientations, box styles, or to filled boxes.
To calculate the value of BCT (box compression test), McKee's formula is the easiest to use but also the least accurate. The ratio of height to circumference must be greater than 1:7; even then, there are many reservations.
Simplified McKee formula (a worked example follows the variable list):
BCT = 5.87 × ECT × √(U × d)
BCT = box compression test, in pounds
ECT = edge crush test value, in pounds per inch
U = box outline (perimeter), in inches
d = thickness of corrugated board, in inches
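A worked example of the simplified formula (the box dimensions and ECT value are assumptions for illustration; US customary units, matching the variable definitions above):

```python
import math

def bct_mckee(ect_lb_per_in, perimeter_in, thickness_in):
    """Simplified McKee estimate of empty-box compression strength, in pounds."""
    return 5.87 * ect_lb_per_in * math.sqrt(perimeter_in * thickness_in)

# A 16 x 12 in footprint gives a 56 in perimeter; C-flute board is roughly
# 0.16 in thick; an ECT of 32 lb/in is assumed.
print(round(bct_mckee(32.0, 56.0, 0.16), 1))   # ~562.3 lb predicted BCT
```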
References
ISO 3037:2007-03, Corrugated fibreboard - Determination of edgewise crush resistance
Materials testing | Edge crush test | [
"Materials_science",
"Engineering"
] | 368 | [
"Materials testing",
"Materials science"
] |
25,904,910 | https://en.wikipedia.org/wiki/Cognition%20Network%20Technology | Cognition Network Technology (CNT), also known as Definiens Cognition Network Technology, is an object-based image analysis method developed by Nobel laureate Gerd Binnig together with a team of researchers at Definiens AG in Munich, Germany. It serves for extracting information from images using a hierarchy of image objects (groups of pixels), as opposed to traditional pixel processing methods.
To emulate the human mind's cognitive powers, Definiens used patented image segmentation and classification processes, and developed a method to render knowledge in a semantic network. CNT examines pixels not in isolation, but in context. It builds up a picture iteratively, recognizing groups of pixels as objects. It uses the color, shape, texture and size of objects as well as their context and relationships to draw conclusions and inferences, similar to human analysis.
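CNT itself is proprietary and its exact segmentation and classification processes are not reproduced here; the sketch below is only a generic illustration of the object-based idea: pixels are first grouped into objects, which are then classified from object-level properties such as size and mean intensity rather than pixel by pixel. The synthetic image and thresholds are invented for the example.

```python
import numpy as np
from scipy import ndimage

# Synthetic grayscale "image": dark background with two bright blobs.
img = np.zeros((64, 64))
img[10:20, 10:22] = 0.9            # a large, bright rectangle
img[40:44, 40:44] = 0.6            # a small, dimmer square

# Step 1: segmentation - group pixels into candidate objects.
mask = img > 0.3
labels, n_objects = ndimage.label(mask)

# Step 2: classification - decide per object, from object-level features
# (size, mean intensity), not per isolated pixel.
for obj_id in range(1, n_objects + 1):
    pixels = img[labels == obj_id]
    size, mean = pixels.size, pixels.mean()
    kind = "large/bright" if size > 50 and mean > 0.7 else "small/dim"
    print(f"object {obj_id}: {size} px, mean {mean:.2f} -> {kind}")
```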
History
In 1994, Professor Gerd Binnig founded Definiens. CNT was first available with the launch of the eCognition software in May 2000. In June 2010, Trimble Navigation Ltd (NASDAQ: TRMB) acquired Definiens' business assets in earth sciences markets, including eCognition software, and also licensed Definiens' patented CNT. In 2014, Definiens was acquired by MedImmune, the global biologics research and development arm of AstraZeneca, for an initial consideration of $150 million.
Software
Definiens Tissue Studio
Definiens Tissue Studio is a digital pathology image analysis software application based on CNT.
The intended use of Definiens Tissue Studio is for biomarker translational research in formalin-fixed, paraffin-embedded tissue samples which have been treated with immunohistochemical staining assays, or hematoxylin and eosin (H&E).
The central concept behind Definiens Tissue Studio is a user interface that facilitates machine learning from example digital histopathology images in order to derive an image analysis solution suitable for the measurement of biomarkers and/or histological features within pre-defined regions of interest on a cell-by-cell basis, and within sub-cellular compartments. The derived image analysis solution is then automatically applied to subsequent digital images in order to objectively measure defined sets of multiparametric image features. These data sets are used for further understanding the underlying biological processes that drive cancer and other diseases. Image processing and data analysis are performed either on a local desktop computer workstation, or on a server grid.
eCognition
The eCognition suite offers three components which can be used stand-alone or in combination to solve image analysis tasks. eCognition Developer is a development environment for object-based image analysis. It is used in earth sciences to develop rule sets (or applications) for the analysis of remote sensing data. eCognition Architect enables non-technical users to configure, calibrate and execute image analysis workflows created in eCognition Developer. eCognition Server software provides a processing environment for batch execution of image analysis jobs.
eCognition software is utilized in numerous remote sensing and geospatial application scenarios and environments, using a variety of data types:
Generic: Rapid Mapping, Change Detection, Object Recognition
By environment: Diverse Landcover Mapping, Urban Analysis (i.e. impervious surface area analysis for taxation, property assessment for insurance, inventory of green infrastructure), Forestry (i.e. biomass measurement, species identification, firescar measurement), Agriculture (i.e. regional planning, precision farming, crisis response), Marine and Riparian (i.e. ecosystem evaluation, disaster management, harbor monitoring).
Other: Defense, security, atmosphere and climate
The online eCognition community was launched in July 2009 and had 2813 members as of July 9, 2010. Membership is distributed globally and user conferences are held regularly, the last having taken place in November 2009 in Munich, Germany. The bi-annual GEOBIA (Geographic Object-Based Image Analysis) conference is heavily attended by eCognition users, with the majority of presentations based on eCognition software.
References
Further reading
External links
Computer Vision & Machine Learning
Computer vision
Science and technology in Germany | Cognition Network Technology | [
"Engineering"
] | 857 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
25,905,119 | https://en.wikipedia.org/wiki/Fermat%27s%20Last%20Theorem%20%28book%29 | Fermat's Last Theorem is a popular science book (1997) by Simon Singh. It tells the story of the search for a proof of Fermat's Last Theorem, first conjectured by Pierre de Fermat in 1637, and explores how many mathematicians such as Évariste Galois had tried and failed to provide a proof for the theorem. Despite the efforts of many mathematicians, the proof would remain incomplete until 1995, with the publication of Andrew Wiles' proof of the Theorem. The book is the first mathematics book to become a Number One seller in the United Kingdom, whilst Singh's documentary The Proof, on which the book was based, won a BAFTA in 1997.
In the United States, the book was released as Fermat's Enigma: The Epic Quest to Solve the World's Greatest Mathematical Problem. The book was released in the United States in October 1998 to coincide with the US release of Singh's documentary The Proof about Wiles's proof of Fermat's Last Theorem.
References
Books by Simon Singh
Popular mathematics books
1997 non-fiction books
Fermat's Last Theorem
English-language non-fiction books
Fourth Estate books | Fermat's Last Theorem (book) | [
"Mathematics"
] | 243 | [
"Number theory stubs",
"Theorems in number theory",
"Fermat's Last Theorem",
"Number theory"
] |
25,906,274 | https://en.wikipedia.org/wiki/PHOSFOS | PHOSFOS (Photonic Skins For Optical Sensing) is a research and technology development project co-funded by the European Commission.
Project Description
The PHOSFOS project is developing flexible and stretchable foils or skins that integrate optical sensing elements with optical and electrical devices, such as onboard signal processing and wireless communications, as seen in Figure 1. These flexible skins can be wrapped around, embedded in, and anchored to irregularly shaped or moving objects and allow quasi-distributed sensing of mechanical quantities such as deformation, pressure, stress, and strain. This approach offers advantages over conventional sensing systems, such as increased portability and measurement range.
The sensing technology is based around sensing elements called Fiber Bragg Gratings (FBGs) that are fabricated in standard single core silica fibers, highly birefringent Microstructured fibers (MSF) and Plastic optical fibers (POF). The silica MSFs are designed to exhibit almost zero temperature sensitivity to cope with the traditional temperature cross-sensitivity issues of conventional fiber sensors. These specialty fibers are being modeled, designed, and fabricated within the programme. FBGs implemented in plastic optical fiber are also being studied because plastic fibers can be stretched up to 300% before breaking, permitting use under conditions that would result in catastrophic failure of other types of strain sensors.
Once optimized, the sensors are embedded into a flexible skin and interfaced with peripheral optoelectronics and electronics (see Figure 2).
The photonic skins developed by PHOSFOS have potential application in real-time remote monitoring of behavior and integrity of various structures such as in civil engineering (buildings, dams, bridges, roads, tunnels and mines), in aerospace (aircraft wings, helicopter blades), and in energy production (windmill blades). Applications in healthcare are also being investigated.
Key results
A summary of the key developments can be found on the PhoSFOS EU webpage and includes the demonstration of a fully flexible opto-electronic foil.
Figure 3 shows the scattering of HeNe laser light from noise gratings recorded in PMMA using a 325 nm HeCd laser.
One of the early results from the project was in developing a repeatable method for joining polymer fiber to standard silica fiber — a major development that enabled using POF Bragg gratings in applications outside an optics lab. One of the first uses for these sensors was in monitoring strain in tapestries shown in Figure 4. In this case conventional electrical strain sensors and silica fiber sensors were shown to be strengthening the tapestries in areas where they were fixed. Because polymer fibre devices are much more flexible they did not distort the textiles as much, permitting more accurate measurement of strain.
Temperature and humidity sensing using a combined silica / POF fiber sensor has been demonstrated. Combined strain, temperature and bend sensing has also been shown. Using a fiber Bragg grating in an eccentric core polymer has been shown to yield a high sensitivity to bend.
Other recent progress includes the demonstration of birefringent photonic crystal fibers with zero polarimetric sensitivity to temperature, and a successful demonstration of transversal load sensing with fibre Bragg gratings in microstructured optic fibers.
The key areas where significant progress has been made are listed below:
Silica microstructured fibers for temperature-insensitive optical sensors - a new pressure-sensitive and temperature-insensitive optical fibre sensor has been developed. The sensor uses a fiber Bragg grating written into a microstructured fiber. The pressure sensitivity exceeds the state-of-the-art by a factor of 20, whilst the sensor is truly temperature-insensitive. The sensor is based on a novel design of a highly birefringent (10^−3) microstructured optical fibre that is designed to have a high pressure sensitivity (3.3 pm/bar) whilst at the same time exhibiting negligible temperature sensitivity (10^−2 pm/K); these figures are put in context in the sketch after this list. The fabrication method is compatible with conventional ultraviolet grating inscription setups for fiber Bragg grating manufacture. The temperature insensitivity was achieved by tailoring the design of the doped region in the core of the microstructured fiber via a series of design iterations.
Embedded optoelectronic devices - the possibility to integrate optical sources and photodetectors, compatible with the optical fibre sensors, has been developed within the PHOSFOS project. The optoelectronic components are thinned down by polishing until they are only 20 μm thick so that they become flexible themselves without compromising functionality. Thin optical sources and detectors are then embedded in optically clear polymers, and electrically contacted using well-established micro-via, metallization and patterning technologies.
Integrated sensors and optoelectronics - several different approaches for embedding optical fibre sensors in a flexible and stretchable host material, including injection molding, laser structuring, and soft lithography were considered. The influence of the embedding process was studied for silica and polymer fiber Bragg gratings. Temperature, humidity, strain, curvature and pressure sensitivities were fully characterized for different flexible host materials. An approach in which the embedded optoelectronic chips can be efficiently coupled towards the optical fiber sensors, using dedicated coupling structures, incorporating a 45˚ micromirror, as well as a fiber alignment groove was proposed. This allowed low cost components to be used in combination with well-established fabrication technologies, to demonstrate a truly low cost fully integrated sensing foil for biomedical applications.
Polymer fiber Bragg gratings - prior to the commencement of PHOSFOS, gratings in polymer optical fibre (POF) only existed in the 1550 nm spectral region where the large fibre loss (1 dB/cm) only permitted very short (<10 cm) fibre lengths to be used and the devices had to be butt-coupled to a silica fiber pigtail on the optical bench.
The PHOSFOS consortium has developed a means for reliably splicing POF to silica fibre and produced the first gratings in the 800 nm spectral region where losses are almost 2 orders of magnitude less than at 1550 nm. These developments have allowed POF grating sensors to be used outside the laboratory for the first time.
Wavelength multiplexed polymer fiber Bragg gratings - once the fiber connection issue was solved it was possible to fabricated the first ever wavelength division multiplexed (WDM) Bragg grating sensors in polymer optical fibre (POF). Moreover, by characterizing and using the thermal annealing properties of the fibre it was possible to shift the reflecting wavelength of a grating by over 20 nm, to enable multiple WDM sensors to be recorded with a single phase mask.
Femtosecond fiber Bragg gratings - using femtosecond lasers to inscribe fiber Bragg gratings in optical fibers, while also selectively inducing birefringence in the optical fibre at the same spatial location as the grating, has enabled the development of vectorial sensors.
Polymers for flexible skinlike materials - a series of polymer materials were developed that have inherent flexibility and tuneable mechanical strength. They are also visually transparent and are compatible with commercially available formulations. A great step forward was taken in developing novel monomers and prepolymers that supplement commercial formulations, and several novel formulations were created. Finally, a new optical fiber coating material that quickly cures on silica fibres under UV irradiation was also developed.
Sensing system for silica microstructured fibers for pressure and temperature sensing - the silica MSF based pressure sensor has great potential value in the field of downhole pressure monitoring within the oil and gas industry. In this application there is a need to monitor high pressures (range from 0 to 1000 bar) in combination with fast temperature variations. The ultralow temperature cross-sensitivity is therefore an important feature of this system.
Sensing system for multimode polymer fiber Bragg gratings - fiber Bragg grating sensors are commonly used for strain and temperature sensing but pressure sensing can be more challenging especially when space is limited. The PHOSFOS project consortium developed a new polymer multipoint FBG sensor that can measure the pressure in various medical applications. The fact that polymer fiber is used rather than silica fiber is beneficial in terms of patient safely. The low Young's modulus of polymer fiber improves the strain transfer from the surrounding medium to the sensors.
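The sensitivity figures quoted above make the benefit of temperature insensitivity easy to quantify; in the sketch below, the 0–1000 bar range comes from the downhole application described above, while the 100 K temperature swing is an assumed operating condition:

```python
# Figures quoted above: 3.3 pm/bar pressure sensitivity and ~10^-2 pm/K
# residual temperature cross-sensitivity of the microstructured-fibre FBG.
P_SENS_PM_PER_BAR = 3.3
T_SENS_PM_PER_K = 1e-2

full_scale_shift_pm = P_SENS_PM_PER_BAR * 1000   # 0 -> 1000 bar range
thermal_shift_pm = T_SENS_PM_PER_K * 100         # assumed 100 K swing

print(f"full-scale Bragg shift: {full_scale_shift_pm / 1000:.1f} nm")  # 3.3 nm
print(f"apparent pressure error from the 100 K swing: "
      f"{thermal_shift_pm / P_SENS_PM_PER_BAR:.2f} bar")               # ~0.30 bar
```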
Consortium
Vrije Universiteit Brussel
Interuniversitair Micro-Electronica Centrum VZW
Ghent University
Politechnika Wroclawska
Maria Curie-Skłodowska University
Aston University
Fiber Optic Sensors and Systems BVBA
Cyprus University of Technology
Astasense Limited
External links
https://web.archive.org/web/20111127030416/http://www.phosfos.eu/eng/Phosfos/About-us/Project-Summary
http://optics.org/cws/article/research/34671
http://spie.org/x38859.xml?highlight=x2406&ArticleID=x38859
http://spie.org/x39927.xml?ArticleID=x39927
http://www.fos-s.be/projectsadv/be-en/1/detail/item/604/cat/19/
https://web.archive.org/web/20110715161219/http://rdmag.com/News/2008/10/Optical-foils-could--be-basis-for-artificial-skin/
http://www.photonics.com/Article.aspx?AID=36120
http://www.opticalfibersensors.org/news/be-en/143/detail/item/1305/
http://www.ist-world.org/ProjectDetails.aspx?ProjectId=5959e74fdec54b57859fe30988c9add5&SourceDatabaseId=9900e74f1158484985c6bf0d2aa3cc2a
Open meetings
The 2nd "Benefits for Industry" Meeting of the EU FP7 Project PHOSFOS will take place on Sunday 22 May 2011 in Munich (Germany).
The meeting is co-located with the Industry Meets Academia Workshop organized by SPIE as part of the Optical Metrology Conference. It will be followed by the World of Photonics Congress and the Laser World of Photonics Trade Fair in Munich, in the week from 23 to 26 May 2011.
This Meeting is the second in its kind gathering all companies that have expressed their possible interest in the technology developed by the EU FP7 project PHOSFOS.
18 companies/institutes have registered for the Industrial User Club of PHOSFOS, new members are welcome.
References
Optical devices
Photonics | PHOSFOS | [
"Physics",
"Materials_science",
"Engineering",
"Biology"
] | 2,283 | [
"Biological engineering",
"Applied and interdisciplinary physics",
"Biomedical engineering",
"Materials science",
"Construction",
"Electronic engineering",
"Civil engineering",
"nan",
"Flexible electronics",
"Medical technology"
] |
25,907,412 | https://en.wikipedia.org/wiki/Track%20significance | Track significance, in high energy collision experiments, is defined as the ratio between the impact parameter of a track (distance from the primary vertex) and the estimated error in it.
Formula
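In symbols (a direct restatement of the definition above; the notation d0 for the impact parameter and σ(d0) for its estimated error is ours):

$$ S = \frac{d_0}{\sigma(d_0)} $$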
References
Experimental particle physics | Track significance | [
"Physics"
] | 42 | [
"Particle physics stubs",
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
25,910,081 | https://en.wikipedia.org/wiki/Knee%20arthritis | Arthritis of the knee is typically a particularly debilitating form of arthritis. The knee may become affected by almost any form of arthritis.
The word arthritis refers to inflammation of the joints. Types of arthritis include those related to wear and tear of cartilage, such as osteoarthritis, to those associated with inflammation resulting from an overactive immune system (such as rheumatoid arthritis).
Causes
It is not always certain why arthritis of the knee develops. The knee may become affected by almost any form of arthritis, including those related to mechanical damage of the structures of the knee (osteoarthritis and post-traumatic arthritis), various autoimmune forms of arthritis (including rheumatoid arthritis, juvenile arthritis, SLE-related arthritis, psoriatic arthritis, and ankylosing spondylitis), arthritis due to infectious causes (including Lyme disease-related arthritis), gouty arthritis, or reactive arthritis.
Osteoarthritis of the knee
The knee is one of the joints most commonly affected by osteoarthritis. Cartilage in the knee may begin to break down after sustained stress, leaving the bones of the knee rubbing against each other and resulting in osteoarthritis. Nearly a third of US citizens are affected by osteoarthritis of the knee by age 70.
Obesity is a known and very significant risk factor for the development of osteoarthritis. Risk increases proportionally to body weight. Obesity contributes to OA development, not only by increasing the mechanical stress exerted upon the knees when standing, but also leads to increased production of compounds that may cause joint inflammation.
Parity is associated with an increased risk of knee OA and likelihood of knee replacement. The risk increases in proportion to the number of children the woman has birthed. This may be due to weight gain after pregnancy, or increased body weight and consequent joint stress during pregnancy.
Flat feet are a significant risk factor for the development of osteoarthritis. Additionally, structural deformities, advanced age, female sex, past joint trauma, genetic predisposition, and certain at-risk occupations may all contribute to the development of osteoarthritis in general.
Lyme disease-related arthritis of the knee
The knee is often the first joint affected in Lyme disease.
Systemic lupus erythematosus
Arthritis is a common symptom of SLE. Arthritis is often symmetric and more often involves small joints. Though almost any joint may be affected, the knees and joints of the hands are most often involved in SLE. In larger joints (including the knee), avascular necrosis is a possible complication, leading to further pain and disability.
Reactive arthritis
Reactive arthritis often presents with lower limb oligoarthritis, including that of the knee.
Gout
Arthritis of a single joint of the lower extremities with rapid onset is highly suggestive of gouty arthritis. The knee may sometimes be affected. In cases of gouty arthritis of the knee, skin symptoms occur less often, however pain and swelling may be particularly intense.
Rheumatoid arthritis
RA most often first manifests as inflammation of particular finger or toe joints, however, pain and swelling of larger joints, including the knees, may also be the first sign.
Diagnosis
Osteoarthritis of the knee
Diagnosis of knee osteoarthritis often entails a physical examination, assessment of symptoms and the patient's medical history, but may also involve medical imaging and blood tests. Persistent knee pain, limited morning stiffness and reduced function, crepitus, restricted movement, and bony enlargement appear to be the most useful indications of knee osteoarthritis for diagnosis.
Standardized medical questionnaires like the Knee injury and Osteoarthritis Outcome Score (KOOS) or the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) can also be used to diagnose and monitor progression of knee osteoarthritis.
Management
A physician will recommend a treatment regimen based upon the severity of symptoms. General recommendations for the management of knee arthritis may include avoiding activities that aggravate the condition, and applying cold or warm packs and using ointments and creams to relieve symptoms.
Pharmaceutical
Pharmaceutical management is usually dependent upon the nature of the underlying condition causing arthritis. Over-the-counter medications like acetaminophen (paracetamol) and NSAIDs such as ibuprofen and naproxen are often used as first-line medical treatments for pain relief and/or managing inflammation. Corticosteroids may be injected directly into the joint cavity to provide more significant relief from inflammation, swelling, and pain. Other medications used in the management of arthritis of the knee include disease-modifying antirheumatic drugs, biopharmaceuticals, viscosupplementation (including hyaluronic acid injections), and glucosamine and chondroitin sulphate.
Hyaluronic acid is normally present in joints (including the knee), acting as lubricant and providing shock absorption, among other functions. In osteoarthritis, there is a loss of articular hyaluronic acid activity, likely contributing to pain and stiffness associated with the condition. Hyaluronic acid injections are an FDA-approved treatment for osteoarthritis of the knee, and are sometimes also used for other joints. However, the merits of HA injections are still disputed. HA injections are indicated when other medications fail to offer adequate symptom relief. Symptom relief associated with HA injections may last up to 2 years after an injection. HA injections appear to offer significant pain relief to some patients, while others may see no benefits at all. In severe osteoarthritis without much cartilage, the benefits of hyaluronic acid are not observed.
Orthotics
Supportive devices like knee braces can be used for symptom relief in osteoarthritis of the knee. Knee braces may however result in discomfort, skin irritation, swelling, and may not provide benefits to all. Using a cane, shock-absorbent footwear and inserts, elastic bandages, and knee sleeves may also be helpful for managing arthritis symptoms. Braces may be especially effective when only one knee is affected. Shoe insoles that are fitted to correct flat feet provide significant relief to those with severely flat feet. However, it has been found that insoles used to correct medial knee osteoarthritis (the more common form) may not offer much pain relief.
Lifestyle
Body weight
Obesity is a known and very significant risk factor for the development of osteoarthritis. Furthermore, losing weight reduces mechanical stress acting upon the knees when standing, possibly reducing pain and improving function in knee osteoarthritis. However, it is necessary to ascertain whether the patient is actually overweight before committing to weight loss as a management technique.
Exercise
Exercises can help increase range of motion and flexibility as well as help strengthen the muscles in the leg. Physical therapy and exercise are often effective in reducing pain and improving function. Compared with a patient-education program, aquatic exercise improved pain and function after eight weeks, and improved functional activities after twelve weeks. Including isokinetic quadriceps and hamstring strengthening exercises in rehabilitation packages for patients with knee osteoarthritis may also enhance quality of life and contribute to a decreased risk of falls. Land-based exercises that focus on the hip abductors improve performance and function in women with symptomatic knee osteoarthritis. A Cochrane review could not conclude whether high-intensity exercises provide better results than low-intensity exercises.
Surgical
Surgical intervention may be undertaken if no other management technique yields adequate relief. Surgical procedures may entail an arthroscopy (seldom used for sole osteoarthritis), osteotomy (performed only for unilateral early-stage osteoarthritis), or arthroplasty.
Knee replacement is the most definitive treatment for osteoarthritis-related symptoms and disability. It is a type of arthroplasty, and may involve either a partial or total replacement with a prosthesis.
Alternative medicine
Alternative medicine interventions undertaken for pain relief in arthritis of the knee include acupuncture, and magnetic pulse therapy.
Notes
Aging-associated diseases
Inflammations
Rheumatology
Skeletal disorders
Knee injuries and disorders | Knee arthritis | [
"Biology"
] | 1,800 | [
"Senescence",
"Aging-associated diseases"
] |
25,910,555 | https://en.wikipedia.org/wiki/Demonic%20composition | In mathematics, demonic composition is an operation on binary relations that is similar to the ordinary composition of relations but is robust to refinement of the relations into (partial) functions or injective relations.
Unlike ordinary composition of relations, demonic composition is not associative.
Definition
Suppose R is a binary relation between X and Y and S is a relation between Y and Z. Their right demonic composition is a relation between X and Z. Its graph is defined as
{(x, z) : x (S ∘ R) z and ∀y ∈ Y (x R y ⟹ y S z)},
where S ∘ R denotes the ordinary composition (R followed by S).
Conversely, their left demonic composition is defined by
{(x, z) : x (S ∘ R) z and ∀y ∈ Y (y S z ⟹ x R y)}.
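The definition is easy to check on small finite relations. Below is a minimal Python sketch (the function name demonic_compose is ours, not standard) implementing the right demonic composition as stated above, with relations represented as sets of pairs:

def demonic_compose(R, S):
    # R is a set of (x, y) pairs; S is a set of (y, z) pairs.
    ordinary = {(x, z) for (x, y) in R for (y2, z) in S if y == y2}
    # Keep (x, z) only if every R-successor y of x satisfies (y, z) in S.
    return {(x, z) for (x, z) in ordinary
            if all((y, z) in S for (x2, y) in R if x2 == x)}

print(demonic_compose({(1, 'a')}, {('a', 9)}))            # {(1, 9)}
print(demonic_compose({(1, 'a'), (1, 'b')}, {('a', 9)}))  # set(): 'b' has no continuation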
References
Algebraic logic
Binary operations
Mathematical relations | Demonic composition | [
"Mathematics"
] | 97 | [
"Mathematical analysis",
"Predicate logic",
"Mathematical logic",
"Binary operations",
"Binary relations",
"Fields of abstract algebra",
"Basic concepts in set theory",
"Mathematical relations",
"Algebraic logic"
] |
34,608,273 | https://en.wikipedia.org/wiki/Star-mesh%20transform | The star-mesh transform, or star-polygon transform, is a mathematical circuit analysis technique to transform a resistive network into an equivalent network with one less node. The equivalence follows from the Schur complement identity applied to the Kirchhoff matrix of the network.
The equivalent impedance between nodes A and B is given by:
zAB = zA zB Σi (1/zi)
where zA is the impedance between node A and the central node being removed, and the sum runs over the impedances of all N branches of the star.
The transform replaces N resistors with N(N − 1)/2 resistors. For N > 3, the result is an increase in the number of resistors, so the transform has no general inverse without additional constraints.
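To make the formula concrete, here is a minimal Python sketch (the function name star_to_mesh and the dictionary representation are our own choices, not from the source) that computes the mesh impedances replacing a star:

from itertools import combinations

def star_to_mesh(z_star):
    # z_star maps each outer node to its impedance toward the central node.
    total = sum(1.0 / z for z in z_star.values())  # sum of the 1/z_i terms
    # Each pair (a, b) of outer nodes gets z_a * z_b * sum(1/z_i).
    return {(a, b): z_star[a] * z_star[b] * total
            for a, b in combinations(sorted(z_star), 2)}

# N = 3 reproduces the Y-Δ transform: a star of three 1-ohm resistors
# becomes a mesh (delta) of three 3-ohm resistors.
print(star_to_mesh({'A': 1.0, 'B': 1.0, 'C': 1.0}))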
It is possible, though not necessarily efficient, to transform an arbitrarily complex two-terminal resistive network into a single equivalent resistor by repeatedly applying the star-mesh transform to eliminate each non-terminal node.
Special cases
When N is:
1 – For a single dangling resistor, the transform eliminates the resistor.
2 – For two resistors, the "star" is simply the two resistors in series, and the transform yields a single equivalent resistor.
3 – The special case of three resistors is better known as the Y-Δ transform. Since the result also has three resistors, this transform has an inverse Δ-Y transform.
See also
Topology of electrical circuits
Network analysis (electrical circuits)
References
E.B. Curtis, D. Ingerman, J.A. Morrow. Circular planar graphs and resistor networks. Linear Algebra and its Applications, Volume 283, Issues 1–3, 1 November 1998, pp. 115–150. doi:10.1016/S0024-3795(98)10087-3.
Electrical circuits
Circuit theorems
Transforms | Star-mesh transform | [
"Physics",
"Mathematics",
"Engineering"
] | 349 | [
"Functions and mappings",
"Equations of physics",
"Mathematical objects",
"Electronic engineering",
"Mathematical relations",
"Circuit theorems",
"Transforms",
"Electrical circuits",
"Electrical engineering",
"Physics theorems"
] |
34,610,793 | https://en.wikipedia.org/wiki/Crystallography%20Open%20Database | The Crystallography Open Database (COD) is a database of crystal structures. Unlike similar crystallographic databases, the database is entirely open-access, with registered users able to contribute published and unpublished structures of small molecules and small to medium-sized unit cell crystals to the database. As of November 2024, the database has more than 520,000 entries. The database has various contributors, and contains Crystallographic Information Files as defined by the International Union of Crystallography (IUCr). There are currently five sites worldwide that mirror this database. The 3D structures of compounds can be converted to input files for 3D printers.
See also
Crystallography
Crystallographic database
References
External links
https://www.crystallography.net
http://cod.ibt.lt/
https://archive.today/20130714225104/http://cod.ensicaen.fr/
https://qiserver.ugr.es/cod/
http://nanocrystallography.research.pdx.edu/search/codmirror/
https://nanocrystallography.research.pdx.edu
https://crystallography.io/
Crystallographic databases | Crystallography Open Database | [
"Chemistry",
"Materials_science"
] | 264 | [
"Crystallography stubs",
"Crystallographic databases",
"Crystallography",
"Materials science stubs"
] |
34,614,646 | https://en.wikipedia.org/wiki/IUPAC%20polymer%20nomenclature | IUPAC Polymer Nomenclature are standardized naming conventions for polymers set by the International Union of Pure and Applied Chemistry (IUPAC) and described in their publication "Compendium of Polymer Terminology and Nomenclature", which is also known as the "Purple Book". Both the IUPAC and Chemical Abstracts Service (CAS) make similar naming recommendations for the naming of polymers.
Basic Concepts
The terms polymer and macromolecule do not mean the same thing. A polymer is a substance composed of macromolecules. The latter usually have a range of molar masses (unit g mol−1), the distributions of which are indicated by dispersity (Đ). It is defined as the ratio of the mass-average molar mass (Mm) to the number-average molar mass (Mn) i.e. Đ = Mm/Mn. Symbols for physical quantities or variables are in italic font but those representing units or labels are in roman font.
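Because the dispersity is defined by a short computation, a minimal Python sketch may help; the function name and the example distribution of chains are invented for illustration:

def dispersity(chains):
    # chains: list of (molar mass in g/mol, number of molecules) pairs
    n = sum(count for _, count in chains)
    mass = sum(m * count for m, count in chains)
    m_n = mass / n                                          # number-average molar mass
    m_m = sum(m * m * count for m, count in chains) / mass  # mass-average molar mass
    return m_m / m_n

# A 50:50 mixture of 10 kg/mol and 50 kg/mol chains gives Đ ≈ 1.44.
print(dispersity([(10_000, 50), (50_000, 50)]))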
Polymer nomenclature usually applies to idealized representations meaning minor structural irregularities are ignored. A polymer can be named in one of two ways. Source-based nomenclature can be used when the monomer can be identified. Alternatively, more explicit structure-based nomenclature can be used when the polymer structure is proven. Where there is no confusion, some traditional names are also acceptable.
Whatever method is used, all polymer names have the prefix poly, followed by enclosing marks around the rest of the name. The marks are used in the order: {[( )]}. Locants indicate the position of structural features, e.g., poly(4-chlorostyrene). If the name is one word and has no locants, then the enclosing marks are not essential, but they should be used when there might be confusion, e.g., poly(chlorostyrene) is a polymer whereas polychlorostyrene might be a small, multi-substituted molecule. End-groups are described with α- and ω-, e.g., α-chloro-ω-hydroxy-polystyrene.
Source-Based Nomenclature
Homopolymers
Homopolymers are named using the name of the real or assumed monomer (the ‘source’) from which it is derived, e.g., poly(methyl methacrylate). Monomers can be named using IUPAC recommendations, or well-established traditional names. Should ambiguity arise, class names can be added. For example, the source-based name poly(vinyloxirane) could correspond to either of the structures shown. To clarify, the polymer is named using the polymer class name followed by a colon and the name of the monomer, i.e., class name:monomer name. Thus on the left and right, respectively, are polyalkylene:vinyloxirane and polyether:vinyloxirane.
Copolymers
The structure of a copolymer can be described using the most appropriate of the connectives shown in Table 1. These are written in italic font.
a The first name is that of the main chain.
Non-linear polymers
Non-linear polymers and copolymers, and polymer assemblies are named using the italicized qualifiers in Table 2. The qualifier, such as branch, is used as a prefix (P) when naming a (co)polymer, or as a connective (C), e.g., comb, between two polymer names.
a In accordance with IUPAC organic nomenclature, square brackets indicate the nature of the locant sites in fused ring systems.
Structure-Based Nomenclature
Regular single-strand organic polymers
In place of the monomer name used in source-based nomenclature, structure-based nomenclature uses that of the "preferred constitutional repeating unit" (CRU). It can be determined as follows:
A large enough part of the polymer chain is drawn to show the structural repetition.
Consider as an example:
The smallest repeating portion is a CRU, so all such possibilities are identified (including multiple directional possibilities for the chain).
For the preceding polymer, they are:
The subunits that make up each of these structures are identified, i.e., the largest divalent groups that can be named using IUPAC nomenclature of organic chemistry.
In the example, the two-carbon ethylidene unit is longer than two separate one-carbon methanediyl units.
Using the shortest path in order of decreasing precedence of subunits, the correct order of the subunits is determined using Figure 1.
In the example, the oxy subunits in the CRUs are heteroatom chains. From Figure 1, oxy subunits take precedence over acyclic carbon chain subunits.
The preferred CRU is chosen as that with the lowest possible locant(s) for substituents.
In the example, there is a bromo-substituted -CH2-CH2- subunit. 1-Bromoethane-1,2-diyl is chosen in preference to 2-bromoethane-1,2-diyl as the former has a lower locant for the bromo-substituent. The preferred CRU is therefore oxy(1-bromoethane-1,2-diyl) and the polymer is thus named poly[oxy(1-bromoethane-1,2-diyl)].
Polymers that are not made up of regular repetitions of a single CRU are called irregular polymers. For these, each constitutional unit (CU) is separated by a slash, e.g., poly(but-1-ene-1,4-diyl/1-vinylethane-1,2-diyl).
a To avoid ambiguity, wavy lines drawn perpendicular to the free bond, which are conventionally used to indicate free valences, are usually omitted from graphical representations in a polymer context.
Regular double-strand organic polymers
Double-strand polymers consist of uninterrupted chains of rings. In a spiro polymer, each ring has one atom in common with adjacent rings. In a ladder polymer, adjacent rings have two or more atoms in common. To identify the preferred CRU, the chain is broken so that the senior ring is retained with the maximum number of heteroatoms and the minimum number of free valences.
As an example, consider a ladder polymer whose preferred CRU is an acyclic subunit of 4 carbon atoms with 4 free valences, one at each atom. It is oriented so that the lower left atom has the lowest number. The free-valence locants are written before the suffix, and they are cited clockwise from the lower left position as: lower-left, upper-left:upper-right, lower-right. This example is thus named poly(butane-1,4:3,2-tetrayl). For more complex structures, the order of seniority again follows Figure 1.
Nomenclature of Inorganic and Inorganic-Organic Polymers
Some regular single-strand inorganic polymers can be named like organic polymers using the rules given above, e.g., [–O–Si(CH3)2–]n and [–Sn(CH3)2–]n are named poly[oxy(dimethylsilanediyl)] and poly(dimethylstannanediyl), respectively. Inorganic polymers can also be named in accordance with inorganic nomenclature, but the seniority of the elements is different from that in organic nomenclature. However, certain inorganic and inorganic-organic polymers, for example those containing metallocene derivatives, are at present best named using organic nomenclature, e.g., the polymer shown can be named poly[(dimethylsilanediyl)ferrocene-1,1'-diyl].
Traditional Names
When they fit into the general pattern of systematic nomenclature, some traditional and trivial names for polymers in common usage, such as polyethylene, polypropylene, and polystyrene, are retained.
Graphical Representations
The bonds between atoms can be omitted, but dashes should be drawn for chain-ends. The seniority of the subunits does not need to be followed. For single-strand (co)polymers, a dash is drawn through the enclosing marks, e.g., poly[oxy(ethane-1,2-diyl)] shown below left.
For irregular polymers, the CUs are separated by slashes, and the dashes are drawn inside the enclosing marks. End-groups are connected using additional dashes outside of the enclosing marks, e.g., α-methyl-ω-hydroxy-poly[oxirane-co-(methyloxirane)], shown below right.
CA Index Names
CAS maintains a registry of substances. In the CAS system, the CRU is called a structural repeating unit (SRU). There are minor differences in the placements of locants, e.g., poly(pyridine-3,5-diylthiophene-2,5-diyl) is poly(3,5-pyridinediyl-2,5-thiophenediyl) in the CAS registry, but otherwise polymers are named using similar methods to those of IUPAC.
References
External links
Pure and Applied Chemistry
Chemical Abstracts Service
Polymers
Chemical nomenclature | IUPAC polymer nomenclature | [
"Chemistry",
"Materials_science"
] | 1,940 | [
"Polymers",
"nan",
"Polymer chemistry"
] |
34,615,176 | https://en.wikipedia.org/wiki/Weak%20equivalence%20%28homotopy%20theory%29 | In mathematics, a weak equivalence is a notion from homotopy theory that in some sense identifies objects that have the same "shape". This notion is formalized in the axiomatic definition of a model category.
A model category is a category with classes of morphisms called weak equivalences, fibrations, and cofibrations, satisfying several axioms. The associated homotopy category of a model category has the same objects, but the morphisms are changed in order to make the weak equivalences into isomorphisms. It is a useful observation that the associated homotopy category depends only on the weak equivalences, not on the fibrations and cofibrations.
Topological spaces
Model categories were defined by Quillen as an axiomatization of homotopy theory that applies to topological spaces, but also to many other categories in algebra and geometry. The example that started the subject is the category of topological spaces with Serre fibrations as fibrations and weak homotopy equivalences as weak equivalences (the cofibrations for this model structure can be described as the retracts of relative cell complexes X ⊆ Y). By definition, a continuous mapping f: X → Y of spaces is called a weak homotopy equivalence if the induced function on sets of path components
π0(f): π0(X) → π0(Y)
is bijective, and for every point x in X and every n ≥ 1, the induced homomorphism
πn(f): πn(X, x) → πn(Y, f(x))
on homotopy groups is bijective. (For X and Y path-connected, the first condition is automatic, and it suffices to state the second condition for a single point x in X.)
For simply connected topological spaces X and Y, a map f: X → Y is a weak homotopy equivalence if and only if the induced homomorphism f*: Hn(X,Z) → Hn(Y,Z) on singular homology groups is bijective for all n. Likewise, for simply connected spaces X and Y, a map f: X → Y is a weak homotopy equivalence if and only if the pullback homomorphism f*: Hn(Y,Z) → Hn(X,Z) on singular cohomology is bijective for all n.
Example: Let X be the set of natural numbers {0, 1, 2, ...} and let Y be the set {0} ∪ {1, 1/2, 1/3, ...}, both with the subspace topology from the real line. Define f: X → Y by mapping 0 to 0 and n to 1/n for positive integers n. Then f is continuous, and in fact a weak homotopy equivalence, but it is not a homotopy equivalence.
The homotopy category of topological spaces (obtained by inverting the weak homotopy equivalences) greatly simplifies the category of topological spaces. Indeed, this homotopy category is equivalent to the category of CW complexes with morphisms being homotopy classes of continuous maps.
Many other model structures on the category of topological spaces have also been considered. For example, in the Strøm model structure on topological spaces, the fibrations are the Hurewicz fibrations and the weak equivalences are the homotopy equivalences.
Chain complexes
Some other important model categories involve chain complexes. Let A be a Grothendieck abelian category, for example the category of modules over a ring or the category of sheaves of abelian groups on a topological space. Define a category C(A) with objects the complexes X of objects in A,
⋯ → X1 → X0 → X−1 → ⋯,
and morphisms the chain maps. (It is equivalent to consider "cochain complexes" of objects of A, where the numbering is written as
⋯ → X−1 → X0 → X1 → ⋯,
simply by defining Xi = X−i.)
The category C(A) has a model structure in which the cofibrations are the monomorphisms and the weak equivalences are the quasi-isomorphisms. By definition, a chain map f: X → Y is a quasi-isomorphism if the induced homomorphism
Hn(f): Hn(X) → Hn(Y)
on homology is an isomorphism for all integers n. (Here Hn(X) is the object of A defined as the kernel of Xn → Xn−1 modulo the image of Xn+1 → Xn.) The resulting homotopy category is called the derived category D(A).
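In symbols, writing dn for the differentials (notation ours; the text above names only the objects), the parenthetical description of homology reads:

$$ H_n(X) = \ker(d_n : X_n \to X_{n-1}) \, / \, \operatorname{im}(d_{n+1} : X_{n+1} \to X_n). $$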
Trivial fibrations and trivial cofibrations
In any model category, a fibration that is also a weak equivalence is called a trivial (or acyclic) fibration. A cofibration that is also a weak equivalence is called a trivial (or acyclic) cofibration.
Notes
References
Homotopy theory
Homological algebra
Equivalence (mathematics) | Weak equivalence (homotopy theory) | [
"Mathematics"
] | 992 | [
"Fields of abstract algebra",
"Mathematical structures",
"Category theory",
"Homological algebra"
] |
30,581,229 | https://en.wikipedia.org/wiki/Rodin%20tool | The Rodin tool is a software tool for formal modelling in Event-B. It was developed as part of several collaborative European Union projects, including initially the RODIN project (2004–2007).
Overview
Event-B is a notation and method developed from the B-Method and is intended to be used with an incremental style of modelling. The idea of incremental modelling has been taken from programming: modern programming languages come with integrated development environment that make it easy to modify and improve programs. The Rodin tool provides such an environment for Event-B. Two characteristics of the Rodin tool are its ease of use and its extensibility.
The tool focuses on modelling. It allows the user to modify models and try out variations of a model. The tool is also extensible. This makes it possible to adapt the tool to specific needs, so the tool can be adapted to fit into existing development processes instead of demanding the opposite. There is an associated Event-B wiki.
Rodin ("Rigorous Open Development Environment for Complex Systems") is an extension of Eclipse IDE (Java-based). The Rodin Eclipse Builder manages the following:
Well-formedness and type checker
Proof obligation (PO) generator
Proof manager (PM)
Propagation of changes
Rodin Proof Manager (PM)
PM constructs a proof tree for each PO
Automatic and interactive modes
PM manages used hypotheses
PM calls reasoners to:
discharge goal, or
split goal into subgoals
Collection of reasoners:
simplifier, rule-based reasoners, and decision procedures
Basic tactics language to define PM and reasoners
Industrial applications and case studies
The Rodin project included five industrial case studies that served to validate the toolset and helped with the elaboration of an appropriate methodology for using the tools. The case studies were led by industrial partners of the Rodin project, supported by the other partners. The case studies were as follows:
A failure management system for an engine controller;
Part of a platform for mobile Internet technology;
Engineering of communications protocols;
An air-traffic display system;
An ambient campus application.
Some available plug-ins for Rodin
B4free provers
Provider: ClearSy
Function: Theorem provers
UML-B
Provider: University of Southampton
Function: UML-like graphical front-end for Event-B supporting class diagrams and state charts
ProB
Provider: University of Düsseldorf
Function: Animation and Model-checking of Event-B models; Counterexamples for false proof goals, in particular, proof obligations
Brama
Provider: ClearSy
Function: Animation of B models. The purpose is twofold:
Experimentation with a model to observe states and transitions
Flash animation of Event-B models
Modularisation
Provider: Newcastle University
Function: Structuring Event-B developments into logical units of modelling, called modules; Model composition; Model reuse
References
Further reading
Jean-Raymond Abrial. The B-Book: Assigning Programs to Meanings. Cambridge University Press, 1996.
Jean-Raymond Abrial, Michael Butler, Stefan Hallerstede, and Laurent Voisin. An open extensible tool environment for Event-B. In Z. Liu and J. He, editors, ICFEM 2006, LNCS, volume 4260, pages 588–605. Springer, 2006.
Abdolbaghi Rezazadeh, Neil Evans, and Michael Butler. Redevelopment of an Industrial, Case Study Using Event-B and Rodin. In BCS-FACS Christmas 2007 Meeting, 2007.
RODIN. Deliverable D18: Intermediate report on case study developments.
Michael Butler and Stefan Hallerstede, The Rodin Formal Modelling Tool, EU Research Project IST 511599 RODIN.
Eclipse platform homepage.
External links
Event-B and the Rodin Platform
Event-B and Rodin Documentation Wiki
Rodin on SourceForge
2007 software
Formal methods tools
Formal specification languages | Rodin tool | [
"Mathematics"
] | 795 | [
"Formal methods tools",
"Mathematical software"
] |
30,586,932 | https://en.wikipedia.org/wiki/Antifragility | Antifragility is a property of systems in which they increase in capability to thrive as a result of stressors, shocks, volatility, noise, mistakes, faults, attacks, or failures. The concept was developed by Nassim Nicholas Taleb in his book, Antifragile, and in technical papers. As Taleb explains in his book, antifragility is fundamentally different from the concepts of resiliency (i.e. the ability to recover from failure) and robustness (that is, the ability to resist failure). The concept has been applied in risk analysis, physics, molecular biology, transportation planning, engineering, aerospace (NASA), and computer science.
Taleb defines it as follows in a letter to Nature responding to an earlier review of his book in that journal:
Antifragile versus robust/resilient
In his book, Taleb stresses the differences between antifragile and robust/resilient. "Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better." The concept has now been applied to ecosystems in a rigorous way. In their work, the authors review the concept of ecosystem resilience in its relation to ecosystem integrity from an information theory approach. This work reformulates and builds upon the concept of resilience in a way that is mathematically conveyed and can be heuristically evaluated in real-world applications: for example, ecosystem antifragility. The authors also propose that for socio-ecosystem governance, planning, or any decision-making perspective in general, antifragility might be a more valuable and desirable goal to achieve than a resilience aspiration. In the same way, Pineda and co-workers have proposed a simply calculable measure of antifragility, based on the change of "satisfaction" (i.e., network complexity) before and after adding perturbations, and apply it to random Boolean networks (RBNs). They also show that several well-known biological networks, such as the Arabidopsis thaliana cell cycle, are, as expected, antifragile.
Antifragile versus adaptive/cognitive
An adaptive system is one that changes its behavior based on information available at time of utilization (as opposed to having the behavior defined during system design). This characteristic is sometimes referred to as cognitive. While adaptive systems allow for robustness under a variety of scenarios (often unknown during system design), they are not necessarily antifragile. In other words, the difference between adaptive and antifragile is the difference between a system that is robust under volatile environments/conditions, and one that is robust in a previously unknown environment.
Mathematical heuristic
Taleb proposed a simple heuristic for detecting fragility. If f(x) is some model of an input x, then (taking larger outcomes to be better) fragility exists when f̄ < f(x), robustness exists when f̄ ≈ f(x), and antifragility exists when f̄ > f(x), where
f̄ = (f(x + Δx) + f(x − Δx)) / 2.
In short, the heuristic is to adjust a model input higher and lower. If the average outcome of the model after the adjustments is significantly worse than the model baseline, then the model is fragile with respect to that input.
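A minimal Python sketch of this heuristic (names are ours; following the text above, it treats larger model outputs as better):

def convexity_probe(f, x, dx):
    # Average outcome after perturbing the input up and down.
    f_bar = (f(x + dx) + f(x - dx)) / 2.0
    return f_bar - f(x)  # < 0: fragile, ≈ 0: robust, > 0: antifragile

# A concave payoff is fragile to volatility; a convex one is antifragile.
print(convexity_probe(lambda v: -v**2, 1.0, 0.5))  # -0.25 -> fragile
print(convexity_probe(lambda v: v**2, 1.0, 0.5))   # +0.25 -> antifragile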
Applications
The concept has been applied in business and management, physics, risk analysis, molecular biology, transportation planning, urban planning, engineering, aerospace (NASA), computer science, and water system design.
In computer science, there is a structured proposal for an "Antifragile Software Manifesto", reacting to traditional system designs. The major idea is to develop antifragility by design, building a system that improves from its environment's input.
See also
Complexity theory and organizations
Hygiene hypothesis
Information management
Structures of organizations
Nodal organizational structure
Systems theory
Systems engineering
System accident
References
Systems theory
Risk management
Nassim Nicholas Taleb | Antifragility | [
"Engineering"
] | 783 | [
"Systems engineering",
"Reliability engineering"
] |
30,594,071 | https://en.wikipedia.org/wiki/Alternative%20Splicing%20and%20Transcript%20Diversity%20database | The Alternative Splicing and Transcript Diversity database (ASTD) was a database of transcript variants maintained by the European Bioinformatics Institute from 2008 to 2012. It contained transcription initiation, polyadenylation and splicing variant data.
See also
Alternative Splicing Annotation Project
AspicDB
RNA splicing
References
External links
https://web.archive.org/web/20111227225355/http://www.ebi.ac.uk/asd/
Genetics databases
Gene expression
RNA splicing
Science and technology in Cambridgeshire
South Cambridgeshire District | Alternative Splicing and Transcript Diversity database | [
"Chemistry",
"Biology"
] | 123 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
30,594,108 | https://en.wikipedia.org/wiki/ChimerDB | ChimerDB in computational biology is a database of fusion sequences.
ChimerDB currently consists of three searchable datasets.
ChimerKB is a curated knowledge base of 1,066 fusion genes sourced from publicly available scientific literature.
ChimerPub provides continuously updated descriptions of fusion genes, text-mined from publications.
ChimerSeq is a database of RNA-seq data of fusion sequences downloaded from the TCGA data portal.
See also
ECgene
Fusion gene
References
External links
http://203.255.191.229:8080/chimerdbv31/mindex.cdb
Biological databases
Genes
Gene expression | ChimerDB | [
"Chemistry",
"Biology"
] | 132 | [
"Gene expression",
"Bioinformatics",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Biological databases"
] |
30,594,153 | https://en.wikipedia.org/wiki/DBASS3/5 | DBASS3 and DBASS5 in computational biology is a database of new exon boundaries induced by pathogenic mutations in human disease genes.
The database has been used in a large number of studies; Google Scholar has 87 entries for papers using DBASS3, and 80 for papers using DBASS5, including:
Vallée MP, Di Sera TL, Nix DA, Paquette AM, Parsons MT, Bell R, Hoffman A, Hogervorst FB, Goldgar DE, Spurdle AB, Tavtigian SV. Adding in silico assessment of potential splice aberration to the integrated evaluation of BRCA gene unclassified variants. Human Mutation. 2016 Jul;37(7):627-39.
Dhir A, Buratti E. Alternative splicing: role of pseudoexons in human disease and potential therapeutic strategies. The FEBS Journal. 2010 Feb 1;277(4):841-55.
Vallée MP, Francy TC, Judkins MK, Babikyan D, Lesueur F, Gammon A, Goldgar DE, Couch FJ, Tavtigian SV. Classification of missense substitutions in the BRCA genes: A database dedicated to Ex‐UVs. Human mutation. 2012 Jan;33(1):22-8.
Churbanov A, Vořechovský I, Hicks C. A method of predicting changes in human gene splicing induced by genetic variants in context of cis-acting elements. BMC Bioinformatics. 2010 Dec;11(1):1-2.
Wang J, Zhang J, Li K, Zhao W, Cui Q. SpliceDisease database: linking RNA splicing and disease. Nucleic Acids Research. 2012 Jan 1;40(D1):D1055-9.
See also
Human diseases markers
References
External links
http://www.dbass.org.uk/
Biological databases
Gene expression
RNA
RNA splicing
Spliceosome | DBASS3/5 | [
"Chemistry",
"Biology"
] | 419 | [
"Gene expression",
"Bioinformatics",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Biological databases"
] |
30,594,193 | https://en.wikipedia.org/wiki/Maneb | Maneb (manganese ethylene-bis-dithiocarbamate) is a fungicide and a polymeric complex of manganese with the ethylene bis (dithiocarbamate) anionic ligand.
Health effects
Exposure to maneb can occur when it is breathed in; it can irritate the eyes, nose, and throat and can cause headache, fatigue, nervousness, dizziness, seizures, and even unconsciousness. Prolonged or long-term exposure may interfere with the function of the thyroid. Exposure to maneb has also been shown to induce a Parkinson's disease-like neurotoxicity in mice. Whether maneb, along with paraquat, is an environmental risk factor for Parkinson's disease remains disputed.
Production
Manganese(II) ethylenebis(dithiocarbamate) of low ethylenethiourea (ETU) content is prepared by mixing disodium ethylenebis(dithiocarbamate) with formaldehyde in aqueous medium, then mixing in a water-soluble manganese(II) salt to precipitate the maneb. The product can be further formulated with a metal salt and also with paraformaldehyde. (See External links for the patent citation.)
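Schematically, the precipitation step can be written as follows, with C4H6N2S4 denoting the ethylenebis(dithiocarbamate) dianion and MnX2 a generic water-soluble manganese(II) salt (the anion X is our placeholder, as the description above does not specify it):
Na2(C4H6N2S4) + MnX2 → Mn(C4H6N2S4) + 2 NaX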
Applications
Maneb is a broad-spectrum fungicide that is extensively applied against a wide range of fungal pathogens affecting ornamental plants, food, and feed crops. It can also be used to create a toxin-based animal model of Parkinson's disease, usually in primates.
Environmental effects
Regulation
It was included in a pesticide ban proposed by the Swedish Chemicals Agency and approved by the European Parliament on January 13, 2009.
See also
Metam sodium - A related dithiocarbamate salt which is also used as a fungicide.
Zineb - ethylene bis(dithiocarbamate) with zinc instead of manganese.
Mancozeb - A common fungicide containing Zineb and Maneb.
References
External links
Aldehyde dehydrogenase inhibitors
Dithiocarbamates
Fungicides
Manganese(II) compounds
Monoaminergic neurotoxins
Polymers | Maneb | [
"Chemistry",
"Materials_science",
"Biology"
] | 442 | [
"Fungicides",
"Dithiocarbamates",
"Functional groups",
"Polymer chemistry",
"Polymers",
"Biocides"
] |
31,597,935 | https://en.wikipedia.org/wiki/Glossary%20of%20quantum%20philosophy | This is a glossary for the terminology applied in the foundations of quantum mechanics and quantum metaphysics, collectively called quantum philosophy, a subfield of philosophy of physics.
Note that this is a highly debated field, hence different researchers may have different definitions on the terms.
Physics
Non-classical properties of quantum mechanics
nonseparability
See also: entangled
Nonlocality
Superposition of states
See also: Schrödinger's cat
Quantum phenomena
decoherence
uncertainty principle
See also: Einstein and the quantum
entanglement
See also: Bell's theorem, EPR paradox and CHSH inequality
quantum teleportation
superselection rule
quantum erasure
delayed choice experiment
Quantum Zeno effect
premeasurement
ideal measurement
Suggested physical entities
hidden variables
ensemble
Terms used in the formalism of quantum mechanics
Born's rule
collapse postulate
measurement
relative state
decoherent histories
Metaphysics
objective and subjective
ontic and epistemic
intrinsic and extrinsic
agnostic
Philosophical realism
determinism
causality
empiricism
rationalism
scientific realism
psychophysical parallelism
Interpretations of quantum mechanics
List of interpretations:
Bohmian Mechanics
de Broglie–Bohm theory
consistent histories
Copenhagen interpretation
conventional interpretation
Usually refers to the Copenhagen interpretation.
Ensemble Interpretation
Everett interpretation
See relative-state interpretation.
hydrodynamic interpretation
Ghirardi–Rimini–Weber theory (GRW theory / GRW effect)
many-worlds interpretation
many-minds interpretation
many-measurements interpretation
modal interpretations
objective collapse theory
orthodox interpretation
Usually refers to the Copenhagen interpretation.
Penrose interpretation
Pilot wave
Quantum logic
relative-state interpretation
relational quantum mechanics
stochastic interpretation
transactional interpretation
Uncategorized items
quantum Darwinism
completeness
relativistic measurement theory
consciousness and observer role
quantum correlation
quantum indeterminism
stochastic collapse
pointer state
quantum causality
postselection
entropy
quantum cosmology
People
Early researchers (before the 1950s):
Max Born
Albert Einstein
Niels Bohr
J. S. Bell
Hugh Everett III
David Bohm
1950s–2010s:
Roland Omnès
W. H. Zurek
Erich Joos
Max Tegmark
Maximilian Schlosshauer
H. D. Zeh
David Deutsch
Robert B. Griffiths
Bernard d'Espagnat
Carl von Weizsäcker
2000s or later:
Bob Coecke
Robert Spekkens
See also
time arrow
quantum chaos
probability interpretations
relative frequency approach
probability theory as extended logic, decision theory
history of quantum mechanics
Further reading
External links
https://web.archive.org/web/20121004044323/http://users.ox.ac.uk/~mert0130/teaching/qmreading.doc
Philosophy of Physics - A guide for the aspiring researcher
Quantum Mechanics (Stanford Encyclopedia of Philosophy)
Quantum mechanics
quantum philosophy, Glossary of
Wikipedia glossaries using description lists | Glossary of quantum philosophy | [
"Physics"
] | 570 | [
"Theoretical physics",
"Quantum mechanics"
] |
31,600,380 | https://en.wikipedia.org/wiki/Digital%20dividend%20after%20digital%20television%20transition | The digital dividend refers to the radio spectrum which is released in the process of digital television transition. When television broadcasters switch from analog TV to digital-only platforms, part of the electromagnetic spectrum that has been used for broadcasting will be freed-up because digital television needs less spectrum than analog television, due to lossy compression. One reason is that new digital video compression technology can transmit numerous digital subchannels using the same amount of spectrum used to transmit one analog TV channel. However, the primary reason is that digital transmissions require much less of a guard band on either side, since they are not nearly as prone to RF interference from adjacent channels. Because of this, there is no longer any need to leave empty channels to protect stations from each other, in turn allowing stations to be repacked into fewer channels, leaving more contiguous spectrum to be allocated for other wireless services.
The digital dividend is usually located at frequencies from 174 to 230 MHz (VHF) and from 470 to 862 MHz (UHF). However, the location and size of the digital dividend vary among countries due to factors including geographical position and the penetration of satellite/cable services.
As a result of the technological transition, a significant number of governments are now planning for or allocating their digital dividends. For example, the United States completed its transition on 12 June 2009 and auctioned the spectrum. Meanwhile, Australia is still planning for it.
Potential uses
In countries where the digital television transition has not yet finished, over-the-air broadcasting services are still using radio-frequency spectrum in what is known as the Very High Frequency (VHF) and Ultra High Frequency (UHF) bands.
After the completion of the digital transition, part of this spectrum will be released as a digital dividend to provide a range of new communication services. Proposed utilization of the released spectrum includes:
Digital Terrestrial TV
Advanced Mobile Services
Broadcast Mobile TV
Commercial Wireless Broadband services, both to fixed locations and mobile devices
Wireless Broadband services for public safety and disaster relief (PPDR)
Services ancillary to broadcasting and programming (SAB/SAP)
Analog television spectrum in the UHF bands is valuable to potential purchasers because of its ability to carry signals over long distances, penetrate buildings and carry large amounts of data.
Many countries favour using a part of the digital dividend for electronic communications services, such as mobile communications and wireless broadband. These new services would utilize the upper part of the UHF band (790–862 MHz).
Allocation for mobile services
One proposal for utilizing the digital dividend is to develop an international mobile service using frequencies released after the completion of the digital transition worldwide. This part of the spectrum is suitable for 3G mobile telecommunication services. However, it would be difficult to fully realize the potential of the digital dividend because countries will not finish the switch to digital TV simultaneously. Further, it is an issue involving factors such as topography, penetration of satellite/cable services, the requirements for regional service, and spectrum usage in neighboring countries.
In 2007, the World Radiocommunication Conference revised the allocation of a portion of UHF spectrum for mobile broadband services and advanced mobile services. Although the allocations set a framework, they do not dictate how member countries should allocate digital dividend spectrum; rather, countries can take national requirements into consideration. Countries in global regions one and three, such as Europe, the Middle East, Africa, Russia, Asia and Australia, use the spectrum range 790–960 MHz, which is one of the bands dedicated to the roll-out of international mobile telecommunications (IMT). Some countries in global region three, such as Bangladesh, China, Korea, India, Japan, New Zealand, Papua New Guinea, Philippines, and Singapore, identified the band or portions of the band 698–790 MHz for the implementation of IMT. Meanwhile, the spectrum 698–960 MHz was planned for implementation of IMT in global region two, the Americas. However, part of the spectrum, 806–960 MHz, is already used for mobile telecommunications in global region two.
Benefits of mobile broadband use
Experts see several benefits of using the freed spectrum for mobile broadband because it is cheaper than fixed broadband to provide last mile connectivity.
It could facilitate economic development. McKinsey's report suggested that every 10% increase in household broadband penetration brings a 1.4% increase in GDP growth, and GDP growth usually leads to job creation. Because mobile penetration is much higher than PC penetration, mobile broadband can help overall broadband penetration increase faster.
Mobile broadband could be used to serve people living in rural areas and low-income groups, such as farmers. It could provide them with medical, educational and other general information.
For bridging the digital divide
Some researchers argued that the digital dividend could provide an opportunity to bridge the digital divide. They argued that because of the characteristics of this spectrum, fewer radio stations are needed to cover a given area. Therefore, the cost to provide broadband in remote rural areas could be significantly reduced. In the past, a profitable roll-out of fixed-line broadband infrastructure was not feasible because the necessary investments to cover the long distances were too high. However, wireless broadband using the spectrum of the digital dividend offers an opportunity to overcome the digital divide.
But not all of them agree with this point. Those who disagree argue that this approach suffers from a trade-off between reach and speed that significantly limits the scalability of the network. If transmission demands grow at current rates, a wireless broadband access network covering large areas will likely be outdated before being deployed. Therefore, the digital divide will still exist.
Digital dividend policies around the world
United States
700 MHz Auction
The chunk of spectrum being freed from broadcasting was auctioned for commercial uses in the U.S. The 700 MHz auction, known officially as Auction 73, concluded with 1090 provisionally winning bids covering 1091 licenses and raised a total of $19,120,378,000 in winning bids and $18,957,582,150 in net winning bids.
Due to its physical attributes, the 700 MHz band penetrates walls fairly easily. This makes this chunk of spectrum well suited to either cellular or long-range wireless broadband. A telco could build a powerful wireless network by holding it; an ISP could also profit substantially from it.
The auction went the following way: part of the available spectrum, which spans 698–806 MHz, had already been auctioned off, and some of it was reserved for the nationwide public safety broadband network to be constructed over the following years. The remaining spectrum was divided up into A, B, C, D, and E blocks, regionally. Every bidder was supposed to secretly declare their intent to the FCC before the auction, but Google did not comply and kept its intentions secret. Winners picked up their allocations around two months after the auction.
There was a lot of excitement over the auction of the C and D blocks. Block C was the prime spectrum that Verizon and others, such as Google, were interested in. This block covered two 11 MHz chunks of spectrum that could be bid on together, making 22 MHz available. Besides the identity of the bidders, the other reason this block drew attention was that Google convinced the FCC to load it up with two open-access provisions: (i) the winner has to make the network open so any "safe" device can use it; (ii) they have to make their own networked devices open as well.
The block D licenses covers two 5 MHz sections for a total of 10 MHz. This block is special compared with other blocks because the winners must be part of the Public Safety/Private Partnership established by the FCC. Therefore, the winner's wireless network has to be good enough to meet public safety specifications for coverage and redundancy. Furthermore, the winner could operate two public safety portions, 10 MHz, as a commercial network. Commercial traffic can also be carried over the public safety portion of the network as long as it is not being utilized.
The blocks A, B, and E covered 30 MHz in total. The licenses for each of the blocks were only good for small geographic areas. The FCC's intention was that the winners would use the spectrum for regional or rural area telecommunication services.
There were some other conditions for the winners of blocks A, B, C, and E. For blocks A, B and E, winners needed to cover at least 35 percent of the territory of their license within four years, and a full 70 percent of the territory within 10 years. For block C the winners were also required to cover 40 percent of the territory within four years, 75 percent coverage within 10 years. The FCC will automatically reclaim "unserved portions of the license area" from companies that do not meet the build-out requirements.
Not all of the licenses were sold. On April 15, 2011, the FCC announced that it would hold Auction 92 on July 19, 2011, to sell the remaining available 700 MHz licenses. The results for all of the blocks were as follows:
Block A
Verizon Wireless and U.S. Cellular Corp. got 25 licenses each. But no one company dominated the A-Block. Other notable winners included CenturyTel Inc. and Cellular South. Verizon Wireless's licenses mainly covered densely populated urban areas, while U.S. Cellular was focused on the ones for the Midwest, Northeast and Northwest.
Block B
AT&T Mobility got one-third of the available licenses, spending $6.6 billion. U.S. Cellular snapped up 127 licenses near its current market clusters in the Midwest, Northeast and Northwest. Verizon Wireless scooped up 77 B-Block licenses.
Block C
Verizon Wireless paid more than $4.6 billion for licenses covering the contiguous 48 states as well as Hawaii. Triad 700 L.L.C. picked up C-Block licenses covering Alaska, Puerto Rico and the U.S. Virgin Islands, while Small Ventures USA L.P. bought the license for the C Block covering the Gulf of Mexico.
Block D
No licenses were sold in this block and new auctions were scheduled to sell it.
Block E
EchoStar Corp, satellite television provider, picked up 168 of the total 176 E-Block licenses for more than $711 million. Qualcomm also bought 5 licenses covering markets in California, Arizona and the Northeast.
The winners
Of the 214 applicants approved to bid in the FCC's auction, 101 walked away with spectrum.
Verizon Wireless's winnings covered seven of the 10 regional C-Block licenses, as well as 77 licenses in the B Block and 25 licenses in the A Block.
AT&T Mobility spent $6.64 billion for 227 B-Block licenses.
The remaining C-block licenses were won by a number of operators:
Triad 700 L.L.C. took the Alaska and Puerto Rico/U.S. Virgin Islands licenses and Small Ventures USA L.P. took the Gulf of Mexico license.
Qualcomm Inc. won nine licenses for a total of around $500 million: B-Block licenses covering Yuba City and Imperial, Calif., and Hunterdon, N.J.; and five E Block licenses.
MetroPCS Communications Inc. scored only one license, the A Block for Boston, for $313 million.
Chevron walked away with the A, B and E blocks covering the Gulf of Mexico, most likely for the company's oil operations there.
Cavalier Wireless spent $61.8 million for licenses across the A and B Blocks.
Cox Communications won $304 million in the A and B blocks.
U.S. Cellular won 152 licenses for $401 million in the A and B Block.
Regional telecom operator CenturyTel Inc. spent close to $150 million for 48 B-Block licenses and 21 A-Block licenses.
Vulcan Ventures, owned by Microsoft co-founder Paul Allen, won two licenses, in the Seattle/Tacoma and Portland/Salem markets, for $112 million.
Regional wireless provider Cellular South spent $191.5 million on A- and B-Block licenses.
600 MHz Auction
Europe
The European Union has not yet worked out a specific plan for how to use the freed spectrum. According to the visions of trans-European 4G mobile wireless, economists predict that it will bring €50 billion in economic growth. To realize the full benefit of the digital dividend, the European countries have at least two steps to take: first, all member states of the European Union are requested to make sure that the switchover to digital transmission technologies is completed by 1 January 2012, so that the complete digital dividend becomes available for communication services. Second, member states shall use a standardized regulatory framework to ensure the use of the 790–862 MHz sub-band for electronic communications services throughout the EU (European Commission 2009).
The EU published a schedule table in the 2009 report for the European Commission, 'Exploiting the digital dividend – a European approach'.
Table: Spectrum auction plans for the digital dividend and left-over 2G and 3G spectrum in 2009
The digital television transition has since been completed in all of the member states of the European Union.
Canada
In Canada, the digital dividend auction was held in 2014 to allocate 108 MHz of spectrum from 698 to 804 MHz. This followed a consultation by Industry Canada on whether to abolish or at least scale back restrictions on foreign investment considered among the most restrictive in the G8 group of countries.
Australia
In Australia an independent engineering consultancy firm, Kordia Pty Ltd, was commissioned by the Department of Broadband, Communications and the Digital Economy to identify issues and options for releasing spectrum after analog television is switched off.
Kordia found that it is possible for 126 MHz of UHF spectrum to be released as a digital dividend.
The government also has some other factors to consider:
The uncertainty of future uses of spectrum;
Australia needs to align spectrum allocations with other major developed countries;
Public interest.
Based on these political and technological considerations, the Australian government's target digital dividend is 126 MHz of contiguous UHF spectrum.
See also
Spectrum auction
White spaces (radio)
White spaces (database)
References
External links
FCC - 700 MHz Band
The 700MHz spectrum band: market drivers and harmonisation challenges worldwide
Broadcast engineering
Digital television
Radio spectrum | Digital dividend after digital television transition | [
"Physics",
"Engineering"
] | 2,938 | [
"Broadcast engineering",
"Radio spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Electronic engineering"
] |
31,601,451 | https://en.wikipedia.org/wiki/Brus%20equation | The Brus equation or confinement energy equation can be used to describe the emission energy of quantum dot semiconductor nanocrystals in terms of the band gap energy Egap, the Planck constant h, the radius of the quantum dot r, and the effective masses of the excited electron me* and of the excited hole mh*. The equation is named after Louis E. Brus, who independently discovered it.
The radius of the quantum dot affects the wavelength of the emitted light due to quantum confinement, and this equation describes the effect of changing the radius of the quantum dot on the wavelength λ of the emitted light (and thereby on the emission energy E = hc/λ, where c is the speed of light). This is useful for calculating the radius of a quantum dot from experimentally determined parameters.
The overall equation is
E = Egap + (h²/8r²)(1/me* + 1/mh*).
Egap, me*, and mh* are unique for each nanocrystal composition.
For example, with cadmium selenide (CdSe) nanocrystals:
Egap (CdSe) = 1.74 eV = 2.79×10⁻¹⁹ J,
me* (CdSe) = 0.13 me = 1.18×10⁻³¹ kg,
mh* (CdSe) = 0.45 me = 4.10×10⁻³¹ kg.
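A minimal Python sketch (not from the article) that evaluates the Brus equation with the CdSe parameters above and converts the resulting emission energy to a wavelength; the radii are arbitrary illustrative values.

```python
H = 6.62607015e-34       # Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
M_E = 9.1093837015e-31   # electron rest mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def brus_emission_energy(e_gap, r, me_eff, mh_eff):
    """Emission energy in joules: E = Egap + (h^2 / 8 r^2)(1/me* + 1/mh*)."""
    confinement = (H ** 2 / (8.0 * r ** 2)) * (1.0 / me_eff + 1.0 / mh_eff)
    return e_gap + confinement

# CdSe parameters quoted in the text
e_gap = 1.74 * EV
me_eff = 0.13 * M_E
mh_eff = 0.45 * M_E

for r_nm in (2.0, 3.0, 5.0):
    e = brus_emission_energy(e_gap, r_nm * 1e-9, me_eff, mh_eff)
    wavelength_nm = H * C / e * 1e9
    print(f"r = {r_nm} nm: E = {e / EV:.2f} eV, lambda = {wavelength_nm:.0f} nm")
```

As the equation predicts, shrinking the dot raises the emission energy and blue-shifts the emitted light.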
References
Quantum dots
Nanoparticles
Quantum electronics | Brus equation | [
"Physics",
"Materials_science"
] | 249 | [
"Nanotechnology",
"Quantum mechanics",
"Condensed matter physics",
"Quantum electronics"
] |
31,601,913 | https://en.wikipedia.org/wiki/Enzyme%20Function%20Initiative | The Enzyme Function Initiative (EFI) is a large-scale collaborative project aiming to develop and disseminate a robust strategy to determine enzyme function through an integrated sequence–structure-based approach. The project was funded in May 2010 by the National Institute of General Medical Sciences as a Glue Grant, which supports research into complex biological problems that cannot be solved by a single research group. The EFI was largely spurred by the need to develop methods to identify the functions of the enormous number of proteins discovered through genomic sequencing projects.
Motivation
The dramatic increase in genome sequencing capacity has caused the number of protein sequences deposited in public databases to grow at an apparently exponential rate. To cope with the influx of sequences, databases use computational predictions to auto-annotate the functions of individual proteins. While these computational methods have the advantages of being extremely high-throughput and generally providing accurate broad classifications, their exclusive use has led to a significant level of misannotation of enzyme function in protein databases. Thus, although the information now available represents an unprecedented opportunity to understand cellular metabolism across a wide variety of organisms, including the ability to identify molecules and/or reactions that may benefit human quality of life, this potential has not been fully realized. The biological community's ability to characterize newly discovered proteins has been outstripped by the rate of genome sequencing, and the task of assigning function is now considered the rate-limiting step in understanding biological systems in detail.
Integrated strategy for functional assignment
The EFI is developing an integrated sequence–structure based strategy for functional assignment by predicting the substrate specificities of unknown members of mechanistically diverse enzyme superfamilies. The approach leverages conserved features within a given superfamily, such as known chemistry, the identity of active site functional groups, and the composition of specificity-determining residues, motifs, or structures, to predict function, but relies on multidisciplinary expertise to streamline, refine, and test the predictions. The integrated sequence–structure strategy under development will be generally applicable to deciphering the ligand specificities of any functionally unknown protein.
Organization
By NIGMS program mandate, Glue Grant consortia must contain core resources and bridging projects. The EFI consists of six scientific cores which provide bioinformatic, structural, computational, and data management expertise to facilitate functional predictions for enzymes of unknown function targeted by the EFI. At the beginning of the grant, these predictions were tested by five Bridging Projects representing the amidohydrolase, enolase, GST, HAD, and isoprenoid synthase enzyme superfamilies. Three Bridging Projects now remain. In addition, the Anaerobic Enzymology Pilot Project was added in 2014 to explore the Radical SAM superfamily and Glycyl Radical Enzyme superfamily.
Scientific cores
The bioinformatics core contributes bioinformatic analysis by collecting and curating complete sequence data sets, generating sequence similarity networks, and classifying superfamily members into subgroups and families for subsequent annotation transfer and evaluation as targets for functional characterization; a toy sketch of such a network follows this list.
The protein core develops cloning, expression, and protein purification strategies for the enzymes targeted for study.
The structure core fulfills the structural biology component for EFI by providing high resolution structures of targeted enzymes.
The computation core performs in silico docking to generate rank-ordered lists of predicted substrates for targeted enzymes using both experimentally determined and/or homology modeled protein structures.
The microbiology core examines in vivo functions using genetic techniques and metabolomics to complement in vitro functions determined by the Bridging Projects.
The data and dissemination core maintains a public database for experimental data (EFI-DB).
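The following toy Python sketch (not EFI software) illustrates the sequence similarity network idea used by the bioinformatics core: proteins become nodes, an edge is drawn whenever a pairwise similarity score clears a chosen threshold, and connected components then suggest candidate groupings. The protein names and scores are made-up placeholders standing in for BLAST-style alignment scores, and the networkx library is assumed to be installed.

```python
import networkx as nx

# Placeholder pairwise similarity scores (higher = more similar).
pairwise_scores = {
    ("P1", "P2"): 95.0,
    ("P2", "P3"): 88.0,
    ("P4", "P5"): 91.0,
    ("P1", "P4"): 12.0,  # too dissimilar to connect at this threshold
}

def similarity_network(scores, threshold):
    """Build a graph whose edges are protein pairs scoring >= threshold."""
    g = nx.Graph()
    for (a, b), s in scores.items():
        g.add_node(a)
        g.add_node(b)
        if s >= threshold:
            g.add_edge(a, b, score=s)
    return g

g = similarity_network(pairwise_scores, threshold=50.0)
# Connected components act as crude candidate families/subgroups,
# e.g. [['P1', 'P2', 'P3'], ['P4', 'P5']]
print([sorted(c) for c in nx.connected_components(g)])
```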
Bridging projects
The enolase superfamily contains evolutionarily related enzymes with a (β/α)7β‑barrel (TIM‑barrel) fold which primarily catalyze metal-assisted epimerization/racemization or β-elimination of carboxylate substrates.
The Haloacid dehydrogenase superfamily contains evolutionarily related enzymes with a Rossmanoid α/β fold with an inserted "cap" region which primarily catalyze metal-assisted nucleophilic catalysis, most frequently resulting in phosphoryl group transfer.
The isoprenoid synthase (I) superfamily contains evolutionarily related enzymes with a mostly all α-helical fold and primarily catalyze trans-prenyl transfer reactions to form elongated or cyclized isoprene products.
The Anaerobic Enzymology bridging project will explore radical-dependent enzymology, which allows the execution of unusual chemical transformations via an iron-sulfur cluster cleaving S-Adenosyl methionine (SAM) and producing a radical intermediate, or alternatively, abstraction of a hydrogen from glycine producing a glycyl radical. The superfamilies containing these enzymes are largely unexplored and thus, ripe with the potential for functional discoveries. The acquisition of an anaerobic protein production pipeline coupled with the installation of a Biosafety Level 2 anaerobic chamber for culturing human gut microbes has readied the EFI to pursue anaerobic enzymology.
Participating investigators
Twelve investigators with expertise in various disciplines make up the EFI.
Deliverables
The EFI's primary deliverable is development and dissemination of an integrated sequence/structure strategy for functional assignment. The EFI now offers access to two high-throughput docking tools, a web tool for comparing protein sequences within entire protein families, and a web tool for composing a genome context inventory based on a protein sequence similarity network. Additionally, as the strategy is developed, data and clones generated by the EFI are made freely available via several online resources.
Funding
The EFI was established in May 2010 with $33.9 million in funding over a five-year period (grant number GM093342).
References
External links
Enzyme Function Initiative
Structure-Function Linkage Database
EFI-DB
NIGMS Glue Grant Consortia
Enzymes
Metabolism
Computational biology
National Institutes of Health | Enzyme Function Initiative | [
"Chemistry",
"Biology"
] | 1,230 | [
"Biochemistry",
"Cellular processes",
"Metabolism",
"Computational biology"
] |
31,605,902 | https://en.wikipedia.org/wiki/Di%282-ethylhexyl%29phosphoric%20acid | Di(2-ethylhexyl)phosphoric acid (DEHPA or HDEHP) is an organophosphorus compound with the formula (C8H17O)2PO2H. The colorless liquid is a diester of phosphoric acid and 2-ethylhexanol. It is used in the solvent extraction of uranium, vanadium and the rare-earth metals.
Preparation
DEHPA is prepared through the reaction of phosphorus pentoxide and 2-ethylhexanol:
4 C8H17OH + P4O10 → 2 [(C8H17O)PO(OH)]2O
[(C8H17O)PO(OH)]2O + C8H17OH → (C8H17O)2PO(OH) + (C8H17O)PO(OH)2
These reactions produce a mixture of mono-, di-, and trisubstituted phosphates, from which DEHPA can be isolated based on solubility.
Use in lanthanide extraction
DEHPA can be used to extract lanthanides (rare earths) from aqueous solutions and is commonly used in the lanthanide sector as an extraction agent. In general, the distribution ratios of the lanthanides increase with atomic number due to the lanthanide contraction. By bringing a mixture of lanthanides into contact with a suitable concentration of nitric acid in a counter-current mixer-settler bank, it is possible to selectively strip (back-extract) some of the lanthanides while leaving the others in the DEHPA-based organic layer. In this way, selective stripping can separate a mixture of lanthanides into mixtures containing fewer lanthanides; under ideal conditions this can be used to obtain a single lanthanide from a mixture of many.
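A minimal Python sketch (not from the article) of why a counter-current bank separates lanthanides with different distribution ratios. It uses the idealized Kremser relation for counter-current extraction, assuming a constant extraction factor and fresh solvent; the distribution ratios are illustrative placeholders, not measured DEHPA data.

```python
def fraction_unextracted(distribution_ratio, phase_ratio, n_stages):
    """Kremser relation: fraction of metal left in the aqueous raffinate."""
    eps = distribution_ratio * phase_ratio  # extraction factor
    if abs(eps - 1.0) < 1e-12:
        return 1.0 / (n_stages + 1)
    return (eps - 1.0) / (eps ** (n_stages + 1) - 1.0)

phase_ratio = 1.0  # organic-to-aqueous flow ratio
for name, d in [("lighter lanthanide (low D)", 0.5),
                ("heavier lanthanide (high D)", 5.0)]:
    left = fraction_unextracted(d, phase_ratio, n_stages=6)
    print(f"{name}: {100 * (1 - left):.1f}% extracted after 6 stages")
```

Even a modest difference in distribution ratio becomes a sharp separation over several stages, which is what makes selective stripping in a mixer-settler bank practical.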
It is common to use DEHPA in an aliphatic kerosene, which is best considered to be a mixture of long chain alkanes and cycloalkanes. When used in an aromatic hydrocarbon diluent, the lanthanide distribution ratios are lower. It has also been shown that a second-generation biodiesel made by the hydrotreatment of vegetable oil can serve as the diluent; Neste's HVO100 has been reported to be a suitable diluent for DEHPA when calcium, lanthanum, and neodymium are extracted from aqueous nitric acid.
Use in uranium extraction
Extraction
DEHPA is used in the solvent extraction of uranium salts from solutions containing the sulfate, chloride, or perchlorate anions. This extraction is known as the “Dapex procedure” (dialkyl phosphoric extraction). Reminiscent of the behaviours of carboxylic acids, DEHPA generally exists as a hydrogen-bonded dimer in the non-polar organic solvents. For practical applications, the solvent, often called a diluent, is typically kerosene. A complex is formed from two equivalents of the conjugate base of DEHPA and one uranyl ion. Complexes of the formula (UO2)2[(O2P(OR)2]4 also form, and at high concentrations of uranium, polymeric complexes may form.
The extractability of Fe3+ is similar to that of uranium, so it must be reduced to Fe2+ before the extraction.
Stripping
The uranium is then stripped from the DEHPA/kerosene solution with hydrochloric acid, hydrofluoric acid, or carbonate solutions. Sodium carbonate solutions effectively strip uranium from the organic layer, but the sodium salt of DEHPA is somewhat soluble in water, which can lead to loss of the extractant.
Synergistic effects
The extractive capabilities of DEHPA can be increased through synergistic effects by the addition of other organophosphorus compounds. Tributyl phosphate is often used, as well as dibutyl-, diamyl-, and dihexylphosphonates. The synergistic effects are thought to occur by the addition of the trialkylphosphate to the uranyl-DEHPA complex by hydrogen bonding. The synergistic additive may also react with the DEHPA, competing with the uranyl extraction, resulting in a decrease in extraction efficiency past a concentration specific to the compound.
Alternatives to DEHPA
Alternative organophosphorus compounds include trioctylphosphine oxide and bis(2,4,4-trimethyl pentyl)phosphinic acid. Secondary, tertiary, and quaternary amines have also been used for some uranium extractions. Compared to phosphate extractants, amines are more selective for uranium, extract the uranium faster, and are easily stripped with a wider variety of reagents. However, the phosphates are more tolerant of solids in the feed solution and show faster phase separation.
References
Separation processes
Organophosphates
2-Ethylhexyl esters | Di(2-ethylhexyl)phosphoric acid | [
"Chemistry"
] | 1,066 | [
"nan",
"Separation processes"
] |
37,158,447 | https://en.wikipedia.org/wiki/PK-3%20Plus%20%28ISS%20experiment%29 | The Plasmakristall-3 Plus (PK-3 Plus) laboratory was a joint Russian-German laboratory for the investigation of dusty/complex plasmas on board the International Space Station (ISS), with the principal investigators at the German Max Planck Institute for Extraterrestrial Physics and the Russian Institute for High Energy Densities. It was the successor to the PKE Nefedov experiment with improvements in hardware, diagnostics and software. The laboratory was launched in December 2005 and was operated for the first time in January 2006. It was used in 21 missions until it was deorbited in 2013. It is succeeded by the PK-4 Laboratory.
Technical description
The heart of the PK-3 Plus laboratory consists of a capacitively coupled plasma chamber. A plasma is generated by applying a radio frequency voltage signal to two electrodes. Microparticles are then injected into the plasma via dispensers that are mounted on the side of the electrodes. The microparticles are illuminated with a sheet of laser light from the side. They scatter the light, which is then recorded by up to four cameras mounted around the plasma chamber. The data from the cameras are recorded on hard drives that are physically brought back to Earth via Soyuz capsules for analysis.
Scientific goals
PK-3 Plus studies complex plasmas - plasmas that contain microparticles. The microparticles acquire high negative charges by collecting electrons from the surrounding plasma. They interact with each other and with the plasma particles, e.g., they experience a drag force from the ions that are streaming to the edges of the plasma.
Depending on the experimental settings like the gas pressure, the system made up of the microparticles forms various phases - solid, liquid or gaseous. In this sense, the microparticles can be seen as analogous to atoms or molecules in ordinary physical systems. The experiments are performed by observing the movement of the microparticles and tracing them from camera frame to frame.
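A minimal Python sketch (not part of the PK-3 Plus software) of the frame-to-frame tracing described above: each particle position in one camera frame is greedily linked to the nearest unclaimed position in the next frame, subject to a maximum displacement. Real trackers also handle occlusion, lost particles, and sub-pixel detection, which this toy version ignores.

```python
import math

def link_frames(frame_a, frame_b, max_step=5.0):
    """Greedily match (x, y) positions in frame_a to nearest in frame_b."""
    links, used = [], set()
    for i, (xa, ya) in enumerate(frame_a):
        best_j, best_d = None, max_step
        for j, (xb, yb) in enumerate(frame_b):
            if j in used:
                continue
            d = math.hypot(xb - xa, yb - ya)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            links.append((i, best_j))
            used.add(best_j)
    return links  # pairs of (index in frame_a, index in frame_b)

print(link_frames([(10.0, 10.0), (40.0, 12.0)],
                  [(11.5, 10.5), (41.0, 13.0)]))  # [(0, 0), (1, 1)]
```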
The PK-3 Plus experiment allows a large variety of topics to be investigated, for instance: crystal structure, fluid–solid phase transitions, electrorheological fluids, wave propagation, the heartbeat instability, Mach cone formation and the speed of sound, and lane formation.
External links
Forschungsgruppe komplexe Plasmen - DLR Oberpfaffenhofen
References
Plasma physics facilities
Science facilities on the International Space Station
International Space Station experiments | PK-3 Plus (ISS experiment) | [
"Physics"
] | 505 | [
"Plasma physics facilities",
"Plasma physics"
] |
29,057,063 | https://en.wikipedia.org/wiki/Appert%20topology | In general topology, a branch of mathematics, the Appert topology, named for Antoine Appert, is a topology on the set X = {1, 2, 3, ...} of positive integers.
In the Appert topology, the open sets are those that do not contain 1, and those that asymptotically contain almost every positive integer. The space X with the Appert topology is called the Appert space.
Construction
For a subset S of X, let N(n, S) denote the number of elements of S which are less than or equal to n:
N(n, S) = #{m ∈ S : m ≤ n}.
S is defined to be open in the Appert topology if either it does not contain 1 or if it has asymptotic density equal to 1, i.e., it satisfies
lim_{n→∞} N(n, S)/n = 1.
The empty set is open because it does not contain 1, and the whole set X is open since
N(n, X)/n = 1
for all n.
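A small numerical illustration (not from the article, and only a finite heuristic, since asymptotic density is a limit) of the openness criterion for sets that contain 1: estimating N(n, S)/n for a large n suggests whether the density condition can hold.

```python
def density_prefix(in_s, n):
    """Estimate N(n, S)/n for a set S given by its membership test."""
    return sum(1 for m in range(1, n + 1) if in_s(m)) / n

# Sets containing 1 are open only if their asymptotic density is 1.
def co_squares_plus_one(m):   # 1 together with all non-squares: density 1
    return m == 1 or int(m ** 0.5) ** 2 != m

def evens_plus_one(m):        # 1 together with the even numbers: density 1/2
    return m == 1 or m % 2 == 0

print(density_prefix(co_squares_plus_one, 10 ** 6))  # ~0.999 -> open
print(density_prefix(evens_plus_one, 10 ** 6))       # ~0.5   -> not open
```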
Related topologies
The Appert topology is closely related to the Fort space topology that arises from giving the set of integers greater than one the discrete topology, and then taking the point 1 as the point at infinity in a one point compactification of the space. The Appert topology is finer than the Fort space topology, as any cofinite subset of X has asymptotic density equal to 1.
Properties
The closed subsets S of X are those that either contain 1 or that have zero asymptotic density, namely lim_{n→∞} N(n, S)/n = 0.
Every point of X has a local basis of clopen sets, i.e., X is a zero-dimensional space. Proof: Every open neighborhood of 1 is also closed. For any n ≠ 1, the singleton {n} is both closed and open.
X is Hausdorff and perfectly normal (T6).Proof: X is T1. Given any two disjoint closed sets A and B, at least one of them, say A, does not contain 1. A is then clopen and A and its complement are disjoint respective neighborhoods of A and B, which shows that X is normal and Hausdorff. Finally, any subset, in particular any closed subset, in a countable T1 space is a Gδ, so X is perfectly normal.
X is countable, but not first countable, and hence not second countable and not metrizable.
A subset of X is compact if and only if it is finite. In particular, X is not locally compact, since there is no compact neighborhood of 1.
X is not countably compact. Proof: An infinite set with zero asymptotic density, such as the set of powers of 2, is closed in X, and each of its points is isolated. Since X contains an infinite closed discrete subset, it is not limit point compact, and therefore it is not countably compact.
See also
Notes
References
General topology
Topological spaces | Appert topology | [
"Mathematics"
] | 551 | [
"General topology",
"Mathematical structures",
"Space (mathematics)",
"Topological spaces",
"Topology"
] |
29,057,399 | https://en.wikipedia.org/wiki/Demagnetizing%20field | The demagnetizing field, also called the stray field (outside the magnet), is the magnetic field (H-field) generated by the magnetization in a magnet. The total magnetic field in a region containing magnets is the sum of the demagnetizing fields of the magnets and the magnetic field due to any free currents or displacement currents. The term demagnetizing field reflects its tendency to act on the magnetization so as to reduce the total magnetic moment. It gives rise to shape anisotropy in ferromagnets with a single magnetic domain and to magnetic domains in larger ferromagnets.
The demagnetizing field of an arbitrarily shaped object requires a numerical solution of Poisson's equation even for the simple case of uniform magnetization. For the special case of ellipsoids (including infinite cylinders) the demagnetization field is linearly related to the magnetization by a geometry dependent constant called the demagnetizing factor. Since the magnetization of a sample at a given location depends on the total magnetic field at that point, the demagnetization factor must be used in order to accurately determine how a magnetic material responds to a magnetic field. (See magnetic hysteresis.)
Magnetostatic principles
Maxwell's equations
In general the demagnetizing field Hd is a function of position r. It is derived from the magnetostatic equations for a body with no electric currents. These are Ampère's law
∇ × H = 0,
and Gauss's law
∇ · B = 0.
The magnetic field and flux density are related by
B = μ0(H + M),
where μ0 is the permeability of vacuum and M is the magnetisation.
The magnetic potential
The general solution of the first equation can be expressed as the gradient of a scalar potential U(r):
H = −∇U.
Inside the magnetic body, the potential Uin is determined by substituting this expression and B = μ0(H + M) into Gauss's law, giving
∇²Uin = ∇ · M.
Outside the body, where the magnetization is zero,
∇²Uout = 0.
At the surface of the magnet, there are two continuity requirements:
The component of H parallel to the surface must be continuous (no jump in value at the surface).
The component of B perpendicular to the surface must be continuous.
This leads to the following boundary conditions at the surface of the magnet:
Uin = Uout and ∂Uin/∂n − ∂Uout/∂n = M · n.
Here n is the surface normal and ∂/∂n is the derivative with respect to distance from the surface.
The outer potential Uout must also be regular at infinity: both |r U| and |r² ∇U| must be bounded as r goes to infinity. This ensures that the magnetic energy is finite. Sufficiently far away, the magnetic field looks like the field of a magnetic dipole with the same moment as the finite body.
Uniqueness of the demagnetizing field
Any two potentials that satisfy the interior and exterior equations above, along with the boundary conditions and regularity at infinity, have identical gradients. The demagnetizing field Hd is the gradient of this potential: Hd = −∇U.
Energy
The energy of the demagnetizing field is completely determined by an integral over the volume V of the magnet:
E = −(μ0/2) ∫V M · Hd dV.
Suppose there are two magnets with magnetizations M1 and M2. The energy of the first magnet in the demagnetizing field H2 of the second is
E = −μ0 ∫V M1 · H2 dV.
The reciprocity theorem states that
∫V M1 · H2 dV = ∫V M2 · H1 dV,
where H1 is the demagnetizing field of the first magnet.
Magnetic charge and the pole-avoidance principle
Formally, the solution of the equations for the potential is
U(r) = (1/4π) [ −∫V (∇′ · M(r′))/|r − r′| dV′ + ∮S (n′ · M(r′))/|r − r′| dS′ ],
where r′ is the variable to be integrated over the volume of the body in the first integral and the surface in the second, and ∇′ is the gradient with respect to this variable.
Qualitatively, the negative of the divergence of the magnetization, −∇ · M (called a volume pole), is analogous to a bulk bound electric charge in the body, while the normal component n · M (called a surface pole) is analogous to a bound surface electric charge. Although the magnetic charges do not exist, it can be useful to think of them in this way. In particular, the arrangement of magnetization that reduces the magnetic energy can often be understood in terms of the pole-avoidance principle, which states that the magnetization tries to reduce the poles as much as possible.
Effect on magnetization
Single domain
One way to remove the magnetic poles inside a ferromagnet is to make the magnetization uniform. This occurs in single-domain ferromagnets. This still leaves the surface poles, so division into domains reduces the poles further. However, very small ferromagnets are kept uniformly magnetized by the exchange interaction.
The concentration of poles depends on the direction of magnetization (see the figure). If the magnetization is along the longest axis, the poles are spread across a smaller surface, so the energy is lower. This is a form of magnetic anisotropy called shape anisotropy.
Multiple domains
If the ferromagnet is large enough, its magnetization can divide into domains. It is then possible to have the magnetization parallel to the surface. Within each domain the magnetization is uniform, so there are no volume poles, but there are surface poles at the interfaces (domain walls) between domains. However, these poles vanish if the magnetic moments on each side of the domain wall meet the wall at the same angle (so that the components are the same but opposite in sign). Domains configured this way are called closure domains.
Demagnetizing factor
An arbitrarily shaped magnetic object has a total magnetic field that varies with location inside the object and can be quite difficult to calculate. This makes it very difficult to determine the magnetic properties of a material, such as how its magnetization varies with the magnetic field. For a uniformly magnetized sphere in a uniform magnetic field the internal magnetic field is uniform:
H = −γM,
where M is the magnetization of the sphere and γ is called the demagnetizing factor, which assumes values between 0 and 1 and equals 1/3 for a sphere in SI units. Note that in cgs units γ assumes values between 0 and 4π.
This equation can be generalized to include ellipsoids having principal axes in the x, y, and z directions such that each component has a relationship of the form:
Hk = −γk Mk (k = x, y, z), with γx + γy + γz = 1 (SI units).
Other important examples are an infinite plate (an ellipsoid with two of its axes going to infinity), which has γ = 1 (SI units) in the direction normal to the plate and zero otherwise, and an infinite cylinder (an ellipsoid with one of its axes tending toward infinity with the other two being the same), which has γ = 0 along its axis and γ = 1/2 perpendicular to its axis. The demagnetizing factors are the principal values of the depolarization tensor, which gives both the internal and external values of the fields induced in ellipsoidal bodies by applied electric or magnetic fields.
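A minimal Python sketch (not from the article) applying the component relation Hk = −γk Mk with the limiting-case demagnetizing factors quoted above; the input magnetization is an arbitrary illustrative value in SI units.

```python
# Principal-axis demagnetizing factors (SI); each triple sums to 1.
FACTORS = {
    "sphere": (1 / 3, 1 / 3, 1 / 3),
    "infinite plate (normal along z)": (0.0, 0.0, 1.0),
    "infinite cylinder (axis along z)": (0.5, 0.5, 0.0),
}

def demag_field(shape, m):
    """Internal demagnetizing field H (A/m) for uniform magnetization m."""
    return tuple(-g * mk for g, mk in zip(FACTORS[shape], m))

M = (0.0, 0.0, 1.0e6)  # illustrative magnetization along z, A/m
for shape in FACTORS:
    print(f"{shape}: H = {demag_field(shape, M)} A/m")
```

The plate magnetized along its normal opposes its own magnetization most strongly, while the cylinder magnetized along its axis has no demagnetizing field at all, which is the shape anisotropy discussed earlier.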
Notes and references
Further reading
Electric and magnetic fields in matter
Magnetostatics
Potentials | Demagnetizing field | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,336 | [
"Condensed matter physics",
"Electric and magnetic fields in matter",
"Materials science"
] |
29,064,324 | https://en.wikipedia.org/wiki/Henry%20Arthur%20Campbell | Henry Arthur Campbell MIEE (24 June 1873 – 6 June 1953) was a Jamaican electrical engineer and a long-serving employee and Chief Engineer of the Jamaica Public Service Co. Ltd.
Education and career
Henry was tutored privately and at Church of England grammar schools in his early years, before apprenticing as an electrical engineer with the Jamaica Electric Light and Power Co. Ltd. beginning in 1890.
He notably contributed to the establishment and maintenance of an electric tram system in Kingston, Jamaica, and was recognized by the British Admiralty for having "rendered invaluable services" during both World War I and World War II.
He worked for over sixty years with the Jamaica Public Service Co. Ltd., from 1890 until his death in 1953, serving in his later years as a consultant.
During his time there he served on or led several of the company's committees and clubs, including membership of the Apprenticeship Committee, the chairmanship of the managing committee of the company's employees' thrift club, and the vice-chairmanship of the company's mutual aid society.
He became an associate member of the Institution of Electrical Engineers in 1928, and later a member in 1932.
Masonry
Campbell was heavily involved in Freemasonry throughout his life and career. His masonic achievements included:
Director, Masonic Benevolence Association
Past Deputy District Grand Master of Scottish Free Masons in Jamaica
Past Preceptors of Kingston Templars
Past Principal Z. Holy Royal Arch Chapter
Past Most Wise Sovereign, Kingston Rose Croix Chapter
Past Grand Director of Ceremonies of the Grand Lodge of Scotland
Past Master Mark Mason
Past Priory Knights of Malta
Past Worshipful Master St. John Masonic Lodge, S.C.
References
1873 births
People from Saint Andrew Parish, Jamaica
Electrical engineers
1953 deaths | Henry Arthur Campbell | [
"Engineering"
] | 345 | [
"Electrical engineering",
"Electrical engineers"
] |
22,992,399 | https://en.wikipedia.org/wiki/Construction%20and%20Analysis%20of%20Distributed%20Processes | CADP (Construction and Analysis of Distributed Processes) is a toolbox for the design of communication protocols and distributed systems. CADP is developed by the CONVECS team (formerly by the VASY team) at INRIA Rhone-Alpes and connected to various complementary tools. CADP is maintained, regularly improved, and used in many industrial projects.
The purpose of the CADP toolkit is to facilitate the design of reliable systems by use of formal description techniques together with software tools for simulation, rapid application development, verification, and test generation.
CADP can be applied to any system that comprises asynchronous concurrency, i.e., any system whose behavior can be modeled as a set of parallel processes governed by interleaving semantics. Therefore, CADP can be used to design hardware architecture, distributed algorithms, telecommunications protocols, etc.
The enumerative verification (also known as explicit state verification) techniques implemented in CADP, though less general than theorem proving, enable an automatic, cost-efficient detection of design errors in complex systems.
CADP includes tools to support use of two approaches in formal methods, both of which are needed for reliable systems design:
Models provide mathematical representations for parallel programs and related verification problems. Examples of models are automata, networks of communicating automata, Petri nets, binary decision diagrams, boolean equation systems, etc. From a theoretical point of view, research on models seeks for general results, independent of any particular description language.
In practice, models are often too elementary to describe complex systems directly (this would be tedious and error-prone). A higher level formalism known as process algebra or process calculus is needed for this task, as well as compilers that translate high-level descriptions into models suitable for verification algorithms.
History
Work began on CADP in 1986, when the development of the first two tools, CAESAR and ALDEBARAN, was undertaken. In 1989, the CADP acronym was coined, which stood for CAESAR/ALDEBARAN Distribution Package. Over time, several tools were added, including programming interfaces that enabled tools to be contributed: the CADP acronym then became the CAESAR/ALDEBARAN Development Package. Currently CADP contains over 50 tools. While keeping the same acronym, the name of the toolbox has been changed to better indicate its purpose:
Construction and Analysis of Distributed Processes.
Major releases
The releases of CADP have been successively named with alphabetic letters (from "A" to "Z"), then with the names of cities hosting academic research groups actively working on the LOTOS language and, more generally, the names of cities in which major contributions to concurrency theory have been made.
Between major releases, minor releases are often available, providing early access to new features and improvements. For more information, see the change list page on the CADP website.
CADP features
CADP offers a wide set of functionalities, ranging from step-by-step simulation to massively parallel model checking. It includes:
Compilers for several input formalisms:
High-level protocol descriptions written in the ISO language LOTOS. The toolbox contains two compilers (CAESAR and CAESAR.ADT) that translate LOTOS descriptions into C code to be used for simulation, verification, and testing purposes.
Low-level protocol descriptions specified as finite state machines.
Networks of communicating automata, i.e., finite state machines running in parallel and synchronized (either using process algebra operators or synchronization vectors).
Several equivalence checking tools (minimization and comparisons modulo bisimulation relations), such as BCG_MIN and BISIMULATOR.
Several model checkers for various temporal logics and mu-calculi, such as EVALUATOR and XTL.
Several verification algorithms combined: enumerative verification, on-the-fly verification, symbolic verification using binary decision diagrams, compositional minimization, partial orders, distributed model checking, etc.
Plus other tools with advanced functionalities such as visual checking, performance evaluation, etc.
CADP is designed in a modular way and puts the emphasis on intermediate formats and programming interfaces (such as the BCG and OPEN/CAESAR software environments), which allow the CADP tools to be combined with other tools and adapted to various specification languages.
Models and verification techniques
Verification is the comparison of a complex system against a set of properties characterizing the intended functioning of the system (for instance, deadlock freedom, mutual exclusion, fairness, etc.).
Most of the verification algorithms in CADP are based on the labeled transition systems (or, simply, automata or graphs) model, which consists of a set of states, an initial state, and a transition relation between states. This model is often generated automatically from high level descriptions of the system under study, then compared against the system properties using various decision procedures. Depending on the formalism used to express the properties, two approaches are possible:
Behavioral properties express the intended functioning of the system in the form of automata (or higher level descriptions, which are then translated into automata). In such a case, the natural approach to verification is equivalence checking, which consists in comparing the system model and its properties (both represented as automata) modulo some equivalence or preorder relation. CADP contains equivalence checking tools that compare and minimize automata modulo various equivalence and preorder relations; some of these tools also apply to stochastic and probabilistic models (such as Markov chains). CADP also contains visual checking tools that can be used to verify a graphical representation of the system.
Logical properties express the intended functioning of the system in the form of temporal logic formulas. In such a case, the natural approach to verification is model checking, which consists of deciding whether or not the system model satisfies the logical properties. CADP contains model checking tools for a powerful form of temporal logic, the modal mu-calculus, which is extended with typed variables and expressions so as to express predicates over the data contained in the model. This extension provides for properties that could not be expressed in the standard mu-calculus (for instance, the fact that the value of a given variable is always increasing along any execution path).
Although these techniques are efficient and automated, their main limitation is the state explosion problem, which occurs when models are too large to fit in computer memory. CADP provides software technologies for handling models in two complementary ways:
Small models can be represented explicitly, by storing in memory all their states and transitions (exhaustive verification);
Larger models are represented implicitly, by exploring only the model states and transitions needed for the verification (on the fly verification).
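A minimal Python sketch (not CADP code) of the on-the-fly idea: states of an implicit labeled transition system are generated only as the search needs them, here while hunting for a deadlock, instead of first building the whole graph. The two-counter system below is a made-up stand-in for a compiled specification.

```python
from collections import deque

def successors(state):
    """Implicit LTS: enabled transitions of a state, generated on demand."""
    x, y = state
    moves = []
    if x < 3:
        moves.append(("inc_x", (x + 1, y)))
    if y < x:
        moves.append(("inc_y", (x, y + 1)))
    return moves

def find_deadlock(initial):
    """Breadth-first search for a state with no enabled transition."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        succ = successors(state)
        if not succ:
            return state  # deadlock found
        for _label, nxt in succ:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None  # deadlock-free

print(find_deadlock((0, 0)))  # (3, 3): both counters saturated
```

Only states reachable from the initial one are ever constructed, and the search can stop at the first counterexample, which is what lets on-the-fly tools handle models too large to store explicitly.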
Languages and compilation techniques
Accurate specification of reliable, complex systems requires a language that is executable (for enumerative verification) and has a formal semantics (to avoid any language ambiguities that could lead to interpretation divergences between designers and implementors). A formal semantics is also required when it is necessary to establish the correctness of an infinite system; this cannot be done using enumerative techniques, because they deal only with finite abstractions, so it must be done using theorem proving techniques, which only apply to languages with a formal semantics.
CADP acts on a LOTOS description of the system. LOTOS is an international standard for protocol description (ISO/IEC standard 8807:1989), which combines the concepts of process algebras (in particular CCS and CSP) and algebraic abstract data types. Thus, LOTOS can describe both asynchronous concurrent processes and complex data structures.
LOTOS was heavily revised in 2001, leading to the publication of E-LOTOS (Enhanced-Lotos, ISO/IEC standard 15437:2001), which tries to provide a greater expressiveness (for instance, by introducing quantitative time to describe systems with real-time constraints) together with a better user friendliness.
Several tools exist to convert descriptions in other process calculi or intermediate format into LOTOS, so that the CADP tools can then be used for verification.
Licensing and installation
CADP is distributed free of charge to universities and public research centers. Users in industry can obtain an evaluation license for non-commercial use during a limited period of time, after which a full license is required. Copies of CADP can be requested by completing the registration form on the CADP website; after the license agreement has been signed, details of how to download and install CADP are provided.
Tools summary
The toolbox contains several tools:
CAESAR.ADT is a compiler that translates LOTOS abstract data types into C types and C functions. The translation involves pattern-matching compiling techniques and automatic recognition of usual types (integers, enumerations, tuples, etc.), which are implemented optimally.
CAESAR is a compiler that translates LOTOS processes into either C code (for rapid prototyping and testing purposes) or finite graphs (for verification). The translation is done using several intermediate steps, among which the construction of a Petri net extended with typed variables, data handling features, and atomic transitions.
OPEN/CAESAR is a generic software environment for developing tools that explore graphs on the fly (for instance, simulation, verification, and test generation tools). Such tools can be developed independently of any particular high level language. In this respect, OPEN/CAESAR plays a central role in CADP by connecting language-oriented tools with model-oriented tools. OPEN/CAESAR consists of a set of 16 code libraries with their programming interfaces, such as:
Caesar_Hash, which contains several hash functions
Caesar_Solve, which resolves boolean equation systems on the fly
Caesar_Stack, which implements stacks for depth-first search exploration
Caesar_Table, which handles tables of states, transitions, labels, etc.
A number of tools have been developed within the OPEN/CAESAR environment:
BISIMULATOR, which checks bisimulation equivalences and preorders
CUNCTATOR, which performs on-the-fly steady state simulation
DETERMINATOR, which eliminates stochastic nondeterminism in normal, probabilistic, or stochastic systems
DISTRIBUTOR, which generates the graph of reachable states using several machines
EVALUATOR, which evaluates regular alternation-free mu-calculus formulas
EXECUTOR, which performs random execution of code
EXHIBITOR, which searches for execution sequences matching a given regular expression
GENERATOR, which constructs the graph of reachable states
PREDICTOR, which predicts the feasibility of reachability analysis
PROJECTOR, which computes abstractions of communicating systems
REDUCTOR, which constructs and minimizes the graph of reachable states modulo various equivalence relations
SIMULATOR, X-SIMULATOR and OCIS, which allow interactive simulation
TERMINATOR, which searches for deadlock states
BCG (Binary Coded Graphs) is both a file format for storing very large graphs on disk (using efficient compression techniques) and a software environment for handling this format, including partitioning graphs for distributed processing. BCG also plays a key role in CADP as many tools rely on this format for their inputs/outputs. The BCG environment consists of various libraries with their programming interfaces, and of several tools, including:
BCG_DRAW, which builds a two-dimensional view of a graph,
BCG_EDIT, which allows interactive modification of the graph layout produced by BCG_DRAW
BCG_GRAPH, which generates various forms of practically useful graphs
BCG_INFO, which displays various statistical information about a graph
BCG_IO, which performs conversions between BCG and many other graph formats
BCG_LABELS, which hides and/or renames (using regular expressions) the transition labels of a graph
BCG_MERGE, which gathers graph fragments obtained from distributed graph construction
BCG_MIN, which minimizes a graph modulo strong or branching equivalences (and can also deal with probabilistic and stochastic systems)
BCG_STEADY, which performs steady-state numerical analysis of (extended) continuous-time Markov chains
BCG_TRANSIENT, which performs transient numerical analysis of (extended) continuous-time Markov chains
PBG_CP, which copies a partitioned BCG graph
PBG_INFO, which displays information about a partitioned BCG graph
PBG_MV which moves a partitioned BCG graph
PBG_RM, which removes a partitioned BCG graph
XTL (eXecutable Temporal Language), which is a high level, functional language for programming exploration algorithms on BCG graphs. XTL provides primitives to handle states, transitions, labels, successor and predecessor functions, etc. For instance, one can define recursive functions on sets of states, which make it possible to specify in XTL the evaluation and diagnostic-generation fixed point algorithms for usual temporal logics (such as HML, CTL, ACTL, etc.).
The connection between explicit models (such as BCG graphs) and implicit models (explored on the fly) is ensured by OPEN/CAESAR-compliant compilers including:
CAESAR.OPEN, for models expressed as LOTOS descriptions
BCG.OPEN, for models represented as BCG graphs
EXP.OPEN, for models expressed as communicating automata
FSP.OPEN, for models expressed as FSP descriptions
LNT.OPEN, for models expressed as LNT descriptions
SEQ.OPEN, for models represented as sets of execution traces
The CADP toolbox also includes additional tools, such as ALDEBARAN and TGV (Test Generation based on Verification) developed by the Verimag laboratory (Grenoble) and the Vertecs project-team of INRIA Rennes.
The CADP tools are well-integrated and can be accessed easily using either the EUCALYPTUS graphical interface or the SVL scripting language. Both EUCALYPTUS and SVL provide users with an easy, uniform access to the CADP tools by performing file format conversions automatically whenever needed and by supplying appropriate command-line options as the tools are invoked.
Awards
In 2002, Radu Mateescu, who designed and developed the EVALUATOR model checker of CADP, received the Information Technology Award attributed during the 10th edition of the yearly symposium organized by the Foundation Rhône-Alpes Futur.
In 2011, Hubert Garavel, software architect and developer of CADP, received the Gay-Lussac Humboldt Prize.
In 2019, Frédéric Lang and Franco Mazzanti won all the gold medals for the parallel problems of the RERS challenge by using CADP to successfully and correctly evaluate 360 computational tree logic (CTL) and linear temporal logic (LTL) formulas on various sets of communicating state machines.
In 2020, Frédéric Lang, Franco Mazzanti, and Wendelin Serwe won three gold medals at the RERS'2020 challenge by correctly solving 88% of the "Parallel CTL" problems, only giving "don't know" answers for 11 formulas out of 90.
In 2021, Hubert Garavel, Frédéric Lang, Radu Mateescu, and Wendelin Serwe jointly won the Innovation Prize of Inria – Académie des sciences – Dassault Systèmes for their scientific work that led to the development of the CADP toolbox.
In 2023, Hubert Garavel, Frédéric Lang, Radu Mateescu, and Wendelin Serwe jointly received, for the CADP toolbox, the first ever Test-of-Time Tool Award from ETAPS, the premier European forum for software science.
See also
SYNTAX compiler generator (used to build many CADP compilers and translators)
References
External links
http://cadp.inria.fr/
http://vasy.inria.fr/
http://convecs.inria.fr/
Model checkers
Process calculi
Formal methods
Formal specification languages
Concurrency (computer science)
Concurrency control
Synchronization | Construction and Analysis of Distributed Processes | [
"Mathematics",
"Engineering"
] | 3,224 | [
"Telecommunications engineering",
"Formal methods",
"Model checkers",
"Software engineering",
"Synchronization",
"Mathematical software"
] |
22,993,401 | https://en.wikipedia.org/wiki/Proteases%20in%20angiogenesis | Angiogenesis is the process of forming new blood vessels from existing blood vessels, formed in vasculogenesis. It is a highly complex process involving extensive interplay between cells, soluble factors, and the extracellular matrix (ECM). Angiogenesis is critical during normal physiological development, but it also occurs in adults during inflammation, wound healing, ischemia, and in pathological conditions such as rheumatoid arthritis, hemangioma, and tumor growth. Proteolysis has been indicated as one of the first and most sustained activities involved in the formation of new blood vessels. Numerous proteases including matrix metalloproteinases (MMPs), a disintegrin and metalloproteinase domain (ADAM), a disintegrin and metalloproteinase domain with throbospondin motifs (ADAMTS), and cysteine and serine proteases are involved in angiogenesis. This article focuses on the important and diverse roles that these proteases play in the regulation of angiogenesis.
MMPs
Matrix metalloproteinases (MMPs) are a large multigene family of zinc-dependent endopeptidases. The collective MMP family is capable of degrading all known ECM macromolecules. MMP activity is regulated at the level of transcription, post-translationally by proteolytic cleavage, and by endogenous inhibitors known as tissue inhibitors of metalloproteinases (TIMPs). The role of matrix metalloproteinases and TIMPs in several pathological conditions including angiogenesis, tumor growth, and metastasis has been investigated and very well described.
Matrix metalloproteinases contain five conserved domains/sequence motifs:
Signal peptide sequence, which directs the enzyme into the rough endoplasmic reticulum during synthesis
Propeptide domain, which is cleaved to activate the enzyme
Catalytic domain, which contains the conserved Zn2+ binding region and mediates enzyme activity
Hemopexin domain, which provides the substrate specificity
Small hinge region, which allows the hemopexin domain to bring the substrate to the active core of the catalytic domain
There is also a subfamily of the matrix metalloproteinases, the membrane-type MMPs (MT-MMPs), which contain an additional transmembrane domain and a short cytoplasmic domain. After activation of MMPs by removal of the propeptide domain, their activity is regulated by TIMPs. TIMPs specifically and reversibly inhibit the activity of MMPs. Four members of the family, TIMP1–4, have been identified so far. All TIMPs contain twelve conserved cysteine residues, which form six disulfide bonds. The C-terminal domains of TIMPs are highly variable and confer their specificity towards preferred MMP targets.
ADAM/ADAMTS
ADAMs comprise a family of integral membrane as well as secreted glycoproteins which are related to snake venom metalloproteinases and MMPs. Like MMPs, ADAMs are composed of multiple conserved domains. They contain propeptide, metalloproteinase, disintegrin-like, cysteine-rich, and epidermal growth factor-like domains, although variations in domain composition have been observed in non-animal organisms. Membrane-anchored ADAMs contain a transmembrane and cytoplasmic domain. The domains contained within the ADAM family have been characterized, uncovering their functional and structural roles. ADAMs contain a consensus sequence which has three histidine residues that bind to the catalytically essential zinc ion. The propeptide is removed through cleavage by a furin-type protease, yielding the active enzyme. The propeptide of most MMPs is cleavable by proteases such as trypsin, plasmin, chymotrypsin, and other MMPs. ADAMs participate in a wide variety of cell surface remodeling processes, including ectodomain shedding, regulation of growth factor availability, and mediation of cell–matrix interactions. ADAM17 and ADAM15 have recently been identified in endothelial cells (EC).
ADAMTS are a subfamily of ADAM related metalloproteinases that contain at least one thrombospondin type I sequence repeat motif (TSR). They are secreted proteins; and the TSR facilitates their localization to the ECM placing it in close proximity to their substrates. Functionally, ADAMTS can be divided into three groups: procollagen aminopeptidase, aggrecanase, and ADAMTS13 which cleaves von Willebrand factor. Unlike with MMPs, TIMPs are more selective in their ability to inhibit ADAMs and ADAMTSs. TIMP3 is able to inhibit ADAM17 and 12 as well as ADAMTS4 and 5. ADAM8 and ADAM9 are not susceptible to inhibition by TIMPs.
Other proteolytic enzymes
Many additional classes of enzymes have been identified that facilitate angiogenesis. They include serine, aspartic, and cysteine-type proteases. A highly characterized example of the serine protease family is the plasminogen activator–plasmin system, which has been shown to be involved in vascular remodelling. Tissue plasminogen activator (tPA) and urokinase plasminogen activator (urokinase, uPA) are serine proteases which cleave and activate plasminogen. The activated form of plasminogen, plasmin, is a wide-ranging protease capable of acting on various ECM components including fibrin, collagens, laminin, fibronectin, and proteoglycans. Additionally, plasmin is able to activate various other MMPs.
In humans, the group of cathepsin cysteine proteases or cysteine cathepsins comprises 11 family members, cathepsins B, C, F, H, L1, L2, K, O, S, W, and X/Z. Cysteine cathepsins are synthesized as inactive zymogens and activated by proteolytic removal of their propeptide. These enzymes are primarily localized in lysosomes and function in terminal protein degradation and processing. Cathepsins also can be secreted by cells, associate with the cell surface, and degrade the ECM. A study of all 11 members of the cathepsin family highlights their importance in tumorigenesis and tumor-associated angiogenesis. Examination of cathepsin activity by using chemical probes and in vivo imaging techniques demonstrated an increase in cathepsin activity in the angiogenic blood vessels and invasive fronts of carcinomas in the RIP-Tag2 transgenic mouse model of pancreatic islet tumorigenesis.
Aminopeptidases function as exopeptidases which remove amino acids from the amino-terminus of proteins. Aminopeptidase N (CD13/APN) is highly expressed on the endothelium of growing vessels. Inhibitors of CD13/APN dramatically impair tumor growth.
Ectodomain shedding
It has become clear in the past years that ectodomain shedding is an initial step for the activation of specific receptors such as Notch, ErbB-4 and the angiopoietin receptor Tie-1. Notch-1 signaling is essential for endothelial differentiation, and tumor angiogenesis, while the angiopoietin receptor Tie-1 facilitates embryonic blood vessel formation. Upon binding of their ligands, Notch-1 and Tie-1 undergo proteolytic cleavage of the ectodomains by ADAM17 and ADAM10. This cleavage frees the cytoplasmic fragment for cellular signaling. In the case of Notch-1, it transfers to the nucleus.
Many cytokines and growth factors are synthesized as membrane-bound proforms which undergo proteolytic shedding for activation. The ephrins EPH receptor A2 and A3 are shed by ADAM10, creating cleaved soluble Eph receptors, which inhibit tumor angiogenesis in mice. Additional examples are the proteolytic shedding of soluble E-selectin, shedding of urokinase receptor (uPAR) by MMP-12 creating soluble uPAR which has chemotactic properties for leukocytes and progenitor cells, and the shedding of interleukin-6 receptors by ADAM10 and ADAM17 which facilitates interleukin-6 signaling in endothelial cells. Semaphorin 4D is cleaved from its membrane-bound form by MT1-MMP (MMP-14) in tumor cells; it then interacts with plexin B1 on endothelial cells, promoting pro-angiogenic chemotaxis. Shedding of a membrane-anchored cytokine or growth factor by ADAM proteinases may be relevant for various signal transduction events. Alternatively, shedding may be required for the ligand to diffuse to distant receptors. Shedding may be required for the down regulation of signals by removing signaling ligands, or cleavage and release of receptors. Release of the receptor may also generate soluble receptors which act as decoys by sequestering ligands. These findings indicate that ectodomain shedding is a ubiquitous process facilitating a wide variety of cellular events involved in angiogenesis. Because potent biological modifiers are generated, it is likely controlled by highly regulated mechanism. Along with ADAMs and MT-MMPs, membrane-bound serine proteases also may play a role in ectodomain shedding.
Proteolytic degradation of the extracellular matrix (ECM)
The formation of capillaries from pre-existing blood vessels requires the remodeling of both the pericapillary membrane of the parent venule and the local and distal ECM. At the onset of angiogenesis, endothelial cells (EC) must remodel three different barriers in order to migrate and invade the target tissue. First is the basement membrane between the endothelium and vascular smooth muscle cells or pericytes, followed by the fibrin gel formed from fibrinogen that is leaked from the vasculature, and finally the extracellular matrix in the target tissue. The vascular basement membrane is composed of type IV collagen, type XV collagen, type XVIII collagen, laminins, entactin, heparan sulfate proteoglycans, perlecan, and osteonectin. All of these components of the basement membrane are substrates for MMP-2, 3, 7, and 9, among others. Inhibitors of MMP activity have spotlighted the importance of these proteins in controlling angiogenesis. Recently, it has been discovered that small interfering RNA (siRNA)-mediated degradation of urokinase receptor and MMP-9 transcripts inhibits the formation of capillary-like structures in both in vitro and in vivo models of angiogenesis. After working their way through the basement membrane, EC must invade through a dense fibrin gel which is polymerized from fibrinogen derived from the vascular bed. Plasmin, an effective fibrinolysin produced by tPA or uPA, was thought to be essential in this process, but plasminogen-deficient mice do not display major defects of neovascularization in fibrin-rich tissues. These findings highlight the diverse array of proteolytic enzymes ECs use to remodel the ECM. For example, MMP-3, 7, 8, 12 and 13 can cleave fibrinogen.
MMP activity is one of the earliest and most sustained processes that take place during angiogenesis. By studying the transition from an avascular to a vascular tumor Fang et al. were able to identify the key role of MMP-2 in angiogenesis. MMP-2 expression and activity was increased in angiogenic tumors as compared with avascular tumors, and the addition of antisense oligonucleotides targeting MMP-2 inhibits the initiation of angiogenesis maintaining the avascular phenotype. This data along with other reports suggest that MMP activity is necessary to initiate the earliest stages of angiogenesis and tumor development. The creation of MMP deficient mice has provided important insight into the role of MMPs in the regulation of angiogenesis. For example, MMP-2 knockout mice develop normally but display significant inhibition of corneal angiogenesis.
Proteolytic fragments as regulators of angiogenesis
Numerous proteolytic fragments or domains of ECM proteins have been reported to exert positive or negative activity on angiogenesis. Native proteins which contain such domains with regulatory activity are normally inactive, most likely because the domains are cryptic segments hidden in the native protein structure. Angiostatin is a 38 kDa plasminogen fragment with angiogenesis inhibitor activity. Angiostatin fragments contain kringle domains which exert their inhibitory activity at several different levels; they inhibit endothelial cell migration and proliferation, increase apoptosis, and modulate the activity of focal adhesion kinase (FAK). Endostatin is a 20 kDa fragment of collagen XVIII. The major role of endostatin lies in its ability to potently inhibit endothelial cell migration and induce apoptosis. These effects are mediated by interacting and interfering with various angiogenesis-related proteins such as integrins and serine/threonine-specific protein kinases. Numerous studies have demonstrated that tropoelastin, the soluble precursor of elastin, or proteolytic elastin fragments have diverse biological properties. Nackman et al. demonstrated that elastase-generated elastin fragments mediate several characteristic features of aneurysmal disease which correlate with angiogenesis. Osteonectin is a metal-binding glycoprotein produced by many cell types including ECs. Lastly, endorepellin is a recently described inhibitor of angiogenesis derived from the carboxy terminus of perlecan. Nanomolar concentrations of endorepellin inhibit EC migration and angiogenesis in different in vitro and in vivo models by blocking EC adhesion to various substrates such as fibronectin and type I collagen.
Endogenous inhibitors or activators generated by proteolytic degradation of larger proteins mostly from the ECM have proven to contribute to the regulation of tumor growth and angiogenesis. This article mentions only a small fraction of the known proteolytic fragments which alter EC behavior and function during angiogenesis. This abundance has garnered increased attention because of their potential for anti-angiogenic and anti-cancer therapies.
Proteolytic activation of growth factors
Proteases not only modulate cell-matrix interactions but also can control the onset and progression of angiogenesis by activating angiogenic growth factors and cytokines. Hepatocyte growth factor (HGF), an angiogenesis promoting growth factor, is activated by HGF activation factor, a serine protease related to plasminogen. Several growth factors such as basic fibroblast growth factor (bFGF) and vascular endothelial growth factor (VEGF) are trapped in the ECM by various proteoglycans. The proteolytic degradation of these proteoglycans liberates the growth factors allowing them to reach their receptors and influence cellular behavior. Growth factors that indirectly affect angiogenesis are also targets of proteolytic activation. For example, plasminogen activators drive the activation of latent transforming growth factor beta (TGF-β) from bone ECM and thus modulate angiogenesis in bone.
Proteases not only have the ability to change the availability of growth factors, but can also modify their properties. This ability was shown for VEGF165, which is cleaved by MMP-3 or MMP-9 to a smaller molecule with properties similar to VEGF121. These two isoforms of VEGF have very different properties. VEGF165 induces a regular vessel pattern during tumor neovascularization. VEGF121 and the truncated VEGF165, in contrast, cause irregular patterns of neovascularization, most likely because their inability to bind heparan sulfates deprives them of the spatial information that is stored in the ECM. Another important factor in angiogenesis, stromal cell-derived factor-1 (SDF-1), is also modified by the aminodipeptidase dipeptidyl peptidase-4 (DPP4). Cleavage of SDF-1 reduces its heparan sulfate affinity, and its interactions with its receptor CXCR4 are reduced. The ADAM family of proteases is receiving increased attention for its ability to alter the balance between pro- and anti-angiogenic factors. ADAM17 is able to release active tumor necrosis factor-alpha (TNFα) and heparin-binding EGF-like growth factor (HB-EGF) from their membrane-bound precursors, which can indirectly affect angiogenesis.
Proteases as inhibitors of angiogenesis
Proteases not only facilitate angiogenesis, but they also have the ability to put the brakes on the process. One example of this is the processing of angiogenesis inhibitors by MMPs. As previously described, MMPs have been shown to cleave plasminogen and collagen XVIII into the endogenous angiogenesis inhibitors angiostatin and endostatin. MMP-2 itself possesses anti-angiogenic properties that are independent of its catalytic domain. Interactions between integrin αvβ3 and MMP-2 on the EC surface may be necessary for MMP-2 activity during angiogenesis. The hemopexin-like domain in the carboxy terminus of MMP-2 is able to block this interaction of active MMP-2 and integrin αvβ3 on the EC surface, which leads to inhibition of MMP-2 activity.
During angiogenesis ADAM15 is preferentially expressed on ECs. ADAM15 is able to interact with integrins αvβ3 and α5β1 through its disintegrin domain via an RGD (arginine-glycine-aspartic acid) motif. Most disintegrins contain this conserved RGD motif, but ADAM15 is the only member of the ADAM family to contain it. A recombinant disintegrin domain of human ADAM15 inhibits a variety of EC functions in vitro, including proliferation, adhesion, migration, and capillary formation. Overexpression of the ADAM15 disintegrin domain resulted in inhibition of angiogenesis, tumor growth, and metastasis. On the other hand, it has not been shown whether full-length ADAM15 plays an inhibitory role in vivo. ADAMTS1 and ADAMTS8 inhibit angiogenesis in vitro in two functional angiogenesis assays. Both enzymes inhibit bFGF-induced vascularization in the corneal pocket assay and inhibit VEGF-induced angiogenesis in the chorioallantoic membrane assay. Altogether, these data indicate that proteases can function as both positive and negative regulators of angiogenesis.
Proteolysis and cell migration
Angiogenesis requires the migration and invasive growth of cells. This is facilitated by a balanced interplay between detachment and formation of cell adhesions, which enables the cell to crawl forward through the ECM. The cell uses limited proteolytic activity at sites of individual focal adhesions via the formation of multiprotein complexes. Multiprotein complexes are localized in lipid rafts on the cell surface, where membrane-bound proteases are often incorporated. For example, leukocytes form complexes of urokinase (uPA), the urokinase receptor (uPAR), and integrins, which participate in cell adhesion and invasion. In these complexes, uPAR acts as an organizing center, forming noncovalent complexes with integrins, LRP-like proteins, and urokinase. Similar complexes are also found on ECs.
Uncontrolled proteolysis of the ECM
The proteolytic activities that take place during angiogenesis require precise spatial and temporal regulation. Without this control, excessive proteolysis could lead to damage of the tissue and the loss of anchorage points for migrating cells. This is illustrated by mice which are deficient in plasminogen activator inhibitor-1 (PAI-1). PAI-1 inhibits plasminogen activators and thus plasmin activation; therefore it could be assumed that PAI-1 deficiency would increase angiogenesis and tumor growth. Unexpectedly, when PAI-1-deficient mice were challenged with cancer cells on a collagenous matrix, angiogenesis and vascular stabilization were inhibited, hampering tumor growth. This finding was credited to the protective properties PAI-1 imparts against excessive degradation of the surrounding ECM by plasmin. Without this protection, the footholds used by endothelial cells to migrate and form capillary structures are destroyed. Uncontrolled proteolysis is also blamed for the disruption of vascular development and premature death in murine embryos deficient in the inhibitor reversion-inducing cysteine-rich protein with Kazal motifs (RECK). This is most likely due to uncontrolled MMP activity, because a partial rescue was obtained by simultaneously knocking out RECK and MMP-2.
Proteases involved in the recruitment of bone marrow derived cells during angiogenesis
Leukocytes and endothelial progenitor cells (EPCs) contribute to the initiation and guidance of new blood vessels. Monocytes produce a variety of pro-angiogenic factors. There is also a population of CD34-positive cells that can express endothelial-associated proteins, such as VE-cadherin and kinase insert domain receptor (KDR, VEGF receptor 2), which aid in influencing the progression of angiogenesis. The absence or dysfunction of these cells is implicated in impaired vascularization in cardiac and diabetic patients. MMP-9 plays a key role in mobilizing EPCs from the bone marrow. Heissig et al. have proposed a mechanism for how MMP-9 facilitates the availability of EPCs for angiogenesis. First, circulating VEGF induces MMP-9 expression in the bone marrow; MMP-9 is then able to cleave and release c-kit ligand. Activated c-kit then recruits hematopoietic, endothelial, and mast cell progenitor cells. These cells accumulate in the angiogenic area and produce large amounts of VEGF, tipping the scales in favor of angiogenesis.
MMP-9 is not the only protease shown to be involved in EPC-enhanced angiogenesis. Cathepsin L1 is active at neutral pH by associating with a p41 splice variant of the MHC class II-associated invariant chain, which is strongly expressed in EPCs. This ability to stay active at neutral pH may facilitate EPC invasion, remodeling of matrix collagens and gelatin, and neovascularization. Cathepsin L1 knockout mice exhibited impaired blood-flow restoration in ischemic limbs, indicating impaired neovascularization. Neovascularization is also impaired in mice treated with bone marrow-derived cells deficient in cathepsin L1 as compared with wild-type cells. The target through which cathepsin L1 stimulates angiogenesis has not yet been identified.
Maturation of newly formed blood vessels via proteases
It has been well established that smooth muscle-like pericytes play an important role in stabilizing newly formed blood vessels. Pericytes present in the stroma of tumors of breast cancer patients express MMP-9. Animal models deficient in MMP-9 display disturbed recruitment of pericytes. The inability to recruit pericytes severely affects the stability of vessels and the degree of vascularization of neuroblastomas. Aminopeptidase A may also be involved in pericyte recruitment, given its increased expression by activated pericytes in various pathological conditions associated with angiogenesis. The mechanism by which this protease facilitates vessel maturation has not yet been determined. Angiogenesis requires a fine balance between proteolytic activity and proteinase inhibition. Pericytes secrete TIMP-3, which inhibits MT1-MMP-dependent MMP-2 activation on endothelial cells, thus facilitating stabilization of newly formed microvessels. Co-cultures consisting of pericytes and endothelial cells induce the expression of TIMP-3 by pericytes, while endothelial cells produce TIMP-2. Together, these inhibitors stabilize the vasculature by inhibiting a variety of MMPs, ADAMs, and VEGF receptor 2.
Without pericyte coverage, immature vessels remain dependent on continuous exposure to angiogenic growth factors. As the reservoir of growth factors is removed, the endothelial cells do not survive and undergo caspase-induced apoptosis, while other proteases participate in the degradation and removal of the remaining cell debris.
Perspective
Proteases play numerous roles in angiogenesis, both in development and especially in pathological conditions. Because they are important regulators of tissue degradation and cell migration, it is expected that their inhibition would be beneficial for inhibiting tumor growth and vascularization. Promising results have been observed in animal studies, but clinical trials have failed to demonstrate similar results and are often accompanied by unacceptable side effects. This has spurred continued research, which has identified new families of proteases, such as the ADAMs, ADAMTSs, and MT-MMPs. Perhaps more significantly, a new paradigm has emerged in which proteases are essential for modulating growth factors and cytokines, generating biologically active fragments from the matrix, facilitating recruitment of bone marrow-derived cells, and stabilizing mature blood vessels. Better understanding of the various activities of proteases and their inhibitors will aid in more tailor-made treatments for numerous disorders.
References
Angiogenesis
Post-translational modification
Proteases | Proteases in angiogenesis | [
"Chemistry",
"Biology"
] | 5,527 | [
"Angiogenesis",
"Post-translational modification",
"Gene expression",
"Biochemical reactions"
] |
22,994,055 | https://en.wikipedia.org/wiki/Solomon%20Mikhlin | Solomon Grigor'evich Mikhlin (real name Zalman Girshevich Mikhlin; the family name is also transliterated as Mihlin or Michlin) (23 April 1908 – 29 August 1990) was a Soviet mathematician who worked in the fields of linear elasticity, singular integrals and numerical analysis: he is best known for the introduction of the symbol of a singular integral operator, which eventually led to the foundation and development of the theory of pseudodifferential operators.
Biography
He was born in the Rechytsa District, Minsk Governorate (in present-day Belarus) on 23 April 1908; he himself states in his resume that his father was a merchant, but this assertion could be untrue since, in that period, people sometimes lied about the profession of their parents in order to overcome political limitations on access to higher education. According to a different version, his father was a melamed, a teacher at a primary religious school (kheder), and the family was of modest means: according to the same source, Zalman was the youngest of five children. His first wife was Victoria Isaevna Libina: Mikhlin's book is dedicated to her memory. She died of peritonitis in 1961 during a boat trip on the Volga. In 1940 they adopted a son, Grigory Zalmanovich Mikhlin, who later emigrated to Haifa, Israel. His second wife was Eugenia Yakovlevna Rubinova, born in 1918, who was his companion for the rest of his life.
Education and academic career
He graduated from a secondary school in Gomel in 1923 and entered the State Herzen Pedagogical Institute in 1925. In 1927 he was transferred to the Department of Mathematics and Mechanics of Leningrad State University as a second-year student, passing all the exams of the first year without attending lectures. Among his university professors were Nikolai Maximovich Günther and Vladimir Ivanovich Smirnov. The latter became his master thesis supervisor: the topic of the thesis was the convergence of double series, and it was defended in 1929. Sergei Lvovich Sobolev studied in the same class as Mikhlin. In 1930 he started his teaching career, working in some Leningrad institutes for short periods, as Mikhlin himself records in his resume. In 1932 he got a position at the Seismological Institute of the USSR Academy of Sciences, where he worked until 1941: in 1935 he got the degree "Doktor nauk" in Mathematics and Physics, without having to earn the "kandidat nauk" degree, and finally in 1937 he was promoted to the rank of professor. During World War II he became professor at the Kazakh University in Alma Ata. From 1944, Mikhlin was a professor at Leningrad State University. From 1964 to 1986 he headed the Laboratory of Numerical Methods at the Research Institute of Mathematics and Mechanics of the same university: from 1986 until his death he was a senior researcher at that laboratory.
Honours
He received the Order of the Badge of Honour in 1961: the names of the recipients of this prize were usually published in newspapers. He was awarded the Laurea honoris causa by the Karl-Marx-Stadt (now Chemnitz) Polytechnic in 1968 and was elected a member of the German Academy of Sciences Leopoldina in 1970 and of the Accademia Nazionale dei Lincei in 1981. As Fichera states, in his own country he did not receive honours comparable to his scientific stature, mainly because of the racial policy of the communist regime, briefly described in the following section.
Influence of communist antisemitism
He lived in one of the most difficult periods of contemporary Russian history. The state of the mathematical sciences during this period is well documented: the rise of Marxist ideology in USSR universities and the Academy was one of the main themes of that period. Local administrators and communist party functionaries interfered with scientists on either ethnic or ideological grounds. As a matter of fact, during the war and during the creation of a new academic system, Mikhlin did not experience the same difficulties as younger Soviet scientists of Jewish origin: for example, he was included in the Soviet delegation in 1958, at the International Congress of Mathematicians in Edinburgh. However, Fichera, examining the life of Mikhlin, finds it surprisingly similar to the life of Vito Volterra under the fascist regime. He notes that antisemitism in communist countries took different forms from its Nazi counterpart: the communist regime did not aim at the outright murder of Jews, but imposed on them a number of constrictions, sometimes very cruel, in order to make their life difficult. During the period from 1963 to 1981, Fichera met Mikhlin at several conferences in the Soviet Union, and realised how he was in a state of isolation, almost marginalized inside his native community: Fichera describes several episodes revealing this fact. Perhaps the most illuminating one is the election of Mikhlin as a member of the Accademia Nazionale dei Lincei: in June 1981, Solomon G. Mikhlin was elected Foreign Member of the class of mathematical and physical sciences of the Lincei. He was first proposed as a winner of the Antonio Feltrinelli Prize, but the almost certain confiscation of the prize by the Soviet authorities induced the Lincei members to elect him as a member instead: they decided to honour him in a way that no political authority could alienate. However, Mikhlin was not allowed to visit Italy by the Soviet authorities, so Fichera and his wife brought the tiny golden lynx, the symbol of Lincei membership, directly to Mikhlin's apartment in Leningrad on 17 October 1981: the only guests at that "ceremony" were Vladimir Maz'ya and his wife Tatyana Shaposhnikova.
Death
According to an account of a conversation with Mark Vishik and Olga Oleinik, on 29 August 1990 Mikhlin left home to buy medicines for his wife Eugenia. On public transport, he suffered a lethal stroke. He had no documents with him, and was therefore identified only some time after his death: this may be the cause of the differing death dates reported in several biographies and obituary notices. Fichera also writes that Mikhlin's wife Eugenia survived him by only a few months.
Work
Research activity
He was the author of monographs and textbooks which became classics for their style. His research is devoted mainly to the following fields.
Elasticity theory and boundary value problems
In mathematical elasticity theory, Mikhlin was concerned with three themes: the plane problem (mainly from 1932 to 1935), the theory of shells (from 1954) and the Cosserat spectrum (from 1967 to 1973). Dealing with the plane elasticity problem, he proposed two methods for its solution in multiply connected domains. The first one is based upon the so-called complex Green's function and the reduction of the related boundary value problem to integral equations. The second method is a certain generalization of the classical Schwarz algorithm for the solution of the Dirichlet problem in a given domain by splitting it into simpler problems in smaller domains whose union is the original one. Mikhlin studied its convergence and gave applications to special applied problems. He proved existence theorems for the fundamental problems of plane elasticity involving inhomogeneous anisotropic media: these results are collected in his book on the subject. Concerning the theory of shells, several of Mikhlin's articles deal with it. He studied the error of the approximate solution for shells, similar to plane plates, and found that this error is small for the so-called purely rotational state of stress. As a result of his study of this problem, Mikhlin also gave a new (invariant) form of the basic equations of the theory. He also proved a theorem on perturbations of positive operators in a Hilbert space which let him obtain an error estimate for the problem of approximating a sloping shell by a plane plate. Mikhlin also studied the spectrum of the operator pencil of the classical linear elastostatic operator or Navier–Cauchy operator
$$ \Delta\mathbf{u} + \omega\,\nabla(\nabla\cdot\mathbf{u}) = 0, $$
where $\mathbf{u}$ is the displacement vector, $\Delta$ is the vector Laplacian, $\nabla$ is the gradient, $\nabla\cdot$ is the divergence and $\omega$ is a Cosserat eigenvalue. The full description of the spectrum and the proof of the completeness of the system of eigenfunctions are also due to Mikhlin, and partly to V.G. Maz'ya in their only joint work.
Singular integrals and Fourier multipliers
He is one of the founders of the multi-dimensional theory of singular integrals, jointly with Francesco Tricomi and Georges Giraud, and also one of its main contributors. By singular integral we mean an integral operator of the following form
$$ (Au)(x) = \int_{\mathbb{R}^n} \frac{f(x,\theta)}{r^n}\,u(y)\,\mathrm{d}y, $$
where $x$ is a point in the n-dimensional euclidean space, $r = |y - x|$ and $\theta$ are the hyperspherical coordinates (or the polar coordinates or the spherical coordinates respectively when $n = 2$ or $n = 3$) of the point $y$ with respect to the point $x$. Such operators are called singular since the singularity of the kernel of the operator is so strong that the integral does not exist in the ordinary sense, but only in the sense of the Cauchy principal value. Mikhlin was the first to develop a theory of singular integral equations as a theory of operator equations in function spaces. In two papers he found a rule for the composition of double singular integrals (i.e. in 2-dimensional euclidean spaces) and introduced the very important notion of the symbol of a singular integral. This enabled him to show that the algebra of bounded singular integral operators is isomorphic to the algebra of either scalar or matrix-valued functions. He proved Fredholm's theorems for singular integral equations and systems of such equations under the hypothesis of non-degeneracy of the symbol: he also proved that the index of a single singular integral equation in euclidean space is zero. In 1961 Mikhlin developed a theory of multidimensional singular integral equations on Lipschitz spaces. These spaces are widely used in the theory of one-dimensional singular integral equations: however, the direct extension of the related theory to the multidimensional case meets some technical difficulties, and Mikhlin suggested another approach to this problem. Precisely, he obtained the basic properties of this kind of singular integral equations as a by-product of the Lp-space theory of these equations. Mikhlin also proved a now-classical theorem on multipliers of the Fourier transform in the Lp-space, based on an analogous theorem of Józef Marcinkiewicz on Fourier series. A complete collection of his results in this field up to 1965, as well as the contributions of other mathematicians like Tricomi, Giraud, Calderón and Zygmund, is contained in his monograph on the subject.
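For reference, one common textbook formulation of the multiplier theorem mentioned here (a standard statement of the result, not quoted from the sources of this article) reads: if a function $m$ on $\mathbb{R}^n \setminus \{0\}$ satisfies
$$ |\xi|^{|\alpha|}\,\bigl|\partial_{\xi}^{\alpha} m(\xi)\bigr| \le C \quad \text{for all multi-indices } |\alpha| \le \left\lfloor \tfrac{n}{2} \right\rfloor + 1, $$
then $m$ is an Lp Fourier multiplier, i.e.
$$ \bigl\|\mathcal{F}^{-1}\bigl(m\,\widehat{f}\,\bigr)\bigr\|_{L^{p}(\mathbb{R}^{n})} \le C_{p,n}\,\|f\|_{L^{p}(\mathbb{R}^{n})}, \qquad 1 < p < \infty. $$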
A synthesis of the theories of singular integrals and linear partial differential operators was accomplished, in the mid 1960s, by the theory of pseudodifferential operators: Joseph J. Kohn, Louis Nirenberg, Lars Hörmander and others carried out this synthesis, but the theory owes its rise to the discoveries of Mikhlin, as is universally acknowledged. This theory has numerous applications to mathematical physics. Mikhlin's multiplier theorem is widely used in different branches of mathematical analysis, particularly in the theory of differential equations. The analysis of Fourier multipliers was later advanced by Lars Hörmander, Walter Littman, Elias Stein, Charles Fefferman and others.
Partial differential equations
In four papers, published in the period 1940–1942, Mikhlin applies the potentials method to the mixed problem for the wave equation. In particular, he solves the mixed problem for the wave equation in two space dimensions in the half plane by reducing it to the planar Abel integral equation. For plane domains with a sufficiently smooth curvilinear boundary he reduces the problem to an integro-differential equation, which he is also able to solve when the boundary of the given domain is analytic. In 1951 Mikhlin proved the convergence of the Schwarz alternating method for second order elliptic equations. He also applied the methods of functional analysis, at the same time as Mark Vishik but independently of him, to the investigation of boundary value problems for degenerate second order elliptic partial differential equations.
Numerical mathematics
His work in this field can be divided into several branches: in the following text, four main branches are described, and a sketch of his later research is also given. The papers within the first branch are summarized in a monograph, which contains the study of the convergence of variational methods for problems connected with positive operators, in particular, for some problems of mathematical physics. Both "a priori" and "a posteriori" estimates of the errors concerning the approximation given by these methods are proved. The second branch deals with the notion of stability of a numerical process introduced by Mikhlin himself. When applied to the variational method, this notion enables him to state necessary and sufficient conditions in order to minimize errors in the solution of the given problem when the error arising in the numerical construction of the algebraic system resulting from the application of the method itself is sufficiently small, no matter how large the system's order is. The third branch is the study of variational-difference and finite element methods. Mikhlin studied the completeness of the coordinate functions used in these methods in Sobolev spaces, deriving the order of approximation as a function of the smoothness properties of the functions being approximated. He also characterized the class of coordinate functions which give the best order of approximation, and studied the stability of the variational-difference process and the growth of the condition number of the variational-difference matrix. Mikhlin also studied the finite element approximation in weighted Sobolev spaces related to the numerical solution of degenerate elliptic equations. He found the optimal order of approximation for some methods of solution of variational inequalities. The fourth branch of his research in numerical mathematics is a method for the solution of Fredholm integral equations which he called the resolvent method: its essence relies on the possibility of substituting the kernel of the integral operator by its variational-difference approximation, so that the resolvent of the new kernel can be expressed by simple recurrence relations. This eliminates the need to construct and solve large systems of equations. During his last years, Mikhlin contributed to the theory of errors in numerical processes, proposing the following classification of errors.
Approximation error: is the error due to the replacement of an exact problem by an approximating one.
Perturbation error: is the error due to the inaccuracies in the computation of the data of the approximating problem.
Algorithm error: is the intrinsic error of the algorithm used for the solution of the approximating problem.
Rounding error: is the error due to the limits of computer arithmetic.
This classification is useful since it enables one to develop computational methods adjusted in order to diminish the errors of each particular type, following the divide et impera (divide and rule) principle.
Teaching activity
He was the "kandidat nauk" advisor of Tatyana O. Shaposhnikova. He was also mentor and friend of Vladimir Maz'ya: he was never his official supervisor, but his friendship with the young undergraduate Maz'ya had a great influence on shaping his mathematical style.
Selected publications
Books
The book in which Mikhlin summarized his results on the plane elasticity problem: a widely known monograph in the theory of integral equations.
A masterpiece in the multidimensional theory of singular integrals and singular integral equations, summarizing all the results from the beginning to the year of publication, and also sketching the history of the subject.
This book summarizes the contributions of Mikhlin and of the former Soviet school of numerical analysis to the problem of error analysis in numerical solutions of various kinds of equations: it was also reviewed in the Bulletin of the American Mathematical Society.
Papers
The paper, with French title and abstract, where Solomon Mikhlin introduces the symbol of a singular integral operator as a means to calculate the composition of such kinds of operators and solve singular integral equations: the integral operators considered here are defined by integration over the whole n-dimensional (for n ≥ 2) euclidean space.
In this paper, with French title and abstract, Solomon Mikhlin extends the definition of the symbol of a singular integral operator, introduced in the preceding paper, to integral operators defined by integration on an (n − 1)-dimensional closed manifold (for n ≥ 3) in n-dimensional euclidean space.
See also
Linear elasticity
Mikhlin multiplier theorem
Multiplier (Fourier analysis)
Singular integrals
Singular integral equations
Notes
References
Biographical and general references
A detailed commemorative paper, referencing other works for the bibliographical details.
A short survey of the work of Mikhlin by a friend and pupil: not as complete as the commemorative paper above, but very useful for the English-speaking reader.
See also the final version available from the "George Lorentz" section of the Approximation Theory web page at the Mathematics Department of the Ohio State University (retrieved on 25 October 2009).
Some vivid recollections about Gaetano Fichera by his colleague and friend Vladimir Gilelevich Maz'ya: there is a short description of the "ceremony" for the election of Mikhlin as a foreign member of the Accademia Nazionale dei Lincei.
Solomon Grigor'evich Mikhlin's entry at the Russian Wikipedia, retrieved 28 May 2010.
An official resume written by Mikhlin himself for use by the public authorities in the former Soviet Union: it contains very useful (if not unique) information about his early career and schooling.
Scientific references
External links
Memorial page at the St. Petersburg Mathematical Pantheon.
1908 births
1990 deaths
People from Rechytsa district
People from Rechitsky Uyezd
Belarusian Jews
Jewish scientists
Mathematical analysts
Mathematical physicists
Soviet mathematicians
Herzen University alumni | Solomon Mikhlin | [
"Mathematics"
] | 3,730 | [
"Mathematical analysis",
"Mathematical analysts"
] |
22,998,776 | https://en.wikipedia.org/wiki/VB%2010 | VB 10 or Van Biesbroeck's star is a small and dim red dwarf located in the constellation Aquila. It is part of a binary star system. VB 10 is historically notable as it was the least luminous and least massive known star from its discovery in 1944 until 1982, when LHS 2924 was shown to be less luminous. Although it is relatively close to Earth, at about 19 light years, VB 10 is a dim magnitude 17, making it difficult to image with amateur telescopes, as it can get lost in the glare of the primary star.
VB 10 is also the primary standard for the M8V spectral class.
History
VB 10 was discovered in 1944 by the astronomer George van Biesbroeck using the Otto Struve reflector telescope at the McDonald Observatory. He found it while surveying the telescopic field of view of the high-proper-motion red dwarf Gliese 752 (Wolf 1055) for companions. Wolf 1055 had been catalogued 25 years earlier by the German astronomer Max Wolf using similar astrophotographic techniques. It is designated VB 10 in the 1961 publication of Van Biesbroeck's star catalog. Later, other astronomers began referring to it as Van Biesbroeck's star in honor of its discoverer. Because it is so dim and so close to its much brighter primary star, earlier astronomical surveys missed it even though its large parallax and large proper motion should have made it stand out on photographic plates taken at different times.
Characteristics
VB 10 has an extremely low luminosity with a baseline absolute magnitude of nearly 19 and an apparent magnitude of 17.3 (somewhat variable), making it very difficult to see.
Mathematical formulae for calculating apparent magnitude show that, if VB 10 occupied the place of the Sun, it would shine in Earth's sky at a magnitude of −12.87, approximately the same magnitude as that of the full moon.
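A sketch of that calculation, assuming an absolute magnitude of about M ≈ 18.7 (consistent with the "nearly 19" quoted above) and the standard distance-modulus relation:
$$ m = M + 5\log_{10}\!\left(\frac{d}{10\ \mathrm{pc}}\right), \qquad d = 1\ \mathrm{AU} \approx 4.85\times 10^{-6}\ \mathrm{pc}, $$
$$ m \approx 18.7 + 5\log_{10}\!\left(4.85\times 10^{-7}\right) \approx 18.7 - 31.6 \approx -12.9, $$
in agreement with the quoted value and with the apparent magnitude of the full moon.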
Later researchers also noted that its mass, at 0.08 solar mass, is right at the lower limit needed to create internal pressures and temperatures high enough to initiate nuclear fusion and actually be a star rather than a brown dwarf. At the time of its discovery it was the lowest-mass star known; the previous record holder was Wolf 359.
VB 10 is also notable by its very large proper motion, moving more than one arc second a year through the sky as seen from Earth.
Flare star
VB 10 is a variable star and is identified in the General Catalogue of Variable Stars as V1298 Aquilae. It is a UV Ceti-type variable star and is known to be subject to frequent flare events. Its dynamics were studied with the Hubble Space Telescope in the mid-1990s. Although VB 10 has a normally low surface temperature of 2600 K, it was found to produce violent flares of up to 100,000 K. This came as a surprise to astronomers. It had previously been assumed that low-mass red dwarfs would have insignificant or nonexistent magnetic fields, which are necessary for the production of solar flares. The dwarfs were believed to lack the radiative zone just outside the star's core that powers the dynamo of stars like our Sun. Nevertheless, the detection of flares indicates that some as yet unknown process allows the solely convective cores of low-mass stars to produce magnetic fields sufficient to power such outbursts.
Binary star
VB 10 is the secondary star of a bound binary star system. The primary is called Gliese 752, and hence VB 10 is also referred to as Gliese 752 B. The primary star is much larger and brighter. The two stars are separated by about 74 arc seconds (~434 AU).
Claims of a planetary system
In May 2009, astronomers from NASA's Jet Propulsion Laboratory, Pasadena, California, announced that they had found evidence of a planet orbiting VB 10, which they designated VB 10b. The Hale Telescope at the Palomar Observatory was used to detect evidence of this planet using the astrometry method. The new planet was claimed to have a mass 6 times that of Jupiter and an orbital period of 270 days. However, subsequent studies using Doppler spectroscopy failed to detect the radial velocity variations that would be expected if such a planet was orbiting this small star. The claimants of VB 10b note that these Doppler measurements only rule out planets more massive than 3 times the mass of Jupiter, but this limit is only half the reported best-fit mass of the planet as originally claimed. The claims for this planet thus fall into a long history of claimed astrometric extrasolar planet detections that were subsequently refuted.
By 2016, it was suspected that the asymmetric debris disk signal was mistaken for the long-period planet.
See also
List of smallest stars
List of stars named after people
References
External links
Aquila (constellation)
M-type main-sequence stars
Binary stars
Flare stars
0752
Aquilae, V1298
Hypothetical planetary systems
J19165762+0509021 | VB 10 | [
"Astronomy"
] | 1,045 | [
"Aquila (constellation)",
"Constellations"
] |
22,998,776 | https://en.wikipedia.org/wiki/C20H20 | {{DISPLAYTITLE:C20H20}}
The molecular formula C20H20 (molar mass: 260.37 g/mol, exact mass: 260.1565 u) may refer to:
Dodecahedrane
Pagodane
Molecular formulas | C20H20 | [
"Physics",
"Chemistry"
] | 55 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
23,001,219 | https://en.wikipedia.org/wiki/Diffusing-wave%20spectroscopy | Diffusing-wave spectroscopy (DWS) is an optical technique derived from dynamic light scattering (DLS) that studies the dynamics of scattered light in the limit of strong multiple scattering. It has been widely used in the past to study colloidal suspensions, emulsions, foams, gels, biological media and other forms of soft matter. If carefully calibrated, DWS allows the quantitative measurement of microscopic motion in a soft material, from which the rheological properties of the complex medium can be extracted via the microrheology approach.
One-speckle diffusing-wave spectroscopy
Laser light is sent to the sample, and the outgoing transmitted or backscattered light is detected by an optoelectric sensor. The light intensity detected is the result of the interference of all the optical waves coming from the different light paths.
The signal is analysed by calculating the intensity autocorrelation function called g2.
For the case of non-interacting particles suspended in a (complex) fluid, a direct relation between g2 − 1 and the mean squared displacement of the particles ⟨Δr²⟩ can be established. Let us denote by P(s) the probability density function (PDF) of the photon path length s. The relation can be written as follows:
$$ g_2(t) - 1 = \left[\int_0^\infty P(s)\,\exp\!\left(-\tfrac{1}{3}\,k_0^2\,\langle\Delta r^2(t)\rangle\,\frac{s}{l^*}\right)\mathrm{d}s\right]^2 $$
with $k_0 = 2\pi n/\lambda$ the wavenumber of light in the medium and $l^*$ the transport mean free path of scattered light.
For simple cell geometries, it is thus possible to calculate the mean squared displacement of the particles ⟨Δr²⟩ from the measured g2 − 1 values analytically. For example, for the backscattering geometry, an infinitely thick cell, large laser spot illumination and detection of photons coming from the center of the spot, the relationship between g2 − 1 and ⟨Δr²⟩ is:
$$ g_2(t) - 1 = \exp\!\left(-2\gamma\,k_0\sqrt{\langle\Delta r^2(t)\rangle}\right), $$
where the value of γ is around 2.
For less thick cells and in transmission, the relationship depends also on l* (the transport length).
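As an illustration, a minimal numerical sketch of the backscattering relation above; the parameter values (wavelength, refractive index, diffusion coefficient) are arbitrary examples, not taken from this article:

```python
import numpy as np

# Backscattering DWS relation: g2(tau) - 1 = exp(-2*gamma*k0*sqrt(MSD(tau))).
n = 1.33                       # refractive index of the suspending fluid
lam = 532e-9                   # laser wavelength [m]
k0 = 2 * np.pi * n / lam       # wavenumber in the medium [1/m]
gamma = 2.0                    # geometry factor, ~2 in backscattering
D = 1e-12                      # example particle diffusion coefficient [m^2/s]

tau = np.logspace(-7, -3, 50)                    # lag times [s]
msd = 6 * D * tau                                # Brownian mean squared displacement
g2m1 = np.exp(-2 * gamma * k0 * np.sqrt(msd))    # modelled g2 - 1

# Inverting the same relation recovers the MSD from a measured g2 - 1:
msd_recovered = (np.log(g2m1) / (-2 * gamma * k0)) ** 2
assert np.allclose(msd, msd_recovered)
```

The same inversion, applied point by point to a measured correlation function, is what allows microrheological quantities to be extracted in this geometry.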
For quasi-transparent cells, an angle-independent variant method called cavity amplified scattering spectroscopy makes use of an integrating sphere to isotropically probe samples from all directions, elongating photon paths through the sample in the process, allowing for the study of low turbidity samples under the DWS formalism.
Multispeckle diffusing-wave spectroscopy (MSDWS)
This technique either uses a camera to detect many speckle grains (see speckle pattern) or a ground glass to create a large number of speckle realizations (Echo-DWS). In both cases an average over a large number of statistically independent intensity values is obtained, allowing a much faster data acquisition time.
MSDWS is particularly well adapted to the study of slow dynamics and non-ergodic media. Echo-DWS allows seamless integration of MSDWS into a traditional DWS scheme with superior temporal resolution down to 12 ns. Camera-based adaptive image processing allows online measurement of particle dynamics, for example during drying.
References
External links
Diffusing Wave Spectroscopy Overview with video
Diffusing Wave Spectroscopy Overview with Animations
Particle Sizing using Diffusing Wave Spectroscopy
Spectroscopy
Soft matter | Diffusing-wave spectroscopy | [
"Physics",
"Chemistry",
"Materials_science"
] | 625 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Soft matter",
"Condensed matter physics",
"Spectroscopy"
] |
23,001,725 | https://en.wikipedia.org/wiki/Lurgi%E2%80%93Ruhrgas%20process | The Lurgi–Ruhrgas process is an above-ground coal liquefaction and shale oil extraction technology. It is classified as a hot recycled solids technology.
History
The Lurgi–Ruhrgas process was originally invented in the 1940s and further developed in the 1950s for the low-temperature liquefaction of lignite (brown coal). The technology is named after its developers, Lurgi Gesellschaft für Wärmetechnik G.m.b.H. and Ruhrgas AG. Over time, the process was used for coal processing in Japan, Germany, the United Kingdom, Argentina, and the former Yugoslavia. The plant in Japan also cracked petroleum oils to olefins.
In 1947–1949, the Lurgi–Ruhrgas process was used in Germany for shale oil production. In Lukavac, Bosnia and Herzegovina, two retorts for the liquefaction of lignite were in operation from 1963 to 1968. The capacity of the plant was 850 tons of lignite per day. The plant in Lincolnshire, the United Kingdom, operated in 1978–1979 with a capacity of 900 tons of coal per day. In the late 1960s and early 1970s, oil shales from different European countries and from the Green River Formation of Colorado in the United States were tested at Lurgi's pilot plant in Frankfurt. In the United States, the technology was promoted in cooperation with Dravo Corporation. In the 1970s, the technology was licensed to the Rio Blanco Shale Oil Project for construction of a modular retort in combination with the modified in situ process. However, this plan was terminated.
In 1980, the Natural Resources Authority of Jordan commissioned from the Klöckner-Lurgi consortium a pre-feasibility study of construction of an oil shale retorting complex in Jordan using the Lurgi–Ruhrgas process. However, although the study found the technology feasible, it was never implemented.
Technology
The Lurgi–Ruhrgas process is a hot recycled solids technology, which processes fine particles of coal or oil shale. As a heat carrier, it uses spent char or spent oil shale (oil shale ash), mixed with sand or other more durable materials. In this process, crushed coal or oil shale is fed into the top of the retort. In the retort, the coal or oil shale is mixed with the heated char or spent oil shale particles in a mechanical mixer (screw conveyor). The heat is transferred from the heated char or spent oil shale to the coal or raw oil shale, causing pyrolysis. As a result, oil shale decomposes to shale oil vapors, oil shale gas and spent oil shale. The oil vapor and product gases pass through a hot cyclone for cleaning before being sent to a condenser. In the condenser, shale oil is separated from the product gases.
The spent oil shale, still including residual carbon (char), is burnt at a lift pipe combustor to heat the process. If necessary, additional fuel oil is used for combustion. During the combustion process, heated solid particles in the pipe are moved to the surge bin by pre-heated air that is introduced from the bottom of the pipe. At the surge bin, solids and gases are separated, and solid particles are transferred to the mixer unit to conduct the pyrolysis of the raw oil shale.
One of the disadvantages of this technology is that the produced shale oil vapors are mixed with shale ash, causing impurities in the shale oil. Ensuring the quality of the produced shale oil is complicated because, compared with other mineral dusts, shale ash is more difficult to collect.
See also
Galoter process
Alberta Taciuk process
Petrosix process
Kiviter process
TOSCO II process
Fushun process
Paraho process
KENTORT II
Union process
References
Oil shale technology
Coal
Thermal treatment | Lurgi–Ruhrgas process | [
"Chemistry"
] | 783 | [
"Petroleum technology",
"Oil shale technology",
"Synthetic fuel technologies"
] |
5,227,127 | https://en.wikipedia.org/wiki/Lanthanum%20hexaboride | Lanthanum hexaboride (LaB6, also called lanthanum boride and LaB) is an inorganic chemical, a boride of lanthanum. It is a refractory ceramic material that has a melting point of 2210 °C, and is insoluble in water and hydrochloric acid. It is extremely hard, with a Mohs hardness of 9.5. It has a low work function and one of the highest electron emissivities known, and is stable in vacuum. Stoichiometric samples are colored intense purple-violet, while boron-rich ones (above LaB6.07) are blue. Ion bombardment changes its color from purple to emerald green. LaB6 is a superconductor with a relatively low transition temperature of 0.45 K.
Uses
Electron Sources
The principal use of lanthanum hexaboride is in hot cathodes, either as a single crystal or as a coating deposited by physical vapor deposition. Hexaborides, such as lanthanum hexaboride (LaB6) and cerium hexaboride (CeB6), have low work functions, around 2.5 eV. They are also somewhat resistant to cathode poisoning. Cerium hexaboride cathodes have a lower evaporation rate at 1700 K than lanthanum hexaboride, but the rates become equal at temperatures above 1850 K. Cerium hexaboride cathodes have one and a half times the lifetime of lanthanum hexaboride, due to the former's higher resistance to carbon contamination. Hexaboride cathodes are about ten times "brighter" than tungsten cathodes and have a 10–15 times longer lifetime. Devices and techniques in which hexaboride cathodes are used include electron microscopes, microwave tubes, electron lithography, electron beam welding, X-ray tubes, free electron lasers and several types of electric propulsion technologies. Lanthanum hexaboride slowly evaporates from the heated cathodes and forms deposits on the Wehnelt cylinders and apertures.
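The practical advantage of a low work function can be seen from the standard Richardson–Dushman relation for thermionic emission (a textbook formula, not taken from this article):
$$ J = A\,T^{2}\exp\!\left(-\frac{W}{k_{\mathrm{B}}T}\right), $$
where $J$ is the emitted current density, $A$ the Richardson constant, $T$ the cathode temperature and $W$ the work function. At a fixed operating temperature, lowering $W$ from tungsten's roughly 4.5 eV to about 2.5 eV increases $J$ by several orders of magnitude.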
X-Ray Diffraction Reference
LaB6 is also used as an X-ray powder diffraction (XRD or pXRD) peak position and line shape reference standard. It is therefore used to calibrate measured diffractometer angles and to determine instrumental broadening of diffraction peaks. The latter makes crystallite size and strain measurements by XRD possible.
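As an illustration of how such a standard is used in practice, a minimal sketch of an instrumental-broadening correction followed by a Scherrer-type size estimate; all numerical values, and the Gaussian quadrature-subtraction assumption, are illustrative, not from this article:

```python
import math

# Peak widths (FWHM, in degrees 2-theta) at the same diffraction angle:
beta_measured = 0.30        # width of a sample peak
beta_instrument = 0.10      # width measured on the LaB6 line-shape standard

# Remove instrumental broadening in quadrature (Gaussian peak approximation).
beta_sample = math.sqrt(beta_measured**2 - beta_instrument**2)

# Scherrer equation: tau = K * lambda / (beta * cos(theta)).
K = 0.9                     # dimensionless shape factor
wavelength = 1.5406e-10     # Cu K-alpha wavelength [m]
two_theta = 30.0            # peak position [degrees 2-theta]

beta_rad = math.radians(beta_sample)
theta = math.radians(two_theta / 2)
tau = K * wavelength / (beta_rad * math.cos(theta))
print(f"apparent crystallite size: {tau * 1e9:.1f} nm")
```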
References
Lanthanum compounds
Borides
Refractory materials
Ceramic materials | Lanthanum hexaboride | [
"Physics",
"Engineering"
] | 529 | [
"Refractory materials",
"Materials",
"Ceramic materials",
"Ceramic engineering",
"Matter"
] |
5,227,275 | https://en.wikipedia.org/wiki/Cerium%20hexaboride | Cerium hexaboride (CeB6, also called cerium boride, CeBix, CEBIX, and (incorrectly) CeB) is an inorganic chemical, a boride of cerium. It is a refractory ceramic material. It has a low work function, one of the highest electron emissivities known, and is stable in vacuum. The principal use of cerium hexaboride is as a coating for hot cathodes, which usually operate at a temperature of 1450 °C.
Applications
Lanthanum hexaboride (LaB6) and cerium hexaboride (CeB6) are used as coatings on some high-current hot cathodes. Hexaborides show low work functions, around 2.5 eV. They are also somewhat resistant to cathode poisoning. Cerium boride cathodes show a lower evaporation rate at 1700 K than lanthanum boride, but the rates become equal at 1850 K and above. Cerium boride cathodes have one and a half times the lifetime of lanthanum boride, due to their higher resistance to carbon contamination. Boride cathodes are about ten times as "bright" as tungsten ones and have a 10–15 times longer lifetime. They are used e.g. in electron microscopes, microwave tubes, electron lithography, electron beam welding, X-ray tubes, and free electron lasers.
Cerium hexaboride is also being investigated for use in radiation detection devices due to its efficiency in detecting photon radiation. Research indicates that a three-layer sensor configuration incorporating CeB6 can outperform single-layer designs in terms of energy resolution and counting rate.
Cerium hexaboride, like lanthanum hexaboride, slowly evaporates during cathode operation. In conditions where CeB6 cathodes are operated below 1850 K, CeB6 should maintain its optimum shape longer and therefore last longer. While the process is about 30% slower than with lanthanum boride, the cerium boride deposits are reported to be more difficult to remove.
Ce heavy fermion compounds have attracted much attention since they show a variety of unusual and interesting macroscopic properties. In particular, interest has been focused on the 4f narrow-band occupancy, and the role of hybridization with the conduction band states which strongly affects the physical properties.
References
Cerium compounds
Borides
Refractory materials
Ceramic materials | Cerium hexaboride | [
"Physics",
"Engineering"
] | 540 | [
"Refractory materials",
"Materials",
"Ceramic materials",
"Ceramic engineering",
"Matter"
] |
5,229,194 | https://en.wikipedia.org/wiki/Kinetic%20Monte%20Carlo | The kinetic Monte Carlo (KMC) method is a Monte Carlo computer-simulation method intended to simulate the time evolution of some processes occurring in nature. Typically these are processes that occur with known transition rates among states. These rates are inputs to the KMC algorithm; the method itself cannot predict them.
The KMC method is essentially the same as the dynamic Monte Carlo method and the Gillespie algorithm.
Algorithms
One possible classification of KMC algorithms is as rejection-KMC (rKMC) and rejection-free-KMC (rfKMC).
Rejection-free KMC
A rfKMC algorithm, often only called KMC, for simulating the time evolution of a system where some processes can occur with known rates $r_i$, can be written for instance as follows:
1. Set the time $t = 0$.
2. Choose an initial state k.
3. Form the list of all $N_k$ possible transition rates in the system, $r_{ki}$, from state k into a generic state i. States that do not communicate with k will have $r_{ki} = 0$.
4. Calculate the cumulative function $R_{ki} = \sum_{j=1}^{i} r_{kj}$ for $i = 1, \dots, N_k$. The total rate is $Q_k = R_{k,N_k}$.
5. Get a uniform random number $u' \in (0,1]$.
6. Find the event to carry out, i, by finding the i for which $R_{k,i-1} < u' Q_k \leq R_{k,i}$ (this can be achieved efficiently using binary search).
7. Carry out event i (update the current state $k \rightarrow i$).
8. Get a new uniform random number $u'' \in (0,1]$.
9. Update the time with $t = t + \Delta t$, where $\Delta t = Q_k^{-1} \ln(1/u'')$. Note that this time interval represents the time elapsed between the prior event and this one, rather than the time interval between this event and the next one.
10. Return to step 3.
(Note: because the average value of $\ln(1/u'')$ is equal to unity, the same average time scale can be obtained by instead using $\Delta t = Q_k^{-1}$ in step 9. In this case, however, the delay associated with transition i will not be drawn from the distribution described by the rate $Q_k$, but will instead be the mean of that distribution.)
This algorithm is known in different sources variously as the residence-time algorithm, the n-fold way, or the Bortz-Kalos-Lebowitz (BKL) algorithm. It is important to note that the timestep involved is a function of the probability that all events i did not occur.
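As an illustration, a minimal Python sketch of one such rejection-free step; the function name and data layout are this example's own choices, not from any particular KMC code:

```python
import bisect
import math
import random
from itertools import accumulate

def kmc_step(rates, t):
    """One rejection-free (BKL) KMC step over the given rate list.

    rates -- transition rates r_i out of the current state (all >= 0)
    t     -- current simulation time
    Returns the index of the chosen event and the updated time.
    """
    cumulative = list(accumulate(rates))    # R_i = sum of r_1..r_i
    total = cumulative[-1]                  # total rate Q

    # Steps 5-6: pick event i with probability r_i / Q via binary search.
    u1 = random.random()
    event = bisect.bisect_left(cumulative, u1 * total)

    # Steps 8-9: exponentially distributed waiting time, dt = ln(1/u)/Q.
    u2 = 1.0 - random.random()              # uniform in (0, 1], avoids log(0)
    return event, t + math.log(1.0 / u2) / total
```

Repeatedly calling kmc_step, rebuilding the rate list after every executed event, reproduces steps 3–10 above.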
Rejection KMC
Rejection KMC typically has the advantages of easier data handling and faster computation for each attempted step, since the time-consuming action of obtaining all the $r_{ki}$ is not needed.
On the other hand, the time evolved at each step is smaller than for rfKMC. The relative weight of pros and cons varies with the case at hand, and with available resources.
An rKMC associated with the same transition rates as above can be written as follows:
1. Set the time $t = 0$.
2. Choose an initial state k.
3. Get the number $N_k$ of all possible transition rates from state k into a generic state i.
4. Find the candidate event to carry out, i, by uniformly sampling from the $N_k$ transitions above.
5. Accept the event with probability $f_{ki} = r_{ki}/r_0$, where $r_0$ is a suitable upper bound for $r_{ki}$. It is often easy to find $r_0$ without having to compute all the $r_{ki}$ (e.g., for Metropolis transition rate probabilities).
6. If accepted, carry out event i (update the current state $k \rightarrow i$).
7. Get a new uniform random number $u' \in (0,1]$.
8. Update the time with $t = t + \Delta t$, where $\Delta t = (N_k r_0)^{-1} \ln(1/u')$.
9. Return to step 3.
(Note: $N_k$ can change from one MC step to another.)
This algorithm is usually called a standard algorithm.
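A minimal sketch of a single rejection step under the same scheme (the function signature and names are illustrative assumptions):

```python
import math
import random

def rkmc_step(n_events, rate_of, r0, t):
    """One rejection-KMC attempt.

    n_events -- number N_k of possible transitions out of the current state
    rate_of  -- function i -> r_i, evaluated only for the sampled candidate
    r0       -- an upper bound for all the rates r_i
    Returns the accepted event index (or None if rejected) and the updated time.
    """
    i = random.randrange(n_events)                  # uniform candidate event
    accepted = random.random() < rate_of(i) / r0    # accept with prob r_i / r0

    # The clock advances on every attempt, accepted or not.
    u = 1.0 - random.random()                       # uniform in (0, 1]
    t += math.log(1.0 / u) / (n_events * r0)
    return (i if accepted else None), t
```

Note that the rate of only the sampled candidate is ever evaluated, which is the source of the per-step speedup mentioned above.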
Theoretical and numerical comparisons between the algorithms were provided.
Time-dependent Algorithms
If the rates are time dependent, step 9 in the rfKMC must be modified so that the waiting time $\Delta t$ is drawn by solving
$$ \int_t^{t+\Delta t} Q_k(\tau)\,\mathrm{d}\tau = \ln\frac{1}{u''}. $$
The reaction (step 6) has to be chosen after this, using the cumulative rates evaluated at the new time, i.e. the i for which
$$ R_{k,i-1}(t+\Delta t) < u'\,Q_k(t+\Delta t) \leq R_{k,i}(t+\Delta t). $$
Another very similar algorithm is called the First Reaction Method (FRM). It consists of choosing the first-occurring reaction, meaning choosing the smallest time $\Delta t_i$ and the corresponding reaction number i, from the formula
$$ \int_t^{t+\Delta t_i} r_{ki}(\tau)\,\mathrm{d}\tau = \ln\frac{1}{u_i} $$
(which for time-independent rates reduces to $\Delta t_i = r_{ki}^{-1}\ln(1/u_i)$), where the $u_i$ are N random numbers.
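A minimal sketch of one FRM step in the time-independent case (the function name and data layout are this example's own choices):

```python
import math
import random

def frm_step(rates, t):
    """One First Reaction Method step for time-independent rates.

    Draws a tentative waiting time for every reaction and fires the earliest.
    """
    waits = [math.log(1.0 / (1.0 - random.random())) / r for r in rates]
    event = min(range(len(rates)), key=waits.__getitem__)
    return event, t + waits[event]
```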
Comments on the algorithm
The key property of the KMC algorithm (and of the FRM one) is that if the rates are correct, if the processes associated with the rates are of the Poisson process type, and if different processes are independent (i.e. not correlated) then the KMC algorithm gives the correct time scale for the evolution of the simulated system. There was some debate about the correctness of the time scale for rKMC algorithms, but this was also rigorously shown to be correct.
If furthermore the transitions follow detailed balance, the KMC algorithm can be used to simulate thermodynamic equilibrium. However, KMC is widely used to simulate non-equilibrium processes, in which case detailed balance need not be obeyed.
The rfKMC algorithm is efficient in the sense that every iteration is guaranteed to produce a transition. However, in the form presented above it requires $\mathcal{O}(N)$ operations for each transition, which is not too efficient. In many cases this can be much improved on by binning the same kinds of transitions into bins, and/or forming a tree data structure of the events. A constant-time scaling algorithm of this type has recently been developed and tested.
The major disadvantage with rfKMC is that all possible rates and reactions have to be known in advance. The method itself can do nothing about predicting them. The rates and reactions must be obtained from other methods, such as diffusion (or other) experiments, molecular dynamics or density-functional theory simulations.
Examples of use
KMC has been used in simulations of the following physical systems:
Surface diffusion
Surface growth
Vacancy diffusion in alloys (this was the original use)
Coarsening of domain evolution
Defect mobility and clustering in ion or neutron irradiated solids including, but not limited to, damage accumulation and amorphization/recrystallization models.
Viscoelasticity of physically crosslinked networks
Crystal Growth
To give an idea what the "objects" and "events" may be in practice, here is one concrete simple example, corresponding to example 2 above.
Consider a system where individual atoms are deposited on a surface one at a time (typical of physical vapor deposition), but also may migrate on the surface with some known jump rate w. In this case the "objects" of the KMC algorithm are simply the individual atoms.
If two atoms come right next to each other, they become immobile. Then the flux of incoming atoms determines a rate rdeposit, and the system can be simulated with KMC considering all deposited mobile atoms which have not (yet) met a counterpart and become immobile. This way there are the following events possible at each KMC step:
A new atom comes in with rate rdeposit.
An already deposited atom jumps one step with rate w.
After an event has been selected and carried out with the KMC algorithm, one then needs to check whether the new or just-jumped atom has become immediately adjacent to some other atom. If this has happened, the atom(s) which are now adjacent need to be moved away from the list of mobile atoms, and correspondingly their jump events removed from the list of possible events, as in the sketch below.
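A compact Python sketch of this deposition-and-diffusion example on a one-dimensional periodic lattice; all names, rates and the lattice size are arbitrary illustrative choices, and blocked moves are simply skipped, which a production code would treat more carefully:

```python
import math
import random

R_DEP, W, L = 1.0, 5.0, 64            # example rates and lattice size
mobile, frozen = set(), set()         # site indices of mobile/immobile atoms
t = 0.0

def neighbours(x):
    return ((x - 1) % L, (x + 1) % L)

def settle(x):
    """Freeze x together with any adjacent atoms, as described above."""
    touching = [nb for nb in neighbours(x) if nb in mobile or nb in frozen]
    if touching:
        mobile.discard(x)
        frozen.add(x)
        for nb in touching:
            if nb in mobile:
                mobile.remove(nb)
                frozen.add(nb)

for _ in range(500):
    total = R_DEP + W * len(mobile)                # total rate Q
    if random.random() * total < R_DEP:            # deposition event
        x = random.randrange(L)
        if x not in mobile and x not in frozen:    # skip occupied sites
            mobile.add(x)
            settle(x)
    else:                                          # diffusion event
        x = random.choice(tuple(mobile))
        y = random.choice(neighbours(x))
        if y not in mobile and y not in frozen:    # skip blocked hops
            mobile.remove(x)
            mobile.add(y)
            settle(y)
    t += -math.log(1.0 - random.random()) / total  # advance the clock
```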
Naturally in applying KMC to problems in physics and chemistry, one has to first consider whether the real system follows the assumptions underlying KMC well enough.
Real processes do not necessarily have well-defined rates, the transition processes may be correlated, in the case of atom or particle jumps the jumps may not occur in random directions, and so on. When simulating widely disparate time scales one also needs to consider whether new processes may be present at longer time scales. If any of these issues are valid, the time scale and system evolution predicted by KMC may be skewed or even completely wrong.
History
The first publication which described the basic features of the KMC method (namely using a cumulative function to select an event and a time scale calculation of the form 1/R) was by Young and Elcock in 1966. The residence-time algorithm was also published at about the same time.
Apparently independently of the work of Young and Elcock, Bortz, Kalos and Lebowitz developed a KMC algorithm for simulating the Ising model, which they called the "n-fold way". The basics of their algorithm are the same as those of Young's, but they provide much greater detail on the method.
The following year Dan Gillespie published what is now known as the Gillespie algorithm to describe chemical reactions. The algorithm is similar and the time advancement scheme essentially the same as in KMC.
As of this writing (June 2006) there is no definitive treatise on the theory of KMC, but Fichthorn and Weinberg have discussed the theory for thermodynamic equilibrium KMC simulations in detail. Good introductions are also given by Art Voter and by A.P.J. Jansen, and recent reviews are (Chatterjee 2007) and (Chotia 2008). The justification of KMC as a coarse-graining of the Langevin dynamics using the quasi-stationary distribution approach has been developed by T. Lelièvre and collaborators.
In March 2006, what was probably the first commercial software using kinetic Monte Carlo to simulate the diffusion and activation/deactivation of dopants in silicon and silicon-like materials was released by Synopsys, as reported by Martin-Bragado et al.
Varieties of KMC
The KMC method can be subdivided by how the objects are moving or reactions occurring. At least the following subdivisions are used:
Lattice KMC (LKMC) signifies KMC carried out on an atomic lattice. Often this variety is also called atomistic KMC, (AKMC). A typical example is simulation of vacancy diffusion in alloys, where a vacancy is allowed to jump around the lattice with rates that depend on the local elemental composition.
Object KMC (OKMC) means KMC carried out for defects or impurities, which are jumping either in random or lattice-specific directions. Only the positions of the jumping objects are included in the simulation, not those of the 'background' lattice atoms. The basic KMC step is one object jump.
Event KMC (EKMC) or First-passage KMC (FPKMC) signifies an OKMC variety where the following reaction between objects (e.g. clustering of two impurities or vacancy-interstitial annihilation) is chosen with the KMC algorithm, taking the object positions into account, and this event is then immediately carried out.
References
External links
3D lattice kinetic Monte Carlo simulation in 'bit language'
KMC simulation of the Plateau-Rayleigh instability
KMC simulation of f.c.c. vicinal (100)-surface diffusion
Stochastic Kinetic Mean Field Model (gives similar results as lattice kinetic Monte Carlo, however, far more cost-effective and easier to realise — open source program code is provided)
Monte Carlo methods
Statistical mechanics
Stochastic simulation | Kinetic Monte Carlo | [
"Physics"
] | 2,172 | [
"Monte Carlo methods",
"Statistical mechanics",
"Computational physics"
] |
5,230,467 | https://en.wikipedia.org/wiki/Systems%20for%20Nuclear%20Auxiliary%20Power | The Systems for Nuclear Auxiliary Power (SNAP) program was a program of experimental radioisotope thermoelectric generators (RTGs) and space nuclear reactors flown during the 1960s by NASA.
The SNAP program developed as a result of Project Feedback, a Rand Corporation study of reconnaissance satellites completed in 1954. As some of the proposed satellites had high power demands, some as high as a few kilowatts, the U.S. Atomic Energy Commission (AEC) requested a series of nuclear power-plant studies from industry in 1951. Completed in 1952, these studies determined that nuclear power plants were technically feasible for use on satellites.
In 1955, the AEC began two parallel SNAP nuclear power projects. One, contracted with The Martin Company, used radio-isotopic decay as the power source for its generators. These plants were given odd-numbered SNAP designations beginning with SNAP-1. The other project used nuclear reactors to generate energy, and was developed by the Atomics International Division of North American Aviation. Their systems were given even-numbered SNAP designations, the first being SNAP-2.
Most of the systems development and reactor testing was conducted at the Santa Susana Field Laboratory, Ventura County, California using a number of specialized facilities.
Odd-numbered SNAPs: radioisotope thermoelectric generators
Radioisotope thermoelectric generators use the heat of radioactive decay to produce electricity.
SNAP-1
SNAP-1 was a test platform that was never deployed. It used cerium-144 in a Rankine cycle with mercury as the heat transfer fluid, and operated successfully for 2,500 hours.
SNAP-3
SNAP-3 was the first RTG used in a space mission (1961). It was launched aboard the U.S. Navy Transit 4A and 4B navigation satellites. The electrical output of this RTG was 2.5 watts.
SNAP-7
The SNAP-7 series was designed for marine applications such as lighthouses and buoys; at least six units, named SNAP-7A through SNAP-7F, were deployed in the mid-1960s. SNAP-7D produced thirty watts of electricity using about four kilograms of strontium-90 as SrTiO3. These were very large, heavy units.
SNAP-9
After SNAP-3 on Transit 4A/B, SNAP-9A units served aboard many of the Transit satellite series. In April 1964 a SNAP-9A failed to achieve orbit and disintegrated, dispersing its plutonium-238 fuel over all continents. Most of the plutonium fell in the southern hemisphere. An estimated 630 TBq of radioactivity was released.
SNAP-11
SNAP-11 was an experimental RTG intended to power the Surveyor probes during the lunar night. The curium-242 RTGs would have produced 25 watts of electricity from 900 watts of thermal energy for 130 days. They had a liquid NaK thermal control system and a movable shutter to dump excess heat. They were not used on the Surveyor missions.
In general, the SNAP-11 fuel block is a cylindrical multi-material unit that occupies the internal volume of the generator. A TZM (molybdenum alloy) fuel capsule, fueled with curium-242 (Cm2O3 in an iridium matrix), is located in the center of the fuel block. The capsule is surrounded by a platinum sphere, which provides shielding and acts as an energy absorber for impact protection. This assembly is enclosed in graphite and beryllium sub-assemblies to provide the proper thermal distribution and ablative protection.
SNAP-19
SNAP-19(B) was developed for the Nimbus-B satellite by the Nuclear Division of the Martin-Marietta Company (now Teledyne Energy Systems). Fueled with plutonium-238, two parallel lead telluride thermocouple generators produced an initial maximum of approximately 30 watts of electricity. Nimbus 3 used a SNAP-19B with the recovered fuel from the Nimbus-B1 attempt.
SNAP-19s powered the Pioneer 10 and Pioneer 11 missions. They used n-type 2N-PbTe and p-type TAGS-85 thermoelectric elements.
Modified SNAP-19Bs were used for the Viking 1 and Viking 2 landers.
A SNAP-19C was used to power a telemetry array at Nanda Devi in Uttarakhand for a CIA operation to track Chinese missile launches.
SNAP-21 & 23
SNAP-21 and SNAP-23 were designed for underwater use and used strontium-90 as the radioactive source, encapsulated as either strontium oxide or strontium titanate. They produced about ten watts of electricity.
SNAP-27
Five SNAP-27 units provided electric power for the Apollo Lunar Surface Experiments Packages (ALSEP) left on the Moon by Apollo 12, 14, 15, 16, and 17. The SNAP-27 power supply weighed about 20 kilograms, was 46 cm long and 40.6 cm in diameter. It consisted of a central fuel capsule surrounded by concentric rings of thermocouples. Outside of the thermocouples was a set of fins to provide for heat rejection from the cold side of the thermocouples. Each of the SNAP devices produced approximately 75 W of electrical power at 30 VDC. The energy source for each device was a rod of plutonium-238 providing a thermal power of approximately 1250 W. This fuel capsule, containing plutonium-238 in oxide form (44,500 Ci or 1.65 PBq), was carried to the Moon in a separate fuel cask attached to the side of the Lunar Module. The fuel cask provided thermal insulation and added structural support to the fuel capsule. On the Moon, the Lunar Module pilot removed the fuel capsule from the cask and inserted it in the RTG.
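These nominal figures permit a rough cross-check, added here as an illustration rather than a figure from the program documentation: the thermal-to-electric conversion efficiency of the generator was
η = 75 W / 1250 W ≈ 6%,
in the usual range for lead-telluride thermoelectric converters.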
These stations transmitted information about moonquakes and meteor impacts, lunar magnetic and gravitational fields, the Moon's internal temperature, and the Moon's atmosphere for several years after the missions. After ten years, a SNAP-27 still produced more than 90% of its initial output of 75 watts.
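The quoted retention is consistent with radioactive decay of the plutonium-238 heat source alone (half-life about 87.7 years). Assuming the thermocouples themselves did not degrade, the remaining fraction of thermal power after ten years would be
2^(−10/87.7) ≈ 0.92,
that is, about 92 percent of the initial output.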
The fuel cask from the SNAP-27 unit carried by the Apollo 13 mission currently lies deep underwater at the bottom of the Tonga Trench in the Pacific Ocean. The mission failed to land on the Moon, and the lunar module carrying the generator burnt up during re-entry into the Earth's atmosphere, with the trajectory arranged so that the cask would land in the trench. The cask survived re-entry, as it was designed to do, and no release of plutonium has been detected. The corrosion-resistant materials of the capsule are expected to contain it for 10 half-lives (870 years).
Even-numbered SNAPs: compact nuclear reactors
A series of compact nuclear reactors intended for space use, the even numbered SNAPs were developed for the U.S. government by the Atomics International division of North American Aviation.
SNAP Experimental Reactor (SER)
The SNAP Experimental Reactor (SER) was the first reactor built to the specifications established for space satellite applications. The SER used uranium zirconium hydride as the fuel and eutectic sodium-potassium alloy (NaK) as the coolant, and operated at approximately 50 kW thermal. The system had no power conversion system but used a secondary air-blast heat exchanger to dissipate the heat to the atmosphere. The SER used a reflector-moderator arrangement similar to that of the SNAP-10A, but with only one reflector. Criticality was achieved in September 1959 and final shutdown came in December 1961. The project was considered a success: it gave continued confidence in the development of the SNAP program and led to in-depth research and component development.
SNAP-2
The SNAP-2 Developmental Reactor was the second SNAP reactor built. This device used uranium-zirconium hydride fuel and had a design reactor power of 55 kWt. It was the first model to use a flight control assembly and was tested from April 1961 to December 1962. The basic concept was that nuclear power would be a long-term source of energy for crewed space capsules. However, the crew capsule had to be shielded from deadly radiation streaming from the nuclear reactor. Surrounding the reactor with a radiation shield was out of the question: it would be far too heavy to launch with the rockets available at that time. To protect the "crew" and "payload", the SNAP-2 system used a "shadow shield". The shield was a truncated cone containing lithium hydride. The reactor was at the small end and the crew capsule/payload was in the shadow of the large end.
Studies were performed on the reactor, individual components and the support system. Atomics International, a division of North American Aviation, did the development and testing work. The SNAP-2 Shield Development unit was responsible for developing the radiation shield. Creating the shield meant melting lithium hydride and casting it into the required form, a large truncated cone. Molten lithium hydride had to be poured into the casting mold a little at a time, otherwise it would crack as it cooled and solidified. Cracks in the shield material would be fatal to any space crew or payload depending on it, because they would allow radiation to stream through to the crew/payload compartment. As the material cooled, a hollow vortex-like cavity would form in the middle, and the development engineers had to devise ways to fill this cavity while maintaining the shield's integrity. In doing all this, they had to keep in mind that they were working with a material that could be explosively unstable in a moist, oxygen-rich environment. Analysis also revealed that, under thermal and radiation gradients, the lithium hydride could dissociate, and hydrogen ions could migrate through the shield. This would produce variations in shielding efficacy and could subject the payloads to intense radiation. Efforts were made to mitigate these effects.
The SNAP 2DR used a reflector-moderator arrangement similar to that of the SNAP-10A, but with two movable reflectors and internal fixed reflectors. The system was designed so that the reactor could be integrated with a mercury Rankine cycle to generate 3.5 kW of electricity.
SNAP-8
The SNAP-8 reactors were designed, constructed and operated by Atomics International under contract with the National Aeronautics and Space Administration. Two SNAP-8 reactors were produced: the SNAP 8 Experimental Reactor and the SNAP 8 Developmental Reactor. Both used the same highly enriched uranium zirconium hydride fuel as the SNAP 2 and SNAP 10A reactors. The SNAP 8 design included primary and secondary NaK loops to transfer heat to the mercury Rankine power conversion system. The electrical generating system for the SNAP 8 reactors was supplied by Aerojet General.
The SNAP 8 Experimental Reactor was a 600 kWt reactor that was tested from 1963 to 1965.
The SNAP 8 Developmental Reactor had a power rating of 1 MWt and was tested in 1969 at the Santa Susana Field Laboratory.
SNAP-10A
The SNAP-10A was a space-qualified nuclear reactor power system launched into space in 1965 under the SNAPSHOT program. It was built as a research project for the Air Force, to demonstrate the capability to generate higher power than RTGs. The reactor employed two moveable beryllium reflectors for control, and generated 35 kWt at beginning of life. The system generated electricity by circulating NaK around lead tellurium thermocouples. To mitigate launch hazards, the reactor was not started until it reached a safe orbit.
SNAP-10A was launched into Earth orbit in April 1965 and used to power an Agena-D research satellite built by Lockheed/Martin. The system produced 500 W of electrical power during an abbreviated 43-day flight test. The reactor was prematurely shut down by a faulty command receiver. It is predicted to remain in orbit for 4,000 years.
See also
List of nuclear power systems in space
Nuclear power in space
Citations
General sources
"Nuclear Power in Space". U.S. Department of Energy, Office of Nuclear Energy, Science & Technology
External links
SNAP-8 Electrical Generating System Development Program, Final Report
SNAP-19, Phase 3. Quarterly Progress Report, 1 January – 31 March 1966
SNAP 19, Phase 3. Quarterly Progress Report, 1 Apr. – 30 Jun. 1966
Analysis of the need for Agena command destruct and/or generator eject systems on the Nimbus B/SNAP-19 mission
SNAP-19/Nimbus B integration experience
SNAP-27, Volume 1. Quarterly Report, 1 Jul. – 30 Sep. 1966
SNAP-27, Volume 2. Quarterly Report, 1 Jan. – 31 Mar. 1966
"Space Nuclear Power: Opening the Final Frontier" by G. L. Bennett (2006)
"Space Nuclear Power Sources" (tables)
Atomics International
Electrical generators
NASA programs
North American Aviation
Nuclear power in space
Nuclear technology
United States Atomic Energy Commission | Systems for Nuclear Auxiliary Power | [
"Physics",
"Technology"
] | 2,671 | [
"Electrical generators",
"Machines",
"Nuclear technology",
"Physical systems",
"Nuclear physics"
] |
5,230,919 | https://en.wikipedia.org/wiki/Uniqueness%20theorem%20for%20Poisson%27s%20equation | The uniqueness theorem for Poisson's equation states that, for a large class of boundary conditions, the equation may have many solutions, but the gradient of every solution is the same. In the case of electrostatics, this means that there is a unique electric field derived from a potential function satisfying Poisson's equation under the boundary conditions.
Proof
The general expression for Poisson's equation in electrostatics is
∇²φ = −ρf/ε
where φ is the electric potential and ρf is the charge distribution over some region V with boundary surface S.
The uniqueness of the solution can be proven for a large class of boundary conditions as follows.
Suppose that we claim to have two solutions of Poisson's equation. Let us call these two solutions φ1 and φ2. Then
∇²φ1 = −ρf/ε
and
∇²φ2 = −ρf/ε.
It follows that φ = φ2 − φ1 is a solution of Laplace's equation, the special case of Poisson's equation in which the source equals zero. Subtracting the two solutions above gives
∇²φ = ∇²φ2 − ∇²φ1 = 0.    (1)
By applying the vector differential identity, we know that
∇·(φ∇φ) = (∇φ)² + φ∇²φ.
However, from (1) we also know that ∇²φ = 0 throughout the region. Consequently, the second term goes to zero and we find that
∇·(φ∇φ) = (∇φ)².
By taking the volume integral over the region V, we find that
∫V ∇·(φ∇φ) dV = ∫V (∇φ)² dV.
By applying the divergence theorem, we rewrite the expression above as
∮S (φ∇φ) · dS = ∫V (∇φ)² dV.    (2)
We now sequentially consider three distinct boundary conditions: a Dirichlet boundary condition, a Neumann boundary condition, and a mixed boundary condition.
First, we consider the case where Dirichlet boundary conditions are specified: φ is given on the boundary of the region. If the Dirichlet boundary condition is satisfied on S by both solutions (i.e., if φ = φ2 − φ1 = 0 on the boundary), then the left-hand side of (2) is zero. Consequently, we find that
∫V (∇φ)² dV = 0.
Since this is the volume integral of a non-negative quantity (the squared term), we must have ∇φ = 0 at all points. Further, because the gradient of φ is everywhere zero and φ is zero on the boundary, φ must be zero throughout the whole region. Finally, since φ = 0 throughout the whole region, and since φ = φ2 − φ1, therefore φ1 = φ2 throughout the whole region. This completes the proof that the solution of Poisson's equation with a Dirichlet boundary condition is unique.
Second, we consider the case where Neumann boundary conditions are specified: the normal derivative ∇φ · n is given on the boundary of the region. If the Neumann boundary condition is satisfied on S by both solutions, then the left-hand side of (2) is zero again. Consequently, as before, we find that
∫V (∇φ)² dV = 0.
As before, since this is the volume integral of a non-negative quantity, we must have ∇φ = 0 at all points. Further, because the gradient of φ is everywhere zero within the volume V, and because the normal derivative of φ is everywhere zero on the boundary S, φ must be constant, but not necessarily zero, throughout the whole region. Finally, since φ = k (a constant) throughout the whole region, and since φ = φ2 − φ1, therefore φ2 = φ1 + k throughout the whole region. This completes the proof that the solution of Poisson's equation with a Neumann boundary condition is unique up to an additive constant.
Mixed boundary conditions may also be given, so long as either the gradient or the potential is specified at each point of the boundary. Boundary conditions at infinity also work: the surface integral in (2) still vanishes at large distances because the integrand decays faster than the surface area grows.
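The Dirichlet case can be illustrated numerically. The following Python sketch is a minimal example written for this article (the grid size, source term, and iteration count are arbitrary choices): relaxation of the discrete Poisson equation, started from two different initial guesses satisfying the same zero boundary condition, converges to the same solution, as the uniqueness theorem predicts.

```python
import numpy as np

n, h = 40, 1.0 / 39            # grid points and spacing
rho = np.zeros((n, n))
rho[n // 2, n // 2] = 1.0      # a point-like charge distribution

def solve(phi0, iters=20000):
    """Jacobi relaxation of the discrete Poisson equation with the potential
    held fixed at zero on the grid boundary (a Dirichlet condition)."""
    phi = phi0.copy()
    for _ in range(iters):
        phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                                  phi[1:-1, 2:] + phi[1:-1, :-2] +
                                  h * h * rho[1:-1, 1:-1])
    return phi

start1 = np.zeros((n, n))
start2 = np.random.default_rng(0).standard_normal((n, n))
start2[0, :] = start2[-1, :] = start2[:, 0] = start2[:, -1] = 0.0  # same BC
print(np.abs(solve(start1) - solve(start2)).max())  # essentially zero
```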
See also
Poisson's equation
Gauss's law
Coulomb's law
Method of images
Green's function
Uniqueness theorem
Spherical harmonics
References
Electrostatics
Vector calculus
Uniqueness theorems
Theorems in calculus | Uniqueness theorem for Poisson's equation | [
"Mathematics"
] | 704 | [
"Theorems in mathematical analysis",
"Mathematical theorems",
"Theorems in calculus",
"Calculus",
"Mathematical problems",
"Uniqueness theorems"
] |
35,806,738 | https://en.wikipedia.org/wiki/Threshold%20%28architecture%29 | A threshold is the sill of a door. Some cultures attach special symbolism to a threshold. It is called a door saddle in New England.
Door thresholds cover the gap between the floor and the door frame, helping to prevent any water leaks, insects or draughts from entering through the opening.
Etymology
Various popular false etymologies of this word exist, some of which were recorded by dictionaries in the past or even created by early linguists before linguistics became a strictly scientific field. Some of these false etymologies date from the time of Old English or even earlier.
Many different forms of this word are attested in Old English, which shows that the original meaning of this word, and especially of its latter half, was already obscure at the time, and that most or all of the different Old English spellings were the result of folk etymologies. Although modern dictionaries do not yet record the results of the latest etymological research on this word, they do record the results of older research showing that the second half is not related to the modern word hold. According to the linguist Anatoly Liberman, the most likely etymology is that the term referred to a threshing area that was originally not part of the doorway but was later associated with it.
Cultural symbolism
In many cultures it has a special symbolism: for instance, in Poland, Ukraine and Russia it is considered bad luck to shake hands or kiss across the threshold when meeting somebody. In many countries it is considered good luck for a bridegroom to carry the bride over the threshold to their new home.
References
Doors
Superstitions
Architectural elements | Threshold (architecture) | [
"Technology",
"Engineering"
] | 328 | [
"Building engineering",
"Architectural elements",
"Components",
"Architecture"
] |
35,810,608 | https://en.wikipedia.org/wiki/Courcelle%27s%20theorem | In the study of graph algorithms, Courcelle's theorem is the statement that every graph property definable in the monadic second-order logic of graphs can be decided in linear time on graphs of bounded treewidth. The result was first proved by Bruno Courcelle in 1990 and independently rediscovered by Borie, Parker & Tovey in 1992.
It is considered the archetype of algorithmic meta-theorems.
Formulations
Vertex sets
In one variation of monadic second-order graph logic known as MSO1, the graph is described by a set of vertices and a binary adjacency relation adj(u,v), and the restriction to monadic logic means that the graph property in question may be defined in terms of sets of vertices of the given graph, but not in terms of sets of edges, or sets of tuples of vertices.
As an example, the property of a graph being colorable with three colors (represented by three sets of vertices C1, C2, and C3) may be defined by the monadic second-order formula
∃C1 ∃C2 ∃C3 [ ∀v (v ∈ C1 ∨ v ∈ C2 ∨ v ∈ C3) ∧ ∀u ∀v (adj(u,v) → (¬(u ∈ C1 ∧ v ∈ C1) ∧ ¬(u ∈ C2 ∧ v ∈ C2) ∧ ¬(u ∈ C3 ∧ v ∈ C3))) ]
with the naming convention that uppercase variables denote sets of vertices and lowercase variables denote individual vertices (so that an explicit declaration of which is which can be omitted from the formula). The first part of this formula ensures that the three color classes cover all the vertices of the graph, and the rest ensures that they each form an independent set. (It would also be possible to add clauses to the formula to ensure that the three color classes are disjoint, but this makes no difference to the result.) Thus, by Courcelle's theorem, 3-colorability of graphs of bounded treewidth may be tested in linear time.
For this variation of graph logic, Courcelle's theorem can be extended from treewidth to clique-width: for every fixed MSO1 property P, and every fixed bound b on the clique-width of a graph, there is a linear-time algorithm for testing whether a graph of clique-width at most b has property P. The original formulation of this result required the input graph to be given together with a construction proving that it has bounded clique-width, but later approximation algorithms for clique-width removed this requirement.
Edge sets
Courcelle's theorem may also be used with a stronger variation of monadic second-order logic known as MSO2. In this formulation, a graph is represented by a set V of vertices, a set E of edges, and an incidence relation between vertices and edges. This variation allows quantification over sets of vertices or edges, but not over more complex relations on tuples of vertices or edges.
For instance, the property of having a Hamiltonian cycle may be expressed in MSO2 by describing the cycle as a set of edges that includes exactly two edges incident to each vertex, such that every nonempty proper subset of vertices has an edge in the putative cycle with exactly one endpoint in the subset. However, Hamiltonicity cannot be expressed in MSO1.
Labeled graphs
It is possible to apply the same results to graphs in which the vertices or edges have labels from a fixed finite set, either by augmenting the graph logic to incorporate predicates describing the labels, or by representing the labels by unquantified vertex set or edge set variables.
Modular congruences
Another direction for extending Courcelle's theorem concerns logical formulas that include predicates for counting the sizes of sets.
In this context, it is not possible to perform arbitrary arithmetic operations on set sizes, nor even to test whether two sets have the same size.
However, MSO1 and MSO2 can be extended to logics called CMSO1 and CMSO2, that include for every two constants q and r a predicate which tests whether the cardinality of set S is congruent to r modulo q. Courcelle's theorem can be extended to these logics.
Decision versus optimization
As stated above, Courcelle's theorem applies primarily to decision problems: does a graph have a property or not. However, the same methods also allow the solution to optimization problems in which the vertices or edges of a graph have integer weights, and one seeks the minimum or maximum weight vertex set that satisfies a given property, expressed in second-order logic. These optimization problems can be solved in linear time on graphs of bounded clique-width.
Space complexity
Rather than bounding the time complexity of an algorithm that recognizes an MSO property on bounded-treewidth graphs, it is also possible to analyze the space complexity of such an algorithm; that is, the amount of memory needed above and beyond the size of the input itself (which is assumed to be represented in a read-only way so that its space requirements cannot be put to other purposes).
In particular, it is possible to recognize the graphs of bounded treewidth, and any MSO property on these graphs, by a deterministic Turing machine that uses only logarithmic space.
Proof strategy and complexity
The typical approach to proving Courcelle's theorem involves the construction of a finite bottom-up tree automaton that acts on the tree decompositions of the given graph.
In more detail, two graphs G1 and G2, each with a specified subset T of vertices called terminals, may be defined to be equivalent with respect to an MSO formula F if, for all other graphs H whose intersection with G1 and G2 consists only of vertices in T, the two graphs G1 ∪ H and G2 ∪ H behave the same with respect to F: either they both model F or they both do not model F. This is an equivalence relation, and it can be shown by induction on the length of F that (when the sizes of T and F are both bounded) it has finitely many equivalence classes.
A tree decomposition of a given graph G consists of a tree and, for each tree node, a subset of the vertices of G called a bag. It must satisfy two properties: for each vertex v of G, the bags containing v must be associated with a contiguous subtree of the tree, and for each edge uv of G, there must be a bag containing both u and v.
The vertices in a bag can be thought of as the terminals of a subgraph of G, represented by the subtree of the tree decomposition descending from that bag. When G has bounded treewidth, it has a tree decomposition in which all bags have bounded size, and such a decomposition can be found in fixed-parameter tractable time. Moreover, it is possible to choose this tree decomposition so that it forms a binary tree, with only two child subtrees per bag. Therefore, it is possible to perform a bottom-up computation on this tree decomposition, computing an identifier for the equivalence class of the subtree rooted at each bag by combining the edges represented within the bag with the two identifiers for the equivalence classes of its two children.
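As an illustration of this bottom-up scheme, the following Python sketch hard-codes the dynamic-programming states for the 3-colorability example above, rather than compiling an MSO formula into a tree automaton as the actual proof does; the decomposition format (nested dicts with 'bag' and 'children' fields) is an assumption made for the example.

```python
from itertools import product

def feasible_colorings(node, edges, k=3):
    """For one bag of a rooted tree decomposition, return the set of proper
    k-colorings of the bag that extend to a proper coloring of the entire
    subtree below it; the graph is k-colorable iff the root's set is nonempty."""
    bag = node['bag']
    child_states = [feasible_colorings(c, edges, k) for c in node['children']]
    feasible = set()
    for colors in product(range(k), repeat=len(bag)):
        col = dict(zip(bag, colors))
        # The coloring must be proper on every edge lying inside the bag.
        if any(frozenset((u, v)) in edges and col[u] == col[v]
               for i, u in enumerate(bag) for v in bag[i + 1:]):
            continue
        # Each child must offer a state agreeing on the shared vertices.
        if all(any(all(col.get(v, c) == c for v, c in s) for s in states)
               for states in child_states):
            feasible.add(tuple(sorted(col.items())))
    return feasible

# A 4-cycle a-b-c-d-a (treewidth 2) with a two-bag decomposition:
edges = {frozenset(e) for e in [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a')]}
decomp = {'bag': ('a', 'c', 'd'),
          'children': [{'bag': ('a', 'b', 'c'), 'children': []}]}
print(bool(feasible_colorings(decomp, edges)))  # True: the cycle is 3-colorable
```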
The size of the automaton constructed in this way is not an elementary function of the size of the input MSO formula. This non-elementary complexity is necessary, in the sense that (unless P = NP) it is not possible to test MSO properties on trees in a time that is fixed-parameter tractable with an elementary dependence on the parameter.
Bojańczyk-Pilipczuk's theorem
The proofs of Courcelle's theorem show a stronger result: not only can every (counting) monadic second-order property be recognized in linear time for graphs of bounded treewidth, but also it can be recognized by a finite-state tree automaton. Courcelle conjectured a converse to this: if a property of graphs of bounded treewidth is recognized by a tree automaton, then it can be defined in counting monadic second-order logic. In 1998 Lapoire claimed a resolution of the conjecture. However, the proof is widely regarded as unsatisfactory. Until 2016, only a few special cases were resolved: in particular, the conjecture had been proved for graphs of treewidth at most three, for k-connected graphs of treewidth k, for graphs of constant treewidth and chordality, and for k-outerplanar graphs.
The general version of the conjecture was finally proved by Mikołaj Bojańczyk and Michał Pilipczuk.
Moreover, for Halin graphs (a special case of treewidth three graphs) counting is not needed: for these graphs, every property that can be recognized by a tree automaton can also be defined in monadic second-order logic. The same is true more generally for certain classes of graphs in which a tree decomposition can itself be described in MSOL. However, it cannot be true for all graphs of bounded treewidth, because in general counting adds extra power over monadic second-order logic without counting. For instance, the graphs with an even number of vertices can be recognized using counting, but not without.
Satisfiability and Seese's theorem
The satisfiability problem for a formula of monadic second-order logic is the problem of determining whether there exists at least one graph (possibly within a restricted family of graphs) for which the formula is true. For arbitrary graph families, and arbitrary formulas, this problem is undecidable. However, satisfiability of MSO2 formulas is decidable for the graphs of bounded treewidth, and satisfiability of MSO1 formulas is decidable for graphs of bounded clique-width. The proof involves building a tree automaton for the formula and then testing whether the automaton has an accepting path.
As a partial converse, Seese proved that, whenever a family of graphs has a decidable MSO2 satisfiability problem, the family must have bounded treewidth. The proof is based on a theorem of Robertson and Seymour that the families of graphs with unbounded treewidth have arbitrarily large grid minors. Seese also conjectured that every family of graphs with a decidable MSO1 satisfiability problem must have bounded clique-width; this has not been proven, but a weakening of the conjecture that replaces MSO1 by CMSO1 is true.
Applications
Grohe used Courcelle's theorem to show that computing the crossing number of a graph G is fixed-parameter tractable with a quadratic dependence on the size of G, improving a cubic-time algorithm based on the Robertson–Seymour theorem. An additional later improvement to linear time by Kawarabayashi and Reed follows the same approach. If the given graph G has small treewidth, Courcelle's theorem can be applied directly to this problem. On the other hand, if G has large treewidth, then it contains a large grid minor, within which the graph can be simplified while leaving the crossing number unchanged. Grohe's algorithm performs these simplifications until the remaining graph has a small treewidth, and then applies Courcelle's theorem to solve the reduced subproblem.
Gottlob and Lee observed that Courcelle's theorem applies to several problems of finding minimum multi-way cuts in a graph, when the structure formed by the graph and the set of cut pairs has bounded treewidth. As a result, they obtain a fixed-parameter tractable algorithm for these problems, parameterized by a single parameter, treewidth, improving previous solutions that had combined multiple parameters.
In computational topology, Burton and Downey extend Courcelle's theorem from MSO2 to a form of monadic second-order logic on simplicial complexes of bounded dimension that allows quantification over simplices of any fixed dimension. As a consequence, they show how to compute certain quantum invariants of 3-manifolds as well as how to solve certain problems in discrete Morse theory efficiently, when the manifold has a triangulation (avoiding degenerate simplices) whose dual graph has small treewidth.
Methods based on Courcelle's theorem have also been applied to database theory, knowledge representation and reasoning, automata theory, and model checking.
References
Metatheorems
Graph algorithms
Graph minor theory | Courcelle's theorem | [
"Mathematics"
] | 2,536 | [
"Graph minor theory",
"Mathematical relations",
"Graph theory"
] |
35,816,133 | https://en.wikipedia.org/wiki/Molecular%20Aspects%20of%20Medicine | Molecular Aspects of Medicine is a bimonthly peer-reviewed medical journal covering molecular medicine. It is published by Elsevier on behalf of the International Union of Biochemistry and Molecular Biology.
Abstracting and indexing
The journal is abstracted and indexed in a number of bibliographic databases.
Article categories
The journal publishes only invited review articles.
External links
General medical journals
Elsevier academic journals
Hybrid open access journals | Molecular Aspects of Medicine | [
"Chemistry"
] | 75 | [
"Biochemistry stubs",
"Biochemistry journal stubs"
] |
24,489,295 | https://en.wikipedia.org/wiki/Topray%20Solar | Shenzhen Topray Solar is a vertically integrated solar energy company with a global presence in Africa, Europe, Asia and North America. It is publicly listed on the Shenzhen Stock Exchange (stock ticker: 002218).
In 2017, the European Commission alleged that Topray Solar had repeatedly violated the minimum price agreement for solar products.
In 2020, a whistleblower in Uganda petitioned the Inspectorate of Government to investigate how Topray Solar was awarded a deal to install and maintain solar panels for Ugandan secondary schools.
Global subsidiary companies
Germany—Topray Solar GmbH, Frankfurt am Main, Germany
United States—Topray Power Inc.
Africa—ASE Solar, Uganda and Kenya
See also
Photovoltaics
Building-integrated photovoltaics
Solar energy
Solar panel
References
External links
Solar energy companies
Manufacturing companies based in Shenzhen
Chinese companies established in 1999
Electrical engineering companies
Engineering companies of China
Photovoltaics manufacturers
Solar energy companies of China
Chinese brands
1999 in Shenzhen | Topray Solar | [
"Engineering"
] | 196 | [
"Electrical engineering companies",
"Photovoltaics manufacturers",
"Electrical engineering organizations",
"Engineering companies"
] |
24,497,685 | https://en.wikipedia.org/wiki/Algebraic%20theory | Informally in mathematical logic, an algebraic theory is a theory that uses axioms stated entirely in terms of equations between terms with free variables. Inequalities and quantifiers are specifically disallowed. Sentential logic is the subset of first-order logic involving only algebraic sentences.
The notion is very close to the notion of algebraic structure, which, arguably, may be just a synonym.
Saying that a theory is algebraic is a stronger condition than saying it is elementary.
Informal interpretation
An algebraic theory consists of a collection of n-ary functional terms with additional rules (axioms).
For example, the theory of groups is an algebraic theory because it has three functional terms: a binary operation a × b, a nullary operation 1 (neutral element), and a unary operation x ↦ x−1, with the rules of associativity, neutrality and inverses respectively; a small computational sketch of this view follows the list below. Other examples include:
the theory of semigroups
the theory of lattices
the theory of rings
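The "operations plus equational axioms" view can be made concrete. The following is a minimal Python sketch written for this article; the term encoding and the brute-force check over a finite sample are illustrative choices, and such a check can refute, but never prove, that the axioms hold on an infinite carrier.

```python
import itertools

# Terms are either a variable name (a string) or a tuple (op, subterm, ...).
ASSOC    = ("mul", ("mul", "x", "y"), "z"), ("mul", "x", ("mul", "y", "z"))
LEFT_ID  = ("mul", ("e",), "x"), "x"
LEFT_INV = ("mul", ("inv", "x"), "x"), ("e",)

def evaluate(term, model, env):
    """Evaluate a term in a model, with free variables bound by env."""
    if isinstance(term, str):
        return env[term]
    op, *args = term
    return model[op](*(evaluate(a, model, env) for a in args))

def satisfies(model, axioms, variables, sample):
    """Spot-check each equational axiom on a finite sample of the carrier."""
    for lhs, rhs in axioms:
        for values in itertools.product(sample, repeat=len(variables)):
            env = dict(zip(variables, values))
            if evaluate(lhs, model, env) != evaluate(rhs, model, env):
                return False
    return True

# The integers under addition form a model of the theory of groups:
model_Z = {"mul": lambda a, b: a + b, "e": lambda: 0, "inv": lambda a: -a}
print(satisfies(model_Z, [ASSOC, LEFT_ID, LEFT_INV],
                ["x", "y", "z"], range(-3, 4)))   # True
```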
This is opposed to geometric theory, which involves partial functions (or binary relationships) or existential quantifiers; see, e.g., Euclidean geometry, where the existence of points or lines is postulated.
Category-based model-theoretical interpretation
An algebraic theory T is a category whose objects are natural numbers 0, 1, 2,..., and which, for each n, has an n-tuple of morphisms:
proji: n → 1, i = 1, ..., n
This allows interpreting n as a cartesian product of n copies of 1.
Example: Let's define an algebraic theory T taking hom(n, m) to be m-tuples of polynomials of n free variables X1, ..., Xn with integer coefficients and with substitution as composition. In this case proji is the same as Xi. This theory T is called the theory of commutative rings.
In an algebraic theory, any morphism n → m can be described as m morphisms of signature n → 1. These latter morphisms are called n-ary operations of the theory.
If E is a category with finite products, the full subcategory Alg(T, E) of the category of functors [T, E] consisting of those functors that preserve finite products is called the category of T-models or T-algebras.
Note that for the case of operation 2 → 1, the appropriate algebra A will define a morphism
A(2) ≈ A(1) × A(1) → A(1)
See also
Algebraic definition
References
Lawvere, F. W., 1963, Functorial Semantics of Algebraic Theories, Proceedings of the National Academy of Sciences 50, No. 5 (November 1963), 869-872
Adámek, J., Rosický, J., Vitale, E. M., Algebraic Theories. A Categorical Introduction To General Algebra
Kock, A., Reyes, G., Doctrines in categorical logic, in Handbook of Mathematical Logic, ed. J. Barwise, North Holland 1977
Mathematical logic | Algebraic theory | [
"Mathematics"
] | 640 | [
"Mathematical logic"
] |
24,497,801 | https://en.wikipedia.org/wiki/Emissions%20reduction%20currency%20systems | Emissions reduction currency systems (ERCS) are schemes that provide a positive economic and/or social reward for reductions in greenhouse gas emissions, either through the distribution or redistribution of national currency or through the issuing of coupons, reward points, local currency, or complementary currency.
Compared to other emissions reductions instruments
Emissions reduction currency is different from an emissions credit. The value of an emissions credit is determined by a national cap in emissions and the degree to which the credit confers a right to pollute. The ultimate value of an emissions credit is realised when it is surrendered to avoid punitive fines for emitting.
Emissions reduction currency is also different from a voluntary carbon offset, where a payment is made, typically to fund alternative energy or reforestation, and the resulting emissions reduction or sequestration is used to reduce or cancel the payer's responsibility for their own emissions. The value of an offset lies in its being held by the purchaser, and applies only for the period and purpose against which the offset applies.
An emissions reduction currency, by contrast, is purely an incentive for behaviour change by individuals or groups. As such the currency creates an additional economic benefit for emissions reductions separate from the cost imposed by national emissions caps or the voluntary cost assumed by the purchaser of a voluntary offset.
Emissions reduction currencies are not exchangeable within national cap and trade systems and as such do not confer any right to pollute.
While no emissions reduction currency system has achieved the scale of emissions crediting systems, a number of small-scale schemes are in operation or being set up. In addition, a number of currently hypothetical approaches are being promoted by organisations, academic institutions and think tanks.
Emissions reduction currency systems conceptually are inclusive of carbon currency systems but also include schemes that reduce emissions in incidental ways such as through waste reduction and community education.
History
The idea of a global wealth system based on alternative energy production was first suggested by Buckminster Fuller in his 1969 book Operating Manual for Spaceship Earth. This idea was piloted by Garry Davis who distributed these "kilowatt dollars" at the 1992 Earth Summit held in Rio de Janeiro. Edgar Kempers and Rob Van Hilton launched the Kiwah (kilowatt hour) currency at the Copenhagen Climate Summit in 2009.
Categories of emissions reduction currency systems
Emission reduction currency systems can be designated as belonging to one or more of five categories:
Carbon title schemes
Introduction of sustainable land management practices in tropical rainforest and other high carbon environments can lead to abatement of emissions from land clearing that might have otherwise occurred or from additional CO2 sequestration.
Land purchased and managed for these purposes can be used to create an independently tradeable carbon right, which may or may not be recognised within an emissions credit scheme. For example, aboveground biomass resulting from land use changes can currently be converted to recognised emissions credits under the Kyoto Protocol Clean Development Mechanism (CDM). Increases in soil carbon for reasons other than reforestation either through changes to land management practices or through the burial of biochar are currently not included in emissions credit systems such as the CDM.
These certificates of legal title can be traded as a form of currency independently of their use as an offset, yielding additional economic benefits. This use is suggested by The Carbon Currency Foundation.
Another emissions reduction currency system proposed on this basis is the ECO, a project of The Next Nature Lab which is an initiative of Eindhoven University of Technology in the Netherlands.
Promotional discount schemes
An emissions reduction currency system based on promotional discount is one where participants are rewarded for reducing their emissions by gaining points which can be redeemed for discounts from businesses advertising in the system.
RecycleBank is one such scheme where participants weigh recycled materials in specially designed disposal bins that identify themselves to scales embedded in garbage collection vehicles. Recyclebank is also funded by municipal governments that purchase and operate the required equipment, allowing RecycleBank to operate as a private for profit company. Another similar scheme was GreenOps LLC, a community-based recycling rewards program founded by eco-entrepreneur Anthony Zolezzi, which he later sold to Waste Management, becoming Greenopolis Recycling Rewards. Greenopolis gave rewards points to users from 2008 to 2012 through social media websites, Facebook Games and bottle and can recycling via PepsiCo Dream Machines. The Dream Machines were placed on college campuses, grocery stores and military bases across the US and collected more than 4 million plastic bottles in their first year of use. Oceanopolis, a Facebook game created by Greenopolis to make the recycling habit fun and engaging, was recognized by Al Gore at the 2011 Games for Change Festival at New York University, saying "I've been encouraged by recent developments like Trash Tycoon and Oceanopolis, and both have spurred my thinking in this area." Eventually, Recyclebank and Greenopolis would merge following an investment in Recyclebank by Waste Management. In 2019, RecycleBank was purchased by Recycle Track Systems (RTS).
EarthAid uses specialised software that publishes utility bills from companies in an online format that participants can share with family and friends. Reduced energy consumption earns reward points that can be redeemed for prizes at businesses in the EarthAid rewards network.
Allocation schemes
An emissions reduction currency system based on allocation is one where all individual participants are awarded an equal allotment of emissions currency. Participants then trade goods and services with one another to obtain enough of the currency to cover their actual emissions.
The objective of an allocation scheme is to obtain social parity between participants with regards to emissions reductions.
Technically an emissions crediting scheme, an allocation scheme is classed as an emissions reduction currency system because the trading of the currency between individuals as parity is sought can create a secondary market of trading where the currency can act as a medium of exchange, and this trading creates an additional positive economic value associated with emissions reductions.
The Global Resource Bank is one organisation advocating such a global allocation scheme.
Emissions rationing schemes
Otherwise known as personal carbon trading, an emissions reduction currency system based on rationing presumes a standard ration of emissions allowable for an average citizen that incrementally decreases over time.
Participants using less than the rationed amount receive a currency that can be traded with those emitting more than the allowed amount. All participants pledge that, in total, they will remain below the average, so that the scheme has a net positive value.
Carbon Rationing Action Groups (CRAGs), started in the United Kingdom, has a global network of groups. CRAG participants use a standard average for the country as a basis for the rationed amount. Participants emitting at above rationed levels must pay those below it in national currency.
Norfolk Island, Australia is in the process of implementing an island-wide voluntary personal carbon trading scheme designed by Southern Cross University Professor Garry Egger.
Community based currency schemes
A community-based emissions reduction currency scheme is a C4-type local currency in which local currency issues are backed by the emissions reductions of the scheme's members. The local currency, when accepted for trade by other members or local businesses, thereby rewards participants for their efforts at global warming prevention. These currencies may have various degrees of convertibility into carbon saved, renewable energy, or national currency.
The Edogawatt is a form of emissions reduction currency used in Edogawa, Tokyo that is an initiative of the local Jōdo Shinshū Jukou-in temple. In this scheme, the temple and devotees purchase solar panels and sell the excess power to the Tokyo Electric Power Company. The temple then takes the difference between the price paid by the Tokyo Electric Power Company and the price paid for natural energy in Germany and sells Green Power Certificates as a fund raiser for the temple. Purchasers of the Green Power Certificates are given 30 Edogawatt bills per certificate. "These are currently being used among people ... as a certificate of debt or obligation in exchange for baby-sitting, carrying loads, translating and other small jobs. They have provided an incentive for creation of a mutual aid society within the community and we would like to make them a tool for deepening interpersonal relationships and trust."
Kyoto4All (http://www.qoin.org/what-we-do/past-projects/kyoto4all/) was a 2006 report written by Peter van Luttervelt, David Beatty, Edgar Kampers and Hugo Schönbeck for the Dutch Ministry of Environment (then named VROM). The study described a series of monetary models to connect citizens-consumers to the climate change targets of the post-Kyoto period.
The Maia Maia Emissions Reduction Currency System is a scheme developed in Western Australia. The system currency is known as a "boya", named after the indigenous Nyungar people's word for rock trading tokens used by them. Each boya is based on 10 kilograms of carbon dioxide equivalent global warming prevention, which equates to a social cost of carbon of $100 per tonne CO2-e, approximating a middle estimate from peer-reviewed studies. The first issue of boya occurred on 30 January 2011 in Fremantle, Western Australia, at an event hosted by the International Permaculture Service and the Gaia Foundation of Western Australia. Other issuers of boya include the University of Vermont and, in Australia, primary schools, non-profit organisations, and a neighbourhood association.
The Liquidity Network, an initiative of the Foundation for the Economics of Sustainability is proposing to introduce a community run emissions reduction currency in the County of Kilkenny in Ireland. The proposal is currently before council for consideration.
The German climate protection association SaveClimate.Earth is proposing to introduce initially on a European base a climate currency ECO (Earth Carbon Obligation) as complementary currency to pay for the individual greenhouse gas emissions consumption.
Monetizing schemes
A monetised emissions reduction currency is backed by the financial value of emissions credits certified under a regulatory scheme, or by other financial products derived from them. These credits can be converted into fiat currency by transferring ownership of the underlying assets, for example by selling the emissions credits into cap-and-trade markets.
The Ven is a virtual currency issued by the Hub Culture social media network. The value of Ven is determined on the financial markets from a basket of currencies and commodities. The Ven may be categorised as an emissions reduction currency because carbon futures are included as one of the commodities used to value the currency.
Carbon Manna is a proposed scheme that will use proceeds from pre-selling credits from bundled emissions reduction projects to reimburse users directly or to enroll them in the successful mobile phone currency M-PESA being used in developing countries to reduce monetary transaction costs and hedge against currency fluctuations.
See also
NORI token
Notes
Local currencies
Emissions reduction
Carbon finance
Economics and climate change
Environmental issues in Ireland | Emissions reduction currency systems | [
"Chemistry"
] | 2,181 | [
"Greenhouse gases",
"Emissions reduction"
] |
21,508,600 | https://en.wikipedia.org/wiki/Kramers%27%20law | Kramers' law is a formula for the spectral distribution of X-rays produced by an electron hitting a solid target. The formula concerns only bremsstrahlung radiation, not the element specific characteristic radiation. It is named after its discoverer, the Dutch physicist Hendrik Anthony Kramers.
The formula for Kramers' law is usually given as the distribution of intensity (photon count) I against the wavelength λ of the emitted radiation:
I(λ) dλ = K (λ/λmin − 1) (1/λ²) dλ
The constant K is proportional to the atomic number of the target element, and λmin is the minimum wavelength given by the Duane–Hunt law. The maximum intensity is at λ = 2λmin.
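The position of this maximum can be checked by a short calculation, included here as an aside rather than as part of Kramers' original statement. Expanding the distribution as I(λ) = K(1/(λλmin) − 1/λ²) and setting its derivative to zero gives
dI/dλ = K(2/λ³ − 1/(λ²λmin)) = 0,
which is satisfied at λ = 2λmin.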
The intensity described above is a particle flux and not an energy flux, as can be seen from the fact that the integral over λ from λmin to infinity is infinite. However, the integral of the energy flux is finite.
To obtain a simple expression for the energy flux, first change variables from λ (the wavelength) to ω (the angular frequency) using λ = 2πc/ω, and require I(λ)|dλ| = Ĩ(ω)|dω|. Now Ĩ(ω) is the quantity which is integrated over ω from 0 to ωmax to get the total number (still infinite) of photons, where ωmax = 2πc/λmin:
Ĩ(ω) = (K/(2πc)) (ωmax/ω − 1)
The energy flux, which we will call J (but which may also be referred to as the "intensity", in conflict with the above name for Ĩ), is obtained by multiplying the above by the photon energy ħω:
J(ω) = (Kħ/(2πc)) (ωmax − ω) for ω < ωmax,
J(ω) = 0 for ω ≥ ωmax.
It is a linear function of ω that is zero at the maximum photon energy ħωmax.
References
Spectroscopy
X-rays | Kramers' law | [
"Physics",
"Chemistry",
"Astronomy"
] | 280 | [
"Spectroscopy stubs",
"Molecular physics",
"X-rays",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Electromagnetic spectrum",
"Astronomy stubs",
"Molecular physics stubs",
"Spectroscopy",
"Physical chemistry stubs"
] |
21,510,483 | https://en.wikipedia.org/wiki/Lipid%20bilayer%20fusion | In membrane biology, fusion is the process by which two initially distinct lipid bilayers merge their hydrophobic cores, resulting in one interconnected structure. If this fusion proceeds completely through both leaflets of both bilayers, an aqueous bridge is formed and the internal contents of the two structures can mix. Alternatively, if only one leaflet from each bilayer is involved in the fusion process, the bilayers are said to be hemifused. In hemifusion, the lipid constituents of the outer leaflet of the two bilayers can mix, but the inner leaflets remain distinct. The aqueous contents enclosed by each bilayer also remain separated.
Fusion is involved in many cellular processes, particularly in eukaryotes since the eukaryotic cell is extensively sub-divided by lipid bilayer membranes. Exocytosis, fertilization of an egg by sperm and transport of waste products to the lysosome are a few of the many eukaryotic processes that rely on some form of fusion. Fusion is also an important mechanism for transport of lipids from their site of synthesis to the membrane where they are needed. Even the entry of pathogens can be governed by fusion, as many bilayer-coated viruses have dedicated fusion proteins to gain entry into the host cell.
Lipid mechanism
There are four fundamental steps in the fusion process, although each of these steps actually represents a complex sequence of events. First, the involved membranes must aggregate, approaching each other to within several nanometers. Second, the two bilayers must come into very close contact (within a few angstroms). To achieve this close contact, the two surfaces must become at least partially dehydrated, as the bound surface water normally present causes bilayers to strongly repel at this distance. Third, a destabilization must develop at one point between the two bilayers, inducing a highly localized rearrangement of the two bilayers. Finally, as this point defect grows, the components of the two bilayers mix and diffuse away from the site of contact. Depending on whether hemifusion or full fusion occurs, the internal contents of the membranes may mix at this point as well.
The exact mechanisms behind this complex sequence of events are still a matter of debate. To simplify the system and allow more definitive study, many experiments have been performed in vitro with synthetic lipid vesicles. These studies have shown that divalent cations play a critical role in the fusion process by binding to negatively charged lipids such as phosphatidylserine, phosphatidylglycerol and cardiolipin. One role of these ions in the fusion process is to shield the negative charge on the surface of the bilayer, diminishing electrostatic repulsion and allowing the membranes to approach each other. This is clearly not the only role, however, since there is an extensively documented difference in the ability of Mg2+ versus Ca2+ to induce fusion. Although Mg2+ will induce extensive aggregation, it will not induce fusion, while Ca2+ induces both. It has been proposed that this discrepancy is due to a difference in extent of dehydration. Under this theory, calcium ions bind more strongly to charged lipids, but less strongly to water. The resulting displacement of water by calcium destabilizes the lipid-water interface and promotes intimate interbilayer contact. A recently proposed alternative hypothesis is that the binding of calcium induces a destabilizing lateral tension. Whatever the mechanism of calcium-induced fusion, the initial interaction is clearly electrostatic, since zwitterionic lipids are not susceptible to this effect.
In the fusion process, the lipid head group is not only involved in charge density, but can affect dehydration and defect nucleation. These effects are independent of the effects of ions. The presence of the uncharged headgroup phosphatidylethanolamine (PE) increases fusion when incorporated into a phosphatidylcholine bilayer. This phenomenon has been explained by some as a dehydration effect similar to the influence of calcium. The PE headgroup binds water less tightly than PC and therefore may allow close apposition more easily. An alternate explanation is that the physical rather than chemical nature of PE may help induce fusion. According to the stalk hypothesis of fusion, a highly curved bridge must form between the two bilayers for fusion to occur. Since PE has a small headgroup and readily forms inverted micelle phases it should, according to the stalk model, promote the formation of these stalks. Further evidence cited in favor of this theory is the fact that certain lipid mixtures have been shown to only support fusion when raised above the transition temperature of these inverted phases. This topic also remains controversial, and even if there is a curved structure present in the fusion process, there is debate in the literature over whether it is a cubic, hexagonal or more exotic extended phase.
Fusion proteins
The situation is further complicated when considering fusion in vivo, since biological fusion is almost always regulated by the action of membrane-associated proteins. The first of these proteins to be studied were the viral fusion proteins, which allow an enveloped virus to insert its genetic material into the host cell (enveloped viruses are those surrounded by a lipid bilayer; some others have only a protein coat). Broadly, there are two classes of viral fusion proteins: acidic and pH-independent. pH-independent fusion proteins can function under neutral conditions and fuse with the plasma membrane, allowing viral entry into the cell. Viruses utilizing this scheme include HIV, measles, and herpes. Acidic fusion proteins, such as those found on influenza, are only activated in the low pH of acidic endosomes and must first be endocytosed to gain entry into the cell.
Eukaryotic cells use entirely different classes of fusion proteins, the best studied of which are the SNAREs. SNARE proteins are used to direct all vesicular intracellular trafficking. Despite years of study, much is still unknown about the function of this protein class. In fact, there is still an active debate regarding whether SNAREs are linked to early docking or participate later in the fusion process by facilitating hemifusion. Even once the role of SNAREs or other specific proteins is illuminated, a unified understanding of fusion proteins is unlikely as there is an enormous diversity of structure and function within these classes, and very few themes are conserved.
Fusion in laboratory practice
In studies of molecular and cellular biology it is often desirable to artificially induce fusion. Although this can be accomplished with the addition of calcium as discussed earlier, this procedure is often not feasible because calcium regulates many other biochemical processes and its addition would be a strong confound. Also, as mentioned, calcium induces massive aggregation as well as fusion. The addition of polyethylene glycol (PEG) causes fusion without significant aggregation or biochemical disruption. This procedure is now used extensively, for example by fusing B-cells with myeloma cells. The resulting “hybridoma” from this combination expresses a desired antibody as determined by the B-cell involved, but is immortalized due to the myeloma component. The mechanism of PEG fusion has not been definitively identified, but some researchers believe that the PEG, by binding a large number of water molecules, effectively decreases the chemical activity of the water and thus dehydrates the lipid headgroups. Fusion can also be artificially induced through electroporation in a process known as electrofusion. It is believed that this phenomenon results from the energetically active edges formed during electroporation, which can act as the local defect point to nucleate stalk growth between two bilayers.
Alternatively, SNARE-inspired model systems can be used to induce membrane fusion of lipid vesicles. In those systems, membrane-anchored complementary DNA, PNA, peptides, or other molecules "zip" together and pull the membranes into proximity. Such systems could have practical applications in the future, for example in drug delivery. Probably the best-investigated system consists of coiled-coil-forming peptides of complementary charge (one typically carries an excess of positively charged lysines and is thus termed peptide K, and the other an excess of negatively charged glutamic acids, called peptide E). It was discovered that not only is coiled-coil formation between the two peptides necessary for membrane fusion to occur, but also that peptide K interacts with the membrane surface and causes local defects.
Assays to measure membrane fusion
There are two levels of fusion: mixing of membrane lipids and mixing of contents. Assays of membrane fusion report either the mixing of membrane lipids or the mixing of the aqueous contents of the fused entities.
Assays for measuring lipid mixing
Assays evaluating lipid mixing make use of concentration-dependent effects such as nonradiative energy transfer, fluorescence quenching, and pyrene excimer formation.
NBD-Rhodamine Energy Transfer: In this method, membrane labeled with both NBD (donor) and rhodamine (acceptor) is combined with unlabeled membrane. While NBD and rhodamine are within a certain distance of each other, Förster resonance energy transfer (FRET) occurs. After fusion, the average distance between the probes increases, FRET decreases, and NBD fluorescence increases.
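As a rough numerical illustration of why this readout works (this sketch and its Förster radius are illustrative assumptions, not parameters from the assay literature), FRET efficiency falls off with the sixth power of the donor–acceptor distance, so diluting the probes into unlabeled membrane sharply reduces transfer:

```python
# Minimal sketch: Förster resonance energy transfer (FRET) efficiency
# as a function of donor-acceptor distance r: E = 1 / (1 + (r/R0)**6).
# R0 (the Förster radius, at which E = 0.5) is an assumed illustrative
# value, not a measured constant for the NBD-rhodamine pair.

R0_NM = 5.0  # assumed Förster radius in nanometers

def fret_efficiency(r_nm: float, r0_nm: float = R0_NM) -> float:
    """Return FRET efficiency for a donor-acceptor separation r_nm."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

if __name__ == "__main__":
    for r in (2.0, 5.0, 8.0, 12.0):
        print(f"r = {r:4.1f} nm -> E = {fret_efficiency(r):.3f}")
    # Probe dilution after fusion increases r, driving E toward zero
    # while donor (NBD) fluorescence recovers.
```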
Pyrene Excimer Formation: Pyrene monomer and excimer emission wavelengths differ: the monomer emits at around 400 nm and the excimer at around 470 nm. In this method, membrane labeled with pyrene combines with unlabeled membrane. Pyrene self-associates in the membrane, and an excited pyrene can pair with a neighboring ground-state pyrene to form an excimer. Before fusion, the majority of the emission is excimer emission. After fusion, the distance between the probes increases and the ratio of excimer emission decreases.
Octadecyl Rhodamine B Self-Quenching: This assay is based on the self-quenching of octadecyl rhodamine B, which occurs when the probe is incorporated into membrane lipids at concentrations of 1–10 mole percent, because rhodamine dimers quench fluorescence. In this method, membrane labeled with rhodamine combines with unlabeled membrane. Fusion with unlabeled membrane dilutes the probe, which is accompanied by increasing fluorescence. The major problem of this assay is spontaneous transfer of the probe between membranes.
Assays for measuring content mixing
Mixing of aqueous contents from vesicles as a result of lysis, fusion or physiological permeability can be detected fluorometrically using low molecular weight soluble tracers.
Fluorescence quenching assays with ANTS/DPX: ANTS is a polyanionic fluorophore, while DPX is a cationic quencher; the assay is based on collisional quenching between the two. Separate vesicle populations are loaded with ANTS or DPX, respectively. When content mixing occurs, ANTS and DPX collide, and the ANTS fluorescence (monitored at 530 nm, with excitation at 360 nm) is quenched. This method is performed at acidic pH and high probe concentration.
Fluorescence enhancement assays with Tb3+/DPA: This method is based on the fact that the Tb3+/DPA chelate is 10,000 times more fluorescent than Tb3+ alone. In the Tb3+/DPA assay, separate vesicle populations are loaded with TbCl3 or DPA, and formation of the Tb3+/DPA chelate indicates vesicle fusion. This method is well suited to protein-free membranes.
Single-molecule DNA assay: A DNA hairpin composed of a 5-base-pair stem and a poly-thymidine loop, labeled with a donor (Cy3) and an acceptor (Cy5) at the ends of the stem, is encapsulated in the v-SNARE vesicle, while multiple unlabeled poly-adenosine DNA strands are separately encapsulated in the t-SNARE vesicle. If the two vesicles, both ~100 nm in diameter, dock and a large enough fusion pore forms between them, the two DNA molecules hybridize, opening up the stem region of the hairpin and switching the Förster resonance energy transfer (FRET) efficiency (E) between Cy3 and Cy5 from a high to a low value.
See also
Interbilayer Forces in Membrane Fusion
Fusion mechanism
Cell fusion
References
Membrane biology
Biophysics | Lipid bilayer fusion | [
"Physics",
"Chemistry",
"Biology"
] | 2,628 | [
"Membrane biology",
"Applied and interdisciplinary physics",
"Biophysics",
"Molecular biology"
] |
21,510,668 | https://en.wikipedia.org/wiki/Pascal%27s%20law | Pascal's law (also Pascal's principle or the principle of transmission of fluid-pressure) is a principle in fluid mechanics that states that a pressure change at any point in a confined incompressible fluid is transmitted throughout the fluid such that the same change occurs everywhere. The law was established by French mathematician Blaise Pascal in 1653 and published in 1663.
Definition
Pascal's principle is defined as: a change in pressure at any point in an enclosed incompressible fluid at rest is transmitted undiminished to every point of the fluid and to the walls of the container.
Fluid column with gravity
For a fluid column in a uniform gravity (e.g. in a hydraulic press), this principle can be stated mathematically as:

Δp = ρg·Δh

where Δp is the hydrostatic pressure difference (Pa), ρ is the fluid density (kg/m3), g is the acceleration due to gravity (m/s2), and Δh is the difference in elevation between the two points within the fluid column (m).
The intuitive explanation of this formula is that the change in pressure between two elevations is due to the weight of the fluid between the elevations. Alternatively, the result can be interpreted as a pressure change caused by the change of potential energy per unit volume of the liquid due to the existence of the gravitational field. Note that the variation with height does not depend on any additional pressures. Therefore, Pascal's law can be interpreted as saying that any change in pressure applied at any given point of the fluid is transmitted undiminished throughout the fluid.
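A quick numerical check of the formula above (the density and depth values are illustrative choices, not from the article) reproduces the roughly 100 kPa-per-10-m rule of thumb quoted for divers in the applications list below:

```python
# Minimal sketch: hydrostatic pressure change delta_p = rho * g * delta_h.
# rho and delta_h below are illustrative values, not from the article.

RHO_WATER = 1000.0  # density of fresh water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def pressure_change(delta_h_m: float, rho: float = RHO_WATER) -> float:
    """Pressure difference in pascals across a height change delta_h_m."""
    return rho * G * delta_h_m

if __name__ == "__main__":
    dp = pressure_change(10.0)  # 10 m of water depth
    print(f"delta_p over 10 m of water: {dp / 1000:.1f} kPa")  # ~98.1 kPa
```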
The formula is a special case of the Navier–Stokes equations with the inertia and viscosity terms omitted.
Applications
If a U-tube is filled with water and pistons are placed at each end, pressure exerted by the left piston will be transmitted throughout the liquid and against the bottom of the right piston. (The pistons are simply "plugs" that can slide freely but snugly inside the tube.) The pressure that the left piston exerts against the water will be exactly equal to the pressure the water exerts against the right piston. Using the relation P = F/A, it follows that F1/A1 = F2/A2. Suppose the tube on the right side is made 50 times wider (A2 = 50 A1). If a 1 N load is placed on the left piston (F1 = 1 N), an additional pressure due to the weight of the load is transmitted throughout the liquid and up against the right piston. This additional pressure on the right piston will cause an upward force which is 50 times bigger than the force on the left piston. The difference between force and pressure is important: the additional pressure is exerted against the entire area of the larger piston. Since there is 50 times the area, 50 times as much force is exerted on the larger piston. Thus, the larger piston will support a 50 N load – fifty times the load on the smaller piston.
Forces can be multiplied using such a device. One newton input produces 50 newtons output. By further increasing the area of the larger piston (or reducing the area of the smaller piston), forces can be multiplied, in principle, by any amount. Pascal's principle underlies the operation of the hydraulic press. The hydraulic press does not violate energy conservation, because a decrease in distance moved compensates for the increase in force. When the small piston is moved downward 100 centimeters, the large piston will be raised only one-fiftieth of this, or 2 centimeters. The input force multiplied by the distance moved by the smaller piston is equal to the output force multiplied by the distance moved by the larger piston; this is one more example of a simple machine operating on the same principle as a mechanical lever.
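The piston arithmetic above can be condensed into a few lines; this sketch simply restates the 50:1 example from the text (the helper names are mine, not a standard API):

```python
# Minimal sketch of hydraulic press force multiplication (Pascal's principle):
# equal pressure on both pistons implies F2 = F1 * (A2 / A1), while energy
# conservation implies the large piston moves only d1 * (A1 / A2).

def output_force(f1_newtons: float, area_ratio: float) -> float:
    """Force on the large piston given input force and A2/A1 ratio."""
    return f1_newtons * area_ratio

def output_distance(d1_cm: float, area_ratio: float) -> float:
    """Distance moved by the large piston given the input stroke."""
    return d1_cm / area_ratio

if __name__ == "__main__":
    ratio = 50.0                              # A2 is 50 times A1, as in the text
    print(output_force(1.0, ratio))           # 50.0 N from a 1 N input
    print(output_distance(100.0, ratio))      # 2.0 cm for a 100 cm input stroke
```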
A typical application of Pascal's principle for gases and liquids is the automobile lift seen in many service stations (the hydraulic jack). Increased air pressure produced by an air compressor is transmitted through the air to the surface of oil in an underground reservoir. The oil, in turn, transmits the pressure to a piston, which lifts the automobile. The relatively low pressure that exerts the lifting force against the piston is about the same as the air pressure in automobile tires. Hydraulics is employed by modern devices ranging from very small to enormous. For example, there are hydraulic pistons in almost all construction machines where heavy loads are involved.
Other applications:
Force amplification in the braking system of most motor vehicles.
Used in artesian wells, water towers, and dams.
Scuba divers must understand this principle. Starting from normal atmospheric pressure, about 100 kilopascals, the pressure increases by about 100 kPa for each 10 m increase in depth.
Pascal's rule is usually applied to a confined fluid (the static case), but because the flow process is continuous, Pascal's principle can also be applied to the oil-lift mechanism (which can be represented as a U-tube with pistons on either end).
Pascal's barrel
Pascal's barrel is the name of a hydrostatics experiment allegedly performed by Blaise Pascal in 1646. In the experiment, Pascal supposedly inserted a long vertical tube into an (otherwise sealed) barrel filled with water. When water was poured into the vertical tube, the increase in hydrostatic pressure caused the barrel to burst.
The experiment is mentioned nowhere in Pascal's preserved works and it may be apocryphal, attributed to him by 19th-century French authors, among whom the experiment is known as crève-tonneau (approx.: "barrel-buster");
nevertheless the experiment remains associated with Pascal in many elementary physics textbooks.
See also
Pascal's contributions to the physical sciences
References
Hydrostatics
Fluid mechanics
Blaise Pascal | Pascal's law | [
"Engineering"
] | 1,048 | [
"Civil engineering",
"Fluid mechanics"
] |
21,512,627 | https://en.wikipedia.org/wiki/Satellite%20collision | Strictly speaking, a satellite collision is when two satellites collide while in orbit around a third, much larger body, such as a planet or moon. This definition is typically loosely extended to include collisions between sub-orbital or escape-velocity objects with an object in orbit. Prime examples are the anti-satellite weapon tests. There have been no observed collisions between natural satellites, but impact craters may show evidence of such events. Both intentional and unintentional collisions have occurred between man-made satellites around Earth since the 1980s. Anti-satellite weapon tests and failed rendezvous or docking operations can result in orbital space debris, which in turn may collide with other satellites.
Natural-satellite collisions
There have been no observed collisions between natural satellites of any Solar System planet or moon. Collision candidates for past events are:
Impact craters on many of Jupiter's (Jovian) and Saturn's (Saturnian) moons. They may have been formed by collisions with smaller moons, although they could equally well have been formed by impacts with asteroids and comets during the Late Heavy Bombardment.
The far side of the Moon may have formed from the impact of a smaller moon that also formed during the giant impact event that created the Moon.
The objects making up the Rings of Saturn are believed to continually collide and aggregate with each other, leading to debris of limited size constrained to a thin plane. Although this is believed to be an ongoing process, it has not been directly observed.
Artificial-satellite collisions
Three types of collisions have occurred involving artificial satellites orbiting the Earth:
Intentional collisions intended to destroy the satellites, either to test anti-satellite weapons or destroy satellites which may pose a hazard should they reenter the atmosphere intact:
Several tests conducted as part of the Soviet Union's Istrebitel Sputnikov programme in the 1970s and 80s, involving IS-A satellites intercepting and destroying IS-P, DS-P1-M and Lira target satellites launched specifically for the tests.
The 1985 destruction of the USA P78-1 solar research satellite during a USA ASM-135 anti-satellite missile test.
The 2007 destruction of the Chinese Fengyun FY-1C weather satellite during a Chinese anti-satellite missile test.
The 2008 destruction of the USA-193 military reconnaissance satellite in a decaying orbit by a USA SM-3 missile.
The 2019 destruction of the Indian Microsat-R satellite by an anti-satellite weapon (ASAT) launched by the Indian military, in a test called "Mission Shakti".
Unintentional low-speed collisions during failed rendezvous and docking operations:
The 1994 collision between the crewed Soyuz TM-17 spacecraft and the Russian Mir space station.
The 1997 low-speed collision between the Progress M-34 supply ship and the Russian Mir space station during manual docking manoeuvers.
The 2005 low-speed collision between the USA DART spacecraft and the USA MUBLCOM communications satellite during orbital rendezvous manoeuvers.
Unintentional high-speed collisions between active satellites and orbital debris:
The 1991 collision between Kosmos 1934 and Mission-related debris (1977-062C, 13475).
The 1996 collision between the French Cerise military reconnaissance satellite and debris from an Ariane rocket.
The 2009 collision between the Iridium 33 communications satellite and the derelict Russian Kosmos 2251 spacecraft, which resulted in the destruction of both satellites.
The 22 January 2013 collision between debris from Fengyun FY-1C satellite and the Russian BLITS nano-satellite.
The 22 May 2013 collision between two CubeSats, Ecuador's NEE-01 Pegaso and Argentina's CubeBug-1, and the particles of a debris cloud around a Tsyklon-3 upper stage (SCN 15890) left over from the launch of Kosmos 1666.
The 18 March 2021 collision between Yunhai-1 02 and debris from the Zenit-2 rocket body that launched the Kosmos 2333 satellite (a Tselina-2 satellite) in 1996.
Spacecraft impacts with moons
The first spacecraft to impact the Earth's Moon was the USSR Luna 2 on September 14, 1959. For a complete list of spacecraft impacts and controlled landings on the Moon, see List of man-made objects on the Moon. Also see Timeline of Moon exploration and List of lunar probes.
There have been no spacecraft collisions with the Martian moons.
There have been no spacecraft collisions with any Jovian moons. Note that to avoid collision with Europa and possible contamination by Earth microbes, the NASA Galileo spacecraft was intentionally deorbited into Jupiter's atmosphere on September 21, 2003.
There have been no spacecraft collisions with any Saturnian moons; the ESA Huygens probe made a controlled landing on Titan on January 14, 2005.
The Double Asteroid Redirection Test (DART) was intentionally collided with Dimorphos, a moon of the asteroid 65803 Didymos on 26 September 2022.
Satellite collision avoidance
Satellite operators frequently maneuver their satellites to avoid potential collisions. One notable near-collision occurred in September 2019 between an ESA satellite (Aeolus) and a SpaceX Starlink satellite, when ESA complained via Twitter about having to maneuver to avoid the Starlink satellite.
See also
Space debris
Meteor
Kessler syndrome
References
Collision
Satellites | Satellite collision | [
"Physics",
"Astronomy",
"Technology"
] | 1,067 | [
"Outer space",
"Satellite collisions",
"Space debris",
"Mechanics",
"Satellites",
"Collision"
] |
21,516,878 | https://en.wikipedia.org/wiki/Helical%20resonator | A helical resonator is a passive electrical component that can be used as a filter resonator.
Physically, a helical resonator is a wire helix surrounded by a square or cylindrical conductive shield. One end of the helix is connected to the shield and the other end is left open (Weston, 2001, p. 660). The device works like a coaxial resonator, but it is much shorter because the helical inner conductor reduces the velocity of wave propagation (Lancaster, 2006, p. 99).
Like cavity resonators, helical resonators can achieve Q factors in the 1000s. This is because at high frequencies, the skin effect results in most of the current flowing on the surface of the helix and shield. Plating the shield walls and helix with high conductivity materials increases the Q beyond that of bare copper (Blattenberger, 1989).
The length of wire is one quarter of the wavelength of interest. The helix is space-wound: the gap between turns is equal to the diameter of the wire (Blattenberger, 1989). If the open end of the helix is close to the end cap of the metal shield, the length is somewhat reduced due to the capacitance between the conductor and the shield (Whitaker, 2000, p. 227).
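A minimal sketch of the quarter-wave sizing rule just described, assuming an operating frequency, helix diameter, and wire diameter chosen purely for illustration (the end-capacitance shortening noted above is ignored):

```python
# Minimal sketch: size a helical resonator winding from the quarter-wave rule.
# Wire length = lambda/4 = c / (4 f); turns = wire_length / (pi * d);
# space-wound height = turns * 2 * wire_diameter (gap equals wire diameter).
# All input values below are illustrative assumptions, not from the references.
import math

C = 299_792_458.0  # speed of light, m/s

def quarter_wave_helix(f_hz: float, mean_dia_m: float, wire_dia_m: float):
    """Return (wire length in m, number of turns, winding height in m)."""
    wire_len = C / (4.0 * f_hz)
    turns = wire_len / (math.pi * mean_dia_m)
    height = turns * 2.0 * wire_dia_m  # pitch is twice the wire diameter
    return wire_len, turns, height

if __name__ == "__main__":
    length, turns, height = quarter_wave_helix(800e6, 0.01, 0.001)
    print(f"wire: {length * 100:.1f} cm, "
          f"turns: {turns:.1f}, height: {height * 100:.1f} cm")
```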
Coupling to the resonator can be achieved with a tap wire soldered to the helix at some distance from the shorted end. Input impedance varies with distance from the shorted end by impedance transformer action, and the tap point is chosen to achieve an impedance match with the connected circuit. Tuning of the resonator may be achieved by inserting a screw into the central axis of the helix (Weston, 2001, p. 660). Other means of input and output coupling are a wire loop coupling to the magnetic field near the shorted end, or a probe capacitively coupling near the open end. Coupling between resonators in a multi-resonator filter is often simply achieved with apertures in the shielding between them (Whitaker, 2000, p. 227).
Helical resonators are well suited to UHF frequencies ranging from 600 MHz to 1500 MHz (Blattenberger, 1989).
Design equations
Q - quality factor (dimensionless)
Z0 - resonator characteristic impedance (ohms)
d - mean helix diameter (cm)
h - height of helix (cm)
f - frequency (MHz)
(Blattenberger, 1989)
References
Kirt Blattenberger, "Helical resonator design", RF Cafe, 1989.
M. J. Lancaster, Passive Microwave Device Applications of High-Temperature Superconductors, Cambridge University Press, 2006 .
David Weston, Electromagnetic Compatibility: Principles and Applications, Second Edition, CRC Press, 2001 .
Jerry C. Whitaker, The Resource Handbook of Electronics, CRC Press, 2000 .
Anatol I. Zverev, Handbook of filter synthesis, pp.499-519, Wiley, 1967 .
Resonators
Distributed element circuits | Helical resonator | [
"Engineering"
] | 640 | [
"Electronic engineering",
"Distributed element circuits"
] |
21,518,774 | https://en.wikipedia.org/wiki/Neofavolus%20alveolaris | Neofavolus alveolaris, commonly known as the hexagonal-pored polypore, is a species of fungus in the family Polyporaceae. It causes a white rot of dead hardwoods. Found on sticks and decaying logs, its distinguishing features are its yellowish to orange scaly cap, and the hexagonal or diamond-shaped pores. It is widely distributed in North America, and also found in Asia, Australia, and Europe.
Taxonomy
The first scientific description of the fungus was published in 1815 by Augustin Pyramus de Candolle, under the name Merulius alveolaris. A few years later in 1821 it was sanctioned by Elias Magnus Fries as Cantharellus alveolaris. It was transferred to the genus Polyporus in a 1941 publication by Appollinaris Semenovich Bondartsev and Rolf Singer. It was then transferred to its current genus in 2012.
The genus name is derived from the Greek meaning "many pores", while the specific epithet alveolaris means "with small pits or hollows".
Description
The fruit bodies of P. alveolaris are in diameter, rounded to kidney- or fan-shaped. Fruit bodies sometimes have stems, but they are also found attached directly to the growing surface. The cap surface is dry, covered with silk-like fibrils, and is an orange-yellow or reddish-orange color, which weathers to cream to white. The context is thin (2 mm), tough, and white. Tubes are radially elongated, with the pore walls breaking down in age. The pores are large—compared to other species in this genus—typically 0.5–3 mm wide, angular (diamond-shaped) or hexagonal; the pore surface is a white to buff color. The stipe, if present, is 0.5–2 cm long by 1.5–5 mm thick, placed either laterally or centrally, and has a white to tan color. The pores extend decurrently on the stipe. The spore deposit is white.
Microscopic features
Spores are narrowly elliptical and smooth, hyaline, with dimensions of 11–14.5 × 4–5 μm. The basidia are club-shaped and four-spored, with dimensions of 28–42 × 7–9 μm.
Similar species
Polyporus craterellus bears a resemblance to P. alveolaris, but the former species has a more prominent stalk and does not have the reddish-orange colors observed in the latter.
Edibility
This mushroom is edible when young. It has been described as "edible but tough," with toughness increasing with age, and not having "all that distinctive of a flavor." Another reference lists the species as inedible.
Habitat and distribution
Neofavolus alveolaris is found growing singly or grouped together on branches and twigs of hardwoods, commonly on shagbark hickory in the spring and early summer. It has been reported growing on the dead hardwoods of genera Acer, Castanea, Cornus, Corylus, Crataegus, Erica, Fagus, Fraxinus, Juglans, Magnolia, Morus, Populus, Pyrus, Robinia, Quercus, Syringa, Tilia, and Ulmus.
This species is widely distributed in North America,
and has also been collected in Australia, China, and Europe (Czechoslovakia, Italy and Portugal).
Antifungal compounds
A polypeptide with antifungal properties has been isolated from the fresh fruit bodies of this species. Named alveolarin, it inhibits the growth of the species Botrytis cinerea, Fusarium oxysporum, Mycosphaerella arachidicola, and Physalospora piricola.
References
alveolaris
Edible fungi
Fungi described in 1815
Fungi of Australia
Fungi of China
Fungi of Europe
Fungi of North America
Fungi of Asia
Polyporaceae
Taxa named by Augustin Pyramus de Candolle
Fungus species | Neofavolus alveolaris | [
"Biology"
] | 843 | [
"Fungi",
"Fungus species"
] |