26,069,912
https://en.wikipedia.org/wiki/Virus%20quantification
Virus quantification is counting or calculating the number of virus particles (virions) in a sample to determine the virus concentration. It is used in both research and development (R&D) in academic and commercial laboratories as well as in production situations where the quantity of virus at various steps is an important variable that must be monitored. For example, the production of virus-based vaccines, recombinant proteins using viral vectors, and viral antigens all require virus quantification to continually monitor and/or modify the process in order to optimize product quality and production yields and to respond to ever changing demands and applications. Other examples of specific instances where viruses need to be quantified include clone screening, multiplicity of infection (MOI) optimization, and adaptation of methods to cell culture. There are many ways to categorize virus quantification methods. Here, the methods are grouped according to what is being measured and in what biological context. For example, cell-based assays typically measure infectious units (active virus). Other methods may measure the concentration of viral proteins, DNA, RNA, or molecular particles, but not necessarily measure infectivity. Each method has its own advantages and disadvantages, which often determines which method is used for specific applications. Cell-based assays Plaque assay Plaque-based assays are a commonly used method to determine virus concentration in terms of infectious dose. Plaque assays determine the number of plaque forming units (PFU) in a virus sample, which is one measure of virus quantity. This assay is based on a microbiological method conducted in petri dishes or multi-well cell culture plates. Specifically, a confluent monolayer of host cells is infected by applying a sample containing the virus at varying dilutions and then covered with a semi-solid medium, such as agar or carboxymethyl cellulose, to prevent the virus infection from spreading indiscriminately, as would occur in a liquid medium. A viral plaque is formed after a virus infects a cell within the fixed cell monolayer. The virus-infected cell will lyse and spread the infection to adjacent cells, where the infection-to-lysis cycle is repeated. This will create an area of infected, lysed cells (viral plaque) surrounded by uninfected, intact cells. The plaque can be seen with an optical microscope or visually using cell staining techniques (e.g., staining with a crystal violet solution to visualize intact vs. lysed cells). Plaque formation can take 3–14 days, depending on the virus being analyzed. Plaques are generally counted manually, and the plaque count, in combination with the dilution factor of the infection solution (the sample initially applied to the cells), is used to calculate the number of plaque forming units per sample unit volume (PFU/mL). The PFU/mL number represents the concentration of infectious virus particles within the sample and is based on the assumption that each plaque formed is representative of an initial infection by one infectious virus particle. Focus forming assay (FFA) The focus forming assay (FFA) is a variation of the plaque assay, but instead of depending on cell lysis in order to detect plaque formation, the FFA employs immunostaining techniques using fluorescently labeled antibodies specific for a viral antigen to detect infected host cells and infectious virus particles before an actual plaque is formed. 
The FFA is particularly useful for quantifying classes of viruses that do not lyse the cell membranes, as these viruses would not be amenable to the plaque assay. Like the plaque assay, host cell monolayers are infected with various dilutions of the virus sample and allowed to incubate for a relatively brief incubation period (e.g., 24–72 hours) under a semisolid overlay medium that restricts the spread of infectious virus, creating localized clusters (foci) of infected cells. Plates are subsequently probed with fluorescently labeled antibodies against a viral antigen, and fluorescence microscopy is used to count and quantify the number of foci. The FFA method typically yields results in less time than plaque assays or fifty-percent-tissue-culture-infective-dose (TCID50) assays (see below), but it can be more expensive in terms of required reagents and equipment. Assay completion time is also dependent on the size of the area that the user is counting. A larger area will require more time but can provide a more accurate representation of the sample. Results of the FFA are expressed as focus forming units per milliliter, or FFU/mL. TCID50 endpoint dilution assay The TCID50 (50% tissue culture infectious dose) assay is a measure of infectious virus titer. This endpoint dilution assay quantifies the amount of virus required to kill 50% of infected hosts or to produce a cytopathic effect in 50% of inoculated tissue culture cells. This assay may be more common in clinical research applications where the lethal dose of virus must be determined or if the virus does not form plaques. When used in the context of tissue culture, host cells are plated and serial dilutions of the virus are added. After incubation, the percentage of cell death (i.e. infected cells) is manually observed and recorded for each virus dilution, and the results are used to mathematically calculate a TCID50 result. Due to distinct differences in assay methods and principles, TCID50 and PFU/mL or other infectivity assay results are not equivalent. This method can take up to a week due to cell infectivity time. Two methods commonly used to calculate TCID50 (which can also be used to calculate other types of 50% endpoint such as EC50, IC50, and LD50) are the Spearman–Kärber method and the Reed–Muench method. The theoretical relationship between TCID50 and PFU is approximately 0.69 PFU = 1 TCID50, based on the Poisson distribution, a probability distribution which describes how many random events (virus particles) occurring at a known average rate (virus titer) are likely to occur in a fixed space (the amount of virus medium in a well). However, it must be emphasized that in practice this relationship may not hold even for the same virus and cell combination, as the two types of assay are set up differently and virus infectivity is very sensitive to various factors such as cell age, overlay media, etc. The following reference defines the relationship differently. From ATCC: "Assuming that the same cell system is used, that the virus forms plaques on those cells, and that no procedures are added which would inhibit plaque formation, 1 mL of virus stock would be expected to have about half of the number of plaque forming units (PFUs) as TCID50. This is only an estimate but is based on the rationale that the limiting dilution which would infect 50% of the cell layers challenged would often be expected to initially produce a single plaque in the cell layers which become infected. 
In some instances, two or more plaques might by chance form, and thus the actual number of PFUs should be determined experimentally. "Mathematically, the expected PFUs would be somewhat greater than one-half the TCID50, since the negative tubes in the TCID50 represent zero plaque forming units and the positive tubes each represent one or more plaque forming units. A more precise estimate is obtained by applying the Poisson distribution. Where P(0) is the proportion of negative tubes and m is the mean number of infectious units per volume (PFU/mL), P(0) = e^(−m). For any titer expressed as a TCID50, P(0) = 0.5. Thus e^(−m) = 0.5 and m = −ln(0.5), which is ~0.7. "Therefore, one could multiply the TCID50 titer (per mL) by 0.7 to predict the mean number of PFU/mL. When actually applying such calculations, remember the calculated mean will only be valid if the changes in protocol required to visualize plaques do not alter the expression of infectious virus as compared with expression under conditions employed for TCID50. "Thus as a working estimate, one can assume material with a TCID50 of 1 × 10^5 TCID50/mL will produce 0.7 × 10^5 PFUs/mL." Protein and antibody-based assays There are several variations of protein- and antibody-based virus quantification assays. In general, these methods quantify either the amount of all protein or the amount of a specific virus protein in the sample rather than the number of infected cells or virus particles. Quantification commonly relies on colorimetric or fluorescence detection. Some assay variations quantify proteins directly in a sample, while other variations require host cell infection and incubation to allow virus growth prior to quantification. The variation used depends primarily on the amount of protein (i.e. viral protein) in the initial sample and the sensitivity of the assay itself. If incubation and virus growth are required, cell and/or virus lysis/digestion are often conducted prior to analysis. Most protein-based methods are relatively fast and sensitive but require quality standards for accurate calibration and quantify protein, not actual virus particle concentrations. Below are specific examples of widely used protein-based assays. Hemagglutination assay The hemagglutination assay (HA) is a common non-fluorescence protein quantification assay specific for influenza. It relies on the fact that hemagglutinin, a surface protein of influenza viruses, agglutinates red blood cells (i.e. causes red blood cells to clump together). In this assay, dilutions of an influenza sample are incubated with a 1% erythrocyte solution for one hour and the virus dilution at which agglutination first occurs is visually determined. The assay produces a result of hemagglutination units (HAU), with typical PFU to HAU ratios in the 10^6 range. This assay takes ~1–2 hours to complete. The hemagglutination inhibition assay is a common variation of the HA assay used to measure flu-specific antibody levels in blood serum. In this variation, serum antibodies to the influenza virus will interfere with the virus attachment to red blood cells. Therefore, hemagglutination is inhibited when antibodies are present at a sufficient concentration. Bicinchoninic acid (BCA) assay The bicinchoninic acid assay (BCA; a.k.a. Smith assay) is based on a simple colorimetric measurement and is a commonly used protein quantification assay. BCA is similar to the Lowry or Bradford protein assays. 
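To make the endpoint-dilution arithmetic above concrete, the following is a minimal sketch of a Spearman–Kärber TCID50 calculation followed by the Poisson-based ~0.7 conversion to PFU/mL quoted from ATCC. The dilution series, well counts, inoculum volume, and function names are hypothetical illustrations, not a validated titration protocol.

```python
import math

def spearman_karber_log10_endpoint(log10_dilutions, positive_wells, wells_per_dilution):
    """Spearman-Karber estimate of the log10 50% endpoint dilution.

    Assumes dilutions are listed from least to most dilute with a constant log10 step,
    that all wells are positive at the first dilution and none at the last.
    """
    p = [n / wells_per_dilution for n in positive_wells]      # proportion of positive wells
    d = abs(log10_dilutions[1] - log10_dilutions[0])          # log10 spacing between dilutions
    return log10_dilutions[0] + d / 2 - d * sum(p)

def tcid50_to_pfu_per_ml(tcid50_per_ml):
    """Working estimate quoted above: PFU/mL = -ln(0.5) * TCID50/mL (about 0.7 * TCID50/mL)."""
    return -math.log(0.5) * tcid50_per_ml

# Hypothetical ten-fold series (10^-1 to 10^-8), 8 wells per dilution, 0.1 mL inoculum per well.
dilutions = [-1, -2, -3, -4, -5, -6, -7, -8]
positives = [8, 8, 8, 6, 3, 1, 0, 0]
log_endpoint = spearman_karber_log10_endpoint(dilutions, positives, wells_per_dilution=8)
tcid50_per_ml = 10 ** (-log_endpoint) / 0.1                   # endpoint dilution -> titer per mL
print(f"TCID50/mL ~ {tcid50_per_ml:.2e}; estimated PFU/mL ~ {tcid50_to_pfu_per_ml(tcid50_per_ml):.2e}")
```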
The BCA assay reagent was first developed and made commercially by Pierce Chemical Company (now owned by Thermo Fisher Scientific), which held the patent until 2006. In the BCA assay, a protein's peptide bonds quantitatively reduce Cu2+ to Cu1+, which produces a light blue color. BCA chelates Cu1+ at a 2:1 ratio, resulting in a more intensely colored species that absorbs at 562 nm. Absorbance of a sample at 562 nm is used to determine the bulk protein concentration in the sample. Assay results are compared with known standard curves after analysis with a spectrophotometer or plate reader. Total assay time is 30 minutes to one hour. While this assay is ubiquitous and fast, it lacks specificity for viral proteins since it counts all protein in the sample. Thus the virus preparation to be quantified must contain very low levels of host cell proteins. Enzyme-linked immunosorbent assay (ELISA) Enzyme-linked immunosorbent assay (ELISA) is an antibody-based assay that utilizes an antigen-specific antibody chemically linked to an enzyme (or bound to a second antibody linked to an enzyme) to detect the presence of an unknown amount of the antigen (e.g., viral protein) in a sample. The antibody-antigen binding event is detected and/or quantified through the enzyme's ability to convert a substrate reagent into a detectable signal that can then be used to calculate the concentration of the target antigen in the sample. Horseradish peroxidase (HRP) is a common enzyme utilized in ELISA schemes due to its ability to amplify signal and increase assay sensitivity. There are many variations, or types, of ELISA assays, but they can generally be classified as indirect, competitive, sandwich, or reverse. Single radial immunodiffusion (SRID) assay Single radial immunodiffusion assay (SRID), also known as the Mancini method, is a protein assay that detects the amount of specific viral antigen by immunodiffusion in a semi-solid medium (e.g. agar). The medium contains antiserum specific to the antigen of interest and the antigen is placed in the center of the disc. As the antigen diffuses into the medium it creates a precipitate ring that grows until equilibrium is reached. Assay time can range from 10 hours to days depending on the equilibration time of the antigen and antibody. The zone diameter from the ring is linearly related to the log of protein concentration and is compared to zone diameters for known protein standards for quantification. DNA and RNA assays Quantitative polymerase chain reaction (qPCR) Quantitative PCR utilizes polymerase chain reaction chemistry to amplify viral DNA or RNA to produce high enough concentrations for detection and quantification by fluorescence. In general, quantification by qPCR relies on serial dilutions of standards of known concentration being analyzed in parallel with the unknown samples for calibration and reference. Quantitative detection can be achieved using a wide variety of fluorescence detection strategies, including sequence-specific probes or non-specific fluorescent dyes such as SYBR Green. Sequence-specific probes, such as TaqMan, Molecular Beacons, or Scorpions, bind only to the DNA of the appropriate sequence produced during the reaction. SYBR Green dye binds to all double-stranded DNA produced during the reaction. While SYBR Green is easy to use, its lack of specificity and lower sensitivity lead most labs to use probe-based qPCR detection schemes. 
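As a worked illustration of the standard-curve calibration just described, the sketch below fits Ct against log10 copies for a hypothetical standard dilution series and back-calculates an unknown sample. The copy numbers, Ct values, template volume, and function names are invented for illustration; a real assay would also check amplification efficiency and replicate variance.

```python
import numpy as np

def fit_standard_curve(copies_per_reaction, ct_values):
    """Fit Ct = slope * log10(copies) + intercept for a qPCR standard series."""
    slope, intercept = np.polyfit(np.log10(copies_per_reaction), ct_values, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0        # ~1.0 (100%) for an ideal reaction
    return slope, intercept, efficiency

def copies_per_ml(ct, slope, intercept, template_volume_ul):
    """Back-calculate genome copies per mL of extracted sample from a measured Ct."""
    copies_in_reaction = 10 ** ((ct - intercept) / slope)
    return copies_in_reaction / template_volume_ul * 1000.0

# Hypothetical ten-fold standards (copies/reaction) and their measured Ct values.
standards = np.array([1e7, 1e6, 1e5, 1e4, 1e3])
cts = np.array([14.1, 17.5, 20.9, 24.3, 27.8])
slope, intercept, eff = fit_standard_curve(standards, cts)
print(f"slope = {slope:.2f}, efficiency = {eff:.0%}")
print(f"unknown (Ct 25.0, 5 uL template): {copies_per_ml(25.0, slope, intercept, 5.0):.2e} copies/mL")
```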
There are many variations of qPCR, including the comparative threshold method, which allows relative quantification through comparison of Ct values (the PCR cycle at which a statistically significant increase in product is detected) from multiple samples that include an internal standard. PCR amplifies all target nucleic acid, including that originating from intact infectious viral particles, from defective viral particles, and from free nucleic acid in solution. Because of this, qPCR results (expressed in terms of genome copies/mL) are likely to be higher in quantity than TEM results. For viral quantification, the ratio of whole virions to copies of nucleic acid is seldom one to one. This is because during viral replication the nucleic acid and viral proteins are not always produced in a 1:1 ratio, and the viral assembly process results in complete virions as well as empty capsids and/or excess free viral genomes. In the example of foot-and-mouth disease virus, the ratio of whole virions to RNA copies within an actively replicating host cell is approximately 1:1000. Advantages of titration by qPCR include quick turnaround time (1–4 hours) and sensitivity (it can detect a much lower concentration of viruses than other methods). Particle assays Tunable resistive pulse sensing (TRPS) Tunable resistive pulse sensing (TRPS) is a method that allows high-throughput single-particle measurements of individual virus particles as they are driven through a size-tunable nanopore, one at a time. The technique has the advantage of simultaneously determining the size and concentration of virus particles in solution with high resolution. This can be used in assessing sample stability and the contribution of aggregates, as well as total viral particle concentration (vp/mL). TRPS-based measurement occurs in an ionic buffer, and no pre-staining of samples is required prior to analysis, thus the technique is more rapid than those which require pre-treatment with fluorescent dyes, with a total preparation and measurement time of less than 10 minutes per sample. TRPS-based virus analysis is commercially available through qViro-X systems, which have the ability to be decontaminated chemically by autoclaving after measurement has occurred. Single virus inductively coupled plasma mass spectroscopy (SV ICP-MS) This technique is similar to single particle inductively coupled plasma mass spectroscopy (SP ICP-MS), introduced by Degueldre and Favarger (2003) and adapted later for other nanoparticles (e.g. gold colloids, see Degueldre et al. (2006)). SP ICP-MS was adapted for the analysis of single viruses (SV ICP-MS) in a comprehensive study by Degueldre (2021). This study suggests adapting the method for single virus (SV) identification and counting. With high-resolution multi-channel sector field (MC SF) ICP-MS records in SV detection mode, the counting of master and key ions can allow analysis and identification of single viruses. The counting of 2–500 viral units can be performed in 20 s. Analyses are proposed to be carried out in an Ar torch for the master ions 12C+, 13C+, 14N+, and 15N+, and the key ions 31P+, 32S+, 33S+ and 34S+. All interferences are discussed in detail. The use of high-resolution MC ICP-MS is recommended, while options with anaerobic/aerobic atmospheres are explored to upgrade the analysis when using quadrupole ICP-MS. 
Application to two virus types (SARS-CoV-2 and bacteriophage T5) is investigated using time scan and fixed mass analysis for the selected virus ions, allowing characterisation of the species using the N/C, P/C and S/C molar ratios and quantification of their number concentration. Other assays Flow cytometry While most flow cytometers do not have sufficient sensitivity, there are a few commercially available flow cytometers that can be used for virus quantification. A virus counter quantifies the number of intact virus particles in a sample using fluorescence to detect colocalized proteins and nucleic acids. Samples are stained with two dyes, one specific for proteins and one specific for nucleic acids, and analyzed as they flow through a laser beam. The quantity of particles producing simultaneous events on each of the two distinct fluorescence channels is determined, along with the measured sample flow rate, to calculate a concentration of virus particles (vp/mL). The results are generally similar in absolute quantity to a TEM result. The assay has a linear working range of 10^5–10^9 vp/mL and an analysis time of ~10 min with a short sample preparation time. Transmission electron microscopy (TEM) TEM is a specialized type of microscopy that utilizes a beam of electrons focused with a magnetic field to image a sample. TEM provides imaging with 1000× greater spatial resolution than a light microscope (resolution down to 0.2 nm). An ultrathin, negatively stained sample is required. Sample preparations involve depositing specimens onto a coated TEM grid and negative staining with an electron-opaque liquid. Tissue-embedded samples can also be examined if thinly sectioned. Sample preparations vary depending on protocol and user but generally require hours to complete. TEM images can show individual virus particles, and quantitative image analysis can be used to determine virus concentrations. These high-resolution images also provide particle morphology information that most other methods cannot. Quantitative TEM results will often be greater than results from other assays, as all particles, regardless of infectivity, are quantified in the reported virus-like particles per mL (vlp/mL) result. Quantitative TEM generally works well for virus concentrations greater than 10^6 particles/mL. Because of high instrument cost and the amount of space and support facilities needed, TEM equipment is only available in a few laboratories. See also List of subviral agents Minimal infective dose Viral load References Laboratory techniques Immunology Virology
Virus quantification
[ "Chemistry", "Biology" ]
4,262
[ "Immunology", "nan" ]
29,284,093
https://en.wikipedia.org/wiki/Assassin%27s%20Creed%3A%20The%20Fall
Assassin's Creed: The Fall is an American comic book three-issue mini-series published by WildStorm. Set in the Assassin's Creed universe, it tells the story of Nikolai Orelov, a member of the Russian Brotherhood of Assassins, who battles Templar influence in Russia in the late 19th and early 20th centuries. The miniseries also features a framing story, taking place from 1998 to 2000, which follows Nikolai's descendant Daniel Cross as he explores his ancestor's genetic memories while trying to learn more about his own past and the history of the Assassins. Written and illustrated by Cameron Stewart and Karl Kerschl, the series was initially going to be an expansion of the travels of Ezio Auditore da Firenze, but was moved to an entirely new setting to provide greater freedom to the writers. However, the story still follows the millennia-old conflict between the Assassins and the Templars, which is central to the Assassin's Creed franchise. It also incorporates several events from Russian history like the Borki train disaster, the Tunguska explosion, and the Russian Revolution. The first issue of the comic was released on November 10, 2010, a few days before the retail debut of Assassin's Creed: Brotherhood. It was followed by the second issue on December 1, 2010, and the third on January 12, 2011. Stewart and Kerschl later worked on a graphic novel sequel to the comic, titled Assassin's Creed: The Chain, which was published by UbiWorkshop in August 2012. A video game set in-between the events of The Fall and The Chain, Assassin's Creed Chronicles: Russia, was released in February 2016. Plot synopsis The comic intercuts between events in Nikolai Orelov's life from 1888 to 1917, and his great-grandson Daniel Cross, who struggles with Nikolai's memories that he can inexplicably relive. Nikolai In 1888, Nikolai Orelov is having reservations about the Assassin life as he recalls the death of his friend and fellow Assassin Aleksandr Ulyanov, who was executed one year prior after a failed attempt to kill Tsar Alexander III. He voices his doubts to his pregnant wife Anna, remarking that his father chose this life for him and he never had a say in the matter, but Anna encourages him to carry on for the sake of their unborn child. On October 29, the Brotherhood sends Nikolai to assassinate the Tsar as the latter is traveling from Crimea to Saint Petersburg on the imperial train. Nikolai boards the train and is shocked to find the entire Royal family aboard, having been told the Tsar was traveling alone. He is then ambushed by Alexander and, in the ensuing fight, the train is derailed, injuring both Nikolai and the Tsar. After getting his family to safety, Alexander retrieves the Imperial Sceptre and defeats Nikolai with it, although he spares the Assassin's life so that his children would not have to witness a violent murder. In 1908, Nikolai, now bitter and more brutal in his methods after having lost his child, tortures a captured Templar for the whereabouts of the Imperial Sceptre, which the Brotherhood has discovered to be a powerful Piece of Eden. After the Templar reveals that the Sceptre is kept at a research facility in Siberia, the Brotherhood devises a plan to retrieve it and destroy the facility with the help of their ally, Nikola Tesla. On June 30, Nikolai and two other Assassins storm the facility to retrieve the Sceptre, which is being experimented on by the Templars, while Tesla prepares to release a powerful burst of electricity from Wardenclyffe Tower to destroy the facility. 
However, the Assassins are unable to get to the Sceptre in time and the artifact is hit by the electricity, destroying it and causing a massive explosion that leaves Nikolai as the sole survivor. In 1917, during the Russian Revolution, Nikolai's friend Vladimir Lenin writes to him, asking him to assassinate Tsar Nicholas II, the last symbol of imperialism in Russia. Nikolai infiltrates Nicholas' residence, but rather than kill the Tsar, he demands that he hand over the Imperial Sceptre in his possession. Upon seeing the Sceptre, Nikolai quickly deduces it to be a replica and destroys it. Nicholas, remembering Nikolai from their brief encounter on the imperial train three decades ago, asks him to spare his family, but Nikolai reveals he has no intention to kill the Tsar or his family and prepares to leave. Before he does, Nicholas informs him that the late Grigori Rasputin wore a necklace which was seemingly made of the same material as the real Sceptre. Deciding to investigate this lead, Nikolai digs up Rasputin's body and retrieves his necklace, discovering it to be a shard of the destroyed Sceptre. He then returns to Anna and their young daughter Nadya as the family prepares to start a new life abroad, far from the Revolution and the Assassins. Daniel In 1998, petty criminal Daniel Cross, struggling with his addiction to drugs and alcohol and violent outbursts, is forced to attend court-ordered therapy sessions and take prescription medication for his hallucinations. After he stops taking his medication, his psychiatrist calls him out for it during one of their sessions, but Daniel chooses to ignore him and heads to a local bar. When one of Nikolai's memories kicks in, Daniel becomes deranged and attacks a man who tried to help him, mistaking him for a Templar, but is stopped by the Assassin Hannah Mueller. Mistaking Daniel for an Assassin, Hannah reprimands him for breaking the Creed and takes him away to her camp near Philadelphia, Pennsylvania. At the Assassin camp, Daniel meets its director, Paul Bellamy, who reveals there are no records of Daniel being a member of the Brotherhood despite him having the Assassins' logo tattooed on his arm. At that moment, another one of Nikolai's memories kicks in, causing Daniel to speak Russian and bring up Tunguska. The Assassins demand an explanation, but an annoyed and confused Daniel starts a fight, causing him to be subdued and returned to his cabin. Later that night, Daniel, unable to keep his hallucinations under control, convinces Hannah to help him leave the camp so that he can go to his apartment and retrieve his medication. Upon finding that Daniel has thrown away all his medication, Hannah comforts him and tells him that the Assassins might help him unlock whatever is locked away in his mind, just as Bellamy arrives, having discovered Daniel is Nikolai's descendant. As Daniel begins involuntarily reliving Nikolai's mission in Tunguska, he attacks Bellamy and attempts to flee but is eventually cornered by Hannah and Bellamy. Daniel collapses, claiming that he has finally learned his purpose: to find the Assassins' Mentor. Over the following two years, Daniel, now an official member of the Brotherhood, travels across the globe to search for clues to the Mentor's whereabouts, finally managing to secure a meeting with him in November 2000. Daniel is sedated by two Assassins in his room and awakens in the Mentor's office in Dubai. 
After the Mentor reveals that he has been observing Daniel's progress and tells him about his own role in the Brotherhood, he presents Daniel with a Hidden Blade. This triggers something in Daniel's brain and he impulsively stabs the Mentor with the blade, killing him. Confused and horrified by his own actions, Daniel escapes by leaping out of the office window into the water below. It is then revealed that Daniel is, in fact, a sleeper agent who was brainwashed at age 7 to kill the Mentor when the opportunity presents itself, as well as Subject 4, a former test subject for Abstergo Industries' Animus, where he was forced to relive Nikolai's memories to benefit the Templars, giving him his hallucinations. As the Assassins discover the aftermath of the Mentor's assassination and Hannah is informed of Daniel's betrayal, the Templars take advantage of the Brotherhood's weakened state to launch a worldwide purge against the Assassins, crippling them. Meanwhile, Daniel returns to Abstergo, the only place he feels would welcome him, and demands to be put back in the Animus to freely explore Nikolai's memories. Collected editions The comic has been collected into a trade paperback: Assassin's Creed: The Fall (128 pages; Panini Comics, Italian language edition, January 2011; Titan Books, November 2013) The Fall Deluxe Edition was a softcover special edition that brought together all three issues of The Fall, plus an exclusive 10-page epilogue, which would also act as a transition towards the next comic saga, Assassin's Creed: The Chain. This edition had a total of 128 pages, including the exclusive epilogue and a making-of section. Both The Fall and The Chain were later collected in Assassin's Creed: Subject Four, a 208-page trade paperback that was included in Assassin's Creed 3: The Ubiworkshop Edition, along with Assassin's Creed: Encyclopedia. Notes References 2010 comics debuts WildStorm limited series Comics based on Assassin's Creed Frame stories Fiction set in 1888 Fiction set in 1908 Fiction set in 1917 Fiction set in 1998 Fiction set in 1999 Fiction set in 2000 Comics set in the 1990s Comics set in the 2000s Tunguska event Works set in the Russian Empire Cultural depictions of Nikola Tesla Works about the Russian Revolution Cultural depictions of Vladimir Lenin Cultural depictions of Nicholas II of Russia Cultural depictions of Grigori Rasputin Comics set in Pennsylvania Works set in Philadelphia Dubai in fiction
Assassin's Creed: The Fall
[ "Physics" ]
1,964
[ "Unsolved problems in physics", "Tunguska event" ]
29,284,770
https://en.wikipedia.org/wiki/Thermal%20scanning%20probe%20lithography
Thermal scanning probe lithography (t-SPL) is a form of scanning probe lithography (SPL) whereby material is structured on the nanoscale using scanning probes, primarily through the application of thermal energy. Related fields are thermo-mechanical SPL (see also Millipede memory), thermochemical SPL (or thermochemical nanolithography), where the goal is to influence the local chemistry, and thermal dip-pen lithography as an additive technique. History Scientists working with Daniel Rugar and John Mamin at the IBM research laboratories in Almaden pioneered the use of heated AFM (atomic force microscope) probes for the modification of surfaces. In 1992, they used microsecond laser pulses to heat AFM tips to write indents as small as 150 nm into the polymer PMMA at rates of 100 kHz. In the following years, they developed cantilevers with resonance frequencies above 4 MHz and integrated resistive heaters and piezoresistive sensors for writing and reading of data. This thermo-mechanical data storage concept formed the basis of the Millipede project, which was initiated by Peter Vettiger and Gerd Binnig at the IBM Research laboratories in Zurich in 1995. It was an example of a memory storage device with a large array of parallel probes, which was, however, never commercialized due to growing competition from non-volatile memory such as flash memory. The storage medium of the Millipede memory consisted of polymers with shape-memory functionality, such as cross-linked polystyrene, which allowed data indents to be written by plastic deformation and erased again by heating. However, for nanolithography applications, evaporation rather than plastic deformation was necessary in order to create arbitrary patterns in the resist. Such local evaporation of resist induced by a heated tip could be achieved for several materials, such as pentaerythritol tetranitrate, cross-linked polycarbonates, and Diels-Alder polymers. Significant progress in the choice of resist material was made in 2010 at IBM Research in Zurich, leading to high-resolution and precise 3D-relief patterning with the use of the self-amplified depolymerization polymer polyphthalaldehyde (PPA) and molecular glasses as resists, where the polymer decomposes into volatile monomers upon heating with the tip, without the application of mechanical force and without pile-up or residues of the resist. Working principle The thermal cantilevers are fabricated from silicon wafers using bulk- and surface-micromachining processes. Probes have tip radii of curvature below 5 nm, enabling sub-10 nm resolution in the resist. The resistive heating is carried out by integrated micro-heaters in the cantilever legs, which are created by different levels of doping. The time constant of the heaters lies between 5 μs and 100 μs. Electromigration limits the long-term sustainable heater temperature to 700–800 °C. The integrated heaters enable in-situ metrology of the written patterns, allowing feedback control, field stitching without the use of alignment markers, and the use of pre-patterned structures as a reference for sub-5 nm overlay. Pattern transfer for semiconductor device fabrication, including reactive ion etching and metal lift-off, has been demonstrated with sub-20 nm resolution. 
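As a rough, back-of-the-envelope illustration of how the heater time constants quoted above limit write speed, the sketch below converts a heater time constant into a thermally limited pixel rate and an areal throughput, assuming one heating pulse per written pixel and a 20 nm pixel pitch; these assumptions and the function name are illustrative only and ignore imaging and stitching overhead.

```python
def tspl_throughput_estimate(heater_time_constant_s, pixel_pitch_nm):
    """Thermally limited throughput estimate for t-SPL under simple raster-scan assumptions."""
    pixel_rate_hz = 1.0 / heater_time_constant_s            # at most one heating pulse per pixel
    pixel_area_um2 = (pixel_pitch_nm / 1000.0) ** 2         # area written per pulse
    areal_rate_um2_per_h = pixel_rate_hz * pixel_area_um2 * 3600.0
    return pixel_rate_hz, areal_rate_um2_per_h

# Heater time constants of 5-100 us as stated in the text; the 20 nm pixel pitch is an assumption.
for tau in (5e-6, 100e-6):
    rate, areal = tspl_throughput_estimate(tau, 20.0)
    print(f"tau = {tau * 1e6:.0f} us -> pixel rate ~ {rate:,.0f} Hz, throughput ~ {areal:.1e} um^2/h")
```

Under these assumptions the estimate spans roughly 1 × 10^4 to 3 × 10^5 μm² per hour, of the same order as the throughput figures cited in the comparison section below.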
Comparison to other lithographic techniques Due to the ablative nature of the patterning process, no development step (that is, selective removal of either the exposed or non-exposed regions of the resist, as for e-beam and optical lithography) is needed, nor are optical proximity corrections. Maximum linear writing speeds of up to 20 mm/s have been shown, with throughputs in the 10^4–10^5 μm^2 h^-1 range, which is comparable to single-column, Gaussian-shaped e-beam lithography using HSQ as resist. The resolution of t-SPL is determined by the probe tip shape and is not limited by the diffraction limit or by the focal spot size of beam-based approaches; however, tip-sample interactions during the in-situ metrology process cause tip wear, limiting the lifetime of the probes. In order to extend the lifetime of the probe tips, ultrananocrystalline diamond (UNCD)- and silicon carbide (SiC)-coated tips, as well as wear-less floating-contact imaging methods, have been demonstrated. No electron damage or charging is caused to the patterned surfaces due to the absence of electron or ion beams. References See also Nanolithography Scanning probe lithography Thermochemical nanolithography Dip-pen nanolithography Atomic force microscopy Scanning probe microscopy Electron-beam lithography Lithography (microfabrication) Nanotechnology
Thermal scanning probe lithography
[ "Materials_science", "Engineering" ]
985
[ "Nanotechnology", "Materials science", "Microtechnology", "Lithography (microfabrication)" ]
29,287,934
https://en.wikipedia.org/wiki/Hydrophile
A hydrophile is a molecule or other molecular entity that is attracted to water molecules and tends to be dissolved by water. In contrast, hydrophobes are not attracted to water and may seem to be repelled by it. Hygroscopics are attracted to water, but are not dissolved by water. Molecules A hydrophilic molecule or portion of a molecule is one whose interactions with water and other polar substances are more thermodynamically favorable than their interactions with oil or other hydrophobic solvents. They are typically charge-polarized and capable of hydrogen bonding. This makes these molecules soluble not only in water but also in other polar solvents. Hydrophilic molecules (and portions of molecules) can be contrasted with hydrophobic molecules (and portions of molecules). In some cases, both hydrophilic and hydrophobic properties occur in a single molecule. An example of these amphiphilic molecules is the lipids that comprise the cell membrane. Another example is soap, which has a hydrophilic head and a hydrophobic tail, allowing it to dissolve in both water and oil. Hydrophilic and hydrophobic molecules are also known as polar molecules and nonpolar molecules, respectively. Some hydrophilic substances do not dissolve. This type of mixture is called a colloid. An approximate rule of thumb for the hydrophilicity of organic compounds is that the solubility of a molecule in water is more than 1 mass % if there is at least one neutral hydrophile group per 5 carbons, or at least one electrically charged hydrophile group per 7 carbons. Hydrophilic substances (e.g., salts) can seem to attract water out of the air. Sugar is also hydrophilic, and like salt is sometimes used to draw water out of foods. Sugar sprinkled on cut fruit will "draw out the water" through hydrophilia, making the fruit mushy and wet, as in a common strawberry compote recipe. Chemicals Liquid hydrophilic chemicals complexed with solid chemicals can be used to optimize the solubility of hydrophobic chemicals. Liquid chemicals Examples of hydrophilic liquids include ammonia, alcohols, some amides such as urea, and some carboxylic acids such as acetic acid. Alcohols Hydroxyl groups (-OH), found in alcohols, are polar and therefore hydrophilic (water-liking), but their carbon-chain portion is non-polar, which makes it hydrophobic. The molecule becomes increasingly nonpolar overall, and therefore less soluble in polar water, as the carbon chain becomes longer. Methanol has the shortest carbon chain of all alcohols (one carbon atom), followed by ethanol (two carbon atoms) and 1-propanol along with its isomer 2-propanol, all of which are miscible with water. tert-Butyl alcohol, with four carbon atoms, is the only one among its isomers to be miscible with water. Solid chemicals Cyclodextrins Cyclodextrins are used to make pharmaceutical solutions by capturing hydrophobic molecules as guests in host-guest inclusion complexes. Because inclusion compounds of cyclodextrins with hydrophobic molecules are able to penetrate body tissues, these can be used to release biologically active compounds under specific conditions. For example, when testosterone was complexed with hydroxy-propyl-beta-cyclodextrin (HPBCD), 95% absorption of testosterone was achieved in 20 minutes via the sublingual route while the HPBCD itself was not absorbed, whereas uncomplexed hydrophobic testosterone is usually absorbed at less than 40% via the sublingual route. Membrane filtration Hydrophilic membrane filtration is used in several industries to filter various liquids. 
These hydrophilic filters are used in the medical, industrial, and biochemical fields to filter elements such as bacteria, viruses, proteins, particulates, drugs, and other contaminants. Common hydrophilic molecules include colloids, cotton, and cellulose (which cotton consists of). Unlike other membranes, hydrophilic membranes do not require pre-wetting: they can filter liquids in their dry state. Although most are used in low-heat filtration processes, many new hydrophilic membrane fabrics are used to filter hot liquids and fluids. See also Hydrophilic-lipophilic balance Hydrophobicity scales Superhydrophilicity Ultrahydrophobicity Wetting Hygroscopic References Chemical properties Intermolecular forces Surface science Articles containing video clips
Hydrophile
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
926
[ "Molecular physics", "Materials science", "Surface science", "Intermolecular forces", "Condensed matter physics", "nan" ]
29,289,141
https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%20nucleoside%20and%20nucleotide%20reverse-transcriptase%20inhibitors
Discovery and development of nucleoside and nucleotide reverse-transcriptase inhibitors (NRTIs and NtRTIs) began in the 1980s when the AIDS epidemic hit Western societies. NRTIs inhibit the reverse transcriptase (RT), an enzyme that controls the replication of the genetic material of the human immunodeficiency virus (HIV). The first NRTI was zidovudine, approved by the U.S. Food and Drug Administration (FDA) in 1987, which was the first step towards treatment of HIV. Six NRTI agents and one NtRTI have followed. The NRTIs and the NtRTI are analogues of endogenous 2´-deoxy-nucleosides and nucleotides. Drug-resistant viruses are an inevitable consequence of prolonged exposure of HIV-1 to anti-HIV drugs. History In the summer of 1981 the acquired immunodeficiency syndrome (AIDS) was first reported. Two years later the etiological agent of AIDS, the human immunodeficiency virus (HIV), was identified. Since the identification of HIV, the development of effective antiretroviral drugs and the scientific achievements in HIV research have been enormous. Antiretroviral drugs for the treatment of HIV infections belong to six categories: nucleoside and nucleotide reverse-transcriptase inhibitors, non-nucleoside reverse-transcriptase inhibitors, protease inhibitors, entry inhibitors, co-receptor inhibitors, and integrase inhibitors. The reverse transcriptase of HIV-1 has been the main foundation for the development of anti-HIV drugs. The first nucleoside reverse-transcriptase inhibitor with in vitro anti-HIV activity was zidovudine. Since zidovudine was approved in 1987, six nucleoside and one nucleotide reverse-transcriptase inhibitors have been approved by the FDA. The NRTIs approved by the FDA are zidovudine, didanosine, zalcitabine, stavudine, lamivudine, abacavir, and emtricitabine, and the only nucleotide reverse-transcriptase inhibitor (NtRTI) approved is tenofovir (see table 4). The HIV-1 reverse transcriptase enzyme Function Most standard HIV drug therapies revolve around inhibiting the reverse transcriptase enzyme (RT), an enzyme that is necessary for HIV-1 and other retroviruses to complete their life cycle. The RT enzyme serves two key functions. First, it controls the replication of the virus's genetic material via its polymerase activity. It converts the viral single-stranded RNA into integration-competent double-stranded DNA. Subsequently, the generated DNA is translocated into the nucleus of the host cell, where it is integrated into the host genome by the retroviral integrase. The other role of the RT is its ribonuclease H activity, which degrades RNA only when it is in a heteroduplex with DNA. Structure HIV-1 RT is an asymmetric heterodimer, about 1,000 amino acids long, composed of two subunits. The larger subunit, p66, is 560 amino acids long and exhibits all the enzymatic activities of the RT. The smaller subunit, called p51, is 440 amino acids long; it is considered to stabilize the heterodimer and may also take part in the binding of the tRNA primer. The p66 subunit has the two active sites: polymerase and ribonuclease H. The polymerase has four subdomains that have been named "fingers", "thumb", "connection", and "palm", because the domain has been compared to a right hand. Mechanism of action Activation of nucleoside and nucleotide reverse-transcriptase inhibitors is primarily dependent on cellular entry by passive diffusion or carrier-mediated transport. NRTIs are highly hydrophilic and have limited membrane permeability, and therefore this step is very important. 
NRTIs are analogues of endogenous 2´-deoxy-nucleosides and nucleotides. They are inactive in their parent forms and require successive phosphorylation. Nucleosides must be triphosphorylated, while nucleotides, which possess one phosphonate group, must be diphosphorylated. This stepwise activation process occurs inside the cell and is mediated by a coordinated series of enzymes. The first, and often rate-limiting, phosphorylation step (for nucleoside analogues) is most commonly catalyzed by deoxynucleoside kinases. Addition of the second phosphate group to nucleoside monophosphate analogues is completed by the nucleoside monophosphate kinases (NMP kinases). A variety of enzymes are able to catalyze the final phosphorylation step for NRTIs, including nucleoside diphosphate kinase (NDP kinase), phosphoglycerate kinase, pyruvate kinase, and creatine kinase, resulting in formation of the respective antivirally active triphosphate analogues. In their respective triphosphate forms, NRTIs and the only NtRTI available compete with their corresponding endogenous deoxynucleotide triphosphates (dNTPs) for incorporation into the nascent DNA chain (see figure 1). Unlike the dNTP substrates, NRTIs lack a 3´-hydroxyl group on the deoxyribose moiety. Once an NRTI is incorporated into the DNA chain, the absence of a 3´-hydroxyl group, which normally forms the 5´- to 3´-phosphoester bond with the next nucleotide, blocks further extension of the DNA by RT, so NRTIs act as chain terminators. Discovery and development First step towards treatment of HIV – zidovudine In 1964 zidovudine (AZT) was synthesized by Horwitz at the Michigan Cancer Foundation. The 3´-hydroxyl group in the deoxyribose ring of thymidine is replaced by an azido group, which gives zidovudine. The lack of the 3´-hydroxyl group, which would otherwise provide the attachment point for the next nucleotide in the growing DNA chain during reverse transcription, makes it an obligate chain terminator. Zidovudine is incorporated in place of thymidine and is an extremely potent inhibitor of HIV replication. This compound had been prepared in 1964 as a potential anti-cancer agent but was shown to be ineffective. In 1974 zidovudine was reported to have activity against retroviruses and was subsequently re-screened as an antiviral when the AIDS epidemic hit Western societies during the mid-1980s. However, zidovudine is relatively toxic since it is converted into the triphosphate by cellular enzymes and is therefore also activated in uninfected cells. Further development of nucleoside analogues Dideoxynucleosides Dideoxynucleosides are analogues of nucleosides in which the sugar ring lacks both the 2´- and 3´-hydroxyl groups. Three years after the synthesis of zidovudine, Jerome Horwitz and his colleagues in Chicago prepared another dideoxynucleoside, now known as zalcitabine (ddC). Zalcitabine is a synthetic pyrimidine nucleoside analogue, structurally related to deoxycytidine, in which the 3´-hydroxyl group of the ribose sugar moiety is substituted with hydrogen. Zalcitabine was approved by the FDA for the treatment of HIV-1 in June 1992. 2´,3´-dideoxyinosine, or didanosine, is converted into dideoxyadenosine in vivo. Its development has a long history. In 1964 dideoxyadenosine, the corresponding adenosine analogue of zalcitabine, was synthesised. Dideoxyadenosine caused kidney damage, so didanosine was prepared from dideoxyadenosine by enzymatic oxidation (see table 1). It was found to be active against HIV without causing kidney damage. 
Didanosine was approved by the FDA for the treatment of HIV-1 in October 1991. Zalcitabine and didanosine are both obligate chain terminators that were developed for anti-HIV treatment. Unfortunately, both drugs lack selectivity and therefore cause side effects. Further modification of the dideoxy framework led to the development of 2´,3´-didehydro-3´-deoxythymidine (stavudine, d4T). The activity of stavudine was shown to be similar to that of zidovudine, although their phosphorylation patterns differ; the affinity of thymidine kinase (the enzyme responsible for the first phosphorylation) for zidovudine is similar to that for thymidine, whereas its affinity for stavudine is 700-fold weaker. 2',3'-dideoxy-3'-thiacytidine (lamivudine, 3TC) was discovered by Bernard Belleau. The history of lamivudine can be traced back to the mid-1970s, when Bernard Belleau was investigating sugar derivatives. Lamivudine was developed as the sulfur analogue of zalcitabine (see table 2). It was initially synthesized as a racemic mixture (BCH-189), and analysis showed that both the positive and negative enantiomers of BCH-189 (2',3'-dideoxy-3'-thiacytidine) had in vitro activity against HIV. Lamivudine is the negative enantiomer, a pyrimidine nucleoside analogue in which the 3' carbon of the ribose ring of 2'-deoxycytidine has been replaced by a sulfur atom; the negative enantiomer was chosen because it had greater anti-HIV activity and was less toxic than the positive enantiomer. Next in line was 2',3'-dideoxy-5-fluoro-3'-thiacytidine (emtricitabine, FTC), which is a structural homologue of lamivudine. The structural difference is the 5-fluoro modification of the base moiety of lamivudine. It is similar in many ways to lamivudine and is active against both HIV-1 and hepatitis B virus (HBV). Carbocyclic nucleoside Carbocyclic analogues of dideoxyadenosine were investigated for their anti-HIV activity. At first, only minimal activity was observed. Many nucleoside analogues were prepared and examined, but only one had significant activity and satisfied the requirements for clinical use. That was the 2´,3´-didehydro analogue of dideoxyadenosine. Insertion of a cyclopropyl group on the 6-amino nitrogen of the adenine ring increased lipophilicity and thus enhanced brain penetration. The resulting compound is known as abacavir (see table 3). Abacavir was approved by the FDA for use in therapy of HIV-1 infections in December 1998. This drug is the only approved antiretroviral that is active as a guanosine analogue in vivo. First it is monophosphorylated by adenosine phosphotransferase, and then the monophosphate is converted to carbovir 3´-monophosphate. Subsequently, it is fully phosphorylated, and the carbovir is incorporated by the RT into the DNA chain, where it acts as a chain terminator. Carbovir is a related guanosine analogue that had poor oral bioavailability and thus was withdrawn from clinical development. Acyclic nucleotide – the only approved NtRTI Nucleotide analogues require only two phosphorylation steps, whereas nucleoside analogues require three steps. Reduction in the phosphorylation requirement may allow more rapid and complete conversion of drugs to their active metabolites. Such considerations have led to the development of phosphonate nucleotide analogues such as tenofovir. Tenofovir disoproxil fumarate (tenofovir DF) is the prodrug of tenofovir. Tenofovir is an acyclic adenosine derivative. The acyclic nature of the compound and its phosphonate moiety are unique structural features among the approved NRTIs. 
Tenofovir DF is hydrolyzed enzymatically to tenofovir, which exhibits anti-HIV activity. Its development grew out of the synthesis and broad-spectrum antiviral activity of 2,3-dihydroxypropyladenine. Tenofovir DF was the first nucleotide reverse-transcriptase inhibitor approved by the FDA for the treatment of HIV-1 infection, in October 2001. Resistance Currently, the appearance of drug-resistant viruses is an inevitable consequence of prolonged exposure of HIV-1 to antiretroviral therapy. Drug resistance is a serious clinical concern in the treatment of viral infections, and it is a particularly difficult problem in the treatment of HIV. Resistance mutations are known for all approved NRTIs. Two main mechanisms are known to cause NRTI drug resistance: interference with the incorporation of NRTIs and excision of incorporated NRTIs. Interference with the incorporation of NRTIs involves a mutation in the p66 subunit of the RT. The mutation causes a steric hindrance that can exclude certain drugs, for example lamivudine, from being incorporated during reverse transcription. In the case of excision of incorporated NRTIs, the resistant enzymes readily accept the inhibitor as a substrate for incorporation into the DNA chain. Subsequently, the RT enzyme can remove the incorporated NRTI by reversing the polymerization step. The excision reaction requires a pyrophosphate donor, which RT joins to the NRTI at the 3´ primer terminus, excising it from the primer DNA. To achieve efficient inhibition of HIV-1 replication in patients, and to delay or prevent the appearance of drug-resistant viruses, drug combinations are used. HAART, also known as highly active antiretroviral therapy, consists of combinations of antiviral drugs which include NRTIs, the NtRTI, non-nucleoside reverse-transcriptase inhibitors, and protease inhibitors. Current status Currently, there are several NRTIs in various stages of clinical and preclinical development. The main reasons for continuing the search for new NRTIs against HIV-1 are to decrease toxicity, increase efficacy against resistant viruses, and simplify anti-HIV-1 treatment. Apricitabine (ATC) Apricitabine is a deoxycytidine analogue. It is structurally related to lamivudine, with the positions of the oxygen and the sulfur essentially reversed. Even though apricitabine is a little less potent in vitro compared to some other NRTIs, it maintains its activity against a broad spectrum of HIV-1 variants with NRTI resistance mutations. Apricitabine is in the final stage of clinical development for the treatment of NRTI-experienced patients. Elvucitabine (L-d4FC) Elvucitabine is a deoxycytidine analogue with activity against HIV resistant to several other nucleoside analogues, including zidovudine and lamivudine. This is partly because of the high intracellular levels of its triphosphate metabolite reached in cells. Clinical trials of elvucitabine are on hold because it has shown bone marrow suppression in some patients, with CD4+ cell numbers dropping as early as two days after initiation of dosing. Amdoxovir (DAPD) Amdoxovir is a guanosine analogue NRTI prodrug that has good bioavailability. It is deaminated intracellularly by adenosine deaminase to dioxolane guanine (DXG). DXG-triphosphate, the active form of the drug, has greater activity than DAPD-triphosphate. Amdoxovir is currently in phase II clinical trials. Racivir (RCV) Racivir is a racemic mixture of the two β-enantiomers of emtricitabine (FTC), (-)-FTC and (+)-FTC. 
Racivir has excellent oral bioavailability and has the advantage of needing to be taken only once a day. Because it is a mixture of two active enantiomers, racivir can be considered a combination of two NRTIs in itself, and it has shown promising antiviral activity when used in combination therapy. Racivir is currently in phase II clinical trials. There are several more NRTIs in development. Either the sponsors have filed an Investigational New Drug (IND) application, the application has been approved by the FDA, or the drugs are in different phases of clinical trials. Some of the NRTIs that are in development exhibit various attractive pharmacological properties that could make them desirable for the treatment of patients in need of new agents. See also Antiretroviral drug Discovery and development of CCR5 receptor antagonists Discovery and development of non-nucleoside reverse-transcriptase inhibitors Discovery and development of HIV protease inhibitors HIV/AIDS research Reverse-transcriptase inhibitor Protease inhibitor Entry inhibitor References Nucleoside analog reverse transcriptase inhibitors
Discovery and development of nucleoside and nucleotide reverse-transcriptase inhibitors
[ "Chemistry", "Biology" ]
3,667
[ "Life sciences industry", "Medicinal chemistry", "Drug discovery" ]
29,289,671
https://en.wikipedia.org/wiki/Melatonin%20receptor%20agonist
Melatonin receptor agonists are analogues of melatonin that bind to and activate the melatonin receptor. Agonists of the melatonin receptor have a number of therapeutic applications including treatment of sleep disorders and depression. The discovery and development of melatonin receptor agonists was motivated by the need for more potent analogues than melatonin, with better pharmacokinetics and longer half-lives. Melatonin receptor agonists were developed with the melatonin structure as a model. The melatonin receptors are G protein-coupled receptors and are expressed in various tissues of the body. There are two subtypes of the receptor in humans, melatonin receptor 1 (MT1) and melatonin receptor 2 (MT2). Melatonin and melatonin receptor agonists, on market or in clinical trials, all bind to and activate both receptor types. The binding of the agonists to the receptors has been investigated since 1986, yet is still not fully understood. When melatonin receptor agonists bind to and activate their receptors it causes numerous physiological processes. History In 1917 McCord and Allen discovered melatonin itself. In 1958, Aaron B. Lerner and his colleagues isolated the substance N-acetyl-5-methoxytryptamine and named it melatonin. High-affinity melatonin binding sites were pharmacologically characterized in the bovine brain in 1979. The first melatonergic receptor was cloned from melanophores of Xenopus laevis in 1994. In 1994-1995 the melatonin receptors were characterized and cloned in the human being by Reppert and colleagues. TIK-301 (PD-6735, LY-156735) has been in phase II clinical trial in the United States (US) since 2002. The FDA granted TIK-301 orphan drug designation in May 2004, to use as a treatment for circadian rhythm sleep disorder in blind individuals without light perception and individuals with tardive dyskinesia. In 2005 ramelteon (Rozerem) was approved in the US indicated for treatment of insomnia, characterized as difficulty with falling asleep, in adults. Melatonin in the form of prolonged release (trade name Circadin) was approved in 2007 in Europe (EU) for use as a short-term treatment, in patients 55 years or older, for primary insomnia (poor quality of sleep). Products containing melatonin are available as a dietary supplement in the United States and Canada. In 2009 agomelatine (Valdoxan, Melitor, Thymanax) was also approved in Europe and is indicated for the treatment of major depressive disorder in adults. Tasimelteon completed the phase III clinical trial in the United States for primary insomnia in 2010. The Food and Drug Administration (FDA) granted tasimelteon orphan drug designation status for blind individuals without light perception with non-24-hour sleep–wake disorder in January the same year, and final FDA approval for the same purpose was achieved in January 2014 under the trade name Hetlioz. Melatonin receptors In humans there are two subtypes of melatonin receptors targeted by melatonin agonists, MT1 and MT2. They are G protein-coupled receptors and are expressed in various tissues of the body, together or singly. MT1 receptors are expressed in many regions of the central nervous system (CNS): suprachiasmatic nucleus (SCN) of the hypothalamus, hippocampus, substantia nigra, cerebellum, central dopaminergic pathways, ventral tegmental area and nucleus accumbens. MT1 is also expressed in the retina, ovary, testis, mammary gland, coronary circulation and aorta, gallbladder, liver, kidney, skin and the immune system. 
MT2 receptors are expressed mainly in the CNS, but also in the lung, cardiac, coronary and aortic tissue, myometrium and granulosa cells, immune cells, duodenum and adipocytes. Mechanism of action The binding of melatonin to melatonin receptors activates several signaling pathways. MT1 receptor activation inhibits adenylyl cyclase, and this inhibition sets off a cascade of reduced signaling: the formation of cyclic adenosine monophosphate (cAMP) decreases, protein kinase A (PKA) activity then falls, and this in turn hinders the phosphorylation of the cAMP response element-binding protein (CREB) into P-CREB. MT1 receptors also activate phospholipase C (PLC), affect ion channels and regulate ion flux inside the cell. The binding of melatonin to MT2 receptors inhibits adenylyl cyclase, which decreases the formation of cAMP. It also hinders guanylyl cyclase and therefore the formation of cyclic guanosine monophosphate (cGMP). Binding to MT2 receptors probably affects PLC, which increases protein kinase C (PKC) activity. Activation of the receptor can lead to ion flux inside the cell. When melatonin receptor agonists activate their receptors, numerous physiological processes are set in motion. MT1 and MT2 receptors may be a target for the treatment of circadian and non-circadian sleep disorders because of their differences in pharmacology and function within the SCN. The SCN is responsible for maintaining the 24-hour cycle which regulates many different body functions, ranging from sleep to immune functions. Melatonin receptors have been identified in the cardiovascular system. Evidence from animal studies points to a dual role of melatonin in the vasculature: activation of MT1 receptors mediates vasoconstriction, and activation of MT2 receptors mediates vasodilation. Melatonin is involved in regulating immune responses in both humans and animals through activation of both MT1 and MT2 receptors. MT1 and MT2 receptors are widespread in the eye and are involved in regulating aqueous humor secretion, which is important in glaucoma, and in phototransduction. This is not a complete list, since many of the possible processes need further confirmation. Drug design and development The receptors and the structure of melatonin are known. Therefore, researchers started to investigate modifications of the core structure in order to develop agonists better than melatonin: more potent, with better pharmacokinetics and longer half-lives. TIK-301 (Figure 1) belongs to one of the early classes of such agonists. It is very similar to melatonin and has reached clinical trials. This led to further research on the molecule, mainly substitution of the aromatic ring. Various modifications showed promising activity, especially replacement of the indole with the naphthalene ring present in agomelatine (Figure 1). Other ring systems have also shown melatonin agonist activity. Amongst them are the indane ring present in ramelteon (Figure 1) and the ring system of tasimelteon (Figure 1). Structure-activity relationship The general structure of melatonin is the indole ring with a methoxy group in position 5 (the 5-methoxy group) and an acylaminoethyl side-chain in position 3. The two side-chains are important for binding to and activating the receptors. The effect of substitution has been evaluated at every position of the indole ring, as seen in Figure 1. 
Each position is further explained below: Binding and pharmacophore 2-Iodomelatonin was synthesized in 1986 and its radioligand, 2-[125I]-melatonin, has been useful in finding cellular targets of melatonin. Though the melatonin receptor was not characterized and cloned in humans until 1994, it was possible to start carrying out binding studies in various tissues before that time. As mentioned in the structure-activity relationship section above, certain groups are important for the activity. The most important groups are the 5-methoxy group and the acylaminoethyl side-chain, because they bind to and activate the receptors. The –NH group of the indole ring is not important for binding and activation. In fact, it is possible to replace the indole with other aromatic ring systems (naphthalene, benzofuran, benzothiazole, indane, tetralin, tetrahydroquinolines). An example of an approved drug with a naphthalene ring is agomelatine. The aromatic ring and the ethyl side-chain hold these two key groups at the correct distance; this distance is the key to good binding and is more important than the type of aromatic ring system the analogue contains. Therefore, it is possible to use different ring systems in melatonin receptor analogues, provided the distance is right. Furthermore, the aromatic ring can be substituted with different flexible scaffolds, such as phenyl-propylamides, O-phenoxy-ethylamides or N-anilino-ethylamides. The ethylamide chain of these ligands has been thought to adopt a bioactive conformation in which the chain projects out of the indole plane, as demonstrated by testing rigidified analogues. Substituents in positions 1 or 2 of the indole scaffold projecting out of the plane of the aromatic ring increase selectivity toward the MT2 receptor, resulting in the most selective melatonin receptor ligands while simultaneously reducing receptor activation. The melatonin receptors consist of proteins of around 40 kDa each. The MT1 receptor comprises 350 amino acids and the MT2 receptor 362 amino acids. The binding of melatonin and its analogues is now understood through X-ray crystal structures published in 2019. The binding space for melatonin and analogues on the MT1 receptor is smaller than on the MT2. Investigations usually focus on two binding pockets, one for each of the two side-chains. The binding pocket of the 5-methoxy group is more investigated than the other pocket. Researchers agree that the oxygen in the group binds to histidine (His) residues in the transmembrane 5 (TM5) domain of the receptor through a hydrogen bond: His195^5.46 in MT1 and His208^5.46 in MT2. Another amino acid, Val192, also participates in the binding of the 5-methoxy group by binding to the methyl portion of the group. His195^5.46 has also been proposed as important for receptor activation. The binding of the N-acetyl group is more complex and less well known. The important amino acids in the binding pocket for this group differ between the two receptors. The serines Ser110 and Ser114 in the TM3 domain seem to be important for binding to the MT1 receptor. However, asparagine 175 (Asn175) in the TM4 domain is likely to be important for the MT2 receptor. The aromatic ring system in melatonin and analogues most likely contributes some binding affinity by binding to the aromatic rings of the amino acids phenylalanine (Phe) and tryptophan (Trp) in the receptor. The bonds that form are van der Waals interactions. 
The binding of the N-acetyl group and its binding pocket, the binding of the ring system, and the important domains are only partly understood and need further investigation. In past years, mutagenesis of residues involved in the binding site was not fully successful in determining the key polar contacts established by the methoxy group and the ethyl-amide side chain. Asn162/175^4.60 and Gln181/194, belonging to ECL2, bind the methoxy and the ethyl-amide groups, respectively. The importance of His195/208^5.46 could be related to receptor activation, since cryo-electron microscopy structures of ternary complexes of the receptor show that the residue enters the binding site, near the "toggle-switch" residue Trp^6.48. Carbamate insecticides target human melatonin receptors. Despite its structural similarity to melatonin, serotonin is not able to bind the melatonin receptors, owing to its polar primary amine group and the lack of an aspartate at position 3.32 within the melatonin receptor orthosteric site. Current status There are three melatonin agonists on the market today (February 2014): ramelteon (Rozerem), agomelatine (Valdoxan, Melitor, Thymanax) and tasimelteon (Hetlioz). Ramelteon was developed by Takeda Pharmaceutical Company and approved in the United States in 2005. Agomelatine was developed by the pharmaceutical company Servier and approved in Europe in 2009. Tasimelteon was developed by Vanda Pharmaceuticals and completed its phase III trial in 2010. It was approved by the FDA on January 31, 2014, for the treatment of non-24-hour sleep–wake disorder in totally blind individuals. One melatonin agonist has received orphan drug designation and is going through clinical trials in the United States: TIK-301. TIK-301 was originally developed by Eli Lilly and Company under the name LY-156,735; in July 2007 Tikvah Pharmaceuticals took over its development and renamed it TIK-301. It is now in phase II trials and has been since 2002. In July 2010, prolonged-release melatonin (Circadin, Neurim Pharmaceuticals) was approved in Europe for up to 13 weeks of use in insomnia patients over 55 years old. Additionally, Neurim Pharmaceuticals reported the results of a positive phase II trial of its investigational compound piromelatine (Neu-P11) in February 2013. No antagonists or selective ligands are currently reported in clinical studies. References
Melatonin receptor agonist
[ "Chemistry" ]
2,918
[ "Melatonin receptor agonists", "Drug discovery" ]
35,943,710
https://en.wikipedia.org/wiki/Methyl%202-chloroacrylate
Methyl 2-chloroacrylate is a colorless liquid used in the manufacture of acrylic high polymers similar to poly(methyl methacrylate). It is also used as a monomer for certain specialty polymers. Methyl 2-chloroacrylate is polymerizable, insoluble in water, and a skin, eye, and lung irritant. Inhalation of its vapors causes pulmonary edema. Trace amounts on the skin cause large blisters. 2-Aminothiazoline-4-carboxylic acid, an intermediate in the industrial synthesis of L-cysteine, is produced by the reaction of thiourea with methyl 2-chloroacrylate. References Monomers Acrylate esters Methyl esters
Methyl 2-chloroacrylate
[ "Chemistry", "Materials_science" ]
165
[ "Monomers", "Polymer chemistry" ]
35,944,102
https://en.wikipedia.org/wiki/Magnesium%20iron%20hexahydride
Magnesium iron hexahydride is an inorganic compound with the formula Mg2FeH6. It is a green diamagnetic solid that is stable in dry air. The material is prepared by heating a mixture of powdered magnesium and iron under high pressures of hydrogen: 2 Mg + Fe + 3 H2 → Mg2FeH6 Structure The compound is isomorphous with K2PtCl6, i.e., their connectivities and structures are the same. The [FeH6]4− centre adopts octahedral molecular geometry with Fe-H distances of 1.56 Å. The Mg2+ centres are bound to the faces of the octahedron, with Mg-H distances of 2.38 Å. Several related compounds are known including salts of [RuH6]4−, [OsH6]4−, and [PtH6]2− anions. Soluble derivatives Although Mg2FeH6 is not soluble in ordinary solvents, related derivatives are. For example, the related salt Mg4Br4(THF)4FeH6 is soluble as are related alkoxides. Measurements on such compounds suggest that the hydride ligand exerts a weaker crystal field than cyanide. References Metal hydrides Magnesium compounds Iron(II) compounds Iron complexes
Magnesium iron hexahydride
[ "Chemistry" ]
280
[ "Metal hydrides", "Inorganic compounds", "Reducing agents" ]
35,945,712
https://en.wikipedia.org/wiki/Topological%20degeneracy
In quantum many-body physics, topological degeneracy is a phenomenon in which the ground state of a gapped many-body Hamiltonian becomes degenerate in the limit of large system size, such that the degeneracy cannot be lifted by any local perturbations. Applications Topological degeneracy can be used to protect qubits, which allows topological quantum computation. It is believed that topological degeneracy implies topological order (or long-range entanglement) in the ground state. Many-body states with topological degeneracy are described by topological quantum field theory at low energies. Background Topological degeneracy was first introduced to physically define topological order. In two-dimensional space, the topological degeneracy depends on the topology of space, and the topological degeneracy on high-genus Riemann surfaces encodes all information on the quantum dimensions and the fusion algebra of the quasiparticles. In particular, the topological degeneracy on the torus is equal to the number of quasiparticle types. The topological degeneracy also appears in situations with topological defects (such as vortices, dislocations, holes in a 2D sample, ends of a 1D sample, etc.), where the topological degeneracy depends on the number of defects. Braiding those topological defects leads to topologically protected non-Abelian geometric phases, which can be used to perform topologically protected quantum computation. Topological degeneracy of topological order can be defined on a closed space or an open space with gapped boundaries or gapped domain walls, including both Abelian topological orders and non-Abelian topological orders. The application of these types of systems for quantum computation has been proposed. In certain generalized cases, one can also design systems with topological interfaces enriched or extended by global or gauge symmetries. The topological degeneracy also appears in non-interacting fermion systems (such as p+ip superconductors) with trapped defects (such as vortices). In non-interacting fermion systems, there is only one type of topological degeneracy, where the number of degenerate states is given by 2^(N_d/2), where N_d is the number of defects (such as the number of vortices). Such topological degeneracy is referred to as "Majorana zero modes" on the defects. In contrast, there are many types of topological degeneracy for interacting systems. A systematic description of topological degeneracy is given by tensor category (or monoidal category) theory. See also Topological order Quantum topology Topological defect Topological quantum field theory Topological quantum number Majorana fermion References Quantum phases Condensed matter physics
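As a concrete illustration of the statement that the torus degeneracy equals the number of quasiparticle types, the following note works out a standard textbook example, the Z2 toric code; the example and its anyon labels 1, e, m, em are my addition and are not taken from the article above.

```latex
% Standard example (not from the article): the Z2 toric code has four
% quasiparticle types 1, e, m and em, so its ground-state degeneracy is
\[
  \mathrm{GSD}(T^2) = 4 , \qquad \mathrm{GSD}(\Sigma_g) = 4^{\,g}
\]
% on the torus T^2 and on a closed genus-g surface \Sigma_g, respectively,
% and no local perturbation can split these states in the large-system limit.
```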
Topological degeneracy
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
557
[ "Quantum phases", "Phases of matter", "Quantum mechanics", "Materials science", "Condensed matter physics", "Matter" ]
35,946,908
https://en.wikipedia.org/wiki/Market%20game
In economic theory, a strategic market game, also known as a market game, is a game explaining price formation through game theory, typically implementing a general equilibrium outcome as a Nash equilibrium. Fundamentally, in a strategic market game players act strategically through quantities rather than prices; their actions do not (directly) involve prices but indirectly determine them. The key ingredients in modelling strategic market games are the definition of trading posts (or markets) and their price formation mechanisms as a function of the actions of players. A leading example is the Lloyd Shapley and Martin Shubik trading post game. Shapley and Shubik use a numeraire and trading posts for the exchange of goods. The relative price of each good in terms of the numeraire is determined as the ratio of the amount of numeraire bid at each post to the quantity of the good offered for sale at that post. In this way, every agent is allocated goods in proportion to his bids, so that posts always clear. Pradeep Dubey and John Geanakoplos show that such a game can be a strategic foundation of the Walras equilibrium. A key ingredient of such approaches is to have a very large number of players, such that for each player the aggregate action appears as a linear constraint that he cannot influence. An excellent description of price formation in a strategic market game, in which for each commodity there is a unique trading post on which consumers place offers of the commodity and bids of inside money, is provided by James Peck, Karl Shell and Stephen Spear. References Business models Game theory game classes
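The Shapley–Shubik price formation rule described above is easy to state in code. The sketch below is a minimal illustration under my own assumptions (two traders, two goods, hypothetical bid and offer numbers); it only implements the clearing rule price = total numeraire bid at a post divided by total quantity offered there, with goods allocated in proportion to bids.

```python
# Illustrative sketch (not from the article): Shapley-Shubik trading posts.
# The trader data below are hypothetical example numbers.

def trading_post_prices(bids, offers):
    """bids[i][j]: numeraire trader i bids at post j;
    offers[i][j]: quantity of good j trader i puts up for sale."""
    n_goods = len(bids[0])
    prices = []
    for j in range(n_goods):
        total_bid = sum(b[j] for b in bids)
        total_offer = sum(q[j] for q in offers)
        prices.append(total_bid / total_offer if total_offer > 0 else 0.0)
    return prices

def allocations(bids, offers, prices):
    """Each trader receives goods in proportion to his bid and is paid
    the post price for every unit he offered, so each post clears."""
    goods = [[b[j] / prices[j] if prices[j] > 0 else 0.0
              for j in range(len(prices))] for b in bids]
    money = [sum(prices[j] * q[j] for j in range(len(prices))) for q in offers]
    return goods, money

if __name__ == "__main__":
    bids = [[4.0, 1.0], [2.0, 3.0]]     # numeraire bid by traders 1 and 2
    offers = [[0.0, 5.0], [3.0, 0.0]]   # goods offered by traders 1 and 2
    p = trading_post_prices(bids, offers)
    print(p)  # [2.0, 0.8]: e.g. good 1 costs (4+2)/3 = 2 units of numeraire
```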
Market game
[ "Mathematics" ]
320
[ "Game theory game classes", "Game theory" ]
35,947,857
https://en.wikipedia.org/wiki/Krull%E2%80%93Akizuki%20theorem
In commutative algebra, the Krull–Akizuki theorem states the following: Let A be a one-dimensional reduced noetherian ring, K its total ring of fractions. Suppose L is a finite extension of K. If A ⊆ B ⊆ L and B is reduced, then B is a noetherian ring of dimension at most one. Furthermore, for every nonzero ideal I of B, B/I is finite over A. Note that the theorem does not say that B is finite over A. The theorem does not extend to higher dimension. One important consequence of the theorem is that the integral closure of a Dedekind domain A in a finite extension of the field of fractions of A is again a Dedekind domain. This consequence does generalize to a higher dimension: the Mori–Nagata theorem states that the integral closure of a noetherian domain is a Krull domain. Proof First observe that and KB is a finite extension of K, so we may assume without loss of generality that . Then for some . Since each is integral over K, there exists such that is integral over A. Let . Then C is a one-dimensional noetherian ring, and , where denotes the total ring of fractions of C. Thus we can substitute C for A and reduce to the case . Let be minimal prime ideals of A; there are finitely many of them. Let be the field of fractions of and the kernel of the natural map . Then we have: and . Now, if the theorem holds when A is a domain, then this implies that B is a one-dimensional noetherian domain since each is and since . Hence, we reduced the proof to the case A is a domain. Let be an ideal and let a be a nonzero element in the nonzero ideal . Set . Since is a zero-dim noetherian ring; thus, artinian, there is an such that for all . We claim Since it suffices to establish the inclusion locally, we may assume A is a local ring with the maximal ideal . Let x be a nonzero element in B. Then, since A is noetherian, there is an n such that and so . Thus, Now, assume n is a minimum integer such that and the last inclusion holds. If , then we easily see that . But then the above inclusion holds for , contradiction. Hence, we have and this establishes the claim. It now follows: Hence, has finite length as A-module. In particular, the image of there is finitely generated and so is finitely generated. The above shows that has dimension at most zero and so B has dimension at most one. Finally, the exact sequence of A-modules shows that is finite over A. References Theorems in algebra Commutative algebra
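A standard illustration of the Dedekind-domain consequence mentioned above, added here for concreteness and not spelled out in the article, is the classical case of rings of integers of number fields.

```latex
% Standard illustration (my addition): take A = \mathbb{Z}, so K = \mathbb{Q},
% and let L be a number field, i.e. a finite extension of \mathbb{Q}.
\[
  A=\mathbb{Z},\qquad K=\mathbb{Q},\qquad [L:\mathbb{Q}]<\infty,\qquad
  B=\mathcal{O}_L \ \text{(the integral closure of } \mathbb{Z} \text{ in } L).
\]
% The Krull--Akizuki theorem gives that \mathcal{O}_L is noetherian of
% dimension one; being in addition integrally closed, it is a Dedekind
% domain, recovering the classical fact about rings of integers.
```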
Krull–Akizuki theorem
[ "Mathematics" ]
572
[ "Theorems in algebra", "Commutative algebra", "Fields of abstract algebra", "Mathematical problems", "Mathematical theorems", "Algebra" ]
40,125,862
https://en.wikipedia.org/wiki/Dark%20Ages%20Radio%20Explorer
The Dark Ages Radio Explorer (DARE) is a proposed NASA mission aimed at detecting redshifted line emission from the earliest neutral hydrogen atoms, which formed after recombination. Emission from these neutral hydrogen atoms, characterized by a rest wavelength of 21 cm and a frequency of 1420 MHz, offers insights into the formation of the universe's first stars and the epoch succeeding the cosmic Dark Ages. The intended orbiter aims to investigate the universe's state from approximately 80 million years to 420 million years post-Big Bang by capturing the line emission, at its redshifted frequencies, originating from that period. Data collected by this mission are expected to shed light on the genesis of the first stars, the rapid growth of the initial black holes, and the universe's reionization process. Moreover, it would facilitate the testing of computational galaxy formation models. Furthermore, the mission could advance research into dark matter decay and inform the development of lunar surface telescopes, enhancing the exploration of exoplanets around nearby stars. Background The epoch between recombination and the emergence of stars and galaxies is termed the "cosmic Dark Ages". In this era, neutral hydrogen dominated the universe's matter composition. While this hydrogen has not yet been directly observed, ongoing experiments aim to detect the characteristic hydrogen line from this period. The hydrogen line arises when an electron in a neutral hydrogen atom transitions between hyperfine states, either by excitation to a state with aligned spins or by de-excitation as the spins move from alignment to anti-alignment. The energy differential between these hyperfine states, about 5.9 × 10^-6 electron volts, corresponds to a photon with a wavelength of 21 centimeters. When neutral hydrogen attains thermodynamic equilibrium with cosmic microwave background (CMB) photons, a "coupling" occurs, rendering the hydrogen line undetectable. Observation of the hydrogen line is feasible only when there is a temperature discrepancy between the neutral hydrogen and the CMB. Theoretical motivation In the immediate aftermath of the Big Bang, the universe was characterized by intense heat, density, and near-uniformity. Its subsequent expansion and cooling created conducive conditions for nuclear and atomic formation. Around 400,000 years post-Big Bang, at a redshift of approximately 1100, the cooling of the primordial plasma allowed protons and electrons to combine into neutral hydrogen atoms, rendering the universe transparent as photons ceased to interact significantly with matter. These ancient photons are detectable in the present as the cosmic microwave background (CMB). The CMB reveals a universe that was still smooth and homogeneous at that epoch. Following the formation of the initial hydrogen atoms, the universe was composed of an almost entirely neutral, uniformly distributed intergalactic medium (IGM), predominantly made up of hydrogen gas. This epoch, devoid of luminous bodies, is referred to as the cosmic Dark Ages. Theoretical models forecast that, over subsequent hundreds of millions of years, gravitational forces gradually compressed the gas into denser regions, culminating in the emergence of the first stars, a milestone known as Cosmic Dawn. The formation of additional stars and the assembly of the earliest galaxies inundated the universe with ultraviolet photons, which had the potential to ionize hydrogen gas. 
Several hundred million years post-Cosmic Dawn, the initial stars emitted sufficient ultraviolet photons to reionize the vast majority of hydrogen atoms in the universe. This reionization epoch signifies the IGM’s transition back to a state of near-complete ionization. Observational studies have not yet explored the universe’s emerging structural complexity. Studying the universe’s earliest structures necessitates a telescope surpassing the capabilities of the Hubble Space Telescope. While theoretical models indicate that current measurements are starting to examine the concluding phase of Reionization, the initial stars and galaxies from the Dark Ages and Cosmic Dawn remain beyond the observational reach of contemporary instruments. The envisioned DARE mission aims to conduct pioneering measurements of the inception of the first stars and black holes, as well as ascertain the characteristics of hitherto undetectable stellar populations. These observations would contextualize existing data and enhance our comprehension of the developmental processes of the first galaxies from antecedent cosmic structures. Mission The DARE mission aims to analyze the spectral profile of the sky-averaged, redshifted 21-cm signal within a 40–120 MHz radio bandpass, targeting neutral hydrogen at redshifts between 11-35, corresponding to a period 420-80 million years subsequent to the Big Bang. DARE’s tentative schedule involves a 3-year lunar orbit, focusing on data collection above the Moon’s far side—a region considered devoid of human-made radio frequency interference and substantial ionospheric activity. The mission’s scientific apparatus, affixed to an RF-quiet spacecraft bus, comprises a three-part radiometer system featuring an electrically short, tapered, biconical dipole antenna, along with a receiver and a digital spectrometer. DARE’s utilization of the antenna’s smooth frequency response and a differential spectral calibration technique is anticipated to mitigate intense cosmic foregrounds, thereby facilitating the detection of the faint cosmic 21-cm signal. Related initiatives In addition to the DARE mission, several other initiatives have been proposed to investigate this field. These include the Precision Array for Probing the Epoch of Reionization (PAPER), the Low Frequency Array (LOFAR), the Murchison Widefield Array (MWA), the Giant Metrewave Radio Telescope (GMRT), and the Large Aperture Experiment to Detect the Dark Ages (LEDA). See also Reionization Wouthuysen-Field coupling References Further reading External links JPL Helps Shoot for the Moon, Stars, Planets and More Proposed NASA space probes Cosmic background radiation Big Bang Physical cosmology
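The correspondence between DARE's 40–120 MHz bandpass and the quoted redshift range of 11–35 follows from simple redshift arithmetic on the 1420.4 MHz rest frequency; the short check below is my own worked example, not part of the mission description.

```python
# Quick check of why a 40-120 MHz bandpass targets z ~ 11-35: the 21-cm
# hydrogen line is observed at nu_obs = nu_rest / (1 + z).

NU_REST_MHZ = 1420.406  # rest frequency of the 21-cm hydrogen line

def observed_frequency(z):
    """Redshifted 21-cm frequency in MHz."""
    return NU_REST_MHZ / (1.0 + z)

for z in (11, 35):
    print(f"z = {z:>2}: observed frequency ~ {observed_frequency(z):.1f} MHz")
# z = 11: ~118.4 MHz and z = 35: ~39.5 MHz, spanning roughly 40-120 MHz.
```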
Dark Ages Radio Explorer
[ "Physics", "Astronomy" ]
1,182
[ "Cosmogony", "Astronomical sub-disciplines", "Big Bang", "Theoretical physics", "Astrophysics", "Physical cosmology" ]
40,129,720
https://en.wikipedia.org/wiki/Devex%20algorithm
In applied mathematics, the devex algorithm is a pivot rule for the simplex method developed by Paula M. J. Harris. It approximates the steepest-edge criterion in its search for the optimal solution. References Algorithms
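To make the idea of approximating the steepest-edge criterion a little more concrete, the sketch below shows only the column-selection step under my own assumptions: each candidate column j carries a reduced cost d_j and a cheap "reference weight" w_j standing in for the steepest-edge norm, and the entering column maximizes d_j squared over w_j. The data, function name and omission of the weight-update rules are all simplifications of mine and not a statement of Harris's full algorithm.

```python
# Minimal, assumption-laden sketch of a devex-style pricing step.
# reduced_costs[j] is d_j; weights[j] is the reference weight w_j that
# approximates the steepest-edge norm; the per-pivot weight updates are
# deliberately omitted here.

def devex_select(reduced_costs, weights, eligible):
    """Return the eligible column index j maximizing d_j**2 / w_j."""
    best_j, best_score = None, 0.0
    for j in eligible:
        score = reduced_costs[j] ** 2 / weights[j]
        if score > best_score:
            best_j, best_score = j, score
    return best_j

# Example: column 2 has a smaller reduced cost than column 0 but a much
# smaller weight, so the devex-style rule prefers it.
print(devex_select({0: -4.0, 1: -1.0, 2: -3.0},
                   {0: 16.0, 1: 1.0, 2: 2.0},
                   eligible=[0, 1, 2]))   # -> 2
```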
Devex algorithm
[ "Mathematics" ]
47
[ "Algorithms", "Mathematical logic", "Applied mathematics" ]
40,131,213
https://en.wikipedia.org/wiki/SU2%20code
SU2 is a suite of open-source software tools written in C++ for the numerical solution of partial differential equations (PDE) and for performing PDE-constrained optimization. The primary applications are computational fluid dynamics and aerodynamic shape optimization, but the suite has been extended to treat more general equations such as electrodynamics and chemically reacting flows. SU2 supports continuous and discrete adjoint methods for calculating the sensitivities/gradients of a scalar field. Developers SU2 is being developed by individuals and organized teams around the world. The SU2 Lead Developers are: Dr. Francisco Palacios and Dr. Thomas D. Economon. The most active groups developing SU2 are: Prof. Juan J. Alonso's group at Stanford University. Prof. Piero Colonna's group at Delft University of Technology. Prof. Nicolas R. Gauger's group at Kaiserslautern University of Technology. Prof. Alberto Guardone's group at Polytechnic University of Milan. Prof. Rafael Palacios' group at Imperial College London. Capabilities The SU2 tool suite includes High-fidelity analysis and adjoint-based design using unstructured mesh technology. Compressible and incompressible Euler, Navier-Stokes, and RANS solvers. Additional PDE solvers for electrodynamics, linear elasticity, heat equation, wave equation and thermochemical non-equilibrium. Convergence acceleration (multi-grid, preconditioning, etc.). Sensitivity information via the continuous adjoint methodology. Adaptive, goal-oriented mesh refinement and deformation. Modularized C++ object-oriented design. Parallelization with MPI. Python scripts for automation. FEATool Multiphysics features built-in GUI and CLI interfaces for SU2. Release history License SU2 is free and open source software, released under the GNU General Public License version 3 (SU2 v1.0 and v2.0) and GNU Lesser General Public License version 2.1 (SU2 v2.0.7 and later versions). Alternative software Free and open-source software Advanced Simulation Library (AGPL) CLAWPACK Code Saturne (GPL) FreeFem++ Gerris Flow Solver (GPL) OpenFOAM OpenFVM Palabos Flow Solver Proprietary software ADINA CFD ANSYS CFX ANSYS Fluent Azore FEATool Multiphysics Pumplinx STAR-CCM+ COMSOL Multiphysics KIVA (software) RELAP5-3D PowerFlow FOAMpro SimScale Cradle SC/Tetra Cradle scSTREAM Cradle Heat Designer References External links Official resources SU2 home page SU2 Github repository Community resources SU2 Forum at CFD Online SU2 wiki page at CFD Online Other resources SU2 version 2.0 announcement Review of SU2 by Tecplot Co-founder Stanford News story about SU2 initial release FEATool Multiphysics GUI and CFD solver interface for SU2 Computational fluid dynamics Free science software Free computer-aided design software Scientific simulation software 2012 software
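Since the capabilities list mentions Python scripts for automation, here is a hedged sketch of the kind of driver script one might write around the main flow solver executable, SU2_CFD. The configuration file name "inv_channel.cfg" is a placeholder of mine, SU2_CFD is assumed to be on the PATH, and the actual configuration options should be taken from the SU2 documentation rather than from this sketch.

```python
# Hedged sketch: driving a serial SU2_CFD run from Python.
# "inv_channel.cfg" is a hypothetical configuration file name.

import subprocess
import sys

def run_su2(config_file: str) -> int:
    """Run the SU2_CFD solver on the given configuration file."""
    result = subprocess.run(["SU2_CFD", config_file])
    return result.returncode

if __name__ == "__main__":
    cfg = sys.argv[1] if len(sys.argv) > 1 else "inv_channel.cfg"
    sys.exit(run_su2(cfg))
```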
SU2 code
[ "Physics", "Chemistry" ]
638
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
24,668,006
https://en.wikipedia.org/wiki/Carbene%20C%E2%88%92H%20insertion
Carbene C−H insertion in organic chemistry concerns the insertion reaction of a carbene into a carbon–hydrogen bond. This organic reaction is of some importance in the synthesis of new organic compounds. Simple carbenes such as methylene and dichlorocarbene are not regioselective towards insertion. When the carbene is stabilized by a metal, the selectivity increases. The compound dirhodium tetraacetate is found to be especially effective. In a typical reaction, ethyl diazoacetate (a well-known carbene precursor) and dirhodium tetraacetate react with hexane; insertion into a C−H bond occurs 1% on one of the methyl groups, 63% on the alpha-methylene unit and 33% on the beta-methylene unit. The first such reaction was reported in 1981, and the general reaction mechanism was proposed by Doyle in 1993. In this mechanism, the metal that stabilizes the carbene dissociates at the same time as, but not to the same degree as, carbon–carbon bond formation and hydrogen atom migration. The reaction is distinct from a metal-catalyzed C−H activation reaction (sensu stricto), in which the metal actually inserts itself between carbon and hydrogen to form a species with a metal–carbon bond. It does, however, serve as a premier example of a metal-catalyzed C–H functionalization reaction, which some authors also refer to as C–H activation (sensu lato). The metal employed as a catalyst in this reaction was historically copper, until it was superseded by rhodium. Other metals stabilize the carbene too much (e.g. molybdenum, as in Fischer carbenes) or result in carbenes that are too reactive (e.g. gold, silver). Many dirhodium carboxylates and carboxamidates exist, including chiral ones. An effective chiral dirhodium catalyst is Rh2(MPPIM)4, bearing the asymmetric MPPIM (Methyl PhenylPropyl IMidazolidinecarboxylato) ligand. Most successful reactions are intramolecular within geometrically rigid systems, as pioneered by Wenkert (1982) and Taber (1982). References Organic reactions Carbenes
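The hexane numbers quoted above become more telling when corrected for the number of hydrogens at each site; the short calculation below is my own back-of-the-envelope illustration, assuming only the standard hydrogen count of hexane (6 methyl H, 4 alpha-methylene H on C2/C5, 4 beta-methylene H on C3/C4) and the product percentages given in the text.

```python
# Per-C-H-bond selectivity implied by the quoted product distribution
# (my worked example, not from the article).

percent = {"methyl": 1.0, "alpha-CH2": 63.0, "beta-CH2": 33.0}
n_hydrogens = {"methyl": 6, "alpha-CH2": 4, "beta-CH2": 4}

per_h = {site: percent[site] / n_hydrogens[site] for site in percent}
ref = per_h["methyl"]
for site, value in per_h.items():
    print(f"{site:10s} relative per-H reactivity ~ {value / ref:6.1f}")
# Roughly 1 : 94 : 50 for methyl : alpha-CH2 : beta-CH2, showing how
# strongly the metal carbene prefers the secondary C-H bonds.
```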
Carbene C−H insertion
[ "Chemistry" ]
476
[ "Organic compounds", "Carbenes", "Inorganic compounds", "Organic reactions" ]
24,670,118
https://en.wikipedia.org/wiki/Persistent%20current
In physics, persistent current is a perpetual electric current that does not require an external power source. Such a current is impossible in normal electrical devices, since all commonly-used conductors have a non-zero resistance, and this resistance would rapidly dissipate any such current as heat. However, in superconductors and some mesoscopic devices, persistent currents are possible and observed due to quantum effects. In resistive materials, persistent currents can appear in microscopic samples due to size effects. Persistent currents are widely used in the form of superconducting magnets. In magnetized objects In electromagnetism, all magnetizations can be seen as microscopic persistent currents. By definition, a magnetization can be replaced by its corresponding microscopic form, which is an electric current density: J_b = ∇ × M. This current is a bound current, not having any charge accumulation associated with it since it is divergenceless. What this means is that any permanently magnetized object, for example a piece of lodestone, can be considered to have persistent electric currents running throughout it (the persistent currents are generally concentrated near the surface). The converse is also true: any persistent electric current is divergence-free, and can therefore be represented instead by a magnetization. Therefore, in the macroscopic Maxwell's equations, it is purely a choice of mathematical convenience whether to represent persistent currents as magnetization or vice versa. In the microscopic formulation of Maxwell's equations, however, the magnetization M does not appear, and so any magnetization must instead be represented by bound currents. In superconductors In superconductors, charge can flow without any resistance. It is possible to make pieces of superconductor with a large built-in persistent current, either by creating the superconducting state (cooling the material) while charge is flowing through it, or by changing the magnetic field around the superconductor after creating the superconducting state. This principle is used in superconducting electromagnets to generate sustained high magnetic fields that only require a small amount of power to maintain. The persistent current was first identified by H. Kamerlingh Onnes, and attempts to set a lower bound on its duration have reached values of over 100,000 years. In resistive conductors Surprisingly, it is also possible to have tiny persistent currents inside resistive metals that are placed in a magnetic field, even in metals that are nominally "non-magnetic". The current is the result of a quantum mechanical effect that influences how electrons travel through metals, and arises from the same kind of motion that allows the electrons inside an atom to orbit the nucleus forever. This type of persistent current is a mesoscopic low-temperature effect: the magnitude of the current becomes appreciable when the size of the metallic system is reduced to the scale of the electron quantum phase coherence length and the thermal length. Persistent currents decrease with increasing temperature and will vanish exponentially above a temperature known as the Thouless temperature. This temperature scales as the inverse of the circuit diameter squared. Consequently, it has been suggested that persistent currents could flow up to room temperature and above in nanometric metal structures such as metal (Au, Ag, ...) nanoparticles. This hypothesis has been offered to explain the singular magnetic properties of nanoparticles made of gold and other metals. 
Unlike with superconductors, these persistent currents do not appear at zero magnetic field, as the current fluctuates symmetrically between positive and negative values; the magnetic field breaks that symmetry and allows a nonzero average current. Although the persistent current in an individual ring is largely unpredictable due to uncontrolled factors like the disorder configuration, it has a slight bias, so that an average persistent current appears even for an ensemble of conductors with different disorder configurations. This kind of persistent current was first predicted to be experimentally observable in micrometer-scale rings in 1983 by Markus Büttiker, Yoseph Imry, and Rolf Landauer. Because the effect requires the phase coherence of electrons around the entire ring, the current cannot be observed when the ring is interrupted by an ammeter, and thus the current must be measured indirectly through its magnetization. In fact, all metals exhibit some magnetization in magnetic fields due to a combination of the de Haas–van Alphen effect, core diamagnetism, Landau diamagnetism, and Pauli paramagnetism, all of which appear regardless of the shape of the metal. The additional magnetization from persistent currents becomes strong for a connected ring shape, and for example would disappear if the ring were cut. Experimental evidence of the observation of persistent currents was first reported in 1990 by a research group at Bell Laboratories using a superconducting resonator to study an array of copper rings. Subsequent measurements using superconducting resonators and extremely sensitive magnetometers known as superconducting quantum interference devices (SQUIDs) produced inconsistent results. In 2009, physicists at Stanford University using a scanning SQUID and at Yale University using microelectromechanical cantilevers reported measurements of persistent currents in nanoscale gold and aluminum rings, respectively, that both showed strong agreement with the simple theory for non-interacting electrons. The 2009 measurements both reported greater sensitivity to persistent currents than previous measurements and made several other improvements to persistent current detection. The scanning SQUID's ability to change the position of the SQUID detector relative to the ring sample allowed a number of rings to be measured on one sample chip and enabled better extraction of the current signal from background noise. The cantilever detector's mechanical detection technique made it possible to measure the rings in a clean electromagnetic environment over a large range of magnetic field and also to measure a number of rings on one sample chip. See also References Mesoscopic physics Electrical engineering Electric current
Persistent current
[ "Physics", "Materials_science", "Engineering" ]
1,185
[ "Physical quantities", "Quantum mechanics", "Electrical engineering", "Condensed matter physics", "Electric current", "Wikipedia categories named after physical quantities", "Mesoscopic physics" ]
34,841,856
https://en.wikipedia.org/wiki/Single-atom%20transistor
A single-atom transistor is a device that can open and close an electrical circuit by the controlled and reversible repositioning of one single atom. The single-atom transistor was invented and first demonstrated in 2002 by Dr. Fangqing Xie in Prof. Thomas Schimmel's group at the Karlsruhe Institute of Technology (formerly the University of Karlsruhe). By means of a small electrical voltage applied to a control electrode, the so-called gate electrode, a single silver atom is reversibly moved into and out of a tiny junction, in this way closing and opening an electrical contact. The single-atom transistor therefore works as an atomic switch or atomic relay, in which the switchable atom opens and closes the gap between two tiny electrodes called source and drain. The single-atom transistor opens up perspectives for the development of future atomic-scale logic and quantum electronics. At the same time, the device of the Karlsruhe team of researchers marks the lower limit of miniaturization, as feature sizes smaller than one atom cannot be produced lithographically. The device represents a quantum transistor, the conductance of the source-drain channel being defined by the rules of quantum mechanics. It can be operated at room temperature and under ambient conditions, i.e. neither cooling nor vacuum is required. Few-atom transistors have been developed at Waseda University and at the Italian CNR by Takahiro Shinada and Enrico Prati, who observed the Anderson–Mott transition in miniature by employing arrays of only two, four and six individually implanted As or P atoms. See also QFET (quantum field-effect transistor) References External links Beilstein TV Video of the Schimmel group: The single-atom transistor – perspectives for quantum electronics at room temperature (link offline) Transistors Condensed matter physics
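For context on the statement that the source-drain conductance is set by the rules of quantum mechanics, the note below recalls the conductance quantum; this is a standard result of mesoscopic physics that I am adding for orientation, not a figure quoted in the article.

```latex
% Context note (standard result, not from the article): in ballistic
% atomic-scale contacts the natural unit of conductance is
\[
  G_0 = \frac{2e^{2}}{h} \approx 77.5\ \mu\mathrm{S}
  \quad (\text{about } 12.9\ \mathrm{k}\Omega \text{ per channel}),
\]
% so a single, fully transmitting atomic channel conducts close to G_0.
```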
Single-atom transistor
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
386
[ "Phases of matter", "Condensed matter physics", "Matter", "Materials science" ]
34,845,435
https://en.wikipedia.org/wiki/Latanoprost/timolol
Latanoprost/timolol, sold under the brand name Xalacom, is a combination drug used for the treatment of glaucoma, consisting of latanoprost (which increases uveoscleral outflow of aqueous humor) and timolol (a beta blocker that decreases the production of aqueous humor). Society and culture Brand names In some countries, Xalacom is marketed by Viatris after Upjohn was spun off from Pfizer. References Ophthalmology drugs Combination drugs
Latanoprost/timolol
[ "Chemistry" ]
114
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
34,846,515
https://en.wikipedia.org/wiki/Phenopicolinic%20acid
Phenopicolinic acid is a dopamine beta hydroxylase inhibitor. References Pyridines Carboxylic acids 4-Hydroxyphenyl compounds
Phenopicolinic acid
[ "Chemistry" ]
37
[ "Carboxylic acids", "Functional groups" ]
34,846,642
https://en.wikipedia.org/wiki/Flat%20cover
In algebra, a flat cover of a module M over a ring is a surjective homomorphism from a flat module F to M that is in some sense minimal. Any module over a ring has a flat cover that is unique up to (non-unique) isomorphism. Flat covers are in some sense dual to injective hulls, and are related to projective covers and torsion-free covers. Definitions The homomorphism F→M is defined to be a flat cover of M if it is surjective, F is flat, every homomorphism from a flat module to M factors through F, and any map from F to F commuting with the map to M is an automorphism of F. History While projective covers for modules do not always exist, it was speculated that for general rings every module would have a flat cover. This flat cover conjecture was explicitly first stated by Enochs (1981). The conjecture turned out to be true, resolved positively and proved simultaneously by L. Bican, R. El Bashir and E. Enochs (2001). This was preceded by important contributions by P. Eklof, J. Trlifaj and J. Xu. Minimal flat resolutions Any module M over a ring has a resolution by flat modules ... → F2 → F1 → F0 → M → 0 such that each Fn+1 is the flat cover of the kernel of Fn → Fn−1. Such a resolution is unique up to isomorphism, and is a minimal flat resolution in the sense that any flat resolution of M factors through it. Any homomorphism of modules extends to a homomorphism between the corresponding flat resolutions, though this extension is in general not unique. References Module theory
Flat cover
[ "Mathematics" ]
328
[ "Fields of abstract algebra", "Module theory" ]
34,847,251
https://en.wikipedia.org/wiki/Sesam%20%28structural%20analysis%20software%29
Sesam is a software suite for structural and hydrodynamic analysis of ships and offshore structures. It is based on the displacement formulation of the Finite Element Method. The first version of Sesam was developed at NTH, now Norges Teknisk-Naturvitenskapelige Universitet (NTNU Trondheim), in the mid-1960s. Sesam was bought by Det Norske Veritas, now DNV, in 1968 and commercialized under the name SESAM-69 in 1970. Sesam was thus one of the first major structural analysis tools based on the Finite Element Method, and in its capability to analyse large and complex structures it outclassed the other tools available at the time. In the beginning it was used for analysis of ships, in particular oil tankers (for which a comparison of analysis results with measurements on the real ship was made to confirm the accuracy of the method and tool) and liquefied natural gas (LNG) carriers. With the development of offshore oil fields in the North Sea in the 1970s, the use of Sesam for fixed offshore platforms grew. Examples of such use are the Ekofisk concrete tank of the Ekofisk oil field, the Condeep concrete gravity base structures and the Kvitebjørn jacket in the North Sea. In the late 1970s, development of a completely new version of Sesam started. This version was released in the mid-1980s under the name SESAM'80 and is the basis for today's Sesam. During the 1990s, Sesam was further enhanced with a high-level concept modelling technique together with a design-oriented and unified user interface. Analysis features for mooring systems and flexible risers were also added. The software name was at the same time simplified to merely "Sesam". Development in recent years, with frequent new releases, has focused on improving Sesam as a tool for all phases in the life of offshore and maritime structures, from design through transportation, installation, operation and modification to life extension, requalification and finally decommissioning. Sesam consists of several modules, of which the most important are: GeniE for modelling, analysis and code checking of beam, plate and shell structures like offshore platforms and ships. HydroD for hydrodynamic and hydrostatic analysis of fixed and floating structures like offshore platforms and ships. Sima for simulation of marine operations like lifting and lowering large objects in a marine environment. DeepC for mooring and riser design as well as marine operations of offshore floating structures. Sesam is developed in Norway by DNV with a focus on solving structural and hydrodynamic engineering problems within the offshore and maritime industries. It has been used by the offshore and maritime industries worldwide for more than 50 years. References External links Celebrating Sesam's first 40 years - A brief view on the history of Sesam from 1969 to 2009. Structural engineering Structural analysis Computer-aided engineering software Finite element software
Sesam (structural analysis software)
[ "Engineering" ]
596
[ "Structural engineering", "Structural analysis", "Construction", "Civil engineering", "Mechanical engineering", "Aerospace engineering" ]
34,848,600
https://en.wikipedia.org/wiki/Benitec%20Biopharma
Benitec Biopharma Limited is an Australian biotechnology company founded in 1997. It is engaged in the development of gene-silencing therapies for the treatment of chronic and life-threatening diseases using DNA-directed RNA interference (ddRNAi) technology. The CSIRO has researched RNAi extensively, developing the small hairpin RNA concept employed in ddRNAi. Benitec Biopharma has an exclusive license to this ddRNAi technology in human therapeutic uses and research. Research and development Benitec Biopharma is researching ddRNAi in the following fields: Hepatitis B (under development with partner Biomics Biotechnologies) Oculopharyngeal muscular dystrophy (OPMD) References External links Benitec Biopharma website Biotechnology companies of Australia Stem cells Biopharmaceutical companies Genetic engineering Companies based in Sydney Multinational companies headquartered in Australia Biotechnology companies established in 1997 Companies listed on the Nasdaq Companies formerly listed on the Australian Securities Exchange Australian brands Pharmaceutical companies established in 1997 Australian companies established in 1997 Science and technology in Melbourne
Benitec Biopharma
[ "Chemistry", "Engineering", "Biology" ]
222
[ "Biological engineering", "Genetic engineering", "Molecular biology" ]
31,752,262
https://en.wikipedia.org/wiki/REPAIRtoire
REPAIRtoire is a database of resources for systems biology of DNA damage and repair. See also DNA repair References External links http://repairtoire.genesilico.pl Biological databases DNA repair Systems biology
REPAIRtoire
[ "Biology" ]
45
[ "DNA repair", "Bioinformatics", "Molecular genetics", "Cellular processes", "Biological databases", "Systems biology" ]
31,760,032
https://en.wikipedia.org/wiki/Vibrational%20analysis%20with%20scanning%20probe%20microscopy
The technique of vibrational analysis with scanning probe microscopy allows probing vibrational properties of materials at the submicrometer scale, and even of individual molecules. This is accomplished by integrating scanning probe microscopy (SPM) and vibrational spectroscopy (Raman scattering and/or Fourier transform infrared spectroscopy, FTIR). This combination allows for much higher spatial resolution than can be achieved with conventional Raman/FTIR instrumentation. The technique is also nondestructive, requires little sample preparation, and provides several forms of contrast, such as intensity, polarization and wavelength contrast, as well as providing specific chemical information and topography images simultaneously. History Raman-NSOM Near-field scanning optical microscopy (NSOM) was described in 1984, and has been used in many applications since then. The combination of Raman scattering and NSOM techniques was first realized in 1995, when it was used for imaging a Rb-doped KTP crystal at a spatial resolution of 250 nm. NSOM employs two different methods for data collection and analysis: the fiber tip aperture approach and the apertureless metal tip approach. In aperture-based NSOM, a smaller aperture can increase the spatial resolution; however, the transmission of light to the sample and the collection efficiency of the scattered/emitted light are also diminished. Apertureless near-field scanning optical microscopy (ANSOM) was developed in the 1990s. ANSOM employs a metalized tip instead of an optical fiber probe. The performance of ANSOM strongly depends on the electric-field enhancement factor of the metalized tip. This technique is based on surface plasmon resonance (SPR), which is the precursor of tip-enhanced Raman scattering (TERS) and surface-enhanced Raman scattering (SERS). In 1997, Martin and Girard demonstrated theoretically that the electric field under a metallic or dielectric tip (in the apertureless NSOM technique) can be strongly enhanced if the incident field is along the tip axis. Since then a few groups have reported Raman or fluorescence enhancement in near-field optical spectroscopy by apertureless microscopy. In 2000, T. Kalkbrenner et al. used a single gold particle as a probe for apertureless scanning and presented images of an aluminium film with 3 μm holes on a glass substrate. The resolution of this apertureless method was 100 nm, which is comparable to that of fiber-based systems. Recently, a carbon nanotube (CNT) having a conical end, tagged with gold nanoparticles, was applied as a nanometer-resolution optical probe tip for NSOM. NSOM images were obtained with a spatial resolution of ~5 nm, demonstrating the potential of a composite CNT probe tip for nanoscale-resolution optical imaging. Tip-enhanced Raman scattering There are two options for realizing the apertureless NSOM-Raman technique: TERS and SERS. TERS is frequently used for apertureless NSOM-Raman and can significantly enhance the spatial resolution. This technique requires a metal tip to enhance the signal from the sample. That is why a metal AFM tip is usually used to enhance the electric field for molecular excitation. Raman spectroscopy was combined with AFM in 1999. A very narrow tip aperture was required to obtain a relatively high spatial resolution; such an aperture reduced the signal and was difficult to prepare. In 2000, Stöckle et al. 
first designed a setup combining apertureless NSOM, Raman and AFM techniques, in which the tip had a 20 nm thick granular silver film on it. They reported a large gain in the Raman scattering intensity of a dye film (brilliant cresyl blue) deposited on a glass substrate when a metal-coated AFM tip was brought very close to the sample. About 2000-fold enhancement of the Raman scattering and a spatial resolution of ~55 nm were achieved. Similarly, Nieman et al. used an illuminated AFM tip coated with a 100 nm thick film of gold to enhance Raman scattering from polymer samples and achieved a resolution of 100 nm. In early TERS research, the most commonly used coating materials for the tip probe were silver and gold. High-resolution spatial maps of Raman signals were obtained with this technique from molecular films of such compounds as brilliant cresyl blue, malachite green isothiocyanate and rhodamine 6G, as well as individual carbon nanotubes. IR-NSOM and AFM IR near-field scanning optical microscopy (IR-NSOM) is a powerful spectroscopic tool because it allows subwavelength resolution in IR spectroscopy. Previously, IR-NSOM was realized by applying a solid immersion lens with a refractive index of n, which shortens the wavelength (λ) to (λ/n) compared to FTIR-based IR microscopy. In 2004, an IR-NSOM achieved a spatial resolution of ~λ/7, which is less than 1 μm. This resolution was further improved to about λ/60, that is, 50–150 nm, for a boron nitride thin-film sample. IR-NSOM uses an AFM to detect the absorption response of a material to the modulated infrared radiation from an FTIR spectrometer and is therefore also referred to as AFM/FTIR spectroscopy. Two approaches have been used to measure the response of polymer systems to infrared absorption. The first mode relies on the AFM contact mode, and the second mode of operation employs a scanning thermal microscopy probe (invented in 1986) to measure the polymer's temperature increase. In 2007, AFM was combined with infrared attenuated total reflection (IR-ATR) spectroscopy to study the dissolution process of urea in a cyclohexane/butanol solution with a high spatial resolution. Theory and instrumentation Raman-NSOM There are two modes of operating the NSOM technique, with and without an aperture. These two modes have also been combined with near-field Raman spectroscopy. The near-field aperture must be nanosized, which complicates the probe manufacturing process. Also, the aperture method usually gives a very weak signal due to weak excitation and a weak Raman scattering signal. Overall, these factors lower the signal-to-noise ratio in the aperture-based NSOM/Raman technique. Apertureless probes are based on a metal-coated tip and provide a stronger signal. Aperture-based detection Although the apertureless mode is more promising than the aperture mode, the latter is more widely used because of easier instrumental setup and operation. To obtain a high-resolution Raman micrograph/spectrum, the following conditions should be met: (1) the size of the aperture must be on the order of the wavelength of the excitation light. (2) The distance from the tip of the probe to the sample must be smaller than the excitation wavelength. (3) The instrument must remain stable over a long time. An important AFM feature is the ability to accurately control the distance between the sample and the probe tip, which is the reason why the AFM-Raman combination is preferred for realizing Raman-NSOM. 
Apertureless mode The main drawback of the aperture mode is that the small aperture reduces the signal intensity and is difficult to fabricate. Recently, researchers have focused on the apertureless mode, which utilizes SPR theory to produce stronger signals. There are two techniques supporting this mode: SERS and TERS. TERS technique The theory and instrumentation of Raman/AFM and IR/AFM combine the theory of SPR (from AFM and NSOM) with Raman scattering, and this combination is based on TERS. In TERS, the electric field of the excitation source induces an SPR in the tip of the probe. If the electric field vector of the incident light is perpendicular (s-polarized) to the metal tip axis, the free electrons are driven to the lateral sides of the tip. If it is parallel (p-polarized) to the tip axis, the free electrons on the surface of the metal are confined to the end of the tip apex. As a consequence, there is a very large electric-field enhancement, which is sensed by the molecules close to the tip, leading to a stronger signal. A typical approach in a TERS experiment is to focus the laser beam on a metal tip with the light polarized along the tip axis, followed by collection of the surface-enhanced Raman scattered light from the sample in the enhancement zone of the tip using optics. Depending on the sample and experiment, different illumination geometries have been applied in TERS experiments, as shown in Figure 4. With p-polarized (parallel to the surface normal) incident light, the plasmon excitation at the tip is most efficient. If the focusing objective lens is also used for collecting the scattered photons (backscattering geometry), the optimum angle is around 55° with respect to the surface normal. This is because the scattering lobe is at a maximum in this configuration, and it provides a much-enhanced signal. The setup of Figure 4(A) is usually used for large, thick samples. Setup (B) handles semi-transparent or transparent samples, such as single cells, tissue samples and biopolymers. The setup of Figure 4(C) is preferred for opaque samples, because all the light would be focused by the parabolic mirror. Comparison of TERS and SERS Both TERS and SERS rely on a localized surface plasmon for increasing the otherwise weak Raman signal. The only difference between them is that the sample in SERS has a rough surface, which hinders the application of a sharp AFM-like tip. TERS, on the other hand, uses a metal-coated tip having some roughness at the nanoscale. The “hot spot” theory is very popular in explaining the large enhancement of the signal. That is, the signal from “hot spots” on the surface of the sample dominates the total signal from the sample. This is also reinforced by the fact that the distance between nanoparticles and the sample is an important factor in obtaining a high Raman signal. Raman/AFM instrumentation The Raman/AFM technique has two approaches: aperture and apertureless, and the apertureless mode is realized with SERS and TERS. Figure 5 shows an example of an integrated TERS system. It shows the five main components of a complete integrated (apertureless) TERS system: a microscope, an objective lens, an integrated AFM head, a Raman spectrometer and a CCD. The laser is focused on the sample, on the piezo-stage, and on the AFM tip by moving the laser beam along the tip. The movement of the laser beam is achieved by the mirror in the top left corner. The XYZ piezo-stage in the bottom left corner holds the sample. 
In this design, the laser beam is focused on the sample through an objective lens, and the scattered light is collected by the same lens. This setup uses a low contact pressure to reduce damage to the AFM tip and sample. The laser power is typically below 1 mW. A notch filter removes Rayleigh scattering of the excitation laser light collected from the back of the cantilever. The laser beam is focused on the apex of the gold-coated AFM tip and the sample. The laser scanning is completed by moving the mirror across the approaching tip. A small increase in the background occurs when the laser spot focuses on the tip area. Movement of the XYZ piezo-stage accomplishes the sample scanning. The Raman signal (shown as the wide red beam in figure 5) is collected through the objective lens. The same lens is also used for excitation of the sample and collecting the Raman signal. NSOM/FTIR, AFM/FTIR and AFM-IR Because of the diffraction limit in the resolution of conventional lens-based microscopes, namely D = 0.61λ/(n·sinθ), the maximum resolution obtainable with an optical microscope is ~200 nm. A new type of lens using multiple scattering of light allowed the resolution to be improved to about 100 nm. Several new microscopy techniques with sub-nanometer resolution have been developed in the last several decades, such as electron microscopy (SEM and TEM) and scanning probe microscopy (NSOM, STM and AFM). SPM differs from other techniques in that the excitation and signal collection are very close (less than the diffraction-limit distance) to the sample. Instead of using a conventional lens to obtain magnified images of samples, an SPM scans across the sample with a very sharp probe. Whereas SEM and TEM usually require vacuum and extensive sample preparation, SPM measurements can be performed in atmospheric or liquid conditions. Although AFM and NSOM can achieve atomic-scale resolution, these techniques do not provide chemical information about the sample. The infrared part of the electromagnetic spectrum covers molecular vibrations which can characterize chemical bonding within the sample. By combining SPM and vibrational spectroscopy, AFM/IR-NSOM and AFM-IR have emerged as useful characterization tools that integrate the high spatial resolution abilities of AFM with IR spectroscopy. This new technique can be referred to as AFM-FTIR, AFM-IR and NSOM/FTIR. AFM and NSOM can be used to detect the response when modulated infrared radiation generated by an FTIR spectrometer is absorbed by a material. In the AFM-IR technique the absorption of the radiation by the sample will cause a rapid thermal expansion wave which will be transferred to the vibrational modes of the AFM cantilever. Specifically, the thermal expansion wave induces a vertical displacement of the AFM tip (Figure 6). A local IR absorption spectrum can then be obtained by measuring the cantilever oscillation amplitude as a function of the IR source wavelength. For example, when the laser wavelength is tuned toward a vibrational absorption of the sample, the displacement amplitude of the cantilever will increase until the laser wavelength reaches the sample's absorption maximum. The displacement of the cantilever will then be reduced as the laser wavelength is tuned past the absorption maximum. This approach can map chemical composition beyond the diffraction-limit resolution and can also provide three-dimensional topographic, thermal and mechanical information at the nanoscale.
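To make the resolution argument concrete, the sketch below evaluates the diffraction-limit expression quoted above, D = 0.61λ/(n·sinθ), for visible light and for a representative mid-infrared wavelength. The numerical aperture value is an assumption chosen only for illustration.

```python
def diffraction_limit(wavelength_m: float, numerical_aperture: float) -> float:
    """Rayleigh-type resolution limit D = 0.61 * lambda / (n * sin(theta))."""
    return 0.61 * wavelength_m / numerical_aperture

# Assumed numerical aperture n*sin(theta) = 0.9 (dry objective), illustrative only.
na = 0.9

for label, wavelength in [("visible, 500 nm", 500e-9), ("mid-IR, 10 um", 10e-6)]:
    d = diffraction_limit(wavelength, na)
    print(f"{label}: far-field limit ~ {d * 1e9:.0f} nm")

# Far-field visible light gives roughly a few hundred nanometres, and a
# 10 um IR wavelength gives several micrometres, which is why near-field
# probes (aperture or tip) are needed to reach the 50-150 nm IR resolution
# mentioned in the text.
```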
Overall, it overcomes the resolution limit of traditional IR spectroscopy and adds chemical and mechanical mapping to the AFM and NSOM. Infrared light source The ideal IR source should be monochromatic and tunable over a wide wavelength range. According to T ∝ d⁴/λ⁴, where T is the transmission coefficient, d is the aperture diameter and λ is the wavelength, the aperture-based NSOM/FTIR transmission is even more limited due to the long infrared wavelength; therefore, an intense IR source is needed to offset the low transmission through the optical fiber. The common bright IR light sources are the free-electron laser (FEL), color-center lasers, CO2 lasers and laser diodes. The FEL is an excellent IR source, with a 2–20 μm spectral range, short (picosecond) pulses and high average power (0.1–1 W). Alternatively, a tabletop picosecond optical parametric oscillator (OPO) can be used, which is less expensive but has limited tunability and lower power output. NSOM/FTIR experimental setup The essence of NSOM/FTIR is that it allows the detection of non-propagating evanescent waves in the near field (less than one wavelength from the sample), thus yielding high spatial resolution. Depending on the detection modes of these non-propagating evanescent waves, two NSOM/FTIR instrument configurations are available: apertureless NSOM/FTIR and aperture-based NSOM/FTIR. Aperture-based NSOM/FTIR In aperture-based NSOM/FTIR, the probe is a waveguide with a tapered tip and a very small, sub-wavelength aperture. When the aperture is brought into the near field, it collects the non-propagating light and guides it to the detector. In general, there are two modes when the aperture is scanned over the sample: illumination mode and collection mode (Figure 7). A high-quality infrared fiber tip is essential for realizing the NSOM/FTIR technique. There are several types of fibers, such as sapphire, chalcogenide glass, fluoride glass and hollow silica guides. Chalcogenide glasses are widely used because of their high transmittance in the broad IR range of 2–12 μm. Fluoride fibers also exhibit low transmission losses beyond 3.0 μm. Apertureless NSOM/FTIR The probe is a sharp metal tip ending in a single atom or a few atoms. The sample is illuminated from the far field and the radiation is focused at the contact area between probe and sample. When this tip approaches the sample, usually within 10 nm, the incident electromagnetic field is enhanced due to the resonant surface plasmon excitation as well as due to hot spots at the sharp tip. The dipole interaction between the tip and sample changes the non-propagating waves into propagating waves by scattering, and a detector collects the signal in the far field. An apertureless NSOM/FTIR usually has better resolution (~5–30 nm) compared with aperture-based NSOM/FTIR (~50–150 nm). One main challenge in apertureless NSOM/FTIR is a strong background signal, because scattering is obtained from both the near-field region and remote areas of the probe. Thus, the small near-field contribution to the signal has to be extracted from the background. One solution is to use a very flat sample with only optical spatial fluctuations. Another solution is to apply constant-height mode scanning or pseudo-constant-height mode scanning. Experimental scheme of aperture-based NSOM/FTIR Figure 8 shows the experimental setup used in NSOM/FTIR in the external reflection mode. The FEL beam is focused on the sample from the far field using a mirror.
The distance between the probe and a sample is kept at a few nanometers during scanning. Figure 9 is a cross-section of an NSOM/FTIR instrument. As shown, the sample is placed on a piezo-electric tube scanner, in which the x-y tube has four parts, namely x+, x-, y+ and y-. Lateral (x-y plane) oscillation of the fiber tip is induced by applying an AC voltage to a dither piezo-scanner. Also, the fiber tip is fixed to a bimorph piezo-scanner so that the amplitude of the oscillation of the tip can be monitored through the scanner. AFM-IR setup Spatial resolution The spatial resolution of an AFM-IR instrument is related to the contact area between the probe and sample. The contact radius a is given by a³ = 3PR/4E*, with 1/E* = (1 − ν₁²)/E₁ + (1 − ν₂²)/E₂, where P is the force applied to the probe, R is the radius of curvature of the probe tip, ν₁ and ν₂ represent the Poisson ratios of the sample and probe, respectively, and E₁ and E₂ are the elastic moduli of the sample and probe materials, respectively. Typically, an AFM-IR instrument has a lateral spatial resolution of 10–400 nm, for example, 100 nm, λ/150, and λ/400. Recently, Ruggeri et al. have demonstrated the acquisition of infrared absorption spectra and chemical maps at the single-molecule level in the case of protein molecules with ca. 10 nm diameter and a molecular weight of 400 kDa. Instrumentation In AFM-IR, an AFM probe is used to measure the absorption response of the sample to infrared radiation. The general approach for AFM/FTIR is shown in Figure 10. There are a few different experimental setups when the infrared radiation is projected onto the sample as shown below: top, side, and bottom illumination setups (Figure 11). In the first developed setup of AFM-IR, a sample is mounted onto an infrared-transparent zinc selenide prism for excitation purposes (Figure 12), then an optical parametric oscillator (OPO)-based tunable IR laser irradiates the molecules to be probed by the instrument. Similar to conventional ATR spectroscopy, the IR beam illuminates the sample by total internal reflection (Figure 12). The sample heats up as it absorbs radiation, which causes a rapid thermal expansion of the sample surface. This expansion excites resonant oscillations of the AFM cantilever in a characteristic ringdown pattern (the ringdown is the exponential decay of the cantilever oscillation). Through Fourier transform analysis, the signal can be processed to obtain the amplitudes and frequencies of the oscillations. The cantilever amplitudes provide information on the local absorption spectrum, whereas the oscillation frequencies depend on the mechanical stiffness of the sample (Figure 12). Pros and cons NSOM combined with FTIR/Raman techniques can provide local chemical information together with topographical details. This technique is non-destructive and can work in a variety of environments (liquids), for example, when detecting single biomolecules. The illuminated area of the sample is relatively large, around 1 μm. However, the sampling area is only ~10 nm. This means that a strong background from an unclean tip contributes to the overall signal, hindering the signal analysis. Raman spectroscopy in general can be time-consuming because of the low scattering efficiency (<1 in 10⁷ photons). It usually takes several minutes to accumulate a conventional Raman spectrum, and this time can be much longer in Raman-NSOM; for example, 9 hours for a 32×32-pixel image.
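As a rough numerical illustration of the spatial-resolution estimate given in the "Spatial resolution" subsection above, the sketch below evaluates the Hertzian contact radius a³ = 3PR/4E* for an assumed tip force, tip radius, and pair of material moduli. All values are placeholders chosen only to show the order of magnitude, not parameters of any specific instrument described in the text.

```python
# Hertzian contact radius for an AFM tip on a polymer surface (illustrative, assumed values).
nu_sample, E_sample = 0.35, 3e9       # Poisson ratio and modulus of a generic polymer (Pa), assumed
nu_probe, E_probe = 0.27, 150e9       # silicon-like probe values (Pa), assumed
P = 10e-9                             # contact force, 10 nN, assumed
R = 25e-9                             # tip radius of curvature, 25 nm, assumed

# Reduced modulus: 1/E* = (1 - nu1^2)/E1 + (1 - nu2^2)/E2
E_star = 1.0 / ((1 - nu_sample**2) / E_sample + (1 - nu_probe**2) / E_probe)

# Contact radius from a^3 = 3*P*R/(4*E*)
a = (3 * P * R / (4 * E_star)) ** (1 / 3)

print(f"reduced modulus E*: {E_star / 1e9:.2f} GPa")
print(f"contact radius a:   {a * 1e9:.1f} nm")
# A few-nanometre contact radius is consistent with the lower end of the
# 10-400 nm lateral resolution range quoted in the text; thermal diffusion
# and cantilever mechanics broaden the effective resolution in practice.
```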
As to near-field IR/AFM, high optical losses in aqueous environments (water is strongly absorbing in the IR range) reduce the signal-to-noise ratio. Applications Improving the resolution and enhancing the instrumentation with user-friendly hardware and software will make AFM/NSOM coupled with IR/Raman a useful characterization tool in many areas including the biomedical, materials and life sciences. For example, this technique was used to characterize a spin-cast thin film of poly(dimethylsiloxane) with polystyrene on it by scanning the tip over the sample. The shape and size of the polystyrene fragments were detected at high spatial resolution owing to the high absorption of polystyrene at specific resonance frequencies. Other examples include the characterization of inorganic boron nitride thin films with IR-NSOM. Images of single rhodamine 6G (Rh-6G) molecules were obtained with a spatial resolution of 50 nm. These techniques can also be used in numerous biologically related applications including the analysis of plant materials, bone, and single cells. One biological application was demonstrated by detecting details of conformational changes of cholesteryl oleate caused by FEL irradiation with a spatial resolution below the diffraction limit. Researchers also used Raman/NSOM to track the formation of the energy-storing polymer polyhydroxybutyrate in the bacterium Rhodobacter capsulatus. This characterization tool may also aid kinetic studies of physical and chemical processes at a wide variety of surfaces, providing chemical specificity via IR spectroscopy as well as high-resolution imaging via AFM. For example, the hydrogen termination of the Si(100) surface was studied by monitoring the absorbance of the Si–O bond to characterize the reaction between the silicon surface and atmospheric oxygen. Studies were also conducted in which the reactivity toward water vapor of a polymer system, a 1000-nm-thick poly(tert-butyl methacrylate) (PTBMA) layer combined with a photochemically modified 500-nm-thick poly(methacrylic acid) (PMAA) layer, revealed different absorption bands before and after water uptake by the polymer. Not only was the increased swelling of the PMAA (280 nm) observed, but the different water absorption capacities were also revealed by the different transmission of IR light at a much smaller length scale (<500 nm). These results are relevant to polymer, chemical and biological sensors, and to tissue engineering and artificial organ studies. Because of their high spatial resolution, NSOM/AFM-Raman/IR techniques can be used for measuring the width of multilayer films, including layers which are too small (in the x and y directions) to be probed with conventional IR or Raman spectroscopy. References Spectroscopy
Vibrational analysis with scanning probe microscopy
[ "Physics", "Chemistry", "Materials_science" ]
5,120
[ "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Scanning probe microscopy", "Microscopy", "Nanotechnology", "Spectroscopy" ]
31,761,967
https://en.wikipedia.org/wiki/Thermokinetics
Thermokinetics is the study of the kinetics of thermal decomposition. See also Thermogravimetry Differential thermal analysis Differential scanning calorimetry References Chemical kinetics
Thermokinetics
[ "Physics", "Chemistry" ]
37
[ "Thermodynamics stubs", "Chemical reaction engineering", "Thermodynamics", "Analytical chemistry stubs", "Chemical kinetics", "Physical chemistry stubs" ]
5,340,351
https://en.wikipedia.org/wiki/Crystal%20engineering
Crystal engineering studies the design and synthesis of solid-state structures with desired properties through deliberate control of intermolecular interactions. It is an interdisciplinary academic field, bridging solid-state and supramolecular chemistry. The main engineering strategies currently in use are hydrogen and halogen bonding and coordination bonding. These may be understood with key concepts such as the supramolecular synthon and the secondary building unit. History of term The term 'crystal engineering' was first used in 1955 by R. Pepinsky, but the starting point is often credited to Gerhard Schmidt in connection with photodimerization reactions in crystalline cinnamic acids. Since this initial use, the meaning of the term has broadened considerably to include many aspects of solid-state supramolecular chemistry. A useful modern definition is that provided by Gautam Desiraju, who in 1988 defined crystal engineering as "the understanding of intermolecular interactions in the context of crystal packing and the utilization of such understanding in the design of new solids with desired physical and chemical properties." Since many of the bulk properties of molecular materials are dictated by the manner in which the molecules are ordered in the solid state, it is clear that an ability to control this ordering would afford control over these properties. Non-covalent control of structure Crystal engineering relies on noncovalent bonding to achieve the organization of molecules and ions in the solid state. Much of the initial work on purely organic systems focused on the use of hydrogen bonds, although coordination and halogen bonds provide additional control in crystal design. Molecular self-assembly is at the heart of crystal engineering, and it typically involves an interaction between complementary hydrogen bonding faces or a metal and a ligand. "Supramolecular synthons" are building blocks that are common to many structures and hence can be used to order specific groups in the solid state. Design of multi-component crystals The intentional synthesis of cocrystals is most often achieved with strong heteromolecular interactions. The main relevance of multi-component crystals lies in the design of pharmaceutical cocrystals. Pharmaceutical cocrystals are generally composed of one API (Active Pharmaceutical Ingredient) with other molecular substances that are considered safe according to the guidelines provided by the WHO (World Health Organization). Various properties (such as solubility, bioavailability, permeability) of an API can be modulated through the formation of pharmaceutical cocrystals. In two dimensions The study of 2D architectures (i.e., molecularly thick architectures) is a branch of crystal engineering. The formation of such architectures (often referred to as molecular self-assembly, depending on the deposition process) relies on the use of solid interfaces to create adsorbed monolayers. Such monolayers may feature spatial crystallinity. However, the dynamic nature and wide range of monolayer morphologies, from amorphous to network structures, have made (2D) supramolecular engineering a more accurate term. Specifically, supramolecular engineering refers to "(The) design (of) molecular units in such way that a predictable structure is obtained" or "the design, synthesis and self-assembly of well defined molecular modules into tailor-made supramolecular architectures". Scanning probe microscopy techniques enable visualization of two-dimensional assemblies.
Polymorphism Polymorphism, the phenomenon wherein the same chemical compound exists in more than one crystal form, is relevant commercially because polymorphic forms of drugs may be entitled to independent patent protection. The importance of crystal engineering to the pharmaceutical industry is expected to grow exponentially. Polymorphism arises due to the competition between kinetic and thermodynamic factors during crystallization. While long-range strong intermolecular interactions dictate the formation of kinetic crystals, the close packing of molecules generally drives the thermodynamic outcome. Understanding this dichotomy between kinetics and thermodynamics constitutes the focus of research related to polymorphism. In organic molecules, three types of polymorphism are mainly observed. Packing polymorphism arises when molecules pack in different ways to give different structures. Conformational polymorphism, on the other hand, is mostly seen in flexible molecules, where molecules have multiple conformational possibilities within a small energy window. As a result, multiple crystal structures can be obtained with the same molecule but in different conformations. The rarest form of polymorphism arises from differences in the primary synthon, and this type of polymorphism is called synthon polymorphism. Crystal structure prediction Crystal structure prediction (CSP) is a computational approach to generate energetically feasible crystal structures (with corresponding space group and positional parameters) from a given molecular structure. The CSP exercise is considered most challenging because "experimental" crystal structures are very often kinetic structures and therefore are very difficult to predict. In this regard, many protocols have been proposed and have been tested through several blind tests organized by the CCDC since 2002. A major advance in CSP came in 2007, when a hybrid method based on tailor-made force fields and density functional theory (DFT) was introduced. In the first step, this method employs tailor-made force fields to rank the structures, followed by a dispersion-corrected DFT method to calculate the lattice energies precisely (a schematic sketch of this two-stage ranking appears at the end of this section). Apart from predicting crystal structures, CSP also gives computed energy landscapes of crystal structures, where many structures lie within a narrow energy window. These computed landscapes lend insight into the study of polymorphism and the design of new structures, and also help in designing crystallization experiments. Property design The design of crystal structures with desired properties is the ultimate goal of crystal engineering. Crystal engineering principles have been applied to the design of non-linear optical materials, especially those with second harmonic generation (SHG) properties. Using supramolecular synthons, supramolecular gels have been designed. Mechanical properties of crystalline materials Designing a crystalline material with targeted properties requires an understanding of the material's molecular and crystal features in relation to its mechanical properties. Four mechanical properties are of interest for crystalline materials: plasticity, elasticity, brittleness, and shear strength. Intermolecular interactions Manipulation of the intermolecular interaction network is a means for controlling bulk properties. During crystallization, intermolecular interactions form according to an electrostatic hierarchy. Strong hydrogen bonds are the primary director for crystal organization.
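The two-stage CSP protocol described above (cheap force-field screening followed by dispersion-corrected DFT re-ranking) can be summarized with a minimal sketch. All structure labels, energies and the helper function below are hypothetical placeholders used only to show the control flow under stated assumptions; they do not represent any published force field or DFT interface.

```python
# Hypothetical two-stage CSP ranking sketch: names and energies are illustrative
# placeholders, not a real force field or a real DFT code.
from dataclasses import dataclass

@dataclass
class Candidate:
    space_group: str
    ff_energy: float  # force-field lattice energy, kJ/mol (assumed values)

def dft_lattice_energy(candidate: Candidate) -> float:
    """Placeholder for an expensive dispersion-corrected DFT lattice energy."""
    # In a real workflow this would dispatch a periodic DFT-D calculation.
    correction = {"P21/c": -1.2, "Pbca": 0.4, "P-1": -0.3}.get(candidate.space_group, 0.0)
    return candidate.ff_energy + correction

candidates = [
    Candidate("P21/c", -105.0),
    Candidate("P-1", -104.2),
    Candidate("Pbca", -103.9),
    Candidate("C2/c", -101.5),
]

# Stage 1: cheap force-field screening keeps only the lowest-energy structures.
shortlist = sorted(candidates, key=lambda c: c.ff_energy)[:3]

# Stage 2: re-rank the shortlist with the (expensive) dispersion-corrected energies.
final_ranking = sorted(shortlist, key=dft_lattice_energy)

for rank, c in enumerate(final_ranking, 1):
    print(rank, c.space_group, f"{dft_lattice_energy(c):.1f} kJ/mol")
```

The point of the two stages is purely economic: the force field prunes thousands of generated packings cheaply, so that the accurate but costly lattice-energy calculation is only run on a short list.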
Crystal architecture Typically, the strongest intermolecular interactions form the molecular layers or columns and the weakest intermolecular interactions form the slip plane. For example, long chains or layers of acetaminophen molecules form due to the hydrogen bond donors and acceptors that flank the benzene ring. The weaker interactions between the chains or layers of acetaminophen require less energy to break than the hydrogen bonds. As a result, a slip plane is formed. A supramolecular synthon is a pair of molecules that form relatively strong intermolecular interactions in the early phases of crystallization; these molecule pairs are the basic structural motif found in a crystal lattice. Defects or imperfections Lattice defects, such as point defects, tilt boundaries, or dislocations, create imperfections in crystal architecture and topology. Any disruption to the crystal structure alters the mechanism or degree of molecular movement, thereby changing the mechanical properties of the material. Examples of point imperfections include vacancies, substitutional impurities, interstitial impurities, Frenkel defects, and Schottky defects. Examples of line imperfections include edge and screw dislocations. Assessing Crystal Structure Crystallographic methods, such as X-ray diffraction, are used to elucidate the crystal structure of a material by quantifying distances between atoms. The X-ray diffraction technique relies on a particular crystal structure creating a unique pattern after X-rays are diffracted through the crystal lattice. Microscopic methods, such as optical, electron, field ion, and scanning tunneling microscopy, can be used to visualize the microstructure, imperfections, or dislocations of a material. Ultimately, these methods elaborate on the growth and assembly of crystallites during crystallization, which can be used to rationalize the movement of crystallites in response to an applied load. Calorimetric methods, such as differential scanning calorimetry, induce phase transitions in order to quantify the associated changes in enthalpy, entropy, and Gibbs free energy. The melting and fusion phase transitions are dependent on the lattice energy of the crystalline material, which can be used to determine the percent crystallinity of the sample. Raman spectroscopy is a method that uses light scattering to interact with bonds in a sample. This technique provides information about chemical bonds, intermolecular interactions, and crystallinity. Assessing mechanical properties Nanoindentation is a standard and widely accepted method for measuring mechanical properties within the crystal engineering field. The method quantifies hardness, elasticity, packing anisotropy, and polymorphism of a crystalline material (a brief numerical sketch of the standard analysis follows at the end of this section). Hirshfeld surfaces are visual models of electron density at a specific isosurface that aid in visualizing and quantifying intermolecular interactions. An advantage of using Hirshfeld surfaces in crystal engineering is that these surface maps are embedded with information about a molecule and its neighbors. The insight into molecular neighbors can be applied to the assessment or prediction of molecular properties. An emerging method for topography and slip-plane analysis uses energy frameworks, which are models of crystal packing that depict interaction energies as pillars or beams.
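Nanoindentation data of the kind mentioned above are normally reduced to hardness and modulus values with the standard Oliver–Pharr relations, H = P_max/A_c and E_r = (√π/2)·S/√A_c. The sketch below shows that arithmetic with assumed, purely illustrative values of peak load, contact stiffness, and projected contact area; none of the numbers come from any study cited in this article.

```python
import math

# Illustrative (assumed) nanoindentation numbers for a molecular crystal.
P_max = 2e-3   # peak load, N (2 mN), assumed
A_c = 4.0e-12  # projected contact area at peak load, m^2, assumed
S = 5.0e4      # contact stiffness dP/dh on unloading, N/m, assumed

# Standard Oliver-Pharr relations:
hardness = P_max / A_c                                            # H = P_max / A_c
reduced_modulus = (math.sqrt(math.pi) / 2) * S / math.sqrt(A_c)   # E_r from stiffness and area

print(f"hardness:        {hardness / 1e6:.0f} MPa")
print(f"reduced modulus: {reduced_modulus / 1e9:.1f} GPa")
# Values of a few hundred MPa hardness and tens of GPa reduced modulus are
# typical orders of magnitude for organic molecular crystals.
```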
See also Coordination polymers crystal nets (periodic graphs) Crystallography Laser-heated pedestal growth CrystEngComm Crystal Growth & Design CrystEngCommunity Hydrogen bond Molecular design software Supramolecular chemistry Self-assembly Molecular self-assembly References External links Crystal Growth and Design CrystEngComm Acta Crystallographica Section B Cambridge Structural Database Materials science Chemical product engineering Solid-state chemistry Supramolecular chemistry Crystallography
Crystal engineering
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,052
[ "Applied and interdisciplinary physics", "Products of chemical industry", "Chemical engineering", "Supramolecular chemistry", "Materials science", "Crystallography", "Condensed matter physics", "nan", "Chemical product engineering", "Nanotechnology", "Solid-state chemistry" ]
5,340,728
https://en.wikipedia.org/wiki/Stereoscopic%20spectroscopy
Stereoscopic spectroscopy is a type of imaging spectroscopy that can extract a few spectral parameters over a complete image plane simultaneously. A stereoscopic spectrograph is similar to a normal spectrograph except that (A) it has no slit, and (B) multiple spectral orders (often including the non-dispersed zero order) are collected simultaneously. The individual images are blurred by the spectral information present in the original data. The images are recombined using stereoscopic algorithms similar to those used to find ground feature altitudes from parallax in aerial photography. Stereoscopic spectroscopy is a special case of the more general field of tomographic spectroscopy. Both types of imaging use an analogy between the data space of imaging spectrographs and the conventional 3-space of the physical world. Each spectral order in the instrument produces an image plane analogous to the view from a camera with a particular look angle through the data space, and recombining the views allows recovery of (some aspects of) the spectrum at every location in the image. References Spectroscopy
Stereoscopic spectroscopy
[ "Physics", "Chemistry" ]
210
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
5,343,488
https://en.wikipedia.org/wiki/Bicycle%20and%20motorcycle%20dynamics
Bicycle and motorcycle dynamics is the science of the motion of bicycles and motorcycles and their components, due to the forces acting on them. Dynamics falls under a branch of physics known as classical mechanics. Bike motions of interest include balancing, steering, braking, accelerating, suspension activation, and vibration. The study of these motions began in the late 19th century and continues today. Bicycles and motorcycles are both single-track vehicles and so their motions have many fundamental attributes in common and are fundamentally different from and more difficult to study than other wheeled vehicles such as dicycles, tricycles, and quadracycles. As with unicycles, bikes lack lateral stability when stationary, and under most circumstances can only remain upright when moving forward. Experimentation and mathematical analysis have shown that a bike stays upright when it is steered to keep its center of mass over its wheels. This steering is usually supplied by a rider, or in certain circumstances, by the bike itself. Several factors, including geometry, mass distribution, and gyroscopic effect all contribute in varying degrees to this self-stability, but long-standing hypotheses and claims that any single effect, such as gyroscopic or trail (the distance between steering axis and ground contact of the front tire), is solely responsible for the stabilizing force have been discredited. While remaining upright may be the primary goal of beginning riders, a bike must lean in order to maintain balance in a turn: the higher the speed or smaller the turn radius, the more lean is required. This balances the roll torque about the wheel contact patches generated by centrifugal force due to the turn with that of the gravitational force. This lean is usually produced by a momentary steering in the opposite direction, called countersteering. Unlike other wheeled vehicles, the primary control input on bikes is steering torque, not position. Although longitudinally stable when stationary, bikes often have a high enough center of mass and a short enough wheelbase to lift a wheel off the ground under sufficient acceleration or deceleration. When braking, depending on the location of the combined center of mass of the bike and rider with respect to the point where the front wheel contacts the ground, and if the front brake is applied hard enough, bikes can either: skid the front wheel which may or not result in a crash; or flip the bike and rider over the front wheel. A similar situation is possible while accelerating, but with respect to the rear wheel. History The history of the study of bike dynamics is nearly as old as the bicycle itself. It includes contributions from famous scientists such as Rankine, Appell, and Whipple. In the early 19th century Karl von Drais, credited with inventing the two-wheeled vehicle variously called the laufmaschine, velocipede, draisine, and dandy horse, showed that a rider could balance his device by steering the front wheel. In 1869, Rankine published an article in The Engineer repeating von Drais' assertion that balance is maintained by steering in the direction of a lean. In 1897, the French Academy of Sciences made understanding bicycle dynamics the goal of its Prix Fourneyron competition. Thus, by the end of the 19th century, Carlo Bourlet, Emmanuel Carvallo, and Francis Whipple had showed with rigid-body dynamics that some safety bicycles could actually balance themselves if moving at the right speed. 
Bourlet won the Prix Fourneyron, and Whipple won the Cambridge University Smith Prize. It is not clear to whom should go the credit for tilting the steering axis from the vertical which helps make this possible. In 1970, David E. H. Jones published an article in Physics Today showing that gyroscopic effects are not necessary for a person to balance a bicycle. Since 1971, when he identified and named the wobble, weave and capsize modes, Robin Sharp has written regularly about the behavior of motorcycles and bicycles. While at Imperial College, London, he worked with David Limebeer and Simos Evangelou. In the early 1970s, Cornell Aeronautical Laboratory (CAL, later Calspan Corporation in Buffalo, NY USA) was sponsored by the Schwinn Bicycle Company and others to study and simulate bicycle and motorcycle dynamics. Portions of this work have now been released to the public and scans of over 30 detailed reports have been posted at this TU Delft Bicycle Dynamics site. Since the 1990s, Cossalter, et al., have been researching motorcycle dynamics at the University of Padova. Their research, both experimental and numerical, has covered weave, wobble, chatter, simulators, vehicle modelling, tire modelling, handling, and minimum lap time maneuvering. In 2007, Meijaard, et al., published the canonical linearized equations of motion, in the Proceedings of the Royal Society A, along with verification by two different methods. These equations assumed the tires to roll without slip, that is to say, to go where they point, and the rider to be rigidly attached to the rear frame of the bicycle. In 2011, Kooijman, et al., published an article in Science showing that neither gyroscopic effects nor so-called caster effects due to trail are necessary for a bike to balance itself. They designed a two-mass-skate bicycle that the equations of motion predict is self-stable even with negative trail, the front wheel contacts the ground in front of the steering axis, and with counter-rotating wheels to cancel any gyroscopic effects. Then they constructed a physical model to validate that prediction. This may require some of the details provided below about steering geometry or stability to be re-evaluated. Bicycle dynamics was named 26 of Discovers 100 top stories of 2011. In 2013, Eddy Merckx Cycles was awarded over €150,000 with Ghent University to examine bicycle stability. Forces If the bike and rider are considered to be a single system, the forces that act on that system and its components can be roughly divided into two groups: internal and external. The external forces are due to gravity, inertia, contact with the ground, and contact with the atmosphere. The internal forces are caused by the rider and by interaction between components. External forces As with all masses, gravity pulls the rider and all the bike components toward the earth. At each tire contact patch there are ground reaction forces with both horizontal and vertical components. The vertical components mostly counteract the force of gravity, but also vary with braking and accelerating. For details, see the section on longitudinal stability below. The horizontal components, due to friction between the wheels and the ground, including rolling resistance, are in response to propulsive forces, braking forces, and turning forces. Aerodynamic forces due to the atmosphere are mostly in the form of drag, but can also be from crosswinds. At normal bicycling speeds on level ground, aerodynamic drag is the largest force resisting forward motion. 
At faster speed, aerodynamic drag becomes overwhelmingly the largest force resisting forward motion. Turning forces are generated during maneuvers for balancing in addition to just changing direction of travel. These may be interpreted as centrifugal forces in the accelerating reference frame of the bike and rider; or simply as inertia in a stationary, inertial reference frame and not forces at all. Gyroscopic forces acting on rotating parts such as wheels, engine, transmission, etc., are also due to the inertia of those rotating parts. They are discussed further in the section on gyroscopic effects below. Internal forces Internal forces, those between components of the bike and rider system, are mostly caused by the rider or by friction. In addition to pedaling, the rider can apply torques between the steering mechanism (front fork, handlebars, front wheel, etc.) and rear frame, and between the rider and the rear frame. Friction exists between any parts that move against each other: in the drive train, between the steering mechanism and the rear frame, etc. In addition to brakes, which create friction between rotating wheels and non-rotating frame parts, many bikes have front and rear suspensions. Some motorcycles and bicycles have a steering damper to dissipate undesirable kinetic energy, and some bicycles have a spring connecting the front fork to the frame to provide a progressive torque that tends to steer the bicycle straight ahead. On bikes with rear suspensions, feedback between the drive train and the suspension is an issue designers attempt to handle with various linkage configurations and dampers. Motions Motions of a bike can be roughly grouped into those out of the central plane of symmetry: lateral; and those in the central plane of symmetry: longitudinal or vertical. Lateral motions include balancing, leaning, steering, and turning. Motions in the central plane of symmetry include rolling forward, of course, but also stoppies, wheelies, brake diving, and most suspension activation. Motions in these two groups are linearly decoupled, that is they do not interact with each other to the first order. An uncontrolled bike is laterally unstable when stationary and can be laterally self-stable when moving under the right conditions or when controlled by a rider. Conversely, a bike is longitudinally stable when stationary and can be longitudinally unstable when undergoing sufficient acceleration or deceleration. Lateral dynamics Of the two, lateral dynamics has proven to be the more complicated, requiring three-dimensional, multibody dynamic analysis with at least two generalized coordinates to analyze. At a minimum, two coupled, second-order differential equations are required to capture the principal motions. Exact solutions are not possible, and numerical methods must be used instead. Competing theories of how bikes balance can still be found in print and online. On the other hand, as shown in later sections, much longitudinal dynamic analysis can be accomplished simply with planar kinetics and just one coordinate. Balance When discussing bike balance, it is necessary to distinguish carefully between "stability", "self-stability", and "controllability". Recent research suggests that "rider-controlled stability of bicycles is indeed related to their self-stability". 
A bike remains upright when it is steered so that the ground reaction forces exactly balance all the other internal and external forces it experiences, such as gravitational if leaning, inertial or centrifugal if in a turn, gyroscopic if being steered, and aerodynamic if in a crosswind. Steering may be supplied by a rider or, under certain circumstances, by the bike itself. This self-stability is generated by a combination of several effects that depend on the geometry, mass distribution, and forward speed of the bike. Tires, suspension, steering damping, and frame flex can also influence it, especially in motorcycles. Even when staying relatively motionless, a rider can balance a bike by the same principle. While performing a track stand, the rider can keep the line between the two contact patches under the combined center of mass by steering the front wheel to one side or the other and then moving forward and backward slightly to move the front contact patch from side to side as necessary. Forward motion can be generated simply by pedaling. Backwards motion can be generated the same way on a fixed-gear bicycle. Otherwise, the rider can take advantage of an opportune slope of the pavement or lurch the upper body backwards while the brakes are momentarily engaged. If the steering of a bike is locked, it becomes virtually impossible to balance while riding. On the other hand, if the gyroscopic effect of rotating bike wheels is cancelled by adding counter-rotating wheels, it is still easy to balance while riding. One other way that a bike can be balanced, with or without locked steering, is by applying appropriate torques between the bike and rider similar to the way a gymnast can swing up from hanging straight down on uneven parallel bars, a person can start swinging on a swing from rest by pumping their legs, or a double inverted pendulum can be controlled with an actuator only at the elbow. Forward speed The rider applies torque to the handlebars in order to turn the front wheel and so to control lean and maintain balance. At high speeds, small steering angles quickly move the ground contact points laterally; at low speeds, larger steering angles are required to achieve the same results in the same amount of time. Because of this, it is usually easier to maintain balance at high speeds. As self-stability typically occurs at speeds above a certain threshold, going faster increases the chances that a bike is contributing to its own stability. Center of mass The farther forward (closer to front wheel) the center of mass of the combined bike and rider, the less the front wheel has to move laterally in order to maintain balance. Conversely, the farther back (closer to the rear wheel) the center of mass is located, the more front wheel lateral movement or bike forward motion is required to regain balance. This can be noticeable on long-wheelbase recumbents, choppers, and wheelie bikes. It can also be a challenge for touring bikes that carry a heavy load of gear over or even behind the rear wheel. Mass over the rear wheel can be more easily controlled if it is lower than mass over the front wheel. A bike is also an example of an inverted pendulum. Just as a broomstick is more easily balanced in the hand than a pencil, a tall bike (with a high center of mass) can be easier to balance when ridden than a low one because the tall bike's lean rate (rate at which its angle of lean increases as it begins to fall over) will be slower. 
However, a rider can have the opposite impression of a bike when it is stationary. A top-heavy bike can require more effort to keep upright, when stopped in traffic for example, than a bike which is just as tall but with a lower center of mass. This is an example of a vertical second-class lever. A small force at the end of the lever, the seat or handlebars at the top of the bike, more easily moves a large mass if the mass is closer to the fulcrum, where the tires touch the ground. This is why touring cyclists are advised to carry loads low on a bike, and panniers hang down on either side of front and rear racks. Trail A factor that influences how easy or difficult a bike will be to ride is trail, the distance by which the front wheel ground contact point trails behind the steering axis ground contact point. The steering axis is the axis about which the entire steering mechanism (fork, handlebars, front wheel, etc.) pivots. In traditional bike designs, with a steering axis tilted back from the vertical, positive trail tends to steer the front wheel into the direction of a lean, independent of forward speed. This can be simulated by pushing a stationary bike to one side. The front wheel will usually also steer to that side. In a lean, gravity provides this force. The dynamics of a moving bike are more complicated, however, and other factors can contribute to or detract from this effect. Trail is a function of head angle, fork offset or rake, and wheel size. Their relationship can be described by this formula: Trail = (Rw·cos(Ah) − Of)/sin(Ah), where Rw is the wheel radius, Ah is the head angle measured clockwise from the horizontal and Of is the fork offset or rake (a numerical sketch of this relationship appears at the end of this section). Trail can be increased by increasing the wheel size, decreasing the head angle, or decreasing the fork rake. The more trail a traditional bike has, the more stable it feels, although too much trail can make a bike feel difficult to steer. Bikes with negative trail (where the contact patch is in front of where the steering axis intersects the ground), while still rideable, are reported to feel very unstable. Normally, road racing bicycles have more trail than touring bikes but less than mountain bikes. Mountain bikes are designed with less-vertical head angles than road bikes so as to have greater trail and hence improved stability for descents. Touring bikes are built with small trail to allow the rider to control a bike weighed down with baggage. As a consequence, an unloaded touring bike can feel unstable. In bicycles, fork rake, often a curve in the fork blades forward of the steering axis, is used to diminish trail. Bikes with negative trail exist, such as the Python Lowracer, and are rideable, and an experimental bike with negative trail has been shown to be self-stable. In motorcycles, rake refers to the head angle instead, and offset created by the triple tree is used to diminish trail. A small survey by Whitt and Wilson found: touring bicycles with head angles between 72° and 73° and trail between 43 mm and 60 mm; racing bicycles with head angles between 73° and 74° and trail between 28 mm and 45 mm; and track bicycles with head angles of 75° and trail between 23.5 mm and 37 mm. However, these ranges are not hard and fast. For example, LeMond Racing Cycles offers both with forks that have 45 mm of offset or rake and the same size wheels: a 2006 Tete de Course, designed for road racing, with a head angle that varies from ° to 74°, depending on frame size, and thus trail that varies from 51.5 mm to 69 mm.
a 2007 Filmore, designed for the track, with a head angle that varies from ° to 74°, depending on frame size, and thus trail that varies from 51.5 mm to 61 mm. The amount of trail a particular bike has may vary with time for several reasons. On bikes with front suspension, especially telescopic forks, compressing the front suspension, due to heavy braking for example, can steepen the steering axis angle and reduce trail. Trail also varies with lean angle, and steering angle, usually decreasing from a maximum when the bike is straight upright and steered straight ahead. Trail can decrease to zero with sufficiently large lean and steer angles, which can alter how stable a bike feels. Finally, even the profile of the front tire can influence how trail varies as the bike is leaned and steered. A measurement similar to trail, called either mechanical trail, normal trail, or true trail, is the perpendicular distance from the steering axis to the centroid of the front wheel contact patch. Wheelbase A factor that influences the directional stability of a bike is wheelbase, the horizontal distance between the ground contact points of the front and rear wheels. For a given displacement of the front wheel, due to some disturbance, the angle of the resulting path from the original is inversely proportional to wheelbase. Also, the radius of curvature for a given steer angle and lean angle is proportional to the wheelbase. Finally, the wheelbase increases when the bike is leaned and steered. In the extreme, when the lean angle is 90°, and the bike is steered in the direction of that lean, the wheelbase is increased by the radius of the front and rear wheels. Steering mechanism mass distribution Another factor that can also contribute to the self-stability of traditional bike designs is the distribution of mass in the steering mechanism, which includes the front wheel, the fork, and the handlebar. If the center of mass for the steering mechanism is in front of the steering axis, then the pull of gravity will also cause the front wheel to steer in the direction of a lean. This can be seen by leaning a stationary bike to one side. The front wheel will usually also steer to that side independent of any interaction with the ground. Additional parameters, such as the fore-to-aft position of the center of mass and the elevation of the center of mass also contribute to the dynamic behavior of a bike. Gyroscopic effects The role of the gyroscopic effect in most bike designs is to help steer the front wheel into the direction of a lean. This phenomenon is called precession, and the rate at which an object precesses is inversely proportional to its rate of spin. The slower a front wheel spins, the faster it will precess when the bike leans, and vice versa. The rear wheel is prevented from precessing by friction of the tires on the ground, and so continues to lean as though it were not spinning at all. Hence gyroscopic forces do not provide any resistance to tipping. At low forward speeds, the precession of the front wheel is too quick, contributing to an uncontrolled bike's tendency to oversteer, start to lean the other way and eventually oscillate and fall over. At high forward speeds, the precession is usually too slow, contributing to an uncontrolled bike's tendency to understeer and eventually fall over without ever having reached the upright position. This instability is very slow, on the order of seconds, and is easy for most riders to counteract. 
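The trail relationship given above can be checked numerically. The sketch below computes trail from the wheel radius, head angle, and fork offset; the example values (roughly a 700c wheel with tire and a typical road-bike offset) are assumptions chosen only for illustration, not the LeMond geometries quoted in the text.

```python
import math

def trail(wheel_radius_mm: float, head_angle_deg: float, fork_offset_mm: float) -> float:
    """Trail = (Rw*cos(Ah) - Of) / sin(Ah), with the head angle measured from horizontal."""
    ah = math.radians(head_angle_deg)
    return (wheel_radius_mm * math.cos(ah) - fork_offset_mm) / math.sin(ah)

# Assumed example: ~335 mm wheel radius (700c wheel plus tire), 45 mm fork offset.
for head_angle in (72.0, 73.0, 74.0):
    t = trail(335.0, head_angle, 45.0)
    print(f"head angle {head_angle:.0f} deg -> trail {t:.1f} mm")

# Steeper (more vertical) head angles and larger offsets reduce trail, and the
# resulting values fall in the same few-tens-of-millimetres range as the
# Whitt and Wilson survey quoted above.
```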
Thus a fast bike may feel stable even though it is actually not self-stable and would fall over if it were uncontrolled. Another contribution of gyroscopic effects is a roll moment generated by the front wheel during countersteering. For example, steering left causes a moment to the right. The moment is small compared to the moment generated by the out-tracking front wheel, but begins as soon as the rider applies torque to the handlebars and so can be helpful in motorcycle racing. For more detail, see the section on countersteering, below, and the countersteering article. Self-stability Between the two unstable regimes mentioned in the previous section, and influenced by all the factors described above that contribute to balance (trail, mass distribution, gyroscopic effects, etc.), there may be a range of forward speeds for a given bike design at which these effects steer an uncontrolled bike upright. It has been proven that neither gyroscopic effects nor positive trail are sufficient by themselves or necessary for self-stability, although they certainly can enhance hands-free control. However, even without self-stability a bike may be ridden by steering it to keep it over its wheels. Note that the effects mentioned above that would combine to produce self-stability may be overwhelmed by additional factors such as headset friction and stiff control cables. This video shows a riderless bicycle exhibiting self-stability. Longitudinal acceleration Longitudinal acceleration has been shown to have a large and complex effect on lateral dynamics. In one study, positive acceleration eliminates self-stability, and negative acceleration (deceleration) changes the speeds at which the bike is self-stable. Turning In order for a bike to turn, that is, change its direction of forward travel, the front wheel must aim approximately in the desired direction, as with any front-wheel steered vehicle. Friction between the wheels and the ground then generates the centripetal acceleration necessary to alter the course from straight ahead as a combination of cornering force and camber thrust. The radius of the turn of an upright (not leaning) bike can be roughly approximated, for small steering angles, by: r ≈ w/(δ·cos(φ)), where r is the approximate radius, w is the wheelbase, δ is the steer angle, and φ is the caster angle of the steering axis. Leaning However, unlike other wheeled vehicles, bikes must also lean during a turn to balance the relevant forces: gravitational, inertial, frictional, and ground support. The angle of lean, θ, can easily be calculated using the laws of circular motion: θ = arctan(v²/(g·r)), where v is the forward speed, r is the radius of the turn and g is the acceleration of gravity. This is in the idealized case. A slight increase in the lean angle may be required on motorcycles to compensate for the width of modern tires at the same forward speed and turn radius. It can also be seen, however, that this simple 2-dimensional model, essentially an inverted pendulum on a turntable, predicts that the steady-state turn is unstable. If the bike is displaced slightly downwards from its equilibrium lean angle, the torque of gravity increases, that of centrifugal force decreases and the displacement gets amplified. A more sophisticated model that allows a wheel to steer, adjust the path, and counter the torque of gravity is necessary to capture the self-stability observed in real bikes. For example, a bike in a 10 m (33 ft) radius steady-state turn at 10 m/s (36 km/h, 22 mph) must be at an angle of 45.6°.
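The idealized lean-angle relation above can be checked against the worked example in the text (a 10 m radius turn at 10 m/s giving roughly 45.6°). The sketch below also evaluates the small-angle upright-turn radius r ≈ w/(δ·cos φ); the wheelbase, steer angle and caster angle used there are assumed illustrative values, not taken from any particular bicycle.

```python
import math

g = 9.81  # m/s^2

def lean_angle_deg(speed_ms: float, radius_m: float) -> float:
    """Ideal lean angle: theta = arctan(v^2 / (g * r))."""
    return math.degrees(math.atan(speed_ms**2 / (g * radius_m)))

def upright_turn_radius(wheelbase_m: float, steer_deg: float, caster_deg: float) -> float:
    """Small-angle approximation r = w / (delta * cos(phi)), with delta in radians."""
    return wheelbase_m / (math.radians(steer_deg) * math.cos(math.radians(caster_deg)))

# Reproduces the example from the text: 10 m/s on a 10 m radius turn.
theta = lean_angle_deg(10.0, 10.0)
print(f"lean angle: {theta:.1f} deg")  # ~45.5-45.6 deg; the small spread comes from rounding g

# Assumed bicycle geometry: 1.0 m wheelbase, 18 deg caster angle, 5 deg steer angle.
print(f"upright turn radius: {upright_turn_radius(1.0, 5.0, 18.0):.1f} m")
```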
A rider can lean with respect to the bike in order to keep either the torso or the bike more or less upright if desired. The angle that matters is the one between the horizontal plane and the plane defined by the tire contacts and the location of the center of mass of bike and rider. This lean of the bike decreases the actual radius of the turn proportionally to the cosine of the lean angle. The resulting radius can be roughly approximated (within 2% of the exact value) by: r ≈ (w·cos(θ))/(δ·cos(φ)), where r is the approximate radius, w is the wheelbase, θ is the lean angle, δ is the steering angle, and φ is the caster angle of the steering axis. As a bike leans, the tires' contact patches move farther to the side, causing wear. The portions at either edge of a motorcycle tire that remain unworn by leaning into turns are sometimes referred to as . The finite width of the tires alters the actual lean angle of the rear frame from the ideal lean angle described above. The actual lean angle between the frame and the vertical must increase with tire width and decrease with center of mass height. Bikes with fat tires and a low center of mass must lean more than bikes with skinnier tires or higher centers of mass to negotiate the same turn at the same speed. The increase in lean angle due to a tire thickness of 2t can be calculated as arcsin(t·sin(φ)/(h − t)), where φ is the ideal lean angle, and h is the height of the center of mass. For example, a motorcycle with a 12 inch wide rear tire will have t = 6 inches. If the combined bike and rider center of mass is at a height of 26 inches, then a 25° lean must be increased by 7.28°: a nearly 30% increase. If the tires are only 6 inches wide, then the lean angle increase is only 3.16°, just under half. The couple created by gravity and the ground reaction forces is necessary for a bicycle to turn at all. On a custom-built bicycle with spring-loaded outriggers that exactly cancel this couple, so that the bicycle and rider may assume any lean angle when traveling in a straight line, riders find it impossible to make a turn. As soon as the wheels deviate from a straight path, the bicycle and rider begin to lean in the opposite direction, and the only way to right them is to steer back onto the straight path. Countersteering To initiate a turn and the necessary lean in the direction of that turn, a bike must momentarily steer in the opposite direction. This is often referred to as countersteering. With the front wheel now at a finite angle to the direction of motion, a lateral force is developed at the contact patch of the tire. This force creates a torque around the longitudinal (roll) axis of the bike, and this torque causes the bike to lean away from the initially steered direction and toward the direction of the desired turn. Where there is no external influence, such as an opportune side wind to create the force necessary to lean the bike, countersteering is necessary to initiate a rapid turn. While the initial steer torque and steer angle are both opposite the desired turn direction, this may not be the case to maintain a steady-state turn. The sustained steer angle is usually in the same direction as the turn, but may remain opposite to the direction of the turn, especially at high speeds. The sustained steer torque required to maintain that steer angle is usually opposite the turn direction.
The actual magnitude and orientation of both the sustained steer angle and sustained steer torque of a particular bike in a particular turn depend on forward speed, bike geometry, tire properties, and combined bike and rider mass distribution. Once in a turn, the radius can only be changed with an appropriate change in lean angle, and this can be accomplished by additional countersteering out of the turn to increase lean and decrease radius, then into the turn to decrease lean and increase radius. To exit the turn, the bike must again countersteer, momentarily steering more into the turn in order to decrease the radius, thus increasing inertial forces, and thereby decreasing the angle of lean. Steady-state turning Once a turn is established, the torque that must be applied to the steering mechanism in order to maintain a constant radius at a constant forward speed depends on the forward speed and the geometry and mass distribution of the bike. At speeds below the capsize speed, described below in the section on Eigenvalues and also called the inversion speed, the self-stability of the bike will cause it to tend to steer into the turn, righting itself and exiting the turn, unless a torque is applied in the opposite direction of the turn. At speeds above the capsize speed, the capsize instability will cause it to tend to steer out of the turn, increasing the lean, unless a torque is applied in the direction of the turn. At the capsize speed no input steering torque is necessary to maintain the steady-state turn. Steering angle Several effects influence the steering angle, the angle at which the front assembly is rotated about the steering axis, necessary to maintain a steady-state turn. Some of these are unique to single-track vehicles, while others are also experienced by automobiles. Some of these may be mentioned elsewhere in this article, and they are repeated here, though not necessarily in order of importance, so that they may be found in one place. First, the actual kinematic steering angle, the angle projected onto the road plane to which the front assembly is rotated is a function of the steering angle and the steering axis angle: where is the kinematic steering angle, is the steering angle, and is the caster angle of the steering axis. Second, the lean of the bike decreases the actual radius of the turn proportionally to the cosine of the lean angle. The resulting radius can be roughly approximated (within 2% of exact value) by: where is the approximate radius, is the wheelbase, is the lean angle, is the steering angle, and is the caster angle of the steering axis. Third, because the front and rear tires can have different slip angles due to weight distribution, tire properties, etc., bikes can experience understeer or oversteer. When understeering, the steering angle must be greater, and when oversteering, the steering angle must be less than it would be if the slip angles were equal to maintain a given turn radius. Some authors even use the term counter-steering to refer to the need on some bikes under some conditions to steer in the opposite direction of the turn (negative steering angle) to maintain control in response to significant rear wheel slippage. Fourth, camber thrust contributes to the centripetal force necessary to cause the bike to deviate from a straight path, along with cornering force due to the slip angle, and can be the largest contributor. 
Camber thrust contributes to the ability of bikes to negotiate a turn with the same radius as automobiles but with a smaller steering angle. When a bike is steered and leaned in the same direction, the camber angle of the front tire is greater than that of the rear and so can generate more camber thrust, all else being equal. No hands While countersteering is usually initiated by applying torque directly to the handlebars, on lighter vehicles such as bicycles, it can be accomplished by shifting the rider's weight. If the rider leans to the right relative to the bike, the bike leans to the left to conserve angular momentum, and the combined center of mass remains nearly in the same vertical plane. This leftward lean of the bike, called counter lean by some authors, will cause it to steer to the left and initiate a right-hand turn as if the rider had countersteered to the left by applying a torque directly to the handlebars. This technique may be complicated by additional factors such as headset friction and stiff control cables. The combined center of mass does move slightly to the left when the rider leans to the right relative to the bike, and the bike leans to the left in response. The action, in space, would have the tires move right, but this is prevented by friction between the tires and the ground, and thus pushes the combined center of mass left. This is a small effect, however, as evidenced by the difficulty most people have in balancing a bike by this method alone. Gyroscopic effects As mentioned above in the section on balance, one effect of turning the front wheel is a roll moment caused by gyroscopic precession. The magnitude of this moment is proportional to the moment of inertia of the front wheel, its spin rate (forward motion), the rate that the rider turns the front wheel by applying a torque to the handlebars, and the cosine of the angle between the steering axis and the vertical. For a sample motorcycle moving at 22 m/s (50 mph) that has a front wheel with a moment of inertia of 0.6 kg·m2, turning the front wheel one degree in half a second generates a roll moment of 3.5 N·m. In comparison, the lateral force on the front tire as it tracks out from under the motorcycle reaches a maximum of 50 N. This, acting on the 0.6 m (2 ft) height of the center of mass, generates a roll moment of 30 N·m. While the moment from gyroscopic forces is only 12% of this, it can play a significant part because it begins to act as soon as the rider applies the torque, instead of building up more slowly as the wheel out-tracks. This can be especially helpful in motorcycle racing. Two-wheel steering Because of theoretical benefits, such as a tighter turning radius at low speed, attempts have been made to construct motorcycles with two-wheel steering. One working prototype by Ian Drysdale in Australia is reported to "work very well". Issues in the design include whether to provide active control of the rear wheel or let it swing freely. In the case of active control, the control algorithm needs to decide between steering with or in the opposite direction of the front wheel, when, and how much. One implementation of two-wheel steering, the Sideways bike, lets the rider control the steering of both wheels directly. Another, the Swing Bike, had the second steering axis in front of the seat so that it could also be controlled by the handlebars. Milton W. Raymond built a long low two-wheel steering bicycle, called "X-2", with various steering mechanisms to control the two wheels independently. 
Steering motions included "balance", in which both wheels move together to steer the tire contacts under the center of mass; and "true circle", in which the wheels steer equally in opposite directions, thus steering the bicycle without substantially changing the lateral position of the tire contacts relative to the center of mass. X-2 was also able to go "crabwise" with the wheels parallel but out of line with the frame, for instance with the front wheel near the roadway center line and rear wheel near the curb. "Balance" steering allowed easy balancing despite long wheelbase and low center of mass, but no self-balancing ("no hands") configuration was discovered. True circle, as expected, was essentially impossible to balance, as steering does not correct for misalignment of the tire patch and center of mass. Crabwise cycling at angles tested up to about 45° did not show a tendency to fall over, even under braking. X-2 is mentioned in passing in Whitt and Wilson's Bicycling Science 2nd edition. Rear-wheel steering Because of the theoretical benefits, especially a simplified front-wheel drive mechanism, attempts have been made to construct a rideable rear-wheel steering bike. The Bendix Company built a rear-wheel steering bicycle, and the U.S. Department of Transportation commissioned the construction of a rear-wheel steering motorcycle: both proved to be unrideable. Rainbow Trainers, Inc. in Alton, Illinois, offered US$5,000 to the first person "who can successfully ride the rear-steered bicycle, Rear Steered Bicycle I". One documented example of someone successfully riding a rear-wheel steering bicycle is that of L. H. Laiterman at Massachusetts Institute of Technology, on a specially designed recumbent bike. The difficulty is that turning left, accomplished by turning the rear wheel to the right, initially moves the center of mass to the right, and vice versa. This complicates the task of compensating for leans induced by the environment. Examination of the eigenvalues for bicycles with common geometries and mass distributions shows that when moving in reverse, so as to have rear-wheel steering, they are inherently unstable. This does not mean they are unrideable, but that the effort to control them is higher. Other, purpose-built designs have been published, however, that do not suffer this problem. Center steering Between the extremes of bicycles with classical front-wheel steering and those with strictly rear-wheel steering is a class of bikes with a pivot point somewhere between the two, referred to as center-steering, and similar to articulated steering. An early implementation of the concept was the Phantom bicycle in the early 1870s, promoted as a safer alternative to the penny-farthing. This design allows for simple front-wheel drive, and current implementations appear to be quite stable, even rideable no-hands, as many photographs illustrate. These designs, such as the Python Lowracer, a recumbent, usually have very lax head angles (40° to 65°) and positive or even negative trail. The builder of a bike with negative trail states that steering the bike from straight ahead forces the seat (and thus the rider) to rise slightly and this offsets the destabilizing effect of the negative trail. Reverse steering Bicycles have been constructed, for investigation and demonstration purposes, with the steering reversed so that turning the handlebars to the left causes the front wheel to turn to the right, and vice versa. 
It is possible to ride such a bicycle, but riders experienced with normal bicycles find it very difficult to learn, if they can manage it at all. Tiller effect Tiller effect is the expression used to describe how handlebars that extend far behind the steering axis (head tube) act like a tiller on a boat, in that one moves the bars to the right in order to turn the front wheel to the left, and vice versa. This situation is commonly found on cruiser bicycles, some recumbents, and some motorcycles. It can be troublesome when it limits the ability to steer because of interference or the limits of arm reach. Tires Tires have a large influence over bike handling, especially on motorcycles, but also on bicycles. Tires influence bike dynamics in two distinct ways: finite crown radius and force generation. Increasing the crown radius of the front tire has been shown to decrease the size of the self-stable speed range or to eliminate self-stability altogether. Increasing the crown radius of the rear tire has the opposite effect, but to a lesser degree. Tires generate the lateral forces necessary for steering and balance through a combination of cornering force and camber thrust. Tire inflation pressures have also been found to be important variables in the behavior of a motorcycle at high speeds. Because the front and rear tires can have different slip angles due to weight distribution, tire properties, etc., bikes can experience understeer or oversteer. Of the two, understeer, in which the front wheel slides more than the rear wheel, is more dangerous since front wheel steering is critical for maintaining balance. Because real tires have a finite contact patch with the road surface that can generate a scrub torque, and, when in a turn, can experience some side slipping as they roll, they can generate torques about an axis normal to the plane of the contact patch. One torque generated by a tire, called the self-aligning torque, is caused by asymmetries in the side-slip along the length of the contact patch. The resultant force of this side-slip occurs behind the geometric center of the contact patch, a distance described as the pneumatic trail, and so creates a torque on the tire. Since the direction of the side-slip is towards the outside of the turn, the force on the tire is towards the center of the turn. Therefore, this torque tends to turn the front wheel in the direction of the side-slip, away from the direction of the turn, and therefore tends to increase the radius of the turn. Another torque is produced by the finite width of the contact patch and the lean of the tire in a turn. The portion of the contact patch towards the outside of the turn is actually moving rearward, with respect to the wheel's hub, faster than the rest of the contact patch, because of its greater radius from the hub. By the same reasoning, the inner portion is moving rearward more slowly. So the outer and inner portions of the contact patch slip on the pavement in opposite directions, generating a torque that tends to turn the front wheel in the direction of the turn, and therefore tends to decrease the turn radius. The combination of these two opposite torques creates a resulting yaw torque on the front wheel, and its direction is a function of the side-slip angle of the tire, the angle between the actual path of the tire and the direction it is pointing, and the camber angle of the tire (the angle that the tire leans from the vertical). 
The result of this torque is often the suppression of the inversion speed predicted by rigid wheel models described above in the section on steady-state turning. High side A highsider is a type of bike motion which is caused by a rear wheel gaining traction when it is not facing in the direction of travel, usually after slipping sideways in a curve. This can occur under heavy braking, acceleration, a varying road surface, or suspension activation, especially due to interaction with the drive train. It can take the form of a single slip-then-flip or a series of violent oscillations. Maneuverability and handling Bike maneuverability and handling is difficult to quantify for several reasons. The geometry of a bike, especially the steering axis angle makes kinematic analysis complicated. Under many conditions, bikes are inherently unstable and must always be under rider control. Finally, the rider's skill has a large influence on the bike's performance in any maneuver. Bike designs tend to consist of a trade-off between maneuverability and stability. Rider control inputs The primary control input that the rider can make is to apply a torque directly to the steering mechanism via the handlebars. Because of the bike's own dynamics, due to steering geometry and gyroscopic effects, direct position control over steering angle has been found to be problematic. A secondary control input that the rider can make is to lean the upper torso relative to the bike. As mentioned above, the effectiveness of rider lean varies inversely with the mass of the bike. On heavy bikes, such as motorcycles, rider lean mostly alters the ground clearance requirements in a turn, improves the view of the road, and improves the bike system dynamics in a very low-frequency passive manner. In motorcycle racing, leaning the torso, moving the body, and projecting a knee to the inside of the turn relative to the bike can also cause an aerodynamic yawing moment that facilitates entering and rounding the turn. Differences from automobiles The need to keep a bike upright to avoid injury to the rider and damage to the vehicle limits the type of maneuverability testing commonly performed. For example, while automobile enthusiast publications often perform and quote skidpad results, motorcycle publications do not. The need to "set up" for a turn, lean the bike to the appropriate angle, means that the rider must see further ahead than is necessary for a typical car at the same speed, and this need increases more than in proportion to the speed. Rating schemes Several schemes have been devised to rate the handling of bikes, particularly motorcycles. The roll index is the ratio between steering torque and roll or lean angle. The acceleration index is the ratio between steering torque and lateral or centripetal acceleration. The steering ratio is the ratio between the theoretical turning radius based on ideal tire behavior and the actual turning radius. Values less than one, where the front wheel side slip is greater than the rear wheel side slip, are described as under-steering; equal to one as neutral steering; and greater than one as over-steering. Values less than zero, in which the front wheel must be turned opposite the direction of the curve due to much greater rear wheel side slip than front wheel have been described as counter-steering. Riders tend to prefer neutral or slight over-steering. Car drivers tend to prefer under-steering. 
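The steering ratio defined above is simple enough to express as a small helper: divide the theoretical (ideal-tire) turning radius by the radius actually measured and read off the regime from the thresholds given in the text. The example radii below are made up purely to show the classification.

```python
def steering_ratio(theoretical_radius_m, actual_radius_m):
    """Ratio of the ideal-tire turning radius to the actual measured turning radius."""
    return theoretical_radius_m / actual_radius_m

def classify_steering(ratio, tol=1e-6):
    """Map a steering ratio onto the regimes described in the text."""
    if ratio < 0:
        return "counter-steering (front wheel turned away from the curve)"
    if ratio < 1 - tol:
        return "under-steering"
    if ratio <= 1 + tol:
        return "neutral steering"
    return "over-steering"

# Example: ideal tires predict a 30 m radius, but the bike actually tracks a 33 m circle
r = steering_ratio(30.0, 33.0)
print(round(r, 2), "->", classify_steering(r))   # ratio < 1, so under-steering
```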
The Koch index is the ratio between peak steering torque and the product of peak lean rate and forward speed. Large, touring motorcycles tend to have a high Koch index, sport motorcycles tend to have a medium Koch index, and scooters tend to have a low Koch index. It is easier to maneuver light scooters than heavy motorcycles. Lateral motion theory Although its equations of motion can be linearized, a bike is a nonlinear system. The variable(s) to be solved for cannot be written as a linear sum of independent components, i.e., its behavior is not expressible as a sum of the behaviors of its descriptors. Generally, nonlinear systems are difficult to solve and are much less understandable than linear systems. In the idealized case, in which friction and any flexing are ignored, a bike is a conservative system. Damping, however, can still be demonstrated: under the right circumstances, side-to-side oscillations will decrease with time. Energy added with a sideways jolt to a bike running straight and upright (demonstrating self-stability) is converted into increased forward speed, not lost, as the oscillations die out. A bike is a nonholonomic system because its outcome is path-dependent. In order to know its exact configuration, especially location, it is necessary to know not only the configuration of its parts, but also their histories: how they have moved over time. This complicates mathematical analysis. Finally, in the language of control theory, a bike exhibits non-minimum phase behavior. It turns in the direction opposite of how it is initially steered, as described above in the section on countersteering. Degrees of freedom The number of degrees of freedom of a bike depends on the particular model being used. The simplest model that captures the key dynamic features, called the "Whipple model" after Francis Whipple who first developed the equations for it, has four rigid bodies with knife edge wheels rolling without slip on a flat smooth surface, and has 7 degrees of freedom (configuration variables required to completely describe the location and orientation of all 4 bodies): the x coordinate of the rear wheel contact point, the y coordinate of the rear wheel contact point, the orientation angle of the rear frame (yaw), the rotation angle of the rear wheel, the rotation angle of the front wheel, the lean angle of the rear frame (roll), and the steering angle between the rear frame and the front end. Equations of motion The equations of motion of an idealized bike, consisting of a rigid frame, a rigid fork, two knife-edged, rigid wheels, all connected with frictionless bearings and rolling without friction or slip on a smooth horizontal surface and operating at or near the upright and straight-ahead, unstable equilibrium, can be represented by a single fourth-order linearized ordinary differential equation or two coupled second-order differential equations, the lean equation M_φφ φ'' + M_φδ δ'' + v C1_φδ δ' + (g K0_φφ + v² K2_φφ) φ + (g K0_φδ + v² K2_φδ) δ = T_φ, and the steer equation M_δφ φ'' + M_δδ δ'' + v C1_δφ φ' + v C1_δδ δ' + (g K0_δφ + v² K2_δφ) φ + (g K0_δδ + v² K2_δδ) δ = T_δ, where φ is the lean angle of the rear assembly, δ is the steer angle of the front assembly relative to the rear assembly, primes denote differentiation with respect to time, the coefficients are the entries of the mass, damping and stiffness matrices given in matrix form below, and T_φ and T_δ are the moments (torques) applied at the rear assembly and the steering axis, respectively. For the analysis of an uncontrolled bike, both are taken to be zero. 
These can be represented in matrix form as M q'' + v C1 q' + [g K0 + v² K2] q = f, where M is the symmetrical mass matrix, which contains terms that include only the mass and geometry of the bike; v C1 is the so-called damping matrix, even though an idealized bike has no dissipation, which contains terms that include the forward speed v and is asymmetric; g K0 + v² K2 is the so-called stiffness matrix, which contains terms that include the gravitational constant g and the square of the forward speed, and is symmetric in K0 and asymmetric in K2; q = (φ, δ) is a vector of lean angle and steer angle; and f is a vector of external forces, the moments mentioned above. In this idealized and linearized model, there are many geometric parameters (wheelbase, head angle, mass of each body, wheel radius, etc.), but only four significant variables: lean angle, lean rate, steer angle, and steer rate. These equations have been verified by comparison with multiple numeric models derived completely independently. The equations show that the bicycle is like an inverted pendulum with the lateral position of its support controlled by terms representing feedback from roll acceleration, roll velocity and roll displacement to steering torque. The roll acceleration term is normally of the wrong sign for self-stabilization and can be expected to be important mainly in respect of wobble oscillations. The roll velocity feedback is of the correct sign, is gyroscopic in nature, being proportional to speed, and is dominated by the front wheel contribution. The roll displacement term is the most important one and is mainly controlled by trail, steering rake and the offset of the front frame mass center from the steering axis. All the terms involve complex combinations of bicycle design parameters and sometimes the speed. The limitations of the benchmark bicycle are considered and extensions to the treatments of tires, frames and riders, and their implications, are included. Optimal rider controls for stabilization and path-following control are also discussed. Eigenvalues It is possible to calculate eigenvalues, one for each of the four state variables (lean angle, lean rate, steer angle, and steer rate), from the linearized equations in order to analyze the normal modes and self-stability of a particular bike design. In the plot to the right, eigenvalues of one particular bicycle are calculated for forward speeds of 0–10 m/s (0–22 mph). When the real parts of all eigenvalues (shown in dark blue) are negative, the bike is self-stable. When the imaginary parts of any eigenvalues (shown in cyan) are non-zero, the bike exhibits oscillation. The eigenvalues are point symmetric about the origin and so any bike design with a self-stable region in forward speeds will not be self-stable going backwards at the same speed. There are three forward speeds that can be identified in the plot to the right at which the motion of the bike changes qualitatively: The forward speed at which oscillations begin, at about 1 m/s (2.2 mph) in this example, sometimes called the double root speed due to there being a repeated root to the characteristic polynomial (two of the four eigenvalues have exactly the same value). Below this speed, the bike simply falls over as an inverted pendulum does. The forward speed at which oscillations do not increase, where the weave mode eigenvalues switch from positive to negative in a Hopf bifurcation at about 5.3 m/s (12 mph) in this example, is called the weave speed. Below this speed, oscillations increase until the uncontrolled bike falls over. Above this speed, oscillations eventually die out. 
The forward speed at which non-oscillatory leaning increases, where the capsize mode eigenvalues switch from negative to positive in a pitchfork bifurcation at about 8 m/s (18 mph) in this example, is called the capsize speed. Above this speed, this non-oscillating lean eventually causes the uncontrolled bike to fall over. Between these last two speeds, if they both exist, is a range of forward speeds at which the particular bike design is self-stable. In the case of the bike whose eigenvalues are shown here, the self-stable range is 5.3–8.0 m/s (12–18 mph). The fourth eigenvalue, which is usually stable (very negative), represents the castoring behavior of the front wheel, as it tends to turn towards the direction in which the bike is traveling. Note that this idealized model does not exhibit the wobble or shimmy and rear wobble instabilities described below. They are seen in models that incorporate tire interaction with the ground or other degrees of freedom. Experimentation with real bikes has so far confirmed the weave mode predicted by the eigenvalues. It was found that tire slip and frame flex are not important for the lateral dynamics of the bicycle in the speed range up to 6 m/s. Modes Bikes, as complex mechanisms, have a variety of modes: fundamental ways that they can move. These modes can be stable or unstable, depending on the bike parameters and its forward speed. In this context, "stable" means that an uncontrolled bike will continue rolling forward without falling over as long as forward speed is maintained. Conversely, "unstable" means that an uncontrolled bike will eventually fall over, even if forward speed is maintained. The modes can be differentiated by the speed at which they switch stability and the relative phases of leaning and steering as the bike experiences that mode. Any bike motion consists of a combination of various amounts of the possible modes, and there are three main modes that a bike can experience: capsize, weave, and wobble. A lesser known mode is rear wobble, and it is usually stable. Capsize Capsize is falling over without oscillation. During capsize, an uncontrolled front wheel usually steers in the direction of lean, but never enough to stop the increasing lean, until a very high lean angle is reached, at which point the steering may turn in the opposite direction. A capsize can happen very slowly if the bike is moving forward rapidly. Because the capsize instability is so slow, on the order of seconds, it is easy for the rider to control, and is actually used by the rider to initiate the lean necessary for a turn. For most bikes, depending on geometry and mass distribution, capsize is stable at low speeds, and becomes less stable as speed increases until it is no longer stable. However, on many bikes, tire interaction with the pavement is sufficient to prevent capsize from becoming unstable at high speeds. Weave Weave is a slow (0–4 Hz) oscillation between leaning left and steering right, and vice versa. The entire bike is affected with significant changes in steering angle, lean angle (roll), and heading angle (yaw). The steering is 180° out of phase with the heading and 90° out of phase with the leaning. This AVI movie shows weave. For most bikes, depending on geometry and mass distribution, weave is unstable at low speeds, and becomes less pronounced as speed increases until it is no longer unstable. While the amplitude may decrease, the frequency actually increases with speed. 
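The eigenvalue analysis described above can be reproduced in a few lines once the four matrices of the linearized model are known. The sketch below assembles the state matrix for x = [lean, steer, lean rate, steer rate] and sweeps forward speed to find where all real parts are negative. The numerical matrices are rounded values of the published Whipple-model benchmark bicycle, quoted from memory and included only so the code runs; that bike is not the one whose plot is described above, so its weave and capsize speeds come out at different values.

```python
import numpy as np

g = 9.81
# Approximate benchmark-bicycle matrices (rounded from memory; verify against the published benchmark)
M  = np.array([[80.817,  2.319], [ 2.319,  0.298]])
C1 = np.array([[ 0.000, 33.866], [-0.850,  1.685]])
K0 = np.array([[-80.95, -2.600], [-2.600, -0.803]])
K2 = np.array([[ 0.000, 76.597], [ 0.000,  2.654]])

def state_matrix(v):
    """A(v) for x' = A x, with x = [lean, steer, lean rate, steer rate]."""
    M_inv = np.linalg.inv(M)
    K = g * K0 + v**2 * K2          # speed-dependent stiffness
    A = np.zeros((4, 4))
    A[0:2, 2:4] = np.eye(2)         # d(lean, steer)/dt = (lean rate, steer rate)
    A[2:4, 0:2] = -M_inv @ K
    A[2:4, 2:4] = -M_inv @ (v * C1)
    return A

speeds = np.linspace(0.0, 10.0, 201)
stable = [v for v in speeds if np.all(np.linalg.eigvals(state_matrix(v)).real < 0)]
if stable:
    print(f"self-stable from about {min(stable):.1f} to {max(stable):.1f} m/s for these matrices")
else:
    print("no self-stable speed range for these matrices")
```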
Wobble or shimmy Wobble, shimmy, tank-slapper, speed wobble, and death wobble are all words and phrases used to describe a rapid (4–10 Hz) oscillation of primarily just the front end (front wheel, fork, and handlebars). Also involved is the yawing of the rear frame which may contribute to the wobble when too flexible. This instability occurs mostly at high speed and is similar to that experienced by shopping cart wheels, airplane landing gear, and automobile front wheels. While wobble or shimmy can be easily remedied by adjusting speed, position, or grip on the handlebar, it can be fatal if left uncontrolled. Wobble or shimmy begins when some otherwise minor irregularity, such as fork asymmetry, accelerates the wheel to one side. The restoring force is applied in phase with the progress of the irregularity, and the wheel turns to the other side where the process is repeated. If there is insufficient damping in the steering the oscillation will increase until system failure occurs. The oscillation frequency can be changed by changing the forward speed, making the bike stiffer or lighter, or increasing the stiffness of the steering, of which the rider is a main component. Rear wobble The term rear wobble is used to describe a mode of oscillation in which lean angle (roll) and heading angle (yaw) are almost in phase and both 180° out of phase with steer angle. The rate of this oscillation is moderate with a maximum of about 6.5 Hz. Rear wobble is heavily damped and falls off quickly as bike speed increases. Design criteria The effect that the design parameters of a bike have on these modes can be investigated by examining the eigenvalues of the linearized equations of motion. For more details on the equations of motion and eigenvalues, see the section on the equations of motion above. Some general conclusions that have been drawn are described here. The lateral and torsional stiffness of the rear frame and the wheel spindle affects wobble-mode damping substantially. Long wheelbase and trail and a flat steering-head angle have been found to increase weave-mode damping. Lateral distortion can be countered by locating the front fork torsional axis as low as possible. Cornering weave tendencies are amplified by degraded damping of the rear suspension. Cornering, camber stiffnesses and relaxation length of the rear tire make the largest contribution to weave damping. The same parameters of the front tire have a lesser effect. Rear loading also amplifies cornering weave tendencies. Rear load assemblies with appropriate stiffness and damping, however, were successful in damping out weave and wobble oscillations. One study has shown theoretically that, while a bike leaned in a turn, road undulations can excite the weave mode at high speed or the wobble mode at low speed if either of their frequencies match the vehicle speed and other parameters. Excitation of the wobble mode can be mitigated by an effective steering damper and excitation of the weave mode is worse for light riders than for heavy riders. Riding on treadmills and rollers Riding on a treadmill is theoretically identical to riding on stationary pavement, and physical testing has confirmed this. Treadmills have been developed specifically for indoor bicycle training. Riding on rollers is still under investigation. Other hypotheses Although bicycles and motorcycles can appear to be simple mechanisms with only four major moving parts (frame, fork, and two wheels), these parts are arranged in a way that makes them complicated to analyze. 
While it is an observable fact that bikes can be ridden even when the gyroscopic effects of their wheels are canceled out, the hypothesis that the gyroscopic effects of the wheels are what keep a bike upright is common in print and online. Examples in print: "Angular momentum and motorcycle counter-steering: A discussion and demonstration", A. J. Cox, Am. J. Phys. 66, 1018–1021 (1998) "The motorcycle as a gyroscope", J. Higbie, Am. J. Phys. 42, 701–702 The Physics of Everyday Phenomena, W. T. Griffith, McGraw-Hill, New York, 1998, pp. 149–150. The Way Things Work, Macaulay, Houghton-Mifflin, New York, NY, 1989 Longitudinal dynamics Bikes may experience a variety of longitudinal forces and motions. On most bikes, when the front wheel is turned to one side or the other, the entire rear frame pitches forward slightly, depending on the steering axis angle and the amount of trail. On bikes with suspensions, either front, rear, or both, trim is used to describe the geometric configuration of the bike, especially in response to forces of braking, accelerating, turning, drive train, and aerodynamic drag. The load borne by the two wheels varies not only with center of mass location, which in turn varies with the number of passengers, the amount of luggage, and the location of passengers and luggage, but also with acceleration and deceleration. This phenomenon is known as load transfer or weight transfer, depending on the author, and provides challenges and opportunities to both riders and designers. For example, motorcycle racers can use it to increase the friction available to the front tire when cornering, and attempts to reduce front suspension compression during heavy braking have spawned several motorcycle fork designs. The net aerodynamic drag forces may be considered to act at a single point, called the center of pressure. At high speeds, this will create a net moment about the rear driving wheel and result in a net transfer of load from the front wheel to the rear wheel. Also, depending on the shape of the bike and the shape of any fairing that might be installed, aerodynamic lift may be present that either increases or further reduces the load on the front wheel. Stability Though longitudinally stable when stationary, a bike may become longitudinally unstable under sufficient acceleration or deceleration, and Euler's second law can be used to analyze the ground reaction forces generated. For example, the normal (vertical) ground reaction forces at the wheels for a bike with a wheelbase L and a center of mass at height h and at a distance b in front of the rear wheel hub, and for simplicity, with both wheels locked, can be expressed as: Nr = mg(L − b − μh)/L for the rear wheel and Nf = mg(b + μh)/L for the front wheel. The frictional (horizontal) forces are simply Fr = μNr for the rear wheel and Ff = μNf for the front wheel, where μ is the coefficient of friction, m is the total mass of the bike and rider, and g is the acceleration of gravity. Therefore, if μh ≥ L − b, which occurs if the center of mass is anywhere above or in front of a line extending back from the front wheel contact patch and inclined at the angle arctan(1/μ) above the horizontal, then the normal force of the rear wheel will be zero (at which point the equation no longer applies) and the bike will begin to flip or loop forward over the front wheel. On the other hand, if the center of mass height is behind or below the line, such as on most tandem bicycles or long-wheel-base recumbent bicycles, as well as cars, it is less likely that the front wheel can generate enough braking force to flip the bike. 
This means they can decelerate up to nearly the limit of adhesion of the tires to the road, which could reach 0.8 g if the coefficient of friction is 0.8, which is 40% more than an upright bicycle under even the best conditions. Bicycling Science author David Gordon Wilson points out that this puts upright bicyclists at particular risk of causing a rear-end collision if they tailgate cars. Similarly, powerful motorcycles can generate enough torque at the rear wheel to lift the front wheel off the ground in a maneuver called a wheelie. A line similar to the one described above to analyze braking performance can be drawn from the rear wheel contact patch to predict if a wheelie is possible given the available friction, the center of mass location, and sufficient power. This can also happen on bicycles, although there is much less power available, if the center of mass is back or up far enough or the rider lurches back when applying power to the pedals. Of course, the angle of the terrain can influence all of the calculations above. All else remaining equal, the risk of pitching over the front end is reduced when riding up hill and increased when riding down hill. The possibility of performing a wheelie increases when riding up hill, and is a major factor in motorcycle hillclimbing competitions. Braking according to ground conditions When braking, the rider in motion is seeking to change the speed of the combined mass m of rider plus bike. This is a deceleration, of magnitude a, in the line of travel. By F = ma, the deceleration a gives rise to an inertial forward force of magnitude ma on the mass m. The braking a is from an initial speed u to a final speed v, over a length of time t. The equation u − v = at implies that the greater the deceleration, the shorter the time needed to change speed. The stopping distance s is also shortest when deceleration a is at the highest possible value compatible with road conditions: the equation s = ut − (1/2)at² makes s low when a is high and t is low. How much braking force to apply to each wheel depends both on ground conditions and on the balance of weight on the wheels at each instant in time. The total braking force cannot exceed the gravity force on the rider and bike times the coefficient of friction μ of the tire on the ground: mgμ ≥ Ff + Fr. A skid occurs if the ratio of either Ff over Nf or Fr over Nr is greater than μ, with a rear wheel skid having less of a negative impact on lateral stability. When braking, the inertial force ma in the line of travel, not being co-linear with f, tends to rotate m about f. This tendency to rotate, an overturning moment, is resisted by a moment from mg. Taking moments about the front wheel contact point at an instant in time: Nr L + mah = mg(L − b), where L is the wheelbase, h is the height of the center of mass, and b is its horizontal distance in front of the rear wheel hub. When there is no braking, mass m is typically above the bottom bracket, about 2/3 of the way back between the front and rear wheels, with Nr thus greater than Nf. In constant light braking, whether because an emergency stop is not required or because poor ground conditions prevent heavy braking, much weight still rests on the rear wheel, meaning that Nr is still large and Fr can contribute towards a. As braking a increases, Nr and Fr decrease because the moment mah increases with a. At maximum constant a, clockwise and anti-clockwise moments are equal, at which point Nr = 0. Any greater Ff initiates a stoppie. Other factors: Downhill it is much easier to topple over the front wheel because the incline moves the line of mg closer to f. 
To try to reduce this tendency, the rider can stand back on the pedals to try to keep m as far back as possible. When braking is increasing, the center of mass m may move forward relative to the front wheel, as the rider moves forward relative to the bike, and, if the bike has suspension on the front wheel, the front forks compress under load, changing the bike geometry. This all puts extra load on the front wheel. At the end of a brake maneuver, as the rider comes to a halt, the suspension decompresses and pushes the rider back. Values for μ vary greatly depending on a number of factors: The material that the ground or road surface is made of. Whether the ground is wet or dry. The temperature of the tire and ground. The smoothness or roughness of the ground. The firmness or looseness of the ground. The speed of the vehicle, with friction reducing above 30 mph (50 km/h). Whether friction is rolling or sliding, with sliding friction at least 10% below peak rolling friction. Braking Most of the braking force of standard upright bikes comes from the front wheel. As the analysis above shows, if the brakes themselves are strong enough, the rear wheel is easy to skid, while the front wheel often can generate enough stopping force to flip the rider and bike over the front wheel. This is called a stoppie if the rear wheel is lifted but the bike does not flip, or an endo (abbreviated form of end-over-end) if the bike flips. On long or low bikes, however, such as cruiser motorcycles and recumbent bicycles, the front tire will skid instead, possibly causing a loss of balance. Assuming no loss of balance, it is possible to calculate optimum braking performance depending on the bike's geometry, the location of center of gravity of bike and rider, and the maximum coefficient of friction. In the case of a front suspension, especially telescoping fork tubes, the increase in downward force on the front wheel during braking may cause the suspension to compress and the front end to lower. This is known as brake diving. A riding technique that takes advantage of how braking increases the downward force on the front wheel is known as trail braking. Front wheel braking The limiting factors on the maximum deceleration in front wheel braking are: the maximum, limiting value of static friction between the tire and the ground, often between 0.5 and 0.8 for rubber on dry asphalt, the kinetic friction between the brake pads and the rim or disk, and pitching or looping (of bike and rider) over the front wheel. For an upright bicycle on dry asphalt with excellent brakes, pitching will probably be the limiting factor. The combined center of mass of a typical upright bicycle and rider sits roughly half as far back from the front wheel contact patch as it is above the ground, allowing a maximum deceleration of 0.5 g (5 m/s2 or 16 ft/s2). If the rider modulates the brakes properly, however, pitching can be avoided. If the rider moves his weight back and down, even larger decelerations are possible. Rear-wheel braking The rear brake of an upright bicycle can only produce about 0.25 g (≈2.5 m/s2) deceleration at best, because of the decrease in normal force at the rear wheel as described above. All such bikes with only rear braking are subject to this limitation: for example, bikes with only a coaster brake, and fixed-gear bikes with no other braking mechanism. 
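Before turning to the situations that favor the rear brake, the load-transfer and deceleration limits discussed in this and the preceding sections can be pulled together in one small calculation: the rear and front wheel loads under hard braking, the loop-over condition μh ≥ L − b, and the front-brake deceleration limit set by whichever is smaller, the friction coefficient or the back-distance-to-height ratio of the center of mass. All numerical inputs below are illustrative assumptions, not values from the article.

```python
def locked_wheel_loads(m, L, h, b, mu, g=9.81):
    """Wheel loads with both wheels locked and sliding (a negative Nr means the
    loop-over threshold has already been passed and the formula no longer applies)."""
    Nr = m * g * (L - b - mu * h) / L   # rear wheel normal force
    Nf = m * g * (b + mu * h) / L       # front wheel normal force
    will_flip = mu * h >= L - b         # rear load reaches zero: bike starts to pitch over
    return Nr, Nf, will_flip

def max_front_brake_decel(mu, cm_back_m, cm_height_m, g=9.81):
    """Front-brake deceleration limit: tire sliding (mu) or pitching over (d/h), whichever is lower."""
    return g * min(mu, cm_back_m / cm_height_m)

# Illustrative numbers: 90 kg total, 1.0 m wheelbase, center of mass 1.1 m high
# and 0.35 m ahead of the rear hub, with mu = 0.7
print(locked_wheel_loads(90, 1.0, 1.1, 0.35, 0.7))

# Center of mass assumed half as far back from the front contact as it is high -> about 0.5 g
a = max_front_brake_decel(0.8, 0.6, 1.2)
print(round(a, 2), "m/s^2, i.e. roughly 0.5 g, pitch-over limited")
print(round(10.0**2 / (2 * a), 1), "m to stop from 10 m/s at that deceleration")
```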
There are, however, situations that may warrant rear wheel braking: Slippery surfaces or bumpy surfaces. Under front wheel braking, the lower coefficient of friction may cause the front wheel to skid, which often results in a loss of balance. Front flat tire. Braking a wheel with a flat tire can cause the tire to come off the rim, which greatly reduces friction and, in the case of a front wheel, results in a loss of balance. To deliberately induce a rear wheel skid, provoking oversteer and achieving a smaller turn radius on tight turns. Front brake failure. Recumbent bicycles. Long-wheelbase recumbents require a good rear brake as the CG is near the rear wheel. Braking technique Expert opinion varies from "use both levers equally at first" to "the fastest that you can stop any bike of normal wheelbase is to apply the front brake so hard that the rear wheel is just about to lift off the ground", depending on road conditions, rider skill level, and desired fraction of maximum possible deceleration. The SureStop System uses a sliding mechanism to enable the front brakes to be actuated by the friction applied to the back brake shoes by the rotation of the rear wheel. This is designed to optimise the braking friction to that of the road conditions so as to mitigate the risk of going over the handlebars. Suspension Bikes may have front suspension only, rear suspension only, full suspension, or no suspension; the suspension operates primarily in the central plane of symmetry, though with some consideration given to lateral compliance. The goals of a bike suspension are to reduce vibration experienced by the rider, maintain wheel contact with the ground, reduce the loss of momentum when riding over an object, reduce impact forces caused by jumps or drops, and maintain vehicle trim. The primary suspension parameters are stiffness, damping, sprung and unsprung mass, and tire characteristics. Vibration The study of vibrations in bikes includes its causes, such as engine balance, wheel balance, ground surface, and aerodynamics; its transmission and absorption; and its effects on the bike, the rider, and safety. An important factor in any vibration analysis is a comparison of the natural frequencies of the system with the possible driving frequencies of the vibration sources. A close match means mechanical resonance that can result in large amplitudes. A challenge in vibration damping is to create compliance in certain directions (vertically) without sacrificing frame rigidity needed for power transmission and handling (torsionally). Another issue with vibration for the bike is the possibility of failure due to material fatigue. Effects of vibration on riders include discomfort, loss of efficiency, Hand-Arm Vibration Syndrome (a secondary form of Raynaud's disease), and whole-body vibration. Vibrating instruments may be inaccurate or difficult to read. In bicycles The primary cause of vibrations in a properly functioning bicycle is the surface over which it rolls. In addition to pneumatic tires and traditional bicycle suspensions, a variety of techniques have been developed to damp vibrations before they reach the rider. These include materials, such as carbon fiber, either in the whole frame or just key components such as the front fork, seatpost, or handlebars; tube shapes, such as curved seat stays; gel handlebar grips and saddles; and special inserts, such as Zertz by Specialized, and Buzzkills by Bontrager. In motorcycles In addition to the road surface, vibrations in a motorcycle can be caused by the engine and wheels, if unbalanced. 
Manufacturers employ a variety of technologies to reduce or damp these vibrations, such as engine balance shafts, rubber engine mounts, and tire weights. The problems that vibration causes have also spawned an industry of after-market parts and systems designed to reduce it. Add-ons include handlebar weights, isolated foot pegs, and engine counterweights. At high speeds, motorcycles and their riders may also experience aerodynamic flutter or buffeting. This can be abated by changing the air flow over key parts, such as the windshield. Experimentation A variety of experiments verify or disprove various hypotheses about bike dynamics. David Jones built several bikes in a search for an unrideable configuration. Richard Klein built several bikes to confirm Jones' findings. Richard Klein also built a "Torque Wrench Bike" and a "Rocket Bike" to investigate steering torques and their effects. Keith Code built a motorcycle with fixed handlebars to investigate the effects of rider motion and position on steering. Schwab and Kooijman have performed measurements with an instrumented bike. Hubbard and Moore have performed measurements with an instrumented bike. See also Bicycle and motorcycle geometry Bicycle fork Bicycle performance Bicycle tire Lowsider Motorcycle fork Parallel parking problem Outline of motorcycles and motorcycling References Further reading 'An Introduction to Bicycle Geometry and Handling', Karl Anderson 'What keeps the bicycle upright?' by Jobst Brandt 'Report on Stability of the Dahon Bicycle' by John Forester External links Videos: Video of riderless bicycle demonstrating self-stability Why bicycles do not fall: Arend Schwab at TEDx Delft 2012 Wobble movie (AVI) Weave movie (AVI) Wobble Crash (Flash) Video on Science Friday Research centers: Bicycle Dynamics at Delft University of Technology Bicycle Mechanics at Cornell University Bicycle Science at the University of Illinois Motorcycle Dynamics at the University of Padova Control and Power Research Group at Imperial College Bicycle dynamics, control and handling at UC Davis Bicycle and Motorcycle Engineering Research Laboratory at the University of Wisconsin-Milwaukee Conferences: Single Track Vehicle Dynamics at DSCC 2012: two sessions at the ASME Dynamic Systems and Control Conference in Fort Lauderdale, Florida, USA, October 17–19, 2012 Bicycle and Motorcycle Dynamics 2013 : Symposium on Dynamics and Control of Single Track Vehicles, Nihon University, Nov 11–13, 2013 Bicycle and Motorcycle Dynamics Conference: Summary page Control theory Cycling Dynamics (mechanics)
Bicycle and motorcycle dynamics
[ "Physics", "Mathematics" ]
15,442
[ "Physical phenomena", "Applied mathematics", "Control theory", "Classical mechanics", "Motion (physics)", "Dynamics (mechanics)", "Dynamical systems" ]
41,565,937
https://en.wikipedia.org/wiki/Scan%20statistic
In statistics, a scan statistic or window statistic is a problem relating to the clustering of randomly positioned points. An example of a typical problem is the maximum size of a cluster of points on a line or the longest series of successes recorded by a moving window of fixed length. Joseph Naus first published on the problem in the 1960s, and has been called the "father of the scan statistic" in honour of his early contributions. The results can be applied in epidemiology, public health and astronomy to find unusual clusters of events. It was extended by Martin Kulldorff to multidimensional settings and varying window sizes in a 1997 paper, which is the most cited article in its journal, Communications in Statistics – Theory and Methods. This work led to the creation of the software SaTScan, a program trademarked by Martin Kulldorff that applies his methods to data. Recent results have shown that using scale-dependent critical values for the scan statistic allows one to attain asymptotically optimal detection simultaneously for all signal lengths, thereby improving on the traditional scan, but this procedure has been criticized for losing too much power for short signals. Walther and Perry (2022) considered the problem of detecting an elevated mean on an interval with unknown location and length in the univariate Gaussian sequence model. They explain this discrepancy by showing that these asymptotic optimality results will necessarily be too imprecise to discern the performance of scan statistics in a practically relevant way, even in a large sample context. Instead, they propose to assess the performance with a new finite sample criterion. They present three new calibration techniques for scan statistics that perform well across a range of relevant signal lengths and improve performance for short signals. The scan-statistic-based methods have been specifically developed to detect rare variant associations in the noncoding genome, especially for the intergenic region. Compared with fixed-size sliding window analysis, scan-statistic-based methods use dynamic windows of data-adaptive size to scan the genome continuously, and increase the analysis power by flexibly selecting the locations and sizes of the signal regions. Some examples of these methods are Q-SCAN, SCANG, and WGScan. References External links SaTScan free software for the spatial, temporal and space-time scan statistics Summary statistics Spatial analysis
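As a concrete illustration of the simplest case mentioned above (the maximum number of points on a line falling inside any window of fixed length), here is a short sketch; the window length and the sample points are arbitrary assumptions, and real applications such as SaTScan use far more elaborate likelihood-based versions of this idea.

```python
def fixed_window_scan(points, window_length):
    """Maximum number of points contained in any interval of the given length."""
    pts = sorted(points)
    best, lo = 0, 0
    for hi, x in enumerate(pts):
        while x - pts[lo] > window_length:   # shrink the window from the left
            lo += 1
        best = max(best, hi - lo + 1)
    return best

# Toy example: event positions on a line, scanned with a window of length 1.0
events = [0.2, 0.9, 1.1, 1.15, 1.3, 2.8, 3.0, 5.6]
print(fixed_window_scan(events, 1.0))   # 4 events fall within one unit-length window
```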
Scan statistic
[ "Physics" ]
481
[ "Spacetime", "Space", "Spatial analysis" ]
41,565,982
https://en.wikipedia.org/wiki/Proof%20mass
A proof mass or test mass is a known quantity of mass used in a measuring instrument as a reference for the measurement of an unknown quantity. A mass used to calibrate a weighing scale is sometimes called a calibration mass or calibration weight. A proof mass that deforms a spring in an accelerometer is sometimes called the seismic mass. In a convective accelerometer, a fluid proof mass may be employed. See also Calibration, checking or adjustment by comparison with a standard Control variable, the experimental element that is constant and unchanged throughout a scientific investigation Test particle, an idealized model of an object in which all physical properties are assumed to be negligible, except for the property being studied References Accelerometers Measurement Mass Units of mass
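As a small illustration of the seismic-mass idea mentioned above, the spring-deflection relation of a simple accelerometer can be written down directly from Hooke's law; this is generic textbook reasoning rather than anything specific from the article, and the stiffness, mass, and deflection values are invented for the example.

```python
def acceleration_from_deflection(spring_stiffness_n_per_m, proof_mass_kg, deflection_m):
    """Steady acceleration inferred from a spring-mounted proof mass: a = k * x / m."""
    return spring_stiffness_n_per_m * deflection_m / proof_mass_kg

# Invented example: 200 N/m spring, 5 gram proof mass, 0.25 mm measured deflection
print(round(acceleration_from_deflection(200.0, 0.005, 0.00025), 2), "m/s^2")  # about 1 g
```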
Proof mass
[ "Physics", "Mathematics", "Technology", "Engineering" ]
163
[ "Scalar physical quantities", "Accelerometers", "Units of measurement", "Physical quantities", "Acceleration", "Quantity", "Mass", "Units of mass", "Measurement", "Size", "Measuring instruments", "Wikipedia categories named after physical quantities", "Matter" ]
41,573,644
https://en.wikipedia.org/wiki/Burgers%20vortex
In fluid dynamics, the Burgers vortex or Burgers–Rott vortex is an exact solution to the Navier–Stokes equations governing viscous flow, named after Jan Burgers and Nicholas Rott. The Burgers vortex describes a stationary, self-similar flow. An inward, radial flow, tends to concentrate vorticity in a narrow column around the symmetry axis, while an axial stretching causes the vorticity to increase. At the same time, viscous diffusion tends to spread the vorticity. The stationary Burgers vortex arises when the three effects are in balance. The Burgers vortex, apart from serving as an illustration of the vortex stretching mechanism, may describe such flows as tornados, where the vorticity is provided by continuous convection-driven vortex stretching. Flow field The flow for the Burgers vortex is described in cylindrical coordinates. Assuming axial symmetry (no -dependence), the flow field associated with the axisymmetric stagnation point flow is considered: where (strain rate) and (circulation) are constants. The flow satisfies the continuity equation by the two first of the above equations. The azimuthal momentum equation of the Navier–Stokes equations then reduces to where is the kinematic viscosity of the fluid. The equation is integrated with the condition so that at infinity the solution behaves like a potential vortex, but at finite location, the flow is rotational. The choice ensures at the axis. The solution is The vorticity equation only gives a non-trivial component in the -direction, given by Intuitively the flow can be understood by looking at the three terms in the vorticity equation for , The first term on the right-hand side of the above equation corresponds to vortex stretching which intensifies the vorticity of the vortex core due to the axial-velocity component . The intensified vorticity tries to diffuse outwards radially due to the second term on the right-hand side, but is prevented by radial vorticity convection due to that emerges on the left-hand side of the above equation. The three-way balance establishes a steady solution. The Burgers vortex is a stable solution of the Navier–Stokes equations. One of the important property of the Burgers vortex that was shown by Jan Burgers is that the total viscous dissipation rate per unit axial length is independent of the viscosity, indicating that dissipation by the Burgers vortex is non-zero even in the limit . For this reason, it serves as a suitable candidate in modelling and understanding stretched-vortex tubes observed in turbulent flows. The total dissipation rate per unit axial length is, in incompressible flows, simply equal to the total enstrophy per unit length, which is given by Unsteady evolution to Burgers's vortex An exact solution of the time dependent Navier Stokes equations for arbitrary function is available. In particular, when is constant, the vorticity field with an arbitrary initial distribution is given by As , the asymptotic behaviour is given by Thus, provided , an arbitrary vorticity distribution approaches the Burgers' vortex. If , say in the case where the initial condition is composed of two equal and opposite vortices, then the first term is zero and the second term implies that vorticity decays to zero as Burgers vortex layer Burgers vortex layer or Burgers vortex sheet is a strained shear layer, which is a two-dimensional analogue of Burgers vortex. This is also an exact solution of the Navier–Stokes equations, first described by Albert A. Townsend in 1951. 
The velocity field expressed in the Cartesian coordinates are where is the strain rate, and . The value is interpreted as the vortex sheet strength. The vorticity equation only gives a non-trivial component in the -direction, given by The Burgers vortex sheet is shown to be unstable to small disturbances by K. N. Beronov and S. Kida thereby undergoing Kelvin–Helmholtz instability initially, followed by second instabilities and possibly transitioning to Kerr–Dold vortices at moderately large Reynolds numbers, but becoming turbulent at large Reynolds numbers. Non-axisymmetric Burgers vortices Non-axisymmetric Burgers' vortices emerge in non-axisymmetric strained flows. The theory for non-axisymmetric Burgers's vortex for small vortex Reynolds numbers was developed by A. C. Robinson and Philip Saffman in 1984, whereas Keith Moffatt, S. Kida and K. Ohkitani has developed the theory for in 1994. The structure of non-axisymmetric Burgers' vortices for arbitrary values of vortex Reynolds number can be discussed through numerical integrations. The velocity field takes the form subjected to the condition . Without loss of generality, one assumes and . The vortex cross-section lies in plane, providing a non-zero vorticity component in the direction The axisymmetric Burgers' vortex is recovered when whereas the Burgers' vortex layer is recovered when and . Burgers vortex in cylindrical stagnation surfaces Explicit solution of the Navier–Stokes equations for the Burgers vortex in stretched cylindrical stagnation surfaces was solved by P. Rajamanickam and A. D. Weiss. The solution is expressed in the cylindrical coordinate system as follows where is the strain rate, is the radial location of the cylindrical stagnation surface, is the circulation and is the regularized gamma function. This solution is nothing but the Burgers' vortex in the presence of a line source with source strength . The vorticity equation only gives a non-trivial component in the -direction, given by where in the above expression is the gamma function. As , the solution reduces to Burgers' vortex solution and as , the solution becomes the Burgers' vortex layer solution. Explicit solution for Sullivan vortex in cylindrical stagnation surface also exists. See also Sullivan vortex Kerr–Dold vortex References Vortices
Burgers vortex
[ "Chemistry", "Mathematics" ]
1,227
[ "Dynamical systems", "Vortices", "Fluid dynamics" ]
43,053,125
https://en.wikipedia.org/wiki/Iterated%20forcing
In mathematics, iterated forcing is a method for constructing models of set theory by repeating Cohen's forcing method a transfinite number of times. Iterated forcing was introduced by Solovay and Tennenbaum in their construction of a model of set theory with no Suslin tree. They also showed that iterated forcing can construct models where Martin's axiom holds and the continuum is any given regular cardinal. In iterated forcing, one has a transfinite sequence Pα of forcing notions indexed by some ordinals α, which give a family of Boolean-valued models VPα. If α+1 is a successor ordinal then Pα+1 is often constructed from Pα using a forcing notion in VPα, while if α is a limit ordinal then Pα is often constructed as some sort of limit (such as the direct limit) of the Pβ for β<α. A key consideration is that, typically, it is necessary that ω1 is not collapsed. This is often accomplished by the use of a preservation theorem such as: Finite support iterations of c.c.c. forcings (see countable chain condition) are c.c.c. and thus preserve ω1. Countable support iterations of proper forcings are proper (see Fundamental Theorem of Proper Forcing) and thus preserve ω1. Revised countable support iterations of semi-proper forcings are semi-proper and thus preserve ω1. Some non-semi-proper forcings, such as Namba forcing, can be iterated with appropriate cardinal collapses while preserving ω1 using methods developed by Saharon Shelah. References Sources External links Forcing (mathematics)
Iterated forcing
[ "Mathematics" ]
328
[ "Forcing (mathematics)", "Mathematical logic" ]
43,056,044
https://en.wikipedia.org/wiki/Time-triggered%20architecture
Time-triggered architecture (abbreviated as TTA), also known as a time-triggered system, is a computer system that executes one or more sets of tasks according to a predetermined, fixed task schedule. Implementation of a TT system will typically involve use of a single interrupt that is linked to the periodic overflow of a timer. This interrupt may drive a task scheduler (a restricted form of real-time operating system). The scheduler will, in turn, release the system tasks at predetermined points in time. History and development Because they have highly deterministic timing behavior, TT systems have been used for many years to develop safety-critical aerospace and related systems. An early text that sets forth the principles of time triggered architecture, communications, and sparse time approaches is Real-Time Systems: Design Principles for Distributed Embedded Applications, published in 1997. Use of TT systems was popularized by the publication of Patterns for Time-Triggered Embedded Systems (PTTES) in 2001 and the related introductory book Embedded C in 2002. The PTTES book also introduced the concepts of time-triggered hybrid schedulers (an architecture for time-triggered systems that require task pre-emption) and shared-clock schedulers (an architecture for distributed time-triggered systems involving multiple, synchronized, nodes). Since publication of PTTES, extensive research work on TT systems has been carried out. Current applications Time-triggered systems are now commonly associated with international safety standards such as IEC 61508 (industrial systems), ISO 26262 (automotive systems), IEC 62304 (medical systems) and IEC 60730 (household goods). Alternatives Time-triggered systems can be viewed as a subset of a more general event-triggered (ET) system architecture (see event-driven programming). Implementation of an ET system will typically involve use of multiple interrupts, each associated with specific periodic events (such as timer overflows) or aperiodic events (such as the arrival of messages over a communication bus at random points in time). ET designs are traditionally associated with the use of what is known as a real-time operating system (or RTOS), though use of such a software platform is not a defining characteristic of an ET architecture. See also Event-driven programming (an alternative architecture for computer systems) IEC 61508 (a related safety standard) ISO 26262 (a related safety standard) DO-178C (a related safety standard) Life-critical system (a common application for TT architectures) References Software architecture Safety Safety engineering International standards Electrical standards IEC standards Automotive standards Automotive safety Networks
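To make the single-interrupt idea concrete, here is a deliberately simplified sketch of a time-triggered (cyclic) task scheduler. It is written in Python purely for illustration, with the periodic timer interrupt replaced by an explicit tick loop; in a real embedded system the tick function would be the timer interrupt service routine and the tasks would be C functions. The task names, periods, and offsets are invented for the example.

```python
import time

class TTScheduler:
    """Minimal time-triggered scheduler: tasks run at fixed periods and offsets (in ticks)."""

    def __init__(self, tick_ms):
        self.tick_ms = tick_ms
        self.tasks = []          # entries of (function, period_ticks, offset_ticks)
        self.tick_count = 0

    def add_task(self, func, period_ticks, offset_ticks=0):
        self.tasks.append((func, period_ticks, offset_ticks))

    def tick(self):
        """Stand-in for the periodic timer interrupt: release any task that is due."""
        for func, period, offset in self.tasks:
            if self.tick_count >= offset and (self.tick_count - offset) % period == 0:
                func()
        self.tick_count += 1

    def run(self, duration_ticks):
        for _ in range(duration_ticks):
            self.tick()
            time.sleep(self.tick_ms / 1000.0)   # in hardware, the timer provides this delay

# Hypothetical tasks: a 10 ms sensor read and a 50 ms status update, offset to avoid overlap
sched = TTScheduler(tick_ms=10)
sched.add_task(lambda: print("read sensor"), period_ticks=1)
sched.add_task(lambda: print("update status"), period_ticks=5, offset_ticks=2)
sched.run(duration_ticks=12)
```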
Time-triggered architecture
[ "Physics", "Technology", "Engineering" ]
531
[ "Systems engineering", "Electrical standards", "Electrical systems", "Safety engineering", "Computer standards", "IEC standards", "Physical systems" ]
43,056,476
https://en.wikipedia.org/wiki/Lanthanum%20manganite
Lanthanum manganite is an inorganic compound with the formula LaMnO3, often abbreviated as LMO. Lanthanum manganite is formed in the perovskite structure, consisting of oxygen octahedra with a central Mn atom. The cubic perovskite structure is distorted into an orthorhombic structure by a strong Jahn–Teller distortion of the oxygen octahedra. LaMnO3 often has lanthanum vacancies, as evidenced by neutron scattering. For this reason, this material is usually referred to as LaMnO3+δ. These vacancies generate a structure with a rhombohedral unit cell in this perovskite. At temperatures below 140 K, this LaMnO3+δ semiconductor exhibits ferromagnetic order. Synthesis Lanthanum manganite can be prepared via solid-state reactions at high temperatures, using the constituent oxides or carbonates. An alternative method is to use lanthanum nitrate and manganese nitrate as raw materials. The reaction occurs at high temperature after the solvents are vaporized. Lanthanum manganite alloys Lanthanum manganite is an electrical insulator and an A-type antiferromagnet. It is the parent compound of several important alloys, often termed rare-earth manganites or colossal magnetoresistance oxides. These families include lanthanum strontium manganite, lanthanum calcium manganite and others. In lanthanum manganite, both the La and the Mn are in the +3 oxidation state. Substitution of some of the La atoms by divalent atoms such as Sr or Ca induces a similar amount of tetravalent Mn4+ ions. Such substitution, or doping, can induce various electronic effects, which form the basis of rich and complex electron correlation phenomena that yield diverse electronic phase diagrams in these alloys. See also Super exchange Double exchange Jahn–Teller effect Electron correlation References Lanthanum compounds Manganates Perovskites
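The solid-state route mentioned above can be summarized by a balanced equation. The particular manganese source shown here (Mn2O3 or MnO2) is an illustrative assumption rather than a detail given in the text; carbonates behave analogously after losing CO2:

  La_2O_3 + Mn_2O_3 \;\xrightarrow{\ \Delta\ }\; 2\,LaMnO_3
  La_2O_3 + 2\,MnO_2 \;\xrightarrow{\ \Delta\ }\; 2\,LaMnO_3 + \tfrac{1}{2}\,O_2

Both equations conserve La, Mn and O; the second illustrates how the firing conditions can shift the oxygen balance, which is consistent with the nonstoichiometric LaMnO3+δ behaviour noted above.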
Lanthanum manganite
[ "Chemistry" ]
411
[ "Manganates", "Salts" ]
43,058,222
https://en.wikipedia.org/wiki/Organic%20mineral
An organic mineral is an organic compound in mineral form. An organic compound is any compound containing carbon, aside from some simple ones discovered before 1828. There are three classes of organic mineral: hydrocarbons (containing just hydrogen and carbon), salts of organic acids, and miscellaneous. Organic minerals are rare, and tend to have specialized settings such as fossilized cacti and bat guano. Mineralogists have used statistical models to predict that there are more undiscovered organic mineral species than known ones. Definition In general, an organic compound is defined as any compound containing carbon, but some compounds are excepted for historical reasons. Before 1828, chemists thought that organic and inorganic compounds were fundamentally different, with the former requiring a vital force that could only come from living organisms. Then Friedrich Wöhler synthesized urea by heating an inorganic substance called ammonium cyanate, proving that organic compounds could also be created through an inorganic process. Nevertheless, carbon-containing compounds that were already classified as inorganic were not reclassified. These include carbides, simple oxides of carbon such as carbon monoxide and carbon dioxide, carbonates, cyanides and elemental carbon minerals such as graphite and diamond. Organic minerals are rare and difficult to find, often forming crusts on fractures. Early descriptions of organic minerals include mellite in 1793, humboldtine in 1821 and idrialite in 1832. Types of organic mineral In the proposed 10th edition of the Nickel-Strunz classification, organic minerals are one of the ten primary classes of minerals. The class is divided into three subclasses: salts of organic acids, hydrocarbons, and miscellaneous organic minerals. Hydrocarbons As the name implies, hydrocarbon minerals are composed entirely of carbon and hydrogen. Some are inorganic forms of polycyclic aromatic hydrocarbon (PAH) compounds. For example, a rare mineral known as either carpathite, karpatite or pendletonite is nearly pure coronene. Carpathite is deposited as pale yellow flakes in cracks between diorite (an igneous rock) and argillite (a sedimentary rock); it is prized for a beautiful blue fluorescence under ultraviolet light. Other PAH compounds appearing as minerals include fluorene as kratochvilite; and anthracene as ravatite. Others are mixtures: curtisite contains several PAH compounds, including dibenzofluorine, picene, and chrysene, while the most common components of idrialite are tribenzofluorenes. One theory for their formation involves burial of PAH compounds until they reach a temperature where pyrolysis can occur, followed by hydrothermal transport towards the surface, during which the composition of minerals that precipitate out depends on the temperature. Salts of organic acids A salt of an organic acid is a compound in which an organic acid is combined with a base. The largest such group is the oxalates, which combine with cations. A large fraction have water molecules attached; examples include weddellite, whewellite, and zhemchuzhnikovite. Oxalates are often associated with particular fossilized biological materials, for example weddellite with cacti; oxammite with guano and egg shells of birds; glushinskite with lichen; humboldtine, stepanovite and whewellite with leaf litter; and humboldtine, stepanovite and whewellite with coal. 
Where plant material such as tree roots interacts with ore bodies, one can find oxalates with transition metals (moolooite, wheatleyite). Other salts include salts of formate (HCOO−) such as formicaite and dashkovaite; and salts of acetate (CH3COO−) such as acetamide and calclacite. Joanneumite is the first isocyanurate mineral to be officially recognized. Miscellaneous organic compounds Some organic minerals do not fall into the above categories. These include a nickel porphyrin (), which is closely related to biological molecules such as heme (a porphyrin with iron as the cation) and chlorophyll (which has a magnesium cation), but does not itself occur in biological systems. Instead, it is found on the surface of fractures in oil shales. Urea derived from bat guano and urine also occurs as a mineral in very arid conditions. In the Dana and Strunz classifications, amber is considered an organic mineral, but this classification is not approved by the International Mineralogical Association (IMA). Other sources call it a mineraloid because it has no crystal structure. Numbers of carbon minerals As of 2016, the IMA recognized ten hydrocarbon minerals, ten miscellaneous organic minerals, 21 oxalates and over 24 other salts of organic acids. However, Robert Hazen and colleagues analyzed the known species of carbon-bearing minerals using a statistical technique called the Large Number of Rare Events (LNRE) model, and predicted that at least 145 such minerals are yet to be discovered. Many undiscovered organic minerals may be related to known species by various substitutions of cations. Hazen et al. predict that at least three more PAH crystals (pyrene, chrysene and tetracene) should occur as minerals. There are 72 known synthetic oxalates, some of which could occur in nature, particularly near fossil organisms. To encourage the discovery of more carbon minerals, the Deep Carbon Observatory launched an initiative known as the Carbon Mineral Challenge. See also Classification of organic minerals Kerogen References Further reading
Organic mineral
[ "Chemistry" ]
1,136
[ "Organic compounds", "Organic minerals" ]
43,062,422
https://en.wikipedia.org/wiki/Fluorine%20azide
Fluorine azide or triazadienyl fluoride is a yellow-green gas composed of nitrogen and fluorine with the formula FN3. Its properties resemble those of the other halogen azides. The bond between the fluorine atom and the nitrogen is very weak, leading to this substance being very unstable and prone to explosion. Calculations show the F–N–N angle to be around 102°, with the three nitrogen atoms in a straight line. The gas boils at –30 °C and melts at –139 °C. It was first made by John F. Haller in 1942. Reactions Fluorine azide can be made by reacting hydrazoic acid or sodium azide with fluorine gas. Fluorine azide decomposes without explosion at normal temperatures to make dinitrogen difluoride: 2 FN3 → 2 N2 + N2F2. At higher temperatures, such as 1000 °C, fluorine azide breaks up into the nitrogen monofluoride radical: FN3 → FN + N2. The FN itself dimerizes on cooling. Solid or liquid FN3 can explode, releasing a large amount of energy. A thin film burns at the rate of 1.6 km/s. Due to the explosion hazard, only very small quantities of this substance should be handled at a time. FN3 adducts can be formed with the Lewis acids boron trifluoride (BF3) and arsenic pentafluoride (AsF5) at −196 °C. These Lewis acids bond to the nitrogen atom nearest the fluorine. Properties Spectroscopy Shape Distances between atoms are F–N 0.1444 nm, FN=NN 0.1253 nm and FNN=N 0.1132 nm. Physical FN3 has a density of 1.3 g/cm3. FN3 adsorbs onto solid surfaces of potassium fluoride, but not onto lithium fluoride or sodium fluoride. This property was being investigated so that FN3 could boost the energy of solid propellants. The ultraviolet photoelectron spectrum shows ionisation peaks at 11.01, 13.72, 15.6, 15.9, 16.67, 18.2, and 19.7 eV. Respectively these are assigned to the orbitals: π, nN or nF, nF, πF, nN or σ, π and σ. References External links Fluorine compounds Azido compounds Gases with color Explosive gases Explosive chemicals Pseudohalogens
Fluorine azide
[ "Chemistry" ]
482
[ "Explosive chemicals", "Pseudohalogens", "Inorganic compounds", "Explosive gases" ]
43,063,523
https://en.wikipedia.org/wiki/Subhamiltonian%20graph
In graph theory and graph drawing, a subhamiltonian graph is a subgraph of a planar Hamiltonian graph. Definition A graph G is subhamiltonian if G is a subgraph of another graph aug(G) on the same vertex set, such that aug(G) is planar and contains a Hamiltonian cycle. For this to be true, G itself must be planar, and additionally it must be possible to add edges to G, preserving planarity, in order to create a cycle in the augmented graph that passes through each vertex exactly once. The graph aug(G) is called a Hamiltonian augmentation of G. It would be equivalent to define G to be subhamiltonian if G is a subgraph of a Hamiltonian planar graph, without requiring this larger graph to have the same vertex set. That is, for this alternative definition, it should be possible to add both vertices and edges to G to create a Hamiltonian cycle. However, if a graph can be made Hamiltonian by the addition of vertices and edges it can also be made Hamiltonian by the addition of edges alone, so this extra freedom does not change the definition. In a subhamiltonian graph, a subhamiltonian cycle is a cyclic sequence of vertices such that adding an edge between each consecutive pair of vertices in the sequence preserves the planarity of the graph. A graph is subhamiltonian if and only if it has a subhamiltonian cycle. History and applications The class of subhamiltonian graphs (but not this name for them) was introduced by , who proved that these are exactly the graphs with two-page book embeddings. Subhamiltonian graphs and Hamiltonian augmentations have also been applied in graph drawing to problems involving embedding graphs onto universal point sets, simultaneous embedding of multiple graphs, and layered graph drawing. Related graph classes Some classes of planar graphs are necessarily Hamiltonian, and therefore also subhamiltonian; these include the 4-connected planar graphs and the Halin graphs. Every planar graph with maximum degree at most four is subhamiltonian, as is every planar graph with no separating triangles. If the edges of an arbitrary planar graph are subdivided into paths of length two, the resulting subdivided graph is always subhamiltonian. References Graph families Planar graphs Hamiltonian paths and cycles
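In symbols, writing aug(G) for a Hamiltonian augmentation on the same vertex set as in the definition above, the property can be stated compactly (this restatement is not verbatim from the source):

  G=(V,E) \text{ is subhamiltonian} \;\iff\; \exists\, E^{+}\supseteq E \ \text{ such that } \ \mathrm{aug}(G)=(V,E^{+}) \ \text{ is planar and has a Hamiltonian cycle.}

Equivalently, by the book-embedding characterization mentioned in the history section, G is subhamiltonian exactly when it admits a two-page book embedding.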
Subhamiltonian graph
[ "Mathematics" ]
505
[ "Planes (geometry)", "Planar graphs" ]
43,066,470
https://en.wikipedia.org/wiki/Artin%20transfer%20%28group%20theory%29
In the mathematical field of group theory, an Artin transfer is a certain homomorphism from an arbitrary finite or infinite group to the commutator quotient group of a subgroup of finite index. Originally, such mappings arose as group theoretic counterparts of class extension homomorphisms of abelian extensions of algebraic number fields by applying Artin's reciprocity maps to ideal class groups and analyzing the resulting homomorphisms between quotients of Galois groups. However, independently of number theoretic applications, a partial order on the kernels and targets of Artin transfers has recently turned out to be compatible with parent-descendant relations between finite p-groups (with a prime number p), which can be visualized in descendant trees. Therefore, Artin transfers provide a valuable tool for the classification of finite p-groups and for searching and identifying particular groups in descendant trees by looking for patterns defined by the kernels and targets of Artin transfers. These strategies of pattern recognition are useful in purely group theoretic context, as well as for applications in algebraic number theory concerning Galois groups of higher p-class fields and Hilbert p-class field towers. Transversals of a subgroup Let be a group and be a subgroup of finite index Definitions. A left transversal of in is an ordered system of representatives for the left cosets of in such that Similarly a right transversal of in is an ordered system of representatives for the right cosets of in such that Remark. For any transversal of in , there exists a unique subscript such that , resp. . Of course, this element with subscript which represents the principal coset (i.e., the subgroup itself) may be, but need not be, replaced by the neutral element . Lemma. Let be a non-abelian group with subgroup . Then the inverse elements of a left transversal of in form a right transversal of in . Moreover, if is a normal subgroup of , then any left transversal is also a right transversal of in . Proof. Since the mapping is an involution of we see that: For a normal subgroup we have for each . We must check when the image of a transversal under a homomorphism is also a transversal. Proposition. Let be a group homomorphism and be a left transversal of a subgroup in with finite index The following two conditions are equivalent: is a left transversal of the subgroup in the image with finite index Proof. As a mapping of sets maps the union to another union: but weakens the equality for the intersection to a trivial inclusion: Suppose for some : then there exists elements such that Then we have: Conversely if then there exists such that But the homomorphism maps the disjoint cosets to equal cosets: Remark. We emphasize the important equivalence of the proposition in a formula: Permutation representation Suppose is a left transversal of a subgroup of finite index in a group . A fixed element gives rise to a unique permutation of the left cosets of in by left multiplication such that: Using this we define a set of elements called the monomials associated with with respect to : Similarly, if is a right transversal of in , then a fixed element gives rise to a unique permutation of the right cosets of in by right multiplication such that: And we define the monomials associated with with respect to : Definition. The mappings: are called the permutation representation of in the symmetric group with respect to and respectively. Definition. 
The mappings: are called the monomial representation of in with respect to and respectively. Lemma. For the right transversal associated to the left transversal , we have the following relations between the monomials and permutations corresponding to an element : Proof. For the right transversal , we have , for each . On the other hand, for the left transversal , we have This relation simultaneously shows that, for any , the permutation representations and the associated monomials are connected by and for each . Artin transfer Definitions. Let be a group and a subgroup of finite index Assume is a left transversal of in with associated permutation representation such that Similarly let be a right transversal of in with associated permutation representation such that The Artin transfer with respect to is defined as: Similarly we define: Remarks. Isaacs calls the mappings the pre-transfer from to . The pre-transfer can be composed with a homomorphism from into an abelian group to define a more general version of the transfer from to via , which occurs in the book by Gorenstein. Taking the natural epimorphism yields the preceding definition of the Artin transfer in its original form by Schur and by Emil Artin, which has also been dubbed Verlagerung by Hasse. Note that, in general, the pre-transfer is neither independent of the transversal nor a group homomorphism. Independence of the transversal Proposition. The Artin transfers with respect to any two left transversals of in coincide. Proof. Let and be two left transversals of in . Then there exists a unique permutation such that: Consequently: For a fixed element , there exists a unique permutation such that: Therefore, the permutation representation of with respect to is given by which yields: Furthermore, for the connection between the two elements: we have: Finally since is abelian and and are permutations, the Artin transfer turns out to be independent of the left transversal: as defined in formula (5). Proposition. The Artin transfers with respect to any two right transversals of in coincide. Proof. Similar to the previous proposition. Proposition. The Artin transfers with respect to and coincide. Proof. Using formula (4) and being abelian we have: The last step is justified by the fact that the Artin transfer is a homomorphism. This will be shown in the following section. Corollary. The Artin transfer is independent of the choice of transversals and only depends on and . Artin transfers as homomorphisms Theorem. Let be a left transversal of in . The Artin transfer and the permutation representation: are group homomorphisms: Let : Since is abelian and is a permutation, we can change the order of the factors in the product: This relation simultaneously shows that the Artin transfer and the permutation representation are homomorphisms. It is illuminating to restate the homomorphism property of the Artin transfer in terms of the monomial representation. The images of the factors are given by In the last proof, the image of the product turned out to be , which is a very peculiar law of composition discussed in more detail in the following section. The law is reminiscent of crossed homomorphisms in the first cohomology group of a -module , which have the property for . Wreath product of H and S(n) The peculiar structures which arose in the previous section can also be interpreted by endowing the cartesian product with a special law of composition known as the wreath product of the groups and with respect to the set Definition. 
For , the wreath product of the associated monomials and permutations is given by Theorem. With this law of composition on the monomial representation is an injective homomorphism. The homomorphism property has been shown above already. For a homomorphism to be injective, it suffices to show the triviality of its kernel. The neutral element of the group endowed with the wreath product is given by , where the last means the identity permutation. If , for some , then and consequently Finally, an application of the inverse inner automorphism with yields , as required for injectivity. Remark. The monomial representation of the theorem stands in contrast to the permutation representation, which cannot be injective if Remark. Whereas Huppert uses the monomial representation for defining the Artin transfer, we prefer to give the immediate definitions in formulas (5) and (6) and to merely illustrate the homomorphism property of the Artin transfer with the aid of the monomial representation. Composition of Artin transfers Theorem. Let be a group with nested subgroups such that and Then the Artin transfer is the compositum of the induced transfer and the Artin transfer , that is: . If is a left transversal of in and is a left transversal of in , that is and , then is a disjoint left coset decomposition of with respect to . Given two elements and , there exist unique permutations , and , such that Then, anticipating the definition of the induced transfer, we have For each pair of subscripts and , we put , and we obtain resp. Therefore, the image of under the Artin transfer is given by Finally, we want to emphasize the structural peculiarity of the monomial representation which corresponds to the composite of Artin transfers, defining for a permutation , and using the symbolic notation for all pairs of subscripts , . The preceding proof has shown that Therefore, the action of the permutation on the set is given by . The action on the second component depends on the first component (via the permutation ), whereas the action on the first component is independent of the second component . Therefore, the permutation can be identified with the multiplet which will be written in twisted form in the next section. Wreath product of S(m) and S(n) The permutations , which arose as second components of the monomial representation in the previous section, are of a very special kind. They belong to the stabilizer of the natural equipartition of the set into the rows of the corresponding matrix (rectangular array). Using the peculiarities of the composition of Artin transfers in the previous section, we show that this stabilizer is isomorphic to the wreath product of the symmetric groups and with respect to the set , whose underlying set is endowed with the following law of composition: This law reminds of the chain rule for the Fréchet derivative in of the compositum of differentiable functions and between complete normed spaces. The above considerations establish a third representation, the stabilizer representation, of the group in the wreath product , similar to the permutation representation and the monomial representation. As opposed to the latter, the stabilizer representation cannot be injective, in general. For instance, certainly not, if is infinite. Formula (10) proves the following statement. Theorem. The stabilizer representation of the group in the wreath product of symmetric groups is a group homomorphism. 
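The definitions and the homomorphism property above can be checked on a very small example. The following worked illustration uses conventional notation and is not taken from the source: the transfer from G = S_3 to its index-2 subgroup H = A_3. With a left transversal \{t_1, t_2\} = \{e, (12)\} and the coset permutation \pi_g defined by g\,t_i H = t_{\pi_g(i)} H, the Artin transfer can be written as

  \mathrm{Ver}_{G\to H}(g\,G') \;=\; \prod_{i=1}^{2} t_{\pi_g(i)}^{-1}\, g\, t_i \cdot H'.

Since H = A_3 is abelian, H' = 1. For g = (123), both cosets are fixed and \mathrm{Ver}(g) = (123)\cdot (12)(123)(12) = (123)(132) = e; for g = (12), the two cosets are swapped and both factors t_{\pi_g(i)}^{-1} g t_i collapse to e, so again \mathrm{Ver}(g) = e. Thus the transfer S_3 \to A_3 is the trivial homomorphism, as it must be: it factors through S_3/S_3' \cong \mathbb{Z}/2, and the only homomorphism from \mathbb{Z}/2 to A_3 \cong \mathbb{Z}/3 is trivial.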
Cycle decomposition Let be a left transversal of a subgroup of finite index in a group and be its associated permutation representation. Theorem. Suppose the permutation decomposes into pairwise disjoint (and thus commuting) cycles of lengths which is unique up to the ordering of the cycles. More explicitly, suppose for , and Then the image of under the Artin transfer is given by Define for and . This is a left transversal of in since is a disjoint decomposition of into left cosets of . Fix a value of . Then: Define: Consequently, The cycle decomposition corresponds to a double coset decomposition of : It was this cycle decomposition form of the transfer homomorphism which was given by E. Artin in his original 1929 paper. Transfer to a normal subgroup Let be a normal subgroup of finite index in a group . Then we have , for all , and there exists the quotient group of order . For an element , we let denote the order of the coset in , and we let be a left transversal of the subgroup in , where . Theorem. Then the image of under the Artin transfer is given by: . is a cyclic subgroup of order in , and a left transversal of the subgroup in , where and is the corresponding disjoint left coset decomposition, can be refined to a left transversal with disjoint left coset decomposition: of in . Hence, the formula for the image of under the Artin transfer in the previous section takes the particular shape with exponent independent of . Corollary. In particular, the inner transfer of an element is given as a symbolic power: with the trace element of in as symbolic exponent. The other extreme is the outer transfer of an element which generates , that is . It is simply an th power . The inner transfer of an element , whose coset is the principal set in of order , is given as the symbolic power with the trace element of in as symbolic exponent. The outer transfer of an element which generates , that is , whence the coset is generator of with order, is given as the th power Transfers to normal subgroups will be the most important cases in the sequel, since the central concept of this article, the Artin pattern, which endows descendant trees with additional structure, consists of targets and kernels of Artin transfers from a group to intermediate groups between and . For these intermediate groups we have the following lemma. Lemma. All subgroups containing the commutator subgroup are normal. Let . If were not a normal subgroup of , then we had for some element . This would imply the existence of elements and such that , and consequently the commutator would be an element in in contradiction to . Explicit implementations of Artin transfers in the simplest situations are presented in the following section. Computational implementation Abelianization of type (p,p) Let be a p-group with abelianization of elementary abelian type . Then has maximal subgroups of index Lemma. In this particular case, the Frattini subgroup, which is defined as the intersection of all maximal subgroups coincides with the commutator subgroup. Proof. To see this note that due to the abelian type of the commutator subgroup contains all p-th powers and thus we have . For each , let be the Artin transfer homomorphism. 
According to Burnside's basis theorem the group can therefore be generated by two elements such that For each of the maximal subgroups , which are also normal we need a generator with respect to , and a generator of a transversal such that A convenient selection is given by Then, for each we use equations (16) and (18) to implement the inner and outer transfers: , The reason is that in and The complete specification of the Artin transfers also requires explicit knowledge of the derived subgroups . Since is a normal subgroup of index in , a certain general reduction is possible by but a presentation of must be known for determining generators of , whence Abelianization of type (p2,p) Let be a p-group with abelianization of non-elementary abelian type . Then has maximal subgroups of index and subgroups of index For each let be the Artin transfer homomorphisms. Burnside's basis theorem asserts that the group can be generated by two elements such that We begin by considering the first layer of subgroups. For each of the normal subgroups , we select a generator such that . These are the cases where the factor group is cyclic of order . However, for the distinguished maximal subgroup , for which the factor group is bicyclic of type , we need two generators: such that . Further, a generator of a transversal must be given such that , for each . It is convenient to define Then, for each , we have inner and outer transfers: since and . Now we continue by considering the second layer of subgroups. For each of the normal subgroups , we select a generator such that . Among these subgroups, the Frattini subgroup is particularly distinguished. A uniform way of defining generators of a transversal such that , is to set Since , but on the other hand and , for , with the single exception that , we obtain the following expressions for the inner and outer transfers exceptionally The structure of the derived subgroups and must be known to specify the action of the Artin transfers completely. Transfer kernels and targets Let be a group with finite abelianization . Suppose that denotes the family of all subgroups which contain and are therefore necessarily normal, enumerated by a finite index set . For each , let be the Artin transfer from to the abelianization . Definition. The family of normal subgroups is called the transfer kernel type (TKT) of with respect to , and the family of abelianizations (resp. their abelian type invariants) is called the transfer target type (TTT) of with respect to . Both families are also called multiplets whereas a single component will be referred to as a singulet. Important examples for these concepts are provided in the following two sections. Abelianization of type (p,p) Let be a p-group with abelianization of elementary abelian type . Then has maximal subgroups of index . For let denote the Artin transfer homomorphism. Definition. The family of normal subgroups is called the transfer kernel type (TKT) of with respect to . Remark. For brevity, the TKT is identified with the multiplet , whose integer components are given by Here, we take into consideration that each transfer kernel must contain the commutator subgroup of , since the transfer target is abelian. However, the minimal case cannot occur. Remark. A renumeration of the maximal subgroups and of the transfers by means of a permutation gives rise to a new TKT with respect to , identified with , where It is adequate to view the TKTs as equivalent. Since we have the relation between and is given by . 
Therefore, is another representative of the orbit of under the action of the symmetric group on the set of all mappings from where the extension of the permutation is defined by and formally Definition. The orbit of any representative is an invariant of the p-group and is called its transfer kernel type, briefly TKT. Remark. Let denote the counter of total transfer kernels , which is an invariant of the group . In 1980, S. M. Chang and R. Foote proved that, for any odd prime and for any integer , there exist metabelian p-groups having abelianization of type such that . However, for , there do not exist non-abelian -groups with , which must be metabelian of maximal class, such that . Only the elementary abelian -group has . See Figure 5. In the following concrete examples for the counters , and also in the remainder of this article, we use identifiers of finite p-groups in the SmallGroups Library by H. U. Besche, B. Eick and E. A. O'Brien. For , we have for the extra special group of exponent with TKT (Figure 6), for the two groups with TKTs (Figures 8 and 9), for the group with TKT (Figure 4 in the article on descendant trees), for the group with TKT (Figure 6), for the extra special group of exponent with TKT (Figure 6). Abelianization of type (p2,p) Let be a p-group with abelianization of non-elementary abelian type Then possesses maximal subgroups of index and subgroups of index Assumption. Suppose is the distinguished maximal subgroup and is the distinguished subgroup of index which as the intersection of all maximal subgroups, is the Frattini subgroup of . First layer For each , let denote the Artin transfer homomorphism. Definition. The family is called the first layer transfer kernel type of with respect to and , and is identified with , where Remark. Here, we observe that each first layer transfer kernel is of exponent with respect to and consequently cannot coincide with for any , since is cyclic of order , whereas is bicyclic of type . Second layer For each , let be the Artin transfer homomorphism from to the abelianization of . Definition. The family is called the second layer transfer kernel type of with respect to and , and is identified with where Transfer kernel type Combining the information on the two layers, we obtain the (complete) transfer kernel type of the p-group with respect to and . Remark. The distinguished subgroups and are unique invariants of and should not be renumerated. However, independent renumerations of the remaining maximal subgroups and the transfers by means of a permutation , and of the remaining subgroups of index and the transfers by means of a permutation , give rise to new TKTs with respect to and , identified with , where and with respect to and , identified with where It is adequate to view the TKTs and as equivalent. Since we have the relations between and , and and , are given by Therefore, is another representative of the orbit of under the action: of the product of two symmetric groups on the set of all pairs of mappings , where the extensions and of a permutation are defined by and , and formally and Definition. The orbit of any representative is an invariant of the p-group and is called its transfer kernel type, briefly TKT. Connections between layers The Artin transfer is the composition of the induced transfer from to and the Artin transfer There are two options regarding the intermediate subgroups For the subgroups only the distinguished maximal subgroup is an intermediate subgroup. 
For the Frattini subgroup all maximal subgroups are intermediate subgroups. This causes restrictions for the transfer kernel type of the second layer, since and thus But even Furthermore, when with an element of order with respect to , can belong to only if its th power is contained in , for all intermediate subgroups , and thus: , for certain , enforces the first layer TKT singulet , but , for some , even specifies the complete first layer TKT multiplet , that is , for all . Inheritance from quotients The common feature of all parent-descendant relations between finite p-groups is that the parent is a quotient of the descendant by a suitable normal subgroup Thus, an equivalent definition can be given by selecting an epimorphism with Then the group can be viewed as the parent of the descendant . In the following sections, this point of view will be taken, generally for arbitrary groups, not only for finite p-groups. Passing through the abelianization Proposition. Suppose is an abelian group and is a homomorphism. Let denote the canonical projection map. Then there exists a unique homomorphism such that and (See Figure 1). Proof. This statement is a consequence of the second Corollary in the article on the induced homomorphism. Nevertheless, we give an independent proof for the present situation: the uniqueness of is a consequence of the condition which implies for any we have: is a homomorphism, let be arbitrary, then: Thus, the commutator subgroup , and this finally shows that the definition of is independent of the coset representative, TTT singulets Proposition. Assume are as above and is the image of a subgroup The commutator subgroup of is the image of the commutator subgroup of Therefore, induces a unique epimorphism , and thus is a quotient of Moreover, if , then the map is an isomorphism (See Figure 2). Proof. This claim is a consequence of the Main Theorem in the article on the induced homomorphism. Nevertheless, an independent proof is given as follows: first, the image of the commutator subgroup is Second, the epimorphism can be restricted to an epimorphism . According to the previous section, the composite epimorphism factors through by means of a uniquely determined epimorphism such that . Consequently, we have . Furthermore, the kernel of is given explicitly by . Finally, if , then is an isomorphism, since . Definition. Due to the results in the present section, it makes sense to define a partial order on the set of abelian type invariants by putting , when , and , when . TKT singulets Proposition. Assume are as above and is the image of a subgroup of finite index Let and be Artin transfers. If , then the image of a left transversal of in is a left transversal of in , and Moreover, if then (See Figure 3). Proof. Let be a left transversal of in . Then we have a disjoint union: Consider the image of this disjoint union, which is not necessarily disjoint, and let We have: Let be the epimorphism from the previous proposition. We have: Since , the right hand side equals , if is a left transversal of in , which is true when Therefore, Consequently, implies the inclusion Finally, if , then by the previous proposition is an isomorphism. Using its inverse we get , which proves Combining the inclusions we have: Definition. In view of the results in the present section, we are able to define a partial order of transfer kernels by setting , when TTT and TKT multiplets Assume are as above and that and are isomorphic and finite. 
Let denote the family of all subgroups containing (making it a finite family of normal subgroups). For each let: Take be any non-empty subset of . Then it is convenient to define , called the (partial) transfer kernel type (TKT) of with respect to , and called the (partial) transfer target type (TTT) of with respect to . Due to the rules for singulets, established in the preceding two sections, these multiplets of TTTs and TKTs obey the following fundamental inheritance laws: Inheritance Law I. If , then , in the sense that , for each , and , in the sense that , for each . Inheritance Law II. If , then , in the sense that , for each , and , in the sense that , for each . Inherited automorphisms A further inheritance property does not immediately concern Artin transfers but will prove to be useful in applications to descendant trees. Inheritance Law III. Assume are as above and If then there exists a unique epimorphism such that . If then Proof. Using the isomorphism we define: First we show this map is well-defined: The fact that is surjective, a homomorphism and satisfies are easily verified. And if , then injectivity of is a consequence of Let be the canonical projection then there exists a unique induced automorphism such that , that is, The reason for the injectivity of is that since is a characteristic subgroup of . Definition. is called a σ−group, if there exists such that the induced automorphism acts like the inversion on , that is for all The Inheritance Law III asserts that, if is a σ−group and , then is also a σ−group, the required automorphism being . This can be seen by applying the epimorphism to the equation which yields Stabilization criteria In this section, the results concerning the inheritance of TTTs and TKTs from quotients in the previous section are applied to the simplest case, which is characterized by the following Assumption. The parent of a group is the quotient of by the last non-trivial term of the lower central series of , where denotes the nilpotency class of . The corresponding epimorphism from onto is the canonical projection, whose kernel is given by . Under this assumption, the kernels and targets of Artin transfers turn out to be compatible with parent-descendant relations between finite p-groups. Compatibility criterion. Let be a prime number. Suppose that is a non-abelian finite p-group of nilpotency class . Then the TTT and the TKT of and of its parent are comparable in the sense that and . The simple reason for this fact is that, for any subgroup , we have , since . For the remaining part of this section, the investigated groups are supposed to be finite metabelian p-groups with elementary abelianization of rank , that is of type . Partial stabilization for maximal class. A metabelian p-group of coclass and of nilpotency class shares the last components of the TTT and of the TKT with its parent . More explicitly, for odd primes , we have and for . This criterion is due to the fact that implies , for the last maximal subgroups of . The condition is indeed necessary for the partial stabilization criterion. For odd primes , the extra special -group of order and exponent has nilpotency class only, and the last components of its TKT are strictly smaller than the corresponding components of the TKT of its parent which is the elementary abelian -group of type . 
For , both extra special -groups of coclass and class , the ordinary quaternion group with TKT and the dihedral group with TKT , have strictly smaller last two components of their TKTs than their common parent with TKT . Total stabilization for maximal class and positive defect. A metabelian p-group of coclass and of nilpotency class , that is, with index of nilpotency , shares all components of the TTT and of the TKT with its parent , provided it has positive defect of commutativity . Note that implies , and we have for all . This statement can be seen by observing that the conditions and imply , for all the maximal subgroups of . The condition is indeed necessary for total stabilization. To see this it suffices to consider the first component of the TKT only. For each nilpotency class , there exist (at least) two groups with TKT and with TKT , both with defect , where the first component of their TKT is strictly smaller than the first component of the TKT of their common parent . Partial stabilization for non-maximal class. Let be fixed. A metabelian 3-group with abelianization , coclass and nilpotency class shares the last two (among the four) components of the TTT and of the TKT with its parent . This criterion is justified by the following consideration. If , then for the last two maximal subgroups of . The condition is indeed unavoidable for partial stabilization, since there exist several -groups of class , for instance those with SmallGroups identifiers , such that the last two components of their TKTs are strictly smaller than the last two components of the TKT of their common parent . Total stabilization for non-maximal class and cyclic centre. Again, let be fixed. A metabelian 3-group with abelianization , coclass , nilpotency class and cyclic centre shares all four components of the TTT and of the TKT with its parent . The reason is that, due to the cyclic centre, we have for all four maximal subgroups of . The condition of a cyclic centre is indeed necessary for total stabilization, since for a group with bicyclic centre there occur two possibilities. Either is also bicyclic, whence is never contained in , or is cyclic but is never contained in . Summarizing, we can say that the last four criteria underpin the fact that Artin transfers provide a marvellous tool for classifying finite p-groups. In the following sections, it will be shown how these ideas can be applied for endowing descendant trees with additional structure, and for searching particular groups in descendant trees by looking for patterns defined by the kernels and targets of Artin transfers. These strategies of pattern recognition are useful in pure group theory and in algebraic number theory. Structured descendant trees (SDTs) This section uses the terminology of descendant trees in the theory of finite p-groups. In Figure 4, a descendant tree with modest complexity is selected exemplarily to demonstrate how Artin transfers provide additional structure for each vertex of the tree. More precisely, the underlying prime is , and the chosen descendant tree is actually a coclass tree having a unique infinite mainline, branches of depth , and strict periodicity of length setting in with branch . The initial pre-period consists of branches and with exceptional structure. Branches and form the primitive period such that , for odd , and , for even . The root of the tree is the metabelian -group with identifier , that is, a group of order and with counting number . 
This root is not coclass settled, whence its entire descendant tree is of considerably higher complexity than the coclass- subtree , whose first six branches are drawn in the diagram of Figure 4. The additional structure can be viewed as a sort of coordinate system in which the tree is embedded. The horizontal abscissa is labelled with the transfer kernel type (TKT) , and the vertical ordinate is labelled with a single component of the transfer target type (TTT). The vertices of the tree are drawn in such a manner that members of periodic infinite sequences form a vertical column sharing a common TKT. On the other hand, metabelian groups of a fixed order, represented by vertices of depth at most , form a horizontal row sharing a common first component of the TTT. (To discourage any incorrect interpretations, we explicitly point out that the first component of the TTT of non-metabelian groups or metabelian groups, represented by vertices of depth , is usually smaller than expected, due to stabilization phenomena!) The TTT of all groups in this tree represented by a big full disk, which indicates a bicyclic centre of type , is given by with varying first component , the nearly homocyclic abelian -group of order , and fixed further components and , where the abelian type invariants are either written as orders of cyclic components or as their -logarithms with exponents indicating iteration. (The latter notation is employed in Figure 4.) Since the coclass of all groups in this tree is , the connection between the order and the nilpotency class is given by . Pattern recognition For searching a particular group in a descendant tree by looking for patterns defined by the kernels and targets of Artin transfers, it is frequently adequate to reduce the number of vertices in the branches of a dense tree with high complexity by sifting groups with desired special properties, for example filtering the -groups, eliminating a set of certain transfer kernel types, cancelling all non-metabelian groups (indicated by small contour squares in Fig. 4), removing metabelian groups with cyclic centre (denoted by small full disks in Fig. 4), cutting off vertices whose distance from the mainline (depth) exceeds some lower bound, combining several different sifting criteria. The result of such a sieving procedure is called a pruned descendant tree with respect to the desired set of properties. However, in any case, it should be avoided that the main line of a coclass tree is eliminated, since the result would be a disconnected infinite set of finite graphs instead of a tree. For example, it is neither recommended to eliminate all -groups in Figure 4 nor to eliminate all groups with TKT . In Figure 4, the big double contour rectangle surrounds the pruned coclass tree , where the numerous vertices with TKT are completely eliminated. This would, for instance, be useful for searching a -group with TKT and first component of the TTT. In this case, the search result would even be a unique group. We expand this idea further in the following detailed discussion of an important example. Historical example The oldest example of searching for a finite p-group by the strategy of pattern recognition via Artin transfers goes back to 1934, when A. Scholz and O. 
Taussky tried to determine the Galois group of the Hilbert -class field tower, that is the maximal unramified pro- extension , of the complex quadratic number field They actually succeeded in finding the maximal metabelian quotient of , that is the Galois group of the second Hilbert -class field of . However, it needed years until M. R. Bush and D. C. Mayer, in 2012, provided the first rigorous proof that the (potentially infinite) -tower group coincides with the finite -group of derived length , and thus the -tower of has exactly three stages, stopping at the third Hilbert -class field of . The search is performed with the aid of the p-group generation algorithm by M. F. Newman and E. A. O'Brien. For the initialization of the algorithm, two basic invariants must be determined. Firstly, the generator rank of the p-groups to be constructed. Here, we have and is given by the -class rank of the quadratic field . Secondly, the abelian type invariants of the -class group of . These two invariants indicate the root of the descendant tree which will be constructed successively. Although the p-group generation algorithm is designed to use the parent-descendant definition by means of the lower exponent-p central series, it can be fitted to the definition with the aid of the usual lower central series. In the case of an elementary abelian p-group as root, the difference is not very big. So we have to start with the elementary abelian -group of rank two, which has the SmallGroups identifier , and to construct the descendant tree . We do that by iterating the p-group generation algorithm, taking suitable capable descendants of the previous root as the next root, always executing an increment of the nilpotency class by a unit. As explained at the beginning of the section Pattern recognition, we must prune the descendant tree with respect to the invariants TKT and TTT of the -tower group , which are determined by the arithmetic of the field as (exactly two fixed points and no transposition) and . Further, any quotient of must be a -group, enforced by number theoretic requirements for the quadratic field . The root has only a single capable descendant of type . In terms of the nilpotency class, is the class- quotient of and is the class- quotient of . Since the latter has nuclear rank two, there occurs a bifurcation , where the former component can be eliminated by the stabilization criterion for the TKT of all -groups of maximal class. Due to the inheritance property of TKTs, only the single capable descendant qualifies as the class- quotient of . There is only a single capable -group among the descendants of . It is the class- quotient of and has nuclear rank two. This causes the essential bifurcation in two subtrees belonging to different coclass graphs and . The former contains the metabelian quotient of with two possibilities , which are not balanced with relation rank bigger than the generator rank. The latter consists entirely of non-metabelian groups and yields the desired -tower group as one among the two Schur -groups and with . Finally the termination criterion is reached at the capable vertices and , since the TTT is too big and will even increase further, never returning to . The complete search process is visualized in Table 1, where, for each of the possible successive p-quotients of the -tower group of , the nilpotency class is denoted by , the nuclear rank by , and the p-multiplicator rank by . 
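For orientation in the commutator calculus of the next section, it may help to record the two basic shapes the transfer takes for a normal subgroup of prime index. These are standard formulas, stated here in conventional notation rather than the notation of the sections above. Let H be normal in G with [G:H] = p prime, fix g \in G \setminus H, and use the transversal \{1, g, \dots, g^{p-1}\}. Then, modulo H',

  V_{G\to H}(h\,G') \;=\; \Big(\prod_{j=0}^{p-1} g^{-j}\, h\, g^{j}\Big) H' \;=\; h^{\,1+g+\cdots+g^{p-1}} H' \quad \text{for } h \in H \ (\text{the inner transfer}),
  V_{G\to H}(g\,G') \;=\; g^{p}\, H' \quad (\text{the outer transfer}).

The first formula is the symbolic power by the trace element 1+g+\cdots+g^{p-1}; the second reflects the fact that left multiplication by g permutes the p cosets in a single p-cycle.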
Commutator calculus This section shows exemplarily how commutator calculus can be used for determining the kernels and targets of Artin transfers explicitly. As a concrete example we take the metabelian -groups with bicyclic centre, which are represented by big full disks as vertices, of the coclass tree diagram in Figure 4. They form ten periodic infinite sequences, four, resp. six, for even, resp. odd, nilpotency class , and can be characterized with the aid of a parametrized polycyclic power-commutator presentation: where is the nilpotency class, with is the order, and are parameters. The transfer target type (TTT) of the group depends only on the nilpotency class , is independent of the parameters , and is given uniformly by . This phenomenon is called a polarization, more precisely a uni-polarization, at the first component. The transfer kernel type (TKT) of the group is independent of the nilpotency class , but depends on the parameters , and is given by c.18, , for (a mainline group), H.4, , for (two capable groups), E.6, , for (a terminal group), and E.14, , for (two terminal groups). For even nilpotency class, the two groups of types H.4 and E.14, which differ in the sign of the parameter only, are isomorphic. These statements can be deduced by means of the following considerations. As a preparation, it is useful to compile a list of some commutator relations, starting with those given in the presentation, for and for , which shows that the bicyclic centre is given by . By means of the right product rule and the right power rule , we obtain , , and , for . The maximal subgroups of are taken in a similar way as in the section on the computational implementation, namely Their derived subgroups are crucial for the behavior of the Artin transfers. By making use of the general formula , where , and where we know that in the present situation, it follows that Note that is not far from being abelian, since is contained in the centre . As the first main result, we are now in the position to determine the abelian type invariants of the derived quotients: the unique quotient which grows with increasing nilpotency class , since for even and for odd , since generally , but for , whereas for and . Now we come to the kernels of the Artin transfer homomorphisms . It suffices to investigate the induced transfers and to begin by finding expressions for the images of elements , which can be expressed in the form First, we exploit outer transfers as much as possible: Next, we treat the unavoidable inner transfers, which are more intricate. For this purpose, we use the polynomial identity to obtain: Finally, we combine the results: generally and in particular, To determine the kernels, it remains to solve the equations: The following equivalences, for any , finish the justification of the statements: both arbitrary . with arbitrary , with arbitrary , , Consequently, the last three components of the TKT are independent of the parameters which means that both, the TTT and the TKT, reveal a uni-polarization at the first component. Systematic library of SDTs The aim of this section is to present a collection of structured coclass trees (SCTs) of finite p-groups with parametrized presentations and a succinct summary of invariants. The underlying prime is restricted to small values . The trees are arranged according to increasing coclass and different abelianizations within each coclass. To keep the descendant numbers manageable, the trees are pruned by eliminating vertices of depth bigger than one. 
Further, we omit trees where stabilization criteria enforce a common TKT of all vertices, since we do not consider such trees as structured any more. The invariants listed include pre-period and period length, depth and width of branches, uni-polarization, TTT and TKT, -groups. We refrain from giving justifications for invariants, since the way how invariants are derived from presentations was demonstrated exemplarily in the section on commutator calculus Coclass 1 For each prime , the unique tree of p-groups of maximal class is endowed with information on TTTs and TKTs, that is, for for , and for . In the last case, the tree is restricted to metabelian -groups. The -groups of coclass in Figure 5 can be defined by the following parametrized polycyclic pc-presentation, quite different from Blackburn's presentation. where the nilpotency class is , the order is with , and are parameters. The branches are strictly periodic with pre-period and period length , and have depth and width . Polarization occurs for the third component and the TTT is , only dependent on and with cyclic . The TKT depends on the parameters and is for the dihedral mainline vertices with , for the terminal generalized quaternion groups with , and for the terminal semi dihedral groups with . There are two exceptions, the abelian root with and , and the usual quaternion group with and . The -groups of coclass in Figure 6 can be defined by the following parametrized polycyclic pc-presentation, slightly different from Blackburn's presentation. where the nilpotency class is , the order is with , and are parameters. The branches are strictly periodic with pre-period and period length , and have depth and width . Polarization occurs for the first component and the TTT is , only dependent on and . The TKT depends on the parameters and is for the mainline vertices with for the terminal vertices with for the terminal vertices with , and for the terminal vertices with . There exist three exceptions, the abelian root with , the extra special group of exponent with and , and the Sylow -subgroup of the alternating group with . Mainline vertices and vertices on odd branches are -groups. The metabelian -groups of coclass in Figure 7 can be defined by the following parametrized polycyclic pc-presentation, slightly different from Miech's presentation. where the nilpotency class is , the order is with , and are parameters. The (metabelian!) branches are strictly periodic with pre-period and period length , and have depth and width . (The branches of the complete tree, including non-metabelian groups, are only virtually periodic and have bounded width but unbounded depth!) Polarization occurs for the first component and the TTT is , only dependent on and the defect of commutativity . The TKT depends on the parameters and is for the mainline vertices with for the terminal vertices with for the terminal vertices with , and for the vertices with . There exist three exceptions, the abelian root with , the extra special group of exponent with and , and the group with . Mainline vertices and vertices on odd branches are -groups. Coclass 2 Abelianization of type (p,p) Three coclass trees, , and for , are endowed with information concerning TTTs and TKTs. On the tree , the -groups of coclass with bicyclic centre in Figure 8 can be defined by the following parametrized polycyclic pc-presentation. where the nilpotency class is , the order is with , and are parameters. 
The branches are strictly periodic with pre-period and period length , and have depth and width . Polarization occurs for the first component and the TTT is , only dependent on . The TKT depends on the parameters and is for the mainline vertices with , for the capable vertices with , for the terminal vertices with , and for the terminal vertices with . Mainline vertices and vertices on even branches are -groups. On the tree , the -groups of coclass with bicyclic centre in Figure 9 can be defined by the following parametrized polycyclic pc-presentation. where the nilpotency class is , the order is with , and are parameters. The branches are strictly periodic with pre-period and period length , and have depth and width . Polarization occurs for the second component and the TTT is , only dependent on . The TKT depends on the parameters and is for the mainline vertices with , for the capable vertices with , for the terminal vertices with , and for the terminal vertices with . Mainline vertices and vertices on even branches are -groups. Abelianization of type (p2,p) and for , and for . Abelianization of type (p,p,p) for , and for . Coclass 3 Abelianization of type (p2,p) , and for . Abelianization of type (p,p,p) and for , and for . Arithmetical applications In algebraic number theory and class field theory, structured descendant trees (SDTs) of finite p-groups provide an excellent tool for visualizing the location of various non-abelian p-groups associated with algebraic number fields , displaying additional information about the groups in labels attached to corresponding vertices, and emphasizing the periodicity of occurrences of the groups on branches of coclass trees. For instance, let be a prime number, and assume that denotes the second Hilbert p-class field of an algebraic number field , that is the maximal metabelian unramified extension of of degree a power of . Then the second p-class group of is usually a non-abelian p-group of derived length and frequently permits to draw conclusions about the entire p''-class field tower of , that is the Galois group of the maximal unramified pro-p extension of . Given a sequence of algebraic number fields with fixed signature , ordered by the absolute values of their discriminants , a suitable structured coclass tree (SCT) , or also the finite sporadic part of a coclass graph , whose vertices are entirely or partially realized by second p-class groups of the fields is endowed with additional arithmetical structure when each realized vertex , resp. , is mapped to data concerning the fields such that . Example To be specific, let and consider complex quadratic fields with fixed signature having -class groups with type invariants . See OEIS A242863 . Their second -class groups have been determined by D. C. Mayer for the range , and, most recently, by N. Boston, M. R. Bush and F. Hajir for the extended range . Let us firstly select the two structured coclass trees (SCTs) and , which are known from Figures 8 and 9 already, and endow these trees with additional arithmetical structure by surrounding a realized vertex with a circle and attaching an adjacent underlined boldface integer which gives the minimal absolute discriminant such that is realized by the second -class group . Then we obtain the arithmetically structured coclass trees (ASCTs) in Figures 10 and 11, which, in particular, give an impression of the actual distribution of second -class groups. See OEIS A242878 . 
Concerning the periodicity of occurrences of second -class groups of complex quadratic fields, it was proved that only every other branch of the trees in Figures 10 and 11 can be populated by these metabelian -groups and that the distribution sets in with a ground state (GS) on branch and continues with higher excited states (ES) on the branches with even . This periodicity phenomenon is underpinned by three sequences with fixed TKTs E.14 , OEIS A247693 , E.6 , OEIS A247692 , H.4 , OEIS A247694 on the ASCT , and by three sequences with fixed TKTs E.9 , OEIS A247696 , E.8 , OEIS A247695 , G.16 ,OEIS A247697 on the ASCT . Up to now, the ground state and three excited states are known for each of the six sequences, and for TKT E.9 even the fourth excited state occurred already. The minimal absolute discriminants of the various states of each of the six periodic sequences are presented in Table 2. Data for the ground states (GS) and the first excited states (ES1) has been taken from D. C. Mayer, most recent information on the second, third and fourth excited states (ES2, ES3, ES4) is due to N. Boston, M. R. Bush and F. Hajir. In contrast, let us secondly select the sporadic part of the coclass graph for demonstrating that another way of attaching additional arithmetical structure to descendant trees is to display the counter of hits of a realized vertex by the second -class group of fields with absolute discriminants below a given upper bound , for instance . With respect to the total counter of all complex quadratic fields with -class group of type and discriminant , this gives the relative frequency as an approximation to the asymptotic density of the population in Figure 12 and Table 3. Exactly four vertices of the finite sporadic part of are populated by second -class groups : , OEIS A247689 , , OEIS A247690 , , OEIS A242873 , , OEIS A247688 . Comparison of various primes Now let and consider complex quadratic fields with fixed signature and p-class groups of type . The dominant part of the second p-class groups of these fields populates the top vertices of order of the sporadic part of the coclass graph , which belong to the stem of P. Hall's isoclinism family , or their immediate descendants of order . For primes , the stem of consists of regular p-groups and reveals a rather uniform behaviour with respect to TKTs and TTTs, but the seven -groups in the stem of are irregular. We emphasize that there also exist several ( for and for ) infinitely capable vertices in the stem of which are partially roots of coclass trees. However, here we focus on the sporadic vertices which are either isolated Schur -groups ( for and for ) or roots of finite trees within ( for each ). For , the TKT of Schur -groups is a permutation whose cycle decomposition does not contain transpositions, whereas the TKT of roots of finite trees is a compositum of disjoint transpositions having an even number ( or ) of fixed points. We endow the forest (a finite union of descendant trees) with additional arithmetical structure by attaching the minimal absolute discriminant to each realized'' vertex . The resulting structured sporadic coclass graph is shown in Figure 13 for , in Figure 14 for , and in Figure 15 for . References Group theory Class field theory
Artin transfer (group theory)
[ "Mathematics" ]
11,203
[ "Group theory", "Fields of abstract algebra" ]
30,788,646
https://en.wikipedia.org/wiki/Plasma%20Science%20Society%20of%20India
Plasma Science Society of India was founded in 1978 at the Institute for Plasma Research, Ahmedabad, India, for the benefit of the fusion community working on plasma. The society promotes theoretical and experimental fusion research and is associated with India Science, Technology and Innovation. Devices connected with this research programme include SST-1, the SINP tokamak, the Aditya tokamak and SST-2 (DEMO), which are directed towards generating electricity from fusion. The society has over 950 life members along with a number of annual members. References External links Nuclear fusion Scientific organisations based in India
Plasma Science Society of India
[ "Physics", "Chemistry" ]
121
[ "Nuclear fusion", "Nuclear chemistry stubs", "Nuclear physics" ]
30,790,943
https://en.wikipedia.org/wiki/Blake%20number
In fluid mechanics, the Blake number is a nondimensional number showing the ratio of inertial force to viscous force. It is used in momentum transfer in general and in particular for flow of a fluid through beds of solids. It is a generalisation of the Reynolds number for flow through porous media. It is named after the US chemist Frank C. Blake (1892–1926). Expressed mathematically, the Blake number is Bl = u ρ D_h / (μ (1 − ε)), where ε is the void fraction, μ the dynamic viscosity, ρ the fluid density, D_h the hydraulic diameter and u the flow velocity. References Fluid dynamics Porous media Dimensionless numbers of physics
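As an illustration of the definition just given, the following minimal Python sketch evaluates the Blake number from its constituent quantities; the function name, the use of SI units and the example values are illustrative assumptions, not part of the original article.

def blake_number(u, rho, d_h, mu, void_fraction):
    # Ratio of inertial to viscous forces for flow through a packed bed:
    # Bl = u * rho * D_h / (mu * (1 - eps)), all quantities in SI units.
    return (u * rho * d_h) / (mu * (1.0 - void_fraction))

# Example: water creeping through a packed bed of small spheres.
print(blake_number(u=0.01, rho=998.0, d_h=2e-3, mu=1e-3, void_fraction=0.4))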
Blake number
[ "Chemistry", "Materials_science", "Engineering" ]
166
[ "Materials science stubs", "Porous media", "Chemical engineering", "Materials science", "Piping", "Fluid dynamics stubs", "Fluid dynamics" ]
30,791,232
https://en.wikipedia.org/wiki/Decorative%20laminate
Decorative laminates are laminated products primarily used as furniture surface materials or wall paneling. It can be manufactured as either high- or low-pressure laminate, with the two processes not much different from each other except for the pressure applied in the pressing process. Also, laminate can be produced either in batches or in a continuous process; the latter is called continuous pressure laminate (CPL). High-pressure laminate (HPL) According to McGraw-Hill Dictionary of Architecture & Construction, high-pressure laminates consists of laminates "molded and cured at pressures not lower than and more commonly in the range of ." HPL is made of resin impregnated cellulose layers, which are consolidated under heat and high pressure. The various layers are described below: Overlay paper, which serves to improve the abrasion, scratch and heat-resistance Decorative paper, which defines the design and is composed of colored or printed paper Kraft paper, which is used as core material and control product thickness. Trade names include Formica, Arborite, Greenlam, Wilsonart, GW-HPL, Micarta and Trespa. Manufacturing After the Kraft papers are impregnated with the resins, the three layers of paper/resin are placed into a press which simultaneously applies heat (120 °C) and pressure (>70 bars). The pressing operation allows the thermoset resins to flow into the paper, then subsequently cure into a consolidated sheet with a density greater than . During the press cycle, the decorative surface can also be cured while in contact with a textured surface to create one of many different surface finishes. HPL consists of more than 60 to 70% Kraft paper, with the remaining 30 to 40% a combination of phenol-formaldehyde resin for the core layers mostly, and melamine-formaldehyde resin for the surface layer. Both resins belong to a class of thermosetting resins which crosslink during the press cycle creating irreversible chemical bonds that produce a nonreactive, stable material with characteristics different and superior to those of the component parts. HPL can be produced using both continuous and discontinuous (batch) manufacturing processes. HPLs are supplied in sheet form, or compact form, in a variety of sizes, thicknesses and surface finishes. Low-pressure laminate Low-pressure laminate is defined as "a plastic laminate molded and cured at pressures in general of ". Quality standards There are various industrial standards specifically applied for high-pressure decorative laminates: European Standard EN438 The European Standard EN438 is one of the standards that most decorative laminates manufacturers selling to worldwide market adhere to. The specific code is EN438, entitled: Decorative high-pressure laminates (HPL) sheets based on thermosetting resins, specifications. It replaced all other national European standards. The specific part of EN438 which applies to high-pressure laminates is Part 3. The full title to this standard is: High-pressure decorative laminates (HPL) Sheets based on thermosetting resins (Usually called laminates) Part 3: Classification and specifications for laminates less than 2 mm thick intended for bonding to supporting substrates. In total there are 9 parts to the EN438. 
Decorative laminates are grouped into the following types according to EN 438: Type S (standard grade) - The characteristic properties of this grade are hard, virtually wear and scratch proof surfaces, high resistance to impact, insensitivity to boiling water and a number of typical household chemicals, as well as a pronounced resistance to dry and humid heat. The back side of decorative laminate is designed to allow defect free bonding to a substrate such as MDF or chipboard. Type P (postforming grade) - The properties of this grade are generally equivalent to type S, but is capable of being postformed at fixed temperature conditions according to the manufacturers specifications. Type F (fire-retardant grade) - The properties of this grade are generally equivalent to type S, but feature increased resistance to fire. Product specifications applicable to HPL include the nine parts of EN 438 and the two parts of ISO 4586 as shown below: EN 438-1: Introduction and general information EN 438-2: Determination of properties EN 438-3: Classification and specifications for laminates less than 2 mm thick intended for bonding to supporting substrates EN 438-4: Classification and specifications for compact laminates of thickness 2 mm and greater EN 438-5: Classification and specifications for flooring grade laminates less than 2 mm thick intended for bonding to supporting substrates EN 438-6: Classification and specifications for exterior grade compact laminates of thickness 2 mm and greater EN 438-7: Compact laminate and HPL composite panels for internal and external wall and ceiling finishes EN 438-8: Classification and specifications for design laminates EN 438-9 Classification and specifications for alternative core laminates ISO 4586-1: High-pressure decorative laminates—Sheets made from thermosetting resins—Part 1: Classification and specifications ISO 4586-1: High-pressure decorative laminates—Sheets made from thermosetting resins—Part 2: Determination of properties Antibacterial Antibacterial properties are important for decorative laminates because these laminates are used as kitchen tops and counter tops, cabinets and table tops that may be in constant contact with food materials and younger children. Antibacterial properties are there to ensure that bacterial growth is minimal. One of the standards for Anti-Bacterial is the ISO 22196:2007, which is based on the Japanese Industrial Standards (JIS), code Z2801. This is one of the standards most often referred to in the industry with regards to tests on microbial activities (specifically bacteria) and in the JIS Z2801, two bacteria species are used as a standard, namely E. Coli and Staphylococcus aureus. However, some companies may have the initiative to test more than just these two bacteria and may also replace Staphylococcus aureus with MRSA, the methicillin-resistant version of the same bacteria. Again, different countries may choose to specify different types of microbes for testing especially if they identified some bacteria groups which are more intimidating in their countries due to specific reasons. Anti-fungi A common anti-fungi standard is the ASTM G21-09. Not all manufacturers will take the initiatives for product R&D for anti-Fungi attributes. Manufacturers like Maica Laminates send their products for laboratory tests for certification following the ASTM G21-09 standard, while Formica (South America) partners with Microban Protection, which is a company manufacturing additives, including the anti-bacterial additives. 
Fire-resistant and flame-retardant There are many different standards with regard to the fire-resistant and flame-retardant properties of high-pressure decorative laminates. While different countries may have different standards for the building industry to adhere to, most countries agree on some of the more common standards used in the industry. Very often, just like other standards applicable to the industry, the tests follow European Standards with equivalents among US standards. For example, many Commonwealth countries may be comfortable with British Standard 476, especially Parts 6 and 7, while an equivalent US standard exists within ASTM. Others The list of tests applicable to decorative laminates will never be exhaustive. As the technology improves, there will be many more tests to ensure the safety of the products upon use by the end consumer, for example tests on the transfer of surface substances to food prepared on decorative laminate kitchen surfaces. The core tests will then also branch out based on the specific requirements and standards adopted by different countries. "Green" certificates Two of the internationally acknowledged "Green" certificates for decorative laminates are MAS Certified Green and GREENGUARD. The MAS Certified Green and GREENGUARD marks certify that the products have low chemical emissions. Chemicals tested include VOCs, formaldehyde and other harmful particles. The tests are based on a single-occupancy room with outdoor ventilation, following the ANSI/ASHRAE Standard 62.1-2007, Ventilation for Acceptable Indoor Air Quality. GREENGUARD in particular has two main certification levels, GREENGUARD and GREENGUARD GOLD. GREENGUARD GOLD was previously known as GREENGUARD Children and Schools Certified, reflecting its very low allowable chemical emission levels intended to ensure the safety of young children and school environments. There are also many other "Green" certifications, some of which are required by the authorities before the product can be used as a building material. These include the Singapore Green Label, which is recognised by the Global Ecolabelling Network (GEN) and all its member countries. Applications Decorative high-pressure laminates are usually used for furniture tops, especially on flat surfaces, including cabinets and tables. Decorative compact laminates are sometimes constructed as toilet cubicle systems, laboratory tables and kitchen tops. Some new usage models include wall panels with conceptual designs and custom prints. Competition The popularity of large format printing using inkjet printers has produced a cheaper alternative to decorative laminates, minus the quality. For most uninformed consumers, large format prints appear similar to laminates and seem to offer more variety of designs and applications. For example, large format prints can be printed on wall stickers, and then installed on walls. Unlike decorative laminates, there is no special adhesive to be used, and the price may sometimes seem much cheaper comparatively. However, there are health considerations for large format prints because of the solvent inks used, especially with their relatively high concentrations of VOCs. These health considerations may be alleviated with newer wide format technology that uses Eco-Solvent or Latex inks. References External links ICDLI — International Committee of the Decorative Laminates Industry Printing processes Composite materials
Decorative laminate
[ "Physics" ]
2,043
[ "Materials", "Composite materials", "Matter" ]
30,794,160
https://en.wikipedia.org/wiki/Ion-beam%20shepherd
An ion-beam shepherd (IBS) is a concept in which the orbit and/or attitude of a spacecraft or a generic orbiting body is modified by having a beam of quasi-neutral plasma impinging against its surface to create a force and/or a torque on the target. Ion and plasma thrusters commonly used to propel spacecraft can be employed to produce a collimated plasma/ion beam and point it towards the body. The fact that the beam can be generated on a "shepherd" spacecraft placed in proximity of the target without physical attachment with the latter provides an interesting solution for space applications such as space debris removal, asteroid deflection and space transportation in general. The Technical University of Madrid (UPM) is exploring this concept by developing analytical and numerical control models in collaboration with the Advanced Concepts Team of the European Space Agency. The concept has also been proposed independently by JAXA and CNES. How it works The force and torque transmitted to the target originate from the momentum carried out by the plasma ions (typically xenon) which are accelerated to a few tens of kilometer per second by an ion or plasma thruster. The ions that reach the target surface lose their energy following nuclear collision in the substrate of the target material. In order to keep a constant distance between the target and the shepherd spacecraft the latter must carry a secondary propulsion system (e.g. another ion or plasma thruster) compensating for the reaction force created by the targeted ion beam. Applications The concept has been suggested as a possible solution for active space debris removal, as well as for accurate deflection of Earth threatening asteroids. Further in the future the concept could play an important role in areas such as space mobility, transportation, assembly of large orbital infrastructures and small asteroid capturing in Earth orbit. Control Beam divergence angles of ion and plasma thrusters, typically greater than 10 degrees make it necessary to have the shepherd flying not more than a few target diameters away if efficient beam overlap is to be reached. Proximity formation flying guidance and control as well as collision avoidance are among the most critical technological challenges of the concept. References External links http://www.aero.upm.es/ Universidad Politécnica de Madrid (UPM) http://sdg.aero.upm.es/ Space Dynamics Group at UPM https://web.archive.org/web/20120426091458/http://web.fmetsia.upm.es/ep2/page.php?page=index&lang=en Equipo de Propulsión Espacial y Plasmas (UPM) http://www.esa.int/act, ESA Advanced Concepts Team http://leosweep.upm.es/en, LEOSWEEP project Aerospace engineering Planetary defense
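A back-of-the-envelope sketch of the momentum-transfer idea described above; the function name, the numbers and the momentum-interception factor are illustrative assumptions, not values from the article.

def transmitted_force(mass_flow_rate, ion_velocity, intercepted_fraction):
    # The beam's momentum flux (kg/s * m/s) times the fraction of the beam
    # that actually hits the target gives the force imparted to the target.
    return mass_flow_rate * ion_velocity * intercepted_fraction

# Roughly 2 mg/s of xenon at ~30 km/s with 80% beam overlap -> tens of millinewtons.
print(transmitted_force(2e-6, 30e3, 0.8))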
Ion-beam shepherd
[ "Engineering" ]
581
[ "Aerospace engineering" ]
30,796,127
https://en.wikipedia.org/wiki/Wiggle%20stereoscopy
Wiggle stereoscopy is an example of stereoscopy in which left and right images of a stereogram are animated. This technique is also called wiggle 3-D, wobble 3-D, wigglegram, or sometimes Piku-Piku (Japanese for "twitching"). The sense of depth from such images is due to parallax and to changes to the occlusion of background objects. In contrast to other stereo display techniques, the same image is presented to both eyes. Advantages and disadvantages Wiggle stereoscopy offers the advantages that no glasses or special hardware is required; most people can perceive the effect more quickly than when using cross-eyed and parallel viewing techniques. Furthermore, it offers stereo-like depth to people with limited or no vision in one eye. Disadvantages of wiggle stereoscopy are that it does not provide true binocular depth perception; it is not suitable for print media, being limited to displays that can alternate between the two images, and it is difficult to appreciate details in images that are constantly in motion. Number and timing of images Most wiggle images use only two images, yielding a jerky image. A smoother image can be composed by using several intermediate images and using the left and right images as end images of the image sequence. If intermediate images are not available, approximate images can be computed from the end images using techniques known as view interpolation. The two end images may be displayed for a longer time than the intermediate images to allow inspection of details in the left and right images. Another option for reducing the impression of jerkiness is to reduce the time between the frames of a wiggle image. 3D photos from a single image With advances in machine learning and computer vision, it is now also possible to recreate this effect using a single monocular image as an input. In this case one can use a segmentation model combined with a depth estimation model to estimate information relating to the distance of the surfaces of objects in the scene from a given viewpoint for every pixel in that image (known as a depth map), and with that information you can then render that pixel data as if it were 3 dimensional to create a subtle 3D effect. Perception The sense of depth from wiggle 3-D images is due to parallax and to changes to the occlusion of background objects. Although wiggle stereoscopy permits the perception of stereoscopic images, it is not a "true" three-dimensional stereoscopic display format in the sense that wiggle stereoscopy does not present the eyes with their own separate view each. The sense of depth may be enhanced by closing one eye, which removes the conflicting vergence visual cue of both eyes looking at the flat image plane. The apparent stereo effect results from syncing the timing of the wiggle and the amount of parallax to the processing done by the visual cortex. Three or five images with good parallax may produce a better effect than simple left and right images. Wiggling works for the same reason that a transitional pan (or tracking shot) in a film provides good depth information: the visual cortex is able to infer distance information from motion parallax, the relative speed of the perceived motion of different objects on the screen. Many small animals bob their heads to create motion parallax (wiggling) so they can better estimate distance prior to jumping. 
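The single-image technique described above can be sketched with plain array operations. The following Python example shifts each pixel horizontally in proportion to its estimated depth to synthesise a second view for a wiggle pair; the function, the linear depth-to-disparity mapping and the crude hole handling are simplifying assumptions, not a published algorithm.

import numpy as np

def synthesize_view(image, depth, max_disparity=4):
    # image: (H, W) or (H, W, C) array; depth: (H, W) array, larger = nearer.
    # Each pixel is moved sideways by a disparity proportional to its depth;
    # pixels that are not overwritten simply keep their original values.
    h, w = depth.shape
    out = image.copy()
    disparity = np.round(max_disparity * depth / depth.max()).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out

# A wiggle pair is then the original image and the synthesised view,
# alternated at a few frames per second.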
Gallery See also Kinetic depth effect References External links Wigglegram, Android application 3D imaging Animation techniques Optical illusions Articles containing video clips Stereoscopic photography
Wiggle stereoscopy
[ "Physics" ]
718
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
27,952,019
https://en.wikipedia.org/wiki/Deuterium%20fusion
Deuterium fusion, also called deuterium burning, is a nuclear fusion reaction that occurs in stars and some substellar objects, in which a deuterium nucleus (deuteron) and a proton combine to form a helium-3 nucleus. It occurs as the second stage of the proton–proton chain reaction, in which a deuteron formed from two protons fuses with another proton, but can also proceed from primordial deuterium. In protostars Deuterium (²H) is the most easily fused nucleus available to accreting protostars, and such fusion in the center of protostars can proceed when temperatures exceed about 10⁶ K. The reaction rate is so sensitive to temperature that the temperature does not rise very much above this. The energy generated by fusion drives convection, which carries the heat generated to the surface. If there were no ²H available to fuse, then stars would gain significantly less mass in the pre-main-sequence phase, as the object would collapse faster, and more intense hydrogen fusion would occur and prevent the object from accreting matter. ²H fusion allows further accretion of mass by acting as a thermostat that temporarily stops the central temperature from rising above about one million degrees, a temperature not high enough for hydrogen fusion, but allowing time for the accumulation of more mass. When the energy transport mechanism switches from convective to radiative, energy transport slows, allowing the temperature to rise and hydrogen fusion to take over in a stable and sustained way. Hydrogen fusion will begin once the core temperature is high enough. The rate of energy generation is proportional to the product of the deuterium concentration, the density and the temperature raised to the power of 11.8. If the core is in a stable state, the energy generation will be constant. If one variable in the equation increases, the other two must decrease to keep energy generation constant. Because the temperature enters with the power of 11.8, very large changes in either the deuterium concentration or the density produce only a small change in temperature. The deuterium concentration reflects the fact that the gases are a mixture of normal hydrogen, helium and deuterium. The mass surrounding the radiative zone is still rich in deuterium, and deuterium fusion proceeds in an increasingly thin shell that gradually moves outwards as the radiative core of the star grows. The generation of nuclear energy in these low-density outer regions causes the protostar to swell, delaying the gravitational contraction of the object and postponing its arrival on the main sequence. The total energy available from ²H fusion is comparable to that released by gravitational contraction. Due to the scarcity of deuterium in the cosmos, a protostar's supply of it is limited. After a few million years, it will have effectively been completely consumed. In substellar objects Hydrogen fusion requires much higher temperatures and pressures than does deuterium fusion; hence, there are objects massive enough to burn ²H but not massive enough to burn normal hydrogen. These objects are called brown dwarfs, and have masses between about 13 and 80 times the mass of Jupiter. Brown dwarfs may shine for a hundred million years before their deuterium supply is burned out. Objects above the deuterium-fusion minimum mass (deuterium burning minimum mass, DBMM) will fuse all their deuterium in a very short time (~4–50 Myr), whereas objects below that will burn little, and hence, preserve their original ²H abundance. 
"The apparent identification of free-floating objects, or rogue planets below the DBMM would suggest that the formation of star-like objects extends below the DBMM." The onset of deuterium burning is called the deuterium flash. Deuterium-burning-induced instability after this initial deuterium flash was proposed for very low-mass stars in 1964 by M. Gabriel. In this scenario a low-mass star or brown dwarf that is fully convective will become pulsationally unstable because the nuclear reaction is highly sensitive to temperature. This pulsation is hard to observe because the onset of deuterium burning is thought to begin at <0.5 Myr for stars above roughly 0.1 solar masses. At this time protostars are still deeply embedded in their circumstellar envelopes. Brown dwarfs with masses between 20 and 80 Jupiter masses should be easier targets because the onset of deuterium burning occurs at an older age of 1 to 10 Myr. Observations of very low-mass stars failed to detect variability that could be connected to deuterium-burning instability, despite these predictions. Ruíz-Rodríguez et al. proposed that the elliptical carbon monoxide shell around the young brown dwarf SSTc2d J163134.1-24006 is due to a violent deuterium flash, reminiscent of a helium shell flash in old stars. In planets It has been shown that deuterium fusion should also be possible in planets. The mass threshold for the onset of deuterium fusion atop the solid cores is also at roughly 13 Jupiter masses. Other reactions Though fusion with a proton is the dominant way to consume deuterium, other reactions are possible. These include fusion with another deuteron to form helium-3, tritium, or more rarely helium-4, or with helium to form various isotopes of lithium. Pathways include: ²H + ²H → ³H + p (50%); ²H + ²H → ³He + n (50%); ²H + ³H → ⁴He + n; ²H + ³He → ⁴He + p. References Nucleosynthesis Stellar astronomy Definition of planet Nuclear fusion reactions Fusion
Deuterium fusion
[ "Physics", "Chemistry", "Astronomy" ]
1,294
[ "Nuclear fission", "Definition of planet", "Astrophysics", "Nucleosynthesis", "Astronomical controversies", "Nuclear fusion reactions", "Astronomical classification systems", "Nuclear physics", "Nuclear fusion", "Astronomical sub-disciplines", "Stellar astronomy" ]
27,959,040
https://en.wikipedia.org/wiki/Manganese-mediated%20coupling%20reactions
Manganese-mediated coupling reactions are radical coupling reactions between enolizable carbonyl compounds and unsaturated compounds initiated by a manganese(III) salt, typically manganese(III) acetate. Copper(II) acetate is sometimes used as a co-oxidant to assist in the oxidation of intermediate radicals to carbocations. Manganese(III) acetate is effective for the one-electron oxidation of enolizable carbonyl compounds to α-oxoalkyl or α,α'-dioxoalkyl radicals. Radicals generated in this manner may then undergo inter- or intramolecular addition to carbon-carbon multiple bonds. Pathways available to the adduct radical include further oxidation to a carbocation (and subsequent β-elimination or trapping with a nucleophile) and hydrogen abstraction to generate a saturated carbonyl compound containing a new carbon-carbon bond. Copper(II) acetate is sometimes needed to facilitate the oxidation of adduct radicals to carbocations. Yields of these reactions are generally moderate, particularly in the intermolecular case, but tandem intramolecular radical cyclizations initiated by Mn(III) oxidation may generate complex carbocyclic frameworks. Because of the limited functional group compatibility of Mn(OAc)3, radical couplings employing this reagent have mainly been applied to the synthesis of hydrocarbon natural products, such as pheromones. (1) Mechanism Manganese(III)-mediated radical reactions begin with the single-electron oxidation of a carbonyl compound to an α-oxoalkyl radical. Addition to an olefin then occurs, generating adduct radical 2. The fate of 2 is primarily determined by reaction conditions—in the presence of copper(II) acetate, this intermediate undergoes further oxidation to a carbocation and may eliminate to form β,γ-unsaturated ketone 4. Manganese acetate itself can effect the second oxidation of resonance-stabilized adduct radicals to carbocations 5; unstabilized radicals undergo further transformations before reacting with Mn(OAc)3. Atom transfer from another molecule of substrate may generate saturated compound 3. Adduct radicals or carbocations may undergo ligand-transfer reactions, yielding γ-functionalized carbonyl compounds. When lithium chloride is used as an additive, chlorination takes place. Alternatively, carbocations may be trapped intramolecularly by the carbonyl oxygen to form dihydrofurans after β-elimination. (2) Scope The outcomes of manganese-mediated coupling reactions depend on both the structure of the substrate(s) and the reaction conditions. This section describes the scope and limitations of inter- and intramolecular manganese-mediated radical coupling reactions and is organized according to the carbonyl compound employed as the substrate. Intermolecular reactions between ketones/aldehydes and alkenes tend to result in low yields. In the absence of copper(II) acetate, hydrogen atom abstraction occurs, yielding saturated ketones or aldehydes. (3) When Cu(OAc)2 is present, further oxidation to carbocations followed by elimination takes place, leading to the formation of β,γ-unsaturated carbonyl compounds in moderate yields. (4) Aromatic compounds are also useful radical acceptors in manganese(III)-mediated coupling reactions. Furan reacts selectively at the α position to afford substituted products in high yield. (5) Lactonization of alkenes in the presence of acetic acid and acetate salts is a synthetically useful method for the synthesis of γ-lactones. 
Selectivity is high for the radical addition that leads to the more stable adduct radical, and trans lactones are selectively formed from either cis or trans acyclic alkenes. (6) β-Dicarbonyl compounds are useful substrates for the formation of dihydrofurans. Copper(II) acetate is not necessary in this case because of the high resonance stabilization of the intermediate diphenylmethyl radical. (7) When alkenes or carbonyl compounds containing pendant unsaturated moiety are treated with manganese(III) acetate, tandem intramolecular cyclization reactions may occur. Generally, exo cyclization of terminal double bonds is favored, as shown in equation (10). A variety of substitution patterns may be employed for this transformation, and yields are generally higher than intermolecular coupling reactions. (8) The stereochemical course of tandem reactions can be understood in some cases by invoking a chairlike transition state with as many substituents as possible in pseudoequatorial positions; however, a number of examples exhibiting unpredictable stereochemistry are known. (9) Nitriles are useful as radical acceptors in tandem cyclizations. Hydrolysis of the resulting imine leads to polycyclic ketones in moderate yields with good stereoselectivity. (10) Use in organic chemistry Manganese-mediated couplings have been used for the synthesis of hydrocarbon natural products, such as pheromones. A synthesis of queen bee pheromone uses the intermolecular coupling of acetone and an ω-alkenyl acetate en route to the target. (11) Lactonization is a key step in the synthesis of tomato pinworm sex pheromone. Subsequent Lindlar hydrogenation, reduction and acetylation provided the target compound. (12) References Organic reactions
Manganese-mediated coupling reactions
[ "Chemistry" ]
1,154
[ "Organic reactions" ]
27,960,249
https://en.wikipedia.org/wiki/Parasitic%20oscillation
Parasitic oscillation is an undesirable electronic oscillation (cyclic variation in output voltage or current) in an electronic or digital device. It is often caused by feedback in an amplifying device. The problem occurs notably in RF, audio, and other electronic amplifiers as well as in digital signal processing. It is one of the fundamental issues addressed by control theory. Parasitic oscillation is undesirable for several reasons. The oscillations may be coupled into other circuits or radiate as radio waves, causing electromagnetic interference (EMI) to other devices. In audio systems, parasitic oscillations can sometimes be heard as annoying sounds in the speakers or earphones. The oscillations waste power and may cause undesirable heating. For example, an audio power amplifier that goes into parasitic oscillation may generate enough power to damage connected speakers. A circuit that is oscillating will not amplify linearly, so desired signals passing through the stage will be distorted. In digital circuits, parasitic oscillations may only occur on particular logic transitions and may result in erratic operation of subsequent stages; for example, a counter stage may see many spurious pulses and count erratically. Causes Parasitic oscillation in an amplifier stage occurs when part of the output energy is coupled into the input, with the correct phase and amplitude to provide positive feedback at some frequency. The coupling can occur directly between input and output wiring with stray capacitance or mutual inductance between input and output. In some solid-state or vacuum electron devices there is sufficient internal capacitance to provide a feedback path. Since the ground is common to both input and output, output current flowing through the impedance of the ground connection can also couple signals back to the input. Similarly, impedance in the power supply can couple input to output and cause oscillation. When a common power supply is used for several stages of amplification, the supply voltage may vary with the changing current in the output stage. The power supply voltage changes will appear in the input stage as positive feedback. An example is a transistor radio which plays well with a fresh battery, but squeals or "motorboats" when the battery is old. In audio systems, if a microphone is placed close to a loudspeaker, parasitic oscillations may occur. This is caused by positive feedback, from amplifier's output to loudspeaker to sound waves, and back via the microphone to the amplifier input. See Audio feedback. Conditions Feedback control theory was developed to address the problem of parasitic oscillation in servo control systems – the systems oscillated rather than performing their intended function, for example velocity control in engines. The Barkhausen stability criterion gives the necessary condition for oscillation; the loop gain around the feedback loop, which is equal to the amplifier gain multiplied by the transfer function of the inadvertent feedback path, must be equal to one, and the phase shift around the loop must be zero or a multiple of 360° (2π radians). In practice, feedback may occur over a range of frequencies (for example the operating range of an amplifier); at various frequencies, the phase of the amplifier may be different. If there is one frequency where the feedback is positive and the amplitude condition is also fulfilled – the system will oscillate at that frequency. These conditions can be expressed in mathematical terms using the Nyquist plot. 
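The Barkhausen conditions stated above can be checked numerically. In the following Python sketch the three-pole inverting loop is an arbitrary illustrative transfer function, not one taken from the article; the helper simply scans frequency for points where the loop gain reaches unity with zero net phase.

import numpy as np

def barkhausen_candidates(loop_gain, freqs, phase_tol_deg=2.0):
    # Frequencies at which |L(j*2*pi*f)| >= 1 and the loop phase is ~0 (mod 360).
    l = np.array([loop_gain(2j * np.pi * f) for f in freqs])
    mag_ok = np.abs(l) >= 1.0
    phase = np.degrees(np.angle(l)) % 360.0
    phase_ok = np.minimum(phase, 360.0 - phase) < phase_tol_deg
    return freqs[mag_ok & phase_ok]

# An inverting gain of 10 behind three identical 10 kHz poles: the phase reaches
# 0 (mod 360) near 17 kHz while the loop gain is still above unity, so this
# hypothetical stage is predicted to oscillate there.
loop = lambda s: -10.0 / (1 + s / (2 * np.pi * 1e4)) ** 3
print(barkhausen_candidates(loop, np.linspace(1e3, 1e5, 20000)))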
Another method used in control loop theory uses Bode plots of gain and phase vs. frequency. Using Bode plots, a design engineer checks whether there is a frequency where both conditions for oscillations are met: the phase is zero (positive feedback) and the loop gain is 1 or greater. When parasitic oscillations occur, the designer can use the various tools of control loop engineering to correct the situation – to reduce the gain or to change the phase at problematic frequencies. Mitigation Several measures are used to prevent parasitic oscillation. Amplifier circuits are laid out so that input and output wiring are not adjacent, preventing capacitive or inductive coupling. A metal shield may be placed over sensitive portions of the circuit. Bypass capacitors may be put at power supply connections, to provide a low-impedance path for AC signals and prevent interstage coupling through the power supply. Where printed circuit boards are used, high- and low-power stages are separated and ground return traces are arranged so that heavy currents don't flow in mutually shared portions of the ground trace. In some cases the problem may only be solved by introduction of another feedback neutralization network, calculated and adjusted to eliminate the negative feedback within the passband of the amplifying device. A classic example is the Neutrodyne circuit used in tuned radio frequency receivers. See also June 2009 Washington Metro train collision, fatal train crash caused by parasitic oscillation in signal circuits References Control theory Electronic feedback Electronic oscillators Electronic amplifiers Dynamical systems Ordinary differential equations
Parasitic oscillation
[ "Physics", "Mathematics", "Technology" ]
1,030
[ "Applied mathematics", "Control theory", "Mechanics", "Electronic amplifiers", "Amplifiers", "Dynamical systems" ]
44,514,117
https://en.wikipedia.org/wiki/Application%20of%20CFD%20in%20thermal%20power%20plants
Computational fluid dynamics (CFD) are used to understand complex thermal flow regimes in power plants. The thermal power plant may be divided into different subsectors and the CFD analysis applied to critical equipment/components - mainly different types of heat exchangers - which are of crucial significance for efficient and trouble free long-term operation of the plant. Overview The thermal power station subsystem involves multiphase flow, phase transformation and complex chemical reaction associated with conjugate heat transfer. . Methods Finite difference method Finite difference method describes the unknowns of the flow problem by means of point samples at the node points of a grid co-ordinate lines. Taylor series expansions are used to generate finite difference approximations of derivatives in terms of point samples at each grid point and its immediate neighbours. Those derivatives appearing in the governing equations are replaced by finite differences yielding an algebraic equation. Finite element method Finite element method uses piece wise functions valid on elements to describe the local variations of unknown flow variables. Here also a set of algebraic equations are generated to determine unknown co-efficients. Finite volume method Finite volume method is probably the most popular method used for numerical discretization in CFD. This method is similar in some ways to the finite difference method. This approach involves the discretization of the spatial domain into finite control volumes. The governing equations in their differential form are integrated over each control volume. The resulting integral conservation laws are exactly satisfied for each control volume and for the entire domain, which is a distinct advantage of the finite volume method. Each integral term is then converted into a discrete form, thus yielding discretised equations at the centroids, or nodal points, of the control volumes. Application of CFD in thermal power plants Low NOx burner design When fossil fuels are burned, Nitric oxide and Nitrogen dioxide are produced. These pollutants initiate reactions which result in production of ozone and acid rain. NOx formation takes place due to (1) High temperature combustion i.e. thermal NOx and (2)Nitrogen bound to fuel i.e. fuel NOx and which is insignificant. In the majority of cases the level of thermal NOx can be reduced by lowering flame temperature. This can be done by modifying the burner to create a larger (hence lower temperature) flame, in turn reducing the NOx formation. The role of CFD analysis is vital for design and analysis of such low NOx burners. Many available CFD tools, such as CFX, Fluent, Star CCM++ with different models as RNG k-ε turbulence models with hybrid and CONDIF upwind differencing schemes has been used for analysis purpose and the data obtained with these analysis helped in modifying the burner design in turn lowering the adverse effect on the environment due to NOx formation during combustion. CFD analysis of economiser The economiser is a crucial component for efficient performance of a thermal power plant. It is a non-steaming type of heat exchanger which is placed in the convective zone of the furnace. It takes the heat energy of the flue gases for heating the feed water before it enters the boiler drum. The thermal efficiency/boiler efficiency largely depends on the performance of the economiser. 
CFD analysis helps in optimizing the thermal performance of the economiser by analysing the pressure, velocity and temperature distribution, and to identify the critical areas for further improvement with the result obtained by CFD analysis. CFD analysis of superheaters Superheaters, which are generally placed in the radiant zone of the furnace, are used for increasing the temperature of dry saturated steam coming out from boiler drum and to maintain the required parameters before sending it to the steam turbine. The thermal efficiency of a thermal power plant depends on the performance of the superheater. The CFD analysis of superheaters is done at design stage and later at the troubleshooting and performance evaluation during the operation of the plant. The CFD results obtained can be useful for the maintenance engineer to make suitable predictions of the tube life and make suitable arrangements for the high temperature zone to reduce the erosion of the tube coil and restricting the tube leakage problem. CFD analysis consists of modelling the superheater and doing analysis to study the velocity, pressure and temperature distribution of the steam inside the superheater. The uneven temperature distribution of steam in the tube leads to boiler tube leakage. CFD also helps to study the effect of the operating parameters on the tube erosion rate. Thermal power plants operates round the year and it is not always possible to shut down and analyse the problem. CFD helps in this. CFD analysis of pulverized coal combustion In a thermal power plant combustion of fuel, especially pulverized coal, is of significant importance. Proper and complete combustion, with the required proportions of air and fuel, is required for total energy transfer to water for steam generation and to reduce pollutants. CFD models based on fundamental conservation equations of mass, energy, chemical species and momentum can be used to simulate the flow of air and coal through the burners. The results obtained from CFD analyses give insight to identify the potential areas for improvement. CFD application in other areas of thermal power plants There are some other areas of importance where CFD can play a significant role in performance and efficiency improvement. The unbalanced coal/air flow in the pipe systems of coal fired power plants leads to non-uniform combustion in the furnace, and hence an overall lower efficiency of the boiler. A common solution to this problem is to put orifices in the pipe systems to balance the flow. If the orifices are sized to balance clean airflow to individual burners connected to a pulverizer, the coal/airflow would still be unbalanced and vice versa. The CFD with standard k–e two-phase flow model can be used to calculate pressure drop coefficients for the coal/air as well as the clean air flow. The CFD is also used to obtain the numerical solution to address the problem of water wall erosion of the furnace of a thermal power plant. This is caused by flame misalignment, thermal attack and erosion due to the contact with chemicals. The flame misalignment occurs because of alteration in fluid dynamics factors due to burner geometry. CFD results show velocity profiles, pressure profiles, streamlines and other data that is helpful in understanding the fluid flow phenomena inside the equipment. 
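A minimal sketch of the kind of grid-based discretisation described in the Methods section above, applied to one-dimensional steady heat conduction between two fixed temperatures; the grid size, iteration count and boundary values are arbitrary illustrative choices, not data from any plant analysis.

import numpy as np

def steady_conduction_1d(t_left, t_right, n_nodes=21, n_iter=5000):
    # Finite-difference form of d2T/dx2 = 0 with Dirichlet boundaries,
    # relaxed towards the steady profile by repeated averaging of neighbours.
    t = np.full(n_nodes, t_right, dtype=float)
    t[0] = t_left
    for _ in range(n_iter):
        t[1:-1] = 0.5 * (t[:-2] + t[2:])
    return t

# Temperature profile across a wall separating hot flue gas from feed water.
print(steady_conduction_1d(600.0, 300.0))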
It is clearly evident from above examples how crucial is the application of CFD in addressing the bottlenecks in thermal power plants, improving power plant efficiency and assisting in maintenance decisions. References Further reading Krunal .P Mudafle, Hemant S. Farkade "CFD analysis of economizer in a tengential fired boiler", International Journal of Mechanical and Industrial Engineering (IJMIE) ISSN No. 2231 –6477, Vol-2, Iss-4, 2012. Ajay N. Ingale, Vivek C. Pathade, Dr. Vivek H. Tatwawadi" CFD Analysis of Superheater in View of Boiler Tube Leakage" International Journal of Engineering and Innovative Technology (IJEIT) Volume 1, Issue 3, March 2012 H.Versteg, W.malalasekra " An Introduction to Computational Fluid Dynamics" Second edition, Pearson Publications. Computational fluid dynamics Power station technology
Application of CFD in thermal power plants
[ "Physics", "Chemistry" ]
1,515
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
44,516,005
https://en.wikipedia.org/wiki/Pauling%27s%20principle%20of%20electroneutrality
Pauling's principle of electroneutrality states that each atom in a stable substance has a charge close to zero. It was formulated by Linus Pauling in 1948 and later revised. The principle has been used to predict which of a set of molecular resonance structures would be the most significant, to explain the stability of inorganic complexes and to explain the existence of π-bonding in compounds and polyatomic anions containing silicon, phosphorus or sulfur bonded to oxygen; it is still invoked in the context of coordination complexes. However, modern computational techniques indicate many stable compounds have a greater charge distribution than the principle predicts (they contain bonds with greater ionic character). History Pauling first stated his "postulate of the essential electroneutrality of atoms" in his 1948 Liversidge lecture (in a broad-ranging paper that also included his ideas on the calculation of oxidation states in molecules): “...the electronic structure of substances is such as to cause each atom to have essentially zero resultant electrical charge, the amount of leeway being not greater than about +/- ½, and these resultant charges are possessed mainly by the most electropositive and electronegative atoms and are distributed in such a way as to correspond to electrostatic stability." A slightly revised version was published in 1970: “Stable molecules and crystals have electronic structures such that the electric charge of each atom is close to zero. Close to zero means between -1 and +1.” Pauling said in his Liversidge lecture in 1948 that he had been led to the principle by a consideration of ionic bonding. In the gas phase, molecular caesium fluoride has a polar covalent bond. The large difference in electronegativity gives a calculated covalent character of 9%. In the crystal (CsF has the NaCl structure with both ions being 6-coordinate) if each bond has 9% covalent character the total covalency of Cs and F would be 54%. This would be represented by one bond of around 50% covalent character resonating between the six positions and the overall effect would be to reduce the charge on Cs to about + 0.5 and fluoride to -0.5. It seemed reasonable to him that since CsF is the most ionic of ionic compounds, most, if not all substances will have atoms with even smaller charges. Applications of the principle Explanation of the structure adopted by hydrogen cyanide There are two possible structures for hydrogen cyanide, HCN and CNH, differing only as to the position of the hydrogen atom. The structure with hydrogen attached to nitrogen, CNH, leads to formal charges of -1 on carbon and +1 on nitrogen, which would be partially compensated for by the electronegativity of nitrogen and Pauling calculated the net charges on H, N and C as -0.79, +0.75 and +0.04 respectively. In contrast the structure with hydrogen bonded to carbon, HCN, has formal charges on carbon and nitrogen of 0, and the effect of the electronegativity of the nitrogen would make the charges on H, C and N +0.04, +0.17 and -0.21. The triple bonded structure is therefore favored. Relative contribution of resonance structures (canonicals) As an example the cyanate ion (OCN)− can be assigned three resonance structures:- ^-O-C{\equiv}N <-> O=C=N^- <-> {^+O{\equiv}C-N^{2-}} The rightmost structure in the diagram has a charge of -2 on the nitrogen atom. Applying the principle of electroneutrality this can be identified as only a minor contributor. 
Additionally, as the most electronegative atom should carry the negative charge, the triple-bonded structure on the left is predicted to be the major contributor. Stability of complexes The hexammine cobalt(III) complex [Co(NH3)6]3+ would have all of the 3+ charge on the central Co atom if the bonding to the ammonia molecules were electrostatic. On the other hand, a covalent linkage would lead to a charge of -3 on the metal and +1 on each of the nitrogen atoms in the ammonia molecules. Using the electroneutrality principle the assumption is made that the Co-N bond will have 50% ionic character, thus resulting in a zero charge on the cobalt atom. Due to the difference in electronegativity the N-H bond would have 17% ionic character and therefore a charge of 0.166 on each of the 18 hydrogen atoms. This essentially spreads the 3+ charge evenly onto the "surface" of the complex ion. π-bonding in oxo compounds of Si, P, and S Pauling invoked the principle of electroneutrality in a 1952 paper to suggest that π-bonding is present, for example, in molecules with 4 Si-O bonds. The oxygen atoms in such molecules would form polar covalent bonds with the silicon atom because their electronegativity (electron-withdrawing power) is higher than that of silicon. Pauling calculated the charge build-up on the silicon atom due to the difference in electronegativity to be +2. The electroneutrality principle led Pauling to the conclusion that charge transfer from O to Si must occur using d orbitals forming a π-bond, and he calculated that this π-bonding accounted for the shortening of the Si-O bond. The adjacent charge rule The "adjacent charge rule" was another principle of Pauling's for determining whether a resonance structure would make a significant contribution. First published in 1932, it stated that structures that placed charges of the same sign on adjacent atoms would be unfavorable. References Chemical bonding Quantum chemistry
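The charge bookkeeping in the hexammine cobalt(III) example above can be reproduced with a few lines of arithmetic; the Python script below is only a sketch that restates the ionic-character fractions quoted in the text.

# 6 Co-N bonds assumed 50% ionic, each N-H bond 17% ionic (charge 0.166 per H).
co_charge = 3 + 6 * (-1 + 0.5)     # covalent donation of -1 per bond, halved by ionic character
n_charge = 0.5 - 3 * 0.166         # charge kept by each N after passing some to its 3 H atoms
h_charge = 0.166
total = co_charge + 6 * n_charge + 18 * h_charge
print(co_charge, round(n_charge, 3), h_charge, round(total, 2))   # 0.0, ~0, 0.166, ~3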
Pauling's principle of electroneutrality
[ "Physics", "Chemistry", "Materials_science" ]
1,190
[ "Quantum chemistry", "Quantum mechanics", "Theoretical chemistry", "Condensed matter physics", " molecular", "nan", "Atomic", "Chemical bonding", " and optical physics" ]
23,195,101
https://en.wikipedia.org/wiki/Retirement%20spend-down
At retirement, individuals stop working and no longer get employment earnings, and enter a phase of their lives, where they rely on the assets they have accumulated, to supply money for their spending needs for the rest of their lives. Retirement spend-down, or withdrawal rate, is the strategy a retiree follows to spend, decumulate or withdraw assets during retirement. Retirement planning aims to prepare individuals for retirement spend-down, because the different spend-down approaches available to retirees depend on the decisions they make during their working years. Actuaries and financial planners are experts on this topic. Importance More than 10,000 Post-World War II baby boomers will reach age 65 in the United States every day between 2014 and 2027. This represents the majority of the more than 78 million Americans born between 1946 and 1964. As of 2014, 74% of these people are expected to be alive in 2030, which highlights that most of them will live for many years beyond retirement. By the year 2000, 1 in every 14 people was age 65 or older. By the year 2050, more than 1 in 6 people are projected to be at least 65 years old. The following statistics emphasize the importance of a well-planned retirement spend-down strategy for these people: 87% of workers do not feel very confident about having enough money to retire comfortably. 80% of retirees do not feel very confident about maintaining financial security throughout their remaining lifetime. 49% of workers over age 55 have less than $50,000 of savings. 25% of workers have not saved at all for retirement. 35% of workers are not currently saving for retirement. 56% of workers have not tried to calculate their income needs in retirement. Longevity risk Individuals each have their own retirement aspirations, but all retirees face longevity risk – the risk of outliving their assets. This can spell financial disaster. Avoiding this risk is therefore a baseline goal that any successful retirement spend-down strategy addresses. Generally, longevity risk is greatest for low and middle income individuals. The probabilities of a 65-year-old living to various ages are: Longevity risk is largely underestimated. Most retirees do not expect to live beyond age 85, let alone into their 90s. A 2007 study of recently retired individuals asked them to rank the following risks in order of the level of concern they present: Health care costs Inflation Investment risk Maintaining lifestyle Need for long-term care Outliving assets (longevity risk) Longevity risk was ranked as the least concerning of these risks. Withdrawal rate A portion of retirement income often comes from savings, sometimes referred to as a nest egg. Analyzing one's savings involves a number of variables: how savings are invested (e.g., cash, stocks, bonds, real estate), and how this changes over time inflation during retirement how quickly savings are spent – the withdrawal rate Often, an investor will change some of their investment types as one ages. A common strategy to replace more risky investments with less risky investments as one gets older. A "risky" investment is an investment that has a higher potential return but also a higher potential loss. A "conservative" investment is an investment with a low potential return but a lower potential loss. A number of approaches exist to assist with choosing the correct risk level, for example, target date funds. 
A common rule of thumb for the withdrawal rate is 4%, based on 20th century American investment returns, and first articulated by William Bengen in 1994. Bengen later stated the 4% guideline was intended as a "worst case scenario" for retirees in the United States, using a hypothetical example of someone who retired in 1968 at a stock market peak before a protracted bear market and high inflation through the 1970s. In that scenario, a 4% withdrawal rate allowed the investor's funds to last 30 years. Historically, Bengen says closer to 7% is an average safe withdrawal rate and at other times withdrawal rates up to 13% have been feasible. A 4% withdrawal rate is also one conclusion of the Trinity study (1998). This particular rule and approach have been heavily criticized, as have the methods of both sources, with critics arguing that withdrawal rates should vary with investment style (which they do in Bengen) and returns, and that this ignores the risk of emergencies and rising expenses (e.g., medical or long-term care). Others question the suitability of matching relatively fixed expenses with risky investment assets. New dynamic adjustment methods for retirement withdrawal rates have been developed since Bengen's 4% withdrawal rate was proposed: constant inflation-adjusted spending, Bengen's floor-and-ceiling rule, and Guyton and Klinger's decision rules. More complex withdrawal strategies have also been created. As a guide to choosing a withdrawal rate, historical data show the maximum sustainable inflation-adjusted withdrawal rate over rolling 30-year periods for three hypothetical stock and bond portfolios from 1926 to 2014. Stocks are represented by the S&P 500 Index, bonds by an index of five-year U.S. Treasury bonds. During the best 30-year period, withdrawal rates of 10% annually could be used with a 100% success rate. The worst 30-year period had a maximum withdrawal rate of 3.5%. A 4% withdrawal rate survived most 30-year periods. The higher the stock allocation, the higher the rate of success. A portfolio of 75% stocks is more volatile but had higher maximum withdrawal rates. Starting with a withdrawal rate near 4% and a minimum 50% equity allocation in retirement gave a higher probability of success in historical 30-year periods. The above withdrawal strategies, sometimes referred to as strategic withdrawal plans or structured withdrawal plans, focus only on the spend-down of invested assets and do not typically coordinate with retirement income from other sources, such as Social Security, pensions, and annuities. Under the actuarial approach described below for equating total personal assets with total spending liabilities to develop a sustainable spending budget, the amount to be withdrawn from invested assets each year is equal to the amount to be spent during the year (the spending budget) reduced by income from other sources for the year. Sources of retirement income Individuals may receive retirement income from a variety of sources:
Personal savings and interest
Retirement savings plans (i.e., individual retirement account (United States), Registered Retirement Savings Plan (Canada))
Defined contribution plans (i.e., 401(k), 403(b), SIMPLE, 457(b), etc.)
Defined benefit pension plans
Social Insurance (i.e., Canada Pension Plan, Old Age Security (Canada), National Insurance (United Kingdom), Social Security (United States))
Rental income
Annuities
Dividends
Sale of assets to provide income
Tontines
Work during retirement
Each has unique risk, eligibility, tax, timing, form of payment, and distribution considerations that should be integrated into a retirement spend-down strategy. Modeling retirement spend-down: traditional approach Traditional retirement spend-down approaches generally take the form of a gap analysis. Essentially, these tools collect a variety of input variables from an individual and use them to project the likelihood that the individual will meet specified retirement goals. They model the shortfall or surplus between the individual's retirement income and expected spending needs to identify whether the individual has adequate resources to retire at a particular age. Depending on their sophistication, they may be stochastic (often incorporating Monte Carlo simulation) or deterministic. Standard input variables:
Current age
Expected retirement date or age
Life expectancy
Current savings
Savings rate
Current salary
Salary increase rate
Tax rate
Inflation rate
Rate of return on investments
Expected retirement expenses
Additional input variables that can enhance model sophistication:
Marital status
Spouse's age
Spouse's assets
Health status
Medical expense inflation
Estimated social security benefit
Estimated benefits from employer sponsored plans
Asset class weights comprising personal savings
Detailed expected retirement expenses
Value of home and mortgage balance
Life insurance holdings
Expected post-retirement part-time income
Output: shortfall or surplus.
There are three primary approaches utilized to estimate an individual's spending needs in retirement: Income replacement ratios: financial experts generally suggest that individuals need at least 70% of their pre-retirement income to maintain their standard of living. This approach is criticized from the standpoint that expenses, such as those related to health care, are not stable over time. Consumption smoothing: under this approach individuals develop a target expenditure pattern, generally far before retirement, that is intended to remain level throughout their lives. Proponents argue that individuals often spend conservatively earlier in their lives and could increase their overall utility and living standard by smoothing their consumption. Direct expense modeling: with the help of financial experts, individuals attempt to estimate future expenses directly, using projections of inflation, health care costs, and other variables to provide a framework for the analysis. Adverse impact of market downturn and lower interest rates Market volatility can have a significant impact on both a worker's retirement preparedness and a retiree's retirement spend-down strategy. American workers lost an estimated $2 trillion in retirement savings during the 2007–2008 financial crisis. 54% of workers lost confidence in their ability to retire comfortably due to the direct impact of the market turmoil on their retirement savings. Asset allocation contributed significantly to these issues. Basic investment principles recommend that individuals reduce their equity investment exposure as they approach retirement. Studies show, however, that 43% of 401(k) participants had equity exposure in excess of 70% at the beginning of 2008.
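The deterministic form of such a gap analysis, and the sustainability check behind fixed-percentage withdrawal rules such as the 4% guideline discussed above, can be illustrated with a minimal sketch. The return, inflation and savings figures, and the function names, are illustrative assumptions, not figures taken from the studies cited above.

def nest_egg_survives(initial_savings, withdrawal_rate, annual_return, inflation, years=30):
    # Check whether a fixed, inflation-adjusted withdrawal survives the retirement horizon.
    balance = initial_savings
    withdrawal = initial_savings * withdrawal_rate   # first-year withdrawal, e.g. 4%
    for _ in range(years):
        balance -= withdrawal             # spend at the start of the year
        if balance < 0:
            return False                  # savings exhausted before the horizon
        balance *= (1 + annual_return)    # invested remainder grows
        withdrawal *= (1 + inflation)     # keep purchasing power constant
    return True

def retirement_gap(current_savings, annual_saving, years_to_retire,
                   annual_return, desired_income, other_income,
                   withdrawal_rate=0.04):
    # Deterministic gap analysis: surplus (+) or shortfall (-) at retirement.
    balance = current_savings
    for _ in range(years_to_retire):
        # Accumulate savings until retirement at a constant assumed return.
        balance = balance * (1 + annual_return) + annual_saving
    # Capital needed so the withdrawal rule covers spending not met by other income.
    income_needed_from_assets = desired_income - other_income
    capital_needed = income_needed_from_assets / withdrawal_rate
    return balance - capital_needed       # positive = surplus, negative = shortfall

print(nest_egg_survives(1_000_000, 0.04, 0.05, 0.03))          # True under these assumptions
print(round(retirement_gap(200_000, 15_000, 20, 0.05,
                           60_000, 25_000)))                   # surplus or shortfall in dollars

A stochastic version of the same analysis would replace the constant return assumption with randomly drawn returns (Monte Carlo simulation) and report a probability of success rather than a single surplus or shortfall figure.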
World Pensions Council (WPC) financial economists have argued that durably low interest rates in most G20 countries will have an adverse impact on the underfunding condition of pension funds as "without returns that outstrip inflation, pension investors face the real value of their savings declining rather than ratcheting up over the next few years". From 1982 until 2011, most Western economies experienced a period of low inflation combined with relatively high returns on investments across all asset classes including government bonds. This brought a certain sense of complacency amongst some pension actuarial consultants and regulators, making it seem reasonable to use optimistic economic assumptions to calculate the present value of future pension liabilities. The potentially long-lasting collapse in returns on government bonds is taking place against the backdrop of a protracted fall in returns for other core assets such as blue chip companies' stocks, and, more importantly, a silent demographic shock. Factoring in the corresponding longevity risk, pension premiums could be raised significantly while disposable incomes stagnate and employees work longer years before retiring. Coping with retirement spend-down challenges Longevity risk becomes more of a concern for individuals when their retirement savings are depleted by asset losses. Following the market downturn of 2008–09, 61% of working baby boomers are concerned about outliving their retirement assets. Traditional spend-down approaches generally recommend three ways they can attempt to address this risk: save more (spend less), invest more aggressively, or lower their standard of living. Saving more and investing more aggressively are difficult strategies for many individuals to implement due to constraints imposed by current expenses or an aversion to increased risk. Most individuals also are averse to lowering their standard of living. The closer individuals are to retirement, the more drastic these measures must be for them to have a significant impact on the individuals' retirement savings or spend-down strategies. Postponing retirement Individuals tend to have significantly more control over their retirement ages than they do over their savings rates, asset returns, or expenses. As a result, postponing retirement can be an attractive option for individuals looking to enhance their retirement preparedness or recover from investment losses. The relative impact that delaying retirement can have on an individual's retirement spend-down is dependent upon specific circumstances, but research has shown that delaying retirement from age 62 to age 66 can increase an average worker's retirement income by 33%. Postponing retirement minimizes the probability of running out of retirement savings in several ways: additional returns are earned on savings that otherwise would be paid out as retirement income; additional savings are accumulated from a longer wage-earning period; the post-retirement period is shortened; and other sources of retirement income increase in value (Social Security, defined contribution plans, defined benefit pension plans). Studies show that nearly half of all workers expect to delay their retirement because they have accumulated fewer retirement assets than they had planned. Much of this is attributable to the market downturn of 2008–2009. Various unforeseen circumstances cause nearly half of all workers to retire earlier than they intend. In many cases, these individuals intend to work part-time during retirement.
Again, however, statistics show that this is far less common than intentions would suggest. Modeling retirement spend-down: alternative approach The appeal of retirement age flexibility is the focal point of an actuarial approach to retirement spend-down that has emerged in response to the surge of baby boomers approaching retirement. The approach is based on a personal asset/liability matching process and present values to determine current-year and future-year spending budget data points. This self-adjusting actuarial process is very similar to the process employed by pension actuaries to help pension plan sponsors determine current and future years' annual contribution requirements. Similarity to individual asset/liability modeling Most approaches to retirement spend-down can be likened to individual asset/liability modeling. Regardless of the strategy employed, they seek to ensure that individuals' assets available for retirement are sufficient to fund their post-retirement liabilities and expenses. This is elaborated in dedicated portfolio theory. See also Trinity study References External links Post Retirement Needs and Risks, Society of Actuaries Financial Planning and Retirement Portal, AARP Retirement Portal, 360 Degrees of Financial Literacy Employee Benefit Research Institute Center for Retirement Research, Boston College Journal of Financial Planning Morningstar's 5-Point Retirement Portfolio Checkup Retirement Withdrawal Calculator How Much Can I Afford to Spend in Retirement Blog Actuarial science Investment Plan Individual retirement accounts
Retirement spend-down
[ "Mathematics" ]
2,810
[ "Applied mathematics", "Actuarial science" ]
23,197,445
https://en.wikipedia.org/wiki/John%20Murphy%20%28contractor%29
John Murphy (5 October 1913 – 7 May 2009) was an Irish businessman who established the construction and infrastructure contractor J. Murphy & Sons. The company, based in Kentish Town and known for its green vans and lorries, works on building sites across the UK and Ireland. His late brother Joe also went into construction in London, trading as Murphy Ltd and using grey vehicles, but that company went into administration and closed in 2013. Biography Murphy was born at Loughmark, near Cahersiveen, County Kerry. He left school at 15 but found work hard to come by. He travelled to London and started up as a subcontractor in the building trade. The Second World War offered him a golden opportunity. New airfields were urgently needed, and later runway repairs were needed as well. He was successful in providing this service and at the end of the war was well placed to help with large-scale reconstruction. Other ventures included electrification, cable installation, water facilities and road-building. At his death in 2009, his worth was estimated at £190 million. He valued his privacy and was known to spend little on luxuries, instead preferring to spend time with his own workmen and other Irish friends. J Murphy & Sons In the 1970s a specialist division of the company worked on the development of natural gas. Later projects included the Stansted Airport Rail Link, work in the City of London, the Channel Tunnel Rail Link and London's Olympic Park. In 2007 J Murphy and Sons generated nearly £500 million of revenue and made pre-tax profits of £60 million. It was appointed lead contractor in the £125 million Liverpool-Manchester water pipeline project, which is to carry up to 100 million litres of water per day. After Murphy's death in May 2009, leadership of the company passed initially to his daughter Caroline, who had been appointed deputy chairperson of the group in 2007. She later planned to turn the business into a workers' co-operative owned by its 3,500 employees, but other board members — notably her mother, brother and half-brother — resisted, and she resigned in 2014. The company was then led by Steve Hollingshead until the appointment in 2017 of John Murphy's grandson, John B Murphy. In the year to 31 December 2017 the company made a pre-tax profit of £12.43m from a turnover of £711m, and had 3,878 employees. In 2018, it experienced a slowdown in growth, and cancelled its Christmas party as part of a cost-cutting drive affecting jobs across the business. References 1913 births 2009 deaths People from Cahersiveen Civil engineering contractors 20th-century Irish businesspeople
John Murphy (contractor)
[ "Engineering" ]
544
[ "Civil engineering", "Civil engineering contractors" ]
23,198,270
https://en.wikipedia.org/wiki/Onigawara
Onigawara are a type of roof ornamentation found in Japanese architecture. They are generally roof tiles or statues depicting an oni (ogre) or a fearsome beast. Onigawara were historically found on Buddhist temples, but are now used in many traditionally styled buildings. Some tiles may depict things besides oni, but are still called onigawara due to custom. History Prior to the Heian period, onigawara were preceded by similar ornaments with floral and plant designs (hanagawara). The present design is thought to have come from a previous architectural element, the oni-ita, a board painted with the face of an oni that was meant to stop roof leaks. During the Nara period the tile was decorated with other motifs, but it later acquired distinct ogre-like features and became strongly three-dimensional. See also Chimera (architecture) Gargoyle Imperial roof decoration Japanese architecture Jisaburō Ozawa, an admiral nicknamed "Onigawara" by his men Shachihoko Shibi (roof tile) Shisa List of Traditional Crafts of Japan Notes References Parent, Mary Neighbour. (2003). Japanese Architecture and Art Net Users System. External links Japanese architectural features Oni Roofs Roof tiles Architectural sculpture Traditional East Asian architecture
Onigawara
[ "Technology", "Engineering" ]
265
[ "Structural system", "Structural engineering", "Roofs" ]
3,976,202
https://en.wikipedia.org/wiki/Baire%20set
In mathematics, more specifically in measure theory, the Baire sets form a σ-algebra of a topological space that avoids some of the pathological properties of Borel sets. There are several inequivalent definitions of Baire sets, but in the most widely used, the Baire sets of a locally compact Hausdorff space form the smallest σ-algebra such that all compactly supported continuous functions are measurable. Thus, measures defined on this σ-algebra, called Baire measures, are a convenient framework for integration on locally compact Hausdorff spaces. In particular, any compactly supported continuous function on such a space is integrable with respect to any finite Baire measure. Every Baire set is a Borel set. The converse holds in many, but not all, topological spaces. Baire sets avoid some pathological properties of Borel sets on spaces without a countable base for the topology. In practice, the use of Baire measures on Baire sets can often be replaced by the use of regular Borel measures on Borel sets. Baire sets were introduced by , and , who named them after Baire functions, which are in turn named after René-Louis Baire. Basic definitions There are at least three inequivalent definitions of Baire sets on locally compact Hausdorff spaces, and even more definitions for general topological spaces, though all these definitions are equivalent for locally compact σ-compact Hausdorff spaces. Moreover, some authors add restrictions on the topological space that Baire sets are defined on, and only define Baire sets on spaces that are compact Hausdorff, or locally compact Hausdorff, or σ-compact. First definition Kunihiko Kodaira defined what we call Baire sets (although he confusingly calls them "Borel sets") of certain topological spaces to be the sets whose characteristic function is a Baire function (the smallest class of functions containing all continuous real-valued functions and closed under pointwise limits of sequences). gives an equivalent definition and defines Baire sets of a topological space to be elements of the smallest σ-algebra such that all continuous real-valued functions are measurable. For locally compact σ-compact Hausdorff spaces this is equivalent to the following definitions, but in general the definitions are not equivalent. Conversely, the Baire functions are exactly the real-valued functions that are Baire measurable. For metric spaces, the Baire sets coincide with the Borel sets. Second definition defined Baire sets of a locally compact Hausdorff space to be the elements of the σ-ring generated by the compact Gδ sets. This definition is no longer used much, as σ-rings are somewhat out of fashion. When the space is σ-compact, this definition is equivalent to the next definition. One reason for working with compact Gδ sets rather than closed Gδ sets is that Baire measures are then automatically regular . Third definition The third and most widely used definition is similar to Halmos's definition, modified so that the Baire sets form a σ-algebra rather than just a σ-ring. A subset of a locally compact Hausdorff topological space is called a Baire set if it is a member of the smallest σ–algebra containing all compact Gδ sets. In other words, the σ–algebra of Baire sets is the σ–algebra generated by all those intersections of countably many open sets that yield a compact set. 
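In symbols, a sketch of this third definition (the notation Ba(X) is introduced here purely for illustration and is not taken from the cited sources):

\[ \mathrm{Ba}(X) \;=\; \sigma\Bigl(\bigl\{\, K \subseteq X : K \text{ compact},\; K = \textstyle\bigcap_{n=1}^{\infty} U_n \text{ with each } U_n \text{ open} \,\bigr\}\Bigr), \]

that is, the smallest σ-algebra of subsets of the locally compact Hausdorff space \(X\) containing every compact \(G_\delta\) set.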
Alternatively, Baire sets form the smallest σ-algebra such that all continuous functions of compact support are measurable (at least on locally compact Hausdorff spaces; on general topological spaces these two conditions need not be equivalent). For σ-compact spaces this is equivalent to Halmos's definition. For spaces that are not σ-compact the Baire sets under this definition are those under Halmos's definition together with their complements. However, in this case it is no longer true that a finite Baire measure is necessarily regular: for example, the Baire probability measure that assigns measure 0 to every countable subset of an uncountable discrete space and measure 1 to every co-countable subset is a Baire probability measure that is not regular. Examples The different definitions of Baire sets are not equivalent For locally compact Hausdorff topological spaces that are not σ-compact the three definitions above need not be equivalent. A discrete topological space is locally compact and Hausdorff. Any function defined on a discrete space is continuous, and therefore, according to the first definition, all subsets of a discrete space are Baire. However, since the compact subspaces of a discrete space are precisely the finite subspaces, the Baire sets, according to the second definition, are precisely the at most countable sets, while according to the third definition the Baire sets are the at most countable sets and their complements. Thus, the three definitions are non-equivalent on an uncountable discrete space. For non-Hausdorff spaces the definitions of Baire sets in terms of continuous functions need not be equivalent to definitions involving Gδ compact sets. For example, if X is an infinite countable set whose closed sets are the finite sets and the whole space, then the only continuous real functions on X are constant, but all subsets of X are in the σ-algebra generated by compact closed Gδ sets. A Borel set that is not a Baire set In a Cartesian product of uncountably many compact Hausdorff spaces with more than one point, a point is never a Baire set, in spite of the fact that it is closed, and therefore a Borel set. Properties Baire sets coincide with Borel sets in Euclidean spaces. For every compact Hausdorff space, every finite Baire measure (that is, a measure on the σ-algebra of all Baire sets) is regular. For every compact Hausdorff space, every finite Baire measure has a unique extension to a regular Borel measure. The Kolmogorov extension theorem states that every consistent collection of finite-dimensional probability distributions leads to a Baire measure on the space of functions. Assuming compactness (of the given space, and therefore also the function space) one may extend it to a regular Borel measure. After completion one gets a probability space that is not necessarily standard. Notes References See especially Sect. 51 "Borel sets and Baire sets". . See especially Sect. 7.1 "Baire and Borel σ–algebras and regularity of measures" and Sect. 7.3 "The regularity extension". General topology Measure theory
Baire set
[ "Mathematics" ]
1,356
[ "General topology", "Topology" ]
3,977,589
https://en.wikipedia.org/wiki/Correlation%20sum
In chaos theory, the correlation sum is the estimator of the correlation integral, which reflects the mean probability that the states at two different times are close:

\[ C(\varepsilon) = \frac{2}{N(N-1)} \sum_{i<j} \Theta\bigl(\varepsilon - \| \vec{x}_i - \vec{x}_j \|\bigr), \qquad \vec{x}_i \in \mathbb{R}^m, \]

where \(N\) is the number of considered states \(\vec{x}_i\), \(\varepsilon\) is a threshold distance, \(\| \cdot \|\) a norm (e.g. Euclidean norm) and \(\Theta\) the Heaviside step function. If only a time series \(u_i\) is available, the phase space can be reconstructed by using a time delay embedding (see Takens' theorem):

\[ \vec{x}_i = (u_i, u_{i+\tau}, \ldots, u_{i+(m-1)\tau}), \]

where \(u_i\) is the time series, \(m\) the embedding dimension and \(\tau\) the time delay. The correlation sum is used to estimate the correlation dimension. See also Recurrence quantification analysis References Chaos theory Dynamical systems Dimension theory
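A direct estimator following these definitions can be sketched in a few lines of Python; this is a naive O(N²) illustration with variable names chosen here, not a reference implementation:

import numpy as np

def delay_embedding(u, m, tau):
    # Reconstruct phase-space vectors x_i = (u_i, u_{i+tau}, ..., u_{i+(m-1)tau}).
    n = len(u) - (m - 1) * tau
    return np.array([u[i:i + (m - 1) * tau + 1:tau] for i in range(n)])

def correlation_sum(x, eps):
    # Fraction of distinct state pairs closer than eps (Euclidean norm),
    # i.e. the mean of the Heaviside terms over all i < j.
    n = len(x)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(x[i] - x[j]) < eps:
                count += 1
    return 2.0 * count / (n * (n - 1))

# Example: correlation sum of a noisy sine series at one threshold distance.
u = np.sin(0.1 * np.arange(500)) + 0.01 * np.random.randn(500)
x = delay_embedding(u, m=3, tau=10)
print(correlation_sum(x, eps=0.2))

In practice the correlation dimension is then estimated from the slope of log C(ε) against log ε over a range of small thresholds.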
Correlation sum
[ "Physics", "Mathematics" ]
139
[ "Mechanics", "Dynamical systems" ]
3,978,080
https://en.wikipedia.org/wiki/Building%20information%20modeling
Building information modeling (BIM) is an approach involving the generation and management of digital representations of the physical and functional characteristics of buildings or other physical assets and facilities. BIM is supported by various tools, processes, technologies and contracts. Building information models (BIMs) are computer files (often but not always in proprietary formats and containing proprietary data) which can be extracted, exchanged or networked to support decision-making regarding a built asset. BIM software is used by individuals, businesses and government agencies who plan, design, construct, operate and maintain buildings and diverse physical infrastructures, such as water, refuse, electricity, gas, communication utilities, roads, railways, bridges, ports and tunnels. The concept of BIM has been in development since the 1970s, but it only became an agreed term in the early 2000s. The development of standards and the adoption of BIM has progressed at different speeds in different countries. Developed by buildingSMART, Industry Foundation Classes (IFCs) – data structures for representing information – became an international standard, ISO 16739, in 2013, and BIM process standards developed in the United Kingdom from 2007 onwards formed the basis of an international standard, ISO 19650, launched in January 2019. History The concept of BIM has existed since the 1970s. The first software tools developed for modeling buildings emerged in the late 1970s and early 1980s, and included workstation products such as Chuck Eastman's Building Description System and GLIDE, RUCAPS, Sonata, Reflex and Gable 4D Series. The early applications, and the hardware needed to run them, were expensive, which limited widespread adoption. The pioneering role of applications such as RUCAPS, Sonata and Reflex has been recognized by Laiserin as well as the UK's Royal Academy of Engineering; former GMW employee Jonathan Ingram worked on all three products. What became known as BIM products differed from architectural drafting tools such as AutoCAD by allowing the addition of further information (time, cost, manufacturers' details, sustainability, and maintenance information, etc.) to the building model. As Graphisoft had been developing such solutions for longer than its competitors, Laiserin regarded its ArchiCAD application as then "one of the most mature BIM solutions on the market." Following its launch in 1987, ArchiCAD became regarded by some as the first implementation of BIM, as it was the first CAD product on a personal computer able to create both 2D and 3D geometry, as well as the first commercial BIM product for personal computers. However, Graphisoft founder Gábor Bojár has acknowledged to Jonathan Ingram in an open letter, that Sonata "was more advanced in 1986 than ArchiCAD at that time", adding that it "surpassed already the matured definition of 'BIM' specified only about one and a half decade later". The term 'building model' (in the sense of BIM as used today) was first used in papers in the mid-1980s: in a 1985 paper by Simon Ruffle eventually published in 1986, and later in a 1986 paper by Robert Aish – then at GMW Computers Ltd, developer of RUCAPS software – referring to the software's use at London's Heathrow Airport. The term 'Building Information Model' first appeared in a 1992 paper by G.A. van Nederveen and F. P. Tolman. 
However, the terms 'Building Information Model' and 'Building Information Modeling' (including the acronym "BIM") did not become popularly used until some 10 years later. The idea of facilitating the exchange and interoperability of information in digital format was promoted under differing terminology: by Graphisoft as "Virtual Building" or "Single Building Model", by Bentley Systems as "Integrated Project Models", and by Autodesk or Vectorworks as "Building Information Modeling". In 2002, Autodesk released a white paper entitled "Building Information Modeling," and other software vendors also started to assert their involvement in the field. In 2003, by hosting contributions from Autodesk, Bentley Systems and Graphisoft, plus other industry observers, Jerry Laiserin helped popularize and standardize the term as a common name for the digital representation of the building process. Interoperability and BIM standards As some BIM software developers have created proprietary data structures in their software, data and files created by one vendor's applications may not work in other vendor solutions. To achieve interoperability between applications, neutral, non-proprietary or open standards for sharing BIM data among different software applications have been developed. Poor software interoperability has long been regarded as an obstacle to industry efficiency in general and to BIM adoption in particular. In August 2004 a US National Institute of Standards and Technology (NIST) report conservatively estimated that $15.8 billion was lost annually by the U.S. capital facilities industry due to inadequate interoperability arising from "the highly fragmented nature of the industry, the industry's continued paper-based business practices, a lack of standardization, and inconsistent technology adoption among stakeholders". An early BIM standard was the CIMSteel Integration Standard, CIS/2, a product model and data exchange file format for structural steel project information (CIMsteel: Computer Integrated Manufacturing of Constructional Steelwork). CIS/2 enables seamless and integrated information exchange during the design and construction of steel framed structures. It was developed by the University of Leeds and the UK's Steel Construction Institute in the late 1990s, with inputs from Georgia Tech, and was approved by the American Institute of Steel Construction as its data exchange format for structural steel in 2000. BIM is often associated with Industry Foundation Classes (IFCs) and aecXML – data structures for representing information – developed by buildingSMART. IFC is recognised by the ISO and has been an official international standard, ISO 16739, since 2013. Construction Operations Building information exchange (COBie) is also associated with BIM. COBie was devised by Bill East of the United States Army Corps of Engineers in 2007, and helps capture and record equipment lists, product data sheets, warranties, spare parts lists, and preventive maintenance schedules. This information is used to support operations, maintenance and asset management once a built asset is in service. In December 2011, it was approved by the US-based National Institute of Building Sciences as part of its National Building Information Model (NBIMS-US) standard. COBie has been incorporated into software, and may take several forms including spreadsheet, IFC, and ifcXML. In early 2013 BuildingSMART was working on a lightweight XML format, COBieLite, which became available for review in April 2013.
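As a small illustration of vendor-neutral data exchange, an IFC model can be opened and inspected with the open-source ifcopenshell Python library. The following is a minimal sketch; the file name is hypothetical, and the entity types actually present will depend on the model and the IFC schema in use:

import ifcopenshell  # open-source IFC toolkit, not tied to any single BIM vendor

model = ifcopenshell.open("office_block.ifc")  # hypothetical IFC file exported from a BIM authoring tool
print(model.schema)                            # e.g. "IFC2X3" or "IFC4"

# List the wall objects and their identity data.
for wall in model.by_type("IfcWall"):
    print(wall.GlobalId, wall.Name)

# Count a few common element types to get a feel for the model's contents.
for entity in ("IfcDoor", "IfcWindow", "IfcSlab"):
    print(entity, len(model.by_type(entity)))

Because the IFC file is an open, text-based format, the same data can be read by any compliant tool regardless of which authoring application produced it.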
In September 2014, a code of practice regarding COBie was issued as a British Standard: BS 1192-4. In January 2019, ISO published the first two parts of ISO 19650, providing a framework for building information modelling, based on process standards developed in the United Kingdom. UK BS and PAS 1192 specifications form the basis of further parts of the ISO 19650 series, with parts on asset management (Part 3) and security management (Part 5) published in 2020. The IEC/ISO 81346 series for reference designation has published 81346-12:2018, also known as RDS-CW (Reference Designation System for Construction Works). The use of RDS-CW offers the prospect of integrating BIM with complementary international standards based classification systems being developed for the Power Plant sector. Definition ISO 19650-1:2018 defines BIM as: Use of a shared digital representation of a built asset to facilitate design, construction and operation processes to form a reliable basis for decisions. The US National Building Information Model Standard Project Committee has the following definition: Building Information Modeling (BIM) is a digital representation of physical and functional characteristics of a facility. A BIM is a shared knowledge resource for information about a facility forming a reliable basis for decisions during its life-cycle; defined as existing from earliest conception to demolition. Traditional building design was largely reliant upon two-dimensional technical drawings (plans, elevations, sections, etc.). Building information modeling extends the three primary spatial dimensions (width, height and depth), incorporating information about time (so-called 4D BIM), cost (5D BIM), asset management, sustainability, etc. BIM therefore covers more than just geometry. It also covers spatial relationships, geospatial information, quantities and properties of building components (for example, manufacturers' details), and enables a wide range of collaborative processes relating to the built asset from initial planning through to construction and then throughout its operational life. BIM authoring tools present a design as combinations of "objects" – vague and undefined, generic or product-specific, solid shapes or void-space oriented (like the shape of a room), that carry their geometry, relations, and attributes. BIM applications allow extraction of different views from a building model for drawing production and other uses. These different views are automatically consistent, being based on a single definition of each object instance. BIM software also defines objects parametrically; that is, the objects are defined as parameters and relations to other objects so that if a related object is amended, dependent ones will automatically also change. Each model element can carry attributes for selecting and ordering them automatically, providing cost estimates as well as material tracking and ordering. For the professionals involved in a project, BIM enables a virtual information model to be shared by the design team (architects, landscape architects, surveyors, civil, structural and building services engineers, etc.), the main contractor and subcontractors, and the owner/operator. Each professional adds discipline-specific data to the shared model – commonly, a 'federated' model which combines several different disciplines' models into one. 
Combining models enables visualisation of all models in a single environment, better coordination and development of designs, enhanced clash avoidance and detection, and improved time and cost decision-making. BIM wash "BIM wash" or "BIM washing" is a term sometimes used to describe inflated, and/or deceptive, claims of using or delivering BIM services or products. Usage throughout the asset life cycle Use of BIM goes beyond the planning and design phase of a project, extending throughout the life cycle of the asset. The supporting processes of building lifecycle management include cost management, construction management, project management, facility operation and application in green building. Common Data Environment A 'Common Data Environment' (CDE) is defined in ISO 19650 as an: Agreed source of information for any given project or asset, for collecting, managing and disseminating each information container through a managed process. A CDE workflow describes the processes to be used while a CDE solution can provide the underlying technologies. A CDE is used to share data across a project or asset lifecycle, supporting collaboration across a whole project team. The concept of a CDE overlaps with enterprise content management, ECM, but with a greater focus on BIM issues. Management of building information models Building information models span the whole concept-to-occupation time-span. To ensure efficient management of information processes throughout this span, a BIM manager might be appointed. The BIM manager is retained by a design build team on the client's behalf from the pre-design phase onwards to develop and to track the object-oriented BIM against predicted and measured performance objectives, supporting multi-disciplinary building information models that drive analysis, schedules, take-off and logistics. Companies are also now considering developing BIMs in various levels of detail, since depending on the application of BIM, more or less detail is needed, and there is varying modeling effort associated with generating building information models at different levels of detail. BIM in construction management Participants in the building process are constantly challenged to deliver successful projects despite tight budgets, limited staffing, accelerated schedules, and limited or conflicting information. The significant disciplines such as architectural, structural and MEP designs should be well-coordinated, as two things can't take place at the same place and time. BIM additionally is able to aid in collision detection, identifying the exact location of discrepancies. The BIM concept envisages virtual construction of a facility prior to its actual physical construction, in order to reduce uncertainty, improve safety, work out problems, and simulate and analyze potential impacts. Sub-contractors from every trade can input critical information into the model before beginning construction, with opportunities to pre-fabricate or pre-assemble some systems off-site. Waste can be minimised on-site and products delivered on a just-in-time basis rather than being stock-piled on-site. Quantities and shared properties of materials can be extracted easily. Scopes of work can be isolated and defined. Systems, assemblies and sequences can be shown in a relative scale with the entire facility or group of facilities. 
BIM also prevents errors by enabling conflict or 'clash detection' whereby the computer model visually highlights to the team where parts of the building (e.g.:structural frame and building services pipes or ducts) may wrongly intersect. BIM in facility operation and asset management BIM can bridge the information loss associated with handing a project from design team, to construction team and to building owner/operator, by allowing each group to add to and reference back to all information they acquire during their period of contribution to the BIM model. Enabling an effective handover of information from design and construction (including via IFC or COBie) can thus yield benefits to the facility owner or operator. BIM-related processes relating to longer-term asset management are also covered in ISO-19650 Part 3. For example, a building owner may find evidence of a water leak in a building. Rather than exploring the physical building, the owner may turn to the model and see that a water valve is located in the suspect location. The owner could also have in the model the specific valve size, manufacturer, part number, and any other information ever researched in the past, pending adequate computing power. Such problems were initially addressed by Leite and Akinci when developing a vulnerability representation of facility contents and threats for supporting the identification of vulnerabilities in building emergencies. Dynamic information about the building, such as sensor measurements and control signals from the building systems, can also be incorporated within software to support analysis of building operation and maintenance. As such, BIM in facility operation can be related to internet of things approaches; rapid access to data may also be aided by use of mobile devices (smartphones, tablets) and machine-readable RFID tags or barcodes; while integration and interoperability with other business systems - CAFM, ERP, BMS, IWMS, etc - can aid operational reuse of data. There have been attempts at creating information models for older, pre-existing facilities. Approaches include referencing key metrics such as the Facility Condition Index (FCI), or using 3D laser-scanning surveys and photogrammetry techniques (separately or in combination) or digitizing traditional building surveying methodologies by using mobile technology to capture accurate measurements and operation-related information about the asset that can be used as the basis for a model. Trying to retrospectively model a building constructed in, say 1927, requires numerous assumptions about design standards, building codes, construction methods, materials, etc, and is, therefore, more complex than building a model during design. One of the challenges to the proper maintenance and management of existing facilities is understanding how BIM can be utilized to support a holistic understanding and implementation of building management practices and "cost of ownership" principles that support the full product lifecycle of a building.  An American National Standard entitled APPA 1000 – Total Cost of Ownership for Facilities Asset Management incorporates BIM to factor in a variety of critical requirements and costs over the life-cycle of the building, including but not limited to: replacement of energy, utility, and safety systems; continual maintenance of the building exterior and interior and replacement of materials; updates to design and functionality; and recapitalization costs. 
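The clash detection and asset-data lookup described in this section can be reduced to a very simple geometric and data model. The following library-free sketch uses hypothetical element names, properties and axis-aligned bounding boxes only; real BIM tools work on full solid geometry and much richer property sets:

from dataclasses import dataclass, field

@dataclass
class Element:
    # A highly simplified stand-in for a model object: a box plus property data.
    name: str
    discipline: str
    bbox: tuple          # (xmin, ymin, zmin, xmax, ymax, zmax) in metres
    properties: dict = field(default_factory=dict)

def boxes_clash(a, b):
    # Axis-aligned bounding boxes overlap on all three axes.
    return all(a.bbox[i] < b.bbox[i + 3] and b.bbox[i] < a.bbox[i + 3] for i in range(3))

def find_clashes(elements):
    # Naive pairwise check between elements of different disciplines.
    return [(a.name, b.name)
            for i, a in enumerate(elements)
            for b in elements[i + 1:]
            if a.discipline != b.discipline and boxes_clash(a, b)]

beam  = Element("Beam-012", "structural", (0.0, 0.0, 3.0, 6.0, 0.3, 3.5))
duct  = Element("Duct-207", "mechanical", (2.0, 0.1, 3.2, 2.4, 0.2, 6.0))
valve = Element("Valve-033", "plumbing",  (5.0, 4.0, 2.8, 5.2, 4.2, 3.0),
                {"Manufacturer": "ExampleCo", "PartNumber": "EV-100", "Size": "DN50"})

print(find_clashes([beam, duct, valve]))   # e.g. [('Beam-012', 'Duct-207')]
print(valve.properties["Manufacturer"], valve.properties["PartNumber"])

The same element records that drive the geometric clash check also carry the manufacturer and part-number data an owner or facility manager would query after handover.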
BIM in green building BIM in green building, or "green BIM", is a process that can help architecture, engineering and construction firms to improve sustainability in the built environment. It can allow architects and engineers to integrate and analyze environmental issues in their design over the life cycle of the asset. In the ERANet projects EPC4SES and FinSESCo projects worked on the digital representation of the energy demand of the building. The nucleus is the XML from issuing Energy Performance Certificates, amended by roof data to be able to retrieve the position and size of PV or PV/T. International developments Asia China China began its exploration on informatisation in 2001. The Ministry of Construction announced BIM was as the key application technology of informatisation in "Ten new technologies of construction industry" (by 2010). The Ministry of Science and Technology (MOST) clearly announced BIM technology as a national key research and application project in "12th Five-Year" Science and Technology Development Planning. Therefore, the year 2011 was described as "The First Year of China's BIM". Hong Kong In 2006 the Hong Kong Housing Authority introduced BIM, and then set a target of full BIM implementation in 2014/2015. BuildingSmart Hong Kong was inaugurated in Hong Kong SAR in late April 2012. The Government of Hong Kong mandates the use of BIM for all government projects over HK$30M since 1 January 2018. India India Building Information Modelling Association (IBIMA) is a national-level society that represents the entire Indian BIM community. In India BIM is also known as VDC: Virtual Design and Construction. Due to its population and economic growth, India has an expanding construction market. In spite of this, BIM usage was reported by only 22% of respondents in a 2014 survey. In 2019, government officials said BIM could help save up to 20% by shortening construction time, and urged wider adoption by infrastructure ministries. Iran The Iran Building Information Modeling Association (IBIMA) was founded in 2012 by professional engineers from five universities in Iran, including the Civil and Environmental Engineering Department at Amirkabir University of Technology. While it is not currently active, IBIMA aims to share knowledge resources to support construction engineering management decision-making. Malaysia BIM implementation is targeted towards BIM Stage 2 by the year 2020 led by the Construction Industry Development Board (CIDB Malaysia). Under the Construction Industry Transformation Plan (CITP 2016–2020), it is hoped more emphasis on technology adoption across the project life-cycle will induce higher productivity. Singapore The Building and Construction Authority (BCA) has announced that BIM would be introduced for architectural submission (by 2013), structural and M&E submissions (by 2014) and eventually for plan submissions of all projects with gross floor area of more than 5,000 square meters by 2015. The BCA Academy is training students in BIM. Japan The Ministry of Land, Infrastructure and Transport (MLIT) has announced "Start of BIM pilot project in government building and repairs" (by 2010). Japan Institute of Architects (JIA) released the BIM guidelines (by 2012), which showed the agenda and expected effect of BIM to architects. MLIT announced " BIM will be mandated for all of its public works from the fiscal year of 2023, except those having particular reasons". 
The works subject to WTO Government Procurement Agreement shall comply with the published ISO standards related to BIM such as ISO19650 series as determined by the Article 10 (Technical Specification) of the Agreement. South Korea Small BIM-related seminars and independent BIM effort existed in South Korea even in the 1990s. However, it was not until the late 2000s that the Korean industry paid attention to BIM. The first industry-level BIM conference was held in April 2008, after which, BIM has been spread very rapidly. Since 2010, the Korean government has been gradually increasing the scope of BIM-mandated projects. McGraw Hill published a detailed report in 2012 on the status of BIM adoption and implementation in South Korea. United Arab Emirates Dubai Municipality issued a circular (196) in 2014 mandating BIM use for buildings of a certain size, height or type. The one page circular initiated strong interest in BIM and the market responded in preparation for more guidelines and direction. In 2015 the Municipality issued another circular (207) titled 'Regarding the expansion of applying the (BIM) on buildings and facilities in the emirate of Dubai' which made BIM mandatory on more projects by reducing the minimum size and height requirement for projects requiring BIM. This second circular drove BIM adoption further with several projects and organizations adopting UK BIM standards as best practice. In 2016, the UAE's Quality and Conformity Commission set up a BIM steering group to investigate statewide adoption of BIM. Europe Austria Austrian standards for digital modeling are summarized in the ÖNORM A 6241, published on 15 March 2015. The ÖNORM A 6241-1 (BIM Level 2), which replaced the ÖNORM A 6240-4, has been extended in the detailed and executive design stages, and corrected in the lack of definitions. The ÖNORM A 6241-2 (BIM Level 3) includes all the requirements for the BIM Level 3 (iBIM). Czech Republic The Czech BIM Council, established in May 2011, aims to implement BIM methodologies into the Czech building and designing processes, education, standards and legislation. Estonia In Estonia digital construction cluster (Digitaalehituse Klaster) was formed in 2015 to develop BIM solutions for the whole life-cycle of construction. The strategic objective of the cluster is to develop an innovative digital construction environment as well as VDC new product development, Grid and e-construction portal to increase the international competitiveness and sales of Estonian businesses in the construction field. The cluster is equally co-funded by European Structural and Investment Funds through Enterprise Estonia and by the members of the cluster with a total budget of 600 000 euros for the period 2016–2018. France The French arm of buildingSMART, called Mediaconstruct (existing since 1989), is supporting digital transformation in France. A building transition digital plan – French acronym PTNB – was created in 2013 (mandated since 2015 to 2017 and under several ministries). A 2013 survey of European BIM practice showed France in last place, but, with government support, in 2017 it had risen to third place with more than 30% of real estate projects carried out using BIM. PTNB was superseded in 2018 by Plan BIM 2022, administered by an industry body, the Association for the Development of Digital in Construction (AND Construction), founded in 2017, and supported by a digital platform, KROQI, developed and launched in 2017 by CSTB (France's Scientific and Technical Centre for Building). 
Germany In December 2015, the German minister for transport Alexander Dobrindt announced a timetable for the introduction of mandatory BIM for German road and rail projects from the end of 2020. Speaking in April 2016, he said digital design and construction must become standard for construction projects in Germany, with Germany two to three years behind The Netherlands and the UK in aspects of implementing BIM. BIM was piloted in many areas of German infrastructure delivery and in July 2022 Volker Wissing, Federal Minister for Digital and Transport, announced that, from 2025, BIM will be used as standard in the construction of federal trunk roads in addition to the rail sector. Ireland In November 2017, Ireland's Department for Public Expenditure and Reform launched a strategy to increase use of digital technology in delivery of key public works projects, requiring the use of BIM to be phased in over the next four years. Italy Through the new D.l. 50, in April 2016 Italy has included into its own legislation several European directives including 2014/24/EU on Public Procurement. The decree states among the main goals of public procurement the "rationalization of designing activities and of all connected verification processes, through the progressive adoption of digital methods and electronic instruments such as Building and Infrastructure Information Modelling". A norm in 8 parts is also being written to support the transition: UNI 11337-1, UNI 11337-4 and UNI 11337-5 were published in January 2017, with five further chapters to follow within a year. In early 2018 the Italian Ministry of Infrastructure and Transport issued a decree (DM 01/12/17) creating a governmental BIM Mandate compelling public client organisations to adopt a digital approach by 2025, with an incremental obligation which will start on 1 January 2019. Lithuania Lithuania is moving towards adoption of BIM infrastructure by founding a public body "Skaitmeninė statyba" (Digital Construction), which is managed by 13 associations. Also, there is a BIM work group established by Lietuvos Architektų Sąjunga (a Lithuanian architects body). The initiative intends Lithuania to adopt BIM, Industry Foundation Classes (IFC) and National Construction Classification as standard. An international conference "Skaitmeninė statyba Lietuvoje" (Digital Construction in Lithuania) has been held annually since 2012. The Netherlands On 1 November 2011, the Rijksgebouwendienst, the agency within the Dutch Ministry of Housing, Spatial Planning and the Environment that manages government buildings, introduced the Rgd BIM Standard, which it updated on 1 July 2012. Norway In Norway BIM has been used increasingly since 2008. Several large public clients require use of BIM in open formats (IFC) in most or all of their projects. The Government Building Authority bases its processes on BIM in open formats to increase process speed and quality, and all large and several small and medium-sized contractors use BIM. National BIM development is centred around the local organisation, buildingSMART Norway which represents 25% of the Norwegian construction industry. Poland BIMKlaster (BIM Cluster) is a non-governmental, non-profit organisation established in 2012 with the aim of promoting BIM development in Poland. In September 2016, the Ministry of Infrastructure and Construction began a series of expert meetings concerning the application of BIM methodologies in the construction industry. 
Portugal Created in 2015 to promote the adoption of BIM in Portugal and its normalisation, the Technical Committee for BIM Standardisation, CT197-BIM, has created the first strategic document for construction 4.0 in Portugal, aiming to align the country's industry around a common vision, integrated and more ambitious than a simple technology change. Russia The Russian government has approved a list of the regulations that provide the creation of a legal framework for the use of information modeling of buildings in construction and encourages the use of BIM in government projects. Slovakia The BIM Association of Slovakia, "BIMaS", was established in January 2013 as the first Slovak professional organisation focused on BIM. Although there are neither standards nor legislative requirements to deliver projects in BIM, many architects, structural engineers and contractors, plus a few investors are already applying BIM. A Slovak implementation strategy created by BIMaS and supported by the Chamber of Civil Engineers and Chamber of Architects has yet to be approved by Slovak authorities due to their low interest in such innovation. Spain A July 2015 meeting at Spain's Ministry of Infrastructure [Ministerio de Fomento] launched the country's national BIM strategy, making BIM a mandatory requirement on public sector projects with a possible starting date of 2018. Following a February 2015 BIM summit in Barcelona, professionals in Spain established a BIM commission (ITeC) to drive the adoption of BIM in Catalonia. Switzerland Since 2009 through the initiative of buildingSmart Switzerland, then 2013, BIM awareness among a broader community of engineers and architects was raised due to the open competition for Basel's Felix Platter Hospital where a BIM coordinator was sought. BIM has also been a subject of events by the Swiss Society for Engineers and Architects, SIA. United Kingdom In May 2011 UK Government Chief Construction Adviser Paul Morrell called for BIM adoption on UK government construction projects. Morrell also told construction professionals to adopt BIM or be "Betamaxed out". In June 2011 the UK government published its BIM strategy, announcing its intention to require collaborative 3D BIM (with all project and asset information, documentation and data being electronic) on its projects by 2016. Initially, compliance would require building data to be delivered in a vendor-neutral 'COBie' format, thus overcoming the limited interoperability of BIM software suites available on the market. The UK Government BIM Task Group led the government's BIM programme and requirements, including a free-to-use set of UK standards and tools that defined 'level 2 BIM'. In April 2016, the UK Government published a new central web portal as a point of reference for the industry for 'level 2 BIM'. The work of the BIM Task Group then continued under the stewardship of the Cambridge-based Centre for Digital Built Britain (CDBB), announced in December 2017 and formally launched in early 2018. Outside of government, industry adoption of BIM since 2016 has been led by the UK BIM Alliance, an independent, not-for-profit, collaboratively-based organisation formed to champion and enable the implementation of BIM, and to connect and represent organisations, groups and individuals working towards digital transformation of the UK's built environment industry. In November 2017, the UK BIM Alliance merged with the UK and Ireland chapter of BuildingSMART. 
In October 2019, CDBB, the UK BIM Alliance and the BSI Group launched the UK BIM Framework. Superseding the BIM levels approach, the framework describes an overarching approach to implementing BIM in the UK, giving free guidance on integrating the international ISO 19650 series of standards into UK processes and practice. National Building Specification (NBS) has published research into BIM adoption in the UK since 2011, and in 2020 published its 10th annual BIM report. In 2011, 43% of respondents had not heard of BIM; in 2020 73% said they were using BIM. North America Canada BIM is not mandatory in Canada. Several organizations support BIM adoption and implementation in Canada: the Canada BIM Council (CANBIM, founded in 2008), the Institute for BIM in Canada, and buildingSMART Canada (the Canadian chapter of buildingSMART International). Public Services and Procurement Canada (formerly Public Works and Government Services Canada) is committed to using non-proprietary or "OpenBIM" BIM standards and avoids specifying any specific proprietary BIM format. Designers are required to use the international standards of interoperability for BIM (IFC). United States The Associated General Contractors of America and US contracting firms have developed various working definitions of BIM that describe it generally as: an object-oriented building development tool that utilizes 5-D modeling concepts, information technology and software interoperability to design, construct and operate a building project, as well as communicate its details. Although the concept of BIM and relevant processes are being explored by contractors, architects and developers alike, the term itself has been questioned and debated with alternatives including Virtual Building Environment (VBE) also considered. Unlike some countries such as the UK, the US has not adopted a set of national BIM guidelines, allowing different systems to remain in competition. In 2021, the National Institute of Building Sciences (NIBS) looked at applying UK BIM experiences to developing shared US BIM standards and processes. The US National BIM Standard had largely been developed through volunteer efforts; NIBS aimed to create a national BIM programme to drive effective adoption at a national scale. BIM is seen to be closely related to Integrated Project Delivery (IPD) where the primary motive is to bring the teams together early on in the project. A full implementation of BIM also requires the project teams to collaborate from the inception stage and formulate model sharing and ownership contract documents. The American Institute of Architects has defined BIM as "a model-based technology linked with a database of project information", and this reflects the general reliance on database technology as the foundation. In the future, structured text documents such as specifications may be able to be searched and linked to regional, national, and international standards. Africa Nigeria BIM has the potential to play a vital role in the Nigerian AEC sector. In addition to its potential clarity and transparency, it may help promote standardization across the industry. For instance, Utiome suggests that, in conceptualizing a BIM-based knowledge transfer framework from industrialized economies to urban construction projects in developing nations, generic BIM objects can benefit from rich building information within specification parameters in product libraries, and used for efficient, streamlined design and construction. 
Similarly, an assessment of the current 'state of the art' by Kori found that medium and large firms were leading the adoption of BIM in the industry. Smaller firms were less advanced with respect to process and policy adherence. There has been little adoption of BIM in the built environment due to construction industry resistance to changes or new ways of doing things. The industry is still working with conventional 2D CAD systems in services and structural designs, although production could be in 3D systems. There is virtually no utilisation of 4D and 5D systems. BIM Africa Initiative, primarily based in Nigeria, is a non-profit institute advocating the adoption of BIM across Africa. Since 2018, it has been engaging with professionals and the government towards the digital transformation of the built industry. Produced annually by its research and development committee, the African BIM Report gives an overview of BIM adoption across the African continent. South Africa The South African BIM Institute, established in May 2015, aims to enable technical experts to discuss digital construction solutions that can be adopted by professionals working within the construction sector. Its initial task was to promote the SA BIM Protocol. There are no mandated or national best practice BIM standards or protocols in South Africa. Organisations implement company-specific BIM standards and protocols at best (there are isolated examples of cross-industry alliances). Oceania Australia In February 2016, Infrastructure Australia recommended: "Governments should make the use of Building Information Modelling (BIM) mandatory for the design of large-scale complex infrastructure projects. In support of a mandatory rollout, the Australian Government should commission the Australasian Procurement and Construction Council, working with industry, to develop appropriate guidance around the adoption and use of BIM; and common standards and protocols to be applied when using BIM". New Zealand In 2015, many projects in the rebuilding of Christchurch were being assembled in detail on a computer using BIM well before workers set foot on the site. The New Zealand government started a BIM acceleration committee, as part of a productivity partnership with the goal of 20 per cent more efficiency in the construction industry by 2020. Today, BIM use is still not mandated in the country while several challenges have been identified for its implementation in the country. However, members of the AEC industry and academia have developed a national BIM handbook providing definitions, case studies and templates. Purposes or dimensionality Some purposes or uses of BIM may be described as 'dimensions'. However, there is little consensus on definitions beyond 5D. Some organisations dismiss the term; for example, the UK Institution of Structural Engineers does not recommend using nD modelling terms beyond 4D, adding "cost (5D) is not really a 'dimension'." 3D 3D BIM, an acronym for three-dimensional building information modeling, refers to the graphical representation of an asset's geometric design, augmented by information describing attributes of individual components. 3D BIM work may be undertaken by professional disciplines such as architectural, structural, and MEP, and the use of 3D models enhances coordination and collaboration between disciplines. A 3D virtual model can also be created by creating a point cloud of the building or facility using laser scanning technology. 
4D 4D BIM, an acronym for 4-dimensional building information modeling, refers to the intelligent linking of individual 3D CAD components or assemblies with time- or scheduling-related information. The term 4D refers to the fourth dimension: time, i.e. 3D plus time. 4D modelling enables project participants (architects, designers, contractors, clients) to plan, sequence the physical activities, visualise the critical path of a series of events, mitigate the risks, report and monitor progress of activities through the lifetime of the project. 4D BIM enables a sequence of events to be depicted visually on a time line that has been populated by a 3D model, augmenting traditional Gantt charts and critical path (CPM) schedules often used in project management. Construction sequences can be reviewed as a series of problems using 4D BIM, enabling users to explore options, manage solutions and optimize results. As an advanced construction management technique, it has been used by project delivery teams working on larger projects. 4D BIM has traditionally been used for higher end projects due to the associated costs, but technologies are now emerging that allow the process to be used by laymen or to drive processes such as manufacture. 5D 5D BIM, an acronym for 5-dimensional building information modeling refers to the intelligent linking of individual 3D components or assemblies with time schedule (4D BIM) constraints and then with cost-related information. 5D models enable participants to visualise construction progress and related costs over time. This BIM-centric project management technique has potential to improve management and delivery of projects of any size or complexity. In June 2016, McKinsey & Company identified 5D BIM technology as one of five big ideas poised to disrupt construction. It defined 5D BIM as "a five-dimensional representation of the physical and functional characteristics of any project. It considers a project’s time schedule and cost in addition to the standard spatial design parameters in 3-D." 6D 6D BIM, an acronym for 6-dimensional building information modeling, is sometimes used to refer to the intelligent linking of individual 3D components or assemblies with all aspects of project life-cycle management information. However, there is less consensus about the definition of 6D BIM; it is also sometimes used to cover use of BIM for sustainability purposes. In the project life cycle context, a 6D model is usually delivered to the owner when a construction project is finished. The "As-Built" BIM model is populated with relevant building component information such as product data and details, maintenance/operation manuals, cut sheet specifications, photos, warranty data, web links to product online sources, manufacturer information and contacts, etc. This database is made accessible to the users/owners through a customized proprietary web-based environment. This is intended to aid facilities managers in the operation and maintenance of the facility. The term is less commonly used in the UK and has been replaced with reference to the Asset Information Requirements (AIR) and an Asset Information Model (AIM) as specified in BS EN ISO 19650-3:2020. 
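As a toy illustration of the linking described in the 4D and 5D passages above, the following Python sketch attaches schedule data and cost data to a single model component. Every field name and value here is an illustrative assumption; none of it is drawn from a BIM standard, a particular software product, or the text above.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class BimElement:
    # 3D: geometric/attribute information about one component
    guid: str
    discipline: str          # e.g. "architectural", "structural", "MEP"
    geometry_ref: str        # reference to the component in the 3D model
    # 4D: time/scheduling information linked to the component
    planned_start: date
    planned_finish: date
    # 5D: cost-related information linked to the component
    quantity: float
    unit_cost: float

    def planned_cost(self) -> float:
        """5D view: cost implied by the linked quantity and unit cost."""
        return self.quantity * self.unit_cost

# Hypothetical element: a floor slab scheduled for two weeks of work.
slab = BimElement("slab-001", "structural", "IfcSlab#42",
                  date(2024, 3, 1), date(2024, 3, 14), 250.0, 180.0)
print(slab.planned_cost())  # 45000.0
```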
See also Data model Design computing Digital twin (the physical manifestation instrumented and connected to the model) BCF GIS Digital Building Logbook Lean construction List of BIM software Macro BIM OpenStreetMap Pre-fire planning System information modelling Whole Building Design Guide Facility management (or Building management) Building automation (and Building management systems) Notes References Further reading Kensek, Karen (2014). Building Information Modeling, Routledge. Kensek, Karen and Noble, Douglas (2014). Building Information Modeling: BIM in Current and Future Practice, Wiley. Lévy, François (2011). BIM in Small-Scale Sustainable Design, Wiley. Weygant, Robert S. (2011) BIM Content Development: Standards, Strategies, and Best Practices, Wiley. Smith, Dana K. and Tardif, Michael (2009). Building Information Modeling: A Strategic Implementation Guide for Architects, Engineers, Constructors, and Real Estate Asset Managers, Wiley. Underwood, Jason, and Isikdag, Umit (2009). Handbook of Research on Building Information Modeling and Construction Informatics: Concepts and Technologies, Information Science Publishing. Krygiel, Eddy and Nies, Brad (2008). Green BIM: Successful Sustainable Design with Building Information Modeling, Sybex. Kymmell, Willem (2008). Building Information Modeling: Planning and Managing Construction Projects with 4D CAD and Simulations, McGraw-Hill Professional. Data modeling Computer-aided design Construction Architecture Building engineering
Building information modeling
[ "Engineering" ]
8,298
[ "Computer-aided design", "Design engineering", "Building engineering", "Data modeling", "Construction", "Data engineering", "Building information modeling", "Civil engineering", "Architecture" ]
3,978,907
https://en.wikipedia.org/wiki/Greenwood%20Furnace%20State%20Park
Greenwood Furnace State Park is a Pennsylvania state park in Jackson Township, Huntingdon County, Pennsylvania in the United States. The park is near the historic iron making center of Greenwood Furnace. The park includes the ghost town of Greenwood that grew up around the ironworks, old roads and charcoal hearths. Greenwood Furnace State Park is adjacent to Rothrock State Forest and on the western edge of an area of Central Pennsylvania known as the Seven Mountains. The park is on Pennsylvania Route 305, south of State College. Within the park is Greenwood Lake, a lake that is stocked with trout and which allows ice fishing during the winter. The dam that forms the lake is listed on the National Register of Historic Places. Greenwood Furnace State Park was chosen by the Pennsylvania Department of Conservation and Natural Resources (DCNR) and its Bureau of Parks as one of "25 Must-See Pennsylvania State Parks". History Early settlement The northern Huntingdon County area was once inhabited by the Ona Jutta Hage or Juniata tribe. Their name meant "The People of the Standing Stone", an obelisk that once stood in their village near present-day Huntingdon. The Juniata had moved away by the time that Pennsylvania was colonized by William Penn. Penn bought the land from the Iroquois and the Tuscarora and Shawnee that had resettled throughout central Pennsylvania were soon forced to move on once again. Many different groups of European settlers migrated to the area by the late 18th century. They were mostly farmers of Scots-Irish descent with large numbers of Amish and Mennonite Germans who had fled religious persecution in Germany, Austria, and Switzerland. Later settlers built a tavern and a sawmill in the present location of Greenwood Furnace State Park. Greenwood Iron Works Greenwood Furnace State Park is named for the iron furnace that was once the center of industry in northern Huntingdon County. Greenwood Furnace was open for operation on June 5, 1834. The parent company, Norris, Rawle and Co., selected the site because of the ease in access to the needed natural resources, iron ore, limestone, trees for charcoal and a steady water supply. Greenwood Furnace was able to produce up to five tons of pig iron ingots per day at the height of its production. Soon a small village sprang up around Greenwood Furnace to support the needs of the workers and the furnace. The village included 20 houses, a company store, company offices, stables and a blacksmith shop. A deposit of high quality iron ore was discovered in the area leading to further growth in the Greenwood Furnace area. A gristmill was constructed in 1842. Greenwood Lake was built at this time to create a water supply to power the mill. Greenwood Lake is currently used as a recreation lake by visitors to Greenwood Furnace State Park. Ownership of Greenwood Furnace Iron Works was transferred to John A. Wright in 1847. Wright was one of the founders of the Pennsylvania Railroad in nearby Altoona. The ironworks at Greenwood and nearby Freedom Iron Works were supervised in part by Andrew Carnegie. Under the leadership of Wright and Carnegie Greenwood and Freedom became vitally important centers of iron production for the booming railroad industry. The company expanded its iron output by building a Bessemer furnace at Freedom Iron Works and building a second stack at Greenwood Furnace. The community surrounding Greenwood Furnace Iron Works reached its zenith in the 1870s. 
At that time it included the two furnaces, the ironmaster's mansion, a church and school, a company store and blacksmith and wagon shop, there were seventeen stables, ninety tenant houses in the mill village and the gristmill. Greenwood Furnace was the home to about 300 families and included its own baseball team known at the "Energetics" and a fifteen piece brass band. Greenwood Furnace became a ghost town in the early 20th century. Changes in the iron industry lead to the closing of the now obsolete furnaces. When the furnaces were shut down, the jobs were gone and the people of Greenwood left their homes for jobs elsewhere. The land of the former furnace and village was soon reclaimed for nature through the efforts of the Commonwealth of Pennsylvania. Tree nursery and state park On February 1, 1906 the state of Pennsylvania purchased the former lands of the ironworks and village from the Logan Iron and Steel Company. The State Forest Reserve Commission established the Greenwood Forest Tree Nursery (which later became the state park) on part of the land, while the rest was made part of the "Greenwood Reserves" and eventually became part of Rothrock State Forest. The soil was ideally suited for use as a tree nursery after the years of use as an iron furnace. The remnants of fly ash and charcoal dust enriched the earth with minerals that were needed for the growing of trees. The nursery began operation in 1906 and closed in 1993. During the 1970s and 1980s the nursery produced an average of three million seedlings per year. The seedlings were replanted in forests throughout Pennsylvania. The Pennsylvania Bureau of Forestry recently re-established the tree nursery on a limited basis to provide seedlings for use at its other nurseries and for sale to private nurseries. The state park was formally established by 1924 by the Pennsylvania Bureau of Forestry (although it was then known as "Greenwood Public Camp"). Former residents of Greenwood village had begun to visit their old homes earlier and in 1921 began an annual reunion known as "Old Home Day." Although the Bureau of Forestry made some improvements to the park, most of the facilities at the park were built during the Great Depression by the young men of the Civilian Conservation Corps, established by President Franklin D. Roosevelt. The boys of the CCC worked to restore a furnace stack and also repaired six original buildings that had not been dismantled when the village was abandoned. In the 1930s the name became "Greenwood Furnace State Forest Park". Greenwood Furnace State Park became an official part of the Pennsylvania state park system in 1966. Archaeological work began at the park in 1976 to uncover the remains of the village. Greenwood Furnace was designated a Historical Landmark in 1995 by ASM International in recognition of the superior quality iron that was produced by Greenwood Iron and was vitally important to the westward expansion of the railroads. Recreation Greenwood Furnace State Park provides a look into historic industrial past of north Huntingdon county as well as recreational opportunities similar that of other Pennsylvania State Parks. A walking tour passes through the remains of Greenwood Furnace, providing park visitors with a lesson about the history of the town that once surrounded the ironworks. A working blacksmith shop has historical demonstrations of the craft of blacksmithing. Greenwood Lake Greenwood Lake was first built to provide water for a gristmill. 
It stands today as a reminder of the small town that once thrived there. The lake is used for recreational fishing, ice fishing, and swimming. Beginning in 2008 lifeguards will not be posted at the beach. Picnics Greenwood Furnace State Park has a centrally located picnic area in a spruce and pine grove. There are several picnic tables and seven pavilions that can be rented up to eleven months in advance. The picnic area has easy access to a playground, a horseshoe pit, volleyball courts, a snack bar and a softball field. Camping There is a 51 site campground at Greenwood Furnace State Park. It opens at the beginning of trout season in mid-April and closes with the conclusion of deer season in late December. Forty-six of the camp sites have an electric hook-up. A showerhouse with flush toilets and laundry tubs is nearby. Hunting There are about acres of woods open to hunting at Greenwood Furnace State Park. Hunters are expected to follow the rules and regulations of the Pennsylvania Game Commission. The common game species are ruffed grouse, squirrels, white-tailed deer, and turkeys. The hunting of groundhogs is prohibited. Hunters may access the adjoining Rothrock State Forest by using the parking lots at Greenwood Furnace State Park and hiking in on the trails. Trails The trails of Greenwood Furnace State Park explore the forests in the park and venture out into Rothrock State Forest. They also pass by the historical remains of Greenwood Furnace Iron Works. The trails are open to hiking, cross-country skiing, and in some locations recreational snowmobiling. Chestnut Spring Trail is a "easy/moderate" trail marked with yellow blazes. It passes by several springs, a collier's hut, and a charcoal hearth as it winds its way up a hollow. Dogtown Trail is a "easy/moderate" trail marked with blue blazes. The trail is named for the former village of Dogtown, which in turn was named for the dogs that barked at the passing iron ore trains. Dogtown Trail is open for hiking and snowmobiling. The trail begins at the park campground and connects with Brush Ridge Trail. Fire Tower Loop is a "moderate/difficult" trail marked with blue blazes on the Greenwood Spur and red blazes on Ruff Gap and Snowmobile Road. This trail loops through the park and passes by the Greenwood Fire Tower on Broad Mountain, house foundations, and charcoal hearths. Greenwood Tower was built in the 1930s by the CCC and is still used by forest rangers to watch for forest fires. Greenwood Trail is a "easy/moderate" trail marked with red blazes. This short loop begins near the picnic area at pavilion six and passes through a diversity of trees, ferns and wildflowers. Lakeview Trail is a "easy/moderate" trail marked with white blazes. It runs along part of the edge of Greenwood Lake. Monsell Trail is a "moderate" trail marked with yellow blazes. The trail intersects with Greenwood Trail and links the campground to the Standing Stone Trail. Monsell Trail passes through a pine plantation left over from the days of the ironworks. Ore Banks Trail is a "moderate" trail marked with yellow and red blazes. It shares part of its trails with Chestnut Springs Trail (yellow blazes) and Brush Ridge Trail (red blazes). Ore Banks Trail passes over the top of a ridge with a view of the park and the remnants of Brush Ridge Ore Banks where iron ore was extracted from the ground and transferred to the furnace. The trail then follows the path used by the mule-drawn railroad that transported the iron ore to the furnace. 
Stone Valley Vista Loop Trail is a "moderate" trail marked with blue (Turkey Trail) and orange (Standing Stone Trail) blazes. Part of the trail follows a logging slide used during the days of the iron furnace. Viantown Trail is a "moderate" trail marked with yellow blazes. It follows the old wagon road that connected Greenwood Furnace with Viantown and crosses Brush Ridge. Greenwood Furnace State Park is also a trail head for two much longer backpacking trails that pass through the Appalachian Mountains of Pennsylvania. The Pennsylvania Mid State Trail is a trail that runs from the Maryland state line to the New York state line near Lawrenceville, Pennsylvania and connects to the park via the Greenwood Spur. The Standing Stone Trail is a backpacking trail that connects the park to the Tuscarora Trail, which in turn connects with the Appalachian Trail. Nearby state parks The following state parks are within of Greenwood Furnace State Park: Bald Eagle State Park (Centre County) Big Spring State Forest Picnic Area (Perry County) Black Moshannon State Park (Centre County) Canoe Creek State Park (Blair County) Fowlers Hollow State Park (Perry County) Penn-Roosevelt State Park (Centre County) Poe Paddy State Park (Centre County) Poe Valley State Park (Centre County) Reeds Gap State Park (Mifflin County) Whipple Dam State Park (Huntingdon County) References External links   Huntingdon County Visitors Bureau official website State parks of Pennsylvania Archaeological sites on the National Register of Historic Places in Pennsylvania Protected areas established in 1924 Civilian Conservation Corps in Pennsylvania Parks in Huntingdon County, Pennsylvania Industrial furnaces National Register of Historic Places in Huntingdon County, Pennsylvania 1924 establishments in Pennsylvania Protected areas of Huntingdon County, Pennsylvania
Greenwood Furnace State Park
[ "Chemistry" ]
2,404
[ "Metallurgical processes", "Industrial furnaces" ]
3,980,616
https://en.wikipedia.org/wiki/Eigenspinor
In quantum mechanics, eigenspinors are thought of as basis vectors representing the general spin state of a particle. Strictly speaking, they are not vectors at all, but in fact spinors. For a single spin 1/2 particle, they can be defined as the eigenvectors of the Pauli matrices. General eigenspinors In quantum mechanics, the spin of a particle or collection of particles is quantized. In particular, all particles have either half-integer or integer spin. In the most general case, the eigenspinors for a system can be quite complicated. For a collection of an Avogadro number of particles, each one with two (or more) possible spin states, writing down a complete set of eigenspinors would not be practically possible. However, eigenspinors are very useful when dealing with the spins of a very small number of particles. The spin 1/2 particle The simplest and most illuminating example of eigenspinors is for a single spin 1/2 particle. A particle's spin has three components, corresponding to the three spatial dimensions: S_x, S_y, and S_z. For a spin 1/2 particle, there are only two possible eigenstates of spin: spin up and spin down. Spin up is denoted by the column matrix χ₊ = (1, 0)^T and spin down by χ₋ = (0, 1)^T. Each component of the angular momentum thus has two eigenspinors. By convention, the z direction is chosen as having the χ₊ and χ₋ states as its eigenspinors. The eigenspinors for the other two orthogonal directions follow from this convention: for the x direction they are (1/√2)(1, 1)^T and (1/√2)(1, −1)^T, and for the y direction they are (1/√2)(1, i)^T and (1/√2)(1, −i)^T. All of these results are but special cases of the eigenspinors for the direction specified by θ and φ in spherical coordinates – those eigenspinors are χ₊ = (cos(θ/2), e^(iφ) sin(θ/2))^T and χ₋ = (e^(−iφ) sin(θ/2), −cos(θ/2))^T. Example usage Suppose there is a spin 1/2 particle in a normalized state χ = (a, b)^T. To determine the probability of finding the particle in a spin up state, we simply multiply the state of the particle by the adjoint of the eigenspinor representing spin up, and square the magnitude of the result. Thus, the eigenspinor allows us to sample the part of the particle's state that is in the same direction as the eigenspinor. First we multiply: χ₊† χ = a. Now, we simply square the magnitude of this value to obtain the probability of the particle being found in a spin up state: P(up) = |a|². Properties Each set of eigenspinors forms a complete, orthonormal basis. This means that any state can be written as a linear combination of the basis spinors. The eigenspinors are eigenvectors of the Pauli matrices in the case of a single spin 1/2 particle. See also Spin Spinor Eigenvector Pauli matrices References Griffiths, David J. (2005). Introduction to Quantum Mechanics (2nd ed.). Upper Saddle River, NJ: Pearson Prentice Hall. de la Peña, Luis (2006). Introducción a la mecánica cuántica (3rd ed.). México DF: Fondo de Cultura Económica. Quantum mechanics
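The probability calculation outlined in the example-usage section above can be checked numerically. A minimal Python sketch follows; the particular state vector used is a hypothetical example chosen only for illustration, not one taken from the article or its references.

```python
import numpy as np

# Eigenspinors of S_z: spin up (1, 0)^T and spin down (0, 1)^T.
chi_up = np.array([1.0, 0.0], dtype=complex)
chi_down = np.array([0.0, 1.0], dtype=complex)

# A hypothetical spin state, normalized so the probabilities sum to 1.
chi = np.array([1.0 + 1.0j, 1.0], dtype=complex)
chi /= np.linalg.norm(chi)

# Probability of measuring spin up along z: |<chi_up|chi>|^2.
# np.vdot conjugates its first argument, giving the adjoint (dagger) product.
p_up = abs(np.vdot(chi_up, chi)) ** 2
p_down = abs(np.vdot(chi_down, chi)) ** 2
print(p_up, p_down, p_up + p_down)  # ~0.667, ~0.333, 1.0
```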
Eigenspinor
[ "Physics" ]
642
[ "Theoretical physics", "Quantum mechanics" ]
3,981,163
https://en.wikipedia.org/wiki/Circulator%20pump
A circulator pump or circulating pump is a specific type of pump used to circulate gases, liquids, or slurries in a closed circuit with small elevation changes. They are commonly found circulating water in a hydronic heating or cooling system. They are specialized in providing a large flow rate rather than providing much head, as they are supposed to only overcome the friction of a piping system, as opposed to a regular centrifugal pump which may need to lift a fluid significantly. Circulator pumps as used in hydronic systems are usually electrically powered centrifugal pumps. As used in homes, they are often small, sealed, and rated at a fraction of a horsepower, but in commercial applications they range in size up to many horsepower and the electric motor is usually separated from the pump body by some form of mechanical coupling. The sealed units used in home applications often have the motor rotor, pump impeller, and support bearings combined and sealed within the water circuit. This avoids one of the principal challenges faced by the larger, two-part pumps: maintaining a water-tight seal at the point where the pump drive shaft enters the pump body. Small- to medium-sized circulator pumps are usually supported entirely by the pipe flanges that join them to the rest of the hydronic plumbing. Large pumps are usually pad-mounted. Pumps that are used solely for closed hydronic systems can be made with cast iron components as the water in the loop will either become de-oxygenated or be treated with chemicals to inhibit corrosion. But pumps that have a steady stream of oxygenated, potable water flowing through them must be made of more expensive materials such as bronze. Use with domestic hot water Circulating pumps are often used to circulate domestic hot water so that a faucet will provide hot water instantly upon demand, or (more conserving of energy) a short time after a user's request for hot water. In regions where water conservation issues are rising in importance with rapidly expanding and urbanizing populations local water authorities offer rebates to homeowners and builders that install a circulator pump to save water. In typical one-way plumbing without a circulation pump, water is simply piped from the water heater through the pipes to the tap. Once the tap is shut off, the water remaining in the pipes cools producing the familiar wait for hot water the next time the tap is opened. By adding a circulator pump and constantly circulating a small amount of hot water through the pipes from the heater to the farthest fixture and back to the heater, the water in the pipes is always hot, and no water is wasted during the wait. The tradeoff is the energy wasted in operating the pump and the additional demand on the water heater to make up for the heat lost from the constantly hot pipes. While the majority of these pumps mount nearest to the hot water heater and have no adjustable temperature capabilities, a significant reduction in energy can be achieved by using a temperature adjustable thermostatically controlled circulation pump mounted at the last fixture on the loop. Thermostatically controlled circulation pumps allow owners to choose the desired temperature of hot water to be maintained within the hot water pipes since most homes do not require degree water instantly out of their taps. Thermostatically controlled circulation pumps cycle on and off to maintain a user's chosen temperature and consume less energy than a continuously operating pump. 
By installing a thermostatically controlled pump just after the farthest fixture on the loop, cyclic pumping maintains ready hot water up to the last fixture on the loop instead of wasting energy heating the piping from the last fixture to the water heater. Installing a circulation pump at the farthest fixture on a hot water circulation loop is often not feasible due to limited available space, cosmetics, noise restrictions or lack of available power. Recent advancements in hot water circulation technology allow for benefiting from temperature-controlled pumping without having to install the pump at the last fixture on the hot water loop. These advanced hot water circulation systems utilize a water-contacting temperature probe strategically installed at the last fixture on the loop to minimize the energy wasted heating lengthy return pipes. Thermal insulation applied to the pipes helps mitigate this second loss and minimize the amount of water that must be pumped to keep hot water constantly available. The traditional hot water recirculation system uses the existing cold water line as a return line from the point of use located farthest from the hot water tank back to the hot water tank. The first of two system types has a pump mounted at the hot water heater, while a "normally open" thermostatic control valve is installed at the farthest fixture from the water heater and closes once hot water contacts the valve, controlling crossover flow between the hot and cold lines. A second type of system uses a thermostatically controlled pump which is installed at the farthest fixture from the water heater. These thermostatically controlled pumps often have a built-in "normally closed" check valve which prevents water in the cold water line from entering the hot water line. Compared to a dedicated return line, using the cold water line as a return has the disadvantage of heating the cold water pipe (and the contained water). Accurate temperature monitoring and active flow control can minimize the loss of cold water within the cold water line. Technological advancements within the industry allow for incorporating timers to limit operation to specific hours of the day, reducing energy waste by only operating when occupants are likely to use hot water. Additional advancements in technology include pumps which cycle on and off to maintain hot water temperature, versus a continuously operating pump which consumes more electrical energy. Reduced energy waste and discomfort are possible by preventing occurrences of hot water line siphoning in open-loop hot water circulation systems which utilize the cold water line to return water back to the water heater. Hot water line siphoning occurs when water from within the hot water line siphons or is forced into the cold water line due to differences in water pressure between the hot and cold water lines. Utilizing a "normally closed" solenoid valve significantly reduces energy consumption by preventing siphoning of non-hot water out of hot water lines during cold water use. Using cold water instantly lowers the water pressure in the cold water lines; the higher water pressure in the hot water lines then forces water through "normally open" thermostatic crossover valves and backflow check valves (which only prevent cold water from flowing into the hot water line), increasing the energy demand on the water heater. Circulator pump potential side effects It is important to take note of the increased heat in the piping system, which in turn increases system pressure.
Piping that is sensitive to the water condition (for example, copper piping carrying soft water) will be adversely affected by the continual flow. Although water is conserved, the parasitic heat loss through the piping will be greater as a result of the increased heat passing through it. Quantitative measures of function During pump operation, there is a drop in pressure at the center of the rotor, causing liquid to flow in through the suction port. If the pressure decrease is excessive, the pressure in some parts of the rotor can fall below the saturation pressure corresponding to the temperature of the pumped liquid, causing so-called cavitation, i.e. evaporation of the liquid. To prevent this, the pressure in the suction port (at the inlet of the pump) should be higher than the saturation pressure corresponding to the liquid temperature by at least the net positive suction head (NPSH). The following parameters are characteristic of circulating pumps: capacity Q, pump pressure ∆p (delivery head ∆H), energy consumption P with pump unit efficiency η, impeller rotational speed n, NPSH and sound level L. In practice, the graphical relationship between the values Q, ∆p (∆H), P and η is used. These are called the pump curves. They are determined by tests whose methodology is standardized. These curves are specified for water with a density of 1000 kg/m3 and a kinematic viscosity of 1 mm2/s. When the circulating pump is used for liquids of different density and viscosity, the pump curves have to be recalculated. These curves are provided in catalogues and in operation and maintenance manuals, and their shape is covered by the pump manufacturer's warranty. EU regulation for circulators Since 1 January 2013, circulators must comply with European regulation 641/2009. This regulation is part of the ecodesign policy of the European Union. See also Zone valve References Bibliography Pumps
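The cavitation criterion above — suction-port pressure exceeding the saturation pressure by at least the NPSH — amounts to a one-line check. The sketch below is a minimal illustration; the fluid properties and the required-NPSH figure are hypothetical stand-ins for values that would normally come from property tables and the manufacturer's pump curves.

```python
RHO = 1000.0   # kg/m^3, density assumed for the published pump curves
G = 9.81       # m/s^2

def npsh_available(p_suction_pa: float, p_saturation_pa: float, rho: float = RHO) -> float:
    """Margin between suction-port pressure and saturation pressure,
    expressed as a head of liquid in metres."""
    return (p_suction_pa - p_saturation_pa) / (rho * G)

# Hypothetical example: water at about 60 degrees C (saturation pressure ~19.9 kPa)
# with an absolute suction pressure of 1.2 bar; required NPSH read off a pump curve.
npsh_a = npsh_available(120_000.0, 19_900.0)
npsh_required = 2.5   # m, illustrative value only
print(f"NPSH available = {npsh_a:.1f} m; cavitation risk: {npsh_a < npsh_required}")
```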
Circulator pump
[ "Physics", "Chemistry" ]
1,777
[ "Physical systems", "Hydraulics", "Turbomachinery", "Pumps" ]
3,981,747
https://en.wikipedia.org/wiki/Epsilon%20number
In mathematics, the epsilon numbers are a collection of transfinite numbers whose defining property is that they are fixed points of an exponential map. Consequently, they are not reachable from 0 via a finite series of applications of the chosen exponential map and of "weaker" operations like addition and multiplication. The original epsilon numbers were introduced by Georg Cantor in the context of ordinal arithmetic; they are the ordinal numbers ε that satisfy the equation ε = ω^ε, in which ω is the smallest infinite ordinal. The least such ordinal is ε0 (pronounced epsilon nought (chiefly British), epsilon naught (chiefly American), or epsilon zero), which can be viewed as the "limit" obtained by transfinite recursion from a sequence of smaller limit ordinals: ε0 = sup{ω, ω^ω, ω^(ω^ω), ω^(ω^(ω^ω)), ...}, where sup is the supremum, which is equivalent to set union in the case of the von Neumann representation of ordinals. Larger ordinal fixed points of the exponential map are indexed by ordinal subscripts, resulting in ε1, ε2, ..., ε_ω, ε_{ω+1}, ..., ε_{ε0}, .... The ordinal ε0 is still countable, as is any epsilon number whose index is countable. Uncountable ordinals also exist, along with uncountable epsilon numbers whose index is an uncountable ordinal. The smallest epsilon number ε0 appears in many induction proofs, because for many purposes transfinite induction is only required up to ε0 (as in Gentzen's consistency proof and the proof of Goodstein's theorem). Its use by Gentzen to prove the consistency of Peano arithmetic, along with Gödel's second incompleteness theorem, shows that Peano arithmetic cannot prove the well-foundedness of this ordering (ε0 is in fact the least ordinal with this property, and as such, in proof-theoretic ordinal analysis, is used as a measure of the strength of the theory of Peano arithmetic). Many larger epsilon numbers can be defined using the Veblen function. A more general class of epsilon numbers has been identified by John Horton Conway and Donald Knuth in the surreal number system, consisting of all surreals that are fixed points of the base ω exponential map x ↦ ω^x. Gamma numbers (see additively indecomposable ordinal) are defined to be the ordinals γ > 0 such that α + γ = γ whenever α < γ, delta numbers (see multiplicatively indecomposable ordinal) to be the ordinals δ > 1 such that αδ = δ whenever 0 < α < δ, and epsilon numbers to be the ordinals ε > 2 such that α^ε = ε whenever 1 < α < ε. The gamma numbers are those of the form ω^β, and the delta numbers are those of the form ω^(ω^β). Ordinal ε numbers The standard definition of ordinal exponentiation with base α is: α^0 = 1; α^β = α^(β−1) · α when β has an immediate predecessor β−1; and α^β = sup{α^δ : 0 < δ < β} whenever β is a limit ordinal. From this definition, it follows that for any fixed ordinal α > 1, the mapping β ↦ α^β is a normal function, so it has arbitrarily large fixed points by the fixed-point lemma for normal functions. When α = ω, these fixed points are precisely the ordinal epsilon numbers. The epsilon numbers themselves can be defined by the recursion ε_β = sup{ε_{β−1} + 1, ω^(ε_{β−1} + 1), ω^(ω^(ε_{β−1} + 1)), ...} when β has an immediate predecessor β−1, and ε_β = sup{ε_δ : δ < β} whenever β is a limit ordinal. A different sequence with the same supremum, ε1, is obtained by starting from 0 and exponentiating with base ε0 instead: 0, 1, ε0, ε0^(ε0), ε0^(ε0^(ε0)), .... Generally, the epsilon number indexed by any ordinal that has an immediate predecessor can be constructed similarly. In particular, whether or not the index β is a limit ordinal, ε_β is a fixed point not only of base ω exponentiation but also of base δ exponentiation for all ordinals 1 < δ < ε_β. Since the epsilon numbers are an unbounded subclass of the ordinal numbers, they are enumerated using the ordinal numbers themselves. For any ordinal number β, ε_β is the least epsilon number (fixed point of the exponential map) not already in the set {ε_δ : δ < β}.
It might appear that this is the non-constructive equivalent of the constructive definition using iterated exponentiation; but the two definitions are equally non-constructive at steps indexed by limit ordinals, which represent transfinite recursion of a higher order than taking the supremum of an exponential series. The following facts about epsilon numbers are straightforward to prove: Although it is quite a large number, ε0 is still countable, being a countable union of countable ordinals; in fact, ε_β is countable if and only if β is countable. The union (or supremum) of any non-empty set of epsilon numbers is an epsilon number; so for instance ε_ω = sup{ε0, ε1, ε2, ...} is an epsilon number. Thus, the mapping β ↦ ε_β is a normal function. The initial ordinal of any uncountable cardinal is an epsilon number. Representation of ε0 by rooted trees Any epsilon number ε has Cantor normal form ε = ω^ε, which means that the Cantor normal form is not very useful for epsilon numbers. The ordinals less than ε0, however, can be usefully described by their Cantor normal forms, which leads to a representation of ε0 as the ordered set of all finite rooted trees, as follows. Any ordinal α < ε0 has Cantor normal form α = ω^(β1) + ω^(β2) + ... + ω^(βk), where k is a natural number and β1 ≥ β2 ≥ ... ≥ βk are ordinals with α > β1, uniquely determined by α. Each of the ordinals β1, ..., βk in turn has a similar Cantor normal form. We obtain the finite rooted tree representing α by joining the roots of the trees representing β1, ..., βk to a new root. (This has the consequence that the number 0 is represented by a single root while the number 1 = ω^0 is represented by a tree containing a root and a single leaf.) An order on the set of finite rooted trees is defined recursively: we first order the subtrees joined to the root in decreasing order, and then use lexicographic order on these ordered sequences of subtrees. In this way the set of all finite rooted trees becomes a well-ordered set which is order isomorphic to ε0. This representation is related to the proof of the hydra theorem, which represents decreasing sequences of ordinals as a graph-theoretic game. Veblen hierarchy The fixed points of the "epsilon mapping" x ↦ ε_x form a normal function, whose fixed points form a normal function; this is known as the Veblen hierarchy (the Veblen functions with base φ_0(α) = ω^α). In the notation of the Veblen hierarchy, the epsilon mapping is φ_1, and its fixed points are enumerated by φ_2. Continuing in this vein, one can define maps φ_α for progressively larger ordinals α (including, by this rarefied form of transfinite recursion, limit ordinals), with progressively larger least fixed points φ_{α+1}(0). The least ordinal not reachable from 0 by this procedure—i. e., the least ordinal α for which φ_α(0) = α, or equivalently the first fixed point of the map α ↦ φ_α(0)—is the Feferman–Schütte ordinal Γ0. In a set theory where such an ordinal can be proved to exist, one has a map Γ that enumerates the fixed points Γ0, Γ1, Γ2, ... of α ↦ φ_α(0); these are all still epsilon numbers, as they lie in the image of φ_β for every β ≤ Γ0, including of the map φ_1 that enumerates epsilon numbers. Surreal ε numbers In On Numbers and Games, the classic exposition on surreal numbers, John Horton Conway provided a number of examples of concepts that had natural extensions from the ordinals to the surreals. One such function is the ω-map n ↦ ω^n; this mapping generalises naturally to include all surreal numbers in its domain, which in turn provides a natural generalisation of the Cantor normal form for surreal numbers. It is natural to consider any fixed point of this expanded map to be an epsilon number, whether or not it happens to be strictly an ordinal number.
Some examples of non-ordinal epsilon numbers are ε_{−1} and ε_{1/2}. There is a natural way to define ε_n for every surreal number n, and the map n ↦ ε_n remains order-preserving. Conway goes on to define a broader class of "irreducible" surreal numbers that includes the epsilon numbers as a particularly interesting subclass. See also Ordinal arithmetic Large countable ordinal References J.H. Conway, On Numbers and Games (1976) Academic Press Section XIV.20 of Ordinal numbers
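The recursive ordering on finite rooted trees described in the representation section above can be made concrete in a few lines of code. In the sketch below, a tree is represented as a Python list of its subtrees (a representation chosen for this sketch, not taken from the article), so the empty list is the ordinal 0 and a tree with subtrees t1, ..., tk stands for ω^t1 + ... + ω^tk.

```python
from functools import cmp_to_key

def cmp_tree(a, b):
    """Compare two finite rooted trees, i.e. two ordinals below epsilon_0.
    Returns -1, 0 or 1. Each tree is a list of its subtrees; the tree with
    subtrees t1, ..., tk represents omega^t1 + ... + omega^tk."""
    key = cmp_to_key(cmp_tree)
    # Order the subtrees joined to the root in decreasing order ...
    sa = sorted(a, key=key, reverse=True)
    sb = sorted(b, key=key, reverse=True)
    # ... then compare the ordered sequences lexicographically.
    for x, y in zip(sa, sb):
        c = cmp_tree(x, y)
        if c != 0:
            return c
    # If one sequence is a prefix of the other, the longer sum is larger.
    return (len(sa) > len(sb)) - (len(sa) < len(sb))

ZERO = []          # the ordinal 0: a single root
ONE = [ZERO]       # omega^0 = 1: a root with a single leaf
OMEGA = [ONE]      # omega^1
assert cmp_tree(ZERO, ONE) < 0                  # 0 < 1
assert cmp_tree([ZERO, ZERO], OMEGA) < 0        # 2 < omega
assert cmp_tree(OMEGA, [ONE, ZERO]) < 0         # omega < omega + 1
assert cmp_tree([ONE, ZERO], [ZERO, ONE]) == 0  # same multiset of subtrees
```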
Epsilon number
[ "Mathematics" ]
1,644
[ "Ordinal numbers", "Order theory", "Mathematical objects", "Numbers" ]
3,983,007
https://en.wikipedia.org/wiki/Mecamylamine
Mecamylamine (INN, BAN; or mecamylamine hydrochloride (USAN); brand names Inversine, Vecamyl) is a non-selective, non-competitive antagonist of the nicotinic acetylcholine receptors (nAChRs) that was introduced in the 1950s as an antihypertensive drug. In the United States, it was voluntarily withdrawn from the market in 2009 but was brought to market in 2013 as Vecamyl and eventually was marketed by Turing Pharmaceuticals. Chemically, mecamylamine is a secondary aliphatic amine, with a pKaH of 11.2. Medical uses Mecamylamine has been used as an orally-active ganglionic blocker in treating autonomic dysreflexia and hypertension, but, like most ganglionic blockers, it is more often used now as a research tool. Mecamylamine is also sometimes used as an antiaddictive drug to help people stop smoking tobacco, and is now more widely used for this application than it is for lowering blood pressure. This effect is thought to be due to its blocking α3β4 nicotinic receptors in the brain. It has also been reported to bring about sustained relief from tics in Tourette syndrome when a series of more commonly used agents had failed. In a recent double-blind, placebo-controlled Phase II trial in Indian patients with major depression, (S)-mecamylamine (TC-5214) appeared to have efficacy as an augmentation therapy. This is the first substantive evidence that compounds whose primary pharmacology is antagonism of neuronal nicotinic receptors can have antidepressant properties. TC-5214 is currently in Phase III of clinical development as an add-on treatment and in Phase II as a monotherapy treatment for major depression. Development is funded by Targacept and AstraZeneca. The first results reported from the Phase III trials showed that TC-5214 failed to meet the primary goal and that the trial did not replicate the effects that had been encouraging in the Phase II trial: it did not produce meaningful, beneficial results in patients, as measured by changes on the Montgomery-Asberg Depression Rating Scale after eight weeks of treatment, as compared with placebo. Overdose The LD50 for the HCl salt in mice: 21 mg/kg (i.v.); 37 mg/kg (i.p.); 96 mg/kg (p.o.). Pharmacology (S)-(+)-Mecamylamine dissociates more slowly from α4β2 and α3β4 receptors than does the (R)-(−)-enantiomer. A large SAR study of mecamylamine and its analogs was reported by a group from Merck in 1962. Another, more recent SAR study was undertaken by Suchocki et al. A comprehensive review of the pharmacology of mecamylamine was published in 2001. History Mecamylamine was brought to market by Merck & Co. in the 1950s; in 1996 Merck sold the asset to Layton Bioscience. In 2002, Targacept acquired it from Layton, intending to repurpose it for CNS conditions. Targacept voluntarily withdrew mecamylamine from the market in 2009 for reasons not related to safety or efficacy. Manchester Pharmaceuticals brought the drug back to market in 2013. Retrophin acquired Manchester in 2014 and, after Martin Shkreli was forced out of Retrophin, his new company, Turing Pharmaceuticals, acquired the rights to mecamylamine from Retrophin in 2014. See also Bupropion Scopolamine References Amines Nicotinic antagonists Withdrawn drugs Bicyclic compounds
Mecamylamine
[ "Chemistry" ]
787
[ "Drug safety", "Functional groups", "Amines", "Bases (chemistry)", "Withdrawn drugs" ]
21,693,114
https://en.wikipedia.org/wiki/Lydersen%20method
The Lydersen method is a group contribution method for the estimation of the critical properties temperature (Tc), pressure (Pc) and volume (Vc). The method is named after Aksel Lydersen, who published it in 1955. The Lydersen method is the prototype for and ancestor of many newer models such as Joback, Klincewicz, Ambrose, Gani-Constantinou and others. In the case of the critical temperature, the Lydersen method is based on the Guldberg rule, which establishes a relation between the normal boiling point and the critical temperature. Equations Critical temperature Guldberg found that a rough estimate of the normal boiling point Tb, when expressed in kelvins (i.e., as an absolute temperature), is approximately two-thirds of the critical temperature Tc. Lydersen uses this basic idea but calculates more accurate values: Tc = Tb / (0.567 + ΣGi − (ΣGi)²). Critical pressure Pc = M / (0.34 + ΣGi)². Critical volume Vc = 40 + ΣGi. M is the molar mass and the Gi are the group contributions (different for all three properties) for the functional groups of a molecule. Group contributions Example calculation Acetone is fragmented into two different kinds of groups, one carbonyl group and two methyl groups. For the critical volume the following calculation results: Vc = 40 + 60.0 + 2 * 55.0 = 210 cm3 In the literature (such as in the Dortmund Data Bank) the values 215.90 cm3, 230.5 cm3 and 209.0 cm3 are published. References Physical chemistry Thermodynamic models
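The critical-volume part of the acetone example can be reproduced in a few lines of code. The sketch below hard-codes only the two group contributions that appear in that example (carbonyl 60.0 and methyl 55.0 cm3/mol); a full implementation would carry the complete Lydersen group-contribution table, which is not reproduced here.

```python
# Group contributions for the critical volume, cm^3/mol (values from the
# acetone example above; the full Lydersen table would contain many more).
VC_GROUPS = {
    ">C=O (carbonyl)": 60.0,
    "-CH3 (methyl)": 55.0,
}

def critical_volume(fragments: dict) -> float:
    """Vc = 40 + sum of group contributions, in cm^3/mol."""
    return 40.0 + sum(VC_GROUPS[name] * count for name, count in fragments.items())

# Acetone fragments into one carbonyl group and two methyl groups.
print(critical_volume({">C=O (carbonyl)": 1, "-CH3 (methyl)": 2}))  # 210.0
```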
Lydersen method
[ "Physics", "Chemistry" ]
303
[ "Applied and interdisciplinary physics", "Thermodynamic models", "Thermodynamics", "nan", "Physical chemistry" ]
21,697,541
https://en.wikipedia.org/wiki/Lactosylceramide
The lactosylceramides, also known as LacCer, are a class of glycosphingolipids composed of a variable hydrophobic ceramide lipid and a hydrophilic sugar moiety. Lactosylceramides are found in microdomains on the plasma membranes of numerous cells. Moreover, they are a type of ceramide containing lactose, which is an example of a globoside. Composition As with many lipids, the chemical formula and molecular weight vary depending on the fatty acid present. As one example, the chemical formula of lactosylceramide (d18:1/12:0) is C42H79NO13, which has a molar mass of 806.088 g/mol, and the IUPAC name of this species is N-(dodecanoyl)-1-beta-lactosyl-sphing-4-enine. Function Lactosylceramide was initially called 'cytolipin H'. It is found in only small amounts in most tissues, but it has various biological functions and is important as the biosynthetic precursor of most of the neutral oligoglycosylceramides, sulfatides and gangliosides. In tissues, biosynthesis of lactosylceramide involves the addition of a second monosaccharide unit (galactose), supplied as its nucleotide derivative, to monoglucosylceramide, catalyzed by a specific beta-1,4-galactosyltransferase on the lumenal side of the Golgi apparatus. The precursor glucosylceramide is carried by the sphingolipid transport protein FAPP2 to the distal Golgi apparatus, after first crossing from the cytosolic side of the membrane by means of flippase activity. Biosynthesis of lactosylceramide then involves the addition of the second monosaccharide unit, as its activated nucleotide derivative (UDP-galactose), to monoglucosylceramide on the lumenal side of the Golgi apparatus, in a reaction catalyzed by β-1,4-galactosyltransferases, of which two are known. The lactosylceramide produced can be further glycosylated, or it can be transferred to the plasma membrane, mainly by a non-vesicular mechanism that is poorly understood; however, it cannot be translocated back to the cytosolic leaflet. It is also regenerated by the catabolism of many of the lipids for which it is the biosynthetic precursor. Deletion of the lactosylceramide synthase by gene targeting is embryonically lethal. Associated disorders Gaucher's disease is a sphingolipidosis characterized by a specific deficiency in acid glucocerebrosidase, which results in abnormal accumulation of glucosylceramide, mainly within the lysosome. Gaucher's disease has been associated with cases of leukemia, myeloma, glioblastoma, lung cancer, and hepatocellular carcinoma, although the reasons for the association are currently debated. Some suggest that the effects of Gaucher's disease itself may be linked to cancer, while others implicate the treatments used for Gaucher's disease. This debate is not entirely surprising, as the theories connecting Gaucher's disease with cancer fail to address the roles of ceramide and glucosylceramide in cancer biology. Gaucher disease is caused by mutations in GBA1, which encodes the lysosomal enzyme glucocerebrosidase (GCase). GBA1 mutations drive extensive accumulation of glucosylceramide (GC) in various innate and adaptive immune cells in the spleen, liver, lung and bone marrow, frequently leading to chronic inflammation. The mechanisms that link excess GC to tissue inflammation remain unknown.
See also Lactosylceramide 1,3-N-acetyl-beta-D-glucosaminyltransferase Lactosylceramide alpha-2,3-sialyltransferase GAL3ST1 ST3GAL5 References Glycolipids
Lactosylceramide
[ "Chemistry" ]
936
[ "Glycobiology", "Carbohydrates", "Glycolipids" ]
26,077,153
https://en.wikipedia.org/wiki/Talagrand%27s%20concentration%20inequality
In the probability theory field of mathematics, Talagrand's concentration inequality is an isoperimetric-type inequality for product probability spaces. It was first proved by the French mathematician Michel Talagrand. The inequality is one of the manifestations of the concentration of measure phenomenon. Roughly, the probability of being in some subset of a product space (e.g. in one of some collection of states described by a vector), multiplied by the probability of being outside a neighbourhood of that subset at least a distance t away, is bounded from above by the exponential factor e^(−t²/4). It becomes rapidly more unlikely to be outside of a larger neighbourhood of a region in a product space, implying a highly concentrated probability density for states described by independent variables, generically. The inequality can be used to streamline optimisation protocols by sampling a limited subset of the full distribution and being able to bound the probability of finding a value far from the average of the samples. Statement The inequality states that if Ω = Ω_1 × Ω_2 × ⋯ × Ω_n is a product space endowed with a product probability measure μ and A is a subset in this space, then for any t > 0, μ(A) · μ(Ā_t) ≤ e^(−t²/4), where Ā_t is the complement of A_t, where this is defined by A_t = {x ∈ Ω : ρ(A, x) ≤ t}, and where ρ is Talagrand's convex distance defined as ρ(A, x) = sup_{‖α‖₂ ≤ 1} inf_{y ∈ A} Σ_{i : x_i ≠ y_i} α_i, where α, x, y are n-dimensional vectors with entries α_i, x_i, y_i respectively and ‖·‖₂ is the ℓ²-norm. That is, ‖α‖₂ = (α_1² + α_2² + ⋯ + α_n²)^(1/2). References Probabilistic inequalities Measure theory
Talagrand's concentration inequality
[ "Mathematics" ]
277
[ "Theorems in probability theory", "Probabilistic inequalities", "Inequalities (mathematics)" ]
26,078,212
https://en.wikipedia.org/wiki/PEPS%20effect
The PEPS effect (Photoelectrochemical Photocurrent Switching) is a phenomenon seen in semiconducting electrodes. It is defined as the switching of photocurrent polarity upon changes in photoelectrode potential and/or incident light wavelength. Konrad Szaciłowski and Wojciech Macyk were the first to describe it, in a publication in 2006. The discovered phenomenon opens up a wide variety of applications in the construction of switches, logic gates and sensors based on chemical systems. References Bibliography S. Gawęda, A. Podborska, W. Macyk and K. Szaciłowski, Nanoscale optoelectronic switches and logic devices, Nanoscale, 2009, 1, 299 Solar cells Semiconductors
PEPS effect
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
160
[ "Electrical resistance and conductance", "Physical quantities", "Semiconductors", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Matter" ]
33,297,462
https://en.wikipedia.org/wiki/Keller%27s%20conjecture
In geometry, Keller's conjecture is the conjecture that in any tiling of n-dimensional Euclidean space by identical hypercubes, there are two hypercubes that share an entire (n − 1)-dimensional face with each other. For instance, in any tiling of the plane by identical squares, some two squares must share an entire edge, as they do in the illustration. This conjecture was introduced by Ott-Heinrich Keller, after whom it is named. A breakthrough by Lagarias and Shor showed that it is false in ten or more dimensions, and after subsequent refinements, it is now known to be true in spaces of dimension at most seven and false in all higher dimensions. The proofs of these results use a reformulation of the problem in terms of the clique number of certain graphs now known as Keller graphs. The related Minkowski lattice cube-tiling conjecture states that whenever a tiling of space by identical cubes has the additional property that the cubes' centers form a lattice, some cubes must meet face-to-face. It was proved by György Hajós in 1942. Several surveys of work on Keller's conjecture and related problems have been published. Statement A tessellation or tiling of a Euclidean space is, intuitively, a family of subsets that cover the whole space without overlapping. More formally, a family of closed sets, called tiles, forms a tiling if their union is the whole space and every two distinct sets in the family have disjoint interiors. A tiling is said to be monohedral if all of the tiles have the same shape (they are congruent to each other). Keller's conjecture concerns monohedral tilings in which all of the tiles are hypercubes of the same dimension as the space. In one formulation of the problem, a cube tiling is a tiling by congruent hypercubes in which the tiles are additionally required to all be translations of each other without any rotation, or equivalently, to have all of their sides parallel to the coordinate axes of the space. Not every tiling by congruent cubes has this property; for instance, three-dimensional space may be tiled by two-dimensional sheets of cubes that are twisted at arbitrary angles with respect to each other. Another formulation of the same problem instead considers all tilings of space by congruent hypercubes and states, without proof, that the assumption that cubes are axis-parallel can be added without loss of generality. An n-dimensional hypercube has 2n faces of dimension n − 1 that are, themselves, hypercubes; for instance, a square has four edges, and a three-dimensional cube has six square faces. Two tiles in a cube tiling (defined in either of the above ways) meet face-to-face if there is an (n − 1)-dimensional hypercube that is a face of both of them. Keller's conjecture is the statement that every cube tiling has at least one pair of tiles that meet face-to-face in this way. The original version of the conjecture stated by Keller was a stronger statement: every cube tiling has a column of cubes all meeting face-to-face. This version of the problem is true or false for the same dimensions as its more commonly studied formulation. It is a necessary part of the conjecture that the cubes in the tiling all be congruent to each other, for if cubes of unequal sizes are allowed, then the Pythagorean tiling would form a counterexample in two dimensions. The conjecture as stated does not require all of the cubes in a tiling to meet face-to-face with other cubes.
Although tilings by congruent squares in the plane have the stronger property that every square meets edge-to-edge with another square, some of the tiles in higher-dimensional hypercube tilings may not meet face-to-face with any other tile. For instance, in three dimensions, the tetrastix structure formed by three perpendicular sets of square prisms can be used to construct a cube tiling, combinatorially equivalent to the Weaire–Phelan structure, in which one fourth of the cubes (the ones not part of any prism) are surrounded by twelve other cubes without meeting any of them face-to-face. Group-theoretic reformulation Keller's conjecture was shown to be true in dimensions at most six by Perron. The disproof of Keller's conjecture, for sufficiently high dimensions, has progressed through a sequence of reductions that transform it from a problem in the geometry of tilings into a problem in group theory and, from there, into a problem in graph theory. Hajós first reformulated Keller's conjecture in terms of factorizations of abelian groups. He shows that if there is a counterexample to the conjecture, then it can be assumed to be a periodic tiling of cubes with an integer side length and integer vertex positions; thus, in studying the conjecture, it is sufficient to consider tilings of this special form. In this case, the group of integer translations, modulo the translations that preserve the tiling, forms an abelian group, and certain elements of this group correspond to the positions of the tiles. Hajós defines a family of subsets A_i of an abelian group to be a factorization if each element of the group has a unique expression as a sum a_1 + a_2 + ⋯, where each a_i belongs to A_i. With this definition, Hajós' reformulated conjecture is that whenever an Abelian group has a factorization in which the first set A_0 may be arbitrary but each subsequent set A_i takes the special form {0, g_i, 2g_i, ..., (q_i − 1)g_i} for some element g_i of the group, then at least one of the elements q_i g_i must belong to A_0 − A_0 (the difference set of A_0 with itself). Szabó showed that any tiling that forms a counterexample to the conjecture can be assumed to have an even more special form: the cubes have side length a power of two and integer vertex coordinates, and the tiling is periodic with period twice the side length of the cubes in each coordinate direction. Based on this geometric simplification, he also simplified Hajós' group-theoretic formulation, showing that it is sufficient to consider abelian groups that are the direct sums of cyclic groups of order four, with each q_i equal to two. Keller graphs Corrádi and Szabó reformulated Szabó's result as a condition about the existence of a large clique in a certain family of graphs, which subsequently became known as the Keller graphs. More precisely, the vertices of the Keller graph of dimension n are the 4^n elements (m_1, ..., m_n) in which each m_i is 0, 1, 2, or 3. Two vertices are joined by an edge if they differ in at least two coordinates and differ by exactly two in at least one coordinate. Corrádi and Szabó showed that the maximum clique in this graph has size at most 2^n, and if there is a clique of this size, then Keller's conjecture is false. Given such a clique, one can form a covering of space by cubes of side two whose centers have coordinates that, when taken modulo four, are vertices of the clique. The condition that any two vertices of the clique have a coordinate that differs by two implies that cubes corresponding to these vertices do not overlap. The condition that vertices differ in two coordinates implies that these cubes cannot meet face-to-face.
The condition that the clique has size 2^n implies that the cubes within any period of the tiling have the same total volume as the period itself. Together with the fact that they do not overlap, this implies that the cubes placed in this way tile space without meeting face-to-face. Lagarias and Shor disproved Keller's conjecture by finding a clique of size 2^10 in the Keller graph of dimension 10. This clique leads to a non-face-to-face tiling in dimension 10, and copies of it can be stacked (offset by half a unit in each coordinate direction) to produce non-face-to-face tilings in any higher dimension. Similarly, Mackey found a clique of size 2^8 in the Keller graph of dimension eight, leading in the same way to a non-face-to-face tiling in dimension 8 and (by stacking) in dimension 9. Subsequently, a computational search showed that the Keller graph of dimension seven has a maximum clique of size 124. Because this is less than 2^7 = 128, the graph-theoretic version of Keller's conjecture is true in seven dimensions. However, the translation from cube tilings to graph theory can change the dimension of the problem, so this result does not settle the geometric version of the conjecture in seven dimensions. Finally, a 200-gigabyte computer-assisted proof in 2019 used Keller graphs to establish that the conjecture holds true in seven dimensions. Therefore, the question Keller posed can be considered solved: the conjecture is true in seven dimensions or fewer but is false when there are more than seven dimensions. The sizes of the maximum cliques in the Keller graphs of dimensions 2, 3, 4, 5, and 6 are, respectively, 2, 5, 12, 28, and 60. The Keller graphs of dimensions 4, 5, and 6 have been included in the set of "DIMACS challenge graphs" frequently used as a benchmark for clique-finding algorithms. Related problems Hermann Minkowski was led to a special case of the cube-tiling conjecture by a problem in Diophantine approximation. One consequence of Minkowski's theorem is that any lattice (normalized to have determinant one) must contain a nonzero point whose Chebyshev distance to the origin is at most one. The lattices that do not contain a nonzero point with Chebyshev distance strictly less than one are called critical, and the points of a critical lattice form the centers of the cubes in a cube tiling. Minkowski conjectured in 1900 that whenever a cube tiling has its cubes centered at lattice points in this way, it must contain two cubes that meet face-to-face. If this is true, then (because of the symmetries of the lattice) each cube in the tiling must be part of a column of cubes, and the cross-sections of these columns form a cube tiling of one smaller dimension. Reasoning in this way, Minkowski showed that (assuming the truth of his conjecture) every critical lattice has a basis that can be expressed as a triangular matrix, with ones on its main diagonal and numbers less than one away from the diagonal. György Hajós proved Minkowski's conjecture in 1942 using Hajós's theorem on factorizations of abelian groups, a similar group-theoretic method to the one that he would later apply to Keller's more general conjecture. Keller's conjecture is a variant of Minkowski's conjecture in which the condition that the cube centers form a lattice is relaxed. A second related conjecture, made by Furtwängler in 1936, instead relaxes the condition that the cubes form a tiling.
Furtwängler asked whether a system of cubes centered on lattice points forming a k-fold covering of space (that is, all but a measure-zero subset of the points in the space must be interior to exactly k cubes) must necessarily have two cubes meeting face-to-face. Furtwängler's conjecture is true for two- and three-dimensional space, but Hajós found a four-dimensional counterexample in 1938. Robinson characterized the combinations of k and the dimension n that permit a counterexample. Additionally, combining both Furtwängler's and Keller's conjectures, Robinson showed that k-fold square coverings of the Euclidean plane must include two squares that meet edge-to-edge. However, for every k > 1 and every n ≥ 3, there is a k-fold tiling of n-dimensional space by cubes with no shared faces. Once counterexamples to Keller's conjecture became known, it became of interest to ask for the maximum dimension of a shared face that can be guaranteed to exist in a cube tiling. When the dimension n is at most seven, this maximum dimension is just n − 1, by the proofs of Keller's conjecture for those small dimensions, and when n is at least eight, then this maximum dimension is at most n − 2. An even smaller upper bound, which is stronger than n − 2 in ten or more dimensions, has also been shown. Close connections have also been found between cube tilings and the spectral theory of square-integrable functions on the cube. Cliques in the Keller graphs that are maximal but not maximum have been used to study packings of cubes into space that cannot be extended by adding any additional cubes. In 1975, Ludwig Danzer and independently Branko Grünbaum and G. C. Shephard found a tiling of three-dimensional space by parallelepipeds with 60° and 120° face angles in which no two parallelepipeds share a face. References Cubes Tessellation Parametric families of graphs Disproved conjectures Computer-assisted proofs
Keller's conjecture
[ "Physics", "Mathematics" ]
2,718
[ "Tessellation", "Computer-assisted proofs", "Euclidean plane geometry", "Planes (geometry)", "Symmetry" ]
33,302,030
https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2033
In molecular biology, glycoside hydrolase family 33 is a family of glycoside hydrolases. Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes. This family contains sialidases (CAZY GH_33), which hydrolyse alpha-(2->3)-, alpha-(2->6)- and alpha-(2->8)-glycosidic linkages of terminal sialic acid residues in oligosaccharides, glycoproteins, glycolipids, colominic acid and synthetic substrates. Sialidases may act as pathogenic factors in microbial infections. The 1.8 Å structure of trans-sialidase from the leech Macrobdella decora in complex with 2-deoxy-2,3-didehydro-NeuAc has been solved. The refined model, comprising residues 81-769, has a catalytic beta-propeller domain, an N-terminal lectin-like domain and an irregular beta-stranded domain inserted into the catalytic domain. References EC 3.2.1 Glycoside hydrolase families Protein families
Glycoside hydrolase family 33
[ "Biology" ]
350
[ "Protein families", "Protein classification" ]
33,302,625
https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2037
In molecular biology, glycoside hydrolase family 37 is a family of glycoside hydrolases. Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes. Glycoside hydrolase family 37 (CAZY GH_37) comprises enzymes with only one known activity: trehalase. Trehalase is the enzyme responsible for the degradation of the disaccharide alpha,alpha-trehalose, yielding two glucose subunits. It is found in a wide variety of organisms, and its sequence has been highly conserved throughout evolution. References EC 3.2.1 Glycoside hydrolase families Protein families
Glycoside hydrolase family 37
[ "Biology" ]
235
[ "Protein families", "Protein classification" ]
33,302,785
https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%2047
In molecular biology, glycoside hydrolase family 47 is a family of glycoside hydrolases. Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and also discussed at CAZypedia, an online encyclopedia of carbohydrate active enzymes. Glycoside hydrolase family 47 (CAZY GH_47) comprises enzymes with only one known activity: alpha-mannosidase. Alpha-mannosidase is involved in the maturation of Asn-linked oligosaccharides. The enzyme hydrolyses terminal 1,2-linked alpha-D-mannose residues in the oligo-mannose oligosaccharide man(9)(glcnac)(2) in a calcium-dependent manner. The mannose residues are trimmed away to produce, first, man(8)(glcnac)(2), then a man(5)(glcnac)(2) structure. References EC 3.2.1 Glycoside hydrolase families Protein families
Glycoside hydrolase family 47
[ "Biology" ]
300
[ "Protein families", "Protein classification" ]
33,304,869
https://en.wikipedia.org/wiki/Bridges%20to%20Prosperity
Bridges to Prosperity (B2P) is a United States–based nonprofit organization that partners with local governments to connect communities via pedestrian trailbridges, in addition to providing technical assistance and resource mobilization. Bridges to Prosperity is based in Denver, Colorado, with an operational headquarters in Rwanda and staff around the world. Trailbridges are cost-effective, durable, and safe, and are easy for rural communities to build with only modest support, while delivering substantial impact. A randomized controlled study completed at the University of Notre Dame concluded that bridge connectivity increases farm profits by 75%, labor market income by 36%, and overall household income by 30%. Since its foundation, over 450 bridges have been built, connecting over 1.5 million people across 21 countries. Bridges to Prosperity's current efforts are centered in East Africa due to a compelling mix of need (with millions living in rural isolation due to impassable rivers), existing interest from national governments to invest, the region's track record of safety and stability of leadership, and Bridges to Prosperity's long-standing relationships in the region. In 2019, Bridges to Prosperity partnered with the government of Rwanda in the organization's first scaled program to build over 200 trail bridges between 2019 and 2024, serving over 660,000 people. A similar program was started in Uganda in 2018 to test a country-wide coalition approach to bridge building. Finally, in 2021, The Leona M. and Harry B. Helmsley Charitable Trust provided a $10.7 million 3-year partnership between Helvetas, Bridges to Prosperity, and the Government of Ethiopia to construct 150 bridges between 2022 and 2025, serving over 1.3 million people in that time frame. Mission Bridges to Prosperity uses a community-driven approach and intelligent technology to deliver a multi-dimensional impact to thousands of communities worldwide. The trailbridges are built to unlock opportunity, expand the reach of other development interventions, and ultimately eliminate poverty caused by rural isolation. History Bridges to Prosperity was established by Kenneth Frantz in 2001, after he saw a photo in National Geographic Magazine of a broken bridge over the Blue Nile River in Ethiopia, with ten men on either side of the broken span pulling themselves across the chasm by rope. Transport is a crucial driver of development, bringing socio-economic opportunities within the reach of the poor and enabling economies to be more competitive. Transport infrastructure connects people to jobs, education, and health services; it enables the supply of goods and services around the world; and it allows people to interact and generate the knowledge that creates long-term growth. Rural roads, for example, can help prevent maternal deaths through timely access to childbirth-related care, boost girls' enrollment in school, and increase and diversify farmers' income by connecting them to markets. The World Bank estimates that over 1 billion people do not have access to transportation networks. Bridges to Prosperity determined that positive results could be attained by spreading the technology: building approximately 10–20 demonstration bridges per country, training locals, partnering with local technological institutes, providing downloadable and easy-to-use step-by-step photo and video manuals, and supplying free wire rope and wire rope clamps/clips for post-training/demonstration programs.
Bridges to Prosperity has focused on education and training, the propagation of technical manuals, and trail bridge building textbooks as components of trail bridge building best practices. All Bridges to Prosperity trailbridges leverage locally sourced and repurposed materials. Materials are not donated where that would cause unintended harm to existing businesses. In 2005, Bridges to Prosperity received a long-term donation of free 7/8 inch to 1.25 inch wire rope from the ports of Portsmouth and Norfolk, Virginia. The wire rope donated was American-manufactured high-tensile steel wire rope used on gantry cranes for unloading container ships. Later, the port of Baltimore was added, as were Texas and West Coast ports. In 2012, approximately 100,000 feet of such donated used wire rope and strand was shipped in intermodal containers to programs all over the world. Building one trailbridge requires approximately 1,800 feet of wire rope on average. Worldwide, there is enough recycled wire rope from gantry cranes to build approximately 2,500 footbridges every year. Each container shipped overseas weighs approximately 52,000 pounds and contains 20,000 feet of cable. Charity Navigator has given Bridges to Prosperity, Inc. its top rating of four stars. Strategy Safe access is transformative for households, communities, and countries. Bridges to Prosperity partners with local governments and community leaders to develop, enable, and advocate for national infrastructure programs that acknowledge the needs of rural populations. To help effect this system change approach, Bridges to Prosperity is furthering the following three initiatives as outlined in the organization's strategic plan: 1) gathering the evidence supporting the efficacy and efficiency of safe access as a fulcrum for rural development; 2) creating collective action to elevate rural transport on the development agenda; and 3) supporting governments with technical assistance, best practices, and the capacity to make smart infrastructure investments. Awards Bridges to Prosperity has been recognized with a number of global awards. In 2016, Bridges to Prosperity received the Eurostar Ashden Award for Sustainability and was named one of the top 10 social enterprises in the world by Classy. Corporate sponsorship Building on the affinity with construction firms, especially those that design and build highway bridges, a corporate sponsor program was started to allow employees to form teams to design and build trailbridges. Today, Bridges to Prosperity provides co-branded bridge-building opportunities for companies around the world, ranging from financial services firms to construction industry giants. The original industry partners included Ross Construction of Palo Alto, California, and Flatiron Construction, along with Flatiron's parent company, Hochtief of Germany. By 2019, more than 50 industry partners included Parsons Corporation, COWI, Alridge, Berger Charitable Foundation, Balfour Beatty, Europengineers, Institution of Civil Engineers, Kiewit, Michael Baker, NSBA, Railroad Construction Co, Thornton Tomasetti, WSP, American Bridge, Arup, Bechtel, Burohappold, FHECOR, HDR, Freyssinet, IBT, Knights Brown, KPFF, McNary Bergeron, Mott MacDonald, PCL, Price & Myers, Ramboll Fonden, Tony Gee, Traylor Bros, Walsh, and Weston & Sampson.
Rotary Important supporters include various Rotary International clubs, which collaborate in providing Rotary Foundation-matched humanitarian grants. Bridges to Prosperity was founded by a member of the Gloucester Point Rotary Club in Gloucester, Virginia; that club assisted in the purchase of materials for the repair of the Blue Nile Bridge. Over 65 Rotary clubs worldwide (and over 800 individual Rotarians) have participated directly in Bridges to Prosperity programs. Partnerships with local Rotary clubs in developing countries, such as the Rotary Club of Nkwazi, Lusaka, Zambia, facilitate access and operation in countries with little bureaucratic interference. The partnerships with local Rotary clubs allow quick customs clearance of wire rope imports and expedited business contacts, allow USA-based Rotarians to travel and participate in schemes easily, and add a defense against potential corruption. University programs Support from former CEO Avery Bang's alma mater, the University of Iowa engineering school, and the non-profit Continental Crossings has led to the construction of three additional bridges. Other participating university engineering programs include Arizona State University, NDSEED, and Virginia Tech. Effective September 1, 2018, the university program was spun off into its own entity with Engineers in Action. Financial information In the fiscal year ending in 2022, the organization had revenues of $15,325,708 contributed as follows: Individual and grant contribution: 76% Corporate partnership fees: 9% Government contribution: 9% In-kind contribution & others: 6% References External links Bridges to Prosperity How It All Started > Video Bridge Design & Engineering Issue 46 - Charity Link Brings Footbridge To Fruition The Rotarian - Bridging Worlds National Public Radio audio file Bridges Footbridges Pedestrian infrastructure Non-profit organizations based in Colorado 2001 establishments in the United States
Bridges to Prosperity
[ "Engineering" ]
1,656
[ "Structural engineering", "Bridges" ]
33,305,931
https://en.wikipedia.org/wiki/Macrophage%20migration%20inhibitory%20factor%20domain
Macrophage migration inhibitory factor domain is an evolutionarily conserved protein domain. Macrophage migration inhibitory factor (MIF) is a key regulatory cytokine within innate and adaptive immune responses, capable of promoting and modulating the magnitude of the response. MIF is released from T-cells and macrophages, and acts within the neuroendocrine system. MIF is capable of tautomerase activity, although its biological function has not been fully characterised. It is induced by glucocorticoids and is capable of overriding their anti-inflammatory actions. MIF regulates cytokine secretion and the expression of receptors involved in the immune response. It can be taken up into target cells in order to interact with intracellular signalling molecules, inhibiting p53 function, and/or activating components of the mitogen-activated protein kinase and Jun-activation domain-binding protein-1 (Jab-1). MIF has been linked to various inflammatory diseases, such as rheumatoid arthritis and atherosclerosis. The MIF homologue D-dopachrome tautomerase (EC 4.1.1.84) is involved in detoxification through the conversion of dopaminechrome (and possibly norepinephrinechrome), the toxic quinone product of the neurotransmitter dopamine (and norepinephrine), to an indole derivative that can serve as a precursor to neuromelanin. Examples Human genes encoding proteins that contain this domain include DDT and MIF. References Protein domains
Macrophage migration inhibitory factor domain
[ "Biology" ]
337
[ "Protein domains", "Protein classification" ]
33,307,786
https://en.wikipedia.org/wiki/Supercapacitor
A supercapacitor (SC), also called an ultracapacitor, is a high-capacity capacitor, with a capacitance value much higher than that of solid-state capacitors but with lower voltage limits. It bridges the gap between electrolytic capacitors and rechargeable batteries. It typically stores 10 to 100 times more energy per unit volume or mass than electrolytic capacitors, can accept and deliver charge much faster than batteries, and tolerates many more charge and discharge cycles than rechargeable batteries. Unlike ordinary capacitors, supercapacitors do not use the conventional solid dielectric, but rather, they use electrostatic double-layer capacitance and electrochemical pseudocapacitance, both of which contribute to the total energy storage of the capacitor. Supercapacitors are used in applications requiring many rapid charge/discharge cycles, rather than long-term compact energy storage: in automobiles, buses, trains, cranes and elevators, where they are used for regenerative braking, short-term energy storage, or burst-mode power delivery. Smaller units are used as power backup for static random-access memory (SRAM). Background The electrochemical charge storage mechanisms in solid media can be roughly classified into three types (there is some overlap in certain systems): Electrostatic double-layer capacitors (EDLCs) use carbon electrodes or derivatives with much higher electrostatic double-layer capacitance than electrochemical pseudocapacitance, achieving separation of charge in a Helmholtz double layer at the interface between the surface of a conductive electrode and an electrolyte. The separation of charge is of the order of a few ångströms (0.3–0.8 nm), much smaller than in a conventional capacitor. The electric charge in EDLCs is stored in a two-dimensional interphase (surface) of an electronic conductor (e.g. carbon particle) and ionic conductor (electrolyte solution). Batteries with solid electroactive materials store charge in bulk solid phases by virtue of redox chemical reactions. Electrochemical supercapacitors (ECSCs) fall in between EDLCs and batteries. ECSCs use metal oxide or conducting polymer electrodes with a high amount of electrochemical pseudocapacitance in addition to the double-layer capacitance. Pseudocapacitance is achieved by faradaic electron charge-transfer with redox reactions, intercalation or electrosorption. In solid-state capacitors, the mobile charges are electrons, and the gap between electrodes is a layer of a dielectric. In electrochemical double-layer capacitors, the mobile charges are solvated ions (cations and anions), and the effective thickness is determined on each of the two electrodes by their electrochemical double layer structure. In batteries the charge is stored in the bulk volume of solid phases, which have both electronic and ionic conductivities. In electrochemical supercapacitors, the charge storage mechanisms either combine the double-layer and battery mechanisms, or are based on mechanisms that are intermediate between a true double layer and a true battery. History In the early 1950s, General Electric engineers began experimenting with porous carbon electrodes in the design of capacitors, drawing on designs for fuel cells and rechargeable batteries. Activated charcoal is an electrical conductor that is an extremely porous "spongy" form of carbon with a high specific surface area. In 1957 H. Becker developed a "Low voltage electrolytic capacitor with porous carbon electrodes".
He believed that the energy was stored as a charge in the carbon pores as in the pores of the etched foils of electrolytic capacitors. Because the double layer mechanism was not known by him at the time, he wrote in the patent: "It is not known exactly what is taking place in the component if it is used for energy storage, but it leads to an extremely high capacity." General Electric did not immediately pursue this work. In 1966 researchers at Standard Oil of Ohio (SOHIO) developed another version of the component as "electrical energy storage apparatus", while working on experimental fuel cell designs. The nature of electrochemical energy storage was not described in this patent. Even in 1970, the electrochemical capacitor patented by Donald L. Boos was registered as an electrolytic capacitor with activated carbon electrodes. Early electrochemical capacitors used two aluminum foils covered with activated carbon (the electrodes) that were soaked in an electrolyte and separated by a thin porous insulator. This design gave a capacitor with a capacitance on the order of one farad, significantly higher than that of electrolytic capacitors of the same dimensions. This basic mechanical design remains the basis of most electrochemical capacitors. SOHIO did not commercialize the invention, licensing the technology to NEC, which finally marketed the results as "supercapacitors" in 1978, to provide backup power for computer memory. Between 1975 and 1980 Brian Evans Conway conducted extensive fundamental and development work on ruthenium oxide electrochemical capacitors. In 1991 he described the difference between "supercapacitor" and "battery" behaviour in electrochemical energy storage. In 1999 he defined the term "supercapacitor" to make reference to the increase in observed capacitance by surface redox reactions with faradaic charge transfer between electrodes and ions. His "supercapacitor" stored electrical charge partially in the Helmholtz double-layer and partially as a result of faradaic reactions with "pseudocapacitance" charge transfer of electrons and protons between electrode and electrolyte. The working mechanisms of pseudocapacitors are redox reactions, intercalation and electrosorption (adsorption onto a surface). With his research, Conway greatly expanded the knowledge of electrochemical capacitors. The market expanded slowly. That changed around 1978 as Panasonic marketed its Goldcaps brand. This product became a successful energy source for memory backup applications. Competition started only years later. In 1987, ELNA's "Dynacap" entered the market. First-generation EDLCs had a relatively high internal resistance that limited the discharge current. They were used for low current applications such as powering SRAM chips or for data backup. At the end of the 1980s, improved electrode materials increased capacitance values. At the same time, the development of electrolytes with better conductivity lowered the equivalent series resistance (ESR), increasing charge/discharge currents. The first supercapacitor with low internal resistance was developed in 1982 for military applications through the Pinnacle Research Institute (PRI), and was marketed under the brand name "PRI Ultracapacitor". In 1992, Maxwell Laboratories (later Maxwell Technologies) took over this development. Maxwell adopted the term Ultracapacitor from PRI and called them "Boost Caps" to underline their use for power applications.
Since capacitors' energy content increases with the square of the voltage, researchers were looking for a way to increase the electrolyte's breakdown voltage. In 1994, using the anode of a 200 V high-voltage tantalum electrolytic capacitor, David A. Evans developed an "Electrolytic-Hybrid Electrochemical Capacitor". These capacitors combine features of electrolytic and electrochemical capacitors. They combine the high dielectric strength of an anode from an electrolytic capacitor with the high capacitance of a pseudocapacitive metal oxide (ruthenium (IV) oxide) cathode from an electrochemical capacitor, yielding a hybrid electrochemical capacitor. Evans' capacitors, which he named "Capattery", had an energy content about a factor of 5 higher than a comparable tantalum electrolytic capacitor of the same size. Their high costs limited them to specific military applications. More recent developments include lithium-ion capacitors. These hybrid capacitors were pioneered by Fujitsu's FDK in 2007. They combine an electrostatic carbon electrode with a pre-doped lithium-ion electrochemical electrode. This combination increases the capacitance value. Additionally, the pre-doping process lowers the anode potential and results in a high cell output voltage, further increasing specific energy. Research departments active in many companies and universities are working to improve characteristics such as specific energy, specific power, and cycle stability and to reduce production costs. Design Basic design Electrochemical capacitors (supercapacitors) consist of two electrodes separated by an ion-permeable membrane (separator), and an electrolyte ionically connecting both electrodes. When the electrodes are polarized by an applied voltage, ions in the electrolyte form electric double layers of opposite polarity to the electrode's polarity. For example, positively polarized electrodes will have a layer of negative ions at the electrode/electrolyte interface along with a charge-balancing layer of positive ions adsorbing onto the negative layer. The opposite is true for the negatively polarized electrode. Additionally, depending on electrode material and surface shape, some ions may permeate the double layer becoming specifically adsorbed ions and contribute with pseudocapacitance to the total capacitance of the supercapacitor. Capacitance distribution The two electrodes form a series circuit of two individual capacitors C1 and C2. The total capacitance Ctotal is given by the formula 1/Ctotal = 1/C1 + 1/C2, that is, Ctotal = (C1 · C2) / (C1 + C2). Supercapacitors may have either symmetric or asymmetric electrodes. Symmetry implies that both electrodes have the same capacitance value, yielding a total capacitance of half the value of each single electrode (if C1 = C2, then Ctotal = ½ C1). For asymmetric capacitors, the total capacitance can be taken as that of the electrode with the smaller capacitance (if C1 >> C2, then Ctotal ≈ C2). Storage principles Electrochemical capacitors use the double-layer effect to store electric energy; however, this double-layer has no conventional solid dielectric to separate the charges. There are two storage principles in the electric double-layer of the electrodes that contribute to the total capacitance of an electrochemical capacitor: Double-layer capacitance, electrostatic storage of the electrical energy achieved by separation of charge in a Helmholtz double layer. Pseudocapacitance, electrochemical storage of the electrical energy, achieved by faradaic redox reactions with charge-transfer.
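As a rough illustration of the series relationship and of the quadratic dependence of stored energy on voltage described above, the following Python sketch computes the cell capacitance of two electrode capacitances in series and the corresponding stored energy E = ½·C·V². It is illustrative only; the numeric values are arbitrary examples, not data for any particular device.

```python
def series_capacitance(c1_farads, c2_farads):
    """Total capacitance of two electrode capacitances in series:
    1/C_total = 1/C1 + 1/C2."""
    return (c1_farads * c2_farads) / (c1_farads + c2_farads)

def stored_energy(capacitance_farads, voltage_volts):
    """Energy stored in a capacitor, E = 1/2 * C * V**2, in joules."""
    return 0.5 * capacitance_farads * voltage_volts ** 2

# Symmetric cell: two identical 100 F electrodes give half the single-electrode value.
print(series_capacitance(100, 100))       # 50.0 F
# Asymmetric cell: the smaller electrode dominates the total.
print(series_capacitance(1000, 10))       # ~9.9 F, close to the smaller value
# Doubling the cell voltage quadruples the stored energy.
print(stored_energy(50, 2.7))             # 182.25 J
print(stored_energy(50, 5.4))             # 729.0 J
```

This also shows why, in asymmetric and hybrid designs discussed later, raising the cell voltage is such an effective route to higher specific energy.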
Both capacitances are only separable by measurement techniques. The amount of charge stored per unit voltage in an electrochemical capacitor is primarily a function of the electrode size, although the amount of capacitance of each storage principle can vary extremely. Electrical double-layer capacitance Every electrochemical capacitor has two electrodes, mechanically separated by a separator, which are ionically connected to each other via the electrolyte. The electrolyte is a mixture of positive and negative ions dissolved in a solvent such as water. At each of the two electrode surfaces originates an area in which the liquid electrolyte contacts the conductive metallic surface of the electrode. This interface forms a common boundary between two different phases of matter, such as an insoluble solid electrode surface and an adjacent liquid electrolyte. At this interface the double-layer effect occurs. Applying a voltage to an electrochemical capacitor causes both electrodes in the capacitor to generate electrical double-layers. These double-layers consist of two layers of charges: one electronic layer is in the surface lattice structure of the electrode, and the other, with opposite polarity, emerges from dissolved and solvated ions in the electrolyte. The two layers are separated by a monolayer of solvent molecules, e.g., for water as solvent by water molecules, called the inner Helmholtz plane (IHP). Solvent molecules adhere by physical adsorption on the surface of the electrode and separate the oppositely polarized ions from each other, and can be idealised as a molecular dielectric. In the process, there is no transfer of charge between electrode and electrolyte, so the forces that cause the adhesion are not chemical bonds, but physical forces, e.g., electrostatic forces. The adsorbed molecules are polarized, but, due to the lack of transfer of charge between electrolyte and electrode, suffer no chemical changes. The amount of charge in the electrode is matched by the magnitude of counter-charges in the outer Helmholtz plane (OHP). This double-layer phenomenon stores electrical charge as in a conventional capacitor. The double-layer charge forms a static electric field in the molecular layer of the solvent molecules in the IHP that corresponds to the strength of the applied voltage. The double-layer serves approximately as the dielectric layer in a conventional capacitor, albeit with the thickness of a single molecule. Thus, the standard formula for conventional plate capacitors can be used to calculate their capacitance: C = ε·A/d. Accordingly, capacitance C is greatest in capacitors made from materials with a high permittivity ε, large electrode plate surface areas A and small distance between plates d. As a result, double-layer capacitors have much higher capacitance values than conventional capacitors, arising from the extremely large surface area of activated carbon electrodes and the extremely thin double-layer distance on the order of a few ångströms (0.3–0.8 nm), of the order of the Debye length. Assuming that the minimum distance between the electrode and the charge accumulating region cannot be less than the typical distance between negative and positive charges in atoms of ~0.05 nm, a general capacitance upper limit of ~18 μF/cm2 has been predicted for non-faradaic capacitors. The main drawback of carbon electrodes of double-layer SCs is the small value of the quantum capacitance, which acts in series with the capacitance of the ionic space charge.
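To see why this geometry produces such large areal capacitances, one can plug representative numbers into the plate-capacitor formula above. The figures used in the sketch below are illustrative assumptions rather than values taken from the text: a relative permittivity of about 6 is assumed for the strongly oriented solvent monolayer, and a charge-separation distance of 0.5 nm is chosen from within the 0.3–0.8 nm range quoted above. The point is simply that sub-nanometre separations yield areal capacitances of roughly 10 μF/cm², the order of magnitude quoted later for activated carbon.

```python
EPSILON_0 = 8.854e-12  # vacuum permittivity, in F/m

def areal_capacitance_uF_per_cm2(relative_permittivity, separation_m):
    """Plate-capacitor estimate C/A = eps_0 * eps_r / d, converted to uF/cm^2."""
    c_per_m2 = EPSILON_0 * relative_permittivity / separation_m   # F/m^2
    return c_per_m2 / 1e4 * 1e6                                   # F/m^2 -> uF/cm^2

# Assumed illustrative values: eps_r ~ 6 for the oriented solvent monolayer,
# d = 0.5 nm (within the 0.3-0.8 nm range given above).
print(areal_capacitance_uF_per_cm2(6, 0.5e-9))   # ~10.6 uF/cm^2
```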
Therefore, a further increase in the capacitance density of SCs can be connected with increasing the quantum capacitance of carbon electrode nanostructures. The amount of charge stored per unit voltage in an electrochemical capacitor is primarily a function of the electrode size. The electrostatic storage of energy in the double-layers is linear with respect to the stored charge, and corresponds to the concentration of the adsorbed ions. Also, while charge in conventional capacitors is transferred via electrons, capacitance in double-layer capacitors is related to the limited moving speed of ions in the electrolyte and the resistive porous structure of the electrodes. Since no chemical changes take place within the electrode or electrolyte, charging and discharging electric double-layers is in principle unlimited. Real supercapacitors' lifetimes are limited only by electrolyte evaporation effects. Electrochemical pseudocapacitance Applying a voltage at the electrochemical capacitor terminals moves electrolyte ions to the oppositely polarized electrode and forms a double-layer in which a single layer of solvent molecules acts as separator. Pseudocapacitance can originate when specifically adsorbed ions out of the electrolyte pervade the double-layer. This pseudocapacitance stores electrical energy by means of reversible faradaic redox reactions on the surface of suitable electrodes in an electrochemical capacitor with an electric double-layer. Pseudocapacitance is accompanied by an electron charge-transfer between electrolyte and electrode coming from a de-solvated and adsorbed ion, whereby only one electron per charge unit participates. This faradaic charge transfer originates from a very fast sequence of reversible redox, intercalation or electrosorption processes. The adsorbed ion has no chemical reaction with the atoms of the electrode (no chemical bonds arise) since only a charge-transfer takes place. The electrons involved in the faradaic processes are transferred to or from valence electron states (orbitals) of the redox electrode reagent. They enter the negative electrode and flow through the external circuit to the positive electrode where a second double-layer with an equal number of anions has formed. The electrons reaching the positive electrode are not transferred to the anions forming the double-layer; instead, they remain in the strongly ionized and "electron hungry" transition-metal ions of the electrode's surface. As such, the storage capacity of faradaic pseudocapacitance is limited by the finite quantity of reagent in the available surface. A faradaic pseudocapacitance only occurs together with a static double-layer capacitance, and its magnitude may exceed the value of double-layer capacitance for the same surface area by a factor of 100, depending on the nature and the structure of the electrode, because all the pseudocapacitance reactions take place only with de-solvated ions, which are much smaller than solvated ions with their solvation shells. The amount of pseudocapacitance is, within narrow limits, a linear function of the potential-dependent degree of surface coverage of the adsorbed anions. The ability of electrodes to accomplish pseudocapacitance effects by redox reactions, intercalation or electrosorption strongly depends on the chemical affinity of electrode materials to the ions adsorbed on the electrode surface as well as on the structure and dimension of the electrode pores.
Materials exhibiting redox behavior for use as electrodes in pseudocapacitors are transition-metal oxides like RuO2, IrO2, or MnO2 inserted by doping into a conductive electrode material such as activated carbon, as well as conducting polymers such as polyaniline or derivatives of polythiophene covering the electrode material. The amount of electric charge stored in a pseudocapacitance is linearly proportional to the applied voltage. The unit of pseudocapacitance is the farad, the same as that of capacitance. Although conventional battery-type electrode materials also use chemical reactions to store charge, they show very different electrical profiles, as the rate of discharge is limited by the speed of diffusion. Grinding those materials down to the nanoscale frees them of the diffusion limit and gives them more pseudocapacitive behavior, making them extrinsic pseudocapacitors. Chodankar et al. 2020, figure 2 shows the representative voltage-capacity curves for bulk LiCoO2, nano LiCoO2, a redox pseudocapacitor (RuO2), and an intercalation pseudocapacitor (T-Nb2O5). Asymmetric capacitors Supercapacitors can also be made with different materials and principles at the electrodes. If both of those materials use a fast, supercapacitor-type reaction (capacitance or pseudocapacitance), the result is called an asymmetric capacitor. The two electrodes have different electric potentials; when combined with proper balancing, the result is improved energy density with no loss of lifespan or current capacity. Hybrid capacitors A number of newer supercapacitors are "hybrid": only one electrode uses a fast reaction (capacitance or pseudocapacitance), the other using a more "battery-like" (slower but higher-capacity) material. For example, an EDLC anode can be combined with an activated carbon–Ni(OH)2 cathode, the latter being a slow faradaic material. The charge and discharge profiles of a hybrid capacitor have a shape between those of a battery and an SC, more similar to that of an SC. Hybrid capacitors have much higher energy density, but have inferior cycle life and current capacity owing to the slower electrode. Potential distribution Conventional capacitors (also known as electrostatic capacitors), such as ceramic capacitors and film capacitors, consist of two electrodes separated by a dielectric material. When charged, the energy is stored in a static electric field that permeates the dielectric between the electrodes. The total energy increases with the amount of stored charge, which in turn correlates linearly with the potential (voltage) between the plates. The maximum potential difference between the plates (the maximal voltage) is limited by the dielectric's breakdown field strength. The same static storage also applies for electrolytic capacitors, in which most of the potential decreases over the anode's thin oxide layer. The somewhat resistive liquid electrolyte (cathode) accounts for a small decrease of potential in "wet" electrolytic capacitors, while for electrolytic capacitors with a solid conductive polymer electrolyte this voltage drop is negligible. In contrast, electrochemical capacitors (supercapacitors) consist of two electrodes separated by an ion-permeable membrane (separator) and ionically connected via an electrolyte. Energy storage occurs within the double-layers of both electrodes as a mixture of a double-layer capacitance and pseudocapacitance.
When both electrodes have approximately the same resistance (internal resistance), the potential of the capacitor decreases symmetrically over both double-layers, with an additional voltage drop across the equivalent series resistance (ESR) of the electrolyte. For asymmetrical supercapacitors such as hybrid capacitors, the voltage drop between the electrodes can be asymmetrical. The maximum potential across the capacitor (the maximal voltage) is limited by the electrolyte decomposition voltage. Both the electrostatic and the electrochemical storage mechanisms in supercapacitors store an amount of charge that is linear with respect to the applied voltage, just as in conventional capacitors, so the voltage between the capacitor terminals rises and falls linearly with the stored charge. Such a linear voltage gradient differs from that of rechargeable electrochemical batteries, in which the voltage between the terminals remains largely independent of the amount of stored energy, providing a relatively constant voltage. Comparison with other storage technologies Supercapacitors compete with electrolytic capacitors and rechargeable batteries, especially lithium-ion batteries. The following table compares the major parameters of the three main supercapacitor families with electrolytic capacitors and batteries. Electrolytic capacitors feature nearly unlimited charge/discharge cycles, high dielectric strength (up to 550 V) and good frequency response as alternating current (AC) reactance in the lower frequency range. Supercapacitors can store 10 to 100 times more energy than electrolytic capacitors, but they do not support AC applications. With regard to rechargeable batteries, supercapacitors feature higher peak currents, low cost per cycle, no danger of overcharging, good reversibility, non-corrosive electrolyte and low material toxicity. Batteries offer lower purchase cost and stable voltage under discharge, but require complex electronic control and switching equipment, with consequent energy loss and a spark hazard in the event of a short circuit. Styles Supercapacitors are made in different styles, such as flat with a single pair of electrodes, wound in a cylindrical case, or stacked in a rectangular case. Because they cover a broad range of capacitance values, the size of the cases can vary. Supercapacitors are constructed with two metal foils (current collectors), each coated with an electrode material such as activated carbon, which serve as the power connection between the electrode material and the external terminals of the capacitor. Specific to the electrode material is its very large surface area. For example, the activated carbon may be electrochemically etched, so that the surface area of the material is about 100,000 times greater than that of a smooth surface. The electrodes are kept apart by an ion-permeable membrane (separator) used as an insulator to protect the electrodes against short circuits. This construction is subsequently rolled or folded into a cylindrical or rectangular shape and can be stacked in an aluminum can or an adaptable rectangular housing. The cell is then impregnated with a liquid or viscous electrolyte of organic or aqueous type. The electrolyte, an ionic conductor, enters the pores of the electrodes and serves as the conductive connection between the electrodes across the separator. Finally, the housing is hermetically sealed to ensure stable behavior over the specified lifetime.
Types Electrical energy is stored in supercapacitors via two storage principles, static double-layer capacitance and electrochemical pseudocapacitance, and the distribution of the two types of capacitance depends on the material and structure of the electrodes. There are three types of supercapacitors based on storage principle: Double-layer capacitors (EDLCs): with activated carbon electrodes or derivatives with much higher electrostatic double-layer capacitance than electrochemical pseudocapacitance Pseudocapacitors: with transition metal oxide or conducting polymer electrodes with a high electrochemical pseudocapacitance Hybrid capacitors: with asymmetric electrodes, one of which exhibits mostly electrostatic and the other mostly electrochemical capacitance, such as lithium-ion capacitors Because double-layer capacitance and pseudocapacitance both contribute inseparably to the total capacitance value of an electrochemical capacitor, a correct description of these capacitors can only be given under this generic term. The concepts of supercapattery and supercabattery have recently been proposed to better represent those hybrid devices that behave more like the supercapacitor and the rechargeable battery, respectively. The capacitance value of a supercapacitor is determined by two storage principles: Double-layer capacitance – electrostatic storage of the electrical energy achieved by separation of charge in a Helmholtz double layer at the interface between the surface of a conductive electrode and an electrolytic solution. The charge-separation distance in a double-layer is on the order of a few ångströms (0.3–0.8 nm) and is static in origin. Pseudocapacitance – electrochemical storage of the electrical energy, achieved by redox reactions, electrosorption or intercalation on the surface of the electrode by specifically adsorbed ions, resulting in a reversible faradaic charge-transfer at the electrode. Double-layer capacitance and pseudocapacitance both contribute inseparably to the total capacitance value of a supercapacitor. However, the ratio of the two can vary greatly, depending on the design of the electrodes and the composition of the electrolyte. Pseudocapacitance can increase the capacitance value by as much as a factor of ten over that of the double-layer by itself. Electric double-layer capacitors (EDLC) are electrochemical capacitors in which energy storage is achieved predominantly by double-layer capacitance. In the past, all electrochemical capacitors were called "double-layer capacitors". Contemporary usage sees double-layer capacitors, together with pseudocapacitors, as part of a larger family of electrochemical capacitors called supercapacitors. They are also known as ultracapacitors. Materials The properties of supercapacitors come from the interaction of their internal materials. In particular, the combination of electrode material and type of electrolyte determines the functionality and the thermal and electrical characteristics of the capacitors. Electrodes Supercapacitor electrodes are generally thin coatings applied and electrically connected to a conductive, metallic current collector. Electrodes must have good conductivity, high temperature stability, long-term chemical stability (inertness), high corrosion resistance and high surface areas per unit volume and mass. Other requirements include environmental friendliness and low cost.
The amount of double-layer as well as pseudocapacitance stored per unit voltage in a supercapacitor is predominantly a function of the electrode surface area. Therefore, supercapacitor electrodes are typically made of porous, spongy material with an extraordinarily high specific surface area, such as activated carbon. Additionally, the ability of the electrode material to perform faradaic charge transfers enhances the total capacitance. Generally, the smaller the electrode's pores, the greater the capacitance and specific energy. However, smaller pores increase equivalent series resistance (ESR) and decrease specific power. Applications with high peak currents require larger pores and low internal losses, while applications requiring high specific energy need small pores. Electrodes for EDLCs The most commonly used electrode material for supercapacitors is carbon in various manifestations such as activated carbon (AC), carbon fibre-cloth (AFC), carbide-derived carbon (CDC), carbon aerogel, graphite (graphene), graphane and carbon nanotubes (CNTs). Carbon-based electrodes exhibit predominantly static double-layer capacitance, even though a small amount of pseudocapacitance may also be present depending on the pore size distribution. Pore sizes in carbons typically range from micropores (less than 2 nm) to mesopores (2-50 nm), but only micropores (<2 nm) contribute to pseudocapacitance. As pore size approaches the solvation shell size, solvent molecules are excluded and only unsolvated ions fill the pores (even for large ions), increasing ionic packing density and storage capability by faradaic intercalation. Activated carbon Activated carbon was the first material chosen for EDLC electrodes. Even though its electrical conductivity is approximately 0.003% that of metals (1,250 to 2,000 S/m), it is sufficient for supercapacitors. Activated carbon is an extremely porous form of carbon with a high specific surface area: a common approximation is that 1 gram (0.035 oz) (a pencil-eraser-sized amount) has a surface area of roughly 1,000 to 3,000 square metres, about the size of 4 to 12 tennis courts. The bulk form used in electrodes is low-density with many pores, giving high double-layer capacitance. Solid activated carbon, also termed consolidated amorphous carbon (CAC), is the most used electrode material for supercapacitors and may be cheaper than other carbon derivatives. It is produced from activated carbon powder pressed into the desired shape, forming a block with a wide distribution of pore sizes. An electrode with a surface area of about 1000 m2/g results in a typical double-layer capacitance of about 10 μF/cm2 and a specific capacitance of 100 F/g. Virtually all commercial supercapacitors use powdered activated carbon made from coconut shells. Coconut shells produce activated carbon with more micropores than does charcoal made from wood. Activated carbon fibres Activated carbon fibres (ACF) are produced from activated carbon and have a typical diameter of 10 μm. They can have micropores with a very narrow pore-size distribution that can be readily controlled. ACF woven into a textile provides a very high surface area. Advantages of ACF electrodes include low electrical resistance along the fibre axis and good contact to the collector. As for activated carbon, ACF electrodes exhibit predominantly double-layer capacitance with a small amount of pseudocapacitance due to their micropores.
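The figures quoted above for activated carbon are mutually consistent, as the short sketch below checks: multiplying an areal double-layer capacitance of about 10 μF/cm² by a specific surface area of about 1000 m²/g gives the stated gravimetric specific capacitance of roughly 100 F/g. The numbers are the ones from the paragraph above; the code itself is only a unit conversion.

```python
def specific_capacitance_F_per_g(areal_uF_per_cm2, surface_m2_per_g):
    """Convert areal capacitance (uF/cm^2) and specific surface area (m^2/g)
    into gravimetric specific capacitance (F/g)."""
    farads_per_cm2 = areal_uF_per_cm2 * 1e-6
    cm2_per_g = surface_m2_per_g * 1e4     # 1 m^2 = 10^4 cm^2
    return farads_per_cm2 * cm2_per_g

# ~10 uF/cm^2 over ~1000 m^2/g of activated carbon surface:
print(specific_capacitance_F_per_g(10, 1000))   # 100.0 F/g, matching the text
```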
Carbon aerogel Carbon aerogel is a highly porous, synthetic, ultralight material derived from an organic gel in which the liquid component of the gel has been replaced with a gas. Aerogel electrodes are made via pyrolysis of resorcinol-formaldehyde aerogels and are more conductive than most activated carbons. They enable thin and mechanically stable electrodes with a thickness in the range of several hundred micrometres (μm) and with uniform pore size. Aerogel electrodes also provide mechanical and vibration stability for supercapacitors used in high-vibration environments. Researchers have created carbon aerogel electrodes with gravimetric surface areas of about 400–1200 m2/g and a volumetric capacitance of 104 F/cm3, yielding high specific energy and specific power. Standard aerogel electrodes exhibit predominantly double-layer capacitance. Aerogel electrodes that incorporate composite material can add a high amount of pseudocapacitance. Carbide-derived carbon Carbide-derived carbon (CDC), also known as tunable nanoporous carbon, is a family of carbon materials derived from carbide precursors, such as binary silicon carbide and titanium carbide, that are transformed into pure carbon via physical (e.g., thermal decomposition) or chemical (e.g., halogenation) processes. Carbide-derived carbons can exhibit high surface area and tunable pore diameters (from micropores to mesopores) to maximize ion confinement, increasing pseudocapacitance by faradaic adsorption. CDC electrodes with tailored pore design offer as much as 75% greater specific energy than conventional activated carbons. A CDC supercapacitor has offered a specific energy of 10.1 Wh/kg, a capacitance of 3,500 F and over one million charge-discharge cycles. Graphene Graphene is a one-atom-thick sheet of graphite, with atoms arranged in a regular hexagonal pattern, also called "nanocomposite paper". Graphene has a theoretical specific surface area of 2630 m2/g, which can theoretically lead to a capacitance of 550 F/g. In addition, an advantage of graphene over activated carbon is its higher electrical conductivity. One development has used graphene sheets directly as electrodes, without collectors, for portable applications. In one embodiment, a graphene-based supercapacitor uses curved graphene sheets that do not stack face-to-face, forming mesopores that are accessible to and wettable by ionic electrolytes at voltages up to 4 V. A specific energy comparable to that of a conventional nickel–metal hydride battery is obtained at room temperature, but with 100–1000 times greater specific power. The two-dimensional structure of graphene improves charging and discharging. Charge carriers in vertically oriented sheets can quickly migrate into or out of the deeper structures of the electrode, thus increasing currents. Such capacitors may be suitable for 100/120 Hz filter applications, which are unreachable for supercapacitors using other carbon materials. Carbon nanotubes Carbon nanotubes (CNTs), also called buckytubes, are carbon molecules with a cylindrical nanostructure. They have a hollow structure with walls formed by one-atom-thick sheets of graphite. These sheets are rolled at specific and discrete ("chiral") angles, and the combination of chiral angle and radius controls properties such as electrical conductivity, electrolyte wettability and ion access. Nanotubes are categorized as single-walled nanotubes (SWNTs) or multi-walled nanotubes (MWNTs).
The latter have one or more outer tubes successively enveloping an SWNT, much like Russian matryoshka dolls. SWNTs have diameters ranging between 1 and 3 nm. MWNTs have thicker coaxial walls, separated by spacing (0.34 nm) that is close to graphene's interlayer distance. Nanotubes can grow vertically on the collector substrate, such as a silicon wafer. Typical lengths are 20 to 100 μm. Carbon nanotubes can greatly improve capacitor performance, due to the highly wettable surface area and high conductivity. An SWNT-based supercapacitor with aqueous electrolyte was systematically studied at the University of Delaware in Prof. Bingqing Wei's group. Li et al., for the first time, discovered that the ion-size effect and the electrode-electrolyte wettability are the dominant factors affecting the electrochemical behavior of flexible SWCNT supercapacitors in different 1-molar aqueous electrolytes with different anions and cations. The experimental results also suggested that, for flexible supercapacitors, applying sufficient pressure between the two electrodes improves the performance of aqueous-electrolyte CNT supercapacitors. CNTs can store about the same charge as activated carbon per unit surface area, but nanotubes' surface is arranged in a regular pattern, providing greater wettability. SWNTs have a high theoretical specific surface area of 1315 m2/g, while that for MWNTs is lower and is determined by the diameter of the tubes and degree of nesting, compared with a surface area of about 3000 m2/g for activated carbons. Nevertheless, CNTs have higher capacitance than activated carbon electrodes, e.g., 102 F/g for MWNTs and 180 F/g for SWNTs. MWNTs have mesopores that allow for easy access of ions at the electrode–electrolyte interface. As the pore size approaches the size of the ion solvation shell, the solvent molecules are partially stripped, resulting in larger ionic packing density and increased faradaic storage capability. However, the considerable volume change during repeated intercalation and depletion decreases their mechanical stability. To this end, research to increase surface area, mechanical strength, electrical conductivity and chemical stability is ongoing. Electrodes for pseudocapacitors MnO2 and RuO2 are typical materials used as electrodes for pseudocapacitors, since they have the electrochemical signature of a capacitive electrode (linear dependence of the current versus voltage curve) as well as exhibiting faradaic behavior. Additionally, the charge storage originates from electron-transfer mechanisms rather than accumulation of ions in the electrochemical double layer. Pseudocapacitance is created through faradaic redox reactions that occur within the active electrode materials. More research has focused on transition-metal oxides such as MnO2, since transition-metal oxides have a lower cost compared to noble metal oxides such as RuO2. Moreover, the charge storage mechanisms of transition-metal oxides are based predominantly on pseudocapacitance. Two mechanisms of MnO2 charge storage behavior have been introduced. The first mechanism implies the intercalation of protons (H+) or alkali metal cations (C+) in the bulk of the material upon reduction, followed by deintercalation upon oxidation. MnO2 + H+ (C+) + e− ⇌ MnOOH(C) The second mechanism is based on the surface adsorption of electrolyte cations on MnO2.
(MnO2)surface + C+ + e− ⇌ (MnO2− C+)surface Not every material that exhibits faradaic behavior can be used as an electrode for pseudocapacitors; Ni(OH)2, for example, is a battery-type electrode (non-linear dependence of the current versus voltage curve). Metal oxides Brian Evans Conway's research described electrodes of transition metal oxides that exhibited high amounts of pseudocapacitance. Oxides of transition metals including ruthenium, iridium, iron and manganese, as well as sulfides such as titanium sulfide, alone or in combination generate strong faradaic electron-transfer reactions combined with low resistance. Ruthenium dioxide in combination with an electrolyte provides a specific capacitance of 720 F/g and a high specific energy of 26.7 Wh/kg. Charge/discharge takes place over a window of about 1.2 V per electrode. This pseudocapacitance of about 720 F/g is roughly 100 times higher than for double-layer capacitance using activated carbon electrodes. These transition metal electrodes offer excellent reversibility, with several hundred-thousand cycles. However, ruthenium is expensive, and the 2.4 V voltage window limits such capacitors to military and space applications. Das et al. reported the highest capacitance value (1715 F/g) for a ruthenium-oxide-based supercapacitor, with ruthenium oxide electrodeposited onto a porous single-wall carbon nanotube film electrode; this closely approaches the predicted theoretical maximum capacitance of 2000 F/g. In 2014, a supercapacitor anchored on a graphene foam electrode delivered a specific capacitance of 502.78 F/g and an areal capacitance of 1.11 F/cm2, leading to a specific energy of 39.28 Wh/kg and a specific power of 128.01 kW/kg over 8,000 cycles with constant performance. The device was a three-dimensional (3D) sub-5 nm hydrous ruthenium-anchored graphene and carbon nanotube (CNT) hybrid foam (RGM) architecture. The graphene foam was conformally covered with hybrid networks of nanoparticles and anchored CNTs. Less expensive oxides of iron, vanadium, nickel and cobalt have been tested in aqueous electrolytes, but none has been investigated as much as manganese dioxide (MnO2). However, none of these oxides is in commercial use. Conductive polymers Another approach uses electron-conducting polymers as pseudocapacitive material. Although mechanically weak, conductive polymers have high conductivity, resulting in a low ESR and a relatively high capacitance. Such conducting polymers include polyaniline, polythiophene, polypyrrole and polyacetylene. Such electrodes also employ electrochemical doping or dedoping of the polymers with anions and cations. Electrodes made from, or coated with, conductive polymers have costs comparable to carbon electrodes. Conducting polymer electrodes generally suffer from limited cycling stability. However, polyacene electrodes provide up to 10,000 cycles, much better than batteries. Electrodes for hybrid capacitors All commercial hybrid supercapacitors are asymmetric. They combine an electrode with a high amount of pseudocapacitance with an electrode with a high amount of double-layer capacitance. In such systems the faradaic pseudocapacitance electrode with its higher capacitance provides high specific energy, while the non-faradaic EDLC electrode enables high specific power. 
An advantage of hybrid-type supercapacitors compared with symmetrical EDLCs is their higher specific capacitance value as well as their higher rated voltage and, correspondingly, their higher specific energy. Composite electrodes Composite electrodes for hybrid-type supercapacitors are constructed from carbon-based material with incorporated or deposited pseudocapacitive active materials like metal oxides and conducting polymers. Most supercapacitor research explores composite electrodes. CNTs give a backbone for a homogeneous distribution of metal oxide or electrically conducting polymers (ECPs), producing good pseudocapacitance and good double-layer capacitance. These electrodes achieve higher capacitances than either pure carbon or pure metal oxide or polymer-based electrodes. This is attributed to the accessibility of the nanotubes' tangled mat structure, which allows a uniform coating of pseudocapacitive materials and three-dimensional charge distribution. Pseudocapacitive materials are usually anchored using a hydrothermal process; however, Li et al. at the University of Delaware found a facile and scalable approach to precipitate MnO2 on an SWNT film to make an organic-electrolyte-based supercapacitor. Another way to enhance CNT electrodes is by doping with a pseudocapacitive dopant, as in lithium-ion capacitors. In this case the relatively small lithium atoms intercalate between the layers of carbon. The anode is made of lithium-doped carbon, which enables a lower negative potential with a cathode made of activated carbon. This results in a larger voltage of 3.8–4 V that prevents electrolyte oxidation. As of 2007 such capacitors had achieved a capacitance of 550 F/g and a specific energy of up to 14 Wh/kg. Battery-type electrodes Rechargeable battery electrodes influenced the development of electrodes for new hybrid-type supercapacitors, such as lithium-ion capacitors. Together with a carbon EDLC electrode in an asymmetric construction, this configuration offers higher specific energy than typical supercapacitors, along with higher specific power, longer cycle life and faster charging and recharging times than batteries. Asymmetric electrodes (pseudo/EDLC) Recently some asymmetric hybrid supercapacitors were developed in which the positive electrode was based on a real pseudocapacitive metal oxide electrode (not a composite electrode), and the negative electrode on an EDLC activated carbon electrode. Asymmetric supercapacitors (ASCs) are promising candidates for high-performance devices because their wide operating potential window can markedly enhance capacitive behavior. An advantage of this type of supercapacitor is its higher voltage and correspondingly higher specific energy (up to 10–20 Wh/kg, or 36–72 kJ/kg); they also have good cycling stability. For example, researchers have used novel skutterudite Ni–CoP3 nanosheets as positive electrodes, with activated carbon (AC) as negative electrodes, to fabricate an asymmetric supercapacitor (ASC). It exhibits a high specific energy of 89.6 Wh/kg at 796 W/kg and 93% stability after 10,000 cycles, making it a promising next-generation electrode candidate. Also, carbon nanofibers/poly(3,4-ethylenedioxythiophene)/manganese oxide (f-CNFs/PEDOT/MnO2) has been used as the positive electrode and AC as the negative electrode; this device has a high specific energy of 49.4 Wh/kg and good cycling stability (81.06% retention after 8,000 cycles). 
Many other kinds of nanocomposite, such as NiCo2S4@NiO and MgCo2O4@MnO2, are being studied as electrodes. For example, an Fe-SnO2@CeO2 nanocomposite used as an electrode can provide a specific energy of 32.2 Wh/kg and a specific power of 747 W/kg; the device exhibited capacitance retention of 85.05% over 5,000 cycles of operation. As far as is known, no commercially offered supercapacitors with this kind of asymmetric electrode are on the market. Electrolytes Electrolytes consist of a solvent and dissolved chemicals that dissociate into positive cations and negative anions, making the electrolyte electrically conductive. The more ions the electrolyte contains, the better its conductivity. In supercapacitors electrolytes are the electrically conductive connection between the two electrodes. Additionally, in supercapacitors the electrolyte provides the molecules for the separating monolayer in the Helmholtz double-layer and delivers the ions for pseudocapacitance. The electrolyte determines the capacitor's characteristics: its operating voltage, temperature range, ESR and capacitance. With the same activated carbon electrode an aqueous electrolyte achieves capacitance values of 160 F/g, while an organic electrolyte achieves only 100 F/g. The electrolyte must be chemically inert and must not chemically attack the other materials in the capacitor, to ensure stable long-term behavior of the capacitor's electrical parameters. The electrolyte's viscosity must be low enough to wet the porous, sponge-like structure of the electrodes. An ideal electrolyte does not exist, forcing a compromise between performance and other requirements. Water is a relatively good solvent for inorganic chemicals. Treated with acids such as sulfuric acid (H2SO4), alkalis such as potassium hydroxide (KOH), or salts such as quaternary phosphonium salts, sodium perchlorate (NaClO4), lithium perchlorate (LiClO4) or lithium hexafluoroarsenate (LiAsF6), water offers relatively high conductivity values of about 100 to 1000 mS/cm. Aqueous electrolytes have a dissociation voltage of 1.15 V per electrode (2.3 V capacitor voltage) and a relatively low operating temperature range. They are used in supercapacitors with low specific energy and high specific power. Electrolytes with organic solvents such as acetonitrile, propylene carbonate, tetrahydrofuran, diethyl carbonate, γ-butyrolactone and solutions with quaternary ammonium salts or alkyl ammonium salts such as tetraethylammonium tetrafluoroborate or triethyl(methyl)ammonium tetrafluoroborate are more expensive than aqueous electrolytes, but they have a higher dissociation voltage of typically 1.35 V per electrode (2.7 V capacitor voltage), and a higher temperature range. The lower electrical conductivity of organic solvents (10 to 60 mS/cm) leads to a lower specific power, but, since the specific energy increases with the square of the voltage, a higher specific energy. Ionic electrolytes consist of liquid salts that can be stable in a wider electrochemical window, enabling capacitor voltages above 3.5 V. Ionic electrolytes typically have an ionic conductivity of a few mS/cm, lower than aqueous or organic electrolytes. Separators Separators have to physically separate the two electrodes to prevent a short circuit by direct contact. The separator can be very thin (a few hundredths of a millimetre) and must be very porous to the conducting ions to minimize ESR. Furthermore, separators must be chemically inert to protect the electrolyte's stability and conductivity. Inexpensive components use open capacitor papers. 
More sophisticated designs use nonwoven porous polymeric films like polyacrylonitrile or Kapton, woven glass fibres or porous woven ceramic fibres. Collectors and housing Current collectors connect the electrodes to the capacitor's terminals. The collector is either sprayed onto the electrode or is a metal foil. They must be able to distribute peak currents of up to 100 A. If the housing is made out of a metal (typically aluminum) the collectors should be made from the same material to avoid forming a corrosive galvanic cell. Electrical parameters Capacitance Capacitance values for commercial capacitors are specified as "rated capacitance CR". This is the value for which the capacitor has been designed. The value for an actual component must be within the limits given by the specified tolerance. Typical values are in the range of farads (F), three to six orders of magnitude larger than those of electrolytic capacitors. The capacitance value results from the energy W (expressed in joules) of a capacitor charged to a DC voltage VDC, via C = 2·W/VDC². This value is also called the "DC capacitance". Measurement Conventional capacitors are normally measured with a small AC voltage (0.5 V) and a frequency of 100 Hz or 1 kHz, depending on the capacitor type. The AC capacitance measurement offers fast results, important for industrial production lines. The capacitance value of a supercapacitor depends strongly on the measurement frequency, which is related to the porous electrode structure and the limited ion mobility of the electrolyte. Even at a low frequency of 10 Hz, the measured capacitance value drops from 100 to 20 percent of the DC capacitance value. This extraordinarily strong frequency dependence can be explained by the different distances the ions have to move in the electrode's pores. The area at the beginning of the pores can be easily accessed by the ions; this short distance is accompanied by low electrical resistance. The greater the distance the ions have to cover, the higher the resistance. This phenomenon can be described with a series circuit of cascaded RC (resistor/capacitor) elements with serial RC time constants. These result in delayed current flow, reducing the total electrode surface area that can be covered with ions if polarity changes – capacitance decreases with increasing AC frequency. Thus, the total capacitance is achieved only after longer measuring times. Because of this very strong frequency dependence of the capacitance, this electrical parameter has to be measured with a special constant-current charge and discharge measurement, defined in IEC standards 62391-1 and -2. Measurement starts with charging the capacitor. The voltage has to be applied, and after the constant current/constant voltage power supply has achieved the rated voltage, the capacitor must be charged for 30 minutes. Next, the capacitor has to be discharged with a constant discharge current Idischarge. Then the times t1 and t2, at which the voltage has dropped from 80% (V1) to 40% (V2) of the rated voltage, are measured. The capacitance value is calculated as C = Idischarge · (t2 − t1) / (V1 − V2). The value of the discharge current is determined by the application. The IEC standard defines four classes:
Memory backup, discharge current in mA = 1 · C (F)
Energy storage, discharge current in mA = 0.4 · C (F) · V (V)
Power, discharge current in mA = 4 · C (F) · V (V)
Instantaneous power, discharge current in mA = 40 · C (F) · V (V)
The measurement methods employed by individual manufacturers are mainly comparable to the standardized methods. 
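A minimal Python sketch of this constant-current capacitance calculation and of the class-based discharge currents; the 100 F / 2.7 V cell, the 1 A discharge current and the 108 s interval are illustrative assumptions, not values from any datasheet or from the standard itself.

```python
# Sketch of the IEC 62391-1 style constant-current capacitance measurement
# and the discharge currents for the four application classes listed above.

def iec_capacitance(i_discharge_a, v_rated, t1_s, t2_s):
    """C = Idischarge * (t2 - t1) / (V1 - V2), with V1 = 80 % and V2 = 40 %
    of the rated voltage."""
    v1, v2 = 0.8 * v_rated, 0.4 * v_rated
    return i_discharge_a * (t2_s - t1_s) / (v1 - v2)

def iec_discharge_current_ma(c_f, v_rated, application_class):
    """Discharge current in mA for the four IEC application classes."""
    currents = {
        "memory backup": 1 * c_f,
        "energy storage": 0.4 * c_f * v_rated,
        "power": 4 * c_f * v_rated,
        "instantaneous power": 40 * c_f * v_rated,
    }
    return currents[application_class]

# A hypothetical 100 F / 2.7 V cell discharged with 1 A needs 108 s to fall
# from 2.16 V (80 %) to 1.08 V (40 %):
print(iec_capacitance(1.0, 2.7, t1_s=0.0, t2_s=108.0))        # -> 100.0 F
print(iec_discharge_current_ma(100, 2.7, "energy storage"))   # -> 108.0 mA
```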
The standardized measuring method is too time consuming for manufacturers to use during production for each individual component. For industrially produced capacitors, the capacitance value is instead measured with a faster, low-frequency AC voltage, and a correlation factor is used to compute the rated capacitance. This frequency dependence affects capacitor operation. Rapid charge and discharge cycles mean that neither the rated capacitance value nor the specific energy is available. In this case the rated capacitance value is recalculated for each application condition. The time t for which a supercapacitor can deliver a constant current I can be calculated as t = C · (Ucharge − Umin) / I, as the capacitor voltage decreases from Ucharge down to Umin. If the application needs a constant power P for a certain time t, this can be calculated as t = C · (Ucharge² − Umin²) / (2·P), wherein the capacitor voltage likewise decreases from Ucharge down to Umin. Operating voltage Supercapacitors are low-voltage components. Safe operation requires that the voltage remain within specified limits. The rated voltage UR is the maximum DC voltage or peak pulse voltage that may be applied continuously and remain within the specified temperature range. Capacitors should never be subjected to voltages continuously in excess of the rated voltage. The rated voltage includes a safety margin against the electrolyte's breakdown voltage, at which the electrolyte decomposes. The breakdown voltage decomposes the separating solvent molecules in the Helmholtz double-layer; e.g., water splits into hydrogen and oxygen. The solvent molecules then cannot separate the electrical charges from each other. Voltages higher than the rated voltage cause hydrogen gas formation or a short circuit. Standard supercapacitors with aqueous electrolyte normally are specified with a rated voltage of 2.1 to 2.3 V, and capacitors with organic solvents with 2.5 to 2.7 V. Lithium-ion capacitors with doped electrodes may reach a rated voltage of 3.8 to 4 V, but have a low voltage limit of about 2.2 V. Supercapacitors with ionic electrolytes can exceed an operating voltage of 3.5 V. Operating supercapacitors below the rated voltage improves the long-time behavior of the electrical parameters. Capacitance values and internal resistance during cycling are more stable, and lifetime and charge/discharge cycles may be extended. Higher application voltages require connecting cells in series. Since each component has a slight difference in capacitance value and ESR, it is necessary to actively or passively balance them to stabilize the applied voltage. Passive balancing employs resistors in parallel with the supercapacitors. Active balancing may include electronic voltage management above a threshold that varies the current. Internal resistance Charging/discharging a supercapacitor is connected to the movement of charge carriers (ions) in the electrolyte across the separator to the electrodes and into their porous structure. Losses occur during this movement that can be measured as the internal DC resistance. With the electrical model of cascaded, series-connected RC (resistor/capacitor) elements in the electrode pores, the internal resistance increases with the increasing penetration depth of the charge carriers into the pores. The internal DC resistance is time dependent and increases during charge/discharge. In applications, often only the switch-on and switch-off range is of interest. The internal resistance Ri can be calculated from the voltage drop ΔV2 at the time of discharge, starting with a constant discharge current Idischarge. 
It is obtained from the intersection of the auxiliary line extended from the straight part of the discharge curve and the time base at the start of discharge. The resistance can be calculated as Ri = ΔV2 / Idischarge. The discharge current Idischarge for the measurement of internal resistance can be taken from the classification according to IEC 62391-1. This internal DC resistance Ri should not be confused with the internal AC resistance, called equivalent series resistance (ESR), normally specified for capacitors. It is measured at 1 kHz. ESR is much smaller than DC resistance. ESR is not relevant for calculating supercapacitor inrush currents or other peak currents. Ri determines several supercapacitor properties. It limits the charge and discharge peak currents as well as charge/discharge times. Ri and the capacitance C result in the time constant τ = Ri · C. This time constant determines the charge/discharge time. A 100 F capacitor with an internal resistance of 30 mΩ, for example, has a time constant of 0.03 · 100 = 3 s. After 3 seconds of charging with a current limited only by the internal resistance, the capacitor has 63.2% of full charge (or is discharged to 36.8% of full charge). Standard capacitors with constant internal resistance fully charge during about 5 τ. Since internal resistance increases with charge/discharge, actual times cannot be calculated with this formula. Thus, charge/discharge time depends on specific individual construction details. Current load and cycle stability Because supercapacitors operate without forming chemical bonds, current loads, including charge, discharge and peak currents, are not limited by reaction constraints. Current load and cycle stability can be much higher than for rechargeable batteries. Current loads are limited only by internal resistance, which may be substantially lower than for batteries. Internal resistance "Ri" and charge/discharge currents or peak currents "I" generate internal heat losses "Ploss" according to Ploss = Ri · I². This heat must be released and distributed to the ambient environment to maintain operating temperatures below the specified maximum temperature. Heat generally defines capacitor lifetime due to electrolyte diffusion. The temperature rise caused by current loads should be smaller than 5 to 10 K at maximum ambient temperature (in which case it has only minor influence on expected lifetime). For that reason the specified charge and discharge currents for frequent cycling are determined by the internal resistance. The specified cycle parameters under maximal conditions include charge and discharge current, pulse duration and frequency. They are specified for a defined temperature range and over the full voltage range for a defined lifetime. They can differ enormously depending on the combination of electrode porosity, pore size and electrolyte. Generally a lower current load increases capacitor life and increases the number of cycles. This can be achieved either by a lower voltage range or by slower charging and discharging. Supercapacitors (except those with polymer electrodes) can potentially support more than one million charge/discharge cycles without substantial capacity drops or internal resistance increases. Alongside the higher current-load capability, this is the second great advantage of supercapacitors over batteries. The stability results from the dual electrostatic and electrochemical storage principles. The specified charge and discharge currents can be significantly exceeded by lowering the frequency or by single pulses. 
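A minimal Python sketch of the two relations above, the time constant τ = Ri · C and the internal heat loss Ploss = Ri · I²; the 30 mΩ / 100 F cell and the 50 A load current are illustrative assumptions only, and the charge fraction assumes the simple constant-resistance RC model described in the text.

```python
# Sketch of the time constant and Joule-heating relations discussed above.
import math

def time_constant_s(r_i_ohm, c_f):
    """tau = Ri * C."""
    return r_i_ohm * c_f

def charge_fraction(t_s, tau_s):
    """Fraction of full charge reached after charging for t through a
    constant internal resistance (simple RC model)."""
    return 1.0 - math.exp(-t_s / tau_s)

def heat_loss_w(r_i_ohm, current_a):
    """Ploss = Ri * I**2."""
    return r_i_ohm * current_a ** 2

tau = time_constant_s(0.030, 100)                  # 0.030 Ohm * 100 F = 3 s
print(tau, round(charge_fraction(tau, tau), 3))    # -> 3.0 s, 0.632 (63.2 %)
print(heat_loss_w(0.030, 50))                      # -> 75 W of internal heating at 50 A
```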
Heat generated by a single pulse may be spread over the time until the next pulse occurs to ensure a relatively small average heat increase. For power supercapacitors of more than 1000 F, such a "peak power current" can reach a maximum of about 1000 A. Such high currents generate high thermal stress and high electromagnetic forces that can damage the electrode-collector connection, requiring robust design and construction of the capacitors. Device capacitance and resistance dependence on operating voltage and temperature Device parameters such as capacitance, initial resistance and steady-state resistance are not constant, but are variable and dependent on the device's operating voltage. Device capacitance will have a measurable increase as the operating voltage increases. For example, a 100 F device can vary by 26% from its maximum capacitance over its entire operational voltage range. Similar dependence on operating voltage is seen in steady-state resistance (Rss) and initial resistance (Ri). Device properties can also be seen to be dependent on device temperature. As the temperature of the device changes, either through operation or through varying ambient temperature, internal properties such as capacitance and resistance will vary as well. Device capacitance is seen to increase as the operating temperature increases. Energy capacity Supercapacitors occupy the gap between high power/low energy electrolytic capacitors and low power/high energy rechargeable batteries. The energy Wmax (expressed in joules) that can be stored in a capacitor is given by the formula Wmax = ½ · C · Vmax². This formula describes the amount of energy stored and is often used to describe new research successes. However, only part of the stored energy is available to applications, because the voltage drop and the time constant over the internal resistance mean that some of the stored charge is inaccessible. The effective realized amount of energy Weff is reduced by the usable voltage difference between Vmax and Vmin and can be represented as Weff = ½ · C · (Vmax² − Vmin²). This formula also represents the energy of components with asymmetric voltage limits, such as lithium-ion capacitors. Specific energy and specific power The amount of energy that can be stored in a capacitor per mass of that capacitor is called its specific energy. Specific energy is measured gravimetrically (per unit of mass) in watt-hours per kilogram (Wh/kg). The amount of energy that can be stored in a capacitor per volume of that capacitor is called its energy density (also called volumetric specific energy in some literature). Energy density is measured volumetrically (per unit of volume) in watt-hours per litre (Wh/L). Units of litres and dm3 can be used interchangeably. Commercial energy densities vary widely, but in general range upward from around 5 Wh/L. In comparison, petrol fuel has an energy density of 32.4 MJ/L (about 9 kWh/L). Commercial specific energies range from around 0.5 to 15 Wh/kg. For comparison, an aluminum electrolytic capacitor typically stores on the order of 0.01 Wh/kg, a conventional lead–acid battery typically around 30 Wh/kg and modern lithium-ion batteries upwards of 100 Wh/kg. Supercapacitors can therefore store 10 to 100 times more energy than electrolytic capacitors, but only about one tenth as much as batteries. For reference, petrol fuel has a specific energy of 44.4 MJ/kg (about 12.3 kWh/kg). Although the specific energy of supercapacitors compares unfavorably with that of batteries, capacitors have the important advantage of specific power. 
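A minimal Python sketch of the energy relations above; the 100 F / 2.7 V cell, the Vmin of half the rated voltage and the 20 g cell mass are illustrative assumptions, not data for any particular product.

```python
# Sketch of stored energy, usable energy and specific energy for a supercapacitor.

def stored_energy_j(c_f, v_max):
    """Wmax = 1/2 * C * Vmax**2."""
    return 0.5 * c_f * v_max ** 2

def usable_energy_j(c_f, v_max, v_min):
    """Weff = 1/2 * C * (Vmax**2 - Vmin**2)."""
    return 0.5 * c_f * (v_max ** 2 - v_min ** 2)

def specific_energy_wh_per_kg(energy_j, mass_kg):
    return energy_j / 3600.0 / mass_kg        # 1 Wh = 3600 J

w_max = stored_energy_j(100, 2.7)             # -> 364.5 J
w_eff = usable_energy_j(100, 2.7, 1.35)       # -> ~273.4 J when discharged to Vmax/2
print(w_max, round(w_eff, 1))
print(round(specific_energy_wh_per_kg(w_max, 0.020), 2))  # -> ~5.06 Wh/kg for a 20 g cell
```

Note that discharging down to half the rated voltage already releases 75% of the stored energy, which is why many applications treat Vmax/2 as a practical lower limit.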
Specific power describes the speed at which energy can be delivered to the load (or, in charging the device, absorbed from the generator). The maximum power Pmax specifies the power of a theoretical rectangular single maximum current peak of a given voltage. In real circuits the current peak is not rectangular and the voltage is smaller, caused by the voltage drop, so IEC 62391-2 established a more realistic effective power Peff for supercapacitors for power applications, which is half the maximum: Pmax = V² / (4·Ri) and Peff = V² / (8·Ri), with V the applied voltage and Ri the internal DC resistance of the capacitor. Just like specific energy, specific power is measured either gravimetrically in kilowatts per kilogram (kW/kg, specific power) or volumetrically in kilowatts per litre (kW/L, power density). Supercapacitor specific power is typically 10 to 100 times greater than for batteries and can reach values up to 15 kW/kg. Ragone charts relate energy to power and are a valuable tool for characterizing and visualizing energy storage components. With such a chart, the specific power and specific energy of different storage technologies are easy to compare. Lifetime Since supercapacitors do not rely on chemical changes in the electrodes (except for those with polymer electrodes), lifetimes depend mostly on the rate of evaporation of the liquid electrolyte. This evaporation is generally a function of temperature, current load, current cycle frequency and voltage. Current load and cycle frequency generate internal heat, so that the evaporation-determining temperature is the sum of ambient and internal heat. This temperature is measurable as the core temperature in the center of a capacitor body. The higher the core temperature, the faster the evaporation, and the shorter the lifetime. Evaporation generally results in decreasing capacitance and increasing internal resistance. According to IEC/EN 62391-2, capacitance reductions of over 30%, or internal resistance exceeding four times its data sheet specifications, are considered "wear-out failures", implying that the component has reached end-of-life. The capacitors are operable, but with reduced capabilities. Whether such parameter deviations have any influence on proper functionality depends on the application of the capacitors. Such large changes of electrical parameters specified in IEC/EN 62391-2 are usually unacceptable for high current load applications. Components that support high current loads use much smaller limits, e.g., 20% loss of capacitance or double the internal resistance. The narrower definition is important for such applications, since heat increases linearly with increasing internal resistance, and the maximum temperature should not be exceeded. Temperatures higher than specified can destroy the capacitor. The real application lifetime of supercapacitors, also called "service life", "life expectancy" or "load life", can reach 10 to 15 years or more at room temperature. Such long periods cannot be tested by manufacturers. Hence, they specify the expected capacitor lifetime at the maximum temperature and voltage conditions. The results are specified in datasheets using the notation "tested time (hours)/max. temperature (°C)", such as "5000 h/65 °C". With this value, and expressions derived from historical data, lifetimes can be estimated for lower temperature conditions, as sketched in the example below. 
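A minimal Python sketch of such a lifetime estimation, assuming the common "10-degrees rule" (estimated life doubling per 10 °C of temperature reduction) that the next paragraph describes; the datasheet rating used here is illustrative only.

```python
# Sketch of the temperature-derated lifetime estimation from a datasheet
# endurance rating, using the 10-degrees (Arrhenius-based) rule of thumb.

def estimated_lifetime_h(l0_h, t0_c, tx_c):
    """Lx = L0 * 2 ** ((T0 - Tx) / 10), with temperatures in degC."""
    return l0_h * 2 ** ((t0_c - tx_c) / 10.0)

# A hypothetical part rated "5000 h / 65 degC":
print(estimated_lifetime_h(5000, 65, 45))   # -> 20000.0 h at 45 degC
print(estimated_lifetime_h(5000, 65, 25))   # -> 80000.0 h (~9 years) at 25 degC
```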
Datasheet lifetime specification is tested by the manufacturers using an accelerated aging test called an "endurance test", with maximum temperature and voltage over a specified time. For a "zero defect" product policy, no wear-out or total failure may occur during this test. The lifetime specification from datasheets can be used to estimate the expected lifetime for a given design. The "10-degrees rule" used for electrolytic capacitors with non-solid electrolyte is used in those estimations, and can be used for supercapacitors. This rule employs the Arrhenius equation, a simple formula for the temperature dependence of reaction rates: for every 10 °C reduction in operating temperature, the estimated life doubles, i.e. Lx = L0 · 2^((T0 − Tx)/10), with: Lx = estimated lifetime, L0 = specified lifetime, T0 = upper specified capacitor temperature, Tx = actual operating temperature of the capacitor cell. Calculated with this formula, capacitors specified with 5000 h at 65 °C have an estimated lifetime of 20,000 h at 45 °C. Lifetimes are also dependent on the operating voltage, because the development of gas in the liquid electrolyte depends on the voltage. The lower the voltage, the smaller the gas development, and the longer the lifetime. No general formula relates voltage to lifetime. Voltage-dependent lifetime curves are empirical results published by individual manufacturers. Life expectancy for power applications may also be limited by current load or number of cycles. This limitation has to be specified by the relevant manufacturer and is strongly type dependent. Self-discharge Storing electrical energy in the double-layer separates the charge carriers within the pores by distances in the range of molecules. Irregularities can occur over this short distance, leading to a small exchange of charge carriers and gradual discharge. This self-discharge is called leakage current. Leakage depends on capacitance, voltage, temperature, and the chemical stability of the electrode/electrolyte combination. At room temperature, leakage is so low that it is specified as time to self-discharge in hours, days, or weeks. As an example, a 5.5 V/F Panasonic "Goldcapacitor" specifies a voltage drop at 20 °C from 5.5 V to 3 V in 600 hours (25 days or 3.6 weeks) for a double cell capacitor. Post charge voltage relaxation It has been noticed that after the EDLC experiences a charge or discharge, the voltage will drift over time, relaxing toward its previous voltage level. The observed relaxation can occur over several hours and is likely due to the long diffusion time constants of the porous electrodes within the EDLC. Polarity Since the positive and negative electrodes (or simply positrode and negatrode, respectively) of symmetric supercapacitors consist of the same material, theoretically supercapacitors have no true polarity and catastrophic failure does not normally occur. However, reverse-charging a supercapacitor lowers its capacity, so it is recommended practice to maintain the polarity resulting from the formation of the electrodes during production. Asymmetric supercapacitors are inherently polar. Pseudocapacitors and hybrid supercapacitors, which have electrochemical charge properties, may not be operated with reverse polarity, precluding their use in AC operation. However, this limitation does not apply to EDLC supercapacitors. A bar in the insulating sleeve identifies the negative terminal in a polarized component. In some literature, the terms "anode" and "cathode" are used in place of negative electrode and positive electrode. 
Using anode and cathode to describe the electrodes in supercapacitors (and also rechargeable batteries, including lithium-ion batteries) can lead to confusion, because the polarity changes depending on whether a component is considered as a generator or as a consumer of current. In electrochemistry, cathode and anode are related to reduction and oxidation reactions, respectively. However, in supercapacitors based on electric double-layer capacitance, there are no oxidation or reduction reactions on either of the two electrodes. Therefore, the concepts of cathode and anode do not apply. Comparison of selected commercial supercapacitors The range of electrodes and electrolytes available yields a variety of components suitable for diverse applications. The development of low-ohmic electrolyte systems, in combination with electrodes with high pseudocapacitance, enables many more technical solutions. The following table shows differences among capacitors of various manufacturers in capacitance range, cell voltage, internal resistance (ESR, DC or AC value) and volumetric and gravimetric specific energy. In the table, ESR refers to the component with the largest capacitance value of the respective manufacturer. Roughly, they divide supercapacitors into two groups. The first group offers greater ESR values of about 20 milliohms and relatively small capacitances of 0.1 to 470 F. These are "double-layer capacitors" for memory back-up or similar applications. The second group offers 100 to 10,000 F with a significantly lower ESR value under 1 milliohm. These components are suitable for power applications. A correlation of some supercapacitor series of different manufacturers to the various construction features is provided in Pandolfo and Hollenkamp. In commercial double-layer capacitors, or, more specifically, EDLCs in which energy storage is predominantly achieved by double-layer capacitance, energy is stored by forming an electrical double layer of electrolyte ions on the surface of conductive electrodes. Since EDLCs are not limited by the electrochemical charge-transfer kinetics of batteries, they can charge and discharge at a much higher rate, with lifetimes of more than 1 million cycles. The EDLC energy density is determined by the operating voltage and the specific capacitance (farad/gram or farad/cm3) of the electrode/electrolyte system. The specific capacitance is related to the specific surface area (SSA) accessible by the electrolyte, its interfacial double-layer capacitance, and the electrode material density. Commercial EDLCs are based on two symmetric electrodes impregnated with electrolytes comprising tetraethylammonium tetrafluoroborate salts in organic solvents. Current EDLCs containing organic electrolytes operate at 2.7 V and reach energy densities around 5-8 Wh/kg and 7 to 10 Wh/L. Graphene-based platelets with a mesoporous spacer material are a promising structure for increasing the SSA accessible to the electrolyte. Standards Supercapacitors vary sufficiently that they are rarely interchangeable, especially those with higher specific energy. Applications range from low to high peak currents, requiring standardized test protocols. Test specifications and parameter requirements are specified in the generic specification IEC/EN 62391-1, Fixed electric double layer capacitors for use in electronic equipment. 
The standard defines four application classes, according to discharge current levels:
Memory backup
Energy storage, mainly used for driving motors that require short-time operation
Power, for higher power demand over long-time operation
Instantaneous power, for applications that require relatively high current units or peak currents of up to several hundred amperes, even with a short operating time
Three further standards describe special applications:
IEC 62391-2, Fixed electric double-layer capacitors for use in electronic equipment - Blank detail specification - Electric double-layer capacitors for power application
IEC 62576, Electric double-layer capacitors for use in hybrid electric vehicles. Test methods for electrical characteristics
BS/EN 61881-3, Railway applications. Rolling stock equipment. Capacitors for power electronics. Electric double-layer capacitors
Applications Supercapacitors have advantages in applications where a large amount of power is needed for a relatively short time, or where a very high number of charge/discharge cycles or a longer lifetime is required. Typical applications range from milliamp currents or milliwatts of power for up to a few minutes to several amps of current or several hundred kilowatts of power for much shorter periods. Supercapacitors do not support alternating current (AC) applications. Consumer electronics In applications with fluctuating loads, such as laptop computers, PDAs, GPS, portable media players, hand-held devices, and photovoltaic systems, supercapacitors can stabilize the power supply. Supercapacitors deliver power for photographic flashes in digital cameras and for LED flashlights that can be charged in much shorter periods of time, e.g., 90 seconds. Some portable speakers are powered by supercapacitors. A cordless electric screwdriver with supercapacitors for energy storage has about half the run time of a comparable battery model, but can be fully charged in 90 seconds. It retains 85% of its charge after three months left idle. Power generation and distribution Grid power buffering Numerous non-linear loads, such as EV chargers, HEVs, air conditioning systems, and advanced power conversion systems cause current fluctuations and harmonics. These current differences create unwanted voltage fluctuations and therefore power oscillations on the grid. Power oscillations not only reduce the efficiency of the grid, but can cause voltage drops in the common coupling bus, and considerable frequency fluctuations throughout the entire system. To overcome this problem, supercapacitors can be implemented as an interface between the load and the grid to act as a buffer between the grid and the high pulse power drawn from the charging station. Low-power equipment power buffering Supercapacitors provide backup or emergency shutdown power to low-power equipment such as RAM, SRAM, micro-controllers and PC Cards. They are the sole power source for low-energy applications such as automated meter reading (AMR) equipment or for event notification in industrial electronics. Supercapacitors buffer power to and from rechargeable batteries, mitigating the effects of short power interruptions and high current peaks. Batteries kick in only during extended interruptions, e.g., if the mains power or a fuel cell fails, which lengthens battery life. Uninterruptible power supplies (UPS) may be powered by supercapacitors, which can replace much larger banks of electrolytic capacitors. 
This combination reduces the cost per cycle, saves on replacement and maintenance costs, enables the battery to be downsized and extends battery life. Supercapacitors provide backup power for actuators in wind turbine pitch systems, so that blade pitch can be adjusted even if the main supply fails. Voltage stabilization Supercapacitors can stabilize voltage fluctuations for powerlines by acting as dampers. Wind and photovoltaic systems exhibit fluctuating supply evoked by gusting or clouds that supercapacitors can buffer within milliseconds. Micro grids Micro grids are usually powered by clean and renewable energy. Most of this energy generation, however, is not constant throughout the day and does not usually match demand. Supercapacitors can be used for micro grid storage to instantaneously inject power when the demand is high and the production dips momentarily, and to store energy in the reverse conditions. They are useful in this scenario, because micro grids are increasingly producing power in DC, and capacitors can be utilized in both DC and AC applications. Supercapacitors work best in conjunction with chemical batteries. They provide an immediate voltage buffer to compensate for quick changing power loads due to their high charge and discharge rate through an active control system. Once the voltage is buffered, it is put through an inverter to supply AC power to the grid. Supercapacitors cannot provide frequency correction in this form directly in the AC grid. Energy harvesting Supercapacitors are suitable temporary energy storage devices for energy harvesting systems. In energy harvesting systems, the energy is collected from the ambient or renewable sources, e.g., mechanical movement, light or electromagnetic fields, and converted to electrical energy in an energy storage device. For example, it was demonstrated that energy collected from RF (radio frequency) fields (using an RF antenna as an appropriate rectifier circuit) can be stored to a printed supercapacitor. The harvested energy was then used to power an application-specific integrated circuit (ASIC) for over 10 hours. Batteries The UltraBattery is a hybrid rechargeable lead-acid battery and a supercapacitor. Its cell construction contains a standard lead-acid battery positive electrode, standard sulphuric acid electrolyte and a specially prepared negative carbon-based electrode that store electrical energy with double-layer capacitance. The presence of the supercapacitor electrode alters the chemistry of the battery and affords it significant protection from sulfation in high rate partial state of charge use, which is the typical failure mode of valve regulated lead-acid cells used this way. The resulting cell performs with characteristics beyond either a lead-acid cell or a supercapacitor, with charge and discharge rates, cycle life, efficiency and performance all enhanced. Medical Supercapacitors are used in defibrillators where they can deliver 500 joules to shock the heart back into sinus rhythm. Military Supercapacitors' low internal resistance supports applications that require short-term high currents. Among the earliest uses were motor startup (cold engine starts, particularly with diesels) for large engines in tanks and submarines. Supercapacitors buffer the battery, handling short current peaks, reducing cycling and extending battery life. 
Further military applications that require high specific power are phased array radar antennae, laser power supplies, military radio communications, avionics displays and instrumentation, backup power for airbag deployment and GPS-guided missiles and projectiles. Transport A primary challenge of all transport is reducing energy consumption and reducing emissions. Recovery of braking energy (recuperation or regenerative braking) helps with both. This requires components that can quickly store and release energy over long times with a high cycle rate. Supercapacitors fulfill these requirements and are therefore used in various applications in transportation. Aviation In 2005, aerospace systems and controls company Diehl Luftfahrt Elektronik GmbH chose supercapacitors to power emergency actuators for doors and evacuation slides used in airliners, including the Airbus 380. Cars The Toyota Yaris Hybrid-R concept car uses a supercapacitor to provide bursts of power. PSA Peugeot Citroën started using supercapacitors (circa 2014) as part of its stop-start fuel-saving system, which permits faster initial acceleration. Mazda's i-ELOOP system stores energy in a supercapacitor during deceleration and uses it to power on-board electrical systems while the engine is stopped by the stop-start system. Rail Supercapacitors can be used to supplement batteries in starter systems in diesel railroad locomotives with diesel–electric transmission. The capacitors capture the braking energy of a full stop and deliver the peak current for starting the diesel engine and acceleration of the train and ensures the stabilization of line voltage. Depending on the driving mode up to 30% energy saving is possible by recovery of braking energy. Low maintenance and environmentally friendly materials encouraged the choice of supercapacitors. Plant machinery Mobile hybrid Diesel–electric rubber tyred gantry cranes move and stack containers within a terminal. Lifting the boxes requires large amounts of energy. Some of the energy could be recaptured while lowering the load, resulting in improved efficiency. A triple hybrid forklift truck uses fuel cells and batteries as primary energy storage and supercapacitors to buffer power peaks by storing braking energy. They provide the fork lift with peak power over 30 kW. The triple-hybrid system offers over 50% energy savings compared with Diesel or fuel-cell systems. Supercapacitor-powered terminal tractors transport containers to warehouses. They provide an economical, quiet and pollution-free alternative to Diesel terminal tractors. Light rail Supercapacitors make it possible not only to reduce energy, but to replace overhead lines in historical city areas, so preserving the city's architectural heritage. This approach may allow many new light rail city lines to replace overhead wires that are too expensive to fully route. In 2003 Mannheim adopted a prototype light-rail vehicle (LRV) using the MITRAC Energy Saver system from Bombardier Transportation to store mechanical braking energy with a roof-mounted supercapacitor unit. It contains several units each made of 192 capacitors with 2700 F / 2.7 V interconnected in three parallel lines. This circuit results in a 518 V system with an energy content of 1.5 kWh. For acceleration when starting this "on-board-system" can provide the LRV with 600 kW and can drive the vehicle up to 1 km without overhead line supply, thus better integrating the LRV into the urban environment. 
Compared to conventional LRVs or Metro vehicles that return energy into the grid, onboard energy storage saves up to 30% and reduces peak grid demand by up to 50%. In 2009 supercapacitors enabled LRVs to operate in the historical city area of Heidelberg without overhead wires, thus preserving the city's architectural heritage. The SC equipment cost an additional €270,000 per vehicle, which was expected to be recovered over the first 15 years of operation. The supercapacitors are charged at stop-over stations when the vehicle is at a scheduled stop. In April 2011 German regional transport operator Rhein-Neckar, responsible for Heidelberg, ordered a further 11 units. In 2009, Alstom and RATP equipped a Citadis tram with an experimental energy recovery system called "STEEM". The system is fitted with 48 roof-mounted supercapacitors to store braking energy, which provides tramways with a high level of energy autonomy by enabling them to run without overhead power lines on parts of their route, recharging while traveling at powered stop-over stations. During the tests, which took place between the Porte d'Italie and Porte de Choisy stops on line T3 of the tramway network in Paris, the tramset used an average of approximately 16% less energy. In 2012 tram operator Geneva Public Transport began tests of an LRV equipped with a prototype roof-mounted supercapacitor unit to recover braking energy. Siemens is delivering supercapacitor-enhanced light-rail transport systems that include mobile storage. Hong Kong's South Island metro line is to be equipped with two 2 MW energy storage units that are expected to reduce energy consumption by 10%. In August 2012 the CSR Zhuzhou Electric Locomotive corporation of China presented a prototype two-car light metro train equipped with a roof-mounted supercapacitor unit. The train can travel up to 2 km without wires, recharging in 30 seconds at stations via a ground-mounted pickup. The supplier claimed the trains could be used in 100 small and medium-sized Chinese cities. Seven trams (street cars) powered by supercapacitors were scheduled to go into operation in 2014 in Guangzhou, China. The supercapacitors are recharged in 30 seconds by a device positioned between the rails. That powers the tram for up to 4 km. As of 2017, Zhuzhou's supercapacitor vehicles are also used on the new Nanjing streetcar system, and are undergoing trials in Wuhan. In 2012, in Lyon (France), the SYTRAL (Lyon public transportation administration) started experiments with a "wayside regeneration" system built by Adetel Group, which has developed its own energy saver named "NeoGreen" for LRV, LRT and metros. In 2014 China began using trams powered with supercapacitors that are recharged in 30 seconds by a device positioned between the rails, storing power to run the tram for up to 4 km, more than enough to reach the next stop, where the cycle can be repeated. In 2015, Alstom announced SRS, an energy storage system that charges supercapacitors on board a tram by means of ground-level conductor rails located at tram stops. This allows trams to operate without overhead lines for short distances. The system has been touted as an alternative to the company's ground-level power supply (APS) system, or can be used in conjunction with it, as in the case of the VLT network in Rio de Janeiro, Brazil, which opened in 2016. CAF also offers supercapacitors on their Urbos 3 trams in the form of their ACR system. 
Buses Maxwell Technologies, an American supercapacitor maker, claimed that more than 20,000 hybrid buses use the devices to increase acceleration, particularly in China. The first hybrid electric bus with supercapacitors in Europe came in 2001 in Nuremberg, Germany. It was MAN's so-called "Ultracapbus", and was tested in real operation in 2001/2002. The test vehicle was equipped with a diesel-electric drive in combination with supercapacitors. The system was supplied with 8 Ultracap modules of 80 V, each containing 36 components. The system worked with 640 V and could be charged/discharged at 400 A. Its energy content was 0.4 kWh with a weight of 400 kg. The supercapacitors recaptured braking energy and delivered starting energy. Fuel consumption was reduced by 10 to 15% compared to conventional diesel vehicles. Other advantages included reduction of emissions, quiet and emissions-free engine starts, lower vibration and reduced maintenance costs. in Luzern, Switzerland an electric bus fleet called TOHYCO-Rider was tested. The supercapacitors could be recharged via an inductive contactless high-speed power charger after every transportation cycle, within 3 to 4 minutes. In early 2005 Shanghai tested a new form of electric bus called capabus that runs without powerlines (catenary free operation) using large onboard supercapacitors that partially recharge whenever the bus is at a stop (under so-called electric umbrellas), and fully charge in the terminus. In 2006, two commercial bus routes began to use the capabuses; one of them is route 11 in Shanghai. It was estimated that the supercapacitor bus was cheaper than a lithium-ion battery bus, and one of its buses had one-tenth the energy cost of a diesel bus with lifetime fuel savings of $200,000. A hybrid electric bus called tribrid was unveiled in 2008 by the University of Glamorgan, Wales, for use as student transport. It is powered by hydrogen fuel or solar cells, batteries and ultracapacitors. Motor racing The FIA, a governing body for motor racing events, proposed in the Power-Train Regulation Framework for Formula 1 version 1.3 of 23 May 2007 that a new set of power train regulations be issued that includes a hybrid drive of up to 200 kW input and output power using "superbatteries" made with batteries and supercapacitors connected in parallel (KERS). About 20% tank-to-wheel efficiency could be reached using the KERS system. The Toyota TS030 Hybrid LMP1 car, a racing car developed under Le Mans Prototype rules, uses a hybrid drivetrain with supercapacitors. In the 2012 24 Hours of Le Mans race a TS030 qualified with a fastest lap only 1.055 seconds slower (3:24.842 versus 3:23.787) than the fastest car, an Audi R18 e-tron quattro with flywheel energy storage. The supercapacitor and flywheel components, whose rapid charge-discharge capabilities help in both braking and acceleration, made the Audi and Toyota hybrids the fastest cars in the race. In the 2012 Le Mans race the two competing TS030s, one of which was in the lead for part of the race, both retired for reasons unrelated to the supercapacitors. The TS030 won three of the 8 races in the 2012 FIA World Endurance Championship season. In 2014 the Toyota TS040 Hybrid used a supercapacitor to add 480 horsepower from two electric motors. Hybrid electric vehicles Supercapacitor/battery combinations in electric vehicles (EV) and hybrid electric vehicles (HEV) are well investigated. A 20 to 60% fuel reduction has been claimed by recovering brake energy in EVs or HEVs. 
The ability of supercapacitors to charge much faster than batteries, their stable electrical properties, broader temperature range and longer lifetime are suitable, but weight, volume and especially cost mitigate those advantages. Supercapacitors' lower specific energy makes them unsuitable for use as a stand-alone energy source for long-distance driving. The fuel economy improvement between a capacitor and a battery solution is about 20% and is available only for shorter trips. For long-distance driving the advantage decreases to 6%. Vehicles combining capacitors and batteries run only as experimental vehicles. All automotive manufacturers of EVs or HEVs have developed prototypes that use supercapacitors instead of batteries to store braking energy in order to improve driveline efficiency. The Mazda 6 is the only production car that uses supercapacitors to recover braking energy. Branded as i-eloop, the regenerative braking is claimed to reduce fuel consumption by about 10%. The Russian Yo-cars Ё-mobile series was a concept and crossover hybrid vehicle working with a gasoline-driven rotary vane engine and an electric generator for driving the traction motors. A supercapacitor with relatively low capacitance recovers brake energy to power the electric motor when accelerating from a stop. Toyota's Yaris Hybrid-R concept car uses a supercapacitor to provide quick bursts of power. PSA Peugeot Citroën fit supercapacitors to some of its cars as part of its stop-start fuel-saving system, as this permits faster start-ups when the traffic lights turn green. Gondolas In Zell am See, Austria, an aerial lift connects the city with Schmittenhöhe mountain. The gondolas sometimes run 24 hours per day, using electricity for lights, door opening and communication. The only available time for recharging batteries at the stations is during the brief intervals of guest loading and unloading, which is too short to recharge batteries. Supercapacitors offer a fast charge, a higher number of cycles and a longer lifetime than batteries. Emirates Air Line (cable car), also known as the Thames cable car, is a 1-kilometre (0.62 mi) gondola line in London, UK, that crosses the Thames from the Greenwich Peninsula to the Royal Docks. The cabins are equipped with a modern infotainment system, which is powered by supercapacitors. Developments Commercially available lithium-ion supercapacitors have offered the highest gravimetric specific energy to date, reaching 15 Wh/kg. Research focuses on improving specific energy, reducing internal resistance, expanding temperature range, increasing lifetimes and reducing costs. Projects include tailored-pore-size electrodes, pseudocapacitive coatings or doping materials and improved electrolytes. Research into electrode materials requires measurement of individual components, such as an electrode or half-cell. By using a counterelectrode that does not affect the measurements, the characteristics of only the electrode of interest can be revealed. The specific energy and power of complete supercapacitors only reach roughly one third of the values of the electrodes alone. Market Worldwide sales of supercapacitors are about US$400 million. The market for batteries (estimated by Frost & Sullivan) grew from US$47.5 billion (76.4% or US$36.3 billion of which was rechargeable batteries) to US$95 billion. The market for supercapacitors is still a small niche market that is not keeping pace with its larger rival. 
In 2016, IDTechEx forecast sales to grow from $240 million to $2 billion by 2026, an annual increase of about 24%. Supercapacitor costs in 2006 were US$0.01 per farad or US$2.85 per kilojoule, moving in 2008 below US$0.01 per farad, and were expected to drop further in the medium term.
Supercapacitor
[ "Physics" ]
19,995
[ "Capacitance", "Capacitors", "Physical quantities" ]
33,308,777
https://en.wikipedia.org/wiki/Good%E2%80%93deal%20bounds
Good–deal bounds are price bounds for a financial portfolio which depend on an individual trader's preferences. Mathematically, if is a set of portfolios with future outcomes which are "acceptable" to the trader, then define the function by where is the set of final values for self-financing trading strategies. Then any price in the range does not provide a good deal for this trader, and this range is called the "no good-deal price bounds." If then the good-deal price bounds are the no-arbitrage price bounds, and correspond to the subhedging and superhedging prices. The no-arbitrage bounds are the greatest extremes that good-deal bounds can take. If where is a utility function, then the good-deal price bounds correspond to the indifference price bounds.
Good–deal bounds
[ "Mathematics" ]
170
[ "Applied mathematics", "Mathematical finance" ]
37,364,287
https://en.wikipedia.org/wiki/Underground%20hospital
An underground hospital is a hospital that is constructed underground to protect patients and staff from attack during war. They were often used during World War II, but very few now remain operational. History Medieval Ceppo Hospital of Pistoia in Italy The Ceppo Hospital of Pistoia was founded in 1277 in a labyrinth of tunnels under the city and is one of the oldest continuously operating hospitals in the world. World War I Carrière Suzanne in France "Carrière Suzanne" was an underground hospital built during the First World War in a limestone quarry, the "Carrières de Montigny", north of Compiègne. Carrière Wellington in France A hospital was built inside tunnels under Arras, named Carrière Wellington, with facilities for 700 beds. World War II Hohlgangsanlage 8 in Jersey Hohlgangsanlage 8 was an artillery storage tunnel built by Organisation Todt workers for the Germans during World War II in St. Lawrence, Jersey, which was converted to a hospital to deal with casualties after the Normandy landings on 6 June 1944. The tunnel complex is open to the public during the summer months. Hohlgangsanlage 7/40 in Guernsey Hohlgangsanlage 7/40 (Ho.7/40), two interconnected cave passage installations of 7,000 m², was built in 1942-43 by German fortress engineers and Organisation Todt workers to store vehicles, ammunition, food, fuel and equipment. Part of Ho. 7/40 was equipped and used for a short while in 1944 as a hospital, as the planned hospital tunnel had not been built; however, patients underground did not recuperate very well. The tunnel complex is open to the public during the summer months. Mtarfa Hospital in Malta During the Second World War, the Mtarfa Hospital was reorganized as the 90th General Hospital and expanded to accommodate a maximum of 1,200 beds. An underground hospital was excavated under the military hospital. Current Israel Israel currently has at least three hospitals with dedicated underground facilities. Sourasky Medical Center Tel Aviv Sourasky Medical Center is the main hospital serving Tel Aviv, Israel. It is the third-largest hospital complex in the country. In 2011, a 700-1,000 bed bombproof emergency facility was opened. The building, with 13 stories above ground and four stories underground, provides protection against conventional, chemical and biological attack. Construction began in 2008. The cost of the building was $110 million, with a donation of $45 million from Israeli billionaire Sammy Ofer. The architect was Arad Sharon, grandson of Arieh Sharon, who designed the original facility. Rambam Hospital Rambam Health Care Campus, the largest medical center in northern Israel and the fifth largest in Israel, began work in October 2010 on a protected underground emergency hospital designed to withstand conventional, chemical, and biological attacks. The project included a three-floor parking lot that could be transformed at short notice into a 2,000-bed hospital. The hospital can generate its own power and store enough oxygen, drinking water and medical supplies for up to three days. Beilinson Hospital The 90 million shekel fortified emergency room at Beilinson Hospital in Petach Tikvah has gone operational, becoming Israel's largest ER. The 5,000 square meter (about 54,000 square feet) facility is capable of treating 200,000 patients annually. There is also a trauma center capable of addressing numerous patients simultaneously.
Sweden Södersjukhuset The hospital Södersjukhuset in Stockholm has an underground complex measuring 4,700 square meters (50,600 square feet) called DEMC (Disaster Emergency Center), which was completed and inaugurated on 25 November 1994. In peacetime the complex is used for training and scientific research. In case of disaster or war the complex is fully operational as a normal hospital; it has 270 beds in peacetime and 160 in wartime. Syria Doctors and international NGOs have created an elaborate network of underground hospitals throughout Syria. They have installed cameras in intensive-care units so that doctors abroad can monitor patients by Skype and direct technicians to administer proper treatment. Aleppo In 2016, because of the number of hospitals that had been damaged or destroyed in the city, hospitals moved underground. Ghouta The 2019 Syrian-Danish documentary film The Cave is about a makeshift underground hospital nicknamed "the Cave" in Eastern Ghouta. References Military hospitals Hospitals Bunkers Types of hospitals Underground construction
Underground hospital
[ "Engineering" ]
897
[ "Underground construction", "Civil engineering", "Construction" ]
37,369,491
https://en.wikipedia.org/wiki/List%20of%20DNA%20nanotechnology%20research%20groups
This list of DNA nanotechnology research groups gives a partial overview of academic research organisations in the field of DNA nanotechnology, sorted geographically. Any sufficiently notable research group (which in general can be considered as any group having published in well regarded, high impact factor journals) should be listed here, along with a brief description of their research. North America Asia Europe References DNA nanotechnology DNA Research groups
List of DNA nanotechnology research groups
[ "Materials_science" ]
83
[ "Nanotechnology", "DNA nanotechnology" ]
37,376,713
https://en.wikipedia.org/wiki/Heat%20transfer%20physics
Heat transfer physics describes the kinetics of energy storage, transport, and energy transformation by the principal energy carriers: phonons (lattice vibration waves), electrons, fluid particles, and photons. Heat is thermal energy stored in the temperature-dependent motion of particles including electrons, atomic nuclei, individual atoms, and molecules. Heat is transferred to and from matter by the principal energy carriers. The state of energy stored within matter, or transported by the carriers, is described by a combination of classical and quantum statistical mechanics. Energy is also transformed (converted) among the various carriers. The heat transfer processes (or kinetics) are governed by the rates at which various related physical phenomena occur, such as (for example) the rate of particle collisions in classical mechanics. These various states and kinetics determine the heat transfer, i.e., the net rate of energy storage or transport. Governing these processes from the atomic level (atom or molecule length scale) to the macroscale are the laws of thermodynamics, including conservation of energy. Introduction Heat is thermal energy associated with the temperature-dependent motion of particles. The macroscopic energy equation for an infinitesimal volume used in heat transfer analysis balances the divergence of the heat flux vector against the temporal change of internal energy (involving the density, the specific heat capacity at constant pressure, the temperature and time) and the energy conversion to and from thermal energy among the principal energy carriers. So, the terms represent energy transport, storage and transformation. The heat flux vector is composed of three macroscopic fundamental modes: conduction (characterized by the thermal conductivity), convection (characterized by the velocity), and radiation (characterized by the angular frequency, polar angle, spectral directional radiation intensity, and unit direction vector). Once the states and kinetics of the energy conversion and the thermophysical properties are known, the fate of heat transfer is described by the above equation. These atomic-level mechanisms and kinetics are addressed in heat transfer physics. The microscopic thermal energy is stored, transported, and transformed by the principal energy carriers: phonons (p), electrons (e), fluid particles (f), and photons (ph). Length and time scales Thermophysical properties of matter and the kinetics of interaction and energy exchange among the principal carriers are based on the atomic-level configuration and interaction. Transport properties such as thermal conductivity are calculated from these atomic-level properties using classical and quantum physics. Quantum states of the principal carriers (e.g., momentum, energy) are derived from the Schrödinger equation (called first principles or ab initio) and the interaction rates (for kinetics) are calculated using the quantum states and quantum perturbation theory (formulated as the Fermi golden rule). A variety of ab initio (Latin for "from the beginning") solvers (software) exist (e.g., ABINIT, CASTEP, Gaussian, Q-Chem, Quantum ESPRESSO, SIESTA, VASP, WIEN2k). Electrons in the inner shells (core) are not involved in heat transfer, and calculations are greatly reduced by proper approximations for the inner-shell electrons. The quantum treatments, including equilibrium and nonequilibrium ab initio molecular dynamics (MD), involving larger lengths and times are limited by the computational resources, so various alternative treatments with simplifying assumptions have been used to obtain the required states and kinetics.
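A sketch of the macroscopic energy balance and flux decomposition described above, reconstructed from the listed definitions (the symbols are assumed, standard choices rather than the article's original typesetting):

$$\nabla \cdot \mathbf{q} = -\frac{\partial}{\partial t}\left(\rho c_p T\right) + \sum_{i,j} \dot{s}_{i\text{-}j}, \qquad \mathbf{q} = \mathbf{q}_k + \mathbf{q}_u + \mathbf{q}_r,$$

$$\mathbf{q}_k = -k\,\nabla T \ \text{(conduction, Fourier law)}, \qquad \mathbf{q}_u = \rho c_p\, \mathbf{u}\, T \ \text{(convection)},$$

with the radiative flux $\mathbf{q}_r$ obtained by integrating the spectral, directional intensity $I_{ph,\omega}(\theta)$, weighted by the unit direction vector $\mathbf{s}$, over solid angle and angular frequency $\omega$.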
In classical (Newtonian) MD, the motion of an atom or molecule (particle) is based on empirical or effective interaction potentials, which in turn can be based on curve-fits of ab initio calculations or curve-fits to thermophysical properties. From the ensembles of simulated particles, static or dynamic thermal properties or scattering rates are derived. At yet larger length scales (mesoscale, involving many mean free paths), the Boltzmann transport equation (BTE), which is based on classical Hamiltonian statistical mechanics, is applied. The BTE considers particle states in terms of position and momentum vectors (x, p), and this is represented as the state occupation probability. The occupation has equilibrium distributions (the known boson, fermion, and Maxwell–Boltzmann particles), and transport of energy (heat) is due to nonequilibrium (caused by a driving force or potential). Central to the transport is the role of scattering, which turns the distribution toward equilibrium. The scattering is represented by the relaxation time or the mean free path. The relaxation time (or its inverse, which is the interaction rate) is found from other calculations (ab initio or MD) or empirically. The BTE can be solved numerically with the Monte Carlo method, etc. Depending on the length and time scale, the proper level of treatment (ab initio, MD, or BTE) is selected. Heat transfer physics analyses may involve multiple scales (e.g., BTE using interaction rates from ab initio or classical MD) with states and kinetics related to thermal energy storage, transport and transformation. So, heat transfer physics covers the four principal energy carriers and their kinetics from classical and quantum mechanical perspectives. This enables multiscale (ab initio, MD, BTE and macroscale) analyses, including low-dimensionality and size effects. Phonon A phonon (quantized lattice vibration wave) is a central thermal energy carrier contributing to heat capacity (sensible heat storage) and conductive heat transfer in the condensed phase, and it plays a very important role in thermal energy conversion. Its transport properties are represented by the phonon conductivity tensor Kp (W/m-K, from the Fourier law qk,p = -Kp⋅∇T) for bulk materials, and the phonon boundary resistance ARp,b [K/(W/m2)] for solid interfaces, where A is the interface area. The phonon specific heat capacity cv,p (J/kg-K) includes the quantum effect. The thermal energy conversion rate involving phonons is included in the energy conversion term. Heat transfer physics describes and predicts cv,p, Kp, Rp,b (or the conductance Gp,b) and this conversion rate, based on atomic-level properties. For an equilibrium potential ⟨φ⟩o of a system of N atoms, the total potential ⟨φ⟩ is found by a Taylor series expansion at the equilibrium, and this can be approximated by the second derivatives (the harmonic approximation) in terms of the displacement vector di of atom i and the spring (or force) constant Γ given by the second-order derivatives of the potential. The equation of motion for the lattice vibration in terms of the displacement of atoms [d(jl,t): displacement vector of the j-th atom in the l-th unit cell at time t] involves the atomic mass m and the force constant tensor Γ. The atomic displacement is a summation over the normal modes [sα: unit vector of mode α, ωp: angular frequency of the wave, and κp: wave vector]. Using this plane-wave displacement, the equation of motion becomes an eigenvalue equation in which M is the diagonal mass matrix and D is the harmonic dynamical matrix (these relations are sketched below).
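A minimal sketch of the harmonic relations just outlined, in standard form and using the notation defined above (a reconstruction rather than the article's original equations):

$$\langle\varphi\rangle \simeq \langle\varphi\rangle_o + \frac{1}{2}\sum_{i,j} \mathbf{d}_i \cdot \boldsymbol{\Gamma}_{ij} \cdot \mathbf{d}_j, \qquad m\,\frac{d^2 \mathbf{d}(jl,t)}{dt^2} = -\sum_{j'l'} \boldsymbol{\Gamma}(jl,j'l') \cdot \mathbf{d}(j'l',t),$$

and, with the plane-wave form $\mathbf{d} \propto \mathbf{s}_\alpha \exp[i(\boldsymbol{\kappa}_p \cdot \mathbf{x} - \omega_p t)]$, the eigenvalue problem

$$\omega_p^2\, \mathbf{M}\, \mathbf{s}_\alpha = \mathbf{D}(\boldsymbol{\kappa}_p)\, \mathbf{s}_\alpha .$$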
Solving this eigenvalue equation gives the relation between the angular frequency ωp and the wave vector κp, and this relation is called the phonon dispersion relation. Thus, the phonon dispersion relation is determined by the matrices M and D, which depend on the atomic structure and the strength of interaction among constituent atoms (the stronger the interaction and the lighter the atoms, the higher is the phonon frequency and the larger is the slope dωp/dκp). The Hamiltonian of the phonon system within the harmonic approximation is written in terms of the dynamical matrix element Dij between atoms i and j, the displacements di (dj) of atoms i (j), and the momentum p. From this and the solution of the dispersion relation, the phonon annihilation operator for the quantum treatment is defined, where N is the number of normal modes divided by α and ħ is the reduced Planck constant. The creation operator is the adjoint of the annihilation operator. The Hamiltonian in terms of bκ,α† and bκ,α is Hp = Σκ,αħωp,α[bκ,α†bκ,α + 1/2], and bκ,α†bκ,α is the phonon number operator. The energy of the quantum harmonic oscillator is Ep = Σκ,α [fp(κ,α) + 1/2]ħωp,α(κp), and thus the quantum of phonon energy is ħωp. The phonon dispersion relation gives all possible phonon modes within the Brillouin zone (the zone within the primitive cell in reciprocal space), and the phonon density of states Dp (the number density of possible phonon modes). The phonon group velocity up,g is the slope of the dispersion curve, dωp/dκp. Since the phonon is a boson particle, its occupancy follows the Bose–Einstein distribution {fpo = [exp(ħωp/kBT)-1]−1, kB: Boltzmann constant}. Using the phonon density of states and this occupancy distribution, the phonon energy is Ep(T) = ∫Dp(ωp)fp(ωp,T)ħωpdωp, and the phonon density is np(T) = ∫Dp(ωp)fp(ωp,T)dωp. The phonon heat capacity cv,p (in a solid cv,p = cp,p; cv,p: constant-volume heat capacity, cp,p: constant-pressure heat capacity) is the temperature derivative of the phonon energy; for the Debye model (linear dispersion model) it is expressed in terms of the Debye temperature TD, the atomic mass m, and the atomic number density n (the number density of phonon modes for the crystal is 3n). This gives the Debye T3 law at low temperature and the Dulong–Petit law at high temperatures. From the kinetic theory of gases, the thermal conductivity of principal carrier i (p, e, f and ph) is determined by the carrier density ni, the heat capacity per carrier, the carrier speed ui and the mean free path λi (the distance traveled by a carrier before a scattering event). Thus, the larger the carrier density, heat capacity and speed, and the less significant the scattering, the higher is the conductivity. For phonons, λp represents the interaction (scattering) kinetics of phonons and is related to the scattering relaxation time τp or rate (= 1/τp) through λp = upτp. Phonons interact with other phonons, and with electrons, boundaries, impurities, etc., and λp combines these interaction mechanisms through the Matthiessen rule. At low temperatures, scattering by boundaries is dominant, and with increasing temperature the interaction rates with impurities, electrons and other phonons become important; finally, phonon-phonon scattering dominates for T > 0.2TD. The interaction rates are reviewed in the literature and include quantum perturbation theory and MD. A number of conductivity models are available with approximations regarding the dispersion and λp (the generic kinetic-theory form and the Debye heat capacity that enter them are sketched below).
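The two quantities referenced above have standard forms; a sketch using the variable names already defined (reconstructed, so the prefactors should be checked against the original source):

$$c_{v,p} = \frac{9 k_B}{m}\left(\frac{T}{T_D}\right)^{3} \int_0^{T_D/T} \frac{x^4 e^{x}}{(e^{x}-1)^2}\, dx \quad \text{(Debye model)},$$

$$k_i = \frac{1}{3}\, n_i\, c_{v,i}\, u_i\, \lambda_i, \qquad \lambda_p = u_p \tau_p, \qquad \frac{1}{\tau_p} = \sum_j \frac{1}{\tau_{p,j}} \quad \text{(Matthiessen rule)}.$$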
Using the single-mode relaxation time approximation (∂fp′/∂t|s = −fp′/τp) and the gas kinetic theory, the Callaway phonon (lattice) conductivity model is obtained. With the Debye model (a single group velocity up,g and the specific heat capacity calculated above), this reduces to a form in which a is the lattice constant, a = n−1/3 for a cubic lattice, and n is the atomic number density. The Slack phonon conductivity model, mainly considering acoustic phonon scattering (three-phonon interactions), is written in terms of the mean atomic weight of the atoms in the primitive cell, Va = 1/n (the average volume per atom), TD,∞ (the high-temperature Debye temperature), T (the temperature), No (the number of atoms in the primitive cell), and ⟨γ2G⟩ (the mode-averaged square of the Grüneisen constant, or parameter, at high temperatures). This model has been widely tested with pure nonmetallic crystals, and the overall agreement is good, even for complex crystals. Based on kinetics and atomic-structure considerations, a material with high crystallinity and strong interatomic interactions, composed of light atoms (such as diamond and graphene), is expected to have a large phonon conductivity. Solids with more than one atom in the smallest unit cell representing the lattice have two types of phonons, i.e., acoustic and optical. (Acoustic phonons are in-phase movements of atoms about their equilibrium positions, while optical phonons are out-of-phase movements of adjacent atoms in the lattice.) Optical phonons have higher energies (frequencies), but make a smaller contribution to conduction heat transfer because of their smaller group velocity and occupancy. Phonon transport across hetero-structure boundaries (represented by Rp,b, the phonon boundary resistance), according to the boundary scattering approximations, is modeled with the acoustic and diffuse mismatch models. Larger phonon transmission (small Rp,b) occurs at boundaries where the material pairs have similar phonon properties (up, Dp, etc.); in contrast, large Rp,b occurs when one material is softer (lower cut-off phonon frequency) than the other. Electron Quantum electron energy states are found using the electron quantum Hamiltonian, which is generally composed of kinetic (-ħ2∇2/2me) and potential energy (φe) terms. The atomic orbital, a mathematical function describing the wave-like behavior of either an electron or a pair of electrons in an atom, can be found from the Schrödinger equation with this electron Hamiltonian. Hydrogen-like atoms (a nucleus and an electron) allow a closed-form solution of the Schrödinger equation with the electrostatic potential (the Coulomb law). The Schrödinger equation of atoms or atomic ions with more than one electron has not been solved analytically, because of the Coulomb interactions among the electrons. Thus, numerical techniques are used, and an electron configuration is approximated as a product of simpler hydrogen-like atomic orbitals (isolated electron orbitals). Molecules with multiple atoms (nuclei and their electrons) have molecular orbitals (MO, a mathematical function for the wave-like behavior of an electron in a molecule), which are obtained from simplified solution techniques such as the linear combination of atomic orbitals (LCAO). The molecular orbital is used to predict chemical and physical properties, and the difference between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) is a measure of the excitability of the molecule.
In the crystal structure of metallic solids, the free electron model (zero potential, φe = 0) is used for the behavior of valence electrons. However, in a periodic lattice (crystal) there is a periodic crystal potential, so the electron Hamiltonian is modified accordingly (me: electron mass), with the periodic potential expressed as φc(x) = Σg φgexp[i(g∙x)] (g: reciprocal lattice vector). The time-independent Schrödinger equation with this Hamiltonian takes the form of an eigenvalue equation, where the eigenfunction ψe,κ is the electron wave function and the eigenvalue Ee(κe) is the electron energy (κe: electron wavevector). The relation between the wavevector κe and the energy Ee provides the electronic band structure. In practice, a lattice as a many-body system includes interactions between electrons and nuclei in the potential, but this calculation can be too intricate. Thus, many approximate techniques have been suggested; one of them, density functional theory (DFT), uses functionals of the spatially dependent electron density instead of the full interactions. DFT is widely used in ab initio software (ABINIT, CASTEP, Quantum ESPRESSO, SIESTA, VASP, WIEN2k, etc.). The electron specific heat is based on the energy states and the occupancy distribution (the Fermi–Dirac statistics). In general, the heat capacity of electrons is small except at very high temperature when they are in thermal equilibrium with phonons (the lattice). Electrons contribute to heat conduction (in addition to charge carrying) in solids, especially in metals. The thermal conductivity tensor in a solid is the sum of the electronic and phonon thermal conductivity tensors, K = Ke + Kp. Electrons are affected by two thermodynamic forces [from the charge, ∇(EF/ec), where EF is the Fermi level and ec is the electron charge, and from the temperature gradient, ∇(1/T)] because they carry both charge and thermal energy, and thus the electric current je and the heat flow q are described with the thermoelectric tensors (Aee, Aet, Ate, and Att) from the Onsager reciprocal relations (a schematic form is sketched below). Converting these equations to express je in terms of the electric field ee and ∇T, and q in terms of je and ∇T (using scalar coefficients αee, αet, αte, and αtt for isotropic transport instead of Aee, Aet, Ate, and Att), the electrical conductivity/resistivity σe (Ω−1m−1)/ρe (Ω-m), the electronic thermal conductivity ke (W/m-K) and the Seebeck/Peltier coefficients αS (V/K)/αP (V) are then defined. Various carriers (electrons, magnons, phonons, and polarons) and their interactions substantially affect the Seebeck coefficient. The Seebeck coefficient can be decomposed into two contributions, αS = αS,pres + αS,trans, where αS,pres is the sum of the contributions to the carrier-induced entropy change, i.e., αS,pres = αS,mix + αS,spin + αS,vib (αS,mix: entropy of mixing, αS,spin: spin entropy, and αS,vib: vibrational entropy). The other contribution, αS,trans, is the net energy transferred in moving a carrier divided by qT (q: carrier charge). The electron contributions to the Seebeck coefficient are mostly in αS,pres. The αS,mix is usually dominant in lightly doped semiconductors. The change of the entropy of mixing upon adding an electron to a system is given by the so-called Heikes formula, in which feo = N/Na is the ratio of electrons to sites (the carrier concentration). Using the chemical potential (μ), the thermal energy (kBT) and the Fermi function, the above can be expressed in the alternative form αS,mix = (kB/q)[(Ee − μ)/(kBT)]. Extending the Seebeck effect to spins, a ferromagnetic alloy can be a good example.
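Before detailing the individual entropy contributions, the coupled Onsager transport relations referred to above can be sketched schematically (a reconstruction using the notation already defined, not the article's original equations):

$$\mathbf{j}_e = \mathbf{A}_{ee} \cdot \nabla\!\left(\frac{E_F}{e_c}\right) + \mathbf{A}_{et} \cdot \nabla\!\left(\frac{1}{T}\right), \qquad \mathbf{q} = \mathbf{A}_{te} \cdot \nabla\!\left(\frac{E_F}{e_c}\right) + \mathbf{A}_{tt} \cdot \nabla\!\left(\frac{1}{T}\right),$$

from which $\sigma_e$, $k_e$, $\alpha_S$ and $\alpha_P$ follow once the equations are rearranged in terms of $\mathbf{e}_e$, $\nabla T$ and $\mathbf{j}_e$ as described above.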
The contribution to the Seebeck coefficient that results from the electrons' presence altering the system's spin entropy is given by αS,spin = ΔSspin/q = (kB/q)ln[(2s + 1)/(2s0 + 1)], where s0 and s are the net spins of the magnetic site in the absence and presence of the carrier, respectively. Many vibrational effects involving electrons also contribute to the Seebeck coefficient. One example is the softening of the vibrational frequencies, which produces a change of the vibrational entropy. The vibrational entropy is the negative temperature derivative of the free energy and involves the phonon density of states Dp(ω) for the structure. For the high-temperature limit and series expansions of the hyperbolic functions, the above is simplified as αS,vib = (ΔSvib/q) = (kB/q)Σi(-Δωi/ωi). The Seebeck coefficient derived in the above Onsager formulation is the mixing component αS,mix, which dominates in most semiconductors. The vibrational component in high-band-gap materials such as B13C2 is very important. Considering the microscopic transport (transport is a result of nonequilibrium), the relevant quantities are the electron velocity vector ue, the electron nonequilibrium (equilibrium) distribution fe (feo), the electron scattering time τe, the electron energy Ee, and the electric and thermal forces Fte from ∇(EF/ec) and ∇(1/T). Relating the thermoelectric coefficients to the microscopic transport equations for je and q, the thermal, electric, and thermoelectric properties are calculated. Thus, ke increases with the electrical conductivity σe and temperature T, as the Wiedemann–Franz law states [ke/(σeTe) = (1/3)(πkB/ec)2 = 2.44 × 10−8 W-Ω/K2]. Electron transport (represented by σe) is a function of the carrier density ne,c and the electron mobility μe (σe = ecne,cμe). μe is determined by the electron scattering rates (or relaxation time τe) in various interaction mechanisms including interactions with other electrons, phonons, impurities and boundaries. Electrons interact with other principal energy carriers. Electrons accelerated by an electric field are relaxed through the energy conversion to phonons (in semiconductors, mostly optical phonons), which is called Joule heating. Energy conversion between electric potential and phonon energy is considered in thermoelectrics such as Peltier cooling and the thermoelectric generator. Also, the study of interaction with photons is central in optoelectronic applications (e.g., light-emitting diodes, solar photovoltaic cells, etc.). Interaction rates or energy conversion rates can be evaluated using the Fermi golden rule (from perturbation theory) with the ab initio approach. Fluid particle A fluid particle is the smallest unit (an atom or molecule) in the fluid phase (gas, liquid or plasma) considered without breaking any chemical bond. The energy of a fluid particle is divided into potential, electronic, translational, vibrational, and rotational energies. The heat (thermal) energy storage in a fluid particle is through the temperature-dependent particle motion (translational, vibrational, and rotational energies). The electronic energy is included only if the temperature is high enough to ionize or dissociate the fluid particles or to include other electronic transitions. These quantum energy states of the fluid particles are found using their respective quantum Hamiltonians. These are Hf,t = -(ħ2/2m)∇2, Hf,v = -(ħ2/2m)∇2 + Γx2/2 and Hf,r = -(ħ2/2If)∇2 for the translational, vibrational and rotational modes (Γ: spring constant, If: the moment of inertia of the molecule).
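From these Hamiltonians, the corresponding quantized energy levels take the familiar particle-in-a-box, harmonic-oscillator and rigid-rotor forms (a standard-textbook sketch; the cubic box of side L and the quantum-number triple (nx, ny, nz) standing in for the translational quantum number n are introduced here as assumptions):

$$E_{f,t} = \frac{\hbar^2 \pi^2}{2 m L^2}\left(n_x^2 + n_y^2 + n_z^2\right), \qquad E_{f,v} = \hbar\,\omega_{f,v}\left(l + \tfrac{1}{2}\right), \qquad E_{f,r} = \frac{\hbar^2}{2 I_f}\, j\,(j+1),$$

with rotational degeneracy $g_{f,r} = 2j + 1$; these levels feed directly into the partition functions discussed next.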
From the Hamiltonian, the quantized fluid particle energy states Ef and partition functions Zf [with the Maxwell–Boltzmann (MB) occupancy distribution] are found for the translational, vibrational, rotational, and total contributions. Here, gf is the degeneracy, n, l, and j are the translational, vibrational and rotational quantum numbers, Tf,v is the characteristic temperature for vibration (= ħωf,v/kB, ωf,v: vibration frequency), and Tf,r is the rotational temperature [= ħ2/(2IfkB)]. The average specific internal energy is related to the partition function through Zf. With the energy states and the partition function, the fluid particle specific heat capacity cv,f is the summation of the contributions from the various kinetic energies (for a non-ideal gas the potential energy is also added). Because the total number of degrees of freedom in a molecule is determined by its atomic configuration, cv,f has different formulas depending on the configuration (monatomic, diatomic, and nonlinear polyatomic ideal gases), where Rg is the gas constant (= NAkB, NA: the Avogadro constant) and M is the molecular mass (kg/kmol). (For the polyatomic ideal gas, No is the number of atoms in a molecule.) In a gas, the constant-pressure specific heat capacity cp,f has a larger value, and the difference depends on the temperature T, the volumetric thermal expansion coefficient β and the isothermal compressibility κ [cp,f - cv,f = Tβ2/(ρfκ), ρf: the fluid density]. For dense fluids, the interactions between the particles (the van der Waals interaction) should be included, and cv,f and cp,f change accordingly. The net motion of particles (under gravity or external pressure) gives rise to the convection heat flux qu = ρfcp,fufT. The conduction heat flux qk for an ideal gas is derived with the gas kinetic theory or the Boltzmann transport equations, and the resulting thermal conductivity involves the RMS (root mean square) thermal velocity ⟨uf2⟩1/2 [(3kBT/m)1/2 from the MB distribution function, m: atomic mass] and the relaxation time (or intercollision time period) τf-f [(21/2π d2nf ⟨uf⟩)−1 from the gas kinetic theory, ⟨uf⟩: average thermal speed (8kBT/πm)1/2, d: the collision diameter of the fluid particle (atom or molecule), nf: fluid number density]. kf is also calculated using molecular dynamics (MD), which simulates the physical movements of the fluid particles with the Newton equations of motion (classical) and a force field (from ab initio calculations or empirical properties). For the calculation of kf, the equilibrium MD with the Green–Kubo relations, which express the transport coefficients in terms of integrals of time correlation functions (considering fluctuation), or nonequilibrium MD (prescribing a heat flux or temperature difference in the simulated system) are generally employed. Fluid particles can interact with other principal particles. Vibrational or rotational modes, which have relatively high energy, are excited or decay through the interaction with photons. Gas lasers employ the interaction kinetics between fluid particles and photons, and laser cooling has also been considered in the CO2 gas laser. Also, fluid particles can be adsorbed on solid surfaces (physisorption and chemisorption), and the frustrated vibrational modes in adsorbates (fluid particles) decay by creating e−-h+ pairs or phonons. These interaction rates are also calculated through ab initio calculations on the fluid particle and the Fermi golden rule. Photon A photon is the quantum of electromagnetic (EM) radiation and the energy carrier for radiation heat transfer.
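As a numerical illustration of the dilute-gas kinetic-theory conductivity expression above, the estimate can be scripted; the argon collision diameter and conditions below are assumed, approximate values used only for the order of magnitude:

```python
import math

# Ideal-gas thermal conductivity from elementary kinetic theory:
# k_f = (1/3) * n_f * c_v * u_rms * mfp, with mfp = 1/(sqrt(2) * pi * d^2 * n_f).
kB = 1.380649e-23            # Boltzmann constant, J/K
T  = 300.0                   # temperature, K
p  = 1.01325e5               # pressure, Pa
m  = 39.95 * 1.660539e-27    # argon atomic mass, kg
d  = 3.4e-10                 # assumed collision diameter of argon, m

n_f   = p / (kB * T)                                    # number density, 1/m^3
c_v   = 1.5 * kB                                        # heat capacity per atom (monatomic), J/K
u_rms = math.sqrt(3.0 * kB * T / m)                     # RMS thermal speed, m/s
mfp   = 1.0 / (math.sqrt(2.0) * math.pi * d**2 * n_f)   # mean free path, m

k_f = (1.0 / 3.0) * n_f * c_v * u_rms * mfp
print(f"k_f ~ {1000.0 * k_f:.1f} mW/(m K)")
# ~6 mW/(m K); the rigorous Chapman-Enskog treatment raises this crude estimate by
# roughly 2.5x for a monatomic gas, toward the measured ~18 mW/(m K) for argon.
```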
The EM wave is governed by the classical Maxwell equations, and the quantization of the EM wave is used for phenomena such as blackbody radiation (in particular to explain the ultraviolet catastrophe). The quantum of EM wave (photon) energy at angular frequency ωph is Eph = ħωph, and photons follow the Bose–Einstein distribution function (fph). The photon Hamiltonian for the quantized radiation field (second quantization) involves the electric and magnetic fields of the EM radiation, ee and be, the free-space permittivity and permeability, εo and μo, the interaction volume V, the photon angular frequency ωph,α for mode α, and the photon creation and annihilation operators cα† and cα. The vector potential ae of the EM fields (ee = −∂ae/∂t and be = ∇×ae) is expanded in plane-wave modes with unit polarization vector sph,α and wave vector κα. Blackbody radiation, among the various types of photon emission, employs the photon gas model with a thermalized energy distribution and no interphoton interaction. From the linear dispersion relation (i.e., dispersionless), the phase and group speeds are equal (uph = dωph/dκ = ωph/κ, uph: photon speed), and the Debye (used for the dispersionless photon) density of states is Dph,b,ωdω = ωph2dωph/π2uph3. With Dph,b,ω and the equilibrium distribution fph, the photon energy spectral distribution dIb,ω or dIb,λ (λph: wavelength) is derived (the Planck law), as is the total emissive power Eb (the Stefan–Boltzmann law). Compared to blackbody radiation, laser emission has high directionality (small solid angle ΔΩ) and spectral purity (narrow bands Δω). Lasers range from the far-infrared to the X-ray/γ-ray regimes, based on the resonant transition (stimulated emission) between electronic energy states. Near-field radiation from thermally excited dipoles and other electric/magnetic transitions is very effective within a short distance (of the order of the wavelength) from emission sites. The BTE for the photon particle momentum pph = ħωphs/uph along direction s includes absorption/emission (= uphσph,ω[fph(ωph,T) - fph(s)], σph,ω: spectral absorption coefficient) and generation/removal terms. In terms of the radiation intensity (Iph,ω = uphfphħωphDph,ω/4π, Dph,ω: photon density of states), this is called the equation of radiative transfer (ERT), from which the net radiative heat flux vector is obtained. From the Einstein population rate equation, the spectral absorption coefficient σph,ω in the ERT is proportional to the interaction probability (absorption) rate, or the Einstein coefficient B12 (J−1 m3 s−1), which gives the probability per unit time per unit spectral energy density of the radiation field (1: ground state, 2: excited state), and to the electron density ne (in the ground state). This can be obtained using the transition dipole moment μe with the FGR and the relationship between the Einstein coefficients. Averaging σph,ω over ω gives the average photon absorption coefficient σph. For the case of an optically thick medium of length L, i.e., σphL >> 1, and using the gas kinetic theory, the photon conductivity kph is 16σSBT3/3σph (σSB: Stefan–Boltzmann constant, σph: average photon absorption coefficient), and the photon heat capacity nphcv,ph is 16σSBT3/uph. Photons have the largest range of energy and are central in a variety of energy conversions. Photons interact with electric and magnetic entities, for example with electric dipoles (which in turn are excited by optical phonons or fluid-particle vibration) or with the transition dipole moments of electronic transitions.
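A short numerical illustration of the Stefan–Boltzmann emissive power and the optically thick photon conductivity quoted above (the absorption coefficient used here is an assumed, illustrative value):

```python
# Blackbody emissive power E_b = sigma_SB * T^4, and the optically thick ("diffusion")
# photon conductivity k_ph = 16 * sigma_SB * T^3 / (3 * sigma_ph).
sigma_SB = 5.670374e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
T        = 1000.0        # temperature, K
sigma_ph = 100.0         # assumed average photon absorption coefficient, 1/m

E_b  = sigma_SB * T**4
k_ph = 16.0 * sigma_SB * T**3 / (3.0 * sigma_ph)
print(f"E_b ~ {E_b / 1e3:.1f} kW/m^2, k_ph ~ {k_ph:.2f} W/(m K)")  # ~56.7 kW/m^2, ~3.02 W/(m K)
```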
In heat transfer physics, the interaction kinetics of photons is treated using perturbation theory (the Fermi golden rule) and the interaction Hamiltonian. The photon-electron interaction Hamiltonian involves the dipole moment vector pe and the creation and annihilation operators, a† and a, of the internal motion of the electron. Photons also participate in ternary interactions, e.g., phonon-assisted photon absorption/emission (transition of electron energy level). The vibrational modes in fluid particles can decay or become excited by emitting or absorbing photons. Examples are laser cooling of solids and of molecular gases. Using ab initio calculations based on first principles along with EM theory, various radiative properties such as the dielectric function (electrical permittivity, εe,ω), the spectral absorption coefficient (σph,ω), and the complex index of refraction (mω) are calculated for various interactions between photons and electric/magnetic entities in matter. For example, the imaginary part (εe,c,ω) of the complex dielectric function (εe,ω = εe,r,ω + i εe,c,ω) for an electronic transition across a bandgap is expressed in terms of the unit-cell volume V, the valence and conduction bands (VB and CB), the weight wκ associated with a κ-point, and the transition momentum matrix element pij. The real part εe,r,ω is obtained from εe,c,ω using the Kramers–Kronig relation, in which the principal value of the integral is taken. In another example, for the far-IR regions where the optical phonons are involved, the dielectric function (εe,ω) is calculated with an oscillator model in which LO and TO denote the longitudinal and transverse optical phonon modes, j runs over all the IR-active modes, and γ is the temperature-dependent damping term. εe,∞ is the high-frequency dielectric permittivity, which can be calculated by a DFT calculation when the ions are treated as an external potential. From these dielectric function (εe,ω) calculations (e.g., Abinit, VASP, etc.), the complex refractive index mω (= nω + iκω, nω: index of refraction and κω: extinction index) is found, i.e., mω2 = εe,ω = εe,r,ω + iεe,c,ω. The surface reflectance R of an ideal surface with normal incidence from vacuum or air is given as R = [(nω - 1)2 + κω2]/[(nω + 1)2 + κω2]. The spectral absorption coefficient is then found from σph,ω = 2ωκω/uph. The spectral absorption coefficients of various electric entities are listed in the table below. See also Energy transfer Mass transfer Energy transformation (Energy conversion) Thermal physics Thermal science Thermal engineering References Heat transfer Thermodynamics Condensed matter physics
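The reflectance and absorption relations at the end of this section lend themselves to a short numerical example; the refractive-index values used here for silicon in the visible are approximate, assumed figures:

```python
import math

# Normal-incidence reflectance from the complex refractive index m = n + i*kappa:
# R = [(n - 1)^2 + kappa^2] / [(n + 1)^2 + kappa^2],
# and the spectral absorption coefficient sigma_ph = 2 * omega * kappa / u_ph.
n, kappa   = 3.95, 0.03   # assumed values for silicon near 600 nm
wavelength = 600e-9       # vacuum wavelength, m
u_ph       = 2.998e8      # photon (vacuum light) speed, m/s

R        = ((n - 1.0)**2 + kappa**2) / ((n + 1.0)**2 + kappa**2)
omega    = 2.0 * math.pi * u_ph / wavelength
sigma_ph = 2.0 * omega * kappa / u_ph
print(f"R ~ {R:.2f}, sigma_ph ~ {sigma_ph:.2e} 1/m")  # ~0.36 and ~6.3e5 1/m
```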
Heat transfer physics
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
7,215
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Phases of matter", "Materials science", "Condensed matter physics", "Thermodynamics", "Matter", "Dynamical systems" ]
38,812,038
https://en.wikipedia.org/wiki/Spontelectrics
In solid state physics, spontelectrics is the study and phenomenon of thin films of various materials producing strong electric fields. Properties When laid down as thin films tens to hundreds of molecular layers thick, a range of materials spontaneously generate large electric fields. The electric fields can be greater than 108 V/m. Spontelectric behaviour is intrinsic to the dipolar nature of the constituent molecules. The detection (in ~2009) of spontaneous electric fields in numerous solid films prepared by vapour deposition raises fundamental questions about the nature of disordered materials. External links Spontelectrics, or the solid state continues to surprise us Fysikoverraskelse: Elektrisk spænding opstår spontant i tyndfilm af lattergas Another way to spontelectrics. References Condensed matter physics
Spontelectrics
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
169
[ "Phases of matter", "Condensed matter physics", "Matter", "Materials science" ]
38,812,596
https://en.wikipedia.org/wiki/ABCE1
ATP-binding cassette sub-family E member 1 (ABCE1), also known as RNase L inhibitor (RLI), is an enzyme that in humans is encoded by the ABCE1 gene. ABCE1 is an ATPase that is a member of the ATP-binding cassette (ABC) transporter superfamily and the OABP subfamily. ABCE1 inhibits the action of ribonuclease L. Ribonuclease L normally binds to 2-5A (5'-phosphorylated 2',5'-linked oligoadenylates) and acts in the interferon-regulated 2-5A/RNase L pathway, which inhibits viruses. ABCE1 heterodimerizes with ribonuclease L and prevents its interaction with 2-5A, antagonizing the antiviral properties of ribonuclease L and allowing the virus to synthesize viral proteins. It has also been implicated in tumor cell proliferation and anti-apoptosis. ABCE1 is an essential and highly conserved protein that is required for both eukaryotic translation initiation and ribosome biogenesis. The most studied homologues are Rli1p in yeast and Pixie in Drosophila. Structure RLI is a 68 kDa cytoplasmic protein found in most eukaryotes and archaea. Since the crystal structure of RLI has not yet been determined, all that is known has been inferred from protein sequencing. The protein sequence is very well conserved between species; for example, Pixie and yeast Rli1p are 66% identical, and Rli1p and human RLI are 67% identical. RLI belongs to the ABCE family of ATP-binding cassette (ABC) proteins. ABC proteins typically also contain a transmembrane region and utilize ATP to transport substrates across a membrane; however, RLI is unique in that it is a soluble protein that contains ABC domains. RLI has two C-terminal ABC domains; upon binding ATP they form a characteristic "ATP sandwich," with two ATP molecules sandwiched between the two dimerized ABC domains. Hydrolysis of ATP allows the dimer to dissociate in a fully reversible process. Incubation of the protein with a non-hydrolyzable ATP analogue, or a mutation of the ABC domain, causes a complete loss of protein function. RLI also has a cysteine-rich N-terminal region that is predicted to tightly bind two [4Fe-4S] clusters. Mutation of this region, or depletion of available Fe/S clusters, renders the protein unable to function and causes loss of cell viability, making RLI the only known essential cytoplasmic protein dependent on Fe/S cluster biosynthesis in the mitochondria. The function of the Fe/S clusters is unknown, although it has been suggested that they regulate the ABC domains in response to a change in the redox environment, for example in the presence of reactive oxygen species. Function RLI and its homologues in yeast and Drosophila have two major identified functions: translation initiation and ribosome biogenesis. In addition, human RLI is a known inhibitor of RNase L. This was the first activity identified and the source of its name (RNase L inhibitor). Translation Initiation Translation initiation is an essential process required for proper protein expression and cell viability. Rli1p has been found to co-purify with eukaryotic initiation factors, specifically eIF2, eIF5, and eIF3, as well as with the 40S subunit of the ribosome. These initiation factors must associate with the ribosome in stoichiometric proportions, while Rli1p is required in catalytic amounts. The following mechanism for the process has been proposed: one ABC domain binds the 40S subunit, while the other binds an initiation factor.
Binding of ATP allows for dimerization, which subsequently brings the initiation factor and the ribosomal subunit into close enough contact to associate. ATP hydrolysis releases the two substrates and allows the cycle to begin again. This model is similar to one that has been proposed for DNA repair enzymes with ABC domains, in which each domain binds either side of a broken piece of DNA, with hydrolysis allowing the pieces to be brought together and subsequently repaired. Ribosome recycling Recycling is essential for ribosomes to become usable again after translating an mRNA or stalling. In both eukaryotes and archaea, ABCE1 is responsible for splitting a ribosome that has been bound by Pelota or its paralog eRF1. The exact movements leading to the split are not well understood. Ribosome biogenesis RLI and its homologues are also thought to play a role in ribosome biogenesis, nuclear export, or both. They have been found in the nucleus associated with the 40S and 60S subunits, as well as with Hcr1p, a protein required for rRNA processing. It has been shown that the Fe/S clusters are necessary for ribosome biogenesis and/or nuclear export, although the exact mechanism is unknown. RNase inhibitor Human RLI was first identified because of its ability to inhibit RNase L, which plays a crucial role in antiviral activity in mammals. This cannot account for the conservation of the protein in all other organisms, since only mammals have the RNase L system. It has been suggested that RLI in lower eukaryotes functions by inhibiting RNases involved in ribosomal biosynthesis, thereby regulating the process. Role in mitochondria The mitochondria's energetic and metabolic functions have been established to be non-essential for yeast cell viability. The only function that has been implicated as being necessary for survival is the biosynthesis of Fe/S clusters. RLI is the only known essential cytoplasmic Fe/S protein that is absolutely dependent on the mitochondrial Fe/S synthesis and export system for proper maturation. Rli1p is therefore a novel link between the mitochondria and ribosome function and biosynthesis, and thus the viability of the cell. References External links ABCE1 at NCBI AceView genes ABCE1 at GeneCards UniProt Protein biosynthesis ATP-binding cassette transporters
ABCE1
[ "Chemistry" ]
1,321
[ "Protein biosynthesis", "Gene expression", "Biosynthesis" ]
38,813,617
https://en.wikipedia.org/wiki/Useless%20machine
A useless machine or useless box is a device whose only function is to turn itself off. The best-known useless machines are those inspired by Marvin Minsky's design, in which the device's sole function is to switch itself off by operating its own "off" switch. Such machines were popularized commercially in the 1960s, sold as an amusing engineering hack, or as a joke. More elaborate devices and some novelty toys, which have an obvious entertainment function, have been based on these simple useless machines. History The Italian artist Bruno Munari began building "useless machines" (macchine inutili) in the 1930s. He was a "third generation" Futurist and did not share the first generation's boundless enthusiasm for technology but sought to counter the threats of a world under machine rule by building machines that were artistic and unproductive. The version of the useless machine that became famous in information theory (basically a box with a simple switch which, when turned "on", causes a hand or lever to appear from inside the box that switches the machine "off" before disappearing inside the box again) appears to have been invented by MIT professor and artificial intelligence pioneer Marvin Minsky, while he was a graduate student at Bell Labs in 1952. Minsky dubbed his invention the "ultimate machine", but this nomenclature did not catch on. The device has also been called the "Leave Me Alone Box". Minsky's mentor at Bell Labs, information theory pioneer Claude Shannon (who later became an MIT professor himself), made his own versions of the machine. He kept one on his desk, where science fiction author Arthur C. Clarke saw it. Clarke later wrote, "There is something unspeakably sinister about a machine that does nothing—absolutely nothing—except switch itself off", and he was fascinated by the concept. Minsky also invented a "gravity machine" that would ring a bell if the gravitational constant were to change, a theoretical possibility that is not expected to occur in the foreseeable future. Commercial products In the 1960s, a novelty toy maker called "Captain Co." sold a "Monster Inside the Black Box", featuring a mechanical hand that emerged from a featureless plastic black box and flipped a toggle switch, turning itself off. This version may have been inspired in part by "Thing", the disembodied hand featured in the television sitcom The Addams Family. Other versions have been produced. In their conceptually purest form, these machines do nothing except switch themselves off. It is claimed that Don Poynter, who graduated from the University of Cincinnati in 1949 and founded Poynter Products, Inc., first produced and sold the "Little Black Box", which simply switched itself off. He then added the coin snatching feature, dubbed his invention "The Thing", arranged licensing with the producers of the television show, The Addams Family, and later sold "Uncle Fester's Mystery Light Bulb" as another show spinoff product. Robert J. Whiteman, owner and president of Liberty Library Corporation, also claims credit for developing "The Thing". (Both companies were later to be co-defendants in landmark litigation initiated by Theodor Geisel ("Dr. Seuss") over copyright issues related to figurines.) Both the plain black box and the bank version were widely sold by Spencer Gifts, and appeared in its mail-order catalogs through the 1960s and early 1970s. , a version of the coin snatching black box is being sold as the "Black Box Money Trap Bank" or "Black Box Bank". 
Do-it-yourself versions of the useless machine (often modernized with microprocessor controls) have been featured in a number of web videos and have inspired more complex machines that are able to move or that use more than one switch. Several completed or kit-form devices have been offered for sale. Cultural references In 2009, the artist David Moises exhibited his reconstruction of The Ultimate Machine aka Shannon's Hand, and explained the interactions of Claude Shannon, Marvin Minsky, and Arthur C. Clarke regarding the device. Episode 3 of the third season of the FX show Fargo, "The Law of Non-Contradiction", features a useless machine (and, in a story within the story, an android named MNSKY after Marvin Minsky). See also Arthur Ganson Discard Protocol Jean Tinguely Overengineering Rube Goldberg machine Theo Jansen Trammel of Archimedes References Office toys Machines Novelty items Mechanical toys Products introduced in the 1950s
Useless machine
[ "Physics", "Technology", "Engineering" ]
931
[ "Physical systems", "Machines", "Mechanical toys", "Mechanical engineering" ]
38,817,016
https://en.wikipedia.org/wiki/Cyanuric%20triazide
Cyanuric triazide (C3N12 or (NCN3)3) is described as an environmentally friendly, low-toxicity organic primary explosive with a detonation velocity of about 7,300 m s−1 and an autoignition temperature of 205 °C. Structure The cyanuric triazide molecule exists as a planar triskelion with molecular point group C3h. The 1,3,5-triazine (or cyanuric) ring consists of alternating carbon and nitrogen atoms with C–N bond lengths of 1.334 to 1.336 Å. The distance from the center of the ring to each ring carbon atom is 1.286 Å, while the corresponding distance to the ring nitrogens is 1.379 Å. Azide groups are linked to the carbon atoms of the cyanuric ring by single bonds with an interatomic distance of 1.399 Å. Synthesis Cyanuric triazide can be synthesized via the nucleophilic aromatic substitution of cyanuric trichloride with an excess of sodium azide in heated acetone. The white crystals can then be purified via recrystallization from toluene at −20 °C. Decomposition reactions This white polycrystalline solid was found to be stable under standard conditions but is extremely shock-sensitive, causing it to decompose violently when ground with a mortar. The thermodynamic properties of cyanuric triazide were studied using bomb calorimetry, giving a combustion enthalpy (ΔH) of 2234 kJ mol−1 under oxidizing conditions and 740 kJ mol−1 otherwise. The former value is comparable to that of the military explosive RDX, C3H6N6O6, but the compound is not put into use owing to its less than favorable stability. Melting point examination showed a sharp melting range to a clear liquid at 94–95 °C, gas evolution at 155 °C, orange-to-brown discoloration at 170 °C, orange-brown solidification at 200 °C and rapid decomposition at 240 °C. The rapid decomposition at 240 °C results from the formation of elemental carbon as graphite and the formation of nitrogen gas. References Explosive chemicals Triazines Organoazides
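The substitution described above corresponds to the overall balanced reaction (standard stoichiometry; the conditions are those stated in the text):

$$\mathrm{C_3N_3Cl_3 \;+\; 3\,NaN_3 \;\xrightarrow{\ \text{acetone, heat}\ }\; C_3N_3(N_3)_3 \;+\; 3\,NaCl}$$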
Cyanuric triazide
[ "Chemistry" ]
459
[ "Explosive chemicals" ]
1,476,307
https://en.wikipedia.org/wiki/Memory%20foam
Memory foam consists mainly of polyurethane with additional chemicals that increase its viscosity and density. It is often referred to as "viscoelastic" polyurethane foam, or low-resilience polyurethane foam (LRPu). The foam bubbles or ‘cells’ are open, effectively creating a matrix through which air can move. Higher-density memory foam softens in reaction to body heat, allowing it to mold to a warm body in a few minutes. Newer foams may recover their original shape more quickly. Mechanics Memory foam derives its viscoelastic properties from several effects, due to the material's internal structure. The network effect is the force working to restore the foam's structure when it is deformed. This effect is generated by the deformed porous material pushing outwards to restore its structure against an applied pressure. Three effects work against the network effect, slowing the regeneration of the foam's original structure: The pneumatic effect, caused by the time it takes air to flow into the foam's porous structure. The adhesive effect, or adhesion, caused by the stickiness of the surfaces within the foam, which work against decompression as the internal pores within the foam are pressed together The relaxation effect (the strongest of the three forces working against expansion), caused by the foam's material being near its glass transition temperature—limiting its mobility, forcing any change to be gradual, and slowing the expansion of the foam once the applied pressure has been removed The effects are temperature-dependent, so the temperature range at which memory foam retains its properties is limited. If it is too cold, it hardens. If it is too hot, it acts like conventional foams, quickly springing back to its original shape. The underlying physics of this process can be described by polymeric creep. The pneumatic and adhesive effects are strongly correlated with the size of the pores within memory foam. Smaller pores lead to higher internal surface area and reduced air flow, increasing the adhesion and pneumatic effects. Thus the foam's properties can be controlled by changing its cell structure and porosity. Its glass transition temperature can also be modulated by using additives in the foam's material. Memory foam's mechanical properties can affect the comfort of mattresses produced with it. There is also a trade-off between comfort and durability. Certain memory foams may have a more rigid cell structure, leading to a weaker distribution of weight, but better recovery of the original structure, leading to improved cyclability and durability. Denser cell structure can also resist the penetration of water vapor, leading to reduced weathering and better durability and overall appearance. History Memory foam was developed in 1966 under a contract by NASA's Ames Research Center to improve the safety of aircraft cushions. The temperature-sensitive memory foam was initially referred to as "slow spring back foam"; most called it "temper foam". Created by feeding gas into a polymer matrix, it had an open-cell solid structure that matched pressure against it, yet slowly returned to its original shape. Later commercialisation of the foam included use in medical equipment such as X-ray table pads, and sports equipment such as American / Canadian football helmet liners. When NASA released memory foam to the public domain in the early 1980s, Fagerdala World Foams was one of the few companies willing to work with it, as the manufacturing process remained difficult and unreliable. 
Their 1991 product, the Tempur-Pedic Swedish Mattress, eventually led to the mattress and cushion company Tempur World. Memory foam was subsequently used in medical settings. For example, when patients were required to lie immobile in bed, on a firm mattress, for an unhealthy period of time, the pressure on some of their body regions impaired blood flow, causing pressure sores or gangrene. Memory foam mattresses, as well as alternating-pressure air mattresses, significantly decreased such events. Memory foam was initially too expensive for widespread use, but became cheaper. Its most common domestic uses are mattresses, pillows, shoes, and blankets. It has medical uses, such as wheelchair seat cushions, hospital bed pillows and padding for people suffering long-term pain or postural problems. Gel Heat retention can be a disadvantage when memory foam is used in mattresses and pillows, so in second-generation memory foam, companies began using an open-cell structure to improve breathability. In 2006, the third generation of memory foam was introduced. Gel visco or gel memory foam consists of gel particles fused with visco foam to reduce trapped body heat, shorten spring-back time and help the mattress feel softer. This technology was originally developed and patented by Peterson Chemical Technology, and gel mattresses became popular with the release of Serta's iComfort line and Simmons' Beautyrest line in 2011. Gel-infused memory foam was next developed with what were described as "beads" containing the gel which, as a phase-change material, achieved the desired temperature stabilization or cooling effect by changing from a solid to a liquid "state" within the capsule. Changing physical states can significantly alter a material's heat absorption properties. Since the development of gel memory foam, other materials have been added. Aloe vera, green tea extract and activated charcoal have been combined with it to reduce odors and even provide aromatherapy while sleeping. Rayon has been used in woven mattress covers over memory foam beds to wick moisture away from the body to increase comfort. Phase-change materials (PCMs) have also been used in covers on memory foam pillows, beds, and mattress pads. Materials other than polyurethane also have the properties necessary to make memory foam. Polyethylene terephthalate, one such polymeric material, provides certain benefits over polyurethane, such as recyclability, lightness, and thermal insulation. Mattresses A memory foam mattress is usually denser than other foam mattresses, making it both more supportive and heavier. Memory foam mattresses are often sold for higher prices than traditional mattresses. Memory foam used in mattresses is commonly manufactured in densities ranging from less than 24 kg/m3 (1.5 lb/ft3) to 128 kg/m3 (8 lb/ft3). Most standard memory foam has a density of 16–80 kg/m3 (1 to 5 lb/ft3). Most bedding, such as topper pads and comfort layers in mattresses, has a density of 48–72 kg/m3 (3 to 4.5 lb/ft3). High densities such as 85 kg/m3 (5.3 lb/ft3) are used infrequently. The firmness property (hard to soft) of memory foam is used in determining comfort. It is measured by a foam's indentation force deflection (IFD) rating. However, it is not a complete measurement of a "soft" or "firm" feel. A foam of higher IFD but lower density can feel soft when compressed. IFD measures the force in newtons (or pounds-force) required to make a dent 1 inch into a foam sample by a 323 cm2 (50 sq in, 8-inch-diameter) disc, known as IFD @ 25% compression.
IFD ratings for memory foams range between super soft (IFD 10) and semi-rigid (IFD 12). Most memory foam mattresses are firm (IFD 12 to IFD 16). Second and third generation memory foams have an open-cell structure that reacts to body heat and weight by molding to the sleeper's body, helping relieve pressure points, preventing pressure sores, etc. Manufacturers claim that this pressure relief may reduce pain and promote more restful sleep, although there are no objective studies supporting the mattresses' claimed benefits. Memory foam mattresses retain body heat, so they can be excessively warm in hot weather. However, gel-type memory foams tend to be cooler due to their greater breathability. Hazards Emissions from memory foam mattresses may directly cause more respiratory irritation than other mattresses. Memory foam, like other polyurethane products, can be combustible. Laws in several jurisdictions have been enacted to require that all bedding, including memory foam items, be resistant to ignition from an open flame such as a candle or cigarette lighter. US bedding laws that went into effect in 2010 changed the Cal-117 Bulletin used for flame-retardancy (FR) testing. There is concern that high levels of the fire retardant PBDE commonly used in memory foam could cause health problems for some users. PBDEs are no longer used in most bedding foams, especially in the European Union. Manufacturers caution against leaving babies and small children unattended on memory foam mattresses, as they may find it difficult to turn over and may suffocate. The United States Environmental Protection Agency published two documents proposing National Emissions Standards for Hazardous Air Pollutants (HAP) concerning hazardous emissions produced during the making of flexible polyurethane foam products. The HAP emissions associated with polyurethane foam production include methylene chloride, toluene diisocyanate, methyl chloroform, methylene diphenyl diisocyanate, propylene oxide, diethanolamine, methyl ethyl ketone, methanol, and toluene. However, not all chemical emissions associated with the production of these materials have been classified. Methylene chloride makes up over 98 percent of the total HAP emissions from this industry. Short-term exposure to high concentrations of methylene chloride irritates the nose and throat. The effects of chronic (long-term) exposure to methylene chloride in humans involve the central nervous system, and include headaches, dizziness, nausea, and memory loss. Animal studies indicate that inhalation of methylene chloride affects the liver, kidney, and cardiovascular system. Developmental or reproductive effects of methylene chloride have not been reported in humans, but limited animal studies have reported lowered fetal body weights in exposed rats. See also Low-resilience polyurethane Sorbothane Neoprene List of polyurethane applications References Polyurethanes Foams Non-Newtonian fluids Bedding NASA spin-off technologies Smart materials de:Formgedächtnispolymer
Memory foam
[ "Chemistry", "Materials_science", "Engineering" ]
2,141
[ "Foams", "Smart materials", "Materials science" ]
1,477,878
https://en.wikipedia.org/wiki/David%20Slowinski
David Slowinski is a mathematician involved in prime number research. His career highlights have included the discovery of several of the largest known Mersenne primes: 2^44497 − 1 (M27) (with H. L. Nelson) on April 8, 1979; 2^86243 − 1 (M28) on September 25, 1982; 2^132049 − 1 (M30) on September 19, 1983; 2^216091 − 1 (M31) on September 1, 1985; 2^756839 − 1 (M32) (with P. Gage) on February 17, 1992; 2^859433 − 1 (M33) (with P. Gage) on January 4, 1994; and 2^1257787 − 1 (M34) (with P. Gage) on September 3, 1996. He has also written several textbooks on the subject. Slowinski was a software engineer for Cray Research. References External links David Slowinski on Prime Wiki David Slowinski on The Prime Pages Living people 20th-century American mathematicians Cray employees Number theorists Year of birth missing (living people)
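The article does not describe the software used, but numbers of the form 2^p − 1 are conventionally tested for primality with the Lucas–Lehmer test, the test behind most Mersenne prime searches, including those run on Cray hardware. A minimal sketch (illustrative only; the record exponents listed above would take far longer than these toy cases):

```python
def lucas_lehmer(p: int) -> bool:
    """Return True if 2**p - 1 is a Mersenne prime (p must be an odd prime)."""
    m = (1 << p) - 1          # the Mersenne number 2^p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m   # Lucas-Lehmer recurrence, reduced modulo 2^p - 1
    return s == 0

# Small checks: 2^7 - 1 = 127 and 2^13 - 1 = 8191 are prime; 2^11 - 1 = 2047 is not.
print(lucas_lehmer(7), lucas_lehmer(13), lucas_lehmer(11))  # True True False
```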
David Slowinski
[ "Mathematics" ]
212
[ "Number theorists", "Number theory" ]
1,480,126
https://en.wikipedia.org/wiki/Underwater%20habitat
Underwater habitats are underwater structures in which people can live for extended periods and carry out most of the basic human functions of a 24-hour day, such as working, resting, eating, attending to personal hygiene, and sleeping. In this context, 'habitat' is generally used in a narrow sense to mean the interior and immediate exterior of the structure and its fixtures, but not its surrounding marine environment. Most early underwater habitats lacked regenerative systems for air, water, food, electricity, and other resources. However, some underwater habitats allow for these resources to be delivered using pipes, or generated within the habitat, rather than manually delivered. An underwater habitat has to meet the needs of human physiology and provide suitable environmental conditions, and the one which is most critical is breathing gas of suitable quality. Others concern the physical environment (pressure, temperature, light, humidity), the chemical environment (drinking water, food, waste products, toxins) and the biological environment (hazardous sea creatures, microorganisms, marine fungi). Much of the science covering underwater habitats and their technology designed to meet human requirements is shared with diving, diving bells, submersible vehicles and submarines, and spacecraft. Numerous underwater habitats have been designed, built and used around the world since as early as the start of the 1960s, either by private individuals or by government agencies. They have been used almost exclusively for research and exploration, but, in recent years, at least one underwater habitat has been provided for recreation and tourism. Research has been devoted particularly to the physiological processes and limits of breathing gases under pressure, for aquanaut, as well as astronaut training, and for research on marine ecosystems. Terminology and scope The term 'underwater habitat' is used for a range of applications, including some structures that are not exclusively underwater while operational, but all include a significant underwater component. There may be some overlap between underwater habitats and submersible vessels, and between structures which are completely submerged and those which have some part extending above the surface when in operation. In 1970 G. Haux stated: At this point it must also be said that it is not easy to sharply define the term "underwater laboratory". One may argue whether Link's diving chamber which was used in the "Man-in-Sea I" project, may be called an underwater laboratory. But the Bentos 300, planned by the Soviets, is not so easy to classify as it has a certain ability to maneuver. Therefore, the possibility exists that this diving hull is classified elsewhere as a submersible. Well, a certain generosity can not hurt. Comparison with surface based diving operations In an underwater habitat, observations can be carried out at any hour to study the behavior of both diurnal and nocturnal organisms. Habitats in shallow water can be used to accommodate divers from greater depths for a major portion of the decompression required. This principle was used in the project Conshelf II. Saturation dives provide the opportunity to dive with shorter intervals than possible from the surface, and risks associated with diving and ship operations at night can be minimized. In the habitat La Chalupa, 35% of all dives took place at night. 
To perform the same amount of useful work diving from the surface instead of from La Chalupa, an estimated eight hours of decompression time would have been necessary every day. However, maintaining an underwater habitat is much more expensive and logistically difficult than diving from the surface. It also restricts the diving to a much more limited area. Technical classification and description Architectural variations Pressure modes Underwater habitats are designed to operate in two fundamental modes. Open to ambient pressure via a moon pool, meaning the air pressure inside the habitat equals underwater pressure at the same level, such as SEALAB. This makes entry and exit easy as there is no physical barrier other than the moon pool water surface. Living in ambient pressure habitats is a form of saturation diving, and return to the surface will require appropriate decompression. Closed to the sea by hatches, with internal air pressure less than ambient pressure and at or closer to atmospheric pressure; entry or exit to the sea requires passing through hatches and an airlock. Decompression may be necessary when entering the habitat after a dive. This would be done in the airlock. A third or composite type has compartments of both types within the same habitat structure and connected via airlocks, such as Aquarius. Components Excursions An excursion is a visit to the environment outside the habitat. Diving excursions can be done on scuba or umbilical supply, and are limited upwards by decompression obligations while on the excursion, and downwards by decompression obligations while returning from the excursion. Open circuit or rebreather scuba have the advantage of mobility, but it is critical to the safety of a saturation diver to be able to get back to the habitat, as surfacing directly from saturation is likely to cause severe and probably fatal decompression sickness. For this reason, in most of the programs, signs and guidelines are installed around the habitat in order to prevent divers from getting lost. Umbilicals or airline hoses are safer, as the breathing gas supply is unlimited, and the hose is a guideline back to the habitat, but they restrict freedom of movement and can become tangled. The horizontal extent of excursions is limited by the scuba air supply or the length of the umbilical. The distance above and below the level of the habitat are also limited and depend on the depth of the habitat and the associated saturation of the divers. The volume of the underwater environment available for excursions thus takes the shape of a vertical axis cylinder centred on the habitat. As an example, in the Tektite I program, the habitat was located at a depth of . Excursions were limited vertically to a depth of (6.4 m above the habitat) and (12.8 m below the habitat level) and were horizontally limited to a distance of from the Habitat. History The history of underwater habitats follows on from the previous development of diving bells and caissons, and as long exposure to a hyperbaric environment results in saturation of the body tissues with the ambient inert gases, it is also closely connected to the history of saturation diving. The original inspiration for the development of underwater habitats was the work of George F. Bond, who investigated the physiological and medical effects of hyperbaric saturation in the Genesis project between 1957 and 1963. 
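The excursion limits quoted above for Tektite I amount to a cylindrical envelope centred on the habitat: a diver stays inside the permitted volume as long as the horizontal distance and the depth offset both remain within their limits. A minimal sketch of that check (the habitat depth and the horizontal radius below are assumed values for illustration; the text gives only the vertical offsets of 6.4 m above and 12.8 m below the habitat):

```python
from math import hypot

def within_excursion_envelope(x_m, y_m, depth_m,
                              habitat_depth_m=15.0,   # assumed habitat depth (illustrative)
                              max_up_m=6.4, max_down_m=12.8,
                              max_radius_m=100.0):    # assumed horizontal limit (illustrative)
    """Check a diver position (x, y horizontal offsets from the habitat, depth below the
    surface) against a cylindrical excursion envelope centred on the habitat."""
    horizontal_ok = hypot(x_m, y_m) <= max_radius_m
    vertical_ok = (habitat_depth_m - max_up_m) <= depth_m <= (habitat_depth_m + max_down_m)
    return horizontal_ok and vertical_ok

print(within_excursion_envelope(60, 40, 20))   # inside: about 72 m out, 5 m below habitat level
print(within_excursion_envelope(60, 40, 30))   # too deep for the assumed habitat depth
```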
Edwin Albert Link started the Man-in-the-Sea project in 1962, which exposed divers to hyperbaric conditions underwater in a diving chamber, culminating in the first aquanaut, Robert Sténuit, spending over 24 hours at a depth of . Also inspired by Genesis, Jacques-Yves Cousteau conducted the first Conshelf project in France in 1962 where two divers spent a week at a depth of , followed in 1963 by Conshelf II at for a month and for two weeks. In June 1964, Robert Sténuit and Jon Lindberg spent 49 hours at 126m in Link's Man-in-the-Sea II project. The habitat was an inflatable structure called SPID. This was followed by a series of underwater habitats where people stayed for several weeks at great depths. Sealab II had a usable area of , and was used at a depth of more than . Several countries built their own habitats at much the same time and mostly began experimenting in shallow waters. In Conshelf III six aquanauts lived for several weeks at a depth of . In Germany, the Helgoland UWL was the first habitat to be used in cold water, the Tektite stations were more spacious and technically more advanced. The most ambitious project was Sealab III, a rebuild of Sealab II, which was to be operated at . When one of the divers died in the preparatory phase due to human error, all similar projects of the United States Navy were terminated. Internationally, except for the La Chalupa Research Laboratory the large-scale projects were carried out, but not extended, so that the subsequent habitats were smaller and designed for shallower depths. The race for greater depths, longer missions and technical advances seemed to have come to an end. For reasons such as lack of mobility, lack of self-sufficiency, shifting focus to space travel and transition to surface-based saturation systems, the interest in underwater habitats decreased, resulting in a noticeable decrease in major projects after 1970. In the mid eighties, the Aquarius habitat was built in the style of Sealab and Helgoland and is still in operation today. Historical underwater habitats Man-in-the-Sea I and II The first aquanaut was Robert Stenuit in the Man-in-the-Sea I project run by Edwin A. Link. On 6 September 1962, he spent 24 hours and 15 minutes at a depth of in a steel cylinder, doing several excursions. In June 1964 Stenuit and Jon Lindbergh spent 49 hours at a depth of in the Man-in-the-Sea II program. The habitat consisted of a submerged portable inflatable dwelling (SPID). Conshelf I, II and III Conshelf, short for Continental Shelf Station, was a series of undersea living and research stations undertaken by Jacques Cousteau's team in the 1960s. The original design was for five of these stations to be submerged to a maximum depth of over the decade; in reality only three were completed with a maximum depth of . Much of the work was funded in part by the French petrochemical industry, who, along with Cousteau, hoped that such colonies could serve as base stations for the future exploitation of the sea. Such colonies did not find a productive future, however, as Cousteau later repudiated his support for such exploitation of the sea and put his efforts toward conservation. It was also found in later years that industrial tasks underwater could be more efficiently performed by undersea robot devices and men operating from the surface or from smaller lowered structures, made possible by a more advanced understanding of diving physiology. 
Still, these three undersea living experiments did much to advance man's knowledge of undersea technology and physiology, and were valuable as "proof of concept" constructs. They also did much to publicize oceanographic research and, ironically, usher in an age of ocean conservation through building public awareness. Along with Sealab and others, it spawned a generation of smaller, less ambitious yet longer-term undersea habitats primarily for marine research purposes. Conshelf I (Continental Shelf Station), constructed in 1962, was the first inhabited underwater habitat. Developed by Cousteau to record basic observations of life underwater, Conshelf I was submerged in of water near Marseille, and the first experiment involved a team of two spending seven days in the habitat. The two oceanauts, Albert Falco and Claude Wesly, were expected to spend at least five hours a day outside the station, and were subject to daily medical exams. Conshelf Two, the first ambitious attempt for men to live and work on the sea floor, was launched in 1963. In it, a half-dozen oceanauts lived down in the Red Sea off Sudan in a starfish-shaped house for 30 days. The undersea living experiment also had two other structures, one a submarine hangar that housed a small, two-man submarine named SP-350 Denise, often referred to as the "diving saucer" for its resemblance to a science fiction flying saucer, and a smaller "deep cabin" where two oceanauts lived at a depth of for a week. They were among the first to breathe heliox, a mixture of helium and oxygen, avoiding the normal nitrogen/oxygen mixture, which, when breathed under pressure, can cause narcosis. The deep cabin was also an early effort in saturation diving, in which the aquanauts' body tissues were allowed to become totally saturated by the helium in the breathing mixture, a result of breathing the gases under pressure. The necessary decompression from saturation was accelerated by using oxygen enriched breathing gases. They suffered no apparent ill effects. The undersea colony was supported with air, water, food, power, all essentials of life, from a large support team above. Men on the bottom performed a number of experiments intended to determine the practicality of working on the sea floor and were subjected to continual medical examinations. Conshelf II was a defining effort in the study of diving physiology and technology, and captured wide public appeal due to its dramatic "Jules Verne" look and feel. A Cousteau-produced feature film about the effort (World Without Sun) was awarded an Academy Award for Best Documentary the following year. Conshelf III was initiated in 1965. Six divers lived in the habitat at in the Mediterranean Sea near the Cap Ferrat lighthouse, between Nice and Monaco, for three weeks. In this effort, Cousteau was determined to make the station more self-sufficient, severing most ties with the surface. A mock oil rig was set up underwater, and divers successfully performed several industrial tasks. SEALAB I, II and III SEALAB I, II, and III were experimental underwater habitats developed by the United States Navy in the 1960s to prove the viability of saturation diving and humans living in isolation for extended periods of time. The knowledge gained from the SEALAB expeditions helped advance the science of deep sea diving and rescue, and contributed to the understanding of the psychological and physiological strains humans can endure. The three SEALABs were part of the United States Navy Genesis Project. 
Preliminary research work was undertaken by George F. Bond. Bond began investigations in 1957 to develop theories about saturation diving. Bond's team exposed rats, goats, monkeys, and human beings to various gas mixtures at different pressures. By 1963 they had collected enough data to test the first SEALAB habitat. Tektite I and II The Tektite underwater habitat was constructed by General Electric and was funded by NASA, the Office of Naval Research and the United States Department of the Interior. On 15 February 1969, four Department of the Interior scientists (Ed Clifton, Conrad Mahnken, Richard Waller and John VanDerwalker) descended to the ocean floor in Great Lameshur Bay in the United States Virgin Islands to begin an ambitious diving project dubbed "Tektite I". By 18 March 1969, the four aquanauts had established a new world's record for saturated diving by a single team. On 15 April 1969, the aquanaut team returned to the surface after performing 58 days of marine scientific studies. More than 19 hours of decompression were needed to safely return the team to the surface. Inspired in part by NASA's budding Skylab program and an interest in better understanding the effectiveness of scientists working under extremely isolated living conditions, Tektite was the first saturation diving project to employ scientists rather than professional divers. The term tektite generally refers to a class of meteorites formed by extremely rapid cooling. These include objects of celestial origins that strike the sea surface and come to rest on the bottom (note project Tektite's conceptual origins within the U.S space program). The Tektite II missions were carried out in 1970. Tektite II comprised ten missions lasting 10 to 20 days with four scientists and an engineer on each mission. One of these missions included the first all-female aquanaut team, led by Dr. Sylvia Earle. Other scientists participating in the all-female mission included Dr. Renate True of Tulane University, as well as Ann Hartline and Alina Szmant, graduate students at Scripps Institute of Oceanography. The fifth member of the crew was Margaret Ann Lucas, a Villanova University engineering graduate, who served as Habitat Engineer. The Tektite II missions were the first to undertake in-depth ecological studies. Tektite II included 24 hour behavioral and mission observations of each of the missions by a team of observers from the University of Texas at Austin. Selected episodic events and discussions were videotaped using cameras in the public areas of the habitat. Data about the status, location and activities of each of the 5 members of each mission was collected via key punch data cards every six minutes during each mission. This information was collated and processed by BellComm and was used for the support of papers written about the research concerning the relative predictability of behavior patterns of mission participants in constrained, dangerous conditions for extended periods of time, such as those that might be encountered in crewed spaceflight. The Tektite habitat was designed and built by General Electric Space Division at the Valley Forge Space Technology Center in King of Prussia, Pennsylvania. The Project Engineer who was responsible for the design of the habitat was Brooks Tenney, Jr. Tenney also served as the underwater Habitat Engineer on the International Mission, the last mission on the Tektite II project. The Program Manager for the Tektite projects at General Electric was Dr. Theodore Marton. 
Hydrolab Hydrolab was constructed in 1966 at a cost of $60,000 ($560,000 in today's currency) and used as a research station from 1970. The project was in part funded by the National Oceanic and Atmospheric Administration (NOAA). Hydrolab could house four people. Approximately 180 Hydrolab missions were conducted—100 missions in The Bahamas during the early to mid-1970s, and 80 missions off Saint Croix, U.S. Virgin Islands, from 1977 to 1985. These scientific missions are chronicled in the Hydrolab Journal. Dr. William Fife spent 28 days in saturation, performing physiology experiments on researchers such as Dr. Sylvia Earle. The habitat was decommissioned in 1985 and placed on display at the Smithsonian Institution's National Museum of Natural History in Washington, D.C. It was later relocated to the NOAA Auditorium and Science Center at National Oceanic and Atmospheric Administration (NOAA) headquarters in Silver Spring, Maryland. Edalhab The Engineering Design and Analysis Laboratory Habitat was a horizontal cylinder, 2.6 m high and 3.3 m long and weighing 14 tonnes, built by students of the Engineering Design and Analysis Laboratory in the US at a cost of $20,000 ($187,000 in today's currency). From 26 April 1968, four students spent 48 hours and 6 minutes in this habitat in Alton Bay, New Hampshire. Two further missions followed to 12.2 m. In the 1972 Edalhab II Florida Aquanaut Research Expedition experiments, the University of New Hampshire and NOAA used nitrox as a breathing gas. In the three FLARE missions, the habitat was positioned off Miami at a depth of 13.7 m. The conversion for this experiment increased the weight of the habitat to 23 tonnes. BAH I BAH I (for Biological Institute Helgoland) had a length of 6 m and a diameter of 2 m. It weighed about 20 tons and was intended for a crew of two people. The first mission in September 1968 with Jürgen Dorschel and Gerhard Lauckner at 10 m depth in the Baltic Sea lasted 11 days. In June 1969, a one-week flat-water mission took place in Lake Constance. While attempting to anchor the habitat at 47 m, the structure was flooded with the two divers in it and sank to the seabed. It was decided to lift it with the two divers inside according to the necessary decompression profile, and nobody was harmed. BAH I provided valuable experience for the much larger underwater laboratory Helgoland. In 2003 it was taken over as a technical monument by the Technical University of Clausthal-Zellerfeld and in the same year went on display at the Nautineum Stralsund on Kleiner Dänholm island. 
Bentos-300 Bentos-300 (Bentos minus 300) was a maneuverable Soviet submersible with a diver lockout facility that could be stationed at the seabed. It was able to spend two weeks underwater at a maximum depth of 300m with about 25 people on board. Although announced in 1966, it had its first deployment in 1977. [1] There were two vessels in the project. After Bentos-300 sank in the Russian Black Sea port of Novorossiisk in 1992, several attempts to recover it failed. In November 2011 it was cut up and recovered for scrap in the following six months. Progetto Abissi The Italian Progetto Abissi habitat, also known as La Casa in Fondo al Mare (Italian for The House at the Bottom of the Sea), was designed by the diving team Explorer Team Pellicano, consisted of three cylindrical chambers and served as a platform for a television game show. It was deployed for the first time in September 2005 for ten days, and six aquanauts lived in the complex for 14 days in 2007. MarineLab The MarineLab underwater laboratory was the longest serving seafloor habitat in history, having operated continuously from 1984 to 2018 under the direction of aquanaut Chris Olstad at Key Largo, Florida. The seafloor laboratory has trained hundreds of individuals in that time, featuring an extensive array of educational and scientific investigations from United States military investigations to pharmaceutical development. Beginning with a project initiated in 1973, MarineLab, then known as Midshipman Engineered & Designed Undersea Systems Apparatus (MEDUSA), was designed and built as part of an ocean engineering student program at the United States Naval Academy under the direction of Dr. Neil T. Monney. In 1983, MEDUSA was donated to the Marine Resources Development Foundation (MRDF), and in 1984 was deployed on the seafloor in John Pennekamp Coral Reef State Park, Key Largo, Florida. The shore-supported habitat supports three or four persons and is divided into a laboratory, a wet-room, and a transparent observation sphere. From the beginning, it has been used by students for observation, research, and instruction. In 1985, it was renamed MarineLab and moved to the mangrove lagoon at MRDF headquarters in Key Largo at a depth of with a hatch depth of . The lagoon contains artifacts and wrecks placed there for education and training. From 1993 to 1995, NASA used MarineLab repeatedly to study Controlled Ecological Life Support Systems (CELLS). These education and research programs qualify MarineLab as the world's most extensively used habitat. MarineLab was used as an integral part of the "Scott Carpenter, Man in the Sea" Program. In 2018 the habitat was retired and restored to its 1985 condition and is on display to the public at Marine Resources Development Foundation, Inc. Key Largo, Florida. Existing underwater habitats Aquarius The Aquarius Reef Base is an underwater habitat located 5.4 miles (9 kilometers) off Key Largo in the Florida Keys National Marine Sanctuary. It is deployed on the ocean floor below the surface and next to a deep coral reef named Conch Reef. Aquarius is one of three undersea laboratories in the world dedicated to science and education. Two additional undersea facilities, also located in Key Largo, Florida, are owned and operated by Marine Resources Development Foundation. Aquarius was owned by the National Oceanic and Atmospheric Administration (NOAA) and operated by the University of North Carolina–Wilmington until 2013 when Florida International University assumed operational control. 
Florida International University (FIU) took ownership of Aquarius in October 2014. As part of the FIU Marine Education and Research Initiative, the Medina Aquarius Program is dedicated to the study and preservation of marine ecosystems worldwide and is enhancing the scope and impact of FIU on research, educational outreach, technology development, and professional training. At the heart of the program is the Aquarius Reef Base. La Chalupa research laboratory In the early 1970s, Ian Koblick, president of Marine Resources Development Foundation, developed and operated the La Chalupa research laboratory, which was the largest and most technologically advanced underwater habitat of its time. Koblick, who has continued his work as a pioneer in developing advanced undersea programs for ocean science and education, is the co-author of the book Living and Working in the Sea and is considered one of the foremost authorities on undersea habitation. La Chalupa was operated off Puerto Rico. During the habitat's launching for its second mission, a steel cable wrapped around Dr. Lance Rennka's left wrist, shattering his arm, which he subsequently lost to gas gangrene. In the mid-1980s La Chalupa was transformed into Jules' Undersea Lodge in Key Largo, Florida. Jules' co-developer, Dr. Neil Monney, formerly served as Professor and Director of Ocean Engineering at the U.S. Naval Academy, and has extensive experience as a research scientist, aquanaut and designer of underwater habitats. La Chalupa was used as the primary platform for the Scott Carpenter Man in the Sea Program, an underwater analog to Space Camp. Unlike Space Camp, which utilizes simulations, participants performed scientific tasks while using actual saturation diving systems. This program, envisioned by Ian Koblick and Scott Carpenter, was directed by Phillip Sharkey with operational help of Chris Olstad. Also used in the program was the MarineLab Underwater Habitat, the submersible Sea Urchin (designed and built by Phil Nuytten), and an Oceaneering Saturation Diving system consisting of an on-deck decompression chamber and a diving bell. La Chalupa was the site of the first underwater computer chat, a session hosted on GEnie's Scuba RoundTable (the first non-computing related area on GEnie) by then-director Sharkey from inside the habitat. Divers from all over the world were able to direct questions to him and to Commander Carpenter. Scott Carpenter Space Analog Station The Scott Carpenter Space Analog Station was launched near Key Largo on six-week missions in 1997 and 1998. The station was a NASA project illustrating the analogous science and engineering concepts common to both undersea and space missions. During the missions, some 20 aquanauts rotated through the undersea station including NASA scientists, engineers and director James Cameron. The SCSAS was designed by NASA engineer Dennis Chamberland. Lloyd Godson's Biosub Lloyd Godson's Biosub was an underwater habitat, built in 2007 for a competition by Australian Geographic. The Biosub generated its own electricity (using a bike); its own water, using the Air2Water Dragon Fly M18 system; and its own air, using algae that produce O2. The algae were fed using the Cascade High School Advanced Biology Class Biocoil. The habitat shelf itself was constructed by Trygons Designs. Galathée The first underwater habitat built by Jacques Rougerie was launched and immersed on 4 August 1977. 
The unique feature of this semi-mobile habitat-laboratory is that it can be moored at any depth between 9 and 60 metres, which gives it the capability of phased integration in the marine environment. This habitat therefore has a limited impact on the marine ecosystem and is easy to position. Galathée was experienced by Jacques Rougerie himself. Aquabulle Launched for the first time in March 1978, this underwater shelter suspended in midwater (between 0 and 60 metres) is a mini scientific observatory 2.8 metres high by 2.5 metres in diameter. The Aquabulle, created and experienced by Jacques Rougerie, can accommodate three people for a period of several hours and acts as an underwater refuge. A series of Aquabulles were later built and some are still being used by laboratories. Hippocampe This underwater habitat, created by a French architect, Jacques Rougerie, was launched in 1981 to act as a scientific base suspended in midwater using the same method as Galathée. Hippocampe can accommodate 2 people on saturation dives up to a depth of 12 metres for periods of 7 to 15 days, and was also designed to act as a subsea logistics base for the offshore industry. Ithaa undersea restaurant Ithaa (Dhivehi for mother of pearl) is the world's only fully glazed underwater restaurant and is located in the Conrad Maldives Rangali Island hotel. It is accessible via a corridor from above the water and is open to the atmosphere, so there is no need for compression or decompression procedures. Ithaa was built by M.J. Murphy Ltd, and has an unballasted mass of 175 tonnes. Red Sea Star The "Red Sea Star" restaurant in Eilat, Israel, consisted of three modules; an entrance area above the water surface, a restaurant with 62 panorama windows 6 m under water and a ballast area below. The entire construction weighs about 6000 tons. The restaurant had a capacity of 105 people. It shut down in 2012. Eilat’s Coral World Underwater Observatory The first part of Eilat's Coral World Underwater Observatory was built in 1975 and it was expanded in 1991 by adding a second underwater observatory connected by a tunnel. The underwater complex is accessible via a footbridge from the shore and a shaft from above the water surface. The observation area is at a depth of approximately 12 m. Alpha Deep SeaPod Alpha Deep SeaPod is located off the coast of Puerto Lindo in Portobelo. The Pod was commissioned on the 5th of Feb, 2024. It is currently operational and serves as the residence of its owner. The floating residence provides living quarters with 360-degree panoramic views, while the underwater capsule serves as a 300-square-foot (about 28-square-meter) functional living space. In popular culture See also References Sources Gregory Stone: "Deep Science". National Geographic Online Extra (Sept 2003). Retrieved 29 July 2007. BBC. Living under the sea External links US Naval Undersea Museum SEALAB II Display Pressure vessels
Underwater habitat
[ "Physics", "Chemistry", "Engineering" ]
6,293
[ "Structural engineering", "Chemical equipment", "Physical systems", "Hydraulics", "Pressure vessels" ]
1,480,543
https://en.wikipedia.org/wiki/Plate%20girder%20bridge
A plate girder bridge is a bridge supported by two or more plate girders. Overview In a plate girder bridge, the plate girders are typically I-beams made up from separate structural steel plates (rather than rolled as a single cross-section), which are welded or, in older bridges, bolted or riveted together to form the vertical web and horizontal flanges of the beam. In some cases, the plate girders may be formed in a Z-shape rather than I-shape. The first tubular wrought iron plate girder bridge was built in 1846-47 by James Millholland for the Baltimore and Ohio Railroad. Plate girder bridges are suitable for short to medium spans and may support railroads, highways, or other traffic. Plate girders are usually prefabricated and the length limit is frequently set by the mode of transportation used to move the girder from the bridge shop to the bridge site. Generally, the depth of the girder is no less than the span, and for a given load bearing capacity, a depth of around the span minimizes the weight of the girder. Stresses on the flanges near the centre of the span are greater than near the end of the span, so the top and bottom flange plates are frequently reinforced in the middle portion of the span. Vertical stiffeners prevent the web plate from buckling under shear stresses. These are typically uniformly spaced along the girder with additional stiffeners over the supports and wherever the bridge supports concentrated loads. Deck-type plate girder bridge In the deck-type bridge, a wood, steel or reinforced concrete bridge deck is supported on top of two or more plate girders, and may act compositely with them. In the case of railroad bridges, the railroad ties themselves may form the bridge deck, or the deck may support ballast on which the track is laid. Additional beams may connect the main girders, for example in the form of bridge known as ladder-deck construction. Also, further elements may be attached to provide cross-bracing and prevent the girders from buckling. Semi-through plate girder bridge In the half-through bridge (also called a pony truss), the bridge deck is supported between two plate girders, often on top of the bottom flange. The overall bridge then has a 'U'-shape in cross-section. As cross-bracing cannot normally be added, vertical stiffeners on the girders are normally used to prevent buckling (technically described as 'U-frame behaviour'). This form of bridge is most often used on railroads as the construction depth (distance between the underside of the vehicle, and the underside of the bridge) is much less. This allows obstacles to be cleared with less change in height. Multi-span plate girder bridge Multispan plate-girder bridges may be an economical way to span gaps longer than can be spanned by a single girder. Spacing of piers between the abutments is dependent on the capacity of the selected plate girders. Separate plate girder bridges span between each pair of abutments in order to allow for expansion joints between the spans. Concrete is commonly used for low piers, while steel trestle work may be used for high bridges. See also Beam bridge — the ancestor of the plate girder bridge Box girder bridge — an evolution of the plate girder bridge Balloon flange girder Pin and hanger assembly Trestle — some modern steel trestles are composed of a number of girder bridge segments References Bridges by structural type Girders Structural steel
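As a numerical illustration of why the flange plates are reinforced near mid-span and the stiffeners are concentrated over the supports, consider a simply supported girder under a uniform load: the bending moment peaks at mid-span while the shear force peaks at the supports. This is a minimal sketch with made-up span and load values, not design guidance:

```python
def bending_moment(x, span, w):
    """Bending moment (kN*m) at position x along a simply supported span under uniform load w (kN/m)."""
    return w * x * (span - x) / 2.0

def shear_force(x, span, w):
    """Shear force (kN) at position x for the same load case."""
    return w * (span / 2.0 - x)

span, w = 30.0, 50.0   # 30 m span, 50 kN/m uniform load (illustrative values only)
for x in (0.0, 7.5, 15.0, 22.5, 30.0):
    print(f"x = {x:5.1f} m  M = {bending_moment(x, span, w):8.1f} kN*m  V = {shear_force(x, span, w):7.1f} kN")
# M is largest at mid-span (x = 15 m); V is largest at the supports (x = 0 and x = 30 m).
```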
Plate girder bridge
[ "Technology", "Engineering" ]
740
[ "Structural system", "Girders", "Structural steel", "Structural engineering" ]
1,480,601
https://en.wikipedia.org/wiki/Power%20cable
A power cable is an electrical cable, an assembly of one or more electrical conductors, usually held together with an overall sheath. The assembly is used for transmission of electrical power. Power cables may be installed as permanent wiring within buildings, buried in the ground, run overhead, or exposed. Power cables that are bundled inside thermoplastic sheathing and that are intended to be run inside a building are known as NM-B (nonmetallic sheathed building cable). Flexible power cables are used for portable devices, mobile tools, and machinery. History The first power distribution system developed by Thomas Edison in 1882 in New York City used copper rods, wrapped in jute and placed in rigid pipes filled with a bituminous compound. Although vulcanized rubber had been patented by Charles Goodyear in 1844, it was not applied to cable insulation until the 1880s, when it was used for lighting circuits. Rubber-insulated cable was used for 11,000-volt circuits in 1897 installed for the Niagara Falls power project. Mass-impregnated paper-insulated medium voltage cables were commercially practical by 1895. During World War II several varieties of synthetic rubber and polyethylene insulation were applied to cables. Typical residential and office construction in North America has gone through several technologies: Early bare and cloth-covered wires installed with staples Knob and tube wiring, 1880s–1930s, using asphalt-saturated cloth or later rubber insulation Armored cable, known by the genericized trademark "BX" - flexible steel sheath with two cloth-covered, rubber-insulated conductors - introduced in 1906 but more expensive than open single conductors Rubber-insulated wires with jackets of woven cotton cloth (usually impregnated with tar), waxed paper filler - introduced in 1922 Modern two or three-wire+ground PVC-insulated cable (e.g., NM-B), produced by such brands as Romex Aluminum wire was used in the 1960s and 1970s as a cheap replacement for copper and is still used today, but this is now considered unsafe, without proper installation, due to corrosion, softness and creeping of connection. Asbestos was used as an electrical insulator in some cloth wires from the 1920s to 1970s, but discontinued due to its health risk. Teck cable, a PVC-sheathed armored cable Construction Modern power cables come in a variety of sizes, materials, and types, each particularly adapted to its uses. Large single insulated conductors are also sometimes called power cables in the industry. Cables consist of three major components: conductors, insulation, protective jacket. The makeup of individual cables varies according to application. The construction and material are determined by three main factors: Working voltage, determining the thickness of the insulation; Current-carrying capacity, determining the cross-sectional size of the conductor(s); Environmental conditions such as temperature, water, chemical or sunlight exposure, and mechanical impact, determining the form and composition of the outer cable jacket. Cables for direct burial or for exposed installations may also include metal armor in the form of wires spiraled around the cable, or a corrugated tape wrapped around it. The armor may be made of steel or aluminum, and although connected to earth ground is not intended to carry current during normal operation. Electrical power cables are sometimes installed in raceways, including electrical conduit and cable trays, which may contain one or more conductors. 
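The second factor above, conductor cross-section, is also what limits resistive loss and voltage drop along a run. As a rough illustration (not a sizing rule from any electrical code; the load current, run length and supply voltage below are assumed example values):

```python
RESISTIVITY_CU = 1.72e-8  # ohm*m, copper at 20 C (standard reference value)

def voltage_drop(current_a, length_m, cross_section_mm2, resistivity=RESISTIVITY_CU):
    """Round-trip voltage drop for a two-conductor run of the given cross-section."""
    area_m2 = cross_section_mm2 * 1e-6
    resistance = resistivity * (2 * length_m) / area_m2   # out and back along both conductors
    return current_a * resistance

# Example: 20 A over a 30 m run with 2.5 mm^2 copper conductors, 230 V supply (assumed values).
drop = voltage_drop(20, 30, 2.5)
print(f"{drop:.2f} V drop ({drop / 230 * 100:.1f}% of a 230 V supply)")
```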
When it is intended to be used inside a building, nonmetallic sheathed building cable (NM-B) consists of two or more wire conductors (plus a grounding conductor) enclosed inside a thermoplastic insulation sheath that is heat-resistant. It has advantages over armored building cable because it is lighter, easier to handle, and its sheathing is easier to work with. Power cables use stranded copper or aluminum conductors, although small power cables may use solid conductors in sizes of up to 1/0. (For a detailed discussion on copper cables, see Copper wire and cable.) The cable may include uninsulated conductors used for the circuit neutral or for ground (earth) connection. The grounding conductor connects the equipment's enclosure/chassis to ground for protection from electric shock. These uninsulated versions are known as bare conductors or tinned bare conductors. The overall assembly may be round or flat. Non-conducting filler strands may be added to the assembly to maintain its shape. Filler materials can be made in non-hygroscopic versions if required for the application. Special-purpose power cables for overhead applications are often bound to a high-strength alloy, ACSR, or alumoweld messenger. This cable is called aerial cable or pre-assembled aerial cable (PAC). PAC can be ordered unjacketed; however, this is less common in recent years due to the low added cost of supplying a polymeric jacket. For vertical applications the cable may include armor wires on top of the jacket, made of steel or Kevlar. The armor wires are attached to supporting plates periodically to help support the weight of the cable. A supporting plate may be included on each floor of the building, tower, or structure. This cable would be called an armored riser cable. For shorter vertical transitions (perhaps 30–150 feet) an unarmored cable can be used in conjunction with basket (Kellum) grips or even specially designed duct plugs. Material specification for the cable's jacket will often consider resistance to water, oil, sunlight, underground conditions, chemical vapors, impact, fire, or high temperatures. In nuclear industry applications the cable may have special requirements for ionizing radiation resistance. Cable materials for a transit application may be specified not to produce large amounts of smoke if burned (low smoke zero halogen). Cables intended for direct burial must consider damage from backfill or dig-ins. HDPE or polypropylene jackets are common for this use. Cables intended for subway use (underground vaults) may treat oil resistance, fire resistance, or low smoke as a priority. Few cables these days still employ an overall lead sheath. However, some utilities may still install paper-insulated lead-covered cable in distribution circuits. Transmission or submarine cables are more likely to use lead sheaths. However, lead is in decline and few manufacturers exist today to produce such items. When cables must run where exposed to mechanical damage (industrial sites), they may be protected with flexible steel tape or wire armor, which may also be covered by a water-resistant jacket. A hybrid cable can include conductors for control signals or may also include optical fibers for data. Higher voltages For circuits operating at or above 2,000 volts between conductors, a conductive shield should surround the conductor's insulation. This equalizes electrical stress on the cable insulation. This technique was patented by Martin Hochstadter in 1916; the shield is sometimes called a Hochstadter shield. 
Aside from the semi conductive ("semicon") insulation shield, there will also be a conductor shield. The conductor shield may be semi conductive (usually) or non conducting. The purpose of the conductor shield is similar to the insulation shield: it is a void filler and voltage stress equalizer. To drain off stray voltage, a metallic shield will be placed over the "semicon." This shield is intended to "make safe" the cable by pulling the voltage on the outside of the insulation down to zero (or at least under the OSHA limit of 50 volts). This metallic shield can consist of a thin copper tape, concentric drain wires, flat straps, lead sheath, or other designs. The metallic shields of a cable are connected to earth ground at the ends of the cable, and possibly locations along the length if voltage rise during faults would be dangerous. Multi-point grounding is the most common way to ground the cable's shield. Some special applications require shield breaks to limit circulating currents during the normal operations of the circuit. Circuits with shield breaks could be single or multi point grounded. Special engineering situations may require cross bonding. Liquid or gas filled cables are still employed in distribution and transmission systems today. Cables of 10 kV or higher may be insulated with oil and paper, and are run in a rigid steel pipe, semi-rigid aluminum or lead sheath. For higher voltages the oil may be kept under pressure to prevent formation of voids that would allow partial discharges within the cable insulation. Liquid filled cables are known for extremely long service lives with little to no outages. Unfortunately, oil leaks into soil and bodies of water are of grave concern and maintaining a fleet of the needed pumping stations is a drain on the O+M budget of most power utilities. Pipe type cables are often converted to solid insulation circuit at the end of their service life despite a shorter expected service life. Modern high-voltage cables use polyethylene or other polymers, including XLPE for insulation. They require special techniques for jointing and terminating, see High-voltage cable. Flexibility of cables (stranding class) All electrical cables are somewhat flexible, allowing them to be shipped to installation sites wound on reels, drums or hand coils. Flexibility is an important factor in determining the appropriate stranding class of the cable as it directly affects the minimum bending radius. Power cables are generally stranding class A, B, or C. These classes allow for the cable to be trained into a final installed position where the cable will generally not be disturbed. Class A, B, and C offer more durability, especially when pulling cable, and are generally cheaper. Power utilities generally order Class B stranded wire for primary and secondary voltage applications. At times, a solid conductor medium voltage cable can be used when flexibility is not a concern but low cost and water blocking are prioritized. Applications requiring a cable to be moved repeatedly, such as for portable equipment, more flexible cables called "cords" or "flex" are used (stranding class G-M). Flexible cords contain fine stranded conductors, rope lay or bunch stranded. They feature overall jackets with appropriate amounts of filler materials to improve their flexibility, trainability, and durability. Heavy duty flexible power cords such as those feeding a mine face cutting machine are carefully engineered — their life is measured in weeks. 
Very flexible power cables are used in automated machinery, robotics, and machine tools. See power cord and extension cable for further description of flexible power cables. Other types of flexible cable include twisted pair, extensible, coaxial, shielded, and communication cable. An X-ray cable is a special type of flexible high-voltage cable. See also AC power plugs and sockets American wire gauge – for a table of cross section sizes Ampacity – for a description of current carrying capacity of wires and cables Cross-linked polyethylene Electrical cable Ethylene propylene rubber (EPR) Industrial and multiphase power plugs and sockets Overhead power line Portable cord Railway electrification system Restriction of Hazardous Substances Directive Telecommunications power cable Voltage drop – another consideration when selecting proper cable sizes References Electrical wiring
Power cable
[ "Physics", "Engineering" ]
2,240
[ "Electrical systems", "Building engineering", "Physical systems", "Electrical engineering", "Electrical wiring" ]
29,298,089
https://en.wikipedia.org/wiki/Borrmann%20effect
The Borrmann effect (or Borrmann–Campbell effect after Gerhard Borrmann and Herbert N. Campbell) is the anomalous increase in the intensity of X-rays transmitted through a crystal when it is being set up for Bragg reflection. The Borrmann effect—a dramatic increase in transparency to X-ray beams—is observed when X-rays satisfying Bragg's law diffract through a perfect crystal. The minimization of absorption seen in the Borrmann effect has been explained by noting that the electric field of the X-ray beam approaches zero amplitude at the crystal planes, thus avoiding the atoms. References Borrmann, Gerhard; Über Extinktionsdiagramme von Quarz, Physikalische Zeitschrift 42, 157–162 (1941); Die Absorption von Röntgenstrahlen im Fall der Interferenz, Zeitschrift für Physik 127, 297–323 (1950) - original articles on Borrmann effect Campbell, Herbert N.; X-Ray Absorption in a Crystal Set at the Bragg Angle, Journal of Applied Physics 22, 1139 (1951) von Laue, Max; Die Absorption der Röntgenstrahlen in Kristallen im Interferenzfall, Acta Crystallographica 2, 106–113 (1949) - original explanation of Borrmann effect X-ray crystallography
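Since the effect appears when the beam satisfies Bragg's law, n·λ = 2·d·sin(θ), the setup angle can be computed directly from the wavelength and the lattice spacing. A minimal sketch (the Cu Kα wavelength and the Si(111) spacing are standard reference values used here only as an example; they are not taken from the article):

```python
from math import asin, degrees

def bragg_angle_deg(wavelength_angstrom, d_spacing_angstrom, order=1):
    """Bragg angle theta (degrees) from n*lambda = 2*d*sin(theta)."""
    return degrees(asin(order * wavelength_angstrom / (2.0 * d_spacing_angstrom)))

theta = bragg_angle_deg(1.5406, 3.1356)  # Cu K-alpha radiation, Si(111) lattice planes
print(f"theta = {theta:.2f} deg, 2*theta = {2 * theta:.2f} deg")  # ~14.2 deg and ~28.4 deg
```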
Borrmann effect
[ "Chemistry", "Materials_science" ]
290
[ "Crystallography stubs", "X-ray crystallography", "Crystallography", "Materials science stubs" ]
29,298,391
https://en.wikipedia.org/wiki/V404%20Cygni
V404 Cygni is a microquasar and a triple star system in the constellation of Cygnus. It contains a black hole with a mass of about and an early K giant star companion with a mass slightly smaller than the Sun, and an evolved tertiary component. The inner star and the black hole orbit each other every 6.47129 days at fairly close range, while the outer tertiary takes 70000 years to orbit the inner binary system. Due to their proximity and the intense gravity of the black hole, the secondary star loses mass to an accretion disk around the black hole and ultimately to the black hole itself. The "V" in the name indicates that it is a variable star, which repeatedly gets brighter and fainter over time. It is also considered a nova, because at least three times in the 20th century it produced a bright outburst of energy. Finally, it is a soft X-ray transient because it periodically emits short bursts of X-rays. The black hole companion has been proposed as a Q star candidate. Observation history The system was first noted as Nova Cygni 1938 and given the variable star designation V404 Cygni. It was considered to be an ordinary "moderately fast" nova although large fluctuations were noted during the decline. It was discovered after maximum light, and the photographic magnitude range was measured at 12.5–20.5. On May 22, 1989 the Japanese Ginga Team discovered a new X-ray source that was catalogued as GS 2023+338. This source was quickly linked to V404 Cygni, which was discovered to be in outburst again as Nova Cygni 1989. Follow-up studies showed a previously unnoticed outburst in 1956. There was also a possible brightening in 1979. In 2009, the black hole in the V404 Cygni system became the first black hole to have an accurate parallax measurement for its distance from the Solar System. Measured by very-long-baseline interferometry using the High Sensitivity Array, the distance is , or light-years. In April 2019, astronomers announced that jets of particles shooting from the black hole were wobbling back and forth on the order of a few minutes, something that had never before been seen in the particle jets streaming from a black hole. Astronomers believe that the wobble is caused by the Lense-Thirring effect due to warping of space/time by the huge gravitational field in the vicinity of the black hole. In April 2024, astronomers announced that V404 Cygni was discovered to be part of a hierarchical triple star system, with the tertiary companion being at least 3500 AU away from the inner binary system. This discovery was evidence that V404 Cygni formed with a minimal black hole natal kick on the order of less than 5 km/s. The tertiary component was also found to be evolved, indicating that the triple system has remained bounded through the black hole's formation and that the system's age is constrained to between 3-5 billion years old. 2015 outburst On 15 June 2015 NASA's Swift satellite detected the first signs of renewed activity. A worldwide observing campaign was commenced and on 17 June ESA's INTEGRAL Gamma-ray observatory started monitoring the outburst. INTEGRAL was detecting "repeated bright flashes of light time scales shorter than an hour, something rarely seen in other black hole systems", and during these flashes V404 Cygni was the brightest object in the X-ray sky—up to fifty times brighter than the Crab Nebula. This outburst was the first since 1989. 
Other outbursts occurred in 1938 and 1956, and the outbursts were probably caused by material piling up in a disk around the black hole until a tipping point was reached. The outburst was unusual in that physical processes in the inner accretion disk were detectable in optical photometry from small telescopes; previously, these variations were thought to be only detectable with space-based X-ray telescopes. A detailed analysis of the INTEGRAL data revealed the existence of so-called pair plasma near the black hole. This plasma consists of electrons and their antimatter counterparts, positrons. A follow-up study of the 2015 data found a coronal magnetic field strength of 461 ± 12 gauss, "substantially lower than previous estimates for such systems". See also List of stars in Cygnus List of black holes List of nearest black holes References Cygnus (constellation) Microquasars X-ray binaries Stellar black holes Cygni, V404 K-type giants Novae
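The parallax measurement mentioned above converts to a distance through the simple relation d (parsecs) = 1 / p (arcseconds). A minimal sketch of that conversion (the parallax value below is illustrative, not the figure reported for V404 Cygni):

```python
PC_TO_LY = 3.2616  # light-years per parsec (standard value)

def parallax_to_distance_pc(parallax_mas: float) -> float:
    """Distance in parsecs from a parallax given in milliarcseconds."""
    return 1000.0 / parallax_mas

p_mas = 0.4                      # illustrative sub-milliarcsecond parallax, not a measured value
d_pc = parallax_to_distance_pc(p_mas)
print(f"{d_pc:.0f} pc ~ {d_pc * PC_TO_LY:.0f} light-years")  # 2500 pc ~ 8154 light-years
```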
V404 Cygni
[ "Physics", "Astronomy" ]
925
[ "Black holes", "Stellar black holes", "Novae", "Cygnus (constellation)", "Astronomical events", "Unsolved problems in physics", "Constellations" ]
29,298,896
https://en.wikipedia.org/wiki/A%20Healing%20Art
A Healing Art is a 2009 short documentary film from director Ellen Frick. It tells the story of two ocularists, Christie Erickson and Todd Cranmore, who make custom prosthetic eyes. Their story is interwoven with the lives of their patients. A Healing Art was distributed by Ellen Frick and Seattle-based Fly on the Wall Films. Awards The film premièred at the 2009 Toronto International Film Festival, where it won the POV | American Documentary Award and the 2009 Audience Award. It was featured at the Seattle Film Festival in February 2010 and had its television première on PBS's show P.O.V. on August 17, 2010. External links 2009 films American short documentary films 2009 short documentary films Prosthetics 2000s English-language films 2000s American films English-language short documentary films
A Healing Art
[ "Engineering", "Biology" ]
187
[ "Biological engineering", "Bioengineering stubs", "Biotechnology stubs", "Medical technology stubs", "Medical technology" ]
29,302,590
https://en.wikipedia.org/wiki/Plant%20cover
The abundances of plant species are often measured by plant cover, which is the relative area covered by different plant species in a small plot. Plant cover is not biased by the size and distributions of individuals, and is an important and often measured characteristic of the composition of plant communities. Usage Plant cover data may be used to classify the studied plant community into a vegetation type, to test different ecological hypotheses on plant abundance, and in gradient studies, where the effects of different environmental gradients on the abundance of specific plant species are studied. Measurement The most common way to measure plant cover in herbaceous plant communities is to make a visual assessment of the relative area covered by the different species in a small plot (see quadrat). The visually assessed cover of a plant species is then recorded as a continuous variable between 0 and 1, or divided into interval classes as an ordinal variable. An alternative methodology, called the pin-point method (or point-intercept method), has also been widely employed. In a pin-point analysis, a frame with a fixed grid pattern is placed randomly above the vegetation, and a thin pin is inserted vertically through one of the grid points into the vegetation. The different species touched by the pin are recorded at each insertion. The cover of plant species k in a plot, here written q_k, is assumed to be proportional to the number of “hits” by the pin and is estimated by q_k = y_k/n, where y_k is the number of pins that hit species k out of a total of n pins. Since a single pin in multi-species plant communities often will hit more than a single species, the sum of the plant cover of the different species may be larger than unity when estimated by the pin-point method. The sum of the estimated plant cover is expected to increase with the number of plant species in a plot and with increasing 3-dimensional structuring of the plants in the community. Plant cover data obtained by the pin-point method may be modelled by a generalised binomial distribution (or Pólya–Eggenberger distribution). See also References External links PIN-POINT 1.0 is a Mathematica notebook for estimating plant cover from pin-point data using a generalised binomial distribution. Ecology Ecology terminology Ecological metrics
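As an informal illustration of the pin-point estimator described above, here is a minimal sketch in Python; the species names and pin records are invented for the example, and only the simple proportion estimate q_k = y_k/n is computed (the generalised binomial model mentioned above is not implemented).

from collections import Counter

# Hypothetical pin-point records: for each pin insertion, the species touched.
# A single pin may touch several species, so per-species covers can sum to more than 1.
pin_hits = [
    ["Festuca rubra"],
    ["Festuca rubra", "Trifolium repens"],
    [],  # the pin hit bare ground
    ["Trifolium repens"],
    ["Festuca rubra"],
]

n = len(pin_hits)  # total number of pins
hits = Counter(species for record in pin_hits for species in set(record))

# Simple proportion estimate of cover for each species: q_k = y_k / n
for species, y_k in sorted(hits.items()):
    print(f"{species}: estimated cover {y_k / n:.2f} ({y_k} of {n} pins)")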
Plant cover
[ "Mathematics", "Biology" ]
457
[ "Ecology terminology", "Metrics", "Ecological metrics", "Quantity", "Ecology" ]
29,305,046
https://en.wikipedia.org/wiki/Center%20for%20Mathematics%20and%20Theoretical%20Physics
The Center for Mathematics and Theoretical Physics (CMTP) is an Italian institution supporting research in mathematics and theoretical physics. The CMTP was founded on November 17, 2009 as an interdepartmental research center of the three Roman universities: Sapienza, Tor Vergata and Roma Tre. The CMTP's director is Roberto Longo, from the Mathematics Department of Tor Vergata University, and its scientific secretaries are Alberto De Sole, from Sapienza University, and Alessandro Giuliani, from Roma Tre University. The center does not have a permanent location; however, it is temporarily hosted in Tor Vergata's Mathematics Department. The aim of the CMTP, according to its website, is to "take advantage of the high quality and wide spectrum of research in mathematical physics presently carried on in Roma [sic] in order to promote cross fertilization of mathematics and theoretical physics at the highest level by fostering creative interactions of leading experts from both subjects." Activities of the center The CMTP promotes scientific research by organizing workshops, congresses, and periods of thematic research; sending invitations to scientists; and assigning study grants. The CMTP's goal is to attract foreign scientists of international prestige and young talented foreigners to Rome by offering a natural place for scientific education and a base of cultural interchange with other scientific centers abroad. The opening activity of the center was to present the Seminal Interactions between Mathematics and Physics conference hosted by the Accademia Nazionale dei Lincei in Rome. The invited speakers included, among others, four Fields Medalists (Alain Connes, Andrei Okounkov, Stanislav Smirnov and C. Villani) and an Abel Prize winner, Isadore Singer. As part of the conference, the center organized two evening public lectures for the general audience, given by Ludvig Faddeev and Singer. Among its activities, the center runs the Levi Civita colloquia.
References See also Institute for Theoretical Physics (disambiguation) Center for Theoretical Physics (disambiguation) External links Home page: http://cmtp.uniroma2.it/index.php http://cmtp.uniroma2.it/documents/100804Sole24ore.pdf http://cmtp.uniroma2.it/documents/100917Messaggero.pdf https://web.archive.org/web/20110722041533/http://www.lswn.it/en/conferences/2010/seminal_interactions_between_mathematics_and_physics http://www.adnkronos.com/IGN/News/Cronaca/Ricerca-con-il-Cmtp-e-nato-a-Roma-un-nuovo-gruppo-di-ragazzi-di-Via-Panisperna_985806198.html http://cmtp.uniroma2.it/documents/100921DNews.pdf http://matematica.unibocconi.it/news/apre-roma-un-nuovo-centro-la-ricerca-matematica-e-fisica-teorica http://www3.lastampa.it/scienza/sezioni/news/articolo/lstp/333282/ http://cmtp.uniroma2.it/documents/100922ManifestoB.pdf http://cmtp.uniroma2.it/documents/100922CorrieredellaSera.pdf http://roma.corriere.it/roma/notizie/tempo_libero/10_settembre_22/casa-jazz-1703811852126.shtml https://web.archive.org/web/20101114164434/http://news.sciencemag.org/scienceinsider/2010/09/romes-mathematical-physicists-in.html http://www.scienzainrete.it/contenuto/articolo/il-centro-dove-matematica-e-fisica-teorica-si-incontrano http://cmtp.uniroma2.it/documents/PublicServiceReview.pdf Roberto Longo Alberto De Sole Alessandro Giuliani Seminal Interactions between Mathematics and Physics Levi Civita colloquia Mathematical institutes Higher education in Italy Physics research institutes Educational institutions established in 2009 University of Rome Tor Vergata Theoretical physics institutes 2009 establishments in Italy
Center for Mathematics and Theoretical Physics
[ "Physics" ]
989
[ "Theoretical physics", "Theoretical physics institutes" ]
29,309,143
https://en.wikipedia.org/wiki/Vanadium%20nitrogenase
Vanadium nitrogenase is a key enzyme for nitrogen fixation found in nitrogen-fixing bacteria, and is used as an alternative to molybdenum nitrogenase when molybdenum is unavailable. Vanadium nitrogenases are an important biological use of vanadium, which is uncommonly used by life. An important component of the nitrogen cycle, vanadium nitrogenase converts nitrogen gas to ammonia, thereby making otherwise inaccessible nitrogen available to plants. Unlike molybdenum nitrogenase, vanadium nitrogenase can also reduce carbon monoxide to ethylene, ethane and propane, but both enzymes can reduce protons to hydrogen gas and acetylene to ethylene. Biological functions Vanadium nitrogenases are found in members of the bacterial genus Azotobacter as well as the species Rhodopseudomonas palustris and Anabaena variabilis. Most of the functions of vanadium nitrogenase match those of the more common molybdenum nitrogenases and serve as an alternative pathway for nitrogen fixation in molybdenum-deficient conditions. As with molybdenum nitrogenase, dihydrogen functions as a competitive inhibitor and carbon monoxide functions as a non-competitive inhibitor of nitrogen fixation. Vanadium nitrogenase has an α2β2δ2 subunit structure while molybdenum nitrogenase has an α2β2 structure. Though the structural genes encoding vanadium nitrogenase show only about 15% conservation with molybdenum nitrogenases, the two nitrogenases share the same type of iron-sulphur redox centers. At room temperature, vanadium nitrogenase is less efficient at fixing nitrogen than molybdenum nitrogenases because it converts more H+ to H2 as a side reaction. However, at low temperatures vanadium nitrogenases have been found to be more active than the molybdenum type, and at temperatures as low as 5 °C its nitrogen-fixing activity is 10 times higher than that of molybdenum nitrogenase. Like molybdenum nitrogenase, vanadium nitrogenase is easily oxidized and is thus only active under anaerobic conditions. Various bacteria employ complex protection mechanisms to avoid oxygen. The overall stoichiometry of nitrogen fixation catalyzed by vanadium nitrogenase can be summarized as follows: N2 + 12e− + 14H+ + 24MgATP → 2NH4+ + 3H2 + 24MgADP + 24HPO42− The crystal structure of A. vinelandii vanadium nitrogenase was resolved in 2017. Compared to Mo nitrogenase, V nitrogenase replaces one sulfide in the active site with a bridging ligand. Carbon monoxide reduction Research at the University of California, Irvine showed the ability of vanadium nitrogenase to convert carbon monoxide into trace amounts of propane, ethylene, and ethane in the absence of nitrogen through the reduction of carbon monoxide by dithionite and ATP hydrolysis. The process of forming these hydrocarbons is carried out through proton and electron transfer in which short carbon chains are formed and may ultimately allow the production of hydrocarbon fuel from CO at an industrial scale. References Nitrogen cycle Organometallic chemistry Biofuels
Vanadium nitrogenase
[ "Chemistry" ]
694
[ "Nitrogen cycle", "Organometallic chemistry", "Metabolism" ]
29,309,436
https://en.wikipedia.org/wiki/Symposium%20on%20Computational%20Geometry
The International Symposium on Computational Geometry (SoCG) is an academic conference in computational geometry. Today its acronym is pronounced "sausage." It was founded in 1985, with the program committee consisting of David Dobkin, Joseph O'Rourke, Franco Preparata, and Godfried Toussaint; O'Rourke was the conference chair. The symposium was originally sponsored by the SIGACT and SIGGRAPH Special Interest Groups of the Association for Computing Machinery (ACM). It dissociated from the ACM in 2014, motivated by the difficulties of organizing ACM conferences outside the United States and by the possibility of turning to an open-access system of publication. Since 2015 the conference proceedings have been published by the Leibniz International Proceedings in Informatics instead of by the ACM. Since 2019 the conference has been organized under the auspices of the newly formed Society for Computational Geometry. A 2010 assessment of conference quality by the Australian Research Council listed it as "Rank A". References External links Recurring events established in 1985 Mathematics conferences Theoretical computer science conferences Computational geometry Association for Computing Machinery conferences
Symposium on Computational Geometry
[ "Mathematics", "Technology", "Engineering" ]
224
[ "Computer engineering", "Computer engineering stubs", "Computational mathematics", "Computational geometry", "Computing stubs" ]
24,679,482
https://en.wikipedia.org/wiki/Bulk%20polymerization
Bulk polymerization or mass polymerization is carried out by adding a soluble radical initiator to pure monomer in the liquid state. The initiator should dissolve in the monomer. The reaction is initiated by heating or by exposure to radiation. As the reaction proceeds the mixture becomes more viscous. The reaction is exothermic and a wide range of molecular masses are produced. Bulk polymerization is carried out in the absence of any solvent or dispersant and is thus the simplest in terms of formulation. It is used for most step-growth polymers and many types of chain-growth polymers. In the case of chain-growth reactions, which are generally exothermic, the heat evolved may cause the reaction to become too vigorous and difficult to control unless efficient cooling is used. Advantages and disadvantages Bulk polymerization has several advantages over other methods: The system is simple and requires thermal insulation. The polymer obtained is pure. Large castings may be prepared directly. Molecular weight distribution can be easily changed. The product obtained has high optical clarity. Disadvantages: Heat transfer and mixing become difficult as the viscosity of the reaction mass increases. The problem of heat transfer is compounded by the highly exothermic nature of free radical addition polymerization. The polymerization is obtained with a broad molecular weight distribution due to the high viscosity and lack of good heat transfer. Very high molecular weights are obtained. The gel (Trommsdorff) effect can occur. To reduce the disadvantages of bulk polymerization, the process can be carried out in a solution. This is known as solution polymerization. Classification There are two main types of bulk polymerization: Quiescent bulk polymerization There is no agitation in this type of bulk polymerization. This is often used to synthesize cross-linked and thermosetting polymers. Due to the quiescent nature of the system, the Trommsdorff effect is significantly present, which in turn leads to longer chains and tougher material. The major disadvantages of this type of polymerization include entrapped bubbles (or voids) due to monomer boil-off and inability to convert all monomers. Stirred bulk polymerization Continuous stirring of the monomer happens in this type of polymerization. Very specific designs of reactors are used depending upon the viscosity of the polymer. In some applications, the completed polymer melt is transferred from the reactor using a gear pump or by applying moderate external pressure. It differs from solution polymerization in that the monomer itself acts as the solvent. References Polymerization reactions
Bulk polymerization
[ "Chemistry", "Materials_science" ]
527
[ "Polymerization reactions", "Polymer chemistry" ]
24,686,102
https://en.wikipedia.org/wiki/Scenario%20optimization
The scenario approach or scenario optimization approach is a technique for obtaining solutions to robust optimization and chance-constrained optimization problems based on a sample of the constraints. It also relates to inductive reasoning in modeling and decision-making. The technique has existed for decades as a heuristic approach and has more recently been given a systematic theoretical foundation. In optimization, robustness features translate into constraints that are parameterized by the uncertain elements of the problem. In the scenario method, a solution is obtained by only looking at a random sample of constraints (heuristic approach) called scenarios and a deeply-grounded theory tells the user how “robust” the corresponding solution is related to other constraints. This theory justifies the use of randomization in robust and chance-constrained optimization. Data-driven optimization At times, scenarios are obtained as random extractions from a model. More often, however, scenarios are instances of the uncertain constraints that are obtained as observations (data-driven science). In this latter case, no model of uncertainty is needed to generate scenarios. Moreover, most remarkably, also in this case scenario optimization comes accompanied by a full-fledged theory because all scenario optimization results are distribution-free and can therefore be applied even when a model of uncertainty is not available. Theoretical results For constraints that are convex (e.g. in semidefinite problems, involving LMIs (Linear Matrix Inequalities)), a deep theoretical analysis has been established which shows that the probability that a new constraint is not satisfied follows a distribution that is dominated by a Beta distribution. This result is tight since it is exact for a whole class of convex problems. More generally, various empirical levels have been shown to follow a Dirichlet distribution, whose marginals are beta distribution. The scenario approach with regularization has also been considered, and handy algorithms with reduced computational complexity are available. Extensions to more complex, non-convex, set-ups are still objects of active investigation. Along the scenario approach, it is also possible to pursue a risk-return trade-off. Moreover, a full-fledged method can be used to apply this approach to control. First constraints are sampled and then the user starts removing some of the constraints in succession. This can be done in different ways, even according to greedy algorithms. After elimination of one more constraint, the optimal solution is updated, and the corresponding optimal value is determined. As this procedure moves on, the user constructs an empirical “curve of values”, i.e. the curve representing the value achieved after the removing of an increasing number of constraints. The scenario theory provides precise evaluations of how robust the various solutions are. A remarkable advance in the theory has been established by the recent wait-and-judge approach: one assesses the complexity of the solution (as precisely defined in the referenced article) and from its value formulates precise evaluations on the robustness of the solution. These results shed light on deeply-grounded links between the concepts of complexity and risk. 
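As background to the Beta-distribution result mentioned above, the bound on the risk of the scenario solution is often quoted in the following form (stated here from the general scenario-approach literature; the symbols N, d and ε are conventional and are not defined in this article): for a convex scenario program with d decision variables and N sampled scenarios, the solution x*_N satisfies

\Pr\{V(x^*_N) > \varepsilon\} \;\le\; \sum_{i=0}^{d-1} \binom{N}{i}\,\varepsilon^{i}\,(1-\varepsilon)^{N-i},

where V(x) denotes the probability that a new, unseen constraint is violated at x; the statement that the Beta-type result is tight corresponds to this bound holding with equality for fully-supported problems.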
A related approach, named "Repetitive Scenario Design", aims at reducing the sample complexity of the solution by repeatedly alternating a scenario design phase (with a reduced number of samples) with a randomized check of the feasibility of the ensuing solution. Example Consider a function R(x, δ) which represents the return of an investment; it depends on our vector of investment choices x and on the market state δ which will be experienced at the end of the investment period. Given a stochastic model for the market conditions, we consider N of the possible states δ(1), …, δ(N) (randomization of uncertainty). Alternatively, the scenarios can be obtained from a record of observations. We set out to solve the scenario optimization program max_x min_{i=1,…,N} R(x, δ(i)) (1) This corresponds to choosing a portfolio vector x so as to obtain the best possible return in the worst-case scenario. After solving (1), an optimal investment strategy x* is achieved along with the corresponding optimal return R*. While R* has been obtained by looking at N possible market states only, the scenario theory tells us that the solution is robust up to a level ε, that is, the return R* will be achieved with probability 1 − ε for other market states. In quantitative finance, the worst-case approach can be overconservative. One alternative is to discard some odd situations to reduce pessimism; moreover, scenario optimization can be applied to other risk measures including CVaR – Conditional Value at Risk – so adding to the flexibility of its use. Application fields Fields of application include: prediction, systems theory, regression analysis (Interval Predictor Models in particular), actuarial science, optimal control, financial mathematics, machine learning, decision making, supply chain, and management. References Stochastic optimization Optimal decisions Control theory Mathematical finance
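A minimal numerical sketch of the worst-case portfolio program (1) above, written in Python; the simulated return scenarios, the linear return model R(x, δ(i)) = δ(i)·x, the long-only budget constraint and the use of scipy are illustrative assumptions rather than part of the original example.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_assets, n_scenarios = 4, 200

# Hypothetical sampled market states: one row of per-asset returns per scenario.
R = rng.normal(loc=0.05, scale=0.10, size=(n_scenarios, n_assets))

# Variables: portfolio weights x (n_assets entries) and the worst-case return t.
# LP form of (1): maximize t subject to t <= R[i] @ x for every i, sum(x) = 1, x >= 0.
c = np.concatenate([np.zeros(n_assets), [-1.0]])      # minimize -t
A_ub = np.hstack([-R, np.ones((n_scenarios, 1))])     # t - R[i] @ x <= 0
b_ub = np.zeros(n_scenarios)
A_eq = np.concatenate([np.ones(n_assets), [0.0]]).reshape(1, -1)
b_eq = np.array([1.0])
bounds = [(0.0, 1.0)] * n_assets + [(None, None)]     # t is free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x_star, worst_case_return = res.x[:n_assets], res.x[-1]
print("weights:", np.round(x_star, 3), " worst-case return:", round(worst_case_return, 4))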
Scenario optimization
[ "Mathematics" ]
941
[ "Applied mathematics", "Control theory", "Mathematical finance", "Dynamical systems" ]
24,686,241
https://en.wikipedia.org/wiki/Collaborative%20Computing%20Project%20for%20NMR
The Collaborative Computing Project for NMR (CCPN) is a project that aims to bring together computational aspects of the scientific community involved in NMR spectroscopy, especially those who work in the field of protein NMR. The general aims are to link new and existing NMR software via a common data standard and provide a forum within the community for the discussion of NMR software and the scientific methods it supports. CCPN was initially started in 1999 in the United Kingdom but collaborates with NMR and software development groups worldwide. The Collaborative Project for the NMR Community The Collaborative Computing project for NMR spectroscopy was set up in with three main aims; to create a common standard for representing NMR spectroscopy related data, to create a suite of new open-source NMR software packages and to arrange meetings for the NMR community, including conferences, workshops and courses in order to discuss and spread best-practice within the NMR community, for both computational and non-computational aspects. Primary financial support for CCPN comes from the BBSRC; the UK Biotechnology and Biological Sciences Research Council. CCPN is part of an array of collaborative computing projects (CCP) and follows in a similar vein to the successful and well-established CCP4 project for X-ray crystallography. CCPN is also supported by European Union grants, most recently as part of the Extend-NMR project; which links together several software producing groups from across Europe. CCPN is governed by an executive committee which draws its members from academics throughout the UK NMR community. This committee is chosen at the CCPN Assembly Meeting where all UK based NMR groups may participate and vote. The day-to-day work of CCPN, including the organisation of meetings and software development, is handled by an informal working group, coordinated by Ernest Laue at the University of Cambridge, which comprises the core group of staff and developers, as well as a growing number of collaborators throughout the world who contribute to coordinated NMR software development. NMR Data Standards The many different software packages available to the NMR spectroscopy community have traditionally employed a number of different data formats and standards to represent computational information. The inception of CCPN was partly to look at this situation and to develop a more unified approach. It was deemed that multiple, informally connected data standards not only made it more difficult for a user to move from one program to the next, but also adversely affected data fidelity, harvesting and database deposition. To this end CCPN has developed a common data standard for NMR, referred to as the CCPN data model, as well as software routines and libraries that allow access, manipulation and storage of the data. The CCPN system works alongside the Bio Mag Res Bank which continues to handle archiving NMR database depositions; the CCPN standard is for active data exchange and in-program manipulation. Although NMR spectroscopy remains at the core of the data standard it naturally expands into other related areas of science that support and complement NMR. These include molecular and macromolecular description, three-dimensional biological structures, sample preparation, workflow management and software setup. 
The CCPN libraries are created using the principles of model-driven architecture and automatic code generation; the CCPN data model provides a specification for the automatic generation of APIs in multiple languages. To date CCPN provides APIs to its data model in Python, Java and C programming languages. Through its collaborations, CCPN continues to link new and existing software via its data standards. To enable interaction with as much external software as possible, CCPN has created a format conversion program. This allows data to enter from outside the CCPN scheme and provides a mechanism to translate between existing data formats. The open-source CcpNmr FormatConverter software was first released in 2005 and is available for download (from CCPN and SourceForge) but is also recently accessible as a web application. CCPN Software Suite As well as enabling data exchange, CCPN aims to develop software for processing, analysis and interpretation of macromolecular NMR data. To this end CCPN has created CcpNmr Analysis; a graphical program for spectrum visualisation, assignment and NMR data analysis. Here, the requirement was for a program that used a modern graphical user interface and could run on many types of computer. It would be supported and maintained by CCPN and would allow modification and extension, including for new NMR techniques. The first version of Analysis was released in 2005 and is now at version 2.1. Analysis is built directly on the CCPN data model and its design is partly inspired by the older ANSIG. and SPARKY programs, but it has continued to develop from the suggestions, requirements and computational contributions of its user community. Analysis is freely available to academic and non-profit institutions. Commercial users are required to subscribe to CCPN for a moderate fee. CCPN software, including Analysis, is available for download at the CCPN web site and is supported by an active JISC email discussion group. CCPN Meetings Through its meetings CCPN provides a forum for the discussion of computational and experimental NMR techniques. The aim is to debate and spread best practice in the determination of macromolecular information, including structure, dynamics and biological chemistry. CCPN continues to arrange annual conferences for the UK NMR community (the current being the ninth) and a series of workshops to discuss and promote data standards. Because it is vital to the success of CCPN as a software project and as a coordinated NMR community, its software developers run courses to teach the use of CCPN software and its development framework. They also arrange visits to NMR groups to introduce the CCPN program suite and to gain an understanding of the requirements of users. CCPN is especially keen to enable young scientists to contribute to and attend its meetings. Accordingly, wherever possible CCPN tries to keep conference fees at a minimum by using contributions that come from our industrial sponsorship and software subscriptions. Footnotes References Vranken WF, Boucher W, Stevens TJ, Fogh RH, Pajon A, Llinas M, Ulrich EL, Markley JL, Ionides J, Laue ED. (2005) "The CCPN data model for NMR spectroscopy: development of a software pipeline." Proteins 59(4):687-96. Fogh RH, Boucher W, Vranken WF, Pajon A, Stevens TJ, Bhat TN, Westbrook J, Ionides JM, Laue ED.(2005) "A framework for scientific data modeling and automated software development." Bioinformatics. 
21(8):1678-84 External links CCPN Website CCPN Community Software Wiki E-Science Information technology organisations based in the United Kingdom Medical Research Council (United Kingdom) Nuclear magnetic resonance Organisations associated with the University of Cambridge Science and technology in Cambridgeshire
Collaborative Computing Project for NMR
[ "Physics", "Chemistry" ]
1,444
[ "Nuclear magnetic resonance", "Nuclear physics" ]
40,132,737
https://en.wikipedia.org/wiki/Flow%20plasticity%20theory
Flow plasticity is a solid mechanics theory that is used to describe the plastic behavior of materials. Flow plasticity theories are characterized by the assumption that a flow rule exists that can be used to determine the amount of plastic deformation in the material. In flow plasticity theories it is assumed that the total strain in a body can be decomposed additively (or multiplicatively) into an elastic part and a plastic part. The elastic part of the strain can be computed from a linear elastic or hyperelastic constitutive model. However, determination of the plastic part of the strain requires a flow rule and a hardening model. Small deformation theory Typical flow plasticity theories for unidirectional loading (for small deformation perfect plasticity or hardening plasticity) are developed on the basis of the following requirements: The material has a linear elastic range. The material has an elastic limit defined as the stress at which plastic deformation first takes place, i.e., . Beyond the elastic limit the stress state always remains on the yield surface, i.e., . Loading is defined as the situation under which increments of stress are greater than zero, i.e., . If loading takes the stress state to the plastic domain then the increment of plastic strain is always greater than zero, i.e., . Unloading is defined as the situation under which increments of stress are less than zero, i.e., . The material is elastic during unloading and no additional plastic strain is accumulated. The total strain is a linear combination of the elastic and plastic parts, i.e., . The plastic part cannot be recovered while the elastic part is fully recoverable. The work done of a loading-unloading cycle is positive or zero, i.e., . This is also called the Drucker stability postulate and eliminates the possibility of strain softening behavior. The above requirements can be expressed in three dimensional states of stress and multidirectional loading as follows. Elasticity (Hooke's law). In the linear elastic regime the stresses and strains in the material are related by where the stiffness matrix is constant. Elastic limit (Yield surface). The elastic limit is defined by a yield surface that does not depend on the plastic strain and has the form Beyond the elastic limit. For strain hardening materials, the yield surface evolves with increasing plastic strain and the elastic limit changes. The evolving yield surface has the form Loading. For general states of stress, plastic loading is indicated if the state of stress is on the yield surface and the stress increment is directed toward the outside of the yield surface; this occurs if the inner product of the stress increment and the outward normal of the yield surface is positive, i.e., The above equation, when it is equal to zero, indicates a state of neutral loading where the stress state moves along the yield surface. Unloading: A similar argument is made for unloading for which situation , the material is in the elastic domain, and Strain decomposition: The additive decomposition of the strain into elastic and plastic parts can be written as Stability postulate: The stability postulate is expressed as Flow rule In metal plasticity, the assumption that the plastic strain increment and deviatoric stress tensor have the same principal directions is encapsulated in a relation called the flow rule. 
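In conventional notation (the symbols below are the standard textbook choices and are not defined in the text above), the small-strain relations just described can be summarized as

\boldsymbol{\sigma} = \mathsf{C} : \boldsymbol{\varepsilon}^{e}, \qquad
\boldsymbol{\varepsilon} = \boldsymbol{\varepsilon}^{e} + \boldsymbol{\varepsilon}^{p}, \qquad
f(\boldsymbol{\sigma}, \boldsymbol{\varepsilon}^{p}) \le 0,

with plastic loading when f = 0 and \partial f/\partial\boldsymbol{\sigma} : d\boldsymbol{\sigma} > 0, and with the associated flow rule

d\boldsymbol{\varepsilon}^{p} = d\lambda\,\frac{\partial f}{\partial \boldsymbol{\sigma}}, \qquad d\lambda \ge 0.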
Rock plasticity theories also use a similar concept except that the requirement of pressure-dependence of the yield surface requires a relaxation of the above assumption. Instead, it is typically assumed that the plastic strain increment and the normal to the pressure-dependent yield surface have the same direction, i.e., where is a hardening parameter. This form of the flow rule is called an associated flow rule and the assumption of co-directionality is called the normality condition. The function is also called a plastic potential. The above flow rule is easily justified for perfectly plastic deformations for which when , i.e., the yield surface remains constant under increasing plastic deformation. This implies that the increment of elastic strain is also zero, , because of Hooke's law. Therefore, Hence, both the normal to the yield surface and the plastic strain tensor are perpendicular to the stress tensor and must have the same direction. For a work hardening material, the yield surface can expand with increasing stress. We assume Drucker's second stability postulate which states that for an infinitesimal stress cycle this plastic work is positive, i.e., The above quantity is equal to zero for purely elastic cycles. Examination of the work done over a cycle of plastic loading-unloading can be used to justify the validity of the associated flow rule. Consistency condition The Prager consistency condition is needed to close the set of constitutive equations and to eliminate the unknown parameter from the system of equations. The consistency condition states that at yield because , and hence Large deformation theory Large deformation flow theories of plasticity typically start with one of the following assumptions: the rate of deformation tensor can be additively decomposed into an elastic part and a plastic part, or the deformation gradient tensor can be multiplicatively decomposed in an elastic part and a plastic part. The first assumption was widely used for numerical simulations of metals but has gradually been superseded by the multiplicative theory. Kinematics of multiplicative plasticity The concept of multiplicative decomposition of the deformation gradient into elastic and plastic parts was first proposed independently by B. A. Bilby, E. Kröner, in the context of crystal plasticity and extended to continuum plasticity by Erasmus Lee. The decomposition assumes that the total deformation gradient (F) can be decomposed as: where Fe is the elastic (recoverable) part and Fp is the plastic (unrecoverable) part of the deformation. The spatial velocity gradient is given by where a superposed dot indicates a time derivative. We can write the above as The quantity is called a plastic velocity gradient and is defined in an intermediate (incompatible) stress-free configuration. The symmetric part (Dp) of Lp is called the plastic rate of deformation while the skew-symmetric part (Wp) is called the plastic spin: Typically, the plastic spin is ignored in most descriptions of finite plasticity. Elastic regime The elastic behavior in the finite strain regime is typically described by a hyperelastic material model. The elastic strain can be measured using an elastic right Cauchy-Green deformation tensor defined as: The logarithmic or Hencky strain tensor may then be defined as The symmetrized Mandel stress tensor is a convenient stress measure for finite plasticity and is defined as where S is the second Piola-Kirchhoff stress. 
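In one common notation (again, the symbols are conventional rather than ones carried over from the text above), the finite-strain quantities just introduced read

\boldsymbol{F} = \boldsymbol{F}^{e}\boldsymbol{F}^{p}, \qquad
\boldsymbol{l} = \dot{\boldsymbol{F}}\boldsymbol{F}^{-1}, \qquad
\boldsymbol{L}^{p} = \dot{\boldsymbol{F}}^{p}(\boldsymbol{F}^{p})^{-1}, \qquad
\boldsymbol{C}^{e} = (\boldsymbol{F}^{e})^{T}\boldsymbol{F}^{e}, \qquad
\boldsymbol{E}^{e} = \tfrac{1}{2}\ln\boldsymbol{C}^{e},

with the symmetrized Mandel stress commonly defined as \boldsymbol{M} = \tfrac{1}{2}(\boldsymbol{C}^{e}\boldsymbol{S} + \boldsymbol{S}\boldsymbol{C}^{e}).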
A possible hyperelastic model in terms of the logarithmic strain is where W is a strain energy density function, J = det(F), μ is a modulus, and "dev" indicates the deviatoric part of a tensor. Flow rule Application of the Clausius-Duhem inequality leads, in the absence of a plastic spin, to the finite strain flow rule Loading-unloading conditions The loading-unloading conditions can be shown to be equivalent to the Karush-Kuhn-Tucker conditions Consistency condition The consistency condition is identical to that for the small strain case, References See also Plasticity (physics) Continuum mechanics Solid mechanics
Flow plasticity theory
[ "Physics" ]
1,522
[ "Solid mechanics", "Mechanics", "Classical mechanics", "Continuum mechanics" ]
40,133,154
https://en.wikipedia.org/wiki/Lanthanum%20aluminate-strontium%20titanate%20interface
The interface between lanthanum aluminate (LaAlO3) and strontium titanate (SrTiO3) is a notable materials interface because it exhibits properties not found in its constituent materials. Individually, LaAlO3 and SrTiO3 are non-magnetic insulators, yet LaAlO3/SrTiO3 interfaces can exhibit electrical metallic conductivity, superconductivity, ferromagnetism, large negative in-plane magnetoresistance, and giant persistent photoconductivity. The study of how these properties emerge at the LaAlO3/SrTiO3 interface is a growing area of research in condensed matter physics. Emergent properties Conductivity Under the right conditions, the LaAlO3/SrTiO3 interface is electrically conductive, like a metal. The angular dependence of Shubnikov–de Haas oscillations indicates that the conductivity is two-dimensional, leading many researchers to refer to it as a two-dimensional electron gas (2DEG). Two-dimensional does not mean that the conductivity has zero thickness, but rather that the electrons are confined to only move in two directions. It is also sometimes called a two-dimensional electron liquid (2DEL) to emphasize the importance of inter-electron interactions. Conditions necessary for conductivity Not all LaAlO3/SrTiO3 interfaces are conductive. Typically, conductivity is achieved only when: The LaAlO3/SrTiO3 interface is along the 001,110 and 111 crystallographic direction The LaAlO3 and SrTiO3 are crystalline and epitaxial The SrTiO3 side of the interface is TiO2-terminated (causing the LaAlO3 side of the interface to be LaO-terminated) The LaAlO3 layer is at least 4 unit cells thick Conductivity can also be achieved when the SrTiO3 is doped with oxygen vacancies; however, in that case, the interface is technically LaAlO3/SrTiO3−x instead of LaAlO3/SrTiO3. Hypotheses for conductivity The source of conductivity at the LaAlO3/SrTiO3 interface has been debated for years. SrTiO3 is a wide-band gap semiconductor that can be doped n-type in a variety of ways. Clarifying the mechanism behind the conductivity is a major goal of current research. Four leading hypotheses are: Polar gating Oxygen vacancies Intermixing Structural distortions Polar gating Polar gating was the first mechanism used to explain the conductivity at LaAlO3/SrTiO3 interfaces. It postulates that the LaAlO3, which is polar in the 001 direction (with alternating sheets of positive and negative charge), acts as an electrostatic gate on the semiconducting SrTiO3. When the LaAlO3 layer grows thicker than three unit cells, its valence band energy rises above the Fermi level, causing holes (or positively charged oxygen vacancies ) to form on the outer surface of the LaAlO3. The positive charge on the surface of the LaAlO3 attracts negative charge to nearby available states. In the case of the LaAlO3/SrTiO3 interface, this means electrons accumulate in the surface of the SrTiO3, in the Ti d bands. The strengths of the polar gating hypothesis are that it explains why conductivity requires a critical thickness of four unit cells of LaAlO3 and that it explains why conductivity requires the SrTiO3 to be TiO2-terminated. The polar gating hypothesis also explains why alloying the LaAlO3 increases the critical thickness for conductivity. One weakness of the hypothesis is that it predicts that the LaAlO3 films should exhibit a built-in electric field; so far, x-ray photoemission experiments and other experiments have shown little to no built-in field in the LaAlO3 films. 
The polar gating hypothesis also cannot explain why Ti3+ is detected when the LaAlO3 films are thinner than the critical thickness for conductivity. The polar gating hypothesis is sometimes called the polar catastrophe hypothesis, alluding to the counterfactual scenario where electrons don't accumulate at the interface and instead voltage in the LaAlO3 builds up forever. The hypothesis has also been called the electronic reconstruction hypothesis, highlighting the fact that electrons, not ions, move to compensate the building voltage. Oxygen vacancies Another hypothesis is that the conductivity comes from free electrons left by oxygen vacancies in the SrTiO3. SrTiO3 is known to be easily doped by oxygen vacancies, so this was initially considered a promising hypothesis. However, electron energy loss spectroscopy measurements have bounded the density of oxygen vacancies well below the density necessary to supply the measured free electron densities. Another proposed possibility is that oxygen vacancies in the surface of the LaAlO3 are remotely doping the SrTiO3. Under generic growth conditions, multiple mechanisms can coexist. A systematic study across a wide growth parameter space demonstrated different roles played by oxygen vacancy formation and the polar gating at different interfaces. An obvious difference between oxygen vacancies and polar gating in creating the interface conductivity is that the carriers from oxygen vacancies are thermally activated as the donor level of oxygen vacancies is usually separated from the SrTiO3 conduction band, consequently exhibiting the carrier freeze-out effect at low temperatures; in contrast, the carriers originating from the polar gating are transferred into the SrTiO3 conduction band (Ti 3d orbitals) and are therefore degenerate. Intermixing Lanthanum is a known dopant in SrTiO3, so it has been suggested that La from the LaAlO3 mixes into the SrTiO3 and dopes it n-type. Multiple studies have shown that intermixing takes place at the interface; however, it is not clear whether there is enough intermixing to provide all of the free carriers. For example, a flipped interface between a SrTiO3 film and a LaAlO3 substrate is insulating. Structural distortions A fourth hypothesis is that the LaAlO3 crystal structure undergoes octahedral rotations in response to the strain from the SrTiO3. These octahedral rotations in the LaAlO3 induce octahedral rotations in the SrTiO3, increasing the Ti d-band width enough so that electrons are no longer localized. Superconductivity Superconductivity was first observed in LaAlO3/SrTiO3 interfaces in 2007, with a critical temperature of ~200 mK. Like the conductivity, the superconductivity appears to be two-dimensional. Ferromagnetism Hints of ferromagnetism in LaAlO3/SrTiO3 were first seen in 2007, when Dutch researchers observed hysteresis in the magnetoresistance of LaAlO3/SrTiO3. Follow up measurements with torque magnetometry indicated that the magnetism in LaAlO3/SrTiO3 persisted all the way to room temperature. In 2011, researchers at Stanford University used a scanning SQUID to directly image the ferromagnetism, and found that it occurred in heterogeneous patches. Like the conductivity in LaAlO3/SrTiO3, the magnetism only appeared when the LaAlO3 films were thicker than a few unit cells. However, unlike conductivity, magnetism was seen at SrO-terminated surfaces as well as TiO2-terminated surfaces. 
The discovery of ferromagnetism in a materials system that also superconducts spurred a flurry of research and debate, because ferromagnetism and superconductivity almost never coexist. Ferromagnetism requires electron spins to align, while superconductivity typically requires electron spins to anti-align. Magnetoresistance Magnetoresistance measurements are a major experimental tool used to understand the electronic properties of materials. The magnetoresistance of LaAlO3/SrTiO3 interfaces has been used to reveal the 2D nature of conduction, carrier concentrations (through the Hall effect), electron mobilities, and more. Field applied out-of-plane At low magnetic field, the magnetoresistance of LaAlO3/SrTiO3 is parabolic versus field, as expected for an ordinary metal. However, at higher fields, the magnetoresistance appears to become linear versus field. Linear magnetoresistance can have many causes, but so far there is no scientific consensus on the cause of linear magnetoresistance in LaAlO3/SrTiO3 interfaces. Linear magnetoresistance has also been measured in pure SrTiO3 crystals, so it may be unrelated to the emergent properties of the interface. Field applied in-plane At low temperature (T < 30 K), the LaAlO3/SrTiO3 interface exhibits negative in-plane magnetoresistance, sometimes as large as −90%. The large negative in-plane magnetoresistance has been ascribed to the interface's enhanced spin-orbit interaction. Electron gas distribution at the LaAlO3/SrTiO3 interface Experimentally, the charge density profile of the electron gas at the LaAlO3/SrTiO3 interface has a strongly asymmetric shape with a rapid initial decay over the first 2 nm and a pronounced tail that extends to about 11 nm. A wide variety of theoretical calculations support this result. Importantly, to obtain the electron distribution one has to take into account the field-dependent dielectric constant of SrTiO3. Comparison to other 2D electron gases The 2D electron gas that arises at the LaAlO3/SrTiO3 interface is notable for two main reasons. First, it has a very high carrier concentration, on the order of 10^13 cm^−2. Second, if the polar gating hypothesis is true, the 2D electron gas has the potential to be totally free of disorder, unlike other 2D electron gases that require doping or gating to form. However, so far researchers have been unable to synthesize interfaces that realize the promise of low disorder. Synthesis methods Most LaAlO3/SrTiO3 interfaces are synthesized using pulsed laser deposition. A high-power laser ablates a LaAlO3 target, and the plume of ejected material is deposited onto a heated SrTiO3 substrate. Typical conditions used are: Laser wavelength of 248 nm Laser fluence of 0.5 J/cm2 to 2 J/cm2 Substrate temperature of 600 °C to 850 °C Background oxygen pressure of 10^−5 Torr to 10^−3 Torr Some LaAlO3/SrTiO3 interfaces have also been synthesized by molecular beam epitaxy, sputtering, and atomic layer deposition. Similar interfaces To better understand the LaAlO3/SrTiO3 interface, researchers have synthesized a number of analogous interfaces between other polar perovskite films and SrTiO3. Some of these analogues have properties similar to LaAlO3/SrTiO3, but some do not.
Conductive interfaces GdTiO3/SrTiO3 LaTiO3/SrTiO3 LaVO3/SrTiO3 LaGaO3/SrTiO3 PrAlO3/SrTiO3 NdAlO3/SrTiO3 NdGaO3/SrTiO3 GdAlO3/SrTiO3 Nd0.35Sr0.65MnO3/SrTiO3 Al2O3/SrTiO3 amorphous-YAlO3/SrTiO3 La0.5Al0.5Sr0.5Ti0.5O3/SrTiO3 DyScO3/SrTiO3 KTaO3/SrTiO3 CaZrO3/SrTiO3 Insulating interfaces LaCrO3/SrTiO3 LaMnO3/SrTiO3 La2O3/SrTiO3 Y2O3/SrTiO3 LaYO3/SrTiO3 EuAlO3/SrTiO3 BiMnO3/SrTiO3 Applications As of 2015, there are no commercial applications of the LaAlO3/SrTiO3 interface. However, speculative applications have been suggested, including field-effect devices, sensors, photodetectors, and thermoelectrics; related LaVO3/SrTiO3 is a functional solar cell albeit hitherto with a low efficiency. References External links Materials science: Enter the oxides Condensed matter physics Materials science Perovskites
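As a rough numerical companion to the carrier concentrations and the polar-gating picture discussed above, here is a minimal Python sketch; the half-electron-per-surface-unit-cell transfer and the SrTiO3 lattice constant are standard literature values used purely for illustration and are not quoted in this article.

# Ideal polar-gating estimate: half an electron transferred per SrTiO3 surface unit cell.
a = 3.905e-8           # SrTiO3 lattice constant in cm (about 3.905 angstroms)
n_ideal = 0.5 / a**2   # ideal sheet carrier density, cm^-2

n_typical = 1e13       # order of magnitude of measured sheet densities, cm^-2 (see above)

print(f"ideal polar-gating estimate: {n_ideal:.2e} cm^-2")
print(f"ratio to a typical measured value: {n_ideal / n_typical:.0f}x")

The large gap between this ideal estimate and typical measured values is one reason the origin of the carriers remains debated.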
Lanthanum aluminate-strontium titanate interface
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,614
[ "Applied and interdisciplinary physics", "Phases of matter", "Materials science", "Condensed matter physics", "nan", "Matter" ]
40,142,982
https://en.wikipedia.org/wiki/Irreversible%20electroporation
Irreversible electroporation or IRE is a soft tissue ablation technique using short but strong electrical fields to create permanent and hence lethal nanopores in the cell membrane, to disrupt cellular homeostasis. The resulting cell death results from induced apoptosis or necrosis induced by either membrane disruption or secondary breakdown of the membrane due to transmembrane transfer of electrolytes and adenosine triphosphate. The main use of IRE lies in tumor ablation in regions where precision and conservation of the extracellular matrix, blood flow and nerves are of importance. The first generation of IRE for clinical use, in the form of the NanoKnife System, became commercially available for research purposes in 2009, solely for the surgical ablation of soft tissue tumors. Cancerous tissue ablation via IRE appears to show significant cancer specific immunological responses which are currently being evaluated alone and in combination with cancer immunotherapy. History First observations of IRE effects go back to 1754. Nollet reported the first systematic observations of the appearance of red spots on animal and human skin that was exposed to electric sparks. However, its use for modern medicine began in 1982 with the seminal work of Neumann and colleagues. Pulsed electric fields were used to temporarily permeabilize cell membranes to deliver foreign DNA into cells. In the following decade, the combination of high-voltage pulsed electric fields with the chemotherapeutic drug bleomycin and with DNA yielded novel clinical applications: electrochemotherapy and gene electrotransfer, respectively. The use of irreversible electroporation for therapeutic applications was first suggested by Davalos, Mir, and Rubinsky. Mechanism Utilizing ultra short pulsed but very strong electrical fields, micropores and nanopores are induced in the phospholipid bilayers which form the outer cell membranes. Two kinds of damage can occur: Reversible electroporation (RE): Temporary and limited pathways for molecular transport via nanopores are formed, but after the end of the electric pulse, the transport ceases and the cells remain viable. Medical applications are, for example, local introduction of intracellular cytotoxic pharmaceuticals such as bleomycin (electroporation and electrochemotherapy). Irreversible electroporation (IRE): After a certain degree of damage to the cell membranes by electroporation, the leakage of intracellular contents is too severe or the resealing of the cellular membrane is too slow, leaving healthy and/or cancerous cells irreversibly damaged. They die by either apoptosis or via cell-internally induced necrotic pathways, which is unique to this ablation technique. It should be stated that even though the ablation method is generally accepted to be apoptosis, some findings seem to contradict a pure apoptotic cell death, making the exact process by which IRE causes cell death unclear. In any case, all studies agree that the cell death is an induced one with the cells dying over a varying time period of hours to days and does not rely on local extreme heating and melting of tissue via high energy deposition like most ablation technologies (see radiofrequency ablation, microwave ablation, High-intensity focused ultrasound). When an electrical field of more than 0.5 V/nm is applied to the resting trans-membrane potential, it is proposed that water enters the cell during this dielectric breakdown. Hydrophilic pores are formed. 
A molecular dynamics simulation by Tarek illustrates this proposed pore formation in two steps: After the application of an electrical field, water molecules line up in single file and penetrate the hydrophobic center of the bilayer lipid membrane. These water channels continue to grow in length and diameter and expand into water-filled pores, at which point they are stabilized by the lipid head groups that move from the membrane-water interface to the middle of the bilayer. It is proposed that as the applied electrical field increases, the greater is the perturbation of the phospholipid head groups, which in turn increases the number of water filled pores. This entire process can occur within a few nanoseconds. Average sizes of nanopores are likely cell-type specific. In swine livers, they average around 340-360 nm, as found using SEM. A secondary described mode of cell death was described to be from a breakdown of the membrane due to transmembrane transfer of electrolytes and adenosine triphosphate. Other effects like heat or electrolysis were also shown to play a role in the currently clinically applied IRE pulse protocols. Potential advantages and disadvantages Advantages of IRE Tissue selectivity - conservation of vital structures within the treatment field. Its capability of preserving vital structures within the IRE-ablated zone. In all IRE ablated liver tissues, critical structures, such as the hepatic arteries, hepatic veins, portal veins and intrahepatic bile ducts were all preserved. As IRE targets the bilipid membranes of cells, structures mainly consisting of proteins like vascular elastic and collagenous structures, as well as peri-cellular matrix proteins are not affected by the currents. Vital and scaffolding structures (like large blood vessels, urethra or intrahepatic bile ducts) are conserved. The electrically insulating myelin layer, surrounding nerve fibers, protects nerve bundles from the IRE effects to a certain degree. Up to what point nerves stay unaffected or can regenerate is not completely understood. Sharp ablation zone margins- The transition zone between reversible electroporated area and irreversible electroporated area is accepted to be only a few cell layers. Whereas, the transition areas as in radiation or thermal based ablation techniques are non-existent. Further, the absence of the heat sink effect, which is a cause of many problems and treatment failures, is advantageous and increases the predictability of the treatment field. Geometrically, rather complex treatment fields are enabled by the multi-electrode concept. Absence of thermally induced necrosis - The short pulse lengths relative to the time between the pulses prevents joule heating of the tissue. Hence, by design, no necrotic cell damage is to be expected (except possibly in very close proximity to the needle). Therefore, IRE has none of the typical short and long term side-effects associated with necrosis. Short treatment time - A typical treatment takes less than 5 minutes. This does not include the possibly complicated electrode placement which might require the use of many electrode and re-position of the electrodes during the procedure. Real time monitoring - The treatment volume can be to a certain degree be visualized, both during and after the treatment. Possible visualization methods are ultrasound, MRI, and CT. 
Immunological response - IRE appears to provoke a stronger immunological response than other ablation methods, and this response is currently being studied for use in conjunction with cancer immunotherapeutic approaches. Disadvantages of IRE Strong muscle contractions - The strong electric fields created by IRE, due to direct stimulation of the neuromuscular junction, cause strong muscle contractions requiring special anesthesia and total body paralysis. Incomplete ablation within targeted tumors - The original threshold for IRE of cells was approximately 600 V/cm with 8 pulses, a pulse duration of 100 μs, and a frequency of 10 Hz. Qin et al. later discovered that even at 1,300 V/cm with 99 pulses, a pulse duration of 100 μs, and 10 Hz, there were still islands of viable tumor cells within ablated regions. This suggests that tumor tissue may respond differently to IRE than healthy parenchyma. The mechanism of cell death following IRE relies on cellular apoptosis, which results from pore formation in the cellular membrane. Tumor cells, known to be resistant to apoptotic pathways, may require higher thresholds of energy to be adequately treated. However, the recurrence rates found in clinical studies suggest a rather low recurrence rate and often superior overall survival when compared with other ablation modalities. Local environment - The electric fields of IRE are strongly influenced by the conductivity of the local environment. The presence of metal, for example with biliary stents, can result in variances in energy deposition. Various organs, such as the kidneys, are also subject to irregular ablation zones, due to the increased conductivity of urine. Use in medical practice A number of electrodes, in the form of long needles, are placed around the target volume. The point of penetration for the electrodes is chosen according to anatomical conditions. Imaging is essential to the placement and can be achieved by ultrasound, magnetic resonance imaging or tomography. The needles are then connected to the IRE generator, which then sequentially builds up a potential difference between pairs of electrodes. The geometry of the IRE treatment field is calculated in real time and can be influenced by the user. Depending on the treatment field and the number of electrodes used, the ablation takes between 1 and 10 minutes. In general, muscle relaxants are administered, since even under general anesthetics, strong muscle contractions are induced by excitation of the motor end-plate. Typical parameters (1st generation IRE system): Number of pulses per treatment: 90 Pulse length: 100 μs Intermission between pulses: 100 to 1000 ms Field strength: 1500 volt/cm Current: ca. 50 A (tissue- and geometry dependent) Max ablation volume using two electrodes: 4 × 3 × 2 cm³ The short, strong electrical pulses are delivered through thin, sterile, disposable electrodes. The potential differences are calculated and applied by a computer system between these electrodes in accordance with a previously planned treatment field. One specific device for the IRE procedure is the NanoKnife system manufactured by AngioDynamics, which received FDA 510k clearance on October 24, 2011. The NanoKnife system has also received an Investigational Device Exemption (IDE) from the FDA that allows AngioDynamics to conduct clinical trials using this device. The NanoKnife system transmits a low-energy direct current from a generator to electrode probes placed in the target tissues for the surgical ablation of soft tissue.
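To make the typical first-generation pulse parameters listed above more concrete, a small illustrative calculation in Python; the 1.5 cm electrode spacing is a hypothetical value chosen for the example, and the uniform-field estimate of the applied voltage is a simplification of the real needle-electrode geometry.

# Typical first-generation IRE parameters (from the list above) plus an assumed spacing.
field_strength = 1500.0    # V/cm, nominal field strength
electrode_spacing = 1.5    # cm, hypothetical needle-to-needle distance
pulses = 90                # pulses per treatment
pulse_length = 100e-6      # s (100 microseconds)

applied_voltage = field_strength * electrode_spacing   # uniform-field approximation
total_on_time_ms = pulses * pulse_length * 1e3         # total energized time in ms

print(f"approximate applied voltage: {applied_voltage:.0f} V")
print(f"total pulse-on time per electrode pair: {total_on_time_ms:.1f} ms")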
In 2011, AngioDynamics received an FDA warning letter for promoting the device for indications for which it had not received approval. In 2013, the UK National Institute for Health and Clinical Excellence issued guidance that the safety and efficacy of irreversible electroporation for the treatment of various types of cancer have not yet been established. Newer generations of electroporation-based ablation systems are being developed specifically to address the shortcomings of the first generation of IRE but, as of June 2020, none of the technologies are available as a medical device. Clinical data Potential organ systems where IRE might have a significant impact due to its properties include the pancreas, liver, prostate and the kidneys, which were the main focus of the studies listed in Tables 1–3 (as of June 2020). None of the potential organ systems, which may be treated for various conditions and tumors, are covered by randomized multicenter trials or long-term follow-ups (as of June 2020). Liver Hepatic IRE appears to be safe, even when performed near vessels and bile ducts, with an overall complication rate of 16%, with most complications being needle-related (pneumothorax and hemorrhage). The COLDFIRE-2 trial with 50 patients showed 76% local tumor progression-free survival after 1 year. Whilst there are no studies comparing IRE to other ablative therapies yet, thermal ablations have shown a higher efficacy in that matter, with around 96% progression-free survival. Therefore, Bart et al. concluded that IRE should currently be performed only for truly unresectable and non-ablatable tumors. Pancreas Animal studies have shown the safety and efficacy of IRE on pancreatic tissue. The overall survival rates in studies on the use of IRE for pancreatic cancer provide an encouraging nonvariable endpoint and show an additive beneficial effect of IRE compared with standard-of-care chemotherapeutic treatment with FOLFIRINOX (a combination of 5-fluorouracil, leucovorin, irinotecan, and oxaliplatin) (median OS, 12–14 months). However, IRE appears to be more effective in conjunction with systemic therapy and is not suggested as first-line treatment. Although IRE makes adjuvant tumor mass reduction therapy for LAPC possible, IRE remains, in its current state, a high-risk procedure requiring additional safety data before it can be used widely. Prostate The concept of treating prostate cancer with IRE was first proposed by Gary Onik and Boris Rubinsky in 2007. Prostate carcinomas are frequently located near sensitive structures which might be permanently damaged by thermal treatments or radiation therapy. The applicability of surgical methods is often limited by accessibility and precision. Surgery is also associated with a long healing time and a high rate of side effects. Using IRE, the urethra, bladder, rectum, neurovascular bundle and lower urinary sphincter can potentially be included in the treatment field without creating (permanent) damage. IRE has been in use against prostate cancer since 2011, partly in the form of clinical trials, compassionate care or individualized treatment approaches. As for all other ablation technologies and also most conventional methods, no studies employed a randomized multi-center approach or targeted cancer-specific mortality as an endpoint.
Cancer-specific mortality and overall survival are notoriously hard to assess for prostate cancer, as the trials require more than a decade and several treatment types are usually performed over the years, making treatment-specific survival advantages difficult to quantify. Therefore, the results of ablation-based treatments, and of focal treatments in general, usually use local recurrences and functional outcome (quality of life) as endpoints. In that regard, the clinical results collected so far and listed in Table 3 are encouraging and uniformly describe IRE as a safe and effective treatment (at least for focal ablation), but all warrant further studies. The largest cohort, presented by Guenther et al. with up to 6-year follow-up, is limited in that it is a heterogeneous retrospective analysis rather than a prospective clinical trial. Therefore, although several hospitals in Europe have been employing the method for many years, with one private clinic even listing more than one thousand treatments as of June 2020, IRE for prostate cancer is currently not recommended in treatment guidelines.

Kidney

While nephron-sparing surgery is the gold-standard treatment for small, malignant renal masses, ablative therapies are considered a viable option in patients who are poor surgical candidates. Radiofrequency ablation (RFA) and cryoablation have been used since the 1990s; however, in lesions larger than 3 cm their efficacy is limited. The newer ablation modalities, such as IRE, microwave ablation (MWA), and high-intensity focused ultrasound, may help overcome these limitations in tumor size. The first human studies have proven the safety of IRE for the ablation of renal masses; however, the effectiveness of IRE as assessed through histopathological examination of an ablated renal tumor in humans is yet to be established. Wagstaff et al. have set out to investigate the safety and effectiveness of IRE ablation of renal masses and to evaluate the efficacy of ablation using MRI and contrast-enhanced ultrasound imaging. In accordance with the prospective protocol designed by the authors, the treated patients will subsequently undergo radical nephrectomy to assess IRE ablation success. Later phase 2 prospective trials showed good results in terms of safety and feasibility for small renal masses, but the cohorts were limited in number (7 and 10 patients, respectively); hence, efficacy is not yet sufficiently determined. IRE appears safe for small renal masses up to 4 cm. However, the consensus is that current evidence is still inadequate in quality and quantity.

Lung

In a prospective, single-arm, multi-center, phase II clinical trial, the safety and efficacy of IRE on lung cancers were evaluated. The trial included patients with primary and secondary lung malignancies and preserved lung function. The expected effectiveness was not met at interim analysis and the trial was stopped prematurely. Complications included pneumothoraces (11 of 23 patients) and alveolar hemorrhage not resulting in significant hemoptysis; needle tract seeding was found in 3 cases (13%). Disease progression was seen in 14 of 23 patients (61%). Stable disease was found in 1 (4%), partial remission in 1 (4%), and complete remission in 7 (30%) patients. The authors concluded that IRE is not effective for the treatment of lung malignancies. Similarly poor treatment outcomes have been observed in other studies.
A major obstacle for IRE in the lung is the difficulty of positioning the electrodes; placing the probes in parallel alignment is made challenging by the interposition of ribs. Additionally, the planned and actual ablation zones in the lung are dramatically different due to the differences in conductivity between tumor, lung parenchyma, and air.

Coronary arteries

Maor et al. have demonstrated the safety and efficiency of IRE as an ablation modality for smooth muscle cells in the walls of large vessels in a rat model. Therefore, IRE has been suggested as a preventive treatment for coronary artery re-stenosis after percutaneous coronary intervention.

Cardiac ablation therapy

Numerous studies in animals have demonstrated the safety and efficiency of IRE as a non-thermal ablation modality for pulmonary veins in the context of atrial fibrillation treatment. As of 2023, irreversible electroporation is being widely used and evaluated in humans as a cardiac ablation therapy to kill very small areas of heart muscle. This is done to treat irregularities of heart rhythm. A cardiac catheter delivers trains of high-voltage, ultra-rapid electrical pulses that form irreversible pores in cell membranes, resulting in cell death. It is thought to allow better selectivity than the previous techniques, which used heat or cold to kill larger volumes of muscle.

Other organs

IRE has also been investigated in ex-vivo human eye models for the treatment of uveal melanoma and in thyroid cancer. Successful ablations in animal tumor models have been conducted for lung, brain, heart, skin, bone, head and neck cancer, and blood vessels.

References

Further reading

Interventional radiology Medical physics Vascular procedures Surgical oncology Medical technology
Irreversible electroporation
[ "Physics", "Biology" ]
3,891
[ "Applied and interdisciplinary physics", "Medical physics", "Medical technology" ]
44,520,102
https://en.wikipedia.org/wiki/Biodegradable%20athletic%20footwear
Biodegradable athletic footwear is athletic footwear that uses biodegradable materials with the ability to compost at the end-of-life phase. Such materials include natural biodegradable polymers, synthetic biodegradable polymers, and biodegradable blends. The use of biodegradable materials is a long-term solution to landfill pollution that can significantly help protect the natural environment by replacing the synthetic, non-biodegradable polymers found in athletic footwear.

Problem of non-degradable waste

The United States athletic shoe market is a $13-billion-per-year industry that sells more than 350 million pairs of athletic shoes annually. Global footwear consumption has nearly doubled every twenty years, from 2.5 billion pairs in 1950 to more than 19 billion pairs of shoes in 2005. The increase in demand for athletic shoe products has progressively decreased the useful lives of shoes as a result of rapid market changes and new consumer trends. The shorter life cycle of athletic footwear has begun to create non-degradable waste in landfills due to the synthetic and other non-biodegradable materials used in production. The considerable growth in industrial production and consumption has made the athletic footwear industry face the environmental challenge of the end-of-life waste it generates.

Ethylene vinyl acetate copolymer

The athletic shoe midsole is one of the main contributors to the generation of end-of-life waste because it is composed of polymeric foams based on ethylene-vinyl acetate (EVA). EVA is a polyolefin copolymer of ethylene and vinyl acetate that provides durability and flexibility, making it the most commonly used material found in athletic shoe midsoles. Although the synthetic polymer is a useful material for the athletic shoe industry, it has become an environmental concern because of its poor biodegradability. EVA goes through an anaerobic decomposition process called thermal degradation that often occurs in landfills, resulting in the release of volatile organic compounds (VOCs) into the air. VOCs "contribute to the formation of tropospheric ozone, which is harmful to humans and plant life." Thermal degradation of EVA is temperature-dependent and occurs in two stages; in the first stage acetic acid is lost, followed by the degradation of the unsaturated polyethylene polymer.

Environmental impact

The environmental impacts of athletic shoe degradation in landfills "are inextricably connected to the nature of the materials." The production of many petroleum-based products, such as EVA, used to manufacture athletic shoes results in serious environmental pollution of groundwater and rivers when these products are disposed of in landfills. When disposed of in landfills, athletic footwear can take up to thousands of years to naturally degrade. EVA athletic shoe midsoles can remain in contact with moist soil for a period of 12 years with little to no evidence of biodeterioration. Although some are taking initiatives to produce environmentally friendly athletic footwear, most of the footwear industry's response to this increasing problem of end-of-life shoe waste has been negligible. In order to reduce post-consumer waste and improve the environmental properties of athletic shoes, biodegradable materials with the ability to compost at the end-of-life phase can help to replace synthetic polymers such as ethylene-vinyl acetate.
Biodegradable materials

"Biodegradation is a chemical degradation of materials provoked by the action of microorganisms such as bacteria, fungi, and algae." Although many materials are categorized as biodegradable, there has been increasing interest in biodegradable polymers, which can lead to waste management options for polymers in the environment. These biodegradable polymers can be broken down into three categories: natural biodegradable polymers, synthetic biodegradable polymers, and biodegradable blends.

Natural biodegradable polymers

Natural biodegradable polymers are formed in nature during the growth cycles of all organisms. When searching for natural fibers to replace synthetic materials in athletic shoes, the natural biodegradable polymers that offer the most potential are polysaccharides. Starch is a polysaccharide that is useful because it readily degrades into harmless products when placed in contact with soil microorganisms. Starch is not often used alone as a plastic material because of its brittle nature, but it is commonly used as a biodegradation additive. Many plasticizers use starch-glycerol-water blends to modify starch's brittle nature. Biodegradation of this blend was tested, and it was found that by the second day the degraded carbon had already attained about 100% of the initial carbon of the sample.

Synthetic biodegradable polymers

Aliphatic polyesters are a diverse family of synthetic polymers that are biocompatible, biodegradable, and non-toxic. Specifically, poly(lactic acid) has low melt strength and low viscosity properties that are similar to those of EVA midsoles in athletic shoes. Poly(lactic acid) (PLA) is part of the polyester group and can go through thermoplastic and foaming processes. Along with its good mechanical properties, its popularity is based on the non-toxic products it becomes when it decomposes through hydrolytic degradation. Hydrolytic degradation of PLA generates the monomer lactic acid, which is metabolized via the tricarboxylic acid cycle and eliminated as carbon dioxide.

Biodegradable blends

Most synthetic polymers are resistant to microbial attack because of their physical and chemical properties. However, they can become biodegradable when natural polymers such as starch are introduced. Natural polymers introduce ester groups that attach to the backbone of non-biodegradable polymers, making them more susceptible to degradation. Because biodegradable polymers have limited properties, blending them with synthetic polymers can bring economic advantages and superior properties.

End-of-life management

Total elimination of post-consumer waste is not encouraged by any current change-causing agent, because of the enormous change in infrastructure that the elimination of waste requires and the consequent lack of profitability for those agents. Nevertheless, proactive approaches to reduce the enormous amount of waste that 350 million pairs of athletic shoes create can make a difference in the environment. Biodegradable materials, such as biodegradable polymers, are a viable solution to help reduce end-of-life athletic footwear waste. The major advantage of introducing biodegradable polymers to athletic footwear is the ability to compost them with other organic wastes to produce useful soil amendment products. An alternative short-term approach to end-of-life management is recycling activities in the footwear industry.
One major shoe manufacturer, Nike Inc., created the Reuse-A-Shoe program, which involves recycling discarded athletic shoes by grinding and shredding the shoes to produce a material called Nike Grind, which can be used in surfacing for tennis and basketball courts, playgrounds, or running tracks. Currently, the Reuse-A-Shoe program recycles approximately 125,000 pairs of shoes per year in the United States. Recycling and composting are two major proposed solutions to end-of-life management. However, the use of biodegradable materials is a long-term solution that can significantly help protect the natural environment by replacing the synthetic, non-biodegradable polymers found in athletic footwear.

See also

Abandoned footwear
Sustainable fashion

References

Athletic shoes Biodegradable materials Clothing and the environment Environmental design Sustainable products
Biodegradable athletic footwear
[ "Physics", "Chemistry", "Engineering" ]
1,565
[ "Environmental design", "Biodegradable materials", "Biodegradation", "Materials", "Design", "Matter" ]
44,524,545
https://en.wikipedia.org/wiki/Optoelectrowetting
Optoelectrowetting (OEW) is a method of liquid droplet manipulation used in microfluidics applications. This technique builds on the principle of electrowetting, which has proven useful in liquid actuation due to fast switching response times and low power consumption. Where traditional electrowetting runs into challenges, however, such as in the simultaneous manipulation of multiple droplets, OEW presents a lucrative alternative that is both simpler and cheaper to produce. OEW surfaces are easy to fabricate, since they require no lithography, and offer real-time, reconfigurable, large-scale manipulation control due to their response to light intensity.

Theory

The traditional electrowetting mechanism has been receiving increasing interest due to its ability to control tension forces on a liquid droplet. As surface tension acts as the dominant liquid actuation force in nano-scale applications, electrowetting has been used to modify this tension at the solid-liquid interface through the application of an external voltage. The applied electric field causes a change in the contact angle of the liquid droplet, and in turn changes the surface tensions across the droplet. Precise manipulation of the electric field allows control of the droplets. The droplet is placed on an insulating substrate located between the electrodes.

The optoelectrowetting mechanism adds a photoconductor underneath the conventional electrowetting circuit, with an AC power source attached. Under normal (dark) conditions, the majority of the system's impedance lies in the photoconducting region, and therefore the majority of the voltage drop occurs there. However, when light is shined on the system, carrier generation and recombination cause the conductivity of the photoconductor to spike, shifting the voltage drop to the insulating layer and changing the contact angle as a function of the voltage. The contact angle between a liquid and the electrode can be described as:

cos θ(V_A) = cos θ_0 + ε V_A² / (2 d γ_LV)

where V_A, d, ε, and γ_LV are the applied voltage, the thickness of the insulation layer, the dielectric permittivity of the insulation layer, and the interfacial tension constant between liquid and gas, respectively, and θ_0 is the contact angle with no applied voltage. In AC situations, such as OEW, V_A is replaced with the RMS voltage. The frequency of the AC power source is adjusted so that the impedance of the photoconductor dominates in the dark state. The shift in the voltage drop across the insulating layer therefore reduces the contact angle of the droplet as a function of the light intensity. By shining an optical beam on one edge of a liquid droplet, the reduced contact angle creates a pressure difference throughout the droplet and pushes the droplet's center of mass towards the illuminated side. Control of the optical beam results in control of the droplet's movement. Using 4 mW laser beams, OEW has been shown to move droplets of deionized water at speeds of 7 mm/s.

Traditional electrowetting runs into problems because it requires a two-dimensional array of electrodes for droplet actuation. The large number of electrodes leads to complexity for both control and packaging of these chips, especially for droplets of smaller scales. While this problem can be solved through integration of electronic decoders, the cost of the chip would significantly increase.
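To make the mechanism above concrete, the following Python sketch models the OEW stack as a simple series voltage divider (the photoconductor as a light-dependent resistance, the insulator as a capacitance) and then evaluates the contact-angle relation quoted above for the dark and illuminated states. All numerical values (source voltage and frequency, layer thickness, dielectric constant, electrode area, dark and illuminated resistances, zero-voltage contact angle) are illustrative assumptions, not measured properties of any reported device.

```python
import math

# Illustrative sketch only: series voltage divider between photoconductor and
# insulator, followed by the Young-Lippmann contact-angle relation quoted above.
# All numbers are assumed, order-of-magnitude values for illustration.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def insulator_capacitance(eps_r, area_m2, d_m):
    """Parallel-plate capacitance of the insulating layer."""
    return eps_r * EPS0 * area_m2 / d_m

def insulator_voltage_fraction(r_photo_ohm, c_ins_farad, freq_hz):
    """Fraction of the applied RMS voltage dropped across the insulator."""
    z_ins = 1.0 / (2 * math.pi * freq_hz * c_ins_farad)
    return z_ins / math.hypot(r_photo_ohm, z_ins)

def contact_angle_deg(v_rms, theta0_deg, d_m, eps_r, gamma_lv):
    """Young-Lippmann contact angle for an RMS voltage across the insulator."""
    cos_theta = (math.cos(math.radians(theta0_deg))
                 + eps_r * EPS0 * v_rms ** 2 / (2 * d_m * gamma_lv))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))

if __name__ == "__main__":
    v_applied, freq = 100.0, 1e3        # assumed AC source: 100 V RMS at 1 kHz
    d, eps_r, gamma = 3e-6, 3.0, 0.072  # assumed 3 um insulator, water/air tension
    c_ins = insulator_capacitance(eps_r, 1e-4, d)  # assumed 1 cm^2 electrode area
    for label, r_photo in [("dark", 1e9), ("illuminated", 1e5)]:
        v_ins = v_applied * insulator_voltage_fraction(r_photo, c_ins, freq)
        theta = contact_angle_deg(v_ins, 110.0, d, eps_r, gamma)
        print(f"{label:12s} V_insulator = {v_ins:6.2f} V   contact angle = {theta:5.1f} deg")
```

With these assumed numbers, almost no voltage reaches the insulator in the dark, so the contact angle stays near its zero-voltage value, while illumination shifts most of the applied voltage onto the insulator and visibly reduces the contact angle, which is the qualitative behavior described above.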
Single-sided continuous optoelectrowetting (SCOEW)

Droplet manipulation in electrowetting-based devices is usually accomplished using two parallel plates that sandwich the droplet and are actuated by digital electrodes. The minimum droplet size that can be manipulated is determined by the size of the pixelated electrodes. This mechanism provides a solution to the size limitation of physical pixelated electrodes by utilizing dynamic and reconfigurable optical patterns, and it enables operations such as continuous transport, splitting, merging, and mixing of droplets. SCOEW is conducted on open, featureless, photoconductive surfaces. This configuration creates a flexible interface that allows simple integration with other microfluidic components, such as sample reservoirs connected through simple tubing. It is also known as open optoelectrowetting (O-OEW).

Optoelectrowetting using a photocapacitance

Optoelectrowetting can also be achieved using the photocapacitance in a liquid–insulator–semiconductor junction. The photo-sensitive electrowetting is achieved via optical modulation of carriers in the space charge region at the insulator–semiconductor junction, which acts as a photodiode – similar to a charge-coupled device based on a metal–oxide–semiconductor structure.

Types of applications

Clinical diagnostics

Electrowetting presents a solution to one of the most challenging tasks in lab-on-a-chip systems in its ability to handle and manipulate complete physiological compounds. Conventional microfluidic systems are not easily adaptable to handle different compounds, requiring reconfiguration that often renders the device impractical as a whole. Through OEW, a chip with one power source can be readily used with a variety of substances, with potential for multiplexed detection.

Optical actuation

Photoactuation in microelectromechanical systems (MEMS) has been demonstrated in proof-of-concept experiments. Instead of a typical substrate, a specialized cantilever is placed on top of the liquid-insulator-photoconductor stack. As light is shined on the photoconductor, the capillary force from the drop on the cantilever changes with the contact angle and deflects the beam. This wireless actuation can be used as a substitute for the complex circuit-based systems currently used for optical addressing and control of autonomous wireless sensors.

See also

photoelectrowetting

References

External links

Demonstration of SCOEW on a lab-on-a-chip
O-OEW droplet actuation testing at Purdue University

Fluid dynamics Nanotechnology Biotechnology
Optoelectrowetting
[ "Chemistry", "Materials_science", "Engineering", "Biology" ]
1,207
[ "Microfluidics", "Microtechnology", "Chemical engineering", "Materials science", "Biotechnology", "nan", "Piping", "Nanotechnology", "Fluid dynamics" ]
41,578,185
https://en.wikipedia.org/wiki/Broadly%20neutralizing%20HIV-1%20antibodies
Broadly neutralizing HIV-1 antibodies (bNAbs) are neutralizing antibodies which neutralize multiple HIV-1 viral strains. bNAbs are unique in that they target conserved epitopes of the virus, meaning the virus may mutate, but the targeted epitopes will still exist. In contrast, non-bNAbs are specific for individual viral strains with unique epitopes. The discovery of bNAbs has led to an important area of research, namely, the discovery of a vaccine, not only for HIV but also for other rapidly mutating viruses like influenza.

Characteristics

The following table shows the characteristics of various HIV-1 bNAbs. In addition to targeting conserved epitopes, bNAbs are known to have long variable regions on their immunoglobulin (Ig) isotypes and subclasses. When compared to non-bNAbs, their sequence variability from the germline immunoglobulin isotype is sevenfold higher. This implies that bNAbs develop from intense affinity maturation in the germinal centers, hence the high sequence variability in the variable Ig domain. Indeed, HIV-1 patients who develop bNAbs have been shown to have high germinal center activity, as exhibited by their comparatively higher levels of plasma CXCL13, which is a biomarker of germinal center activity. Online databases like bNAber (now defunct) and LANL constantly report and update the discovery of new HIV bNAbs.

History of HIV bNAbs

In 1990, researchers identified the first HIV bNAb, far more powerful than any antibody seen before. They described the exact viral component, or epitope, that triggered the antibody. Six amino acids at the tip of HIV's surface protein, gp120, were responsible. The first bNAb turned out to be clinically irrelevant, but in 1994 another team isolated a bNAb that worked on cells taken from patients. This antibody attached to a "conserved" portion of gp120 that outlasts many of its mutations, affecting 17 of 24 tested strains at low doses. Another bNAb was discovered that acted on the protein gp41 across many strains. Antibodies require antigens to trigger them, and these were not originally identified. Over time more bNAbs were isolated, while single-cell antibody cloning made it possible to produce large quantities of the antibodies for study. Low levels of bNAbs are now found in up to 25% of HIV patients. bNAbs evolve over years, accumulating some three times as many mutations as other antibodies. By 2006, researchers had identified a few so-called "broadly neutralizing antibodies" (bNAbs) that worked on multiple HIV strains. They analyzed 1,800 blood samples from HIV-infected people from Africa, South Asia and the English-speaking world. They individually probed 30,000 of one woman's antibody-producing B cells and isolated two that were able to stop more than 70% of 162 divergent HIV strains from establishing an infection. Since 2009, researchers have identified more than 50 HIV bNAbs. In 2006, a Malawian man joined a study within weeks of becoming infected. Over a year, he repeatedly donated blood, which researchers used to create a timeline of changes in his virus' gp120, his antibody response and the ultimate emergence of a bNAb. Researchers want to direct this evolution in other subjects to achieve similar results. A screen of massive gp120 libraries led to one that strongly bound both an original antibody and the mature bNAb that evolved from it.
Giving patients a modified gp120 that contains little more than the epitope that both antibodies target could act to "prime" the immune system, followed by a booster that contains trimer spikes in the most natural configuration possible. However, it is still under study whether bNAbs could prevent HIV infection. In 2009, researchers isolated and characterized the first HIV bNAbs seen in a decade. The two broadest neutralizers were PGT151 and PGT152. They could block about two-thirds of a large panel of HIV strains. Unlike most other bNAbs, these antibodies do not bind to known epitopes on Env or on Env's subunits (gp120 or gp41). Instead, they attach to parts of both. Gp120 and gp41 assemble as a trimer. The bNAbs' binding site occurs only on the trimer structure, the form of Env that invades host cells. Recent years have seen an increase in HIV-1 bNAb discovery.

See also

HIV vaccine development
UB-421

References

External links

bNAber, Database of Broadly Neutralizing HIV-1 Antibodies (bNAbs)
LANL Antibody Database

Molecular biology Immunology
Broadly neutralizing HIV-1 antibodies
[ "Chemistry", "Biology" ]
978
[ "Biochemistry", "Immunology", "Molecular biology" ]
41,578,905
https://en.wikipedia.org/wiki/Conformational%20ensembles
In computational chemistry, conformational ensembles, also known as structural ensembles, are experimentally constrained computational models describing the structure of intrinsically unstructured proteins. Such proteins are flexible in nature, lacking a stable tertiary structure, and therefore cannot be described with a single structural representation. The techniques of ensemble calculation are relatively new in the field of structural biology and still face certain limitations that need to be addressed before they become comparable to classical structural description methods such as biological macromolecular crystallography.

Purpose

Ensembles are models consisting of a set of conformations that together attempt to describe the structure of a flexible protein. Even though the degree of conformational freedom is extremely high, flexible/disordered proteins generally differ from fully random-coil structures. The main purpose of these models is to gain insights regarding the function of the flexible protein, extending the structure-function paradigm from folded proteins to intrinsically disordered proteins.

Calculation techniques

The calculation of ensembles relies on experimental measurements, mostly by nuclear magnetic resonance (NMR) spectroscopy and small-angle X-ray scattering (SAXS). These measurements yield short-range and long-range structural information.

Short-range: chemical shifts (CS), residual dipolar couplings (RDCs), J-couplings, hydrogen exchange, solvent accessibility.
Long-range: paramagnetic relaxation enhancements (PREs), nuclear Overhauser effects (NOEs), SAXS, topological restraints.

Constrained molecular dynamics simulations

The structure of disordered proteins may be approximated by running constrained molecular dynamics (MD) simulations in which the conformational sampling is influenced by experimentally derived constraints.

Fitting experimental data

Another approach uses selection algorithms such as ENSEMBLE and ASTEROIDS. Calculation procedures first generate a pool of random conformers (the initial pool) that sufficiently samples the conformational space. The selection algorithms start by choosing a smaller set of conformers (an ensemble) from the initial pool. Experimental parameters (NMR/SAXS) are calculated (usually by theoretical prediction methods) for each conformer of the chosen ensemble and averaged over the ensemble. The difference between these calculated parameters and the true experimental parameters is used to build an error function, and the algorithm selects the final ensemble so that the error function is minimised (a simplified numerical sketch of this selection idea is given below).

Limitations

The determination of a structural ensemble for an IDP from NMR/SAXS experimental parameters involves generating structures that agree with the parameters, together with their respective weights in the ensemble. Usually, the available experimental data are few compared with the number of variables that need to be determined, making it an under-determined system. For this reason, several structurally very different ensembles may describe the experimental data equally well, and there are currently no exact methods to discriminate between ensembles of equally good fit. This problem has to be addressed either by bringing in more experimental data or by improving the prediction methods through the introduction of rigorous computational methods.

References

External links

Protein structure Articles containing video clips
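As a toy illustration of the selection procedure described under "Fitting experimental data", the following Python sketch greedily picks a small sub-ensemble from a conformer pool so that the ensemble-averaged predicted parameters best match a set of experimental values. The conformer predictions and experimental values here are random placeholder numbers, and the greedy search is only a stand-in for the optimisation used in real tools such as ENSEMBLE or ASTEROIDS.

```python
import random

# Toy illustration of ensemble selection (not ENSEMBLE or ASTEROIDS themselves):
# pick a small sub-ensemble from a large conformer pool so that the
# ensemble-averaged predicted parameters best match the experimental ones.

random.seed(0)

N_PARAMS = 20          # e.g. chemical shifts, RDCs, PREs treated as one flat vector
POOL_SIZE = 500        # conformers in the initial pool
ENSEMBLE_SIZE = 10     # conformers in the selected ensemble

# Assumed inputs: per-conformer predicted parameters (normally produced by
# back-calculation tools) and one experimental value per parameter.
# Here both are synthetic placeholder numbers.
pool = [[random.gauss(0.0, 1.0) for _ in range(N_PARAMS)] for _ in range(POOL_SIZE)]
experimental = [random.gauss(0.0, 0.5) for _ in range(N_PARAMS)]

def ensemble_error(members):
    """Mean squared deviation between ensemble-averaged predictions and experiment."""
    avg = [sum(pool[i][p] for i in members) / len(members) for p in range(N_PARAMS)]
    return sum((a - e) ** 2 for a, e in zip(avg, experimental)) / N_PARAMS

# Greedy selection: repeatedly add the conformer that most reduces the error.
selected = []
while len(selected) < ENSEMBLE_SIZE:
    best = min((i for i in range(POOL_SIZE) if i not in selected),
               key=lambda i: ensemble_error(selected + [i]))
    selected.append(best)

print("selected conformers:", selected)
print("final error:", round(ensemble_error(selected), 4))
```

Because many different sub-ensembles can reach similarly low error values, a sketch like this also illustrates the under-determination problem raised in the Limitations section.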
Conformational ensembles
[ "Chemistry" ]
574
[ "Protein structure", "Structural biology" ]