Dataset fields: id (int64), url (string), text (string), source (string), categories (list), token_count (int64), subcategories (list)
64,017,065
https://en.wikipedia.org/wiki/Geometric%20diode
Geometric diodes, also known as morphological diodes, use the shape of their structure and ballistic or quasi-ballistic electron transport to create diode behavior. Geometric diodes differ from all other forms of diodes because they do not rely on a depletion region or a potential barrier to create their diode behavior. Instead of a potential barrier, an asymmetry in the geometry of the material (on the order of the mean free path of the charge carrier) creates an asymmetry in forward versus reverse bias current, i.e. diode behavior. Creating a geometric diode Geometric diodes are formed from one continuous material (with a caveat for 2D electron gases, which are layered systems) that has an asymmetry in the structure on the order of the size of the charge carrier's mean free path (MFP). Typical room temperature MFPs range from single-digit nanometers for metals up to tens or hundreds of nanometers for semiconductors, and even >1 micrometer in select systems. This means that to create a geometric diode, one must either use a high-MFP material or have a fabrication process with nanometer precision in order to create the relevant geometries. Geometric diodes are majority carrier devices that do not need a potential barrier. The diode behavior comes from an asymmetry in the shape of the structure (as shown in the figure). Quite simply, geometric diodes can be thought of as funnels or lobster traps for charges: in one direction it is relatively easy for charges to flow, and in the reverse direction it is more difficult. Additionally, it is ideal to have specular reflection of the charge carriers at the surface of the structure; however, this is not as critical as being small enough to be in a ballistic regime. Advantages and disadvantages of geometric diodes Advantages Because all other diodes create asymmetry in current flow through some form of a potential barrier, they necessarily have some degree of a turn-on voltage. Geometric diodes could theoretically achieve zero-bias turn-on voltage due to their lack of a potential barrier. With zero-bias turn-on voltage, there is no DC bias that must be supplied to the device; therefore, geometric diodes could greatly reduce the power needed to operate a device. This could also be beneficial in that the diodes would be more sensitive to small signals. This is of course theoretical, and truly zero-bias diodes may be difficult to realize experimentally. A second major advantage also stems from their lack of a potential barrier and minority carriers. A potential barrier is a large source of capacitance in a diode. Capacitance serves to decrease a diode's frequency response by increasing its RC time. Geometric diodes' lack of a potential barrier means they can have ultra-low capacitance, down to the attofarads. A geometric diode's frequency response is limited not by RC time or minority carrier mobility, but by the flight time of the charge carriers through the structural asymmetry. Therefore, geometric diodes can achieve frequency response into the THz. The ability for a geometric diode's electronic properties to be tuned by the geometry of the structure, the surface coating on the structure, and the properties of the material used offers a level of customization that is unrealized in any other diode system. Principles learned from geometric diodes and ballistic systems will be used in understanding technology as devices become increasingly small and exist at or below charge carrier MFPs. 
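The mean-free-path design rule described above lends itself to a quick numerical check. Below is a minimal Python sketch (illustrative only; the mean free paths are the rough order-of-magnitude figures quoted in the text, not measured material constants) that flags whether a candidate feature size is plausibly in the ballistic regime.

# Rough check of the geometric-diode design rule: the asymmetric feature
# must be on the order of (or smaller than) the carrier mean free path.
# MFP values are illustrative orders of magnitude taken from the text above.
MEAN_FREE_PATH_NM = {
    "typical metal": 5,                        # single-digit nanometers
    "typical semiconductor": 100,              # tens to hundreds of nanometers
    "high-mobility system (e.g. 2DEG)": 1000,  # can exceed 1 micrometer
}

def plausibly_ballistic(feature_size_nm: float, mfp_nm: float) -> bool:
    """True if the feature is no larger than the mean free path."""
    return feature_size_nm <= mfp_nm

feature_nm = 50.0  # hypothetical neck width of a funnel-shaped constriction
for material, mfp_nm in MEAN_FREE_PATH_NM.items():
    print(f"{material}: 50 nm feature ballistic? {plausibly_ballistic(feature_nm, mfp_nm)}")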
Disadvantages The same benefits from the lack of a potential barrier also come with their share of downsides. The main one is that the reverse bias current of a geometric diode can be quite high (anywhere from one to three orders of magnitude below the forward bias current). Depending on the application, though, a high reverse bias current can be tolerated. Geometric diodes are typically on the nanoscale, which necessarily means that they have high resistances. However, depending on the fabrication process, this can be mitigated by stringing many diodes in parallel. Perhaps the largest hurdle for geometric diodes to overcome is the reliability of their fabrication and the ability to scale it up. Geometric diodes are typically made using nanofabrication methods that do not scale up well, but with the increasing resolution of photolithography this may not be a problem for long. Experimental examples Geometric diodes are linked to the phenomenon of electron ratchets, and their histories are intermingled. 2DEG Early work on geometric diodes used 2D electron gases (2DEGs) at cryogenic temperatures because these material systems have a very long charge carrier MFP. One of the most studied structures is a four-terminal geometry that either had a single antidot at the center or an array of antidots that forces charges down instead of up when current is supplied from either the left or right. This system was initially demonstrated at cryogenic temperatures, but was later shown to operate at room temperature and rectify signals of 50 GHz. Graphene The four-terminal geometries have also been created in graphene and function at room temperature. Additionally, a different two-terminal geometry resembling the simple geometric diode schematic was demonstrated in 2013. An optimum design for the ballistic diode based on graphene field-effect transistors was reported in 2021 by Van Huy Nguyen; this work showed rectification at THz frequencies. Nanowires Geometric diodes formed from etched silicon nanowires were shown to operate at room temperature in April 2020. This work highlights the tunability of geometric diodes by thoroughly studying the effects of geometry on the diode's electronic properties. The work also demonstrated rectification up to an instrument-limited 40 GHz. See also Rectenna semiconductor diode rectifier References Diodes Nanoelectronics
Geometric diode
[ "Materials_science" ]
1,201
[ "Nanotechnology", "Nanoelectronics" ]
64,024,339
https://en.wikipedia.org/wiki/Addressed%20fiber%20Bragg%20structure
An addressed fiber Bragg structure (AFBS) is a fiber Bragg grating whose optical frequency response includes two narrowband components with the frequency spacing between them (the address frequency of the AFBS) lying in the radio frequency (RF) range. The frequency spacing (the address frequency) is unique for every AFBS in the interrogation circuit and does not change when the AFBS is subjected to strain or temperature variation. An addressed fiber Bragg structure can perform a triple function in fiber-optic sensor systems: a sensor, a shaper of double-frequency probing radiation, and a multiplexer. The key feature of AFBS is that it enables the definition of its central wavelength without scanning its spectral response, as opposed to conventional fiber Bragg gratings (FBG), which are probed using optoelectronic interrogators. An interrogation circuit for an AFBS is significantly simplified in comparison with conventional interrogators and consists of a broadband optical source (such as a superluminescent diode), an optical filter with a predefined linear inclined frequency response, and a photodetector. The AFBS interrogation principle intrinsically allows several AFBSs with the same central wavelength and different address frequencies to be included in a single measurement system. History The concept of addressed fiber Bragg structures was introduced in 2018 by Airat Sakhabutdinov and developed in collaboration with his scientific adviser, Oleg Morozov. The idea emerged from the earlier works of Morozov and his colleagues, in which the double-frequency optical radiation from an electro-optic modulator was used for the definition of the FBG central wavelength based on the amplitude and phase analysis of the beating signal at the frequency equal to the spacing between the two components of the probing radiation. This eliminates the need for scanning the FBG spectral response while providing high measurement accuracy and reducing the system cost. AFBS has been developed as a further step towards the simplification of FBG interrogation systems by transferring the shaping of the double-frequency probing radiation from the source modulator to the sensor itself. Types of AFBS Thus far, two types of AFBS with different mechanisms of forming double-frequency radiation have been presented: 2π-FBG and 2λ-FBG. 2π-FBG A 2π-FBG is an FBG with two discrete phase π-shifts. It comprises three sequential uniform FBGs with gaps equal to one grating period between them (see Fig. 1). In the system, several 2π-FBGs must be connected in parallel so that the photodetector receives the light propagated through the structures. 2λ-FBG A 2λ-FBG consists of two identical ultra-narrow FBGs, the central wavelengths of which are separated by the address frequency. Several 2λ-FBGs in the system can be connected in series, so that the photodetector receives the light reflected from the structures. Interrogation principle Fig. 2 presents the block diagram of the interrogation system for two AFBSs (of the 2π-FBG type) with different address frequencies Ω1 and Ω2. A broadband light source 1 generates continuous light radiation (diagram a), which corresponds to the measurement bandwidth. The light is transmitted through the fiber-optic coupler 9, then enters the two AFBSs 2.1 and 2.2. Both AFBSs transmit two-frequency radiations that are summed into a combined radiation (diagram b) using another coupler 10. 
At the output of the coupler, a four-frequency radiation (diagram c) is formed, which is sent through a fiber-optic splitter 6. The splitter divides the optical signal into two channels – the measuring channel and the reference channel. In the measuring channel, an optical filter 3 with a pre-defined linear inclined frequency response is installed, which modifies the amplitudes of the four-frequency radiation into the asymmetrical radiation (diagram d). After that, the signal is sent to the photodetector 4 and is received by the measuring analog-to-digital converter (ADC) 5. The signal from the ADC is used to extract the measurement information from the AFBS. In the reference channel, the signal (diagram e) is sent to the reference photodetector 7 for optical power output control, and then it is received by the reference ADC 8. Thus, the normalization of the output signal intensity is achieved, and all subsequent calculations are performed using the ratios of the intensities in the measuring and reference channels. Assuming that the response from each spectral component of the AFBSs is represented by a single harmonic, the total optical response from the two AFBSs can be expressed as: where Ai, Bi are the amplitudes of the frequency components of the i-th AFBS; ωi is the frequency of the left spectral component of the i-th AFBS; Ωi is the address frequency of the i-th AFBS. The luminous power received by the photodetector can be described by the following expression: By narrowband filtering of the signal P(t) at the address frequencies, a system of equations can be obtained, from which the central frequencies of the AFBSs can be defined: where Dj is the amplitude of the signal at the address frequency Ωj, and the exponential multipliers describe the bandpass filters at the address frequencies. References Fiber optics Diffraction Sensors
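The equations referenced above (the total optical response, the photodetector power, and the system of equations at the address frequencies) did not survive extraction. As a hedged sketch consistent with the definitions given for Ai, Bi, ωi and Ωi (the exact expressions in the cited work may differ in detail), the two-AFBS signal model can be written as

E(t) = \sum_{i=1}^{2} \Big[ A_i \cos(\omega_i t) + B_i \cos\big((\omega_i + \Omega_i)\,t\big) \Big],

so that the square-law photodetector output contains beat terms at the address frequencies,

P(t) \;\propto\; \ldots + \sum_{i=1}^{2} A_i B_i \cos(\Omega_i t) + \ldots,

and narrowband filtering of P(t) at Ω1 and Ω2 isolates amplitudes Dj that depend on the products Aj Bj. Because the inclined optical filter maps each component's optical frequency to a known amplitude ratio, these filtered amplitudes determine the central frequencies of the AFBSs without scanning the spectral response.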
Addressed fiber Bragg structure
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
1,134
[ "Spectrum (physical sciences)", "Measuring instruments", "Crystallography", "Diffraction", "Sensors", "Spectroscopy" ]
64,026,460
https://en.wikipedia.org/wiki/Marjem%20Chatterton
Marjem Chatterton (28 September 1916 – 27 January 2010) was a pioneering engineer in Israel and Zimbabwe, specialising in multi-storey reinforced concrete buildings. She was the first female fellow of the Institution of Structural Engineers. Chatterton designed some of Zimbabwe's first skyscrapers. Early life and education She was born Marjem or Marynia Znamirowska in Warsaw, Poland, in 1916, as an Orthodox Jew. In 1932 the family emigrated to Mandatory Palestine. Znamirowska had intended to return to Poland, but by 1934 it was clear that the situation was becoming dangerous for Jews in Poland, so Znamirowska attended the Technion – the Israel Institute of Technology in Haifa to study engineering. Znamirowska's aunt Rachel Shalon (née Znamirowska), the first Israeli female engineer, was on the faculty there, having qualified in 1930. Career Znamirowska graduated from her civil engineering course in 1939 and began working in the Technical Office of the Collective Settlements Association. By 1947 Znamirowska had married a British man, Frank Chatterton, and they moved with their family to Southern Rhodesia (which became Zimbabwe in 1980). Chatterton immediately got a job as a reinforced concrete designer. Chatterton worked with Lysaght and Company until 1957, when she first started consulting. In 1969 she established her own consulting firm, M. Chatterton and Partners. Chatterton used her experience with concrete to design some of Zimbabwe's first skyscrapers, banks and building societies, as well as cotton, fertiliser, and sugar industrial buildings. During this time Chatterton became a member of the Institution of Structural Engineers. She was the first woman to win the Andrews Prize and also won the Wallace Premium Prize. In 1954 she became the Institution's first female fellow. When the political situation in Zimbabwe deteriorated in 1976, Chatterton moved to Leeds to work at the university as a lecturer. Chatterton was involved in encouraging girls into engineering careers through the university's and girls' schools campaign. Chatterton returned to her consultancy in Zimbabwe in 1984. She also took on a teaching role at the national university. The country's independence in 1980 ensured development investment and building projects. Chatterton's last big project was the tallest office building, the 26-storey Reserve Bank. Later life Chatterton continued working until 1999, when the political situation again became unstable; she retired and returned to the UK. Chatterton died on 27 January 2010. She is buried in Exeter. References and sources 1916 births 2010 deaths Structural engineers Academics of the University of Leeds Jewish engineers Polish emigrants to Mandatory Palestine Technion – Israel Institute of Technology alumni Expatriates in Zimbabwe Israeli expatriates in the United Kingdom Polish expatriates in the United Kingdom Polish women engineers 20th-century Polish engineers 21st-century Polish engineers 20th-century Polish women engineers 21st-century Polish women engineers 20th-century Israeli engineers 21st-century Israeli engineers 20th-century Israeli women engineers 21st-century Israeli women engineers Immigrants of the Fifth Aliyah
Marjem Chatterton
[ "Engineering" ]
615
[ "Structural engineering", "Structural engineers" ]
38,426,772
https://en.wikipedia.org/wiki/Tritan%20copolyester
Tritan, a copolymer offered by the Eastman Chemical Company since 2007, is a transparent plastic intended to replace polycarbonate because of health concerns about bisphenol A (BPA). Tritan is a copolymer made from three monomers: dimethyl terephthalate (DMT), cyclohexanedimethanol (CHDM), and 2,2,4,4-tetramethyl-1,3-cyclobutanediol (CBDO). Tritan (PCTG) is made without using any bisphenols or phthalates. Eastman Tritan cannot be used for hot beverages (like hot water, coffee or tea) and is recommended only for usage temperatures below 60 °C, as it starts to deteriorate at temperatures above 80 °C. In April 2008, Nalgene announced it would phase out production of its outdoor line of polycarbonate containers containing the chemical bisphenol A. Nalgene now uses Tritan as a replacement for polycarbonate, as it does not contain BPA. Health controversy In 2011, a neurobiologist at the University of Texas at Austin, George Bittner, published an article claiming that most polymers, including Tritan, contained other materials with estrogenic activity. After these claims were published by PlastiPure, an Eastman Chemical Company competitor, Eastman sued. A jury ruled in Eastman's favor, and the court barred PlastiPure from making claims about Tritan's estrogenic activity. In expert testimony, Wade Welshon of the University of Missouri-Columbia agreed that the Tritan copolymer is likely not estrogenic, but that the estrogenic activity he found in five separate tests of Tritan products could be attributable to other chemicals added during manufacturing. During the trial it emerged that Thomas Osimitz, an author of the journal article that initially cleared Tritan of estrogenic activity, was paid $10,000 by the company for the paper, and that this was not disclosed in the conflict of interests section. When Osimitz was questioned by Reuters he stated that the disclosure forms were "very confusing." Bittner maintains that his assays are more sensitive than the ones performed by Osimitz et al. Similar products Other manufacturers have developed similar products, including the French Arc Holdings' Kwarx since 2006, the German (Leonardo) Teqton since 2009 and the South Korean SK Chemicals' Ecozen, a glycol-modified polyethylene terephthalate (PETG), since 2010/2011. Other manufacturers propose polypropylene (PP) or methylstyrene (MS) as alternatives to Tritan. Name confusion Tritan can also refer to a type of so-called unbreakable glass originally developed by the German company Zwiesel Kristallglas in 2002 together with the University of Erlangen–Nuremberg. Its name is derived from titanoxide (titanium oxide in English). In 2012, the Zwiesel Kristallglas company introduced Tritan Protect. Confusingly, although the two are unrelated, Zwiesel Tritan glass and Eastman Tritan copolyester are both advertised as "shatter protected" and are used in the production of drinking glasses as replacements for traditional glasses, despite their different material properties. See also Superfest (a chemically hardened glass also known as Ceverit or ) Gorilla Glass Borosilicate glass (a type of heat-resistant glass) References Commodity chemicals Plastics Thermoplastics Transparent materials
Tritan copolyester
[ "Physics", "Chemistry" ]
736
[ "Physical phenomena", "Commodity chemicals", "Products of chemical industry", "Unsolved problems in physics", "Optical phenomena", "Materials", "Transparent materials", "Amorphous solids", "Matter", "Plastics" ]
38,428,702
https://en.wikipedia.org/wiki/Xolve
Xolve, Inc. is a Madison, Wisconsin-based nanomaterial company that uses its proprietary technology to improve the attributes and performance of polymer composites and energy storage materials. The company is known for developing a process that uses organic compounds or polymers to dissolve, or place into true solution, nanoparticles previously thought to be insoluble, including carbon nanotubes and graphene. Xolve won the Wisconsin Governor's Business Plan Contest in 2008, and was named one of the top startups of 2008 by Businessweek. The company was also a national finalist in the 2010 CleanTech Open in San Jose, CA. The company originated from the fundamental research of then 17-year-old student Philip Streich and University of Wisconsin-Platteville Chemistry and Engineering Physics Professor James P. Hamilton, and was founded by serial entrepreneurs Professor Hamilton and Eric Apfelbach as well as Philip Streich. History Founded in 2007 as Graphene Solutions, the firm was incubated in the UW-Platteville Nanotechnology Center for Collaborative Research and Development (NCCRD). Xolve licenses some of the earliest patents on graphene from Professor Hamilton's group, dating back to work done in 2006 and 2007. In 2010, the company changed its name to Xolve and went on to raise $2 million in its first round of funding. Primary investors included DSM, a Dutch material sciences company, and the Nordic Group of Companies in Baraboo, Wisconsin. In 2011, the company moved to its own labs in Middleton, Wisconsin. Nanomaterials advancements The potential of nanoparticles rests on their surface area. However, practical applications of these materials have been limited by their tendency to form clumps and bundles, destroying that surface area. Beginning with its ability to place nanomaterials into true solutions, Xolve has developed additional technology to bring dispersed nanomaterials into industrial polymers and energy storage materials and keep them dispersed. With this technology, Xolve aims to lower the cost of producing nanomaterials, such as graphene, and to use these nanomaterials to dramatically improve the performance of industrial materials while maintaining their standard cost structure. References Nanotechnology companies
Xolve
[ "Materials_science" ]
458
[ "Nanotechnology", "Nanotechnology companies" ]
38,430,890
https://en.wikipedia.org/wiki/Chalcogenide%20chemical%20vapour%20deposition
Chalcogenide chemical vapor deposition is a proposed technology for depositing thin films of chalcogenides, i.e. materials derived from sulfides, selenides, and tellurides. Conventional CVD can be used to deposit films of most metals, many non-metallic elements (notably silicon), as well as a wide range of compounds including carbides, nitrides, and oxides. CVD can also be used to synthesize chalcogenide glasses. Sulfide based thin films The fabrication of chalcogenide thin films is a topic of research. For example, routes to germanium disulfide films could entail germanium tetrachloride and hydrogen sulfide: GeCl4 (g) + 2 H2S(g) → GeS2(s) + 4 HCl (g) Alternatively, plasma-enhanced CVD can use a GeH4/H2S reaction. Telluride based thin films Phase change random access memory (PCRAM) has attracted considerable interest as a candidate for non-volatile memory devices with higher density and operation speed. The ternary Ge2Sb2Te5 (GST) compound is widely regarded as the most viable and practical phase change family of materials for this application. CVD techniques have been applied to deposit GST materials in sub-micron cell pores. Challenges include the need to control device-to-device variability and undesirable changes in the phase change material that can be induced by the fabrication procedure. A confined cell structure, where the phase change material is formed inside a contact via, is expected to be essential for the next generation of PCRAM devices because it requires lower switching power. This structure, however, requires more complex deposition of the active chalcogenide into a cell pore. CVD techniques could provide better performance and enable the production of thin films with superior quality compared to those obtained by sputtering, especially in terms of conformality, coverage, and stoichiometry control, and allow the implementation of phase-change films in nanoelectronic devices. In addition, CVD is well known to provide higher-purity materials and offers scope for new phase change materials with optimized properties to be deposited. The CVD apparatus for Ge-Sb-Te thin film deposition is shown schematically to the right. References Semiconductor device fabrication
Chalcogenide chemical vapour deposition
[ "Materials_science" ]
477
[ "Semiconductor device fabrication", "Microtechnology" ]
38,431,897
https://en.wikipedia.org/wiki/Seismic%20inverse%20Q%20filtering
Seismic inverse Q filtering is a data processing technology for enhancing the resolution of reflection seismology images. Q is the anelastic attenuation factor or seismic quality factor, a measure of the energy loss as the seismic wave moves. Basics Seismic inverse Q-filtering employs a wave propagation reversal procedure that compensates for energy absorption and corrects wavelet distortion due to velocity dispersion. By compensating for amplitude attenuation with a visco-elastic attenuation model, seismic data can provide true relative-amplitude information for amplitude inversion and subsequent reservoir characterization. By correcting the phase distortion due to velocity dispersion, seismic data with enhanced vertical resolution can yield correct timings for lithological identification. Following Wang's outline of the subject, inverse Q filtering can be introduced based on the 1-D one-way propagation wave equation. He introduces the following equation (1.1), where U(r,w) is the plane wave of radial frequency w at travel distance r, k(w) is the wavenumber and i is the imaginary unit. Reflection seismograms record the reflection wave along the propagation path r from the source to the reflector and back to the surface. With this approach Wang assumes that the plane wave U(r,w) has already been attenuated by a Q filter through the travel distance r. This must be kept in mind when finding a solution of (1.1). It is necessary that the initial U(r,w) either be created by a forward synthetic Q-filtering process or taken directly from seismic surface data. Wang introduces this concept in chapter 5 of his book, and it must also be kept in mind when the inverse theory is developed. Equation (1.1) has an analytical solution (1.2) given by Kolsky's attenuation-dispersion model. The wavenumber k(w) is an important variable in the solution (1.2). To obtain a solution that can be applied to seismic data, k(w) must be connected to a function that represents the way U(r,w) propagates in the seismic medium. This function can be regarded as a Q-model. The Kolsky model is used extensively in seismic inverse Q-filtering. The model assumes the attenuation α(w) to be strictly linear with frequency over the range of measurement: and defines the phase velocity as: where cr and Qr are the phase velocity and the Q value at a reference frequency wr. For a large value of Qr >> 1 the solution (1.4) can be approximated to (1.5). Kolsky's model was derived from, and fits well with, experimental observations. A requirement in the theory for materials satisfying the linear attenuation assumption is that the reference frequency wr is a finite (arbitrarily small but nonzero) cut-off on the absorption. According to Kolsky, we are free to choose wr following the phenomenological criterion that it be small compared with the lowest measured frequency w in the frequency band. A deeper insight into this concept can be found in Futterman (1962). Now we consider (1.3) as the real part of the wavenumber k(w) and (1.5) as the imaginary part. Considering only the positive frequencies, the complex wavenumber k(w) then becomes: where cr and Qr are the phase velocity c(w) and the Q(w) value, respectively, at an arbitrary reference frequency. 
Substituting this complex-valued wavenumber k(w) into solution (1.2) produces the following expression: Replacing the distance increment ∆r by the traveltime increment ∆t = ∆r/cr, equation (1.7) is expressed as (1.8.a). This is a basic inverse Q filter with the Kolsky model. The two exponential operators compensate and correct for, respectively, the amplitude effect (i.e. the energy absorption) and the phase effect (i.e. the velocity dispersion) of the earth Q filter. In order for the inverse filter to give a causal solution, Wang modified the original Kolsky model by using wr as the highest frequency in the frequency band, calling it wh. As mentioned above, when we start discussing the design and application of an inverse Q-filter, we need to specify a mathematical Q-model that can compute benchmark data, similar to Wang's benchmark data, that can be used for inversion later. Then we must regard the forward Q-filtering process as our solution of (1.2), which means we must compute the inversion of (1.2) as a forward Q-filtering process. Wang showed how easily this could be done by simply changing the sign before γ and Q. Wang writes: We call the removal of the phase correction effect given by a previous phase-only inverse Q-filtering phase-only forward Q-filtering. It would be rather straightforward if we considered it as an inverse problem of the inverse equation and took an inverse solution. But solving an inverse problem is often time-consuming because it involves the computation of an inverse matrix. Wang then develops the theory of both amplitude and phase forward Q-filtering simply by changing the sign before γ and Q, and introduces the equation: In (1.8.b) the reference frequency is used in the original form given by Kolsky, with the notation wr. Equations (1.8.a) and (1.8.b) are the basis for Q filtering, in which downward continuation is performed in the frequency domain on all plane waves. The sum of these plane waves gives the time-domain seismic signal (1.9). This summation is referred to as the imaging condition, as in seismic migration. Equations (1.8.a) and (1.8.b) must be applied successively to each time sample with sampling interval ∆t, producing u(t) at each level. This last approach completes the downward continuation theory of Q-filtering. Computations Fig. 1a and 1b show a forward Q-filtering solution of (1.8.a–b), integrated with (1.9), for a single pulse. This simple graph illustrates some aspects of the theory of Q-filtering very well. The black dotted graph is the zero-phase solution of equation (1.8.b), with amplitude-only Q-filtering and no effect from the phase (Q = 10). In Fig. 1a the blue dotted line is (1.8.b) with fr = 0.001 Hz (computed from the relation ω = 2πf). The red line is the solution of (1.8.a) with fh = 10 Hz. There is no amplitude compensation for (1.8.a); we look only at the effect from the phase. Comparing with Fig. 1b we can see that the smaller the reference frequency fr in the forward solution, the larger the phase effect. In the inverse solution (1.8.a) we have the opposite effect: the higher the value of fh, the more the forward phase effect is compensated, toward a causal solution. The graphs give a good illustration of Q-filtering according to the theory of Wang. However, more calculations should be done and more seismic models should be introduced. The effect of amplitude inverse Q-filtering should also be studied. 
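The numbered equations referenced above are not reproduced in this text. As a hedged reconstruction consistent with the Kolsky model and the two exponential operators described (sign and normalization conventions in Wang's book may differ), the downward-continuation step of the inverse Q filter can be sketched in LaTeX as

U(t + \Delta t, \omega) \;=\; U(t, \omega)\,
\exp\!\left[\frac{\omega \Delta t}{2 Q_r}\left|\frac{\omega}{\omega_h}\right|^{-\gamma}\right]
\exp\!\left[\,i\,\omega \Delta t \left|\frac{\omega}{\omega_h}\right|^{-\gamma}\right],
\qquad \gamma = \frac{1}{\pi Q_r},

where the first exponential compensates the amplitude (absorption) effect and the second corrects the phase (dispersion) effect; the imaging condition then sums the continued plane waves over frequency, schematically u(t) \propto \mathrm{Re}\int_0^{\infty} U(t,\omega)\, d\omega. Forward Q-filtering follows, as described above, by flipping the signs in the exponents (equivalently the signs before γ and Q).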
Notes References External links Some aspects of seismic inverse Q-filtering theory by Knut Sørsdal Seismology measurement Geophysics
Seismic inverse Q filtering
[ "Physics" ]
1,568
[ "Applied and interdisciplinary physics", "Geophysics" ]
38,432,576
https://en.wikipedia.org/wiki/Pulsed-power%20water%20treatment
Pulsed-power water treatment is the process of applying pulsed electromagnetic fields to cooling water to control scaling, biological growth, and corrosion. The process does not require the use of chemicals and helps eliminate environmental and health issues associated with the use and life-cycle management of chemicals used to treat water. Pulsed-power systems have the ability to maintain low levels of microbiological activity without using corrosive chemicals. Several reports have shown that pulse-powered systems yield significantly lower counts of bacteria colony forming units compared to chemically controlled systems. Overview and uses Pulsed-power systems are used to control scale, corrosion and biological activity in cooling towers without the use of chemicals, chemical tanks or pumps. Pulsed power has been used as the sole means of water treatment in cooling systems for over a decade with good results. Pulsed power imparts electromagnetic fields into the cooling water, and the induced fields have a direct effect in preventing mineral scale formation on equipment surfaces and controlling microbial populations to very low levels, while also significantly reducing biofilms present in cooling systems. Pulsed power is also an FDA-approved method for pasteurizing fluids such as fruit juices; however, the energy needed for pasteurization is 100 times that of a pulsed-power water treatment system. Pros and cons Pulsed-power treatment enables chemical-free treatment of cooling tower water, providing lower bacterial contamination while it controls scale and corrosion. The cost over the lifetime of use is lower than that of chemical treatment, and eliminating chemical handling also reduces the associated health concerns. Cycles of concentration are typically increased, which reduces blowdown water. The resulting elimination of chemicals provides many benefits, including reduced environmental, health and safety risks, the environmental benefits of reusing blowdown water, and the elimination of chemical-laden discharge water. Pulsed-power treatment is less effective on water that is extremely soft or distilled, as the technology is based on changing the way minerals in the water precipitate. It also still requires energy to use. See also Pulsed power Water cooling Water treatment References Water technology
Pulsed-power water treatment
[ "Chemistry" ]
413
[ "Water technology" ]
38,433,373
https://en.wikipedia.org/wiki/Nuclear%20Medicine%20and%20Biology
Nuclear Medicine and Biology is a peer-reviewed medical journal published by Elsevier that covers research on all aspects of nuclear medicine, including radiopharmacology, radiopharmacy and clinical studies of targeted radiotracers. It is the official journal of the Society of Radiopharmaceutical Sciences. According to the Journal Citation Reports, the journal has a 2011 impact factor of 3.023. Abstracting and indexing The journal is abstracted and indexed in: BIOSIS Elsevier BIOBASE Cambridge Scientific Abstracts Chemical Abstracts Service Current Contents/Life Sciences MEDLINE/PubMed EMBASE References External links Radiology and medical imaging journals Nuclear medicine Nuclear magnetic resonance Radiopharmaceuticals Elsevier academic journals English-language journals Academic journals established in 1978
Nuclear Medicine and Biology
[ "Physics", "Chemistry" ]
156
[ "Nuclear magnetic resonance", "Medicinal radiochemistry", "Nuclear chemistry stubs", "Radiopharmaceuticals", "Nuclear magnetic resonance stubs", "Nuclear physics", "Chemicals in medicine" ]
47,201,091
https://en.wikipedia.org/wiki/NPG%20Asia%20Materials
NPG Asia Materials is a peer-reviewed open access scientific journal focusing on materials science. It was established in 2009 and is published by the Nature Publishing Group. The founding editor-in-chief was Hideo Takezoe (Tokyo Institute of Technology); the current editor-in-chief is Martin Vacha (Tokyo Institute of Technology). Abstracting and indexing The journal is abstracted and indexed in: Chemical Abstract Services Science Citation Index Expanded Scopus According to the Journal Citation Reports, the journal has a 2021 impact factor of 10.990. References External links Nature Research academic journals English-language journals Continuous journals Materials science journals Creative Commons Attribution-licensed journals Academic journals established in 2009
NPG Asia Materials
[ "Materials_science", "Engineering" ]
144
[ "Materials science stubs", "Materials science journals", "Materials science journal stubs", "Materials science" ]
47,203,681
https://en.wikipedia.org/wiki/EUROfusion
EUROfusion is a consortium of national fusion research institutes located in the European Union, the UK, Switzerland and Ukraine. It was established in 2014 to succeed the European Fusion Development Agreement (EFDA) as the umbrella organisation of Europe's fusion research laboratories. The consortium is currently funded by the Euratom Horizon 2020 programme. Organisation The EUROfusion consortium agreement has been signed by 30 research organisations and universities from 25 European Union countries plus Switzerland, Ukraine and the United Kingdom. The EUROfusion Programme Management Unit offices, located in Garching, near Munich (Germany), are hosted by the Max Planck Institute of Plasma Physics (IPP). The IPP is also the seat of the co-ordinator of EUROfusion. Activities EUROfusion funds fusion research activities in accordance with the Roadmap to the realisation of fusion energy. The Roadmap outlines the most efficient way to realise fusion electricity by 2050. Research carried out under the EUROfusion umbrella aims to prepare for ITER experiments and develop concepts for the fusion power demonstration plant DEMO. EUROfusion is in charge of the fusion-related research carried out at JET, the Joint European Torus, which is housed at the Culham Centre for Fusion Energy, UK. A number of other fusion devices in Europe also devote some of their experimental time to research under the EUROfusion framework. References Further reading ITER Joint European Torus (JET) Fusion for Energy DEMO Euratom Max Planck Institute of Plasma Physics Culham Centre for Fusion Energy External links College and university associations and consortia in Europe Fusion power Organisations based in Munich Physics in Germany Research institutes in Germany Science and technology in Europe
EUROfusion
[ "Physics", "Chemistry" ]
334
[ "Nuclear fusion", "Fusion power", "Plasma physics" ]
47,206,618
https://en.wikipedia.org/wiki/Nigerian%20Association%20of%20Mathematical%20Physics
The Nigerian Association of Mathematical Physics is a professional academic association of Nigerian mathematical physicists. The association is governed by its Council, which is chaired by the association's President, according to a set of Statutes and Standing Orders. Notable members Professor Awele Maduemezia Professor Garba Babaji References Professional associations based in Nigeria Mathematical physics
Nigerian Association of Mathematical Physics
[ "Physics", "Mathematics" ]
69
[ "Applied mathematics", "Theoretical physics", "Mathematical physics" ]
41,216,240
https://en.wikipedia.org/wiki/Hericenone
Hericenones are a class of benzaldehydes isolated from the fruiting body of Hericium erinaceum (lion's mane mushroom) that promote nerve growth factor synthesis in vitro. Hericenones References Benzaldehydes
Hericenone
[ "Chemistry" ]
57
[ "Biomolecules by chemical classification", "Organic compounds", "Natural phenols", "Organic compound stubs", "Organic chemistry stubs" ]
41,223,765
https://en.wikipedia.org/wiki/Bowtie%20%28sequence%20analysis%29
Bowtie is a software package commonly used for sequence alignment and sequence analysis in bioinformatics. The source code for the package is distributed freely and compiled binaries are available for Linux, macOS and Windows platforms. As of 2017, the Genome Biology paper describing the original Bowtie method has been cited more than 11,000 times. Bowtie is open-source software and is currently maintained by Johns Hopkins University. History The Bowtie sequence aligner was originally developed by Ben Langmead et al. at the University of Maryland in 2009. The aligner is typically used with short reads and a large reference genome, or for whole genome analysis. Bowtie is promoted as "an ultrafast, memory-efficient short aligner for short DNA sequences." The speed increase of Bowtie is partly due to implementing the Burrows–Wheeler transform for aligning, which reduces the memory footprint (typically to around 2.2GB for the human genome); a similar method is used by the BWA and SOAP2 alignment methods. Bowtie conducts a quality-aware, greedy, randomized, depth-first search through the space of possible alignments. Because the search is greedy, the first valid alignment encountered by Bowtie will not necessarily be the 'best' in terms of the number of mismatches or in terms of quality. Bowtie is used as a sequence aligner by a number of other related bioinformatics algorithms, including TopHat, Cufflinks and the CummeRbund Bioconductor package. Bowtie 2 On 16 October 2011, the developers released a beta fork of the project called Bowtie 2. In addition to the Burrows-Wheeler transform, Bowtie 2 also uses an FM-index (similar to a suffix array) to keep its memory footprint small. Due to its implementation, Bowtie 2 is more suited to finding longer, gapped alignments in comparison with the original Bowtie method. There is no upper limit on read length in Bowtie 2 and it allows alignments to overlap ambiguous characters in the reference. References External links Bowtie page on SourceForge Bowtie 2 page on SourceForge Bioinformatics algorithms Bioinformatics software Laboratory software Software using the Artistic license
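The Burrows–Wheeler transform mentioned above is what lets Bowtie index a whole genome compactly. The following minimal Python sketch (a naive rotation-sort for illustration only; Bowtie itself builds an FM-index over the transform rather than sorting rotations explicitly) shows how the transform of a short sequence is formed.

def bwt(text: str, terminator: str = "$") -> str:
    """Naive Burrows-Wheeler transform: append a terminator, sort all
    cyclic rotations, and return the last column of the sorted matrix."""
    s = text + terminator
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rotation[-1] for rotation in rotations)

# Example: transform a short DNA string; similar contexts end up adjacent,
# which is what makes the transform compressible and efficiently searchable.
print(bwt("GATTACA"))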
Bowtie (sequence analysis)
[ "Biology" ]
452
[ "Bioinformatics", "Bioinformatics software", "Bioinformatics algorithms" ]
41,224,221
https://en.wikipedia.org/wiki/Weighted%20correlation%20network%20analysis
Weighted correlation network analysis, also known as weighted gene co-expression network analysis (WGCNA), is a widely used data mining method, especially for studying biological networks based on pairwise correlations between variables. While it can be applied to most high-dimensional data sets, it has been most widely used in genomic applications. It allows one to define modules (clusters), intramodular hubs, and network nodes with regard to module membership, to study the relationships between co-expression modules, and to compare the network topology of different networks (differential network analysis). WGCNA can be used as a data reduction technique (related to oblique factor analysis), as a clustering method (fuzzy clustering), as a feature selection method (e.g. as a gene screening method), as a framework for integrating complementary (genomic) data (based on weighted correlations between quantitative variables), and as a data exploratory technique. Although WGCNA incorporates traditional data exploratory techniques, its intuitive network language and analysis framework transcend any standard analysis technique. Since it uses network methodology and is well suited for integrating complementary genomic data sets, it can be interpreted as a systems biology or systems genetics data analysis method. By selecting intramodular hubs in consensus modules, WGCNA also gives rise to network-based meta-analysis techniques. History The WGCNA method was developed by Steve Horvath, a professor of human genetics at the David Geffen School of Medicine at UCLA and of biostatistics at the UCLA Fielding School of Public Health, and his colleagues at UCLA and (former) lab members (in particular Peter Langfelder, Bin Zhang, and Jun Dong). Much of the work arose from collaborations with applied researchers. In particular, weighted correlation networks were developed in joint discussions with cancer researchers Paul Mischel and Stanley F. Nelson, and neuroscientists Daniel H. Geschwind and Michael C. Oldham, according to the acknowledgement section in. Comparison between weighted and unweighted correlation networks A weighted correlation network can be interpreted as a special case of a weighted network, dependency network or correlation network. Weighted correlation network analysis can be attractive for the following reasons: The network construction (based on soft thresholding the correlation coefficient) preserves the continuous nature of the underlying correlation information. For example, weighted correlation networks that are constructed on the basis of correlations between numeric variables do not require the choice of a hard threshold. Dichotomizing information and (hard-)thresholding may lead to information loss. The network construction gives highly robust results with respect to different choices of the soft threshold. In contrast, results based on unweighted networks, constructed by thresholding a pairwise association measure, often depend strongly on the threshold. Weighted correlation networks facilitate a geometric interpretation based on the angular interpretation of the correlation, chapter 6 in. Resulting network statistics can be used to enhance standard data-mining methods such as cluster analysis since (dis)similarity measures can often be transformed into weighted networks; see chapter 6 in. WGCNA provides powerful module preservation statistics which can be used to quantify similarity to another condition. 
Module preservation statistics also allow one to study differences between the modular structure of networks. Weighted networks and correlation networks can often be approximated by "factorizable" networks. Such approximations are often difficult to achieve for sparse, unweighted networks. Therefore, weighted (correlation) networks allow for a parsimonious parametrization (in terms of modules and module membership) (chapters 2, 6 in ) and. Method First, one defines a gene co-expression similarity measure which is used to define the network. We denote the gene co-expression similarity measure of a pair of genes i and j by . Many co-expression studies use the absolute value of the correlation as an unsigned co-expression similarity measure, where gene expression profiles and consist of the expression of genes i and j across multiple samples. However, using the absolute value of the correlation may obfuscate biologically relevant information, since no distinction is made between gene repression and activation. In contrast, in signed networks the similarity between genes reflects the sign of the correlation of their expression profiles. To define a signed co-expression measure between gene expression profiles and , one can use a simple transformation of the correlation: As the unsigned measure , the signed similarity takes on a value between 0 and 1. Note that the unsigned similarity between two oppositely expressed genes () equals 1, while it equals 0 for the signed similarity. Similarly, while the unsigned co-expression measure of two genes with zero correlation remains zero, the signed similarity equals 0.5. Next, an adjacency matrix (network), , is used to quantify how strongly genes are connected to one another. is defined by thresholding the co-expression similarity matrix . 'Hard' thresholding (dichotomizing) the similarity measure results in an unweighted gene co-expression network. Specifically, an unweighted network adjacency is defined to be 1 if and 0 otherwise. Because hard thresholding encodes gene connections in a binary fashion, it can be sensitive to the choice of the threshold and result in the loss of co-expression information. The continuous nature of the co-expression information can be preserved by employing soft thresholding, which results in a weighted network. Specifically, WGCNA uses the following power function to assess their connection strength: where the power is the soft thresholding parameter. The default values and are used for unsigned and signed networks, respectively. Alternatively, can be chosen using the scale-free topology criterion, which amounts to choosing the smallest value of such that approximate scale-free topology is reached. Since , the weighted network adjacency is linearly related to the co-expression similarity on a logarithmic scale. Note that a high power transforms high similarities into high adjacencies, while pushing low similarities towards 0. Since this soft-thresholding procedure applied to a pairwise correlation matrix leads to a weighted adjacency matrix, the ensuing analysis is referred to as weighted gene co-expression network analysis. A major step in the module-centric analysis is to cluster genes into network modules using a network proximity measure. Roughly speaking, a pair of genes has a high proximity if it is closely interconnected. By convention, the maximal proximity between two genes is 1 and the minimum proximity is 0. Typically, WGCNA uses the topological overlap measure (TOM) as proximity, 
which can also be defined for weighted networks. The TOM combines the adjacency of two genes and the connection strengths these two genes share with other "third party" genes. The TOM is a highly robust measure of network interconnectedness (proximity). This proximity is used as input to average linkage hierarchical clustering. Modules are defined as branches of the resulting cluster tree using the dynamic branch cutting approach. Next, the genes inside a given module are summarized with the module eigengene, which can be considered as the best summary of the standardized module expression data. The module eigengene of a given module is defined as the first principal component of the standardized expression profiles. Eigengenes define robust biomarkers, and can be used as features in complex machine learning models such as Bayesian networks. To find modules that relate to a clinical trait of interest, module eigengenes are correlated with the clinical trait of interest, which gives rise to an eigengene significance measure. Eigengenes can be used as features in more complex predictive models including decision trees and Bayesian networks. One can also construct co-expression networks between module eigengenes (eigengene networks), i.e. networks whose nodes are modules. To identify intramodular hub genes inside a given module, one can use two types of connectivity measures. The first, referred to as , is defined based on correlating each gene with the respective module eigengene. The second, referred to as kIN, is defined as a sum of adjacencies with respect to the module genes. In practice, these two measures are equivalent. To test whether a module is preserved in another data set, one can use various network statistics, e.g. . Applications WGCNA has been widely used for analyzing gene expression data (i.e. transcriptional data), e.g. to find intramodular hub genes. For example, a WGCNA study revealed novel transcription factors associated with the bisphenol A (BPA) dose-response. It is often used as a data reduction step in systems genetic applications where modules are represented by "module eigengenes", e.g. module eigengenes can be used to correlate modules with clinical traits. Eigengene networks are co-expression networks between module eigengenes (i.e. networks whose nodes are modules). WGCNA is widely used in neuroscientific applications and for analyzing genomic data including microarray data, single-cell RNA-Seq data, DNA methylation data, miRNA data, peptide counts and microbiota data (16S rRNA gene sequencing). Other applications include brain imaging data, e.g. functional MRI data. R software package The WGCNA R software package provides functions for carrying out all aspects of weighted network analysis (module construction, hub gene selection, module preservation statistics, differential network analysis, network statistics). The WGCNA package is available from the Comprehensive R Archive Network (CRAN), the standard repository for R add-on packages. References Bioinformatics Data mining
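Several formulas in the Method section above were lost in extraction. As a hedged sketch of the standard quantities being described (the soft-thresholding powers quoted are the defaults commonly cited for WGCNA and should be checked against the original publications), the unsigned and signed co-expression similarities, the soft-thresholded adjacency, and its log-linear relation to similarity can be written as

s_{ij}^{\mathrm{unsigned}} = \bigl|\mathrm{cor}(x_i, x_j)\bigr|, \qquad
s_{ij}^{\mathrm{signed}} = \frac{1 + \mathrm{cor}(x_i, x_j)}{2},

a_{ij} = \bigl(s_{ij}\bigr)^{\beta}, \qquad
\beta \approx 6 \ (\text{unsigned}), \quad \beta \approx 12 \ (\text{signed}),

\log a_{ij} = \beta \,\log s_{ij},

which is why a high power β pushes low similarities towards 0 while preserving the ranking of high similarities.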
Weighted correlation network analysis
[ "Engineering", "Biology" ]
1,977
[ "Bioinformatics", "Biological engineering" ]
59,027,149
https://en.wikipedia.org/wiki/Shortcuts%20to%20adiabaticity
Shortcuts to adiabaticity (STA) are fast control protocols that drive the dynamics of a system without relying on the adiabatic theorem. The concept of STA was introduced in a 2010 paper by Xi Chen et al. Their design can be achieved using a variety of techniques. A universal approach is provided by counterdiabatic driving, also known as transitionless quantum driving. Motivated by one of the authors' systematic study of the dissipative Landau–Zener transition, the key idea was demonstrated earlier, in 2000, by a group of scientists from China, Greece and the USA as the steering of an eigenstate to a destination. Counterdiabatic driving has been demonstrated in the laboratory using a time-dependent quantum oscillator. The use of counterdiabatic driving requires diagonalizing the system Hamiltonian, limiting its use in many-particle systems. In the control of trapped quantum fluids, the use of symmetries such as scale invariance and the associated conserved quantities has made it possible to circumvent this requirement. STA have also found applications in finite-time quantum thermodynamics to suppress quantum friction. Fast nonadiabatic strokes of a quantum engine have been implemented using a three-dimensional interacting Fermi gas. The use of STA has also been suggested to drive a quantum phase transition. In this context, the Kibble–Zurek mechanism predicts the formation of topological defects. While the implementation of counterdiabatic driving across a phase transition requires complex many-body interactions, feasible approximate controls can be found. Outside of physics, STA have been applied to population genetics to derive a formalism that admits finite-time control of the speed and trajectory of evolving populations, with an eye towards manipulating large populations of organisms causing human disease as an evolutionary therapy method, or toward more efficient directed evolution. References Quantum mechanics
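A hedged sketch of the counterdiabatic (transitionless) driving construction mentioned above, in the form usually attributed to Demirplak–Rice and Berry, with |n(t)⟩ the instantaneous eigenstates of the reference Hamiltonian H₀(t):

H_{\mathrm{CD}}(t) \;=\; H_0(t) \;+\; i\hbar \sum_n \Big( |\partial_t n\rangle\langle n| \;-\; \langle n|\partial_t n\rangle\, |n\rangle\langle n| \Big).

Driving with H_CD keeps the system exactly on the instantaneous eigenstates of H₀(t) at arbitrary speed; the price is that the construction presupposes knowledge of those eigenstates, which is the diagonalization requirement that limits its use in many-particle systems, as noted above.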
Shortcuts to adiabaticity
[ "Physics" ]
377
[ "Theoretical physics", "Quantum mechanics", "Quantum physics stubs" ]
59,028,782
https://en.wikipedia.org/wiki/Francis%20Halzen
Francis Louis Halzen (born 23 March 1944 in Tienen, Belgium) is a Belgian particle physicist. He is the Hilldale and Gregory Breit Distinguished Professor at the University of Wisconsin–Madison and Director of its Institute for Elementary Particle Physics. Halzen is the Principal Investigator of the IceCube Neutrino Observatory at the Amundsen–Scott South Pole Station in Antarctica, the world's largest neutrino detector, which has been operational since 2010. Background Halzen was born and raised in Belgium. He graduated from the University of Louvain (UCLouvain) with an MSc in Physics in 1966 and a PhD in 1969, then earned his Agrégé de l'Enseignement Supérieur in 1972. Between 1969 and 1971 he worked as a scientific associate at CERN. Since 1972 he has been a professor at the University of Wisconsin–Madison and the Principal Investigator on the AMANDA and IceCube projects. Halzen has been a leading scientist in the development of cosmic ray physics and astroparticle physics since the 1970s. In addition to particle physics, he published many early papers on cosmic ray anomalies and quark matter, on relations between particle physics and cosmic rays, on particles from supernovae and on muon production in atmospheric gamma-ray showers. He has served on various advisory committees, including those for the SNO, Telescope Array and Auger-upgrade experiments, the Max Planck Institutes in Heidelberg and Munich, the ICRR at the University of Tokyo, the US Particle Physics Prioritization Panel and the ApPEC particle astrophysics advisory panel in Europe. With Alan Martin he is the co-author of Quarks and Leptons, a standard text. AMANDA Halzen first learned about attempts by Russian scientists to detect neutrinos in Antarctica, using radio antennas at their Antarctic research station to search for electric sparks resulting from cosmic neutrinos colliding with the ice. After determining that these interactions would be too weak to register, in 1987 he started working on the AMANDA project, which proposed burying an array of light sensors deep in the Antarctic ice, which is clear, dark, stable, sterile and free of background light. This pilot experiment proved successful; however, its results were marred by interference from cosmic rays as well as air bubbles in the ice. This convinced him that a much larger and deeper array would be needed, and in 2005 the AMANDA project became part of its successor project, the IceCube Neutrino Observatory. IceCube Halzen argued for a much larger detector and was able to secure funding from both European and American sources. In 2005 his team started construction of the IceCube project, designed to be 100 times bigger than AMANDA, with a total size of 1 km3 and buried up to a mile and a half deep. After six years of construction, IceCube became operational in 2010. The most important result from IceCube was the clear breakthrough observation in 2013 of high-energy neutrinos (about 100 times more energetic than the particles accelerated today in the world's most powerful machine, the LHC at CERN) from as yet unidentified sources outside the Galaxy. This discovery has stimulated the planning and development of even larger neutrino telescopes, both at the South Pole and deep under the ocean. 
Awards 1994: Fellow of the American Physical Society 2013: Breakthrough of the Year Award by the journal Physics World for the first-time discovery of cosmic neutrinos beyond the Milky Way 2015: Balzan Prize 2015: European Physical Society Prize for Astroparticle Physics and Cosmology 2018: Bruno Pontecorvo Prize for significant contribution to the IceCube detector construction and experimental discovery of high-energy astrophysical neutrinos. 2019: Yodh Prize of IUPAP 2021: Homi Bhabha Medal and Prize of IUPAP and TIFR 2024 Elected to National Academy of Sciences References External links (archive of talks at Institute for Nuclear Theory, U.S. Department of Energy & University of Washington) Living people 1944 births People from Tienen Université catholique de Louvain alumni People associated with CERN University of Wisconsin–Madison faculty Belgian expatriates in the United States Belgian physicists Cosmic ray physicists Particle physicists Fellows of the American Physical Society Members of Academia Europaea Recipients of the Homi Bhabha Medal and Prize Recipients of the Yodh Prize
Francis Halzen
[ "Physics" ]
900
[ "Particle physicists", "Particle physics" ]
59,028,825
https://en.wikipedia.org/wiki/Lithium%20cyclopentadienide
Lithium cyclopentadienide is an organolithium compound with the formula C5H5Li. The compound is often abbreviated as LiCp, where Cp− is the cyclopentadienide anion. Lithium cyclopentadienide is a colorless solid, although samples often are pink owing to traces of oxidized impurities. Preparation, structure and reactions Lithium cyclopentadienide is commercially available as a solution in tetrahydrofuran. It is prepared by treating cyclopentadiene with butyllithium: C5H6 + LiC4H9 → LiC5H5 + C4H10 Because lithium cyclopentadienide is usually handled as a solution, the solvent-free solid is rarely encountered. According to X-ray crystallography, LiCp is a "polydecker" sandwich complex, consisting of an infinite chain of alternating Li+ centers sandwiched between μ-η5:η5-C5H5 ligands. In the presence of amines or ethers, LiCp gives adducts, e.g. (η5-Cp)Li(TMEDA). LiCp is a common reagent for the preparation of cyclopentadienyl complexes. See also Sodium cyclopentadienide References Cyclopentadienyl complexes Non-benzenoid aromatic carbocycles Organolithium compounds
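As a quick check on the composition given above, the ion pair can be assembled in a cheminformatics toolkit. The sketch below assumes RDKit is available and writes the anion in a localized (Kekulé) SMILES form; both the SMILES string and the choice of toolkit are illustrative assumptions, not something taken from the sources.

```python
# Minimal sketch using RDKit (assumed available); the anion is written in a
# localized form to avoid relying on aromaticity perception of the charged ring.
from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

licp = Chem.MolFromSmiles("[Li+].[CH-]1C=CC=C1")   # Li+ plus the cyclopentadienide anion
print(rdMolDescriptors.CalcMolFormula(licp))        # expected to report C5H5Li
print(round(Descriptors.MolWt(licp), 2))            # roughly 72 g/mol
```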
Lithium cyclopentadienide
[ "Chemistry" ]
300
[ "Organolithium compounds", "Organometallic chemistry", "Cyclopentadienyl complexes", "Reagents for organic chemistry" ]
59,031,392
https://en.wikipedia.org/wiki/Quantum%20speed%20limit
In quantum mechanics, a quantum speed limit (QSL) is a limitation on the minimum time for a quantum system to evolve between two distinguishable (orthogonal) states. QSL theorems are closely related to time-energy uncertainty relations. In 1945, Leonid Mandelstam and Igor Tamm derived a time-energy uncertainty relation that bounds the speed of evolution in terms of the energy dispersion. Over half a century later, Norman Margolus and Lev Levitin showed that the speed of evolution cannot exceed the mean energy, a result known as the Margolus–Levitin theorem. Realistic physical systems in contact with an environment are known as open quantum systems and their evolution is also subject to QSLs. Quite remarkably, it was shown that environmental effects, such as non-Markovian dynamics, can speed up quantum processes, which was verified in a cavity QED experiment. QSLs have been used to explore the limits of computation and complexity. In 2017, QSLs were studied in a quantum oscillator at high temperature. In 2018, it was shown that QSLs are not restricted to the quantum domain and that similar bounds hold in classical systems. In 2021, both the Mandelstam–Tamm and the Margolus–Levitin QSL bounds were concurrently tested in a single experiment which indicated there are "two different regimes: one where the Mandelstam-Tamm limit constrains the evolution at all times, and a second where a crossover to the Margolus-Levitin limit occurs at longer times." In quantum sensing, QSLs impose fundamental constraints on the maximum achievable time resolution of quantum sensors. These limits stem from the requirement that quantum states must evolve to orthogonal states to extract precise information. For example, in applications like Ramsey interferometry, the QSL determines the minimum time required for phase accumulation during control sequences, directly impacting the sensor's temporal resolution and sensitivity. Preliminary definitions The speed limit theorems can be stated for pure states and for mixed states; they take a simpler form for pure states. An arbitrary pure state can be written as a linear combination of energy eigenstates: |ψ(0)⟩ = Σ_n c_n |E_n⟩. The task is to provide a lower bound for the time interval τ required for the initial state |ψ(0)⟩ to evolve into a state orthogonal to |ψ(0)⟩. The time evolution of a pure state is given by the Schrödinger equation, so that |ψ(t)⟩ = Σ_n c_n e^(−iE_n t/ħ) |E_n⟩. Orthogonality is obtained when ⟨ψ(0)|ψ(τ)⟩ = 0, and the minimum time interval τ⊥ required to achieve this condition is called the orthogonalization interval or orthogonalization time. Mandelstam–Tamm limit For pure states, the Mandelstam–Tamm theorem states that the minimum time required for a state to evolve into an orthogonal state is bounded below: τ⊥ ≥ πħ/(2ΔE), where (ΔE)² = ⟨ψ|H²|ψ⟩ − ⟨ψ|H|ψ⟩² is the variance of the system's energy and H is the Hamiltonian operator. The quantum evolution is independent of the particular Hamiltonian used to transport the quantum system along a given curve in the projective Hilbert space; the distance along this curve is measured by the Fubini–Study metric. This is sometimes called the quantum angle, as it can be understood as the arccos of the inner product of the initial and final states. For mixed states The Mandelstam–Tamm limit can also be stated for mixed states and for time-varying Hamiltonians. In this case, the Bures metric must be employed in place of the Fubini–Study metric. A mixed state can be understood as a sum over pure states, weighted by classical probabilities; likewise, the Bures metric is a weighted sum of the Fubini–Study metric. 
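Before turning to the time-varying and mixed-state forms, the pure-state bounds can be checked numerically on a toy two-level system. The sketch below (with ħ = 1 and an energy splitting chosen arbitrarily for illustration) propagates an equal superposition of two energy eigenstates and compares its first orthogonalization time with πħ/(2ΔE) and πħ/(2⟨E⟩); for this particular state the two bounds coincide and are saturated.

```python
import numpy as np

hbar = 1.0
E = np.array([0.0, 1.0])             # illustrative eigenenergies, ground state at zero
c = np.array([1.0, 1.0]) / np.sqrt(2)  # equal superposition of the two eigenstates

mean_E = np.sum(np.abs(c)**2 * E)
dE = np.sqrt(np.sum(np.abs(c)**2 * E**2) - mean_E**2)

# scan the survival amplitude <psi(0)|psi(t)> for its first zero
ts = np.linspace(0, 10, 100001)
overlap = np.abs(np.sum(np.abs(c)**2 * np.exp(-1j * E * ts[:, None] / hbar), axis=1))
t_orth = ts[np.argmax(overlap < 1e-4)]

print("orthogonalization time :", t_orth)                 # ~ pi
print("Mandelstam-Tamm bound  :", np.pi * hbar / (2 * dE))
print("Margolus-Levitin bound :", np.pi * hbar / (2 * mean_E))
```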
For a time-varying Hamiltonian H(t) and time-varying density matrix ρ(t), the variance of the energy is given by (ΔE(t))² = tr[ρ(t)H(t)²] − (tr[ρ(t)H(t)])². The Mandelstam–Tamm limit then takes the form ∫₀^τ ΔE(t) dt ≥ ħ L(ρ(0), ρ(τ)), where L(ρ(0), ρ(τ)) is the Bures distance between the starting and ending states. The Bures distance is geodesic, giving the shortest possible distance of any continuous curve connecting two points, with ds understood as an infinitesimal path length along a curve parametrized by t. Equivalently, the time taken to evolve from ρ(0) to ρ(τ) is bounded as τ ≥ ħ L(ρ(0), ρ(τ)) / ⟨ΔE⟩, where ⟨ΔE⟩ = (1/τ)∫₀^τ ΔE(t) dt is the time-averaged uncertainty in energy. For a pure state evolving under a time-varying Hamiltonian, the time taken to evolve from one pure state to another pure state orthogonal to it is bounded as τ⊥ ≥ πħ/(2⟨ΔE⟩). This follows, as for a pure state one has the density matrix ρ(t) = |ψ(t)⟩⟨ψ(t)|. The quantum angle (Fubini–Study distance) is then arccos|⟨ψ(0)|ψ(τ)⟩|, and so one concludes L = π/2 when the initial and final states are orthogonal. Margolus–Levitin limit For the case of a pure state, Margolus and Levitin obtain a different limit, that τ⊥ ≥ πħ/(2⟨E⟩), where ⟨E⟩ = ⟨ψ|H|ψ⟩ = Σ_n |c_n|² E_n is the average energy. This form applies when the Hamiltonian is not time-dependent, and the ground-state energy is defined to be zero. For time-varying states The Margolus–Levitin theorem can also be generalized to the case where the Hamiltonian varies with time, and the system is described by a mixed state. In this form, it is given by τ ≥ πħ/(2Ē), where Ē = (1/τ)∫₀^τ tr[ρ(t)H(t)] dt is the time-averaged energy, with the ground state defined so that it has energy zero at all times. This provides a result for time-varying states. Although it also provides a bound for mixed states, the bound (for mixed states) can be so loose as to be uninformative. The Margolus–Levitin theorem has not yet been experimentally established in time-dependent quantum systems, whose Hamiltonians are driven by arbitrary time-dependent parameters, except for the adiabatic case. Dual Margolus–Levitin limit In addition to the original Margolus–Levitin limit, a dual bound exists for quantum systems with a bounded energy spectrum. This dual bound, also known as the Ness–Alberti–Sagi limit or the Ness limit, depends on the difference between the state's mean energy and the energy of the highest occupied eigenstate. In bounded systems, the minimum time required for a state to evolve to an orthogonal state is bounded by τ⊥ ≥ πħ/(2(E_max − ⟨E⟩)), where E_max is the energy of the highest occupied eigenstate and ⟨E⟩ is the mean energy of the state. This bound complements the original Margolus–Levitin limit and the Mandelstam–Tamm limit, forming a trio of constraints on quantum evolution speed. Levitin–Toffoli limit A 2009 result by Lev B. Levitin and Tommaso Toffoli states that the precise bound for the Mandelstam–Tamm theorem is attained only for a qubit state. This is a two-level state in an equal superposition of the energy eigenstates |0⟩ and |E₁⟩: |ψ⟩ = (|0⟩ + |E₁⟩)/√2. The states |0⟩ and |E₁⟩ are unique up to degeneracy of the energy level and an arbitrary phase factor. This result is sharp, in that this state also satisfies the Margolus–Levitin bound: for it, ΔE = ⟨E⟩ = E₁/2, and so the orthogonalization time equals πħ/(2ΔE) = πħ/(2⟨E⟩). This result establishes that the combined limit τ⊥ ≥ max{πħ/(2ΔE), πħ/(2⟨E⟩)} is strict. Levitin and Toffoli also provide a bound for the average energy in terms of the maximum energy eigenvalue appearing in the expansion of the state. (This is the quarter-pinched sphere theorem in disguise, transported to complex projective space.) The strict lower bound is again attained for the qubit state described above. Bremermann's limit The quantum speed limit bounds establish an upper bound at which computation can be performed. 
Computational machinery is constructed out of physical matter that follows quantum mechanics, and each operation, if it is to be unambiguous, must be a transition of the system from one state to an orthogonal state. Suppose the computing machinery is a physical system evolving under a Hamiltonian that does not change with time. Then, according to the Margolus–Levitin theorem, the number of operations per unit time per unit energy is bounded above by 2/(πħ). This establishes a strict upper limit on the number of calculations that can be performed by physical matter. The processing rate of all forms of computation cannot be higher than about 6 × 10³³ operations per second per joule of energy. This includes "classical" computers, since even classical computers are still made of matter that follows quantum mechanics. This bound is not merely a fanciful limit: it has practical ramifications for quantum-resistant cryptography. Imagining a computer operating at this limit, a brute-force search to break a 128-bit encryption key requires only modest resources. Brute-forcing a 256-bit key requires planetary-scale computers, while a brute-force search of 512-bit keys is effectively unattainable within the lifetime of the universe, even if galactic-sized computers were applied to the problem. The Bekenstein bound limits the amount of information that can be stored within a volume of space. The maximal rate of change of information within that volume of space is given by the quantum speed limit. This product of limits is sometimes called the Bremermann–Bekenstein limit; it is saturated by Hawking radiation. That is, Hawking radiation is emitted at the maximal allowed rate set by these bounds. References Further reading Quantum mechanics Mathematical physics
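As a rough numerical illustration of the limits just quoted, the sketch below evaluates 2/(πħ) and then estimates exhaustive key-search times for a hypothetical computer running at that rate on one kilogram of mass-energy; the key sizes follow the discussion above, while the one-kilogram energy budget is purely an assumption made for this example.

```python
import math

hbar = 1.054571817e-34               # J*s
ops_per_joule_second = 2 / (math.pi * hbar)
print(f"{ops_per_joule_second:.2e} operations per second per joule")   # ~6e33

# Hypothetical machine at this limit fed by E = 1 kg of mass-energy (assumption)
E = 1.0 * (2.998e8) ** 2             # joules, E = m c^2
rate = ops_per_joule_second * E      # operations per second

for bits in (128, 256, 512):
    seconds = 2 ** bits / rate
    print(f"{bits}-bit exhaustive search: ~{seconds:.2e} s "
          f"(~{seconds / 3.15e7:.2e} years)")
```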
Quantum speed limit
[ "Physics", "Mathematics" ]
1,834
[ "Applied mathematics", "Theoretical physics", "Mathematical physics", "Quantum mechanics" ]
59,038,212
https://en.wikipedia.org/wiki/Tulipalin%20A
Tulipalin A, also known as α-methylene-γ-butyrolactone, is a naturally occurring compound found in certain flowers such as tulips and alstroemerias. Tulipalin A has the molecular formula C5H6O2 and the CAS registry number 547-65-9. It is an allergen and has been known to cause occupational contact dermatitis, known as 'tulip fingers', in people who are regularly exposed to it, such as florists. It is synthesized from tuliposide A in response to damage to the plant: when the plant is damaged, tuliposide A is broken down by tuliposide-converting enzymes (TCE) to produce tulipalin A. More recent experiments with this compound have uncovered potential applications for it in the field of polymerization. References Gamma-lactones Plant toxins Vinylidene compounds
Tulipalin A
[ "Chemistry" ]
203
[ "Chemical ecology", "Plant toxins" ]
50,428,178
https://en.wikipedia.org/wiki/Ultraviolet%20thermal%20processing
In electronics manufacturing, ultraviolet thermal processing (UVTP) is the process of using ultraviolet light to stabilize dielectric films used to insulate semiconductors. Description Dielectric films used in semiconductor devices need low dielectric constants (k-values) to keep parasitic capacitance low and so allow continued semiconductor scaling. Newer dielectric films used to insulate modern chips can be easily damaged, causing them to lose their insulating capacity. Specialized treatments applied with ultraviolet light improve chip performance. Tungsten halogen lamps are the sources used for traditional rapid thermal processing. References Electronics manufacturing Packaging (microfabrication) Semiconductor device fabrication Semiconductor packages Semiconductor technology
Ultraviolet thermal processing
[ "Materials_science", "Engineering" ]
126
[ "Electronics manufacturing", "Microtechnology", "Packaging (microfabrication)", "Semiconductor device fabrication", "Electronic engineering", "Semiconductor technology" ]
50,429,846
https://en.wikipedia.org/wiki/Gene%20desert
Gene deserts are regions of the genome that are devoid of protein-coding genes. Gene deserts constitute an estimated 25% of the entire genome, leading to recent interest in their true functions. Originally believed to contain inessential "junk DNA" due to their inability to create proteins, gene deserts have since been linked to several vital regulatory functions, including distal enhancing and conservatory inheritance. Thus, an increasing number of risks that lead to several major diseases, including a handful of cancers, has been attributed to irregularities found in gene deserts. One of the most notable examples is the 8q24 gene region, which, when affected by certain single nucleotide polymorphisms, leads to a myriad of diseases. The major identifying factors of gene deserts lie in their low GC content and their relatively high levels of repeats, which are not observed in coding regions. Recent studies have further categorized gene deserts into variable and stable forms; regions are categorized based on their behavior through recombination and their genetic contents. Although current knowledge of gene deserts is rather limited, ongoing research and improved techniques are beginning to open the doors for exploration of the various important effects of these noncoding regions. History Although the possibility of function in gene deserts was predicted as early as the 1960s, genetic identification tools were unable to uncover any specific characteristics of the long noncoding regions, other than that no coding occurred in those regions. Before the completion of the human genome in 2001 through the Human Genome Project, most of the early associative gene comparisons relied on the belief that essential housekeeping genes were clustered in the same areas of the genome for ease of access and tight regulation. This belief later gave rise to a hypothesis that gene deserts are therefore former regulatory sequences that are highly linked (and hence do not undergo recombination), but have had substitutions between them over time. These substitutions could cause tightly conserved genes to separate over time, thus forming regions of nonsense code with a few essential genes. However, uncertainty due to differential gene conservation rates in different portions of chromosomes prevented accurate identification. Later associations were remodeled when regulatory sequences were associated with transcription factors, leading to the birth of large-scale genome-wide mapping. Thus began the hunt for the contents and functions of gene deserts. Recent advancements in the screening of chromatin signatures on chromosomes (for instance, chromosome conformation capture, also known as 3C) have allowed the confirmation of the long-range gene activation model, which postulates that there are indeed physical links between regulatory enhancers and their target promoters. Research on gene deserts, although centered on human genetics, has also been applied to mice, various birds, and Drosophila melanogaster. Although conservation is variable among selected species' genomes, orthologous gene deserts function similarly. Thus, the prevailing contention is that these noncoding sequences harbor active and important regulatory elements. Possible functions One study focused on a regulatory archipelago, a region with "islands" of coding sequences surrounded by vast noncoding regions. 
The study, which explored the effects of regulation on the Hox genes, initially focused on two enhancer sequences, GCR and Prox, which are located 200 basepairs and 50 basepairs respectively upstream of the Hox D locus. To manipulate the region, the study inverted the two enhancer sequences and discovered no major effects on the transcription of the Hox D gene, even though the two sequences were the closest sequences to the gene. Thus, they turned to the gene desert that flanked the GCR sequence upstream and found 5 regulatory islands within it that could regulate the gene. To select the most likely candidate, the study then applied several individual and multiple deletions to the five islands to observe the effects. These varied deletions only resulted in minor effects, including physical abnormalities or a few missing digits. When the experiment was taken a step further and a deletion of the entire 830 kilobase gene desert was applied, the functionality of the entire Hox D locus was rendered inactive. This indicates that the neighboring gene desert, as an entire 830 kilobase unit (including the five island sequences within it), serves as an important regulator of a single gene that spans merely 50 kilobases. Therefore, these results hinted at the regulatory effects of flanking gene deserts. This study was supported by a later observation, through a comparison between fluorescence in situ hybridization and chromosome conformation capture, which discovered that the Hox D locus was the most decondensed portion in the region. This meant that it had relatively higher activity in comparison to the flanking gene deserts. Hence, the Hox D locus could be regulated by specific nearby enhancer sequences that were not expressed in unison. However, this does caution that proximity is an inaccurate measure when either analytical method is used alone. Thus, associations between regulatory gene deserts and their target promoters seem to have variable distances, and the deserts are not required to border their targets. The variability in distance demonstrates that distance may be another important factor that is determined by gene deserts. For instance, distal enhancers may interact with their target promoters through looping interactions which must act over a certain distance. Thus, proximity is not an accurate predictor of enhancers: enhancers do not need to border their target sequence to regulate them. While this leads to a variation in distances, the average distance between transcription start sites and the interaction complex mediated by their enhancer elements is 120 kilobases upstream of the start site. Gene deserts may play a role in constructing this distance to allow maximal looping to occur. Given that enhancer complex formation is a fairly simply regulated mechanism (the structures that are recruited into the enhancing complex have various regulatory controls that govern construction), more than 50% of promoters have several long-range interactions. Certain core genes even have up to 20 possible enhancing interactions. There is a curious bias for complexes to form only upstream of the promoters. Thus, given the correlation that many regulatory gene deserts appear upstream of their target promoters, it is possible that the more immediate role that gene deserts play is in long-range regulation of key sequences. 
As the ideal formation of enhancer interactions requires specific constructs, a possible side-product of the regulatory roles of gene deserts may be the conservation of genes: to retain the specific lengths of loops and the order of regulating genes hidden in gene deserts, certain portions of gene deserts are more highly conserved than others when passing through inheritance events. These conserved noncoding sequences (CNS) are directly associated with syntenic inheritance in all vertebrates. Thus, the presence of these CNSs could serve to conserve large regions of genes. Although distance may vary in regulatory gene deserts, distance appears to have an upper limit in conservative gene deserts. CNSs were initially thought to occur close to their conserved genes: earlier estimates placed most CNSs in proximity to gene sequences. However, the expansion of genetic data has revealed that several CNSs reside up to 2.5 megabases from their target genes, with the majority of CNSs falling between 1 and 2 megabases. This range, which was measured for the human genome, varies among different species. For instance, in comparison to humans, the Fugu fish has a smaller range, with an estimated maximum distance of a few hundred kilobases. Regardless of the difference in lengths, CNSs work in similar ways in both species. Thus, as functions differ between gene deserts, so do their contents. Stable and variable gene deserts Certain gene deserts are heavy regulators, while others may be deleted without any effect. As a possible classification, gene deserts can be broken down into two subtypes: stable and variable. Stable gene deserts have fewer repeats and relatively higher guanine–cytosine (GC) content than is observed in variable gene deserts. Guanine and cytosine content is indicative of protein-coding functionality. For example, in a study on chromosomes 2 and 4, which have been linked to several genetic diseases, there was elevated GC content in certain regions. Mutations in these GC-rich regions caused a variety of diseases, revealing the necessary integrity of these genes. High-density CpG regions serve as regulatory regions for DNA methylation. Therefore, essential coding genes should be represented by high-CpG regions. In particular, regions with high GC content should tend to have high densities of genes that are devoted mainly to essential housekeeping and tissue-specific processes. These processes would require the most protein production to express functionality. Stable gene deserts, which have higher levels of GC content, should therefore contain the essential enhancer sequences. This could determine the conservatory functions of stable gene deserts. On the other hand, approximately 80% of gene deserts have low GC content, indicating that they have very few essential genes. Thus, the majority of gene deserts are variable gene deserts, which may have alternate functions. One prevalent theory regarding the origins of gene deserts postulates that gene deserts are accumulations of essential genes that act at a distance. This may hold true, as given the low numbers of essential genes within them, these regions would have been less conserved. As a result, cytosine-to-thymine conversions, the most common SNPs, would cause a gradual separation between the few essential genes within variable gene deserts. These essential sequences would have been maintained and conserved, leading to small regions of high density that regulate at a distance. 
GC content is therefore an indicator of the presence of coding or regulatory sequences in DNA. While stable gene deserts have higher GC content, this relative value is only an average. Within stable gene deserts, although the ends contain very high levels of GC content, the main bulk of the DNA contains even less GC content than is observed in variable gene deserts. This indicates that there are very few highly conserved regions in stable gene deserts that do not recombine, or do so at very low rates. Given that the ends of the stable gene deserts have particularly high levels of GC content, these sequences must be extremely conserved. This conservation may in turn cause the flanking genes to also have higher conservation rates. Thus, stable gene deserts should be directly linked to at least one of their flanking genes and cannot be separated from coding sequences by recombination events. Most gene deserts appear to cluster in pairs around a small number of genes. This clustering creates long loci that have very low gene density; small regions with high numbers of genes are surrounded by long stretches of gene deserts, creating a low gene average. Therefore, the minimized probability of recombination events in these long loci creates syntenic blocks that are inherited together over time. These syntenic blocks can be conserved for very long periods of time, preventing loss of essential material, even while the distance between essential genes may grow in time. Although this effect should theoretically be amplified through the even lower GC content in variable gene deserts (thereby truly minimizing gene density), the gene conservation rates in variable gene deserts are even lower than those observed in stable gene deserts; in fact, the rate is far lower than in the rest of the genome. A possible explanation for this phenomenon is that variable gene deserts may be recently evolved regions that have not yet been fixed into stable gene deserts. Therefore, shuffling may still occur before stabilizing regions within the variable gene deserts begin to cluster as whole units. There are a few exceptions to this minimal rate of conservation, as a few GC-rich gene deserts are subject to hypermethylation, which greatly reduces the accessibility of the DNA, thus effectively protecting the region from recombination. However, such cases are rarely observed. Although stable and variable gene deserts differ in content and function, both wield conservatory abilities. It is possible that since most variable gene deserts have regulatory elements that can act at a distance, conservation of the entire gene desert into a syntenic locus would not have been necessary, so long as these regulatory elements themselves were conserved as units. Given the particularly low levels of GC content, the regulatory elements would therefore be in a minimal gene density situation similar to that observed in flanking stable gene deserts, with the same effect. Thus, both types of gene deserts serve to retain essential genes within the genome. Genetic diseases The conservative nature of gene deserts confirms that these stretches of noncoding bases are essential to proper functioning. Indeed, a wide range of studies on irregularities in these noncoding regions has discovered several associations with genetic diseases. One of the most studied gene deserts is the 8q24 region. Early genome-wide association studies were focused on the 8q24 region (residing on chromosome 8) due to the abnormally high rates of SNPs that seem to occur in the region. 
These studies found that the region was linked to increased risks for a variety of cancers, notably prostate, breast, ovarian, colon, and pancreatic cancers. Using inserts of the gene desert into bacterial artificial chromosomes, one study was able to produce enhancer activity in certain regions, which were isolated via cloning systems. This study successfully identified an enhancer sequence hidden in the region. Within this enhancer sequence, an SNP that conferred risk for prostate cancer, labeled rs6983267, was discovered in diseased mice. However, the 8q24 region is not solely limited to conferred risks of prostate cancer. A study in 2008 screened human subjects (and controls) with variations in the gene desert region, discovering five different regions that conferred different risks when affected by different SNPs. This study used identified SNP markers in the gene desert to link the risk conferred by each of the regions to expression in a specific tissue. Although these risks were successfully linked to various forms of cancer, Ghoussaini et al. note their uncertainty as to whether the SNPs functioned merely as markers or were the direct causes of the cancers. These varied effects occur due to the different interactions between the SNPs in this region and MYC promoters of different organs. MYC, whose promoter is located a short distance downstream of the 8q24 region, is perhaps the most studied oncogene due to its association with a myriad of diseases. Normal functioning of the MYC promoter ensures that cells divide regularly. The study postulates that the 8q region, which underwent a chromosomal translocation in humans, could have moved an essential enhancer for the MYC promoter. The areas around this region could have been subjected to recombination that may have hidden the essential MYC enhancer within the gene desert through time, although its enhancing effects are still very much retained. This analysis stems from disease associations observed in several mouse species where this region is retained in proximity to the MYC promoter. Thus, the 8q24 gene desert should have been somewhat linked to the MYC promoter. The desert resembles a stable gene desert that has had very little recombination after the translocation event. Thus, a potential hypothesis is that SNPs affecting this region disrupt the important tissue-specific genes within the stable gene desert, which could explain the risks of cancer in various tissue forms. This effect of hidden enhancer elements can also be observed in other locations in the genome. For instance, SNPs in the 5p13.1 region deregulate the PTGER4 coding region, leading to Crohn's disease. Another affected region in the 9p21 gene desert causes several coronary artery diseases. However, none of these risk-conferring gene deserts seem to be affected as much as the 8q24 region. Current studies are still unsure about the SNP-affected processes in the 8q24 region that result in particularly amplified responses to the MYC promoter. With the aid of more accessible populations and more specific markers for genome-wide association mapping, an increasing number of risk alleles are now being marked in gene deserts, where small, isolated, and seemingly unimportant regions may modulate important genes. A caveat The majority of the contents of gene deserts are still likely to be disposable. Naturally, this is not to say that the roles that gene deserts play are inessential or unimportant; rather, their functions may include buffering effects. 
An example of an essential gene desert with inessential DNA content is the telomere regions that protect the ends of chromosomes. Telomeres can be categorized as true gene deserts, given that they solely contain repeats of TTAGGG (in humans) and do not have apparent protein-coding functions. Without these telomeres, human genomes would be severely mutated within a fixed number of cell cycles. On the other hand, since telomeres do not code for proteins, their gradual loss does not directly disrupt protein-coding sequences. Therefore, the term "junk" DNA should no longer be applied to any region of the genome; every portion of the genome should play a role in protecting, regulating, or repairing the protein-coding regions that determine the functions of life. Although there is still much to learn about the nooks and crannies of the immense (yet limited) human genome, with the aid of various new technologies and the synthesis of the full human genome, we may in the coming years unravel a great collection of secrets about the marvels of our genetic code. See also Conserved non-coding sequence Gene regulatory network Noncoding DNA Non-coding RNA References DNA Gene expression Non-coding DNA
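Since the stable/variable classification discussed above leans heavily on GC content, the following minimal helper makes the quantity concrete; the sequence is invented purely for illustration, and the CpG count is included only because CpG density is mentioned above in connection with methylation.

```python
def gc_content(seq: str) -> float:
    """Fraction of bases that are G or C."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def cpg_count(seq: str) -> int:
    """Number of CpG dinucleotides (a C immediately followed by a G)."""
    return seq.upper().count("CG")

toy = "ATGCGCGATATTTTAACGCGGCCGCTATA"   # made-up sequence for illustration
print(f"GC content: {gc_content(toy):.2%}, CpG sites: {cpg_count(toy)}")
```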
Gene desert
[ "Chemistry", "Biology" ]
3,560
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
50,431,382
https://en.wikipedia.org/wiki/RAN%20translation
Repeat Associated Non-AUG translation, or RAN translation, is an irregular mode of mRNA translation that can occur in eukaryotic cells. Mechanism For the majority of eukaryotic messenger RNAs (mRNAs), translation initiates from a methionine-encoding AUG start codon following the molecular processes of 'cap-binding' and 'scanning' by ribosomal pre-initiation complexes (PICs). In rare exceptions, such as translation by viral IRES-containing mRNAs, 'cap-binding' and/or 'scanning' are not required for initiation, although AUG is still typically used as the first codon. RAN translation is an exception to the canonical rules as it uses variable start site selection and initiates from a non-AUG codon, but may still depend on 'cap-binding' and 'scanning'. Disease RAN translation produces a variety of dipeptide repeat proteins by translation of expanded hexanucleotide repeats present in an intron of the C9orf72 gene. The expansion of the hexanucleotide repeats and thus accumulation of dipeptide repeat proteins are thought to cause cellular toxicity that leads to neurodegeneration in ALS disease. See also Trinucleotide repeat disorder Eukaryotic translation C9orf72 References Protein biosynthesis Molecular biology Gene expression Protein complexes RNA-binding proteins Biochemistry RNA Proteins Neurodegenerative disorders
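To make the C9orf72 case concrete, the toy script below translates a short stretch of the GGGGCC hexanucleotide repeat in its three sense-strand reading frames; with no AUG start codon involved, each frame yields one of the dipeptide repeats (poly-GA, poly-GP, poly-GR) of the kind described above. The repeat count and the minimal codon table are illustrative choices, not values from the sources.

```python
# Partial codon table: only the codons that occur in (GGGGCC)n reading frames.
CODONS = {"GGG": "G", "GCC": "A", "CCG": "P", "GGC": "G", "CGG": "R"}

repeat = "GGGGCC" * 6            # toy expansion; pathogenic alleles carry hundreds of repeats
for frame in range(3):
    seq = repeat[frame:]
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    peptide = "".join(CODONS[c] for c in codons)
    print(f"frame {frame}: {peptide}")   # GAGAGA..., GPGPGP..., GRGRGR...
```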
RAN translation
[ "Chemistry", "Biology" ]
291
[ "Biomolecules by chemical classification", "Protein biosynthesis", "Gene expression", "Molecular genetics", "Biosynthesis", "Cellular processes", "nan", "Molecular biology", "Biochemistry", "Proteins" ]
52,802,478
https://en.wikipedia.org/wiki/Numerical%20algebraic%20geometry
Numerical algebraic geometry is a field of computational mathematics, particularly computational algebraic geometry, which uses methods from numerical analysis to study and manipulate the solutions of systems of polynomial equations. Homotopy continuation The primary computational method used in numerical algebraic geometry is homotopy continuation, in which a homotopy is formed between two polynomial systems, and the isolated solutions (points) of one are continued to the other. This is a specialization of the more general method of numerical continuation. Let z represent the variables of the system. By abuse of notation, and to facilitate the spectrum of ambient spaces over which one can solve the system, we do not use vector notation for z. Similarly for the polynomial systems f and g. Current canonical notation calls the start system g, and the target system, i.e., the system to solve, f. A very common homotopy, the straight-line homotopy, between f and g is H(z, t) = t·g(z) + (1 − t)·f(z). In the above homotopy, one starts the path variable at t = 1 and continues toward t = 0. Another common choice is to run from t = 0 to t = 1. In principle, the choice is completely arbitrary. In practice, regarding endgame methods for computing singular solutions using homotopy continuation, the target time being t = 0 can significantly ease analysis, so this perspective is here taken. Regardless of the choice of start and target times, H ought to be formulated such that H(z, 1) = g(z) and H(z, 0) = f(z). One has a choice in g(z), including Roots of unity Total degree Polyhedral Multi-homogeneous and beyond these, specific start systems that closely mirror the structure of f may be formed for particular systems. The choice of start system impacts the computational time it takes to solve f, in that those that are easy to formulate (such as total degree) tend to have higher numbers of paths to track, and those that take significant effort (such as the polyhedral method) are much sharper. There is currently no good way to predict which will lead to the quickest time to solve. Actual continuation is typically done using predictor–corrector methods, with additional features as implemented. Predicting is done using a standard ODE predictor method, such as Runge–Kutta, and correction often uses Newton–Raphson iteration. Because f and g are polynomial, homotopy continuation in this context is theoretically guaranteed to compute all solutions of f, due to Bertini's theorem. However, this guarantee is not always achieved in practice, because of issues arising from limitations of the modern computer, most notably finite precision. That is, despite the strength of the probability-1 argument underlying this theory, without using a priori certified tracking methods, some paths may fail to track perfectly for various reasons. Witness set A witness set is a data structure used to describe algebraic varieties. The witness set for an affine variety X that is equidimensional consists of three pieces of information. The first piece of information is a system of equations f. These equations define the algebraic variety X that is being studied. The second piece of information is a linear space L. The dimension of L is the codimension of X, and L is chosen to intersect X transversely. The third piece of information is the list of points in the intersection X ∩ L. This intersection has finitely many points, and the number of points is the degree of the algebraic variety X. Thus, witness sets encode the answer to the first two questions one asks about an algebraic variety: What is the dimension, and what is the degree? 
Witness sets also allow one to perform a numerical irreducible decomposition, component membership tests, and component sampling. This makes witness sets a good description of an algebraic variety. Certification Solutions to polynomial systems computed using numerical algebraic geometric methods can be certified, meaning that the approximate solution is "correct". This can be achieved in several ways, either a priori using a certified tracker, or a posteriori by showing that the point is, say, in the basin of convergence for Newton's method. Software Several software packages implement portions of the theoretical body of numerical algebraic geometry. These include, in alphabetic order: alphaCertified Bertini Hom4PS HomotopyContinuation.jl Macaulay2 (core implementation of homotopy tracking and NumericalAlgebraicGeometry package) MiNuS: Optimized C++ framework for fast homotopy continuation. Fastest solver for certain 100-320 degree square problems to date. PHCPack References External links Bertini home page Hom4PS-3 HomotopyContinuation.jl MiNuS fast C++ framework Algebraic geometry Computational geometry Computational fields of study
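As a bare-bones illustration of the straight-line homotopy and predictor–corrector tracking described above, the sketch below follows the three roots of a total-degree start system g(z) = z³ − 1 to an arbitrarily chosen target cubic; the random complex constant gamma stands in for the probability-1 argument mentioned earlier. This is only a toy tracker, not how production packages such as those listed above are implemented.

```python
import cmath, random

f  = lambda z: z**3 - 2*z + 2          # target system (arbitrary example)
df = lambda z: 3*z**2 - 2
g  = lambda z: z**3 - 1                # total-degree start system
dg = lambda z: 3*z**2

gamma = cmath.exp(2j * cmath.pi * random.random())   # random complex constant

def H(z, t):  return t * gamma * g(z) + (1 - t) * f(z)
def Hz(z, t): return t * gamma * dg(z) + (1 - t) * df(z)
def Ht(z):    return gamma * g(z) - f(z)             # dH/dt

def track(z, steps=500, newton_iters=3):
    for k in range(steps):
        t0, t1 = 1 - k / steps, 1 - (k + 1) / steps
        z = z + (t1 - t0) * (-Ht(z) / Hz(z, t0))     # Euler predictor along dz/dt
        for _ in range(newton_iters):                # Newton corrector at t1
            z = z - H(z, t1) / Hz(z, t1)
    return z

starts = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # roots of z^3 = 1
for z0 in starts:
    z = track(z0)
    print(f"root {z}   |f(root)| = {abs(f(z)):.1e}")
```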
Numerical algebraic geometry
[ "Mathematics", "Technology" ]
909
[ "Computational fields of study", "Computational mathematics", "Fields of abstract algebra", "Computational geometry", "Computing and society", "Algebraic geometry" ]
52,802,727
https://en.wikipedia.org/wiki/Crossover%20value
In genetics, the crossover value is the linked frequency of chromosomal crossover between two gene loci (markers). For a fixed set of genetic and environmental conditions, recombination in a particular region of a linkage structure (chromosome) tends to be constant and the same is then true for the crossover value which is used in the production of genetic maps. Origin in cell biology Crossover implies the exchange of chromosomal segments between non-sister chromatids, in meiosis during the production of gametes. The effect is to assort the alleles on parental chromosomes, so that the gametes carry recombinations of genes different from either parent. This has the overall effect of increasing the variety of phenotypes present in a population. The process of non-sister chromatid exchanges, including the crossover value, can be observed directly in stained cells, and indirectly by the presence or absence of genetic markers on the chromosomes. The visible crossovers are called chiasmata. The large-scale effect of crossover is to spread genetic variations within a population, as well as genetic basis for the selection of the most adaptable phenotypes. The crossover value depends on the mutual distance of the genetic loci observed. The crossover value is equal to the recombination value or fraction when the distance between the markers in question is short. See also Chromosomal crossover Genetic recombination References Classical genetics Cellular processes Cytogenetics
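As a small worked example of how a crossover value is estimated from a two-point cross, the offspring counts below are invented for illustration: the value is simply the fraction of recombinant offspring, and for closely linked markers it is commonly quoted in map units (centimorgans).

```python
# Hypothetical offspring counts from a two-point test cross (numbers made up)
parental    = 438 + 442      # the two non-recombinant classes
recombinant = 58 + 62        # the two recombinant classes
total       = parental + recombinant

crossover_value = recombinant / total
print(f"crossover (recombination) value: {crossover_value:.3f}")    # 0.120
print(f"approximate map distance: {100 * crossover_value:.1f} cM")  # 12.0 cM
```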
Crossover value
[ "Biology" ]
301
[ "Cellular processes" ]
52,805,567
https://en.wikipedia.org/wiki/Monoamine%20nuclei
Monoamine nuclei are clusters of cells that primarily use monoamine neurotransmitters to communicate. The raphe nuclei, ventral tegmental area, and locus coeruleus have been included in texts about monoamine nuclei. These nuclei receive a variety of inputs, including from other monoamines, as well as from glutamatergic, GABAergic, and substance P-related pathways. The catecholaminergic pathways mainly project upwards into the cortical and limbic regions, although sparse descending axons have been observed in animal models. Both ascending and descending serotonergic pathways project from the raphe nuclei. The raphe obscurus, raphe pallidus, and raphe magnus descend into the brainstem and spinal cord, while the raphe pontis, raphe dorsalis, and nucleus centralis superior project up into the medial forebrain bundle before branching off. Monoamine nuclei have been studied in relation to major depressive disorder, with some abnormalities observed; however, MAO-B levels appear to be normal during depression in these regions. References Neurochemistry
Monoamine nuclei
[ "Chemistry", "Biology" ]
226
[ "Biochemistry", "Neurochemistry" ]
52,811,717
https://en.wikipedia.org/wiki/List%20of%20fast%20radio%20bursts
This is a list of fast radio bursts. Items are listed here if information about the fast radio burst has been published. Although there could be thousands of detectable events per day, only detected ones are listed. Notes References Radio bursts Unsolved problems in astronomy Radio astronomy Fast radio bursts
List of fast radio bursts
[ "Physics", "Astronomy" ]
59
[ "Astronomy-related lists", "Unsolved problems in astronomy", "Concepts in astronomy", "Astronomical events", "Lists of astronomical events", "Radio astronomy", "Astronomical controversies", "Lists of astronomical objects", "Astronomical objects", "Astronomical sub-disciplines" ]
60,515,451
https://en.wikipedia.org/wiki/MHC%20class%20III
MHC class III is a group of proteins belonging to the class of major histocompatibility complex (MHC). Unlike other MHC types such as MHC class I and MHC class II, whose structures and functions in the immune response are well defined, MHC class III proteins are poorly defined structurally and functionally. They are not involved in antigen binding (the process called antigen presentation, a classic function of MHC proteins). Only a few of them are actually involved in immunity, while many are signalling molecules in other cellular communication. They are mainly known from their genes because their gene cluster is present between those of class I and class II. The gene cluster was discovered when genes (specifically those of complement components C2, C4, and factor B) were found in between class I and class II genes on the short (p) arm of human chromosome 6. It was later found that it contains many genes for different signaling molecules such as tumour necrosis factors (TNFs) and heat shock proteins. More than 60 MHC class III genes are described, which is about 28% of the total MHC genes (224). The region previously considered part of the MHC class III gene cluster that contains the genes for TNFs is now known as MHC class IV or the inflammatory region. In contrast to other MHC proteins, MHC class III proteins are produced by liver cells (hepatocytes) and special white blood cells (macrophages), among others. Gene structure MHC class III genes are located on chromosome 6 (6p21.3) in humans. The cluster covers 700 kb and contains 61 genes. The gene cluster is the most gene-dense region of the human genome. The genes are basically similar to those of other animals. The functions of many genes are yet unknown. Many retroelements such as human endogenous retrovirus (HERV) and Alu elements are located in the cluster. The region containing genes STK19(G11)/C4/Z/CYP21/X/Y, varying in size from 142 to 214 kb, is known as the most complex gene cluster in the human genome. Diversity MHC class III genes are similar in humans, mouse, frog (Xenopus tropicalis), and gray short-tailed opossum, but not all genes are common. For example, human NCR3, MIC and MCCD1 are absent in mouse. Human NCR3 and LST1 are absent in opossum. However, birds (chicken and quail) have only a single gene, which codes for a complement component (C4). In fishes, the genes are distributed across different chromosomes. References Immune system Cell signaling Genes on human chromosome 6 Cytokines Heat shock proteins
MHC class III
[ "Chemistry", "Biology" ]
562
[ "Organ systems", "Cytokines", "Immune system", "Signal transduction" ]
60,521,624
https://en.wikipedia.org/wiki/Theresa%20M.%20Reineke
Theresa M. Reineke (born January 1, 1972) is an American chemist and Distinguished McKnight University Professor at the University of Minnesota. She designs sustainable, environmentally friendly polymer-based delivery systems for targeted therapeutics. She is the associate editor of ACS Macro Letters. Early life and education Reineke earned her bachelor's degree at University of Wisconsin–Eau Claire. She moved to Arizona State University for her graduate studies and earned a master's degree in 1998. Reineke was a PhD student at the University of Michigan, where she was supervised by Michael O'Keeffe and Omar M. Yaghi. She was awarded the Wirt and Mary Cornell Prize for Outstanding Graduate Research. Reineke joined the California Institute of Technology as an National Institutes of Health postdoctoral fellow in 2000. Research and career Reineke joined the University of Minnesota in 2011. Her research group focus on the design, characterisation and functionalisation of macromolecular systems. These macromolecules include biocompatible polymers that can deliver DNA for regenerative medicine as well as targeted therapeutic treatments. She was made a Lloyd H. Reyerson Professor with tenure at the University of Minnesota in 2011. Reineke has published over 140 papers. Nucleic acids can have an unparalleled specificity for targets inside a cell, but need to be compacted into nanostructures (polyplexes) that can enter cells. Reineke designs polymer-based transportation systems for nucleic acids. These polymer vehicles can improve the solubility and bioavailability of drugs. These often incorporate carbohydrates, which have an affinity for polyplexes and are non-toxic. She is a member of the University of Minnesota Centre for Sustainable Polymers, synthesising polymers from sustainable ingredients. The carbohydrate units within her polymer drug delivery systems are a widely available, renewable resource. The sustainable polymers designed by Reineke include poly(ester-thioethers). Reineke used reversible addition−fragmentation chain-transfer polymerization for the synthesis of diblock terpolymers that can be used for targeted drug delivery. She used spray dried dispersions of the polymer with the drug probucol. Reineke was made a University of Minnesota Distinguished McKnight University Professor in 2017. She is the associate editor of ACS Macro Letters and on the Advisory Board of Biomacromolecules, Bioconjugate Chemistry and Polymer Chemistry. She is a member of the American Chemical Society Polymer Division. Her work has been supported by an National Science Foundation CAREER Award, a Sloan Research Fellowship, the National Institutes of Health and the National Academy of Sciences. Awards and honors 2000 Outstanding Graduate Research Award from the Wirt and Mary Cornell Prize 2003 American Chemical Society (PMSE Division), Arthur K. Doolittle Award 2007 YWCA Rising Star 2007 OH Bioscience Thirty in Their 30s Award 2012 American Society of Gene and Cell Therapy Outstanding New Investigator Award 2012 American Chemical Society, Polymer Materials: Science and Engineering Division Macro 2012 Lecture Award 2016 American Institute for Medical and Biological Engineering Fellow 2016 University of Minnesota Sara Evans Faculty Woman Scholar/Leader Award 2016 University of Minnesota George W. Taylor Award for Distinguished Research 2017 American Chemical Society Polymer Chemistry Division Carl S. 
Marvel Creative Polymer Chemistry Award 2018 Danisco Foundation DuPont Nutrition and Health Science Excellence Medal 2018 American Chemical Society POLY Fellow Award 2018 Big 10 Alliance Academic Leadership Program Fellow Patents 2014 Monomers, polymers and articles containing the same from sugar derived compounds 2018 Isosorbide-based polymethacrylates External links The Reineke Research Group References University of Minnesota faculty University of Wisconsin–Eau Claire alumni Arizona State University alumni University of Michigan alumni Polymer scientists and engineers Biochemists 1972 births Living people American women chemists 20th-century American chemists American women academics 21st-century American women scientists
Theresa M. Reineke
[ "Chemistry", "Biology" ]
787
[ "Biochemistry", "Biochemists" ]
60,525,671
https://en.wikipedia.org/wiki/Calcium%20channel%20associated%20transcriptional%20regulator
Calcium channel associated transcriptional regulator (CCAT) is a transcription factor found in mammalian cells. Generation In neuronal cells, CCAT is generated upon activation of a cryptic promoter in exon 46 of CACNA1C, the gene that encodes the voltage-gated calcium channel Cav1.2. References Genes Transcription factors
Calcium channel associated transcriptional regulator
[ "Chemistry", "Biology" ]
71
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
51,270,099
https://en.wikipedia.org/wiki/Sunviridae
Sunviridae is a family of negative-strand RNA viruses in the order Mononegavirales. Snakes serve as natural hosts. The family includes the single genus Sunshinevirus, which includes the single species Reptile sunshinevirus 1. The family was formed to accommodate the Sunshine Coast virus (SunCV), previously referred to as "Sunshine virus", a novel virus discovered in Australian pythons. The name derives from the geographic origin of the first isolate on the Sunshine Coast of Queensland, Australia. Genome Sunshineviruses have a nonsegmented, negative-sense, single-stranded RNA genome. The total length of the genome is about 17 kb. The genome encodes seven proteins. References Mononegavirales Virus families
Sunviridae
[ "Biology" ]
149
[ "Virus stubs", "Viruses" ]
51,272,361
https://en.wikipedia.org/wiki/Intensity%20fading%20MALDI%20mass%20spectrometry
Intensity-fading MALDI is a term coined to rename an existing method originally reported in 1999 to indirectly study a Protein–protein interaction or other protein complex and the same year applied to a biological mixture to study the antigenicity of the influenza virus. It involves treating a protein and a potential binding partner with a site-specific endoproteinase with the binding sites identified by their reduced area (or intensity) in a MALDI mass spectrum compared to that of non-bound protein control. It was falsely reported as new and novel in a later application by a Spanish group. The true origins of the approach and a range of applications including those employing gel based separations, drug-protein interactions and the relative affinity of such interactions, are described in a review article. External links MALDI-MS Protein Complexes - Intensity-Fading MALDI References Protein–protein interaction assays
Intensity fading MALDI mass spectrometry
[ "Chemistry", "Biology" ]
178
[ "Biochemistry methods", "Protein–protein interaction assays" ]
37,006,478
https://en.wikipedia.org/wiki/Straton%20tube
The Straton tube, by Siemens Healthineers (formerly Siemens Medical Solutions), Erlangen, Germany, is the first X-ray tube from the class of rotating envelope tubes (RET) to be used for computed tomography. With rotating envelope tubes, the entire vacuum tube rotates with respect to the anode axis, versus rotating anode tubes, in which the target disk rotates inside a stationary vacuum tube. The target cools by conduction rather than radiation. Heat storage is less important, and waiting times are eliminated. References X-rays X-ray instrumentation
Straton tube
[ "Physics", "Technology", "Engineering" ]
121
[ "X-rays", "Spectrum (physical sciences)", "Electromagnetic spectrum", "X-ray instrumentation", "Measuring instruments" ]
37,007,490
https://en.wikipedia.org/wiki/Hodge%E2%80%93Arakelov%20theory
In mathematics, Hodge–Arakelov theory of elliptic curves is an analogue of classical and p-adic Hodge theory for elliptic curves carried out in the framework of Arakelov theory. It was introduced by Shinichi Mochizuki. It bears the name of two mathematicians, Suren Arakelov and W. V. D. Hodge. The main comparison in his theory remains unpublished as of 2019. Mochizuki's main comparison theorem in Hodge–Arakelov theory states (roughly) that the space of polynomial functions of degree less than d on the universal extension of a smooth elliptic curve in characteristic 0 is naturally isomorphic (via restriction) to the d²-dimensional space of functions on the d-torsion points. It is called a 'comparison theorem' as it is an analogue for Arakelov theory of comparison theorems in cohomology relating de Rham cohomology to singular cohomology of complex varieties or étale cohomology of p-adic varieties. He pointed out that the arithmetic Kodaira–Spencer map and Gauss–Manin connection may give some important hints for Vojta's conjecture, the abc conjecture and so on; in 2012, he published his inter-universal Teichmüller theory, in which he did not use Hodge–Arakelov theory but instead used the theory of Frobenioids, anabelioids and mono-anabelian geometry. See also Hodge theory Arakelov theory P-adic Hodge theory Inter-universal Teichmüller theory References Number theory Algebraic geometry Abc conjecture
Hodge–Arakelov theory
[ "Mathematics" ]
318
[ "Discrete mathematics", "Fields of abstract algebra", "Algebraic geometry", "Abc conjecture", "Number theory" ]
37,009,413
https://en.wikipedia.org/wiki/Inertial%20manifold
In mathematics, inertial manifolds are concerned with the long term behavior of the solutions of dissipative dynamical systems. Inertial manifolds are finite-dimensional, smooth, invariant manifolds that contain the global attractor and attract all solutions exponentially quickly. Since an inertial manifold is finite-dimensional even if the original system is infinite-dimensional, and because most of the dynamics for the system takes place on the inertial manifold, studying the dynamics on an inertial manifold produces a considerable simplification in the study of the dynamics of the original system. In many physical applications, inertial manifolds express an interaction law between the small and large wavelength structures. Some say that the small wavelengths are enslaved by the large (e.g. synergetics). Inertial manifolds may also appear as slow manifolds common in meteorology, or as the center manifold in any bifurcation. Computationally, numerical schemes for partial differential equations seek to capture the long term dynamics and so such numerical schemes form an approximate inertial manifold. Introductory Example Consider the dynamical system in just two variables  and  and with parameter : It possesses the one dimensional inertial manifold  of (a parabola). This manifold is invariant under the dynamics because on the manifold    which is the same as   The manifold  attracts all trajectories in some finite domain around the origin because near the origin  (although the strict definition below requires attraction from all initial conditions). Hence the long term behavior of the original two dimensional dynamical system is given by the 'simpler' one dimensional dynamics on the inertial manifold , namely . Definition Let denote a solution of a dynamical system. The solution  may be an evolving vector in or may be an evolving function in an infinite-dimensional Banach space . In many cases of interest the evolution of  is determined as the solution of a differential equation in , say with initial value . In any case, we assume the solution of the dynamical system can be written in terms of a semigroup operator, or state transition matrix, such that for all times and all initial values . In some situations we might consider only discrete values of time as in the dynamics of a map. An inertial manifold for a dynamical semigroup  is a smooth manifold  such that is of finite dimension, for all times , attracts all solutions exponentially quickly, that is, for every initial value  there exist constants  such that . The restriction of the differential equation  to the inertial manifold  is therefore a well defined finite-dimensional system called the inertial system. Subtly, there is a difference between a manifold being attractive, and solutions on the manifold being attractive. Nonetheless, under appropriate conditions the inertial system possesses so-called asymptotic completeness: that is, every solution of the differential equation has a companion solution lying in  and producing the same behavior for large time; in mathematics, for all  there exists  and possibly a time shift  such that as . Researchers in the 2000s generalized such inertial manifolds to time dependent (nonautonomous) and/or stochastic dynamical systems (e.g.) Existence Existence results that have been proved address inertial manifolds that are expressible as a graph. 
The governing differential equation is rewritten more specifically in the form for unbounded self-adjoint closed operator  with domain , and nonlinear operator . Typically, elementary spectral theory gives an orthonormal basis of  consisting of eigenvectors : , , for ordered eigenvalues . For some given number  of modes,  denotes the projection of  onto the space spanned by , and  denotes the orthogonal projection onto the space spanned by . We look for an inertial manifold expressed as the graph . For this graph to exist the most restrictive requirement is the spectral gap condition  where the constant  depends upon the system. This spectral gap condition requires that the spectrum of  must contain large gaps to be guaranteed of existence. Approximate inertial manifolds Several methods are proposed to construct approximations to inertial manifolds, including the so-called intrinsic low-dimensional manifolds. The most popular way to approximate follows from the existence of a graph. Define the  slow variables , and the 'infinite' fast variables . Then project the differential equation onto both  and  to obtain the coupled system and . For trajectories on the graph of an inertial manifold , the fast variable . Differentiating and using the coupled system form gives the differential equation for the graph: This differential equation is typically solved approximately in an asymptotic expansion in 'small'  to give an invariant manifold model, or a nonlinear Galerkin method, both of which use a global basis whereas the so-called holistic discretisation uses a local basis. Such approaches to approximation of inertial manifolds are very closely related to approximating center manifolds for which a web service exists to construct approximations for systems input by a user. See also Wandering set References Dynamical systems
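As a concrete numerical illustration of an exponentially attracting invariant manifold of the kind defined above, the sketch below integrates a toy two-variable system chosen for this example (an assumption made here for illustration, not a system taken from the article or its references); the parabola y = x² is invariant and attracts every trajectory at the rate exp(−3t), so the long-term dynamics reduce to the single equation on the manifold.

```python
import math

# Toy system (assumption for illustration):
#   dx/dt = -x,   dy/dt = -3*y + x**2
# On the parabola y = x**2 one has dy/dt = 2*x*dx/dt = -2*x**2, which equals
# -3*y + x**2 there, so the parabola is invariant. Writing w = y - x**2 gives
# dw/dt = -3*w, so every trajectory approaches the manifold like exp(-3*t).
x, y = 1.0, 2.0            # start off the manifold
dt, steps = 1e-3, 3000     # integrate to t = 3 with forward Euler
for n in range(1, steps + 1):
    dx, dy = -x, -3.0 * y + x**2
    x, y = x + dt * dx, y + dt * dy
    if n % 1000 == 0:
        t = n * dt
        print(f"t={t:.0f}  |y - x^2|={abs(y - x**2):.3e}  exp(-3t)={math.exp(-3*t):.3e}")
```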
Inertial manifold
[ "Physics", "Mathematics" ]
1,016
[ "Mechanics", "Dynamical systems" ]
37,011,244
https://en.wikipedia.org/wiki/Conductor%20clashing
In an overhead power line, conductor clashing occurs when energized wires accidentally come into contact with each other. Overhead transmission systems typically use un-insulated bare conductors for reasons of weight and economy. When bare conductors touch, the resulting momentary short circuit or electric arc can cause disturbances to the electric power system, damage to the conductors, or fire. Conductor clashing may be caused by wind, ice, excess sag due to creep or thermal expansion due to sustained heavy loading, or by contact with animals or objects. Conductor clash is prevented by proper design and installation to anticipate the likely conditions of weather and load. The effects of clashing conductors can be mitigated by fuses or protective relays and circuit breakers to de-energize the shorted conductors. For some types of transmission line, it may be possible to automatically reclose a circuit breaker in expectation that the clash was a momentary problem, thus minimizing interruption of service to grid customers. Causes Heavy winds or gusts can often result in the unintended contact of conductors, particularly where power lines exhibit excessive sag or other structural conditions that permit conductors to come into close proximity. Trees near power lines may break and drop branches onto the wires, increasing the potential for conductors to clash by bringing them together. Vehicles may hit transmission towers or poles, and aircraft may get entangled in wires. This may cause power lines to clash. This type of collision, often the result of accidents, can have a cascading effect on the power system, leading to conductor clashing. Seismic activity may displace transmission line support structures, disturbing the planned spacing of conductors and possibly producing a clash. Acts of vandalism targeted at power lines introduce another reason for conductor clashing. Deliberate acts of hurling objects at power lines can induce drooping and the subsequent collision of wires. Process When conductors clash, heat is produced, along with vaporization of conductor material, and the expulsion of metal particles. These ejected particles, often in the form of sparks, are then carried away by the wind. The combustion aspect is driven by the release of energy in the form of an electrical arc (electrical breakdown of gas resulting in electrical discharge). Simultaneously, the conductor material erodes and vaporizes due to the intense heat generated by this arc. The process is significantly influenced by key parameters, including arc voltage, short-circuit current, and the duration of the arc. A higher arc voltage intensifies the energy of the electrical arc, while an increased short-circuit current leads to more substantial heat generation and vaporization of the conductor material. The duration of the arc plays a critical role, impacting the extent of material vaporization and potentially leading to molten or burning particles. Contact between conductors may produce an electric arc with a bright flash, the emission of sparks, and a puff of white smoke. The intense heat of the arc causes the underlying metal to reach its boiling point and vaporize. When these vaporized metal particles come into contact with the air, they ignite and burn rapidly, forming Al2O3 (aluminum oxide) as small aerosol particles. These aerosol particles can reach temperatures anywhere from 930 K (Kelvin) to 2730 K and create the characteristic puff of smoke. 
When the oxide is in a molten state, the oxidation process proceeds rapidly, with the heat generated by oxidation offsetting heat losses through convection and radiation. These droplets will continue to burn until all the metal is consumed or until they reach the ground. Effects Fire ignition resulting from conductor clashing has been a recurring issue worldwide, with numerous instances occurring in various countries. Such incidents can lead to significant environmental damage, such as forest fires, as well as substantial financial losses and, in some cases, pose potential threats to human lives. An example of a conductor clashing catastrophe occurred in Western Australia on December 2, 2004. A 19.1 kV (kilovolt) conductor became dislodged from a pole-mounted insulator at the first pole and subsequently clashed with the underslung running earth conductor approximately 200 meters away. This collision led to a flashover (ignition of combustible material in an enclosed area), releasing hot metal particles (sparks) that ignited dry harvested stubble, which initiated the wildfire. Amid the fire, both conductors snapped, with the first conductor ultimately succumbing to structural wear and the influence of northerly winds. When both conductors fell and made contact with the dividing fence, the wildfire was ignited. It's worth noting that the property owner had previously reported a low-hanging power line conductor adjacent to the first pole. According to the property owner's estimate, roughly 468 hectares of land had been burned. References Electric power distribution Electrical phenomena
Conductor clashing
[ "Physics" ]
964
[ "Physical phenomena", "Electrical phenomena" ]
44,084,967
https://en.wikipedia.org/wiki/Tilting%20pan%20filter
A tilting pan filter is a piece of chemical equipment used in continuous solid-liquid filtration. It consists of a number of trapezoidal pans arranged in a circle. At the center of the equipment is the main valve, which is connected to every pan through pipes. The pans rotate continuously around the main valve, which supplies the air or the vacuum necessary for the operation. In each pan, filtration is carried out in a cyclic process that involves these stages: feed is poured into the pan; the material to be filtered forms a "cake"; the cake is washed; the cake is dried by aspiration of the liquid; the cake is washed again; the cake is dried again; the pan is tilted in order to discharge the solid; the pan is sprayed with water to clean it; the pan is tilted back to its initial angle and the process continues with the feeding stage. See also Filtration Filter cake References Filters
Tilting pan filter
[ "Chemistry", "Engineering" ]
193
[ "Chemical equipment", "Filtration", "Filters" ]
44,086,751
https://en.wikipedia.org/wiki/Orban%20%28audio%20processing%29
Orban is an international company making audio processors for radio, television and Internet broadcasters. It has been operating since founder Bob Orban sold his first product in 1967. The company was originally based in San Francisco, California. History The Orban company started in 1967 when Bob Orban built and sold his first product, a stereo synthesizer, to WOR-FM in New York City, a year before Orban earned his master's degree from Stanford University. He teamed with synthesizer pioneers Bernie Krause and Paul Beaver to promote his products. In 1970, Orban established manufacturing and design in San Francisco. Bob Orban partnered with John Delantoni to form Orban Associates in 1975. The company was bought by Harman International in 1989, and the firm moved to nearby San Leandro in 1991. In 2000, Orban was bought by Circuit Research Labs (CRL), which moved manufacturing to Tempe, Arizona, in 2005, keeping the design team in the San Francisco Bay Area. Orban expanded into Germany in 2006 by purchasing Dialog4 System Engineering in Ludwigsburg. Orban USA, based in Arizona, acquired the company in 2009. The Orban company was acquired by Daysequerra in 2016, moving manufacturing to New Jersey. In 2020, Orban Labs consolidated divisions and streamlined operations, with Orban Europe GmbH assuming responsibility for all Orban product sales worldwide. Over its years of trading, the Orban company has released many well-known audio-processing products, including the Orban Optimod 8000, which was the first audio processor to include FM processing and a stereo generator in one package, an innovative idea at the time, as no other processor took into account the 75 μs pre-emphasis curve employed by FM, which leads to low average modulation and many peaks. This was followed by the Orban Optimod 8100, which went on to become the company's most successful product, and the Orban Optimod 8200, the first successful digital signal processor. It was entirely digital and featured a two-band AGC, followed by five-band or two-band processing, with phase cancellation of clipping distortion. Processors were also made for AM and digital radio, including the Orban Optimod 9200 and the Orban Optimod 6200, the first processor made exclusively for digital television, digital radio and Internet radio. During the 2000s, Orban followed up the 8200 by creating the Orban Optimod 8400 in 2000, the Orban Optimod 8500 in 2005, and the Orban Optimod 8600 in 2010. Present day The company's current product line includes its flagship audio processor, the Optimod-FM 5950. Other processors include the Orban Optimod-FM 5750, the Trio, the Optimod PCn1600 for digital, internet and mastering applications, and the XPN-AM/Optimod 9300 for AM radio. References External links Electronics companies established in 1967 1967 establishments in California 1989 mergers and acquisitions 2000 mergers and acquisitions 2009 mergers and acquisitions 2016 mergers and acquisitions Manufacturing companies based in the San Francisco Bay Area Audio electronics Harman International Signal processing
Orban (audio processing)
[ "Technology", "Engineering" ]
648
[ "Audio electronics", "Telecommunications engineering", "Computer engineering", "Signal processing", "Audio engineering" ]
44,086,763
https://en.wikipedia.org/wiki/Merton%20Sandler
Merton Sandler (28 March 1926 – 24 August 2014) was a British professor of chemical pathology and a pioneer in biological psychiatry. Education and career Sandler grew up in an observant Jewish family in Salford. He studied at the Manchester Grammar School having won a scholarship, before studying medicine at the University of Manchester. Following his qualification in 1949, Sandler served two years of National Service in the Royal Army Medical Corps at Shoreham-by-Sea, attaining the rank of Captain. With his prior pathology training, he managed a small hospital laboratory during this period. In 1951 Sandler was appointed consultant chemical pathologist at Queen Charlotte’s Hospital. In 1959, he suggested a link between depression and monoamine deficiency in the brain, which led to the development of antidepressants. Sandler was Professor of Chemical Pathology at the University of London from 1973 to 1991, and Fellow Emeritus of the American College of Neuropsychopharmacology Private life Sandler married Lorna Grenby in 1961 and they had four children. He was an active Freemason initiated in 1954 in the In Arduis Fidelis Lodge (London), and two years later in the Holy Royal Arch. He belonged to several lodges and chapters, and held office in the United Grand Lodge of England. Awards Anna Monika Prize for research on biological aspects of depression (1973) Gold Medal British Migraine Association (1974) British Association for Psychopharmacology Lifetime Achievement Award (1999) CINP Pioneer Award for lifetime contribution to monoamine studies in human health and disease (2006) References External links 1926 births 2014 deaths People educated at Manchester Grammar School Alumni of the University of Manchester Chemical pathologists Academics of the University of London
Merton Sandler
[ "Chemistry" ]
347
[ "Chemical pathology", "Chemical pathologists" ]
44,088,995
https://en.wikipedia.org/wiki/Acoustoelastography
Acoustoelastography is an ultrasound technique that relates ultrasonic wave amplitude changes to a tendon's mechanical properties. See also the page on the acoustoelastic effect. References Medical technology
Acoustoelastography
[ "Biology" ]
44
[ "Biotechnology stubs", "Medical technology stubs", "Medical technology" ]
44,089,226
https://en.wikipedia.org/wiki/Heterostasis%20%28cybernetics%29
Heterostasis is a medical term. It is a neologism intended to connote an alternative but related meaning to its lexical sibling homeostasis ('same state'), the term coined by Walter Cannon. Any device, organ, system or organism capable of heterostasis (multistable behavior) can be represented by an abstract state machine composed of a characteristic set of related, interconnected states, linked dynamically by change processes allowing transition between states. Although the term 'heterostasis' is an obvious rearrangement (syntactically substituting the prefix 'hetero-' for its counterpart 'homeo-', and likewise swapping the semantic reference from 'same'/'single' to 'different'/'many'), the endocrinologist Hans Selye is generally credited with its invention. An overview of the two concepts is contained in the Cambridge Handbook of Psychophysiology, Chapter 19. Selye's ideas were used by Gunther et al., who used dimensionless numbers (allometric invariance analysis) to investigate the existence of heterostasis in canine cardiovascular systems. Alternative terminology The equivalent term allostasis is used in biological contexts, where state change is analog (continuous), but heterostasis is sometimes preferred for systems which possess a finite number of distinct (discrete) internal states, such as those containing computational processes. The term servomechanism is usually used in industrial/mechanical situations (non-biological and non-computational), where it often applies to analog state change, e.g. in a direct-current servomotor. References Homeostasis Servomechanisms Technology neologisms
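The state-machine picture described above can be sketched in code; the states, events, and transition table below are purely illustrative and are not taken from the sources cited in the article.

# Illustrative sketch of a multistable ("heterostatic") system as a finite state machine.
# All state and event names here are hypothetical examples.
class HeterostaticSystem:
    TRANSITIONS = {
        ("rest", "stressor"): "alert",
        ("alert", "stressor_removed"): "rest",
        ("alert", "overload"): "protective_shutdown",
        ("protective_shutdown", "recovery"): "rest",
    }

    def __init__(self, state="rest"):
        self.state = state  # the system starts in one of its discrete stable states

    def handle(self, event):
        # Remain in the current state if no transition is defined for this event.
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state

system = HeterostaticSystem()
for event in ["stressor", "overload", "recovery"]:
    print(event, "->", system.handle(event))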
Heterostasis (cybernetics)
[ "Biology" ]
358
[ "Homeostasis" ]
42,648,132
https://en.wikipedia.org/wiki/4-Fluoro-L-threonine
4-Fluoro-L-threonine is an antibacterial produced by Streptomyces cattleya. It is formed by the fluorothreonine transaldolase-catalysed transfer of fluoroacetaldehyde onto threonine. References Alpha-Amino acids Fluorinated amino acids Fluorohydrins Fluorine-containing natural products
4-Fluoro-L-threonine
[ "Chemistry" ]
100
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
42,649,528
https://en.wikipedia.org/wiki/Commercial%20uses%20of%20armor
Armor has been used by militaries throughout history but is becoming more frequently seen in the public sector as time passes. There are many different forms and ways in which armor is being commercially used throughout the world today. The most popular and well-known uses are body and vehicle armor. There are other commercial uses, including aircraft armor and armored glass. Commercial and cargo planes Following the bombing of Pan Am Flight 103 in 1988, the Transportation Security Laboratories have been developing ways to reduce the damage to an airplane by placing a hardened film around the cargo bay and overhead compartments. They have also changed the shape of the cargo bay to provide more security and to reduce the force of any explosion. To improve the circumstances in a case where an aircraft turbine engine fails, the Federal Aviation Administration (FAA) is working to design specific armor to protect the vital parts of the plane to assure safe flying until landing is accomplished. This armored barrier would prevent fragments from engine failure from damaging other sections of the airplane. High-strength polymer fibers have been found to be the most effective material for this specific use. Armored glass Armor Glass International, Inc. was founded by Michael Fjetland, BBA/JD, under the trademark Armor Glass® to provide security from breach of glass by natural disasters, explosions, burglars, hurricanes, tornadoes, hail, golf balls or other harmful events. One of the main products made by Armor Glass International is security film. This type of film is 8 mil thick, is rated for a Large Missile Impact (Level C 4.5 lb.) and is placed on the inside of a window or other source of glass, the weakest link of every building, to create a more durable and defensive layer. Studies have shown that breach of a window by wind-borne debris hurled by hurricane-force winds is what leads to roof uplift and structural collapse. This protective film is used on many buildings in Washington, D.C., such as the Pentagon, the Smithsonian and Congress, but is also used commercially throughout the world for any person or company striving for extra protection against specific unpredictable encounters. Vehicle armor Automobile armor should be customized to fit the client’s needs based on type of car, threat level, and defensive options. When determining the type of car, the customer has many options, as any automobile can be armored. However, the weight of the armor can vary from 500 to 2000 pounds, requiring special suspension and engine upgrades to be installed. After the car type is chosen, the customer is asked about their perceived threat level. This helps the manufacturer determine what ballistic protection level the car needs to be customized for (see also International Armoring Corporation). Ballistic protection levels range from Type IIa to Type IV and are governed by National Institute of Justice Standard 0108.01. Once the preliminary review is completed and the specifications are finalized, the manufacturer begins the project. According to patent US 4352316 A, there are several steps when it comes to armoring a civilian automobile. First, the automobile is stripped of its interior. Second, door frames are rebuilt to include armor plating and bulletproof windows are added. Third, the vertical portions, top, and bottom of the automobile are reinforced with armor plating. Finally, the car’s battery and engine are encased in armor plating.
The objective of the plating is to prevent bullets from penetrating the automobile and entering the passenger cabin. During the installation process various materials are used, including bulletproof glass, ballistic nylon, run-flat inserts, and Lexan. Beyond the basic armoring package, several defensive options are available to help improve the security of the automobile, including a dual battery system, a DVR security camera system, electric door handles, flashing front strobe lights, night vision systems, a self-sealing fuel tank, and a siren/loudspeaker system. Economic impact Economics of body armor The US body armor industry is worth $802 million a year, with a decrease of just over nine percent in the last five years, according to market research done by IBISWorld. This is the result of the conclusion of the war in Afghanistan and the withdrawal of troops. There are currently around 80 companies in the US specializing in body armor from head to toe. The top four companies are said to control almost half of the market. The market is expected to come back from this 9% low due to the needs of law enforcement and other private security firms. The body armor industry alone earns $39.3 million in profit. The military takes the majority with 72%, law enforcement takes 14.2%, and commercial use accounts for the remaining 13.8%. The report states that the use of robots has reduced the need for body armor in highly dangerous situations. Economics of vehicle armor In a report done by Ibisworld.com, commercial uses of vehicle armor account for only 8% of the 7.2 billion dollar industry. The market has been in steady decline, falling 12% over the past five years, and is expected to drop another 2% over the next five years. 68% of the market is taken by the military and government. The industry profits just under 1 billion dollars a year. Exports of commercial armored vehicles are on the rise; the largest share of exports, about 29%, goes to the United Arab Emirates. Most of the remainder is exported to the Middle East. See also Aramid Bulletproof glass Dragon Skin International Armoring Corporation Personal Armor Spider silk Armour References External links IBISWorld US Website International Armoring Company Armour Materials
Commercial uses of armor
[ "Physics" ]
1,115
[ "Materials", "Matter" ]
42,650,781
https://en.wikipedia.org/wiki/Our%20Mathematical%20Universe
Our Mathematical Universe: My Quest for the Ultimate Nature of Reality is a 2014 nonfiction book by the Swedish-American cosmologist Max Tegmark. Written in popular science format, the book interweaves what a New York Times reviewer called "an informative survey of exciting recent developments in astrophysics and quantum theory" with Tegmark's mathematical universe hypothesis, which posits that reality is a mathematical structure. This mathematical nature of the universe, Tegmark argues, has important consequences for the way researchers should approach many questions of physics. Summary Tegmark, whose background and scientific research have been in the fields of theoretical astrophysics and cosmology, mixes autobiography and humor into his analysis of the universe. The book begins with an account of a bicycle accident in Stockholm in which Tegmark was killed—in some theoretical parallel universes, though not in our own. The rest of the book is divided into three parts. Part one, "Zooming Out," deals with locating ourselves in the cosmos and/or multiverse. Part two, "Zooming In," looks for added perspective from quantum mechanics and particle physics. Part three, "Stepping Back," interweaves a scientific viewpoint with Tegmark's speculative ideas about the mathematical nature of reality. By the end of the book, Tegmark has hypothesized four different levels of multiverse. According to Andrew Liddle, reviewing the book for Nature:The culmination that Tegmark seeks to lead us to is the "Level IV multiverse". This level contends that the Universe is not just well described by mathematics, but, in fact, is mathematics. All possible mathematical structures have a physical existence, and collectively, give a multiverse that subsumes all others. Here, Tegmark is taking us well beyond accepted viewpoints, advocating his personal vision for explaining the Universe. Reception Reviews of the book have generally praised Tegmark's writing and exposition of established physics, while often criticizing the content and speculativeness of his new "mathematical universe" hypothesis. In a very positive review, Clive Cookson in The Financial Times wrote that "physics could do with more characters like Tegmark" and that his book "should engage any reader interested in the infinite variety of nature." Giles Whitsell in The Times described the book as "mind-bending." Peter Forbes in The Independent praised the last chapter of the book, on the risks of extinction humanity faces, as "wise and bracing". Brian Rotman, writing for The Guardian, was unconvinced by Tegmark's conclusions but also wrote that the book is "at the cutting edge of cosmology and quantum theory in friendly and relaxed prose, full of entertaining anecdotes and down-to-earth analogies." Similarly, cosmologist Andrew Liddle, in Nature, summarized: This is a valuable book, written in a deceptively simple style but not afraid to make significant demands on its readers, especially once the multiverse level gets turned up to four. It is impressive how far Tegmark can carry you until, like a cartoon character running off a cliff, you wonder whether there is anything holding you up. Criticism Mathematical physicist Edward Frenkel, writing for The New York Times, alleged that the meaning of Tegmark's hypothesis "is a big question, which is never fully answered" and said that parts of the book "[pretend] to stay in the realm of science" while actually espousing "science fiction and mysticism." 
In a positive review, cosmologist Andreas Albrecht, writing for SIAM Review, criticized Tegmark's proposed test of the "mathematical universe" hypothesis (the hypothetical identification of physical phenomena which cannot be described mathematically) as meaningless. In a review written for The Wall Street Journal, physicist Peter Woit said that the problem with Tegmark's proposal is "not that it's wrong but that it's empty" and "radically untestable." In Physics Today, Francis Sullivan particularly praised Tegmark's explanation of the theory of inflation but criticized his purportedly physical application of Emile Borel's theorem on normal numbers, and regarded his overall argument as circular. In New Scientist, Mark Buchanan contrasted what he saw as the "uninhibited speculation" in parts of Tegmark's book with his earlier "hard, empirical" work which established him as a physicist. In The New York Times, science writer Amir Alexander concluded that the book is "brilliantly argued and beautifully written" and "never less than thought-provoking," although Tegmark's hypothesis is "simply too far removed from the frontiers of today's mainstream science" to judge its legitimacy. References 2014 non-fiction books Popular physics books Cosmology books Alfred A. Knopf books
Our Mathematical Universe
[ "Mathematics" ]
1,000
[ "Mathematical Platonism" ]
57,458,671
https://en.wikipedia.org/wiki/Methyl%20fluoroacetate
Methyl fluoroacetate (MFA) is an organic compound with the chemical formula . It is the extremely toxic methyl ester of fluoroacetic acid. It is a colorless, odorless liquid at room temperature. It is used as a laboratory chemical and as a rodenticide. Because of its extreme toxicity, MFA was studied for potential use as a chemical weapon. The general population is not likely to be exposed to methyl fluoroacetate. People who use MFA for work, however, can breathe in or have direct skin contact with the substance. History MFA was first synthesized in 1896 by the Belgian chemist Frédéric Swarts by reacting methyl iodoacetate with silver fluoride. It can also be synthesized by reacting methyl chloroacetate with potassium fluoride Because of its toxicity, MFA was studied for potential use as a chemical weapon during World War II. It was considered a good water poison since it is colorless and odorless and therefore it can toxify the water supply and kill a big part of the population. By the end of the war, several countries began to make methyl fluoroacetate to debilitate or kill the enemy. Synthesis The synthesis of methyl fluoroacetate consists of a two-step process: Potassium fluoride (KF) and the catalyst are added into the solvent within the reactor; this is then stirred and heated up. The catalyst mentioned in this step is a phase-transfer catalyst and can be the chemicals dodecyl(trimethyl)ammonium chloride , tetrabutylammonium chloride , tetrabutylammonium bromide , or tetramethylammonium chloride . The mass ratio of the potassium fluoride and the catalyst in this step is 0.5~1 : 0.02~0.03. With the solvent mentioned in this step being a mixture of dimethylformamide () and acetamide () with a mass ratio of 1.4~1.6: 1. The mass ratio of the solvent and potassium fluoride is 1.1~2.0 : 0.5~1. When the reaction temperature of 100~160 °C is reached, methyl chloroacetate is continuously added in the reactor at a speed of 5~10 kg/min with the mass ratio of methyl chloroacetate and potassium fluoride being 1:0.5~1. The reaction between these chemicals produces a gas mixture, with the gases within this mixture then being split between two condensers according to their condensation temperature. Methyl chloroacetate is condensed within the condenser set at 100~105 °C, it is then returned to the reactor to continue participating in the chemical reaction. Methyl fluoroacetate in the other condenser then enters a two-stage nitration condensation at a temperature of 20~25 °C which then ensures that the methyl fluoroacetate is condensed into a liquid with it being the product of this reaction. Structure and reactivity Methyl fluoroacetate is a methyl ester of fluoroacetic acid. MFA is a liquid, which is odorless or can have a faint, fruity smell. The boiling point of MFA is 104.5 °C and the melting point is −35.0 °C. It is soluble in water (117 g/L at 25 °C) and slightly soluble in petroleum ether. MFA is resistant to the displacement of fluorine by nucleophiles, so there is higher stability of the bond compared to the other halogens (, , ). The other haloacetates are more powerful alkylating agents that react with group of proteins. This, however, does not happen for MFA and gives it a unique toxic action. Moreover, MFA is a derivative of fluoroacetate (FA) compound which is as toxic and has similar biotransformation to MFA. Mechanism of action and metabolism Generally, fluoroacetates are toxic because they are converted to fluorocitrate by fluoroacetyl coenzyme A. 
Fluorocitrate can inhibit aconitate hydratase, which is needed for the conversion of citrate, by competitive inhibition. This interrupts the citric acid cycle (TCA cycle) and also causes citrate to accumulate in the tissues and eventually in the plasma. MFA is mainly biotransformed by glutathione transferase enzyme in a phase 2 biotransformation process. The GSH-dependent enzyme couples glutathione to MFA and thereby defluorinating MFA. As a result, a fluoride anion and S-carboxymethylglutathione are produced. The decoupling of fluoride is mediated by a fluoroacetate-specific defluorinase. The defluorinating activity is mainly present in the liver, but also kidneys, lungs, the heart, and the testicles show activity. In the brain, there are no signs of defluorination. Eventually, fluorocitrate (FC) is formed which is the main toxic compound. It binds the aconitase enzyme with a very high affinity and therefore intervenes in the TCA cycle. Citrate in normal circumstances is converted to succinate, but the process is inhibited. The cycle stops and oxidative phosphorylation is prevented since NADH, FADH2 and succinate are required from the TCA cycle. Respiration stops shortly. The poison acts very quickly and has no antidote. Mammals are intolerant to MFA. However, a few Australian species (e.g. brush-tailed possum) show a level of tolerance to fluoroacetate by metabolizing it using glutathione-S-transferase. Fluoride can be removed from fluoroacetate or fluorocitrate. It is involved in detoxifying the aryl and alkyl groups by converting them into glutathione conjugates. The bond is cleaved because of a nucleophilic attack of carbon resulting in the formation of S-carboxymethyl glutathione. This can be afterward excreted in the form of S-carboxymethylcysteine. The elimination half-life of biotransformed MFA is about 2 days. When administered, the MFA mainly resides in blood plasma, but can also be traced in the liver, kidney, and muscle tissue. Toxicity MFA is a convulsant poison. It causes severe convulsions in poisoned victims. Death results from respiratory failure. For a variety of animals, the toxicity of methyl fluoroacetate has been determined orally and through subcutaneous injection. The dosage ranges from 0.1 mg/kg in dogs to 10–12 mg/kg in monkeys indicating considerable variation. An order of decreasing susceptibility has been determined within these animals which is: dog, guinea-pig, cat, rabbit, goat, and then likely horse, rat, mouse, and monkey. For the rat and mouse, the toxicity by inhalation has been investigated more fully than for other animals. The LD50 for the rat and mouse are 450 mg/m3 and above 1,000 mg/m3 for 5 minutes, respectively. In dogs, guinea-pigs, cats, rabbits, goats, horses, rats, mice, and monkeys, the pharmacological effects of this substance have been investigated by mouth and by injection. Methyl fluoroacetate causes progressive depression of respiration and is a convulsant poison in most animals. When applied to the skin it is not toxic, yet through inhalation, injection and by mouth it is. For the rat, cat and the rhesus monkey, the effects of methyl fluoroacetate have been determined similar to those of nicotine, strychnine, leptazol, picrotoxin, and electrically induced convulsions. The convulsive pattern is considered to be similar to that of leptazol. Little besides signs of asphyxia is found post-mortem in these animals. 
Estimations have been made for blood sugar, hemoglobin, plasma proteins, non-protein nitrogen, and serum potassium, calcium, chloride, and inorganic phosphate in a small number of rabbits, dogs, and goats. Blood changes include a rise of 20 to 60% in hemoglobin, a rise of up to 90% in blood sugar, a rise of 70 to 130% in inorganic phosphate, and a less significant rise in serum potassium with a terminal rise in non-protein nitrogen and potassium. The whole central nervous system is affected by methyl fluoroacetate just like with leptazol, with the higher centers being more sensitive than the lower ones. Small doses of methyl fluoroacetate have little effect on blood pressure yet in large doses it has an action similar to nicotine. It further stimulates the rate and volume of respiration and then causes failure of the respiration, probably central in origin as seen through graphic records. The knee jerk reaction appears to be accentuated through methyl fluoroacetate until convulsions occur due to the irradiation of the stimuli being so facilitated. Nervous conduction is increased and the threshold stimulus lessened in the reflex arc of a spinal cat. Methyl fluoroacetate reduces the electric convulsive threshold about 10 times in rats. The difficulties of treatments are stressed as methyl fluoroacetate is both a powerful convulsant and a respiratory depressant, yet suggestions for treatment in man are made. Methyl fluoroacetate presents a serious hazard as a food and water contaminant in the case that it is used as a poison against rodents and other vermin, as it is not easily detected or destroyed and is equally toxic by mouth and by injection. Environmental exposure Methyl fluoroacetate is produced and used as a chemical reagent and it can be released to the environment through several waste streams. When it was used as a rodenticide, it was released directly to the environment where it would be broken down in the air. If released to air, an estimated vapor pressure of 31 mmHg at 25 °C indicates methyl fluoroacetate will exist solely as a vapor in the atmosphere. Vapor-phase methyl fluoroacetate will be degraded in the atmosphere by reaction with photochemically produced hydroxyl radicals. The half-life for this reaction in air is estimated to be 98 days. MFA does not contain chromophores that absorb at wavelengths > 290 nm and therefore it's not expected to be susceptible to direct photolysis by sunlight. Effects on animals The effects on animals occur very rapidly and strongly, all resulting in death. Exposure to a high concentration of MFA vapor does not show any symptoms in animals until 30–60 minutes. Then violent reactions and death took place in a few hours, according to studies. From intravenous injection mice, rats, and guinea pigs show symptoms after 15 min to 2 hours. The animals become quiet and limp. Rabbits show a similar latent time period and muscle weakness. Dogs show symptoms of hyperactivity. They are more sensitive because of higher rates of metabolism and, eventually, they also fail to respirate. Fish are more resistant because of slow metabolism and therefore it is not expected that the substance will build up in fish. Also, Australian herbivores (e.g. possum and seed-eating birds) that live in a habitat consisting of plants with traces of fluoroacetate, have some tolerance. This can happen by detoxifying fluoroacetate or more resistivity of aconitase to fluorocitrate in the presence of GSH. Some insects can store the toxin in vacuoles and use it later. 
The highly hazardous MFA cannot be used for poisoning animals without risking human life. Antidotal therapy There is no known antidote against MFA, but there are some suggestions regarding the treatment of MFA poisoning. It is advised to give an intravenous injection of fast-acting anesthetics directly after poisoning. The anesthetic should be pentothal sodium or evipan sodium, followed by an intramuscular injection of long-acting cortical depressants like sodium phenobarbitone or rectal avertin. Afterward, careful supervision of the oxygen supply is necessary, together with a BLB mask and the use of artificial respiration. Possibly, the use of intravenous hypertonic glucose is required, as in status epilepticus. Finally, careful use of tubocurarine chloride should be applied to control any convulsions. If any vomiting occurs, lean the patient forward to maintain an open airway. Alternatively, there is a therapy aimed at preventing fluorocitrate synthesis, blocking aconitase within the mitochondria, and providing a citrate outflow from the mitochondria to keep the TCA cycle going. For now, ethanol has proven to be the most effective agent against FC formation. When ethanol is oxidized, it increases blood acetate levels, which inhibits FC production. In humans, an oral dose of 40-60 mL of 96% ethanol is advised, followed by 1.0-1.5 g/kg of 5-10% ethanol intravenously during the first hour and 0.1 g/kg during the following 6–8 hours. This therapy is meant for fluoroacetate (FA) poisoning, which is closely related to MFA, so when aimed at MFA it may result in different outcomes. Treatment with monoacetin (glycerol monoacetate) has helped against FA poisoning. It aids in increasing acetate levels in the blood and decreases citrate levels in the heart, brain, and kidneys. However, this has only been tested experimentally. In monkeys, monoacetin even reverses the effects of FA: all biological effects normalized. As with ethanol, monoacetin is effective against FA poisoning. There is, up until now, no proven treatment against MFA. However, the aforementioned treatments can provide starting points for therapy aimed at MFA, since FA and MFA are closely related compounds. See also Chemical weapon Highly hazardous chemical EPA list of extremely hazardous substances References Acetate esters Chemical weapons Convulsants Fluoroacetates Methyl esters Poisons
Methyl fluoroacetate
[ "Chemistry", "Biology", "Environmental_science" ]
2,960
[ "Chemical accident", "Toxicology", "Chemical weapons", "Poisons", "Biochemistry" ]
57,461,942
https://en.wikipedia.org/wiki/Metal-induced%20embrittlement
Metal-induced embrittlement (MIE) is the embrittlement caused by diffusion of metal, either solid or liquid, into the base material. Metal-induced embrittlement occurs when metals are in contact with low-melting-point metals while under tensile stress. The embrittler can be either solid (SMIE) or liquid (liquid metal embrittlement). Under sufficient tensile stress, MIE failure occurs instantaneously at temperatures just above the melting point. For temperatures below the melting temperature of the embrittler, solid-state diffusion is the main transport mechanism. This occurs in the following ways: Diffusion through grain boundaries of the matrix near the crack Diffusion of first monolayer heterogeneous surface embrittler atoms Second monolayer heterogeneous surface diffusion of embrittler Surface diffusion of the embrittler over a layer of embrittler The main mechanism of transport for SMIE is surface self-diffusion of the embrittler over a layer of embrittler that’s thick enough to be characterized as self-diffusion at the crack tip. In comparison, the dominant LMIE mechanism is bulk liquid flow that penetrates at the tips of cracks. Examples Studies have shown that Zn, Pb, Cd, Sn and In can embrittle steel at temperatures below each embrittler’s melting point. Cadmium can embrittle titanium at temperatures below its melting point. Hg can embrittle zinc at temperatures below its melting point. Hg can embrittle copper at temperatures below its melting point. Mechanics and temperature dependence Similar to liquid metal embrittlement (LME), solid metal-induced embrittlement results in a decrease in the fracture strength of a material. In addition, a decrease in tensile ductility over a temperature range is indicative of metal-induced embrittlement. Although SMIE is greatest just below the embrittler’s melting temperature, the range over which SMIE occurs ranges from to T, where T is the melting temperature of the embrittler. The reduction in ductility is caused by the formation and propagation of stable, subcritical intergranular cracks. SMIE produces both intergranular and transgranular fracture surfaces in otherwise ductile materials. Kinetics of crack onset and propagation via SMIE Crack extension, as opposed to crack onset, is the rate-determining step for solid metal-induced embrittlement. The main mechanism leading to solid metal-induced embrittlement is multilayer surface self-diffusion of the embrittler at the crack tip. The propagation rate of a crack undergoing metal-induced embrittlement is a function of the supply of embrittler present at the crack tip. Crack velocities in SMIE are much slower than LMIE velocities. Catastrophic failure of a material via SMIE occurs as a result of the propagation of cracks to a critical point. To this end, the propagation of the crack is controlled by the transport rate and mechanisms of the embrittler at the tip of nucleated cracks. SMIE can be mitigated by increasing the tortuosity of crack paths such that resistance to intergranular cracking increases. Susceptibility SMIE is less common than LMIE and much less common than other failure mechanisms such as hydrogen embrittlement, fatigue, and stress-corrosion cracking. Still, embrittlement mechanisms can be introduced during fabrication, coating, testing or service of the material components. Susceptibility to SMIE increases with the following material characteristics: Increase in strength of high-strength material Increasing grain size Materials with more planar slip than wavy slip References Metals Corrosion
Metal-induced embrittlement
[ "Chemistry", "Materials_science" ]
769
[ "Metals", "Metallurgy", "Corrosion", "Electrochemistry", "Materials degradation" ]
57,462,594
https://en.wikipedia.org/wiki/Line%20splice
In telecommunications, a line splice is a method of connecting electrical cables (electrical splice) or optical fibers (optical splice). Splices are often housed in sleeves to protect against external influences. Splicing of copper wires The splicing of copper wires happens in the following steps: The cores are laid one above the other at the junction. The core insulation is removed. The wires are wrapped two to three times around each other (twisting). The bare cores are tightly twisted ("strangled") together over a length of about 3 cm. In some cases, this twisted joint is soldered. To insulate the splice, an insulating sleeve made of paper or plastic is pushed over it. The splicing of copper wires is mainly used on paper-insulated wires. LSA techniques (LSA: solder-, screw- and stripping-free) are used to connect copper wires, making the wires faster and easier to connect. LSA techniques include: Wire connection sleeves (AVH = Adernverbindungshülsen) and other crimp connectors. The two wires to be connected are inserted into the AVH without being stripped, and the sleeve is then compressed with special pliers. The roughly 2 cm long AVH consists of contact, pressure and insulation sections. Into wire connection strips (AVL = Adernverbindungsleisten), several pairs of wires are inserted (10 for the AVL10 or 20 for the AVL20); the strip is then closed with a lid and pressed together with a hydraulic press, which establishes the connection. Splicing of glass fibers Fiber-optic cables are spliced using a special arc splicer, with installation cables connected at their ends to respective "pigtails" - short individual fibers with fiber-optic connectors at one end. The splicer precisely adjusts the light-guiding cores of the two ends of the glass fibers to be spliced. The adjustment is done fully automatically in modern devices, whereas in older models this is carried out manually by means of micrometer screws and a microscope. An experienced splicer can precisely position the fiber ends within a few seconds. Subsequently, the fibers are fused together (welded) with an electric arc. Since no additional material is added, as there would be in gas welding or soldering, this is called a "fusion splice". Depending on the quality of the splicing process, attenuation values of about 0.3 dB are achieved at the splice points, with good splices also reaching below 0.02 dB. For newer-generation devices, alignment is done automatically by motors. A distinction is made between core centering and jacket (cladding) centering. In core centering (usually for single-mode fibers), the fiber cores are aligned; a possible core offset with respect to the jacket is corrected. In jacket centering (usually for multimode fibers), the fibers are aligned to each other by means of electronic image processing before the splice is made. With good equipment, the attenuation is, in practice, at most 0.1 dB. Measurements are made by means of special measuring devices including optical time-domain reflectometry (OTDR). A good splice should have an attenuation of less than 0.3 dB over the entire distance. See also Fusion splicing Mechanical splice Western Union splice References Industrial processes Electrical wiring Fiber optics Telecommunications
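Because splice and connector losses in decibels simply add along a link, a quick loss-budget estimate can be scripted; all figures below are illustrative assumptions chosen to be consistent with the values quoted above, not measured data.

# Rough fiber-optic link loss budget; losses in dB add along the path.
fusion_splices = 6            # assumed number of fusion splices
loss_per_splice_db = 0.05     # assumed per-splice loss (good splices are well below 0.1 dB)
connector_pairs = 2
loss_per_connector_db = 0.3   # assumed per-connector loss
fiber_km = 12.0
fiber_loss_db_per_km = 0.35   # assumed attenuation of the fiber itself

total_db = (fusion_splices * loss_per_splice_db
            + connector_pairs * loss_per_connector_db
            + fiber_km * fiber_loss_db_per_km)
print(f"Estimated end-to-end attenuation: {total_db:.2f} dB")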
Line splice
[ "Physics", "Technology", "Engineering" ]
740
[ "Information and communications technology", "Electrical systems", "Building engineering", "Telecommunications", "Physical systems", "Electrical engineering", "Electrical wiring" ]
61,664,297
https://en.wikipedia.org/wiki/Theorem%20of%20absolute%20purity
In algebraic geometry, the theorem of absolute (cohomological) purity is an important theorem in the theory of étale cohomology. It states: given a regular scheme X over some base scheme, a closed immersion of a regular scheme of pure codimension r, an integer n that is invertible on the base scheme, a locally constant étale sheaf with finite stalks and values in , for each integer , the map is bijective, where the map is induced by cup product with . The theorem was introduced in SGA 5 Exposé I, § 3.1.4. as an open problem. Later, Thomason proved it for large n and Gabber in general. See also purity (algebraic geometry) References Fujiwara, K.: A proof of the absolute purity conjecture (after Gabber). Algebraic geometry 2000, Azumino (Hotaka), pp. 153–183, Adv. Stud. Pure Math. 36, Math. Soc. Japan, Tokyo, 2002 R. W. Thomason, Absolute cohomological purity, Bull. Soc. Math. France 112 (1984), no. 3, 397–406. MR 794741 Algebraic geometry
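For orientation, the purity isomorphism is commonly stated in the following form (a standard formulation from the literature; the precise twist and indexing conventions should be checked against the references below):
\[ H^{m}\!\left(Z_{\mathrm{\acute{e}t}}, \mathcal{F}\right) \;\xrightarrow{\;\sim\;}\; H^{m+2r}_{Z}\!\left(X_{\mathrm{\acute{e}t}}, \mathcal{F}(r)\right), \]
the map being given by cup product with the cycle class \( \mathrm{cl}(Z) \in H^{2r}_{Z}\!\left(X_{\mathrm{\acute{e}t}}, \mu_n^{\otimes r}\right) \).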
Theorem of absolute purity
[ "Mathematics" ]
248
[ "Fields of abstract algebra", "Algebraic geometry" ]
64,029,436
https://en.wikipedia.org/wiki/Crossing%20Numbers%20of%20Graphs
Crossing Numbers of Graphs is a book in mathematics on the minimum number of edge crossings needed in graph drawings. It was written by Marcus Schaefer, a professor of computer science at DePaul University, and published in 2018 by the CRC Press in their book series Discrete Mathematics and its Applications. Topics The main text of the book has two parts, on the crossing number as traditionally defined and on variations of the crossing number, followed by two appendices providing background material on topological graph theory and computational complexity theory. After introducing the problem, the first chapter studies the crossing numbers of complete graphs (including Hill's conjectured formula for these numbers) and complete bipartite graphs (Turán's brick factory problem and the Zarankiewicz crossing number conjecture, again giving a conjectured formula). It also includes the crossing number inequality, and the Hanani–Tutte theorem on the parity of crossings. The second chapter concerns other special classes of graphs including graph products (especially products of cycle graphs) and hypercube graphs. After a third chapter relating the crossing number to graph parameters including skewness, bisection width, thickness, and (via the Albertson conjecture) the chromatic number, the final chapter of part I concerns the computational complexity of finding minimum-crossing graph drawings, including the results that the problem is both NP-complete and fixed-parameter tractable. In the second part of the book, two chapters concern the rectilinear crossing number, describing graph drawings in which the edges must be represented as straight line segments rather than arbitrary curves, and Fáry's theorem that every planar graph can be drawn without crossings in this way. Another chapter concerns 1-planar graphs and the associated local crossing number, the smallest number k such that the graph can be drawn with at most k crossings per edge. Two chapters concern book embeddings and string graphs, and two more chapters concern variations of the crossing number that count crossings in different ways, for instance by the number of pairs of edges that cross or that cross an odd number of times. The final chapter of part II concerns thrackles and the problem of finding drawings with a maximum number of crossings. Audience and reception The book can be used as an advanced textbook, and has exercises provided for that use. However, it assumes that its readers are already familiar with both graph theory and the design and analysis of algorithms. Reviewing the book, L. W. Beineke calls it a "valuable contribution" for its presentation of the many results in this area. References Graph drawing Topological graph theory Geometric graph theory Mathematics books 2018 non-fiction books CRC Press books
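The two conjectured formulas referred to above are commonly stated as follows (given here in their standard published form, independent of the book's own notation). Hill's conjecture for complete graphs:
\[ \operatorname{cr}(K_n) \;=\; \frac{1}{4}\left\lfloor\frac{n}{2}\right\rfloor\left\lfloor\frac{n-1}{2}\right\rfloor\left\lfloor\frac{n-2}{2}\right\rfloor\left\lfloor\frac{n-3}{2}\right\rfloor, \]
and the Zarankiewicz conjecture for complete bipartite graphs:
\[ \operatorname{cr}(K_{m,n}) \;=\; \left\lfloor\frac{m}{2}\right\rfloor\left\lfloor\frac{m-1}{2}\right\rfloor\left\lfloor\frac{n}{2}\right\rfloor\left\lfloor\frac{n-1}{2}\right\rfloor. \]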
Crossing Numbers of Graphs
[ "Mathematics" ]
541
[ "Graph theory", "Topology", "Mathematical relations", "Geometric graph theory", "Topological graph theory" ]
64,037,033
https://en.wikipedia.org/wiki/Fluoroestradiol%20F-18
Fluoroestradiol F-18, also known as [18F]16α-fluoroestradiol and sold under the brand name Cerianna, is a radioactive diagnostic agent indicated for use with positron emission tomography (PET) imaging. It is an analog of estrogen and is used to detect estrogen receptor-positive breast cancer lesions. Chemistry Chemically, fluoroestradiol F-18 is [18F]16α-fluoro-3,17β-diol-estratriene-1,3,5(10). History Fluoroestradiol F-18 was approved for medical use in the United States in May 2020. References External links Secondary alcohols Estradiol Estranes Estrogens Medicinal radiochemistry PET radiotracers Hydroxyarenes Radiopharmaceuticals Organofluorides Breast cancer
Fluoroestradiol F-18
[ "Chemistry" ]
186
[ "Medicinal radiochemistry", "PET radiotracers", "Radiopharmaceuticals", "Medicinal chemistry", "Chemicals in medicine" ]
64,041,384
https://en.wikipedia.org/wiki/Europium%28II%29%20fluoride
Europium(II) fluoride is an inorganic compound with a chemical formula EuF2. It was first synthesized in 1937. Production Europium(II) fluoride can be produced by reducing europium(III) fluoride with metallic europium or hydrogen gas. Properties Europium(II) fluoride is a bright yellowish solid with a fluorite structure. EuF2 can be used to dope a trivalent rare-earth fluoride, such as LaF3, to create a vacancy-filled structure with increased conductivity over a pure crystal. Such a crystal can be used as a fluoride-specific semipermeable membrane in a fluoride selective electrode to detect trace quantities of fluoride. References Europium(II) compounds Fluorides Lanthanide halides Substances discovered in the 1930s Fluorite crystal structure
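Balanced equations consistent with the two reduction routes described above are (written here for illustration):

2 EuF3 + Eu → 3 EuF2
2 EuF3 + H2 → 2 EuF2 + 2 HF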
Europium(II) fluoride
[ "Chemistry" ]
186
[ "Fluorides", "Salts" ]
54,150,419
https://en.wikipedia.org/wiki/Multivariate%20Laplace%20distribution
In the mathematical theory of probability, multivariate Laplace distributions are extensions of the Laplace distribution and the asymmetric Laplace distribution to multiple variables. The marginal distributions of symmetric multivariate Laplace distribution variables are Laplace distributions. The marginal distributions of asymmetric multivariate Laplace distribution variables are asymmetric Laplace distributions. Symmetric multivariate Laplace distribution A typical characterization of the symmetric multivariate Laplace distribution has the characteristic function: where is the vector of means for each variable and is the covariance matrix. Unlike the multivariate normal distribution, even if the covariance matrix has zero covariance and correlation the variables are not independent. The symmetric multivariate Laplace distribution is elliptical. Probability density function If , the probability density function (pdf) for a k-dimensional multivariate Laplace distribution becomes: where: and is the modified Bessel function of the second kind. In the correlated bivariate case, i.e., k = 2, with the pdf reduces to: where: and are the standard deviations of and , respectively, and is the correlation coefficient of and . For the uncorrelated bivariate Laplace case, that is k = 2, and , the pdf becomes: Asymmetric multivariate Laplace distribution A typical characterization of the asymmetric multivariate Laplace distribution has the characteristic function: As with the symmetric multivariate Laplace distribution, the asymmetric multivariate Laplace distribution has mean , but the covariance becomes . The asymmetric multivariate Laplace distribution is not elliptical unless , in which case the distribution reduces to the symmetric multivariate Laplace distribution with . The probability density function (pdf) for a k-dimensional asymmetric multivariate Laplace distribution is: where: and is the modified Bessel function of the second kind. The asymmetric Laplace distribution, including the special case of , is an example of a geometric stable distribution. It represents the limiting distribution for a sum of independent, identically distributed random variables with finite variance and covariance where the number of elements to be summed is itself an independent random variable distributed according to a geometric distribution. Such geometric sums can arise in practical applications within biology, economics and insurance. The distribution may also be applicable in broader situations to model multivariate data with heavier tails than a normal distribution but finite moments. The relationship between the exponential distribution and the Laplace distribution allows for a simple method for simulating bivariate asymmetric Laplace variables (including for the case of ). Simulate a bivariate normal random variable vector from a distribution with and covariance matrix . Independently simulate an exponential random variable from an Exp(1) distribution. will be distributed (asymmetric) bivariate Laplace with mean and covariance matrix . References Probability distributions Multivariate continuous distributions Geometric stable distributions
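A minimal sketch of the simulation recipe described above, assuming the standard normal variance-mean mixture representation X = mu*W + sqrt(W)*Y with Y ~ N(0, Sigma) and W ~ Exp(1) independent; the parameter values in the example are arbitrary illustrations.

# Simulate (asymmetric) bivariate Laplace variables via a normal variance-mean mixture.
import numpy as np

def sample_bivariate_asymmetric_laplace(mu, sigma, size, seed=None):
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, dtype=float)
    y = rng.multivariate_normal(mean=np.zeros(2), cov=sigma, size=size)  # Gaussian part
    w = rng.exponential(scale=1.0, size=size)                            # Exp(1) mixing variable
    return mu * w[:, None] + np.sqrt(w)[:, None] * y

mu = [0.5, -0.2]                        # setting mu = 0 gives the symmetric case
sigma = [[1.0, 0.4], [0.4, 2.0]]
x = sample_bivariate_asymmetric_laplace(mu, sigma, size=100_000, seed=0)
print("sample mean (should be close to mu):", x.mean(axis=0))
# In the asymmetric case the covariance approaches Sigma + mu mu^T.
print("sample covariance:", np.cov(x, rowvar=False))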
Multivariate Laplace distribution
[ "Mathematics" ]
600
[ "Functions and mappings", "Mathematical relations", "Mathematical objects", "Probability distributions" ]
54,150,730
https://en.wikipedia.org/wiki/Subspace%20identification%20method
In mathematics, specifically in control theory, subspace identification (SID) aims at identifying linear time-invariant (LTI) state space models from input-output data. SID does not require the user to parametrize the system matrices before solving a parametric optimization problem and, as a consequence, SID methods do not suffer from problems related to local minima that often lead to unsatisfactory identification results. History SID methods are rooted in the work of the German mathematician Leopold Kronecker (1823–1891). Kronecker showed that a power series can be written as a rational function when the rank of the Hankel operator that has the power series as its symbol is finite. The rank determines the order of the polynomials of the rational function. In the 1960s the work of Kronecker inspired a number of researchers in the area of systems and control, like Ho and Kalman, Silverman, and Youla and Tissi, to store the Markov parameters of an LTI system into a finite-dimensional Hankel matrix and derive from this matrix an (A,B,C) realization of the LTI system. The key observation was that when the Hankel matrix is properly dimensioned versus the order of the LTI system, the rank of the Hankel matrix is the order of the LTI system, and the SVD of the Hankel matrix provides a basis of the column space of the observability matrix and of the row space of the controllability matrix of the LTI system. Knowledge of these key spaces allows the system matrices to be estimated via linear least squares. An extension to the stochastic realization problem, in which only the auto-correlation (covariance) function of the output of an LTI system driven by white noise is known, was derived by researchers like Akaike. In the decade 1985–1995, a second generation of SID methods attempted to operate directly on input-output measurements of the LTI system. One such generalization, presented under the name of the Eigensystem Realization Algorithm (ERA), made use of specific input-output measurements, namely impulse inputs. It has been used for modal analysis of flexible structures, like bridges, space structures, etc. Although these methods were demonstrated to work in practice for resonant structures, they did not work well for other types of systems or for inputs other than an impulse. A new impetus to the development of SID methods came from methods that operate directly on generic input-output data, avoiding the need to first explicitly compute the Markov parameters or estimate samples of covariance functions before realizing the system matrices. Pioneers who contributed to these breakthroughs were Van Overschee and De Moor, introducing the N4SID approach; Verhaegen, introducing the MOESP approach; and Larimore, presenting SID in the framework of Canonical Variate Analysis (CVA). References Control theory
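The Hankel matrix and SVD ideas outlined above can be illustrated with a minimal Ho-Kalman-style realization from impulse-response (Markov) parameters; this is a toy, noise-free sketch and is not an implementation of the specific N4SID, MOESP, or CVA algorithms mentioned.

# Toy Ho-Kalman realization: recover (A, B, C) of a discrete-time SISO LTI system
# from its Markov parameters h[k] = C A^k B, assuming noise-free data and a known order.
import numpy as np

def ho_kalman(markov, order, rows=5, cols=5):
    # Hankel matrices built from the Markov parameters.
    H = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H_shift = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H)
    Obs = U[:, :order] * np.sqrt(s[:order])             # extended observability matrix
    Ctrb = np.sqrt(s[:order])[:, None] * Vt[:order]     # extended controllability matrix
    A = np.linalg.pinv(Obs) @ H_shift @ np.linalg.pinv(Ctrb)
    B = Ctrb[:, :1]
    C = Obs[:1, :]
    return A, B, C

# A second-order test system, used only to generate the Markov parameters.
A_true = np.array([[0.9, 0.2], [0.0, 0.5]])
B_true = np.array([[1.0], [0.5]])
C_true = np.array([[1.0, -1.0]])
markov = [(C_true @ np.linalg.matrix_power(A_true, k) @ B_true).item() for k in range(12)]

A, B, C = ho_kalman(markov, order=2)
est = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(12)]
print(np.allclose(markov, est))  # the realization reproduces the Markov parameters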
Subspace identification method
[ "Mathematics" ]
582
[ "Applied mathematics", "Control theory", "Dynamical systems" ]
50,438,425
https://en.wikipedia.org/wiki/GF%20Biochemicals
GF Biochemicals is a biochemical company founded in 2008. It was co-founded by and named after Pasquale Granata and Mathieu Flamini. Along with Biofine, it is a mass producer of levulinic acid. The company worked with the University of Pisa for seven years on its production. In 2016 GF Biochemicals acquired the American company Segetis. The company has a plant in Caserta that employs around 80 people. In 2015, the company won the John Sime Award for Most Innovative New Technology. The company has offices in Milan and the Netherlands. References External links Italian companies established in 2008 Green chemistry Chemical companies of Italy
GF Biochemicals
[ "Chemistry", "Engineering", "Environmental_science" ]
138
[ "Green chemistry", "Chemical engineering", "Environmental chemistry", "nan" ]
50,445,178
https://en.wikipedia.org/wiki/Nitronickelate
The nitronickelates are a class of chemical compounds containing a nickel atom complexed by nitro groups, –NO2. Nickel can be in the +2 or +3 oxidation state. There can be five (pentanitronickelates) or six (hexanitronickelates) nitro groups per nickel atom. They can be considered double nitrites of nickel nitrite. References Nickel complexes Nitrites Anions
Nitronickelate
[ "Physics", "Chemistry" ]
97
[ "Ions", "Matter", "Anions" ]
50,445,433
https://en.wikipedia.org/wiki/Kite%20square
A kite square is a device used to measure the "out-of-squareness" of a machining center or coordinate measuring machine. "Squareness" or "out-of-square" is one of the critical measurements in machine tool metrology. For rectangular measurements, it refers to the angular deviation of the working axes from one carriage to another. The out-of-squareness of a Monarch VMC, for example, was previously found to be 4 arc seconds. The kite square technique, together with a displacement-sensing instrument, can measure the alignment of any points on a line of interest. Its main components are two perpendicular bars and three calibrated artifacts (balls). The general principle of measuring with a kite square is that, once one of the diagonals is aligned to a working axis, any displacement deviation on the artifacts in the other diagonal becomes apparent, because the kite square arms are perpendicular to each other. References Dimensional instruments
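For a sense of scale, a measured linear deviation can be converted into an angular out-of-squareness with a small-angle calculation. The sketch below is only this unit conversion under assumed numbers (a hypothetical deviation and measurement span); it is not the full kite-square measurement procedure.

```python
import math

def out_of_squareness_arcsec(deviation_mm, length_mm):
    """Small-angle conversion: a linear deviation observed over a given length
    corresponds to an angular error of roughly deviation / length radians."""
    return math.degrees(deviation_mm / length_mm) * 3600.0   # radians -> arc seconds

# A 0.01 mm deviation measured over a 500 mm span is about 4 arc seconds
print(round(out_of_squareness_arcsec(0.01, 500.0), 1))
```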
Kite square
[ "Physics", "Mathematics" ]
178
[ "Quantity", "Dimensional instruments", "Physical quantities", "Size" ]
50,446,731
https://en.wikipedia.org/wiki/Phase%20contrast%20magnetic%20resonance%20imaging
Phase contrast magnetic resonance imaging (PC-MRI) is a specific type of magnetic resonance imaging used primarily to determine flow velocities. PC-MRI can be considered a method of Magnetic Resonance Velocimetry. It also provides a method of magnetic resonance angiography. Since modern PC-MRI is typically time-resolved, it provides a means of 4D imaging (three spatial dimensions plus time). How it Works Atoms with an odd number of protons or neutrons have a randomly aligned spin angular momentum. When placed in a strong magnetic field, some of these spins align with the axis of the external field, which causes a net 'longitudinal' magnetization. These spins precess about the axis of the external field at a frequency proportional to the strength of that field. Then, energy is added to the system through a radio frequency (RF) pulse to 'excite' the spins, changing the axis that the spins precess about. These spins can then be observed by receiver coils (radiofrequency coils) using Faraday's law of induction. Different tissues respond to the added energy in different ways, and imaging parameters can be adjusted to highlight desired tissues. All of these spins have a phase that is dependent on the atom's velocity. Phase shift of a spin is a function of the gradient field : where is the gyromagnetic ratio and is defined as: , is the initial position of the spin, is the spin velocity, and is the spin acceleration. If we only consider static spins and spins in the x-direction, we can rewrite the equation for phase shift as: We then assume that acceleration and higher order terms are negligible to simplify the expression for phase to: where is the zeroth moment of the x-gradient and is the first moment of the x-gradient. If we take two different acquisitions with applied magnetic gradients that are the opposite of each other (bipolar gradients), we can subtract the results of the two acquisitions to calculate a change in phase that is dependent on the gradient: where . The phase shift is measured and converted to a velocity according to the following equation: where is the maximum velocity that can be recorded and is the recorded phase shift. The choice of defines the range of velocities visible, known as the 'dynamic range'. A choice of below the maximum velocity in the slice will induce aliasing in the image, where a velocity just greater than will be incorrectly calculated as moving in the opposite direction. However, there is a direct trade-off between the maximum velocity that can be encoded and the signal-to-noise ratio of the velocity measurements. This can be described by: where is the signal-to-noise ratio of the image (which depends on the magnetic field of the scanner, the voxel volume, and the acquisition time of the scan). For example, setting a 'low' (below the maximum velocity expected in the scan) will allow for better visualization of slower velocities (better SNR), but any higher velocities will alias to an incorrect value. Setting a 'high' (above the maximum velocity expected in the scan) will allow for proper velocity quantification, but the larger dynamic range will obscure the smaller velocity features as well as decrease SNR. Therefore, the setting of will be application-dependent and care must be exercised in the selection. In order to further allow for proper velocity quantification, especially in clinical applications where the velocity dynamic range of flow is high (e.g.
blood flow velocities in vessels across the thoracoabdominal cavity), a dual-echo PC-MRI (DEPC) method with dual velocity encoding in the same repetition time has been developed. The DEPC method not only allows for proper velocity quantification, but also reduces the total acquisition time (especially when applied to 4D flow imaging) compared to a single-echo single- PC-MRI acquisition carried out at two separate values. To allow for more flexibility in selecting , instantaneous phase (phase unwrapping) can be used to increase both dynamic range and SNR. Encoding Methods When each dimension of velocity is calculated based on acquisitions from oppositely applied gradients, this is known as a six-point method. However, more efficient methods are also used. Two are described here: Simple Four-point Method Four sets of encoding gradients are used. The first is a reference and applies a negative moment in ,, and . The next applies a positive moment in , and a negative moment in and . The third applies a positive moment in , and a negative moment in and . The last applies a positive moment in , and a negative moment in and . Then, the velocities can be solved based on the phase information from the corresponding phase encodes as follows: Balanced Four-Point Method The balanced four-point method also includes four sets of encoding gradients. The first is the same as in the simple four-point method with negative gradients applied in all directions. The second has a negative moment in , and a positive moment in and . The third has a negative moment in , and a positive moment in and . The last has a negative moment in and a positive moment in and . This gives us the following system of equations: Then, the velocities can be calculated: Retrospective Cardiac and Respiratory Gating For medical imaging, in order to get highly resolved scans in 3D space and time without motion artifacts from the heart or lungs, retrospective cardiac gating and respiratory compensation are employed. Beginning with cardiac gating, the patient's ECG signal is recorded throughout the imaging process. Similarly, the patient's respiratory patterns can be tracked throughout the scan. After the scan, the continuously collected data in k-space (temporary image space) can be assigned accordingly to match up with the timing of the heartbeat and lung motion of the patient. This means that these scans are cardiac-averaged, so the measured blood velocities are an average over multiple cardiac cycles. Applications Phase contrast MRI is one of the main techniques for magnetic resonance angiography (MRA). This is used to generate images of arteries (and less commonly veins) in order to evaluate them for stenosis (abnormal narrowing), occlusions, aneurysms (vessel wall dilatations, at risk of rupture) or other abnormalities. MRA is often used to evaluate the arteries of the neck and brain, the thoracic and abdominal aorta, the renal arteries, and the legs (the latter exam is often referred to as a "run-off"). Limitations In particular, a few limitations of PC-MRI are of importance for the measured velocities: Partial volume effects (when a voxel contains the boundary between static and moving materials) can overestimate phase, leading to inaccurate velocities at the interface between materials or tissues. Intravoxel phase dispersion (when velocities within a pixel are heterogeneous or in areas of turbulent flow) can produce a resultant phase that does not resolve the flow features accurately.
Assuming that acceleration and higher orders of motion are negligible can be inaccurate depending on the flow field. Displacement artifacts (also known as misregistration and oblique flow artifacts) occur when there is a time difference between the phase and frequency encoding. These artifacts are highest when the flow direction is within the slice plane (most prominent in the heart and aorta for biological flows) Vastly undersampled Isotropic Projection Reconstruction (VIPR) A Vastly undersampled Isotropic Projection Reconstruction (VIPR) is a radially acquired MRI sequence which results in high-resolution MRA with significantly reduced scan times, and without the need for breath-holding. References Magnetic resonance imaging
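A minimal numerical sketch of the phase-to-velocity relation and the aliasing behaviour described in the sections above: velocity is recovered as v = venc · Δφ / π, and any true velocity beyond the chosen venc wraps to an incorrect value. The venc and velocity values below are made up for illustration and are not from the article.

```python
import numpy as np

def phase_to_velocity(delta_phi, venc):
    """Convert a measured phase shift (radians, in (-pi, pi]) to velocity."""
    return venc * delta_phi / np.pi

def wrapped_phase(v_true, venc):
    """Forward model: a true velocity produces a phase pi*v/venc, wrapped into (-pi, pi]."""
    return np.angle(np.exp(1j * np.pi * v_true / venc))

venc = 100.0                               # cm/s, illustrative choice
v_true = np.array([40.0, 90.0, 120.0])     # the last value exceeds venc
print(phase_to_velocity(wrapped_phase(v_true, venc), venc))
# -> approximately [ 40.  90. -80.]: 120 cm/s aliases to -80 cm/s
```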
Phase contrast magnetic resonance imaging
[ "Chemistry" ]
1,598
[ "Nuclear magnetic resonance", "Magnetic resonance imaging" ]
41,226,144
https://en.wikipedia.org/wiki/Toyota%20Electronic%20Modulated%20Suspension
TEMS (Toyota Electronic Modulated Suspension) is an electronically controlled shock absorber system (Continuous Damping Control) that adjusts damping based on multiple factors, and was built and exclusively used by Toyota for selected products during the 1980s and 1990s (first introduced on the Toyota Soarer in 1983). The semi-active suspension system was widely used on luxury and top sport trim packages on most of Toyota's products sold internationally. Its popularity fell after the “bubble economy”, as it was seen as an unnecessary expense to purchase and maintain, though it remained in use on luxury and high-performance sports cars. Summary TEMS consisted of four shock absorbers mounted at all four wheels, and could be used in either an automatic or driver-selected mode depending on the installation of the system used. The technology was installed on top-level Toyota products with four-wheel independent suspension, labeled PEGASUS (Precision Engineered Geometrically Advanced SUSpension). Because of the nature of the technology, TEMS was installed on vehicles with front and rear independent suspensions. The technology was also modified and installed on the rear independent suspensions of minibuses and minivans like the Toyota TownAce/MasterAce, and on the top trim package of the Toyota HiAce. Based on road conditions, the system would increase or decrease ride damping force for particular situations. The TEMS system was easily installed to improve ride comfort and road-handling stability on small suspensions, adding a level of ride modification found on larger, more expensive luxury vehicles. The technology was originally developed and calibrated for Japanese driving conditions due to Japanese speed limits, but was adapted for international driving conditions with later revisions. As the Japanese recession of the early 1990s began to take effect, the system was seen as an unnecessary expense, as buyers were less inclined to purchase products and services seen as “luxury” and more focused on basic needs. TEMS was still installed on vehicles that were considered luxurious, like the Toyota Crown, Toyota Century, Toyota Windom, and the Toyota Supra and Toyota Soarer sports cars. More recently the technology has been installed on luxury minivans like the Toyota Alphard, Toyota Noah and the Toyota Voxy. The TEMS system has since been named “Piezo TEMS” (with piezoelectric ceramics), “Skyhook TEMS”, “Infinity TEMS” and more recently “AVS” (Adaptive Variable Suspension). Configuration settings The system was deployed with an earlier two-stage switch labeled “Auto-Sport”, with a later modification of “Auto-Soft-Mid-Hard”. Some variations used a dial to select the level of hardness according to the driver's preference. For most driving situations, the “Auto” selection was recommended. When the system was activated, an indicator light reflected the suspension setting selected. The system components consisted of a control switch, indicator light, four shock absorbers, shock absorber control actuator, shock absorber control computer, vehicle speed sensor and stop lamp switch, with a throttle position sensor and a steering angle sensor on three-stage TEMS systems only. All the absorbers are controlled with the same level of hardness. Operation parameters of TEMS The following describes how the system would activate on the earlier two-stage TEMS installed during the 1980s. During normal running The system chooses the "SOFT" setting to provide a softer ride.
At high speeds The system selects the "HARD" setting, assuming a more rigid configuration at high speeds for better ride stability and to reduce roll tendencies. Braking (reducing speed to ) In order to prevent “nose dive”, the system automatically switches the damping force to "HARD" while braking is sensed. It returns to the "SOFT" state when the brake light is off and the pedal has been released for 2 seconds or more. (Only 3-stage systems) during hard acceleration To suppress suspension “squat”, the system switches to "HARD" based on accelerator pedal position and throttle position. (Only 3-stage systems) during hard cornering To suppress suspension “roll”, the system switches to "HARD" based on the steering angle sensor position. SPORT mode The system remains in the "HARD" position regardless of driving conditions. (For 3-stage systems, the system automatically chooses between the “MID” and the "HARD" configurations; in other words, the "SOFT" stage is excluded.) Vehicles installed The following is a list of vehicles in Japan that were equipped with the technology. There may have been vehicles exported internationally that were also equipped. Starlet (EP71-based Turbo S, EP82-based GT) Tercel / Corsa / Corolla II (EL31-based GP turbo) Cynos Sera Corolla Levin / Sprinter Trueno (AE92 • AE101GT GT-APEX) Corolla FX (AE92-GT) Corona (ST171-based GT-R) Celica (ST183 models) Carina ED and Corona EXiV (ST180 models) Century Crown Majesta Camry / Vista (SV20-based GT and Prominent G, SV30-based GT) Pronard Aristo (S140) Town Ace / Master Ace Lite Ace Mark II / Chaser / Cresta (GX71-based Twin Cam Grande, GX81-based Twin Cam Grande system, JZX91 Grande G, JZX100 Grande G, JZX101 Grande G, JZX110 Grande G) Windom (MCV10 system G, MCV20 system G, MCV30 system G) Hiace Hilux Surf (KZN130) Hilux Surf (KZN185) Crown Soarer (GZ20 system 2.0GT Twin turbo L, JZZ30 system 2.5GT twin turbo L) Soarer (1UZ-FE V8 UZZ31) Supra (Select Models) Celsior: Piezo TEMS Noah / Voxy Alphard Land Cruiser (100 series) Ipsum (acm20 system) Super Strut (MacPherson modified strut) Super Strut suspension is a high-performance suspension for automobiles developed by Toyota. On equipped vehicles the abbreviation listed was "SSsus"; it was first installed on the AE101 Corolla Levin / Sprinter Trueno for 1991. Overview This is a MacPherson strut type suspension that has been improved to compete with double wishbone type suspensions. It suppresses the change in camber angle that occurs when the suspension is in motion, and as a result it greatly increases handling stability and the grip limit while turning. For front-wheel-drive sports coupes, there arose a need for an inexpensive upgrade that could be installed on vehicles that originally had MacPherson struts on the front wheels. In contrast to the traditional L-shaped lower control arm used with MacPherson struts, Super Strut had a lower control arm divided into two parts, one of which is equipped with a camber control arm, which is connected to a specially shaped strut. As a result, a virtual kingpin axis was set inside the tire, making it possible to significantly reduce the kingpin angle from 14 degrees to 6 degrees and the spindle offset from 66 mm to 18 mm. As a result, the torque steer that is noticeable in high-output front-engine, front-wheel-drive vehicles equipped with an LSD is reduced. Active use of ball joints also ensures rigidity and reduces friction.
The camber control arm regulates the movement of the lower arm, so when the suspension reacts to an uneven road surface, the upper part of the upright pulls inward, causing the camber angle to change negatively. Note that the inclination of the strut body may be opposite to that of the MacPherson strut type. While there are various advantages, there are also disadvantages. The unsprung weight is heavier than that of a conventional MacPherson strut, and depending on the car model, the minimum turning radius would be increased. There are also conditions where the steering feels uncomfortable as the steering angle increases. Furthermore, because the effective range of motion of the short camber control arm is narrow, the amount of suspension travel is also affected. The behavior is governed by the unique characteristic of the camber change: when suspension travel is minimal the camber change is also minimal, but when the camber control arm reaches a certain angle the camber change suddenly increases. Due to the narrow vehicle height range, it was not favorable for off-road driving conditions. Although the above disadvantages were not a problem in ordinary cars, where road surface conditions do not change much and vehicle speeds are low, the suspension offered little of the setup range and flexibility considered best in high-speed racing conditions, where performance at the limit is necessary. Therefore, in racing categories where suspension changes are allowed, the Super Strut was in some cases replaced with a conventional strut, whose simpler structure, accumulated know-how and easier handling made it preferable. Vehicles installed Corolla Levin / Sprinter Trueno (AE92 • AE101GT• BZ-R) Corolla FX (AE101) Toyota Celica (T200) SS-II, SS-III (ST202) Celica GT-Four (ST205) Toyota Celica (T230) SS-II (ZZT231) Curren (ST206) Carina E (ST190 series) (export) Carina ED (ST200 series) Corona EXiV (ST200 series) See also Active Stabilizer Suspension System Kinetic Dynamic Suspension System Toyota Active Control Suspension Active Body Control References Notes Sources Development of New Toyota Electronic Modulated Suspension - Two Concepts for Semi-Active Suspension Control Toyota Automotive suspension technologies Shock absorbers Automotive technology tradenames Automotive safety technologies Auto parts Mechanical power control
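The driving-condition logic summarized under "Operation parameters of TEMS" above amounts to a small rule-based controller. The sketch below is a hypothetical illustration of a three-stage controller of that kind; the thresholds, signal names and the handling of the 2-second brake-release delay are assumptions, not Toyota's actual calibration.

```python
def tems_damping_mode(driver_mode, speed_kmh, braking, throttle_pct, steering_deg):
    """Return 'SOFT', 'MID' or 'HARD' for a hypothetical 3-stage TEMS-like controller."""
    firm_needed = (braking                      # suppress nose dive under braking
                   or throttle_pct > 70         # suppress squat under hard acceleration
                   or abs(steering_deg) > 45    # suppress roll in hard cornering
                   or speed_kmh > 100)          # rigid setting at high speed
    if driver_mode == "SPORT":
        return "HARD" if firm_needed else "MID"   # SPORT never selects SOFT
    return "HARD" if firm_needed else "SOFT"      # AUTO: soft ride in normal running

print(tems_damping_mode("AUTO", 60, braking=True,  throttle_pct=10, steering_deg=0))  # HARD
print(tems_damping_mode("AUTO", 60, braking=False, throttle_pct=10, steering_deg=0))  # SOFT
```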
Toyota Electronic Modulated Suspension
[ "Physics" ]
2,037
[ "Mechanics", "Mechanical power control" ]
41,228,482
https://en.wikipedia.org/wiki/Proline%20organocatalysis
Proline organocatalysis is the use of proline as an organocatalyst in organic chemistry. This theme is often considered the starting point for the area of organocatalysis, even though early discoveries went unappreciated. Reactions using modified catalysts derived from this chemistry, such as MacMillan's catalyst and Jørgensen's catalysts, proceed with excellent stereocontrol. Proline catalysis was initially reported by groups at Schering AG and Hoffmann-La Roche. Proline's chiral structure enables enantioselective synthesis, favoring a particular enantiomer or diastereomer. Reactions The Hajos–Parrish–Eder–Sauer–Wiechert reaction, reported in 1971 by several research teams, is an early example of an enantioselective catalytic reaction in organic chemistry. Its scope has been modified and expanded through the development of related reactions including the Michael addition, the asymmetric aldol reaction, and the Mannich reaction. This reaction has likewise been used to perform asymmetric Robinson annulations. The general scheme of this reaction follows: This example illustrates a proline-catalyzed asymmetric 6-enolendo aldolization. The zwitterionic character and the H-bonding of proline in the transition state determine the reaction outcome. An enamine is formed during the reaction and only one proline molecule is involved in forming the transition state. Asymmetric synthesis of the Wieland–Miescher ketone is also based on proline. Additional reactions include aldol reactions, the Mannich reaction, the Michael reaction, amination, α-oxyamination, and α-halogenation. Modifications of the basic proline structure improved the enantioselectivity and regioselectivity of the catalysis. These proline-derived auxiliaries and catalysts, including the Enders hydrazone reaction and Corey–Itsuno reduction, have been reviewed, as have MacMillan's iminium catalysts, Miller catalysts, and CBS-oxazaborolidines. Illustrating an enolexo intramolecular aldolization, dicarbonyl compounds (dialdehydes, diketones) can be converted to anti-aldol products with a 10% L-proline catalyst loading. A prominent example of proline catalysis is the addition of acetone or hydroxyacetone to a diverse set of aldehydes catalyzed by 20-30% proline catalyst loading with high (>99%) enantioselectivity, yielding diol products. As refined by List and Notz, the aforementioned reaction produces diol products as follows: Mechanistic considerations Proline-catalyzed aldol additions proceed via a six-membered enamine transition state according to the Zimmerman–Traxler model. Addition of 20-30 mol% proline to acetone or hydroxyacetone catalyzes their addition to a diverse set of aldehydes with high (>99%) enantioselectivity, yielding diol products. Proline and proline derivatives have been implemented as organocatalysts to promote asymmetric condensation reactions. An example of such a reaction proceeding through a six-membered transition state is modelled as follows. Intramolecular aldolization reactions that are catalyzed by proline likewise go through six-membered transition states. These transition states can enable the formation of either the enolexo or the enolendo product. References Organic chemistry Catalysis
Proline organocatalysis
[ "Chemistry" ]
724
[ "Catalysis", "Chemical kinetics", "nan" ]
41,228,673
https://en.wikipedia.org/wiki/Internal%20combustion%20engine
An internal combustion engine (ICE or IC engine) is a heat engine in which the combustion of a fuel occurs with an oxidizer (usually air) in a combustion chamber that is an integral part of the working fluid flow circuit. In an internal combustion engine, the expansion of the high-temperature and high-pressure gases produced by combustion applies direct force to some component of the engine. The force is typically applied to pistons (piston engine), turbine blades (gas turbine), a rotor (Wankel engine), or a nozzle (jet engine). This force moves the component over a distance. This process transforms chemical energy into kinetic energy which is used to propel, move or power whatever the engine is attached to. The first commercially successful internal combustion engines were invented in the mid-19th century. The first modern internal combustion engine, the Otto engine, was designed in 1876 by the German engineer Nicolaus Otto. The term internal combustion engine usually refers to an engine in which combustion is intermittent, such as the more familiar two-stroke and four-stroke piston engines, along with variants, such as the six-stroke piston engine and the Wankel rotary engine. A second class of internal combustion engines use continuous combustion: gas turbines, jet engines and most rocket engines, each of which are internal combustion engines on the same principle as previously described. In contrast, in external combustion engines, such as steam or Stirling engines, energy is delivered to a working fluid not consisting of, mixed with, or contaminated by combustion products. Working fluids for external combustion engines include air, hot water, pressurized water or even boiler-heated liquid sodium. While there are many stationary applications, most ICEs are used in mobile applications and are the primary power supply for vehicles such as cars, aircraft and boats. ICEs are typically powered by hydrocarbon-based fuels like natural gas, gasoline, diesel fuel, or ethanol. Renewable fuels like biodiesel are used in compression ignition (CI) engines and bioethanol or ETBE (ethyl tert-butyl ether) produced from bioethanol in spark ignition (SI) engines. As early as 1900 the inventor of the diesel engine, Rudolf Diesel, was using peanut oil to run his engines. Renewable fuels are commonly blended with fossil fuels. Hydrogen, which is rarely used, can be obtained from either fossil fuels or renewable energy. History Various scientists and engineers contributed to the development of internal combustion engines. In 1791, John Barber developed the gas turbine. In 1794 Thomas Mead patented a gas engine. Also in 1794, Robert Street patented an internal combustion engine, which was also the first to use liquid fuel, and built an engine around that time. In 1798, John Stevens built the first American internal combustion engine. In 1807, French engineers Nicéphore Niépce (who went on to invent photography) and Claude Niépce ran a prototype internal combustion engine, using controlled dust explosions, the Pyréolophore, which was granted a patent by Napoleon Bonaparte. This engine powered a boat on the Saône river in France. In the same year, Swiss engineer François Isaac de Rivaz invented a hydrogen-based internal combustion engine and powered the engine by electric spark. In 1808, De Rivaz fitted his invention to a primitive working vehicle – "the world's first internal combustion powered automobile". In 1823, Samuel Brown patented the first internal combustion engine to be applied industrially. 
In 1854, in the UK, the Italian inventors Eugenio Barsanti and Felice Matteucci obtained the certification: "Obtaining Motive Power by the Explosion of Gases". In 1857 the Great Seal Patent Office conceded them patent No.1655 for the invention of an "Improved Apparatus for Obtaining Motive Power from Gases". Barsanti and Matteucci obtained other patents for the same invention in France, Belgium and Piedmont between 1857 and 1859. In 1860, Belgian engineer Jean Joseph Etienne Lenoir produced a gas-fired internal combustion engine. In 1864, Nicolaus Otto patented the first atmospheric gas engine. In 1872, American George Brayton invented the first commercial liquid-fueled internal combustion engine. In 1876, Nicolaus Otto, working with Gottlieb Daimler and Wilhelm Maybach, patented the compressed-charge, four-cycle engine. In 1879, Karl Benz patented a reliable two-stroke gasoline engine. Later, in 1886, Benz began the first commercial production of motor vehicles with an internal combustion engine, in which a three-wheeled, four-cycle engine and chassis formed a single unit. In 1892, Rudolf Diesel developed the first compressed-charge, compression-ignition engine. In 1926, Robert Goddard launched the first liquid-fueled rocket. In 1939, the Heinkel He 178 became the world's first jet aircraft. Etymology At one time, the word engine (via Old French, from Latin ingenium, "ability") meant any piece of machinery—a sense that persists in expressions such as siege engine. A "motor" (from Latin motor, "mover") is any machine that produces mechanical power. Traditionally, electric motors are not referred to as "engines"; however, combustion engines are often referred to as "motors". (An electric engine refers to a locomotive operated by electricity.) In boating, an internal combustion engine that is installed in the hull is referred to as an engine, but the engines that sit on the transom are referred to as motors. Applications Reciprocating piston engines are by far the most common power source for land and water vehicles, including automobiles, motorcycles, ships and, to a lesser extent, locomotives (some are electrical but most use diesel engines). Rotary engines of the Wankel design are used in some automobiles, aircraft and motorcycles. These are collectively known as internal-combustion-engine vehicles (ICEV). Where high power-to-weight ratios are required, internal combustion engines appear in the form of combustion turbines, or sometimes Wankel engines. Powered aircraft typically use an ICE which may be a reciprocating engine. Airplanes can instead use jet engines, and helicopters can instead employ turboshafts, both of which are types of turbines. In addition to providing propulsion, aircraft may employ a separate ICE as an auxiliary power unit. Wankel engines are fitted to many unmanned aerial vehicles. ICEs drive large electric generators that power electrical grids. They are found in the form of combustion turbines with a typical electrical output in the range of some 100 MW. Combined cycle power plants use the high-temperature exhaust to boil water and superheat steam to run a steam turbine. Thus, the efficiency is higher because more energy is extracted from the fuel than could be extracted by the combustion engine alone. Combined cycle power plants achieve efficiencies in the range of 50–60%. On a smaller scale, stationary engines like gas engines or diesel generators are used for backup or for providing electrical power to areas not connected to an electric grid.
Small engines (usually 2-stroke single cylinder gasoline/petrol engines) are a common power source for lawnmowers, string trimmers, chainsaws, leaf blowers, pressure washers, radio-controlled cars, snowmobiles, jet skis, outboard motors, mopeds, and motorcycles. Classification There are several possible ways to classify internal combustion engines. Reciprocating By number of strokes: Two-stroke engine Clerk cycle Day cycle Four-stroke engine (Otto cycle) Six-stroke engine By type of ignition: Compression-ignition engine Spark-ignition engine (commonly found as gasoline engines) By mechanical/thermodynamic cycle (these cycles are infrequently used but are commonly found in hybrid vehicles, along with other vehicles manufactured for fuel efficiency): Atkinson cycle Miller cycle Rotary Wankel engine Pistonless rotary engine Continuous combustion Gas turbine engine Turbojet, through a propelling nozzle Turbofan, through a duct-fan Turboprop, through an unducted propeller, usually with variable pitch Turboshaft, a gas turbine optimized for producing mechanical torque instead of thrust Ramjet, similar to a turbojet but uses vehicle speed to compress (ram) the air instead of a compressor. Scramjet, a variant of the ramjet that uses supersonic combustion. Rocket engine Reciprocating engines Structure The base of a reciprocating internal combustion engine is the engine block, which is typically made of cast iron (due to its good wear resistance and low cost) or aluminum. In the latter case, the cylinder liners are made of cast iron or steel, or a coating such as nikasil or alusil. The engine block contains the cylinders. In engines with more than one cylinder they are usually arranged either in 1 row (straight engine) or 2 rows (boxer engine or V engine); 3 or 4 rows are occasionally used (W engine) in contemporary engines, and other engine configurations are possible and have been used. Single-cylinder engines (or thumpers) are common for motorcycles and other small engines found in light machinery. On the outer side of the cylinder, passages that contain cooling fluid are cast into the engine block, whereas in some heavy-duty engines the passages are formed around removable cylinder sleeves, which can be replaced. Water-cooled engines contain passages in the engine block where cooling fluid circulates (the water jacket). Some small engines are air-cooled, and instead of having a water jacket the cylinder block has fins protruding away from it to cool the engine by directly transferring heat to the air. The cylinder walls are usually finished by honing to obtain a cross hatch, which is able to retain more oil. Too rough a surface would quickly harm the engine by causing excessive wear on the piston. The pistons are short cylindrical parts which seal one end of the cylinder from the high pressure of the compressed air and combustion products and slide continuously within it while the engine is in operation. In smaller engines the pistons are made of aluminum, while in larger applications they are typically made of cast iron. In performance applications, pistons can also be titanium or forged steel for greater strength. The top surface of the piston is called its crown and is typically flat or concave. Some two-stroke engines use pistons with a deflector head. Pistons are open at the bottom and hollow except for an integral reinforcement structure (the piston web).
When an engine is working, the gas pressure in the combustion chamber exerts a force on the piston crown which is transferred through its web to a gudgeon pin. Each piston has rings fitted around its circumference that mostly prevent the gases from leaking into the crankcase or the oil into the combustion chamber. A ventilation system drives the small amount of gas that escapes past the pistons during normal operation (the blow-by gases) out of the crankcase so that it does not accumulate contaminating the oil and creating corrosion. In two-stroke gasoline engines the crankcase is part of the air–fuel path and due to the continuous flow of it, two-stroke engines do not need a separate crankcase ventilation system. The cylinder head is attached to the engine block by numerous bolts or studs. It has several functions. The cylinder head seals the cylinders on the side opposite to the pistons; it contains short ducts (the ports) for intake and exhaust and the associated intake valves that open to let the cylinder be filled with fresh air and exhaust valves that open to allow the combustion gases to escape. The valves are often poppet valves but they can also be rotary valves or sleeve valves. However, 2-stroke crankcase scavenged engines connect the gas ports directly to the cylinder wall without poppet valves; the piston controls their opening and occlusion instead. The cylinder head also holds the spark plug in the case of spark ignition engines and the injector for engines that use direct injection. All CI (compression ignition) engines use fuel injection, usually direct injection but some engines instead use indirect injection. SI (spark ignition) engines can use a carburetor or fuel injection as port injection or direct injection. Most SI engines have a single spark plug per cylinder but some have 2. A head gasket prevents the gas from leaking between the cylinder head and the engine block. The opening and closing of the valves is controlled by one or several camshafts and springs—or in some engines—a desmodromic mechanism that uses no springs. The camshaft may press directly the stem of the valve or may act upon a rocker arm, again, either directly or through a pushrod. The crankcase is sealed at the bottom with a sump that collects the falling oil during normal operation to be cycled again. The cavity created between the cylinder block and the sump houses a crankshaft that converts the reciprocating motion of the pistons to rotational motion. The crankshaft is held in place relative to the engine block by main bearings, which allow it to rotate. Bulkheads in the crankcase form a half of every main bearing; the other half is a detachable cap. In some cases a single main bearing deck is used rather than several smaller caps. A connecting rod is connected to offset sections of the crankshaft (the crankpins) in one end and to the piston in the other end through the gudgeon pin and thus transfers the force and translates the reciprocating motion of the pistons to the circular motion of the crankshaft. The end of the connecting rod attached to the gudgeon pin is called its small end, and the other end, where it is connected to the crankshaft, the big end. The big end has a detachable half to allow assembly around the crankshaft. It is kept together to the connecting rod by removable bolts. The cylinder head has an intake manifold and an exhaust manifold attached to the corresponding ports. 
The intake manifold connects to the air filter directly, or to a carburetor when one is present, which is then connected to the air filter. It distributes the air incoming from these devices to the individual cylinders. The exhaust manifold is the first component in the exhaust system. It collects the exhaust gases from the cylinders and drives it to the following component in the path. The exhaust system of an ICE may also include a catalytic converter and muffler. The final section in the path of the exhaust gases is the tailpipe. Four-stroke engines The top dead center (TDC) of a piston is the position where it is nearest to the valves; bottom dead center (BDC) is the opposite position where it is furthest from them. A stroke is the movement of a piston from TDC to BDC or vice versa, together with the associated process. While an engine is in operation, the crankshaft rotates continuously at a nearly constant speed. In a 4-stroke ICE, each piston experiences 2 strokes per crankshaft revolution in the following order. Starting the description at TDC, these are: Intake, induction or suction: The intake valves are open as a result of the cam lobe pressing down on the valve stem. The piston moves downward increasing the volume of the combustion chamber and allowing air to enter in the case of a CI engine or an air-fuel mix in the case of SI engines that do not use direct injection. The air or air-fuel mixture is called the charge in any case. Compression: In this stroke, both valves are closed and the piston moves upward reducing the combustion chamber volume which reaches its minimum when the piston is at TDC. The piston performs work on the charge as it is being compressed; as a result, its pressure, temperature and density increase; an approximation to this behavior is provided by the ideal gas law. Just before the piston reaches TDC, ignition begins. In the case of a SI engine, the spark plug receives a high voltage pulse that generates the spark which gives it its name and ignites the charge. In the case of a CI engine, the fuel injector quickly injects fuel into the combustion chamber as a spray; the fuel ignites due to the high temperature. Power or working stroke: The pressure of the combustion gases pushes the piston downward, generating more kinetic energy than is required to compress the charge. Complementary to the compression stroke, the combustion gases expand and as a result their temperature, pressure and density decreases. When the piston is near to BDC the exhaust valve opens. In the blowdown, the combustion gases expand irreversibly due to the leftover pressure—in excess of back pressure, the gauge pressure on the exhaust port. Exhaust: The exhaust valve remains open while the piston moves upward expelling the combustion gases. For naturally aspirated engines a small part of the combustion gases may remain in the cylinder during normal operation because the piston does not close the combustion chamber completely; these gases dissolve in the next charge. At the end of this stroke, the exhaust valve closes, the intake valve opens, and the sequence repeats in the next cycle. The intake valve may open before the exhaust valve closes to allow better scavenging. Two-stroke engines The defining characteristic of this kind of engine is that each piston completes a cycle every crankshaft revolution. The 4 processes of intake, compression, power and exhaust take place in only 2 strokes so that it is not possible to dedicate a stroke exclusively for each of them. 
Starting at TDC the cycle consists of: Power: While the piston is descending the combustion gases perform work on it, as in a 4-stroke engine. The same thermodynamics for the expansion apply. Scavenging: Around 75° of crankshaft rotation before BDC the exhaust valve or port opens, and blowdown occurs. Shortly thereafter the intake valve or transfer port opens. The incoming charge displaces the remaining combustion gases to the exhaust system and a part of the charge may enter the exhaust system as well. The piston reaches BDC and reverses direction. After the piston has traveled a short distance upwards into the cylinder the exhaust valve or port closes; shortly the intake valve or transfer port closes as well. Compression: With both intake and exhaust closed the piston continues moving upwards compressing the charge and performing work on it. As in the case of a 4-stroke engine, ignition starts just before the piston reaches TDC and the same consideration on the thermodynamics of the compression on the charge apply. While a 4-stroke engine uses the piston as a positive displacement pump to accomplish scavenging taking 2 of the 4 strokes, a 2-stroke engine uses the last part of the power stroke and the first part of the compression stroke for combined intake and exhaust. The work required to displace the charge and exhaust gases comes from either the crankcase or a separate blower. For scavenging, expulsion of burned gas and entry of fresh mix, two main approaches are described: Loop scavenging, and Uniflow scavenging. SAE news published in the 2010s that 'Loop Scavenging' is better under any circumstance than Uniflow Scavenging. Crankcase scavenged Some SI engines are crankcase scavenged and do not use poppet valves. Instead, the crankcase and the part of the cylinder below the piston is used as a pump. The intake port is connected to the crankcase through a reed valve or a rotary disk valve driven by the engine. For each cylinder, a transfer port connects in one end to the crankcase and in the other end to the cylinder wall. The exhaust port is connected directly to the cylinder wall. The transfer and exhaust port are opened and closed by the piston. The reed valve opens when the crankcase pressure is slightly below intake pressure, to let it be filled with a new charge; this happens when the piston is moving upwards. When the piston is moving downwards the pressure in the crankcase increases and the reed valve closes promptly, then the charge in the crankcase is compressed. When the piston is moving downwards, it also uncovers the exhaust port and the transfer port and the higher pressure of the charge in the crankcase makes it enter the cylinder through the transfer port, blowing the exhaust gases. Lubrication is accomplished by adding two-stroke oil to the fuel in small ratios. Petroil refers to the mix of gasoline with the aforesaid oil. This kind of 2-stroke engine has a lower efficiency than comparable 4-strokes engines and releases more polluting exhaust gases for the following conditions: They use a total-loss oiling system: all the lubricating oil is eventually burned along with the fuel. There are conflicting requirements for scavenging: On one side, enough fresh charge needs to be introduced in each cycle to displace almost all the combustion gases but introducing too much of it means that a part of it gets in the exhaust. 
They must use the transfer port(s) as a carefully designed and placed nozzle so that a gas current is created that sweeps the whole cylinder before reaching the exhaust port, expelling the combustion gases while minimizing the amount of charge exhausted. 4-stroke engines have the benefit of forcibly expelling almost all of the combustion gases because during exhaust the combustion chamber is reduced to its minimum volume. In crankcase scavenged 2-stroke engines, exhaust and intake are performed mostly simultaneously and with the combustion chamber at its maximum volume. The main advantage of 2-stroke engines of this type is mechanical simplicity and a higher power-to-weight ratio than their 4-stroke counterparts. Despite having twice as many power strokes per crankshaft revolution, less than twice the power of a comparable 4-stroke engine is attainable in practice. In the US, 2-stroke engines were banned for road vehicles due to pollution. Off-road-only motorcycles are still often 2-stroke but are rarely road legal. However, many thousands of 2-stroke lawn maintenance engines are in use. Blower scavenged Using a separate blower avoids many of the shortcomings of crankcase scavenging, at the expense of increased complexity, which means higher cost and increased maintenance requirements. An engine of this type uses ports or valves for intake and valves for exhaust, except opposed piston engines, which may also use ports for exhaust. The blower is usually of the Roots type but other types have been used too. This design is commonplace in CI engines, and has been occasionally used in SI engines. CI engines that use a blower typically use uniflow scavenging. In this design the cylinder wall contains several intake ports uniformly spaced along the circumference just above the position that the piston crown reaches when at BDC. An exhaust valve, or several like those of 4-stroke engines, is used. The final part of the intake manifold is an air sleeve that feeds the intake ports. The intake ports are placed at a horizontal angle to the cylinder wall (i.e., they are in the plane of the piston crown) to give a swirl to the incoming charge to improve combustion. The largest reciprocating IC engines are low-speed CI engines of this type; they are used for marine propulsion (see marine diesel engine) or electric power generation and achieve the highest thermal efficiencies among internal combustion engines of any kind. Some diesel–electric locomotive engines operate on the 2-stroke cycle. The most powerful of them have a brake power of around 4.5 MW or 6,000 HP. The EMD SD90MAC class of locomotives is an example of such. The comparable class GE AC6000CW, whose prime mover has almost the same brake power, uses a 4-stroke engine. An example of this type of engine is the Wärtsilä-Sulzer RTA96-C turbocharged 2-stroke diesel, used in large container ships. It is the most efficient and powerful reciprocating internal combustion engine in the world with a thermal efficiency over 50%. For comparison, the most efficient small four-stroke engines are around 43% thermally efficient (SAE 900648); size is an advantage for efficiency due to the increase in the ratio of volume to surface area. See the external links for an in-cylinder combustion video in a 2-stroke, optically accessible motorcycle engine. Historical design Dugald Clerk developed the first two-cycle engine in 1879. It used a separate cylinder which functioned as a pump in order to transfer the fuel mixture to the cylinder.
In 1899 John Day simplified Clerk's design into the type of 2 cycle engine that is very widely used today. Day cycle engines are crankcase scavenged and port timed. The crankcase and the part of the cylinder below the exhaust port is used as a pump. The operation of the Day cycle engine begins when the crankshaft is turned so that the piston moves from BDC upward (toward the head) creating a vacuum in the crankcase/cylinder area. The carburetor then feeds the fuel mixture into the crankcase through a reed valve or a rotary disk valve (driven by the engine). There are cast in ducts from the crankcase to the port in the cylinder to provide for intake and another from the exhaust port to the exhaust pipe. The height of the port in relationship to the length of the cylinder is called the "port timing". On the first upstroke of the engine there would be no fuel inducted into the cylinder as the crankcase was empty. On the downstroke, the piston now compresses the fuel mix, which has lubricated the piston in the cylinder and the bearings due to the fuel mix having oil added to it. As the piston moves downward it first uncovers the exhaust, but on the first stroke there is no burnt fuel to exhaust. As the piston moves downward further, it uncovers the intake port which has a duct that runs to the crankcase. Since the fuel mix in the crankcase is under pressure, the mix moves through the duct and into the cylinder. Because there is no obstruction in the cylinder of the fuel to move directly out of the exhaust port prior to the piston rising far enough to close the port, early engines used a high domed piston to slow down the flow of fuel. Later the fuel was "resonated" back into the cylinder using an expansion chamber design. When the piston rose close to TDC, a spark ignited the fuel. As the piston is driven downward with power, it first uncovers the exhaust port where the burned fuel is expelled under high pressure and then the intake port where the process has been completed and will keep repeating. Later engines used a type of porting devised by the Deutz company to improve performance. It was called the Schnurle Reverse Flow system. DKW licensed this design for all their motorcycles. Their DKW RT 125 was one of the first motor vehicles to achieve over 100 mpg as a result. Ignition Internal combustion engines require ignition of the mixture, either by spark ignition (SI) or compression ignition (CI). Before the invention of reliable electrical methods, hot tube and flame methods were used. Experimental engines with laser ignition have been built. Spark ignition process The spark-ignition engine was a refinement of the early engines which used Hot Tube ignition. When Bosch developed the magneto it became the primary system for producing electricity to energize a spark plug. Many small engines still use magneto ignition. Small engines are started by hand cranking using a recoil starter or hand crank. Prior to Charles F. Kettering of Delco's development of the automotive starter all gasoline engined automobiles used a hand crank. Larger engines typically power their starting motors and ignition systems using the electrical energy stored in a lead–acid battery. The battery's charged state is maintained by an automotive alternator or (previously) a generator which uses engine power to create electrical energy storage. The battery supplies electrical power for starting when the engine has a starting motor system, and supplies electrical power when the engine is off. 
The battery also supplies electrical power during rare run conditions where the alternator cannot maintain more than 13.8 volts (for a common 12 V automotive electrical system). As alternator voltage falls below 13.8 volts, the lead-acid storage battery increasingly picks up electrical load. During virtually all running conditions, including normal idle conditions, the alternator supplies primary electrical power. Some systems disable alternator field (rotor) power during wide-open throttle conditions. Disabling the field reduces alternator pulley mechanical loading to nearly zero, maximizing crankshaft power. In this case, the battery supplies all primary electrical power. Gasoline engines take in a mixture of air and gasoline and compress it by the movement of the piston from bottom dead center to top dead center when the fuel is at maximum compression. The reduction in the size of the swept area of the cylinder and taking into account the volume of the combustion chamber is described by a ratio. Early engines had compression ratios of 6 to 1. As compression ratios were increased, the efficiency of the engine increased as well. With early induction and ignition systems the compression ratios had to be kept low. With advances in fuel technology and combustion management, high-performance engines can run reliably at 12:1 ratio. With low octane fuel, a problem would occur as the compression ratio increased as the fuel was igniting due to the rise in temperature that resulted. Charles Kettering developed a lead additive which allowed higher compression ratios, which was progressively abandoned for automotive use from the 1970s onward, partly due to lead poisoning concerns. The fuel mixture is ignited at different progressions of the piston in the cylinder. At low rpm, the spark is timed to occur close to the piston achieving top dead center. In order to produce more power, as rpm rises the spark is advanced sooner during piston movement. The spark occurs while the fuel is still being compressed progressively more as rpm rises. The necessary high voltage, typically 10,000 volts, is supplied by an induction coil or transformer. The induction coil is a fly-back system, using interruption of electrical primary system current through some type of synchronized interrupter. The interrupter can be either contact points or a power transistor. The problem with this type of ignition is that as RPM increases the availability of electrical energy decreases. This is especially a problem, since the amount of energy needed to ignite a more dense fuel mixture is higher. The result was often a high RPM misfire. Capacitor discharge ignition was developed. It produces a rising voltage that is sent to the spark plug. CD system voltages can reach 60,000 volts. CD ignitions use step-up transformers. The step-up transformer uses energy stored in a capacitance to generate electric spark. With either system, a mechanical or electrical control system provides a carefully timed high-voltage to the proper cylinder. This spark, via the spark plug, ignites the air-fuel mixture in the engine's cylinders. While gasoline internal combustion engines are much easier to start in cold weather than diesel engines, they can still have cold weather starting problems under extreme conditions. For years, the solution was to park the car in heated areas. In some parts of the world, the oil was actually drained and heated overnight and returned to the engine for cold starts. 
In the early 1950s, the gasoline Gasifier unit was developed, where, on cold weather starts, raw gasoline was diverted to the unit where part of the fuel was burned causing the other part to become a hot vapor sent directly to the intake valve manifold. This unit was quite popular until electric engine block heaters became standard on gasoline engines sold in cold climates. Compression ignition process For ignition, diesel, PPC and HCCI engines rely solely on the high temperature and pressure created by the engine in its compression process. The compression level that occurs is usually twice or more than a gasoline engine. Diesel engines take in air only, and shortly before peak compression, spray a small quantity of diesel fuel into the cylinder via a fuel injector that allows the fuel to instantly ignite. HCCI type engines take in both air and fuel, but continue to rely on an unaided auto-combustion process, due to higher pressures and temperature. This is also why diesel and HCCI engines are more susceptible to cold-starting issues, although they run just as well in cold weather once started. Light duty diesel engines with indirect injection in automobiles and light trucks employ glowplugs (or other pre-heating: see Cummins ISB#6BT) that pre-heat the combustion chamber just before starting to reduce no-start conditions in cold weather. Most diesels also have a battery and charging system; nevertheless, this system is secondary and is added by manufacturers as a luxury for the ease of starting, turning fuel on and off (which can also be done via a switch or mechanical apparatus), and for running auxiliary electrical components and accessories. Most new engines rely on electrical and electronic engine control units (ECU) that also adjust the combustion process to increase efficiency and reduce emissions. Lubrication Surfaces in contact and relative motion to other surfaces require lubrication to reduce wear, noise and increase efficiency by reducing the power wasting in overcoming friction, or to make the mechanism work at all. Also, the lubricant used can reduce excess heat and provide additional cooling to components. At the very least, an engine requires lubrication in the following parts: Between pistons and cylinders Small bearings Big end bearings Main bearings Valve gear (The following elements may not be present): Tappets Rocker arms Pushrods Timing chain or gears. Toothed belts do not require lubrication. In 2-stroke crankcase scavenged engines, the interior of the crankcase, and therefore the crankshaft, connecting rod and bottom of the pistons are sprayed by the two-stroke oil in the air-fuel-oil mixture which is then burned along with the fuel. The valve train may be contained in a compartment flooded with lubricant so that no oil pump is required. In a splash lubrication system no oil pump is used. Instead the crankshaft dips into the oil in the sump and due to its high speed, it splashes the crankshaft, connecting rods and bottom of the pistons. The connecting rod big end caps may have an attached scoop to enhance this effect. The valve train may also be sealed in a flooded compartment, or open to the crankshaft in a way that it receives splashed oil and allows it to drain back to the sump. Splash lubrication is common for small 4-stroke engines. In a forced (also called pressurized) lubrication system, lubrication is accomplished in a closed-loop which carries motor oil to the surfaces serviced by the system and then returns the oil to a reservoir. 
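The dependence on compression ratio described in the two ignition sections above can be illustrated with air-standard ideal-gas relations: higher ratios raise the end-of-compression temperature (which is what lets diesel and HCCI engines auto-ignite) and raise the ideal thermal efficiency. This is a rough sketch under assumed conditions (gamma = 1.4, 300 K and 100 kPa at the start of compression), not data for any particular engine.

```python
def compression_state(T1_K, P1_kPa, r, gamma=1.4):
    """End-of-compression temperature and pressure for a reversible adiabatic
    compression of an ideal gas with compression ratio r (air-standard model)."""
    return T1_K * r ** (gamma - 1.0), P1_kPa * r ** gamma

def otto_efficiency(r, gamma=1.4):
    """Air-standard Otto-cycle thermal efficiency, eta = 1 - r**(1 - gamma).
    Real engines fall well short of this ideal bound."""
    return 1.0 - r ** (1.0 - gamma)

for r in (6, 10, 12, 20):
    T2, P2 = compression_state(300.0, 100.0, r)
    print(f"r = {r:>2}:1  ->  ~{T2:3.0f} K, ~{P2:4.0f} kPa, ideal efficiency ~{otto_efficiency(r):.0%}")
# r = 6:1 gives roughly 614 K; r = 20:1 (diesel territory) gives roughly 994 K
```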
The auxiliary equipment of an engine is typically not serviced by this loop; for instance, an alternator may use ball bearings sealed with their own lubricant. The reservoir for the oil is usually the sump, and when this is the case, it is called a wet sump system. When there is a separate oil reservoir, the crankcase still catches the oil, but it is continuously drained by a dedicated pump; this is called a dry sump system. On its bottom, the sump contains an oil intake covered by a mesh filter, which is connected to an oil pump and then to an oil filter outside the crankcase; from there the oil is diverted to the crankshaft main bearings and valve train. The crankcase contains at least one oil gallery (a conduit inside a crankcase wall) to which oil is introduced from the oil filter. The main bearings contain a groove through all or half of their circumference; the oil enters these grooves from channels connected to the oil gallery. The crankshaft has drillings that take oil from these grooves and deliver it to the big end bearings. All big end bearings are lubricated this way. A single main bearing may provide oil for 0, 1 or 2 big end bearings. A similar system may be used to lubricate the piston, its gudgeon pin and the small end of its connecting rod; in this system, the connecting rod big end has a groove around the crankshaft and a drilling connected to the groove which distributes oil from there to the bottom of the piston and from there to the cylinder. Other systems are also used to lubricate the cylinder and piston. The connecting rod may have a nozzle to throw an oil jet onto the cylinder and the bottom of the piston. That nozzle is in motion relative to the cylinder it lubricates, but is always pointed towards it or the corresponding piston. Typically, forced lubrication systems have a lubricant flow higher than what is required to lubricate satisfactorily, in order to assist with cooling. Specifically, the lubricant system helps to move heat from the hot engine parts to the cooling liquid (in water-cooled engines) or fins (in air-cooled engines), which then transfer it to the environment. The lubricant must be designed to be chemically stable and to maintain suitable viscosities within the temperature range it encounters in the engine.

Cylinder configuration
Common cylinder configurations include the straight or inline configuration, the more compact V configuration, and the wider but smoother flat or boxer configuration. Aircraft engines can also adopt a radial configuration, which allows more effective cooling. More unusual configurations such as the H, U, X, and W have also been used. Multiple-cylinder engines have their valve train and crankshaft configured so that the pistons are at different parts of their cycle. It is desirable to have the pistons' cycles uniformly spaced (this is called even firing), especially in forced induction engines; this reduces torque pulsations and makes inline engines with more than 3 cylinders statically balanced in their primary forces. However, some engine configurations require odd firing to achieve better balance than what is possible with even firing. For instance, a 4-stroke I2 engine has better balance when the angle between the crankpins is 180° because the pistons move in opposite directions and inertial forces partially cancel, but this gives an odd firing pattern where one cylinder fires 180° of crankshaft rotation after the other, then no cylinder fires for 540°. With an even firing pattern, the pistons would move in unison and the associated forces would add.
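A minimal sketch of the even-firing arithmetic discussed above, assuming a four-stroke cycle spanning 720 degrees of crankshaft rotation; the function name and the example cylinder counts are illustrative, not taken from the article.

```python
# For a four-stroke engine one full cycle spans 720 degrees of crankshaft rotation,
# so evenly spaced power strokes occur every 720/N degrees for N cylinders.
def even_firing_interval(cylinders: int, strokes: int = 4) -> float:
    cycle_degrees = 360 * strokes / 2  # 720 for a four-stroke, 360 for a two-stroke
    return cycle_degrees / cylinders

for n in (2, 3, 4, 6, 8):
    print(n, "cylinders:", even_firing_interval(n), "degrees between firings")
# A 4-stroke I2 with 180-degree crank throws instead fires at 180 and then 540 degrees,
# the uneven pattern described in the paragraph above.
```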
Multiple crankshaft configurations do not necessarily need a cylinder head at all, because they can instead have a piston at each end of the cylinder, called an opposed-piston design. Because the gas inlets and outlets are positioned at opposed ends of the cylinder, one can achieve uniflow scavenging, which, as in the four-stroke engine, is efficient over a wide range of engine speeds. Thermal efficiency is improved because of the lack of cylinder heads. This design was used in the Junkers Jumo 205 diesel aircraft engine, using two crankshafts at either end of a single bank of cylinders, and most remarkably in the Napier Deltic diesel engines. These used three crankshafts to serve three banks of double-ended cylinders arranged in an equilateral triangle with the crankshafts at the corners. It was also used in single-bank locomotive engines, and is still used in marine propulsion engines and marine auxiliary generators.

Diesel cycle
Most truck and automotive diesel engines use a cycle reminiscent of a four-stroke cycle, but with the temperature increase caused by compression producing ignition, rather than needing a separate ignition system. This variation is called the diesel cycle. In the diesel cycle, diesel fuel is injected directly into the cylinder so that combustion occurs at constant pressure, as the piston moves.

Otto cycle
The Otto cycle is the most common cycle for most cars' internal combustion engines that use gasoline as a fuel. It consists of the same major steps as described for the four-stroke engine: intake, compression, ignition, expansion and exhaust.

Five-stroke engine
In 1879, Nicolaus Otto manufactured and sold a double expansion engine (the double and triple expansion principles were already widely used in steam engines), with two small cylinders at both sides of a low-pressure larger cylinder, where a second expansion of exhaust stroke gas took place; the owner returned it, alleging poor performance. In 1906, the concept was incorporated in a car built by EHV (Eisenhuth Horseless Vehicle Company), and in the 21st century Ilmor designed and successfully tested a 5-stroke double expansion internal combustion engine, with high power output and low SFC (specific fuel consumption).

Six-stroke engine
The six-stroke engine was invented in 1883. Four kinds of six-stroke engines use a regular piston in a regular cylinder (Griffin six-stroke, Bajulaz six-stroke, Velozeta six-stroke and Crower six-stroke), firing every three crankshaft revolutions. These systems capture the waste heat of the four-stroke Otto cycle with an injection of air or water. The Beare Head and "piston charger" engines operate as opposed-piston engines, with two pistons in a single cylinder, firing every two revolutions (as a four-stroke engine does) rather than every three.

Other cycles
The first internal combustion engines did not compress the mixture. The first part of the piston downstroke drew in a fuel-air mixture, then the inlet valve closed and, in the remainder of the down-stroke, the fuel-air mixture fired. The exhaust valve opened for the piston upstroke. These attempts at imitating the principle of a steam engine were very inefficient. There are a number of variations of these cycles, most notably the Atkinson and Miller cycles. Split-cycle engines separate the four strokes of intake, compression, combustion and exhaust into two separate but paired cylinders. The first cylinder is used for intake and compression.
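A short sketch of the standard air-standard Otto-cycle efficiency relation, which depends only on the compression ratio and the ratio of specific heats (assumed here to be 1.4 for air). This textbook relation is an illustration added alongside the Otto cycle discussion above, not a statement from the article.

```python
# Air-standard (ideal) Otto-cycle efficiency: eta = 1 - r**(1 - gamma),
# where r is the compression ratio and gamma the ratio of specific heats.
def otto_efficiency(compression_ratio: float, gamma: float = 1.4) -> float:
    return 1.0 - compression_ratio ** (1.0 - gamma)

for r in (6.0, 10.0, 12.0):
    print(f"r = {r}: ideal efficiency ~ {otto_efficiency(r):.0%}")
# r = 6 -> ~51%, r = 10 -> ~60%, r = 12 -> ~63%; real engines achieve much less
# because of friction, heat loss and other non-ideal effects.
```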
The compressed air is then transferred through a crossover passage from the compression cylinder into the second cylinder, where combustion and exhaust occur. A split-cycle engine is really an air compressor on one side with a combustion chamber on the other. Previous split-cycle engines have had two major problems: poor breathing (volumetric efficiency) and low thermal efficiency. However, new designs are being introduced that seek to address these problems. The Scuderi Engine addresses the breathing problem by reducing the clearance between the piston and the cylinder head through various turbocharging techniques. The Scuderi design requires the use of outwardly opening valves that enable the piston to move very close to the cylinder head without interference from the valves. Scuderi addresses the low thermal efficiency via firing after top dead center (ATDC). Firing ATDC can be accomplished by using high-pressure air in the transfer passage to create sonic flow and high turbulence in the power cylinder.

Combustion turbines
Jet engine
Jet engines use a number of rows of fan blades to compress air, which then enters a combustor where it is mixed with fuel (typically JP fuel) and then ignited. The burning of the fuel raises the temperature of the air, which is then exhausted out of the engine, creating thrust. A modern turbofan engine can operate at as high as 48% efficiency. There are six sections to a turbofan engine:
Fan
Compressor
Combustor
Turbine
Mixer
Nozzle

Gas turbines
A gas turbine compresses air and uses it to turn a turbine. It is essentially a jet engine which directs its output to a shaft. There are three stages to a turbine: 1) air is drawn through a compressor where the temperature rises due to compression, 2) fuel is added in the combustor, and 3) hot air is exhausted through turbine blades which rotate a shaft connected to the compressor. A gas turbine is a rotary machine similar in principle to a steam turbine, and it consists of three main components: a compressor, a combustion chamber, and a turbine. The temperature of the air, after being compressed in the compressor, is increased by burning fuel in it. The heated air and the products of combustion expand in a turbine, producing work output. A large share of this work drives the compressor; the remainder is available as useful work output. Gas turbines are among the most efficient internal combustion engines. The General Electric 7HA and 9HA turbine combined cycle electrical plants are rated at over 61% efficiency.

Brayton cycle
A gas turbine is a rotary machine somewhat similar in principle to a steam turbine. It consists of three main components: compressor, combustion chamber, and turbine. The air is compressed by the compressor, where a temperature rise occurs. The temperature of the compressed air is further increased by combustion of injected fuel in the combustion chamber, which expands the air. This energy rotates the turbine, which powers the compressor via a mechanical coupling. The hot gases are then exhausted to provide thrust. Gas turbine cycle engines employ a continuous combustion system in which compression, combustion, and expansion occur simultaneously at different places in the engine, giving continuous power. Notably, the combustion takes place at constant pressure, rather than at constant volume as in the Otto cycle.

Wankel engines
The Wankel engine (rotary engine) does not have piston strokes.
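A rough sketch of the gas-turbine work split described above, computing the share of turbine work left over as useful shaft output after the compressor has been driven. The function name and the numerical values are hypothetical; only the bookkeeping is meant to be informative.

```python
# Illustrative "back work" calculation for a simple gas turbine: a large share of
# the turbine's work is consumed internally by the compressor.
def net_work_fraction(turbine_work_mj: float, compressor_work_mj: float) -> float:
    """Fraction of turbine work remaining as useful shaft output."""
    return (turbine_work_mj - compressor_work_mj) / turbine_work_mj

# Hypothetical example: a turbine producing 3 MJ of work while the compressor absorbs 2 MJ
print(round(net_work_fraction(turbine_work_mj=3.0, compressor_work_mj=2.0), 2))  # 0.33
```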
It operates with the same separation of phases as the four-stroke engine, with the phases taking place in separate locations in the engine. In thermodynamic terms it follows the Otto engine cycle, so it may be thought of as a "four-phase" engine. While it is true that three power strokes typically occur per rotor revolution, due to the 3:1 revolution ratio of the rotor to the eccentric shaft only one power stroke per shaft revolution actually occurs. The drive (eccentric) shaft rotates once during every power stroke, instead of twice as a crankshaft does in the Otto cycle, giving it a greater power-to-weight ratio than piston engines. This type of engine was most notably used in the Mazda RX-8, the earlier RX-7, and other vehicle models. The engine is also used in unmanned aerial vehicles, where the small size and weight and the high power-to-weight ratio are advantageous.

Forced induction
Forced induction is the process of delivering compressed air to the intake of an internal combustion engine. A forced induction engine uses a gas compressor to increase the pressure, temperature and density of the air. An engine without forced induction is considered a naturally aspirated engine. Forced induction is used in the automotive and aviation industries to increase engine power and efficiency. It particularly helps aviation engines, as they need to operate at high altitude. Forced induction is achieved by a supercharger, where the compressor is directly powered from the engine shaft, or, in the turbocharger, by a turbine powered by the engine exhaust.

Fuels and oxidizers
All internal combustion engines depend on combustion of a chemical fuel, typically with oxygen from the air (though it is possible to inject nitrous oxide to do more of the same thing and gain a power boost). The combustion process typically results in the production of a great quantity of thermal energy, as well as the production of steam and carbon dioxide and other chemicals at very high temperature; the temperature reached is determined by the chemical makeup of the fuel and oxidizers (see stoichiometry), as well as by the compression and other factors.

Fuels
The most common modern fuels are made up of hydrocarbons and are derived mostly from fossil fuels (petroleum). Fossil fuels include diesel fuel, gasoline and petroleum gas, with rarer use of propane. Except for the fuel delivery components, most internal combustion engines that are designed for gasoline use can run on natural gas or liquefied petroleum gases without major modifications. Large diesels can run on air mixed with gases, with a pilot injection of diesel fuel for ignition. Liquid and gaseous biofuels, such as ethanol and biodiesel (a form of diesel fuel that is produced from crops that yield triglycerides such as soybean oil), can also be used. Engines with appropriate modifications can also run on hydrogen gas, wood gas, or charcoal gas, as well as on so-called producer gas made from other convenient biomass. Experiments have also been conducted using powdered solid fuels, such as the magnesium injection cycle. Presently, fuels used include:
Petroleum:
Petroleum spirit (North American term: gasoline, British term: petrol)
Diesel fuel
Autogas (liquefied petroleum gas)
Propane
Compressed natural gas
Jet fuel (aviation fuel)
Residual fuel
Coal:
Gasoline can be made from carbon (coal) using the Fischer–Tropsch process
Diesel fuel can be made from carbon using the Fischer–Tropsch process
Biofuels and vegetable oils:
Peanut oil and other vegetable oils
Woodgas, from an onboard wood gasifier using solid wood as a fuel
Biofuels:
Biobutanol (replaces gasoline)
Biodiesel (replaces petrodiesel)
Dimethyl ether (replaces petrodiesel)
Bioethanol and biomethanol (wood alcohol) and other biofuels (see flexible-fuel vehicle)
Biogas
Hydrogen (mainly spacecraft rocket engines)

Even fluidized metal powders and explosives have seen some use. Engines that use gases for fuel are called gas engines and those that use liquid hydrocarbons are called oil engines; however, gasoline engines are also often colloquially referred to as "gas engines" ("petrol engines" outside North America). The main limitations on fuels are that a fuel must be easily transportable through the fuel system to the combustion chamber and must release sufficient energy in the form of heat upon combustion to make practical use of the engine. Diesel engines are generally heavier, noisier, and more powerful at lower speeds than gasoline engines. They are also more fuel-efficient in most circumstances and are used in heavy road vehicles, some automobiles (increasingly so for their improved fuel efficiency over gasoline engines), ships, railway locomotives, and light aircraft. Gasoline engines are used in most other road vehicles, including most cars, motorcycles, and mopeds. In Europe, sophisticated diesel-engined cars have taken over about 45% of the market since the 1990s. There are also engines that run on hydrogen, methanol, ethanol, liquefied petroleum gas (LPG), biodiesel, paraffin and tractor vaporizing oil (TVO).

Hydrogen
Hydrogen could eventually replace conventional fossil fuels in traditional internal combustion engines. Alternatively, fuel cell technology may come to deliver on its promise, and the use of internal combustion engines could even be phased out. Although there are multiple ways of producing free hydrogen, those methods require converting combustible molecules into hydrogen or consuming electric energy. Unless that electricity is produced from a renewable source, and is not required for other purposes, hydrogen does not solve any energy crisis. In many situations the disadvantage of hydrogen, relative to carbon fuels, is its storage. Liquid hydrogen has extremely low density (14 times lower than water) and requires extensive insulation, whilst gaseous hydrogen requires heavy tankage. Even when liquefied, hydrogen has a higher specific energy, but its volumetric energy density is still roughly five times lower than that of gasoline. However, the energy density of hydrogen is considerably higher than that of electric batteries, making it a serious contender as an energy carrier to replace fossil fuels. The "Hydrogen on Demand" process (see direct borohydride fuel cell) creates hydrogen as needed, but has other issues, such as the high price of the sodium borohydride that is the raw material.

Oxidizers
Since air is plentiful at the surface of the earth, the oxidizer is typically atmospheric oxygen, which has the advantage of not being stored within the vehicle. This increases the power-to-weight and power-to-volume ratios. Other materials are used for special purposes, often to increase power output or to allow operation under water or in space. Compressed air has been commonly used in torpedoes. Compressed oxygen, as well as some compressed air, was used in the Japanese Type 93 torpedo. Some submarines carry pure oxygen. Rockets very often use liquid oxygen. Nitromethane is added to some racing and model fuels to increase power and control combustion.
Nitrous oxide has been used, with extra gasoline, in tactical aircraft, and in specially equipped cars to allow short bursts of added power from engines that otherwise run on gasoline and air. It is also used in the Burt Rutan rocket spacecraft. Hydrogen peroxide power was under development for German World War II submarines. It may have been used in some non-nuclear submarines, and was used on some rocket engines (notably the Black Arrow and the Messerschmitt Me 163 rocket fighter). Other chemicals such as chlorine or fluorine have been used experimentally, but have not been found practical.

Cooling
Cooling is required to remove excessive heat; high temperatures can cause engine failure, usually from wear (due to high-temperature-induced failure of lubrication), cracking or warping. The two most common forms of engine cooling are air cooling and water cooling. Most modern automotive engines are both water- and air-cooled, as the water/liquid coolant is carried to air-cooled fins and/or fans, whereas larger engines may be solely water-cooled, as they are stationary and have a constant supply of water through water mains or fresh water, while most power tool engines and other small engines are air-cooled. Some engines (air- or water-cooled) also have an oil cooler. In some engines, especially for turbine engine blade cooling and liquid rocket engine cooling, fuel is used as a coolant, as it is simultaneously preheated before being injected into the combustion chamber.

Starting
Internal combustion engines must have their cycles started. In reciprocating engines this is accomplished by turning the crankshaft (or, in a Wankel engine, the rotor shaft), which induces the cycles of intake, compression, combustion, and exhaust. The first engines were started with a turn of their flywheels, while the first vehicle (the Daimler Reitwagen) was started with a hand crank. All ICE-engined automobiles were started with hand cranks until Charles Kettering developed the electric starter for automobiles. This method is now the most widely used, even among non-automobiles. As diesel engines have become larger and their mechanisms heavier, air starters have come into use. This is due to the lack of torque in electric starters. Air starters work by pumping compressed air into the cylinders of an engine to start it turning. Two-wheeled vehicles may have their engines started in one of four ways:
By pedaling, as on a bicycle
By pushing the vehicle and then engaging the clutch, known as "run-and-bump starting"
By kicking downward on a single pedal, known as "kick starting"
By an electric starter, as in cars

There are also starters in which a spring is compressed by a crank motion and then used to start the engine. Some small engines use a pull-rope mechanism called "recoil starting", as the rope rewinds itself after it has been pulled out to start the engine. This method is commonly used in push lawn mowers and other settings where only a small amount of torque is needed to turn the engine over. Turbine engines are frequently started by an electric motor or by compressed air.
Measures of engine performance
Engine types vary greatly in a number of different ways:
energy efficiency
fuel/propellant consumption (brake specific fuel consumption for shaft engines, thrust specific fuel consumption for jet engines)
power-to-weight ratio
thrust-to-weight ratio
torque curves (for shaft engines), thrust lapse (jet engines)
compression ratio for piston engines, overall pressure ratio for jet engines and gas turbines

Energy efficiency
Once ignited and burnt, the combustion products (hot gases) have more available thermal energy than the original compressed fuel-air mixture (which had higher chemical energy). This available energy is manifested as a higher temperature and pressure that can be converted into kinetic energy by the engine. In a reciprocating engine, the high-pressure gases inside the cylinders drive the engine's pistons. Once the available energy has been removed, the remaining hot gases are vented (often by opening a valve or exposing the exhaust outlet), and this allows the piston to return to its previous position (top dead center, or TDC). The piston can then proceed to the next phase of its cycle, which varies between engines. Any thermal energy that is not translated into work is normally considered a waste product and is removed from the engine either by an air or a liquid cooling system.

Internal combustion engines are considered heat engines (since the release of chemical energy in combustion has the same effect as heat transfer into the engine) and as such their theoretical efficiency can be approximated by idealized thermodynamic cycles. The thermal efficiency of a theoretical cycle cannot exceed that of the Carnot cycle, whose efficiency is determined by the difference between the lower and upper operating temperatures of the engine. The upper operating temperature of an engine is limited by two main factors: the thermal operating limits of the materials, and the auto-ignition resistance of the fuel. All metals and alloys have a thermal operating limit, and there is significant research into ceramic materials that can be made with greater thermal stability and desirable structural properties. Higher thermal stability allows for a greater temperature difference between the lower (ambient) and upper operating temperatures, and hence greater thermodynamic efficiency. Also, as the cylinder temperature rises, the fuel becomes more prone to auto-ignition. This occurs when the cylinder temperature nears the auto-ignition temperature of the charge. At this point, ignition can occur spontaneously before the spark plug fires, causing excessive cylinder pressures. Auto-ignition can be mitigated by using fuels with high auto-ignition resistance (octane rating); however, it still places an upper bound on the allowable peak cylinder temperature.

The thermodynamic limits assume that the engine is operating under ideal conditions: a frictionless world, ideal gases, perfect insulators, and operation for infinite time. Real-world applications introduce complexities that reduce efficiency. For example, a real engine runs best at a specific load, termed its power band. The engine in a car cruising on a highway is usually operating significantly below its ideal load, because it is designed for the higher loads required for rapid acceleration. In addition, factors such as wind resistance reduce overall system efficiency. Vehicle fuel economy is measured in miles per gallon or in litres per 100 kilometres. The volume of hydrocarbon assumes a standard energy content.
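A minimal sketch of the Carnot bound mentioned above, using assumed, illustrative temperatures rather than figures from the article; the point is only that the ideal limit far exceeds the practical efficiencies quoted elsewhere in the text.

```python
# Carnot limit: no heat-engine cycle can exceed 1 - T_cold / T_hot (temperatures in kelvin).
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    return 1.0 - t_cold_k / t_hot_k

# e.g. an assumed peak gas temperature of ~2300 K against ~300 K ambient
print(round(carnot_efficiency(2300.0, 300.0), 2))  # ~0.87, far above real-world 18-50%
```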
Even when aided by turbochargers and stock efficiency aids, most engines retain an average efficiency of about 18–20%. However, the latest technologies in Formula One engines have seen a boost in thermal efficiency past 50%. There are many inventions aimed at increasing the efficiency of IC engines. In general, practical engines are always compromised by trade-offs between different properties such as efficiency, weight, power, heat, response, exhaust emissions, or noise. Sometimes economy also plays a role, not only in the cost of manufacturing the engine itself, but also in manufacturing and distributing the fuel. Increasing the engine's efficiency brings better fuel economy, but only if the fuel cost per energy content is the same.

Measures of fuel efficiency and propellant efficiency
For stationary and shaft engines, including propeller engines, fuel consumption is measured by calculating the brake specific fuel consumption, which is the mass flow rate of fuel consumed divided by the power produced. For internal combustion engines in the form of jet engines, the power output varies drastically with airspeed, so a less variable measure is used: thrust specific fuel consumption (TSFC), which is the mass of propellant needed to generate an impulse of one pound force-hour, or the grams of propellant needed to generate an impulse of one kilonewton-second. For rockets, TSFC can be used, but typically other equivalent measures are traditionally used, such as specific impulse and effective exhaust velocity.

Air and noise pollution
Air pollution
Internal combustion engines such as reciprocating internal combustion engines produce air pollution emissions, due to incomplete combustion of carbonaceous fuel. The main products of the process are carbon dioxide (CO2), water and some soot, also called particulate matter (PM). The effects of inhaling particulate matter have been studied in humans and animals and include asthma, lung cancer, cardiovascular issues, and premature death. There are, however, some additional products of the combustion process, including nitrogen oxides, sulfur oxides and some uncombusted hydrocarbons, depending on the operating conditions and the fuel-air ratio. Carbon dioxide emissions from internal combustion engines (particularly ones using fossil fuels such as gasoline and diesel) contribute to human-induced climate change. Increasing the engine's fuel efficiency can reduce, but not eliminate, these emissions, since carbon-based fuel combustion produces CO2. Since removing CO2 from engine exhaust is impractical, there is increasing interest in alternatives. Sustainable fuels such as biofuels and synfuels, and electric motors powered by batteries, are examples. Not all of the fuel is completely consumed by the combustion process. A small amount of fuel is present after combustion, and some of it reacts to form oxygenates, such as formaldehyde or acetaldehyde, or hydrocarbons not originally present in the input fuel mixture. Incomplete combustion usually results from insufficient oxygen to achieve the perfect stoichiometric ratio. The flame is "quenched" by the relatively cool cylinder walls, leaving behind unreacted fuel that is expelled with the exhaust. When running at lower speeds, quenching is commonly observed in diesel (compression ignition) engines that run on natural gas. Quenching reduces efficiency and increases knocking, sometimes causing the engine to stall. Incomplete combustion also leads to the production of carbon monoxide (CO).
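A minimal sketch of the brake specific fuel consumption measure described above (fuel mass flow divided by brake power); the function name and the example values are illustrative assumptions, not data from the article.

```python
# BSFC in g/kWh from fuel mass flow (kg/h) and brake power (kW).
def bsfc_g_per_kwh(fuel_flow_kg_per_h: float, brake_power_kw: float) -> float:
    return fuel_flow_kg_per_h * 1000.0 / brake_power_kw

# e.g. an engine burning 20 kg of fuel per hour while delivering 80 kW at the shaft
print(round(bsfc_g_per_kwh(20.0, 80.0)))  # 250 g/kWh
```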
Further chemicals released are benzene and 1,3-butadiene, which are also hazardous air pollutants. Increasing the amount of air in the engine reduces emissions of incomplete combustion products, but also promotes reaction between oxygen and nitrogen in the air to produce nitrogen oxides (NOx). NOx is hazardous to both plant and animal health, and leads to the production of ozone (O3). Ozone is not emitted directly; rather, it is a secondary air pollutant, produced in the atmosphere by the reaction of NOx and volatile organic compounds in the presence of sunlight. Ground-level ozone is harmful to human health and the environment. Though the same chemical substance, ground-level ozone should not be confused with stratospheric ozone, or the ozone layer, which protects the earth from harmful ultraviolet rays. Carbon fuels containing sulfur produce sulfur monoxide (SO) and sulfur dioxide (SO2), contributing to acid rain.

In the United States, nitrogen oxides, PM, carbon monoxide, sulfur dioxide, and ozone are regulated as criteria air pollutants under the Clean Air Act to levels where human health and welfare are protected. Other pollutants, such as benzene and 1,3-butadiene, are regulated as hazardous air pollutants whose emissions must be lowered as much as possible depending on technological and practical considerations. NOx, carbon monoxide and other pollutants are frequently controlled via exhaust gas recirculation, which returns some of the exhaust back into the engine intake. Catalytic converters are used to convert exhaust chemicals to CO2 (a greenhouse gas), H2O (water vapour, also a greenhouse gas) and N2 (nitrogen).

Non-road engines
The emission standards used by many countries have special requirements for non-road engines, which are used by equipment and vehicles that are not operated on the public roadways. These standards are separate from those for road vehicles.

Noise pollution
Significant contributions to noise pollution are made by internal combustion engines. Automobile and truck traffic operating on highways and street systems produces noise, as do aircraft flights due to jet noise, particularly supersonic-capable aircraft. Rocket engines create the most intense noise.

Idling
Internal combustion engines continue to consume fuel and emit pollutants while idling. Idling is reduced by stop-start systems.

Carbon dioxide formation
A good estimate of the mass of carbon dioxide released when one litre of diesel fuel (or gasoline) is combusted can be obtained as follows. As a good approximation, the chemical formula of diesel can be taken as CnH2n (in reality diesel is a mixture of different molecules). As carbon has a molar mass of 12 g/mol and hydrogen (atomic) has a molar mass of about 1 g/mol, the fraction by weight of carbon in diesel is roughly 12/14, or about 0.86. The reaction of diesel combustion is given by:

2 CnH2n + 3n O2 → 2n CO2 + 2n H2O

Carbon dioxide has a molar mass of 44 g/mol, as it consists of 2 atoms of oxygen (16 g/mol) and 1 atom of carbon (12 g/mol). So 12 g of carbon yields 44 g of carbon dioxide. Diesel has a density of 0.838 kg per litre. Putting everything together, the mass of carbon dioxide produced by burning 1 litre of diesel can be calculated as:

0.838 kg/L × 12/14 × 44/12 ≈ 2.6 kg of CO2 per litre of diesel

The figure obtained with this estimation is close to the values found in the literature.
For gasoline, with a density of 0.75 kg/L and a ratio of carbon to hydrogen atoms of about 6 to 14 (roughly C6H14, giving a carbon fraction by weight of about 72/86), the estimated mass of carbon dioxide emitted by burning 1 litre of gasoline is:

0.75 kg/L × 72/86 × 44/12 ≈ 2.3 kg of CO2 per litre of gasoline

Parasitic loss
The term parasitic loss is often applied to devices that take energy from the engine in order to enhance the engine's ability to create more energy or to convert energy to motion. In the internal combustion engine, almost every mechanical component, including the drivetrain, causes parasitic loss and could thus be characterized as a parasitic load.

Examples
Bearings, oil pumps, piston rings, valve springs, flywheels, transmissions, driveshafts, and differentials all act as parasitic loads that rob the system of power. These parasitic loads can be divided into two categories: those inherent to the working of the engine, and the drivetrain losses incurred in the systems that transfer power from the engine to the road (such as the transmission, driveshaft, differentials and axles). For example, the former category (engine parasitic loads) includes the oil pump used to lubricate the engine, which is a necessary parasite that consumes power from the engine (its host). Another example of an engine parasitic load is a supercharger, which derives its power from the engine and creates more power for the engine. The power that the supercharger consumes is parasitic loss and is usually expressed in kilowatts or horsepower. While the power that the supercharger consumes is small in comparison to what it generates, it is still measurable and calculable. One of the desirable features of a turbocharger over a supercharger is the lower parasitic loss of the former. Drivetrain parasitic losses include both steady-state and dynamic loads. Steady-state loads occur at constant speeds and may originate in discrete components such as the torque converter, the transmission oil pump, and/or clutch drag, and in seal/bearing drag, churning of lubricant and gear windage/friction found throughout the system. Dynamic loads occur under acceleration and are caused by the inertia of rotating components and/or increased friction.

Measurement
While rules of thumb such as a 15% power loss from drivetrain parasitic loads have been commonly repeated, the actual loss of energy due to parasitic loads varies between systems. It can be influenced by powertrain design, lubricant type and temperature, and many other factors. In automobiles, drivetrain loss can be quantified by measuring the difference between the power measured by an engine dynamometer and that measured by a chassis dynamometer. However, this method is primarily useful for measuring steady-state loads and may not accurately reflect losses due to dynamic loads. More advanced methods can be used in a laboratory setting, such as measuring in-cylinder pressure, flow rate and temperature at certain points, and testing of individual parts or sub-assemblies to determine friction and pumping losses. For example, in a dynamometer test by Hot Rod magazine, a Ford Mustang equipped with a modified 357ci small-block Ford V8 engine and an automatic transmission had a measured drivetrain power loss averaging 33%. In the same test, a Buick equipped with a modified 455ci V8 engine and a 4-speed manual transmission was measured to have an average drivetrain power loss of 21%. Laboratory testing of a heavy-duty diesel engine determined that 1.3% of the fuel energy input was lost to parasitic loads of engine accessories such as water and oil pumps.
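A short numerical check of the two carbon dioxide estimates above, using the approximations given in the text (CnH2n for diesel at 0.838 kg/L, roughly C6H14 for gasoline at 0.75 kg/L); the function is an illustrative sketch of the same arithmetic.

```python
# CO2 per litre of fuel: density * carbon mass fraction * (44 g CO2 per 12 g C).
def co2_per_litre(density_kg_per_l: float, carbon_mass_fraction: float) -> float:
    return density_kg_per_l * carbon_mass_fraction * (44.0 / 12.0)

diesel = co2_per_litre(0.838, 12.0 / 14.0)    # CnH2n: 12 g of C per 14 g of fuel
gasoline = co2_per_litre(0.750, 72.0 / 86.0)  # C6H14: 72 g of C per 86 g of fuel
print(round(diesel, 2), "kg CO2 per litre of diesel")      # ~2.63
print(round(gasoline, 2), "kg CO2 per litre of gasoline")  # ~2.3
```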
Reduction
Automotive engineers and tuners commonly make design choices that reduce parasitic loads in order to improve efficiency and power output. These may involve the choice of major engine components or systems, such as the use of a dry sump lubrication system instead of a wet sump system. Alternatively, this can be effected through substitution of minor components available as aftermarket modifications, such as exchanging a directly engine-driven fan for one equipped with a fan clutch or an electric fan. Another modification to reduce parasitic loss, usually seen in track-only cars, is the replacement of an engine-driven water pump with an electric water pump. The reduction in parasitic loss from these changes may be due to reduced friction or to many other variables that make the design more efficient.
Internal combustion engine
[ "Physics", "Chemistry", "Technology", "Engineering" ]
14,216
[ "Internal combustion engine", "Machines", "Combustion engine", "Engines", "Piston engines", "Combustion engineering", "Physical systems", "Combustion" ]
41,234,142
https://en.wikipedia.org/wiki/Readout%20integrated%20circuit
A readout integrated circuit (ROIC) is an integrated circuit (IC) specifically used for reading detectors of a particular type. ROICs are compatible with different types of detectors, such as infrared and ultraviolet detectors. The primary purpose of a ROIC is to accumulate the photocurrent from each pixel and then transfer the resultant signal onto output taps for readout. Conventional ROIC technology stores the signal charge at each pixel and then routes the signal onto output taps for readout. This requires storing a large signal charge at each pixel site and maintaining the signal-to-noise ratio (or dynamic range) as the signal is read out and digitized. A ROIC has high-speed analog outputs to transmit pixel data outside of the integrated circuit. If digital outputs are implemented, the IC is referred to as a digital readout integrated circuit (DROIC).

A digital readout integrated circuit (DROIC) is a class of ROIC that uses on-chip analog-to-digital conversion (ADC) to digitize the accumulated photocurrent in each pixel of the imaging array. DROICs are easier to integrate into a system than ROICs because the package size and complexity are reduced; they are also less sensitive to noise and offer higher bandwidth than analog outputs.

A digital pixel readout integrated circuit (DPROIC) is a ROIC that uses on-chip analog-to-digital conversion (ADC) within each pixel (or small group of pixels) to digitize the accumulated photocurrent within the imaging array. DPROICs have an even higher bandwidth than DROICs and can significantly increase the well capacity and dynamic range of the device.

References
Digital Converters for Image Sensors, Kenton T. Veeder, SPIE Press, 2015.
A 25μm pitch LWIR focal plane array with pixel-level 15-bit ADC providing high well capacity and targeting 2mK NETD, Fabrice Guellec et al., Proceedings Volume 7660, Infrared Technology and Applications XXXVI, 2010.
A high-resolution, compact and low-power ADC suitable for array implementation in standard CMOS, Christer Jansson, IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, Vol. 42, No. 11, November 1995.
Digital Pixel Readout Integrated Circuit for High Dynamic Range Infrared Imaging Applications, Phase I SBIR, Technology report, NASA Jet Propulsion Laboratory, July 2018.
Digital pixel readout integrated circuit architectures for LWIR, Shafique, A., Yaziki, M., Kayahan, H., Ceylan, O., Gurbuz, Y., Proceedings Volume 9451, Infrared Technology and Applications XLI; 94510V, 2015.
Digital-Pixel Focal Plane Array Technology, Schultz, K., et al., Lincoln Laboratory Journal, Vol. 20, No. 2, 2014.
Sensors, Space Probes and Wi-Fi Cybersecurity, Oh My!, Maxfield, Max, Electronic Engineering Journal, February 2020.
Digital Pixel Infrared Imaging Boosts Camera Speed and Performance, Bannatyne, R., Vision Systems Design, June 2020.
Readout integrated circuit
[ "Technology", "Engineering" ]
651
[ "Computer engineering", "Integrated circuits" ]
41,234,248
https://en.wikipedia.org/wiki/Difluorine%20complex
A difluorine complex is a molecular complex involving a difluorine molecule (F2) and another molecule. The first example was gold heptafluoride (AuF7). Instead of being a gold(VII) compound, AuF7 is an adduct of gold pentafluoride (AuF5) and F2. This conclusion has been repeatedly supported by calculations. Unlike dihydrogen complexes, which feature an η2-H2 ligand, difluorine complexes feature an "end-on" (η1-F2) ligand.

See also
Dihydrogen complex
Difluorine complex
[ "Chemistry" ]
129
[ "Inorganic compounds", "Inorganic compound stubs" ]
41,234,301
https://en.wikipedia.org/wiki/Keck%20asymmetric%20allylation
In organic chemistry, the Keck asymmetric allylation is a chemical reaction that involves the nucleophilic addition of an allyl group to an aldehyde. The catalyst is a chiral complex that contains titanium as a Lewis acid. The chirality of the catalyst induces a stereoselective addition, so the secondary alcohol of the product has a predictable absolute stereochemistry based on the choice of catalyst. This name reaction is named for Gary Keck.

Background
The Keck asymmetric allylation has many applications in the synthesis of natural products, including (−)-gloeosporone, epothilone A, the CD subunit of the spongistatins, and the C10–C20 subunit of rhizoxin A. The Keck allylation has also been utilized to form substituted tetrahydropyrans enantioselectively, moieties found in products such as phorboxazole and bryostatin 1. Although the groups of E. Tagliavini and K. Mikami reported the catalysis of this reaction using a Ti(IV)–BINOL complex in the same year as the Keck group, Keck's publication reported higher enantio- and diastereoselectivity, and did not require the use of 4 Angstrom molecular sieves as in Mikami's procedure or an excess of BINOL as in Tagliavini's procedure. Keck's early success with stereoselectivity and the simplicity of the catalyst preparation led to many improvements in reaction design, including the development of other structural analogs of BINOL, the use of stoichiometric additives to enhance the reaction rate, and a broadening of the scope of the reaction to include substituted stannane nucleophiles.

Mechanism
The mechanism of this allylation is not fully known, although a cycle has been proposed involving activation of the aldehyde by the bidentate BINOL–Ti complex, followed by addition of the allyl ligand to the aldehyde, removal of the tributyltin, and transmetallation to regenerate the Ti complex. Work performed by Keck and followed up by Faller and coworkers showed a positive nonlinear effect (NLE) correlating the product enantiomeric purity with the BINOL enantiomeric purity. These observations imply that a dimeric meso-chiral catalyst is less active than the homochiral dimers, leading to the observed chiral amplification. Corey and coworkers established a CH–O hydrogen bonding model that accounts for the absolute stereochemistry of the transformation.

Improvements
The Tagliavini group, which had carried out asymmetric allylation using a similar BINOL–Ti(IV) complex, followed up early successes by synthesizing a variety of enantiopure substituted binaphthyl ligands. The most successful of these substituted binaphthyls gave 92% product enantiomeric excess in the addition of allyltributyltin to aldehydes with a Ti(OiPr)2Cl2 metal complex. The Brenna group developed a synthesis for a BINOL analog which can be resolved into its enantiomers quite easily and used as a chiral auxiliary for stereoselective Keck allylations, showing in some cases improved enantiomeric excesses of up to 4% over the (R)-BINOL-catalyzed allylations. Additionally, the developed auxiliary also showed an NLE similar to that of classic BINOL, allowing enantio-impure quantities to be used. Faller's group, whose aforementioned work helped elucidate the chiral amplification of the reaction, also developed the use of diisopropyl tartrate in a chiral poisoning strategy. Diisopropyl tartrate, racemic BINOL, Ti(OiPr)4, benzaldehyde, and allyltributyltin were used to give enantiomeric excesses of up to 91%.
Yoshida and coworkers developed a synthesis of dendritic binaphthols that serve as homogeneous, easily recoverable catalyst systems, and showed that they were amenable to forming homoallylic alcohols using Keck's allylation conditions.

Maruoka and Kii developed a bidentate Ti(IV)–BINOL ligand system for the allylation of aldehydes, with the aim of restricting M–O bond rotation between the Lewis acid and the aldehyde in order to improve enantiomeric excesses. The bidentate system, which contains two titaniums and two BINOLs connected by an aromatic diamine moiety, gave enantiomeric excesses of up to 99%. The improved stereoselectivity is proposed to come from double activation of the carbonyl by the two titaniums, a hypothesis supported by 13C NMR and IR spectroscopy studies on a 2,6-γ-pyrone substrate. The most convincing evidence that the M–O rotation is restricted comes from NOE NMR studies on trans-4-methoxy-3-buten-2-one. Irradiation of the methoxyvinyl protons in the free enone and in the enone complexed with monodentate Ti(IV) showed s-cis and s-trans conformations, while irradiation of the enone in the bidentate Ti(IV) complex showed predominantly s-trans conformers. In 2003, this group extended the allylation strategy using this bidentate catalyst to ketones.

Two key steps in the allylation reaction involve breakage of the Sn–C bond in the allyl fragment and formation of the O–Sn bond to facilitate regeneration of the Ti(IV) catalyst. Chan Mo-Yu and coworkers developed an alkylthiosilane accelerator to promote both of these steps, simultaneously increasing the reaction rate and lowering the required catalyst loading. Coupling of benzaldehyde with allyltributyltin afforded the homoallylic alcohol in 91% yield and 97% enantiomeric excess using 10 mol% of the BINOL–Ti(IV) catalyst; with the alkylthiosilane added, only 5 mol% catalyst gave 80% yield and 95% enantiomeric excess.

Brueckner and Weigand extended the use of this allylation chemistry to beta-substituted stannanes, including those that contain heterocycles, in 1996, exploring a variety of titanium alkoxides, premixing times, and reaction temperatures. The optimal conditions found were 10 mol% Ti(OiPr)4 or Ti(OEt)4 with 20 mol% enantiopure BINOL and a premixing period of 2 hours, giving enantiomeric excesses of up to 99%.
Keck asymmetric allylation
[ "Chemistry" ]
1,456
[ "Name reactions" ]
41,237,117
https://en.wikipedia.org/wiki/Poly%28amidoamine%29
Poly(amidoamine), or PAMAM, is a class of dendrimer which is made of repetitively branched subunits of amide and amine functionality. PAMAM dendrimers, sometimes referred to by the trade name Starburst, have been extensively studied since their synthesis in 1985, and represent the most well-characterized dendrimer family as well as the first to be commercialized. Like other dendrimers, PAMAMs have a sphere-like shape overall, and are typified by an internal molecular architecture consisting of tree-like branching, with each outward 'layer', or generation, containing exponentially more branching points. This branched architecture distinguishes PAMAMs and other dendrimers from traditional polymers, as it allows for low polydispersity and a high level of structural control during synthesis, and gives rise to a large number of surface sites relative to the total molecular volume. Moreover, PAMAM dendrimers exhibit greater biocompatibility than other dendrimer families, perhaps due to the combination of surface amines and interior amide bonds; these bonding motifs are highly reminiscent of innate biological chemistry and endow PAMAM dendrimers with properties similar to that of globular proteins. The relative ease/low cost of synthesis of PAMAM dendrimers (especially relative to similarly-sized biological molecules such as proteins and antibodies), along with their biocompatibility, structural control, and functionalizability, have made PAMAMs viable candidates for application in drug development, biochemistry, and nanotechnology. Synthesis Divergent synthesis Divergent synthesis refers to the sequential "growth" of a dendrimer layer by layer, starting with a core "initiator" molecule which contains functional groups capable of acting as active sites in the initial reaction. Each subsequent reaction in the series increases the number of available surface groups exponentially. Core molecules which give rise to PAMAM dendrimers can vary, but the most basic initiators are ammonia and ethylene diamine. Outward growth of PAMAM dendrimers is accomplished by alternating between two reactions: Michael addition of the amino-terminated surface onto methyl acrylate, resulting in an ester-terminated outer layer, and Coupling with ethylene diamine to achieve a new amino-terminated surface. Each round of reactions forms a new "generation", and PAMAM dendrimers are often classified by generation number; the common shorthand for this classification is "GX" or "GX PAMAM", where X is a number referring to the generation number. The first full cycle of Michael addition followed by coupling with ethylene diamine forms Generation 0 PAMAM, with subsequent Michael additions giving rise to "half" generations, and subsequent amide coupling giving rise to "full" (integer) generations. With divergent synthesis of dendrimers, it is extremely important to allow each reaction to proceed to completion; any defects caused by incomplete reaction or intramolecular coupling of new surface amines with unreacted methyl ester surface groups could cause "trailing" generations, stunting further growth for certain branches. These impurities are difficult to remove when using the divergent synthetic approach because the molecular weight, physical size, and chemical properties of the defective dendrimers are very similar in nature to the desired product. As generation number increases, it becomes more difficult to produce pure products in a timely fashion due to steric constraints. 
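A minimal sketch of the generation-by-generation growth described above, assuming an ethylene diamine core presenting four initial arms and a doubling of branch ends at each full generation (the commonly cited idealized count); the function name and loop are illustrative, not taken from the article.

```python
# Idealized PAMAM surface-group count: an ethylene diamine core gives 4 arms at
# generation 0, and each full generation doubles the number of branch ends.
def pamam_surface_groups(generation: int, core_multiplicity: int = 4) -> int:
    return core_multiplicity * 2 ** generation

for g in range(0, 8):
    print(f"G{g}: ~{pamam_surface_groups(g)} surface amines")
# G4 -> 64, G7 -> 512; this exponential growth underlies both the high surface
# functionality and the steric crowding that slows synthesis of high generations.
```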
As a result, synthesis of higher-generation PAMAM dendrimers can take months. Convergent synthesis Convergent synthesis of a dendrimer begins with what will eventually become the surface of the dendrimer and proceeds inward. The convergent synthetic approach makes use of orthogonal protecting groups (two protecting groups whose deprotection conditions will not remove one another); this is an additional consideration not present when using a divergent approach. The figure below depicts a general scheme for a convergent synthetic approach. Convergent synthesis as shown above begins with the dendritic subunit composed of reactive "focal group" A and branched group B (B can be multiply branched in the most generalized scenario, but PAMAMs only split once at each branching point). First, A is orthogonally protected and set aside for further reactions. B is also orthogonally protected, leaving the unprotected A on this molecule to couple with each of the unprotected B groups from the initial compound. This results in a new higher-generation species that is protected on both A and B. Selective deprotection of A yields a new molecule which can again be coupled onto the original monomer, thus forming another new generation. This process can then be repeated to form more and more layers. Note that the black protecting groups for group B represent what will become the outermost layer of the final molecule, and remain attached throughout the synthetic process; their purpose is to guarantee that propagation of dendrimer growth can take place in a controlled fashion by preventing unwanted side reactions. In forming each new layer, the number of AB couplings is restricted to two, in sharp contrast to the divergent synthetic approach, which involves exponentially more couplings per layer. Incomplete reaction products (single addition adduct, unreacted starting materials) will have a markedly different molecular weight from the desired product, especially for higher-generation compounds, making purification more straightforward. The reactive focal group A must be terminated onto a final acceptor at some point during the synthetic process; until then, each compound can only be considered a dendron and not a full dendrimer (see page for disambiguation). An advantage to synthesizing dendrons with focal group A as a chemical handle is the ability to attach multiple equivalents of the dendron to a polyfunctional core molecule; changing the core element does not require rebuilding the entire dendrimer. In the case of PAMAM, the focal points of convergently synthesized fragments have been used to create unsymmetrical dendrimers as well as dendrimers with various core functionalization. Since each successive generation of dendron becomes bulkier, with final attachment to the dendrimer core being the most prohibitive step of all, steric constraints can severely impact yield. Toxicity in vitro It has been established that cationic macromolecules in general destabilize the cell membrane, which can lead to lysis and cell death. The common conclusion present in current work echoes this observation: increasing dendrimer molecular weight and surface charge (both being generation-dependent) increases their cytotoxic behavior. 
Initial studies on PAMAM toxicity showed that PAMAM was less toxic (in some cases, much less so) than related dendrimers, exhibiting minimal cytotoxicity across multiple toxicity screens, including tests of metabolic activity (MTT assay), cell breakdown (LDH assay), and nucleus morphology (DAPI staining). However, in other cell lines, the MTT assay and several other assays revealed some cytotoxicity. These disparate observations could be due to differences in sensitivity of the various cell lines used in each study to PAMAM; although cytotoxicity for PAMAM varies among cell lines, they remain less toxic than other dendrimer families overall. More recently, a series of studies by Mukherjee et al. have shed some light on the mechanism of PAMAM cytotoxicity, providing evidence that the dendrimers break free of their encapsulating membrane (endosome) after being absorbed by the cell, causing harm to the cell's mitochondria and eventually leading to cell death. Further elucidation of the mechanism of PAMAM cytotoxicity would help resolve the dispute as to precisely how toxic the dendrimers are. In relation to neuronal toxicity, fourth generation PAMAM has been shown to break down calcium transients, altering neurotransmitter vesicle dynamics and synaptic transmission. All of the above can be prevented by replacing the surface amines with folate or polyethylene glycol. It has also been shown that PAMAM dendrimers cause rupturing of red blood cells, or hemolysis. Thus, if PAMAM dendrimers are to be considered in biological applications that involve dendrimers or dendrimer complexes traveling through the bloodstream, the concentration and generation number of unmodified PAMAM in the bloodstream should be taken into account. in vivo To date, few in-depth studies on the in vivo behavior of PAMAM dendrimers have been carried out. This could be in part due to the diverse behavior of PAMAMs depending on surface modification (see below), which make characterization of their in vivo properties largely case-dependent. Nonetheless, the fate and transport of unmodified PAMAM dendrimers is an important case study as any biological applications could involve unmodified PAMAM as a metabolic byproduct. In the only major systematic study of in vivo PAMAM behavior, injections of high levels of bare PAMAMs over extended periods of time in mice showed no evidence of toxicity up through G5 PAMAM, and for G3-G7 PAMAM, low immunogenicity was observed. These systemic-level observations seem to align with the observation that PAMAM dendrimers are not extremely cytotoxic overall; however, more in-depth studies of the pharmacokinetics and biodistribution of PAMAM are required before a move toward in vivo applications can be made. Surface modification One unique property of dendrimers such as PAMAM is the high density of surface functional groups, which allow many alterations to be made to the surface of each dendrimer molecule. In putative PAMAM dendrimers, the surface is rife with primary amines, with higher generations expressing exponentially greater densities of amino groups. Although the potential to attach many things to each dendrimer is one of their greatest advantages, the presence of highly localized positive charges can be toxic to cells. Surface modification via attachment of acetyl and lauroyl groups help mask these positive charges, attenuating cytotoxicity and increasing permeability to cells. Thus, these types of modifications are especially beneficial for biological applications. 
Secondary and tertiary amino surface groups are also found to be less toxic than primary amino surface groups, suggesting it is charge shielding which has major bearing on cytotoxicity and not some secondary effect from a particular functional group. Furthermore, other studies point to a delicate balance in charge which must be achieved to obtain minimal cytotoxicity. Hydrophobic interactions can also cause cell lysis, and PAMAM dendrimers whose surfaces are saturated with nonpolar modifications such as lipids or polyethylene glycol (PEG) suffer from higher cytotoxicity than their partially substituted analogues. PAMAM dendrimers with nonpolar internal components have also been shown to induce hemolysis. Applications Applications involving dendrimers in general take advantage of either stuffing cargo into the interior of the dendrimer (sometimes referred to as the "dendritic box"), or attaching cargo onto the dendrimer surface. PAMAM dendrimer applications have generally focused on surface modification, taking advantage of both electrostatic and covalent methods for binding cargo. Currently, major areas of study using PAMAM dendrimers and their functionalized derivatives involve drug delivery and gene delivery. Drug delivery Since PAMAM dendrimers have shown penetration capability to a wide range of cell lines, simple PAMAM-drug complexes would affect a broad spectrum of cells upon introduction to a living system. Thus, additional targeting ligands are required for the selective penetration of cell types. For example, PAMAM derivatized with folic acid is preferentially taken up by cancer cells, which are known to overexpress the folate receptor on their surfaces. Attaching additional treatment methods along with the folic acid, such as boron isotopes, cisplatin, and methotrexate have proven quite effective. In the future, as synthetic control over dendrimer surface chemistry becomes more robust, PAMAM and other dendrimer families may rise to prominence alongside other major approaches to targeted cancer therapy. In a study of folic acid functionalized PAMAM, methotrexate was combined either as an inclusion complex within the dendrimer or as a covalent surface attachment. In the case of the inclusion complex, the drug was released from the dendrimer interior almost immediately when subjected to biological conditions and acted similarly to the free drug. The surface attachment approach yielded stable, soluble complexes which were able to selectively target cancer cells and did not prematurely release their cargo. Drug release in the case of the inclusion complex could be explained by the protonation of surface and interior amines under biological conditions, leading to unpacking of the dendrimer conformation and consequent release of the inner cargo. A similar phenomenon was observed with complexes of PAMAM and cisplatin. PAMAM dendrimers have also demonstrated intrinsic drug properties. One quite notable example is the ability for PAMAM dendrimers to remove prion protein aggregates, the deadly protein aggregates responsible for bovine spongiform encephalopathy ("mad cow disease") and Creutzfeldt–Jakob disease in humans. The solubilization of prions is attributed to the polycationic and dendrimeric nature of the PAMAMs, with higher generation (>G3) dendrimers being the most efficient; hydroxy-terminated PAMAMs as well as linear polymers showed little to no effect. 
Since there are no other known compounds capable of dissolving prions which have already aggregated, PAMAM dendrimers have offered a bit of reprieve in the study of such fatal diseases, and may offer additional insight into the mechanism of prion formation. Gene therapy The discovery that mediating positive charge on PAMAM dendrimer surfaces decreases their cytotoxicity has interesting implications for DNA transfection applications. Because the cell membrane has a negatively charged exterior, and the DNA phosphate backbone is also negatively charged, the transfection of free DNA is not very efficient simply due to charge repulsion. However, it would be reasonable to expect charged interactions between the anionic phosphate backbone of DNA and the amino-terminated surface groups of PAMAM dendrimers, which are positively ionized under physiological conditions. This could result in a PAMAM-DNA complex, which would make DNA transfection more efficient due to neutralization of the charges on both elements, while the cytotoxicity of the PAMAM dendrimer would also be reduced. Indeed, several reports have confirmed PAMAM dendrimers as effective DNA transfection agents. When the charge balance between DNA phosphates and PAMAM surface amines is slightly positive, the maximum transfection efficiency is obtained; this finding supports the idea that the complex binds to the cell surface via charge interactions. A striking observation is that "activation" of PAMAM by partial degradation via hydrolysis improves transfection efficiency by 2-3 orders of magnitude, providing further evidence supporting the existence of an electrostatically coupled complex. The fragmentation of some branches of the dendrimer is thought to loosen up the overall structure (fewer amide bonds and space constraints), which would theoretically result in better contact between the dendrimer and DNA substrate because the dendrimer is not forced into a rigid spherical conformation due to sterics. This in turn results in more compact DNA complexes which are more easily endocytosed. After endocytosis, the complexes are subjected to the acidic conditions of the cellular endosome. The PAMAM dendrimers act as a buffer in this environment, soaking up the excess protons with multitudes of amine residues, leading to the inhibition of pH-dependent endosomal nuclease activity and thus protecting the cargo DNA. The tertiary amines on the interior of the dendrimer can also participate in the buffering activity, causing the molecule to puff up; additionally, as the PAMAMs take on more and more positive charge, fewer of them are required for the optimal PAMAM-DNA interaction, and free dendrimers are released from the complex. Dendrimer release and swelling can eventually lyse the endosome, resulting in release of the cargo DNA. The activated PAMAM dendrimers have less spatial barrier to interior amine protonation, which is thought to be a major source of their advantage over non-activated PAMAM. In the context of existing approaches to gene transfer, PAMAM dendrimers hold a strong position relative to major classical technologies such as electroporation, microinjection, and viral methods. Electroporation, which involves pulsing electricity through cells to create holes in the membrane through which DNA can enter, has obvious cytotoxic effects and is not appropriate for in vivo applications. 
On the other hand, microinjection, the use of fine needles to physically inject genetic material into the cell nucleus, offers more control but is a high-skill, meticulous task in which a relatively low number of cells can be transfected. Although viral vectors can offer highly specific, high-efficiency transfection, the generation of such viruses is costly and time-consuming; furthermore, the inherent viral nature of the gene transfer often triggers an immune response, thus limiting in vivo applications. In fact, many modern transfection technologies are based on artificially assembled liposomes (both liposomes and PAMAMs are positively charged macromolecules). Since PAMAM dendrimers and their complexes with DNA exhibit low cytotoxicity, higher transfection efficiencies than liposome-based methods, and are effective across a broad range of cell lines, they have taken an important place in modern gene therapy methodologies. The biotechnology company Qiagen currently offers two DNA transfection product lines (SuperFect and PolyFect) based on activated PAMAM dendrimer technology. Much work lies ahead before activated PAMAM dendrimers can be used as in vivo gene therapy agents. Although the dendrimers have proved to be highly efficient and non-toxic in vitro, the stability, behavior, and transport of the transfection complex in biological systems has yet to be characterized and optimized. As with drug delivery applications, specific targeting of the transfection complex is ideal and must be explored as well. See also Amidoamine References Bibliography Dendrimers Materials science
Poly(amidoamine)
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,890
[ "nan", "Applied and interdisciplinary physics", "Materials science", "Dendrimers" ]
65,507,105
https://en.wikipedia.org/wiki/Wielandt%20theorem
In mathematics, the Wielandt theorem characterizes the gamma function, defined for all complex numbers z for which Re(z) > 0 by Γ(z) = ∫₀^∞ t^(z−1) e^(−t) dt, as the only function f defined on the half-plane H = {z ∈ ℂ : Re(z) > 0} such that: f is holomorphic on H; f(1) = 1; f(z + 1) = z f(z) for all z in H; and f is bounded on the strip {z ∈ ℂ : 1 ≤ Re(z) ≤ 2}. This theorem is named after the mathematician Helmut Wielandt. See also Bohr–Mollerup theorem Hadamard's gamma function References Gamma and related functions Theorems in complex analysis
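A minimal numerical sanity check of the characterizing properties is sketched below. It assumes Python's standard library; since math.gamma accepts only real arguments, the normalization and the functional equation are checked on the positive real axis rather than on the full half-plane, and the sample points are arbitrary choices.

```python
import math

# Normalization: Gamma(1) = 1
assert math.isclose(math.gamma(1.0), 1.0)

# Functional equation Gamma(x + 1) = x * Gamma(x), checked at a few real points
for x in [0.5, 1.3, 2.7, 5.0]:
    assert math.isclose(math.gamma(x + 1), x * math.gamma(x), rel_tol=1e-12)

print("Gamma(1) = 1 and Gamma(x + 1) = x * Gamma(x) hold at the sampled points")
```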
Wielandt theorem
[ "Mathematics" ]
93
[ "Theorems in mathematical analysis", "Theorems in complex analysis" ]
65,513,520
https://en.wikipedia.org/wiki/Phiala%20E.%20Shanahan
Phiala Elisabeth Shanahan is an Australian theoretical physicist who lives and works in the United States. She is known for her work on the structure and interactions of hadrons and nuclei and her innovative use of machine learning techniques in lattice quantum field theory calculations. Education Shanahan attended The Wilderness School in Medindie, a suburb of Adelaide, South Australia. While there, she received a 2007 Australian Student Prize. She received her BSc from the University of Adelaide in 2012 and her PhD from the same institution in 2015. Her PhD advisors were Anthony William Thomas and Ross D. Young. In her doctoral thesis, "Strangeness and Charge Symmetry Violation in Nucleon Structure," Shanahan studied the role of elementary particles called strange quarks and charge symmetry breaking in the structure of protons and neutrons in atomic nuclei using lattice quantum chromodynamics and effective field theory techniques. Her work improved understanding of the role of strange quarks in protons and atomic nuclei, which refines interpretations of experiments that seek to understand dark matter through direct detection techniques. Shanahan's work at the University of Adelaide and her thesis earned her the American Physical Society's 2017 Dissertation Award in Hadronic Physics, the 2016 Bragg Gold Medal for the best PhD completion in physics in Australia, and the University of Adelaide's 2016 Postgraduate Alumni University Medal. Career After completing her PhD, Shanahan became a postdoctoral associate at the Massachusetts Institute of Technology from 2015 to 2017. During this time, she studied the role of force-carrying elementary particles called gluons in the structure of subatomic particles called hadrons. She also used lattice quantum chromodynamics techniques to examine the structures of atomic nuclei. In 2017, Forbes featured Shanahan in its "30 Under 30: Science" list for the impact of her work on the understanding of dark matter and physics beyond the Standard Model. From 2017 to 2018, she held a joint appointment as assistant professor at the College of William & Mary and senior staff scientist at the Thomas Jefferson National Accelerator Facility. Shanahan became assistant professor in the Center for Theoretical Physics at the Massachusetts Institute of Technology in July 2018, which at that time made her the youngest assistant professor of physics there. Shanahan was also a Simons Emmy Noether Fellow at the Perimeter Institute for Theoretical Physics during the fall 2018 semester. This fellowship supports early- and mid-career women physicists. Shanahan's current research includes seeking to understand how the structures and interactions of hadrons and atomic nuclei can be calculated from the fundamental principles of the Standard Model of physics, the role of gluons in the structures of hadrons and atomic nuclei, and how supercomputers and machine learning may be used to perform low-energy quantum chromodynamics calculations. Some of the predictions she is currently developing may be testable in the future using the Thomas Jefferson National Accelerator Facility's planned electron-ion collider. 
Shanahan received the American Physical Society's 2021 Maria Goeppert Mayer Award, which recognizes outstanding achievement by early-career women physicists, for her "key insights into the structure and interactions of hadrons and nuclei using numerical and analytical methods and pioneering the use of machine learning techniques in lattice quantum field theory calculations in particle and nuclear physics." Honors and awards 2007 Australian Student Prize 2016 Bragg Gold Medal for the best PhD completion in physics in Australia University of Adelaide's 2016 Postgraduate Alumni University Medal American Physical Society's 2017 Dissertation Award in Hadronic Physics Featured in Forbes magazine's 2017 "30 Under 30: Science" list. 2018 National Science Foundation CAREER Award for the project "Quark and Gluon Structure of Nucleons and Nuclei" 2020 United States Department of Energy Early Career Award for the project "The QCD Structure of Nucleons and Light Nuclei." The Lattice International Conference's 2020 Kenneth G. Wilson Award for Excellence in Lattice Field Theory Featured in Science News's 2020 "10 Scientists to Watch" list American Physical Society's 2021 Maria Goeppert Mayer Award References External links Phiala Shanahan's website at MIT Oral history interview transcript for Phiala Shanahan on 21 September 2020, American Institute of Physics, Niels Bohr Library & Archives Interview with Phiala Shanahan by Robyn Williams on The Science Show radio program on March 4, 2017 (contains audio and transcript) Public lecture "The Building Blocks of the Universe" by Phiala Shanahan at The Perimeter Institute on November 7, 2018 (contains video) Phiala Shanahan's author page at INSPIRE-HEP Shanahan, Phiala Shanahan, Phiala Shanahan, Phiala Shanahan, Phiala Shanahan, Phiala Shanahan, Phiala Scientists from Adelaide Particle physicists Australian women physicists Year of birth missing (living people)
Phiala E. Shanahan
[ "Physics" ]
993
[ "Theoretical physics", "Theoretical physicists", "Particle physics", "Particle physicists" ]
48,934,114
https://en.wikipedia.org/wiki/Angle%20of%20incidence%20%28aerodynamics%29
On fixed-wing aircraft, the angle of incidence (sometimes referred to as the mounting angle or setting angle) is the angle between the chord line of the wing where the wing is mounted to the fuselage, and a reference axis along the fuselage (often the direction of minimum drag, or where applicable, the longitudinal axis). The angle of incidence is fixed in the design of the aircraft, and with rare exceptions, cannot be varied in flight. The term can also be applied to horizontal surfaces in general (such as canards or horizontal stabilizers) for the angle they make relative the longitudinal axis of the fuselage. The figure to the right shows a side view of an airplane. The extended chord line of the wing root (red line) makes an angle with the longitudinal axis (roll axis) of the aircraft (blue line). Wings are typically mounted at a small positive angle of incidence, to allow the fuselage to have a low angle with the airflow in cruising flight. Angles of incidence of about 6° are common on most general aviation designs. Other terms for angle of incidence in this context are rigging angle and rigger's angle of incidence. The angle of incidence should not be confused with the angle of attack, which is the angle the wing chord presents to the airflow in flight. However some ambiguity in this terminology exists, as some engineering texts that focus solely on the study of airfoils and their medium may use either term when referring to angle of attack. On rotary–wing aircraft, the AoA (Angle of Attack) is the angle between the airfoil chord line and resultant relative wind. AoA is an aerodynamic angle. It can change with no change in the AoI (Angle of Incidence). Several factors may change the rotor blade AoA. Pilots control some of those factors; others occur automatically due to the rotor system design. Pilots adjust AoA through normal control manipulation; however, even with no pilot input AoA will change as an integral part of travel of the rotor blade through the rotor-disc. This continuous process of change accommodates rotary-wing flight. Pilots have little control over blade flapping and flexing, gusty wind, and/or turbulent air conditions. AoA is one of the primary factors determining amount of lift and drag produced by an airfoil. Notes Aerodynamics Aircraft wing design Angle
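To illustrate how the fixed mounting angle and the in-flight angle of attack relate, the sketch below (an illustrative assumption, not taken from the article) treats the wing angle of attack as the fuselage angle of attack plus the angle of incidence, with all angles in degrees and hypothetical function names; the 6 degree value echoes the typical general-aviation incidence mentioned above.

```python
def wing_angle_of_attack(fuselage_aoa_deg: float, incidence_deg: float) -> float:
    """Wing-chord angle of attack, assuming the wing chord is rotated nose-up
    from the fuselage reference axis by the fixed angle of incidence."""
    return fuselage_aoa_deg + incidence_deg

# Example: fuselage at 2 degrees angle of attack, wing mounted at 6 degrees incidence
print(wing_angle_of_attack(2.0, 6.0))  # -> 8.0 degrees at the wing chord
```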
Angle of incidence (aerodynamics)
[ "Physics", "Chemistry", "Engineering" ]
476
[ "Geometric measurement", "Scalar physical quantities", "Physical quantities", "Aerodynamics", "Aerospace engineering", "Wikipedia categories named after physical quantities", "Angle", "Fluid dynamics" ]
48,934,672
https://en.wikipedia.org/wiki/TOG%20superfamily
The transporter-opsin-G protein-coupled receptor (TOG) superfamily is a protein superfamily of integral membrane proteins, usually of 7 or 8 transmembrane alpha-helical segments (TMSs). It includes (1) ion-translocating microbial rhodopsins and (2) G protein-coupled receptors (GPCRs), (3) Sweet sugar transporters, (4) nicotinamide ribonucleoside uptake permeases (PnuC; TC# 4.B.1), (5) 4-toluene sulfonate uptake permeases (TSUP); TC# 2.A.102), (6) Ni2+–Co2+ transporters (NiCoT); TC# 2.A.52), (7) organic solute transporters (OST); TC# 2.A.82), (8) phosphate:Na+ symporters (PNaS); TC# 2.A.58) and (9) lysosomal cystine transporters (LCT); TC# 2.A.43). Families Currently recognized families within the TOG Superfamily (with TC numbers in blue) include: 1.A.14 - The Testis-enhanced Gene Transfer (TEGT) Family 1.A.26 - The Mg2+ Transporter-E (MgtE) Family 1.A.76 - The Magnesium Transporter1 (MagT1) Family 2.A.43 - The Lysosomal Cystine Transporter (LCT) Family 2.A.52 - The Ni2+-Co2+ Transporter (NiCoT) Family 2.A.58 - The Phosphate:Na+ Symporter (PNaS) Family 2.A.82 - The Organic Solute Transporter (OST) Family 2.A.102 - The 4-Toluene Sulfonate Uptake Permease (TSUP) Family 2.A.123 - The Sweet; PQ-loop; Saliva; MtN3 (Sweet) Family 3.E.1 - The Ion-translocating Microbial Rhodopsin (MR) Family 4.B.1 - The Nicotinamide Ribonucleoside (NR) Uptake Permease (PnuC) Family 9.A.14 - The G-protein-coupled receptor (GPCR) Family Structures A couple of the 3-D structures available for members of the following families include: SWEET family: MR family: , - high resolution structures PnuC family: GPCR family: See also Solute carrier family Transporter Classification Database References Further reading Transmembrane transporters Protein superfamilies
TOG superfamily
[ "Biology" ]
581
[ "Protein superfamilies", "Protein classification" ]
48,938,687
https://en.wikipedia.org/wiki/Absement
In kinematics, absement (or absition) is a measure of sustained displacement of an object from its initial position, i.e. a measure of how far away and for how long. The word absement is a portmanteau of the words absence and displacement. Similarly, its synonym absition is a portmanteau of the words absence and position. Absement changes as an object remains displaced and stays constant as the object resides at the initial position. It is the first time-integral of the displacement (i.e. absement is the area under a displacement vs. time graph), so the displacement is the rate of change (first time-derivative) of the absement. The dimension of absement is length multiplied by time. Its SI unit is meter second (m·s), which corresponds to an object having been displaced by 1 meter for 1 second. This is not to be confused with a meter per second (m/s), a unit of velocity, the time-derivative of position. For example, opening the gate of a gate valve (of rectangular cross section) by 1 mm for 10 seconds yields the same absement of 10 mm·s as opening it by 5 mm for 2 seconds. The amount of water having flowed through it is linearly proportional to the absement of the gate, so it is also the same in both cases. Occurrence in nature Whenever the rate of change ′ of a quantity is proportional to the displacement of an object, the quantity is a linear function of the object's absement. For example, when the fuel flow rate is proportional to the position of the throttle lever, then the total amount of fuel consumed is proportional to the lever's absement. The first published paper on the topic of absement introduced and motivated it as a way to study flow-based musical instruments, such as the hydraulophone, to model empirical observations of some hydraulophones in which obstruction of a water jet for a longer period of time resulted in a buildup in sound level, as water accumulates in a sounding mechanism (reservoir), up to a certain maximum filling point beyond which the sound level reached a maximum, or fell off (along with a slow decay when a water jet was unblocked). Absement has also been used to model artificial muscles, as well as for real muscle interaction in a physical fitness context. Absement has also been used to model human posture. As the displacement can be seen as a mechanical analogue of electric charge, the absement can be seen as a mechanical analogue of the time-integrated charge, a quantity useful for modelling some types of memory elements. Applications In addition to modeling fluid flow and for Lagrangian modeling of electric circuits, absement is used in physical fitness and kinesiology to model muscle bandwidth, and as a new form of physical fitness training. In this context, it gives rise to a new quantity called actergy, which is to energy as energy is to power. Actergy has the same units as action (joule-seconds) but is the time-integral of total energy (time-integral of the Hamiltonian rather than time-integral of the Lagrangian). Just as displacement and its derivatives form kinematics, so do displacement and its integrals form "integral kinematics". Fluid flow in a throttle: Relation to PID controllers PID controllers are controllers that work on a signal that is proportional to a physical quantity (e.g. 
displacement, proportional to position) and its integrals and derivatives, thus defining PID in terms of the integrals and derivatives of the position of a control element, in the sense of Bratland. In Bratland's (2014) example of such a PID controller, P corresponds to position, I to absement, and D to velocity. Strain absement Strain absement is the time-integral of strain, and is used extensively in mechanical systems and in mem-spring models, where absement allows mem-spring models to display hysteretic response in great abundance. Anglement Absement originally arose in situations involving valves and fluid flow, in which a valve was opened by a long, T-shaped handle that varied in angle rather than position. The time-integral of angle is called "anglement"; it is approximately equal (or proportional) to absement for small angles, because the sine of an angle is approximately equal to the angle itself for small angles. Phase space: Absement and momentement As a conjugate variable for absement, the time-integral of momentum, known as momentement, has been proposed. This is consistent with Jeltsema's 2012 treatment with charge and flux as the base units rather than current and voltage. References External links Motion (physics) Vector physical quantities
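To make the gate-valve example concrete, the following sketch (assuming Python; the sampling step and function name are illustrative choices) computes absement as the time-integral of displacement with a simple rectangle sum, reproducing the article's claim that 1 mm for 10 s and 5 mm for 2 s give the same absement.

```python
def absement(displacement, dt):
    """Absement as the time-integral of displacement,
    approximated by a rectangle (Riemann) sum."""
    return sum(x * dt for x in displacement)

dt = 0.01  # seconds per sample

case_a = [1.0] * int(10 / dt)  # gate held open 1 mm for 10 s (displacement in mm)
case_b = [5.0] * int(2 / dt)   # gate held open 5 mm for 2 s

print(absement(case_a, dt))  # ~10.0 mm*s
print(absement(case_b, dt))  # ~10.0 mm*s -> same absement, as stated in the article
```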
Absement
[ "Physics", "Mathematics" ]
986
[ "Physical phenomena", "Physical quantities", "Quantity", "Motion (physics)", "Mechanics", "Vector physical quantities", "Space", "Spacetime" ]
52,820,398
https://en.wikipedia.org/wiki/Hurewicz%20space
In mathematics, a Hurewicz space is a topological space that satisfies a certain basic selection principle that generalizes σ-compactness. A Hurewicz space is a space in which for every sequence of open covers of the space there are finite sets such that every point of the space belongs to all but finitely many sets . History In 1926, Witold Hurewicz introduced the above property of topological spaces that is formally stronger than the Menger property. He didn't know whether Menger's conjecture is true, and whether his property is strictly stronger than the Menger property, but he conjectured that in the class of metric spaces his property is equivalent to -compactness. Hurewicz's conjecture Hurewicz conjectured that in ZFC every Hurewicz metric space is σ-compact. Just, Miller, Scheepers, and Szeptycki proved that Hurewicz's conjecture is false, by showing that there is, in ZFC, a set of real numbers that is Menger but not σ-compact. Their proof was dichotomic, and the set witnessing the failure of the conjecture heavily depends on whether a certain (undecidable) axiom holds or not. Bartoszyński and Shelah (see also Tsaban's solution based on their work ) gave a uniform ZFC example of a Hurewicz subset of the real line that is not σ-compact. Hurewicz's problem Hurewicz asked whether in ZFC his property is strictly stronger than the Menger property. In 2002, Chaber and Pol in unpublished note, using dichotomy proof, showed that there is a Hurewicz subset of the real line that is not Menger. In 2008, Tsaban and Zdomskyy gave a uniform example of a Hurewicz subset of the real line that is Menger but not Hurewicz. Characterizations Combinatorial characterization For subsets of the real line, the Hurewicz property can be characterized using continuous functions into the Baire space . For functions , write if for all but finitely many natural numbers . A subset of is bounded if there is a function such that for all functions . A subset of is unbounded if it is not bounded. Hurewicz proved that a subset of the real line is Hurewicz iff every continuous image of that space into the Baire space is unbounded. In particular, every subset of the real line of cardinality less than the bounding number is Hurewicz. Topological game characterization Let be a topological space. The Hurewicz game played on is a game with two players Alice and Bob. 1st round: Alice chooses an open cover of . Bob chooses a finite set . 2nd round: Alice chooses an open cover of . Bob chooses a finite set . etc. If every point of the space belongs to all but finitely many sets , then Bob wins the Hurewicz game. Otherwise, Alice wins. A player has a winning strategy if he knows how to play in order to win the game (formally, a winning strategy is a function). A topological space is Hurewicz iff Alice has no winning strategy in the Hurewicz game played on this space. -neighborhood characterization A Tychonoff space is Hurewicz iff for every compact space containing the space , and a subset G of containing the space , there is a -compact set with . Properties Every compact, and even σ-compact, space is Hurewicz. Every Hurewicz space is a Menger space, and thus it is a Lindelöf space Continuous image of a Hurewicz space is Hurewicz The Hurewicz property is closed under taking subsets Hurewicz's property characterizes filters whose Mathias forcing notion does not add unbounded functions. References Properties of topological spaces Topology
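Because the displayed formulas did not survive extraction, the LaTeX fragment below restates the Hurewicz selection property and the eventual-dominance order used in the combinatorial characterization. It follows the standard formulation from the selection-principles literature and is not reconstructed verbatim from this article.

```latex
% Hurewicz property: finite subfamilies whose unions eventually cover every point.
A space $X$ is Hurewicz if for every sequence
$\langle \mathcal{U}_n : n \in \mathbb{N} \rangle$ of open covers of $X$
there are finite sets $\mathcal{F}_n \subseteq \mathcal{U}_n$ such that every
$x \in X$ belongs to $\bigcup \mathcal{F}_n$ for all but finitely many $n$.

% Eventual dominance on the Baire space $\mathbb{N}^{\mathbb{N}}$:
f \le^{*} g \iff f(n) \le g(n) \text{ for all but finitely many } n.

% A set $A \subseteq \mathbb{N}^{\mathbb{N}}$ is bounded if some $g$ satisfies
% $f \le^{*} g$ for every $f \in A$; the least cardinality of an unbounded set
% is the bounding number $\mathfrak{b}$.
```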
Hurewicz space
[ "Physics", "Mathematics" ]
793
[ "Properties of topological spaces", "Space (mathematics)", "Topological spaces", "Topology", "Space", "Geometry", "Spacetime" ]
52,821,211
https://en.wikipedia.org/wiki/Phylogenetic%20inference%20using%20transcriptomic%20data
In molecular phylogenetics, relationships among individuals are determined using character traits, such as DNA, RNA or protein, which may be obtained using a variety of sequencing technologies. High-throughput next-generation sequencing has become a popular technique in transcriptomics, which represent a snapshot of gene expression. In eukaryotes, making phylogenetic inferences using RNA is complicated by alternative splicing, which produces multiple transcripts from a single gene. As such, a variety of approaches may be used to improve phylogenetic inference using transcriptomic data obtained from RNA-Seq and processed using computational phylogenetics. Sequence acquisition There have been several transcriptomics technologies used to gather sequence information on transcriptomes. However the most widely used is RNA-Seq. RNA-Seq RNA reads may be obtained using a variety of RNA-seq methods. Public databases There are a number of public databases that contain freely available RNA-Seq data. Assembly Sequence assembly RNA-Seq data may be directly assembled into transcripts using sequence assembly. Two main categories of sequence assembly are often distinguished: de novo transcriptome assembly - especially important when a reference genome is not available for a given species. Genome-guided assembly (sometimes mapping or reference-guided assembly) - is capable of using a pre-existing reference to guide the assembly of transcripts Both methods attempt to generate biologically representative isoform-level constructs from RNA-seq data and generally attempt to associate isoforms with a gene-level construct. However, proper identification of gene-level constructs may be complicated by recent duplications, paralogs, alternative splicing or gene fusions. These complications may also cause downstream issues during ortholog inference. When selecting or generating sequence data, it is also vital to consider the tissue type, developmental stage and environmental conditions of the organisms. Since the transcriptome represents a snapshot of gene expression, minor changes to these conditions may significantly affect which transcripts are expressed. This may detrimentally affect downstream ortholog detection. Public databases RNA may also be acquired from public databases, such as GenBank, RefSeq, 1000 Plants (1KP) and 1KITE. Public databases potentially offer curated sequences which can improve inference quality and avoid the computational overhead associated with sequence assembly. Inferring gene pair orthology/paralogy Approaches Orthology or paralogy inference requires an assessment of sequence homology, usually via sequence alignment. Phylogenetic analyses and sequence alignment are often considered jointly, as phylogenetic analyses using DNA or RNA require sequence alignment and alignments themselves often represent some hypothesis of homology. As proper ortholog identification is pivotal to phylogenetic analyses, there are a variety of methods available to infer orthologs and paralogs. These methods are generally distinguished as either graph-based algorithms or tree-based algorithms. Some examples of graph-based methods include InParanoid, MultiParanoid, OrthoMCL, HomoloGene and OMA. Tree-based algorithms include programs such as OrthologID or RIO. A variety of BLAST methods are often used to detect orthologs between species as a part of graph-based algorithms, such as MegaBLAST, BLASTALL, or other forms of all-versus-all BLAST and may be nucleotide- or protein-based alignments. 
RevTrans will even use protein data to inform DNA alignments, which can be beneficial for resolving more distant phylogenetic relationships. These approaches often assume that best-reciprocal-hits passing some threshold metric(s), such as identity, E-value, or percent alignment, represent orthologs and may be confounded by incomplete lineage sorting. Databases and tools It is important to note that orthology relationships in public databases typically represent gene-level orthology and do not provide information concerning conserved alternative splice variants. Databases that contain and/or detect orthologous relationships include: DIOPT Ensembl Compara GreenPhylDB HaMStR HomoloGene InParanoid MultiParanoid OMA OrthoDB OrthoFinder OrthologID OrthoMCL OrtholugeDB PhylomeDB TreeFam eggNOG metaPhOrs Multiple sequence alignment As eukaryotic transcription is a complex process by which multiple transcripts may be generated from a single gene through alternative splicing with variable expression, the utilization of RNA is more complicated than DNA. However, transcriptomes are cheaper to sequence than complete genomes and may be obtained without the use of a pre-existing reference genome. It is not uncommon to translate RNA sequence into protein sequence when using transcriptomic data, especially when analyzing highly diverged taxa. This is an intuitive step as many (but not all) transcripts are expected to code for protein isoforms. Potential benefits include the reduction of mutational biases and a reduced number of characters, which may speed analyses. However, this reduction in characters may also result in the loss of potentially informative characters. There are a number of tools available for multiple sequence alignment. All of which possess their own strengths and weaknesses and may be specialized for distinct sequence types (DNA, RNA or protein). As such, a splice-aware aligner may be ideal for aligning RNA sequences, whereas an aligner that considers protein structure or residue substitution rates may be preferable for translated RNA sequence data. Opportunities and limitations Using RNA for phylogenetic analysis comes with its own unique set of strengths and weaknesses. Advantages large set of characters cost-effective not dependent upon a reference genome Disadvantages expenses of extensive taxon sampling difficulty in identification of full-length, single-copy transcripts and orthologs potential misassembly of transcripts (especially when duplicates are present) missing data as a product of the transcriptome representing a snapshot of expression or incomplete lineage sorting See also BLAST Coding region Computational phylogenetics De novo transcriptome assembly Exome Exome sequencing Expressed sequence tag Gene expression Homology List of phylogenetics software Phylogenetics Phylogenetic tree RNA RNA-Seq Sequence alignment Synonymous substitution Systematics Transcriptome UniGene References External links 1KITE 1000 Plants (1KP) DIOPT eggNOG Ensembl Compara GenBank GreenPhylDB HaMStR HomoloGene InParanoid MultiParanoid metaPhOrs NCBI_BLAST OMA OrthoDB OrthologID OrthoMCL OrtholugeDB PhylomeDB RefSeq RevTrans_2.0 TreeFam Trinity_de_novo_assembler Computational phylogenetics Genetics techniques
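As an illustration of the reciprocal-best-hit idea described above, here is a small sketch assuming Python and tab-separated BLAST output in the default -outfmt 6 layout; the file names and E-value threshold are hypothetical, and real pipelines usually add further filters (alignment coverage, identity) before calling orthologs.

```python
import csv

def best_hits(blast_tab_path, evalue_cutoff=1e-5):
    """Map each query to its best subject (highest bitscore) from tabular BLAST
    output (-outfmt 6 columns: qseqid, sseqid, ..., evalue, bitscore)."""
    best = {}
    with open(blast_tab_path) as handle:
        for row in csv.reader(handle, delimiter="\t"):
            query, subject = row[0], row[1]
            evalue, bitscore = float(row[10]), float(row[11])
            if evalue > evalue_cutoff:
                continue
            if query not in best or bitscore > best[query][1]:
                best[query] = (subject, bitscore)
    return {q: s for q, (s, _) in best.items()}

# Hypothetical all-versus-all searches between two species' protein sets
a_vs_b = best_hits("speciesA_vs_speciesB.tsv")
b_vs_a = best_hits("speciesB_vs_speciesA.tsv")

# Reciprocal best hits are taken as putative one-to-one orthologs
orthologs = [(a, b) for a, b in a_vs_b.items() if b_vs_a.get(b) == a]
print(f"{len(orthologs)} putative ortholog pairs")
```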
Phylogenetic inference using transcriptomic data
[ "Engineering", "Biology" ]
1,350
[ "Genetics techniques", "Computational phylogenetics", "Genetic engineering", "Bioinformatics", "Phylogenetics" ]
52,821,888
https://en.wikipedia.org/wiki/SAFE%20Building%20System
The SAFE Building System, also known as the SAFE Foundation System, is a way to build in flood zones and coastal areas, developed by architect and inventor Greg Henderson and his team at Arx Pax Labs, Inc. It is designed to float buildings, roadways, and utilities in a few feet of water. The self-adjusting floating environment draws from existing technologies used to float concrete bridges and runways such as Washington's SR 520 and Japan's Mega-Float. It also absorbs the shock of earthquakes, allowing buildings and their related communities to remain stable. Arx Pax is working with Republic of Kiribati and Pacific Rising to solve for sustainable development challenges associated with rising sea levels. Arx Pax, the company involved in this technology has proposed building a “floating village” project in north San Jose's Alviso hamlet, deploying a group of pontoons beneath the buildings to protect the development from floods and earthquakes. Originally developed for earthquakes as an alternative to Base Isolation the floating foundation decouples the structure from the earth with a simple patented method consisting of three parts. According to the patent, "Three part foundation systems can include a containment vessel, which constrains a buffer medium to an area above the containment vessel, and a construction platform. A building can be built on the construction platform. In a particular embodiment, during operation, the construction platform and structures built on the construction platform can float on the buffer medium. In an earthquake, a construction platform floating on a buffer medium may experience greatly reduced shear forces. In a flood, a construction platform floating on a buffer medium can be configured to rise as water levels rise to limit flood damage." See also Sustainable development Evergreen Point Floating Bridge Very large floating structure References Civil engineering
SAFE Building System
[ "Engineering" ]
359
[ "Construction", "Civil engineering" ]
52,823,070
https://en.wikipedia.org/wiki/Lovelock%27s%20theorem
Lovelock's theorem of general relativity says that, for a local gravitational action containing only up to second derivatives of the four-dimensional spacetime metric, the only possible equations of motion are the Einstein field equations. The theorem was described by British physicist David Lovelock in 1971. Statement In four-dimensional spacetime, any tensor A^{μν} whose components are functions of the metric tensor g_{μν} and its first and second derivatives (but linear in the second derivatives of g_{μν}), and which is also symmetric and divergence-free, is necessarily of the form A^{μν} = a G^{μν} + b g^{μν}, where a and b are constant numbers and G^{μν} is the Einstein tensor. The only possible second-order Euler–Lagrange expression obtainable in a four-dimensional space from a scalar density of the form ℒ = ℒ(g_{μν}) is E^{μν} = α √(−g) (R^{μν} − ½ g^{μν} R) + λ √(−g) g^{μν}. Consequences Lovelock's theorem means that if we want to modify the Einstein field equations, then we have five options. Add other fields rather than the metric tensor; Use more or fewer than four spacetime dimensions; Add more than second order derivatives of the metric; Non-locality, for example using the inverse d'Alembertian; Emergence – the idea that the field equations don't come from the action. See also Lovelock theory of gravity Vermeil's theorem References General relativity Theorems in general relativity
Lovelock's theorem
[ "Physics", "Mathematics" ]
253
[ "Equations of physics", "Theorems in general relativity", "General relativity", "Theorems in mathematical physics", "Relativity stubs", "Theory of relativity", "Physics theorems" ]
59,041,363
https://en.wikipedia.org/wiki/Zavarzinella
Zavarzinella is an aerobic genus of bacteria from the family of Planctomycetaceae with one known species (Zavarzinella formosa). Zavarzinella formosa has been isolated from Sphagnum peat from West Siberia. See also List of bacterial orders List of bacteria genera References Bacteria genera Monotypic bacteria genera Planctomycetota
Zavarzinella
[ "Biology" ]
78
[ "Bacteria stubs", "Bacteria" ]
59,041,411
https://en.wikipedia.org/wiki/Thermostilla
Thermostilla is a thermophilic genus of bacteria from the family of Planctomycetaceae with one known species (Thermostilla marina). Thermostilla marina has been isolated from a hydrothermal vent at Vulcano Island in Italy. See also List of bacterial orders List of bacteria genera References Bacteria genera Monotypic bacteria genera Planctomycetota
Thermostilla
[ "Biology" ]
82
[ "Bacteria stubs", "Bacteria" ]
59,041,446
https://en.wikipedia.org/wiki/Telmatocola
Telmatocola is a genus of bacteria from the family of Planctomycetaceae with one known species (Telmatocola sphagniphila). Telmatocola sphagniphila has been isolated from Sphagnum peat from the Staroselsky moss in the Tver Region. See also List of bacterial orders List of bacteria genera References Bacteria genera Monotypic bacteria genera Planctomycetota
Telmatocola
[ "Biology" ]
88
[ "Bacteria stubs", "Bacteria" ]
59,041,486
https://en.wikipedia.org/wiki/Schlesneria
Schlesneria is a genus of bacteria from the family of Planctomycetaceae with one known species (Schlesneria paludicola). Schlesneria paludicola has been isolated from sphagnum peat from Bakchar in Russia. See also List of bacterial orders List of bacteria genera References Bacteria genera Monotypic bacteria genera Planctomycetota
Schlesneria
[ "Biology" ]
85
[ "Bacteria stubs", "Bacteria" ]
59,041,656
https://en.wikipedia.org/wiki/Thermogutta
Thermogutta is a thermophilic genus of bacteria from the family of Planctomycetaceae. See also List of bacterial orders List of bacteria genera References Bacteria genera Planctomycetota
Thermogutta
[ "Biology" ]
47
[ "Bacteria stubs", "Bacteria" ]
59,047,031
https://en.wikipedia.org/wiki/Combinatorial%20matrix%20theory
Combinatorial matrix theory is a branch of linear algebra and combinatorics that studies matrices in terms of the patterns of nonzeros and of positive and negative values in their coefficients. Concepts and topics studied within combinatorial matrix theory include: (0,1)-matrix, a matrix whose coefficients are all 0 or 1 Permutation matrix, a (0,1)-matrix with exactly one nonzero in each row and each column The Gale–Ryser theorem, on the existence of (0,1)-matrices with given row and column sums Hadamard matrix, a square matrix of 1 and –1 coefficients with each pair of rows having matching coefficients in exactly half of their columns Alternating sign matrix, a matrix of 0, 1, and –1 coefficients with the nonzeros in each row or column alternating between 1 and –1 and summing to 1 Sparse matrix, a matrix with few nonzero elements, and sparse matrices of special form such as diagonal matrices and band matrices Sylvester's law of inertia, on the invariance of the number of negative diagonal elements of a matrix under changes of basis Researchers in combinatorial matrix theory include Richard A. Brualdi and Pauline van den Driessche. References Linear algebra Combinatorics
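As a concrete illustration of the Gale–Ryser theorem listed above, the following sketch (assuming Python; the function name and the example sequences are hypothetical) tests whether prescribed row and column sums can be realized by some (0,1)-matrix.

```python
def gale_ryser_realizable(row_sums, col_sums):
    """Return True iff a (0,1)-matrix exists with the given row and column sums.
    Applies the Gale–Ryser condition with row sums sorted in non-increasing order."""
    if sum(row_sums) != sum(col_sums):
        return False
    r = sorted(row_sums, reverse=True)
    for k in range(1, len(r) + 1):
        if sum(r[:k]) > sum(min(c, k) for c in col_sums):
            return False
    return True

print(gale_ryser_realizable([3, 2, 1], [2, 2, 1, 1]))  # True, e.g. rows 1110, 1100, 0001
print(gale_ryser_realizable([3, 3], [1, 1, 1]))        # False: totals do not match
```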
Combinatorial matrix theory
[ "Mathematics" ]
263
[ "Discrete mathematics", "Combinatorics", "Combinatorics stubs", "Linear algebra", "Algebra" ]
59,049,263
https://en.wikipedia.org/wiki/Ferlins
Ferlins are an ancient protein family involved in vesicle fusion and membrane trafficking. Ferlins are distinguished by their multiple tandem C2 domains, and sometimes a FerA and a DysF domain. Mutations in ferlins can cause human diseases such as muscular dystrophy and deafness. Abnormalities in expression of myoferlin, a human ferlin protein, is also directly associated with higher mortality rate and tumor recurrence in several types of cancer, including pancreatic, colorectal, breast, cervical, stomach, ovarian, cervical, thyroid, endometrial, and oropharyngeal squamous cell carcinoma. In other animals, ferlin mutations can cause infertility. Ferlins are type II transmembrane proteins (N-terminus on the cytoplasmic side of the membrane) and contain five to seven C2 domains linked in tandem and have a single-pass transmembrane domain located at the C-terminus. The C2 domains are denoted in order from amino-terminus to carboxyl-terminus as C2A to C2G. C2 domains are essentially calcium and phospholipid binding domains, evolved for cell membrane interactions. In fact, many proteins involved in signal transduction, membrane trafficking, and membrane fusion employ C2 domains to target the cell membrane. However, ferlins are unique for containing more C2 domains than any other proteins (between five and seven). FerA and DysF are two intermediate domains that are unique to ferlins. There is less known about FerA and DysF domains, however, mutations of these domains in dysferlin can also lead to muscular dystrophy. As in other mammals, there are six ferlin genes in humans (Fer1L1-Fer1L6). Among them, Fer1L1-Fer1L3 have known disease relevance. Therefore, Fer1L1-Fer1L3 are better characterized compare to Fer1L4-Fer1L6 with unknown function and tissue localization. Fer1L1-Fer1L3 proteins each has a unique name and they correspond to dysferlin, myoferlin, and otoferlin accordingly. Discovery The first member of ferlin protein family, fer-1, was discovered in nematode Caenorhabditis elegans. Fer-1 gene was first described in 1997 by Achanzar and Ward. Fer-1 is required for reproduction in C. elegans and was therefore named Fer-1 because of its involvement in fertility. The name is an abbreviation for “fertilization factor 1”. The nomenclature in other ferlins in humans is Fer1Lx, where x is a number from 1-6, each identifying one of the six Fer1-like ferlins in humans. Evolution Ferlins are ancient proteins and they have been identified in protists and metazoans, and are known to exist in a range of organisms from unicellular eukaryotes to humans, suggesting primordial functions for ferlins. More specifically, DysF domain and the last two C-terminal C2 domains followed by the C-terminal transmembrane domain (C2E-C2F-TM, containing approximately 489 amino acids) show a high degree of conservation. All ferlins contain several C2 domains. However, C2A may be missing in some ferlins. More specifically, from six human ferlins, three of them do not contain C2A domains. Another highly conserved domain is the N-terminal C2-FerI-C2 sequence. FerI is a motif detected by Pfam, however, the function of this conserved motif is currently unknown. Ferlins have been evolved into two groups, DysF-containing and non-DysF ferlins. Most invertebrates possess two ferlin proteins, one from each class. Most vertebrate however, have six ferlin genes, three of which DysF containing and the other three non-DysF ferlins, indicating that vertebrate ferlins are evolved and originated from the two ferlins in early metazoans. 
Both subgroups have been identified in early metazoans, suggesting the fundamental role associated to these proteins. Structure Ferlins are large proteins and currently the full length structure of ferlins is unknown. In order to understand their structural aspects, ferlin domains have been studied individually: C2A domains are calcium and lipid binding domains made from 8 β-strands forming 2 sheets. The loops connecting the sheets form the calcium binding site. The β-sheet structure is conserved among C2 domains, however, the loops may have different features. Depending on the amino acids located at the calcium binding site and the loops, C2 domains can have different specificities for calcium and lipid binding, suggesting that they are evolved to function in different environments. The DysF domain exists as an internal duplication where an inner DysF domain is surrounded by an outer DysF domain. Such structure is a result of gene duplication and both inner and outer DysF domains have adopted the same fold. The structure of DysF is mainly consist of two antiparallel long β-strands. To date, the crystallographic structure of human dysferlin and solution NMR structure of myoferlin DysF have been obtained by Altin Sula et al. and PryankPatel et al. accordingly. Myoferlin and dysferlin DysF domains show 61% sequence identity. A unique feature of DysF domains in both dysferlin and myoferlin is that these domains are held together by arginine/aromatic sidechain (specially tryptophan) stacking. FerA had been predicted using Pfam and SMART and remained uncharacterized both structurally and functionally until recently. It had been determined by secondary structure prediction however, that FerA domain contains several helices. Recently, a model of FerA structure obtained by homology models have been confirmed by fitting the calculated model into the FerA structure obtained by small-angle X-ray scattering (SAXS) experiments. These structural models provided evidence that FerA contains four helices, which fold to form a four-helix bundle. Function Ferlins play roles in vesicle fusion and membrane trafficking. Different ferlins are found in various organs and they play specific roles. Fer-1 is a member of ferlin protein family, and a fertilization factor involved in fusion of vesicles called membraneous organelles with the sperm plasma membrane during spermatogenesis in C. elegans. In C. elegans spermatids are immobile and during sperm maturation mobility is gained after fusion of membraneous organelles with the plasma membrane. At this point, spermatids extend their pseudopod and become mobile. This process is calcium-dependent and a normal progression of this step requires ferlin's involvement. Dysferlin is highly expressed in skeletal muscles, but is also found in heart, placenta, liver, lung, kidney and pancreas. Dysferlin is essential for membrane repair mechanism in muscle cells. Dysferlin in sea stars is 46.9% identical to human dysferlin, and is critical for normal endocytosis during oogenesis and embryogenesis. In humans, dysferlin's primary function is believed to be involvement in muscle membrane repair mechanism. Skeletal muscles experience micro-damages during exercising and daily activities. When muscles are damaged, dysferlin containing vesicles accumulate at the site of injury, and by fusing together and to the membrane, they patch the leakage. 
In dysferlin-null muscles, these vesicles still accumulate at the damage site, but they cannot fuse and therefore, are unable to repair the damaged muscle cells. Otoferlin is another ferlin member in humans and it plays a role in exocytosis of synaptic vesicles at the auditory inner hair cell ribbon synapse. In adult fruit flies, a ferlin member called misfire is expressed in testis and ovaries. Mutations in misfire and Fer-1, ferlins in flies and C. elegans, cause male sterility because of defects in fertilization. Function of ferlin proteins involves employing multiple domains. C2A domains are specialized in lipid binding. The phospholipid interaction is often calcium dependent as C2 domains have evolved to respond to increase in calcium concentration. A sudden increase in calcium concentration is observed in synaptic vesicles or inside muscle cells after membrane damage. Therefore, C2 domains are often referred to as the calcium sensor of C2 domain-containing proteins. The function and mechanism of function of C2 domains is well-characterized, although it may vary between different C2 domains. In general, C2 domains interact with the membrane via electrostatic or hydrophobic interactions. It has been proposed that FerA may be involved in membrane interaction as well. It can in fact interact with neutral or negatively charged phospholipids and the interaction is enhanced in the presence of calcium ions. The molecular mechanism by which FerA interacts with the membrane or calcium ions however, is currently unknown. Disease association The most important disease relevance of ferlins in humans is related to mutations in dysferlin. In humans, disease causing mutations in dysferlin have been identified in all C2 domains, FerA domain, DysF domain, and even linker segments. Lack of functional dysferlin causes a group of muscular dystrophies called dysferlinopathies. Dysferlinopathies include limb-girdle muscular dystrophy (LGMD) 2B, Miyoshi myopathy (MM) and distal myopathy of the anterior tibialis. C2A mutations which affect its calcium binding or lipid binding can often cause muscular dystrophy. Interestingly, dysferlin C2B does not bind calcium, however, mutations in this domain can still cause muscular dystrophy. Some mutations in C2A can disrupt dysferlin interaction with other important proteins involved in membrane repair process (such as MG53) which can also lead to muscular dystrophy. Many mutations in dysferlin occur in DysF domain which often disrupt Arginine/Tryptophan stacks of this domain. This leads to a less stable and possibly unfolded protein which may result in the degradation of the entire dysferlin. Several FerA mutations have been also identified. These mutations have been shown to lower the stability of FerA domains which may explain the pathogenicity of these mutations. Otoferlin has been shown to interact with SNAREs and play a role in a calcium-dependent exocytosis in the hair cells in the inner ear. Mutations in otoferlin can cause mild to profound non-syndromic recessive hearing loss in humans. Currently, there is no association between myoferlin mutations and human diseases. However, it has been shown experimentally that loss of myoferlin results in reduced myoblast fusion and muscle size. There is also a correlation between myoferlin overexpression and several types of cancers such as lung cancer and breast cancer. 
In pancreatic ductal adenocarcinoma (PDAC) myoferlin increases cell proliferation and promotes tumorigenesis and its expression negatively correlates with tumor size. Breast cancer patients with overexpressed myoferlin have a lower survival rate. Although it is not yet clear how myoferlin contributes in cancer pathology in a molecular level, there are scientific evidences that myoferlin overexpression is associated with tumor growth and metastasis. In fact, myoferlin depletion in cancer cell lines can result in reduced tumor size and metastasis rate. References Proteins Vesicles Membrane proteins
Ferlins
[ "Chemistry", "Biology" ]
2,509
[ "Biomolecules by chemical classification", "Protein classification", "Membrane proteins", "Proteins", "Molecular biology" ]
59,050,931
https://en.wikipedia.org/wiki/Inverse%20vulcanization
Inverse vulcanization is a process that produces polysulfide polymers, which also contain some organic linkers. In contrast, sulfur vulcanization produces material that is predominantly organic but has a small percentage of polysulfide crosslinks. Synthesis Like Thiokols and sulfur-vulcanization, inverse vulcanization uses the tendency of sulfur catenate. The polymers produced by inverse vulcanization consist of long sulfur linear chains interspersed with organic linkers. Traditional sulfur vulcanization produces a cross-linked material with short sulfur bridges, down to one or two sulfur atoms. The polymerization process begins with the heating of elemental sulfur above its melting point (115.21 °C), to favor the ring-opening polymerization process (ROP) of the S8 monomer, occurring at 159 °C. As a result, the liquid sulfur is constituted by linear polysulfide chains with diradical ends, which can be easily bridged together with small dienes, such as 1,3-Diisopropylbenzene(DIB), 1,4-diphenylbutadiyne, limonene, divinylbenzene (DVB), dicyclopentadiene, styrene, 4-vinylpyridine, cycloalkene and ethylidene norbornene, or longer organic molecules as polybenzoxazines, squalene and triglyceride. Chemically, the diene carbon-carbon double bond (C=C) of the substitutional group disappears, forming the carbon-sulfur single bond (C-S) which binds together the sulfur linear chains. The advantage of such a polymerization is the absence of a solvent; Sulphur acts as comonomer and solvent. This makes the process highly scalable at the industrial level, and kilogram-scale synthesis of the poly(S-r-DIB) has already been accomplished. Products, characterization and properties Vibrational spectroscopy was performed to investigate the chemical structure of the copolymers, and the presence of the C-S bonds was detected through Infrared or Raman spectroscopies. The high amount of S-S bonds makes the copolymer highly IR-inactive in the near and mid-infrared spectrum. As a consequence, sulfur-rich materials made via inverse vulcanization are characterized by a high refractive index (n~1.8), whose value depends again upon the composition and crosslinking species. As shown by thermogravimetric analysis (TGA), the copolymer thermal stability increases with the amount of added crosslinker; however, all the tested compositions degrade above 222 °C. Copolymer behavior included that, the glass-transition temperature depends upon the composition and crosslinking species. For given comonomers, the behavior of the copolymers as a function of the temperature depends on the chemical composition; for example, the poly (sulfur-random-divinylbenzene) behaves as a plastomer for a diene content between 15 and 25%wt, and as a viscous resin with the 30–35%wt of DVB. On the other hand, the poly (sulfur-random-1,3-diisopropenylbenzene) acts as thermoplastic at 15–25%wt of DIB, while it becomes a thermoplastic-thermosetting polymer for a diene concentration of 30-35%wt. The potential to break and reform the chemical bonds along the polysulfide chains (S-S) allows the repair of the copolymer by simply heating above 100 °C. This increases the ability to reform and recycle the high molecular weight copolymer. Potential applications The sulfur-rich copolymers made via inverse vulcanization could in principle find diverse applications due to their simple synthesis process and thermoplasticity. 
Lithium-sulfur batteries This new way of sulfur processing has been exploited for the cathode preparation of long-cycling lithium-sulfur batteries. Such electrochemical systems are characterized by a greater energy density than commercial Li-ion batteries, but they are not stable for long service life. Simmonds et al. first demonstrated improved capacity retention for over 500 cycles with an inverse vulcanization copolymer, suppressing the typical capacity fading of sulfur-polymer composites. The poly (sulfur-random-1,3-diisopropenylbenzene), briefly defined as poly (S-r-DIB), showed a higher composition homogeneity compared with other cathodic materials, together with greater sulfur retention and an enhanced adjustment of the polysulfides' volume variations. These advantages made it possible to assemble a stable and durable Li-S cell. Subsequently, other copolymers were synthesized via inverse vulcanization and tested inside these electrochemical devices, again providing high stability over their cycles. In order to overcome the disadvantages related to the materials' low electrical conductivity (1015–1016 Ω·cm), researchers have started to add special carbon-based particles to increase electron transport inside the copolymer. Furthermore, such carbonaceous additives improve the polysulfides' retention at the cathode through the polysulfides-capturing effect, increasing the battery performances. Examples of employed nanostructures are long carbon nanotubes, graphene, and carbon onions. Capturing Mercury The new materials could be used to remove toxic metals from soil or water. Pure sulfur cannot be employed to manufacture a functional filter because of its low mechanical properties; therefore, inverse vulcanization was investigated to produce porous materials, in particular for the mercury capturing process. The liquid metal binds together with the sulfur-rich copolymer, remaining mostly inside the filter. Infrared transmission Sulfur-rich copolymers, made via inverse vulcanization, have advantages over traditional IR optical materials due to the simple manufacturing process, low cost reagents, and high refractive index. As mentioned before, the latter depends upon the S-S bonds concentration, leading to the ability to tune the optical properties of the material by modifying the chemical formulation. The ability to change the material's refractive index to fulfill the specific application requirements makes these copolymers applicable in military, civil or medical fields. Others The inverse vulcanization process can also be employed for the synthesis of activated carbon with narrow pore-size distributions. The sulfur-rich copolymer acts as a template where the carbons are produced. The final material is doped with sulfur and exhibits a micro-porous network and high gas selectivity. Therefore, inverse vulcanization could also be used for gas separation applications. See also Sulfur Free-radical polymerization Lithium-sulfur batteries References External links "New “inverse vulcanization” process produces polymeric sulfur that can function as high performance electrodes for Li-S batteries". 15 April 2013. Chemical processes Reaction mechanisms Polymerization reactions
Inverse vulcanization
[ "Chemistry", "Materials_science" ]
1,442
[ "Reaction mechanisms", "Chemical processes", "nan", "Polymer chemistry", "Physical organic chemistry", "Chemical kinetics", "Chemical process engineering", "Polymerization reactions" ]
47,211,913
https://en.wikipedia.org/wiki/Von%20Foerster%20equation
The McKendrick–von Foerster equation is a linear first-order partial differential equation encountered in several areas of mathematical biology – for example, demography and cell proliferation modeling; it is applied when age structure is an important feature in the mathematical model. It was first presented by Anderson Gray McKendrick in 1926 as a deterministic limit of lattice models applied to epidemiology, and subsequently independently in 1959 by biophysics professor Heinz von Foerster for describing cell cycles. Mathematical formula The mathematical formula can be derived from first principles. It reads: ∂n/∂t + ∂n/∂a = −μ(a) n(a,t), where the population density n(a,t) is a function of age a and time t, and μ(a) is the death function. When μ(a) = 0, we have: ∂n/∂t + ∂n/∂a = 0. It relates that a population ages, and that fact is the only one that influences change in population density; the negative sign shows that time flows in just one direction, that there is no birth and the population is going to die out. Derivation Suppose that for a change in time dt and change in age da, the population density is: n(a + da, t + dt) = n(a, t)(1 − μ(a) dt). That is, during a time period dt the population density decreases by a percentage μ(a) dt. Taking a Taylor series expansion to first order gives us that: n(a + da, t + dt) ≈ n(a, t) + (∂n/∂a) da + (∂n/∂t) dt. We know that da/dt = 1, since the change of age with time is 1. Therefore, after collecting terms, we must have that: ∂n/∂t + ∂n/∂a = −μ(a) n(a,t). Analytical solution The von Foerster equation is a continuity equation; it can be solved using the method of characteristics. Another way is by similarity solution; and a third is a numerical approach such as finite differences. To get the solution, the following boundary conditions should be added: n(0, t) = B(t), the number of births per unit time, which states that the initial births should be conserved (see Sharpe–Lotka–McKendrick's equation for otherwise), and that: n(a, 0) = f(a), which states that the initial population must be given; then it will evolve according to the partial differential equation. Similar equations In Sebastian Aniţa, Viorel Arnăutu, Vincenzo Capasso. An Introduction to Optimal Control Problems in Life Sciences and Economics (Birkhäuser. 2011), this equation appears as a special case of the Sharpe–Lotka–McKendrick's equation; in the latter there is inflow, and the math is based on directional derivative. The McKendrick's equation appears extensively in the context of cell biology as a good approach to model the eukaryotic cell cycle. See also Finite difference method Partial differential equation Renewal theory Continuity equation Volterra integral equation References Diffusion Parabolic partial differential equations Stochastic differential equations Transport phenomena Eponymous equations of physics Mathematical and theoretical biology Ecology Demography Epidemiology
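Since the article mentions finite differences as one numerical approach, here is a minimal sketch (assuming Python; the mortality function, grid sizes, and initial data are illustrative choices, not taken from the article) that advances ∂n/∂t + ∂n/∂a = −μ(a) n with a first-order upwind scheme and a zero-birth boundary.

```python
def step_von_foerster(n, mu, da, dt, birth_rate=0.0):
    """One upwind time step of dn/dt + dn/da = -mu(a) * n on a uniform age grid."""
    new = [0.0] * len(n)
    new[0] = birth_rate  # density at age 0 is set by the birth boundary condition
    for i in range(1, len(n)):
        advection = (n[i] - n[i - 1]) / da      # upwind approximation of dn/da
        new[i] = n[i] + dt * (-advection - mu[i] * n[i])
    return new

da = dt = 0.1                                    # unit aging speed, CFL number of 1
ages = [i * da for i in range(100)]
mu = [0.05 for _ in ages]                        # constant mortality (illustrative)
n = [1.0 if a < 2.0 else 0.0 for a in ages]      # initial cohort of young individuals

for _ in range(50):                              # evolve for 5 time units
    n = step_von_foerster(n, mu, da, dt)

print(f"total population ~ {sum(n) * da:.3f}")   # the cohort has aged and decayed
```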
Von Foerster equation
[ "Physics", "Chemistry", "Mathematics", "Engineering", "Biology", "Environmental_science" ]
511
[ "Transport phenomena", "Physical phenomena", "Diffusion", "Equations of physics", "Demography", "Mathematical and theoretical biology", "Applied mathematics", "Chemical engineering", "Eponymous equations of physics", "Ecology", "Epidemiology", "Environmental social science" ]
47,215,278
https://en.wikipedia.org/wiki/Electronic%20entropy
Electronic entropy is the entropy of a system attributable to electrons' probabilistic occupation of states. This entropy can take a number of forms. The first form can be termed a density of states based entropy. The Fermi–Dirac distribution implies that each eigenstate of a system, i, is occupied with a certain probability, p_i. As the entropy is given by a sum over the probabilities of occupation of those states, there is an entropy associated with the occupation of the various electronic states. In most molecular systems, the energy spacing between the highest occupied molecular orbital and the lowest unoccupied molecular orbital is usually large, and thus the probabilities associated with the occupation of the excited states are small. Therefore, the electronic entropy in molecular systems can safely be neglected. Electronic entropy is thus most relevant for the thermodynamics of condensed phases, where the density of states at the Fermi level can be quite large, and the electronic entropy can thus contribute substantially to thermodynamic behavior. A second form of electronic entropy can be attributed to the configurational entropy associated with localized electrons and holes. This entropy is similar in form to the configurational entropy associated with the mixing of atoms on a lattice. Electronic entropy can substantially modify phase behavior, as in lithium-ion battery electrodes, high temperature superconductors, and some perovskites. It is also the driving force for the coupling of heat and charge transport in thermoelectric materials, via the Onsager reciprocal relations. From the density of states General Formulation The entropy due to a set of states that can be either occupied with probability p_i or empty with probability 1 − p_i can be written as S = −k_B Σ_i [p_i ln p_i + (1 − p_i) ln(1 − p_i)], where k_B is the Boltzmann constant. For a continuously distributed set of states as a function of energy, such as the eigenstates in an electronic band structure, the above sum can be written as an integral over the possible energy values, rather than a sum. Switching from summing over individual states to integrating over energy levels, the entropy can be written as S = −k_B ∫ n(E) [p(E) ln p(E) + (1 − p(E)) ln(1 − p(E))] dE, where n(E) is the density of states of the solid. The probability of occupation of each eigenstate is given by the Fermi function, f(E) = 1 / (exp[(E − E_F)/(k_B T)] + 1), where E_F is the Fermi energy and T is the absolute temperature. One can then re-write the entropy as S = −k_B ∫ n(E) [f(E) ln f(E) + (1 − f(E)) ln(1 − f(E))] dE. This is the general formulation of the density-of-states based electronic entropy. Useful approximation It is useful to recognize that only the states within ~k_B T of the Fermi level contribute significantly to the entropy. Other states are either fully occupied, f ≈ 1, or completely unoccupied, f ≈ 0. In either case, these states do not contribute to the entropy. If one assumes that the density of states is constant within k_B T of the Fermi level, one can derive that the electronic entropy takes the same first-order form as the electron heat capacity, equal to S ≈ (π²/3) k_B² T n(E_F), where n(E_F) is the density of states (number of levels per unit energy) at the Fermi level. Several other approximations can be made, but they all indicate that the electronic entropy should, to first order, be proportional to the temperature and the density of states at the Fermi level. As the density of states at the Fermi level varies widely between systems, this approximation is a reasonable heuristic for inferring when it may be necessary to include electronic entropy in the thermodynamic description of a system; only systems with large densities of states at the Fermi level should exhibit non-negligible electronic entropy. 
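As a numerical illustration of the density-of-states formulation above, the snippet below evaluates the entropy integral for an assumed flat (free-electron-like) density of states and compares it with the approximation S ≈ (π²/3) k_B² T n(E_F). The density of states, Fermi level and temperature are illustrative assumptions, not values taken from the text.

```python
import numpy as np

k_B = 8.617333262e-5                       # Boltzmann constant, eV/K

def electronic_entropy(dos, energies, E_F, T):
    """Density-of-states based electronic entropy (eV/K per formula unit)."""
    # Fermi function written via tanh for numerical stability.
    f = 0.5 * (1.0 - np.tanh((energies - E_F) / (2.0 * k_B * T)))
    f = np.clip(f, 1e-12, 1.0 - 1e-12)     # guard the logarithms at f = 0 or 1
    integrand = dos * (f * np.log(f) + (1.0 - f) * np.log(1.0 - f))
    return -k_B * np.trapz(integrand, energies)

energies = np.linspace(-5.0, 5.0, 20001)   # eV
dos = np.ones_like(energies)               # assumed flat DOS: 1 state per eV
E_F, T = 0.0, 300.0                        # assumed Fermi level (eV) and temperature (K)

S_integral = electronic_entropy(dos, energies, E_F, T)
S_approx = (np.pi**2 / 3.0) * k_B**2 * T * 1.0   # with n(E_F) = 1 state/eV
print(S_integral, S_approx)                # the two agree closely for a flat DOS
```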
Application to different materials classes Insulators have zero density of states at the Fermi level due to their band gaps. Thus, the density of states-based electronic entropy is essentially zero in these systems. Metals have non-zero density of states at the Fermi level. Metals with free-electron-like band structures (e.g. alkali metals, alkaline earth metals, Cu, and Al) generally exhibit relatively low density of states at the Fermi level, and therefore exhibit fairly low electronic entropies. Transition metals, wherein the flat d-bands lie close to the Fermi level, generally exhibit much larger electronic entropies than the free-electron-like metals. Oxides have particularly flat band structures and thus can exhibit a large n(E_F), if the Fermi level intersects these bands. As most oxides are insulators, this is generally not the case. However, when oxides are metallic (i.e. the Fermi level lies within an unfilled, flat set of bands), oxides exhibit some of the largest electronic entropies of any material. Thermoelectric materials are specifically engineered to have large electronic entropies. The thermoelectric effect relies on charge carriers exhibiting large entropies, as the driving force to establish a gradient in electrical potential is driven by the entropy associated with the charge carriers. In the thermoelectric literature, the term band structure engineering refers to the manipulation of material structure and chemistry to achieve a high density of states near the Fermi level. More specifically, thermoelectric materials are intentionally doped to exhibit only partially filled bands at the Fermi level, resulting in high electronic entropies. Instead of engineering band filling, one may also engineer the shape of the band structure itself via introduction of nanostructures or quantum wells to the materials. Configurational electronic entropy Configurational electronic entropy is usually observed in mixed-valence transition metal oxides, as the charges in these systems are both localized (the system is ionic) and capable of changing (due to the mixed valency). To a first approximation (i.e. assuming that the charges are distributed randomly), the molar configurational electronic entropy is given by the ideal-mixing form S ≈ −n_sites R [c ln c + (1 − c) ln(1 − c)], where n_sites is the fraction of sites on which a localized electron/hole could reside (typically a transition metal site), and c is the concentration of localized electrons/holes. Of course, the localized charges are not distributed randomly, as the charges will interact electrostatically with one another, and so the above formula should only be regarded as an approximation to the configurational electronic entropy. More sophisticated approximations have been made in the literature. References Physical quantities Statistical mechanics Thermodynamics Condensed matter physics
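A short numerical sketch of the ideal-mixing estimate described above; the site fraction and carrier concentration are assumed example values, not data from the text.

```python
import numpy as np

R = 8.314462618                         # gas constant, J/(mol K)

def configurational_entropy(n_sites, c):
    """Ideal-mixing estimate of the molar configurational electronic entropy."""
    c = np.clip(c, 1e-12, 1.0 - 1e-12)  # guard the logarithms at c = 0 or 1
    return -n_sites * R * (c * np.log(c) + (1.0 - c) * np.log(1.0 - c))

# Assumed example: one mixed-valence transition-metal site per formula unit,
# with 30% of those sites hosting a localized electron or hole.
print(configurational_entropy(n_sites=1.0, c=0.3))   # about 5.1 J/(mol K)
```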
Electronic entropy
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,265
[ "Physical phenomena", "Physical quantities", "Quantity", "Phases of matter", "Materials science", "Thermodynamics", "Condensed matter physics", "Statistical mechanics", "Physical properties", "Matter", "Dynamical systems" ]
47,218,591
https://en.wikipedia.org/wiki/Etherington%27s%20reciprocity%20theorem
Etherington's distance-duality equation is the relationship between the luminosity distance of standard candles and the angular diameter distance. The equation is as follows: d_L = (1 + z)² d_A, where z is the redshift, d_L is the luminosity distance and d_A the angular-diameter distance. History and derivations When Ivor Etherington introduced this equation in 1933, he mentioned that this equation was proposed by Tolman as a way to test a cosmological model. Ellis proposed a proof of this equation in the context of Riemannian geometry. A quote from Ellis: "The core of the reciprocity theorem is the fact that many geometric properties are invariant when the roles of the source and observer in astronomical observations are transposed". This statement is fundamental in the derivation of the reciprocity theorem. Validation from astronomical observations Etherington's distance-duality equation has been validated from astronomical observations based on the X-ray surface brightness and the Sunyaev–Zel'dovich effect of galaxy clusters. The reciprocity theorem is considered to be true when photon number is conserved and gravity is described by a metric theory, with photons traveling on unique null geodesics. Any violation of the distance duality would be attributed to exotic physics, provided that astrophysical effects altering the cosmic distance measurements are well below the statistical errors. For instance, an incorrect modelling of the three-dimensional gas density profile in galaxy clusters may introduce systematic uncertainties in the determination of the cluster angular diameter distance from X-ray and/or SZ observations, thus altering the outcome of the distance-duality test. Similarly, unaccounted-for extinction from a diffuse dust component in the inter-galactic medium can affect the determination of luminosity distances and cause a violation of the distance-duality relation. See also Distance measures (cosmology) References Physical quantities
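As a sanity check of the relation above, the snippet below computes luminosity and angular-diameter distances in an assumed flat ΛCDM cosmology (the parameter values are illustrative) and verifies that their ratio equals (1 + z)²:

```python
import numpy as np
from scipy.integrate import quad

# Assumed flat Lambda-CDM parameters (illustrative only).
H0 = 70.0                  # km/s/Mpc
Om, Ol = 0.3, 0.7
c = 299792.458             # km/s

def comoving_distance(z):
    integrand = lambda zp: 1.0 / np.sqrt(Om * (1.0 + zp)**3 + Ol)
    integral, _ = quad(integrand, 0.0, z)
    return (c / H0) * integral            # Mpc

def angular_diameter_distance(z):
    return comoving_distance(z) / (1.0 + z)

def luminosity_distance(z):
    return (1.0 + z) * comoving_distance(z)

z = 1.5
ratio = luminosity_distance(z) / angular_diameter_distance(z)
print(ratio, (1.0 + z)**2)                # both equal 6.25, as the duality requires
```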
Etherington's reciprocity theorem
[ "Physics", "Mathematics" ]
376
[ "Physical phenomena", "Quantity", "Physical quantities", "Physical properties" ]
47,218,791
https://en.wikipedia.org/wiki/Centrifugal%20acceleration%20%28astrophysics%29
Centrifugal acceleration of astroparticles to relativistic energies might take place in rotating astrophysical objects (see also Fermi acceleration). It is strongly believed that active galactic nuclei and pulsars have rotating magnetospheres; therefore, they can potentially drive charged particles to high and ultra-high energies. It is a proposed explanation for ultra-high-energy cosmic rays (UHECRs) and extreme-energy cosmic rays (EECRs) exceeding the Greisen–Zatsepin–Kuzmin limit. Acceleration to high energies It is well known that the magnetospheres of AGNs and pulsars are characterized by strong magnetic fields that force charged particles to follow the field lines. If the magnetic field is rotating (which is the case for such astrophysical objects), the particles will inevitably undergo centrifugal acceleration. The pioneering work by Machabeli & Rogava was a thought experiment in which a bead moves inside a straight rotating pipe. The dynamics of the particle were analyzed both analytically and numerically, and it was shown that if the rigid rotation is maintained for a sufficiently long time, the energy of the bead will asymptotically increase. In particular, Rieger & Mannheim, building on the theory of Machabeli & Rogava, showed that the Lorentz factor of the bead grows as a function of its initial Lorentz factor, the angular velocity of rotation Ω, the radial coordinate of the particle, and the speed of light. From this behavior it is evident that the radial motion will exhibit a nontrivial character. In due course of the motion the particle will reach the light cylinder surface (a hypothetical area where the linear velocity of rotation exactly equals the speed of light), leading to an increase of the poloidal component of velocity. On the other hand, the total velocity cannot exceed the speed of light; therefore, the radial component must decrease. This means that the centrifugal force changes its sign. As this behavior shows, the Lorentz factor of the particle tends to infinity if the rigid rotation is maintained. This means that in reality the energy has to be limited by certain processes. Generally speaking, there are two major mechanisms: inverse Compton scattering (ICS) and the so-called breakdown of the bead on the wire (BBW) mechanism. For jet-like structures in an AGN it has been shown that, for a wide range of inclination angles of the field lines with respect to the rotation axis, ICS is the dominant mechanism efficiently limiting the maximum attainable Lorentz factors of electrons. On the other hand, it was shown that the BBW mechanism becomes dominant for relatively low-luminosity AGN, setting the corresponding limit on the attainable Lorentz factor. The centrifugal effects are more efficient in millisecond pulsars, as the rotation rate is quite high. Osmanov & Rieger considered the centrifugal acceleration of charged particles in the light cylinder area of Crab-like pulsars. It has been shown that electrons might achieve high Lorentz factors, with the maximum set by inverse Compton up-scattering in the Klein–Nishina regime. Acceleration to very high and ultra-high energies Although the direct centrifugal acceleration has limitations, analysis shows that the effects of rotation might still play an important role in the acceleration of charged particles. Generally speaking, it is believed that the centrifugal relativistic effects may induce plasma waves, which under certain conditions might become unstable, efficiently pumping energy from the background flow. 
In the second stage, the energy of the wave modes can be transferred to the plasma particles, leading to their subsequent acceleration. In rotating magnetospheres the centrifugal force acts differently in different locations, leading to the generation of Langmuir waves, or plasma oscillations, via the parametric instability. One can show that this mechanism works efficiently in the magnetospheres of AGN and pulsars. For Crab-like pulsars it has been shown that, by means of Landau damping, the centrifugally induced electrostatic waves efficiently lose energy, transferring it to electrons. The resulting energy gain by the electrons is expressed in terms of the increment (growth rate) of the instability (for details see the cited article), the plasma number density, the electron mass, and the Goldreich–Julian density. One can show that for typical parameters of Crab-like pulsars, the particles might gain energies of the order of hundreds of TeV or even PeV. In the case of newly born millisecond pulsars, the electrons might be accelerated to even higher energies, of the order of EeV. In the magnetospheres of AGNs, the acceleration of protons takes place through the Langmuir collapse. This mechanism has been shown to be strong enough to guarantee efficient acceleration of particles to ultra-high energies via Langmuir damping, with the attainable energy expressed in terms of the normalized luminosity of the AGN and its normalized mass (in units of the solar mass). As is evident, for a suitable set of parameters one can achieve enormous energies of the order of ZeV, so AGNs become cosmic Zevatrons. References Further references Astroparticle physics Thought experiments in physics
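Referring back to the light cylinder introduced in the previous section, a short order-of-magnitude calculation gives a feel for the length scale involved. This sketch is purely illustrative and is not taken from the cited papers; it only uses the definition of the light cylinder radius, R_lc = c/Ω, together with the Crab pulsar's well-known spin period of roughly 33 ms.

```python
import math

c = 2.998e8                    # speed of light, m/s
P = 0.033                      # Crab pulsar spin period, s (~33 ms)
Omega = 2.0 * math.pi / P      # angular velocity of rotation, rad/s

# Light cylinder radius: the radius at which the corotation speed Omega * r equals c.
R_lc = c / Omega
print(Omega)                   # ~190 rad/s
print(R_lc)                    # ~1.6e6 m, i.e. about 1600 km
```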
Centrifugal acceleration (astrophysics)
[ "Physics" ]
1,070
[ "Astroparticle physics", "Particle physics", "Astrophysics" ]
37,019,370
https://en.wikipedia.org/wiki/List%20of%20electromagnetism%20equations
This article summarizes equations in the theory of electromagnetism. Definitions Here subscripts e and m are used to distinguish between electric and magnetic charges. The definitions for monopoles are of theoretical interest, although real magnetic dipoles can be described using pole strengths. There are two possible units for monopole strength, Wb (weber) and A·m (ampere metre). Dimensional analysis shows that magnetic charges relate by qm(Wb) = μ0 qm(A·m). Initial quantities Electric quantities Contrary to the strong analogy between (classical) gravitation and electrostatics, there are no "centre of charge" or "centre of electrostatic attraction" analogues. Electric transport Electric fields Magnetic quantities Magnetic transport Magnetic fields Electric circuits DC circuits, general definitions AC circuits Magnetic circuits Electromagnetism Electric fields General classical equations Magnetic fields and moments General classical equations Electric circuits and electronics Below, N = number of conductors or circuit components. Subscript net refers to the equivalent and resultant property value. See also Defining equation (physical chemistry) Fresnel equations List of equations in classical mechanics List of equations in fluid mechanics List of equations in gravitation List of equations in nuclear and particle physics List of equations in quantum mechanics List of equations in wave theory List of photonics equations List of relativistic equations SI electromagnetism units Table of thermodynamic equations Footnotes Sources Further reading Physical quantities SI units electromagnetism Electromagnetism
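A minimal numerical illustration of the unit relation quoted above, qm(Wb) = μ0 qm(A·m); the example pole strength is an arbitrary assumed value:

```python
import math

mu_0 = 4.0e-7 * math.pi              # vacuum permeability, Wb/(A*m)

def pole_strength_in_weber(q_m_ampere_metre):
    """Convert a magnetic pole strength from A*m to Wb via q_m(Wb) = mu_0 * q_m(A*m)."""
    return mu_0 * q_m_ampere_metre

print(pole_strength_in_weber(1.0))   # an assumed 1 A*m pole is ~1.2566e-6 Wb
```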
List of electromagnetism equations
[ "Physics", "Mathematics" ]
300
[ "Electromagnetism", "Physical phenomena", "Physical quantities", "Equations of physics", "Quantity", "Fundamental interactions", "Physical properties", "Lists of physics equations" ]
37,019,524
https://en.wikipedia.org/wiki/List%20of%20equations%20in%20fluid%20mechanics
This article summarizes equations in the theory of fluid mechanics. Definitions In the definitions below, the unit vector points in the direction of the flow/current/flux. Equations See also Defining equation (physical chemistry) List of electromagnetism equations List of equations in classical mechanics List of equations in gravitation List of equations in nuclear and particle physics List of equations in quantum mechanics List of photonics equations List of relativistic equations Table of thermodynamic equations Sources Further reading Physical quantities SI units Physical chemistry fluid mechanics
List of equations in fluid mechanics
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
106
[ "Physical phenomena", "Applied and interdisciplinary physics", "Equations of physics", "Physical quantities", "Quantity", "Fluid mechanics", "Civil engineering", "nan", "Physical chemistry", "Physical properties", "Lists of physics equations" ]
37,019,651
https://en.wikipedia.org/wiki/List%20of%20equations%20in%20wave%20theory
This article summarizes equations in the theory of waves. Definitions General fundamental quantities A wave can be longitudinal, where the oscillations are parallel (or antiparallel) to the propagation direction, or transverse, where the oscillations are perpendicular to the propagation direction. These oscillations are characterized by a periodically time-varying displacement in the parallel or perpendicular direction, and so the instantaneous velocity and acceleration are also periodic and time-varying in these directions. The wave itself (the apparent motion of the wave due to the successive oscillations of particles or fields about their equilibrium positions) propagates at the phase and group velocities, parallel or antiparallel to the propagation direction; this is common to longitudinal and transverse waves. Below, oscillatory displacement, velocity and acceleration refer to the kinematics in the oscillating directions of the wave, transverse or longitudinal (the mathematical description is identical); the group and phase velocities are treated separately. General derived quantities Relation between space, time, angle analogues used to describe the phase: Modulation indices Acoustics Equations In what follows n, m are any integers (Z = set of integers). Standing waves Propagating waves Sound waves Gravitational waves Gravitational radiation for two orbiting bodies in the low-speed limit. Superposition, interference, and diffraction Wave propagation A common misconception occurs between phase velocity and group velocity (analogous to centres of mass and gravity). They happen to be equal in non-dispersive media. In dispersive media the phase velocity is not necessarily the same as the group velocity. The phase velocity varies with frequency. The phase velocity is the rate at which the phase of the wave propagates in space. The group velocity is the rate at which the wave envelope, i.e. the changes in amplitude, propagates. The wave envelope is the profile of the wave amplitudes; all transverse displacements are bound by the envelope profile. Intuitively the wave envelope is the "global profile" of the wave, which "contains" changing "local profiles inside the global profile". Each propagates at generally different speeds determined by the important function called the dispersion relation. The use of the explicit form ω(k) is standard, since the phase velocity ω/k and the group velocity dω/dk usually have convenient representations by this function. General wave functions Wave equations Sinusoidal solutions to the 3d wave equation N different sinusoidal waves Complex amplitude of wave n Resultant complex amplitude of all N waves Modulus of amplitude The transverse displacements are simply the real parts of the complex amplitudes. 1-dimensional corollaries for two sinusoidal waves The following may be deduced by applying the principle of superposition to two sinusoidal waves, using trigonometric identities. The angle addition and sum-to-product trigonometric formulae are useful; in more advanced work complex numbers and Fourier series and transforms are used. See also Defining equation (physical chemistry) List of equations in classical mechanics List of equations in fluid mechanics List of equations in gravitation List of equations in nuclear and particle physics List of equations in quantum mechanics List of photonics equations List of relativistic equations SI electromagnetism units Wave equation One-way wave equation Footnotes Sources Further reading Physical quantities SI units Physical chemistry wave Waves
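The distinction between phase and group velocity discussed above can be made concrete with a dispersion relation for which the two differ. The snippet below uses deep-water gravity waves, ω(k) = √(g k), a standard textbook example not specific to this article, for which the group velocity is half the phase velocity:

```python
import numpy as np

g = 9.81                                  # gravitational acceleration, m/s^2

def omega(k):
    """Deep-water gravity-wave dispersion relation (illustrative example)."""
    return np.sqrt(g * k)

k = np.linspace(0.5, 5.0, 200)            # wavenumbers, rad/m
v_phase = omega(k) / k                    # phase velocity, omega/k
v_group = np.gradient(omega(k), k)        # group velocity, d(omega)/dk (numerical)

# For omega = sqrt(g*k) the analytic result is v_group = v_phase / 2.
print(np.allclose(v_group[1:-1], 0.5 * v_phase[1:-1], rtol=1e-3))
```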
List of equations in wave theory
[ "Physics", "Chemistry", "Mathematics" ]
688
[ "Physical phenomena", "Applied and interdisciplinary physics", "Equations of physics", "Physical quantities", "Quantity", "Waves", "Motion (physics)", "nan", "Physical chemistry", "Physical properties", "Lists of physics equations" ]
37,020,373
https://en.wikipedia.org/wiki/Cheng%20rotation%20vane
A fluid flow conditioning device, the Cheng rotation vane (CRV) is a stationary vane fabricated within a pipe piece as a single unit and welded directly upstream of an elbow before the pump inlet, flow meters, compressors, or other downstream equipment. The Cheng rotation vane is used to eliminate elbow-induced turbulence, cavitation, erosion, and vibration, which affect pump performance, seal life, impeller life, and flow meter accuracy, and can lead to bearing failure, pipe bursts, and other common piping problems. References Fluid mechanics
Cheng rotation vane
[ "Engineering" ]
105
[ "Civil engineering", "Fluid mechanics" ]
55,704,714
https://en.wikipedia.org/wiki/Light%20field%20microscopy
Light field microscopy (LFM) is a scanning-free 3-dimensional (3D) microscopic imaging method based on the theory of the light field. This technique allows sub-second (~10 Hz) large volumetric imaging ([~0.1 to 1 mm]³) with ~1 μm spatial resolution under conditions of weak scattering and semi-transparency, a combination not achieved by other methods. Just as in traditional light field rendering, there are two steps for LFM imaging: light field capture and processing. In most setups, a microlens array is used to capture the light field. As for processing, it can be based on two kinds of representations of light propagation: the ray optics picture and the wave optics picture. The Stanford University Computer Graphics Laboratory published their first prototype LFM in 2006 and has been working on the cutting edge since then. Light field generation A light field is a collection of all the rays flowing through some free space, where each ray can be parameterized with four variables. In many cases, two pairs of 2D coordinates on two parallel planes with which the rays intersect are applied for the parameterization. Accordingly, the intensity of the 4D light field can be described as a scalar function of these four coordinates, parameterized by the distance between the two planes. LFM can be built upon the traditional setup of a wide-field fluorescence microscope and a standard CCD camera or sCMOS. A light field is generated by placing a microlens array at the intermediate image plane of the objective (or the rear focal plane of an optional relay lens) and is further captured by placing the camera sensor at the rear focal plane of the microlenses. As a result, the coordinates of the microlenses conjugate with those on the object plane (if additional relay lenses are added, then on the front focal plane of the objective); the coordinates of the pixels behind each microlens conjugate with those on the objective plane. For uniformity and convenience, we shall call the plane conjugate to the microlens array the original focus plane in this article. Correspondingly, the distance between the microlens array plane and the sensor plane equals the focal length of the microlenses. In addition, the apertures and the focal lengths of each lens and the dimensions of the sensor and microlens array should all be properly chosen to ensure that there is neither overlap nor empty area between adjacent subimages behind the corresponding microlenses. Realization from the ray optics picture This section mainly introduces the work of Levoy et al., 2006. Perspective views from varied angles Owing to the conjugate relationships mentioned above, any given pixel behind a given microlens corresponds to the ray passing through that microlens's position in one particular direction. Therefore, by extracting the pixel at the same relative position from every subimage and stitching the results together, a perspective view from that angle is obtained. In this scenario, spatial resolution is determined by the number of microlenses; angular resolution is determined by the number of pixels behind each microlens. Tomographic views based on synthetic refocusing Step 1: Digital refocusing Synthetic focusing uses the captured light field to compute a photograph focused on any arbitrary section. 
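Before detailing the refocusing step, the perspective-view extraction described above can be sketched in a few lines. This is a minimal illustration under assumed conditions (the raw frame is already cropped and rectified so that each microlens subimage occupies an s × s block of pixels, and the array sizes are arbitrary); it is not code from the cited work.

```python
import numpy as np

def perspective_view(raw, s, u, v):
    """Extract the perspective view with angular index (u, v) from an LFM raw image.

    raw : 2D array of shape (n_y * s, n_x * s); each s x s block is the subimage
          formed behind one microlens.
    u, v: pixel position inside each subimage (0 <= u, v < s), i.e. the viewing angle.
    """
    n_y, n_x = raw.shape[0] // s, raw.shape[1] // s
    blocks = raw.reshape(n_y, s, n_x, s)
    # Taking the same (u, v) pixel from every subimage yields one perspective view.
    return blocks[:, u, :, v]

# Illustrative synthetic data: 50 x 60 microlenses with 9 x 9 pixels each.
raw = np.random.rand(50 * 9, 60 * 9)
view = perspective_view(raw, s=9, u=4, v=4)   # central (on-axis) view
print(view.shape)                             # (50, 60)
```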
By simply summing all the pixels in each subimage behind a microlens (equivalent to collecting all the radiation coming from different angles that falls on the same position), the image is focused exactly on the plane that conjugates with the microlens array plane; the summation involves a projection factor that depends on the angle between each ray and the normal of the sensor plane, and takes a simple form if the origin of the coordinate system of each subimage is located on the principal optic axis of the corresponding microlens. A new function can then be defined that absorbs this effective projection factor into the light field intensity, giving the actual radiance collection of each pixel. In order to focus on some other plane besides the front focal plane of the objective (say, a plane whose conjugate plane lies at a different distance from the sensor plane), the conjugate plane can be shifted and the light field reparameterized back to the original sensor-plane coordinates. Thereby, the refocused photograph can be computed by integrating the reparameterized light field over the angular coordinates. Consequently, a focal stack is generated to recapitulate the instant 3D imaging of the object space. Furthermore, tilted or even curved focal planes are also synthetically possible. In addition, any reconstructed 2D image focused at an arbitrary depth corresponds to a 2D slice of the 4D light field in the Fourier domain, where the algorithmic complexity of refocusing can be reduced. Step 2: Point spread function measurement Due to diffraction and defocus, however, the focal stack differs from the actual intensity distribution of the voxels, which is what is really desired. Instead, the focal stack is a convolution of the voxel intensities with a point spread function (PSF). Thus, the 3D shape of the PSF has to be measured in order to subtract its effect and to obtain the voxels' net intensity. This measurement can be easily done by placing a fluorescent bead at the center of the original focus plane and recording its light field, based on which the PSF's 3D shape is ascertained by synthetically focusing at varied depths. Given that the PSF is acquired with the same LFM setup and digital refocusing procedure as the focal stack, this measurement correctly reflects the angular range of rays captured by the objective (including any falloff in intensity); therefore, this synthetic PSF is actually free of noise and aberrations. The shape of the PSF can be considered identical everywhere within the desired field of view (FOV); hence, multiple measurements can be avoided. Step 3: 3D deconvolution In the Fourier domain, the actual intensity of the voxels has a very simple relation with the focal stack and the PSF: the Fourier transform of the focal stack equals the product of the Fourier transforms of the voxel intensities and of the PSF. However, it may not be possible to invert this relation directly, given the fact that the aperture is of limited size, resulting in the PSF being bandlimited (i.e., its Fourier transform has zeros). Instead, an iterative algorithm called constrained iterative deconvolution in the spatial domain is much more practical here. The idea is based on constrained gradient descent: the estimate of the voxel intensities is improved iteratively by calculating the difference between the actual focal stack and the estimated focal stack (the current estimate blurred by the PSF) and correcting the estimate with this difference, while constraining the estimate to be non-negative. Fourier Slice Photography The refocusing formula can be rewritten by adopting the concept of the Fourier projection-slice theorem. Because the photography operator can be viewed as a shear followed by a projection, the result should be proportional to a dilated 2D slice of the 4D Fourier transform of the light field. 
Precisely, a refocused image can be generated from the 4D Fourier spectrum of a light field by extracting a 2D slice, applying an inverse 2D transform, and scaling. Before the proof, we first introduce some operators: an integral projection operator, a slicing operator, a photography operator, a change-of-basis operator acting on 4-dimensional functions, and the N-dimensional Fourier transform operator. By these definitions, the photography operator can be rewritten as a change of basis followed by an integral projection. According to the generalized Fourier-slice theorem, projection in the spatial domain corresponds to slicing in the Fourier domain, and hence the photography operator can be expressed as a 4D Fourier transform, followed by a Fourier-domain slice, followed by an inverse 2D Fourier transform. According to this formula, a photograph is the inverse 2D Fourier transform of a dilated 2D slice in the 4D Fourier transform of the light field. Discrete Fourier Slice Photography If all that is available are samples of the light field, then instead of using the Fourier slice theorem for continuous signals mentioned above, the discrete Fourier slice theorem, which is a generalization of the discrete Radon transform, is adopted to compute the refocused image. Assume that the light field is periodic and is defined on a hypercube, with a known number of samples in each of the four dimensions. The continuous light field can then be defined from these sample points using trigonometric interpolation (constant factors are dropped for simplicity). To compute its refocused photograph, the infinite integral in the refocusing formula is replaced with a finite summation over the samples. Then, as the discrete Fourier slice theorem indicates, the photograph can be represented using a Fourier slice. Realization from the wave optics picture Although the ray-optics based plenoptic camera has demonstrated favorable performance in the macroscopic world, diffraction places a limit on LFM reconstruction when staying within the ray-optics parlance. Hence, it may be much more convenient to switch to wave optics. (This section mainly introduces the work of Broxton et al., 2013.) Discretization of the space The FOV of interest is segmented into voxels, each with a label. Thus, the whole FOV can be discretely represented as a vector whose dimension equals the number of voxels. Similarly, a vector represents the sensor plane, where each element denotes one sensor pixel. Under the condition of incoherent propagation among different voxels, the light field transmission from the object space to the sensor can be linearly linked by a measurement matrix, in which the information of the PSF is incorporated. In the ray-optics scenario, a focal stack is generated via synthetic focusing of rays, and then deconvolution with a synthesized PSF is applied to diminish the blurring caused by the wave nature of light. In the wave-optics picture, on the other hand, the measurement matrix, which describes the light field transmission, is directly calculated based on the propagation of waves. Unlike traditional optical microscopes, whose PSF shape is invariant (e.g., an Airy pattern) with respect to the position of the emitter, an emitter in each voxel generates a unique pattern on the sensor of an LFM. In other words, each column of the measurement matrix is distinct. In the following sections, the calculation of the whole measurement matrix is discussed in detail. Optical impulse response The optical impulse response is the intensity of the electric field at a 2D position on the sensor plane when an isotropic point source of unit amplitude is placed at some 3D position in the FOV. 
There are three steps along the electric-field propagation: traveling from a point source to the native image plane (i.e., the microlens array plane), passing through the microlens array, and propagating onto the sensor plane. Step 1: Propagation across the objective For an objective with a circular aperture, the wavefront at the native image plane initiated from an emitter at a given 3D position in the sample can be computed using the scalar Debye theory. The resulting diffraction integral involves the focal length of the objective and its magnification, the wavelength, the half-angle of the numerical aperture and the index of refraction of the sample, the apodization function of the microscope (equal to the square root of the cosine of the aperture angle for Abbe-sine corrected objectives), the zeroth-order Bessel function of the first kind, and the normalized radial and axial optical coordinates, which are defined in terms of the wave number. Step 2: Focusing through the microlens array Each microlens can be regarded as a phase mask whose phase depends on the focal length of the microlenses and on the vector pointing from the center of the microlens to a point on the microlens. It is worth noticing that this phase mask is non-zero only within the effective transmission area of a microlens. Thereby, the transmission function of the overall microlens array can be represented as the single-lens phase mask convolved with a 2D comb function whose spacing is the pitch (i.e., the dimension) of the microlenses. Step 3: Near-field propagation to the sensor The propagation of the wavefront over the distance from the native image plane to the sensor plane can be computed with a Fresnel diffraction integral applied to the wavefront immediately after the native image plane. Therefore, the whole optical impulse response can be expressed as a convolution of these three steps. Computing the measurement matrix Having acquired the optical impulse response, any element of the measurement matrix can be calculated by integrating the impulse response over the area of the corresponding pixel and the volume of the corresponding voxel. A weight filter is added to reflect the fact that a PSF contributes more at the center of a voxel than at the edges. The linear superposition integral is based on the assumption that fluorophores in each infinitesimal volume experience an incoherent, stochastic emission process, considering their rapid, random fluctuations. Solving the inverse problem The noisy nature of the measurements Again, due to the limited bandwidth, the photon shot noise, and the huge matrix dimension, it is impossible to directly invert the measurement equation. Instead, the stochastic relation between a discrete light field and the FOV more closely resembles a Poisson model: the expected measurement is the measurement matrix applied to the voxel intensities plus the background fluorescence measured prior to imaging, corrupted by Poisson noise. Therefore, the measured light field becomes a random vector with Poisson-distributed values in units of photoelectrons e−. Maximum likelihood estimation Based on the idea of maximizing the likelihood of the measured light field given a particular FOV and background, the Richardson–Lucy iteration scheme provides an effective 3D deconvolution algorithm here; in the update rule, a diagonalization operator retains the diagonal entries of a matrix and sets its off-diagonal elements to zero. Applications Light field microscopy for functional neural imaging Starting with initial work at Stanford University applying light field microscopy to calcium imaging in larval zebrafish (Danio rerio), a number of articles have now applied light field microscopy to functional neural imaging including measuring the neuron dynamic activities across the whole brain of C. 
elegans, whole-brain imaging in larval zebrafish, imaging calcium and voltage activity sensors across the brain of fruit flies (Drosophila) at up to 200 Hz, and fast imaging of 1 mm × 1 mm × 0.75 mm volumes in the hippocampus of mice navigating a virtual environment. This application is a rapidly developing area at the intersection of computational optics and neuroscience. See also Light field Microlens Tomography Aperture synthesis Voxel References Microscopy
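As a schematic illustration of the constrained iterative deconvolution used in the ray-optics reconstruction described above, the sketch below deconvolves a synthetic focal stack by non-negative gradient descent. It is a toy example under simplifying assumptions (a known, shift-invariant PSF, FFT-based convolution, and a fixed step size), not the implementation of the cited papers.

```python
import numpy as np
from scipy.signal import fftconvolve

def constrained_iterative_deconvolution(focal_stack, psf, n_iter=50, step=0.5):
    """Estimate voxel intensities from focal_stack ~ volume convolved with psf.

    Schematic constrained gradient descent: re-blur the current estimate, compare
    with the measured focal stack, correct with the difference, and clip the
    estimate to be non-negative after every update.
    """
    estimate = np.clip(focal_stack.copy(), 0.0, None)   # initial guess
    psf_flipped = psf[::-1, ::-1, ::-1]                  # adjoint of the blur operator
    for _ in range(n_iter):
        residual = focal_stack - fftconvolve(estimate, psf, mode="same")
        estimate += step * fftconvolve(residual, psf_flipped, mode="same")
        estimate = np.clip(estimate, 0.0, None)          # non-negativity constraint
    return estimate

# Illustrative synthetic test: two bright voxels blurred by a Gaussian-like PSF.
volume = np.zeros((32, 32, 16))
volume[16, 16, 8] = 1.0
volume[8, 20, 4] = 0.5
z, y, x = np.mgrid[-3:4, -3:4, -3:4]
psf = np.exp(-(x**2 + y**2 + z**2) / 4.0)
psf /= psf.sum()
stack = fftconvolve(volume, psf, mode="same")
recovered = constrained_iterative_deconvolution(stack, psf)
```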
Light field microscopy
[ "Chemistry" ]
2,891
[ "Microscopy" ]
55,707,554
https://en.wikipedia.org/wiki/Gunungan%20%28wayang%29
The gunungan, also known as the kayon, or kayonan in Bali, is a figure used in Indonesian theatrical performances such as wayang kulit and other forms of wayang. The gunungan is a conical or triangular structure (tapered peak) inspired by the shape of a mountain (volcano). In wayang, gunungan are special figures in the form of pictures of a mountain and its contents. The gunungan has many functions in wayang performances; therefore, there are many different depictions. In its standard function, as the opening and closing of a performance stage, two scenes are depicted on its two sides. On one side, at the bottom, is a picture of a gate guarded by two rakshasa holding swords and shields. It symbolizes the palace gate, and when played the gunungan is used as a palace. At the top of the mountain is the tree of life (kalpataru), which is entwined by a dragon. On the tree's branches several forest animals are depicted, such as tigers, bulls, monkeys, and birds. The picture as a whole depicts the situation in the wilderness. This side symbolizes the state of the world and its contents. On the other side, a blazing fire is depicted. It symbolizes chaos and hell. Before the puppets are played, the gunungan is planted in the middle of the screen, leaning slightly to the right, which means that the wayang play has not yet started, like a world whose story has not yet been told. Once play begins, the gunungan is removed and lined up on the right. The gunungan is also used as a sign of a change of scene or story stage; for that purpose it is planted in the middle, leaning to the left. In addition, the gunungan is used to symbolize fire or wind. In this case the gunungan is reversed to show its other side, which is painted predominantly red, a color that symbolizes fire. The gunungan can also act as land, forest, roads and other settings, following the dialogue of the dhalang. After the play is finished, the gunungan is planted again in the center of the screen, symbolizing that the story is finished. There are two kinds of gunungan, namely the Gunungan Gapuran and the Gunungan Blumbangan. The Gunungan Blumbangan was created by Sunan Kalijaga in the era of the Demak Kingdom; later, during the Kartasura era, the Gunungan Gapuran was created as well. The gunungan carries high philosophical teachings, namely teachings of wisdom. This implies that the plays in which it is used contain lessons of high value, and that wayang performances therefore also contain high philosophical teachings. Javanese diaspora Malaysia In Kelantan, Peninsular Malaysia, a similar figure, known as the pohon beringin ("banyan"), is set up in the local iteration of the performance. The beringin is often displayed at the beginning and the end of the performance, symbolizing "a world loaded with lives...in the water, on the land and in the air". Gallery See also Wayang Javanese culture Theatre of Indonesia Culture of Indonesia References Sources Shadows Wayang Arts in Malaysia Arts in Indonesia Precursors of film Theatrical genres
Gunungan (wayang)
[ "Physics" ]
629
[ "Optical phenomena", "Physical phenomena", "Shadows" ]
55,707,592
https://en.wikipedia.org/wiki/Polonez%20%28multiple%20rocket%20launcher%29
The Polonez is a Belarusian 300 mm rocket artillery system of a launcher unit comprising eight rockets packaged in two four-rocket pods mounted on a MZKT-7930 vehicle. In 2018, it was exported to Azerbaijan. The system was designed by the Belarusian Plant of Precision Electromechanics in cooperation with a foreign country, probably China. The first combat missile launches were carried out in China. The 77th Separate Rocket Artillery Battalion of the 336th Rocket Artillery Brigade of the Belarusian Ground Forces is equipped with it. An upgraded version called Polonez-M passed all trials and has been accepted into service by the Belarusian Ground Forces as of May 2019. Polonez-M has an increased range of 290 km (186.4 mi), a higher share of domestic components and can fire the improved A-300 missile. The first delivery was conducted in November 2023. See also Katyusha, BM-13, BM-8, and BM-31 multiple rocket launchers of World War II T-122 Sakarya, Turkish 122 mm multiple launch rocket system Fajr-5, Iranian 333 mm long-range multiple launch rocket system TOROS, Turkish 230 and 260 mm multiple launch rocket system BM-14, Soviet 140 mm multiple launch rocket system BM-21 Grad, Soviet 122 mm multiple launch rocket system BM-27 Uragan, Soviet 220 mm multiple launch rocket system M270, U.S. multiple launch rocket system Pinaka Multi Barrel Rocket Launcher, Indian 214 mm multiple launch rocket system TOS-1 Buratino, Soviet / Russian Heavy Flame Thrower System (multiple rocket / thermobaric weapon launcher) References External links Polonez Multiple Launch Rocket System (MLRS), Belarus Wheeled self-propelled rocket launchers Self-propelled artillery of Belarus Multiple rocket launchers Modular rocket launchers Military equipment of Belarus Military vehicles introduced in the 2010s
Polonez (multiple rocket launcher)
[ "Engineering" ]
392
[ "Modular design", "Modular rocket launchers" ]
55,708,561
https://en.wikipedia.org/wiki/Gas%20Council%20Engineering%20Research%20Station
The Gas Council Engineering Research Station (ERS) was a former engineering research institute on Tyneside, situated in a distinctively-shaped and listed building, now occupied by the Metropolitan Borough of North Tyneside. History Design It was designed by Ryder & Yates in 1965, who also designed the Television Centre, Newcastle upon Tyne. Ryder and Yates had formed in 1953 in Newcastle. It was built under the former Northumberland County Council. The Northern Gas Board had its main headquarters in Killingworth. It was first announced in November 1965. It was built in anticipation of North Sea gas. Killingworth was a north-east new town, known as Killingworth Township. It was planned to open in the summer of 1968. It was built on the site of Killingworth Colliery. The modernist architecture is developed from Le Corbusier and Berthold Lubetkin. Construction It was built from 1966-67 on a 10-acre site. An extension was added from 1975-76 to contain a restaurant. It was Grade II* listed on 27 January 1997 by English Heritage (Historic England since 2015). Structure It is situated directly between the B1505 to east and the East Coast Main Line (ECML) to the west, in the west of Killingworth. Nearby to the south was the former distinctively-designed headquarters, Norgas House, of the Northern Gas Board, also designed by Ryder & Yates, until North Tyneside agreed its demolition in 2012. Block A housed the Engineering Research Station and Block B housed the School of Engineering. Function It housed the main engineering research function of British Gas, where the National Transmission System (NTS) was designed, although British Gas also operated a Midlands Research Station (MRS) and a London Research Station (LRS). The research centre's first function was to design the pipeline system around the UK. It researched metallurgy and pipeline technology, including avoiding any cracks in the UK's pipelines. British Gas left the site in 1995 when it brought its research stations onto a single site at Loughborough. The leader of North Tyneside Council at the time, Brian Flood, was also a senior manager at the Research Station, and he facilitated the sale of the site to the Council. However in 2008, North Tyneside moved most of its functions to Cobalt Park close to the A19. See also Grade II* listed buildings in Tyne and Wear References External links 100 Places NE RIBA Roof Something Concrete and Modern 1967 establishments in the United Kingdom Buildings and structures in the Metropolitan Borough of North Tyneside Education in the Metropolitan Borough of North Tyneside Engineering education in the United Kingdom Energy research institutes Grade II* listed buildings in Tyne and Wear Grade II* listed industrial buildings Natural gas infrastructure in the United Kingdom Research institutes established in 1967 Research institutes in Tyne and Wear
Gas Council Engineering Research Station
[ "Engineering" ]
564
[ "Energy research institutes", "Energy organizations" ]
55,709,615
https://en.wikipedia.org/wiki/Polyester%20fiberfill
Polyester fiberfill is a synthetic fiber used for stuffing pillows and other soft objects such as stuffed animals. It is also used in audio speakers for its acoustic properties. It is commonly sold under the trademark name Poly-Fil, or un-trademarked as polyfill. References Synthetic fibers
Polyester fiberfill
[ "Chemistry" ]
62
[ "Polymer stubs", "Synthetic materials", "Organic chemistry stubs", "Synthetic fibers" ]
55,711,057
https://en.wikipedia.org/wiki/Mamuli
Mamuli are precious metal ornaments of the Sumba people of Sumba, Indonesia. They are found in the megalithic society of the western Sumba people, e.g. the Anakalang society. The mamuli ornaments have a shape which represents the female genitalia, symbolizing the woman as the giver of life. Mamuli are the most important Sumbanese precious metal valuables and are seen as heirloom objects which served in important exchange rituals. Form The mamuli can be plain (lobu) or decorated (karagat). The basic lobu mamuli have the shape of a diamond with a concave center. There is a round hole and a slit in the middle which represents the female genitalia, a symbol of woman's sexuality and reproductive power. The decorated karagat mamuli (also known as ma pawisi, "those with feet") have additional finials at the bottom of the diamond-shaped center which give them the shape of the letter omega. Additional figures are added on these finials, flanking the diamond-shaped female genitalia. These additional figures can be of roosters, cockatoos, horsemen, buffalo, goats, headhunting skull trees, or warriors; all symbols of male greatness. Thus the most decorated karagat mamuli are seen as male, while the simple undecorated lobu mamuli are seen as female. During the colonial period, baroque versions of mamuli were carved, which included complex battle scenes and movable parts. Mamuli are always precious metal valuables, usually made of gold or silver. In Sumbanese mythology, precious metals are believed to be of celestial origin: gold is deposited on earth when the sun sets, while silver comes from the setting of the moon or from shooting stars. Function Mamuli are basically ear ornaments worn in the elongated earlobes of women and sometimes men. Very large mamuli are usually worn around the neck as pendants or hung on the headdress. A mamuli can also be worn as a brooch on a jacket. As a brooch, a mamuli is worn with other Sumbanese metal ornaments, e.g. the flat twisted maraga, the crescent-shaped tabelu, and the circular wula; but the mamuli always has the best quality of all. Mamuli play an essential role in the elaborate ceremonial gift exchanges practiced by the west Sumba people. The giving of a woman in marriage by one group to another is seen as the most intimate expression of the gift of life. The group from which she originates is regarded as the 'life-giving' group to whomever she marries. Because of this concept, the marriage relationship is seen as key to the organization of Sumbanese society. Thus society is divided into wife-givers and wife-takers. Mamuli are given by the wife-taking group to their wife-givers in a marriage. They become family heirlooms, passed from family to family and from generation to generation. The exchange of mamuli can also happen within a household, outside of marriage. For example, the pig is seen as the most valuable animal recognized as the property of a woman. A man who wishes to use a pig must obtain the permission of the woman who raised it and compensate her with the exchange of the mamuli to "cool the trough". Mamuli are also seen as sacred relics, usually kept in the clan leader's treasury. They are seen as powerful relics for communicating with the spirits of the ancestors. Mamuli are rarely removed from their containers, because their power is believed to kill onlookers or cause natural disasters. As grave goods, mamuli accompany the soul to the land of the dead. 
See also Marangga Madaka References Cited works Types of jewellery Jewellery components Necklaces Sumba
Mamuli
[ "Technology" ]
819
[ "Jewellery components", "Components" ]
55,714,452
https://en.wikipedia.org/wiki/Lateral%20accessory%20lobes
Lateral accessory lobes, or LALs are paired, symmetrical, systems of synaptic neuropils that exist in the brains of insects and other arthropods. Lateral accessory lobes are located inferiorly and laterally from ellipsoid body, anteriorly and laterally from the bulb. In the frontal section of the arthropod brain the LALs are projected as two triangles, called lateral triangles. The LALs have roughly pyramidal shape. Anatomy The LALs are located behind the antennal lobes and in front of the ventral nervous complex. The two LALs, left and right, are interconnected by the commissure of lateral accessory lobes. Synonyms Lateral accessory lobes are synonymous with the ventral part of the inferior dorsofrontal protocerebrum of the arthropod brain. Physiology and function There is some evidence that lateral accessory lobes take part in the sensory processing and integration in the arthropod brain. Proposed homology of the arthropod LAL and the thalamus of the chordates In 2013, one author published a controversial article equating some parts of arthropod central complex with the basal ganglia of chordates, and the LALs of the arthropods with the nigro-receptive part of the thalamus of chordates. The proposed homology was based on the anatomical analogy in the location of those structures in the brain, on the analogous physiological functions of those structures, and, more importantly, on the patterns of gene expression during embryogenesis and later stages of ontogenesis of those structures. References Thalamus Nervous system Entomology
Lateral accessory lobes
[ "Biology" ]
334
[ "Organ systems", "Nervous system" ]
38,454,237
https://en.wikipedia.org/wiki/CAMECA
CAMECA is a manufacturer of scientific instruments, namely material analysis instruments based on charged particle beams (ions or electrons). History The company was founded as a subsidiary of Compagnie générale de la télégraphie sans fil (CSF), in 1929, as "Radio-cinéma" at the time of the emergence of the talkies. Its job was to design and manufacture movie projectors for large cinema screening rooms. After World War II, spurred on by Maurice Ponte, director of CSF and a future member of the French Academy of Sciences, the company manufactured scientific instruments developed in French university laboratories: the spark spectrometer at the beginning of the 1950s, the Castaing microprobe from 1958, and secondary ion analysers from 1968. Also in the early 1950s the company established its factory in Courbevoie, on the boulevard Saint-Denis, where it remained for more than fifty years. The spark spectrometer was abandoned at the end of the 1950s. The name CAMECA, an acronym, was adopted in 1954. The business of movie projectors stopped soon after 1960, but in the 1960s there was a short-lived revival of the film business through the adventure of the Scopitone. Since 1977, the year that the IMS3F was launched, CAMECA has had a virtual monopoly in the field of magnetic SIMS, but it shares the market for the Castaing microprobe with Japanese competitors, including JEOL. The semiconductor industry is a very important outlet for magnetic SIMS. At the end of the 20th century, CAMECA gained a foothold in a third analytical technique, the tomographic atom probe. In 1987, CAMECA left the Thomson-CSF group and was the subject of a leveraged buyout by its management and employees. In 2001, the company was sold to a small French private equity fund, and then to another private equity fund controlled by the Carlyle Group, which sold CAMECA to Ametek, which merged CAMECA with Imago Scientific Instruments in 2010. Since 1975, the number of employees has been about 200. Subsidiaries were created in the United States, Japan, Korea, Taiwan and Germany. These subsidiaries engage in commercial and maintenance activities and employ a few dozen people. The company in 2011 According to the website of the company, in 2011 its business was in two different markets: scientific instruments dedicated to research activities, and metrology for the semiconductor industry. The latter market addresses semiconductor fabrication cleanrooms with a dedicated version of the Castaing electron probe based on the LEXES technique (low energy electron induced X-ray emission spectrometry) developed at the beginning of the 21st century. CAMECA instruments are well known in academic communities, including the fields of geochemistry and planetary science, and CAMECA has been cited dozens of times in scientific journals such as Nature and Science. In 2010, Ametek purchased the Wisconsin start-up Imago Scientific Instruments and attached it to CAMECA. CAMECA therefore holds a monopoly on the manufacture of atom probe instruments, sold under the LEAP brand name. References External links CAMECA website The history of CAMECA (in French) Instrument-making corporations Manufacturing companies of France Equipment semiconductor companies Technology companies of France French brands The Carlyle Group companies Manufacturing companies established in 1929
CAMECA
[ "Engineering" ]
663
[ "Equipment semiconductor companies", "Semiconductor fabrication equipment" ]
38,454,487
https://en.wikipedia.org/wiki/Susan%20L.%20Solomon
Susan Lynn Solomon (August 23, 1951 – September 8, 2022) was an American executive and lawyer. She was the chief executive officer and co-founder of the New York Stem Cell Foundation (NYSCF). Early life and education Solomon was born in Brooklyn on August 23, 1951. Her father, Seymour Solomon, was the co-founder of Vanguard Records alongside his brother, Maynard; her mother, Ruth (Katz), was a pianist and worked as a manager of concert musicians. Solomon attended the Fieldston School. She then studied history at New York University, graduating with a bachelor's degree in 1975. Three years later, she obtained a Juris Doctor from Rutgers University School of Law, where she was an editor of the Rutgers Law Review. Career Solomon started her career as an attorney at Debevoise & Plimpton, and worked in the legal profession until 1981. She subsequently held executive positions at MacAndrews & Forbes and APAX (formerly MMG Patricof and Co.). She was the founder and President of Sony Worldwide Networks, the chairman and CEO of Lancit Media Productions, an Emmy award-winning television production company, and then served as the founding CEO of Sotheby's website prior to founding her own strategic management consulting firm Solomon Partners LLC in 2000. Solomon was a founding Board member of the Global Alliance for iPSC Therapies (GAiT) and New Yorkers for the Advancement of Medical Research (NYAMR). She served on the Board of the College Diabetes Network and was a board member for the Centre for Commercialization of Regenerative Medicine. She also served on the board of directors of the Regional Plan Association of New York, where she was a member of the nominating and governance committee. She previously sat on the strategic planning committee for the Empire State Stem Cell Board. NYSCF Solomon co-founded NYSCF in 2005. She had earlier started work as a health-care advocate in 1992, when her son was diagnosed with type 1 diabetes. As a result of her son's diagnosis and then her mother's death from cancer in 2004, she sought to find a way in which the most advanced medical research could translate more quickly into cures. In conversations with clinicians and scientists, Solomon identified stem cells as the most promising way to address unmet patient needs. At the time of her death, NYSCF was one of the biggest nonprofits dedicated to stem cell research, employing 45 scientists at their Research Institute in Manhattan and funding an additional 75 scientists around the world. Personal life Solomon married her first husband, Gary Hirsh, in 1968. Together, they had one son. They divorced and she later married Paul Goldberger in 1980. They remained married until her death, and had two children. Solomon died on September 8, 2022, at her home in Amagansett, New York. She was 71, and suffered from ovarian cancer prior to her death. Awards Living Landmark Honoree, New York Landmarks Conservancy, 2015 Stem Cell Action Leadership Award, Genetics Policy Institute, 2012 New York State Women of Excellence Award 2008 Triumph Award, The Brooke Ellison Foundation, 2008 Publications Articles "Institutional Report Cards for Gender Equality: Lessons Learned from Benchmarking Efforts for Women in STEM." Cell Stem Cell (September 9, 2019). "Automated, high-throughput derivation, characterization and differentiation of induced pluripotent stem cells." Nature Methods (August 3, 2015). "Cell Therapy Worldwide: An Incipient Revolution." Regenerative Medicine (March 1, 2015). 
"7 Actionable Strategies For Advancing Women in Science, Engineering, and Medicine." Cell Stem Cell (March 5, 2015). "Human Oocytes Reprogram Adult Somatic Nuclei to Diploid Pluripotent Stem Cells." Nature (April 28, 2014). "Twenty years of the International Society for Cellular Therapies: the past, present and future of cellular therapy clinical development." Cytotherapy (April 14, 2014). "The New York Stem Cell Foundation. Interview with Susan Solomon." Regenerative Medicine (November 2012). "The New York Stem Cell Foundation: Accelerating Cures Through Stem Cell Research." Stem Cells Translational Medicine (April 2012). "The sixth annual translational stem cell research conference of the New York Stem Cell Foundation." Annals of the New York Academy of Sciences (May 2012). Case Comment "Monty Python and the Lanham Act: In Search of the Moral Right." Rutgers Law Review (Winter 1977) 3(2). Editorials "Raising the Standards of Stem Cell Line Quality." Nature Cell Biology (March 31, 2016). "Banking on iPSC—Is it Doable and is it Worthwhile". Stem Cell Research and Reviews (December 17, 2014). "#StemCells: Education, Innovation, and Outreach." Cell Stem Cell: Voices (November 7, 2013). "The New Nonprofit: A Model for Innovation Across Sectors." Smart Assets: The Philanthropy New York Blog (March 14, 2013). "Stem Cell Research: Science, Not Politics." The Huffington Post (September 21, 2010). "Opinion: Science Shoved Aside in Stem Cell Ruling." AOL News (August 25, 2010). "Opportunity for Excellence: The Critical Role of State Programs in the New Federal Landscape." The Huffington Post (June 12, 2009). "Patients Before Politics: Putting Science First." The Huffington Post (March 9, 2009). "The Stem Cell Wars Are Not Over." The Huffington Post (November 30, 2007). "After Bush's Veto, What is Next for Stem Cell Research?" The Huffington Post (June 20, 2007). "Spitzer Shows Leadership in Stem Cell Research." Times Union (February 18, 2007). "Today's Stem Cell Bill: A Politically Expedient Approach." The Huffington Post (July 18, 2006). References 1951 births 2022 deaths 21st-century American women lawyers 21st-century American lawyers American health activists American nonprofit chief executives American women chief executives Businesspeople from New York City Deaths from ovarian cancer in New York (state) Ethical Culture Fieldston School alumni Lawyers from Brooklyn New York University alumni Rutgers University alumni 21st-century American businesswomen 21st-century American businesspeople Stem cell research
Susan L. Solomon
[ "Chemistry", "Biology" ]
1,310
[ "Translational medicine", "Tissue engineering", "Stem cell research" ]
38,454,848
https://en.wikipedia.org/wiki/Michelangelo%20Hand
The Michelangelo Hand is a fully articulated robotic hand prosthesis developed by the German prosthetics company Ottobock and its American partner Advanced Arm Dynamics. It is the first prosthesis to feature an electronically actuated thumb that mimics natural human hand movements. The Michelangelo Hand can be used for a variety of delicate everyday tasks. It was first fitted to an Austrian elective amputee in July 2010 and has been in use by military and civilian amputees in the United States and the United Kingdom since 2011. Design and development The Michelangelo Hand's development was begun by the German prosthetics manufacturer Ottobock. In 2008, the American company Advanced Arm Dynamics became involved with testing and further refinement of the prosthesis. The prosthesis is battery-powered and can be used for up to 20 hours between charges. Constructed of metal and plastic, it is designed with a natural, anthropomorphic aesthetic and can be custom-fitted for each user. Its motions are controlled by built-in electrodes, which detect the movements of the user's remaining arm muscles and interpret them using electromyography software (a simplified illustrative sketch of this kind of control scheme is given below). The fingers can form numerous naturalistic configurations to hold, grip, or pinch objects. The Michelangelo Hand is capable of moving with enough precision to conduct delicate tasks such as cooking, ironing, and opening a toothpaste tube, but can also exert enough strength to use an automobile's steering wheel. Skin-toned cosmetic gloves are also available for the prosthesis. In 2013, the Michelangelo Hand had a unit cost of around £47,000 (US$73,800). Users Austrian electrician Patrick Mayrhofer suffered serious injuries to his hands at the age of 20 when he touched a 6,000-volt power line in February 2008. After unsuccessful attempts to reconstruct his left hand, it was amputated below the elbow in July 2010 and he became the first patient in the world to be fitted with a Michelangelo Hand. He joined Ottobock three years later, helping its customers learn to use their prostheses. Having started para-snowboarding in 2012, Mayrhofer was named Paralympic Austrian Sports Personality of the Year after winning a gold medal in banked slalom at the 2015 Para-Snowboard World Championships. He went on to win the Paralympic silver medal in banked slalom at the 2018 Winter Paralympics. Numerous American soldiers who suffered limb amputation in combat have received Michelangelo Hands since 2011. In January 2012, Matt Rezink of Wisconsin became the first American civilian to receive a unit. In January 2013, Chris Taylor, a British service engineer who had lost his right hand in a jet ski accident in 2009, became the first UK citizen to be fitted with a Michelangelo Hand. By 2013, the hand was offered by several British prosthetic services companies, including Dorset Orthopaedic. See also Boston Digital Arm, an American-made myoelectric prosthetic arm References External links Advanced Arm Dynamics website Otto Bock Michelangelo page Bionics Prosthetics Biomedical engineering Medical equipment 2011 robots
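Illustrative note on myoelectric control (a minimal sketch only, not Ottobock's actual software): the control principle described above, in which surface electrodes pick up residual-muscle EMG activity that is then mapped to hand commands, can be sketched as a generic two-site scheme. All signal data, channel roles, and the threshold value below are hypothetical.

# Minimal two-site myoelectric control sketch (hypothetical; not Ottobock's algorithm).
# Surface-EMG amplitude from two residual-limb muscle sites is mapped to open/close/hold.

import numpy as np

def activation(emg_window: np.ndarray) -> float:
    """Mean rectified amplitude of a short EMG window, a crude activation estimate."""
    return float(np.mean(np.abs(emg_window)))

def hand_command(flexor_emg: np.ndarray, extensor_emg: np.ndarray, threshold: float = 0.2) -> str:
    """Map the more active above-threshold muscle site to a hand command."""
    f, e = activation(flexor_emg), activation(extensor_emg)
    if max(f, e) < threshold:
        return "hold"                      # neither site active enough: keep current grip
    return "close" if f > e else "open"

# Hypothetical demo: a strong flexor burst with a quiet extensor should yield "close".
rng = np.random.default_rng(0)
flexor = rng.normal(0.0, 0.5, 200)         # active site (larger amplitude)
extensor = rng.normal(0.0, 0.05, 200)      # quiet site
print(hand_command(flexor, extensor))      # -> "close"

Real myoelectric controllers add per-user calibration, filtering, and proportional speed control, but the threshold-and-compare structure above captures the basic idea of mapping muscle activity to grip commands.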
Michelangelo Hand
[ "Engineering", "Biology" ]
626
[ "Biological engineering", "Biomedical engineering", "Bionics", "Medical equipment", "Medical technology" ]
51,277,173
https://en.wikipedia.org/wiki/Cellular%20agriculture
Cellular agriculture focuses on the production of agricultural products from cell cultures using a combination of biotechnology, tissue engineering, molecular biology, and synthetic biology to create and design new methods of producing proteins, fats, and tissues that would otherwise come from traditional agriculture. Most of the industry is focused on animal products such as meat, milk, and eggs, produced in cell culture rather than by raising and slaughtering farmed livestock, a practice associated with substantial global problems relating to the environmental impact of meat production, animal welfare, food security, and human health. Cellular agriculture is a field of the biobased economy. The best-known cellular agriculture concept is cultured meat. History Although cellular agriculture is a nascent scientific discipline, cellular agriculture products were first commercialized in the late 20th century with insulin and rennet. On March 24, 1990, the FDA approved a bacterium that had been genetically engineered to produce rennet, making it the first genetically engineered product for food. Rennet is a mixture of enzymes that turns milk into curds and whey in cheese making. Traditionally, rennet is extracted from the inner lining of the fourth stomach of calves. Today, cheese-making processes use rennet enzymes from genetically engineered bacteria, fungi, or yeasts because they are unadulterated, more consistent, and less expensive than animal-derived rennet. In 2004, Jason Matheny founded New Harvest, whose mission is to "accelerate breakthroughs in cellular agriculture". New Harvest is the only organization focused exclusively on advancing the field of cellular agriculture and provided the first PhD funding specifically for cellular agriculture, at Tufts University. Since 2014, IndieBio, a synthetic biology accelerator in San Francisco, has incubated several cellular agriculture startups, hosting Muufri (making milk from cell culture, now Perfect Day Foods), The EVERY Company (making egg whites from cell culture), Gelzen (making gelatin from bacteria and yeast, now Geltor), Afineur (making cultured coffee beans), and Pembient (making rhino horn). In 2015, Mercy for Animals created The Good Food Institute, which promotes plant-based and cellular agriculture. Also in 2015, Isha Datar coined the term "cellular agriculture" (often shortened to "cell ag") in a New Harvest Facebook group. On July 13, 2016, New Harvest hosted the world's first international conference on cellular agriculture in San Francisco, California. The day after the conference, New Harvest hosted the first closed-door workshop for industry, academic, and government stakeholders in cellular agriculture. Research tools Several key research tools are at the foundation of research in cellular agriculture. These include: Cell lines A fundamental missing piece in the advancement of cultured meat is the availability of appropriate cellular materials. While some methods and protocols from human and mouse cell culture may apply to agricultural cellular materials, it has become clear that most do not. This is evidenced by the fact that established protocols for creating human and mouse embryonic stem cells have not succeeded in establishing ungulate embryonic stem cell lines. 
The ideal criteria for cell lines for the purpose of cultured meat production include immortality, high proliferative ability, surface independence, serum independence, and tissue-forming ability. The specific cell types most suitable for cellular agriculture are likely to differ from species to species. Growth media Conventional methods for growing animal tissue in culture involve the use of fetal bovine serum (FBS). FBS is a blood product extracted from fetal calves. This product supplies cells with nutrients and stimulating growth factors, but is unsustainable and resource-heavy to produce, with large batch-to-batch variation. Cultured meat companies have been putting significant resources into alternative growth media. After the creation of the cell lines, efforts to remove serum from the growth media are key to the advancement of cellular agriculture, as fetal bovine serum has been the target of most criticisms of cellular agriculture and cultured meat production. It is likely that two different media formulations will be required for each cell type: a proliferation medium for growth and a differentiation medium for maturation. Scaling technologies As biotechnological processes are scaled, experiments become increasingly expensive, as bioreactors of increasing volume have to be created. Each increase in size requires a re-optimization of various parameters such as unit operations, fluid dynamics, mass transfer, and reaction kinetics (a simplified numerical illustration appears below). Scaffold materials For cells to form tissue, it is helpful for a material scaffold to be added to provide structure. Scaffolds are crucial for cells to form tissues larger than 100 μm across. An ideal scaffold must be non-toxic to the cells and edible, and must allow for the flow of nutrients and oxygen. It must also be cheap and easy to produce on a large scale without the need for animals. 3D tissue systems The final phase for creating cultured meat involves bringing together all the previous pieces of research to create large (>100 μm in diameter) pieces of tissue that can be made of mass-produced cells without the need for serum, with a scaffold that is suitable both for the cells and for human consumption. Applications While the majority of the discussion has been around food applications, particularly cultured meat, cellular agriculture can be used to create any kind of agricultural product, including those that never involved animals to begin with, like Ginkgo Bioworks' fragrances. Meat Cultured meat (also known by other names) is meat produced by in vitro cell cultures of animal cells. It is a form of cellular agriculture, with such agricultural methods being explored in the context of increased consumer demand for protein. Cultured meat is produced using tissue engineering techniques traditionally used in regenerative medicine. The concept of cultured meat was introduced to wider audiences by Jason Matheny in the early 2000s after he co-authored a paper on cultured meat production and created New Harvest, the world's first nonprofit organization dedicated to in-vitro meat research. Cultured meat may have the potential to address substantial global problems of the environmental impact of meat production, animal welfare, food security, and human health. Specifically, it can be thought of in the context of the mitigation of climate change. In 2013, Professor Mark Post at Maastricht University pioneered a proof-of-concept for cultured meat by creating the first hamburger patty grown directly from cells. 
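Illustrative note on bioreactor scale-up (a minimal sketch only, referring back to the Scaling technologies point above; the vessel sizes, stirring speeds, and the constant power-per-volume criterion are assumptions for illustration, not values from any cited source). Under geometric similarity in the turbulent regime, impeller power draw scales roughly as P ~ N^3 * D^5 and volume as V ~ D^3, so holding power per volume constant forces the stirring speed down as the vessel grows, while the impeller tip speed, a rough proxy for shear on fragile animal cells, goes up:

# Minimal, illustrative scale-up sketch (hypothetical numbers; not a design tool).
# Assumes geometric similarity and a constant power-per-volume (P/V) criterion in the
# turbulent regime, where P ~ N^3 * D^5 and V ~ D^3, so fixing P/V implies
# N2 = N1 * (D1 / D2) ** (2/3).

import math

def scale_up(n1_rpm: float, d1_m: float, v1_l: float, v2_l: float):
    """Return impeller diameter, stirring speed, and tip speeds at the larger scale."""
    linear = (v2_l / v1_l) ** (1.0 / 3.0)      # linear scale factor under geometric similarity
    d2 = d1_m * linear                         # larger impeller diameter
    n2 = n1_rpm * (d1_m / d2) ** (2.0 / 3.0)   # constant P/V -> lower stirring speed
    tip1 = math.pi * d1_m * n1_rpm / 60.0      # tip speed in m/s, a rough proxy for shear
    tip2 = math.pi * d2 * n2 / 60.0
    return d2, n2, tip1, tip2

# Hypothetical example: a 2 L bench bioreactor scaled to a 2,000 L production vessel.
d2, n2, tip1, tip2 = scale_up(n1_rpm=300.0, d1_m=0.06, v1_l=2.0, v2_l=2000.0)
print(f"impeller diameter: 0.06 m -> {d2:.2f} m")              # -> 0.60 m
print(f"stirring speed:    300 rpm -> {n2:.0f} rpm")           # -> roughly 65 rpm
print(f"tip speed:         {tip1:.2f} m/s -> {tip2:.2f} m/s")  # -> roughly doubles

In this hypothetical case the tip speed roughly doubles even though power per volume is held constant, which is one concrete way the mixing, mass-transfer, and shear trade-offs mentioned above must be re-balanced at every scale-up step for shear-sensitive animal cells.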
Since then, other cultured meat prototypes have gained media attention: SuperMeat opened a farm-to-fork restaurant called "The Chicken" in Tel Aviv to test consumer reaction to its "Chicken" burger, while the "world's first commercial sale of cell-cultured meat" occurred in December 2020 at the Singapore restaurant "1880", where cultured meat manufactured by the US firm Eat Just was sold. While most efforts in the space focus on common meats such as pork, beef, and chicken, which comprise the bulk of consumption in developed countries, some new companies such as Orbillion Bio have focused on high-end or unusual meats, including elk, lamb, bison, and the prized Wagyu strain of beef. Avant Meats has brought cultured grouper fish to market, while other companies have started to pursue cultivating additional fish species and other seafood. The production process is constantly evolving, driven by multiple companies and research institutions. The applications of cultured meat have led to ethical, health, environmental, cultural, and economic discussions. In terms of market strength, data published by the non-governmental organization Good Food Institute found that in 2021 cultivated meat companies attracted $140 million in Europe alone. Currently, cultured meat is served at special events and a few high-end restaurants; mass production of cultured meat has not yet begun. In 2020, the world's first regulatory approval for a cultivated meat product was awarded by the Government of Singapore. The chicken meat was grown in a bioreactor in a fluid of amino acids, sugar, and salt. The chicken nugget products are roughly 70% lab-grown meat, while the remainder is made from mung bean proteins and other ingredients. The company pledged to strive for price parity with premium "restaurant" chicken servings. Dairy Perfect Day is a San Francisco-based startup that started as the New Harvest Dairy Project and was incubated by IndieBio in 2014. Perfect Day is making dairy from yeast instead of cows. The company changed its name from Muufri to Perfect Day in August 2016. New Culture is a San Francisco-based startup that was incubated by IndieBio in 2019. New Culture makes mozzarella cheese using casein protein (dairy protein) made by microbes instead of cows. Real Vegan Cheese, based in the San Francisco Bay Area, is a grassroots, non-profit open-science collective working out of two open community labs and was spun out of the International Genetically Engineered Machine (iGEM) competition in 2014. Real Vegan Cheese is making cheese using casein protein (dairy protein) made by microbes instead of cows. Formo, based in Germany, is a startup making dairy products using microbial precision fermentation. Imagindairy, based in Israel, is a startup attempting to create milk proteins from bioengineered yeast. In 2024, it received FDA and Israeli Ministry of Health approval for its products. Remilk, based in Israel, is a startup attempting to create milk proteins from bioengineered yeast. In 2022, it received FDA approval for its products. Wilk, based in Israel, is a startup attempting to produce human breast milk ingredients using cells from breast reduction surgeries, to supplement infant formulas. NewMoo, based in Israel, is a startup attempting to create casein protein within the seeds of genetically modified plants. Real Deal Milk, based in Spain, is a startup attempting to create milk proteins from bioengineered microbes. Opalia, based in Canada, is a startup attempting to produce milk from cows' mammary cells. 
De Novo Dairy, based in South Africa, is a startup attempting to produce human breast milk ingredients using cells from breast reduction surgeries, to supplement infant formulas. Cultivated Biosciences, based in Switzerland, is a startup attempting to produce fats from non-GMO yeast to make plant-based milk creamier. Naturopy, based in France, is a startup attempting to create milk proteins from bioengineered yeast. Eggs The EVERY Company is a San Francisco-based startup that started as the New Harvest Egg Project and was incubated by IndieBio in 2015. The EVERY Company is making egg whites from yeast instead of eggs. Gelatin Geltor is a San Francisco-based startup that was incubated by IndieBio in 2015. Geltor is developing a proprietary protein production platform that uses bacteria and yeast to produce gelatin. Coffee In 2021, media outlets reported that the world's first synthetic coffee products had been created by two biotechnology companies, still awaiting regulatory approvals for near-term commercialization. Such products, which can be produced via cellular agriculture in bioreactors and for which multiple companies have acquired substantial R&D funding, may have effects, composition, and taste equal or highly similar to those of natural products, but use less water, generate lower carbon emissions, require less labor, and cause no deforestation. Cell-cultured coffee is a much more radical approach to the multiple challenges that traditional coffee is facing. Although 100% coffee, cell-cultured coffee is cultivated in the lab from coffee cells to deliver, after drying, a powder that can be roasted and extracted. Horseshoe crab blood Sothic Bioscience is a Cork-based startup incubated by IndieBio in 2015. Sothic Bioscience is building a platform for biosynthetic horseshoe crab blood production. Horseshoe crab blood contains Limulus amebocyte lysate (LAL), which is the gold standard for detecting bacterial endotoxins when validating medical equipment and medication. Fish Cellular agriculture could be used for commercial fish feed. Finless Foods is working to develop and mass-manufacture marine animal food products. Wild Type is a San Francisco-based startup focused on creating cultured meat to address issues such as climate change, food security, and health. Fragrances Ginkgo Bioworks is a Boston-based organism design company culturing fragrances and designing custom microbes. Silk Spiber is a Japan-based company decoding the gene responsible for the production of fibroin in spiders and then bioengineering bacteria with recombinant DNA to produce the protein, which it then spins into artificial silk. Bolt Threads is a California-based company creating engineered silk fibers based on proteins found in spider silk that can be produced at commercial scale. Bolt examines the DNA of spiders and then replicates those genetic sequences in engineered yeast to create a similar silk fiber. Bolt's silk is made primarily of sugar, water, salts, and yeast. Through a process called wet spinning, this liquid is spun into fiber, similar to the way fibers like acrylic and rayon are made. Leather Modern Meadow is a Brooklyn-based startup growing collagen, a protein found in animal skin, to make biofabricated leather. Pet food The Clean Meat cluster lists Because Animals, Wild Earth, and Bond Pet Foods as participants in developing pet foods that use cultured meat. Wood In 2022, scientists reported the first 3D-printed lab-grown wood. It is unclear if it could ever be used on a commercial scale (e.g. 
with sufficient production efficiency and quality). Academic programs New Harvest Cultured Tissue Fellowship at Tufts University A joint program between New Harvest and the Tissue Engineering Research Center (TERC), an NIH-supported initiative established in 2004 to advance tissue engineering. The fellowship program offers funding for master's and PhD students at Tufts University who are interested in bioengineering tunable structures, mechanics, and biology into 3D tissue systems related to their utility as foods. Conferences New Harvest Conference New Harvest brings together pioneers in cellular agriculture and newly interested parties from industry and academia to share lessons relevant to cellular agriculture's path forward. The conference has been held in San Francisco, California, and Brooklyn, New York, and is currently held in Cambridge, Massachusetts. Industrializing Cell-Based Meats & Seafood Summit The 3rd Annual Industrializing Cell-Based Meats & Seafood Summit is the only industry-led forum uniting key decision-makers from biotech and food tech, leading food and meat companies, and investors to discuss key operational and technical challenges for the development of cell-based meats and seafood. International Scientific Conference on Cultured Meat The International Scientific Conference on Cultured Meat began in collaboration with Maastricht University in 2015 and brings together an international group of scientists and industry experts to present the latest research and developments in cultured meat. It takes place annually in Maastricht, the Netherlands. Good Food Conference The GFI conference is an event focused on accelerating the commercialization of plant-based and clean meat. Cultured Meat Symposium The Cultured Meat Symposium is a conference held in Silicon Valley highlighting top industry insights of the clean meat revolution. Alternative Protein Show The Alternative Protein Show is a "networking event" to facilitate collaboration in the "New Protein Landscape", which includes plant-based and cellular agriculture. New Food Conference The New Food Conference is an industry-oriented event that aims to accelerate and empower innovative alternatives to animal products by bringing together key stakeholders. It is Europe's first and largest conference on new-protein solutions. In the media Books Clean Meat: How Growing Meat Without Animals Will Revolutionize Dinner and the World is a book about cellular agriculture written by animal activist Paul Shapiro. The book reviews startup companies that are currently working towards mass-producing cellular agriculture products. Meat Planet: Artificial Flesh and the Future of Food by Benjamin Aldes Wurgaft is the result of five years of research into cellular agriculture and explores the quest to generate meat in the lab, asking what it means to imagine that this is the future of food. It is published by the University of California Press. Where do hot dogs come from? A Children's Book about Cellular Agriculture, written by Anita Broellochs and Alex Shirazi and illustrated by Gabriel Gonzalez, turns a family BBQ into a scientific story explaining how hot dogs are made with cellular agriculture technologies. The book was launched on Kickstarter on July 20, 2021. Podcasts Cultured Meat and Future Food is a podcast about clean meat and future food technologies hosted by Alex Shirazi, a mobile user experience designer based in Menlo Park, California, whose current projects focus on retail technology. 
The podcast features interviews with industry professionals from startups, investors, and non-profits working on cellular agriculture. Similar fields of research and production Microbial food cultures and genetically engineered microbial production (e.g. of spider silk or solar-energy-based protein powder) Controlled self-assembly of plant proteins (e.g. of spider-silk-like, plant-protein-based plastic alternatives) Cell-free artificial synthesis (see Biobased economy#Agriculture) Imitation foods (e.g. meat analogues and milk substitutes) References External links Overview of relevant bibliography New Harvest Cellular Agriculture Society Further reading Clean meat, consumer attitudes and the transition to a cellular agriculture food economy A Closer Look at Cellular Agriculture and the Processes Defining It As lab-grown meat advances, U.S. lawmakers call for regulation Cellular Agriculture: A Way to Feed Tomorrow's Smart City? Cellular Agriculture, Intentional Imperfection And 'Post Truth': The Transformative Food Trends Of 2017 The 4 Key Biotechnologies Needed to Get Cellular Agriculture to Commercialization Cellular agriculture: Growing meat in a lab setting How Might Cellular Agriculture Impact the Livestock, Dairy, and Poultry Industries? Biological engineering Meat
Cellular agriculture
[ "Engineering", "Biology" ]
3,628
[ "Biological engineering", "Cellular agriculture" ]