Dataset schema: id (int64, 39 to 79M); url (string, length 32 to 168); text (string, length 7 to 145k); source (string, length 2 to 105); categories (list, length 1 to 6); token_count (int64, 3 to 32.2k); subcategories (list, length 0 to 27).
1,162,781
https://en.wikipedia.org/wiki/Chemical%20reactor
A chemical reactor is an enclosed volume in which a chemical reaction takes place. In chemical engineering, it is generally understood to be a process vessel used to carry out a chemical reaction, which is one of the classic unit operations in chemical process analysis. The design of a chemical reactor deals with multiple aspects of chemical engineering. Chemical engineers design reactors to maximize net present value for the given reaction. Designers ensure that the reaction proceeds with the highest efficiency towards the desired output product, producing the highest yield of product while requiring the least amount of money to purchase and operate. Normal operating expenses include energy input, energy removal, raw material costs, labor, etc. Energy changes can come in the form of heating or cooling, pumping to increase pressure, frictional pressure loss or agitation. Chemical reaction engineering is the branch of chemical engineering which deals with chemical reactors and their design, especially by application of chemical kinetics to industrial systems. Overview The most common basic types of chemical reactors are tanks (where the reactants mix in the whole volume) and pipes or tubes (for laminar flow reactors and plug flow reactors). Both types can be used as continuous reactors or batch reactors, and either may accommodate one or more solids (reagents, catalysts, or inert materials), but the reagents and products are typically fluids (liquids or gases). Reactors in continuous processes are typically run at steady-state, whereas reactors in batch processes are necessarily operated in a transient state. When a reactor is brought into operation, either for the first time or after a shutdown, it is in a transient state, and key process variables change with time. There are three idealised models used to estimate the most important process variables of different chemical reactors: the batch reactor model, the continuous stirred-tank reactor model (CSTR), and the plug flow reactor model (PFR). Many real-world reactors can be modeled as a combination of these basic types. Key process variables include residence time (τ, lower case Greek tau), volume (V), temperature (T), pressure (P), concentrations of chemical species (C1, C2, C3, ..., Cn), and heat transfer coefficients (h, U). A tubular reactor can often be a packed bed. In this case, the tube or channel contains particles or pellets, usually a solid catalyst. The reactants, in liquid or gas phase, are pumped through the catalyst bed. A chemical reactor may also be a fluidized bed; see Fluidized bed reactor. Chemical reactions occurring in a reactor may be exothermic, meaning giving off heat, or endothermic, meaning absorbing heat. A tank reactor may have a cooling or heating jacket or cooling or heating coils (tubes) wrapped around the outside of its vessel wall to cool down or heat up the contents, while tubular reactors can be designed like heat exchangers if the reaction is strongly exothermic, or like furnaces if the reaction is strongly endothermic. Types Batch reactor The simplest type of reactor is a batch reactor. Materials are loaded into a batch reactor, and the reaction proceeds with time. A batch reactor does not reach a steady state, and control of temperature, pressure and volume is often necessary. Many batch reactors therefore have ports for sensors and material input and output. Batch reactors are typically used in small-scale production and reactions with biological materials, such as in brewing, pulping, and production of enzymes.
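Since a batch reactor simply runs the reaction out in time, sizing it largely reduces to asking how long the charge must stay in the vessel. Below is a minimal sketch, assuming an irreversible first-order reaction and a made-up rate constant (neither taken from the article):

```python
# Illustrative sketch: for an assumed irreversible first-order reaction A -> products,
# the batch time needed to reach a target conversion X is t = ln(1/(1-X)) / k.
import math

def batch_time_h(k_per_h: float, conversion: float) -> float:
    """Reaction time (h) for a first-order batch reaction to reach the given conversion."""
    return math.log(1.0 / (1.0 - conversion)) / k_per_h

k = 0.5  # assumed rate constant, 1/h (example value only)
for X in (0.50, 0.90, 0.99):
    print(f"X = {X:.0%}: t = {batch_time_h(k, X):.1f} h")
# Going from 90% to 99% conversion roughly doubles the required time in this example,
# one reason batch reactors suit small-scale production runs.
```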
One example of a batch reactor is a pressure reactor. CSTR (continuous stirred-tank reactor) In a CSTR, one or more fluid reagents are introduced into a tank reactor which is typically stirred with an impeller to ensure proper mixing of the reagents while the reactor effluent is removed. Dividing the volume of the tank by the average volumetric flow rate through the tank gives the space time, or the time required to process one reactor volume of fluid. Using chemical kinetics, the reaction's expected percent completion can be calculated. Some important aspects of the CSTR: At steady-state, the mass flow rate in must equal the mass flow rate out, otherwise the tank will overflow or go empty (transient state). While the reactor is in a transient state the model equation must be derived from the differential mass and energy balances. The reaction proceeds at the reaction rate associated with the final (output) concentration, since the concentration is assumed to be homogenous throughout the reactor. Often, it is economically beneficial to operate several CSTRs in series. This allows, for example, the first CSTR to operate at a higher reagent concentration and therefore a higher reaction rate. In these cases, the sizes of the reactors may be varied in order to minimize the total capital investment required to implement the process. It can be demonstrated that an infinite number of infinitely small CSTRs operating in series would be equivalent to a PFR. The behavior of a CSTR is often approximated or modeled by that of a Continuous Ideally Stirred-Tank Reactor (CISTR). All calculations performed with CISTRs assume perfect mixing. If the residence time is 5-10 times the mixing time, this approximation is considered valid for engineering purposes. The CISTR model is often used to simplify engineering calculations and can be used to describe research reactors. In practice it can only be approached, particularly in industrial size reactors in which the mixing time may be very large. A loop reactor is a hybrid type of catalytic reactor that physically resembles a tubular reactor, but operates like a CSTR. The reaction mixture is circulated in a loop of tube, surrounded by a jacket for cooling or heating, and there is a continuous flow of starting material in and product out. PFR (plug flow reactor) In a PFR, sometimes called continuous tubular reactor (CTR), one or more fluid reagents are pumped through a pipe or tube. The chemical reaction proceeds as the reagents travel through the PFR. In this type of reactor, the changing reaction rate creates a gradient with respect to distance traversed; at the inlet to the PFR the rate is very high, but as the concentrations of the reagents decrease and the concentration of the product(s) increases the reaction rate slows. Some important aspects of the PFR: The idealized PFR model assumes no axial mixing: any element of fluid traveling through the reactor doesn't mix with fluid upstream or downstream from it, as implied by the term "plug flow". Reagents may be introduced into the PFR at locations in the reactor other than the inlet. In this way, a higher efficiency may be obtained, or the size and cost of the PFR may be reduced. A PFR has a higher theoretical efficiency than a CSTR of the same volume. That is, given the same space-time (or residence time), a reaction will proceed to a higher percentage completion in a PFR than in a CSTR. This is not always true for reversible reactions. 
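To make the space-time definition and the series-of-CSTRs remark above concrete, here is a minimal sketch assuming an irreversible first-order, constant-density reaction; the rate constant and flow figures are invented example values, not data from the article:

```python
# Sketch of the idealized design equations for a first-order, constant-density reaction
# (rate = k * C_A). All numbers are illustrative assumptions.
import math

def space_time(volume_m3: float, flow_m3_per_h: float) -> float:
    """Space time tau = reactor volume / volumetric flow rate, as defined above."""
    return volume_m3 / flow_m3_per_h

def cstr_series_conversion(k: float, tau_total: float, n_tanks: int) -> float:
    """Conversion from n equal ideal CSTRs in series sharing a total space time tau_total."""
    return 1.0 - 1.0 / (1.0 + k * tau_total / n_tanks) ** n_tanks

def pfr_conversion(k: float, tau: float) -> float:
    """Conversion from an ideal plug flow reactor with space time tau."""
    return 1.0 - math.exp(-k * tau)

k = 0.5                       # assumed rate constant, 1/h
tau = space_time(10.0, 2.5)   # assumed 10 m^3 tank fed at 2.5 m^3/h -> tau = 4 h
for n in (1, 2, 5, 50):
    print(f"{n:>2} CSTR(s) in series: X = {cstr_series_conversion(k, tau, n):.3f}")
print(f"          ideal PFR: X = {pfr_conversion(k, tau):.3f}")
# As n grows, the series of small CSTRs approaches the PFR conversion, illustrating the
# statement above that infinitely many infinitely small CSTRs behave like a PFR.
```

The single CSTR operates at the low outlet concentration and therefore the lowest rate, which is why staging recovers part of the volume penalty relative to a PFR.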
For most chemical reactions of industrial interest, it is impossible for the reaction to proceed to 100% completion. The rate of reaction decreases as the reactants are consumed until the point where the system reaches dynamic equilibrium (no net reaction, or change in chemical species occurs). The equilibrium point for most systems is less than 100% complete. For this reason a separation process, such as distillation, often follows a chemical reactor in order to separate any remaining reagents or byproducts from the desired product. These reagents may sometimes be reused at the beginning of the process, such as in the Haber process. In some cases, very large reactors would be necessary to approach equilibrium, and chemical engineers may choose to separate the partially reacted mixture and recycle the leftover reactants. Under laminar flow conditions, the assumption of plug flow is highly inaccurate, as the fluid traveling through the center of the tube moves much faster than the fluid at the wall. The continuous oscillatory baffled reactor (COBR) achieves thorough mixing by the combination of fluid oscillation and orifice baffles, allowing plug flow to be approximated under laminar flow conditions. Semibatch reactor A semibatch reactor is operated with both continuous and batch inputs and outputs. A fermenter, for example, is loaded with a batch of medium and microbes which constantly produces carbon dioxide that must be removed continuously. Similarly, reacting a gas with a liquid is usually difficult, because a large volume of gas is required to react with an equal mass of liquid. To overcome this problem, a continuous feed of gas can be bubbled through a batch of a liquid. In general, in semibatch operation, one chemical reactant is loaded into the reactor and a second chemical is added slowly (for instance, to prevent side reactions), or a product which results from a phase change is continuously removed, for example a gas formed by the reaction, a solid that precipitates out, or a hydrophobic product that forms in an aqueous solution. Catalytic reactor Although catalytic reactors are often implemented as plug flow reactors, their analysis requires more complicated treatment. The rate of a catalytic reaction is proportional to the amount of catalyst the reagents contact, as well as the concentration of the reactants. With a solid phase catalyst and fluid phase reagents, this is proportional to the exposed area, efficiency of diffusion of reagents in and products out, and efficacy of mixing. Perfect mixing usually cannot be assumed. Furthermore, a catalytic reaction pathway often occurs in multiple steps with intermediates that are chemically bound to the catalyst; and as the chemical binding to the catalyst is also a chemical reaction, it may affect the kinetics. Catalytic reactions often display so-called falsified kinetics, when the apparent kinetics differ from the actual chemical kinetics due to physical transport effects. The behavior of the catalyst is also a consideration. Particularly in high-temperature petrochemical processes, catalysts are deactivated by processes such as sintering, coking, and poisoning. A common example of a catalytic reactor is the catalytic converter that processes toxic components of automobile exhausts. 
However, most petrochemical reactors are catalytic, and are responsible for most industrial chemical production, with extremely high-volume examples including sulfuric acid, ammonia, reformate/BTEX (benzene, toluene, ethylbenzene and xylene), and fluid catalytic cracking. Various configurations are possible, see Heterogeneous catalytic reactor. References External links Chemical reactors
Chemical reactor
[ "Chemistry", "Engineering" ]
2,153
[ "Chemical reactors", "Chemical reaction engineering", "Chemical equipment" ]
1,164,549
https://en.wikipedia.org/wiki/Proton%20therapy
In medicine, proton therapy, or proton radiotherapy, is a type of particle therapy that uses a beam of protons to irradiate diseased tissue, most often to treat cancer. The chief advantage of proton therapy over other types of external beam radiotherapy is that the dose of protons is deposited over a narrow range of depth; hence there is minimal entry, exit, or scattered radiation dose to healthy nearby tissues. When evaluating whether to treat a tumor with photon or proton therapy, physicians may choose proton therapy if it is important to deliver a higher radiation dose to targeted tissues while significantly decreasing radiation to nearby organs at risk. The American Society for Radiation Oncology Model Policy for Proton Beam Therapy says proton therapy is considered reasonable if sparing the surrounding normal tissue "cannot be adequately achieved with photon-based radiotherapy" and can benefit the patient. Like photon radiation therapy, proton therapy is often used in conjunction with surgery and/or chemotherapy to most effectively treat cancer. Description Proton therapy is a type of external beam radiotherapy that uses ionizing radiation. In proton therapy, medical personnel use a particle accelerator to target a tumor with a beam of protons. These charged particles damage the DNA of cells, ultimately killing them by stopping their reproduction and thus eliminating the tumor. Cancerous cells are particularly vulnerable to attacks on DNA because of their high rate of division and their limited ability to repair DNA damage. Some cancers with specific defects in DNA repair may be more sensitive to proton radiation. Proton therapy lets physicians deliver a highly conformal beam, i.e. radiation that conforms to the shape and depth of the tumor while sparing much of the surrounding, normal tissue. For example, when comparing proton therapy to the most advanced types of photon therapy, intensity-modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT), proton therapy can give similar or higher radiation doses to the tumor with a 50%-60% lower total body radiation dose. Protons can focus energy delivery to fit the tumor shape, delivering only low-dose radiation to surrounding tissue. As a result, the patient has fewer side effects. All protons of a given energy have a certain penetration range; very few protons penetrate beyond that distance. Also, the dose delivered to tissue is maximized only over the last few millimeters of the particle's range; this maximum is called the spread out Bragg peak, often called the SOBP (see visual). To treat tumors at greater depth, one needs a beam with higher energy, typically given in MeV (mega electron volts). Accelerators used for proton therapy typically produce protons with energies of 70 to 250 MeV. Adjusting proton energy during the treatment maximizes the cell damage within the tumor. Tissue closer to the surface of the body than the tumor gets less radiation, and thus less damage. Tissues deeper in the body get very few protons, so the dose becomes immeasurably small. In most treatments, protons of different energies with Bragg peaks at different depths are applied to treat the entire tumor. These Bragg peaks are shown as thin blue lines in the figure in this section. While tissues behind (or deeper than) the tumor get almost no radiation, the tissues in front of (shallower than) the tumor get radiation dosage based on the SOBP.
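The energy-to-depth relationship described above is often approximated with the Bragg-Kleeman rule R ≈ αE^p. The sketch below uses commonly quoted fit constants for protons in water (α ≈ 0.0022 cm·MeV⁻ᵖ, p ≈ 1.77); these are approximations introduced here for illustration, not values taken from this article:

```python
# Rough Bragg-Kleeman estimate of proton range in water: R = alpha * E**p.
# alpha and p are commonly quoted approximate fit values (assumptions, not article data).
ALPHA_CM = 0.0022   # cm / MeV**p
P_EXP = 1.77        # dimensionless exponent

def proton_range_cm(energy_mev: float) -> float:
    """Approximate penetration depth of a proton beam in water, in cm."""
    return ALPHA_CM * energy_mev ** P_EXP

for e in (70, 150, 250):   # roughly the clinical energy window quoted above
    print(f"{e:>3} MeV -> about {proton_range_cm(e):.1f} cm of water")
```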
Equipment Most installed proton therapy systems use isochronous cyclotrons. Cyclotrons are considered simple to operate, reliable and can be made compact, especially with the use of superconducting magnets. Synchrotrons can also be used, with the advantage of easier production at varying energies. Linear accelerators, as used for photon radiation therapy, are becoming commercially available as limitations of size and cost are resolved. Modern proton systems incorporate high-quality imaging for daily assessment of tumor contours, treatment planning software illustrating 3D dose distributions, and various system configurations, e.g. multiple treatment rooms connected to one accelerator. Partly because of these advances in technology, and partly because of the continually increasing amount of proton clinical data, the number of hospitals offering proton therapy continues to grow. FLASH therapy FLASH radiotherapy is a technique under development for photon and proton treatments, using very high dose rates (necessitating large beam currents). If applied clinically, it could shorten treatment time to just one to three 1-second sessions, while further reducing side effects. History The first suggestion that energetic protons could be an effective treatment was made by Robert R. Wilson in a paper published in 1946 while he was involved in the design of the Harvard Cyclotron Laboratory (HCL). The first treatments were performed with particle accelerators built for physics research, notably at the Berkeley Radiation Laboratory in 1954 and at Uppsala in Sweden in 1957. In 1961, a collaboration began between HCL and Massachusetts General Hospital (MGH) to pursue proton therapy. Over the next 41 years, this program refined and expanded these techniques while treating 9,116 patients before the cyclotron was shut down in 2002. In the USSR a therapeutic proton beam with energies up to 200 MeV was obtained at the synchrocyclotron of JINR in Dubna in 1967. The ITEP center in Moscow, Russia, which began treating patients in 1969, is the oldest proton center still in operation. The Paul Scherrer Institute in Switzerland was the world's first proton center to treat eye tumors, beginning in 1984. In addition, they invented pencil beam scanning in 1996, which became the state-of-the-art form of proton therapy. The world's first hospital-based proton therapy center was a low energy cyclotron centre for eye tumors at Clatterbridge Centre for Oncology in the UK, opened in 1989, followed in 1990 by the Loma Linda University Medical Center (LLUMC) in Loma Linda, California. Later, the Northeast Proton Therapy Center at Massachusetts General Hospital was brought online, and the HCL treatment program was transferred to it in 2001 and 2002. At the beginning of 2023, there were 41 proton therapy centers in the United States, and a total of 89 worldwide. As of 2020, six manufacturers make proton therapy systems: Hitachi, Ion Beam Applications, Mevion Medical Systems, ProNova Solutions, ProTom International and Varian Medical Systems. Types The newest form of proton therapy, pencil beam scanning, gives therapy by sweeping a proton beam laterally over the target so that it gives the required dose while closely conforming to the shape of the targeted tumor. Before the use of pencil beam scanning, oncologists used a scattering method to direct a wide beam toward the tumor. Passive scattering beam delivery The first commercially available proton delivery systems used a scattering process, or passive scattering, to deliver the therapy.
With scattering proton therapy the proton beam is spread out by scattering devices, and the beam is then shaped by putting items such as collimators and compensators in the path of the protons. The collimators were custom made for the patient with milling machines. Passive scattering gives homogeneous dose along the target volume. Therefore, passive scattering gives more limited control over dose distributions proximal to target. Over time many scattering therapy systems have been upgraded to deliver pencil beam scanning. Because scattering therapy was the first type of proton therapy available, most clinical data available on proton therapy—especially long-term data as of 2020—were acquired via scattering technology. Pencil beam scanning beam delivery A newer and more flexible delivery method is pencil beam scanning, using a beam that sweeps laterally over the target so that it delivers the needed dose while closely conforming to the tumor's shape. This conformal delivery is achieved by shaping the dose through magnetic scanning of thin beamlets of protons without needing apertures and compensators. Multiple beams are delivered from different directions, and magnets in the treatment nozzle steer the proton beam to conform to the target volume layer as the dose is painted layer by layer. This type of scanning delivery provides greater flexibility and control, letting the proton dose conform more precisely to the shape of the tumor. Delivery of protons via pencil beam scanning, in use since 1996 at the Paul Scherrer Institute, allows for the most precise type of proton delivery: intensity-modulated proton therapy (IMPT). IMPT is to proton therapy what IMRT is to conventional photon therapy—treatment that more closely conforms to the tumor while avoiding surrounding structures. Virtually all new proton systems provide pencil beam scanning exclusively. A study led by Memorial Sloan Kettering Cancer Center suggests that IMPT can improve local control when compared to passive scattering for patients with nasal cavity and paranasal sinus malignancies. Application It was estimated that by the end of 2019, a total of ≈200,000 patients had been treated with proton therapy. Physicians use protons to treat conditions in two broad categories: Disease sites that respond well to higher doses of radiation, i.e., dose escalation. Dose escalation has sometimes shown a higher probability of "cure" (i.e. local control) than conventional radiotherapy. These include, among others, uveal melanoma (ocular tumor), skull base and paraspinal tumor (chondrosarcoma and chordoma), and unresectable sarcoma. In all these cases proton therapy gives significant improvement in the probability of local control, over conventional radiotherapy. For eye tumors, proton therapy also has high rates of maintaining the natural eye. Treatment where proton therapy's increased precision reduces unwanted side effects by lessening the dose to normal tissue. In these cases, the tumor dose is the same as in conventional therapy, so there is no expectation of increased probability of curing the disease. Instead, emphasis is on reducing the dose to normal tissue, thus reducing unwanted effects. Two prominent examples are pediatric neoplasms (such as medulloblastoma) and prostate cancer. 
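The spot-by-spot, layer-by-layer delivery described above is what makes intensity-modulated proton therapy possible: a planning system chooses individual spot intensities by optimization, trading tumor coverage against dose to nearby healthy tissue. The sketch below is a toy version of that idea; the dose-influence matrix, voxel labels, penalties and sizes are random stand-ins, not real treatment-planning data or any vendor's algorithm:

```python
# Toy sketch of spot-intensity optimization: penalize under-dosing "tumor" voxels and
# any dose to "healthy" voxels, with non-negative spot intensities. Illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_spots = 60, 25
D = rng.uniform(0.0, 1.0, (n_voxels, n_spots))   # dose per unit spot intensity (toy values)
is_tumor = np.zeros(n_voxels, dtype=bool)
is_tumor[:30] = True                              # first half of the voxels form the "tumor"

d_presc = np.where(is_tumor, 1.0, 0.0)            # prescribe 1.0 to tumor, 0 elsewhere
penalty = np.where(is_tumor, 1.0, 0.3)            # user-chosen balance between the two goals

x = np.full(n_spots, 0.05)                        # initial spot intensities
lr = 0.002
for _ in range(2000):                             # projected gradient descent
    grad = D.T @ (penalty * (D @ x - d_presc))
    x = np.clip(x - lr * grad, 0.0, None)         # intensities must stay non-negative

dose = D @ x
print("mean tumor dose:          ", round(dose[is_tumor].mean(), 3))
print("mean healthy-tissue dose: ", round(dose[~is_tumor].mean(), 3))
```

Raising the healthy-tissue penalty pulls the second number down at the cost of tumor coverage, which is exactly the balance the planner is asked to strike.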
Pediatric Irreversible long-term side effects of conventional radiation therapy for pediatric cancers are well documented and include growth disorders, neurocognitive toxicity, ototoxicity with subsequent effects on learning and language development, and renal, endocrine and gonadal dysfunctions. Radiation-induced secondary malignancy is another very serious adverse effect that has been reported. As there is minimal exit dose when using proton radiation therapy, dose to surrounding normal tissues can be significantly limited, reducing the acute toxicity which positively impacts the risk for these long-term side effects. Cancers requiring craniospinal irradiation, for example, benefit from the absence of exit dose with proton therapy: dose to the heart, mediastinum, bowel, bladder and other tissues anterior to the vertebrae is eliminated, hence a reduction of acute thoracic, gastrointestinal and bladder side effects. Eye tumor Proton therapy for eye tumors is a special case since this treatment requires only relatively low energy protons (≈70 MeV). Owing to this low energy, some particle therapy centers only treat eye tumors. Proton, or more generally, hadron therapy of tissue close to the eye affords sophisticated methods to assess the alignment of the eye that can vary significantly from other patient position verification approaches in image guided particle therapy. Position verification and correction must ensure that the radiation spares sensitive tissue like the optic nerve to preserve the patient's vision. For ocular tumors, selecting the type of radiotherapy depends on tumor location and extent, tumor radioresistance (calculating the dose needed to eliminate the tumor), and the therapy's potential toxic side effects on nearby critical structures. For example, proton therapy is an option for retinoblastoma and intraocular melanoma. The advantage of a proton beam is that it has the potential to effectively treat the tumor while sparing sensitive structures of the eye. Given its effectiveness, proton therapy has been described as the "gold standard" treatment for ocular melanoma. The implementation of momentum cooling technique in proton therapy for eye treatment can significantly enhance its effectiveness. This technique aids in reducing the radiation dose administered to healthy organs while ensuring that the treatment is completed within a few seconds. Consequently, patients experience improved comfort during the procedure. Base of skull cancer When receiving radiation for skull base tumors, side effects of the radiation can include pituitary hormone dysfunction and visual field deficit—after radiation for pituitary tumors—as well as cranial neuropathy (nerve damage), radiation-induced osteosarcoma (bone cancer), and osteoradionecrosis, which occurs when radiation causes part of the bone in the jaw or skull base to die. Proton therapy has been very effective for people with base of skull tumors. Unlike conventional photon radiation, protons do not penetrate beyond the tumor. Proton therapy lowers the risk of treatment-related side effects from when healthy tissue gets radiation. Clinical studies have found proton therapy to be effective for skull base tumors. Head and neck tumor Proton particles do not deposit exit dose, so proton therapy can spare normal tissues far from the tumor. This is particularly useful for head and neck tumors because of the anatomic constraints found in nearly all cancers in this region. 
The dosimetric advantage unique to proton therapy translates into toxicity reduction. For recurrent head and neck cancer requiring reirradiation, proton therapy is able to maximize a focused dose of radiation to the tumor while minimizing dose to surrounding tissues, hence a minimal acute toxicity profile, even in patients who got multiple prior courses of radiotherapy. Left-side breast cancer When breast cancer — especially in the left breast — is treated with conventional radiation, the lung and heart, which are near the left breast, are particularly susceptible to photon radiation damage. Such damage can eventually cause lung problems (e.g. lung cancer) or various heart problems. Depending on location of the tumor, damage can also occur to the esophagus, or to the chest wall (which can potentially lead to leukemia). One recent study showed that proton therapy has low toxicity to nearby healthy tissues and similar rates of disease control compared with conventional radiation. Other researchers found that proton pencil beam scanning techniques can reduce both the mean heart dose and the internal mammary node dose to essentially zero. Small studies have found that, compared to conventional photon radiation, proton therapy delivers minimal toxic dose to healthy tissues and specifically decreased dose to the heart and lung. Large-scale trials are underway to examine other potential benefits of proton therapy to treat breast cancer. Lymphoma Though chemotherapy is the main treatment for lymphoma, consolidative radiation is often used in Hodgkin lymphoma and aggressive non-Hodgkin lymphoma, while definitive treatment with radiation alone is used in a small fraction of lymphoma patients. Unfortunately, treatment-related toxicities caused by chemotherapy agents and radiation exposure to healthy tissues are major concerns for lymphoma survivors. Advanced radiation therapy technologies such as proton therapy may offer significant and clinically relevant advantages such as sparing important organs at risk and decreasing the risk for late normal tissue damage while still achieving the primary goal of disease control. This is especially important for lymphoma patients who are being treated with curative intent and have long life expectancy following therapy. Prostate cancer In prostate cancer cases, the issue is less clear. Some published studies found a reduction in long term rectal and genito-urinary damage when treating with protons rather than photons (meaning X-ray or gamma ray therapy). Others showed a small difference, limited to cases where the prostate is particularly close to certain anatomical structures. The relatively small improvement found may be the result of inconsistent patient set-up and internal organ movement during treatment, which offsets most of the advantage of increased precision. One source suggests that dose errors around 20% can result from motion errors of just . and another that prostate motion is between . The number of cases of prostate cancer diagnosed each year far exceeds those of the other diseases referred to above, and this has led some, but not all, facilities to devote most of their treatment slots to prostate treatments. For example, two hospital facilities devote ≈65% and 50% of their proton treatment capacity to prostate cancer, while a third devotes only 7.1%. Worldwide numbers are hard to compile, but one example says that in 2003 ≈26% of proton therapy treatments worldwide were for prostate cancer. 
Gastrointestinal malignancy A growing amount of data shows that proton therapy has great potential to increase therapeutic tolerance for patients with GI malignancy. The possibility of decreasing radiation dose to organs at risk may also help facilitate chemotherapy dose escalation or allow new chemotherapy combinations. Proton therapy will play a decisive role for ongoing intensified combined modality treatments for GI cancers. The following review presents the benefits of proton therapy in treating hepatocellular carcinoma, pancreatic cancer and esophageal cancer. Hepatocellular carcinoma Post-treatment liver decompensation, and subsequent liver failure, is a risk with radiotherapy for hepatocellular carcinoma, the most common type of primary liver cancer. Research shows that proton therapy gives favorable results related to local tumor control, progression-free survival, and overall survival. Other studies, which examine proton therapy compared with conventional photon therapy, show that proton therapy gives improved survival and/or fewer side effects; hence proton therapy could significantly improve clinical outcomes for some patients with liver cancer. Reirradiation for recurrent cancer For patients who get local or regional recurrences after their initial radiation therapy, physicians are limited in their treatment options due to their reluctance to give additional photon radiation therapy to tissues that have already been irradiated. Re-irradiation is a potentially curative treatment option for patients with locally recurrent head and neck cancer. In particular, pencil beam scanning may be ideally suited for reirradiation. Research shows the feasibility of using proton therapy with acceptable side effects, even in patients who have had multiple prior courses of photon radiation. Comparison with other treatments A large study on comparative effectiveness of proton therapy was published by teams of the University of Pennsylvania and Washington University in St. Louis in JAMA Oncology, assessing if proton therapy in the setting of concurrent chemoradiotherapy is associated with fewer 90-day unplanned hospitalizations and overall survival compared with concurrent photon therapy and chemoradiotherapy. The study included 1483 adult patients with nonmetastatic, locally advanced cancer treated with concurrent chemoradiotherapy with curative intent and concluded, "proton chemoradiotherapy was associated with significantly reduced acute adverse events that caused unplanned hospitalizations, with similar disease-free and overall survival". A significant number of randomized controlled trials is recruiting, but only a limited number have been completed as of August 2020. A phase III randomized controlled trial of proton beam therapy versus radiofrequency ablation (RFA) for recurrent hepatocellular carcinoma organized by the National Cancer Center in Korea showed better 2-year local progression-free survival for the proton arm and concluded that proton beam therapy (PBT) is "not inferior to RFA in terms of local progression-free survival and safety, denoting that either RFA or PBT can be applied to recurrent small HCC patients". A phase IIB randomized controlled trial of proton beam therapy versus IMRT for locally advanced esophageal cancer organized by University of Texas MD Anderson Cancer Center concluded that proton beam therapy reduced the risk and severity of adverse events compared with IMRT while maintaining similar progression free survival. 
Another phase II randomized controlled trial comparing photons versus protons for glioblastoma concluded that patients at risk of severe lymphopenia could benefit from proton therapy. A team from Stanford University assessed the risk of secondary cancer after primary cancer treatment with external beam radiation using data from the National Cancer Database for 9 tumor types: head and neck, gastrointestinal, gynecologic, lymphoma, lung, prostate, breast, bone/soft tissue, and brain/central nervous system. The study included a total of 450,373 patients and concluded that proton therapy was associated with a lower risk of second cancer. The issue of when, whether, and how best to apply this technology is still under discussion by physicians and researchers. One recently introduced method, 'model-based selection', uses comparative treatment plans for IMRT and IMPT in combination with normal tissue complication probability (NTCP) models to identify patients who may benefit most from proton therapy. Clinical trials are underway to examine the comparative efficacy of proton therapy (vs photon radiation) for the following: pediatric cancers, by St. Jude Children's Research Hospital and Samsung Medical Center; base of skull cancer, by Heidelberg University; head and neck cancer, by MD Anderson, Memorial Sloan Kettering and other centers; brain and spinal cord cancer, by Massachusetts General Hospital, Uppsala University, NRG Oncology and other centers; hepatocellular carcinoma (liver), by NRG Oncology, Chang Gung Memorial Hospital and Loma Linda University; lung cancer, by the Radiation Therapy Oncology Group (RTOG), the Proton Collaborative Group (PCG) and the Mayo Clinic; esophageal cancer, by NRG Oncology, the Abramson Cancer Center and the University of Pennsylvania; breast cancer, by the University of Pennsylvania and the Proton Collaborative Group (PCG); and pancreatic cancer, by the University of Maryland and the Proton Collaborative Group (PCG). X-ray radiotherapy The figure at the right of the page shows how beams of X-rays (IMRT; left frame) and beams of protons (right frame), of different energies, penetrate human tissue. A tumor with a sizable thickness is covered by the spread out Bragg peak (SOBP), shown as the red lined distribution in the figure. The SOBP is an overlap of several pristine Bragg peaks (blue lines) at staggered depths. Megavoltage X-ray therapy has less "skin-scarring potential" than proton therapy: the X-ray radiation dose at the skin, and at very small depths, is lower than for proton therapy. One study estimates that passively scattered proton fields have a slightly higher entrance dose at the skin (≈75%) compared to therapeutic megavoltage (MeV) photon beams (≈60%). X-ray radiation dose falls off gradually, needlessly harming tissue deeper in the body and damaging the skin and surface tissue opposite the beam entrance. The differences between the two methods depend on the width of the SOBP, the depth of the tumor, and the number of beams that treat the tumor. The X-ray advantage of less harm to skin at the entrance is partially counteracted by harm to skin at the exit point. Since X-ray treatments are usually done with multiple exposures from opposite sides, each section of skin is exposed to both entering and exiting X-rays. In proton therapy, skin exposure at the entrance point is higher, but tissues on the opposite side of the body to the tumor get no radiation. Thus, X-ray therapy causes slightly less damage to skin and surface tissues, and proton therapy causes less damage to deeper tissues in front of and beyond the target.
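To illustrate the "overlap of pristine Bragg peaks" construction described above, here is a toy sketch. The peak shape (a flat entrance plateau plus a narrow Gaussian at the end of range) is a cartoon chosen only to show how staggered, weighted peaks can add up to a roughly flat SOBP across a target; it is not a physical beam model, and every number in it is invented:

```python
# Toy SOBP construction: sum weighted pristine "Bragg peaks" at staggered depths and
# choose the weights so the combined dose is roughly flat across the target.
import numpy as np

def toy_peak(depth: np.ndarray, rng_cm: float) -> np.ndarray:
    plateau = np.where(depth < rng_cm, 0.35, 0.0)                 # shallow entrance dose
    return plateau + np.exp(-0.5 * ((depth - rng_cm) / 0.25) ** 2)  # peak at end of range

depth = np.linspace(0.0, 20.0, 401)
ranges = np.linspace(12.0, 16.0, 9)                      # staggered end-of-range depths
peaks = np.stack([toy_peak(depth, r) for r in ranges])   # one row per pristine peak

target = (depth >= 12.0) & (depth <= 16.0)
w, *_ = np.linalg.lstsq(peaks[:, target].T, np.ones(target.sum()), rcond=None)
w = np.clip(w, 0.0, None)                                # keep weights non-negative
sobp = w @ peaks

print("relative dose across 12-16 cm target:",
      round(sobp[target].min(), 2), "to", round(sobp[target].max(), 2))
print("relative dose at 18 cm (beyond target):",
      round(float(sobp[np.searchsorted(depth, 18.0)]), 3))
```

The dose beyond the deepest peak drops to essentially nothing, which is the depth-direction behavior the surrounding text contrasts with the gradual X-ray fall-off.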
An important consideration in comparing these treatments is whether the equipment delivers protons via the scattering method (historically, the most common) or a spot scanning method. Spot scanning can adjust the width of the SOBP on a spot-by-spot basis, which reduces the volume of normal (healthy) tissue inside the high dose region. Also, spot scanning allows for intensity modulated proton therapy (IMPT), which determines individual spot intensities using an optimization algorithm that lets the user balance the competing goals of irradiating tumors while sparing normal tissue. Spot scanning availability depends on the machine and the institution. Spot scanning is more commonly known as pencil-beam scanning and is available on IBA, Hitachi, Mevion (known as HYPERSCAN which became US FDA approved in 2017) and Varian. Surgery Physicians base the decision to use surgery or proton therapy (or any radiation therapy) on tumor type, stage, and location. Sometimes surgery is superior (such as cutaneous melanoma), sometimes radiation is superior (such as skull base chondrosarcoma), and sometimes are comparable (for example, prostate cancer). Sometimes, they are used together (e.g., rectal cancer or early stage breast cancer). The benefit of external beam proton radiation is in the dosimetric difference from external beam X-ray radiation and brachytherapy in cases where use of radiation therapy is already indicated, rather than as a direct competition with surgery. In prostate cancer, the most common indication for proton beam therapy, no clinical study directly comparing proton therapy to surgery, brachytherapy, or other treatments has shown any clinical benefit for proton beam therapy. Indeed, the largest study to date showed that IMRT compared with proton therapy was associated with less gastrointestinal morbidity. Side effects and risks Proton therapy is a type of external beam radiotherapy, and shares risks and side effects of other forms of radiation therapy. The dose outside of the treatment region can be significantly less for deep-tissue tumors than X-ray therapy, because proton therapy takes full advantage of the Bragg peak. Proton therapy has been in use for over 40 years, and is a mature technology. As with all medical knowledge, understanding of the interaction of radiations with tumor and normal tissue is still imperfect. Costs Historically, proton therapy has been expensive. An analysis published in 2003 found that the cost of proton therapy is ≈2.4 times that of X-ray therapies. Newer, less expensive, and dozens more proton treatment centers are driving costs down and they offer more accurate three-dimensional targeting. Higher proton dosage over fewer treatments sessions (1/3 fewer or less) is also driving costs down. Thus the cost is expected to reduce as better proton technology becomes more widely available. An analysis published in 2005 determined that the cost of proton therapy is not unrealistic and should not be the reason for denying patients access to the technology. In some clinical situations, proton beam therapy is clearly superior to the alternatives. A study in 2007 expressed concerns about the effectiveness of proton therapy for prostate cancer, but with the advent of new developments in the technology, such as improved scanning techniques and more precise dose delivery ('pencil beam scanning'), this situation may change considerably. 
Amitabh Chandra, a health economist at Harvard University, said, "Proton-beam therapy is like the Death Star of American medical technology... It's a metaphor for all the problems we have in American medicine." Proton therapy is cost-effective for some types of cancer, but not all. In particular, some other treatments offer better overall value for treatment of prostate cancer. As of 2018, the cost of a single-room particle therapy system is US$40 million, with multi-room systems costing up to US$200 million. Treatment centers As of August 2020, there are over 89 particle therapy facilities worldwide, with at least 41 others under construction. As of August 2020, there are 34 operational proton therapy centers in the United States. As of the end of 2015 more than 154,203 patients had been treated worldwide. One hindrance to universal use of the proton in cancer treatment is the size and cost of the cyclotron or synchrotron equipment necessary. Several industrial teams are working on development of comparatively small accelerator systems to deliver the proton therapy to patients. Among the technologies being investigated are superconducting synchrocyclotrons (also known as FM Cyclotrons), ultra-compact synchrotrons, dielectric wall accelerators, and linear particle accelerators. United States Proton treatment centers in the United States (in chronological order of first treatment date) include: The Indiana University Health Proton Therapy Center in Bloomington, Indiana opened in 2004 and ceased operations in 2014. Outside the US Australia In July 2020, construction began for "SAHMRI 2", the second building for the South Australian Health and Medical Research Institute. The building will house the Australian Bragg Centre for Proton Therapy & Research, a addition to the largest health and biomedical precinct in the Southern Hemisphere, Adelaide's BioMed City. The proton therapy unit is being supplied by ProTom International, which will install its Radiance 330 proton therapy system, the same system used at Massachusetts General Hospital. When in full operation, it will have the ability to treat approximately 600-700 patients per year with around half of these expected to be children and young adults. The facility is expected to be completed in late 2023, with its first patients treated in 2025. In 2024 the South Australian government expressed concerns about the delivery of the project. India Apollo Proton Cancer Centre (APCC) in Chennai, Tamil Nadu, a unit under Apollo Hospitals, is a Cancer specialty hospital. APCC is the only cancer hospital in India with Joint Commission International accreditation. Israel In January 2020, it was announced that a proton therapy center would be built in Ichilov Hospital, at the Tel Aviv Sourasky Medical Center. The project's construction was fully funded by donations. It will have two treatment rooms. According to a newspaper report in 2023, it should be ready in three to four years. The report also mentions that "Proton therapy for cancer treatment has arrived in Israel and the Middle East with a clinical trial underway that sees Hadassah Medical Center partnering with P-Cure, an Israeli company that has developed a unique system designed to fit into existing hospital settings". Spain In October 2021, the Amancio Ortega Foundation arranged with the Spanish government and several autonomous communities to donate 280 million euros to install ten proton accelerators in the public health system. 
United Kingdom In 2013 the British government announced that £250 million had been budgeted to establish two centers for advanced radiotherapy: The Christie NHS Foundation Trust (the Christie Hospital) in Manchester, which opened in 2018; and University College London Hospitals NHS Foundation Trust, which opened in 2021. These offer high-energy proton therapy, and other types of advanced radiotherapy, including intensity-modulated radiotherapy (IMRT) and image-guided radiotherapy (IGRT). In 2014, only low-energy proton therapy was available in the UK, at Clatterbridge Cancer Centre NHS Foundation Trust in Merseyside. But NHS England has paid to have suitable cases treated abroad, mostly in the US. Such cases rose from 18 in 2008 to 122 in 2013, 99 of whom were children. The cost to the National Health Service averaged ~£100,000 per case. See also Particle therapy Charged particle therapy Hadron Microbeam Fast neutron therapy Boron neutron capture therapy Linear energy transfer Electromagnetic radiation and health Dosimetry Ionizing radiation List of oncology-related terms References Further reading External links The Intrepid Proton-Man , educational comic books by Steve Englehart and Michael Jaszewski for pediatric patients 2019 BBC Horizon documentary 2019 Jove video by the University of Maryland School of Medicine explaining the treatment process: Proton Therapy Delivery and Its Clinical Application in Select Solid Tumor Malignancies 2019 The NHS Proton Beam Therapy Programme Proton Therapy Collaborative Group PTCOG Alliance for Proton Therapy CARES Cancer Network National Association for Proton Therapy American Society for Radiation Oncology Model Policy – Proton Beam Therapy Proton therapy – MedlinePlus Medical Encyclopedia Proton Therapy What is Proton Therapy Medical physics Radiation therapy procedures Proton
Proton therapy
[ "Physics" ]
6,418
[ "Applied and interdisciplinary physics", "Medical physics" ]
1,164,681
https://en.wikipedia.org/wiki/Chromism
In chemistry, chromism is a process that induces a change, often reversible, in the colors of compounds. In most cases, chromism is based on a change in the electron states of molecules, especially the π- or d-electron state, so this phenomenon is induced by various external stimuli which can alter the electron density of substances. It is known that there are many natural compounds that have chromism, and many artificial compounds with specific chromism have been synthesized to date. It is usually synonymous with chromotropism, the (reversible) change in color of a substance due to the physical and chemical properties of its ambient surrounding medium, such as temperature and pressure, light, solvent, and presence of ions and electrons. Chromism is classified by what kind of stimuli are used. Examples of the major kinds of chromism are as follows. thermochromism is chromism that is induced by heat, that is, a change of temperature. This is the most common chromism of all. photochromism is induced by light irradiation. This phenomenon is based on the isomerization between two different molecular structures, light-induced formation of color centers in crystals, precipitation of metal particles in a glass, or other mechanisms. electrochromism is induced by the gain and loss of electrons. This phenomenon occurs in compounds with redox active sites, such as metal ions or organic radicals. solvatochromism depends on the polarity of the solvent. Most solvatochromic compounds are metal complexes. There are many more chromisms and these are listed below in . The output from the chromisms described above is observed by a change in the absorption spectra of the chromic material. An increasingly important group of chromisms are those where changes are displayed in their emission spectra. Hence they are called fluorochromisms, exemplified by solvatofluorochromism, electrofluorochromism and mechanofluorochromism. Chromic phenomena Chromic phenomena are those phenomena in which color is produced when light interacts with materials, often called chromic materials in a variety of ways. These can be categorized under the following five headings: Stimulated (reversible) color change The absorption and reflection of light The absorption of energy followed by the emission of light The absorption of light and energy transfer (or conversion) The manipulation of light. Color change phenomena Those phenomena which involve the change in color of a chemical compound under an external stimulus fall under the generic term of chromisms. They take their individual names from the type of the external influence, which can be either chemical or physical, that is involved. Many of these phenomena are reversible. The following list includes all the classic chromisms plus many others of increasing interest in newer outlets. There are also chromisms which involve two or more stimuli. Examples include: Photoelectrochromism – Photovoltachromism – Bioelectrochromism – Solvatophotochromism – Thermosolvatochromism – Halosolvatochromism – Electromechanochromism. Color changes are also observed on the interaction of metallic nanoparticles and their attached ligands with another stimulus. Examples include plasmonic solvatochromism, plasmonic ionochromism, plasmonic chronochromism and plasmonic vapochromism. Commercial applications Color change materials have been used in several very common outlets but also in an increasing number of new ones. 
Commercial applications include photochromics in ophthalmics, fashion/cosmetics, security, sensors, optical memory and optical switches, thermochromics in paints, inks, plastics and textiles as indicators/sensors and in architecture, ionochromics in copy paper, direct thermal printing and textile sensors, electrochromics in car mirrors, smart windows, flexible devices and solar protection, solvatochromics in biological probes and sensors, gasochromics in windows and gas sensors. Dyes and pigments Classical dyes and pigments produce color by the absorption and reflection of light; these are the materials that make a major impact on the color of our daily lives. In 2000, world production of organic dyes was 800,000 tonnes and of organic pigments, 250,000 tonnes and the volume has grown at a steady rate throughout the early years of this century. In 2019 the value of the organic dyes/pigments market is forecast to be $19.5bn. Their value is exceeded by the very large production of inorganic pigments. Organic dyes are used mainly to color textile fibers, paper, hair, leather, while pigments are used largely in inks, paints, plastic and cosmetics. Both are used in the growth area of the digital printing of textiles, paper and other surfaces. Dyes are also made using the properties of chromic substances: Examples being Photochromic dyes and Thermochromic dyes Luminescence The absorption of energy followed by the emission of light is often described by the term luminescence. The exact term used is based on the energy source responsible for the luminescence as in color-change phenomena. Electrical – electroluminescence Galvanoluminescence Sonoluminescence. Photons (light) – Photoluminescence Fluorescence Phosphorescence Biofluorescence. Chemical – Chemiluminescence Bioluminescence Electrochemiluminescence. Thermal – Thermoluminescence Pyroluminescence Candololuminescence. Electron Beam – Cathodoluminescence Anodoluminescence Radioluminescence. Mechanical – Triboluminescence Fractoluminescence Mechanoluminescence Crystalloluminescence Lyoluminescence Elasticoluminescence. Many of these phenomena are widely used in consumer products and other important outlets. Cathodoluminescence is used in cathode-ray tubes, photoluminescence in fluorescent lighting and plasma display panels, phosphorescence in safety signs and low energy lighting, fluorescence in pigments, inks, optical brighteners, safety clothing, and biological and medicinal analysis and diagnostics, chemoluminescence and bioluminescence in analysis, diagnostics and sensors, and electroluminescence in the burgeoning areas of light-emitting diodes (LEDs/OLEDs), displays and panel lighting. Important new developments are taking place in the areas of quantum dots and metallic nanoparticles. Light and energy transfer Absorption of light and energy transfer (or conversion) involves colored molecules that can transfer electromagnetic energy, commonly in the form of a laser light source, to other molecules in another form of energy, such as thermal or electrical. These laser addressable colorants, also called near-infrared absorbers, are used in thermal energy conversion, photosensitisation of chemical reactions and the selective absorption of light. Applications areas include optical data storage, as organic photoconductors, as sensitisers in photomedicine, such as photodynamic therapy and photothermal therapy in the treatment of cancer, in photodiagnosis and phototheranostics, and in the photoinactivation of microbes, blood and insects. 
The absorption of natural sunlight by chromic materials/chromophores is exploited in solar cells for the production of electrical energy, using both inorganic photovoltaics and organic materials (organic photovoltaics) and dye-sensitized solar cells (DSSCs), and also in the production of useful chemicals via artificial photosynthesis. A developing area is the conversion of light into kinetic energy, often described under the generic term of light-driven molecular machines. Light manipulation Materials may be used to control and manipulate light via a variety of mechanisms to produce useful effects involving color. For instance, a change of orientation of molecules to produce a visual effect as in liquid crystal displays. Other materials operate by producing a physical effect, by interference and diffraction as in lustre pigments and optically variable pigments, colloidal photonic crystals and in holography. Increasingly, inspiration is coming from Nature, in the form of bioinspired structural colors. Molecular materials are also used to increase the intensity of light by modifying its movement through the material by electrical means, as in organic lasers; by modifying the transmission of light through materials, as in opto-electronics; or purely by all-optical means, as in optical limiters. References Bibliography Bamfield, Peter and Hutchings, Michael, Chromic Phenomena: Technological Applications of Colour Chemistry, 3rd Edition, Royal Society of Chemistry, Cambridge, 2018. Vik, Michal and Periyasamy, Aravin Prince, Chromic Materials: Fundamentals, Measurements and Applications, Apple Academic Press, 2018. Ferrara, Mariella and Bengisu, Murat, Materials that Change Color: Smart Materials and Intelligent Design, Springer, 2014. Photochemistry
Chromism
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,893
[ "Spectrum (physical sciences)", "Chromism", "Materials science", "nan", "Smart materials", "Spectroscopy" ]
1,164,724
https://en.wikipedia.org/wiki/Injector
An injector is a system of ducting and nozzles used to direct the flow of a high-pressure fluid in such a way that a lower pressure fluid is entrained in the jet and carried through a duct to a region of higher pressure. It is a fluid-dynamic pump with no moving parts except a valve to control inlet flow. Depending on the application, an injector can also take the form of an eductor-jet pump, a water eductor or an aspirator. An ejector operates on similar principles to create a vacuum feed connection for braking systems etc. The motive fluid may be a liquid, steam or any other gas. The entrained suction fluid may be a gas, a liquid, a slurry, or a dust-laden gas stream. Steam injector The steam injector is a common device used for delivering water to steam boilers, especially in steam locomotives. It is a typical application of the injector principle used to deliver cold water to a boiler against its own pressure, using its own live or exhaust steam, replacing any mechanical pump. When first developed, its operation was intriguing because it seemed paradoxical, almost like perpetual motion, but it was later explained using thermodynamics. Other types of injector may use other pressurised motive fluids such as air. History Giffard The injector was invented by Henri Giffard in the early 1850s and patented in France in 1858, for use on steam locomotives. It was patented in the United Kingdom by Sharp, Stewart and Company of Glasgow. After some initial scepticism resulting from the unfamiliar and superficially paradoxical mode of operation, the injector became widely adopted for steam locomotives as an alternative to mechanical pumps. Kneass Strickland Landis Kneass was a civil engineer, experimenter, and author, with many accomplishments involving railroading. Kneass began publishing a mathematical model of the physics of the injector, which he had verified by experimenting with steam. A steam injector has three primary sections: the steam nozzle, a diverging duct which converts high pressure steam to low pressure, high velocity wet steam; the combining tube, a converging duct which mixes the high velocity steam and cold water; and the delivery tube, a diverging duct where the high velocity stream of steam and cold water becomes a slow, high pressure stream of water. Nozzle Figure 15 shows four sketches Kneass drew of steam passing through a nozzle. In general, compressible flow through a diverging duct increases in velocity as the gas expands. The two sketches at the bottom of figure 15 are both diverging, but the bottom one is slightly curved, and produced the highest velocity flow parallel to the axis. The area of a duct is proportional to the square of the diameter, and the curvature allows the steam to expand more linearly as it passes through the duct. An ideal gas cools during adiabatic expansion (without adding heat), releasing less energy than the same gas would during isothermal expansion (constant temperature). Expansion of steam follows an intermediate thermodynamic process called the Rankine cycle. Steam does more work than an ideal gas, because steam remains hot during expansion. The extra heat comes from the enthalpy of vaporization, as some of the steam condenses back into droplets of water intermixed with the steam. Combining tube At the end of the nozzle, the steam has very high velocity, but at less than atmospheric pressure, drawing in cold water which becomes entrained in the stream, where the steam condenses into droplets of water in a converging duct.
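The nozzle's job described above is essentially to trade enthalpy for velocity. A back-of-the-envelope sketch of that trade, using the standard steady-flow nozzle relation and an assumed example enthalpy drop (not a figure from the article):

```python
# Ideal steady-flow nozzle: exit velocity from an enthalpy drop, v = sqrt(2 * dh).
# The 400 kJ/kg drop below is an assumed illustrative value.
from math import sqrt

def jet_velocity_m_s(enthalpy_drop_kj_per_kg: float) -> float:
    """Exit velocity (m/s) of a jet produced by the given specific enthalpy drop."""
    return sqrt(2.0 * enthalpy_drop_kj_per_kg * 1000.0)

print(f"~{jet_velocity_m_s(400.0):.0f} m/s for an assumed 400 kJ/kg enthalpy drop")
```

Velocities of this order are what allow the condensed jet, once slowed in the delivery cone, to recover a pressure above that of the boiler.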
Delivery tube The delivery tube is a diverging duct where the force of deceleration increases pressure, allowing the stream of water to enter the boiler. Operation The injector consists of a body filled with a secondary fluid, into which a motive fluid is injected. The motive fluid induces the secondary fluid to move. Injectors exist in many variations, and can have several stages, each repeating the same basic operating principle, to increase their overall effect. The injector uses the Venturi effect of a converging-diverging nozzle on a steam jet to convert the pressure energy of the steam to velocity energy, reducing its pressure to below that of the atmosphere, which enables it to entrain a fluid (e.g., water). After passing through the convergent "combining cone", the mixed fluid is fully condensed. The condensate mixture then enters a divergent "delivery cone" which slows the jet, converting kinetic energy back into static pressure energy above the pressure of the boiler, enabling its feed through a non-return valve. Most of the heat energy in the condensed steam is returned to the boiler, increasing the thermal efficiency of the process. Injectors are therefore typically over 98% energy-efficient overall; they are also simple compared to the many moving parts in a feed pump. Key design parameters Fluid feed rate and operating pressure range are the key parameters of an injector, and vacuum pressure and evacuation rate are the key parameters for an ejector. Compression ratio and entrainment ratio may also be defined: The compression ratio of the injector is defined as the ratio of the injector's outlet pressure to the inlet pressure of the suction fluid. The entrainment ratio of the injector is defined as the amount (in kg/h) of suction fluid that can be entrained and compressed by a given amount (in kg/h) of motive fluid. Lifting properties Other key properties of an injector include the fluid inlet pressure requirements, i.e. whether it is lifting or non-lifting. In a non-lifting injector, positive inlet fluid pressure is needed, e.g. the cold water input is fed by gravity. The steam-cone minimal orifice diameter is kept larger than the combining cone minimal diameter. The non-lifting Nathan 4000 injector used on the Southern Pacific 4294 could push 12,000 US gallons (45,000 L) per hour at 250 psi (17 bar). The lifting injector can operate with negative inlet fluid pressure, i.e. fluid lying below the level of the injector. It differs from the non-lifting type mainly in the relative dimensions of the nozzles. Overflow An overflow is required for excess steam or water to discharge, especially during starting. If the injector cannot initially overcome boiler pressure, the overflow allows the injector to continue to draw water and steam. Check valve There is at least one check valve (called a "clack valve" in locomotives because of the distinctive noise it makes) between the exit of the injector and the boiler to prevent back flow, and usually a valve to prevent air being sucked in at the overflow. Exhaust steam injector Efficiency was further improved by the development of a multi-stage injector which is powered not by live steam from the boiler but by exhaust steam from the cylinders, thereby making use of the residual energy in the exhaust steam which would otherwise go to waste. However, an exhaust injector also cannot work when the locomotive is stationary; later exhaust injectors could use a supply of live steam if no exhaust steam was available.
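The compression ratio and entrainment ratio defined under "Key design parameters" above are simple quotients, which the following sketch makes concrete. All numbers are assumed for illustration and do not describe any particular injector.

    # Illustrative calculation of the two ejector/injector figures of merit defined above.
    # Pressures and flow rates are assumed example values.
    p_suction = 0.2    # bar absolute, inlet pressure of the suction (entrained) fluid
    p_outlet = 1.1     # bar absolute, pressure at the injector outlet
    m_motive = 500.0   # kg/h of motive fluid (e.g. steam)
    m_suction = 900.0  # kg/h of suction fluid entrained by that motive flow

    compression_ratio = p_outlet / p_suction   # outlet pressure over suction inlet pressure
    entrainment_ratio = m_suction / m_motive   # suction flow entrained per unit motive flow
    print(compression_ratio, entrainment_ratio)  # 5.5 and 1.8 for these assumed values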
Problems Injectors can be troublesome under certain running conditions, such as when vibration causes the combined steam and water jet to "knock off". Originally the injector had to be restarted by careful manipulation of the steam and water controls, and the distraction caused by a malfunctioning injector was largely responsible for the 1913 Ais Gill rail accident. Later injectors were designed to automatically restart on sensing the collapse in vacuum from the steam jet, for example with a spring-loaded delivery cone. Another common problem occurs when the incoming water is too warm and is less effective at condensing the steam in the combining cone. That can also occur if the metal body of the injector is too hot, e.g. from prolonged use. The internal parts of an injector are subject to erosive wear, particularly damage at the throat of the delivery cone which may be due to cavitation. Vacuum ejectors An additional use for the injector technology is in vacuum ejectors in continuous train braking systems, which were made compulsory in the UK by the Regulation of Railways Act 1889. A vacuum ejector uses steam pressure to draw air out of the vacuum pipe and reservoirs of continuous train brake. Steam locomotives, with a ready source of steam, found ejector technology ideal with its rugged simplicity and lack of moving parts. A steam locomotive usually has two ejectors: a large ejector for releasing the brakes when stationary and a small ejector for maintaining the vacuum against leaks. The exhaust from the ejectors is invariably directed to the smokebox, by which means it assists the blower in draughting the fire. The small ejector is sometimes replaced by a reciprocating pump driven from the crosshead because this is more economical of steam and is only required to operate when the train is moving. Vacuum brakes have been superseded by air brakes in modern trains, which allow the use of smaller brake cylinders and/or higher braking force due to the greater difference from atmospheric pressure. Earlier application of the principle An empirical application of the principle was in widespread use on steam locomotives before its formal development as the injector, in the form of the arrangement of the blastpipe and chimney in the locomotive smokebox. The sketch on the right shows a cross section through a smokebox, rotated 90 degrees; it can be seen that the same components are present, albeit differently named, as in the generic diagram of an injector at the top of the article. Exhaust steam from the cylinders is directed through a nozzle on the end of the blastpipe, to reduce pressure inside the smokebox by entraining the flue gases from the boiler which are then ejected via the chimney. The effect is to increase the draught on the fire to a degree proportional to the rate of steam consumption, so that as more steam is used, more heat is generated from the fire and steam production is also increased. The effect was first noted by Richard Trevithick and subsequently developed empirically by the early locomotive engineers; Stephenson's Rocket made use of it, and this constitutes much of the reason for its notably improved performance in comparison with contemporary machines. Modern uses The use of injectors (or ejectors) in various industrial applications has become quite common due to their relative simplicity and adaptability. For example: To inject chemicals into the boiler drums of small, stationary, low pressure boilers. 
In large, high-pressure modern boilers, usage of injectors for chemical dosing is not possible due to their limited outlet pressures. In thermal power stations, they are used for the removal of the boiler bottom ash, the removal of fly ash from the hoppers of the electrostatic precipitators used to remove that ash from the boiler flue gas, and for drawing a vacuum pressure in steam turbine exhaust condensers. Jet pumps have been used in boiling water nuclear reactors to circulate the coolant fluid. For use in producing a vacuum pressure in steam jet cooling systems. For expansion work recovery in air conditioning and refrigeration systems. For enhanced oil recovery processes in the oil and gas industry. For the bulk handling of grains or other granular or powdered materials. The construction industry uses them for pumping turbid water and slurries. Eductors are used in ships to pump out residual ballast water or cargo oil that cannot be removed using centrifugal pumps, which lose suction head (for example because of the trim or list of the ship) and may be damaged if run dry. Eductors are also used on board ships to pump out bilges, since a centrifugal pump would not be practical there because the suction head may be lost frequently. Some aircraft (mostly earlier designs) use an ejector attached to the fuselage to provide vacuum for gyroscopic instruments such as an attitude indicator (artificial horizon). Eductors are used in aircraft fuel systems as transfer pumps; fluid flow from an engine-mounted mechanical pump can be delivered to a fuel tank-mounted eductor to transfer fuel from that tank. Aspirators are vacuum pumps based on the same operating principle and are used in laboratories to create a partial vacuum and for medical use in suction of mucus or bodily fluids. Water eductors are water pumps used for dredging silt and panning for gold; they are used because they can handle highly abrasive mixtures well. To create the vacuum system in a vacuum distillation unit (oil refinery). Vacuum autoclaves use an ejector to pull a vacuum, generally powered by the cold water supply to the machine. Lightweight jet pumps can be made out of paper mache. Well pumps Jet pumps are commonly used to extract water from water wells. The main pump, often a centrifugal pump, is powered and installed at ground level. Its discharge is split, with the greater part of the flow leaving the system, while a portion of the flow is returned to the jet pump installed below ground in the well. This recirculated part of the pumped fluid is used to power the jet. At the jet pump, the high-energy, low-mass returned flow drives more fluid from the well, becoming a low-energy, high-mass flow which is then piped to the inlet of the main pump. Shallow well pumps are those in which the jet assembly is attached directly to the main pump; they are limited to a depth of approximately 5 to 8 m to prevent cavitation. Deep well pumps are those in which the jet is located at the bottom of the well. The maximum depth for deep well pumps is determined by the inside diameter of, and the velocity through, the jet. The major advantage of jet pumps for deep well installations is the ability to situate all mechanical parts (e.g., electric/petrol motor, rotating impellers) at the ground surface for easy maintenance. The advent of the electrical submersible pump has partly replaced the need for jet type well pumps, except for driven point wells or surface water intakes.
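The 5 to 8 m limit quoted above for shallow well (suction-side) jet pumps can be motivated with a simple barometric estimate; the margin subtracted below for vapour pressure, friction and net positive suction head is an assumed round figure, not a standard value.

    # Rough sketch of the suction-lift limit behind the shallow-well depth figure above.
    # The 3 m margin for NPSH, vapour pressure and friction losses is an assumption.
    p_atm = 101_325.0   # Pa, standard atmospheric pressure
    rho = 1000.0        # kg/m^3, density of water
    g = 9.81            # m/s^2

    theoretical_lift = p_atm / (rho * g)     # about 10.3 m: the barometric maximum
    practical_lift = theoretical_lift - 3.0  # assumed margin against cavitation
    print(f"theoretical {theoretical_lift:.1f} m, practical about {practical_lift:.1f} m")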
Multi-stage steam vacuum ejectors In practice, for suction pressure below 100 mbar absolute, more than one ejector is used, usually with condensers between the ejector stages. Condensing of motive steam greatly improves ejector set efficiency; both barometric and shell-and-tube surface condensers are used. In operation a two-stage system consists of a primary high-vacuum (HV) ejector and a secondary low-vacuum (LV) ejector. Initially the LV ejector is operated to pull vacuum down from the starting pressure to an intermediate pressure. Once this pressure is reached, the HV ejector is then operated in conjunction with the LV ejector to finally pull vacuum to the required pressure. In operation a three-stage system consists of a primary booster, a secondary high-vacuum (HV) ejector, and a tertiary low-vacuum (LV) ejector. As per the two-stage system, initially the LV ejector is operated to pull vacuum down from the starting pressure to an intermediate pressure. Once this pressure is reached, the HV ejector is then operated in conjunction with the LV ejector to pull vacuum to the lower intermediate pressure. Finally the booster is operated (in conjunction with the HV & LV ejectors) to pull vacuum to the required pressure. Construction materials Injectors or ejectors are made of carbon steel, stainless steel, brass, titanium, PTFE, carbon, and other materials. See also Aspirator (pump) De Laval nozzle Diffusion pump Giovanni Battista Venturi Gustaf de Laval Nozzle Surface condenser Venturi effect References Further reading External links Use of Eductor for Lifting Water Chemical equipment Fluid dynamics Pumps Locomotive parts Steam locomotive technologies French inventions
Injector
[ "Physics", "Chemistry", "Engineering" ]
3,337
[ "Pumps", "Turbomachinery", "Chemical equipment", "Chemical engineering", "Physical systems", "Hydraulics", "nan", "Piping", "Fluid dynamics" ]
1,165,029
https://en.wikipedia.org/wiki/Collimator
A collimator is a device which narrows a beam of particles or waves. To narrow can mean either to cause the directions of motion to become more aligned in a specific direction (i.e., make collimated light or parallel rays), or to cause the spatial cross section of the beam to become smaller (beam limiting device). History The English physicist Henry Kater was the inventor of the floating collimator, which rendered a great service to practical astronomy. He reported on his invention in January 1825. In his report, Kater mentioned previous work in this area by Carl Friedrich Gauss and Friedrich Bessel. Optical collimators In optics, a collimator may consist of a curved mirror or lens with some type of light source and/or an image at its focus. This can be used to replicate a target focused at infinity with little or no parallax. In lighting, collimators are typically designed using the principles of nonimaging optics. Optical collimators can be used to calibrate other optical devices, to check if all elements are aligned on the optical axis, to set elements at proper focus, or to align two or more devices such as binoculars or gun barrels and gunsights. A surveying camera may be collimated by setting its fiducial markers so that they define the principal point, as in photogrammetry. Optical collimators are also used as gun sights in the collimator sight, which is a simple optical collimator with a cross hair or some other reticle at its focus. The viewer only sees an image of the reticle. The sight is used either with both eyes open while one eye looks into the collimator sight, with one eye open while moving the head to see the sight and the target alternately, or with one eye that partially sees the sight and the target at the same time. Adding a beam splitter allows the viewer to see the reticle and the field of view, making a reflector sight. Collimators may be used with laser diodes and CO2 cutting lasers. Proper collimation of a laser source with long enough coherence length can be verified with a shearing interferometer. X-ray, gamma ray, and neutron collimators In X-ray optics, gamma ray optics, and neutron optics, a collimator is a device that filters a stream of rays so that only those traveling parallel to a specified direction are allowed through. Collimators are used for X-ray, gamma-ray, and neutron imaging because it is difficult to focus these types of radiation into an image using lenses, as is routine with electromagnetic radiation at optical or near-optical wavelengths. Collimators are also used in radiation detectors in nuclear power stations to make them directionally sensitive. Applications The figure to the right illustrates how a Söller collimator is used in neutron and X-ray machines. The upper panel shows a situation where a collimator is not used, while the lower panel introduces a collimator. In both panels the source of radiation is to the right, and the image is recorded on the gray plate at the left of the panels. Without a collimator, rays from all directions will be recorded; for example, a ray that has passed through the top of the specimen (to the right of the diagram) but happens to be travelling in a downwards direction may be recorded at the bottom of the plate. The resultant image will be so blurred and indistinct as to be useless. In the lower panel of the figure, a collimator has been added (blue bars).
This may be a sheet of lead or other material opaque to the incoming radiation with many tiny holes bored through it, or, in the case of neutrons, it can be a sandwich arrangement (which can be up to several feet long; see ENGIN-X) with many layers alternating between a neutron-absorbing material (e.g., gadolinium) and a neutron-transmitting material. The transmitting material can be something simple, such as air; alternatively, if mechanical strength is needed, a material such as aluminium may be used. If this forms part of a rotating assembly, the sandwich may be curved. This allows energy selection in addition to collimation; the curvature of the collimator and its rotation will present a straight path only to one energy of neutrons. Only rays that are travelling nearly parallel to the holes will pass through them; any others will be absorbed by hitting the plate surface or the side of a hole. This ensures that rays are recorded in their proper place on the plate, producing a clear image. For industrial radiography using gamma radiation sources such as iridium-192 or cobalt-60, a collimator (beam limiting device) allows the radiographer to control the exposure of radiation to expose a film and create a radiograph, to inspect materials for defects. A collimator in this instance is most commonly made of tungsten, and is rated according to how many half value layers it contains, i.e., how many times it reduces undesirable radiation by half. For instance, the thinnest walls on the sides of a 4 HVL tungsten collimator will reduce the intensity of radiation passing through them by 88.5%. The shape of these collimators allows emitted radiation to travel freely toward the specimen and the x-ray film, while blocking most of the radiation that is emitted in undesirable directions such as toward workers. Limitations Although collimators improve resolution, they also reduce intensity by blocking incoming radiation, which is undesirable for remote sensing instruments that require high sensitivity. For this reason, the gamma ray spectrometer on the Mars Odyssey is a non-collimated instrument. Most lead collimators let less than 1% of incident photons through. Attempts have been made to replace collimators with electronic analysis. In radiation therapy Collimators (beam limiting devices) are used in linear accelerators used for radiotherapy treatments. They help to shape the beam of radiation emerging from the machine and can limit the maximum field size of a beam. The treatment head of a linear accelerator consists of both a primary and a secondary collimator. The primary collimator is positioned after the electron beam has reached a vertical orientation. When using photons, it is placed after the beam has passed through the X-ray target. The secondary collimator is positioned after either a flattening filter (for photon therapy) or a scattering foil (for electron therapy). The secondary collimator consists of two jaws which can be moved to either enlarge or minimize the size of the treatment field. New systems involving multileaf collimators (MLCs) are used to further shape a beam to localise treatment fields in radiotherapy. MLCs consist of approximately 50–120 leaves of heavy, metal collimator plates which slide into place to form the desired field shape.
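The half-value-layer rating mentioned above follows simple exponential arithmetic: a path through n half-value layers transmits (1/2)^n of the incident intensity. The sketch below is a generic illustration of that arithmetic, not a statement about any specific collimator; the 88.5% figure quoted for the thinnest walls would correspond to roughly three HVLs along that particular path rather than the full rated four.

    # Generic half-value-layer attenuation arithmetic (illustrative only).
    def transmitted_fraction(n_hvl: float) -> float:
        """Fraction of incident intensity passing through n half-value layers."""
        return 0.5 ** n_hvl

    for n in (1, 2, 3, 4):
        blocked = 1.0 - transmitted_fraction(n)
        print(f"{n} HVL blocks {blocked:.2%} of the incident radiation")
    # 3 HVL blocks 87.5%, 4 HVL blocks 93.75%; thinner or obliquely crossed walls block less.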
Computing the spatial resolution To find the spatial resolution of a parallel hole collimator with a hole length l, a hole diameter d and a distance to the imaged object z, the following formula can be used: R \approx \frac{d\,(l_{\mathrm{eff}} + z)}{l_{\mathrm{eff}}} where the effective length is defined as l_{\mathrm{eff}} = l - \frac{2}{\mu} where \mu is the linear attenuation coefficient of the material from which the collimator is made. See also Autocollimation Autocollimator Collimated light Hohlraum Nonimaging optics Snoot in lighting Reflector sight in fighter cockpits References Accelerator physics Neutron instrumentation Optical devices Radiology Synchrotron instrumentation X-ray instrumentation
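A short worked example of the resolution formula above, with assumed hole dimensions and an assumed attenuation coefficient for the septal material (the values are placeholders chosen only to exercise the formula):

    # Worked example of the parallel-hole collimator resolution formula given above.
    # Hole geometry and attenuation coefficient are assumed illustrative values.
    l = 24.0e-3    # m, hole length
    d = 1.5e-3     # m, hole diameter
    z = 100.0e-3   # m, distance from collimator face to the imaged object
    mu = 3000.0    # 1/m, assumed linear attenuation coefficient of the collimator material

    l_eff = l - 2.0 / mu                  # effective hole length
    resolution = d * (l_eff + z) / l_eff  # spatial resolution at distance z
    print(f"effective length {l_eff*1e3:.1f} mm, resolution {resolution*1e3:.1f} mm")

With these numbers the effective length is about 23.3 mm and the resolution about 7.9 mm, showing how resolution degrades as the object moves away from the collimator face.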
Collimator
[ "Physics", "Materials_science", "Technology", "Engineering" ]
1,543
[ "Glass engineering and science", "Applied and interdisciplinary physics", "Optical devices", "Synchrotron instrumentation", "Measuring instruments", "X-ray instrumentation", "Neutron instrumentation", "Experimental physics", "Accelerator physics" ]
1,165,244
https://en.wikipedia.org/wiki/Chronon
A chronon is a proposed quantum of time, that is, a discrete and indivisible "unit" of time as part of a hypothesis that proposes that time is not continuous. In simple language, a chronon is the smallest, discrete, non-decomposable unit of time. In a one-dimensional model, a chronon is a time interval or period, while in an n-dimensional model it is a non-decomposable region in n-dimensional time. It is not easy to see how current physical theory could be recast so as to postulate only a discrete spacetime (or even a merely dense one). For a set of instants to be dense, every instant not in the set must have a sequence of instants in the set that converge (get arbitrarily close) to it. For it to be a continuum, however, something more is required: that every set of instants earlier (later) than any given one should have a least upper (greatest lower) bound that is also an instant (see least upper bound property). It is continuity that enables modern mathematics to surmount the paradox of extension framed by the pre-Socratic eleatic Zeno, a paradox comprising the question of how a finite interval can be made up of dimensionless points or instants. Early work While time is a continuous quantity in both standard quantum mechanics and general relativity, many physicists have suggested that a discrete model of time might work, especially when considering the combination of quantum mechanics with general relativity to produce a theory of quantum gravity. The term was introduced in this sense by Robert Lévi in 1927. A quantum theory in which time is a quantum variable with a discrete spectrum, and which is nevertheless consistent with special relativity, was proposed by Chen Ning Yang in 1947. Henry Margenau in 1950 suggested that the chronon might be the time for light to travel the classical radius of an electron. Work by Caldirola A prominent model was introduced by Piero Caldirola in 1980. In Caldirola's model, one chronon corresponds to about 6.27×10⁻²⁴ seconds for an electron. This is much longer than the Planck time, which is only about 5.39×10⁻⁴⁴ seconds. The Planck time may be postulated as a lower-bound on the length of time that could exist between two connected events, but it is not a quantization of time itself since there is no requirement that the time between two events be separated by a discrete number of Planck times. For example, ordered pairs of events (A, B) and (B, C) could each be separated by slightly more than 1 Planck time: this would produce a measurement limit of 1 Planck time between A and B or B and C, but a limit of 3 Planck times between A and C. The chronon is a quantization of the evolution in a system along its world line. Consequently, the value of the chronon, like other quantized observables in quantum mechanics, is a function of the system under consideration, particularly its boundary conditions. The value for the chronon, θ0, is calculated as \theta_0 = \frac{2}{3}\,\frac{e^2}{4\pi\varepsilon_0\, m_0 c^3} From this formula, it can be seen that the nature of the moving particle being considered must be specified, since the value of the chronon depends on the particle's charge and mass. Caldirola claims that the chronon has important implications for quantum mechanics, in particular that it allows for a clear answer to the question of whether a free-falling charged particle does or does not emit radiation. This model supposedly avoids the difficulties met by Abraham–Lorentz's and Dirac's approaches to the problem and provides a natural explication of quantum decoherence.
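The electron value quoted above can be checked numerically from the formula for θ0; the snippet below assumes that form of the formula (with m0 the electron rest mass and e the elementary charge) and simply evaluates it with standard physical constants.

    # Numerical check of the chronon value quoted above for the electron, assuming
    # theta_0 = (2/3) * e^2 / (4 * pi * epsilon_0 * m_e * c^3).
    from scipy.constants import e, m_e, c, epsilon_0, pi

    theta_0 = (2.0 / 3.0) * e**2 / (4.0 * pi * epsilon_0 * m_e * c**3)
    print(theta_0)   # about 6.27e-24 s, versus a Planck time of about 5.39e-44 s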
See also Elementary particle Gravastar Tachyon List of particles Particle physics Theoretical physics Notes References External links https://www.britannica.com/technology/chronon Quantum gravity Units of time
Chronon
[ "Physics", "Mathematics" ]
810
[ "Physical quantities", "Time", "Units of time", "Quantity", "Unsolved problems in physics", "Quantum gravity", "Spacetime", "Physics beyond the Standard Model", "Units of measurement" ]
1,165,416
https://en.wikipedia.org/wiki/Interhalogen
In chemistry, an interhalogen compound is a molecule which contains two or more different halogen atoms (fluorine, chlorine, bromine, iodine, or astatine) and no atoms of elements from any other group. Most interhalogen compounds known are binary (composed of only two distinct elements). Their formulae are generally XYn, where n = 1, 3, 5 or 7, and X is the less electronegative of the two halogens. The value of n in interhalogens is always odd, because of the odd valence of halogens. They are all prone to hydrolysis, and ionize to give rise to polyhalogen ions. Those formed with astatine have a very short half-life due to astatine being intensely radioactive. No interhalogen compounds containing three or more different halogens are definitely known, although a few books claim that such compounds have been obtained, and theoretical studies seem to indicate that some of them would be barely stable. Some interhalogens are good halogenating agents, although one of them is too reactive to generate fluorine. Beyond that, iodine monochloride has several applications, including helping to measure the saturation of fats and oils, and as a catalyst for some reactions. A number of interhalogens are used to form polyhalides. Similar compounds exist with various pseudohalogens, such as the halogen azides (FN3, ClN3, BrN3, and IN3) and the cyanogen halides (FCN, ClCN, BrCN, and ICN). Types of interhalogens Diatomic interhalogens The interhalogens of the form XY have physical properties intermediate between those of the two parent halogens. The covalent bond between the two atoms has some ionic character, the less electronegative halogen, X, being oxidised and having a partial positive charge. All combinations of fluorine, chlorine, bromine, and iodine that have the above-mentioned general formula are known, but not all are stable. Some combinations of astatine with other halogens are not even known, and those that are known are highly unstable. Chlorine monofluoride (ClF) is the lightest interhalogen compound. ClF is a colorless gas with a normal boiling point of −100 °C. Bromine monofluoride (BrF) has not been obtained as a pure compound; it dissociates into the trifluoride and free bromine. It is created according to the following equation: Br2(l) + F2(g) → 2 BrF(g) Bromine monofluoride dissociates like this: 3 BrF → Br2 + BrF3 Iodine monofluoride (IF) is unstable and decomposes at 0 °C, disproportionating into elemental iodine and iodine pentafluoride. Bromine monochloride (BrCl) is a yellow-brown gas with a boiling point of 5 °C. Iodine monochloride (ICl) exists as red transparent crystals that melt at 27.2 °C to form a choking brownish liquid (similar in appearance and weight to bromine). It reacts with HCl to form the strong acid HICl2. The crystal structure of iodine monochloride consists of puckered zig-zag chains, with strong interactions between the chains. Astatine monochloride (AtCl) is made either by the direct combination of gas-phase astatine with chlorine or by the sequential addition of astatine and dichromate ion to an acidic chloride solution. Iodine monobromide (IBr) is made by the direct combination of the elements to form a dark red crystalline solid. It melts at 42 °C and boils at 116 °C to form a partially dissociated vapour. Astatine monobromide (AtBr) is made by the direct combination of astatine with either bromine vapour or an aqueous solution of iodine monobromide. Astatine monoiodide (AtI) is made by direct combination of astatine and iodine. No astatine fluorides have been discovered yet.
Their absence has been speculatively attributed to the extreme reactivity of such compounds, including the reaction of an initially formed fluoride with the walls of the glass container to form a non-volatile product. Thus, although the synthesis of an astatine fluoride is thought to be possible, it may require a liquid halogen fluoride solvent, as has already been used for the characterization of radon fluorides. In addition, there exist analogous molecules involving pseudohalogens, such as the cyanogen halides. Tetratomic interhalogens Chlorine trifluoride (ClF3) is a colourless gas that condenses to a green liquid, and freezes to a white solid. It is made by reacting chlorine with an excess of fluorine at 250 °C in a nickel tube. It reacts more violently than fluorine, often explosively. The molecule is planar and T-shaped. It is used in the manufacture of uranium hexafluoride. Bromine trifluoride (BrF3) is a yellow-green liquid that conducts electricity; it self-ionises to form [BrF2]+ and [BrF4]−. It reacts with many metals and metal oxides to form similar ionised entities; with other metals, it forms the metal fluoride plus free bromine and oxygen; and with water, it forms hydrofluoric acid and hydrobromic acid. It is used in organic chemistry as a fluorinating agent. It has the same molecular shape as chlorine trifluoride. Iodine trifluoride (IF3) is a yellow solid that decomposes above −28 °C. It can be synthesised from the elements, but care must be taken to avoid the formation of IF5. F2 attacks I2 to yield IF3 at −45 °C in CCl3F. Alternatively, at low temperatures, the fluorination reaction I2 + 3 XeF2 → 2 IF3 + 3 Xe can be used. Not much is known about iodine trifluoride as it is so unstable. Iodine trichloride (ICl3) forms lemon yellow crystals that melt under pressure to a brown liquid. It can be made from the elements at low temperature, or from iodine pentoxide and hydrogen chloride. It reacts with many metal chlorides to form tetrachloroiodides (containing the ICl4− anion), and hydrolyses in water. The molecule is a planar dimer (ICl3)2, with each iodine atom surrounded by four chlorine atoms. Iodine tribromide (IBr3) is a dark brown liquid. Hexatomic interhalogens All stable hexatomic and octatomic interhalogens involve a heavier halogen combined with five or seven fluorine atoms. Unlike the other halogens, fluorine atoms have high electronegativity and small size, which stabilizes these higher coordination numbers. Chlorine pentafluoride (ClF5) is a colourless gas, made by reacting chlorine trifluoride with fluorine at high temperatures and high pressures. It reacts violently with water and most metals and nonmetals. Bromine pentafluoride (BrF5) is a colourless fuming liquid, made by reacting bromine trifluoride with fluorine at 200 °C. It is physically stable, but decomposes violently on contact with water, organic substances, and most metals and nonmetals. Iodine pentafluoride (IF5) is a colourless liquid, made by reacting iodine pentoxide with fluorine, or iodine with silver(II) fluoride. It is highly reactive, even reacting slowly with glass. It reacts with water to form hydrofluoric acid and with fluorine gas to form iodine heptafluoride. The molecule has the form of a tetragonal pyramid. Octatomic interhalogens Iodine heptafluoride (IF7) is a colourless gas and a strong fluorinating agent. It is made by reacting iodine pentafluoride with fluorine gas. The molecule is a pentagonal bipyramid. This compound is the only known interhalogen compound in which the larger atom carries seven of the smaller atoms.
All attempts to synthesize bromine or chlorine heptafluoride have met with failure; instead, bromine pentafluoride or chlorine pentafluoride is produced, along with fluorine gas. Properties Typically, interhalogen bonds are more reactive than diatomic halogen bonds, because interhalogen bonds are weaker than those of the diatomic halogens, with the exception of F2. If interhalogens are exposed to water, they convert to halide and oxyhalide ions. With BrF5, this reaction can be explosive. If interhalogens are exposed to silicon dioxide or metal oxides, the silicon or metal bonds with one of the halogens, leaving free diatomic halogens and diatomic oxygen. Most interhalogens are halogen fluorides, and all but three (IBr, AtBr, and AtI) of the remainder are halogen chlorides. Chlorine and bromine can each bond to five fluorine atoms, and iodine can bond to seven. AX and AX3 interhalogens can form between two halogens whose electronegativities are relatively close to one another. When interhalogens are exposed to metals, they react to form metal halides of the constituent halogens. The oxidation power of an interhalogen increases with the number of halogens attached to the central atom of the interhalogen, as well as with the decreasing size of the central atom of the compound. Interhalogens containing fluorine are more likely to be volatile than interhalogens containing heavier halogens. Interhalogens with one or three halogens bonded to a central atom are formed by two elements whose electronegativities are not far apart. Interhalogens with five or seven halogens bonded to a central atom are formed by two elements whose sizes are very different. The number of smaller halogens that can bond to a large central halogen is guided by the ratio of the atomic radius of the larger halogen over the atomic radius of the smaller halogen. A number of interhalogens, such as IF7, react with all metals except for those in the platinum group. IF7, unlike interhalogens in the XY5 series, does not react with the fluorides of the alkali metals. ClF3 is the most reactive of the XY3 interhalogens. ICl3 is the least reactive. BrF3 has the highest thermal stability of the interhalogens with four atoms. ICl3 has the lowest. Chlorine trifluoride has a boiling point of −12 °C. Bromine trifluoride has a boiling point of 127 °C and is a liquid at room temperature. Iodine trichloride melts at 101 °C. Most interhalogens are covalent gases. Some interhalogens, especially those containing bromine, are liquids, and most iodine-containing interhalogens are solids. Most of the interhalogens composed of lighter halogens are fairly colorless, but the interhalogens containing heavier halogens are deeper in color due to their higher molecular weight. In this respect, the interhalogens are similar to the halogens. The greater the difference between the electronegativities of the two halogens in an interhalogen, the higher the boiling point of the interhalogen. All interhalogens are diamagnetic. The bond length of interhalogens in the XY series increases with the size of the constituent halogens. For instance, ClF has a bond length of 1.628 Å, and IBr has a bond length of 2.47 Å. Production It is possible to produce larger interhalogens, such as ClF3, by exposing smaller interhalogens, such as ClF, to pure diatomic halogens, such as F2. This method of production is especially useful for generating halogen fluorides.
At temperatures of 250 to 300 °C, this type of production method can also convert larger interhalogens into smaller ones. It is also possible to produce interhalogens by combining two pure halogens under various conditions. This method can generate any interhalogen save for IF7. Smaller interhalogens, such as ClF, can form by direct reaction with pure halogens. For instance, F2 reacts with Cl2 at 250 °C to form two molecules of ClF. Br2 reacts with diatomic fluorine in the same way, but at 60 °C. I2 reacts with diatomic fluorine at only 35 °C. ClF and BrF can both be produced by the reaction of a larger interhalogen, such as ClF3 or BrF3, with a diatomic molecule of the element lower in the periodic table. Among the hexatomic interhalogens, IF5 has a higher boiling point (97 °C) than BrF5 (40.5 °C), although both compounds are liquids at room temperature. The interhalogen IF7 can be formed by reacting palladium iodide with fluorine. See also Interchalcogen Hydrogen halide Notes References Bibliography External links
Interhalogen
[ "Chemistry" ]
2,889
[ "Interhalogen compounds", "Oxidizing agents" ]
1,165,464
https://en.wikipedia.org/wiki/Trypanothione
Trypanothione is an unusual form of glutathione containing two molecules of glutathione joined by a spermidine (polyamine) linker. It is found in parasitic protozoa such as leishmania and trypanosomes. These protozoal parasites are the cause of leishmaniasis, sleeping sickness and Chagas' disease. Trypanothione was discovered by Alan Fairlamb. Its structure was proven by chemical synthesis. It is present mainly in the Kinetoplastida but can be found in other parasitic protozoa such as Entamoeba histolytica. Since this thiol is absent from humans and is essential for the survival of the parasites, the enzymes that make and use this molecule are targets for the development of new drugs to treat these diseases. Trypanothione-dependent enzymes include reductases, peroxidases, glyoxalases and transferases. Trypanothione-disulfide reductase (TryR) was the first trypanothione-dependent enzyme to be discovered (EC 1.8.1.12). It is an NADPH-dependent flavoenzyme that reduces trypanothione disulfide. TryR is essential for survival of these parasites both in vitro and in the human host. A major function of trypanothione is in the defence against oxidative stress. Here, trypanothione-dependent enzymes such as tryparedoxin peroxidase (TryP) reduce peroxides using electrons donated either directly from trypanothione, or via the redox intermediate tryparedoxin (TryX). Trypanothione-dependent hydrogen peroxide metabolism is particularly important in these organisms because they lack catalase. Since the trypanosomatids also lack an equivalent of thioredoxin reductase, trypanothione reductase is the sole path that electrons can take from NADPH to these antioxidant enzymes. References Thiols Peptides
Trypanothione
[ "Chemistry" ]
434
[ "Biomolecules by chemical classification", "Thiols", "Organic compounds", "Molecular biology", "Peptides" ]
1,165,549
https://en.wikipedia.org/wiki/N-vector%20model
In statistical mechanics, the n-vector model or O(n) model is a simple system of interacting spins on a crystalline lattice. It was developed by H. Eugene Stanley as a generalization of the Ising model, XY model and Heisenberg model. In the n-vector model, n-component unit-length classical spins \mathbf{s}_i are placed on the vertices of a d-dimensional lattice. The Hamiltonian of the n-vector model is given by: H = -J\sum_{\langle i,j\rangle} \mathbf{s}_i \cdot \mathbf{s}_j where the sum runs over all pairs of neighboring spins \langle i,j\rangle and \cdot denotes the standard Euclidean inner product. Special cases of the n-vector model are: n = 0: The self-avoiding walk n = 1: The Ising model n = 2: The XY model n = 3: The Heisenberg model n = 4: Toy model for the Higgs sector of the Standard Model The general mathematical formalism used to describe and solve the n-vector model and certain generalizations is developed in the article on the Potts model. Reformulation as a loop model In a small coupling expansion, the weight of a configuration may be rewritten as \prod_{\langle i,j\rangle} e^{K\,\mathbf{s}_i\cdot\mathbf{s}_j} \approx \prod_{\langle i,j\rangle} \left(1 + K\,\mathbf{s}_i\cdot\mathbf{s}_j\right) Integrating over the vector \mathbf{s}_k gives rise to expressions such as \int d\mathbf{s}_k\, s_k^a s_k^b s_k^c s_k^d \propto \delta^{ab}\delta^{cd} + \delta^{ac}\delta^{bd} + \delta^{ad}\delta^{bc} which is interpreted as a sum over the 3 possible ways of connecting the vertices pairwise using 2 lines going through vertex k. Integrating over all vectors, the corresponding lines combine into closed loops, and the partition function becomes a sum over loop configurations: Z = \sum_{L\in\mathcal{L}} K^{E(L)}\, n^{N(L)} where \mathcal{L} is the set of loop configurations, with N(L) the number of loops in the configuration L, and E(L) the total number of lattice edges covered by the loops. In two dimensions, it is common to assume that loops do not cross: either by choosing the lattice to be trivalent, or by considering the model in a dilute phase where crossings are irrelevant, or by forbidding crossings by hand. The resulting model of non-intersecting loops can then be studied using powerful algebraic methods, and its spectrum is exactly known. Moreover, the model is closely related to the random cluster model, which can also be formulated in terms of non-crossing loops. Much less is known in models where loops are allowed to cross, and in higher than two dimensions. Continuum limit The continuum limit can be understood to be the sigma model. This can be easily obtained by writing the Hamiltonian in terms of the product \mathbf{s}_i\cdot\mathbf{s}_j = \tfrac{1}{2}\left(\mathbf{s}_i^2 + \mathbf{s}_j^2\right) - \tfrac{1}{2}\left(\mathbf{s}_i - \mathbf{s}_j\right)^2 where \tfrac{1}{2}\left(\mathbf{s}_i^2 + \mathbf{s}_j^2\right) is the "bulk magnetization" term. Dropping this term as an overall constant factor added to the energy, the limit is obtained by defining the Newton finite difference \Delta_\mu \mathbf{s} = \frac{\mathbf{s}_{i+\mu} - \mathbf{s}_i}{a} on neighboring lattice locations i and i+\mu, with a the lattice spacing. Then in the limit a \to 0, \Delta_\mu \mathbf{s} \to \partial_\mu \mathbf{s}, where \partial_\mu is the gradient in the \mu direction. Thus, in the limit, H \to \tfrac{J}{2}\int d^dx\; \partial_\mu\mathbf{s}\cdot\partial_\mu\mathbf{s} (up to a constant and a rescaling of the coupling), which can be recognized as the kinetic energy of the field in the sigma model. One still has two possibilities for the spin \mathbf{s}: it is either taken from a discrete set of spins (the Potts model) or it is taken as a point on the sphere S^{n-1}; that is, \mathbf{s} is a continuously-valued vector of unit length. In the latter case, this is referred to as the non-linear sigma model, as the rotation group O(n) is the group of isometries of S^{n-1}, and obviously, S^{n-1} isn't "flat", i.e. isn't a linear field. Conformal field theory At the critical temperature and in the continuum limit, the model gives rise to a conformal field theory called the critical O(n) model. This CFT can be analyzed using expansions in the dimension d or in n, or using the conformal bootstrap approach. Its conformal data are functions of d and n, on which many results are known. References Lattice models
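To make the definition above concrete, here is a minimal Metropolis Monte Carlo sketch of the n-vector model on a two-dimensional square lattice with periodic boundaries. The lattice size, coupling, temperature and proposal scheme are all assumptions chosen for illustration; it is not an algorithm taken from the article.

    # Minimal Metropolis sketch of the O(n) model on an L x L periodic square lattice.
    import numpy as np

    def random_unit_spins(L, n, rng):
        s = rng.normal(size=(L, L, n))
        return s / np.linalg.norm(s, axis=-1, keepdims=True)   # unit-length n-component spins

    def site_energy(s, i, j, J=1.0):
        L = s.shape[0]
        nbrs = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        return -J * np.dot(s[i, j], nbrs)      # bonds touching site (i, j) in H = -J sum s_i . s_j

    def metropolis_sweep(s, T, rng):
        L, _, n = s.shape
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            proposal = rng.normal(size=n)
            proposal /= np.linalg.norm(proposal)   # fresh spin drawn uniformly on the sphere
            old = s[i, j].copy()
            dE = -site_energy(s, i, j)
            s[i, j] = proposal
            dE += site_energy(s, i, j)
            if dE > 0 and rng.random() >= np.exp(-dE / T):
                s[i, j] = old                      # reject the move, keep the old spin
        return s

    rng = np.random.default_rng(0)
    spins = random_unit_spins(16, 3, rng)          # n = 3 reproduces the classical Heisenberg model
    for _ in range(200):
        spins = metropolis_sweep(spins, T=1.5, rng=rng)
    print(np.linalg.norm(spins.mean(axis=(0, 1))))  # magnitude of the average magnetization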
N-vector model
[ "Physics", "Materials_science" ]
693
[ "Statistical mechanics stubs", "Theoretical physics", "Lattice models", "Computational physics", "Condensed matter physics", "Theoretical physics stubs", "Statistical mechanics", "Computational physics stubs" ]
1,165,668
https://en.wikipedia.org/wiki/Ewald%27s%20sphere
The Ewald sphere is a geometric construction used in electron, neutron, and x-ray diffraction which shows the relationship between: the wavevector of the incident and diffracted beams, the diffraction angle for a given reflection, the reciprocal lattice of the crystal. It was conceived by Paul Peter Ewald, a German physicist and crystallographer. Ewald himself spoke of the sphere of reflection. It is often simplified to the two-dimensional "Ewald's circle" model or may be referred to as the Ewald sphere. Ewald construction A crystal can be described as a lattice of atoms, which in turn leads to the reciprocal lattice. With electrons, neutrons or x-rays there is diffraction by the atoms, and if there is an incident plane wave with a given wavevector, there will be outgoing diffracted wavevectors, as shown in the diagram, after the wave has been diffracted by the atoms. The energy of the waves (electron, neutron or x-ray) depends upon the magnitude of the wavevector, so if there is no change in energy (elastic scattering) these have the same magnitude, that is, they must all lie on the Ewald sphere. In the figure the red dot is the origin for the wavevectors, the black spots are reciprocal lattice points (vectors), and shown in blue are three wavevectors. For one of these wavevectors the corresponding reciprocal lattice point lies on the Ewald sphere, which is the condition for Bragg diffraction. For another, the corresponding reciprocal lattice point lies off the Ewald sphere, and the small vector by which it misses the sphere is called the excitation error. The amplitude, and also the intensity, of diffraction into an outgoing wavevector depends upon the Fourier transform of the shape of the sample, the excitation error, the structure factor for the relevant reciprocal lattice vector, and also whether the scattering is weak or strong. For neutrons and x-rays the scattering is generally weak so there is mainly Bragg diffraction, but it is much stronger for electron diffraction. See also Bragg's law Electron diffraction Laue equations Structure factor X-ray crystallography References Notes External links Origin of the Ewald Sphere in scattering (TEM) See also Chapter 5 in this web site Diffraction Crystallography
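A small numerical sketch can illustrate the construction: for elastic scattering, a reciprocal-lattice vector g satisfies the Bragg condition when the tip of k0 + g lies on the sphere of radius |k0|. The wavelength, lattice parameter, chosen reflection and the k = 2π/λ convention below are assumptions for illustration, and the radial distance from the sphere is used here as a simple stand-in for the excitation error.

    # Sketch: check how far a reciprocal lattice point lies from the Ewald sphere.
    # Conventions (k = 2*pi/lambda, cubic lattice, chosen reflection) are assumptions.
    import numpy as np

    wavelength = 2.51e-12                    # m, roughly a 200 keV electron beam (assumed)
    a = 4.05e-10                             # m, assumed cubic lattice parameter
    k0 = np.array([0.0, 0.0, 2 * np.pi / wavelength])     # incident wavevector

    def distance_from_sphere(g):
        """|k0 + g| - |k0|; zero means the point sits exactly on the Ewald sphere."""
        return np.linalg.norm(k0 + g) - np.linalg.norm(k0)

    g_200 = (2 * np.pi / a) * np.array([2.0, 0.0, 0.0])    # assumed (200)-type reciprocal vector
    print(f"deviation from the sphere: {distance_from_sphere(g_200):.3e} 1/m")
    # A small deviation (small excitation error) means the reflection diffracts strongly.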
Ewald's sphere
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
472
[ "Spectrum (physical sciences)", "Materials science", "Crystallography", "Diffraction", "Condensed matter physics", "Spectroscopy" ]
1,166,059
https://en.wikipedia.org/wiki/Boltzmann%20machine
A Boltzmann machine (also called Sherrington–Kirkpatrick model with external field or stochastic Ising model), named after Ludwig Boltzmann, is a spin-glass model with an external field, i.e., a Sherrington–Kirkpatrick model, treated as a stochastic Ising model. It is a statistical physics technique applied in the context of cognitive science. It is also classified as a Markov random field. Boltzmann machines are theoretically intriguing because of the locality and Hebbian nature of their training algorithm (being trained by Hebb's rule), and because of their parallelism and the resemblance of their dynamics to simple physical processes. Boltzmann machines with unconstrained connectivity have not been proven useful for practical problems in machine learning or inference, but if the connectivity is properly constrained, the learning can be made efficient enough to be useful for practical problems. They are named after the Boltzmann distribution in statistical mechanics, which is used in their sampling function. They were heavily popularized and promoted by Geoffrey Hinton, Terry Sejnowski and Yann LeCun in cognitive sciences communities, particularly in machine learning, as part of "energy-based models" (EBM), because Hamiltonians of spin glasses are used as the energy that serves as a starting point to define the learning task. Structure A Boltzmann machine, like a Sherrington–Kirkpatrick model, is a network of units with a total "energy" (Hamiltonian) defined for the overall network. Its units produce binary results. Boltzmann machine units are stochastic. The global energy E in a Boltzmann machine is identical in form to that of Hopfield networks and Ising models: E = -\left(\sum_{i<j} w_{ij}\, s_i\, s_j + \sum_i \theta_i\, s_i\right) Where: w_{ij} is the connection strength between unit j and unit i. s_i is the state, s_i \in \{0,1\}, of unit i. \theta_i is the bias of unit i in the global energy function. (-\theta_i is the activation threshold for the unit.) Often the weights w_{ij} are represented as a symmetric matrix with zeros along the diagonal. Unit state probability The difference in the global energy that results from a single unit i equaling 0 (off) versus 1 (on), written \Delta E_i, assuming a symmetric matrix of weights, is given by: \Delta E_i = \sum_{j} w_{ij}\, s_j + \theta_i This can be expressed as the difference of energies of two states: \Delta E_i = E_{i=\text{off}} - E_{i=\text{on}} Substituting the energy of each state with its relative probability according to the Boltzmann factor (the property of a Boltzmann distribution that the energy of a state is proportional to the negative log probability of that state) yields: \Delta E_i = -k_B T \ln(p_{i=\text{off}}) - \left(-k_B T \ln(p_{i=\text{on}})\right) where k_B is the Boltzmann constant and is absorbed into the artificial notion of temperature T. Noting that the probabilities of the unit being on or off sum to 1 allows for the simplification: \frac{\Delta E_i}{T} = \ln\left(\frac{p_{i=\text{on}}}{1 - p_{i=\text{on}}}\right) whence the probability that the i-th unit is on is given by p_{i=\text{on}} = \frac{1}{1 + \exp\left(-\frac{\Delta E_i}{T}\right)} where the scalar T is referred to as the temperature of the system. This relation is the source of the logistic function found in probability expressions in variants of the Boltzmann machine. Equilibrium state The network runs by repeatedly choosing a unit and resetting its state. After running for long enough at a certain temperature, the probability of a global state of the network depends only upon that global state's energy, according to a Boltzmann distribution, and not on the initial state from which the process was started. This means that log-probabilities of global states become linear in their energies. This relationship is true when the machine is "at thermal equilibrium", meaning that the probability distribution of global states has converged.
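The update rule implied by the unit-state probability above is easy to state in code: repeatedly pick a unit, compute its energy gap against the rest of the network, and switch it on with the logistic probability. The network size, weights and temperature below are assumed toy values, not taken from the article.

    # Sketch of the stochastic unit update p(s_i = 1) = 1 / (1 + exp(-dE_i / T)).
    # Weights, biases and temperature are assumed toy values.
    import numpy as np

    rng = np.random.default_rng(1)
    n_units = 5
    W = rng.normal(scale=0.5, size=(n_units, n_units))
    W = np.triu(W, 1) + np.triu(W, 1).T       # symmetric weight matrix with a zero diagonal
    theta = rng.normal(scale=0.1, size=n_units)
    s = rng.integers(0, 2, size=n_units).astype(float)

    def update_unit(i, T=1.0):
        dE = W[i] @ s + theta[i]              # energy gap for switching unit i from off to on
        p_on = 1.0 / (1.0 + np.exp(-dE / T))
        s[i] = 1.0 if rng.random() < p_on else 0.0

    for _ in range(1000):                     # repeatedly choose a unit and resample its state
        update_unit(rng.integers(n_units))
    print(s)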
The network is run beginning from a high temperature, and its temperature is gradually decreased until a thermal equilibrium is reached at a lower temperature. It then may converge to a distribution where the energy level fluctuates around the global minimum. This process is called simulated annealing. To train the network so that the chance it will converge to a global state according to an external distribution over these states, the weights must be set so that the global states with the highest probabilities get the lowest energies. This is done by training. Training The units in the Boltzmann machine are divided into 'visible' units, V, and 'hidden' units, H. The visible units are those that receive information from the 'environment', i.e. the training set is a set of binary vectors over the set V. The distribution over the training set is denoted P^{+}(V). The distribution over global states converges as the Boltzmann machine reaches thermal equilibrium. We denote this distribution, after we marginalize it over the hidden units, as P^{-}(V). Our goal is to approximate the "real" distribution P^{+}(V) using the P^{-}(V) produced by the machine. The similarity of the two distributions is measured by the Kullback–Leibler divergence, G: G = \sum_{v} P^{+}(v) \ln\left(\frac{P^{+}(v)}{P^{-}(v)}\right) where the sum is over all the possible states v of V. G is a function of the weights, since they determine the energy of a state, and the energy determines P^{-}(v), as promised by the Boltzmann distribution. A gradient descent algorithm over G changes a given weight, w_{ij}, by subtracting the partial derivative of G with respect to the weight. Boltzmann machine training involves two alternating phases. One is the "positive" phase where the visible units' states are clamped to a particular binary state vector sampled from the training set (according to P^{+}). The other is the "negative" phase where the network is allowed to run freely, i.e. only the input nodes have their state determined by external data, but the output nodes are allowed to float. The gradient with respect to a given weight, w_{ij}, is given by the equation: \frac{\partial G}{\partial w_{ij}} = -\frac{1}{R}\left[p_{ij}^{+} - p_{ij}^{-}\right] where: p_{ij}^{+} is the probability that units i and j are both on when the machine is at equilibrium on the positive phase. p_{ij}^{-} is the probability that units i and j are both on when the machine is at equilibrium on the negative phase. R denotes the learning rate. This result follows from the fact that at thermal equilibrium the probability of any global state when the network is free-running is given by the Boltzmann distribution. This learning rule is biologically plausible because the only information needed to change the weights is provided by "local" information. That is, the connection (synapse, biologically) does not need information about anything other than the two neurons it connects. This is more biologically realistic than the information needed by a connection in many other neural network training algorithms, such as backpropagation. The training of a Boltzmann machine does not use the EM algorithm, which is heavily used in machine learning. By minimizing the KL-divergence, it is equivalent to maximizing the log-likelihood of the data. Therefore, the training procedure performs gradient ascent on the log-likelihood of the observed data. This is in contrast to the EM algorithm, where the posterior distribution of the hidden nodes must be calculated before the maximization of the expected value of the complete data likelihood during the M-step. Training the biases is similar, but uses only single node activity: \frac{\partial G}{\partial \theta_{i}} = -\frac{1}{R}\left[p_{i}^{+} - p_{i}^{-}\right] Problems Theoretically the Boltzmann machine is a rather general computational medium.
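The two-phase weight update stated above reduces to comparing pair statistics between the clamped ("positive") and free-running ("negative") phases. The sketch below assumes those equilibrium statistics have already been estimated elsewhere (for example by Gibbs sampling in each phase) and only shows the parameter update itself.

    # Sketch of the Boltzmann machine parameter update implied by the gradient above:
    # delta w_ij is proportional to p_ij(+) - p_ij(-), and similarly for the biases.
    import numpy as np

    def update_parameters(W, theta, p_pair_plus, p_pair_minus, p_unit_plus, p_unit_minus, lr=0.01):
        W = W + lr * (p_pair_plus - p_pair_minus)           # pairwise co-activation, positive minus negative phase
        np.fill_diagonal(W, 0.0)                            # keep the no-self-connection convention
        theta = theta + lr * (p_unit_plus - p_unit_minus)   # single-unit activities drive the bias update
        return W, theta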
For instance, if trained on photographs, the machine would theoretically model the distribution of photographs, and could use that model to, for example, complete a partial photograph. Unfortunately, Boltzmann machines experience a serious practical problem, namely that they seem to stop learning correctly when the machine is scaled up to anything larger than a trivial size. This is due to important effects, specifically: the time required to collect equilibrium statistics grows exponentially with the machine's size, and with the magnitude of the connection strengths; connection strengths are more plastic when the connected units have activation probabilities intermediate between zero and one, leading to a so-called variance trap. The net effect is that noise causes the connection strengths to follow a random walk until the activities saturate. Types Restricted Boltzmann machine Although learning is impractical in general Boltzmann machines, it can be made quite efficient in a restricted Boltzmann machine (RBM), which does not allow intralayer connections among hidden units or among visible units, i.e. there are no visible-to-visible and no hidden-to-hidden connections. After training one RBM, the activities of its hidden units can be treated as data for training a higher-level RBM. This method of stacking RBMs makes it possible to train many layers of hidden units efficiently and is one of the most common deep learning strategies. As each new layer is added the generative model improves. An extension to the restricted Boltzmann machine allows using real valued data rather than binary data. One example of a practical RBM application is in speech recognition. Deep Boltzmann machine A deep Boltzmann machine (DBM) is a type of binary pairwise Markov random field (undirected probabilistic graphical model) with multiple layers of hidden random variables. It is a network of symmetrically coupled stochastic binary units. It comprises a set of visible units \nu and layers of hidden units h^{(1)}, h^{(2)}, h^{(3)}. No connection links units of the same layer (like RBM). For the DBM, the probability assigned to a vector \nu is p(\nu) = \frac{1}{Z} \sum_{h} e^{\sum_{ij} W_{ij}^{(1)} \nu_i h_j^{(1)} + \sum_{jl} W_{jl}^{(2)} h_j^{(1)} h_l^{(2)} + \sum_{lm} W_{lm}^{(3)} h_l^{(2)} h_m^{(3)}} where Z is the partition function, h = \{h^{(1)}, h^{(2)}, h^{(3)}\} are the set of hidden units, and \theta = \{W^{(1)}, W^{(2)}, W^{(3)}\} are the model parameters, representing visible-hidden and hidden-hidden interactions. In a DBN only the top two layers form a restricted Boltzmann machine (which is an undirected graphical model), while lower layers form a directed generative model. In a DBM all layers are symmetric and undirected. Like DBNs, DBMs can learn complex and abstract internal representations of the input in tasks such as object or speech recognition, using limited, labeled data to fine-tune the representations built using a large set of unlabeled sensory input data. However, unlike DBNs and deep convolutional neural networks, they pursue the inference and training procedure in both directions, bottom-up and top-down, which allows the DBM to better unveil the representations of the input structures. However, the slow speed of DBMs limits their performance and functionality. Because exact maximum likelihood learning is intractable for DBMs, only approximate maximum likelihood learning is possible. Another option is to use mean-field inference to estimate data-dependent expectations and approximate the expected sufficient statistics by using Markov chain Monte Carlo (MCMC). This approximate inference, which must be done for each test input, is about 25 to 50 times slower than a single bottom-up pass in DBMs.
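The practical appeal of the restricted architecture described above is that, with no intralayer connections, every hidden unit is conditionally independent given the visible layer (and vice versa), so a whole layer can be sampled in one step. The following sketch of block Gibbs sampling in an RBM uses assumed toy layer sizes and parameters.

    # Sketch of block Gibbs sampling in a restricted Boltzmann machine: with no
    # visible-visible or hidden-hidden connections, each layer is sampled in one shot.
    import numpy as np

    rng = np.random.default_rng(2)
    n_vis, n_hid = 6, 3
    W = rng.normal(scale=0.1, size=(n_vis, n_hid))   # visible-hidden weights (assumed toy values)
    b_vis = np.zeros(n_vis)
    b_hid = np.zeros(n_hid)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gibbs_step(v):
        h = (rng.random(n_hid) < sigmoid(v @ W + b_hid)).astype(float)    # sample all hidden units at once
        v = (rng.random(n_vis) < sigmoid(h @ W.T + b_vis)).astype(float)  # then all visible units at once
        return v, h

    v = rng.integers(0, 2, size=n_vis).astype(float)
    for _ in range(100):
        v, h = gibbs_step(v)
    print(v, h)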
This makes joint optimization impractical for large data sets, and restricts the use of DBMs for tasks such as feature representation. Spike-and-slab RBMs The need for deep learning with real-valued inputs, as in Gaussian RBMs, led to the spike-and-slab RBM (ssRBM), which models continuous-valued inputs with binary latent variables. Similar to basic RBMs and their variants, a spike-and-slab RBM is a bipartite graph, while, like GRBMs, the visible units (input) are real-valued. The difference is in the hidden layer, where each hidden unit has a binary spike variable and a real-valued slab variable. A spike is a discrete probability mass at zero, while a slab is a density over a continuous domain; their mixture forms a prior. An extension of ssRBM called μ-ssRBM provides extra modeling capacity using additional terms in the energy function. One of these terms enables the model to form a conditional distribution of the spike variables by marginalizing out the slab variables given an observation. In mathematics In a more general mathematical setting, the Boltzmann distribution is also known as the Gibbs measure. In statistics and machine learning it is called a log-linear model. In deep learning the Boltzmann distribution is used in the sampling distribution of stochastic neural networks such as the Boltzmann machine. History The Boltzmann machine is based on the Sherrington–Kirkpatrick spin glass model by David Sherrington and Scott Kirkpatrick. The seminal publication by John Hopfield (1982) applied methods of statistical mechanics, mainly the recently developed (1970s) theory of spin glasses, to study associative memory (later named the "Hopfield network"). The original contribution in applying such energy-based models in cognitive science appeared in papers by Geoffrey Hinton and Terry Sejnowski. In a 1995 interview, Hinton stated that in February or March 1983 he was going to give a talk on simulated annealing in Hopfield networks, so he had to design a learning algorithm for the talk, resulting in the Boltzmann machine learning algorithm. The idea of applying the Ising model with annealed Gibbs sampling was used in Douglas Hofstadter's Copycat project (1984). The explicit analogy drawn with statistical mechanics in the Boltzmann machine formulation led to the use of terminology borrowed from physics (e.g., "energy"), which became standard in the field. The widespread adoption of this terminology may have been encouraged by the fact that its use led to the adoption of a variety of concepts and methods from statistical mechanics. The various proposals to use simulated annealing for inference were apparently independent. Similar ideas (with a change of sign in the energy function) are found in Paul Smolensky's "Harmony Theory". Ising models can be generalized to Markov random fields, which find widespread application in linguistics, robotics, computer vision and artificial intelligence. In 2024, Hopfield and Hinton were awarded the Nobel Prize in Physics for their foundational contributions to machine learning, such as the Boltzmann machine. See also Restricted Boltzmann machine Helmholtz machine Markov random field (MRF) Ising model (Lenz–Ising model) Hopfield network References Further reading Kothari P (2020): https://www.forbes.com/sites/tomtaulli/2020/02/02/coronavirus-can-ai-artificial-intelligence-make-a-difference/?sh=1eca51e55817 External links Scholarpedia article by Hinton about Boltzmann machines Talk at Google by Geoffrey Hinton Neural network architectures Machine Mathematical physics
Boltzmann machine
[ "Physics", "Mathematics" ]
2,861
[ "Applied mathematics", "Theoretical physics", "Mathematical physics" ]
14,174,594
https://en.wikipedia.org/wiki/Manifold%20%28fluid%20mechanics%29
A manifold is a wider and/or larger pipe or channel, into which smaller pipes or channels lead, or a pipe fitting or similar device that connects multiple inputs or outputs for fluids. Manifolds Engineering Types of manifolds in engineering include: Exhaust manifold An engine part that collects the exhaust gases from multiple cylinders into one pipe. Also known as headers. Hydraulic manifold A component used to regulate fluid flow in a hydraulic system, thus controlling the transfer of power between actuators and pumps Inlet manifold (or "intake manifold") An engine part that supplies the air or fuel/air mixture to the cylinders Scuba manifold In a scuba set, connects two or more diving cylinders Vacuum gas manifold An apparatus used in chemistry to manipulate gases Also, many dredge pipe pieces. Biology In biology manifolds are found in: Cardiovascular system (blood vessel manifolds, etc.) Lymphatic system Respiratory system Other fields Manifolds are used in: HVAC Pipe organ Plumbing References Fluid mechanics
Manifold (fluid mechanics)
[ "Engineering" ]
198
[ "Civil engineering", "Fluid mechanics" ]
14,177,590
https://en.wikipedia.org/wiki/NCOA4
Nuclear receptor coactivator 4, also known as Androgen Receptor Activator (ARA70), is a protein that in humans is encoded by the NCOA4 gene. It plays an important role in ferritinophagy, acting as a cargo receptor, binding to the ferritin heavy chain and latching on to ATG8 on the surface of the autophagosome. Interactions NCOA4 has been shown to interact with: Androgen receptor, and Peroxisome proliferator-activated receptor gamma Ferritin ATG8 See also Transcription coregulator References Further reading External links Gene expression Transcription coregulators
NCOA4
[ "Chemistry", "Biology" ]
133
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
14,178,302
https://en.wikipedia.org/wiki/Bradykinin%20receptor%20B1
Bradykinin receptor B1 (B1) is a G-protein coupled receptor encoded by the BDKRB1 gene in humans. Its principal ligand is bradykinin, a 9-amino-acid peptide generated in pathophysiologic conditions such as inflammation, trauma, burns, shock, and allergy. The B1 receptor is one of two G protein-coupled receptors that have been found to bind bradykinin and mediate responses to these pathophysiologic conditions. The B1 protein is synthesized de novo following tissue injury, and receptor binding leads to an increase in the cytosolic calcium ion concentration, ultimately resulting in chronic and acute inflammatory responses. Classical agonists of this receptor include bradykinin1-8 (bradykinin comprising only the first 8 amino acids), and classical antagonists include [Leu8]-bradykinin1-8. Antagonists LF22-0542 See also Bradykinin receptor References External links Further reading G protein-coupled receptors
Bradykinin receptor B1
[ "Chemistry" ]
197
[ "G protein-coupled receptors", "Signal transduction" ]
14,179,010
https://en.wikipedia.org/wiki/EGR2
Early growth response protein 2 is a protein that in humans is encoded by the EGR2 gene. EGR2 (also termed Krox20) is a transcription regulatory factor, containing three zinc finger DNA-binding sites, and is highly expressed in a population of migrating neural crest cells. It is later expressed in the neural crest-derived cells of the cranial ganglia. The protein encoded by Krox20 contains two Cys2His2-type zinc fingers. Krox20 gene expression is restricted to early hindbrain development. It is evolutionarily conserved among vertebrates, including humans, mice, chicks, and zebrafish. In addition, the amino acid sequence and most aspects of the embryonic expression pattern are conserved among vertebrates, further implicating its role in hindbrain development. When Krox20 is deleted in mice, the protein-coding ability of the Krox20 gene (including the DNA-binding domain of the zinc finger) is diminished. These mice are unable to survive after birth and exhibit major hindbrain defects. These defects include, but are not limited to, defective formation of the cranial sensory ganglia, partial fusion of the trigeminal nerve (V) with the facial (VII) and auditory (VIII) nerves, proximal nerve roots that were disorganized and intertwined with one another as they entered the brainstem, and fusion of the glossopharyngeal (IX) nerve complex. Function The early growth response protein 2 is a transcription factor with three tandem C2H2-type zinc fingers. Mutations in this gene are associated with the autosomal dominant Charcot-Marie-Tooth disease, type 1D, Dejerine–Sottas disease, and Congenital Hypomyelinating Neuropathy. Two studies have linked EGR2 expression to proliferation of osteoprogenitors and cell lines derived from Ewing sarcoma, which is a highly aggressive bone-associated cancer. New research suggests that Krox20, or the lack of it, is the reason for male baldness. References Further reading External links GeneReviews/NCBI/NIH/UW entry on Charcot-Marie-Tooth Neuropathy Type 1 GeneReviews/NCBI/NIH/UW entry on Charcot-Marie-Tooth Neuropathy Type 4 Transcription factors
EGR2
[ "Chemistry", "Biology" ]
498
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,179,165
https://en.wikipedia.org/wiki/Cyclin%20B2
G2/mitotic-specific cyclin-B2 is a protein that in humans is encoded by the CCNB2 gene. Function Cyclin B2 is a member of the cyclin family, specifically the B-type cyclins. The B-type cyclins, B1 and B2, associate with p34cdc2 and are essential components of the cell cycle regulatory machinery. B1 and B2 differ in their subcellular localization. Cyclin B1 co-localizes with microtubules, whereas cyclin B2 is primarily associated with the Golgi region. Cyclin B2 also binds to transforming growth factor beta RII and thus cyclin B2/cdc2 may play a key role in transforming growth factor beta-mediated cell cycle control. Interactions Cyclin B2 has been shown to interact with TGF beta receptor 2. See also Cyclin B References Further reading Cell cycle regulators
Cyclin B2
[ "Chemistry" ]
197
[ "Cell cycle regulators", "Signal transduction" ]
14,187,112
https://en.wikipedia.org/wiki/Kinetic%20Dynamic%20Suspension%20System
The Kinetic Dynamic Suspension System (KDSS) technology was employed initially in the Lexus GX 470, and subsequently the 200 Series Toyota Land Cruiser. The system was invented and developed by Kinetic Pty Ltd, a small R&D company based in Dunsborough, Western Australia. It optimally adjusts front and rear stabilizers based on a set of interconnected hydraulic cylinders. The interconnection is made up of hydraulic piping and a control cylinder which is located at the frame rail. KDSS, which is fully mechanical, can disengage the stabilizer bars (the bars are jointed, allowing movement independent of one another). This system will not engage during normal driving conditions, when hydraulic pressure is equal. In off-road conditions, KDSS activates when it senses that a wheel has dropped. The Kinetic Dynamic Suspension System was first available as an option on the model year 2004 Lexus GX 470, a sport utility vehicle that was only sold in North America, and based roughly on the 120 Series Land Cruiser Prado. The system was also introduced in similar form on the 2008 Toyota Land Cruiser. For the 2008 Lexus LX 570, an electro-mechanical suspension was employed, retaining the function of the KDSS design but adding electronic components. For the 2025 model year, KDSS became a Lexus-exclusive feature. Toyota will instead use a Stabilizer with Disconnection Mechanism (SDM), only for the front sway bar, which debuted in the 2024 Land Cruiser. This is a conventional stabilizer bar with an electronic actuator directly mounted to the bar at the front axle, activated by a button inside the cab. Vehicles Models that have adopted the Kinetic Dynamic Suspension System to date include: 2010–2016 Toyota 4Runner Trail Edition 2017–2024 Toyota 4Runner TRD Off-Road 2008–2023 Toyota Land Cruiser 2004–present Lexus GX 2010–2023 Toyota Prado See also Toyota TEMS Citroën Hydractive Notes Lexus Toyota Automotive suspension technologies Automotive technology tradenames Vehicle safety technologies Auto parts Mechanical power control
Kinetic Dynamic Suspension System
[ "Physics" ]
432
[ "Mechanics", "Mechanical power control" ]
14,187,697
https://en.wikipedia.org/wiki/Semicircular%20potential%20well
In quantum mechanics, the case of a particle in a one-dimensional ring is similar to the particle in a box. The particle follows the path of a semicircle from \(0\) to \(\pi\) where it cannot escape, because the potential from \(\pi\) to \(2\pi\) is infinite. Instead there is total reflection, meaning the particle bounces back and forth between \(0\) and \(\pi\). The Schrödinger equation for a free particle which is restricted to a semicircle (technically, whose configuration space is the circle \(S^1\)) is \[ -\frac{\hbar^2}{2m}\nabla^2\psi = E\psi. \] Wave function Using cylindrical coordinates on the one-dimensional semicircle, the wave function depends only on the angular coordinate, and so \[ \nabla^2 = \frac{1}{r^2}\frac{\partial^2}{\partial\phi^2}. \] Substituting the Laplacian in cylindrical coordinates, the wave function is therefore expressed as \[ -\frac{\hbar^2}{2mr^2}\frac{d^2\psi}{d\phi^2} = E\psi. \] The moment of inertia for a semicircle, best expressed in cylindrical coordinates, is \(I = \iiint r^2\,\rho(r,\phi,z)\, r\,dr\,d\phi\,dz\). Solving the integral, one finds that the moment of inertia of a semicircle is \(I = mr^2\), exactly the same as for a hoop of the same radius. The wave function can now be expressed as \(-\frac{\hbar^2}{2I}\frac{d^2\psi}{d\phi^2} = E\psi\), which is easily solvable. Since the particle cannot escape the region from \(0\) to \(\pi\), the general solution to this differential equation is \[ \psi(\phi) = A\cos(m\phi) + B\sin(m\phi). \] Defining \(m = \sqrt{2IE}/\hbar\), we can calculate the energy as \(E = m^2\hbar^2/2I\). We then apply the boundary conditions, where \(\psi\) and \(d\psi/d\phi\) are continuous and the wave function is normalizable: \[ \psi(0) = \psi(\pi) = 0. \] Like the infinite square well, the first boundary condition demands that the wave function equals 0 at both \(\phi = 0\) and \(\phi = \pi\). Since the wave function \(\psi(0) = 0\), the coefficient \(A\) must equal 0 because \(\cos(0) = 1\). The wave function also equals 0 at \(\phi = \pi\), so we must apply this boundary condition. Discarding the trivial solution where \(B = 0\), the wave function \(\psi(\pi) = B\sin(m\pi) = 0\) only when \(m\) is an integer, since \(\sin(n\pi) = 0\) for integer \(n\). This boundary condition quantizes the energy, where the energy equals \[ E = \frac{m^2\hbar^2}{2I}, \] where \(m\) is any integer. The condition \(m = 0\) is ruled out because then \(\psi = 0\) everywhere, meaning that the particle is not in the potential at all. Negative integers are also ruled out, since they can easily be absorbed in the normalization condition. We then normalize the wave function, yielding a result where \(B = \sqrt{2/\pi}\). The normalized wave function is \[ \psi(\phi) = \sqrt{\frac{2}{\pi}}\sin(m\phi). \] The ground state energy of the system is \(E = \hbar^2/2I\). Like the particle in a box, there exist nodes in the excited states of the system where both \(\psi\) and \(\psi^2\) are 0, which means that the probability of finding the particle at these nodes is 0. Analysis Since the wave function is only dependent on the azimuthal angle \(\phi\), the measurable quantities of the system are the angular position and angular momentum, expressed with the operators \(\phi\) and \(L_z\) respectively. Using cylindrical coordinates, the operators \(\phi\) and \(L_z\) are expressed as \(\phi\) and \(-i\hbar\frac{d}{d\phi}\) respectively, where these observables play a role similar to position and momentum for the particle in a box. The commutation and uncertainty relations for angular position and angular momentum are given as follows: \[ [\phi, L_z] = i\hbar, \qquad \Delta\phi\,\Delta L_z \geq \frac{\hbar}{2}. \] Boundary conditions As with all quantum mechanics problems, if the boundary conditions are changed so does the wave function. If a particle is confined to the motion of an entire ring ranging from \(0\) to \(2\pi\), the particle is subject only to a periodic boundary condition (see particle in a ring). If a particle is confined to the motion of \(-\pi/2\) to \(\pi/2\), the issue of even and odd parity becomes important. The wave equation for such a potential is given as: \[ \psi_o(\phi) = \sqrt{\frac{2}{\pi}}\cos(m\phi), \qquad \psi_e(\phi) = \sqrt{\frac{2}{\pi}}\sin(m\phi), \] where \(\psi_o\) and \(\psi_e\) are for odd and even \(m\) respectively. Similarly, if the semicircular potential well is a finite well, the solution will resemble that of the finite potential well, where the angular operators \(\phi\) and \(L_z\) replace the linear operators \(x\) and \(p\). 
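As a quick numerical illustration of these results, the sketch below (assuming only NumPy and SciPy; the electron mass and a 1 Å radius are arbitrary example choices, not values from the text) checks that the eigenfunctions \(\psi_m(\phi)=\sqrt{2/\pi}\,\sin(m\phi)\) are normalized and mutually orthogonal on \([0,\pi]\), and evaluates the corresponding energies \(E_m = m^2\hbar^2/2I\).

```python
# Sketch: numerical check of the semicircular-well eigenfunctions and energies.
import numpy as np
from scipy.integrate import quad
from scipy.constants import hbar, m_e

r = 1e-10          # assumed radius: 1 angstrom (arbitrary example)
I = m_e * r**2     # moment of inertia, I = m r^2, as derived above

def psi(m, phi):
    """Normalized eigenfunction on [0, pi]."""
    return np.sqrt(2.0 / np.pi) * np.sin(m * phi)

# Normalization: integral of |psi_m|^2 over [0, pi] should equal 1.
for m in (1, 2, 3):
    norm, _ = quad(lambda phi: psi(m, phi) ** 2, 0.0, np.pi)
    print(f"m={m}: normalization = {norm:.6f}")

# Orthogonality: integral of psi_1 * psi_2 over [0, pi] should be 0.
overlap, _ = quad(lambda phi: psi(1, phi) * psi(2, phi), 0.0, np.pi)
print(f"<psi_1|psi_2> = {overlap:.2e}")

# Quantized energies E_m = m^2 hbar^2 / (2 I).
for m in (1, 2, 3):
    E = m**2 * hbar**2 / (2.0 * I)
    print(f"E_{m} = {E:.3e} J")
```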
See also Particle in a ring Particle in a box Finite potential well Delta function potential Gas in a box Particle in a spherically symmetric potential Quantum models Quantum mechanical potentials
Semicircular potential well
[ "Physics" ]
710
[ "Quantum models", "Quantum mechanical potentials", "Quantum mechanics" ]
96,558
https://en.wikipedia.org/wiki/Maxwell%27s%20demon
Maxwell's demon is a thought experiment that appears to disprove the second law of thermodynamics. It was proposed by the physicist James Clerk Maxwell in 1867. In his first letter, Maxwell referred to the entity as a "finite being" or a "being who can play a game of skill with the molecules". Lord Kelvin would later call it a "demon". In the thought experiment, a demon controls a door between two chambers containing gas. As individual gas molecules (or atoms) approach the door, the demon quickly opens and closes the door to allow only fast-moving molecules to pass through in one direction, and only slow-moving molecules to pass through in the other. Because the kinetic temperature of a gas depends on the velocities of its constituent molecules, the demon's actions cause one chamber to warm up and the other to cool down. This would decrease the total entropy of the system, seemingly without applying any work, thereby violating the second law of thermodynamics. The concept of Maxwell's demon has provoked substantial debate in the philosophy of science and theoretical physics, which continues to the present day. It stimulated work on the relationship between thermodynamics and information theory. Most scientists argue that, on theoretical grounds, no practical device can violate the second law in this way. Other researchers have implemented forms of Maxwell's demon in experiments, though they all differ from the thought experiment to some extent and none has been shown to violate the second law. Origin and history of the idea The thought experiment first appeared in a letter Maxwell wrote to Peter Guthrie Tait on 11 December 1867. It appeared again in a letter to John William Strutt in 1871, before it was presented to the public in Maxwell's 1872 book on thermodynamics titled Theory of Heat. In his letters and books, Maxwell described the agent opening the door between the chambers as a "finite being". Being a deeply religious man, he never used the word "demon". Instead, William Thomson (Lord Kelvin) was the first to use it for Maxwell's concept, in the journal Nature in 1874, and implied that he intended the Greek mythology interpretation of a daemon, a supernatural being working in the background, rather than a malevolent being. Original thought experiment The second law of thermodynamics ensures (through statistical probability) that two bodies of different temperature, when brought into contact with each other and isolated from the rest of the Universe, will evolve to a thermodynamic equilibrium in which both bodies have approximately the same temperature. The second law is also expressed as the assertion that in an isolated system, entropy never decreases. Maxwell conceived a thought experiment as a way of furthering the understanding of the second law. His description of the experiment is as follows: In other words, Maxwell imagines one container divided into two parts, A and B. Both parts are filled with the same gas at equal temperatures and placed next to each other. Observing the molecules on both sides, an imaginary demon guards a trapdoor between the two parts. When a faster-than-average molecule from A flies towards the trapdoor, the demon opens it, and the molecule will fly from A to B. Likewise, when a slower-than-average molecule from B flies towards the trapdoor, the demon will let it pass from B to A. The average speed of the molecules in B will have increased while in A they will have slowed down on average. 
Since average molecular speed corresponds to temperature, the temperature decreases in A and increases in B, contrary to the second law of thermodynamics. A heat engine operating between the thermal reservoirs A and B could extract useful work from this temperature difference. The demon must allow molecules to pass in both directions in order to produce only a temperature difference; one-way passage only of faster-than-average molecules from A to B will cause higher temperature and pressure to develop on the B side. Criticism and development Several physicists have presented calculations that show that the second law of thermodynamics will not actually be violated, if a more complete analysis is made of the whole system including the demon. The essence of the physical argument is to show, by calculation, that any demon must "generate" more entropy segregating the molecules than it could ever eliminate by the method described. That is, it would take more thermodynamic work to gauge the speed of the molecules and selectively allow them to pass through the opening between A and B than the amount of energy gained by the difference of temperature caused by doing so. One of the most famous responses to this question was suggested in 1929 by Leó Szilárd, and later by Léon Brillouin. Szilárd pointed out that a real-life Maxwell's demon would need to have some means of measuring molecular speed, and that the act of acquiring information would require an expenditure of energy. Since the demon and the gas are interacting, we must consider the total entropy of the gas and the demon combined. The expenditure of energy by the demon will cause an increase in the entropy of the demon, which will be larger than the lowering of the entropy of the gas. In 1960, Rolf Landauer raised an exception to this argument. He realized that some measuring processes need not increase thermodynamic entropy as long as they were thermodynamically reversible. He suggested these "reversible" measurements could be used to sort the molecules, violating the Second Law. However, due to the connection between entropy in thermodynamics and information theory, this also meant that the recorded measurement must not be erased. In other words, to determine whether to let a molecule through, the demon must acquire information about the state of the molecule and either discard it or store it. Discarding it leads to immediate increase in entropy, but the demon cannot store it indefinitely. In 1982, Charles Bennett showed that, however well prepared, eventually the demon will run out of information storage space and must begin to erase the information it has previously gathered. Erasing information is a thermodynamically irreversible process that increases the entropy of a system. Although Bennett had reached the same conclusion as Szilard's 1929 paper, that a Maxwellian demon could not violate the second law because entropy would be created, he had reached it for different reasons. Regarding Landauer's principle, the minimum energy dissipated by deleting information was experimentally measured by Eric Lutz et al. in 2012. Furthermore, Lutz et al. confirmed that in order to approach the Landauer's limit, the system must asymptotically approach zero processing speed. Recently, Landauer's principle has also been invoked to resolve an apparently unrelated paradox of statistical physics, Loschmidt’s paradox. John Earman and John D. 
Norton have argued that Szilárd and Landauer's explanations of Maxwell's demon begin by assuming that the second law of thermodynamics cannot be violated by the demon, and derive further properties of the demon from this assumption, including the necessity of consuming energy when erasing information, etc. It would therefore be circular to invoke these derived properties to defend the second law from the demonic argument. Bennett later acknowledged the validity of Earman and Norton's argument, while maintaining that Landauer's principle explains the mechanism by which real systems do not violate the second law of thermodynamics. Recent progress Although the argument by Landauer and Bennett only answers the consistency between the second law of thermodynamics and the whole cyclic process of the entire system of a Szilard engine (a composite system of the engine and the demon), a recent approach based on the non-equilibrium thermodynamics for small fluctuating systems has provided deeper insight on each information process with each subsystem. From this viewpoint, the measurement process is regarded as a process where the correlation (mutual information) between the engine and the demon increases, decreasing the entropy of the system in an amount given by the mutual information. If the correlation changes, thermodynamic relations such as the second law of thermodynamics and the fluctuation theorem for each subsystem should be modified, and for the case of external control a second-law like inequality and a generalized fluctuation theorem with mutual information are satisfied. For more general information processes including biological information processing, both inequality and equality with mutual information hold. When repeated measurements are performed, the entropy reduction of the system is given by the entropy of the sequence of measurements, which takes into account the reduction of information due to the correlation between the measurements. Applications Real-life versions of Maxwellian demons occur, but all such "real demons" or molecular demons have their entropy-lowering effects duly balanced by increase of entropy elsewhere. Molecular-sized mechanisms are no longer found only in biology; they are also the subject of the emerging field of nanotechnology. Single-atom traps used by particle physicists allow an experimenter to control the state of individual quanta in a way similar to Maxwell's demon. If hypothetical mirror matter exists, Zurab Silagadze proposes that demons can be envisaged, "which can act like perpetuum mobiles of the second kind: extract heat energy from only one reservoir, use it to do work and be isolated from the rest of ordinary world. Yet the Second Law is not violated because the demons pay their entropy cost in the hidden (mirror) sector of the world by emitting mirror photons." Experimental work In 2007, David Leigh announced the creation of a nano-device based on the Brownian ratchet popularized by Richard Feynman. Leigh's device is able to drive a chemical system out of equilibrium, but it must be powered by an external source (light in this case) and therefore does not violate thermodynamics. Previously, researchers including Nobel Prize winner Fraser Stoddart had created ring-shaped molecules called rotaxanes which could be placed on an axle connecting two sites, A and B. Particles from either site would bump into the ring and move it from end to end. 
If a large collection of these devices were placed in a system, half of the devices had the ring at site A and half at B, at any given moment in time. Leigh made a minor change to the axle so that if a light is shone on the device, the center of the axle will thicken, restricting the motion of the ring. It keeps the ring from moving, however, only if it is at A. Over time, therefore, the rings will be bumped from B to A and get stuck there, creating an imbalance in the system. In his experiments, Leigh was able to take a pot of "billions of these devices" from 50:50 equilibrium to a 70:30 imbalance within a few minutes. In 2009, Mark G. Raizen developed a laser atomic cooling technique which realizes the process Maxwell envisioned of sorting individual atoms in a gas into different containers based on their energy. The new concept is a one-way wall for atoms or molecules that allows them to move in one direction, but not go back. The operation of the one-way wall relies on an irreversible atomic and molecular process of absorption of a photon at a specific wavelength, followed by spontaneous emission to a different internal state. The irreversible process is coupled to a conservative force created by magnetic fields and/or light. Raizen and collaborators proposed using the one-way wall in order to reduce the entropy of an ensemble of atoms. In parallel, Gonzalo Muga and Andreas Ruschhaupt independently developed a similar concept. Their "atom diode" was not proposed for cooling, but rather for regulating the flow of atoms. The Raizen Group demonstrated significant cooling of atoms with the one-way wall in a series of experiments in 2008. Subsequently, the operation of a one-way wall for atoms was demonstrated by Daniel Steck and collaborators later in 2008. Their experiment was based on the 2005 scheme for the one-way wall, and was not used for cooling. The cooling method realized by the Raizen Group was called "single-photon cooling", because only one photon on average is required in order to bring an atom to near-rest. This is in contrast to other laser cooling techniques which use the momentum of the photon and require a two-level cycling transition. In 2006, Raizen, Muga, and Ruschhaupt showed in a theoretical paper that as each atom crosses the one-way wall, it scatters one photon, and information is provided about the turning point and hence the energy of that particle. The entropy increase of the radiation field scattered from a directional laser into a random direction is exactly balanced by the entropy reduction of the atoms as they are trapped by the one-way wall. This technique is widely described as a "Maxwell's demon" because it realizes Maxwell's process of creating a temperature difference by sorting high and low energy atoms into different containers. However, scientists have pointed out that it does not violate the second law of thermodynamics, does not result in a net decrease in entropy, and cannot be used to produce useful energy. This is because the process requires more energy from the laser beams than could be produced by the temperature difference generated. The atoms absorb low entropy photons from the laser beam and emit them in a random direction, thus increasing the entropy of the environment. In 2014, Pekola et al. demonstrated an experimental realization of a Szilárd engine. 
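Landauer's principle, invoked above in the discussion of information erasure, is usually stated as a minimum dissipation of k_B·T·ln 2 of heat per erased bit. The short sketch below (a back-of-the-envelope calculation using only the Boltzmann constant; the temperatures are arbitrary example values) puts numbers on that bound.

```python
# Sketch: the Landauer bound k_B * T * ln(2), the minimum heat dissipated
# per bit of information erased, at a few example temperatures.
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

for T in (300.0, 77.0, 4.2):  # room temperature, liquid nitrogen, liquid helium (K)
    E_bit = k_B * T * math.log(2)
    print(f"T = {T:6.1f} K : k_B T ln 2 = {E_bit:.3e} J per erased bit")
```

At room temperature this comes to roughly 3 × 10−21 J per erased bit.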
Only a year later and based on an earlier theoretical proposal, the same group presented the first experimental realization of an autonomous Maxwell's demon, which extracts microscopic information from a system and reduces its entropy by applying feedback. The demon is based on two capacitively coupled single-electron devices, both integrated on the same electronic circuit. The operation of the demon is directly observed as a temperature drop in the system, with a simultaneous temperature rise in the demon arising from the thermodynamic cost of generating the mutual information. In 2016, Pekola et al. demonstrated a proof-of-principle of an autonomous demon in coupled single-electron circuits, showing a way to cool critical elements in a circuit with information as a fuel. Pekola et al. have also proposed that a simple qubit circuit, e.g., made of a superconducting circuit, could provide a basis to study a quantum Szilard's engine. As metaphor Daemons in computing, generally processes that run on servers to respond to users, are named for Maxwell's demon. Historian Henry Brooks Adams, in his manuscript The Rule of Phase Applied to History, attempted to use Maxwell's demon as a historical metaphor, though he misunderstood and misapplied the original principle. Adams interpreted history as a process moving towards "equilibrium", but he saw militaristic nations (he felt Germany pre-eminent in this class) as tending to reverse this process, a Maxwell's demon of history. Adams made many attempts to respond to the criticism of his formulation from his scientific colleagues, but the work remained incomplete at his death in 1918 and was published posthumously. See also Brownian ratchet Catalysis Chance and Necessity Dispersive mass transfer Entropy in thermodynamics and information theory Evaporation Gibbs paradox Hall effect Heisenberg's uncertainty principle Joule–Thomson effect Laplace's demon Laws of thermodynamics Mass spectrometry Photoelectric effect Quantum tunnelling Schrödinger's cat Second law of thermodynamics Thermionic emission Vortex tube Notes References External links How Maxwell's Demon Continues to Startle Scientists Bennett, C. H. (1987) "Demons, Engines and the Second Law", Scientific American, November, pp108-116 Maroney, O. J. E. (2009) ""Information Processing and Thermodynamic Entropy" The Stanford Encyclopedia of Philosophy (Autumn 2009 Edition) , reprinted (2001) New York: Dover, Raizen, Mark G. (2011) "Demons, Entropy, and the Quest for Absolute Zero", Scientific American, March, pp54-59 Reaney, Patricia. "Scientists build nanomachine", Reuters, February 1, 2007 Rubi, J Miguel, "Does Nature Break the Second Law of Thermodynamics?"; Scientific American, October 2008 : Splasho (2008) – Historical development of Maxwell's demon Weiss, Peter. "Breaking the Law – Can quantum mechanics + thermodynamics = perpetual motion?", Science News, October 7, 2000 1867 introductions Fictional demons James Clerk Maxwell Nanotechnology Perpetual motion Philosophy of thermal and statistical physics Thought experiments in physics
Maxwell's demon
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,450
[ "Philosophy of thermal and statistical physics", "Materials science", "Thermodynamics", "Nanotechnology", "Statistical mechanics" ]
96,842
https://en.wikipedia.org/wiki/Cytochrome%20c%20oxidase
The enzyme cytochrome c oxidase or Complex IV (formerly EC 1.9.3.1, now reclassified as a translocase, EC 7.1.1.9) is a large transmembrane protein complex found in bacteria, archaea, and the mitochondria of eukaryotes. It is the last enzyme in the respiratory electron transport chain of cells, located in the membrane. It receives an electron from each of four cytochrome c molecules and transfers them to one oxygen molecule and four protons, producing two molecules of water. In addition to binding the four protons from the inner aqueous phase, it transports another four protons across the membrane, increasing the transmembrane difference of proton electrochemical potential, which the ATP synthase then uses to synthesize ATP. Structure The complex The complex is a large integral membrane protein composed of several metal prosthetic sites and 13 protein subunits in mammals. In mammals, ten subunits are nuclear in origin, and three are synthesized in the mitochondria. The complex contains two hemes, a cytochrome a and cytochrome a3, and two copper centers, the CuA and CuB centers. In fact, the cytochrome a3 and CuB form a binuclear center that is the site of oxygen reduction. Cytochrome c, which is reduced by the preceding component of the respiratory chain (cytochrome bc1 complex, Complex III), docks near the CuA binuclear center and passes an electron to it, being oxidized back to cytochrome c containing Fe3+. The reduced CuA binuclear center now passes an electron on to cytochrome a, which in turn passes an electron on to the cytochrome a3–CuB binuclear center. The two metal ions in this binuclear center are 4.5 Å apart and coordinate a hydroxide ion in the fully oxidized state. Crystallographic studies of cytochrome c oxidase show an unusual post-translational modification, linking C6 of Tyr(244) and the ε-N of His(240) (bovine enzyme numbering). It plays a vital role in enabling the cytochrome a3–CuB binuclear center to accept four electrons in reducing molecular oxygen and four protons to water. The mechanism of reduction was formerly thought to involve a peroxide intermediate, which was believed to lead to superoxide production. However, the currently accepted mechanism involves a rapid four-electron reduction involving immediate oxygen–oxygen bond cleavage, avoiding any intermediate likely to form superoxide. The conserved subunits Assembly COX assembly in yeast is a complex process that is not entirely understood due to the rapid and irreversible aggregation of hydrophobic subunits that form the holoenzyme complex, as well as aggregation of mutant subunits with exposed hydrophobic patches. COX subunits are encoded in both the nuclear and mitochondrial genomes. The three subunits that form the COX catalytic core are encoded in the mitochondrial genome. Over 30 different nuclear-encoded chaperone proteins are required for COX assembly. Cofactors, including hemes, are inserted into subunits I & II. The two heme molecules reside in subunit I, helping with transport to subunit II where two copper molecules aid with the continued transfer of electrons. Subunits I and IV initiate assembly. Different subunits may associate to form sub-complex intermediates that later bind to other subunits to form the COX complex. In post-assembly modifications, COX will form a homodimer. This is required for activity. Dimers are connected by a cardiolipin molecule, which has been found to play a key role in stabilization of the holoenzyme complex. 
The dissociation of subunits VIIa and III in conjunction with the removal of cardiolipin results in total loss of enzyme activity. Subunits encoded in the nuclear genome are known to play a role in enzyme dimerization and stability. Mutations to these subunits eliminate COX function. Assembly is known to occur in at least three distinct rate-determining steps. The products of these steps have been found, though specific subunit compositions have not been determined. Synthesis and assembly of COX subunits I, II, and III are facilitated by translational activators, which interact with the 5' untranslated regions of mitochondrial mRNA transcripts. Translational activators are encoded in the nucleus. They can operate through either direct or indirect interaction with other components of translation machinery, but exact molecular mechanisms are unclear due to difficulties associated with synthesizing translation machinery in vitro. Though the interactions between subunits I, II, and III encoded within the mitochondrial genome make a lesser contribution to enzyme stability than interactions between bigenomic subunits, these subunits are more conserved, indicating potential unexplored roles for enzyme activity. Biochemistry The overall reaction is 4 Fe2+-cytochrome c + 4 H+ + O2 → 4 Fe3+-cytochrome c + 2 H2O (ΔG' = −218 kJ/mol, E' = +565 mV). Two electrons are passed from two cytochrome c's, through the CuA and cytochrome a sites to the cytochrome a3–CuB binuclear center, reducing the metals to the Fe2+ form and Cu+. The hydroxide ligand is protonated and lost as water, creating a void between the metals that is filled by O2. The oxygen is rapidly reduced, with two electrons coming from the Fe2+-cytochrome a3, which is converted to the ferryl oxo form (Fe4+=O). The oxygen atom close to CuB picks up one electron from Cu+, and a second electron and a proton from the hydroxyl of Tyr(244), which becomes a tyrosyl radical. The second oxygen is converted to a hydroxide ion by picking up two electrons and a proton. A third electron from another cytochrome c is passed through the first two electron carriers to the cytochrome a3–CuB binuclear center, and this electron and two protons convert the tyrosyl radical back to Tyr, and the hydroxide bound to CuB to a water molecule. The fourth electron from another cytochrome c flows through CuA and cytochrome a to the cytochrome a3–CuB binuclear center, reducing the Fe4+=O to Fe3+, with the oxygen atom picking up a proton simultaneously, regenerating this oxygen as a hydroxide ion coordinated in the middle of the cytochrome a3–CuB center as it was at the start of this cycle. Overall, four reduced cytochrome c's are oxidized while O2 and four protons are reduced to two water molecules. Inhibition COX exists in three conformational states: fully oxidized (pulsed), partially reduced, and fully reduced. Each inhibitor has a high affinity to a different state. In the pulsed state, both the heme a3 and the CuB nuclear centers are oxidized; this is the conformation of the enzyme that has the highest activity. A two-electron reduction initiates a conformational change that allows oxygen to bind at the active site to the partially-reduced enzyme. Four electrons bind to COX to fully reduce the enzyme. Its fully reduced state, which consists of a reduced Fe2+ at the cytochrome a3 heme group and a reduced CuB binuclear center, is considered the inactive or resting state of the enzyme. 
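The free-energy change quoted for the overall reaction above follows from the standard relation ΔG' = −nFΔE' with n = 4 electrons transferred per O2 reduced (one from each of the four cytochrome c molecules). The snippet below is a quick sanity check of that arithmetic (a sketch; the only inputs are the Faraday constant and the values quoted in the text).

```python
# Sketch: check that Delta G = -n * F * Delta E reproduces the quoted -218 kJ/mol
# for cytochrome c oxidase, using n = 4 electrons per O2 and Delta E = +565 mV.
F = 96485.332      # Faraday constant, C/mol
n = 4              # electrons transferred per O2 reduced
delta_E = 0.565    # V, from the text

delta_G = -n * F * delta_E          # J/mol
print(f"Delta G' = {delta_G / 1000:.1f} kJ/mol")   # approx. -218.1 kJ/mol
```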
Cyanide, azide, and carbon monoxide all bind to cytochrome c oxidase, inhibiting the protein from functioning and leading to the chemical asphyxiation of cells. Higher concentrations of molecular oxygen are needed to compensate for increasing inhibitor concentrations, leading to an overall decrease in metabolic activity in the cell in the presence of an inhibitor. Other ligands, such as nitric oxide and hydrogen sulfide, can also inhibit COX by binding to regulatory sites on the enzyme, reducing the rate of cellular respiration. Cyanide is a non-competitive inhibitor for COX, binding with high affinity to the partially-reduced state of the enzyme and hindering further reduction of the enzyme. In the pulsed state, cyanide binds slowly, but with high affinity. The ligand is posited to electrostatically stabilize both metals at once by positioning itself between them. A high nitric oxide concentration, such as one added exogenously to the enzyme, reverses cyanide inhibition of COX. Nitric oxide can reversibly bind to either metal ion in the binuclear center to be oxidized to nitrite. NO and CN will compete with oxygen to bind at the site, reducing the rate of cellular respiration. Endogenous NO, however, which is produced at lower levels, augments CN inhibition. Higher levels of NO, which correlate with the existence of more enzyme in the reduced state, lead to a greater inhibition of cyanide. At these basal concentrations, NO inhibition of Complex IV is known to have beneficial effects, such as increasing oxygen levels in blood vessel tissues. The inability of the enzyme to reduce oxygen to water results in a buildup of oxygen, which can diffuse deeper into surrounding tissues. NO inhibition of Complex IV has a larger effect at lower oxygen concentrations, increasing its utility as a vasodilator in tissues of need. Hydrogen sulfide will bind COX in a noncompetitive fashion at a regulatory site on the enzyme, similar to carbon monoxide. Sulfide has the highest affinity to either the pulsed or partially reduced states of the enzyme, and is capable of partially reducing the enzyme at the heme a center. It is unclear whether endogenous HS levels are sufficient to inhibit the enzyme. There is no interaction between hydrogen sulfide and the fully reduced conformation of COX. Methanol in methylated spirits is converted into formic acid, which also inhibits the same oxidase system. High levels of ATP can allosterically inhibit cytochrome c oxidase, binding from within the mitochondrial matrix. Extramitochondrial and subcellular localizations Cytochrome c oxidase has 3 subunits which are encoded by mitochondrial DNA (cytochrome c oxidase subunit I, subunit II, and subunit III). Of these 3 subunits encoded by mitochondrial DNA, two have been identified in extramitochondrial locations. In pancreatic acinar tissue, these subunits were found in zymogen granules. Additionally, in the anterior pituitary, relatively high amounts of these subunits were found in growth hormone secretory granules. The extramitochondrial function of these cytochrome c oxidase subunits has not yet been characterized. Besides cytochrome c oxidase subunits, extramitochondrial localization has also been observed for large numbers of other mitochondrial proteins. This raises the possibility about existence of yet unidentified specific mechanisms for protein translocation from mitochondria to other cellular destinations. 
Genetic defects and disorders Defects involving genetic mutations altering cytochrome c oxidase (COX) functionality or structure can result in severe, often fatal metabolic disorders. Such disorders usually manifest in early childhood and affect predominantly tissues with high energy demands (brain, heart, muscle). Among the many classified mitochondrial diseases, those involving dysfunctional COX assembly are thought to be the most severe. The vast majority of COX disorders are linked to mutations in nuclear-encoded proteins referred to as assembly factors, or assembly proteins. These assembly factors contribute to COX structure and functionality, and are involved in several essential processes, including transcription and translation of mitochondrion-encoded subunits, processing of preproteins and membrane insertion, and cofactor biosynthesis and incorporation. Currently, mutations have been identified in seven COX assembly factors: SURF1, SCO1, SCO2, COX10, COX15, COX20, COA5 and LRPPRC. Mutations in these proteins can result in altered functionality of sub-complex assembly, copper transport, or translational regulation. Each gene mutation is associated with the etiology of a specific disease, with some having implications in multiple disorders. Disorders involving dysfunctional COX assembly via gene mutations include Leigh syndrome, cardiomyopathy, leukodystrophy, anemia, and sensorineural deafness. Histochemistry The increased reliance of neurons on oxidative phosphorylation for energy facilitates the use of COX histochemistry in mapping regional brain metabolism in animals, since it establishes a direct and positive correlation between enzyme activity and neuronal activity. This can be seen in the correlation between COX enzyme amount and activity, which indicates the regulation of COX at the level of gene expression. COX distribution is inconsistent across different regions of the animal brain, but its pattern of its distribution is consistent across animals. This pattern has been observed in the monkey, mouse, and calf brain. One isozyme of COX has been consistently detected in histochemical analysis of the brain. Such brain mapping has been accomplished in spontaneous mutant mice with cerebellar disease such as reeler and a transgenic model of Alzheimer's disease. This technique has also been used to map learning activity in the animal brain. Additional images See also Cytochrome c oxidase subunit I Cytochrome c oxidase subunit II Cytochrome c oxidase subunit III Heme a References External links The Cytochrome Oxidase home page at Rice University Interactive Molecular model of cytochrome c oxidase (Requires MDL Chime) Cellular respiration EC 1.9.3 Hemoproteins Integral membrane proteins Copper enzymes
Cytochrome c oxidase
[ "Chemistry", "Biology" ]
2,790
[ "Biochemistry", "Cellular respiration", "Metabolism" ]
96,910
https://en.wikipedia.org/wiki/Respiratory%20complex%20I
Respiratory complex I, (also known as NADH:ubiquinone oxidoreductase, Type I NADH dehydrogenase and mitochondrial complex I) is the first large protein complex of the respiratory chains of many organisms from bacteria to humans. It catalyzes the transfer of electrons from NADH to coenzyme Q10 (CoQ10) and translocates protons across the inner mitochondrial membrane in eukaryotes or the plasma membrane of bacteria. This enzyme is essential for the normal functioning of cells, and mutations in its subunits lead to a wide range of inherited neuromuscular and metabolic disorders. Defects in this enzyme are responsible for the development of several pathological processes such as ischemia/reperfusion damage (stroke and cardiac infarction), Parkinson's disease and others. Function Complex I is the first enzyme of the mitochondrial electron transport chain. There are three energy-transducing enzymes in the electron transport chain - NADH:ubiquinone oxidoreductase (complex I), Coenzyme Q – cytochrome c reductase (complex III), and cytochrome c oxidase (complex IV). Complex I is the largest and most complicated enzyme of the electron transport chain. The reaction catalyzed by complex I is: NADH + H+ + CoQ + 4H+in→ NAD+ + CoQH2 + 4H+out In this process, the complex translocates four protons across the inner membrane per molecule of oxidized NADH, helping to build the electrochemical potential difference used to produce ATP. Escherichia coli complex I (NADH dehydrogenase) is capable of proton translocation in the same direction to the established Δψ, showing that in the tested conditions, the coupling ion is H+. Na+ transport in the opposite direction was observed, and although Na+ was not necessary for the catalytic or proton transport activities, its presence increased the latter. H+ was translocated by the Paracoccus denitrificans complex I, but in this case, H+ transport was not influenced by Na+, and Na+ transport was not observed. Possibly, the E. coli complex I has two energy coupling sites (one Na+ independent and the other Na+dependent), as observed for the Rhodothermus marinus complex I, whereas the coupling mechanism of the P. denitrificans enzyme is completely Na+ independent. It is also possible that another transporter catalyzes the uptake of Na+. Complex I energy transduction by proton pumping may not be exclusive to the R. marinus enzyme. The Na+/H+ antiport activity seems not to be a general property of complex I. However, the existence of Na+-translocating activity of the complex I is still in question. The reaction can be reversed – referred to as aerobic succinate-supported NAD+ reduction by ubiquinol – in the presence of a high membrane potential, but the exact catalytic mechanism remains unknown. Driving force of this reaction is a potential across the membrane which can be maintained either by ATP-hydrolysis or by complexes III and IV during succinate oxidation. Complex I may have a role in triggering apoptosis. In fact, there has been shown to be a correlation between mitochondrial activities and programmed cell death (PCD) during somatic embryo development. Complex I is not homologous to Na+-translocating NADH Dehydrogenase (NDH) Family (TC# 3.D.1), a member of the Na+ transporting Mrp superfamily. As a result of a two NADH molecule being oxidized to NAD+, three molecules of ATP can be produced by Complex V (ATP synthase) downstream in the respiratory chain. Mechanism Overall mechanism All redox reactions take place in the hydrophilic domain of complex I. 
NADH initially binds to complex I, and transfers two electrons to the flavin mononucleotide (FMN) prosthetic group of the enzyme, creating FMNH2. The electron acceptor – the isoalloxazine ring – of FMN is identical to that of FAD. The electrons are then transferred through the FMN via a series of iron-sulfur (Fe-S) clusters, and finally to coenzyme Q10 (ubiquinone). This electron flow changes the redox state of the protein, inducing conformational changes of the protein which alters the pK values of ionizable side chain, and causes four hydrogen ions to be pumped out of the mitochondrial matrix. Ubiquinone (CoQ) accepts two electrons to be reduced to ubiquinol (CoQH2). Electron transfer mechanism The proposed pathway for electron transport prior to ubiquinone reduction is as follows: NADH – FMN – N3 – N1b – N4 – N5 – N6a – N6b – N2 – Q, where Nx is a labelling convention for iron sulfur clusters. The high reduction potential of the N2 cluster and the relative proximity of the other clusters in the chain enable efficient electron transfer over long distance in the protein (with transfer rates from NADH to N2 iron-sulfur cluster of about 100 μs). The equilibrium dynamics of Complex I are primarily driven by the quinone redox cycle. In conditions of high proton motive force (and accordingly, a ubiquinol-concentrated pool), the enzyme runs in the reverse direction. Ubiquinol is oxidized to ubiquinone, and the resulting released protons reduce the proton motive force. Proton translocation mechanism The coupling of proton translocation and electron transport in Complex I is currently proposed as being indirect (long range conformational changes) as opposed to direct (redox intermediates in the hydrogen pumps as in heme groups of Complexes III and IV). The architecture of the hydrophobic region of complex I shows multiple proton transporters that are mechanically interlinked. The three central components believed to contribute to this long-range conformational change event are the pH-coupled N2 iron-sulfur cluster, the quinone reduction, and the transmembrane helix subunits of the membrane arm. Transduction of conformational changes to drive the transmembrane transporters linked by a 'connecting rod' during the reduction of ubiquinone can account for two or three of the four protons pumped per NADH oxidized. The remaining proton must be pumped by direct coupling at the ubiquinone-binding site. It is proposed that direct and indirect coupling mechanisms account for the pumping of the four protons. The N2 cluster's proximity to a nearby cysteine residue results in a conformational change upon reduction in the nearby helices, leading to small but important changes in the overall protein conformation. Further electron paramagnetic resonance studies of the electron transfer have demonstrated that most of the energy that is released during the subsequent CoQ reduction is on the final ubiquinol formation step from semiquinone, providing evidence for the "single stroke" H+ translocation mechanism (i.e. all four protons move across the membrane at the same time). Alternative theories suggest a "two stroke mechanism" where each reduction step (semiquinone and ubiquinol) results in a stroke of two protons entering the intermembrane space. The resulting ubiquinol localized to the membrane domain interacts with negatively charged residues in the membrane arm, stabilizing conformational changes. 
An antiporter mechanism (Na+/H+ swap) has been proposed using evidence of conserved Asp residues in the membrane arm. The presence of Lys, Glu, and His residues enable for proton gating (a protonation followed by deprotonation event across the membrane) driven by the pKa of the residues. Composition and structure NADH:ubiquinone oxidoreductase is the largest of the respiratory complexes. In mammals, the enzyme contains 44 separate water-soluble peripheral membrane proteins, which are anchored to the integral membrane constituents. Of particular functional importance are the flavin prosthetic group (FMN) and eight iron-sulfur clusters (FeS). Of the 44 subunits, seven are encoded by the mitochondrial genome. The structure is an "L" shape with a long membrane domain (with around 60 trans-membrane helices) and a hydrophilic (or peripheral) domain, which includes all the known redox centres and the NADH binding site. All thirteen of the E. coli proteins, which comprise NADH dehydrogenase I, are encoded within the nuo operon, and are homologous to mitochondrial complex I subunits. The antiporter-like subunits NuoL/M/N each contains 14 conserved transmembrane (TM) helices. Two of them are discontinuous, but subunit NuoL contains a 110 Å long amphipathic α-helix, spanning the entire length of the domain. The subunit, NuoL, is related to Na+/ H+ antiporters of TC# 2.A.63.1.1 (PhaA and PhaD). Three of the conserved, membrane-bound subunits in NADH dehydrogenase are related to each other, and to Mrp sodium-proton antiporters. Structural analysis of two prokaryotic complexes I revealed that the three subunits each contain fourteen transmembrane helices that overlay in structural alignments: the translocation of three protons may be coordinated by a lateral helix connecting them. Complex I contains a ubiquinone binding pocket at the interface of the 49-kDa and PSST subunits. Close to iron-sulfur cluster N2, the proposed immediate electron donor for ubiquinone, a highly conserved tyrosine constitutes a critical element of the quinone reduction site. A possible quinone exchange path leads from cluster N2 to the N-terminal beta-sheet of the 49-kDa subunit. All 45 subunits of the bovine NDHI have been sequenced. Each complex contains noncovalently bound FMN, coenzyme Q and several iron-sulfur centers. The bacterial NDHs have 8-9 iron-sulfur centers. A recent study used electron paramagnetic resonance (EPR) spectra and double electron-electron resonance (DEER) to determine the path of electron transfer through the iron-sulfur complexes, which are located in the hydrophilic domain. Seven of these clusters form a chain from the flavin to the quinone binding sites; the eighth cluster is located on the other side of the flavin, and its function is unknown. The EPR and DEER results suggest an alternating or “roller-coaster” potential energy profile for the electron transfer between the active sites and along the iron-sulfur clusters, which can optimize the rate of electron travel and allow efficient energy conversion in complex I. Notes: a Found in all species except fungi b May or may not be present in any species c Found in fungal species such as Schizosaccharomyces pombe d Recent research has described NDUFA4 to be a subunit of complex IV, and not of complex I Inhibitors Inhibition of complex I is the mode of action of the METI acaricides and insecticides: fenazaquin, fenpyroximate, pyrimidifen, pyridaben, tebufenpyrad, and tolfenpyrad. They are assigned to IRAC group 21A. 
Perhaps the best-known inhibitor of complex I is rotenone, which is used as a piscicide and previously commonly used as an organic pesticide, but now banned in many countries. It is in IRAC group 21B. Rotenone and rotenoids are isoflavonoids occurring in several genera of tropical plants such as Antonia (Loganiaceae), Derris and Lonchocarpus (Faboideae, Fabaceae). There have been reports of the indigenous people of French Guiana using rotenone-containing plants to fish - due to its ichthyotoxic effect - as early as the 17th century. Rotenone binds to the ubiquinone binding site of complex I as well as piericidin A, another potent inhibitor with a close structural homologue to ubiquinone. Acetogenins from Annonaceae are even more potent inhibitors of complex I. They cross-link to the ND2 subunit, which suggests that ND2 is essential for quinone-binding. Rolliniastatin-2, an acetogenin, is the first complex I inhibitor found that does not share the same binding site as rotenone. Bullatacin (an acetogenin found in Asimina triloba fruit) is the most potent known inhibitor of NADH dehydrogenase (ubiquinone) (=1.2 nM, stronger than rotenone). Despite more than 50 years of study of complex I, no inhibitors blocking the electron flow inside the enzyme have been found. Hydrophobic inhibitors like rotenone or piericidin most likely disrupt the electron transfer between the terminal FeS cluster N2 and ubiquinone. It has been shown that long-term systemic inhibition of complex I by rotenone can induce selective degeneration of dopaminergic neurons. Complex I is also blocked by adenosine diphosphate ribose – a reversible competitive inhibitor of NADH oxidation – by binding to the enzyme at the nucleotide binding site. Both hydrophilic NADH and hydrophobic ubiquinone analogs act at the beginning and the end of the internal electron-transport pathway, respectively. The antidiabetic drug Metformin has been shown to induce a mild and transient inhibition of the mitochondrial respiratory chain complex I, and this inhibition appears to play a key role in its mechanism of action. Inhibition of complex I has been implicated in hepatotoxicity associated with a variety of drugs, for instance flutamide and nefazodone. Further, complex I inhibition was shown to trigger NAD+-independent glucose catabolism. Active/inactive transition The catalytic properties of eukaryotic complex I are not simple. Two catalytically and structurally distinct forms exist in any given preparation of the enzyme: one is the fully competent, so-called “active” A-form and the other is the catalytically silent, dormant, “inactive”, D-form. After exposure of idle enzyme to elevated, but physiological temperatures (>30 °C) in the absence of substrate, the enzyme converts to the D-form. This form is catalytically incompetent but can be activated by the slow reaction (k~4 min−1) of NADH oxidation with subsequent ubiquinone reduction. After one or several turnovers the enzyme becomes active and can catalyse physiological NADH:ubiquinone reaction at a much higher rate (k~104 min−1). In the presence of divalent cations (Mg2+, Ca2+), or at alkaline pH the activation takes much longer. The high activation energy (270 kJ/mol) of the deactivation process indicates the occurrence of major conformational changes in the organisation of the complex I. However, until now, the only conformational difference observed between these two forms is the number of cysteine residues exposed at the surface of the enzyme. 
Treatment of the D-form of complex I with the sulfhydryl reagents N-Ethylmaleimide or DTNB irreversibly blocks critical cysteine residues, abolishing the ability of the enzyme to respond to activation, thus inactivating it irreversibly. The A-form of complex I is insensitive to sulfhydryl reagents. It was found that these conformational changes may have a very important physiological significance. The inactive, but not the active form of complex I was susceptible to inhibition by nitrosothiols and peroxynitrite. It is likely that transition from the active to the inactive form of complex I takes place during pathological conditions when the turnover of the enzyme is limited at physiological temperatures, such as during hypoxia, ischemia or when the tissue nitric oxide:oxygen ratio increases (i.e. metabolic hypoxia). Production of superoxide Recent investigations suggest that complex I is a potent source of reactive oxygen species. Complex I can produce superoxide (as well as hydrogen peroxide), through at least two different pathways. During forward electron transfer, only very small amounts of superoxide are produced (probably less than 0.1% of the overall electron flow). During reverse electron transfer, complex I might be the most important site of superoxide production within mitochondria, with around 3-4% of electrons being diverted to superoxide formation. Reverse electron transfer, the process by which electrons from the reduced ubiquinol pool (supplied by succinate dehydrogenase, glycerol-3-phosphate dehydrogenase, electron-transferring flavoprotein or dihydroorotate dehydrogenase in mammalian mitochondria) pass through complex I to reduce NAD+ to NADH, driven by the inner mitochondrial membrane potential electric potential. Although it is not precisely known under what pathological conditions reverse-electron transfer would occur in vivo, in vitro experiments indicate that this process can be a very potent source of superoxide when succinate concentrations are high and oxaloacetate or malate concentrations are low. This can take place during tissue ischaemia, when oxygen delivery is blocked. Superoxide is a reactive oxygen species that contributes to cellular oxidative stress and is linked to neuromuscular diseases and aging. NADH dehydrogenase produces superoxide by transferring one electron from FMNH2 (or semireduced flavin) to oxygen (O2). The radical flavin leftover is unstable, and transfers the remaining electron to the iron-sulfur centers. It is the ratio of NADH to NAD+ that determines the rate of superoxide formation. Pathology Mutations in the subunits of complex I can cause mitochondrial diseases, including Leigh syndrome. Point mutations in various complex I subunits derived from mitochondrial DNA (mtDNA) can also result in Leber's Hereditary Optic Neuropathy.There is some evidence that complex I defects may play a role in the etiology of Parkinson's disease, perhaps because of reactive oxygen species (complex I can, like complex III, leak electrons to oxygen, forming highly toxic superoxide). Although the exact etiology of Parkinson's disease is unclear, it is likely that mitochondrial dysfunction, along with proteasome inhibition and environmental toxins, may play a large role. In fact, the inhibition of complex I has been shown to cause the production of peroxides and a decrease in proteasome activity, which may lead to Parkinson's disease. Additionally, Esteves et al. 
(2010) found that cell lines from patients with Parkinson's disease show increased proton leakage in complex I, which causes decreased maximum respiratory capacity. Brain ischemia/reperfusion injury is mediated via complex I impairment. Recently it was found that oxygen deprivation leads to conditions in which mitochondrial complex I loses its natural cofactor, flavin mononucleotide (FMN), and becomes inactive. When oxygen is present, the enzyme catalyzes the physiological reaction of NADH oxidation by ubiquinone, supplying electrons downstream in the respiratory chain (to complexes III and IV). Ischemia leads to a dramatic increase in succinate levels. In the presence of succinate, mitochondria catalyze reverse electron transfer so that a fraction of the electrons from succinate is directed upstream to the FMN of complex I. Reverse electron transfer results in reduction of the complex I FMN and increased generation of ROS, followed by loss of the reduced cofactor (FMNH2) and impairment of mitochondrial energy production. The loss of FMN by complex I and the resulting ischemia/reperfusion injury can be alleviated by administration of the FMN precursor riboflavin. Recent studies have examined other roles of complex I activity in the brain. Andreazza et al. (2010) found that the level of complex I activity was significantly decreased in patients with bipolar disorder, but not in patients with depression or schizophrenia. They found that patients with bipolar disorder showed increased protein oxidation and nitration in their prefrontal cortex. These results suggest that complex I should be targeted in future therapeutic studies of bipolar disorder. Similarly, Moran et al. (2010) found that patients with severe complex I deficiency showed decreased oxygen consumption rates and slower growth rates. However, they found that mutations in different genes in complex I lead to different phenotypes, thereby explaining the variation in pathophysiological manifestations of complex I deficiency. Exposure to pesticides can also inhibit complex I and cause disease symptoms. For example, chronic exposure to low levels of dichlorvos, an organophosphate used as a pesticide, has been shown to cause liver dysfunction. This occurs because dichlorvos alters complex I and II activity levels, which leads to decreased mitochondrial electron transfer activities and decreased ATP synthesis. In chloroplasts A proton-pumping, ubiquinone-using NADH dehydrogenase complex, homologous to complex I, is found in the chloroplast genomes of most land plants under the name ndh. This complex is inherited from the original symbiosis with cyanobacteria, but has been lost in most eukaryotic algae, some gymnosperms (Pinus and gnetophytes), and some very young lineages of angiosperms. The purpose of this complex was originally cryptic, since chloroplasts do not participate in respiration, but it is now known that ndh serves to maintain photosynthesis under stressful conditions. This makes it at least partially dispensable in favorable conditions. Angiosperm lineages that have lost ndh evidently do not persist for long, but how gymnosperms have survived on land without ndh for so long is unknown. 
Genes The following is a list of humans genes that encode components of complex I: NADH dehydrogenase (ubiquinone) 1 alpha subcomplex NDUFA1 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 1, 7.5kDa NDUFA2 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 2, 8kDa NDUFA3 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 3, 9kDa NDUFA4 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 4, 9kDa - recently described to be part of complex IV NDUFA4L – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 4-like NDUFA4L2 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 4-like 2 NDUFA5 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 5, 13kDa NDUFA6 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 6, 14kDa NDUFA7 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 7, 14.5kDa NDUFA8 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 8, 19kDa NDUFA9 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 9, 39kDa NDUFA10 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 10, 42kDa NDUFA11 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 11, 14.7kDa NDUFA12 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 12 NDUFA13 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, 13 NDUFAB1 – NADH dehydrogenase (ubiquinone) 1, alpha/beta subcomplex, 1, 8kDa NDUFAF1 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, assembly factor 1 NDUFAF2 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, assembly factor 2 NDUFAF3 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, assembly factor 3 NDUFAF4 – NADH dehydrogenase (ubiquinone) 1 alpha subcomplex, assembly factor 4 NADH dehydrogenase (ubiquinone) 1 beta subcomplex NDUFB1 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 1, 7kDa NDUFB2 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 2, 8kDa NDUFB3 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 3, 12kDa NDUFB4 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 4, 15kDa NDUFB5 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 5, 16kDa NDUFB6 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 6, 17kDa NDUFB7 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 7, 18kDa NDUFB8 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 8, 19kDa NDUFB9 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 9, 22kDa NDUFB10 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 10, 22kDa NDUFB11 – NADH dehydrogenase (ubiquinone) 1 beta subcomplex, 11, 17.3kDa NADH dehydrogenase (ubiquinone) 1, subcomplex unknown NDUFC1 – NADH dehydrogenase (ubiquinone) 1, subcomplex unknown, 1, 6kDa NDUFC2 – NADH dehydrogenase (ubiquinone) 1, subcomplex unknown, 2, 14.5kDa NADH dehydrogenase (ubiquinone) Fe-S protein NDUFS1 – NADH dehydrogenase (ubiquinone) Fe-S protein 1, 75kDa (NADH-coenzyme Q reductase) NDUFS2 – NADH dehydrogenase (ubiquinone) Fe-S protein 2, 49kDa (NADH-coenzyme Q reductase) NDUFS3 – NADH dehydrogenase (ubiquinone) Fe-S protein 3, 30kDa (NADH-coenzyme Q reductase) NDUFS4 – NADH dehydrogenase (ubiquinone) Fe-S protein 4, 18kDa (NADH-coenzyme Q reductase) NDUFS5 – NADH dehydrogenase (ubiquinone) Fe-S protein 5, 15kDa (NADH-coenzyme Q reductase) NDUFS6 – NADH dehydrogenase (ubiquinone) Fe-S protein 6, 13kDa (NADH-coenzyme Q reductase) NDUFS7 – NADH dehydrogenase (ubiquinone) Fe-S protein 7, 20kDa (NADH-coenzyme Q reductase) NDUFS8 – NADH dehydrogenase (ubiquinone) Fe-S protein 8, 23kDa (NADH-coenzyme Q reductase) NADH dehydrogenase (ubiquinone) flavoprotein 1 NDUFV1 – NADH dehydrogenase (ubiquinone) flavoprotein 1, 51kDa NDUFV2 – NADH dehydrogenase 
(ubiquinone) flavoprotein 2, 24kDa NDUFV3 – NADH dehydrogenase (ubiquinone) flavoprotein 3, 10kDa mitochondrially encoded NADH dehydrogenase subunit MT-ND1 - mitochondrially encoded NADH dehydrogenase subunit 1 MT-ND2 - mitochondrially encoded NADH dehydrogenase subunit 2 MT-ND3 - mitochondrially encoded NADH dehydrogenase subunit 3 MT-ND4 - mitochondrially encoded NADH dehydrogenase subunit 4 MT-ND4L - mitochondrially encoded NADH dehydrogenase subunit 4L MT-ND5 - mitochondrially encoded NADH dehydrogenase subunit 5 MT-ND6 - mitochondrially encoded NADH dehydrogenase subunit 6 References External links Institute of Science and Technology Austria (ISTA): Sazanov Group MRC MBU Sazanov group Interactive Molecular model of NADH dehydrogenase (Requires MDL Chime) Complex I homepage Complex I news facebook page Cellular respiration Glycolysis EC 7.1.1 Integral membrane proteins
Respiratory complex I
[ "Chemistry", "Biology" ]
6,281
[ "Carbohydrate metabolism", "Cellular respiration", "Glycolysis", "Biochemistry", "Metabolism" ]
97,039
https://en.wikipedia.org/wiki/Succinic%20acid
Succinic acid () is a dicarboxylic acid with the chemical formula (CH2)2(CO2H)2. In living organisms, succinic acid takes the form of an anion, succinate, which has multiple biological roles: as a metabolic intermediate, being converted into fumarate by the enzyme succinate dehydrogenase in complex II of the electron transport chain (which is involved in making ATP), and as a signaling molecule reflecting the cellular metabolic state. Succinate is generated in mitochondria via the tricarboxylic acid (TCA) cycle. Succinate can exit the mitochondrial matrix and function in the cytoplasm as well as the extracellular space, changing gene expression patterns, modulating the epigenetic landscape or demonstrating hormone-like signaling. As such, succinate links cellular metabolism, especially ATP formation, to the regulation of cellular function. Dysregulation of succinate synthesis, and therefore ATP synthesis, occurs in some genetic mitochondrial diseases, such as Leigh syndrome and MELAS syndrome, and dysregulation of its degradation can lead to pathological conditions such as malignant transformation, inflammation and tissue injury. Succinic acid is marketed as food additive E363. The name derives from Latin succinum, meaning amber. Physical properties Succinic acid is a white, odorless solid with a highly acidic taste. In an aqueous solution, succinic acid readily ionizes to form its conjugate base, succinate (). As a diprotic acid, succinic acid undergoes two successive deprotonation reactions: 
(CH2)2(CO2H)2 → (CH2)2(CO2H)(CO2)− + H+ 
(CH2)2(CO2H)(CO2)− → (CH2)2(CO2)22− + H+ 
The pKa values of these deprotonations are 4.3 and 5.6, respectively. Both anions are colorless and can be isolated as the salts, e.g., Na(CH2)2(CO2H)(CO2) and Na2(CH2)2(CO2)2. In living organisms, primarily succinate, not succinic acid, is found. As a radical group it is called a succinyl () group. Like most simple mono- and dicarboxylic acids, it is not harmful but can be an irritant to skin and eyes. Commercial production Historically, succinic acid was obtained from amber by distillation and has thus been known as spirit of amber (). Common industrial routes include hydrogenation of maleic acid, oxidation of 1,4-butanediol, and carbonylation of ethylene glycol. Succinate is also produced from butane via maleic anhydride. Global production is estimated at 16,000 to 30,000 tons a year, with an annual growth rate of 10%. Genetically engineered Escherichia coli and Saccharomyces cerevisiae have been proposed for commercial production via fermentation of glucose. Chemical reactions Succinic acid can be dehydrogenated to fumaric acid or be converted to diesters, such as diethyl succinate (CH2CO2CH2CH3)2. This diethyl ester is a substrate in the Stobbe condensation. Dehydration of succinic acid gives succinic anhydride. Succinate can be used to derive 1,4-butanediol, maleic anhydride, succinimide, 2-pyrrolidinone and tetrahydrofuran. Applications In 2004, succinate was placed on the US Department of Energy's list of top 12 platform chemicals from biomass. Precursor to polymers, resins, and solvents Succinic acid is a precursor to some polyesters and a component of some alkyd resins. 1,4-Butanediol (BDO) can be synthesized using succinic acid as a precursor. The automotive and electronics industries heavily rely on BDO to produce connectors, insulators, wheel covers, gearshift knobs and reinforcing beams. Succinic acid also serves as the basis of certain biodegradable polymers, which are of interest in tissue engineering applications. 
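The two deprotonation constants given under Physical properties above (pKa 4.3 and 5.6) can be turned into a small speciation calculation. The sketch below is a generic diprotic-acid fraction computation; it assumes ideal (activity-free) behaviour, which is an approximation not stated in the article.

# Fraction of H2A, HA-, and A2- for a diprotic acid as a function of pH,
# using the pKa values quoted above for succinic acid (4.3 and 5.6).
# Ideal-solution behaviour is assumed (concentrations used instead of activities).
KA1, KA2 = 10**-4.3, 10**-5.6

def speciation(ph):
    h = 10**-ph
    denom = h*h + h*KA1 + KA1*KA2
    return (h*h / denom,        # fully protonated succinic acid, H2A
            h*KA1 / denom,      # hydrogen succinate, HA-
            KA1*KA2 / denom)    # succinate dianion, A2-

for ph in (3.0, 4.3, 5.0, 5.6, 7.4):
    h2a, ha, a2 = speciation(ph)
    print(f"pH {ph:>4}: H2A {h2a:.2f}  HA- {ha:.2f}  A2- {a2:.2f}")

At physiological pH (about 7.4) the dianion dominates in this calculation, consistent with the statement above that succinate rather than succinic acid is the form found in living organisms.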
Acylation with succinic acid is called succination. Oversuccination occurs when more than one succinate adds to a substrate. Food and dietary supplement As a food additive and dietary supplement, succinic acid is generally recognized as safe by the U.S. Food and Drug Administration. Succinic acid is used primarily as an acidity regulator in the food and beverage industry. It is also available as a flavoring agent, contributing a somewhat sour and astringent component to umami taste. As an excipient in pharmaceutical products, it is also used to control acidity or as a counter ion. Drugs involving succinate include metoprolol succinate, sumatriptan succinate, doxylamine succinate or solifenacin succinate. Biosynthesis Tricarboxylic acid (TCA) cycle Succinate is a key intermediate in the tricarboxylic acid cycle, a primary metabolic pathway used to produce chemical energy in the presence of O2. Succinate is generated from succinyl-CoA by the enzyme succinyl-CoA synthetase in a GTP/ATP-producing step: Succinyl-CoA + NDP + Pi → Succinate + CoA + NTP Catalyzed by the enzyme succinate dehydrogenase (SDH), succinate is subsequently oxidized to fumarate: Succinate + FAD → Fumarate + FADH2 SDH also participates in the mitochondrial electron transport chain, where it is known as respiratory complex II. This enzyme complex is a 4 subunit membrane-bound lipoprotein which couples the oxidation of succinate to the reduction of ubiquinone via the intermediate electron carriers FAD and three 2Fe-2S clusters. Succinate thus serves as a direct electron donor to the electron transport chain, and itself is converted into fumarate. Reductive branch of the TCA cycle Succinate can alternatively be formed by reverse activity of SDH. Under anaerobic conditions certain bacteria such as A. succinogenes, A. succiniciproducens and M. succiniciproducens, run the TCA cycle in reverse and convert glucose to succinate through the intermediates of oxaloacetate, malate and fumarate. This pathway is exploited in metabolic engineering to net generate succinate for human use. Additionally, succinic acid produced during the fermentation of sugar provides a combination of saltiness, bitterness and acidity to fermented alcohols. Accumulation of fumarate can drive the reverse activity of SDH, thus enhancing succinate generation. Under pathological and physiological conditions, the malate-aspartate shuttle or the purine nucleotide shuttle can increase mitochondrial fumarate, which is then readily converted to succinate. Glyoxylate cycle Succinate is also a product of the glyoxylate cycle, which converts two two-carbon acetyl units into the four-carbon succinate. The glyoxylate cycle is utilized by many bacteria, plants and fungi and allows these organisms to subsist on acetate or acetyl CoA yielding compounds. The pathway avoids the decarboxylation steps of the TCA cycle via the enzyme isocitrate lyase which cleaves isocitrate into succinate and glyoxylate. Generated succinate is then available for either energy production or biosynthesis. GABA shunt Succinate is the re-entry point for the gamma-aminobutyric acid (GABA) shunt into the TCA cycle, a closed cycle which synthesizes and recycles GABA. The GABA shunt serves as an alternate route to convert alpha-ketoglutarate into succinate, bypassing the TCA cycle intermediate succinyl-CoA and instead producing the intermediate GABA. Transamination and subsequent decarboxylation of alpha-ketoglutarate leads to the formation of GABA. 
GABA is then metabolized by GABA transaminase to succinic semialdehyde. Finally, succinic semialdehyde is oxidized by succinic semialdehyde dehydrogenase (SSADH) to form succinate, re-entering the TCA cycle and closing the loop. Enzymes required for the GABA shunt are expressed in neurons, glial cells, macrophages and pancreatic cells. Cellular metabolism Metabolic intermediate Succinate is produced and concentrated in the mitochondria and its primary biological function is that of a metabolic intermediate. All metabolic pathways that are interlinked with the TCA cycle, including the metabolism of carbohydrates, amino acids, fatty acids, cholesterol, and heme, rely on the temporary formation of succinate. The intermediate is made available for biosynthetic processes through multiple pathways, including the reductive branch of the TCA cycle or the glyoxylate cycle, which are able to drive net production of succinate. In rodents, mitochondrial concentrations are approximately 0.5 mM, while plasma concentrations are only 2–20 μM. ROS production The activity of succinate dehydrogenase (SDH), which interconverts succinate and fumarate, participates in mitochondrial reactive oxygen species (ROS) production by directing electron flow in the electron transport chain. Under conditions of succinate accumulation, rapid oxidation of succinate by SDH can drive reverse electron transport (RET). If mitochondrial respiratory complex III is unable to accommodate excess electrons supplied by succinate oxidation, it forces electrons to flow backwards along the electron transport chain. RET at mitochondrial respiratory complex I, the complex normally preceding SDH in the electron transport chain, leads to ROS production and creates a pro-oxidant microenvironment. Additional biologic functions In addition to its metabolic roles, succinate serves as an intracellular and extracellular signaling molecule. Extra-mitochondrial succinate alters the epigenetic landscape by inhibiting the family of 2-oxoglutarate-dependent dioxygenases. Alternatively, succinate can be released into the extracellular milieu and the bloodstream, where it is recognized by target receptors. In general, leakage from the mitochondria requires succinate overproduction or underconsumption and occurs due to reduced, reverse or completely absent activity of SDH or alternative changes in metabolic state. Mutations in SDH, hypoxia or energetic imbalance are all linked to an alteration of flux through the TCA cycle and succinate accumulation. Upon exiting the mitochondria, succinate serves as a signal of metabolic state, communicating to neighboring cells how metabolically active the originating cell population is. As such, succinate links TCA cycle dysfunction or metabolic changes to cell-cell communication and to oxidative stress-related responses. Transporters Succinate requires specific transporters to move through both the mitochondrial and plasma membranes. Succinate exits the mitochondrial matrix and passes through the inner mitochondrial membrane via dicarboxylate transporters, primarily SLC25A10, a succinate-fumarate/malate transporter. In the second step of mitochondrial export, succinate readily crosses the outer mitochondrial membrane through porins, nonspecific protein channels that facilitate the diffusion of molecules of less than 1.5 kDa. Transport across the plasma membrane is likely tissue specific. 
A key candidate transporter is INDY (I'm not dead yet), a sodium-independent anion exchanger, which moves both dicarboxylates and citrate into the bloodstream. Extracellular signaling Extracellular succinate can act as a signaling molecule with hormone-like functions in stimulating a variety of cells such as those in the blood, adipose tissues, immune tissues, liver, heart, retina and kidney. Extracellular succinate works by binding to and thereby activating the GPR91 (also termed SUCNR1) receptor on the cells that express this receptor. Most studies have reported that the GPR91 protein consists of 330 amino acids, although a few studies have detected a 334-amino-acid product of the GPR91 gene. Arg99, His103, Arg252, and Arg281 near the center of the GPR91 protein generate a positively charged binding site for succinate. GPR91 resides on its target cells' surface membranes with its binding site facing the extracellular space. It is a G protein-coupled receptor that, depending on the cell type bearing it, interacts with multiple G protein subtypes, including Gs, Gi and Gq. This enables GPR91 to regulate a multitude of signaling outcomes. Succinate has a high affinity for GPR91, with an EC50 (i.e., the concentration that induces a half-maximal response) for stimulating GPR91 in the 20–50 μM range. Succinate's activation of the GPR91 receptor stimulates a wide range of cell types and physiological responses (see Functions regulated by SUCNR1). Effect on adipocytes In adipocytes, the succinate-activated GPR91 signaling cascade inhibits lipolysis. Effect on the liver and retina Succinate signaling often occurs in response to hypoxic conditions. In the liver, succinate serves as a paracrine signal, released by anoxic hepatocytes, and targets stellate cells via GPR91. This leads to stellate cell activation and fibrogenesis. Thus, succinate is thought to play a role in liver homeostasis. In the retina, succinate accumulates in retinal ganglion cells in response to ischemic conditions. Autocrine succinate signaling promotes retinal neovascularization, triggering the activation of angiogenic factors such as vascular endothelial growth factor (VEGF). Effect on the heart Extracellular succinate regulates cardiomyocyte viability through GPR91 activation; long-term succinate exposure leads to pathological cardiomyocyte hypertrophy. Stimulation of GPR91 triggers at least two signaling pathways in the heart: a MEK1/2 and ERK1/2 pathway that activates hypertrophic gene expression and a phospholipase C pathway which changes the pattern of Ca2+ uptake and distribution and triggers CaM-dependent hypertrophic gene activation. Effect on immune cells SUCNR1 is highly expressed on immature dendritic cells, where succinate binding stimulates chemotaxis. Furthermore, SUCNR1 synergizes with toll-like receptors to increase the production of proinflammatory cytokines such as TNF alpha and interleukin-1beta. Succinate may enhance adaptive immunity by triggering the activity of antigen-presenting cells that, in turn, activate T-cells. Effect on platelets SUCNR1 is one of the most highly expressed G protein-coupled receptors on human platelets, present at levels similar to P2Y12, though the role of succinate signaling in platelet aggregation is debated. Multiple studies have demonstrated succinate-induced aggregation, but the effect has high inter-individual variability. 
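The EC50 range quoted above can be dropped into a standard receptor dose-response model. The sketch below uses a simple Hill-type curve with a Hill coefficient of 1 (plain hyperbolic binding); that coefficient and the example concentrations are illustrative assumptions, not values taken from this article.

# Generic dose-response curve for succinate acting on GPR91/SUCNR1.
# EC50 is drawn from the 20-50 uM range quoted above; a Hill coefficient of 1
# (simple hyperbolic binding) is an illustrative assumption.
def fractional_response(succinate_um, ec50_um=35.0, hill=1.0):
    """Fraction of the maximal GPR91 response at a given succinate concentration (uM)."""
    return succinate_um**hill / (ec50_um**hill + succinate_um**hill)

for conc in (2, 20, 35, 100, 500):
    print(f"{conc:>4} uM succinate -> {fractional_response(conc):.0%} of maximal response")

With these assumptions, resting plasma concentrations (the 2–20 μM range quoted earlier) give only partial receptor activation, while locally released succinate at the high end of the range approaches saturation; the numbers are purely illustrative.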
Effect on the kidneys Succinate serves as a modulator of blood pressure by stimulating renin release in macula densa and juxtaglomerular apparatus cells via GPR91. Therapies targeting succinate to reduce cardiovascular risk and hypertension are currently under investigation. Intracellular signaling Accumulation of either fumarate or succinate reduces the activity of 2-oxoglutarate-dependent dioxygenases, including histone and DNA demethylases, prolyl hydroxylases and collagen prolyl-4-hydroxylases, through competitive inhibition. 2-oxoglutarate-dependent dioxygenases require an iron cofactor to catalyze hydroxylations, desaturations and ring closures. Simultaneous to substrate oxidation, they convert 2-oxoglutarate, also known as alpha-ketoglutarate, into succinate and CO2. 2-oxoglutarate-dependent dioxygenases bind substrates in a sequential, ordered manner. First, 2-oxoglutarate coordinates with an Fe(II) ion bound to a conserved 2-histidinyl–1-aspartyl/glutamyl triad of residues present in the enzymatic center. Subsequently, the primary substrate enters the binding pocket and lastly dioxygen binds to the enzyme-substrate complex. Oxidative decarboxylation then generates a ferryl intermediate coordinated to succinate, which serves to oxidize the bound primary substrate. Succinate may interfere with the enzymatic process by attaching to the Fe(II) center first, prohibiting the binding of 2-oxoglutarate. Thus, via enzymatic inhibition, increased succinate load can lead to changes in transcription factor activity and genome-wide alterations in histone and DNA methylation. Epigenetic effects Succinate and fumarate inhibit the TET (ten-eleven translocation) family of 5-methylcytosine DNA modifying enzymes and the JmjC domain-containing histone lysine demethylase (KDM). Pathologically elevated levels of succinate lead to hypermethylation, epigenetic silencing and changes in neuroendocrine differentiation, potentially driving cancer formation. Gene regulation Succinate inhibition of prolyl hydroxylases (PHDs) stabilizes the transcription factor hypoxia inducible factor (HIF)1α. PHDs hydroxylate proline in parallel to oxidatively decarboxylating 2-oxyglutarate to succinate and CO2. In humans, three HIF prolyl 4-hydroxylases regulate the stability of HIFs. Hydroxylation of two prolyl residues in HIF1α facilitates ubiquitin ligation, thus marking it for proteolytic destruction by the ubiquitin/proteasome pathway. Since PHDs have an absolute requirement for molecular oxygen, this process is suppressed in hypoxia allowing HIF1α to escape destruction. High concentrations of succinate will mimic the hypoxia state by suppressing PHDs, therefore stabilizing HIF1α and inducing the transcription of HIF1-dependent genes even under normal oxygen conditions. HIF1 is known to induce transcription of more than 60 genes, including genes involved in vascularization and angiogenesis, energy metabolism, cell survival, and tumor invasion. Role in human health Inflammation Metabolic signaling involving succinate can be involved in inflammation via stabilization of HIF1-alpha or GPR91 signaling in innate immune cells. Through these mechanisms, succinate accumulation has been shown to regulate production of inflammatory cytokines. For dendritic cells, succinate functions as a chemoattractant and increases their antigen-presenting function via receptor stimulated cytokine production. 
In inflammatory macrophages, succinate-induced stability of HIF1 results in increased transcription of HIF1-dependent genes, including the pro-inflammatory cytokine interleukin-1β. Other inflammatory cytokines produced by activated macrophages such as tumor necrosis factor or interleukin 6 are not directly affected by succinate and HIF1. The mechanism by which succinate accumulates in immune cells is not fully understood. Activation of inflammatory macrophages through toll-like receptors induces a metabolic shift towards glycolysis. In spite of a general downregulation of the TCA cycle under these conditions, succinate concentration is increased. However, lipopolysaccharides involved in the activation of macrophages increase glutamine and GABA transporters. Succinate may thus be produced from enhanced glutamine metabolism via alpha-ketoglutarate or the GABA shunt. Tumorigenesis Succinate is one of three oncometabolites, metabolic intermediates whose accumulation causes metabolic and non-metabolic dysregulation implicated in tumorigenesis. Loss-of-function mutations in the genes encoding succinate dehydrogenase, frequently found in hereditary paraganglioma and pheochromocytoma, cause a pathological increase in succinate. SDH mutations have also been identified in gastrointestinal stromal tumors, renal tumors, thyroid tumors, testicular seminomas and neuroblastomas. The oncogenic mechanism caused by mutated SDH is thought to relate to succinate's ability to inhibit 2-oxoglutarate-dependent dioxygenases. Inhibition of KDMs and TET hydroxylases results in epigenetic dysregulation and hypermethylation affecting genes involved in cell differentiation. Additionally, succinate-promoted activation of HIF-1α generates a pseudo-hypoxic state that can promote tumorigenesis by transcriptional activation of genes involved in proliferation, metabolism and angiogenesis. The other two oncometabolites, fumarate and 2-hydroxyglutarate, have similar structures to succinate and function through parallel HIF-inducing oncogenic mechanisms. Ischemia reperfusion injury Succinate accumulation under hypoxic conditions has been implicated in reperfusion injury through increased ROS production. During ischemia, succinate accumulates. Upon reperfusion, succinate is rapidly oxidized, leading to abrupt and extensive production of ROS. ROS then trigger the cellular apoptotic machinery or induce oxidative damage to proteins, membranes, organelles, etc. In animal models, pharmacological inhibition of ischemic succinate accumulation ameliorated ischemia-reperfusion injury. As of 2016, the inhibition of succinate-mediated ROS production was under investigation as a therapeutic drug target. See also Flame retardant Oil of amber, procured by heating succinic acid Citric acid cycle Metabolite Oncometabolism References External links FDA Succinic Acid Calculator: Water and solute activities in aqueous succinic acid PubChem: Compound Summary for Succinic Acid Citric acid cycle compounds Dicarboxylic acids Excipients Succinates E-number additives Metabolic intermediates
Succinic acid
[ "Chemistry" ]
4,865
[ "Citric acid cycle compounds", "Metabolic intermediates", "Metabolism", "Biomolecules" ]
97,375
https://en.wikipedia.org/wiki/Bailey%20bridge
A Bailey bridge is a type of portable, pre-fabricated, truss bridge. It was developed in 1940–1941 by the British for military use during the Second World War and saw extensive use by British, Canadian and American military engineering units. A Bailey bridge has the advantages of requiring no special tools or heavy equipment to assemble. The wood and steel bridge elements were small and light enough to be carried in trucks and lifted into place by hand, without the use of a crane. These bridges were strong enough to carry tanks. Bailey bridges continue to be used extensively in civil engineering construction projects and to provide temporary crossings for pedestrian and vehicle traffic. Design The success of the Bailey bridge was due to the simplicity of the fabrication and assembly of its modular components, combined with the ability to erect and deploy sections with a minimum of assistance from heavy equipment. Many previous designs for military bridges required cranes to lift the pre-assembled bridge and lower it into place. The Bailey parts were made of standard steel alloys, and were simple enough that parts made at a number of different factories were interchangeable. Each individual part could be carried by a small number of men, enabling army engineers to move more easily and quickly in preparing the way for troops and materiel advancing behind them. The modular design allowed engineers to build each bridge to be as long and as strong as needed, doubling or tripling up on the supportive side panels or on the roadbed sections. The basic bridge consists of three main parts. The bridge's strength is provided by the panels on the sides. The panels are , , cross-braced rectangles that each weigh , and can be lifted by four men. Each panel was constructed of welded steel. The top and bottom chord of each panel had interlocking male and female lugs into which engineers could insert panel connecting pins. The floor of the bridge consists of a number of transoms that run across the bridge, with stringers running between them, and over the top of the transoms, forming a square. Transoms rest on the lower chord of the panels, and clamps hold them together. Stringers are placed atop the completed structural frame, and wood planking (chesses) are placed atop the stringers to provide a roadbed. Ribands bolt the planking to the stringers. Later in the war, the wooden planking was covered by steel plates, which were more resistant to damage from tank tracks. Each unit constructed in this fashion creates a single section of bridge, with a roadbed. After one section is complete, it is typically pushed forward over rollers on the bridgehead, and another section built behind it. The two are then connected together with pins pounded into holes in the corners of the panels. For added strength, up to three panels (and transoms) can be bolted on either side of the bridge. Another solution is to stack the panels vertically. With three panels across and two high, the Bailey bridge can support tanks over a . Footways can be installed on the outside of the side-panels. The side-panels form an effective barrier between foot and vehicle traffic, allowing pedestrians to safely use the bridge. 
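The panel arithmetic implied by the modular design above can be sketched as a quick estimate. In the snippet below the per-bay (per-panel) length and the worked configuration are hypothetical values chosen only for illustration; the real panel dimensions are not given in this text.

import math

def panel_count(gap_length_m, panels_wide, panels_high, bay_length_m=3.05):
    """Rough count of side panels for a Bailey-type bridge.

    bay_length_m is a hypothetical per-panel length assumed for illustration only.
    Panels run along both sides of the roadway, `panels_wide` trusses thick per side,
    stacked `panels_high` storeys high, as described in the Design section above.
    """
    bays = math.ceil(gap_length_m / bay_length_m)
    return bays * panels_wide * panels_high * 2   # x2 for the two sides of the roadway

# Example: a 30 m gap bridged with three panels across and two high per side.
print(panel_count(30.0, panels_wide=3, panels_high=2))   # -> 120 panels under these assumptions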
A useful feature of the Bailey bridge is its ability to be launched from one side of a gap, without a need for any equipment or personnel on the far bank. In this system, the front-most portion of the bridge is angled up with short "launch-links" to form a "launching nose" and most of the bridge is left without the roadbed and ribands. The bridge is placed on rollers and simply pushed across the gap, using manpower or a truck or tracked vehicle, at which point the rollers are removed (with the help of jacks) and the ribands and roadbed are installed, along with any additional panels and transoms that might be needed. During WWII, Bailey bridge parts were made by companies with little experience of this kind of engineering. Although the parts were simple, they had to be precisely manufactured to fit correctly, so they were assembled into a test jig at each factory to verify this. To do this efficiently, newly manufactured parts would be continuously added to the test bridge, while at the same time the far end of the test bridge was continuously dismantled and the parts dispatched to the end-users. History Donald Bailey was a civil servant in the British War Office who tinkered with model bridges as a hobby. He had proposed an early prototype for a Bailey bridge before the war in 1936, but the idea was not acted upon. Bailey drew an original proposal for the bridge on the back of an envelope in 1940. On 14 February 1941, the Ministry of Supply requested that Bailey have a full-scale prototype completed by 1 May. Work on the bridge was completed with particular support from Ralph Freeman. The design was tested at the Experimental Bridging Establishment (EBE), in Christchurch, Dorset, with several parts from Braithwaite & Co., beginning in December 1940 and ending in 1941. The first prototype was tested in 1941. For early tests, the bridge was laid across a field, about above the ground, and several Mark V tanks were filled with pig iron and stacked upon each other. The prototype was used to span Mother Siller's Channel, which cuts through the nearby Stanpit Marshes, an area of marshland at the confluence of the River Avon and the River Stour. It remains there () as a functioning bridge. Full production began in July 1941. Thousands of workers and over 650 firms, including Littlewoods, were engaged in making the bridge, with production eventually rising to 25,000 bridge panels a month. The first Bailey bridges were in military service by December 1941. Bridges in other formats were built temporarily to cross the Avon and Stour in the meadows nearby. After successful development and testing, the bridge was taken into service by the Corps of Royal Engineers and first used in North Africa in 1942. The original design violated a patent on the Callender-Hamilton bridge. The designer of that bridge, A. M. Hamilton, successfully applied to the Royal Commission on Awards to Inventors. The Bailey bridge was more easily constructed, but less portable, than the Hamilton bridge. Hamilton was awarded £4,000 in 1936 by the War Office for the use of his early bridges and the Royal Commission on Awards to Inventors awarded him £10,000 in 1954 for the use, mainly in Asia, of his later bridges. Lieutenant General Sir Giffard Le Quesne Martel was awarded £500 for infringement on the design of his box girder bridge, the Martel bridge. Bailey was later knighted for his invention, and awarded £12,000. Use in the Second World War The first operational Bailey bridge during the Second World War was built by 237 Field Company R.E. over the Medjerda River near Medjez el Bab in Tunisia on the night of 26 November 1942. 
The first Bailey bridge built under fire was constructed at Leonforte by members of the 3rd Field Company, Royal Canadian Engineers. The Americans soon adopted the Bailey bridge technique, calling it the Portable Panel Bridge. In early 1942, the United States Army Corps of Engineers initially awarded contracts to the Detroit Steel Products Company, the American Elevator Company and the Commercial Shearing and Stamping Company, and later several others. The Bailey provided a solution to the problem of German and Italian armies destroying bridges as they retreated. By the end of the war, the US Fifth Army and British 8th Army had built over 3,000 Bailey bridges in Sicily and Italy alone, totaling over of bridge, at an average length of . One Bailey, built to replace the Sangro River bridge in Italy, spanned . Another on the Chindwin River in Burma, spanned . Such long bridges required support from either piers or pontoons. A number of bridges were available by 1944 for D-Day, when production was accelerated. The US also licensed the design and started rapid construction for their own use. A Bailey Bridge constructed over the River Rhine at Rees, Germany, in 1945 by the Royal Canadian Engineers was named "Blackfriars Bridge", and, at 558 m (1814 ft) including the ramps at each end, was then the longest Bailey bridge ever constructed. In all, over 600 firms were involved in the making of over 200 miles of bridges composing of 500,000 tons, or 700,000 panels of bridging during the war. At least 2,500 Bailey bridges were built in Italy, and another 2,000 elsewhere. Field Marshal Bernard Montgomery wrote in 1947: Post-war applications The Skylark launch tower at Woomera was built up of Bailey bridge components. In the years immediately following World War II, the Ontario Hydro-Electric Power Commission purchased huge amounts of war-surplus Bailey bridging from the Canadian War Assets Corporation. The commission used bridging in an office building. Over 200,000 tons of bridging were used in a hydroelectric project. The Ontario government was, several years after World War II, the largest holder of Bailey Bridging components. After World War II and especially post Hurricane Hazel in 1954, some of the bridging was used to construct replacement bridges in the Toronto area: 16th Avenue Bailey Bridge c. 1945 Lake Shore Boulevard Bailey Bridge was built in 1952 for Ontario Hydro Old Finch Avenue Bailey Bridge, built by the 2nd Field Engineer Regiment, is the last still in use. The longest Bailey bridge was put into service in October 1975. This , two-lane bridge crossed the Derwent River at Hobart, Australia. The Bailey bridge was in use until the reconstruction of the Tasman Bridge was completed on 8 October 1977. Bailey bridges are in regular use throughout the world, particularly as a means of bridging in remote regions. In 2018, the Indian Army erected three new footbridges at Elphinstone Road, a commuter railway station in Mumbai, and at Currey Road and Ambivli. These were erected quickly, in response to a stampede some months earlier, where 23 people died. The United States Army Corps of Engineers uses Bailey Bridges in construction projects, including an emergency replacement bridge on the Hana Highway in Hawaii. Two temporary Bailey bridges have been used on the northern span of the Dufferin Street bridges in Toronto since 2014. The first Bailey Bridge built for civilian use in India was on the Pamba river in a place called Ranni in Pathanamthitta district of the state of Kerala. 
This was on 8 November 1996. In 2017, the Irish Army built a Bailey bridge to replace a road bridge across the Cabry River, in County Donegal, after the original bridge was destroyed in floods. In 2021, a Bailey bridge was built across the river Dijle in Rijmenam (Belgium) for the transportation of excavated soil from one side of the river to the other. The bridge allowed trucks to cross the river without having to pass through the city center. In March 2021, the Michigan Department of Transportation constructed a Bailey bridge on M-30 to temporarily reconnect the highway after the old structure was destroyed in the May 2020 flooding and subsequent failure of the Edenville Dam. The department will replace the temporary bridge with a permanent structure in the coming years. Following the 2023 Auckland Anniversary Weekend floods and Cyclone Gabrielle in the North Island of New Zealand, Bailey bridges were installed to reconnect communities. Following the 2023 floods in Madrid, Spain, the Spanish Army is set to build a Bailey bridge in the village of Aldea del Fresno. In 2024, following the catastrophic landslide in Kerala's Wayanad district, the Indian Army built a 190-foot Bailey bridge in the village of Mundakkai. Gallery See also AM 50 Armoured vehicle-launched bridge Mabey Logistic Support Bridge Medium Girder Bridge a modern bridge of analogous use Military engineer Pontoon bridge for another bridge type with mobile military application References Bibliography External links Bailey Bridges in New Zealand Animated build of a modern Mabey Compact 200 Bridge (similar to the original Bailey Bridge) US Army Field Manual FM5-277 Dated 9 May 1986. Portable bridges Truss bridges by type Bridges by structural type Military bridging equipment English inventions
Bailey bridge
[ "Engineering" ]
2,460
[ "Military bridging equipment", "Military engineering" ]
97,503
https://en.wikipedia.org/wiki/Heme
Heme (American English) or haem (Commonwealth English), both pronounced /hiːm/, is a ring-shaped iron-containing molecular component of hemoglobin, which is necessary to bind oxygen in the bloodstream. It is composed of four pyrrole rings with two vinyl and two propionic acid side chains. Heme is biosynthesized in both the bone marrow and the liver. Heme plays a critical role in many different redox reactions in mammals, due to its ability to carry the oxygen molecule. Reactions include oxidative metabolism (cytochrome c oxidase, succinate dehydrogenase), xenobiotic detoxification via cytochrome P450 pathways (including metabolism of some drugs), gas sensing (guanyl cyclases, nitric oxide synthase), and microRNA processing (DGCR8). Heme is a coordination complex "consisting of an iron ion coordinated to a tetrapyrrole acting as a tetradentate ligand, and to one or two axial ligands". The definition is loose, and many depictions omit the axial ligands. Among the metalloporphyrins deployed by metalloproteins as prosthetic groups, heme is one of the most widely used and defines a family of proteins known as hemoproteins. Hemes are most commonly recognized as components of hemoglobin, the red pigment in blood, but are also found in a number of other biologically important hemoproteins such as myoglobin, cytochromes, catalases, heme peroxidase, and endothelial nitric oxide synthase. The word haem is derived from Greek haima 'blood'. Function Hemoproteins have diverse biological functions including the transportation of diatomic gases, chemical catalysis, diatomic gas detection, and electron transfer. The heme iron serves as a source or sink of electrons during electron transfer or redox chemistry. In peroxidase reactions, the porphyrin molecule also serves as an electron source, being able to delocalize radical electrons in the conjugated ring. In the transportation or detection of diatomic gases, the gas binds to the heme iron. During the detection of diatomic gases, the binding of the gas ligand to the heme iron induces conformational changes in the surrounding protein. In general, diatomic gases bind only to the reduced heme, as ferrous Fe(II), while most peroxidases cycle between Fe(III) and Fe(IV), and hemeproteins involved in mitochondrial redox (oxidation-reduction) cycle between Fe(II) and Fe(III). It has been speculated that the original evolutionary function of hemoproteins was electron transfer in primitive sulfur-based photosynthesis pathways in ancestral cyanobacteria-like organisms before the appearance of molecular oxygen. Hemoproteins achieve their remarkable functional diversity by modifying the environment of the heme macrocycle within the protein matrix. For example, the ability of hemoglobin to effectively deliver oxygen to tissues is due to specific amino acid residues located near the heme molecule. Hemoglobin reversibly binds to oxygen in the lungs when the pH is high and the carbon dioxide concentration is low. When the situation is reversed (low pH and high carbon dioxide concentrations), hemoglobin will release oxygen into the tissues. This phenomenon, whereby hemoglobin's oxygen-binding affinity is inversely related to both acidity and carbon dioxide concentration, is known as the Bohr effect. 
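The Bohr effect just described can be illustrated with a simple Hill-equation model of oxygen saturation. The numbers used below (a p50 of about 26 mmHg at pH 7.4, a Hill coefficient of about 2.8, and the assumed p50 shifts at other pH values) are common textbook approximations, not figures taken from this article.

# Illustrative Hill-equation model of hemoglobin oxygen saturation and the Bohr shift.
# All parameter values are textbook approximations assumed for the sketch.
def saturation(po2_mmhg, p50_mmhg=26.0, hill=2.8):
    """Fractional O2 saturation of hemoglobin at a given O2 partial pressure."""
    return po2_mmhg**hill / (p50_mmhg**hill + po2_mmhg**hill)

# Bohr effect: lower pH (more CO2, e.g. in working muscle) shifts p50 to the right,
# so hemoglobin holds less O2 at the same partial pressure and releases it to the tissue.
for ph, p50 in [(7.6, 22.0), (7.4, 26.0), (7.2, 31.0)]:   # assumed p50 shifts with pH
    print(f"pH {ph}: saturation at 40 mmHg (tissue-like pO2) = {saturation(40.0, p50):.0%}")

Under these assumptions, saturation at a tissue-like oxygen pressure falls from roughly 84% to 67% as the pH drops from 7.6 to 7.2, which is the unloading behaviour the Bohr effect describes.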
The molecular mechanism behind this effect is the steric organization of the globin chain; a histidine residue, located adjacent to the heme group, becomes positively charged under acidic conditions (which are caused by dissolved CO2 in working muscles, etc.), releasing oxygen from the heme group. Types Major hemes There are several biologically important kinds of heme: the most common type is heme B; other important types include heme A and heme C. Isolated hemes are commonly designated by capital letters, while hemes bound to proteins are designated by lower case letters. Cytochrome a refers to heme A in specific combination with the membrane protein forming a portion of cytochrome c oxidase. Other hemes The following carbon numbering system of porphyrins is an older numbering used by biochemists and not the 1–24 numbering system recommended by IUPAC, which is shown in the table above. Heme l is the derivative of heme B that is covalently attached to the protein in lactoperoxidase, eosinophil peroxidase, and thyroid peroxidase. The addition of peroxide with the glutamyl-375 and aspartyl-225 of lactoperoxidase forms ester bonds between these amino acid residues and the heme 1- and 5-methyl groups, respectively. Similar ester bonds with these two methyl groups are thought to form in eosinophil and thyroid peroxidases. Heme l is one important characteristic of animal peroxidases; plant peroxidases incorporate heme B. Lactoperoxidase and eosinophil peroxidase are protective enzymes responsible for the destruction of invading bacteria and viruses. Thyroid peroxidase is the enzyme catalyzing the biosynthesis of the important thyroid hormones. Because lactoperoxidase destroys invading organisms in the lungs and excrement, it is thought to be an important protective enzyme. Heme m is the derivative of heme B covalently bound at the active site of myeloperoxidase. Heme m contains the two ester bonds at the heme 1- and 5-methyl groups also present in heme l of other mammalian peroxidases, such as lactoperoxidase and eosinophil peroxidase. In addition, a unique sulfonium ion linkage between the sulfur of a methionyl amino-acid residue and the heme 2-vinyl group is formed, giving this enzyme the capability of easily oxidizing chloride and bromide ions to hypochlorite and hypobromite. Myeloperoxidase is present in mammalian neutrophils and is responsible for the destruction of invading bacteria and viral agents. It perhaps synthesizes hypobromite by "mistake". Both hypochlorite and hypobromite are very reactive species responsible for the production of halogenated nucleosides, which are mutagenic compounds. Heme D is another derivative of heme B, but in which the propionic acid side chain at the carbon of position 6, which is also hydroxylated, forms a γ-spirolactone. Ring III is also hydroxylated at position 5, in a conformation trans to the new lactone group. Heme D is the site of oxygen reduction to water in many types of bacteria at low oxygen tension. Heme S is related to heme B by having a formyl group at position 2 in place of the 2-vinyl group. Heme S is found in the hemoglobin of a few species of marine worms. The correct structures of heme B and heme S were first elucidated by German chemist Hans Fischer. The names of cytochromes typically (but not always) reflect the kinds of hemes they contain: cytochrome a contains heme A, cytochrome c contains heme C, etc. This convention may have been first introduced with the publication of the structure of heme A. 
Use of capital letters to designate the type of heme The practice of designating hemes with upper case letters was formalized in a footnote in a paper by Puustinen and Wikstrom, which explains under which conditions a capital letter should be used: "we prefer the use of capital letters to describe the heme structure as isolated. Lowercase letters may then be freely used for cytochromes and enzymes, as well as to describe individual protein-bound heme groups (for example, cytochrome bc, and aa3 complexes, cytochrome b5, heme c1 of the bc1 complex, heme a3 of the aa3 complex, etc)." In other words, the chemical compound is designated with a capital letter, but specific instances in protein structures are designated with lowercase. Thus cytochrome oxidase, which has two A hemes (heme a and heme a3) in its structure, contains two moles of heme A per mole of protein. Cytochrome bc1, with hemes bH, bL, and c1, contains heme B and heme C in a 2:1 ratio. The practice seems to have originated in a paper by Caughey and York in which the product of a new isolation procedure for the heme of cytochrome aa3 was designated heme A to differentiate it from previous preparations: "Our product is not identical in all respects with the heme a obtained in solution by other workers by the reduction of the hemin a as isolated previously (2). For this reason, we shall designate our product heme A until the apparent differences can be rationalized." In a later paper, Caughey's group uses capital letters for isolated heme B and C as well as A. Synthesis The enzymatic process that produces heme is properly called porphyrin synthesis, as all the intermediates are tetrapyrroles that are chemically classified as porphyrins. The process is highly conserved across biology. In humans, this pathway serves almost exclusively to form heme. In bacteria, it also produces more complex substances such as cofactor F430 and cobalamin (vitamin B12). The pathway is initiated by the synthesis of δ-aminolevulinic acid (dALA or δALA) from the amino acid glycine and succinyl-CoA from the citric acid cycle (Krebs cycle). The rate-limiting enzyme responsible for this reaction, ALA synthase, is negatively regulated by glucose and heme concentration. Heme or hemin inhibits ALA synthase by decreasing the stability of its mRNA and by decreasing the uptake of the mRNA into the mitochondria. This mechanism is of therapeutic importance: infusion of heme arginate or hematin and glucose can abort attacks of acute intermittent porphyria in patients with an inborn error of metabolism of this process, by reducing transcription of ALA synthase. The organs mainly involved in heme synthesis are the liver (in which the rate of synthesis is highly variable, depending on the systemic heme pool) and the bone marrow (in which the rate of heme synthesis is relatively constant and depends on the production of globin chains), although every cell requires heme to function properly. However, due to heme's toxic properties, proteins such as hemopexin (Hx) are required to help maintain physiological stores of iron so that it can be used in synthesis. Heme is an intermediate in the catabolism of hemoglobin in the process of bilirubin metabolism. Defects in various enzymes in the synthesis of heme can lead to a group of disorders called porphyrias, which include acute intermittent porphyria, congenital erythropoietic porphyria, porphyria cutanea tarda, hereditary coproporphyria, variegate porphyria, and erythropoietic protoporphyria. 
Synthesis for food Impossible Foods, producers of plant-based meat substitutes, use an accelerated heme synthesis process involving soybean root leghemoglobin and yeast, adding the resulting heme to items such as meatless (vegan) Impossible burger patties. The DNA for leghemoglobin production was extracted from the soybean root nodules and expressed in yeast cells to overproduce heme for use in the meatless burgers. This process claims to create a meaty flavor in the resulting products. Degradation Degradation begins inside macrophages of the spleen, which remove old and damaged erythrocytes from the circulation. In the first step, heme is converted to biliverdin by the enzyme heme oxygenase (HO). NADPH is used as the reducing agent, molecular oxygen enters the reaction, carbon monoxide (CO) is produced and the iron is released from the molecule as the ferrous ion (Fe2+). CO acts as a cellular messenger and functions in vasodilation. In addition, heme degradation appears to be an evolutionarily-conserved response to oxidative stress. Briefly, when cells are exposed to free radicals, there is a rapid induction of the expression of the stress-responsive heme oxygenase-1 (HMOX1) isoenzyme that catabolizes heme (see below). The reason why cells must increase exponentially their capability to degrade heme in response to oxidative stress remains unclear but this appears to be part of a cytoprotective response that avoids the deleterious effects of free heme. When large amounts of free heme accumulates, the heme detoxification/degradation systems get overwhelmed, enabling heme to exert its damaging effects. In the second reaction, biliverdin is converted to bilirubin by biliverdin reductase (BVR): Bilirubin is transported into the liver by facilitated diffusion bound to a protein (serum albumin), where it is conjugated with glucuronic acid to become more water-soluble. The reaction is catalyzed by the enzyme UDP-glucuronosyltransferase. This form of bilirubin is excreted from the liver in bile. Excretion of bilirubin from liver to biliary canaliculi is an active, energy-dependent and rate-limiting process. The intestinal bacteria deconjugate bilirubin diglucuronide releasing free bilirubin, which can either be reabsorbed or reduced to urobilinogen by the bacterial enzyme bilirubin reductase. Some urobilinogen is absorbed by intestinal cells and transported into the kidneys and excreted with urine (urobilin, which is the product of oxidation of urobilinogen, and is responsible for the yellow colour of urine). The remainder travels down the digestive tract and is converted to stercobilinogen. This is oxidized to stercobilin, which is excreted and is responsible for the brown color of feces. In health and disease Under homeostasis, the reactivity of heme is controlled by its insertion into the "heme pockets" of hemoproteins. Under oxidative stress however, some hemoproteins, e.g. hemoglobin, can release their heme prosthetic groups. The non-protein-bound (free) heme produced in this manner becomes highly cytotoxic, most probably due to the iron atom contained within its protoporphyrin IX ring, which can act as a Fenton's reagent to catalyze in an unfettered manner the production of free radicals. It catalyzes the oxidation and aggregation of protein, the formation of cytotoxic lipid peroxide via lipid peroxidation and damages DNA through oxidative stress. Due to its lipophilic properties, it impairs lipid bilayers in organelles such as mitochondria and nuclei. 
These properties of free heme can sensitize a variety of cell types to undergo programmed cell death in response to pro-inflammatory agonists, a deleterious effect that plays an important role in the pathogenesis of certain inflammatory diseases such as malaria and sepsis. Cancer There is an association between high intake of heme iron sourced from meat and increased risk of colorectal cancer. The American Institute for Cancer Research (AICR) and World Cancer Research Fund International (WCRF) concluded in a 2018 report that there is limited but suggestive evidence that foods containing heme iron increase risk of colorectal cancer. A 2019 review found that heme iron intake is associated with increased breast cancer risk. Genes The following genes are part of the chemical pathway for making heme: ALAD: aminolevulinic acid, δ-, dehydratase (deficiency causes ala-dehydratase deficiency porphyria) ALAS1: aminolevulinate, δ-, synthase 1 ALAS2: aminolevulinate, δ-, synthase 2 (deficiency causes sideroblastic/hypochromic anemia) CPOX: coproporphyrinogen oxidase (deficiency causes hereditary coproporphyria) FECH: ferrochelatase (deficiency causes erythropoietic protoporphyria) HMBS: hydroxymethylbilane synthase (deficiency causes acute intermittent porphyria) PPOX: protoporphyrinogen oxidase (deficiency causes variegate porphyria) UROD: uroporphyrinogen decarboxylase (deficiency causes porphyria cutanea tarda) UROS: uroporphyrinogen III synthase (deficiency causes congenital erythropoietic porphyria) Notes and references Porphyrins Biomolecules Cofactors Iron(II) compounds Iron complexes
Heme
[ "Chemistry", "Biology" ]
3,703
[ "Natural products", "Biochemistry", "Organic compounds", "Biomolecules", "Molecular biology", "Structural biology", "Porphyrins" ]
97,528
https://en.wikipedia.org/wiki/Isoprene
Isoprene, or 2-methyl-1,3-butadiene, is a common volatile organic compound with the formula CH2=C(CH3)−CH=CH2. In its pure form it is a colorless volatile liquid. It is produced by many plants and animals (including humans) and its polymers are the main component of natural rubber. History and etymology C. G. Williams named the compound in 1860 after obtaining it from the pyrolysis of natural rubber. He correctly deduced the mass shares of carbon and hydrogen (but, because the modern atomic weight of carbon adopted at the Karlsruhe Congress was not yet in use, he arrived at an incorrect formula, C10H8). He did not specify the reasons for the name, but it is hypothesized that it came from "propylene", with which isoprene shares some physical and chemical properties. The recombination of isoprene into a rubber-like substance was first observed in 1879, and William A. Tilden identified its structure five years later. Natural occurrences Isoprene is produced and emitted by many species of trees (major producers are oaks, poplars, eucalyptus, and some legumes). Yearly production of isoprene emissions by vegetation is around 600 million metric tons, half from tropical broadleaf trees and the remainder primarily from shrubs. This is about equivalent to methane emissions and accounts for around one-third of all hydrocarbons released into the atmosphere. In deciduous forests, isoprene makes up approximately 80% of hydrocarbon emissions. While their contribution is small compared to trees, microscopic and macroscopic algae also produce isoprene. Plants Isoprene is made through the methyl-erythritol 4-phosphate pathway (MEP pathway, also called the non-mevalonate pathway) in the chloroplasts of plants. One of the two end-products of the MEP pathway, dimethylallyl pyrophosphate (DMAPP), is cleaved by the enzyme isoprene synthase to form isoprene and diphosphate. Therefore, inhibitors that block the MEP pathway, such as fosmidomycin, also block isoprene formation. Isoprene emission increases dramatically with temperature and maximizes at around 40 °C. This has led to the hypothesis that isoprene may protect plants against heat stress (thermotolerance hypothesis, see below). Emission of isoprene is also observed in some bacteria and this is thought to come from non-enzymatic degradation of DMAPP. Global emission of isoprene by plants is estimated at 350 million tons per year. Regulation Isoprene emission in plants is controlled both by the availability of the substrate (DMAPP) and by enzyme (isoprene synthase) activity. In particular, light, CO2 and O2 dependencies of isoprene emission are controlled by substrate availability, whereas temperature dependency of isoprene emission is regulated both by substrate level and enzyme activity. Human & other organisms Isoprene is the most abundant hydrocarbon measurable in the breath of humans. The estimated production rate of isoprene in the human body is 0.15 μmol/(kg·h), equivalent to approximately 17 mg/day for a person weighing 70 kg. Human breath isoprene originates from lipolytic cholesterol metabolism within skeletal muscle peroxisomes, and the IDI2 gene acts as the production determinant. Due to the absence of the IDI2 gene, animals such as pigs and bottlenose dolphins do not exhale isoprene. Isoprene is common in low concentrations in many foods. Many species of soil and marine bacteria, such as Actinomycetota, are capable of degrading isoprene and using it as a fuel source. 
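The equivalence between the per-kilogram production rate and the daily mass quoted above can be verified with a short unit conversion; the molar mass of isoprene (about 68.12 g/mol for C5H8) is standard chemistry but is not stated in this article.

# Check that 0.15 umol/(kg*h) of isoprene corresponds to roughly 17 mg/day for a 70 kg person.
# The molar mass of isoprene (~68.12 g/mol) is a standard value assumed here.
RATE_UMOL_PER_KG_H = 0.15
BODY_MASS_KG = 70.0
MOLAR_MASS_G_PER_MOL = 68.12

umol_per_day = RATE_UMOL_PER_KG_H * BODY_MASS_KG * 24          # ~252 umol/day
mg_per_day = umol_per_day * 1e-6 * MOLAR_MASS_G_PER_MOL * 1e3  # umol -> mol -> g -> mg
print(f"{mg_per_day:.1f} mg/day")   # ~17.2 mg/day, matching the figure quoted above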
In particular, isoprene has been shown to protect against moderate heat stress (around 40 °C). It may also protect plants against large fluctuations in leaf temperature. Isoprene is incorporated into and helps stabilize cell membranes in response to heat stress. Isoprene also confers resistance to reactive oxygen species. The amount of isoprene released from isoprene-emitting vegetation depends on leaf mass, leaf area, light (particularly photosynthetic photon flux density, or PPFD) and leaf temperature. Thus, during the night, little isoprene is emitted from tree leaves, whereas daytime emissions are expected to be substantial during hot and sunny days, up to 25 μg/(g dry-leaf-weight)/hour in many oak species. Isoprenoids The isoprene skeleton can be found in naturally occurring compounds called terpenes and terpenoids (oxygenated terpenes), collectively called isoprenoids. These compounds do not arise from isoprene itself. Instead, the precursor to isoprene units in biological systems is dimethylallyl pyrophosphate (DMAPP) and its isomer isopentenyl pyrophosphate (IPP). The plural 'isoprenes' is sometimes used to refer to terpenes in general. Examples of isoprenoids include carotene, phytol, retinol (vitamin A), tocopherol (vitamin E), dolichols, and squalene. Heme A has an isoprenoid tail, and lanosterol, the sterol precursor in animals, is derived from squalene and hence from isoprene. The functional isoprene units in biological systems are dimethylallyl pyrophosphate (DMAPP) and its isomer isopentenyl pyrophosphate (IPP), which are used in the biosynthesis of naturally occurring isoprenoids such as carotenoids, quinones, lanosterol derivatives (e.g. steroids) and the prenyl chains of certain compounds (e.g. phytol chain of chlorophyll). Isoprenes are used in the cell membrane monolayer of many Archaea, filling the space between the diglycerol tetraether head groups. This is thought to add structural resistance to harsh environments in which many Archaea are found. Similarly, natural rubber is composed of linear polyisoprene chains of very high molecular weight and other natural molecules. Industrial production Isoprene is most readily available industrially as a byproduct of the thermal cracking of petroleum naphtha or oil, as a side product in the production of ethylene. Where thermal cracking of oil is less common, isoprene can be produced by dehydrogenation of isopentane. Isoprene can be synthesized in two steps from isobutylene, starting with its ene reaction with formaldehyde to give isopentenol, which can be dehydrated to isoprene. Where cheap acetylene is produced from coal-derived calcium carbide, it may be combined with acetone to make 3-methylbutynol, which is then hydrogenated and dehydrated to isoprene. About 800,000 metric tons are produced annually. About 95% of isoprene production is used to produce cis-1,4-polyisoprene—a synthetic version of natural rubber. Natural rubber consists mainly of poly-cis-isoprene with a molecular mass of 100,000 to 1,000,000 g/mol. Typically natural rubber contains a few percent of other materials, such as proteins, fatty acids, resins, and inorganic materials. Some natural rubber sources, called gutta percha, are composed of trans-1,4-polyisoprene, a structural isomer that has similar, but not identical, properties. See also Natural rubber Neoprene References Further reading External links Report on Carcinogens, Fourteenth Edition; U.S. 
Department of Health and Human Services, Public Health Service, National Toxicology Program Science News article describing how isoprene released by plants is converted to light-scattering aerosols Alkadienes Hemiterpenes IARC Group 2B carcinogens Monomers Conjugated dienes Substances discovered in the 19th century
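The breath-production figure quoted above (0.15 μmol/(kg·h), about 17 mg/day for a 70 kg person) can be reproduced with a short unit-conversion check. This is a minimal sketch; the only value not taken from the text is the molar mass of isoprene (C5H8, about 68.12 g/mol).

```python
# Cross-check of the quoted breath-isoprene production rate.
RATE_UMOL_PER_KG_H = 0.15      # production rate from the text, micromol per kg per hour
BODY_MASS_KG = 70.0            # body mass from the text
MOLAR_MASS_G_PER_MOL = 68.12   # isoprene, C5H8 (assumed, not stated in the text)

mol_per_day = RATE_UMOL_PER_KG_H * 1e-6 * BODY_MASS_KG * 24   # mol of isoprene per day
mg_per_day = mol_per_day * MOLAR_MASS_G_PER_MOL * 1000        # grams -> milligrams
print(f"{mg_per_day:.1f} mg/day")   # ~17.2 mg/day, matching the figure in the text
```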
Isoprene
[ "Chemistry", "Materials_science" ]
1,729
[ "Monomers", "Polymer chemistry" ]
17,001,945
https://en.wikipedia.org/wiki/Soft%20water%20path
The concept of the soft path was first used for energy resource management and was developed by Amory Lovins shortly after the shock of the 1973 energy crisis in the United States. This concept has now been refined and applied to water, most notably by water experts Peter Gleick and David Brooks. The soft path is often framed as a more integrated and effective alternative to supply-side water resource management. Supply-side water management focuses on meeting demands for water through centralized, large-scale physical infrastructure, and centralized water management systems. In the 20th century, this approach focused on constructing bigger dams and drilling deeper wells to access more water to meet projected demands of consumers. More recently, a focus on demand-side management has emerged in regions where water supply is increasingly constrained (see, for example, Peak water), and it focuses on managing demand and making current practices more efficient. The soft path integrates both supply and demand concepts but in a broader context by recognizing that water is a means to satisfy demands for goods and services and asking how much water, of what qualities, is actually required to satisfy those demands efficiently and sustainably. Soft path water planning also requires broader institutional approaches to water management including the application of smart economics, the potential for distributed rather than centralized water systems, and more democratic participation in water policy decisions. Others have described the soft path as "unleashing the full potential of demand-side management." Publications The Soft Path for Water in a Nutshell (Pacific Institute). 2005. Oliver M Brandes and David B Brooks. Friends of the Earth and POLIS Project on Ecological Governance. University of Victoria, Victoria, BC. G. Wolff and P.H. Gleick, "The Soft Path for Water" in The World's Water 2002-2003 (Island Press, Washington D.C.), pp. 1-32. P.H. Gleick, 2003. Science, Volume 302, November 28, 2003, pp. 1524-1528. P.H. Gleick, 2002. Nature, Volume 418, pg. 373, July 25, 2002. Manitoba Water Soft Path 2006. The Energy Controversy: Soft Path Questions and Answers (1979) A New Path to Water Sustainability for the Town of Oliver, BC - Soft Path Case Study by Oliver M Brandes, Tony Maas, Adam Mjolsness, Ellen Reynolds. Uvic Printers. Feb 2008. See also Soft energy path Backcasting Ecological governance References Water supply Water and the environment
Soft water path
[ "Chemistry", "Engineering", "Environmental_science" ]
511
[ "Hydrology", "Water supply", "Environmental engineering" ]
17,002,524
https://en.wikipedia.org/wiki/Temperature-responsive%20polymer
Temperature-responsive polymers or thermoresponsive polymers are polymers that exhibit drastic and discontinuous changes in their physical properties with temperature. The term is commonly used when the property concerned is solubility in a given solvent, but it may also be used when other properties are affected. Thermoresponsive polymers belong to the class of stimuli-responsive materials, in contrast to temperature-sensitive (for short, thermosensitive) materials, which change their properties continuously with environmental conditions. In a stricter sense, thermoresponsive polymers display a miscibility gap in their temperature-composition diagram. Depending on whether the miscibility gap is found at high or low temperatures, either an upper critical solution temperature (UCST) or a lower critical solution temperature (LCST) exists. Research mainly focuses on polymers that show thermoresponsivity in aqueous solution. Promising areas of application are tissue engineering, liquid chromatography, drug delivery and bioseparation. Only a few commercial applications exist, for example, cell culture plates coated with an LCST-polymer. History The theory of thermoresponsive polymers (and, similarly, microgels) begins in the 1940s with work from Flory and Huggins, who both independently produced similar theoretical expectations for polymers in solution with varying temperature. The effects of external stimuli on particular polymers were investigated in the 1960s by Heskins and Guillet. They established 32 °C as the lower critical solution temperature (LCST) for poly(N-isopropylacrylamide). Coil-globule transition Thermoresponsive polymer chains in solution adopt an expanded coil conformation. At the phase separation temperature they collapse to form compact globules. This process can be observed directly by methods of static and dynamic light scattering, and indirectly through the accompanying drop in viscosity. When mechanisms which reduce surface tension are absent, the globules aggregate, subsequently causing turbidity and the formation of visible particles. Phase diagrams of thermoresponsive polymers The phase separation temperature (and hence, the cloud point) is dependent on polymer concentration. Therefore, temperature-composition diagrams are used to display thermoresponsive behavior over a wide range of concentrations. Phases separate into a polymer-poor and a polymer-rich phase. In strictly binary mixtures the composition of the coexisting phases can be determined by drawing tie-lines. However, since polymers display a molar mass distribution this straightforward approach may be insufficient. During the process of phase separation the polymer-rich phase can vitrify before equilibrium is reached. This depends on the glass transition temperature for each individual composition. It is convenient to add the glass transition curve to the phase diagram, although it is not a true equilibrium. The intersection of the glass transition curve with the cloud point curve is called the Berghmans point. In the case of UCST polymers, above the Berghmans point the phases separate into two liquid phases, below this point into a liquid polymer-poor phase and a vitrified polymer-rich phase. For LCST polymers the inverse behavior is observed. Thermodynamics Polymers dissolve in a solvent when the Gibbs energy of the system decreases, i.e., the change of Gibbs energy (ΔG) is negative. 
From the known Legendre transformation of the Gibbs–Helmholtz equation it follows that ΔG is determined by the enthalpy of mixing (ΔH) and the entropy of mixing (ΔS) through ΔG = ΔH − T·ΔS. Without interactions between the compounds there would be no enthalpy of mixing and the entropy of mixing would be ideal. The ideal entropy of mixing of multiple pure compounds is always positive (the term -T∙ΔS is negative) and ΔG would be negative for all compositions, causing complete miscibility. Therefore, the fact that miscibility gaps are observed can only be explained by interaction. In the case of polymer solutions, polymer-polymer, solvent-solvent and polymer-solvent interactions have to be taken into account. A model for the phenomenological description of polymer phase diagrams was developed by Flory and Huggins (see Flory–Huggins solution theory). The resulting equation for the change of Gibbs energy consists of a term for the entropy of mixing for polymers and an interaction parameter that describes the sum of all interactions: ΔGmix = RT·[(φ1/m1)·ln φ1 + (φ2/m2)·ln φ2 + χ·φ1·φ2] (per mole of lattice sites), where R = universal gas constant, m = number of occupied lattice sites per molecule (for polymer solutions m1 is approximately equal to the degree of polymerization and m2 = 1), φ = volume fraction of the polymer and the solvent, respectively, and χ = interaction parameter. A consequence of the Flory-Huggins theory is, for instance, that the UCST (if it exists) increases and shifts into the solvent-rich region when the molar mass of the polymer increases. Whether a polymer shows LCST and/or UCST behavior can be derived from the temperature-dependence of the interaction parameter (see figure). The interaction parameter not only comprises enthalpic contributions but also the non-ideal entropy of mixing, which again consists of many individual contributions (e.g., the strong hydrophobic effect in aqueous solutions). For these reasons, classical Flory-Huggins theory cannot provide much insight into the molecular origin of miscibility gaps. Applications Bioseparation Thermoresponsive polymers can be functionalized with moieties that bind to specific biomolecules. The polymer-biomolecule conjugate can be precipitated from solution by a small change of temperature. Isolation may be achieved by filtration or centrifugation. Thermoresponsive surfaces Tissue engineering For some polymers it was demonstrated that thermoresponsive behavior can be transferred to surfaces. The surface is either coated with a polymer film or the polymer chains are bound covalently to the surface. This provides a way to control the wetting properties of a surface by small temperature changes. The described behavior can be exploited in tissue engineering since the adhesion of cells is strongly dependent on the hydrophilicity/hydrophobicity. This way, it is possible to detach cells from a cell culture dish by only small changes in temperature, without the need to additionally use enzymes (see figure). Respective commercial products are already available. Chromatography Thermoresponsive polymers can be used as the stationary phase in liquid chromatography. Here, the polarity of the stationary phase can be varied by temperature changes, altering the power of separation without changing the column or solvent composition. Thermally related benefits of gas chromatography can now be applied to classes of compounds that are restricted to liquid chromatography due to their thermolability. In place of solvent gradient elution, thermoresponsive polymers allow the use of temperature gradients under purely aqueous isocratic conditions. 
The versatility of the system is controlled not only by changing temperature, but also by adding modifying moieties that allow for a choice of enhanced hydrophobic interaction, or by introducing the prospect of electrostatic interaction. These developments have already brought major improvements to the fields of hydrophobic interaction chromatography, size exclusion chromatography, ion exchange chromatography, and affinity chromatography separations, as well as pseudo-solid phase extractions ("pseudo" because of phase transitions). Thermoresponsive gels Covalently linked gels Three-dimensional covalently linked polymer networks are insoluble in all solvents; they merely swell in good solvents. Thermoresponsive polymer gels show a discontinuous change of the degree of swelling with temperature. At the volume phase transition temperature (VPTT) the degree of swelling changes drastically. Researchers try to exploit this behavior for temperature-induced drug delivery. In the swollen state, previously incorporated drugs are released easily by diffusion. More sophisticated "catch and release" techniques have been elaborated in combination with lithography and molecular imprinting. Physical gels In physical gels, unlike covalently linked gels, the polymer chains are not covalently linked together. That means that the gel could re-dissolve in a good solvent under some conditions. Thermoresponsive physical gels, also sometimes called thermoresponsive injectable gels, have been used in tissue engineering. This involves mixing the thermoresponsive polymer solution with the cells at room temperature and then injecting the solution into the body. Due to the temperature increase (to body temperature) the polymer forms a physical gel. Within this physical gel the cells are encapsulated. Tailoring the temperature at which the polymer solution gels can be challenging, because it depends on many factors, such as the polymer composition, architecture and molar mass. Thermoreversible materials Some thermoreversible gels are used in biomedicine. For instance, hydrogels made of proteins are used as scaffolds in knee replacement. In baking, thermoreversible glazes such as pectin are prized for their ability to set and then reset after melting, and are used in nappage and other processes to ensure a smooth final surface for a presented dish. In manufacturing, thermoplastic elastomers can be set into a shape and then reset to their original shape through thermal reversibility, unlike one-way thermoset elastomers. Characterization of thermoresponsive polymer solutions Cloud point Experimentally, the phase separation can be followed by turbidimetry. There is no universal approach for determining the cloud point suitable for all systems. It is often defined as the temperature at the onset of cloudiness, the temperature at the inflection point of the transmittance curve, or the temperature at a defined transmittance (e.g., 50%). The cloud point can be affected by many structural parameters of the polymer, such as the hydrophobic content, architecture and even the molar mass. Hysteresis The cloud points upon cooling and heating of a thermoresponsive polymer solution do not coincide because the process of equilibration takes time. The temperature interval between the cloud points upon cooling and heating is called hysteresis. The cloud points are dependent on the cooling and heating rates, and hysteresis decreases with lower rates. 
There are indications that hysteresis is influenced by the temperature, viscosity, glass transition temperature and the ability to form additional intra- and inter-molecular hydrogen bonds in the phase separated state. Other properties Another important property for potential applications is the extent of phase separation, represented by the difference in polymer content in the two phases after phase separation. For most applications, phase separation into pure polymer and pure solvent would be desirable although it is practically impossible. The extent of phase separation in a given temperature interval depends on the particular polymer-solvent phase diagram. Example: From the phase diagram of polystyrene (molar mass 43,600 g/mol) in the solvent cyclohexane it follows that at a total polymer concentration of 10%, cooling from 25 to 20 °C causes phase separation into a polymer-poor phase with 1% polymer and a polymer-rich phase with 30% polymer content. Also desirable for many applications is a sharp phase transition, which is reflected by a sudden drop in transmittance. The sharpness of the phase transition is related to the extent of phase separation but additionally relies on whether all present polymer chains exhibit the same cloud point. This depends on the polymer endgroups, dispersity, or—in the case of copolymers—varying copolymer compositions. As a result of phase separation, thermoresponsive polymer systems can form well-defined self-assembled nanostructures with a number of different practical applications, such as drug and gene delivery, tissue engineering, etc. In order to establish the required properties for applications, a rigorous characterization of the phase separation phenomenon can be carried out by different spectroscopic and calorimetric methods, including nuclear magnetic resonance (NMR), dynamic light scattering (DLS), small-angle X-ray scattering (SAXS), infrared spectroscopy (IR), Raman spectroscopy, and differential scanning calorimetry (DSC). Examples of thermoresponsive polymers Thermoresponsivity in organic solvents Due to the low entropy of mixing, miscibility gaps are often observed for polymer solutions. Many polymers are known that show UCST or LCST behavior in organic solvents. Examples of organic polymer solutions with UCST are polystyrene in cyclohexane, polyethylene in diphenylether or polymethylmethacrylate in acetonitrile. An LCST is observed for, e.g., polypropylene in n-hexane, polystyrene in butylacetate or polymethylmethacrylate in 2-propanone. Thermoresponsivity in water Polymer solutions that show thermoresponsivity in water are especially important since water as a solvent is cheap, safe and biologically relevant. Current research efforts focus on water-based applications like drug delivery systems, tissue engineering, and bioseparation (see the section Applications). Numerous polymers with LCST in water are known. The most studied polymer is poly(N-isopropylacrylamide). Further examples are poly[2-(dimethylamino)ethyl methacrylate] (pDMAEMA), hydroxypropylcellulose, poly-2-isopropyl-2-oxazoline and polyvinyl methyl ether. Some industrially relevant polymers show LCST as well as UCST behavior, although the UCST is found outside the 0-to-100 °C region and can only be observed under extreme experimental conditions. Examples are polyethylene oxide, polyvinylmethylether and polyhydroxyethylmethacrylate. There are also polymers that exhibit UCST behavior between 0 and 100 °C. 
However, there are large differences concerning the ionic strength at which UCST behavior is detected. Some zwitterionic polymers show UCST behavior in pure water and also in salt-containing water or even at higher salt concentration. By contrast, polyacrylic acid displays UCST behavior solely at high ionic strength. Examples of polymers that show UCST behavior in pure water as well as under physiological conditions are poly(N-acryloylglycinamide), ureido-functionalized polymers, copolymers from N-vinylimidazole and 1-vinyl-2-(hydroxylmethyl)imidazole or copolymers from acrylamide and acrylonitrile. Polymers for which UCST relies on non-ionic interactions are very sensitive to ionic contamination. Small amounts of ionic groups may suppress phase separation in pure water. The UCST is dependent on the molecular mass of the polymer. For the LCST this is not necessarily the case, as shown for poly(N-isopropylacrylamide). Schizophrenic behavior of UCST-LCST diblock copolymers A more complex scenario can be found in the case of diblock copolymers that feature two orthogonally thermo-responsive blocks, i.e., a UCST-type and an LCST-type block. By applying a temperature stimulus, the individual polymer blocks show different phase transitions, e.g., on increasing the temperature, the UCST-type block features an insoluble-soluble transition, while the LCST-type block undergoes a soluble-insoluble transition. The order of the individual phase transitions depends on the relative positions of the UCST and LCST. Thus, upon temperature change the roles of the soluble and insoluble polymer blocks are reversed and this structural inversion is typically called ‘schizophrenic’ in the literature. Besides the fundamental interest in the mechanism of this behavior, such block copolymers have been proposed for application in smart emulsification, drug delivery, and rheology control. Schizophrenic diblock copolymers have also been applied as thin films for potential use as sensors, smart coatings or nanoswitches, and soft robotics. References Polymer material properties Smart materials Temperature
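The Flory–Huggins expression given in the Thermodynamics section above can be evaluated numerically to see how a miscibility gap appears once the interaction parameter is large enough. The sketch below is illustrative only: the chain length m1 and the χ values are assumptions, not values from the article.

```python
import numpy as np

# Flory-Huggins free energy of mixing per mole of lattice sites, in units of RT:
# dG = (phi1/m1) ln(phi1) + (phi2/m2) ln(phi2) + chi * phi1 * phi2   (see Thermodynamics above)
m1, m2 = 1000.0, 1.0                        # lattice sites per polymer / solvent molecule (assumed)
phi1 = np.linspace(1e-4, 1 - 1e-4, 2001)    # polymer volume fraction
phi2 = 1.0 - phi1

for chi in (0.3, 0.55, 0.8):                # illustrative interaction parameters
    g = (phi1 / m1) * np.log(phi1) + (phi2 / m2) * np.log(phi2) + chi * phi1 * phi2
    # Inside the spinodal the free-energy curve is concave (negative second derivative),
    # which signals local instability and hence a miscibility gap.
    curvature = np.gradient(np.gradient(g, phi1), phi1)
    unstable = phi1[curvature < 0]
    if unstable.size:
        print(f"chi = {chi}: unstable for phi1 roughly {unstable.min():.3f} to {unstable.max():.3f}")
    else:
        print(f"chi = {chi}: miscible at all compositions")
```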
Temperature-responsive polymer
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,377
[ "Scalar physical quantities", "Temperature", "Thermodynamic properties", "Physical quantities", "SI base quantities", "Intensive quantities", "Materials science", "Polymer material properties", "Smart materials", "Thermodynamics", "Polymer chemistry", "Wikipedia categories named after physical...
17,003,295
https://en.wikipedia.org/wiki/Ray%20tracing%20%28physics%29
In physics, ray tracing is a method for calculating the path of waves or particles through a system with regions of varying propagation velocity, absorption characteristics, and reflecting surfaces. Under these circumstances, wavefronts may bend, change direction, or reflect off surfaces, complicating analysis. Historically, ray tracing involved analytic solutions to the ray's trajectories. In modern applied physics and engineering physics, the term also encompasses numerical solutions to the Eikonal equation. For example, ray-marching involves repeatedly advancing idealized narrow beams called rays through the medium by discrete amounts. Simple problems can be analyzed by propagating a few rays using simple mathematics. More detailed analysis can be performed by using a computer to propagate many rays. When applied to problems of electromagnetic radiation, ray tracing often relies on approximate solutions to Maxwell's equations such as geometric optics, that are valid as long as the light waves propagate through and around objects whose dimensions are much greater than the light's wavelength. Ray theory can describe interference by accumulating the phase during ray tracing (e.g., complex-valued Fresnel coefficients and Jones calculus). It can also be extended to describe edge diffraction, with modifications such as the geometric theory of diffraction, which enables tracing diffracted rays. More complicated phenomena require methods such as physical optics or wave theory. Technique Ray tracing works by assuming that the particle or wave can be modeled as a large number of very narrow beams (rays), and that there exists some distance, possibly very small, over which such a ray is locally straight. The ray tracer will advance the ray over this distance, and then use a local derivative of the medium to calculate the ray's new direction. From this location, a new ray is sent out and the process is repeated until a complete path is generated. If the simulation includes solid objects, the ray may be tested for intersection with them at each step, making adjustments to the ray's direction if a collision is found. Other properties of the ray may be altered as the simulation advances as well, such as intensity, wavelength, or polarization. This process is repeated with as many rays as are necessary to understand the behavior of the system. Uses Astronomy Ray tracing is being increasingly used in astronomy to simulate realistic images of the sky. Unlike conventional simulations, ray tracing does not use the expected or calculated point spread function (PSF) of a telescope and instead traces the journey of each photon from entrance in the upper atmosphere to collision with the detector. Most of the dispersion and distortion, arising mainly from atmosphere, optics and detector are taken into account. While this method of simulating images is inherently slow, advances in CPU and GPU capabilities has somewhat mitigated this problem. It can also be used in designing telescopes. Notable examples include Large Synoptic Survey Telescope where this kind of ray tracing was first used with PhoSim to create simulated images. Radio signals One particular form of ray tracing is radio signal ray tracing, which traces radio signals, modeled as rays, through the ionosphere where they are refracted and/or reflected back to the Earth. 
This form of ray tracing involves the integration of differential equations that describe the propagation of electromagnetic waves through dispersive and anisotropic media such as the ionosphere. An example of physics-based radio signal ray tracing is shown to the right. Radio communicators use ray tracing to help determine the precise behavior of radio signals as they propagate through the ionosphere. The image at the right illustrates the complexity of the situation. Unlike optical ray tracing where the medium between objects typically has a constant refractive index, signal ray tracing must deal with the complexities of a spatially varying refractive index, where changes in ionospheric electron densities influence the refractive index and hence, ray trajectories. Two sets of signals are broadcast at two different elevation angles. When the main signal penetrates into the ionosphere, the magnetic field splits the signal into two component waves which are separately ray traced through the ionosphere. The ordinary wave (red) component follows a path completely independent of the extraordinary wave (green) component. Ocean acoustics Sound velocity in the ocean varies with depth due to changes in density and temperature, reaching a local minimum near a depth of 800–1000 meters. This local minimum, called the SOFAR channel, acts as a waveguide, as sound tends to bend towards it. Ray tracing may be used to calculate the path of sound through the ocean up to very large distances, incorporating the effects of the SOFAR channel, as well as reflections and refractions off the ocean surface and bottom. From this, locations of high and low signal intensity may be computed, which are useful in the fields of ocean acoustics, underwater acoustic communication, and acoustic thermometry. Optical design Ray tracing may be used in the design of lenses and optical systems, such as in cameras, microscopes, telescopes, and binoculars, and its application in this field dates back to the 1900s. Geometric ray tracing is used to describe the propagation of light rays through a lens system or optical instrument, allowing the image-forming properties of the system to be modeled. The following effects can be integrated into a ray tracer in a straightforward fashion: Dispersion leads to chromatic aberration Polarization Crystal optics Fresnel equations Laser light effects Thin film interference (optical coating, soap bubble) can be used to calculate the reflectivity of a surface. For the application of lens design, two special cases of wave interference are important to account for. In a focal point, rays from a point light source meet again and may constructively or destructively interfere with each other. Within a very small region near this point, incoming light may be approximated by plane waves which inherit their direction from the rays. The optical path length from the light source is used to compute the phase. The derivative of the position of the ray in the focal region on the source position is used to obtain the width of the ray, and from that the amplitude of the plane wave. The result is the point spread function, whose Fourier transform is the optical transfer function. From this, the Strehl ratio can also be calculated. The other special case to consider is that of the interference of wavefronts, which are approximated as planes. However, when the rays come close together or even cross, the wavefront approximation breaks down. 
Interference of spherical waves is usually not combined with ray tracing, thus diffraction at an aperture cannot be calculated. However, these limitations can be resolved by an advanced modeling technique called field tracing. Field tracing is a modelling technique that combines geometric optics with physical optics, making it possible to overcome the limitations of interference and diffraction in optical design. Ray tracing techniques are used to optimize the design of the instrument by minimizing aberrations, for photography, and for longer wavelength applications such as designing microwave or even radio systems, and for shorter wavelengths, such as ultraviolet and X-ray optics. Before the advent of the computer, ray tracing calculations were performed by hand using trigonometry and logarithmic tables. The optical formulas of many classic photographic lenses were optimized by roomfuls of people, each of whom handled a small part of the large calculation. Now they are worked out in optical design software. A simple version of ray tracing known as ray transfer matrix analysis is often used in the design of optical resonators used in lasers. The basic principles of the most frequently used algorithm can be found in Spencer and Murty's fundamental paper "General Ray-Tracing Procedure". Focal-plane ray tracing There is a ray tracing technique called focal-plane ray tracing, in which the direction of an optical ray after a lens is determined from the lens focal plane and the point at which the ray crosses that plane. This method utilizes the fact that rays from a point on the front focal plane of a positive lens will be parallel right after the lens, and rays toward a point on the back or rear focal plane of a negative lens will also be parallel after the lens. In each case, the direction of the parallel rays after the lens is determined by a ray appearing to cross the lens nodal points (or the lens center for a thin lens). Seismology In seismology, geophysicists use ray tracing to aid in earthquake location and tomographic reconstruction of the Earth's interior. Seismic wave velocity varies within and beneath Earth's crust, causing these waves to bend and reflect. Ray tracing may be used to compute paths through a geophysical model, following them back to their source, such as an earthquake, or deducing the properties of the intervening material. In particular, the discovery of the seismic shadow zone (illustrated at right) allowed scientists to deduce the presence of Earth's molten core. General relativity In general relativity, where gravitational lensing can occur, the geodesics of the light rays arriving at the observer are integrated backwards in time until they hit the region of interest. Image synthesis under this technique can be viewed as an extension of the usual ray tracing in computer graphics. An example of such synthesis is found in the 2014 film Interstellar. Laser Plasma Interactions In laser-plasma physics, ray tracing can be used to simplify the calculations of laser propagation inside a plasma. Analytic solutions for ray trajectories in simple plasma density profiles are well established; however, researchers in laser-plasma physics often rely on ray-marching techniques due to the complexity of the plasma density, temperature, and flow profiles, which are often solved for using computational fluid dynamics simulations. 
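The stepwise procedure described under Technique above (advance the ray a short distance, then use a local derivative of the medium to update its direction) can be illustrated with a small ray-marching sketch. The Gaussian refractive-index bump below is purely illustrative and not taken from the article; the update rule is a simple Euler integration of the ray equation d/ds(n·dr/ds) = ∇n.

```python
import numpy as np

def n(pos):
    """Illustrative refractive-index field: a smooth 'lens-like' bump centred at (5, 0)."""
    x, y = pos
    return 1.0 + 0.3 * np.exp(-((x - 5.0) ** 2 + y ** 2) / 4.0)

def grad_n(pos, h=1e-5):
    """Numerical gradient of the index field."""
    x, y = pos
    return np.array([(n((x + h, y)) - n((x - h, y))) / (2 * h),
                     (n((x, y + h)) - n((x, y - h))) / (2 * h)])

def march(start, direction, ds=0.01, steps=2000):
    """Advance the ray in small steps, bending it according to the local index gradient."""
    r = np.array(start, dtype=float)
    t = n(r) * np.array(direction, dtype=float) / np.linalg.norm(direction)  # t = n * dr/ds
    path = [r.copy()]
    for _ in range(steps):
        r = r + ds * t / n(r)     # move along the current direction
        t = t + ds * grad_n(r)    # bend toward regions of higher refractive index
        path.append(r.copy())
    return np.array(path)

path = march(start=(0.0, 1.0), direction=(1.0, 0.0))
print("ray ends near", path[-1].round(2))   # the ray is deflected as it passes the index bump
```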
See also Atmospheric optics ray-tracing codes Atmospheric refraction Gradient-index optics List of ray tracing software Ocean acoustic tomography Ray tracing (graphics) Ray transfer matrix analysis References Computational physics Geometrical optics
Ray tracing (physics)
[ "Physics" ]
2,005
[ "Computational physics" ]
17,006,174
https://en.wikipedia.org/wiki/Aberdeen%20chronograph
The Aberdeen chronograph was the first portable gun chronograph, an instrument for measuring the muzzle velocity and striking power of a projectile fired by a gun. It was invented in 1918 by Alfred Lee Loomis at the U.S. Army's Aberdeen Proving Ground. The method prevalent at the time was the Boulengé chronograph, which relied on the projectile passing through two wire screens. Breaking the first screen would release a rod held by electromagnets. While the rod was free-falling, breaking the second screen would activate a knife that marked the rod. Loomis' chronograph had a drum rotating at constant speed with a tape spooled inside. The projectile would pass through two screens, breaking the insulation between metal plates and creating a short circuit. This created a spark that left two visible marks on the tape and measuring the distance between these marks would give the speed of the projectile. This method made it easier to measure the speed of larger shells and aircraft catapults. Loomis was issued a patent in 1921 for his chronograph. References Ballistics American inventions
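The Boulengé-style timing described above lends itself to a short worked example: the rod falls freely between the two screen breaks, so the fall distance encodes the elapsed time, and the screen spacing divided by that time gives the projectile's average speed. All numbers below are illustrative assumptions, not measurements from the article.

```python
import math

g = 9.81                 # m/s^2
fall_distance = 0.019    # m, how far the rod fell before the knife marked it (assumed)
screen_spacing = 50.0    # m, distance between the two wire screens (assumed)

elapsed = math.sqrt(2 * fall_distance / g)   # free-fall time between the two screen breaks
velocity = screen_spacing / elapsed          # average projectile velocity over that span
print(f"t = {elapsed * 1000:.1f} ms, v = {velocity:.0f} m/s")   # ~62 ms, ~800 m/s
```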
Aberdeen chronograph
[ "Physics" ]
233
[ "Applied and interdisciplinary physics", "Ballistics" ]
17,007,606
https://en.wikipedia.org/wiki/Seawater%20greenhouse
A seawater greenhouse is a greenhouse structure that enables the growth of crops and the production of fresh water in arid regions. Arid regions constitute about one third of the Earth's land area. Seawater greenhouse technology aims to mitigate issues such as global water scarcity, peak water, and soil salinization. The system uses seawater and solar energy, and has a similar structure to the pad-and-fan greenhouse, but with additional evaporators and condensers. The seawater is pumped into the greenhouse to create a cool and humid environment, the optimal conditions for the cultivation of temperate crops. The fresh water is produced by condensation, following the solar desalination principle, which removes salt and impurities. Finally, the remaining humidified air is expelled from the greenhouse and used to improve growing conditions for outdoor plants. Projects The Seawater Greenhouse Ltd The seawater greenhouse concept was first researched and developed in 1991 by Charlie Paton's company Light Works Ltd, which is now known as the Seawater Greenhouse Ltd. Charlie Paton and Philip Davies worked on the first pilot project, which commenced in 1992 on the Canary Island of Tenerife. A prototype seawater greenhouse was assembled in the UK and constructed on the site in Tenerife covering an area of 360 m2. The temperate crops successfully cultivated included tomatoes, spinach, dwarf peas, peppers, artichokes, French beans, and lettuce. The second pilot design was installed in 2000 on the coast of Al-Aryam Island, Abu Dhabi, United Arab Emirates. The design is a light steel structure, similar to a multi-span polytunnel, which relies purely on solar energy. A pipe array is installed to improve the design of the greenhouse by decreasing the temperature and increasing the freshwater production. The greenhouse has an area of 864 m2 and has a daily water production of 1 m3, which nearly meets the crop's irrigation demand. The third pilot seawater greenhouse, which is 864 m2, is near Muscat in Oman and produces 0.3 to 0.6 m3 of freshwater per day. This project was created in collaboration with Sultan Qaboos University. It provides an opportunity to develop a sustainable horticultural sector on the Batinah coast. These projects have enabled the validation of a thermodynamic simulation model which, given appropriate meteorological data, accurately predicts and quantifies how the seawater greenhouse will perform in other parts of the world. The fourth project is the commercial installation in Port Augusta, Australia, installed in 2010. It is currently a 20 hectare seawater greenhouse owned and run by Sundrop Farms, which has developed it further. The fifth design was constructed in 2017 in Berbera, Somaliland. The design was simplified and made inexpensive using advanced greenhouse modeling techniques. This design includes a shading system which retains core evaporative cooling elements. Sahara Forest Project The Sahara Forest Project (SFP) combines seawater greenhouse technology with concentrated solar power and has constructed pilot projects in Jordan and Qatar. The seawater greenhouse evaporates 50 m3 of seawater and harvests 5 m3 of fresh water per hectare per day. The PV panels have a production capacity of 39 kW on the 3-hectare site, which includes a 1350 m2 growing area. The greenhouses are 15 degrees cooler than the outside temperature, which enables the production of up to 130,000 kg of vegetables per year and up to 20,000 liters of fresh water per day. 
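The freshwater yields quoted above can be put in rough perspective with a simple humidification–dehumidification balance: the condensate is the difference between the water-vapour content of the warm humid air entering the condenser and that of the cooled air leaving it (the cycle itself is described in the Process section below). The Magnus approximation for saturation vapour pressure and all numbers in this sketch are illustrative assumptions, not values from the article.

```python
import math

R_V = 461.5  # specific gas constant of water vapour, J/(kg K)

def sat_vapour_density(t_celsius):
    """Saturation water-vapour density (kg/m^3) from the Magnus approximation."""
    e_s = 611.2 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))  # Pa
    return e_s / (R_V * (t_celsius + 273.15))

t_hot, t_cold = 35.0, 20.0   # deg C: saturated air before / after the condenser (assumed)
airflow = 10.0               # m^3/s of air drawn through the greenhouse (assumed)

yield_kg_per_s = airflow * (sat_vapour_density(t_hot) - sat_vapour_density(t_cold))
print(f"{yield_kg_per_s * 86400 / 1000:.1f} m^3 of fresh water per day")
# ~19 m^3/day for these assumed conditions, the same order of magnitude as the figures above
```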
Additionally, the project includes revegetation by soil reclamation of nitrogen-fixing and salt-removing desert plants by repurposed waste products from agriculture and saltwater evaporation. Process A seawater greenhouse uses the surrounding environment to grow temperate crops and produce freshwater. A conventional greenhouse uses solar heat to create a warmer environment to allow adequate growing temperature, whereas the seawater greenhouse does the opposite by creating a cooler environment. The roof traps infrared heat, while allowing visible light through to promote photosynthesis. The design for cooling the microclimate primarily consists of humidification and dehumidification (HD) desalination process or multiple-effect humidification. A simple seawater greenhouse consists of two evaporative coolers (evaporators), a condenser, fans, seawater and distilled water pipes and crops in between the two evaporators. This is shown in schematic figures 1 and 2. The process recreates the natural hydrological cycle within a controlled environment of the greenhouse by evaporating water from saline water source and regains it as freshwater by condensation. The first part of the system uses seawater, an evaporator, and a condenser. The front wall of the greenhouse consists of a seawater-wetted evaporator which faces the prevailing wind. These are mostly constituted of corrugated cardboard shown in Figure 3. If the wind is not prevalent enough, fans blow the outside air through the evaporator into the greenhouse. The ambient warm air exchanges the heat with the seawater which cools it down and gets it humidified. The cool and humid air creates an adequate growing environment for the crops. The remaining evaporatively-cooled seawater is collected and pumped to the condenser as a coolant. The second part of the system has another evaporator. The seawater flows from the first evaporator which preheats it and thereafter flows through the solar thermal collector on the roof to heat it up sufficiently before it flows to the second evaporator. The seawater, or coolant, flows through a circuit consisting of the evaporators, solar heating pipe, and condenser with an intake of seawater and an output of fresh water. The fresh water is produced by hot and relatively high humidity air which can produce sufficient distilled water for irrigation. The volume of fresh water is determined by air temperature, relative humidity, solar radiation and the airflow rate. These conditions can be modeled with appropriate meteorological data, enabling the design and process to be optimized for any suitable location. Applicability The technique is applicable to sites in arid regions near the sea. The distance and elevation from the sea must be evaluated considering the energy required to pump water to the site. There are numerous suitable locations on the coasts; others are below sea level, such as the Dead Sea and the Qattara Depression, where hydro schemes have been proposed to exploit the hydraulic pressure to generate power, e.g., Red Sea–Dead Sea Canal. Studies In 1996, Paton and Davies used the Simulink toolkit under MATLAB to model forced ventilation of the greenhouse in Tenerife, Cape Verde, Namibia, and Oman. The greenhouse is assisted by the prevailing wind, evaporative cooling, transpiration, solar heating, heat transfer through the walls and roof, and condensation which is analyzed in the study. 
They found that the amount of water required by the plants is reduced by 80% and 2.6-6.4 kWh of electrical energy is needed per m3 of fresh water produced. In 2005, Paton and Davies evaluated design options with thermal modeling using the United Arab Emirates model as a baseline. They studied three options: perforated screen, C-shaped air path, and pipe array, to find a better seawater circuit to cool the environment and produce the most freshwater. The study found that a pipe array gave the best results: an air temperature decrease of 1 °C, a mean radiant temperature decrease of 7.5 °C, and a freshwater production increase of 63%. This can be implemented to improve seawater greenhouses in hot arid regions such as the second pilot design in the United Arab Emirates. In 2018, Paton and Davies researched, designed, and modeled brine utilization for cooling and salt production in wind-driven seawater greenhouses. The brine discharged by seawater desalination may disturb the ecosystem, as roughly the same amount of brine is produced as fresh water. By using this brine valorisation method, in which the greenhouse is cooled by seawater evaporation in a wind-driven air flow, salt can be produced, as shown in Figure 4. This brine is a by-product of the freshwater production, but it can also serve as the raw material for salt, turning it into a product that can be sold. An additional finding of this research was the importance of the shade net, which is modelled as a thin film in the study, as shown in Figure 5. It not only provides cooling, but also elongates the cooling plume by containing the cold air plume from the evaporative cooling pad. See also Adaptation to global warming Agroforestry Concentrating solar power Desertec Ecological engineering methods Evaporation pond Evaporite Green Sahara IBTS Greenhouse Open pan salt making Peak water Saltern Solar desalination Solar humidification Water crisis References External links "Engineers race to steal nature's secrets. Giant wind turbines based on a seed, and desalination plant that mimics a beetle", The Guardian (2006) "Seawater Greenhouse: A new approach to restorative agriculture" "The Sahara Forest Project a new source of fresh water, food and energy" Greenhouses Water desalination Water technology Climate change mitigation Sustainable agriculture
Seawater greenhouse
[ "Chemistry" ]
1,877
[ "Water technology", "Water treatment", "Water desalination" ]
17,007,865
https://en.wikipedia.org/wiki/R%20Serpentis
R Serpentis is a Mira variable type star in the equatorial constellation of Serpens. It ranges between apparent magnitude 5.16 and 14.4, and spectral types M5e to M8e, over a period of 356.41 days. The variability of this star was discovered in 1826 by Karl Ludwig Harding. References M-type giants Mira variables Serpens Durchmusterung objects 141850 077615 5894 Serpentis, R
R Serpentis
[ "Astronomy" ]
97
[ "Constellations", "Serpens" ]
15,305,337
https://en.wikipedia.org/wiki/Atomic%20gardening
Atomic gardening is a form of mutation breeding where plants are exposed to radiation. Some of the mutations produced thereby have turned out to be useful. Typically this is gamma radiation, in which case it is produced by cobalt-60. The practice of plant irradiation has resulted in the development of more than 2,000 new varieties of plants, most of which are now used in agricultural production. One example is the resistance to verticillium wilt of the 'Todd's Mitcham' cultivar of peppermint, which was produced from a breeding and test program at Brookhaven National Laboratory from the mid-1950s. Additionally, the Rio Red Grapefruit, developed at the Texas A&M Citrus Center in the 1970s and approved in 1984, accounted for more than three quarters of the grapefruit produced in Texas by 2007. History Beginning in the 1950s, atomic gardens were a part of "Atoms for Peace", an American program to develop peaceful uses of fission energy after World War II. Gamma gardens were established in laboratories in the United States, Europe, the Soviet Union, India, and Japan. Though these gardens were initially designed with the aim of testing the effects of radiation on plant life, research gradually turned towards using radiation to introduce beneficial mutations that could give plants useful characteristics. Such characteristics include increased resilience to adverse weather or a faster growth rate. In addition, the Atomic Gardening Society was established in 1959 by Muriel Howorth, an atomic activist from the United Kingdom, in conjunction with a growing movement to bring atomic energy and experimentation into the lives of ordinary citizens. In 1960, Howorth published a book entitled "Atomic Gardening for the Layman" on a similar theme. The Atomic Gardening Society utilized an early form of crowd-sourcing, in which members received irradiated seeds, planted them in their gardens, and sent reports back to Howorth detailing the results. Howorth herself made national news upon growing a two-foot-tall peanut plant after planting an irradiated nut. The youngest member of the society was Christopher Abbey (15), a student at Eastbourne College and the son of a dentist, who received a certificate of merit for propagating several species of irradiated seeds to maturity. Irradiated seeds were sold to the public by C.J. Speas, a Tennessee dentist who had obtained a license for a cobalt-60 source and produced the seeds in a backyard cinderblock bunker. Speas did so upon seeing an opportunity for amateur gardeners to get involved in testing. Howorth, in an effort to give the members of her society a broader selection, began ordering seeds from Speas in large quantities. By 1960, Speas had reportedly shipped Howorth over three and a half million seeds, which were then distributed to nearly a thousand individual Society members. Despite the initial enthusiasm, the Atomic Gardening Society declined by the mid-1960s. This was due to a combination of public opinion moving away from atomic energy and a failure on the part of the crowd-sourced Society to produce noteworthy results. In spite of this, large-scale gamma gardens remained in use, and a number of commercial plant varieties were developed and released by laboratories and private companies alike. Methodology Gamma gardens were typically five acres (two hectares) in size, and were arranged in a circular pattern with a retractable radiation source in the middle. 
Plants were usually laid out like slices of a pie, stemming from the central radiation source; this pattern produced a range of radiation doses over the radius from the center. Radioactive bombardment would take place for around twenty hours, after which scientists wearing protective equipment would enter the garden and assess the results. The plants nearest the center usually died, while the ones further out often featured "tumors and other growth abnormalities". Beyond these were the plants of interest, with a higher than usual range of mutations, though not to the damaging extent of those closer to the radiation source. These gamma gardens have continued to operate on largely the same designs as those conceived in the 1950s. Research into the potential benefits of atomic gardening has continued, most notably through a joint operation between the International Atomic Energy Agency and the U.N.'s Food and Agriculture Organization. Japan's Institute of Radiation Breeding is well-known for its modern-day usage of atomic gardening techniques. Cultural significance The popularity of atomic gardening coincided with a postwar society seeking to put newly discovered atomic energy to use. Many scientists and the public believed that atomic energy could be harnessed to address numerous worldwide issues, including famine and energy shortages, leading them to embrace the new atomic era. Some scientists that had worked on the military application of atomic energy in the past invested in or sponsored programs dedicated to bringing more peaceful applications of atomic energy to the public domain, and this included atomic gardening. As public skepticism of atomic energy grew, and as nuclear arsenals continued to increase in size across the globe, atomic gardening fell out of favor, along with other Atoms for Peace initiatives. See also The Effect of Gamma Rays on Man-in-the-Moon Marigolds Mutation breeding GMO References External links Institute of Radiation Breeding (IRB), NIAS, MAFF, Hitachiohmiya, Japan IRB gamma field on Google maps Atomic Gardening: An Online History, a comprehensive outline of Atomic Gardening by Dr. Paige Johnson. Gardening Plant genetics Radiobiology
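The radial layout described above means that each ring of plants receives a different dose, falling off with distance from the central source. A minimal sketch, assuming simple inverse-square fall-off and ignoring attenuation in air; the distances are illustrative and not taken from the article.

```python
# Relative gamma dose versus distance from the retractable central source,
# normalised to 1.0 at a reference radius of 5 m (all values illustrative).
reference_radius = 5.0
for r in (5, 10, 20, 40, 80):
    relative_dose = (reference_radius / r) ** 2
    print(f"r = {r:3d} m   relative dose = {relative_dose:.3f}")
```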
Atomic gardening
[ "Chemistry", "Biology" ]
1,085
[ "Radiobiology", "Plants", "Radioactivity", "Plant genetics" ]
15,307,594
https://en.wikipedia.org/wiki/Strained%20quantum-well%20laser
A strained quantum well laser is a type of quantum-well laser, which was invented by Professor Alf Adams at the University of Surrey in 1986. The laser is distinctive for producing a more concentrated beam than other quantum well lasers, making it considerably more efficient. The lasers are notable for usage in CD, DVD, and Blu-Ray drives as well as applications in supermarket barcode readers and telephone optical transmission. References Semiconductor lasers Photonics Quantum optics British inventions
Strained quantum-well laser
[ "Physics" ]
92
[ "Quantum optics", "Quantum mechanics" ]
3,357,839
https://en.wikipedia.org/wiki/Deposition%20%28aerosol%20physics%29
In the physics of aerosols, deposition is the process by which aerosol particles collect or deposit themselves on solid surfaces, decreasing the concentration of the particles in the air. It can be divided into two sub-processes: dry and wet deposition. The rate of deposition, or the deposition velocity, is slowest for particles of an intermediate size. Mechanisms for deposition are most effective for either very small or very large particles. Very large particles will settle out quickly through sedimentation (settling) or impaction processes, while Brownian diffusion has the greatest influence on small particles. This is because very small particles coagulate in a few hours until they achieve a diameter of 0.5 micrometres. At this size they no longer coagulate. This has a great influence on the amount of PM-2.5 present in the air. Deposition velocity is defined from F = v·c, where F is the flux density, v is the deposition velocity and c is the concentration. In gravitational deposition, this velocity is the settling velocity due to the gravity-induced drag. Often studied is whether or not a certain particle will impact with a certain obstacle. This can be predicted with the Stokes number Stk = S/d, where S is the stopping distance (which depends on particle size, velocity and drag forces), and d is the characteristic size (often the diameter of the obstacle). If the value of Stk is less than 1, the particle will not collide with that obstacle. However, if the value of Stk is greater than 1, it will. Deposition due to Brownian motion obeys both Fick's first and second laws. The resulting deposition flux is defined as J = n0·(D/(π·t))^1/2, where J is the deposition flux, n0 is the initial number density, D is the diffusion constant and t is the time. This can be integrated to determine the concentration at each moment of time. Dry deposition Dry deposition is caused by: Impaction. This is when small particles encountering a bigger obstacle are not able to follow the curved streamlines of the flow due to their inertia, so they hit or impact the obstacle. The larger the masses of the small particles facing the big one, the greater the displacement from the flow streamline. Gravitational sedimentation – the settling of particles due to gravity. Interception. This is when small particles follow the streamlines, but if they flow too close to an obstacle, they may collide with it (e.g. a branch of a tree). Turbulence. Turbulent eddies in the air transfer particles which can collide. Again, there is a net flux towards lower concentrations. Other processes, such as: thermophoresis, turbophoresis, diffusiophoresis and electrophoresis. Wet deposition In wet deposition, atmospheric hydrometeors (rain drops, snow etc.) scavenge aerosol particles. This means that wet deposition is gravitational, Brownian and/or turbulent coagulation with water droplets. Different types of wet deposition include: Below-cloud scavenging. This happens when falling rain droplets or snow particles collide with aerosol particles through Brownian diffusion, interception, impaction and turbulent diffusion. In-cloud scavenging. This is where aerosol particles get into cloud droplets or cloud ice crystals through working as cloud nuclei, or being captured by them through collision. They can be brought to the ground surface when rain or snow forms in clouds. Within aerosol computer models aerosols and cloud droplets are mostly treated separately so that nucleation represents a loss process that has to be parametrised. 
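The two expressions above (the Stokes-number criterion Stk = S/d and the Brownian deposition flux J = n0·(D/(π·t))^1/2) can be evaluated for typical particles. The air properties, particle density, and the Stokes–Einstein estimate of the diffusion constant (without slip correction) are standard assumptions and are not given in the article.

```python
import math

MU_AIR = 1.8e-5   # Pa s, dynamic viscosity of air (assumed)
K_B = 1.381e-23   # J/K, Boltzmann constant
T = 293.0         # K, air temperature (assumed)

def stokes_number(d_p, rho_p, velocity, d_obstacle):
    """Stk = S / d: stopping distance over obstacle size (Stk > 1 suggests impaction)."""
    tau = rho_p * d_p ** 2 / (18 * MU_AIR)   # particle relaxation time (no slip correction)
    return tau * velocity / d_obstacle

def brownian_flux(n0, d_p, t):
    """J = n0 * sqrt(D / (pi * t)), with D from the Stokes-Einstein relation."""
    D = K_B * T / (3 * math.pi * MU_AIR * d_p)
    return n0 * math.sqrt(D / (math.pi * t))

# A 10 um grain (density 2000 kg/m^3) at 1 m/s approaching a 100 um fibre tends to impact
# (Stk ~ 6 > 1), while a 0.1 um particle largely follows the streamlines (Stk << 1).
print(stokes_number(10e-6, 2000.0, 1.0, 1e-4))
print(stokes_number(0.1e-6, 2000.0, 1.0, 1e-4))
print(brownian_flux(n0=1e10, d_p=0.1e-6, t=60.0))  # particles per m^2 per second after 1 min
```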
See also Condensation in aerosol dynamics Particle collection in wet scrubbers Van der Waals force References Particulates Aerosols
Deposition (aerosol physics)
[ "Chemistry" ]
735
[ "Particulates", "Particle technology", "Aerosols", "Colloids" ]
3,361,324
https://en.wikipedia.org/wiki/Boost%20converter
A boost converter or step-up converter is a DC-to-DC converter that increases voltage, while decreasing current, from its input (supply) to its output (load). It is a class of switched-mode power supply (SMPS) containing at least two semiconductors, a diode and a transistor, and at least one energy storage element: a capacitor, inductor, or the two in combination. To reduce voltage ripple, filters made of capacitors (sometimes in combination with inductors) are normally added to such a converter's output (load-side filter) and input (supply-side filter). Overview Power for the boost converter can come from any suitable DC source, such as batteries, solar panels, rectifiers, and DC generators. A process that changes one DC voltage to a different DC voltage is called DC to DC conversion. A boost converter is a DC to DC converter with an output voltage greater than the source voltage. A boost converter is sometimes called a step-up converter since it "steps up" the source voltage. Since power (P = VI) must be conserved, the output current is lower than the source current. History For high efficiency, the switched-mode power supply (SMPS) switch must turn on and off quickly and have low losses. The advent of a commercial semiconductor switch in the 1950s represented a major milestone that made SMPSs such as the boost converter possible. The major DC to DC converters were developed in the early 1960s when semiconductor switches had become available. The aerospace industry’s need for small, lightweight, and efficient power converters led to the converter’s rapid development. Switched systems such as SMPS are a challenge to design since their models depend on whether a switch is opened or closed. R. D. Middlebrook from Caltech in 1977 published the models for DC to DC converters used today. Middlebrook averaged the circuit configurations for each switch state in a technique called state-space averaging. This simplification reduced two systems into one. The new model led to insightful design equations which helped the growth of SMPS. Applications Battery power systems Battery power systems often stack cells in series to achieve higher voltage. However, sufficient stacking of cells is not possible in many high voltage applications due to lack of space. Boost converters can increase the voltage and reduce the number of cells. Two battery-powered applications that use boost converters are used in hybrid electric vehicles (HEV) and lighting systems. The NHW20 model Toyota Prius HEV uses a 500 V motor. Without a boost converter, the Prius would need nearly 417 cells to power the motor. However, a Prius actually uses only 168 cells and boosts the battery voltage from 202 V to 500 V. Boost converters also power devices at smaller scale applications, such as portable lighting systems. A white LED typically requires 3.3 V to emit light, and a boost converter can step up the voltage from a single 1.5 V alkaline cell to power the lamp. Joule thief An unregulated boost converter is used as the voltage increase mechanism in the circuit known as the "Joule thief", based on blocking oscillator concepts. This circuit topology is used with low power battery applications, and is aimed at the ability of a boost converter to "steal" the remaining energy in a battery. This energy would otherwise be wasted since the low voltage of a nearly depleted battery makes it unusable for a normal load. 
This energy would otherwise remain untapped because many applications do not allow enough current to flow through a load when voltage decreases. This voltage decrease occurs as batteries become depleted, and is a characteristic of the ubiquitous alkaline battery. Since the power delivered to a resistive load is $P = V^2/R$, and R tends to be stable, the power available to the load goes down significantly as voltage decreases. Photovoltaic cells A special kind of boost converter, the voltage-lift type boost converter, is used in solar photovoltaic (PV) systems. These converters add passive components (a diode, an inductor and a capacitor) to a traditional boost converter to improve power quality and increase the performance of the complete PV system. Circuit analysis Operation The key principle that drives the boost converter is the tendency of an inductor to resist changes in current by either increasing or decreasing the energy stored in the inductor's magnetic field. In a boost converter, the output voltage is always higher than the input voltage. A schematic of a boost power stage is shown in Figure 1. When the switch is closed (on-state), current flows through the inductor in the clockwise direction and the inductor stores some energy by generating a magnetic field. The polarity of the left side of the inductor is positive. When the switch is opened (off-state), the magnetic field previously created will be reduced in energy to maintain the current through the inductor. The polarity of the inductor will be reversed, which means the left side of the inductor will become negative. As a result, the current from both the voltage source and the inductor in series will add together and be redirected through the now forward-biased diode "D" towards the load. If the switch is cycled fast enough, the inductor will not discharge fully in between charging stages, and the load will always see a voltage greater than that of the input source alone when the switch is opened. Also, while the switch is opened, the capacitor, in parallel with the load, is charged to this combined voltage. When the switch is then closed, and the right-hand side is shorted out from the left-hand side, the capacitor is, therefore, able to provide the voltage and energy to the load. During this time, the blocking diode prevents the capacitor from discharging through the switch. The switch must, of course, be opened again fast enough to prevent the capacitor from discharging too much. The basic principle of a boost converter consists of two distinct states (see Figure 2): In the on-state, the switch S (see Figure 1) is closed, resulting in an increase in the inductor current; In the off-state, the switch is open, and the only path offered to inductor current is through the flyback diode D, the capacitor C and the load R. This results in transferring the energy accumulated during the on-state into the capacitor. The input current is the same as the inductor current, as shown in Figure 2. So, it is not discontinuous as in the buck converter, and the requirements on the input filter are relaxed compared to a buck converter. Continuous mode When a boost converter operates in continuous mode, the current through the inductor ($I_L$) never falls to zero. Figure 3 shows the typical waveforms of inductor current and voltage in a converter operating in this mode.
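To make the on/off switching behaviour and the continuous-mode waveforms described above concrete, the following is a minimal time-stepped sketch (not part of the original article; the component values, duty cycle and initial current are illustrative assumptions, and the output voltage is held fixed as if the output capacitor were very large). It integrates the ideal inductor relation $V_L = L \, dI_L/dt$ over a few switching periods and checks that the inductor current never reaches zero, i.e. that the converter stays in continuous conduction.

# Python sketch: ideal boost-converter inductor current over several cycles.
# All numerical values are assumed examples, not taken from the article.
V_i = 5.0                    # input voltage (V)
D = 0.6                      # duty cycle
V_o = V_i / (1 - D)          # ideal continuous-mode output voltage
L = 100e-6                   # inductance (H)
T = 10e-6                    # switching period (s), i.e. 100 kHz
steps_per_period = 1000
dt = T / steps_per_period

i_L = 2.0                    # initial inductor current (A), nonzero for continuous mode
min_i_L = i_L
for period in range(5):
    for k in range(steps_per_period):
        t = k * dt
        # Switch closed for the first D*T of each period, open afterwards.
        v_L = V_i if t < D * T else V_i - V_o
        i_L += v_L / L * dt  # from L * di/dt = v_L
        min_i_L = min(min_i_L, i_L)

ripple = V_i * D * T / L     # expected peak-to-peak current ripple
print("output voltage           :", round(V_o, 2), "V")
print("expected current ripple  :", round(ripple, 3), "A")
print("minimum inductor current :", round(min_i_L, 3), "A (continuous mode if > 0)")

With these assumed values the current ripple is about 0.3 A around the assumed 2 A starting level, so the inductor current never approaches zero and the steady-state analysis that follows applies.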
In the steady state, the DC (average) voltage across the inductor must be zero so that after each cycle, the inductor returns to the same state, because the voltage across the inductor is proportional to the rate of change of current through it (explained in more detail below). Note in Figure 1 that the left-hand side of L is at $V_i$, and the right-hand side of L sees the voltage waveform from Figure 3. The average value of this switch-node voltage is $(1-D)V_o$, where D is the duty cycle of the waveform driving the switch. Since the average voltage across the inductor must be zero, $V_i = (1-D)V_o$, and from this we get the ideal transfer function: $V_o = \frac{V_i}{1-D}$, or $\frac{V_o}{V_i} = \frac{1}{1-D}$. We get the same result from a more detailed analysis as follows: The output voltage can be calculated as follows in the case of an ideal converter (i.e. using components with an ideal behaviour) operating in steady conditions: During the on-state, the switch S is closed, which makes the input voltage ($V_i$) appear across the inductor, which causes a change in the current ($I_L$) flowing through the inductor during a time period (t) by the formula: $\frac{dI_L}{dt} = \frac{V_i}{L}$, where L is the inductor value. At the end of the on-state, the increase of $I_L$ is therefore: $\Delta I_{L,on} = \frac{1}{L} \int_0^{DT} V_i \, dt = \frac{D T}{L} V_i$. D is the duty cycle. It represents the fraction of the commutation period T during which the switch is on. Therefore, D ranges between 0 (S is never on) and 1 (S is always on). During the off-state, the switch S is open, so the inductor current flows through the load. If we consider zero voltage drop in the diode and a capacitor large enough for its voltage to remain constant, the evolution of $I_L$ is: $V_i - V_o = L \frac{dI_L}{dt}$. Therefore, the variation of $I_L$ during the off-period is: $\Delta I_{L,off} = \frac{1}{L} \int_{DT}^{T} (V_i - V_o) \, dt = \frac{(V_i - V_o)(1-D) T}{L}$. As we consider that the converter operates in steady-state conditions, the amount of energy stored in each of its components has to be the same at the beginning and at the end of a commutation cycle. In particular, the energy stored in the inductor is given by: $E = \frac{1}{2} L I_L^2$. So, the inductor current has to be the same at the start and end of the commutation cycle. This means the overall change in the current (the sum of the changes) is zero: $\Delta I_{L,on} + \Delta I_{L,off} = 0$. Substituting $\Delta I_{L,on}$ and $\Delta I_{L,off}$ by their expressions yields: $\frac{D T V_i}{L} + \frac{(V_i - V_o)(1-D) T}{L} = 0$. This can be written as: $\frac{V_o}{V_i} = \frac{1}{1-D}$. The above equation shows that the output voltage is always higher than the input voltage (as the duty cycle goes from 0 to 1), and that it increases with D, theoretically to infinity as D approaches 1. This is why this converter is sometimes referred to as a step-up converter. Rearranging the equation reveals the duty cycle to be: $D = 1 - \frac{V_i}{V_o}$. Discontinuous mode If the ripple amplitude of the current is too high, the inductor may be completely discharged before the end of a whole commutation cycle. This commonly occurs under light loads. In this case, the current through the inductor falls to zero during part of the period (see waveforms in Figure 4). Although the difference is slight, it has a strong effect on the output voltage equation. The voltage gain can be calculated as follows: As the inductor current at the beginning of the cycle is zero, its maximum value (at $t = DT$) is $I_{L,max} = \frac{V_i D T}{L}$. During the off-period, $I_L$ falls to zero after a time $\delta T$: $I_{L,max} + \frac{(V_i - V_o) \delta T}{L} = 0$. Using the two previous equations, δ is: $\delta = \frac{V_i D}{V_o - V_i}$. The load current Io is equal to the average diode current (ID). As can be seen in Figure 4, the diode current is equal to the inductor current during the off-state. The average value of Io can be sorted out geometrically from Figure 4. Therefore, the output current can be written as: $I_o = \bar{I}_D = \frac{I_{L,max}}{2} \delta$. Replacing $I_{L,max}$ and δ by their respective expressions yields: $I_o = \frac{V_i^2 D^2 T}{2 L (V_o - V_i)}$. Therefore, the output voltage gain can be written as follows: $\frac{V_o}{V_i} = 1 + \frac{V_i D^2 T}{2 L I_o}$. Compared to the expression of the output voltage gain for continuous mode, this expression is much more complicated.
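As a rough numerical check of the two transfer functions derived above, the short sketch below (an illustration only; the component values and load currents are assumed example numbers, not taken from the article) evaluates the continuous-mode gain $1/(1-D)$ and the discontinuous-mode gain $1 + V_i D^2 T / (2 L I_o)$ for the same duty cycle. Note that the discontinuous-mode expression is only meaningful when the converter really operates in discontinuous conduction, i.e. under sufficiently light load.

# Python sketch: ideal boost-converter voltage gains in continuous (CCM)
# and discontinuous (DCM) conduction. All values are assumed examples.
V_i = 5.0        # input voltage (V)
D = 0.5          # duty cycle
T = 10e-6        # commutation period (s)
L = 100e-6       # inductance (H)

gain_ccm = 1.0 / (1.0 - D)               # Vo/Vi in continuous mode

def gain_dcm(I_o):
    # Vo/Vi in discontinuous mode for a given output current I_o (A);
    # valid only when the converter actually runs discontinuously.
    return 1.0 + (V_i * D**2 * T) / (2.0 * L * I_o)

print("CCM gain:", gain_ccm)
for I_o in (0.01, 0.05, 0.5):             # progressively heavier loads (A)
    print("DCM gain at Io =", I_o, "A:", round(gain_dcm(I_o), 2))

With these assumed values the lightly loaded cases give a gain well above the continuous-mode value of 2, illustrating the dependence on the inductance, the commutation period and the output current discussed next.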
Furthermore, in discontinuous operation, the output voltage gain not only depends on the duty cycle (D), but also on the inductor value (L), the input voltage (Vi), the commutation period (T) and the output current (Io). Substituting $I_o = \frac{V_o}{R}$ into the previous equation (R is the load), the output voltage gain can be rewritten as: $\frac{V_o}{V_i} = \frac{1 + \sqrt{1 + \frac{2 D^2 T R}{L}}}{2}$. See also Joule thief Buck converter Buck-boost converter Split-pi topology Transformer Vibrator (electronic) Voltage doubler Voltage multiplier The hydraulic ram can be seen as analogous to a boost converter, using the electronic–hydraulic analogy. Further reading References External links Explanation of nonlinear behavior, modeling, and linearization of the boost dc/dc converter Boost converter maximum output power operation for energy harvesting Choppers Voltage regulation
Boost converter
[ "Physics" ]
2,432
[ "Voltage", "Physical quantities", "Voltage regulation" ]
3,362,237
https://en.wikipedia.org/wiki/Pulsed%20columns
Pulsed columns are a type of liquid-liquid extraction equipment; equipment of this class is used at the BNFL plant THORP. They are used in particular in the nuclear industry for fuel reprocessing, where spent fuel from reactors is subjected to solvent extraction. A pulsation is created with air by means of a pulse leg. The feed is an aqueous solution containing radioactive solutes, and the solvent used is TBP (tributyl phosphate) in a suitable hydrocarbon diluent. In conventional equipment, a mechanical agitator is used to create the turbulence needed to disperse one phase in the other. However, because of the radioactivity involved and the frequent maintenance that mechanical agitators require, pulsing is used instead in these extraction columns. References Chemical equipment
Pulsed columns
[ "Chemistry", "Engineering" ]
152
[ "Chemical equipment", "nan" ]
3,362,809
https://en.wikipedia.org/wiki/Galileo%20affair
The Galileo affair () began around 1610, and culminated with the trial and condemnation of Galileo Galilei by the Roman Catholic Inquisition in 1633. Galileo was prosecuted for holding as true the doctrine of heliocentrism, the astronomical model in which the Earth and planets revolve around the Sun at the centre of the universe. In 1610, Galileo published his Sidereus Nuncius (Starry Messenger), describing the observations that he had made with his new, much stronger telescope, amongst them, the Galilean moons of Jupiter. With these observations and additional observations that followed, such as the phases of Venus, he promoted the heliocentric theory of Nicolaus Copernicus published in De revolutionibus orbium coelestium in 1543. Galileo's opinions were met with opposition within the Catholic Church, and in 1616 the Inquisition declared heliocentrism to be "formally heretical". Galileo went on to propose a theory of tides in 1616, and of comets in 1619; he argued that the tides were evidence for the motion of the Earth. In 1632, Galileo published his Dialogue Concerning the Two Chief World Systems, which defended heliocentrism, and was immensely popular. Responding to mounting controversy over theology, astronomy and philosophy, the Roman Inquisition tried Galileo in 1633, found him "vehemently suspect of heresy", and sentenced him to house arrest where he remained until his death in 1642. At that point, heliocentric books were banned and Galileo was ordered to abstain from holding, teaching or defending heliocentric ideas after the trial. The affair was complex since very early on Pope Urban VIII had been a patron to Galileo and had given him permission to publish on the Copernican theory as long as he treated it as a hypothesis, but after the publication in 1632, the patronage was broken off due to numerous reasons. Historians of science have corrected numerous false interpretations of the affair. Initial controversies Galileo began his telescopic observations in the later part of 1609, and by March 1610 was able to publish a small book, The Starry Messenger (Sidereus Nuncius), describing some of his discoveries: mountains on the Moon, lesser moons in orbit around Jupiter, and the resolution of what had been thought to be very cloudy masses in the sky (nebulae) into collections of stars too faint to see individually without a telescope. Other observations followed, including the phases of Venus and the existence of sunspots. Galileo's contributions caused difficulties for theologians and natural philosophers of the time, as they contradicted scientific and philosophical ideas based on those of Aristotle and Ptolemy and closely associated with the Catholic Church. In particular, Galileo's observations of the phases of Venus, which showed it to circle the Sun, and the observation of moons orbiting Jupiter, contradicted the geocentric model of Ptolemy, which was backed and accepted by the Roman Catholic Church, and supported the Copernican model advanced by Galileo. Jesuit astronomers, experts both in Church teachings, science, and in natural philosophy, were at first skeptical and hostile to the new ideas; however, within a year or two the availability of good telescopes enabled them to repeat the observations. In 1611, Galileo visited the Collegium Romanum in Rome, where the Jesuit astronomers by that time had repeated his observations. 
Christoph Grienberger, one of the Jesuit scholars on the faculty, sympathized with Galileo's theories, but was asked to defend the Aristotelian viewpoint by Claudio Acquaviva, the Father General of the Jesuits. Not all of Galileo's claims were completely accepted: Christopher Clavius, the most distinguished astronomer of his age, never was reconciled to the idea of mountains on the Moon, and outside the collegium many still disputed the reality of the observations. In a letter to Kepler of August 1610, Galileo complained that some of the philosophers who opposed his discoveries had refused even to look through a telescope. In 1611, the same year that Galileo visited the Collegium Romanum, his theories first came to the attention of the Roman Inquisition. A commission of cardinals working with the Inquisition made inquiries into Galileo's activities, and asked the city of Padua if he had any connections to Cesare Cremonini, a professor at the University of Padua who had been charged with heresy by the Inquisition. These inquiries marked the first time Galileo's name was brought before the Inquisition. Geocentrists who did verify and accept Galileo's findings had an alternative to Ptolemy's model in the geocentric (or "geo-heliocentric") model proposed some decades earlier by Tycho Brahe – a model in which, for example, Venus circled the Sun. Tycho argued that the distance to the stars in the Copernican system would have to be 700 times greater than the distance from the Sun to Saturn. (The nearest star other than the Sun, Proxima Centauri, is in fact over 28,000 times the distance from the Sun to Saturn.) Moreover, the only way the stars could be so distant and still appear the sizes they do in the sky would be if even average stars were gigantic – at least as big as the orbit of the Earth, and of course vastly larger than the Sun. (See the articles on the Tychonic System and Stellar parallax.) Galileo became involved in a dispute over priority in the discovery of sunspots with Christoph Scheiner, a Jesuit. This became a bitter lifelong feud. Neither of them, however, was the first to recognise sunspots – the Chinese had already been familiar with them for centuries. At this time, Galileo also engaged in a dispute over the reasons that objects float or sink in water, siding with Archimedes against Aristotle. The debate was unfriendly, and Galileo's blunt and sometimes sarcastic style, though not extraordinary in academic debates of the time, made him enemies. During this controversy one of Galileo's friends, the painter Lodovico Cardi da Cigoli, informed him that a group of malicious opponents, which Cigoli subsequently referred to derisively as "the Pigeon league", was plotting to cause him trouble over the motion of the Earth, or anything else that would serve the purpose. According to Cigoli, one of the plotters asked a priest to denounce Galileo's views from the pulpit, but the latter refused. Nevertheless, three years later another priest, Tommaso Caccini, did in fact do precisely that, as described below. Bible argument In the Catholic world prior to Galileo's conflict with the Church, the majority of educated people subscribed to the Aristotelian geocentric view that the Earth was the centre of the universe and that all heavenly bodies revolved around the Earth, though Copernican theories were used to reform the calendar in 1582. Geostaticism agreed with a literal interpretation of Scripture in several places, though interpretations of some of these passages varied.
Heliocentrism, the theory that the Earth was a planet, which along with all the others revolved around the Sun, contradicted both geocentrism and the prevailing theological support of the theory. One of the first suggestions of heresy that Galileo had to deal with came in 1613 from a professor of philosophy, poet and specialist in Greek literature, Cosimo Boscaglia. In conversation with Galileo's patron Cosimo II de' Medici and Cosimo's mother Christina of Lorraine, Boscaglia said that the telescopic discoveries were valid, but that the motion of the Earth was obviously contrary to Scripture: Dr. Boscaglia had talked to Madame [Christina] for a while, and though he conceded all the things you have discovered in the sky, he said that the motion of the Earth was incredible and could not be, particularly since Holy Scripture obviously was contrary to such motion. Galileo was defended on the spot by his former student Benedetto Castelli, now a professor of mathematics and Benedictine abbot. The exchange having been reported to Galileo by Castelli, Galileo decided to write a letter to Castelli, expounding his views on what he considered the most appropriate way of treating scriptural passages which made assertions about natural phenomena. Later, in 1615, he expanded this into his much longer Letter to the Grand Duchess Christina. Tommaso Caccini, a Dominican friar, appears to have made the first dangerous attack on Galileo. Preaching a sermon in Florence at the end of 1614, he denounced Galileo, his associates, and mathematicians in general (a category that included astronomers). The biblical text for the sermon on that day was Joshua 10, in which Joshua makes the Sun stand still; this was the story that Castelli had to interpret for the Medici family the year before. It is said, though it is not verifiable, that Caccini also used the passage from Acts 1:11, "Ye men of Galilee, why stand ye gazing up into heaven?". First meetings with theological authorities In late 1614 or early 1615, one of Caccini's fellow Dominicans, Niccolò Lorini, acquired a copy of Galileo's letter to Castelli. Lorini and other Dominicans at the Convent of San Marco considered the letter of doubtful orthodoxy, in part because it may have violated the decrees of the Council of Trent: Lorini and his colleagues decided to bring Galileo's letter to the attention of the Inquisition. In February 1615, Lorini accordingly sent a copy to the Secretary of the Inquisition, Cardinal Paolo Emilio Sfondrati, with a covering letter critical of Galileo's supporters: On March 19, Caccini arrived at the Inquisition's offices in Rome to denounce Galileo for his Copernicanism and various other alleged heresies supposedly being spread by his pupils. Galileo soon heard reports that Lorini had obtained a copy of his letter to Castelli and was claiming that it contained many heresies. He also heard that Caccini had gone to Rome and suspected him of trying to stir up trouble with Lorini's copy of the letter. As 1615 wore on he became more concerned, and eventually determined to go to Rome as soon as his health permitted, which it did at the end of the year. By presenting his case there, he hoped to clear his name of any suspicion of heresy, and to persuade the Church authorities not to suppress heliocentric ideas. In going to Rome Galileo was acting against the advice of friends and allies, and of the Tuscan ambassador to Rome, Piero Guicciardini. 
Bellarmine Cardinal Robert Bellarmine, one of the most respected Catholic theologians of the time, was called on to adjudicate the dispute between Galileo and his opponents. The question of heliocentrism had first been raised with Cardinal Bellarmine, in the case of Paolo Antonio Foscarini, a Carmelite father; Foscarini had published a book, Lettera ... sopra l'opinione ... del Copernico, which attempted to reconcile Copernicus with the biblical passages that seemed to be in contradiction. Bellarmine at first expressed the opinion that Copernicus's book would not be banned, but would at most require some editing so as to present the theory purely as a calculating device for "saving the appearances" (i.e. preserving the observable evidence). Foscarini sent a copy of his book to Bellarmine, who replied in a letter of April 12, 1615. Galileo is mentioned by name in the letter, and a copy was soon sent to him. After some preliminary salutations and acknowledgements, Bellarmine begins by telling Foscarini that it is prudent for him and Galileo to limit themselves to treating heliocentrism as a merely hypothetical phenomenon and not a physically real one. Further on he says that interpreting heliocentrism as physically real would be "a very dangerous thing, likely not only to irritate all scholastic philosophers and theologians, but also to harm the Holy Faith by rendering Holy Scripture as false." Moreover, while the topic was not inherently a matter of faith, the statements about it in Scripture were so by virtue of who said them – namely, the Holy Spirit. He conceded that if there were conclusive proof, "then one would have to proceed with great care in explaining the Scriptures that appear contrary; and say rather that we do not understand them, than that what is demonstrated is false." However, demonstrating that heliocentrism merely "saved the appearances" could not be regarded as sufficient to establish that it was physically real. Although he believed that the former may well have been possible, he had "very great doubts" that the latter would be, and in case of doubt it was not permissible to depart from the traditional interpretation of Scriptures. His final argument was a rebuttal of an analogy that Foscarini had made between a moving Earth and a ship on which the passengers perceive themselves as apparently stationary and the receding shore as apparently moving. Bellarmine replied that in the case of the ship the passengers know that their perceptions are erroneous and can mentally correct them, whereas the scientist on the Earth clearly experiences that it is stationary and therefore the perception that the Sun, Moon and stars are moving is not in error and does not need to be corrected. Bellarmine found no problem with heliocentrism so long as it was treated as a purely hypothetical calculating device and not as a physically real phenomenon, but he did not regard it as permissible to advocate the latter unless it could be conclusively proved through current scientific standards. This put Galileo in a difficult position, because he believed that the available evidence strongly favoured heliocentrism, and he wished to be able to publish his arguments. Francesco Ingoli In addition to Bellarmine, Monsignor Francesco Ingoli initiated a debate with Galileo, sending him in January 1616 an essay disputing the Copernican system. Galileo later stated that he believed this essay to have been instrumental in the action against Copernicanism that followed in February. 
According to philosopher Maurice Finocchiaro, Ingoli had probably been commissioned by the Inquisition to write an expert opinion on the controversy, and the essay provided the "chief direct basis" for the ban. The essay focused on eighteen physical and mathematical arguments against heliocentrism. It borrowed primarily from the arguments of Tycho Brahe, and it notably mentioned Brahe's argument that heliocentrism required the stars to be much larger than the Sun. Ingoli wrote that the great distance to the stars in the heliocentric theory "clearly proves ... the fixed stars to be of such size, as they may surpass or equal the size of the orbit circle of the Earth itself." Ingoli included four theological arguments in the essay, but suggested to Galileo that he focus on the physical and mathematical arguments. Galileo did not write a response to Ingoli until 1624; in it, among other arguments and evidence, he listed the results of experiments such as dropping a rock from the mast of a moving ship. Inquisition and first judgment, 1616 Deliberation On February 19, 1616, the Inquisition asked a commission of theologians, known as qualifiers, about the propositions of the heliocentric view of the universe. Historians of the Galileo affair have offered different accounts of why the matter was referred to the qualifiers at this time. Beretta points out that the Inquisition had taken a deposition from Gianozzi Attavanti in November 1615, as part of its investigation into the denunciations of Galileo by Lorini and Caccini. In this deposition, Attavanti confirmed that Galileo had advocated the Copernican doctrines of a stationary Sun and a mobile Earth, and as a consequence the Tribunal of the Inquisition would have eventually needed to determine the theological status of those doctrines. It is however possible, as surmised by the Tuscan ambassador, Piero Guicciardini, in a letter to the Grand Duke, that the actual referral may have been precipitated by Galileo's aggressive campaign to prevent the condemnation of Copernicanism. Judgement On February 24 the Qualifiers delivered their unanimous report: the proposition that the Sun is stationary at the centre of the universe is "foolish and absurd in philosophy, and formally heretical since it explicitly contradicts in many places the sense of Holy Scripture"; the proposition that the Earth moves and is not at the centre of the universe "receives the same judgement in philosophy; and ... in regard to theological truth it is at least erroneous in faith." The original report document was made widely available in 2014. At a meeting of the cardinals of the Inquisition on the following day, Pope Paul V instructed Bellarmine to deliver this result to Galileo, and to order him to abandon the Copernican opinions; should Galileo resist the decree, stronger action would be taken. On February 26, Galileo was called to Bellarmine's residence and ordered to abandon the Copernican opinions. With no attractive alternatives, Galileo accepted the orders delivered, which were even sterner than those recommended by the Pope. Galileo met again with Bellarmine, apparently on friendly terms; and on March 11 he met with the Pope, who assured him that he was safe from prosecution so long as he, the Pope, should live. Nonetheless, Galileo's friends Sagredo and Castelli reported that there were rumors that Galileo had been forced to recant and do penance. To protect his good name, Galileo requested a letter from Bellarmine stating the truth of the matter.
This letter assumed great importance in 1633, as did the question whether Galileo had been ordered not to "hold or defend" Copernican ideas (which would have allowed their hypothetical treatment) or not to teach them in any way. If the Inquisition had issued the order not to teach heliocentrism at all, it would have been ignoring Bellarmine's position. In the end, Galileo did not persuade the Church to stay out of the controversy, but instead saw heliocentrism formally declared false. It was consequently termed heretical by the Qualifiers, since it contradicted the literal meaning of the Scriptures, though this position was not binding on the Church. Copernican books banned Following the Inquisition's injunction against Galileo, the papal Master of the Sacred Palace ordered that Foscarini's Letter be banned, and Copernicus' De revolutionibus suspended until corrected. The papal Congregation of the Index preferred a stricter prohibition, and so with the Pope's approval, on March 5 the Congregation banned all books advocating the Copernican system, which it called "the false Pythagorean doctrine, altogether contrary to Holy Scripture." Francesco Ingoli, a consultor to the Holy Office, recommended that De revolutionibus be amended rather than banned due to its utility for calendrics. In 1618, the Congregation of the Index accepted his recommendation, and published their decision two years later, allowing a corrected version of Copernicus' book to be used. The uncorrected De revolutionibus remained on the Index of banned books until 1758. Galileo's works advocating Copernicanism were therefore banned, and his sentence prohibited him from "teaching, defending… or discussing" Copernicanism. In Germany, Kepler's works were also banned by the papal order. Dialogue Concerning the Two Chief World Systems In 1623, Pope Gregory XV died and was succeeded by Pope Urban VIII who showed greater favor to Galileo, particularly after Galileo traveled to Rome to congratulate the new Pontiff. Galileo's Dialogue Concerning the Two Chief World Systems, which was published in 1632 to great popularity, was an account of conversations between a Copernican scientist, Salviati, an impartial and witty scholar named Sagredo, and a ponderous Aristotelian named Simplicio, who employed stock arguments in support of geocentricity, and was depicted in the book as being an intellectually inept fool. Simplicio's arguments are systematically refuted and ridiculed by the other two characters with what Youngson calls "unassailable proof" for the Copernican theory (at least versus the theory of Ptolemy – as Finocchiaro points out, "the Copernican and Tychonic systems were observationally equivalent and the available evidence could be explained equally well by either"), which reduces Simplicio to baffled rage, and makes the author's position unambiguous. Indeed, although Galileo states in the preface of his book that the character is named after a famous Aristotelian philosopher (Simplicius in Latin, Simplicio in Italian), the name "Simplicio" in Italian also had the connotation of "simpleton." Authors Langford and Stillman Drake asserted that Simplicio was modeled on philosophers Lodovico delle Colombe and Cesare Cremonini. Pope Urban demanded that his own arguments be included in the book, which resulted in Galileo putting them in the mouth of Simplicio. Some months after the book's publication, Pope Urban VIII banned its sale and had its text submitted for examination by a special commission. 
Trial and second judgment, 1633 With the loss of many of his defenders in Rome because of Dialogue Concerning the Two Chief World Systems, in 1633 Galileo was ordered to stand trial on suspicion of heresy "for holding as true the false doctrine taught by some that the sun is the center of the world" against the 1616 condemnation, since "it was decided at the Holy Congregation [...] on 25 Feb 1616 that [...] the Holy Office would give you an injunction to abandon this doctrine, not to teach it to others, not to defend it, and not to treat of it; and that if you did not acquiesce in this injunction, you should be imprisoned". Galileo was interrogated while threatened with physical torture. A panel of theologians, consisting of Melchior Inchofer, Agostino Oreggi and Zaccaria Pasqualigo, reported on the Dialogue. Their opinions were strongly argued in favour of the view that the Dialogue taught the Copernican theory. Galileo was found guilty, and the sentence of the Inquisition, issued on 22 June 1633, was in three essential parts: Galileo was found "vehemently suspect of heresy", namely of having held the opinions that the Sun lies motionless at the centre of the universe, that the Earth is not at its centre and moves, and that one may hold and defend an opinion as probable after it has been declared contrary to Holy Scripture. He was required to "abjure, curse, and detest" those opinions. He was sentenced to formal imprisonment at the pleasure of the Inquisition. On the following day this was commuted to house arrest, which he remained under for the rest of his life. His offending Dialogue was banned; and in an action not announced at the trial, publication of any of his works was forbidden, including any he might write in the future. According to popular legend, after his abjuration Galileo allegedly muttered the rebellious phrase "and yet it moves" (Eppur si muove), but there is no evidence that he actually said this or anything similar. The first account of the legend dates to a century after his death. The phrase "Eppur si muove" does appear, however, in a painting of the 1640s by the Spanish painter Bartolomé Esteban Murillo or an artist of his school. The painting depicts an imprisoned Galileo apparently pointing to a copy of the phrase written on the wall of his dungeon. After a period with the friendly Archbishop Piccolomini in Siena, Galileo was allowed to return to his villa at Arcetri near Florence, where he spent the rest of his life under house arrest. He continued his work on mechanics, and in 1638 he published a scientific book in Holland. His standing would remain questioned at every turn. In March 1641, Vincentio Reinieri, a follower and pupil of Galileo, wrote him at Arcetri that an Inquisitor had recently compelled the author of a book printed at Florence to change the words "most distinguished Galileo" to "Galileo, man of noted name". However, partially in tribute to Galileo, at Arcetri the first academy devoted to the new experimental science, the Accademia del Cimento, was formed, which is where Francesco Redi performed controlled experiments, and many other important advancements were made which would eventually help usher in The Age of Enlightenment. 
Modern views Historians and scholars Pope Urban VIII had been a patron to Galileo and had given him permission to publish on the Copernican theory as long as he treated it as a hypothesis, but after the publication in 1632, the patronage broke due to Galileo placing Urban's arguments for God's omnipotence, which Galileo had been required to include, in the mouth of a simpleton character named "Simplicio" in the book; this caused great offense to the Pope. There is some evidence that enemies of Galileo persuaded Urban that Simplicio was intended to be a caricature of him. Modern historians have dismissed it as most unlikely that this had been Galileo's intention. Dava Sobel argues that during this time, Urban had fallen under the influence of court intrigue and problems of state. His friendship with Galileo began to take second place to his feelings of persecution and fear for his own life. The problem of Galileo was presented to the pope by court insiders and enemies of Galileo, following claims by a Spanish cardinal that Urban was a poor defender of the church. This situation did not bode well for Galileo's defense of his book. In his 1998 book, Scientific Blunders, Robert Youngson indicates that Galileo struggled for two years against the ecclesiastical censor to publish a book promoting heliocentrism. He claims the book passed only as a result of possible idleness or carelessness on the part of the censor, who was eventually dismissed. On the other hand, Jerome K. Langford and Raymond J. Seeger contend that Pope Urban and the Inquisition gave formal permission to publish the book, Dialogue Concerning the Two Chief World Systems, Ptolemaic & Copernican. They claim Urban personally asked Galileo to give arguments for and against heliocentrism in the book, to include Urban's own arguments, and for Galileo not to advocate heliocentrism. Some historians emphasize Galileo's confrontation not only with the church, but also with Aristotelian philosophy, either secular or religious. Views on Galileo's scientific arguments While Galileo never claimed that his arguments themselves directly proved heliocentrism to be true, they were significant evidence in its favor. According to Finocchiaro, defenders of the Catholic church's position have sometimes attempted to argue, unsuccessfully, that Galileo was right on the facts but that his scientific arguments were weak or unsupported by evidence of the day; Finocchiaro rejects this view, saying that some of Galileo's key epistemological arguments are accepted fact today. Direct evidence ultimately confirmed the motion of the Earth, with the emergence of Newtonian mechanics in the late 17th century, the observation of the stellar aberration of light by James Bradley in the 18th century, the analysis of orbital motions of binary stars by William Herschel in the 19th century, and the accurate measurement of the stellar parallax in the 19th century. According to Christopher Graney, an Adjunct Scholar at the Vatican Observatory, one of Galileo's observations did not support the Copernican heliocentric view, but was more consistent with Tycho Brahe's hybrid model where the Earth did not move, and everything else circled around it and the Sun. Redondi's theory According to a controversial alternative theory proposed by Pietro Redondi in 1983, the main reason for Galileo's condemnation in 1633 was his attack on the Aristotelian doctrine of matter rather than his defence of Copernicanism. 
An anonymous denunciation, labeled "G3", discovered by Redondi in the Vatican archives, had argued that the atomism espoused by Galileo in his previous work of 1623, The Assayer, was incompatible with the doctrine of transubstantiation of the Eucharist. At the time, investigation of this complaint was apparently entrusted to a Father Giovanni di Guevara, who was well-disposed towards Galileo, and who cleared The Assayer of any taint of unorthodoxy. A similar attack against The Assayer on doctrinal grounds was penned by Jesuit Orazio Grassi in 1626 under the pseudonym "Sarsi". According to Redondi: The Jesuits, who had already linked The Assayer to allegedly heretical atomist ideas, regarded the ideas about matter expressed by Galileo in The Dialogue as further evidence that his atomism was heretically inconsistent with the doctrine of the Eucharist, and protested against it on these grounds. Pope Urban VIII, who had been under attack by Spanish cardinals for being too tolerant of heretics, and who had also encouraged Galileo to publish The Dialogue, would have been compromised had his enemies among the Cardinal Inquisitors been given an opening to comment on his support of a publication containing Eucharistic heresies. Urban, after banning the book's sale, established a commission to examine The Dialogue, ostensibly for the purpose of determining whether it would be possible to avoid referring the matter to the Inquisition at all, and as a special favor to Galileo's patron, the Grand Duke of Tuscany. Urban's real purpose, though, was to avoid having the accusations of Eucharistic heresy referred to the Inquisition, and he stacked the commission with friendly commissioners who could be relied upon not to mention them in their report. The commission reported against Galileo. Redondi's hypothesis concerning the hidden motives behind the 1633 trial has been criticized, and mainly rejected, by other Galileo scholars. However, it has been supported recently, as of 2007, by novelist and science writer Michael White. Modern Catholic Church views In 1758 the Catholic Church dropped the general prohibition of books advocating heliocentrism from the Index of Forbidden Books. It did not, however, explicitly rescind the decisions issued by the Inquisition in its judgement of 1633 against Galileo, or lift the prohibition of uncensored versions of Copernicus's De Revolutionibus or Galileo's Dialogue. The issue finally came to a head in 1820 when the Master of the Sacred Palace (the Church's chief censor), Filippo Anfossi, refused to license a book by a Catholic canon, Giuseppe Settele, because it openly treated heliocentrism as a physical fact. Settele appealed to pope Pius VII. After the matter had been reconsidered by the Congregation of the Index and the Holy Office, Anfossi's decision was overturned. Copernicus's De Revolutionibus and Galileo's Dialogue were then subsequently omitted from the next edition of the Index when it appeared in 1835. In 1979, Pope John Paul II expressed the hope that "theologians, scholars and historians, animated by a spirit of sincere collaboration, will study the Galileo case more deeply and in loyal recognition of wrongs, from whatever side they come." However, the Pontifical Interdisciplinary Study Commission constituted in 1981 to study the case did not reach any definitive result. Because of this, the Pope's 1992 speech that closed the project was vague, and did not fulfill his intentions expressed in 1979. 
On February 15, 1990, in a speech delivered at La Sapienza University in Rome, Cardinal Ratzinger (later Pope Benedict XVI) cited some current views on the Galileo affair as forming what he called "a symptomatic case that illustrates the extent to which modernity’s doubts about itself have grown today in science and technology". As evidence, he presented the views of a few prominent philosophers including Ernst Bloch and Carl Friedrich von Weizsäcker, as well as Paul Feyerabend, whom he quoted as saying: Ratzinger did not directly say whether he agreed or disagreed with Feyerabend's assertions, but did say in this same context that "It would be foolish to construct an impulsive apologetic on the basis of such views." In 1992, it was reported that the Catholic Church had turned towards vindicating Galileo: In January 2008, students and professors protested the planned visit of Pope Benedict XVI to La Sapienza University, stating in a letter that the pope's expressed views on Galileo "offend and humiliate us as scientists who are loyal to reason and as teachers who have dedicated our lives to the advance and dissemination of knowledge". In response the pope canceled his visit. The full text of the speech that would have been given was made available a few days following Pope Benedict's cancelled appearance at the university. La Sapienza's rector, Renato Guarini, and former Italian Prime Minister Romano Prodi opposed the protest and supported the pope's right to speak. Also notable were public counter-statements by La Sapienza professors Giorgio Israel and Bruno Dalla Piccola. List of artistic treatments In addition to the large non-fiction literature and the many documentary films about Galileo and the Galileo affair, there have also been several treatments in historical plays and films. The Museo Galileo has posted a listing of several of the plays. A listing centered on the films was presented in a 2010 article by Cristina Olivotto and Antonella Testa. Galilée is a French play by François Ponsard first performed in 1867. Galileo Galilei is a short Italian silent film by Luigi Maggi that was released in 1909. Life of Galileo is a play by the German playwright Bertolt Brecht that exists in several versions, including a 1947 version in English written with Charles Laughton. The play has been called "Brecht's masterpiece" by Michael Billington of The Guardian British newspaper. Joseph Losey, who directed the first productions of the English language version in 1947, made a film based on the play that was released in 1975. Lamp at Midnight is a play by Barrie Stavis that was first performed in 1947. An adaptation of the play was televised in 1964; it was directed by George Schaefer. A recording was released as a VHS tape in the 1980s. Galileo is a 1968 Italian film written and directed by Liliana Cavani. Galileo Galilei is an opera with music by Philip Glass and a libretto by Mary Zimmerman. It was first performed in 2002. See also Aristarchus of Samos Giordano Bruno Catholic Church and science Conflict thesis Vincenzo Maculani Notes References A searchable online copy is available on the Institute and Museum of the History of Science, Florence, and a brief overview of Le Opere is available at Finn's fine books, and here. Original edition published by Hutchinson (London). . Original edition by Desclee (New York, 1966) McMullen, Emerson Thomas, Galileo's condemnation: The real and complex story (Georgia Journal of Science, vol. 61(2) 2003) . 
Speller, Jules, Galileo's Inquisition Trial Revisited, Peter Lang Europäischer Verlag, Frankfurt am Main, Berlin, Bern, Bruxelles, New York, Oxford, Wien, 2008. 431 pp. External links Galileo Galilei, Scriptural Exegete, and the Church of Rome, Advocate of Science lecture (audio here) by Thomas Aquinas College tutor Dr. Christopher Decaen "The End of the Myth of Galileo Galilei" The Starry Messenger (1610). An English translation from Bard College Sidereus Nuncius (1610) Original Latin text at LiberLiber online library. Galileo's letter to Castelli of 1613. The English translation given on the web page at this link is from Finocchiaro (1989), contrary to the claim made in the citation given on the page itself. Galileo's letter to the Grand Duchess Christina of 1615 Bellarmine's letter to Foscarini of 1615 Inquisition documents, 1616 and 1633 Galileo: Science and Religion Extensively documented series of lectures by William E. Carroll and Peter Hodgson. Edizione Nazionale. A searchable online copy of Favaro's National Edition of Galileo's works at the website of the Institute and Museum of the History of Science, Florence. 17th-century Catholicism 17th century in science Astronomical controversies Cognitive inertia Copernican Revolution Events relating to freedom of expression Galileo Galilei History of astronomy Inquisition Pope Paul V 1610 beginnings 1633 endings
Galileo affair
[ "Astronomy" ]
7,576
[ "Copernican Revolution", "Astronomical controversies", "Galileo affair", "History of astronomy" ]
3,363,481
https://en.wikipedia.org/wiki/Brockram
Brockram is a type of rock found in northern England. It is a basal breccia of cemented limestone and sandstone fragments dating from the Permian period, forming part of the Appleby Group. Brockram outcrops in the Whitehaven and Workington district (Geological Survey of Great Britain sheet 28). Saltom Bay gives a good exposure of it. Along the coast (Saltom Bay to St. Bees) its thickness varies from 0.75 m to 20.5 m. Inland boreholes have revealed its thickness to be up to 121 m. Brockram has been used as a building material in Kirkby Stephen and the rest of the Vale of Eden, where it has also been quarried for lime burning. It is visible also beside a river bed under a bridge on the edge of Kirkby Stephen. References Breccias Geology of England Permian United Kingdom
Brockram
[ "Materials_science" ]
181
[ "Breccias", "Fracture mechanics" ]
717,778
https://en.wikipedia.org/wiki/Kinetic%20term
In physics, a kinetic term is the part of the Lagrangian that is bilinear in the fields (for nonlinear sigma models, the kinetic term is not even bilinear), and usually contains two derivatives with respect to time (or space); in the case of fermions, the kinetic term usually has one derivative only. The equation of motion derived from such a Lagrangian contains differential operators which are generated by the kinetic term. Unitarity requires kinetic terms to be positive. In mechanics, the kinetic term is $\frac{1}{2} m \dot{x}^2$. In quantum field theory, the kinetic terms for the real scalar field, the electromagnetic field and the Dirac field are, respectively, $\frac{1}{2} \partial_\mu \phi \, \partial^\mu \phi$, $-\frac{1}{4} F_{\mu\nu} F^{\mu\nu}$ and $\bar{\psi} i \gamma^\mu \partial_\mu \psi$. Quantum field theory
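As a standard illustration, added here and not part of the original article text, of how the kinetic term generates the differential operators in the equation of motion: applying the Euler–Lagrange equation for fields to the free real scalar kinetic term gives the massless wave (Klein–Gordon) equation.

% Free real scalar field, kinetic term only (massless case)
\mathcal{L} = \tfrac{1}{2}\, \partial_\mu \phi \, \partial^\mu \phi

% Euler--Lagrange equation for a field, applied to this Lagrangian
\partial_\mu \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi)} - \frac{\partial \mathcal{L}}{\partial \phi} = 0
\quad \Longrightarrow \quad
\partial_\mu \partial^\mu \phi = \Box \phi = 0

Adding a mass term $-\tfrac{1}{2} m^2 \phi^2$ to the Lagrangian would turn this into the full Klein–Gordon equation $(\Box + m^2)\phi = 0$.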
Kinetic term
[ "Physics" ]
136
[ "Quantum field theory", "Quantum mechanics" ]
717,826
https://en.wikipedia.org/wiki/Non-Gaussianity
In physics, a non-Gaussianity is a correction that modifies the Gaussian (normal) distribution expected to describe the measured values of a physical quantity. In physical cosmology, the fluctuations of the cosmic microwave background are known to be approximately Gaussian, both theoretically and experimentally. However, most theories predict some level of non-Gaussianity in the primordial density field. Detection of these non-Gaussian signatures would allow discrimination between various models of inflation and their alternatives. References External links Testing gaussianity, homogeneity and isotropy with the cosmic microwave background Measurement Physical cosmology
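As a hedged illustration (not drawn from the article): one elementary way to quantify a departure from Gaussianity in a sampled field is through higher-order sample moments such as skewness and excess kurtosis, both of which vanish for a Gaussian distribution. The sketch below compares a Gaussian sample with a mildly non-Gaussian one built by adding a small quadratic correction; the construction and the parameter value are illustrative assumptions, not the estimators actually used in CMB analyses.

# Python sketch: detecting a simple non-Gaussian signature via sample moments.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# A Gaussian reference sample and a mildly non-Gaussian one.
# The quadratic correction below is purely illustrative.
gaussian_field = rng.normal(size=100_000)
eps = 0.1  # assumed strength of the quadratic correction
non_gaussian_field = gaussian_field + eps * (gaussian_field**2 - 1.0)

for name, field in [("Gaussian", gaussian_field),
                    ("non-Gaussian", non_gaussian_field)]:
    skew = stats.skew(field)        # approximately 0 for a Gaussian sample
    kurt = stats.kurtosis(field)    # excess kurtosis, approximately 0 for a Gaussian
    print(name, "skewness:", round(skew, 3), "excess kurtosis:", round(kurt, 3))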
Non-Gaussianity
[ "Physics", "Astronomy", "Mathematics" ]
133
[ "Astronomical sub-disciplines", "Physical quantities", "Quantity", "Theoretical physics", "Astrophysics", "Size", "Measurement", "Physical cosmology" ]
718,273
https://en.wikipedia.org/wiki/Design%20for%20Six%20Sigma
Design for Six Sigma (DFSS) is a collection of best-practices for the development of new products and processes. It is sometimes deployed as an engineering design process or business process management method. DFSS originated at General Electric to build on the success they had with traditional Six Sigma; but instead of process improvement, DFSS was made to target new product development. It is used in many industries, like finance, marketing, basic engineering, process industries, waste management, and electronics. It is based on the use of statistical tools like linear regression and enables empirical research similar to that performed in other fields, such as social science. While the tools and order used in Six Sigma require a process to be in place and functioning, DFSS has the objective of determining the needs of customers and the business, and driving those needs into the product solution so created. It is used for product or process design in contrast with process improvement. Measurement is the most important part of most Six Sigma or DFSS tools, but whereas in Six Sigma measurements are made from an existing process, DFSS focuses on gaining a deep insight into customer needs and using these to inform every design decision and trade-off. There are different options for the implementation of DFSS. Unlike Six Sigma, which is commonly driven via DMAIC (Define - Measure - Analyze - Improve - Control) projects, DFSS has spawned a number of stepwise processes, all in the style of the DMAIC procedure. DMADV, define – measure – analyze – design – verify, is sometimes synonymously referred to as DFSS, although alternatives such as IDOV (Identify, Design, Optimize, Verify) are also used. The traditional DMAIC Six Sigma process, as it is usually practiced, which is focused on evolutionary and continuous improvement manufacturing or service process development, usually occurs after initial system or product design and development have been largely completed. DMAIC Six Sigma as practiced is usually consumed with solving existing manufacturing or service process problems and removal of the defects and variation associated with defects. It is clear that manufacturing variations may impact product reliability. So, a clear link should exist between reliability engineering and Six Sigma (quality). In contrast, DFSS (or DMADV and IDOV) strives to generate a new process where none existed, or where an existing process is deemed to be inadequate and in need of replacement. DFSS aims to create a process with the end in mind of optimally building the efficiencies of Six Sigma methodology into the process before implementation; traditional Six Sigma seeks for continuous improvement after a process already exists. DFSS as an approach to design DFSS seeks to avoid manufacturing/service process problems by using advanced techniques to avoid process problems at the outset (e.g., fire prevention). When combined, these methods obtain the proper needs of the customer, and derive engineering system parameter requirements that increase product and service effectiveness in the eyes of the customer and all other people. This yields products and services that provide great customer satisfaction and increased market share. These techniques also include tools and processes to predict, model and simulate the product delivery system (the processes/tools, personnel and organization, training, facilities, and logistics to produce the product/service). 
In this way, DFSS is closely related to operations research (for example, solving the knapsack problem) and workflow balancing. DFSS is largely a design activity requiring tools including: quality function deployment (QFD), axiomatic design, TRIZ, Design for X, design of experiments (DOE), Taguchi methods, tolerance design, robustification and response surface methodology for single- or multiple-response optimization. While these tools are sometimes used in the classic DMAIC Six Sigma process, they are uniquely used by DFSS to analyze new and unprecedented products and processes. It is a concurrent analysis directed at manufacturing optimization related to the design. Critics Response surface methodology and other DFSS tools use statistical (often empirical) models, and therefore practitioners need to be aware that even the best statistical model is an approximation to reality. In practice, both the models and the parameter values are unknown, and subject to uncertainty on top of ignorance. Of course, an estimated optimum point need not be optimum in reality, because of the errors of the estimates and of the inadequacies of the model. The uncertainties can be handled via a Bayesian predictive approach, which considers the uncertainties in the model parameters as part of the optimization. The optimization is not based on a fitted model for the mean response, E[Y]; rather, the posterior probability that the responses satisfy given specifications is maximized according to the available experimental data. Nonetheless, response surface methodology has an effective track record of helping researchers improve products and services: for example, George Box's original response-surface modeling enabled chemical engineers to improve a process that had been stuck at a saddle point for years. Distinctions from DMAIC Proponents of DMAIC, DDICA (Design Develop Initialize Control and Allocate) and Lean techniques might claim that DFSS falls under the general rubric of Six Sigma or Lean Six Sigma (LSS). Both methodologies focus on meeting customer needs and business priorities as the starting point for analysis. It is often seen that the tools used for DFSS techniques vary widely from those used for DMAIC Six Sigma. In particular, DMAIC and DDICA practitioners often use new or existing mechanical drawings and manufacturing process instructions as the originating information to perform their analysis, while DFSS practitioners often use simulations and parametric system design/analysis tools to predict both cost and performance of candidate system architectures. While it can be claimed that the two processes are similar, in practice the working medium differs enough that DFSS requires different tool sets in order to perform its design tasks. DMAIC, IDOV and Six Sigma may still be used during depth-first plunges into the system architecture analysis and for "back end" Six Sigma processes; DFSS provides system design processes used in front-end complex system designs. Back-to-front systems are also used. Done well, this yields on the order of 3.4 defects per million design opportunities. Traditional Six Sigma methodology, DMAIC, has become a standard process optimization tool for the chemical process industries. However, it has become clear that the promise of Six Sigma, specifically 3.4 defects per million opportunities (DPMO), is simply unachievable after the fact. Consequently, there has been a growing movement to implement Six Sigma at the design stage, usually called Design for Six Sigma (DFSS), along with DDICA tools.
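To make the defect figure quoted above concrete, here is a brief hedged sketch (the defect counts are invented example numbers, and it uses the conventional 1.5 sigma shift under which a six sigma process corresponds to roughly 3.4 DPMO) that computes defects per million opportunities and the corresponding short-term sigma level.

# Python sketch: DPMO and sigma level. Example counts are invented.
from scipy.stats import norm

def dpmo(defects, units, opportunities_per_unit):
    # Defects per million opportunities.
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value, shift=1.5):
    # Short-term sigma level using the conventional 1.5-sigma shift.
    return norm.ppf(1 - dpmo_value / 1_000_000) + shift

# Invented example: 12 defects found in 800 units with 25 opportunities each.
d = dpmo(defects=12, units=800, opportunities_per_unit=25)
print("DPMO:", round(d, 1))
print("Sigma level:", round(sigma_level(d), 2))
# Sanity check of the often-quoted figure: 3.4 DPMO is about six sigma.
print("3.4 DPMO corresponds to about", round(sigma_level(3.4), 2), "sigma")

In a DFSS project the intent is to design toward such a target from the outset rather than to measure and correct an existing process after the fact.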
This methodology begins with defining customer needs and leads to the development of robust processes to deliver those needs. Design for Six Sigma emerged from the Six Sigma and the Define-Measure-Analyze-Improve-Control (DMAIC) quality methodologies, which were originally developed by Motorola to systematically improve processes by eliminating defects. Unlike its traditional Six Sigma/DMAIC predecessors, which are usually focused on solving existing manufacturing issues (i.e., "fire fighting"), DFSS aims at avoiding manufacturing problems by taking a more proactive approach to problem solving and engaging the company efforts at an early stage to reduce problems that could occur (i.e., "fire prevention"). The primary goal of DFSS is to achieve a significant reduction in the number of nonconforming units and production variation. It starts from an understanding of the customer expectations, needs and Critical to Quality issues (CTQs) before a design can be completed. Typically in a DFSS program, only a small portion of the CTQs are reliability-related (CTR), and therefore, reliability does not get center stage attention in DFSS. DFSS rarely looks at the long-term (after manufacturing) issues that might arise in the product (e.g. complex fatigue issues or electrical wear-out, chemical issues, cascade effects of failures, system level interactions). Similarities with other methods Arguments about what makes DFSS different from Six Sigma demonstrate the similarities between DFSS and other established engineering practices such as probabilistic design and design for quality. In general Six Sigma with its DMAIC roadmap focuses on improvement of an existing process or processes. DFSS focuses on the creation of new value with inputs from customers, suppliers and business needs. While traditional Six Sigma may also use those inputs, the focus is again on improvement and not design of some new product or system. It also shows the engineering background of DFSS. However, like other methods developed in engineering, there is no theoretical reason why DFSS cannot be used in areas outside of engineering. Software engineering applications Historically, although the first successful Design for Six Sigma projects in 1989 and 1991 predate establishment of the DMAIC process improvement process, Design for Six Sigma (DFSS) is accepted in part because Six Sigma organisations found that they could not optimise products past three or four Sigma without fundamentally redesigning the product, and because improving a process or product after launch is considered less efficient and effective than designing in quality. ‘Six Sigma’ levels of performance have to be ‘built-in’. DFSS for software is essentially a non superficial modification of "classical DFSS" since the character and nature of software is different from other fields of engineering. The methodology describes the detailed process for successfully applying DFSS methods and tools throughout the software product design, covering the overall Software Development life cycle: requirements, architecture, design, implementation, integration, optimization, verification and validation (RADIOV). The methodology explains how to build predictive statistical models for software reliability and robustness and shows how simulation and analysis techniques can be combined with structural design and architecture methods to effectively produce software and information systems at Six Sigma levels. 
DFSS in software acts as a glue to blend the classical modelling techniques of software engineering, such as object-oriented design or Evolutionary Rapid Development, with statistical, predictive models and simulation techniques. The methodology provides software engineers with practical tools for measuring and predicting the quality attributes of the software product and also enables them to include software in system reliability models. Data mining and predictive analytics application Although many tools used in DFSS consulting, such as response surface methodology, transfer functions via linear and non-linear modeling, axiomatic design and simulation, have their origin in inferential statistics, statistical modeling may overlap with data analytics and data mining. DFSS as a methodology has nonetheless been used successfully as an end-to-end technical project framework for analytics and data-mining projects, and this has been observed by domain experts to be broadly similar in spirit to CRISP-DM. DFSS is claimed to be better suited for encapsulating and effectively handling a higher number of uncertainties, including missing and uncertain data, both in the sharpness of their definition and in their absolute numbers, with respect to analytics and data-mining tasks; six sigma approaches to data mining of this kind are popularly known as "DFSS over CRISP" (CRISP-DM referring to the data-mining application framework methodology of SPSS). With DFSS, data-mining projects have been observed to have a considerably shortened development life cycle. This is typically achieved by conducting data analysis against pre-designed template match tests via a techno-functional approach, using multilevel quality function deployment on the data set. Practitioners claim that progressively more complex KDD templates are created by multiple DOE runs on simulated complex multivariate data, and that the templates, along with their logs, are extensively documented via a decision-tree-based algorithm. DFSS uses quality function deployment and SIPOC for feature engineering of known independent variables, thereby aiding in the techno-functional computation of derived attributes. Once the predictive model has been computed, DFSS studies can also be used to provide stronger probabilistic estimates of predictive-model rank in a real-world scenario. The DFSS framework has been applied successfully to predictive analytics in the HR analytics field, an application area traditionally considered very challenging because of the peculiar complexities of predicting human behavior. References Further reading Del Castillo, E. (2007). Process Optimization: A Statistical Approach. New York: Springer. https://link.springer.com/book/10.1007/978-0-387-71435-6 Product design Six Sigma Product development
Design for Six Sigma
[ "Engineering" ]
2,543
[ "Product design", "Design" ]
718,507
https://en.wikipedia.org/wiki/Antitranspirant
Antitranspirants are compounds applied to the leaves of plants to reduce transpiration. They are used on Christmas trees, on cut flowers, on newly transplanted shrubs, and in other applications to preserve and protect plants from drying out too quickly. They have also been used to protect leaves from salt burn and fungal diseases. They block the active excretion of hydrogen cations from the guard cells. In the presence of carbon dioxide, a rapid acidification of the cytoplasm takes place, leading to stomatal closure. Milbarrow (1974) described the formation of these chemicals in the chloroplast; the chemical then moves to the stomata, where it checks the uptake of potassium ions or induces the loss of potassium ions from the guard cells. Antitranspirants are of two types: metabolic inhibitors and film-forming antitranspirants. Metabolic inhibitors reduce the stomatal opening and increase the leaf resistance to water vapour diffusion without affecting carbon dioxide uptake. Examples include phenylmercury acetate, abscisic acid (ABA), and aspirin. Film-forming antitranspirants form a colorless film on the leaf surface that allows diffusion of gases but not of water vapour. Examples include silicone oils and waxes. References Plant physiology
Antitranspirant
[ "Chemistry", "Biology" ]
276
[ "Plant physiology", "Plants", "Organic chemistry stubs" ]
718,833
https://en.wikipedia.org/wiki/Chute%20%28gravity%29
A chute is a vertical or inclined plane, channel, or passage through which objects are moved by means of gravity. Landform A chute, also known as a race, flume, cat, or river canyon, is a steep-sided passage through which water flows rapidly. Akin to these, man-made chutes, such as the timber slide and log flume, were used in the logging industry to facilitate the downstream transportation of timber along rivers. These are no longer in common use. Man-made chutes may also be a feature of spillways on some dams. Some types of water supply and irrigation systems are gravity fed, hence chutes. These include aqueducts, puquios, and acequias. Building chutes Chutes are in common use in tall buildings to allow fast and efficient transport of items and materials from the upper floors to a central location on one of the lower floors, especially the basement. Chutes may be round, square or rectangular at the top and/or the bottom. Laundry chutes in hotels are placed on each floor to allow the expedient transfer and collection of dirty laundry to the hotel's laundry facility without having to use elevators or stairs. These chutes are generally made of aluminized steel and welded together to avoid any protruding parts that may rip or damage the materials. Home laundry chutes are typically found in homes with basement laundry to allow the collection of all household members' dirty laundry, conveniently near the bedrooms and laundry facilities, without the constant transport of laundry bins from story to story, room to room, or up and down stairs. Home laundry chutes may be less common than previously due to building codes or concern regarding fireblocking (the prevention of fire from spreading from floor to floor), as well as child safety. However, construction including cabinets, doors, lids, and locks may make both risks significantly less than with simple stairwells. Refuse chutes or garbage chutes are common in high-rise apartment buildings and are used to collect all the building's garbage in one place. Often the bottom end of the chute is placed directly above a large, open waste container; at times this arrangement also includes a mechanical waste compactor. This makes garbage collection faster and more efficient, but it can be a hygiene risk due to garbage residue left inside the chutes. Mail chutes are used in some buildings to collect the occupants' mail. A notable example is the Asia Insurance Building. Escape chutes are used, and proposed for use, in the evacuation of mining equipment and high-rise buildings. Construction chutes are used to safely remove rubble and similar demolition materials and waste from taller buildings. These temporary structures typically consist of a chain of cylindrical or conical plastic tubes, each fitted into the top of the one below and tied together, usually with chains. Together they form a long flexible tube, which is hung down the side of the building. The lower end of this tube is placed over a skip or other receptacle, and waste materials are dropped through the top. Heavy-duty steel chutes may also be used when the debris being deposited is especially heavy or the building is particularly tall. An elevator is not a chute, as it does not move by gravity. Chutes in transportation Goust, a hamlet in southwestern France, is notable for its mountainside chute that is used to transport coffins. Chutes are also found in: Hopper cars Hopper barges References Building engineering Landforms
Chute (gravity)
[ "Engineering" ]
706
[ "Building engineering", "Civil engineering", "Architecture" ]
718,855
https://en.wikipedia.org/wiki/Self-organized%20criticality
Self-organized criticality (SOC) is a property of dynamical systems that have a critical point as an attractor. Their macroscopic behavior thus displays the spatial or temporal scale-invariance characteristic of the critical point of a phase transition, but without the need to tune control parameters to a precise value, because the system, effectively, tunes itself as it evolves towards criticality. The concept was put forward by Per Bak, Chao Tang and Kurt Wiesenfeld ("BTW") in a paper published in 1987 in Physical Review Letters, and is considered to be one of the mechanisms by which complexity arises in nature. Its concepts have been applied across fields as diverse as geophysics, physical cosmology, evolutionary biology and ecology, bio-inspired computing and optimization (mathematics), economics, quantum gravity, sociology, solar physics, plasma physics, neurobiology and others. SOC is typically observed in slowly driven non-equilibrium systems with many degrees of freedom and strongly nonlinear dynamics. Many individual examples have been identified since BTW's original paper, but to date there is no known set of general characteristics that guarantee a system will display SOC. Overview Self-organized criticality is one of a number of important discoveries made in statistical physics and related fields over the latter half of the 20th century, discoveries which relate particularly to the study of complexity in nature. For example, the study of cellular automata, from the early discoveries of Stanislaw Ulam and John von Neumann through to John Conway's Game of Life and the extensive work of Stephen Wolfram, made it clear that complexity could be generated as an emergent feature of extended systems with simple local interactions. Over a similar period of time, Benoît Mandelbrot's large body of work on fractals showed that much complexity in nature could be described by certain ubiquitous mathematical laws, while the extensive study of phase transitions carried out in the 1960s and 1970s showed how scale invariant phenomena such as fractals and power laws emerged at the critical point between phases. The term self-organized criticality was first introduced in Bak, Tang and Wiesenfeld's 1987 paper, which clearly linked together those factors: a simple cellular automaton was shown to produce several characteristic features observed in natural complexity (fractal geometry, pink (1/f) noise and power laws) in a way that could be linked to critical-point phenomena. Crucially, however, the paper emphasized that the complexity observed emerged in a robust manner that did not depend on finely tuned details of the system: variable parameters in the model could be changed widely without affecting the emergence of critical behavior: hence, self-organized criticality. Thus, the key result of BTW's paper was its discovery of a mechanism by which the emergence of complexity from simple local interactions could be spontaneous—and therefore plausible as a source of natural complexity—rather than something that was only possible in artificial situations in which control parameters are tuned to precise critical values. An alternative view is that SOC appears when the criticality is linked to a value of zero of the control parameters. Despite the considerable interest and research output generated from the SOC hypothesis, there remains no general agreement with regards to its mechanisms in abstract mathematical form. Bak Tang and Wiesenfeld based their hypothesis on the behavior of their sandpile model. 
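The sandpile model mentioned in the last sentence can be reproduced in a few lines. The sketch below is a minimal Python implementation of a BTW-style abelian sandpile: grains are dropped on random sites of a small grid, any site holding four or more grains topples and sends one grain to each neighbour, and grains leaving the edge are lost. Grid size, number of drops and the random seed are arbitrary illustrative choices; the point is that the recorded avalanche sizes span many orders of magnitude, the heavy-tailed behaviour discussed above.

import numpy as np

rng = np.random.default_rng(0)
N = 30                                  # grid size (arbitrary)
grid = np.zeros((N, N), dtype=int)
avalanche_sizes = []

def relax(grid):
    """Topple until every site holds fewer than 4 grains; return the avalanche size."""
    size = 0
    while True:
        unstable = np.argwhere(grid >= 4)
        if unstable.size == 0:
            return size
        for i, j in unstable:
            grid[i, j] -= 4
            size += 1
            for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if 0 <= ni < N and 0 <= nj < N:   # grains falling off the edge are lost
                    grid[ni, nj] += 1

for _ in range(50_000):                 # slow driving: one grain at a time
    i, j = rng.integers(0, N, size=2)
    grid[i, j] += 1
    avalanche_sizes.append(relax(grid))

sizes = np.array([s for s in avalanche_sizes if s > 0])
print("avalanches recorded:", sizes.size)
print("largest avalanche:", sizes.max(), " mean size:", round(float(sizes.mean()), 2))
# Plotting a histogram of `sizes` on log-log axes gives the roughly straight line
# (power law) that is the signature of self-organized criticality.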
Models of self-organized criticality In chronological order of development: Stick-slip model of fault failure Bak–Tang–Wiesenfeld sandpile Forest-fire model Olami–Feder–Christensen model Bak–Sneppen model Early theoretical work included the development of a variety of alternative SOC-generating dynamics distinct from the BTW model, attempts to prove model properties analytically (including calculating the critical exponents), and examination of the conditions necessary for SOC to emerge. One of the important issues for the latter investigation was whether conservation of energy was required in the local dynamical exchanges of models: the answer in general is no, but with (minor) reservations, as some exchange dynamics (such as those of BTW) do require local conservation at least on average. It has been argued that the energy released in the BTW "sandpile" model should actually generate 1/f² noise rather than 1/f noise. This claim was based on untested scaling assumptions, and a more rigorous analysis showed that sandpile models generally produce 1/fα spectra, with α < 2. However, the dynamics of the accumulated stress do exhibit 1/f noise in the BTW model. Other simulation models were proposed later that could also produce true 1/f noise. In addition to the nonconservative theoretical model mentioned above, other theoretical models for SOC have been based upon information theory, mean field theory, the convergence of random variables, and cluster formation. A continuous model of self-organised criticality has been proposed using tropical geometry. Key theoretical issues yet to be resolved include the calculation of the possible universality classes of SOC behavior and the question of whether it is possible to derive a general rule for determining if an arbitrary algorithm displays SOC. Self-organized criticality in nature SOC has become established as a strong candidate for explaining a number of natural phenomena, including: The magnitude of earthquakes (Gutenberg–Richter law) and frequency of aftershocks (Omori law) Fluctuations in economic systems such as financial markets (references to SOC are common in econophysics) The evolution of proteins Forest fires Neuronal avalanches in the cortex Acoustic emission from fracturing materials Despite the numerous applications of SOC to understanding natural phenomena, the universality of SOC theory has been questioned. For example, experiments with real piles of rice revealed their dynamics to be far more sensitive to parameters than originally predicted. Furthermore, it has been argued that 1/f scaling in EEG recordings is inconsistent with critical states, and whether SOC is a fundamental property of neural systems remains an open and controversial topic. Self-organized criticality and optimization It has been found that the avalanches from an SOC process make effective patterns in a random search for optimal solutions on graphs. An example of such an optimization problem is graph coloring. The SOC process apparently keeps the optimization from getting stuck in a local optimum without the use of any annealing scheme, as suggested by previous work on extremal optimization. See also 1/f noise Complex systems Critical brain hypothesis Critical exponents Detrended fluctuation analysis, a method to detect power-law scaling in time series. Dual-phase evolution, another process that contributes to self-organization in complex systems. Fractals Ilya Prigogine, a systems scientist who helped formalize dissipative system behavior in general terms.
Power laws Red Queen hypothesis Scale invariance Self-organization Self-organized criticality control References Further reading Papercore summary. Self-organized criticality on arxiv.org Critical phenomena Applied and interdisciplinary physics Chaos theory Self-organization
Self-organized criticality
[ "Physics", "Materials_science", "Mathematics" ]
1,413
[ "Self-organization", "Physical phenomena", "Applied and interdisciplinary physics", "Critical phenomena", "Condensed matter physics", "Statistical mechanics", "Dynamical systems" ]
719,460
https://en.wikipedia.org/wiki/Laplace%E2%80%93Runge%E2%80%93Lenz%20vector
In classical mechanics, the Laplace–Runge–Lenz vector (LRL vector) is a vector used chiefly to describe the shape and orientation of the orbit of one astronomical body around another, such as a binary star or a planet revolving around a star. For two bodies interacting by Newtonian gravity, the LRL vector is a constant of motion, meaning that it is the same no matter where it is calculated on the orbit; equivalently, the LRL vector is said to be conserved. More generally, the LRL vector is conserved in all problems in which two bodies interact by a central force that varies as the inverse square of the distance between them; such problems are called Kepler problems. The hydrogen atom is a Kepler problem, since it comprises two charged particles interacting by Coulomb's law of electrostatics, another inverse-square central force. The LRL vector was essential in the first quantum mechanical derivation of the spectrum of the hydrogen atom, before the development of the Schrödinger equation. However, this approach is rarely used today. In classical and quantum mechanics, conserved quantities generally correspond to a symmetry of the system. The conservation of the LRL vector corresponds to an unusual symmetry; the Kepler problem is mathematically equivalent to a particle moving freely on the surface of a four-dimensional (hyper-)sphere, so that the whole problem is symmetric under certain rotations of the four-dimensional space. This higher symmetry results from two properties of the Kepler problem: the velocity vector always moves in a perfect circle and, for a given total energy, all such velocity circles intersect each other in the same two points. The Laplace–Runge–Lenz vector is named after Pierre-Simon de Laplace, Carl Runge and Wilhelm Lenz. It is also known as the Laplace vector, the Runge–Lenz vector and the Lenz vector. Ironically, none of those scientists discovered it. The LRL vector has been re-discovered and re-formulated several times; for example, it is equivalent to the dimensionless eccentricity vector of celestial mechanics. Various generalizations of the LRL vector have been defined, which incorporate the effects of special relativity, electromagnetic fields and even different types of central forces. Context A single particle moving under any conservative central force has at least four constants of motion: the total energy and the three Cartesian components of the angular momentum vector with respect to the center of force. The particle's orbit is confined to the plane defined by the particle's initial momentum (or, equivalently, its velocity ) and the vector between the particle and the center of force (see Figure 1). This plane of motion is perpendicular to the constant angular momentum vector ; this may be expressed mathematically by the vector dot product equation . Given its mathematical definition below, the Laplace–Runge–Lenz vector (LRL vector) is always perpendicular to the constant angular momentum vector for all central forces (). Therefore, always lies in the plane of motion. As shown below, points from the center of force to the periapsis of the motion, the point of closest approach, and its length is proportional to the eccentricity of the orbit. The LRL vector is constant in length and direction, but only for an inverse-square central force. For other central forces, the vector is not constant, but changes in both length and direction. 
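A quick numerical check makes this constancy concrete. The sketch below, in Python, integrates a bound Kepler orbit with a velocity-Verlet scheme and evaluates the vector p × L − m k r̂ along the way; this is the standard textbook form of the LRL vector (restated here because the article's displayed formulas are missing from this copy, and defined formally in the Definition section below), and the mass, force constant and initial conditions are arbitrary illustrative values. Its components stay fixed to within the integration error, unlike what one would find for a non-inverse-square force.

import numpy as np

m, k = 1.0, 1.0                          # illustrative mass and force constant
r = np.array([1.0, 0.0, 0.0])            # start at perihelion of a bound orbit
v = np.array([0.0, 1.2, 0.0])            # E = v^2/2 - k/r = -0.28 < 0, so bound

def acceleration(r):
    return -k * r / (m * np.linalg.norm(r) ** 3)    # inverse-square attraction

def lrl(r, v):
    p = m * v
    L = np.cross(r, p)
    return np.cross(p, L) - m * k * r / np.linalg.norm(r)

A0 = lrl(r, v)
dt, steps = 1e-3, 100_000                # roughly half a dozen orbital periods
a = acceleration(r)
for _ in range(steps):                   # velocity-Verlet integration
    v_half = v + 0.5 * dt * a
    r = r + dt * v_half
    a = acceleration(r)
    v = v_half + 0.5 * dt * a

A1 = lrl(r, v)
print("LRL vector at start:", np.round(A0, 6))
print("LRL vector at end:  ", np.round(A1, 6))
print("largest component drift:", float(np.abs(A1 - A0).max()))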
If the central force is approximately an inverse-square law, the vector is approximately constant in length, but slowly rotates its direction. A generalized conserved LRL vector can be defined for all central forces, but this generalized vector is a complicated function of position, and usually not expressible in closed form. The LRL vector differs from other conserved quantities in the following property. Whereas for typical conserved quantities, there is a corresponding cyclic coordinate in the three-dimensional Lagrangian of the system, there does not exist such a coordinate for the LRL vector. Thus, the conservation of the LRL vector must be derived directly, e.g., by the method of Poisson brackets, as described below. Conserved quantities of this kind are called "dynamic", in contrast to the usual "geometric" conservation laws, e.g., that of the angular momentum. History of rediscovery The LRL vector is a constant of motion of the Kepler problem, and is useful in describing astronomical orbits, such as the motion of planets and binary stars. Nevertheless, it has never been well known among physicists, possibly because it is less intuitive than momentum and angular momentum. Consequently, it has been rediscovered independently several times over the last three centuries. Jakob Hermann was the first to show that is conserved for a special case of the inverse-square central force, and worked out its connection to the eccentricity of the orbital ellipse. Hermann's work was generalized to its modern form by Johann Bernoulli in 1710. At the end of the century, Pierre-Simon de Laplace rediscovered the conservation of , deriving it analytically, rather than geometrically. In the middle of the nineteenth century, William Rowan Hamilton derived the equivalent eccentricity vector defined below, using it to show that the momentum vector moves on a circle for motion under an inverse-square central force (Figure 3). At the beginning of the twentieth century, Josiah Willard Gibbs derived the same vector by vector analysis. Gibbs' derivation was used as an example by Carl Runge in a popular German textbook on vectors, which was referenced by Wilhelm Lenz in his paper on the (old) quantum mechanical treatment of the hydrogen atom. In 1926, Wolfgang Pauli used the LRL vector to derive the energy levels of the hydrogen atom using the matrix mechanics formulation of quantum mechanics, after which it became known mainly as the Runge–Lenz vector. Definition An inverse-square central force acting on a single particle is described by the equation The corresponding potential energy is given by . The constant parameter describes the strength of the central force; it is equal to for gravitational and for electrostatic forces. The force is attractive if and repulsive if . The LRL vector is defined mathematically by the formula where is the mass of the point particle moving under the central force, is its momentum vector, is its angular momentum vector, is the position vector of the particle (Figure 1), is the corresponding unit vector, i.e., , and is the magnitude of , the distance of the mass from the center of force. The SI units of the LRL vector are joule-kilogram-meter (J⋅kg⋅m). This follows because the units of and are kg⋅m/s and J⋅s, respectively. This agrees with the units of (kg) and of (N⋅m2). This definition of the LRL vector pertains to a single point particle of mass moving under the action of a fixed force. 
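Since the displayed equations of the Definition section did not survive in this copy of the article, the following LaTeX block restates, as a hedged reconstruction, the standard textbook forms they normally contain: the inverse-square force and potential, the LRL vector itself, the dimensionless eccentricity vector, and the conic-section orbit equation obtained from the dot product of the LRL vector with the position vector.

\[
\mathbf{F}(r) = -\frac{k}{r^{2}}\,\hat{\mathbf{r}}, \qquad V(r) = -\frac{k}{r},
\]
\[
\mathbf{A} = \mathbf{p}\times\mathbf{L} - m k\,\hat{\mathbf{r}},
\qquad
\mathbf{e} = \frac{\mathbf{A}}{mk} = \frac{\mathbf{p}\times\mathbf{L}}{mk} - \hat{\mathbf{r}},
\]
\[
\frac{1}{r} = \frac{mk}{L^{2}}\left(1 + \frac{A}{mk}\cos\theta\right),
\]

where m is the particle's mass, p its momentum, L = r × p its angular momentum, θ the angle between A and r, and the magnitude of e equals the orbital eccentricity.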
However, the same definition may be extended to two-body problems such as the Kepler problem, by taking as the reduced mass of the two bodies and as the vector between the two bodies. Since the assumed force is conservative, the total energy is a constant of motion, The assumed force is also a central force. Hence, the angular momentum vector is also conserved and defines the plane in which the particle travels. The LRL vector is perpendicular to the angular momentum vector because both and are perpendicular to . It follows that lies in the plane of motion. Alternative formulations for the same constant of motion may be defined, typically by scaling the vector with constants, such as the mass , the force parameter or the angular momentum . The most common variant is to divide by , which yields the eccentricity vector, a dimensionless vector along the semi-major axis whose modulus equals the eccentricity of the conic: An equivalent formulation multiplies this eccentricity vector by the major semiaxis , giving the resulting vector the units of length. Yet another formulation divides by , yielding an equivalent conserved quantity with units of inverse length, a quantity that appears in the solution of the Kepler problem where is the angle between and the position vector . Further alternative formulations are given below. Derivation of the Kepler orbits The shape and orientation of the orbits can be determined from the LRL vector as follows. Taking the dot product of with the position vector gives the equation where is the angle between and (Figure 2). Permuting the scalar triple product yields Rearranging yields the solution for the Kepler equation This corresponds to the formula for a conic section of eccentricity e where the eccentricity and is a constant. Taking the dot product of with itself yields an equation involving the total energy , which may be rewritten in terms of the eccentricity, Thus, if the energy is negative (bound orbits), the eccentricity is less than one and the orbit is an ellipse. Conversely, if the energy is positive (unbound orbits, also called "scattered orbits"), the eccentricity is greater than one and the orbit is a hyperbola. Finally, if the energy is exactly zero, the eccentricity is one and the orbit is a parabola. In all cases, the direction of lies along the symmetry axis of the conic section and points from the center of force toward the periapsis, the point of closest approach. Circular momentum hodographs The conservation of the LRL vector and angular momentum vector is useful in showing that the momentum vector moves on a circle under an inverse-square central force. Taking the dot product of with itself yields Further choosing along the -axis, and the major semiaxis as the -axis, yields the locus equation for , In other words, the momentum vector is confined to a circle of radius centered on . For bounded orbits, the eccentricity corresponds to the cosine of the angle shown in Figure 3. For unbounded orbits, we have and so the circle does not intersect the -axis. In the degenerate limit of circular orbits, and thus vanishing , the circle centers at the origin . For brevity, it is also useful to introduce the variable . This circular hodograph is useful in illustrating the symmetry of the Kepler problem. Constants of motion and superintegrability The seven scalar quantities , and (being vectors, the latter two contribute three conserved quantities each) are related by two equations, and , giving five independent constants of motion. 
(Since the magnitude of , hence the eccentricity of the orbit, can be determined from the total angular momentum and the energy , only the direction of is conserved independently; moreover, since must be perpendicular to , it contributes only one additional conserved quantity.) This is consistent with the six initial conditions (the particle's initial position and velocity vectors, each with three components) that specify the orbit of the particle, since the initial time is not determined by a constant of motion. The resulting 1-dimensional orbit in 6-dimensional phase space is thus completely specified. A mechanical system with degrees of freedom can have at most constants of motion, since there are initial conditions and the initial time cannot be determined by a constant of motion. A system with more than constants of motion is called superintegrable and a system with constants is called maximally superintegrable. Since the solution of the Hamilton–Jacobi equation in one coordinate system can yield only constants of motion, superintegrable systems must be separable in more than one coordinate system. The Kepler problem is maximally superintegrable, since it has three degrees of freedom () and five independent constant of motion; its Hamilton–Jacobi equation is separable in both spherical coordinates and parabolic coordinates, as described below. Maximally superintegrable systems follow closed, one-dimensional orbits in phase space, since the orbit is the intersection of the phase-space isosurfaces of their constants of motion. Consequently, the orbits are perpendicular to all gradients of all these independent isosurfaces, five in this specific problem, and hence are determined by the generalized cross products of all of these gradients. As a result, all superintegrable systems are automatically describable by Nambu mechanics, alternatively, and equivalently, to Hamiltonian mechanics. Maximally superintegrable systems can be quantized using commutation relations, as illustrated below. Nevertheless, equivalently, they are also quantized in the Nambu framework, such as this classical Kepler problem into the quantum hydrogen atom. Evolution under perturbed potentials The Laplace–Runge–Lenz vector is conserved only for a perfect inverse-square central force. In most practical problems such as planetary motion, however, the interaction potential energy between two bodies is not exactly an inverse square law, but may include an additional central force, a so-called perturbation described by a potential energy . In such cases, the LRL vector rotates slowly in the plane of the orbit, corresponding to a slow apsidal precession of the orbit. By assumption, the perturbing potential is a conservative central force, which implies that the total energy and angular momentum vector are conserved. Thus, the motion still lies in a plane perpendicular to and the magnitude is conserved, from the equation . The perturbation potential may be any sort of function, but should be significantly weaker than the main inverse-square force between the two bodies. The rate at which the LRL vector rotates provides information about the perturbing potential . Using canonical perturbation theory and action-angle coordinates, it is straightforward to show that rotates at a rate of, where is the orbital period, and the identity was used to convert the time integral into an angular integral (Figure 5). 
The expression in angular brackets, , represents the perturbing potential, but averaged over one full period; that is, averaged over one full passage of the body around its orbit. Mathematically, this time average corresponds to the following quantity in curly braces. This averaging helps to suppress fluctuations in the rate of rotation. This approach was used to help verify Einstein's theory of general relativity, which adds a small effective inverse-cubic perturbation to the normal Newtonian gravitational potential, Inserting this function into the integral and using the equation to express in terms of , the precession rate of the periapsis caused by this non-Newtonian perturbation is calculated to be which closely matches the observed anomalous precession of Mercury and binary pulsars. This agreement with experiment is strong evidence for general relativity. Poisson brackets Unscaled functions The algebraic structure of the problem is, as explained in later sections, . The three components Li of the angular momentum vector have the Poisson brackets where =1,2,3 and is the fully antisymmetric tensor, i.e., the Levi-Civita symbol; the summation index is used here to avoid confusion with the force parameter defined above. Then since the LRL vector transforms like a vector, we have the following Poisson bracket relations between and : Finally, the Poisson bracket relations between the different components of are as follows: where is the Hamiltonian. Note that the span of the components of and the components of is not closed under Poisson brackets, because of the factor of on the right-hand side of this last relation. Finally, since both and are constants of motion, we have The Poisson brackets will be extended to quantum mechanical commutation relations in the next section and to Lie brackets in a following section. Scaled functions As noted below, a scaled Laplace–Runge–Lenz vector may be defined with the same units as angular momentum by dividing by . Since still transforms like a vector, the Poisson brackets of with the angular momentum vector can then be written in a similar form The Poisson brackets of with itself depend on the sign of , i.e., on whether the energy is negative (producing closed, elliptical orbits under an inverse-square central force) or positive (producing open, hyperbolic orbits under an inverse-square central force). For negative energies—i.e., for bound systems—the Poisson brackets are We may now appreciate the motivation for the chosen scaling of : With this scaling, the Hamiltonian no longer appears on the right-hand side of the preceding relation. Thus, the span of the three components of and the three components of forms a six-dimensional Lie algebra under the Poisson bracket. This Lie algebra is isomorphic to , the Lie algebra of the 4-dimensional rotation group . By contrast, for positive energy, the Poisson brackets have the opposite sign, In this case, the Lie algebra is isomorphic to . The distinction between positive and negative energies arises because the desired scaling—the one that eliminates the Hamiltonian from the right-hand side of the Poisson bracket relations between the components of the scaled LRL vector—involves the square root of the Hamiltonian. To obtain real-valued functions, we must then take the absolute value of the Hamiltonian, which distinguishes between positive values (where ) and negative values (where ). 
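For reference, the bracket relations discussed in this section take the following standard form; this is again a hedged reconstruction, since the displayed formulas are absent here, with D denoting the scaled vector A divided by \sqrt{2m|H|}.

\[
\{L_i, L_j\} = \sum_{s}\varepsilon_{ijs}\,L_s, \qquad
\{A_i, L_j\} = \sum_{s}\varepsilon_{ijs}\,A_s, \qquad
\{A_i, A_j\} = -2mH\sum_{s}\varepsilon_{ijs}\,L_s,
\]
\[
\{D_i, L_j\} = \sum_{s}\varepsilon_{ijs}\,D_s, \qquad
\{D_i, D_j\} = \pm\sum_{s}\varepsilon_{ijs}\,L_s,
\]

where the upper sign holds for bound orbits (H < 0, giving the Lie algebra so(4)) and the lower sign for unbound orbits (H > 0, giving so(3,1)), exactly as described in the surrounding text.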
Laplace-Runge-Lenz operator for the hydrogen atom in momentum space A scaled Laplace-Runge-Lenz operator in momentum space was found in 2022. The formula for the operator is simpler than in position space: where the "degree operator" multiplies a homogeneous polynomial by its degree. Casimir invariants and the energy levels The Casimir invariants for negative energies are and have vanishing Poisson brackets with all components of and , C2 is trivially zero, since the two vectors are always perpendicular. However, the other invariant, C1, is non-trivial and depends only on , and . Upon canonical quantization, this invariant allows the energy levels of hydrogen-like atoms to be derived using only quantum mechanical canonical commutation relations, instead of the conventional solution of the Schrödinger equation. This derivation is discussed in detail in the next section. Quantum mechanics of the hydrogen atom Poisson brackets provide a simple guide for quantizing most classical systems: the commutation relation of two quantum mechanical operators is specified by the Poisson bracket of the corresponding classical variables, multiplied by . By carrying out this quantization and calculating the eigenvalues of the C1 Casimir operator for the Kepler problem, Wolfgang Pauli was able to derive the energy levels of hydrogen-like atoms (Figure 6) and, thus, their atomic emission spectrum. This elegant 1926 derivation was obtained before the development of the Schrödinger equation. A subtlety of the quantum mechanical operator for the LRL vector is that the momentum and angular momentum operators do not commute; hence, the quantum operator cross product of and must be defined carefully. Typically, the operators for the Cartesian components are defined using a symmetrized (Hermitian) product, Once this is done, one can show that the quantum LRL operators satisfy commutation relations exactly analogous to the Poisson bracket relations in the previous section—just replacing the Poisson bracket with times the commutator. From these operators, additional ladder operators for can be defined, These further connect different eigenstates of , i.e., different spin multiplets, among themselves. A normalized first Casimir invariant operator, quantum analog of the above, can likewise be defined, where is the inverse of the Hamiltonian energy operator, and is the identity operator. Applying these ladder operators to the eigenstates |ℓ〉 of the total angular momentum, azimuthal angular momentum and energy operators, the eigenvalues of the first Casimir operator, C1, are seen to be quantized, . Importantly, by dint of the vanishing of C2, they are independent of the ℓ and quantum numbers, making the energy levels degenerate. Hence, the energy levels are given by which coincides with the Rydberg formula for hydrogen-like atoms (Figure 6). The additional symmetry operators have connected the different ℓ multiplets among themselves, for a given energy (and C1), dictating states at each level. In effect, they have enlarged the angular momentum group to . Conservation and symmetry The conservation of the LRL vector corresponds to a subtle symmetry of the system. In classical mechanics, symmetries are continuous operations that map one orbit onto another without changing the energy of the system; in quantum mechanics, symmetries are continuous operations that "mix" electronic orbitals of the same energy, i.e., degenerate energy levels. A conserved quantity is usually associated with such symmetries.
For example, every central force is symmetric under the rotation group SO(3), leading to the conservation of the angular momentum . Classically, an overall rotation of the system does not affect the energy of an orbit; quantum mechanically, rotations mix the spherical harmonics of the same quantum number without changing the energy. The symmetry for the inverse-square central force is higher and more subtle. The peculiar symmetry of the Kepler problem results in the conservation of both the angular momentum vector and the LRL vector (as defined above) and, quantum mechanically, ensures that the energy levels of hydrogen do not depend on the angular momentum quantum numbers and . The symmetry is more subtle, however, because the symmetry operation must take place in a higher-dimensional space; such symmetries are often called "hidden symmetries". Classically, the higher symmetry of the Kepler problem allows for continuous alterations of the orbits that preserve energy but not angular momentum; expressed another way, orbits of the same energy but different angular momentum (eccentricity) can be transformed continuously into one another. Quantum mechanically, this corresponds to mixing orbitals that differ in the and quantum numbers, such as the () and () atomic orbitals. Such mixing cannot be done with ordinary three-dimensional translations or rotations, but is equivalent to a rotation in a higher dimension. For negative energies – i.e., for bound systems – the higher symmetry group is , which preserves the length of four-dimensional vectors In 1935, Vladimir Fock showed that the quantum mechanical bound Kepler problem is equivalent to the problem of a free particle confined to a three-dimensional unit sphere in four-dimensional space. Specifically, Fock showed that the Schrödinger wavefunction in the momentum space for the Kepler problem was the stereographic projection of the spherical harmonics on the sphere. Rotation of the sphere and re-projection results in a continuous mapping of the elliptical orbits without changing the energy, an symmetry sometimes known as Fock symmetry; quantum mechanically, this corresponds to a mixing of all orbitals of the same energy quantum number . Valentine Bargmann noted subsequently that the Poisson brackets for the angular momentum vector and the scaled LRL vector formed the Lie algebra for . Simply put, the six quantities and correspond to the six conserved angular momenta in four dimensions, associated with the six possible simple rotations in that space (there are six ways of choosing two axes from four). This conclusion does not imply that our universe is a three-dimensional sphere; it merely means that this particular physics problem (the two-body problem for inverse-square central forces) is mathematically equivalent to a free particle on a three-dimensional sphere. For positive energies – i.e., for unbound, "scattered" systems – the higher symmetry group is , which preserves the Minkowski length of 4-vectors Both the negative- and positive-energy cases were considered by Fock and Bargmann and have been reviewed encyclopedically by Bander and Itzykson. The orbits of central-force systems – and those of the Kepler problem in particular – are also symmetric under reflection. Therefore, the , and groups cited above are not the full symmetry groups of their orbits; the full groups are , , and O(3,1), respectively. 
Nevertheless, only the connected subgroups, , , and , are needed to demonstrate the conservation of the angular momentum and LRL vectors; the reflection symmetry is irrelevant for conservation, which may be derived from the Lie algebra of the group. Rotational symmetry in four dimensions The connection between the Kepler problem and four-dimensional rotational symmetry can be readily visualized. Let the four-dimensional Cartesian coordinates be denoted where represent the Cartesian coordinates of the normal position vector . The three-dimensional momentum vector is associated with a four-dimensional vector on a three-dimensional unit sphere where is the unit vector along the new axis. The transformation mapping to can be uniquely inverted; for example, the component of the momentum equals and similarly for and . In other words, the three-dimensional vector is a stereographic projection of the four-dimensional vector, scaled by (Figure 8). Without loss of generality, we may eliminate the normal rotational symmetry by choosing the Cartesian coordinates such that the axis is aligned with the angular momentum vector and the momentum hodographs are aligned as they are in Figure 7, with the centers of the circles on the axis. Since the motion is planar, and and are perpendicular, and attention may be restricted to the three-dimensional vector The family of Apollonian circles of momentum hodographs (Figure 7) correspond to a family of great circles on the three-dimensional sphere, all of which intersect the axis at the two foci , corresponding to the momentum hodograph foci at . These great circles are related by a simple rotation about the -axis (Figure 8). This rotational symmetry transforms all the orbits of the same energy into one another; however, such a rotation is orthogonal to the usual three-dimensional rotations, since it transforms the fourth dimension . This higher symmetry is characteristic of the Kepler problem and corresponds to the conservation of the LRL vector. An elegant action-angle variables solution for the Kepler problem can be obtained by eliminating the redundant four-dimensional coordinates in favor of elliptic cylindrical coordinates where , and are Jacobi's elliptic functions. Generalizations to other potentials and relativity The Laplace–Runge–Lenz vector can also be generalized to identify conserved quantities that apply to other situations. In the presence of a uniform electric field , the generalized Laplace–Runge–Lenz vector is where is the charge of the orbiting particle. Although is not conserved, it gives rise to a conserved quantity, namely . Further generalizing the Laplace–Runge–Lenz vector to other potentials and special relativity, the most general form can be written as where and , with the angle defined by and is the Lorentz factor. As before, we may obtain a conserved binormal vector by taking the cross product with the conserved angular momentum vector These two vectors may likewise be combined into a conserved dyadic tensor , In illustration, the LRL vector for a non-relativistic, isotropic harmonic oscillator can be calculated. Since the force is central, the angular momentum vector is conserved and the motion lies in a plane. The conserved dyadic tensor can be written in a simple form although and are not necessarily perpendicular. 
The corresponding Runge–Lenz vector is more complicated, where is the natural oscillation frequency, and Proofs that the Laplace–Runge–Lenz vector is conserved in Kepler problems The following are arguments showing that the LRL vector is conserved under central forces that obey an inverse-square law. Direct proof of conservation A central force acting on the particle is for some function of the radius . Since the angular momentum is conserved under central forces, and where the momentum and where the triple cross product has been simplified using Lagrange's formula The identity yields the equation For the special case of an inverse-square central force , this equals Therefore, is conserved for inverse-square central forces A shorter proof is obtained by using the relation of angular momentum to angular velocity, , which holds for a particle traveling in a plane perpendicular to . Specifying to inverse-square central forces, the time derivative of is where the last equality holds because a unit vector can only change by rotation, and is the orbital velocity of the rotating vector. Thus, is seen to be a difference of two vectors with equal time derivatives. As described elsewhere in this article, this LRL vector is a special case of a general conserved vector that can be defined for all central forces. However, since most central forces do not produce closed orbits (see Bertrand's theorem), the analogous vector rarely has a simple definition and is generally a multivalued function of the angle between and . Hamilton–Jacobi equation in parabolic coordinates The constancy of the LRL vector can also be derived from the Hamilton–Jacobi equation in parabolic coordinates , which are defined by the equations where represents the radius in the plane of the orbit The inversion of these coordinates is Separation of the Hamilton–Jacobi equation in these coordinates yields the two equivalent equations where is a constant of motion. Subtraction and re-expression in terms of the Cartesian momenta and shows that is equivalent to the LRL vector Noether's theorem The connection between the rotational symmetry described above and the conservation of the LRL vector can be made quantitative by way of Noether's theorem. This theorem, which is used for finding constants of motion, states that any infinitesimal variation of the generalized coordinates of a physical system that causes the Lagrangian to vary to first order by a total time derivative corresponds to a conserved quantity In particular, the conserved LRL vector component corresponds to the variation in the coordinates where equals 1, 2 and 3, with and being the -th components of the position and momentum vectors and , respectively; as usual, represents the Kronecker delta. The resulting first-order change in the Lagrangian is Substitution into the general formula for the conserved quantity yields the conserved component of the LRL vector, Lie transformation Noether's theorem derivation of the conservation of the LRL vector is elegant, but has one drawback: the coordinate variation involves not only the position , but also the momentum or, equivalently, the velocity . This drawback may be eliminated by instead deriving the conservation of using an approach pioneered by Sophus Lie. Specifically, one may define a Lie transformation in which the coordinates and the time are scaled by different powers of a parameter λ (Figure 9), This transformation changes the total angular momentum and energy , but preserves their product EL2. 
Therefore, the eccentricity and the magnitude are preserved, as may be seen from the equation for The direction of is preserved as well, since the semiaxes are not altered by a global scaling. This transformation also preserves Kepler's third law, namely, that the semiaxis and the period form a constant . Alternative scalings, symbols and formulations Unlike the momentum and angular momentum vectors and , there is no universally accepted definition of the Laplace–Runge–Lenz vector; several different scaling factors and symbols are used in the scientific literature. The most common definition is given above, but another common alternative is to divide by the quantity to obtain a dimensionless conserved eccentricity vector where is the velocity vector. This scaled vector has the same direction as and its magnitude equals the eccentricity of the orbit, and thus vanishes for circular orbits. Other scaled versions are also possible, e.g., by dividing by alone or by which has the same units as the angular momentum vector . In rare cases, the sign of the LRL vector may be reversed, i.e., scaled by . Other common symbols for the LRL vector include , , , and . However, the choice of scaling and symbol for the LRL vector do not affect its conservation. An alternative conserved vector is the binormal vector studied by William Rowan Hamilton, which is conserved and points along the minor semiaxis of the ellipse. (It is not defined for vanishing eccentricity.) The LRL vector is the cross product of and (Figure 4). On the momentum hodograph in the relevant section above, is readily seen to connect the origin of momenta with the center of the circular hodograph, and to possess magnitude . At perihelion, it points in the direction of the momentum. The vector is denoted as "binormal" since it is perpendicular to both and . Similar to the LRL vector itself, the binormal vector can be defined with different scalings and symbols. The two conserved vectors, and can be combined to form a conserved dyadic tensor , where and are arbitrary scaling constants and represents the tensor product (which is not related to the vector cross product, despite their similar symbol). Written in explicit components, this equation reads Being perpendicular to each another, the vectors and can be viewed as the principal axes of the conserved tensor , i.e., its scaled eigenvectors. is perpendicular to , since and are both perpendicular to as well, . More directly, this equation reads, in explicit components, See also Astrodynamics Orbit Eccentricity vector Orbital elements Bertrand's theorem Binet equation Two-body problem References Further reading Updated version of previous source. . Classical mechanics Orbits Rotational symmetry Vectors (mathematics and physics) Articles containing proofs Mathematical physics
Laplace–Runge–Lenz vector
[ "Physics", "Mathematics" ]
6,775
[ "Applied mathematics", "Theoretical physics", "Classical mechanics", "Mechanics", "Articles containing proofs", "Mathematical physics", "Symmetry", "Rotational symmetry" ]
719,496
https://en.wikipedia.org/wiki/Scagliola
Scagliola (from the Italian scaglia, meaning "chips") is a type of fine plaster used in architecture and sculpture. The same term identifies the technique for producing columns, sculptures, and other architectural elements that resemble inlays in marble. The scagliola technique came into fashion in 17th-century Tuscany as an effective substitute for costly marble inlays, the pietra dura works created for the Medici family in Florence. The use of scagliola declined in the 20th century. Scagliola is a composite substance made from plaster of Paris, glue and natural pigments, imitating marble and other hard stones. The material may be veined with colors and applied to a core, or desired pattern may be carved into a previously prepared scagliola matrix. The pattern's indentations are then filled with the colored, plaster-like scagliola composite, and then polished with flax oil for brightness, and wax for protection. The combination of materials and technique provides a complex texture, and richness of color not available in natural veined marbles. A comparable material is terrazzo. Marmorino is a synonym, but scagliola and terrazzo should not be confused with plaster of Paris, which is one ingredient. Method Batches of pigmented plaster, modified with animal glue are applied to molds, armatures and pre-plastered wall planes in a manner that accurately mimics natural stone, breccia and marble. In one technique, veining is created by drawing strands of raw silk saturated in pigment through the plaster mix. Another technique involves trowelling on several layers of translucent renders and randomly cutting back to a previous layer to achieve colour differential similar to jasper. When dry, the damp surface was pumiced smooth, then buffed with a linen cloth impregnated with Tripoli (a siliceous rottenstone) and charcoal; finally it was buffed with oiled felt; beeswax was sometimes used for this purpose. Because the colours are integral to the plaster, the pattern is more resistant to scratching than with other techniques, such as painting on wood. There are two scagliola techniques: in traditional 'Bavarian scagliola' coloured batches of plaster of Paris are worked to a stiff, dough-like consistency. The plaster is modified with the addition of animal glues such as isinglass or hide glue. 'Marezzo scagliola' is worked with the pigmented batches of plaster in a liquid state and relies mainly on the use of Keene's cement, a unique gypsum plaster product in which plaster of Paris was steeped in alum or borate, then burned in a kiln and ground to a fine powder; invented around 1840, it sets to an exceptionally hard state. It is typically used without the addition of animal glues. Marezzo scagliola is often called American scagliola because of its widespread use in the United States in the late nineteenth and early twentieth century. Slabs of Marezzo scagliola may be used as table tops. When set, scagliola is hard enough to be turned on a lathe to form vases, balusters and finials. History While there is evidence of scagliola decoration in ancient Roman architecture, scagliola decoration became popular in Italian Baroque buildings in the 17th century, and was imitated throughout Europe until the 19th century. Superb altar frontals using this technique are to be found at the Certosa di Padula in the Campania, Southern Italy. An early use of scagliola in England is in a fireplace at Ham House, Surrey, which was brought from Italy along with the window sill in the reign of Charles II. 
This employs the use of a scagliola background which was then cut into to lay in the design. In 1761, a scagliolista, Domenico Bartoli, from Livorno arrived in London and was employed by William Constable of Burton Constable in Yorkshire. Here he produced two chimneypieces in white marble inlaid with the scagliola embellishments directly into cut matrices in the marble. Apart from the protective edges of altars at Padula this seems to be the first use of this technique. In 1766 he went into partnership with Johannes Richter, possibly from Dresden, who may have brought a young Pietro Bossi with him. The name Bossi is associated with a family of Northern Italian scagliolisti. Bartoli supplied table tops to Ireland and one chimneypiece at Belvedere House in Dublin could be attributed to Richter. Their styles are very different. There is little evidence that either of them came to Ireland. Pietro Bossi arrived in Dublin in 1784 and probably died there in 1798. He produced a number of chimneypieces in Dublin of very good quality. Scagliola inlay proved to be desirable in Ireland and there appears to be a continuation long after it became unfashionable in England. In 1911, Herbert Cescinsky, in English Furniture remarked that scagliola had been popular in Dublin fifty years before. This would explain one at 86, Stephen's Green, clearly an 18th. Century chimneypiece, which has been later embellished in the mid 19th. Century for Crofton Vanderleur, formerly at 4, Parnell Square. A later firm, Sharpe & Emery, Pearce St., Dublin produce a number of examples in the neo-classical Bossi style, sometimes using original chimneypieces. The correspondence between British Resident in Florence Sir Horace Mann and Horace Walpole describes the process of obtaining a prized scagliola table top. Having received his first top from the Irishman Friar Ferdinando Henrico Hugford (1695–1771) around 1740 Walpole had asked his friend Mann to acquire some more... (one of these tables is at The Vyne. That table has the arms of Walpole (with his post 1726 Garter Knight embellishments) impaling Shorter - for Prime Minister Sir Robert Walpole and his first wife Catherine Shorter, who died 20 August 1737. He married Maria Skerret in early 1738, thus The Vyne's table could seem have been ordered before c1736-37). In a letter dated 26 November 1741 Mann writes to Walpole: Your scagliola table was near finished when behold the stone on which the stuff is put, opened of itself so that all that was done, to his [Hugford's] great mortification is spoilt. He would have been off for beginning again on account of his eyes etc., but I have begged he will do it and he is about it and on 15 July 1742: Your scagliola table is almost finished (you remember the first he [Hugford] undertook broke when near done) and is very handsome, but even in this commission my success is not complete, for I cannot persuade the padre [Hugford] to make its companion and on 30 October 1742: Your scagliola table is finished, though I have not got it home. The nasty priest [Hugford] will have 25 zecchins [£12 10s] besides many thanks, for the preference given to me, for some simple English have been tampering with him and offered 30 to get it, though it is by no means such a fine performance. The priest wishes I would not take it, as he would make a present of it to the Pope. He leaves Florence for good and on 11 July 1747: You bid me get you two scagliola tables, but don't mention the size or any other particulars. 
The man who made yours is no longer in Florence. Here is a scholar of his [Don Pietro Belloni?], but vastly inferior to him, and so slow in working that he has been almost three years about a pair for a Mr Leson [Joseph Leeson], and requires still six months more. I will endeavour to get somebody to write to the first friar [Hugford] and to engage him to make two tables in his convent and send them to Florence, of which I hope to be able to give you an account by next post. and on 10 October 1749: I am glad your scagliola tables please. You must make the greater account of them, as it is impossible to get any more of the same man [Hugford], nor indeed of his disciple here [Belloni], who is a priest too, and has been four years about a pair I bespoke of him, which he tells me plainly he cannot finish in less than two more. They work for diversion and won't be hurried. In modern times—Tusmore House, Oxfordshire: The great triumph of the saloon, however, is the use of scagliola, including the richly coloured and figured Sienna shafts of the eight fluted Corinthian columns...and the urns, entablature and balustrade to the second-floor landing which gives access to four plaster-vaulted ante rooms serving the main bedrooms. All this scagliola was produced by Richard Feroze, England's leading contemporary scagliola-maker. Italian plasterworkers produced scagliola columns and pilasters for Robert Adam at Syon House (notably the columns in the Anteroom) and at Kedleston Hall (notably the pilasters in the Saloon). In 1816 the Coade Ornamental Stone Manufactory extended their practice to include scagliola; their scagliola was used by Benjamin Dean Wyatt at Apsley House, London. In the United States scagliola was popular in the 19th and 20th centuries. Important US buildings featuring scagliola include the Mississippi State Capitol in Jackson, Mississippi (1903), Allen County Courthouse in Fort Wayne, Indiana, Belcourt Castle in Newport, Rhode Island, in the old El Paso County Courthouse (Colorado) in Colorado Springs, in the Kansas State Capitol in Topeka, Kansas, in Shea's Performing Arts Center in Buffalo, New York, and in the Navarro County Courthouse in Corsicana, TX. St. Louis Union Station in St. Louis, Missouri, prominently features scagliola in its magnificent Grand Hall, the Rialto Square Theatre, Joliet, IL, Cathedral of St. Helena in Helena, MT, Congregation Shearith Israel in New York City, Milwaukee Public Library Central Library in Milwaukee, WI and the French Lick Resort Casino, French Lick, IN which recently underwent a major restoration. Scagliola has historically been considered an Ersatz material and an inexpensive alternative to natural stone. However, it has eventually come to be recognised as an exceptional example of the plasterer's craft and is now prized for its historic value as well as being used in new construction because of its benefits as a plastic material suited to molding in ornate shapes. Scagliola columns are not generally built of the solid material. Instead scagliola is trowelled onto a canvas which is wrapped around the column's core, and the canvas peeled away when semi-hardened. The scagliola is then surfaced in place. The verd antique columns and pilasters in the Anteroom at Syon House are made out of marble not scagliola as it is widely perceived (a beautiful and rare, predominantly green marble that was quarried in Larissa of Greece since antiquity). These columns are not solid. 
Round sections of marble were painstakingly cut as a veneer of an approximate thickness of 5–6 mm and then glued onto a hollow column core that was probably made of plaster. On closer inspection the viewer can see the joints of the various sections. The discerning eye will soon realise that this is verd antique veneered marble and not verd antique scagliola. The 3.6 metre high verd antique scagliola columns that can be seen at Dropmore House, Buckinghamshire, are based on the colours and design of this historical work at Syon House. Both research and execution of these new columns were undertaken recently by the contemporary scagliolist Michael Koumbouzis. Gallery See also Coade stone Marbleizing Polished plaster References John Fleming, "The Hugfords of Florence", The Connoisseur, 1955, cxxxvi, 109. Conor O'Neill, "In Search of Bossi", The Journal of the Irish Georgian Society, Vol. I, 1998, pp. 146–175. A. M. Massinelli, Scagliola. L'arte della pietra di luna, Roma, 1997. R. de Salis, Fane-Stanhope scagliola, London, 2008. Donald Cameron, "Scagliola Inlay Work: the problems of attribution", The Journal of the Irish Georgian Society, Vol. VII, 2004, pp. 140–155. Patrick Pilkington, "The Chimneypiece in Ireland in the 18th Century", Ireland's Antiques & Period Properties, Vol. 5, no. 2, 2008–9, pp. 78–82. Notes Architecture Building materials Craft materials Plastering Wallcoverings Italian inventions
Scagliola
[ "Physics", "Chemistry", "Engineering" ]
2,730
[ "Building engineering", "Coatings", "Architecture", "Construction", "Materials", "Plastering", "Matter", "Building materials" ]
719,534
https://en.wikipedia.org/wiki/Biological%20carbon%20fixation
Biological carbon fixation, or carbon assimilation, is the process by which living organisms convert inorganic carbon (particularly carbon dioxide, CO2) to organic compounds. These organic compounds are then used to store energy and as structures for other biomolecules. Carbon is primarily fixed through photosynthesis, but some organisms use chemosynthesis in the absence of sunlight. Chemosynthesis is carbon fixation driven by chemical energy rather than by sunlight. The process of biological carbon fixation plays a crucial role in the global carbon cycle, as it serves as the primary mechanism for removing CO2 from the atmosphere and incorporating it into living biomass. The primary production of organic compounds allows carbon to enter the biosphere. Carbon is considered essential for life as a base element for building organic compounds. Carbon forms the basis of biogeochemical cycles (or nutrient cycles) and drives communities of living organisms. Understanding biological carbon fixation is essential for comprehending ecosystem dynamics, climate regulation, and the sustainability of life on Earth. Organisms that grow by fixing carbon, such as most plants and algae, are called autotrophs. These include photoautotrophs (which use sunlight) and lithoautotrophs (which use inorganic oxidation). Heterotrophs, such as animals and fungi, are not capable of carbon fixation but are able to grow by consuming the carbon fixed by autotrophs or other heterotrophs. Seven natural autotrophic carbon fixation pathways are currently known. They are: i) the Calvin-Benson-Bassham (Calvin) cycle, ii) the reverse Krebs (rTCA) cycle, iii) the reductive acetyl-CoA (Wood-Ljungdahl) pathway, iv) the 3-hydroxypropionate [3-HP] bicycle, v) the 3-hydroxypropionate/4-hydroxybutyrate (3-HP/4-HB) cycle, vi) the dicarboxylate/4-hydroxybutyrate (DC/4-HB) cycle, and vii) the reductive glycine (rGly) pathway. "Fixed carbon," "reduced carbon," and "organic carbon" may all be used interchangeably to refer to various organic compounds. Net vs. gross CO2 fixation The primary form of fixed inorganic carbon is carbon dioxide (CO2). It is estimated that approximately 250 billion tons of carbon dioxide are converted by photosynthesis annually. The majority of the fixation occurs in terrestrial environments, especially the tropics. The gross amount of carbon dioxide fixed is much larger, since approximately 40% is consumed by respiration following photosynthesis. Historically, it is estimated that approximately 2×10¹¹ billion tons of carbon has been fixed since the origin of life. Overview of the carbon fixation cycles Seven autotrophic carbon fixation pathways are known: the Calvin cycle, the reverse Krebs cycle, the reductive acetyl-CoA pathway, the 3-HP bicycle, the 3-HP/4-HB cycle, the DC/4-HB cycle, and the reductive glycine pathway. The Calvin cycle is found in plants, algae, cyanobacteria, aerobic proteobacteria, and purple bacteria. It fixes carbon in the chloroplasts of plants and algae, and in the cyanobacteria. It also fixes carbon during anoxygenic photosynthesis in one type of Pseudomonadota called purple bacteria, and in some non-phototrophic Pseudomonadota. Of the other autotrophic pathways, three are known only in bacteria (the reductive citric acid cycle, the 3-hydroxypropionate cycle, and the reductive glycine pathway), two only in archaea (two variants of the 3-hydroxypropionate cycle), and one in both bacteria and archaea (the reductive acetyl CoA pathway). 
Sulfur- and hydrogen-oxidizing bacteria often use the Calvin cycle or the reductive citric acid cycle. List of pathways Calvin cycle The Calvin cycle accounts for 90% of biological carbon fixation. Consuming adenosine triphosphate (ATP) and nicotinamide adenine dinucleotide phosphate (NADPH), the Calvin cycle in plants accounts for the predominance of carbon fixation on land. In algae and cyanobacteria, it accounts for the dominance of carbon fixation in the oceans. The Calvin cycle converts carbon dioxide into sugar, as triose phosphate (TP), which is glyceraldehyde 3-phosphate (GAP) together with dihydroxyacetone phosphate (DHAP): 3 CO2 + 12 e− + 12 H+ + Pi → TP + 4 H2O An alternative perspective accounts for NADPH (source of e−) and ATP: 3 CO2 + 6 NADPH + 6 H+ + 9 ATP + 5 H2O → TP + 6 NADP+ + 9 ADP + 8 Pi The formula for inorganic phosphate (Pi) is HOPO3^2− + 2 H+. Formulas for triose and TP are C2H3O2-CH2OH and C2H3O2-CH2OPO3^2− + 2 H+. Reverse Krebs cycle The reverse Krebs cycle, also known as the reverse TCA cycle (rTCA) or reductive citric acid cycle, is an alternative to the standard Calvin-Benson cycle for carbon fixation. It has been found in strictly anaerobic or microaerobic bacteria (such as Aquificales) and anaerobic archaea. It was discovered by Evans, Buchanan and Arnon in 1966 working with the photosynthetic green sulfur bacterium Chlorobium limicola. In particular, it is one of the pathways most used by the Campylobacterota in hydrothermal vents. This feature allows primary production in the ocean's aphotic environments, or "dark primary production." Without it, there would be no primary production in aphotic environments, which would lead to habitats without life. The cycle involves the biosynthesis of acetyl-CoA from two molecules of CO2. The key steps of the reverse Krebs cycle are: Oxaloacetate to malate, using NADH + H+: Oxaloacetate + NADH/H+ → Malate + NAD+ Fumarate to succinate, catalyzed by an oxidoreductase, fumarate reductase: Fumarate + FADH2 ⇌ Succinate + FAD Succinate to succinyl-CoA, an ATP-dependent step: Succinate + ATP + CoA → Succinyl-CoA + ADP + Pi Succinyl-CoA to alpha-ketoglutarate, using one molecule of CO2: Succinyl-CoA + CO2 + Fd(red) → alpha-ketoglutarate + Fd(ox) Alpha-ketoglutarate to isocitrate, using NADPH + H+ and another molecule of CO2: Alpha-ketoglutarate + CO2 + NAD(P)H/H+ → Isocitrate + NAD(P)+ Citrate converted into oxaloacetate and acetyl-CoA, an ATP-dependent step whose key enzyme is ATP citrate lyase: Citrate + ATP + CoA → Oxaloacetate + Acetyl-CoA + ADP + Pi This pathway is cyclic due to the regeneration of the oxaloacetate. The gammaproteobacterial symbionts of the tube worm Riftia pachyptila switch from the Calvin-Benson cycle to the rTCA cycle in response to concentrations of H2S. Reductive acetyl CoA pathway The reductive acetyl-CoA pathway, also known as the Wood-Ljungdahl pathway, uses CO2 as an electron acceptor and carbon source, and H2 as an electron donor, to form acetic acid. This metabolism is widespread within the phylum Bacillota, especially in the Clostridia. The pathway is also used by methanogens, which are mainly Euryarchaeota, and several anaerobic chemolithoautotrophs, such as sulfate-reducing bacteria and archaea. It is probably also performed by the Brocadiales, an order of Planctomycetota that oxidize ammonia in anaerobic conditions. Hydrogenotrophic methanogenesis, which is only found in certain archaea and accounts for 80% of global methanogenesis, is also based on the reductive acetyl CoA pathway. 
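Looking back at the Calvin-cycle stoichiometry quoted above, a minimal bookkeeping sketch can make the energy cost explicit. It only restates the coefficients already given in the text; the doubling to a hexose (glucose) equivalent is the usual textbook extrapolation rather than something stated here.

```python
# Bookkeeping for the Calvin-cycle net equation quoted above:
# 3 CO2 + 6 NADPH + 6 H+ + 9 ATP + 5 H2O -> TP + 6 NADP+ + 9 ADP + 8 Pi
co2_fixed = 3    # molecules of CO2 per triose phosphate (TP)
nadph_used = 6   # reducing equivalents consumed
atp_used = 9     # ATP consumed

print(f"ATP per CO2 fixed:   {atp_used / co2_fixed:.1f}")    # 3.0
print(f"NADPH per CO2 fixed: {nadph_used / co2_fixed:.1f}")  # 2.0

# Two triose phosphates are needed per hexose, giving the familiar
# textbook totals of 18 ATP and 12 NADPH per glucose equivalent.
print(f"ATP per glucose equivalent:   {2 * atp_used}")    # 18
print(f"NADPH per glucose equivalent: {2 * nadph_used}")  # 12
```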
The Carbon Monoxide Dehydrogenase/Acetyl-CoA Synthase is the oxygen-sensitive enzyme that permits the reduction of CO2 to CO and the synthesis of acetyl-CoA in several reactions. One branch of this pathway, the methyl branch, is similar but non-homologous between bacteria and archaea. In this branch, CO2 is reduced to a methyl residue bound to a cofactor. The intermediates are formate for bacteria and formyl-methanofuran for archaea, and the carriers are also different (tetrahydrofolate in bacteria and tetrahydropterins in archaea), as are the enzymes forming the cofactor-bound methyl group. Otherwise, the carbonyl branch is homologous between the two domains and consists of the reduction of another molecule of CO2 to a carbonyl residue bound to an enzyme, catalyzed by the CO dehydrogenase/acetyl-CoA synthase. This key enzyme is also the catalyst for the formation of acetyl-CoA starting from the products of the previous reactions, the methyl and the carbonyl residues. This carbon fixation pathway requires only one molecule of ATP for the production of one molecule of pyruvate, which makes this process one of the main choices for chemolithoautotrophs limited in energy and living in anaerobic conditions. 3-Hydroxypropionate [3-HP] bicycle The 3-hydroxypropionate bicycle, also known as the 3-HP/malyl-CoA cycle and discovered only in 1989, is utilized by green non-sulfur phototrophs of the family Chloroflexaceae, including the most prominent member of this family, Chloroflexus aurantiacus, in which the pathway was first discovered and demonstrated. The 3-hydroxypropionate bicycle is composed of two cycles, and its name comes from 3-hydroxypropionate, a characteristic intermediate. The first cycle is a route for the synthesis of glyoxylate. During this cycle, two equivalents of bicarbonate are fixed by the action of two enzymes: acetyl-CoA carboxylase catalyzes the carboxylation of acetyl-CoA to malonyl-CoA, and propionyl-CoA carboxylase catalyzes the carboxylation of propionyl-CoA to methylmalonyl-CoA. From this point, a series of reactions lead to the formation of glyoxylate, which will thus become part of the second cycle. In the second cycle, glyoxylate is combined with one equivalent of propionyl-CoA, forming methylmalyl-CoA. This, in turn, is then converted through a series of reactions into citramalyl-CoA. The citramalyl-CoA is split into pyruvate and acetyl-CoA by the enzyme MMC lyase. The pyruvate is released at this point, while the acetyl-CoA is reused and carboxylated again to malonyl-CoA, thus reconstituting the cycle. A total of 19 reactions are involved in the 3-hydroxypropionate bicycle, and 13 multifunctional enzymes are used. The multi-functionality of these enzymes is an important feature of this pathway, which allows the fixation of three bicarbonate molecules. It is a costly pathway: 7 ATP molecules are consumed to synthesise the new pyruvate and 3 ATP for the triose phosphate. An important characteristic of this cycle is that it allows the co-assimilation of numerous compounds, making it suitable for mixotrophic organisms. Cycles related to the 3-hydroxypropionate cycle A variant of the 3-hydroxypropionate cycle was found to operate in the aerobic extreme thermoacidophilic archaeon Metallosphaera sedula. This pathway is called the 3-hydroxypropionate/4-hydroxybutyrate (3-HP/4-HB) cycle. Yet another variant of the 3-hydroxypropionate cycle is the dicarboxylate/4-hydroxybutyrate (DC/4-HB) cycle. 
It was discovered in anaerobic archaea. It was proposed in 2008 for the hyperthermophilic archaeon Ignicoccus hospitalis. Enoyl-CoA carboxylases/reductases CO2 fixation is catalyzed by enoyl-CoA carboxylases/reductases. Non-autotrophic pathways Although heterotrophs do not rely on carbon dioxide for biosynthesis, some carbon dioxide is incorporated in their metabolism. Notably, pyruvate carboxylase consumes carbon dioxide (as bicarbonate ions) as part of gluconeogenesis, and carbon dioxide is consumed in various anaplerotic reactions. 6-phosphogluconate dehydrogenase catalyzes the reductive carboxylation of ribulose 5-phosphate to 6-phosphogluconate in E. coli under elevated CO2 concentrations. Carbon isotope discrimination Some carboxylases, particularly RuBisCO, preferentially bind the lighter carbon stable isotope carbon-12 over the heavier carbon-13. This is known as carbon isotope discrimination and results in carbon-12 to carbon-13 ratios in the plant that are higher than in the free air. Measurement of this isotopic ratio is important in the evaluation of water use efficiency in plants, and also in assessing the possible or likely sources of carbon in global carbon cycle studies. Biological carbon fixation in soils In addition to photosynthetic and chemosynthetic processes, biological carbon fixation occurs in soil through the activity of microorganisms, such as bacteria and fungi. These soil microbes play a crucial role in the global carbon cycle by sequestering carbon from decomposed organic matter and recycling it back into the soil, thereby contributing to soil fertility and ecosystem productivity. In soil environments, organic matter derived from dead plant and animal material undergoes decomposition, a process carried out by a diverse community of microorganisms. During decomposition, complex organic compounds are broken down into simpler molecules by the action of enzymes produced by bacteria, fungi, and other soil organisms. As organic matter is decomposed, carbon is released in various forms, including carbon dioxide (CO2) and dissolved organic carbon (DOC). However, not all of the carbon released during decomposition is immediately lost to the atmosphere; a significant portion is retained in the soil through processes collectively known as soil carbon sequestration. Soil microbes, particularly bacteria and fungi, play a pivotal role in this process by incorporating decomposed organic carbon into their biomass or by facilitating the formation of stable organic compounds, such as humus and soil organic matter. One key mechanism by which soil microbes sequester carbon is through the production of microbial biomass. Bacteria and fungi assimilate carbon from decomposed organic matter into their cellular structures as they grow and reproduce. This microbial biomass serves as a reservoir for stored carbon in the soil, effectively sequestering carbon from the atmosphere. Additionally, soil microbes contribute to the formation of stable soil organic matter through the synthesis of extracellular polymers, enzymes, and other biochemical compounds. These substances help bind together soil particles, forming aggregates that protect organic carbon from microbial decomposition and physical erosion. Over time, these aggregates accumulate in the soil, resulting in the formation of soil organic matter, which can persist for centuries to millennia. 
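As a brief numerical illustration of the carbon isotope discrimination described earlier in this section, the sketch below uses the conventional delta notation, δ13C = (Rsample/Rstandard − 1) × 1000‰. The free-air value of about −8‰ and the leaf value of −28‰ are typical, assumed figures for a RuBisCO-using plant, chosen only to show the arithmetic; they are not taken from this article.

```python
# Illustrative carbon isotope discrimination calculation.
# Delta values are in per mil relative to the reference standard;
# the numbers below are typical/assumed, not quoted from this article.
delta_air = -8.0     # delta-13C of free atmospheric CO2 (per mil, approximate)
delta_plant = -28.0  # assumed delta-13C of plant material fixed via RuBisCO

# Discrimination (Delta) expresses how strongly the lighter 12C is favoured:
discrimination = (delta_air - delta_plant) / (1 + delta_plant / 1000)
print(f"Discrimination Delta ~ {discrimination:.1f} per mil")  # ~20.6

# Equivalent statement in terms of 13C/12C isotope ratios: the plant is
# depleted in 13C relative to air, i.e. its 12C/13C ratio is higher.
ratio_plant_vs_air = (1 + delta_plant / 1000) / (1 + delta_air / 1000)
print(f"13C/12C ratio of plant relative to air: {ratio_plant_vs_air:.4f}")  # < 1
```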
The sequestration of carbon in soil not only helps mitigate the accumulation of atmospheric CO2 and slow climate change but also enhances soil fertility, water retention, and nutrient cycling, thereby supporting plant growth and ecosystem productivity. Consequently, understanding the role of soil microbes in biological carbon fixation is essential for managing soil health, mitigating climate change, and promoting sustainable land management practices. Biological carbon fixation is a fundamental process that sustains life on Earth by regulating atmospheric CO2 levels, supporting the growth of plants and other photosynthetic organisms, and maintaining ecological balance. See also Blue carbon Nitrogen fixation Oxygen cycle Biogeochemical cycles References Further reading Photosynthesis Carbon Metabolic pathways Atmospheric chemistry Microbiology
Biological carbon fixation
[ "Chemistry", "Biology" ]
3,578
[ "Biochemistry", "Microbiology", "Photosynthesis", "nan", "Microscopy", "Metabolic pathways", "Metabolism" ]
719,984
https://en.wikipedia.org/wiki/Yellowcake
Yellowcake (also called urania) is a type of powdered uranium concentrate obtained from leach solutions, in an intermediate step in the processing of uranium ores. It is a step in the processing of uranium after it has been mined but before fuel fabrication or uranium enrichment. Yellowcake concentrates are prepared by various extraction and refining methods, depending on the types of ores. Typically, yellowcakes are obtained through the milling and chemical processing of uranium ore, forming a coarse powder that has a pungent odor, is insoluble in water, and contains about 80% uranium oxide, which melts at approximately 2880 °C. Overview Originally, raw uranium ore was extracted by traditional mining, and this is still the case in many mines. It is first crushed to a fine powder by passing it through crushers and grinders to produce "pulped" ore. This is further processed with concentrated acid, alkaline, or peroxide solutions to leach out the uranium. However, nearly half of yellowcake production is now produced by in situ leaching in which the solution is pumped through the uranium deposit without disturbing the ground. Yellowcake is what remains after drying and filtering. The yellowcake produced by most modern mills is actually brown or black, not yellow; the name comes from the color and texture of the concentrates produced by early mining operations. Initially, the compounds formed in yellowcakes were not identified; in 1970, the U.S. Bureau of Mines still referred to yellowcakes as the final precipitate formed in the milling process and considered it to be ammonium diuranate or sodium diuranate. The compositions were variable and depended upon the leachant and subsequent precipitating conditions. The compounds identified in yellowcakes include uranyl hydroxide, uranyl sulfate, sodium para-uranate, and uranyl peroxide, along with various uranium oxides. Modern yellowcake typically contains 70% to 90% triuranium octoxide (U3O8) by weight. Other oxides such as uranium dioxide (UO2) and uranium trioxide (UO3) exist. Yellowcake is produced by all countries in which uranium ore is mined. Further processing Yellowcake is used in the preparation of uranium fuel for nuclear reactors, for which it is smelted into purified UO2 for use in fuel rods for pressurized heavy-water reactors and other systems that use natural unenriched uranium. Purified uranium can also be enriched into the isotope U-235. In this process, the uranium oxides are combined with fluorine to form uranium hexafluoride gas (UF6). Next, the gas undergoes isotope separation through the process of gaseous diffusion, or in a gas centrifuge. This can produce low-enriched uranium containing up to 20% U-235 that is suitable for use in most large civilian electric-power reactors. With further processing, one obtains highly enriched uranium, containing 20% or more U-235, that is suitable for use in compact nuclear reactors—usually used to power naval warships and submarines. Further processing can yield weapons-grade uranium with U-235 levels usually above 90%, suitable for nuclear weapons. Radioactivity and safety The uranium in yellowcake is almost exclusively (>99%) U-238, with very low radioactivity. U-238 has a half-life of 4.468 billion years and emits radiation at a slow rate. 
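The statement above that U-238 emits radiation at a slow rate can be made quantitative from its half-life alone. A minimal sketch follows (standard decay arithmetic; the atomic mass and Avogadro constant are textbook values, not figures from this article):

```python
import math

# Specific activity of U-238 estimated from the half-life quoted above.
half_life_years = 4.468e9
seconds_per_year = 3.156e7
avogadro = 6.022e23
molar_mass_u238 = 238.05  # g/mol

decay_constant = math.log(2) / (half_life_years * seconds_per_year)  # per second
atoms_per_gram = avogadro / molar_mass_u238

activity_bq_per_gram = decay_constant * atoms_per_gram
print(f"U-238 specific activity ~ {activity_bq_per_gram:,.0f} Bq per gram")
# Roughly 1.2e4 Bq/g, i.e. about 12 kBq per gram: low compared with most
# artificial radionuclides, consistent with the slow decay described above.
```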
This stage of processing is before the more radioactive U-235 is concentrated, so by definition, this stage of uranium has the same radioactivity as it did in nature when it was underground, as the proportions of isotopes are at their native relative concentration. Yellowcake is hazardous when inhaled. See also Uranium ore deposits Uranium mining Uraninite, an ore that is mostly uranium dioxide (UO2) Yellowcake forgery, fraudulently depicted Saddam Hussein trying to buy uranium powder Sequoyah Fuels Corporation, an American company involved in yellowcake processing COMINAK, a Niger uranium mining and processing company SOMAIR, a Niger uranium mining and processing company Vanadium(V) oxide, hydrous precipitates of which are known as "redcake" References Uranium compounds Oxides Nuclear materials
Yellowcake
[ "Physics", "Chemistry" ]
882
[ "Oxides", "Salts", "Materials", "Nuclear materials", "Matter" ]
720,240
https://en.wikipedia.org/wiki/Potassium%20bitartrate
Potassium bitartrate, also known as potassium hydrogen tartrate, with formula KC4H5O6, is a chemical compound with a number of uses. It is the potassium acid salt of tartaric acid (a carboxylic acid). Especially in cooking, it is also known as cream of tartar. It is used as a component of baking powders and baking mixes, as mordant in textile dyeing, as reducer of chromium trioxide in mordants for wool, as a metal processing agent that prevents oxidation, as an intermediate for other potassium tartrates, as a cleaning agent when mixed with a weak acid such as vinegar, and as reference standard pH buffer. Medical uses include as a medical cathartic, as a diuretic, and as a historic veterinary laxative and diuretic. It is produced as a byproduct of winemaking by purifying the precipitate that is deposited in wine barrels. It arises from the tartaric acid and potassium naturally occurring in grapes. In culinary applications, potassium bitartrate is valued for its role in stabilizing egg whites, which enhances the volume and texture of meringues and soufflés. Its acidic properties prevent sugar syrups from crystallizing, aiding in the production of smooth confections such as candies and frostings. When combined with baking soda, it acts as a leavening agent, producing carbon dioxide gas that helps baked goods rise. Additionally, potassium bitartrate is used to stabilize whipped cream, allowing it to retain its shape for longer periods. History Potassium bitartrate was first characterized by Swedish chemist Carl Wilhelm Scheele (1742–1786). This was a result of Scheele's work studying fluorite and hydrofluoric acid. Scheele may have been the first scientist to publish work on potassium bitartrate, but use of potassium bitartrate has been reported to date back 7000 years to an ancient village in northern Iran. Modern applications of cream of tartar started in 1768 after it gained popularity when the French started using it regularly in their cuisine. In 2021, a connection between potassium bitartrate and canine and feline toxicity of grapes was first proposed. Since then, it has been deemed likely as the source of grape and raisin toxicity to pets. Occurrence Potassium bitartrate is naturally formed in grapes from the acid dissociation of tartaric acid into bitartrate and tartrate ions. Potassium bitartrate has a low solubility in water. It crystallizes in wine casks during the fermentation of grape juice, and can precipitate out of wine in bottles. The rate of potassium bitartrate precipitation depends on the rates of nuclei formation and crystal growth, which varies based on a wine's alcohol, sugar, and extract content. The crystals (wine diamonds) will often form on the underside of a cork in wine-filled bottles that have been stored at temperatures below , and will seldom, if ever, dissolve naturally into the wine. Over time, crystal formation is less likely to occur due to the decreasing supersaturation of potassium bitartrate, with the greatest amount of precipitation occurring in the initial few days of cooling. Historically, it was known as beeswing for its resemblance to the sheen of bees' wings. It was collected and purified to produce the white, odorless, acidic powder used for many culinary and other household purposes. These crystals also precipitate out of fresh grape juice that has been chilled or allowed to stand for some time. 
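The dissociation behaviour just described, tartaric acid giving bitartrate and tartrate ions in grapes and wine, can be sketched with a simple Henderson–Hasselbalch style calculation. The pKa values used below (roughly 3.0 and 4.3) and the wine pH of 3.5 are approximate literature and illustrative figures, not taken from this article; the point is only to show why the singly deprotonated bitartrate ion dominates at wine pH, where it can pair with potassium and precipitate.

```python
# Approximate speciation of tartaric acid (H2T) at a given pH,
# using assumed pKa values of ~3.0 and ~4.3 (illustrative only).
pKa1, pKa2 = 3.0, 4.3
pH = 3.5  # a typical wine pH, assumed for illustration

k1, k2 = 10**-pKa1, 10**-pKa2
h = 10**-pH

# Relative abundances of H2T (fully protonated), HT- (bitartrate), T2- (tartrate)
h2t = h * h
ht = k1 * h
t = k1 * k2
total = h2t + ht + t

for name, x in [("H2T (tartaric acid)", h2t), ("HT- (bitartrate)", ht), ("T2- (tartrate)", t)]:
    print(f"{name:22s} {100 * x / total:5.1f} %")
# At pH ~3.5 the bitartrate ion is the most abundant species, which is why
# potassium bitartrate is the salt that crystallizes out of wine.
```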
To prevent crystals from forming in homemade grape jam or jelly, the prerequisite fresh grape juice should be chilled overnight to promote crystallization. The potassium bitartrate crystals are removed by filtering through two layers of cheesecloth. The filtered juice may then be made into jam or jelly. In some cases they adhere to the side of the chilled container, making filtering unnecessary. The presence of crystals is less prevalent in red wines than in white wines. This is because red wines have a higher amount of tannin and colouring matter present as well as a higher sugar and extract content than white wines. Various methods such as promoting crystallization and filtering, removing the active species required for potassium bitartrate precipitation, and adding additives have been implemented to reduce the presence of potassium bitartrate crystals in wine. Applications In food In food, potassium bitartrate is used for: Stabilizing egg whites, increasing their warmth-tolerance and volume Stabilizing whipped cream, maintaining its texture and volume Anti-caking and thickening Preventing sugar syrups from crystallizing by causing some of the sucrose to break down into glucose and fructose Reducing discoloration of boiled vegetables Additionally, it is used as a component of: Baking powder, as an acid ingredient to activate baking soda Salt substitutes, in combination with potassium chloride A similar acid salt, sodium acid pyrophosphate, can be confused with cream of tartar because of its common function as a component of baking powder. Baking Adding cream of tartar to egg whites gives volume to cakes, and makes them more tender. As cream of tartar is added, the pH decreases to around the isoelectric point of the foaming proteins in egg whites. Foaming properties of egg whites are optimal at this pH due to increased protein-protein interactions. The low pH also results in a whiter crumb in cakes due to flour pigments that respond to these pH changes. However, adding too much cream of tartar (>2.4% weight of egg white) can affect the texture and taste of cakes. The optimal cream of tartar concentration to increase volume and the whiteness of interior crumbs without making the cake too tender, is about 1/4 tsp per egg white. As an acid, cream of tartar with heat reduces sugar crystallization in invert syrups by helping to break down sucrose into its monomer components - fructose and glucose in equal parts. Preventing the formation of sugar crystals makes the syrup have a non-grainy texture, shinier and less prone to break and dry. However, a downside of relying on cream of tartar to thin out crystalline sugar confections (like fudge) is that it can be hard to add the right amount of acid to get the desired consistency. Cream of tartar is used as a type of acid salt that is crucial in baking powder. Upon dissolving in batter or dough, the tartaric acid that is released reacts with baking soda to form carbon dioxide that is used for leavening. Since cream of tartar is fast-acting, it releases over 70 percent of carbon dioxide gas during mixing. Household use Potassium bitartrate can be mixed with an acidic liquid, such as lemon juice or white vinegar, to make a paste-like cleaning agent for metals, such as brass, aluminium, or copper, or with water for other cleaning applications, such as removing light stains from porcelain. 
This mixture is sometimes mistakenly made with vinegar and sodium bicarbonate (baking soda), which actually react to neutralize each other, creating carbon dioxide and a sodium acetate solution. Cream of tartar was often used in traditional dyeing where the complexing action of the tartrate ions was used to adjust the solubility and hydrolysis of mordant salts such as tin chloride and alum. Cream of tartar, when mixed into a paste with hydrogen peroxide, can be used to clean rust from some hand tools, notably hand files. The paste is applied, left to set for a few hours, and then washed off with a baking soda/water solution. After another rinse with water and thorough drying, a thin application of oil will protect the file from further rusting. Slowing the set time of plaster of Paris products (most widely used in gypsum plaster wall work and artwork casting) is typically achieved by the simple introduction of almost any acid diluted into the mixing water. A commercial retardant premix additive sold by USG to trade interior plasterers includes at least 40% potassium bitartrate. The remaining ingredients are the same plaster of Paris and quartz-silica aggregate already prominent in the main product. This means that the only active ingredient is the cream of tartar. Cosmetics For dyeing hair, potassium bitartrate can be mixed with henna as the mild acid needed to activate the henna. Medicinal use Cream of tartar has been used internally as a purgative, but this is dangerous because an excess of potassium, or hyperkalemia, may occur. Chemistry Potassium bitartrate is the United States' National Institute of Standards and Technology's primary reference standard for a pH buffer. Using an excess of the salt in water, a saturated solution is created with a pH of 3.557 at . Upon dissolution in water, potassium bitartrate will dissociate into acid tartrate, tartrate, and potassium ions. Thus, a saturated solution creates a buffer with standard pH. Before use as a standard, it is recommended that the solution be filtered or decanted between and . Potassium carbonate can be made by burning cream of tartar, which produces "pearl ash". This process is now obsolete but produced a higher quality (reasonable purity) than "potash" extracted from wood or other plant ashes. Production It is produced as a byproduct of winemaking by purifying the precipitate that is deposited in wine barrels. It arises from the tartaric acid and potassium naturally occurring in grapes. See also Tartrate Tartaric acid Potassium tartrate (K2C4H4O6) Potassium bicarbonate References External links Description of Potassium Bitartrate at Monash Scientific Material Safety Data Sheet (MSDS) for Potassium Bitartrate at Fisher Scientific Acid salts Potassium compounds Tartrates Leavening agents Edible thickening agents
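As a closing illustration of the leavening chemistry described in the Baking section above, the sketch below works through the acid-base reaction of cream of tartar with baking soda. The 1:1 reaction producing potassium sodium tartrate, water and carbon dioxide is standard chemistry; the 3 g "teaspoon" and the 24 L/mol molar gas volume are rough assumed figures used only for illustration.

```python
# Cream of tartar + baking soda leavening reaction (1:1 by moles):
# KHC4H4O6 + NaHCO3 -> KNaC4H4O6 + H2O + CO2
M_CREAM_OF_TARTAR = 188.18  # g/mol, potassium bitartrate
M_BAKING_SODA = 84.01       # g/mol, sodium bicarbonate
MOLAR_VOLUME_GAS = 24.0     # L/mol near room temperature (approximate)

cream_of_tartar_g = 3.0     # assumed mass of roughly one teaspoon

moles = cream_of_tartar_g / M_CREAM_OF_TARTAR
baking_soda_needed_g = moles * M_BAKING_SODA
co2_litres = moles * MOLAR_VOLUME_GAS

print(f"Baking soda to neutralize {cream_of_tartar_g} g cream of tartar: "
      f"{baking_soda_needed_g:.2f} g")                                   # ~1.3 g
print(f"CO2 released: about {co2_litres * 1000:.0f} mL at room temperature")  # ~380 mL
```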
Potassium bitartrate
[ "Chemistry" ]
2,037
[ "Acid salts", "Salts" ]
720,834
https://en.wikipedia.org/wiki/IDF%20Caterpillar%20D9
The IDF Caterpillar D9 — nicknamed Doobi (, for teddy bear) — is a Caterpillar D9 armored bulldozer used by the Israel Defense Forces (IDF). It is supplied by Caterpillar Inc. and modified by the Israel Defense Forces, Israeli Military Industries and Israel Aerospace Industries to increase the survivability of the bulldozer in hostile environments and enable it to withstand attack. In the 1980s the IDF began modifying D9 bulldozers to incorporate armour. The bulldozers can also be fitted with weaponry: machine guns and grenade launchers. There are various models, including a remote controlled version. Over the course of numerous campaigns, IDF bulldozers have been used to demolish thousands of Palestinian homes in Gaza, leaving tens of thousands of people homeless. The Office of the UN High Commissioner on Human Rights has advised Caterpillar Inc. that by supplying the bulldozers to the IDF it may be complicit in human rights violations. The IDF Caterpillar D9 is operated by the Israel Defense Forces (IDF) Combat Engineering Corps essentially for combat engineering operations. It has been involved in incidents of civilian deaths, including the 2003 killing of activist Rachel Corrie and civilians sheltering outside the Kamal Adwan Hospital reported in 2023 and 2024 during the 2023 Israel–Hamas war. Characteristics The D9R, the latest generation of Caterpillar D9 bulldozers in IDF service, has a power of and drawbar pull of 71.6 metric tons (about 702 kN). Older generations, such as D9L and D9N are still in service, mainly in the reserve forces. The D9 has a crew of two: operator and commander. It is operated by the TZAMA (In = ציוד מכני הנדסי, mechanical engineering equipment) units of the Combat Engineering Corps. In some cases the bulldozers have been fitted with machine guns and grenade launchers. The IDF uses the D9 for a wide variety of combat engineering tasks, such as earthworks, digging moats, mounting sand barriers, building fortifications, rescuing stuck, overturned or damaged armored fighting vehicles (along with the M88 Recovery Vehicle), clearing land mines, detonating IEDs and explosives, handling booby traps, clearing terrain obstacles and opening routes to armored fighting vehicles and infantry, as well as structures demolition, including under fire. History Caterpillar Inc. introduced the Caterpillar D9 bulldozer in 1954 and it quickly found its way to civilian engineering in Israel and from there it was recruited to military service by the Israel Defense Forces (IDF). Earlier use Unarmored D9 bulldozers were used in the Sinai War (1956), Six-Day War (1967), Yom Kippur War (1973) and the 1982 Lebanon War (1982). During the 1982 Lebanon War D9s were employed in breaching and paving ways through mountains and fields in the mountain landscape of southern Lebanon. The D9s also cleared minefields and explosive belly charges set on the main routes by Syrian army and Palestinian insurgents. Because the D9 served as front-line tools, the IDF developed armor kits to protect the lives of the soldiers operating them. The Second Intifada Armored D9 bulldozers were used during the Second Intifada (2000–2005), a Palestinian uprising against Israeli occupation. They were used to open safe routes to IDF forces and detonate explosive charges planted by Palestinian militants. The bulldozers were used extensively to clear shrubbery and structures which were used as cover for Palestinian attacks. In addition they razed houses of families of suicide bombers. 
Over 3,000 homes in Palestine were demolished by Israel during the conflict, leaving tens of thousands of people homeless. The destruction of Palestinian homes promoted protests. In one such protest in Rafah in 2003 a group of eight people tried to stop a D9 bulldozer from demolishing a family home. The operator of the bulldozer drove over one of the protesters, Rachel Corrie. She died as a result of her injuries. Following several incidents where armed Palestinians barricaded themselves inside houses and killed soldiers attempting to breach the entries, the IDF developed "Nohal Sir Lachatz" (נוהל סיר לחץ "pressure cooker procedure") in which D9s and other engineering vehicles were used to bring them out by razing the houses; most of them surrendered because of fear of being buried alive. During the 2002 Battle of Jenin armored D9 bulldozers cleared booby traps and improvised explosive devices, and eventually razed houses from which militants fired upon Israeli soldiers or contained possible IEDs and booby traps. A translated interview with one of the drivers was published by Gush Shalom. After the deadly ambush in which 13 soldiers were killed, D9 bulldozers razed the center of the Jenin refugee camp and forced the remaining Palestinian fighters to surrender, thus finishing the battle with an Israeli victory. D9R and early 21st century During the early 2000s, the new D9R entered IDF service, equipped with a new generation armor designed by the IDF's MASHA (, lit. Restoration and Maintenance Center), Israel Aerospace Industries and Zoko Shiloovim/ITE (Caterpillar Inc. importers in Israel). Due to the increasing threat of shaped charge anti-tank rockets and anti-tank missile, the IDF introduced in 2005 a slat armor, installed in large numbers on the IDF D9R dozers in 2006. The slat armor proved to be effective and life-saving; its developers and installers won the IDF's Ground Command award. The IDF also operates armored remote-controlled D9N bulldozers, called "Raam HaShachar" (, lit. "thunder of dawn") often incorrectly referred as "black thunder". The remote-controlled bulldozer has been used to clear mines. They were used in the Second Lebanon War in 2006 and the Gaza War (2008–2009). Armored D9R bulldozers took part in the effort to extinguish 2010 Mount Carmel forest fire. The armored bulldozers opened routes to fire trucks and fire fighters into the heart of the fire. They also created fire breaks by clearing shrubbery and pushing up soil barriers in order to prevent the fire from spreading. They also helped extinguish fires by burying them in dirt and soil. Gaza War (2008–2009) In total, 100 D9s were deployed during the Gaza War (2008–2009), dubbed 'Operation Cast Lead' by Israel. The war led to extensive destruction in Gaza, especially of Palestinian homes; Israeli bulldozers and anti-tank mines were commonly used. According to Amnesty International: In March 2009, The Jerusalem Post reported that the IDF intended to increase its use of unmanned D9 bulldozers, doubling the number it had. The following year Israel's Channel 2 reported that Caterpillar would delay the delivery of D9 bulldozers to the IDF while an investigation into the killing of Rachel Corrie took place. 2014 Gaza War IDF D9 armored bulldozers took major role in the 2014 Gaza War, both in defensive missions and offensive maneuvers. 
The D9s assisted other heavy equipment such as excavators and drillers in exposing and destroying cross-border underground tunnels penetrating into Israel, more than 30 of these tunnels were destroyed during the operation. The reserve mechanical engineering equipment (צמ"ה) and bulldozers battalion of the Central Command received a citation of recommendation (צל"ש, tzalash) from the Chief of Staff of the Israel Defense Forces. D9s participated in the ground offensive, opening routes to tanks and infantry forces, and demolishing structures that were used by Palestinian militants. On July 27, one D9 was hit by an anti-tank missile, killing its operator and wounding its commander. Another D9 demolished the building from which the missile was launched, killing 8 militants and capturing two more. The crew received a citation of recommendation (צל"ש, tzalash) for their action. D9T Panda In 2018 the Israel Defense Forces Combat Engineering Corps started to deploy and operate the "Panda" – a remote-controlled version of an armored Caterpillar D9T bulldozer. In 2018, Israel Aerospace Industries announced that it had signed a contract to equip the IDF with more D9T Panda dozers. In 2022-2023 the Panda entered regular service with the IDF. In 2019, Elbit Systems was awarded an IMOD contract to install the Iron Fist active protection system on the IDF's armored D9 bulldozers, to give them extra protection from anti-tank missiles. Israel–Hamas war During the Israel–Hamas war D9 Bulldozers were deployed on the ground offensive into Gaza where it was used to clear routes for ground forces to manoeuvre and expose shafts of Hamas combat tunnels. According to The Independent around 100 D9 bulldozers were expected to be used in the opening stage of the war. On 16 December the IDF captured the Kamal Adwan Hospital; in doing so IDF bulldozers crushed people who had been sheltering outside the hospital. An investigation by CNN published in January 2024 used satellite imagery to identify sixteen burial grounds in Gaza that had been desecrated by the IDF using bulldozers to level cemeteries and dig up bodies. Later that month Ynet reported that the IDF would buy a further 100 bulldozers. A shipment of 134 bulldozers had not arrived by November. Bulldozers were also used in the deliberate destruction of Gaza's environment, with an estimated 38–48% of Gaza's farmland and tree cover destroyed by Israel's military. Norwegian pension fund Kommunal Landspensjonskasse stopped investing in Caterpillar due to the use of its products by the IDF. Models in IDF service Criticism Caterpillar's sales of D9 bulldozers to the Israeli military for use in the occupied Palestinian territories has long drawn criticism from human rights groups, society groups and responsible investment monitors. Amnesty International released a report in May 2004 on home demolition in the occupied Palestinian territories in May 2004 that noted the risk of complicity for Caterpillar in human rights violations. The Office of the UN High Commissioner on Human Rights sent a letter to the company the next month warning that by selling bulldozers to the IDF Caterpillar may be complicit in human rights violations, specifically the right to food as the bulldozers were used to destroy Palestinian farms. Human Rights Watch reported the same year on the systematic use of D9 bulldozers in illegal demolitions throughout the occupied territories and called on Caterpillar to suspend its sales to Israel, citing the company's own code of conduct. 
The punitive destruction of Palestinian homes has been described as a form of collective punishment, and in the view of Human Rights Watch may be considered a war crime. The pro-Palestinian group Jewish Voice for Peace and four Roman Catholic orders of nuns planned to introduce a resolution at a Caterpillar shareholder meeting subsequent to the human rights reports asking for an investigation into whether Israel's use of the company's bulldozer to destroy Palestinian homes conformed with the company's code of business conduct. In response, the pro-Israel advocacy group StandWithUs urged its members to buy Caterpillar stock and to write letters of support to the company. The US investment indexer MSCI removed Caterpillar from three of its indexes for socially responsible investments in 2012, citing the Israeli military’s use of its bulldozers in the Palestinian territories. In 2017, documents emerged that showed Caterpillar had hired private investigators to spy on the family of Rachel Corrie, the American human rights activist who was killed by a D9 bulldozer in Rafah in early 2003. In 2022, the Palestinian non-governmental organization Stop the Wall called Caterpillar, alongside Hyundai Heavy Industries, JCB and Volvo Group, complicit in what they referred to as Israel's ethnic cleansing of the occupied Palestinian territories through the use of its equipment in the demolition of eight Palestinian villages in Masafer Yatta in the southern West Bank. See also Israeli demolition of Palestinian property Israeli war crimes Collective punishment Mahmoud Tawalbe, head of the Palestinian Islamic Jihad, killed in the Battle of Jenin (2002) by an IDF D9 Rachel Corrie, an ISM activist killed by an IDF D9 while acting as a human shield References External links Caterpillar D-Series Track-Type Tractors – Official Caterpillar website Armoured D9R Dozer (of the IDF) – review im Army-Technology D9 in Israel's Combat Engineering Corps website (Hebrew) The Israel Defense Forces operate the most heavily armored bulldozer in the world, We Are The Mighty, October 2022 D9 D9 D9 Caterpillar Inc. vehicles Military vehicles introduced in the 1950s
IDF Caterpillar D9
[ "Engineering" ]
2,670
[ "Engineering vehicles", "Military engineering", "Military engineering vehicles", "Caterpillar Inc. vehicles" ]
19,232,416
https://en.wikipedia.org/wiki/Dynamic%20scraped%20surface%20heat%20exchanger
The dynamic scraped surface heat exchanger (DSSHE) is a type of heat exchanger used to remove or add heat to fluids, mainly foodstuffs, but also other industrial products. They have been designed to address specific problems that impede efficient heat transfer. DSSHEs improve efficiency by removing fouling layers, increasing turbulence in the case of high viscosity flow, and avoiding the generation of crystals and other process by-products. DSSHEs incorporate an internal mechanism which periodically removes the product from the heat transfer wall. The sides are scraped by blades made of a rigid plastic material to prevent damage to the scraped surface. Introduction Applicable technologies for indirect heat transfer use tubes (shell-and-tube exchangers) or flat surfaces (plate exchangers). Their goal is to exchange the maximum amount of heat per unit area by generating as much turbulence as possible below given pumping power limits. Typical approaches to achieve this consist of corrugating the tubes or plates or extending their surface with fins. However, the benefits of these surface geometries, optimized mass flows and other turbulence-related measures are diminished when fouling appears, obliging designers to fit significantly larger heat transfer areas. There are several types of fouling, including particulate accumulation, precipitation (crystallization), sedimentation, generation of ice layers, etc. Another factor posing difficulties to heat transfer is viscosity. Highly viscous fluids tend to generate strongly laminar flow, a condition with very poor heat transfer rates and high pressure losses involving a considerable pumping power, often exceeding the exchanger design limits. This problem frequently worsens when processing non-Newtonian fluids. DSSHEs have been designed to address these problems. They increase heat transfer by removing the fouling layers, increasing turbulence in the case of high-viscosity flow, and avoiding the generation of ice and other process by-products. Description Dynamic scraped surface heat exchangers incorporate an internal mechanism which periodically removes the product from the heat transfer wall. The product side is scraped by blades attached to a moving shaft or frame. The blades are made of a rigid plastic material to prevent damage to the scraped surface. This material is FDA approved in the case of food applications. Types There are basically three types of DSSHEs, depending on the arrangement of the blades: Rotating, tubular DSSHEs. The shaft is placed parallel to, though not necessarily coincident with, the tube axis and spins at various frequencies, from a few dozen rpm to more than 1000 rpm. The number of blades varies between 1 and 4, and the blades may take advantage of centrifugal force to scrape the inner surface of the tube. Examples are the Waukesha Cherry-Burrell Votator II, Alfa Laval Contherm, Terlet Terlotherm and Kelstream's scraped surface heat exchanger. Other examples are the HRS Heat Exchangers R Series and the Onlator from Sakura Seisakusho Ltd. of Japan. Reciprocating, tubular DSSHEs. The shaft is concentric with the tube and moves longitudinally without rotating. The frequency spans between 10 and 60 strokes per minute. The blades may vary in number and shape, from baffle-like arrangements to perforated disk configurations. An example is the HRS Heat Exchangers Unicus. Rotating, plate DSSHEs. The blades wipe the external surface of circular plates arranged in series inside a shell. 
The heating/cooling fluid runs inside the plates. The frequency is typically several dozen rpm. An example is the HRS Spiratube T-Sensation. Evaluation Computational fluid dynamics (CFD) techniques are the standard tools to analyse and evaluate heat exchangers and similar equipment. However, for quick calculation purposes, the evaluation of DSSHEs is usually carried out with the help of ad hoc (semi)empirical correlations based on the Buckingham π theorem: Fa = Fa(Re, Re', n, ...) for pressure loss and Nu = Nu(Re, Re', Pr, Fa, L/D, n, ...) for heat transfer, where Nu is the Nusselt number, Re is the standard Reynolds number based on the inner diameter of the tube, Re' is the specific Reynolds number based on the wiping frequency, Pr is the Prandtl number, Fa is the Fanning friction factor, L is the length of the tube, D is the inner diameter of the tube, n is the number of blades, and the dots account for any other relevant dimensionless parameters. Applications The range of applications covers a number of industries, including food, chemical, petrochemical and pharmaceutical. DSSHEs are appropriate whenever products are prone to fouling, very viscous, particulate, heat sensitive or crystallizing. See also Pumpable ice technology References Related reading Heat exchangers Fouling
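To make the dimensionless-correlation approach in the Evaluation section above concrete, the sketch below evaluates a generic power-law form Nu = C · Re^a · Re'^b · Pr^c and converts it to a film heat transfer coefficient. The constant, the exponents, and the operating point are placeholders; real correlations are specific to each exchanger geometry and are not given in this article, so the numbers are illustrative only.

```python
# Illustrative evaluation of a scraped-surface exchanger using a generic
# power-law Nusselt correlation; the constants C, a, b, c are placeholders.
def nusselt(re, re_rot, pr, C=0.1, a=0.6, b=0.2, c=0.33):
    """Hypothetical correlation Nu = C * Re^a * Re'^b * Pr^c."""
    return C * re**a * re_rot**b * pr**c

# Assumed operating point (not from the article): a viscous food product.
rho, mu, k, cp = 1050.0, 0.5, 0.45, 3500.0   # kg/m3, Pa.s, W/m.K, J/kg.K
velocity, diameter = 0.3, 0.08               # m/s, m (tube inner diameter)
wipe_freq = 5.0                              # blade passes per second

re = rho * velocity * diameter / mu           # axial Reynolds number
re_rot = rho * wipe_freq * diameter**2 / mu   # "specific" Reynolds number (wiping frequency)
pr = cp * mu / k                              # Prandtl number

nu = nusselt(re, re_rot, pr)
h = nu * k / diameter                         # film coefficient, W/m2.K
print(f"Re = {re:.0f}, Re' = {re_rot:.0f}, Pr = {pr:.0f}")
print(f"Nu = {nu:.1f}  ->  h = {h:.0f} W/m2.K (illustrative figure only)")
```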
Dynamic scraped surface heat exchanger
[ "Chemistry", "Materials_science", "Engineering" ]
993
[ "Chemical equipment", "Materials degradation", "Heat exchangers", "Fouling" ]
19,236,411
https://en.wikipedia.org/wiki/Thin-film%20lithium-ion%20battery
The thin-film lithium-ion battery is a form of solid-state battery. Its development is motivated by the prospect of combining the advantages of solid-state batteries with the advantages of thin-film manufacturing processes. Thin-film construction could lead to improvements in specific energy, energy density, and power density on top of the gains from using a solid electrolyte. It allows for flexible cells only a few microns thick. It may also reduce manufacturing costs from scalable roll-to-roll processing and even allow for the use of cheap materials. Background Lithium-ion batteries store chemical energy in reactive chemicals at the anodes and cathodes of a cell. Typically, anodes and cathodes exchange lithium (Li+) ions through a fluid electrolyte that passes through a porous separator which prevents direct contact between the anode and cathode. Such contact would lead to an internal short circuit and a potentially hazardous uncontrolled reaction. Electric current is usually carried by conductive collectors at the anodes and cathodes to and from the negative and positive terminals of the cell (respectively). In a thin-film lithium battery the electrolyte is solid and the other components are deposited in layers on a substrate. In some designs, the solid electrolyte also serves as a separator. Components of thin film battery Cathode materials Cathode materials in thin-film lithium-ion batteries are the same as in classical lithium-ion batteries. They are normally metal oxides that are deposited as a film by various methods. Metal oxide materials are shown below as well as their relative specific capacities (), open circuit voltages (), and energy densities (). Deposition methods for cathode materials There are various methods being used to deposit thin film cathode materials onto the current collector. Pulsed laser deposition In pulsed laser deposition, materials are fabricated by controlling parameters such as laser energy and fluence, substrate temperature, background pressure, and target-substrate distance. Magnetron sputtering In magnetron sputtering the substrate is cooled for deposition. Chemical vapor deposition In chemical vapor deposition, volatile precursor materials are deposited onto a substrate material. Sol-gel processing Sol-gel processing allows for homogeneous mixing of precursor materials at the atomic level. Electrolyte The greatest difference between classical lithium-ion batteries and thin, flexible, lithium-ion batteries is in the electrolyte material used. Progress in lithium-ion batteries relies as much on improvements in the electrolyte as it does in the electrode materials, as the electrolyte plays a major role in safe battery operation. The concept of thin-film lithium-ion batteries was increasingly motivated by manufacturing advantages presented by the polymer technology for their use as electrolytes. LiPON, lithium phosphorus oxynitride, is an amorphous glassy material used as an electrolyte material in thin film flexible batteries. Layers of LiPON are deposited over the cathode material at ambient temperatures by RF magnetron sputtering. This forms the solid electrolyte used for ion conduction between anode and cathode. LiBON, lithium boron oxynitride, is another amorphous glassy material used as a solid electrolyte material in thin film flexible batteries. Solid polymer electrolytes offer several advantages in comparison to a classical liquid lithium-ion battery. 
Rather than having separate components of electrolyte, binder, and separator, these solid electrolytes can act as all three. This increases the overall energy density of the assembled battery because the constituents of the entire cell are more tightly packed. Separator material Separator materials in lithium-ion batteries must not block the transport of lithium ions while preventing the physical contact of the anode and cathode materials, e.g. short-circuiting. In a liquid cell, this separator would be a porous glass or polymer mesh that allows ion transport via the liquid electrolyte through the pores, but keeps the electrodes from contacting and shorting. However, in a thin film battery the electrolyte is a solid, which conveniently satisfies both the ion transportation and the physical separation requirements without the need for a dedicated separator. Current collector Current collectors in thin film batteries must be flexible, have high surface area, and be cost-effective. Silver nanowires with improved surface area and loading weight have been shown to work as a current collector in these battery systems, but still are not as cost-effective as desired. Extending graphite technology to lithium-ion batteries, solution processed carbon nanotubes (CNT) films are being looked into for use as both the current collector and anode material. CNTs have the ability to intercalate lithium and maintain high operating voltages, all with low mass loading and flexibility. Advantages and challenges Thin-film lithium-ion batteries offer improved performance by having a higher average output voltage, lighter weights thus higher energy density (3x), and longer cycling life (1200 cycles without degradation) and can work in a wider range of temperatures (between -20 and 60 °C)than typical rechargeable lithium-ion batteries. Li-ion transfer cells are the most promising systems for satisfying the demand of high specific energy and high power and would be cheaper to manufacture. In the thin-film lithium-ion battery, both electrodes are capable of reversible lithium insertion, thus forming a Li-ion transfer cell. In order to construct a thin film battery it is necessary to fabricate all the battery components, as an anode, a solid electrolyte, a cathode and current leads into multi-layered thin films by suitable technologies. In a thin film based system, the electrolyte is normally a solid electrolyte, capable of conforming to the shape of the battery. This is in contrast to classical lithium-ion batteries, which normally have liquid electrolyte material. Liquid electrolytes can be challenging to utilize if they are not compatible with the separator. Also liquid electrolytes in general call for an increase in the overall volume of the battery, which is not ideal for designing a system that has high energy density. Additionally, in a thin film flexible Li-ion battery, the electrolyte, which is normally polymer-based, can act as the electrolyte, separator, and binder material. This provides the ability to have flexible systems since the issue of electrolyte leakage is circumvented. Finally, solid systems can be packed together tightly which affords an increase in energy density when compared to classical liquid lithium-ion batteries. Separator materials in lithium-ion batteries must have the ability to transport ions through their porous membranes while maintaining a physical separation between the anode and cathode materials in order to prevent short-circuiting. 
Furthermore, the separator must be resistant to degradation during the battery’s operation. In a thin film Li-ion battery, the separator must be a thin and flexible solid. Today, this is typically a polymer-based material. Because thin film batteries are made entirely of solid materials, simpler separator materials, such as Xerox paper, can be used than in liquid-based Li-ion batteries. Scientific development Development of thin solid-state batteries allows for roll-to-roll production, which decreases production costs. Solid-state batteries can also afford increased energy density due to a decrease in overall device weight, while the flexible nature allows for novel battery design and easier incorporation into electronics. Development is still required in cathode materials that resist capacity loss over repeated cycling. Makers Murata Manufacturing Applications The advancements made to the thin-film lithium-ion battery have allowed for many potential applications. The majority of these applications are aimed at improving the currently available consumer and medical products. Thin-film lithium-ion batteries can be used to make thinner portable electronics, because the thickness of the battery required to operate the device can be reduced greatly. These batteries have the ability to be an integral part of implantable medical devices, such as defibrillators and neural stimulators, “smart” cards, radio frequency identification tags and wireless sensors. They can also serve as a way to store energy collected from solar cells or other harvesting devices. Each of these applications is possible because of the flexibility in the size and shape of the batteries. The size of these devices no longer has to revolve around the space needed for the battery. The thin film batteries can be attached to the inside of the casing or in some other convenient way. There are many opportunities to use this type of battery. Renewable energy storage devices The thin-film lithium-ion battery can serve as a storage device for the energy collected from renewable sources with a variable generation rate, such as a solar cell or wind turbine. These batteries can be made to have a low self-discharge rate, which means that they can be stored for long periods of time without a major loss of the stored energy. These fully charged batteries could then be used to power some or all of the other potential applications listed below, or provide more reliable power to an electric grid for general use. Smart cards Smart cards are the same size as a credit card, but they contain a microchip that can be used to access information, give authorization, or process an application. These cards can go through harsh production conditions, with temperatures in the range of 130 to 150 °C, in order to complete the high-temperature, high-pressure lamination processes. These conditions can cause other batteries to fail because of degassing or degradation of organic components within the battery. Thin-film lithium-ion batteries have been shown to withstand temperatures of -40 to 150 °C. This makes thin-film lithium-ion batteries promising for other extreme-temperature applications. Radio frequency identification tags Radio-frequency identification tags can be used in many different applications. These tags can be used in packaging and inventory control, to verify authenticity, and even to allow or deny access. 
These ID tags can even integrate sensors that monitor the physical environment, such as temperature or shock during travel or shipping. The distance at which a tag can be read depends on the strength of the battery: the greater the required read range, the stronger the output signal must be, and thus the greater the power supply needed. As these tags become more complex, battery requirements grow accordingly. Thin-film lithium-ion batteries have shown that they can fit into the designs of the tags because of their flexibility in size and shape, and are sufficiently powerful to accomplish the goals of the tag. Low-cost production methods for these batteries, such as roll-to-roll lamination, may even allow this kind of radio-frequency identification technology to be implemented in disposable applications. Implantable medical devices Thin films of LiCoO2 have been synthesized in which the strongest X-ray reflection is either weak or missing, indicating a high degree of preferred orientation. Thin film solid state batteries with these textured cathode films can deliver practical capacities at high current densities. For example, for one of the cells 70% of the maximum capacity between 4.2 V and 3 V (approximately 0.2 mAh/cm2) was delivered at a current of 2 mA/cm2. When cycled at rates of 0.1 mA/cm2, the capacity loss was 0.001%/cycle or less. The reliability and performance of Li/LiCoO2 thin-film batteries make them attractive for application in implantable devices such as neural stimulators, pacemakers, and defibrillators. Implantable medical devices require batteries that can deliver a steady, reliable power source for as long as possible. These applications call for a battery that has a low self-discharge rate, for when it’s not in use, and a high power capability, for when it needs to be used, especially in the case of an implantable defibrillator. Also, users of the product will want a battery that can go through many cycles, so these devices will not have to be replaced or serviced often. Thin-film lithium-ion batteries have the ability to meet these requirements. The advancement from a liquid to a solid electrolyte has allowed these batteries to take almost any shape without the risk of leaking, and it has been shown that certain types of thin film rechargeable lithium batteries can last for around 50,000 cycles. Another advantage of these thin film batteries is that they can be arranged in series to give a larger voltage equal to the sum of the individual battery voltages. This fact can be used in reducing the “footprint” of the battery, or the size of the space needed for the battery, in the design of a device. Wireless sensors Wireless sensors need to remain operational for the duration of their application, whether in package shipping, detection of an unwanted compound, or inventory control in a warehouse. If the wireless sensor cannot transmit its data due to low or no battery power, the consequences could potentially be severe based on the application. Also, the wireless sensor must be adaptable to each application. Therefore, the battery must be able to fit within the designed sensor. This means that the desired battery for these devices must be long-lasting, appropriately sized, low-cost (if it is to be used in disposable technologies), and able to meet the requirements of the data collection and transmission processes. 
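The cycling and series-stacking figures quoted above lend themselves to a quick back-of-the-envelope check. The sketch below uses the values mentioned in the text; the linear per-cycle fade model and the helper names are assumptions introduced for illustration, not taken from any cited source.

```python
# Rough arithmetic for the thin-film cell figures quoted above (illustrative only).

def capacity_after_cycles(initial_mah_per_cm2, loss_per_cycle_pct, cycles):
    """Remaining areal capacity assuming a constant fractional loss per cycle."""
    return initial_mah_per_cm2 * (1.0 - loss_per_cycle_pct / 100.0) ** cycles

def stack_voltage(cell_voltages):
    """Cells wired in series add their voltages."""
    return sum(cell_voltages)

if __name__ == "__main__":
    # ~0.2 mAh/cm2 cell cycled 50,000 times at <= 0.001 %/cycle fade
    remaining = capacity_after_cycles(0.2, 0.001, 50_000)
    print(f"capacity after 50k cycles: {remaining:.3f} mAh/cm2")  # ~0.121 mAh/cm2

    # three hypothetical ~3.9 V thin-film cells in series
    print(f"series stack voltage: {stack_voltage([3.9, 3.9, 3.9]):.1f} V")  # 11.7 V
```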
Once again, thin-film lithium-ion batteries have shown the ability to meet all of these requirements. See also List of battery types Lithium-ion battery References Rechargeable batteries Lithium-ion batteries Thin films Flexible electronics
Thin-film lithium-ion battery
[ "Materials_science", "Mathematics", "Engineering" ]
2,834
[ "Materials science", "Electronic engineering", "Flexible electronics", "Nanotechnology", "Planes (geometry)", "Thin films" ]
19,245,671
https://en.wikipedia.org/wiki/Partial%20molar%20property
In thermodynamics, a partial molar property is a quantity which describes the variation of an extensive property of a solution or mixture with changes in the molar composition of the mixture at constant temperature and pressure. It is the partial derivative of the extensive property with respect to the amount (number of moles) of the component of interest. Every extensive property of a mixture has a corresponding partial molar property. Definition The partial molar volume is broadly understood as the contribution that a component of a mixture makes to the overall volume of the solution. However, there is more to it than this: When one mole of water is added to a large volume of water at 25 °C, the volume increases by 18 cm3. The molar volume of pure water would thus be reported as 18 cm3 mol−1. However, addition of one mole of water to a large volume of pure ethanol results in an increase in volume of only 14 cm3. The reason that the increase is different is that the volume occupied by a given number of water molecules depends upon the identity of the surrounding molecules. The value 14 cm3 is said to be the partial molar volume of water in ethanol. In general, the partial molar volume of a substance X in a mixture is the change in volume per mole of X added to the mixture. The partial molar volumes of the components of a mixture vary with the composition of the mixture, because the environment of the molecules in the mixture changes with the composition. It is the changing molecular environment (and the consequent alteration of the interactions between molecules) that results in the thermodynamic properties of a mixture changing as its composition is altered. If, by Z, one denotes a generic extensive property of a mixture, it will always be true that it depends on the pressure (P), temperature (T), and the amount of each component of the mixture (measured in moles, ni). For a mixture with q components, this is expressed as Z = Z(T, P, n1, n2, ..., nq). Now if temperature T and pressure P are held constant, Z is a homogeneous function of degree 1 in the amounts, since doubling the quantities of each component in the mixture will double Z. More generally, for any λ: Z(T, P, λn1, λn2, ..., λnq) = λZ(T, P, n1, n2, ..., nq). By Euler's first theorem for homogeneous functions, this implies Z = Σi ni Z̄i, where Z̄i is the partial molar Z of component i, defined as the partial derivative of Z with respect to ni at constant T, P and nj (j ≠ i). By Euler's second theorem for homogeneous functions, Z̄i is a homogeneous function of degree 0 in the amounts (i.e., Z̄i is an intensive property), which means that for any λ: Z̄i(T, P, λn1, ..., λnq) = Z̄i(T, P, n1, ..., nq). In particular, taking λ = 1/nT, where nT = n1 + ... + nq is the total amount, one has Z̄i = Z̄i(T, P, x1, ..., xq), where xi = ni/nT is the concentration expressed as the mole fraction of component i. Since the mole fractions satisfy the relation x1 + x2 + ... + xq = 1, the xi are not independent, and the partial molar property is a function of only q − 1 mole fractions: Z̄i = Z̄i(T, P, x1, x2, ..., xq−1). The partial molar property is thus an intensive property; it does not depend on the size of the system. The partial volume is not the partial molar volume. Applications Partial molar properties are useful because chemical mixtures are often maintained at constant temperature and pressure and, under these conditions, the value of any extensive property can be obtained from its partial molar property. They are especially useful when considering specific properties of pure substances (that is, properties of one mole of pure substance) and properties of mixing (such as the heat of mixing or entropy of mixing). By definition, properties of mixing are related to those of the pure substances by ΔZmix = Z − Σi ni Zi*. Here the asterisk denotes a pure substance (Zi* being the molar property of pure component i), ΔZmix the mixing property, and Z corresponds to the specific property under consideration. 
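The display equations referred to in the preceding paragraphs appear to have been lost in extraction; for reference, the standard relations they correspond to are collected below in conventional notation (a reconstruction, not the article's original typesetting).

```latex
% Generic extensive property and its partial molar quantity
Z = Z(T, P, n_1, n_2, \dots, n_q), \qquad
\bar{Z}_i = \left( \frac{\partial Z}{\partial n_i} \right)_{T, P, n_{j \neq i}}

% Homogeneity of degree 1 in the amounts, and Euler's first theorem
Z(T, P, \lambda n_1, \dots, \lambda n_q) = \lambda\, Z(T, P, n_1, \dots, n_q)
\quad \Longrightarrow \quad
Z = \sum_{i=1}^{q} n_i \bar{Z}_i

% The partial molar quantities are intensive, so they depend only on T, P and
% the mole fractions x_i = n_i / \sum_j n_j (of which only q-1 are independent)
\bar{Z}_i = \bar{Z}_i(T, P, x_1, \dots, x_{q-1})

% Property of mixing relative to the molar properties Z_i^{*} of the pure substances
\Delta Z_{\mathrm{mix}} = Z - \sum_{i} n_i Z_i^{*}
```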
From the definition of partial molar properties, substitution yields ΔZmix = Σi ni (Z̄i − Zi*). So from knowledge of the partial molar properties, the deviation of properties of mixing from those of the single components can be calculated. Relationship to thermodynamic potentials Partial molar properties satisfy relations analogous to those of the extensive properties. For the internal energy U, enthalpy H, Helmholtz free energy A, and Gibbs free energy G, the following hold: H̄i = Ūi + PV̄i, Āi = Ūi − TS̄i and Ḡi = H̄i − TS̄i, where P is the pressure, V̄i the partial molar volume, T the temperature, and S̄i the partial molar entropy. Differential form of the thermodynamic potentials The thermodynamic potentials also satisfy dG = −S dT + V dP + Σi μi dni (with analogous expressions for dU, dH and dA), where μi is the chemical potential defined as (for constant nj with j≠i) μi = (∂G/∂ni)T,P,nj. This last partial derivative is the same as Ḡi, the partial molar Gibbs free energy. This means that the partial molar Gibbs free energy and the chemical potential, one of the most important properties in thermodynamics and chemistry, are the same quantity. Under isobaric (constant P) and isothermal (constant T) conditions, knowledge of the chemical potentials, μi, yields every property of the mixture as they completely determine the Gibbs free energy. Measuring partial molar properties To measure the partial molar property of a binary solution, one begins with the pure component, denoted as component 1, and, keeping the temperature and pressure constant during the entire process, adds small quantities of component 2, measuring Z after each addition. After sampling the compositions of interest one can fit a curve to the experimental data. This function will be Z(n2). Differentiating with respect to n2 gives Z̄2. Z̄1 is then obtained from the relation Z = n1Z̄1 + n2Z̄2. Relation to apparent molar quantities The relation between partial molar properties and the apparent ones can be derived from the definition of the apparent quantities and of the molality. The relation also holds for multicomponent mixtures; in that case the subscript i is required. See also Apparent molar property Ideal solution Excess molar quantity Partial specific volume Thermodynamic activity References Further reading P. Atkins and J. de Paula, "Atkins' Physical Chemistry" (8th edition, Freeman 2006), chap.5 T. Engel and P. Reid, "Physical Chemistry" (Pearson Benjamin-Cummings 2006), p. 210 K.J. Laidler and J.H. Meiser, "Physical Chemistry" (Benjamin-Cummings 1982), p. 184-189 P. Rock, "Chemical Thermodynamics" (MacMillan 1969), chap.9 Ira Levine, "Physical Chemistry" (6th edition, McGraw Hill 2009), p. 125-128 External links Lecture notes from the University of Arizona detailing mixtures, partial molar quantities, and ideal solutions[archive] On-line calculator for densities and partial molar volumes of aqueous solutions of some common electrolytes and their mixtures, at temperatures up to 323.15 K. Physical chemistry Thermodynamic properties Chemical thermodynamics Molar quantities
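As a concrete illustration of the measurement procedure described above, the sketch below fits the total property against the amount of added component 2 and differentiates the fit numerically. The data values are invented for the example; only the procedure (fit, differentiate, then recover the other partial molar quantity from Z = n1*Z1bar + n2*Z2bar) follows the text.

```python
import numpy as np

# Hypothetical data: total volume V (cm^3) of a binary mixture as small amounts
# n2 (mol) of component 2 are added to a fixed n1 = 1.0 mol of component 1.
n1 = 1.0
n2 = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
V  = np.array([18.07, 19.55, 21.01, 22.44, 23.85, 25.23])  # invented numbers

# Fit V(n2) with a low-order polynomial, as suggested in the text.
coeffs = np.polynomial.polynomial.polyfit(n2, V, deg=2)

# Partial molar volume of component 2 is dV/dn2 at constant T, P and n1.
dV_dn2 = np.polynomial.polynomial.polyval(n2, np.polynomial.polynomial.polyder(coeffs))

# Partial molar volume of component 1 follows from V = n1*V1bar + n2*V2bar.
V1bar = (V - n2 * dV_dn2) / n1

print("V2bar (cm^3/mol):", np.round(dV_dn2, 3))
print("V1bar (cm^3/mol):", np.round(V1bar, 3))
```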
Partial molar property
[ "Physics", "Chemistry", "Mathematics" ]
1,297
[ "Thermodynamic properties", "Applied and interdisciplinary physics", "Physical quantities", "Quantity", "Intensive quantities", "Thermodynamics", "nan", "Chemical thermodynamics", "Physical chemistry", "Molar quantities" ]
18,223,985
https://en.wikipedia.org/wiki/Spiral%20separator
The term spiral separator can refer to either a device for separating slurry components by density (wet spiral separators), or for a device for sorting particles by shape (dry spiral separators). Wet spiral separators Spiral separators of the wet type, also called spiral concentrators, are devices to separate solid components in a slurry, based upon a combination of the solid particle density as well as the particle's hydrodynamic properties (e.g. drag). The device consists of a tower, around which is wound a sluice, from which slots or channels are placed in the base of the sluice to extract solid particles that have come out of suspension. As larger and heavier particles sink to the bottom of the sluice faster and experience more drag from the bottom, they travel slower, and so move towards the center of the spiral. Conversely, light particles stay towards the outside of the spiral, with the water, and quickly reach the bottom. At the bottom, a "cut" is made with a set of adjustable bars, channels, or slots, separating the low and high density parts. Efficiency Typical spiral concentrators will use a slurry from about 20%-40% solids by weight, with a particle size somewhere between 0.75—1.5mm (17-340 mesh), though somewhat larger particle sizes are sometimes used. The spiral separator is less efficient at the particle sizes of 0.1—0.074mm however. For efficient separation, the density difference between the heavy minerals and the light minerals in the feedstock should be at least 1 g/cm3; and because the separation is dependent upon size and density, spiral separators are most effective at purifying ore if its particles are of uniform size and shape. A spiral separator may process a couple tons per hour of ore, per flight, and multiple flights may be stacked in the same space as one, to improve capacity. Many things can be done to improve the separation efficiency, including: changing the rate of material feed changing the grain size of the material changing the slurry mass percentage adjusting the cutter bar positions running the output of one spiral separator (often, a third, intermediate, cut) through a second. adding washwater inlets along the length of the spiral, to aid in separating light minerals adding multiple outlets along the length, to improve the ability of the spiral to remove heavy contaminants adding ridges on the sluice at an angle to the direction of flow. Dry spiral separators Dry spiral separators, capable of distinguishing round particles from nonrounds, are used to sort the feed by shape. The device consists of a tower, around which is wound an inwardly inclined flight. A catchment funnel is placed around this inner flight. Round particles roll at a higher speed than other objects, and so are flung off the inner flight and into the collection funnel. Shapes which are not round enough are collected at the bottom of the flight. Separators of this type may be used for removing weed seeds from the intended harvest, or to remove deformed lead shot. See also Sieve Screw conveyor Cyclone (separator) Mineral processing Mechanical screening References Further reading External links Screw Conveyor separator Chemical equipment Separation processes
Spiral separator
[ "Chemistry", "Engineering" ]
687
[ "Chemical equipment", "nan", "Separation processes" ]
4,548,229
https://en.wikipedia.org/wiki/Interaural%20time%20difference
The interaural time difference (ITD) is the difference in the arrival time of a sound between the two ears of a human or animal. It is important in the localization of sounds, as it provides a cue to the direction or angle of the sound source from the head. If a signal arrives at the head from one side, the signal has further to travel to reach the far ear than the near ear. This pathlength difference results in a time difference between the sound's arrivals at the ears, which is detected and aids the process of identifying the direction of the sound source. When a signal is produced in the horizontal plane, its angle in relation to the head is referred to as its azimuth, with 0 degrees (0°) azimuth being directly in front of the listener, 90° to the right, and 180° being directly behind. Different methods for measuring ITDs For an abrupt stimulus such as a click, onset ITDs are measured. An onset ITD is the time difference between the onset of the signal reaching the two ears. A transient ITD can be measured when using a random noise stimulus and is calculated as the time difference between a set peak of the noise stimulus reaching the ears. If the stimulus used is not abrupt but periodic, then ongoing ITDs are measured. This is where the waveforms reaching both ears can be shifted in time until they perfectly match up, and the size of this shift is recorded as the ITD. This shift is known as the interaural phase difference (IPD) and can be used for measuring the ITDs of periodic inputs such as pure tones and amplitude modulated stimuli. An amplitude-modulated stimulus IPD can be assessed by looking at either the waveform envelope or the waveform fine structure. Duplex theory The duplex theory proposed by Lord Rayleigh (1907) provides an explanation for the ability of humans to localise sounds by time differences between the sounds reaching each ear (ITDs) and differences in sound level entering the ears (interaural level differences, ILDs). However, the question remains whether ITD or ILD is dominant. The duplex theory states that ITDs are used to localise low-frequency sounds, in particular, while ILDs are used in the localisation of high-frequency sound inputs. However, the frequency ranges for which the auditory system can use ITDs and ILDs significantly overlap, and most natural sounds will have both high- and low-frequency components, so that the auditory system will in most cases have to combine information from both ITDs and ILDs to judge the location of a sound source. A consequence of this duplex system is that it is also possible to generate so-called "cue trading" or "time–intensity trading" stimuli on headphones, where ITDs pointing to the left are offset by ILDs pointing to the right, so the sound is perceived as coming from the midline. A limitation of the duplex theory is that the theory does not completely explain directional hearing, as no explanation is given for the ability to distinguish between a sound source directly in front and behind. Also, the theory only relates to localising sounds in the horizontal plane around the head. The theory also does not take into account the use of the pinna in localisation (Gelfand, 2004). Experiments conducted by Woodworth (1938) tested the duplex theory by using a solid sphere to model the shape of the head and measuring the ITDs as a function of azimuth for different frequencies. The model used had a distance between the two ears of approximately 22–23 cm. 
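Before the measured values are quoted, a rough numerical check of what this head geometry implies is sketched below. It is only an illustration: the straight-line path approximation and the 343 m/s speed of sound are assumptions introduced here, not part of the original experiments.

```python
# Back-of-the-envelope check of the spherical-head model discussed above.
SPEED_OF_SOUND = 343.0  # m/s in air, assumed value

ear_separation = 0.225  # m, roughly the 22-23 cm quoted for the model

# Crude upper bound on the ITD: the extra straight-line path for a source at
# 90 degrees azimuth is about one ear separation.
max_itd = ear_separation / SPEED_OF_SOUND
print(f"approximate maximum ITD: {max_itd * 1e6:.0f} microseconds")  # ~656 us

# Frequency whose wavelength equals the ear separation; near this value the
# interaural phase cue becomes ambiguous.
crossover_frequency = SPEED_OF_SOUND / ear_separation
print(f"crossover frequency: {crossover_frequency:.0f} Hz")  # ~1524 Hz
```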
Initial measurements found that there was a maximum time delay of approximately 660 μs when the sound source was placed directly at 90° azimuth to one ear. This time delay corresponds to the period of a sound input with a frequency of about 1500 Hz. The results indicated that when a sound has a frequency below 1500 Hz, its period is greater than this maximum time delay between the ears. Therefore, there is a phase difference between the sound waves entering the ears providing acoustic localisation cues. With a sound input with a frequency closer to 1500 Hz the period of the sound wave is similar to the natural time delay. Therefore, due to the size of the head and the distance between the ears there is a reduced phase difference, so localisation errors start to be made. When a high-frequency sound input is used with a frequency greater than 1500 Hz, the wavelength is shorter than the distance between the ears, a head shadow is produced, and ILD provide cues for the localisation of this sound. Feddersen et al. (1957) also conducted experiments taking measurements on how ITDs alter with changing the azimuth of the loudspeaker around the head at different frequencies. But unlike the Woodworth experiments, human subjects were used rather than a model of the head. The experiment results agreed with the conclusion made by Woodworth about ITDs. The experiments also concluded that there is no difference in ITDs when sounds are provided from directly in front or behind at 0° and 180° azimuth. The explanation for this is that the sound is equidistant from both ears. Interaural time differences alter as the loudspeaker is moved around the head. The maximum ITD of 660 μs occurs when a sound source is positioned at 90° azimuth to one ear. Current findings Starting in 1948, the prevailing theory on interaural time differences centered on the idea that the medial superior olive differentially processes inputs from the ipsilateral and contralateral side relative to the sound. This is accomplished through a discrepancy in arrival time of excitatory inputs into the medial superior olive, based on differential conductance of the axons, which allows both sounds to ultimately converge at the same time through neurons with complementary intrinsic properties. Franken et al. attempted to further elucidate the mechanisms underlying ITD in mammalian brains. One experiment they performed was to isolate discrete inhibitory post-synaptic potentials and try to determine whether inhibitory inputs to the superior olive were allowing the faster excitatory input to delay firing until the two signals were synced. However, after blocking EPSPs with a glutamate receptor blocker, they determined that the size of inhibitory inputs was too marginal to appear to play a significant role in phase locking. This was verified when the experimenters blocked inhibitory input and still saw clear phase locking of the excitatory inputs in their absence. This led them to the theory that in-phase excitatory inputs are summated such that the brain can process sound localization by counting the number of action potentials that arise from various magnitudes of summated depolarization. Franken et al. also examined anatomical and functional patterns within the superior olive to clarify previous theories about the rostrocaudal axis serving as a source of tonotopy. 
Their results showed a significant correlation between tuning frequency and relative position along the dorsoventral axis, while they saw no distinguishable pattern of tuning frequency on the rostrocaudal axis. Lastly, they went on to further explore the driving forces behind the interaural time difference, specifically whether the process is simply the alignment of inputs that is processed by a coincidence detector, or whether the process is more complicated. Evidence from Franken et al. shows that the processing is affected by inputs that precede the binaural signal, which would alter the functioning of voltage-gated sodium and potassium channels to shift the membrane potential of the neuron. Furthermore, the shift is dependent on the frequency tuning of each neuron, ultimately creating a more complex confluence and analysis of sound. These findings provide several pieces of evidence that contradict existing theories about binaural audition. The anatomy of the ITD pathway The auditory nerve fibres, known as the afferent nerve fibres, carry information from the organ of Corti to the brainstem and brain. Auditory afferent fibres consist of two types of fibres called type I and type II fibres. Type I fibres innervate the base of one or two inner hair cells and Type II fibres innervate the outer hair cells. Both leave the organ of Corti through an opening called the habenula perforata. The type I fibres are thicker than the type II fibres and may also differ in how they innervate the inner hair cells. Neurons with large calyceal endings ensure preservation of timing information throughout the ITD pathway. Next in the pathway is the cochlear nucleus, which receives mainly ipsilateral (that is, from the same side) afferent input. The cochlear nucleus has three distinct anatomical divisions, known as the antero-ventral cochlear nucleus (AVCN), postero-ventral cochlear nucleus (PVCN) and dorsal cochlear nucleus (DCN) and each have different neural innervations. The AVCN contains predominant bushy cells, with one or two profusely branching dendrites; it is thought that bushy cells may process the change in the spectral profile of complex stimuli. The AVCN also contain cells with more complex firing patterns than bushy cells called multipolar cells, these cells have several profusely branching dendrites and irregular shaped cell bodies. Multipolar cells are sensitive to changes in acoustic stimuli and in particular, onset and offset of sounds, as well as changes in intensity and frequency. The axons of both cell types leave the AVCN as large tract called the ventral acoustic stria, which forms part of the trapezoid body and travels to the superior olivary complex. A group of nuclei in pons make up the superior olivary complex (SOC). This is the first stage in auditory pathway to receive input from both cochleas, which is crucial for our ability to localise the sounds source in the horizontal plane. The SOC receives input from cochlear nuclei, primarily the ipsilateral and contralateral AVCN. Four nuclei make up the SOC but only the medial superior olive (MSO) and the lateral superior olive (LSO) receive input from both cochlear nuclei. The MSO is made up of neurons which receive input from the low-frequency fibers of the left and right AVCN. The result of having input from both cochleas is an increase in the firing rate of the MSO units. The neurons in the MSO are sensitive to the difference in the arrival time of sound at each ear, also known as the interaural time difference (ITD). 
Research shows that if stimulation arrives at one ear before the other, many of the MSO units will have increased discharge rates. The axons from the MSO continue to higher parts of the pathway via the ipsilateral lateral lemniscus tract.(Yost, 2000) The lateral lemniscus (LL) is the main auditory tract in the brainstem connecting SOC to the inferior colliculus. The dorsal nucleus of the lateral lemniscus (DNLL) is a group of neurons separated by lemniscus fibres, these fibres are predominantly destined for the inferior colliculus (IC). In studies using an unanesthetized rabbit the DNLL was shown to alter the sensitivity of the IC neurons and may alter the coding of interaural timing differences (ITDs) in the IC.(Kuwada et al., 2005) The ventral nucleus of the lateral lemniscus (VNLL) is a chief source of input to the inferior colliculus. Research using rabbits shows the discharge patterns, frequency tuning and dynamic ranges of VNLL neurons supply the inferior colliculus with a variety of inputs, each enabling a different function in the analysis of sound.(Batra & Fitzpatrick, 2001) In the inferior colliculus (IC) all the major ascending pathways from the olivary complex and the central nucleus converge. The IC is situated in the midbrain and consists of a group of nuclei the largest of these is the central nucleus of inferior colliculus (CNIC). The greater part of the ascending axons forming the lateral lemniscus will terminate in the ipsilateral CNIC however a few follow the commissure of Probst and terminate on the contralateral CNIC. The axons of most of the CNIC cells form the brachium of IC and leave the brainstem to travel to the ipsilateral thalamus. Cells in different parts of the IC tend to be monaural, responding to input from one ear, or binaural and therefore respond to bilateral stimulation. The spectral processing that occurs in the AVCN and the ability to process binaural stimuli, as seen in the SOC, are replicated in the IC. Lower centres of the IC extract different features of the acoustic signal such as frequencies, frequency bands, onsets, offsets, changes in intensity and localisation. The integration or synthesis of acoustic information is thought to start in the CNIC.(Yost, 2000) Effect of a hearing loss A number of studies have looked into the effect of hearing loss on interaural time differences. In their review of localisation and lateralisation studies, Durlach, Thompson, and Colburn (1981), cited in Moore (1996) found a "clear trend for poor localization and lateralization in people with unilateral or asymmetrical cochlear damage". This is due to the difference in performance between the two ears. In support of this, they did not find significant localisation problems in individuals with symmetrical cochlear losses. In addition to this, studies have been conducted into the effect of hearing loss on the threshold for interaural time differences. The normal human threshold for detection of an ITD is up to a time difference of 10 μs. Studies by Gabriel, Koehnke, & Colburn (1992), Häusler, Colburn, & Marr (1983) and Kinkel, Kollmeier, & Holube (1991) (cited by Moore, 1996) have shown that there can be great differences between individuals regarding binaural performance. It was found that unilateral or asymmetric hearing losses can increase the threshold of ITD detection in patients. This was also found to apply to individuals with symmetrical hearing losses when detecting ITDs in narrowband signals. 
However, ITD thresholds seem to be normal for those with symmetrical losses when listening to broadband sounds. See also Sound localization References Further reading Feddersen, W. E., Sandel, T. T., Teas, D. C., Jeffress, L. A. (1957) Localization of high frequency tones. Journal of the Acoustical Society of America. 29: 988–991. Fitzpatrick, D. C., Batra, R., Kuwada, S. (1997). Neurons Sensitive to InterauralTemporal Disparities in the Medial Part of the Ventral Nucleus of the Lateral Lemniscus. The Journal of Neurophysiology. 78: 511–515. Franken TP, Roberts MT, Wei L, NL NLG, Joris PX. In vivo coincidence detection in mammalian sound localization generates phase delays. Nature neuroscience. 2015;18(3):444-452. doi:10.1038/nn.3948. Gelfand, S. A. (2004) Hearing: An Introduction to Psychological and Physiological Acoustics. 4th Edition New York: Marcel Dekker. Kuwada, S., Fitzpatrick, D. C., Batra, R., Ostapoff, E. M. ( 2005). Sensitivity to Interaural Time Difference in the Dorsal Nucleus of the Lateral Lemniscus of the Unanesthetized Rabbit: Comparison with other structures. Journal of Neurophysiology. 95: 1309–1322. Moore, B. (1996) Perceptual Consequences of Cochlear Hearing Loss and their Implications for the Design of Hearing Aids. Ear and Hearing. 17(2):133-161 Moore, B. C. (2004) An Introduction to the Psychology of Hearing. 5th Edition London: Elsevier Academic Press. Woodworth, R. S. (1938) Experimental Psychology. New York: Holt, Rinehart, Winston. Yost, W. A. (2000) Fundamentals of Hearing: An Introduction. 4th Edition San Diego: Academic Press. External links Calculation of phase angle (phase difference) from time delay (time of arrival ITD) and frequency Audio engineering
Interaural time difference
[ "Engineering" ]
3,470
[ "Electrical engineering", "Audio engineering" ]
4,548,351
https://en.wikipedia.org/wiki/Dyall%20Hamiltonian
In quantum chemistry, the Dyall Hamiltonian is a modified Hamiltonian with two-electron nature. It can be written as follows: where labels , , denote core, active and virtual orbitals (see Complete active space) respectively, and are the orbital energies of the involved orbitals, and operators are the spin-traced operators . These operators commute with and , therefore the application of these operators on a spin-pure function produces again a spin-pure function. The Dyall Hamiltonian behaves like the true Hamiltonian inside the CAS space, having the same eigenvalues and eigenvectors of the true Hamiltonian projected onto the CAS space. References Quantum chemistry
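The defining equation appears to have been dropped from the text above. For reference, the form usually quoted in the NEVPT2 literature is reproduced below; the orbital labels (i for core, a, b, c, d for active, r for virtual), the effective one-electron integrals and the constant C follow that literature and are a reconstruction, not the article's own notation.

```latex
% Dyall Hamiltonian as commonly written in the NEVPT2 literature (reconstruction)
\hat{\mathcal{H}}^{\mathrm{D}} = \hat{\mathcal{H}}_{i} + \hat{\mathcal{H}}_{v} + C

% One-electron part over core (i) and virtual (r) orbitals with orbital energies \varepsilon
\hat{\mathcal{H}}_{i} = \sum_{i}^{\mathrm{core}} \varepsilon_{i}\, E_{ii}
                      + \sum_{r}^{\mathrm{virt}} \varepsilon_{r}\, E_{rr}

% Full two-electron Hamiltonian restricted to the active orbitals a, b, c, d
\hat{\mathcal{H}}_{v} = \sum_{ab}^{\mathrm{act}} h^{\mathrm{eff}}_{ab}\, E_{ab}
  + \frac{1}{2} \sum_{abcd}^{\mathrm{act}} \langle ab \vert cd \rangle
    \left( E_{ac} E_{bd} - \delta_{bc} E_{ad} \right)

% E_{pq} = \sum_{\sigma} a^{\dagger}_{p\sigma} a_{q\sigma} are spin-traced excitation
% operators; C is a constant chosen so the eigenvalues inside the CAS match those of
% the true Hamiltonian projected onto the CAS space.
```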
Dyall Hamiltonian
[ "Physics", "Chemistry" ]
141
[ "Quantum chemistry stubs", "Quantum chemistry", "Theoretical chemistry stubs", "Quantum mechanics", "Theoretical chemistry", " molecular", "Atomic", "Physical chemistry stubs", " and optical physics" ]
4,550,348
https://en.wikipedia.org/wiki/Nanophotonics
Nanophotonics or nano-optics is the study of the behavior of light on the nanometer scale, and of the interaction of nanometer-scale objects with light. It is a branch of optics, optical engineering, electrical engineering, and nanotechnology. It often involves dielectric structures such as nanoantennas, or metallic components, which can transport and focus light via surface plasmon polaritons. The term "nano-optics", just like the term "optics", usually refers to situations involving ultraviolet, visible, and near-infrared light (free-space wavelengths from 300 to 1200 nanometers). Background Normal optical components, like lenses and microscopes, generally cannot normally focus light to nanometer (deep subwavelength) scales, because of the diffraction limit (Rayleigh criterion). Nevertheless, it is possible to squeeze light into a nanometer scale using other techniques like, for example, surface plasmons, localized surface plasmons around nanoscale metal objects, and the nanoscale apertures and nanoscale sharp tips used in near-field scanning optical microscopy (SNOM or NSOM) and photoassisted scanning tunnelling microscopy. Application Nanophotonics researchers pursue a very wide variety of goals, in fields ranging from biochemistry to electrical engineering to carbon-free energy. A few of these goals are summarized below. Optoelectronics and microelectronics If light can be squeezed into a small volume, it can be absorbed and detected by a small detector. Small photodetectors tend to have a variety of desirable properties including low noise, high speed, and low voltage and power. Small lasers have various desirable properties for optical communication including low threshold current (which helps power efficiency) and fast modulation (which means more data transmission). Very small lasers require subwavelength optical cavities. An example is spasers, the surface plasmon version of lasers. Integrated circuits are made using photolithography, i.e. exposure to light. In order to make very small transistors, the light needs to be focused into extremely sharp images. Using various techniques such as immersion lithography and phase-shifting photomasks, it has indeed been possible to make images much finer than the wavelength—for example, drawing 30 nm lines using 193 nm light. Plasmonic techniques have also been proposed for this application. Heat-assisted magnetic recording is a nanophotonic approach to increasing the amount of data that a magnetic disk drive can store. It requires a laser to heat a tiny, subwavelength area of the magnetic material before writing data. The magnetic write-head would have metal optical components to concentrate light at the right location. Miniaturization in optoelectronics, for example the miniaturization of transistors in integrated circuits, has improved their speed and cost. However, optoelectronic circuits can only be miniaturized if the optical components are shrunk along with the electronic components. This is relevant for on-chip optical communication (i.e. passing information from one part of a microchip to another by sending light through optical waveguides, instead of changing the voltage on a wire). Solar cells Solar cells often work best when the light is absorbed very close to the surface, both because electrons near the surface have a better chance of being collected, and because the device can be made thinner, which reduces cost. 
Researchers have investigated a variety of nanophotonic techniques to intensify light in the optimal locations within a solar cell. Controlled release of anti-cancer therapeutics Nanophotonics has also been implicated in aiding the controlled and on-demand release of anti-cancer therapeutics like adriamycin from nanoporous optical antennas to target triple-negative breast cancer and mitigate exocytosis anti-cancer drug resistance mechanisms and therefore circumvent toxicity to normal systemic tissues and cells. Spectroscopy Using nanophotonics to create high peak intensities: If a given amount of light energy is squeezed into a smaller and smaller volume ("hot-spot"), the intensity in the hot-spot gets larger and larger. This is especially helpful in nonlinear optics; an example is surface-enhanced Raman scattering. It also allows sensitive spectroscopy measurements of even single molecules located in the hot-spot, unlike traditional spectroscopy methods which take an average over millions or billions of molecules. Microscopy One goal of nanophotonics is to construct a so-called "superlens", which would use metamaterials (see below) or other techniques to create images that are more accurate than the diffraction limit (deep subwavelength). In 1995, Guerra demonstrated this by imaging a silicon grating having 50 nm lines and spaces with illumination having 650 nm wavelength in air. This was accomplished by coupling a transparent phase grating having 50 nm lines and spaces (metamaterial) with an immersion microscope objective (superlens). Near-field scanning optical microscope (NSOM or SNOM) is a quite different nanophotonic technique that accomplishes the same goal of taking images with resolution far smaller than the wavelength. It involves raster-scanning a very sharp tip or very small aperture over the surface to be imaged. Near-field microscopy refers more generally to any technique using the near-field (see below) to achieve nanoscale, subwavelength resolution. In 1987, Guerra (while at the Polaroid Corporation) achieved this with a non-scanning whole-field Photon tunneling microscope. In another example, dual-polarization interferometry has picometer resolution in the vertical plane above the waveguide surface. Optical data storage Nanophotonics in the form of subwavelength near-field optical structures, either separate from the recording media, or integrated into the recording media, were used to achieve optical recording densities much higher than the diffraction limit allows. This work began in the 1980s at Polaroid Optical Engineering (Cambridge, Massachusetts), and continued under license at Calimetrics (Bedford, Massachusetts) with support from the NIST Advanced Technology Program. Band-gap engineering In 2002, Guerra (Nanoptek Corporation) demonstrated that nano-optical structures of semiconductors exhibit bandgap shifts because of induced strain. In the case of titanium dioxide, structures on the order of less than 200 nm half-height width will absorb not only in the normal ultraviolet part of the solar spectrum, but well into the high-energy visible blue as well. In 2008, Thulin and Guerra published modeling that showed not only bandgap shift, but also band-edge shift, and higher hole mobility for lower charge recombination. The band-gap engineered titanium dioxide is used as a photoanode in efficient photolytic and photo-electro-chemical production of hydrogen fuel from sunlight and water. 
Silicon nanophotonics Silicon photonics is a silicon-based subfield of nanophotonics in which nano-scale optoelectronic structures are realized on silicon substrates and are capable of controlling both light and electrons. They allow electronic and optical functionality to be coupled in a single device. Such devices find a wide variety of applications outside academic settings, e.g. mid-infrared and overtone spectroscopy, and on-chip logic gates and cryptography. As of 2016, research in silicon photonics spanned light modulators, optical waveguides and interconnectors, optical amplifiers, photodetectors, memory elements, photonic crystals, etc. An area of particular interest is silicon nanostructures capable of efficiently generating electrical energy from sunlight (e.g. for solar panels). Principles Plasmons and metal optics Metals are an effective way to confine light to far below the wavelength. This was originally used in radio and microwave engineering, where metal antennas and waveguides may be hundreds of times smaller than the free-space wavelength. For a similar reason, visible light can be confined to the nano-scale via nano-sized metal structures, such as tips, gaps, and other nano-sized features. Many nano-optics designs look like common microwave or radiowave circuits, but shrunk down by a factor of 100,000 or more. After all, radiowaves, microwaves, and visible light are all electromagnetic radiation; they differ only in frequency. So other things equal, a microwave circuit shrunk down by a factor of 100,000 will behave the same way but at 100,000 times higher frequency. This effect is somewhat analogous to a lightning rod, where the field concentrates at the tip. The technological field that makes use of the interaction between light and metals is called plasmonics. It is fundamentally based on the fact that the permittivity of the metal is very large and negative. At very high frequencies (near and above the plasma frequency, usually ultraviolet), the permittivity of a metal is not so large, and the metal stops being useful for concentrating fields. For example, researchers have made nano-optical dipoles and Yagi–Uda antennas following essentially the same design as used for radio antennas. Metallic parallel-plate waveguides (striplines), lumped-constant circuit elements such as inductance and capacitance (at visible light frequencies, the values of the latter being of the order of femtohenries and attofarads, respectively), and impedance-matching of dipole antennas to transmission lines, all familiar techniques at microwave frequencies, are some current areas of nanophotonics development. That said, there are a number of very important differences between nano-optics and scaled-down microwave circuits. For example, at optical frequency, metals behave much less like ideal conductors, and also exhibit interesting plasmon-related effects like kinetic inductance and surface plasmon resonance. Likewise, optical fields interact with semiconductors in a fundamentally different way than microwaves do. Near-field optics The Fourier transform of a spatial field distribution consists of different spatial frequencies. The higher spatial frequencies correspond to the very fine features and sharp edges. In nanophotonics, strongly localized radiation sources (dipolar emitters such as fluorescent molecules) are often studied. These sources can be decomposed into a vast spectrum of plane waves with different wavenumbers, which correspond to the angular spatial frequencies. 
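To make the wavenumber bookkeeping in this near-field discussion concrete, the sketch below evaluates how quickly a plane-wave component whose spatial frequency exceeds the free-space wavenumber decays away from the source. The 500 nm wavelength and 100 nm feature size are illustrative assumptions, not values from the text.

```python
import math

# Decay of a high-spatial-frequency (evanescent) plane-wave component in the near field.
wavelength = 500e-9    # m, assumed free-space wavelength
feature_size = 100e-9  # m, assumed subwavelength feature (half the spatial period)

k0 = 2 * math.pi / wavelength  # free-space wavenumber
kx = math.pi / feature_size    # transverse spatial frequency set by the feature

# For kx > k0 the longitudinal wavenumber is imaginary, kz = i*sqrt(kx^2 - k0^2),
# so the component falls off as exp(-z / decay_length) instead of propagating.
decay_length = 1.0 / math.sqrt(kx**2 - k0**2)
print(f"1/e decay length: {decay_length * 1e9:.1f} nm")  # a few tens of nanometres
```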
The frequency components with higher wavenumbers compared to the free-space wavenumber of the light form evanescent fields. Evanescent components exist only in the near field of the emitter and decay without transferring net energy to the far field. Thus, subwavelength information from the emitter is blurred out; this results in the diffraction limit in the optical systems. Nanophotonics is primarily concerned with the near-field evanescent waves. For example, a superlens (mentioned above) would prevent the decay of the evanescent wave, allowing higher-resolution imaging. Metamaterials Metamaterials are artificial materials engineered to have properties that may not be found in nature. They are created by fabricating an array of structures much smaller than a wavelength. The small (nano) size of the structures is important: That way, light interacts with them as if they made up a uniform, continuous medium, rather than scattering off the individual structures. See also ACS Photonics Photonics Photonics Spectra journal Ultraperformance Nanophotonic Intrachip Communications References External links ePIXnet Nanostructuring Platform for Photonic Integration Optically induced mass transport in near fields "Photonics Breakthrough for Silicon Chips: Light can exert enough force to flip switches on a silicon chip," by Hong X. Tang, IEEE Spectrum, October 2009 Nanophotonics, nano-optics and nanospectroscopy A. J. Meixner (Ed.) Thematic Series in the Open Access Beilstein Journal of Nanotechnology Photonics Nanoelectronics
Nanophotonics
[ "Materials_science" ]
2,480
[ "Nanotechnology", "Nanoelectronics" ]
4,550,431
https://en.wikipedia.org/wiki/Sum-frequency%20generation
Sum-frequency generation (SFG) is a second-order nonlinear optical process based on the mixing of two input photons at frequencies ω1 and ω2 to generate a third photon at frequency ω3. As with any optical phenomenon in nonlinear optics, this can only occur under conditions where: the light is interacting with matter that lacks centrosymmetry (for example, surfaces and interfaces); the light has a very high intensity (typically from a pulsed laser). Sum-frequency generation is a "parametric process", meaning that the photons satisfy energy conservation, leaving the matter unchanged: ω3 = ω1 + ω2. Second-harmonic generation A special case of sum-frequency generation is second-harmonic generation, in which ω1 = ω2. In fact, in experimental physics, this is the most common type of sum-frequency generation. This is because in second-harmonic generation, only one input light beam is required, but if ω1 ≠ ω2, two simultaneous beams are required, which can be more difficult to arrange. In practice, the term "sum-frequency generation" usually refers to the less common case in which ω1 ≠ ω2. Phase-matching For sum-frequency generation to occur efficiently, the phase-matching condition must be satisfied: k3 = k1 + k2, where k1, k2 and k3 are the angular wavenumbers of the three waves as they travel through the medium. (Note that the equation resembles the equation for conservation of momentum.) As this condition is satisfied more and more accurately, the sum-frequency generation becomes more and more efficient. Sum frequency generation spectroscopy Sum frequency generation spectroscopy uses two laser beams mixed at an interface to generate an output beam with a frequency equal to the sum of the two input frequencies. Sum frequency generation spectroscopy is used to analyze surfaces and interfaces, carrying complementary information to infrared and Raman spectroscopy. References Nonlinear optics Surface science
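Written in the photon picture, the two conditions just stated take the following display form (a standard restatement with ℏ included to make the momentum analogy explicit; not additional material from the article):

```latex
% Energy conservation for the parametric process
\hbar \omega_3 = \hbar \omega_1 + \hbar \omega_2

% Collinear phase matching, i.e. conservation of photon momentum in the medium
\hbar \vec{k}_3 = \hbar \vec{k}_1 + \hbar \vec{k}_2,
\qquad |\vec{k}_i| = \frac{n(\omega_i)\,\omega_i}{c}
```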
Sum-frequency generation
[ "Physics", "Chemistry", "Materials_science" ]
346
[ "Condensed matter physics", "Surface science" ]
4,551,850
https://en.wikipedia.org/wiki/Subtropical%20front
A subtropical front is a surface water mass boundary or front, which is a narrow zone of transition between air masses of contrasting density, air masses of different temperatures or different water vapour concentrations. It is also characterized by an abrupt change in wind direction and speed across its surface between water systems, which are distinguished by temperature and salinity. The subtropical front separates the more saline subtropical waters from the fresher sub-Antarctic waters. Subtropical frontal zone A subtropical frontal zone (STFZ) has a large seasonal cycle and is located on the eastern side of ocean basins. It is made up of multiple weak sea surface temperature (SST) fronts, aligned northwest–southeast and spread over a large latitudinal span. On the far eastern side of basins, the subtropical frontal zone becomes narrower and temperature gradients stronger, but still much weaker than across the dynamical subtropical frontal zone. A dynamical frontal zone sits at the southern limit of the saline subtropical waters on the western sides of basins. There are no water mass boundaries or fronts correlated with the sea surface temperature at the subtropical frontal zone, either at the surface or beneath it. The structure of a subtropical frontal zone results in the formation of a positive wind stress curl, where wind stress is the shear stress exerted by wind on the surface of the water. The areas of most positive wind stress curl are characterized by very weak sea surface temperature gradients, and likely correspond to regions of mode water. Northern subtropical front The Northern subtropical front is found in the Pacific Ocean between 25° and 30° north latitude. North Atlantic subtropical fronts The North Atlantic subtropical fronts show seasonal variability. The highest front occurrences are during early spring in the western region, while front probability is lower in late spring to early summer in the eastern region. The strengths of the fronts differ with the seasons, building strength while moving southward during the winter and spring, and weakening while moving northward during the summer. North Pacific subtropical fronts The North Pacific subtropical fronts are occupied by wind-driven submesoscale subduction. Due to the constant thermohaline circulation at the fronts, cold water flows near the surface and bottom of the ocean. There are alternating fluxes throughout the year, influenced by jet streams, which cause temperatures in these areas to differ. Southern subtropical front The Southern subtropical front forms where warm, salty subtropical waters meet Antarctic waters, and is found in all three ocean basins. A commonly used criterion is that the salinity at a depth of 100 m drops below 34.9 practical salinity units. South Atlantic subtropical frontal zone A characteristic of the South Atlantic subtropical frontal zone, between 15°W and 5°E, is the transition from subtropical to sub-polar waters. This constrains the flow of the South Atlantic Current, which is bounded by a distinct front. See also Ocean current References External links Southern Subtropical Front Physical oceanography
Subtropical front
[ "Physics", "Chemistry" ]
579
[ "Ocean currents", "Applied and interdisciplinary physics", "Physical oceanography", "Fluid dynamics" ]
4,552,940
https://en.wikipedia.org/wiki/LM317
The LM317 is an adjustable positive linear voltage regulator. It was designed by Bob Dobkin in 1976 while he worked at National Semiconductor. The LM337 is the negative complement to the LM317, which regulates voltages below a reference. It was designed by Bob Pease, who also worked for National Semiconductor. Specifications Without a heat sink, at an ambient temperature of 50 °C, a maximum power dissipation of (TJ-TA)/RθJA = ((125-50)/80) = 0.94 W can be permitted. In constant voltage mode with an input voltage VIN of 34 V and a desired output voltage of 5 V, the maximum output current will be PMAX / (VIN-VO) = 0.94 / (34-5) = 32 mA. For constant current mode with an input voltage VIN of 12 V and a forward voltage drop of VF=3.6 V, the maximum output current will be PMAX / (VIN - VF) = 0.94 / (12-3.6) = 112 mA. Operation As linear regulators, the LM317 and LM337 are used in DC to DC converter applications. Linear regulators inherently waste power; the power dissipated is the current passed multiplied by the voltage difference between input and output. An LM317 commonly requires a heat sink to prevent the operating temperature from rising too high. For large voltage differences, the power lost as heat can ultimately be greater than that provided to the circuit. This is the tradeoff for using linear regulators, which are a simple way to provide a stable voltage with few additional components. The alternative is to use a switching voltage regulator, which is usually more efficient, but has a larger footprint and requires a larger number of associated components. In packages with a heat-dissipating mounting tab, such as TO-220, the tab is connected internally to the output pin, which may make it necessary to electrically isolate the tab or the heat sink from other parts of the application circuit. Failure to do this may cause the circuit to short. Voltage regulator The LM317 has three pins: INput, OUTput, and ADJustment. Internally the device has a bandgap voltage reference which produces a stable reference voltage of Vref = 1.25 V, followed by a feedback-stabilized amplifier with a relatively high output current capacity. How the adjustment pin is connected determines the output voltage as follows. If the adjustment pin is connected to ground, the output pin delivers a regulated voltage of 1.25 V at currents up to the maximum. Higher regulated voltages are obtained by connecting the adjustment pin to a resistive voltage divider between the output and ground. Then Vref is the difference in voltage between the OUT pin and the ADJ pin. Vref is typically 1.25 V during normal operation. With a divider made of R1 (between the OUT and ADJ pins) and R2 (between the ADJ pin and ground), the output is ideally VOUT = Vref (1 + R2/R1). Because some quiescent current IADJ flows from the adjustment pin of the device, an error term is added: VOUT = Vref (1 + R2/R1) + IADJ R2. To make the output more stable, the device is designed to keep the quiescent current at or below 100 μA, making it possible to ignore the error term in nearly all practical cases. Current regulator The device can be configured to regulate the current to a load, rather than the voltage, by replacing the low-side resistor of the divider with the load itself. The output current is that resulting from dropping the reference voltage across the resistor. 
Ideally, this is IOUT = Vref / R1. Accounting for quiescent current, this becomes IOUT = Vref / R1 + IADJ. The LM317 can also be used to design various other circuits, such as a 0 V to 30 V regulator circuit, an adjustable regulator circuit with improved ripple rejection, a precision current limiter circuit, a tracking pre-regulator circuit, a 1.25 V to 20 V regulator circuit with minimum program current, adjustable multiple on-card regulators with single control, a battery charger circuit, a 50 mA constant current battery charger circuit, a slow turn-on 15 V regulator circuit, an AC voltage regulator circuit, a current-limited 6 V charger circuit, an adjustable 4 V regulator circuit, a high-current adjustable regulator circuit and many more. Compared to 78xx/79xx The LM317 is an adjustable analogue to the popular 78xx fixed regulators. Like the LM317, each of the 78xx regulators is designed to adjust the output voltage until it is some fixed voltage above the adjustment pin (which in this case is labelled "ground"). The mechanism used is similar enough that a voltage divider can be used in the same way as with the LM317 and the output follows the same formula, using the regulator's fixed voltage for Vref (e.g. 5 V for 7805). However, the 78xx device's quiescent current is substantially higher and less stable. Because of this, the error term in the formula cannot be ignored and the value of the low-side resistor becomes more critical. More stable adjustments can be made by providing a reference voltage that is less sensitive than a resistive divider to current fluctuations, such as a diode drop or a voltage buffer. The LM317 is designed to compensate for these fluctuations internally, making such measures unnecessary. The LM337 relates in the same way to the fixed 79xx regulators. Second sources from Eastern Bloc The LM317 has an East European equivalent, the B3170V, which was manufactured in the German Democratic Republic (East Germany) by HFO (part of Kombinat Mikroelektronik Erfurt). The most popular equivalents manufactured in the USSR were the K142EN12A and KR142EN12A ICs, which are functional analogues of the LM317. See also Bandgap voltage reference Brokaw bandgap reference List of LM-series integrated circuits References External links LM317 Circuit Schematics and Pinouts Band-Gap The Design of Band-Gap Reference Circuits: Trials and Tribulations – Robert Pease, National Semiconductor (shows LM317 design in Figure 4: LM117) LM317 Bandgap Voltage Reference Example (ECE 327) – Brief explanation of the temperature-independent bandgap reference circuit within the LM317. Datasheets / Databooks Voltage Regulator Databook (Historical 1980), National Semiconductor LM317 (positive), LM350 (3 Amp), Texas Instruments (TI acquired National Semiconductor) LM317 (positive), LM350 (3 Amp), ON Semiconductor LM317 (positive), STMicroelectronics LM337 (negative), Texas Instruments Linear integrated circuits Voltage regulation
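The regulator and thermal relations above translate directly into a few lines of arithmetic, sketched below. The resistor values are typical application choices and the helper names are invented for this illustration; they are not taken from any datasheet.

```python
# LM317 design arithmetic based on the relations discussed above (illustrative sketch).
V_REF = 1.25    # V, typical reference between the OUT and ADJ pins
I_ADJ = 50e-6   # A, typical adjust-pin current (the device keeps it below ~100 uA)

def output_voltage(r1, r2, v_ref=V_REF, i_adj=I_ADJ):
    """Vout = Vref*(1 + R2/R1) + Iadj*R2 for the standard divider configuration."""
    return v_ref * (1 + r2 / r1) + i_adj * r2

def constant_current(r1, v_ref=V_REF, i_adj=I_ADJ):
    """Iout = Vref/R1 + Iadj when the load replaces the low-side divider resistor."""
    return v_ref / r1 + i_adj

def max_dissipation(t_junction, t_ambient, theta_ja):
    """Pmax = (Tj - Ta) / Rtheta_JA without a heat sink."""
    return (t_junction - t_ambient) / theta_ja

if __name__ == "__main__":
    print(f"Vout with R1=240, R2=720 ohm: {output_voltage(240, 720):.2f} V")      # ~5.04 V
    print(f"Iout with R1=12 ohm:          {constant_current(12) * 1e3:.0f} mA")   # ~104 mA
    print(f"Pmax at Ta=50 C:              {max_dissipation(125, 50, 80):.2f} W")  # ~0.94 W
```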
LM317
[ "Physics" ]
1,373
[ "Voltage", "Physical quantities", "Voltage regulation" ]
4,554,151
https://en.wikipedia.org/wiki/Institute%20of%20Materials%2C%20Minerals%20and%20Mining
The Institute of Materials, Minerals and Mining (IOM3) is a British engineering institution with activities including promotion of the development of materials science. It has been a registered charity governed by a royal charter and a member of the United Kingdom's Science Council, since 2002. In 2019, the IOM3 celebrated the 150-year anniversary of the establishment of the Iron and Steel Institute which the IOM3 now encompasses. In 2022, it had a gross income of £3.99 million. Structure Having resided at Carlton House Terrace off Pall Mall in St James's in central London since 2002, the institute moved to 297 Euston Road on 30 June 2015. The organization has its membership, education, sales, and knowledge transfer office in Grantham. Members qualify for different grades of membership, ranging from Affiliate to Fellow of the Institute of Materials, Minerals and Mining (FIMMM), depending on academic qualifications and professional experience. IOM3 has an individual membership of 15,000, and represents a combination of scientific, technical and human resources. Approximately 25 UK 'local societies' are affiliated with the institute, covering a wide range of disciplines such as ceramics, composites, mining, packaging, polymers, and metallurgy, and organizing events throughout the year. Technical Communities Since April 2022 IOM3 has 22 Technical Community groups representing the breadth of disciplines covered and the materials cycle. These groups previously known as Divisions are now termed as the "IOM3 XXXXX Group" with a common identity and branding/logo. IOM3 Adhesion and Adhesives Group (formerly Society of Adhesion and Adhesives, SAA) IOM3 Applied Earth Science Group IOM3 Biomedical Applications Group IOM3 Ceramics Group ( formerly the Institute of Ceramics until 1993 and Ceramics Society) IOM3 Composites Group (formerly the British Composites Society, BCS) IOM3 Construction Materials Group IOM3 Defense, Safety and Security Group IOM3 Elastomer Group (formerly the Rubber in Engineering Group, RIEG) IOM3 Energy Materials Group IOM3 Energy Transition Group IOM3 Iron & Steel Group IOM3 Materials Characterization and Properties Group IOM3 Materials Processing and Manufacturing Group IOM3 Minerals Processing and Extractive Metallurgy Group IOM3 Mining Technology Group IOM3 Natural Materials Group IOM3 Non-ferrous and Light Metals Group IOM3 Packaging Group (formerly the Institute of Packaging (2005), and Packaging Society) IOM3 Polymer Group IOM3 Surface Technologies Group – includes the Corrosion subgroup, and the former Institute of Vitreous Enameling, IVE IOM3 Sustainable Development Group, includes the Resources Strategy Group, RSG IOM3 Wood Technology Group History The institute's roots go back to the Iron and Steel Institute. In 1869, ironmaster William Menelaus convened and chaired a meeting at the Midland Railway's Queen's Hotel in Birmingham, West Midlands, which led to the founding of the Iron and Steel Institute, which received its royal charters in 1899. Menelaus was its president from 1875 to 1877, and in 1881 was awarded the Bessemer Gold Medal. In 1974, the Iron and Steel Institute merged into the Institute of Metals. The Institute of Metals then merged in 1993, with The Institute of Ceramics and The Plastics and Rubber Institute (PRI) to form the Institute of Materials (IoM). 
The PRI was itself a merger of The Plastics Institute and the Institution of the Rubber Industry (known as the IRI) during the 1980s, a reflection of the declining UK rubber manufacturing industry during this period. IOM3 was formed from the merger of the Institute of Materials and the Institution of Mining and Metallurgy (IMM) in June 2002. More recent mergers include the Institute of Packaging (2005), the Institute of Clay Technology (2006), the Institute of Wood Science (2009) and the Institute of Vitreous Enamellers (2010). List of presidents Function The institute ensures that courses in materials, minerals, mining technology and engineering conform to the standards for professional registration with the Engineering Council UK which establishes codes of practice and monitors legislative matters affecting members' professional interests. The professional development program run by the institute contributes to members' careers towards senior grades of membership and Chartered Scientist (CSci) and Chartered Engineer (CEng) status. Members receive reduced rates for the institute's many books, journals and conferences, and benefit from access to the institute's Information Services. These include extensive library resources as well as a team of materials experts who provide consultancy services to Institute members, and to companies who have joined the institute's Business Partner Program. Activities The institute's educational activities aim to promote the materials discipline to younger generations by allowing access, through the Schools Affiliate Scheme, to a range of educational resources and materials. The institute has close links with schools and colleges and is responsible for accrediting university and college courses and industrial training schemes. The Education & Outreach Trust, which incorporated the institute's existing education activities and was granted charitable status in 2022, offers courses for teachers and teaching resources on materials, as well as careers advice for students. Institute publications such as definitive textbooks are available to students at reduced prices. The institute also offers a series of grants and bursaries to encourage students and organizes events such as the Young Persons' Lecture Competition. Publications The institute's trading subsidiary, IOM Communications Ltd, is responsible for producing its related journals. These include the members' magazines Materials World and Clay Technology. Sage Publishing produces a range of learned journals for the institute, including the Ironmaking and Steelmaking journal, Surface Engineering, Powder Metallurgy, Corrosion Engineering, International Materials Reviews and Materials Science and Technology. The institute also publishes ICON, incorporating IMMAGE (Information on Mining, Metallurgy and Geological Exploration), a reference database of abstracts and citations of scientific and engineering literature for the international minerals industry, and it has links to OneMine, a database of mining publications. Materials World Materials World is the members' magazine of the institute, specifically devoted to the engineering materials cycle, from mining and extraction, through processing and application, to recycling and recovery. 
Editorially, it embraces the whole spectrum of materials and minerals – metals, plastics, polymers, rubber, composites, ceramics and glasses – with particular emphasis on advanced technologies, latest developments and new applications, giving prominence to the topics that are of fundamental importance to those in the industry. Advice The Materials Information Service is a service of the institute that has been giving advice to industry on the selection and use of materials since 1988. This is now part of the institute's Information Services, which include technical inquiry and library services for the materials, minerals and mining sectors, an information help desk, regionally based advisors, and related services. Companies can gain access to the institute's information resources by joining its Business Partner Program. Conferences The institute's Conference Department organizes conferences, events, and exhibitions with the institute's technical committees to help keep members and other delegates informed of the latest developments within the materials, minerals, and mining arena. Awards The IOM3 grants several awards including: Fellowship: Fellow of the Institute of Materials, Minerals and Mining (FIMMM) The Bessemer Gold Medal is an annual prize awarded by the institute for "outstanding services to the steel industry". It was established and endowed by Sir Henry Bessemer in 1874. It was first awarded to Isaac Lowthian Bell in 1874. The 2016 award went to Alan Cramb. The Silver Medal is awarded annually to a young scientist (under the age of 35) designated as "outstanding" in recognition of a crucial contribution to a field of interest. In addition, the institute has many other significant awards for Personal Achievement and Published Works covering materials, minerals and mining. In particular, there are awards covering surface engineering, biomedical materials, ceramics, rubber and plastics, iron and steel, and automotive areas. There are also awards covering education and local societies. Youth On 10 November 2016, the institute launched an Engineering Extravaganza event to encourage people aged 12 to 14 to consider careers in engineering. See also List of mechanical engineering awards References Further reading The Institute produces the magazines Materials World and Clay Technology. They are available to members or by subscription. Materials World now incorporates The Packaging Professional and Wood Focus magazines. Design exchange: Institute of Materials, Minerals and Mining Design exchange: Materials Knowledge Transfer Network Materials Science: Materials Knowledge Transfer Network External links Materials World website 2002 establishments in the United Kingdom Bessemer Gold Medal Charities based in London ECUK Licensed Members Engineering societies based in the United Kingdom Geology organizations Materials science organizations Mining in the United Kingdom Mining organizations Organisations based in the London Borough of Camden Organizations established in 2002
Institute of Materials, Minerals and Mining
[ "Chemistry", "Materials_science", "Engineering" ]
1,787
[ "Bessemer Gold Medal", "Materials science organizations", "Chemical engineering awards", "Materials science" ]
4,555,635
https://en.wikipedia.org/wiki/Frequency%20comb
A frequency comb or spectral comb is a spectrum made of discrete and regularly spaced spectral lines. In optics, a frequency comb can be generated by certain laser sources. A number of mechanisms exist for obtaining an optical frequency comb, including periodic modulation (in amplitude and/or phase) of a continuous-wave laser, four-wave mixing in nonlinear media, or stabilization of the pulse train generated by a mode-locked laser. Much work has been devoted to this last mechanism, which was developed around the turn of the 21st century and ultimately led to one half of the Nobel Prize in Physics being shared by John L. Hall and Theodor W. Hänsch in 2005. The frequency domain representation of a perfect frequency comb is like a Dirac comb, a series of delta functions spaced according to fn = f0 + n·frep, where n is an integer, frep is the comb tooth spacing (equal to the mode-locked laser's repetition rate or, alternatively, the modulation frequency), and f0 is the carrier offset frequency, which is less than frep. Combs spanning an octave in frequency (i.e., a factor of two) can be used to directly measure (and correct for drifts in) f0. Thus, octave-spanning combs can be used to steer a piezoelectric mirror within a carrier–envelope phase-correcting feedback loop. Any mechanism by which the combs' two degrees of freedom (frep and f0) are stabilized generates a comb that is useful for mapping optical frequencies into the radio frequency for the direct measurement of optical frequency. Generation Using a mode-locked laser The most popular way of generating a frequency comb is with a mode-locked laser. Such lasers produce a series of optical pulses separated in time by the round-trip time of the laser cavity. The spectrum of such a pulse train approximates a series of Dirac delta functions separated by the repetition rate (the inverse of the round-trip time) of the laser. This series of sharp spectral lines is called a frequency comb or a frequency Dirac comb. The most common lasers used for frequency-comb generation are Ti:sapphire solid-state lasers or Er:fiber lasers with repetition rates typically between 100 MHz and 1 GHz or even going as high as 10 GHz. Using four-wave mixing Four-wave mixing is a process where intense light at three frequencies f1, f2 and f3 interact to produce light at a fourth frequency f4 = f1 + f2 − f3. If the three frequencies are part of a perfectly spaced frequency comb, then the fourth frequency is mathematically required to be part of the same comb as well. Starting with intense light at two or more equally spaced frequencies, this process can generate light at more and more different equally spaced frequencies. For example, if there are a lot of photons at two frequencies f1 and f2, four-wave mixing could generate light at the new frequency 2f1 − f2. This new frequency would get gradually more intense, and light can subsequently cascade to more and more new frequencies on the same comb. Therefore, a conceptually simple way to make an optical frequency comb is to take two high-power lasers of slightly different frequency and shine them simultaneously through a photonic-crystal fiber. This creates a frequency comb by four-wave mixing as described above. In microresonators An alternative variation of four-wave-mixing-based frequency combs is known as Kerr frequency comb. Here, a single laser is coupled into a microresonator (such as a microscopic glass disk that has whispering-gallery modes). This kind of structure naturally has a series of resonant modes with approximately equally spaced frequencies (similar to a Fabry–Pérot interferometer). 
Unfortunately, the resonant modes are not exactly equally spaced due to dispersion. Nevertheless, the four-wave mixing effect above can create and stabilize a perfect frequency comb in such a structure. Basically, the system generates a perfect comb that overlaps the resonant modes as much as possible. In fact, nonlinear effects can shift the resonant modes to improve the overlap with the perfect comb even more. (The resonant mode frequencies depend on refractive index, which is altered by the optical Kerr effect.) In the time domain, while mode-locked lasers almost always emit a series of short pulses, Kerr frequency combs generally do not. However, a special sub-type of Kerr frequency comb, in which a "cavity soliton" forms in the microresonator, does emit a series of pulses. Using electro-optic modulation of a continuous-wave laser An optical frequency comb can be generated by modulating the amplitude and/or phase of a continuous-wave laser with an external modulator driven by a radio-frequency source. In this manner, the frequency comb is centered around the optical frequency provided by the continuous-wave laser and the modulation frequency or repetition rate is given by the external radio-frequency source. The advantage of this method is that it can reach much higher repetition rates (>10 GHz) than with mode-locked lasers and the two degrees of freedom of the comb can be set independently. The number of lines is lower than with a mode-locked laser (typically a few tens), but the bandwidth can be significantly broadened with nonlinear fibers. This type of optical frequency comb is usually called an electro-optic frequency comb. The first schemes used a phase modulator inside an integrated Fabry–Perot cavity, but with advances in electro-optic modulators new arrangements are possible. Low-frequency combs using electronics A purely electronic device which generates a series of pulses also generates a frequency comb. These are produced for electronic sampling oscilloscopes, but are also used for frequency comparison of microwaves, because they reach up to 1 THz. Since they include 0 Hz, they do not need the tricks which make up the rest of this article. Widening to one octave For many applications, the comb must be widened to at least an octave: that is, the highest frequency in the spectrum must be at least twice the lowest frequency. One of three techniques may be used: supercontinuum generation by strong self-phase modulation in nonlinear photonic-crystal fiber or an integrated waveguide; a Ti:sapphire laser using intracavity self-phase modulation; or generation of the second harmonic in a long crystal, so that by consecutive sum-frequency generation and difference-frequency generation the spectrum of the first and second harmonics widens until they overlap. These processes generate new frequencies on the same comb for similar reasons as discussed above. Carrier–envelope offset measurement From pulse to pulse, an increasing offset accumulates between the optical carrier phase and the maximum of the wave envelope; in the spectrum, each comb line is correspondingly displaced from a harmonic of the repetition rate by the carrier–envelope offset frequency. The carrier–envelope offset frequency is the rate at which the peak of the carrier frequency slips from the peak of the pulse envelope on a pulse-to-pulse basis. Measurement of the carrier–envelope offset frequency is usually done with a self-referencing technique, in which the phase of one part of the spectrum is compared to its harmonic. 
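Before turning to the specific detection schemes, a small numerical sketch may help show why an octave-spanning comb makes the offset frequency accessible. The Python snippet below is illustrative only: the 100 MHz repetition rate, the 23 MHz offset, and the chosen comb index n are assumed example values, and it simply evaluates the comb relation fn = f0 + n·frep for the self-referencing comparison described in the following paragraphs.

```python
# Illustrative sketch (values assumed, not from the article): how the comb
# relation f_n = f_0 + n * f_rep lets an octave-spanning comb reveal the
# carrier-envelope offset frequency f_0 by comparing one tooth's second
# harmonic with the tooth at twice the index.

f_rep = 100e6      # repetition rate / tooth spacing, assumed 100 MHz
f_0   = 23e6       # carrier-envelope offset, assumed 23 MHz (must be < f_rep)

def tooth(n):
    """Frequency of the n-th comb line."""
    return f_0 + n * f_rep

n = 2_000_000                  # a tooth near 200 THz (optical domain)
low  = tooth(n)                # f_n
high = tooth(2 * n)            # f_2n, one octave above
beat = 2 * low - high          # second harmonic of f_n minus f_2n

print(f"f_n  = {low/1e12:.6f} THz")
print(f"f_2n = {high/1e12:.6f} THz")
print(f"beat = {beat/1e6:.1f} MHz (equals f_0)")   # -> 23.0 MHz
```

The beat note falls at the offset frequency itself regardless of which tooth index is chosen, which is why the radio-frequency measurement fixes the optical comb's second degree of freedom.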
Different possible approaches for carrier–envelope offset phase control were proposed in 1999. The two simplest approaches, which require only one nonlinear optical process, are described in the following. In the "f − 2f" technique, light at the lower-energy side of the broadened spectrum is doubled using second-harmonic generation (SHG) in a nonlinear crystal, and a heterodyne beat is generated between that and light at the same wavelength on the upper-energy side of the spectrum. This beat signal, detectable with a photodiode, includes a difference-frequency component, which is the carrier–envelope offset frequency. Conceptually, light at frequency fn = f0 + n·frep is doubled to 2f0 + 2n·frep and mixed with light at the very similar frequency f2n = f0 + 2n·frep, producing a beat signal at frequency f0. In practice, this is not done with a single frequency but with a range of values of n, but the effect is the same. Alternatively, difference-frequency generation (DFG) can be used. From light at opposite ends of the broadened spectrum the difference frequency is generated in a nonlinear crystal, and a heterodyne beat between this mixing product and light at the same wavelength of the original spectrum is measured. This beat frequency, detectable with a photodiode, is the carrier–envelope offset frequency. Here, light at frequencies f0 + 2n·frep and f0 + n·frep is mixed to produce light at frequency n·frep. This is then mixed with light at frequency f0 + n·frep to produce a beat frequency of f0. This avoids the need for frequency doubling at the cost of a second optical mixing step. Again, practical implementation uses a range of values of n, not a single one. Because the phase is measured directly, and not the frequency, it is possible to set the frequency to zero and additionally lock the phase, but because the intensity of the laser and this detector is not very stable, and because the whole spectrum beats in phase, one has to lock the phase on a fraction of the repetition rate. Carrier–envelope offset control In the absence of active stabilization, the repetition rate and carrier–envelope offset frequency would be free to drift. They vary with changes in the cavity length, refractive index of laser optics, and nonlinear effects such as the Kerr effect. The repetition rate can be stabilized using a piezoelectric transducer, which moves a mirror to change the cavity length. In Ti:sapphire lasers using prisms for dispersion control, the carrier–envelope offset frequency can be controlled by tilting the high reflector mirror at the end of the prism pair. This can be done using piezoelectric transducers. In high repetition rate Ti:sapphire ring lasers, which often use double-chirped mirrors to control dispersion, modulation of the pump power using an acousto-optic modulator is often used to control the offset frequency. The phase slip depends strongly on the Kerr effect, and by changing the pump power one changes the peak intensity of the laser pulse and thus the size of the Kerr phase shift. This shift is far smaller than 6 rad, so an additional device for coarse adjustment is needed. A pair of wedges, one moving in or out of the intra-cavity laser beam, can be used for this purpose. The breakthrough which led to a practical frequency comb was the development of technology for stabilizing the carrier–envelope offset frequency. An alternative to stabilizing the carrier–envelope offset frequency is to cancel it completely by use of difference frequency generation (DFG). 
If the difference frequency of light of opposite ends of a broadened spectrum is generated in a nonlinear crystal, the resulting frequency comb is carrier–envelope offset-free since the two spectral parts contributing to the DFG share the same carrier–envelope offset frequency (CEO frequency). This was first proposed in 1999 and demonstrated in 2011 using an erbium fiber frequency comb at the telecom wavelength. This simple approach has the advantage that no electronic feedback loop is needed as in conventional stabilization techniques. It promises to be more robust and stable against environmental perturbations. Applications A frequency comb allows a direct link from radio frequency standards to optical frequencies. Current frequency standards such as atomic clocks operate in the microwave region of the spectrum, and the frequency comb brings the accuracy of such clocks into the optical part of the electromagnetic spectrum. A simple electronic feedback loop can lock the repetition rate to a frequency standard. There are two distinct applications of this technique. One is the optical clock, where an optical frequency is overlapped with a single tooth of the comb on a photodiode, and a radio frequency is compared to the beat signal, the repetition rate, and the CEO-frequency (carrier–envelope offset). Applications for the frequency-comb technique include optical metrology, frequency-chain generation, optical atomic clocks, high-precision spectroscopy, and more precise GPS technology. The other is doing experiments with few-cycle pulses, like above-threshold ionization, attosecond pulses, highly efficient nonlinear optics or high-harmonics generation. These can be single pulses, so that no comb exists, and therefore it is not possible to define a carrier–envelope offset frequency, rather the carrier–envelope offset phase is important. A second photodiode can be added to the setup to gather phase and amplitude in a single shot, or difference-frequency generation can be used to even lock the offset on a single-shot basis, albeit with low power efficiency. Without an actual comb one can look at the phase vs frequency. Without a carrier–envelope offset all frequencies are cosines. This means that all frequencies have the phase zero. The time origin is arbitrary. If a pulse comes at later times, the phase increases linearly with frequency, but still the zero-frequency phase is zero. This phase at zero frequency is the carrier–envelope offset. The second harmonic not only has twice the frequency, but also twice the phase. Thus for a pulse with zero offset the second harmonic of the low-frequency tail is in phase with the fundamental of the high-frequency tail, and otherwise it is not. Spectral phase interferometry for direct electric-field reconstruction (SPIDER) measures how the phase increases with frequency, but it cannot determine the offset, so the name “electric field reconstruction” is a bit misleading. In recent years, the frequency comb has been garnering interest for astro-comb applications, extending the use of the technique as a spectrographic observational tool in astronomy. There are other applications that do not need to lock the carrier–envelope offset frequency to a radio-frequency signal. These include, among others, optical communications, the synthesis of optical arbitrary waveforms, spectroscopy (especially dual-comb spectroscopy) or radio-frequency photonics. On the other hand, optical frequency combs have found new applications in remote sensing. 
Ranging lidars based on dual comb spectroscopy have been developed, enabling high-resolution range measurements at fast update rates. Optical frequency combs can also be utilized to measure greenhouse gas emissions with great precision. For instance, in 2019, scientists at NIST employed spectroscopy to quantify methane emissions from oil and gas fields . More recently, a greenhouse gas lidar based on electro-optic combs has been successfully demonstrated. History The frequency comb was proposed in 2000. Before its introduction, the EM spectrum was divided between the electronic/radio frequency range and the optical/laser frequency range. The radio frequency range had accurate frequency counters, allowing highly accurate measurements of absolute frequency. The optical range has no such device. The two ranges are separated by a frequency gap of . Before the frequency comb, the only way to bridge the gap were the harmonic frequency chains, which doubles radio frequency in 15 stages, reaching a frequency multiplication of . However, those were large and expensive to operate. The frequency comb managed to bridge that gap in one stage. Theodor W. Hänsch and John L. Hall shared half of the 2005 Nobel Prize in Physics for contributions to the development of laser-based precision spectroscopy, including the optical frequency-comb technique. The other half of the prize was awarded to Roy Glauber. Also in 2005, the femtosecond comb technique was extended to the extreme ultraviolet range, enabling frequency metrology in that region of the spectrum. See also Astro-comb Atomic clock Bandwidth-limited pulse Magneto-optical trap References Further reading Nobel prize for Physics (2005) Press Release External links Attosecond control of optical waveforms Femtosecond laser comb Optical frequency comb for dimensional metrology, atomic and molecular spectroscopy, and precise time keeping Rulers of Light: Using Lasers to Measure Distance and Time by Steven Cundiff in Scientific American On-chip, electronically tunable frequency comb, article by Leah Burrows | March 18, 2019 Optical Frequency Combs explanation by NIST Nonlinear optics Laser science Spectroscopy Spectrum (physical sciences)
Frequency comb
[ "Physics", "Chemistry" ]
3,188
[ "Physical phenomena", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Waves", "Spectroscopy" ]
4,556,017
https://en.wikipedia.org/wiki/Nitroxyl
Nitroxyl (common name) or azanone (IUPAC name) is the chemical compound HNO. It is well known in the gas phase. Nitroxyl can be formed as a short-lived intermediate in the solution phase. The conjugate base, NO−, nitroxide anion, is the reduced form of nitric oxide (NO) and is isoelectronic with dioxygen. The bond dissociation energy of H−NO is , which is unusually weak for a bond to the hydrogen atom. Generation Nitroxyl is produced from the reagents Angeli's salt (Na2N2O3) and Piloty's acid (PhSO2NHOH). Other notable studies on the production of HNO exploit cycloadducts of acyl nitroso species, which are known to decompose via hydrolysis to HNO and acyl acid. Upon photolysis, these compounds release the acyl nitroso species, which then further decompose. HNO is generated via organic oxidation of cyclohexanone oxime with lead tetraacetate to form 1-nitrosocyclohexyl acetate: This compound can be hydrolyzed under basic conditions in a phosphate buffer to HNO, acetic acid, and cyclohexanone. Dichloramine reacts with the hydroxide ion, which is always present in water, to yield nitroxyl and the chloride ion. Alkali metals react with nitric oxide to give salts of the form . However, generation of the (unstable) free acid from these salts is not entirely straightforward (see below). Reactions Nitroxyl is a weak acid, with a pKa of about 11, the conjugate base being the triplet state of NO−, sometimes called nitroxide. Nitroxyl itself, however, has a singlet ground state. Thus, deprotonation of nitroxyl uniquely involves the forbidden spin crossing from the singlet state starting material to triplet state product: 1HNO + B− → 3NO− + BH Due to the spin-forbidden nature of deprotonation, proton abstraction is many orders of magnitude slower (k = for deprotonation by OH−) than what one would expect for a heteroatom proton-transfer process (processes that are so fast that they are sometimes diffusion-controlled). The Ka for deprotonation starting from or ending with the electronic excited states has also been determined. The process of deprotonating singlet-state HNO to obtain singlet-state NO− has a pKa of about 23. On the other hand, when deprotonating triplet HNO to obtain triplet NO−, the pKa is about −1.8. Nitroxyl rapidly decomposes by a bimolecular pathway to nitrous oxide (k at 298 K = ): 2 HNO → N2O + H2O The reaction proceeds via dimerization to hyponitrous acid, H2N2O2, which subsequently undergoes dehydration. Therefore, HNO is generally prepared in situ as described above. Nitroxyl is very reactive towards nucleophiles, including thiols. The initial adduct rearranges to a sulfinamide: HNO + RSH → RS(O)NH2 Detection In biological samples, nitroxyl can be detected using fluorescent sensors, many of which are based on the reduction of copper(II) to copper(I) with concomitant increase in fluorescence. Medicinal chemistry Nitroxyl donors, known as nitroso compounds, show potential in the treatment of heart failure, and ongoing research is focused on finding new molecules for this task. See also Nitroxyl radicals (also called aminoxyl radicals) — chemical species containing the R2N−O• functional group References Aldehyde dehydrogenase inhibitors Hydrogen compounds Triatomic molecules Nitrogen oxoacids Oxygen compounds
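As a rough illustration of what the ground-state pKa quoted above implies for speciation, the short Python sketch below applies the standard Henderson-Hasselbalch relation. It is a minimal sketch under assumed conditions: the pH values are example choices, and it treats only the equilibrium position, ignoring the slow spin-forbidden kinetics discussed above.

```python
# Minimal sketch (not from the article): fraction of nitroxyl present as the
# NO- conjugate base at a chosen pH, using the Henderson-Hasselbalch relation
# with the pKa ~ 11 quoted above. The pH values are assumed examples, and only
# the equilibrium position is considered (the spin-forbidden deprotonation is
# kinetically slow, as noted in the article).

def fraction_deprotonated(ph, pka=11.0):
    """[NO-] / ([HNO] + [NO-]) for a simple monoprotic acid-base equilibrium."""
    ratio = 10 ** (ph - pka)          # [NO-]/[HNO]
    return ratio / (1 + ratio)

for ph in (7.4, 11.0, 13.0):
    print(f"pH {ph:4.1f}: {fraction_deprotonated(ph)*100:6.2f} % NO-")
# pH  7.4 ->  ~0.03 % NO-, so HNO dominates under near-neutral conditions
# pH 11.0 ->  50.00 % NO- (pH equal to the pKa)
# pH 13.0 ->  99.01 % NO-
```

The equilibrium picture alone suggests HNO, not NO−, is the dominant form near neutral pH, which is consistent with HNO being the species of interest in the solution chemistry described above.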
Nitroxyl
[ "Physics", "Chemistry" ]
841
[ "Molecules", "Triatomic molecules", "Matter" ]
12,516,682
https://en.wikipedia.org/wiki/Iron%20in%20biology
Iron is an important biological element. It is used both in the ubiquitous iron-sulfur proteins and, in vertebrates, in hemoglobin, which is essential for blood and oxygen transport. Overview Iron is required for life. The iron–sulfur clusters are pervasive and include nitrogenase, the enzymes responsible for biological nitrogen fixation. Iron-containing proteins participate in transport, storage and use of oxygen. Iron proteins are involved in electron transfer. The ubiquity of iron in life has led to the iron–sulfur world hypothesis that iron was a central component of the environment of early life. Examples of iron-containing proteins in higher organisms include hemoglobin, cytochrome (see high-valent iron), and catalase. The average adult human contains about 0.005% body weight of iron, or about four grams, of which three quarters is in hemoglobin – a level that remains constant despite only about one milligram of iron being absorbed each day, because the human body recycles its hemoglobin for the iron content. Microbial growth may be assisted by oxidation of iron(II) or by reduction of iron(III). Biochemistry Iron acquisition poses a problem for aerobic organisms because ferric iron is poorly soluble near neutral pH. Thus, these organisms have developed means to absorb iron as complexes, sometimes taking up ferrous iron before oxidising it back to ferric iron. In particular, bacteria have evolved very high-affinity sequestering agents called siderophores. After uptake in human cells, iron storage is precisely regulated. A major component of this regulation is the protein transferrin, which binds iron ions absorbed from the duodenum and carries them in the blood to cells. Transferrin contains Fe3+ in the middle of a distorted octahedron, bonded to one nitrogen, three oxygens and a chelating carbonate anion that traps the Fe3+ ion: it has such a high stability constant that it is very effective at taking up Fe3+ ions even from the most stable complexes. At the bone marrow, transferrin is reduced from Fe3+ to Fe2+ and stored as ferritin to be incorporated into hemoglobin. The most commonly known and studied bioinorganic iron compounds (biological iron molecules) are the heme proteins: examples are hemoglobin, myoglobin, and cytochrome P450. These compounds participate in transporting gases, building enzymes, and transferring electrons. Metalloproteins are a group of proteins with metal ion cofactors. Some examples of iron metalloproteins are ferritin and rubredoxin. Many enzymes vital to life contain iron, such as catalase, lipoxygenases, and IRE-BP. Hemoglobin is an oxygen carrier that occurs in red blood cells and contributes their color, transporting oxygen in the arteries from the lungs to the muscles where it is transferred to myoglobin, which stores it until it is needed for the metabolic oxidation of glucose, generating energy. Here the hemoglobin binds to carbon dioxide, produced when glucose is oxidized, which is transported through the veins by hemoglobin (predominantly as bicarbonate anions) back to the lungs where it is exhaled. In hemoglobin, the iron is in one of four heme groups and has six possible coordination sites; four are occupied by nitrogen atoms in a porphyrin ring, the fifth by an imidazole nitrogen in a histidine residue of one of the protein chains attached to the heme group, and the sixth is reserved for the oxygen molecule it can reversibly bind to. 
When hemoglobin is not attached to oxygen (and is then called deoxyhemoglobin), the Fe2+ ion at the center of the heme group (in the hydrophobic protein interior) is in a high-spin configuration. It is thus too large to fit inside the porphyrin ring, which bends instead into a dome with the Fe2+ ion about 55 picometers above it. In this configuration, the sixth coordination site reserved for the oxygen is blocked by another histidine residue. When deoxyhemoglobin picks up an oxygen molecule, this histidine residue moves away and returns once the oxygen is securely attached to form a hydrogen bond with it. This results in the Fe2+ ion switching to a low-spin configuration, resulting in a 20% decrease in ionic radius so that now it can fit into the porphyrin ring, which becomes planar. (Additionally, this hydrogen bonding results in the tilting of the oxygen molecule, resulting in a Fe–O–O bond angle of around 120° that avoids the formation of Fe–O–Fe or Fe–O2–Fe bridges that would lead to electron transfer, the oxidation of Fe2+ to Fe3+, and the destruction of hemoglobin.) This results in a movement of all the protein chains that leads to the other subunits of hemoglobin changing shape to a form with larger oxygen affinity. Thus, when deoxyhemoglobin takes up oxygen, its affinity for more oxygen increases, and vice versa. Myoglobin, on the other hand, contains only one heme group and hence this cooperative effect cannot occur. Thus, while hemoglobin is almost saturated with oxygen in the high partial pressures of oxygen found in the lungs, its affinity for oxygen is much lower than that of myoglobin, which oxygenates even at low partial pressures of oxygen found in muscle tissue. As described by the Bohr effect (named after Christian Bohr, the father of Niels Bohr), the oxygen affinity of hemoglobin diminishes in the presence of carbon dioxide. Carbon monoxide and phosphorus trifluoride are poisonous to humans because they bind to hemoglobin similarly to oxygen, but with much more strength, so that oxygen can no longer be transported throughout the body. Hemoglobin bound to carbon monoxide is known as carboxyhemoglobin. This effect also plays a minor role in the toxicity of cyanide, but there the major effect is by far its interference with the proper functioning of the electron transport protein cytochrome a. The cytochrome proteins also involve heme groups and are involved in the metabolic oxidation of glucose by oxygen. The sixth coordination site is then occupied by either another imidazole nitrogen or a methionine sulfur, so that these proteins are largely inert to oxygen – with the exception of cytochrome a, which bonds directly to oxygen and thus is very easily poisoned by cyanide. Here, the electron transfer takes place as the iron remains in low spin but changes between the +2 and +3 oxidation states. Since the reduction potential of each step is slightly greater than the previous one, the energy is released step-by-step and can thus be stored in adenosine triphosphate. Cytochrome a is slightly distinct, as it occurs at the mitochondrial membrane, binds directly to oxygen, and transports protons as well as electrons, as follows: 4 Cyt c2+ + O2 + 8 H+ (inside) → 4 Cyt c3+ + 2 H2O + 4 H+ (outside) Although the heme proteins are the most important class of iron-containing proteins, the iron–sulfur proteins are also very important, being involved in electron transfer, which is possible since iron can exist stably in either the +2 or +3 oxidation states. 
These have one, two, four, or eight iron atoms that are each approximately tetrahedrally coordinated to four sulfur atoms; because of this tetrahedral coordination, they always contain high-spin iron. The simplest such iron–sulfur protein, rubredoxin, has a single iron atom coordinated to four sulfur atoms from cysteine residues in the surrounding peptide chains. Another important class of iron–sulfur proteins is the ferredoxins, which have multiple iron atoms. Transferrin does not belong to either of these classes. The ability of sea mussels to maintain their grip on rocks in the ocean is facilitated by their use of organometallic iron-based bonds in their protein-rich cuticles. Based on synthetic replicas, the presence of iron in these structures increased elastic modulus 770 times, tensile strength 58 times, and toughness 92 times. The amount of stress required to permanently damage them increased 76 times. Vertebrate metabolism In vertebrates, iron is an essential component of hemoglobin, the oxygen transport protein. Human body iron stores Most well-nourished people in industrialized countries have 4 to 5 grams of iron in their bodies (~38 mg iron/kg body weight for women and ~50 mg iron/kg body weight for men). Of this, about is contained in the hemoglobin needed to carry oxygen through the blood (around 0.5 mg of iron per mL of blood), and most of the rest (approximately 2 grams in adult men, and somewhat less in women of childbearing age) is contained in ferritin complexes that are present in all cells, but most common in bone marrow, liver, and spleen. The liver stores of ferritin are the primary physiologic source of reserve iron in the body. The reserves of iron in industrialized countries tend to be lower in children and women of child-bearing age than in men and in the elderly. Women who must use their stores to compensate for iron lost through menstruation, pregnancy or lactation have lower non-hemoglobin body stores, which may consist of , or even less. Of the body's total iron content, about is devoted to cellular proteins that use iron for important cellular processes like storing oxygen (myoglobin) or performing energy-producing redox reactions (cytochromes). A relatively small amount (3–4 mg) circulates through the plasma, bound to transferrin. Because of its toxicity, free soluble iron is kept in low concentration in the body. Iron deficiency first affects the storage of iron in the body, and depletion of these stores is thought to be relatively asymptomatic, although some vague and non-specific symptoms have been associated with it. Since iron is primarily required for hemoglobin, iron deficiency anemia is the primary clinical manifestation of iron deficiency. Iron-deficient people will suffer or die from organ damage well before their cells run out of the iron needed for intracellular processes like electron transport. Macrophages of the reticuloendothelial system store iron as part of the process of breaking down and processing hemoglobin from engulfed red blood cells. Iron is also stored as a pigment called hemosiderin, which is an ill-defined deposit of protein and iron, created by macrophages where excess iron is present, either locally or systemically, e.g., among people with iron overload due to frequent blood cell destruction and the necessary transfusions their condition calls for. If systemic iron overload is corrected, over time the hemosiderin is slowly resorbed by the macrophages. Mechanisms of iron regulation Human iron homeostasis is regulated at two different levels. 
Systemic iron levels are balanced by the controlled absorption of dietary iron by enterocytes, the cells that line the interior of the intestines, and the uncontrolled loss of iron from epithelial sloughing, sweat, injuries and blood loss. In addition, systemic iron is continuously recycled. Cellular iron levels are controlled differently by different cell types due to the expression of particular iron regulatory and transport proteins. Systemic iron regulation Dietary iron uptake The absorption of dietary iron is a variable and dynamic process. The amount of iron absorbed compared to the amount ingested is typically low, but may range from 5% to as much as 35% depending on circumstances and type of iron. The efficiency with which iron is absorbed varies depending on the source. Generally, the best-absorbed forms of iron come from animal products. Absorption of dietary iron in iron salt form (as in most supplements) varies somewhat according to the body's need for iron, and is usually between 10% and 20% of iron intake. Absorption of iron from animal products, and some plant products, is in the form of heme iron, and is more efficient, allowing absorption of from 15% to 35% of intake. Heme iron in animals is from blood and heme-containing proteins in meat and mitochondria, whereas in plants, heme iron is present in mitochondria in all cells that use oxygen for respiration. Like most mineral nutrients, the majority of the iron absorbed from digested food or supplements is absorbed in the duodenum by enterocytes of the duodenal lining. These cells have special molecules that allow them to move iron into the body. To be absorbed, dietary iron can be absorbed as part of a protein such as heme protein or iron must be in its ferrous Fe2+ form. A ferric reductase enzyme on the enterocytes' brush border, duodenal cytochrome B (Dcytb), reduces ferric Fe3+ to Fe2+. A protein called divalent metal transporter 1 (DMT1), which can transport several divalent metals across the plasma membrane, then transports iron across the enterocyte's cell membrane into the cell. If the iron is bound to heme it is instead transported across the apical membrane by heme carrier protein 1 (HCP1). These intestinal lining cells can then either store the iron as ferritin, which is accomplished by Fe2+ binding to apoferritin (in which case the iron will leave the body when the cell dies and is sloughed off into feces), or the cell can release it into the body via the only known iron exporter in mammals, ferroportin. Hephaestin, a ferroxidase that can oxidize Fe2+ to Fe3+ and is found mainly in the small intestine, helps ferroportin transfer iron across the basolateral end of the intestine cells. In contrast, ferroportin is post-translationally repressed by hepcidin, a 25-amino acid peptide hormone. The body regulates iron levels by regulating each of these steps. For instance, enterocytes synthesize more Dcytb, DMT1 and ferroportin in response to iron deficiency anemia. Iron absorption from diet is enhanced in the presence of vitamin C and diminished by excess calcium, zinc, or manganese. The human body's rate of iron absorption appears to respond to a variety of interdependent factors, including total iron stores, the extent to which the bone marrow is producing new red blood cells, the concentration of hemoglobin in the blood, and the oxygen content of the blood. The body also absorbs less iron during times of inflammation, in order to deprive bacteria of iron. 
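As a back-of-the-envelope illustration of the absorption figures quoted above, the Python sketch below estimates how much iron a hypothetical daily intake could yield. The 2 mg heme / 12 mg non-heme split is an assumed example rather than a figure from the article, and real uptake is regulated downward toward need, as described in the surrounding paragraphs.

```python
# Illustrative estimate (assumed example intake, not from the article): the range
# of dietary iron that could be absorbed per day, using the absorption ranges
# quoted above (about 15-35 % for heme iron, about 10-20 % for iron salts /
# non-heme iron). Actual absorption is regulated downward toward need.

def absorbed_range_mg(heme_mg, nonheme_mg):
    low  = heme_mg * 0.15 + nonheme_mg * 0.10
    high = heme_mg * 0.35 + nonheme_mg * 0.20
    return low, high

# Hypothetical mixed diet: 2 mg heme iron + 12 mg non-heme iron per day
lo, hi = absorbed_range_mg(2.0, 12.0)
print(f"potentially absorbed: {lo:.1f} - {hi:.1f} mg/day")   # 1.5 - 3.1 mg/day
# The body normally takes up only on the order of 1 mg/day (matching losses),
# so enterocyte uptake and ferroportin export are throttled accordingly.
```

Even this crude range sits well above the roughly one milligram per day that the body actually retains, which is why the regulatory machinery described here acts mainly to limit, rather than maximize, uptake.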
Recent discoveries demonstrate that hepcidin regulation of ferroportin is responsible for the syndrome of anemia of chronic disease. Iron recycling and loss Most of the iron in the body is hoarded and recycled by the reticuloendothelial system, which breaks down aged red blood cells. In contrast to iron uptake and recycling, there is no physiologic regulatory mechanism for excreting iron. People lose a small but steady amount by gastrointestinal blood loss, sweating and by shedding cells of the skin and the mucosal lining of the gastrointestinal tract. The total amount of loss for healthy people in the developed world amounts to an estimated average of a day for men, and 1.5–2 mg a day for women with regular menstrual periods. People with gastrointestinal parasitic infections, more commonly found in developing countries, often lose more. Those who cannot regulate absorption well enough get disorders of iron overload. In these diseases, the toxicity of iron starts overwhelming the body's ability to bind and store it. Cellular iron regulation Iron import Most cell types take up iron primarily through receptor-mediated endocytosis via transferrin receptor 1 (TFR1), transferrin receptor 2 (TFR2) and GAPDH. TFR1 has a 30-fold higher affinity for transferrin-bound iron than TFR2 and thus is the main player in this process. The higher order multifunctional glycolytic enzyme glyceraldehyde-3-phosphate dehydrogenase (GAPDH) also acts as a transferrin receptor. Transferrin-bound ferric iron is recognized by these transferrin receptors, triggering a conformational change that causes endocytosis. Iron then enters the cytoplasm from the endosome via importer DMT1 after being reduced to its ferrous state by a STEAP family reductase. Alternatively, iron can enter the cell directly via plasma membrane divalent cation importers such as DMT1 and ZIP14 (Zrt-Irt-like protein 14). Again, iron enters the cytoplasm in the ferrous state after being reduced in the extracellular space by a reductase such as STEAP2, STEAP3 (in red blood cells), Dcytb (in enterocytes) and SDR2. The labile iron pool In the cytoplasm, ferrous iron is found in a soluble, chelatable state which constitutes the labile iron pool (~0.001 mM). In this pool, iron is thought to be bound to low-mass compounds such as peptides, carboxylates and phosphates, although some might be in a free, hydrated form (aqua ions). Alternatively, iron ions might be bound to specialized proteins known as metallochaperones. Specifically, poly-r(C)-binding proteins PCBP1 and PCBP2 appear to mediate transfer of free iron to ferritin (for storage) and non-heme iron enzymes (for use in catalysis). The labile iron pool is potentially toxic due to iron's ability to generate reactive oxygen species. Iron from this pool can be taken up by mitochondria via mitoferrin to synthesize Fe-S clusters and heme groups. The storage iron pool Iron can be stored in ferritin as ferric iron due to the ferroxidase activity of the ferritin heavy chain. Dysfunctional ferritin may accumulate as hemosiderin, which can be problematic in cases of iron overload. The ferritin storage iron pool is much larger than the labile iron pool, ranging in concentration from 0.7 mM to 3.6 mM. Iron export Iron export occurs in a variety of cell types, including neurons, red blood cells, macrophages and enterocytes. The latter two are especially important since systemic iron levels depend upon them. There is only one known iron exporter, ferroportin. 
It transports ferrous iron out of the cell, generally aided by ceruloplasmin and/or hephaestin (mostly in enterocytes), which oxidize iron to its ferric state so it can bind ferritin in the extracellular medium. Hepcidin causes the internalization of ferroportin, decreasing iron export. Besides, hepcidin seems to downregulate both TFR1 and DMT1 through an unknown mechanism. Another player assisting ferroportin in effecting cellular iron export is GAPDH. A specific post translationally modified isoform of GAPDH is recruited to the surface of iron loaded cells where it recruits apo-transferrin in close proximity to ferroportin so as to rapidly chelate the iron extruded. The expression of hepcidin, which only occurs in certain cell types such as hepatocytes, is tightly controlled at the transcriptional level and it represents the link between cellular and systemic iron homeostasis due to hepcidin's role as "gatekeeper" of iron release from enterocytes into the rest of the body. Erythroblasts produce erythroferrone, a hormone which inhibits hepcidin and so increases the availability of iron needed for hemoglobin synthesis. Translational control of cellular iron Although some control exists at the transcriptional level, the regulation of cellular iron levels is ultimately controlled at the translational level by iron-responsive element-binding proteins IRP1 and especially IRP2. When iron levels are low, these proteins are able to bind to iron-responsive elements (IREs). IREs are stem loop structures in the untranslated regions (UTRs) of mRNA. Both ferritin and ferroportin contain an IRE in their 5' UTRs, so that under iron deficiency their translation is repressed by IRP2, preventing the unnecessary synthesis of storage protein and the detrimental export of iron. In contrast, TFR1 and some DMT1 variants contain 3' UTR IREs, which bind IRP2 under iron deficiency, stabilizing the mRNA, which guarantees the synthesis of iron importers. Marine systems Iron plays an essential role in marine systems and can act as a limiting nutrient for planktonic activity. Because of this, too much of a decrease in iron may lead to a decrease in growth rates in phytoplanktonic organisms such as diatoms. Iron can also be oxidized by marine microbes under conditions that are high in iron and low in oxygen. Iron can enter marine systems through adjoining rivers and directly from the atmosphere. Once iron enters the ocean, it can be distributed throughout the water column through ocean mixing and through recycling on the cellular level. In the arctic, sea ice plays a major role in the store and distribution of iron in the ocean, depleting oceanic iron as it freezes in the winter and releasing it back into the water when thawing occurs in the summer. The iron cycle can fluctuate the forms of iron from aqueous to particle forms altering the availability of iron to primary producers. Increased light and warmth increases the amount of iron that is in forms that are usable by primary producers. See also known to incorporate iron into its exoskeleton References Biological systems Biology and pharmacology of chemical elements Dietary minerals Biology Nutrition Physiology
Iron in biology
[ "Chemistry", "Biology" ]
4,630
[ "Pharmacology", "Properties of chemical elements", "Physiology", "Biology and pharmacology of chemical elements", "nan", "Biochemistry" ]
7,876,320
https://en.wikipedia.org/wiki/Flow%20coefficient
The flow coefficient of a device is a relative measure of its efficiency at allowing fluid flow. It describes the relationship between the pressure drop across an orifice, valve or other assembly and the corresponding flow rate. Mathematically the flow coefficient (or flow-capacity rating of a valve) can be expressed as Cv = Q √(SG / ΔP), where Q is the rate of flow (expressed in US gallons per minute), SG is the specific gravity of the fluid (for water, SG = 1), and ΔP is the pressure drop across the valve (expressed in psi). In more practical terms, the flow coefficient is the volume (in US gallons) of water at 60 °F that will flow per minute through a valve with a pressure drop of 1 psi across the valve. The use of the flow coefficient offers a standard method of comparing valve capacities and sizing valves for specific applications that is widely accepted by industry. The general definition of the flow coefficient can be expanded into equations modeling the flow of liquids, gases and steam using the discharge coefficient. For gas flow in a pneumatic system, the Cv for the same assembly can be used with a more complex equation. Absolute pressures (psia) must be used for gas rather than simply differential pressure. For air flow at room temperature, when the outlet pressure is less than 1/2 the absolute inlet pressure, the flow becomes quite simple (although it reaches sonic velocity internally). With Cv = 1.0 and 200 psia inlet pressure, the flow is 100 standard cubic feet per minute (scfm). The flow is proportional to the absolute inlet pressure, so the flow in scfm would equal the flow coefficient if the inlet pressure were reduced to 2 psia and the outlet were connected to a vacuum with less than 1 psi absolute pressure (1.0 scfm when Cv = 1.0, 2 psia input). Flow factor The metric equivalent flow factor (Kv) is calculated using metric units: Kv = Q √(SG / ΔP), where Kv is the flow factor (expressed in m3/h), Q is the flowrate (expressed in m3/h), SG is the specific gravity of the fluid (for water, SG = 1), and ΔP is the differential pressure across the device (expressed in bar). Kv can be calculated from Cv using the equation Kv = 0.865 Cv. References See also Discharge coefficient Fluid dynamics
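A small worked example may help tie the liquid-flow definition and the metric conversion together. The Python sketch below is illustrative: the 20 US gal/min flow and 4 psi drop are assumed example numbers, and the 0.865 factor is the standard conversion between the two unit bases (US gal/min at 1 psi versus m3/h at 1 bar).

```python
# Minimal sketch (assumed example numbers): sizing arithmetic for the liquid-flow
# definitions given above, plus the Cv -> Kv conversion.
import math

def cv_liquid(q_usgpm, delta_p_psi, sg=1.0):
    """Flow coefficient Cv = Q * sqrt(SG / dP), Q in US gal/min, dP in psi."""
    return q_usgpm * math.sqrt(sg / delta_p_psi)

def kv_from_cv(cv):
    """Metric flow factor Kv (m3/h at 1 bar drop) from Cv."""
    return 0.865 * cv

# Example: 20 US gal/min of water with a 4 psi drop across the valve
cv = cv_liquid(20.0, 4.0)                            # -> 10.0
print(f"Cv = {cv:.1f}, Kv = {kv_from_cv(cv):.2f}")   # Cv = 10.0, Kv = 8.65
```

Note that by the definition above, a valve passing 10 US gal/min of water at a 1 psi drop would also have Cv = 10, which is why the 4 psi example and the definition agree.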
Flow coefficient
[ "Chemistry", "Engineering" ]
441
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
7,879,361
https://en.wikipedia.org/wiki/Harry%20Atwater
Harry Albert Atwater, Jr. is an American physicist and materials scientist and is the Otis Booth Leadership Chair of the division of engineering and applied science at the California Institute of Technology. Currently he is the Howard Hughes Professor of Applied Physics and Materials Science and the director for the Liquid Sunlight Alliance (LiSA), a Department of Energy Hub program for solar fuels.  Atwater's scientific effort focuses on nanophotonic light-matter interactions and solar energy conversion.  His current research in energy centers on high efficiency photovoltaics, carbon capture and removal, and photoelectrochemical processes for generation of solar fuels. His research has resulted in world records for solar photovoltaic conversion and photoelectrochemical water splitting. His work also spans fundamental nanophotonic phenomena, in plasmonics and 2D materials, and also applications including active metasurfaces and optical propulsion.   From 2014 to 2020, Atwater served as director of the Joint Center for Artificial Photosynthesis (JCAP), the DOE Energy Innovation Hub for solar fuels.   Atwater was an early pioneer in nanophotonics and plasmonics; he gave the name to the field of plasmonics in 2001.  Atwater is a Member of US National Academy of Engineering, and a Web of Science Highly Cited Researcher.  He is also founder of 5 early-stage companies, including Captura, which is developing scalable approaches to carbon dioxide removal from oceanwater, and Alta Devices, which set world records for photovoltaic cell and module efficiency. He is also a Fellow of the SPIE as well as APS, MRS, Optica, and the National Academy of Inventors. He is also the founding editor in chief of the journal ACS Photonics, and chair of the LightSail Committee for the Breakthrough Starshot program. He is the recipient of numerous awards, including the 2021 von Hippel Award of the Materials Research Society. Biography Atwater received his S.B. (1981), S.M. (1983), and Ph.D. (1987) in electrical engineering from the Massachusetts Institute of Technology. He serves as director of the DOE Energy Frontier Research Center on Light-Material Interactions in Solar Energy Conversion and was named director of the Resnick Institute for Science, Energy and Sustainability, Caltech's largest endowed research program focused on energy. Atwater is founder and chief technical advisor for Alta Devices, a venture-backed company in Santa Clara, CA developing a transformational high efficiency/low cost photovoltaics technology, and Aonex Corporation, a compound semiconductor materials company. He has also served an editorial board member for Surface Review and Letters. Professor Atwater has actively served the materials community in various capacities, including Materials Research Society Meeting Chair (1997), Materials Research Society President (2000), AVS Electronic Materials and Processing Division Chair (1999), and board of trustees of the Gordon Research Conferences. In 2008, he served as Chair for the Gordon Research Conference on Plasmonics. Since 2014, he has served as the editor-in-chief of the journal ACS Photonics, published by the American Chemical Society. In 2015, Atwater was elected as a member into the National Academy of Engineering for his contributions to plasmonics. Research Atwater's research interests center around two interwoven research themes: photovoltaics and solar energy; and plasmonics and optical metamaterials. 
Atwater and his group have been active in photovoltaics research for more than 20 years. Together, Atwater and his group have created new photovoltaic devices, including the silicon wire array solar cell, and layer-transferred fabrication approaches to III-V semiconductor III-V and multijunction cells, as well as making advances in plasmonic light absorber structures for III-V compound and silicon thin films. His research group's developments in the solar and plasmonics fields have been featured in Scientific American and in research papers such as Science, Nature Materials, Nature Photonics and Advanced Materials. Recently, his research has expanded to include the study of artificial photosynthesis to design fully-integrated photoelectrochemical (PEC) device for the production of renewable fuels. Additionally, Atwater's group is currently investigating the distinctive material characteristics of graphene as they relate to plasmonics that can be adjusted. Through the process of designing Fabry–Perot nanoresonators (small optical structures that consist of two parallel mirrors or reflectors separated by a nanoscale gap) onto a graphene sheet that has been doped and patterned, the Atwater group aims to observe a plasmonic resonance that changes in accordance with the size of the resonator. Awards Atwater is a member of the National Academy of Engineering and an MRS Fellow. He has been honored by awards including Von Hippel Award from the Materials Research Society 2021; 2021 ENI award for Renewable and Non-Conventional Energy; MRS Kavli Lecturer in Nanoscience in 2010; Popular Mechanics Breakthrough Award, 2010; Joop Los Fellowship from the Dutch Society for Fundamental Research on Matter in 2005; A.T. & T. Foundation Award, 1990; NSF Presidential Young Investigator Award, 1989; IBM Faculty Development Award, 1989–1990; Member, Bohmische Physical Society, 1990; IBM Postdoctoral Fellowship, 1987. Selected publications Enright, Michael J.; Jasrasaria, Dipti; Hanchard, Mathilde M.; Needell, David R.; Phelan, Megan E.; Weinberg, Daniel; McDowell, Brinn E.; Hsiao, Haw-Wen; Akbari, Hamidreza; Kottwitz, Matthew; Potter, Maggie M.; Wong, Joeson; Zuo, Jian-Min; Atwater, Harry A.; Rabani, Eran (2022-05-05). "Role of Atomic Structure on Exciton Dynamics and Photoluminescence in NIR Emissive InAs/InP/ZnSe Quantum Dots". The Journal of Physical Chemistry C. 126 (17): 7576–7587. doi:10.1021/acs.jpcc.2c01499. ISSN 1932-7447 Sullivan, Ian; Goryachev, Andrey; Digdaya, Ibadillah A.; Li, Xueqian; Atwater, Harry A.; Vermaas, David A.; Xiang, Chengxiang (November 2021). "Coupling electrochemical CO2 conversion with CO2 capture". Nature Catalysis. 4 (11): 952–958. doi:10.1038/s41929-021-00699-7. ISSN 2520-1158 Digdaya, Ibadillah A.; Sullivan, Ian; Lin, Meng; Han, Lihao; Cheng, Wen-Hui; Atwater, Harry A.; Xiang, Chengxiang (2020-09-04). "A direct coupled electrochemical system for capture and conversion of CO2 from oceanwater". Nature Communications. 11 (1): 4412. doi:10.1038/s41467-020-18232-y. ISSN 2041-1723 Ilic, Ognjen; Atwater, Harry A. (2019-04). "Self-stabilizing photonic levitation and propulsion of nanostructured macroscopic objects". Nature Photonics. 13 (4): 289–295. doi:10.1038/s41566-019-0373-y. ISSN 1749-4893 R. Shaner, Matthew; A. Atwater, Harry; S. Lewis, Nathan; W. McFarland, Eric (2016). "A comparative technoeconomic analysis of renewable hydrogen production using solar energy". Energy & Environmental Science. 9 (7): 2354–2371. 
doi:10.1039/C5EE02573G Callahan, Dennis M.; Munday, Jeremy N.; Atwater, Harry A. (2012-01-11). "Solar Cell Light Trapping beyond the Ray Optic Limit". Nano Letters. 12 (1): 214–218. doi:10.1021/nl203351k. ISSN 1530-6984 Yokogawa, Sozo; Burgos, Stanley P.; Atwater, Harry A. (2012-08-08). "Plasmonic Color Filters for CMOS Image Sensor Applications". Nano Letters. 12 (8): 4349–4354. doi:10.1021/nl302110z. ISSN 1530-6984. Aydin, Koray; Ferry, Vivian E.; Briggs, Ryan M.; Atwater, Harry A. (2011). "Broadband polarization-independent resonant light absorption using ultrathin plasmonic super absorbers." Nature Communications. 2 (1): 517. Ferry, Vivian E.; Munday, Jeremy N.; Atwater, Harry A. (2010-11-16). "Design Considerations for Plasmonic Photovoltaics". Advanced Materials. 22 (43): 4794–4808. doi:10.1002/adma.201000488 Ferry, Vivian E.; Verschuuren, Marc A.; Li, Hongbo B. T.; Verhagen, Ewold; Walters, Robert J.; Schropp, Ruud E. I.; Atwater, Harry A.; Polman, Albert (2010-06-21). "Light trapping in ultrathin plasmonic solar cells". Optics Express. 18 (102): A237–A245. doi:10.1364/OE.18.00A237. ISSN 1094-4087. Dionne, Jennifer A.; Diest, Kenneth; Sweatlock, Luke A.; Atwater, Harry A. (2009-02-11). "PlasMOStor: A Metal−Oxide−Si Field Effect Plasmonic Modulator". Nano Letters. 9 (2): 897–902. doi:10.1021/nl803868k. ISSN 1530-6984. Ferry, Vivian E.; Sweatlock, Luke A.; Pacifici, Domenico; Atwater, Harry A. (2008-12-10). "Plasmonic Nanostructure Design for Efficient Light Coupling into Solar Cells". Nano Letters. 8 (12): 4391–4397. doi:10.1021/nl8022548. ISSN 1530-6984. Atwater, Harry A. (2007). "The Promise of PLASMONICS". Scientific American. 296 (4): 56–63. ISSN 0036-8733. References External links/sources Atwater's profile at Caltech Atwater Research Group Website Light-Material Interactions in Energy Conversion Energy Frontier Research Center Resnick Institute ENI Award homepage Harry A. Atwater: SPIE Photonics West plenary presentation: Tunable and Quantum Metaphotonics MIT School of Engineering alumni California Institute of Technology faculty Living people Year of birth missing (living people) 20th-century American physicists American electrical engineers American materials scientists Members of the United States National Academy of Engineering Optical physicists Optical engineers Metamaterials scientists American nanotechnologists 21st-century American physicists 20th-century American engineers 21st-century American engineers Fellows of the American Physical Society
Harry Atwater
[ "Materials_science" ]
2,429
[ "Metamaterials scientists", "Metamaterials" ]
15,311,568
https://en.wikipedia.org/wiki/Potting%20%28electronics%29
In electronics, potting is the process of filling a complete electronic assembly with a solid or gelatinous compound. This is done to exclude water, moisture, or corrosive agents, to increase resistance to shocks and vibrations, or to prevent gaseous phenomena such as corona discharge in high-voltage assemblies. Potting has also been used to protect against reverse engineering or to protect parts of cryptography processing cards. When such materials are used only on single components instead of entire assemblies, the process is referred to as encapsulation. Thermosetting plastics or silicone rubber gels are often used, though epoxy resins are also very common. When epoxy resins are used, low chloride grades are usually specified. Potting compounds are widely recommended for protecting sensitive electronic components from impact, vibration, and loose wires. In the potting process, an electronic assembly is placed inside a mold (the "pot") which is then filled with an insulating liquid compound that hardens, permanently protecting the assembly. The mold may be part of the finished article and may provide shielding or heat-dissipating functions in addition to acting as a mold. When the mold is removed, the potted assembly is described as cast. As an alternative, many circuit board assembly houses coat assemblies with a layer of transparent conformal coating rather than potting. Conformal coating gives most of the benefits of potting, and is lighter and easier to inspect, test, and repair. Conformal coatings can be applied as liquid or condensed from a vapor phase. When potting a circuit board that uses surface-mount technology, low glass transition temperature (Tg) potting compounds such as polyurethane or silicone may be used. High-Tg potting compounds may break solder bonds through solder fatigue: because the compound hardens at a higher temperature, it shrinks as a rigid solid over a larger part of the temperature range and so develops greater force on the joints. See also Integrated circuit packaging Resin dispensing References Electronic design Electronics manufacturing
Potting (electronics)
[ "Engineering" ]
418
[ "Electronic design", "Electronic engineering", "Electronics manufacturing", "Design" ]
15,314,901
https://en.wikipedia.org/wiki/Proper%20velocity
In relativity, proper velocity (also known as celerity) w of an object relative to an observer is the ratio between the observer-measured displacement vector and the proper time elapsed on the clocks of the traveling object: w = dx/dτ. It is an alternative to ordinary velocity, the distance per unit time where both distance and time are measured by the observer. The two types of velocity, ordinary and proper, are very nearly equal at low speeds. However, at high speeds proper velocity retains many of the properties that velocity loses in relativity compared with Newtonian theory. For example, proper velocity equals momentum per unit mass at any speed, and therefore has no upper limit. At high speeds, as shown in the figure at right, it is proportional to an object's energy as well. Proper velocity w can be related to the ordinary velocity v via the Lorentz factor γ: w = γv, where t is coordinate time or "map time" and γ = dt/dτ. For unidirectional motion, each of these is also simply related to a traveling object's hyperbolic velocity angle or rapidity η by w = c sinh η, v = c tanh η and γ = cosh η. Introduction In flat spacetime, proper velocity is the ratio between distance traveled relative to a reference map frame (used to define simultaneity) and proper time τ elapsed on the clocks of the traveling object. It equals the object's momentum p divided by its rest mass m, and is made up of the space-like components of the object's four-vector velocity. William Shurcliff's monograph mentioned its early use in the Sears and Brehme text. Fraundorf has explored its pedagogical value while Ungar, Baylis and Hestenes have examined its relevance from group theory and geometric algebra perspectives. Proper velocity is sometimes referred to as celerity. Unlike the more familiar coordinate velocity v, proper velocity is synchrony-free (does not require synchronized clocks) and is useful for describing both super-relativistic and sub-relativistic motion. Like coordinate velocity and unlike four-vector velocity, it resides in the three-dimensional slice of spacetime defined by the map frame. As shown below and in the example figure at right, proper velocities even add as three-vectors, with rescaling of the out-of-frame component. This makes them more useful for map-based (e.g. engineering) applications, and less useful for gaining coordinate-free insight. Proper speed divided by lightspeed c is the hyperbolic sine of rapidity η, just as the Lorentz factor γ is rapidity's hyperbolic cosine, and coordinate speed v over lightspeed is rapidity's hyperbolic tangent. Imagine an object traveling through a region of spacetime locally described by Hermann Minkowski's flat-space metric equation (c dτ)² = (c dt)² − (dx)². Here a reference map frame of yardsticks and synchronized clocks define map position x and map time t respectively, and the d preceding a coordinate means infinitesimal change. A bit of manipulation allows one to show that proper velocity w = dx/dτ = γv, where as usual coordinate velocity v = dx/dt. Thus finite w ensures that v is less than lightspeed c. By grouping γ with v in the expression for relativistic momentum p, proper velocity also extends the Newtonian form of momentum as mass times velocity to high speeds without a need for relativistic mass. Proper velocity addition formula The proper velocity addition formula combines two proper velocities using a beta factor for each of them, given in this context by β = 1/√(1 + w²/c²). This formula provides a proper velocity gyrovector space model of hyperbolic geometry that uses a whole space, compared to other models of hyperbolic geometry which use discs or half-planes. 
In the unidirectional case this becomes commutative and simplifies to a Lorentz factor product times a coordinate velocity sum, e.g. to wAC = γAB γBC (vAB + vBC), as discussed in the application section below. Relation to other velocity parameters Speed table The table below illustrates how the proper velocity of w = c, or "one map-lightyear per traveler-year", is a natural benchmark for the transition from sub-relativistic to super-relativistic motion. Note from above that velocity angle η and proper-velocity w run from 0 to infinity and track coordinate-velocity when w ≪ c. On the other hand, when w ≫ c, proper velocity tracks Lorentz factor while velocity angle is logarithmic and hence increases much more slowly. Interconversion equations The following equations convert between four alternate measures of speed (or unidirectional velocity) that flow from Minkowski's flat-space metric equation (c dτ)² = (c dt)² − (dx)². Lorentz factor γ: energy over mc² ≥ 1. Proper velocity w: momentum per unit mass, w = γv. Coordinate velocity: v = w/γ ≤ c. Hyperbolic velocity angle or rapidity: η = sinh⁻¹(w/c) = tanh⁻¹(v/c), or in terms of logarithms, η = ln[γ(1 + v/c)] = ln[γ + w/c]. Applications Comparing velocities at high speed Proper velocity is useful for comparing the speed of objects with momentum per unit rest mass (w) greater than lightspeed c. The coordinate speed of such objects is generally near lightspeed, whereas proper velocity indicates how rapidly they are covering ground on traveling-object clocks. This is important for example if, like some cosmic ray particles, the traveling objects have a finite lifetime. Proper velocity also clues us in to the object's momentum, which has no upper bound. For example, a 45 GeV electron accelerated by the Large Electron–Positron Collider (LEP) at Cern in 1989 would have had a Lorentz factor γ of about 88,000 (45 GeV divided by the electron rest mass of 511 keV). Its coordinate speed v would have been about sixty-four trillionths shy of lightspeed c at 1 light-second per map second. On the other hand, its proper speed would have been w = γv ~ 88,000 light-seconds per traveler second. By comparison the coordinate speed of a 250 GeV electron in the proposed International Linear Collider (ILC) will remain near c, while its proper speed will significantly increase to ~489,000 lightseconds per traveler second. Proper velocity is also useful for comparing relative velocities along a line at high speed. In this case wAC = γAB γBC (vAB + vBC), where A, B and C refer to different objects or frames of reference. For example, wAC refers to the proper speed of object A with respect to object C. Thus in calculating the relative proper speed, Lorentz factors multiply when coordinate speeds add. Hence each of two electrons (A and C) in a head-on collision at 45 GeV in the lab frame (B) would see the other coming toward them at vAC ~ c and wAC = 88,000²(1 + 1) ~ 1.55×10¹⁰ lightseconds per traveler second. Thus from the target's point of view, colliders can explore collisions with much higher projectile energy and momentum per unit mass. Proper velocity-based dispersion relations Plotting "(γ − 1) versus proper velocity" after multiplying the former by mc² and the latter by mass m, for various values of m yields a family of kinetic energy versus momentum curves that includes most of the moving objects encountered in everyday life. Such plots can for example be used to show where the speed of light, the Planck constant, and Boltzmann energy kT figure in. 
To illustrate, the figure at right with log-log axes shows objects with the same kinetic energy (horizontally related) that carry different amounts of momentum, as well as how the speed of a low-mass object compares (by vertical extrapolation) to the speed after perfectly inelastic collision with a large object at rest. Highly sloped lines (rise/run = 2) mark contours of constant mass, while lines of unit slope mark contours of constant speed. Objects that fit nicely on this plot are humans driving cars, dust particles in Brownian motion, a spaceship in orbit around the Sun, molecules at room temperature, a fighter jet at Mach 3, one radio wave photon, a person moving at one lightyear per traveler year, the pulse of a 1.8 megajoule laser, a 250 GeV electron, and our observable universe with the blackbody kinetic energy expected of a single particle at 3 kelvin. Unidirectional acceleration via proper velocity Proper acceleration at any speed is the physical acceleration experienced locally by an object. In spacetime it is a three-vector acceleration with respect to the object's instantaneously varying free-float frame. Its magnitude α is the frame-invariant magnitude of that object's four-acceleration. Proper acceleration is also useful from the vantage point (or spacetime slice) of external observers. Not only may observers in all frames agree on its magnitude, but it also measures the extent to which an accelerating rocket "has its pedal to the metal". In the unidirectional case, i.e. when the object's acceleration is parallel or anti-parallel to its velocity in the spacetime slice of the observer, the change in proper velocity is the integral of proper acceleration over map time, i.e. Δw = αΔt for constant α. At low speeds this reduces to the well-known relation between coordinate velocity and coordinate acceleration times map time, i.e. Δv = aΔt. For constant unidirectional proper acceleration, similar relationships exist between rapidity η and elapsed proper time Δτ, as well as between Lorentz factor γ and distance traveled Δx. To be specific: α = Δw/Δt = c Δη/Δτ = c² Δγ/Δx, where as noted above the various velocity parameters are related by w = c sinh η = γv. These equations describe some consequences of accelerated travel at high speed. For example, imagine a spaceship that can accelerate its passengers at 1 g (or 1.03 lightyears/year²) halfway to their destination, and then decelerate them at 1 g for the remaining half so as to provide Earth-like artificial gravity from point A to point B over the shortest possible time. For a map distance of ΔxAB, the relation between Lorentz factor and distance above predicts a midpoint Lorentz factor (up from its unit rest value) of γmid = 1 + α(ΔxAB/2)/c². Hence the round-trip time on traveler clocks will be Δτ = 4(c/α)cosh⁻¹[γmid], during which the time elapsed on map clocks will be Δt = 4(c/α)sinh[cosh⁻¹[γmid]]. This imagined spaceship could offer round trips to Proxima Centauri lasting about 7.1 traveler years (~12 years on Earth clocks), round trips to the Milky Way's central black hole of about 40 years (~54,000 years elapsed on Earth clocks), and round trips to Andromeda Galaxy lasting around 57 years (over 5 million years on Earth clocks). Unfortunately, while rocket accelerations of 1 g can easily be achieved, they cannot be sustained over long periods of time. 
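The interconversions and the constant-acceleration results above are straightforward to check numerically. The sketch below is illustrative only: the helper names are ours, lightspeed and 1 g are expressed in lightyears and years (c = 1 ly/yr, 1 g ≈ 1.03 ly/yr² as above), and the one-way distances to Proxima Centauri, the galactic center and Andromeda are rough assumed values.

```python
import math

C = 1.0          # lightspeed in lightyears per year
G = 1.03         # 1 g expressed in lightyears per year^2 (approximate)

def params_from_gamma(gamma):
    """Return (v/c, w/c, rapidity eta) for a given Lorentz factor gamma."""
    eta = math.acosh(gamma)          # gamma = cosh(eta)
    w = math.sinh(eta)               # proper speed:   w/c = sinh(eta)
    v = math.tanh(eta)               # coordinate speed: v/c = tanh(eta)
    return v, w, eta

# 45 GeV LEP electron: gamma ~ 45 GeV / 511 keV
gamma_lep = 45e9 / 511e3
v, w, eta = params_from_gamma(gamma_lep)
print(f"LEP electron: 1 - v/c = {1 - v:.2e}, "
      f"proper speed ~ {w:,.0f} lightseconds per traveler second")

def round_trip(distance_ly, alpha=G):
    """Traveler and map times for a 1-g accelerate/decelerate round trip
    over a one-way map distance of distance_ly lightyears."""
    gamma_mid = 1.0 + alpha * (distance_ly / 2.0) / C**2
    eta_mid = math.acosh(gamma_mid)
    tau = 4.0 * (C / alpha) * eta_mid                 # proper (traveler) time in years
    t_map = 4.0 * (C / alpha) * math.sinh(eta_mid)    # map (Earth) time in years
    return tau, t_map

# Assumed one-way distances in lightyears (approximate values, for illustration)
for name, d in [("Proxima Centauri", 4.24),
                ("galactic center", 27_000),
                ("Andromeda", 2_540_000)]:
    tau, t_map = round_trip(d)
    print(f"{name}: ~{tau:.1f} traveler years, ~{t_map:,.0f} Earth years")
```

Running this reproduces the ~88,000 lightseconds-per-traveler-second figure for the LEP electron and round-trip traveler times of roughly 7, 40 and 57 years quoted above.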
See also Kinematics: for studying ways that position changes with time Lorentz factor: γ = dt/dτ or energy over mc² Rapidity: hyperbolic velocity angle in imaginary radians Four-velocity: combining travel through time and space Uniform acceleration: holding coordinate acceleration fixed Gullstrand–Painlevé coordinates: free-float frames in curved spacetime. Notes and references External links Spacetime Physics by Edwin F. Taylor and John Archibald Wheeler Minkowski spacetime
Proper velocity
[ "Physics" ]
2,321
[ "Physical phenomena", "Physical quantities", "Motion (physics)", "Vector physical quantities", "Velocity", "Wikipedia categories named after physical quantities" ]
15,318,324
https://en.wikipedia.org/wiki/Helmert%20transformation
The Helmert transformation (named after Friedrich Robert Helmert, 1843–1917) is a geometric transformation method within a three-dimensional space. It is frequently used in geodesy to produce transformations between datums. The Helmert transformation is also called a seven-parameter transformation and is a similarity transformation. Definition It can be expressed as XB = T + μ·R·XA, where XB is the transformed vector and XA is the initial vector. The parameters are: T – translation vector, containing the three translations along the coordinate axes; μ – scale factor, which is unitless; if it is given in ppm, it must be divided by 1,000,000 and added to 1; R – rotation matrix, consisting of three small rotations rx, ry, rz around each of the three coordinate axes. The rotation matrix is an orthogonal matrix. The angles are given in either degrees or radians. Variations A special case is the two-dimensional Helmert transformation. Here, only four parameters are needed (two translations, one scaling, one rotation). These can be determined from two known points; if more points are available then checks can be made. Sometimes it is sufficient to use the five-parameter transformation, composed of three translations, only one rotation about the Z-axis, and one change of scale. Restrictions The Helmert transformation only uses one scale factor, so it is not suitable for: the manipulation of measured drawings and photographs; the comparison of paper deformations while scanning old plans and maps. In these cases, a more general affine transformation is preferable. Application The Helmert transformation is used, among other things, in geodesy to transform the coordinates of a point from one coordinate system into another. Using it, it becomes possible to convert regional surveying points into the WGS84 locations used by GPS. For example, starting with the Gauss–Krüger easting and northing plus the height, these are converted into 3D values in the following steps: Undo the map projection: calculation of the ellipsoidal latitude, longitude and height. Convert from geodetic coordinates to geocentric coordinates: calculation of X, Y and Z relative to the reference ellipsoid of surveying. Apply the 7-parameter transformation (where X, Y and Z almost always change by a few hundred metres at most, and distances by a few mm per km). Because of this, terrestrially measured positions can be compared with GPS data; these can then be brought into the surveying as new points – transformed in the opposite order. The third step consists of the application of a rotation matrix, multiplication with the scale factor μ (with a value near 1) and the addition of the three translations TX, TY, TZ. The coordinates of a reference system B are derived from reference system A by the following formula (position vector transformation convention and very small rotation angles simplification): XB = TX + μ(XA − rz·YA + ry·ZA), YB = TY + μ(rz·XA + YA − rx·ZA), ZB = TZ + μ(−ry·XA + rx·YA + ZA). For the reverse transformation, each parameter is multiplied by −1. The seven parameters are determined for each region with three or more "identical points" of both systems. To bring them into agreement, the small inconsistencies (usually only a few cm) are adjusted using the method of least squares – that is, eliminated in a statistically plausible manner. Standard parameters Note: the rotation angles given in the table are in arcseconds and must be converted to radians before use in the calculation. These are standard parameter sets for the 7-parameter transformation (or datum transformation) between two datums. 
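To make the forward computation concrete, here is a minimal sketch of the seven-parameter transformation using the small-angle, position vector convention described above (the opposite sign convention is also in use). It is not production geodesy code: the function name is ours, and the parameter values and input coordinate in the example are placeholders rather than an official datum parameter set.

```python
import numpy as np

def helmert_7param(xyz, t, s_ppm, rx_as, ry_as, rz_as):
    """Apply a 7-parameter Helmert transformation (position vector convention).

    xyz    -- geocentric coordinates (metres) in the source datum, length-3 sequence
    t      -- translation vector (metres), length-3 sequence
    s_ppm  -- scale change in parts per million
    r*_as  -- small rotations about the X, Y, Z axes, in arcseconds
    """
    arcsec = np.pi / (180.0 * 3600.0)            # arcseconds -> radians
    rx, ry, rz = rx_as * arcsec, ry_as * arcsec, rz_as * arcsec
    mu = 1.0 + s_ppm * 1e-6                      # unitless scale factor
    # Small-angle rotation matrix, position vector transformation convention
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])
    return np.asarray(t, dtype=float) + mu * (R @ np.asarray(xyz, dtype=float))

# Example with placeholder (made-up) parameters and coordinates:
p_source = [4_000_000.0, 250_000.0, 5_000_000.0]   # X, Y, Z in metres
p_target = helmert_7param(p_source,
                          t=[100.0, -50.0, 25.0],
                          s_ppm=2.5,
                          rx_as=0.5, ry_as=-0.3, rz_as=0.2)
print(p_target)
```

The per-coordinate expansion given above is exactly what the matrix product performs; with real datum parameters the same function would carry out the third step of the conversion chain described earlier.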
For a transformation in the opposite direction, inverse transformation parameters should be calculated or the inverse transformation should be applied (as described in the paper "On geodetic transformations"). The translations TX, TY, TZ and the rotations rx, ry and rz appear under a variety of alternative symbols in the literature. In the United Kingdom the prime interest is the transformation between the OSGB36 datum used by the Ordnance Survey for grid references on its Landranger and Explorer maps and the WGS84 implementation used by GPS technology. The Gauss–Krüger coordinate system used in Germany normally refers to the Bessel ellipsoid. A further datum of interest was ED50 (European Datum 1950) based on the Hayford ellipsoid. ED50 was part of the fundamentals of the NATO coordinates up to the 1980s, and many national coordinate systems of Gauss–Krüger are defined by ED50. The earth does not have a perfect ellipsoidal shape; it is better described as a geoid, which is approximated in different regions by many different ellipsoids. Depending upon the actual location, the "locally best aligned ellipsoid" has been used for surveying and mapping purposes. The standard parameter set gives an accuracy of only a few metres for an OSGB36/WGS84 transformation. This is not precise enough for surveying, and the Ordnance Survey supplements these results by using a lookup table of further translations in order to reach the higher accuracy required. Estimating the parameters If the transformation parameters are unknown, they can be calculated with reference points (that is, points whose coordinates are known before and after the transformation). Since a total of seven parameters (three translations, one scale, three rotations) have to be determined, at least two points and one coordinate of a third point (for example, the Z-coordinate) must be known. This gives a system with seven equations and seven unknowns, which can be solved. For transformations between conformal map projections near an arbitrary point, the Helmert transformation parameters can be calculated exactly from the Jacobian matrix of the transformation function. In practice, it is best to use more points. Through this correspondence, more accuracy is obtained, and a statistical assessment of the results becomes possible. In this case, the calculation is adjusted with the Gaussian least squares method. A numerical value for the accuracy of the transformation parameters is obtained by calculating the values at the reference points, and weighting the results relative to the centroid of the points. While the method is mathematically rigorous, it is entirely dependent on the accuracy of the parameters that are used. In practice, these parameters are computed from the inclusion of at least three known points in the networks. However, the accuracy of these points will affect the resulting transformation parameters, as these points will contain observation errors. Therefore, a "real-world" transformation will only be a best estimate and should contain a statistical measure of its quality. See also Geographic coordinate conversion Procrustes analysis Surveying References External links Helmert transform in PROJ coordinate transformation software Computing Helmert Transformations Geodesy Transformation (function)
Helmert transformation
[ "Mathematics" ]
1,329
[ "Applied mathematics", "Geodesy", "Geometry", "Transformation (function)" ]
11,593,376
https://en.wikipedia.org/wiki/Doug%20Stinson
Douglas Robert Stinson (born 1956 in Guelph, Ontario) is a Canadian mathematician and cryptographer, currently a Professor Emeritus at the University of Waterloo. Stinson received his B.Math from the University of Waterloo in 1978, his M.Sc. from Ohio State University in 1980, and his Ph.D. from the University of Waterloo in 1981. He was at the University of Manitoba from 1981 to 1989, and the University of Nebraska-Lincoln from 1990 to 1998. In 2011 he was named as a Fellow of the Royal Society of Canada. Stinson is the author of over 300 research publications as well as the mathematics-based cryptography textbook Cryptography: Theory and Practice. Selected publications See also List of University of Waterloo people References External links Doug Stinson's home page Living people 1956 births People from Guelph Canadian mathematicians Canadian computer scientists Combinatorialists Modern cryptographers University of Waterloo alumni Ohio State University alumni Academic staff of the University of Waterloo
Doug Stinson
[ "Mathematics" ]
204
[ "Combinatorialists", "Combinatorics" ]
11,593,471
https://en.wikipedia.org/wiki/Hydramethylnon
Hydramethylnon (AC 217,300) is an insecticide used primarily in the form of baits for cockroaches and ants. It works by inhibiting complex III in the mitochondrial inner membrane and leads to a halting of oxidative phosphorylation (IRAC class 20A). Some brands of hydramethylnon are Amdro, Blatex, Combat, Cyaforce, Cyclon, Faslane, Grant's, Impact, Matox, Maxforce, Pyramdron, Siege, Scuttle and Wipeout. Hydramethylnon is a slow-acting poison with delayed toxicity that needs to be eaten to be effective. Toxicology Hydramethylnon has low toxicity in mammals. The oral LD50 is 1100–1300 mg/kg in rats and above 28,000 mg/kg in dogs. Hydramethylnon is toxic to fish; the 96-hour LC50 in rainbow trout is 0.16 mg/L, 0.10 mg/L in channel catfish, and 1.70 mg/L in bluegill sunfish. Hydramethylnon, when fed to rats for two years, led to an increase in uterine and adrenal tumors at the highest dose; therefore, the Environmental Protection Agency classifies hydramethylnon as a possible human carcinogen. See also Fipronil, another insecticide used for similar purposes References External links Hydramethylnon Technical Fact Sheet - National Pesticide Information Center Hydramethylnon General Fact Sheet - National Pesticide Information Center Hydramethylnon Pesticide Information Profile - Extension Toxicology Network Maxforce MSDS. Amdro MSDS Alkene derivatives Phenylene compounds Hydrazones Insecticides Trifluoromethyl compounds
Hydramethylnon
[ "Chemistry" ]
364
[ "Hydrazones", "Functional groups" ]
11,594,369
https://en.wikipedia.org/wiki/Biotransformation
Biotransformation is the biochemical modification of one chemical compound or a mixture of chemical compounds. Biotransformations can be conducted with whole cells, their lysates, or purified enzymes. Increasingly, biotransformations are effected with purified enzymes. Major industries and life-saving technologies depend on biotransformations. Advantages and disadvantages Compared to the conventional production of chemicals, biotransformations are often attractive because their selectivities can be high, limiting the coproduction of undesirable coproducts. Generally operating under mild temperatures and pressures in aqueous solutions, many biotransformations are "green". The catalysts, i.e. the enzymes, are amenable to improvement by genetic manipulation. Biotechnology usually is restrained by substrate scope. Petrochemicals for example are often not amenable to biotransformations, especially on the scale required for some applications, e.g. fuels. Biotransformations can be slow and are often incompatible with high temperatures, which are employed in traditional chemical synthesis to increase rates. Enzymes are generally only stable <100 °C, and usually much lower. Enzymes, like other catalysts are poisonable. In some cases, performance or recyclability can be improved by using immobilized enzymes. Historical Wine and beer making are examples of biotransformations that have been practiced since ancient times. Vinegar has long been produced by fermentation, involving the oxidation of ethanol to acetic acid. Cheesemaking traditionally relies on microbes to convert dairy precursors. Yogurt is produced by inoculating heat-treated milk with microorganisms such as Streptococcus thermophilus and Lactobacillus bulgaricus. Modern examples Pharmaceuticals Beta-lactam antibiotics, e.g., penicillin and cephalosporin are produced by biotransformations in an industry valued several billions of dollars. Processes are conducted in vessels up to 60,000 gal in volume. Sugars, methionine, and ammonium salts are used as C,S,N sources. Genetically modified Penicillium chrysogenum is employed for penicillin production. Some steroids are hydroxylated in vitro to give drugs. Sugars High fructose corn syrup is generated by biotransformation of corn starch, which is converted to a mixture of glucose and fructose. Glucoamylase is one enzyme used in the process. Cyclodextrins are produced by transferases. Amino acids Amino acids are sometimes produced industrially by transaminases. In other cases, amino acids are obtained by biotransformations of peptides using peptidases. Acrylamide With acrylonitrile and water as substrates, nitrile hydratase enzymes are used to produce acrylamide, a valued monomer. Biofuels Many kinds of fuels and lubricants are produced by processes that include biotransformations starting from natural precursors such as fats, cellulose, and sugars. See also Biotechnology Biodegradation References Bioremediation Biotechnology Biodegradation
Biotransformation
[ "Chemistry", "Biology", "Environmental_science" ]
671
[ "Biotechnology", "Biodegradation", "Ecological techniques", "nan", "Bioremediation", "Environmental soil science" ]
11,594,904
https://en.wikipedia.org/wiki/Hyperbaric%20welding
Hyperbaric welding is the process of welding at elevated pressures, normally underwater. Hyperbaric welding can either take place wet in the water itself or dry inside a specially constructed positive-pressure enclosure, and hence in a dry environment. It is predominantly referred to as "hyperbaric welding" when used in a dry environment, and "underwater welding" when in a wet environment. The applications of hyperbaric welding are diverse—it is often used to repair ships, offshore oil platforms, and pipelines. Steel is the most common material welded. Dry welding is used in preference to wet underwater welding when high quality welds are required because of the increased control over conditions which can be maintained, such as through application of prior and post weld heat treatments. This improved environmental control leads directly to improved process performance and a generally much higher quality weld than a comparative wet weld. Thus, when a very high quality weld is required, dry hyperbaric welding is normally utilized. Research into using dry hyperbaric welding at ever greater depths is ongoing. In general, assuring the integrity of underwater welds can be difficult (but is possible using various nondestructive testing applications), especially for wet underwater welds, because defects are difficult to detect if the defects are beneath the surface of the weld. Underwater hyperbaric welding was invented by the Soviet metallurgist Konstantin Khrenov in 1932. Application Welding processes have become increasingly important in almost all manufacturing industries and for structural applications (metal skeletons of buildings). Of the many techniques for welding in atmosphere, most cannot be applied in offshore and marine applications in contact with water. Most offshore repair and surfacing work is done at shallow depth or in the region intermittently covered by water (the splash zone). However, the most technologically challenging task is repair at greater depths, especially for pipeline construction and the repair of tears and breaks in marine structures and vessels. Underwater welding can be the least expensive option for marine maintenance and repair, because it bypasses the need to pull the structure out of the sea and saves valuable time and dry docking costs. It also enables emergency repairs to allow the damaged structure to be safely transported to dry facilities for permanent repair or scrapping. Underwater welding is applied in both inland and offshore environments, though seasonal weather inhibits offshore underwater welding during winter. In either location, surface supplied air is the most common diving method for underwater welders. Dry welding Dry hyperbaric welding involves the weld being performed at raised pressure in a chamber filled with a gas mixture sealed around the structure being welded. Most arc welding processes such as shielded metal arc welding (SMAW), flux-cored arc welding (FCAW), gas tungsten arc welding (GTAW), gas metal arc welding (GMAW), plasma arc welding (PAW) could be operated at hyperbaric pressures, but all suffer as the pressure increases. Gas tungsten arc welding is most commonly used. The degradation is associated with physical changes of the arc behaviour as the gas flow regime around the arc changes and the arc roots contract and become more mobile. Of note is a dramatic increase in arc voltage which is associated with the increase in pressure. Overall a degradation in capability and efficiency results as the pressure increases. 
Special control techniques have been applied which have allowed welding at great simulated water depths in the laboratory, but operational dry hyperbaric welding has thus far been limited to much shallower depths by the physiological capability of divers to operate the welding equipment at high pressures and by practical considerations concerning construction of an automated pressure/welding chamber at depth. Wet welding Wet underwater welding directly exposes the diver and electrode to the water and surrounding elements. Divers usually use around 300–400 amps of direct current to power their electrode, and they weld using varied forms of arc welding. This practice commonly uses a variation of shielded metal arc welding, which uses an electrode that is waterproof. Other processes that are used include flux-cored arc welding and friction welding. In each of these cases, the welding power supply is connected to the welding equipment through cables and hoses. The process is generally limited to low carbon equivalent steels, especially at greater depths, because of hydrogen-caused cracking. Wet welding with a stick electrode is done with similar equipment to that used for dry welding, but the electrode holders are designed for water cooling and are more heavily insulated. They will overheat if used out of the water. A constant current welding machine is used for manual metal arc welding. Direct current is used, and a heavy duty isolation switch is installed in the welding cable at the surface control position, so that the welding current can be disconnected when not in use. The welder instructs the surface operator to make and break the contact as required during the procedure. The contacts should only be closed during actual welding, and opened at other times, particularly when changing electrodes. The electric arc heats the workpiece and the welding rod, and the molten metal is transferred through the gas bubble around the arc. The gas bubble is partly formed from decomposition of the flux coating on the electrode but it is usually contaminated to some extent by steam. Current flow induces transfer of metal droplets from the electrode to the workpiece and enables positional welding by a skilled operator. Slag deposition on the weld surface helps to slow the rate of cooling, but rapid cooling is one of the biggest problems in producing a quality weld. Hazards and risks The hazards of underwater welding include the risk of electric shock for the welder. To prevent this, the welding equipment must be adaptable to a marine environment, properly insulated and the welding current must be controlled. Commercial divers must also consider the occupational safety issues that divers face, most notably the risk of decompression sickness due to the increased pressure of breathing gases. Many divers have reported a metallic taste that is related to the galvanic breakdown of dental amalgam. There may also be long term cognitive and possibly musculoskeletal effects associated with underwater welding. See also References External links Health and Safety Executive - Performs research on long term health effects from underwater welding. Welding Underwater work Soviet inventions Russian inventions
Hyperbaric welding
[ "Engineering" ]
1,260
[ "Welding", "Mechanical engineering" ]
11,596,097
https://en.wikipedia.org/wiki/Prescription%20cascade
Prescription cascade is the process whereby the side effects of drugs are misdiagnosed as symptoms of another problem, resulting in further prescriptions and further side effects and unanticipated drug interactions, which may in turn lead to further symptoms and further misdiagnoses. This is a pharmacological example of a feedback loop. Such cascades can be reversed through deprescribing. Theory Over the past 20 years, spending on prescription drugs has increased drastically. This can be attributed to several factors: increased diagnosis of chronic conditions; the use of numerous medications by the older population; and an increase in the incidence of obesity, which has meant an increase in chronic conditions such as diabetes and hypertension. As each condition is treated with a specific drug, the side effects of each drug come into play. If a doctor fails to acknowledge all the drugs that a patient is taking, an adverse drug reaction may be misinterpreted as a new medical condition. Another drug is prescribed to treat the new condition, and an adverse drug side-effect occurs that is again mistakenly diagnosed as a new medical condition. Thus the patient is at risk of developing additional adverse effects. The most frequent medical intervention performed by a doctor is the writing of a prescription. Because chronic illness increases with advancing age, older people are more likely to have conditions that require drug treatment, and they are more likely to suffer the effects of a prescription cascade. A prescriber can do little to modify age-related physiological changes when trying to minimize the likelihood that an older person will develop an adverse drug reaction. However, when assessing a patient who is already taking drugs, a doctor should always consider the development of any new signs and symptoms as a possible consequence of the patient's drug treatment. Polypharmacy Polypharmacy is the use of numerous medications at the same time (from the root "multiple pharmacies"). As people age, various health conditions may arise and must be treated. An older patient suffering a range of issues, from short-term medical conditions to chronic conditions like diabetes or high blood pressure, may be taking a variety of drugs at one time. A review in 2010 found that 81-year-old patients in the study were taking an average of 15 different medications at the same time, ranging from 6 to 28 medications. It also found approximately 8.9 drug-related problems per patient in the study, ranging from 3 to 19 problems. The review found that patients were commonly taking medications that they did not need anymore. More specifically, work from Australia has identified that 16% of older people use medicines that are part of a prescribing cascade. References Prescription of drugs
Prescription cascade
[ "Chemistry" ]
549
[ "Drug safety" ]
11,596,512
https://en.wikipedia.org/wiki/Quartz%20crisis
The quartz crisis (Swiss) or quartz revolution (America, Japan and other countries) was the advancement in the watchmaking industry caused by the advent of quartz watches in the 1970s and early 1980s, that largely replaced mechanical watches around the world. It caused a significant decline of the Swiss watchmaking industry, which chose to remain focused on traditional mechanical watches, while the majority of the world's watch production shifted to Japanese companies such as Seiko, Citizen and Casio which embraced the new electronic technology. The strategy employed by Swiss makers was to call this revolution a 'crisis' thereby downgrading the advancement from Japanese brands. The quartz crisis took place amid the postwar global Digital Revolution (or "Third Industrial Revolution"). The crisis started with the Astron, the world's first quartz watch, which was introduced by Seiko in December 1969. The key advances included replacing the mechanical or electromechanical movement with a quartz clock movement as well as replacing analog displays with digital displays such as LED displays and later liquid-crystal displays (LCDs). In general, quartz timepieces are much more accurate than mechanical timepieces, in addition to having a generally lower cost and therefore sales price. History Before the crisis During World War II, Swiss neutrality permitted the watch industry to continue making consumer time-keeping apparatus, while the major nations of the world shifted timing apparatus production to timing devices for military ordnance. As a result, the Swiss watch industry enjoyed an effective monopoly. The industry prospered in the absence of any real competition. Thus, prior to the 1970s, the Swiss watch industry had 50% of the world watch market. In the early 1950s a joint venture between the Elgin Watch Company in the United States and Lip of France to produce an electromechanical watch – one powered by a small battery rather than an unwinding spring – laid the groundwork for the quartz watch. Although the Lip-Elgin enterprise produced only prototypes, in 1957 the first battery-driven watch was in production, the American-made Hamilton 500. In 1954, Swiss engineer Max Hetzel developed an electronic wristwatch that used an electrically charged tuning fork powered by a 1.35 volt battery. The tuning fork resonated at precisely 360 Hz and it powered the hands of the watch through an electromechanical gear train. This watch was called the Accutron and was marketed by Bulova, starting in 1960. Although Bulova did not have the first battery-powered wristwatch, the Accutron was a powerful catalyst, as by that time the Swiss watch-manufacturing industry was a mature industry with a centuries-old global market and deeply entrenched patterns of manufacturing, marketing, and sales. Beginning of the revolution In the late 1950s and early 1960s, both Seiko and a consortium of Switzerland's top watch firms, including Patek Philippe, Piaget, and Omega, fiercely competed to develop the first quartz wristwatch. In 1962, the Centre Electronique Horloger (CEH), consisting of around 20 Swiss watch manufacturers, was established in Neuchâtel to develop a Swiss-made quartz wristwatch, while simultaneously in Japan, Seiko was also working on an electric watch and developing quartz technology. One of the first successes was a portable quartz clock called the Seiko Crystal Chronometer QC-951. This portable clock was used as a backup timer for marathon events in the 1964 Summer Olympics in Tokyo. 
In 1966, prototypes of the world's first quartz pocket watch were unveiled by Seiko and Longines in the Neuchâtel Observatory's 1966 competition. In 1967, both the CEH and Seiko presented prototypes of quartz wristwatches to the Neuchâtel Observatory competition. On 25 December 1969, Seiko unveiled the Astron, the world's first quartz watch, which marked the beginning of the quartz revolution. The first Swiss quartz analog watch – the Ebauches SA Beta 21 – arrived at the 1970 Basel Fair. The Beta 21 was released by numerous manufacturers including the Omega Electroquartz. On 6 May 1970, Hamilton introduced the Pulsar – the world's first electronic digital watch. In 1971 Girard-Perregaux introduced the Caliber 350, with an advertised accuracy within about 0.164 seconds per day, which had a quartz oscillator with a frequency of 32,768 Hz, which was faster than previous quartz watch movements and has since become the oscillation frequency used by most quartz clocks. The rise of quartz In 1974 Omega introduced the Omega Marine Chronometer, the first quartz watch ever to be certified as a marine chronometer, accurate to 12 seconds per year using a quartz circuit that produces 2,400,000 vibrations per second. In 1976 Omega introduced the Omega Chrono-Quartz, the world's first analogue-digital chronograph, which was succeeded within 12 months by the Calibre 1620, the company's first completely LCD chronograph wristwatch. Despite these dramatic advancements, the Swiss hesitated to embrace quartz watches. At the time, Swiss mechanical watches dominated world markets. In addition, excellence in watchmaking was a large component of Swiss national identity. From their position of market strength, and with a national watch industry organized broadly and deeply to foster mechanical watches, many in Switzerland thought that moving into electronic watches was unnecessary. Others outside Switzerland, however, saw the advantage and further developed the technology. By 1978, quartz watches overtook mechanical watches in popularity, plunging the Swiss watch industry into crisis while at the same time strengthening both the Japanese and American watch industries. This period of time was marked by a lack of innovation in Switzerland at the same time that the watch-making industries of other nations were taking full advantage of emerging technologies, specifically quartz watch technology, hence the term "quartz crisis". As a result of the economic turmoil that ensued, many once-profitable and famous Swiss watch houses became insolvent or disappeared. This period of time completely upset the Swiss watch industry both economically and psychologically. During the 1970s and early 1980s, technological upheavals, i.e. the appearance of the quartz technology, and an otherwise difficult economic situation resulted in a reduction in the size of the Swiss watch industry. Between 1970 and 1983, the number of Swiss watchmakers dropped from 1,600 to 600. Between 1970 and 1988, Swiss watch employment fell from 90,000 to 28,000. Outside Switzerland, the crisis is often referred to as the "quartz revolution", particularly in the United States where many American companies had gone out of business or had been bought out by foreign interests by the 1960s. When the first quartz watches were introduced in 1969, the United States promptly took a technological lead in part due to microelectronics research for military and space programs. 
American companies like Texas Instruments, Fairchild Semiconductor, and National Semiconductor started the mass production of digital quartz watches and made them affordable. It did not remain so forever; by 1978 Hong Kong exported the largest number of electronic watches worldwide, and US semiconductor companies came to pull out of the watch market entirely. With the exception of Timex and Bulova, the remaining traditional American watch companies, including Hamilton, went out of business and sold their brand names to foreign competitors; Bulova would ultimately sell to the Japanese-owned Citizen in 2008. Aftermath The Swatch Group By 1983, the crisis reached a critical point. The Swiss watch industry, which had 1,600 watchmakers in 1970, had declined to 600. In March 1983, the two biggest Swiss watch groups, ASUAG (Allgemeine Schweizerische Uhrenindustrie AG) and SSIH (Société Suisse pour l'Industrie Horlogère), merged to form ASUAG/SSIH in order to save the industry. This organization was renamed SMH (Société de Microélectronique et d'Horlogerie) in 1986, and then The Swatch Group in 1998. It would be instrumental in reviving the Swiss watch industry; today, the Swatch Group is the largest watch manufacturer in the world. The Swatch product was sealed in a plastic case, sold as a disposable commodity with little probability of repair, and had fewer moving parts (51) than mechanical watches (about 91). Furthermore, production was essentially automated, which resulted in higher profitability. The Swatch was a huge success; in less than two years, more than 2.5 million Swatches were sold. Besides its own product line Swatch, the Swatch Group also acquired other watch brands including Blancpain, Breguet, Glashütte Original, Harry Winston, Longines, Omega, Hamilton and Tissot. Renaissance of mechanical watches The larger global market still largely reflected other trends, however. In the US domestic market, for example, the Swatch was something of a 1980s fad resting largely on variety of colors and patterns, and the bulk of production still came from offshore sites such as China and Japan, in digitally-dominated or hybrid brands like Casio, Timex, and Armitron. On the other hand, the quartz revolution drove many Swiss manufacturers to seek refuge in (or be winnowed out to) the higher end of the market, such as Patek Philippe, Vacheron Constantin, Audemars Piguet, and Rolex. Mechanical watches have gradually become luxury goods appreciated for their elaborate craftsmanship, aesthetic appeal, and glamorous design, sometimes associated with the social status of their owners, rather than simple timekeeping devices. The rise of smartwatches Since the 2010s, smartwatches have begun to significantly increase their shares in the global watch market, especially after the launch of the Apple Watch in 2015. There are concerns over the formation of a new type of crisis which may further threaten the Swiss watchmaking industry. See also List of watch manufacturers Smartwatch References External links Le Rouage dégrippé, Les crises horlogères, une fatalité en voie de disparition? Watch Wars (the Smithsonian) Watches tick to a different beat Swatch Print Ads from the Early 1980s Watches Horology Economic history of Switzerland Business rivalries 1970s in Switzerland 1970s in economic history
Quartz crisis
[ "Physics" ]
2,085
[ "Spacetime", "Horology", "Physical quantities", "Time" ]
11,597,892
https://en.wikipedia.org/wiki/C17H27NO3
The molecular formula C17H27NO3 (molar mass: 293.40 g/mol, exact mass: 293.1991 u) may refer to: Embutramide Nordihydrocapsaicin Nonivamide, or PAVA Pramocaine Molecular formulas
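For reference, the quoted masses can be reproduced from standard atomic masses; the short sketch below (our own helper, using published average and monoisotopic atomic masses) recomputes both values.

```python
# Recompute the masses of C17H27NO3 from standard atomic masses.
average = {"C": 12.0107, "H": 1.00794, "N": 14.0067, "O": 15.9994}          # g/mol
monoisotopic = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}   # u

formula = {"C": 17, "H": 27, "N": 1, "O": 3}

molar_mass = sum(average[el] * n for el, n in formula.items())
exact_mass = sum(monoisotopic[el] * n for el, n in formula.items())

print(f"molar mass ~ {molar_mass:.2f} g/mol")   # ~293.40
print(f"exact mass ~ {exact_mass:.4f} u")       # ~293.1991
```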
C17H27NO3
[ "Physics", "Chemistry" ]
74
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
11,598,515
https://en.wikipedia.org/wiki/Lillie%27s%20trichrome
Lillie's trichrome is a combination of dyes used in histology. It is similar to Masson's trichrome stain, but it uses Biebrich scarlet for the plasma stain. It was initially published by Ralph D. Lillie in 1940. It is applied by submerging the fixed sample into the following three solutions: Weigert's iron hematoxylin working solution, Biebrich scarlet solution, and Fast Green FCF solution. The stain renders cell nuclei black, cytoplasm brown, muscle and myelinated fibers red, collagen blue, and erythrocytes scarlet. Applications Trichrome stains are normally used to differentiate between collagen and muscle tissues. Some studies that benefit from its application include end-stage liver disease (cirrhosis), myocardial infarction, muscular dystrophy, and tumor analysis. References External links Lillie's trichrome at StainsFile.info Histology Staining
Lillie's trichrome
[ "Chemistry", "Biology" ]
204
[ "Staining", "Biotechnology stubs", "Biochemistry stubs", "Histology", "Microbiology techniques", "Microscopy", "Biochemistry", "Cell imaging" ]
11,598,742
https://en.wikipedia.org/wiki/Fibroblast%20growth%20factor%20receptor%203
Fibroblast growth factor receptor 3 (FGFR-3) is a protein that in humans is encoded by the FGFR3 gene. FGFR3 has also been designated as CD333 (cluster of differentiation 333). The gene, which is located on chromosome 4, location p16.3, is expressed in tissues such as the cartilage, brain, intestine, and kidneys. The FGFR3 gene produces various forms of the FGFR-3 protein; the location varies depending on the isoform of FGFR-3. Since the different forms are found within different tissues, the protein is responsible for multiple growth factor interactions. Gain-of-function mutations in FGFR3 inhibit chondrocyte proliferation and underlie achondroplasia and hypochondroplasia. Function FGFR-3 is a member of the fibroblast growth factor receptor family, in which the amino acid sequence is highly conserved between members and throughout evolution. FGFR family members differ from one another in their ligand affinities and tissue distribution. A full-length representative protein would consist of an extracellular region, composed of three immunoglobulin-like domains, a single hydrophobic membrane-spanning segment and a cytoplasmic tyrosine kinase domain. The extracellular portion of the protein interacts with fibroblast growth factors, setting in motion a cascade of downstream signals which ultimately influence cell mitogenesis and differentiation. This particular family member binds both acidic and basic fibroblast growth factor and plays a role in bone development and maintenance. The FGFR-3 protein plays a role in bone growth by regulating ossification. Alternative splicing occurs and additional variants have been described, including those utilizing alternate exon 8 rather than 9, but their full-length nature has not been determined. Mutations In simplified karyotype notation, an affected individual can be described as 46,XX 4p16.3 (female) or 46,XY 4p16.3 (male). Gain-of-function mutations in this gene can produce dysfunctional proteins that "impede cartilage growth and development and affect chondrocyte proliferation and calcification", which can lead to craniosynostosis and multiple types of skeletal dysplasia (osteochondrodysplasia). In achondroplasia, the FGFR3 gene has a missense mutation at nucleotide 1138 resulting from either a G>A or a G>C substitution. This point mutation in the FGFR3 gene causes hydrogen bonds to form between two arginine side chains, leading to ligand-independent stabilization of FGFR3 dimers. Overactivity of FGFR3 inhibits chondrocyte proliferation and restricts long bone length. FGFR3 mutations are also linked with spermatocytic tumors, which occur more frequently in older men. Disease linkage Defects in the FGFR3 gene have been associated with several conditions, including craniosynostosis and seborrheic keratosis. Bladder cancer Mutations of FGFR3, FGFR3–TACC3 and FGFR3–BAIAP2L1 fusion proteins are frequently associated with bladder cancer, while some FGFR3 mutations are also associated with a better prognosis. Hence FGFR3 represents a potential therapeutic target for the treatment of bladder cancer. Post-translational modifications of FGFR3 occur in bladder cancer that do not occur in normal cells and can be targeted by immunotherapeutic antibodies. Glioblastoma FGFR3-TACC3 fusions have been identified as the primary mitogenic drivers in a subset of glioblastomas (approximately 4%) and other gliomas and may be associated with slightly improved overall survival. The FGFR3-TACC3 fusion represents a possible therapeutic target in glioblastoma. 
Achondroplasia Achondroplasia is a dominant genetic disorder caused by mutations in FGFR3 that make the resulting protein overactive. Individuals with these mutations have a head size that is larger than normal and are significantly shorter in height. Only a single copy of the mutated FGFR3 gene is needed to cause achondroplasia. It is generally caused by spontaneous mutations in germ cells; roughly 80 percent of the time, parents with children that have this disorder are of normal size. Thanatophoric dysplasia Thanatophoric dysplasia is a genetic disorder caused by gain-of-function mutations in FGFR3 that is often fatal during the perinatal period because the child cannot breathe. There are two types. TD type I is caused by a stop codon mutation that is located in part of the gene coding for the extracellular domain of the protein. TD type II is the result of a Lys650Glu substitution, which is located in the tyrosine kinase domain of FGFR3. Muenke syndrome Muenke syndrome, a disorder characterized by craniosynostosis, is caused by protein changes in FGFR3. The specific pathogenic variant c.749C>G produces the protein change p.Pro250Arg, in turn resulting in this condition. Characteristics of Muenke syndrome include coronal synostosis (usually bilateral), midfacial retrusion, strabismus, hearing loss, and developmental delay. Turribrachycephaly, cloverleaf skull, and frontal bossing are also possible. As a drug target The FGFR3 inhibitor erdafitinib has been approved as a cancer treatment in several jurisdictions for FGFR3+ urothelial carcinoma. The FGFR3 receptor signals through a tyrosine kinase pathway that is associated with many developmental processes in the embryo and in tissues. Study of this signaling pathway has played a crucial role in research on several cell activities, such as cell proliferation and cellular resistance to anti-cancer medications. Interactions Fibroblast growth factor receptor 3 has been shown to interact with FGF8 and FGF9. See also Cluster of differentiation Fibroblast growth factor receptor References Further reading External links GeneReviews/NIH/NCBI/UW entry on FGFR-Related Craniosynostosis Syndromes GeneReviews/NIH/NCBI/UW entry on Muenke Syndrome GeneReviews/NIH/NCBI/UW entry on Hypochondroplasia Clusters of differentiation Tyrosine kinase receptors
Fibroblast growth factor receptor 3
[ "Chemistry" ]
1,373
[ "Tyrosine kinase receptors", "Signal transduction" ]
17,010,869
https://en.wikipedia.org/wiki/Variants%20of%20PCR
The versatility of polymerase chain reaction (PCR) has led to modifications of the basic protocol being used in a large number of variant techniques designed for various purposes. This article summarizes many of the most common variations currently or formerly used in molecular biology laboratories; familiarity with the fundamental premise by which PCR works and corresponding terms and concepts is necessary for understanding these variant techniques. Basic modifications Often only a small modification needs to be made to the standard PCR protocol to achieve a desired goal: Multiplex-PCR uses several pairs of primers annealing to different target sequences. This permits the simultaneous analysis of multiple targets in a single sample. For example, in testing for genetic mutations, six or more amplifications might be combined. In the standard protocol for DNA fingerprinting, the targets assayed are often amplified in groups of 3 or 4. Multiplex Ligation-dependent Probe Amplification (MLPA) permits multiple targets to be amplified using only a single pair of primers, avoiding the resolution limitations of multiplex PCR. Multiplex PCR has also been used for analysis of microsatellites and SNPs. Variable Number of Tandem Repeats (VNTR) PCR targets repetitive areas of the genome that exhibit length variation. Analysis of the genotypes in the samples usually involves sizing of the amplification products by gel electrophoresis. Analysis of smaller VNTR segments known as short tandem repeats (or STRs) is the basis for DNA fingerprinting databases such as CODIS. Asymmetric PCR preferentially amplifies one strand of a double-stranded DNA target. It is used in some sequencing methods and hybridization probing to generate one DNA strand as product. Thermocycling is carried out exactly as in conventional PCR, but with a limiting amount or leaving out one of the primers. When the limiting primer becomes depleted, replication increases arithmetically rather than exponentially through extension of the excess primer. A modification of this process, named Linear-After-The-Exponential-PCR (or LATE-PCR), uses a limiting primer with a higher melting temperature (Tm) than the excess primer in order to maintain reaction efficiency as the limiting primer concentration decreases mid-reaction. See also overlap-extension PCR. Some modifications are needed to perform long PCR. The original Klenow-based PCR process did not generate products that were larger than about 400 bp. Taq polymerase can however amplify targets of up to several thousand bp long. Since then, modified protocols with Taq enzyme have allowed targets of over 50 kb to be amplified. Nested PCR is used to increase the specificity of DNA amplification. Two sets of primers are used in two successive reactions. In the first PCR, one pair of primers is used to generate DNA products, which may contain products amplified from non-target areas. The products from the first PCR are then used as template in a second PCR, using one ('hemi-nesting') or two different primers whose binding sites are located (nested) within the first set, thus increasing specificity. Nested PCR is often more successful in specifically amplifying long DNA products than conventional PCR, but it requires more detailed knowledge of the sequence of the target. Quantitative PCR (qPCR) is used to measure the specific amount of target DNA (or RNA) in a sample. 
By measuring amplification only within the phase of true exponential increase, the amount of measured product more accurately reflects the initial amount of target. Special thermal cyclers are used that monitor the amount of product during the amplification. Quantitative Real-Time PCR (QRT-PCR), sometimes simply called Real-Time PCR (RT-PCR), refers to a collection of methods that use fluorescent dyes, such as SYBR Green, or fluorophore-containing DNA probes, such as TaqMan, to measure the amount of amplified product in real time as the amplification progresses. Hot-start PCR is a technique performed manually by heating the reaction components to the DNA melting temperature (e.g. 95 °C) before adding the polymerase. In this way, non-specific amplification at lower temperatures is prevented. Alternatively, specialized reagents inhibit the polymerase's activity at ambient temperature, either by the binding of an antibody, or by the presence of covalently bound inhibitors that only dissociate after a high-temperature activation step. 'Hot-start/cold-finish PCR' is achieved with new hybrid polymerases that are inactive at ambient temperature and are only activated at elevated temperatures. In touchdown PCR, the annealing temperature is gradually decreased in later cycles. The annealing temperature in the early cycles is usually 3–5 °C above the standard Tm of the primers used, while in the later cycles it is a similar amount below the Tm. The initial higher annealing temperature leads to greater specificity for primer binding, while the lower temperatures permit more efficient amplification at the end of the reaction. Assembly PCR (also known as Polymerase Cycling Assembly or PCA) is the synthesis of long DNA structures by performing PCR on a pool of long oligonucleotides with short overlapping segments, to assemble two or more pieces of DNA into one piece. It involves an initial PCR with primers that have an overlap and a second PCR using the products as the template that generates the final full-length product. This technique may substitute for ligation-based assembly. In colony PCR, bacterial colonies are screened directly by PCR, for example, to screen for correct DNA vector constructs. Colonies are sampled with a sterile pipette tip and a small quantity of cells is transferred into a PCR mix. To release the DNA from the cells, the PCR is either started with an extended time at 95 °C (when standard polymerase is used), or with a shortened denaturation step at 100 °C and a special chimeric DNA polymerase. The digital polymerase chain reaction simultaneously amplifies thousands of samples, each in a separate droplet within an emulsion or a partition within a micro-well. Suicide PCR is typically used in paleogenetics or other studies where avoiding false positives and ensuring the specificity of the amplified fragment is the highest priority. It was originally described in a study to verify the presence of the microbe Yersinia pestis in dental samples obtained from 14th-century graves of people supposedly killed by plague during the medieval Black Death epidemic. The method prescribes the use of any primer combination only once in a PCR (hence the term "suicide"), which should never have been used in any positive-control PCR reaction, and the primers should always target a genomic region never amplified before in the lab using this or any other set of primers. This ensures that no contaminating DNA from previous PCR reactions is present in the lab, which could otherwise generate false positives. 
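To make the touchdown PCR schedule described above concrete, here is a minimal sketch that generates a per-cycle annealing-temperature program; the melting temperature, offsets and cycle counts are illustrative assumptions rather than values taken from the article.

```python
def touchdown_schedule(primer_tm, start_offset=5.0, end_offset=-5.0,
                       touchdown_cycles=10, total_cycles=35):
    """Per-cycle annealing temperatures (deg C) for a touchdown program.

    primer_tm        -- assumed primer melting temperature
    start_offset     -- early cycles anneal this far above Tm
    end_offset       -- remaining cycles anneal this far below Tm
    touchdown_cycles -- cycles over which the temperature ramps down
    """
    step = (start_offset - end_offset) / max(touchdown_cycles - 1, 1)
    temps = []
    for cycle in range(total_cycles):
        if cycle < touchdown_cycles:
            offset = start_offset - step * cycle   # ramp-down phase
        else:
            offset = end_offset                    # hold below Tm
        temps.append(round(primer_tm + offset, 1))
    return temps

if __name__ == "__main__":
    # Hypothetical primer Tm of 60 degC; real values depend on the assay.
    print(touchdown_schedule(60.0)[:12], "...")
```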
COLD-PCR (co-amplification at lower denaturation temperature-PCR) is a modified protocol that enriches variant alleles from a mixture of wild-type and mutation-containing DNA samples. Pretreatments and extensions The basic PCR process can sometimes precede or follow another technique. RT-PCR (or Reverse Transcription PCR) is used to reverse-transcribe and amplify RNA to cDNA. PCR is preceded by a reaction using reverse transcriptase, an enzyme that converts RNA into cDNA. The two reactions may be combined in a tube, with the initial heating step of PCR being used to inactivate the transcriptase. The Tth polymerase (described below) has RT activity, and can carry out the entire reaction. RT-PCR is widely used in expression profiling, which detects the expression of a gene. It can also be used to obtain the sequence of an RNA transcript, which may aid the determination of the transcription start and termination sites (by RACE-PCR) and facilitate mapping of the location of exons and introns in a gene sequence. Two-tailed PCR uses a single primer whose 3' and 5' ends, known as hemiprobes, both bind to a microRNA target. Both ends must be complementary for binding to occur. The 3'-end is then extended by reverse transcriptase, forming a long cDNA. The cDNA is then amplified using two target-specific PCR primers. The combination of two hemiprobes, both targeting the short microRNA target, makes the two-tailed assay exceedingly sensitive and specific. Ligation-mediated PCR uses small DNA oligonucleotide 'linkers' (or adaptors) that are first ligated to fragments of the target DNA. PCR primers that anneal to the linker sequences are then used to amplify the target fragments. This method is deployed for DNA sequencing, genome walking, and DNA footprinting. A related technique is amplified fragment length polymorphism, which generates diagnostic fragments of a genome. Methylation-specific PCR (MSP) is used to identify patterns of DNA methylation at cytosine-guanine (CpG) islands in genomic DNA. Target DNA is first treated with sodium bisulfite, which converts unmethylated cytosine bases to uracil, which is complementary to adenosine in PCR primers. Two amplifications are then carried out on the bisulfite-treated DNA: one primer set anneals to DNA with cytosines (corresponding to methylated cytosine), and the other set anneals to DNA with uracil (corresponding to unmethylated cytosine). MSP used in quantitative PCR provides quantitative information about the methylation state of a given CpG island. Other modifications Adjustments of the components in PCR are commonly made to optimize performance. The divalent magnesium ion (Mg++) is required for PCR polymerase activity. Lower concentrations of Mg++ will increase replication fidelity, while higher concentrations will introduce more mutations. Denaturants (such as DMSO) can increase amplification specificity by destabilizing non-specific primer binding. Other chemicals, such as glycerol, are stabilizers for the activity of the polymerase during amplification. Detergents (such as Triton X-100) can prevent the polymerase from sticking to itself or to the walls of the reaction tube. DNA polymerases occasionally incorporate mismatched bases into the extending strand. High-fidelity PCR employs enzymes with 3'-5' exonuclease activity, which decreases this rate of misincorporation. Examples of enzymes with proofreading activity include Pfu; adjustments of the Mg++ and dNTP concentrations may help maximize the number of products that exactly match the original target DNA. 
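The sequence logic behind methylation-specific PCR described above can be illustrated with a short sketch; the DNA fragment and methylation calls below are hypothetical, and the function simply encodes the rule that unmethylated cytosines are read as thymine after bisulfite treatment and amplification.

```python
def bisulfite_convert(seq, methylated_positions):
    """Post-amplification sequence after bisulfite treatment.

    seq                  -- genomic DNA sequence (string of A/C/G/T)
    methylated_positions -- indices whose cytosine is methylated
    """
    out = []
    for i, base in enumerate(seq):
        if base == "C" and i not in methylated_positions:
            out.append("T")      # unmethylated C -> uracil, amplified as T
        else:
            out.append(base)     # methylated C (and A/G/T) unchanged
    return "".join(out)

if __name__ == "__main__":
    region = "ACGTCGACCGTA"      # hypothetical CpG-island fragment
    methylated = {4, 8}          # assume the CpG cytosines at these indices are methylated
    print(region)                                  # ACGTCGACCGTA
    print(bisulfite_convert(region, methylated))   # ATGTCGATCGTA
```

Primers designed against the converted sequence (containing T at formerly unmethylated positions) versus the unconverted sequence (retaining C) give the two amplifications that MSP compares.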
Primer modifications Adjustments to the synthetic oligonucleotides used as primers in PCR are a rich source of modification: Normally PCR primers are chosen from an invariant part of the genome, and might be used to amplify a polymorphic area between them. In allele-specific PCR the opposite is done. At least one of the primers is chosen from a polymorphic area, with the mutations located at (or near) its 3'-end. Under stringent conditions, a mismatched primer will not initiate replication, whereas a matched primer will. The appearance of an amplification product therefore indicates the genotype. (For more information, see SNP genotyping.) InterSequence-Specific PCR (or ISSR-PCR) is a method for DNA fingerprinting that uses primers selected from segments repeated throughout a genome to produce a unique fingerprint of amplified product lengths. The use of primers from a commonly repeated segment is called Alu-PCR, and can help amplify sequences adjacent to (or between) these repeats. Primers can also be designed to be 'degenerate' – able to initiate replication from a large number of target locations. Whole genome amplification (or WGA) is a group of procedures that allow amplification to occur at many locations in an unknown genome, which may be available only in small quantities. Other techniques use degenerate primers that are synthesized using multiple nucleotides at particular positions (the polymerase 'chooses' the correctly matched primers). Also, the primers can be synthesized with the nucleoside analog inosine, which hybridizes to three of the four normal bases. A similar technique can force PCR to perform site-directed mutagenesis (see also overlap extension polymerase chain reaction). Normally the primers used in PCR are designed to be fully complementary to the target. However, the polymerase is tolerant to mismatches away from the 3' end. Tailed primers include non-complementary sequences at their 5' ends. A common procedure is the use of linker primers, which ultimately place restriction sites at the ends of the PCR products, facilitating their later insertion into cloning vectors. An extension of the 'colony-PCR' method (above) is the use of vector primers. Target DNA fragments (or cDNA) are first inserted into a cloning vector, and a single set of primers is designed for the areas of the vector flanking the insertion site. Amplification occurs for whatever DNA has been inserted. PCR can easily be modified to produce a labeled product for subsequent use as a hybridization probe. One or both primers might be used in PCR with a radioactive or fluorescent label already attached, or labels might be added after amplification. These labeling methods can be combined with 'asymmetric PCR' (above) to produce effective hybridization probes. RNase H-dependent PCR (rhPCR) can reduce primer-dimer formation and increase the number of assays in multiplex PCR. The method utilizes primers with a cleavable block on the 3' end that is removed by the action of a thermostable RNase HII enzyme. DNA Polymerases There are several DNA polymerases that are used in PCR. The Klenow fragment, derived from the original DNA Polymerase I from E. coli, was the first enzyme used in PCR. Because of its lack of stability at high temperature, it needs to be replenished during each cycle, and therefore is not commonly used in PCR. The bacteriophage T4 DNA polymerase (family A) was also initially used in PCR. It has a higher fidelity of replication than the Klenow fragment, but is also destroyed by heat. 
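As a rough illustration of the 'degenerate' primers discussed in the primer-modification paragraphs above, the following sketch expands a degenerate primer written with IUPAC ambiguity codes into the mixture of concrete sequences it represents; the example primer is hypothetical.

```python
from itertools import product

IUPAC = {
    "A": "A", "C": "C", "G": "G", "T": "T",
    "R": "AG", "Y": "CT", "S": "CG", "W": "AT",
    "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
    "H": "ACT", "V": "ACG", "N": "ACGT",
}

def expand_degenerate(primer):
    """Yield every concrete sequence encoded by a degenerate primer."""
    choices = [IUPAC[base] for base in primer.upper()]
    for combo in product(*choices):
        yield "".join(combo)

def degeneracy(primer):
    """Number of distinct concrete primers in the mixture."""
    n = 1
    for base in primer.upper():
        n *= len(IUPAC[base])
    return n

if __name__ == "__main__":
    primer = "GARTWYGG"                  # hypothetical degenerate primer
    print(degeneracy(primer))            # 2 * 2 * 2 = 8 concrete sequences
    for seq in expand_degenerate(primer):
        print(seq)
```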
T7 DNA polymerase (family B) has similar properties and purposes. It has been applied to site-directed mutagenesis and Sanger sequencing. Taq polymerase, the DNA Polymerase I from Thermus aquaticus, was the first thermostable polymerase used in PCR, and is still the one most commonly used. The enzyme can be isolated from its native source, or from its cloned gene expressed in E. coli. A 61 kDa truncated form lacking 5'-3' exonuclease activity is known as the Stoffel fragment, and is expressed in E. coli. The lack of exonuclease activity may allow it to amplify longer targets than the native enzyme. It has been commercialized as AmpliTaq and Klentaq. A variant designed for hot-start PCR called the "Faststart polymerase" has also been produced. It requires strong heat activation, thereby avoiding non-specific amplification due to polymerase activity at low temperature. Many other variants have been created. Other Thermus polymerases, such as Tth polymerase I () from Thermus thermophilus, have seen some use. Tth has reverse transcriptase activity in the presence of Mn2+ ions, allowing PCR amplification from RNA targets. The archaeal genus Pyrococcus has proven a rich source of thermostable polymerases with proofreading activity. Pfu DNA polymerase, isolated from P. furiosus, shows a 5-fold decrease in the error rate of replication compared to Taq. Since errors increase as PCR progresses, Pfu is the preferred polymerase when products are to be individually cloned for sequencing or expression. Other, lesser-used polymerases from this genus include Pwo () from Pyrococcus woesei, Pfx from an unnamed species, and "Deep Vent" polymerase () from strain GB-D. Vent or Tli polymerase is an extremely thermostable DNA polymerase isolated from Thermococcus litoralis. The polymerase from Thermococcus fumicolans (Tfu) has also been commercialized. Mechanism modifications Sometimes even the basic mechanism of PCR can be modified. Unlike normal PCR, inverse PCR allows amplification and sequencing of DNA that surrounds a known sequence. It involves initially subjecting the target DNA to a series of restriction enzyme digestions, and then circularizing the resulting fragments by self-ligation. Primers are designed to be extended outward from the known segment, resulting in amplification of the rest of the circle. This is especially useful in identifying sequences to either side of various genomic inserts. Similarly, thermal asymmetric interlaced PCR (or TAIL-PCR) is used to isolate unknown sequences flanking a known area of the genome. Within the known sequence, TAIL-PCR uses a nested pair of primers with differing annealing temperatures. A 'degenerate' primer is used to amplify in the other direction from the unknown sequence. Isothermal amplification methods Some DNA amplification protocols have been developed that may be used as alternatives to PCR. They are isothermal, meaning that they are run at a constant temperature. Helicase-dependent amplification (HDA) is similar to traditional PCR, but uses a constant temperature rather than cycling through denaturation and annealing/extension steps. DNA helicase, an enzyme that unwinds DNA, is used in place of thermal denaturation. Loop-mediated isothermal amplification is a similar idea, but done with a strand-displacing polymerase. Nicking enzyme amplification reaction (NEAR) and its cousin strand displacement amplification (SDA) are isothermal, replicating DNA at a constant temperature using a polymerase and a nicking enzyme. 
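The point about proofreading polymerases such as Pfu can be put in rough numbers. The sketch below uses illustrative, assumed error rates (not measured values) to show how the expected number of errors carried by a cloned product scales with amplicon length and the number of doublings, and why a lower per-base error rate matters most for long products.

```python
from math import exp

def expected_errors(error_rate, amplicon_length_bp, doublings):
    """Rough expected number of misincorporations in a final product molecule."""
    return error_rate * amplicon_length_bp * doublings

def fraction_error_free(error_rate, amplicon_length_bp, doublings):
    """Poisson estimate of the fraction of product molecules with no errors."""
    return exp(-expected_errors(error_rate, amplicon_length_bp, doublings))

if __name__ == "__main__":
    length = 1_000            # hypothetical 1 kb amplicon
    doublings = 25            # assumed effective doublings in a run
    taq_like = 1e-4           # assumed errors per base per doubling (illustrative)
    pfu_like = taq_like / 5   # the text cites a ~5-fold lower error rate for Pfu

    for name, rate in [("Taq-like", taq_like), ("Pfu-like", pfu_like)]:
        print(f"{name}: expected errors = "
              f"{expected_errors(rate, length, doublings):.2f}, "
              f"error-free fraction = {fraction_error_free(rate, length, doublings):.2f}")
```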
Recombinase Polymerase Amplification (RPA) uses a recombinase to specifically pair primers with double-stranded DNA on the basis of homology, thus directing DNA synthesis from defined DNA sequences present in the sample. Presence of the target sequence initiates DNA amplification, and no thermal or chemical melting of DNA is required. The reaction progresses rapidly and results in specific DNA amplification from just a few target copies to detectable levels typically within 5–10 minutes. The entire reaction system is stable as a dried formulation and does not need refrigeration. RPA can be used to replace PCR in a variety of laboratory applications and users can design their own assays. Other types of isothermal amplification include whole genome amplification (WGA), Nucleic acid sequence-based amplification (NASBA), and transcription-mediated amplification (TMA). See also Vectorette PCR References External links PCR Applications Manual (from Roche Diagnostics). The Reference in qPCR -- an Academic & Industrial Information Platform www.eConferences.de streaming portal -- Amplify your knowledge in qPCR, dPCR and NGS! Polymerase chain reaction
Variants of PCR
[ "Chemistry", "Biology" ]
4,148
[ "Biochemistry methods", "Genetics techniques", "Polymerase chain reaction" ]
17,016,531
https://en.wikipedia.org/wiki/Hagen%E2%80%93Poiseuille%20equation
In non-ideal fluid dynamics, the Hagen–Poiseuille equation, also known as the Hagen–Poiseuille law, Poiseuille law or Poiseuille equation, is a physical law that gives the pressure drop in an incompressible and Newtonian fluid in laminar flow flowing through a long cylindrical pipe of constant cross section. It can be successfully applied to air flow in lung alveoli, or the flow through a drinking straw or through a hypodermic needle. It was experimentally derived independently by Jean Léonard Marie Poiseuille in 1838 and Gotthilf Heinrich Ludwig Hagen, and published by Hagen in 1839 and then by Poiseuille in 1840–41 and 1846. The theoretical justification of the Poiseuille law was given by George Stokes in 1845. The assumptions of the equation are that the fluid is incompressible and Newtonian; the flow is laminar through a pipe of constant circular cross-section that is substantially longer than its diameter; and there is no acceleration of fluid in the pipe. For velocities and pipe diameters above a threshold, actual fluid flow is not laminar but turbulent, leading to larger pressure drops than calculated by the Hagen–Poiseuille equation. Poiseuille's equation describes the pressure drop due to the viscosity of the fluid; other types of pressure drops may still occur in a fluid. For example, the pressure needed to drive a viscous fluid up against gravity contains both the contribution required by Poiseuille's law and the contribution required by Bernoulli's equation, such that any point in the flow has a pressure greater than zero (otherwise no flow would happen). Another example: when blood flows into a narrower constriction, its speed will be greater than in a larger diameter (due to continuity of volumetric flow rate), and its pressure will be lower than in a larger diameter (due to Bernoulli's equation). However, the viscosity of blood will cause additional pressure drop along the direction of flow, which is proportional to the length traveled (as per Poiseuille's law). Both effects contribute to the actual pressure drop. Equation In standard fluid-kinetics notation: where is the pressure difference between the two ends, is the length of pipe, is the dynamic viscosity, is the volumetric flow rate, is the pipe radius, is the cross-sectional area of the pipe. The equation does not hold close to the pipe entrance. The equation also fails in the limit of low viscosity, a wide pipe, and/or a short pipe. Low viscosity or a wide pipe may result in turbulent flow, making it necessary to use more complex models, such as the Darcy–Weisbach equation. The ratio of length to radius of a pipe should be greater than 1/48 of the Reynolds number for the Hagen–Poiseuille law to be valid. If the pipe is too short, the Hagen–Poiseuille equation may result in unphysically high flow rates; the flow is bounded, under less restrictive conditions, by Bernoulli's principle, because it is impossible to have negative (absolute) pressure (not to be confused with gauge pressure) in an incompressible flow. Relation to the Darcy–Weisbach equation Normally, Hagen–Poiseuille flow implies not just the relation for the pressure drop, above, but also the full solution for the laminar flow profile, which is parabolic. However, the result for the pressure drop can be extended to turbulent flow by inferring an effective turbulent viscosity in the case of turbulent flow, even though the flow profile in turbulent flow is strictly speaking not actually parabolic. 
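A small numerical sketch of the relation stated in the Equation section above may help: it evaluates Q = pi * R^4 * dP / (8 * mu * L) together with a crude Reynolds-number check of the laminar-flow assumption. The fluid properties and pipe dimensions are illustrative, water-like values, not figures from the article.

```python
from math import pi

def poiseuille_flow(delta_p, radius, length, mu):
    """Volumetric flow rate (m^3/s) for laminar flow in a circular pipe."""
    return pi * radius**4 * delta_p / (8.0 * mu * length)

def reynolds_number(flow_rate, radius, rho, mu):
    """Re based on mean velocity and pipe diameter."""
    mean_velocity = flow_rate / (pi * radius**2)
    return rho * mean_velocity * (2.0 * radius) / mu

if __name__ == "__main__":
    mu, rho = 1.0e-3, 1000.0        # water-like viscosity (Pa s) and density (kg/m^3)
    R, L, dP = 0.5e-3, 0.1, 2.0e3   # 0.5 mm radius, 10 cm pipe, 2 kPa pressure drop

    Q = poiseuille_flow(dP, R, L, mu)
    Re = reynolds_number(Q, R, rho, mu)
    print(f"Q  = {Q:.3e} m^3/s")
    print(f"Re = {Re:.0f} (laminar assumption reasonable well below ~2000)")
```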
In both cases, laminar or turbulent, the pressure drop is related to the stress at the wall, which determines the so-called friction factor. The wall stress can be determined phenomenologically by the Darcy–Weisbach equation in the field of hydraulics, given a relationship for the friction factor in terms of the Reynolds number. In the case of laminar flow, for a circular cross section: where is the Reynolds number, is the fluid density, and is the mean flow velocity, which is half the maximal flow velocity in the case of laminar flow. It proves more useful to define the Reynolds number in terms of the mean flow velocity because this quantity remains well defined even in the case of turbulent flow, whereas the maximal flow velocity may not be, or in any case, it may be difficult to infer. In this form the law approximates the Darcy friction factor, the energy (head) loss factor, friction loss factor or Darcy (friction) factor in the laminar flow at very low velocities in cylindrical tube. The theoretical derivation of a slightly different form of the law was made independently by Wiedman in 1856 and Neumann and E. Hagenbach in 1858 (1859, 1860). Hagenbach was the first who called this law Poiseuille's law. The law is also very important in hemorheology and hemodynamics, both fields of physiology. Poiseuille's law was later in 1891 extended to turbulent flow by L. R. Wilberforce, based on Hagenbach's work. Derivation The Hagen–Poiseuille equation can be derived from the Navier–Stokes equations. The laminar flow through a pipe of uniform (circular) cross-section is known as Hagen–Poiseuille flow. The equations governing the Hagen–Poiseuille flow can be derived directly from the Navier–Stokes momentum equations in 3D cylindrical coordinates by making the following set of assumptions: The flow is steady ( ). The radial and azimuthal components of the fluid velocity are zero ( ). The flow is axisymmetric ( ). The flow is fully developed ( ). Here however, this can be proved via mass conservation, and the above assumptions. Then the angular equation in the momentum equations and the continuity equation are identically satisfied. The radial momentum equation reduces to , i.e., the pressure is a function of the axial coordinate only. For brevity, use instead of . The axial momentum equation reduces to where is the dynamic viscosity of the fluid. In the above equation, the left-hand side is only a function of and the right-hand side term is only a function of , implying that both terms must be the same constant. Evaluating this constant is straightforward. If we take the length of the pipe to be and denote the pressure difference between the two ends of the pipe by (high pressure minus low pressure), then the constant is simply defined such that is positive. The solution is Since needs to be finite at , . The no slip boundary condition at the pipe wall requires that at (radius of the pipe), which yields . Thus we have finally the following parabolic velocity profile: The maximum velocity occurs at the pipe centerline (), . The average velocity can be obtained by integrating over the pipe cross section, The easily measurable quantity in experiments is the volumetric flow rate . Rearrangement of this gives the Hagen–Poiseuille equation Although more lengthy than directly using the Navier–Stokes equations, an alternative method of deriving the Hagen–Poiseuille equation is as follows. Liquid flow through a pipe Assume the liquid exhibits laminar flow. 
Laminar flow in a round pipe prescribes that there are a bunch of circular layers (lamina) of liquid, each having a velocity determined only by their radial distance from the center of the tube. Also assume the center is moving fastest while the liquid touching the walls of the tube is stationary (due to the no-slip condition). To figure out the motion of the liquid, all forces acting on each lamina must be known: The pressure force pushing the liquid through the tube is the change in pressure multiplied by the area: . This force is in the direction of the motion of the liquid. The negative sign comes from the conventional way we define . Viscosity effects will pull from the faster lamina immediately closer to the center of the tube. Viscosity effects will drag from the slower lamina immediately closer to the walls of the tube. Viscosity When two layers of liquid in contact with each other move at different speeds, there will be a shear force between them. This force is proportional to the area of contact , the velocity gradient perpendicular to the direction of flow , and a proportionality constant (viscosity) and is given by The negative sign is in there because we are concerned with the faster moving liquid (top in figure), which is being slowed by the slower liquid (bottom in figure). By Newton's third law of motion, the force on the slower liquid is equal and opposite (no negative sign) to the force on the faster liquid. This equation assumes that the area of contact is so large that we can ignore any effects from the edges and that the fluids behave as Newtonian fluids. Faster lamina Assume that we are figuring out the force on the lamina with radius . From the equation above, we need to know the area of contact and the velocity gradient. Think of the lamina as a ring of radius , thickness , and length . The area of contact between the lamina and the faster one is simply the surface area of the cylinder: . We don't know the exact form for the velocity of the liquid within the tube yet, but we do know (from our assumption above) that it is dependent on the radius. Therefore, the velocity gradient is the change of the velocity with respect to the change in the radius at the intersection of these two laminae. That intersection is at a radius of . So, considering that this force will be positive with respect to the movement of the liquid (but the derivative of the velocity is negative), the final form of the equation becomes where the vertical bar and subscript following the derivative indicates that it should be taken at a radius of . Slower lamina Next let's find the force of drag from the slower lamina. We need to calculate the same values that we did for the force from the faster lamina. In this case, the area of contact is at instead of . Also, we need to remember that this force opposes the direction of movement of the liquid and will therefore be negative (and that the derivative of the velocity is negative). Putting it all together To find the solution for the flow of a laminar layer through a tube, we need to make one last assumption. There is no acceleration of liquid in the pipe, and by Newton's first law, there is no net force. If there is no net force then we can add all of the forces together to get zero or First, to get everything happening at the same point, use the first two terms of a Taylor series expansion of the velocity gradient: The expression is valid for all laminae. 
Grouping like terms and dropping the vertical bar since all derivatives are assumed to be at radius , Finally, put this expression in the form of a differential equation, dropping the term quadratic in . The above equation is the same as the one obtained from the Navier–Stokes equations and the derivation from here on follows as before. Startup of Poiseuille flow in a pipe When a constant pressure gradient is applied between two ends of a long pipe, the flow will not immediately obtain Poiseuille profile, rather it develops through time and reaches the Poiseuille profile at steady state. The Navier–Stokes equations reduce to with initial and boundary conditions, The velocity distribution is given by where is the Bessel function of the first kind of order zero and are the positive roots of this function and is the Bessel function of the first kind of order one. As , Poiseuille solution is recovered. Poiseuille flow in an annular section If is the inner cylinder radii and is the outer cylinder radii, with constant applied pressure gradient between the two ends , the velocity distribution and the volume flux through the annular pipe are When , , the original problem is recovered. Poiseuille flow in a pipe with an oscillating pressure gradient Flow through pipes with an oscillating pressure gradient finds applications in blood flow through large arteries. The imposed pressure gradient is given by where , and are constants and is the frequency. The velocity field is given by where where and are the Kelvin functions and . Plane Poiseuille flow Plane Poiseuille flow is flow created between two infinitely long parallel plates, separated by a distance with a constant pressure gradient is applied in the direction of flow. The flow is essentially unidirectional because of infinite length. The Navier–Stokes equations reduce to with no-slip condition on both walls Therefore, the velocity distribution and the volume flow rate per unit length are Poiseuille flow through some non-circular cross-sections Joseph Boussinesq derived the velocity profile and volume flow rate in 1868 for rectangular channel and tubes of equilateral triangular cross-section and for elliptical cross-section. Joseph Proudman derived the same for isosceles triangles in 1914. Let be the constant pressure gradient acting in direction parallel to the motion. The velocity and the volume flow rate in a rectangular channel of height and width are The velocity and the volume flow rate of tube with equilateral triangular cross-section of side length are The velocity and the volume flow rate in the right-angled isosceles triangle , are The velocity distribution for tubes of elliptical cross-section with semiaxes and is Here, when , Poiseuille flow for circular pipe is recovered and when , plane Poiseuille flow is recovered. More explicit solutions with cross-sections such as snail-shaped sections, sections having the shape of a notch circle following a semicircle, annular sections between homofocal ellipses, annular sections between non-concentric circles are also available, as reviewed by . Poiseuille flow through arbitrary cross-section The flow through arbitrary cross-section satisfies the condition that on the walls. The governing equation reduces to If we introduce a new dependent variable as then it is easy to see that the problem reduces to that integrating a Laplace equation satisfying the condition on the wall. 
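As a quick consistency check on the parabolic profile derived above, the following sketch evaluates u(r) = dP/(4*mu*L) * (R^2 - r^2), confirms numerically that the mean velocity is half the centreline velocity, and recovers the Hagen–Poiseuille flow rate by integrating over the cross-section; the parameter values are illustrative.

```python
from math import pi

mu, L, dP, R = 1.0e-3, 0.1, 2.0e3, 0.5e-3   # water-like fluid, 10 cm pipe, 2 kPa

def u(r):
    """Axial velocity at radius r (parabolic Poiseuille profile)."""
    return dP / (4.0 * mu * L) * (R**2 - r**2)

# Integrate Q = int_0^R u(r) * 2*pi*r dr with a simple midpoint rule.
n = 10_000
dr = R / n
Q_numeric = sum(u((i + 0.5) * dr) * 2.0 * pi * ((i + 0.5) * dr) * dr for i in range(n))
Q_formula = pi * R**4 * dP / (8.0 * mu * L)

print(f"u_max              = {u(0.0):.4f} m/s")
print(f"u_mean (= Q/A)     = {Q_numeric / (pi * R**2):.4f} m/s  (half of u_max)")
print(f"Q numeric          = {Q_numeric:.4e} m^3/s")
print(f"Q Hagen-Poiseuille = {Q_formula:.4e} m^3/s")
```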
Poiseuille's equation for an ideal isothermal gas For a compressible fluid in a tube the volumetric flow rate and the axial velocity are not constant along the tube; but the mass flow rate is constant along the tube length. The volumetric flow rate is usually expressed at the outlet pressure. As fluid is compressed or expanded, work is done and the fluid is heated or cooled. This means that the flow rate depends on the heat transfer to and from the fluid. For an ideal gas in the isothermal case, where the temperature of the fluid is permitted to equilibrate with its surroundings, an approximate relation for the pressure drop can be derived. Using ideal gas equation of state for constant temperature process (i.e., is constant) and the conservation of mass flow rate (i.e., is constant), the relation can be obtained. Over a short section of the pipe, the gas flowing through the pipe can be assumed to be incompressible so that Poiseuille law can be used locally, Here we assumed the local pressure gradient is not too great to have any compressibility effects. Though locally we ignored the effects of pressure variation due to density variation, over long distances these effects are taken into account. Since is independent of pressure, the above equation can be integrated over the length to give Hence the volumetric flow rate at the pipe outlet is given by This equation can be seen as Poiseuille's law with an extra correction factor expressing the average pressure relative to the outlet pressure. Electrical circuits analogy Electricity was originally understood to be a kind of fluid. This hydraulic analogy is still conceptually useful for understanding circuits. This analogy is also used to study the frequency response of fluid-mechanical networks using circuit tools, in which case the fluid network is termed a hydraulic circuit. Poiseuille's law corresponds to Ohm's law for electrical circuits, . Since the net force acting on the fluid is equal to , where , i.e. , then from Poiseuille's law, it follows that . For electrical circuits, let be the concentration of free charged particles (in m−3) and let be the charge of each particle (in coulombs). (For electrons, .) Then is the number of particles in the volume , and is their total charge. This is the charge that flows through the cross section per unit time, i.e. the current . Therefore, . Consequently, , and But , where is the total charge in the volume of the tube. The volume of the tube is equal to , so the number of charged particles in this volume is equal to , and their total charge is . Since the voltage , it follows then This is exactly Ohm's law, where the resistance is described by the formula . It follows that the resistance is proportional to the length of the resistor, which is true. However, it also follows that the resistance is inversely proportional to the fourth power of the radius , i.e. the resistance is inversely proportional to the second power of the cross section area of the resistor, which is different from the electrical formula. The electrical relation for the resistance is where is the resistivity; i.e. the resistance is inversely proportional to the cross section area of the resistor. The reason why Poiseuille's law leads to a different formula for the resistance is the difference between the fluid flow and the electric current. Electron gas is inviscid, so its velocity does not depend on the distance to the walls of the conductor. 
The resistance is due to the interaction between the flowing electrons and the atoms of the conductor. Therefore, Poiseuille's law and the hydraulic analogy are useful only within certain limits when applied to electricity. Both Ohm's law and Poiseuille's law illustrate transport phenomena. Medical applications – intravenous access and fluid delivery The Hagen–Poiseuille equation is useful in determining the vascular resistance and hence flow rate of intravenous (IV) fluids that may be achieved using various sizes of peripheral and central cannulas. The equation states that flow rate is proportional to the radius to the fourth power, meaning that a small increase in the internal diameter of the cannula yields a significant increase in flow rate of IV fluids. The radius of IV cannulas is typically measured in "gauge", which is inversely proportional to the radius. Peripheral IV cannulas are typically available as (from large to small) 14G, 16G, 18G, 20G, 22G, 26G. As an example, assuming cannula lengths are equal, the flow of a 14G cannula is 1.73 times that of a 16G cannula, and 4.16 times that of a 20G cannula. It also states that flow is inversely proportional to length, meaning that longer lines have lower flow rates. This is important to remember as in an emergency, many clinicians favor shorter, larger catheters compared to longer, narrower catheters. While of less clinical importance, an increased change in pressure () — such as by pressurizing the bag of fluid, squeezing the bag, or hanging the bag higher (relative to the level of the cannula) — can be used to speed up flow rate. It is also useful to understand that viscous fluids will flow slower (e.g. in blood transfusion). See also Couette flow Darcy's law Pulse Wave Hydraulic circuit Cited references References . . . External links Poiseuille's law for power-law non-Newtonian fluid Poiseuille's law in a slightly tapered tube Hagen–Poiseuille equation calculator Eponymous laws of physics Equations of fluid dynamics Mathematics in medicine
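The fourth-power dependence invoked in the medical section above can be sketched as follows. The inner diameters used here are rough, assumed nominal values (real cannulas vary by manufacturer and length), so the computed ratios illustrate only the r^4 scaling and will not reproduce the clinical figures quoted above, which come from measured flow rates of catheters that also differ in length.

```python
# gauge: assumed inner diameter in millimetres (hypothetical nominal values)
CANNULA_ID_MM = {"14G": 1.6, "16G": 1.3, "18G": 1.0, "20G": 0.8, "22G": 0.6}

def relative_flow(inner_diameter_mm, length_mm):
    """Relative laminar flow, proportional to r**4 / L (Hagen-Poiseuille)."""
    radius = inner_diameter_mm / 2.0
    return radius**4 / length_mm

if __name__ == "__main__":
    length = 45.0  # assume, for illustration, that all cannulas share one length (mm)
    base = relative_flow(CANNULA_ID_MM["20G"], length)
    for gauge, id_mm in CANNULA_ID_MM.items():
        ratio = relative_flow(id_mm, length) / base
        print(f"{gauge}: ~{ratio:.1f}x the flow of a 20G cannula of equal length")
```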
Hagen–Poiseuille equation
[ "Physics", "Chemistry", "Mathematics" ]
4,148
[ "Equations of fluid dynamics", "Equations of physics", "Applied mathematics", "Mathematics in medicine", "Fluid dynamics" ]
17,017,119
https://en.wikipedia.org/wiki/Boundary%20current
Boundary currents are ocean currents with dynamics determined by the presence of a coastline, and fall into two distinct categories: western boundary currents and eastern boundary currents. Eastern boundary currents Eastern boundary currents are relatively shallow, broad and slow-flowing. They are found on the eastern side of oceanic basins (adjacent to the western coasts of continents). Subtropical eastern boundary currents flow equatorward, transporting cold water from higher latitudes to lower latitudes; examples include the Benguela Current, the Canary Current, the Humboldt (Peru) Current, and the California Current. Coastal upwelling often brings nutrient-rich water into eastern boundary current regions, making them productive areas of the ocean. Western boundary currents Western boundary currents may themselves be divided into sub-tropical or low-latitude western boundary currents. Sub-tropical western boundary currents are warm, deep, narrow, and fast-flowing currents that form on the west side of ocean basins due to western intensification. They carry warm water from the tropics poleward. Examples include the Gulf Stream, the Agulhas Current, and the Kuroshio Current. Low-latitude western boundary currents are similar to sub-tropical western boundary currents but carry cool water from the subtropics equatorward. Examples include the Mindanao Current and the North Brazil Current. Western intensification Western intensification applies to the western arm of an oceanic current, particularly a large gyre in such a basin. The trade winds blow westward in the tropics. The westerlies blow eastward at mid-latitudes. This applies a stress to the ocean surface with a curl in north and south hemispheres, causing Sverdrup transport equatorward (toward the tropics). Because of conservation of mass and of potential vorticity, that transport is balanced by a narrow, intense poleward current, which flows along the western coast, allowing the vorticity introduced by coastal friction to balance the vorticity input of the wind. The reverse effect applies to the polar gyres – the sign of the wind stress curl and the direction of the resulting currents are reversed. The principal west-side currents (such as the Gulf Stream of the North Atlantic Ocean) are stronger than those opposite (such as the California Current of the North Pacific Ocean). The mechanics were made clear by the American oceanographer Henry Stommel. In 1948, Stommel published his key paper in Transactions, American Geophysical Union: "The Westward Intensification of Wind-Driven Ocean Currents", in which he used a simple, homogeneous, rectangular ocean model to examine the streamlines and surface height contours for an ocean at a non-rotating frame, an ocean characterized by a constant Coriolis parameter and finally, a real-case ocean basin with a latitudinally-varying Coriolis parameter. In this simple modeling the principal factors that were accounted for influencing the oceanic circulation were: surface wind stress bottom friction a variable surface height leading to horizontal pressure gradients the Coriolis effect. In this, Stommel assumed an ocean of constant density and depth seeing ocean currents; he also introduced a linearized, frictional term to account for the dissipative effects that prevent the real ocean from accelerating. He starts, thus, from the steady-state momentum and continuity equations: Here is the strength of the Coriolis force, is the bottom-friction coefficient, is gravity, and is the wind forcing. 
The wind is blowing towards the west at and towards the east at . Acting on (1) with and on (2) with , subtracting, and then using (3), gives If we introduce a Stream function and linearize by assuming that , equation (4) reduces to Here and The solutions of (5) with boundary condition that be constant on the coastlines, and for different values of , emphasize the role of the variation of the Coriolis parameter with latitude in inciting the strengthening of western boundary currents. Such currents are observed to be much faster, deeper, narrower and warmer than their eastern counterparts. For a non-rotating state (zero Coriolis parameter) and where that is a constant, ocean circulation has no preference toward intensification/acceleration near the western boundary. The streamlines exhibit a symmetric behavior in all directions, with the height contours demonstrating a nearly parallel relation to the streamlines, in a homogeneously rotating ocean. Finally, on a rotating sphere - the case where the Coriolis force is latitudinally variant, a distinct tendency for asymmetrical streamlines is found, with an intense clustering along the western coasts. Mathematically elegant figures within models of the distribution of streamlines and height contours in such an ocean if currents uniformly rotate can be found in the paper. Sverdrup balance and physics of western intensification The physics of western intensification can be understood through a mechanism that helps maintain the vortex balance along an ocean gyre. Harald Sverdrup was the first one, preceding Henry Stommel, to attempt to explain the mid-ocean vorticity balance by looking at the relationship between surface wind forcings and the mass transport within the upper ocean layer. He assumed a geostrophic interior flow, while neglecting any frictional or viscosity effects and presuming that the circulation vanishes at some depth in the ocean. This prohibited the application of his theory to the western boundary currents, since some form of dissipative effect (bottom Ekman layer) would be later shown to be necessary to predict a closed circulation for an entire ocean basin and to counteract the wind-driven flow. Sverdrup introduced a potential vorticity argument to connect the net, interior flow of the oceans to the surface wind stress and the incited planetary vorticity perturbations. For instance, Ekman convergence in the sub-tropics (related to the existence of the trade winds in the tropics and the westerlies in the mid-latitudes) was suggested to lead to a downward vertical velocity and therefore, a squashing of the water columns, which subsequently forces the ocean gyre to spin more slowly (via angular momentum conservation). This is accomplished via a decrease in planetary vorticity (since relative vorticity variations are not significant in large ocean circulations), a phenomenon attainable through an equatorially directed, interior flow that characterizes the subtropical gyre. The opposite is applicable when Ekman divergence is induced, leading to Ekman absorption (suction) and a subsequent, water column stretching and poleward return flow, a characteristic of sub-polar gyres. This return flow, as shown by Stommel, occurs in a meridional current, concentrated near the western boundary of an ocean basin. To balance the vorticity source induced by the wind stress forcing, Stommel introduced a linear frictional term in the Sverdrup equation, functioning as the vorticity sink. 
This bottom ocean, frictional drag on the horizontal flow allowed Stommel to theoretically predict a closed, basin-wide circulation, while demonstrating the west-ward intensification of wind-driven gyres and its attribution to the Coriolis variation with latitude (beta effect). Walter Munk (1950) further implemented Stommel's theory of western intensification by using a more realistic frictional term, while emphasizing "the lateral dissipation of eddy energy". In this way, not only did he reproduce Stommel's results, recreating thus the circulation of a western boundary current of an ocean gyre resembling the Gulf stream, but he also showed that sub-polar gyres should develop northward of the subtropical ones, spinning in the opposite direction. Climate change Observations indicate that the ocean warming over the subtropical western boundary currents is two-to-three times stronger than the global mean surface ocean warming. A study finds that the enhanced warming may be attributed to an intensification and poleward shift of the western boundary currents as a side-effect of the widening Hadley circulation under global warming. These warming hotspots cause severe environmental and economic problems, such as the rapid sea level rise along the East Coast of the United States, collapse of the fishery over the Gulf of Maine and Uruguay. See also References AMS glossary Professor Raphael Kudela, UCSC, lectures OCEA1 Fall 2007 Munk, W.H., On the wind-driven ocean circulation, J. Meteorol., Vol. 7, 1950 Stommel, H., "The Westward Intensification of Wind-Driven Ocean Currents", Transactions American Geophysical Union, vol. 29, 1948 Thurman, Harold V., Trujillo, Alan P., Introductory Oceanography, tenth edition. Footnotes External links biophysics.sbg.ac.at (JPG) learner.org aos.princeton.edu (PDF) Physical oceanography Ocean currents
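A minimal numerical sketch of the linear, frictional vorticity balance discussed above (a Stommel-type model, not his original closed-form solution) can make the western intensification visible: with the planetary-vorticity gradient beta switched on, the computed streamfunction piles up against the western wall, whereas with beta set to zero the gyre is symmetric. All parameter values are illustrative.

```python
import numpy as np

def stommel_gyre(nx=41, ny=41, Lx=1.0e6, Ly=1.0e6,
                 r=1.0e-6, beta=2.0e-11, curl_amp=1.0e-7, sweeps=3000):
    """Solve r*laplacian(psi) + beta*d(psi)/dx = curl(y) on a closed basin.

    psi = 0 on all coasts; curl(y) is an idealized, negative wind-stress curl.
    Gauss-Seidel iteration; all parameter values are illustrative.
    """
    dx, dy = Lx / (nx - 1), Ly / (ny - 1)
    x = np.linspace(0.0, Lx, nx)
    y = np.linspace(0.0, Ly, ny)
    curl = -curl_amp * np.sin(np.pi * y / Ly)   # subtropical-type forcing
    psi = np.zeros((ny, nx))
    denom = 2.0 * r * (1.0 / dx**2 + 1.0 / dy**2)

    for _ in range(sweeps):
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                lap_nb = ((psi[j, i + 1] + psi[j, i - 1]) / dx**2 +
                          (psi[j + 1, i] + psi[j - 1, i]) / dy**2)
                dpsi_dx = (psi[j, i + 1] - psi[j, i - 1]) / (2.0 * dx)
                psi[j, i] = (r * lap_nb + beta * dpsi_dx - curl[j]) / denom
    return x, y, psi

if __name__ == "__main__":
    x, y, psi = stommel_gyre()
    jmid = len(y) // 2
    i_peak = int(np.argmax(np.abs(psi[jmid, :])))
    print(f"gyre centre near x = {x[i_peak]/1e3:.0f} km "
          f"in a {x[-1]/1e3:.0f} km wide basin (western intensification)")
```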
Boundary current
[ "Physics", "Chemistry" ]
1,841
[ "Ocean currents", "Applied and interdisciplinary physics", "Physical oceanography", "Fluid dynamics" ]
1,771,980
https://en.wikipedia.org/wiki/Domatic%20number
In graph theory, a domatic partition of a graph is a partition of into disjoint sets , ,..., such that each Vi is a dominating set for G. The figure on the right shows a domatic partition of a graph; here the dominating set consists of the yellow vertices, consists of the green vertices, and consists of the blue vertices. The domatic number is the maximum size of a domatic partition, that is, the maximum number of disjoint dominating sets. The graph in the figure has domatic number 3. It is easy to see that the domatic number is at least 3 because we have presented a domatic partition of size 3. To see that the domatic number is at most 3, we first review a simple upper bound. Upper bounds Let be the minimum degree of the graph . The domatic number of is at most . To see this, consider a vertex of degree . Let consist of and its neighbours. We know that (1) each dominating set must contain at least one vertex in (domination), and (2) each vertex in is contained in at most one dominating set (disjointness). Therefore, there are at most disjoint dominating sets. The graph in the figure has minimum degree , and therefore its domatic number is at most 3. Hence we have shown that its domatic number is exactly 3; the figure shows a maximum-size domatic partition. Lower bounds If there is no isolated vertex in the graph (that is,  ≥ 1), then the domatic number is at least 2. To see this, note that (1) a weak 2-coloring is a domatic partition if there is no isolated vertex, and (2) any graph has a weak 2-coloring. Alternatively, (1) a maximal independent set is a dominating set, and (2) the complement of a maximal independent set is also a dominating set if there are no isolated vertices. The figure on the right shows a weak 2-coloring, which is also a domatic partition of size 2: the dark nodes are a dominating set, and the light nodes are another dominating set (the light nodes form a maximal independent set). See weak coloring for more information. Computational complexity Finding a domatic partition of size 1 is trivial: let . Finding a domatic partition of size 2 (or determining that it does not exist) is easy: check if there are isolated nodes, and if not, find a weak 2-coloring. However, finding a maximum-size domatic partition is computationally hard. Specifically, the following decision problem, known as the domatic number problem, is NP-complete: given a graph and an integer , determine whether the domatic number of is at least . Therefore, the problem of determining the domatic number of a given graph is NP-hard, and the problem of finding a maximum-size domatic partition is NP-hard as well. There is a polynomial-time approximation algorithm with a logarithmic approximation guarantee, that is, it is possible to find a domatic partition whose size is within a factor of the optimum. However, under plausible complexity-theoretic assumptions, there is no polynomial-time approximation algorithm with a sub-logarithmic approximation factor. More specifically, a polynomial-time approximation algorithm for domatic partition with the approximation factor for a constant would imply that all problems in NP can be solved in slightly super-polynomial time . Comparison with similar concepts Domatic partition Partition of vertices into disjoint dominating sets. The domatic number is the maximum number of such sets. Vertex coloring Partition of vertices into disjoint independent sets. The chromatic number is the minimum number of such sets. Clique partition Partition of vertices into disjoint cliques. 
Equal to vertex coloring in the complement graph. Edge coloring Partition of edges into disjoint matchings. The edge chromatic number is the minimum number of such sets. Let G = (U ∪ V, E) be a bipartite graph without isolated nodes; all edges are of the form {u, v} ∈ E with u ∈ U and v ∈ V. Then {U, V} is both a vertex 2-coloring and a domatic partition of size 2; the sets U and V are independent dominating sets. The chromatic number of G is exactly 2; there is no vertex 1-coloring. The domatic number of G is at least 2. It is possible that there is a larger domatic partition; for example, the complete bipartite graph Kn,n for any n ≥ 2 has domatic number n. Notes References . A1.1: GT3, p. 190. . Graph invariants NP-complete problems Computational problems in graph theory
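A brute-force sketch of the definitions above may be useful for small examples: it checks whether a vertex subset dominates a graph and finds the domatic number by exhaustive search, which is exponential and therefore only practical for tiny graphs. The example graph is the complete bipartite graph K3,3 mentioned above, whose domatic number is 3.

```python
from itertools import product

def is_dominating(graph, subset):
    """graph maps each vertex to its set of neighbours; subset dominates the
    graph if every vertex is in subset or adjacent to a member of subset."""
    return all(v in subset or graph[v] & subset for v in graph)

def domatic_number(graph):
    """Exhaustively search for the largest partition into dominating sets."""
    vertices = list(graph)
    best = 1                      # the single part {V} is always a domatic partition
    # Upper bound from the discussion above: minimum degree + 1.
    max_k = min(len(vertices), min(len(nbrs) for nbrs in graph.values()) + 1)
    for k in range(2, max_k + 1):
        for labels in product(range(k), repeat=len(vertices)):
            parts = [set() for _ in range(k)]
            for v, lab in zip(vertices, labels):
                parts[lab].add(v)
            if all(part and is_dominating(graph, part) for part in parts):
                best = k
                break             # found a partition of size k; try a larger k
    return best

if __name__ == "__main__":
    # Hypothetical example: complete bipartite graph K_{3,3}.
    K33 = {v: set() for v in "abcxyz"}
    for u in "abc":
        for w in "xyz":
            K33[u].add(w)
            K33[w].add(u)
    print(domatic_number(K33))    # 3
```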
Domatic number
[ "Mathematics" ]
981
[ "Computational problems in graph theory", "Computational mathematics", "Graph theory", "Computational problems", "Graph invariants", "Mathematical relations", "Mathematical problems", "NP-complete problems" ]
1,772,649
https://en.wikipedia.org/wiki/National%20Microbiology%20Laboratory
The National Microbiology Laboratory (NML) is part of the Public Health Agency of Canada (PHAC), the agency of the Government of Canada that is responsible for public health, health emergency preparedness and response, and infectious and chronic disease control and prevention. NML is located in several sites across the country including the Canadian Science Centre for Human and Animal Health (CSCHAH) in Winnipeg, Manitoba. NML has a second site in Winnipeg, the JC Wilt Infectious Disease Research Centre on Logan Avenue which serves as a hub for HIV research and diagnostics in Canada. The three other primary sites include locations in Guelph, St. Hyacinthe and Lethbridge. The CSCHAH is a biosafety level 4 infectious disease laboratory facility, the only one of its kind in Canada. With maximum containment, scientists are able to work with pathogens including Ebola, Marburg and Lassa fever. The NML's CSCHAH is also home to the Canadian Food Inspection Agency's National Centre for Foreign Animal Disease, and thus the scientists at the NML share their premises with animal virologists. History The National Microbiology Laboratory was preceded by the Bureau of Microbiology which was originally part of the Laboratory Centre for Disease Control of Health Canada in Ottawa. In the 1980s, Health Canada identified both the need to replace existing laboratory space that was reaching the end of its lifespan and the need for Containment Level 4 space in the country. Around the same time, Agriculture Canada (prior to the Canadian Food Inspection Agency being formed) also identified the need for new laboratory space including high-containment. Numerous benefits were identified for housing both laboratories in one building and Winnipeg was chosen as the site; an announcement to that effect was made in October 1987. After some debate, the spot chosen for the site was a city works yard near to the Health Sciences Centre (a major teaching hospital), the University of Manitoba's medical school, and other life science organizations. Construction of the facility that came to be named the Canadian Science Centre for Human and Animal Health (often referred to locally as "the Virology Lab") began with an official ground-breaking in December 1992. The joint venture design team of Toronto-based Dunlop Architects and Winnipeg-based Smith Carter Architects and Engineers visited 30 laboratories to seek best practices in containment and design. Construction was largely complete by the end of 1997 with the first programs beginning in the spring of 1998 and all laboratories coming on line after that. The official opening took place in June 1999. Following the SARS outbreak in 2003, the Public Health Agency of Canada was formed in 2004 to provide a stronger focus on public health and emergency preparedness in the country. It is a member of the federal Health Portfolio (along with Health Canada, the Canadian Institute of Health Research, and other organizations). By 2018 the NML was beginning to use genomics and advanced computing to study microbes at the genetic level in so-called "dry lab" facilities, as opposed to "wet labs" with Petri dishes and cell cultures. 
In January 2021, the NML (PHAC) fired Chinese nationals Xiangguo Qiu and her husband Keding Cheng from their jobs as BSL4 infectious disease researchers; the pair had previously, in July 2019, been dismissed from their unpaid positions at the University of Winnipeg after the RCMP was called in over their role in a trans-Pacific shipment of BSL4-grade virus materials back to their homeland. Containment Human pathogens are classified into risk groups. The criteria used to determine the group include the level of risk to the health of a person or to public health, the likelihood that the human pathogen will actually cause disease in a human, and whether treatment and preventative measures are available. The level of containment needed for pathogens from specific risk groups can also depend on the type of work being done; as an example, culturing (or growing) a virus or bacterium requires higher containment than some diagnostic tests. NML operates Containment Level 2, 3 and 4 laboratories. In human health infectious disease laboratories, the design and construction of the facility, the engineering controls, and the training and techniques of staff are all focused on protecting lab workers, containing the pathogens, and preventing contamination of materials to ensure accurate diagnosis and research. All of these factors vary depending on the level of containment. The vast majority (87.7%) of NML's lab space is Containment Level 2 (CL2). This is the same type of laboratory found in doctors' offices, hospitals and universities. In a Level 2 lab, work with infectious materials is done inside a biosafety cabinet (BSC) and appropriate personal protective gear is worn relative to activities (gloves, eye protection, lab coats, gowns, etc.). Risk Group 2 pathogens worked with in Level 2 can cause disease but are not a serious hazard, and they often circulate in the community. Environmental contamination must be minimized by the use of handwashing sinks and decontamination facilities such as autoclaves. Examples include E. coli, whooping cough, and seasonal influenza. NML also has Containment Level 3 (CL3) laboratories (8.6% of lab space). Risk Group 3 pathogens may be transmitted by the airborne route, often need only a low infectious dose to produce effects, and can cause serious or life-threatening disease. CL3 emphasizes additional primary and secondary barriers to minimize the release of infectious organisms into the immediate laboratory and the environment. Additional features to prevent transmission of CL3 organisms are appropriate respiratory protection, HEPA filtration of exhausted laboratory air, and strictly controlled laboratory access. Examples include tuberculosis, West Nile virus, and pandemic H1N1 influenza. A small percentage of laboratory space (3.6%) is devoted to Containment Level 4 (CL4) at NML. Risk Group 4 agents have the potential for aerosol transmission, often have a low infectious dose and produce very serious and often fatal disease, and no licensed treatment or vaccine is available. This level of containment represents an isolated unit independent of other areas. CL4 emphasizes maximum containment of the infectious agent by completely sealing the facility perimeter with confirmation by negative pressure testing, isolation of the researcher from the pathogen by an enclosed positive pressure suit, and decontamination of air and all other materials. Examples include Ebola, Nipah, Marburg, and 1918 pandemic influenza. 
Structure NML programs are housed in several facilities across the country. Two of these facilities are in proximity to each other in Winnipeg: The Canadian Science Centre for Human and Animal Health on Arlington Street and the J.C. Wilt Infectious Diseases Research Centre on Logan Avenue. The other facilities are located in Guelph, ON; St. Hyacinthe, QC and Lethbridge, AB. NML is divided into five main laboratory divisions which are supported by scientific and administrative services. The primary NML divisions are: Bacterial Pathogens - focussing on bacterial diseases such as tuberculosis and antibiotic resistant organisms. Enteric Diseases - focussing on food and water-borne pathogens including E.coli and Salmonella. Viral Diseases - addressing a range of viral diseases, including hepatitis and other blood-borne pathogens, respiratory viruses and viral exanthemata, such as measles. Zoonotic Diseases and Special Pathogens - dealing with viral, bacterial and rickettsial zoonoses (diseases transmitted to humans from other species), such as West Nile Virus and Lyme disease, along with risk group 4 agents such as Ebola, Marburg and Lassa fever viruses. HIV and Retrovirology - providing laboratory services and scientific expertise relating to HIV and emerging retroviruses. The Science Technology Core and Services Division works with these divisions to provide technological approaches, including genomics, proteomics and bioinformatics. There is also the Public Health Risk Sciences Division, which is a specialized resource that provides scientific knowledge and solutions to better assess public health risks and enable decisions, with specific attention to infectious disease threats transmitted from food, the animals, or the physical environment. These science-based divisions are complemented and supported by numerous other units that ensure their ongoing operations such as the Office of Science Planning, Program Support and Services, Scientific Informatic Services, Science Support and Client Services, Surveillance and References Services, the Facility and Property Management Division, and the Biorisk and Occupational Safety Services Division. NML also funds the National Reference Centre for Parasitology in Montreal and has a Laboratory Liaison Technical Officer in most provincial labs. Workforce NML employs scientists (MD, PhD, and DVM), biologists, and laboratory technologists, but it also includes informatics specialists, biosafety experts, specialized operations and maintenance staff, and administrative staff, among others. In total, there are approximately 600 staff members as of 2016. The laboratory has collaborated with scientists from the People's Liberation Army from at least 2016 to 2020. Accomplishments NML is renowned for its work on a broad spectrum of infectious diseases from seasonal influenza to Ebola and its accomplishments are too many to detail. Some recent examples of the work done by NML include their involvement in the response to the West African Ebola outbreak. For a period of about 18 months, teams from NML travelled to West Africa to aid in the diagnostics during the outbreak. They worked closely with the World Health Organization and Médecins sans frontières to ensure people were properly diagnosed so that they could be properly cared for and isolated from others to stop the spread. 
Also during this outbreak, a promising vaccine and a treatment for Ebola that had been developed at NML, in conjunction with collaborators, were fast-tracked into clinical trials so that they could reach the people who needed them as soon as possible. Another accomplishment was the response to the 2009 H1N1 influenza pandemic. In April 2009, the Mexican national lab approached NML for assistance with identifying a respiratory virus that was causing outbreaks in Mexico. NML was able to quickly identify the new virus and recognize that it matched the virus that was beginning to circulate in the U.S. As the lead laboratory in Canada, NML rapidly developed diagnostic tests and equipped provincial labs to be able to test for the new virus. NML also assisted Mexico by providing additional testing and by sending staff to the Mexican national laboratory to help them set up their own testing protocols. In the international laboratory sector, NML has developed different types of mobile labs: a lab-truck, a lab-trailer, and a "lab in a suitcase". The lab-truck is generally used for in-country deployments at high-profile events such as the 2010 Olympics, the lab-trailer is used for international large-scale events where there may be a threat of bioterrorism or other deliberate acts involving infectious agents, and the lab in a suitcase is frequently used in remote areas of the world with little available infrastructure. An example would be the multiple deployments over the years to combat outbreaks of Ebola in Africa. This model was adopted by many other countries during the 2014-2015 Ebola outbreak in West Africa. NML houses the secretariats for both the Canadian Public Health Laboratory Network (CPHLN) and the Global Health Security Action Group – Laboratory Network (GHSAG-LN). The role of CPHLN is to provide a forum for public health laboratory leaders to share knowledge. The GHSAG-LN network's goals are to coordinate the diagnostic capabilities of all participants and contribute to disease surveillance around the world. The Canadian Network for Public Health Intelligence (CNPHI) is an innovation developed by NML staff. It is a secure web-based system that compiles information from various surveillance systems and issues alerts to users. More than 4,000 public health officials across Canada now subscribe to it. CNPHI tools assist in determining the existence or extent of an outbreak through the recognition of related cases across jurisdictions. 
Directors 
From 2000 to 2014, Dr. Frank Plummer was the Scientific Director General of the National Microbiology Laboratory. Under Dr. Plummer's guidance, the NML developed into one of the world's premier institutions in research on, detection of, and response to global infectious disease and biosecurity threats. Dr. Plummer received his medical degree from the University of Manitoba in 1976. Between 1984 and 2001, Dr. Plummer lived in Nairobi, Kenya, where he spearheaded the development of the world-renowned "Kenya AIDS Control Program," established by the Universities of Manitoba and Nairobi. This HIV epidemiological work was central to global understanding of the risk factors for HIV transmission and how to prevent its spread. Dr. Plummer was the first to reveal that heterosexual women could also be infected with HIV/AIDS and that a cohort of Nairobi sex workers had a natural immunity to HIV/AIDS. This latter discovery suggested the possibility that a vaccine could eventually be developed. Dr. 
Plummer stepped down as the NML's Scientific Director General to take the position as senior adviser to the Agency's Chief Public Health Officer in 2014. He remained as a distinguished professor at the University of Manitoba prior to his death in February 2020. In 2015, Dr. Matthew Gilmour became the Scientific Director General of the National Microbiology Laboratory and the Laboratory for Foodborne Zoonoses. Dr. Gilmour spearheaded the partnership that brought these two laboratories together under the National Microbiology Laboratory umbrella. He was previously the Chief, Enteric Diseases and subsequently the Program Director, Bacteriology and Enteric Diseases at the NML. Dr. Gilmour has won a number of scientific awards including Canadian Society of Microbiologists' Canadian Graduate Student Microbiologist of the Year Award; the Public Health Agency of Canada's Most Promising Researcher Merit Award and Dr. Andrés Petrasovits Public Health Merit Award; and Health Canada's Excellence Award in Collaborative Leadership and Award for Excellence in Science. Dr. Gilmour continues to be an Assistant Professor at the University of Manitoba's Department of Medical Microbiology and Infectious Diseases as well as the Secretary Treasurer of the Canadian Association for Clinical Microbiology and Infectious Diseases (CACMID). See also VSV-EBOV ZMapp Notes and references External links National Microbiology Laboratory official website https://www.canada.ca/en/public-health/programs/national-microbiology-laboratory.html Public Health Agency of Canada https://www.canada.ca/en/public-health.html Laboratories in Canada Research institutes in Canada Health Canada Research institutes in Manitoba Medical research institutes in Canada Biosafety level 4 laboratories Life sciences industry Virology institutes 1999 establishments in Canada Organizations based in Winnipeg
National Microbiology Laboratory
[ "Biology" ]
3,003
[ "Life sciences industry" ]
1,774,701
https://en.wikipedia.org/wiki/Uniformizable%20space
In mathematics, a topological space X is uniformizable if there exists a uniform structure on X that induces the topology of X. Equivalently, X is uniformizable if and only if it is homeomorphic to a uniform space (equipped with the topology induced by the uniform structure). Any (pseudo)metrizable space is uniformizable since the (pseudo)metric uniformity induces the (pseudo)metric topology. The converse fails: There are uniformizable spaces that are not (pseudo)metrizable. However, it is true that the topology of a uniformizable space can always be induced by a family of pseudometrics; indeed, this is because any uniformity on a set X can be defined by a family of pseudometrics. Showing that a space is uniformizable is much simpler than showing it is metrizable. In fact, uniformizability is equivalent to a common separation axiom: A topological space is uniformizable if and only if it is completely regular. 
Induced uniformity 
One way to construct a uniform structure on a topological space X is to take the initial uniformity on X induced by C(X), the family of real-valued continuous functions on X. This is the coarsest uniformity on X for which all such functions are uniformly continuous. A subbase for this uniformity is given by the set of all entourages \(D_{f,\varepsilon} = \{(x,y) \in X \times X : |f(x) - f(y)| < \varepsilon\}\), where f ∈ C(X) and ε > 0. The uniform topology generated by the above uniformity is the initial topology induced by the family C(X). In general, this topology will be coarser than the given topology on X. The two topologies will coincide if and only if X is completely regular. 
Fine uniformity 
Given a uniformizable space X there is a finest uniformity on X compatible with the topology of X called the fine uniformity or universal uniformity. A uniform space is said to be fine if it has the fine uniformity generated by its uniform topology. The fine uniformity is characterized by the universal property: any continuous function f from a fine space X to a uniform space Y is uniformly continuous. This implies that the functor F : CReg → Uni that assigns to any completely regular space X the fine uniformity on X is left adjoint to the forgetful functor sending a uniform space to its underlying completely regular space. Explicitly, the fine uniformity on a completely regular space X is generated by all open neighborhoods D of the diagonal in X × X (with the product topology) such that there exists a sequence D1, D2, … of open neighborhoods of the diagonal with D = D1 and \(D_{n+1} \circ D_{n+1} \subseteq D_n\) for each n. The uniformity on a completely regular space X induced by C(X) (see the previous section) is not always the fine uniformity. References Properties of topological spaces Uniform spaces
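As a small worked instance of the subbase described above (an illustration added here, not part of the original article), take the real line with its usual topology and the single continuous function f = id:

\[
X = \mathbb{R}, \qquad f = \mathrm{id} \in C(X), \qquad
D_{f,\varepsilon} = \{(x,y) \in \mathbb{R} \times \mathbb{R} : |x - y| < \varepsilon\}.
\]

These entourages already form a base for the usual metric uniformity on \(\mathbb{R}\), so the uniformity induced by \(C(\mathbb{R})\) contains the metric uniformity, and, since \(\mathbb{R}\) is completely regular, the topology it induces is again the standard one.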
Uniformizable space
[ "Mathematics" ]
577
[ "Uniform spaces", "Properties of topological spaces", "Space (mathematics)", "Topological spaces", "Topology" ]
1,774,970
https://en.wikipedia.org/wiki/Multigraph
In mathematics, and more specifically in graph theory, a multigraph is a graph which is permitted to have multiple edges (also called parallel edges), that is, edges that have the same end nodes. Thus two vertices may be connected by more than one edge. There are two distinct notions of multiple edges: Edges without own identity: The identity of an edge is defined solely by the two nodes it connects. In this case, the term "multiple edges" means that the same edge can occur several times between these two nodes. Edges with own identity: Edges are primitive entities just like nodes. When multiple edges connect two nodes, these are different edges. A multigraph is different from a hypergraph, which is a graph in which an edge can connect any number of nodes, not just two. For some authors, the terms pseudograph and multigraph are synonymous. For others, a pseudograph is a multigraph that is permitted to have loops. 
Undirected multigraph (edges without own identity) 
A multigraph G is an ordered pair G := (V, E) with V a set of vertices or nodes, E a multiset of unordered pairs of vertices, called edges or lines. 
Undirected multigraph (edges with own identity) 
A multigraph G is an ordered triple G := (V, E, r) with V a set of vertices or nodes, E a set of edges or lines, r : E → {{x,y} : x, y ∈ V}, assigning to each edge an unordered pair of endpoint nodes. Some authors allow multigraphs to have loops, that is, an edge that connects a vertex to itself, while others call these pseudographs, reserving the term multigraph for the case with no loops. 
Directed multigraph (edges without own identity) 
A multidigraph is a directed graph which is permitted to have multiple arcs, i.e., arcs with the same source and target nodes. A multidigraph G is an ordered pair G := (V, A) with V a set of vertices or nodes, A a multiset of ordered pairs of vertices called directed edges, arcs or arrows. A mixed multigraph G := (V, E, A) may be defined in the same way as a mixed graph. 
Directed multigraph (edges with own identity) 
A multidigraph or quiver G is an ordered 4-tuple G := (V, A, s, t) with V a set of vertices or nodes, A a set of edges or lines, s : A → V, assigning to each edge its source node, and t : A → V, assigning to each edge its target node. This notion might be used to model the possible flight connections offered by an airline. In this case the multigraph would be a directed graph with pairs of directed parallel edges connecting cities to show that it is possible to fly both to and from these locations. In category theory a small category can be defined as a multidigraph (with edges having their own identity) equipped with an associative composition law and a distinguished self-loop at each vertex serving as the left and right identity for composition. For this reason, in category theory the term graph is standardly taken to mean "multidigraph", and the underlying multidigraph of a category is called its underlying digraph. 
Labeling 
Multigraphs and multidigraphs also support the notion of graph labeling, in a similar way. However there is no unity in terminology in this case. The definitions of labeled multigraphs and labeled multidigraphs are similar, and we define only the latter ones here. Definition 1: A labeled multidigraph is a labeled graph with labeled arcs. Formally: A labeled multidigraph G is a multigraph with labeled vertices and arcs. Formally it is an 8-tuple \(G = (\Sigma_V, \Sigma_A, V, A, s, t, \ell_V, \ell_A)\) where V is a set of vertices and A is a set of arcs. 
\(\Sigma_V\) and \(\Sigma_A\) are finite alphabets of the available vertex and arc labels, \(s : A \to V\) and \(t : A \to V\) are two maps indicating the source and target vertex of an arc, and \(\ell_V : V \to \Sigma_V\) and \(\ell_A : A \to \Sigma_A\) are two maps describing the labeling of the vertices and arcs. Definition 2: A labeled multidigraph is a labeled graph with multiple labeled arcs, i.e. arcs with the same end vertices and the same arc label (note that this notion of a labeled graph is different from the notion given by the article graph labeling). See also Multidimensional network Glossary of graph theory terms Graph theory Notes References External links Extensions and generalizations of graphs de:Graph (Graphentheorie)#Multigraph
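The distinction between edges as bare multiset entries and edges with their own identity is easy to see in code. The following minimal sketch (added for illustration; it is not taken from the article and is not any particular graph library's API) models a directed multigraph whose arcs carry their own identities, mirroring the quiver definition G = (V, A, s, t) above.

# Minimal directed multigraph (quiver) with edge identities: each arc has its own
# key, so parallel arcs between the same pair of vertices remain distinguishable.
class MultiDigraph:
    def __init__(self):
        self.vertices = set()   # V
        self.source = {}        # s : A -> V, keyed by arc id
        self.target = {}        # t : A -> V, keyed by arc id
        self._next_id = 0

    def add_vertex(self, v):
        self.vertices.add(v)

    def add_arc(self, u, v):
        """Add an arc u -> v and return its identity; parallel arcs are allowed."""
        self.add_vertex(u)
        self.add_vertex(v)
        arc_id = self._next_id
        self._next_id += 1
        self.source[arc_id] = u
        self.target[arc_id] = v
        return arc_id

    def parallel_arcs(self, u, v):
        """All arc identities running from u to v."""
        return [a for a in self.source
                if self.source[a] == u and self.target[a] == v]

g = MultiDigraph()
g.add_arc("YWG", "YYZ")                # two distinct Winnipeg -> Toronto flights,
g.add_arc("YWG", "YYZ")                # echoing the airline example in the article
print(g.parallel_arcs("YWG", "YYZ"))   # [0, 1] -- both arcs kept, unlike a simple graph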
Multigraph
[ "Mathematics" ]
925
[ "Mathematical relations", "Graph theory", "Extensions and generalizations of graphs" ]
1,775,224
https://en.wikipedia.org/wiki/Jetronic
Jetronic is a trade name of a manifold injection technology for automotive petrol engines, developed and marketed by Robert Bosch GmbH from the 1960s onwards. Bosch licensed the concept to many automobile manufacturers. There are several variations of the technology offering technological development and refinement. 
D-Jetronic (1967–1979) 
Analogue fuel injection; the 'D' is from Druck, German for pressure. Inlet manifold vacuum is measured using a pressure sensor located in, or connected to, the intake manifold, in order to calculate the duration of fuel injection pulses. Originally, this system was called Jetronic, but the name D-Jetronic was later created as a retronym to distinguish it from subsequent Jetronic iterations. D-Jetronic was essentially a further refinement of the Electrojector fuel delivery system developed by the Bendix Corporation in the late 1950s. Rather than choosing to eradicate the various reliability issues with the Electrojector system, Bendix instead licensed the design to Bosch. With the role of the Bendix system largely forgotten, D-Jetronic became known as the first widely successful precursor of modern electronic common rail systems; it had constant pressure fuel delivery to the injectors and pulsed injections, albeit grouped (two groups of injectors pulsed together) rather than sequential (individual injector pulses) as on later systems. As in the Electrojector system, D-Jetronic used analogue circuitry, with no microprocessor or digital logic; the ECU used about 25 transistors to perform all of the processing. Two factors that had led to the ultimate failure of the Electrojector system were superseded: the use of paper-wrapped capacitors unsuited to heat-cycling, and the amplitude-modulated (TV/ham radio) signals used to control the injectors. The still-present lack of processing power and the unavailability of solid-state sensors meant that the vacuum sensor was a rather expensive precision instrument, rather like a barometer, with brass bellows inside to measure the manifold pressure. Although conceptually similar to most later systems with individual electrically controlled injectors per cylinder, and pulse-width modulated fuel delivery, the fuel pressure was not modulated by manifold pressure, and the injectors were fired only once per two revolutions of the engine (with half of the injectors being fired each revolution). The system was last used (with a Lucas-designed timing mechanism and Lucas labels superimposed on some components) on the Jaguar V12 engine (XJ12 and XJ-S) from 1975 until 1979. 
K-Jetronic (1973–1994) 
Mechanical fuel injection; the 'K' stands for kontinuierlich, meaning continuous. Commonly called 'Continuous Injection System' (CIS) in the USA. K-Jetronic is different from pulsed injection systems in that the fuel flows continuously from all injectors, while the fuel pump pressurises the fuel up to approximately 5 bar (73.5 psi). The volume of air taken in by the engine is measured to determine the amount of fuel to inject. This system has no lambda loop or lambda control. K-Jetronic debuted in the 1973.5 Porsche 911T in January 1973, and was later installed into a number of Porsche, Volkswagen, Audi, BMW, Mercedes-Benz, Rolls-Royce, Bentley, Lotus, Ferrari, Peugeot, Nissan, Renault, Volvo, Saab, TVR and Ford automobiles. The final car to use K-Jetronic was the 1994 Porsche 911 Turbo 3.6. Fuel is pumped from the tank to a large control valve called a fuel distributor, which divides the single fuel supply line from the tank into smaller lines, one for each injector. 
The fuel distributor is mounted atop a control vane through which all intake air must pass, and the system works by varying the fuel volume supplied to the injectors based on the angle of a moving vane in the air flow meter, which in turn is determined by the volume of air passing the vane, and by the control pressure. The control pressure is regulated with a mechanical device called the control pressure regulator (CPR) or the warm-up regulator (WUR). Depending on the model, the CPR may be used to compensate for altitude, full load, and/or a cold engine. The injectors are simple spring-loaded check valves with nozzles; once fuel system pressure becomes high enough to overcome the counterspring, the injectors begin spraying. 
K-Jetronic (Lambda) 
First introduced on the PRV V6, appearing initially in the Volvo 265 in 1976 and later used in the DMC DeLorean in 1981. A variant of K-Jetronic with closed-loop lambda control, also named Ku-Jetronic, the letter u denoting USA. The system was developed to comply with the exhaust emission regulations of the California Air Resources Board, and was later replaced by KE-Jetronic. 
KE-Jetronic (1985–1993) 
Electronically controlled mechanical fuel injection. The engine control unit (ECU) may be either analog or digital, and the system may or may not have closed-loop lambda control. The system is based on the K-Jetronic mechanical system, with the addition of an electro-hydraulic actuator, essentially a fuel injector inline with the fuel return. Instead of injecting fuel into the intake, this injector allows fuel to bypass the fuel distributor, which varies the fuel pressure supplied to the mechanical injection components based on several inputs (engine speed, air pressure, coolant temperature, throttle position, lambda etc.) via the ECU. With the electronics disconnected, this system will operate as a K-Jetronic system. Commonly known as 'CIS-E' in the USA. The later KE3 (CIS-E III) variant features knock sensing capabilities. 
L-Jetronic (1974–1989) 
Analog fuel injection. L-Jetronic was often called Air-Flow Controlled (AFC) injection to further separate it from the pressure-controlled D-Jetronic, with the 'L' in its name derived from Luft, meaning 'air'. In the system, air flow into the engine is measured by a moving vane (indicating engine load) known as the volume air flow sensor (VAF), referred to in German documentation as the Luftmengenmesser or LMM. L-Jetronic used custom-designed integrated circuits, resulting in a simpler and more reliable engine control unit (ECU) than the D-Jetronic's. L-Jetronic was used heavily in 1980s-era European cars, as well as BMW K-Series motorcycles. Licensing some of Bosch's L-Jetronic concepts and technologies, Lucas, Hitachi Automotive Products, NipponDenso, and others produced similar fuel injection systems for Asian car manufacturers. L-Jetronic manufactured under license by Japan Electronic Control Systems was fitted to the 1980 Kawasaki Z1000-H1, the world's first production fuel-injected motorcycle. Despite physical similarity between L-Jetronic components and those produced under license by other manufacturers, the non-Bosch systems should not be called L-Jetronic, and the parts are usually incompatible. 
LE1-Jetronic, LE2-Jetronic, LE3-Jetronic (1981–1991) 
This is a simplified and more modern variant of L-Jetronic. The ECU was much cheaper to produce due to more modern components, and was more standardised than the L-Jetronic ECUs. As per L-Jetronic, a vane-type airflow sensor is used. 
Compared with L-Jetronic, the fuel injectors used by LE-Jetronic have a higher impedance. Three variants of LE-Jetronic exist: LE1, the initial version. LE2 (1984–), featured cold start functionality integrated in the ECU, which does not require the cold start injector and thermo time switch used by older systems. LE3 (1989–), featuring a miniaturised ECU with hybrid technology, integrated into the junction box of the mass airflow meter. 
LU1-Jetronic, LU2-Jetronic (1983–1991) 
The same as LE1-Jetronic and LE2-Jetronic respectively, but with closed-loop lambda control. Initially designed for the US market. 
LH-Jetronic (1982–1998) 
Digital fuel injection, introduced for California-bound 1982 Volvo 240 models. The 'LH' stands for Luftmasse-Hitzdraht, referring to the hot-wire anemometer technology used to determine the mass of air drawn into the engine. This air mass meter is called HLM2 (Hitzdrahtluftmassenmesser 2) by Bosch. The LH-Jetronic was mostly used by Scandinavian car manufacturers, and by sports and luxury cars produced in small quantities, such as the Porsche 928. The most common variants are LH 2.2, which uses an Intel 8049 (MCS-48) microcontroller, and usually a 4 kB programme memory, and LH 2.4, which uses a Siemens 80535 microcontroller (a variant of Intel's 8051/MCS-51 architecture) and 32 kB programme memory based on the 27C256 chip. LH-Jetronic 2.4 has adaptive lambda control, and support for a variety of advanced features, including fuel enrichment based on exhaust gas temperature (e.g. Volvo B204GT/B204FT engines). Some later (post-1995) versions contain hardware support for first generation diagnostics according to ISO 9141 (a.k.a. OBD-II) and immobiliser functions. 
Mono-Jetronic (1988–1995) 
Digital fuel injection. This system features one centrally positioned fuel injection nozzle. In the US, this kind of single-point injection was marketed as 'throttle body injection' (TBI, by GM), or 'central fuel injection' (CFI, by Ford). Mono-Jetronic is different from all other known single-point systems, in that it relies only on a throttle position sensor for judging the engine load. There are no sensors for air flow, or intake manifold vacuum. Mono-Jetronic always had adaptive closed-loop lambda control, and due to the simple engine load sensing, it is heavily dependent on the lambda sensor for correct functioning. The ECU uses an Intel 8051 microcontroller, usually with 16 kB of programme memory and without advanced on-board diagnostics (OBD-II became a requirement in model-year 1996.) See also Motronic References External links History of the D-Jetronic System Volvo enthusiasts. The site mostly focuses on 240-series cars with the Bosch K-Jet fuel injection systems Fuel injection systems Embedded systems Power control Engine technology Automotive technology tradenames Bosch (company)
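Since several of these systems, D-Jetronic in particular, derive injection pulse duration from manifold pressure and engine state, a small speed-density calculation illustrates the idea. This is only an illustrative sketch, not Bosch control code; every constant below (cylinder displacement, volumetric efficiency, injector flow rate, dead time) is an assumed example value, not a Jetronic calibration figure.

# Illustrative speed-density fuelling estimate: manifold pressure and intake
# temperature give an air-charge estimate per intake event via the ideal gas law,
# from which an injector opening time is derived.
R_AIR = 287.05  # J/(kg*K), specific gas constant for dry air

def injector_pulse_ms(map_kpa, intake_temp_c,
                      cyl_displacement_l=0.5,   # assumed cylinder volume, litres
                      volumetric_eff=0.85,      # assumed
                      target_afr=14.7,          # stoichiometric petrol
                      injector_g_per_s=2.5,     # assumed injector flow rate
                      dead_time_ms=1.0):        # assumed injector opening latency
    """Estimate injector opening time for one intake event, in milliseconds."""
    pressure_pa = map_kpa * 1000.0
    temp_k = intake_temp_c + 273.15
    air_density = pressure_pa / (R_AIR * temp_k)                     # kg/m^3
    air_mass_g = air_density * cyl_displacement_l * volumetric_eff   # kg/m^3 * L = g
    fuel_mass_g = air_mass_g / target_afr
    return dead_time_ms + 1000.0 * fuel_mass_g / injector_g_per_s

print(f"{injector_pulse_ms(95, 25):.1f} ms near full load")
print(f"{injector_pulse_ms(35, 25):.1f} ms at light load")

The point of the sketch is only that pulse width rises roughly in proportion to manifold pressure, which is the relationship the D-Jetronic pressure sensor exists to capture.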
Jetronic
[ "Physics", "Technology", "Engineering" ]
2,267
[ "Computer engineering", "Physical quantities", "Engines", "Embedded systems", "Computer systems", "Engine technology", "Power (physics)", "Computer science", "Power control" ]
1,775,454
https://en.wikipedia.org/wiki/New%20chemical%20entity
A new chemical entity (NCE) is, according to the U.S. Food and Drug Administration, a novel, small, chemical molecule drug that is undergoing clinical trials or has received a first approval (not a new use) by the FDA in any other application submitted under section 505(b) of the Federal Food, Drug, and Cosmetic Act. A new molecular entity (NME) is a broader term that encompasses both an NCE or an NBE (New Biological Entity). Definition An active moiety is a molecule or ion, excluding those appended portions of the molecule that cause the drug to be an ester, salt (including a salt with hydrogen or coordination bonds), or other noncovalent derivative (such as a complex, chelate, or clathrate) of the molecule, responsible for the physiological or pharmacological action of the drug substance. An NCE is a molecule developed by the innovator company in the early drug discovery stage, which after undergoing clinical trials could translate into a drug that could be a treatment for some disease. Synthesis of an NCE is the first step in the process of drug development. Once the synthesis of the NCE has been completed, companies have two options before them. They can either go for clinical trials on their own or license the NCE to another company. In the latter option, companies can avoid the expensive and lengthy process of clinical trials, as the licensee company would be conducting further clinical trials and subsequently launching the drug. Companies adopting this model of business would be able to generate high margins as they get a huge one-time payment for the NCE as well as entering into a revenue sharing agreement with the licensee company. Under the Food and Drug Administration Amendments Act of 2007, all new chemical entities must first be reviewed by an advisory committee before the FDA can approve these products. See also Molecular/chemical entity Medicinal chemistry Drug development References External links NME's worldwide and in Germany CDER Drug and Biologic Approval Reports Medicinal chemistry Drug development Life sciences industry
New chemical entity
[ "Chemistry", "Biology" ]
414
[ "Life sciences industry", "Medicinal chemistry stubs", "Biochemistry stubs", "nan", "Medicinal chemistry", "Biochemistry" ]
1,775,884
https://en.wikipedia.org/wiki/Betaine
A betaine () in chemistry is any neutral chemical compound with a positively charged cationic functional group that bears no hydrogen atom, such as a quaternary ammonium or phosphonium cation (generally: onium ions), and with a negatively charged functional group, such as a carboxylate group that may not be adjacent to the cationic site. Historically, the term was reserved for trimethylglycine (TMG), which is involved in methylation reactions and detoxification of homocysteine. This is a modified amino acid consisting of glycine with three methyl groups serving as methyl donor for various metabolic pathways. Pronunciation The pronunciation of the compound reflects its origin and first isolation from sugar beets (Beta vulgaris subsp. vulgaris), and does not derive from the Greek letter beta (β). It is commonly pronounced beta-INE or BEE-tayn. Glycine betaine The original betaine, N,N,N-trimethylglycine, was named after its discovery in sugar beet (Beta vulgaris subsp. vulgaris) in the nineteenth century. It is a small N-trimethylated amino acid. It is a zwitterion, which cannot isomerize because there is no labile hydrogen atom attached to the nitrogen atom. This substance may be called glycine betaine to distinguish it from other betaines. Uses Biochemistry Phosphonium betaines are intermediates in the Wittig reaction. The addition of betaine to polymerase chain reactions improves the amplification of DNA by reducing the formation of secondary structure in GC-rich regions. The addition of betaine may enhance the specificity of the polymerase chain reaction by eliminating the base pair composition dependence of DNA melting. Food additive In 2017, the European Food Safety Authority concluded that betaine was safe "as a novel food to be used at a maximum intake level of 6 mg/kg body weight per day in addition to the intake from the background diet." Approved drug A prescription drug (Cystadane) containing betaine has limited use for oral treatment of genetic homocystinuria to lower levels of homocysteine in circulating blood. Dietary supplement Trimethylglycine, a betaine, is used as a dietary supplement, although there is no evidence that it is effective or safe. Common side effects of taking oral betaine include nausea and stomach upset. Safety Many betaines are irritants of the eyes and skin. See also Cocamidopropyl betaine Mesoionic Mesomeric betaine Osmoprotectants Ylide References Further reading Quaternary ammonium compounds Zwitterions Surfactants
Betaine
[ "Physics", "Chemistry" ]
559
[ "Ions", "Zwitterions", "Matter" ]
1,776,396
https://en.wikipedia.org/wiki/Ready-mix%20concrete
Ready-mix concrete (RMC) is concrete that is manufactured in a batch plant, according to each specific job requirement, then delivered to the job site "ready to use". There are two types with the first being the barrel truck or in–transit mixers. This type of truck delivers concrete in a plastic state to the site. The second is the volumetric concrete mixer. This delivers the ready mix in a dry state and then mixes the concrete on site. However, other sources divide the material into three types: Transit Mix, Central Mix or Shrink Mix concrete. Ready-mix concrete refers to concrete that is specifically manufactured for customers' construction projects, and supplied to the customer on site as a single product. It is a mixture of Portland or other cements, water and aggregates: sand, gravel, or crushed stone. All aggregates should be of a washed type material with limited amounts of fines or dirt and clay. An admixture is often added to improve workability of the concrete and/or increase setting time of concrete (using retarders) to factor in the time required for the transit mixer to reach the site. The global market size is disputed depending on the source. It was estimated at 650 billion dollars in 2019. However it was estimated at just under 500 billion dollars in 2018. History There is some dispute as to when the first ready-mix delivery was made and when the first factory was built. Some sources suggest as early as 1913 in Baltimore. By 1929 there were over 100 plants operating in the United States. The industry did not expand significantly until the 1960s, and has continued to grow since then. Design Batch plants combine a precise amount of gravel, sand, water and cement by weight (as per a mix design formulation for the grade of concrete recommended by the structural engineer or architect), allowing specialty concrete mixtures to be developed and implemented on construction sites. Ready-mix concrete is often used instead of other materials due to the cost and wide range of uses in building, particularly in large projects like high-rise buildings and bridges. It has a long life span when compared to other products of a similar use, like roadways. It has an average life span of 30 years under high traffic areas compared to the 10 to 12 year life of asphalt concrete with the same traffic. Ready-mixed concrete is used in construction projects where the construction site is not willing, or is unable, to mix concrete on site. Using ready-mixed concrete means product is delivered finished, on demand, in the specific quantity required, in the specific mix design required. For a small to medium project, the cost and time of hiring mixing equipment, labour, plus purchase and storage for the ingredients of concrete, added to environmental concerns (cement dust is an airborne health hazard) may simply be not worthwhile when compared to the cost of ready-mixed concrete, where the customer pays for what they use, and allows others do the work up to that point. For a large project, outsourcing concrete production to ready-mixed concrete suppliers means delegating the quality control and testing, material logistics and supply chain issues and mix design, to specialists who are already established for those tasks, trading off against introducing another contracted external supplier who needs to make a profit, and losing the control and immediacy of on-site mixing. Ready-mix concrete is bought and sold by volume – usually expressed in cubic meters (cubic yards in the US). 
Batching and mixing is done under controlled conditions. In the UK, ready-mixed concrete is specified either informally, by constituent weight or volume (1-2-4 or 1-3-6 being common mixes) or using the formal specification standards of the European standard EN 206+ A1, which is supplemented in the UK by BS 8500. This allows the customer to specify what the concrete has to be able to withstand in terms of ground conditions, exposure, and strength, and allows the concrete manufacturer to design a mix that meets that requirement using the materials locally available to a batching plant. This is verified by laboratory testing, such as performing cube tests to verify compressive strength, flexural tests, and supplemented by field testing, such as slump tests done on site to verify plasticity of the mix. The performance of a concrete mix can be altered by use of admixtures. Admixtures can be used to reduce water requirements, entrain air into a mixture, to improve surface durability, or even superplasticise concrete to make it self-levelling, as self-consolidating concrete, the use of admixtures requires precision in dosing and mix design, which is more difficult without the dosing/measuring equipment and laboratory backing of a batching plant, which means they are not easily used outside of ready-mixed concrete. Concrete has a limited lifespan between batching / mixing and curing. This means that ready-mixed concrete should be placed within 30 to 45 minutes of the batching process to hold slump and mix design specifications in the US, though in the UK, environmental and material factors, plus in-transit mixing, allow for up two hours to elapse. Modern admixtures and water reducers can modify that time span to some degree. Ready-mixed concrete can be transported and placed at site using a number of methods. The most common and simplest is the chute fitted to the back of transit mixer trucks (as in picture), which is suitable for placing concrete near locations where a truck can back in. Dumper trucks, crane hoppers, truck-mounted conveyors, and, in extremis, wheelbarrows, can be used to place concrete from trucks where access is not direct. Some concrete mixes are suitable for pumping with a concrete pump. In 2011, there were 2,223 companies employing 72,925 workers that produced ready-mix concrete in the United States. Advantages of ready-mix concrete Materials are combined in a batch plant, and the hydration process begins at the moment water meets the cement, so the travel time from the plant to the site, and the time before the concrete is placed on-site, is critical over longer distances. Some sites are just too distant. The use of admixtures, retarders, and cement-like pulverized fly ash or ground granulated blast-furnace slag (GGBFS) can be used to slow the hydration process, allowing for longer transit and waiting time. Concrete is formable and pourable, but a steady supply is needed for large forms. If there is a supply interruption, and the concrete cannot be poured all at once, a cold joint may appear in the finished form. The biggest advantage is that concrete is produced under controlled conditions. Therefore, Quality concrete is obtained, as a ready-mix concrete mix plant makes use of sophisticated equipment and consistent methods. There is strict control over the testing of materials, process parameters, and continuous monitoring of key practices during the manufacturing process. 
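As an illustration of batching by mix proportion, the short sketch below converts a nominal 1:2:4 volumetric mix into approximate batch masses for a requested volume. It is only a rough planning aid under stated assumptions: the 1.54 dry-volume factor, the 0.5 water-cement ratio and the bulk densities are common textbook values, not figures taken from this article or from the standards it mentions.

# Approximate batch masses for a nominal volumetric concrete mix (illustrative only).
def nominal_mix_masses(target_m3, ratio=(1, 2, 4), w_c_ratio=0.5, dry_factor=1.54):
    """Return approximate batch masses (kg) of cement, sand, aggregate and water."""
    bulk_density = {"cement": 1440, "sand": 1600, "aggregate": 1500}  # kg/m^3, assumed
    dry_volume = target_m3 * dry_factor      # allow for voids lost on compaction
    parts = sum(ratio)
    masses = {}
    for name, r in zip(("cement", "sand", "aggregate"), ratio):
        masses[name] = dry_volume * r / parts * bulk_density[name]
    masses["water"] = masses["cement"] * w_c_ratio   # water dosed against cement mass
    return masses

for material, kg in nominal_mix_masses(1.0).items():
    print(f"{material:>9}: {kg:6.0f} kg")   # roughly 2.4-2.5 t of material per cubic metre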
The poor control over input materials, batching and mixing that is typical of site-mixed concrete is avoided with a ready-mix concrete production method. Construction also proceeds faster, because a ready-mix plant runs continuous, mechanized operations: the output of a site-mix operation using an 8/12 mixer is 4 to 5 cubic metres per hour, compared with 30–60 cubic metres per hour from a ready-mix concrete plant. Better handling and proper mixing practice will help reduce the consumption of cement by 10–12%. The use of admixtures and other cementitious materials will help to reduce the amount of cement required to make the desired grade of concrete. Less consumption of cement indirectly results in less environmental pollution. Ready-mix concrete manufacture depends less on human labour, so the chances of human error are reduced, as is the dependency on intensive labour. There are also drawbacks. Cracking and shrinkage: concrete shrinks as it cures, noticeably so over a 10-foot (3.05 m) length. This causes stress internally on the concrete and must be accounted for by the engineers and finishers placing the concrete, and may require the use of steel reinforcement or pre-stressed concrete elements where this is critical. Access roads and site access have to be able to carry the weight of the ready-mix truck plus load, which can be up to 32 tonnes for an eight-wheel 9 m3 truck (green concrete weighs approximately 2,400 kg per cubic metre). This problem can be overcome by utilizing so-called "mini mix" trucks which use smaller 4 m3 capacity mixers able to reach more weight-restricted sites. Even smaller mixers are used to allow a 7.5 tonne truck to hold approximately 1.25 m3, to reach restricted inner city areas with bans on larger trucks. 
Metered concrete 
An alternative to the centralized batch plant system is the volumetric mobile mixer. This is often referred to as on-site concrete, site mixed concrete or mobile mix concrete. This is a mobile miniaturized version of the large stationary batch plant. They are used to provide ready mix concrete utilizing a continuous batching process or metered concrete system. The volumetric mobile mixer is a truck that holds sand, rock, cement, water, fiber, and some admixtures and color depending on how the batch plant is outfitted. These trucks mix or batch the ready mix on the job site. This type of truck can mix as much or as little concrete as needed. The on-site mixing eliminates the hydration during travel time that can cause transit-mixed concrete to become unusable. These trucks are as precise as the centralized batch plant system, since the trucks are scaled and tested using the same ASTM (American standard test method) procedures as all other ready-mix manufacturers. This is a hybrid approach between centralized batch plants and traditional on-site mixing. Each type of system has advantages and disadvantages, depending on the location, size of the job, and mix design set forth by the engineer. 
Transit mixed ready mix versus volumetric mixed ready mix 
A centralized concrete batching plant can serve a wide area. Site-mix trucks can serve an even larger area including very remote locations that standard trucks cannot. The batch plants are located in areas zoned for industrial use, while the delivery trucks can service residential districts or inner cities. Site-mix trucks have the same capabilities. Volumetric trucks often have a lower water demand during the batching process. 
This will produce a concrete that can be significantly stronger in compressive strength compared to the centralized batch plant for the same mix design using the ASTM C109 test method. Centralized batch systems are limited by the size of the fleet. It may take upwards of 10 minutes to batch and load out one truck depending on the plant size and type. They are unable to change mix designs in the middle of an individual batching process, but can quickly offer a greater range of mixes overall as a central yard has more stock capacity for different types of cement, aggregates, and admixtures than a single truck has room for on site. Volumetric mixers can seamlessly change all aspects of the mix design while still producing concrete, as long as the raw materials are on site. They can continuously mix quality concrete for an indefinite time while being continuously loaded with fresh materials. They can produce 1 yard of concrete in as little as 40 seconds depending on the mix design and batch plant size outfitted. Centralised batching, using the same supply of materials over a long period (a fixed plant will likely have a fixed set of suppliers in its locality), the same scales which can be calibrated by weighbridges, the same measuring equipment for admixtures, moisture etc., and often the same batching operator, can have tighter tolerances for mixes, use a centralised lab to design and verify dozens of mixes to different specifications across multiple jobs for that plant, and can therefore produce a very predictable, consistent result for major projects. Each plant will have a batching recipe book (or equivalent automated batching program) to batch and load any quantity of any mix design on demand. Centralized batching can scale quickly with less movement than on site mixers, using aggregate trucks, cement tankers and ground stocks to achieve up to 240 cubic metres an hour from a single plant. This allows consistent large-scale pours across a site quickly, as supply logistics for cement, water, and aggregate are fixed to a single point with greater storage capacity, and therefore easier to scale, and more tolerant of short supply interruptions. For small loads (orders under 10 yards) transit mixers typically return to their batch plant after each delivery. Volumetric trucks can go directly from job to job until a truck is emptied, reducing traffic and fuel consumption. See also Types of concrete Reinforced concrete References Notes Bibliography External links National Ready Mixed Concrete Association Ready Mixed Concrete Manufacturer's Association European Ready Mixed Concrete Organisation Illinois Ready Mixed Concrete Association Concrete Batching plant Manufacturer - European Technology Concrete Building materials Civil engineering
Ready-mix concrete
[ "Physics", "Engineering" ]
2,644
[ "Structural engineering", "Building engineering", "Construction", "Materials", "Building materials", "Civil engineering", "Concrete", "Matter", "Architecture" ]
1,776,503
https://en.wikipedia.org/wiki/Knoevenagel%20condensation
In organic chemistry, the Knoevenagel condensation reaction is a type of chemical reaction named after German chemist Emil Knoevenagel. It is a modification of the aldol condensation. A Knoevenagel condensation is a nucleophilic addition of an active hydrogen compound to a carbonyl group followed by a dehydration reaction in which a molecule of water is eliminated (hence condensation). The product is often an α,β-unsaturated ketone (a conjugated enone). In this reaction the carbonyl group is an aldehyde or a ketone. The catalyst is usually a weakly basic amine. The active hydrogen component has the form Z–CH2–Z' or Z–CHR–Z', for instance diethyl malonate, Meldrum's acid, ethyl acetoacetate, malonic acid, or cyanoacetic acid; or Z–CHR1R2, for instance nitromethane; where Z is an electron-withdrawing group. Z must be powerful enough to facilitate deprotonation to the enolate ion even with a mild base. Using a strong base in this reaction would induce self-condensation of the aldehyde or ketone. The Hantzsch pyridine synthesis, the Gewald reaction and the Feist–Benary furan synthesis all contain a Knoevenagel reaction step. The reaction also led to the discovery of CS gas. 
Doebner modification 
The Doebner modification of the Knoevenagel condensation entails the use of pyridine as a solvent when at least one of the withdrawing groups on the nucleophile is a carboxylic acid, as for example with malonic acid. Under these conditions the condensation is accompanied by decarboxylation. For example, the reaction of acrolein and malonic acid in pyridine gives trans-2,4-pentadienoic acid with one carboxylic acid group and not two. Sorbic acid can be prepared similarly by replacing acrolein with crotonaldehyde. 
Examples and applications 
A Knoevenagel condensation is demonstrated in the reaction of 2-methoxybenzaldehyde 1 with the thiobarbituric acid 2 in ethanol using piperidine as a base. The resulting enone 3 is a charge transfer complex molecule. The Knoevenagel condensation is a key step in the commercial production of the antimalarial drug lumefantrine (a component of Coartem). The initial reaction product is a 50:50 mixture of E and Z isomers, but because both isomers equilibrate rapidly around their common hydroxyl precursor, the more stable Z-isomer can eventually be obtained. A multicomponent reaction featuring a Knoevenagel condensation is demonstrated in a MORE synthesis with cyclohexanone, malononitrile and 3-amino-1,2,4-triazole. 
Weiss–Cook reaction 
The Weiss–Cook reaction consists of the synthesis of cis-bicyclo[3.3.0]octane-3,7-dione employing an acetonedicarboxylic acid ester and a diacyl (1,2-diketone). The mechanism operates in the same way as the Knoevenagel condensation. See also Malonic ester synthesis Aldol condensation Nitroalkene Iminocoumarin References Condensation reactions Name reactions Carbon-carbon bond forming reactions
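For concreteness, a standard textbook instance of the condensation (added here for illustration; it is not one of the specific examples discussed above) is the piperidine-catalysed reaction of benzaldehyde with malononitrile, in which the active methylene compound adds to the carbonyl and water is then eliminated:

\[
\underset{\text{benzaldehyde}}{\mathrm{C_6H_5CHO}}
\;+\;
\underset{\text{malononitrile}}{\mathrm{CH_2(CN)_2}}
\;\xrightarrow[\;-\,\mathrm{H_2O}\;]{\text{piperidine (cat.)}}\;
\underset{\text{benzylidenemalononitrile}}{\mathrm{C_6H_5CH{=}C(CN)_2}}
\]

Here the two nitrile groups play the role of the electron-withdrawing Z groups, so only a mildly basic amine is needed to generate the nucleophile.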
Knoevenagel condensation
[ "Chemistry" ]
743
[ "Carbon-carbon bond forming reactions", "Coupling reactions", "Organic reactions", "Name reactions", "Condensation reactions" ]
1,776,871
https://en.wikipedia.org/wiki/Miraculin
Miraculin is a taste modifier, a glycoprotein extracted from the fruit of Synsepalum dulcificum. The berry, also known as the miracle fruit, was documented by explorer Chevalier des Marchais, who searched for many different fruits during a 1725 excursion to its native West Africa. Miraculin itself does not taste sweet. When taste buds are exposed to miraculin, the protein binds to the sweetness receptors. This causes normally sour-tasting acidic foods, such as citrus, to be perceived as sweet. The effect can last for one or two hours. History The sweetening properties of Synsepalum dulcificum berries were first noted by des Marchais during expeditions to West Africa in the 18th century. The term miraculin derived from experiments to isolate and purify the active glycoprotein that gave the berries their sweetening effects, results that were published simultaneously by Japanese and Dutch scientists working independently in the 1960s (the Dutch team called the glycoprotein mieraculin). The word miraculin was in common use by the mid-1970s. Glycoprotein structure Miraculin was first sequenced in 1989 and was found to be a 24.6 kilodalton glycoprotein consisting of 191 amino acids and 13.9% by weight of various sugars. The sugars consist of a total of 3.4 kDa, composed of a molar ratio of glucosamine (31%), mannose (30%), fucose (22%), xylose (10%), and galactose (7%). The native state of miraculin is a tetramer consisting of two dimers, each held together by a disulfide bridge. Both tetramer miraculin and native dimer miraculin in its crude state have the taste-modifying activity of turning sour tastes into sweet tastes. Miraculin belongs to the Kunitz STI protease inhibitor family. Sweetness properties Miraculin, unlike curculin (another taste-modifying agent), is not sweet by itself, but it can change the perception of sourness to sweetness, even for a long period after consumption. The duration and intensity of the sweetness-modifying effect depends on various factors, such as miraculin concentration, duration of contact of the miraculin with the tongue, and acid concentration. Miraculin reaches its maximum sweetness with a solution containing at least 4*10−7 mol/L miraculin, which is held in the mouth for about 3 minutes. Maximum is equivalent in sweetness to a 0.4 mol/L solution of sucrose. Miraculin degrades permanently via denaturation at high temperatures and at pH below 3 or above 12. Although the detailed mechanism of the taste-inducing behavior is unknown, it appears the sweet receptors are activated by acids which are related to sourness, an effect remaining until the taste buds perceive a neutral pH. Sweeteners are perceived by the human sweet taste receptor, hT1R2-hT1R3, which belongs to G protein-coupled receptors, modified by the two histidine residues (i.e. His30 and His60) which participate in the taste-modifying behavior. One site maintains the attachment of the protein to the membranes while the other (with attached xylose or arabinose) activates the sweet receptor membrane in acid solutions. As a sweetener As miraculin is a readily soluble protein and relatively heat stable, it is a potential sweetener in acidic food, such as soft drinks. While attempts to express it in yeast and tobacco plants have failed, researchers have succeeded in preparing genetically modified E. coli bacteria that express miraculin. Lettuce and tomato have also been used for mass production of miraculin. 
The use of miraculin as a food additive was denied in 1974 by the United States Food and Drug Administration. However, it can still be sold in the form of whole berries or tablets (as "dietary supplements"). In 2011 the FDA banned a certain brand of miraculin tablets imported from Taiwan as it was thought to be "hard candy" with non-approved sweeteners. Miraculin has a novel food status in the European Union. It is approved in Japan as a safe food additive, according to the List of Existing Food Additives published by the Ministry of Health and Welfare (published by the Japan External Trade Organization). See also Brazzein Curculin Monellin Thaumatin Pentadin Cynarin Stevia References Taste modifiers Sugar substitutes Food science Biomolecules Chemopreventive agents
Miraculin
[ "Chemistry", "Biology" ]
970
[ "Pharmacology", "Natural products", "Organic compounds", "Chemopreventive agents", "Biomolecules", "Structural biology", "Biochemistry", "Molecular biology" ]
1,777,403
https://en.wikipedia.org/wiki/Electron%20multiplier
An electron multiplier is a vacuum-tube structure that multiplies incident charges. In a process called secondary emission, a single electron, when it strikes a secondary-emissive material, can induce the emission of roughly 1 to 3 electrons. If an electric potential is applied between this metal plate and yet another, the emitted electrons will accelerate to the next metal plate and induce secondary emission of still more electrons. This can be repeated a number of times, resulting in a large shower of electrons all collected by a metal anode, all having been triggered by just one. 
History 
In 1930, Russian physicist Leonid Aleksandrovitch Kubetsky proposed a device which used photocathodes combined with dynodes, or secondary electron emitters, in a single tube, using an increasing electric potential through the device to accelerate the secondary electrons from stage to stage. The electron multiplier can use any number of dynodes in total; each has a secondary-emission coefficient, σ, and together they create a gain of σⁿ, where n is the number of emitters. 
Discrete dynode 
Secondary electron emission begins when one electron hits a dynode inside a vacuum chamber and ejects electrons that cascade onto more dynodes and repeat the process over again. The dynodes are set up so that each time an electron reaches the next one it has gained about 100 electron volts relative to the last dynode. Some advantages of using this include a response time in the picoseconds, a high sensitivity, and an electron gain of about 10⁸ electrons. 
Continuous dynode 
A continuous dynode system uses a horn-shaped funnel of glass coated with a thin film of semiconducting material. The resistive coating distributes the applied voltage along the channel, allowing repeated secondary emission. Continuous dynodes use a negative high voltage at the wider end, rising to near ground potential at the narrow end. The first device of this kind was called a Channel Electron Multiplier (CEM). CEMs required 2-4 kilovolts in order to achieve a gain of 10⁶ electrons. 
Microchannel plate 
Another geometry of continuous-dynode electron multiplier is called the microchannel plate (MCP). It may be considered a 2-dimensional parallel array of very small continuous-dynode electron multipliers, built together and powered in parallel. Each microchannel is generally parallel-walled, not tapered or funnel-like. MCPs are constructed from lead glass and carry a resistance of 10⁹ Ω between the electrodes. Each channel has a diameter of 10-100 μm. The electron gain for one microchannel plate can be around 10⁴-10⁷ electrons. 
Applications 
Instruments 
In mass spectrometry electron multipliers are often used as a detector of ions that have been separated by a mass analyzer of some sort. They can be the continuous-dynode type and may have a curved horn-like funnel shape or can have discrete dynodes as in a photomultiplier. Continuous dynode electron multipliers are also used in NASA missions and are coupled to a gas chromatography mass spectrometer (GC-MS), which allows scientists to determine the amount and types of gases present on Titan, Saturn's largest moon. 
Night-vision 
Microchannel plates are also used in night-vision goggles. As electrons hit the millions of channels, they release thousands of secondary electrons. These electrons then hit a phosphor screen where they are amplified and converted back into light. The resulting image reproduces the pattern of the original and allows for better vision in the dark, while only using a small battery pack to provide a voltage for the MCP. 
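A quick numerical illustration of the σⁿ gain relation above; the yield and dynode count are assumed example values for illustration, not the specifications of any particular multiplier.

# Overall gain of a discrete-dynode electron multiplier with a fixed yield per stage.
def multiplier_gain(secondary_yield: float, n_dynodes: int) -> float:
    """Total charge gain for n identical dynodes, each multiplying by the same yield."""
    return secondary_yield ** n_dynodes

# e.g. a yield of 3 electrons per impact over 14 dynodes
print(f"{multiplier_gain(3.0, 14):.2e}")   # ~4.8e6; higher yields or more stages reach ~1e8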
See also Faraday cup Daly detector Phototube Photo-multiplier tube Scintillation counter Lucas cell Zoltán Lajos Bay (developer) References External links Olympus Tutorial How Discrete Dynode Electron Multipliers work Measuring instruments Radio electronics Mass spectrometry Electron Analytical chemistry de:Sekundärelektronenvervielfacher
Electron multiplier
[ "Physics", "Chemistry", "Technology", "Engineering" ]
815
[ "Electron", "Radio electronics", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Measuring instruments", "Mass spectrometry", "nan", "Matter" ]
14,189,709
https://en.wikipedia.org/wiki/Rutherford%20backscattering%20spectrometry
Rutherford backscattering spectrometry (RBS) is an analytical technique used in materials science. Sometimes referred to as high-energy ion scattering (HEIS) spectrometry, RBS is used to determine the structure and composition of materials by measuring the backscattering of a beam of high energy ions (typically protons or alpha particles) impinging on a sample. Geiger–Marsden experiment Rutherford backscattering spectrometry is named after Lord Rutherford, a physicist sometimes referred to as the father of nuclear physics. Rutherford supervised a series of experiments carried out by Hans Geiger and Ernest Marsden between 1909 and 1914 studying the scattering of alpha particles through metal foils. While attempting to eliminate "stray particles" they believed to be caused by an imperfection in their alpha source, Rutherford suggested that Marsden attempt to measure backscattering from a gold foil sample. According to the then-dominant plum-pudding model of the atom, in which small negative electrons were spread through a diffuse positive region, backscattering of the high-energy positive alpha particles should have been nonexistent. At most small deflections should occur as the alpha particles passed almost unhindered through the foil. Instead, when Marsden positioned the detector on the same side of the foil as the alpha particle source, he immediately detected a noticeable backscattered signal. According to Rutherford, "It was quite the most incredible event that has ever happened to me in my life. It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you." Rutherford interpreted the result of the Geiger–Marsden experiment as an indication of a Coulomb collision with a single massive positive particle. This led him to the conclusion that the atom's positive charge could not be diffuse but instead must be concentrated in a single massive core: the atomic nucleus. Calculations indicated that the charge necessary to accomplish this deflection was approximately 100 times the charge of the electron, close to the atomic number of gold. This led to the development of the Rutherford model of the atom in which a positive nucleus made up of Ne positive particles, or protons, was surrounded by N orbiting electrons of charge -e to balance the nuclear charge. This model was eventually superseded by the Bohr atom, incorporating some early results from quantum mechanics. If the energy of the incident particle is increased sufficiently, the Coulomb barrier is exceeded and the wavefunctions of the incident and struck particles overlap. This may result in nuclear reactions in certain cases, but frequently the interaction remains elastic, although the scattering cross-sections may fluctuate wildly as a function of energy and no longer be calculable analytically. This case is known as "Elastic (non-Rutherford) Backscattering Spectrometry" (EBS). There has recently been great progress in determining EBS scattering cross-sections, by solving Schrödinger's equation for each interaction. However, for the EBS analysis of matrices containing light elements, the utilization of experimentally measured scattering cross-section data is also considered to be a very credible option. Basic principles We describe Rutherford backscattering as an elastic, hard-sphere collision between a high kinetic energy particle from the incident beam (the projectile) and a stationary particle located in the sample (the target). 
Elastic in this context means that no energy is lost to excitation or internal changes of either particle during the collision, and the state of the stationary particle is not changed (apart from its receiving a small amount of momentum and recoil energy). Nuclear interactions are generally not elastic, since a collision may result in a nuclear reaction, with the release of considerable quantities of energy. Nuclear reaction analysis (NRA) is useful for detecting light elements. However, this is not Rutherford scattering. Considering the kinematics of the collision (that is, the conservation of momentum and kinetic energy), the energy E1 of the scattered projectile is reduced from the initial energy E0: 
\[E_1 = k\,E_0,\] 
where k is known as the kinematical factor, 
\[k = \left(\frac{m_1\cos\theta \pm \sqrt{m_2^{2} - m_1^{2}\sin^{2}\theta}}{m_1 + m_2}\right)^{2},\] 
and where particle 1 is the projectile, particle 2 is the target nucleus, and \(\theta\) is the scattering angle of the projectile in the laboratory frame of reference (that is, relative to the observer). The plus sign is taken when the mass of the projectile is less than that of the target, otherwise the minus sign is taken. While this equation correctly determines the energy of the scattered projectile for any particular scattering angle (relative to the observer), it does not describe the probability of observing such an event. For that we need the differential cross-section of the backscattering event: 
\[\frac{d\sigma}{d\Omega} = \left(\frac{Z_1 Z_2 e^{2}}{4E}\right)^{2}\frac{1}{\sin^{4}(\theta/2)},\] 
where Z1 and Z2 are the atomic numbers of the incident and target nuclei and E is the kinetic energy of the incident ion. This equation is written in the centre of mass frame of reference and is therefore not a function of the mass of either the projectile or the target nucleus. The scattering angle in the laboratory frame of reference is not the same as the scattering angle in the centre of mass frame of reference (although for RBS experiments they are usually very similar). However, heavy ion projectiles can easily recoil lighter ions which, if the geometry is right, can be ejected from the target and detected. This is the basis of the Elastic Recoil Detection (ERD, with synonyms ERDA, FRS, HFS) technique. RBS often uses a He beam which readily recoils H, so simultaneous RBS/ERD is frequently done to probe the hydrogen isotope content of samples (although H ERD with a He beam above 1 MeV is not Rutherford: see http://www-nds.iaea.org/sigmacalc). For ERD the scattering angle in the lab frame of reference is quite different from that in the centre of mass frame of reference. Heavy ions cannot backscatter from light ones: it is kinematically prohibited. The kinematical factor must remain real, and this limits the permitted scattering angle in the laboratory frame of reference. In ERD it is often convenient to place the recoil detector at recoil angles large enough to prohibit signal from the scattered beam. The scattered ion intensity is always very large compared to the recoil intensity (the Rutherford scattering cross-section formula goes to infinity as the scattering angle goes to zero), and for ERD the scattered beam usually has to be excluded from the measurement somehow. The singularity in the Rutherford scattering cross-section formula is unphysical of course. If the scattering angle is zero it implies that the projectile never comes close to the target, but in this case it also never penetrates the electron cloud surrounding the nucleus either. The pure Coulomb formula for the scattering cross-section shown above must be corrected for this screening effect, which becomes more important as the energy of the projectile decreases (or, equivalently, its mass increases). 
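As a numerical illustration of the kinematic factor (an illustrative sketch added here, not part of the article), the following computes k and the backscattered energy for a 2 MeV ⁴He beam on a gold target at a typical detector angle of 170°:

# Kinematical factor k and scattered energy E1 = k*E0 for RBS (illustrative only).
import math

def kinematic_factor(m1: float, m2: float, theta_deg: float) -> float:
    """RBS kinematical factor for projectile mass m1 on target mass m2 (same units)."""
    theta = math.radians(theta_deg)
    root = math.sqrt(m2**2 - (m1 * math.sin(theta))**2)
    return ((m1 * math.cos(theta) + root) / (m1 + m2)) ** 2   # '+' branch since m1 < m2

E0 = 2.0                                        # MeV, incident He energy
k = kinematic_factor(4.002602, 196.96657, 170.0)  # He-4 on Au-197
print(f"k = {k:.4f}, E1 = {k * E0:.3f} MeV")      # roughly k ~ 0.92, E1 ~ 1.85 MeV

Because k depends on the target mass, the measured backscattered energy identifies which element the ion struck, which is the basis of composition analysis by RBS.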
While large-angle scattering only occurs for ions which scatter off target nuclei, inelastic small-angle scattering can also occur off the sample electrons. This results in a gradual decrease in the kinetic energy of incident ions as they penetrate into the sample, so that backscattering off interior nuclei occurs with a lower "effective" incident energy. Similarly backscattered ions lose energy to electrons as they exit the sample. The amount by which the ion energy is lowered after passing through a given distance is referred to as the stopping power of the material and is dependent on the electron distribution. This energy loss varies continuously with respect to distance traversed, so that stopping power is expressed as

S(E) = -\frac{dE}{dx}

For high energy ions the stopping power is roughly inversely proportional to the ion energy (apart from a slowly varying logarithmic factor); however, precise calculation of stopping power is difficult to carry out with any accuracy. Stopping power (properly, stopping force) has units of energy per unit length. It is generally given in thin film units, that is eV/(atom/cm2), since it is measured experimentally on thin films whose thickness is always measured absolutely as mass per unit area, avoiding the problem of determining the density of the material, which may vary as a function of thickness. Stopping power is now known for all materials to an accuracy of around 2%, see http://www.srim.org. Instrumentation An RBS instrument generally includes three essential components: An ion source, usually alpha particles (He2+ ions) or, less commonly, protons. A linear particle accelerator capable of accelerating incident ions to high energies, usually in the range 1–3 MeV. A detector capable of measuring the energies of backscattered ions over some range of angles. Two common source/acceleration arrangements are used in commercial RBS systems, working in either one or two stages. One-stage systems consist of a He+ source connected to an acceleration tube with a high positive potential applied to the ion source, and the ground at the end of the acceleration tube. This arrangement is simple and convenient, but it can be difficult to achieve energies of much more than 1 MeV due to the difficulty of applying very high voltages to the system. Two-stage systems, or "tandem accelerators", start with a source of He− ions and position the positive terminal at the center of the acceleration tube. A stripper element included in the positive terminal removes electrons from ions which pass through, converting He− ions to He++ ions. The ions thus start out being attracted to the terminal, pass through and become positive, and are repelled until they exit the tube at ground. This arrangement, though more complex, has the advantage of achieving higher accelerations with lower applied voltages: a typical tandem accelerator with an applied voltage of 750 kV can achieve ion energies of over 2 MeV. Detectors to measure backscattered energy are usually silicon surface barrier detectors, a very thin layer (100 nm) of P-type silicon on an N-type substrate forming a p-n junction. Ions which reach the detector lose some of their energy to inelastic scattering from the electrons, and some of these electrons gain enough energy to be excited across the band gap between the semiconductor valence and conduction bands. This means that each ion incident on the detector will produce some number of electron-hole pairs which is dependent on the energy of the ion. These pairs can be detected by applying a voltage across the detector and measuring the current, providing an effective measurement of the ion energy. 
The relationship between ion energy and the number of electron-hole pairs produced will be dependent on the detector materials, the type of ion and the efficiency of the current measurement; energy resolution is dependent on thermal fluctuations. After one ion is incident on the detector, there will be some dead time before the electron-hole pairs recombine in which a second incident ion cannot be distinguished from the first. Angular dependence of detection can be achieved by using a movable detector, or more practically by separating the surface barrier detector into many independent cells which can be measured independently, covering some range of angles around direct (180 degrees) back-scattering. Angular dependence of the incident beam is controlled by using a tiltable sample stage. Composition and depth measurement The energy loss of a backscattered ion is dependent on two processes: the energy lost in scattering events with sample nuclei, and the energy lost to small-angle scattering from the sample electrons. The first process is dependent on the scattering cross-section of the nucleus and thus on its mass and atomic number. For a given measurement angle, nuclei of two different elements will therefore scatter incident ions to different degrees and with different energies, producing separate peaks on an N(E) plot of measurement count versus energy. These peaks are characteristic of the elements contained in the material, providing a means of analyzing the composition of a sample by matching scattered energies to known scattering cross-sections. Relative concentrations can be determined by measuring the heights of the peaks. The second energy loss process, the stopping power of the sample electrons, does not result in large discrete losses such as those produced by nuclear collisions. Instead it creates a gradual energy loss dependent on the electron density and the distance traversed in the sample. This energy loss will lower the measured energy of ions which backscatter from nuclei inside the sample in a continuous manner dependent on the depth of the nuclei. The result is that instead of the sharp backscattered peaks one would expect on an N(E) plot, with the width determined by energy and angular resolution, the peaks observed trail off gradually towards lower energy as the ions pass through the depth occupied by that element. Elements which only appear at some depth inside the sample will also have their peak positions shifted by some amount which represents the distance an ion had to traverse to reach those nuclei. In practice, then, a compositional depth profile can be determined from an RBS N(E) measurement. The elements contained by a sample can be determined from the positions of peaks in the energy spectrum. Depth can be determined from the width and shifted position of these peaks, and relative concentration from the peak heights. This is especially useful for the analysis of a multilayer sample, for example, or for a sample with a composition which varies more continuously with depth. This kind of measurement can only be used to determine elemental composition; the chemical structure of the sample cannot be determined from the N(E) profile. However, it is possible to learn something about this through RBS by examining the crystal structure. This kind of spatial information can be investigated by taking advantage of blocking and channeling. 
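As a rough illustration of how the energy scale of an N(E) spectrum maps onto a depth scale, the sketch below converts the energy deficit of a backscattered ion into the depth of the scattering atom, using the kinematical factor and constant inbound/outbound stopping powers. All numerical values (beam energy, k for Si, the stopping powers, the exit geometry) are illustrative assumptions; a real analysis would take energy-dependent stopping powers from tabulations such as SRIM.

```python
import math

def depth_from_energy(E0, E_detected, k, S_in, S_out, exit_angle_deg):
    """Estimate the depth (nm) of a scattering atom from the measured
    backscattered energy, in the surface-energy approximation: the ion loses
    S_in per unit path on the way in (normal incidence assumed) and S_out per
    unit path on the way out, along a path lengthened by 1/cos(exit angle
    from the surface normal). Energies in eV, stopping powers in eV/nm."""
    cos_out = abs(math.cos(math.radians(exit_angle_deg)))
    loss_per_nm = k * S_in + S_out / cos_out
    return (k * E0 - E_detected) / loss_per_nm

# Illustrative numbers (assumed, not measured): 2 MeV He on Si, detector at
# 170 degrees so the exit path is about 10 degrees from the surface normal.
E0, k_si = 2.0e6, 0.565
S_in, S_out = 240.0, 260.0  # eV/nm, assumed values for He in Si
for E_det in (1.13e6, 1.08e6, 1.00e6):
    x = depth_from_energy(E0, E_det, k_si, S_in, S_out, exit_angle_deg=10.0)
    print(f"detected {E_det/1e6:.2f} MeV -> scattering depth ~ {x:.0f} nm")
```

In practice the stopping power varies with the ion energy along the path, so tabulated S(E) values are integrated rather than treated as constants, but the constant-stopping sketch shows why a peak that trails off towards lower energies corresponds to an element distributed over a range of depths.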
Structural measurements: blocking and channeling To fully understand the interaction of an incident beam of nuclei with a crystalline structure, it is necessary to comprehend two more key concepts: blocking and channeling. When a beam of ions with parallel trajectories is incident on a target atom, scattering off that atom will prevent collisions in a cone-shaped region "behind" the target relative to the beam. This occurs because the repulsive potential of the target atom bends close ion trajectories away from their original path, and is referred to as blocking. The radius of this blocked region, at a distance L from the original atom, is given by

R = 2\sqrt{\frac{Z_1 Z_2 e^2 L}{E}}

When an ion is scattered from deep inside a sample, it can then re-scatter off a second atom, creating a second blocked cone in the direction of the scattered trajectory. This can be detected by carefully varying the detection angle relative to the incident angle. Channeling is observed when the incident beam is aligned with a major symmetry axis of the crystal. Incident nuclei which avoid collisions with surface atoms are excluded from collisions with all atoms deeper in the sample, due to blocking by the first layer of atoms. When the interatomic distance is large compared to the radius of the blocked cone, the incident ions can penetrate many times the interatomic distance without being backscattered. This can result in a drastic reduction of the observed backscattered signal when the incident beam is oriented along one of the symmetry directions, allowing determination of a sample's regular crystal structure. Channeling works best for very small blocking radii, i.e. for high-energy, low-atomic-number incident ions such as He+. The tolerance for the deviation of the ion beam angle of incidence relative to the symmetry direction depends on the blocking radius, making the allowable deviation angle proportional to

\sqrt{\frac{Z_1 Z_2 e^2}{E\,d}}

where d is the spacing between atoms along the crystal axis (a short numerical sketch of these two estimates is given below). While the intensity of an RBS peak is observed to decrease across most of its width when the beam is channeled, a narrow peak at the high-energy end of a larger peak will often be observed, representing surface scattering from the first layer of atoms. The presence of this peak opens the possibility of surface sensitivity for RBS measurements. Profiling of displaced atoms In addition, channeling of ions can also be used to analyze a crystalline sample for lattice damage. If atoms within the target are displaced from their crystalline lattice site, this will result in a higher backscattering yield in relation to a perfect crystal. By comparing the spectrum from a sample being analyzed to that from a perfect crystal, and that obtained at a random (non-channeling) orientation (representative of a spectrum from an amorphous sample), it is possible to determine the extent of crystalline damage in terms of a fraction of displaced atoms. Multiplying this fraction by the density of the material when amorphous then also gives an estimate for the concentration of displaced atoms. The energy at which the increased backscattering occurs can also be used to determine the depth at which the displaced atoms are and a defect depth profile can be built up as a result. Surface sensitivity While RBS is generally used to measure the bulk composition and structure of a sample, it is possible to obtain some information about the structure and composition of the sample surface. 
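Before continuing with the surface-sensitivity discussion, here is the minimal numerical sketch of the blocking-cone radius and channeling acceptance angle referred to above. The beam, target and atomic spacing are illustrative assumptions, and the acceptance angle is only a Lindhard-type order-of-magnitude estimate.

```python
import math

E2 = 1.44  # e^2 in eV*nm (Gaussian units)

def blocking_radius_nm(z1, z2, L_nm, E_eV):
    """Radius of the shadow (blocking) cone a distance L behind the
    scattering atom: R = 2*sqrt(Z1*Z2*e^2*L/E)."""
    return 2.0 * math.sqrt(z1 * z2 * E2 * L_nm / E_eV)

def critical_angle_deg(z1, z2, d_nm, E_eV):
    """Rough channeling acceptance angle ~ sqrt(2*Z1*Z2*e^2/(E*d)),
    returned in degrees (Lindhard-type estimate)."""
    return math.degrees(math.sqrt(2.0 * z1 * z2 * E2 / (E_eV * d_nm)))

# Illustrative: 1 MeV He+ (Z1 = 2) on a Si atomic row (Z2 = 14),
# with an assumed spacing of about 0.235 nm along the row.
z1, z2, d, E = 2, 14, 0.235, 1.0e6
print(f"blocking radius one spacing behind the atom: "
      f"{blocking_radius_nm(z1, z2, d, E) * 1000:.1f} pm")
print(f"channeling acceptance angle: {critical_angle_deg(z1, z2, d, E):.2f} deg")
```

The blocking radius comes out a few picometres, much smaller than the interatomic spacing, and the acceptance angle is of order one degree, which is why alignment with a symmetry axis has to be controlled with a precise, tiltable sample stage.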
When the signal is channeled to remove the bulk signal, careful manipulation of the incident and detection angles can be used to determine the relative positions of the first few layers of atoms, taking advantage of blocking effects. The surface structure of a sample can be changed from the ideal in a number of ways. The first layer of atoms can change its distance from subsequent layers (relaxation); it can assume a different two-dimensional structure than the bulk (reconstruction); or another material can be adsorbed onto the surface. Each of these cases can be detected by RBS. For example, surface reconstruction can be detected by aligning the beam in such a way that channeling should occur, so that only a surface peak of known intensity should be detected. A higher-than-usual intensity or a wider peak will indicate that the first layers of atoms are failing to block the layers beneath, i.e. that the surface has been reconstructed. Relaxations can be detected by a similar procedure with the sample tilted so the ion beam is incident at an angle selected so that first-layer atoms should block backscattering at a diagonal; that is, from atoms which are below and displaced from the blocking atom. A higher-than-expected backscattered yield will indicate that the first layer has been displaced relative to the second layer, or relaxed. Adsorbate materials will be detected by their different composition, changing the position of the surface peak relative to the expected position. RBS has also been used to measure processes which affect the surface differently from the bulk by analyzing changes in the channeled surface peak. A well-known example of this is the RBS analysis of the premelting of lead surfaces by Frenken, Maree and van der Veen. In an RBS measurement of the Pb(110) surface, a well-defined surface peak, which is stable at low temperatures, was found to become wider and more intense as the temperature increased past two-thirds of the bulk melting temperature. The peak reached the bulk height and width as the temperature reached the melting temperature. This increase in the disorder of the surface, making deeper atoms visible to the incident beam, was interpreted as pre-melting of the surface, and computer simulations of the RBS process produced similar results when compared with theoretical pre-melting predictions. RBS has also been combined with nuclear microscopy, in which a focused ion beam is scanned across a surface in a manner similar to a scanning electron microscope. The energetic analysis of backscattered signals in this kind of application provides compositional information about the surface, while the microprobe itself can be used to examine features such as periodic surface structures. See also Collision cascade Elastic recoil detection Geiger–Marsden experiment Ion beam analysis Nuclear microscopy Nuclear reaction analysis Particle induced X-ray emission Rutherford scattering Secondary ion mass spectrometry Stopping power (particle radiation) Surface science References Citations Bibliography Materials science Scientific techniques Spectroscopy Ernest Rutherford
Rutherford backscattering spectrometry
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,947
[ "Ion beam methods", "Molecular physics", "Spectrum (physical sciences)", "Applied and interdisciplinary physics", "Instrumental analysis", "Materials science", "Surface science", "nan", "Spectroscopy" ]
14,189,946
https://en.wikipedia.org/wiki/Constant%20Awake%20Mode
In the context of wireless networking, Constant Awake Mode (CAM) is a mode that is intended for devices when power is not an issue, such as when AC power is available to a device. This mode provides the best connectivity from the user perspective. CAM is also appropriate when a portable device will be used for only a short time that the battery can easily accommodate. This is the most commonly used mode, and can be contrasted with power saving modes, which may or may not be offered by a particular device. References Electric power
Constant Awake Mode
[ "Physics", "Technology", "Engineering" ]
107
[ "Physical quantities", "Computer network stubs", "Power (physics)", "Electric power", "Computing stubs", "Electrical engineering" ]
14,194,164
https://en.wikipedia.org/wiki/Lymphoproliferative%20response
Lymphoproliferative response is a specific immune response that entails rapid T-cell replication. Standard antigens, such as tetanus toxoid, that elicit this response are used in lab tests of immune competence. References External links Online Medical Dictionary, lymphoproliferative response Immune system
Lymphoproliferative response
[ "Biology" ]
68
[ "Immune system", "Organ systems" ]
14,194,283
https://en.wikipedia.org/wiki/Non-Hausdorff%20manifold
In geometry and topology, it is a usual axiom of a manifold to be a Hausdorff space. In general topology, this axiom is relaxed, and one studies non-Hausdorff manifolds: spaces locally homeomorphic to Euclidean space, but not necessarily Hausdorff. Examples Line with two origins The most familiar non-Hausdorff manifold is the line with two origins, or bug-eyed line. This is the quotient space of two copies of the real line, and (with ), obtained by identifying points and whenever An equivalent description of the space is to take the real line and replace the origin with two origins and The subspace retains its usual Euclidean topology. And a local base of open neighborhoods at each origin is formed by the sets with an open neighborhood of in For each origin the subspace obtained from by replacing with is an open neighborhood of homeomorphic to Since every point has a neighborhood homeomorphic to the Euclidean line, the space is locally Euclidean. In particular, it is locally Hausdorff, in the sense that each point has a Hausdorff neighborhood. But the space is not Hausdorff, as every neighborhood of intersects every neighbourhood of It is however a T1 space. The space is second countable. The space exhibits several phenomena that do not happen in Hausdorff spaces: The space is path connected but not arc connected. In particular, to get a path from one origin to the other one can first move left from to within the line through the first origin, and then move back to the right from to within the line through the second origin. But it is impossible to join the two origins with an arc, which is an injective path; intuitively, if one moves first to the left, one has to eventually backtrack and move back to the right. The intersection of two compact sets need not be compact. For example, the sets and are compact, but their intersection is not. The space is locally compact in the sense that every point has a local base of compact neighborhoods. But the line through one origin does not contain a closed neighborhood of that origin, as any neighborhood of one origin contains the other origin in its closure. So the space is not a regular space, and even though every point has at least one closed compact neighborhood, the origin points do not admit a local base of closed compact neighborhoods. The space does not have the homotopy type of a CW-complex, or of any Hausdorff space. Line with many origins The line with many origins is similar to the line with two origins, but with an arbitrary number of origins. It is constructed by taking an arbitrary set with the discrete topology and taking the quotient space of that identifies points and whenever Equivalently, it can be obtained from by replacing the origin with many origins one for each The neighborhoods of each origin are described as in the two origin case. If there are infinitely many origins, the space illustrates that the closure of a compact set need not be compact in general. For example, the closure of the compact set is the set obtained by adding all the origins to , and that closure is not compact. From being locally Euclidean, such a space is locally compact in the sense that every point has a local base of compact neighborhoods. But the origin points do not have any closed compact neighborhood. Branching line Similar to the line with two origins is the branching line. 
This is the quotient space of two copies of the real line with the equivalence relation This space has a single point for each negative real number and two points for every non-negative number: it has a "fork" at zero. Etale space The etale space of a sheaf, such as the sheaf of continuous real functions over a manifold, is a manifold that is often non-Hausdorff. (The etale space is Hausdorff if it is a sheaf of functions with some sort of analytic continuation property.) Properties Because non-Hausdorff manifolds are locally homeomorphic to Euclidean space, they are locally metrizable (but not metrizable in general) and locally Hausdorff (but not Hausdorff in general). See also Notes References General topology Manifolds Topology
Non-Hausdorff manifold
[ "Physics", "Mathematics" ]
859
[ "General topology", "Space (mathematics)", "Topological spaces", "Topology", "Space", "Manifolds", "Geometry", "Spacetime" ]
14,194,971
https://en.wikipedia.org/wiki/Crushed%20stone
Crushed stone or angular rock is a form of construction aggregate, typically produced by mining a suitable rock deposit and breaking the removed rock down to the desired size using crushers. It is distinct from naturally occurring gravel, which is produced by natural processes of weathering and erosion and typically has a more rounded shape. Use Angular crushed stone is the key material for macadam road construction, which depends on the interlocking of the individual stones' angular faces for its strength. As riprap As railroad track ballast As filter stone. As composite material (with a binder) in concrete, tarmac, and asphalt concrete. In landscaping as a groundcover, walkway and driveway pavement, and infill for permeable pavers. As a mineral groundcover its benefits include erosion control, water conservation, weed suppression, and aesthetics. It is often seen used in rock gardens and cactus gardens. Background Crushed stone is a major basic raw material used by construction, agriculture, and other industries. Despite the low value of its basic products, the crushed stone industry is a major contributor to and an indicator of the economic well-being of a nation. The demand for crushed stone is determined mostly by the level of construction activity, and, therefore, the demand for construction materials. Stone resources of the world are very large. High-purity limestone and dolomite suitable for specialty uses are limited in many geographic areas. Crushed stone substitutes for roadbuilding include sand and gravel, and slag. Substitutes for crushed stone used as construction aggregates include sand and gravel, iron and steel slag, sintered or expanded clay or shale, and perlite or vermiculite. Crushed stone is a high-volume, low-value commodity. The industry is highly competitive and is characterized by many operations serving local or regional markets. Production costs are determined mainly by the cost of labor, equipment, energy, and water, in addition to the costs of compliance with environmental and safety regulations. These costs vary depending on geographic location, the nature of the deposit, and the number and type of products produced. Crushed stone has one of the lowest average by weight values of all mineral commodities. The average unit price increased from US$1.58 per metric ton, f.o.b. plant, in 1970 to US$4.39 in 1990. However, the unit price in constant 1982 dollars fluctuated between US$3.48 and US$3.91 per metric ton for the same period. Increased productivity achieved through increased use of automation and more efficient equipment was mainly responsible for maintaining the prices at this level. Transportation is a major factor in the delivered price of crushed stone. The cost of moving crushed stone from the plant to the market often equals or exceeds the sale price of the product at the plant. Because of the high cost of transportation and the large quantities of bulk material that have to be shipped, crushed stone is usually marketed locally. The high cost of transportation is responsible for the wide dispersion of quarries, usually located near highly populated areas. However, increasing land values combined with local environmental concerns are moving crushed stone quarries farther from the end-use locations, increasing the price of delivered material. Economies of scale, which might be realized if fewer, larger operations served larger marketing areas, would probably not offset the increased transportation costs. 
United States statistical data According to the United States Geological Survey, 1.72 billion tons of crushed stone worth $13.8 billion was sold or used in 2006, of which 1.44 billion tons was used as construction aggregate, 74.9 million tons used for cement manufacture, and 18.1 million tons used to make lime. Crushed marble sold or used totaled 11.8 million tons, the majority of which was ground very fine and used as calcium carbonate. In 2006, 9.40 million tons of crushed stone (almost all limestone or dolomite) was used for soil treatment, primarily to reduce soil acidity. Soils tend to become acidic from heavy use of nitrogen-containing fertilizers, unless a soil conditioner is used. Using aglime or agricultural lime, a finely-ground limestone or dolomite, to change the soil from acidic to nearly neutral particularly benefits crops by maximizing availability of plant nutrients, and also by reducing aluminum or manganese toxicity, promoting soil microbe activity, and improving the soil structure. In 2006, 5.29 million tons of crushed stone (mostly limestone or dolomite) was used as a flux in blast furnaces and in certain steel furnaces to react with gangue minerals (i.e. silica and silicate impurities) to produce liquid slag that floats and can be poured off from the much denser molten metal (i.e., iron). The slag cools to become a stone-like material that is commonly crushed and recycled as construction aggregate. In addition, 4.53 million tons of crushed stone was used for fillers and extenders (including asphalt fillers or extenders), 2.71 million tons for sulfur oxide removal-mine dusting-acid water treatment, and 1.45 million tons sold or used for poultry grit or mineral food. Crushed stone is recycled primarily as construction aggregate or concrete. Terminology and variations The term "high performance bedding" (HPB) is used to refer to 1/4" and 3/8" crushed rock. Granular A, Granular B, and Granular C, all represent crushed rock mixed with sand in which Granular A has the least amount of sand, and Granular B has the most amount of sand. "Granular A" is also referred to as "angular crushed down to fines" or "minus" gravel. See also Aggregate (composite) Dimension stone References External links USGS 2006 Minerals Yearbook: Stone, Crushed Crushed Stone Statistics and Information - United States Geological Survey minerals information for crushed stone Building stone Granularity of materials Building materials Natural materials Pavements Stone (material) Earthworks (engineering)
Crushed stone
[ "Physics", "Chemistry", "Engineering" ]
1,217
[ "Natural materials", "Building engineering", "Architecture", "Construction", "Materials", "Particle technology", "Granularity of materials", "Matter", "Building materials" ]
14,198,628
https://en.wikipedia.org/wiki/Muscarinic%20acetylcholine%20receptor%20M4
{{DISPLAYTITLE:Muscarinic acetylcholine receptor M4}} The muscarinic acetylcholine receptor M4, also known as the cholinergic receptor, muscarinic 4 (CHRM4), is a protein that, in humans, is encoded by the CHRM4 gene. Function M4 muscarinic receptors are coupled to Gi/o heterotrimeric proteins. They function as inhibitory autoreceptors for acetylcholine. Activation of M4 receptors inhibits acetylcholine release in the striatum. The M2 subtype of acetylcholine receptor functions similarly as an inhibitory autoreceptor to acetylcholine release, albeit functioning actively primarily in the hippocampus and cerebral cortex. Muscarinic acetylcholine receptors possess a regulatory effect on dopaminergic neurotransmission. Activation of M4 receptors in the striatum inhibit D1-induced locomotor stimulation in mice. M4 receptor-deficient mice exhibit increased locomotor simulation in response to D1 agonists, amphetamine and cocaine. Neurotransmission in the striatum influences extrapyramidal motor control, thus alterations in M4 activity may contribute to conditions such as Parkinson's disease. The M4 muscarinic receptor has been found to be a regulator of erythroid progenitor cell differentiation. Inhibition of the M4 muscarinic receptor provides therapeutic benefits in myelodysplastic syndrome and anemia. Ligands Agonists Acetylcholine Carbachol CMI-936 NBI-1117568 (HTL-0016878) ML-007 Oxotremorine Xanomeline Positive allosteric modulators Emraclidine (CVL-231, PF-06852231) LY-2033298 NS-136 SUVN-L3307032 VU-0152100 (ML-108) VU-0152099 Antagonists AFDX-384 (mixed M2/M4 antagonist, N-[2-[2-[(Dipropylamino)methyl]-1-piperidinyl]ethyl]-5,6-dihydro-6-oxo-11H-pyrido[2,3-b][1,4]benzodiazepine-11-carboxamide, CAS# 118290-27-0) Dicycloverine Diphenhydramine Himbacine Mamba toxin 3 NBI-1076968 PD-102,807 (3,6a,11,14-Tetrahydro-9-methoxy-2-methyl-(12H)-isoquino[1,2-b]pyrrolo[3,2-f][1,3]benzoxazine-1-carboxylic acid ethyl ester, CAS# 23062-91-1) PD-0298029 Tropicamide - moderate selectivity over other muscarinic subtypes (2-5x approx) See also Muscarinic acetylcholine receptor References Further reading External links G protein-coupled receptors Human proteins Muscarinic acetylcholine receptors
Muscarinic acetylcholine receptor M4
[ "Chemistry" ]
720
[ "G protein-coupled receptors", "Signal transduction" ]
14,200,011
https://en.wikipedia.org/wiki/BCM%20theory
Bienenstock–Cooper–Munro (BCM) theory, BCM synaptic modification, or the BCM rule, named after Elie Bienenstock, Leon Cooper, and Paul Munro, is a physical theory of learning in the visual cortex developed in 1981. The BCM model proposes a sliding threshold for long-term potentiation (LTP) or long-term depression (LTD) induction, and states that synaptic plasticity is stabilized by a dynamic adaptation of the time-averaged postsynaptic activity. According to the BCM model, when a pre-synaptic neuron fires, the post-synaptic neurons will tend to undergo LTP if it is in a high-activity state (e.g., is firing at high frequency, and/or has high internal calcium concentrations), or LTD if it is in a lower-activity state (e.g., firing in low frequency, low internal calcium concentrations). This theory is often used to explain how cortical neurons can undergo both LTP or LTD depending on different conditioning stimulus protocols applied to pre-synaptic neurons (usually high-frequency stimulation, or HFS, for LTP, or low-frequency stimulation, LFS, for LTD). Development In 1949, Donald Hebb proposed a working mechanism for memory and computational adaption in the brain now called Hebbian learning, or the maxim that cells that fire together, wire together. This notion is foundational in the modern understanding of the brain as a neural network, and though not universally true, remains a good first approximation supported by decades of evidence. However, Hebb's rule has problems, namely that it has no mechanism for connections to get weaker and no upper bound for how strong they can get. In other words, the model is unstable, both theoretically and computationally. Later modifications gradually improved Hebb's rule, normalizing it and allowing for decay of synapses, where no activity or unsynchronized activity between neurons results in a loss of connection strength. New biological evidence brought this activity to a peak in the 1970s, where theorists formalized various approximations in the theory, such as the use of firing frequency instead of potential in determining neuron excitation, and the assumption of ideal and, more importantly, linear synaptic integration of signals. That is, there is no unexpected behavior in the adding of input currents to determine whether or not a cell will fire. These approximations resulted in the basic form of BCM below in 1979, but the final step came in the form of mathematical analysis to prove stability and computational analysis to prove applicability, culminating in Bienenstock, Cooper, and Munro's 1982 paper. Since then, experiments have shown evidence for BCM behavior in both the visual cortex and the hippocampus, the latter of which plays an important role in the formation and storage of memories. Both of these areas are well-studied experimentally, but both theory and experiment have yet to establish conclusive synaptic behavior in other areas of the brain. It has been proposed that in the cerebellum, the parallel-fiber to Purkinje cell synapse follows an "inverse BCM rule", meaning that at the time of parallel fiber activation, a high calcium concentration in the Purkinje cell results in LTD, while a lower concentration results in LTP. Furthermore, the biological implementation for synaptic plasticity in BCM has yet to be established. Theory The basic BCM rule takes the form where: is the synaptic weight of the th synapse, is th synapse's input current, is the inner product of weights and input currents (weighted sum of inputs), is a non-linear function. 
This function must change sign at some threshold , that is, if and only if . See below for details and properties. and is the (often negligible) time constant of uniform decay of all synapses. This model is a modified form of the Hebbian learning rule, , and requires a suitable choice of function to avoid the Hebbian problems of instability. Bienenstock at al. rewrite as a function where is the time average of . With this modification and discarding the uniform decay the rule takes the vectorial form: The conditions for stable learning are derived rigorously in BCM noting that with and with the approximation of the average output , it is sufficient that or equivalently, that the threshold , where and are fixed positive constants. When implemented, the theory is often taken such that where is a time constant of selectivity. The model has drawbacks, as it requires both long-term potentiation and long-term depression, or increases and decreases in synaptic strength, something which has not been observed in all cortical systems. Further, it requires a variable activation threshold and depends strongly on stability of the selected fixed points and . However, the model's strength is that it incorporates all these requirements from independently derived rules of stability, such as normalizability and a decay function with time proportional to the square of the output. Example This example is a particular case of the one at chapter "Mathematical results" of Bienenstock at al. work, assuming and . With these values and we decide that fulfills the stability conditions said in previous chapter. Assume two presynaptic neurons that provides inputs and , its activity a repetitive cycle with half of time and remainder time . time average will be the average of value in first and second half of a cycle. Let initial value of weights . In the first half of time and , the weighted sum is equal to 0.095 and we use same value as initial average . That means , , . Adding 10% of the derivative to the weights we obtain new ones . In next half of time, inputs are and weights . That means , of full cycle is 0.075, , , . Adding 10% of the derivative to the weights we obtain new ones . Repeating previous cycle we obtain, after several hundred of iterations, that stability is reached with , (first half) and (remainder time), , , and . Note how, as predicted, the final weight vector has become orthogonal to one of the input patterns, being the final values of in both intervals zeros of the function . Experiment The first major experimental confirmation of BCM came in 1992 in investigating LTP and LTD in the hippocampus. Serena Dudek's experimental work showed qualitative agreement with the final form of the BCM activation function. This experiment was later replicated in the visual cortex, which BCM was originally designed to model. This work provided further evidence of the necessity for a variable threshold function for stability in Hebbian-type learning (BCM or others). Experimental evidence has been non-specific to BCM until Rittenhouse et al. confirmed BCM's prediction of synapse modification in the visual cortex when one eye is selectively closed. Specifically, where describes the variance in spontaneous activity or noise in the closed eye and is time since closure. Experiment agreed with the general shape of this prediction and provided an explanation for the dynamics of monocular eye closure (monocular deprivation) versus binocular eye closure. 
The experimental results are far from conclusive, but so far have favored BCM over competing theories of plasticity. Applications While the algorithm of BCM is too complicated for large-scale parallel distributed processing, it has been put to use in lateral networks with some success. Furthermore, some existing computational network learning algorithms have been made to correspond to BCM learning. References External links Scholarpedia article Biophysics Computational neuroscience Neuroplasticity
BCM theory
[ "Physics", "Biology" ]
1,577
[ "Applied and interdisciplinary physics", "Biophysics" ]
1,166,647
https://en.wikipedia.org/wiki/Magnetic%20reconnection
Magnetic reconnection is a physical process occurring in electrically conducting plasmas, in which the magnetic topology is rearranged and magnetic energy is converted to kinetic energy, thermal energy, and particle acceleration. Magnetic reconnection involves plasma flows at a substantial fraction of the Alfvén wave speed, which is the fundamental speed for mechanical information flow in a magnetized plasma. The concept of magnetic reconnection was developed in parallel by researchers working in solar physics and in the interaction between the solar wind and magnetized planets. This reflects the bidirectional nature of reconnection, which can either disconnect formerly connected magnetic fields or connect formerly disconnected magnetic fields, depending on the circumstances. Ron Giovanelli is credited with the first publication invoking magnetic energy release as a potential mechanism for particle acceleration in solar flares. Giovanelli proposed in 1946 that solar flares stem from the energy obtained by charged particles influenced by induced electric fields within close proximity of sunspots. In the years 1947-1948, he published more papers further developing the reconnection model of solar flares. In these works, he proposed that the mechanism occurs at points of neutrality (weak or null magnetic field) within structured magnetic fields. James Dungey is credited with first use of the term “magnetic reconnection” in his 1950 PhD thesis, to explain the coupling of mass, energy and momentum from the solar wind into Earth's magnetosphere. The concept was published for the first time in a seminal paper in 1961. Dungey coined the term "reconnection" because he envisaged field lines and plasma moving together in an inflow toward a magnetic neutral point (2D) or line (3D), breaking apart and then rejoining again but with different magnetic field lines and plasma, in an outflow away from the magnetic neutral point or line. In the meantime, the first theoretical framework of magnetic reconnection was established by Peter Sweet and Eugene Parker at a conference in 1956. Sweet pointed out that by pushing two plasmas with oppositely directed magnetic fields together, resistive diffusion is able to occur on a length scale much shorter than a typical equilibrium length scale. Parker was in attendance at this conference and developed scaling relations for this model during his return travel. Fundamental principles Magnetic reconnection is a breakdown of "ideal-magnetohydrodynamics" and so of "Alfvén's theorem" (also called the "frozen-in flux theorem") which applies to large-scale regions of a highly-conducting magnetoplasma, for which the Magnetic Reynolds Number is very large: this makes the convective term in the induction equation dominate in such regions. The frozen-in flux theorem states that in such regions the field moves with the plasma velocity (the mean of the ion and electron velocities, weighted by their mass). The reconnection breakdown of this theorem occurs in regions of large magnetic shear (by Ampére's law these are current sheets) which are regions of small width where the Magnetic Reynolds Number can become small enough to make the diffusion term in the induction equation dominate, meaning that the field diffuses through the plasma from regions of high field to regions of low field. 
In reconnection, the inflow and outflow regions both obey Alfvén's theorem and the diffusion region is a very small region at the centre of the current sheet where field lines diffuse together, merge and reconfigure such that they are transferred from the topology of the inflow regions (i.e., along the current sheet) to that of the outflow regions (i.e., threading the current sheet). The rate of this magnetic flux transfer is the electric field associated with both the inflow and the outflow and is called the "reconnection rate". The equivalence of magnetic shear and current can be seen from one of Maxwell's equations

\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}

In a plasma (ionized gas), for all but exceptionally high frequency phenomena, the second term on the right-hand side of this equation, the displacement current, is negligible compared to the effect of the free current and this equation reduces to Ampère's law for free charges. The displacement current is neglected in both the Parker-Sweet and Petschek theoretical treatments of reconnection, discussed below, and in the derivation of ideal MHD and Alfvén's theorem which is applied in those theories everywhere outside the small diffusion region. The resistivity of the current layer allows magnetic flux from either side to diffuse through the current layer, cancelling outflux from the other side of the boundary. However, the small spatial scale of the current sheet makes the Magnetic Reynolds Number small and so this alone can make the diffusion term dominate in the induction equation without the resistivity being enhanced. When the diffusing field lines from the two sides of the boundary touch they form the separatrices and so have both the topology of the inflow region (i.e. along the current sheet) and the outflow region (i.e., threading the current sheet). In magnetic reconnection the field lines evolve from the inflow topology through the separatrices topology to the outflow topology. When this happens, the plasma is pulled out by the magnetic tension force acting on the reconfigured field lines and ejecting them along the current sheet. The resulting drop in pressure pulls more plasma and magnetic flux into the central region, yielding a self-sustaining process. The importance of Dungey's concept of a localized breakdown of ideal-MHD is that the outflow along the current sheet prevents the build-up in plasma pressure that would otherwise choke off the inflow. In Parker-Sweet reconnection the outflow is only along a thin layer at the centre of the current sheet and this limits the reconnection rate that can be achieved to low values. On the other hand, in Petschek reconnection the outflow region is much broader, being between shock fronts (now thought to be Alfvén waves) that stand in the inflow: this allows much faster escape of the plasma frozen-in on reconnected field lines and the reconnection rate can be much higher. Dungey coined the term "reconnection" because he initially envisaged field lines of the inflow topology breaking and then joining together again in the outflow topology. However, this means that magnetic monopoles would exist, albeit for a very limited period, which would violate Maxwell's equation that the divergence of the field is zero. However, by considering the evolution through the separatrix topology, the need to invoke magnetic monopoles is avoided. Global numerical MHD models of the magnetosphere, which use the equations of ideal MHD, still simulate magnetic reconnection even though it is a breakdown of ideal MHD. 
The reason is close to Dungey's original thoughts: at each time step of the numerical model the equations of ideal MHD are solved at each grid point of the simulation to evaluate the new field and plasma conditions. The magnetic field lines then have to be re-traced. The tracing algorithm makes errors at thin current sheets and joins field lines up by threading the current sheet where they were previously aligned with the current sheet. This is often called "numerical resistivity" and the simulations have predictive value because the error propagates according to a diffusion equation. A current problem in plasma physics is that observed reconnection happens much faster than predicted by MHD in high Lundquist number plasmas (i.e. fast magnetic reconnection). Solar flares, for example, proceed 13–14 orders of magnitude faster than a naive calculation would suggest, and several orders of magnitude faster than current theoretical models that include turbulence and kinetic effects. One possible mechanism to explain the discrepancy is that the electromagnetic turbulence in the boundary layer is sufficiently strong to scatter electrons, raising the plasma's local resistivity. This would allow the magnetic flux to diffuse faster. Properties Physical interpretation The qualitative description of the reconnection process is such that magnetic field lines from different magnetic domains (defined by the field line connectivity) are spliced to one another, changing their patterns of connectivity with respect to the sources. It is a violation of an approximate conservation law in plasma physics, called Alfvén's theorem (also called the "frozen-in flux theorem") and can concentrate mechanical or magnetic energy in both space and time. Solar flares, the largest explosions in the Solar System, may involve the reconnection of large systems of magnetic flux on the Sun, releasing, in minutes, energy that has been stored in the magnetic field over a period of hours to days. Magnetic reconnection in Earth's magnetosphere is one of the mechanisms responsible for the aurora, and it is important to the science of controlled nuclear fusion because it is one mechanism preventing magnetic confinement of the fusion fuel. In an electrically conductive plasma, magnetic field lines are grouped into 'domains'— bundles of field lines that connect from a particular place to another particular place, and that are topologically distinct from other field lines nearby. This topology is approximately preserved even when the magnetic field itself is strongly distorted by the presence of variable currents or motion of magnetic sources, because effects that might otherwise change the magnetic topology instead induce eddy currents in the plasma; the eddy currents have the effect of canceling out the topological change. Types of reconnection In two dimensions, the most common type of magnetic reconnection is separator reconnection, in which four separate magnetic domains exchange magnetic field lines. Domains in a magnetic plasma are separated by separatrix surfaces: curved surfaces in space that divide different bundles of flux. Field lines on one side of the separatrix all terminate at a particular magnetic pole, while field lines on the other side all terminate at a different pole of similar sign. 
Since each field line generally begins at a north magnetic pole and ends at a south magnetic pole, the most general way of dividing simple flux systems involves four domains separated by two separatrices: one separatrix surface divides the flux into two bundles, each of which shares a south pole, and the other separatrix surface divides the flux into two bundles, each of which shares a north pole. The intersection of the separatrices forms a separator, a single line that is at the boundary of the four separate domains. In separator reconnection, field lines enter the separator from two of the domains, and are spliced one to the other, exiting the separator in the other two domains. In three dimensions, the geometry of the field lines becomes more complicated than the two-dimensional case and it is possible for reconnection to occur in regions where a separator does not exist, but with the field lines connected by steep gradients. These regions are known as quasi-separatrix layers (QSLs), and have been observed in theoretical configurations and solar flares. Theoretical descriptions Slow reconnection: Sweet–Parker model The first theoretical framework of magnetic reconnection was established by Peter Sweet and Eugene Parker at a conference in 1956. Sweet pointed out that by pushing two plasmas with oppositely directed magnetic fields together, resistive diffusion is able to occur on a length scale much shorter than a typical equilibrium length scale. Parker was in attendance at this conference and developed scaling relations for this model during his return travel. The Sweet–Parker model describes time-independent magnetic reconnection in the resistive MHD framework when the reconnecting magnetic fields are antiparallel (oppositely directed) and effects related to viscosity and compressibility are unimportant. The inflow velocity is simply an E×B drift velocity, so

v_{in} = \frac{E_y}{B}

where E_y is the out-of-plane electric field, v_{in} is the characteristic inflow velocity, and B is the characteristic upstream magnetic field strength. By neglecting displacement current, the low-frequency Ampère's law, \mathbf{J} = \nabla \times \mathbf{B} / \mu_0, gives the relation

J_y \sim \frac{B}{\mu_0 \delta}

where \delta is the current sheet half-thickness. This relation uses that the magnetic field reverses over a distance of about 2\delta. By matching the ideal electric field outside of the layer with the resistive electric field inside the layer (using Ohm's law), we find that

v_{in} \sim \frac{\eta}{\delta}

where \eta is the magnetic diffusivity. When the inflow density is comparable to the outflow density, conservation of mass yields the relationship

v_{in} L \sim v_{out} \delta

where L is the half-length of the current sheet and v_{out} is the outflow velocity. The left and right hand sides of the above relation represent the mass flux into the layer and out of the layer, respectively. Equating the upstream magnetic pressure with the downstream dynamic pressure gives

\frac{B^2}{2\mu_0} \sim \frac{\rho\, v_{out}^2}{2}

where \rho is the mass density of the plasma. Solving for the outflow velocity then gives

v_{out} \sim \frac{B}{\sqrt{\mu_0 \rho}} \equiv v_A

where v_A is the Alfvén velocity. 
With the above relations, the dimensionless reconnection rate R = v_{in}/v_A can then be written in two forms, the first in terms of \eta and \delta using the result earlier derived from Ohm's law, the second in terms of \delta and L from the conservation of mass, as

R \sim \frac{\eta}{v_A \delta} \qquad \text{and} \qquad R \sim \frac{\delta}{L}

Since the dimensionless Lundquist number is given by

S \equiv \frac{L\, v_A}{\eta}

the two different expressions of R are multiplied by each other and then square-rooted, giving a simple relation between the reconnection rate and the Lundquist number

R \sim S^{-1/2}

Sweet–Parker reconnection allows for reconnection rates much faster than global diffusion, but is not able to explain the fast reconnection rates observed in solar flares, the Earth's magnetosphere, and laboratory plasmas. Additionally, Sweet–Parker reconnection neglects three-dimensional effects, collisionless physics, time-dependent effects, viscosity, compressibility, and downstream pressure. Numerical simulations of two-dimensional magnetic reconnection typically show agreement with this model. Results from the Magnetic Reconnection Experiment (MRX) of collisional reconnection show agreement with a generalized Sweet–Parker model which incorporates compressibility, downstream pressure and anomalous resistivity. Fast reconnection: Petschek model The fundamental reason that Petschek reconnection is faster than Parker-Sweet is that it broadens the outflow region and thereby removes some of the limitation caused by the build up in plasma pressure. The inflow velocity, and thus the reconnection rate, can only be very small if the outflow region is narrow. In 1964, Harry Petschek proposed a mechanism where the inflow and outflow regions are separated by stationary slow mode shocks that stand in the inflows. The aspect ratio of the diffusion region is then of order unity and the maximum reconnection rate becomes

R_{max} \sim \frac{\pi}{8 \ln S}

This expression allows for fast reconnection and is almost independent of the Lundquist number (a numerical sketch of the Sweet–Parker and Petschek estimates is given below). Theory and numerical simulations show that most of the actions of the shocks that were proposed by Petschek can be carried out by Alfvén waves and in particular rotational discontinuities (RDs). In cases of asymmetric plasma densities on the two sides of the current sheet (as at Earth's dayside magnetopause) the Alfvén wave that propagates into the inflow on the higher-density side (in the case of the magnetopause the denser magnetosheath) has a lower propagation speed, and so the field rotation becomes increasingly concentrated at that RD as the field line propagates away from the reconnection site: hence the magnetopause current sheet becomes increasingly concentrated in the outer, slower, RD. Simulations of resistive MHD reconnection with uniform resistivity showed the development of elongated current sheets in agreement with the Sweet–Parker model rather than the Petschek model. When a localized anomalously large resistivity is used, however, Petschek reconnection can be realized in resistive MHD simulations. Because the use of an anomalous resistivity is only appropriate when the particle mean free path is large compared to the reconnection layer, it is likely that other collisionless effects become important before Petschek reconnection can be realized. Anomalous resistivity and Bohm diffusion In the Sweet–Parker model, the common assumption is that the magnetic diffusivity is constant. This can be estimated using the equation of motion for an electron with mass m_e and electric charge e:

m_e \frac{d\mathbf{v}}{dt} = -e\mathbf{E} - m_e \nu \mathbf{v}

where \nu is the collision frequency. 
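Before the resistivity estimate begun above is completed in the next paragraph, the following sketch collects the Sweet–Parker and Petschek scalings derived above. All plasma parameters are illustrative assumptions (loosely solar-corona-like), with a hydrogen plasma and a constant diffusivity assumed; nothing here comes from a specific measurement.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, SI units

def alfven_speed(B, n, m_ion=1.67e-27):
    """Alfvén speed v_A = B / sqrt(mu0 * rho) for a hydrogen plasma
    of number density n (m^-3) and field B (T)."""
    return B / math.sqrt(MU0 * n * m_ion)

def lundquist_number(L, v_A, eta):
    """S = L * v_A / eta, with eta the magnetic diffusivity in m^2/s."""
    return L * v_A / eta

def sweet_parker_rate(S):
    """Dimensionless Sweet-Parker reconnection rate, R ~ S**(-1/2)."""
    return S**-0.5

def petschek_rate(S):
    """Approximate maximum Petschek rate, R ~ pi / (8 ln S)."""
    return math.pi / (8.0 * math.log(S))

# Illustrative coronal-like numbers (assumed): B = 0.01 T, n = 1e15 m^-3,
# current-sheet half-length L = 1e7 m, diffusivity eta = 1 m^2/s.
B, n, L, eta = 1e-2, 1e15, 1e7, 1.0
vA = alfven_speed(B, n)
S = lundquist_number(L, vA, eta)
print(f"v_A ~ {vA:.2e} m/s, S ~ {S:.2e}")
print(f"Sweet-Parker rate ~ {sweet_parker_rate(S):.2e}")
print(f"Petschek rate     ~ {petschek_rate(S):.2e}")
```

With these assumed numbers the Sweet–Parker rate is of order 10^-7 while the Petschek rate is of order 10^-2, which illustrates why a broadened outflow region matters so much for explaining fast events such as flares.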
Since in the steady state, , then the above equation along with the definition of electric current, , where is the electron number density, yields Nevertheless, if the drift velocity of electrons exceeds the thermal velocity of plasma, a steady state cannot be achieved and magnetic diffusivity should be much larger than what is given in the above. This is called anomalous resistivity, , which can enhance the reconnection rate in the Sweet–Parker model by a factor of . Another proposed mechanism is known as the Bohm diffusion across the magnetic field. This replaces the Ohmic resistivity with , however, its effect, similar to the anomalous resistivity, is still too small compared with the observations. Stochastic reconnection In stochastic reconnection, magnetic field has a small scale random component arising because of turbulence. For the turbulent flow in the reconnection region, a model for magnetohydrodynamic turbulence should be used such as the model developed by Goldreich and Sridhar in 1995. This stochastic model is independent of small scale physics such as resistive effects and depends only on turbulent effects. Roughly speaking, in stochastic model, turbulence brings initially distant magnetic field lines to small separations where they can reconnect locally (Sweet-Parker type reconnection) and separate again due to turbulent super-linear diffusion (Richardson diffusion ). For a current sheet of the length , the upper limit for reconnection velocity is given by where . Here , and are turbulence injection length scale and velocity respectively and is the Alfvén velocity. This model has been successfully tested by numerical simulations. Non-MHD process: Collisionless reconnection On length scales shorter than the ion inertial length (where is the ion plasma frequency), ions decouple from electrons and the magnetic field becomes frozen into the electron fluid rather than the bulk plasma. On these scales, the Hall effect becomes important. Two-fluid simulations show the formation of an X-point geometry rather than the double Y-point geometry characteristic of resistive reconnection. The electrons are then accelerated to very high speeds by Whistler waves. Because the ions can move through a wider "bottleneck" near the current layer and because the electrons are moving much faster in Hall MHD than in standard MHD, reconnection may proceed more quickly. Two-fluid/collisionless reconnection is particularly important in the Earth's magnetosphere. Observations Solar atmosphere Magnetic reconnection occurs during solar flares, coronal mass ejections, and many other events in the solar atmosphere. The observational evidence for solar flares includes observations of inflows/outflows, downflowing loops, and changes in the magnetic topology. In the past, observations of the solar atmosphere were done using remote imaging; consequently, the magnetic fields were inferred or extrapolated rather than observed directly. However, the first direct observations of solar magnetic reconnection were gathered in 2012 (and released in 2013) by the High Resolution Coronal Imager. Earth's magnetosphere Magnetic reconnection events that occur in the Earth's magnetosphere (in the dayside magnetopause and in the magnetotail) were for many years inferred because they uniquely explained many aspects of the large-scale behaviour of the magnetosphere and its dependence on the orientation of the near-Earth Interplanetary magnetic field. 
Subsequently, spacecraft such as Cluster II and the Magnetospheric Multiscale Mission. have made observations of sufficient resolution and in multiple locations to observe the process directly and in-situ. Cluster II is a four-spacecraft mission, with the four spacecraft arranged in a tetrahedron to separate the spatial and temporal changes as the suite flies through space. It has observed numerous reconnection events in which the Earth's magnetic field reconnects with that of the Sun (i.e. the Interplanetary Magnetic Field). These include 'reverse reconnection' that causes sunward convection in the Earth's ionosphere near the polar cusps; 'dayside reconnection', which allows the transmission of particles and energy into the Earth's vicinity and 'tail reconnection', which causes auroral substorms by injecting particles deep into the magnetosphere and releasing the energy stored in the Earth's magnetotail. The Magnetospheric Multiscale Mission, launched on 13 March 2015, improved the spatial and temporal resolution of the Cluster II results by having a tighter constellation of spacecraft. This led to a better understanding of the behavior of the electrical currents in the electron diffusion region. On 26 February 2008, THEMIS probes were able to determine the triggering event for the onset of magnetospheric substorms. Two of the five probes, positioned approximately one third the distance to the Moon, measured events suggesting a magnetic reconnection event 96 seconds prior to auroral intensification. Dr. Vassilis Angelopoulos of the University of California, Los Angeles, who is the principal investigator for the THEMIS mission, claimed, "Our data show clearly and for the first time that magnetic reconnection is the trigger.". Laboratory plasma experiments Magnetic reconnection has also been observed in numerous laboratory experiments. For example, studies on the Large Plasma Device (LAPD) at UCLA have observed and mapped quasi-separatrix layers near the magnetic reconnection region of a two flux rope system, while experiments on the Magnetic Reconnection Experiment (MRX) at the Princeton Plasma Physics Laboratory (PPPL) have confirmed many aspects of magnetic reconnection, including the Sweet–Parker model in regimes where the model is applicable. Analysis of the physics of helicity injection, used to create the initial plasma current in the NSTX spherical tokamak, led Dr. Fatima Ebrahimi to propose a plasma thruster that uses fast magnetic reconnection to accelerate plasma to produce thrust for space propulsion. Sawtooth oscillations are periodic mixing events occurring in the tokamak plasma core. The Kadomtsev model describes sawtooth oscillations as a consequence of magnetic reconnection due to displacement of the central region with safety factor caused by the internal kink mode. See also Current sheet Solar corona Magnetic switchback References Further reading Eric Priest, Terry Forbes, Magnetic Reconnection, Cambridge University Press 2000, , contents and sample chapter online Discoveries about magnetic reconnection in space could unlock fusion power, Space.com, 6 February 2008 Nasa MMS-SMART mission, The Magnetospheric Multiscale (MMS) mission, Solving Magnetospheric Acceleration, Reconnection, and Turbulence. Due for launch in 2014. Cluster spacecraft science results External links Magnetism on the Sun Magnetic Reconnection Experiment (MRX) Plasma phenomena Stellar phenomena Solar phenomena Articles containing video clips
Magnetic reconnection
[ "Physics" ]
4,732
[ "Physical phenomena", "Plasma physics", "Plasma phenomena", "Solar phenomena", "Stellar phenomena" ]