id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
13,345,788 | https://en.wikipedia.org/wiki/Freidlin%E2%80%93Wentzell%20theorem | In mathematics, the Freidlin–Wentzell theorem (due to Mark Freidlin and Alexander D. Wentzell) is a result in the large deviations theory of stochastic processes. Roughly speaking, the Freidlin–Wentzell theorem gives an estimate for the probability that a (scaled-down) sample path of an Itō diffusion will stray far from the mean path. This statement is made precise using rate functions. The Freidlin–Wentzell theorem generalizes Schilder's theorem for standard Brownian motion.
Statement
Let B be a standard Brownian motion on Rd starting at the origin, 0 ∈ Rd, and let Xε be an Rd-valued Itō diffusion solving an Itō stochastic differential equation of the form

dXεt = b(Xεt) dt + √ε dBt, Xε0 = 0,

where the drift vector field b : Rd → Rd is uniformly Lipschitz continuous. Then, on the Banach space C0 = C0([0, T]; Rd) equipped with the supremum norm ||⋅||∞, the family of processes (Xε)ε>0 satisfies the large deviations principle with good rate function I : C0 → R ∪ {+∞} given by

I(ω) = (1/2) ∫0T |ω̇(t) − b(ω(t))|² dt

if ω lies in the Sobolev space H1([0, T]; Rd), and I(ω) = +∞ otherwise. In other words, for every open set G ⊆ C0 and every closed set F ⊆ C0,

lim infε→0 ε log P[Xε ∈ G] ≥ −infω∈G I(ω)

and

lim supε→0 ε log P[Xε ∈ F] ≤ −infω∈F I(ω).
References
Asymptotic analysis
Stochastic differential equations
Theorems in statistics
Large deviations theory
Probability theorems | Freidlin–Wentzell theorem | [
"Mathematics"
] | 327 | [
"Mathematical theorems",
"Theorems in statistics",
"Mathematical analysis",
"Theorems in probability theory",
"Asymptotic analysis",
"Mathematical problems"
] |
13,345,968 | https://en.wikipedia.org/wiki/Virtual%20black%20hole | In quantum gravity, a virtual black hole is a hypothetical micro black hole that exists temporarily as a result of a quantum fluctuation of spacetime. It is an example of quantum foam and is the gravitational analog of the virtual electron–positron pairs found in quantum electrodynamics. Theoretical arguments suggest that virtual black holes should have mass on the order of the Planck mass, lifetime around the Planck time, and occur with a number density of approximately one per Planck volume.
The emergence of virtual black holes at the Planck scale is a consequence of the uncertainty relation
ΔR Δx ≥ ℓP² = ħG/c³,
where R is the radius of curvature of a small domain of spacetime, x is the coordinate of the small domain, ℓP is the Planck length, ħ is the reduced Planck constant, G is the Newtonian constant of gravitation, and c is the speed of light. These uncertainty relations are another form of Heisenberg's uncertainty principle at the Planck scale.
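For orientation, the Planck-scale mass, lifetime and number density quoted above follow from the standard definitions of the Planck units; these expressions are supplied here as background and are not part of the original article text:

$$m_P = \sqrt{\hbar c / G} \approx 2.2 \times 10^{-8}\ \mathrm{kg}, \qquad t_P = \sqrt{\hbar G / c^5} \approx 5.4 \times 10^{-44}\ \mathrm{s}, \qquad \ell_P = \sqrt{\hbar G / c^3} \approx 1.6 \times 10^{-35}\ \mathrm{m}.$$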
If virtual black holes exist, they provide a mechanism for proton decay. A black hole's mass increases when matter falls into it and is theorized to decrease when Hawking radiation is emitted, and the elementary particles emitted are, in general, not the same as those that fell in. Therefore, if two of a proton's constituent quarks fall into a virtual black hole, it is possible for an antiquark and a lepton to emerge, thus violating conservation of baryon number.
The existence of virtual black holes aggravates the black hole information loss paradox, as any physical process may potentially be disrupted by interaction with a virtual black hole.
See also
Quantum foam
Virtual particle
Quantum tunnelling
References
Further reading
Quantum gravity
Black holes | Virtual black hole | [
"Physics",
"Astronomy"
] | 341 | [
"Black holes",
"Physical phenomena",
"Physical quantities",
"Unsolved problems in physics",
"Astrophysics",
"Quantum gravity",
"Density",
"Stellar phenomena",
"Astronomical objects",
"Physics beyond the Standard Model"
] |
13,347,172 | https://en.wikipedia.org/wiki/Luminosity%20function%20%28astronomy%29 | In astronomy, a luminosity function gives the number of stars or galaxies per luminosity interval. Luminosity functions are used to study the properties of large groups or classes of objects, such as the stars in clusters or the galaxies in the Local Group.
Note that the term "function" is slightly misleading, and the luminosity function might better be described as a luminosity distribution. Given a luminosity as input, the luminosity function essentially returns the abundance of objects with that luminosity (specifically, number density per luminosity interval).
Main sequence luminosity function
The main sequence luminosity function maps the distribution of main sequence stars according to their luminosity. It is used to compare star formation and death rates, and evolutionary models, with observations. Main sequence luminosity functions vary depending on their host galaxy and on selection criteria for the stars, for example in the Solar neighbourhood or the Small Magellanic Cloud.
White dwarf luminosity function
The white dwarf luminosity function (WDLF) gives the number of white dwarf stars with a given luminosity. As this is determined by the rates at which these stars form and cool, it is of interest for the information it gives about the physics of white dwarf cooling and the age and history of the Galaxy.
Schechter luminosity function
The Schechter luminosity function provides an approximation of the abundance of galaxies in a luminosity interval [L, L + dL]. The luminosity function has units of a number density per unit luminosity and is given by a power law with an exponential cut-off at high luminosity:

Φ(L) dL = (φ*/L*) (L/L*)^α exp(−L/L*) dL,

where L* is a characteristic galaxy luminosity controlling the cut-off, α sets the power-law slope at the faint end, and the normalization φ* has units of number density.
Equivalently, this equation can be expressed in terms of log-quantities with

Φ(L) d(log L) = ln(10) φ* (L/L*)^(α+1) exp(−L/L*) d(log L).

The galaxy luminosity function may have different parameters for different populations and environments; it is not a universal function. One commonly cited set of parameter values has been measured from surveys of field galaxies.
It is often more convenient to rewrite the Schechter function in terms of magnitudes, rather than luminosities. In this case, the Schechter function becomes:

n(M) dM = 0.4 ln(10) φ* [10^(0.4(M* − M))]^(α+1) exp(−10^(0.4(M* − M))) dM.

Note that because the magnitude system is logarithmic, the power law has logarithmic slope 0.4(α + 1). This is why a Schechter function with α = −1 is said to be flat.
Integrals of the Schechter function can be expressed via the incomplete gamma function:

∫L0∞ Φ(L) dL = φ* Γ(α + 1, L0/L*).
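To make the reconstructed form above concrete, here is a minimal numerical sketch; the function name and the parameter values (φ*, L*, α) are illustrative placeholders, not measurements reported in the article.

```python
import math

def schechter(L, phi_star, L_star, alpha):
    """Schechter luminosity function: number density per unit luminosity."""
    x = L / L_star
    return (phi_star / L_star) * x ** alpha * math.exp(-x)

# Illustrative parameters only: normalization (e.g. in Mpc^-3), characteristic
# luminosity (e.g. in solar luminosities), and faint-end slope alpha.
phi_star, L_star, alpha = 5e-3, 1e10, -1.1

for L in (1e8, 1e9, 1e10, 1e11):
    print(f"L = {L:.1e}  Phi(L) = {schechter(L, phi_star, L_star, alpha):.3e}")
```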
Historically, the Schechter luminosity function was inspired by the Press–Schechter model. However, the connection between the two is not straightforward. If one assumes that every dark matter halo hosts one galaxy, then the Press–Schechter model yields a considerably steeper faint-end slope for galaxies than the measured value noted above, which is closer to −1. The reason for this failure is that large halos tend to have a large host galaxy and many smaller satellites, and small halos may not host any galaxies with stars. See, e.g., halo occupation distribution, for a more detailed description of the halo-galaxy connection.
References
Stellar astronomy
Galaxies
Photometry
Equations of astronomy | Luminosity function (astronomy) | [
"Physics",
"Astronomy"
] | 640 | [
"Concepts in astronomy",
"Galaxies",
"Equations of astronomy",
"Astronomical objects",
"Astronomical sub-disciplines",
"Stellar astronomy"
] |
13,347,268 | https://en.wikipedia.org/wiki/Radiobiology | Radiobiology (also known as radiation biology, and uncommonly as actinobiology) is a field of clinical and basic medical sciences that involves the study of the effects of ionizing radiation on living things, in particular health effects of radiation. Ionizing radiation is generally harmful and potentially lethal to living things but can have health benefits in radiation therapy for the treatment of cancer and thyrotoxicosis. Its most common impact is the induction of cancer with a latent period of years or decades after exposure. High doses can cause visually dramatic radiation burns, and/or rapid fatality through acute radiation syndrome. Controlled doses are used for medical imaging and radiotherapy.
Health effects
In general, ionizing radiation is harmful and potentially lethal to living beings but can have health benefits in radiation therapy for the treatment of cancer and thyrotoxicosis.
Most adverse health effects of radiation exposure may be grouped in two general categories:
deterministic effects (harmful tissue reactions) due in large part to the killing or malfunction of cells following high doses; and
stochastic effects, i.e., cancer and heritable effects involving either cancer development in exposed individuals owing to mutation of somatic cells or heritable disease in their offspring owing to mutation of reproductive (germ) cells.
Stochastic
Some effects of ionizing radiation on human health are stochastic, meaning that their probability of occurrence increases with dose, while the severity is independent of dose. Radiation-induced cancer, teratogenesis, cognitive decline, and heart disease are all stochastic effects induced by ionizing radiation.
Its most common impact is the stochastic induction of cancer with a latent period of years or decades after exposure. The mechanism by which this occurs is well understood, but quantitative models predicting the level of risk remain controversial. The most widely accepted model posits that the incidence of cancers due to ionizing radiation increases linearly with effective radiation dose at a rate of 5.5% per sievert. If this linear model is correct, then natural background radiation is the most hazardous source of radiation to general public health, followed by medical imaging as a close second.
Quantitative data on the effects of ionizing radiation on human health is relatively limited compared to other medical conditions because of the low number of cases to date, and because of the stochastic nature of some of the effects. Stochastic effects can only be measured through large epidemiological studies where enough data has been collected to remove confounding factors such as smoking habits and other lifestyle factors. The richest source of high-quality data comes from the study of Japanese atomic bomb survivors. In vitro and animal experiments are informative, but radioresistance varies greatly across species.
The added lifetime risk of developing cancer by a single abdominal CT of 8 mSv is estimated to be 0.05%, or 1 in 2,000.
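As a rough consistency check using only the figures quoted in this article (and the linear model stated above), 8 mSv × 5.5% per sievert = 0.008 Sv × 0.055/Sv ≈ 0.044%, i.e. roughly 1 in 2,300, which is of the same order as the quoted estimate of 0.05%, or 1 in 2,000.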
Deterministic
Deterministic effects are those that reliably occur above a threshold dose, and their severity increases with dose.
Deterministic effects are not necessarily more or less serious than stochastic effects; either can ultimately lead to a temporary nuisance or a fatality. Examples of deterministic effects are:
Acute radiation syndrome, by acute whole-body radiation
Radiation burns, from radiation to a particular body surface
Radiation-induced thyroiditis, a potential side effect from radiation treatment against hyperthyroidism
Chronic radiation syndrome, from long-term radiation.
Radiation-induced lung injury, from for example radiation therapy to the lungs
Cataracts and infertility.
The US National Academy of Sciences Biological Effects of Ionizing Radiation Committee "has concluded that there is no compelling evidence to indicate a dose threshold below which the risk of tumor induction is zero".
By type of radiation
When alpha particle emitting isotopes are ingested, they are far more dangerous than their half-life or decay rate would suggest. This is due to the high relative biological effectiveness of alpha radiation to cause biological damage after alpha-emitting radioisotopes enter living cells. Ingested alpha emitter radioisotopes such as transuranics or actinides are an average of about 20 times more dangerous, and in some experiments up to 1000 times more dangerous than an equivalent activity of beta emitting or gamma emitting radioisotopes. If the radiation type is not known, it can be determined by differential measurements in the presence of electrical fields, magnetic fields, or with varying amounts of shielding.
In pregnancy
The risk of developing radiation-induced cancer at some point in life is greater when exposing a fetus than an adult, both because the cells are more vulnerable while they are growing, and because there is a much longer remaining lifespan after the dose during which a cancer can develop. If there is too much radiation exposure there could be harmful effects on the unborn child or on reproductive organs. Research shows that scanning more than once in nine months can harm the unborn child.
Possible deterministic effects of radiation exposure in pregnancy include miscarriage, structural birth defects, growth restriction and intellectual disability. The deterministic effects have been studied in, for example, survivors of the atomic bombings of Hiroshima and Nagasaki and in cases where radiation therapy has been necessary during pregnancy:
The intellectual deficit has been estimated to be about 25 IQ points per 1,000 mGy at 10 to 17 weeks of gestational age.
These effects are sometimes relevant when deciding about medical imaging in pregnancy, since projectional radiography and CT scanning exposes the fetus to radiation.
Also, the risk for the mother of later acquiring radiation-induced breast cancer seems to be particularly high for radiation doses during pregnancy.
Measurement
The human body cannot sense ionizing radiation except in very high doses, but the effects of ionization can be used to characterize the radiation. Parameters of interest include disintegration rate, particle flux, particle type, beam energy, kerma, dose rate, and radiation dose.
The monitoring and calculation of doses to safeguard human health is called dosimetry and is undertaken within the science of health physics. Key measurement tools are the use of dosimeters to give the external effective dose uptake and the use of bio-assay for ingested dose. The article on the sievert summarises the recommendations of the ICRU and ICRP on the use of dose quantities and includes a guide to the effects of ionizing radiation as measured in sieverts, and gives examples of approximate figures of dose uptake in certain situations.
The committed dose is a measure of the stochastic health risk due to an intake of radioactive material into the human body. The ICRP states "For internal exposure, committed effective doses are generally determined from an assessment of the intakes of radionuclides from bioassay measurements or other quantities. The radiation dose is determined from the intake using recommended dose coefficients".
Absorbed, equivalent and effective dose
The absorbed dose is a physical dose quantity D representing the mean energy imparted to matter per unit mass by ionizing radiation. In the SI system of units, the unit of measure is joules per kilogram, and its special name is gray (Gy). The non-SI CGS unit rad is sometimes also used, predominantly in the USA.
To represent stochastic risk the equivalent dose HT and effective dose E are used, and appropriate dose factors and coefficients are used to calculate these from the absorbed dose. Equivalent and effective dose quantities are expressed in units of the sievert or rem, which implies that biological effects have been taken into account. These are usually calculated in accordance with the recommendations of the International Commission on Radiological Protection (ICRP) and the International Commission on Radiation Units and Measurements (ICRU). The coherent system of radiological protection quantities developed by them is shown in the accompanying diagram.
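For orientation, the relationship between these quantities is conventionally summarized by the ICRP definitions below; this is a standard-form summary supplied here, not a quotation from the article:

$$H_T = \sum_R w_R \, D_{T,R}, \qquad E = \sum_T w_T \, H_T,$$

where $D_{T,R}$ is the absorbed dose to tissue $T$ from radiation of type $R$, $w_R$ is the radiation weighting factor, and $w_T$ is the tissue weighting factor.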
Organizations
The International Commission on Radiological Protection (ICRP) manages the International System of Radiological Protection, which sets recommended limits for dose uptake. Dose values may represent absorbed, equivalent, effective, or committed dose.
Other important organizations studying the topic include:
International Commission on Radiation Units and Measurements (ICRU)
United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR)
US National Council on Radiation Protection and Measurements (NCRP)
UK Health Security Agency (UKHSA)
US National Academy of Sciences (NAS through the BEIR studies)
French Institut de radioprotection et de sûreté nucléaire (IRSN)
European Committee on Radiation Risk (ECRR)
Exposure pathways
External
External exposure is exposure which occurs when the radioactive source (or other radiation source) is outside (and remains outside) the organism which is exposed. Examples of external exposure include:
A person who places a sealed radioactive source in his pocket
A space traveller who is irradiated by cosmic rays
A person who is treated for cancer by either teletherapy or brachytherapy. While in brachytherapy the source is inside the person it is still considered external exposure because it does not result in a committed dose.
A nuclear worker whose hands have been dirtied with radioactive dust. Assuming that his hands are cleaned before any radioactive material can be absorbed, inhaled or ingested, skin contamination is considered to be external exposure.
External exposure is relatively easy to estimate, and the irradiated organism does not become radioactive, except for a case where the radiation is an intense neutron beam which causes activation.
By type of medical imaging
Internal
Internal exposure occurs when the radioactive material enters the organism, and the radioactive atoms become incorporated into the organism. This can occur through inhalation, ingestion, or injection. Below are a series of examples of internal exposure.
The exposure caused by potassium-40 present within a normal person.
The exposure to the ingestion of a soluble radioactive substance, such as 89Sr in cows' milk.
A person who is being treated for cancer by means of a radiopharmaceutical where a radioisotope is used as a drug (usually a liquid or pill). A review of this topic was published in 1999. Because the radioactive material becomes intimately mixed with the affected object it is often difficult to decontaminate the object or person in a case where internal exposure is occurring. While some very insoluble materials such as fission products within a uranium dioxide matrix might never be able to truly become part of an organism, it is normal to consider such particles in the lungs and digestive tract as a form of internal contamination which results in internal exposure.
Boron neutron capture therapy (BNCT) involves injecting a boron-10 tagged chemical that preferentially binds to tumor cells. Neutrons from a nuclear reactor are shaped by a neutron moderator to the neutron energy spectrum suitable for BNCT treatment. The tumor is selectively bombarded with these neutrons. The neutrons quickly slow down in the body to become low energy thermal neutrons. These thermal neutrons are captured by the injected boron-10, forming excited boron-11, which breaks down into lithium-7 and a helium-4 alpha particle; both of these produce closely spaced ionizing radiation. This concept is described as a binary system using two separate components for the therapy of cancer. Each component in itself is relatively harmless to the cells, but when combined for treatment they produce a highly cytocidal (cytotoxic) effect which is lethal (within a limited range of 5–9 micrometers, or approximately one cell diameter). Clinical trials, with promising results, are currently being carried out in Finland and Japan.
When radioactive compounds enter the human body, the effects are different from those resulting from exposure to an external radiation source. Especially in the case of alpha radiation, which normally does not penetrate the skin, the exposure can be much more damaging after ingestion or inhalation. The radiation exposure is normally expressed as a committed dose.
History
Although radiation was discovered in late 19th century, the dangers of radioactivity and of radiation were not immediately recognized. Acute effects of radiation were first observed in the use of X-rays when German physicist Wilhelm Röntgen intentionally subjected his fingers to X-rays in 1895. He published his observations concerning the burns that developed, though he misattributed them to ozone, a free radical produced in air by X-rays. Other free radicals produced within the body are now understood to be more important. His injuries healed later.
As a field of medical sciences, radiobiology originated from Leopold Freund's 1896 demonstration of the therapeutic treatment of a hairy mole using the newly discovered form of electromagnetic radiation called X-rays. After irradiating frogs and insects with X-rays in early 1896, Ivan Romanovich Tarkhanov concluded that these newly discovered rays not only photograph, but also "affect the living function". At the same time, Pierre and Marie Curie discovered the radioactive polonium and radium later used to treat cancer.
The genetic effects of radiation, including the effects on cancer risk, were recognized much later. In 1927 Hermann Joseph Muller published research showing genetic effects, and in 1946 was awarded the Nobel prize for his findings.
More generally, the 1930s saw attempts to develop a general model for radiobiology. Notable here was Douglas Lea, whose presentation also included an exhaustive review of some 400 supporting publications.
Before the biological effects of radiation were known, many physicians and corporations had begun marketing radioactive substances as patent medicine and radioactive quackery. Examples were radium enema treatments, and radium-containing waters to be drunk as tonics. Marie Curie spoke out against this sort of treatment, warning that the effects of radiation on the human body were not well understood. Curie later died of aplastic anemia caused by radiation poisoning. Eben Byers, a famous American socialite, died of multiple cancers (but not acute radiation syndrome) in 1932 after consuming large quantities of radium over several years; his death drew public attention to dangers of radiation. By the 1930s, after a number of cases of bone necrosis and death in enthusiasts, radium-containing medical products had nearly vanished from the market.
In the United States, the experience of the so-called Radium Girls, where thousands of radium-dial painters contracted oral cancers— but no cases of acute radiation syndrome— popularized the warnings of occupational health associated with radiation hazards. Robley D. Evans, at MIT, developed the first standard for permissible body burden of radium, a key step in the establishment of nuclear medicine as a field of study. With the development of nuclear reactors and nuclear weapons in the 1940s, heightened scientific attention was given to the study of all manner of radiation effects.
The atomic bombings of Hiroshima and Nagasaki resulted in a large number of incidents of radiation poisoning, allowing for greater insight into its symptoms and dangers. Red Cross Hospital surgeon Dr. Terufumi Sasaki led intensive research into the Syndrome in the weeks and months following the Hiroshima bombings. Sasaki and his team were able to monitor the effects of radiation in patients of varying proximities to the blast itself, leading to the establishment of three recorded stages of the syndrome. Within 25–30 days of the explosion, the Red Cross surgeon noticed a sharp drop in white blood cell count and established this drop, along with symptoms of fever, as prognostic standards for Acute Radiation Syndrome. Actress Midori Naka, who was present during the atomic bombing of Hiroshima, was the first incident of radiation poisoning to be extensively studied. Her death on August 24, 1945, was the first death ever to be officially certified as a result of radiation poisoning (or "atomic bomb disease").
The Atomic Bomb Casualty Commission and the Radiation Effects Research Foundation have been monitoring the health status of the survivors and their descendants since 1946. They have found that radiation exposure increases cancer risk, but also that the average lifespan of survivors was reduced by only a few months compared to those not exposed to radiation. No health effects of any sort have thus far been detected in children of the survivors.
Areas of interest
The interactions between organisms and electromagnetic fields (EMF) and ionizing radiation can be studied in a number of ways:
Radiation physics
Radiation chemistry
Molecular and cell biology
Molecular genetics
Cell death and apoptosis
High and low-level electromagnetic radiation and health
Specific absorption rates of organisms
Radiation poisoning
Radiation oncology (radiation therapy in cancer)
Bioelectromagnetics
Electric field and Magnetic field - their general nature.
Electrophysiology - the scientific study of the electrical properties of biological cells and tissues.
Biomagnetism - the magnetic properties of living systems (see, for example, the research of David Cohen using SQUID imaging) and Magnetobiology - the study of effect of magnets upon living systems. See also Electromagnetic radiation and health
Bioelectromagnetism - the electromagnetic properties of living systems and Bioelectromagnetics - the study of the effect of electromagnetic fields on living systems.
Electrotherapy
Radiation therapy
Radiogenomics
Transcranial magnetic stimulation - a powerful electric current produces a transient, spatially focussed magnetic field that can penetrate the scalp and skull of a subject and induce electrical activity in the neurons on the surface of the brain.
Magnetic resonance imaging - a very powerful magnetic field is used to obtain a 3D image of the density of water molecules of the brain, revealing different anatomical structures. A related technique, functional magnetic resonance imaging, reveals the pattern of blood flow in the brain and can show which parts of the brain are involved in a particular task.
Embryogenesis, Ontogeny and Developmental biology - a discipline that has given rise to many scientific field theories.
Bioenergetics - the study of energy exchange on the molecular level of living systems.
Biological psychiatry, Neurology, Psychoneuroimmunology
Radiation sources for experimental radiobiology
Radiobiology experiments typically make use of a radiation source which could be:
An isotopic source, typically 137Cs or 60Co.
A particle accelerator generating high energy protons, electrons or charged ions. Biological samples can be irradiated using either a broad, uniform beam, or using a microbeam, focused down to cellular or subcellular sizes.
A UV lamp.
See also
Biological effects of radiation on the epigenome
Cell survival curve
Health threat from cosmic rays
NASA Space Radiation Laboratory
Radioactivity in biology
Radiology
Radiophobia
Radiosensitivity
References
Sources
ICRP, 2007. The 2007 Recommendations of the International Commission on Radiological Protection. ICRP Publication 103. Ann. ICRP 37 (2-4).
Further reading
Eric Hall, Radiobiology for the Radiologist. 2006. Lippincott
G.Gordon Steel, "Basic Clinical Radiobiology". 2002. Hodder Arnold.
The Institute for Radiation Biology at the Helmholtz-Center for Environmental Health | Radiobiology | [
"Physics",
"Chemistry",
"Materials_science",
"Biology"
] | 3,823 | [
"Radiation health effects",
"Applied and interdisciplinary physics",
"Radiobiology",
"Medical physics",
"Radiation effects",
"Radioactivity"
] |
13,347,566 | https://en.wikipedia.org/wiki/Meromyosin | Meromyosin is a part of myosin (mero meaning "part of"). With regards to human anatomy myosin and actin constitute the basic functional unit of a muscle fiber, called sarcomere, playing a role in muscle contraction.
Biochemically viewed meromyosin form subunits of the actin-associated motor protein, myosin, as commonly obtained by trypsin proteolysis (protein breakdown). Following this proteolysis, two types of meromyosin are formed: heavy meromyosin (HMM) and light meromyosin (LMM).
Light meromyosin has a long, straight portion in the “tail” region. Heavy meromyosin (HMM) is a protein chain terminating in a globular head portion/cross bridge. HMM consists of two subunits, Heavy Meromyosin Subunit 1 and 2 (HMMS-1 and HMMS-2). The majority of myosin activity is concentrated in HMMS-1. HMMS-1 has an actin binding site and ATP binding site (myosin ATPase) that determines the rate of muscle contraction when muscle is stretched.
Light and heavy meromyosin are subunits of myosin filaments (thick myofilaments).
References
Motor proteins | Meromyosin | [
"Chemistry",
"Biology"
] | 278 | [
"Biotechnology stubs",
"Motor proteins",
"Biochemistry stubs",
"Molecular machines",
"Biochemistry"
] |
13,350,759 | https://en.wikipedia.org/wiki/Dielectric%20wireless%20receiver | Dielectric wireless receiver is a type of radiofrequency receiver front-end featuring a complete absence of electronic circuitry and metal interconnects. It offers immunity against damage from intense electromagnetic radiation, produced by EMP and HPM sources. This receiver is known as ADNERF (an acronym used to signify an All-Dielectric Non-Electronic Radio Front-End). ADNERF is a type of Electro-Magnetic Pulse Tolerant Microwave Receiver (EMPiRe).
Background
The continuing trend towards reduced feature size and voltage in integrated circuits renders modern electronics highly susceptible to damage caused by High Power Microwave (HPM) and other microwave based directed energy sources. These induce high voltage transient surges of thousands of volts which can punch through the gate insulator in the transistor and can destroy the circuit's metal interconnects. To immunize electronic systems against such threats, the “soft spots” (metal and transistor) in a conventional receiver front-end, must be eliminated.
Operation
The basic concept of this photonic-assisted all-dielectric RF front-end technology is shown in Fig. 1. The Dielectric Resonator Antenna (DRA) in the front-end, functions as a concentrator of incoming electromagnetic field. When the electromagnetic (EM) field excites the resonance of DRA, a mode field pattern is built up inside the structure. The electro-optical (EO) resonator is placed at the location of the peak field magnitude (Fig. 2). The EO resonator converts the received EM signal to an intensity modulated optical signal which is then carried away from the antenna front-end via an optical fiber. At the remote location, the signal is converted back to an RF signal which is then amplified and processed using conventional techniques.
This front-end design significantly increases the threshold for damage associated with high power microwave signals. The lack of metal interconnects eliminates the one source of failure. In addition, the charge isolation provided by the optical link protects the electronic circuitry. Good sensitivity can be achieved due to signal enhancement provided by the microwave resonance in the DRA and optical resonance in the EO resonator. The modulating E-field (ERF) applied to the resonator should not be uniform across the disk otherwise no modulation occurs. To prevent this from happening, the EO resonator is placed off center from the symmetrical axis of DRA as shown in Fig. 2. The location of the EO resonator is chosen to coincide with the peak EM field inside the DRA, which is identified using 3-D EM simulations. The field profile shown in Figure 2b does not include the presence of EO resonator. In practice the presence of the EO crystal will change the field distribution.
References
Abrams, M. Dawn of the e-bomb. IEEE Spectrum 40, 24-30 (2003).
R. C. J. Hsu, A. Ayazi, B. Houshmand and B. Jalali, “All-Dielectric Photonic-Assisted Radio Front-End Technology,” Nature Photonics 1, 535–538 (2007).
A. Ayazi, C. J. Hsu, B. Houshmand, W. H. Steier, and B. Jalali, “All-dielectric photonics assisted wireless receiver,” Optics Express (2008).
DARPA's EMPiRe program.
Nonlinear optics
Optical devices
Radio frequency antenna types
Antennas (radio) | Dielectric wireless receiver | [
"Materials_science",
"Engineering"
] | 740 | [
"Glass engineering and science",
"Optical devices"
] |
16,027,304 | https://en.wikipedia.org/wiki/Interferometric%20modulator%20display | Interferometric modulator display (IMOD, trademarked mirasol) is a technology used in electronic visual displays that can create various colors via interference of reflected light. The color is selected with an electrically switched light modulator comprising a microscopic cavity that is switched on and off using driver integrated circuits similar to those used to address liquid crystal displays (LCD). An IMOD-based reflective flat panel display includes hundreds of thousands of individual IMOD elements each a microelectromechanical systems (MEMS)-based device.
In one state, an IMOD subpixel absorbs incident light and appears black to the viewer. In a second state, it reflects light at a specific wavelength, using a diffraction grating effect. When not being addressed, an IMOD display consumes very little power. Unlike conventional back-lit liquid crystal displays, it is clearly visible in bright ambient light such as sunlight. IMOD prototypes as of mid-2010 could emit 15 frames per second (fps), and in November 2011 Qualcomm demonstrated another prototype reaching 30 fps, suitable for video playback. The smartwatch Qualcomm Toq features this display with 40 fps.
Mirasol screens could produce 60 Hz video, but doing so quickly drained the battery. Devices that used the screen had colors that looked washed out, so the technology never saw mainstream support.
Working principle
The basic elements of an IMOD-based display are microscopic devices that act essentially as mirrors that can be switched on or off individually. Each of these elements reflects only one exact wavelength of light, such as a specific hue of red, green or blue, when turned on, and absorbs light (appears black) when off. Elements are organised into a rectangular array in order to produce a display screen.
An array of elements that all reflect the same color when turned on produces a monochromatic display, for example black and red (in this example using IMOD elements that reflect red light when "on"). As each element reflects only a certain amount of light, grouping several elements of the same color together as subpixels allows different brightness levels for a pixel based on how many elements are reflective at a particular time.
Multiple color displays are created by using subpixels, each designed to reflect a specific different color. Multiple elements of each color are generally used to both give more combinations of displayable color (by mixing the reflected colors) and to balance the overall brightness of the pixel.
Because elements only use power in order to switch between on and off states (no power is needed to reflect or absorb light hitting the display once the element is either reflecting or absorbing), IMOD-based displays potentially use much less power than displays that generate light and/or need constant power to keep pixels in a particular state. Being a reflective display, they require an external light source (such as daylight or a lamp) to be readable, just like paper or other electronic paper technologies.
Details
A pixel in an IMOD-based display consists of one or more subpixels that are individual microscopic interferometric cavities similar in operation to Fabry–Pérot interferometers (etalons). While a simple etalon consists of two half-silvered mirrors, an IMOD comprises a reflective membrane which can move in relation to a semi-transparent thin film stack. With an air gap defined within this cavity, the IMOD behaves like an optically resonant structure whose reflected color is determined by the size of the airgap. Application of a voltage to the IMOD creates electrostatic forces which bring the membrane into contact with the thin film stack. When this happens the behavior of the IMOD changes to that of an induced absorber. The consequence is that almost all incident light is absorbed and no colors are reflected. It is this binary operation that is the basis for the IMOD's application in reflective flat panel displays. Since the display utilizes light from ambient sources, the display's brightness increases in high ambient environments (i.e. sunlight). In contrast, a back-lit LCD suffers from incident light.
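As a rough idealization (a textbook thin-film relation supplied for orientation, not a specification from the article), the wavelength reflected most strongly by an air gap of size $d$ at normal incidence, ignoring phase shifts at the mirror surfaces, satisfies

$$2d = m\lambda, \qquad m = 1, 2, 3, \ldots$$

so the MEMS-defined gap selects which visible wavelength is resonantly reflected, while collapsing the gap changes the optical response to the induced-absorber ("black") state described above.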
For a practical RGB color model (RGB) display, a single RGB pixel is built from several subpixels, because the brightness of a monochromatic pixel is not adjusted. A monochromatic array of subpixels represents different brightness levels for each color, and for each pixel, there are three such arrays: red, green and blue.
Development
The IMOD technology was invented by Mark W. Miles, a MEMS researcher and founder of Etalon, Inc., and (co-founder) of Iridigm Display Corporation. Qualcomm took over the development of this technology after its acquisition of Iridigm in 2004, and subsequently formed Qualcomm MEMS Technologies (QMT). Qualcomm has allowed commercialization of the technology under the trademark name "mirasol". This energy-efficient, biomimetic technology sees application and use in portable electronics such as e-book readers and mobile phones.
IMOD panel manufacturers include Qualcomm in conjunction with Foxlink, which established a joint venture with Sollink (高強光電) in 2009, with a facility dedicated to manufacturing IMOD panels. Production began in January 2011, with the fabricated panels intended for devices such as e-readers.
As of 2015, the IMOD Mirasol display laboratory in Longtan, Taiwan, formerly run by Qualcomm, is now apparently run by Apple.
Uses
IMOD displays are now available in the commercial marketplace. QMT's displays, using IMOD technology, are found in the Acoustic Research ARWH1 Stereo Bluetooth headset device, the Showcare Monitoring system (Korea), the Hisense C108, and MP3 applications from Freestyle Audio and Skullcandy. In the mobile phone marketplace, Taiwanese manufacturers Inventec and Cal-Comp have announced phones with mirasol displays, and LG claims to be developing "one or more" handsets using mirasol technology. These products all have only two-color (black plus one other) "bi-chromic" displays. A multi-color IMOD display is used in the Qualcomm Toq smartwatch.
References
Bibliography
Display technology
Qualcomm | Interferometric modulator display | [
"Engineering"
] | 1,303 | [
"Electronic engineering",
"Display technology"
] |
16,032,107 | https://en.wikipedia.org/wiki/Becher%20process | The Becher process is a process to produce rutile, a form of titanium dioxide, from the ore ilmenite. Although it is competitive with the chloride process and the sulfate process,
. the Becher process is not used on scale.
With the idealized formula FeTiO3, ilmenite contains 55-65% titanium dioxide, the rest being iron oxide. The Becher process, like other beneficiation processes, aims to remove iron. The Becher process exploits the conversion of the ferrous iron (FeO) to ferric iron (Fe2O3). Ilmenite ores can be upgraded to synthetic rutile by increasing their TiO2 content to between 90 and 96 percent.
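As a back-of-the-envelope check on the figures above (this calculation uses standard atomic masses and is not taken from the article): stoichiometric FeTiO3 has a molar mass of about 55.8 + 47.9 + 3 × 16.0 ≈ 151.7 g/mol, of which the TiO2 component accounts for about 79.9 g/mol, i.e. roughly 53 wt% TiO2. Natural ilmenite concentrates reach the higher quoted values largely because weathering removes part of the iron.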
History
This technology was developed in the early 1960s in Western Australia by a joint initiative between industry and government. The process was named after Robert Gordon Becher, who while working at the Western Australian Government Chemical Laboratories (the precursor to ChemCentre) invented, developed and introduced the technique to the Western Australian Mineral Sands industry. The process was patented in 1961.
Process
The Becher process is suitable for weathered ilmenite that has low concentrations of chromium and magnesium. There are four steps involved in removing the iron portion of the ilmenite:
Oxidation
Reduction
Aeration
Leaching
Oxidation
Oxidation involves heating the ilmenite in a rotary kiln with air to convert iron to iron(III) oxide:
4 FeTiO3 + O2 → 2 Fe2O3·TiO2 + 2 TiO2
This step is suitable for a range of ilmenite-containing feedstocks.
Reduction
Reduction is performed in a rotary kiln charged with the oxidized product (pseudobrookite, Fe2O3·TiO2), coal, and sulfur, heated to a temperature greater than 1200 °C. Carbon monoxide generated from the coal reduces the iron oxide in the mineral grains to metallic iron, producing reduced ilmenite:
Fe2O3·TiO2 + 3 CO → 2 Fe + TiO2 + 3 CO2
The "reduced ilmenite" is separated from the char prior to the next step.
Aeration
Aeration involves the removal of the metallic iron created in the previous step by "rusting" it out. This conversion is achieved in large tanks containing 1% ammonium chloride solution, with air being pumped through the tank. The tank is continuously agitated, and the iron rusts and precipitates in the form of a slime.
4 Fe + 3 O2 → 2 Fe2O3
The finer iron oxide is then separated from the larger particles of synthetic rutile.
Acid leach
Once the majority of the iron oxide has been removed, the remainder is leached away using 0.5 M sulfuric acid.
References
Further reading
Chemical processes
Industrial processes
Titanium processes | Becher process | [
"Chemistry"
] | 569 | [
"Metallurgical processes",
"Titanium processes",
"Chemical processes",
"nan",
"Chemical process engineering"
] |
7,032,835 | https://en.wikipedia.org/wiki/Knudsen%20diffusion | Knudsen diffusion, named after Martin Knudsen, is a means of diffusion that occurs when the scale length of a system is comparable to or smaller than the mean free path of the particles involved. An example of this is in a long pore with a narrow diameter (2–50 nm) because molecules frequently collide with the pore wall. As another example, consider the diffusion of gas molecules through very small capillary pores. If the pore diameter is smaller than the mean free path of the diffusing gas molecules, and the density of the gas is low, the gas molecules collide with the pore walls more frequently than with each other, leading to Knudsen diffusion.
In fluid mechanics, the Knudsen number is a good measure of the relative importance of Knudsen diffusion. A Knudsen number much greater than one indicates Knudsen diffusion is important. In practice, Knudsen diffusion applies only to gases because the mean free path for molecules in the liquid state is very small, typically near the diameter of the molecule itself.
Mathematical description
The diffusivity for Knudsen diffusion is obtained from the self-diffusion coefficient derived from the kinetic theory of gases:

DAA = (λ/3) √(8RT/(πMA)).

For Knudsen diffusion, path length λ is replaced with pore diameter dpore, as species A is now more likely to collide with the pore wall as opposed to another molecule. The Knudsen diffusivity for diffusing species A, DKA, is thus

DKA = (dpore/3) √(8RT/(πMA)),

where R is the gas constant (8.3144 J/(mol·K) in SI units), molar mass MA is expressed in units of kg/mol, and temperature T is in kelvins. Knudsen diffusivity thus depends on the pore diameter, species molar mass and temperature. Expressed as a molecular flux, Knudsen diffusion follows the equation for Fick's first law of diffusion:

JK = −DKA (dcA/dz).

Here, JK is the molecular flux in mol/(m²·s) and cA is the molar concentration in mol/m³. The diffusive flux is driven by a concentration gradient, which in most cases is embodied as a pressure gradient (i.e. cA = pA/(RT), therefore JK = DKA Δp/(RT·L), where Δp is the pressure difference between both sides of the pore and L is the length of the pore).
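A minimal numerical sketch of the Knudsen diffusivity defined above; the choice of gas, temperature, pore size and the quoted mean free path are illustrative assumptions, not values from the article.

```python
import math

R = 8.3144  # gas constant, J/(mol*K)

def knudsen_diffusivity(d_pore, T, M):
    """Knudsen diffusivity D_K = (d_pore/3) * sqrt(8*R*T / (pi*M)), SI units."""
    return (d_pore / 3.0) * math.sqrt(8.0 * R * T / (math.pi * M))

# Illustrative case: nitrogen (M = 0.028 kg/mol) at 300 K in a 20 nm pore.
d_pore, T, M = 20e-9, 300.0, 0.028
D_K = knudsen_diffusivity(d_pore, T, M)

# Mean free path of N2 near atmospheric pressure is roughly 70 nm (assumed),
# so the Knudsen number lambda/d exceeds 1 and Knudsen diffusion matters here.
mean_free_path = 70e-9
Kn = mean_free_path / d_pore
print(f"D_K = {D_K:.2e} m^2/s, Kn = {Kn:.1f}")
```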
If we assume that Δp is much less than Pave, the average absolute pressure in the system (i.e. Δp ≪ Pave), then we can express the Knudsen flux as a volumetric flow rate as follows:

QK = √(8RT/(πMA)) · (π dpore³ Δp)/(12 L Pave),

where QK is the volumetric flow rate in m³/s. If the pore is relatively short, entrance effects can significantly reduce the net flux through the pore. In this case, the law of effusion can be used to calculate the excess resistance due to entrance effects rather easily by substituting an effective length Leff in for L. Generally, the Knudsen process is significant only at low pressure and small pore diameter. However there may be instances where both Knudsen diffusion and molecular diffusion are important. The effective diffusivity of species A in a binary mixture of A and B, DAe, is determined by
1/DAe = (1 − α yA)/DAB + 1/DKA,

where α = 1 + NB/NA, yA is the mole fraction of A, and Ni is the flux of component i.
For cases where α = 0 (NA = −NB, i.e. countercurrent diffusion) or where yA is close to zero, the equation reduces to

1/DAe = 1/DAB + 1/DKA.
Knudsen self diffusion
In the Knudsen diffusion regime, the molecules do not interact with one another, so that they move in straight lines between points on the pore channel surface. Self-diffusivity is a measure of the translational mobility of individual molecules. Under conditions of thermodynamic equilibrium, a molecule is tagged and its trajectory followed over a long time. If the motion is diffusive, and in a medium without long-range correlations, the squared displacement of the molecule from its original position will eventually grow linearly with time (Einstein’s equation). To reduce statistical errors in simulations, the self-diffusivity, , of a species is defined from ensemble averaging Einstein’s equation over a large enough number of molecules N.
See also
Knudsen flow
Knudsen equation
Atomic diffusion
Mass diffusivity
References
External links
Knudsen number and diffusivity calculators
Diffusion | Knudsen diffusion | [
"Physics",
"Chemistry"
] | 857 | [
"Transport phenomena",
"Physical phenomena",
"Diffusion"
] |
10,864,076 | https://en.wikipedia.org/wiki/Statistical%20static%20timing%20analysis | Conventional static timing analysis (STA) has been a stock analysis algorithm for the design of digital circuits for a long time. However the increased variation in semiconductor devices and interconnect has introduced a number of issues that cannot be handled by traditional (deterministic) STA. This has led to considerable research into statistical static timing analysis, which replaces the normal deterministic timing of gates and interconnects with probability distributions, and gives a distribution of possible circuit outcomes rather than a single outcome.
Comparison with conventional STA
Deterministic STA is popular for good reasons:
It requires no vectors, so it does not miss paths.
The run time is linear in circuit size (for the basic algorithm).
The result is conservative.
It typically uses some fairly simple libraries (typically delay and output slope as a function of input slope and output load).
It is easy to extend to incremental operation for use in optimization.
STA, while very successful, has a number of limitations:
Cannot easily handle within-die correlation, especially if spatial correlation is included.
Needs many corners to handle all possible cases.
If there are significant random variations, then in order to be conservative at all times, it is too pessimistic to result in competitive products.
Changes to address various correlation problems, such as CPPR (Common Path Pessimism Removal) make the basic algorithm slower than linear time, or non-incremental, or both.
SSTA attacks these limitations more or less directly. First, SSTA uses sensitivities to find correlations among delays. Then it uses these correlations when computing how to add statistical distributions of delays.
There is no technical reason why deterministic STA could not be enhanced to handle correlation and sensitivities, by keeping a vector of sensitivities with each value as SSTA does. Historically, this seemed like a big burden to add to STA, whereas it was clear it was needed for SSTA, so no one complained. See some of the criticism of SSTA below, where this alternative is proposed.
Methods
There are two main categories of SSTA algorithms – path-based and block-based methods.
A path-based algorithm sums gate and wire delays on specific paths. The statistical calculation is simple, but the paths of interest must be identified prior to running the analysis. There is the potential that some other paths may be relevant but not analyzed so path selection is important.
A block-based algorithm generates the arrival times (and required times) for each node, working forward (and backward) from the clocked elements. The advantage is completeness, and no need for path selection. The biggest problem is that a statistical max (or min) operation that also considers correlation is needed, which is a hard technical problem. One classical technique for this step is sketched below.
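The statistical max mentioned above is often illustrated with Clark's moment-matching approximation for the maximum of two correlated Gaussian delays; the sketch below shows that classical technique and is not a description of any particular SSTA tool's implementation.

```python
import math

def _pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def _cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def clark_max(mu1, s1, mu2, s2, rho):
    """Approximate mean and std of max(X1, X2) for correlated Gaussians (Clark, 1961)."""
    a = math.sqrt(s1 * s1 + s2 * s2 - 2.0 * rho * s1 * s2)
    if a == 0.0:  # degenerate case: the two delays differ by a constant
        return max(mu1, mu2), s1
    alpha = (mu1 - mu2) / a
    m1 = mu1 * _cdf(alpha) + mu2 * _cdf(-alpha) + a * _pdf(alpha)
    m2 = ((mu1 ** 2 + s1 ** 2) * _cdf(alpha)
          + (mu2 ** 2 + s2 ** 2) * _cdf(-alpha)
          + (mu1 + mu2) * a * _pdf(alpha))
    return m1, math.sqrt(max(m2 - m1 * m1, 0.0))

# Two correlated path delays (in ps) arriving at the same node.
mean, std = clark_max(100.0, 10.0, 105.0, 12.0, 0.5)
print(f"max delay approx N({mean:.1f}, {std:.1f}^2) ps")
```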
There are SSTA cell characterization tools that are now available such as Altos Design Automation's Variety tool.
Criticism
A number of criticisms have been leveled at SSTA:
It's too complex, especially with realistic (non-Gaussian) distributions.
It's hard to couple to an optimization flow or algorithm.
It's hard to get the data the algorithm needs. Even if you can get this data, it is likely to be time-varying and hence unreliable.
If used seriously by the customers of a fab, it restricts the changes the fab might make, if they change the statistical properties of the process.
The benefit is relatively small, compared to an enhanced deterministic STA that also takes into account sensitivities and correlation.
Tools that perform static timing analysis
FPGAs
Altera Quartus II
Xilinx ISE
ASICs
Synopsys PrimeTime
Cadence Encounter Timing System (Cadence Tempus)
IBM EinsTimer
ANSYS Path FX
See also
Dynamic timing analysis
References
Timing in electronic circuits
Formal methods | Statistical static timing analysis | [
"Engineering"
] | 795 | [
"Software engineering",
"Formal methods"
] |
14,503,823 | https://en.wikipedia.org/wiki/Esophageal%20stent | An esophageal stent is a stent (tube) placed in the esophagus to keep a blocked area open so the patient can swallow soft food and liquids. They are effective in the treatment of conditions causing intrinsic esophageal obstruction or external esophageal compression. For the palliative treatment of esophageal cancer most esophageal stents are self-expandable metallic stents. For benign esophageal disease such as refractory esophageal strictures, plastic stents are available. Common complications include chest pain, overgrowth of tissue around the stent and stent migration. Esophageal stents may also be used to staunch the bleeding of esophageal varices.
Esophageal stents are placed using endoscopy: after the tip of the endoscope is positioned above the area to be stented, a guidewire is passed through the obstruction into the stomach. The endoscope is withdrawn and, using either fluoroscopic or endoscopic guidance, the stent is passed down the guidewire to the affected area of the esophagus and deployed. Finally, the guidewire is removed and the stent is left to fully expand over the next 2–3 days.
In one study of 997 patients who had self-expanding metal stents for malignant esophageal obstruction it was found that esophageal stents were 95% effective.
Pros of Esophageal Stent
There are several potential benefits of an esophageal stent procedure:
Symptom relief: Stents can alleviate symptoms such as difficulty swallowing, chest pain, and weight loss caused by a narrowed or blocked esophagus.
Fast results: The procedure is normally performed in a day, with a quick recovery.
Minimally invasive: The use of an endoscope makes the procedure less invasive than some other treatments.
Palliative care: Stents help patients with advanced esophageal cancer by relieving symptoms and improving the quality of life.
Alternative to surgery: For older and less healthy patients, an esophageal stent is a viable alternative to surgery.
Cons of Esophageal Stent
There are also several potential drawbacks to an esophageal stent procedure:
Complications: Bleeding, infection, and perforation of the esophagus may occur.
Stent migration: Stent may move causing symptoms to recur or lead to other complications.
Stent obstruction: Blockage can occur, causing symptoms to recur or leading to other complications.
Stent-related pain: Chest or throat pain may occur after the procedure, requiring additional treatment or adjustment of the stent.
Stent removal: Check with your doctor on the stent type used for the procedure. Ask if it may need to be removed at a later date and the process and issues that may come about as a result.
Additional images
References
External links
Esophageal stent entry in the public domain NCI Dictionary of Cancer Terms
Surgical oncology
Implants (medicine)
Medical devices | Esophageal stent | [
"Biology"
] | 648 | [
"Medical devices",
"Medical technology"
] |
14,504,482 | https://en.wikipedia.org/wiki/Micro%20power%20source | Micro power sources and nano power sources are units of RFID, MEMS, microsystems and nanosystems for energy-power generation, harvesting from ambient, storage and conversion.
References
[1] La O` G.J., In H.J., Crumlin E., Barbastathis G., Shao-Horn Y. Recent advances in microdevices for electrochemical energy conversion and storage // Int. J. Energy Res. 2007. V.31. P.548-575.
[2] Curtright A.E., Bouwman P.J., Wartena R.C., Swider-Lyons K.E. Power sources for nanotechnology // International Journal of Nanotechnology. 2004. V.1. Nos.1/2. P.226-239
Energy technology
Microtechnology | Micro power source | [
"Materials_science",
"Engineering"
] | 182 | [
"Materials science",
"Microtechnology"
] |
14,510,148 | https://en.wikipedia.org/wiki/Weibel%20instability | The Weibel instability is a plasma instability present in homogeneous or nearly homogeneous electromagnetic plasmas which possess an anisotropy in momentum (velocity) space. This anisotropy is most generally understood as two temperatures in different directions. Burton Fried showed that this instability can be understood more simply as the superposition of many counter-streaming beams. In this sense, it is like the two-stream instability except that the perturbations are electromagnetic and result in filamentation as opposed to electrostatic perturbations which would result in charge bunching. In the linear limit the instability causes exponential growth of electromagnetic fields in the plasma which help restore momentum space isotropy. In very extreme cases, the Weibel instability is related to one- or two-dimensional stream instabilities.
Consider an electron-ion plasma in which the ions are fixed and the electrons are hotter in the y-direction than in x or z-direction.
To see how a magnetic field perturbation would grow, suppose a field B = B cos kx spontaneously arises from noise. The Lorentz force (−ev × B) then bends the electron trajectories, with the result that upward-moving electrons congregate at B and downward-moving ones at A. The resulting current sheets generate a magnetic field that enhances the original field, and thus the perturbation grows.
The Weibel instability is also common in astrophysical plasmas, such as collisionless shock formation in supernova remnants and gamma-ray bursts.
A Simple Example of Weibel Instability
As a simple example of Weibel instability, consider an electron beam with density and initial velocity propagating in a plasma of density with velocity . The analysis below will show how an electromagnetic perturbation in the form of a plane wave gives rise to a Weibel instability in this simple anisotropic plasma system. We assume a non-relativistic plasma for simplicity.
We assume there is no background electric or magnetic field i.e. . The perturbation will be taken as an electromagnetic wave propagating along i.e. . Assume the electric field has the form
With the assumed spatial and time dependence, we may use and . From Faraday's Law, we may obtain the perturbation magnetic field
Consider the electron beam. We assume small perturbations, and so linearize the velocity and density . The goal is to find the perturbation electron beam current density
where second-order terms have been neglected. To do that, we start with the fluid momentum equation for the electron beam
which can be simplified by noting that and neglecting second-order terms. With the plane wave assumption for the derivatives, the momentum equation becomes
We can decompose the above equations in components, paying attention to the cross product at the far right, and obtain the non-zero components of the beam velocity perturbation:
To find the perturbation density , we use the fluid continuity equation for the electron beam
which can again be simplified by noting that and neglecting second-order terms. The result is
Using these results, we may use the equation for the beam perturbation current density given above to find
Analogous expressions can be written for the perturbation current density of the left-moving plasma. By noting that the x-component of the perturbation current density is proportional to , we see that with our assumptions for the beam and plasma unperturbed densities and velocities the x-component of the net current density will vanish, whereas the z-components, which are proportional to , will add. The net current density perturbation is therefore
The dispersion relation can now be found from Maxwell's Equations:
where is the speed of light in free space. By defining the effective plasma frequency , the equation above results in
This bi-quadratic equation may be easily solved to give the dispersion relation
In the search for instabilities, we look for solutions with a positive imaginary part of ω (the wavenumber is assumed real). Therefore, we must take the dispersion relation/mode corresponding to the minus sign in the equation above.
To gain further insight on the instability, it is useful to harness our non-relativistic assumption to simplify the square root term, by noting that
The resulting dispersion relation is then much simpler
The frequency ω is purely imaginary. Writing
ω = iγ,
we see that γ > 0 for the growing root, indeed corresponding to an instability.
The electromagnetic fields then have the form
Therefore, the electric and magnetic fields are out of phase, and by noting that
so we see this is a primarily magnetic perturbation although there is a non-zero electric perturbation. The magnetic field growth results in the characteristic filamentation structure of Weibel instability. Saturation will happen when the growth rate is on the order of the electron cyclotron frequency
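As an illustration (added here, not part of the original derivation), the growth rate implied by the simplified dispersion relation can be evaluated numerically. The sketch below assumes the standard non-relativistic form γ(k) = ω_pe·V0·k / sqrt(ω_pe² + c²k²) for symmetric counter-streaming beams; the plasma frequency and beam speed are arbitrary example values.

```python
import numpy as np

# Assumed parameters (illustrative only): effective plasma frequency and beam speed
c = 3.0e8            # speed of light, m/s
omega_pe = 1.0e9     # effective plasma frequency, rad/s (assumed)
V0 = 0.05 * c        # non-relativistic beam speed (assumed)

k = np.linspace(1e-3, 100, 1000) * omega_pe / c   # wavenumbers in units of omega_pe/c

# Simplified Weibel growth rate for symmetric counter-streaming beams:
# gamma(k) = omega_pe * V0 * k / sqrt(omega_pe**2 + c**2 * k**2)
gamma = omega_pe * V0 * k / np.sqrt(omega_pe**2 + (c * k)**2)

print(f"max growth rate ~ {gamma.max():.3e} rad/s "
      f"(asymptote omega_pe*V0/c = {omega_pe * V0 / c:.3e})")
```

The growth rate saturates at large k toward ω_pe·V0/c, consistent with the magnetic perturbation growing fastest at short wavelengths in this simplified picture.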
References
Lecture
See also
Chromo-Weibel Instability
Plasma instabilities
Particle physics | Weibel instability | [
"Physics"
] | 983 | [
"Plasma phenomena",
"Physical phenomena",
"Particle physics",
"Plasma instabilities"
] |
3,016,399 | https://en.wikipedia.org/wiki/List%20of%20recombinant%20proteins | The following is a list of notable proteins that are produced from recombinant DNA, using biomolecular engineering. In many cases, recombinant human proteins have replaced the original animal-derived version used in medicine. The prefix "rh" for "recombinant human" appears less and less in the literature. A much larger number of recombinant proteins is used in the research laboratory. These include both commercially available proteins (for example most of the enzymes used in the molecular biology laboratory), and those that are generated in the course of specific research projects.
Human recombinants that largely replaced animal or harvested from human types
Medicinal applications
Human growth hormone (rHGH): Humatrope from Lilly and Serostim from Serono replaced cadaver harvested human growth hormone
human insulin (BHI): Humulin from Lilly and Novolin from Novo Nordisk among others largely replaced bovine and porcine insulin for human therapy. Some prefer to continue using the animal-sourced preparations, as there is some evidence that synthetic insulin varieties are more likely to induce hypoglycemia unawareness. Remaining manufacturers of highly purified animal-sourced insulin include the U.K.'s Wockhardt Ltd. (headquartered in India), Argentina's Laboratorios Beta S.A., and China's Wanbang Biopharma Co.
Follicle-stimulating hormone (FSH) as a recombinant gonadotropin preparation replaced Serono's Pergonal which was previously isolated from post-menopausal female urine
Factor VIII: Kogenate from Bayer replaced blood harvested factor VIII
Research applications
Ribosomal proteins: For the studies of individual ribosomal proteins, the use of proteins that are produced and purified from recombinant sources has largely replaced those that are obtained through isolation. However, isolation is still required for the studies of the whole ribosome.
Lysosomal proteins: Lysosomal proteins are difficult to produce recombinantly due to the number and type of post-translational modifications that they have (e.g. glycosylation). As a result, recombinant lysosomal proteins are usually produced in mammalian cells. Plant cell culture was used to produce FDA-approved glycosylated lysosomal protein-drug, and additional drug candidates. Recent studies have shown that it may be possible to produce recombinant lysosomal proteins with microorganisms such as Escherichia coli and Saccharomyces cerevisiae. Recombinant lysosomal proteins are used for both research and medical applications, such as enzyme replacement therapy.
Human recombinants with recombination as only source
Medicinal applications
Erythropoietin (EPO): Epogen from Amgen
Granulocyte colony-stimulating factor (G-CSF): filgrastim sold as Neupogen from Amgen; pegfilgrastim sold as Neulasta
alpha-galactosidase A: Fabrazyme by Genzyme
alpha-L-iduronidase: (rhIDU; laronidase) Aldurazyme by BioMarin Pharmaceutical and Genzyme
N-acetylgalactosamine-4-sulfatase (rhASB; galsulfase): Naglazyme by BioMarin Pharmaceutical
Dornase alfa, a DNase sold under the trade name Pulmozyme by Genentech
Tissue plasminogen activator (TPA) Activase by Genentech
Glucocerebrosidase: Ceredase by Genzyme
Interferon (IF) Interferon-beta-1a: Avonex from Biogen Idec; Rebif from Serono; Interferon beta-1b as Betaseron from Schering. It is being investigated for the treatment of diseases including Guillain-Barré syndrome and multiple sclerosis.
Insulin-like growth factor 1 (IGF-1)
Rasburicase, a Urate Oxidase analog sold as Elitek from Sanofi
Animal recombinants
Medicinal applications
Bovine somatotropin (bST)
Porcine somatotropin (pST)
Bovine Chymosin
Bacterial recombinants
Industrial applications
Xylanases
Proteases, which have found applications in both industrial settings (such as the food industry) and domestic settings.
Viral recombinants
Medicinal applications
Envelope protein of the hepatitis B virus marketed as Engerix-B by SmithKline Beecham
HPV Vaccine proteins
Plant recombinants
Research applications
Polyphenol oxidases (PPOs): These include both catechol oxidases and tyrosinases. In addition to research, PPOs have also found applications as biocatalysts.
Cystatins are proteins that inhibit cysteine proteases. Research is ongoing to evaluate the potential of using cystatins in crop protection to control herbivorous pests and pathogens.
Industrial applications
Laccases have found a wide range of applications, from food additives and beverage processing to biomedical diagnosis, and as cross-linking agents for furniture construction or in the production of biofuels.
The tyrosinase‐induced polymerization of peptides offers facile access to artificial mussel foot protein analogues. Next generation universal glues can be envisioned that perform effectively even under rigorous seawater conditions and adapt to a broad range of difficult surfaces.
See also
Protein production
Gene expression
Protein purification
Host cell protein
References
External links
Laboratorios Beta S.A website
CP Pharma/Wockhardt UK website
Biotechnology
Biotechnology products | List of recombinant proteins | [
"Biology"
] | 1,182 | [
"nan",
"Recombinant proteins",
"Biotechnology products",
"Biotechnology"
] |
3,017,438 | https://en.wikipedia.org/wiki/De%20Rham%E2%80%93Weil%20theorem | In algebraic topology, the De Rham–Weil theorem allows computation of sheaf cohomology using an acyclic resolution of the sheaf in question.
Let F be a sheaf on a topological space X and F → A^• a resolution of F by acyclic sheaves. Then
H^q(X, F) ≅ H^q(A^•(X)),
where H^q(X, F) denotes the q-th sheaf cohomology group of X with coefficients in F.
The De Rham–Weil theorem follows from the more general fact that derived functors may be computed using acyclic resolutions instead of simply injective resolutions.
See also
de Rham theorem
References
Homological algebra
Sheaf theory | De Rham–Weil theorem | [
"Mathematics"
] | 121 | [
"Theorems in algebraic geometry",
"Mathematical structures",
"Sheaf theory",
"Topology",
"Category theory",
"Fields of abstract algebra",
"Theorems in geometry",
"Homological algebra"
] |
3,018,251 | https://en.wikipedia.org/wiki/Beryllium%20oxide%20%28data%20page%29 | This page provides supplementary chemical data on beryllium oxide.
Material Safety Data Sheet
Beryllium Oxide MSDS from American Beryllia
Structure and properties
Thermodynamic properties
Spectral data
References
Chemical data pages
Chemical data pages cleanup | Beryllium oxide (data page) | [
"Chemistry"
] | 48 | [
"Chemical data pages",
"nan"
] |
3,018,300 | https://en.wikipedia.org/wiki/Bismuth%28III%29%20oxide%20%28data%20page%29 | This page provides supplementary chemical data on bismuth(III) oxide.
Material Safety Data Sheet
MSDS from Fischer Scientific
Structure and properties
Thermodynamic properties
Spectral data
References
Chemical data pages
Chemical data pages cleanup | Bismuth(III) oxide (data page) | [
"Chemistry"
] | 45 | [
"Chemical data pages",
"nan"
] |
3,018,887 | https://en.wikipedia.org/wiki/Mautner%27s%20lemma | Mautner's lemma in representation theory, named after Austrian-American mathematician Friederich Mautner, states the following: let G be a topological group and π a unitary representation of G on a Hilbert space H. If x in G has conjugates
yxy−1
converging to the identity element e along a net of elements y, then any vector v of H invariant under all the π(y) is also invariant under π(x).
References
F. Mautner, Geodesic flows on symmetric Riemannian spaces (1957), Ann. Math. 65, 416-430
Unitary representation theory
Topological groups
Theorems in representation theory
Lemmas in group theory | Mautner's lemma | [
"Mathematics"
] | 149 | [
"Algebra stubs",
"Space (mathematics)",
"Topological spaces",
"Topological groups",
"Algebra"
] |
3,018,981 | https://en.wikipedia.org/wiki/Formal%20moduli | In mathematics, formal moduli are an aspect of the theory of moduli spaces (of algebraic varieties or vector bundles, for example), closely linked to deformation theory and formal geometry. Roughly speaking, deformation theory can provide the Taylor polynomial level of information about deformations, while formal moduli theory can assemble consistent Taylor polynomials to make a formal power series theory. The step to moduli spaces, properly speaking, is an algebraization question, and has been largely put on a firm basis by Artin's approximation theorem.
A formal universal deformation is by definition a formal scheme over a complete local ring, with special fiber the scheme over a field being studied, and with a universal property amongst such set-ups. The local ring in question is then the carrier of the formal moduli.
References
Moduli theory
Algebraic geometry
Geometric algebra | Formal moduli | [
"Mathematics"
] | 168 | [
"Mathematical analysis",
"Algebraic geometry",
"Mathematical analysis stubs",
"Fields of abstract algebra",
"Geometry",
"Geometry stubs"
] |
3,019,112 | https://en.wikipedia.org/wiki/Trifluoroacetic%20acid | Trifluoroacetic acid (TFA) is a synthetic organofluorine compound with the chemical formula CF3CO2H. It is a haloacetic acid, with all three of the acetyl group's hydrogen atoms replaced by fluorine atoms. It is a colorless liquid with a vinegar-like odor. TFA is a stronger acid than acetic acid, having an acid ionisation constant, Ka, that is approximately 34,000 times higher, as the highly electronegative fluorine atoms and consequent electron-withdrawing nature of the trifluoromethyl group weakens the oxygen-hydrogen bond (allowing for greater acidity) and stabilises the anionic conjugate base. TFA is commonly used in organic chemistry for various purposes.
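As a quick check of the quoted acidity ratio (an added illustration; the pKa values in the comment are approximate literature figures, not from this article), a factor of about 34,000 in Ka corresponds to roughly 4.5 pKa units:

```python
import math

ratio = 34_000                     # Ka(TFA) / Ka(acetic acid), as quoted above
delta_pKa = math.log10(ratio)
print(round(delta_pKa, 2))         # ~4.53 pKa units
# Roughly consistent with pKa(acetic acid) ≈ 4.76 and pKa(TFA) ≈ 0.2 (approximate literature values)
```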
Synthesis
TFA is prepared industrially by the electrofluorination of acetyl chloride or acetic anhydride, followed by hydrolysis of the resulting trifluoroacetyl fluoride:
CH3COCl + 4 HF → CF3COF + 3 H2 + HCl
CF3COF + H2O → CF3CO2H + HF
Where desired, this compound may be dried by addition of trifluoroacetic anhydride.
An older route to TFA proceeds via the oxidation of 1,1,1-trifluoro-2,3,3-trichloropropene with potassium permanganate. The trifluorotrichloropropene can be prepared by Swarts fluorination of hexachloropropene.
Uses
TFA is the precursor to many other fluorinated compounds such as trifluoroacetic anhydride, trifluoroperacetic acid, and 2,2,2-trifluoroethanol. It is a reagent used in organic synthesis because of a combination of convenient properties: volatility, solubility in organic solvents, and its strength as an acid. TFA is also less oxidizing than sulfuric acid but more readily available in anhydrous form than many other acids. One complication to its use is that TFA forms an azeotrope with water (b. p. 105 °C).
TFA is used as a strong acid to remove protecting groups such as Boc used in organic chemistry and peptide synthesis.
At a low concentration, TFA is used as an ion pairing agent in liquid chromatography (HPLC) of organic compounds, particularly peptides and small proteins. TFA is a versatile solvent for NMR spectroscopy (for materials stable in acid). It is also used as a calibrant in mass spectrometry.
TFA is used to produce trifluoroacetate salts.
Safety
Trifluoroacetic acid is a strong acid. TFA is harmful when inhaled, causes severe skin burns and is toxic for aquatic organisms even at low concentrations.
Skin burns are severe, heal poorly and can be necrotic. Vapour fumes have an LC50 of 10.01 mg/L, tested on rats over 4 hours. Inhalation symptoms include mucus irritation, coughing, shortness of breath and possible formation of oedemas in the respiratory tract. Exposure damages the kidneys.
Environment
Although trifluoroacetic acid is not produced biologically or abiotically, it is a metabolic breakdown product of the volatile anesthetic agent halothane. It is also thought to be responsible for halothane-induced hepatitis. It also may be formed by photooxidation of the commonly used refrigerant 1,1,1,2-tetrafluoroethane (R-134a). Moreover, it is formed as an atmospheric degradation product of almost all fourth-generation synthetic refrigerants, also called hydrofluoroolefins (HFO), such as 2,3,3,3-tetrafluoropropene.
Trifluoroacetic acid degrades very slowly in the environment and has been found in increasing amounts as a contaminant in water, soil, food, and the human body. Median concentrations of a few micrograms per liter have been found in beer and tea. Seawater can contain about 200 ng of TFA per liter. Biotransformation by decarboxylation to fluoroform has been discussed.
Trifluoroacetic acid is mildly phytotoxic.
See also
Fluoroacetic acidhighly toxic but naturally occurring rodenticide CH2FCOOH
Difluoroacetic acid
Trichloroacetic acid, the chlorinated analog
Trifluoroacetone – also abbreviated TFA
References
Perfluorocarboxylic acids
Reagents for organic chemistry
Organic compounds with 2 carbon atoms | Trifluoroacetic acid | [
"Chemistry"
] | 991 | [
"Organic compounds",
"Reagents for organic chemistry",
"Organic compounds with 2 carbon atoms"
] |
3,020,122 | https://en.wikipedia.org/wiki/Goal%20programming | Goal programming is a branch of multiobjective optimization, which in turn is a branch of multi-criteria decision analysis (MCDA). It can be thought of as an extension or generalisation of linear programming to handle multiple, normally conflicting objective measures. Each of these measures is given a goal or target value to be achieved. Deviations are measured from these goals both above and below the target. Unwanted deviations from this set of target values are then minimised in an achievement function. This can be a vector or a weighted sum dependent on the goal programming variant used. As satisfaction of the target is deemed to satisfy the decision maker(s), an underlying satisficing philosophy is assumed. Goal programming is used to perform three types of analysis:
Determine the required resources to achieve a desired set of objectives.
Determine the degree of attainment of the goals with the available resources.
Providing the best satisfying solution under a varying amount of resources and priorities of the goals.
History
Goal programming was first used by Charnes, Cooper and Ferguson in 1955, although the actual name first appeared in a 1961 text by Charnes and Cooper. Seminal works by Lee, Ignizio, Ignizio and Cavalier, and Romero followed. Schniederjans gives a bibliography of a large number of pre-1995 articles relating to goal programming, and Jones and Tamiz give an annotated bibliography of the period 1990-2000. A recent textbook by Jones and Tamiz gives a comprehensive overview of the state of the art in goal programming.
The first engineering application of goal programming, due to Ignizio in 1962, was the design and placement of the antennas employed on the second stage of the Saturn V. This was used to launch the Apollo space capsule that landed the first men on the moon.
Variants
The initial goal programming formulations ordered the unwanted deviations into a number of priority levels, with the minimisation of a deviation in a higher priority level being infinitely more important than any deviations in lower priority levels. This is known as lexicographic or pre-emptive goal programming. Ignizio gives an algorithm showing how a lexicographic goal programme can be solved as a series of linear programmes. Lexicographic goal programming is used when there exists a clear priority ordering amongst the goals to be achieved.
If the decision maker is more interested in direct comparisons of the objectives then weighted or non-pre-emptive goal programming should be used. In this case, all the unwanted deviations are multiplied by weights, reflecting their relative importance, and added together as a single sum to form the achievement function. Deviations measured in different units cannot be summed directly due to the phenomenon of incommensurability.
Hence each unwanted deviation is multiplied by a normalisation constant to allow direct comparison. Popular choices for normalisation constants are the goal target value of the corresponding objective (hence turning all deviations into percentages) or the range of the corresponding objective (between the best and the worst possible values, hence mapping all deviations onto a zero-one range). For decision makers more interested in obtaining a balance between the competing objectives, Chebyshev goal programming is used. Introduced by Flavell in 1976, this variant seeks to minimise the maximum unwanted deviation, rather than the sum of deviations. This utilises the Chebyshev distance metric.
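As a sketch of the weighted (non-pre-emptive) variant, the following example solves a small hypothetical goal programme with SciPy's linear-programming routine; the two goals, their targets, and the weights are invented for illustration, and only the unwanted deviations (profit underachievement and labour overuse) are penalised.

```python
from scipy.optimize import linprog

# Decision variables: x1, x2 >= 0 (hypothetical production quantities)
# Goal 1: profit  8*x1 + 12*x2  should reach 1000   (penalise underachievement d1m)
# Goal 2: labour  2*x1 +  3*x2  should not exceed 300 (penalise overuse d2p)
# Variable order: [x1, x2, d1m, d1p, d2m, d2p]

w1, w2 = 1.0, 2.0                      # relative weights of the unwanted deviations
c = [0, 0, w1, 0, 0, w2]               # achievement function: minimise w1*d1m + w2*d2p

A_eq = [[8, 12, 1, -1, 0, 0],          # 8x1 + 12x2 + d1m - d1p = 1000
        [2,  3, 0,  0, 1, -1]]         # 2x1 +  3x2 + d2m - d2p = 300
b_eq = [1000, 300]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6, method="highs")
x1, x2, d1m, d1p, d2m, d2p = res.x
print(f"x1={x1:.1f}, x2={x2:.1f}, profit shortfall={d1m:.1f}, labour overuse={d2p:.1f}")
```

Each goal becomes an equality constraint with a negative and a positive deviation variable; only the deviations the decision maker regards as unwanted appear in the weighted achievement function.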
Strengths and weaknesses
A major strength of goal programming is its simplicity and ease of use. This accounts for the large number of goal programming applications in many and diverse fields. Linear goal programmes can be solved using linear programming software as either a single linear programme, or in the case of the lexicographic variant, a series of connected linear programmes.
Goal programming can hence handle relatively large numbers of variables, constraints and objectives. A debated weakness is the ability of goal programming to produce solutions that are not Pareto efficient. This violates a fundamental concept of decision theory, that no rational decision maker will knowingly choose a solution that is not Pareto efficient. However, techniques are available to detect when this occurs and project the solution onto the Pareto efficient solution in an appropriate manner.
The setting of appropriate weights in the goal programming model is another area that has caused debate, with some authors suggesting the use of the analytic hierarchy process or interactive methods for this purpose. Also, the weights of the objective functions can be calculated based on their preference using the ordinal priority approach.
See also
Decision-making software
External links
LiPS — Free easy-to-use GUI program intended for solving linear, integer and goal programming problems.
LINSOLVE - Free Windows command-line linear programming and linear goal programming
References
Mathematical optimization
Multiple-criteria decision analysis
Goal
"Mathematics"
] | 981 | [
"Mathematical optimization",
"Mathematical analysis"
] |
3,021,101 | https://en.wikipedia.org/wiki/Class%20number%20formula | In number theory, the class number formula relates many important invariants of an algebraic number field to a special value of its Dedekind zeta function.
General statement of the class number formula
We start with the following data:
K is a number field.
n = [K : Q] = r1 + 2r2, where r1 denotes the number of real embeddings of K, and 2r2 is the number of complex embeddings of K.
ζK(s) is the Dedekind zeta function of K.
hK is the class number, the number of elements in the ideal class group of K.
RegK is the regulator of K.
wK is the number of roots of unity contained in K.
DK is the discriminant of the extension K/Q.
Then:
Theorem (Class Number Formula). ζK(s) converges absolutely for Re(s) > 1 and extends to a meromorphic function defined for all complex s with only one simple pole at s = 1, with residue
lim_{s→1} (s − 1) ζK(s) = 2^r1 · (2π)^r2 · hK · RegK / (wK · √|DK|).
This is the most general "class number formula". In particular cases, for example when K is a cyclotomic extension of Q, there are particular and more refined class number formulas.
Proof
The idea of the proof of the class number formula is most easily seen when K = Q(i). In this case, the ring of integers in K is the Gaussian integers.
An elementary manipulation shows that the residue of the Dedekind zeta function at s = 1 is the average of the coefficients of the Dirichlet series representation of the Dedekind zeta function. The n-th coefficient of the Dirichlet series is essentially the number of representations of n as a sum of two squares of nonnegative integers. So one can compute the residue of the Dedekind zeta function at s = 1 by computing the average number of representations. As in the article on the Gauss circle problem, one can compute this by approximating the number of lattice points inside of a quarter circle centered at the origin, concluding that the residue is one quarter of pi.
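A quick numerical check of this lattice-point argument (an added illustration, not part of the original proof sketch) counts the pairs of nonnegative integers (a, b) with a² + b² ≤ N and confirms that the average number of representations tends to π/4:

```python
import math

N = 10**6
count = 0
a = 0
while a * a <= N:
    # number of b >= 0 with a^2 + b^2 <= N for this value of a
    count += math.isqrt(N - a * a) + 1
    a += 1

# prints roughly 0.786 against pi/4 = 0.7853...; the ratio tends to pi/4 as N grows
print(count / N, math.pi / 4)
```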
The proof when K is an arbitrary imaginary quadratic number field is very similar.
In the general case, by Dirichlet's unit theorem, the group of units in the ring of integers of K is infinite. One can nevertheless reduce the computation of the residue to a lattice point counting problem using the classical theory of real and complex embeddings and approximate the number of lattice points in a region by the volume of the region, to complete the proof.
Dirichlet class number formula
Peter Gustav Lejeune Dirichlet published a proof of the class number formula for quadratic fields in 1839, but it was stated in the language of quadratic forms rather than classes of ideals. It appears that Gauss already knew this formula in 1801.
This exposition follows Davenport.
Let d be a fundamental discriminant, and write h(d) for the number of equivalence classes of quadratic forms with discriminant d. Let be the Kronecker symbol. Then is a Dirichlet character. Write for the Dirichlet L-series based on . For d > 0, let t > 0, u > 0 be the solution to the Pell equation for which u is smallest, and write
(Then is either a fundamental unit of the real quadratic field or the square of a fundamental unit.)
For d < 0, write w for the number of automorphisms of quadratic forms of discriminant d; that is,
Then Dirichlet showed that
This is a special case of Theorem 1 above: for a quadratic field K, the Dedekind zeta function is just , and the residue is . Dirichlet also showed that the L-series can be written in a finite form, which gives a finite form for the class number. Suppose is primitive with prime conductor . Then
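As an added numerical illustration, the class number h(−23) = 3 can be recovered by truncating the L-series, assuming the standard form of Dirichlet's result for negative discriminants, h(d) = w·√|d|·L(1, χ)/(2π); for a prime conductor q = |d| with d < −4 the character is the Legendre symbol modulo q and w = 2.

```python
import math

q = 23                                            # discriminant d = -23, prime conductor
residues = {pow(a, 2, q) for a in range(1, q)}    # nonzero quadratic residues mod q

def chi(n):
    """Legendre symbol (n/q), which is the character attached to d = -q here."""
    r = n % q
    if r == 0:
        return 0
    return 1 if r in residues else -1

N = 2 * 10**5
L1 = sum(chi(n) / n for n in range(1, N))         # truncated L(1, chi)
h = 2 * math.sqrt(q) * L1 / (2 * math.pi)         # w = 2 since d < -4
print(round(h))                                    # 3, the class number of Q(sqrt(-23))
```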
Galois extensions of the rationals
If K is a Galois extension of Q, the theory of Artin L-functions applies to . It has one factor of the Riemann zeta function, which has a pole of residue one, and the quotient is regular at s = 1. This means that the right-hand side of the class number formula can be equated to a left-hand side
∏ρ L(1, ρ)^(dim ρ)
with ρ running over the classes of irreducible non-trivial complex linear representations of Gal(K/Q) of dimension dim(ρ). That is according to the standard decomposition of the regular representation.
Abelian extensions of the rationals
This is the case of the above, with Gal(K/Q) an abelian group, in which all the ρ can be replaced by Dirichlet characters (via class field theory) for some modulus f called the conductor. Therefore all the L(1) values occur for Dirichlet L-functions, for which there is a classical formula, involving logarithms.
By the Kronecker–Weber theorem, all the values required for an analytic class number formula occur already when the cyclotomic fields are considered. In that case there is a further formulation possible, as shown by Kummer. The regulator, a calculation of volume in 'logarithmic space' as divided by the logarithms of the units of the cyclotomic field, can be set against the quantities from the L(1) recognisable as logarithms of cyclotomic units. There result formulae stating that the class number is determined by the index of the cyclotomic units in the whole group of units.
In Iwasawa theory, these ideas are further combined with Stickelberger's theorem.
See also
Brumer–Stark conjecture
Smith–Minkowski–Siegel mass formula
Notes
References
Algebraic number theory
Quadratic forms | Class number formula | [
"Mathematics"
] | 1,143 | [
"Quadratic forms",
"Algebraic number theory",
"Number theory"
] |
3,021,207 | https://en.wikipedia.org/wiki/Current%20%28mathematics%29 | In mathematics, more particularly in functional analysis, differential topology, and geometric measure theory, a k-current in the sense of Georges de Rham is a functional on the space of compactly supported differential k-forms, on a smooth manifold M. Currents formally behave like Schwartz distributions on a space of differential forms, but in a geometric setting, they can represent integration over a submanifold, generalizing the Dirac delta function, or more generally even directional derivatives of delta functions (multipoles) spread out along subsets of M.
Definition
Let Ω_c^m(M) denote the space of smooth m-forms with compact support on a smooth manifold M. A current is a linear functional on Ω_c^m(M) which is continuous in the sense of distributions. Thus a linear functional
T : Ω_c^m(M) → R
is an m-dimensional current if it is continuous in the following sense: if a sequence ωk of smooth forms, all supported in the same compact set, is such that all derivatives of all their coefficients tend uniformly to 0 when k tends to infinity, then T(ωk) tends to 0.
The space D_m(M) of m-dimensional currents on M is a real vector space with operations defined by
(S + T)(ω) := S(ω) + T(ω),   (λT)(ω) := λ·T(ω).
Much of the theory of distributions carries over to currents with minimal adjustments. For example, one may define the support of a current T as the complement of the biggest open set U ⊆ M such that
T(ω) = 0
whenever
ω ∈ Ω_c^m(U).
The linear subspace of D_m(M) consisting of currents with support (in the sense above) that is a compact subset of M is denoted E_m(M).
Homological theory
Integration over a compact rectifiable oriented submanifold M (with boundary) of dimension m defines an m-current, denoted by [[M]]:
[[M]](ω) = ∫_M ω.
If the boundary ∂M of M is rectifiable, then it too defines a current by integration, and by virtue of Stokes' theorem one has:
[[∂M]](ω) = ∫_∂M ω = ∫_M dω = [[M]](dω).
This relates the exterior derivative d with the boundary operator ∂ on the homology of M.
In view of this formula we can define a boundary operator ∂ on arbitrary currents
via duality with the exterior derivative by
(∂T)(ω) := T(dω)
for all compactly supported m-forms ω.
Certain subclasses of currents which are closed under can be used instead of all currents to create a homology theory, which can satisfy the Eilenberg–Steenrod axioms in certain cases. A classical example is the subclass of integral currents on Lipschitz neighborhood retracts.
Topology and norms
The space of currents is naturally endowed with the weak-* topology, which will be further simply called weak convergence. A sequence of currents, converges to a current if
It is possible to define several norms on subspaces of the space of all currents. One such norm is the mass norm. If is an m-form, then define its comass by
So if is a simple m-form, then its mass norm is the usual L∞-norm of its coefficient. The mass of a current is then defined as
The mass of a current represents the weighted area of the generalized surface. A current such that M(T) < ∞ is representable by integration of a regular Borel measure by a version of the Riesz representation theorem. This is the starting point of homological integration.
An intermediate norm is Whitney's flat norm, defined by
Two currents are close in the mass norm if they coincide away from a small part. On the other hand, they are close in the flat norm if they coincide up to a small deformation.
Examples
Recall that
Ω_c^0(R^n) ≡ C_c^∞(R^n),
so that the following defines a 0-current:
T(f) = f(0).
In particular every signed regular measure μ is a 0-current:
T(f) = ∫ f(x) dμ(x).
Let (x, y, z) be the coordinates in R³. Then the following defines a 2-current (one of many):
T(a dx∧dy + b dy∧dz + c dx∧dz) = ∫₀¹ ∫₀¹ b(x, y, 0) dx dy.
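A small SymPy sketch (an added illustration; the sample component functions a, b, c are made up) evaluates this 2-current on a particular 2-form by integrating its b-component over the unit square in the plane z = 0:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Components of a sample 2-form  a dx^dy + b dy^dz + c dx^dz  (made-up functions)
a = x*y + z          # does not contribute to T
b = x**2 * y + z     # only b, restricted to z = 0, contributes
c = sp.cos(z)        # does not contribute to T

# Action of the 2-current defined above: integrate b(x, y, 0) over the unit square
T_omega = sp.integrate(b.subs(z, 0), (x, 0, 1), (y, 0, 1))
print(T_omega)       # 1/6
```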
See also
Georges de Rham
Herbert Federer
Differential geometry
Varifold
Notes
References
.
Differential topology
Functional analysis
Generalized functions
Generalized manifolds
Schwartz distributions | Current (mathematics) | [
"Mathematics"
] | 743 | [
"Functions and mappings",
"Functional analysis",
"Mathematical objects",
"Topology",
"Mathematical relations",
"Differential topology"
] |
3,021,393 | https://en.wikipedia.org/wiki/Length%20function | In the mathematical field of geometric group theory, a length function is a function that assigns a number to each element of a group.
Definition
A length function L : G → R+ on a group G is a function satisfying:
L(e) = 0, where e is the identity element of G,
L(g⁻¹) = L(g) for all g in G,
L(g·h) ≤ L(g) + L(h) for all g, h in G (subadditivity).
Compare with the axioms for a metric and a filtered algebra.
Word metric
An important example of a length is the word metric: given a presentation of a group by generators and relations, the length of an element is the length of the shortest word expressing it.
Coxeter groups (including the symmetric group) have combinatorially important length functions, using the simple reflections as generators (thus each simple reflection has length 1). See also: length of a Weyl group element.
A longest element of a Coxeter group is both important and unique up to conjugation (up to different choice of simple reflections).
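As a concrete illustration (added here, not from the original text): in the symmetric group with adjacent transpositions as the simple reflections, the Coxeter length of a permutation equals its number of inversions, which is easy to check numerically.

```python
from itertools import combinations

def coxeter_length(perm):
    """Word length of a permutation of {0, ..., n-1} with respect to the
    adjacent transpositions (simple reflections): it equals the number of inversions."""
    return sum(1 for i, j in combinations(range(len(perm)), 2) if perm[i] > perm[j])

print(coxeter_length((1, 0, 2)))     # 1: a single simple reflection
print(coxeter_length((3, 2, 1, 0)))  # 6: the longest element of S_4
```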
Properties
A group with a length function does not form a filtered group, meaning that the sublevel sets do not form subgroups in general.
However, the group algebra of a group with a length function forms a filtered algebra: the subadditivity axiom L(g·h) ≤ L(g) + L(h) corresponds to the filtration axiom.
References
Group theory
Geometric group theory | Length function | [
"Physics",
"Mathematics"
] | 236 | [
"Geometric group theory",
"Group actions",
"Group theory",
"Fields of abstract algebra",
"Symmetry"
] |
3,021,657 | https://en.wikipedia.org/wiki/Icemaker | An icemaker, ice generator, or ice machine may refer to either a consumer device for making ice, found inside a home freezer; a stand-alone appliance for making ice, or an industrial machine for making ice on a large scale. The term "ice machine" usually refers to the stand-alone appliance.
The ice generator is the part of the ice machine that actually produces the ice. This would include the evaporator and any associated drives/controls/subframe that are directly involved with making and ejecting the ice into storage. When most people refer to an ice generator, they mean this ice-making subsystem alone, minus refrigeration.
An ice machine, however, particularly if described as 'packaged', would typically be a complete machine including refrigeration, controls, and dispenser, requiring only connection to power and water supplies.
The term icemaker is more ambiguous, with some manufacturers describing their packaged ice machine as an icemaker, while others describe their generators in this way.
History
In 1748, the first known artificial refrigeration was demonstrated by William Cullen at the University of Glasgow. Mr. Cullen never used his discovery for any practical purposes. This may be the reason why the history of the icemakers begins with Oliver Evans, an American inventor who designed the first refrigeration machine in 1805. In 1834, Jacob Perkins built the first practical refrigerating machine using ether in a vapor compression cycle. The American inventor, mechanical engineer and physicist received 21 American and 19 English patents (for innovations in steam engines, the printing industry and gun manufacturing among others) and is considered today the father of the refrigerator.
In 1844, an American physician, John Gorrie, built a refrigerator based on Oliver Evans' design to make ice to cool the air for his yellow fever patients. His plans date back to 1842, making him one of the founding fathers of the refrigerator. Unfortunately for John Gorrie, his plans of manufacturing and selling his invention were met with fierce opposition by Frederic Tudor, the Boston “Ice King”. By then, Tudor was shipping ice from the United States to Cuba and was planning to expand his business to India. Fearing that Gorrie’s invention would ruin his business, he began a smear campaign against the inventor. In 1851, John Gorrie was awarded U.S. Patent 8080 for an ice machine. After struggling with Tudor's campaign and the death of his partner, John Gorrie also died, bankrupt and humiliated. His original icemaker plans and the prototype machine are held today at the National Museum of American History, Smithsonian Institution in Washington, D.C.
In 1853, Alexander Twining was awarded U.S. Patent 10221 for an icemaker. Twining’s experiments led to the development of the first commercial refrigeration system, built in 1856. He also established the first artificial method of producing ice. Just like Perkins before him, James Harrison started experimenting with ether vapor compression. In 1854, James Harrison successfully built a refrigeration machine capable of producing 3,000 kilograms of ice per day and in 1855 he received an icemaker patent in Australia, similar to that of Alexander Twining. Harrison continued his experiments with refrigeration. Today he is credited for his major contributions to the development of modern cooling system designs and functionality strategies. These systems were later used to ship refrigerated meat across the globe.
In 1867, Andrew Muhl built an ice-making machine in San Antonio, Texas, to help service the expanding beef industry before moving it to Waco in 1871. In 1873, the patent for this machine was contracted by the Columbus Iron Works, which produced the world's first commercial icemakers. William Riley Brown served as its president and George Jasper Golden served as its superintendent.
In 1876, German engineer Carl von Linde patented the process of liquefying gas that would later become an important part of basic refrigeration technology (U.S. Patent 1027862). In 1879 and 1891, two African American inventors patented improved refrigerator designs in the United States (Thomas Elkins – U.S. patent #221222 and respectively John Standard – U.S. patent #455891).
In 1902, the Teague family of Montgomery purchased control of the firm. Their last advertisement in Ice and Refrigeration appeared in March 1904. In 1925, controlling interest in the Columbus Iron Works passed from the Teague family to W.C. Bradley of W.C. Bradley, Co.
Jurgen Hans is credited with the invention of the first ice machine to produce edible ice in 1929. In 1932 he founded a company called Kulinda and started manufacturing edible ice, but by 1949 the business switched its central product from ice to central air conditioning.
The ice machines from the late 1800s to the 1930s used toxic gases such as ammonia (NH3), methyl chloride (CH3Cl), and sulfur dioxide (SO2) as refrigerants. During the 1920s, several fatal accidents were registered. They were caused by the refrigerators leaking methyl chloride. In the quest of replacing dangerous refrigerants – especially methyl chloride – collaborative research ensued in American corporations. The result of this research was the discovery of Freon. In 1930, General Motors and DuPont formed Kinetic Chemicals to produce Freon, which would later become the standard for almost all consumer and industrial refrigerators. The original "Freon" produced at this time was chlorofluorocarbon, a moderately toxic gas causing ozone depletion.
Principle of ice making
All refrigeration equipment is made of four key components; the evaporator, the condenser, the compressor and the throttle valve. Ice machines all work the same way. The function of the compressor is to compress low-pressure refrigerant vapor to high-pressure vapor, and deliver it to the condenser. Here, the high-pressure vapor is condensed into high-pressure liquid, and drained out through the throttle valve to become low-pressure liquid. At this point, the liquid is conducted to the evaporator, where heat exchanging occurs, and ice is created. This is one complete refrigeration cycle.
Consumer icemakers
Freezer icemakers
Automatic icemakers for the home were first offered by the Servel company around 1953. They are usually found inside the freezer compartment of a refrigerator. They produce crescent-shaped ice cubes from a metal mold. An electromechanical or electronic timer first opens a solenoid valve for a few seconds, allowing the mold to fill with water from the domestic cold water supply. The timer then closes the valve and lets the ice freeze for about 30 minutes. Then, the timer turns on a low-power electric heating element inside the mold for several seconds, to melt the ice cubes slightly so they will not stick to the mold. Finally, the timer runs a rotating arm that scoops the ice cubes out of the mold and into a bin, and the cycle repeats. If the bin fills with ice, the ice pushes up a wire arm, which shuts off the icemaker until the ice level in the bin goes down again. The user can also lift up the wire arm at any time to stop the production of ice.
Later automatic icemakers in Samsung refrigerators use a flexible plastic mold. When the ice cubes are frozen, which is sensed by a thermistor, the timer causes a motor to invert the mold and twist it so that the cubes detach and fall into a bin.
Early icemakers dropped the ice into a bin in the freezer compartment; the user had to open the freezer door to obtain ice. In 1965, Frigidaire introduced icemakers that dispensed from the front of the freezer door. In these models, pressing a glass against a cradle on the outside of the door runs a motor, which turns an auger in the bin and delivers ice cubes to the glass. Most dispensers can optionally route the ice through a crushing mechanism to deliver crushed ice. Some dispensers can also dispense chilled water.
Fresh food compartment icemakers
There are alternatives to freezer-compartment icemakers, developed by manufacturers such as Whirlpool, LG and Samsung. This type of icemaker, located in the fresh food compartment, is becoming a more popular feature among customers shopping for a new refrigerator with an icemaker. In order to function properly, the icemaker compartment must be kept cold and properly sealed from the outside, since it is located in the fresh food compartment, where temperatures are usually higher. This design has some disadvantages: due to flaws in the icemaker compartment of some Samsung refrigerators, warm air gets inside through the seals and creates condensation, which turns into ice chunks and jams the icemaker mechanism. Thousands of people in the United States experienced this issue, and in 2017 a lawsuit was filed against Samsung for refusing to properly fix it.
Portable icemakers
Portable icemakers are units that can fit on a countertop. They are the fastest and smallest icemakers on the market. The ice produced by a portable icemaker is bullet-shaped and has a cloudy, opaque appearance. The first batch of ice can be made within 10 minutes of turning the appliance on and adding water. The water is pumped into a small tube with metal pegs immersed in the water. Because the unit is portable, water must be filled manually. The water is pumped from the bottom of the reservoir to the freeze tray. The pegs use a heating and cooling system inside to freeze the water around them and then heat up so the ice slips off the peg and into the storage bin. Ice begins to form in a matter of minutes, however, the size of ice cubes depends on the freezing cycle - a longer cycle results in thicker cubes. Portable icemakers will not keep the ice from melting, but the appliance will recycle the water to make more ice. Once the storage tray is full, the system will turn off automatically.
Built-in and freestanding icemakers
Built-in icemakers are engineered to fit under a kitchen or bar counter, but they can be used as freestanding units. Some produce crescent-shaped ice like the ice from a freezer icemaker; the ice is cloudy and opaque instead of clear, because the water is frozen faster than in others which are clear cube icemakers. In the process, tiny air bubbles get trapped, causing the cloudy appearance of the ice. However, most under-counter ice makers are clear ice makers in which the ice is missing the air bubbles, and therefore the ice is clear and melts much slower.
Industrial icemakers
Commercial ice makers improve the quality of ice by using moving water. The water is run down a high-nickel-content stainless steel evaporator, whose surface must be below freezing. Salt water requires lower temperatures to freeze, and the resulting ice lasts longer; it is generally used to package seafood products. Air and undissolved solids will be washed away to such an extent that in horizontal evaporator machines the water has 98% of the solids removed, resulting in very hard, virtually pure, clear ice. In vertical evaporators the ice is softer, more so if there are actual individual cube cells. Commercial ice machines can make different sizes of ice such as flakes, crushed ice, cubes, octagons, and tubes.
When the sheet of ice on the cold surface reaches the desired thickness, the sheet is slid down onto a grid of wires, where the sheet's weight causes it to be broken into the desired shapes, after which it falls into a storage bin.
Flake ice machine
Flake ice is made from a mixture of brine and water (max salt per ton of water), and in some cases can be made directly from brine water. It has a thickness between , and an irregular shape with diameters from .
The evaporator of the flake ice machine is a vertically placed drum-shaped stainless steel container, equipped with a rotating blade that spins and scratches the ice off the inner wall of the drum. When operating, the principal shaft and blade spin anti-clockwise pushed by the reducer. Water is sprayed down from the sprinkler; ice is formed from the water brine on the inner wall. The water tray at the bottom catches the cold water while deflecting ice and re-circulates it back into the sump. The sump will typically use a float valve to fill as needed during production. Flake machines have a tendency to form an ice ring inside the bottom of the drum. Electric heaters are in wells at the very bottom to prevent this accumulation of ice where the crusher does not reach. Some machines use scrapers to assist this. This system utilizes a low-temperature condensing unit, like all ice machines. Most manufacturers also utilize an evaporator pressure regulating valve (EPRV).
Applications
Sea water flake ice machine can make ice directly from the seawater. This ice can be used in the fast cooling of fish and other sea products. The fishing industry is the largest user of flake ice machines. Flake ice can lower the temperature of cleaning water and sea products, therefore it resists the growth of bacteria and keeps the seafood fresh.
Because of its large contact and less damage with refrigerated materials, it is also applied in vegetable, fruit, and meat storing and transporting.
In baking, during the mixing of flour and milk, flake ice can be added to prevent the flour from self-raising.
In most cases of biosynthesis and chemosynthesis, flake ice is used to control the reaction rate and maintain the liveness. Flake ice is sanitary, clean with a rapid temperature reduction effect.
Flake ice is used as the direct source of water in the concrete cooling process, more than 80% in weight. Concrete will not crack if has been mixed and poured at a constant and low temperature.
Flake ice is also used for artificial snow, so it is widely applied in ski resorts and entertainment parks.
Cube icemaker
Cube ice machines are classified as small ice machines, in contrast to tube ice machines, flake ice machines, or other ice machines. Common capacities range from to . Since the emergence of cube ice machines in the 1970s, they have evolved into a diverse family of ice machines.
Cube ice machines are commonly seen as vertical modular devices. The upper part is an evaporator, and the lower part is an ice bin. The refrigerant circulates inside pipes of a self-contained evaporator, where it conducts the heat exchange with water, and freezes the water into ice cubes. Once frozen, an ejection mechanism releases the cubes into a collection bin. Available in various types such as under-counter, countertop, and commercial models, cube ice makers cater to diverse settings including the food and beverage industries, healthcare, and residential use. When the water is thoroughly frozen into ice, it is automatically released and falls into the ice bin.
Ice machines can have either a self-contained refrigeration system where the compressor is built into the unit, or a remote refrigeration system where the refrigeration components are located elsewhere, often the roof of the business.
Compressor
Most compressors are either positive displacement compressors or radial compressors. Positive displacement compressors are currently the most efficient type of compressor, and have the largest refrigerating effect per single unit (). They have a large range of possible power supplies, and can be , , or even higher. The principle behind positive displacement compressors utilizes a turbine to compress refrigerant into high-pressure vapor. Positive displacement compressors are of four main types: screw compressor, rolling piston compressor, reciprocating compressor, and rotary compressor.
Screw compressors can yield the largest refrigerating effect among positive displacement compressors, with their refrigerating capacity normally ranging from to . Screw compressors also can be divided into single-screw type and dual-screw type. The Dual-screw type is more often seen in use because it is very efficient.
Rolling piston compressors and reciprocating compressors have similar refrigerating effects, and the maximum refrigerating effect can reach .
Reciprocating compressors are the most common type of compressor because the technology is mature and reliable. Their refrigerating effect ranges from to . They compress gas by utilizing a piston pushed by a crank shaft.
Rotary compressors, mainly used in air conditioning equipment, have a very low refrigerating effect, normally not exceeding . They work by compressing gas using a piston pushed by a rotor, which spins in an isolated compartment.
Condenser
All condensers can be classified as one of three types: air cooling, water cooling, or evaporative cooling.
An air cooling condenser uses air as the heat-conducting media by blowing air through the surface of condensers, which carries heat away from the high-pressure, high-temperature refrigerant vapor.
A water cooling condenser uses water as the heat-conducting media to cooling refrigerant vapor to liquid.
An evaporative condenser cools the refrigerant vapor by using heat exchange between the evaporator pipes and the evaporated water which is sprayed on the surface of the pipes. This type of condenser is capable of working in warm environments; they are also very efficient and reliable.
Tube ice generator
A tube ice generator is an ice generator in which the water is frozen in tubes that are extended vertically within a surrounding casing—the freezing chamber. At the bottom of the freezing chamber, there is a distributor plate having apertures surrounding the tubes and attached to the separate chamber into which a warm gas is passed to heat the tubes and cause the ice rods to slide down.
Tube ice can be used in cooling processes, such as temperature controlling, fresh fish freezing, and beverage bottle freezing. It can be consumed alone or with food or beverages.
Global applications and impact
As of 2019 there were approximately 2 billion household refrigerators and over 40 million square meters of cold-storage facilities operating worldwide. In the US in 2018 almost 12 million refrigerators were sold. This data supports the assertion that refrigeration has global applications with positive impact upon the economy, technology, social dynamics, health, and the environment.
Economic applications
Refrigeration is necessary for the implementation of many current or future energy sources (hydrogen liquefying for alternative fuels in the automotive industry and thermonuclear fusion production for the alternative energy industries).
The petro-chemical and pharmaceutical industries also need refrigeration, as it is used to control and moderate many types of reactions.
Heat pumps, operating based on refrigeration processes, are frequently used as an energy-efficient way of producing heat.
The production and transport of cryogenic fuels (liquid hydrogen and oxygen) as well as the long-term storage of these fluids is necessary for the space industry.
In the transportation industry, refrigeration is used in marine containers, reefer ships, refrigerated rail cars, road transport, and in liquefied gas tankers.
Health applications
In the food industry, refrigeration contributes to reducing post-harvest losses while supplying foods to consumers, enabling perishable foods to be preserved at all stages from production to consumption.
In the medical sector, refrigeration is used for transport of vaccines, organs, and stem cells, while cryotechnology is used in surgery and other medical research courses of action.
Environmental applications
Refrigeration is used in biodiversity maintenance based on the cryopreservation of genetic resources (cells; tissues; and organs of plants, animals and micro-organisms).
Refrigeration enables the liquefaction of CO2 for underground storage, allowing the potential separation of CO2 from fossil fuels in power stations via cryogenic technology.
Environmental aspects
At an environmental level, the impact of refrigeration is caused by atmospheric emissions of refrigerant gases used in refrigerating installations and by the energy consumption of these refrigerating installations, which contributes to CO2 emissions – and consequently to global warming – thus reducing global energy resources. The atmospheric emissions of refrigerant gases result from the leaks occurring in insufficiently leak-tight refrigerating installations or during maintenance-related refrigerant-handling processes.
Depending on the refrigerants used, these installations and their subsequent leaks can lead to ozone depletion (chlorinated refrigerants like CFCs and HCFCs) and/or climate change, by exerting an additional greenhouse effect (fluorinated refrigerants: CFCs, HCFCs and HFCs).
Alternative refrigerants
In their continuous research of methods to replace ozone-depleting refrigerants and greenhouse refrigerants (CFCs, HCFCs and HFCs, respectively) the scientific community together with the refrigerant industry came up with alternative all-natural refrigerants which are eco-friendly. According to a report issued by the UN Environment Programme, “the increase in HFC emissions is projected to offset much of the climate benefit achieved by the earlier reduction in the emissions of Ozone depleting substances”. Among non-HFC refrigerants found to successfully replace the traditional ones are ammonia, hydrocarbons and carbon dioxide.
Ammonia
The history of refrigeration began with the use of ammonia. After more than 120 years, this substance is still the preeminent refrigerant used by household, commercial and industrial refrigeration systems. The major problem with ammonia is its toxicity at relatively low concentrations. On the other hand, ammonia has zero impact on the ozone layer and very low global warming effects. While deaths caused by ammonia exposure are extremely rare, the scientific community has come up with safer and technologically solid mechanisms of preventing ammonia leakage in modern refrigerating equipment. This problem out of the way, ammonia is considered an eco-friendly refrigerant with numerous applications.
Carbon dioxide (CO2)
Carbon dioxide has been used as a refrigerant for many years. Just like ammonia, it had fallen into almost complete disuse due to its low critical point and its high operating pressure. Carbon dioxide has zero impact on the ozone layer, and the global warming effects of the quantities required for use as a refrigerant are also negligible. Modern technology is solving such issues, and CO2 is widely used today as an alternative to traditional refrigerants in several fields: industrial refrigeration (CO2 is usually combined with ammonia, either in cascade systems or as a volatile brine), the food industry (food and retail refrigeration), heating (heat pumps) and the transportation industry (transport refrigeration).
Hydrocarbons
Hydrocarbons are natural products with high thermodynamic properties, zero ozone-layer impact and negligible global warming effects. One issue with hydrocarbons is that they are highly flammable, restricting their use to specific applications in the refrigeration industry.
In 2011, the EPA approved three alternative refrigerants to replace hydrofluorocarbons (HFCs) in commercial and household freezers via the Significant New Alternatives Policy (SNAP) program. The three alternative refrigerants approved by the EPA were the hydrocarbons propane and isobutane, and a substance called HCR188C – a hydrocarbon blend (ethane, propane, isobutane and n-butane). HCR188C is used today in commercial refrigeration applications (supermarket refrigerators, stand-alone refrigerators and refrigerating display cases), in refrigerated transportation, automotive air-conditioning systems and retrofits (for automotive applications), and residential window air-conditioners.
Future of refrigeration
In October 2016, negotiators from 197 countries reached an agreement to reduce emissions of chemical refrigerants that contribute to global warming, re-emphasizing the historical importance of the Montreal Protocol and aiming to extend its impact to the use of greenhouse gases in addition to the earlier efforts to reduce ozone depletion caused by chlorofluorocarbons. The agreement, concluded at a United Nations meeting in Kigali, Rwanda, set the terms for a rapid phasedown of hydrofluorocarbons (HFCs), whose manufacture would eventually be stopped altogether and whose uses would be reduced over time.
The UN agenda and the Rwanda deal aim to find a new generation of refrigerants that are safe from both an ozone-layer and a greenhouse-effect point of view. The legally binding agreement could reduce projected emissions by as much as 88% and lower global warming by almost 0.5 degrees Celsius (nearly 1 degree Fahrenheit) by 2100.
See also
Pumpable ice technology
Yakhchāl
References
External links
How to Buy an Energy-Efficient Commercial Ice Machine. Federal Energy Management Program. Accessed April 2, 2009.
Heating, ventilation, and air conditioning
Cooling technology
Food preservation
Water ice
Home appliances | Icemaker | [
"Physics",
"Technology"
] | 5,186 | [
"Physical systems",
"Machines",
"Home appliances"
] |
3,021,875 | https://en.wikipedia.org/wiki/Entropy%20of%20mixing | In thermodynamics, the entropy of mixing is the increase in the total entropy when several initially separate systems of different composition, each in a thermodynamic state of internal equilibrium, are mixed without chemical reaction by the thermodynamic operation of removal of impermeable partition(s) between them, followed by a time for establishment of a new thermodynamic state of internal equilibrium in the new unpartitioned closed system.
In general, the mixing may be constrained to occur under various prescribed conditions. In the customarily prescribed conditions, the materials are each initially at a common temperature and pressure, and the new system may change its volume, while being maintained at that same constant temperature, pressure, and chemical component masses. The volume available for each material to explore is increased, from that of its initially separate compartment, to the total common final volume. The final volume need not be the sum of the initially separate volumes, so that work can be done on or by the new closed system during the process of mixing, as well as heat being transferred to or from the surroundings, because of the maintenance of constant pressure and temperature.
The internal energy of the new closed system is equal to the sum of the internal energies of the initially separate systems. The reference values for the internal energies should be specified in a way that is constrained to make this so, maintaining also that the internal energies are respectively proportional to the masses of the systems.
For concision in this article, the term 'ideal material' is used to refer to either an ideal gas (mixture) or an ideal solution.
In the special case of mixing ideal materials, the final common volume is in fact the sum of the initial separate compartment volumes. There is no heat transfer and no work is done. The entropy of mixing is entirely accounted for by the diffusive expansion of each material into a final volume not initially accessible to it.
In the general case of mixing non-ideal materials, however, the total final common volume may be different from the sum of the separate initial volumes, and there may occur transfer of work or heat, to or from the surroundings; also there may be a departure of the entropy of mixing from that of the corresponding ideal case. That departure is the main reason for interest in entropy of mixing. These energy and entropy variables and their temperature dependences provide valuable information about the properties of the materials.
On a molecular level, the entropy of mixing is of interest because it is a macroscopic variable that provides information about constitutive molecular properties. In ideal materials, intermolecular forces are the same between every pair of molecular kinds, so that a molecule feels no difference between other molecules of its own kind and of those of the other kind. In non-ideal materials, there may be differences of intermolecular forces or specific molecular effects between different species, even though they are chemically non-reacting. The entropy of mixing provides information about constitutive differences of intermolecular forces or specific molecular effects in the materials.
The statistical concept of randomness is used for statistical mechanical explanation of the entropy of mixing. Mixing of ideal materials is regarded as random at a molecular level, and, correspondingly, mixing of non-ideal materials may be non-random.
Mixing of ideal species at constant temperature and pressure
In ideal species, intermolecular forces are the same between every pair of molecular kinds, so that a molecule "feels" no difference between itself and its molecular neighbors. This is the reference case for examining corresponding mixing of non-ideal species.
For example, two ideal gases, at the same temperature and pressure, are initially separated by a dividing partition.
Upon removal of the dividing partition, they expand into a final common volume (the sum of the two initial volumes), and the entropy of mixing
is given by
$$\Delta S_{\text{mix}} = -nR\,(x_1 \ln x_1 + x_2 \ln x_2)$$
where $R$ is the gas constant, $n$ the total number of moles and $x_i$ the mole fraction of component $i$, which initially occupies volume $V_i$. After the removal of the partition, the $n_i$ moles of component $i$ may explore the combined volume $V_1 + V_2$, which causes an entropy increase equal to $n_i R \ln[(V_1 + V_2)/V_i]$ for each component gas.
In this case, the increase in entropy is entirely due to the irreversible processes of expansion of the two gases, and involves no heat or work flow between the system and its surroundings.
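As a numerical illustration (added here; the function name and the equimolar example values are chosen only for demonstration), the ideal-gas formula above can be evaluated directly:

```python
import math

R = 8.314  # gas constant in J/(mol*K)

def ideal_entropy_of_mixing(moles):
    """Return -n*R*sum(x_i * ln x_i) for a list of mole amounts."""
    n_total = sum(moles)
    fractions = [n / n_total for n in moles]
    return -n_total * R * sum(x * math.log(x) for x in fractions if x > 0)

# Mixing 1 mol of each of two ideal gases at the same temperature and pressure:
print(ideal_entropy_of_mixing([1.0, 1.0]))  # about 11.5 J/K, i.e. 2*R*ln(2)
```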
Gibbs free energy of mixing
The Gibbs free energy change determines whether mixing at constant (absolute) temperature and pressure is a spontaneous process. This quantity combines two physical effects—the enthalpy of mixing, which is a measure of the energy change, and the entropy of mixing considered here.
For an ideal gas mixture or an ideal solution, there is no enthalpy of mixing ($\Delta H_{\text{mix}} = 0$), so that the Gibbs free energy of mixing is given by the entropy term only:
$$\Delta G_{\text{mix}} = -T\,\Delta S_{\text{mix}} = nRT \sum_i x_i \ln x_i$$
For an ideal solution, the Gibbs free energy of mixing is always negative, meaning that mixing of ideal solutions is always spontaneous. The lowest value is when the mole fraction is 0.5 for a mixture of two components, or 1/n for a mixture of n components.
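A short sketch (illustrative only; the temperature and total amount are arbitrary example values) showing that the ideal binary Gibbs free energy of mixing is most negative at a mole fraction of 0.5:

```python
import math

R = 8.314    # J/(mol*K)
T = 298.15   # example temperature in K
n = 1.0      # example total amount in mol

def gibbs_of_mixing(x):
    """Ideal binary Gibbs free energy of mixing, n*R*T*(x ln x + (1-x) ln(1-x))."""
    return n * R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))

xs = [i / 100 for i in range(1, 100)]
x_min = min(xs, key=gibbs_of_mixing)
print(x_min, gibbs_of_mixing(x_min))  # minimum at x = 0.5, about -1.72 kJ for 1 mol
```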
Solutions and temperature dependence of miscibility
Ideal and regular solutions
The above equation for the entropy of mixing of ideal gases is valid also for certain liquid (or solid) solutions—those formed by completely random mixing so that the components move independently in the total volume. Such random mixing of solutions occurs if the interaction energies between unlike molecules are similar to the average interaction energies between like molecules. The value of the entropy corresponds exactly to random mixing for ideal solutions and for regular solutions, and approximately so for many real solutions.
For binary mixtures the entropy of random mixing can be considered as a function of the mole fraction of one component.
For all possible mixtures, $0 < x < 1$, so that $\ln x$ and $\ln(1-x)$ are both negative and the entropy of mixing is positive and favors mixing of the pure components.
The curvature of $\Delta S_{\text{mix}}$ as a function of $x$ is given by the second derivative
$$\frac{\partial^2 \Delta S_{\text{mix}}}{\partial x^2} = -\frac{nR}{x(1-x)}$$
This curvature is negative for all possible mixtures $0 < x < 1$, so that mixing two solutions to form a solution of intermediate composition also increases the entropy of the system. Random mixing therefore always favors miscibility and opposes phase separation.
For ideal solutions, the enthalpy of mixing is zero so that the components are miscible in all proportions. For regular solutions a positive enthalpy of mixing may cause incomplete miscibility (phase separation for some compositions) at temperatures below the upper critical solution temperature (UCST). This is the minimum temperature at which the $-T\Delta S_{\text{mix}}$ term in the Gibbs energy of mixing is sufficient to produce miscibility in all proportions.
Systems with a lower critical solution temperature
Nonrandom mixing with a lower entropy of mixing can occur when the attractive interactions between unlike molecules are significantly stronger (or weaker) than the mean interactions between like molecules. For some systems this can lead to a lower critical solution temperature (LCST) or lower limiting temperature for phase separation.
For example, triethylamine and water are miscible in all proportions below 19 °C, but above this critical temperature, solutions of certain compositions separate into two phases at equilibrium with each other. This means that $\Delta G_{\text{mix}}$ is negative for mixing of the two phases below 19 °C and positive above this temperature. Therefore, $\Delta S_{\text{mix}} = -\partial(\Delta G_{\text{mix}})/\partial T$ is negative for mixing of these two equilibrium phases. This is due to the formation of attractive hydrogen bonds between the two components that prevent random mixing. Triethylamine molecules cannot form hydrogen bonds with each other but only with water molecules, so in solution they remain associated to water molecules with loss of entropy. The mixing that occurs below 19 °C is due not to entropy but to the enthalpy of formation of the hydrogen bonds.
Lower critical solution temperatures also occur in many polymer-solvent mixtures. For polar systems such as polyacrylic acid in 1,4-dioxane, this is often due to the formation of hydrogen bonds between polymer and solvent. For nonpolar systems such as polystyrene in cyclohexane, phase separation has been observed in sealed tubes (at high pressure) at temperatures approaching the liquid-vapor critical point of the solvent. At such temperatures the solvent expands much more rapidly than the polymer, whose segments are covalently linked. Mixing therefore requires contraction of the solvent for compatibility of the polymer, resulting in a loss of entropy.
Statistical thermodynamical explanation of the entropy of mixing of ideal gases
Since thermodynamic entropy can be related to statistical mechanics or to information theory, it is possible to calculate the entropy of mixing using these two approaches. Here we consider the simple case of mixing ideal gases.
Proof from statistical mechanics
Assume that the molecules of two different substances are approximately the same size, and regard space as subdivided into a square lattice whose cells are the size of the molecules. (In fact, any lattice would do, including close packing.) This is a crystal-like conceptual model to identify the molecular centers of mass. If the two phases are liquids, there is no spatial uncertainty in each one individually. (This is, of course, an approximation. Liquids have a "free volume". This is why they are (usually) less dense than solids.) Everywhere we look in component 1, there is a molecule present, and likewise for component 2. After the two different substances are intermingled (assuming they are miscible), the liquid is still dense with molecules, but now there is uncertainty about what kind of molecule is in which location. Of course, any idea of identifying molecules in given locations is a thought experiment, not something one could do, but the calculation of the uncertainty is well-defined.
We can use Boltzmann's equation for the entropy change as applied to the mixing process
$$\Delta S_{\text{mix}} = k_B \ln W$$
where $k_B$ is the Boltzmann constant. We then calculate the number of ways $W$ of arranging $N_1$ molecules of component 1 and $N_2$ molecules of component 2 on a lattice, where
$$N = N_1 + N_2$$
is the total number of molecules, and therefore the number of lattice sites.
Calculating the number of permutations of $N$ objects, correcting for the fact that $N_1$ of them are identical to one another, and likewise for $N_2$,
$$W = \frac{N!}{N_1!\,N_2!}$$
After applying Stirling's approximation for the factorial of a large integer m:
$$\ln m! \approx m \ln m - m,$$
the result is
$$\Delta S_{\text{mix}} = -k_B\left(N_1 \ln \frac{N_1}{N} + N_2 \ln \frac{N_2}{N}\right) = -k_B N (x_1 \ln x_1 + x_2 \ln x_2)$$
where we have introduced the mole fractions $x_1 = N_1/N$ and $x_2 = N_2/N$, which are also the probabilities of finding any particular component in a given lattice site.
Since the Boltzmann constant $k_B = R/N_A$, where $N_A$ is the Avogadro constant, and the number of molecules $N = nN_A$, we recover the thermodynamic expression for the mixing of two ideal gases,
$$\Delta S_{\text{mix}} = -nR\,(x_1 \ln x_1 + x_2 \ln x_2)$$
This expression can be generalized to a mixture of $r$ components, with
$$\Delta S_{\text{mix}} = -nR \sum_{i=1}^{r} x_i \ln x_i$$
The Flory–Huggins solution theory is an example of a more detailed model along these lines.
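The combinatorial argument can be checked numerically with the following sketch (added for illustration; the particle numbers are arbitrary and the Boltzmann constant is set to 1 so the comparison is dimensionless). It compares the exact lattice count ln[N!/(N1! N2!)] with the mole-fraction formula obtained via Stirling's approximation:

```python
import math

def exact_mixing_entropy(n1, n2):
    """ln(N! / (N1! * N2!)) with k_B = 1, using exact log-factorials (lgamma)."""
    n = n1 + n2
    return math.lgamma(n + 1) - math.lgamma(n1 + 1) - math.lgamma(n2 + 1)

def stirling_mixing_entropy(n1, n2):
    """-N * (x1 ln x1 + x2 ln x2) with k_B = 1."""
    n = n1 + n2
    x1, x2 = n1 / n, n2 / n
    return -n * (x1 * math.log(x1) + x2 * math.log(x2))

for n1, n2 in [(10, 10), (1000, 2000), (10**6, 3 * 10**6)]:
    print(n1, n2, exact_mixing_entropy(n1, n2), stirling_mixing_entropy(n1, n2))
# The two values converge as the particle numbers grow, as Stirling's approximation requires.
```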
Relationship to information theory
The entropy of mixing is also proportional to the Shannon entropy or compositional uncertainty of information theory, which is defined without requiring Stirling's approximation. Claude Shannon introduced this expression for use in information theory, but similar formulas can be found as far back as the work of Ludwig Boltzmann and J. Willard Gibbs. The Shannon uncertainty is not the same as the Heisenberg uncertainty principle in quantum mechanics, which is based on variance. The Shannon entropy is defined as:
$$H = -\sum_{i=1}^{r} p_i \log p_i$$
where pi is the probability that an information source will produce the ith symbol from an r-symbol alphabet and is independent of previous symbols (thus i runs from 1 to r). H is then a measure of the expected amount of information (−log pi) missing before the symbol is known or measured, or, alternatively, the expected amount of information supplied when the symbol becomes known. The set of messages of length N symbols from the source will then have an entropy of NH.
The thermodynamic entropy is only due to positional uncertainty, so we may take the "alphabet" to be any of the r different species in the gas, and, at equilibrium, the probability that a given particle is of type i is simply the mole fraction xi for that particle. Since we are dealing with ideal gases, the identity of nearby particles is irrelevant. Multiplying by the number of particles N yields the change in entropy of the entire system from the unmixed case in which all of the pi were either 1 or 0. We again obtain the entropy of mixing on multiplying by the Boltzmann constant .
So thermodynamic entropy with r chemical species with a total of N particles has a parallel to an information source that has r distinct symbols with messages that are N symbols long.
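This correspondence can be made concrete with a small sketch (illustrative only; the composition and particle number are example values chosen here): the Shannon entropy of the mole-fraction distribution, computed with natural logarithms, multiplied by N and the Boltzmann constant reproduces the mixing entropy.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def shannon_entropy_nats(probabilities):
    """Shannon entropy H = -sum(p ln p), in natural-log units."""
    return -sum(p * math.log(p) for p in probabilities if p > 0)

mole_fractions = [0.25, 0.75]   # example composition of a two-species ideal gas
N = 6.02214076e23               # example: one mole of particles

delta_S = N * K_B * shannon_entropy_nats(mole_fractions)
print(delta_S)  # about 4.68 J/K for this composition
```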
Application to gases
In gases there is a lot more spatial uncertainty because most of their volume is merely empty space. We can regard the mixing process as allowing the contents of the two originally separate contents to expand into the combined volume of the two conjoined containers. The two lattices that allow us to conceptually localize molecular centers of mass also join. The total number of empty cells is the sum of the numbers of empty cells in the two components prior to mixing. Consequently, that part of the spatial uncertainty concerning whether any molecule is present in a lattice cell is the sum of the initial values, and does not increase upon "mixing".
Almost everywhere we look, we find empty lattice cells. Nevertheless, we do find molecules in a few occupied cells. When there is real mixing, for each of those few occupied cells, there is a contingent uncertainty about which kind of molecule it is. When there is no real mixing because the two substances are identical, there is no uncertainty about which kind of molecule it is. Using conditional probabilities, it turns out that the analytical problem for the small subset of occupied cells is exactly the same as for mixed liquids, and the increase in the entropy, or spatial uncertainty, has exactly the same form as obtained previously. Obviously the subset of occupied cells is not the same at different times. But only when there is real mixing and an occupied cell is found do we ask which kind of molecule is there.
See also: Gibbs paradox, in which it would seem that "mixing" two samples of the same gas would produce entropy.
Application to solutions
If the solute is a crystalline solid, the argument is much the same. A crystal has no spatial uncertainty at all, except for crystallographic defects, and a (perfect) crystal allows us to localize the molecules using the crystal symmetry group. The fact that volumes do not add when dissolving a solid in a liquid is not important for condensed phases. If the solute is not crystalline, we can still use a spatial lattice, as good an approximation for an amorphous solid as it is for a liquid.
The Flory–Huggins solution theory provides the entropy of mixing for polymer solutions, in which the macromolecules are huge compared to the solvent molecules. In this case, the assumption is made that each monomer subunit in the polymer chain occupies a lattice site.
Note that solids in contact with each other also slowly interdiffuse, and solid mixtures of two or more components may be made at will (alloys, semiconductors, etc.). Again, the same equations for the entropy of mixing apply, but only for homogeneous, uniform phases.
Mixing under other constraints
Mixing with and without change of available volume
In the established customary usage, expressed in the lead section of this article, the entropy of mixing comes from two mechanisms, the intermingling and possible interactions of the distinct molecular species, and the change in the volume available for each molecular species, or the change in concentration of each molecular species. For ideal gases, the entropy of mixing at prescribed common temperature and pressure has nothing to do with mixing in the sense of intermingling and interactions of molecular species, but is only to do with expansion into the common volume.
According to Fowler and Guggenheim (1939/1965), the conflating of the just-mentioned two mechanisms for the entropy of mixing is well established in customary terminology, but can be confusing unless it is borne in mind that the independent variables are the common initial and final temperature and total pressure; if the respective partial pressures or the total volume are chosen as independent variables instead of the total pressure, the description is different.
Mixing with each gas kept at constant partial volume, with changing total volume
In contrast to the established customary usage, "mixing" might be conducted reversibly at constant volume for each of two fixed masses of gases of equal volume, being mixed by gradually merging their initially separate volumes by use of two ideal semipermeable membranes, each permeable only to one of the respective gases, so that the respective volumes available to each gas remain constant during the merge. Either one of the common temperature or the common pressure is chosen to be independently controlled by the experimenter, the other being allowed to vary so as to maintain constant volume for each mass of gas. In this kind of "mixing", the final common volume is equal to each of the respective separate initial volumes, and each gas finally occupies the same volume as it did initially.
This constant volume kind of "mixing", in the special case of perfect gases, is referred to in what is sometimes called Gibbs' theorem. It states that the entropy of such "mixing" of perfect gases is zero.
Mixing at constant total volume and changing partial volumes, with mechanically controlled varying pressure, and constant temperature
An experimental demonstration may be considered. The two distinct gases, in a cylinder of constant total volume, are at first separated by two contiguous pistons made respectively of two suitably specific ideal semipermeable membranes. Ideally slowly and fictively reversibly, at constant temperature, the gases are allowed to mix in the volume between the separating membranes, forcing them apart, thereby supplying work to an external system. The energy for the work comes from the heat reservoir that keeps the temperature constant. Then, by externally forcing ideally slowly the separating membranes together, back to contiguity, work is done on the mixed gases, fictively reversibly separating them again, so that heat is returned to the heat reservoir at constant temperature. Because the mixing and separation are ideally slow and fictively reversible, the work supplied by the gases as they mix is equal to the work done in separating them again. Passing from fictive reversibility to physical reality, some amount of additional work, that remains external to the gases and the heat reservoir, must be provided from an external source for this cycle, as required by the second law of thermodynamics, because this cycle has only one heat reservoir at constant temperature, and the external provision of work cannot be completely efficient.
Gibbs' paradox: "mixing" of identical species versus mixing of closely similar but non-identical species
For entropy of mixing to exist, the putatively mixing molecular species must be chemically or physically detectably distinct. Thus arises the so-called Gibbs paradox, as follows. If molecular species are identical, there is no entropy change on mixing them, because, defined in thermodynamic terms, there is no mass transfer, and thus no thermodynamically recognized process of mixing. Yet the slightest detectable difference in constitutive properties between the two species yields a thermodynamically recognized process of transfer with mixing, and a possibly considerable entropy change, namely the entropy of mixing.
The "paradox" arises because any detectable constitutive distinction, no matter how slight, can lead to a considerably large change in amount of entropy as a result of mixing. Though a continuous change in the properties of the materials that are mixed might make the degree of constitutive difference tend continuously to zero, the entropy change would nonetheless vanish discontinuously when the difference reached zero.
From a general physical viewpoint, this discontinuity is paradoxical. But from a specifically thermodynamic viewpoint, it is not paradoxical, because in that discipline the degree of constitutive difference is not questioned; it is either there or not there. Gibbs himself did not see it as paradoxical. Distinguishability of two materials is a constitutive, not a thermodynamic, difference, for the laws of thermodynamics are the same for every material, while their constitutive characteristics are diverse.
Though one might imagine a continuous decrease of the constitutive difference between any two chemical substances, physically it cannot be continuously decreased till it actually vanishes. It is hard to think of a smaller difference than that between ortho- and para-hydrogen. Yet they differ by a finite amount. The hypothesis, that the distinction might tend continuously to zero, is unphysical. This is neither examined nor explained by thermodynamics. Differences of constitution are explained by quantum mechanics, which postulates discontinuity of physical processes.
For a detectable distinction, some means should be physically available. One theoretical means would be through an ideal semi-permeable membrane. It should allow passage, backwards and forwards, of one species, while passage of the other is prevented entirely. The entirety of prevention should include perfect efficacy over a practically infinite time, in view of the nature of thermodynamic equilibrium. Even the slightest departure from ideality, as assessed over a finite time, would extend to utter non-ideality, as assessed over a practically infinite time. Such quantum phenomena as tunneling ensure that nature does not allow such membrane ideality as would support the theoretically demanded continuous decrease, to zero, of detectable distinction. The decrease to zero detectable distinction must be discontinuous.
For ideal gases, the entropy of mixing does not depend on the degree of difference between the distinct molecular species, but only on the fact that they are distinct; for non-ideal gases, the entropy of mixing can depend on the degree of difference of the distinct molecular species. The suggested or putative "mixing" of identical molecular species is not in thermodynamic terms a mixing at all, because thermodynamics refers to states specified by state variables, and does not permit an imaginary labelling of particles. Only if the molecular species are different is there mixing in the thermodynamic sense.
See also
CALPHAD
Enthalpy of mixing
Gibbs energy
Notes
References
External links
Online lecture
Statistical mechanics
Thermodynamic entropy | Entropy of mixing | [
"Physics"
] | 4,493 | [
"Statistical mechanics",
"Entropy",
"Physical quantities",
"Thermodynamic entropy"
] |
9,350,089 | https://en.wikipedia.org/wiki/ADAM17 | A disintegrin and metalloprotease 17 (ADAM17), also called TACE (tumor necrosis factor-α-converting enzyme), is a 70-kDa enzyme that belongs to the ADAM protein family of disintegrins and metalloproteases, activated by substrate presentation.
Structure
ADAM17 is an 824-amino acid polypeptide.
ADAM17 has multidomain structure that includes a pro-domain, a metallo-protease domain, a disintegrin domain, a cysteine-rich domain, an EGF-like domain, a transmembrane domain, and a cytoplasmic tail. The metalloprotease domain is responsible for the enzyme's catalytic activity, cleaving membrane-bound proteins, including cytokines like TNF-alpha, to release their soluble forms. The disintegrin and cysteine-rich domains are implicated in cell adhesion and interaction with integrins, while the transmembrane domain anchors the protein in the membrane. The cytoplasmic tail is involved in intracellular signaling and protein-protein interactions. ADAM17's activity is tightly regulated through multiple mechanisms, including the removal of its pro-domain and interactions with regulatory proteins such as TIMPs (tissue inhibitors of metalloproteinases).
Function
ADAM17 is understood to be involved in the processing of tumor necrosis factor alpha (TNF-α) at the surface of the cell, and from within the intracellular membranes of the trans-Golgi network. This process, which is also known as 'shedding', involves the cleavage and release of a soluble ectodomain from membrane-bound pro-proteins (such as pro-TNF-α), and is of known physiological importance. ADAM17 was the first 'sheddase' to be identified, and is also understood to play a role in the release of a diverse variety of membrane-anchored cytokines, cell adhesion molecules, receptors, ligands, and enzymes.
Cloning of the TNF-α gene revealed it to encode a 26 kDa type II transmembrane pro-polypeptide that becomes inserted into the cell membrane during its maturation. At the cell surface, pro-TNF-α is biologically active, and is able to induce immune responses via juxtacrine intercellular signaling. However, pro-TNF-α can undergo a proteolytic cleavage at its Ala76-Val77 amide bond, which releases a soluble 17kDa extracellular domain (ectodomain) from the pro-TNF-α molecule. This soluble ectodomain is the cytokine commonly known as TNF-α, which is of pivotal importance in paracrine signaling. This proteolytic liberation of soluble TNF-α is catalyzed by ADAM17.
ADAM17 may play a prominent role in the Notch signaling pathway, during the proteolytic release of the Notch intracellular domain (from the Notch1 receptor) that occurs following ligand binding. ADAM17 also regulates the MAP kinase signaling pathway by regulating shedding of the EGFR ligand amphiregulin in the mammary gland. ADAM17 also has a role in the shedding of L-selectin, a cellular adhesion molecule.
Activation
The localization of ADAM17 is speculated to be an important determinant of shedding activity. TNF-α processing has classically been understood to occur in the trans-Golgi network, and be closely connected to transport of soluble TNF-α to the cell surface. Shedding is also associated with clustering of ADAM17 with its substrate, membrane bound TNF, in lipid rafts. The overall process is called substrate presentation and regulated by cholesterol. Research also suggests that the majority of mature, endogenous ADAM17 may be localized to a perinuclear compartment, with only a small amount of TACE being present on the cell surface. The localization of mature ADAM17 to a perinuclear compartment, therefore, raises the possibility that ADAM17-mediated ectodomain shedding may also occur in the intracellular environment, in contrast with the conventional model.
Functional ADAM17 has been documented to be ubiquitously expressed in the human colon, with increased activity in the colonic mucosa of patients with ulcerative colitis, a main form of inflammatory bowel disease. Other experiments have also suggested that expression of ADAM17 may be inhibited by ethanol.
Interactions
ADAM17 has been shown to interact with:
DLG1
MAD2L1, and
MAPK1.
iRhom2.
Clinical significance
Adam17 may facilitate entry of the SARS‑CoV‑2 virus, possibly by enabling fusion of virus particles with the cytoplasmic membrane. Adam17 has similar ACE2 cleavage activity as TMPRSS2, but by forming soluble ACE2, Adam17 may actually have the protective effect of blocking circulating SARS‑CoV‑2 virus particles.
Adam17 sheddase activity may contribute to COVID-19 inflammation by cleavage of TNF-α and Interleukin-6 receptor.
Recently, ADAM17 was discovered as a crucial mediator of resistance to radiotherapy. Radiotherapy can induce a dose-dependent increase of furin-mediated cleavage of the ADAM17 proform to active ADAM17, which results in enhanced ADAM17 activity in vitro and in vivo. It was also shown that radiotherapy activates ADAM17 in non-small cell lung cancer, which results in shedding of multiple survival factors, growth factor pathway activation, and radiotherapy-induced treatment resistance.
References
Further reading
External links
Proteases
Clusters of differentiation
EC 3.4.24
Signal transduction
Human proteins
Genes mutated in mice | ADAM17 | [
"Chemistry",
"Biology"
] | 1,206 | [
"Biochemistry",
"Neurochemistry",
"Signal transduction"
] |
9,350,418 | https://en.wikipedia.org/wiki/Molar%20conductivity | The molar conductivity of an electrolyte solution is defined as its conductivity divided by its molar concentration.
where:
κ is the measured conductivity (formerly known as specific conductance),
c is the molar concentration of the electrolyte.
The SI unit of molar conductivity is siemens metres squared per mole (S m2 mol−1). However, values are often quoted in S cm2 mol−1. In these last units, the value of Λm may be understood as the conductance of a volume of solution between parallel plate electrodes one centimeter apart and of sufficient area so that the solution contains exactly one mole of electrolyte.
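As a hedged illustration (not part of the article; the numerical inputs are example values only), the following sketch converts a measured conductivity and a molar concentration into a molar conductivity in the commonly quoted units of S cm2 mol−1:

```python
def molar_conductivity(kappa_S_per_cm, conc_mol_per_L):
    """Molar conductivity in S*cm^2/mol from conductivity in S/cm and concentration in mol/L.

    The concentration is converted from mol/L to mol/cm^3 (1 L = 1000 cm^3).
    """
    conc_mol_per_cm3 = conc_mol_per_L / 1000.0
    return kappa_S_per_cm / conc_mol_per_cm3

# Example: a 0.01 mol/L KCl solution with a conductivity of about 1.41e-3 S/cm
print(molar_conductivity(1.41e-3, 0.01))  # about 141 S*cm^2/mol
```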
Variation of molar conductivity with dilution
There are two types of electrolytes: strong and weak. Strong electrolytes usually undergo complete ionization, and therefore they have higher conductivity than weak electrolytes, which undergo only partial ionization. For strong electrolytes, such as salts, strong acids and strong bases, the molar conductivity depends only weakly on concentration. On dilution there is a regular increase in the molar conductivity of a strong electrolyte, due to the decrease in solute–solute interaction. Based on experimental data Friedrich Kohlrausch (around the year 1900) proposed the non-linear law for strong electrolytes:
$$\Lambda_\text{m} = \Lambda_\text{m}^\circ - K\sqrt{c} = \alpha f_\lambda \Lambda_\text{m}^\circ$$
where
Λm° is the molar conductivity at infinite dilution (or limiting molar conductivity), which can be determined by extrapolation of Λm as a function of √c,
K is the Kohlrausch coefficient, which depends mainly on the stoichiometry of the specific salt in solution,
α is the dissociation degree even for strong concentrated electrolytes,
fλ is the lambda factor for concentrated solutions.
This law is valid for low electrolyte concentrations only; it fits into the Debye–Hückel–Onsager equation.
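The extrapolation can be illustrated with a small least-squares sketch (added here; the data points are made-up values that merely follow the square-root trend): fitting Λm against √c gives the limiting molar conductivity as the intercept and the Kohlrausch coefficient as the slope.

```python
import math

# Hypothetical (c in mol/L, molar conductivity in S*cm^2/mol) pairs following Kohlrausch's law
data = [(0.0005, 145.5), (0.001, 144.9), (0.005, 141.3), (0.01, 138.3)]

# Simple linear least squares of Lambda_m against sqrt(c): Lambda_m = Lambda0 - K*sqrt(c)
xs = [math.sqrt(c) for c, _ in data]
ys = [lam for _, lam in data]
n = len(data)
x_mean, y_mean = sum(xs) / n, sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
        sum((x - x_mean) ** 2 for x in xs)
lambda_0 = y_mean - slope * x_mean   # intercept = limiting molar conductivity
print(lambda_0, -slope)              # Lambda0 and the Kohlrausch coefficient K
```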
For weak electrolytes (i.e. incompletely dissociated electrolytes), however, the molar conductivity strongly depends on concentration: The more dilute a solution, the greater its molar conductivity, due to increased ionic dissociation. For example, acetic acid has a higher molar conductivity in dilute aqueous acetic acid than in concentrated acetic acid.
Kohlrausch's law of independent migration of ions
Friedrich Kohlrausch in 1875–1879 established that to a high accuracy in dilute solutions, molar conductivity can be decomposed into contributions of the individual ions. This is known as Kohlrausch's law of independent ionic migration.
For any electrolyte AxBy, the limiting molar conductivity is expressed as x times the limiting molar conductivity of Ay+ and y times the limiting molar conductivity of Bx−:
$$\Lambda_\text{m}^\circ = \sum_i \nu_i \lambda_i$$
where:
λi is the limiting molar ionic conductivity of ion i,
νi is the number of ions i in the formula unit of the electrolyte (e.g. 2 and 1 for Na+ and in Na2SO4).
Kohlrausch's evidence for this law was that the limiting molar conductivities of two electrolytes with two different cations and a common anion differ by an amount which is independent of the nature of the anion. For example, Λm°(KX) − Λm°(NaX) takes the same value whether X is Cl− or I−. This difference is ascribed to a difference in ionic conductivities between K+ and Na+. Similar regularities are found for two electrolytes with a common cation and two different anions.
Molar ionic conductivity
The molar ionic conductivity of each ionic species is proportional to its electrical mobility (μ), or drift velocity per unit electric field, according to the equation
$$\lambda = z \mu F$$
where z is the ionic charge, and F is the Faraday constant.
The limiting molar conductivity of a weak electrolyte cannot be determined reliably by extrapolation. Instead it can be expressed as a sum of ionic contributions, which can be evaluated from the limiting molar conductivities of strong electrolytes containing the same ions. For aqueous acetic acid as an example,
$$\Lambda_\text{m}^\circ(\mathrm{CH_3COOH}) = \Lambda_\text{m}^\circ(\mathrm{CH_3COONa}) + \Lambda_\text{m}^\circ(\mathrm{HCl}) - \Lambda_\text{m}^\circ(\mathrm{NaCl})$$
Values for each ion may be determined using measured ion transport numbers. For the cation:
$$\lambda^+ = \frac{t_+ \Lambda_\text{m}^\circ}{\nu_+}$$
and for the anion:
$$\lambda^- = \frac{t_- \Lambda_\text{m}^\circ}{\nu_-}$$
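A minimal sketch of the additivity used for acetic acid (illustrative only; the limiting molar conductivities are typical textbook values at 25 °C and may differ slightly between sources):

```python
# Limiting molar conductivities at 25 C in S*cm^2/mol (typical literature values)
lambda_0 = {"HCl": 426.2, "CH3COONa": 91.0, "NaCl": 126.5}

# Kohlrausch additivity: Lambda0(CH3COOH) = Lambda0(HCl) + Lambda0(CH3COONa) - Lambda0(NaCl)
lambda_0_acetic = lambda_0["HCl"] + lambda_0["CH3COONa"] - lambda_0["NaCl"]
print(lambda_0_acetic)  # about 390.7 S*cm^2/mol
```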
Most monovalent ions in water have limiting molar ionic conductivities of several tens of S cm2 mol−1; among the alkali metal cations, for example, the values increase in the order Li+ < Na+ < K+.
The order of the values for alkali metals is surprising, since it shows that the smallest cation Li+ moves more slowly in a given electric field than Na+, which in turn moves more slowly than K+. This occurs because of the effect of solvation of water molecules: the smaller Li+ binds most strongly to about four water molecules so that the moving cation species is effectively . The solvation is weaker for Na+ and still weaker for K+. The increase in halogen ion mobility from F− to Cl− to Br− is also due to decreasing solvation.
Exceptionally high values are found for H+ and OH−, which are explained by the Grotthuss proton-hopping mechanism for the movement of these ions. The H+ ion also has a larger conductivity than other ions in alcohols, which have a hydroxyl group, but behaves more normally in other solvents, including liquid ammonia and nitrobenzene.
For multivalent ions, it is usual to consider the conductivity divided by the equivalent ion concentration in terms of equivalents per litre, where 1 equivalent is the quantity of ions that have the same amount of electric charge as 1 mol of a monovalent ion: for example ½ mol of Ca2+ or ⅓ mol of Al3+. This quotient can be called the equivalent conductivity, although IUPAC has recommended that use of this term be discontinued and the term molar conductivity be used for the values of conductivity divided by equivalent concentration. If this convention is used, then the values fall in the same range as those of monovalent ions.
From the ionic molar conductivities of cations and anions, effective ionic radii can be calculated using the concept of Stokes radius. The values obtained for an ionic radius in solution calculated this way can be quite different from the ionic radius for the same ion in crystals, due to the effect of hydration in solution.
Applications
Ostwald's law of dilution, which gives the dissociation constant of a weak electrolyte as a function of concentration, can be written in terms of molar conductivity. Thus, the pKa values of acids can be calculated by measuring the molar conductivity and extrapolating to zero concentration. Namely, pKa = −log10 K at the zero-concentration limit, where K is the dissociation constant from Ostwald's law.
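A sketch of this application (illustrative only; the numbers are typical textbook values for dilute acetic acid at 25 °C and may vary between sources): the degree of dissociation is estimated as α = Λm/Λm°, and Ostwald's dilution law then gives the dissociation constant.

```python
# Example: dilute acetic acid at 25 C (typical textbook values)
c = 0.001          # concentration in mol/L
lambda_m = 48.6    # molar conductivity in S*cm^2/mol at this concentration
lambda_0 = 390.7   # limiting molar conductivity in S*cm^2/mol

alpha = lambda_m / lambda_0        # degree of dissociation
K_a = alpha**2 * c / (1 - alpha)   # Ostwald's dilution law
print(alpha, K_a)                  # alpha ~ 0.12, K_a ~ 1.8e-5, i.e. pKa ~ 4.7
```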
References
Electrochemical concepts
Physical chemistry
Molar quantities | Molar conductivity | [
"Physics",
"Chemistry"
] | 1,426 | [
"Applied and interdisciplinary physics",
"Physical quantities",
"Intensive quantities",
"Electrochemical concepts",
"Electrochemistry",
"nan",
"Physical chemistry",
"Molar quantities"
] |
9,351,265 | https://en.wikipedia.org/wiki/Friedlander%E2%80%93Iwaniec%20theorem | In analytic number theory the Friedlander–Iwaniec theorem states that there are infinitely many prime numbers of the form . The first few such primes are
2, 5, 17, 37, 41, 97, 101, 137, 181, 197, 241, 257, 277, 281, 337, 401, 457, 577, 617, 641, 661, 677, 757, 769, 821, 857, 881, 977, … .
The difficulty in this statement lies in the very sparse nature of this sequence: the number of integers of the form $a^2 + b^4$ less than $X$ is roughly of the order $X^{3/4}$.
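The sparsity can be seen directly with a short brute-force sketch (added for illustration; the bound is an arbitrary example):

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def friedlander_iwaniec_primes(limit):
    """Primes of the form a^2 + b^4 below `limit`, with a, b positive integers."""
    found = set()
    b = 1
    while b**4 < limit:
        a = 1
        while a**2 + b**4 < limit:
            n = a**2 + b**4
            if is_prime(n):
                found.add(n)
            a += 1
        b += 1
    return sorted(found)

print(friedlander_iwaniec_primes(200))  # [2, 5, 17, 37, 41, 97, 101, 137, 181, 197]
```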
History
The theorem was proved in 1997 by John Friedlander and Henryk Iwaniec. Iwaniec was awarded the 2001 Ostrowski Prize in part for his contributions to this work.
Refinements
The theorem was refined by D.R. Heath-Brown and Xiannan Li in 2017. In particular, they proved that the polynomial represents infinitely many primes when the variable is also required to be prime. Namely, if is the prime numbers less than in the form then
where
Special case
When $b = 1$, the Friedlander–Iwaniec primes have the form $a^2 + 1$, forming the set
2, 5, 17, 37, 101, 197, 257, 401, 577, 677, 1297, 1601, 2917, 3137, 4357, 5477, 7057, 8101, 8837, 12101, 13457, 14401, 15377, … .
It is conjectured (one of Landau's problems) that this set is infinite. However, this is not implied by the Friedlander–Iwaniec theorem.
References
Further reading
.
Additive number theory
Theorems in analytic number theory
Theorems about prime numbers | Friedlander–Iwaniec theorem | [
"Mathematics"
] | 368 | [
"Theorems in mathematical analysis",
"Theorems in number theory",
"Theorems in analytic number theory",
"Theorems about prime numbers"
] |
9,353,706 | https://en.wikipedia.org/wiki/Spontaneous%20combustion | Spontaneous combustion or spontaneous ignition is a type of combustion which occurs by self-heating (increase in temperature due to exothermic internal reactions), followed by thermal runaway (self heating which rapidly accelerates to high temperatures) and finally, autoignition. It is distinct from (but has similar practical effects to) pyrophoricity, in which a compound needs no self-heat to ignite. The correct storage of spontaneously combustible materials is extremely important, as improper storage is the main cause of spontaneous combustion. Materials such as coal, cotton, hay, and oils should be stored at proper temperatures and moisture levels to prevent spontaneous combustion.
Allegations of spontaneous human combustion are considered pseudoscience.
Cause and ignition
Spontaneous combustion can occur when a substance with a relatively low ignition temperature such as hay, straw, peat, etc., begins to release heat. This may occur in several ways, either by oxidation in the presence of moisture and air, or bacterial fermentation, which generates heat. These materials are thermal insulators that prevent the escape of heat causing the temperatures of the material to rise above its ignition point. Combustion will begin when a sufficient oxidizer, such as oxygen, and fuel are present to maintain the reaction into thermal runaway.
Thermal runaway can occur when the amount of heat produced is greater than the rate at which the heat is lost. Materials that produce a lot of heat may combust in relatively small volumes, while materials that produce very little heat may only become dangerous when well insulated or stored in large volumes. Most oxidation reactions accelerate at higher temperatures, so a pile of material that would have been safe at a low ambient temperature may spontaneously combust during hotter weather.
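The balance between heat generation and heat loss can be sketched with a highly simplified, Semenov-style model (added for illustration only; all parameter values below are made up purely to show the qualitative behaviour). An Arrhenius-type generation term is compared with a heat-loss term proportional to the temperature difference from the surroundings; if generation exceeds loss at every temperature above ambient, no stable steady state exists and the pile runs away.

```python
import math

# Made-up parameters for a qualitative self-heating sketch
A = 1.0e15        # pre-exponential factor for the heat-generation rate, W
E_OVER_R = 10000  # activation energy divided by the gas constant, K
H_LOSS = 1.0      # effective heat-loss coefficient of the pile, W/K

def heat_generation(temp_k):
    """Arrhenius-type heat-generation rate (W) at absolute temperature temp_k."""
    return A * math.exp(-E_OVER_R / temp_k)

def heat_loss(temp_k, ambient_k):
    """Newtonian heat loss (W) to surroundings at ambient_k."""
    return H_LOSS * (temp_k - ambient_k)

# With these example numbers, the cooler ambient temperature allows a stable
# steady state, while the warmer one does not: the pile runs away.
for ambient in (290.0, 320.0):
    runaway = all(heat_generation(t) > heat_loss(t, ambient)
                  for t in range(int(ambient) + 1, 600))
    print(ambient, "thermal runaway" if runaway else "stable steady state exists")
```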
Affected materials
Confirmed
Hay and compost piles may self-ignite because of heat produced by bacterial fermentation, which then can cause pyrolysis and oxidation that leads to thermal runaway reactions that reach autoignition temperature. Rags soaked with drying oils or varnish can oxidize rapidly due to the large surface area, and even a small pile can produce enough heat to ignite under the right conditions. Coal can ignite spontaneously when exposed to oxygen, which causes it to react and heat up when there is insufficient ventilation for cooling. Pyrite oxidation is often the cause of coal's spontaneous ignition in old mine tailings. Pistachio nuts are highly flammable when stored in large quantities, and are prone to self-heating and spontaneous combustion. Large manure piles can spontaneously combust during conditions of extreme heat. Cotton and linen can ignite when they come into contact with polyunsaturated vegetable oils (linseed, massage oils); bacteria will slowly decompose the materials, producing heat. If these materials are stored in a way so the heat cannot escape, the heat buildup increases the rate of decomposition and thus the rate of heat buildup increases. Once ignition temperature is reached, combustion occurs with oxidizers present (oxygen). Nitrate film, when improperly stored, can deteriorate into an extremely flammable condition and combust. The 1937 Fox vault fire was caused by spontaneously combusting nitrate film.
Hay
Hay is one of the most widely studied materials in spontaneous combustion. It is very difficult to establish a unified theory of what occurs in hay self-heating because of the variation in the types of grass used in hay preparation, and the different locations where it is grown. It is anticipated that dangerous heating will occur in hay that contains more than 25% moisture. The largest number of fires occur within two to six weeks of storage, with the majority occurring in the fourth or fifth week.
The process may begin with microbiological activity (bacteria or mold) which ferments the hay, creating ethanol. Ethanol has a flash point below normal ambient temperatures, so with an ignition source such as static electricity, e.g. from a mouse running through the hay, combustion may occur. The temperature then increases, igniting the hay itself.
Microbiological activity reduces the amount of oxygen available in the hay. At 100 °C, wet hay absorbed twice the amount of oxygen of dry hay. There has been conjecture that the complex carbohydrates present in hay break down to simpler sugars, which are more readily fermented to ethanol.
Charcoal
Charcoal, when freshly prepared, can self-heat and catch fire. This is separate from hot spots which may have developed from the preparation of charcoal. Charcoal that has been exposed to air for a period of eight days is not considered to be hazardous. There are many factors involved, among them the type of wood and the temperature at which the charcoal was prepared.
Coal
Extensive studies have been completed on the self-heating of coal. Improper storage of coal is a main cause of spontaneous combustion, as there can be a continuous oxygen supply and the oxidation of coal produces heat that does not dissipate. Over time, these conditions can cause self-heating. The tendency to self-heat decreases with the increasing rank of the coal. Lignite coals are more active than bituminous coals, which are more active than anthracite coals. Freshly mined coal consumes oxygen more rapidly than weathered coal, and freshly mined coal self-heats to a greater extent than weathered coal. The presence of water vapor may also be important, as the rate of heat generation accompanying the absorption of water in dry coal from saturated air can be an order of magnitude or more greater than that for the same amount of dry air.
Cotton
Cotton, too, can be at great risk of spontaneous combustion. In an experimental study on the spontaneous combustion of cotton, three different types of cotton were tested at different heating rates and pressures. Different cotton varieties can have different self-heating oxidation temperatures and reaction intensities. Understanding what type of cotton is being stored will help reduce the risk of spontaneous combustion. A striking example of a cargo igniting spontaneously occurred on a ship in the Indian Ocean on 24 August 1834.
Oil seeds and oil-seed products
Oil seeds and residue from oil extraction will self-heat if too moist. Typically, storage at 9–14% moisture is satisfactory, but limits are established for each individual variety of oil seed. In the presence of excess moisture that is just below the level required for germinating seed, the activity of mold fungi is a likely candidate for generating heat. This was established for flax and sunflower seeds, and soy beans. Many of the oil seeds generate oils that are self-heating. Palm kernels, rapeseed, and cotton seed have also been studied. Rags soaked in linseed oil can spontaneously ignite if improperly stored or discarded.
Copra
Copra, the dried, white flesh of the coconut from which coconut oil is extracted, has been classed with dangerous goods due to its spontaneously combustive nature. It is identified as a Division 4.2 substance.
Human
There have been unconfirmed anecdotal reports of people spontaneously combusting. This alleged phenomenon is not considered true spontaneous combustion, as supposed cases have been largely attributed to the wick effect, whereby an external source of fire ignites nearby flammable materials and human fat or other sources.
Predictions and preventions
There are many factors that can help predict spontaneous combustion and prevent it. The longer a material sits, the higher the risk of spontaneous combustion. Preventing spontaneous combustion can be as simple as not leaving materials stored for extended periods of time, controlling air flow, moisture, methane, and pressure balances. There are also many materials that prevent spontaneous combustion. For example, spontaneous coal combustion can be prevented by physical based materials such as chlorine salts, ammonium salts, alkalis, inert gases, colloids, polymers, aerosols, and LDHs, as well as chemical-based materials like antioxidants, ionic liquids, and composite materials.
References
Bibliography
External links
Article on the spontaneous combustion of coal, May 1993
Spontaneous combustion demonstration
Combustion | Spontaneous combustion | [
"Chemistry"
] | 1,613 | [
"Combustion"
] |
9,353,915 | https://en.wikipedia.org/wiki/Colored%20dissolved%20organic%20matter | Colored dissolved organic matter (CDOM) is the optically measurable component of dissolved organic matter in water. Also known as chromophoric dissolved organic matter, yellow substance, and gelbstoff, CDOM occurs naturally in aquatic environments and is a complex mixture of many hundreds to thousands of individual, unique organic matter molecules, which are primarily leached from decaying detritus and organic matter. CDOM most strongly absorbs short wavelength light ranging from blue to ultraviolet, whereas pure water absorbs longer wavelength red light. Therefore, water with little or no CDOM, such as the open ocean, appears blue. Waters containing high amounts of CDOM can range from brown, as in many rivers, to yellow and yellow-brown in coastal waters. In general, CDOM concentrations are much higher in fresh waters and estuaries than in the open ocean, though concentrations are highly variable, as is the estimated contribution of CDOM to the total dissolved organic matter pool.
Significance
The concentration of CDOM can have a significant effect on biological activity in aquatic systems. CDOM diminishes light intensity as it penetrates water. Very high concentrations of CDOM can have a limiting effect on photosynthesis and inhibit the growth of phytoplankton, which form the basis of oceanic food chains and are a primary source of atmospheric oxygen. However, the influence of CDOM on algal photosynthesis can be complex in other aquatic systems like lakes where CDOM increases photosynthetic rates at low and moderate concentrations, but decreases photosynthetic rates at high concentrations. CDOM concentrations reflect hierarchical controls. Concentrations vary among lakes in close proximity due to differences in lake and watershed morphometry, and regionally because of difference in climate and dominant vegetation. CDOM also absorbs harmful UVA/B radiation, protecting organisms from DNA damage.
Absorption of UV radiation causes CDOM to "bleach", reducing its optical density and absorptive capacity. This bleaching (photodegradation) of CDOM produces low-molecular-weight organic compounds which may be utilized by microbes, release nutrients that may be used by phytoplankton as a nutrient source for growth, and generates reactive oxygen species, which may damage tissues and alter the bioavailability of limiting trace metals.
CDOM can be detected and measured from space using satellite remote sensing and often interferes with the use of satellite spectrometers to remotely estimate phytoplankton populations. As a pigment necessary for photosynthesis, chlorophyll is a key indicator of the phytoplankton abundance. However, CDOM and chlorophyll both absorb light in the same spectral range so it is often difficult to differentiate between the two.
Although variations in CDOM are primarily the result of natural processes including changes in the amount and frequency of precipitation, human activities such as logging, agriculture, effluent discharge, and wetland drainage can affect CDOM levels in fresh water and estuarine systems.
Measurement
Traditional methods of measuring CDOM include UV-visible spectroscopy (absorbance) and fluorometry (fluorescence). Optical proxies have been developed to characterize sources and properties of CDOM, including specific ultraviolet absorbance at 254 nm (SUVA254) and spectral slopes for absorbance, and the fluorescence index (FI), biological index (BIX), and humification index (HIX) for fluorescence. Excitation emission matrices (EEMs) can be resolved into components in a technique called parallel factor analysis (PARAFAC), where each component is often labelled as "humic-like", "protein-like", etc. As mentioned above, remote sensing is the newest technique to detect CDOM from space.
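Two of these absorbance-based proxies can be sketched as follows (illustrative code added here, not from the article; the absorbance values, path length and DOC concentration are made-up example inputs). SUVA254 is commonly reported as the decadic absorption coefficient at 254 nm (in m−1) divided by the DOC concentration (in mg/L), and the spectral slope S comes from an exponential fit of the absorption coefficient against wavelength.

```python
import math

def suva254(absorbance_254, path_length_m, doc_mg_per_L):
    """SUVA254 in L mg^-1 m^-1: decadic absorption coefficient (m^-1) per unit DOC."""
    a254 = absorbance_254 / path_length_m   # decadic absorption coefficient, m^-1
    return a254 / doc_mg_per_L

def spectral_slope(wavelengths_nm, absorption_coeffs):
    """Spectral slope S (nm^-1) from a log-linear fit of a(lambda) = a0 * exp(-S * lambda)."""
    xs = wavelengths_nm
    ys = [math.log(a) for a in absorption_coeffs]
    n = len(xs)
    x_mean, y_mean = sum(xs) / n, sum(ys) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
            sum((x - x_mean) ** 2 for x in xs)
    return -slope

# Made-up example: absorbance 0.12 at 254 nm in a 0.01 m cuvette, DOC of 4 mg/L
print(suva254(0.12, 0.01, 4.0))   # 3.0 L mg^-1 m^-1

# Made-up absorption coefficients decaying roughly exponentially between 275 and 295 nm
wl = [275, 280, 285, 290, 295]
a = [12.0, 10.9, 9.9, 9.0, 8.2]
print(spectral_slope(wl, a))      # about 0.019 nm^-1
```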
See also
Blackwater river
Color of water
Dissolved organic carbon (DOC)
Ocean turbidity
Secchi disk
References
External links
The Color of the Ocean from science@NASA
Aquatic ecology
Chemical oceanography
Environmental chemistry
Organic chemistry
Water chemistry
Water quality indicators
Water supply | Colored dissolved organic matter | [
"Chemistry",
"Engineering",
"Biology",
"Environmental_science"
] | 828 | [
"Hydrology",
"Environmental chemistry",
"Water pollution",
"Chemical oceanography",
"Water quality indicators",
"Ecosystems",
"nan",
"Environmental engineering",
"Aquatic ecology",
"Water supply"
] |
9,354,293 | https://en.wikipedia.org/wiki/Pasteur%20effect | The Pasteur effect describes how available oxygen inhibits ethanol fermentation, driving yeast to switch toward aerobic respiration for increased generation of the energy carrier adenosine triphosphate (ATP). More generally, in the medical literature, the Pasteur effect refers to how the cellular presence of oxygen causes in cells a decrease in the rate of glycolysis and also a suppression of lactate accumulation. The effect occurs in animal tissues, as well as in microorganisms belonging to the fungal kingdom.
Discovery
The effect was described by Louis Pasteur in 1857 in experiments showing that aeration of yeasted broth causes cell growth to increase while the fermentation rate decreases, based on lowered ethanol production.
Explanation
Yeast fungi, being facultative anaerobes, can either produce energy through ethanol fermentation or aerobic respiration. When the O2 concentration is low, the two pyruvate molecules formed through glycolysis are each fermented into ethanol and carbon dioxide. While only 2 ATP are produced per glucose, this method is utilized under anaerobic conditions because it oxidizes the electron shuttle NADH into NAD+ for another round of glycolysis and ethanol fermentation.
If the concentration of oxygen increases, pyruvate is instead converted to acetyl CoA, used in the citric acid cycle, and undergoes oxidative phosphorylation. Per glucose, 10 NADH and 2 FADH2 are produced in cellular respiration for a significant amount of proton pumping to produce a proton gradient utilized by ATP Synthase. While the exact ATP output ranges based on considerations like the overall electrochemical gradient, aerobic respiration produces far more ATP than the anaerobic process of ethanol fermentation. The increased ATP and citrate from aerobic respiration allosterically inhibit the glycolysis enzyme phosphofructokinase 1 because less pyruvate is needed to produce the same amount of ATP.
Despite this energetic incentive, Rosario Lagunas has shown that yeast continue to partially ferment available glucose into ethanol for many reasons. First, glucose metabolism is faster through ethanol fermentation because it involves fewer enzymes and limits all reactions to the cytoplasm. Second, ethanol has bactericidal activity by causing damage to the cell membrane and protein denaturing, allowing yeast fungus to outcompete environmental bacteria for resources. Third, partial fermentation may be a defense mechanism against environmental competitors depleting all oxygen faster than the yeast's regulatory systems could fully switch from aerobic respiration to ethanol fermentation.
Practical implications
The fermentation processes used in alcohol production is commonly maintained in low oxygen conditions, under a blanket of carbon dioxide, while growing yeast for biomass involves aerating the broth for maximized energy production. Despite the bactericidal effects of ethanol, acidifying effects of fermentation, and low oxygen conditions of industrial alcohol production, bacteria that undergo lactic acid fermentation can contaminate such facilities because lactic acid has a low pKa of 3.86 to avoid decoupling the pH membrane gradient that supports regulated transport.
See also
Ethanol fermentation
Fermentation (biochemistry)
Facultative anaerobic organism
Allosteric regulation
References
Further reading
Fermentation
Metabolism | Pasteur effect | [
"Chemistry",
"Biology"
] | 683 | [
"Cellular respiration",
"Cellular processes",
"Biochemistry",
"Metabolism",
"Fermentation"
] |
9,355,054 | https://en.wikipedia.org/wiki/Malaria%20antigen%20detection%20tests | Malaria antigen detection tests are a group of commercially available rapid diagnostic tests of the rapid antigen test type that allow quick diagnosis of malaria by people who are not otherwise skilled in traditional laboratory techniques for diagnosing malaria or in situations where such equipment is not available. There are currently over 20 such tests commercially available (WHO product testing 2008). The first malaria antigen suitable as target for such a test was a soluble glycolytic enzyme Glutamate dehydrogenase.
None of the rapid tests are currently as sensitive as a thick blood film, nor as cheap. A major drawback in the use of all current dipstick methods is that the result is essentially qualitative. In many endemic areas of tropical Africa, however, the quantitative assessment of parasitaemia is important, as a large percentage of the population will test positive in any qualitative assay.
Antigen-based Malaria Rapid Diagnostic Tests
Malaria is a curable disease if the patients have access to early diagnosis and prompt treatment. Antigen-based rapid diagnostic tests (RDTs) have an important role at the periphery of health services capability because many rural clinics do not have the ability to diagnose malaria on-site due to a lack of microscopes and trained technicians to evaluate blood films. Furthermore, in regions where the disease is not endemic, laboratory technologists have very limited experience in detecting and identifying malaria parasites. An ever-increasing number of travelers from temperate areas visit tropical countries each year, and many of them return with a malaria infection. RDTs are still regarded as complements to conventional microscopy, but with some improvements they may well replace the microscope. The tests are simple and the procedure can be performed on the spot in field conditions. These tests use finger-stick or venous blood, the completed test takes a total of 15–20 minutes, and a laboratory is not needed. The threshold of detection by these rapid diagnostic tests is in the range of 100 parasites/μL of blood compared to 5 by thick film microscopy.
pGluDH
An accurate diagnosis is becoming more and more important, in view of the increasing resistance of Plasmodium falciparum and the high price of alternatives to chloroquine. The enzyme pGluDH does not occur in the host red blood cell and was recommended as a marker enzyme for Plasmodium species by Picard-Maureau et al. in 1975. The malaria marker enzyme test is suitable for routine work and is now a standard test in most departments dealing with malaria. Presence of pGluDH is known to represent parasite viability and a rapid diagnostic test using pGluDH as antigen would have the ability to differentiate live from dead organisms. A complete RDT with pGluDH as antigen has been developed in China and is now undergoing clinical trials.
GluDHs are ubiquitous enzymes that occupy an important branch-point between carbon and nitrogen metabolism. Both nicotinamide adenine dinucleotide (NAD) [EC 1.4.1.2] and nicotinamide adenine dinucleotide phosphate (NADP) dependent GluDH [EC 1.4.1.4] enzymes are present in Plasmodia; the NAD-dependent GluDH is relatively unstable and not useful for diagnostic purposes. Glutamate dehydrogenase provides an oxidizable carbon source used for the production of energy as well as a reduced electron carrier, NADH. Glutamate is a principal amino donor to other amino acids in subsequent transamination reactions. The multiple roles of glutamate in nitrogen balance make it a gateway between free ammonia and the amino groups of most amino acids. Its crystal structure is published. The GluDH activity in P.vivax, P.ovale and P. malariae has never been tested, but given the importance of GluDH as a branch point enzyme, every cell must have a high concentration of GluDH. It is well known that enzymes with a high molecular weight (like GluDH) have many isozymes, which allows strain differentiations (given the right monoclonal antibody). The host produces antibodies against the parasitic enzyme indicating a low sequence identity.
Histidine rich protein II
The histidine-rich protein II (HRP II) is a histidine- and alanine-rich, water-soluble protein, which is localized in several cell compartments including the parasite cytoplasm. The antigen is expressed only by P. falciparum trophozoites. HRP II from P. falciparum has been implicated in the biocrystallization of hemozoin, an inert, crystalline form of ferriprotoporphyrin IX (Fe(3+)-PPIX) produced by the parasite. A substantial amount of the HRP II is secreted by the parasite into the host bloodstream and the antigen can be detected in erythrocytes, serum, plasma, cerebrospinal fluid and even urine as a secreted water-soluble protein. These antigens persist in the circulating blood after the parasitaemia has cleared or has been greatly reduced. It generally takes around two weeks after successful treatment for HRP2-based tests to turn negative, but may take as long as one month, which compromises their value in the detection of active infection. False positive dipstick results were reported in patients with rheumatoid-factor-positive rheumatoid arthritis. Since HRP-2 is expressed only by P. falciparum, these tests will give negative results with samples containing only P. vivax, P. ovale, or P. malariae; many cases of non-falciparum malaria may therefore be misdiagnosed as malaria negative (some P.falciparum strains also don't have HRP II). The variability in the results of pHRP2-based RDTs is related to the variability in the target antigen.
pLDH
P. falciparum lactate dehydrogenase (PfLDH) is a 33 kDa oxidoreductase [EC 1.1.1.27]. It is the last enzyme of the glycolytic pathway, essential for ATP generation, and one of the most abundant enzymes expressed by P. falciparum. Plasmodium LDH (pLDH) from P. vivax, P. malariae, and P. ovale exhibits 90–92% identity to PfLDH from P. falciparum. pLDH levels have been seen to fall in the blood sooner after treatment than HRP2. In this respect, pLDH is similar to pGluDH. Nevertheless, the kinetic properties and sensitivities to inhibitors targeted to the cofactor binding site differ significantly and are identifiable by measuring dissociation constants for inhibitors, which differ by up to 21-fold.
pAldo
Fructose-bisphosphate aldolase [EC 4.1.2.13] catalyzes a key reaction in glycolysis and energy production and is produced by all four species. The P. falciparum aldolase is a 41 kDa protein with 61–68% sequence similarity to known eukaryotic aldolases. Its crystal structure has been published. The presence of antibodies against p41 in the sera of human adults partially immune to malaria suggests that p41 is involved in the protective immune response against the parasite.
See also
Romanowsky stain
References
External links
Malaria Antibodies
Roll back malaria
WHO product testing 2008
WHO Rapid Diagnostic Tests (RDTs)
Malaria
Blood tests | Malaria antigen detection tests | [
"Chemistry"
] | 1,570 | [
"Blood tests",
"Chemical pathology"
] |
9,356,096 | https://en.wikipedia.org/wiki/Polymeric%20liquid%20crystal | Polymeric liquid crystals are similar to the monomeric liquid crystals used in displays. Both have dielectric anisotropy, or the ability to change direction and absorb or transmit light depending on electric fields. Polymeric liquid crystals form long head-to-tail or side-chain polymers, which are woven in thick mats and therefore have high viscosities. The high viscosities allow the polymeric liquid crystals to be used in complex structures, but they are harder to align, limiting their usefulness. The polymeric liquid crystals align in microdomains facing all different directions, which ruins the optical effect. One solution to this is to mix in a small amount of photo-curable polymer, which when spin-coated onto a surface can be hardened. Essentially, the polymeric liquid crystal and the photo-curable polymer are aligned in one direction, and then the photo-curable polymer is cured, "freezing" the polymeric liquid crystal in that direction.
References
Liquid crystals | Polymeric liquid crystal | [
"Physics",
"Materials_science"
] | 192 | [
"Materials science stubs",
"Condensed matter stubs",
"Condensed matter physics"
] |
9,360,334 | https://en.wikipedia.org/wiki/Modified-release%20dosage | Modified-release dosage is a mechanism that (in contrast to immediate-release dosage) delivers a drug with a delay after its administration (delayed-release dosage) or for a prolonged period of time (extended-release [ER, XR, XL] dosage) or to a specific target in the body (targeted-release dosage).
Sustained-release dosage forms are dosage forms designed to release (liberate) a drug at a predetermined rate in order to maintain a constant drug concentration for a specific period of time with minimum side effects. This can be achieved through a variety of formulations, including liposomes and drug-polymer conjugates (an example being hydrogels). By this definition, sustained release is more akin to "controlled release" than to merely "sustained" release.
Extended-release dosage consists of either sustained-release (SR) or controlled-release (CR) dosage. SR maintains drug release over a sustained period but not at a constant rate. CR maintains drug release over a sustained period at a nearly constant rate.
Sometimes these and other terms are treated as synonyms, but the United States Food and Drug Administration has in fact defined most of these as different concepts. Sometimes the term "depot tablet" is used, by analogy to the term for an injection formulation of a drug which releases slowly over time, but this term is not medically or pharmaceutically standard for oral medication.
Modified-release dosage and its variants are mechanisms used in tablets (pills) and capsules to dissolve a drug over time so that it is released more slowly and steadily into the bloodstream, with the advantage of being taken at less frequent intervals than immediate-release (IR) formulations of the same drug. For example, orally administered extended-release morphine can enable certain chronic pain patients to take tablets only a few times per day, rather than needing to redose every few hours as is typical with standard-release morphine tablets.
Most commonly it refers to time-dependent release in oral dose formulations. Timed release has several distinct variants such as sustained release where prolonged release is intended, pulse release, delayed release (e.g. to target different regions of the GI tract) etc. A distinction of controlled release is that it not only prolongs action, but it attempts to maintain drug levels within the therapeutic window to avoid potentially hazardous peaks in drug concentration following ingestion or injection and to maximize therapeutic efficiency.
In addition to pills, the mechanism can also apply to capsules and injectable drug carriers (that often have an additional release function), forms of controlled release medicines include gels, implants and devices (e.g. the vaginal ring and contraceptive implant) and transdermal patches.
Examples for cosmetic, personal care, and food science applications often centre on odour or flavour release.
The release technology scientific and industrial community is represented by the Controlled Release Society (CRS). The CRS is the worldwide society for delivery science and technologies. CRS serves more than 1,600 members from more than 50 countries. Two-thirds of CRS membership is represented by industry and one-third represents academia and government. CRS is affiliated with the Journal of Controlled Release and Drug Delivery and Translational Research scientific journals.
List of abbreviations
There is no industry standard for these abbreviations, and confusion and misreading have sometimes caused prescribing errors. Clear handwriting is necessary. For some drugs with multiple formulations, putting the meaning in parentheses is advisable.
A few other abbreviations are similar to these (in that they may serve as suffixes) but refer to dose rather than release rate. They include ES and XS (Extra Strength).
Methods
Today, most time-release drugs are formulated so that the active ingredient is embedded in a matrix of insoluble substance(s) (various: some acrylics, even chitin; these substances are often patented) such that the dissolving drug must find its way out through the holes.
In some SR formulations, the drug dissolves into the matrix, and the matrix physically swells to form a gel, allowing the drug to exit through the gel's outer surface.
Micro-encapsulation is also regarded as a more complete technology to produce complex dissolution profiles. Through coating an active pharmaceutical ingredient around an inert core and layering it with insoluble substances to form a microsphere, one can obtain more consistent and replicable dissolution rates in a convenient format that can be mixed and matched with other instant release pharmaceutical ingredients into any two piece gelatin capsule.
There are certain considerations for the formation of sustained-release formulation:
If the pharmacological activity of the active compound is not related to its blood levels, time releasing has no purpose except in some cases, such as bupropion, to reduce possible side effects.
If the absorption of the active compound involves an active transport, the development of a time-release product may be problematic.
The biological half-life of the drug refers to the drug's elimination from the bloodstream, which can occur through metabolism, urinary excretion, and other routes. If the active compound has a long half-life (over 6 hours), it is sustained on its own. If the active compound has a short half-life, a large amount would be required to maintain a prolonged effective dose. In this case, a broad therapeutic window is necessary to avoid toxicity; otherwise, the risk is unwarranted and another mode of administration would be recommended. Half-lives suited to sustained-release methods are typically 3–4 hours, and a drug dose greater than 0.5 grams is too large (see the dosing sketch after this list for an illustration of the trade-off).
The therapeutic index also determines whether a drug can be used as a time-release drug. A drug with a narrow therapeutic range, or small therapeutic index, will be judged unfit for a sustained-release mechanism, partly because of the risk of dose dumping, which can prove fatal under these conditions. For a drug that is made to be released over time, the objective is to stay within the therapeutic range as long as needed.
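To illustrate the half-life consideration above, the following is a deliberately simplified one-compartment sketch (all parameter values are assumptions, not data for any real drug) comparing repeated immediate-release doses with a zero-order sustained-release input:

```python
# A deliberately simplified one-compartment sketch (all values are assumptions,
# not data for any real drug): repeated immediate-release (IR) boluses versus a
# zero-order sustained-release (SR) input, showing the peak/trough difference
# that matters for short-half-life drugs with a narrow therapeutic window.

import math

HALF_LIFE_H = 3.5                      # assumed elimination half-life, hours
K_ELIM = math.log(2) / HALF_LIFE_H     # first-order elimination rate constant
V_D = 40.0                             # assumed volume of distribution, litres
DT = 0.01                              # Euler integration step, hours
T_END = 24.0

def simulate(input_rate_mg_per_h):
    """Integrate dC/dt = rate_in/V_d - k*C and return the concentration trace."""
    c, trace = 0.0, []
    for i in range(int(T_END / DT) + 1):
        t = i * DT
        trace.append((t, c))
        c += (input_rate_mg_per_h(t) / V_D - K_ELIM * c) * DT
    return trace

def ir_bolus_every_6h(t, dose_mg=100.0, interval_h=6.0):
    # Trigger a bolus at t = 0, 6, 12, 18 h (approximated as input over one step).
    nearest = round(t / interval_h) * interval_h
    return dose_mg / DT if abs(t - nearest) < DT / 2 else 0.0

def sr_zero_order(t, total_mg=400.0, duration_h=24.0):
    return total_mg / duration_h if t < duration_h else 0.0

for label, rate in (("IR every 6 h", ir_bolus_every_6h), ("SR zero-order", sr_zero_order)):
    trace = simulate(rate)
    late = [c for t, c in trace if t > 6.0]          # ignore the initial rise
    print(f"{label}: peak {max(late):.2f} mg/L, trough {min(late):.2f} mg/L")
```

With these assumed numbers the sustained-release input produces a much narrower swing between peak and trough concentrations, which is the point of the therapeutic-window argument above.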
There are many different methods used to obtain a sustained release.
Diffusion systems
Diffusion systems' rate release is dependent on the rate at which the drug dissolves through a barrier which is usually a type of polymer. Diffusion systems can be broken into two subcategories, reservoir devices and matrix devices.
Reservoir devices coat the drug with polymers; for the reservoir device to have a sustained-release effect, the polymer must not dissolve, so that the drug is released by diffusion through it. The release rate of reservoir devices can be altered by changing the polymer and can be made to follow zero-order release; however, drugs with higher molecular weight have difficulty diffusing through the membrane.
Matrix devices form a matrix (drug(s) mixed with a gelling agent) in which the drug is dissolved or dispersed. The drug is usually dispersed within a polymer and then released by diffusion. However, to make the drug sustained-release in this device, the rate of dissolution of the drug within the matrix needs to be higher than the rate at which it is released. The matrix device cannot achieve zero-order release, but higher-molecular-weight molecules can be used. The diffusion matrix device also tends to be easier to produce and to protect from changes in the gastrointestinal tract, but factors such as food can affect the release rate.
Dissolution systems
In dissolution systems, the system itself must dissolve slowly in order for the drug to have sustained-release properties; this can be achieved by using appropriate salts and/or derivatives as well as by coating the drug with a slowly dissolving material. It is used for drug compounds with high solubility in water. When the drug is covered with a slowly dissolving coat, it will eventually release the drug. Instead of diffusion, the drug release depends on the solubility and thickness of the coating. Because of this mechanism, dissolution is the rate-limiting factor for drug release. Dissolution systems can be broken down into subcategories called reservoir devices and matrix devices.
The reservoir device coats the drug with an appropriate material which will dissolve slowly. It can also be used to administer beads as a group with varying thickness, making the drug release in multiple times creating a SR.
The matrix device has the drug in a matrix and the matrix is dissolved instead of a coating. It can come either as drug-impregnated spheres or drug-impregnated tablets.
Osmotic systems
Osmotic controlled-release oral delivery systems (OROS) have the form of a rigid tablet with a semi-permeable outer membrane and one or more small laser drilled holes in it. As the tablet passes through the body, water is absorbed through the semipermeable membrane via osmosis, and the resulting osmotic pressure is used to push the active drug through the opening(s) in the tablet. OROS is a trademarked name owned by ALZA Corporation, which pioneered the use of osmotic pumps for oral drug delivery.
Osmotic release systems have a number of major advantages over other controlled-release mechanisms. They are significantly less affected by factors such as pH, food intake, GI motility, and differing intestinal environments. Using an osmotic pump to deliver drugs has additional inherent advantages regarding control over drug delivery rates. This allows for much more precise drug delivery over an extended period of time, which results in much more predictable pharmacokinetics. However, osmotic release systems are relatively complicated, somewhat difficult to manufacture, and may cause irritation or even blockage of the GI tract due to prolonged release of irritating drugs from the non-deformable tablet.
Ion-exchange resin
In the ion-exchange method, the resins are cross-linked water-insoluble polymers that contain ionisable functional groups that form a repeating pattern of polymers, creating a polymer chain. The drug is attached to the resin and is released when an appropriate interaction of ions and ion exchange groups occur. The area and length of the drug release and number of cross-link polymers dictate the rate at which the drug is released, determining the SR effect.
Floating systems
A floating system is a system where it floats on gastric fluids due to low density. The density of the gastric fluids is about 1 g/mL; thus, the drug/tablet administered must have a smaller density. The buoyancy will allow the system to float to the top of the stomach and release at a slower rate without worry of excreting it. This system requires that there are enough gastric fluids present as well as food. Many types of forms of drugs use this method such as powders, capsules, and tablets.
Bio-adhesive systems
Bio-adhesive systems are generally designed to stick to mucus; they can be favorable for delivery in the mouth because of the high mucus levels there, but are less straightforward for other sites. Magnetic materials can be added to the drug so that a magnet outside the body can help hold the system in place. However, there is low patient compliance with this system.
Matrix systems
The matrix system is the mixture of materials with the drug, which will cause the drug to slow down. However, this system has several subcategories: hydrophobic matrices, lipid matrices, hydrophilic matrices, biodegradable matrices, and mineral matrices.
A hydrophobic matrix is a drug mixed with a hydrophobic polymer. This causes SR because the drug, after being dissolved, will have to be released by going through channels made by the hydrophilic polymer.
A hydrophilic matrix will go back to the matrix as discussed before where a matrix is a mixture of a drug or drugs with a gelling agent. This system is well liked because of its cost and broad regulatory acceptance. The polymers used can be broken down into categories: cellulose derivatives, non-cellulose natural, and polymers of acrylic acid.
A lipid matrix uses wax or similar materials. Drug release happens via diffusion through, and erosion of, the wax and tends to be sensitive to digestive fluids.
Biodegradable matrices are made with unstable, linked monomers that will erode by biological compounds such as enzymes and proteins.
A mineral matrix generally means that the polymers used are obtained from seaweed.
Stimuli inducing release
Examples of stimuli that may be used to bring about release include pH, enzymes, light, magnetic fields, temperature, ultrasonics, osmosis, cellular traction forces, and electronic control of MEMS and NEMS.
Spherical micro-sized hydrogels (50–600 μm diameter) made of a three-dimensional cross-linked polymer can be used as drug carriers to control the release of the drug. These hydrogels are called microgels. They may carry a negative charge, as in DC Beads, for example. By an ion-exchange mechanism, a large amount of oppositely charged amphiphilic drug can be loaded inside these microgels. The release of these drugs can then be controlled by a specific triggering factor such as pH, ionic strength or temperature.
Pill splitting
Some time release formulations do not work properly if split, such as controlled-release tablet coatings, while other formulations such as micro-encapsulation still work if the microcapsules inside are swallowed whole.
Among the health information technology (HIT) that pharmacists use are medication safety tools to help manage this problem. For example, the ISMP "do not crush" list can be entered into the system so that warning stickers can be printed at the point of dispensing, to be stuck on the pill bottle.
Pharmaceutical companies that do not supply a range of half-dose and quarter-dose versions of time-release tablets can make it difficult for patients to be slowly tapered off their drugs.
History
The earliest SR drugs are associated with a patent in 1938 by Israel Lipowski, who coated pellets which led to coating particles. The science of controlled release developed further with more oral sustained-release products in the late 1940s and early 1950s, the development of controlled release of marine anti-foulants in the 1950s, and controlled release fertilizer in the 1970s where sustained and controlled delivery of nutrients was achieved following a single application to the soil. Delivery is usually effected by dissolution, degradation, or disintegration of an excipient in which the active compound is formulated. Enteric coating and other encapsulation technologies can further modify release profiles.
See also
Depot injection
Tablet (pharmacy)
Footnotes
External links
Controlled Release Society
United Kingdom & Ireland Controlled Release Society
Controlled Release Technology 5-day short course at MIT with Professor Robert Langer.
Dosage forms
Routes of administration
Drug delivery devices
Pharmacokinetics | Modified-release dosage | [
"Chemistry"
] | 3,019 | [
"Pharmacology",
"Drug delivery devices",
"Pharmacokinetics",
"Routes of administration"
] |
9,360,859 | https://en.wikipedia.org/wiki/Nitrazine | Nitrazine or phenaphthazine is a pH indicator dye often used in medicine. More sensitive than litmus, nitrazine indicates pH in the range of 4.5 to 7.5. Nitrazine is usually used as the disodium salt.
Use
This test is done to ascertain the nature of fluid in the vagina during pregnancy especially when premature rupture of membranes (PROM) is suspect. This test involves putting a drop of fluid obtained from the vagina onto paper strips containing nitrazine dye. The strips change color depending on the pH of the fluid. The strips will turn blue if the pH is greater than 6.0. A blue strip means it's more likely the membranes have ruptured.
This test, however, can produce false positives. If blood gets in the sample or if there is an infection present, the pH of the vaginal fluid may be higher than normal. Semen also has a higher pH, so recent vaginal intercourse can produce a false reading.
Nitrazine paper is also used to perform a fecal pH test for diagnosing intestinal infections or other digestive problems.
In civil engineering, it is used to determine the carbonatation spread in concrete structures and therefore to assess the state of the rebar's passivation film.
References
Stool tests
PH indicators
Nitrobenzene derivatives
Naphthalenesulfonic acids
Anilines
Obstetric drugs
Obstetrics | Nitrazine | [
"Chemistry",
"Materials_science"
] | 288 | [
"Titration",
"PH indicators",
"Chromism",
"Chemical tests",
"Equilibrium chemistry"
] |
9,361,398 | https://en.wikipedia.org/wiki/Lipofectamine | Lipofectamine or Lipofectamine 2000 is a common transfection reagent, produced and sold by Invitrogen, used in molecular and cellular biology. It is used to increase the transfection efficiency of RNA (including mRNA and siRNA) or plasmid DNA into in vitro cell cultures by lipofection. Lipofectamine contains lipid subunits that can form liposomes in an aqueous environment, which entrap the transfection payload, e.g. DNA plasmids.
Lipofectamine consists of a 3:1 mixture of DOSPA (2,3‐dioleoyloxy‐N‐ [2(sperminecarboxamido)ethyl]‐N,N‐dimethyl‐1‐propaniminium trifluoroacetate) and DOPE, which complexes with negatively charged nucleic acid molecules to allow them to overcome the electrostatic repulsion of the cell membrane. Lipofectamine's cationic lipid molecules are formulated with a neutral co-lipid (helper lipid). The DNA-containing liposomes (positively charged on their surface) can fuse with the negatively charged plasma membrane of living cells, due to the neutral co-lipid mediating fusion of the liposome with the cell membrane, allowing nucleic acid cargo molecules to cross into the cytoplasm for replication or expression.
In order for a cell to express a transgene, the nucleic acid must reach the nucleus of the cell to begin transcription. However, the transfected genetic material may never reach the nucleus in the first place, instead being disrupted somewhere along the delivery process. In dividing cells, the material may reach the nucleus by being trapped in the reassembling nuclear envelope following mitosis. But also in non-dividing cells, research has shown that Lipofectamine improves the efficiency of transfection, which suggests that it additionally helps the transfected genetic material penetrate the intact nuclear envelope.
This method of transfection was invented by Dr. Yongliang Chu.
See also
Lipofection
Transfection
Vectors in gene therapy
Cationic liposome
References
US Active US7479573B2, Yongliang Chu; Malek Masoud & Gulliat Gebeyehu, "Transfection reagents", assigned to Life Technologies Corp and Invitrogen Group
Molecular biology
Gene delivery | Lipofectamine | [
"Chemistry",
"Biology"
] | 507 | [
"Genetics techniques",
"Molecular biology techniques",
"Molecular biology",
"Biochemistry",
"Gene delivery"
] |
11,833,672 | https://en.wikipedia.org/wiki/Polystannane | Polystannanes are organotin compounds with the formula (R2Sn)n. These polymers have been of intermittent academic interest; they are unusual because heavy elements comprise the backbone. Structurally related but better characterized (and more useful) are the polysilanes (R2Si)n.
History and synthesis
Oligo- or polystannanes were first described by Löwig in 1852, only 2 years after Edward Frankland's report on the isolation of the first organotin compounds. Löwig's route involved treating Sn/K and Sn/Na alloys with iodoethane, in the presence of quartz sand which was used to control the reaction rate. Products with elemental compositions close to those of oligo(diethylstannane)s or poly(diethylstannane) were obtained. Cahours obtained similar products and attributed the formation of the so-called "stannic ethyl" to a reaction of the Wurtz type. Already in 1858, "stannic ethyl" was formulated as a polymeric compound denoted with the composition n(SnC4H5). In 1917 Grüttner, who reinvestigated results on hexaethyldistannane (H5C2)3Sn-Sn(C2H5)3 (reported by Ladenburg in 1870), confirmed the presence of Sn-Sn bonds and predicted for the first time that tin could form chain-like compounds. In 1943, it was postulated that "diphenyltin" exists as a type of polymeric material because of its yellow color, and indeed a bathochromic shift of the wavelength at maximum absorption with increasing number of Sn atoms was found later in the case of oligo(dibutylstannane)s comprising up to 15 Sn atoms.
The Wurtz reaction is still used for the preparation of poly(dialkylstannane)s. Treatment of dialkyltin dichlorides with sodium leads to polystannanes of high molar mass, however in low yields and with formation of (cyclic) oligomers. Other efforts to prepare high-molar-mass polystannanes by electrochemical reactions or by catalytic dehydropolymerization of dialkylstannanes (R2SnH2) were also made. Unfortunately, the polymers prepared by those methods were frequently not isolated and typically contained significant fractions of cyclic oligomers.
Alternatively, alkyltin halides react with excess electride in ammonia solutions to give metal alkylstannides. Added alkyltin halides then couple to the stannides to give polystannanes.
Linear polystannanes
Dialkyltin dihydrides (R2SnH2) were reported in 2005 to undergo dehydropolymerization in the presence of Wilkinson's catalyst. This method afforded polystannanes without detectable amounts of cyclic byproducts. The polymers were yellow, with number-average molar masses of 10 to 70 kg/mol and a polydispersity of 2–3. By varying the catalyst concentration, the molar masses of the synthesized polymers could be adjusted. A strong influence of the temperature on the degree of conversion was observed. Determination of the molar mass at different degrees of conversion indicated that polymerization did not proceed according to a statistical condensation mechanism but, likely, by growth onto the catalyst, e.g. by insertion of SnR2-like units.
The poly(dialkylstannane)s were found to be thermotropic and displayed first-order phase transitions from one liquid-crystalline phase into another or directly to the isotropic state, depending on the length of the side groups. More specifically, poly(dibutylstannane) for example showed an endothermic phase transition at ~0 °C from a rectangular to a pure nematic phase, as determined by X-ray diffraction.
Like polysilanes, polystannanes are semi-conductive. Temperature-dependent, time-resolved pulse radiolysis microwave conductivity measurements of poly(dibutylstannane) yielded values of charge-carrier mobilities of 0.1 to 0.03 cm2 V−1 s−1, which are similar to those found for pi-bond-conjugated carbon-based polymers. By partial oxidation of the material with SbF5 conductivities of 0.3 S cm−1 could be monitored.
The liquid-crystalline characteristics of the poly(dialkylstannane)s permitted facile orientation of these macromolecules, for instance, by mechanical shearing or tensile drawing of blends with poly(ethylene). Poly(dialkylstannane)s with short side groups invariably arranged parallel to the external orientation direction, while the polymers with longer side groups had a tendency to order themselves perpendicular to that axis.
References
External links
Fabien Choffat (2007) Polystannane, Doctoral dissertation, Swiss Federal Institute of Technology, Zürich.
Polymers
Inorganic polymers
Conductive polymers
Plastics
Organotin compounds
Tin(II) compounds | Polystannane | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,080 | [
"Inorganic compounds",
"Inorganic polymers",
"Unsolved problems in physics",
"Molecular electronics",
"Polymer chemistry",
"Polymers",
"Amorphous solids",
"Conductive polymers",
"Plastics"
] |
11,840,868 | https://en.wikipedia.org/wiki/Entropy%20power%20inequality | In information theory, the entropy power inequality (EPI) is a result that relates to so-called "entropy power" of random variables. It shows that the entropy power of suitably well-behaved random variables is a superadditive function. The entropy power inequality was proved in 1948 by Claude Shannon in his seminal paper "A Mathematical Theory of Communication". Shannon also provided a sufficient condition for equality to hold; Stam (1959) showed that the condition is in fact necessary.
Statement of the inequality
For a random vector X : Ω → Rn with probability density function f : Rn → R, the differential entropy of X, denoted h(X), is defined to be
h(X) = −∫ f(x) log f(x) dx (with the integral taken over Rn),
and the entropy power of X, denoted N(X), is defined to be
N(X) = (1/(2πe)) e^(2h(X)/n).
In particular, N(X) = |K|^(1/n) when X is normally distributed with covariance matrix K.
Let X and Y be independent random variables with probability density functions in the Lp space Lp(Rn) for some p > 1. Then
N(X + Y) ≥ N(X) + N(Y).
Moreover, equality holds if and only if X and Y are multivariate normal random variables with proportional covariance matrices.
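As an illustrative check (not part of the original statement), the Gaussian case can be verified numerically, since the entropy power of a normal vector reduces to |K|^(1/n); the covariance matrices below are arbitrary assumptions:

```python
# Numerical sanity check of the entropy power inequality for Gaussian vectors.
# For Gaussians, N(X) = |K|^(1/n), so the inequality can be checked directly
# from covariance matrices; equality holds for proportional covariances.

import numpy as np

def entropy_power_gaussian(K):
    """Entropy power of an n-dimensional Gaussian with covariance K: |K|^(1/n)."""
    n = K.shape[0]
    return np.linalg.det(K) ** (1.0 / n)

rng = np.random.default_rng(0)
n = 3

def random_spd():
    """Random symmetric positive-definite covariance (an arbitrary test matrix)."""
    A = rng.normal(size=(n, n))
    return A @ A.T + n * np.eye(n)

KX, KY = random_spd(), random_spd()
# X + Y is Gaussian with covariance KX + KY when X and Y are independent.
lhs = entropy_power_gaussian(KX + KY)
rhs = entropy_power_gaussian(KX) + entropy_power_gaussian(KY)
print(f"N(X+Y) = {lhs:.4f} >= N(X) + N(Y) = {rhs:.4f}: {lhs >= rhs}")

# Equality case: proportional covariance matrices.
KY_prop = 2.5 * KX
lhs_eq = entropy_power_gaussian(KX + KY_prop)
rhs_eq = entropy_power_gaussian(KX) + entropy_power_gaussian(KY_prop)
print(f"Proportional covariances: {lhs_eq:.4f} vs {rhs_eq:.4f}")
```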
Alternative form of the inequality
The entropy power inequality can be rewritten in an equivalent form that does not explicitly depend on the definition of entropy power (see Costa and Cover reference below).
Let X and Y be independent random variables, as above. Then, let X′ and Y′ be independently distributed random variables with Gaussian distributions, such that
h(X′) = h(X)
and
h(Y′) = h(Y).
Then,
h(X + Y) ≥ h(X′ + Y′).
See also
Information entropy
Information theory
Limiting density of discrete points
Self-information
Kullback–Leibler divergence
Entropy estimation
References
Information theory
Probabilistic inequalities
Statistical inequalities | Entropy power inequality | [
"Mathematics",
"Technology",
"Engineering"
] | 348 | [
"Theorems in statistics",
"Telecommunications engineering",
"Applied mathematics",
"Statistical inequalities",
"Theorems in probability theory",
"Computer science",
"Probabilistic inequalities",
"Information theory",
"Inequalities (mathematics)"
] |
11,843,393 | https://en.wikipedia.org/wiki/Clock%20angle%20problem | Clock angle problems are a type of mathematical problem which involve finding the angle between the hands of an analog clock.
Math problem
Clock angle problems relate two different measurements: angles and time. The angle is typically measured in degrees from the mark of number 12 clockwise. The time is usually based on a 12-hour clock.
A method to solve such problems is to consider the rate of change of the angle in degrees per minute. The hour hand of a normal 12-hour analogue clock turns 360° in 12 hours (720 minutes) or 0.5° per minute. The minute hand rotates through 360° in 60 minutes or 6° per minute.
Equation for the angle of the hour hand
θhr = 0.5° × MΣ = 0.5° × (60 × H + M)
where:
θhr is the angle in degrees of the hand measured clockwise from the 12
H is the hour.
M is the minutes past the hour.
MΣ is the number of minutes since 12 o'clock.
Equation for the angle of the minute hand
θmin = 6° × M
where:
θmin is the angle in degrees of the hand measured clockwise from the 12 o'clock position.
M is the minute.
Example
The time is 5:24. The angle in degrees of the hour hand is:
θhr = 0.5° × (60 × 5 + 24) = 162°
The angle in degrees of the minute hand is:
θmin = 6° × 24 = 144°
Equation for the angle between the hands
The angle between the hands can be found using the following formula:
Δθ = |θhr − θmin| = |0.5° × (60 × H + M) − 6° × M| = |30° × H − 5.5° × M|
where
H is the hour
M is the minute
If the angle is greater than 180 degrees then subtract it from 360 degrees.
Example 1
The time is 2:20.
Δθ = |30° × 2 − 5.5° × 20| = |60° − 110°| = 50°
Example 2
The time is 10:16.
Δθ = |30° × 10 − 5.5° × 16| = |300° − 88°| = 212°. Since this is greater than 180°, the angle between the hands is 360° − 212° = 148°.
When are the hour and minute hands of a clock superimposed?
The hour and minute hands are superimposed only when their angle is the same, that is, when
0.5° × (60 × H + M) = 6° × M, which gives M = (60/11) × H ≈ 5.45 × H
where H is an integer in the range 0–11. This gives times of: 0:00, 1:05.45, 2:10.90, 3:16.36, 4:21.81, 5:27.27, 6:32.72, 7:38.18, 8:43.63, 9:49.09,
10:54.54, and 12:00.
(0.45 minutes are exactly 27.27 seconds.)
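The formulas above translate directly into code; the following sketch (function names are illustrative, not from the article) reproduces the worked examples and the superposition times:

```python
# A minimal sketch of the clock-angle formulas above (12-hour analog clock,
# hands measured clockwise from 12; helper names are assumed for illustration).

def hour_hand_angle(hour, minute):
    """Hour hand moves 0.5 degrees per minute: 0.5 * (60*H + M)."""
    return 0.5 * (60 * (hour % 12) + minute)

def minute_hand_angle(minute):
    """Minute hand moves 6 degrees per minute."""
    return 6.0 * minute

def angle_between_hands(hour, minute):
    """Smaller angle between the hands, i.e. |30H - 5.5M| folded into [0, 180]."""
    diff = abs(hour_hand_angle(hour, minute) - minute_hand_angle(minute))
    return min(diff, 360.0 - diff)

# Reproduces the worked examples in the text:
print(hour_hand_angle(5, 24), minute_hand_angle(24))   # 162.0 144.0
print(angle_between_hands(2, 20))                      # 50.0
print(angle_between_hands(10, 16))                     # 148.0

# Times at which the hands are superimposed: 30H = 5.5M  =>  M = 60H/11.
for h in range(12):
    m = 60.0 * h / 11.0
    print(f"{h}:{m:05.2f}")
```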
See also
Clock position
References
External links
https://web.archive.org/web/20100615083701/http://delphiforfun.org/Programs/clock_angle.htm
http://www.ldlewis.com/hospital_clock/ - extensive clock angle analysis
https://web.archive.org/web/20100608044951/http://www.jimloy.com/puzz/clock1.htm
Mathematics education
Elementary mathematics
Elementary geometry
Mathematical problems
Clocks | Clock angle problem | [
"Physics",
"Mathematics",
"Technology",
"Engineering"
] | 534 | [
"Machines",
"Clocks",
"Measuring instruments",
"Physical systems",
"Elementary mathematics",
"Elementary geometry",
"Mathematical problems"
] |
8,696,119 | https://en.wikipedia.org/wiki/Ultraviolet%20photoelectron%20spectroscopy | Ultraviolet photoelectron spectroscopy (UPS) refers to the measurement of kinetic energy spectra of photoelectrons emitted by molecules that have absorbed ultraviolet photons, in order to determine molecular orbital energies in the valence region.
Basic theory
If Albert Einstein's photoelectric law is applied to a free molecule, the kinetic energy (EK) of an emitted photoelectron is given by
EK = hν − I,
where h is the Planck constant, ν is the frequency of the ionizing light, and I is an ionization energy for the formation of a singly charged ion in either the ground state or an excited state. According to Koopmans' theorem, each such ionization energy may be identified with the energy of an occupied molecular orbital. The ground-state ion is formed by removal of an electron from the highest occupied molecular orbital, while excited ions are formed by removal of an electron from a lower occupied orbital.
History
Before 1960, virtually all measurements of photoelectron kinetic energies were for electrons emitted from metals and other solid surfaces. In about 1956, Kai Siegbahn developed X-ray photoelectron spectroscopy (XPS) for surface chemical analysis. This method uses x-ray sources to study energy levels of atomic core electrons, and at the time had an energy resolution of about 1 eV (electronvolt).
Ultraviolet photoelectron spectroscopy (UPS) was pioneered by Feodor I. Vilesov, a physicist at St. Petersburg (Leningrad) State University in Russia (USSR), in 1961 to study the photoelectron spectra of free molecules in the gas phase. The early experiments used monochromatized radiation from a hydrogen discharge and a retarding potential analyzer to measure the photoelectron energies.
The PES was further developed by David W. Turner, a physical chemist at Imperial College in London and then at Oxford University, in a series of publications from 1962 to 1967. As a photon source, he used a helium discharge lamp that emits a wavelength of 58.4 nm (corresponding to an energy of 21.2 eV) in the vacuum ultraviolet region. With this source, Turner's group obtained an energy resolution of 0.02 eV. Turner referred to the method as "molecular photoelectron spectroscopy", now usually "ultraviolet photoelectron spectroscopy" or UPS. As compared to XPS, UPS is limited to energy levels of valence electrons, but measures them more accurately. After 1967, commercial UPS spectrometers became available. One of the latest commercial devices was the Perkin Elmer PS18. For the last twenty years, the systems have been homemade. One of the latest in progress – Phoenix II – is that of the laboratory of Pau, IPREM developed by Dr. Jean-Marc Sotiropoulos.
Application
The UPS measures experimental molecular orbital energies for comparison with theoretical values from quantum chemistry, which was also extensively developed in the 1960s. The photoelectron spectrum of a molecule contains a series of peaks each corresponding to one valence-region molecular orbital energy level. Also, the high resolution allowed the observation of fine structure due to vibrational levels of the molecular ion, which facilitates the assignment of peaks to bonding, nonbonding or antibonding molecular orbitals.
The method was later extended to the study of solid surfaces where it is usually described as photoemission spectroscopy (PES). It is particularly sensitive to the surface region (to 10 nm depth), due to the short range of the emitted photoelectrons (compared to X-rays). It is therefore used to study adsorbed species and their binding to the surface, as well as their orientation on the surface.
A useful result from characterization of solids by UPS is the determination of the work function of the material. An example of this determination is given by Park et al. Briefly, the full width of the photoelectron spectrum (from the highest kinetic energy/lowest binding energy point to the low kinetic energy cutoff) is measured and subtracted from the photon energy of the exciting radiation, and the difference is the work function. Often, the sample is electrically biased negative to separate the low energy cutoff from the spectrometer response.
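A minimal numerical sketch of this procedure follows; the spectrum positions below are assumed, illustrative values rather than data from any cited measurement:

```python
# Hedged sketch of the work-function determination described above: the width
# of the photoelectron spectrum (highest kinetic energy minus the low-energy
# cutoff) is subtracted from the exciting photon energy. The numbers are
# illustrative assumptions, not measured data.

PHOTON_ENERGY_EV = 21.2        # He I line commonly used in UPS
e_kinetic_max_ev = 16.9        # assumed highest-kinetic-energy (Fermi-edge) position
e_cutoff_ev = 0.3              # assumed low-kinetic-energy (secondary-electron) cutoff

spectrum_width_ev = e_kinetic_max_ev - e_cutoff_ev
work_function_ev = PHOTON_ENERGY_EV - spectrum_width_ev
print(f"Spectrum width: {spectrum_width_ev:.2f} eV")
print(f"Work function:  {work_function_ev:.2f} eV")   # 4.60 eV for these assumed values
```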
Gas discharge lines
Outlook
UPS has seen a considerable revival with the increasing availability of synchrotron light sources that provide a wide range of monochromatic photon energies.
See also
Angle resolved photoemission spectroscopy (ARPES)
Photoelectron photoion coincidence spectroscopy (PEPICO)
Time-resolved two-photon photoelectron spectroscopy
References
Emission spectroscopy
Surface science
Electron spectroscopy
Soviet inventions | Ultraviolet photoelectron spectroscopy | [
"Physics",
"Chemistry",
"Materials_science"
] | 932 | [
"Spectrum (physical sciences)",
"Electron spectroscopy",
"Emission spectroscopy",
"Surface science",
"Condensed matter physics",
"Spectroscopy"
] |
8,696,928 | https://en.wikipedia.org/wiki/Sky%20and%20Water%20I | Sky and Water I is a woodcut print by the Dutch artist M. C. Escher first printed in June 1938. The basis of this print is a regular division of the plane consisting of birds and fish. Both prints have the horizontal series of these elements—fitting into each other like the pieces of a jigsaw puzzle—in the middle, transitional portion of the prints. In this central layer the pictorial elements are equal: birds and fish are alternately foreground or background, depending on whether the eye concentrates on light or dark elements. The birds take on an increasing three-dimensionality in the upward direction, and the fish, in the downward direction. But as the fish progress upward and the birds downward they gradually lose their shapes to become a uniform background of sky and water, respectively.
According to Escher: "In the horizontal center strip there are birds and fish equivalent to each other. We associate flying with sky, and so for each of the black birds the sky in which it is flying is formed by the four white fish which encircle it. Similarly swimming makes us think of water, and therefore the four black birds that surround a fish become the water in which it swims."
This print has been used in physics, geology, chemistry, and in psychology for the study of visual perception. In the pictures a number of visual elements unite into a simple visual representation, but separately each forms a point of departure for the elucidation of a theory in one of these disciplines.
See also
Sky and Water II
Tessellation
Sources
M. C. Escher—The Graphic Work; Benedikt-Taschen Publishers.
M. C. Escher—29 Master Prints; Harry N. Abrams, Inc., Publishers.
Locher, J. L. (2000). The Magic of M. C. Escher. Harry N. Abrams, Inc. .
Works by M. C. Escher
1938 works
Woodcuts
Fish in art
Birds in art
Optical illusions | Sky and Water I | [
"Physics"
] | 427 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
8,699,846 | https://en.wikipedia.org/wiki/Gold%E2%80%93aluminium%20intermetallic | Gold–aluminium intermetallic is a type of intermetallic compound of gold and aluminium that usually forms at contacts between the two metals. Gold–aluminium intermetallics have different properties from the individual metals, such as low conductivity and high melting point, depending on their composition. Due to the difference in density between the metals and the intermetallics, the growth of the intermetallic layers causes a reduction in volume and therefore creates gaps in the metal near the interface between gold and aluminium.
The formation of gaps lowers the strength of the joint, which can cause mechanical failure, compounding the problems that the intermetallics cause in such joints. In microelectronics, these properties can cause problems in wire bonding.
The main compounds formed are usually Au5Al2 (white plague) and AuAl2 (purple plague), both of which form at high temperatures; Au5Al2 and AuAl2 can then react further with Au to form the more stable compound Au2Al.
Properties
Au5Al2 has low electrical conductivity and relatively low melting point. Au5Al2's formation at the joint causes increase of electrical resistance, which can lead to electrical failure. Au5Al2 typically forms at 95% of Au and 5% of Al by mass, its melting point is about 575 °C, which is the lowest among the major gold-aluminum intermetallic compounds. AuAl2 is a brittle bright-purple compound, with a composition of about 78.5% Au and 21.5% Al by mass.
AuAl2 is the most thermally stable species of the Au–Al intermetallic compounds, with a melting point of 1060 °C (see phase diagram), which is similar to the melting point of pure gold. AuAl2 can react with Au, therefore is often replaced by Au2Al, a tan-colored substance, which forms at composition of 93% of Au and 7% of Al by mass. It is also a poor conductor and can cause electrical failure of the joint, which can further lead to mechanical failure.
Voiding
At lower temperatures, about 400–450 °C, an interdiffusion process takes place at the junction, leading to the formation of layers of different gold–aluminium intermetallic compounds with different growth rates. Gaps are formed as the denser and faster-growing layers consume the slower-growing layers. This process is known as Kirkendall voiding and leads to both increased electrical resistance and mechanical weakening of the wire bond. When the voids form along the diffusion front, the process is aided by contaminants present in the lattice and is known as Horsting voiding, a process similar to Kirkendall voiding.
See also
Colored gold
Tin whiskers
References
External links
Harvard: Gold Aluminium Intermetallics
Aluminium aurate – purple gold
Corrosion
Gold
Aluminides
Integrated circuits
Intermetallics | Gold–aluminium intermetallic | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 608 | [
"Inorganic compounds",
"Computer engineering",
"Metallurgy",
"Corrosion",
"Electrochemistry",
"Intermetallics",
"Condensed matter physics",
"Alloys",
"Materials degradation",
"Aluminides",
"Integrated circuits"
] |
8,701,085 | https://en.wikipedia.org/wiki/List%20of%20books%20in%20computational%20geometry | This is a list of books in computational geometry.
There are two major, largely nonoverlapping categories:
Combinatorial computational geometry, which deals with collections of discrete objects or defined in discrete terms: points, lines, polygons, polytopes, etc., and algorithms of discrete/combinatorial character are used
Numerical computational geometry, also known as geometric modeling and computer-aided geometric design (CAGD), which deals with modelling of shapes of real-life objects in terms of curves and surfaces with algebraic representation.
Combinatorial computational geometry
General-purpose textbooks
The book is the first comprehensive monograph on the level of a graduate textbook to systematically cover the fundamental aspects of the emerging discipline of computational geometry. It is written by founders of the field and the first edition covered all major developments in the preceding 10 years.
In the aspect of comprehensiveness it was preceded only by the 1984 survey paper, Lee, D, T., Preparata, F. P.: "Computational geometry - a survey". IEEE Trans. on Computers. Vol. 33, No. 12, pp. 1072–1101 (1984). It is focused on two-dimensional problems, but also has digressions into higher dimensions.
The initial core of the book was M. I. Shamos's doctoral dissertation, which yet another pioneer in the field, Ronald Graham, suggested turning into a book.
The introduction covers the history of the field, basic data structures, and necessary notions from the theory of computation and geometry.
The subsequent sections cover geometric searching (point location, range searching), convex hull computation, proximity-related problems (closest points, computation and applications of the Voronoi diagram, Euclidean minimum spanning tree, triangulations, etc.), geometric intersection problems, algorithms for sets of isothetic rectangles
The monograph is a rather advanced exposition of problems and approaches in computational geometry focused on the role of hyperplane arrangements, which are shown to constitute a basic underlying combinatorial-geometric structure in certain areas of the field. The primary target audience are active theoretical researchers in the field, rather than application developers. Unlike most of books in computational geometry focused on 2- and 3-dimensional problems (where most applications of computational geometry are), the book aims to treat its subject in the general multi-dimensional setting.
The textbook provides an introduction to computational geometry from the point of view of practical applications. Starting with an introduction chapter, each of the 15 remaining ones formulates a real application problem, formulates an underlying geometrical problem, and discusses techniques of computational geometry useful for its solution, with algorithms provided in pseudocode. The book treats mostly 2- and 3-dimensional geometry.
The goal of the book is to provide a comprehensive introduction to methods and approaches, rather than the cutting edge of research in the field: the presented algorithms provide transparent and reasonably efficient solutions based on fundamental "building blocks" of computational geometry.
The book consists of the following chapters (which provide both solutions for the topic of the title and its applications): "Computational Geometry (Introduction)" "Line Segment Intersection", "Polygon Triangulation", "Linear Programming", "Orthogonal Range Searching", "Point Location", "Voronoi Diagrams", "Arrangements and Duality", "Delaunay Triangulations", "More Geometric Data Structures", "Convex Hulls", "Binary Space Partitions", "Robot Motion Planning", "Quadtrees", "Visibility Graphs", "Simplex Range Searching".
This book is an interactive introduction to the fundamental algorithms of computational geometry, formatted as an interactive document viewable using software based on Mathematica.
Specialized textbooks and monographs
References
Numerical computational geometry (geometric modelling, computer-aided geometric design)
Monographs
Other
Conferences
Paper collections
"Combinatorial and Computational Geometry", eds. Jacob E. Goodman, János Pach, Emo Welzl (MSRI Publications – Volume 52), 2005, .
32 papers, including surveys and research articles on geometric arrangements, polytopes, packing, covering, discrete convexity, geometric algorithms and their computational complexity, and the combinatorial complexity of geometric objects.
"Surveys on Discrete and Computational Geometry: Twenty Years Later" ("Contemporary Mathematics" series), American Mathematical Society, 2008,
See also
List of important publications in mathematics
References
External links
Computational Geometry Pages
Computational geometry
Computer science books
Computational geometry | List of books in computational geometry | [
"Mathematics"
] | 912 | [
"Computational geometry",
"Computational mathematics"
] |
8,701,191 | https://en.wikipedia.org/wiki/Tetrapod%20%28structure%29 | A tetrapod is a form of wave-dissipating concrete block used to prevent erosion caused by weather and longshore drift, primarily to enforce coastal structures such as seawalls and breakwaters. Tetrapods are made of concrete, and use a tetrahedral shape to dissipate the force of incoming waves by allowing water to flow around rather than against them, and to reduce displacement by interlocking.
Invention
Tetrapods were originally developed in 1950 by Pierre Danel and Paul Anglès d'Auriac of Laboratoire Dauphinois d'Hydraulique (now Artelia) in Grenoble, France, who received a patent for the design. The French invention was named , derived from Greek and , a reference to the tetrahedral shape. Tetrapods were first used at the thermal power station in Roches Noires in Casablanca, Morocco, to protect the sea water intake.
Adoption
Tetrapods have become popular across the world, particularly in Japan; it is estimated that nearly 50 percent of Japan's coastline has been covered or somehow altered by tetrapods and other forms of concrete. Their proliferation on the island of Okinawa, a popular vacation destination in Japan, has made it difficult for tourists to find unaltered beaches and shoreline, especially in the southern half of the island.
Similar designs
See also
References
Further reading
Coastal engineering
Wave-dissipating concrete blocks
Tetrahedra
French inventions | Tetrapod (structure) | [
"Engineering"
] | 477 | [
"Coastal engineering",
"Civil engineering"
] |
8,702,775 | https://en.wikipedia.org/wiki/Zero-forcing%20equalizer | The zero-forcing equalizer is a form of linear equalization algorithm used in communication systems which applies the inverse of the frequency response of the channel. This form of equalizer was first proposed by Robert Lucky.
The zero-forcing equalizer applies the inverse of the channel frequency response to the received signal, to restore the signal after the channel. It has many useful applications. For example, it is studied heavily for IEEE 802.11n (MIMO) where knowing the channel allows recovery of the two or more streams which will be received on top of each other on each antenna. The name zero-forcing corresponds to bringing down the intersymbol interference (ISI) to zero in a noise-free case. This will be useful when ISI is significant compared to noise.
For a channel with frequency response F(f), the zero-forcing equalizer C(f) is constructed by C(f) = 1/F(f). Thus the combination of channel and equalizer gives a flat frequency response and linear phase: F(f)C(f) = 1.
In reality, zero-forcing equalization does not work in most applications, for the following reasons:
Even though the channel impulse response has finite length, the impulse response of the equalizer needs to be infinitely long
At some frequencies the received signal may be weak. To compensate, the magnitude of the zero-forcing filter ("gain") grows very large. As a consequence, any noise added after the channel gets boosted by a large factor and destroys the overall signal-to-noise ratio. Furthermore, the channel may have zeros in its frequency response that cannot be inverted at all. (Gain * 0 still equals 0).
This second item is often the more limiting condition. These problems are addressed in the linear MMSE equalizer by making a small modification to the denominator of C(f): C(f) = 1/(F(f) + k), where k is related to the channel response and the signal SNR.
Algorithm
If the channel response (or channel transfer function) for a particular channel is H(s) then the input signal is multiplied by the reciprocal of it. This is intended to remove the effect of channel from the received signal, in particular the intersymbol interference (ISI).
The zero-forcing equalizer removes all ISI, and is ideal when the channel is noiseless. However, when the channel is noisy, the zero-forcing equalizer will amplify the noise greatly at frequencies f where the channel response H(j2πf) has a small magnitude (i.e. near zeroes of the channel) in the attempt to invert the channel completely. A more balanced linear equalizer in this case is the minimum mean-square error equalizer, which does not usually eliminate ISI completely but instead minimizes the total power of the noise and ISI components in the output.
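As an illustrative sketch (not part of the original article), the following fragment contrasts zero-forcing with an MMSE-style equalizer for an assumed FIR channel with a deep spectral notch; the channel taps, noise level and use of circular convolution are simplifying assumptions:

```python
# Hedged illustration: zero-forcing vs. MMSE-style equalization of an assumed
# FIR channel with a near-null in its frequency response. Circular convolution
# is used so the frequency-domain inverse is exact; taps and noise are arbitrary.

import numpy as np

rng = np.random.default_rng(1)
N = 4096
symbols = rng.choice([-1.0, 1.0], size=N)            # BPSK symbols
channel = np.array([1.0, 0.0, 0.95])                 # assumed channel, near-null at f = 1/4
noise_std = 0.3

H = np.fft.fft(channel, N)                           # channel frequency response F(f)
received = np.real(np.fft.ifft(np.fft.fft(symbols) * H)) + noise_std * rng.normal(size=N)
R = np.fft.fft(received)

# Zero-forcing equalizer: C(f) = 1 / F(f); noise is boosted where |F(f)| is small.
zf_est = np.real(np.fft.ifft(R / H))

# MMSE-style equalizer: regularise the inversion with a noise-dependent term.
snr = 1.0 / noise_std**2
mmse_est = np.real(np.fft.ifft(R * np.conj(H) / (np.abs(H)**2 + 1.0 / snr)))

for name, est in (("zero-forcing", zf_est), ("mmse", mmse_est)):
    ser = np.mean(np.sign(est) != symbols)
    print(f"{name}: symbol error rate = {ser:.3f}")
```

With these assumed values the zero-forcing output shows a noticeably higher symbol error rate, reflecting the noise amplification near the spectral notch described above.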
References
Filter theory | Zero-forcing equalizer | [
"Engineering"
] | 557 | [
"Telecommunications engineering",
"Filter theory"
] |
8,702,779 | https://en.wikipedia.org/wiki/Nanofiltration | Nanofiltration is a membrane filtration process that uses nanometer-sized pores, through which particles smaller than about 1–10 nanometers pass. Nanofiltration membranes have pore sizes of about 1–10 nanometers, smaller than those used in microfiltration and ultrafiltration but slightly bigger than those in reverse osmosis. Membranes used are predominantly polymer thin films. It is used to soften, disinfect, and remove impurities from water, and to purify or separate chemicals such as pharmaceuticals.
Membranes
Membrane materials that are commonly used are polymer thin films such as polyethylene terephthalate or metals such as aluminium. Pore dimensions are controlled by pH, temperature and time during development, with pore densities ranging from 1 to 10^6 pores per cm2.
Membranes made from polyethylene terephthalate (PET) and other similar materials, are referred to as "track-etch" membranes, named after the way the pores on the membranes are made. "Tracking" involves bombarding the polymer thin film with high energy particles. This results in making tracks that are chemically developed into the membrane, or "etched" into the membrane, which are the pores.
Membranes created from metal such as alumina membranes, are made by electrochemically growing a thin layer of aluminum oxide from aluminum in an acidic medium.
Range of applications
Historically, nanofiltration and other membrane technologies used for molecular separation were applied entirely to aqueous systems. The original uses for nanofiltration were water treatment and in particular water softening. Nanofilters "soften" water by retaining scale-forming divalent ions (e.g. Ca2+, Mg2+).
Nanofiltration has been extended into other industries such as milk and juice production as well as pharmaceuticals, fine chemicals, and flavour and fragrance industries.
Advantages and disadvantages
One of the main advantages of nanofiltration as a method of softening water is that it retains calcium and magnesium ions while passing smaller hydrated monovalent ions, so filtration is performed without adding extra sodium ions, as is done in ion exchangers. Many separation processes do not operate at room temperature (e.g. distillation), which greatly increases the cost of the process when continuous heating or cooling is applied. Nanofiltration, by contrast, performs gentle molecular separation that is often not offered by other forms of separation process (such as centrifugation). These are two of the main benefits associated with nanofiltration.
Nanofiltration has the very favorable benefit of being able to process large volumes and continuously produce streams of products. Still, nanofiltration is the least used method of membrane filtration in industry, as the membrane pore sizes are limited to only a few nanometers. For anything smaller, reverse osmosis is used, and for anything larger, ultrafiltration. Ultrafiltration can also be used in cases where nanofiltration can be used, as it is more conventional.
A main disadvantage associated with nanotechnology, as with all membrane filter technology, is the cost and maintenance of the membranes used. Nanofiltration membranes are an expensive part of the process. Repairs and replacement of membranes is dependent on total dissolved solids, flow rate and components of the feed. With nanofiltration being used across various industries, only an estimation of replacement frequency can be used. This causes nanofilters to be replaced a short time before or after their prime usage is complete.
Design and operation
Industrial applications of membranes require hundreds to thousands of square meters of membranes and therefore an efficient way to reduce the footprint by packing them is required. Membranes first became commercially viable when low cost methods of housing in 'modules' were achieved.
Membranes are not self-supporting. They need to be held by a porous support that can withstand the pressures required to operate the NF membrane without hindering its performance. To do this effectively, the module needs to provide a channel to remove the permeate and to provide flow conditions that reduce the phenomenon of concentration polarisation. A good design minimises pressure losses on both the feed side and permeate side and thus energy requirements.
Concentration polarisation
Concentration polarization describes the accumulation of the species being retained close to the surface of the membrane which reduces separation capabilities. It occurs because the particles are convected towards the membrane with the solvent and its magnitude is the balance between this convection caused by solvent flux and the particle transport away from the membrane due to the concentration gradient (predominantly caused by diffusion.) Although concentration polarization is easily reversible, it can lead to fouling of the membrane.
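The balance between convective transport toward the membrane and back-diffusion is often summarised by the classic film-theory model, in which the wall-to-bulk concentration ratio is exp(Jv/k); the following sketch (an assumption-laden illustration, not taken from the article) shows how a larger mass-transfer coefficient, i.e. more shear at the membrane surface, reduces polarisation:

```python
# Hedged sketch of the film-theory picture of concentration polarisation:
# at steady state, convection of solute toward the membrane balances
# back-diffusion, giving C_wall / C_bulk = exp(Jv / k), where Jv is the
# permeate flux and k the mass-transfer coefficient. Values are illustrative.

import math

def polarisation_factor(flux_m_per_s, mass_transfer_coeff_m_per_s):
    """Ratio of wall concentration to bulk concentration, exp(Jv / k)."""
    return math.exp(flux_m_per_s / mass_transfer_coeff_m_per_s)

jv = 15e-6                           # assumed permeate flux, m/s (~54 L m^-2 h^-1)
for k in (10e-6, 30e-6, 100e-6):     # assumed mass-transfer coefficients, m/s
    print(f"k = {k:.0e} m/s -> C_wall/C_bulk = {polarisation_factor(jv, k):.2f}")
```

The trend in these assumed numbers is the rationale behind the flux-enhancing strategies described later in the article: anything that raises the mass-transfer coefficient at the membrane surface lowers the polarisation factor.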
Spiral wound module
Spiral wound modules are the most commonly used style of module and are a 'standardized' design, available in a range of standard diameters (2.5", 4" and 8") to fit standard pressure vessels that can hold several modules in series connected by O-rings. The module uses flat sheets wrapped around a central tube. The membranes are glued along three edges over a permeate spacer to form 'leaves'. The permeate spacer supports the membrane and conducts the permeate to the central permeate tube. Between each leaf, a mesh-like feed spacer is inserted. The purpose of the mesh-like spacer is to provide a hydrodynamic environment near the surface of the membrane that discourages concentration polarisation. Once the leaves have been wound around the central tube, the module is wrapped in a casing layer and caps are placed on the ends of the cylinder to prevent 'telescoping' that can occur under high flow rate and pressure conditions.
Tubular module
Tubular modules look similar to shell and tube heat exchangers, with bundles of tubes whose active membrane surface is on the inside. Flow through the tubes is normally turbulent, ensuring low concentration polarisation but also increasing energy costs. The tubes can either be self-supporting or supported by insertion into perforated metal tubes. This module design is limited for nanofiltration by the pressure the tubes can withstand before bursting, which limits the maximum flux possible. Due to both the high energy operating costs of turbulent flow and the limiting burst pressure, tubular modules are more suited to 'dirty' applications where feeds contain particulates, such as filtering raw water to gain potable water in the Fyne process. The membranes can be easily cleaned through a 'pigging' technique in which foam balls are squeezed through the tubes, scouring the caked deposits.
Flux enhancing strategies
These strategies work to reduce the magnitude of concentration polarisation and fouling. There is a range of techniques available however the most common is feed channel spacers as described in spiral wound modules. All of the strategies work by increasing eddies and generating a high shear in the flow near the membrane surface. Some of these strategies include vibrating the membrane, rotating the membrane, having a rotor disk above the membrane, pulsing the feed flow rate and introducing gas bubbling close to the surface of the membrane.
Characterisation
Performance parameters
Retention of both charged and uncharged solutes, together with permeation measurements, can be categorised as performance parameters, since the performance of a membrane under operating conditions is based on the ratio of solute retained to solute permeated through the membrane.
For charged solutes, the ionic distribution of salts near the membrane-solution interface plays an important role in determining the retention characteristic of a membrane. If the charge of the membrane and the composition and concentration of the solution to be filtered is known, the distribution of various salts can be found. This in turn can be combined with the known charge of the membrane and the Gibbs–Donnan effect to predict the retention characteristics for that membrane.
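As a rough illustration of the Donnan effect just described, the sketch below solves the ideal Donnan equilibrium for a 1:1 salt at a membrane carrying a fixed negative charge. The fixed-charge density and feed concentrations are made-up example values, and activity coefficients are ignored, so this is a minimal sketch rather than a model of any real membrane.

```python
import math

def donnan_coion_conc(c_bulk, fixed_charge):
    """Ideal Donnan equilibrium for a 1:1 salt (concentrations in mol/L).
    Electroneutrality in the membrane: c_plus = c_minus + X
    Donnan condition:                  c_plus * c_minus = c_bulk**2
    Returns the co-ion (anion) concentration just inside the membrane."""
    x = fixed_charge
    return (-x + math.sqrt(x * x + 4.0 * c_bulk ** 2)) / 2.0

for c in (0.001, 0.01, 0.1):
    c_in = donnan_coion_conc(c, fixed_charge=0.1)
    print(f"bulk {c} M -> co-ion {c_in:.5f} M, exclusion {1 - c_in / c:.0%}")
# Exclusion (and hence salt rejection) weakens as the feed concentration
# approaches the membrane's fixed charge density.
```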
Uncharged solutes cannot be characterised simply by molecular weight cut-off (MWCO), although in general an increase in molecular weight or solute size leads to an increase in retention. The charge and structure of the solute, and the solution pH, also influence the retention characteristics.
Morphology parameters
The morphology of a membrane is usually established by microscopy. Atomic force microscopy (AFM) is one method used to characterise the surface roughness of a membrane by passing a small sharp tip (<100 Å) across the surface of a membrane and measuring the resulting van der Waals force between the atoms at the end of the tip and the surface. This is useful as a direct correlation between surface roughness and colloidal fouling has been developed. Correlations also exist between fouling and other morphology parameters, such as hydrophobicity, showing that the more hydrophobic a membrane is, the more prone to fouling it is. See membrane fouling for more information.
Methods to determine the porosity of porous membranes have also been found via permporometry, making use of differing vapour pressures to characterise the pore size and pore size distribution within the membrane. Initially all pores in the membrane are completely filled with a liquid and as such no permeation of a gas occurs, but after reducing the relative vapour pressure some gaps will start to form within the pores as dictated by the Kelvin equation. Polymeric (non-porous) membranes cannot be subjected to this methodology as the condensable vapour should have a negligible interaction within the membrane.
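The Kelvin-equation step used in permporometry can be sketched as below. The wetting-liquid properties are placeholder water-like values and complete wetting (zero contact angle) is assumed, so this is only an illustration of the relation, not a calibrated pore-size calculation.

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def kelvin_radius(rel_pressure, surface_tension, molar_volume, T=293.15):
    """Pore radius (m) that just empties at relative vapour pressure p/p0,
    assuming a cylindrical pore and zero contact angle:
    r = -2 * gamma * Vm / (R * T * ln(p/p0))."""
    return -2.0 * surface_tension * molar_volume / (R * T * math.log(rel_pressure))

for p_rel in (0.5, 0.8, 0.95):
    r = kelvin_radius(p_rel, surface_tension=0.072, molar_volume=18e-6)
    print(f"p/p0 = {p_rel}: pore radius ~ {r * 1e9:.1f} nm")
```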
Solute transport and rejection
Compared with membranes having larger or smaller pore sizes, the passage of solutes through nanofiltration membranes is significantly more complex.
Because of the pore sizes, there are three modes of transport of solutes through the membrane. These include 1) diffusion (molecule travel due to concentration potential gradients, as seen through reverse osmosis membranes), 2) convection (travel with flow, like in larger pore size filtration such as microfiltration), and 3) electromigration (attraction or repulsion from charges within and near the membrane).
Additionally, the exclusion mechanisms in nanofiltration are more complex than in other forms of filtration. Most filtration systems operate solely by size (steric) exclusion, but at the small length scales seen in nanofiltration, important effects also include surface charge and hydration (the solvation shell). The exclusion due to hydration is referred to as dielectric exclusion, a reference to the dielectric constants (energies) associated with a particle's presence in solution versus within a membrane substrate. Solution pH strongly impacts surface charge, providing a way to understand and better control rejection.
The transport and exclusion mechanisms are heavily influenced by membrane pore size, solvent viscosity, membrane thickness, solute diffusivity, solution temperature, solution pH, and membrane dielectric constant. The pore size distribution is also important. Modeling rejection accurately for NF is very challenging. It can be done with applications of the Nernst–Planck equation, although a heavy reliance on fitting parameters to experimental data is usually required.
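To make the three transport modes concrete, the sketch below evaluates the terms of one common form of the extended Nernst–Planck flux expression used in NF modelling. The form of the equation is standard, but every number here is purely illustrative; the hindrance factor, gradients and concentration are assumptions, not data for any particular membrane.

```python
F = 96485.0  # Faraday constant, C/mol
R = 8.314    # gas constant, J mol^-1 K^-1

def extended_nernst_planck(D, c, dc_dx, z, dpsi_dx, Kc, Jv, T=298.15):
    """Flux contributions (mol m^-2 s^-1) of an ion inside a pore:
    diffusion, electromigration and convection."""
    diffusion = -D * dc_dx
    electromigration = -(z * c * D * F / (R * T)) * dpsi_dx
    convection = Kc * c * Jv
    return diffusion, electromigration, convection

diff, mig, conv = extended_nernst_planck(
    D=1e-9, c=10.0, dc_dx=-5e4, z=-2, dpsi_dx=1e4, Kc=0.8, Jv=1e-5)
print(f"diffusion={diff:.2e}, electromigration={mig:.2e}, convection={conv:.2e}")
```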
In general, charged solutes are much more effectively rejected in NF than uncharged solutes, and multivalent solutes such as divalent ions (valence of 2) experience very high rejection.
Typical figures for industrial applications
Keeping in mind that NF is usually part of a composite system for purification, a single unit is chosen based on the design specifications for the NF unit. For drinking water purification many commercial membranes exist, coming from chemical families having diverse structures, chemical tolerances and salt rejections.
NF units in drinking water purification range from extremely low salt rejection (<5% in 1001A membranes) to almost complete rejection (99% in 8040-TS80-TSA membranes). Flow rates range from 25 to 60 m3/day for each unit, so commercial filtration requires multiple NF units in parallel to process large quantities of feed water. The pressures required in these units are generally between 4.5 and 7.5 bar.
For seawater desalination, NF is typically combined with reverse osmosis in an NF–RO system. Because NF permeate is rarely clean enough to be used as the final product for drinking water and other water purification, it is commonly used as a pre-treatment step for reverse osmosis (RO).
Post-treatment
As with other membrane-based separations such as ultrafiltration, microfiltration and reverse osmosis, post-treatment of either the permeate or the retentate stream (depending on the application) is a necessary stage in industrial NF separation prior to commercial distribution of the product. The choice and order of unit operations employed in post-treatment depend on water quality regulations and the design of the NF system. Typical NF water purification post-treatment stages include aeration, disinfection and stabilisation.
Aeration
A Polyvinyl chloride (PVC) or fibre-reinforced plastic (FRP) degasifier is used to remove dissolved gases such as carbon dioxide and hydrogen sulfide from the permeate stream. This is achieved by blowing air in a countercurrent direction to the water falling through packing material in the degasifier. The air effectively strips the unwanted gases from the water.
Disinfection and stabilisation
The permeate water from an NF separation is demineralised and may be prone to large changes in pH, thus posing a substantial risk of corrosion in piping and other equipment components. To increase the stability of the water, alkaline chemicals such as lime and caustic soda are added. Furthermore, disinfectants such as chlorine or chloramine are added to the permeate, as well as phosphate or fluoride corrosion inhibitors in some cases.
Research trends
Challenges in nanofiltration (NF) technology include minimising membrane fouling and reducing energy requirements. Thin film composite membranes (TFC), which consist of a number of extremely thin selective layers interfacially polymerised over a microporous substrate, have had commercial success in industrial membrane applications. Electrospun nanofibrous membrane layers (ENMs) enhance permeate flux.
Energy-efficient alternatives to the commonly used spiral wound arrangement are hollow fibre membranes, which require less pre-treatment. Titanium dioxide nanoparticles have been used to minimise membrane fouling.
See also
References
External links
Project ETAP-ERN, that uses renewable energies for desalinization.
Nano based methods to improve water quality - Hawk's Perch Technical Writing, LLC
Nanotechnology
Water treatment
Filters
Water desalination
Membrane technology | Nanofiltration | [
"Chemistry",
"Materials_science",
"Engineering",
"Environmental_science"
] | 2,946 | [
"Water desalination",
"Separation processes",
"Water treatment",
"Chemical equipment",
"Filters",
"Materials science",
"Water pollution",
"Membrane technology",
"Filtration",
"Environmental engineering",
"Water technology",
"Nanotechnology"
] |
8,704,223 | https://en.wikipedia.org/wiki/Outline%20of%20chemical%20engineering | The following outline is provided as an overview of and topical guide to chemical engineering:
Chemical engineering – deals with the application of physical science (e.g., chemistry and physics), and life sciences (e.g., biology, microbiology and biochemistry) with mathematics and economics, to the process of converting raw materials or chemicals into more useful or valuable forms. In addition to producing useful materials, modern chemical engineering is also concerned with pioneering valuable new materials and techniques – such as nanotechnology, fuel cells and biomedical engineering.
Essence of chemical engineering
Math
Chemistry
Physics
Fluid Mechanics
Chemical Reaction Engineering
Thermodynamics
Chemical Thermodynamics
Engineering Mechanics
Fluid Dynamics
Heat Transfer
Mass Transfer
Transport Phenomena
Green Chemistry and Sustainability
Process Control
Process Instrumentation
Process Safety
Unit Operation
Process Design
Chemical Process Modeling and Simulation
Engineering Economics
Branches of chemical engineering
Biochemical engineering
Biomedical engineering
Biotechnology
Ceramics
Chemical process modeling
Chemical Technologist
Chemical reactor
Chemical reaction engineering
Distillation Design
Electrochemistry
Fluid dynamics
Food engineering
Heat transfer
Mass transfer
Materials science
Microfluidics
Nanotechnology
Natural environment
Plastics engineering
Polymer engineering
Process control
Process design (chemical engineering)
Separation processes (see also: separation of mixture)
Crystallization processes
Distillation processes
Membrane processes
Semiconductors
Thermodynamics
Transport phenomena
Unit operations
Unit Operations of Chemical Engineering
History of chemical engineering
History of chemical engineering
Batch production
General chemical engineering concepts
Chemical engineer
Chemical reaction
Distillation Design
Fluid mechanics
Heat transfer
Mass transfer and equilibrium stages
Operations involving particulate solids.
Process design
Transport Phenomena
Unit operations
Polymerization
3D Plant Design
FEED
Leaders in chemical engineering
List of chemical engineers
See also
Outline of chemistry
References
External links
Computer Aids for Chemical Engineering Education (CACHE)
Engineering Learning Resources Wiki
What is a Chemical Engineer?
Chemical Engineers' Resource Page
History of Chemical Engineering Timeline
American Institute of Chemical Engineers (USA)
Institution of Chemical Engineers (UK)
Canadian Society for Chemical Engineers
Brazilian Association of Chemical Engineering (BRA)
Engineers Australia (AUS)
Chemical Engineering Information -Turkey (TR)
Chemical Engineering Information Exchange)
Chemical engineering
Chemical engineering
Chemical engineering | Outline of chemical engineering | [
"Chemistry",
"Engineering"
] | 411 | [
"Chemical engineering",
"nan"
] |
5,516,588 | https://en.wikipedia.org/wiki/Glaze3D | Glaze3D was a family of graphics cards announced by BitBoys Oy on August 2, 1999, that would have produced substantially better performance than other consumer products available at the time. The family, which would have come in the Glaze3D 1200, Glaze3D 2400 and Glaze3D 4800 models, was supposed to offer full support for DirectX 7, OpenGL 1.2, AGP 4×, 4× anisotropic filtering, full-screen anti-aliasing and a host of other technologies not commonly seen at the time. The 1.5 million gate GPU would have been fabricated by Infineon on a 0.2 μm eDRAM process, later to be reduced to 0.17 μm with a minimum of 9 MB of embedded DRAM and 128 to 512 MB of external SDRAM. The maximum supported video resolution was 2048×1536 pixels.
Development history
The Glaze3D family of cards were developed in several generations, beginning with the original Glaze3D "400" with multi-channel RDRAM instead of internal eDRAM. This was offered only as IP but with no takers. Bitboys revised the design and decided to have it manufactured themselves, in cooperation with Infineon Technologies, the chip fabrication arm of Siemens. They came up with a new Glaze3D pitched for release in Q1, 2000. The card promised extremely high performance compared to contemporary consumer GPUs. As bug-hunting, validation and manufacturing problems delayed the launch, new features became necessary and a DX7 variant with built-in hardware Transform & Lighting was announced, but never appeared.
The GPU was later redesigned under a new codename, Axe, to take advantage of DirectX 8 and to keep pace with strengthening competition. The new version sported such features as an additional 3 MB of eDRAM, proprietary Matrix Antialiasing and a vastly improved fillrate, as well as offering a programmable vertex shader and a widened internal memory bus. The new card was to have been released as Avalanche3D by the end of 2001.
The third development, codenamed Hammer, started development as Axe lost viability toward the end of 2001. This new card was to be a high-end DirectX 9 part, offering new features such as occlusion culling, improved rendering performance and various other innovations. This version, like the ones before it, never shipped commercially.
Bitboys turned to mobile graphics and developed an accelerator licensed and probably used by at least one flat panel display manufacturer, although it was intended and designed primarily for higher-end handhelds. Later, ATI acquired Bitboys as an additional research and development unit, so following AMD's acquisition of ATI, Bitboys was owned by AMD as of 2008. In 2009, the Bitboys unit was transferred to Qualcomm.
Specifications
Glaze3D chip
Infineon on a 0.2 μm eDRAM process
Compatible with OpenGL and DirectX
Quad-pixel pipeline at 150 MHz
4.5 million Triangles
10 million Triangles with multi-chip
1.5 million logic gate
130 mm2 die size
304 pin BGA
Thor Geometry processor
PCI or AGP 2X/4X
Fillrate
1.2 GigaTextel/s
4.8 GigaTextel/s with multi-chip
0.6 GigaTextel/s (Dual textured)
2.4 GigaTextel/s with multi-chip
Memory
Embedded RAM
9 MB Embedded framebuffer memory
4 module of 2.25 MB with 3 bank each
150 MHz
9.6 GB/s memory bandwidth
512 bit interface
External RAM
Up to 128 MB max
Texture cache
16 KB for even mipmap and surface texture
8 KB for odd mipmap and lightmap
Two-way associative
Performance claims
The Glaze3D family was well known for the bold performance claims that were associated with it. The low-end 1200 model was purported to achieve a fillrate of 1.2 billion texels per second, with a geometry throughput of 15 million triangles per second. Most importantly, the card was originally claimed to achieve over 200 frames per second in id Software's Quake III Arena at maximum visual quality.
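The headline texel figures quoted above and in the specification list can be reproduced with simple arithmetic, assuming the quad-pixel pipeline applied two textures per pixel per clock; that pipeline arrangement is an assumption used here only to show how the quoted numbers fit together, not a confirmed detail of the design.

```python
pipelines = 4            # "Quad-pixel pipeline"
clock_hz = 150e6         # 150 MHz core clock
textures_per_pixel = 2   # assumed dual texturing per pass

texel_rate = pipelines * clock_hz * textures_per_pixel
print(texel_rate)            # 1.2e9 -> the claimed 1.2 GigaTexel/s
print(texel_rate * 4)        # 4.8e9 -> the claimed multi-chip 4.8 GigaTexel/s
print(pipelines * clock_hz)  # 6.0e8 -> 0.6 G dual-textured pixels/s
```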
The 1200 model's claimed specifications would place it as the rough equivalent of the GeForce FX 5200 Ultra or Radeon 9200 Pro (very low performance GPUs of 2002 vintage), while its claimed performance would place it at the same level as the GeForce 3 Ti 500 or Radeon 8500 (high-end GPUs from 2000 to 2001). To compound matters, the cards' specifications were later updated to nearly double their original performance levels.
While the Glaze3D 1200 was supposed to achieve unheard-of performance in video games, it was claimed that the 2400 and 4800 models would each be substantially more powerful in turn. Using two and four GPU configurations respectively, and including an additional geometry accelerator on the 4800, the higher-end Glaze3D cards were to be aimed at the very highest end of the video-gaming market.
See also
ATI Technologies
Nvidia
References
External links
Glaze3D Announced (link expired as of 09.2019).
PDF version of a presentation by Petri Norlund, Chief Architect at BitBoys Oy in 1999.
BitBoys at Siggraph - analysis of the Glaze3D cards. (link expired as of 09.2019).
A Look Inside BitBoys - a detailed description of the development history of Glaze3D. (archived).
Vaporware
Graphics cards | Glaze3D | [
"Technology"
] | 1,120 | [
"Computer industry",
"Vaporware"
] |
5,517,556 | https://en.wikipedia.org/wiki/Extension%20topology | In topology, a branch of mathematics, an extension topology is a topology placed on the disjoint union of a topological space and another set. There are various types of extension topology, described in the sections below.
Extension topology
Let X be a topological space and P a set disjoint from X. Consider in X ∪ P the topology whose open sets are of the form A ∪ Q, where A is an open set of X and Q is a subset of P.
The closed sets of X ∪ P are of the form B ∪ Q, where B is a closed set of X and Q is a subset of P.
For these reasons this topology is called the extension topology of X plus P, with which one extends to X ∪ P the open and the closed sets of X. As subsets of X ∪ P the subspace topology of X is the original topology of X, while the subspace topology of P is the discrete topology. As a topological space, X ∪ P is homeomorphic to the topological sum of X and P, and X is a clopen subset of X ∪ P.
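For a finite example, the construction above can be checked mechanically. The sketch below builds the extension topology of a two-point space X plus a two-point set P and verifies the topology axioms and the discreteness of the subspace topology on P; the particular sets chosen are arbitrary illustrations.

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

X = frozenset({1, 2})
T_X = {frozenset(), frozenset({1}), X}   # a topology on X
P = frozenset({"a", "b"})

# Extension topology on X ∪ P: all sets A ∪ Q with A open in X and Q ⊆ P
T_ext = {A | Q for A in T_X for Q in powerset(P)}

def is_topology(space, opens):
    if frozenset() not in opens or frozenset(space) not in opens:
        return False
    # for a finite space, closure under pairwise unions and intersections suffices
    return all(a & b in opens and a | b in opens for a in opens for b in opens)

print(is_topology(X | P, T_ext))                   # True
print({s & P for s in T_ext} == set(powerset(P)))  # True: P carries the discrete topology
```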
If Y is a topological space and R is a subset of Y, one might ask whether the extension topology of Y – R plus R is the same as the original topology of Y, and the answer is in general no.
Note the similarity of this extension topology construction and the Alexandroff one-point compactification, in which case, having a topological space X which one wishes to compactify by adding a point ∞ in infinity, one considers the closed sets of X ∪ {∞} to be the sets of the form K, where K is a closed compact set of X, or B ∪ {∞}, where B is a closed set of X.
Open extension topology
Let X be a topological space and P a set disjoint from X. The open extension topology of X plus P consists of the open sets of X together with the sets of the form X ∪ Q, where Q is a subset of P; this collection is a topology on X ∪ P. The subspace topology of X is the original topology of X, while the subspace topology of P is the discrete topology.
The closed sets in X ∪ P are of the form Q, where Q is a subset of P, or B ∪ P, where B is a closed set of X. Note that P is closed in X ∪ P and X is open and dense in X ∪ P.
If Y is a topological space and R is a subset of Y, one might ask whether the open extension topology of Y – R plus R is the same as the original topology of Y, and the answer is in general no.
Note that the open extension topology of X ∪ P is smaller than the extension topology of X ∪ P.
Assuming X and P are not empty to avoid trivialities, here are a few general properties of the open extension topology:
X is dense in X ∪ P.
If P is finite, X ∪ P is compact. So X ∪ P is a compactification of X in that case.
X ∪ P is connected.
If P has a single point, X ∪ P is ultraconnected.
For a set Z and a point p in Z, one obtains the excluded point topology construction by considering in Z the discrete topology and applying the open extension topology construction to Z – {p} plus p.
Closed extension topology
Let X be a topological space and P a set disjoint from X. Consider in X ∪ P the topology whose closed sets are of the form X ∪ Q, where Q is a subset of P, or B, where B is a closed set of X.
For this reason this topology is called the closed extension topology of X plus P, with which one extends to X ∪ P the closed sets of X. As subsets of X ∪ P the subspace topology of X is the original topology of X, while the subspace topology of P is the discrete topology.
The open sets of X ∪ P are of the form Q, where Q is a subset of P, or A ∪ P, where A is an open set of X. Note that P is open in X ∪ P and X is closed in X ∪ P.
If Y is a topological space and R is a subset of Y, one might ask whether the closed extension topology of Y – R plus R is the same as the original topology of Y, and the answer is in general no.
Note that the closed extension topology of X ∪ P is smaller than the extension topology of X ∪ P.
For a set Z and a point p in Z, one obtains the particular point topology construction by considering in Z the discrete topology and applying the closed extension topology construction to Z – {p} plus p.
Notes
Works cited
Topological spaces
Topology | Extension topology | [
"Physics",
"Mathematics"
] | 888 | [
"Mathematical structures",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
5,518,314 | https://en.wikipedia.org/wiki/Association%20for%20Computer%20Aided%20Design%20in%20Architecture | The Association for Computer Aided Design In Architecture (ACADIA) is a 501(c)(3) non-profit organization active in the area of computer-aided architectural design (CAAD).
Mission statement
Begun in 1981, the organization's objectives are recorded in its bylaws:
"ACADIA was formed for the purpose of facilitating communication and information exchange regarding the use of computers in architecture, planning and building science. A particular focus is education and the software, hardware and pedagogy involved in education."
"The organization is also committed to the research and development of computer aides that enhance design creativity, rather than simply production, and that aim at contributing to the construction of humane physical environments."
Membership
Membership is open to anyone who subscribes to the objectives of the organization, including architects, educators, and software developers, whether resident in North America or not. An online membership registration form and directory is available via the organization.
The organization is primarily governed by the elected Board of Directors. The organization is led by the elected President, who presides over Board of Directors meetings, but does not vote except in the case of a tie.
Presidents (elected)
Activities
Annual conference
ACADIA sponsors an annual national conference, held in the autumn of each year at a different site in North America. Papers for the conferences undergo extensive blind review before being accepted for presentation (and publication). Membership is not a prerequisite for submission of a paper.
Proceedings
Each year the conference papers are gathered into a proceedings publication which is distributed to members, and available to the public via the open access database CumInCAD.
Awards
Started in 1998, ACADIA Awards of Excellence are "the highest award that can be achieved in the field of architectural computing". The awards are given in areas of practice, teaching, research and service, with at most one award in each category per year. Past awards have recognized various significant contributors to the field of architectural computing.
The current awards given annually or biannually are the Lifetime Achievement Award, the Digital Practice Award of Excellence, the Innovative Academic Program Award of Excellence, the Innovative Research Award of Excellence, the Society Award for Leadership, and the Teaching Award of Excellence.
Lifetime Achievement Award
Innovative Research Award of Excellence
Digital Practice Award of Excellence
Society Award for Leadership
Innovative Academic Program Award of Excellence
History
ACADIA was founded in 1981 by some of the pioneers in the field of design computation including Bill Mitchell, Chuck Eastman, and Chris Yessios. Since then, ACADIA has hosted over 40 conferences across North America and has grown into a strong network of academics and professionals in the design computation field.
Related organizations
Sister organizations
There are four sister organizations around the world to provide a more accessible regional forum for discussion of computing and design. The major ones are
CAADRIA - The Association for Computer Aided Architectural Design in Asia, since 1996.
SIGraDi - Iberoamerican Society of Digital Graphics, since 1997.
ASCAAD - The Arab Society for Computer Aided Architectural Design, since 2001.
eCAADe - The Association for Education and Research in Computer-Aided Architectural Design in Europe.
Other related organizations
CAAD Futures - Computer Aided Architectural Design Futures, since 1985.
CUMINCAD - The Cumulative Index of Computer Aided Architectural Design, with public CumInCAD records available via an Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) feed and records are available via multiple bibliographic archives and citation indexes online.
References
External links
Association for Computer Aided Design In Architecture
Information technology organizations based in North America
Architectural design
Non-profit architecture organizations based in the United States
Charities based in North Dakota | Association for Computer Aided Design in Architecture | [
"Engineering"
] | 737 | [
"Design",
"Architectural design",
"Architecture"
] |
5,518,587 | https://en.wikipedia.org/wiki/W.%20E.%20P.%20Duncan | Wilfred Eben Pinkerton Duncan (1897 – 28 January 1977) was an important figure in the early period of the Toronto Transit Commission's history.
He was born in Glasgow, Scotland, and graduated with a B.Sc. degree in engineering from Glasgow University. He emigrated to Canada and worked from 1910 to 1914 in the construction department of the Canadian Pacific Railway. Between 1915 and 1919 he served overseas in the Great War with the Canadian Expeditionary Force and the Royal Engineers, attaining the rank of Major. After the war he worked as a construction engineer in Toronto. He joined the Toronto Transportation Commission in 1921, and served in various engineering roles. By 1945 he was the TTC's Chief Engineer, and he became General Manager, the senior staff position, in 1952. In 1959, when the senior position was split in two, he became General Manager – Subway Construction, while John G. Inglis assumed the role of General Manager - Operations. Duncan retired in 1961 but remained active as a General Consultant to the TTC until the opening of the University Subway in 1963.
He was instrumental in the growth of the system and was in charge of the TTC during the building of the Yonge Subway.
The Duncan Shops, a heavy bus maintenance facility at the TTC's Hillcrest Complex, is named in his honour.
References
TTC Coupler, September 1952 Vol 27 No 9
TTC Coupler, March 1961 Vol 36 No 3
TTC Coupler, March 1977 Vol 52 No 3
Specific
1897 births
1977 deaths
Engineers from Glasgow
Alumni of the University of Glasgow
Canadian civil engineers
Toronto Transit Commission general managers
Scottish emigrants to Canada
Royal Engineers | W. E. P. Duncan | [
"Engineering"
] | 330 | [
"Civil engineering",
"Civil engineering stubs"
] |
5,518,588 | https://en.wikipedia.org/wiki/1-Chloro-9%2C10-bis%28phenylethynyl%29anthracene | 1-Chloro-9,10-bis(phenylethynyl)anthracene is a fluorescent dye used in lightsticks. It emits yellow-green light, used in 30-minute high-intensity Cyalume sticks.
See also
9,10-Bis(phenylethynyl)anthracene
2-Chloro-9,10-bis(phenylethynyl)anthracene
References
Fluorescent dyes
Organic semiconductors
Anthracenes
Alkyne derivatives
Chloroarenes | 1-Chloro-9,10-bis(phenylethynyl)anthracene | [
"Chemistry"
] | 115 | [
"Semiconductor materials",
"Molecular electronics",
"Organic semiconductors"
] |
5,520,917 | https://en.wikipedia.org/wiki/Multimedia%20over%20Coax%20Alliance | The Multimedia over Coax Alliance (MoCA) is an international standards consortium that publishes specifications for networking over coaxial cable. The technology was originally developed to distribute IP television in homes using existing cabling, but is now used as a general-purpose Ethernet link where it is inconvenient or undesirable to replace existing coaxial cable with optical fiber or twisted pair cabling.
MoCA 1.0 was approved in 2006, MoCA 1.1 in April 2010, MoCA 2.0 in June 2010, and MoCA 2.5 in April 2016. The most recently released version of the standard, MoCA 3.0, supports speeds of up to 10 Gbit/s. This technology is not yet available to customers.
Membership
The Alliance currently has 45 members including pay TV operators, OEMs, CE manufacturers and IC vendors.
MoCA's board of directors consists of Arris, Comcast, Cox Communications, DirecTV, Echostar, Intel, InCoax, MaxLinear and Verizon.
Technology
Within the scope of the Internet protocol suite, MoCA is a protocol that provides the link layer. In the 7-layer OSI model, it provides definitions within the data link layer (layer 2) and the physical layer (layer 1). DLNA approved of MoCA as a layer 2 protocol. A MoCA network can contain up to 16 nodes for MoCA 1.1 and higher, with a maximum of 8 for MoCA 1.0. The network provides a shared-medium, half-duplex link between all nodes using time-division multiplexing; within each timeslot, any pair of nodes communicates directly with each other using the highest mutually-supported version of the standard.
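The pairwise link negotiation described above can be illustrated with a small sketch. The version names and net throughput figures come from the "Versions" section below, but the negotiation code itself is only an illustration and is not taken from the MoCA specification.

```python
# Net throughputs per specification version (Mbit/s), as listed under "Versions"
NET_RATE_MBPS = {"1.0": 135, "1.1": 175, "2.0": 1000, "2.5": 2500}

def link_version(versions_a, versions_b):
    """Return the highest specification version supported by both nodes."""
    common = set(versions_a) & set(versions_b)
    return max(common, key=lambda v: NET_RATE_MBPS[v]) if common else None

legacy_set_top_box = ["1.0", "1.1"]
new_router = ["1.1", "2.0", "2.5"]
v = link_version(legacy_set_top_box, new_router)
print(v, NET_RATE_MBPS[v], "Mbit/s")  # 1.1 175 Mbit/s
```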
Versions
MoCA 1.0 The first version of the standard, MoCA 1.0, was ratified in 2006 and supports transmission speeds of up to 135 Mb/s.
MoCA 1.1 MoCA 1.1 provides 175 Mbit/s net throughputs (275 Mbit/s PHY rate) and operates in the 500 to 1500 MHz frequency range.
MoCA 2.0 MoCA 2.0 offers actual throughputs (MAC rate) up to 1 Gbit/s. Operating frequency range is 500 to 1650 MHz. Packet error rate is 1 packet error in 100 million. MoCA 2.0 also offers lower power modes of sleep and standby and is backward compatible with MoCA 1.1. In March 2017, SCTE/ISBE society and MoCA consortium began creating a new "standards operational practice" (SCTE 235) to provide MoCA 2.0 with DOCSIS 3.1 interoperability. Interoperability is necessary because both MoCA 2.0 and DOCSIS 3.1 may operate in the frequency range above 1 GHz. The standard "addresses the need to prevent degradation or failure of signals due to a shared frequency range above 1 GHz".
MoCA 2.5 MoCA 2.5 (introduced April 13, 2016) offers actual data rates up to 2.5 Gbit/s, continues to be backward compatible with MoCA 2.0 and MoCA 1.1, and adds MoCA protected setup (MPS), Management Proxy, Enhanced Privacy, Network wide Beacon Power, and Bridge detection. MoCA Access is intended for multiple dwelling units (MDUs) such as hotels, resorts, hospitals, or educational facilities. It is based on the current MoCA 2.0 standard which is capable of 1 Gbit/s net throughputs, and MoCA 2.5 which is capable of 2.5 Gbit/s.
MoCA 3.0 The MoCA 3.0 standard has been released and increases the maximum throughput to 10 Gbit/s. However, this is not yet available to customers.
Performance profiles
Frequency band plan
Notes:
Channel C4 is commonly used for Verizon FiOS for the "WAN" link from the ONT to the router.
Channels D1-D8 are commonly used for "LAN" links, between set-top boxes and the router.
E band channels are commonly used by DirecTV converter boxes. The DirecTV Ethernet-to-Coax Adapter (DECA) uses MoCA on this "Mid-RF" frequency band.
D10A 100 MHz wide means it goes up to 1675 MHz, so splitters need to be 5-1675 MHz.
See also
Ethernet over coax
G.hn
Home gateway
Home network
HomePlug Powerline Alliance
HomePNA
IEEE 802.3
IEEE 802.11
IEEE 1905
Ultra-high-definition television
Wi-Fi over Coax
Wireless LAN
References
External links
Computer networking
Computer network organizations
Consumer electronics
Ethernet standards | Multimedia over Coax Alliance | [
"Technology",
"Engineering"
] | 965 | [
"Computer networking",
"Computer science",
"Computer engineering"
] |
586,599 | https://en.wikipedia.org/wiki/Penning%20trap | A Penning trap is a device for the storage of charged particles using a homogeneous magnetic field and a quadrupole electric field. It is mostly found in the physical sciences and related fields of study for precision measurements of properties of ions and stable subatomic particles, like for example mass, fission yields and isomeric yield ratios. One initial object of study was the so-called geonium atoms, which represent a way to measure the electron magnetic moment by storing a single electron. These traps have been used in the physical realization of quantum computation and quantum information processing by trapping qubits. Penning traps are in use in many laboratories worldwide, including CERN, to store and investigate anti-particles such as antiprotons. The main advantages of Penning traps are the potentially long storage times and the existence of a multitude of techniques to manipulate and non-destructively detect the stored particles. This makes Penning traps versatile for the investigation of stored particles, but also for their selection, preparation or mere storage.
History
The Penning trap was named after F. M. Penning (1894–1953) by Hans Georg Dehmelt (1922–2017) who built the first trap. Dehmelt got inspiration from the vacuum gauge built by F. M. Penning where a current through a discharge tube in a magnetic field is proportional to the pressure. Citing from H. Dehmelt's autobiography: "I began to focus on the magnetron/Penning discharge geometry, which, in the Penning ion gauge, had caught my interest already at Göttingen and at Duke. In their 1955 cyclotron resonance work on photoelectrons in vacuum Franken and Liebes had reported undesirable frequency shifts caused by accidental electron trapping. Their analysis made me realize that in a pure electric quadrupole field the shift would not depend on the location of the electron in the trap. This is an important advantage over many other traps that I decided to exploit. A magnetron trap of this type had been briefly discussed in J.R. Pierce's 1949 book, and I developed a simple description of the axial, magnetron, and cyclotron motions of an electron in it. With the help of the expert glassblower of the Department, Jake Jonson, I built my first high vacuum magnetron trap in 1959 and was soon able to trap electrons for about 10 sec and to detect axial, magnetron and cyclotron resonances." – H. Dehmelt
H. Dehmelt shared the Nobel Prize in Physics in 1989 for the development of the ion trap technique.
Operation
Penning traps use a strong homogeneous axial magnetic field to confine particles radially and a quadrupole electric field to confine the particles axially. The static electric potential can be generated using a set of three electrodes: a ring and two endcaps. In an ideal Penning trap the ring and endcaps are hyperboloids of revolution. For trapping of positive (negative) ions, the endcap electrodes are kept at a positive (negative) potential relative to the ring. This potential produces a saddle point in the centre of the trap, which traps ions along the axial direction. The electric field causes ions to oscillate (harmonically in the case of an ideal Penning trap) along the trap axis. The magnetic field in combination with the electric field causes charged particles to move in the radial plane with a motion which traces out an epitrochoid.
The orbital motion of ions in the radial plane is composed of two modes at frequencies which are called the magnetron and the modified cyclotron frequencies. These motions are similar to the deferent and epicycle, respectively, of the Ptolemaic model of the solar system.
The sum of these two frequencies is the cyclotron frequency, which depends only on the ratio of electric charge to mass and on the strength of the magnetic field. This frequency can be measured very accurately and can be used to measure the masses of charged particles. Many of the highest-precision mass measurements (masses of the electron, proton, 2H, 20Ne and 28Si) come from Penning traps.
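The frequency relations mentioned above can be written out explicitly. The sketch below uses the standard ideal-trap formulas for the axial, magnetron and modified cyclotron frequencies; the field, trapping voltage and characteristic dimension are illustrative values, not those of any particular experiment.

```python
import math

def penning_frequencies(q, m, B, U0, d):
    """Eigenfrequencies (Hz) of a particle of charge q and mass m in an ideal
    Penning trap:
      free cyclotron                w_c = q*B/m
      axial                         w_z = sqrt(q*U0 / (m*d**2))
      modified cyclotron/magnetron  w_± = w_c/2 ± sqrt(w_c**2/4 - w_z**2/2)"""
    w_c = q * B / m
    w_z = math.sqrt(q * U0 / (m * d ** 2))
    root = math.sqrt(w_c ** 2 / 4.0 - w_z ** 2 / 2.0)
    hz = lambda w: w / (2.0 * math.pi)
    return hz(w_c / 2 + root), hz(w_c / 2 - root), hz(w_z), hz(w_c)

# Example: a proton in a 5 T field, 10 V trapping potential, d = 5 mm
nu_plus, nu_minus, nu_z, nu_c = penning_frequencies(1.602e-19, 1.673e-27, 5.0, 10.0, 5e-3)
print(nu_plus, nu_minus, nu_z, nu_c)
print(math.isclose(nu_plus + nu_minus, nu_c))  # their sum is the free cyclotron frequency
```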
Buffer gas cooling, resistive cooling, and laser cooling are techniques to remove energy from ions in a Penning trap. Buffer gas cooling relies on collisions between the ions and neutral gas molecules that bring the ion energy closer to the energy of the gas molecules. In resistive cooling, moving image charges in the electrodes are made to do work through an external resistor, effectively removing energy from the ions. Laser cooling can be used to remove energy from some kinds of ions in Penning traps. This technique requires ions with an appropriate electronic structure. Radiative cooling is the process by which the ions lose energy by creating electromagnetic waves by virtue of their acceleration in the magnetic field. This process dominates the cooling of electrons in Penning traps, but is very small and usually negligible for heavier particles.
Using the Penning trap can have advantages over the radio frequency trap (Paul trap). Firstly, in the Penning trap only static fields are applied and therefore there is no micro-motion and resultant heating of the ions due to the dynamic fields, even for extended 2- and 3-dimensional ion Coulomb crystals. Also, the Penning trap can be made larger whilst maintaining strong trapping. The trapped ion can then be held further away from the electrode surfaces. Interaction with patch potentials on the electrode surfaces can be responsible for heating and decoherence effects and these effects scale as a high power of the inverse distance between the ion and the electrode.
Fourier-transform mass spectrometry
Fourier-transform ion cyclotron resonance mass spectrometry (also known as Fourier-transform mass spectrometry) is a type of mass spectrometry used for determining the mass-to-charge ratio (m/z) of ions based on the cyclotron frequency of the ions in a fixed magnetic field. The ions are trapped in a Penning trap where they are excited to a larger cyclotron radius by an oscillating electric field perpendicular to the magnetic field. The excitation also results in the ions moving in phase (in a packet). The signal is detected as an image current on a pair of plates which the packet of ions passes close to as it undergoes cyclotron motion. The resulting signal is called a free induction decay (FID), transient or interferogram, and consists of a superposition of sine waves. The useful signal is extracted from this data by performing a Fourier transform to give a mass spectrum.
Single ions can be investigated in a Penning trap held at a temperature of 4 K. For this the ring electrode is segmented and opposite electrodes are connected to a superconducting coil and the source and the gate of a field-effect transistor. The coil and the parasitic capacitances of the circuit form a LC circuit with a Q of about 50 000. The LC circuit is excited by an external electric pulse. The segmented electrodes couple the motion of the single electron to the LC circuit. Thus the energy in the LC circuit in resonance with the ion slowly oscillates between the many electrons (10000) in the gate of the field effect transistor and the single electron. This can be detected in the signal at the drain of the field effect transistor.
Geonium atom
A geonium atom is a pseudo-atomic system that consists of a single electron or ion stored in a Penning trap which is 'bound' to the remaining Earth, hence the term 'geonium'. The name was coined by H.G. Dehmelt.
In the typical case, the trapped system consists of only one particle or ion. Such a quantum system is determined by quantum states of one particle, like in the hydrogen atom. Hydrogen consists of two particles, the nucleus and electron, but the electron motion relative to the nucleus is equivalent to one particle in an external field, see center-of-mass frame.
The properties of geonium are different from a typical atom. The charge undergoes cyclotron motion around the trap axis and oscillates along the axis. An inhomogeneous magnetic "bottle field" is applied to measure the quantum properties by the "continuous Stern-Gerlach" technique. Energy levels and g-factor of the particle can be measured with high precision. Van Dyck, et al. explored the magnetic splitting of geonium spectra in 1978 and in 1987 published high-precision measurements of electron and positron g-factors, which constrained the electron radius.
Single particle
In November 2017, an international team of scientists isolated a single proton in a Penning trap in order to measure its magnetic moment to the highest precision to date. It was found to be . The CODATA 2018 value matches this.
References
External links
Nobel Prize in Physics 1989
The High-precision Penning Trap Mass Spectrometer SMILETRAP in Stockholm, Sweden
High-precision mass determination of unstable nuclei with a Penning trap mass spectrometer at ISOLDE/CERN, Switzerland
High-precision mass measurements of rare isotopes using the LEBIT and SIPT Penning traps at the National Superconducting Cyclotron Laboratory, USA
High-precision mass measurements of short-lived isotopes using the TITAN Penning trap at TRIUMF in Vancouver, Canada
Measuring instruments
Atomic physics
Mass spectrometry
Particle traps | Penning trap | [
"Physics",
"Chemistry",
"Technology",
"Engineering"
] | 1,939 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Quantum mechanics",
"Measuring instruments",
"Particle traps",
"Mass spectrometry",
"Atomic physics",
" molecular",
"Atomic",
"Matter",
" and optical physics"
] |
586,817 | https://en.wikipedia.org/wiki/Mass%20spectrum | A mass spectrum is a histogram plot of intensity vs. mass-to-charge ratio (m/z) in a chemical sample, usually acquired using an instrument called a mass spectrometer. Not all mass spectra of a given substance are the same; for example, some mass spectrometers break the analyte molecules into fragments; others observe the intact molecular masses with little fragmentation. A mass spectrum can represent many different types of information based on the type of mass spectrometer and the specific experiment applied. Common fragmentation processes for organic molecules are the McLafferty rearrangement and alpha cleavage. Straight chain alkanes and alkyl groups produce a typical series of peaks: 29 (CH3CH2+), 43 (CH3CH2CH2+), 57 (CH3CH2CH2CH2+), 71 (CH3CH2CH2CH2CH2+) etc.
X-axis: m/z (mass-to-charge ratio)
The x-axis of a mass spectrum represents a relationship between the mass of a given ion and the number of elementary charges that it carries. This is written as the IUPAC standard m/z to denote the quantity formed by dividing the mass of an ion (in daltons) by the dalton unit and by its charge number (positive absolute value). Thus, m/z is a dimensionless quantity with no associated units. Despite carrying neither units of mass nor charge, the m/z is referred to as the mass-to-charge ratio of an ion. However, this is distinct from the mass-to-charge ratio, m/Q (SI standard units kg/C), which is commonly used in physics. The m/z is used in applied mass spectrometry because convenient and intuitive numerical relationships naturally arise when interpreting spectra. A single m/z value alone does not contain sufficient information to determine the mass or charge of an ion. However, mass information may be extracted when considering the whole spectrum, such as the spacing of isotopes or the observation of multiple charge states of the same molecule. These relationships and the relationship to the mass of the ion in daltons tend toward approximately rational number values in m/z space. For example, ions with one charge exhibit spacing between isotopes of 1 and the mass of the ion in daltons is numerically equal to the m/z. The IUPAC Gold Book gives an example of appropriate use: "for the ion C7H72+, m/z equals 45.5".
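The kind of reasoning described above can be shown with a short sketch. It assumes positive ions of the form [M + zH]z+ (an electrospray-style convention, not stated in the text) with an invented peptide mass, and the last line reproduces the IUPAC C7H72+ example quoted above.

```python
PROTON_MASS = 1.007276  # Da

def mz(neutral_mass, charge):
    """m/z of an [M + zH]^z+ ion: (M + z * m_proton) / z."""
    return (neutral_mass + charge * PROTON_MASS) / charge

M = 1500.0  # hypothetical neutral monoisotopic mass, Da
for z in (1, 2, 3):
    print(f"z={z}: m/z = {mz(M, z):.3f}, isotope spacing ~ {1.0 / z:.3f}")

# C7H7 2+ : (7*12 + 7*1.00783) / 2 -> ~45.5, matching the IUPAC example
print(round((7 * 12.0 + 7 * 1.00783) / 2, 1))
```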
Alternative x-axis notations
There are several alternatives to the standard m/z notation that appear in the literature; however, these are not currently accepted by standards organizations and most journals. m/e appears in older historical literature. A label more consistent with the IUPAC green book and ISO 31 conventions is m/Q or m/q where m is the symbol for mass and Q or q the symbol for charge with the units u/e or Da/e. This notation is not uncommon in the physics of mass spectrometry but is rarely used as the abscissa of a mass spectrum. It was also suggested to introduce a new unit thomson (Th) as a unit of m/z, where 1 Th = 1 u/e. According to this convention, mass spectra x axis could be labeled m/z (Th) and negative ions would have negative values. This notation is rare and not accepted by IUPAC or any other standards organisation.
History of x-axis notation
In 1897 the mass-to-charge ratio of the electron was first measured by J. J. Thomson. By doing this he showed that the electron, which had been postulated earlier in order to explain electricity, was in fact a particle with a mass and a charge, and that its mass-to-charge ratio was much smaller than that of the hydrogen ion H+. In 1913 he measured the mass-to-charge ratio of ions with an instrument he called a parabola spectrograph. Although this data was not represented as a modern mass spectrum, it was similar in meaning. Eventually the notation changed, with m/e giving way to the current standard of m/z.
Early in mass spectrometry research the resolution of mass spectrometers did not allow for accurate mass determination. Francis William Aston won the Nobel Prize in Chemistry in 1922 "for his discovery, by means of his mass spectrograph, of isotopes, in a large number of non-radioactive elements, and for his enunciation of the Whole Number Rule", in which he stated that all atoms (including isotopes) follow a whole-number rule. This implied that the masses of atoms were not on a continuous scale but could be expressed as integers (in fact multiply charged ions were rare, so for the most part the ratio was whole as well). There have been several suggestions (e.g. the unit thomson) to change the official mass spectrometry nomenclature to be more internally consistent.
Y-axis: signal intensity
The y-axis of a mass spectrum represents signal intensity of the ions. When using counting detectors the intensity is often measured in counts per second (cps). When using analog detection electronics the intensity is typically measured in volts. In FTICR and Orbitraps the frequency domain signal (the y-axis) is related to the power (~amplitude squared) of the signal sine wave (often reduced to an rms power); however, the axis is usually not labeled as such for many reasons. In most forms of mass spectrometry, the intensity of ion current measured by the spectrometer does not accurately represent relative abundance, but correlates loosely with it. Therefore, it is common to label the y-axis with "arbitrary units".
Y-axis and relative abundance
Signal intensity may be dependent on many factors, especially the nature of the molecules being analyzed and how they ionize. The efficiency of ionization varies from molecule to molecule and from ion source to ion source. For example, in electrospray sources in positive ion mode a quaternary amine will ionize exceptionally well whereas a large hydrophobic alcohol will most likely not be seen no matter how concentrated. In an EI source these molecules will behave very differently. Additionally there may be factors that affect ion transmission disproportionally between ionization and detection.
On the detection side there are many factors that can also affect signal intensity in a non-proportional way. The size of the ion will affect the velocity of impact, and with certain detectors the velocity is proportional to the signal output. In other detection systems, such as FTICR, the number of charges on the ion is more important to signal intensity. In Fourier transform ion cyclotron resonance and Orbitrap type mass spectrometers the signal intensity (Y-axis) is related to the amplitude of the free induction decay signal. This is fundamentally a power relationship (amplitude squared) but often computed as an RMS value. For decaying signals the RMS is not equal to the average amplitude. Additionally the damping constant (decay rate of the signal in the FID) is not the same for all ions. In order to make conclusions about relative intensity a great deal of knowledge and care is required.
A common way to get more quantitative information out of a mass spectrum is to create a standard curve to compare the sample to. This requires knowing what is to be quantitated ahead of time, having a standard available and designing the experiment specifically for this purpose. A more advanced variation on this is the use of an internal standard which behaves very similarly to the analyte. This is often an isotopically labeled version of the analyte. There are forms of mass spectrometry, such as accelerator mass spectrometry that are designed from the bottom up to be quantitative.
Spectral skewing
Spectral skewing is the change in relative intensity of mass spectral peaks due to changes in the concentration of the analyte in the ion source as the mass spectrum is scanned. This situation occurs routinely as chromatographic components elute into a continuous ion source. Spectral skewing is not observed in ion trap (quadrupole or magnetic) or time-of-flight (TOF) mass analyzers because potentially all ions formed in one operational cycle (a snapshot in time) of the instrument are available for detection, although skewing has also been seen in quadrupole mass spectrometers (QMS).
See also
Kendrick mass
References
External links
Quantities, Units and Symbols in Physical Chemistry (IUPAC green book)
An introductory video on Mass Spectrometry The Royal Society of Chemistry
NIST Standard Reference Database 1A v17
Mass spectrometry | Mass spectrum | [
"Physics",
"Chemistry"
] | 1,789 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
587,271 | https://en.wikipedia.org/wiki/Torsion%20spring | A torsion spring is a spring that works by twisting its end along its axis; that is, a flexible elastic object that stores mechanical energy when it is twisted. When it is twisted, it exerts a torque in the opposite direction, proportional to the amount (angle) it is twisted. There are various types:
A torsion bar is a straight bar of metal or rubber that is subjected to twisting (shear stress) about its axis by torque applied at its ends.
A more delicate form used in sensitive instruments, called a torsion fiber consists of a fiber of silk, glass, or quartz under tension, that is twisted about its axis.
A helical torsion spring, is a metal rod or wire in the shape of a helix (coil) that is subjected to twisting about the axis of the coil by sideways forces (bending moments) applied to its ends, twisting the coil tighter.
Clocks use a spiral wound torsion spring (a form of helical torsion spring where the coils are around each other instead of piled up) sometimes called a "clock spring" or colloquially called a mainspring. Those types of torsion springs are also used for attic stairs, clutches, typewriters and other devices that need near constant torque for large angles or even multiple revolutions.
Torsion, bending
Torsion bars and torsion fibers do work by torsion. However, the terminology can be confusing because in helical torsion spring (including clock spring), the forces acting on the wire are actually bending stresses, not torsional (shear) stresses. A helical torsion spring actually works by torsion when it is bent (not twisted).
We will use the word "torsion" in the following for a torsion spring according to the definition given above, whether the material it is made of actually works by torsion or by bending.
Torsion coefficient
As long as they are not twisted beyond their elastic limit, torsion springs obey an angular form of Hooke's law:
τ = −κθ
where
τ is the torque exerted by the spring in newton-meters, and θ is the angle of twist from its equilibrium position in radians
κ is a constant with units of newton-meters / radian, variously called the spring's torsion coefficient, torsion elastic modulus, rate, or just spring constant, equal to the change in torque required to twist the spring through an angle of 1 radian.
The torsion constant may be calculated from the geometry and various material properties. It is analogous to the spring constant of a linear spring. The negative sign indicates that the direction of the torque is opposite to the direction of twist.
The energy U, in joules, stored in a torsion spring is:
U = ½ κθ²
Uses
Some familiar examples of uses are the strong, helical torsion springs that operate clothespins and traditional spring-loaded-bar type mousetraps. Other uses are in the large, coiled torsion springs used to counterbalance the weight of garage doors, and a similar system is used to assist in opening the trunk (boot) cover on some sedans. Small, coiled torsion springs are often used to operate pop-up doors found on small consumer goods like digital cameras and compact disc players. Other more specific uses:
A torsion bar suspension is a thick, steel torsion-bar spring attached to the body of a vehicle at one end and to a lever arm which attaches to the axle of the wheel at the other. It absorbs road shocks as the wheel goes over bumps and rough road surfaces, cushioning the ride for the passengers. Torsion-bar suspensions are used in many modern cars and trucks, as well as military vehicles.
The sway bar used in many vehicle suspension systems also uses the torsion spring principle.
The torsion pendulum used in torsion pendulum clocks is a wheel-shaped weight suspended from its center by a wire torsion spring. The weight rotates about the axis of the spring, twisting it, instead of swinging like an ordinary pendulum. The force of the spring reverses the direction of rotation, so the wheel oscillates back and forth, driven at the top by the clock's gears.
Torsion springs consisting of twisted ropes or sinew, were used to store potential energy to power several types of ancient weapons; including the Greek ballista and the Roman scorpio and catapults like the onager.
The balance spring or hairspring in mechanical watches is a fine, spiral-shaped torsion spring that pushes the balance wheel back toward its center position as it rotates back and forth. The balance wheel and spring function similarly to the torsion pendulum above in keeping time for the watch.
The D'Arsonval movement used in mechanical pointer-type meters to measure electric current is a type of torsion balance (see below). A coil of wire attached to the pointer twists in a magnetic field against the resistance of a torsion spring. Hooke's law ensures that the angle of the pointer is proportional to the current.
A DMD or digital micromirror device chip is at the heart of many video projectors. It uses hundreds of thousands of tiny mirrors on tiny torsion springs fabricated on a silicon surface to reflect light onto the screen, forming the image.
Badge tether
Torsion balance
The torsion balance, also called torsion pendulum, is a scientific apparatus for measuring very weak forces, usually credited to Charles-Augustin de Coulomb, who invented it in 1777, but independently invented by John Michell sometime before 1783. Its most well-known uses were by Coulomb to measure the electrostatic force between charges to establish Coulomb's Law, and by Henry Cavendish in 1798 in the Cavendish experiment to measure the gravitational force between two masses to calculate the density of the Earth, leading later to a value for the gravitational constant.
The torsion balance consists of a bar suspended from its middle by a thin fiber. The fiber acts as a very weak torsion spring. If an unknown force is applied at right angles to the ends of the bar, the bar will rotate, twisting the fiber, until it reaches an equilibrium where the twisting force or torque of the fiber balances the applied force. Then the magnitude of the force is proportional to the angle of the bar. The sensitivity of the instrument comes from the weak spring constant of the fiber, so a very weak force causes a large rotation of the bar.
In Coulomb's experiment, the torsion balance was an insulating rod with a metal-coated ball attached to one end, suspended by a silk thread. The ball was charged with a known charge of static electricity, and a second charged ball of the same polarity was brought near it. The two charged balls repelled one another, twisting the fiber through a certain angle, which could be read from a scale on the instrument. By knowing how much force it took to twist the fiber through a given angle, Coulomb was able to calculate the force between the balls. Determining the force for different charges and different separations between the balls, he showed that it followed an inverse-square proportionality law, now known as Coulomb's law.
To measure the unknown force, the spring constant of the torsion fiber must first be known. This is difficult to measure directly because of the smallness of the force. Cavendish accomplished this by a method widely used since: measuring the resonant vibration period of the balance. If the free balance is twisted and released, it will oscillate slowly clockwise and counterclockwise as a harmonic oscillator, at a frequency that depends on the moment of inertia of the beam and the elasticity of the fiber. Since the inertia of the beam can be found from its mass, the spring constant can be calculated.
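As a concrete illustration of this procedure, here is a minimal numerical sketch (all values are invented for illustration and are not taken from any experiment described above): the spring constant is recovered from the measured oscillation period and the beam's moment of inertia, and an unknown force is then obtained from the equilibrium twist angle.

```python
import math

def torsion_constant(period_s, moment_of_inertia):
    """kappa from T = 2*pi*sqrt(I/kappa), i.e. kappa = 4*pi^2*I/T^2 (N*m/rad)."""
    return 4.0 * math.pi**2 * moment_of_inertia / period_s**2

def force_from_deflection(kappa, twist_angle_rad, moment_arm_m):
    """At equilibrium the fiber torque kappa*theta balances F*L, so F = kappa*theta/L."""
    return kappa * twist_angle_rad / moment_arm_m

# Two 10 g balls on a 20 cm beam: I ~ 2*m*(L/2)^2, a 500 s free oscillation period,
# and a 5 milliradian deflection at a 10 cm moment arm (illustrative numbers only).
I = 2 * 0.010 * 0.10**2                       # kg*m^2
kappa = torsion_constant(500.0, I)            # N*m/rad
F = force_from_deflection(kappa, 5e-3, 0.10)  # N
print(f"kappa = {kappa:.3e} N*m/rad, inferred force = {F:.3e} N")
```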
Coulomb first developed the theory of torsion fibers and the torsion balance in his 1785 memoir, Recherches theoriques et experimentales sur la force de torsion et sur l'elasticite des fils de metal &c. This led to its use in other scientific instruments, such as galvanometers, and the Nichols radiometer which measured the radiation pressure of light. In the early 1900s gravitational torsion balances were used in petroleum prospecting. Today torsion balances are still used in physics experiments. In 1987, gravity researcher A. H. Cook wrote:
The most important advance in experiments on gravitation and other delicate measurements was the introduction of the torsion balance by Michell and its use by Cavendish. It has been the basis of all the most significant experiments on gravitation ever since.

In the Eötvös experiment, a torsion balance was used to prove the equivalence principle, the idea that inertial mass and gravitational mass are one and the same.
Torsional harmonic oscillators
Torsion balances, torsion pendulums and balance wheels are examples of torsional harmonic oscillators that can oscillate with a rotational motion about the axis of the torsion spring, clockwise and counterclockwise, in harmonic motion. Their behavior is analogous to translational spring-mass oscillators (see Harmonic oscillator Equivalent systems). The general differential equation of motion is:

I (d²θ/dt²) + C (dθ/dt) + κθ = τ(t)

where I is the moment of inertia of the oscillating body, C the viscous damping coefficient, κ the torsion spring constant, and τ(t) the applied drive torque.
If the damping is small, C² ≪ 4κI, as is the case with torsion pendulums and balance wheels, the frequency of vibration is very near the natural resonant frequency of the system:

fn = (1/2π) √(κ/I)

Therefore, the period is represented by:

Tn = 1/fn = 2π √(I/κ)

The general solution in the case of no drive force (τ = 0), called the transient solution, is:

θ(t) = A e^(−αt) cos(ωt + φ)

where: A is the initial amplitude of oscillation, φ the phase, α = C/2I the decay rate, and ω = √(ωn² − α²) ≈ ωn = 2πfn the damped oscillation frequency.
Applications
The balance wheel of a mechanical watch is a harmonic oscillator whose resonant frequency sets the rate of the watch. The resonant frequency is regulated, first coarsely by adjusting with weight screws set radially into the rim of the wheel, and then more finely by adjusting with a regulating lever that changes the length of the balance spring.
In a torsion balance the drive torque is constant and equal to the unknown force to be measured, F, times the moment arm of the balance beam, L, so τ(t) = FL. When the oscillatory motion of the balance dies out, the deflection will be proportional to the force:

θ = FL/κ

To determine F it is therefore necessary to find the torsion spring constant κ. If the damping is low, this can be obtained by measuring the natural resonant frequency of the balance, since the moment of inertia I of the balance can usually be calculated from its geometry, so:

κ = (2πfn)² I

In measuring instruments, such as the D'Arsonval ammeter movement, it is often desired that the oscillatory motion die out quickly so the steady state result can be read off. This is accomplished by adding damping to the system, often by attaching a vane that rotates in a fluid such as air or water (this is why magnetic compasses are filled with fluid). The value of damping that causes the oscillatory motion to settle quickest is called the critical damping:

Cc = 2√(κI)
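The damping relations above can also be sketched numerically; the spring constant and moment of inertia below are assumed, illustrative values for a small meter movement, not specifications of any real instrument.

```python
import math

kappa = 1e-6      # torsion spring constant, N*m/rad (assumed)
I = 2e-9          # moment of inertia of coil and pointer, kg*m^2 (assumed)

omega_n = math.sqrt(kappa / I)          # natural angular frequency, rad/s
C_crit = 2.0 * math.sqrt(kappa * I)     # critical damping coefficient

for C in (0.2 * C_crit, C_crit, 2.0 * C_crit):
    zeta = C / C_crit                   # damping ratio
    regime = "underdamped" if zeta < 1 else ("critically damped" if zeta == 1 else "overdamped")
    print(f"C = {C:.2e}  zeta = {zeta:.1f}  ({regime})")
print(f"natural frequency = {omega_n / (2 * math.pi):.1f} Hz, C_crit = {C_crit:.2e}")
```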
See also
Beam (structure)
Slinky, helical toy spring
References
External links
Torsion balance interactive java tutorial
Torsion spring calculator
Big G measurement, description of 1999 Cavendish experiment at Univ. of Washington, showing torsion balance[link broken]
How torsion balances were used in petroleum prospecting (web archive link)
Mechanics of torsion springs. Web archive link, accessed December 8, 2016.
Solved mechanics problems involving springs (springs in series and in parallel)
Milestones in the History of Springs
Articles containing video clips
Pendulums
Springs (mechanical)
Torque | Torsion spring | [
"Physics"
] | 2,432 | [
"Wikipedia categories named after physical quantities",
"Force",
"Physical quantities",
"Torque"
] |
589,225 | https://en.wikipedia.org/wiki/Bimetallic%20strip | A bimetallic strip or bimetal strip is a strip that consists of two strips of different metals which expand at different rates as they are heated. They are used to convert a temperature change into mechanical displacement. The different expansions force the flat strip to bend one way if heated, and in the opposite direction if cooled below its initial temperature. The metal with the higher coefficient of thermal expansion is on the outer side of the curve when the strip is heated and on the inner side when cooled.
The invention of the bimetallic strip is generally credited to John Harrison, an eighteenth-century clockmaker who made it for his third marine chronometer (H3) of 1759 to compensate for temperature-induced changes in the balance spring. Harrison's invention is recognized in the memorial to him in Westminster Abbey, England.
Characteristics
The strip consists of two strips of different metals which expand at different rates as they are heated, usually steel and copper, or in some cases steel and brass. The strips are joined together throughout their length by riveting, brazing or welding. The different expansions force the flat strip to bend one way if heated, and in the opposite direction if cooled below its initial temperature. The metal with the higher coefficient of thermal expansion is on the outer side of the curve when the strip is heated and on the inner side when cooled. The sideways displacement of the strip is much larger than the small lengthways expansion in either of the two metals.
In some applications, the bimetal strip is used in the flat form. In others, it is wrapped into a coil for compactness. The greater length of the coiled version gives improved sensitivity.
The radius of curvature ρ of a bimetallic strip depends on temperature according to a formula derived by French physicist Yvon Villarceau in 1863 in his research on improving the precision of clocks. It can be written in the form

ρ = h / (K Δα ΔT),

where h is the total thickness of the bimetal, Δα is the difference between the two coefficients of thermal expansion, ΔT the temperature change, and K a dimensionless coefficient that depends, for each metallic strip, on the Young's modulus E, the coefficient of thermal expansion α and the thickness. The formula can also be rewritten as a function of the thermal misfit strain ε = Δα ΔT, giving ρ = h/(Kε). If the two strips have similar modulus and thickness, K ≈ 3/2, and we simply have ρ ≈ 2h/(3ε).
An equivalent formula can be derived from the beam theory.
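A minimal numerical sketch of this calculation, using the classical Timoshenko beam-theory expression for a two-layer strip (the material values are typical handbook figures chosen only for illustration, not measurements of any particular strip):

```python
def bimetal_curvature(E1, E2, h1, h2, alpha1, alpha2, dT):
    """Return the curvature 1/rho (in 1/m) of a two-layer strip heated by dT."""
    eps = (alpha2 - alpha1) * dT                     # thermal misfit strain
    num = 6.0 * E1 * E2 * (h1 + h2) * h1 * h2 * eps
    den = (E1**2 * h1**4 + 4 * E1 * E2 * h1**3 * h2 + 6 * E1 * E2 * h1**2 * h2**2
           + 4 * E1 * E2 * h1 * h2**3 + E2**2 * h2**4)
    return num / den

# Steel (layer 1) bonded to brass (layer 2), 0.5 mm each, heated by 50 K.
curv = bimetal_curvature(E1=200e9, E2=100e9, h1=0.5e-3, h2=0.5e-3,
                         alpha1=12e-6, alpha2=19e-6, dT=50.0)
print(f"curvature = {curv:.3f} 1/m, radius of curvature = {1 / curv:.2f} m")
```

For equal moduli and equal layer thicknesses this expression reduces to a curvature of 3ε/(2h), consistent with the approximation given above.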
History
The earliest surviving bimetallic strip was made by the eighteenth-century clockmaker John Harrison who is generally credited with its invention. He made it for his third marine chronometer (H3) of 1759 to compensate for temperature-induced changes in the balance spring. It should not be confused with the bimetallic mechanism for correcting for thermal expansion in his gridiron pendulum. His earliest examples had two individual metal strips joined by rivets but he also invented the later technique of directly fusing molten brass onto a steel substrate. A strip of this type was fitted to his last timekeeper, H5. Harrison's invention is recognized in the memorial to him in Westminster Abbey, England.
Composition
The metals involved in a bimetallic strip can vary in composition so long as their thermal expansion coefficients differ. The metal of lower thermal expansion coefficient is sometimes called the passive metal, while the other is called the active metal. Copper, steel, brass, iron, and nickel are commonly used metals in bimetallic strips. Metal alloys have been used in bimetallic strips as well, such as invar and constantan. Material selection has a significant impact on the working temperature range of a bimetallic strip, with some having a temperature limit up to 500°C, with others only reaching 150°C before failing.
Applications
This effect is used in a range of mechanical and electrical devices.
Clocks
Mechanical clock mechanisms are sensitive to temperature changes: each part is made to tight tolerances, and thermal expansion leads to errors in timekeeping. A bimetallic strip is used to compensate for this in the mechanisms of some timepieces. The most common method is to use a bimetallic construction for the circular rim of the balance wheel. As the temperature rises, the rim bends so that mass moves radially toward the wheel's axis, reducing the moment of inertia of the balance wheel. Because the spring controlling the balance becomes weaker with increasing temperature, the balance becomes smaller in diameter to decrease the moment of inertia and keep the period of oscillation (and hence timekeeping) constant.
Nowadays this system is no longer used, owing to the appearance of low-temperature-coefficient alloys such as Nivarox and Parachrom, among many others that vary by brand.
Thermostats
In the regulation of heating and cooling, thermostats that operate over a wide range of temperatures are used. In these, one end of the bimetallic strip is mechanically fixed and attached to an electrical power source, while the other (moving) end carries an electrical contact. In adjustable thermostats another contact is positioned with a regulating knob or lever. The position so set controls the regulated temperature, called the set point.
Some thermostats use a mercury switch connected to both electrical leads. The angle of the entire mechanism is adjustable to control the set point of the thermostat.
Depending upon the application, a higher temperature may open a contact (as in a heater control) or it may close a contact (as in a refrigerator or air conditioner).
The electrical contacts may control the power directly (as in a household iron) or indirectly, switching electrical power through a relay or the supply of natural gas or fuel oil through an electrically operated valve. In some natural gas heaters the power may be provided with a thermocouple that is heated by a pilot light (a small, continuously burning, flame). In devices without pilot lights for ignition (as in most modern gas clothes dryers and some natural gas heaters and decorative fireplaces) the power for the contacts is provided by reduced household electrical power that operates a relay controlling an electronic ignitor, either a resistance heater or an electrically powered spark generating device.
Thermometers
A direct-indicating dial thermometer, common in household devices (such as a patio thermometer or a meat thermometer), uses a bimetallic strip wrapped into a coil in its most common design. Thanks to its helical shape, the coil converts the small linear expansion of the metal into a circular movement of its free end. One end of the coil is fixed to the housing of the device as a fixed point and the other drives an indicating needle around a circular dial. A bimetallic strip is also used in a recording thermometer. Breguet's thermometer consists of a tri-metallic helix to give a more accurate result.
Heat engine
Heat engines are generally of low efficiency, and a heat engine based on bimetallic strips is less efficient still, since there is no chamber to contain the heat. Moreover, a bimetallic strip cannot exert much force as it moves: to achieve a noticeable bend, both metal strips must be thin so that the difference in their expansion dominates. Bimetallic strips are therefore used in heat engines mostly in simple toys built to demonstrate how the principle can be used to drive an engine.
Electrical devices
Bimetal strips are used in miniature circuit breakers to protect circuits from excess current. A coil of wire is used to heat a bimetal strip, which bends and operates a linkage that unlatches a spring-operated contact. This interrupts the circuit and can be reset when the bimetal strip has cooled down.
Bimetal strips are also used in time-delay relays, gas oven safety valves, thermal flashers for older turn signal lamps, and fluorescent lamp starters. In some devices, the current running directly through the bimetal strip is sufficient to heat it and operate contacts directly. It has also been used in mechanical PWM voltage regulators for automotive uses.
See also
Thermotime switch
References
Article about compensating the balance wheel against temperature changes by Hodinkee magazine
Article about the hairspring used in watches by Monochrome magazine
Notes
External links
Video of a circular bimetallic wire powering a small motor with iced water. Accessed February 2011.
Video of a bimetallic coil powering an engine (among others like Curie, Stirling and Hero)
English inventions
Engineering thermodynamics
Mechanical engineering
Heating, ventilation, and air conditioning
Energy conversion
Thermometers
Bimetal | Bimetallic strip | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 1,743 | [
"Applied and interdisciplinary physics",
"Metallurgy",
"Engineering thermodynamics",
"Measuring instruments",
"Bimetal",
"Thermodynamics",
"Mechanical engineering",
"Thermometers"
] |
589,286 | https://en.wikipedia.org/wiki/Pi%20bond | In chemistry, pi bonds (π bonds) are covalent chemical bonds, in each of which two lobes of an orbital on one atom overlap with two lobes of an orbital on another atom, and in which this overlap occurs laterally. Each of these atomic orbitals has an electron density of zero at a shared nodal plane that passes through the two bonded nuclei. This plane also is a nodal plane for the molecular orbital of the pi bond. Pi bonds can form in double and triple bonds but do not form in single bonds in most cases.
The Greek letter π in their name refers to p orbitals, since the orbital symmetry of the pi bond is the same as that of the p orbital when seen down the bond axis. One common form of this sort of bonding involves p orbitals themselves, though d orbitals also engage in pi bonding. This latter mode forms part of the basis for metal-metal multiple bonding.
Properties
Pi bonds are usually weaker than sigma bonds. The C-C double bond, composed of one sigma and one pi bond, has a bond energy less than twice that of a C-C single bond, indicating that the stability added by the pi bond is less than the stability of a sigma bond. From the perspective of quantum mechanics, this bond's weakness is explained by significantly less overlap between the component p-orbitals due to their parallel orientation. This is contrasted by sigma bonds which form bonding orbitals directly between the nuclei of the bonding atoms, resulting in greater overlap and a strong sigma bond.
Pi bonds result from overlap of atomic orbitals that are in contact through two areas of overlap. Most orbital overlaps that do not include the s-orbital, or have different internuclear axes (for example px + py overlap, which does not apply to an s-orbital) are generally all pi bonds. Pi bonds are more diffuse bonds than the sigma bonds. Electrons in pi bonds are sometimes referred to as pi electrons. Molecular fragments joined by a pi bond cannot rotate about that bond without breaking the pi bond, because rotation involves destroying the parallel orientation of the constituent p orbitals.
For homonuclear diatomic molecules, bonding π molecular orbitals have only the one nodal plane passing through the bonded atoms, and no nodal planes between the bonded atoms. The corresponding antibonding, or π* ("pi-star") molecular orbital, is defined by the presence of an additional nodal plane between these two bonded atoms.
Multiple bonds
A typical double bond consists of one sigma bond and one pi bond; for example, the C=C double bond in ethylene (H2C=CH2). A typical triple bond, for example in acetylene (HC≡CH), consists of one sigma bond and two pi bonds in two mutually perpendicular planes containing the bond axis. Two pi bonds are the maximum that can exist between a given pair of atoms. Quadruple bonds are extremely rare and can be formed only between transition metal atoms, and consist of one sigma bond, two pi bonds and one delta bond.
A pi bond is weaker than a sigma bond, but the combination of pi and sigma bond is stronger than either bond by itself. The enhanced strength of a multiple bond versus a single (sigma bond) is indicated in many ways, but most obviously by a contraction in bond lengths. For example, in organic chemistry, carbon–carbon bond lengths are about 154 pm in ethane, 134 pm in ethylene and 120 pm in acetylene. More bonds make the total bond length shorter and the bond becomes stronger.
Special cases
A pi bond can exist between two atoms that do not have a net sigma-bonding effect between them.
In certain metal complexes, pi interactions between a metal atom and alkyne and alkene pi antibonding orbitals form pi-bonds.
In some cases of multiple bonds between two atoms, there is no net sigma-bonding at all, only pi bonds. Examples include diiron hexacarbonyl (Fe2(CO)6), dicarbon (C2), and diborane(2) (B2H2). In these compounds the central bond consists only of pi bonding because of a sigma antibond accompanying the sigma bond itself. These compounds have been used as computational models for analysis of pi bonding itself, revealing that in order to achieve maximum orbital overlap the bond distances are much shorter than expected.
See also
Aromatic interaction
Delta bond
Molecular geometry
Pi backbonding
Pi interaction
References
Chemical bonding | Pi bond | [
"Physics",
"Chemistry",
"Materials_science"
] | 920 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
589,303 | https://en.wikipedia.org/wiki/Molecular%20orbital%20theory | In chemistry, molecular orbital theory (MO theory or MOT) is a method for describing the electronic structure of molecules using quantum mechanics. It was proposed early in the 20th century. The MOT explains the paramagnetic nature of O2, which valence bond theory cannot explain.
In molecular orbital theory, electrons in a molecule are not assigned to individual chemical bonds between atoms, but are treated as moving under the influence of the atomic nuclei in the whole molecule. Quantum mechanics describes the spatial and energetic properties of electrons as molecular orbitals that surround two or more atoms in a molecule and contain valence electrons between atoms.
Molecular orbital theory revolutionized the study of chemical bonding by approximating the states of bonded electrons – the molecular orbitals – as linear combinations of atomic orbitals (LCAO). These approximations are made by applying the density functional theory (DFT) or Hartree–Fock (HF) models to the Schrödinger equation.
Molecular orbital theory and valence bond theory are the foundational theories of quantum chemistry.
Linear combination of atomic orbitals (LCAO) method
In the LCAO method, each molecule has a set of molecular orbitals. It is assumed that the molecular orbital wave function ψj can be written as a simple weighted sum of the n constituent atomic orbitals χi, according to the following equation:

ψj = Σi cij χi
One may determine cij coefficients numerically by substituting this equation into the Schrödinger equation and applying the variational principle. The variational principle is a mathematical technique used in quantum mechanics to build up the coefficients of each atomic orbital basis. A larger coefficient means that the orbital basis is composed more of that particular contributing atomic orbital – hence, the molecular orbital is best characterized by that type. This method of quantifying orbital contribution as a linear combination of atomic orbitals is used in computational chemistry. An additional unitary transformation can be applied on the system to accelerate the convergence in some computational schemes. Molecular orbital theory was seen as a competitor to valence bond theory in the 1930s, before it was realized that the two methods are closely related and that when extended they become equivalent.
There are three main requirements for atomic orbital combinations to be suitable as approximate molecular orbitals.
The atomic orbital combination must have the correct symmetry, which means that it must belong to the correct irreducible representation of the molecular symmetry group. Using symmetry adapted linear combinations, or SALCs, molecular orbitals of the correct symmetry can be formed.
Atomic orbitals must also overlap within space. They cannot combine to form molecular orbitals if they are too far away from one another.
Atomic orbitals must be at similar energy levels to combine as molecular orbitals. Because if the energy difference is great, when the molecular orbitals form, the change in energy becomes small. Consequently, there is not enough reduction in energy of electrons to make significant bonding.
History
Molecular orbital theory was developed in the years after valence bond theory had been established (1927), primarily through the efforts of Friedrich Hund, Robert Mulliken, John C. Slater, and John Lennard-Jones. MO theory was originally called the Hund-Mulliken theory. According to physicist and physical chemist Erich Hückel, the first quantitative use of molecular orbital theory was the 1929 paper of Lennard-Jones. This paper predicted a triplet ground state for the dioxygen molecule which explained its paramagnetism before valence bond theory, which came up with its own explanation in 1931. The word orbital was introduced by Mulliken in 1932. By 1933, the molecular orbital theory had been accepted as a valid and useful theory.
Erich Hückel applied molecular orbital theory to unsaturated hydrocarbon molecules starting in 1931 with his Hückel molecular orbital (HMO) method for the determination of MO energies for pi electrons, which he applied to conjugated and aromatic hydrocarbons. This method provided an explanation of the stability of molecules with six pi-electrons such as benzene.
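The Hückel method lends itself to a very small numerical sketch (not taken from the original papers): only the ring connectivity enters, and the pi MO energies come out as E = α + xβ.

```python
import numpy as np

n = 6                                   # six carbon atoms in the benzene ring
huckel = np.zeros((n, n))
for i in range(n):                      # resonance integral beta between neighbours
    huckel[i, (i + 1) % n] = huckel[(i + 1) % n, i] = 1.0

x = np.sort(np.linalg.eigvalsh(huckel))[::-1]   # coefficients of beta
print("E = alpha + x*beta with x =", np.round(x, 3))
# Expected x = [2, 1, 1, -1, -1, -2]. Placing the six pi electrons in the three
# bonding MOs gives 8*beta of pi-bonding energy versus 6*beta for three isolated
# double bonds: the 2*beta delocalization energy that rationalizes benzene's
# extra stability in the Hückel picture.
```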
The first accurate calculation of a molecular orbital wavefunction was that made by Charles Coulson in 1938 on the hydrogen molecule. By 1950, molecular orbitals were completely defined as eigenfunctions (wave functions) of the self-consistent field Hamiltonian and it was at this point that molecular orbital theory became fully rigorous and consistent. This rigorous approach is known as the Hartree–Fock method for molecules although it had its origins in calculations on atoms. In calculations on molecules, the molecular orbitals are expanded in terms of an atomic orbital basis set, leading to the Roothaan equations. This led to the development of many ab initio quantum chemistry methods. In parallel, molecular orbital theory was applied in a more approximate manner using some empirically derived parameters in methods now known as semi-empirical quantum chemistry methods.
The success of Molecular Orbital Theory also spawned ligand field theory, which was developed during the 1930s and 1940s as an alternative to crystal field theory.
Types of orbitals
Molecular orbital (MO) theory uses a linear combination of atomic orbitals (LCAO) to represent molecular orbitals resulting from bonds between atoms. These are often divided into three types, bonding, antibonding, and non-bonding. A bonding orbital concentrates electron density in the region between a given pair of atoms, so that its electron density will tend to attract each of the two nuclei toward the other and hold the two atoms together. An anti-bonding orbital concentrates electron density "behind" each nucleus (i.e. on the side of each atom which is farthest from the other atom), and so tends to pull each of the two nuclei away from the other and actually weaken the bond between the two nuclei. Electrons in non-bonding orbitals tend to be associated with atomic orbitals that do not interact positively or negatively with one another, and electrons in these orbitals neither contribute to nor detract from bond strength.
Molecular orbitals are further divided according to the types of atomic orbitals they are formed from. Chemical substances will form bonding interactions if their orbitals become lower in energy when they interact with each other. Different bonding orbitals are distinguished that differ by electron configuration (electron cloud shape) and by energy levels.
The molecular orbitals of a molecule can be illustrated in molecular orbital diagrams.
Common bonding orbitals are sigma (σ) orbitals which are symmetric about the bond axis and pi (π) orbitals with a nodal plane along the bond axis. Less common are delta (δ) orbitals and phi (φ) orbitals with two and three nodal planes respectively along the bond axis. Antibonding orbitals are signified by the addition of an asterisk. For example, an antibonding pi orbital may be shown as π*.
Bond order
Bond order is the number of chemical bonds between a pair of atoms. The bond order of a molecule can be calculated by subtracting the number of electrons in anti-bonding orbitals from the number of electrons in bonding orbitals and dividing the result by two. A molecule is expected to be stable if it has a bond order larger than zero. It is adequate to consider only the valence electrons when determining the bond order, because for principal quantum number n > 1 the MOs derived from the core 1s AOs hold equal numbers of bonding and anti-bonding electrons, so the core electrons have no net effect on the bond order.
From bond order, one can predict whether a bond between two atoms will form or not. Consider, for example, the He2 molecule. From the molecular orbital diagram, the bond order is (2 − 2)/2 = 0. That means no covalent bond forms between two He atoms, which is what is seen experimentally. The dimer can be detected only in molecular beams at very low temperature and pressure, and has a binding energy of approximately 0.001 J/mol. (The helium dimer is a van der Waals molecule.)
Besides, the strength of a bond can also be realized from bond order (BO). For example:
For H2: bond order is (2 − 0)/2 = 1; bond energy is 436 kJ/mol.
For H2+: bond order is (1 − 0)/2 = 1/2; bond energy is 171 kJ/mol.
Since the bond order of H2+ is smaller than that of H2, H2+ should be less stable, which is observed experimentally and is reflected in the bond energy.
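The bond-order bookkeeping above is easy to encode; the small helper below (hypothetical, not from the article) simply applies the formula to a few electron counts.

```python
def bond_order(bonding_electrons, antibonding_electrons):
    """(number of bonding electrons - number of antibonding electrons) / 2"""
    return (bonding_electrons - antibonding_electrons) / 2

examples = {
    "H2":  (2, 0),   # sigma(1s)^2
    "H2+": (1, 0),   # sigma(1s)^1
    "He2": (2, 2),   # sigma(1s)^2 sigma*(1s)^2
    "O2":  (8, 4),   # valence MOs of dioxygen
}
for name, (b, ab) in examples.items():
    print(f"{name}: bond order = {bond_order(b, ab)}")
# H2 -> 1, H2+ -> 0.5, He2 -> 0 (no bond), O2 -> 2, consistent with the text.
```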
Overview
MOT provides a global, delocalized perspective on chemical bonding. In MO theory, any electron in a molecule may be found anywhere in the molecule, since quantum conditions allow electrons to travel under the influence of an arbitrarily large number of nuclei, as long as they are in eigenstates permitted by certain quantum rules. Thus, when excited with the requisite amount of energy through high-frequency light or other means, electrons can transition to higher-energy molecular orbitals. For instance, in the simple case of a hydrogen diatomic molecule, promotion of a single electron from a bonding orbital to an antibonding orbital can occur under UV radiation. This promotion weakens the bond between the two hydrogen atoms and can lead to photodissociation, the breaking of a chemical bond due to the absorption of light.
Molecular orbital theory is used to interpret ultraviolet–visible spectroscopy (UV–VIS). Changes to the electronic structure of molecules can be seen by the absorbance of light at specific wavelengths. Assignments can be made to these signals indicated by the transition of electrons moving from one orbital at a lower energy to a higher energy orbital. The molecular orbital diagram for the final state describes the electronic nature of the molecule in an excited state.
Although in MO theory some molecular orbitals may hold electrons that are more localized between specific pairs of molecular atoms, other orbitals may hold electrons that are spread more uniformly over the molecule. Thus, overall, bonding is far more delocalized in MO theory, which makes it more applicable to resonant molecules that have equivalent non-integer bond orders than valence bond theory. This makes MO theory more useful for the description of extended systems.
Robert S. Mulliken, who actively participated in the advent of molecular orbital theory, considers each molecule to be a self-sufficient unit. He asserts in his article: ...Attempts to regard a molecule as consisting of specific atomic or ionic units held together by discrete numbers of bonding electrons or electron-pairs are considered as more or less meaningless, except as an approximation in special cases, or as a method of calculation […]. A molecule is here regarded as a set of nuclei, around each of which is grouped an electron configuration closely similar to that of a free atom in an external field, except that the outer parts of the electron configurations surrounding each nucleus usually belong, in part, jointly to two or more nuclei....

An example is the MO description of benzene, C6H6, which is an aromatic hexagonal ring of six carbon atoms and three double bonds. In this molecule, 24 of the 30 total valence bonding electrons – 24 coming from carbon atoms and 6 coming from hydrogen atoms – are located in 12 σ (sigma) bonding orbitals, which are located mostly between pairs of atoms (C–C or C–H), similarly to the electrons in the valence bond description. However, in benzene the remaining six bonding electrons are located in three π (pi) molecular bonding orbitals that are delocalized around the ring. Two of these electrons are in an MO that has equal orbital contributions from all six atoms. The other four electrons are in orbitals with vertical nodes at right angles to each other. As in the VB theory, all of these six delocalized π electrons reside in a larger space that exists above and below the ring plane. All carbon–carbon bonds in benzene are chemically equivalent. In MO theory this is a direct consequence of the fact that the three molecular π orbitals combine and evenly spread the extra six electrons over six carbon atoms.
In molecules such as methane, , the eight valence electrons are found in four MOs that are spread out over all five atoms. It is possible to transform the MOs into four localized sp3 orbitals. Linus Pauling, in 1931, hybridized the carbon 2s and 2p orbitals so that they pointed directly at the hydrogen 1s basis functions and featured maximal overlap. However, the delocalized MO description is more appropriate for predicting ionization energies and the positions of spectral absorption bands. When methane is ionized, a single electron is taken from the valence MOs, which can come from the s bonding or the triply degenerate p bonding levels, yielding two ionization energies. In comparison, the explanation in valence bond theory is more complicated. When one electron is removed from an sp3 orbital, resonance is invoked between four valence bond structures, each of which has a single one-electron bond and three two-electron bonds. Triply degenerate T2 and A1 ionized states (CH4+) are produced from different linear combinations of these four structures. The difference in energy between the ionized and ground state gives the two ionization energies.
As in benzene, in substances such as beta carotene, chlorophyll, or heme, some electrons in the π orbitals are spread out in molecular orbitals over long distances in a molecule, resulting in light absorption in lower energies (the visible spectrum), which accounts for the characteristic colours of these substances. This and other spectroscopic data for molecules are well explained in MO theory, with an emphasis on electronic states associated with multicenter orbitals, including mixing of orbitals premised on principles of orbital symmetry matching. The same MO principles also naturally explain some electrical phenomena, such as high electrical conductivity in the planar direction of the hexagonal atomic sheets that exist in graphite. This results from continuous band overlap of half-filled p orbitals and explains electrical conduction. MO theory recognizes that some electrons in the graphite atomic sheets are completely delocalized over arbitrary distances, and reside in very large molecular orbitals that cover an entire graphite sheet, and some electrons are thus as free to move and therefore conduct electricity in the sheet plane, as if they resided in a metal.
See also
Cis effect
Configuration interaction
Coupled cluster
Frontier molecular orbital theory
Ligand field theory (MO theory for transition metal complexes)
Møller–Plesset perturbation theory
Quantum chemistry computer programs
Semi-empirical quantum chemistry methods
Valence bond theory
References
External links
Molecular Orbital Theory - Purdue University
Molecular Orbital Theory - Sparknotes
Molecular Orbital Theory - Mark Bishop's Chemistry Site
Introduction to MO Theory - Queen Mary, London University
Molecular Orbital Theory - a related terms table
An introduction to Molecular Group Theory - Oxford University
Chemical bonding
Chemistry theories
General chemistry
Quantum chemistry | Molecular orbital theory | [
"Physics",
"Chemistry",
"Materials_science"
] | 3,107 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
"Condensed matter physics",
" molecular",
"nan",
"Atomic",
"Chemical bonding",
" and optical physics"
] |
1,540,333 | https://en.wikipedia.org/wiki/Perron%E2%80%93Frobenius%20theorem | In matrix theory, the Perron–Frobenius theorem, proved by and , asserts that a real square matrix with positive entries has a unique eigenvalue of largest magnitude and that eigenvalue is real. The corresponding eigenvector can be chosen to have strictly positive components, and also asserts a similar statement for certain classes of nonnegative matrices. This theorem has important applications to probability theory (ergodicity of Markov chains); to the theory of dynamical systems (subshifts of finite type); to economics (Okishio's theorem, Hawkins–Simon condition);
to demography (Leslie population age distribution model);
to social networks (DeGroot learning process); to Internet search engines (PageRank); and even to ranking of American football
teams. The first to discuss the ordering of players within tournaments using Perron–Frobenius eigenvectors is Edmund Landau.
Statement
Let positive and non-negative respectively describe matrices with exclusively positive real numbers as elements and matrices with exclusively non-negative real numbers as elements. The eigenvalues of a real square matrix A are complex numbers that make up the spectrum of the matrix. The exponential growth rate of the matrix powers Ak as k → ∞ is controlled by the eigenvalue of A with the largest absolute value (modulus). The Perron–Frobenius theorem describes the properties of the leading eigenvalue and of the corresponding eigenvectors when A is a non-negative real square matrix. Early results were due to and concerned positive matrices. Later, found their extension to certain classes of non-negative matrices.
Positive matrices
Let A = (aij) be an n × n positive matrix: aij > 0 for 1 ≤ i, j ≤ n. Then the following statements hold.
There is a positive real number r, called the Perron root or the Perron–Frobenius eigenvalue (also called the leading eigenvalue, principal eigenvalue or dominant eigenvalue), such that r is an eigenvalue of A and any other eigenvalue λ (possibly complex) is strictly smaller than r in absolute value, |λ| < r. Thus, the spectral radius ρ(A) is equal to r. If the matrix coefficients are algebraic, this implies that the eigenvalue is a Perron number.
The Perron–Frobenius eigenvalue is simple: r is a simple root of the characteristic polynomial of A. Consequently, the eigenspace associated to r is one-dimensional. (The same is true for the left eigenspace, i.e., the eigenspace for AT, the transpose of A.)
There exists an eigenvector v = (v1,...,vn)T of A with eigenvalue r such that all components of v are positive: A v = r v, vi > 0 for 1 ≤ i ≤ n. (Respectively, there exists a positive left eigenvector w : wT A = wT r, wi > 0.) It is known in the literature under many variations as the Perron vector, Perron eigenvector, Perron-Frobenius eigenvector, leading eigenvector, principal eigenvector or dominant eigenvector.
There are no other positive (moreover non-negative) eigenvectors except positive multiples of v (respectively, left eigenvectors except positive multiples of w), i.e., all other eigenvectors must have at least one negative or non-real component.
limk→∞ Ak/rk = vwT, where the left and right eigenvectors for A are normalized so that wTv = 1. Moreover, the matrix vwT is the projection onto the eigenspace corresponding to r. This projection is called the Perron projection.
Collatz–Wielandt formula: for all non-negative non-zero vectors x, let f(x) be the minimum value of [Ax]i / xi taken over all those i such that xi ≠ 0. Then f is a real valued function whose maximum over all non-negative non-zero vectors x is the Perron–Frobenius eigenvalue.
A "Min-max" Collatz–Wielandt formula takes a form similar to the one above: for all strictly positive vectors x, let g(x) be the maximum value of [Ax]i / xi taken over i. Then g is a real valued function whose minimum over all strictly positive vectors x is the Perron–Frobenius eigenvalue.
Birkhoff–Varga formula: Let x and y be strictly positive vectors. Then,
Donsker–Varadhan–Friedland formula: Let p be a probability vector and x a strictly positive vector. Then (see Friedland, S., 1981, Convex spectral functions, Linear and Multilinear Algebra, 9(4), pp. 299–316).
Fiedler formula:
The Perron–Frobenius eigenvalue satisfies the inequalities

min_i Σ_j aij ≤ r ≤ max_i Σ_j aij,

that is, r lies between the smallest and the largest row sum of A.
All of these properties extend beyond strictly positive matrices to primitive matrices (see below). Facts 1–7 can be found in Meyer chapter 8 claims 8.2.11–15 page 667 and exercises 8.2.5,7,9 pages 668–669.
The left and right eigenvectors w and v are sometimes normalized so that the sum of their components is equal to 1; in this case, they are sometimes called stochastic eigenvectors. Often they are normalized so that the right eigenvector v sums to one, while wTv = 1.
Non-negative matrices
There is an extension to matrices with non-negative entries. Since any non-negative matrix can be obtained as a limit of positive matrices, one obtains the existence of an eigenvector with non-negative components; the corresponding eigenvalue will be non-negative and greater than or equal, in absolute value, to all other eigenvalues. However, for the example A = [[0, 1], [1, 0]], the maximum eigenvalue r = 1 has the same absolute value as the other eigenvalue −1; while for A = [[0, 1], [0, 0]], the maximum eigenvalue is r = 0, which is not a simple root of the characteristic polynomial, and the corresponding eigenvector (1, 0) is not strictly positive.
However, Frobenius found a special subclass of non-negative matrices — irreducible matrices — for which a non-trivial generalization is possible. For such a matrix, although the eigenvalues attaining the maximal absolute value might not be unique, their structure is under control: they have the form ωr, where r is a real strictly positive eigenvalue, and ω ranges over the complex h-th roots of 1 for some positive integer h called the period of the matrix.
The eigenvector corresponding to r has strictly positive components (in contrast with the general case of non-negative matrices, where components are only non-negative). Also all such eigenvalues are simple roots of the characteristic polynomial. Further properties are described below.
Classification of matrices
Let A be an n × n square matrix over a field F.
The matrix A is irreducible if any of the following equivalent properties holds.

Definition 1: A does not have non-trivial invariant coordinate subspaces. Here a non-trivial coordinate subspace means a linear subspace spanned by any proper subset of standard basis vectors of Fn. More explicitly, for any linear subspace spanned by standard basis vectors ei1, ..., eik, 0 < k < n, its image under the action of A is not contained in the same subspace.

Definition 2: A cannot be conjugated into block upper triangular form by a permutation matrix P; that is, there is no permutation matrix P such that

PAP−1 = [[E, F], [0, G]],

where E and G are non-trivial (i.e. of size greater than zero) square matrices.

Definition 3: One can associate with a matrix A a certain directed graph GA. It has n vertices labeled 1,...,n, and there is an edge from vertex i to vertex j precisely when aij ≠ 0. Then the matrix A is irreducible if and only if its associated graph GA is strongly connected.

If F is the field of real or complex numbers, then we also have the following condition.

Definition 4: The group representation of (R, +) on Rn, or of (C, +) on Cn, given by t ↦ exp(tA), has no non-trivial invariant coordinate subspaces. (By comparison, this would be an irreducible representation if there were no non-trivial invariant subspaces at all, not only considering coordinate subspaces.)
A matrix is reducible if it is not irreducible.
A real matrix A is primitive if it is non-negative and its mth power is positive for some natural number m (i.e. all entries of Am are positive).
Let A be real and non-negative. Fix an index i and define the period of index i to be the greatest common divisor of all natural numbers m such that (Am)ii > 0. When A is irreducible, the period of every index is the same and is called the period of A. In fact, when A is irreducible, the period can be defined as the greatest common divisor of the lengths of the closed directed paths in GA (see Kitchens page 16). The period is also called the index of imprimitivity (Meyer page 674) or the order of cyclicity. If the period is 1, A is aperiodic. It can be proved that primitive matrices are the same as irreducible aperiodic non-negative matrices.
All statements of the Perron–Frobenius theorem for positive matrices remain true for primitive matrices. The same statements also hold for a non-negative irreducible matrix, except that it may possess several eigenvalues whose absolute value is equal to its spectral radius, so the statements need to be correspondingly modified. In fact the number of such eigenvalues is equal to the period.
Results for non-negative matrices were first obtained by Frobenius in 1912.
Perron–Frobenius theorem for irreducible non-negative matrices
Let A be an irreducible non-negative matrix with period h and spectral radius ρ(A) = r.
Then the following statements hold.
The number r is a positive real number and it is an eigenvalue of the matrix A. It is called the Perron–Frobenius eigenvalue.
The Perron–Frobenius eigenvalue r is simple. Both right and left eigenspaces associated with r are one-dimensional.
A has both a right eigenvector v and a left eigenvector w, with eigenvalue r, whose components are all positive. Moreover, the only eigenvectors whose components are all positive are those associated with the eigenvalue r.
The matrix A has exactly h (where h is the period) complex eigenvalues with absolute value r. Each of them is a simple root of the characteristic polynomial and is the product of r with an h-th root of unity.
Let ω = 2π/h. Then the matrix A is similar to eiωA; consequently the spectrum of A is invariant under multiplication by eiω (i.e. under rotations of the complex plane by the angle ω).
If h > 1 then there exists a permutation matrix P such that

PAP−1 = [[0, A1, 0, ..., 0], [0, 0, A2, ..., 0], ..., [0, 0, 0, ..., Ah−1], [Ah, 0, 0, ..., 0]],

where 0 denotes a zero matrix and the blocks along the main diagonal are square zero matrices.
Collatz–Wielandt formula: for all non-negative non-zero vectors x, let f(x) be the minimum value of [Ax]i / xi taken over all those i such that xi ≠ 0. Then f is a real valued function whose maximum is the Perron–Frobenius eigenvalue r.
The Perron–Frobenius eigenvalue satisfies the inequalities

min_i Σ_j aij ≤ r ≤ max_i Σ_j aij.
Examples show that the (square) zero-matrices along the diagonal may be of different sizes, the blocks Aj need not be square, and h need not divide n.
Further properties
Let A be an irreducible non-negative matrix, then:
(I+A)n−1 is a positive matrix. (Meyer claim 8.3.5 p. 672). For a non-negative A, this is also a sufficient condition.
Wielandt's theorem. If |B| ≤ A, then ρ(B) ≤ ρ(A). If equality holds (i.e. if μ = ρ(A)eiφ is an eigenvalue of B), then B = eiφDAD−1 for some diagonal unitary matrix D (i.e. the diagonal elements of D equal eiΘl, and the off-diagonal elements are zero).
If some power Aq is reducible, then it is completely reducible, i.e. for some permutation matrix P, it is true that: , where Ai are irreducible matrices having the same maximal eigenvalue. The number of these matrices d is the greatest common divisor of q and h, where h is period of A.
If c(x) = xn + ck1 xn-k1 + ck2 xn-k2 + ... + cks xn-ks is the characteristic polynomial of A in which only the non-zero terms are listed, then the period of A equals the greatest common divisor of k1, k2, ... , ks.
Cesàro averages: limk→∞ (1/k) Σi=0,...,k−1 Ai/ri = vwT, where the left and right eigenvectors for A are normalized so that wTv = 1. Moreover, the matrix vwT is the spectral projection corresponding to r, the Perron projection.
Let r be the Perron–Frobenius eigenvalue, then the adjoint matrix for (r-A) is positive.
If A has at least one non-zero diagonal element, then A is primitive.
If 0 ≤ A < B, then rA ≤ rB. Moreover, if B is irreducible, then the inequality is strict: rA < rB.
A matrix A is primitive provided it is non-negative and Am is positive for some m, and hence Ak is positive for all k ≥ m. To check primitivity, one needs a bound on how large the minimal such m can be, depending on the size of A:
If A is a non-negative primitive matrix of size n, then A^(n²−2n+2) is positive. Moreover, this is the best possible result: for the Wielandt matrix M, which has ones on the superdiagonal and in positions (n, 1) and (n, 2) and zeros elsewhere, the power Mk is not positive for every k < n²−2n+2, since (M^(n²−2n+1))1,1 = 0.
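This criterion is easy to check numerically. The sketch below (an illustration, not part of the original article) tests primitivity through the Wielandt bound for the extremal matrix just described.

```python
import numpy as np

def is_primitive(A):
    """Non-negative A is primitive iff A^(n^2 - 2n + 2) is entrywise positive."""
    n = A.shape[0]
    return bool(np.all(np.linalg.matrix_power(A, n * n - 2 * n + 2) > 0))

n = 4
M = np.zeros((n, n))
for i in range(n - 1):            # ones on the superdiagonal ...
    M[i, i + 1] = 1.0
M[n - 1, 0] = M[n - 1, 1] = 1.0   # ... plus entries in positions (n, 1) and (n, 2)

k = n * n - 2 * n + 2             # 10 for n = 4
print(is_primitive(M))                                     # True
print(bool(np.all(np.linalg.matrix_power(M, k - 1) > 0)))  # False: the bound is attained
```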
Applications
Numerous books have been written on the subject of non-negative matrices, and Perron–Frobenius theory is invariably a central feature. The following examples given below only scratch the surface of its vast application domain.
Non-negative matrices
The Perron–Frobenius theorem does not apply directly to non-negative matrices. Nevertheless, any reducible square matrix A may be written in upper-triangular block form (known as the normal form of a reducible matrix)

PAP−1 = [[B1, *, ..., *], [0, B2, ..., *], ..., [0, 0, ..., Bh]]

where P is a permutation matrix and each Bi is a square matrix that is either irreducible or zero. Now if A is non-negative then so too is each block of PAP−1; moreover the spectrum of A is just the union of the spectra of the Bi.

The invertibility of A can also be studied. The inverse of PAP−1 (if it exists) must have diagonal blocks of the form Bi−1, so if any Bi isn't invertible then neither is PAP−1 or A. Conversely let D be the block-diagonal matrix corresponding to PAP−1, in other words PAP−1 with the asterisks zeroised. If each Bi is invertible then so is D, and D−1(PAP−1) is equal to the identity plus a nilpotent matrix. But such a matrix is always invertible (if Nk = 0 the inverse of 1 − N is 1 + N + N2 + ... + Nk−1), so PAP−1 and A are both invertible.

Therefore, many of the spectral properties of A may be deduced by applying the theorem to the irreducible Bi. For example, the Perron root is the maximum of the ρ(Bi). While there will still be eigenvectors with non-negative components, it is quite possible that none of these will be positive.
Stochastic matrices
A row (column) stochastic matrix is a square matrix each of whose rows (columns) consists of non-negative real numbers whose sum is unity. The theorem cannot be applied directly to such matrices because they need not be irreducible.
If A is row-stochastic then the column vector with each entry 1 is an eigenvector corresponding to the eigenvalue 1, which is also ρ(A) by the remark above. It might not be the only eigenvalue on the unit circle: and the associated eigenspace can be multi-dimensional. If A is row-stochastic and irreducible then the Perron projection is also row-stochastic and all its rows are equal.
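In the Markov-chain reading of this statement, the Perron root of an irreducible row-stochastic matrix is 1, the all-ones vector is a right eigenvector, and the positive left eigenvector, normalized to sum to 1, is the stationary distribution. A small numerical sketch with an arbitrary 3-state chain:

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

eigvals, eigvecs = np.linalg.eig(P.T)          # left eigenvectors of P
i = int(np.argmin(np.abs(eigvals - 1.0)))      # eigenvalue 1 = spectral radius
pi = np.real(eigvecs[:, i])
pi = pi / pi.sum()                             # stochastic left eigenvector

print("stationary distribution:", np.round(pi, 4))
print("pi P = pi:", np.allclose(pi @ P, pi))
print("right eigenvector of 1 is all-ones:", np.allclose(P @ np.ones(3), np.ones(3)))
```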
Algebraic graph theory
The theorem has particular use in algebraic graph theory. The "underlying graph" of a nonnegative n-square matrix is the graph with vertices numbered 1, ..., n and arc ij if and only if Aij ≠ 0. If the underlying graph of such a matrix is strongly connected, then the matrix is irreducible, and thus the theorem applies. In particular, the adjacency matrix of a strongly connected graph is irreducible.
Finite Markov chains
The theorem has a natural interpretation in the theory of finite Markov chains (where it is the matrix-theoretic equivalent of the convergence of an irreducible finite Markov chain to its stationary distribution, formulated in terms of the transition matrix of the chain; see, for example, the article on the subshift of finite type).
Compact operators
More generally, it can be extended to the case of non-negative compact operators, which, in many ways, resemble finite-dimensional matrices. These are commonly studied in physics, under the name of transfer operators, or sometimes Ruelle–Perron–Frobenius operators (after David Ruelle). In this case, the leading eigenvalue corresponds to the thermodynamic equilibrium of a dynamical system, and the lesser eigenvalues to the decay modes of a system that is not in equilibrium. Thus, the theory offers a way of discovering the arrow of time in what would otherwise appear to be reversible, deterministic dynamical processes, when examined from the point of view of point-set topology.
Proof methods
A common thread in many proofs is the Brouwer fixed point theorem. Another popular method is that of Wielandt (1950). He used the Collatz–Wielandt formula described above to extend and clarify Frobenius's work. Another proof is based on the spectral theory from which part of the arguments are borrowed.
Perron root is strictly maximal eigenvalue for positive (and primitive) matrices
If A is a positive (or more generally primitive) matrix, then there exists a real positive eigenvalue r (Perron–Frobenius eigenvalue or Perron root), which is strictly greater in absolute value than all other eigenvalues, hence r is the spectral radius of A.
This statement does not hold for general non-negative irreducible matrices, which have h eigenvalues with the same absolute eigenvalue as r, where h is the period of A.
Proof for positive matrices
Let A be a positive matrix, assume that its spectral radius ρ(A) = 1 (otherwise consider A/ρ(A)). Hence, there exists an eigenvalue λ on the unit circle, and all the other eigenvalues are less or equal 1 in absolute value. Suppose that another eigenvalue λ ≠ 1 also falls on the unit circle. Then there exists a positive integer m such that Am is a positive matrix and the real part of λm is negative. Let ε be half the smallest diagonal entry of Am and set T = Am − εI which is yet another positive matrix. Moreover, if Ax = λx then Amx = λmx thus λm − ε is an eigenvalue of T. Because of the choice of m this point lies outside the unit disk consequently ρ(T) > 1. On the other hand, all the entries in T are positive and less than or equal to those in Am so by Gelfand's formula ρ(T) ≤ ρ(Am) ≤ ρ(A)m = 1. This contradiction means that λ=1 and there can be no other eigenvalues on the unit circle.
Absolutely the same arguments can be applied to the case of primitive matrices; we just need to mention the following simple lemma, which clarifies the properties of primitive matrices.
Lemma
Given a non-negative A, assume there exists m, such that Am is positive, then Am+1, Am+2, Am+3,... are all positive.
Am+1 = AAm, so it can have zero element only if some row of A is entirely zero, but in this case the same row of Am will be zero.
Applying the same arguments as above to primitive matrices proves the main claim.
Power method and the positive eigenpair
For a positive (or more generally irreducible non-negative) matrix A the dominant eigenvector is real and strictly positive (for non-negative A respectively non-negative.)
This can be established using the power method, which states that for a sufficiently generic (in the sense below) matrix A the sequence of vectors bk+1 = Abk / | Abk | converges to the eigenvector with the maximum eigenvalue. (The initial vector b0 can be chosen arbitrarily except for some measure zero set). Starting with a non-negative vector b0 produces the sequence of non-negative vectors bk. Hence the limiting vector is also non-negative. By the power method this limiting vector is the dominant eigenvector for A, proving the assertion. The corresponding eigenvalue is non-negative.
The proof requires two additional arguments. First, the power method converges for matrices which do not have several eigenvalues of the same absolute value as the maximal one. The previous section's argument guarantees this.
Second, one must ensure strict positivity of all of the components of the eigenvector in the case of irreducible matrices. This follows from the following fact, which is of independent interest:
Lemma: given a positive (or more generally irreducible non-negative) matrix A and v as any non-negative eigenvector for A, then it is necessarily strictly positive and the corresponding eigenvalue is also strictly positive.
Proof. One of the definitions of irreducibility for non-negative matrices is that for all indexes i, j there exists m such that (Am)ij is strictly positive. Given a non-negative eigenvector v with at least one component, say the i-th, strictly positive, the corresponding eigenvalue is strictly positive: indeed, given n such that (An)ii > 0, one has rnvi = (Anv)i ≥ (An)iivi > 0, hence r is strictly positive. It remains to show that the eigenvector is strictly positive: given m such that (Am)ji > 0, one has rmvj = (Amv)j ≥ (Am)jivi > 0, hence vj is strictly positive, i.e., the eigenvector is strictly positive.
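A minimal numerical sketch of the power iteration described at the start of this section (the matrix is an arbitrary positive example, not from the source):

```python
import numpy as np

def perron_power_method(A, tol=1e-12, max_iter=10_000):
    """b_{k+1} = A b_k / |A b_k|, starting from a non-negative vector."""
    b = np.ones(A.shape[0])
    for _ in range(max_iter):
        Ab = A @ b
        b_next = Ab / np.linalg.norm(Ab)
        if np.linalg.norm(b_next - b) < tol:
            b = b_next
            break
        b = b_next
    r = float(b @ (A @ b))            # Rayleigh quotient of the unit-norm limit
    return r, b

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 4.0]])
r, v = perron_power_method(A)
print("Perron root:", round(r, 6), " max |eigenvalue|:",
      round(float(max(abs(np.linalg.eigvals(A)))), 6))
print("eigenvector strictly positive:", bool(np.all(v > 0)))
```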
Multiplicity one
This section proves that the Perron–Frobenius eigenvalue is a simple root of the characteristic polynomial of the matrix. Hence the eigenspace associated to Perron–Frobenius eigenvalue r is one-dimensional. The arguments here are close to those in Meyer.
Suppose there is a strictly positive eigenvector v corresponding to r and another eigenvector w with the same eigenvalue. (The vectors v and w can be chosen to be real, because A and r are both real, so the null space of A − rI has a basis consisting of real vectors.) Assume at least one of the components of w is positive (otherwise multiply w by −1). Take the maximal possible α such that u = v − αw is non-negative; then one of the components of u is zero, otherwise α would not be maximal. The vector u is an eigenvector. If it is non-zero, then by the lemma described in the previous section it must be strictly positive; on the other hand, as above, at least one component of u is zero. The contradiction implies that u = 0, i.e., that w is a multiple of v, so the eigenspace is one-dimensional.
Case: There are no Jordan blocks corresponding to the Perron–Frobenius eigenvalue r and all other eigenvalues which have the same absolute value.
If there is a Jordan block, then the infinity norm ‖(A/r)k‖∞ tends to infinity for k → ∞, but that contradicts the existence of the positive eigenvector.
Assume r = 1 (otherwise consider A/r). Letting v be a Perron–Frobenius strictly positive eigenvector, so Av = v, then:

vmin ‖Ak‖∞ ≤ ‖Akv‖∞ = ‖v‖∞, where vmin is the smallest component of v.

So ‖Ak‖∞ is bounded for all k. This gives another proof that there are no eigenvalues which have greater absolute value than the Perron–Frobenius one. It also contradicts the existence of the Jordan block for any eigenvalue which has absolute value equal to 1 (in particular for the Perron–Frobenius one), because existence of the Jordan block implies that ‖Ak‖∞ is unbounded. For a two-by-two Jordan block J with eigenvalue λ, Jk has entries λk on the diagonal and kλk−1 in the corner, hence ‖Jk‖∞ ≥ k|λ|k−1 = k (for |λ| = 1), so it tends to infinity as k does. Since Jk = C−1AkC, then ‖Ak‖∞ ≥ ‖Jk‖∞ / (‖C−1‖∞ ‖C‖∞), so it also tends to infinity. The resulting contradiction implies that there are no Jordan blocks for the corresponding eigenvalues.
Combining the two claims above reveals that the Perron–Frobenius eigenvalue r is simple root of the characteristic polynomial. In the case of nonprimitive matrices, there exist other eigenvalues which have the same absolute value as r. The same claim is true for them, but requires more work.
No other non-negative eigenvectors
Given positive (or more generally irreducible non-negative matrix) A, the Perron–Frobenius eigenvector is the only (up to multiplication by constant) non-negative eigenvector for A.
Other eigenvectors must contain negative or complex components since eigenvectors for different eigenvalues are orthogonal in some sense, but two positive eigenvectors cannot be orthogonal, so they must correspond to the same eigenvalue, but the eigenspace for the Perron–Frobenius is one-dimensional.
Assume there exists an eigenpair (λ, y) for A such that the vector y is positive, and let (r, x) be the left Perron–Frobenius eigenpair for A (i.e. x is an eigenvector for AT). Then rxTy = (xTA)y = xT(Ay) = λxTy; also xTy > 0, so one has r = λ. Since the eigenspace for the Perron–Frobenius eigenvalue r is one-dimensional, the non-negative eigenvector y is a multiple of the Perron–Frobenius one.
Collatz–Wielandt formula
Given a positive (or more generally an irreducible non-negative) matrix A, one defines
the function f on the set of all non-negative non-zero vectors x such that f(x) is the minimum value of [Ax]i / xi taken over all those i such that xi ≠ 0. Then f is a real-valued function, whose maximum is the Perron–Frobenius eigenvalue r.
For the proof we denote the maximum of f by the value R. The proof requires to show R = r. Inserting the Perron–Frobenius eigenvector v into f, we obtain f(v) = r and conclude r ≤ R. For the opposite inequality, we consider an arbitrary nonnegative vector x and let ξ = f(x). The definition of f gives 0 ≤ ξx ≤ Ax (componentwise). Now, we use the positive left eigenvector w for A for the Perron–Frobenius eigenvalue r (that is, the right eigenvector for AT); then ξwTx = wTξx ≤ wT(Ax) = (wTA)x = rwTx. Hence f(x) = ξ ≤ r, which implies R ≤ r.
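As an illustration of the formula just proved, the following Python sketch (the 3×3 matrix is an arbitrary positive example, not taken from the article) evaluates the Collatz–Wielandt function f on the Perron–Frobenius eigenvector and on random non-negative vectors, and checks that its maximum is the Perron–Frobenius eigenvalue r.

import numpy as np

# Illustrative numerical check of the Collatz-Wielandt formula.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 4.0]])

def f(x, A):
    """Collatz-Wielandt function: min over i with x_i != 0 of (Ax)_i / x_i."""
    x = np.asarray(x, dtype=float)
    Ax = A @ x
    mask = x != 0
    return np.min(Ax[mask] / x[mask])

# Perron-Frobenius eigenvalue r and eigenvector v via numpy's eigensolver.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
r = eigvals[k].real
v = np.abs(eigvecs[:, k].real)   # the PF eigenvector can be taken positive

print("r =", r)
print("f(v) =", f(v, A))         # equals r up to rounding

# Random non-negative vectors never exceed r.
rng = np.random.default_rng(0)
print(max(f(rng.random(3), A) for _ in range(1000)) <= r + 1e-9)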
Perron projection as a limit: Ak/rk
Let A be a positive (or more generally, primitive) matrix, and let r be its Perron–Frobenius eigenvalue.
1. There exists a limit Ak/rk for k → ∞; denote it by P.
2. P is a projection operator: P2 = P, which commutes with A: AP = PA.
3. The image of P is one-dimensional and spanned by the Perron–Frobenius eigenvector v (respectively for PT—by the Perron–Frobenius eigenvector w for AT).
4. P = vwT, where v, w are normalized such that wT v = 1.
5. Hence P is a positive operator.
Hence P is a spectral projection for the Perron–Frobenius eigenvalue r, and is called the Perron projection. The above assertion is not true for general non-negative irreducible matrices.
Actually the claims above (except claim 5) are valid for any matrix M such that there exists an eigenvalue r which is strictly greater than the other eigenvalues in absolute value and is the simple root of the characteristic polynomial. (These requirements hold for primitive matrices as above).
If M is diagonalizable, then M is conjugate to a diagonal matrix with eigenvalues r1, ... , rn on the diagonal (denote r1 = r). The matrix Mk/rk will be conjugate to the diagonal matrix diag(1, (r2/r)k, ... , (rn/r)k), which tends to diag(1, 0, 0, ..., 0) for k → ∞, so the limit exists. The same method works for general M (without assuming that M is diagonalizable).
The projection and commutativity properties are elementary corollaries of the definition: MMk/rk = Mk/rk M ; P2 = lim M2k/r2k = P. The third fact is also elementary: M(Pu) = M lim Mk/rk u = lim rMk+1/rk+1u, so taking the limit yields M(Pu) = r(Pu), so image of P lies in the r-eigenspace for M, which is one-dimensional by the assumptions.
Denote by v the r-eigenvector for M (and by w the one for MT). The columns of P are multiples of v, because the image of P is spanned by it; respectively, the rows of P are multiples of wT. So P takes the form avwT for some a, and hence its trace equals a(wTv). The trace of a projector equals the dimension of its image, and from the definition one sees that P acts identically on the r-eigenvector for M, so the image is exactly one-dimensional and the trace equals 1. Therefore a(wTv) = 1, and choosing the normalization wTv = 1 implies P = vwT.
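The claims above can also be checked numerically. The following Python sketch (the example matrix and the choice of 100 iterations are illustrative assumptions) compares (A/r)k for large k with the rank-one projection vwT built from the right and left Perron–Frobenius eigenvectors, normalized so that wTv = 1.

import numpy as np

# Numerical illustration of the Perron projection as a limit.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

eigvals, right = np.linalg.eig(A)
i = np.argmax(eigvals.real)
r = eigvals[i].real
v = np.abs(right[:, i].real)              # right Perron-Frobenius eigenvector

eigvalsT, left = np.linalg.eig(A.T)
w = np.abs(left[:, np.argmax(eigvalsT.real)].real)   # left eigenvector
w = w / (w @ v)                           # normalize so that w^T v = 1

P = np.outer(v, w)                        # claim 4: P = v w^T
limit = np.linalg.matrix_power(A / r, 100)

print(np.allclose(limit, P))                               # claim 1: A^k/r^k -> P
print(np.allclose(P @ P, P), np.allclose(A @ P, P @ A))    # claim 2: projection, commutes with A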
Inequalities for Perron–Frobenius eigenvalue
For any non-negative matrix A its Perron–Frobenius eigenvalue r satisfies the inequality:
r ≤ maxi Σj aij.
This is not specific to non-negative matrices: for any matrix A with an eigenvalue λ it is true that |λ| ≤ maxi Σj |aij|. This is an immediate corollary of the Gershgorin circle theorem. However another proof is more direct:
Any matrix induced norm satisfies the inequality ‖A‖ ≥ |λ| for any eigenvalue λ, because, if x is a corresponding eigenvector, ‖A‖ ≥ ‖Ax‖/‖x‖ = |λ|. The infinity norm of a matrix is the maximum of row sums: ‖A‖∞ = maxi Σj |aij|. Hence the desired inequality is exactly ‖A‖∞ ≥ |λ| applied to the non-negative matrix A.
Another inequality is:
mini Σj aij ≤ r.
This fact is specific to non-negative matrices; for general matrices there is nothing similar. Given that A is positive (not just non-negative), then there exists a positive eigenvector w such that Aw = rw and the smallest component of w (say wi) is 1. Then r = (Aw)i ≥ the sum of the numbers in row i of A, which is at least the minimum row sum. Thus the minimum row sum gives a lower bound for r and this observation can be extended to all non-negative matrices by continuity.
Another way to argue it is via the Collatz–Wielandt formula: one takes the vector x = (1, 1, ..., 1) and immediately obtains the inequality.
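A minimal numerical check of the two row-sum bounds, with an arbitrary irreducible non-negative example matrix (not taken from the text):

import numpy as np

# Check that the Perron root lies between the minimum and maximum row sum.
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 1.0, 3.0],
              [0.0, 1.0, 1.0]])

r = max(np.linalg.eigvals(A).real)        # Perron-Frobenius eigenvalue
row_sums = A.sum(axis=1)

print(row_sums.min() <= r <= row_sums.max())   # True
print(row_sums.min(), r, row_sums.max())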
Further proofs
Perron projection
The proof now proceeds using spectral decomposition. The trick here is to split the Perron root from the other eigenvalues. The spectral projection associated with the Perron root is called the Perron projection and it enjoys the following property:
The Perron projection of an irreducible non-negative square matrix is a positive matrix.
Perron's findings and also (1)–(5) of the theorem are corollaries of this result. The key point is that a positive projection always has rank one. This means that if A is an irreducible non-negative square matrix then the algebraic and geometric multiplicities of its Perron root are both one. Also if P is its Perron projection then AP = PA = ρ(A)P so every column of P is a positive right eigenvector of A and every row is a positive left eigenvector. Moreover, if Ax = λx then PAx = λPx = ρ(A)Px which means Px = 0 if λ ≠ ρ(A). Thus the only positive eigenvectors are those associated with ρ(A). If A is a primitive matrix with ρ(A) = 1 then it can be decomposed as P ⊕ (1 − P)A so that An = P + (1 − P)An. As n increases the second of these terms decays to zero leaving P as the limit of An as n → ∞.
The power method is a convenient way to compute the Perron projection of a primitive matrix. If v and w are the positive row and column vectors that it generates then the Perron projection is just wv/vw. The spectral projections aren't neatly blocked as in the Jordan form. Here they are overlaid and each generally has complex entries extending to all four corners of the square matrix. Nevertheless, they retain their mutual orthogonality which is what facilitates the decomposition.
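A minimal sketch of this power-method construction, assuming a primitive example matrix (here a row-stochastic one, so that ρ(A) = 1 and the powers of A themselves converge to the projection); the matrix and iteration count are illustrative assumptions.

import numpy as np

# Power method for the Perron projection: iterate x -> Ax and normalize to get
# the positive column vector w; do the same with A^T for the row vector v;
# then the Perron projection is wv / (vw).
A = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])

def power_vector(M, iters=500):
    x = np.ones(M.shape[0])
    for _ in range(iters):
        x = M @ x
        x /= np.linalg.norm(x)
    return x

w = power_vector(A)        # positive column eigenvector
v = power_vector(A.T)      # positive row eigenvector (stored as a 1-D array)

P = np.outer(w, v) / (v @ w)          # Perron projection wv / vw
print(np.allclose(P, np.linalg.matrix_power(A, 200)))  # True here, since rho(A) = 1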
Peripheral projection
The analysis when A is irreducible and non-negative is broadly similar. The Perron projection is still positive but there may now be other eigenvalues of modulus ρ(A) that negate use of the power method and prevent the powers of (1 − P)A decaying as in the primitive case whenever ρ(A) = 1. So we consider the peripheral projection, which is the spectral projection of A corresponding to all the eigenvalues that have modulus ρ(A). It may then be shown that the peripheral projection of an irreducible non-negative square matrix is a non-negative matrix with a positive diagonal.
Cyclicity
Suppose in addition that ρ(A) = 1 and A has h eigenvalues on the unit circle. If P is the peripheral projection then the matrix R = AP = PA is non-negative and irreducible, Rh = P, and the cyclic group P, R, R2, ...., Rh−1 represents the harmonics of A. The spectral projection of A at the eigenvalue λ on the unit circle is given by the formula (1/h) Σk=1..h λ−kRk. All of these projections (including the Perron projection) have the same positive diagonal, moreover choosing any one of them and then taking the modulus of every entry invariably yields the Perron projection. Some donkey work is still needed in order to establish the cyclic properties (6)–(8) but it's essentially just a matter of turning the handle. The spectral decomposition of A is given by A = R ⊕ (1 − P)A so the difference between An and Rn is An − Rn = (1 − P)An representing the transients of An which eventually decay to zero. P may be computed as the limit of Anh as n → ∞.
Counterexamples
The matrices L = , P = , T = , M = provide simple examples of what can go wrong if the necessary conditions are not met. It is easily seen that the Perron and peripheral projections of L are both equal to P, thus when the original matrix is reducible the projections may lose non-negativity and there is no chance of expressing them as limits of its powers. The matrix T is an example of a primitive matrix with zero diagonal. If the diagonal of an irreducible non-negative square matrix is non-zero then the matrix must be primitive but this example demonstrates that the converse is false. M is an example of a matrix with several missing spectral teeth. If ω = eiπ/3 then ω6 = 1 and the eigenvalues of M are {1,ω2,ω3=-1,ω4} with a dimension 2 eigenspace for +1 so ω and ω5 are both absent. More precisely, since M is block-diagonal cyclic, then the eigenvalues are {1,-1} for the first block, and {1,ω2,ω4} for the lower one
Terminology
A problem that causes confusion is a lack of standardisation in the definitions. For example, some authors use the terms strictly positive and positive to mean > 0 and ≥ 0 respectively. In this article positive means > 0 and non-negative means ≥ 0. Another vexed area concerns decomposability and reducibility: irreducible is an overloaded term. For avoidance of doubt a non-zero non-negative square matrix A such that 1 + A is primitive is sometimes said to be connected. Then irreducible non-negative square matrices and connected matrices are synonymous.
The nonnegative eigenvector is often normalized so that the sum of its components is equal to unity; in this case, the eigenvector is the vector of a probability distribution and is sometimes called a stochastic eigenvector. Perron–Frobenius eigenvalue and dominant eigenvalue are alternative names for the Perron root. Spectral projections are also known as spectral projectors and spectral idempotents. The period is sometimes referred to as the index of imprimitivity or the order of cyclicity.
See also
Metzler matrix (Quasipositive matrix)
Notes
References
(1959 edition had different title: "Applications of the theory of matrices". Also the numeration of chapters is different in the two editions.)
Further reading
Abraham Berman, Robert J. Plemmons, Nonnegative Matrices in the Mathematical Sciences, 1994, SIAM. .
Chris Godsil and Gordon Royle, Algebraic Graph Theory, Springer, 2001.
A. Graham, Nonnegative Matrices and Applicable Topics in Linear Algebra, John Wiley&Sons, New York, 1987.
R. A. Horn and C.R. Johnson, Matrix Analysis, Cambridge University Press, 1990
Bas Lemmens and Roger Nussbaum, Nonlinear Perron-Frobenius Theory, Cambridge Tracts in Mathematics 189, Cambridge Univ. Press, 2012.
S. P. Meyn and R. L. Tweedie, Markov Chains and Stochastic Stability London: Springer-Verlag, 1993. (2nd edition, Cambridge University Press, 2009)
Seneta, E. Non-negative matrices and Markov chains. 2nd rev. ed., 1981, XVI, 288 p., Softcover Springer Series in Statistics. (Originally published by Allen & Unwin Ltd., London, 1973)
(The claim that Aj has order n/h at the end of the statement of the theorem is incorrect.)
Matrix theory
Theorems in linear algebra
Markov processes | Perron–Frobenius theorem | [
"Mathematics"
] | 8,520 | [
"Theorems in algebra",
"Theorems in linear algebra"
] |
1,540,704 | https://en.wikipedia.org/wiki/Equation%20of%20state%20%28cosmology%29 | In cosmology, the equation of state of a perfect fluid is characterized by a dimensionless number w, equal to the ratio of its pressure p to its energy density ρ: w = p/ρ.
It is closely related to the thermodynamic equation of state and ideal gas law.
The equation
The perfect gas equation of state may be written as p = ρmRT = ρmC^2, where ρm is the mass density, R is the particular gas constant, T is the temperature and C = (RT)^(1/2) is a characteristic thermal speed of the molecules. Thus w = p/ρ ≈ C^2/c^2, where c is the speed of light and the energy density is dominated by the rest mass (ρ ≈ ρmc^2), so w ≈ 0 for a "cold" gas.
FLRW equations and the equation of state
The equation of state may be used in Friedmann–Lemaître–Robertson–Walker (FLRW) equations to describe the evolution of an isotropic universe filled with a perfect fluid. If a is the scale factor then the energy density of the fluid evolves as ρ ∝ a^(−3(1 + w)). If the fluid is the dominant form of matter in a flat universe, then a ∝ t^(2/(3(1 + w))), where t is the proper time.
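A small Python sketch (the values of w and of the scale factor are illustrative assumptions, not figures from the article) evaluating these two scaling relations for matter and radiation:

import numpy as np

# Sketch: how the energy density rho ~ a^(-3(1+w)) and the scale factor
# a ~ t^(2/(3(1+w))) depend on the equation of state w.
cases = {"matter (w=0)": 0.0, "radiation (w=1/3)": 1.0 / 3.0}

a = np.array([0.5, 1.0, 2.0])          # scale factor relative to today
for name, w in cases.items():
    rho = a ** (-3.0 * (1.0 + w))      # energy density relative to a = 1
    n = 2.0 / (3.0 * (1.0 + w))        # growth exponent: a ~ t^n
    print(name, "rho(a) =", rho, " a ~ t^%.2f" % n)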
In general the Friedmann acceleration equation is 3(ä/a) = Λ − 4πG(ρ + 3p), where Λ is the cosmological constant, G is Newton's constant, and ä is the second proper time derivative of the scale factor a. If we define (what might be called "effective") energy density and pressure as ρ′ ≡ ρ + Λ/(8πG) and p′ ≡ p − Λ/(8πG), the acceleration equation may be written as 3(ä/a) = −4πG(ρ′ + 3p′).
Non-relativistic particles
The equation of state for ordinary non-relativistic 'matter' (e.g. cold dust) is w = 0, which means that its energy density decreases as ρ ∝ a^(−3) = V^(−1), where V is a volume. In an expanding universe, the total energy of non-relativistic matter remains constant, with its density decreasing as the volume increases.
Ultra-relativistic particles
The equation of state for ultra-relativistic 'radiation' (including neutrinos, and in the very early universe other particles that later became non-relativistic) is w = 1/3, which means that its energy density decreases as ρ ∝ a^(−4). In an expanding universe, the energy density of radiation decreases more quickly than the volume expansion, because its wavelength is red-shifted.
Acceleration of cosmic inflation
Cosmic inflation and the accelerated expansion of the universe can be characterized by the equation of state of dark energy. In the simplest case, the equation of state of the cosmological constant is w = −1. In this case, the above expression for the scale factor is not valid and a ∝ e^(Ht), where the constant H is the Hubble parameter. More generally, the expansion of the universe is accelerating for any equation of state w < −1/3. The accelerated expansion of the Universe was indeed observed. According to observations, the value of the equation of state of the cosmological constant is near −1.
Hypothetical phantom energy would have an equation of state w < −1, and would cause a Big Rip. Using the existing data, it is still impossible to distinguish between phantom (w < −1) and non-phantom (w ≥ −1).
Fluids
In an expanding universe, fluids with larger equations of state disappear more quickly than those with smaller equations of state. This is the origin of the flatness and monopole problems of the Big Bang: curvature has w = −1/3 and monopoles have w = 0, so if they were around at the time of the early Big Bang, they should still be visible today. These problems are solved by cosmic inflation, which has w ≈ −1. Measuring the equation of state of dark energy is one of the largest efforts of observational cosmology. By accurately measuring w, it is hoped that the cosmological constant (w = −1) could be distinguished from quintessence, which has w ≠ −1.
Scalar modeling
A scalar field φ can be viewed as a sort of perfect fluid with equation of state
w = (φ̇^2/2 − V(φ)) / (φ̇^2/2 + V(φ)), where φ̇ is the time-derivative of φ and V(φ) is the potential energy. A free (V = 0) scalar field has w = 1, and one with vanishing kinetic energy is equivalent to a cosmological constant: w = −1. Any equation of state in between, but not crossing the w = −1 barrier known as the Phantom Divide Line (PDL), is achievable, which makes scalar fields useful models for many phenomena in cosmology.
Table
Different kinds of energy have different scaling properties.
Notes
Physical cosmology
Equations of state | Equation of state (cosmology) | [
"Physics",
"Astronomy"
] | 791 | [
"Astronomical sub-disciplines",
"Equations of physics",
"Theoretical physics",
"Astrophysics",
"Statistical mechanics",
"Equations of state",
"Physical cosmology"
] |
1,540,711 | https://en.wikipedia.org/wiki/Phantom%20energy | Phantom energy is a hypothetical form of dark energy satisfying the equation of state w = p/ρ with w < −1. It possesses negative kinetic energy, and predicts expansion of the universe in excess of that predicted by a cosmological constant, which leads to a Big Rip. The idea of phantom energy is often dismissed, as it would suggest that the vacuum is unstable with negative mass particles bursting into existence. The concept is hence tied to emerging theories of a continuously created negative mass dark fluid, in which the cosmological constant can vary as a function of time.
Big Rip mechanism
The existence of phantom energy could cause the expansion of the universe to accelerate so quickly that a scenario known as the Big Rip, a possible end to the universe, occurs. The scale factor diverges in finite time, so the expansion accelerates without bound. The recession speed of distant objects eventually exceeds the speed of light (since this involves expansion of the universe itself, not particles moving within it), causing more and more objects to leave our observable universe, as light and information emitted from distant stars and other cosmic sources can no longer "catch up" with the expansion. As this continues, objects will be unable to interact with each other via fundamental forces, and eventually the expansion will prevent any action of forces between any particles, even within atoms, "ripping apart" the universe and making distances between individual particles infinite.
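For a rough sense of the time scales involved, the following back-of-envelope sketch uses the approximate Big Rip time quoted by Caldwell, Kamionkowski and Weinberg (2003), t_rip − t_0 ≈ (2/3)/(|1+w| H0 √(1−Ωm)); the parameter values w = −1.5, H0 = 70 km/s/Mpc and Ωm = 0.3 are assumptions chosen for illustration, not figures from this article.

import math

# Back-of-envelope estimate of the time remaining until a Big Rip.
w = -1.5
H0_km_s_Mpc = 70.0
Omega_m = 0.3

Mpc_km = 3.0857e19                  # kilometres in a megaparsec
H0 = H0_km_s_Mpc / Mpc_km           # H0 in 1/s
Gyr = 3.156e16                      # seconds in a gigayear

t_rip = (2.0 / 3.0) / (abs(1.0 + w) * H0 * math.sqrt(1.0 - Omega_m))
print("time to Big Rip ~ %.0f Gyr" % (t_rip / Gyr))   # roughly 22 Gyr for these inputs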
One application of phantom energy in 2007 was to a cyclic model of the universe, which reverses its expansion extremely shortly before the would-be Big Rip. In a more elaborate variant of this cyclic model, the mass–energy at every point of the universe becomes dense enough to collapse into black-hole-like matter, which bounces after reaching a maximum threshold of compression and causes the next big bang (the overall scenario is considered highly unlikely).
See also
Quintessence (physics)
References
Further reading
Robert R. Caldwell et al.: Phantom Energy and Cosmic Doomsday
Dark energy
Physical cosmological concepts | Phantom energy | [
"Physics",
"Astronomy"
] | 398 | [
"Physical cosmological concepts",
"Unsolved problems in astronomy",
"Physical quantities",
"Concepts in astrophysics",
"Concepts in astronomy",
"Unsolved problems in physics",
"Energy (physics)",
"Dark energy",
"Wikipedia categories named after physical quantities"
] |
1,541,301 | https://en.wikipedia.org/wiki/Lugol%27s%20iodine | Lugol's iodine, also known as aqueous iodine and strong iodine solution, is a solution of potassium iodide with iodine in water. It is a medication and disinfectant used for a number of purposes. Taken by mouth it is used to treat thyrotoxicosis until surgery can be carried out, protect the thyroid gland from radioactive iodine, and to treat iodine deficiency. When applied to the cervix it is used to help in screening for cervical cancer. As a disinfectant it may be applied to small wounds such as a needle stick injury. A small amount may also be used for emergency disinfection of drinking water.
Side effects may include allergic reactions, headache, vomiting, and conjunctivitis. Long term use may result in trouble sleeping and depression. It should not typically be used during pregnancy or breastfeeding. Lugol's iodine is a liquid made up of two parts potassium iodide for every one part elemental iodine in water.
Lugol's iodine was first made in 1829 by the French physician Jean Lugol. It is on the World Health Organization's List of Essential Medicines. Lugol's iodine is available as a generic medication and over the counter. Lugol's solution is available in different strengths of iodine. Large volumes of concentrations more than 2.2% may be subject to regulation.
Uses
Medical uses
Preoperative administration of Lugol's solution decreases intraoperative blood loss during thyroidectomy in patients with Graves' disease. However, it appears ineffective in patients who are already euthyroid on anti-thyroid drugs and levothyroxine.
During colposcopy, Lugol's iodine is applied to the vagina and cervix. Normal vaginal tissue stains brown due to its high glycogen content, while tissue suspicious for cancer does not stain, and thus appears pale compared to the surrounding tissue. Biopsy of suspicious tissue can then be performed. This is called a Schiller's test.
Patients at high risk of oesophageal squamous cell carcinoma are usually followed using a combination of Lugol's chromoendoscopy and narrow-band imaging. With Lugol's iodine, low-grade dysplasia appears as an unstained or weakly stained area; high-grade dysplasia is consistently unstained.
Lugol's iodine may also be used to better visualize the mucogingival junction in the mouth. Similar to the method of staining mentioned above regarding a colposcopy, alveolar mucosa has a high glycogen content that gives a positive iodine reaction vs. the keratinized gingiva.
Lugol's iodine may also be used as an oxidizing germicide, however it is somewhat undesirable in that it may lead to scarring and discolors the skin temporarily. One way to avoid this problem is by using a solution of 70% ethanol to wash off the iodine later.
Lugol's iodine was distributed in the Polish People's Republic after the Chernobyl catastrophe: the government had not been informed of how severe the event was and therefore overestimated the radiation danger, and iodine tablets were unavailable, so Lugol's solution was used instead.
Science
As a mordant when performing a Gram stain. It is applied for 1 minute after staining with crystal violet, but before ethanol to ensure that gram positive organisms' peptidoglycan remains stained, easily identifying it as a gram positive in microscopy.
This solution is used as an indicator test for the presence of starches in organic compounds, with which it reacts by turning a dark-blue/black. Elemental iodine solutions like Lugol's will stain starches due to iodine's interaction with the coil structure of the polysaccharide. Starches include the plant starches amylose and amylopectin and glycogen in animal cells. Lugol's solution will not detect simple sugars such as glucose or fructose. In the pathologic condition amyloidosis, amyloid deposits (i.e., deposits that stain like starch, but are not) can be so abundant that affected organs will also stain grossly positive for the Lugol reaction for starch.
It can be used as a cell stain, making the cell nuclei more visible and for preserving phytoplankton samples.
Lugol's solution can also be used in various experiments to observe how a cell membrane uses osmosis and diffusion.
Lugol's solution is also used in the marine aquarium industry. Lugol's solution provides a strong source of free iodine and iodide to reef inhabitants and macroalgae. Although the solution is thought to be effective when used with stony corals, systems containing xenia and soft corals are assumed to be particularly benefited by the use of Lugol's solution. Used as a dip for stony and soft or leather corals, Lugol's may help rid the animals of unwanted parasites and harmful bacteria. The solution is thought to foster improved coloration and possibly prevent bleaching of corals due to changes in light intensity, and to enhance coral polyp expansion. The blue colors of Acropora spp. are thought to be intensified by the use of potassium iodide. Specially packaged supplements of the product intended for aquarium use can be purchased at specialty stores and online.
Outdated uses
Up until early 1970s, it was often recommended for use in victims of rape in order to avoid pregnancy. The idea stemmed from the fact that, in the laboratory, Lugol's iodine appeared to kill sperm cells even in such great dilutions as 1:32. Thus it was thought that an intrauterine application of Lugol's iodine, immediately after the event, would help avoid pregnancy.
Side effects
Because it contains free iodine, Lugol's solution at 2% or 5% concentration without dilution is irritating and destructive to mucosa, such as the lining of the esophagus and stomach. Doses of 10 mL of undiluted 5% solution have been reported to cause gastric lesions when used in endoscopy. The LD50 for 5% Iodine is 14,000 mg/kg (14 g/kg) in rats, and 22,000 mg/kg (22 g/kg) in mice.
The World Health Organization classifies substances taken orally with an LD50 of 5–50 mg/kg as the second highest toxicity class, Class Ib (Highly Hazardous). The Global Harmonized System of Classification and Labeling of Chemicals categorizes this as Category 2 with a hazard statement "Fatal if swallowed". Potassium iodide is not considered hazardous.
Mechanism of action
The above uses and effects are consequences of the fact that the solution is a source of effectively free elemental iodine, which is readily generated from the equilibrium between elemental iodine molecules and polyiodide ions in the solution.
History
It was historically used as a first-line treatment for hyperthyroidism, as the administration of pharmacologic amounts of iodine leads to temporary inhibition of iodine organification in the thyroid gland, caused by phenomena including the Wolff–Chaikoff effect and the Plummer effect. However, it is not used to treat certain autoimmune causes of thyroid disease, as iodine-induced blockade of iodine organification may result in hypothyroidism. It is no longer considered a first-line therapy because of possible induction of resistant hyperthyroidism, but may be considered as an adjuvant therapy when used together with other hyperthyroidism medications.
Lugol's iodine has been used traditionally to replenish iodine deficiency. Because of its wide availability as a drinking-water decontaminant, and high content of potassium iodide, emergency use of it was at first recommended to the Polish government in 1986, after the Chernobyl disaster, to replace and block any intake of radioactive iodine-131 (131I), even though it was known to be a non-optimal agent, due to its somewhat toxic free-iodine content. Other sources state that pure potassium iodide solution in water (SSKI) was eventually used for most of the thyroid protection after this accident. There is "strong scientific evidence" for potassium iodide thyroid protection to help prevent thyroid cancer. Potassium iodide does not provide immediate protection but can be a component of a general strategy in a radiation emergency.
Historically, Lugol's iodine solution has been widely available and used for a number of health problems with some precautions. Lugol's is sometimes prescribed in a variety of alternative medical treatments. Only since the end of the Cold War has the compound become subject to national regulation in the English-speaking world.
Society and culture
Regulation
Until 2007, in the United States, Lugol's solution was unregulated and available over the counter as a general reagent, an antiseptic, a preservative, or as a medicament for human or veterinary application.
Since 1 August 2007, the DEA regulates all iodine solutions containing greater than 2.2% elemental iodine as a List I precursor because they may potentially be used in the illicit production of methamphetamine. Transactions of up to one fluid ounce (30 ml) of Lugol's solution are exempt from this regulation.
Formula and manufacture
Lugol's solution is commonly available in different potencies of (nominal) 1%, 2%, 5% or 10%. Iodine concentrations greater than 2.2% are subject to US regulations. If the US regulations are taken literally, their 2.2% maximum iodine concentration limits a Lugol's solution to maximum (nominal) 0.87%.
The most commonly used (nominal) 5% solution consists of 5% (wt/v) iodine (I2) and 10% (wt/v) potassium iodide (KI) mixed in distilled water and has a total iodine content of 126.4 mg/mL. The (nominal) 5% solution thus has a total iodine content of 6.32 mg per drop of 0.05 mL; the (nominal) 2% solution has 2.53 mg total iodine content per drop.
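A quick arithmetic check of these figures, using standard atomic masses (the script itself is only illustrative):

# Total iodine content of the nominal 5% solution: 5 g I2 and 10 g KI per 100 mL,
# with the iodine fraction of KI computed from atomic masses (I = 126.90, K = 39.10).
m_I2_per_mL = 50.0          # mg of elemental iodine per mL (5% wt/v)
m_KI_per_mL = 100.0         # mg of potassium iodide per mL (10% wt/v)
iodine_fraction_KI = 126.90 / (126.90 + 39.10)

total_iodine = m_I2_per_mL + m_KI_per_mL * iodine_fraction_KI
print("total iodine content: %.1f mg/mL" % total_iodine)        # ~126.4
print("per 0.05 mL drop: %.2f mg" % (total_iodine * 0.05))      # ~6.32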
Potassium iodide renders the elementary iodine soluble in water through the formation of the triiodide (I3−) ion. It is not to be confused with tincture of iodine solutions, which consist of elemental iodine and iodide salts dissolved in water and alcohol. Lugol's solution contains no alcohol.
Other names for Lugol's solution are (iodine-potassium iodide); Markodine, Strong solution (Systemic); and Aqueous Iodine Solution BP.
Economics
In the United Kingdom, in 2015, the NHS paid £9.57 per 500 ml of solution.
References
Antiseptics
Chemical tests
Disinfectants
Iodine
Staining dyes
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Lugol's iodine | [
"Chemistry"
] | 2,295 | [
"Chemical tests"
] |
1,542,238 | https://en.wikipedia.org/wiki/Smooth%20structure | In mathematics, a smooth structure on a manifold allows for an unambiguous notion of smooth function. In particular, a smooth structure allows mathematical analysis to be performed on the manifold.
Definition
A smooth structure on a manifold is a collection of smoothly equivalent smooth atlases. Here, a smooth atlas for a topological manifold is an atlas for such that each transition function is a smooth map, and two smooth atlases for are smoothly equivalent provided their union is again a smooth atlas for This gives a natural equivalence relation on the set of smooth atlases.
A smooth manifold is a topological manifold together with a smooth structure on
Maximal smooth atlases
By taking the union of all atlases belonging to a smooth structure, we obtain a maximal smooth atlas. This atlas contains every chart that is compatible with the smooth structure. There is a natural one-to-one correspondence between smooth structures and maximal smooth atlases.
Thus, we may regard a smooth structure as a maximal smooth atlas and vice versa.
In general, computations with the maximal atlas of a manifold are rather unwieldy. For most applications, it suffices to choose a smaller atlas.
For example, if the manifold is compact, then one can find an atlas with only finitely many charts.
Equivalence of smooth structures
If and are two maximal atlases on the two smooth structures associated to and are said to be equivalent if there is a diffeomorphism such that
Exotic spheres
John Milnor showed in 1956 that the 7-dimensional sphere admits a smooth structure that is not equivalent to the standard smooth structure. A sphere equipped with a nonstandard smooth structure is called an exotic sphere.
E8 manifold
The E8 manifold is an example of a topological manifold that does not admit a smooth structure. This essentially demonstrates that Rokhlin's theorem holds only for smooth structures, and not topological manifolds in general.
Related structures
The smoothness requirements on the transition functions can be weakened, so that the transition maps are only required to be -times continuously differentiable; or strengthened, so that the transition maps are required to be real-analytic. Accordingly, this gives a or (real-)analytic structure on the manifold rather than a smooth one. Similarly, a complex structure can be defined by requiring the transition maps to be holomorphic.
See also
References
Differential topology
Structures on manifolds | Smooth structure | [
"Mathematics"
] | 474 | [
"Topology",
"Differential topology"
] |
1,543,358 | https://en.wikipedia.org/wiki/Darboux%27s%20theorem | In differential geometry, a field in mathematics, Darboux's theorem is a theorem providing a normal form for special classes of differential 1-forms, partially generalizing the Frobenius integration theorem. It is named after Jean Gaston Darboux who established it as the solution of the Pfaff problem.
It is a foundational result in several fields, the chief among them being symplectic geometry. Indeed, one of its many consequences is that any two symplectic manifolds of the same dimension are locally symplectomorphic to one another. That is, every -dimensional symplectic manifold can be made to look locally like the linear symplectic space with its canonical symplectic form.
There is also an analogous consequence of the theorem applied to contact geometry.
Statement
Suppose that is a differential 1-form on an -dimensional manifold, such that has constant rank . Then
if everywhere, then there is a local system of coordinates in which
if everywhere, then there is a local system of coordinates in which
Darboux's original proof used induction on and it can be equivalently presented in terms of distributions or of differential ideals.
Frobenius' theorem
Darboux's theorem for ensures that any 1-form such that can be written as in some coordinate system .
This recovers one of the formulation of Frobenius theorem in terms of differential forms: if is the differential ideal generated by , then implies the existence of a coordinate system where is actually generated by .
Darboux's theorem for symplectic manifolds
Suppose that is a symplectic 2-form on an -dimensional manifold . In a neighborhood of each point of , by the Poincaré lemma, there is a 1-form with . Moreover, satisfies the first set of hypotheses in Darboux's theorem, and so locally there is a coordinate chart near in which
Taking an exterior derivative now shows
The chart is said to be a Darboux chart around . The manifold can be covered by such charts.
To state this differently, identify with by letting . If is a Darboux chart, then can be written as the pullback of the standard symplectic form on :
A modern proof of this result, without employing Darboux's general statement on 1-forms, is done using Moser's trick.
Comparison with Riemannian geometry
Darboux's theorem for symplectic manifolds implies that there are no local invariants in symplectic geometry: a Darboux basis can always be taken, valid near any given point. This is in marked contrast to the situation in Riemannian geometry where the curvature is a local invariant, an obstruction to the metric being locally a sum of squares of coordinate differentials.
The difference is that Darboux's theorem states that can be made to take the standard form in an entire neighborhood around . In Riemannian geometry, the metric can always be made to take the standard form at any given point, but not always in a neighborhood around that point.
Darboux's theorem for contact manifolds
Another particular case is recovered when ; if everywhere, then is a contact form. A simpler proof can be given, as in the case of symplectic structures, by using Moser's trick.
The Darboux-Weinstein theorem
Alan Weinstein showed that the Darboux's theorem for sympletic manifolds can be strengthened to hold on a neighborhood of a submanifold:
Let be a smooth manifold endowed with two symplectic forms and , and let be a closed submanifold. If , then there is a neighborhood of in and a diffeomorphism such that .
The standard Darboux theorem is recovered when is a point and is the standard symplectic structure on a coordinate chart.
This theorem also holds for infinite-dimensional Banach manifolds.
See also
Carathéodory–Jacobi–Lie theorem, a generalization of this theorem.
Moser's trick
Symplectic basis
References
External links
G. Darboux, "On the Pfaff Problem", transl. by D. H. Delphenich
G. Darboux, "On the Pfaff Problem (cont.)", transl. by D. H. Delphenich
Differential systems
Symplectic geometry
Coordinate systems in differential geometry
Theorems in differential geometry
Mathematical physics | Darboux's theorem | [
"Physics",
"Mathematics"
] | 903 | [
"Theorems in differential geometry",
"Applied mathematics",
"Theoretical physics",
"Coordinate systems in differential geometry",
"Theorems in geometry",
"Coordinate systems",
"Mathematical physics"
] |
1,543,711 | https://en.wikipedia.org/wiki/Compactification%20%28physics%29 | In theoretical physics, compactification means changing a theory with respect to one of its space-time dimensions. Instead of having a theory with this dimension being infinite, one changes the theory so that this dimension has a finite length, and may also be periodic.
Compactification plays an important part in thermal field theory where one compactifies time, in string theory where one compactifies the extra dimensions of the theory, and in two- or one-dimensional solid state physics, where one considers a system which is limited in one of the three usual spatial dimensions.
At the limit where the size of the compact dimension goes to zero, no fields depend on this extra dimension, and the theory is dimensionally reduced.
In string theory
In string theory, compactification is a generalization of Kaluza–Klein theory. It tries to reconcile the gap between the conception of our universe based on its four observable dimensions with the ten, eleven, or twenty-six dimensions which theoretical equations lead us to suppose the universe is made with.
For this purpose it is assumed the extra dimensions are "wrapped" up on themselves, or "curled" up on Calabi–Yau spaces, or on orbifolds. Models in which the compact directions support fluxes are known as flux compactifications. The coupling constant of string theory, which determines the probability of strings splitting and reconnecting, can be described by a field called a dilaton. This in turn can be described as the size of an extra (eleventh) dimension which is compact. In this way, the ten-dimensional type IIA string theory can be described as the compactification of M-theory in eleven dimensions. Furthermore, different versions of string theory are related by different compactifications in a procedure known as T-duality.
The formulation of more precise versions of the meaning of compactification in this context has been promoted by discoveries such as the mysterious duality.
Flux compactification
A flux compactification is a particular way to deal with additional dimensions required by string theory.
It assumes that the shape of the internal manifold is a Calabi–Yau manifold or generalized Calabi–Yau manifold which is equipped with non-zero values of fluxes, i.e. differential forms, that generalize the concept of an electromagnetic field (see p-form electrodynamics).
The hypothetical concept of the anthropic landscape in string theory follows from a large number of possibilities in which the integers that characterize the fluxes can be chosen without violating rules of string theory. The flux compactifications can be described as F-theory vacua or type IIB string theory vacua with or without D-branes.
See also
Dimensional reduction
References
Further reading
Chapter 16 of Michael Green, John H. Schwarz and Edward Witten (1987). Superstring theory. Cambridge University Press. Vol. 2: Loop amplitudes, anomalies and phenomenology. .
Brian R. Greene, "String Theory on Calabi–Yau Manifolds". .
Mariana Graña, "Flux compactifications in string theory: A comprehensive review", Physics Reports 423, 91–158 (2006). .
Michael R. Douglas and Shamit Kachru "Flux compactification", Rev. Mod. Phys. 79, 733 (2007). .
Ralph Blumenhagen, Boris Körs, Dieter Lüst, Stephan Stieberger, "Four-dimensional string compactifications with D-branes, orientifolds and fluxes", Physics Reports 445, 1–193 (2007). .
String theory | Compactification (physics) | [
"Astronomy"
] | 737 | [
"String theory",
"Astronomical hypotheses"
] |
1,543,837 | https://en.wikipedia.org/wiki/Phase%20response | In signal processing, phase response is the relationship between the phase of a sinusoidal input and the output signal passing through any device that accepts input and produces an output signal, such as an amplifier or a filter.
Amplifiers, filters, and other devices are often categorized by their amplitude and/or phase response. The amplitude response is the ratio of output amplitude to input, usually a function of the frequency. Similarly, phase response is the phase of the output with the input as reference. The input is defined as zero phase. A phase response is not limited to lying between 0° and 360°, as phase can accumulate to any amount as delay builds up over time.
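A minimal sketch of an amplitude and phase response, assuming a simple first-order RC low-pass filter with arbitrarily chosen component values (not an example from the article):

import numpy as np

# Amplitude and phase response of H(w) = 1 / (1 + j*w*R*C).
R, C = 1.0e3, 1.0e-6                 # 1 kOhm, 1 uF -> cutoff near 159 Hz
f = np.logspace(0, 5, 6)             # frequencies from 1 Hz to 100 kHz
w = 2 * np.pi * f
H = 1.0 / (1.0 + 1j * w * R * C)

amplitude_db = 20 * np.log10(np.abs(H))      # amplitude response
phase_deg = np.degrees(np.angle(H))          # phase response (0 to -90 degrees)

for fi, a, p in zip(f, amplitude_db, phase_deg):
    print("%8.0f Hz  %7.2f dB  %7.1f deg" % (fi, a, p))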
See also
Group delay and phase delay
References
Trigonometry
Wave mechanics
Signal processing | Phase response | [
"Physics",
"Technology",
"Engineering"
] | 148 | [
"Physical phenomena",
"Telecommunications engineering",
"Computer engineering",
"Signal processing",
"Classical mechanics",
"Waves",
"Wave mechanics"
] |
1,544,409 | https://en.wikipedia.org/wiki/Savepoint | A savepoint is a way of implementing subtransactions (also known as nested transactions) within a relational database management system by indicating a point within a transaction that can be "rolled back to" without affecting any work done in the transaction before the savepoint was created. Multiple savepoints can exist within a single transaction. Savepoints are useful for implementing complex error recovery in database applications. If an error occurs in the midst of a multiple-statement transaction, the application may be able to recover from the error (by rolling back to a savepoint) without needing to abort the entire transaction.
A savepoint can be declared by issuing a SAVEPOINT name statement. All changes made after a savepoint has been declared can be undone by issuing a ROLLBACK TO SAVEPOINT name command. Issuing RELEASE SAVEPOINT name will cause the named savepoint to be discarded, but will not otherwise affect anything. Issuing the commands ROLLBACK or COMMIT will also discard any savepoints created since the start of the main transaction.
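A minimal sketch of these commands in practice, using Python's built-in sqlite3 module (SQLite supports savepoints, as noted below); the table and savepoint names are arbitrary, and isolation_level=None simply lets the script manage the transaction explicitly.

import sqlite3

# SAVEPOINT / ROLLBACK TO SAVEPOINT / RELEASE SAVEPOINT inside one transaction.
con = sqlite3.connect(":memory:", isolation_level=None)
cur = con.cursor()
cur.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")

cur.execute("BEGIN")
cur.execute("INSERT INTO accounts VALUES ('alice', 100)")
cur.execute("SAVEPOINT sp1")                      # mark a point inside the transaction
cur.execute("INSERT INTO accounts VALUES ('bob', -999)")
cur.execute("ROLLBACK TO SAVEPOINT sp1")          # undo only the work done after sp1
cur.execute("RELEASE SAVEPOINT sp1")              # discard the savepoint itself
cur.execute("COMMIT")

print(cur.execute("SELECT * FROM accounts").fetchall())  # [('alice', 100)]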
Savepoints are defined in the SQL standard and are supported by all established SQL relational databases, including PostgreSQL, Oracle Database, Microsoft SQL Server, MySQL, IBM Db2, SQLite (since 3.6.8), Firebird, H2 Database Engine, and Informix (since version 11.50xC3).
Data management
Transaction processing
Database management systems | Savepoint | [
"Technology"
] | 288 | [
"Data management",
"Data"
] |
1,544,912 | https://en.wikipedia.org/wiki/Solid-state%20nuclear%20track%20detector | A solid-state nuclear track detector or SSNTD (also known as an etched track detector or a dielectric track detector, DTD) is a sample of a solid material (photographic emulsion, crystal, glass or plastic) exposed to nuclear radiation (neutrons or charged particles, occasionally also gamma rays), etched in a corrosive chemical, and examined microscopically. When the nuclear particles pass through the material they leave trails of molecular damage, and these damaged regions are etched faster than the bulk material, generating holes called tracks.
The size and shape of these tracks yield information about the mass, charge, energy and direction of motion of the particles. The main advantages over other radiation detectors are the detailed information available on individual particles, the persistence of the tracks allowing measurements to be made over long periods of time, and the simple, cheap and robust construction of the detector. For these reasons, SSNTDs are commonly used to study cosmic rays, long-lived radioactive elements, radon concentration in houses, and the age of geological samples.
The basis of SSNTDs is that charged particles damage the detector within nanometers along the track in such a way that the track can be etched many times faster than the undamaged material. Etching, typically for several hours, enlarges the damage to conical pits of micrometer dimensions, that can be observed with a microscope. For a given type of particle, the length of the track gives the energy of the particle. The charge can be determined from the etch rate of the track compared to that of the bulk. If the particles enter the surface at normal incidence, the pits are circular; otherwise the ellipticity and orientation of the elliptical pit mouth indicate the direction of incidence.
A material commonly used in SSNTDs is polyallyl diglycol carbonate (also known as CR-39). It is a clear, colorless, rigid plastic with the chemical formula C12H18O7. Etching to expose radiation damage is typically performed using solutions of caustic alkalis such as sodium hydroxide, often at elevated temperatures for several hours.
See also
nuclear track detectors that are not solid state
cloud chamber
bubble chamber
solid-state (semiconductor) nuclear detectors that do not record tracks
surface-barrier detector
silicon drift detector
lithium-drifted silicon detector - Si(Li)
intrinsic detector
External links
Gregory Choppin, Jan-Olov Liljenzin, Jan Rydberg Radiochemistry and Nuclear Chemistry, Chapter 8, "Detection and Measurement Techniques"
Nuclear physics
Particle detectors | Solid-state nuclear track detector | [
"Physics",
"Technology",
"Engineering"
] | 516 | [
"Particle detectors",
"Measuring instruments",
"Nuclear physics"
] |
1,545,079 | https://en.wikipedia.org/wiki/Barnett%20effect | The Barnett effect is the magnetization of an uncharged body when spun on its axis. It was discovered by American physicist Samuel Barnett in 1915.
An uncharged object rotating with angular velocity ω tends to spontaneously magnetize, with a magnetization given by M = χω/γ, where γ is the gyromagnetic ratio for the material and χ is the magnetic susceptibility.
The magnetization occurs parallel to the axis of spin. Barnett was motivated by a prediction by Owen Richardson in 1908, later named the Einstein–de Haas effect, that magnetizing a ferromagnet can induce a mechanical rotation. He instead looked for the opposite effect, that is, that spinning a ferromagnet could change its magnetization. He established the effect with a long series of experiments between 1908 and 1915.
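For a sense of the magnitude, the following sketch treats the rotation as an equivalent applied field ω/γ, using the electron gyromagnetic ratio and an arbitrarily chosen rotation rate (illustrative assumptions, not values from the article):

import math

# Rotation at angular velocity omega acts on the spins like an effective
# "Barnett field" B = omega / gamma.
gamma_e = 1.76e11          # electron gyromagnetic ratio, rad s^-1 T^-1
rev_per_s = 100.0          # spin the sample at 100 revolutions per second
omega = 2 * math.pi * rev_per_s

B_eff = omega / gamma_e
print("equivalent field B = %.2e tesla" % B_eff)   # ~3.6e-9 T: a very small field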
See also
London moment
References
Further reading
Magnetism | Barnett effect | [
"Physics",
"Materials_science"
] | 168 | [
"Materials science stubs",
"Condensed matter stubs",
"Condensed matter physics"
] |
1,545,150 | https://en.wikipedia.org/wiki/Terfenol-D | Terfenol-D, an alloy of the formula TbxDy1−xFe2 (x ≈ 0.3), is a magnetostrictive material. It was initially developed in the 1970s by the Naval Ordnance Laboratory in the United States. The technology for manufacturing the material efficiently was developed in the 1980s at Ames Laboratory under a U.S. Navy-funded program. It is named after terbium, iron (Fe), Naval Ordnance Laboratory (NOL), and the D comes from dysprosium.
Physical properties
The alloy has the highest magnetostriction of any alloy, up to 0.002 m/m at saturation; it expands and contracts in a magnetic field. Terfenol-D has a large magnetostriction force, high energy density, low sound velocity, and a low Young's modulus. In its purest form, it also has low ductility and low fracture resistance. Terfenol-D is a gray alloy that has different possible ratios of its elemental components that always follow the formula TbxDy1−xFe2. The addition of dysprosium made it easier to induce magnetostrictive responses by making the alloy require a lower level of magnetic fields. When the ratio of Tb and Dy is increased, the resulting alloy's magnetostrictive properties will operate at temperatures as low as −200 °C, and when decreased, it may operate at a maximum of 200 °C. The composition of Terfenol-D allows it to have a large magnetostriction and magnetic flux when a magnetic field is applied to it. This behaviour persists over a large range of compressive stresses, with a trend of decreasing magnetostriction as the compressive stress increases. Crush strength has been shown (unpublished) to be quite high under certain conditions. There is also a relationship between the magnetic flux and compression in which, when the compressive stress increases, the magnetic flux changes less drastically. Terfenol-D is mostly used for its magnetostrictive properties, in which it changes shape when exposed to magnetic fields in a process called magnetization. Magnetic heat treatment is shown to improve the magnetostrictive properties of Terfenol-D at low compressive stress for certain ratios of Tb and Dy.
Applications
Due to its material properties, Terfenol-D is excellent for use in the manufacturing of low frequency, high powered underwater acoustics. Its initial application was in naval sonar systems. It sees application in magnetomechanical sensors, actuators, and acoustic and ultrasonic transducers due to its high energy density and large bandwidth capabilities, e.g. in the SoundBug device (its first commercial application by FeONIC). Its strain is also larger than that of another normally used material (PZT8), which allows Terfenol-D transducers to reach greater depths for ocean explorations than past transducers. Its low Young's modulus brings some complications due to compression at large depths, which are overcome in transducer designs that may reach 1000 ft in depth and only lose a small amount of accuracy of around 1 dB. Due to its high temperature range, Terfenol-D is also useful in deep hole acoustic transducers where the environment may reach high pressure and temperatures like oil holes. Terfenol-D may also be used for hydraulic valve drivers due to its high strain and high force properties. Similarly, magnetostrictive actuators have also been considered for use in fuel injectors for diesel engines because of the high stresses that can be produced. Terfenol-D uniquely combines key characteristics that enable advanced diesel fuel injection. First, the quantum mechanical origin of magnetostriction means this effect does not degrade, giving it robustness and durability. Second, it makes good use of the compression available from diesel fuel pressure. Finally, its mechanical expansion tends to be proportional to the imposed magnetic field, making injector needle position continuously controllable. An injector needle directly operated by Terfenol-D can have lifetime durability on an engine cylinder head while enabling unprecedented control over each injection event throughout its entire duration. These properties can be used for in-cylinder treatment of efficiency, emissions, and noise while enabling fuel flexibility.
Manufacturing
The increase in use of Terfenol-D in transducers required new production techniques that increased production rates and quality because the original methods were unreliable and small scale. There are four methods that are used to produce Terfenol-D, which are free stand zone melting, modified Bridgman, sintered powder compact, and polymer matrix composites.
The first two methods, free stand zone melting (FSZM) and modified Bridgman (MB), are capable of producing Terfenol-D that has high magnetostrictive properties and energy densities. However, FSZM cannot produce a rod larger than 8 mm in diameter due to the surface tension of the Terfenol-D and how the FSZM process has no container to restrict the material. The MB process offers a minimum of 10 mm diameter size and is only restricted due to the wall interfering with the crystal growth. Both methods create solid crystals that require later manufacturing if a geometry other than a right-angle cylinder is needed. The solid crystals produced have a fine lamellar structure.
The other two techniques, sintered powder compact and polymer matrix composites, are powder based. These techniques allow for intricate geometry and detail. However, the size is limited to 10mm in diameter and 100mm in length due to the molds used. The resulting microstructures of these powder based methods differ from the solid crystal ones because they do not have a lamellar structure and have a lower density. However, all methods have similar magnetostrictive properties.
Due to size restriction, MB is the best process to produce Terfenol-D. However it is a labor-intensive method. A newer process like MB is Etrema crystal grower (ECG) that results in larger diameter Terfenol-D crystals and increased magnetostrictive performance. The reliability of magnetostrictive properties of the Terfenol-D throughout the life of the material is increased by using ECG.
Terfenol-D has some minor drawbacks which stem from its material properties. Terfenol-D has low ductility and low fracture resistance. To solve this, Terfenol-D has been added to polymers and other metals to create composites. When added to polymers, the stiffness of the resulting composite is low. When composites of Terfenol-D with ductile metal binders are created, the resulting material has increased stiffness and ductility with reduced magnetostrictive properties. These metal composites may be formed by explosion compaction. In a study done on processing Terfenol-D alloys, the resulting alloys created using copper and Terfenol-D had increased strength and hardness values, which supports the theory that the composites of ductile metal binders and Terfenol-D result in a stronger and more ductile material.
See also
Galfenol
References
External links
http://tdvib.com/terfenol-d/
https://www.qcwlc.us
Rare earth alloys
Terbium
Intermetallics | Terfenol-D | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,504 | [
"Rare earth alloys",
"Inorganic compounds",
"Metallurgy",
"Alloys",
"Intermetallics",
"Condensed matter physics"
] |
1,545,608 | https://en.wikipedia.org/wiki/Anaerobic%20digestion | Anaerobic digestion is a sequence of processes by which microorganisms break down biodegradable material in the absence of oxygen. The process is used for industrial or domestic purposes to manage waste or to produce fuels. Much of the fermentation used industrially to produce food and drink products, as well as home fermentation, uses anaerobic digestion.
Anaerobic digestion occurs naturally in some soils and in lake and oceanic basin sediments, where it is usually referred to as "anaerobic activity". This is the source of marsh gas methane as discovered by Alessandro Volta in 1776.
Anaerobic digestion comprises four stages:
Hydrolysis
Acidogenesis
Acetogenesis
Methanogenesis
The digestion process begins with bacterial hydrolysis of the input materials. Insoluble organic polymers, such as carbohydrates, are broken down to soluble derivatives that become available for other bacteria. Acidogenic bacteria then convert the sugars and amino acids into carbon dioxide, hydrogen, ammonia, and organic acids. In acetogenesis, bacteria convert these resulting organic acids into acetic acid, along with additional ammonia, hydrogen, and carbon dioxide amongst other compounds. Finally, methanogens convert these products to methane and carbon dioxide. The methanogenic archaea populations play an indispensable role in anaerobic wastewater treatments.
Anaerobic digestion is used as part of the process to treat biodegradable waste and sewage sludge. As part of an integrated waste management system, anaerobic digestion reduces the emission of landfill gas into the atmosphere. Anaerobic digesters can also be fed with purpose-grown energy crops, such as maize.
Anaerobic digestion is widely used as a source of renewable energy. The process produces a biogas, consisting of methane, carbon dioxide, and traces of other 'contaminant' gases. This biogas can be used directly as fuel, in combined heat and power gas engines or upgraded to natural gas-quality biomethane. The nutrient-rich digestate also produced can be used as fertilizer.
With the re-use of waste as a resource and new technological approaches that have lowered capital costs, anaerobic digestion has in recent years received increased attention among governments in a number of countries, among these the United Kingdom (2011), Germany, Denmark (2011), and the United States.
Process
Many microorganisms affect anaerobic digestion, including acetic acid-forming bacteria (acetogens) and methane-forming archaea (methanogens). These organisms promote a number of chemical processes in converting the biomass to biogas.
Gaseous oxygen is excluded from the reactions by physical containment. Anaerobes utilize electron acceptors from sources other than oxygen gas. These acceptors can be the organic material itself or may be supplied by inorganic oxides from within the input material. When the oxygen source in an anaerobic system is derived from the organic material itself, the 'intermediate' end products are primarily alcohols, aldehydes, and organic acids, plus carbon dioxide. In the presence of specialised methanogens, the intermediates are converted to the 'final' end products of methane, carbon dioxide, and trace levels of hydrogen sulfide. In an anaerobic system, the majority of the chemical energy contained within the starting material is released by methanogenic archaea as methane.
Populations of anaerobic microorganisms typically take a significant period of time to establish themselves to be fully effective. Therefore, common practice is to introduce anaerobic microorganisms from materials with existing populations, a process known as "seeding" the digesters, typically accomplished with the addition of sewage sludge or cattle slurry.
Process stages
The four key stages of anaerobic digestion involve hydrolysis, acidogenesis, acetogenesis and methanogenesis.
The overall process can be described by the chemical reaction, where organic material such as glucose is biochemically digested into carbon dioxide (CO2) and methane (CH4) by the anaerobic microorganisms.
C6H12O6 → 3CO2 + 3CH4
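A back-of-envelope sketch of what this stoichiometry implies, using rounded molar masses and the ideal-gas molar volume (illustrative only; real feedstocks and digesters deviate from this idealized reaction):

# Mass balance and theoretical methane yield for C6H12O6 -> 3 CO2 + 3 CH4.
M_glucose = 180.16   # g/mol, C6H12O6
M_CH4 = 16.04        # g/mol
M_CO2 = 44.01        # g/mol
V_molar = 22.414     # L/mol at standard temperature and pressure

n_CH4, n_CO2 = 3, 3
mass_out = n_CH4 * M_CH4 + n_CO2 * M_CO2
print("mass balance: %.1f g in -> %.1f g out" % (M_glucose, mass_out))

ch4_litres_per_kg = n_CH4 * V_molar / M_glucose * 1000.0
print("theoretical methane yield: %.0f L CH4 per kg glucose" % ch4_litres_per_kg)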
Hydrolysis
In most cases, biomass is made up of large organic polymers. For the bacteria in anaerobic digesters to access the energy potential of the material, these chains must first be broken down into their smaller constituent parts. These constituent parts, or monomers, such as sugars, are readily available to other bacteria. The process of breaking these chains and dissolving the smaller molecules into solution is called hydrolysis. Therefore, hydrolysis of these high-molecular-weight polymeric components is the necessary first step in anaerobic digestion. Through hydrolysis the complex organic molecules are broken down into simple sugars, amino acids, and fatty acids.
Acetate and hydrogen produced in the first stages can be used directly by methanogens. Other molecules, such as volatile fatty acids (VFAs) with a chain length greater than that of acetate must first be catabolised into compounds that can be directly used by methanogens.
Acidogenesis
The biological process of acidogenesis results in further breakdown of the remaining components by acidogenic (fermentative) bacteria. Here, VFAs are created, along with ammonia, carbon dioxide, and hydrogen sulfide, as well as other byproducts. The process of acidogenesis is similar to the way milk sours.
Acetogenesis
The third stage of anaerobic digestion is acetogenesis. Here, simple molecules created through the acidogenesis phase are further digested by acetogens to produce largely acetic acid, as well as carbon dioxide and hydrogen.
Methanogenesis
The terminal stage of anaerobic digestion is the biological process of methanogenesis. Here, methanogens use the intermediate products of the preceding stages and convert them into methane, carbon dioxide, and water. These components make up the majority of the biogas emitted from the system. Methanogenesis is sensitive to both high and low pHs and occurs between pH 6.5 and pH 8. The remaining, indigestible material the microbes cannot use and any dead bacterial remains constitute the digestate.
Configuration
Anaerobic digesters can be designed and engineered to operate using a number of different configurations and can be categorized by batch vs. continuous process mode, mesophilic vs. thermophilic temperature conditions, high vs. low proportion of solids, and single-stage vs. multistage processes. A continuous process requires a more complex design, but it may still be more economical than a batch process, because a batch process needs a higher initial capital outlay and a larger total digester volume (spread across several batches) to handle the same amount of waste as a continuous process digester. More heat energy is required in a thermophilic system compared to a mesophilic system, but the thermophilic system requires much less time and has a larger gas output capacity and higher methane content, so the trade-off has to be considered carefully. For solids content, a low-solids process will handle up to 15% solids content; above this level is considered high solids content, also known as dry digestion. In a single-stage process, one reactor houses the four anaerobic digestion steps. A multistage process utilizes two or more reactors for digestion to separate the methanogenesis and hydrolysis phases.
Batch or continuous
Anaerobic digestion can be performed as a batch process or a continuous process. In a batch system, biomass is added to the reactor at the start of the process. The reactor is then sealed for the duration of the process. In its simplest form, batch processing needs inoculation with already processed material to start the anaerobic digestion. In a typical scenario, biogas production will follow a normal distribution pattern over time. Operators can use this fact to determine when they believe the digestion of the organic matter has completed. There can be severe odour issues if a batch reactor is opened and emptied before the process is well completed. A more advanced type of batch approach has limited the odour issues by integrating anaerobic digestion with in-vessel composting. In this approach inoculation takes place through the use of recirculated degasified percolate. After anaerobic digestion has completed, the biomass is kept in the reactor, which is then used for in-vessel composting before it is opened. As batch digestion is simple and requires less equipment and lower levels of design work, it is typically a cheaper form of digestion. Using more than one batch reactor at a plant can ensure constant production of biogas.
In continuous digestion processes, organic matter is constantly added (continuous complete mixed) or added in stages to the reactor (continuous plug flow; first in – first out). Here, the end products are constantly or periodically removed, resulting in constant production of biogas. A single or multiple digesters in sequence may be used. Examples of this form of anaerobic digestion include continuous stirred-tank reactors, upflow anaerobic sludge blankets, expanded granular sludge beds, and internal circulation reactors.
Temperature
The two conventional operational temperature levels for anaerobic digesters determine the species of methanogens in the digesters:
Mesophilic digestion takes place optimally around 30 to 38 °C, or at ambient temperatures between 20 and 45 °C, where mesophiles are the primary microorganisms present.
Thermophilic digestion takes place optimally around 49 to 57 °C, or at elevated temperatures up to 70 °C, where thermophiles are the primary microorganisms present.
A limit case has been reached in Bolivia, with anaerobic digestion in working temperature conditions of less than 10 °C. The anaerobic process is very slow, taking more than three times the normal mesophilic process time. In experimental work at the University of Alaska Fairbanks, a 1,000-litre digester using psychrophiles harvested from "mud from a frozen lake in Alaska" has produced 200–300 litres of methane per day, about 20 to 30% of the output from digesters in warmer climates. Mesophilic species outnumber thermophiles, and they are also more tolerant to changes in environmental conditions than thermophiles. Mesophilic systems are, therefore, considered to be more stable than thermophilic digestion systems. In contrast, thermophilic digestion systems, while considered less stable, have a higher energy input and remove more biogas from the organic matter in an equal amount of time. The increased temperatures facilitate faster reaction rates, and thus faster gas yields. Operation at higher temperatures also facilitates greater pathogen reduction of the digestate. In countries where legislation, such as the Animal By-Products Regulations in the European Union, requires digestate to meet certain levels of pathogen reduction, there may be a benefit to using thermophilic temperatures instead of mesophilic.
Additional pre-treatment can be used to reduce the necessary retention time to produce biogas. For example, certain processes shred the substrates to increase the surface area or use a thermal pretreatment stage (such as pasteurisation) to significantly enhance the biogas output. The pasteurisation process can also be used to reduce the pathogenic concentration in the digestate, leaving the anaerobic digester. Pasteurisation may be achieved by heat treatment combined with maceration of the solids.
Solids content
In a typical scenario, three different operational parameters are associated with the solids content of the feedstock to the digesters:
High solids (dry—stackable substrate)
High solids (wet—pumpable substrate)
Low solids (wet—pumpable substrate)
High solids (dry) digesters are designed to process materials with a solids content between 25 and 40%. Unlike wet digesters that process pumpable slurries, high solids (dry – stackable substrate) digesters are designed to process solid substrates without the addition of water. The primary styles of dry digesters are continuous vertical plug flow and batch tunnel horizontal digesters. Continuous vertical plug flow digesters are upright, cylindrical tanks where feedstock is continuously fed into the top of the digester, and flows downward by gravity during digestion. In batch tunnel digesters, the feedstock is deposited in tunnel-like chambers with a gas-tight door. Neither approach has mixing inside the digester. The amount of pretreatment, such as contaminant removal, depends both upon the nature of the waste streams being processed and the desired quality of the digestate. Size reduction (grinding) is beneficial in continuous vertical systems, as it accelerates digestion, while batch systems avoid grinding and instead require structure (e.g. yard waste) to reduce compaction of the stacked pile. Continuous vertical dry digesters have a smaller footprint due to the shorter effective retention time and vertical design. Wet digesters can be designed to operate in either a high-solids content, with a total suspended solids (TSS) concentration greater than ~20%, or a low-solids concentration less than ~15%.
High solids (wet) digesters process a thick slurry that requires more energy input to move and process the feedstock. The thickness of the material may also lead to associated problems with abrasion. High solids digesters will typically have a lower land requirement due to the lower volumes associated with the moisture. High solids digesters also require correction of conventional performance calculations (e.g. gas production, retention time, kinetics, etc.) originally based on very dilute sewage digestion concepts, since larger fractions of the feedstock mass are potentially convertible to biogas.
Low solids (wet) digesters can transport material through the system using standard pumps that require significantly lower energy input. Low solids digesters require a larger amount of land than high solids due to the increased volumes associated with the increased liquid-to-feedstock ratio of the digesters. There are benefits associated with operation in a liquid environment, as it enables more thorough circulation of materials and contact between the bacteria and their food. This enables the bacteria to more readily access the substances on which they are feeding, and increases the rate of gas production.
Complexity
Digestion systems can be configured with different levels of complexity. In a single-stage digestion system (one-stage), all of the biological reactions occur within a single, sealed reactor or holding tank. Using a single stage reduces construction costs, but results in less control of the reactions occurring within the system. Acidogenic bacteria, through the production of acids, reduce the pH of the tank. Methanogenic archaea, as outlined earlier, operate in a strictly defined pH range. Therefore, the biological reactions of the different species in a single-stage reactor can be in direct competition with each other. Another one-stage reaction system is an anaerobic lagoon. These lagoons are pond-like, earthen basins used for the treatment and long-term storage of manures. Here the anaerobic reactions are contained within the natural anaerobic sludge contained in the pool.
In a two-stage digestion system (multistage), different digestion vessels are optimised to bring maximum control over the bacterial communities living within the digesters. Acidogenic bacteria produce organic acids and more quickly grow and reproduce than methanogenic archaea. Methanogenic archaea require stable pH and temperature to optimise their performance.
Under typical circumstances, hydrolysis, acetogenesis, and acidogenesis occur within the first reaction vessel. The organic material is then heated to the required operational temperature (either mesophilic or thermophilic) prior to being pumped into a methanogenic reactor. The initial hydrolysis or acidogenesis tanks prior to the methanogenic reactor can provide a buffer to the rate at which feedstock is added. Some European countries require a degree of elevated heat treatment to kill harmful bacteria in the input waste. In this instance, there may be a pasteurisation or sterilisation stage prior to digestion or between the two digestion tanks. Notably, it is not possible to completely isolate the different reaction phases, and often some biogas is produced in the hydrolysis or acidogenesis tanks.
Residence time
The residence time in a digester varies with the amount and type of feed material, and with the configuration of the digestion system. In a typical two-stage mesophilic digestion, residence time varies between 15 and 40 days, while for a single-stage thermophilic digestion, the residence time is normally shorter, at around 14 days. The plug-flow nature of some of these systems will mean the full degradation of the material may not have been realised in this timescale. In this event, digestate exiting the system will be darker in colour and will typically have more odour.
In the case of an upflow anaerobic sludge blanket digestion (UASB), hydraulic residence times can be as short as 1 hour to 1 day, and solid retention times can be up to 90 days. In this manner, a UASB system is able to separate solids and hydraulic retention times with the use of a sludge blanket. Continuous digesters have mechanical or hydraulic devices, depending on the level of solids in the material, to mix the contents, enabling the bacteria and the food to be in contact. They also allow excess material to be continuously extracted to maintain a reasonably constant volume within the digestion tanks.
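As an illustration of how retention time is usually reckoned, a minimal sketch follows; the hydraulic retention time is taken as working volume divided by daily feed rate, and the example figures are invented rather than drawn from this article:

```python
# Hydraulic retention time (HRT) is commonly taken as working volume
# divided by volumetric feed rate; the figures below are hypothetical
# and are used only to illustrate the calculation.

def hydraulic_retention_time(volume_m3: float, feed_m3_per_day: float) -> float:
    """Return HRT in days for a continuously fed digester."""
    return volume_m3 / feed_m3_per_day

# Example: a 2,000 m3 mesophilic digester fed 80 m3 of slurry per day
print(hydraulic_retention_time(2000.0, 80.0))   # 25 days, within the 15-40 day range quoted above
```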
Pressure
A recent development in anaerobic reactor design is high-pressure anaerobic digestion (HPAD), also referred to as Autogenerative High Pressure Digestion (AHPD). This technique produces a biogas with an elevated methane content: under pressure, the carbon dioxide produced dissolves into the water phase more readily than methane does, so the biogas leaving the reactor is richer in methane. Research at the University of Groningen demonstrated that the composition of the bacterial community changes under the influence of pressure. Individual bacterial species have optimum conditions in which they grow and replicate the fastest; commonly considered parameters are pH, temperature, salinity and so on, but pressure is also one of them. Some species have adapted to life in the deep oceans, where pressure is much higher than at sea level. This makes it possible to influence the anaerobic digestion process through pressure, in a similar vein to other process parameters such as temperature, retention time and pH.
Inhibition
The anaerobic digestion process can be inhibited by several compounds, affecting one or more of the bacterial groups responsible for the different organic matter degradation steps. The degree of the inhibition depends, among other factors, on the concentration of the inhibitor in the digester. Potential inhibitors are ammonia, sulfide, light metal ions (Na, K, Mg, Ca, Al), heavy metals, some organics (chlorophenols, halogenated aliphatics, N-substituted aromatics, long chain fatty acids), etc.
Total ammonia nitrogen (TAN) has been shown to inhibit the production of methane. Furthermore, it destabilises the microbial community, impacting the synthesis of acetic acid, which is one of the driving forces in methane production. At concentrations in excess of 5000 mg/L TAN, pH adjustment is needed to keep the reaction stable. A TAN concentration above 1700–1800 mg/L inhibits methane production, and yield decreases further at greater TAN concentrations. High TAN concentrations cause the reaction to turn acidic and lead to a domino effect of inhibition. Total ammonia nitrogen is the combination of free ammonia and ionized ammonia. TAN is produced through the degradation of material high in nitrogen, typically proteins, and will naturally build up in anaerobic digestion, depending on the organic feedstock fed to the system. In typical wastewater treatment practice, TAN reduction is achieved via nitrification, an aerobic process in which TAN is consumed by aerobic heterotrophic bacteria. These bacteria release nitrate and nitrite, which are later converted to nitrogen gas through the denitrification process. Hydrolysis and acidogenesis can also be impacted by TAN concentration: in mesophilic conditions, inhibition of hydrolysis was found to occur at 5500 mg/L TAN, while acidogenesis inhibition occurs at 6500 mg/L TAN.
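A small screening function, sketched below against the thresholds quoted in this section, can flag TAN-related risk; the tiering itself is a simplification for illustration only:

```python
# Rough screening of total ammonia nitrogen (TAN) against the thresholds
# quoted above (1,700-1,800 mg/L onset of methanogenesis inhibition,
# ~5,000 mg/L requiring pH adjustment, 5,500/6,500 mg/L for hydrolysis
# and acidogenesis under mesophilic conditions).  Illustrative only.

def tan_screen(tan_mg_per_l: float) -> list[str]:
    flags = []
    if tan_mg_per_l >= 1700:
        flags.append("methanogenesis inhibition likely; expect reduced methane yield")
    if tan_mg_per_l >= 5000:
        flags.append("pH adjustment needed to keep the reaction stable")
    if tan_mg_per_l >= 5500:
        flags.append("hydrolysis inhibition reported (mesophilic)")
    if tan_mg_per_l >= 6500:
        flags.append("acidogenesis inhibition reported (mesophilic)")
    return flags or ["below the reported inhibition thresholds"]

print(tan_screen(2400))
```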
Feedstocks
The most important initial issue when considering the application of anaerobic digestion systems is the feedstock to the process. Almost any organic material can be processed with anaerobic digestion; however, if biogas production is the aim, the level of putrescibility is the key factor in its successful application. The more putrescible (digestible) the material, the higher the gas yields possible from the system.
Feedstocks can include biodegradable waste materials, such as waste paper, grass clippings, leftover food, sewage, and animal waste. Woody wastes are the exception, because they are largely unaffected by digestion, as most anaerobes are unable to degrade lignin. Xylophagous anaerobes (lignin consumers) or high temperature pretreatment, such as pyrolysis, can be used to break lignin down. Anaerobic digesters can also be fed with specially grown energy crops, such as silage, for dedicated biogas production. In Germany and continental Europe, these facilities are referred to as "biogas" plants. A codigestion or cofermentation plant is typically an agricultural anaerobic digester that accepts two or more input materials for simultaneous digestion.
The length of time required for anaerobic digestion depends on the chemical complexity of the material. Material rich in easily digestible sugars breaks down quickly, whereas intact lignocellulosic material rich in cellulose and hemicellulose polymers can take much longer to break down. Anaerobic microorganisms are generally unable to break down lignin, the recalcitrant aromatic component of biomass.
Anaerobic digesters were originally designed for operation using sewage sludge and manures. Sewage and manure are not, however, the material with the most potential for anaerobic digestion, as the biodegradable material has already had much of the energy content taken out by the animals that produced it. Therefore, many digesters operate with codigestion of two or more types of feedstock. For example, in a farm-based digester that uses dairy manure as the primary feedstock, the gas production may be significantly increased by adding a second feedstock, e.g., grass and corn (typical on-farm feedstock), or various organic byproducts, such as slaughterhouse waste, fats, oils and grease from restaurants, organic household waste, etc. (typical off-site feedstock).
Digesters processing dedicated energy crops can achieve high levels of degradation and biogas production. Slurry-only systems are generally cheaper, but generate far less energy than those using crops, such as maize and grass silage; by using a modest amount of crop material (30%), an anaerobic digestion plant can increase energy output tenfold for only three times the capital cost, relative to a slurry-only system.
Moisture content
A second consideration related to the feedstock is moisture content. Drier, stackable substrates, such as food and yard waste, are suitable for digestion in tunnel-like chambers. Tunnel-style systems typically have near-zero wastewater discharge as well, so this style of system has advantages where the discharge of digester liquids is a liability. The wetter the material, the more suitable it will be for handling with standard pumps instead of energy-intensive concrete pumps and physical means of movement. Also, the wetter the material, the more volume and area it takes up relative to the levels of gas produced. The moisture content of the target feedstock will also affect what type of system is applied to its treatment. To use a high-solids anaerobic digester for dilute feedstocks, bulking agents, such as compost, should be applied to increase the solids content of the input material. Another key consideration is the carbon-to-nitrogen (C:N) ratio of the input material. This ratio is the balance of food a microbe requires to grow; the optimal C:N ratio is 20–30:1. Excess N can lead to ammonia inhibition of digestion.
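For codigestion planning, the blended C:N ratio of a mix can be estimated as total carbon over total nitrogen; the sketch below uses invented feedstock compositions, not data from this article:

```python
# Blended C:N ratio of a codigestion mix, computed as total carbon over
# total nitrogen of the components.  Feedstock compositions vary widely;
# the percentages below are hypothetical placeholders.

def blended_cn_ratio(components):
    """components: list of (mass_kg, carbon_fraction, nitrogen_fraction)."""
    carbon = sum(m * c for m, c, n in components)
    nitrogen = sum(m * n for m, c, n in components)
    return carbon / nitrogen

mix = [
    (1000.0, 0.35, 0.025),   # dairy manure (hypothetical 35% C, 2.5% N of dry mass)
    (800.0, 0.45, 0.012),    # maize silage (hypothetical values)
]
print(round(blended_cn_ratio(mix), 1))   # ~20.5, inside the 20-30:1 window quoted above
```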
Contamination
The level of contamination of the feedstock material is a key consideration when using wet digestion or plug-flow digestion.
If the feedstock to the digesters has significant levels of physical contaminants, such as plastic, glass, or metals, then processing to remove the contaminants will be required for the material to be used. If it is not removed, then the digesters can be blocked and will not function efficiently. This contamination issue does not occur with dry digestion or solid-state anaerobic digestion (SSAD) plants, since SSAD handles dry, stackable biomass with a high percentage of solids (40-60%) in gas-tight chambers called fermenter boxes. It is with this understanding that mechanical biological treatment plants are designed. The higher the level of pretreatment a feedstock requires, the more processing machinery will be required, and, hence, the project will have higher capital costs.
After sorting or screening to remove any physical contaminants from the feedstock, the material is often shredded, minced, and mechanically or hydraulically pulped to increase the surface area available to microbes in the digesters and, hence, increase the speed of digestion. The maceration of solids can be achieved by using a chopper pump to transfer the feedstock material into the airtight digester, where anaerobic treatment takes place.
Substrate composition
Substrate composition is a major factor in determining the methane yield and methane production rates from the digestion of biomass. Techniques to determine the compositional characteristics of the feedstock are available, while parameters such as solids, elemental, and organic analyses are important for digester design and operation. Methane yield can be estimated from the elemental composition of substrate along with an estimate of its degradability (the fraction of the substrate that is converted to biogas in a reactor). In order to predict biogas composition (the relative fractions of methane and carbon dioxide) it is necessary to estimate carbon dioxide partitioning between the aqueous and gas phases, which requires additional information (reactor temperature, pH, and substrate composition) and a chemical speciation model. Direct measurements of biomethanation potential are also made using gas evolution or more recent gravimetric assays.
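One common way to make such an estimate is the classical Buswell-type stoichiometry, sketched below under the simplifying assumptions of complete degradation and no growth of new biomass:

```python
# Buswell-type estimate of methane and carbon dioxide from the elemental
# composition CcHhOoNn, assuming complete degradation and no biomass growth.

def buswell(c: float, h: float, o: float, n: float = 0.0):
    ch4 = c / 2 + h / 8 - o / 4 - 3 * n / 8   # mol CH4 per mol substrate
    co2 = c / 2 - h / 8 + o / 4 + 3 * n / 8   # mol CO2 per mol substrate
    return ch4, co2, ch4 / (ch4 + co2)        # last value: CH4 fraction of biogas

# Glucose C6H12O6 reproduces the 50:50 split of the overall reaction given earlier
print(buswell(6, 12, 6))          # (3.0, 3.0, 0.5)
# A fat-like substrate, e.g. glyceryl trioleate C57H104O6, gives a CH4-rich gas
print(buswell(57, 104, 6))        # roughly 70% CH4
```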
Applications
Using anaerobic digestion technologies can help to reduce the emission of greenhouse gases in a number of key ways:
Replacement of fossil fuels
Reducing or eliminating the energy footprint of waste treatment plants
Reducing methane emission from landfills
Displacing industrially produced chemical fertilizers
Reducing vehicle movements
Reducing electrical grid transportation losses
Reducing usage of LP Gas for cooking
An important component of the Zero Waste initiatives.
Waste and wastewater treatment
Anaerobic digestion is particularly suited to organic material, and is commonly used for industrial effluent, wastewater and sewage sludge treatment. Anaerobic digestion, a simple process, can greatly reduce the amount of organic matter which might otherwise be destined to be dumped at sea, dumped in landfills, or burnt in incinerators.
Pressure from environmentally related legislation on solid waste disposal methods in developed countries has increased the application of anaerobic digestion as a process for reducing waste volumes and generating useful byproducts. It may either be used to process the source-separated fraction of municipal waste or alternatively combined with mechanical sorting systems, to process residual mixed municipal waste. These facilities are called mechanical biological treatment plants.
If the putrescible waste processed in anaerobic digesters were disposed of in a landfill, it would break down naturally and often anaerobically. In this case, the gas will eventually escape into the atmosphere. As methane is about 20 times more potent as a greenhouse gas than carbon dioxide, this has significant negative environmental effects.
In countries that collect household waste, the use of local anaerobic digestion facilities can help to reduce the amount of waste that requires transportation to centralized landfill sites or incineration facilities. This reduced burden on transportation reduces carbon emissions from the collection vehicles. If localized anaerobic digestion facilities are embedded within an electrical distribution network, they can help reduce the electrical losses associated with transporting electricity over a national grid.
Anaerobic digestion can be used for the remediation of sludge polluted with PFAS. A 2024 study has shown that anaerobic digestion, combined with adsorption on activated carbon and voltage application, can remove up to 61% of PFAS from sewage sludge.
Power generation
In developing countries, simple home and farm-based anaerobic digestion systems offer the potential for low-cost energy for cooking and lighting.
From 1975, China and India have both had large, government-backed schemes for adaptation of small biogas plants for use in the household for cooking and lighting. At present, projects for anaerobic digestion in the developing world can gain financial support through the United Nations Clean Development Mechanism if they are able to show they provide reduced carbon emissions.
Methane and power produced in anaerobic digestion facilities can be used to replace energy derived from fossil fuels, and hence reduce emissions of greenhouse gases, because the carbon in biodegradable material is part of a carbon cycle. The carbon released into the atmosphere from the combustion of biogas has been removed by plants for them to grow in the recent past, usually within the last decade, but more typically within the last growing season. If the plants are regrown, taking the carbon out of the atmosphere once more, the system will be carbon neutral. In contrast, carbon in fossil fuels has been sequestered in the earth for many millions of years, the combustion of which increases the overall levels of carbon dioxide in the atmosphere. Power generation through anaerobic digesters is best suited to large-scale operations, rather than small farms, as large operations have the volume of manure that is able to make the systems financially viable.
Biogas from sewage sludge treatment is sometimes used to run a gas engine to produce electrical power, some or all of which can be used to run the sewage works. Some waste heat from the engine is then used to heat the digester. The waste heat is, in general, enough to heat the digester to the required temperatures. The power potential from sewage works is limited – in the UK, there are about 80 MW total of such generation, with the potential to increase to 150 MW, which is insignificant compared to the average power demand in the UK of about 35,000 MW. The scope for biogas generation from nonsewage waste biological matter – energy crops, food waste, abattoir waste, etc. - is much higher, estimated to be capable of about 3,000 MW. Farm biogas plants using animal waste and energy crops are expected to contribute to reducing CO2 emissions and strengthen the grid, while providing UK farmers with additional revenues.
Some countries offer incentives in the form of, for example, feed-in tariffs for feeding electricity onto the power grid to subsidize green energy production.
In Oakland, California at the East Bay Municipal Utility District's main wastewater treatment plant (EBMUD), food waste is currently codigested with primary and secondary municipal wastewater solids and other high-strength wastes. Compared to municipal wastewater solids digestion alone, food waste codigestion has many benefits. Anaerobic digestion of food waste pulp from the EBMUD food waste process provides a higher normalized energy benefit, compared to municipal wastewater solids: 730 to 1,300 kWh per dry ton of food waste applied compared to 560 to 940 kWh per dry ton of municipal wastewater solids applied.
Grid injection
Biogas grid-injection is the injection of biogas into the natural gas grid. The raw biogas has to be previously upgraded to biomethane. This upgrading implies the removal of contaminants such as hydrogen sulphide or siloxanes, as well as the carbon dioxide. Several technologies are available for this purpose, the most widely implemented being pressure swing adsorption (PSA), water or amine scrubbing (absorption processes) and, in recent years, membrane separation. As an alternative, the electricity and the heat can be used for on-site generation, resulting in a reduction of losses in the transportation of energy. Typical energy losses in natural gas transmission systems range from 1–2%, whereas the current energy losses on a large electrical system range from 5–8%.
In October 2010, Didcot Sewage Works became the first in the UK to produce biomethane gas supplied to the national grid, for use in up to 200 homes in Oxfordshire. By 2017, the UK electricity firm Ecotricity planned to have a digester fed by locally sourced grass fuelling 6,000 homes.
Vehicle fuel
After upgrading with the above-mentioned technologies, the biogas (transformed into biomethane) can be used as vehicle fuel in adapted vehicles. This use is very extensive in Sweden, where over 38,600 gas vehicles exist, and 60% of the vehicle gas is biomethane generated in anaerobic digestion plants.
Fertiliser and soil conditioner
The solid, fibrous component of the digested material can be used as a soil conditioner to increase the organic content of soils. Digester liquor can be used as a fertiliser to supply vital nutrients to soils instead of chemical fertilisers that require large amounts of energy to produce and transport. The use of manufactured fertilisers is, therefore, more carbon-intensive than the use of anaerobic digester liquor fertiliser. In countries such as Spain, where many soils are organically depleted, the markets for the digested solids can be equally as important as the biogas.
Cooking gas
By using a bio-digester, which contains the bacteria required for decomposition, cooking gas can be generated. Organic waste, such as fallen leaves, kitchen waste and food waste, is fed into a crusher unit, where it is mixed with a small amount of water. The mixture is then fed into the bio-digester, where the archaea decompose it to produce cooking gas. This gas is piped to the kitchen stove. A 2 cubic meter bio-digester can produce 2 cubic meters of cooking gas, which is equivalent to 1 kg of LPG. A notable additional advantage of using a bio-digester is the sludge, which is a rich organic manure.
Products
The three principal products of anaerobic digestion are biogas, digestate, and water.
Biogas
Biogas is the ultimate waste product of the bacteria feeding off the input biodegradable feedstock (the methanogenesis stage of anaerobic digestion is performed by archaea, micro-organisms on a distinctly different branch of the phylogenetic tree of life from bacteria), and is mostly methane and carbon dioxide, with a small amount of hydrogen and trace amounts of hydrogen sulfide. (As produced, biogas also contains water vapor, with the fractional water vapor volume a function of biogas temperature.) Most of the biogas is produced during the middle of the digestion, after the bacterial population has grown, and tapers off as the putrescible material is exhausted. The gas is normally stored on top of the digester in an inflatable gas bubble or extracted and stored next to the facility in a gas holder.
The methane in biogas can be burned to produce both heat and electricity, usually with a reciprocating engine or microturbine often in a cogeneration arrangement where the electricity and waste heat generated are used to warm the digesters or to heat buildings. Excess electricity can be sold to suppliers or put into the local grid. Electricity produced by anaerobic digesters is considered to be renewable energy and may attract subsidies. Biogas does not contribute to increasing atmospheric carbon dioxide concentrations because the gas is not released directly into the atmosphere and the carbon dioxide comes from an organic source with a short carbon cycle.
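A back-of-the-envelope estimate of the electricity available from a given biogas volume can be made as below; the methane heating value, methane fraction and engine efficiency are assumed typical figures, not values from this article:

```python
# Rough electrical output from biogas in a CHP engine.  The methane
# lower heating value and engine efficiency below are typical textbook
# figures assumed for illustration.

LHV_CH4_KWH_PER_M3 = 9.97     # kWh per normal m3 of methane (assumed)

def chp_electricity_kwh(biogas_m3: float, ch4_fraction: float = 0.6,
                        electrical_efficiency: float = 0.38) -> float:
    return biogas_m3 * ch4_fraction * LHV_CH4_KWH_PER_M3 * electrical_efficiency

# 1,000 m3 of 60% methane biogas through a 38%-efficient gas engine
print(round(chp_electricity_kwh(1000.0)))   # ~2,273 kWh of electricity
```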
Biogas may require treatment or 'scrubbing' to refine it for use as a fuel. Hydrogen sulfide, a toxic product formed from sulfates in the feedstock, is released as a trace component of the biogas. National environmental enforcement agencies, such as the U.S. Environmental Protection Agency or the English and Welsh Environment Agency, put strict limits on the levels of gases containing hydrogen sulfide, and, if the levels of hydrogen sulfide in the gas are high, gas scrubbing and cleaning equipment (such as amine gas treating) will be needed to process the biogas to within regionally accepted levels. Alternatively, the addition of ferrous chloride FeCl2 to the digestion tanks inhibits hydrogen sulfide production.
Volatile siloxanes can also contaminate the biogas; such compounds are frequently found in household waste and wastewater. In digestion facilities accepting these materials as a component of the feedstock, low-molecular-weight siloxanes volatilise into biogas. When this gas is combusted in a gas engine, turbine, or boiler, siloxanes are converted into silicon dioxide (SiO2), which deposits internally in the machine, increasing wear and tear. Practical and cost-effective technologies to remove siloxanes and other biogas contaminants are available at the present time. In certain applications, in situ treatment can be used to increase the methane purity by reducing the offgas carbon dioxide content, purging the majority of it in a secondary reactor.
In countries such as Switzerland, Germany, and Sweden, the methane in the biogas may be compressed for use as a vehicle transportation fuel or injected directly into the gas mains. In countries where the driver for the use of anaerobic digestion is renewable electricity subsidies, this route of treatment is less likely, as energy is required in this processing stage and reduces the overall levels available to sell.
Digestate
Digestate is the solid remnants of the original input material to the digesters that the microbes cannot use. It also consists of the mineralised remains of the dead bacteria from within the digesters. Digestate can come in three forms: fibrous, liquor, or a sludge-based combination of the two fractions. In two-stage systems, different forms of digestate come from different digestion tanks. In single-stage digestion systems, the two fractions will be combined and, if desired, separated by further processing.
The second byproduct (acidogenic digestate) is a stable, organic material consisting largely of lignin and cellulose, but also of a variety of mineral components in a matrix of dead bacterial cells; some plastic may be present. The material resembles domestic compost and can be used as such or to make low-grade building products, such as fibreboard.
The solid digestate can also be used as feedstock for ethanol production.
The third byproduct is a liquid (methanogenic digestate) rich in nutrients, which can be used as a fertiliser, depending on the quality of the material being digested. Levels of potentially toxic elements (PTEs) should be chemically assessed. This will depend upon the quality of the original feedstock. In the case of most clean and source-separated biodegradable waste streams, the levels of PTEs will be low. In the case of wastes originating from industry, the levels of PTEs may be higher and will need to be taken into consideration when determining a suitable end use for the material.
Digestate typically contains elements, such as lignin, that cannot be broken down by the anaerobic microorganisms. Also, the digestate may contain ammonia that is phytotoxic, and may hamper the growth of plants if it is used as a soil-improving material. For these two reasons, a maturation or composting stage may be employed after digestion. Lignin and other materials are available for degradation by aerobic microorganisms, such as fungi, helping reduce the overall volume of the material for transport. During this maturation, the ammonia will be oxidized into nitrates, improving the fertility of the material and making it more suitable as a soil improver. Large composting stages are typically used by dry anaerobic digestion technologies.
Wastewater
The final output from anaerobic digestion systems is water, which originates both from the moisture content of the original waste that was treated and water produced during the microbial reactions in the digestion systems. This water may be released from the dewatering of the digestate or may be implicitly separate from the digestate.
The wastewater exiting the anaerobic digestion facility will typically have elevated levels of biochemical oxygen demand (BOD) and chemical oxygen demand (COD). These measures of the reactivity of the effluent indicate an ability to pollute. Some of this material is termed 'hard COD', meaning it cannot be accessed by the anaerobic bacteria for conversion into biogas. If this effluent were put directly into watercourses, it would negatively affect them by causing eutrophication. As such, further treatment of the wastewater is often required. This treatment will typically be an oxidation stage in which air is passed through the water in sequencing batch reactors or a reverse osmosis unit.
History
Reported scientific interest in the manufacturing of gas produced by the natural decomposition of organic matter dates from the 17th century, when Robert Boyle (1627-1691) and Stephen Hales (1677-1761) noted that disturbing the sediment of streams and lakes released flammable gas. In 1778, the Italian physicist Alessandro Volta (1745-1827), the father of electrochemistry, scientifically identified that gas as methane.
In 1808 Sir Humphry Davy proved the presence of methane in the gases produced by cattle manure. The first known anaerobic digester was built in 1859 at a leper colony in Bombay in India. In 1895, the technology was developed in Exeter, England, where a septic tank was used to generate gas for the sewer gas destructor lamp, a type of gas lighting. Also in England, in 1904, the first dual-purpose tank for both sedimentation and sludge treatment was installed in Hampton, London.
By the early 20th century, anaerobic digestion systems began to resemble the technology as it appears today. In 1906, Karl Imhoff created the Imhoff tank; an early form of anaerobic digester and model wastewater treatment system throughout the early 20th century. After 1920, closed tank systems began to replace the previously common use of anaerobic lagoons – covered earthen basins used to treat volatile solids. Research on anaerobic digestion began in earnest in the 1930s.
Around the time of World War I, production from biofuels slowed as petroleum production increased and its uses were identified. While fuel shortages during World War II re-popularized anaerobic digestion, interest in the technology decreased again after the war ended. Similarly, the 1970s energy crisis sparked interest in anaerobic digestion. In addition to high energy prices, factors affecting the adoption of anaerobic digestion systems include receptivity to innovation, pollution penalties, policy incentives, and the availability of subsidies and funding opportunities.
Modern geographical distribution
Today, anaerobic digesters are commonly found alongside farms to reduce nitrogen run-off from manure, or wastewater treatment facilities to reduce the costs of sludge disposal. Agricultural anaerobic digestion for energy production has become most popular in Germany, where there were 8,625 digesters in 2014. In the United Kingdom, there were 259 facilities by 2014, and 500 projects planned to become operational by 2019. In the United States, there were 191 operational plants across 34 states in 2012. Policy may explain why adoption rates are so different across these countries.
Feed-in tariffs in Germany were enacted in 1991, also known as FIT, providing long-term contracts compensating investments in renewable energy generation. Consequently, between 1991 and 1998 the number of anaerobic digester plants in Germany grew from 20 to 517. In the late 1990s, energy prices in Germany varied and investors became unsure of the market's potential. The German government responded by amending FIT four times between 2000 and 2011, increasing tariffs and improving the profitability of anaerobic digestion, and resulting in reliable returns for biogas production and continued high adoption rates across the country.
Incidents involving digesters
Anaerobic digesters have caused fish kills (e.g. in the River Mole, Devon; the River Teifi; and the Afon Llynfi) and loss of human life (e.g. the Avonmouth explosion).
There have been explosions of anaerobic digesters in the US (at Pixelle Specialty Solutions' Androscoggin Mill in Jay, Maine; a Kamyr digester explosion in Pensacola (Cantonment) on 22 January 2017; an EPDM failure in Aumsville, Oregon, in March 2013; in Pennsylvania on February 6, 1987, when two workers at a wastewater treatment plant re-draining a sewage digester were killed instantly after an explosion lifted the 30-ton floating cover; and at the Southwest Wastewater Treatment Plant in Springfield, Missouri) and in the UK (for example at Avonmouth and at Harper Adams College, Newport, Shropshire). In Europe, there were about 800 accidents at biogas plants between 2005 and 2015, for example in France (Saint-Fargeau), though few of them were 'serious' with direct consequences for the human population; according to one source, 'less than a dozen of them had consequences on humans', for example the incident at Rhadereistedt, Germany (4 dead).
Safety analyses have included a 2016 study that compiled a database of 169 accidents involving anaerobic digesters.
See also
Anaerobic digester types
Anaerobic organism
Avonmouth explosion
Bioconversion of biomass to mixed alcohol fuels
Carbon dioxide air capture
Comparison of anaerobic and aerobic digestion
Environmental issues with energy
Global Methane Initiative
Hypoxia (environmental)
Methane capture
Microbiology of decomposition
Pasteur point
Relative cost of electricity generated by different sources
Sanitation
Sewage treatment
Upflow anaerobic sludge blanket digestion (UASB)
References
External links
Biodegradable waste management
Biofuels
Environmental engineering
Hydrogen production
Mechanical biological treatment
Power station technology
Sewerage
Sustainable technologies
Water technology
Gas technologies
Renewable energy
Food waste | Anaerobic digestion | [
"Chemistry",
"Engineering"
] | 9,652 | [
"Water technology",
"Anaerobic digestion",
"Environmental engineering"
] |
1,545,928 | https://en.wikipedia.org/wiki/Aggregate%20%28composite%29 | Aggregate is the component of a composite material that resists compressive stress and provides bulk to the material. For efficient filling, aggregate should be much smaller than the finished item, but have a wide variety of sizes. Aggregates are generally added to lower the amount of binders needed and to increase the strength of composite materials.
Sand and gravel are used as construction aggregate with cement to make concrete and increase its mechanical strength. Aggregates make up 60-80% of the volume of concrete and 70-85% of the mass of concrete.
Comparison to fiber composites
Aggregate composites are easier to fabricate, and more predictable in their finished properties, than fiber composites. Fiber orientation and continuity can have a large effect, but can be difficult to control and assess. Aggregate materials are generally less expensive. Mineral aggregates are found in nature and can often be used with minimal processing.
Not all composite materials include aggregate. Aggregate particles tend to have about the same dimensions in every direction (that is, an aspect ratio of about one), so that aggregate composites do not display the level of synergy that fiber composites often do. A strong aggregate held together by a weak matrix will be weak in tension, whereas fibers can be less sensitive to matrix properties, especially if they are properly oriented and run the entire length of the part (i.e., a continuous filament).
Most composites are filled with particles whose aspect ratio lies somewhere between oriented filaments and spherical aggregates. A good compromise is chopped fiber, where the performance of filament or cloth is traded off in favor of more aggregate-like processing techniques. Ellipsoid and plate-shaped aggregates are also sometimes used.
Properties
In most cases, the ideal finished piece would be 100% aggregate. A given application's most desirable quality (be it high strength, low cost, high dielectric constant, or low density) is usually most prominent in the aggregate itself. However, the aggregate lacks the ability of a liquid to flow and fill up a volume, and to form attachments between particles.
Aggregate size
Experiments and mathematical models show that more of a given volume can be filled with hard spheres if it is first filled with large spheres, then the spaces between (interstices) are filled with smaller spheres, and the new interstices filled with still smaller spheres as many times as possible. For this reason, control of particle size distribution can be quite important in the choice of aggregate; appropriate simulations or experiments are necessary to determine the optimal proportions of different-sized particles.
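A crude way to see the benefit of successive size classes is to let each generation of much smaller spheres fill the remaining voids to the random-close-packing fraction; the sketch below treats that fraction (about 0.64) as an assumed idealisation:

```python
# Idealised multi-scale packing: each generation of much smaller spheres
# fills the remaining voids to the random-close-packing fraction (~0.64).
# Real aggregates interact across size classes, so this is only a
# bound-style estimate, not a design calculation.

RCP = 0.64  # random close packing fraction for equal spheres (assumed)

def packed_fraction(stages: int) -> float:
    filled = 0.0
    void = 1.0
    for _ in range(stages):
        filled += void * RCP
        void *= (1.0 - RCP)
    return filled

for k in range(1, 5):
    print(k, round(packed_fraction(k), 3))
# 1 -> 0.64, 2 -> 0.87, 3 -> 0.953, 4 -> 0.983: more size classes fill more space
```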
The upper limit to particle size depends on the amount of flow required before the composite sets (the gravel in paving concrete can be fairly coarse, but fine sand must be used for tile mortar), whereas the lower limit is due to the thickness of matrix material at which its properties change (clay is not included in concrete because it would "absorb" the matrix, preventing a strong bond to other aggregate particles). Particle size distribution is also the subject of much study in the fields of ceramics and powder metallurgy.
Toughened composites
Toughness is a compromise between the (often contradictory) requirements of strength and plasticity. In many cases, the aggregate will have one of these properties, and will benefit if the matrix can add what it lacks. Perhaps the most accessible examples of this are composites with an organic matrix and ceramic aggregate, such as asphalt concrete ("tarmac") and filled plastic (i.e., Nylon mixed with powdered glass), although most metal matrix composites also benefit from this effect. In this case, the correct balance of hard and soft components is necessary or the material will become either too weak or too brittle.
Nanocomposites
Many materials properties change radically at small length scales (see nanotechnology). In the case where this change is desirable, a certain range of aggregate size is necessary to ensure good performance. This naturally sets a lower limit to the amount of matrix material used.
Unless some practical method is implemented to orient the particles in micro- or nano-composites, their small size and (usually) high strength relative to the particle-matrix bond allows any macroscopic object made from them to be treated as an aggregate composite in many respects.
While bulk synthesis of such nanoparticles as carbon nanotubes is currently too expensive for widespread use, some less extreme nanostructured materials can be synthesized by traditional methods, including electrospinning and spray pyrolysis. One important aggregate made by spray pyrolysis is glass microspheres. Often called microballoons, they consist of a hollow shell several tens of nanometers thick and approximately one micrometer in diameter. Casting them in a polymer matrix yields syntactic foam, with extremely high compressive strength for its low density.
Many traditional nanocomposites escape the problem of aggregate synthesis in one of two ways:
Natural aggregates: By far the most widely used aggregates for nano-composites are naturally occurring. Usually these are ceramic materials whose crystalline structure is extremely directional, allowing it to be easily separated into flakes or fibers. The nanotechnology touted by General Motors for automotive use is in the former category: a fine-grained clay with a laminar structure suspended in a thermoplastic olefin (a class which includes many common plastics like polyethylene and polypropylene). The latter category includes fibrous asbestos composites (popular in the mid-20th century), often with matrix materials such as linoleum and Portland cement.
In-situ aggregate formation: Many micro-composites form their aggregate particles by a process of self-assembly. For example, in high impact polystyrene, two immiscible phases of polymer (including brittle polystyrene and rubbery polybutadiene) are mixed together. Special molecules (graft copolymers) include separate portions which are soluble in each phase, and so are only stable at the interface between them, in the manner of a detergent. Since the number of this type of molecule determines the interfacial area, and since spheres naturally form to minimize surface tension, synthetic chemists can control the size of polybutadiene droplets in the molten mix, which harden to form rubbery aggregates in a hard matrix. Dispersion strengthening is a similar example from the field of metallurgy. In glass-ceramics, the aggregate is often chosen to have a negative coefficient of thermal expansion, and the proportion of aggregate to matrix adjusted so that the overall expansion is very near zero. Aggregate size can be reduced so that the material is transparent to infrared light.
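For the glass-ceramic case just mentioned, a simple volume-weighted rule of mixtures (an assumption; real composites call for more refined models) gives a first estimate of the aggregate fraction needed for near-zero expansion:

```python
# Rule-of-mixtures estimate of the aggregate volume fraction that gives a
# near-zero overall coefficient of thermal expansion (CTE).  A simple
# volume-weighted average is assumed; the CTE values are hypothetical.

def zero_cte_aggregate_fraction(cte_aggregate: float, cte_matrix: float) -> float:
    """Solve v*a_agg + (1-v)*a_matrix = 0 for the aggregate volume fraction v."""
    return cte_matrix / (cte_matrix - cte_aggregate)

# Hypothetical values: aggregate -6e-6 /K, glassy matrix +9e-6 /K
v = zero_cte_aggregate_fraction(-6e-6, 9e-6)
print(round(v, 2))   # 0.6: about 60% aggregate by volume for ~zero net expansion
```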
See also
Construction aggregate
Aggregate (geology)
Interfacial Transition Zone (ITZ)
Saturated-surface-dry
References
Aggregate (composite)
Concrete
Composite materials
Granularity of materials | Aggregate (composite) | [
"Physics",
"Chemistry",
"Engineering"
] | 1,396 | [
"Structural engineering",
"Composite materials",
"Materials",
"Concrete",
"Particle technology",
"Granularity of materials",
"Matter"
] |
1,546,092 | https://en.wikipedia.org/wiki/Electrical%20mobility | Electrical mobility is the ability of charged particles (such as electrons or protons) to move through a medium in response to an electric field that is pulling them. The separation of ions according to their mobility in gas phase is called ion mobility spectrometry, in liquid phase it is called electrophoresis.
Theory
When a charged particle in a gas or liquid is acted upon by a uniform electric field, it will be accelerated until it reaches a constant drift velocity according to the formula
$v_\text{d} = \mu E,$
where
$v_\text{d}$ is the drift velocity (SI units: m/s),
$E$ is the magnitude of the applied electric field (V/m),
$\mu$ is the mobility (m2/(V·s)).
In other words, the electrical mobility of the particle is defined as the ratio of the drift velocity to the magnitude of the electric field:
$\mu = \frac{v_\text{d}}{E}.$
For example, the mobility of the sodium ion (Na+) in water at 25 °C is $5.19 \times 10^{-8}\ \mathrm{m^2/(V \cdot s)}$. This means that a sodium ion in an electric field of 1 V/m would have an average drift velocity of $5.19 \times 10^{-8}\ \mathrm{m/s}$. Such values can be obtained from measurements of ionic conductivity in solution.
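The conversion from a tabulated ionic conductivity to a mobility and a drift velocity can be sketched as follows; the limiting molar conductivity used for Na+ is an assumed standard tabulated value rather than one stated in this article:

```python
# Converting a limiting molar ionic conductivity into an electrical mobility
# and a drift velocity, for a singly charged ion (z = 1).

F = 96485.0            # Faraday constant, C/mol
LAMBDA_NA = 50.1e-4    # limiting molar conductivity of Na+, S*m^2/mol (assumed value)

mobility = LAMBDA_NA / F           # m^2/(V*s)
drift_velocity = mobility * 1.0    # m/s in a 1 V/m field

print(f"mobility  = {mobility:.2e} m^2/(V*s)")    # ~5.2e-08
print(f"drift vel = {drift_velocity:.2e} m/s")    # ~5.2e-08 in a 1 V/m field
```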
Electrical mobility is proportional to the net charge of the particle. This was the basis for Robert Millikan's demonstration that electrical charges occur in discrete units, whose magnitude is the charge of the electron.
Electrical mobility is also inversely proportional to the Stokes radius $a$ of the ion, which is the effective radius of the moving ion including any molecules of water or other solvent that move with it. This is true because the solvated ion moving at a constant drift velocity is subject to two equal and opposite forces: an electrical force $qE$ and a frictional force $f v_\text{d} = 6 \pi \eta a \, v_\text{d}$, where $f$ is the frictional coefficient and $\eta$ is the solution viscosity. For different ions with the same charge such as Li+, Na+ and K+ the electrical forces are equal, so that the drift speed and the mobility are inversely proportional to the radius $a$. In fact, conductivity measurements show that ionic mobility increases from Li+ to Cs+, and therefore that the Stokes radius decreases from Li+ to Cs+. This is the opposite of the order of ionic radii for crystals and shows that in solution the smaller ions (Li+) are more extensively hydrated than the larger (Cs+).
Mobility in gas phase
Mobility is defined for any species in the gas phase, encountered mostly in plasma physics, and is defined as
$\mu = \frac{q}{m \, \nu_\text{m}},$
where
$q$ is the charge of the species,
$\nu_\text{m}$ is the momentum-transfer collision frequency,
$m$ is the mass.
Mobility is related to the species' diffusion coefficient $D$ through an exact (thermodynamically required) equation known as the Einstein relation:
$\mu = \frac{q D}{k_\text{B} T},$
where
$k_\text{B}$ is the Boltzmann constant,
$T$ is the gas temperature,
$D$ is the diffusion coefficient.
If one defines the mean free path in terms of momentum transfer, then one gets for the diffusion coefficient
But both the momentum-transfer mean free path and the momentum-transfer collision frequency are difficult to calculate. Many other mean free paths can be defined. In the gas phase, $\lambda$ is often defined as the diffusional mean free path, by assuming that a simple approximate relation is exact:
$D = \frac{1}{2} \lambda \bar{v},$
where $\bar{v}$ is the root mean square speed of the gas molecules:
$\bar{v} = \sqrt{\frac{3 k_\text{B} T}{m}},$
where $m$ is the mass of the diffusing species. This approximate equation becomes exact when used to define the diffusional mean free path.
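A short sketch ties the gas-phase definitions together; the ion mass and collision frequency below are invented placeholders chosen only to exercise the formulas:

```python
# Gas-phase mobility and the Einstein relation, evaluated for a singly
# charged ion.  The mass and collision frequency are hypothetical values.

K_B = 1.380649e-23   # Boltzmann constant, J/K
Q_E = 1.602177e-19   # elementary charge, C

def mobility(q: float, m: float, nu_m: float) -> float:
    """mu = q / (m * nu_m)"""
    return q / (m * nu_m)

def diffusion_from_mobility(mu: float, q: float, t_kelvin: float) -> float:
    """Einstein relation rearranged: D = mu * kB * T / q"""
    return mu * K_B * t_kelvin / q

m_ion = 4.8e-26      # kg, roughly a 29 u molecular ion (hypothetical)
nu_m = 1.0e9         # 1/s, momentum-transfer collision frequency (hypothetical)

mu = mobility(Q_E, m_ion, nu_m)
print(mu)                                     # ~3.3e-03 m^2/(V*s)
print(diffusion_from_mobility(mu, Q_E, 300))  # ~8.6e-05 m^2/s at 300 K
```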
Applications
Electrical mobility is the basis for electrostatic precipitation, used to remove particles from exhaust gases on an industrial scale. The particles are given a charge by exposing them to ions from an electrical discharge in the presence of a strong field. The particles acquire an electrical mobility and are driven by the field to a collecting electrode.
Instruments exist which select particles with a narrow range of electrical mobility, or particles with electrical mobility larger than a predefined value. The former are generally referred to as "differential mobility analyzers". The selected mobility is often identified with the diameter of a singly charged spherical particle, thus the "electrical-mobility diameter" becomes a characteristic of the particle, regardless of whether it is actually spherical.
Passing particles of the selected mobility to a detector such as a condensation particle counter allows the number concentration of particles with the currently selected mobility to be measured. By varying the selected mobility over time, mobility vs concentration data may be obtained. This technique is applied in scanning mobility particle sizers.
References
Physical quantities
Electrophoresis
Mass spectrometry | Electrical mobility | [
"Physics",
"Chemistry",
"Mathematics",
"Biology"
] | 878 | [
"Physical phenomena",
"Physical quantities",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Quantity",
"Mass",
"Biochemical separation processes",
"Molecular biology techniques",
"Mass spectrometry",
"Electrophoresis",
"Physical properties",
"Matter"
] |
1,546,216 | https://en.wikipedia.org/wiki/Spirit%20of%20place | Spirit of place (or soul) refers to the unique, distinctive and cherished aspects of a place; often those celebrated by artists and writers, but also those cherished in folk tales, festivals and celebrations. It is thus as much in the invisible weave of culture (stories, art, memories, beliefs, histories, etc.) as it is the tangible physical aspects of a place (monuments, boundaries, rivers, woods, architectural style, rural crafts styles, pathways, views, and so on) or its interpersonal aspects (the presence of relatives, friends and kindred spirits, and the like).
Often the term is applied to a rural or a relatively unspoiled or regenerated place — whereas the very similar term sense of place would tend to be more domestic, urban, or suburban in tone. For instance, one could logically apply 'sense of place' to an urban high street; noting the architecture, the width of the roads and pavements, the plantings, the style of the shop-fronts, the street furniture, and so on, but one could not really talk about the 'spirit of place' of such an essentially urban and commercial environment. However, an urban area that looks faceless or neglected to an adult may have deep meaning in children's street culture.
The Roman term for spirit of place was Genius loci, by which it is sometimes still referred. This has often been historically envisaged as a guardian animal or a small supernatural being (puck, fairy, elf, and the like) or a ghost. In the developed world these beliefs have been, for the most part, discarded. A new layer of less-embodied superstition on the subject, however, has arisen around ley lines, feng shui and similar concepts, on the one hand, and urban leftover spaces, such as back alleys or gaps between buildings in some North-American downtown areas, on the other hand.
The western cultural movements of Romanticism and Neo-romanticism are often deeply concerned with creating cultural forms that 're-enchant the land', in order to establish or re-establish a spirit of place.
Modern earth art (sometimes called environment art) artists such as Andy Goldsworthy have explored the contribution of natural/ephemeral sculpture to spirit of place.
Many indigenous and tribal cultures around the world are deeply concerned with spirits of place in their landscape. Spirits of place are explicitly recognized by some of the world's main religions: Shinto has its Kami which may incorporate spirits of place; and the Dvarapalas and Lokapalas in Hinduism, Vajrayana and Bonpo traditions.
See also
Bioregionalism
Common Ground (United Kingdom)
Cultural landscape
Cultural region
Deep map
Genius loci
Landvættir
Nature writing
Parochialism
Psychogeography
Topophilia
References
External links
Common Ground (UK)
The arts
Cultural geography
Psychogeography
Landscape design history
Gardening
Landscape architecture
Environmental design | Spirit of place | [
"Engineering"
] | 604 | [
"Environmental design",
"Design",
"Landscape architecture",
"Architecture"
] |
162,132 | https://en.wikipedia.org/wiki/Derangement | In combinatorial mathematics, a derangement is a permutation of the elements of a set in which no element appears in its original position. In other words, a derangement is a permutation that has no fixed points.
The number of derangements of a set of size n is known as the subfactorial of n or the n-th derangement number or n-th de Montmort number (after Pierre Remond de Montmort). Notations for subfactorials in common use include !n, Dn, dn, or n¡.
For n ≥ 1, the subfactorial !n equals the nearest integer to n!/e, where n! denotes the factorial of n and e is Euler's number.
The problem of counting derangements was first considered by Pierre Raymond de Montmort in his Essay d'analyse sur les jeux de hazard in 1708; he solved it in 1713, as did Nicholas Bernoulli at about the same time.
Example
Suppose that a professor gave a test to 4 students – A, B, C, and D – and wants to let them grade each other's tests. Of course, no student should grade their own test. How many ways could the professor hand the tests back to the students for grading, such that no student receives their own test back? Out of 24 possible permutations (4!) for handing back the tests,
{| style="font:125% monospace;line-height:1;border-collapse:collapse;"
|ABCD,
|ABDC,
|ACBD,
|ACDB,
|ADBC,
|ADCB,
|-
|BACD,
|BADC,
|BCAD,
|BCDA,
|BDAC,
|BDCA,
|-
|CABD,
|CADB,
|CBAD,
|CBDA,
|CDAB,
|CDBA,
|-
|DABC,
|DACB,
|DBAC,
|DBCA,
|DCAB,
|DCBA.
|}
there are only 9 derangements (shown in blue italics above). In every other permutation of this 4-member set, at least one student gets their own test back (shown in bold red).
Another version of the problem arises when we ask for the number of ways n letters, each addressed to a different person, can be placed in n pre-addressed envelopes so that no letter appears in the correctly addressed envelope.
Counting derangements
Counting derangements of a set amounts to the hat-check problem, in which one considers the number of ways in which n hats (call them h1 through hn) can be returned to n people (P1 through Pn) such that no hat makes it back to its owner.
Each person may receive any of the n − 1 hats that is not their own. Call the hat which the person P1 receives hi and consider his owner: Pi receives either P1's hat, h1, or some other. Accordingly, the problem splits into two possible cases:
Pi receives a hat other than h1. This case is equivalent to solving the problem with n − 1 people and n − 1 hats because for each of the n − 1 people besides P1 there is exactly one hat from among the remaining n − 1 hats that they may not receive (for any Pj besides Pi, the unreceivable hat is hj, while for Pi it is h1). Another way to see this is to rename h1 to hi, where the derangement is more explicit: for any j from 2 to n, Pj cannot receive hj.
Pi receives h1. In this case the problem reduces to n − 2 people and n − 2 hats, because P1 received his hat and Pi received h1's hat, effectively putting both out of further consideration.
For each of the n − 1 hats that P1 may receive, the number of ways that P2, ..., Pn may all receive hats is the sum of the counts for the two cases.
This gives us the solution to the hat-check problem: Stated algebraically, the number !n of derangements of an n-element set is
<math>!n = (n - 1) \bigl( {!(n-1)} + {!(n-2)} \bigr)</math> for n ≥ 2,
where !0 = 1 and !1 = 0.
The number of derangements of small lengths is given in the table below.
There are various other expressions for !n, equivalent to the formula given above. These include
<math>!n = n! \sum_{i=0}^{n} \frac{(-1)^i}{i!}</math> for n ≥ 0
and
<math>!n = \left[ \frac{n!}{e} \right] = \left\lfloor \frac{n!}{e} + \frac{1}{2} \right\rfloor</math> for n ≥ 1,
where <math>\left[ \cdot \right]</math> is the nearest integer function and <math>\lfloor \cdot \rfloor</math> is the floor function.
Other related formulas include
<math>!n = \left\lfloor \frac{n! + 1}{e} \right\rfloor</math> for n ≥ 1,
and
<math>!n = \left\lfloor \left( e + e^{-1} \right) n! \right\rfloor - \lfloor e \, n! \rfloor</math> for n ≥ 2.
The following recurrence also holds:
<math>!n = n \cdot {!(n-1)} + (-1)^n</math> for n ≥ 1.
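A short computational cross-check of the recurrences above (a minimal sketch; the comparison with the nearest-integer formula is restricted to small n so that floating-point rounding cannot interfere):
<syntaxhighlight lang="python">
from math import e, factorial

def subfactorial(n):
    """Number of derangements !n via the recurrence !n = (n-1)(!(n-1) + !(n-2))."""
    if n == 0:
        return 1
    if n == 1:
        return 0
    d2, d1 = 1, 0                        # !0 and !1
    for k in range(2, n + 1):
        d2, d1 = d1, (k - 1) * (d1 + d2)
    return d1

for n in range(13):
    value = subfactorial(n)
    print(n, value)
    if n >= 1:
        assert value == round(factorial(n) / e)   # nearest integer to n!/e
</syntaxhighlight>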
Derivation by inclusion–exclusion principle
One may derive a non-recursive formula for the number of derangements of an n-set, as well. For 1 ≤ k ≤ n we define S_k to be the set of permutations of n objects that fix the k-th object. Any intersection of a collection of i of these sets fixes a particular set of i objects and therefore contains (n − i)! permutations. There are <math>\binom{n}{i}</math> such collections, so the inclusion–exclusion principle yields
<math>|S_1 \cup \dotsb \cup S_n| = \sum_{i=1}^{n} (-1)^{i+1} \binom{n}{i} (n - i)! = \sum_{i=1}^{n} (-1)^{i+1} \frac{n!}{i!}</math>
and since a derangement is a permutation that leaves none of the n objects fixed, this implies
<math>!n = n! - |S_1 \cup \dotsb \cup S_n| = n! \sum_{i=0}^{n} \frac{(-1)^i}{i!}.</math>
On the other hand, <math>n! = \sum_{i=0}^{n} \binom{n}{i} \, {!(n-i)}</math>, since we can choose i elements to be in their own place and
derange the other n − i elements in just !(n − i) ways, by definition.
Growth of number of derangements as n approaches ∞
From
<math>!n = n! \sum_{i=0}^{n} \frac{(-1)^i}{i!}</math>
and
<math>e^x = \sum_{i=0}^{\infty} \frac{x^i}{i!},</math>
by substituting x = −1 one immediately obtains that
<math>\lim_{n \to \infty} \frac{!n}{n!} = \frac{1}{e} \approx 0.3679\ldots</math>
This is the limit of the probability that a randomly selected permutation of a large number of objects is a derangement. The probability converges to this limit extremely quickly as n increases, which is why !n is the nearest integer to n!/e. A semi-log plot of the two sequences shows that the derangement curve lags the permutation curve by an almost constant value.
More information about this calculation and the above limit may be found in the article on the
statistics of random permutations.
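The 1/e limit is also easy to observe empirically. The following small simulation (an illustrative sketch with arbitrary choices of n and sample size) estimates the fraction of uniformly random permutations that are derangements:
<syntaxhighlight lang="python">
import random

def is_derangement(perm):
    return all(value != index for index, value in enumerate(perm))

n, trials = 20, 200_000
hits = sum(is_derangement(random.sample(range(n), n)) for _ in range(trials))
print(hits / trials)    # close to 1/e ≈ 0.3679 even for moderate n
</syntaxhighlight>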
Asymptotic expansion in terms of Bell numbers
An asymptotic expansion for the number of derangements in terms of Bell numbers is as follows:
<math>!n = \frac{n!}{e} + \sum_{k=1}^{m} (-1)^{n+k-1} \frac{B_k}{n^k} + O\!\left( \frac{1}{n^{m+1}} \right),</math>
where m is any fixed positive integer, and B_k denotes the k-th Bell number. Moreover, the constant implied by the big O-term does not exceed B_{m+1}.
Generalizations
The problème des rencontres asks how many permutations of a size-n set have exactly k fixed points.
Derangements are an example of the wider field of constrained permutations. For example, the ménage problem asks if n opposite-sex couples are seated man-woman-man-woman-... around a table, how many ways can they be seated so that nobody is seated next to his or her partner?
More formally, given sets A and S, and some sets U and V of surjections A → S, we often wish to know the number of pairs of functions (f, g) such that f is in U and g is in V, and for all a in A, f(a) ≠ g(a); in other words, where for each f and g, there exists a derangement φ of S such that f(a) = φ(g(a)).
Another generalization is the following problem:
How many anagrams with no fixed letters of a given word are there?
For instance, for a word made of only two different letters, say n letters A and m letters B, the answer is, of course, 1 or 0 according to whether n = m or not, for the only way to form an anagram without fixed letters is to exchange all the A with B, which is possible if and only if n = m. In the general case, for a word with n1 letters X1, n2 letters X2, ..., nr letters Xr, it turns out (after a proper use of the inclusion-exclusion formula) that the answer has the form
<math>\int_0^{\infty} P_{n_1}(x) P_{n_2}(x) \cdots P_{n_r}(x) \, e^{-x} \, dx</math>
for a certain sequence of polynomials Pn, where Pn has degree n. But the above answer for the case r = 2 gives an orthogonality relation, whence the Pn's are the Laguerre polynomials (up to a sign that is easily decided).
In particular, for the classical derangements, one has that
<math>!n = \frac{\Gamma(n+1, -1)}{e} = \int_0^{\infty} (x - 1)^n e^{-x} \, dx,</math>
where <math>\Gamma(s, x)</math> is the upper incomplete gamma function.
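As a small consistency check of the integral representation quoted above: when every letter of the word is distinct (all ni = 1), the product of polynomials reduces to (x − 1)^n, so the integral should return the classical derangement numbers. A symbolic verification, assuming SymPy is available:
<syntaxhighlight lang="python">
import sympy as sp

x = sp.symbols('x')

def subfactorial_via_integral(n):
    """Evaluate !n = integral from 0 to oo of (x - 1)**n * exp(-x) dx exactly."""
    return sp.integrate(sp.expand((x - 1)**n) * sp.exp(-x), (x, 0, sp.oo))

print([subfactorial_via_integral(n) for n in range(8)])
# [1, 0, 1, 2, 9, 44, 265, 1854]
</syntaxhighlight>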
Computational complexity
It is NP-complete to determine whether a given permutation group (described by a given set of permutations that generate it) contains any derangements.
{| class="wikitable collapsible collapsed" style="margin:0; width:100%"
|+ Table of factorial and derangement values
|-
! scope="col" |
! scope="col" class="nowrap" | Permutations,
! scope="col" class="nowrap" | Derangements,
! scope="col" |
|-
| style="text-align: center" | 0
| 1
=1×100
| 1
=1×100
| = 1
|-
| style="text-align: center" | 1
| 1
=1×100
| 0
| = 0
|-
| style="text-align: center" | 2
| 2
=2×100
| 1
=1×100
| = 0.5
|-
| style="text-align: center" | 3
| 6
=6×100
| 2
=2×100
|align="right"| ≈0.33333 33333
|-
| style="text-align: center" | 4
| 24
=2.4×101
| 9
=9×100
| = 0.375
|-style="border-top:2px solid #aaaaaa;"
| style="text-align: center" | 5
| 120
=1.20×102
| 44
=4.4×101
|align="right"| ≈0.36666 66667
|-
| style="text-align: center" | 6
| 720
=7.20×102
| 265
=2.65×102
|align="right"| ≈0.36805 55556
|-
| style="text-align: center" | 7
| 5,040
=5.04×103
| 1,854
≈1.85×103
|align="right"| ≈0.36785 71429
|-
| style="text-align: center" | 8
| 40,320
≈4.03×104
| 14,833
≈1.48×104
|align="right"| ≈0.36788 19444
|-
| style="text-align: center" | 9
| 362,880
≈3.63×105
| 133,496
≈1.33×105
|align="right"| ≈0.36787 91887
|-style="border-top:2px solid #aaaaaa;"
| style="text-align: center" | 10
| 3,628,800
≈3.63×106
| 1,334,961
≈1.33×106
|align="right"| ≈0.36787 94643
|-
| style="text-align: center" | 11
| 39,916,800
≈3.99×107
| 14,684,570
≈1.47×107
|align="right"| ≈0.36787 94392
|-
| style="text-align: center" | 12
| 479,001,600
≈4.79×108
| 176,214,841
≈1.76×108
|align="right"| ≈0.36787 94413
|-
| style="text-align: center" | 13
| 6,227,020,800
≈6.23×109
| 2,290,792,932
≈2.29×109
|align="right"| ≈0.36787 94412
|-
| style="text-align: center" | 14
| 87,178,291,200
≈8.72×1010
| 32,071,101,049
≈3.21×1010
|align="right"| ≈0.36787 94412
|-style="border-top:2px solid #aaaaaa;"
| style="text-align: center" | 15
|style="font-size:80%;"| 1,307,674,368,000
≈1.31×1012
|style="font-size:80%;"| 481,066,515,734
≈4.81×1011
|align="right"| ≈0.36787 94412
|-
| style="text-align: center" | 16
|style="font-size:80%;"| 20,922,789,888,000
≈2.09×1013
|style="font-size:80%;"| 7,697,064,251,745
≈7.70×1012
|align="right"| ≈0.36787 94412
|-
| style="text-align: center" | 17
|style="font-size:80%;"| 355,687,428,096,000
≈3.56×1014
|style="font-size:80%;"| 130,850,092,279,664
≈1.31×1014
|align="right"| ≈0.36787 94412
|-
| style="text-align: center" | 18
|style="font-size:80%;"| 6,402,373,705,728,000
≈6.40×1015
|style="font-size:80%;"| 2,355,301,661,033,953
≈2.36×1015
|align="right"| ≈0.36787 94412
|-
| style="text-align: center" | 19
|style="font-size:80%;"| 121,645,100,408,832,000
≈1.22×1017
|style="font-size:80%;"| 44,750,731,559,645,106
≈4.48×1016
|align="right"| ≈0.36787 94412
|-style="border-top:2px solid #aaaaaa;"
| style="text-align: center" | 20
|style="font-size:80%;"| 2,432,902,008,176,640,000
≈2.43×1018
|style="font-size:80%;"| 895,014,631,192,902,121
≈8.95×1017
|align="right"| ≈0.36787 94412
|-
| style="text-align: center" | 21
|style="font-size:80%;"| 51,090,942,171,709,440,000
≈5.11×1019
|style="font-size:80%;"| 18,795,307,255,050,944,540
≈1.88×1019
|align="right"| ≈0.36787 94412
|-
| style="text-align: center" | 22
|style="font-size:80%;"| 1,124,000,727,777,607,680,000
≈1.12×1021
|style="font-size:80%;"| 413,496,759,611,120,779,881
≈4.13×1020
|align="right"| ≈0.36787 94412
|-
| style="text-align: center" | 23
|style="font-size:80%;"| 25,852,016,738,884,976,640,000
≈2.59×1022
|style="font-size:80%;"| 9,510,425,471,055,777,937,262
≈9.51×1021
|align="right"| ≈0.36787 94412
|-
| style="text-align: center" | 24
|style="font-size:80%;"| 620,448,401,733,239,439,360,000
≈6.20×1023
|style="font-size:80%;"| 228,250,211,305,338,670,494,289
≈2.28×1023
|align="right"| ≈0.36787 94412
|-style="border-top:2px solid #aaaaaa;"
| style="text-align: center" | 25
|style="font-size:80%;"| 15,511,210,043,330,985,984,000,000
≈1.55×1025
|style="font-size:80%;"| 5,706,255,282,633,466,762,357,224
≈5.71×1024
|align="right"| ≈0.36787 94412
|-
| style="text-align: center" | 26
|style="font-size:80%;"| 403,291,461,126,605,635,584,000,000
≈4.03×1026
|style="font-size:80%;"| 148,362,637,348,470,135,821,287,825
≈1.48×1026
|align="right"| ≈0.36787 94412
|-
| style="text-align: center" | 27
|style="font-size:80%;"| 10,888,869,450,418,352,160,768,000,000
≈1.09×1028
|style="font-size:80%;"| 4,005,791,208,408,693,667,174,771,274
≈4.01×1027
|align="right"| ≈0.36787 94412
|-
| style="text-align: center" | 28
|style="font-size:80%;"| 304,888,344,611,713,860,501,504,000,000
≈3.05×1029
|style="font-size:80%;"| 112,162,153,835,443,422,680,893,595,673
≈1.12×1029
|align="right"| ≈0.36787 94412
|-
| style="text-align: center" | 29
|style="font-size:80%;"| 8,841,761,993,739,701,954,543,616,000,000
≈8.84×1030
|style="font-size:80%;"| 3,252,702,461,227,859,257,745,914,274,516
≈3.25×1030
|align="right"| ≈0.36787 94412
|-style="border-top:2px solid #aaaaaa;"
| style="text-align: center" | 30
|style="font-size:80%;"| 265,252,859,812,191,058,636,308,480,000,000
≈2.65×1032
|style="font-size:80%;"| 97,581,073,836,835,777,732,377,428,235,481
≈9.76×1031
|align="right"| ≈0.36787 94412
|}
Footnotes
References
External links
Permutations
Fixed points (mathematics)
Integer sequences
es:Subfactorial | Derangement | [
"Mathematics"
] | 4,358 | [
"Sequences and series",
"Functions and mappings",
"Integer sequences",
"Mathematical structures",
"Mathematical analysis",
"Permutations",
"Recreational mathematics",
"Fixed points (mathematics)",
"Mathematical objects",
"Combinatorics",
"Topology",
"Mathematical relations",
"Numbers",
"Nu... |
162,269 | https://en.wikipedia.org/wiki/Magnesium%20oxide | Magnesium oxide (MgO), or magnesia, is a white hygroscopic solid mineral that occurs naturally as periclase and is a source of magnesium (see also oxide). It has an empirical formula of MgO and consists of a lattice of Mg2+ ions and O2− ions held together by ionic bonding. Magnesium hydroxide forms in the presence of water (MgO + H2O → Mg(OH)2), but it can be reversed by heating it to remove moisture.
Magnesium oxide was historically known as magnesia alba (literally, the white mineral from Magnesia), to differentiate it from magnesia nigra, a black mineral containing what is now known as manganese.
Related oxides
While "magnesium oxide" normally refers to MgO, the compound magnesium peroxide MgO2 is also known. According to evolutionary crystal structure prediction, MgO2 is thermodynamically stable at pressures above 116 GPa (gigapascals), and a semiconducting suboxide Mg3O2 is thermodynamically stable above 500 GPa. Because of its stability, MgO is used as a model system for investigating vibrational properties of crystals.
Electric properties
Pure MgO is not conductive and has a high resistance to electric current at room temperature. Pure MgO powder has a relative permittivity between 3.2 and 9.9, with an approximate dielectric loss of tan(δ) > 2.16 × 10⁻³ at 1 kHz.
Production
Magnesium oxide is produced by the calcination of magnesium carbonate or magnesium hydroxide. The latter is obtained by the treatment of magnesium chloride solutions, typically seawater, with limewater or milk of lime.
Mg2+ + Ca(OH)2 → Mg(OH)2 + Ca2+
Calcining at different temperatures produces magnesium oxide of different reactivity. High temperatures (1500–2000 °C) diminish the available surface area and produce dead-burned (often called dead burnt) magnesia, an unreactive form used as a refractory. Calcining temperatures of 1000–1500 °C produce hard-burned magnesia, which has limited reactivity, while calcining at lower temperatures (700–1000 °C) produces light-burned magnesia, a reactive form also known as caustic calcined magnesia. Although some decomposition of the carbonate to oxide occurs at temperatures below 700 °C, the resulting materials appear to reabsorb carbon dioxide from the air.
Applications
Refractory insulator
MgO is prized as a refractory material, i.e. a solid that is physically and chemically stable at high temperatures. It has the useful attributes of high thermal conductivity and low electrical conductivity. According to a 2006 reference book:
MgO is used as a refractory material for crucibles. It is also used as an insulator in heat-resistant electrical cable.
Biomedical
Among metal oxide nanoparticles, magnesium oxide nanoparticles (MgO NPs) have distinct physicochemical and biological properties, including biocompatibility, biodegradability, high bioactivity, significant antibacterial properties, and good mechanical properties, which make it a good choice as a reinforcement in composites.
Heating elements
It is used extensively as an electrical insulator in tubular construction heating elements as in electric stove and cooktop heating elements. There are several mesh sizes available and most commonly used ones are 40 and 80 mesh per the American Foundry Society. The extensive use is due to its high dielectric strength and average thermal conductivity. MgO is usually crushed and compacted with minimal airgaps or voids.
Cement
MgO is one of the components in Portland cement in dry process plants.
Sorel cement uses MgO as the main component in combination with MgCl2 and water.
Fertilizer
MgO has an important place as a commercial plant fertilizer and as animal feed.
Fireproofing
It is a principal fireproofing ingredient in construction materials. As a construction material, magnesium oxide wallboards have several attractive characteristics: fire resistance, termite resistance, moisture resistance, mold and mildew resistance, and strength, but also a severe downside as it attracts moisture and can cause moisture damage to surrounding materials.
Medical
Magnesium oxide is used for relief of heartburn and indigestion, as an antacid, magnesium supplement, and as a short-term laxative. It is also used to improve symptoms of indigestion. Side effects of magnesium oxide may include nausea and cramping. In quantities sufficient to obtain a laxative effect, side effects of long-term use may rarely cause enteroliths to form, resulting in bowel obstruction.
Waste treatment
Magnesium oxide is used extensively in the soil and groundwater remediation, wastewater treatment, drinking water treatment, air emissions treatment, and waste treatment industries for its acid buffering capacity and related effectiveness in stabilizing dissolved heavy metal species.
Many heavy metals species, such as lead and cadmium, are least soluble in water at mildly basic conditions (pH in the range 8–11). Solubility of metals increases their undesired bioavailability and mobility in soil and groundwater. Granular MgO is often blended into metals-contaminating soil or waste material, which is also commonly of a low pH (acidic), in order to drive the pH into the 8–10 range. Metal-hydroxide complexes tend to precipitate out of aqueous solution in the pH range of 8–10.
MgO is packed in bags around transuranic waste in the disposal cells (panels) at the Waste Isolation Pilot Plant, as a getter to minimize the complexation of uranium and other actinides by carbonate ions and so to limit the solubility of radionuclides. The use of MgO is preferred over CaO since the resulting hydration product (Mg(OH)2) is less soluble and releases less hydration heat. Another advantage is to impose a lower pH value (about 10.5) in case of accidental water ingress into the dry salt layers, in contrast to the more soluble Ca(OH)2, which would create a higher pH of 12.5 (strongly alkaline conditions). The Mg2+ cation being the second most abundant cation in seawater and in rocksalt, the potential release of magnesium ions dissolving in brines intruding the deep geological repository is also expected to minimize the geochemical disruption.
Niche uses
As a food additive, it is used as an anticaking agent. It is known to the US Food and Drug Administration for cacao products; canned peas; and frozen dessert. It has an E number of E530.
As a reagent in the installation of the carboxybenzyl (Cbz) group using benzyl chloroformate in EtOAc for the N-protection of amines and amides.
Doping MgO (about 1–5% by weight) into hydroxyapatite, a bioceramic mineral, increases the fracture toughness by migrating to grain boundaries, where it reduces grain size and changes the fracture mode from intergranular to transgranular.
Pressed MgO is used as an optical material. It is transparent from 0.3 to 7 μm. The refractive index is 1.72 at 1 μm and the Abbe number is 53.58. It is sometimes known by the Eastman Kodak trademarked name Irtran-5, although this designation is obsolete. Crystalline pure MgO is available commercially and has a small use in infrared optics.
An aerosolized solution of MgO is used in library science and collections management for the deacidification of at-risk paper items. In this process, the alkalinity of MgO (and similar compounds) neutralizes the relatively high acidity characteristic of low-quality paper, thus slowing the rate of deterioration.
Magnesium oxide is used as an oxide barrier in spin-tunneling devices. Owing to the crystalline structure of its thin films, which can be deposited by magnetron sputtering, for example, it shows characteristics superior to those of the commonly used amorphous Al2O3. In particular, spin polarization of about 85% has been achieved with MgO versus 40–60 % with aluminium oxide. The value of tunnel magnetoresistance is also significantly higher for MgO (600% at room temperature and 1,100 % at 4.2 K) than Al2O3 (ca. 70% at room temperature).
MgO is a common pressure transmitting medium used in high pressure apparatuses like the multi-anvil press.
Brake lining
Magnesia is used in brake linings for its heat conductivity and intermediate hardness. It helps dissipate heat from friction surfaces, preventing overheating, while minimizing wear on metal components. Its stability under high temperatures ensures reliable and durable braking performance in automotive and industrial applications.
Thin film transistors
In thin film transistors (TFTs), MgO is often used as a dielectric material or an insulator due to its high thermal stability, excellent insulating properties, and wide bandgap. Optimized IGZO/MgO TFTs demonstrated an electron mobility of 1.63 cm²/Vs, an on/off current ratio of 10⁶, and a subthreshold swing of 0.50 V/decade at −0.11 V. These TFTs are integral to low-power applications, wearable devices, and radiation-hardened electronics, contributing to enhanced efficiency and durability across diverse domains.
Historical uses
It was historically used as a reference white color in colorimetry, owing to its good diffusing and reflectivity properties. It may be smoked onto the surface of an opaque material to form an integrating sphere.
Early gas mantle designs for lighting, such as the Clamond basket, consisted mainly of magnesium oxide.
Precautions
Inhalation of magnesium oxide fumes can cause metal fume fever.
See also
Notes
References
External links
Data page at UCL
Ceramic data page at NIST
NIOSH Pocket Guide to Chemical Hazards at CDC
Magnesium minerals
Magnesium compounds
Oxides
Refractory materials
Optical materials
Ceramic materials
Antacids
E-number additives
Rock salt crystal structure | Magnesium oxide | [
"Physics",
"Chemistry",
"Engineering"
] | 2,119 | [
"Refractory materials",
"Oxides",
"Salts",
"Materials",
"Optical materials",
"Ceramic materials",
"Ceramic engineering",
"Matter"
] |
162,312 | https://en.wikipedia.org/wiki/Mechanical%20wave | In physics, a mechanical wave is a wave that is an oscillation of matter, and therefore transfers energy through a material medium.
(Vacuum is, from classical perspective, a non-material medium, where electromagnetic waves propagate.)
While waves can move over long distances, the movement of the medium of transmission—the material—is limited. Therefore, the oscillating material does not move far from its initial equilibrium position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves. Some of the most common examples of mechanical waves are water waves, sound waves, and seismic waves.
Like all waves, mechanical waves transport energy. This energy propagates in the same direction as the wave. A wave requires an initial energy input; once this initial energy is added, the wave travels through the medium until all its energy is transferred. In contrast, electromagnetic waves require no medium, but can still travel through one.
Transverse wave
A transverse wave is the form of a wave in which particles of medium vibrate about their mean position perpendicular to the direction of the motion of the wave.
To see an example, move one end of a Slinky (whose other end is fixed) from side to side, perpendicular to the Slinky's length, as opposed to pushing it to-and-fro along its length. Light also has properties of a transverse wave, although it is an electromagnetic wave.
Longitudinal wave
Longitudinal waves cause the medium to vibrate parallel to the direction of the wave. A longitudinal wave consists of alternating compressions and rarefactions: a rarefaction is a region where the particles of the medium are spread farthest apart, and a compression is a region where they are pressed closest together. The speed of a longitudinal wave is set by the elastic stiffness and the density of the medium, which is why sound travels faster in stiff, closely packed media such as solids than in gases. Sound is a longitudinal wave.
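As a rough numerical illustration of how the elasticity and inertia of the medium set the speed of longitudinal waves, the sketch below evaluates v = √(elastic modulus / density) with approximate textbook values; the figures are assumptions for illustration and are not taken from this article.
<syntaxhighlight lang="python">
from math import sqrt

# (relevant elastic modulus in Pa, density in kg/m^3) -- approximate reference values
media = {
    "air (adiabatic bulk modulus)": (1.42e5, 1.2),
    "water (bulk modulus)":         (2.2e9, 1.0e3),
    "steel rod (Young's modulus)":  (2.0e11, 7.85e3),
}

for name, (modulus, density) in media.items():
    speed = sqrt(modulus / density)      # v = sqrt(M / rho)
    print(f"{name}: ~{speed:,.0f} m/s")
# roughly 340 m/s in air, 1,500 m/s in water and 5,000 m/s in a steel rod
</syntaxhighlight>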
Surface waves
This type of wave travels along the surface or interface between two media. An example of a surface wave would be waves in a pool, or in an ocean, lake, or any other type of water body. There are two types of surface waves, namely Rayleigh waves and Love waves.
Rayleigh waves, also known as ground roll, are waves that travel as ripples with motion similar to those of waves on the surface of water. Such waves are slower than body waves, travelling at roughly 90% of the velocity of S-waves for a typical homogeneous elastic medium. Rayleigh waves have energy losses only in two dimensions and are hence more destructive in earthquakes than conventional bulk waves, such as P-waves and S-waves, which lose energy in all three directions.
A Love wave is a surface wave having horizontal waves that are shear or transverse to the direction of propagation. They usually travel slightly faster than Rayleigh waves, at about 90% of the body wave velocity, and have the largest amplitude.
Examples
Seismic waves
Sound waves
Wind waves on seas and lakes
Vibration
See also
Acoustics
Ultrasound
Underwater acoustics
References
Waves
Mechanics | Mechanical wave | [
"Physics",
"Engineering"
] | 617 | [
"Physical phenomena",
"Waves",
"Motion (physics)",
"Mechanics",
"Mechanical engineering"
] |
162,321 | https://en.wikipedia.org/wiki/Invariant%20mass | The invariant mass, rest mass, intrinsic mass, proper mass, or in the case of bound systems simply mass, is the portion of the total mass of an object or system of objects that is independent of the overall motion of the system. More precisely, it is a characteristic of the system's total energy and momentum that is the same in all frames of reference related by Lorentz transformations. If a center-of-momentum frame exists for the system, then the invariant mass of a system is equal to its total mass in that "rest frame". In other reference frames, where the system's momentum is nonzero, the total mass (a.k.a. relativistic mass) of the system is greater than the invariant mass, but the invariant mass remains unchanged.
Because of mass–energy equivalence, the rest energy of the system is simply the invariant mass times the speed of light squared. Similarly, the total energy of the system is its total (relativistic) mass times the speed of light squared.
Systems whose four-momentum is a null vector (for example, a single photon or many photons moving in exactly the same direction) have zero invariant mass and are referred to as massless. A physical object or particle moving faster than the speed of light would have space-like four-momenta (such as the hypothesized tachyon), and these do not appear to exist. Any time-like four-momentum possesses a reference frame where the momentum (3-dimensional) is zero, which is a center of momentum frame. In this case, invariant mass is positive and is referred to as the rest mass.
If objects within a system are in relative motion, then the invariant mass of the whole system will differ from the sum of the objects' rest masses. This is also equal to the total energy of the system divided by c2. See mass–energy equivalence for a discussion of definitions of mass. Since the mass of systems must be measured with a weight or mass scale in a center of momentum frame in which the entire system has zero momentum, such a scale always measures the system's invariant mass. For example, a scale would measure the kinetic energy of the molecules in a bottle of gas to be part of invariant mass of the bottle, and thus also its rest mass. The same is true for massless particles in such system, which add invariant mass and also rest mass to systems, according to their energy.
For an isolated massive system, the center of mass of the system moves in a straight line with a steady subluminal velocity (with a velocity depending on the reference frame used to view it). Thus, an observer can always be placed to move along with it. In this frame, which is the center-of-momentum frame, the total momentum is zero, and the system as a whole may be thought of as being "at rest" if it is a bound system (like a bottle of gas). In this frame, which exists under these assumptions, the invariant mass of the system is equal to the total system energy (in the zero-momentum frame) divided by . This total energy in the center of momentum frame, is the minimum energy which the system may be observed to have, when seen by various observers from various inertial frames.
Note that for reasons above, such a rest frame does not exist for single photons, or rays of light moving in one direction. When two or more photons move in different directions, however, a center of mass frame (or "rest frame" if the system is bound) exists. Thus, the mass of a system of several photons moving in different directions is positive, which means that an invariant mass exists for this system even though it does not exist for each photon.
Sum of rest masses
The invariant mass of a system includes the mass of any kinetic energy of the system constituents that remains in the center of momentum frame, so the invariant mass of a system may be greater than sum of the invariant masses (rest masses) of its separate constituents. For example, rest mass and invariant mass are zero for individual photons even though they may add mass to the invariant mass of systems. For this reason, invariant mass is in general not an additive quantity (although there are a few rare situations where it may be, as is the case when massive particles in a system without potential or kinetic energy can be added to a total mass).
Consider the simple case of two-body system, where object A is moving towards another object B which is initially at rest (in any particular frame of reference). The magnitude of invariant mass of this two-body system (see definition below) is different from the sum of rest mass (i.e. their respective mass when stationary). Even if we consider the same system from center-of-momentum frame, where net momentum is zero, the magnitude of the system's invariant mass is not equal to the sum of the rest masses of the particles within it.
The kinetic energy of such particles and the potential energy of the force fields increase the total energy above the sum of the particle rest masses, and both terms contribute to the invariant mass of the system. The sum of the particle kinetic energies as calculated by an observer is smallest in the center of momentum frame (again, called the "rest frame" if the system is bound).
They will often also interact through one or more of the fundamental forces, giving them a potential energy of interaction, possibly negative.
As defined in particle physics
In particle physics, the invariant mass is equal to the mass in the rest frame of the particle, and can be calculated by the particle's energy E and its momentum p as measured in any frame, by the energy–momentum relation:
<math>m_0^2 c^4 = E^2 - \|\mathbf{p}\|^2 c^2</math>
or in natural units where c = 1,
<math>m_0^2 = E^2 - \|\mathbf{p}\|^2.</math>
This invariant mass is the same in all frames of reference (see also special relativity). This equation says that the invariant mass is the pseudo-Euclidean length of the four-vector (E, p), calculated using the relativistic version of the Pythagorean theorem which has a different sign for the space and time dimensions. This length is preserved under any Lorentz boost or rotation in four dimensions, just like the ordinary length of a vector is preserved under rotations. In quantum theory the invariant mass is a parameter in the relativistic Dirac equation for an elementary particle. The Dirac quantum operator corresponds to the particle four-momentum vector.
Since the invariant mass is determined from quantities which are conserved during a decay, the invariant mass calculated using the energy and momentum of the decay products of a single particle is equal to the mass of the particle that decayed.
The mass of a system of particles can be calculated from the general formula:
<math>\left( W c^2 \right)^2 = \left( \sum E \right)^2 - \left\| \sum \mathbf{p} c \right\|^2</math>
where
W is the invariant mass of the system of particles, equal to the mass of the decay particle.
<math>\sum E</math> is the sum of the energies of the particles
<math>\sum \mathbf{p}</math> is the vector sum of the momentum of the particles (includes both magnitude and direction of the momenta)
The term invariant mass is also used in inelastic scattering experiments. Given an inelastic reaction with total incoming energy larger than the total detected energy (i.e. not all outgoing particles are detected in the experiment), the invariant mass (also known as the "missing mass") of the reaction is defined as follows (in natural units):
<math>M_{\text{miss}}^2 = \left( \sum E_{\text{in}} - \sum E_{\text{out}} \right)^2 - \left\| \sum \mathbf{p}_{\text{in}} - \sum \mathbf{p}_{\text{out}} \right\|^2</math>
If there is one dominant particle which was not detected during an experiment, a plot of the invariant mass will show a sharp peak at the mass of the missing particle.
In those cases when the momentum along one direction cannot be measured (i.e. in the case of a neutrino, whose presence is only inferred from the missing energy) the transverse mass is used.
Example: two-particle collision
In a two-particle collision (or a two-particle decay) the square of the invariant mass (in natural units) is
<math>M^2 = (E_1 + E_2)^2 - \|\mathbf{p}_1 + \mathbf{p}_2\|^2 = m_1^2 + m_2^2 + 2\left( E_1 E_2 - \mathbf{p}_1 \cdot \mathbf{p}_2 \right).</math>
Massless particles
The invariant mass of a system made of two massless particles whose momenta form an angle <math>\theta</math> has a convenient expression:
<math>M^2 = 2 E_1 E_2 \left( 1 - \cos\theta \right).</math>
Collider experiments
In particle collider experiments, one often defines the angular position of a particle in terms of an azimuthal angle <math>\phi</math> and pseudorapidity <math>\eta</math>. Additionally the transverse momentum, <math>p_T</math>, is usually measured. In this case if the particles are massless, or highly relativistic (<math>E \gg m</math>), then the invariant mass becomes:
<math>M^2 = 2 p_{T1} p_{T2} \left( \cosh(\eta_1 - \eta_2) - \cos(\phi_1 - \phi_2) \right).</math>
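A minimal numerical sketch of the two expressions above, in natural units (c = 1) with energies and momenta in GeV; the particle values are invented purely for illustration:
<syntaxhighlight lang="python">
from math import sqrt, cosh, cos, pi

def invariant_mass(p1, p2):
    """Two-particle invariant mass from four-momenta given as (E, px, py, pz)."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

def invariant_mass_collider(pt1, eta1, phi1, pt2, eta2, phi2):
    """Massless / highly relativistic approximation M^2 = 2 pT1 pT2 (cosh d_eta - cos d_phi)."""
    return sqrt(2.0 * pt1 * pt2 * (cosh(eta1 - eta2) - cos(phi1 - phi2)))

# Two back-to-back massless particles of 45 GeV each reconstruct to M = 90 GeV
p1 = (45.0, 45.0, 0.0, 0.0)
p2 = (45.0, -45.0, 0.0, 0.0)
print(invariant_mass(p1, p2))                                  # 90.0
print(invariant_mass_collider(45.0, 0.0, 0.0, 45.0, 0.0, pi))  # 90.0
</syntaxhighlight>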
Rest energy
Rest energy (also called rest mass energy) is the energy associated with a particle's invariant mass.
The rest energy of a particle is defined as:
<math>E_0 = m_0 c^2,</math>
where c is the speed of light in vacuum. In general, only differences in energy have physical significance.
The concept of rest energy follows from the special theory of relativity that leads to Einstein's famous conclusion about equivalence of energy and mass. See mass–energy equivalence.
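For example, the rest energy of an electron, computed from standard constant values (these numbers are not taken from this article):
<syntaxhighlight lang="python">
c = 299_792_458.0              # m/s, speed of light in vacuum
m_electron = 9.1093837e-31     # kg, electron rest mass

E0 = m_electron * c**2         # rest energy E0 = m0 * c^2
print(f"E0 = {E0:.3e} J = {E0 / 1.602176634e-19 / 1e6:.3f} MeV")   # about 0.511 MeV
</syntaxhighlight>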
See also
Mass in special relativity
Invariant (physics)
Transverse mass
References
Citations
Theory of relativity
Mass
Energy (physics)
Physical quantities | Invariant mass | [
"Physics",
"Mathematics"
] | 1,800 | [
"Scalar physical quantities",
"Physical phenomena",
"Physical quantities",
"Quantity",
"Mass",
"Size",
"Energy (physics)",
"Theory of relativity",
"Wikipedia categories named after physical quantities",
"Physical properties",
"Matter"
] |
162,400 | https://en.wikipedia.org/wiki/Force%20spectroscopy | Force spectroscopy is a set of techniques for the study of the interactions and the binding forces between individual molecules. These methods can be used to measure the mechanical properties of single polymer molecules or proteins, or individual chemical bonds. The name "force spectroscopy", although widely used in the scientific community, is somewhat misleading, because there is no true matter-radiation interaction.
Techniques that can be used to perform force spectroscopy include atomic force microscopy, optical tweezers, magnetic tweezers, acoustic force spectroscopy, microneedles, and biomembranes.
Force spectroscopy measures the behavior of a molecule under stretching or torsional mechanical force. In this way a great deal has been learned in recent years about the mechanochemical coupling in the enzymes responsible for muscle contraction, transport in the cell, energy generation (F1-ATPase), DNA replication and transcription (polymerases), DNA unknotting and unwinding (topoisomerases and helicases).
As a single-molecule technique, as opposed to typical ensemble spectroscopies, it allows a researcher to determine properties of the particular molecule under study. In particular, rare events such as conformational change, which are masked in an ensemble, may be observed.
Experimental techniques
There are many ways to accurately manipulate single molecules. Prominent among these are optical or magnetic tweezers, atomic-force-microscope (AFM) cantilevers and acoustic force spectroscopy. In all of these techniques, a biomolecule, such as protein or DNA, or some other biopolymer has one end bound to a surface or micrometre-sized bead and the other to a force sensor. The force sensor is usually a micrometre-sized bead or a cantilever, whose displacement can be measured to determine the force.
Atomic force microscope cantilevers
Molecules adsorbed on a surface are picked up by a
microscopic tip (nanometres wide) that is located on the end of an elastic cantilever. In a more sophisticated version of this experiment (Chemical Force Microscopy) the tips are covalently functionalized with the molecules of interest. A piezoelectric controller then pulls up the cantilever. If some force is acting on the elastic cantilever (for example because some molecule is being stretched between the surface and the tip), this will deflect upward (repulsive force) or downward (attractive force). According to Hooke's law, this deflection will be proportional to the force acting on the cantilever. Deflection is measured by the position of a laser beam reflected by the cantilever. This kind of set-up can measure forces as low as 10 pN (10⁻¹¹ N); the fundamental resolution limit is given by the cantilever's thermal noise.
The so-called force curve is the graph of force (or more precisely, of cantilever deflection) versus the piezoelectric position on the Z axis. An ideal Hookean spring, for example, would display a straight diagonal force curve.
Typically, the force curves observed in the force spectroscopy experiments consist of a contact (diagonal) region where the probe contacts the sample surface, and a non-contact region where the probe is off the sample surface. When the restoring force of the cantilever exceeds tip-sample adhesion force the probe jumps out of contact, and the magnitude of this jump is often used as a measure of adhesion force or rupture force. In general the rupture of a tip-surface bond is a stochastic process; therefore reliable quantification of the adhesion force requires taking multiple individual force curves. The histogram of the adhesion forces obtained in these multiple measurements provides the main data output for force spectroscopy measurement.
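As an illustrative sketch of how a retract force curve is reduced to an adhesion force via Hooke's law, consider the toy data below; the spring constant and deflection values are invented, and a real measurement also requires calibration of the deflection sensitivity.
<syntaxhighlight lang="python">
# Hooke's law: force = spring_constant * deflection
SPRING_CONSTANT = 0.05      # N/m, a typical soft AFM cantilever (assumed value)

# Mock retract-curve deflections in metres (negative = cantilever bent toward the surface)
deflections = [0.0, -1e-9, -3e-9, -6e-9, -9e-9, -12e-9, 0.2e-9, 0.1e-9]

forces = [SPRING_CONSTANT * d for d in deflections]
adhesion_force = -min(forces)       # magnitude of the jump out of contact
print(f"adhesion force ~ {adhesion_force * 1e12:.0f} pN")   # 0.05 N/m * 12 nm = 600 pN
</syntaxhighlight>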
In biophysics, single-molecule force spectroscopy can be used to study the energy landscape underlying the interaction between two bio-molecules, like proteins. Here, one binding partner can be attached to a cantilever tip via a flexible linker molecule (PEG chain), while the other one is immobilized on a substrate surface. In a typical approach, the cantilever is repeatedly approached and retracted from the sample at a constant speed. In some cases, binding between the two partners will occur, which will become visible in the force curve, as the use of a flexible linker gives rise to a characteristic curve shape (see Worm-like chain model) distinct from adhesion. The collected rupture forces can then be analysed as a function of the bond loading rate. The resulting graph of the average rupture force as a function of the loading rate is called the force spectrum and forms the basic dataset for dynamic force spectroscopy.
In the ideal case of a single sharp energy barrier for the tip-sample interactions the dynamic force spectrum will show a linear increase of the rupture force as a function of the logarithm of the loading rate, as described by a model proposed by Bell et al. Here, the slope of the rupture force spectrum is equal to <math>k_B T / x_\beta</math>, where <math>x_\beta</math> is the distance from the energy minimum to the transition state. So far, a number of theoretical models exist describing the relationship between loading rate and rupture force, based upon different assumptions and predicting distinct curve shapes.
For example, Ma X., Gosai A., et al. utilized dynamic force spectroscopy along with molecular dynamics simulations to determine the binding force between thrombin, a blood coagulation protein, and its DNA aptamer.
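A minimal sketch of the Bell–Evans phenomenology described above: the most probable rupture force grows linearly with the logarithm of the loading rate, with slope kBT/xβ. The barrier distance and zero-force off-rate below are invented, illustrative values rather than parameters from any of the cited studies.
<syntaxhighlight lang="python">
from math import log

K_B = 1.380649e-23          # J/K, Boltzmann constant
T = 298.0                   # K
X_BETA = 0.5e-9             # m, distance to the transition state (assumed)
K_OFF = 1.0                 # 1/s, zero-force dissociation rate (assumed)

def most_probable_rupture_force(loading_rate):
    """Bell-Evans model: F* = (kB*T/x_beta) * ln(loading_rate * x_beta / (k_off * kB * T))."""
    return (K_B * T / X_BETA) * log(loading_rate * X_BETA / (K_OFF * K_B * T))

for rate in (1e-10, 1e-9, 1e-8, 1e-7):              # loading rates in N/s
    force_pn = most_probable_rupture_force(rate) * 1e12
    print(f"loading rate {rate:.0e} N/s -> F* ~ {force_pn:.0f} pN")
</syntaxhighlight>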
Acoustic force spectroscopy
A recently developed technique, acoustic force spectroscopy (AFS), allows the force manipulation of hundreds of single-molecules and single-cells in parallel, providing high experimental throughput. In this technique, a piezo element resonantly excites planar acoustic waves over a microfluidic chip. The generated acoustic waves are capable of exerting forces on microspheres with different density than the surrounding medium. Biomolecules, such as DNA, RNA or proteins, can be individually tethered between the microspheres and a surface and then probed by the acoustic forces exerted by the piezo sensor. With AFS devices it is possible to apply forces ranging from 0 to several hundreds of picoNewtons on hundreds of microspheres and obtain force-extension curves or histograms of rupture forces of many individual events in parallel.
This technique is mostly utilized to study DNA-bindings protein. For example, AFS was used to examine bacterial transcription with presence of antibacterial agents. Viral proteins also can be studied by AFS, for instance this technique was used to explore DNA compaction along with other single-molecule approaches.
Cells also can be manipulated by the acoustic forces directly, or by using microspheres as handles.
Optical tweezers
Another technique that has been gaining ground for single molecule experiments is the use of optical tweezers for applying mechanical forces on molecules. A strongly focused laser beam has the ability to catch and hold particles (of dielectric material) in a size range from nanometers to micrometers. The trapping action of optical tweezers results from the dipole or optical gradient force on the dielectric sphere. The technique of using a focused laser beam as an atom trap was first applied in 1984 at Bell laboratories. Until then experiments had been carried out using oppositely directed lasers as a means to trap particles. Later experiments, at the same project at Bell laboratories and others since, showed damage-free manipulation on cells using an infrared laser. Thus, the ground was made for biological experiments with optical trapping.
Each technique has its own advantages and disadvantages. For example, AFM cantilevers, can measure angstrom-scale, millisecond events and forces larger than 10 pN. While glass microfibers cannot achieve such fine spatial and temporal resolution, they can measure piconewton forces. Optical tweezers allow the measurement of piconewton forces and nanometer displacements which is an ideal range for many biological experiments. Magnetic tweezers can measure femtonewton forces, and additionally they can also be used to apply torsion. AFS devices allow the statistical analysis of the mechanical properties of biological systems by applying picoNewton forces to hundreds of individual particles in parallel, with sub-millisecond response time.
Applications
Common applications of force spectroscopy are measurements of polymer elasticity, especially biopolymers such as RNA and DNA. Another biophysical application of polymer force spectroscopy is on protein unfolding. Modular proteins can be adsorbed to a gold or (more rarely) mica surface and then stretched. The sequential unfolding of modules is observed as a very characteristic sawtooth pattern of the force vs elongation graph; every tooth corresponds to the unfolding of a single protein module (apart from the last that is generally the detachment of the protein molecule from the tip). Much information about protein elasticity and protein unfolding can be obtained by this technique. Many proteins in the living cell must face mechanical stress.
Moreover, force spectroscopy can be used to investigate the enzymatic activity of proteins involved in DNA replication, transcription, organization and repair. This is achieved by measuring the position of a bead attached to a DNA-protein complex stalled on a DNA tether that has one end attached to a surface, while keeping the force constant. This technique has been used, for example, to study transcription elongation inhibition by Klebsidin and Acinetodin.
The other main application of force spectroscopy is the study of mechanical resistance of chemical bonds. In this case, generally the tip is functionalized with a ligand that binds to another molecule bound to the surface. The tip is pushed on the surface, allowing for contact between the two molecules, and then retracted until the newly formed bond breaks up. The force at which the bond breaks up is measured. Since mechanical breaking is a kinetic, stochastic process, the breaking force is not an absolute parameter, but it is a function of both temperature and pulling speed. Low temperatures and high pulling speeds correspond to higher breaking forces. By careful analysis of the breaking force at various pulling speeds, it is possible to map the energy landscape of the chemical bond under mechanical force. This is leading to interesting results in the study of antibody-antigen, protein-protein, protein-living cell interaction and catch bonds.
Recently this technique has been used in cell biology to measure the aggregative stochastic forces created by motor proteins that influence the motion of particles within the cytoplasm. In this way, force spectrum microscopy may be used better to understand the many cellular processes that require the motion of particles within cytoplasm.
References
Further reading
Spectroscopy
Scanning probe microscopy | Force spectroscopy | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,210 | [
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Scanning probe microscopy",
"Microscopy",
"Nanotechnology",
"Spectroscopy"
] |
162,435 | https://en.wikipedia.org/wiki/Mind%20uploading | Mind uploading is a speculative process of whole brain emulation in which a brain scan is used to completely emulate the mental state of the individual in a digital computer. The computer would then run a simulation of the brain's information processing, such that it would respond in essentially the same way as the original brain and experience having a sentient conscious mind.
Substantial mainstream research in related areas is being conducted in neuroscience and computer science, including animal brain mapping and simulation, development of faster supercomputers, virtual reality, brain–computer interfaces, connectomics, and information extraction from dynamically functioning brains. According to supporters, many of the tools and ideas needed to achieve mind uploading already exist or are under active development; however, they will admit that others are, as yet, very speculative, but say they are still in the realm of engineering possibility.
Mind uploading may potentially be accomplished by either of two methods: copy-and-upload or copy-and-delete by gradual replacement of neurons (which can be considered as a gradual destructive uploading), until the original organic brain no longer exists and a computer program emulating the brain takes control of the body. In the case of the former method, mind uploading would be achieved by scanning and mapping the salient features of a biological brain, and then by storing and copying that information state into a computer system or another computational device. The biological brain may not survive the copying process or may be deliberately destroyed during it in some variants of uploading. The simulated mind could be within a virtual reality or simulated world, supported by an anatomic 3D body simulation model. Alternatively, the simulated mind could reside in a computer inside—or either connected to or remotely controlled by—a (not necessarily humanoid) robot, biological, or cybernetic body.
Among some futurists and within part of transhumanist movement, mind uploading is treated as an important proposed life extension or immortality technology (known as "digital immortality"). Some believe mind uploading is humanity's current best option for preserving the identity of the species, as opposed to cryonics. Another aim of mind uploading is to provide a permanent backup to our "mind-file", to enable interstellar space travel, and a means for human culture to survive a global disaster by making a functional copy of a human society in a computing device. Whole-brain emulation is discussed by some futurists as a "logical endpoint" of the topical computational neuroscience and neuroinformatics fields, both about brain simulation for medical research purposes. It is discussed in artificial intelligence research publications as an approach to strong AI (artificial general intelligence) and to at least weak superintelligence. Another approach is seed AI, which would not be based on existing brains. Computer-based intelligence such as an upload could think much faster than a biological human even if it were no more intelligent. A large-scale society of uploads might, according to futurists, give rise to a technological singularity, meaning a sudden time constant decrease in the exponential development of technology. Mind uploading is a central conceptual feature of numerous science fiction novels, films, and games.
Overview
Many neuroscientists believe that the human mind is largely an emergent property of the information processing of its neuronal network.
Neuroscientists have stated that important functions performed by the mind, such as learning, memory, and consciousness, are due to purely physical and electrochemical processes in the brain and are governed by applicable laws. For example, Christof Koch and Giulio Tononi wrote in IEEE Spectrum:
Eminent computer scientists and neuroscientists have predicted that advanced computers will be capable of thought and even attain consciousness, including Koch and Tononi, Douglas Hofstadter, Jeff Hawkins, Marvin Minsky, Randal A. Koene, and Rodolfo Llinás.
Many theorists have presented models of the brain and have established a range of estimates of the amount of computing power needed for partial and complete simulations. Using these models, some have estimated that uploading may become possible within decades if trends such as Moore's law continue. As of December 2022, this kind of technology is almost entirely theoretical.
Theoretical benefits and applications
"Immortality" or backup
In theory, if the information and processes of the mind can be disassociated from the biological body, they are no longer tied to the individual limits and lifespan of that body. Furthermore, information within a brain could be partly or wholly copied or transferred to one or more other substrates (including digital storage or another brain), thereby—from a purely mechanistic perspective—reducing or eliminating "mortality risk" of such information. This general proposal was discussed in 1971 by biogerontologist George M. Martin of the University of Washington. This questions the concept of identity. From the perspective of the biological brain, the simulated brain may just be a copy, even if it is conscious and has an indistinguishable character. As such, the original biological being, before the uploading, might consider the digital twin to be a new and independent being rather than the future self.
Space exploration
An "uploaded astronaut" could be used instead of a "live" astronaut in human spaceflight, avoiding the perils of zero gravity, the vacuum of space, and cosmic radiation to the human body. It would allow for the use of smaller spacecraft, such as the proposed StarChip, and it would enable virtually unlimited interstellar travel distances.
Mind editing
While some researchers believe editing human brains to be physically possible in theory, for example by performing neurosurgery with nanobots, it would require particularly advanced technology. Editing an uploaded mind would be much easier, as long as the exact edits to be made are known. This would facilitate cognitive enhancement and the precise control of the well-being, motivations or personality of the emulated beings.
Speed
Although the number of neuronal connections in the human brain is very large (around 100 trillion), the frequency of activation of biological neurons is limited to around 200 Hz, whereas electronic hardware can easily operate at multiple GHz. With sufficient hardware parallelism, a simulated brain could thus in theory be made to run faster than a biological brain. Uploaded beings may therefore not only be more efficient, but also supposedly have a faster rate of subjective experience than biological brains (e.g. experiencing an hour of lifetime in a single second of real time).
Relevant technologies and techniques
The focus of mind uploading, in the case of copy-and-transfer, is on data acquisition, rather than data maintenance of the brain. A set of approaches known as loosely coupled off-loading (LCOL) may be used in the attempt to characterize and copy the mental contents of a brain. The LCOL approach may take advantage of self-reports, life-logs and video recordings that can be analyzed by artificial intelligence. A bottom-up approach may focus on the specific resolution and morphology of neurons, the spike times of neurons, the times at which neurons produce action potential responses.
Computational complexity
Advocates of mind uploading point to Moore's law to support the notion that the necessary computing power is expected to become available within a few decades. However, the actual computational requirements for running an uploaded human mind are very difficult to quantify, potentially rendering such an argument specious.
Regardless of the techniques used to capture or recreate the function of a human mind, the processing demands are likely to be immense, due to the large number of neurons in the human brain along with the considerable complexity of each neuron.
Required computational capacity strongly depends on the chosen level of simulation model scale.
Scanning and mapping scale of an individual
When modelling and simulating the brain of a specific individual, a brain map or connectivity database showing the connections between the neurons must be extracted from an anatomic model of the brain. For whole brain simulation, this network map should show the connectivity of the whole nervous system, including the spinal cord, sensory receptors, and muscle cells. Destructive scanning of a small sample of tissue from a mouse brain including synaptic details is possible as of 2010.
However, if short-term memory and working memory include prolonged or repeated firing of neurons, as well as intra-neural dynamic processes, the electrical and chemical signal state of the synapses and neurons may be hard to extract. The uploaded mind may then perceive a memory loss of the events and mental processes immediately before the time of brain scanning.
A full brain map has been estimated to occupy less than 2 × 10¹⁶ bytes (20,000 TB) and would store the addresses of the connected neurons, the synapse type and the synapse "weight" for each of the brain's 10¹⁵ synapses. However, the biological complexities of true brain function (e.g. the epigenetic states of neurons, protein components with multiple functional states, etc.) may preclude an accurate prediction of the volume of binary data required to faithfully represent a functioning human mind.
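A back-of-the-envelope restatement of the storage estimate quoted above; the per-synapse byte budget is an assumption chosen only to show how such figures are built up, not a value from the source:
<syntaxhighlight lang="python">
SYNAPSES = 1e15                 # approximate number of synapses in a human brain
BYTES_PER_SYNAPSE = 16          # assumed budget: target-neuron address + synapse type + weight

total_bytes = SYNAPSES * BYTES_PER_SYNAPSE
print(f"{total_bytes:.1e} bytes ~ {total_bytes / 1e12:,.0f} TB")
# ~1.6e16 bytes, i.e. about 16,000 TB, of the same order as the "< 2 × 10¹⁶ bytes (20,000 TB)" figure above
</syntaxhighlight>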
Serial sectioning
A possible method for mind uploading is serial sectioning, in which the brain tissue and perhaps other parts of the nervous system are frozen and then scanned and analyzed layer by layer, which for frozen samples at nano-scale requires a cryo-ultramicrotome, thus capturing the structure of the neurons and their interconnections. The exposed surface of frozen nerve tissue would be scanned and recorded, and then the surface layer of tissue removed. While this would be a very slow and labor-intensive process, research is underway to automate the collection and microscopy of serial sections. The scans would then be analyzed, and a model of the neural net recreated in the system into which the mind was being uploaded.
There are uncertainties with this approach using current microscopy techniques. If it is possible to replicate neuron function from its visible structure alone, then the resolution afforded by a scanning electron microscope would suffice for such a technique. However, as the function of brain tissue is partially determined by molecular events (particularly at synapses, but also at other places on the neuron's cell membrane), this may not suffice for capturing and simulating neuron functions. It may be possible to extend the techniques of serial sectioning and to capture the internal molecular makeup of neurons, through the use of sophisticated immunohistochemistry staining methods that could then be read via confocal laser scanning microscopy. However, as the physiological genesis of 'mind' is not currently known, this method may not be able to access all of the necessary biochemical information to recreate a human brain with sufficient fidelity.
Brain imaging
It may be possible to create functional 3D maps of the brain activity, using advanced neuroimaging technology, such as functional MRI (fMRI, for mapping change in blood flow), magnetoencephalography (MEG, for mapping of electrical currents), or combinations of multiple methods, to build a detailed three-dimensional model of the brain using non-invasive and non-destructive methods. Today, fMRI is often combined with MEG for creating functional maps of human cortex during more complex cognitive tasks, as the methods complement each other. Even though current imaging technology lacks the spatial resolution needed to gather the information needed for such a scan, important recent and future developments are predicted to substantially improve both spatial and temporal resolutions of existing technologies.
Brain simulation
Ongoing work in the field of brain simulation includes partial and whole simulations of some animals. For example, the C. elegans roundworm, Drosophila fruit fly, and mouse have all been simulated to various degrees.
The Blue Brain Project, initiated by the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne in Switzerland, is an attempt to create a synthetic brain by reverse-engineering mammalian brain circuitry, in order to accelerate experimental research on the brain. In 2009, after a successful simulation of part of a rat brain, the director Henry Markram claimed that "A detailed, functional artificial human brain can be built within the next 10 years". In 2013, Markram became the director of the new decade-long Human Brain Project. But less than two years into it, the project was recognized to be mismanaged and its claims overblown, and Markram was asked to step down.
Issues
Philosophical issues
The main philosophical problem faced by "mind uploading" or mind copying is the hard problem of consciousness: the difficulty of explaining how a physical entity such as a human can have qualia, phenomenal consciousness, or subjective experience. Many philosophical responses to the hard problem entail that mind uploading is fundamentally or practically impossible, while others are compatible with at least some formulations of mind uploading. Many proponents of mind uploading defend the possibility of mind uploading by recourse to physicalism, which includes the philosophical belief that consciousness is an emergent feature that arises from large neural network high-level patterns of organization, which could be realized in other processing devices. Mind uploading relies on the idea that the human mind (the "self" and the long-term memory) reduces to the current neural network paths and the weights of synapses in the brain. In contrast, many dualistic and idealistic accounts seek to avoid the hard problem of consciousness by explaining it in terms of immaterial (and presumably inaccessible) substances like soul, which would pose a fundamental or at least practical challenge to the feasibility of artificial consciousness in general.
Assuming physicalism is true, the mind can be defined as the information state of the brain, so it is immaterial only in the same sense as the information content of a data file, or the state of software residing in a computer's memory. In this case, data specifying the information state of the neural network could be captured and copied as a "computer file" from the brain and re-implemented into a different physical form. This is not to deny that minds are richly adapted to their substrates. An analogy to mind uploading is to copy the information state of a computer program from the memory of the computer on which it is executing to another computer and then continue its execution on the second computer. The second computer may perhaps have different hardware architecture, but it emulates the hardware of the first computer.
These philosophical issues have a long history. In 1775, Thomas Reid wrote: “I would be glad to know... whether when my brain has lost its original structure, and when some hundred years after the same materials are fabricated so curiously as to become an intelligent being, whether, I say that being will be me; or, if, two or three such beings should be formed out of my brain; whether they will all be me, and consequently one and the same intelligent being.” Although the name of the hard problem of consciousness was coined in 1994, debate surrounding the problem itself is ancient. Augustine of Hippo argued against physicalist "Academians" in the 5th century, writing that consciousness cannot be an illusion because only a conscious being can be deceived or experience an illusion. René Descartes, the founder of mind-body dualism, made a similar objection in the 17th century, coining the popular phrase "Je pense, donc je suis" ("I think, therefore I am"). Although physicalism is known to have been proposed in ancient times, Thomas Huxley was among the first to describe mental experience as merely an epiphenomenon of interactions within the brain, having no causal power of its own and being entirely downstream from the brain's activity.
A considerable portion of transhumanists and singularitarians place great hope in the belief that they may become immortal by creating one or many non-biological functional copies of their brains, thereby leaving their "biological shell". However, the philosopher and transhumanist Susan Schneider claims that at best, uploading would create a copy of the original person's mind. Schneider agrees that consciousness has a computational basis, but this does not mean we can upload and survive. According to her views, "uploading" would probably result in the death of the original person's brain, while only outside observers can maintain the illusion of the original person still being alive. For it is implausible to think that one's consciousness would leave one's brain and travel to a remote location; ordinary physical objects do not behave this way. Ordinary objects (rocks, tables, etc.) are not simultaneously here and elsewhere. At best, a copy of the original mind is created. Research on the neural correlates of consciousness, a sub-branch of neuroscience, suggests that consciousness may be thought of as a state-dependent property of some undefined complex, adaptive, and highly interconnected biological system.
Others have argued against such conclusions. For example, Buddhist transhumanist James Hughes has pointed out that this consideration only goes so far: if one believes the self is an illusion, worries about survival are not reasons to avoid uploading, and Keith Wiley has presented an argument wherein all resulting minds of an uploading procedure are granted equal primacy in their claim to the original identity, such that survival of the self is determined retroactively from a strictly subjective position. Some have also asserted that consciousness is a part of an extra-biological system that is yet to be discovered; therefore it cannot be fully understood under the present constraints of neurobiology. Without the transference of consciousness, true mind-upload or perpetual immortality cannot be practically achieved.
Another potential consequence of mind uploading is that the decision to "upload" may then create a mindless symbol manipulator instead of a conscious mind (see philosophical zombie). If a computer could process sensory inputs to generate the same outputs that a human mind does (speech, muscle movements, etc.) without necessarily having any experience of consciousness, then it may be impossible to determine whether the uploaded mind is truly conscious, and not merely an automaton that externally behaves the way a human would. Thought experiments like the Chinese room raise fundamental questions about mind uploading: If an upload displays behaviors that are highly indicative of consciousness, or even verbally insists that it is conscious, does that prove it is conscious? There might also be an absolute upper limit in processing speed, above which consciousness cannot be sustained. The subjectivity of consciousness precludes a definitive answer to this question.
Numerous scientists, including Ray Kurzweil, believe that whether a separate entity is conscious is impossible to know with confidence, since consciousness is inherently subjective (see also: solipsism). Regardless, some scientists believe consciousness is the consequence of computational processes which are substrate-neutral. Still other scientists, prominent among them Roger Penrose, believe consciousness may emerge from some form of quantum computation that is dependent on the organic substrate (see quantum mind).
In light of uncertainty about whether mind uploads are conscious, Sandberg proposes a cautious approach.
Ethical and legal implications
The process of developing emulation technology raises ethical issues related to animal welfare and artificial consciousness. The neuroscience required to develop brain emulation would require animal experimentation, first on invertebrates and then on small mammals before moving on to humans. Sometimes the animals would just need to be euthanized in order to extract, slice, and scan their brains, but sometimes behavioral and in vivo measures would be required, which might cause pain to living animals.
In addition, the resulting animal emulations themselves might suffer, depending on one's views about consciousness. Bancroft argues for the plausibility of consciousness in brain simulations on the basis of the "fading qualia" thought experiment of David Chalmers. He then concludes: “If, as I argue above, a sufficiently detailed computational simulation of the brain is potentially operationally equivalent to an organic brain, it follows that we must consider extending protections against suffering to simulations.” Chalmers himself has argued that such virtual realities would be genuine realities. However, if mind uploading occurs and the uploads are not conscious, there may be a significant opportunity cost. In the book Superintelligence, Nick Bostrom expresses concern that we could build a "Disneyland without children."
It might help reduce emulation suffering to develop virtual equivalents of anaesthesia, as well as to omit processing related to pain and/or consciousness. However, some experiments might require a fully functioning and suffering animal emulation. Animals might also suffer by accident due to flaws and lack of insight into what parts of their brains are suffering. Questions also arise regarding the moral status of partial brain emulations, as well as creating neuromorphic emulations that draw inspiration from biological brains but are built somewhat differently.
Brain emulations could be erased by computer viruses or malware, without the need to destroy the underlying hardware. This may make assassination easier than for physical humans. The attacker might take the computing power for its own use.
Many questions arise regarding the legal personhood of emulations. Would they be given the rights of biological humans? If a person makes an emulated copy of themselves and then dies, does the emulation inherit their property and official positions? Could the emulation ask to "pull the plug" when its biological version was terminally ill or in a coma? Would it help to treat emulations as adolescents for a few years so that the biological creator would maintain temporary control? Would criminal emulations receive the death penalty, or would they be given forced data modification as a form of "rehabilitation"? Could an upload have marriage and child-care rights?
If simulated minds would come true and if they were assigned rights of their own, it may be difficult to ensure the protection of "digital human rights". For example, social science researchers might be tempted to secretly expose simulated minds, or whole isolated societies of simulated minds, to controlled experiments in which many copies of the same minds are exposed (serially or simultaneously) to different test conditions.
Research led by cognitive scientist Michael Laakasuo has shown that attitudes towards mind uploading are predicted by an individual's belief in an afterlife; the existence of mind uploading technology may threaten religious and spiritual notions of immortality and divinity.
Political and economic implications
Emulations might be preceded by a technological arms race driven by first-strike advantages. Their emergence and existence may lead to increased risk of war, including inequality, power struggles, strong loyalty and willingness to die among emulations, and new forms of racism, xenophobia, and religious prejudice. If emulations run much faster than humans, there might not be enough time for human leaders to make wise decisions or negotiate. It is possible that humans would react violently against the growing power of emulations, especially if that depresses human wages. Emulations may not trust each other, and even well-intentioned defensive measures might be interpreted as offense.
The book The Age of Em by Robin Hanson poses many hypotheses on the nature of a society of mind uploads, including that the most common minds would be copies of adults with personalities conducive to long hours of productive specialized work.
Emulation timelines and AI risk
Kenneth D. Miller, a professor of neuroscience at Columbia University and a co-director of the Center for Theoretical Neuroscience, raised doubts about the practicality of mind uploading. His major argument is that reconstructing neurons and their connections is in itself a formidable task, but it is far from being sufficient. Operation of the brain depends on the dynamics of electrical and biochemical signal exchange between neurons; therefore, capturing them in a single "frozen" state may prove insufficient. In addition, the nature of these signals may require modeling at the molecular level and beyond. Therefore, while not rejecting the idea in principle, Miller believes that the complexity of the "absolute" duplication of an individual mind will remain insurmountable for at least the next several hundred years.
There are very few feasible technologies that humans have refrained from developing. The neuroscience and computer-hardware technologies that may make brain emulation possible are widely desired for other reasons, and logically their development will continue into the future. We may also have brain emulations for a brief but significant period on the way to non-emulation based human-level AI. Assuming that emulation technology will arrive, a question becomes whether we should accelerate or slow its advance.
Arguments for speeding up brain-emulation research:
If neuroscience is the bottleneck on brain emulation rather than computing power, emulation advances may be more erratic and unpredictable based on when new scientific discoveries happen. Limited computing power would mean the first emulations would run slower and so would be easier to adapt to, and there would be more time for the technology to transition through society.
Improvements in manufacturing, 3D printing, and nanotechnology may accelerate hardware production, which could increase the "computing overhang" from excess hardware relative to neuroscience.
If one AI-development group had a lead in emulation technology, it would have more subjective time to win an arms race to build the first superhuman AI. Because it would be less rushed, it would have more freedom to consider AI risks.
Arguments for slowing brain-emulation research:
Greater investment in brain emulation and associated cognitive science might enhance the ability of artificial intelligence (AI) researchers to create "neuromorphic" (brain-inspired) algorithms, such as neural networks, reinforcement learning, and hierarchical perception. This could accelerate risks from uncontrolled AI. Participants at a 2011 AI workshop estimated an 85% probability that neuromorphic AI would arrive before brain emulation. This was based on the idea that brain emulation would require understanding of the workings and functions of the different brain components, along with the technological know-how to emulate neurons. As a counterpoint, it was noted that reverse engineering even the Microsoft Windows code base is already hard, so reverse engineering the brain would likely be much harder. By a very narrow margin, the participants on balance leaned toward the view that accelerating brain emulation would increase expected AI risk.
Waiting might give society more time to think about the consequences of brain emulation and develop institutions to improve cooperation.
Emulation research would also accelerate neuroscience as a whole, which might accelerate medical advances, cognitive enhancement, lie detectors, and capability for psychological manipulation.
Emulations might be easier to control than de novo AI because:
Human abilities, behavioral tendencies, and vulnerabilities are more thoroughly understood, thus control measures might be more intuitive and easier to plan.
Emulations could more easily inherit human motivations.
Emulations are harder to manipulate than de novo AI, because brains are messy and complicated; this could reduce risks of their rapid takeoff. Also, emulations may be bulkier and require more hardware than AI, which would also slow the speed of a transition. Unlike AI, an emulation would not be able to rapidly expand beyond the size of a human brain. Emulations running at digital speeds would have less intelligence differential vis-à-vis AI and so might more easily control AI.
As counterpoint to these considerations, Bostrom notes some downsides:
Even if we better understand human behavior, the evolution of emulation behavior under self-improvement might be much less predictable than the evolution of safe de novo AI under self-improvement.
Emulations may not inherit all human motivations. Perhaps they would inherit our darker motivations or would behave abnormally in the unfamiliar environment of cyberspace.
Even if there is a slow takeoff toward emulations, there would still be a second transition to de novo AI later on. Two intelligence explosions may mean more total risk.
Because of the postulated difficulties that a whole brain emulation-generated superintelligence would pose for the control problem, computer scientist Stuart J. Russell in his book Human Compatible rejects creating one, simply calling it "so obviously a bad idea".
Advocates
In 1979, Hans Moravec described and endorsed mind uploading performed by a brain surgeon. Moravec used a similar description in 1988, calling it "transmigration".
Ray Kurzweil, director of engineering at Google, has long predicted that people will be able to "upload" their entire brains to computers and become "digitally immortal" by 2045. Kurzweil made this claim for many years, e.g. during his speech in 2013 at the Global Futures 2045 International Congress in New York, which claims to subscribe to a similar set of beliefs. Mind uploading has also been advocated by a number of researchers in neuroscience and artificial intelligence, such as Marvin Minsky. In 1993, Joe Strout created a small web site called the Mind Uploading Home Page, and began advocating the idea in cryonics circles and elsewhere on the net. That site has not been actively updated in recent years, but it has spawned other sites including MindUploading.org, run by Randal A. Koene, who also moderates a mailing list on the topic. These advocates see mind uploading as a medical procedure which could eventually save countless lives.
Many transhumanists look forward to the development and deployment of mind uploading technology, with transhumanists such as Nick Bostrom predicting that it will become possible within the 21st century due to technological trends such as Moore's law.
Michio Kaku, in collaboration with Science, hosted a documentary, Sci Fi Science: Physics of the Impossible, based on his book Physics of the Impossible. Episode four, titled "How to Teleport", mentions that mind uploading via techniques such as quantum entanglement and whole brain emulation using an advanced MRI machine may enable people to be transported vast distances at near light-speed.
The book Beyond Humanity: CyberEvolution and Future Minds by Gregory S. Paul & Earl D. Cox, is about the eventual (and, to the authors, almost inevitable) evolution of computers into sentient beings, but also deals with human mind transfer. Richard Doyle's Wetwares: Experiments in PostVital Living deals extensively with uploading from the perspective of distributed embodiment, arguing for example that humans are currently part of the "artificial life phenotype". Doyle's vision reverses the polarity on uploading, with artificial life forms such as uploads actively seeking out biological embodiment as part of their reproductive strategy.
In fiction
Mind uploading—transferring an individual's personality to a computer—appears in several works of science fiction. It is distinct from the concept of transferring a consciousness from one human body to another. It is sometimes applied to a single person and other times to an entire society. Recurring themes in these stories include whether the computerized mind is truly conscious, and if so, whether identity is preserved. It is a common feature of the cyberpunk subgenre, sometimes taking the form of digital immortality.
See also
BRAIN Initiative
Brain transplant
Brain-reading
Cyborg
Cylon (reimagining)
Democratic transhumanism
Human Brain Project
Isolated brain
Neuralink
Open individualism
Posthumanization
Robotoid
Ship of Theseus—thought experiment asking if objects having all parts replaced fundamentally remain the same object
Simulation hypothesis
Technologically enabled telepathy
Teletransportation paradox
Thought recording and reproduction device
Turing test
The Future of Work and Death
Vertiginous question
Chinese room
2045 Initiative
Dmitry Itskov
Miguel Nicolelis
Neural network (machine learning)
References
Fictional technology
Hypothetical technology
Immortality
Neurotechnology
Posthumanism
Transhumanism | Mind uploading | [
"Technology",
"Engineering",
"Biology"
] | 6,422 | [
"Genetic engineering",
"Transhumanism",
"Ethics of science and technology"
] |
162,874 | https://en.wikipedia.org/wiki/Pentamidine | Pentamidine is an antimicrobial medication used to treat African trypanosomiasis, leishmaniasis, Balamuthia infections, babesiosis, and to prevent and treat pneumocystis pneumonia (PCP) in people with poor immune function. In African trypanosomiasis it is used for early disease before central nervous system involvement, as a second line option to suramin. It is an option for both visceral leishmaniasis and cutaneous leishmaniasis. Pentamidine can be given by injection into a vein or muscle or by inhalation.
Common side effects of the injectable form include low blood sugar, pain at the site of injection, nausea, vomiting, low blood pressure, and kidney problems. Common side effects of the inhaled form include wheezing, cough, and nausea. It is unclear if doses should be changed in those with kidney or liver problems. Pentamidine is not recommended in early pregnancy but may be used in later pregnancy. Its safety during breastfeeding is unclear. Pentamidine is in the aromatic diamidine family of medications. While the way the medication works is not entirely clear, it is believed to involve decreasing the production of DNA, RNA, and protein.
Pentamidine came into medical use in 1937. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In regions of the world where trypanosomiasis is common pentamidine is provided for free by the World Health Organization (WHO).
Medical uses
Treatment of PCP caused by Pneumocystis jirovecii
Prevention of PCP in adults with HIV who have one or both of the following:
History of PCP
CD4+ count ≤ 200 cells/mm³
Treatment of leishmaniasis
Treatment of African trypanosomiasis caused by Trypanosoma brucei gambiense
Balamuthia infections
Pentamidine is classified as an orphan drug by the U.S. Food and Drug Administration
Other uses
Use as an antitumor drug has also been proposed.
Pentamidine is also identified as a potential small molecule antagonist that disrupts this interaction between S100P and RAGE receptor.
Special Populations
Pregnancy
It has not been shown to cause birth defects in animal studies when given intravenously. There are no controlled studies to show if pentamidine can harm the fetus in pregnant women. It is only recommended if the drug of choice trimethoprim-sulfamethoxazole is contraindicated.
Breastfeeding
There is no information regarding the excretion of pentamidine in breast milk, but since the adverse effects on breastfed infants are unknown currently, it is recommended by the manufacturer for the infant to not be breastfed or for the mother to stop the drug. Risks versus benefits for the mother should be considered when making this decision.
Children
Pentamidine can be used in the prevention of PCP in children with HIV who cannot tolerate Trimethoprim/Sulfamethoxazole and can use a nebulizer. Intravenous solutions of pentamidine should only be used in children with HIV older than 2 years when other treatments are unavailable.
Elderly
There is no data for the use of pentamidine in this specific population.
Contraindications
Patients with a history of anaphylaxis or hypersensitivity to pentamidine isethionate
Side effects
Common
Burning pain, dryness, or sensation of lump in throat
Chest pain
Coughing
Difficulty in breathing
Difficulty in swallowing
Skin rash
Wheezing
Rare
Nausea and vomiting
Pain in upper abdomen, possibly radiating to the back
Severe pain in side of chest
Shortness of breath
Others
Blood: Pentamidine frequently causes leukopenia and less often thrombopenia, which may cause symptomatic bleeding. Some cases of anemia, possibly related to folic acid deficiency, have been described.
Cardiovascular: Hypotension, which may be severe. Severe or fatal arrhythmias and heart failure are quite frequent.
Kidney: About 25 percent of patients develop signs of nephrotoxicity, ranging from mild, asymptomatic azotemia (increased serum creatinine and urea) to irreversible renal failure. Ample fluids or intravenous hydration may prevent some nephrotoxicity.
Liver: Elevated liver enzymes are associated with intravenous use of pentamidine. Hepatomegaly and hepatitis have been encountered with long term prophylactic use of pentamidine inhalation.
Neurological: Dizziness, drowsiness, neuralgia, confusion, hallucinations, seizures and other central side effects are reported.
Pancreas: Hypoglycemia that requires symptomatic treatment is frequently seen. On the other hand, pentamidine may cause or worsen diabetes mellitus.
Respiratory: Cough and bronchospasm, most frequently seen with inhalation.
Skin: Severe local reactions have been seen after extravasation of intravenous solutions or following intramuscular injection. Pentamidine itself may cause rash, or rarely Stevens–Johnson syndrome or Lyell syndrome.
Other reported effects include eye discomfort, conjunctivitis, throat irritation, splenomegaly, Herxheimer reaction, and electrolyte imbalances (e.g. hypocalcemia).
Drug interactions
The additional or sequential use of other nephrotoxic drugs like aminoglycosides, amphotericin B, capreomycin, colistin, polymyxin B, vancomycin, foscarnet, or cisplatin should be closely monitored, or whenever possible completely avoided.
Mechanism of action
The mechanism seems to vary with different organisms and is not well understood. However, pentamidine is suspected to work through various methods of interference of critical functions in DNA, RNA, phospholipid and protein synthesis. Pentamidine binds to adenine-thymine-rich regions of the Trypanosoma parasite DNA, forming a cross-link between two adenines four to five base pairs apart. The drug also inhibits topoisomerase enzymes in the mitochondria of Pneumocystis jirovecii. Similarly, pentamidine inhibits type II topoisomerase in the mitochondria of the Trypanosoma parasite, resulting in a broken and unreadable mitochondrial genome.
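As a purely conceptual illustration of the adenine–thymine-rich binding regions mentioned above, the sketch below scans a DNA string for short windows with a high A/T fraction. It does not model pentamidine binding or minor-groove chemistry; the sequence, window length, and threshold are arbitrary assumptions chosen for illustration.

```python
# Illustrative only: report AT-rich windows in a DNA sequence.
# Window length and threshold are arbitrary choices for this sketch.

def at_rich_windows(seq, window=6, threshold=0.8):
    """Yield (start_index, subsequence) for windows whose A/T fraction
    is at least the threshold."""
    seq = seq.upper()
    for i in range(len(seq) - window + 1):
        chunk = seq[i:i + window]
        at_fraction = sum(base in "AT" for base in chunk) / window
        if at_fraction >= threshold:
            yield i, chunk

example = "GCGATATTAAGCGCGCGTTTAAAGC"  # made-up sequence
for start, chunk in at_rich_windows(example):
    print(f"AT-rich window at position {start}: {chunk}")
```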
Resistance
Strains of the Trypanosoma brucei parasite that are resistant to pentamidine have been discovered. Pentamidine is brought into the mitochondria through carrier proteins, and the absence of these carriers prevents the drug from reaching its site of action.
Pharmacokinetics
Absorption: Pentamidine is completely absorbed when given intravenously or intramuscularly. When inhaled through a nebulizer, pentamidine accumulates in the bronchoalveolar fluid of the lungs at a higher concentration compared to injections. The inhaled form is minimally absorbed in the blood. Absorption is unreliable when given orally.
Distribution: When injected, pentamidine binds to tissues and proteins in the plasma. It accumulates in the kidney, liver, lungs, pancreas, spleen, and adrenal glands. Additionally, pentamidine does not reach curative levels in the cerebrospinal fluid. It has a volume of distribution of 286-1356 liters when given intravenously and 1658-3790 liters when given intramuscularly. Inhaled pentamidine is mainly deposited into the bronchoalveolar lavage fluid of the lungs.
Metabolism: Pentamidine is primarily metabolized by Cytochrome P450 enzymes in the liver. Up to 12% of pentamidine is eliminated in the urine unchanged.
Elimination: Pentamidine has an average half-life of 5–8 hours when given intravenously and 7–11 hours when given intramuscularly. However, these may increase with severe kidney problems. Pentamidine can remain in the system for as long as 8 months after the first injection.
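The half-lives quoted above imply simple first-order elimination, which can be sketched as a one-compartment decay curve. This is a generic illustration using the half-life range from the text, not a dosing calculation; the starting concentration is an arbitrary placeholder.

```python
# One-compartment, first-order elimination: C(t) = C0 * 0.5 ** (t / t_half).
# Half-lives below span the IV (5-8 h) and IM (7-11 h) ranges quoted above.

def remaining_fraction(t_half_hours, t_hours):
    """Fraction of the initial concentration remaining after t_hours."""
    return 0.5 ** (t_hours / t_half_hours)

for t_half in (5, 8, 11):
    print(f"t1/2 = {t_half:>2} h -> "
          f"{remaining_fraction(t_half, 24):.1%} remains after 24 h")
```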
Chemistry
Pentamidine isethionate for injection is commercially available as a lyophilized, white crystalline powder for reconstitution with sterile water or 5% dextrose. After reconstitution, the mixture should be free from discoloration and precipitation. Reconstitution with sodium chloride should be avoided due to formation of precipitates. Intravenous solutions of pentamidine can be mixed with intravenous HIV medications like zidovudine and intravenous heart medications like diltiazem. However, intravenous solutions of the antiviral foscarnet and the antifungal fluconazole are incompatible with pentamidine. To avoid side-effects associated with intravenous administration, the solution should be infused slowly to minimize the release of histamine.
History
Pentamidine was first used to treat African trypanosomiasis in 1937 and leishmaniasis in 1940 before it was registered as pentamidine mesylate in 1950.
The sudden increase in requests for use of Pentamidine isethionate in then unlicensed form from the CDC in the early 1980s for treating Pneumocystis jirovecii in young male patients was key in identifying the emergence of the HIV/AIDS epidemic at that time.
Its efficacy against Pneumocystis jirovecii was demonstrated in 1987, following its re-emergence on the drug market in 1984 in the current isethionate form.
Trade names and dose form
For oral inhalation and for nebulizer use:
NebuPent Nebulizer (APP Pharmaceuticals LLC - US)
For intravenous and intramuscular use:
US and Canada:
Pentacarinat 300 injection powder 300 mg vial (Avantis Pharma Inc - Canada)
Pentam 300 (APP Pharmaceuticals LLC - US)
Pentamidine isethionate 300 mg for injection (David Bull Laboratories LTD - Canada, Hospira Healthcare Corporation - Canada)
International Brands:
Pentamidine isethionate (Abbott)
Pentacarinat (Sanofi-Aventis)
Pentacrinat (Abbott)
Pentam (Abbott)
Pneumopent
See also
Netropsin
Lexitropsin
References
External links
Antifungals
Antiprotozoal agents
Amidines
DNA-binding substances
NMDA receptor antagonists
Phenol ethers
World Health Organization essential medicines
Wikipedia medicine articles ready to translate | Pentamidine | [
"Chemistry",
"Biology"
] | 2,215 | [
"Genetics techniques",
"Antiprotozoal agents",
"Amidines",
"Functional groups",
"DNA-binding substances",
"Biocides",
"Bases (chemistry)"
] |
162,971 | https://en.wikipedia.org/wiki/STMicroelectronics | STMicroelectronics NV (commonly referred to as ST or STMicro) is a European multinational semiconductor contract manufacturing and design company. It is the largest of such companies in Europe. It was founded in 1987 from the merger of two state-owned semiconductor corporations: Thomson Semiconducteurs of France and SGS Microelettronica of Italy. The company is incorporated in the Netherlands and headquartered in Plan-les-Ouates, Switzerland. Its shares are traded on Euronext Paris, the Borsa Italiana and the New York Stock Exchange.
History
ST was formed in 1987 by the merger of two government-owned semiconductor companies: Italian SGS Microelettronica (where SGS stands for Società Generale Semiconduttori, "General Semiconductor Company"), and French Thomson Semiconducteurs, the semiconductor arm of Thomson.
SGS Microelettronica originated in 1972 from a previous merger of two companies:
ATES (Aquila Tubi e Semiconduttori), a vacuum tube and semiconductor maker headquartered in L'Aquila, the regional capital of the region of Abruzzo in Southern Italy, which in 1961 changed its name to Azienda Tecnica ed Elettronica del Sud and relocated its manufacturing plant in the Industrial Zone of Catania, in Sicily;
Società Generale Semiconduttori (founded in 1957 by Jewish-Italian engineer, politician, and industrialist Adriano Olivetti).
Thomson Semiconducteurs was created in 1982 by the French government's widespread nationalization of industries following the election of François Mitterrand to the presidency. It included:
the semiconductor activities of the French electronics company Thomson;
in 1985 it bought Mostek, a US company founded in 1969 as a spin-off of Texas Instruments, from United Technologies;
Silec, founded in 1977;
Eurotechnique, founded in 1979 in Rousset, Bouches-du-Rhône as a joint-venture between Saint-Gobain of France and US-based National Semiconductor;
EFCIS (Étude et la Fabrication de Circuits Intégrés Spéciaux), founded in 1972 at CEA-Leti;
SESCOSEM, founded in 1969.
At the time of the merger of these two companies in 1987, the new corporation was named SGS-THOMSON and was led by chief executive officer Pasquale Pistorio. The company took its current name of STMicroelectronics in May 1998 following Thomson's sale of its shares. After its creation ST was ranked 14th among the top 20 semiconductor suppliers with sales of around US$850 million. The company has participated in the consolidation of the semiconductor industry since its formation, with acquisitions including:
In 1989, British company Inmos known for its transputer microprocessors from parent Thorn EMI;
In 1994, Canada-based Nortel's semiconductor activities;
In 1999, UK-based VLSI-Vision CMOS Image Sensor research & development company, a spin-out of Edinburgh University. Incorporated on 1 January 2000, the company became STMicroelectronics Imaging Division, currently part of the Analog MEMS and Sensors business group;
In 2000, WaferScale Integration Inc. (WSI, Fremont, California), a vendor of EPROM and flash memory-based programmable system-chips;
In 2002, Alcatel's Microelectronics division, which along with the incorporation of smaller ventures such as UK company, Synad Ltd, helped the company expand into the Wireless-LAN market;
In 2007, US-based Genesis Microchip. Genesis Microchip is known for their strength in video processing technology (Faroudja) and has design centres located in Santa Clara, California, Toronto, Taipei City and Bangalore.
On 8 December 1994, the company completed its initial public offering on the Paris and New York stock exchanges. Owner Thomson SA sold its stake in the company in 1998 when the company also listed on the Italian Bourse in Milan. In 2002, Motorola and TSMC joined ST and Philips in a new technology partnership. The Crolles 2 Alliance was created with a new 12" wafer manufacturing facility located in Crolles, France. In 2005, chief executive officer Pasquale Pistorio was succeeded by Carlo Bozotti, who then headed the memory products division and had been with the company’s predecessor since 1977. By 2005, ST was ranked fifth, behind Intel, Samsung, Texas Instruments and Toshiba, but ahead of Infineon, Renesas, NEC, NXP Semiconductors and Freescale. The company was the largest European semiconductors supplier, ahead of Infineon and NXP.
Early in 2007, NXP Semiconductors (formerly Philips Semiconductors) and Freescale (formerly Motorola Semiconductors) decided to stop their participation in Crolles 2 Alliance. Under the terms of the agreement the Alliance came to an end on December 31, 2007. On May 22, 2007, ST and Intel created a joint venture in the memory application called Numonyx: this new company merged ST and Intel Flash Memory activities. Semiconductor market consolidation continued with ST and NXP announcing on April 10, 2008, the creation of a new joint venture of their mobile activities, with ST owning 80% of the new company and NXP 20%. This joint venture began on August 20, 2008. On February 10, 2009, ST Ericsson, a joint venture bringing together ST-NXP Wireless and Ericsson Mobile Platforms, was established.
ST Ericsson was a multinational manufacturer of wireless products and semiconductors, supplying to mobile device manufacturers. ST-Ericsson was a 50/50 joint venture of STMicroelectronics and Ericsson established on February 3, 2009, and dissolved on August 2, 2013. Headquartered in Geneva, Switzerland, it was a fabless company, outsourcing semiconductor manufacturing to foundry companies.
In 2011, ST announced the creation of a joint lab with Sant'Anna School of Advanced Studies. The lab focuses on research and innovation in biorobotics, smart systems and microelectronics. Past collaborations with Sant'Anna School of Advanced Studies included DustBot, a platform that integrated self-navigating "service robots" for waste collection.
In 2015, the MEMS division of ST was ranked as the biggest European competitor of Silex Microsystems.
In 2018, chief executive Carlo Bozotti was succeeded by Jean-Marc Chery. In 2023, STMicroelectronics partnered with Synopsys to design a working chip on Microsoft Corp’s cloud, marking the first time AI software had been utilized for chip design.
In 2024, ST became the sixth shareholder of Quintauris, a joint company with the goal of standardizing RISC-V ecosystem.
Shareholders
As of December 31, 2014, the shareholders were:
68.4% public (New York Stock Exchange, Euronext Paris, Borsa Italiana Milano);
4.1% treasury shares;
27.6% STMicroelectronics Holding B.V.:
50% FT1CI (Bpifrance 79.2% and French Alternative Energies and Atomic Energy Commission (CEA) 20.8%);
50% Ministry of Economy and Finance of Italy.
Manufacturing facilities
Unlike fabless semiconductor companies, STMicroelectronics owns and operates its own semiconductor wafer fabs. The company owned five 8-inch (200 mm) wafer fabs and one 12-inch (300 mm) wafer fab in 2006. Most of the production is scaled at 0.18 μm, 0.13 μm, 90 nm and 65 nm (measurements of transistor gate length). STMicroelectronics also owns back-end plants, where silicon dies are assembled and bonded into plastic or ceramic packages.
Major sites include:
Grenoble, France
Grenoble is one of the company's most important R&D centres, employing around 4,000 staff. The Polygone site employs 2,200 staff and is one of the historical bases of the company (ex SGS). All the historical wafer fab lines are now closed but the site hosts the headquarters of many divisions (marketing, design, industrialization) and a R&D centre, focused on silicon and software design and fab process development.
The Crolles site hosts an 8-inch (200 mm) and a 12-inch (300 mm) fab and was originally built as a common R&D centre for submicrometre technologies as part of the 1990 Grenoble 92 partnership between SGS-Thomson and CNET, the R&D center of French telecom company France Telecom.
The 12-inch (300 mm) fab was inaugurated by French president Jacques Chirac on 27 February 2003. It includes an R&D centre which focuses on developing new nanometric technology processes for the 90-nm to 32-nm scale using 300 mm wafers, and it was developed for the Crolles 2 Alliance. This alliance of STMicroelectronics, TSMC, NXP Semiconductors (formerly Philips semiconductor) and Freescale (formerly Motorola semiconductor) partnered in 2002 to develop the facility and to work together on process development. The technologies developed at the facility were also used by global semiconductor foundry TSMC of Taiwan, allowing TSMC to build the products developed in Crolles on behalf of the Alliance partners who required such foundry capacity.
Rousset, France
Employing around 3,000 staff, Rousset hosts several division headquarters including smartcards, microcontrollers, and EEPROM as well as several R&D centers. Rousset also hosts an 8-inch (200-mm) fab, which was opened on May 15, 2000 by French prime minister Lionel Jospin.
The site opened in 1979 as a fab operated by Eurotechnique, a joint venture between Saint-Gobain of France and National Semiconductor of the US. Rousset was sold to Thomson-CSF in 1982 as part of the French government's 1981–82 nationalization of several industries. As part of the nationalisation, a former Thomson plant in the center of Aix-en-Provence, operating since the 1960s, was closed and staff were transferred to the new Rousset site. The original fab was upgraded to larger wafer sizes over the years, most recently in 1996, and is now being shut down. The site also has a "Wafer Level Chip Scale Packaging" accreditation for eSIM ICs.
In 1988, a small group of employees from the Thomson Rousset plant (including the director, Marc Lassus) founded a start-up company, Gemalto (formerly known as Gemplus), which became a leader in the smartcard industry.
Tours, France
Employing 1,500 staff, this site hosts a fab and R&D centres.
Milan, Italy
Employing 6,000 staff, the Milan facilities match Grenoble in importance. Agrate Brianza employs around 4,000 staff and is a historical base of the company (ex SGS). The site has several wafer fab lines and an R&D center. Castelletto employs 300 to 400 staff and hosts some divisions and R&D centres.
Catania, Italy
The Catania plant in Sicily employs 5,000 staff and hosts several R&D centers and divisions, focusing on flash memory technologies, as well as two fabs. The plant was launched in 1961 by ATES under a licensing arrangement with RCA of the US, initially using germanium. The site's two major wafer fabs are one opened in April 1997 by then-Italian Prime Minister Romano Prodi and another that was never completed and was transferred in its current state to "Numonyx" in 2008. A new manufacturing facility for 150 mm silicon carbide (SiC) substrates should open here in 2023.
In October 2022, the EU supported STMicroelectronics for the construction of a silicon carbide wafer plant in Catania with €293 million through the Recovery and Resilience Facility to be completed in 2026, and in line with the European Chips Act.
Caserta, Italy
STmicro eSIM and SIM production facility for embedded form factor eSIM.
Kirkop, Malta
As of 2010, ST employed around 1,800 people in Kirkop, making it the largest private sector employer, and the country's leading exporter.
Singapore
In 1970, SGS created its first assembly back-end plant in Singapore, in the area of Toa Payoh. Then in 1981, SGS decided to build a wafer fab in Singapore. Converted over time to an 8-inch (200 mm) line, this is now an important wafer fab of the group. Ang Mo Kio also hosts some design centres. As of 2004, the site employed 6,000 staff.
Tunis, Tunisia
Application, design and support. about 110 employees.
Bouskoura, Morocco
Founded in 1979 as a radiofrequency products facility, the Bouskoura site now hosts back-end manufacturing activity, which includes chip testing and packaging. Since 2022 it also features a production line for silicon carbide products that primarily will be used in electric vehicles.
Norrköping, Sweden
The Norrköping plant is a wafer fab that, at the start of production in 2021, was the first to produce 200 mm (8 in) silicon carbide wafers. The wafers are mostly used for SiC power devices.
Other sites
Administrative headquarters
Geneva, Switzerland: Corporate headquarters, which hosts most of ST's top management; a few hundred employees in total.
Saint-Genis-Pouilly, France, near Geneva: A few hundred employees. Headquarters for logistics.
Paris: Marketing and support.
Regional headquarters
Coppell, Texas: US headquarters.
Singapore: Headquarters for the Asia-Pacific region.
Tokyo: Headquarters for Japan and Korea operations.
Shanghai: Headquarters for China operations.
Assembly plants
Malta: In 1981, SGS (a predecessor of STMicroelectronics) built its first assembly plant in Malta. STMicroelectronics is, as of 2008, the largest private employer on the island, employing around 1,800 people.
Muar, Malaysia: around 4000 employees. This site was built in 1974 by Thomson and is now an assembly plant.
Shenzhen, Guangdong province, China: In 1994, ST and the Shenzhen Electronics Group signed a partnership to construct and jointly operate an assembly plant (ST has majority with 60%). The plant is located in Futian Free Trade Zone and became operational in 1996. It has around 3,300 employees. A new assembly plant is built in Longgang since 2008, and closed up till 2014. The R&D, design, sales and marketing office is located in the Hi-tech industrial park in Nanshan, Shenzhen.
Calamba in the province of Laguna, Philippines: In 2008, ST acquired this plant from NXP Semiconductors, initially as part of a joint venture with NXP; ST later acquired the remaining share, turning the site into a full-fledged STMicroelectronics assembly and testing plant. It currently employs 2,000 people.
Design centres
Cairo, Egypt: Hardware and software design center, started in 2020, with 50 employees.
Rabat, Morocco: A design center that employs 160 people.
Naples, Italy: A design center employing 300 people.
Lecce, Italy: HW & SW Design Center which hosts 20 researchers in the Advanced System Technology group.
Ang Mo Kio, Singapore: In 1970, SGS created its first assembly back-end plant in Singapore, in the area of Toa Payoh. Then in 1981, SGS decided to build a wafer fab in Singapore. The Singapore technical engineers were trained in Italy, and the Ang Mo Kio fab produced its first wafers in 1984. Converted over time to an 8-inch (200 mm) line, it is now an important wafer fab of the ST group.
Greater Noida, India: The Noida site was launched in 1992 to conduct software engineering activities. A silicon design centre was inaugurated in 1995. With 120 employees, it was the largest design center of the company outside Europe at the time. In 2006, the site was shifted to Greater Noida for further expansion. The site hosts mainly design teams.
Santa Clara, California, (Silicon Valley), United States: 120 staff in marketing, design and applications.
La Jolla, California, (San Diego, United States): 80 staff in design and applications.
Lancaster, Pennsylvania, United States: Application, support, and marketing.
Prague, Czech Republic: 100 to 200 employees. Application, design and support.
Tunis, Tunisia: 110 employees. Application, design and support.
Sophia Antipolis, near Nice, France: Design center with a few hundred employees.
Edinburgh, Scotland: 200 staff focused in the field of imaging and photon detection.
Ottawa, Ontario, Canada: In 1993, SGS-Thomson purchased the semiconductor activities of Nortel which owned in Ottawa an R&D center and a fab. The fab was closed in 2000, however, a design, R&D centre and sales office is operating in the city.
Toronto, Ontario, Canada: HW & SW Design Center primarily involved with the design of video processor ICs as part of ST's TVM Division.
Bangalore, India: HW and SW design center employing more than 250 people (Including the employees of ST Ericsson and Genesis Microchip).
Zaventem, Belgium: 100 employees. Design & Application Center.
Helsinki, Finland: Design Center.
Turku, Finland: Design Center.
Oulu, Finland: Design Center.
Tampere, Finland: Design Center.
Longmont, Colorado United States: Design Center.
Graz, Austria: NFC Competence Center.
Pisa, Italy: A design center employing more than 50 people. R&D, analog and digital design.
Closing sites
The Phoenix, Arizona 8 inch (200 mm) fab, the Carrollton, Texas 6 inch (150 mm) fab, and the Ain Sebaa, Morocco fab were beginning rampdown plans, and were destined to close by 2010.
The Casablanca, Morocco site consists of two assembly parts (Bouskoura and Aïn Sebaâ) and totals around 4000 employees. It was opened in the 1960s by Thomson.
The Bristol, United Kingdom site employing well over 300 at its peak (in 2001/2) but was ramped down to approx. 150 employees at close by early 2014.
The Ottawa, Ontario, Canada plant (approx. 450 employees) was to be closed down by the end of 2013.
Closed sites
Rennes, France hosted a 6-inch (150 mm) fab and was closed in 2004
Rancho Bernardo, California, US a 4-inch (100 mm) fab created by Nortel and purchased by SGS-Thomson in 1994, after which it was converted into a 6-inch (150 mm) fab in 1996.
SGS's first presence in the US was a sales office based in Phoenix in the early 1980s. Later, under SGS-Thomson, an 8-inch (200 mm) fab was completed in Phoenix in 1995. The company's second 8" fab after Crolles 1, the site was first dedicated to producing microprocessors for Cyrix. On 10 July 2007, ST said that it would close this site, and in July 2010 the shell of the Phoenix PF1 FAB was bought by Western Digital Corporation.
The Carrollton, Texas, US site was built in 1969 by Mostek, an American company founded by former employees of Texas Instruments. In 1979, Mostek was acquired by United Technologies, which sold it to Thomson Semiconducteurs in 1985. Initially equipped with a 4-inch (100 mm) fab, it was converted into a 6-inch (150 mm) fab in 1988. The activities of INMOS in the US were transferred to Carrollton in 1989 following its acquisition by SGS Thomson. It was closed in 2010.
Bristol, UK This R&D site housed Inmos, which in 1978 began development of the Transputer microprocessor. The site was acquired with Inmos in 1989, and was primarily involved with the design of home video and entertainment products (e.g. Set-Top Box), GPS chips, and accompanying software. At its peak the site employed more than 250 employees. The site closed in 2014.
Future locations
On 8 August 2007, ST bought Nokia's microchip development team and planned to invest heavily in the development of cellular ASIC applications. The purchase included Nokia's ASIC team in Southwood (UK), and the company planned several sites in Finland.
In June 2023, ST announced its partnership with GlobalFoundries to build a new factory in Crolles, France.
See also
Altitude SEE Test European Platform (ASTEP)
Interuniversity Microelectronics Centre (IMEC)
Numonyx
ST-Ericsson
List of semiconductor fabrication plants
STM8
STM32
STMicroelectronics Small Shareholders' Group (STM.S.S.G.)
Collectif Autonome et Démocratique de STMicroelectronics (CAD-ST)
References
External links
Electronics companies established in 1987
Companies listed on Euronext Paris
Semiconductor companies of Switzerland
Electronics companies of France
Electronics companies of Italy
Government-owned companies of Italy
Manufacturing companies based in Geneva
Partly privatized companies of Italy
Photovoltaics manufacturers
Government-owned companies of France
Multinational companies headquartered in Switzerland
CAC 40
Companies in the FTSE MIB
MEMS factories | STMicroelectronics | [
"Materials_science",
"Engineering"
] | 4,478 | [
"Photovoltaics manufacturers",
"Microelectronic and microelectromechanical systems",
"MEMS factories",
"Engineering companies"
] |
163,005 | https://en.wikipedia.org/wiki/Genlock | Genlock (generator locking) is a common technique where the video output of one source (or a specific reference signal from a signal generator) is used to synchronize other picture sources together. The aim in video applications is to ensure the coincidence of signals in time at a combining or switching point. When video instruments are synchronized in this way, they are said to be generator-locked, or genlocked.
Possible problems
Video signals generated and output by generator-locked instruments are said to be syntonized. Syntonized video signals will be precisely frequency-locked, but because of delays caused by the unequal transmission path lengths, the synchronized signals will exhibit differing phases at various points in the television system. Modern video equipment such as production switchers that have multiple video inputs often include a variable delay on each input to compensate for the phase differences and time all the input signals to precise phase coincidence.
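As a simplified illustration of the per-input delay described above, the sketch below represents each input as a list of samples and gives the earlier-arriving input an extra delay so that both reach the switching point in phase. It is a conceptual toy model, not how broadcast hardware is implemented, and the delay values are made up.

```python
# Toy model: align frequency-locked inputs that arrive with unequal path delays
# by adding a compensating delay to the earlier input. Values are illustrative.

def apply_delay(samples, delay):
    """Delay a signal by `delay` samples, padding the front with zeros."""
    return [0] * delay + list(samples[:len(samples) - delay])

reference = [0, 1, 2, 3, 4, 5, 6, 7]      # shared genlock reference waveform
input_a = apply_delay(reference, 1)       # short cable run
input_b = apply_delay(reference, 4)       # long cable run

# Give input A an extra 3-sample delay so both inputs are time-coincident.
input_a_timed = apply_delay(input_a, 3)
print(input_a_timed == input_b)           # True: phases now match at the switcher
```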
Where two or more video signals are combined or being switched between, the horizontal and vertical timing of the picture sources should be coincident with each other. If they are not, the picture will appear to jump when switching between the sources whilst the display device re-adjusts the horizontal and/or vertical scan to correctly reframe the image.
Where composite video is in use, the phase of the chrominance subcarrier of each source being combined or switched should also be coincident. This is to avoid changes in colour hue and/or saturation during a transition between sources.
Scope
Generator locking can be used to synchronize as few as two isolated sources (e.g., a television camera and a videotape machine feeding a vision mixer (production switcher)), or in a wider facility where all the video sources are locked to a single synchronizing pulse generator (e.g., a fast-paced sporting event featuring multiple cameras and recording devices). Generator locking can also be used to ensure that multiple CRT monitors that appear in a movie are flicker-free. Generator locking is also used to synchronize two cameras for Stereoscopic 3D video recording.
In broadcast systems, an analog generator-lock signal usually consists of vertical and horizontal synchronizing pulses together with chrominance phase reference in the form of colorburst. No picture information is usually carried to avoid disturbing the timing signals, and the name reference, black and burst, color black, or black burst is usually given to such a signal. A composite colour video signal inherently carries the same reference signals and can be used as a generator-locking signal, albeit at the risk of being disturbed by out-of-specification picture signals.
Although some high-definition broadcast systems may use a standard-definition reference signal as a generator-locking reference signal, the use of tri-level synchronising pulses directly related to the frame and line rate is increasing within HD systems. A tri-level sync pulse is a signal that initially goes from 0 volts DC to a negative voltage, then a positive voltage, before returning to zero volts DC again. The voltage excursions are typically 300 mV either side of zero volts, and the duration of each of the two excursions corresponds to a defined number of digital picture samples.
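A crude numeric sketch of the tri-level pulse shape described above is shown below. The ±300 mV levels come from the text; the number of samples per excursion and the surrounding blanking are placeholder assumptions rather than values from any broadcast standard.

```python
# Illustrative tri-level sync pulse: 0 V -> -300 mV -> +300 mV -> 0 V.
# Sample counts are placeholders; real standards define them precisely.

LOW_MV, HIGH_MV = -300, 300
EXCURSION_SAMPLES = 4
BLANKING_SAMPLES = 3

pulse_mv = ([0] * BLANKING_SAMPLES
            + [LOW_MV] * EXCURSION_SAMPLES
            + [HIGH_MV] * EXCURSION_SAMPLES
            + [0] * BLANKING_SAMPLES)
print(pulse_mv)
# [0, 0, 0, -300, -300, -300, -300, 300, 300, 300, 300, 0, 0, 0]
```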
Connections
Most television studio and professional video cameras have dedicated generator-locking ports on the camera. If the camera is tethered with a triaxial cable or optical fibre cable, the analog generator-locking signal is used to lock the camera control unit, which in turn locks the camera head by means of information carried within a data channel transmitted along the cable. If the camera is an ENG-type camera, one without a triax/fibre connection or without a dockable head, the generator-locking signal is carried through a separate cable from the video.
Variants
Natlock is a picture-source synchronizing system using audio tone signals to describe the timing discrepancies between composite video signals, while Icelock uses digital information conveyed in the vertical blanking interval of a composite video signal.
See also
Frame synchronizer (video)
Time base corrector
References
Synchronization
Film and video technology
Broadcast engineering
Television terminology | Genlock | [
"Engineering"
] | 848 | [
"Broadcast engineering",
"Electronic engineering",
"Telecommunications engineering",
"Synchronization"
] |
163,389 | https://en.wikipedia.org/wiki/Diamond%20dust | Diamond dust is a ground-level cloud composed of tiny ice crystals. This meteorological phenomenon is also referred to simply as ice crystals and is reported in the METAR code as IC. Diamond dust generally forms under otherwise clear or nearly clear skies, so it is sometimes referred to as clear-sky precipitation. Diamond dust is most commonly observed in Antarctica and the Arctic, but can occur anywhere with a temperature well below freezing. In the polar regions of Earth, diamond dust may persist for several days without interruption.
Characteristics
Diamond dust is similar to fog in that it is a cloud based at the surface; however, it differs from fog in two main ways. Generally fog refers to a cloud composed of liquid water (the term ice fog usually refers to a fog that formed as liquid water and then froze, and frequently seems to occur in valleys with airborne pollution such as Fairbanks, Alaska, while diamond dust forms directly as ice). Also, fog is a dense-enough cloud to significantly reduce visibility, while diamond dust is usually very thin and may not have any effect on visibility (there are far fewer crystals in a volume of air than there are droplets in the same volume with fog). Because mist is often classified as being more transparent than fog, diamond dust has often been referred to as ice mist. However, diamond dust can still often reduce visibility, in some cases severely.
The depth of the diamond dust layer can vary substantially. Because diamond dust does not always reduce visibility it is often first noticed by the brief flashes caused when the tiny crystals, tumbling through the air, reflect sunlight to the eye. This glittering effect gives the phenomenon its name since it looks like many tiny diamonds are flashing in the air.
Formation
These ice crystals usually form when a temperature inversion is present at the surface and the warmer air above the ground mixes with the colder air near the surface. Since warmer air frequently contains more water vapor than colder air, this mixing will usually also transport water vapor into the air near the surface, causing the relative humidity of the near-surface air to increase. If the relative humidity increase near the surface is large enough then ice crystals may form.
To form diamond dust the temperature must be below the freezing point of water, 0 °C (32 °F), or the ice cannot form or would melt. However, diamond dust is not often observed at temperatures near the freezing point. At temperatures between the freezing point and roughly −40 °C, increasing the relative humidity can cause either fog or diamond dust. This is because very small droplets of water can remain liquid well below the freezing point, a state known as supercooled water. In areas with a lot of small particles in the air, from human pollution or natural sources like dust, the water droplets are likely to be able to freeze at a considerably warmer temperature, but in very clean areas, where there are no particles (ice nuclei) to help the droplets freeze, they can remain liquid down to roughly −40 °C, at which point even very tiny, pure water droplets will freeze. In the interior of Antarctica diamond dust is fairly common at the much lower temperatures found there.
Artificial diamond dust can form from snow machines which blow ice crystals into the air. These are found at ski resorts. Diamond dust may also be observed immediately downwind from manufacturing facilities or chilled water plants that produce steam.
Optical properties
Diamond dust is often associated with halos, such as sun dogs, light pillars, etc. Like the ice crystals in cirrus or cirrostratus clouds, diamond dust crystals form directly as simple hexagonal ice crystals — as opposed to freezing drops — and generally form slowly. This combination results in crystals with well defined shapes - usually either hexagonal plates or columns - which, like a prism, can reflect and/or refract light in specific directions.
Climatology
While diamond dust can be seen in any area of the world that has cold winters, it is most frequent in the interior of Antarctica, where it is common year-round. Schwerdtfeger (1970) shows that diamond dust was observed on average 316 days a year at Plateau Station in Antarctica, and Radok and Lile (1977) estimate that over 70% of the precipitation that fell at Plateau Station in 1967 fell in the form of diamond dust. Once melted, the total precipitation for the year was nonetheless very small.
Weather reporting and interference
Diamond dust may sometimes cause a problem for automated airport weather stations. The ceilometer and visibility sensor do not always correctly interpret the falling diamond dust and report the visibility and ceiling as zero (overcast skies). However, a human observer would correctly notice clear skies and unrestricted visibility. The METAR identifier for diamond dust within international hourly weather reports is IC.
See also
Crepuscular rays
Light beam
False sunrise
False sunset
References
Further reading
— An excellent reference for optical phenomena including photos of displays in Antarctica caused by diamond dust.
Photo of artificial Diamond Dust
External links
A remarkable video filmed in Hokkaido, Japan. 1min 22sec HQ
Longer version of the above video. 5min 10sec HD
Note that images are different from the naked eye in that they capture out-of-focus crystals which are shown as large, blurred objects.
The Science Behind Diamond Dust: How It Reflects Solar Radiation
Psychrometrics
Precipitation
Water ice
Snow or ice weather phenomena
Atmospheric optical phenomena
Articles containing video clips
sv:Diamantstoft | Diamond dust | [
"Physics"
] | 1,074 | [
"Optical phenomena",
"Physical phenomena",
"Atmospheric optical phenomena",
"Earth phenomena"
] |
163,806 | https://en.wikipedia.org/wiki/Oil%20platform | An oil platform (also called an oil rig, offshore platform, oil production platform, etc.) is a large structure with facilities to extract and process petroleum and natural gas that lie in rock formations beneath the seabed. Many oil platforms will also have facilities to accommodate the workers, although it is also common to have a separate accommodation platform linked by bridge to the production platform. Most commonly, oil platforms engage in activities on the continental shelf, though they can also be used in lakes, inshore waters, and inland seas. Depending on the circumstances, the platform may be fixed to the ocean floor, consist of an artificial island, or float. In some arrangements the main facility may have storage facilities for the processed oil. Remote subsea wells may also be connected to a platform by flow lines and by umbilical connections. These sub-sea facilities may include one or more subsea wells or manifold centres for multiple wells.
Offshore drilling presents environmental challenges, both from the produced hydrocarbons and the materials used during the drilling operation. Controversies include the ongoing US offshore drilling debate.
There are many different types of facilities from which offshore drilling operations take place. These include bottom-founded drilling rigs (jackup barges and swamp barges), combined drilling and production facilities, either bottom-founded or floating platforms, and deepwater mobile offshore drilling units (MODU), including semi-submersibles and drillships. These are capable of operating in water depths up to . In shallower waters, the mobile units are anchored to the seabed. However, in deeper water (more than ), the semisubmersibles or drillships are maintained at the required drilling location using dynamic positioning.
History
Jan Józef Ignacy Łukasiewicz (Polish pronunciation: [iɡˈnatsɨ wukaˈɕɛvitʂ]; 8 March 1822 – 7 January 1882) was a Polish pharmacist, engineer, businessman, inventor, and philanthropist. He was one of the most prominent philanthropists in the Kingdom of Galicia and Lodomeria, crown land of Austria-Hungary. He was a pioneer who in 1856 built the world's first modern oil refinery.
Around 1891, the first submerged oil wells were drilled from platforms built on piles in the fresh waters of the Grand Lake St. Marys (a.k.a. Mercer County Reservoir) in Ohio. The wide but shallow reservoir was built from 1837 to 1845 to provide water to the Miami and Erie Canal.
Around 1896, the first submerged oil wells in salt water were drilled in the portion of the Summerland field extending under the Santa Barbara Channel in California. The wells were drilled from piers extending from land out into the channel.
Other notable early submerged drilling activities occurred on the Canadian side of Lake Erie since 1913 and Caddo Lake in Louisiana in the 1910s. Shortly thereafter, wells were drilled in tidal zones along the Gulf Coast of Texas and Louisiana. The Goose Creek field near Baytown, Texas, is one such example. In the 1920s, drilling was done from concrete platforms in Lake Maracaibo, Venezuela.
The oldest offshore well recorded in Infield's offshore database is the Bibi Eibat well which came on stream in 1923 in Azerbaijan. Landfill was used to raise shallow portions of the Caspian Sea.
In the early 1930s, the Texas Company developed the first mobile steel barges for drilling in the brackish coastal areas of the gulf.
In 1937, Pure Oil Company (now Chevron Corporation) and its partner Superior Oil Company (now part of ExxonMobil Corporation) used a fixed platform to develop a field in of water, one mile (1.6 km) offshore of Calcasieu Parish, Louisiana.
In 1938, Humble Oil built a mile-long wooden trestle with railway tracks into the sea at McFadden Beach on the Gulf of Mexico, placing a derrick at its end – this was later destroyed by a hurricane.
In 1945, concern for American control of its offshore oil reserves caused President Harry Truman to issue an Executive Order unilaterally extending American territory to the edge of its continental shelf, an act that effectively ended the 3-mile limit "freedom of the seas" regime.
In 1946, Magnolia Petroleum (now ExxonMobil) drilled at a site off the coast, erecting a platform in of water off St. Mary Parish, Louisiana.
In early 1947, Superior Oil erected a drilling/production platform in of water some 18 miles off Vermilion Parish, Louisiana. But it was Kerr-McGee Oil Industries (now part of Occidental Petroleum), as operator for partners Phillips Petroleum (ConocoPhillips) and Stanolind Oil & Gas (BP), that completed its historic Ship Shoal Block 32 well in October 1947, months before Superior actually drilled a discovery from their Vermilion platform farther offshore. In any case, that made Kerr-McGee's well the first oil discovery drilled out of sight of land.
The British Maunsell Forts constructed during World War II are considered the direct predecessors of modern offshore platforms. Having been pre-constructed in a very short time, they were then floated to their location and placed on the shallow bottom of the Thames and the Mersey estuary.
In 1954, the first jackup oil rig was ordered by Zapata Oil. It was designed by R. G. LeTourneau and featured three electro-mechanically operated lattice-type legs. Built on the shores of the Mississippi River by the LeTourneau Company, it was launched in December 1955, and christened "Scorpion". The Scorpion was put into operation in May 1956 off Port Aransas, Texas. It was lost in 1969.
As offshore drilling moved into deeper waters, fixed platform rigs were built at first; when drilling equipment was needed at still greater depths in the Gulf of Mexico, the first jack-up rigs began appearing from specialized offshore drilling contractors such as forerunners of ENSCO International.
The first semi-submersible resulted from an unexpected observation in 1961. Blue Water Drilling Company owned and operated the four-column submersible Blue Water Rig No.1 in the Gulf of Mexico for Shell Oil Company. As the pontoons were not sufficiently buoyant to support the weight of the rig and its consumables, it was towed between locations at a draught midway between the top of the pontoons and the underside of the deck. It was noticed that the motions at this draught were very small, and Blue Water Drilling and Shell jointly decided to try operating the rig in its floating mode. The concept of an anchored, stable floating deep-sea platform had been designed and tested back in the 1920s by Edward Robert Armstrong for the purpose of operating aircraft with an invention known as the "seadrome". The first purpose-built drilling semi-submersible Ocean Driller was launched in 1963. Since then, many semi-submersibles have been purpose-designed for the drilling industry mobile offshore fleet.
The first offshore drillship was the CUSS 1 developed for the Mohole project to drill into the Earth's crust.
As of June, 2010, there were over 620 mobile offshore drilling rigs (Jackups, semisubs, drillships, barges) available for service in the competitive rig fleet.
One of the world's deepest hubs is currently the Perdido in the Gulf of Mexico, floating in 2,438 meters of water. It is operated by Shell plc and was built at a cost of $3 billion. The deepest operational platform is the Petrobras America Cascade FPSO in the Walker Ridge 249 field in 2,600 meters of water.
Main offshore basins
Notable offshore basins include:
the North Sea
the Gulf of Mexico (offshore Texas, Louisiana, Mississippi, Alabama and Florida)
California (in the Los Angeles Basin and Santa Barbara Channel, part of the Ventura Basin)
the Caspian Sea (notably some major fields offshore Azerbaijan)
the Campos and Santos Basins off the coasts of Brazil
Newfoundland and Nova Scotia (Atlantic Canada)
several fields off West Africa, south of Nigeria, and central Africa, west of Angola
offshore fields in South East Asia and Sakhalin, Russia
major offshore oil fields are located in the Persian Gulf such as Safaniya, Manifa and Marjan which belong to Saudi Arabia and are developed by Saudi Aramco
fields in India (Mumbai High, K G Basin-East Coast Of India, Tapti Field, Gujarat, India)
the Baltic Sea oil and gas fields
the Taranaki Basin in New Zealand
the Kara Sea north of Siberia
the Arctic Ocean off the coasts of Alaska and Canada's Northwest Territories
the offshore fields in the Adriatic Sea
Types
Larger lake- and sea-based offshore platforms and drilling rig for oil.
1) & 2) Conventional fixed platforms (deepest: Shell's Bullwinkle in 1991 at 412 m/1,353 ft GOM)
3) Compliant tower (deepest: ChevronTexaco's Petronius in 1998 at 534 m /1,754 ft GOM)
4) & 5) Vertically moored tension leg and mini-tension leg platform (deepest: ConocoPhillips's Magnolia in 2004 1,425 m/4,674 ft GOM)
6) Spar (deepest: Shell's Perdido in 2010, 2,450 m/8,000 ft GOM)
7) & 8) Semi-submersibles (deepest: Shell's NaKika in 2003, 1920 m/6,300 ft GOM)
9) Floating production, storage, and offloading facility (deepest: 2005, 1,345 m/4,429 ft Brazil)
10) Sub-sea completion and tie-back to host facility (deepest: Shell's Coulomb tie to NaKika 2004, 2,307 m/ 7,570 ft)
(Numbered from left to right; all records from 2005 data)
Fixed platforms
These platforms are built on concrete or steel legs, or both, anchored directly onto the seabed, supporting the deck with space for drilling rigs, production facilities and crew quarters. Such platforms are, by virtue of their immobility, designed for very long term use (for instance the Hibernia platform). Various types of structure are used: steel jacket, concrete caisson, floating steel, and even floating concrete. Steel jackets are structural sections made of tubular steel members, and are usually piled into the seabed.
Concrete caisson structures, pioneered by the Condeep concept, often have in-built oil storage in tanks below the sea surface and these tanks were often used as a flotation capability, allowing them to be built close to shore (Norwegian fjords and Scottish firths are popular because they are sheltered and deep enough) and then floated to their final position where they are sunk to the seabed. Fixed platforms are economically feasible for installation in water depths up to about .
Compliant towers
These platforms consist of slender, flexible towers and a pile foundation supporting a conventional deck for drilling and production operations. Compliant towers are designed to sustain significant lateral deflections and forces, and are typically used in water depths ranging from .
Semi-submersible platform
These platforms have hulls (columns and pontoons) of sufficient buoyancy to cause the structure to float, but of weight sufficient to keep the structure upright. Semi-submersible platforms can be moved from place to place and can be ballasted up or down by altering the amount of flooding in buoyancy tanks. They are generally anchored by combinations of chain, wire rope or polyester rope, or both, during drilling and/or production operations, though they can also be kept in place by the use of dynamic positioning. Semi-submersibles can be used in water depths from .
Jack-up drilling rigs
Jack-up Mobile Drilling Units (or jack-ups), as the name suggests, are rigs that can be jacked up above the sea using legs that can be lowered, much like jacks. These MODUs (Mobile Offshore Drilling Units) are typically used in water depths up to , although some designs can go to depth. They are designed to move from place to place, and then anchor themselves by deploying their legs to the ocean bottom using a rack and pinion gear system on each leg.
Drillships
A drillship is a maritime vessel that has been fitted with drilling apparatus. It is most often used for exploratory drilling of new oil or gas wells in deep water but can also be used for scientific drilling. Early versions were built on a modified tanker hull, but purpose-built designs are used today. Most drillships are outfitted with a dynamic positioning system to maintain position over the well. They can drill in water depths up to .
Floating production systems
The main type of floating production system is the FPSO (floating production, storage, and offloading system). FPSOs consist of large monohull structures, generally (but not always) ship-shaped, equipped with processing facilities. These platforms are moored to a location for extended periods, and do not actually drill for oil or gas. Some variants of these applications, called FSO (floating storage and offloading system) or FSU (floating storage unit), are used exclusively for storage purposes, and host very little process equipment.
The world's first floating liquefied natural gas (FLNG) facility is in production. See the section on particularly large examples below.
Tension-leg platform
TLPs are floating platforms tethered to the seabed in a manner that eliminates most vertical movement of the structure. TLPs are used in water depths up to about . The "conventional" TLP is a 4-column design that looks similar to a semisubmersible. Proprietary versions include the Seastar and MOSES mini TLPs; they are relatively low cost, used in water depths between . Mini TLPs can also be used as utility, satellite or early production platforms for larger deepwater discoveries.
Gravity-based structure
A GBS can either be steel or concrete and is usually anchored directly onto the seabed. Steel GBSs are predominantly used when there is no or limited availability of crane barges to install a conventional fixed offshore platform, for example in the Caspian Sea. There are several steel GBSs in the world today (e.g. offshore Turkmenistan waters (Caspian Sea) and offshore New Zealand). Steel GBSs do not usually provide hydrocarbon storage capability. A GBS is mainly installed by pulling it off the yard, by wet-tow and/or dry-tow, and self-installing by controlled ballasting of the compartments with sea water. To position the GBS during installation, the GBS may be connected to either a transportation barge or any other barge (provided it is large enough to support the GBS) using strand jacks. The jacks are released gradually whilst the GBS is ballasted to ensure that the GBS does not sway too much from the target location.
Spar platforms
Spars are moored to the seabed like TLPs, but whereas a TLP has vertical tension tethers, a spar has more conventional mooring lines. Spars have to-date been designed in three configurations: the "conventional" one-piece cylindrical hull; the "truss spar", in which the midsection is composed of truss elements connecting the upper buoyant hull (called a hard tank) with the bottom soft tank containing permanent ballast; and the "cell spar", which is built from multiple vertical cylinders. The spar has more inherent stability than a TLP since it has a large counterweight at the bottom and does not depend on the mooring to hold it upright. It also has the ability, by adjusting the mooring line tensions (using chain-jacks attached to the mooring lines), to move horizontally and to position itself over wells at some distance from the main platform location. The first production spar was Kerr-McGee's Neptune, anchored in in the Gulf of Mexico; however, spars (such as Brent Spar) were previously used as FSOs.
Eni's Devil's Tower located in of water in the Gulf of Mexico, was the world's deepest spar until 2010. The world's deepest platform as of 2011 was the Perdido spar in the Gulf of Mexico, floating in 2,438 metres of water. It is operated by Royal Dutch Shell and was built at a cost of $3 billion.
The first truss spars were Kerr-McGee's Boomvang and Nansen.
The first (and, as of 2010, only) cell spar is Kerr-McGee's Red Hawk.
Normally unmanned installations (NUI)
These installations, sometimes called toadstools, are small platforms, consisting of little more than a well bay, helipad and emergency shelter. They are designed to be operated remotely under normal conditions, only to be visited occasionally for routine maintenance or well work.
Conductor support systems
These installations, also known as satellite platforms, are small unmanned platforms consisting of little more than a well bay and a small process plant. They are designed to operate in conjunction with a static production platform which is connected to the platform by flow lines or by umbilical cable, or both.
Particularly large examples
The Petronius Platform is a compliant tower in the Gulf of Mexico modeled after the Hess Baldpate platform, which stands above the ocean floor. It is one of the world's tallest structures.
The Hibernia platform in Canada is the world's heaviest offshore platform, located on the Jeanne D'Arc Basin, in the Atlantic Ocean off the coast of Newfoundland. This gravity base structure (GBS), which sits on the ocean floor, is high and has storage capacity for of crude oil in its high caisson. The platform acts as a small concrete island with serrated outer edges designed to withstand the impact of an iceberg. The GBS contains production storage tanks and the remainder of the void space is filled with ballast with the entire structure weighing in at 1.2 million tons.
Royal Dutch Shell has developed the first Floating Liquefied Natural Gas (FLNG) facility, which is situated approximately 200 km off the coast of Western Australia. It is the largest floating offshore facility. It is approximately 488m long and 74m wide with displacement of around 600,000t when fully ballasted.
Maintenance and supply
A typical oil production platform is self-sufficient in energy and water needs, housing electrical generation, water desalinators and all of the equipment necessary to process oil and gas such that it can be either delivered directly onshore by pipeline or to a floating platform or tanker loading facility, or both. Elements in the oil/gas production process include wellhead, production manifold, production separator, glycol process to dry gas, gas compressors, water injection pumps, oil/gas export metering and main oil line pumps.
Larger platforms are assisted by smaller ESVs (emergency support vessels) like the British Iolair that are summoned when something has gone wrong, e.g. when a search and rescue operation is required. During normal operations, PSVs (platform supply vessels) keep the platforms provisioned and supplied, and AHTS vessels can also supply them, as well as tow them to location and serve as standby rescue and firefighting vessels.
Crew
Essential personnel
Not all of the following personnel are present on every platform. On smaller platforms, one worker can perform a number of different jobs. The following also are not names officially recognized in the industry:
OIM (offshore installation manager) who is the ultimate authority during his/her shift and makes the essential decisions regarding the operation of the platform;
Operations Team Leader (OTL);
Offshore Methods Engineer (OME) who defines the installation methodology of the platform;
Offshore Operations Engineer (OOE) who is the senior technical authority on the platform;
PSTL or operations coordinator for managing crew changes;
Dynamic positioning operator, navigation, ship or vessel maneuvering (MODU), station keeping, fire and gas systems operations in the event of incident;
Automation systems specialist, to configure, maintain and troubleshoot the process control systems (PCS), process safety systems, emergency support systems and vessel management systems;
Second mate to meet manning requirements of flag state, operates fast rescue craft, cargo operations, fire team leader;
Third mate to meet manning requirements of flag state, operate fast rescue craft, cargo operations, fire team leader;
Ballast control operator to operate fire and gas systems;
Crane operators to operate the cranes for lifting cargo around the platform and between boats;
Scaffolders to rig up scaffolding for when it is required for workers to work at height;
Coxswains to maintain the lifeboats and manning them if necessary;
Control room operators, especially FPSO or production platforms;
Catering crew, including people tasked with performing essential functions such as cooking, laundry and cleaning the accommodation;
Production techs to run the production plant;
Helicopter pilot(s) living on some platforms that have a helicopter based offshore and transporting workers to other platforms or to shore on crew changes;
Maintenance technicians (instrument, electrical or mechanical).
Fully qualified medic.
Radio operator to operate all radio communications.
Store Keeper, keeping the inventory well supplied
Technician to record the fluid levels in tanks
Incidental personnel
Drill crew will be on board if the installation is performing drilling operations. A drill crew will normally comprise:
Toolpusher
Driller
Roughnecks
Roustabouts
Company man
Mud engineer
Motorman (see Glossary of oilfield jargon)
Derrickhand
Geologist
Welders and Welder Helpers
Well services crew will be on board for well work. The crew will normally comprise:
Well services supervisor
Wireline or coiled tubing operators
Pump operator
Pump hanger and ranger
Drawbacks
Risks
The nature of their operation—extraction of volatile substances sometimes under extreme pressure in a hostile environment—means risk; accidents and tragedies occur regularly. The U.S. Minerals Management Service reported 69 offshore deaths, 1,349 injuries, and 858 fires and explosions on offshore rigs in the Gulf of Mexico from 2001 to 2010. On July 6, 1988, 167 people died when Occidental Petroleum's Piper Alpha offshore production platform, on the Piper field in the UK sector of the North Sea, exploded after a gas leak. The resulting investigation conducted by Lord Cullen and publicized in the first Cullen Report was highly critical of a number of areas, including, but not limited to, management within the company, the design of the structure, and the Permit to Work System. The report was commissioned in 1988, and was delivered in November 1990. The accident greatly accelerated the practice of providing living accommodations on separate platforms, away from those used for extraction.
The offshore can be in itself a hazardous environment. In March 1980, the 'flotel' (floating hotel) platform Alexander L. Kielland capsized in a storm in the North Sea with the loss of 123 lives.
In 2001, Petrobras 36 in Brazil exploded and sank five days later, killing 11 people.
Given the number of grievances and conspiracy theories that involve the oil business, and the importance of gas/oil platforms to the economy, platforms in the United States are believed to be potential terrorist targets. Agencies and military units responsible for maritime counter-terrorism in the US (Coast Guard, Navy SEALs, Marine Recon) often train for platform raids.
On April 21, 2010, the Deepwater Horizon platform, 52 miles off-shore of Venice, Louisiana, (property of Transocean and leased to BP) exploded, killing 11 people, and sank two days later. The resulting undersea gusher, conservatively estimated to exceed as of early June 2010, became the worst oil spill in US history, eclipsing the Exxon Valdez oil spill.
Ecological effects
In British waters, the cost of removing all platform rig structures entirely was estimated in 2013 at £30 billion.
Aquatic organisms invariably attach themselves to the undersea portions of oil platforms, turning them into artificial reefs. In the Gulf of Mexico and offshore California, the waters around oil platforms are popular destinations for sports and commercial fishermen, because of the greater numbers of fish near the platforms. The United States and Brunei have active Rigs-to-Reefs programs, in which former oil platforms are left in the sea, either in place or towed to new locations, as permanent artificial reefs. In the US Gulf of Mexico, as of September 2012, 420 former oil platforms, about 10 percent of decommissioned platforms, have been converted to permanent reefs.
On the US Pacific coast, marine biologist Milton Love has proposed that oil platforms off California be retained as artificial reefs, instead of being dismantled (at great cost), because he has found them to be havens for many of the species of fish which are otherwise declining in the region, in the course of 11 years of research. Love is funded mainly by government agencies, but also in small part by the California Artificial Reef Enhancement Program. Divers have been used to assess the fish populations surrounding the platforms.
Effects on the environment
Offshore oil production involves environmental risks, most notably oil spills from oil tankers or pipelines transporting oil from the platform to onshore facilities, and from leaks and accidents on the platform. Produced water is also generated, which is water brought to the surface along with the oil and gas; it is usually highly saline and may include dissolved or unseparated hydrocarbons.
Offshore rigs are shut down during hurricanes. In the Gulf of Mexico, hurricanes are increasing along with the growing number of oil platforms, which heat the surrounding air with their methane emissions; it is estimated that U.S. Gulf of Mexico oil and gas facilities emit approximately 500,000 tons of methane each year, corresponding to a loss of 2.9 percent of the produced gas. The increasing number of oil rigs also increases the movement of oil tankers, whose emissions further warm the water in the zone, and warm waters are a key factor in the formation of hurricanes.
To reduce the amount of carbon emissions otherwise released into the atmosphere, methane pyrolysis of natural gas pumped up by oil platforms is a possible alternative to flaring for consideration. Methane pyrolysis produces non-polluting hydrogen in high volume from this natural gas at low cost. This process operates at around 1000 °C and removes carbon in a solid form from the methane, producing hydrogen. The carbon can then be pumped underground and is not released into the atmosphere.
It is being evaluated in research laboratories such as the Karlsruhe Liquid-metal Laboratory (KALLA) and by the chemical engineering team at the University of California, Santa Barbara.
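For reference, the overall reaction being exploited is the endothermic decomposition of methane into solid carbon and hydrogen (the enthalpy figure below is the standard textbook value, quoted here only for illustration):

$$\mathrm{CH_4(g)} \;\longrightarrow\; \mathrm{C(s)} + 2\,\mathrm{H_2(g)}, \qquad \Delta H^{\circ}_{298} \approx +75\ \mathrm{kJ\,mol^{-1}},$$

so heat must be supplied to drive the reaction, consistent with the high operating temperature mentioned above.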
Repurposing
If not decommissioned, old platforms can be repurposed to pump captured carbon dioxide into rocks below the seabed. Others have been converted to launch rockets into space, and more are being redesigned for use with heavy-lift launch vehicles.
In Saudi Arabia, there are plans to repurpose decommissioned oil rigs into a theme park.
Challenges
Offshore oil and gas production is more challenging than land-based installations due to the remote and harsher environment. Much of the innovation in the offshore petroleum sector concerns overcoming these challenges, including the need to provide very large production facilities. Production and drilling facilities may be very large and a large investment, such as the Troll A platform standing on a depth of 300 meters.
Another type of offshore platform may float with a mooring system to maintain it on location. While a floating system may be lower cost in deeper waters than a fixed platform, the dynamic nature of the platforms introduces many challenges for the drilling and production facilities.
The ocean can add several thousand meters or more to the fluid column. The addition increases the equivalent circulating density and downhole pressures in drilling wells, as well as the energy needed to lift produced fluids for separation on the platform.
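As a rough illustration (the seawater density and depth below are assumed round numbers, not values from any particular field), the extra hydrostatic head from the seawater column alone is

$$\Delta P = \rho g h \approx 1025\ \mathrm{kg\,m^{-3}} \times 9.81\ \mathrm{m\,s^{-2}} \times 2000\ \mathrm{m} \approx 2.0 \times 10^{7}\ \mathrm{Pa} \approx 20\ \mathrm{MPa}\ (\approx 2900\ \mathrm{psi}),$$

which is added on top of the pressures in the well itself and must be overcome when lifting produced fluids to the platform.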
The trend today is to conduct more of the production operations subsea, by separating water from oil and re-injecting it rather than pumping it up to a platform, or by flowing to onshore, with no installations visible above the sea. Subsea installations help to exploit resources at progressively deeper waters—locations that had been inaccessible—and overcome challenges posed by sea ice such as in the Barents Sea. One such challenge in shallower environments is seabed gouging by drifting ice features (means of protecting offshore installations against ice action includes burial in the seabed).
Offshore manned facilities also present logistics and human resources challenges. An offshore oil platform is a small community in itself with cafeteria, sleeping quarters, management and other support functions. In the North Sea, staff members are transported by helicopter for a two-week shift. They usually receive higher salaries than onshore workers do. Supplies and waste are transported by ship, and the supply deliveries need to be carefully planned because storage space on the platform is limited. Today, much effort goes into relocating as many of the personnel as possible onshore, where management and technical experts are in touch with the platform by video conferencing. An onshore job is also more attractive for the aging workforce in the petroleum industry, at least in the western world. These efforts among others are contained in the established term integrated operations. The increased use of subsea facilities helps achieve the objective of keeping more workers onshore. Subsea facilities are also easier to expand, with new separators or different modules for different oil types, and are not limited by the fixed floor space of an above-water installation.
Deepest platforms
The world's deepest oil platform is the floating Perdido, which is a spar platform in the Gulf of Mexico in a water depth of .
Non-floating compliant towers and fixed platforms, by water depth:
Petronius Platform,
Baldpate Platform,
Troll A Platform,
Bullwinkle Platform,
Pompano Platform,
Benguela-Belize Lobito-Tomboco Platform,
Gullfaks C Platform,
Tombua Landana Platform,
Harmony Platform,
See also
List of tallest oil platforms
Accommodation platform
Chukchi Cap
Deep sea mining
Deepwater drilling
Drillship
North Sea oil
Offshore geotechnical engineering
Offshore oil and gas in the United States
Oil drilling
Protocol for the Suppression of Unlawful Acts against the Safety of Fixed Platforms Located on the Continental Shelf
SAR201
Shallow water drilling
Submarine pipeline
TEMPSC
Texas Towers
References
External links
Oil Rig Disasters Listing of oil rig accidents
Oil Rig Photos Collection of pictures of drilling rigs and production platforms
An independent review of offshore platforms in the North Sea
Overview of Conventional Platforms Pictorial treatment on the installation of platforms which extend from the seabed to the ocean surface
Offshore engineering
Petroleum production
Drilling technology
Natural gas technology
Structural engineering | Oil platform | [
"Chemistry",
"Engineering"
] | 6,266 | [
"Oil platforms",
"Structural engineering",
"Offshore engineering",
"Petroleum technology",
"Construction",
"Civil engineering",
"Natural gas technology"
] |
7,195,427 | https://en.wikipedia.org/wiki/Carbon%20detonation | Carbon detonation or carbon deflagration is the violent reignition of thermonuclear fusion in a white dwarf star that was previously slowly cooling. It involves a runaway thermonuclear process which spreads through the white dwarf in a matter of seconds, producing a type Ia supernova which releases an immense amount of energy as the star is blown apart. The carbon detonation/deflagration process leads to a supernova by a different route than the better known type II (core-collapse) supernova (the type II is caused by the cataclysmic explosion of the outer layers of a massive star as its core implodes).
White dwarf density and mass increase
A white dwarf is the remnant of a small to medium size star (the Sun is an example of these). At the end of its life, the star has burned its hydrogen and helium fuel, and thermonuclear fusion processes cease. The star does not have enough mass either to burn much heavier elements or to implode into a neutron star or type II supernova under the force of its own gravity, as a larger star can, so it gradually shrinks and becomes very dense as it cools, glowing white and then red, for a period many times longer than the present age of the Universe.
Occasionally, a white dwarf gains mass from another source – for example, a binary star companion that is close enough for the dwarf star to siphon sufficient amounts of matter onto itself; or from a collision with other stars, the siphoned matter having been expelled during the process of the companion's own late stage stellar evolution. If the white dwarf gains enough matter, its internal pressure and temperature will rise enough for carbon to begin fusing in its core. Carbon detonation generally occurs at the point when the accreted matter pushes the white dwarf's mass close to the Chandrasekhar limit of roughly 1.4 solar masses, the mass at which gravity can overcome the electron degeneracy pressure that prevents it from collapsing during its lifetime. This also happens when two white dwarfs merge if the combined mass is over the Chandrasekhar limit, resulting in a type Ia supernova.
A main sequence star supported by thermal pressure would expand and cool which automatically counterbalances an increase in thermal energy. However, degeneracy pressure is independent of temperature; the white dwarf is unable to regulate the fusion process in the manner of normal stars, so it is vulnerable to a runaway fusion reaction.
Fusion and pressure
In the case of a white dwarf, the restarted fusion reactions release heat, but the outward pressure that exists in the star and supports it against further collapse is initially due almost entirely to degeneracy pressure, not fusion processes or heat. Therefore, even when fusion recommences, the outward pressure that is key to the star's thermal balance does not increase much. One result is that the star does not expand much to balance its fusion and heat processes with gravity and electron pressure, as it did when burning hydrogen (until too late). This increase of heat production without a means of cooling by expansion raises the internal temperature dramatically, and therefore the rate of fusion also increases extremely fast, a form of positive feedback known as thermal runaway.
A 2004 analysis of such a process states that:
Supercritical event
The flame accelerates dramatically, in part due to the Rayleigh–Taylor instability and interactions with turbulence. The resumption of fusion spreads outward in a series of uneven, expanding "bubbles" in accordance with Rayleigh–Taylor instability. Within the fusion area, the increase in heat with unchanged volume results in an exponentially rapid increase in the rate of fusion – a sort of supercritical event as thermal pressure increases boundlessly. As hydrostatic equilibrium is not possible in this situation, a "thermonuclear flame" is triggered and an explosive eruption through the dwarf star's surface that completely disrupts it, seen as a type Ia supernova.
Regardless of the exact details of this nuclear fusion, it is generally accepted that a substantial fraction of the carbon and oxygen in the white dwarf is converted into heavier elements within a period of only a few seconds, raising the internal temperature to billions of degrees. This energy release from thermonuclear fusion (1–2 × 10^44 J) is more than enough to unbind the star; that is, the individual particles making up the white dwarf gain enough kinetic energy to fly apart from each other. The star explodes violently and releases a shock wave in which matter is typically ejected at speeds on the order of 5,000–20,000 km/s, roughly 6% of the speed of light. The energy released in the explosion also causes an extreme increase in luminosity. The typical visual absolute magnitude of type Ia supernovae is Mv = −19.3 (about 5 billion times brighter than the Sun), with little variation.
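The quoted brightness can be cross-checked with the standard magnitude–luminosity relation (a back-of-the-envelope illustration, taking the Sun's absolute visual magnitude as about +4.8):

$$\frac{L_{\mathrm{SN}}}{L_{\odot}} = 10^{\left(M_{V,\odot} - M_{V,\mathrm{SN}}\right)/2.5} \approx 10^{(4.8 + 19.3)/2.5} = 10^{9.64} \approx 4 \times 10^{9},$$

i.e. a few billion times the Sun's visual luminosity, consistent with the figure above.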
This process, of a volume supported by electron degeneracy pressure instead of thermal pressure gradually reaching conditions capable of igniting runaway fusion, is also found in a less dramatic form in a helium flash in the core of a sufficiently massive red giant star.
See also
Helium flash, a similar (although less cataclysmic) sudden initiation of fusion
Nuclear fusion
References
External links
JINA: Type Ia Supernova Flame Models
A Computer Simulation of Carbon Detonation/Deflagration
Stellar evolution
Stellar phenomena
White dwarfs | Carbon detonation | [
"Physics"
] | 1,104 | [
"Physical phenomena",
"Stellar phenomena",
"Astrophysics",
"Stellar evolution"
] |
7,195,699 | https://en.wikipedia.org/wiki/Driven%20right%20leg%20circuit | A Driven Right Leg circuit or DRL circuit, also known as Right Leg Driving technique, is an electric circuit that is often added to biological signal amplifiers to reduce common-mode interference. Biological signal amplifiers such as ECG (electrocardiogram) EEG (electroencephalogram) or EMG circuits measure very small electrical signals emitted by the body, often as small as several micro-volts (millionths of a volt). However, the patient's body can also act as an antenna which picks up electromagnetic interference, especially 50/60 Hz noise from electrical power lines. This interference can obscure the biological signals, making them very hard to measure. Right leg driver circuitry is used to eliminate interference noise by actively cancelling the interference.
Other methods of noise control include:
Faraday cage
Twisting Wires
High Gain Instrumentation Amplifier
Filtering
Further reading
J.G. Webster, "Medical Instrumentation", 3rd ed, New York: John Wiley & Sons, 1998, .
B. B. Winter and J. G. Webster, “Driven-right-leg circuit design,” IEEE Trans. Biomed. Eng., vol. BME-30, no. 1, pp. 62–66, Jan. 1983.
"Improving Common-Mode Rejection Using the Right-Leg Drive Amplifier" by Texas Instruments.
Electronic design
Analog circuits | Driven right leg circuit | [
"Engineering"
] | 273 | [
"Electronic design",
"Analog circuits",
"Electronic engineering",
"Design"
] |
7,202,208 | https://en.wikipedia.org/wiki/Canadian%20Electrical%20Code | The Canadian Electrical Code, CE Code, or CSA C22.1 is a standard published by the Canadian Standards Association pertaining to the installation and maintenance of electrical equipment in Canada.
The first edition of the Canadian Electrical Code was published in 1927. The current (26th) edition was published in March of 2024. Code revisions are currently scheduled on a three-year cycle. The Code is produced by a large body of volunteers from industry and various levels of government. The Code uses a prescriptive model, outlining in detail the wiring methods that are acceptable. In the current edition, the Code recognizes that other methods can be used to assure safe installations, but these methods must be acceptable to the authority enforcing the Code in a particular jurisdiction.
The Canadian Electrical Code serves as the basis for wiring regulations across Canada. Generally, legislation adopts the Code by reference, usually with a schedule of changes that amend the Code for local conditions. These amendments may be administrative in nature or may consist of technical content particular to the region. Since the Code is a copyrighted document produced by a private body, it may not be distributed without copyright permission from the Canadian Standards Association.
The Code is divided into sections, each section is labeled with an even number and a title. Sections 0, 2, 4, 6, 8, 10, 12, 14, 16, and 26 include rules that apply to installations in general; the remaining sections are supplementary and deal with installation methods in specific locations or situations. Some examples of general sections include: grounding and bonding, protection and control, conductors, and definitions. Some examples of supplementary sections include: wet locations, hazardous locations, patient care areas, emergency systems, and temporary installations. When interpreting the requirements for a particular installation, rules found in supplementary sections of the Code amend or supersede the rules in general sections of the Code.
The Canadian Electrical Code does not apply to vehicles, systems operated by an electrical or communications utility, railway systems, aircraft or ships; since these installations are already regulated by separate documents.
The Canadian Electrical Code is published in several parts: Part I is the safety standard for electrical installations. Part II is a collection of individual standards for the evaluation of electrical equipment or installations (Part I requires that electrical products be approved to a Part II standard). Part III is the safety standard for power distribution and transmission circuits. Part IV is a set of objective-based standards that may be used in certain industrial or institutional installations. Part VI establishes standards for the inspection of electrical installations in residential buildings.
Technical requirements of the Canadian Electrical Code are very similar to those of the U.S. National Electrical Code. Specific differences still exist and installations acceptable under one Code may not entirely comply with the other. Correlation of technical requirements between the two Codes is ongoing.
Several CE Code Part II electrical equipment standards have been harmonized with standards in the USA and Mexico through CANENA, the Council for the Harmonization of Electromechanical Standards of the Nations of the Americas, which is working to harmonize electrical codes in the western hemisphere.
Objective based code
In response to industry demand, CSA has developed Part IV of the Canadian Electrical Code, consisting of two standards CSA C22.4 No. 1 "Objective-based industrial electrical code" and CSA C22.4 No. 2 "Objective-based industrial electrical code - Safety management system requirements". These standards are intended for use only by authorized industrial users and would not apply, for example, to residential construction. These standards do not prescribe specific solutions for every case but instead give guidance to the user on achievement of the safety objectives of IEC 60364. Since it is less prescriptive, the OBIEC allows industrial users to use new technology not yet represented in the CE Code Part II. Use of this OBIEC is restricted to industrial and institutional users who have a safety management program in place and the engineering resources to implement the regulations. It is intended that users of the OBIEC will maintain safety while using methods that will reduce the installation cost of large industrial plants, for example, in the petrochemical business.
References
See also
Electrical code
Electrical safety
Electrical wiring | Canadian Electrical Code | [
"Physics",
"Engineering"
] | 836 | [
"Electrical systems",
"Building engineering",
"Physical systems",
"Electrical engineering",
"Electrical wiring"
] |
7,203,729 | https://en.wikipedia.org/wiki/D%27Alembert%27s%20equation | In mathematics, d'Alembert's equation, sometimes also known as Lagrange's equation, is a first order nonlinear ordinary differential equation, named after the French mathematician Jean le Rond d'Alembert. The equation reads as
where . After differentiating once, and rearranging we have
The above equation is linear. When , d'Alembert's equation is reduced to Clairaut's equation.
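As an illustrative sketch of the usual solution route (added here, using the same notation as above): differentiating $y = x f(p) + g(p)$ with respect to $x$ gives $p = f(p) + \left(x f'(p) + g'(p)\right)\frac{dp}{dx}$, and regarding $x$ as a function of $p$ turns this into the linear equation

$$\frac{dx}{dp} - \frac{f'(p)}{p - f(p)}\,x = \frac{g'(p)}{p - f(p)},$$

which is solved with the integrating factor $e^{-\mu(p)}$, where $\mu(p) = \int \frac{f'(p)}{p - f(p)}\,dp$, giving

$$x(p) = e^{\mu(p)}\left(C + \int \frac{g'(p)}{p - f(p)}\,e^{-\mu(p)}\,dp\right),$$

after which $y(p) = x(p)\,f(p) + g(p)$ describes the solution curves parametrically in terms of $p$.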
References
Eponymous equations of physics
Mathematical physics
Ordinary differential equations | D'Alembert's equation | [
"Physics",
"Mathematics"
] | 102 | [
"Mathematical analysis",
"Equations of physics",
"Mathematical analysis stubs",
"Applied mathematics",
"Theoretical physics",
"Eponymous equations of physics",
"Mathematical physics"
] |
4,141,476 | https://en.wikipedia.org/wiki/Greenberger%E2%80%93Horne%E2%80%93Zeilinger%20state | In physics, in the area of quantum information theory, a Greenberger–Horne–Zeilinger (GHZ) state is an entangled quantum state that involves at least three subsystems (particle states, qubits, or qudits). Named for the three authors that first described this state, the GHZ state predicts outcomes from experiments that directly contradict predictions by every classical local hidden-variable theory. The state has applications in quantum computing.
History
The four-particle version was first studied by Daniel Greenberger, Michael Horne and Anton Zeilinger in 1989. The following year Abner Shimony joined them, and they published a three-particle version based on suggestions
by N. David Mermin. Experimental measurements on such states contradict intuitive notions of locality and causality. GHZ states for large numbers of qubits are theorized to give enhanced performance for metrology compared to other qubit superposition states.
Definition
The GHZ state is an entangled quantum state for 3 qubits and it can be written
$$|\mathrm{GHZ}\rangle = \frac{|000\rangle + |111\rangle}{\sqrt{2}},$$
where the $|0\rangle$ or $|1\rangle$ values of each qubit correspond to any two physical states. For example the two states may correspond to spin-down and spin-up along some physical axis. In physics applications the state may be written
$$|\mathrm{GHZ}\rangle = \frac{\left|\tfrac12,\tfrac12,\tfrac12\right\rangle + \left|{-\tfrac12},{-\tfrac12},{-\tfrac12}\right\rangle}{\sqrt{2}},$$
where the numbering of the states represents the spin eigenvalues.
Another example of a GHZ state is three photons in an entangled state, with the photons being in a superposition of being all horizontally polarized (HHH) or all vertically polarized (VVV), with respect to some coordinate system. The GHZ state can be written in bra–ket notation as
$$|\mathrm{GHZ}\rangle = \frac{|HHH\rangle + |VVV\rangle}{\sqrt{2}}.$$
Prior to any measurements being made, the polarizations of the photons are indeterminate. If a measurement is made on one of the photons using a two-channel polarizer aligned with the axes of the coordinate system, each orientation (horizontal or vertical) is observed with 50% probability. However, the outcomes of the three measurements are perfectly correlated: all three photons are always found with the same polarization.
Generalization
The generalized GHZ state is an entangled quantum state of $M$ subsystems. If each system has dimension $d$, i.e., the local Hilbert space is isomorphic to $\mathbb{C}^d$, then the total Hilbert space of an $M$-partite system is $\mathcal{H}_{\mathrm{tot}} = (\mathbb{C}^d)^{\otimes M}$. This GHZ state is also called an $M$-partite qudit GHZ state.
Its formula as a tensor product is
$$|\mathrm{GHZ}\rangle = \frac{1}{\sqrt{d}} \sum_{i=0}^{d-1} |i\rangle \otimes \cdots \otimes |i\rangle = \frac{1}{\sqrt{d}} \sum_{i=0}^{d-1} |i\rangle^{\otimes M}.$$
In the case of each of the subsystems being two-dimensional, that is for a collection of M qubits, it reads
$$|\mathrm{GHZ}\rangle = \frac{|0\rangle^{\otimes M} + |1\rangle^{\otimes M}}{\sqrt{2}}.$$
The results of actual experiments agree with the predictions of quantum mechanics, not those of local realism.
GHZ experiment
In the language of quantum computation, the polarization state of each photon is a qubit, the basis of which can be chosen to be
$$|0\rangle \equiv |H\rangle, \qquad |1\rangle \equiv |V\rangle.$$
With appropriately chosen phase factors for $|H\rangle$ and $|V\rangle$, both types of measurements used in the experiment become Pauli measurements, with the two possible results represented as +1 and −1 respectively:
The 45° linear polarizer implements a $\sigma_x$ Pauli measurement, distinguishing between the eigenstates
$$|+\rangle = \frac{|H\rangle + |V\rangle}{\sqrt{2}}, \qquad |-\rangle = \frac{|H\rangle - |V\rangle}{\sqrt{2}}.$$
The circular polarizer implements a $\sigma_y$ Pauli measurement, distinguishing between the eigenstates
$$|R\rangle = \frac{|H\rangle + i|V\rangle}{\sqrt{2}}, \qquad |L\rangle = \frac{|H\rangle - i|V\rangle}{\sqrt{2}}.$$
A combination of those measurements on each of the three qubits can be regarded as a destructive multi-qubit Pauli measurement, the result of which is the product of the single-qubit Pauli measurements. For example, the combination "circular polarizer on photons 1 and 2, 45° linear polarizer on photon 3" corresponds to a $\sigma_y \otimes \sigma_y \otimes \sigma_x$ measurement, and the four possible result combinations (RL+, LR+, RR−, LL−) are exactly the ones corresponding to an overall result of −1.
The quantum mechanical predictions of the GHZ experiment can then be summarized as
$$\sigma_x \otimes \sigma_y \otimes \sigma_y \,|\mathrm{GHZ}\rangle = \sigma_y \otimes \sigma_x \otimes \sigma_y \,|\mathrm{GHZ}\rangle = \sigma_y \otimes \sigma_y \otimes \sigma_x \,|\mathrm{GHZ}\rangle = -|\mathrm{GHZ}\rangle, \qquad \sigma_x \otimes \sigma_x \otimes \sigma_x \,|\mathrm{GHZ}\rangle = +|\mathrm{GHZ}\rangle,$$
which is consistent in quantum mechanics because all these multi-qubit Paulis commute with each other, and
$$(\sigma_x \otimes \sigma_y \otimes \sigma_y)(\sigma_y \otimes \sigma_x \otimes \sigma_y)(\sigma_y \otimes \sigma_y \otimes \sigma_x) = -\,\sigma_x \otimes \sigma_x \otimes \sigma_x$$
due to the anticommutativity between $\sigma_x$ and $\sigma_y$.
These results lead to a contradiction in any local hidden variable theory, where each measurement must have definite (classical) values $m_x^{(j)}, m_y^{(j)} = \pm 1$ determined by hidden variables, because the product
$$\left(m_x^{(1)} m_y^{(2)} m_y^{(3)}\right)\left(m_y^{(1)} m_x^{(2)} m_y^{(3)}\right)\left(m_y^{(1)} m_y^{(2)} m_x^{(3)}\right)\left(m_x^{(1)} m_x^{(2)} m_x^{(3)}\right),$$
in which every single-photon value appears squared, must equal +1, not the −1 obtained by multiplying the four quantum mechanical predictions above.
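These algebraic predictions can be checked numerically; the short NumPy sketch below (an illustration added here, not part of the cited experiments) builds the three-qubit GHZ state and evaluates the four Pauli products, reproducing the (−1, −1, −1, +1) pattern that admits no local hidden-variable assignment.

```python
# Numerical check of the GHZ Pauli-product predictions quoted above.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)        # Pauli sigma_x
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)      # Pauli sigma_y

def kron3(a, b, c):
    """Tensor product of three single-qubit operators."""
    return np.kron(np.kron(a, b), c)

ghz = np.zeros(8, dtype=complex)                      # (|000> + |111>)/sqrt(2)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

for ops, label in [((X, Y, Y), "XYY"), ((Y, X, Y), "YXY"),
                   ((Y, Y, X), "YYX"), ((X, X, X), "XXX")]:
    expectation = np.real(ghz.conj() @ kron3(*ops) @ ghz)
    print(label, round(float(expectation), 6))        # XYY, YXY, YYX: -1.0; XXX: +1.0
```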
Properties
There is no standard measure of multi-partite entanglement because different, not mutually convertible, types of multi-partite entanglement exist. Nonetheless, many measures define the GHZ state to be a maximally entangled state.
Another important property of the GHZ state is that taking the partial trace over one of the three systems yields
$$\operatorname{Tr}_3\!\left[\,|\mathrm{GHZ}\rangle\langle \mathrm{GHZ}|\,\right] = \frac{|00\rangle\langle 00| + |11\rangle\langle 11|}{2},$$
which is an unentangled mixed state. It has certain two-particle (qubit) correlations, but these are of a classical nature. On the other hand, if we were to measure one of the subsystems in such a way that the measurement distinguishes between the states 0 and 1, we will leave behind either $|00\rangle$ or $|11\rangle$, which are unentangled pure states. This is unlike the W state, which leaves bipartite entanglements even when we measure one of its subsystems.
A pure state $|\psi\rangle$ of $N$ parties is called biseparable, if one can find a partition of the parties in two nonempty disjoint subsets $A$ and $B$ with $A \cup B = \{1, \dots, N\}$ such that $|\psi\rangle = |\phi\rangle_A \otimes |\gamma\rangle_B$, i.e. $|\psi\rangle$ is a product state with respect to the partition $A|B$. The GHZ state is non-biseparable and is the representative of one of the two non-biseparable classes of 3-qubit states which cannot be transformed (not even probabilistically) into each other by local quantum operations, the other being the W state, $|\mathrm{W}\rangle = \tfrac{1}{\sqrt{3}}\left(|001\rangle + |010\rangle + |100\rangle\right)$.
Thus $|\mathrm{GHZ}\rangle$ and $|\mathrm{W}\rangle$ represent two very different kinds of entanglement for three or more particles.
The W state is, in a certain sense "less entangled" than the GHZ state; however, that entanglement is, in a sense, more robust against single-particle measurements, in that, for an N-qubit W state, an entangled (N − 1)-qubit state remains after a single-particle measurement. By contrast, certain measurements on the GHZ state collapse it into a mixture or a pure state.
Experiments on the GHZ state lead to striking non-classical correlations (1989). Particles prepared in this state lead to a version of Bell's theorem, which shows the internal inconsistency of the notion of elements-of-reality introduced in the famous Einstein–Podolsky–Rosen article. The first laboratory observation of GHZ correlations was by the group of Anton Zeilinger (1998), who was awarded a share of the 2022 Nobel Prize in physics for this work. Many more accurate observations followed. The correlations can be utilized in some quantum information tasks. These include multipartner quantum cryptography (1998) and communication complexity tasks (1997, 2004).
Pairwise entanglement
Although a measurement of the third particle of the GHZ state that distinguishes the two states results in an unentangled pair, a measurement along an orthogonal direction can leave behind a maximally entangled Bell state. This is illustrated below.
The 3-qubit GHZ state can be written as
$$|\mathrm{GHZ}\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle \otimes |0\rangle + |11\rangle \otimes |1\rangle\right) = \frac{1}{2}\left(|00\rangle \otimes \left(|+\rangle + |-\rangle\right) + |11\rangle \otimes \left(|+\rangle - |-\rangle\right)\right),$$
where the third particle is written as a superposition in the X basis (as opposed to the Z basis) as $|0\rangle = \tfrac{1}{\sqrt{2}}\left(|+\rangle + |-\rangle\right)$ and $|1\rangle = \tfrac{1}{\sqrt{2}}\left(|+\rangle - |-\rangle\right)$.
A measurement of the GHZ state along the X basis for the third particle then yields either $\tfrac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)$, if $|+\rangle$ was measured, or $\tfrac{1}{\sqrt{2}}\left(|00\rangle - |11\rangle\right)$, if $|-\rangle$ was measured. In the latter case, the phase can be rotated by applying a Z quantum gate to give $\tfrac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right)$, while in the former case, no additional transformations are applied. In either case, the result of the operations is a maximally entangled Bell state.
This example illustrates that the effect of a measurement on the GHZ state is more subtle than it first appears: a measurement along an orthogonal direction, followed by a quantum transform that depends on the measurement outcome, can leave behind a maximally entangled state.
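A short NumPy sketch (added here as an illustration) reproduces this: projecting the third qubit onto |+⟩ or |−⟩ and, in the |−⟩ case, applying a Z gate to one of the remaining qubits leaves the Bell state (|00⟩ + |11⟩)/√2.

```python
# X-basis measurement of the third GHZ qubit, followed by an outcome-dependent
# Z correction, leaves a maximally entangled Bell pair on qubits 1 and 2.
import numpy as np

ghz = np.zeros(8, dtype=complex)
ghz[0b000] = ghz[0b111] = 1 / np.sqrt(2)            # (|000> + |111>)/sqrt(2)
ghz = ghz.reshape(2, 2, 2)                          # axes: qubit 1, qubit 2, qubit 3

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)
bell = np.zeros(4, dtype=complex)
bell[0b00] = bell[0b11] = 1 / np.sqrt(2)            # (|00> + |11>)/sqrt(2)

Z = np.diag([1, -1]).astype(complex)
for outcome, vec in [("+", plus), ("-", minus)]:
    post = np.tensordot(ghz, vec.conj(), axes=([2], [0])).reshape(4)
    post /= np.linalg.norm(post)                    # renormalize after projection
    if outcome == "-":
        post = np.kron(np.eye(2), Z) @ post         # Z correction on qubit 2
    print(outcome, np.allclose(post, bell))         # True for both outcomes
```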
Applications
GHZ states are used in several protocols in quantum communication and cryptography, for example, in secret sharing or in the quantum Byzantine agreement.
See also
Bell's theorem
Local hidden-variable theory
NOON state
Quantum pseudo-telepathy
Dicke state
References
Quantum information theory
Quantum states | Greenberger–Horne–Zeilinger state | [
"Physics"
] | 1,621 | [
"Quantum states",
"Quantum mechanics"
] |
4,142,023 | https://en.wikipedia.org/wiki/Bongkrek%20acid | Bongkrek acid (also known as bongkrekic acid) is a respiratory toxin produced in fermented coconut or corn contaminated by the bacterium Burkholderia gladioli pathovar cocovenenans. It is a highly toxic, heat-stable, colorless, odorless, and highly unsaturated tricarboxylic acid that inhibits the ADP/ATP translocase, also called the mitochondrial ADP/ATP carrier, preventing ATP from leaving the mitochondria to provide metabolic energy to the rest of the cell. Bongkrek acid, when consumed through contaminated foods, mainly targets the liver, brain, and kidneys along with symptoms that include vomiting, diarrhea, urinary retention, abdominal pain, and excessive sweating. Most of the outbreaks are found in Indonesia and China where fermented coconut and corn-based foods are consumed.
Discovery and history
In 1895, there was a food-poisoning outbreak in Java, Indonesia, caused by the consumption of a traditional Indonesian food called tempe bongkrek. At this time, tempe bongkrek served as a main source of protein in Java because it was inexpensive. Tempe bongkrek is made from the coconut-meat by-product left over from coconut milk production, which is pressed into a cake and fermented with the mold Rhizopus oligosporus. The first outbreak of bongkrek poisoning from tempe bongkrek was recorded by Dutch researchers, but no further research into the cause of the poisoning was conducted in 1895. During the 1930s, the Indonesian economy went through a depression, which led some people to make tempe bongkrek themselves instead of buying it from well-trained producers. As a result, poisonings occurred frequently, reaching 10 to 12 a year. The Dutch scientists W. K. Mertens and A. G. van Veen of the Eijkman Institute in Jakarta began searching for the cause of the poisoning in the early 1930s. They identified the source as a bacterium, Burkholderia cocovenenans (formerly known as Pseudomonas cocovenenans), which produces the poisonous substance bongkrek acid. B. cocovenenans is commonly found in plants and soil and can contaminate coconut- and corn-based substrates, leading to the synthesis of bongkrek acid during the fermentation of such foods. Since 1975, consumption of contaminated tempe bongkrek has caused more than 3,000 cases of bongkrek acid poisoning. In Indonesia, the overall reported mortality rate has been about 60%. Due to the severity of the situation, the production of tempe bongkrek has been banned since 1988.
Synthesis
There have been multiple attempts to synthesize bongkrek acid using different numbers of fragments since the first total synthesis of the acid by E. J. Corey in 1984. One notable approach was reported by Shindo's group at Kyushu University in 2009. Unlike other attempts, such as the one from Lev's group, Shindo's group assembled bongkrek acid from three fragments.
Fragments 1, 2, and 3 were synthesized individually. Fragments 2 and 3 were first coupled by Julia olefination in the presence of KHMDS. The resulting intermediate, A, was then coupled with fragment 1 by Suzuki coupling. From the resulting intermediate B, bongkrek acid was finally obtained by oxidation of the primary alcohol with Jones reagent and acid deprotection of the methoxymethyl ester. The first total synthesis of bongkrek acid by E. J. Corey required 32 steps; Shindo reduced this to a total of 18 steps by efficient use of Julia olefination and Suzuki coupling, while also improving the yield by 6.4%.
Mechanism of action
The adenine nucleotide translocator (ANT) exports ATP from the mitochondria to the cytosol in exchange for cytosolic ADP. Bongkrek acid interrupts this transport across the inner mitochondrial membrane by inhibiting ANT, which also forms the inner-membrane channel of the mitochondrial permeability transition pore (MPTP). Bongkrek acid is permeable through this membrane and binds to the surface of ANT, inhibiting its translocation activity. On binding, the acid forms hydrogen bonds with ANT residues, mainly through the oxygens of its carboxylic acid groups; the most prominent contribution comes from the interaction with the side chain of Arg-197. Another prominent contribution is the electrostatic interaction (a salt bridge) between the acid and Lys-30 of ANT. Together, the hydrogen bonds and the salt bridge anchor bongkrek acid in the center of the ANT active site, inhibiting the action of the translocase.
Mitochondrial synthesis of ATP requires ADP transport from the cytosol into the mitochondrial matrix through ANT, so the carrier plays a critical role in supplying energy to the cell. ADP/ATP exchange depends on the transition between two distinct conformational states of ANT: the cytosolic state (c-state) and the matrix state (m-state). In the c-state, the active site of ANT faces the cytosol, where it binds cytosolic ADP; in the m-state, the active site faces the mitochondrial matrix, where it releases the ADP and binds newly synthesized ATP. Binding of bongkrek acid causes a conformational change that locks ANT in the m-state. The structure of the bongkrek acid–ANT complex shows six transmembrane alpha helices covering the active site, preventing the binding of adenine nucleotides. ANT therefore cannot receive ADP from the cytosol, ultimately preventing the synthesis of ATP.
Symptoms of poisoning and treatments
After consumption of corn- or coconut-based foods contaminated with bongkrek acid, the latency period is expected to be between 1 and 10 hours. The symptoms of bongkrek acid poisoning are similar to those of other mitochondrial toxins. Common symptoms are dizziness, somnolence, excessive sweating, palpitations, abdominal pain, vomiting, diarrhea, hematochezia, hematuria, and urinary retention. Death usually occurs 1 to 20 hours after the onset of symptoms. Another common symptom is limb soreness; in the first reported bongkrek acid poisoning case in Africa, 12 of 17 people were reported to have limb soreness as one of their main symptoms. A fatal dose for humans can be as low as 1 to 1.5 mg, and another source states that the oral LD50 is 3.16 mg per kg of body weight.
Because studies of the toxicokinetics of bongkrek acid are lacking, there are no specific treatments or antidotes. The commonly used protocol is to remove toxin that has not yet been absorbed and bound to the adenine nucleotide translocase (ANT), and to provide treatment directed at the patient's symptoms. In the absence of specific treatments and antidotes, timing is critical for reversing the severe physiological effects.
References
Toxicology
Carboxylic acids
Alkene derivatives
Ethers
ADP/ATP translocase inhibitors
Bacterial toxins | Bongkrek acid | [
"Chemistry",
"Environmental_science"
] | 1,719 | [
"Toxicology",
"Carboxylic acids",
"Functional groups",
"Organic compounds",
"Ethers"
] |
4,142,132 | https://en.wikipedia.org/wiki/Rubin%20vase | The Rubin vase (sometimes known as Rubin's vase, the Rubin face or the figure–ground vase) is a famous example of ambiguous or bi-stable (i.e., reversing) two-dimensional forms developed around 1915 by the Danish psychologist Edgar Rubin.
The classic version of Rubin's vase can be seen either as the black profiles of two people looking towards each other or as a white vase, but not as both at the same time.
Another example of a bistable figure Rubin included in his Danish-language, two-volume book was the Maltese cross.
Rubin presented in his doctoral thesis (1915) a detailed description of the visual figure-ground relationship, an outgrowth of the visual perception and memory work in the laboratory of his mentor, Georg Elias Müller. One element of Rubin's research may be summarized in the fundamental principle, "When two fields have a common border, and one is seen as figure and the other as ground, the immediate perceptual experience is characterized by a shaping effect which emerges from the common border of the fields and which operates only on one field or operates more strongly on one than on the other".
The effect
The visual effect generally presents the viewer with two shape interpretations, each of which is consistent with the retinal image, but only one of which can be maintained at a given moment. This is because the bounding contour will be seen as belonging to the figure shape, which appears interposed against a formless background. If the latter region is interpreted instead as the figure, then the same bounding contour will be seen as belonging to it.
Explanation
These types of stimuli are both interesting and useful because they provide an excellent and intuitive demonstration of the figure–ground distinction the brain makes during visual perception. Rubin's figure–ground distinction, since it involved higher-level cognitive pattern matching, in which the overall picture determines its mental interpretation, rather than the net effect of the individual pieces, influenced the Gestalt psychologists, who discovered many similar percepts themselves.
Normally the brain classifies images by which object surrounds which, establishing depth and relationships: if one object surrounds another, the surrounded object is seen as the figure and the surrounding, presumably more distant object as the ground. This makes sense, since if a piece of fruit is lying on the ground, one would want to pay attention to the "figure" and not the "ground". However, when the contours are not so unequal, ambiguity creeps into the previously simple inequality and the brain must begin "shaping" what it sees. It can be shown that this shaping overrides, and operates at a higher level than, the feature-recognition processes that pull together the face and vase images: one can think of the lower levels assembling distinct regions of the picture (each of which makes sense in isolation), but when the brain tries to make sense of the image as a whole, contradictions ensue and patterns must be discarded.
Construction
The distinction is exploited by devising an ambiguous picture whose contours seamlessly match the contours of another picture (sometimes the same picture, a practice M. C. Escher used on occasion). The picture should be "flat" and have little (if any) texture. The stereotypical example has a vase in the center and a face matching its contour (since the image is symmetrical, there is a matching face on the other side).
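As an illustration of this construction, the following Python/Matplotlib sketch (not from the article; the profile curve is an arbitrary placeholder rather than a real vase or face outline) draws the same mirrored contour twice, once with the central region as figure and once with the surround as figure, showing how an identical boundary can be read as belonging to either region.

```python
# A minimal figure-ground demo: one symmetric contour, two fills.
import numpy as np
import matplotlib.pyplot as plt

y = np.linspace(0.0, 1.0, 400)
# Hypothetical wiggly profile; in a real Rubin figure this would trace
# a vase outline that doubles as two facing profiles.
profile = 0.25 + 0.12 * np.sin(6 * y) * (0.3 + y)

fig, axes = plt.subplots(1, 2, figsize=(6, 3))
for ax, central_is_figure in zip(axes, (True, False)):
    ground = "black" if central_is_figure else "white"
    figure_col = "white" if central_is_figure else "black"
    # Paint the whole panel in the ground colour, then the central region
    # bounded by the mirrored contour in the figure colour.
    ax.fill_betweenx(y, -0.6, 0.6, color=ground)
    ax.fill_betweenx(y, -profile, profile, color=figure_col)
    ax.set_xlim(-0.6, 0.6)
    ax.set_ylim(0.0, 1.0)
    ax.set_xticks([])
    ax.set_yticks([])
plt.show()
```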
See also
Pareidolia
References
Further reading
A Psychology of Picture Perception, John M. Kennedy. 1974, Jossey-Bass Publishers,
The art and science of visual illusions, Nicholas Wade. 1982 Routledge & Kegan Paul Ltd.
Visual Space Perception, William H. Ittelson. 1969, Springer Publishing Company, LOCCCN 60-15818
"Vase or face? A neural correlates of shape-selective grouping processes in the human brain." Uri Hasson, Talma Hendler, Dafna Ben Bashat, Rafael Malach.
Journal of Cognitive Neuroscience, Vol 13(6), Aug 2001. pp. 744–753. ISSN 0898-929X (Print)
External links
Rubin's People Inside the Wall People trapped inside a Wall
Illusionworks.com article
Rubin has invented nothing The Rubin's vase before Rubin (fr)
Optical illusions | Rubin vase | [
"Physics"
] | 892 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
4,142,398 | https://en.wikipedia.org/wiki/Interstellar%20Boundary%20Explorer | Interstellar Boundary Explorer (IBEX or Explorer 91 or SMEX-10) is a NASA satellite in Earth orbit that uses energetic neutral atoms (ENAs) to image the interaction region between the Solar System and interstellar space. The mission is part of NASA's Small Explorer program and was launched with a Pegasus-XL launch vehicle on 19 October 2008.
The mission is led by Dr. David J. McComas (IBEX principal investigator), formerly of the Southwest Research Institute (SwRI) and now with Princeton University. The Los Alamos National Laboratory and the Lockheed Martin Advanced Technology Center built the IBEX-Hi and IBEX-Lo sensors, respectively. The Orbital Sciences Corporation manufactured the satellite bus and was the location for spacecraft environmental testing. The nominal mission baseline duration was two years after commissioning, and the prime mission ended in early 2011. The spacecraft and sensors are still healthy, and the mission is continuing in its extended phase.
IBEX is in a Sun-oriented spin-stabilized orbit around the Earth. In June 2011, IBEX was shifted to a new, more efficient, much more stable orbit. It does not come as close to the Moon in the new orbit, and expends less fuel to maintain its position.
The spacecraft is equipped with two large aperture imagers which detect ENAs with energies from 10 eV to 2 keV (IBEX-Lo) and 300 eV to 6 keV (IBEX-Hi).
The mission was originally planned for a 24-month operations period. It has since been extended, and the spacecraft remains in operation.
Spacecraft
The spacecraft is built on an octagonal base, roughly high and across. The dry mass is and the instrument payload comprises . The fully fueled mass is , and the entire flight system launch mass, including the ATK Star 27 solid rocket motor, is . The spacecraft itself has a hydrazine attitude control system. Power is produced by a solar array with a capability of 116 watts, and nominal power use is 66 W (16 W for the payload). Communications are via two hemispherical antennas with a nominal downlink data rate of 320 kbps and an uplink rate of 2 kbps.
Science goal
The Interstellar Boundary Explorer (IBEX) mission science goal is to discover the nature of the interactions between the solar wind and the interstellar medium at the edge of the Solar System. IBEX has achieved this goal by generating full sky maps of the intensity (integrated over the line-of-sight) of ENAs in a range of energies every six months. Most of these ENAs are generated in the heliosheath, which is the region of interaction.
Mission
Launch
The IBEX satellite was mated to its Pegasus XL launch vehicle at Vandenberg Air Force Base, California, and the combined vehicle was then suspended below the Lockheed L-1011 Stargazer mother airplane and flown to Kwajalein Atoll in the central Pacific Ocean. Stargazer arrived at Kwajalein Atoll on 12 October 2008.
The IBEX satellite was carried into space on 19 October 2008, by the Pegasus XL launch vehicle. The launch vehicle was released from Stargazer, which took off from Kwajalein Atoll, at 17:47:23 UTC. By launching from this site close to the equator, the Pegasus launch vehicle lifted as much as more mass to orbit than it would have with a launch from the Kennedy Space Center in Florida.
Mission profile
The IBEX satellite initially launched into a highly elliptical transfer orbit with a low perigee and used a solid fuel rocket motor as its final boost stage at apogee in order to raise its perigee greatly and to achieve its desired high-altitude elliptical orbit.
IBEX is in a highly eccentric elliptical terrestrial orbit, which ranges from a perigee of about to an apogee of about . Its original orbit was about — that is, about 80% of the distance to the Moon — which has changed primarily due to an intentional adjustment to prolong the spacecraft's useful life.
This very high orbit allows the IBEX satellite to move out of the Earth's magnetosphere when making scientific observations. This extreme altitude is critical due to the amount of charged-particle interference that would occur while taking measurements within the magnetosphere. When within the magnetosphere of the Earth (), the satellite also performs other functions, including telemetry downlinks.
Orbit adjusted
In June 2011, IBEX shifted to a new orbit that raised its perigee to more than . The new orbit has a period of one third of a lunar month, which, with the correct phasing, avoids taking the spacecraft too close to the Moon, whose gravity can negatively affect IBEX's orbit. The now spacecraft uses less fuel to maintain a stable orbit, increasing its useful lifespan to more than 40 years.
Instruments
The heliospheric boundary of the Solar System is being imaged by measuring the location and magnitude of charge-exchange collisions occurring in all directions. The satellite's payload consists of two energetic neutral atom (ENA) imagers, IBEX-Hi and IBEX-Lo. Each consists of a collimator that limits its field of view (FoV), a conversion surface to convert neutral hydrogen and oxygen into ions, an electrostatic analyzer (ESA) to suppress ultraviolet light and to select ions of a specific energy range, and a detector to count particles and identify the type of each ion. Each sensor is effectively a single-pixel camera with a field of view of roughly 7° × 7°. The IBEX-Hi instrument records particle counts in a higher energy band (300 eV to 6 keV) than IBEX-Lo (10 eV to 2 keV). The scientific payload also includes a Combined Electronics Unit (CEU) that controls the voltages on the collimator and the ESA, and reads and records data from the particle detectors of each sensor.
Communication
Compared to other space observatories, IBEX has a low data transfer rate due to the limited requirements of the mission.
Data collection
IBEX is collecting energetic neutral atom (ENA) emissions that are traveling through the Solar System to Earth and cannot be measured by conventional telescopes. These ENAs are created on the boundary of our Solar System by the interactions between solar wind particles and interstellar medium particles.
On average IBEX-Hi detects about 500 particles per day, and IBEX-Lo, less than 100. By 2012, over 100 scientific papers related to IBEX were published, described by the principal investigator as "an incredible scientific harvest".
Data availability
As the IBEX data is validated, the IBEX data is made available in a series of data releases on the SwRI IBEX Public Data website. In addition, the data is periodically sent to the NASA Space Physics Data Facility (SPDF), which is the official archive site for IBEX data. SPDF data can be searched at the Heliophysics Data Portal.
Science results
Initial data revealed a previously unpredicted "very narrow ribbon that is two to three times brighter than anything else in the sky". Initial interpretations suggest that "the interstellar environment has far more influence on structuring the heliosphere than anyone previously believed". It is unknown what is creating the energetic neutral atoms (ENA) ribbon. The Sun is currently traveling through the Local Interstellar Cloud, and the heliosphere's size and shape are key factors in determining its shielding power from cosmic rays. Should IBEX detect changes in the shape of the ribbon, that could show how the heliosphere is interacting with the Local Fluff. It has also observed ENAs from the Earth's magnetosphere.
In October 2010, significant changes were detected in the ribbon after six months, based on the second set of IBEX observations.
It went on to detect neutral atoms from outside the Solar System, which were found to differ in composition from the Sun. Surprisingly, IBEX discovered that the heliosphere has no bow shock, and it measured its speed relative to the local interstellar medium (LISM) as , improving on the previous measurement of by Ulysses. Those speeds equate to 25% less pressure on the Sun's heliosphere than previously thought.
In July 2013, IBEX results revealed a 4-lobed tail on the Solar System's heliosphere.
See also
Interstellar Mapping and Acceleration Probe (IMAP), a follow-on mission to IBEX
David J. McComas, Principal Investigator of IBEX (Princeton University)
References
External links
IBEX Public Data from IBEX Science Team
Heliophysics Data Portal by NASA's Heliophysics Division
IBEX Mission Profile by NASA's Solar System Exploration
Satellites orbiting Earth
Astronomical surveys
Explorers Program
Spacecraft launched in 2008
Articles containing video clips
Spacecraft launched by Pegasus rockets
Geospace monitoring satellites | Interstellar Boundary Explorer | [
"Astronomy"
] | 1,805 | [
"Astronomical surveys",
"Astronomical objects",
"Works about astronomy"
] |
4,143,960 | https://en.wikipedia.org/wiki/Fibroblast%20growth%20factor | Fibroblast growth factors (FGF) are a family of cell signalling proteins produced by the macrophages. They are involved in a wide variety of processes, most notably as crucial elements for normal development in animal cells. Any irregularities in their function will lead to a range of developmental defects. These growth factors typically act as a systemic or locally circulating molecules of extracellular origin that activate cell surface receptors. A defining property of FGFs is that they bind to heparin and to heparan sulfate. Thus, some are sequestered in the extracellular matrix of tissues that contains heparan sulfate proteoglycans, and released locally upon injury or tissue remodeling.
Families
In humans, 23 members of the FGF family have been identified, all of which are structurally related signaling molecules:
Members FGF1 through FGF10 all bind fibroblast growth factor receptors (FGFRs). FGF1 is also known as acidic fibroblast growth factor, and FGF2 is also known as basic fibroblast growth factor.
Members FGF11, FGF12, FGF13, and FGF14, also known as FGF homologous factors 1-4 (FHF1-FHF4), have been shown to have distinct functions compared to the FGFs. Although these factors possess remarkably similar sequence homology, they do not bind FGFRs and are involved in intracellular processes unrelated to the FGFs. This group is also known as the intracellular fibroblast growth factor subfamily (iFGF).
Human FGF18 is involved in cell development and morphogenesis in various tissues including cartilage.
Human FGF20 was identified based on its homology to Xenopus FGF-20 (XFGF-20).
FGF15 through FGF23 were described later and their functions are still being characterized. FGF15 is the mouse ortholog of human FGF19 (there is no human FGF15) and, where their functions are shared, they are often described as FGF15/19. In contrast to the local activity of the other FGFs, FGF15/19, FGF21 and FGF23 have hormonal, systemic effects.
Receptors
The mammalian fibroblast growth factor receptor family has 4 members, FGFR1, FGFR2, FGFR3, and FGFR4. The FGFRs consist of three extracellular immunoglobulin-type domains (D1-D3), a single-span trans-membrane domain and an intracellular split tyrosine kinase domain. FGFs interact with the D2 and D3 domains, with the D3 interactions primarily responsible for ligand-binding specificity (see below). Heparan sulfate binding is mediated through the D3 domain. A short stretch of acidic amino acids located between the D1 and D2 domains has auto-inhibitory functions. This 'acid box' motif interacts with the heparan sulfate binding site to prevent receptor activation in the absence of FGFs.
Alternate mRNA splicing gives rise to 'b' and 'c' variants of FGFRs 1, 2 and 3. Through this mechanism, seven different signalling FGFR sub-types can be expressed at the cell surface. Each FGFR binds to a specific subset of the FGFs. Similarly, most FGFs can bind to several different FGFR subtypes. FGF1 is sometimes referred to as the 'universal ligand' as it is capable of activating all 7 different FGFRs. In contrast, FGF7 (keratinocyte growth factor, KGF) binds only to FGFR2b (KGFR).
The signalling complex at the cell surface is believed to be a ternary complex formed between two identical FGF ligands, two identical FGFR subunits, and either one or two heparan sulfate chains.
History
Mitogenic growth factor activity was found in pituitary extracts by Armelin in 1973, and further work by Gospodarowicz, reported in 1974, described a more defined isolation of proteins from cow brain extract which caused fibroblasts to proliferate in a bioassay, leading these investigators to apply the name "fibroblast growth factor". In 1975, they further fractionated the extract using acidic and basic pH and isolated two slightly different forms that were named "acidic fibroblast growth factor" (FGF1) and "basic fibroblast growth factor" (FGF2). These proteins had a high degree of sequence homology among their amino acid chains but were determined to be distinct proteins.
Not long after FGF1 and FGF2 were isolated, another group of investigators isolated a pair of heparin-binding growth factors that they named HBGF-1 and HBGF-2, while a third group isolated a pair of growth factors that caused proliferation of cells in a bioassay containing blood vessel endothelium cells, which they called ECGF1 and ECGF2. These independently discovered proteins were eventually demonstrated to be the same sets of molecules, namely FGF1, HBGF-1 and ECGF-1 were all the same acidic fibroblast growth factor described by Gospodarowicz, et al., while FGF2, HBGF-2, and ECGF-2 were all the same basic fibroblast growth factor.
Functions
FGFs are multifunctional proteins with a wide variety of effects; they are most commonly mitogens but also have regulatory, morphological, and endocrine effects. They have been alternately referred to as "pluripotent" growth factors and as "promiscuous" growth factors due to their multiple actions on multiple cell types. "Promiscuous" refers to the biochemistry and pharmacology concept of a variety of molecules binding to and eliciting a response from a single receptor. In the case of FGF, four receptor subtypes can be activated by more than twenty different FGF ligands. The functions of FGFs in developmental processes include mesoderm induction, anterior–posterior patterning, limb development, and neural induction and development; in mature tissues and systems they include angiogenesis, keratinocyte organization, and wound healing.
FGF is critical during normal development of both vertebrates and invertebrates, and any irregularities in FGF function lead to a range of developmental defects.
FGFs secreted by hypoblasts during avian gastrulation play a role in stimulating a Wnt signaling pathway that is involved in the differential movement of Koller's sickle cells during formation of the primitive streak.
While many FGFs can be secreted by cells to act on distant targets, some FGF act locally within a tissue, and even within a cell. Human FGF2 occurs in low molecular weight (LMW) and high molecular weight (HMW) isoforms. LMW FGF2 is primarily cytoplasmic and functions in an autocrine manner, whereas HMW FGF2s are nuclear and exert activities through an intracrine mechanism.
One important function of FGF1 and FGF2 is the promotion of endothelial cell proliferation and the physical organization of endothelial cells into tube-like structures. They thus promote angiogenesis, the growth of new blood vessels from the pre-existing vasculature. FGF1 and FGF2 are more potent angiogenic factors than vascular endothelial growth factor (VEGF) or platelet-derived growth factor (PDGF). FGF1 has been shown in clinical experimental studies to induce angiogenesis in the heart.
As well as stimulating blood vessel growth, FGFs are important players in wound healing. FGF1 and FGF2 stimulate angiogenesis and the proliferation of fibroblasts that give rise to granulation tissue, which fills up a wound space/cavity early in the wound-healing process. FGF7 and FGF10 (also known as keratinocyte growth factors KGF and KGF2, respectively) stimulate the repair of injured skin and mucosal tissues by stimulating the proliferation, migration and differentiation of epithelial cells, and they have direct chemotactic effects on tissue remodelling.
During the development of the central nervous system, FGFs play important roles in neural stem cell proliferation, neurogenesis, axon growth, and differentiation. FGF signaling is important in promoting surface area growth of the developing cerebral cortex by reducing neuronal differentiation and hence permitting the self-renewal of cortical progenitor cells, known as radial glial cells, and FGF2 has been used to induce artificial gyrification of the mouse brain. Another FGF family member, FGF8, regulates the size and positioning of the functional areas of the cerebral cortex (Brodmann areas).
FGFs are also important for maintenance of the adult brain. Thus, FGFs are major determinants of neuronal survival both during development and during adulthood.
Adult neurogenesis within the hippocampus, for example, depends greatly on FGF2. In addition, FGF1 and FGF2 seem to be involved in the regulation of synaptic plasticity and processes attributed to learning and memory, at least in the hippocampus.
The 15 paracrine FGFs are secreted proteins that bind heparan sulfate and can, therefore, be bound to the extracellular matrix of tissues that contain heparan sulfate proteoglycans. This local action of FGF proteins is classified as paracrine signalling, most commonly through the JAK-STAT signalling pathway or the receptor tyrosine kinase (RTK) pathway.
Members of the FGF19 subfamily (FGF15, FGF19, FGF21, and FGF23) bind less tightly to heparan sulfates, and so can act in an endocrine fashion on far-away tissues, such as intestine, liver, kidney, adipose, and bone. For example:
FGF15 and FGF19 (FGF15/19) are produced by intestinal cells but act on FGFR4-expressing liver cells to downregulate the key gene (CYP7A1) in the bile acid synthesis pathway.
FGF23 is produced by bone but acts on FGFR1-expressing kidney cells to regulate the synthesis of vitamin D and phosphate homeostasis.
Structure
The crystal structures of FGF1 have been solved and found to be related to interleukin 1-beta. Both families have the same beta-trefoil fold, consisting of a 12-stranded beta-sheet structure in which the beta-sheets are arranged in 3 similar lobes around a central axis, with 6 strands forming an antiparallel beta-barrel. In general, the beta-sheets are well conserved and the crystal structures superimpose in these areas. The intervening loops are less well conserved; the loop between beta-strands 6 and 7 is slightly longer in interleukin 1-beta.
Clinical applications
Dysregulation of the FGF signalling system underlies a range of diseases associated with increased FGF expression. Inhibitors of FGF signalling have shown clinical efficacy. Some FGF ligands (particularly FGF2) have been demonstrated to enhance tissue repair (e.g. of skin burns, grafts, and ulcers) in a range of clinical settings.
See also
Receptor tyrosine kinase
Granulocyte-colony stimulating factor (G-CSF)
Granulocyte-macrophage colony stimulating factor (GM-CSF)
Nerve growth factor (NGF)
Neurotrophins
Erythropoietin (EPO)
Thrombopoietin (TPO)
Myostatin (GDF8)
Growth differentiation factor 9 (GDF9)
Gyrification
Neurogenesis
References
External links
FGF5 in Hair Tonic Products
FGF1 in Cosmetic Products
Protein domains
Fibroblast growth factor
Morphogens | Fibroblast growth factor | [
"Biology"
] | 2,565 | [
"Protein domains",
"Morphogens",
"Induced stem cells",
"Protein classification"
] |
4,144,434 | https://en.wikipedia.org/wiki/P-bodies | In cellular biology, P-bodies, or processing bodies, are distinct foci formed by phase separation within the cytoplasm of a eukaryotic cell consisting of many enzymes involved in mRNA turnover. P-bodies are highly conserved structures and have been observed in somatic cells originating from vertebrates and invertebrates, plants and yeast. To date, P-bodies have been demonstrated to play fundamental roles in general mRNA decay, nonsense-mediated mRNA decay, adenylate-uridylate-rich element mediated mRNA decay, and microRNA (miRNA) induced mRNA silencing. Not all mRNAs which enter P-bodies are degraded, as it has been demonstrated that some mRNAs can exit P-bodies and re-initiate translation. Purification and sequencing of the mRNA from purified processing bodies showed that these mRNAs are largely translationally repressed upstream of translation initiation and are protected from 5' mRNA decay.
P-bodies were originally proposed to be the sites of mRNA degradation in the cell, involved in decapping and digestion of mRNAs earmarked for destruction. Later work called this into question, suggesting that P-bodies instead store mRNA until it is needed for translation.
In neurons, P-bodies are moved by motor proteins in response to stimulation. This is likely tied to local translation in dendrites.
History
P-bodies were first described in the scientific literature by Bashkirov et al. in 1997, who describe "small granules… discrete, prominent foci" as the cytoplasmic location of the mouse exoribonuclease mXrn1p. It was not until 2002 that a glimpse into the nature and importance of these cytoplasmic foci was published, when researchers demonstrated that multiple proteins involved in mRNA degradation localize to the foci. Their importance was recognized after experimental evidence was obtained pointing to P-bodies as the sites of mRNA degradation in the cell, and the researchers named these structures processing bodies or "P-bodies". During this time many descriptive names were also used to identify the processing bodies, including "GW-bodies" and "decapping-bodies"; however, "P-bodies" was the term chosen and is now widely used and accepted in the scientific literature. Recently, evidence has been presented suggesting that GW-bodies and P-bodies may in fact be different cellular components: GW182 and Ago2, both associated with miRNA gene silencing, are found exclusively in multivesicular bodies or GW-bodies and are not localized to P-bodies. Also of note, P-bodies are not equivalent to stress granules; they contain largely non-overlapping proteins, and while the two structures support overlapping cellular functions, they generally occur under different stimuli. Hoyle et al. suggest that a novel site, termed EGP bodies, or stress granules, may be responsible for mRNA storage, as these sites lack the decapping enzyme.
Associations with microRNA
MicroRNA-mediated repression occurs in two ways: translational repression or stimulation of mRNA decay. miRNAs recruit the RISC complex to the mRNAs to which they are bound. The link to P-bodies comes from the fact that many, if not most, of the proteins necessary for miRNA gene silencing are localized to P-bodies, as reviewed by Kulkarni et al. (2010). These proteins include, but are not limited to, the scaffold protein GW182, Argonaute (Ago), decapping enzymes and RNA helicases.
The current evidence points toward P-bodies being scaffolding centers of miRNA function, especially given that knockdown of GW182 disrupts P-body formation. However, many questions about P-bodies and their relationship to miRNA activity remain unanswered. In particular, it is unknown whether there is a context-dependent (stress state versus normal) specificity to the P-body's mechanism of action. Given the evidence that P-bodies are sometimes the site of mRNA decay and that mRNA can sometimes exit P-bodies and re-initiate translation, the question remains of what controls this switch. Another ambiguous point is whether the proteins that localize to P-bodies actively function in miRNA gene silencing or are merely on standby.
Protein composition
In 2017, a new method to purify processing bodies was published. Hubstenberger et al. used fluorescence-activated particle sorting (a method based on the ideas of fluorescence-activated cell sorting) to purify processing bodies from human epithelial cells. From these purified processing bodies they were able to use mass spectrometry and RNA sequencing to determine which proteins and RNAs, respectively, are found in processing bodies. This study identified 125 proteins that are significantly associated with processing bodies. Notably, this work provided the most compelling evidence to date that P-bodies might not be the sites of degradation in the cell and are instead used for storage of translationally repressed mRNA. This observation was further supported by single-molecule imaging of mRNA by the Chao group in 2017.
In 2018, Youn et al. took a proximity labeling approach called BioID to identify and predict the processing body proteome. They engineered cells to express several processing body-localized proteins as fusion proteins with the BirA* enzyme. When the cells are incubated with biotin, BirA* will biotinylate proteins that are nearby, thus tagging the proteins within processing bodies with a biotin tag. Streptavidin was then used to isolate the tagged proteins and mass spectrometry to identify them. Using this approach, Youn et al. identified 42 proteins that localize to processing bodies.
References
Further reading
Molecular biology
Biochemistry | P-bodies | [
"Chemistry",
"Biology"
] | 1,193 | [
"Biochemistry",
"nan",
"Molecular biology"
] |
4,144,576 | https://en.wikipedia.org/wiki/Prins%20reaction | The Prins reaction is an organic reaction consisting of an electrophilic addition of an aldehyde or ketone to an alkene or alkyne followed by capture of a nucleophile or elimination of an H+ ion. The outcome of the reaction depends on reaction conditions. With water and a protic acid such as sulfuric acid as the reaction medium and formaldehyde the reaction product is a 1,3-diol (3). When water is absent, the cationic intermediate loses a proton to give an allylic alcohol (4). With an excess of formaldehyde and a low reaction temperature the reaction product is a dioxane (5). When water is replaced by acetic acid the corresponding esters are formed.
History
The original reactants employed by the Dutch chemist Hendrik Jacobus Prins in his 1919 publication were styrene (scheme 2), pinene, camphene, eugenol, isosafrole and anethole. These procedures have since been optimized.
Prins discovered two new organic reactions during his doctoral research in 1911–1912. The first is the addition of polyhalogen compounds to olefins, and the second is the acid-catalyzed addition of aldehydes to olefins. The early studies of the Prins reaction were exploratory in nature and did not attract much attention until 1937. The development of petroleum cracking in 1937 increased the production of unsaturated hydrocarbons; as a consequence, the commercial availability of lower olefins, together with aldehydes produced by oxidation of low-boiling paraffins, increased interest in studying the olefin–aldehyde condensation. The Prins reaction later emerged as a powerful C–O and C–C bond-forming method in organic synthesis.
In 1937 the reaction was investigated as part of a quest for di-olefins to be used in synthetic rubber.
Reaction mechanism
The reaction mechanism for this reaction is depicted in scheme 5. The carbonyl reactant (2) is protonated by a protic acid, and two resonance structures can be drawn for the resulting oxonium ion 3. This electrophile engages in an electrophilic addition with the alkene to give the carbocationic intermediate 4. Exactly how much positive charge is present on the secondary carbon atom in this intermediate should be determined for each reaction set. Evidence exists for neighbouring-group participation of the hydroxyl oxygen or its neighboring carbon atom. When the overall reaction has a high degree of concertedness, the charge build-up will be modest.
The three reaction modes open to this oxocarbenium intermediate are:
in blue: capture of the carbocation by water or any suitable nucleophile through 5 to the 1,3-adduct 6.
in black: proton abstraction in an elimination reaction to unsaturated compound 7. When the alkene carries a methylene group, elimination and addition can be concerted with transfer of an allyl proton to the carbonyl group which in effect is an ene reaction in scheme 6.
in green: capture of the carbocation by additional carbonyl reactant. In this mode the positive charge is dispersed over oxygen and carbon in the resonance structures 8a and 8b. Ring closure leads through intermediate 9 to the dioxane 10. An example is the conversion of styrene to 4-phenyl-m-dioxane (a simple mass-balance check for this conversion is sketched after this list).
in gray: only in specific reactions and when the carbocation is very stable the reaction takes a shortcut to the oxetane 12. The photochemical Paternò–Büchi reaction between alkenes and aldehydes to oxetanes is more straightforward.
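The styrene-to-dioxane example in the green pathway can be sanity-checked at the level of molecular formulas. The following Python sketch (not from the article) uses the RDKit cheminformatics toolkit; the SMILES strings are my own encodings of styrene, formaldehyde and 4-phenyl-1,3-dioxane, and the only claim made is that one styrene plus two formaldehyde molecules account for all atoms of the dioxane product.

```python
# Mass balance for the green (dioxane-forming) pathway:
#   styrene (C8H8) + 2 formaldehyde (CH2O) -> 4-phenyl-1,3-dioxane (C10H12O2)
from collections import Counter

from rdkit import Chem
from rdkit.Chem import rdMolDescriptors

def atom_counts(smiles: str) -> Counter:
    """Count atoms (including hydrogens) in a molecule given as SMILES."""
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    return Counter(atom.GetSymbol() for atom in mol.GetAtoms())

styrene = "C=Cc1ccccc1"
formaldehyde = "C=O"
phenyl_dioxane = "O1COC(c2ccccc2)CC1"   # 4-phenyl-1,3-dioxane

reactants = atom_counts(styrene) + atom_counts(formaldehyde) + atom_counts(formaldehyde)
product = atom_counts(phenyl_dioxane)

print("reactant atoms:", dict(reactants))
print("product atoms: ", dict(product))
assert reactants == product  # the dioxane pathway accounts for every atom

for name, smi in [("styrene", styrene), ("formaldehyde", formaldehyde),
                  ("4-phenyl-1,3-dioxane", phenyl_dioxane)]:
    formula = rdMolDescriptors.CalcMolFormula(Chem.MolFromSmiles(smi))
    print(f"{name}: {formula}")
```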
Variations
Many variations of the Prins reaction exist because it lends itself easily to cyclization reactions and because it is possible to capture the oxo-carbenium ion with a large array of nucleophiles.
The halo-Prins reaction is one such modification, with protic acids and water replaced by Lewis acids such as stannic chloride and boron tribromide. The halogen is now the nucleophile that recombines with the carbocation. The cyclization of certain allyl pulegones in scheme 7 with titanium tetrachloride in dichloromethane at −78 °C gives access to the decalin skeleton with the hydroxyl group and the chlorine predominantly in a cis configuration (91% cis). This observed cis diastereoselectivity is due to the intermediate formation of a trichlorotitanium alkoxide, which makes possible an easy delivery of chlorine to the carbocation from the same face. The trans isomer is preferred (98%) when the switch is made to a tin tetrachloride reaction at room temperature.
The Prins–pinacol reaction is a cascade of a Prins reaction and a pinacol rearrangement. The carbonyl group in the reactant in scheme 8 is masked as a dimethyl acetal, and the hydroxyl group is masked as a triisopropylsilyl (TIPS) ether. With the Lewis acid stannic chloride the oxonium ion is activated, and the pinacol rearrangement of the resulting Prins intermediate results in ring contraction and transfer of the positive charge to the TIPS ether, which eventually forms an aldehyde group in the final product, obtained as a mixture of cis and trans isomers with modest diastereoselectivity.
The key oxocarbenium intermediate can also be formed by routes other than simple protonation of a carbonyl; in a key step of the synthesis of exiguolide, it was formed by protonation of a vinylogous ester.
See also
Heteropoly acid
References
External links
Prins reaction in Alkaloid total synthesis Link
Prins reaction @ organic-chemistry.org
Addition reactions
Carbon-carbon bond forming reactions
Name reactions | Prins reaction | [
"Chemistry"
] | 1,238 | [
"Coupling reactions",
"Name reactions",
"Carbon-carbon bond forming reactions",
"Organic reactions"
] |
4,144,848 | https://en.wikipedia.org/wiki/Knowledge%20integration | Knowledge integration is the process of synthesizing multiple knowledge models (or representations) into a common model (representation).
Compared to information integration, which involves merging information having different schemas and representation models, knowledge integration focuses more on synthesizing the understanding of a given subject from different perspectives.
For example, multiple interpretations are possible of a set of student grades, typically each from a certain perspective. An overall, integrated view and understanding of this information can be achieved if these interpretations can be put under a common model, say, a student performance index.
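As a toy illustration of this example (not from the article; the perspectives, student names and weights below are invented for the sketch), three different interpretations of the same grades can be integrated into a single performance-index model:

```python
# Integrate three perspective-specific "models" of the same grades
# into one common model: a weighted performance index on a 0-1 scale.
from statistics import mean

grades = {"alice": [72, 78, 85], "bob": [90, 88, 86], "carol": [60, 75, 91]}

def absolute_view(scores):          # perspective 1: raw attainment
    return mean(scores) / 100

def improvement_view(scores):       # perspective 2: progress over time
    gain = scores[-1] - scores[0]
    return max(0.0, min(1.0, 0.5 + gain / 100))

def consistency_view(scores):       # perspective 3: stability of results
    spread = max(scores) - min(scores)
    return 1.0 - min(1.0, spread / 100)

WEIGHTS = {"absolute": 0.5, "improvement": 0.3, "consistency": 0.2}

def performance_index(scores):
    """Synthesize the three interpretations into one integrated index."""
    views = {
        "absolute": absolute_view(scores),
        "improvement": improvement_view(scores),
        "consistency": consistency_view(scores),
    }
    return sum(WEIGHTS[name] * value for name, value in views.items())

for student, scores in grades.items():
    print(f"{student}: {performance_index(scores):.2f}")
```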
The Web-based Inquiry Science Environment (WISE), from the University of California, Berkeley, has been developed along the lines of knowledge integration theory.
Knowledge integration has also been studied as the process of incorporating new information into a body of existing knowledge with an interdisciplinary approach. This process involves determining how the new information and the existing knowledge interact, how existing knowledge should be modified to accommodate the new information, and how the new information should be modified in light of the existing knowledge.
A learning agent that actively investigates the consequences of new information can detect and exploit a variety of learning opportunities; e.g., to resolve knowledge conflicts and to fill knowledge gaps. By exploiting these learning opportunities the learning agent is able to learn beyond the explicit content of the new information.
The machine learning program KI, developed by Murray and Porter at the University of Texas at Austin, was created to study the use of automated and semi-automated knowledge integration to assist knowledge engineers constructing a large knowledge base.
One possible technique is semantic matching. More recently, a technique based on minimal mappings has been presented that is useful for minimizing the effort of mapping validation and visualization. Minimal mappings are high-quality mappings such that (i) all other mappings can be computed from them in time linear in the size of the input graphs, and (ii) none of them can be dropped without losing property (i).
The University of Waterloo operates a Bachelor of Knowledge Integration undergraduate degree program as an academic major or minor. The program started in 2008.
See also
Data integration
Knowledge value chain
References
Further reading
Linn, M. C. (2006) The Knowledge Integration Perspective on Learning and Instruction. R. Sawyer (Ed.). In The Cambridge Handbook of the Learning Sciences. Cambridge, MA. Cambridge University Press
Murray, K. S. (1996) KI: A tool for Knowledge Integration. Proceedings of the Thirteenth National Conference on Artificial Intelligence
Murray, K. S. (1995) Learning as Knowledge Integration, Technical Report TR-95-41, The University of Texas at Austin
Murray, K. S. (1990) Improving Explanatory Competence, Proceedings of the Twelfth Annual Conference of the Cognitive Science Society
Murray, K. S., Porter, B. W. (1990) Developing a Tool for Knowledge Integration: Initial Results. International Journal for Man-Machine Studies, volume 33
Murray, K. S., Porter, B. W. (1989) Controlling Search for the Consequences of New Information during Knowledge Integration. Proceedings of the Sixth International Machine Learning Conference
Shen, J., Sung, S., & Zhang, D.M. (2016) Toward an analytic framework of interdisciplinary reasoning and communication (IRC) processes in science. International Journal of Science Education, 37 (17), 2809–2835.
Shen, J., Liu, O., & Sung, S. (2014). Designing interdisciplinary assessments in science for college students: An example on osmosis. International Journal of Science Education, 36 (11), 1773–1793.
Knowledge representation
Learning
Machine learning | Knowledge integration | [
"Engineering"
] | 736 | [
"Artificial intelligence engineering",
"Machine learning"
] |
4,144,898 | https://en.wikipedia.org/wiki/Shiplap | Shiplap is a type of wooden board used commonly as exterior siding in the construction of residences, barns, sheds, and outbuildings.
Exterior walls
Shiplap is either rough-sawn or milled pine or similarly inexpensive wood between wide with a rabbet on opposite sides of each edge. The rabbet allows the boards to overlap in this area. The profile of each board partially overlaps that of the board next to it creating a channel that gives shadow line effects, provides excellent weather protection and allows for dimensional movement.
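As a rough illustration of how the rabbeted overlap affects coverage (the board, rabbet and wall dimensions below are assumptions made for the sketch, not figures from this article), the exposed face of each board is its width minus one rabbet width, which determines how many boards a wall run needs:

```python
# Coverage arithmetic for rabbeted (shiplap) boards, using assumed sizes.
import math

board_width_mm = 184      # assumed face width of one shiplap board
rabbet_width_mm = 16      # assumed width of the rabbet on each edge
wall_width_mm = 3600      # assumed wall run to be covered

# Each board hides one rabbet of its neighbour, so the exposed face is
# the board width minus a single rabbet width.
exposure_mm = board_width_mm - rabbet_width_mm
boards_needed = math.ceil(wall_width_mm / exposure_mm)

print(f"exposed face per board: {exposure_mm} mm")
print(f"boards for a {wall_width_mm} mm wall: {boards_needed}")
```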
Useful for its strength as a supporting member, and its ability to form a relatively tight seal when lapped, shiplap is usually used as a type of siding for buildings that do not require extensive maintenance and must withstand cold and aggressive climates. Rough-sawn shiplap is attached vertically in post and beam construction, usually with 51–65 mm (6d–8d) common nails, while milled versions, providing a tighter seal, are more commonly placed horizontally, more suited to two-by-four frame construction.
Small doors and shutters such as those found in barns and sheds are often constructed of shiplap cut directly from the walls, with only thin members framing or crossing the back for support. Shiplap is also used indoors for the rough or rustic look that it creates when used as paneling or a covering for a wall or ceiling. Shiplap is often used to describe any rabbeted siding material that overlaps in a similar fashion.
Interior design
In interior design, shiplap is a style of wooden wall siding characterized by long planks, normally painted white, that are mounted horizontally with a slight gap between them in a manner that evokes exterior shiplap walls. A disadvantage of the style is that the gaps are prone to accumulating dust.
Installing shiplap horizontally in a room can help carry the eye around the space, making it feel larger. Installing it vertically helps emphasize the height of the room, making it feel taller. Rectangular shiplap pieces can be placed in a staggered zig-zag layout to add texture and enhance the size of the room. Shiplap can also be installed on the ceiling, to draw the eye upwards.
References
Wood products
Building engineering
Building materials
Timber framing | Shiplap | [
"Physics",
"Technology",
"Engineering"
] | 467 | [
"Timber framing",
"Building engineering",
"Architecture",
"Structural system",
"Construction",
"Materials",
"Civil engineering",
"Matter",
"Building materials"
] |